Venue: NIPS
Title: Posterior Matching for Arbitrary Conditioning

Abstract

Arbitrary conditioning is an important problem in unsupervised learning, where we seek to model the conditional densities p(xu | xo) that underlie some data, for all possible non-intersecting subsets o, u ⊂ {1, . . . , d}. However, the vast majority of density estimation focuses only on modeling the joint distribution p(x), in which important conditional dependencies between features are opaque. We propose a simple and general framework, coined Posterior Matching, that enables Variational Autoencoders (VAEs) to perform arbitrary conditioning, without modification to the VAE itself. Posterior Matching applies to the numerous existing VAE-based approaches to joint density estimation, thereby circumventing the specialized models required by previous approaches to arbitrary conditioning. We find that Posterior Matching is comparable or superior to current state-of-the-art methods for a variety of tasks with an assortment of VAEs (e.g., discrete, hierarchical, VaDE).

1 Introduction

Variational Autoencoders (VAEs) [21] are a widely adopted class of generative model that have been successfully employed in numerous areas [4, 15, 26, 33, 16]. Much of their appeal stems from their ability to probabilistically represent complex data in terms of lower-dimensional latent codes. Like most other generative models, VAEs are typically designed to model the joint data distribution, which communicates likelihoods for particular configurations of all features at once. This can be useful for some tasks, such as generating images, but the joint distribution is limited by its inability to explicitly convey the conditional dependencies between features. In many cases, conditional distributions, which provide the likelihood of an event given some known information, are more relevant and useful. Conditionals can be obtained in theory by marginalizing the joint distribution, but in practice this is generally not analytically available and is expensive to approximate.

Easily assessing the conditional distribution over any subset of features is important for tasks where decisions and predictions must be made over a varied set of possible information. For example, some medical applications may require reasoning over: the distribution of blood pressure given age and weight; or the distribution of heart rate and blood-oxygen level given age, blood pressure, and BMI; etc. For flexibility and scalability, it is desirable for a single model to provide all such conditionals at inference time. More formally, this task is known as arbitrary conditioning, where the goal is to model the conditional density p(xu | xo) for any arbitrary subsets of unobserved features xu and observed features xo.

In this work, we show, by way of a simple and general framework, that traditional VAEs can perform arbitrary conditioning, without modification to the VAE model itself. Our approach, which we call Posterior Matching, is to model the distribution p(z | xo) that is induced by some VAE, where z is the latent code. In other words, we consider the distribution of latent codes given partially observed features. We do this by having a neural network output an approximate partially observed posterior q(z | xo). In order to train this network, we develop a straightforward maximum likelihood estimation objective and show that it is equivalent to maximizing p(xu | xo), the quantity of interest.
Unlike prior works that use VAEs for arbitrary conditioning, we do not make special assumptions or optimize custom variational lower bounds. Rather, training via Posterior Matching is simple, highly flexible, and without limiting assumptions on approximate posteriors (e.g., q(z | xo) need not be reparameterized and can thus be highly expressive). We conduct several experiments in which we apply Posterior Matching to various types of VAEs for a myriad of different tasks, including image inpainting, tabular arbitrary conditional density estimation, partially observed clustering, and active feature acquisition. We find that Posterior Matching leads to improvements over prior VAE-based methods across the range of tasks we consider.

2 Background

Arbitrary Conditioning. A core problem in unsupervised learning is density estimation, where we are given a dataset D = {x^(i)}_{i=1}^N of i.i.d. samples drawn from an unknown distribution p(x) and wish to learn a model that best approximates the probability density function p. A limitation of only learning the joint distribution p(x) is that it does not provide direct access to the conditional dependencies between features. Arbitrary conditional density estimation [18, 24, 38] is a more general task where we want to estimate the conditional density p(xu | xo) for all possible subsets of observed features o ⊂ {1, . . . , d} and unobserved features u ⊂ {1, . . . , d} such that o and u do not intersect. Here, xo ∈ R^|o| and xu ∈ R^|u|. Estimation of joint or marginal likelihoods is a special case where o = ∅. Note that, while not strictly necessary for arbitrary conditioning methods [24, 38], we assume D is fully observed, a requirement for training traditional VAEs.

Variational Autoencoders. Variational Autoencoders (VAEs) [21] are a class of generative models that assume a generative process in which data likelihoods are represented as p(x) = ∫ p(x | z)p(z) dz, where z is a latent variable that typically has lower dimensionality than the data x. A tractable distribution that affords easy sampling and likelihood evaluation, such as a standard Gaussian, is usually imposed on the prior p(z). These models are learned by maximizing the evidence lower bound (ELBO) of the data likelihood:

\[
\log p(x) \;\ge\; \mathbb{E}_{z \sim q_\psi(\cdot \mid x)}\big[\log p_\phi(x \mid z)\big] - \mathrm{KL}\big(q_\psi(z \mid x) \,\|\, p(z)\big),
\]

where qψ(z | x) and pϕ(x | z) are the encoder (or approximate posterior) and decoder of the VAE, respectively. The encoder and decoder are generally neural networks that output tractable distributions (e.g., a multivariate Gaussian). In order to properly optimize the ELBO, samples drawn from qψ(z | x) must be differentiable with respect to the parameters of the encoder (often called the reparameterization trick). After training, a new data point x̂ can be easily generated by first sampling z from the prior, then sampling x̂ ∼ pϕ(· | z).

3 Posterior Matching

In this section we describe our framework, coined Posterior Matching, to model the underlying arbitrary conditionals in a VAE. In many respects, Posterior Matching cuts the Gordian knot to uncover the conditional dependencies. Following our insights, we show that our approach is direct and intuitive. Notwithstanding, we are the first to apply this direct methodology for arbitrary conditionals in VAEs and the first to connect our proposed loss with arbitrary conditional likelihoods p(xu | xo). Note that we are not proposing a new type of VAE. Rather, we are formalizing a simple and intuitive methodology that can be applied to numerous existing (or future) VAEs.
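Before turning to the method itself (Section 3.1), the following is a minimal sketch of the standard VAE objective described in Section 2, assuming a diagonal-Gaussian encoder and a Bernoulli decoder; the encode and decode functions are hypothetical placeholders for whatever networks a particular VAE uses.

```python
import jax
import jax.numpy as jnp

def gaussian_kl_to_standard_normal(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dimensions.
    return 0.5 * jnp.sum(jnp.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=-1)

def elbo(params, x, key, encode, decode):
    # encode(params, x) -> (mu, logvar) of q_psi(z | x); decode(params, z) ->
    # Bernoulli logits of p_phi(x | z). Both are hypothetical placeholders.
    mu, logvar = encode(params, x)
    eps = jax.random.normal(key, mu.shape)
    z = mu + jnp.exp(0.5 * logvar) * eps  # reparameterization trick
    logits = decode(params, z)
    # Numerically stable Bernoulli log-likelihood, summed over features.
    log_px_z = -jnp.sum(
        jnp.maximum(logits, 0.0) - logits * x + jnp.log1p(jnp.exp(-jnp.abs(logits))),
        axis=-1,
    )
    return log_px_z - gaussian_kl_to_standard_normal(mu, logvar)
```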
3.1 Motivation

Let us begin with a motivating example, depicted in Figure 1. Suppose we have trained a VAE on images of handwritten 3s, 5s, and 8s. This VAE has thus learned to represent these images in a low-dimensional latent space. Any given code (vector) in this latent space represents a distribution over images in the original data space, which can be retrieved by passing that code through the VAE’s decoder. Some regions in the latent space will contain codes that represent 3s, some will represent 5s, and some will represent 8s. There is typically only an interest in mapping from a given image x to a distribution over the latent codes that could represent that image, i.e., the posterior q(z | x). However, we can just as easily ask which latent codes are feasible having only observed part of an image. For example, if we only see the right half of the image shown in Figure 1, we know the digit could be a 3 or an 8, but certainly not a 5. Thus, the distribution over latent codes that could correspond to the full image, that is pψ(z | xo) (where ψ denotes the encoder’s parameters), should only include regions that represent 3s or 8s. Decoding any sample from pψ(z | xo) will produce an image of a 3 or an 8 that aligns with what has been observed.

The important insight is that we can think about how conditioning on xo changes the distribution over latent codes without explicitly worrying about what the (potentially higher-dimensional and more complicated) conditional distribution over xu looks like. Once we know pψ(z | xo), we can easily move back to the original data space using the decoder.

3.2 Approximating the Partially Observed Posterior

The partially observed approximate posterior of interest is not readily available, as it is implicitly defined by the VAE:

\[
p_\psi(z \mid x_o) = \mathbb{E}_{x_u \sim p(\cdot \mid x_o)}\big[\, q_\psi(z \mid x_o, x_u) \,\big], \tag{1}
\]

where qψ(z | xo, xu) = qψ(z | x) is the VAE’s encoder. Thus, we introduce a neural network in order to approximate it. Given a network that outputs the distribution qθ(z | xo) (i.e., the partially observed encoder in Figure 2), we now discuss our approach to training it. Our approach is guided by the priorities of simplicity and generality. We minimize (with respect to θ) the following negative log-likelihoods, where the samples come from our target distribution as defined in Equation 1:

\[
\mathbb{E}_{x_u \sim p(\cdot \mid x_o)}\Big[\, \mathbb{E}_{z \sim q_\psi(\cdot \mid x_o, x_u)}\big[-\log q_\theta(z \mid x_o)\big] \,\Big]. \tag{2}
\]

We discuss how this is optimized in practice in Section 3.4. Due to the relationship between negative log-likelihood minimization and KL-divergence minimization [3], we can interpret Equation 2 as minimizing:

\[
\mathbb{E}_{x_u \sim p(\cdot \mid x_o)}\Big[\, \mathrm{KL}\big(q_\psi(z \mid x_o, x_u) \,\|\, q_\theta(z \mid x_o)\big) \,\Big]. \tag{3}
\]

We can directly minimize the KL-divergence in Equation 3 if it is analytically available between the two posteriors, for instance if both posteriors are Gaussians. However, Equation 2 is more general in that it allows us to use more expressive (e.g., autoregressive) distributions for qθ(z | xo) with which the KL-divergence cannot be directly computed. This is important given that pψ(z | xo) is likely to be complex (e.g., multimodal) and not easily captured by a Gaussian (as in Figure 1). Importantly, there is no requirement for qθ(z | xo) to be reparameterized, which would further limit the class of distributions that can be used. There is a high degree of flexibility in the choice of distribution for the partially observed posterior. Note that this objective does not utilize the decoder.
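As a minimal illustration of these two equivalent views, the sketch below computes the objective both ways: the analytic KL of Equation 3 for the special case where both posteriors are diagonal Gaussians, and the Monte-Carlo negative log-likelihood of Equation 2, which only requires that qθ expose a log-probability function. The function names and the (mu, logvar) parameterization are illustrative assumptions, not the paper's implementation.

```python
import jax
import jax.numpy as jnp

def kl_diag_gaussians(mu_p, logvar_p, mu_q, logvar_q):
    # Analytic KL( N(mu_p, exp(logvar_p)) || N(mu_q, exp(logvar_q)) ), per Equation 3.
    return 0.5 * jnp.sum(
        logvar_q - logvar_p
        + (jnp.exp(logvar_p) + (mu_p - mu_q) ** 2) / jnp.exp(logvar_q)
        - 1.0,
        axis=-1,
    )

def posterior_matching_mc(key, mu_full, logvar_full, log_q_theta, n_samples=1):
    # Monte-Carlo form of Equation 2: sample z ~ q_psi(z | x) and score it under
    # q_theta(z | x_o), which may be any density that provides log-probabilities.
    eps = jax.random.normal(key, (n_samples,) + mu_full.shape)
    z = mu_full + jnp.exp(0.5 * logvar_full) * eps
    return -jnp.mean(jax.vmap(log_q_theta)(z), axis=0)
```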
3.3 Connection with Arbitrary Conditioning

While the Posterior Matching objective from Equation 2 and Equation 3 is intuitive, it is not immediately clear how this approach relates back to the arbitrary conditioning objective of maximizing p(xu | xo). We formalize this connection in Theorem 3.1 (see Appendix for proof).

Theorem 3.1. Let qψ(z | x) and pϕ(x | z) be the encoder and decoder, respectively, for some VAE. Additionally, let qθ(z | xo) be an approximate partially observed posterior. Then minimizing

\[
\mathbb{E}_{x_u \sim p(\cdot \mid x_o)}\Big[ \mathrm{KL}\big(q_\psi(z \mid x_o, x_u) \,\|\, q_\theta(z \mid x_o)\big) \Big]
\]

is equivalent to minimizing

\[
\mathbb{E}_{x_u \sim p(\cdot \mid x_o)}\Big[ -\log p_{\theta,\phi}(x_u \mid x_o) + \mathrm{KL}\big(q_\psi(z \mid x_o, x_u) \,\|\, q_\theta(z \mid x_o, x_u)\big) \Big], \tag{4}
\]

with respect to the parameters θ.

The first term inside the expectation in Equation 4 gives us the explicit connection back to the arbitrary conditioning likelihood p(xu | xo), which is being maximized when minimizing Equation 4. The second term acts as a sort of regularizer by trying to make the partially observed posterior match the VAE posterior when conditioned on all of x — intuitively, this makes sense as a desirable outcome.

3.4 Implementation

A practical training loss follows quickly from Equation 2. For the outer expectation, we do not have access to the true distribution p(xu | xo), but for a given instance x that has been partitioned into xo and xu, we do have one sample from this distribution, namely xu. So we approximate this expectation using xu as a single sample. This type of single-sample approximation is common with VAEs, e.g., when estimating the ELBO. For the inner expectation, we have access to qψ(z | x), which can easily be sampled in order to estimate the expectation. In practice, we generally use a single sample for this as well. This gives us the following Posterior Matching loss:

\[
\mathcal{L}_{\mathrm{PM}}(x, o, \theta, \psi) = -\,\mathbb{E}_{z \sim q_\psi(\cdot \mid x)}\big[ \log q_\theta(z \mid x_o) \big], \tag{5}
\]

where o is the set of observed feature indices. During training, o can be randomly sampled from a problem-specific distribution for each minibatch. Figure 2 provides a visual overview of our approach.

In practice, we represent xo as a concatenation of x that has had its unobserved features set to zero and a bitmask b that indicates which features are observed. This representation has been successful in other arbitrary conditioning models [24, 38]. However, this choice is not particularly important to Posterior Matching itself, and alternative representations, such as set embeddings, are valid as well.

As required by VAEs, samples from qψ(z | x) will be reparameterized, which means that minimizing LPM will influence the parameters of the VAE’s encoder in addition to the partially observed posterior network. In some cases, this may be advantageous, as the encoder can be guided towards learning a latent representation that is more conducive to arbitrary conditioning. However, it might also be desirable to train the VAE independently of the partially observed posterior, in which case we can choose to stop gradients on the samples z ∼ qψ(· | x) when computing LPM. Similarly, the partially observed posterior can be trained against an existing pretrained VAE. In this case, the parameters of the VAE’s encoder and decoder are frozen, and we only optimize LPM with respect to θ. Otherwise, we jointly optimize the VAE’s ELBO and LPM. We emphasize that there is a high degree of flexibility with the choice of VAE, i.e., we have not imposed any unusual constraints. However, there are some potentially limiting practical considerations that have not been explicitly mentioned yet.
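Before discussing those considerations, here is a minimal sketch of the loss in Equation 5 together with the zero-masking plus bitmask input representation described above. It assumes diagonal-Gaussian posteriors and uniformly random observation masks; the encoder names, the mask distribution, and the stop-gradient flag are placeholders for choices that vary by application.

```python
import jax
import jax.numpy as jnp

def sample_observation_mask(key, shape, p_observed=0.5):
    # One possible problem-specific distribution over observed index sets o.
    return (jax.random.uniform(key, shape) < p_observed).astype(jnp.float32)

def lpm_loss(key, x, encode_full, encode_partial, stop_encoder_grad=False):
    # encode_full(x) -> (mu, logvar) of q_psi(z | x)            (the VAE encoder)
    # encode_partial(x_o) -> (mu, logvar) of q_theta(z | x_o)   (partially observed)
    key_mask, key_z = jax.random.split(key)
    b = sample_observation_mask(key_mask, x.shape)
    x_o = jnp.concatenate([x * b, b], axis=-1)      # zero-masked features + bitmask

    mu, logvar = encode_full(x)
    eps = jax.random.normal(key_z, mu.shape)
    z = mu + jnp.exp(0.5 * logvar) * eps            # reparameterized sample
    if stop_encoder_grad:
        z = jax.lax.stop_gradient(z)                # train q_theta against a frozen VAE

    mu_o, logvar_o = encode_partial(x_o)
    # -log q_theta(z | x_o) for a diagonal Gaussian, summed over latent dims.
    nll = 0.5 * jnp.sum(
        logvar_o + (z - mu_o) ** 2 / jnp.exp(logvar_o) + jnp.log(2.0 * jnp.pi),
        axis=-1,
    )
    return jnp.mean(nll)
```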
First, the training data must be fully observed, as with traditional VAEs, since LPM requires sampling qψ(z | x). However, given that the base VAE requires fully observed training data anyway, this is generally not a relevant limitation for our purposes. Second, it is convenient in practice for the VAE’s decoder to be factorized, i.e., p(x | z) = ∏i p(xi | z), as this allows us to easily sample from p(xu | z) (sampling xu is less straightforward with other types of decoders). However, it is standard practice to use factorized decoders with VAEs, so this is ordinarily not a concern. We also note that, while useful for easy sampling, a factorized decoder is not necessary for optimizing the Posterior Matching loss, which does not incorporate the decoder.

3.5 Posterior Matching Beyond Arbitrary Conditioning

The concept of matching VAE posteriors is quite general and has other uses beyond the application of arbitrary conditioning. We consider one such example, which still has ties to arbitrary conditioning, in order to give a flavor for other potential uses.

A common application of arbitrary conditioning is active feature acquisition [13, 23, 25], where informative features are sequentially acquired on an instance-by-instance basis. In the unsupervised case, the aim is to acquire as few features as possible while maximizing the ability to reconstruct the remaining unobserved features (see Figure 3 for an example). One approach to active feature acquisition is to greedily select the feature that will maximize the expected amount of information to be gained about the currently unobserved features [23, 25]. For VAEs, Ma et al. [25] show that this is equivalent to selecting each feature according to

\[
\operatorname*{argmax}_{i \in u}\; H(z \mid x_o) - \mathbb{E}_{x_i \sim p(\cdot \mid x_o)}\big[ H(z \mid x_o, x_i) \big]
\;=\; \operatorname*{argmin}_{i \in u}\; \mathbb{E}_{x_i \sim p(\cdot \mid x_o)}\big[ H(z \mid x_o, x_i) \big]. \tag{6}
\]

For certain families of posteriors, such as multivariate Gaussians, the entropies in Equation 6 can be analytically computed. In practice, approximating the expectation in Equation 6 is done via entropies of the posteriors

\[
p^{(i)}(z \mid x_o) \;\equiv\; \mathbb{E}_{x_i \sim p_{\theta,\phi}(\cdot \mid x_o)}\big[\, q_\theta(z \mid x_o, x_i) \,\big],
\]

where samples from pθ,ϕ(xi | xo) are produced by first sampling z ∼ qθ(· | xo) and then passing z through the VAE’s decoder pϕ(xi | z). We call p^{(i)}(z | xo) the “lookahead” posterior for feature i, since it is obtained by imagining what the posterior will look like one acquisition into the future. Hence, computing the resulting entropies requires one network evaluation per sample of xi to encode z, for each i ∈ u. Thus, if using k samples for each xi, each greedy step will be Ω(k · |u|), which may be prohibitive in high dimensions.

In analogous fashion to the Posterior Matching approach that has already been discussed, we can train a neural network to directly output the lookahead posteriors for all features at once. The Posterior Matching loss in this case is

\[
\mathcal{L}_{\text{PM-Lookahead}}(x, o, u, \omega, \theta, \phi) = \sum_{i \in u} \mathbb{E}_{x_i \sim p_{\theta,\phi}(\cdot \mid x_o)}\Big[\, \mathbb{E}_{z \sim q_\theta(\cdot \mid x_o, x_i)}\big[ -\log q_\omega^{(i)}(z \mid x_o) \big] \,\Big], \tag{7}
\]

where ω denotes the parameters of the lookahead posterior network. In practice, we train a single shared network with a final output layer that outputs the parameters of all q_ω^{(i)}(z | xo). Note that given the distributions q_ω^{(i)}(z | xo) for all i, computing the greedy acquisition choice consists of doing a single forward evaluation of our network, then choosing the feature i ∈ u for which the entropy of q_ω^{(i)}(z | xo) is minimized. In other words, we may bypass the individual samples of xi and use a single shared network for a faster acquisition step.
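A minimal sketch of this faster acquisition step, assuming (as adopted in the next paragraph) that each lookahead posterior q_ω^{(i)}(z | xo) is a diagonal Gaussian whose parameters come from one evaluation of the shared network; the network name and its output shape are illustrative placeholders.

```python
import jax.numpy as jnp

def gaussian_entropy(logvar):
    # Entropy of a diagonal Gaussian: 0.5 * (sum(logvar) + d * log(2 * pi * e)).
    d = logvar.shape[-1]
    return 0.5 * (jnp.sum(logvar, axis=-1) + d * jnp.log(2.0 * jnp.pi * jnp.e))

def greedy_acquisition(x_o, observed_mask, lookahead_net):
    # lookahead_net(x_o) -> log-variances of shape (num_features, latent_dim), one
    # lookahead posterior q_omega^{(i)}(z | x_o) per candidate feature i.
    logvars = lookahead_net(x_o)
    entropies = gaussian_entropy(logvars)                       # (num_features,)
    # Never acquire an already-observed feature.
    entropies = jnp.where(observed_mask.astype(bool), jnp.inf, entropies)
    return jnp.argmin(entropies)                                # Equation 6, one forward pass
```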
In this setting, we let q_ω^{(i)}(z | xo) be a multivariate Gaussian so that the entropy computation is trivial. See Appendix for a diagram of the entire process. This use of Posterior Matching leads to large improvements in the computational efficiency of greedy active feature acquisition (demonstrated empirically in Section 5.5).

4 Prior Work

A variety of approaches to arbitrary conditioning have been previously proposed. ACE is an autoregressive, energy-based method that is the current state of the art for arbitrary conditional likelihood estimation and imputation, although it can be computationally intensive for very high-dimensional data [38]. ACFlow is a variant of normalizing flows that can give analytical arbitrary conditional likelihoods [24]. Several other methods, including Sum-Product Networks [32, 6], the Neural Conditioner [2], and the Universal Marginalizer [10], also have the ability to estimate conditional likelihoods.

Rezende et al. [34] were among the first to suggest that VAEs can be used for imputation. More recently, VAEAC was proposed as a VAE variant designed for arbitrary conditioning [18]. Unlike Posterior Matching, VAEAC is not a general framework and cannot be used with typical pretrained VAEs. EDDI is a VAE-based approach to active feature acquisition that relies on arbitrary conditioning [25]. The authors introduce a “Partial VAE” in order to perform the arbitrary conditioning, which, similarly to Posterior Matching, tries to model p(z | xo). Unlike Posterior Matching, they do this by maximizing a variational lower bound on p(xo) using a partial inference network q(z | xo) (there is no standard VAE posterior q(z | x) in EDDI). Gong et al. [13] use a similar approach that is based on the Partial VAE of EDDI. The major drawback of these methods is that, unlike with Posterior Matching, q(z | xo) must be reparameterizable in order to optimize the lower bound (the authors use a diagonal Gaussian). Thus, certain more expressive distributions (e.g., autoregressive) cannot be used. Additionally, these methods cannot be applied to existing VAEs. The methods of Ipsen et al. [17] and Collier et al. [9] are also similar to EDDI, where the former optimizes an approximation of p(xo, b) and the latter optimizes a lower bound on p(xo | b). Ipsen et al. [17] also focus on imputation for data that is missing “not at random”, a setting that is outside the focus of our work.

There are also several works that have considered learning to identify desirable regions in latent spaces. Engel et al. [11] start from a pretrained VAE, but then train a separate GAN [14] with special regularizers to do their conditioning. They only condition on binary vectors y that correspond to a small number of predefined attributes, whereas we allow for conditioning on arbitrary subsets of continuous features xo (a more complicated conditioning space). Also, their resulting GAN does not make the likelihood q(z | y) available, whereas Posterior Matching directly (and flexibly) models q(z | xo), which may be useful for downstream tasks (e.g., Section 5.5) and likelihood evaluation (see Appendix). Furthermore, Posterior Matching trains directly through KL, without requiring an additional critic. Whang et al. [40] learn conditional distributions, but not arbitrary conditional distributions (a much harder problem). They also consider normalizing flow models, which are limited to invertible architectures with tractable Jacobian determinants and latent spaces that have the same dimensionality as the data (unlike VAEs).
Cannella et al. [7] similarly perform conditional sampling from a model of the joint distribution, but are also restricted to normalizing flow architectures and require a more expensive MCMC procedure for sampling.

5 Experiments

In order to empirically test Posterior Matching, we apply it to a variety of VAEs aimed at different tasks. We find that our models are able to match or surpass the performance of previous specialized VAE methods. All experiments were conducted using JAX [5] and the DeepMind JAX Ecosystem [1]. Code is available at https://github.com/lupalab/posterior-matching.

Our results are dependent on the choice of VAE, and the particular VAEs used in our experiments were not the product of extensive comparisons and did not undergo thorough hyperparameter tuning — that is not the focus of this work. With more carefully selected or tuned VAEs, and as new VAEs continue to be developed, we can expect Posterior Matching’s downstream performance to improve accordingly on any given task. We emphasize that our experiments span a diverse set of tasks, domains, and types of VAE, wherein Posterior Matching was effective.

5.1 MNIST

In this first experiment, our goal is to demonstrate that Posterior Matching replicates the intuition depicted in Figure 1. We do this by training a convolutional VAE with Posterior Matching on the MNIST dataset. The latent space of this VAE is then mapped to two dimensions with UMAP [27] and visualized in Figure 4. In the figure, black points represent samples from qθ(z | xo), and for select samples, the corresponding reconstruction is shown. The encoded test data is shown, colored by true class label, to highlight which regions correspond to which digits. We see that the experimental results nicely replicate our earlier intuitions — the learned distribution qθ(z | xo) puts probability mass only in parts of the latent space that correspond to plausible digits based on what is observed and successfully captures multimodal distributions (see the second column in Figure 4).

5.2 Image Inpainting

One practical application of arbitrary conditioning is image inpainting, where only part of an image is observed and we want to fill in the missing pixels with visually coherent imputations. As with prior works [18, 24], we assume pixels are missing completely at random. We test Posterior Matching as an approach to this task by pairing it with both discrete and hierarchical VAEs.

Vector Quantized-VAEs. We first consider VQ-VAE [30], a type of VAE that is known to work well with images. VQ-VAE differs from the typical VAE in its use of a discrete latent space. That is, each latent code is a grid of discrete indices rather than a vector of continuous values. Because the latent space is discrete, Oord et al. [30] model the prior distribution with a PixelCNN [29, 36] after training the VQ-VAE. We similarly use a conditional PixelCNN to model qθ(z | xo). First, a convolutional network maps xo to a vector, and that vector is then used as a conditioning input to the PixelCNN. More architecture and training details can be found in the Appendix.

We train VQ-VAEs with Posterior Matching for the MNIST, OMNIGLOT, and CELEBA datasets. Table 1 reports peak signal-to-noise ratio (PSNR) and precision/recall [35] for inpaintings produced by our model. We find that Posterior Matching with VQ-VAE consistently achieves better precision/recall scores than previous models while having comparable PSNR.
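The conditioning scheme above can be sketched schematically. The code below is not a PixelCNN; it is a deliberately simplified stand-in in which the per-position categorical logits over codebook indices are produced independently from an embedding of xo (a real conditional PixelCNN would additionally condition each position on previously generated indices). The network names are placeholders.

```python
import jax
import jax.numpy as jnp

def log_q_codes_given_xo(codes, x_o_image, mask, embed_net, logits_net):
    # codes: integer grid of latent indices, shape (H, W).
    # x_o_image, mask: observed pixels (zeros elsewhere) and the bitmask.
    # embed_net -> conditioning vector; logits_net -> per-position logits over
    # the K codebook entries, shape (H, W, K).
    cond = embed_net(jnp.concatenate([x_o_image, mask], axis=-1))
    logits = logits_net(cond)
    log_probs = jax.nn.log_softmax(logits, axis=-1)
    picked = jnp.take_along_axis(log_probs, codes[..., None], axis=-1)[..., 0]
    return jnp.sum(picked)  # log q_theta(z | x_o) under this simplified factorization
```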
Hierarchical VAEs. Hierarchical VAEs [22, 37, 39] are a powerful extension of traditional VAEs that allow for more expressive priors and posteriors by partitioning the latent variables into subsets z = {z1, . . . , zL}. A hierarchy is then created by factorizing the prior p(z) = ∏i p(zi | z_{<i}) and posterior q(z | x) = ∏i q(zi | z_{<i}, x). These models have demonstrated impressive performance on images and can even outperform autoregressive models [8]. Posterior Matching can be naturally applied to hierarchical VAEs, where the partially observed posterior is represented as q(z | xo) = ∏i q(zi | z_{<i}, xo). We adopt the Very Deep VAE (VDVAE) architecture used by Child [8] and extend it to include the partially observed posterior (see Appendix for training and architecture details). We note that due to our hardware constraints, we trained smaller models and for fewer iterations than Child [8].

Inpainting results for our VDVAE models are given in Table 1. We see that they achieve better precision/recall scores than the VQ-VAE models and, unlike VQ-VAE, are able to attain better PSNR than ACFlow for MNIST and CELEBA. Figure 5 shows some example inpaintings, and additional samples are provided in the Appendix. The fact that we see better downstream performance when using VDVAE than when using VQ-VAE is illustrative of Posterior Matching’s ability to admit easy performance gains by simply switching to a more powerful base VAE.

5.3 Real-valued Datasets

We evaluate Posterior Matching on real-valued tabular data, specifically the benchmark UCI repository datasets from Papamakarios et al. [31]. We follow the experimental setup used by Li et al. [24] and Strauss and Oliva [38]. In these experiments, we train basic VAE models while simultaneously learning the partially observed posterior. Given the flexibility that Posterior Matching affords, we use an autoregressive distribution for qθ(z | xo). Further details can be found in the Appendix.

Table 2 reports the arbitrary conditional log-likelihoods and normalized root-mean-square error (NRMSE) of imputations for our models (with features missing completely at random).

Table 2: Imputation NRMSE (lower is better) and arbitrary conditional log-likelihood (LL, higher is better) on the UCI benchmark datasets.

NRMSE               POWER           GAS             HEPMASS         MINIBOONE       BSDS300
Posterior Matching  0.834 ± 0.001   0.330 ± 0.013   0.857 ± 0.000   0.450 ± 0.002   0.573 ± 0.000
VAEAC               0.880 ± 0.001   0.574 ± 0.033   0.896 ± 0.001   0.462 ± 0.002   0.615 ± 0.000
ACE                 0.828 ± 0.002   0.335 ± 0.027   0.830 ± 0.001   0.432 ± 0.003   0.525 ± 0.000
ACE Proposal        0.828 ± 0.002   0.312 ± 0.033   0.832 ± 0.001   0.436 ± 0.004   0.535 ± 0.000
ACFlow              0.877 ± 0.001   0.567 ± 0.050   0.909 ± 0.000   0.478 ± 0.004   0.603 ± 0.000
ACFlow+BG           0.833 ± 0.002   0.369 ± 0.016   0.861 ± 0.001   0.442 ± 0.001   0.572 ± 0.000

LL                  POWER            GAS             HEPMASS           MINIBOONE        BSDS300
Posterior Matching  0.246 ± 0.002    5.964 ± 0.005   -8.963 ± 0.007    -3.116 ± 0.175   77.488 ± 0.012
VAEAC               -0.042 ± 0.002   2.418 ± 0.006   -10.082 ± 0.010   -3.452 ± 0.067   74.850 ± 0.005
ACE                 0.631 ± 0.002    9.643 ± 0.005   -3.859 ± 0.005    0.310 ± 0.054    86.701 ± 0.008
ACE Proposal        0.583 ± 0.003    9.484 ± 0.005   -4.417 ± 0.005    -0.241 ± 0.056   85.228 ± 0.009
ACFlow              0.561 ± 0.003    8.086 ± 0.010   -8.197 ± 0.008    -0.972 ± 0.022   81.827 ± 0.007
ACFlow+BG           0.528 ± 0.003    7.593 ± 0.011   -6.833 ± 0.006    -1.098 ± 0.032   81.399 ± 0.008

Likelihoods are computed using an importance sampling estimate (see Appendix for details). We primarily compare to VAEAC as a baseline in the VAE family; however, we also provide results for ACE and ACFlow for reference. We see that Posterior Matching is able to consistently produce more accurate imputations and higher likelihoods than VAEAC. While our models do not match the likelihoods achieved by ACE and ACFlow, Posterior Matching is comparable to them for imputation NRMSE.
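As one concrete, hypothetical instantiation of such an autoregressive qθ(z | xo), the sketch below evaluates log qθ(z | xo) dimension by dimension, with each latent dimension's Gaussian parameters produced from a conditioning vector (an embedding of xo) and the preceding latent dimensions; the per-step network step_net is a placeholder. Because training only requires log-probabilities of samples drawn from the VAE encoder (Equation 5), qθ never has to be sampled with reparameterization during training.

```python
import jax.numpy as jnp

def autoregressive_log_prob(z, h_xo, step_net):
    # log q_theta(z | x_o) = sum_j log N(z_j ; mu_j, sigma_j^2), where
    # (mu_j, logvar_j) = step_net(h_xo, z_{<j}, j) depends on the conditioning
    # vector h_xo and all previous latent dimensions.
    d = z.shape[-1]
    log_prob = 0.0
    for j in range(d):
        z_prev = jnp.where(jnp.arange(d) < j, z, 0.0)   # mask out z_{>=j}
        mu_j, logvar_j = step_net(h_xo, z_prev, j)
        log_prob += -0.5 * (
            logvar_j + (z[..., j] - mu_j) ** 2 / jnp.exp(logvar_j)
            + jnp.log(2.0 * jnp.pi)
        )
    return log_prob
```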
5.4 Partially Observed Clustering

Probabilistic clustering often views cluster assignments as a latent variable. Thus, when applying Posterior Matching in this setting, we may perform “partially observed” clustering, which clusters instances based on a subset of observed features. We consider VaDE, which uses a mixture of Gaussians as the prior, allowing it to do unsupervised clustering by treating each Gaussian component as one of the clusters [19]. Despite differences in how VaDE is trained compared to a classic VAE, training a partially observed encoder via Posterior Matching remains exactly the same.

We train models on both MNIST and FASHION MNIST (see Appendix for experimental details). Figure 6 shows the clustering accuracy of these models as the percentage of (randomly selected) observed features changes. As a baseline, we train a supervised model where the labels are the cluster predictions from the VaDE model when all of the features are observed. We see that Posterior Matching is able to match the performance of the baseline, and even slightly outperform it for low percentages of observed features. Unlike the supervised approach, Posterior Matching has the advantage of being generative.

5.5 Very Fast Greedy Feature Acquisition

As discussed in Section 3.5, we can use Posterior Matching outside of the specific task of arbitrary conditioning. Here, we consider the problem of greedy active feature acquisition. We train a VAE with a Posterior Matching network that outputs the lookahead posteriors described in Section 3.5, using the loss in Equation 7. Note that we are also still using Posterior Matching in order to learn qθ(z | xo) and therefore to produce reconstructions. Training details can be found in the Appendix.

We consider the MNIST dataset and compare to EDDI as a baseline, using the authors’ publicly available code. We downscale images to 16 × 16 since EDDI has difficulty scaling to high-dimensional data. We also only evaluate on the first 1000 instances of the MNIST test set, as the EDDI code was very slow when computing the greedy acquisition policy. EDDI also uses a particular architecture that is not compatible with convolutions. Thus, we train an MLP-based VAE on flattened images in order to make a fair comparison. However, since Posterior Matching does not place any limitations on the type of VAE being used, we also train a convolutional version. For our models, we greedily select the feature to acquire using the more expensive sampling-based approach (similar to EDDI) as well as with the lookahead posteriors (which requires no sampling). In both cases, imputations are computed with an expectation over 50 latent codes, as is done for EDDI. An example acquisition trajectory is shown in Figure 3.

Figure 7 presents the root-mean-square error, averaged across the test instances, when imputing xu with different numbers of acquired features. We see that our models are able to achieve lower error than EDDI. We also see that acquiring based on the lookahead posteriors incurs only a minimal increase in error compared to the sampling-based method, despite being far more efficient. Computing the greedy choice with our model using the sampling-based approach takes 68 ms ± 917 µs (for a single acquisition on CPU). Using the lookahead posteriors, the time is only 310 µs ± 15.3 µs, a roughly 219x speedup.

6 Conclusions

We have presented an elegant and general framework, called Posterior Matching, that allows VAEs to perform arbitrary conditioning.
That is, we can take an existing VAE that only models the joint distribution p(x) and train an additional model that, when combined with the VAE, is able to assess any likelihood p(xu | xo) for arbitrary subsets of unobserved features xu and observed features xo. We applied this approach to a variety of VAEs for a multitude of different tasks. We found that Posterior Matching outperforms previous specialized VAEs for arbitrary conditioning with tabular data and for image inpainting. Importantly, we find that one can switch to a more powerful base VAE and get immediate improvements in downstream arbitrary conditioning performance “for free,” without making changes to Posterior Matching itself. We can also use Posterior Matching to perform clustering based on partially observed inputs and to improve the efficiency of greedy active feature acquisition by several orders of magnitude at negligible cost to performance.

With this work, we hope to make arbitrary conditioning more widely accessible. Arbitrary conditioning no longer requires specialized methods, but can instead be achieved by applying one general framework to common VAEs. As advances are made in VAEs for joint density estimation, we can expect to immediately reap the rewards for arbitrary conditioning.

Acknowledgments and Disclosure of Funding

We would like to thank Google’s TPU Research Cloud program for providing free access to TPUs. This research was partly funded by NSF grant IIS2133595 and by NIH 1R01AA02687901A1.
Review Prompt

1. What is the focus and contribution of the paper regarding the arbitrary conditioning problem?
2. What are the strengths of the proposed method, particularly in its simplicity and flexibility?
3. What are the weaknesses of the paper, especially regarding Theorem 3.1 and its limited value?
4. Do you have any minor questions or comments on the content, such as citations and definitions?
5. How does the reviewer assess the limitations and potential negative societal impact of the work?
Review Format: Summary Of The Paper, Strengths And Weaknesses, Questions, Limitations
Review

Summary Of The Paper

This work proposes a simple method for the arbitrary conditioning problem, which trains an amortized inference network for the distribution q(z | x_o). Compared with previous work, the method does not require reparameterizable inference networks, and thus enables the use of more flexible models.

Post-rebuttal update: I am satisfied by the authors' response, and believe this work is suitable for publication.

Strengths And Weaknesses

Strengths: The method is simple, intuitive, and appears to have competitive performance.

Weaknesses: I do not have any major concerns. My only gripe is that Theorem 3.1 appears to add little value -- the distributions to be optimized depend on θ in a somewhat complicated way, and the only thing I can read out of Eq. (4) is that when both p_ϕ and q_ψ are optimized, the imputation distribution determined by q_θ will be correct; but this already follows from Eq. (3). More discussion on the general case would be helpful.

Questions

I do not have any major questions. A few minor questions / comments:
- You should cite the early works on data imputation with VAEs, e.g. Rezende, Mohamed & Wierstra (2014, Appendix F).
- In Theorem 3.1 the definitions of p_{θ,ϕ} and q_θ(z | x_o, x_u) should be moved to the text. The latter distribution should also depend on ϕ (the definition of q_θ does not involve x_u).
- As both terms depend on θ, ideally there should be more discussion about the optima, as mentioned in the above section.

Limitations

Limitations and potential negative societal impact are adequately addressed.
NIPS
Title Posterior Matching for Arbitrary Conditioning Abstract Arbitrary conditioning is an important problem in unsupervised learning, where we seek to model the conditional densities p(xu | xo) that underly some data, for all possible non-intersecting subsets o, u ⊂ {1, . . . , d}. However, the vast majority of density estimation only focuses on modeling the joint distribution p(x), in which important conditional dependencies between features are opaque. We propose a simple and general framework, coined Posterior Matching, that enables Variational Autoencoders (VAEs) to perform arbitrary conditioning, without modification to the VAE itself. Posterior Matching applies to the numerous existing VAE-based approaches to joint density estimation, thereby circumventing the specialized models required by previous approaches to arbitrary conditioning. We find that Posterior Matching is comparable or superior to current state-of-the-art methods for a variety of tasks with an assortment of VAEs (e.g. discrete, hierarchical, VaDE). 1 Introduction Variational Autoencoders (VAEs) [21] are a widely adopted class of generative model that have been successfully employed in numerous areas [4, 15, 26, 33, 16]. Much of their appeal stems from their ability to probabilistically represent complex data in terms of lower-dimensional latent codes. Like most other generative models, VAEs are typically designed to model the joint data distribution, which communicates likelihoods for particular configurations of all features at once. This can be useful for some tasks, such as generating images, but the joint distribution is limited by its inability to explicitly convey the conditional dependencies between features. In many cases, conditional distributions, which provide the likelihood of an event given some known information, are more relevant and useful. Conditionals can be obtained in theory by marginalizing the joint distribution, but in practice, this is generally not analytically available and is expensive to approximate. Easily assessing the conditional distribution over any subset of features is important for tasks where decisions and predictions must be made over a varied set of possible information. For example, some medical applications may require reasoning over: the distribution of blood pressure given age and weight; or the distribution of heart-rate and blood-oxygen level given age, blood pressure, and BMI; etc. For flexibility and scalability, it is desirable for a single model to provide all such conditionals at inference time. More formally, this task is known as arbitrary conditioning, where the goal is to model the conditional density p(xu | xo) for any arbitrary subsets of unobserved features xu and observed features xo. In this work, we show, by way of a simple and general framework, that traditional VAEs can perform arbitrary conditioning, without modification to the VAE model itself. Our approach, which we call Posterior Matching, is to model the distribution p(z | xo) that is induced by some VAE, where z is the latent code. In other words, we consider the distribution of latent codes given partially observed features. We do this by having a neural network output an approximate partially observed posterior q(z | xo). In order to train this network, we develop a straightforward maximum likelihood estimation objective and show that it is equivalent to maximizing p(xu | xo), 36th Conference on Neural Information Processing Systems (NeurIPS 2022). the quantity of interest. 
Unlike prior works that use VAEs for arbitrary conditioning, we do not make special assumptions or optimize custom variational lower bounds. Rather, training via Posterior Matching is simple, highly flexible, and without limiting assumptions on approximate posteriors (e.g., q(z | xo) need not be reparameterized and can thus be highly expressive). We conduct several experiments in which we apply Posterior Matching to various types of VAEs for a myriad of different tasks, including image inpainting, tabular arbitrary conditional density estimation, partially observed clustering, and active feature acquisition. We find that Posterior Matching leads to improvements over prior VAE-based methods across the range of tasks we consider. 2 Background Arbitrary Conditioning A core problem in unsupervised learning is density estimation, where we are given a dataset D = {x(i)}Ni=1 of i.i.d. samples drawn from an unknown distribution p(x) and wish to learn a model that best approximates the probability density function p. A limitation of only learning the joint distribution p(x) is that it does not provide direct access to the conditional dependencies between features. Arbitrary conditional density estimation [18, 24, 38] is a more general task where we want to estimate the conditional density p(xu | xo) for all possible subsets of observed features o ⊂ {1, . . . , d} and unobserved features u ⊂ {1, . . . , d} such that o and u do not intersect. Here, xo ∈ R|o| and xu ∈ R|u|. Estimation of joint or marginal likelihoods is a special case where o = ∅. Note that, while not strictly necessary for arbitrary conditioning methods [24, 38], we assume D is fully observed, a requirement for training traditional VAEs. Variational Autoencoders Variational Autoencoders (VAEs) [21] are a class of generative models that assume a generative process in which data likelihoods are represented as p(x) = ∫ p(x | z)p(z) dz, where z is a latent variable that typically has lower dimensionality than the data x. A tractable distribution that affords easy sampling and likelihood evaluation, such as a standard Gaussian, is usually imposed on the prior p(z). These models are learned by maximizing the evidence lower bound (ELBO) of the data likelihood: log p(x) ≥ Ez∼qψ(·|x)[log pϕ(x | z)]− KL(qψ(z | x) || p(z)), where qψ(z | x) and pϕ(x | z) are the encoder (or approximate posterior) and decoder of the VAE, respectively. The encoder and decoder are generally neural networks that output tractable distributions (e.g., a multivariate Gaussian). In order to properly optimize the ELBO, samples drawn from qψ(z | x) must be differentiable with respect to the parameters of the encoder (often called the reparameterization trick). After training, a new data point x̂ can be easily generated by first sampling z from the prior, then sampling x̂ ∼ pϕ(· | z). 3 Posterior Matching In this section we describe our framework, coined Posterior Matching, to model the underlying arbitrary conditionals in a VAE. In many respects, Posterior Matching cuts the Gordian knot to uncover the conditional dependencies. Following our insights, we show that our approach is direct and intuitive. Notwithstanding, we are the first to apply this direct methodology for arbitrary conditionals in VAEs and are the first to connect our proposed loss with arbitrary conditional likelihoods p(xu | xo). Note that we are not proposing a new type of VAE. Rather, we are formalizing a simple and intuitive methodology that can be applied to numerous existing (or future) VAEs. 
3.1 Motivation Let us begin with a motivating example, depicted in Figure 1. Suppose we have trained a VAE on images of handwritten 3s, 5s, and 8s. This VAE has thus learned to represent these images in a low-dimensional latent space. Any given code (vector) in this latent space represents a distribution over images in the original data space, which can be retrieved by passing that code through the VAE’s decoder. Some regions in the latent space will contain codes that represent 3s, some will represent 5s, and some will represent 8s. There is typically only an interest in mapping from a given image x to a distribution over the latent codes that could represent that image, i.e., the posterior q(z | x). However, we can just as easily ask which latent codes are feasible having only observed part of an image. For example, if we only see the right half the image shown in Figure 1, we know the digit could be a 3 or an 8, but certainly not a 5. Thus, the distribution over latent codes that could correspond to the full image, that is pψ(z | xo) (where ψ is the encoder’s parameters), should only include regions that represent 3s or 8s. Decoding any sample from pψ(z | xo) will produce an image of a 3 or an 8 that aligns with what has been observed. The important insight is that we can think about how conditioning on xo changes the distribution over latent codes without explicitly worrying about what the (potentially higher-dimensional and more complicated) conditional distribution over xu looks like. Once we know pψ(z | xo), we can easily move back to the original data space using the decoder. 3.2 Approximating the Partially Observed Posterior The partially observed approximate posterior of interest is not readily available, as it is implicitly defined by the VAE: pψ(z | xo) = Exu∼p(·|xo) [ qψ(z | xo,xu) ] , (1) where qψ(z | xo,xu) = qψ(z | x) is the VAE’s encoder. Thus, we introduce a neural network in order to approximate it. Given a network that outputs the distribution qθ(z | xo) (i.e. the partially observed encoder in Figure 2), we now discuss our approach to training it. Our approach is guided by the priorities of simplicity and generality. We minimize (with respect to θ) the following likelihoods, where the samples are coming from our target distribution as defined in Equation 1: Exu∼p(·|xo) [ Ez∼qψ(·|xo,xu)[− log qθ(z | xo)] ] . (2) We discuss how this is optimized in practice in Section 3.4. Due to the relationship between negative log-likelihood minimization and KL-divergence minimization [3], we can interpret Equation 2 as minimizing: Exu∼p(·|xo) [ KL ( qψ(z | xo,xu) || qθ(z | xo) ) ] . (3) We can directly minimize the KL-divergence in Equation 3 if it is analytically available between the two posteriors, for instance if both posteriors are Gaussians. However, Equation 2 is more general in that it allows us to use more expressive (e.g., autoregressive) distributions for qθ(z | xo) with which the KL-divergence cannot be directly computed. This is important given that pψ(z | xo) is likely to be complex (e.g., multimodal) and not easily captured by a Gaussian (as in Figure 1). Importantly, there is no requirement for qθ(z | xo) to be reparameterized, which would further limit the class of distributions that can be used. There is a high degree of flexibility in the choice of distribution for the partially observed posterior. Note that this objective does not utilize the decoder. 
3.3 Connection with Arbitrary Conditioning While the Posterior Matching objective from Equation 2 and Equation 3 is intuitive, it is not immediately clear how this approach relates back to the arbitrary conditioning objective of maximizing p(xu | xo). We formalize this connection in Theorem 3.1 (see Appendix for proof). Theorem 3.1. Let qψ(z | x) and pϕ(x | z) be the encoder and decoder, respectively, for some VAE. Additionally, let qθ(z | xo) be an approximate partially observed posterior. Then minimizing Exu∼p(·|xo) [ KL ( qψ(z | xo,xu) || qθ(z | xo) )] is equivalent to minimizing Exu∼p(·|xo) [ − log pθ,ϕ(xu | xo) + KL ( qψ(z | xo,xu) || qθ(z | xo,xu) ) ] , (4) with respect to the parameters θ. The first term inside the expectation in Equation 4 gives us the explicit connection back to the arbitrary conditioning likelihood p(xu | xo), which is being maximized when minimizing Equation 4. The second term acts as a sort of regularizer by trying to make the partially observed posterior match the VAE posterior when conditioned on all of x — intuitively, this makes sense as a desirable outcome. 3.4 Implementation A practical training loss follows quickly from Equation 2. For the outer expectation, we do not have access to the true distribution p(xu | xo), but for a given instance x that has been partitioned into xo and xu, we do have one sample from this distribution, namely xu. So we approximate this expectation using xu as a single sample. This type of single-sample approximation is common with VAEs, e.g., when estimating the ELBO. For the inner expectation, we have access to qψ(z | x), which can easily be sampled in order to estimate the expectation. In practice, we generally use a single sample for this as well. This gives us the following Posterior Matching loss: LPM(x, o, θ, ψ) = −Ez∼qψ(·|x) [ log qθ(z | xo) ] , (5) where o is the set of observed feature indices. During training, o can be randomly sampled from a problem-specific distribution for each minibatch. Figure 2 provides a visual overview of our approach. In practice, we represent xo as a concatenation of x that has had unobserved features set to zero and a bitmask b that indicates which features are observed. This representation has been successful in other arbitrary conditioning models [24, 38]. However, this choice is not particularly important to Posterior Matching itself, and alternative representations, such as set embeddings, are valid as well. As required by VAEs, samples from qψ(z | x) will be reparameterized, which means that minimizing LPM will influence the parameters of the VAE’s encoder in addition to the partially observed posterior network. In some cases, this may be advantageous, as the encoder can be guided towards learning a latent representation that is more conducive to arbitrary conditioning. However, it might also be desirable to train the VAE independently of the partially observed posterior, in which case we can choose to stop gradients on the samples z ∼ qψ(· | x) when computing LPM. Similarly, the partially observed posterior can be trained against an existing pretrained VAE. In this case, the parameters of the VAE’s encoder and decoder are frozen, and we only optimize LPM with respect to θ. Otherwise, we jointly optimize the VAE’s ELBO and LPM. We emphasize that there is a high degree of flexibility with the choice of VAE, i.e. we have not imposed any unusual constraints. However, there are some potentially limiting practical considerations that have not been explicitly mentioned yet. 
First, the training data must be fully observed, as with traditional VAEs, since LPM requires sampling qψ(z | x). However, given that the base VAE requires fully observed training data anyway, this is generally not a relevant limitation for our purposes. Second, it is convenient in practice for the VAE’s decoder to be factorized, i.e. p(x | z) = ∏ i p(xi | z), as this allows us to easily sample from p(xu | z) (sampling xu is less straightforward with other types of decoders). However, it is standard practice to use factorized decoders with VAEs, so this is ordinarily not a concern. We also note that, while useful for easy sampling, a factorized decoder is not necessary for optimizing the Posterior Matching loss, which does not incorporate the decoder. 3.5 Posterior Matching Beyond Arbitrary Conditioning The concept of matching VAE posteriors is quite general and has other uses beyond the application of arbitrary conditioning. We consider one such example, which still has ties to arbitrary conditioning, in order to give a flavor for other potential uses. A common application of arbitrary conditioning is active feature acquisition [13, 23, 25], where informative features are sequentially acquired on an instance-by-instance basis. In the unsupervised case, the aim is to acquire as few features as possible while maximizing the ability to re- construct the remaining unobserved features (see Figure 3 for example). One approach to active feature acquisition is to greedily select the feature that will maximize the expected amount of information to be gained about the currently unobserved features [23, 25]. For VAEs, Ma et al. [25] show that this is equivalent to selecting each feature according to argmax i∈u H(z | xo)− Exi∼p(·|xo) [ H(z | xo, xi) ] = argmin i∈u Exi∼p(·|xo) [ H(z | xo, xi) ] . (6) For certain families of posteriors, such as multivariate Gaussians, the entropies in Equation 6 can be analytically computed. In practice, approximating the expectation in Equation 6 is done via entropies of the posteriors p(i)(z | xo) ≡ Exi∼pθ,ϕ(·|xo) [ qθ(z | xo, xi) ] , where samples from pθ,ϕ(xi | xo) are produced by first sampling z ∼ qθ(· | xo) and then passing z through the VAE’s decoder pϕ(xi | z) (we call p(i)(z | xo) the “lookahead” posterior for feature i, since it is obtained by imagining what the posterior will look like after one acquisition into the future). Hence, computing the resulting entropies requires one network evaluation per sample of xi to encode z, for i ∈ u. Thus, if using k samples for each xi, each greedy step will be Ω(k · |u|), which may be prohibitive in high dimensions. In analogous fashion to the Posterior Matching approach that has already been discussed, we can train a neural network to directly output the lookahead posteriors for all features at once. The Posterior Matching loss in this case is LPM-Lookahead(x, o, u, ω, θ, ϕ) = ∑ i∈u Exi∼pθ,ϕ(·|xo) [ Ez∼qθ(·|xo,xi) [ − log q(i)ω (z | xo) ]] , (7) where ω is the parameters of the lookahead posterior network. In practice, we train a single shared network with a final output layer that outputs the parameters of all q(i)ω (z | xo). Note that given the distributions q(i)ω (z | xo) for all i, computing the greedy acquisition choice consists of doing a forward evaluation of our network, then choosing the feature i ∈ u such that the entropy of q(i)ω (z | xo) is minimized. In other words, we may bypass the individual samples of xi, and use a single shared network for a faster acquisition step. 
In this setting, we let q(i)ω (z | xo) be a multivariate Gaussian so that the entropy computation is trivial. See Appendix for a diagram of the entire process. This use of Posterior Matching leads to large improvements in the computational efficiency of greedy active feature acquisition (demonstrated empirically in Section 5.5). 4 Prior Work A variety of approaches to arbitrary conditioning have been previously proposed. ACE is an autoregressive, energy-based method that is the current state-of-the-art for arbitrary conditional likelihood estimation and imputation, although it can be computationally intensive for very high dimensional data [38]. ACFlow is a variant of normalizing flows that can give analytical arbitrary conditional likelihoods [24]. Several other methods, including Sum-Product Networks [32, 6], Neural Conditioner [2], and Universal Marginalizer [10], also have the ability to estimate conditional likelihoods. Rezende et al. [34] were among the first to suggest that VAEs can be used for imputation. More recently, VAEAC was proposed as a VAE variant designed for arbitrary conditioning [18]. Unlike Posterior Matching, VAEAC is not a general framework and cannot be used with typical pretrained VAEs. EDDI is a VAE-based approach to active feature acquisition and relies on arbitrary conditioning [25]. The authors introduce a “Partial VAE” in order to perform the arbitrary conditioning, which, similarly to Posterior Matching, tries to model p(z | xo). Unlike Posterior Matching, they do this by maximizing a variational lower bound on p(xo) using a partial inference network q(z | xo) (there is no standard VAE posterior q(z | x) in EDDI). Gong et al. [13] use a similar approach that is based on the Partial VAE of EDDI. The major drawback of these methods is that, unlike with Posterior Matching, q(z | xo) must be reparameterizable in order to optimize the lower bound (the authors use a diagonal Gaussian). Thus, certain more expressive distributions (e.g., autoregressive) cannot be used. Additionally, these methods cannot be applied to existing VAEs. The methods of Ipsen et al. [17] and Collier et al. [9] are also similar to EDDI, where the former optimizes an approximation of p(xo,b) and the latter optimizes a lower bound on p(xo | b). Ipsen et al. [17] also focuses on imputation for data that is missing “not at random”, a setting that is outside the focus of our work. There are also several works that have considered learning to identify desirable regions in latent spaces. Engel et al. [11] start from a pretrained VAE, but then train a separate GAN [14] with special regularizers to do their conditioning. They only condition on binary vectors, y, that correspond to a small number of predefined attributes, whereas we allow for conditioning on arbitrary subsets of continuous features xo (a more complicated conditioning space). Also, their resulting GAN does not make the likelihood q(z | y) available, whereas Posterior Matching directly (and flexibly) models q(z | xo), which may be useful for downstream tasks (e.g. Section 5.5) and likelihood evaluation (see Appendix). Furthermore, Posterior Matching trains directly through KL, without requiring an additional critic. Whang et al. [40] learn conditional distributions, but not arbitrary conditional distributions (a much harder problem). They also consider normalizing flow models, which are limited to invertible architectures with tractable Jacobian determinants and latent spaces that have the same dimensionality as the data (unlike VAEs). 
Cannella et al. [7] similarly do conditional sampling from a model of the joint distribution, but are also restricted to normalizing flow architectures and require a more expensive MCMC procedure for sampling. 5 Experiments In order to empirically test Posterior Matching, we apply it to a variety of VAEs aimed at different tasks. We find that our models are able to match or surpass the performance of previous specialized VAE methods. All experiments were conducted using JAX [5] and the DeepMind JAX Ecosystem [1]. Code is available at https://github.com/lupalab/posterior-matching. Our results are dependent on the choice of VAE, and the particular VAEs used in our experiments were not the product of extensive comparisons and did not undergo thorough hyperparameter tuning — that is not the focus of this work. With more carefully selected or tuned VAEs, and as new VAEs continue to be developed, we can expect Posterior Matching’s downstream performance to improve accordingly on any given task. We emphasize that our experiments span a diverse set of task, domains, and types of VAE, wherein Posterior Matching was effective. 5.1 MNIST In this first experiment, our goal is to demonstrate that Posterior Matching replicates the intuition depicted in Figure 1. We do this by training a convolutional VAE with Posterior Matching on the MNIST dataset. The latent space of this VAE is then mapped to two dimensions with UMAP [27] and visualized in Figure 4. In the figure, black points represent samples from qθ(z | xo), and for select samples, the corresponding reconstruction is shown. The encoded test data is shown, colored by true class label, to highlight which regions correspond to which digits. We see that the experimental results nicely replicate our earlier intuitions — the learned distribution qθ(z | xo) puts probability mass only in parts of the latent space that correspond to plausible digits based on what is observed and successfully captures multimodal distributions (see the second column in Figure 4). 5.2 Image Inpainting One practical application of arbitrary conditioning is image inpainting, where only part of an image is observed and we want to fill in the missing pixels with visually coherent imputations. As with prior works [18, 24], we assume pixels are missing completely at random. We test Posterior Matching as an approach to this task by pairing it with both discrete and hierarchical VAEs. Vector Quantized-VAEs We first consider VQ-VAE [30], a type of VAE that is known to work well with images. VQ-VAE differs from the typical VAE with its use of a discrete latent space. That is, each latent code is a grid of discrete indices rather than a vector of continuous values. Because the latent space is discrete, Oord et al. [30] model the prior distribution with a PixelCNN [29, 36] after training the VQ-VAE. We similarly use a conditional PixelCNN to model qθ(z | xo). First, a convolutional network maps xo to a vector, and that vector is then used as a conditioning input to the PixelCNN. More architecture and training details can be found in the Appendix. We train VQ-VAEs with Posterior Matching for the MNIST, OMNIGLOT, and CELEBA datasets. Table 1 reports peak signal-to-noise ratio (PSNR) and precision/recall [35] for inpaintings produced by our model. We find that Posterior Matching with VQ-VAE consistently achieves better precision/recall scores than previous models while having comparable PSNR. 
Hierarchical VAEs Hierarchical VAEs [22, 37, 39] are a powerful extension of traditional VAEs that allow for more expressive priors and posteriors by partitioning the latent variables into subsets z = {z1, . . . , zL}. A hierarchy is then created by factorizing the prior p(z) = ∏ i p(zi | z<i) and posterior q(z | x) = ∏ i q(zi | z<i, x). These models have demonstrated impressive performance on images and can even outperform autoregressive models [8]. Posterior Matching can be naturally applied to hierarchical VAEs, where the partially observed posterior is represented as q(z | xo) = ∏ i q(zi | z<i, xo). We adopt the Very Deep VAE (VDVAE) architecture used by Child [8] and extend it to include the partially observed posterior (see Appendix for training and architecture details). We note that due to our hardware constraints, we trained smaller models for fewer iterations than Child [8]. Inpainting results for our VDVAE models are given in Table 1. We see that they achieve better precision/recall scores than the VQ-VAE models and, unlike VQ-VAE, are able to attain better PSNR than ACFlow for MNIST and CELEBA. Figure 5 shows some example inpaintings, and additional samples are provided in the Appendix. The fact that we see better downstream performance when using VDVAE than when using VQ-VAE is illustrative of Posterior Matching’s ability to admit easy performance gains by simply switching to a more powerful base VAE.

5.3 Real-valued Datasets

We evaluate Posterior Matching on real-valued tabular data, specifically the benchmark UCI repository datasets from Papamakarios et al. [31]. We follow the experimental setup used by Li et al. [24] and Strauss and Oliva [38]. In these experiments, we train basic VAE models while simultaneously learning the partially observed posterior. Given the flexibility that Posterior Matching affords, we use an autoregressive distribution for qθ(z | xo). Further details can be found in the Appendix. Table 2 reports the arbitrary conditional log-likelihoods and normalized root-mean-square error (NRMSE) of imputations for our models (with features missing completely at random). Likelihoods are computed using an importance sampling estimate (see Appendix for details). We primarily compare to VAEAC as a baseline in the VAE family; however, we also provide results for ACE and ACFlow for reference. We see that Posterior Matching is able to consistently produce more accurate imputations and higher likelihoods than VAEAC. While our models do not match the likelihoods achieved by ACE and ACFlow, Posterior Matching is comparable to them for imputation NRMSE.

Table 2: Imputation NRMSE (lower is better) and arbitrary conditional log-likelihood (LL, higher is better) on the UCI benchmark datasets, with features missing completely at random.

NRMSE                 POWER            GAS              HEPMASS           MINIBOONE        BSDS300
Posterior Matching    0.834 ± 0.001    0.330 ± 0.013    0.857 ± 0.000     0.450 ± 0.002    0.573 ± 0.000
VAEAC                 0.880 ± 0.001    0.574 ± 0.033    0.896 ± 0.001     0.462 ± 0.002    0.615 ± 0.000
ACE                   0.828 ± 0.002    0.335 ± 0.027    0.830 ± 0.001     0.432 ± 0.003    0.525 ± 0.000
ACE Proposal          0.828 ± 0.002    0.312 ± 0.033    0.832 ± 0.001     0.436 ± 0.004    0.535 ± 0.000
ACFlow                0.877 ± 0.001    0.567 ± 0.050    0.909 ± 0.000     0.478 ± 0.004    0.603 ± 0.000
ACFlow+BG             0.833 ± 0.002    0.369 ± 0.016    0.861 ± 0.001     0.442 ± 0.001    0.572 ± 0.000

LL                    POWER            GAS              HEPMASS           MINIBOONE        BSDS300
Posterior Matching    0.246 ± 0.002    5.964 ± 0.005    -8.963 ± 0.007    -3.116 ± 0.175   77.488 ± 0.012
VAEAC                 -0.042 ± 0.002   2.418 ± 0.006    -10.082 ± 0.010   -3.452 ± 0.067   74.850 ± 0.005
ACE                   0.631 ± 0.002    9.643 ± 0.005    -3.859 ± 0.005    0.310 ± 0.054    86.701 ± 0.008
ACE Proposal          0.583 ± 0.003    9.484 ± 0.005    -4.417 ± 0.005    -0.241 ± 0.056   85.228 ± 0.009
ACFlow                0.561 ± 0.003    8.086 ± 0.010    -8.197 ± 0.008    -0.972 ± 0.022   81.827 ± 0.007
ACFlow+BG             0.528 ± 0.003    7.593 ± 0.011    -6.833 ± 0.006    -1.098 ± 0.032   81.399 ± 0.008
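For reference, the following is a minimal, self-contained sketch of the single-sample Posterior Matching term from Equation 5, assuming diagonal-Gaussian posteriors on both sides; the tabular models above actually use an autoregressive qθ(z | xo), and all names here are illustrative rather than taken from the released code.

import jax
import jax.numpy as jnp

def diag_gaussian_log_prob(z, mean, log_std):
    """Log-density of a diagonal Gaussian, summed over latent dimensions."""
    return jnp.sum(-0.5 * jnp.log(2.0 * jnp.pi) - log_std
                   - 0.5 * ((z - mean) / jnp.exp(log_std)) ** 2, axis=-1)

def posterior_matching_loss(key, enc_mean, enc_log_std, po_mean, po_log_std,
                            stop_encoder_grad=True):
    """Single-sample estimate of L_PM = -E_{z ~ q_psi(z|x)}[log q_theta(z | x_o)].

    enc_mean, enc_log_std: parameters of the full posterior q_psi(z | x).
    po_mean, po_log_std:   parameters of the partially observed posterior q_theta(z | x_o),
                           taken here to be a diagonal Gaussian for simplicity.
    """
    eps = jax.random.normal(key, enc_mean.shape)
    z = enc_mean + jnp.exp(enc_log_std) * eps      # reparameterized sample from q_psi(z | x)
    if stop_encoder_grad:
        # Optionally keep L_PM from updating the VAE encoder (e.g., for a pretrained VAE).
        z = jax.lax.stop_gradient(z)
    return -jnp.mean(diag_gaussian_log_prob(z, po_mean, po_log_std))

# Toy usage: batch of 4, 8-dimensional latent space.
key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
enc_mean, enc_log_std = jax.random.normal(k1, (4, 8)), -jnp.ones((4, 8))
po_mean, po_log_std = jax.random.normal(k2, (4, 8)), jnp.zeros((4, 8))
print(posterior_matching_loss(k3, enc_mean, enc_log_std, po_mean, po_log_std))

In the joint-training setup, this term is simply added to the VAE's negative ELBO and the two are minimized together; setting stop_encoder_grad=True instead leaves an existing VAE untouched.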
5.4 Partially Observed Clustering

Probabilistic clustering often views cluster assignments as a latent variable. Thus, when applying Posterior Matching in this setting, we may perform “partially observed” clustering, which clusters instances based on a subset of observed features. We consider VaDE, which uses a mixture of Gaussians as the prior, allowing it to do unsupervised clustering by treating each Gaussian component as one of the clusters [19]. Despite differences in how VaDE is trained compared to a classic VAE, training a partially observed encoder via Posterior Matching remains exactly the same. We train models on both MNIST and FASHION MNIST (see Appendix for experimental details). Figure 6 shows the clustering accuracy of these models as the percentage of (randomly selected) observed features changes. As a baseline, we train a supervised model where the labels are the cluster predictions from the VaDE model when all of the features are observed. We see that Posterior Matching is able to match the performance of the baseline, and even slightly outperform it for low percentages of observed features. Unlike the supervised approach, Posterior Matching has the advantage of being generative.

5.5 Very Fast Greedy Feature Acquisition

As discussed in Section 3.5, we can use Posterior Matching outside of the specific task of arbitrary conditioning. Here, we consider the problem of greedy active feature acquisition. We train a VAE with a Posterior Matching network that outputs the lookahead posteriors described in Section 3.5, using the loss in Equation 7. Note that we are also still using Posterior Matching in order to learn qθ(z | xo) and therefore to produce reconstructions. Training details can be found in the Appendix. We consider the MNIST dataset and compare to EDDI as a baseline, using the authors’ publicly available code. We downscale images to 16 × 16 since EDDI has difficulty scaling to high-dimensional data. We also only evaluate on the first 1000 instances of the MNIST test set, as the EDDI code was very slow when computing the greedy acquisition policy. EDDI also uses a particular architecture that is not compatible with convolutions. Thus, we train an MLP-based VAE on flattened images in order to make a fair comparison. However, since Posterior Matching does not place any limitations on the type of VAE being used, we also train a convolutional version. For our models, we greedily select the feature to acquire using the more expensive sampling-based approach (similar to EDDI) as well as with the lookahead posteriors (which requires no sampling). In both cases, imputations are computed with an expectation over 50 latent codes, as is done for EDDI. An example acquisition trajectory is shown in Figure 3. Figure 7 presents the root-mean-square error, averaged across the test instances, when imputing xu with different numbers of acquired features. We see that our models are able to achieve lower error than EDDI. We also see that acquiring based on the lookahead posteriors incurs only a minimal increase in error compared to the sampling-based method, despite being far more efficient. Computing the greedy choice with our model using the sampling-based approach takes 68 ms ± 917 µs (for a single acquisition on CPU). Using the lookahead posteriors, the time is only 310 µs ± 15.3 µs, a roughly 219x speedup.
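To illustrate why the lookahead-based acquisition step is so cheap, here is a minimal sketch of the selection rule, assuming the lookahead posteriors are diagonal Gaussians (the general multivariate-Gaussian case only replaces the sum of log-stds with a log-determinant); names and shapes are illustrative. The greedy choice reduces to a single forward pass of the shared lookahead network followed by an argmin over per-feature entropies.

import jax
import jax.numpy as jnp

def greedy_lookahead_choice(lookahead_log_std, unobserved_mask):
    """Select the next feature to acquire from the lookahead posteriors.

    lookahead_log_std: [d, latent_dim] log-stds of the diagonal-Gaussian lookahead
                       posteriors q_omega^(i)(z | x_o), one row per candidate feature i,
                       all produced by one forward pass of the shared network.
    unobserved_mask:   [d] boolean mask, True where feature i is still unobserved.

    The entropy of a diagonal Gaussian is 0.5 * latent_dim * log(2*pi*e) + sum(log_std),
    so the argmin over features depends only on the sum of log-stds.
    """
    entropy_term = jnp.sum(lookahead_log_std, axis=-1)                # [d]
    entropy_term = jnp.where(unobserved_mask, entropy_term, jnp.inf)  # ignore observed features
    return jnp.argmin(entropy_term)

# Toy usage: 10 candidate features, 4-dimensional latent space.
key = jax.random.PRNGKey(0)
log_std = jax.random.normal(key, (10, 4))
mask = jnp.array([True] * 7 + [False] * 3)   # last three features already acquired
print(greedy_lookahead_choice(log_std, mask))

In contrast, the sampling-based approach must decode and re-encode multiple samples of xi for every unobserved feature at every step, which is what produces the gap in acquisition time reported above.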
6 Conclusions

We have presented an elegant and general framework, called Posterior Matching, that allows VAEs to perform arbitrary conditioning. That is, we can take an existing VAE that only models the joint distribution p(x) and train an additional model that, when combined with the VAE, is able to assess any likelihood p(xu | xo) for arbitrary subsets of unobserved features xu and observed features xo. We applied this approach to a variety of VAEs for a multitude of different tasks. We found that Posterior Matching outperforms previous specialized VAEs for arbitrary conditioning with tabular data and for image inpainting. Importantly, we find that one can switch to a more powerful base VAE and get immediate improvements in downstream arbitrary conditioning performance “for free,” without making changes to Posterior Matching itself. We can also use Posterior Matching to perform clustering based on partially observed inputs and to improve the efficiency of greedy active feature acquisition by more than two orders of magnitude at negligible cost to performance. With this work, we hope to make arbitrary conditioning more widely accessible. Arbitrary conditioning no longer requires specialized methods, but can instead be achieved by applying one general framework to common VAEs. As advances are made in VAEs for joint density estimation, we can expect to immediately reap the rewards for arbitrary conditioning.

Acknowledgments and Disclosure of Funding

We would like to thank Google’s TPU Research Cloud program for providing free access to TPUs. This research was partly funded by NSF grant IIS2133595 and by NIH 1R01AA02687901A1.
1. What is the focus and contribution of the paper regarding posterior matching?
2. What are the strengths and weaknesses of the proposed approach, particularly in its simplicity and comparisons with other works?
3. Do you have any concerns or questions about the method's assumptions and limitations, such as its reliance on fully observed data and missing completely at random model?
4. How feasible is it to extend this approach to cases where the data is not fully observed?
5. Can the in-painting examples capture the uncertainty in digits as motivated in Figure 1?
6. How well can this method handle missingness that isn't missing completely at random?
7. What is the optimal strategy for picking the missingness patterns during training?
Summary Of The Paper
This manuscript proposes a method "Posterior Matching" that trains Variational Autoencoders (VAEs) to be robust to missing data when they are trained with fully observed data. The idea is straightforward to implement and include with a wide variety of VAEs, and leads to relatively straightforward algorithms. Theory demonstrates that the method attempts to approximate arbitrary conditional distributions. Limiting to a subset of VAE approaches, the posterior matching method demonstrates improved performance on a number of benchmarks.

Strengths And Weaknesses
The biggest strength of this paper is that it is a theoretically motivated algorithm that is simple to include in many VAE models, and can especially work with more complex variants such as VADE. While there are two glaring limitations (see next section), this is a big advantage and the empirical results suggest that the method is effective. The simplicity of the active learning approach is quite nice as well.
A weakness is the relative lack of comparisons. While Posterior Matching is compared to multiple algorithms within the VAE space, there are many competing approaches for applications such as inpainting. It seems clear that the proposed approach can improve in these cases, but it is unclear if I should consider their approach for an inpainting task.
There are several minor issues. For example, the notation is a bit inconsistent. As an example, in equation (1) the partially observed posterior is written in terms of the variational posterior; that is not strictly true, as the variational distribution is an approximation and they are not exactly equal. This seems like the authors meant that this term should be similar to qθ(z | x0) instead. Either way, this should be clarified and explained more.
As a comment, given the motivating case in Figure 1, it would have been nice to see the proposed approach maintain the multimodality in the generated examples of Figure 5(b). I would appreciate getting stronger evidence that the multi-modality of the posterior is in fact maintained.

Questions
How feasible is it to extend this approach to a case where the data is not fully observed? Can the in-painting examples in fact capture the uncertainty in digits as motivated in Figure 1? How well can this method handle missingness that isn't missing completely at random? What is the optimal strategy for picking the missingness patterns during training? How would this be determined for a new dataset?

Limitations
The authors clearly state that they assume that they have access to fully observed data for training, and they correctly identify that as a limitation; however, they state that "this is generally not a relevant limitation for our purposes." This limitation is quite large and is downplayed too much in their manuscript. For example, in the motivating medical case given in the introduction, this would render the method infeasible. I work with medical data, and I am unaware of any problem within that domain where I could naturally apply their methods, whereas I can and do apply competing missing data approaches for VAEs. I would suggest that the authors clarify this limitation upfront in the introduction, at least, and discuss it at greater length rather than making it a small comment in a technical section.
The other major limitation, which was not addressed, is that this method assumes a "Missing Completely at Random" model of missingness. This model is incorrect for many medical problems, and would be an issue for the motivating example in the introduction. I do not see how to adapt the Posterior Matching approach to handle this case, but it would be helpful for the authors to clearly state this limitation to give the readers a better sense of when this method will and will not work.
1. What is the main contribution of the paper regarding VAE architecture?
2. What are the strengths of the proposed approach, particularly in its simplicity and flexibility?
3. What are the limitations of the method, and how does it compare to other VAE-based methods?
4. How effective is the method in various tasks, such as image inpainting and partially observed clustering?
5. Can the method be applied to greedy active feature acquisition, and how does it perform in such cases?
Summary Of The Paper
This paper presents a general approach for adding arbitrary conditioning to models based on the VAE architecture. An additional encoder which takes partially observed inputs is used to define a partially observed posterior which is trained to match the fully observed posterior by adding an additional term to the loss function. The simplicity of this approach allows one to train a partially observed posterior for any VAE-based model and even to learn a partially observed posterior for an existing trained VAE. The authors demonstrate the effectiveness of their approach on a number of tasks, using MNIST to illustrate that the model matches their motivating intuitions and then using image inpainting, imputation of tabular data, and partially observed clustering to demonstrate the effectiveness of their method on tasks involving arbitrary conditioning. The method presented in this paper does not achieve top performance on all tasks, but is competitive with the SOTA and improves on other VAE-based methods. Other methods are less flexible, in the sense that they cannot be immediately integrated with a new VAE architecture, one clear appeal of the method presented here. Additionally, the authors show that their method can be used for greedy active feature acquisition, where a conditional distribution based on partially observed features is used to greedily select the most informative feature to add to the model. Here, the method is shown to be more accurate and faster than existing methods.

Strengths And Weaknesses
The paper is based on a simple but novel idea, which is clearly explained and validated using well thought-out experiments.

Questions
No questions to the authors.

Limitations
One limitation, which the authors remark on, is that this method requires fully observed training data. While there may be cases where it is advantageous to be able to train a model on partially observed data, there are many cases where some fully observed data will be available at training time.
Title Posterior Matching for Arbitrary Conditioning Abstract Arbitrary conditioning is an important problem in unsupervised learning, where we seek to model the conditional densities p(xu | xo) that underly some data, for all possible non-intersecting subsets o, u ⊂ {1, . . . , d}. However, the vast majority of density estimation only focuses on modeling the joint distribution p(x), in which important conditional dependencies between features are opaque. We propose a simple and general framework, coined Posterior Matching, that enables Variational Autoencoders (VAEs) to perform arbitrary conditioning, without modification to the VAE itself. Posterior Matching applies to the numerous existing VAE-based approaches to joint density estimation, thereby circumventing the specialized models required by previous approaches to arbitrary conditioning. We find that Posterior Matching is comparable or superior to current state-of-the-art methods for a variety of tasks with an assortment of VAEs (e.g. discrete, hierarchical, VaDE). 1 Introduction Variational Autoencoders (VAEs) [21] are a widely adopted class of generative model that have been successfully employed in numerous areas [4, 15, 26, 33, 16]. Much of their appeal stems from their ability to probabilistically represent complex data in terms of lower-dimensional latent codes. Like most other generative models, VAEs are typically designed to model the joint data distribution, which communicates likelihoods for particular configurations of all features at once. This can be useful for some tasks, such as generating images, but the joint distribution is limited by its inability to explicitly convey the conditional dependencies between features. In many cases, conditional distributions, which provide the likelihood of an event given some known information, are more relevant and useful. Conditionals can be obtained in theory by marginalizing the joint distribution, but in practice, this is generally not analytically available and is expensive to approximate. Easily assessing the conditional distribution over any subset of features is important for tasks where decisions and predictions must be made over a varied set of possible information. For example, some medical applications may require reasoning over: the distribution of blood pressure given age and weight; or the distribution of heart-rate and blood-oxygen level given age, blood pressure, and BMI; etc. For flexibility and scalability, it is desirable for a single model to provide all such conditionals at inference time. More formally, this task is known as arbitrary conditioning, where the goal is to model the conditional density p(xu | xo) for any arbitrary subsets of unobserved features xu and observed features xo. In this work, we show, by way of a simple and general framework, that traditional VAEs can perform arbitrary conditioning, without modification to the VAE model itself. Our approach, which we call Posterior Matching, is to model the distribution p(z | xo) that is induced by some VAE, where z is the latent code. In other words, we consider the distribution of latent codes given partially observed features. We do this by having a neural network output an approximate partially observed posterior q(z | xo). In order to train this network, we develop a straightforward maximum likelihood estimation objective and show that it is equivalent to maximizing p(xu | xo), 36th Conference on Neural Information Processing Systems (NeurIPS 2022). the quantity of interest. 
Unlike prior works that use VAEs for arbitrary conditioning, we do not make special assumptions or optimize custom variational lower bounds. Rather, training via Posterior Matching is simple, highly flexible, and without limiting assumptions on approximate posteriors (e.g., q(z | xo) need not be reparameterized and can thus be highly expressive). We conduct several experiments in which we apply Posterior Matching to various types of VAEs for a myriad of different tasks, including image inpainting, tabular arbitrary conditional density estimation, partially observed clustering, and active feature acquisition. We find that Posterior Matching leads to improvements over prior VAE-based methods across the range of tasks we consider. 2 Background Arbitrary Conditioning A core problem in unsupervised learning is density estimation, where we are given a dataset D = {x(i)}Ni=1 of i.i.d. samples drawn from an unknown distribution p(x) and wish to learn a model that best approximates the probability density function p. A limitation of only learning the joint distribution p(x) is that it does not provide direct access to the conditional dependencies between features. Arbitrary conditional density estimation [18, 24, 38] is a more general task where we want to estimate the conditional density p(xu | xo) for all possible subsets of observed features o ⊂ {1, . . . , d} and unobserved features u ⊂ {1, . . . , d} such that o and u do not intersect. Here, xo ∈ R|o| and xu ∈ R|u|. Estimation of joint or marginal likelihoods is a special case where o = ∅. Note that, while not strictly necessary for arbitrary conditioning methods [24, 38], we assume D is fully observed, a requirement for training traditional VAEs. Variational Autoencoders Variational Autoencoders (VAEs) [21] are a class of generative models that assume a generative process in which data likelihoods are represented as p(x) = ∫ p(x | z)p(z) dz, where z is a latent variable that typically has lower dimensionality than the data x. A tractable distribution that affords easy sampling and likelihood evaluation, such as a standard Gaussian, is usually imposed on the prior p(z). These models are learned by maximizing the evidence lower bound (ELBO) of the data likelihood: log p(x) ≥ Ez∼qψ(·|x)[log pϕ(x | z)]− KL(qψ(z | x) || p(z)), where qψ(z | x) and pϕ(x | z) are the encoder (or approximate posterior) and decoder of the VAE, respectively. The encoder and decoder are generally neural networks that output tractable distributions (e.g., a multivariate Gaussian). In order to properly optimize the ELBO, samples drawn from qψ(z | x) must be differentiable with respect to the parameters of the encoder (often called the reparameterization trick). After training, a new data point x̂ can be easily generated by first sampling z from the prior, then sampling x̂ ∼ pϕ(· | z). 3 Posterior Matching In this section we describe our framework, coined Posterior Matching, to model the underlying arbitrary conditionals in a VAE. In many respects, Posterior Matching cuts the Gordian knot to uncover the conditional dependencies. Following our insights, we show that our approach is direct and intuitive. Notwithstanding, we are the first to apply this direct methodology for arbitrary conditionals in VAEs and are the first to connect our proposed loss with arbitrary conditional likelihoods p(xu | xo). Note that we are not proposing a new type of VAE. Rather, we are formalizing a simple and intuitive methodology that can be applied to numerous existing (or future) VAEs. 
3.1 Motivation Let us begin with a motivating example, depicted in Figure 1. Suppose we have trained a VAE on images of handwritten 3s, 5s, and 8s. This VAE has thus learned to represent these images in a low-dimensional latent space. Any given code (vector) in this latent space represents a distribution over images in the original data space, which can be retrieved by passing that code through the VAE’s decoder. Some regions in the latent space will contain codes that represent 3s, some will represent 5s, and some will represent 8s. There is typically only an interest in mapping from a given image x to a distribution over the latent codes that could represent that image, i.e., the posterior q(z | x). However, we can just as easily ask which latent codes are feasible having only observed part of an image. For example, if we only see the right half the image shown in Figure 1, we know the digit could be a 3 or an 8, but certainly not a 5. Thus, the distribution over latent codes that could correspond to the full image, that is pψ(z | xo) (where ψ is the encoder’s parameters), should only include regions that represent 3s or 8s. Decoding any sample from pψ(z | xo) will produce an image of a 3 or an 8 that aligns with what has been observed. The important insight is that we can think about how conditioning on xo changes the distribution over latent codes without explicitly worrying about what the (potentially higher-dimensional and more complicated) conditional distribution over xu looks like. Once we know pψ(z | xo), we can easily move back to the original data space using the decoder. 3.2 Approximating the Partially Observed Posterior The partially observed approximate posterior of interest is not readily available, as it is implicitly defined by the VAE: pψ(z | xo) = Exu∼p(·|xo) [ qψ(z | xo,xu) ] , (1) where qψ(z | xo,xu) = qψ(z | x) is the VAE’s encoder. Thus, we introduce a neural network in order to approximate it. Given a network that outputs the distribution qθ(z | xo) (i.e. the partially observed encoder in Figure 2), we now discuss our approach to training it. Our approach is guided by the priorities of simplicity and generality. We minimize (with respect to θ) the following likelihoods, where the samples are coming from our target distribution as defined in Equation 1: Exu∼p(·|xo) [ Ez∼qψ(·|xo,xu)[− log qθ(z | xo)] ] . (2) We discuss how this is optimized in practice in Section 3.4. Due to the relationship between negative log-likelihood minimization and KL-divergence minimization [3], we can interpret Equation 2 as minimizing: Exu∼p(·|xo) [ KL ( qψ(z | xo,xu) || qθ(z | xo) ) ] . (3) We can directly minimize the KL-divergence in Equation 3 if it is analytically available between the two posteriors, for instance if both posteriors are Gaussians. However, Equation 2 is more general in that it allows us to use more expressive (e.g., autoregressive) distributions for qθ(z | xo) with which the KL-divergence cannot be directly computed. This is important given that pψ(z | xo) is likely to be complex (e.g., multimodal) and not easily captured by a Gaussian (as in Figure 1). Importantly, there is no requirement for qθ(z | xo) to be reparameterized, which would further limit the class of distributions that can be used. There is a high degree of flexibility in the choice of distribution for the partially observed posterior. Note that this objective does not utilize the decoder. 
3.3 Connection with Arbitrary Conditioning While the Posterior Matching objective from Equation 2 and Equation 3 is intuitive, it is not immediately clear how this approach relates back to the arbitrary conditioning objective of maximizing p(xu | xo). We formalize this connection in Theorem 3.1 (see Appendix for proof). Theorem 3.1. Let qψ(z | x) and pϕ(x | z) be the encoder and decoder, respectively, for some VAE. Additionally, let qθ(z | xo) be an approximate partially observed posterior. Then minimizing Exu∼p(·|xo) [ KL ( qψ(z | xo,xu) || qθ(z | xo) )] is equivalent to minimizing Exu∼p(·|xo) [ − log pθ,ϕ(xu | xo) + KL ( qψ(z | xo,xu) || qθ(z | xo,xu) ) ] , (4) with respect to the parameters θ. The first term inside the expectation in Equation 4 gives us the explicit connection back to the arbitrary conditioning likelihood p(xu | xo), which is being maximized when minimizing Equation 4. The second term acts as a sort of regularizer by trying to make the partially observed posterior match the VAE posterior when conditioned on all of x — intuitively, this makes sense as a desirable outcome. 3.4 Implementation A practical training loss follows quickly from Equation 2. For the outer expectation, we do not have access to the true distribution p(xu | xo), but for a given instance x that has been partitioned into xo and xu, we do have one sample from this distribution, namely xu. So we approximate this expectation using xu as a single sample. This type of single-sample approximation is common with VAEs, e.g., when estimating the ELBO. For the inner expectation, we have access to qψ(z | x), which can easily be sampled in order to estimate the expectation. In practice, we generally use a single sample for this as well. This gives us the following Posterior Matching loss: LPM(x, o, θ, ψ) = −Ez∼qψ(·|x) [ log qθ(z | xo) ] , (5) where o is the set of observed feature indices. During training, o can be randomly sampled from a problem-specific distribution for each minibatch. Figure 2 provides a visual overview of our approach. In practice, we represent xo as a concatenation of x that has had unobserved features set to zero and a bitmask b that indicates which features are observed. This representation has been successful in other arbitrary conditioning models [24, 38]. However, this choice is not particularly important to Posterior Matching itself, and alternative representations, such as set embeddings, are valid as well. As required by VAEs, samples from qψ(z | x) will be reparameterized, which means that minimizing LPM will influence the parameters of the VAE’s encoder in addition to the partially observed posterior network. In some cases, this may be advantageous, as the encoder can be guided towards learning a latent representation that is more conducive to arbitrary conditioning. However, it might also be desirable to train the VAE independently of the partially observed posterior, in which case we can choose to stop gradients on the samples z ∼ qψ(· | x) when computing LPM. Similarly, the partially observed posterior can be trained against an existing pretrained VAE. In this case, the parameters of the VAE’s encoder and decoder are frozen, and we only optimize LPM with respect to θ. Otherwise, we jointly optimize the VAE’s ELBO and LPM. We emphasize that there is a high degree of flexibility with the choice of VAE, i.e. we have not imposed any unusual constraints. However, there are some potentially limiting practical considerations that have not been explicitly mentioned yet. 
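Before turning to those practical considerations, the following is a minimal sketch of one evaluation of LPM in Equation 5, using the zero-fill-plus-bitmask input representation described above and assuming diagonal-Gaussian posteriors. The full_encode and partial_encode callables are placeholders rather than the paper's implementation.

```python
import numpy as np

def posterior_matching_loss(x, observed_mask, full_encode, partial_encode, rng):
    """One-sample estimate of L_PM (Eq. 5) with diagonal-Gaussian posteriors.

    full_encode(x)      -> (mu, log_var) of q_psi(z | x)
    partial_encode(inp) -> (mu, log_var) of q_theta(z | x_o), where inp is the
                           zero-filled observation concatenated with the bitmask b.
    """
    mu, log_var = full_encode(x)
    z = mu + np.exp(0.5 * log_var) * rng.standard_normal(mu.shape)  # z ~ q_psi(.|x)
    inp = np.concatenate([x * observed_mask, observed_mask])        # [x_o ; b]
    mu_o, log_var_o = partial_encode(inp)
    # -log q_theta(z | x_o) for a diagonal Gaussian
    return 0.5 * np.sum(log_var_o + np.log(2.0 * np.pi)
                        + (z - mu_o) ** 2 / np.exp(log_var_o))
```

In an autodiff framework one would optionally stop gradients on z, so that the partially observed posterior can be trained against a frozen, pretrained VAE as discussed above.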
First, the training data must be fully observed, as with traditional VAEs, since LPM requires sampling qψ(z | x). However, given that the base VAE requires fully observed training data anyway, this is generally not a relevant limitation for our purposes. Second, it is convenient in practice for the VAE’s decoder to be factorized, i.e. p(x | z) = ∏ i p(xi | z), as this allows us to easily sample from p(xu | z) (sampling xu is less straightforward with other types of decoders). However, it is standard practice to use factorized decoders with VAEs, so this is ordinarily not a concern. We also note that, while useful for easy sampling, a factorized decoder is not necessary for optimizing the Posterior Matching loss, which does not incorporate the decoder. 3.5 Posterior Matching Beyond Arbitrary Conditioning The concept of matching VAE posteriors is quite general and has other uses beyond the application of arbitrary conditioning. We consider one such example, which still has ties to arbitrary conditioning, in order to give a flavor for other potential uses. A common application of arbitrary conditioning is active feature acquisition [13, 23, 25], where informative features are sequentially acquired on an instance-by-instance basis. In the unsupervised case, the aim is to acquire as few features as possible while maximizing the ability to re- construct the remaining unobserved features (see Figure 3 for example). One approach to active feature acquisition is to greedily select the feature that will maximize the expected amount of information to be gained about the currently unobserved features [23, 25]. For VAEs, Ma et al. [25] show that this is equivalent to selecting each feature according to argmax i∈u H(z | xo)− Exi∼p(·|xo) [ H(z | xo, xi) ] = argmin i∈u Exi∼p(·|xo) [ H(z | xo, xi) ] . (6) For certain families of posteriors, such as multivariate Gaussians, the entropies in Equation 6 can be analytically computed. In practice, approximating the expectation in Equation 6 is done via entropies of the posteriors p(i)(z | xo) ≡ Exi∼pθ,ϕ(·|xo) [ qθ(z | xo, xi) ] , where samples from pθ,ϕ(xi | xo) are produced by first sampling z ∼ qθ(· | xo) and then passing z through the VAE’s decoder pϕ(xi | z) (we call p(i)(z | xo) the “lookahead” posterior for feature i, since it is obtained by imagining what the posterior will look like after one acquisition into the future). Hence, computing the resulting entropies requires one network evaluation per sample of xi to encode z, for i ∈ u. Thus, if using k samples for each xi, each greedy step will be Ω(k · |u|), which may be prohibitive in high dimensions. In analogous fashion to the Posterior Matching approach that has already been discussed, we can train a neural network to directly output the lookahead posteriors for all features at once. The Posterior Matching loss in this case is LPM-Lookahead(x, o, u, ω, θ, ϕ) = ∑ i∈u Exi∼pθ,ϕ(·|xo) [ Ez∼qθ(·|xo,xi) [ − log q(i)ω (z | xo) ]] , (7) where ω is the parameters of the lookahead posterior network. In practice, we train a single shared network with a final output layer that outputs the parameters of all q(i)ω (z | xo). Note that given the distributions q(i)ω (z | xo) for all i, computing the greedy acquisition choice consists of doing a forward evaluation of our network, then choosing the feature i ∈ u such that the entropy of q(i)ω (z | xo) is minimized. In other words, we may bypass the individual samples of xi, and use a single shared network for a faster acquisition step. 
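To make the two acquisition routes concrete, here is a small sketch assuming diagonal-Gaussian posteriors, so that the entropies in Equation 6 are analytic; sample_xi, partial_encode_with, and lookahead_log_vars are hypothetical stand-ins for decoding imagined feature values, re-encoding them, and the single shared lookahead network, respectively.

```python
import numpy as np

def diag_gaussian_entropy(log_var):
    d = log_var.size
    return 0.5 * (d * np.log(2.0 * np.pi * np.e) + np.sum(log_var))

def greedy_by_sampling(candidates, sample_xi, partial_encode_with, n_samples, rng):
    # Expensive route: approximate Eq. 6 with n_samples imagined values per feature.
    scores = {}
    for i in candidates:
        ents = []
        for _ in range(n_samples):
            xi = sample_xi(i, rng)                   # x_i ~ p(x_i | x_o), via the decoder
            _, log_var = partial_encode_with(i, xi)  # q_theta(z | x_o, x_i)
            ents.append(diag_gaussian_entropy(log_var))
        scores[i] = np.mean(ents)
    return min(scores, key=scores.get)

def greedy_by_lookahead(candidates, lookahead_log_vars):
    # Fast route: one forward pass yields q_omega^(i)(z | x_o) for every candidate i.
    ents = {i: diag_gaussian_entropy(lookahead_log_vars[i]) for i in candidates}
    return min(ents, key=ents.get)
```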
In this setting, we let q(i)ω (z | xo) be a multivariate Gaussian so that the entropy computation is trivial. See Appendix for a diagram of the entire process. This use of Posterior Matching leads to large improvements in the computational efficiency of greedy active feature acquisition (demonstrated empirically in Section 5.5). 4 Prior Work A variety of approaches to arbitrary conditioning have been previously proposed. ACE is an autoregressive, energy-based method that is the current state-of-the-art for arbitrary conditional likelihood estimation and imputation, although it can be computationally intensive for very high dimensional data [38]. ACFlow is a variant of normalizing flows that can give analytical arbitrary conditional likelihoods [24]. Several other methods, including Sum-Product Networks [32, 6], Neural Conditioner [2], and Universal Marginalizer [10], also have the ability to estimate conditional likelihoods. Rezende et al. [34] were among the first to suggest that VAEs can be used for imputation. More recently, VAEAC was proposed as a VAE variant designed for arbitrary conditioning [18]. Unlike Posterior Matching, VAEAC is not a general framework and cannot be used with typical pretrained VAEs. EDDI is a VAE-based approach to active feature acquisition and relies on arbitrary conditioning [25]. The authors introduce a “Partial VAE” in order to perform the arbitrary conditioning, which, similarly to Posterior Matching, tries to model p(z | xo). Unlike Posterior Matching, they do this by maximizing a variational lower bound on p(xo) using a partial inference network q(z | xo) (there is no standard VAE posterior q(z | x) in EDDI). Gong et al. [13] use a similar approach that is based on the Partial VAE of EDDI. The major drawback of these methods is that, unlike with Posterior Matching, q(z | xo) must be reparameterizable in order to optimize the lower bound (the authors use a diagonal Gaussian). Thus, certain more expressive distributions (e.g., autoregressive) cannot be used. Additionally, these methods cannot be applied to existing VAEs. The methods of Ipsen et al. [17] and Collier et al. [9] are also similar to EDDI, where the former optimizes an approximation of p(xo,b) and the latter optimizes a lower bound on p(xo | b). Ipsen et al. [17] also focuses on imputation for data that is missing “not at random”, a setting that is outside the focus of our work. There are also several works that have considered learning to identify desirable regions in latent spaces. Engel et al. [11] start from a pretrained VAE, but then train a separate GAN [14] with special regularizers to do their conditioning. They only condition on binary vectors, y, that correspond to a small number of predefined attributes, whereas we allow for conditioning on arbitrary subsets of continuous features xo (a more complicated conditioning space). Also, their resulting GAN does not make the likelihood q(z | y) available, whereas Posterior Matching directly (and flexibly) models q(z | xo), which may be useful for downstream tasks (e.g. Section 5.5) and likelihood evaluation (see Appendix). Furthermore, Posterior Matching trains directly through KL, without requiring an additional critic. Whang et al. [40] learn conditional distributions, but not arbitrary conditional distributions (a much harder problem). They also consider normalizing flow models, which are limited to invertible architectures with tractable Jacobian determinants and latent spaces that have the same dimensionality as the data (unlike VAEs). 
Cannella et al. [7] similarly do conditional sampling from a model of the joint distribution, but are also restricted to normalizing flow architectures and require a more expensive MCMC procedure for sampling. 5 Experiments In order to empirically test Posterior Matching, we apply it to a variety of VAEs aimed at different tasks. We find that our models are able to match or surpass the performance of previous specialized VAE methods. All experiments were conducted using JAX [5] and the DeepMind JAX Ecosystem [1]. Code is available at https://github.com/lupalab/posterior-matching. Our results are dependent on the choice of VAE, and the particular VAEs used in our experiments were not the product of extensive comparisons and did not undergo thorough hyperparameter tuning — that is not the focus of this work. With more carefully selected or tuned VAEs, and as new VAEs continue to be developed, we can expect Posterior Matching’s downstream performance to improve accordingly on any given task. We emphasize that our experiments span a diverse set of task, domains, and types of VAE, wherein Posterior Matching was effective. 5.1 MNIST In this first experiment, our goal is to demonstrate that Posterior Matching replicates the intuition depicted in Figure 1. We do this by training a convolutional VAE with Posterior Matching on the MNIST dataset. The latent space of this VAE is then mapped to two dimensions with UMAP [27] and visualized in Figure 4. In the figure, black points represent samples from qθ(z | xo), and for select samples, the corresponding reconstruction is shown. The encoded test data is shown, colored by true class label, to highlight which regions correspond to which digits. We see that the experimental results nicely replicate our earlier intuitions — the learned distribution qθ(z | xo) puts probability mass only in parts of the latent space that correspond to plausible digits based on what is observed and successfully captures multimodal distributions (see the second column in Figure 4). 5.2 Image Inpainting One practical application of arbitrary conditioning is image inpainting, where only part of an image is observed and we want to fill in the missing pixels with visually coherent imputations. As with prior works [18, 24], we assume pixels are missing completely at random. We test Posterior Matching as an approach to this task by pairing it with both discrete and hierarchical VAEs. Vector Quantized-VAEs We first consider VQ-VAE [30], a type of VAE that is known to work well with images. VQ-VAE differs from the typical VAE with its use of a discrete latent space. That is, each latent code is a grid of discrete indices rather than a vector of continuous values. Because the latent space is discrete, Oord et al. [30] model the prior distribution with a PixelCNN [29, 36] after training the VQ-VAE. We similarly use a conditional PixelCNN to model qθ(z | xo). First, a convolutional network maps xo to a vector, and that vector is then used as a conditioning input to the PixelCNN. More architecture and training details can be found in the Appendix. We train VQ-VAEs with Posterior Matching for the MNIST, OMNIGLOT, and CELEBA datasets. Table 1 reports peak signal-to-noise ratio (PSNR) and precision/recall [35] for inpaintings produced by our model. We find that Posterior Matching with VQ-VAE consistently achieves better precision/recall scores than previous models while having comparable PSNR. 
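Regardless of the backbone, inpainting at test time follows the same recipe: sample a latent code from the partially observed posterior, decode it, and keep the observed pixels. The sketch below is schematic rather than the released implementation; sample_partial_posterior and decode_mean are hypothetical callables standing in for qθ(z | xo) and the VAE decoder.

```python
import numpy as np

def inpaint(image, observed_mask, sample_partial_posterior, decode_mean, rng,
            n_samples=5):
    """Return n_samples completions that agree with the observed pixels."""
    x_obs = image * observed_mask
    completions = []
    for _ in range(n_samples):
        z = sample_partial_posterior(x_obs, observed_mask, rng)  # z ~ q_theta(.|x_o)
        x_hat = decode_mean(z)                                   # mean of p(x | z)
        # keep observed pixels, fill in only the unobserved ones
        completions.append(observed_mask * image + (1 - observed_mask) * x_hat)
    return np.stack(completions)
```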
Hierarchical VAEs Hierarchical VAEs [22, 37, 39] are a powerful extension of traditional VAEs that allow for more expressive priors and posteriors by partitioning the latent variables into subsets z = {z1, . . . , zL}. A hierarchy is then created by factorizing the prior p(z) = ∏i p(zi | z<i) and posterior q(z | x) = ∏i q(zi | z<i, x). These models have demonstrated impressive performance on images and can even outperform autoregressive models [8]. Posterior Matching can be naturally applied to hierarchical VAEs, where the partially observed posterior is represented as q(z | xo) = ∏i q(zi | z<i, xo). We adopt the Very Deep VAE (VDVAE) architecture used by Child [8] and extend it to include the partially observed posterior (see Appendix for training and architecture details). We note that due to our hardware constraints, we trained smaller models and for fewer iterations than Child [8]. Inpainting results for our VDVAE models are given in Table 1. We see that they achieve better precision/recall scores than the VQ-VAE models and, unlike VQ-VAE, are able to attain better PSNR than ACFlow for MNIST and CELEBA. Figure 5 shows some example inpaintings, and additional samples are provided in the Appendix. The fact that we see better downstream performance when using VDVAE than when using VQ-VAE is illustrative of Posterior Matching's ability to admit easy performance gains by simply switching to a more powerful base VAE.

[Displaced table rows with per-method results for Posterior Matching, VAEAC, ACE, ACE Proposal, ACFlow, and ACFlow+BG appeared here; see the comparison discussed in Section 5.3.]

5.3 Real-valued Datasets We evaluate Posterior Matching on real-valued tabular data, specifically the benchmark UCI repository datasets from Papamakarios et al. [31]. We follow the experimental setup used by Li et al. [24] and Strauss and Oliva [38]. In these experiments, we train basic VAE models while simultaneously learning the partially observed posterior. Given the flexibility that Posterior Matching affords, we use an autoregressive distribution for qθ(z | xo). Further details can be found in the Appendix. Table 2 reports the arbitrary conditional log-likelihoods and normalized root-mean-square error (NRMSE) of imputations for our models (with features missing completely at random). Likelihoods are computed using an importance sampling estimate (see Appendix for details). We primarily compare to VAEAC as a baseline in the VAE family; however, we also provide results for ACE and ACFlow for reference. We see that Posterior Matching is able to consistently produce more accurate imputations and higher likelihoods than VAEAC. While our models don't match the likelihoods achieved by ACE and ACFlow, Posterior Matching is comparable to them for imputation NRMSE.
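The paper's appendix specifies the importance-sampling estimator used for these likelihoods. Purely as an illustration of the idea, the sketch below forms a simple Monte Carlo estimate of log p(xu | xo) by averaging the factorized decoder's likelihood of xu over latent samples drawn from qθ(z | xo); treating qθ as the sampling distribution in this way is an assumption of the sketch, not a restatement of the exact estimator.

```python
import numpy as np

def log_p_unobserved_given_observed(x_obs, mask, sample_partial_posterior,
                                    decoder_log_prob_unobserved, rng, n_samples=100):
    """log p(x_u | x_o) ~= log (1/K) * sum_k p(x_u | z_k), with z_k ~ q_theta(z | x_o)."""
    log_ps = np.empty(n_samples)
    for k in range(n_samples):
        z = sample_partial_posterior(x_obs, mask, rng)
        # a factorized decoder makes log p(x_u | z) a sum over the unobserved dims
        log_ps[k] = decoder_log_prob_unobserved(z, mask)
    # numerically stable log-mean-exp
    m = log_ps.max()
    return m + np.log(np.mean(np.exp(log_ps - m)))
```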
5.4 Partially Observed Clustering Probabilistic clustering often views cluster assignments as a latent variable. Thus, when applying Posterior Matching in this setting, we may perform “partially observed” clustering, which clusters instances based on a subset of observed features. We consider VaDE, which uses a mixture of Gaussians as the prior, allowing it to do unsupervised clustering by treating each Gaussian component as one of the clusters [19]. Despite differences in how VaDE is trained compared to a classic VAE, training a partially observed encoder via Posterior Matching remains exactly the same. We train models on both MNIST and FASHION MNIST (see Appendix for experimental details). Figure 6 shows the clustering accuracy of these models as the percentage of (randomly selected) observed features changes. As a baseline, we train a supervised model where the labels are the cluster predictions from the VaDE model when all of the features are observed. We see that Posterior Matching is able to match the performance of the baseline, and even slightly outperform it for low percentages of observed features. Unlike the supervised approach, Posterior Matching has the advantage of being generative. 5.5 Very Fast Greedy Feature Acquisition As discussed in Section 3.5, we can use Posterior Matching outside of the specific task of arbitrary conditioning. Here, we consider the problem of greedy active feature acquisition. We train a VAE with a Posterior Matching network that outputs the lookahead posteriors described in Section 3.5, using the loss in Equation 7. Note that we are also still using Posterior Matching in order to learn qθ(z | xo) and therefore to produce reconstructions. Training details can be found in the Appendix. We consider the MNIST dataset and compare to EDDI as a baseline, using the authors’ publicly available code. We downscale images to 16 × 16 since EDDI has difficulty scaling to highdimensional data. We also only evaluate on the first 1000 instances of the MNIST test set, as the EDDI code was very slow when computing the greedy acquisition policy. EDDI also uses a particular architecture that is not compatible with convolutions. Thus we train a MLP-based VAE on flattened images in order to make a fair comparison. However, since Posterior Matching does not place any limitations on the type of VAE being used, we also train a convolutional version. For our models, we greedily select the feature to acquire using the more expensive sampling-based approach (similar to EDDI) as well as with the lookahead posteriors (which requires no sampling). In both cases, imputations are computed with an expectation over 50 latent codes, as is done for EDDI. An example acquisition trajectory is shown in Figure 3. Figure 7 presents the root-mean-square error, averaged across the test instances, when imputing xu with different numbers of acquired features. We see that our models are able to achieve lower error than EDDI. We also see that acquiring based on the lookahead posteriors incurs only a minimal increase in error compared to the sampling-based method, despite being far more efficient. Computing the greedy choice with our model using the sampling-based approach takes 68 ms ± 917 µs (for a single acquisition on CPU). Using the lookahead posteriors, the time is only 310 µs ± 15.3 µs, a roughly 219x speedup. 6 Conclusions We have presented an elegant and general framework, called Posterior Matching, that allows VAEs to perform arbitrary conditioning. 
That is, we can take an existing VAE that only models the joint distribution p(x) and train an additional model that, when combined with the VAE, is able to assess any likelihood p(xu | xo) for arbitrary subsets of unobserved features xu and observed features xo. We applied this approach to a variety of VAEs for a multitude of different tasks. We found that Posterior Matching outperforms previous specialized VAEs for arbitrary conditioning with tabular data and for image inpainting. Importantly, we find that one can switch to a more powerful base VAE and get immediate improvements in downstream arbitrary conditioning performance “for free,” without making changes to Posterior Matching itself. We can also use Posterior Matching to perform clustering based on partially observed inputs and to improve the efficiency of greedy active feature acquisition by several orders of magnitude at negligible cost to performance. With this work, we hope to make arbitrary conditioning more widely accessible. Arbitrary conditioning no longer requires specialized methods, but can instead be achieved by applying one general framework to common VAEs. As advances are made in VAEs for joint density estimation, we can expect to immediately reap the rewards for arbitrary conditioning. Acknowledgments and Disclosure of Funding We would like to thank Google’s TPU Research Cloud program for providing free access to TPUs. This research was partly funded by NSF grant IIS2133595 and by NIH 1R01AA02687901A1.
1. What is the main contribution of the paper regarding arbitrary conditioning with VAEs?
2. How does the proposed method differ from other approaches that allow learning VAEs from partially observed data?
3. Can you explain why the authors assume fully observed datasets in the background section?
4. How does the generative model in Equation 1 differ from the exact expression for p(z|x_o)?
5. Can you clarify what parameters are being optimized in Equation 3?
6. How does the proposed model part compare to the models proposed in Collier et al. (2020) and Ipsen et al. (2021)?
7. Why did the authors choose to evaluate their method on various downstream tasks using different evaluation metrics instead of focusing on conditional log-likelihood evaluations?
8. Would it be possible to include ablation studies comparing the proposed method with other recent methods that explicitly model missing features?
Summary Of The Paper
The authors propose a model and learning method, called Posterior Matching, to perform arbitrary conditioning with VAE models. The proposed method can be applied to a variety of VAE models without modifications to the VAE model itself, and can be used with a pre-trained VAE or trained alongside the actual VAE model.

Strengths And Weaknesses
Originality: Previous work has proposed deep generative models for arbitrary conditioning using e.g. autoregressive and flow-based models. VAE-based methods have also been proposed for arbitrary conditioning. Regarding originality and novelty, this work seems to extend previous VAE methods by allowing more expressive posteriors as well as by allowing the proposed method to be applied to a pre-trained VAE model. The authors do not contrast their method with recent methods that allow learning VAEs from partially observed data (see detailed comments below).
Quality & Clarity: Quality and clarity appear generally good. I have comments regarding the clarity of the experiments (see below).
Significance: Significance of the work is very high. Arbitrary conditioning with (more classical) statistical models is of central importance, and the same applies to arbitrary conditioning with deep generative models. Consequently, this work is well motivated and has important applications.

Questions
The list below contains both minor and major comments.
In the Background section where you introduce the notation x_u and x_o, it would be very helpful to specifically mention that you assume fully observed datasets. A reader may otherwise easily be misled into thinking that you are dealing with partially observed data.
Equation 1 in Section 3.2: From the point of view of the generative model, Eq. 1 is not exact but rather already an approximation of p(z|x_o) = p(x_o|z)p(z)/p(x_o), because your expression involves the variational approximation q(z|x).
Equation 2: it would be useful to more clearly define q_{\theta}, e.g. by referring to Fig. 2 already at this stage.
Equation 3: state already here which parameters you are optimizing.
At a general level, as summarised in Fig. 2, the proposed model contains two branches: the "original" VAE and the "proposed model part" that includes the partially observed encoder and provides the posterior matching loss. The amortized variational approximation of the "proposed model part" takes as input the "observed" variables x_o (used for conditioning) as well as a binary vector b that indicates which variables are missing/non-missing. This proposed model part seems very similar to the models proposed to learn VAEs from partially observed data in (Collier et al., 2020; Ipsen et al., 2021). These methods can learn approximations of the posteriors p(z|x_o,b) and can thereby be used to infer p(x_u|x_o). Please contrast your model with these two, e.g. in Section 4, and include them in ablation studies (see below).
REFS:
Mark Collier, Alfredo Nazabal, Christopher K.I. Williams. VAEs in the Presence of Missing Data. arXiv, 13 July 2020. Presented at the first Workshop on the Art of Learning with Missing Values (Artemiss) hosted by the 37th International Conference on Machine Learning (ICML 2020).
NB Ipsen, PA Mattei, J Frellsen. not-MIWAE: Deep Generative Modelling with Missing not at Random Data. International Conference on Learning Representations, 2021.
Experiments and test metrics: The general goal of the proposed method is to perform arbitrary conditioning p(x_u|x_o). I would have liked to see much more extensive experiments with a variety of datasets that quantify performance directly using that metric, instead of devoting a large number of experiments to various other downstream tasks, where the contribution of the proposed model is less directly quantifiable. The current manuscript evaluates conditional log-likelihoods only in one experiment (Table 2).
Related to the comment on conditional log-likelihood evaluations: the authors evaluate their method on various downstream tasks using a variety of evaluation metrics (peak SNR, precision, recall, clustering accuracy). While the metrics themselves are well-known, the way they are computed and applied here is not explained in detail, which leaves it unclear what the numbers exactly tell us about the proposed method itself.
Ablation studies: I would have liked to see ablation studies. For example, it is common to use the standard VAE or a VAE with a Gaussian mixture prior, impute the unobserved (missing) features with zeros x_u = [0 0 ... 0], approximate the posterior directly using q(z|x_o,x_u), and then estimate p(x_u|x_o). More recent methods explicitly model the missing features, such as the methods by (Collier et al., 2020) and (Ipsen et al., 2021) (see comment above). A comprehensive ablation study including e.g. these methods would provide more insight into the proposed model.
Overall, this is an interesting study that addresses an important problem. The proposed method is intuitive and statistically well-motivated, and the results appear promising. Regarding the novelty, I would like to see the proposed method compared with recent works that propose closely related methods. An ablation study and more straightforward, direct comparisons on arbitrary conditioning would strengthen the manuscript.

Limitations
Limitations are adequately addressed.
NIPS
Title Training Spiking Neural Networks with Event-driven Backpropagation Abstract Spiking Neural networks (SNNs) represent and transmit information by spatiotemporal spike patterns, which bring two major advantages: biological plausibility and suitability for ultralow-power neuromorphic implementation. Despite this, the binary firing characteristic makes training SNNs more challenging. To learn the parameters of deep SNNs in an event-driven fashion as in inference of SNNs, backpropagation with respect to spike timing is proposed. Although this event-driven learning has the advantages of lower computational cost and memory occupation, the accuracy is far below the recurrent neural network-like learning approaches. In this paper, we first analyze the commonly used temporal backpropagation training approach and prove that the sum of gradients remains unchanged between fully-connected and convolutional layers. Secondly, we show that the max pooling layer meets the above invariance rule, while the average pooling layer does not, which will suffer the gradient vanishing problem but can be revised to meet the requirement. Thirdly, we point out the reverse gradient problem for time-based gradients and propose a backward kernel that can solve this problem and keep the property of the invariable sum of gradients. The experimental results show that the proposed approach achieves state-of-the-art performance on CIFAR10 among time-based training methods. Also, this is the first time that the time-based backpropagation approach successfully trains SNN on the CIFAR100 dataset. Our code is available at https://github.com/zhuyaoyu/SNN-event-driven-learning. 1 Introduction Motivated by the principles of brain computing, Spiking Neural Networks (SNNs) are considered as the third generation of neural networks [1, 2]. SNNs are developed to work in power-critical scenarios, such as edge computing. When run on dedicated neuromorphic chips, they can accomplish the tasks [3, 4, 5] with ultra-low power consumption [6, 7, 8, 9, 10, 11, 12]. In contrast, the last generation of neural networks – Artificial Neural Networks (ANNs) [13], generally require a large amount of computation resource (e.g., GPUs). This advantage of SNNs on power consumption largely relies on efficient event-based computations [14, 15]. Another advantage of SNNs originates from their biological reality (compared to ANNs). The similarity between SNNs and biological brains provides an excellent opportunity to study how the brain computes at the neuronal circuit level [16]. Compared with artificial neural networks, developing supervised learning algorithms for spiking neural networks requires more effort. The main challenge for training SNNs comes from the binary nature of spikes and the non-differentiability of the membrane potential at spike time. This difficulty ∗Corresponding author 36th Conference on Neural Information Processing Systems (NeurIPS 2022). in training impedes the performance of SNNs in pattern classification tasks compared to their ANN counterparts. Existing supervised learning methods of SNNs can be grouped into two categories: The first category consists of recurrent neural network (RNN)-like learning algorithms. These algorithms treat spiking neural networks as binary-output recurrent neural networks and handle the discontinuities of membrane potential at spike times with continuous surrogate derivatives [17]. 
They typically train deep SNNs with surrogate gradients based on the idea of backpropagation through time (BPTT) algorithm [18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28]. While competitive accuracies are reported on the MNIST, CIFAR-10, and even ImageNet datasets [29, 30, 31], the gradient information is propagated each time step, whether or not a spike is emitted (as shown in Fig. 1). Therefore, these approaches do not follow the event-driven nature of spiking neural networks, which lose the asynchronous characteristic of SNNs and consume much power when trained on neuromorphic hardware. The second category is event-driven algorithms, which propagate gradient information through spikes. Precise spiking timing acts an important role in this situation, and they are extensively used in such algorithms [32, 33, 34, 35, 36, 37, 38, 39]. Classical examples include SpikeProp [32] and its variants [33, 40, 41]. These algorithms approximate the derivative of spike timing to membrane potential as the negative inverse of the time derivative of membrane potential function. This approximation is actually mathematically correct without preconditions [42]. Some other works apply non-leaky integrate-and-fire neurons to stabilize the training process [35, 38, 43]. Most of these works restrict each neuron to fire at most once, which inspires [44] to take the spike time as the state of a neuron, and model the relation of neurons by this spike time. As a result, the SNN is trained similarly to an ANN. Among the methods trained in an event-driven fashion (not modelling the relation of spike time to train like ANNs), the state-of-the-art model is TSSL-BP [39]. However, they use RNN-like surrogate gradients (a sigmoid function) to assist training. Hence, it is still challenging to train SNNs in a pure event-driven fashion. In this work, we develop a novel event-driven learning algorithm that can train high-performance deep SNNs. The main contributions of our work are as follows: 1. We prove that the typical SNN temporal backpropagation training approach assigns the gradient of an output spike of a neuron to the input spikes generating it. After summing this assignment rule altogether, we find that the sum of gradients is unchanged between layers. 2. We analyze the case of the pooling layer (which does not have neurons) and find that average pooling does not keep the gradient sum unchanged, but we can modify its backward formulas to meet the requirement. Meanwhile, the max-pooling layer satisfies the rule initially. 3. We point out the reverse gradient problem in event-driven learning that the direction of the temporal gradient is reversed during backpropagation when the kernel function of an input spike is decreasing. Then we propose a backward kernel function that addresses this problem while keeping the sum of gradients unchanged between layers. 4. The adjusted average pooling layer and the non-decreasing backward kernel enhances the performance of our model as well as the convergence speed. To our best knowledge, our proposed approach achieves state-of-the-art performance on CIFAR10 among event-driven training methods (with temporal gradients) for SNNs. Meanwhile, our method is the first event-driven backpropagation approach that successfully trains SNN on the larger-scale CIFAR100 dataset. 2 Backgrounds and Related Work The gradient-based learning of spiking neural networks contains two stages: the forward (inference) and the backward (learning) stages. 
In the forward stage, Leaky Integrate-and-Fire (LIF) neurons are most commonly used [18, 21, 26, 39], while other types of neurons are also applicable [32, 35]. Typically, these neuron models can be changed to the form of the Spike Response Model (SRM) [37, 45, 46], which is easily represented in an event-driven fashion. In the backward stage, the methods used by existing works exhibits more diversity. Here, we classify existing approaches from two dimensions: whether non-spike information is needed in discrete time steps (RNN-like) or not (event-driven) and whether the gradient represents spike scale (activation-based) or spike timing (time-based). Event-driven learning v.s. RNN-like learning: In both forward and backward computation of event-driven learning, information is only carried by spikes in SNNs. Specifically, in backward computation, gradient information is propagated through spikes [32, 33, 41, 35] (shown in Fig. 1a-b). On the other side, in RNN-like learning, information is not only carried by spikes in backward computation. Especially, gradient information can be propagated through a neuron that does not emit a spike in backward computation (shown in Fig. 1c-d). This gradient propagation is achieved by a surrogate function [12, 17, 18, 23, 47], which is a function of the membrane potential at the current time step ut, and the firing threshold θ. Time-based gradient v.s. activation-based gradient: Time-based gradients represent the (reverse) direction that the timing of a spike should move, that is, to move leftward or rightward on the time axis [32]. In backward propagation, the derivative of the firing time of a spike to the corresponding membrane potential ∂t∂u is often approximated as −1 ∂u ∂t [32, 33], denoting how the change of membrane potential will change the spike firing time (Fig. 1b). On the other side, activation-based approaches replace the Heaviside neuron activation function Θ(·) (spike st = Θ(ut − θ)) in forward propagation with derivable functions σ(·) in backward propagation, whether there are spikes in the current time step [18, 26, 31, 21]. Therefore, activation-based approaches essentially regard SNNs as binary RNNs and train them with approximated gradients, where the gradients indicate whether the values in the network (including the binary spikes) should be larger or smaller (Fig. 1d). As a result, time-based gradients are event-driven by nature, since the temporal gradient could only be carried by spikes. Meanwhile, activation-based gradients are more suitable for the RNN-like training scheme since the diversity of surrogate gradients largely relies on the fact that ut ̸= θ in discrete time steps [17], which no longer holds in continuous time simulation. If we want to apply activation-based gradients to event-driven learning, there should only be one value ∂s∂u when the membrane potential reaches the threshold. Tab. 1 lists whether a gradient type can be used in a learning fashion. It should be noticed that although activation-based gradient is more suitable for RNN-like learning, it is still able to be used for event-driven learning. 3 Methods 3.1 Forward Formulas We use the spike response model [1] for neurons in the network. The forward propagation in the network can be described as follows: u (l) i (t) = ∫ t t (l) i,last ∑ j w (l) ij · s (l−1) j (τ) · ϵ(t− τ)dτ, (1) s (l) i (t) = δ(u (l) i (t)− θ). 
(2) Here u(l)i (t) denotes the membrane potential of neuron i in layer l at time t, w (l) ij denotes the weight between neuron j in layer l − 1 and neuron i in layer l. t(l)i,last is the time of last spike of neuron i in layer l, and s(l)i (t) represents the spike emitted from neuron i at time t. The function δ(·) is the Dirac Delta function and θ is the firing threshold. The spike response kernel ϵ(t) can be described by ϵ(t) = τm τm − τs (e− t τm − e− t τs ), (3) where τm and τs are the membrane time constant and the synapse time constant respectively. Notice that we do not use reset kernels as in previous works [39, 41]. Instead, we eliminate the influence of input spikes prior to the last output spike on membrane potentials. 3.2 Rethinking the Classical Time-based Backward Propagation Formula In this subsection, we analyze the classical time-based backpropagation formula in SNNs. We first theoretically prove that the backpropagation rule essentially assigns gradients of output spikes of neurons to their input spikes. Then we check the pooling layer and show that the average pooling should be adjusted in backpropagation to satisfy the gradient assignment mechanism, while the max pooling naturally satisfies this mechanism. Invariant sum of gradients among layers with weights. The most commonly used time-based gradient backpropagation method origins from [32]. The two key approximations are as follows: ∂tk(s (l) i ) ∂u (l) i (tk) = −1 du (l) i (tk)/dt = ∑ tk,last(s (l) i )<tm(s (l−1) j )≤tk(s (l) i ) w (l) ij · ∂ϵ(tk − tm) ∂tm −1 , (4) ∂u (l) i (tk) ∂tm(s (l−1) j ) = w (l) ij · ∂ϵ(tk − tm) ∂tm , (5) where tk(s (l) i ) denotes the firing time tk of neuron i in layer l, tk,last(s (l) i ) is the firing time of the last spike emitted by neuron i before time tk. ∂tk(s (l) i ) ∂· means the influence of changing other variables on the timing of a spike, and ∂· ∂tk(s (l) i ) is the influence of changing spike timing on that variable. Combining Eqs. 4-5 and the forward formulas, we can get an invariant equality:∑ j ∑ tk,last(s (l) i )<tm(s (l−1) j )≤tk(s (l) i ) ∂tk(s (l) i ) ∂tm(s (l−1) j ) = 1. (6) The proof is provided in Appendix. Eq. 6 implies the fact that the reference time (t = 0) is meaningless, and only relative spike times matter. If we increase all the spike times in layer l − 1 by 1 unit along the time axis, then all the spike times in layer l are also increased by 1 unit along the time axis. Denote the loss function as L, then the gradient of L with respect to tm(s (l−1) j ) is: ∂L ∂tm(s (l−1) j ) = ∑ i ∑ tm(s (l−1) j )<tk(s (l) i )≤tm,next(s (l−1) j ) ∂L ∂tk(s (l) i ) · ∂tk(s (l) i ) ∂tm(s (l−1) j ) , (7) where tm,next(s (l−1) j ) denotes the firing time of the next spike emitted by neuron j after time tm. Therefore, we actually decompose the gradient ∂L/∂tk(s (l) i ) from layer l into (part of) a set of gradients ∂L/∂tm(s (l−1) j ) of the last layer l − 1, and keep their sum unchanged. In other words, we assign the weighted sum ∂L/∂tk(s (l) i ) by weights ∂tk(s (l) i )/∂tm(s (l−1) j ) to the gradients ∂L/∂tm(s (l−1) j ) in the last layer, as shown in Fig. 2. If we sum all the gradients together, we can get another invariant in this backpropagation rule:∑ i ∑ tk ∂L ∂tk(s (l) i ) = ∑ j ∑ tm ∂L ∂tm(s (l−1) j ) , (8) which means the sum of gradients ∑ i ∑ tk ∂L ∂tk(s (l) i ) never changes between layers under this rule. Gradient sum invariance for pooling layers. The above equations determine the gradient propagation in fully-connected and convolution layers (which contain neurons). 
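Before moving on to pooling layers, it helps to have the forward model of Eqs. (1)-(3) in executable form. The following discrete-time sketch simulates a single SRM neuron driven by presynaptic spike trains with the double-exponential kernel; it ignores batching and efficiency, uses assumed time constants, and is independent of the authors' released code.

```python
import numpy as np

def srm_kernel(t, tau_m=20.0, tau_s=5.0):
    # Double-exponential spike response kernel, Eq. (3); zero for t <= 0.
    return np.where(t > 0, tau_m / (tau_m - tau_s)
                    * (np.exp(-t / tau_m) - np.exp(-t / tau_s)), 0.0)

def simulate_neuron(input_spike_times, weights, theta=1.0, t_end=100.0, dt=0.1):
    """Output spike times of one SRM neuron (clarity over efficiency).

    input_spike_times: list of numpy arrays, spike times of each presynaptic neuron.
    weights:           synaptic weight per presynaptic neuron.
    Only inputs arriving after the neuron's last output spike contribute, mirroring
    the last-spike cutoff in Eq. (1).
    """
    out_spikes, t_last = [], -np.inf
    for t in np.arange(dt, t_end, dt):
        u = 0.0
        for w, times in zip(weights, input_spike_times):
            valid = times[(times > t_last) & (times <= t)]
            u += w * np.sum(srm_kernel(t - valid))
        if u >= theta:
            out_spikes.append(t)
            t_last = t
    return np.array(out_spikes)
```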
The case for pooling layers (which do not contain neurons) is illustrated in Fig. 3. In average pooling with kernel size k, the gradient of one spike (at time t) in layer l is averagely propagated to k× k neurons in layer l− 1 connected to it. Some of these k× k neurons may not emit spikes at time t. However, the gradients are also propagated to those neurons, which cannot further propagate the gradients to the previous layers. For instance, the two white squares (neurons) in layer l− 1 in Fig. 3 receive gradients, but they will not further propagate the gradients to layer l− 2. Thus, a part of the gradients is lost in the backpropagation of the average pooling layer, which might cause the gradient vanishing problem. Meanwhile, the sum of gradients carried by the spikes is not kept among layers in this case. We can adjust the backpropagation stage in average pooling to satisfy the gradient sum invariance requirement by increasing the multiplier from 1/k2 to 1/nspike, where nspike is the number of spikes emitted by the k × k neurons in layer l − 1 at the current time step. On the contrary, in max pooling, the gradient of a spike in layer l is entirely propagated to one of the spikes emitted by its connected neurons in layer l− 1 (shown in the middle of Fig. 3). This maintains the property of the invariable sum of gradients. It should be noticed that, although the backpropagation stage of max pooling is different from adjusted average pooling in discrete simulation, they are almost surely the same in the continuous simulation since (almost surely) no two spikes emit at exactly the same time in this case. 3.3 Deficiencies of The Typical Time-based Gradient Propagation and A New Approach In this section, we first point out the reverse gradient problem in event-driven learning: the gradient direction for spike timing gets wrong when the spike response kernel is decreasing. Then we propose a backward kernel that can not only solve the reverse gradient problem but also keep the property of the invariable sum of gradients. The reverse gradient problem. Fig. 4 illustrates the membrane potential response of a neuron (with index i) to one of its input spikes (with a positive weight) from presynaptic neuron j. As in equation (3), the spike response kernel is a double-exponentail function. Notice that this is not the whole membrane potential of the neuron i as it also receives the inputs from other presynaptic neurons. We consider two spike times (tk and t′k) of neuron i and one spike time tm of presynaptic neuron j. If the next spike of neuron i fires at time tk, moving the presynaptic spike from tm to tm+∆t will decrease membrane potential at time tk, which means postponing the spike at time tk (tm ↑⇒ u[tk] ↓⇒ tk ↑). This result shows that if the presynaptic neuron fires earlier, the postsynaptic neuron will fire earlier in this case. Oppositely, if the next spike of neuron i fires at time t′k, moving the input spike from tm to tm +∆t will cause a increase of u[t′k], which further moves t ′ k leftward (tm ↑⇒ u[t′k] ↑⇒ t′k ↓). As a result, if we want to move the output spike at t′k leftward, we should move the input spike at tm rightward, which reverses the direction. This might cause a problem: When we want to move t′k leftward, we want the neuron to emit more spikes. However, in gradient backpropagation, it moves tm rightward (assume weight wij > 0), which may cause the neuron in the last layer to spike fewer, further causing neurons in the current layer to spike fewer. 
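Before the formal treatment that follows, a quick numeric check of the kernel's slope makes this intuition concrete: dϵ/dt is positive on the rising part of the double-exponential kernel and negative on its decaying tail, which is exactly where the assigned temporal gradient flips sign. The snippet below is purely illustrative, with assumed time constants.

```python
import numpy as np

def kernel_slope(t, tau_m=20.0, tau_s=5.0):
    # d/dt of the double-exponential kernel in Eq. (3), valid for t > 0.
    return tau_m / (tau_m - tau_s) * (-np.exp(-t / tau_m) / tau_m
                                      + np.exp(-t / tau_s) / tau_s)

early, late = 2.0, 40.0          # lags t_k - t_m on the rising part vs. the tail
print(kernel_slope(early) > 0)   # True: gradient direction preserved
print(kernel_slope(late) > 0)    # False: gradient direction reversed (the t'_k case in Fig. 4)
```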
More formally, we assume the neuron i receives a input spike at time tm from presynaptic neuron j with synaptic weight wij , then the membrane potential of neuron i at time tk is: ui(tk) = wij · ϵ(tk − tm) + C, (9) where ϵ(t) denotes the spike response kernel (Eq. 3). C denotes the influence of other spikes, which is not in our concern here. In backward pass, according to Eqs. (4)-(5), we have ∂L ∂tm(sj) = ∂L ∂tk(si) · ∂tk(si) ∂ui(tk) · ∂ui(tk) ∂tm(sj) = ∂L ∂tk(si) · −1 dui(tk)/dt · wij · ∂ϵ(tk − tm) ∂tm . (10) Note that when a spike is emitted by neuron i at time tk, the slope of ui(t) > 0 at time tk, which means −1dui(tk)/dt has a negative sign. Considering ∂ϵ(tk−tm) ∂tm = −dϵ(τ)dτ , where τ = tk − tm, we get: sign ( ∂L ∂tm(sj) ) = sign ( ∂L ∂tk(si) ) · sign ( wij ) · sign ( dϵ(τ) dτ ) . (11) When sign ( dϵ(τ)/dτ ) = −1, which is the part of the spike response kernel that decreases (see the case at t′k in Fig. 4), the gradient direction of tm can be classified into two cases: When wij > 0, sign ( ∂L ∂tm(sj) ) = −sign ( ∂L ∂tk(si) ) , which means the gradient direction is reversed. When wij < 0, sign ( ∂L ∂tm(sj) ) = sign ( ∂L ∂tk(si) ) , which means the gradient direction is kept. In both cases, the sign of the gradient gets wrong in propagation between layers. Thus, the commonly used double-exponential spike response kernel is incompatible with the time-based gradient in event-driven learning. A smoother gradient assigning approach. Inspired by the above gradient inconsistency as well as the invariance of gradient sum, we propose a new gradient backpropagation approach here. Specifically, we replace the function ∂ϵ(tk−tm)∂tm in Eqs. (4) and (5) with a new function h(tk − tm). Therefore, the backpropagation formula between layers turns into: ∂tk(si) ∂tm(sj) = ∂tk(si) ∂ui(tk) · ∂ui(tk) ∂tm(sj) (12) = 0, if tm(sj) ≤ tk,last(si) or tm(sj) > tk(si),(∑ tk,last(si)<tm(s′j)≤tk(si) wij · h(tk − tm) )−1 · wij · h(tk − tm), otherwise. It can be see from Eq. 12 that ∂tk(si)∂tm(sj) will not change if we multiply h(t) by an arbitrary constant, so we do not need to care about the scale of h(t). Meanwhile, the property of invariable sum of gradients is kept after this replacement. To guarantee that the gradients are not reversed between layers, we should expect h(t) > 0 always hold when t > 0. Therefore, we choose h(t) = e − tτgrad to simplify the calculation, where τgrad is a tunable parameter. Notice that the function h(t) is only used in backward propagation, which means the spike response kernel in the forward propagation is not necessarily the integral of h(t). 3.4 Overall Learning Rule The loss function we use in this work is the counting loss function, which has the form L = 1 N ∑Nout i=1 ( 1 T ( N targeti − ∫ T 0 si(t)dt ))2 , where Nout is the number of output neurons and equals to the number of classes, si(t) represents the spike train emitted by neuron i. Besides, N target i is the target of the spike number outputted by neuron i and typically we set N targeti larger when i is the correct answer. During the learning process, the gradient is first propagated from the loss function to the firing time of each spike from the last layer to the first layer. The formula for this stage is (please refer to Appendix for the detailed deduction): ∂L ∂tm(s (l−1) j ) = ∑ i ∂L ∂tk,next(s (l) i ) · ∑ tlasti (s (l) i )<tm(s (l−1) j )≤tk,next(s (l) i ) w (l) ij · h(tk,next − tm) −1 · w(l)ij · h(tk,next − tm), (13) where tk,next(s (l) i ) is the firing time of the first spike emitted by neuron i after time tm. 
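As an illustration of this first stage, the sketch below assigns the gradient of one output spike time to the input spikes in its window using the proposed backward kernel h(t) = exp(-t/τgrad), in the spirit of Eq. (12); by construction the assignment coefficients sum to one, matching the invariance in Eq. (6). All names and values are illustrative, and the sketch assumes the denominator is positive, as it is at a genuine firing time.

```python
import numpy as np

def backward_kernel(t, tau_grad=10.0):
    # Proposed replacement for d(eps)/dt in the backward pass: positive for all t > 0.
    return np.exp(-t / tau_grad)

def assign_output_gradient(grad_tk, t_k, input_times, input_weights, t_k_last=-np.inf):
    """Split dL/dt_k among the input spikes in (t_k_last, t_k], as in Eq. (12)."""
    t_m = np.asarray(input_times)
    w = np.asarray(input_weights)
    in_window = (t_m > t_k_last) & (t_m <= t_k)
    contrib = np.where(in_window, w * backward_kernel(t_k - t_m), 0.0)
    # At a true firing time the window sum is positive; spikes outside the window get 0.
    coeffs = contrib / np.sum(contrib)   # coefficients sum to 1 (gradient-sum invariance)
    return grad_tk * coeffs              # dL/dt_m for each input spike
```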
After this, the gradient to weights in each layer is calculated by summing up the multiplication of the gradients of spike firing times in the same layer and the derivative of weights with respect to spike firing times. The learning rule for this stage is ∂L ∂w (l) ij = ∑ tm(s (l−1) j ) ∂L ∂tk,next(s (l) i ) · −1 ∂u (l) i (tk,next) ∂t · ϵ(tk,next − tm). (14) 4 Experiments In this section, we validate the effectiveness of our method on MNIST [48], Fashion-MNIST [49], N-MNIST [50], CIFAR10 [51], and CIFAR100 [51] datasets. This section is organized as follows: We first introduce the training details, then evaluate the performance of our algorithm and compare it with the state-of-the-art event-driven learning approaches. At last, we conduct ablation studies to illustrate the effectiveness of our proposed modules. More details of the configurations can be found in the Appendix. 4.1 Training Details Initialization: When training in an event-driven fashion, gradient information is only carried by spikes. Therefore, the gradient information will be completely blocked by a layer when there are no spikes in that layer. To solve this problem, we start with layers of arbitrarily initialized weights and scale them by certain multiples, which can make the average firing rate to be a certain number for each layer. We obtain these multiple parameters by binary search and this strategy works well in practice. Supervisory signals: Another problem we face is that output neurons corresponding to certain classes do not fire anymore after certain epochs of training. This problems makes corresponding gradients difficult to propagate in the network, further leading to these neurons no longer fire afterwards, resulting recognizing those classes correctly impossible in the following epochs. To address this problem, we utilize supervisory signals. For each neuron in the output layer corresponding to the ground-truth label, we force it to fire at the end of the simulation. Experiment settings: In our experiments, we use the real-valued spike current representing the pixel intensities of the image as inputs. We list the network architecture each work uses and the accuracy they achieves on each dataset in table 2. Notice that the output layer is, by default, a fully-connected layer containing the same number of neurons as the number of classes in the dataset, and omitted from the architecture representation. We run all experiments on a single Nvidia A100 GPU. 4.2 Comparison with the State-of-the-Art Tab. 2 reports the accuracies of the proposed method and other comparing methods. The performance of our algorithm is lower than TSSL-BP by 0.06% on the MNIST dataset and 0.01% on the N-MNIST dataset. However, the output of their network is real-valued postsynaptic currents while the output of our network is binary spikes. In addition, they use RNN-like gradients to assist learning. On the remaining datasets, we have achieved state-of-the-art performance among these works with temporal gradients. For the Fashion-MNIST dataset, our algorithm performs 0.45% higher than the previous SOTA. On the CIFAR10 dataset, we achieve 92.45% accuracy with a 14-layer SEW-Resnet and 92.10% with VGG11, which are all better than the current SOTA, 91.41%. For the CIFAR100 dataset, we are the first work to successfully train SNNs with time-based gradients in an event-driven fashion. We have achieved a performance of 63.97% on this dataset. 
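The weight-scale initialization described in Section 4.1 can be sketched as a scalar binary search. Here firing_rate_fn is a hypothetical callable that runs a layer on a batch with its weights multiplied by the candidate scale and returns the resulting average firing rate; approximate monotonicity of the rate in the scale is an assumption of the sketch.

```python
def find_weight_scale(firing_rate_fn, target_rate, lo=1e-3, hi=1e3,
                      tol=1e-3, max_iter=50):
    """Binary-search a multiplier s so that firing_rate_fn(s) ~= target_rate.

    Assumes the average firing rate is (approximately) non-decreasing in s.
    """
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        rate = firing_rate_fn(mid)
        if abs(rate - target_rate) < tol:
            return mid
        if rate < target_rate:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```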
4.3 Ablation Studies To show the effect of our proposed modules, we conduct ablation experiments on the CIFAR10 dataset. Specifically, two proposed components are taken into consideration: (1) As mentioned in Section 3.3, we compare the proposed gradient assignment functions h(t) = e−βt in (12) with the commonly used one h(t) = dϵ(t)dt . (2) We compare the results of three different types of pooling layers (average pooling, adjusted average pooling and max pooling) mentioned in Section 3.2. We have tried all combinations of gradient assignment functions and pooling layers. The test accuracy of these different settings is shown in Tab. 3. The results in Tab. 3 meets our expectation. For the pooling layer, max pooling and adjusted average pooling have much better performance than the average pooling. This accords with the conclusion in Section 3.2 that pooling layers keeping the property of invariant sum of gradients are better than those that do not. The proposed gradient assignment function h(t) = e−βt is also better than the commonly used one h(t) = dϵ(t)dt for all three types of pooling layers. In addition, as shown in Fig. 5, h(t) = e −βt converges faster than h(t) = dϵ(t)dt in early stage. 5 Conclusion and Discussion In this work, we analyze the commonly-used SNN temporal backpropagation training approach and find that it follows the gradient assignment rule. We also find the average pooling layer does not obey this rule while the max pooling layer does. We show that the direction of the temporal gradient will be reversed when the spike kernel is decreasing and avoid it with an increasing kernel in backpropagation. Our algorithm achieves state-of-the-art performance on CIFAR10 among time-based SNN learning approaches and successfully learns the parameters of SNN on CIFAR100 for the first time. Compared with RNN-like methods, the proposed event-based learning algorithm has a lower computational cost and memory occupation when there are many time steps. Besides, our algorithm also does not need bias between layers. Meanwhile, gradient propagation between spikes instead of time steps can mitigate the gradient explosion/vanishing problem along the time axis. However, there is still a gap between event-driven backpropagation and biological plausible learning, since event-driven backpropagation processes the spike train in reverse time, which conflicts with the online learning in the real world and desires for future research. 6 Acknowledgements This work was supported by the National Natural Science Foundation of China Grants 62176003 and 62088102.
1. What is the focus of the paper regarding spiking neural networks?
2. What are the strengths of the proposed approach, particularly in terms of its presentation and experimental results?
3. Do you have any questions or concerns regarding the difference between event-based training and RNN-like training?
4. Can the method be trained with residual connections?
5. What are the potential limitations of the paper, specifically in terms of its relevance to the NeurIPS conference?
Summary Of The Paper
This paper focuses on the temporal, event-driven manner of training a spiking neural network from scratch. The authors first revisit the learning dynamics of event-driven learning and discover several invariance properties. Then, a problem called the reverse gradient problem is raised and addressed. Extensive experiments are conducted to verify the effectiveness of this method.

Strengths And Weaknesses
This paper is well-presented. The writing and visualization are neat and easy to follow. The experimental results are sufficient for comparison with existing event-driven learning works.

Questions
TBH, I am not an expert in event-based training of SNNs, therefore I cannot give useful feedback with respect to that. I have a few questions about the difference between event-based training and "RNN-like" training. When implementing these event-based learning algorithms on neuromorphic hardware, how is the training accelerated, and why is it significantly faster than the RNN-based learning algorithms? Can this method be trained with residual connections?

Limitations
Overall, I found this paper interesting, and it could be valuable for publication at the NeurIPS conference. As I am not familiar with this type of training method for SNNs, I could not give conceptual limitations for this paper. I am giving borderline acceptance of this paper due to its good presentation. Meanwhile, I will set my confidence score to 2 and will look into the comments from other reviewers to finalize my rating.

POST-REBUTTAL REVIEW: I'd like to thank the authors for their detailed response. My questions are addressed, thus I increase my rating to 7.
NIPS
Title Training Spiking Neural Networks with Event-driven Backpropagation

Abstract Spiking Neural Networks (SNNs) represent and transmit information by spatiotemporal spike patterns, which brings two major advantages: biological plausibility and suitability for ultra-low-power neuromorphic implementation. Despite this, the binary firing characteristic makes training SNNs more challenging. To learn the parameters of deep SNNs in an event-driven fashion, as in the inference of SNNs, backpropagation with respect to spike timing has been proposed. Although this event-driven learning has the advantages of lower computational cost and memory occupation, its accuracy is far below that of recurrent neural network-like learning approaches. In this paper, we first analyze the commonly used temporal backpropagation training approach and prove that the sum of gradients remains unchanged between fully-connected and convolutional layers. Secondly, we show that the max pooling layer meets the above invariance rule, while the average pooling layer does not; the latter suffers from the gradient vanishing problem but can be revised to meet the requirement. Thirdly, we point out the reverse gradient problem for time-based gradients and propose a backward kernel that solves this problem while keeping the property of the invariable sum of gradients. The experimental results show that the proposed approach achieves state-of-the-art performance on CIFAR10 among time-based training methods. Also, this is the first time that a time-based backpropagation approach successfully trains an SNN on the CIFAR100 dataset. Our code is available at https://github.com/zhuyaoyu/SNN-event-driven-learning.

1 Introduction

Motivated by the principles of brain computing, Spiking Neural Networks (SNNs) are considered the third generation of neural networks [1, 2]. SNNs are developed to work in power-critical scenarios, such as edge computing. When run on dedicated neuromorphic chips, they can accomplish tasks [3, 4, 5] with ultra-low power consumption [6, 7, 8, 9, 10, 11, 12]. In contrast, the previous generation of neural networks – Artificial Neural Networks (ANNs) [13] – generally requires a large amount of computational resources (e.g., GPUs). This advantage of SNNs in power consumption largely relies on efficient event-based computation [14, 15]. Another advantage of SNNs originates from their biological reality (compared to ANNs). The similarity between SNNs and biological brains provides an excellent opportunity to study how the brain computes at the neuronal circuit level [16].

Compared with artificial neural networks, developing supervised learning algorithms for spiking neural networks requires more effort. The main challenge of training SNNs comes from the binary nature of spikes and the non-differentiability of the membrane potential at spike time. This difficulty in training impedes the performance of SNNs in pattern classification tasks compared to their ANN counterparts. Existing supervised learning methods for SNNs can be grouped into two categories: The first category consists of recurrent neural network (RNN)-like learning algorithms. These algorithms treat spiking neural networks as binary-output recurrent neural networks and handle the discontinuities of the membrane potential at spike times with continuous surrogate derivatives [17].
They typically train deep SNNs with surrogate gradients based on the idea of the backpropagation through time (BPTT) algorithm [18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28]. While competitive accuracies are reported on the MNIST, CIFAR-10, and even ImageNet datasets [29, 30, 31], the gradient information is propagated at each time step, whether or not a spike is emitted (as shown in Fig. 1). Therefore, these approaches do not follow the event-driven nature of spiking neural networks; they lose the asynchronous characteristic of SNNs and consume much power when trained on neuromorphic hardware. The second category is event-driven algorithms, which propagate gradient information through spikes. Precise spike timing plays an important role in this setting and is extensively used in such algorithms [32, 33, 34, 35, 36, 37, 38, 39]. Classical examples include SpikeProp [32] and its variants [33, 40, 41]. These algorithms approximate the derivative of spike timing with respect to the membrane potential as the negative inverse of the time derivative of the membrane potential function. This approximation is actually mathematically correct without preconditions [42]. Some other works apply non-leaky integrate-and-fire neurons to stabilize the training process [35, 38, 43]. Most of these works restrict each neuron to fire at most once, which inspires [44] to take the spike time as the state of a neuron and model the relation between neurons by this spike time. As a result, the SNN is trained similarly to an ANN. Among the methods trained in an event-driven fashion (i.e., not modelling the relation of spike times to train like ANNs), the state-of-the-art model is TSSL-BP [39]. However, it uses RNN-like surrogate gradients (a sigmoid function) to assist training. Hence, it is still challenging to train SNNs in a pure event-driven fashion.

In this work, we develop a novel event-driven learning algorithm that can train high-performance deep SNNs. The main contributions of our work are as follows:

1. We prove that the typical SNN temporal backpropagation training approach assigns the gradient of an output spike of a neuron to the input spikes generating it. Summing this assignment rule, we find that the sum of gradients is unchanged between layers.

2. We analyze the case of the pooling layer (which does not have neurons) and find that average pooling does not keep the gradient sum unchanged, but we can modify its backward formulas to meet the requirement. Meanwhile, the max pooling layer satisfies the rule from the start.

3. We point out the reverse gradient problem in event-driven learning: the direction of the temporal gradient is reversed during backpropagation when the kernel function of an input spike is decreasing. We then propose a backward kernel function that addresses this problem while keeping the sum of gradients unchanged between layers.

4. The adjusted average pooling layer and the non-decreasing backward kernel enhance the performance of our model as well as its convergence speed.

To the best of our knowledge, our proposed approach achieves state-of-the-art performance on CIFAR10 among event-driven training methods (with temporal gradients) for SNNs. Meanwhile, our method is the first event-driven backpropagation approach that successfully trains SNNs on the larger-scale CIFAR100 dataset.

2 Backgrounds and Related Work

The gradient-based learning of spiking neural networks contains two stages: the forward (inference) stage and the backward (learning) stage.
In the forward stage, Leaky Integrate-and-Fire (LIF) neurons are most commonly used [18, 21, 26, 39], while other types of neurons are also applicable [32, 35]. Typically, these neuron models can be converted to the form of the Spike Response Model (SRM) [37, 45, 46], which is easily represented in an event-driven fashion. In the backward stage, the methods used by existing works exhibit more diversity. Here, we classify existing approaches along two dimensions: whether non-spike information is needed in discrete time steps (RNN-like) or not (event-driven), and whether the gradient represents spike scale (activation-based) or spike timing (time-based).

Event-driven learning vs. RNN-like learning: In both the forward and backward computation of event-driven learning, information is only carried by spikes in SNNs. Specifically, in the backward computation, gradient information is propagated through spikes [32, 33, 41, 35] (shown in Fig. 1a-b). On the other hand, in RNN-like learning, information is not only carried by spikes in the backward computation. In particular, gradient information can be propagated through a neuron that does not emit a spike (shown in Fig. 1c-d). This gradient propagation is achieved by a surrogate function [12, 17, 18, 23, 47], which is a function of the membrane potential at the current time step u_t and the firing threshold θ.

Time-based gradient vs. activation-based gradient: Time-based gradients represent the (reverse) direction in which the timing of a spike should move, that is, leftward or rightward on the time axis [32]. In backward propagation, the derivative of the firing time of a spike with respect to the corresponding membrane potential, ∂t/∂u, is often approximated as −1/(∂u/∂t) [32, 33], denoting how a change of the membrane potential will change the spike firing time (Fig. 1b). On the other hand, activation-based approaches replace the Heaviside neuron activation function Θ(·) (spike s_t = Θ(u_t − θ)) in forward propagation with differentiable functions σ(·) in backward propagation, whether or not there are spikes in the current time step [18, 26, 31, 21]. Therefore, activation-based approaches essentially regard SNNs as binary RNNs and train them with approximated gradients, where the gradients indicate whether the values in the network (including the binary spikes) should be larger or smaller (Fig. 1d). As a result, time-based gradients are event-driven by nature, since the temporal gradient can only be carried by spikes. Meanwhile, activation-based gradients are more suitable for the RNN-like training scheme, since the diversity of surrogate gradients largely relies on the fact that u_t ≠ θ in discrete time steps [17], which no longer holds in continuous-time simulation. If we want to apply activation-based gradients to event-driven learning, there should be only one value of ∂s/∂u when the membrane potential reaches the threshold. Tab. 1 lists whether a gradient type can be used in each learning fashion. It should be noted that although the activation-based gradient is more suitable for RNN-like learning, it can still be used for event-driven learning.

3 Methods

3.1 Forward Formulas

We use the spike response model [1] for the neurons in the network. The forward propagation in the network can be described as follows:

u_i^{(l)}(t) = \int_{t_{i,\mathrm{last}}^{(l)}}^{t} \sum_j w_{ij}^{(l)} \, s_j^{(l-1)}(\tau) \, \epsilon(t - \tau) \, d\tau, \quad (1)

s_i^{(l)}(t) = \delta\big(u_i^{(l)}(t) - \theta\big). \quad (2)
Here u_i^{(l)}(t) denotes the membrane potential of neuron i in layer l at time t, and w_{ij}^{(l)} denotes the weight between neuron j in layer l−1 and neuron i in layer l. t_{i,\mathrm{last}}^{(l)} is the time of the last spike of neuron i in layer l, and s_i^{(l)}(t) represents the spike emitted by neuron i at time t. The function δ(·) is the Dirac delta function and θ is the firing threshold. The spike response kernel ϵ(t) can be described by

\epsilon(t) = \frac{\tau_m}{\tau_m - \tau_s}\left(e^{-t/\tau_m} - e^{-t/\tau_s}\right), \quad (3)

where τ_m and τ_s are the membrane time constant and the synapse time constant, respectively. Notice that we do not use reset kernels as in previous works [39, 41]. Instead, we eliminate the influence of input spikes prior to the last output spike on the membrane potential.

3.2 Rethinking the Classical Time-based Backward Propagation Formula

In this subsection, we analyze the classical time-based backpropagation formula in SNNs. We first theoretically prove that the backpropagation rule essentially assigns the gradients of the output spikes of neurons to their input spikes. Then we examine the pooling layer and show that average pooling should be adjusted in backpropagation to satisfy the gradient assignment mechanism, while max pooling naturally satisfies this mechanism.

Invariant sum of gradients among layers with weights. The most commonly used time-based gradient backpropagation method originates from [32]. The two key approximations are as follows:

\frac{\partial t_k(s_i^{(l)})}{\partial u_i^{(l)}(t_k)} = \frac{-1}{du_i^{(l)}(t_k)/dt} = \left( \sum_{t_{k,\mathrm{last}}(s_i^{(l)}) < t_m(s_j^{(l-1)}) \le t_k(s_i^{(l)})} w_{ij}^{(l)} \cdot \frac{\partial \epsilon(t_k - t_m)}{\partial t_m} \right)^{-1}, \quad (4)

\frac{\partial u_i^{(l)}(t_k)}{\partial t_m(s_j^{(l-1)})} = w_{ij}^{(l)} \cdot \frac{\partial \epsilon(t_k - t_m)}{\partial t_m}, \quad (5)

where t_k(s_i^{(l)}) denotes the firing time t_k of neuron i in layer l, and t_{k,\mathrm{last}}(s_i^{(l)}) is the firing time of the last spike emitted by neuron i before time t_k. \partial t_k(s_i^{(l)}) / \partial\,\cdot denotes the influence of changing other variables on the timing of a spike, and \partial\,\cdot / \partial t_k(s_i^{(l)}) denotes the influence of changing the spike timing on that variable. Combining Eqs. 4-5 with the forward formulas, we obtain an invariant equality:

\sum_j \sum_{t_{k,\mathrm{last}}(s_i^{(l)}) < t_m(s_j^{(l-1)}) \le t_k(s_i^{(l)})} \frac{\partial t_k(s_i^{(l)})}{\partial t_m(s_j^{(l-1)})} = 1. \quad (6)

The proof is provided in the Appendix. Eq. 6 reflects the fact that the reference time (t = 0) is meaningless and only relative spike times matter. If we shift all the spike times in layer l−1 by 1 unit along the time axis, then all the spike times in layer l are also shifted by 1 unit along the time axis. Denoting the loss function by L, the gradient of L with respect to t_m(s_j^{(l-1)}) is:

\frac{\partial L}{\partial t_m(s_j^{(l-1)})} = \sum_i \sum_{t_m(s_j^{(l-1)}) < t_k(s_i^{(l)}) \le t_{m,\mathrm{next}}(s_j^{(l-1)})} \frac{\partial L}{\partial t_k(s_i^{(l)})} \cdot \frac{\partial t_k(s_i^{(l)})}{\partial t_m(s_j^{(l-1)})}, \quad (7)

where t_{m,\mathrm{next}}(s_j^{(l-1)}) denotes the firing time of the next spike emitted by neuron j after time t_m. Therefore, we actually decompose the gradient \partial L / \partial t_k(s_i^{(l)}) from layer l into (part of) a set of gradients \partial L / \partial t_m(s_j^{(l-1)}) in the previous layer l−1, keeping their sum unchanged. In other words, we assign the gradient \partial L / \partial t_k(s_i^{(l)}), weighted by \partial t_k(s_i^{(l)}) / \partial t_m(s_j^{(l-1)}), to the gradients \partial L / \partial t_m(s_j^{(l-1)}) in the previous layer, as shown in Fig. 2. If we sum all the gradients together, we obtain another invariant of this backpropagation rule:

\sum_i \sum_{t_k} \frac{\partial L}{\partial t_k(s_i^{(l)})} = \sum_j \sum_{t_m} \frac{\partial L}{\partial t_m(s_j^{(l-1)})}, \quad (8)

which means the sum of gradients \sum_i \sum_{t_k} \partial L / \partial t_k(s_i^{(l)}) never changes between layers under this rule.

Gradient sum invariance for pooling layers. The above equations determine the gradient propagation in fully-connected and convolutional layers (which contain neurons).
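To make the assignment rule of Eqs. (4)-(6) concrete, the following minimal NumPy sketch (our own illustration, not the authors' code; the time constants, spike times, and weights are arbitrary assumptions) computes the assignment weights ∂t_k/∂t_m for one output spike and checks that they sum to 1, as Eq. (6) states, regardless of the particular weights or spike times.

```python
import numpy as np

tau_m, tau_s = 20.0, 5.0          # assumed time constants (ms)

def deps_dtm(tau):
    """d eps(t_k - t_m) / d t_m evaluated at tau = t_k - t_m (equals -d eps/d tau)."""
    A = tau_m / (tau_m - tau_s)
    return -A * (-np.exp(-tau / tau_m) / tau_m + np.exp(-tau / tau_s) / tau_s)

rng = np.random.default_rng(0)
t_k = 30.0                          # output spike time of neuron i
t_m = rng.uniform(5.0, 29.0, 8)     # presynaptic spike times inside (t_k_last, t_k]
w = rng.normal(0.5, 0.3, 8)         # synaptic weights

# Eq. (4): dt_k / du_i(t_k);  Eq. (5): du_i(t_k) / dt_m
dtk_du = 1.0 / np.sum(w * deps_dtm(t_k - t_m))
du_dtm = w * deps_dtm(t_k - t_m)
assign = dtk_du * du_dtm            # dt_k / dt_m for each presynaptic spike

print("assignment weights:", np.round(assign, 3))
print("sum of weights    :", assign.sum())   # ~1.0, as stated by Eq. (6)
```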
The case of pooling layers (which do not contain neurons) is illustrated in Fig. 3. In average pooling with kernel size k, the gradient of one spike (at time t) in layer l is evenly propagated to the k × k neurons in layer l−1 connected to it. Some of these k × k neurons may not emit spikes at time t. However, the gradients are also propagated to those neurons, which cannot further propagate the gradients to the previous layers. For instance, the two white squares (neurons) in layer l−1 in Fig. 3 receive gradients, but they will not further propagate the gradients to layer l−2. Thus, a part of the gradient is lost in the backpropagation of the average pooling layer, which might cause the gradient vanishing problem. Meanwhile, the sum of gradients carried by the spikes is not preserved across layers in this case. We can adjust the backpropagation stage of average pooling to satisfy the gradient sum invariance requirement by increasing the multiplier from 1/k² to 1/n_spike, where n_spike is the number of spikes emitted by the k × k neurons in layer l−1 at the current time step. In contrast, in max pooling, the gradient of a spike in layer l is entirely propagated to one of the spikes emitted by its connected neurons in layer l−1 (shown in the middle of Fig. 3). This maintains the property of the invariable sum of gradients. It should be noted that, although the backpropagation stage of max pooling differs from adjusted average pooling in discrete simulation, they are almost surely the same in continuous simulation, since (almost surely) no two spikes are emitted at exactly the same time in that case.

3.3 Deficiencies of the Typical Time-based Gradient Propagation and a New Approach

In this section, we first point out the reverse gradient problem in event-driven learning: the gradient direction for spike timing becomes wrong when the spike response kernel is decreasing. Then we propose a backward kernel that not only solves the reverse gradient problem but also keeps the property of the invariable sum of gradients.

The reverse gradient problem. Fig. 4 illustrates the membrane potential response of a neuron (with index i) to one of its input spikes (with a positive weight) from presynaptic neuron j. As in Eq. (3), the spike response kernel is a double-exponential function. Notice that this is not the whole membrane potential of neuron i, as it also receives inputs from other presynaptic neurons. We consider two spike times (t_k and t'_k) of neuron i and one spike time t_m of presynaptic neuron j. If the next spike of neuron i fires at time t_k, moving the presynaptic spike from t_m to t_m + ∆t will decrease the membrane potential at time t_k, which postpones the spike at time t_k (t_m ↑ ⇒ u[t_k] ↓ ⇒ t_k ↑). This shows that if the presynaptic neuron fires earlier, the postsynaptic neuron will fire earlier in this case. Conversely, if the next spike of neuron i fires at time t'_k, moving the input spike from t_m to t_m + ∆t will cause an increase of u[t'_k], which moves t'_k leftward (t_m ↑ ⇒ u[t'_k] ↑ ⇒ t'_k ↓). As a result, if we want to move the output spike at t'_k leftward, we should move the input spike at t_m rightward, which reverses the direction. This can cause a problem: when we want to move t'_k leftward, we want the neuron to emit more spikes. However, in gradient backpropagation, t_m is moved rightward (assuming weight w_ij > 0), which may cause the neuron in the previous layer to emit fewer spikes, further causing neurons in the current layer to emit fewer spikes.
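The following short sketch (our own illustration, with assumed time constants) evaluates the double-exponential kernel of Eq. (3) and locates where its time derivative turns negative. Output spikes of neuron i that fall in this decreasing region, like t'_k in Fig. 4, are exactly the ones affected by the reverse gradient problem described above.

```python
import numpy as np

tau_m, tau_s = 20.0, 5.0                  # assumed time constants (ms)
A = tau_m / (tau_m - tau_s)

def eps(t):
    """Double-exponential spike response kernel, Eq. (3), for t >= 0."""
    return A * (np.exp(-t / tau_m) - np.exp(-t / tau_s))

def deps_dt(t):
    """Time derivative of the kernel; negative on the decreasing part."""
    return A * (-np.exp(-t / tau_m) / tau_m + np.exp(-t / tau_s) / tau_s)

# Analytical peak time: deps_dt = 0  =>  t* = ln(tau_m/tau_s) * tau_m*tau_s / (tau_m - tau_s)
t_peak = np.log(tau_m / tau_s) * tau_m * tau_s / (tau_m - tau_s)
print(f"kernel peak at t* = {t_peak:.2f} ms")

for t in [2.0, 5.0, t_peak, 15.0, 40.0]:
    print(f"t = {t:5.2f}  eps = {eps(t):.3f}  d eps/dt = {deps_dt(t):+.4f}")
# For t > t*, d eps/dt < 0: by Eq. (11) below, an output spike falling in this
# region flips the sign of the classical time-based gradient.
```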
More formally, assume that neuron i receives an input spike at time t_m from presynaptic neuron j with synaptic weight w_ij. The membrane potential of neuron i at time t_k is then:

u_i(t_k) = w_{ij} \cdot \epsilon(t_k - t_m) + C, \quad (9)

where ϵ(t) denotes the spike response kernel (Eq. 3), and C denotes the influence of the other spikes, which is not our concern here. In the backward pass, according to Eqs. (4)-(5), we have

\frac{\partial L}{\partial t_m(s_j)} = \frac{\partial L}{\partial t_k(s_i)} \cdot \frac{\partial t_k(s_i)}{\partial u_i(t_k)} \cdot \frac{\partial u_i(t_k)}{\partial t_m(s_j)} = \frac{\partial L}{\partial t_k(s_i)} \cdot \frac{-1}{du_i(t_k)/dt} \cdot w_{ij} \cdot \frac{\partial \epsilon(t_k - t_m)}{\partial t_m}. \quad (10)

Note that when a spike is emitted by neuron i at time t_k, the slope of u_i(t) at time t_k is positive, which means −1/(du_i(t_k)/dt) has a negative sign. Considering \partial \epsilon(t_k - t_m)/\partial t_m = -d\epsilon(\tau)/d\tau, where τ = t_k − t_m, we get:

\mathrm{sign}\!\left(\frac{\partial L}{\partial t_m(s_j)}\right) = \mathrm{sign}\!\left(\frac{\partial L}{\partial t_k(s_i)}\right) \cdot \mathrm{sign}(w_{ij}) \cdot \mathrm{sign}\!\left(\frac{d\epsilon(\tau)}{d\tau}\right). \quad (11)

When sign(dϵ(τ)/dτ) = −1, i.e., on the decreasing part of the spike response kernel (see the case at t'_k in Fig. 4), the gradient direction of t_m can be classified into two cases: when w_ij > 0, sign(∂L/∂t_m(s_j)) = −sign(∂L/∂t_k(s_i)), which means the gradient direction is reversed; when w_ij < 0, sign(∂L/∂t_m(s_j)) = sign(∂L/∂t_k(s_i)), which means the gradient direction is kept. In both cases, the sign of the gradient becomes incorrect in the propagation between layers. Thus, the commonly used double-exponential spike response kernel is incompatible with time-based gradients in event-driven learning.

A smoother gradient assigning approach. Inspired by the above gradient inconsistency as well as the invariance of the gradient sum, we propose a new gradient backpropagation approach. Specifically, we replace the function \partial \epsilon(t_k - t_m)/\partial t_m in Eqs. (4) and (5) with a new function h(t_k − t_m). The backpropagation formula between layers therefore becomes:

\frac{\partial t_k(s_i)}{\partial t_m(s_j)} = \frac{\partial t_k(s_i)}{\partial u_i(t_k)} \cdot \frac{\partial u_i(t_k)}{\partial t_m(s_j)} = \begin{cases} 0, & \text{if } t_m(s_j) \le t_{k,\mathrm{last}}(s_i) \text{ or } t_m(s_j) > t_k(s_i), \\[4pt] \left( \sum_{t_{k,\mathrm{last}}(s_i) < t_m(s'_j) \le t_k(s_i)} w_{ij} \cdot h(t_k - t_m) \right)^{-1} \cdot w_{ij} \cdot h(t_k - t_m), & \text{otherwise.} \end{cases} \quad (12)

It can be seen from Eq. 12 that \partial t_k(s_i)/\partial t_m(s_j) does not change if we multiply h(t) by an arbitrary constant, so we do not need to care about the scale of h(t). Meanwhile, the property of the invariable sum of gradients is kept after this replacement. To guarantee that the gradients are not reversed between layers, we require that h(t) > 0 always holds for t > 0. Therefore, we choose h(t) = e^{-t/\tau_{\mathrm{grad}}} to simplify the calculation, where τ_grad is a tunable parameter. Notice that the function h(t) is only used in backward propagation, which means the spike response kernel in the forward propagation is not necessarily the integral of h(t).

3.4 Overall Learning Rule

The loss function we use in this work is the spike counting loss,

L = \frac{1}{N} \sum_{i=1}^{N_{\mathrm{out}}} \left( \frac{1}{T} \left( N_i^{\mathrm{target}} - \int_0^T s_i(t)\, dt \right) \right)^2,

where N_out is the number of output neurons and equals the number of classes, and s_i(t) represents the spike train emitted by neuron i. Besides, N_i^target is the target spike count for neuron i, and we typically set N_i^target larger when i is the correct class. During the learning process, the gradient is first propagated from the loss function to the firing time of each spike, from the last layer to the first layer. The formula for this stage is (please refer to the Appendix for the detailed derivation):

\frac{\partial L}{\partial t_m(s_j^{(l-1)})} = \sum_i \frac{\partial L}{\partial t_{k,\mathrm{next}}(s_i^{(l)})} \cdot \left( \sum_{t_{\mathrm{last}}(s_i^{(l)}) < t_m(s_j^{(l-1)}) \le t_{k,\mathrm{next}}(s_i^{(l)})} w_{ij}^{(l)} \cdot h(t_{k,\mathrm{next}} - t_m) \right)^{-1} \cdot w_{ij}^{(l)} \cdot h(t_{k,\mathrm{next}} - t_m), \quad (13)

where t_{k,\mathrm{next}}(s_i^{(l)}) is the firing time of the first spike emitted by neuron i after time t_m.
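As a small, self-contained illustration of the replacement in Eq. (12) (again our own sketch with made-up weights and spike times, not the released implementation), the snippet below computes the assignment weights using h(t) = e^{-t/τ_grad} and checks the two properties the text relies on: the weights still sum to 1, and each per-spike term has the sign of its synaptic weight, so the reversal of Eq. (11) cannot occur.

```python
import numpy as np

tau_grad = 10.0                              # assumed tunable parameter of h(t)

def h(t):
    """Backward kernel h(t) = exp(-t / tau_grad); positive whenever t > 0."""
    return np.exp(-t / tau_grad)

rng = np.random.default_rng(1)
t_k = 30.0                                   # output spike time of neuron i
t_m = np.sort(rng.uniform(5.0, 29.0, 6))     # presynaptic spike times in (t_k_last, t_k]
w = rng.normal(0.3, 0.5, 6)                  # mixed excitatory/inhibitory weights

contrib = w * h(t_k - t_m)                   # w_ij * h(t_k - t_m) terms of Eq. (12)
assign = contrib / contrib.sum()             # dt_k / dt_m for each input spike

print("sum of assignment weights:", assign.sum())            # 1.0, invariance of Eq. (6) kept
print("h never flips the sign   :", np.array_equal(np.sign(contrib), np.sign(w)))
```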
After this, the gradient with respect to the weights in each layer is calculated by summing, over the spikes in the same layer, the product of the gradient of the spike firing time and the derivative of the firing time with respect to the weight. The learning rule for this stage is

\frac{\partial L}{\partial w_{ij}^{(l)}} = \sum_{t_m(s_j^{(l-1)})} \frac{\partial L}{\partial t_{k,\mathrm{next}}(s_i^{(l)})} \cdot \frac{-1}{\partial u_i^{(l)}(t_{k,\mathrm{next}})/\partial t} \cdot \epsilon(t_{k,\mathrm{next}} - t_m). \quad (14)

4 Experiments

In this section, we validate the effectiveness of our method on the MNIST [48], Fashion-MNIST [49], N-MNIST [50], CIFAR10 [51], and CIFAR100 [51] datasets. This section is organized as follows: we first introduce the training details, then evaluate the performance of our algorithm and compare it with state-of-the-art event-driven learning approaches, and finally conduct ablation studies to illustrate the effectiveness of our proposed modules. More details of the configurations can be found in the Appendix.

4.1 Training Details

Initialization: When training in an event-driven fashion, gradient information is only carried by spikes. Therefore, the gradient information will be completely blocked by a layer when there are no spikes in that layer. To solve this problem, we start with arbitrarily initialized weights in each layer and scale them by layer-wise multipliers so that the average firing rate of each layer reaches a target value. We obtain these multipliers by binary search, and this strategy works well in practice.

Supervisory signals: Another problem we face is that output neurons corresponding to certain classes stop firing after a certain number of training epochs. This makes the corresponding gradients difficult to propagate through the network, which in turn keeps these neurons from firing afterwards and makes it impossible to recognize those classes correctly in the following epochs. To address this problem, we utilize supervisory signals: for each neuron in the output layer corresponding to the ground-truth label, we force it to fire at the end of the simulation.

Experiment settings: In our experiments, we use real-valued spike currents representing the pixel intensities of the image as inputs. We list the network architecture each work uses and the accuracy it achieves on each dataset in Tab. 2. Notice that the output layer is, by default, a fully-connected layer containing the same number of neurons as the number of classes in the dataset, and it is omitted from the architecture representation. We run all experiments on a single Nvidia A100 GPU.

4.2 Comparison with the State-of-the-Art

Tab. 2 reports the accuracies of the proposed method and the compared methods. The performance of our algorithm is lower than TSSL-BP by 0.06% on the MNIST dataset and 0.01% on the N-MNIST dataset. However, the output of their network is real-valued postsynaptic currents, while the output of our network is binary spikes. In addition, they use RNN-like gradients to assist learning. On the remaining datasets, we achieve state-of-the-art performance among works with temporal gradients. On the Fashion-MNIST dataset, our algorithm performs 0.45% higher than the previous SOTA. On the CIFAR10 dataset, we achieve 92.45% accuracy with a 14-layer SEW-ResNet and 92.10% with VGG11, both better than the current SOTA of 91.41%. On the CIFAR100 dataset, we are the first to successfully train SNNs with time-based gradients in an event-driven fashion, achieving an accuracy of 63.97%.
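To make the initialization procedure of Section 4.1 more concrete, here is a minimal sketch of the binary search over a per-layer weight multiplier. It is our own illustration, not the released code: the firing_rate helper below is a hypothetical stand-in for an event-driven forward pass of one layer, and the target rate, search bounds, and iteration count are assumptions.

```python
import numpy as np

def firing_rate(scale, weights, inputs):
    """Hypothetical stand-in for one event-driven forward pass of a layer:
    returns the average firing rate obtained with weights * scale.
    Faked here with a monotone saturating function for illustration."""
    drive = scale * np.mean(np.abs(weights)) * np.mean(inputs)
    return 1.0 - np.exp(-drive)              # increases monotonically with scale

def scale_for_target_rate(weights, inputs, target=0.2, lo=1e-3, hi=1e3, iters=50):
    """Binary search for the multiplier that brings the layer's average
    firing rate to the target value (Section 4.1 initialization)."""
    for _ in range(iters):
        mid = np.sqrt(lo * hi)               # geometric midpoint: scale spans decades
        if firing_rate(mid, weights, inputs) < target:
            lo = mid
        else:
            hi = mid
    return np.sqrt(lo * hi)

rng = np.random.default_rng(0)
w = rng.normal(0.0, 1.0, (256, 128))
x = rng.uniform(0.0, 1.0, 128)
s = scale_for_target_rate(w, x)
print(f"multiplier = {s:.4f}, resulting rate = {firing_rate(s, w, x):.3f}")
```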
1. What is the main contribution of the paper regarding temporal Backpropagation for SNNs? 2. What are the strengths and weaknesses of the proposed method compared to prior works in SNN training? 3. How does the reviewer assess the novelty and significance of the paper's content? 4. Are there any concerns or questions regarding the scalability and efficiency of the method? 5. Is there any similarity between the backward kernel and backward connections in SNN training?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The authors propose a temporal backprop approach for training SNNs with an interesting backward kernel function.

Strengths And Weaknesses
The authors showcase that their temporal BP methodology yields high accuracy as compared to similar related works.
- The paper presents a direct training method using BP for SNNs. This work is very derivative and incremental. There is a lot of work from Priya Panda's group at Yale, Emre Neftci's group, and many others with regard to SNN training. The authors have failed to acknowledge most recent works, and the method they are proposing is very incremental in the context of those works. Further, many recent works on SNNs have targeted larger datasets including video segmentation with direct training. I wonder if the authors' method can even scale up, since their results are limited to CIFAR10 and CIFAR100.
- The authors did not comment on how many time steps their method requires to train. In the recent work [5], the authors show that they can use backward connections to train SNNs better; is there a similarity between the backward kernel and backward connections?

Below is a list of publications (not exhaustive) that the authors should check:
[1] [2] Enabling spike-based backpropagation for training deep neural network architectures. C Lee, SS Sarwar, P Panda, G Srinivasan, K Roy. Frontiers in Neuroscience, 119.
[3] Rate Coding Or Direct Coding: Which One Is Better For Accurate, Robust, And Energy-Efficient Spiking Neural Networks? Y Kim, H Park, A Moitra, A Bhattacharjee, Y Venkatesha, P Panda. ICASSP 2022.
[4] Neuromorphic Data Augmentation for Training Spiking Neural Networks. Y Li, Y Kim, H Park, T Geller, P Panda. arXiv preprint arXiv:2203.06145.
[5] Neural architecture search for spiking neural networks. Y Kim, Y Li, H Park, Y Venkatesha, P Panda. arXiv preprint arXiv:2201.10355.
[6] Optimizing deeper spiking neural networks for dynamic vision sensing. Y Kim, P Panda. Neural Networks 144, 686-698.
[7] Federated Learning with Spiking Neural Networks. Y Venkatesha, Y Kim, L Tassiulas, P Panda. IEEE Transactions on Signal Processing 2021.
[8] Beyond classification: directly training spiking neural networks for semantic segmentation. Y Kim, J Chough, P Panda. arXiv preprint arXiv:2110.07742.
[9] Visual explanations from spiking neural networks using interspike intervals. Y Kim, P Panda. Scientific Reports 11, Article number: 19037 (2021).
[10] Revisiting batch normalization for training low-latency deep spiking neural networks from scratch. Y Kim, P Panda. Frontiers in Neuroscience, 1638.

Questions
If the authors can highlight their technical novelty as compared to previous works, it will help me re-assess the paper's contributions. See above comments for reference.

Limitations
See weakness section.
NIPS
1. What is the focus and contribution of the paper on spiking neural networks? 2. What are the strengths of the proposed approach, particularly in terms of biological plausibility and event-based features? 3. What are the weaknesses of the paper, especially regarding the achievement of SOTA in benchmarks? 4. Do you have any concerns or questions regarding the training configuration and experimental setup? 5. What are the limitations of the paper, including the gap in biological plausibility and the lack of alternative methods?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The authors propose a modified event-driven backpropagation algorithm and investigate its performance on benchmarks. The authors also investigate whether the backpropagation follows a gradient assignment rule, finding that max pooling obeys this rule. This is one of the most important conclusions of the paper. The algorithm achieved SOTA on CIFAR-10, and was the first to be trained on CIFAR-100. Strengths And Weaknesses Spiking neural networks represent a new paradigm of neural networks that, among other advantages, incorporates time into the building blocks of its own functioning. Thus, in addition to having greater biological plausibility, it is also believed to be more coherent with learning in the real world, which contains the time dimension. The event-based learning paradigm preserves the biological plausibility and the event-driven advantages of spiking neural networks when compared with surrogate backpropagation. However, this new paradigm has not yet reached SOTA on any benchmark, not even on those that demand incorporation of the time dimension. Questions In line 233 you stated that more details about the training configuration were available in the appendix; I was not able to find an appendix section, nor an appendix file in the supplementary material, even though the code contains the configurations of the experiments. It is not clear to me whether you intend to provide more information in an appendix section or whether you were referring to the code provided. Limitations Spiking neural networks have not yet achieved SOTA on any benchmark. Although spiking neural networks are believed to have greater biological plausibility, it is not clear whether biological neural networks learn through backpropagation, which was the method tested in this study. Despite this, there is currently no alternative that works better than backpropagation. The authors also stated that another gap in biological plausibility is the reverse-time processing feature.
NIPS
Title Training Spiking Neural Networks with Event-driven Backpropagation Abstract Spiking Neural networks (SNNs) represent and transmit information by spatiotemporal spike patterns, which bring two major advantages: biological plausibility and suitability for ultralow-power neuromorphic implementation. Despite this, the binary firing characteristic makes training SNNs more challenging. To learn the parameters of deep SNNs in an event-driven fashion as in inference of SNNs, backpropagation with respect to spike timing is proposed. Although this event-driven learning has the advantages of lower computational cost and memory occupation, the accuracy is far below the recurrent neural network-like learning approaches. In this paper, we first analyze the commonly used temporal backpropagation training approach and prove that the sum of gradients remains unchanged between fully-connected and convolutional layers. Secondly, we show that the max pooling layer meets the above invariance rule, while the average pooling layer does not, which will suffer the gradient vanishing problem but can be revised to meet the requirement. Thirdly, we point out the reverse gradient problem for time-based gradients and propose a backward kernel that can solve this problem and keep the property of the invariable sum of gradients. The experimental results show that the proposed approach achieves state-of-the-art performance on CIFAR10 among time-based training methods. Also, this is the first time that the time-based backpropagation approach successfully trains SNN on the CIFAR100 dataset. Our code is available at https://github.com/zhuyaoyu/SNN-event-driven-learning. 1 Introduction Motivated by the principles of brain computing, Spiking Neural Networks (SNNs) are considered as the third generation of neural networks [1, 2]. SNNs are developed to work in power-critical scenarios, such as edge computing. When run on dedicated neuromorphic chips, they can accomplish the tasks [3, 4, 5] with ultra-low power consumption [6, 7, 8, 9, 10, 11, 12]. In contrast, the last generation of neural networks – Artificial Neural Networks (ANNs) [13], generally require a large amount of computation resource (e.g., GPUs). This advantage of SNNs on power consumption largely relies on efficient event-based computations [14, 15]. Another advantage of SNNs originates from their biological reality (compared to ANNs). The similarity between SNNs and biological brains provides an excellent opportunity to study how the brain computes at the neuronal circuit level [16]. Compared with artificial neural networks, developing supervised learning algorithms for spiking neural networks requires more effort. The main challenge for training SNNs comes from the binary nature of spikes and the non-differentiability of the membrane potential at spike time. This difficulty ∗Corresponding author 36th Conference on Neural Information Processing Systems (NeurIPS 2022). in training impedes the performance of SNNs in pattern classification tasks compared to their ANN counterparts. Existing supervised learning methods of SNNs can be grouped into two categories: The first category consists of recurrent neural network (RNN)-like learning algorithms. These algorithms treat spiking neural networks as binary-output recurrent neural networks and handle the discontinuities of membrane potential at spike times with continuous surrogate derivatives [17]. 
They typically train deep SNNs with surrogate gradients based on the idea of backpropagation through time (BPTT) algorithm [18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28]. While competitive accuracies are reported on the MNIST, CIFAR-10, and even ImageNet datasets [29, 30, 31], the gradient information is propagated each time step, whether or not a spike is emitted (as shown in Fig. 1). Therefore, these approaches do not follow the event-driven nature of spiking neural networks, which lose the asynchronous characteristic of SNNs and consume much power when trained on neuromorphic hardware. The second category is event-driven algorithms, which propagate gradient information through spikes. Precise spiking timing acts an important role in this situation, and they are extensively used in such algorithms [32, 33, 34, 35, 36, 37, 38, 39]. Classical examples include SpikeProp [32] and its variants [33, 40, 41]. These algorithms approximate the derivative of spike timing to membrane potential as the negative inverse of the time derivative of membrane potential function. This approximation is actually mathematically correct without preconditions [42]. Some other works apply non-leaky integrate-and-fire neurons to stabilize the training process [35, 38, 43]. Most of these works restrict each neuron to fire at most once, which inspires [44] to take the spike time as the state of a neuron, and model the relation of neurons by this spike time. As a result, the SNN is trained similarly to an ANN. Among the methods trained in an event-driven fashion (not modelling the relation of spike time to train like ANNs), the state-of-the-art model is TSSL-BP [39]. However, they use RNN-like surrogate gradients (a sigmoid function) to assist training. Hence, it is still challenging to train SNNs in a pure event-driven fashion. In this work, we develop a novel event-driven learning algorithm that can train high-performance deep SNNs. The main contributions of our work are as follows: 1. We prove that the typical SNN temporal backpropagation training approach assigns the gradient of an output spike of a neuron to the input spikes generating it. After summing this assignment rule altogether, we find that the sum of gradients is unchanged between layers. 2. We analyze the case of the pooling layer (which does not have neurons) and find that average pooling does not keep the gradient sum unchanged, but we can modify its backward formulas to meet the requirement. Meanwhile, the max-pooling layer satisfies the rule initially. 3. We point out the reverse gradient problem in event-driven learning that the direction of the temporal gradient is reversed during backpropagation when the kernel function of an input spike is decreasing. Then we propose a backward kernel function that addresses this problem while keeping the sum of gradients unchanged between layers. 4. The adjusted average pooling layer and the non-decreasing backward kernel enhances the performance of our model as well as the convergence speed. To our best knowledge, our proposed approach achieves state-of-the-art performance on CIFAR10 among event-driven training methods (with temporal gradients) for SNNs. Meanwhile, our method is the first event-driven backpropagation approach that successfully trains SNN on the larger-scale CIFAR100 dataset. 2 Backgrounds and Related Work The gradient-based learning of spiking neural networks contains two stages: the forward (inference) and the backward (learning) stages. 
In the forward stage, Leaky Integrate-and-Fire (LIF) neurons are most commonly used [18, 21, 26, 39], while other types of neurons are also applicable [32, 35]. Typically, these neuron models can be changed to the form of the Spike Response Model (SRM) [37, 45, 46], which is easily represented in an event-driven fashion. In the backward stage, the methods used by existing works exhibits more diversity. Here, we classify existing approaches from two dimensions: whether non-spike information is needed in discrete time steps (RNN-like) or not (event-driven) and whether the gradient represents spike scale (activation-based) or spike timing (time-based). Event-driven learning v.s. RNN-like learning: In both forward and backward computation of event-driven learning, information is only carried by spikes in SNNs. Specifically, in backward computation, gradient information is propagated through spikes [32, 33, 41, 35] (shown in Fig. 1a-b). On the other side, in RNN-like learning, information is not only carried by spikes in backward computation. Especially, gradient information can be propagated through a neuron that does not emit a spike in backward computation (shown in Fig. 1c-d). This gradient propagation is achieved by a surrogate function [12, 17, 18, 23, 47], which is a function of the membrane potential at the current time step ut, and the firing threshold θ. Time-based gradient v.s. activation-based gradient: Time-based gradients represent the (reverse) direction that the timing of a spike should move, that is, to move leftward or rightward on the time axis [32]. In backward propagation, the derivative of the firing time of a spike to the corresponding membrane potential ∂t∂u is often approximated as −1 ∂u ∂t [32, 33], denoting how the change of membrane potential will change the spike firing time (Fig. 1b). On the other side, activation-based approaches replace the Heaviside neuron activation function Θ(·) (spike st = Θ(ut − θ)) in forward propagation with derivable functions σ(·) in backward propagation, whether there are spikes in the current time step [18, 26, 31, 21]. Therefore, activation-based approaches essentially regard SNNs as binary RNNs and train them with approximated gradients, where the gradients indicate whether the values in the network (including the binary spikes) should be larger or smaller (Fig. 1d). As a result, time-based gradients are event-driven by nature, since the temporal gradient could only be carried by spikes. Meanwhile, activation-based gradients are more suitable for the RNN-like training scheme since the diversity of surrogate gradients largely relies on the fact that ut ̸= θ in discrete time steps [17], which no longer holds in continuous time simulation. If we want to apply activation-based gradients to event-driven learning, there should only be one value ∂s∂u when the membrane potential reaches the threshold. Tab. 1 lists whether a gradient type can be used in a learning fashion. It should be noticed that although activation-based gradient is more suitable for RNN-like learning, it is still able to be used for event-driven learning. 3 Methods 3.1 Forward Formulas We use the spike response model [1] for neurons in the network. The forward propagation in the network can be described as follows: u (l) i (t) = ∫ t t (l) i,last ∑ j w (l) ij · s (l−1) j (τ) · ϵ(t− τ)dτ, (1) s (l) i (t) = δ(u (l) i (t)− θ). 
(2) Here u(l)i (t) denotes the membrane potential of neuron i in layer l at time t, w (l) ij denotes the weight between neuron j in layer l − 1 and neuron i in layer l. t(l)i,last is the time of last spike of neuron i in layer l, and s(l)i (t) represents the spike emitted from neuron i at time t. The function δ(·) is the Dirac Delta function and θ is the firing threshold. The spike response kernel ϵ(t) can be described by ϵ(t) = τm τm − τs (e− t τm − e− t τs ), (3) where τm and τs are the membrane time constant and the synapse time constant respectively. Notice that we do not use reset kernels as in previous works [39, 41]. Instead, we eliminate the influence of input spikes prior to the last output spike on membrane potentials. 3.2 Rethinking the Classical Time-based Backward Propagation Formula In this subsection, we analyze the classical time-based backpropagation formula in SNNs. We first theoretically prove that the backpropagation rule essentially assigns gradients of output spikes of neurons to their input spikes. Then we check the pooling layer and show that the average pooling should be adjusted in backpropagation to satisfy the gradient assignment mechanism, while the max pooling naturally satisfies this mechanism. Invariant sum of gradients among layers with weights. The most commonly used time-based gradient backpropagation method origins from [32]. The two key approximations are as follows: ∂tk(s (l) i ) ∂u (l) i (tk) = −1 du (l) i (tk)/dt = ∑ tk,last(s (l) i )<tm(s (l−1) j )≤tk(s (l) i ) w (l) ij · ∂ϵ(tk − tm) ∂tm −1 , (4) ∂u (l) i (tk) ∂tm(s (l−1) j ) = w (l) ij · ∂ϵ(tk − tm) ∂tm , (5) where tk(s (l) i ) denotes the firing time tk of neuron i in layer l, tk,last(s (l) i ) is the firing time of the last spike emitted by neuron i before time tk. ∂tk(s (l) i ) ∂· means the influence of changing other variables on the timing of a spike, and ∂· ∂tk(s (l) i ) is the influence of changing spike timing on that variable. Combining Eqs. 4-5 and the forward formulas, we can get an invariant equality:∑ j ∑ tk,last(s (l) i )<tm(s (l−1) j )≤tk(s (l) i ) ∂tk(s (l) i ) ∂tm(s (l−1) j ) = 1. (6) The proof is provided in Appendix. Eq. 6 implies the fact that the reference time (t = 0) is meaningless, and only relative spike times matter. If we increase all the spike times in layer l − 1 by 1 unit along the time axis, then all the spike times in layer l are also increased by 1 unit along the time axis. Denote the loss function as L, then the gradient of L with respect to tm(s (l−1) j ) is: ∂L ∂tm(s (l−1) j ) = ∑ i ∑ tm(s (l−1) j )<tk(s (l) i )≤tm,next(s (l−1) j ) ∂L ∂tk(s (l) i ) · ∂tk(s (l) i ) ∂tm(s (l−1) j ) , (7) where tm,next(s (l−1) j ) denotes the firing time of the next spike emitted by neuron j after time tm. Therefore, we actually decompose the gradient ∂L/∂tk(s (l) i ) from layer l into (part of) a set of gradients ∂L/∂tm(s (l−1) j ) of the last layer l − 1, and keep their sum unchanged. In other words, we assign the weighted sum ∂L/∂tk(s (l) i ) by weights ∂tk(s (l) i )/∂tm(s (l−1) j ) to the gradients ∂L/∂tm(s (l−1) j ) in the last layer, as shown in Fig. 2. If we sum all the gradients together, we can get another invariant in this backpropagation rule:∑ i ∑ tk ∂L ∂tk(s (l) i ) = ∑ j ∑ tm ∂L ∂tm(s (l−1) j ) , (8) which means the sum of gradients ∑ i ∑ tk ∂L ∂tk(s (l) i ) never changes between layers under this rule. Gradient sum invariance for pooling layers. The above equations determine the gradient propagation in fully-connected and convolution layers (which contain neurons). 
The case for pooling layers (which do not contain neurons) is illustrated in Fig. 3. In average pooling with kernel size k, the gradient of one spike (at time t) in layer l is propagated evenly to the k × k neurons in layer l − 1 connected to it. Some of these k × k neurons may not emit spikes at time t. However, the gradients are also propagated to those neurons, which cannot further propagate the gradients to the previous layers. For instance, the two white squares (neurons) in layer l − 1 in Fig. 3 receive gradients, but they will not further propagate the gradients to layer l − 2. Thus, part of the gradient is lost in the backpropagation of the average pooling layer, which might cause the gradient vanishing problem. Meanwhile, the sum of the gradients carried by the spikes is not preserved across layers in this case. We can adjust the backpropagation stage in average pooling to satisfy the gradient sum invariance requirement by increasing the multiplier from 1/k^2 to 1/n_spike, where n_spike is the number of spikes emitted by the k × k neurons in layer l − 1 at the current time step. On the contrary, in max pooling, the gradient of a spike in layer l is entirely propagated to one of the spikes emitted by its connected neurons in layer l − 1 (shown in the middle of Fig. 3). This maintains the property of the invariable sum of gradients. It should be noticed that, although the backpropagation stage of max pooling differs from adjusted average pooling in discrete simulation, they are almost surely the same in continuous simulation, since (almost surely) no two spikes are emitted at exactly the same time in this case. 3.3 Deficiencies of the Typical Time-based Gradient Propagation and a New Approach In this section, we first point out the reverse gradient problem in event-driven learning: the gradient direction for spike timing becomes wrong when the spike response kernel is decreasing. Then we propose a backward kernel that not only solves the reverse gradient problem but also keeps the property of the invariable sum of gradients. The reverse gradient problem. Fig. 4 illustrates the membrane potential response of a neuron (with index i) to one of its input spikes (with a positive weight) from presynaptic neuron j. As in equation (3), the spike response kernel is a double-exponential function. Notice that this is not the whole membrane potential of neuron i, as it also receives inputs from other presynaptic neurons. We consider two spike times (t_k and t'_k) of neuron i and one spike time t_m of presynaptic neuron j. If the next spike of neuron i fires at time t_k, moving the presynaptic spike from t_m to t_m + ∆t will decrease the membrane potential at time t_k, which means postponing the spike at time t_k (t_m ↑ ⇒ u[t_k] ↓ ⇒ t_k ↑). This result shows that if the presynaptic neuron fires earlier, the postsynaptic neuron will fire earlier in this case. Conversely, if the next spike of neuron i fires at time t'_k, moving the input spike from t_m to t_m + ∆t will cause an increase of u[t'_k], which further moves t'_k leftward (t_m ↑ ⇒ u[t'_k] ↑ ⇒ t'_k ↓). As a result, if we want to move the output spike at t'_k leftward, we should move the input spike at t_m rightward, which reverses the direction. This might cause a problem: when we want to move t'_k leftward, we want the neuron to emit more spikes. However, in gradient backpropagation, this moves t_m rightward (assuming weight w_ij > 0), which may cause the neuron in the previous layer to fire fewer spikes, in turn causing neurons in the current layer to fire fewer spikes. 
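A quick numerical check of this reversal, using the double-exponential kernel of Eq. (3): the time constants and lags below are illustrative values only, but they show that dϵ/dτ is positive on the rising part of the kernel and negative on the decaying part; as formalized next, the decaying part is what reverses the propagated direction when w_ij > 0.

```python
import math

TAU_M, TAU_S = 20.0, 5.0   # illustrative membrane / synapse time constants

def d_eps_dtau(tau):
    """Derivative of eps(tau) = tau_m/(tau_m - tau_s) * (exp(-tau/tau_m) - exp(-tau/tau_s))."""
    c = TAU_M / (TAU_M - TAU_S)
    return c * (-math.exp(-tau / TAU_M) / TAU_M + math.exp(-tau / TAU_S) / TAU_S)

print(d_eps_dtau(2.0))    # > 0: kernel still rising, gradient direction preserved (the t_k case)
print(d_eps_dtau(30.0))   # < 0: kernel decaying, gradient direction reversed (the t'_k case)
```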
More formally, we assume the neuron i receives a input spike at time tm from presynaptic neuron j with synaptic weight wij , then the membrane potential of neuron i at time tk is: ui(tk) = wij · ϵ(tk − tm) + C, (9) where ϵ(t) denotes the spike response kernel (Eq. 3). C denotes the influence of other spikes, which is not in our concern here. In backward pass, according to Eqs. (4)-(5), we have ∂L ∂tm(sj) = ∂L ∂tk(si) · ∂tk(si) ∂ui(tk) · ∂ui(tk) ∂tm(sj) = ∂L ∂tk(si) · −1 dui(tk)/dt · wij · ∂ϵ(tk − tm) ∂tm . (10) Note that when a spike is emitted by neuron i at time tk, the slope of ui(t) > 0 at time tk, which means −1dui(tk)/dt has a negative sign. Considering ∂ϵ(tk−tm) ∂tm = −dϵ(τ)dτ , where τ = tk − tm, we get: sign ( ∂L ∂tm(sj) ) = sign ( ∂L ∂tk(si) ) · sign ( wij ) · sign ( dϵ(τ) dτ ) . (11) When sign ( dϵ(τ)/dτ ) = −1, which is the part of the spike response kernel that decreases (see the case at t′k in Fig. 4), the gradient direction of tm can be classified into two cases: When wij > 0, sign ( ∂L ∂tm(sj) ) = −sign ( ∂L ∂tk(si) ) , which means the gradient direction is reversed. When wij < 0, sign ( ∂L ∂tm(sj) ) = sign ( ∂L ∂tk(si) ) , which means the gradient direction is kept. In both cases, the sign of the gradient gets wrong in propagation between layers. Thus, the commonly used double-exponential spike response kernel is incompatible with the time-based gradient in event-driven learning. A smoother gradient assigning approach. Inspired by the above gradient inconsistency as well as the invariance of gradient sum, we propose a new gradient backpropagation approach here. Specifically, we replace the function ∂ϵ(tk−tm)∂tm in Eqs. (4) and (5) with a new function h(tk − tm). Therefore, the backpropagation formula between layers turns into: ∂tk(si) ∂tm(sj) = ∂tk(si) ∂ui(tk) · ∂ui(tk) ∂tm(sj) (12) = 0, if tm(sj) ≤ tk,last(si) or tm(sj) > tk(si),(∑ tk,last(si)<tm(s′j)≤tk(si) wij · h(tk − tm) )−1 · wij · h(tk − tm), otherwise. It can be see from Eq. 12 that ∂tk(si)∂tm(sj) will not change if we multiply h(t) by an arbitrary constant, so we do not need to care about the scale of h(t). Meanwhile, the property of invariable sum of gradients is kept after this replacement. To guarantee that the gradients are not reversed between layers, we should expect h(t) > 0 always hold when t > 0. Therefore, we choose h(t) = e − tτgrad to simplify the calculation, where τgrad is a tunable parameter. Notice that the function h(t) is only used in backward propagation, which means the spike response kernel in the forward propagation is not necessarily the integral of h(t). 3.4 Overall Learning Rule The loss function we use in this work is the counting loss function, which has the form L = 1 N ∑Nout i=1 ( 1 T ( N targeti − ∫ T 0 si(t)dt ))2 , where Nout is the number of output neurons and equals to the number of classes, si(t) represents the spike train emitted by neuron i. Besides, N target i is the target of the spike number outputted by neuron i and typically we set N targeti larger when i is the correct answer. During the learning process, the gradient is first propagated from the loss function to the firing time of each spike from the last layer to the first layer. The formula for this stage is (please refer to Appendix for the detailed deduction): ∂L ∂tm(s (l−1) j ) = ∑ i ∂L ∂tk,next(s (l) i ) · ∑ tlasti (s (l) i )<tm(s (l−1) j )≤tk,next(s (l) i ) w (l) ij · h(tk,next − tm) −1 · w(l)ij · h(tk,next − tm), (13) where tk,next(s (l) i ) is the firing time of the first spike emitted by neuron i after time tm. 
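As a concrete illustration of the layer-to-layer step in Eqs. (12)–(13), the sketch below redistributes the gradient of each output spike time onto the input spike times in its inter-spike window using the backward kernel h(t) = e^{−t/τ_grad}. It is a minimal sketch with made-up spike times, weights, and τ_grad, not the actual implementation; by construction, the assignment weights for each output spike sum to one, so the summed gradient is preserved between layers as in Eq. (8).

```python
import math

TAU_GRAD = 10.0                          # illustrative backward time constant

def h(t):
    """Backward kernel of Eq. (12); only h(t) > 0 for t > 0 matters."""
    return math.exp(-t / TAU_GRAD)

def backprop_spike_times(out_spikes, in_spikes, w, dL_dout):
    """Propagate gradients from output spike times to input spike times (Eq. 13).

    out_spikes[i] : sorted firing times of postsynaptic neuron i
    in_spikes[j]  : sorted firing times of presynaptic neuron j
    w[(i, j)]     : synaptic weight between postsynaptic i and presynaptic j
    dL_dout[i][k] : gradient of the loss w.r.t. out_spikes[i][k]
    """
    dL_din = {j: [0.0] * len(ts) for j, ts in in_spikes.items()}
    for i, t_outs in out_spikes.items():
        for k, t_k in enumerate(t_outs):
            t_prev = t_outs[k - 1] if k > 0 else -math.inf
            # input spikes falling in the window (t_prev, t_k] contribute to this output spike
            contrib = [(j, m, w[(i, j)] * h(t_k - t_m))
                       for j, t_ins in in_spikes.items()
                       for m, t_m in enumerate(t_ins) if t_prev < t_m <= t_k]
            denom = sum(c for _, _, c in contrib)
            if not contrib or denom == 0.0:
                continue
            for j, m, c in contrib:
                dL_din[j][m] += dL_dout[i][k] * c / denom   # assignment weights of Eq. (12)
    return dL_din

# Toy example: one postsynaptic neuron (0), two presynaptic neurons (0 and 1).
out_spikes = {0: [10.0]}
in_spikes = {0: [4.0, 7.0], 1: [6.0]}
w = {(0, 0): 0.8, (0, 1): 1.2}
grads = backprop_spike_times(out_spikes, in_spikes, w, dL_dout={0: [1.0]})
print(grads)
print(sum(sum(g) for g in grads.values()))   # 1.0: the gradient sum is preserved
```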
After this, the gradient with respect to the weights in each layer is calculated by summing up the products of the gradients of the spike firing times in the same layer and the derivatives of those firing times with respect to the weights. The learning rule for this stage is ∂L/∂w^(l)_ij = Σ_{t_m(s^(l−1)_j)} ∂L/∂t_{k,next}(s^(l)_i) · (−1 / (∂u^(l)_i(t_{k,next})/∂t)) · ϵ(t_{k,next} − t_m). (14) 4 Experiments In this section, we validate the effectiveness of our method on the MNIST [48], Fashion-MNIST [49], N-MNIST [50], CIFAR10 [51], and CIFAR100 [51] datasets. This section is organized as follows: we first introduce the training details, then evaluate the performance of our algorithm and compare it with state-of-the-art event-driven learning approaches. Finally, we conduct ablation studies to illustrate the effectiveness of our proposed modules. More details of the configurations can be found in the Appendix. 4.1 Training Details Initialization: When training in an event-driven fashion, gradient information is only carried by spikes. Therefore, gradient information is completely blocked by a layer when there are no spikes in that layer. To solve this problem, we start with arbitrarily initialized weights and scale each layer's weights by a multiplier chosen so that the average firing rate of that layer reaches a target value. We obtain these multipliers by binary search, and this strategy works well in practice. Supervisory signals: Another problem we face is that output neurons corresponding to certain classes stop firing after a certain number of training epochs. This makes the corresponding gradients difficult to propagate through the network, which in turn keeps these neurons silent and makes it impossible to recognize those classes correctly in subsequent epochs. To address this problem, we utilize supervisory signals: the output neuron corresponding to the ground-truth label is forced to fire at the end of the simulation. Experiment settings: In our experiments, we use real-valued spike currents representing the pixel intensities of the image as inputs. We list the network architecture used by each work and the accuracy it achieves on each dataset in Tab. 2. Notice that the output layer is, by default, a fully-connected layer containing the same number of neurons as the number of classes in the dataset, and is omitted from the architecture representation. We run all experiments on a single Nvidia A100 GPU. 4.2 Comparison with the State-of-the-Art Tab. 2 reports the accuracies of the proposed method and the competing methods. The performance of our algorithm is lower than TSSL-BP by 0.06% on the MNIST dataset and 0.01% on the N-MNIST dataset. However, the outputs of their network are real-valued postsynaptic currents, whereas the outputs of our network are binary spikes. In addition, they use RNN-like gradients to assist learning. On the remaining datasets, we achieve state-of-the-art performance among the works with temporal gradients. For the Fashion-MNIST dataset, our algorithm performs 0.45% higher than the previous SOTA. On the CIFAR10 dataset, we achieve 92.45% accuracy with a 14-layer SEW-Resnet and 92.10% with VGG11, both better than the current SOTA of 91.41%. For the CIFAR100 dataset, we are the first work to successfully train SNNs with time-based gradients in an event-driven fashion, achieving an accuracy of 63.97%. 
4.3 Ablation Studies To show the effect of our proposed modules, we conduct ablation experiments on the CIFAR10 dataset. Specifically, two proposed components are taken into consideration: (1) As mentioned in Section 3.3, we compare the proposed gradient assignment function h(t) = e^{−βt} in (12) with the commonly used one h(t) = dϵ(t)/dt. (2) We compare the results of three different types of pooling layers (average pooling, adjusted average pooling, and max pooling) mentioned in Section 3.2. We have tried all combinations of gradient assignment functions and pooling layers. The test accuracy of these different settings is shown in Tab. 3. The results in Tab. 3 meet our expectations. For the pooling layer, max pooling and adjusted average pooling perform much better than plain average pooling. This accords with the conclusion in Section 3.2 that pooling layers that keep the property of an invariant sum of gradients are better than those that do not. The proposed gradient assignment function h(t) = e^{−βt} is also better than the commonly used one h(t) = dϵ(t)/dt for all three types of pooling layers. In addition, as shown in Fig. 5, h(t) = e^{−βt} converges faster than h(t) = dϵ(t)/dt in the early stage of training. 5 Conclusion and Discussion In this work, we analyze the commonly used SNN temporal backpropagation training approach and find that it follows a gradient assignment rule. We also find that the average pooling layer does not obey this rule while the max pooling layer does. We show that the direction of the temporal gradient is reversed when the spike response kernel is decreasing, and we avoid this with an increasing kernel in backpropagation. Our algorithm achieves state-of-the-art performance on CIFAR10 among time-based SNN learning approaches and successfully learns the parameters of an SNN on CIFAR100 for the first time. Compared with RNN-like methods, the proposed event-based learning algorithm has lower computational cost and memory occupation when there are many time steps. Besides, our algorithm does not need bias between layers. Meanwhile, gradient propagation between spikes instead of time steps can mitigate the gradient explosion/vanishing problem along the time axis. However, there is still a gap between event-driven backpropagation and biologically plausible learning, since event-driven backpropagation processes the spike train in reverse time, which conflicts with online learning in the real world; closing this gap is left for future research. 6 Acknowledgements This work was supported by the National Natural Science Foundation of China Grants 62176003 and 62088102.
1. What is the main contribution of the paper regarding spiking neural networks? 2. What are the strengths and weaknesses of the proposed approach, particularly in its novelty and comparisons with prior works? 3. Do you have any questions or concerns regarding the training methodology, experimental results, and efficiency claims? 4. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content? 5. Are there any potential limitations or societal impacts associated with the proposed method?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The focus of this paper is the training of spiking neural networks. Specifically, the paper proposes back-propagation with respect to spike timing. The authors analyze previous methods and propose a small increment over the current state-of-the-art. The results are not clearly discussed. Strengths And Weaknesses Strengths: The work focuses on an aspect of the learning algorithms that requires optimization and innovation. Weaknesses: 1. It is hard to understand what the axes are for Figure 1. 2. It is unclear what the major contributions of the paper are. Analyzing previous work does not constitute a contribution. 3. It is unclear how the proposed method enables better results. For instance, Table 1 reports similar accuracies for this work compared to the previous ones. 4. The authors talk about advantages over previous work in terms of efficiency; however, the paper does not report any metric showing that it is more efficient to train with the proposed method. 5. Does the proposed method converge faster compared to previous algorithms? 6. How does the proposed method compare against surrogate gradient techniques? 7. The paper does not discuss how the datasets are converted to the spike domain. Questions Please refer to Strengths and Weaknesses for the points. Limitations There are no potential negative societal impacts. One major limitation of this work is its applicability to neuromorphic hardware and how the work shown on GPU will translate to neuromorphic cores.
NIPS
Title A Comprehensive Linear Speedup Analysis for Asynchronous Stochastic Parallel Optimization from Zeroth-Order to First-Order Abstract Asynchronous parallel optimization received substantial successes and extensive attention recently. One of core theoretical questions is how much speedup (or benefit) the asynchronous parallelization can bring to us. This paper provides a comprehensive and generic analysis to study the speedup property for a broad range of asynchronous parallel stochastic algorithms from the zeroth order to the first order methods. Our result recovers or improves existing analysis on special cases, provides more insights for understanding the asynchronous parallel behaviors, and suggests a novel asynchronous parallel zeroth order method for the first time. Our experiments provide novel applications of the proposed asynchronous parallel zeroth order method on hyper parameter tuning and model blending problems. 1 Introduction Asynchronous parallel optimization received substantial successes and extensive attention recently, for example, [5, 25, 31, 33, 34, 37]. It has been used to solve various machine learning problems, such as deep learning [4, 7, 26, 36], matrix completion [25, 28, 34], SVM [15], linear systems [3, 21], PCA [10], and linear programming [32]. Its main advantage over the synchronous parallel optimization is avoiding the synchronization cost, so it minimizes the system overheads and maximizes the efficiency of all computation workers. One of core theoretical questions is how much speedup (or benefit) the asynchronous parallelization can bring to us, that is, how much time can we save by employing more computation resources? More precisely, people are interested in the running time speedup (RTS) with T workers: RTS(T ) = running time using a single worker running time using T workers . Since in the asynchronous parallelism all workers keep busy, RTS can be measured roughly by the computational complexity speedup (CCS) with T workers1 CCS(T ) = total computational complexity using a single worker total computational complexity using T workers × T. In this paper, we are mainly interested in the conditions to ensure the linear speedup property. More specifically, what is the upper bound on T to ensure CCS(T ) = Θ(T )? Existing studies on special cases, such as asynchronous stochastic gradient descent (ASGD) and asynchronous stochastic coordinate descent (ASCD), have revealed some clues for what factors can 1For simplicity, we assume that the communication cost is not dominant throughout this paper. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. affect the upper bound of T . For example, Agarwal and Duchi [1] showed the upper bound depends on the variance of the stochastic gradient in ASGD; Niu et al. [25] showed that the upper bound depends on the data sparsity and the dimension of the problem in ASGD; and Avron et al. [3], Liu and Wright [19] found that the upper bound depends on the problem dimension as well as the diagonal dominance of the Hessian matrix of the objective. However, it still lacks a comprehensive and generic analysis to comprehend all pieces and show how these factors jointly affect the speedup property. This paper provides a comprehensive and generic analysis to study the speedup property for a broad range of asynchronous parallel stochastic algorithms from the zeroth order to the first order methods. 
To avoid unnecessary complication and cover practical problems and algorithms, we consider the following nonconvex stochastic optimization problem: minx∈RN f(x) := Eξ(F (x; ξ)), (1) where ξ ∈ Ξ is a random variable, and both F (·; ξ) : RN → R and f(·) : RN → R are smooth but not necessarily convex functions. This objective function covers a large scope of machine learning problems including deep learning. F (·; ξ)’s are called component functions in this paper. The most common specification is that Ξ is an index set of all training samples Ξ = {1, 2, · · · , n} and F (x; ξ) is the loss function with respect to the training sample indexed by ξ. We highlight the main contributions of this paper in the following: • We provide a generic analysis for convergence and speedup, which covers many existing algorithms including ASCD, ASGD ( implementation on parameter server), ASGD (implementation on multicore systems), and others as its special cases. • Our generic analysis can recover or improve the existing results on special cases. • Our generic analysis suggests a novel asynchronous stochastic zeroth-order gradient descent (ASZD) algorithm and provides the analysis for its convergence rate and speedup property. To the best of our knowledge, this is the first asynchronous parallel zeroth order algorithm. • The experiment includes a novel application of the proposed ASZD method on model blending and hyper parameter tuning for big data optimization. 1.1 Related Works We first review first-order asynchronous parallel stochastic algorithms. Table 1 summarizes existing linear speedup results for asynchronous parallel optimization algorithms mostly related to this paper. The last block of Table 1 shows the results in this paper. Reddi et al. [29] proved the convergence of asynchronous variance reduced stochastic gradient (SVRG) method and its speedup in sparse setting. Mania et al. [22] provides a general perspective (or starting point) to analyze for asynchronous stochastic algorithms, including HOGWILD!, asynchronous SCD and asynchronous sparse SVRG. The fundamental difference in our work lies on that we apply different analysis and our result can be directly applied to various special cases, while theirs cannot. In addition, there is a line of research studying the asynchronous ADMM type methods, which is not in the scope of this paper. We encourage readers to refer to recent literatures, for example, Hong [14], Zhang and Kwok [35]. We end this section by reviewing the zeroth-order stochastic methods. We use N to denote the dimension of the problem, K to denote the iteration number, and σ to the variance of stochastic gradient. Nesterov and Spokoiny [24] proved a convergence rate of O(N/ √ K) for zeroth-order SGD applied to convex optimization. Based on [24], Ghadimi and Lan [12] proved a convergence rate of O( √ N/K) rate for zeroth-order SGD on nonconvex smooth problems. Jamieson et al. [16] shows a lower bound O(1/ √ K) for any zeroth-order method with inaccurate evaluation. Duchi et al. [9] proved a O(N1/4/K + 1/ √ K) rate for zeroth order SGD on convex objectives but with some very different assumptions compared to our paper. Agarwal et al. [2] proved a regret of O(poly(N ) √ K) for zeroth-order bandit algorithm on convex objectives. For more comprehensive review of asynchronous algorithms, please refer to the long version of this paper on arXiv:1606.00498. 1.2 Notation • ei ∈ RN denotes the ith natural unit basis vector. 
• E(·) means taking the expectation with respect to all random variables, while Ea(·) denotes the expectation with respect to a random variable a. • ∇f(x) ∈ RN is the gradient of f(x) with respect to x. Let S be a subset of {1, · · · , N}. ∇Sf(x) ∈ RN is the projection of ∇f(x) onto the index set S, that is, setting components of ∇f(x) outside of S to be zero. We use∇if(x) ∈ RN to denote ∇{i}f(x) for short. • f∗ denotes the optimal objective value in (1). 2 Algorithm Algorithm 1 Generic Asynchronous Stochastic Algorithm (GASA) Require: x0,K, Y, (µ1, µ2, . . . , µN ), {γk}k=0,...,K−1 ▷ γk is the step length for kth iteration Ensure: {xk}Kk=0 1: for k = 0, . . . ,K − 1 do 2: Randomly select a component function index ξk and a set of coordinate indices Sk, where |Sk| = Y ; 3: xk+1 = xk − γkGSk(x̂k; ξk); 4: end for We illustrate the asynchronous parallelism by assuming a centralized network: a central node and multiple child nodes (workers). The central node maintains the optimization variable x. It could be a parameter server if implemented on a computer cluster [17]; it could be a shared memory if implemented on a multicore machine. Given a base algorithm A, all child nodes run algorithm A independently and concurrently: read x from the central node (we call the result of this read x̂, and it is mathematically defined later in (4)), calculate locally using the x̂, and modify x on the central node. There is no need to synchronize child nodes. Therefore, all child nodes stay busy and consequently their efficiency gets maximized. In other words, we have CCS(T ) ≈ RTS(T ). Note that due to the asynchronous parallel mechanism the variable x in the central node is not updated exactly following the protocol of Algorithm A, since when a child node returns its computation result, the x in the central node might have been changed by other child nodes. Thus a new analysis is required. A fundamental question would be under what conditions a linear speedup can be guaranteed. In other words, under what conditions CCS(T ) = Θ(T ) or equivalently RTS(T ) = Θ(T )? To provide a comprehensive analysis, we consider a generic algorithm A – the zeroth order hybrid of SCD and SGD: iteratively sample a component function2 indexed by ξ and a coordinate block S⊆{1, 2, · · · , N}, where |S| = Y for some constant Y and update x with x← x− γGS(x; ξ) (2) where GS(x; ξ) is an approximation to the block coordinate stochastic gradient NY −1∇SF (x; ξ): GS(x; ξ) := ∑ i∈S N 2Y µi (F (x+ µiei; ξ)− F (x− µiei; ξ))ei, S ⊆ {1, 2, . . . , N}. (3) In the definition of GS(x; ξ), µi is the approximation parameter for the ith coordinate. (µ1, µ2, . . . , µN ) is predefined in practice. We only use the function value (the zeroth order information) to estimate GS(x; ξ). It is easy to see that the closer to 0 the µi’s are, the closer GS(x; ξ) and NY −1∇Sf(x; ξ) will be. In particular, limµi→0,∀i GS(x; ξ) = NY −1∇Sf(x; ξ). 2The algorithm and theoretical analysis followed can be easily extended to the minibatch version. Applying the asynchronous parallelism, we propose a generic asynchronous stochastic algorithm in Algorithm 1. This algorithm essentially characterizes how the value of x is updated in the central node. γk is the predefined steplength (or learning rate). K is the total number of iterations (note that this iteration number is counted by the the central node, that is, any update on x no matter from which child node will increase this counter.) 
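To make the estimator in Eq. (3) and the update on line 3 of Algorithm 1 concrete, the sketch below implements the two-point block-coordinate zeroth-order gradient and runs a serial stand-in for the GASA loop on a toy least-squares problem. The asynchronous worker/central-node mechanics are omitted, and the component functions, step length, smoothing parameters, and block size are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def zeroth_order_block_grad(F, x, xi, S, mu):
    """Two-point estimator G_S(x; xi) of Eq. (3)."""
    N, Y = x.size, len(S)
    g = np.zeros(N)
    for i in S:
        e_i = np.zeros(N)
        e_i[i] = 1.0
        g[i] = N / (2.0 * Y * mu[i]) * (F(x + mu[i] * e_i, xi) - F(x - mu[i] * e_i, xi))
    return g

# Toy problem: component functions F(x; xi) = 0.5 * (a_xi^T x - b_xi)^2.
N, n_samples, Y = 20, 100, 4
A, b = rng.normal(size=(n_samples, N)), rng.normal(size=n_samples)
F = lambda x, xi: 0.5 * (A[xi] @ x - b[xi]) ** 2

x, mu, gamma = np.zeros(N), np.full(N, 1e-4), 0.01
for k in range(2000):                                       # serial stand-in for the asynchronous loop
    xi = rng.integers(n_samples)                            # sample a component function index
    S = rng.choice(N, size=Y, replace=False)                # sample a coordinate block, |S| = Y
    x -= gamma * zeroth_order_block_grad(F, x, xi, S, mu)   # line 3 of Algorithm 1
print("mean objective:", np.mean([F(x, i) for i in range(n_samples)]))
```

In the asynchronous setting, each worker would run this read–estimate–update cycle independently against the central copy of x, which is exactly where the delayed iterate x̂_k discussed next comes from.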
As we mentioned, the key difference of the asynchronous algorithm from the protocol of Algorithm A in Eq. (2) is that x̂k may be not equal to xk. In asynchronous parallelism, there are two different ways to model the value of x̂k: • Consistent read: x̂k is some early existed state of x in the central node, that is, x̂k = xk−τk for some τk ≥ 0. This happens if reading x and writing x on the central node by any child node are atomic operations, for instance, the implementation on a parameter server [17]. • Inconsistent read: x̂k could be more complicated when the atomic read on x cannot be guaranteed, which could happen, for example, in the implementation on the multi-core system. It means that while one child is reading x in the central node, other child nodes may be performing modifications on x at the same time. Therefore, different coordinates of x read by any child node may have different ages. In other words, x̂k may not be any existed state of x in the central node. Readers who want to learn more details about consistent read and inconsistent read can refer to [3, 18, 19]. To cover both cases, we note that x̂k can be represented in the following generic form: x̂k = xk − ∑ j∈J(k)(xj+1 − xj), (4) where J(k) ⊂ {k−1, k−2, . . . , k−T} is a subset of the indices of early iterations, and T is the upper bound for staleness. This expression is also considered in [3, 18, 19, 27]. Note that the practical value of T is usually proportional to the number of involved nodes (or workers). Therefore, the total number of workers and the upper bound of the staleness are treated as the same in the following discussion and this notation T is abused for simplicity. 3 Theoretical Analysis Before we show the main results of this paper, let us first make some global assumptions commonly used for the analysis of stochastic algorithms.3 Bounded Variance of Stochastic Gradient Eξ(∥∇F (x; ξ)−∇f(x)∥2) ≤ σ2,∀x. Lipschitzian Gradient The gradient of both the objective and its component functions are Lipschitzian:4 max{∥∇f(x)−∇f(y)∥, ∥∇F (x; ξ)−∇F (y; ξ)∥} ≤ L∥x− y∥ ∀x,∀y, ∀ξ. (5) Under the Lipschitzian gradient assumption, define two more constants Ls and Lmax. Let s be any positive integer bounded by N . Define Ls to be the minimal constant satisfying the following inequality: ∀ξ, ∀x, αiei∀S ⊂ {1, 2, ..., N} with |S| ≤ s for any z = ∑ i∈S we have: max {∥∇f(x)−∇f (x+ z)∥ , ∥∇F (x; ξ)−∇F (x+ z; ξ)∥} ≤ Ls ∥z∥ Define L(i) for i ∈ {1, 2, . . . , N} as the minimum constant that satisfies: max{∥∇if(x)−∇if(x+ αei)∥, ∥∇iF (x; ξ)−∇iF (x+ αei; ξ)∥} ≤ L(i)|α|. ∀ξ,∀x. (6) Define Lmax := maxi∈{1,...,N} L(i). It can be seen that Lmax ≤ Ls ≤ L. Independence All random variables ξk, Sk for k = 0, 1, · · · ,K are independent to each other. Bounded Age Let T be the global bound for delay: J(k)⊆{k − 1, . . . , k − T},∀k, so |J(k)| ≤ T . We define the following global quantities for short notations: ω := (∑N i=1 L 2 (i)µ 2 i ) /N, α1 := 4 + 4 ( TY + Y 3/2T 2/ √ N ) L2T /(L 2 Y N), α2 := Y/((f(x0)− f∗)LY N), α3 := (K(Nω + σ2)α2 + 4)L2Y /L2T . (7) Next we show our main result in the following theorem: 3Some underlying assumptions such as reading and writing a float number are omitted here. As pointed in [25], these behaviors are guaranteed by most modern architectures. 4Note that the Lipschitz assumption on the component function F (x; ξ)’s can be eliminated when it comes to first order methods (i.e., ω → 0) in our following theorems. Theorem 1 (Generic Convergence Rate for GASA). 
Choose the steplength γk to be a constant γ in Algorithm 1 γ−1k = γ −1 = 2LY NY −1 (√ α21/(K(Nω + σ 2)α2 + α1) + √ K(Nω + σ2)α2 ) ,∀k and suppose the age T is bounded by T ≤ √ N 2Y 1/2 (√ 1 + 4Y −1/2N1/2α3 − 1 ) . We have the fol- lowing convergence rate:∑K k=0 E∥∇f(xk)∥ 2 K ⩽ 20 Kα2 + 1 Kα2 ( L2T L2Y √ 1 + 4Y −1/2N1/2α3 − 1√ NY −1 + 11 √ Nω + σ2 √ Kα2 ) +Nω. (8) Roughly speaking, the first term on the RHS of (8) is related to SCD; the second term is related to “stochastic” gradient descent; and the last term is due to the zeroth-order approximation. Although this result looks complicated (or may be less elegant), it is capable to capture many important subtle structures, which can be seen by the subsequent discussion. We will show how to recover and improve existing results as well as prove the convergence for new algorithms using Theorem 1. To make the results more interpretable, we use the big-O notation to avoid explicitly writing down all the constant factors, including all L’s, f(x0), and f∗ in the following corollaries. 3.1 Asynchronous Stochastic Coordinate Descent (ASCD) We apply Theorem 1 to study the asynchronous SCD algorithm by taking Y = 1 and σ = 0. Sk = {ik} only contains a single randomly sampled coordinate, and ω = 0 (or equivalently µi = 0,∀i). The essential updating rule on x is xk+1 = xk − γk∇ikf(x̂k). Corollary 2 (ASCD). Let ω = 0, σ = 0, and Y = 1 in Algorithm 1 and Theorem 1. If T ⩽ O(N3/4), (9) the following convergence rate holds:(∑K k=0 E∥∇f(xk)∥2 ) /K ⩽ O(N/K). (10) The proved convergence rate O(N/K) is consistent with the existing analysis of SCD [30] or ASCD for smooth optimization [20]. However, our requirement in (9) to ensure the linear speedup property is better than the one in [20], by improving it from T ≤ O(N1/2) to T ≤ O(N3/4). Mania et al. [22] analyzed ASCD for strongly convex objectives and proved a linear speedup smaller than O(N1/6), which is also more restrictive than ours. 3.2 Asynchronous Stochastic Gradient Descent (ASGD) ASGD has been widely used to solve deep learning [7, 26, 36], NLP [4, 13], and many other important machine learning problems [25]. There are two typical implementations of ASGD. The first type is to implement on the computer cluster with a parameter sever [1, 17]. The parameter server serves as the central node. It can ensure the atomic read or write of the whole vector x and leads to the following updating rule for x (setting Y = N and µi = 0,∀i in Algorithm 1): xk+1 = xk − γk∇F (x̂k; ξk). (11) Note that a single iteration is defined as modifying the whole vector. The other type is to implement on a single computer with multiple cores. In this case, the central node corresponds to the shared memory. Multiple cores (or threads) can access it simultaneously. However, in this model atomic read and write of x cannot be guaranteed. Therefore, for the purpose of analysis, each update on a single coordinate accounts for an iteration. It turns out to be the following updating rule (setting Sk = {ik}, that is, Y = 1, and µi = 0,∀i in Algorithm 1): xk+1 = xk − γk∇ikF (x̂k; ξk). (12) Readers can refer to [3, 18, 25] for more details and illustrations for these two implementations. Corollary 3 (ASGD in (11)). Let ω = 0 (or µi = 0,∀i equivalently) and Y = N in Algorithm 1 and Theorem 1. If T ⩽ O (√ Kσ2 + 1 ) , (13) then the following convergence rate holds:(∑K k=0 E∥∇f(xk)∥2 ) /K ⩽ O ( σ/ √ K + 1/K ) . (14) First note that the convergence rate in (14) is tight since it is consistent with the serial (nonparallel) version of SGD [23]. 
We compare this linear speedup property indicated by (13) with results in [1], [11], and [18]. To ensure such rate, Agarwal and Duchi [1] need T to be bounded by T ≤ O(K1/4 min{σ3/2, √ σ}), which is inferior to our result in (13). Feyzmahdavian et al. [11] need T to be bounded by σ1/2K1/4 to achieve the same rate, which is also inferior to our result. Our requirement is consistent with the one in [18]. To the best of our knowledge, it is the best result so far. Corollary 4 (ASGD in (12)). Let ω = 0 (or equivalently, µi = 0,∀i) and Y = 1 in Algorithm 1 and Theorem 1. If T ⩽ O (√ N3/2 +KN1/2σ2 ) , (15) then the following convergence rate holds(∑K k=0 E∥∇f(xk)∥2 ) /K ⩽ O (√ N/Kσ +N/K ) . (16) The additional factor N in (16) (comparing to (14)) arises from the different way of counting the iteration. This additional factor also appears in [25] and [18]. We first compare our result with [18], which requires T to be bounded by O( √ KN1/2σ2). We can see that our requirement in (16) allows a larger value for T , especially when σ is small such that N3/2 dominates KN1/2σ2. Next we compare with [25], which assumes that the objective function is strongly convex. Although this is sort of comparing “apple” with “orange”, it is still meaningful if one believes that the strong convexity would not affect the linear speedup property, which is implied by [22]. In [25], the linear speedup is guaranteed if T ≤ O(N1/4) under the assumption that the sparsity of the stochastic gradient is bounded by O(1). In comparison, we do not require the assumption of sparsity for stochastic gradient and have a better dependence on N . Moreover, beyond the improvement over existing analysis in [22] and [18], our analysis provides some interesting insights for asynchronous parallelism. Niu et al. [25] essentially suggests a large problem dimension N is beneficial to the linear speedup, while Lian et al. [18] and many others (for example, Agarwal and Duchi [1], Feyzmahdavian et al. [11]) suggest that a large stochastic variance σ (this often implies the number of samples is large) is beneficial to the linear speedup. Our analysis shows the combo effect of N and σ and shows how they improve the linear speedup jointly. 3.3 Asynchronous Stochastic Zeroth-order Descent (ASZD) We end this section by applying Theorem 1 to generate a novel asynchronous zeroth-order stochastic descent algorithm, by setting the block size Y = 1 (or equivalently Sk = {ik}) in GSk(x̂k; ξk) GSk(x̂k; ξk) = G{ik}(x̂k; ξk) = (F (x̂k + µikeik ; ξk)− F (x̂k − µikeik ; ξk))/(2µik)eik . (17) To the best of our knowledge, this is the first asynchronous algorithm for zeroth-order optimization. Corollary 5 (ASZD). Set Y = 1 and all µi’s to be a constant µ in Algorithm 1. Suppose that µ satisfies µ ⩽ O ( 1/ √ K +min {√ σ(NK)−1/4, σ/ √ N }) , (18) and T satisfies T ⩽ O (√ N3/2 +KN1/2σ2 ) . (19) We have the following convergence rate(∑K k=0 E∥∇f(xk)∥2 ) /K ⩽ O ( N/K + √ N/Kσ ) . (20) We firstly note that the convergence rate in (20) is consistent with the rate for the serial (nonparallel) zeroth-order stochastic gradient method in [12]. Then we evaluate this result from two perspectives. First, we consider T = 1, which leads to the serial (non-parallel) zeroth-order stochastic descent. Our result implies a better dependence on µ, comparing with [12].5 To obtain such convergence rate 5Acute readers may notice that our way in (17) to estimate the stochastic gradient is different from the one used in [12]. 
Our method only estimates a single coordinate gradient of a sampled component function, while Ghadimi and Lan [12] estimate the whole gradient of the sampled component function. Our estimation is more accurate but less aggressive. The proved convergence rate actually improves a small constant in [12]. in (20), Ghadimi and Lan [12] require µ ⩽ O ( 1/(N √ K) ) , while our requirement in (18) is much less restrictive. An important insight in our requirement is to suggest the dependence on the variance σ: if the variance σ is large, µ is allowed to be a much larger value. This insight meets the common sense: a large variance means that the stochastic gradient may largely deviate from the true gradient, so we are allowed to choose a large µ to obtain a less exact estimation for the stochastic gradient without affecting the convergence rate. From the practical view of point, it always tends to choose a large value for µ. Recall the zeroth-order method uses the function difference at two different points (e.g., x+µei and x−µei) to estimate the differential. In a practical system (e.g., a concrete control system), there usually exists some system noise while querying the function values. If two points are too close (in other words µ is too small), the obtained function difference is dominated by noise and does not really reflect the function differential. Second, we consider the case T ≥ 1, which leads to the asynchronous zeroth-order stochastic descent. To the best of our knowledge, this is the first such algorithm. The upper bound for T in (19) essentially indicates the requirement for the linear speedup property. The linear speedup property here also shows that even if Kσ2 is much smaller than 1, we still have O(N3/4) linear speedup, which shows a fundamental understanding of asynchronous stochastic algorithms that N and σ can improve the linear speedup jointly. 4 Experiment Since the ASCD and various ASGDs have been extensively validated in recent papers. We conduct two experiments to validate the proposed ASZD on in this section. The first part applies ASZD to estimate the parameters for a synthetic black box system. The second part applies ASZD to the model combination for Yahoo Music Recommendation Competition. 4.1 Parameter Optimization for A Black Box We use a deep neural network to simulate a black box system. The optimization variables are the weights associated with a neural network. We choose 5 layers (400/100/50/20/10 nodes) for the neural network with 46380 weights (or parameters) totally. The weights are randomly generated from i.i.d. Gaussian distribution. The output vector is constructed by applying the network to the input vector plus some Gaussian random noise. We use this network to generate 463800 samples. These synthetic samples are used to optimize the weights for the black box. (We pretend not to know the structure and weights of this neural network because it is a black box.) To optimize (estimate) the parameters for this black box, we apply the proposed ASZD method. The experiment is conducted on the machine (Intel Xeon architecture), which has 4 sockets and 10 cores for each socket. We run Algorithm 1 on various numbers of cores from 1 to 32 and the steplength is chosen as γ = 0.1, which is based on the best performance of Algorithm 1 running on 1 core to achieve the precision 10−1 for the objective value. The speedup is reported in Table 2. 
We observe that the iteration speedup is almost linear, while the running time speedup is slightly worse than the iteration speedup. We also draw Figure 1 (see the supplement) to show the curves of the objective value against the number of iterations and the running time, respectively. 4.2 Asynchronous Parallel Model Combination for Yahoo Music Recommendation Competition In KDD-Cup 2011, teams were challenged to predict user ratings in music given the Yahoo! Music data set [8]. The evaluation criterion is the Root Mean Squared Error (RMSE) on the test data set: RMSE = sqrt( Σ_{(u,i)∈T1} (r_ui − r̂_ui)^2 / |T1| ), (21) where (u, i) ∈ T1 ranges over all user ratings in the Track 1 test data set (6,005,940 ratings), r_ui is the true rating for user u and item i, and r̂_ui is the predicted rating. The winning team from NTU created more than 200 models using different machine learning algorithms [6], including Matrix Factorization, k-NN, Restricted Boltzmann Machines, etc. They blended these models using Neural Network and Binned Linear Regression on the validation data set (4,003,960 ratings) to create a model ensemble with better RMSE. The final Track 1 RMSEs are: NTU (1st) 21.0004, Commendo (2nd) 21.0545, InnerPeace (3rd) 21.2335, and our result 21.1241. We implement our algorithm using Julia on a 10-core Xeon E7-4680 machine and run our algorithm for the same number of iterations, with different numbers of threads, and measure the running time speedup (RTS) in Figure 4 (see the supplement). Similar to our experiment on the neural network black box, our algorithm has an almost linear speedup. For completeness, Figure 2 in the supplement shows the square root of the objective function value (RMSE) against the number of iterations and the running time. After about 150 seconds, our algorithm running with 10 threads achieves an RMSE of 21.1241 on our test set. Our results are comparable to the KDD-Cup winners, as shown in Table 3. Since our goal is to show the performance of our algorithm, we assume we can "submit" our solution x an unlimited number of times, which is unrealistic in a real contest like KDD-Cup. However, even with very few iterations, our algorithm converges quickly to a reasonably small RMSE, as shown in Figure 3. 5 Conclusion In this paper, we provide a generic linear speedup analysis for zeroth-order and first-order asynchronous parallel algorithms. Our generic analysis can recover or improve the existing results on special cases, such as ASCD, ASGD (parameter server implementation), and ASGD (multicore implementation). Our generic analysis also suggests a novel ASZD algorithm with a guaranteed convergence rate and speedup property. To the best of our knowledge, this is the first asynchronous parallel zeroth order algorithm. The experiments include a novel application of the proposed ASZD method to model blending and hyper parameter tuning for big data optimization. Acknowledgements This project is in part supported by the NSF grant CNS-1548078. We especially thank Chen-Tse Tsai for providing the code and data for the Yahoo Music Competition.
1. What is the focus of the paper, and what are the authors' contributions to the field? 2. What are the strengths of the paper, particularly in terms of its analysis and experimental verification? 3. What are the weaknesses of the paper, especially regarding the motivation and limitations of the proposed method? 4. How does the reviewer assess the clarity and quality of the paper's content? 5. Are there any questions or concerns the reviewer has regarding the paper's assumptions, arguments, or conclusions?
Review
Review The authors provide a comprehensive analysis of their zeroth-order method. They show that in special cases their analysis improves the existing results. They have also done some experiments to verify their results. The paper is clear overall. The literature review is comprehensive and serves as a good reference; it provides all the relevant results in a table. The paper also gives a generic analysis of asynchronous stochastic parallel optimization and improves the existing results on special cases. However, the motivation for the zeroth-order setting is not very clear. The paper emphasizes that the zeroth-order algorithm is a novel contribution, yet the zeroth-order algorithm is in essence an approximation to gradient descent. It might help if the authors could make clear in what situations we only have access to function values but not gradients. Also, a major part of the discussion (Corollaries 2, 3, 4) is based on the case ω = 0, but the zeroth-order algorithm should have a positive ω in practice (ω = 0 reduces to a first-order method). It would be beneficial if the authors could provide more insights on their zeroth-order algorithm. Comments: the vertical spacing is not right. Line 65: zeroth-order should be defined before it is used.
NIPS
1. What is the focus of the paper regarding asynchronous parallelization? 2. What are the strengths of the paper, particularly in terms of its theoretical analysis? 3. What are the weaknesses or limitations of the paper, especially regarding its experiments and applications? 4. How does the reviewer assess the significance of the paper's contributions and novel aspects? 5. Are there any questions or concerns raised by the reviewer that could benefit from further clarification or discussion?
Review
Review The paper considers asynchronous parallelization of first- and zeroth-order stochastic descent algorithms, and analyzes the feasibility of their linear speedup. The general bound derived in Theorem 1 is then applied to both the Stochastic Coordinate Descent and Stochastic Gradient Descent cases as corollaries, for which bounds on the speedup are obtained that relate the number of workers, the dimensionality, the iteration index, and the stochastic gradient variance. The paper also obtains a convergence rate for zeroth-order descent. Experiments evaluate the zeroth-order method in a multi-core environment on a synthetic neural network and on ensembling submodels for the Yahoo! Music recommendation dataset. The theoretical analysis in the paper is interesting due to the generalization it provides over a number of earlier methods, and for the new results in the zeroth-order case. One blind spot is the effect of infrastructure in the multi-node case: network bandwidth becomes highly contended, and the corresponding effects should be accounted for in the analysis. The experimental validation in the paper can be improved significantly:
- The most frequent practical application of ASCD/ASGD is training large, highly sparse linear models for classification. It is unclear why this was not verified.
- The use of a synthetic DNN is unwarranted given the wide popularity and accessibility of standard DNN benchmarks.
NIPS
1. What is the focus and contribution of the paper regarding asynchronous stochastic algorithms? 2. What are the strengths of the paper, particularly in its theoretical analysis and numerical examples? 3. Do you have any concerns or suggestions regarding the paper's presentation or content? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review
Review The paper provides a general analysis of a wide class of asynchronous stochastic algorithms, including stochastic gradient descent and stochastic coordinate descent, all in the lock-free case. Serious numerical examples are provided to back up their claims of near linear speedup. This is the best paper I have read for NIPS this year. I think it is a great contribution. Not only is a powerful framework and theorem provided, but the authors take the time to show how to apply it to other cases, and they discuss interesting features (for example, they make sure the results make sense in light of previous results, and discuss the effects of the variance in the gradient estimate). The authors instill confidence in the reader that they did a good job critically examining their own results. The numerical examples are not the typical small-scale tests that claim this is the best algorithm ever; rather, they are well-done tests with good implementations on real-world datasets, and they do not claim to be the “best” algorithm ever, but rather test the stated claim of the paper, namely that the asynchronous nature leads to speedup benefits up to a certain point. My main suggestion is that the authors proofread the paper again, as there are still many grammar mistakes (for example, many missing articles before nouns). I also must mention that I did not have time to read the appendix, so I am taking the proofs at the authors’ word. Other minor issues I noticed: - “sever” is listed several times, and should be “server” as in the title of [17]. - You might want to put the equation on line 107 as its own numbered equation, and mention that this is actually a change in the algorithm when mu=0. - Line 113, I don’t see where \hat{x}_k has been defined, so I am a bit confused about where this fits into the discussion. - Equation (4), I don’t see any coordinate indices, just the iteration indices, so it seems like this is a consistent read, not an inconsistent read. Maybe this is a notational issue? I’m confused. - Line 150, the choice of gamma for the Theorem to hold requires a lot of known parameters. Could you analyze the algorithm assuming that gamma is this quantity, up to a fixed constant? Or at least run numerical examples showing the robustness of the algorithm to incorrect choices of gamma? - Line 241 is not a complete sentence.
NIPS
Title A Comprehensive Linear Speedup Analysis for Asynchronous Stochastic Parallel Optimization from Zeroth-Order to First-Order Abstract Asynchronous parallel optimization has received substantial success and extensive attention recently. One of the core theoretical questions is how much speedup (or benefit) asynchronous parallelization can bring us. This paper provides a comprehensive and generic analysis of the speedup property for a broad range of asynchronous parallel stochastic algorithms, from zeroth-order to first-order methods. Our result recovers or improves existing analyses on special cases, provides more insight for understanding asynchronous parallel behaviors, and suggests a novel asynchronous parallel zeroth-order method for the first time. Our experiments provide novel applications of the proposed asynchronous parallel zeroth-order method to hyperparameter tuning and model blending problems. 1 Introduction Asynchronous parallel optimization has received substantial success and extensive attention recently, for example, [5, 25, 31, 33, 34, 37]. It has been used to solve various machine learning problems, such as deep learning [4, 7, 26, 36], matrix completion [25, 28, 34], SVM [15], linear systems [3, 21], PCA [10], and linear programming [32]. Its main advantage over synchronous parallel optimization is that it avoids the synchronization cost, so it minimizes system overheads and maximizes the efficiency of all computation workers. One of the core theoretical questions is how much speedup (or benefit) asynchronous parallelization can bring us, that is, how much time can we save by employing more computation resources? More precisely, people are interested in the running time speedup (RTS) with T workers: RTS(T) = (running time using a single worker) / (running time using T workers). Since in asynchronous parallelism all workers keep busy, RTS can be measured roughly by the computational complexity speedup (CCS) with T workers:1 CCS(T) = (total computational complexity using a single worker) / (total computational complexity using T workers) × T. (1 For simplicity, we assume that the communication cost is not dominant throughout this paper.) In this paper, we are mainly interested in the conditions that ensure the linear speedup property. More specifically, what is the upper bound on T that ensures CCS(T) = Θ(T)? Existing studies on special cases, such as asynchronous stochastic gradient descent (ASGD) and asynchronous stochastic coordinate descent (ASCD), have revealed some clues about which factors can affect the upper bound on T. For example, Agarwal and Duchi [1] showed that the upper bound depends on the variance of the stochastic gradient in ASGD; Niu et al. [25] showed that the upper bound depends on the data sparsity and the dimension of the problem in ASGD; and Avron et al. [3] and Liu and Wright [19] found that the upper bound depends on the problem dimension as well as the diagonal dominance of the Hessian matrix of the objective. However, a comprehensive and generic analysis that puts all these pieces together and shows how these factors jointly affect the speedup property is still missing. This paper provides such a comprehensive and generic analysis of the speedup property for a broad range of asynchronous parallel stochastic algorithms, from zeroth-order to first-order methods.
To avoid unnecessary complication while covering practical problems and algorithms, we consider the following nonconvex stochastic optimization problem: min_{x∈R^N} f(x) := E_ξ(F(x; ξ)), (1) where ξ ∈ Ξ is a random variable, and both F(·; ξ) : R^N → R and f(·) : R^N → R are smooth but not necessarily convex functions. This objective covers a large class of machine learning problems, including deep learning. The F(·; ξ)'s are called component functions in this paper. The most common specification is that Ξ is the index set of all training samples, Ξ = {1, 2, · · · , n}, and F(x; ξ) is the loss function with respect to the training sample indexed by ξ. We highlight the main contributions of this paper in the following: • We provide a generic analysis of convergence and speedup, which covers many existing algorithms, including ASCD, ASGD (implementation on a parameter server), ASGD (implementation on multicore systems), and others, as special cases. • Our generic analysis can recover or improve the existing results on these special cases. • Our generic analysis suggests a novel asynchronous stochastic zeroth-order gradient descent (ASZD) algorithm and provides an analysis of its convergence rate and speedup property. To the best of our knowledge, this is the first asynchronous parallel zeroth-order algorithm. • The experiments include a novel application of the proposed ASZD method to model blending and hyperparameter tuning for big data optimization. 1.1 Related Works We first review first-order asynchronous parallel stochastic algorithms. Table 1 summarizes existing linear speedup results for the asynchronous parallel optimization algorithms most closely related to this paper. The last block of Table 1 shows the results in this paper. Reddi et al. [29] proved the convergence of the asynchronous variance-reduced stochastic gradient (SVRG) method and its speedup in the sparse setting. Mania et al. [22] provide a general perspective (or starting point) for analyzing asynchronous stochastic algorithms, including HOGWILD!, asynchronous SCD, and asynchronous sparse SVRG. The fundamental difference in our work is that we apply a different analysis, and our result can be directly applied to various special cases, while theirs cannot. In addition, there is a line of research studying asynchronous ADMM-type methods, which is beyond the scope of this paper. We encourage readers to refer to the recent literature, for example, Hong [14] and Zhang and Kwok [35]. We end this section by reviewing zeroth-order stochastic methods. We use N to denote the dimension of the problem, K to denote the iteration number, and σ to denote the variance of the stochastic gradient. Nesterov and Spokoiny [24] proved a convergence rate of O(N/√K) for zeroth-order SGD applied to convex optimization. Based on [24], Ghadimi and Lan [12] proved a convergence rate of O(√(N/K)) for zeroth-order SGD on nonconvex smooth problems. Jamieson et al. [16] show a lower bound of O(1/√K) for any zeroth-order method with inaccurate evaluation. Duchi et al. [9] proved an O(N^{1/4}/K + 1/√K) rate for zeroth-order SGD on convex objectives, but under assumptions quite different from those in our paper. Agarwal et al. [2] proved a regret of O(poly(N)√K) for a zeroth-order bandit algorithm on convex objectives. For a more comprehensive review of asynchronous algorithms, please refer to the long version of this paper at arXiv:1606.00498. 1.2 Notation • e_i ∈ R^N denotes the ith natural unit basis vector.
• E(·) denotes the expectation with respect to all random variables, while E_a(·) denotes the expectation with respect to a random variable a. • ∇f(x) ∈ R^N is the gradient of f(x) with respect to x. Let S be a subset of {1, · · · , N}. ∇_S f(x) ∈ R^N is the projection of ∇f(x) onto the index set S, that is, the vector obtained by setting the components of ∇f(x) outside of S to zero. We use ∇_i f(x) ∈ R^N to denote ∇_{{i}} f(x) for short. • f* denotes the optimal objective value in (1). 2 Algorithm Algorithm 1 Generic Asynchronous Stochastic Algorithm (GASA) Require: x_0, K, Y, (µ_1, µ_2, . . . , µ_N), {γ_k}_{k=0,...,K−1} ▷ γ_k is the steplength for the kth iteration Ensure: {x_k}_{k=0}^K 1: for k = 0, . . . , K − 1 do 2: Randomly select a component function index ξ_k and a set of coordinate indices S_k, where |S_k| = Y; 3: x_{k+1} = x_k − γ_k G_{S_k}(x̂_k; ξ_k); 4: end for We illustrate the asynchronous parallelism by assuming a centralized network: a central node and multiple child nodes (workers). The central node maintains the optimization variable x. It could be a parameter server if implemented on a computer cluster [17]; it could be shared memory if implemented on a multicore machine. Given a base algorithm A, all child nodes run algorithm A independently and concurrently: they read x from the central node (we call the result of this read x̂; it is mathematically defined later in (4)), compute locally using x̂, and modify x on the central node. There is no need to synchronize the child nodes. Therefore, all child nodes stay busy, and consequently their efficiency is maximized. In other words, we have CCS(T) ≈ RTS(T). Note that, due to the asynchronous parallel mechanism, the variable x in the central node is not updated exactly following the protocol of algorithm A, since when a child node returns its computation result, the x in the central node may already have been changed by other child nodes. Thus a new analysis is required. A fundamental question is under what conditions a linear speedup can be guaranteed. In other words, under what conditions do we have CCS(T) = Θ(T), or equivalently RTS(T) = Θ(T)? To provide a comprehensive analysis, we consider a generic algorithm A: the zeroth-order hybrid of SCD and SGD. It iteratively samples a component function2 indexed by ξ and a coordinate block S ⊆ {1, 2, · · · , N}, where |S| = Y for some constant Y, and updates x with x ← x − γ G_S(x; ξ), (2) where G_S(x; ξ) is an approximation to the block-coordinate stochastic gradient N Y^{-1} ∇_S F(x; ξ): G_S(x; ξ) := Σ_{i∈S} N/(2Y µ_i) (F(x + µ_i e_i; ξ) − F(x − µ_i e_i; ξ)) e_i, S ⊆ {1, 2, . . . , N}. (3) In the definition of G_S(x; ξ), µ_i is the approximation parameter for the ith coordinate; (µ_1, µ_2, . . . , µ_N) is predefined in practice. We use only function values (zeroth-order information) to compute G_S(x; ξ). It is easy to see that the closer the µ_i's are to 0, the closer G_S(x; ξ) and N Y^{-1} ∇_S F(x; ξ) will be. In particular, lim_{µ_i→0, ∀i} G_S(x; ξ) = N Y^{-1} ∇_S F(x; ξ). (2 The algorithm and the theoretical analysis that follows can be easily extended to the minibatch version.) Applying the asynchronous parallelism, we propose a generic asynchronous stochastic algorithm in Algorithm 1. This algorithm essentially characterizes how the value of x is updated in the central node. γ_k is the predefined steplength (or learning rate). K is the total number of iterations (note that this iteration count is maintained by the central node; that is, any update on x, no matter from which child node, increases this counter.)
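To make (3) and Algorithm 1 concrete, the following is a minimal Python sketch of the block-coordinate zeroth-order estimator and of a serial run of GASA; the objective oracle F, the component sampler sample_xi, and all parameter values are illustrative placeholders, not the authors' implementation. The asynchronous variant simply lets several workers execute the loop body concurrently against a shared x, so the estimator may be evaluated at a stale copy x̂.

import numpy as np

def G_S(F, x, xi, S, mu, N):
    # Block-coordinate zeroth-order estimator from (3):
    # central differences along each coordinate i in S, scaled by N / (2 * Y * mu_i).
    Y = len(S)
    g = np.zeros(N)
    for i in S:
        e = np.zeros(N); e[i] = 1.0
        g[i] = N / (2.0 * Y * mu[i]) * (F(x + mu[i] * e, xi) - F(x - mu[i] * e, xi))
    return g

def gasa_serial(F, sample_xi, x0, K, Y, mu, gamma):
    # Serial protocol of Algorithm 1 (GASA).
    x = x0.copy()
    N = x.size
    for k in range(K):
        xi = sample_xi()                                  # random component function index xi_k
        S = np.random.choice(N, size=Y, replace=False)    # random coordinate block S_k with |S_k| = Y
        x = x - gamma * G_S(F, x, xi, S, mu, N)
    return x

In the limit µ_i → 0 the estimator reduces to N Y^{-1} ∇_S F(x; ξ), which is how the first-order special cases discussed in Section 3 (SCD for Y = 1 and σ = 0, SGD for Y = N) arise.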
As mentioned above, the key difference between the asynchronous algorithm and the protocol of algorithm A in Eq. (2) is that x̂_k may not equal x_k. In asynchronous parallelism, there are two different ways to model the value of x̂_k: • Consistent read: x̂_k is some earlier state of x in the central node, that is, x̂_k = x_{k−τ_k} for some τ_k ≥ 0. This happens if reading x and writing x on the central node by any child node are atomic operations, for instance, in the implementation on a parameter server [17]. • Inconsistent read: x̂_k can be more complicated when atomic reads of x cannot be guaranteed, which may happen, for example, in the implementation on a multicore system. While one child node is reading x in the central node, other child nodes may be modifying x at the same time. Therefore, different coordinates of x read by any child node may have different ages; in other words, x̂_k may not be any state of x that ever existed in the central node. Readers who want to learn more about consistent and inconsistent reads can refer to [3, 18, 19]. To cover both cases, we note that x̂_k can be represented in the following generic form: x̂_k = x_k − Σ_{j∈J(k)} (x_{j+1} − x_j), (4) where J(k) ⊂ {k−1, k−2, . . . , k−T} is a subset of the indices of earlier iterations, and T is the upper bound on the staleness. This expression is also considered in [3, 18, 19, 27]. Note that the practical value of T is usually proportional to the number of involved nodes (or workers). Therefore, the total number of workers and the upper bound on the staleness are treated as the same quantity in the following discussion, and the notation T is abused for simplicity. 3 Theoretical Analysis Before we show the main results of this paper, let us first state some global assumptions commonly used in the analysis of stochastic algorithms.3 (3 Some underlying assumptions, such as atomic reading and writing of a single float, are omitted here. As pointed out in [25], these behaviors are guaranteed by most modern architectures.) Bounded Variance of the Stochastic Gradient: E_ξ(‖∇F(x; ξ) − ∇f(x)‖²) ≤ σ², ∀x. Lipschitzian Gradient: The gradients of both the objective and its component functions are Lipschitzian:4 max{‖∇f(x) − ∇f(y)‖, ‖∇F(x; ξ) − ∇F(y; ξ)‖} ≤ L‖x − y‖, ∀x, ∀y, ∀ξ. (5) (4 Note that the Lipschitz assumption on the component functions F(x; ξ) can be dropped for first-order methods (i.e., ω → 0) in the following theorems.) Under the Lipschitzian gradient assumption, we define two more constants, L_s and L_max. Let s be any positive integer bounded by N. Define L_s to be the minimal constant satisfying the following inequality: for all ξ, all x, all S ⊂ {1, 2, ..., N} with |S| ≤ s, and any z = Σ_{i∈S} α_i e_i, we have max{‖∇f(x) − ∇f(x + z)‖, ‖∇F(x; ξ) − ∇F(x + z; ξ)‖} ≤ L_s ‖z‖. Define L_{(i)} for i ∈ {1, 2, . . . , N} as the minimal constant that satisfies max{‖∇_i f(x) − ∇_i f(x + αe_i)‖, ‖∇_i F(x; ξ) − ∇_i F(x + αe_i; ξ)‖} ≤ L_{(i)} |α|, ∀ξ, ∀x. (6) Define L_max := max_{i∈{1,...,N}} L_{(i)}. It can be seen that L_max ≤ L_s ≤ L. Independence: All random variables ξ_k, S_k for k = 0, 1, · · · , K are independent of each other. Bounded Age: Let T be the global bound on the delay: J(k) ⊆ {k − 1, . . . , k − T}, ∀k, so |J(k)| ≤ T. We define the following global quantities for notational brevity: ω := (Σ_{i=1}^N L_{(i)}² µ_i²)/N, α_1 := 4 + 4(TY + Y^{3/2} T²/√N) L_T²/(L_Y² N), α_2 := Y/((f(x_0) − f*) L_Y N), α_3 := (K(Nω + σ²) α_2 + 4) L_Y²/L_T². (7) Next we state our main result in the following theorem: Theorem 1 (Generic Convergence Rate for GASA).
Choose the steplength γ_k in Algorithm 1 to be a constant γ with γ_k^{-1} = γ^{-1} = 2 L_Y N Y^{-1} ( √(α_1²/(K(Nω + σ²)α_2 + α_1)) + √(K(Nω + σ²)α_2) ), ∀k, and suppose the age T is bounded by T ≤ (√N/(2Y^{1/2})) ( √(1 + 4Y^{-1/2} N^{1/2} α_3) − 1 ). Then we have the following convergence rate: (1/K) Σ_{k=0}^K E‖∇f(x_k)‖² ≤ 20/(Kα_2) + (1/(Kα_2)) ( (L_T²/L_Y²) (√(1 + 4Y^{-1/2} N^{1/2} α_3) − 1)/√(N Y^{-1}) + 11 √(Nω + σ²) √(Kα_2) ) + Nω. (8) Roughly speaking, the first term on the RHS of (8) is related to SCD; the second term is related to “stochastic” gradient descent; and the last term is due to the zeroth-order approximation. Although this result looks complicated (and may be less elegant), it is capable of capturing many important subtle structures, as the subsequent discussion shows. We will show how to recover and improve existing results, as well as prove the convergence of new algorithms, using Theorem 1. To make the results more interpretable, we use big-O notation to avoid explicitly writing down all constant factors, including all L's, f(x_0), and f*, in the following corollaries. 3.1 Asynchronous Stochastic Coordinate Descent (ASCD) We apply Theorem 1 to study the asynchronous SCD algorithm by taking Y = 1 and σ = 0. S_k = {i_k} contains only a single randomly sampled coordinate, and ω = 0 (or equivalently µ_i = 0, ∀i). The essential update rule on x is x_{k+1} = x_k − γ_k ∇_{i_k} f(x̂_k). Corollary 2 (ASCD). Let ω = 0, σ = 0, and Y = 1 in Algorithm 1 and Theorem 1. If T ≤ O(N^{3/4}), (9) the following convergence rate holds: (Σ_{k=0}^K E‖∇f(x_k)‖²)/K ≤ O(N/K). (10) The proved convergence rate O(N/K) is consistent with the existing analysis of SCD [30] and of ASCD for smooth optimization [20]. However, our requirement in (9) to ensure the linear speedup property is better than the one in [20], improving it from T ≤ O(N^{1/2}) to T ≤ O(N^{3/4}). Mania et al. [22] analyzed ASCD for strongly convex objectives and proved a linear speedup smaller than O(N^{1/6}), which is also more restrictive than ours. 3.2 Asynchronous Stochastic Gradient Descent (ASGD) ASGD has been widely used in deep learning [7, 26, 36], NLP [4, 13], and many other important machine learning problems [25]. There are two typical implementations of ASGD. The first is implemented on a computer cluster with a parameter server [1, 17]. The parameter server serves as the central node. It can ensure atomic reads and writes of the whole vector x, which leads to the following update rule for x (setting Y = N and µ_i = 0, ∀i in Algorithm 1): x_{k+1} = x_k − γ_k ∇F(x̂_k; ξ_k). (11) Note that a single iteration is defined as modifying the whole vector. The other implementation runs on a single computer with multiple cores. In this case, the central node corresponds to the shared memory, which multiple cores (or threads) can access simultaneously. However, in this model atomic reads and writes of x cannot be guaranteed. Therefore, for the purpose of analysis, each update of a single coordinate counts as one iteration, which yields the following update rule (setting S_k = {i_k}, that is, Y = 1, and µ_i = 0, ∀i in Algorithm 1): x_{k+1} = x_k − γ_k ∇_{i_k} F(x̂_k; ξ_k). (12) Readers can refer to [3, 18, 25] for more details and illustrations of these two implementations. Corollary 3 (ASGD in (11)). Let ω = 0 (or equivalently µ_i = 0, ∀i) and Y = N in Algorithm 1 and Theorem 1. If T ≤ O(√(Kσ² + 1)), (13) then the following convergence rate holds: (Σ_{k=0}^K E‖∇f(x_k)‖²)/K ≤ O(σ/√K + 1/K). (14) First note that the convergence rate in (14) is tight, since it is consistent with the serial (non-parallel) version of SGD [23].
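As a rough illustration of these two special cases, the following Python sketch contrasts the parameter-server style update (11), which applies the whole stochastic gradient in one step, with the multicore style update (12), which writes back a single randomly chosen coordinate per (coordinate-level) iteration; grad_F stands for a first-order stochastic gradient oracle and is a placeholder, not part of the paper's code.

import numpy as np

def asgd_server_step(x, x_hat, xi, grad_F, gamma):
    # Update (11): Y = N, mu_i = 0. The full stochastic gradient, evaluated at a
    # possibly stale read x_hat, is applied to the shared variable in one atomic write.
    return x - gamma * grad_F(x_hat, xi)

def asgd_multicore_step(x, x_hat, xi, grad_F, gamma):
    # Update (12): Y = 1, mu_i = 0. Only one randomly selected coordinate of the
    # stochastic gradient is written back, since whole-vector atomicity is not available.
    i = np.random.randint(x.size)
    x = x.copy()
    x[i] -= gamma * grad_F(x_hat, xi)[i]
    return x

The two sketches differ only in how much of the stochastic gradient is written back per iteration, which is why the iteration counts in Corollaries 3 and 4 are not directly comparable.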
We compare the linear speedup property indicated by (13) with the results in [1], [11], and [18]. To ensure this rate, Agarwal and Duchi [1] need T to be bounded by T ≤ O(K^{1/4} min{σ^{3/2}, √σ}), which is inferior to our result in (13). Feyzmahdavian et al. [11] need T to be bounded by σ^{1/2} K^{1/4} to achieve the same rate, which is also inferior to our result. Our requirement is consistent with the one in [18]. To the best of our knowledge, it is the best result so far. Corollary 4 (ASGD in (12)). Let ω = 0 (or equivalently µ_i = 0, ∀i) and Y = 1 in Algorithm 1 and Theorem 1. If T ≤ O(√(N^{3/2} + K N^{1/2} σ²)), (15) then the following convergence rate holds: (Σ_{k=0}^K E‖∇f(x_k)‖²)/K ≤ O(√(N/K) σ + N/K). (16) The additional factor N in (16) (compared to (14)) arises from the different way of counting iterations. This additional factor also appears in [25] and [18]. We first compare our result with [18], which requires T to be bounded by O(√(K N^{1/2} σ²)). We can see that our requirement in (15) allows a larger value of T, especially when σ is small, so that N^{3/2} dominates K N^{1/2} σ². Next we compare with [25], which assumes that the objective function is strongly convex. Although this is somewhat like comparing apples with oranges, it is still meaningful if one believes that strong convexity does not affect the linear speedup property, which is implied by [22]. In [25], the linear speedup is guaranteed if T ≤ O(N^{1/4}), under the assumption that the sparsity of the stochastic gradient is bounded by O(1). In comparison, we do not require a sparsity assumption on the stochastic gradient and have a better dependence on N. Moreover, beyond improving the existing analyses in [22] and [18], our analysis provides some interesting insights into asynchronous parallelism. Niu et al. [25] essentially suggest that a large problem dimension N is beneficial to the linear speedup, while Lian et al. [18] and many others (for example, Agarwal and Duchi [1], Feyzmahdavian et al. [11]) suggest that a large stochastic variance σ (which often implies a large number of samples) is beneficial to the linear speedup. Our analysis shows the combined effect of N and σ and how they jointly improve the linear speedup. 3.3 Asynchronous Stochastic Zeroth-order Descent (ASZD) We end this section by applying Theorem 1 to derive a novel asynchronous zeroth-order stochastic descent algorithm, obtained by setting the block size Y = 1 (or equivalently S_k = {i_k}) in G_{S_k}(x̂_k; ξ_k): G_{S_k}(x̂_k; ξ_k) = G_{{i_k}}(x̂_k; ξ_k) = (F(x̂_k + µ_{i_k} e_{i_k}; ξ_k) − F(x̂_k − µ_{i_k} e_{i_k}; ξ_k))/(2µ_{i_k}) e_{i_k}. (17) To the best of our knowledge, this is the first asynchronous algorithm for zeroth-order optimization. Corollary 5 (ASZD). Set Y = 1 and all µ_i's to a constant µ in Algorithm 1. Suppose that µ satisfies µ ≤ O(1/√K + min{√σ (NK)^{-1/4}, σ/√N}), (18) and T satisfies T ≤ O(√(N^{3/2} + K N^{1/2} σ²)). (19) Then we have the following convergence rate: (Σ_{k=0}^K E‖∇f(x_k)‖²)/K ≤ O(N/K + √(N/K) σ). (20) We first note that the convergence rate in (20) is consistent with the rate for the serial (non-parallel) zeroth-order stochastic gradient method in [12]. We then evaluate this result from two perspectives. First, we consider T = 1, which corresponds to serial (non-parallel) zeroth-order stochastic descent. Our result implies a better dependence on µ compared with [12].5 (5 Careful readers may notice that our way of estimating the stochastic gradient in (17) differs from the one used in [12]: our method estimates only a single coordinate of the gradient of a sampled component function, whereas Ghadimi and Lan [12] estimate the whole gradient of the sampled component function. Our estimate is more accurate but less aggressive, and the proved convergence rate improves the constant in [12] slightly.)
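The sketch below contrasts the per-coordinate two-point estimator in (17) with a random-direction estimator of the whole gradient, written as a common Gaussian-smoothing forward difference that is only roughly in the spirit of [12] (the precise estimator there may differ in details); F, xi, and all parameters are illustrative placeholders.

import numpy as np

def coordinate_two_point_estimate(F, x, xi, i, mu):
    # Estimator (17): a two-point central difference along a single coordinate i.
    e = np.zeros(x.size); e[i] = 1.0
    return (F(x + mu * e, xi) - F(x - mu * e, xi)) / (2.0 * mu) * e

def random_direction_estimate(F, x, xi, mu):
    # A Gaussian-smoothing style estimate of the whole gradient: a forward
    # difference along one random direction u (an assumed variant, not the paper's).
    u = np.random.randn(x.size)
    return (F(x + mu * u, xi) - F(x, xi)) / mu * u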
To obtain the convergence rate in (20), Ghadimi and Lan [12] require µ ≤ O(1/(N√K)), while our requirement in (18) is much less restrictive. An important insight in our requirement is the dependence on the variance σ: if the variance σ is large, µ is allowed to take a much larger value. This matches intuition: a large variance means that the stochastic gradient may deviate substantially from the true gradient, so we may choose a large µ and accept a less exact estimate of the stochastic gradient without affecting the convergence rate. From a practical point of view, one generally prefers a larger value of µ. Recall that the zeroth-order method uses the function difference at two nearby points (e.g., x+µe_i and x−µe_i) to estimate the derivative. In a practical system (e.g., a concrete control system), there is usually some system noise when querying function values. If the two points are too close (in other words, if µ is too small), the obtained function difference is dominated by noise and does not really reflect the function differential. Second, we consider the case T ≥ 1, which leads to asynchronous zeroth-order stochastic descent. To the best of our knowledge, this is the first such algorithm. The upper bound on T in (19) essentially gives the condition for the linear speedup property. It also shows that even if Kσ² is much smaller than 1, we still obtain linear speedup up to O(N^{3/4}) workers, reflecting a fundamental property of asynchronous stochastic algorithms: N and σ can improve the linear speedup jointly. 4 Experiment Since ASCD and the various ASGDs have been extensively validated in recent papers, we conduct two experiments in this section that focus on validating the proposed ASZD. The first applies ASZD to estimate the parameters of a synthetic black-box system. The second applies ASZD to model combination for the Yahoo Music Recommendation Competition. 4.1 Parameter Optimization for a Black Box We use a deep neural network to simulate a black-box system. The optimization variables are the weights associated with the neural network. We choose 5 layers (400/100/50/20/10 nodes), giving 46,380 weights (parameters) in total. The weights are randomly generated from an i.i.d. Gaussian distribution. The output vector is constructed by applying the network to the input vector and adding Gaussian random noise. We use this network to generate 463,800 samples, which are then used to optimize the weights of the black box. (We pretend not to know the structure and weights of this neural network, because it is a black box.) To optimize (estimate) the parameters of this black box, we apply the proposed ASZD method. The experiment is conducted on an Intel Xeon machine with 4 sockets and 10 cores per socket. We run Algorithm 1 on various numbers of cores from 1 to 32; the steplength is chosen as γ = 0.1, based on the best performance of Algorithm 1 running on 1 core to reach precision 10^{-1} in the objective value. The speedup is reported in Table 2.
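As an illustration of how such a synthetic black-box experiment can be set up, here is a small Python sketch that generates data from a random feed-forward network with additive Gaussian noise and exposes a black-box objective F(x; ξ), taken here to be the squared loss on sample ξ with the network weights flattened into x. The layer widths follow the description above, but the nonlinearity, noise level, sample count, and loss are illustrative assumptions rather than the authors' setup, and biases are omitted, so the parameter count is slightly below the 46,380 quoted in the text.

import numpy as np

sizes = [400, 100, 50, 20, 10]           # layer widths from the description above
rng = np.random.default_rng(0)
true_W = [rng.standard_normal((sizes[l], sizes[l + 1])) for l in range(len(sizes) - 1)]

def forward(W, a):
    for Wl in W[:-1]:
        a = np.tanh(a @ Wl)               # hidden-layer nonlinearity (an assumption)
    return a @ W[-1]

n_samples = 4638                          # scaled down from 463,800 for the sketch
inputs = rng.standard_normal((n_samples, sizes[0]))
outputs = forward(true_W, inputs) + 0.01 * rng.standard_normal((n_samples, sizes[-1]))

shapes = [w.shape for w in true_W]
def unflatten(x):
    W, pos = [], 0
    for (r, c) in shapes:
        W.append(x[pos:pos + r * c].reshape(r, c)); pos += r * c
    return W

def F(x, xi):
    # Black-box component function: squared loss of sample xi under weights x.
    pred = forward(unflatten(x), inputs[xi])
    return float(np.sum((pred - outputs[xi]) ** 2))

# x0 = rng.standard_normal(sum(r * c for r, c in shapes))
# x_est = gasa_serial(F, lambda: rng.integers(n_samples), x0, K=1000, Y=1,
#                     mu=np.full(sum(r * c for r, c in shapes), 1e-2), gamma=0.1)

The commented-out lines indicate how the black-box objective could then be handed to a zeroth-order routine such as the gasa_serial sketch given after Algorithm 1.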
We observe that the iteration speedup is almost linear, while the running time speedup is slightly worse than the iteration speedup. We also draw Figure 1 (see the supplement) to show the curves of the objective value against the number of iterations and against the running time, respectively. 4.2 Asynchronous Parallel Model Combination for the Yahoo Music Recommendation Competition In KDD-Cup 2011, teams were challenged to predict user ratings of music given the Yahoo! Music data set [8]. The evaluation criterion is the Root Mean Squared Error (RMSE) on the test data set: RMSE = √( Σ_{(u,i)∈T1} (r_{ui} − r̂_{ui})² / |T1| ), (21) where (u, i) ∈ T1 ranges over all user ratings in the Track 1 test data set (6,005,940 ratings), r_{ui} is the true rating of item i by user u, and r̂_{ui} is the predicted rating. The winning team from NTU created more than 200 models using different machine learning algorithms [6], including matrix factorization, k-NN, Restricted Boltzmann Machines, etc. They blended these models using a neural network and binned linear regression on the validation data set (4,003,960 ratings) to create a model ensemble with better RMSE. Table 3: RMSE comparison. NTU (1st): 21.0004; Commendo (2nd): 21.0545; InnerPeace (3rd): 21.2335; our result: 21.1241. We implement our algorithm in Julia on a 10-core Xeon E7-4680 machine, run it for the same number of iterations with different numbers of threads, and measure the running time speedup (RTS) in Figure 4 (see the supplement). As in the neural-network black-box experiment, our algorithm achieves an almost linear speedup. For completeness, Figure 2 in the supplement shows the square root of the objective value (the RMSE) against the number of iterations and against the running time. After about 150 seconds, our algorithm running with 10 threads achieves an RMSE of 21.1241 on our test set. Our results are comparable to those of the KDD-Cup winners, as shown in Table 3. Since our goal is to demonstrate the performance of our algorithm, we assume we can “submit” our solution x an unlimited number of times, which would not be possible in a real contest such as KDD-Cup. However, even with very few iterations, our algorithm converges quickly to a reasonably small RMSE, as shown in Figure 3. 5 Conclusion In this paper, we provide a generic linear speedup analysis for zeroth-order and first-order asynchronous parallel algorithms. Our generic analysis can recover or improve the existing results on special cases, such as ASCD, ASGD (parameter-server implementation), and ASGD (multicore implementation). It also suggests a novel ASZD algorithm with guaranteed convergence rate and speedup properties. To the best of our knowledge, this is the first asynchronous parallel zeroth-order algorithm. The experiments include a novel application of the proposed ASZD method to model blending and hyperparameter tuning for big data optimization. Acknowledgements This project is in part supported by NSF grant CNS-1548078. We especially thank Chen-Tse Tsai for providing the code and data for the Yahoo Music Competition.
1. What is the focus of the paper, and what are the contributions of the proposed asynchronous SGD algorithms? 2. What are the strengths of the paper regarding its ease of reading and improved convergence analysis? 3. What are the weaknesses of the paper, particularly in terms of the assumptions made in the inconsistent read definition? 4. How does the reviewer assess the clarity and quality of the paper's content? 5. Are there any concerns or questions regarding the practicality and implementability of the proposed methods?
Review
Review This paper provides a survey of existing asynchronous SGD algorithms. It then proposes an asynchronous zeroth-order SGD and applies it to parameter selection and model blending problems. The paper performs a staleness analysis of existing ASGD algorithms and shows that variance and model dimension affect speedup. The paper is easy to read and follow. The authors provide an improved convergence analysis, in special cases. It is not clear to the reader why these additional analyses are useful. Your definition of inconsistent read assumes that the individual x values are consistent (Section 3.2). It is very much possible that when a float value is being written, another child reads it and gets a garbage value for that float. You should state this as an assumption clearly. Likewise, reading and writing atomically can be guaranteed even with a multi-core implementation by using an atomic read/write library. Similarly, one can get inconsistent reads even within a parameter server if it is implemented using direct memory access protocols. Hence, statements like "However, in this [multicore] computational model atomic read and write of x cannot be guaranteed." need to be replaced with "in popular implementations of this computational model atomic read and write of x is not guaranteed".
NIPS
1. What is the focus of the paper regarding asynchronous parallel stochastic algorithms? 2. What are the strengths of the paper's theoretical analysis, particularly in improving existing results? 3. Do you have any concerns or suggestions for experimental validation related to the existing methods? 4. How does the reviewer assess the novelty and significance of the proposed zero-order gradient descent algorithm (ASZD)? 5. Are there any questions about the paper's clarity, quality, or reproducibility?
Review
Review This paper provides a generic analysis of asynchronous parallel stochastic algorithms, including first-order and zeroth-order methods. Its contribution mainly focuses on the theoretical analysis of conditions ensuring the linear speedup property with T workers. The main theorem covers results on existing algorithms such as ASCD and ASGD and improves the analysis; in particular, T can be larger than in previous work while still guaranteeing linear speedup. In addition, the generic analysis suggests a new zeroth-order gradient descent algorithm (ASZD). Experimental results for ASZD on a real dataset are given. The paper is well written and pleasant to read. The contribution of this paper is two-fold. 1. It improves the analysis of existing asynchronous parallel first-order stochastic optimization. Its proof technique seems to have some novelty, although I did not check the details of the proof or compare them with existing ones. The authors claim that ASCD and ASGD have been validated in several recent papers, but I would still like to see some simple experiments on them to check whether they match the tighter result, e.g., Corollary 4 requires T\leq O(\sqrt{N^{3/2}+KN^{1/2}\sigma^2}) rather than O(\sqrt{KN^{1/2}\sigma^2}) as in the previous work. 2. ASZD is proposed and tested on a real dataset. Its analysis is covered by the main theorem.
NIPS
Title A Comprehensive Linear Speedup Analysis for Asynchronous Stochastic Parallel Optimization from Zeroth-Order to First-Order Abstract Asynchronous parallel optimization received substantial successes and extensive attention recently. One of core theoretical questions is how much speedup (or benefit) the asynchronous parallelization can bring to us. This paper provides a comprehensive and generic analysis to study the speedup property for a broad range of asynchronous parallel stochastic algorithms from the zeroth order to the first order methods. Our result recovers or improves existing analysis on special cases, provides more insights for understanding the asynchronous parallel behaviors, and suggests a novel asynchronous parallel zeroth order method for the first time. Our experiments provide novel applications of the proposed asynchronous parallel zeroth order method on hyper parameter tuning and model blending problems. 1 Introduction Asynchronous parallel optimization received substantial successes and extensive attention recently, for example, [5, 25, 31, 33, 34, 37]. It has been used to solve various machine learning problems, such as deep learning [4, 7, 26, 36], matrix completion [25, 28, 34], SVM [15], linear systems [3, 21], PCA [10], and linear programming [32]. Its main advantage over the synchronous parallel optimization is avoiding the synchronization cost, so it minimizes the system overheads and maximizes the efficiency of all computation workers. One of core theoretical questions is how much speedup (or benefit) the asynchronous parallelization can bring to us, that is, how much time can we save by employing more computation resources? More precisely, people are interested in the running time speedup (RTS) with T workers: RTS(T ) = running time using a single worker running time using T workers . Since in the asynchronous parallelism all workers keep busy, RTS can be measured roughly by the computational complexity speedup (CCS) with T workers1 CCS(T ) = total computational complexity using a single worker total computational complexity using T workers × T. In this paper, we are mainly interested in the conditions to ensure the linear speedup property. More specifically, what is the upper bound on T to ensure CCS(T ) = Θ(T )? Existing studies on special cases, such as asynchronous stochastic gradient descent (ASGD) and asynchronous stochastic coordinate descent (ASCD), have revealed some clues for what factors can 1For simplicity, we assume that the communication cost is not dominant throughout this paper. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. affect the upper bound of T . For example, Agarwal and Duchi [1] showed the upper bound depends on the variance of the stochastic gradient in ASGD; Niu et al. [25] showed that the upper bound depends on the data sparsity and the dimension of the problem in ASGD; and Avron et al. [3], Liu and Wright [19] found that the upper bound depends on the problem dimension as well as the diagonal dominance of the Hessian matrix of the objective. However, it still lacks a comprehensive and generic analysis to comprehend all pieces and show how these factors jointly affect the speedup property. This paper provides a comprehensive and generic analysis to study the speedup property for a broad range of asynchronous parallel stochastic algorithms from the zeroth order to the first order methods. 
To avoid unnecessary complication and cover practical problems and algorithms, we consider the following nonconvex stochastic optimization problem: minx∈RN f(x) := Eξ(F (x; ξ)), (1) where ξ ∈ Ξ is a random variable, and both F (·; ξ) : RN → R and f(·) : RN → R are smooth but not necessarily convex functions. This objective function covers a large scope of machine learning problems including deep learning. F (·; ξ)’s are called component functions in this paper. The most common specification is that Ξ is an index set of all training samples Ξ = {1, 2, · · · , n} and F (x; ξ) is the loss function with respect to the training sample indexed by ξ. We highlight the main contributions of this paper in the following: • We provide a generic analysis for convergence and speedup, which covers many existing algorithms including ASCD, ASGD ( implementation on parameter server), ASGD (implementation on multicore systems), and others as its special cases. • Our generic analysis can recover or improve the existing results on special cases. • Our generic analysis suggests a novel asynchronous stochastic zeroth-order gradient descent (ASZD) algorithm and provides the analysis for its convergence rate and speedup property. To the best of our knowledge, this is the first asynchronous parallel zeroth order algorithm. • The experiment includes a novel application of the proposed ASZD method on model blending and hyper parameter tuning for big data optimization. 1.1 Related Works We first review first-order asynchronous parallel stochastic algorithms. Table 1 summarizes existing linear speedup results for asynchronous parallel optimization algorithms mostly related to this paper. The last block of Table 1 shows the results in this paper. Reddi et al. [29] proved the convergence of asynchronous variance reduced stochastic gradient (SVRG) method and its speedup in sparse setting. Mania et al. [22] provides a general perspective (or starting point) to analyze for asynchronous stochastic algorithms, including HOGWILD!, asynchronous SCD and asynchronous sparse SVRG. The fundamental difference in our work lies on that we apply different analysis and our result can be directly applied to various special cases, while theirs cannot. In addition, there is a line of research studying the asynchronous ADMM type methods, which is not in the scope of this paper. We encourage readers to refer to recent literatures, for example, Hong [14], Zhang and Kwok [35]. We end this section by reviewing the zeroth-order stochastic methods. We use N to denote the dimension of the problem, K to denote the iteration number, and σ to the variance of stochastic gradient. Nesterov and Spokoiny [24] proved a convergence rate of O(N/ √ K) for zeroth-order SGD applied to convex optimization. Based on [24], Ghadimi and Lan [12] proved a convergence rate of O( √ N/K) rate for zeroth-order SGD on nonconvex smooth problems. Jamieson et al. [16] shows a lower bound O(1/ √ K) for any zeroth-order method with inaccurate evaluation. Duchi et al. [9] proved a O(N1/4/K + 1/ √ K) rate for zeroth order SGD on convex objectives but with some very different assumptions compared to our paper. Agarwal et al. [2] proved a regret of O(poly(N ) √ K) for zeroth-order bandit algorithm on convex objectives. For more comprehensive review of asynchronous algorithms, please refer to the long version of this paper on arXiv:1606.00498. 1.2 Notation • ei ∈ RN denotes the ith natural unit basis vector. 
• E(·) means taking the expectation with respect to all random variables, while E_a(·) denotes the expectation with respect to a random variable a. • ∇f(x) ∈ R^N is the gradient of f(x) with respect to x. Let S be a subset of {1, · · · , N}. ∇_S f(x) ∈ R^N is the projection of ∇f(x) onto the index set S, that is, setting the components of ∇f(x) outside of S to zero. We use ∇_i f(x) ∈ R^N as shorthand for ∇_{{i}} f(x). • f^* denotes the optimal objective value in (1). 2 Algorithm
Algorithm 1 Generic Asynchronous Stochastic Algorithm (GASA)
Require: x_0, K, Y, (µ_1, µ_2, . . . , µ_N), {γ_k}_{k=0,...,K−1}   ▷ γ_k is the steplength for the k-th iteration
Ensure: {x_k}_{k=0}^{K}
1: for k = 0, . . . , K − 1 do
2:   Randomly select a component function index ξ_k and a set of coordinate indices S_k, where |S_k| = Y;
3:   x_{k+1} = x_k − γ_k G_{S_k}(x̂_k; ξ_k);
4: end for
We illustrate the asynchronous parallelism by assuming a centralized network: a central node and multiple child nodes (workers). The central node maintains the optimization variable x. It could be a parameter server if implemented on a computer cluster [17], or a shared memory if implemented on a multicore machine. Given a base algorithm A, all child nodes run algorithm A independently and concurrently: they read x from the central node (we call the result of this read x̂; it is defined mathematically in (4)), compute locally using x̂, and modify x on the central node. There is no need to synchronize the child nodes. Therefore, all child nodes stay busy and their efficiency is maximized. In other words, we have CCS(T) ≈ RTS(T). Note that due to the asynchronous parallel mechanism, the variable x in the central node is not updated exactly following the protocol of Algorithm A, since when a child node returns its computation result, the x in the central node might have been changed by other child nodes. Thus a new analysis is required. A fundamental question is under what conditions a linear speedup can be guaranteed; in other words, under what conditions do we have CCS(T) = Θ(T) or, equivalently, RTS(T) = Θ(T)? To provide a comprehensive analysis, we consider a generic algorithm A – a zeroth-order hybrid of SCD and SGD: iteratively sample a component function² indexed by ξ and a coordinate block S ⊆ {1, 2, · · · , N}, where |S| = Y for some constant Y, and update x with x ← x − γ G_S(x; ξ) (2) where G_S(x; ξ) is an approximation to the block coordinate stochastic gradient N Y^{-1} ∇_S F(x; ξ): G_S(x; ξ) := ∑_{i∈S} (N / (2Y µ_i)) (F(x + µ_i e_i; ξ) − F(x − µ_i e_i; ξ)) e_i,  S ⊆ {1, 2, . . . , N}. (3) In the definition of G_S(x; ξ), µ_i is the approximation parameter for the i-th coordinate; (µ_1, µ_2, . . . , µ_N) is predefined in practice. We only use function values (the zeroth-order information) to estimate G_S(x; ξ). It is easy to see that the closer to 0 the µ_i's are, the closer G_S(x; ξ) and N Y^{-1} ∇_S F(x; ξ) will be. In particular, lim_{µ_i→0, ∀i} G_S(x; ξ) = N Y^{-1} ∇_S F(x; ξ). ²The algorithm and the theoretical analysis that follows can be easily extended to the minibatch version. Applying the asynchronous parallelism, we propose a generic asynchronous stochastic algorithm in Algorithm 1. This algorithm essentially characterizes how the value of x is updated in the central node. γ_k is the predefined steplength (or learning rate), and K is the total number of iterations (note that this iteration number is counted by the central node, that is, any update of x, no matter from which child node, increases this counter). A minimal code sketch of this update rule is given below.
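For concreteness, here is a minimal serial sketch of the update in Algorithm 1 with the estimator in (3). It is our own illustration, not the authors' implementation: the finite-sum least-squares objective, the constants, and all names are assumptions, and the asynchronous reads/writes are omitted (so x̂_k = x_k here).

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy finite-sum objective: f(x) = (1/n) sum_j F(x; j) with F(x; j) = 0.5*(a_j^T x - b_j)^2.
n, N = 200, 50
a = rng.standard_normal((n, N))
b = rng.standard_normal(n)

def F(x, j):
    return 0.5 * (a[j] @ x - b[j]) ** 2

def G_S(x, j, S, mu):
    # Zeroth-order block-coordinate estimator of Eq. (3):
    # G_S(x; xi) = sum_{i in S} N/(2*Y*mu_i) * (F(x + mu_i*e_i; xi) - F(x - mu_i*e_i; xi)) * e_i
    Y = len(S)
    g = np.zeros(N)
    for i in S:
        e = np.zeros(N)
        e[i] = 1.0
        g[i] = N / (2 * Y * mu) * (F(x + mu * e, j) - F(x - mu * e, j))
    return g

# Serial version of Algorithm 1 (GASA); staleness is omitted, i.e. x_hat_k = x_k.
x = np.zeros(N)
K, Y, mu, gamma = 5000, 5, 1e-4, 0.01
for k in range(K):
    j = rng.integers(n)                        # component function index xi_k
    S = rng.choice(N, size=Y, replace=False)   # coordinate block S_k with |S_k| = Y
    x = x - gamma * G_S(x, j, S, mu)

print("final objective:", np.mean([F(x, j) for j in range(n)]))
```

In the asynchronous setting, several workers would run the loop body concurrently against a shared x, so the estimator would be evaluated at a possibly stale read x̂_k rather than the current x_k.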
As we mentioned, the key difference of the asynchronous algorithm from the protocol of Algorithm A in Eq. (2) is that x̂k may be not equal to xk. In asynchronous parallelism, there are two different ways to model the value of x̂k: • Consistent read: x̂k is some early existed state of x in the central node, that is, x̂k = xk−τk for some τk ≥ 0. This happens if reading x and writing x on the central node by any child node are atomic operations, for instance, the implementation on a parameter server [17]. • Inconsistent read: x̂k could be more complicated when the atomic read on x cannot be guaranteed, which could happen, for example, in the implementation on the multi-core system. It means that while one child is reading x in the central node, other child nodes may be performing modifications on x at the same time. Therefore, different coordinates of x read by any child node may have different ages. In other words, x̂k may not be any existed state of x in the central node. Readers who want to learn more details about consistent read and inconsistent read can refer to [3, 18, 19]. To cover both cases, we note that x̂k can be represented in the following generic form: x̂k = xk − ∑ j∈J(k)(xj+1 − xj), (4) where J(k) ⊂ {k−1, k−2, . . . , k−T} is a subset of the indices of early iterations, and T is the upper bound for staleness. This expression is also considered in [3, 18, 19, 27]. Note that the practical value of T is usually proportional to the number of involved nodes (or workers). Therefore, the total number of workers and the upper bound of the staleness are treated as the same in the following discussion and this notation T is abused for simplicity. 3 Theoretical Analysis Before we show the main results of this paper, let us first make some global assumptions commonly used for the analysis of stochastic algorithms.3 Bounded Variance of Stochastic Gradient Eξ(∥∇F (x; ξ)−∇f(x)∥2) ≤ σ2,∀x. Lipschitzian Gradient The gradient of both the objective and its component functions are Lipschitzian:4 max{∥∇f(x)−∇f(y)∥, ∥∇F (x; ξ)−∇F (y; ξ)∥} ≤ L∥x− y∥ ∀x,∀y, ∀ξ. (5) Under the Lipschitzian gradient assumption, define two more constants Ls and Lmax. Let s be any positive integer bounded by N . Define Ls to be the minimal constant satisfying the following inequality: ∀ξ, ∀x, αiei∀S ⊂ {1, 2, ..., N} with |S| ≤ s for any z = ∑ i∈S we have: max {∥∇f(x)−∇f (x+ z)∥ , ∥∇F (x; ξ)−∇F (x+ z; ξ)∥} ≤ Ls ∥z∥ Define L(i) for i ∈ {1, 2, . . . , N} as the minimum constant that satisfies: max{∥∇if(x)−∇if(x+ αei)∥, ∥∇iF (x; ξ)−∇iF (x+ αei; ξ)∥} ≤ L(i)|α|. ∀ξ,∀x. (6) Define Lmax := maxi∈{1,...,N} L(i). It can be seen that Lmax ≤ Ls ≤ L. Independence All random variables ξk, Sk for k = 0, 1, · · · ,K are independent to each other. Bounded Age Let T be the global bound for delay: J(k)⊆{k − 1, . . . , k − T},∀k, so |J(k)| ≤ T . We define the following global quantities for short notations: ω := (∑N i=1 L 2 (i)µ 2 i ) /N, α1 := 4 + 4 ( TY + Y 3/2T 2/ √ N ) L2T /(L 2 Y N), α2 := Y/((f(x0)− f∗)LY N), α3 := (K(Nω + σ2)α2 + 4)L2Y /L2T . (7) Next we show our main result in the following theorem: 3Some underlying assumptions such as reading and writing a float number are omitted here. As pointed in [25], these behaviors are guaranteed by most modern architectures. 4Note that the Lipschitz assumption on the component function F (x; ξ)’s can be eliminated when it comes to first order methods (i.e., ω → 0) in our following theorems. Theorem 1 (Generic Convergence Rate for GASA). 
Choose the steplength γk to be a constant γ in Algorithm 1 γ−1k = γ −1 = 2LY NY −1 (√ α21/(K(Nω + σ 2)α2 + α1) + √ K(Nω + σ2)α2 ) ,∀k and suppose the age T is bounded by T ≤ √ N 2Y 1/2 (√ 1 + 4Y −1/2N1/2α3 − 1 ) . We have the fol- lowing convergence rate:∑K k=0 E∥∇f(xk)∥ 2 K ⩽ 20 Kα2 + 1 Kα2 ( L2T L2Y √ 1 + 4Y −1/2N1/2α3 − 1√ NY −1 + 11 √ Nω + σ2 √ Kα2 ) +Nω. (8) Roughly speaking, the first term on the RHS of (8) is related to SCD; the second term is related to “stochastic” gradient descent; and the last term is due to the zeroth-order approximation. Although this result looks complicated (or may be less elegant), it is capable to capture many important subtle structures, which can be seen by the subsequent discussion. We will show how to recover and improve existing results as well as prove the convergence for new algorithms using Theorem 1. To make the results more interpretable, we use the big-O notation to avoid explicitly writing down all the constant factors, including all L’s, f(x0), and f∗ in the following corollaries. 3.1 Asynchronous Stochastic Coordinate Descent (ASCD) We apply Theorem 1 to study the asynchronous SCD algorithm by taking Y = 1 and σ = 0. Sk = {ik} only contains a single randomly sampled coordinate, and ω = 0 (or equivalently µi = 0,∀i). The essential updating rule on x is xk+1 = xk − γk∇ikf(x̂k). Corollary 2 (ASCD). Let ω = 0, σ = 0, and Y = 1 in Algorithm 1 and Theorem 1. If T ⩽ O(N3/4), (9) the following convergence rate holds:(∑K k=0 E∥∇f(xk)∥2 ) /K ⩽ O(N/K). (10) The proved convergence rate O(N/K) is consistent with the existing analysis of SCD [30] or ASCD for smooth optimization [20]. However, our requirement in (9) to ensure the linear speedup property is better than the one in [20], by improving it from T ≤ O(N1/2) to T ≤ O(N3/4). Mania et al. [22] analyzed ASCD for strongly convex objectives and proved a linear speedup smaller than O(N1/6), which is also more restrictive than ours. 3.2 Asynchronous Stochastic Gradient Descent (ASGD) ASGD has been widely used to solve deep learning [7, 26, 36], NLP [4, 13], and many other important machine learning problems [25]. There are two typical implementations of ASGD. The first type is to implement on the computer cluster with a parameter sever [1, 17]. The parameter server serves as the central node. It can ensure the atomic read or write of the whole vector x and leads to the following updating rule for x (setting Y = N and µi = 0,∀i in Algorithm 1): xk+1 = xk − γk∇F (x̂k; ξk). (11) Note that a single iteration is defined as modifying the whole vector. The other type is to implement on a single computer with multiple cores. In this case, the central node corresponds to the shared memory. Multiple cores (or threads) can access it simultaneously. However, in this model atomic read and write of x cannot be guaranteed. Therefore, for the purpose of analysis, each update on a single coordinate accounts for an iteration. It turns out to be the following updating rule (setting Sk = {ik}, that is, Y = 1, and µi = 0,∀i in Algorithm 1): xk+1 = xk − γk∇ikF (x̂k; ξk). (12) Readers can refer to [3, 18, 25] for more details and illustrations for these two implementations. Corollary 3 (ASGD in (11)). Let ω = 0 (or µi = 0,∀i equivalently) and Y = N in Algorithm 1 and Theorem 1. If T ⩽ O (√ Kσ2 + 1 ) , (13) then the following convergence rate holds:(∑K k=0 E∥∇f(xk)∥2 ) /K ⩽ O ( σ/ √ K + 1/K ) . (14) First note that the convergence rate in (14) is tight since it is consistent with the serial (nonparallel) version of SGD [23]. 
We compare this linear speedup property indicated by (13) with results in [1], [11], and [18]. To ensure such rate, Agarwal and Duchi [1] need T to be bounded by T ≤ O(K1/4 min{σ3/2, √ σ}), which is inferior to our result in (13). Feyzmahdavian et al. [11] need T to be bounded by σ1/2K1/4 to achieve the same rate, which is also inferior to our result. Our requirement is consistent with the one in [18]. To the best of our knowledge, it is the best result so far. Corollary 4 (ASGD in (12)). Let ω = 0 (or equivalently, µi = 0,∀i) and Y = 1 in Algorithm 1 and Theorem 1. If T ⩽ O (√ N3/2 +KN1/2σ2 ) , (15) then the following convergence rate holds(∑K k=0 E∥∇f(xk)∥2 ) /K ⩽ O (√ N/Kσ +N/K ) . (16) The additional factor N in (16) (comparing to (14)) arises from the different way of counting the iteration. This additional factor also appears in [25] and [18]. We first compare our result with [18], which requires T to be bounded by O( √ KN1/2σ2). We can see that our requirement in (16) allows a larger value for T , especially when σ is small such that N3/2 dominates KN1/2σ2. Next we compare with [25], which assumes that the objective function is strongly convex. Although this is sort of comparing “apple” with “orange”, it is still meaningful if one believes that the strong convexity would not affect the linear speedup property, which is implied by [22]. In [25], the linear speedup is guaranteed if T ≤ O(N1/4) under the assumption that the sparsity of the stochastic gradient is bounded by O(1). In comparison, we do not require the assumption of sparsity for stochastic gradient and have a better dependence on N . Moreover, beyond the improvement over existing analysis in [22] and [18], our analysis provides some interesting insights for asynchronous parallelism. Niu et al. [25] essentially suggests a large problem dimension N is beneficial to the linear speedup, while Lian et al. [18] and many others (for example, Agarwal and Duchi [1], Feyzmahdavian et al. [11]) suggest that a large stochastic variance σ (this often implies the number of samples is large) is beneficial to the linear speedup. Our analysis shows the combo effect of N and σ and shows how they improve the linear speedup jointly. 3.3 Asynchronous Stochastic Zeroth-order Descent (ASZD) We end this section by applying Theorem 1 to generate a novel asynchronous zeroth-order stochastic descent algorithm, by setting the block size Y = 1 (or equivalently Sk = {ik}) in GSk(x̂k; ξk) GSk(x̂k; ξk) = G{ik}(x̂k; ξk) = (F (x̂k + µikeik ; ξk)− F (x̂k − µikeik ; ξk))/(2µik)eik . (17) To the best of our knowledge, this is the first asynchronous algorithm for zeroth-order optimization. Corollary 5 (ASZD). Set Y = 1 and all µi’s to be a constant µ in Algorithm 1. Suppose that µ satisfies µ ⩽ O ( 1/ √ K +min {√ σ(NK)−1/4, σ/ √ N }) , (18) and T satisfies T ⩽ O (√ N3/2 +KN1/2σ2 ) . (19) We have the following convergence rate(∑K k=0 E∥∇f(xk)∥2 ) /K ⩽ O ( N/K + √ N/Kσ ) . (20) We firstly note that the convergence rate in (20) is consistent with the rate for the serial (nonparallel) zeroth-order stochastic gradient method in [12]. Then we evaluate this result from two perspectives. First, we consider T = 1, which leads to the serial (non-parallel) zeroth-order stochastic descent. Our result implies a better dependence on µ, comparing with [12].5 To obtain such convergence rate 5Acute readers may notice that our way in (17) to estimate the stochastic gradient is different from the one used in [12]. 
Our method only estimates a single coordinate of the gradient of a sampled component function, while Ghadimi and Lan [12] estimate the whole gradient of the sampled component function. Our estimate is more accurate but less aggressive. The proved convergence rate actually improves upon [12] by a small constant. To obtain the convergence rate in (20), Ghadimi and Lan [12] require µ ⩽ O(1/(N√K)), while our requirement in (18) is much less restrictive. An important insight in our requirement is the dependence on the variance σ: if the variance σ is large, µ is allowed to take a much larger value. This matches intuition: a large variance means that the stochastic gradient may deviate substantially from the true gradient, so we may choose a large µ and obtain a less exact estimate of the stochastic gradient without affecting the convergence rate. From a practical point of view, one generally prefers a large value for µ. Recall that the zeroth-order method uses the function difference at two points (e.g., x+µe_i and x−µe_i) to estimate the derivative. In a practical system (e.g., a concrete control system), there usually is some system noise when querying function values. If the two points are too close (in other words, if µ is too small), the obtained function difference is dominated by noise and does not really reflect the function differential. Second, we consider the case T > 1, which leads to asynchronous zeroth-order stochastic descent. To the best of our knowledge, this is the first such algorithm. The upper bound on T in (19) essentially gives the requirement for the linear speedup property. It also shows that even if Kσ^2 is much smaller than 1, we still obtain linear speedup for up to O(N^{3/4}) workers, which reflects a fundamental property of asynchronous stochastic algorithms: N and σ improve the linear speedup jointly. 4 Experiment Since ASCD and various ASGDs have been extensively validated in recent papers, we conduct two experiments to validate the proposed ASZD in this section. The first applies ASZD to estimate the parameters of a synthetic black-box system. The second applies ASZD to model combination for the Yahoo Music Recommendation Competition. 4.1 Parameter Optimization for a Black Box We use a deep neural network to simulate a black-box system. The optimization variables are the weights of the neural network. We choose 5 layers (400/100/50/20/10 nodes), giving 46,380 weights (or parameters) in total. The weights are randomly generated from an i.i.d. Gaussian distribution. The output vector is constructed by applying the network to the input vector and adding Gaussian random noise. We use this network to generate 463,800 samples. These synthetic samples are used to optimize the weights of the black box. (We pretend not to know the structure and weights of this neural network because it is a black box.) To optimize (estimate) the parameters of this black box, we apply the proposed ASZD method. The experiment is conducted on an Intel Xeon machine with 4 sockets and 10 cores per socket. We run Algorithm 1 on various numbers of cores from 1 to 32, and the steplength is chosen as γ = 0.1, based on the best performance of Algorithm 1 running on 1 core to reach a precision of 10^{-1} for the objective value. The speedup is reported in Table 2.
We observe that the iteration speedup is almost linear, while the running time speedup is slightly worse than the iteration speedup. We also draw Figure 1 (see the supplement) to show the objective value against the number of iterations and against running time, respectively. 4.2 Asynchronous Parallel Model Combination for the Yahoo Music Recommendation Competition In KDD-Cup 2011, teams were challenged to predict user ratings of music given the Yahoo! Music data set [8]. The evaluation criterion is the Root Mean Squared Error (RMSE) on the test data set: RMSE = √( ∑_{(u,i)∈T_1} (r_{ui} − r̂_{ui})^2 / |T_1| ), (21) where (u, i) ∈ T_1 ranges over all user ratings in the Track 1 test data set (6,005,940 ratings), r_{ui} is the true rating for user u and item i, and r̂_{ui} is the predicted rating. The winning team from NTU created more than 200 models using different machine learning algorithms [6], including matrix factorization, k-NN, Restricted Boltzmann Machines, etc. They blended these models using a neural network and binned linear regression on the validation data set (4,003,960 ratings) to create a model ensemble with better RMSE.
Table 3: Test RMSE comparison.
NTU (1st)         21.0004
Commendo (2nd)    21.0545
InnerPeace (3rd)  21.2335
Our result        21.1241
We implement our algorithm in Julia on a 10-core Xeon E7-4680 machine and run it for the same number of iterations with different numbers of threads, measuring the running time speedup (RTS) shown in Figure 4 (see supplement). Similar to the neural-network black-box experiment, our algorithm achieves an almost linear speedup. For completeness, Figure 2 in the supplement shows the square root of the objective value (RMSE) against the number of iterations and against running time. After about 150 seconds, our algorithm running with 10 threads achieves an RMSE of 21.1241 on our test set. Our results are comparable to the KDD-Cup winners, as shown in Table 3. Since our goal is to demonstrate the performance of our algorithm, we assume we can “submit” our solution x an unlimited number of times, which would not be possible in a real contest like KDD-Cup. However, even with very few iterations, our algorithm converges quickly to a reasonably small RMSE, as shown in Figure 3. 5 Conclusion In this paper, we provide a generic linear speedup analysis for zeroth-order and first-order asynchronous parallel algorithms. Our generic analysis can recover or improve the existing results on special cases, such as ASCD, ASGD (parameter-server implementation), and ASGD (multicore implementation). Our generic analysis also suggests a novel ASZD algorithm with a guaranteed convergence rate and speedup property. To the best of our knowledge, this is the first asynchronous parallel zeroth-order algorithm. The experiments include a novel application of the proposed ASZD method to model blending and hyperparameter tuning for big data optimization. Acknowledgements This project is in part supported by the NSF grant CNS-1548078. We especially thank Chen-Tse Tsai for providing the code and data for the Yahoo Music Competition.
1. What is the focus of the paper in terms of asynchronous optimization algorithms? 2. What are the strengths of the paper regarding its contributions to the theory of asynchronous optimization? 3. What are the weaknesses of the paper regarding its lack of novelty compared to prior works on SGD and coordinate descent? 4. How does the reviewer assess the significance of the proposed algorithm, particularly in the context of zeroth-order optimization? 5. Are there any concerns regarding the paper's claims or experimental validation?
Review
Review This paper unifies the analysis of asynchronous SGD with that of asynchronous coordinate descent. It also proposes an asynchronous algorithm for derivative-free optimization. The behavior of asynchronous optimization algorithms is less well understood, and contributions to the theory in this case are valuable. The algorithm studied (Algorithm 1) is essentially a hybrid of SGD and coordinate descent, and so results about Algorithm 1 can be specialized to recover results about SGD and results about coordinate descent. However, it is not clear to me that this adds much value on top of the original analyses of SGD and coordinate descent, or that Algorithm 1 is a particularly interesting algorithm to study. Does it really represent generic asynchronous optimization? Or is it mostly a unification of SGD and coordinate descent? For the zeroth-order optimization case, the result is interesting. The authors demonstrate that this algorithm achieves a speedup over the single-core case, but is this a sensible approach relative to other zeroth-order optimization algorithms?
NIPS
Title Adversarial Self-Defense for Cycle-Consistent GANs Abstract The goal of unsupervised image-to-image translation is to map images from one domain to another without the ground truth correspondence between the two domains. State-of-art methods learn the correspondence using large numbers of unpaired examples from both domains and are based on generative adversarial networks. In order to preserve the semantics of the input image, the adversarial objective is usually combined with a cycle-consistency loss that penalizes incorrect reconstruction of the input image from the translated one. However, if the target mapping is many-to-one, e.g. aerial photos to maps, such a restriction forces the generator to hide information in low-amplitude structured noise that is undetectable by human eye or by the discriminator. In this paper, we show how such selfattacking behavior of unsupervised translation methods affects their performance and provide two defense techniques. We perform a quantitative evaluation of the proposed techniques and show that making the translation model more robust to the self-adversarial attack increases its generation quality and reconstruction reliability and makes the model less sensitive to low-amplitude perturbations. Our project page can be found at ai.bu.edu/selfadv/. 1 Introduction Generative adversarial networks (GANs) [7] have enabled many recent breakthroughs in image generation, such as being able to change visual attributes like hair color or gender in an impressively realistic way, and even generate highly realistic-looking faces of people that do not exist [13, 31, 14]. Conditional GANs designed for unsupervised image-to-image translation can map images from one domain to another without pairwise correspondence and ground truth labels, and are widely used for solving such tasks as semantic segmentation, colorization, style transfer, and quality enhancement of images [34, 10, 19, 3, 11, 35, 4] and videos [2, 1]. These models learn the cross-domain mapping by ensuring that the translated image both looks like a true representative of the target domain, and also preserves the semantics of the input image, e.g. the shape and position of objects, overall layout etc. Semantic preservation is usually achieved by enforcing cycle-consistency [34], i.e. a small error between the source image and its reverse reconstruction from the translated target image. Despite the success of cycle-consistent GANs, they have a major flaw. The reconstruction loss forces the generator network to hide the information necessary to faithfully reconstruct the input image inside tiny perturbations of the translated image [5]. The problem is particularly acute in many-to-one mappings, such as photos to semantic labels, where the model must reconstruct textures and colors lost during translation to the target domain. For example, Figure 1’s top row shows that even when the car is mapped incorrectly to semantic labels of building (gray) and tree (green), CycleGAN is still able to “cheat” and perfectly reconstruct the original car from hidden information. It also reconstructs road textures lost in the semantic map. This behavior is essentially an adversarial attack that the model is performing on itself, so we call it a self-adversarial attack. 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. 
In this paper, we extend the analysis of self-adversarial attacks provided in [5] and show that the problem is present in recent state-of-art methods that incorporate cycle consistency. We provide two defense mechanisms against the attack that resemble the adversarial training technique widely used to increase robustness of deep neural networks to adversarial attacks [9, 16, 32]. We also introduce quantitative evaluation metrics for translation quality and reconstruction “honesty” that help to detect self-adversarial attacks and provide a better understanding of the learned cross-domain mapping. We show that due to the presence of hidden embeddings, state of the art translation methods are highly sensitive to high-frequency perturbations as illustrated in Figure 1. In contrast, our defense methods substantially decrease the amount of self-adversarial structured noise and thus make the mapping more reliant on the input image, which results in more interpretable translation and reconstruction and increased translation quality. Importantly, robustifying the model against the self-adversarial attack makes it also less susceptible to the high-frequency perturbations which make it less likely to converge to a non-optimal solution. 2 Related Work Unsupervised image-to-image translation is one of the tasks of domain adaptation that received a lot of attention in recent years. Current state-of-art methods [34, 20, 11, 15, 4, 10] solve this task using generative adversarial networks [8] that usually consist of a pair of generator and discriminator networks that are trained in a min-max fashion to generate realistic images from the target domain and correctly classify real and fake images respectively. The goal of image-to-image translation methods is to map the image from one domain to another in such way that the output image both looks like a real representative of the target domain and contains the semantics of the input image. In the supervised setting, the semantic consistency is enforced by the ground truth labels or pairwise correspondence. In case when there is no supervision, however, there is no such ground truth guidance, so using regular GAN results in often realistic-looking but unreliable translations. In order to overcome this problem, current state-of-art unsupervised translation methods incorporate cycle-consistency loss first introduced in [34] that forces the model to learn such mapping from which it is possible to reconstruct the input image. Recently, various methods have been developed for unimodal (CycleGAN [34], UNIT [20], CoGAN [21] etc.) and multimodal (MUNIT [11], StarGAN [4], BicycleGAN [35]) image-to-image translation. In this paper, we explore the problem of self-adversarial attacks in three of them: CycleGAN, UNIT and MUNIT. CycleGAN is a unimodal translation method that consists of two domain discriminators and two generator networks; the generators are trained to produce realistic images from the corresponding domains, while the discriminators aim to distinguish in-domain real images from the generated ones. The generator-discriminator pairs are trained in a min-max fashion both to produce realistic images and to satisfy the cycle-consistency property. The main idea behind UNIT is that both domains share some common semantics, and thus can be encoded to the shared latent space. 
It consists of two encoder-decoder pairs that map images to the latent space and back; the crossdomain translation is then performed by encoding the image from the source domain to the latent space and decoding it with the decoder for the target domain. MUNIT is a multimodal extension of UNIT that performs disentanglement of domain-specific (style space) and domain-agnostic (content space) features. While the original MUNIT does not use the explicit cycle-consistency loss, we found that cycle-consistency penalty significantly increases the quality of translation and helps the model to learn more reliable content disentanglement (see Figure 2). Thus, we used the MUNIT with cycle-consistency loss in our experiments. As illustrated in Figure 2, adding cycle-consistency loss indeed helps to disentangle domain-agnostic information and enhance the translation quality and reliability. However, such pixelwise penalty was shown [5] to force the generator to hide the domain-specific information that cannot be explicitly reconstructed from the translated image (i.e., shadows or color of the buildings from maps in mapsto-photos example) in such way that it cannot be detected by the discriminator. It has been known that deep neural networks [17], while providing higher accuracy in the majority of machine learning problems, are highly susceptible to the adversarial attacks [24, 29, 16, 23]. There exist multiple defense techniques that make neural networks more robust to the adversarial examples, such as adding adversarial examples to the training set or adversarial training [24, 22], distillation [25], ensemble adversarial training [30], denoising [18] and many more. Moreover, [33] have shown that defending the discriminator in a GAN setting increases the generation quality and prevents the model from converging to a non-optimal solution. However, most adversarial defense techniques are developed for the classification task and are very hard to adapt to the generative setting. 3 Self-Adversarial Attack in Cyclic Models Suppose we are given a number of samples from two image domains x ∼ pA and y ∼ pB . The goal is to learn two mappings G : x ∼ pA → y ∼ pB and F : y ∼ pB → x ∼ pA. In order to learn the distributions pA and pB , two discriminators DA and DB are trained to classify whether the input image is a true representative of the corresponding domain or generated by G or F accordingly. The cross-distribution mapping is learned using the cycle-consistency property in form of a loss based on the pixelwise distance between the input image and its reconstruction. Usually, the cycle-consistency loss can be described as following: Lrec = ‖F (G(x))− x‖1 (1) However, in case when domain A is richer than B, the mapping G : x ∼ pA → y ∼ pB is many-toone (i.e. if for one image x ∼ pB there are multiple correct correspondences y ∼ pA), the generator is still forced to perfectly reconstruct the input even though some of the information of the input image is lost after the translation to the domain B. As shown in [5], such behavior of a CycleGAN can be described as an adversarial attack, and in fact, for any given image it is possible to generate such structured noise that would lead to reconstruction of the target image [5]. In practice, CycleGAN and other methods that utilize cycle-consistency loss add a very low-amplitude signal to the translation ŷ that is invisible for a human eye. Addition of a certain signal is enough to reconstruct the information of image x that should not be present in ŷ. 
This makes methods that incorporate the cycle-consistency loss sensitive to low-amplitude high-frequency noise, since such noise can destroy the hidden signal (shown in Figure 3). In addition, such behavior can force the model to converge to a non-optimal solution or even diverge, since by adding structured noise the model "cheats" to minimize the reconstruction loss instead of learning the correct mapping. 4 Defense techniques 4.1 Adversarial training with noise One approach to defending the model from a self-adversarial attack is to train it to be resistant to perturbations similar in nature to those produced by the hidden embedding. Unfortunately, it is impossible to separate the pure structured noise from the translated image, so classic adversarial defense training cannot be used in this scenario. However, it is possible to prevent the model from learning to embed by adding perturbations to the translated image before reconstruction. The intuition behind this approach is that adding random noise of amplitude similar to the hidden signal disturbs the embedded message. This results in a high reconstruction error, so the generator cannot rely on the embedding. The modified noisy cycle-consistency loss can be described as follows: L^{noisy}_{rec} = ‖F(G(x) + ∆(θ_n)) − x‖_1, (2) where ∆(θ_n) is some high-frequency perturbation function with parameters θ_n. In our experiments we used low-amplitude Gaussian noise with zero mean (a minimal code sketch of this loss is given below). Such a simple defense approach is very similar to the one proposed in [33], where the discriminator is defended from the generator's attack by regularizing the discriminator objective using adversarial vectors. In our setting, however, the attack targets both the discriminator and the generator of the opposite domain, which makes it harder to find the exact adversarial vector. This is why we regularize both the discriminator and the generator using random noise. Since adding noise to the input image is equivalent to penalizing large gradient magnitudes of the loss function, this also forces the model to learn smoother boundaries and prevents it from overfitting. 4.2 Guess Discriminator Ideally, the self-adversarial attack should be detected by the discriminator, but this might be too hard for it since it never sees real and fake examples of the same content. In the supervised setting, this problem is naturally solved by conditioning the outputs on the ground truth labels. For example, a self-adversarial attack does not occur in Conditional GANs because the discriminator is conditioned on the ground truth class labels and is provided with real and fake examples of each class. In the unsupervised setting, however, there is no such information about the class labels, and the discriminator only receives unpaired real and fake examples from the domain. This task is significantly harder for the discriminator, as it has to learn the distribution of the whole domain. One widely used defense strategy is adding adversarial examples to the training set. While it is possible to model the adversarial attack of the generator, this is very time and memory consuming, as it requires training an additional network that generates such examples at each step of training the GAN. However, we can use the fact that the cycle-consistency loss forces the model to minimize the difference between the input and reconstructed images, so we can use the reconstruction output as the fake example for the real input image, i.e. as an approximation of the adversarial example.
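Before the guess discriminator is formalized below, here is a minimal sketch of the noise defense in (2). It is an illustration under our own assumptions: G and F are toy invertible mappings standing in for the trained generators, the noise level is arbitrary, and in practice this loss would be combined with the adversarial terms during training.

```python
import numpy as np

rng = np.random.default_rng(3)

def cycle_loss(G, F, x):
    # Standard cycle-consistency loss, Eq. (1): || F(G(x)) - x ||_1 (mean over pixels).
    return np.abs(F(G(x)) - x).mean()

def noisy_cycle_loss(G, F, x, sigma=0.02):
    # Defense of Eq. (2): perturb the translated image before reconstruction so the
    # generator cannot rely on a low-amplitude hidden signal.  Delta(theta_n) is
    # zero-mean Gaussian noise here, as in the experiments described above.
    y = G(x)
    y_noisy = y + rng.normal(0.0, sigma, size=y.shape)
    return np.abs(F(y_noisy) - x).mean()

# Toy stand-ins for the two generators (the real ones are conv nets trained adversarially).
G = lambda x: np.clip(x * 0.9 + 0.05, 0.0, 1.0)
F = lambda y: np.clip((y - 0.05) / 0.9, 0.0, 1.0)

x = rng.uniform(0.0, 1.0, size=(3, 64, 64))   # a fake channel-first "image"
print("plain cycle loss:", cycle_loss(G, F, x))
print("noisy cycle loss:", noisy_cycle_loss(G, F, x))
```

The only change relative to the plain cycle-consistency loss (1) is the perturbation of the translated image before it is passed back through F, which is what prevents the generator from relying on a low-amplitude hidden signal.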
Thus, the defense during training can be formulated in terms of an additional guess discriminator that is very similar to the original GAN discriminator, but receives as input two images – input and reconstruction – in a random order, and "guesses" which of the images is fake. As with the original discriminator, the guess discriminator Dguess is trained to minimize its error while the generator aims to produce such images that maximize it. The guess discriminator loss or guess loss can be described as: Lguess = { GAguess{X, F (G(X)}, with probability 0.5 1−GAguess{F (G(X)), X}, with probability 0.5 (3) where X ∼ PA, GAguess(X, X̂) ∈ [0, 1]. This loss resembles the class label conditioning in the Conditional GAN in the sense that the guess discriminator receives real and fake examples that are presumably of the same content, therefore the embedding detection task is significantly simplified. In addition to the defense approaches described above, it is beneficial to use the fact that the relationship between the domains is one-to-many. One naive solution to add such prior knowledge is by assigning a smaller weight to the reconstruction loss of the "richer" domain (e.g. photos in maps-to-photos experiment). Results of our experiments show substantial improvement in the generation quality when such a domain relation prior is used. 5 Experiments and results In abundance of GAN-based methods for unsupervised image translation, we limited our analysis to three popular state-of-art models that cover both unimodal and multimodal translation cases: CycleGAN[34], UNIT[20] and MUNIT[11]. The details on model architectures and choice of hyperparameters used in our experiments can be found in the supplementary materials. 5.1 Datasets To provide empirical evidence of our claims, we performed a sequence of experiments on three publicly available image-to-image translation datasets. Despite the fact that all three datasets are paired and hence the ground truth correspondence is known, the models that we used are not capable of using the ground-truth alignment by design and thus were trained in an unsupervised manner. Google Aerial Photo to Maps dataset consisting of 3292 pairs of aerial photos and corresponding maps. In our experiments, we resized the images from 600 × 600 pixels to 400 × 400 pixels for MUNIT and UNIT and to 289×289 pixels for CycleGAN. During training, the images were randomly cropped to 360× 360 for UNIT and MUNIT and 256× 256 for CycleGAN. The dataset is available at [6]. We used 1098 images for training and 1096 images for testing. Playing for Data (GTA)[26] dataset that consists of 24966 pairs of image frames and their semantic segmentation maps. We used a subset of 10000 frames (7500 images for training, 2500 images for testing) with day-time lighting resized to 192× 192 pixels, and randomly cropped with window size 128× 128. SynAction [28] synthetic human action dataset consisting of a set of 20 possible actions performed by 10 different human renders. For our experiments, we used two actors and all existing actions to perform the translation from one actor to another; all other conditions such as background, lighting, viewpoint etc. are chosen to be the same for both domains. We used this dataset to test whether the self-adversarial attack is present in the one-to-one setting. The original images were resized to 512× 512 and cropped to 452× 452. We split the data to 1561 images in each domain for training 357 images for testing. 5.2 Metrics Translation quality. 
The choice of aligned datasets was dictated by the need to quantitatively evaluate the translation quality, which is impossible when the ground truth correspondence is unknown. However, even having the ground truth pairs does not solve the issue of quality evaluation in the one-to-many case: for one input image there exists a large (possibly infinite) number of correct translations, so pixelwise comparison of the ground truth image and the output of the model does not provide a correct metric for the translation quality. In order to overcome this issue, we adopted the idea behind the Inception Score [27] and trained the supervised Pix2pix [12] model to perform the many-to-one mapping as an intermediate step in the evaluation. Considering the GTA dataset example, in order to evaluate the unsupervised mapping from segmentation maps to real frames (segmentation to real, for short), we train the Pix2pix model to translate from real to segmentation; we then feed it the output of the unsupervised model to perform an "honest" reconstruction of the input segmentation map, and compute the Intersection over Union (IoU) and mean class-wise accuracy of the Pix2pix output when given a ground truth example and the output of the one-to-many translation model. For any ground truth pair (A_i, B_i), the one-to-many translation quality is computed as IoU(pix(G_A(B_i)), pix(A_i)), where pix(·) is the translation with Pix2pix from A to B. The "honest reconstruction" is compared with the Pix2pix translation of the ground truth image A_i instead of the ground truth image itself in order to account for the error produced by the Pix2pix translation. Reconstruction honesty. Since it is impossible to acquire the structured noise produced as a result of a self-adversarial attack, there is no direct way to either detect the attack or measure the amount of information hidden in the embedding. In order to evaluate the presence of a self-adversarial attack, we developed a metric that we call quantized reconstruction honesty. The intuition behind this metric is that, ideally, the reconstruction error of an image from the richer domain should be the same as the one-to-many translation error given the same input image from the poorer domain. In order to measure whether the model is independent of the origin of the input image, we quantize the many-to-one translation results in such a way that they only contain colors from the domain-specific palette. In our experiments, we approximate the quantized maps by replacing the color of each pixel by the closest one from the palette. We then feed these quantized images to the model to obtain the "honest" reconstruction error, and compare it with the reconstruction error without quantization. The honesty metric for a one-to-many reconstruction can be described as follows: RH = (1/N) ∑_{i=1}^{N} { ‖G_A(⌊G_B(X_i)⌋) − Y_i‖_2 − ‖G_A(G_B(X_i)) − Y_i‖_2 }, (4) where ⌊·⌋ is a quantization operation, G_B is a many-to-one mapping, and (X_i, Y_i) is a ground truth pair of examples from domains A and B (a small code sketch of this metric is given below). Sensitivity to noise. Aside from the obvious consequences of the self-adversarial attack, such as convergence of the generator to a suboptimal solution, there is one more significant side effect of it – extreme sensitivity to perturbations. Figure 1 shows how the addition of low-amplitude Gaussian noise effectively destroys the hidden embedding, making a model that uses the cycle-consistency loss unable to correctly reconstruct the input image.
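Before the sensitivity metric is developed further, here is a small sketch of the quantized reconstruction honesty metric in Eq. (4). It is our own illustration: the palette, the identity mappings in the usage example, and the data are placeholders, and a real evaluation would plug in the trained translation networks and the test pairs described above.

```python
import numpy as np

def quantize(img, palette):
    # Replace each pixel by the closest color in the domain-specific palette,
    # approximating the quantization operator in Eq. (4).
    flat = img.reshape(-1, 3)                              # (H*W, 3)
    d = np.linalg.norm(flat[:, None, :] - palette[None, :, :], axis=-1)
    return palette[d.argmin(axis=1)].reshape(img.shape)

def reconstruction_honesty(G_A, G_B, X, Y, palette):
    # RH from Eq. (4): mean difference between the "honest" reconstruction error
    # (reconstruction from the quantized translation) and the ordinary one,
    # for ground-truth pairs (x, y) as stated in the formula above.
    diffs = []
    for x, y in zip(X, Y):
        t = G_B(x)                                         # many-to-one translation
        honest = np.linalg.norm(G_A(quantize(t, palette)) - y)
        plain = np.linalg.norm(G_A(t) - y)
        diffs.append(honest - plain)
    return float(np.mean(diffs))

# Tiny usage example with identity mappings and random 8x8 RGB images,
# only to make the snippet runnable; the toy value has no meaning by itself.
rng = np.random.default_rng(4)
palette = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0], [0.5, 0.5, 0.5]])
X = rng.uniform(0, 1, size=(5, 8, 8, 3))
Y = X.copy()
print("RH:", reconstruction_honesty(lambda t: t, lambda x: x, X, Y, palette))
```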
In order to estimate the sensitivity of the model, we add zero-mean Gaussian noise to the translation result before reconstruction and compute the reconstruction error. The sensitivity to noise of amplitude σ for a set of images X_i ∼ p_A is computed by the following formula: SN(σ) = (1/N) ∑_{i=1}^{N} ‖G_A(G_B(X_i) + N(0, σ)) − G_A(G_B(X_i))‖_2 (5) The overall sensitivity of a method is then computed as the area under the SN curve, AuC(SN) ≈ ∫_a^b SN(x) dx. In our experiments we chose a = 0, b = 0.2, and N = 500 for the Google Maps and GTA experiments and N = 100 for the SynAction experiment. When there is no structured noise in the translation, the reconstruction error should be proportional to the amplitude of the added noise, which is what we observe for the one-to-many mapping using MUNIT and CycleGAN. Surprisingly, the UNIT translation is highly sensitive to noise even in the one-to-many case.
Method                  MSE↓    SN↓
CycleGAN                32.55   6.5
CycleGAN+noise*         22.18   1.1
CycleGAN+guess*         23.57   2.4
CycleGAN+guess+noise*   23.13   1.35
5.3 Results The results of our experiments show that the problem of self-adversarial attacks is present in all three cycle-consistent methods we examined. Surprisingly, the results on the SynAction dataset showed that the self-adversarial attack appears even if the learned mapping is one-to-one (Table 1). Both defense techniques proposed in Section 4 make CycleGAN more robust to random noise and increase its translation quality (see Tables 1, 2 and 3). The noise-regularization defense helps the CycleGAN model become more robust both to small perturbations and to the self-adversarial attack. The guess-loss approach, on the other hand, while allowing the model to hide some small portion of information about the input image (for example, road markings in the GTA experiment), produces more interpretable and reliable reconstructions. Furthermore, the combination of both proposed defense techniques beats either technique alone in terms of translation quality and reconstruction honesty (Figure 6). Since both defense techniques force the generators to rely more on the input image than on the structured noise, their results are more interpretable and provide a deeper understanding of the methods' "reasoning". For example, since the training set did not contain any examples of a truck colored white and green, at test time the guess-loss CycleGAN approximated the green part of the truck with the "vegetation" class color and the white part with the building class color (see Section 3 of the supplementary material); the reconstructed frame looked like a rough approximation of the truck despite the fact that the semantic segmentation map was wrong. This can give a hint about the limitations of the given training set. 6 Conclusion In this paper, we introduced the self-adversarial attack phenomenon of unsupervised image-to-image translation methods – the hidden embedding performed by the model itself in order to reconstruct the input image with high precision. We empirically showed that the self-adversarial attack appears in models when the cycle-consistency property is enforced and the target mapping is many-to-one. We provided evaluation metrics that help to indicate the presence of the self-adversarial attack, as well as a translation quality metric for one-to-many mappings. We also developed two adversarial defense techniques that significantly reduce the hidden embedding and force the model to produce more "honest" results, which, in turn, increases translation quality.
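Returning to the sensitivity metric defined in Eq. (5) above, the following sketch shows how SN(σ) and its area under the curve could be computed. It is our own illustration rather than the paper's evaluation code: the identity "generators", the image sizes, and the number of noise levels are assumptions, and the integral is approximated with the trapezoid rule.

```python
import numpy as np

rng = np.random.default_rng(5)

def sensitivity(G_A, G_B, X, sigma):
    # SN(sigma) from Eq. (5): average change of the reconstruction when zero-mean
    # Gaussian noise of std sigma is added to the translation before reconstructing.
    vals = []
    for x in X:
        t = G_B(x)
        noisy = t + rng.normal(0.0, sigma, size=t.shape)
        vals.append(np.linalg.norm(G_A(noisy) - G_A(t)))
    return float(np.mean(vals))

def sensitivity_auc(G_A, G_B, X, a=0.0, b=0.2, num=11):
    # Area under the SN(sigma) curve on [a, b], approximated with the trapezoid rule.
    sigmas = np.linspace(a, b, num)
    sn = [sensitivity(G_A, G_B, X, s) for s in sigmas]
    return float(np.trapz(sn, sigmas))

# Toy usage: identity "generators" on random images; a real evaluation would use
# the trained translation networks and the test images described above.
X = rng.uniform(0, 1, size=(10, 16, 16, 3))
print("AuC of SN:", sensitivity_auc(lambda t: t, lambda x: x, X))
```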
7 Acknowledgements This project was supported in part by NSF and DARPA.
1. What is the novelty of the task addressed in the paper? 2. What is the significance of the proposed approach, particularly in evaluating the quality of the model? 3. How does the reviewer assess the clarity and quality of the paper's content? 4. Are there any concerns or suggestions regarding the experimental results or comparisons with other works?
Review
Review Originality: The task is new to me. Adding noise for GAN training is not new. The guess discriminator seems new to me. Quality: The paper's claims are supported by its experimental results. I think this is a complete piece of work. Clarity: The paper is okay in terms of clarity. I'd like to see a detailed model structure, especially for the guess discriminator. Significance: The paper proposes a few metrics to evaluate the quality of the model, which could be very useful for comparing different methods. Unfortunately, in this paper there were no other methods to compare with, so it is hard to say whether their method is much better than existing methods.
NIPS
Title Adversarial Self-Defense for Cycle-Consistent GANs Abstract The goal of unsupervised image-to-image translation is to map images from one domain to another without the ground truth correspondence between the two domains. State-of-art methods learn the correspondence using large numbers of unpaired examples from both domains and are based on generative adversarial networks. In order to preserve the semantics of the input image, the adversarial objective is usually combined with a cycle-consistency loss that penalizes incorrect reconstruction of the input image from the translated one. However, if the target mapping is many-to-one, e.g. aerial photos to maps, such a restriction forces the generator to hide information in low-amplitude structured noise that is undetectable by human eye or by the discriminator. In this paper, we show how such selfattacking behavior of unsupervised translation methods affects their performance and provide two defense techniques. We perform a quantitative evaluation of the proposed techniques and show that making the translation model more robust to the self-adversarial attack increases its generation quality and reconstruction reliability and makes the model less sensitive to low-amplitude perturbations. Our project page can be found at ai.bu.edu/selfadv/. 1 Introduction Generative adversarial networks (GANs) [7] have enabled many recent breakthroughs in image generation, such as being able to change visual attributes like hair color or gender in an impressively realistic way, and even generate highly realistic-looking faces of people that do not exist [13, 31, 14]. Conditional GANs designed for unsupervised image-to-image translation can map images from one domain to another without pairwise correspondence and ground truth labels, and are widely used for solving such tasks as semantic segmentation, colorization, style transfer, and quality enhancement of images [34, 10, 19, 3, 11, 35, 4] and videos [2, 1]. These models learn the cross-domain mapping by ensuring that the translated image both looks like a true representative of the target domain, and also preserves the semantics of the input image, e.g. the shape and position of objects, overall layout etc. Semantic preservation is usually achieved by enforcing cycle-consistency [34], i.e. a small error between the source image and its reverse reconstruction from the translated target image. Despite the success of cycle-consistent GANs, they have a major flaw. The reconstruction loss forces the generator network to hide the information necessary to faithfully reconstruct the input image inside tiny perturbations of the translated image [5]. The problem is particularly acute in many-to-one mappings, such as photos to semantic labels, where the model must reconstruct textures and colors lost during translation to the target domain. For example, Figure 1’s top row shows that even when the car is mapped incorrectly to semantic labels of building (gray) and tree (green), CycleGAN is still able to “cheat” and perfectly reconstruct the original car from hidden information. It also reconstructs road textures lost in the semantic map. This behavior is essentially an adversarial attack that the model is performing on itself, so we call it a self-adversarial attack. 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. 
In this paper, we extend the analysis of self-adversarial attacks provided in [5] and show that the problem is present in recent state-of-art methods that incorporate cycle consistency. We provide two defense mechanisms against the attack that resemble the adversarial training technique widely used to increase robustness of deep neural networks to adversarial attacks [9, 16, 32]. We also introduce quantitative evaluation metrics for translation quality and reconstruction “honesty” that help to detect self-adversarial attacks and provide a better understanding of the learned cross-domain mapping. We show that due to the presence of hidden embeddings, state of the art translation methods are highly sensitive to high-frequency perturbations as illustrated in Figure 1. In contrast, our defense methods substantially decrease the amount of self-adversarial structured noise and thus make the mapping more reliant on the input image, which results in more interpretable translation and reconstruction and increased translation quality. Importantly, robustifying the model against the self-adversarial attack makes it also less susceptible to the high-frequency perturbations which make it less likely to converge to a non-optimal solution. 2 Related Work Unsupervised image-to-image translation is one of the tasks of domain adaptation that received a lot of attention in recent years. Current state-of-art methods [34, 20, 11, 15, 4, 10] solve this task using generative adversarial networks [8] that usually consist of a pair of generator and discriminator networks that are trained in a min-max fashion to generate realistic images from the target domain and correctly classify real and fake images respectively. The goal of image-to-image translation methods is to map the image from one domain to another in such way that the output image both looks like a real representative of the target domain and contains the semantics of the input image. In the supervised setting, the semantic consistency is enforced by the ground truth labels or pairwise correspondence. In case when there is no supervision, however, there is no such ground truth guidance, so using regular GAN results in often realistic-looking but unreliable translations. In order to overcome this problem, current state-of-art unsupervised translation methods incorporate cycle-consistency loss first introduced in [34] that forces the model to learn such mapping from which it is possible to reconstruct the input image. Recently, various methods have been developed for unimodal (CycleGAN [34], UNIT [20], CoGAN [21] etc.) and multimodal (MUNIT [11], StarGAN [4], BicycleGAN [35]) image-to-image translation. In this paper, we explore the problem of self-adversarial attacks in three of them: CycleGAN, UNIT and MUNIT. CycleGAN is a unimodal translation method that consists of two domain discriminators and two generator networks; the generators are trained to produce realistic images from the corresponding domains, while the discriminators aim to distinguish in-domain real images from the generated ones. The generator-discriminator pairs are trained in a min-max fashion both to produce realistic images and to satisfy the cycle-consistency property. The main idea behind UNIT is that both domains share some common semantics, and thus can be encoded to the shared latent space. 
It consists of two encoder-decoder pairs that map images to the latent space and back; the cross-domain translation is then performed by encoding the image from the source domain to the latent space and decoding it with the decoder for the target domain. MUNIT is a multimodal extension of UNIT that performs disentanglement of domain-specific (style space) and domain-agnostic (content space) features. While the original MUNIT does not use the explicit cycle-consistency loss, we found that a cycle-consistency penalty significantly increases the quality of translation and helps the model to learn more reliable content disentanglement (see Figure 2). Thus, we used MUNIT with cycle-consistency loss in our experiments. As illustrated in Figure 2, adding the cycle-consistency loss indeed helps to disentangle domain-agnostic information and enhance the translation quality and reliability. However, such a pixelwise penalty was shown [5] to force the generator to hide the domain-specific information that cannot be explicitly reconstructed from the translated image (e.g., shadows or the color of buildings in the maps-to-photos example) in such a way that it cannot be detected by the discriminator. It has been known that deep neural networks [17], while providing higher accuracy in the majority of machine learning problems, are highly susceptible to adversarial attacks [24, 29, 16, 23]. There exist multiple defense techniques that make neural networks more robust to adversarial examples, such as adding adversarial examples to the training set or adversarial training [24, 22], distillation [25], ensemble adversarial training [30], denoising [18] and many more. Moreover, [33] have shown that defending the discriminator in a GAN setting increases the generation quality and prevents the model from converging to a non-optimal solution. However, most adversarial defense techniques are developed for the classification task and are very hard to adapt to the generative setting. 3 Self-Adversarial Attack in Cyclic Models Suppose we are given a number of samples from two image domains x ∼ pA and y ∼ pB. The goal is to learn two mappings G : x ∼ pA → y ∼ pB and F : y ∼ pB → x ∼ pA. In order to learn the distributions pA and pB, two discriminators DA and DB are trained to classify whether the input image is a true representative of the corresponding domain or generated by G or F, respectively. The cross-domain mapping is learned using the cycle-consistency property in the form of a loss based on the pixelwise distance between the input image and its reconstruction. Usually, the cycle-consistency loss can be written as $L_{rec} = \lVert F(G(x)) - x \rVert_1$ (1). However, when domain A is richer than B, the mapping G : x ∼ pA → y ∼ pB is many-to-one (i.e., for one image y ∼ pB there are multiple correct correspondences x ∼ pA), and the generator is still forced to perfectly reconstruct the input even though some of the information of the input image is lost after the translation to domain B. As shown in [5], such behavior of a CycleGAN can be described as an adversarial attack, and in fact, for any given image it is possible to generate such structured noise that would lead to reconstruction of the target image [5]. In practice, CycleGAN and other methods that utilize the cycle-consistency loss add a very low-amplitude signal to the translation ŷ that is invisible to the human eye. Addition of this signal is enough to reconstruct the information of image x that should not be present in ŷ.
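To make the objective in Eq. (1) concrete, the following is a minimal PyTorch-style sketch of the bidirectional cycle-consistency term as it is typically combined with the adversarial losses in CycleGAN-like models. The generator modules G (A→B) and F (B→A) and the weight lambda_cyc are placeholder names for illustration, not the exact implementation used in the paper.

```python
import torch
import torch.nn.functional as F_nn  # aliased to avoid clashing with generator F

def cycle_consistency_loss(G, F, real_a, real_b, lambda_cyc=10.0):
    """L1 reconstruction penalty of Eq. (1), applied in both directions."""
    fake_b = G(real_a)        # translate A -> B
    rec_a = F(fake_b)         # reconstruct A from the translation
    fake_a = F(real_b)        # translate B -> A
    rec_b = G(fake_a)         # reconstruct B from the translation
    loss_a = F_nn.l1_loss(rec_a, real_a)
    loss_b = F_nn.l1_loss(rec_b, real_b)
    return lambda_cyc * (loss_a + loss_b)
```

Because the penalty is a plain pixelwise L1 distance, any low-amplitude signal that survives the composition F ∘ G can be exploited to drive it toward zero, which is exactly the self-adversarial behavior analyzed here.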
This hidden-signal behavior makes methods that incorporate the cycle-consistency loss sensitive to low-amplitude, high-frequency noise, since such noise can destroy the hidden signal (shown in Figure 3). In addition, such behavior can force the model to converge to a non-optimal solution or even diverge, since by adding structured noise the model "cheats" to minimize the reconstruction loss instead of learning the correct mapping. 4 Defense techniques 4.1 Adversarial training with noise One approach to defend the model from a self-adversarial attack is to train it to be resistant to perturbations of a nature similar to the one produced by the hidden embedding. Unfortunately, it is impossible to separate the pure structured noise from the translated image, so classic adversarial defense training cannot be used in this scenario. However, it is possible to prevent the model from learning to embed by adding perturbations to the translated image before reconstruction. The intuition behind this approach is that adding random noise of amplitude similar to the hidden signal disturbs the embedded message. This results in high reconstruction error, so the generator cannot rely on the embedding. The modified noisy cycle-consistency loss can be described as follows: $L_{rec}^{noisy} = \lVert F(G(x) + \Delta(\theta_n)) - x \rVert_1$, (2) where $\Delta(\theta_n)$ is some high-frequency perturbation function with parameters $\theta_n$. In our experiments we used low-amplitude Gaussian noise with mean equal to zero. Such a simplistic defense approach is very similar to the one proposed in [33], where the discriminator is defended from the generator attack by regularizing the discriminator objective using the adversarial vectors. In our setting, however, the attack is targeted at both the discriminator and the generator of the opposite domain, which makes it harder to find the exact adversarial vector. This is why we regularize both the discriminator and the generator using random noise. Since adding noise to the input image is equivalent to penalizing large gradient magnitudes of the loss function, this also forces the model to learn smoother boundaries and prevents it from overfitting. 4.2 Guess Discriminator Ideally, the self-adversarial attack should be detected by the discriminator, but this might be too hard for it since it never sees real and fake examples of the same content. In the supervised setting, this problem is naturally solved by conditioning the outputs on the ground truth labels. For example, a self-adversarial attack does not occur in Conditional GANs because the discriminator is conditioned on the ground truth class labels and is provided with real and fake examples of each class. In the unsupervised setting, however, there is no such information about the class labels, and the discriminator only receives unpaired real and fake examples from the domain. This task is significantly harder for the discriminator as it has to learn the distribution of the whole domain. One widely used defense strategy is adding adversarial examples to the training set. While it is possible to model the adversarial attack of the generator, it is very time- and memory-consuming, as it requires training an additional network that generates such examples at each step of training the GAN. However, we can use the fact that the cycle-consistency loss forces the model to minimize the difference between the input and reconstructed images, so we can use the reconstruction output as the fake example for the real input image, i.e., as an approximation of the adversarial example.
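A minimal sketch of how both defenses can be wired into a training step is given below; the noise term follows Eq. (2) above, while the pair-guessing term anticipates the guess loss that is formalized in Eq. (3) next. Module names (gen_G, gen_F, guess_disc) and the noise scale are illustrative placeholders, not the authors' exact implementation; guess_disc is assumed to output a probability in [0, 1] for a channel-concatenated image pair.

```python
import torch
import torch.nn.functional as F_nn

def noisy_cycle_loss(gen_G, gen_F, real_a, noise_std=0.05):
    # Eq. (2): perturb the translation before reconstructing it, so that a
    # hidden low-amplitude signal cannot survive the cycle.
    fake_b = gen_G(real_a)
    rec_a = gen_F(fake_b + noise_std * torch.randn_like(fake_b))
    return F_nn.l1_loss(rec_a, real_a)

def guess_losses(guess_disc, real_a, rec_a):
    # Pair-guessing defense: the guess discriminator sees the input and its
    # reconstruction in a random order and predicts the probability that the
    # *first* image of the pair is the reconstruction (the fake one).
    first_is_fake = torch.rand(1).item() < 0.5
    pair = torch.cat([rec_a, real_a] if first_is_fake else [real_a, rec_a], dim=1)
    target = torch.full((real_a.size(0), 1),
                        1.0 if first_is_fake else 0.0,
                        device=real_a.device)
    pred = guess_disc(pair)                       # probabilities in [0, 1]
    d_loss = F_nn.binary_cross_entropy(pred, target)
    g_loss = -d_loss                              # generator plays the adversarial game
    return d_loss, g_loss
```

In a full training loop these terms would simply be added to the usual adversarial and cycle-consistency objectives, with their weights treated as hyperparameters.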
Thus, the defense during training can be formulated in terms of an additional guess discriminator that is very similar to the original GAN discriminator, but receives as input two images – input and reconstruction – in a random order, and "guesses" which of the images is fake. As with the original discriminator, the guess discriminator $D_{guess}$ is trained to minimize its error while the generator aims to produce images that maximize it. The guess discriminator loss, or guess loss, can be described as: $L_{guess} = \begin{cases} G^A_{guess}(X, F(G(X))), & \text{with probability } 0.5 \\ 1 - G^A_{guess}(F(G(X)), X), & \text{with probability } 0.5 \end{cases}$ (3) where $X \sim p_A$ and $G^A_{guess}(X, \hat{X}) \in [0, 1]$. This loss resembles the class-label conditioning in the Conditional GAN in the sense that the guess discriminator receives real and fake examples that are presumably of the same content; therefore the embedding detection task is significantly simplified. In addition to the defense approaches described above, it is beneficial to use the fact that the relationship between the domains is one-to-many. One naive solution to add such prior knowledge is to assign a smaller weight to the reconstruction loss of the "richer" domain (e.g. photos in the maps-to-photos experiment). Results of our experiments show substantial improvement in the generation quality when such a domain relation prior is used. 5 Experiments and results Given the abundance of GAN-based methods for unsupervised image translation, we limited our analysis to three popular state-of-the-art models that cover both the unimodal and multimodal translation cases: CycleGAN [34], UNIT [20] and MUNIT [11]. The details on model architectures and the choice of hyperparameters used in our experiments can be found in the supplementary materials. 5.1 Datasets To provide empirical evidence for our claims, we performed a sequence of experiments on three publicly available image-to-image translation datasets. Despite the fact that all three datasets are paired and hence the ground truth correspondence is known, the models that we used are not capable of using the ground-truth alignment by design and thus were trained in an unsupervised manner. Google Aerial Photo to Maps: a dataset consisting of 3292 pairs of aerial photos and corresponding maps. In our experiments, we resized the images from 600 × 600 pixels to 400 × 400 pixels for MUNIT and UNIT and to 289 × 289 pixels for CycleGAN. During training, the images were randomly cropped to 360 × 360 for UNIT and MUNIT and 256 × 256 for CycleGAN. The dataset is available at [6]. We used 1098 images for training and 1096 images for testing. Playing for Data (GTA) [26]: a dataset that consists of 24966 pairs of image frames and their semantic segmentation maps. We used a subset of 10000 frames (7500 images for training, 2500 images for testing) with day-time lighting, resized to 192 × 192 pixels and randomly cropped with window size 128 × 128. SynAction [28]: a synthetic human action dataset consisting of a set of 20 possible actions performed by 10 different human renders. For our experiments, we used two actors and all existing actions to perform the translation from one actor to another; all other conditions such as background, lighting, viewpoint etc. are chosen to be the same for both domains. We used this dataset to test whether the self-adversarial attack is present in the one-to-one setting. The original images were resized to 512 × 512 and cropped to 452 × 452. We split the data into 1561 images in each domain for training and 357 images for testing. 5.2 Metrics Translation quality.
The choice of aligned datasets was dictated by the need to quantitatively evaluate the translation quality, which is impossible when the ground truth correspondence is unknown. However, even having the ground truth pairs does not solve the issue of quality evaluation in the one-to-many case, since for one input image there exists a large (possibly infinite) number of correct translations, so pixelwise comparison of the ground truth image and the output of the model does not provide a correct metric for the translation quality. In order to overcome this issue, we adopted the idea behind the Inception Score [27] and trained the supervised Pix2pix [12] model to perform the many-to-one mapping as an intermediate step in the evaluation. Considering the GTA dataset example, in order to evaluate the unsupervised mapping from segmentation maps to real frames (later on – segmentation to real), we train the Pix2pix model to translate from real to segmentation; we then feed it the output of the unsupervised model to perform an "honest" reconstruction of the input segmentation map, and compute the Intersection over Union (IoU) and mean class-wise accuracy between the Pix2pix outputs obtained from a ground truth example and from the output of the one-to-many translation model. For any ground truth pair $(A_i, B_i)$, the one-to-many translation quality is computed as $IoU(pix(G_A(B_i)), pix(A_i))$, where $pix(\cdot)$ is the translation with Pix2pix from A to B. The "honest reconstruction" is compared with the Pix2pix translation of the ground truth image $A_i$ instead of the ground truth image itself in order to take into account the error produced by the Pix2pix translation. Reconstruction honesty. Since it is impossible to acquire the structured noise produced as a result of a self-adversarial attack, there is no direct way to either detect the attack or measure the amount of information hidden in the embedding. In order to evaluate the presence of a self-adversarial attack, we developed a metric that we call quantized reconstruction honesty. The intuition behind this metric is that, ideally, the reconstruction error of an image of the richer domain should be the same as the one-to-many translation error if the model is given the same input image from the poorer domain. In order to measure whether the model is independent of the origin of the input image, we quantize the many-to-one translation results in such a way that they only contain the colors from the domain-specific palette. In our experiments, we approximate the quantized maps by replacing the color of each pixel with the closest one from the palette. We then feed those quantized images to the model to acquire the "honest" reconstruction error, and compare it with the reconstruction error without quantization. The honesty metric for a one-to-many reconstruction can be described as follows: $RH = \frac{1}{N}\sum_{i=1}^{N}\left(\lVert G_A(\lfloor G_B(X_i)\rfloor) - Y_i\rVert_2 - \lVert G_A(G_B(X_i)) - Y_i\rVert_2\right)$, (4) where $\lfloor\cdot\rfloor$ is the quantization operation, $G_B$ is a many-to-one mapping, and $(X_i, Y_i)$ is a ground truth pair of examples from domains A and B. Sensitivity to noise. Aside from the obvious consequences of the self-adversarial attack, such as convergence of the generator to a suboptimal solution, there is one more significant side effect of it – extreme sensitivity to perturbations. Figure 1 shows how the addition of low-amplitude Gaussian noise effectively destroys the hidden embedding, thus making a model that uses the cycle-consistency loss unable to correctly reconstruct the input image.
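Before formalizing that sensitivity measure, here is a minimal sketch of the quantized reconstruction honesty metric of Eq. (4). The names gen_A, gen_B, and the palette-based quantize function are illustrative placeholders standing in for the trained generators and the nearest-palette-color quantization step described above.

```python
import torch

def reconstruction_honesty(gen_A, gen_B, quantize, pairs):
    # Eq. (4): gap between the reconstruction error obtained from the quantized
    # ("honest") translation and from the raw translation. A large positive gap
    # indicates that the raw translation carries a hidden signal.
    gaps = []
    with torch.no_grad():
        for x, y in pairs:                       # (X_i, Y_i) ground-truth pairs
            trans = gen_B(x)                     # many-to-one translation
            honest_err = torch.norm(gen_A(quantize(trans)) - y, p=2)
            raw_err = torch.norm(gen_A(trans) - y, p=2)
            gaps.append(honest_err - raw_err)
    return torch.stack(gaps).mean()
```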
In order to estimate the sensitivity of the model, we add zero-mean Gaussian noise to the translation result before reconstruction and compute the reconstruction error. The sensitivity to noise of amplitude σ for a set of images $X_i \sim p_A$ is computed by the following formula: $SN(\sigma) = \frac{1}{N}\sum_{i=1}^{N}\lVert G_A(G_B(X_i) + \mathcal{N}(0, \sigma)) - G_A(G_B(X_i))\rVert_2$ (5). The overall sensitivity of a method is then computed as the area under this curve, $AuC(SN) \approx \int_a^b SN(\sigma)\,d\sigma$. In our experiments we chose a = 0, b = 0.2, and N = 500 for the Google Maps and GTA experiments and N = 100 for the SynAction experiment. In the case when there is no structured noise in the translation, the reconstruction error should be proportional to the amplitude of the added noise, which is what we observe for the one-to-many mapping using MUNIT and CycleGAN. Surprisingly, UNIT translation is highly sensitive to noise even in the one-to-many case.

Method                   MSE↓    SN↓
CycleGAN                 32.55   6.5
CycleGAN+noise*          22.18   1.1
CycleGAN+guess*          23.57   2.4
CycleGAN+guess+noise*    23.13   1.35

5.3 Results The results of our experiments show that the problem of self-adversarial attacks is present in all three cycle-consistent methods we examined. Surprisingly, the results on the SynAction dataset showed that the self-adversarial attack appears even if the learned mapping is one-to-one (Table 1). Both defense techniques proposed in Section 4 make CycleGAN more robust to random noise and increase its translation quality (see Tables 1, 2 and 3). The noise-regularization defense helps the CycleGAN model become more robust both to small perturbations and to the self-adversarial attack. The guess loss approach, on the other hand, while allowing the model to hide some small portion of information about the input image (for example, road markings in the GTA experiment), produces more interpretable and reliable reconstructions. Furthermore, the combination of both proposed defense techniques beats either method alone in terms of translation quality and reconstruction honesty (Figure 6). Since both defense techniques force the generators to rely more on the input image than on the structured noise, their results are more interpretable and provide a deeper understanding of the method's "reasoning". For example, since the training set did not contain any examples of a truck colored white and green, at test time the guess-loss CycleGAN approximated the green part of the truck with the "vegetation" class color and the white part with the building class color (see Section 3 of the supplementary material); the reconstructed frame looked like a rough approximation of the truck despite the fact that the semantic segmentation map was wrong. This can give a hint about the limitations of the given training set. 6 Conclusion In this paper, we introduced the self-adversarial attack phenomenon of unsupervised image-to-image translation methods – the hidden embedding performed by the model itself in order to reconstruct the input image with high precision. We empirically showed that the self-adversarial attack appears in models when the cycle-consistency property is enforced and the target mapping is many-to-one. We provided evaluation metrics that help to indicate the presence of a self-adversarial attack, as well as a translation quality metric for one-to-many mappings. We also developed two adversarial defense techniques that significantly reduce the hidden embedding and force the model to produce more "honest" results, which, in turn, increases its translation quality.
7 Acknowledgements This project was supported in part by NSF and DARPA.
1. What is the focus of the paper, and how does it build upon previous research? 2. What are the contributions of the paper, particularly in terms of mitigation techniques and evaluation metrics? 3. How effective are the proposed defense techniques, and are they based on empirical observation or provable guarantees? 4. How convincing are the experimental results, and what insights do they provide into the behavior of GAN-based translation networks? 5. Are there any minor issues or typos in the paper that could be improved?
Review
Review The submission is clearly building upon the observations made in [5], and extends/complements them in meaningful ways. In particular, it contributes mitigation techniques as well as improved/complementary evaluation metrics. Overall, the submission is written clearly, and remains very readable in all parts. Although not strictly part of this evaluation, the provided supplementary material is exemplary, and can help in reproducing these results. I see the submission as a high-quality contribution to 1) gain deeper insight into the workings of [unpaired] image-to-image translation systems, and 2) improve their quality. Both of these goals have been reached, by means of the contributions a)-c). The presented defense techniques in Section 4 are based more on empirical observation (i.e. results get better) than on provable guarantees, but this does not diminish their usefulness and level of significance. While the adversarial training with noise (Section 4.1) is a rather obvious approach (and even referred to by the authors as a "simplistic defense approach"), the guess discriminator loss in Section 4.2 is a more interesting modification. The loss terms are generic enough to be suitably applied to any kind of cyclic/reconstruction-based image-to-image translation architecture. The experimental results are convincing, both in terms of the data sets they have been evaluated on, as well as in terms of the results. Experiments overall are thorough enough to be significant. It would have been even better to see what a combination of the two loss terms can achieve, i.e. another row "CycleGAN + noise* + guess*" in Tables 2 and 3 (after optimization of the loss weighting hyperparameters). The novel "metrics" are quite ad-hoc but make sense, and appear to provide further insight into the behavior of these GAN-based translation networks. Coming up with good metrics here is not that easy, so this contribution is appreciated. The sensitivity-to-noise metric should be directly improved by the noise defense, and, unsurprisingly, this approach yields the best results under the metric. Minor comments: - References [10] and [11] are the same paper. - I think there is some word missing in the sentence starting in l. 102. I get the meaning, though. - Extraneous word ('is') in l. 112 - Typo in l. 156: 'Coditional' -> 'Conditional'
NIPS
1. What is the focus of the paper, and how does it contribute to the field of self-adversarial attacks? 2. What are the strengths of the paper, particularly in terms of its writing quality and experimental efforts? 3. What are the weaknesses of the paper, specifically regarding its claims of novelty and originality? 4. How do the proposed defense techniques differ from existing methods in the literature, and what are their limitations? 5. Are there any concerns about the reproducibility of the results, given that similar solutions have been proposed in other works?
Review
Review I think the self-adversarial attack observation is quite interesting, but I am not very convinced that the proposed defense techniques are novel enough for the submission. Note that the self-adversarial attack is not a new observation (as the paper itself heavily cites), and both defense techniques (adding noise and adding a pairwise discriminator) exist in the literature. Pros: This paper is quite well written and properly summarizes the related works. This paper shows significant effort in conducting experiments. Cons: Novelty is limited, as most of the proposed solutions or observations have already been published. More insight into the proposed solutions is needed, rather than techniques similar to those in other works.
NIPS
Title A Limitation of the PAC-Bayes Framework Abstract PAC-Bayes is a useful framework for deriving generalization bounds which was introduced by McAllester ('98). This framework has the flexibility of deriving distribution- and algorithm-dependent bounds, which are often tighter than VC-related uniform convergence bounds. In this manuscript we present a limitation of the PAC-Bayes framework. We demonstrate an easy learning task which is not amenable to a PAC-Bayes analysis. Specifically, we consider the task of linear classification in 1D; it is well-known that this task is learnable using just O(log(1/δ)/ε) examples. On the other hand, we show that this fact cannot be proved using a PAC-Bayes analysis: for any algorithm that learns 1-dimensional linear classifiers there exists a (realizable) distribution for which the PAC-Bayes bound is arbitrarily large. 1 Introduction The classical setting of supervised binary classification considers learning algorithms that receive (binary) labelled examples and are required to output a predictor or a classifier that predicts the label of new and unseen examples. Within this setting, Probably Approximately Correct (PAC) generalization bounds quantify the success of an algorithm to approximately predict with high probability. The PAC-Bayes framework, introduced in [22, 34] and further developed in [21, 20, 30], provides PAC-flavored bounds for Bayesian algorithms that produce Gibbs-classifiers (also called stochastic classifiers). These are classifiers that, instead of outputting a single classifier, output a probability distribution over the family of classifiers. Their performance is measured by the expected success of prediction, where the expectation is taken with respect to both the sampled data and the sampled classifier. A PAC-Bayes generalization bound relates the generalization error of the algorithm to a KL distance between the stochastic output classifier and some prior distribution P. In more detail, the generalization bound is comprised of two terms: first, the empirical error of the output Gibbs-classifier, and second, the KL distance between the output Gibbs-classifier and some arbitrary (but sample-independent) prior distribution. This standard bound captures a basic intuition that a good learner needs to balance between bias, manifested in the form of a prior, and fitting the data, which is measured by the empirical loss. A natural task, then, is to try to characterize the potential as well as the limitations of such Gibbs-learners that are amenable to PAC-Bayes analysis. As for the potential, several past results established the strength and utility of this framework (e.g. [33, 31, 18, 13, 17]). In this work we focus on the complementary task, and present the first limitation result showing that there are classes that are learnable, even in the strong distribution-independent setting of PAC, but do not admit any algorithm that is amenable to a non-vacuous PAC-Bayes analysis. We stress that this is true even if we exploit the bound to its fullest and allow any algorithm and any possible, potentially distribution-dependent, prior. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. More concretely, we consider the class of 1-dimensional thresholds, i.e. the class of linear classifiers over the real line. It is a well-known fact that this class is learnable and enjoys a highly optimistic sample complexity.
Perhaps surprisingly, though, we show that any Gibbs-classifier that learns the class of thresholds must output posteriors from an unbounded set. We emphasize that the result holds even for priors that depend on the data distribution. From a technical perspective, our proof exploits and expands a technique that was recently introduced by Alon et al. [1] to establish limitations on differentially-private PAC learning algorithms. The argument here follows similar lines, and we believe that these similarities in fact highlight a potentially powerful method to derive further limitation results, especially in the context of stability. 2 Preliminaries 2.1 Problem Setup We consider the standard setting of binary classification. Let X denote the domain and Y = {±1} the label space. We study learning algorithms that observe as input a sample S of labelled examples drawn independently from an unknown target distribution D, supported on X × Y. The output of the algorithm is a hypothesis h : X → Y, and its goal is to minimize the 0/1-loss, which is defined by $L_D(h) = \mathbb{E}_{(x,y)\sim D}[\mathbf{1}[h(x) \neq y]]$. We will focus on the setting where the distribution D is realizable with respect to a fixed hypothesis class $H \subseteq Y^X$ which is known in advance. That is, it is assumed that there exists h ∈ H such that $L_D(h) = 0$. Let $S = \langle(x_1, y_1), \ldots, (x_m, y_m)\rangle \in (X \times Y)^m$ be a sample of labelled examples. The empirical error $L_S$ with respect to S is defined by $L_S(h) = \frac{1}{m}\sum_{i=1}^{m}\mathbf{1}[h(x_i) \neq y_i]$. We will use the following notation: for a sample $S = \langle(x_1, y_1), \ldots, (x_m, y_m)\rangle$, let $\bar{S}$ denote the underlying set of unlabeled examples $\bar{S} = \{x_i : i \le m\}$. The Class of Thresholds. For $k \in \mathbb{N}$ let $h_k : \mathbb{N} \to \{\pm 1\}$ denote the threshold function defined by $h_k(x) = -1$ if $x \le k$ and $h_k(x) = +1$ if $x > k$. The class of thresholds $H_{\mathbb{N}}$ is the class $H_{\mathbb{N}} := \{h_k : k \in \mathbb{N}\}$ over the domain $X_{\mathbb{N}} := \mathbb{N}$. Similarly, for a finite $n \in \mathbb{N}$ let $H_n$ denote the class of all thresholds restricted to the domain $X_n := [n] = \{1, \ldots, n\}$. Note that S is realizable with respect to $H_{\mathbb{N}}$ if and only if either (i) $y_i = +1$ for all $i \le m$, or (ii) there exists $1 \le j \le m$ such that $y_i = -1$ if and only if $x_i \le x_j$. A basic fact in statistical learning is that $H_{\mathbb{N}}$ is PAC-learnable. That is, there exists an algorithm A such that for every realizable distribution D, if A is given a sample of $O(\log(1/\delta)/\varepsilon)$ examples drawn from D, then with probability at least $1 - \delta$, the output hypothesis $h_S$ satisfies $L_D(h_S) \le \varepsilon$. In fact, any algorithm A which returns a hypothesis $h_k \in H_{\mathbb{N}}$ that is consistent with the input sample will satisfy the above guarantee. Such algorithms are called empirical risk minimizers (ERMs). We stress that the above sample complexity bound is independent of the domain size. In particular it applies to $H_n$ for every n, as well as to the infinite class $H_{\mathbb{N}}$. For further reading, we refer to textbooks on the subject, such as [32, 23]. 2.2 PAC-Bayes Bounds PAC-Bayes bounds are concerned with stochastic classifiers, or Gibbs-classifiers. A Gibbs-classifier is defined by a distribution Q over hypotheses. The distribution Q is sometimes referred to as a posterior. The loss of a Gibbs-classifier with respect to a distribution D is given by the expected loss over the drawn hypothesis and test point, namely $L_D(Q) = \mathbb{E}_{h\sim Q,\,(x,y)\sim D}[\mathbf{1}[h(x) \neq y]]$. A key advantage of the PAC-Bayes framework is its flexibility in deriving generalization bounds that do not depend on a hypothesis class. Instead, they provide bounds that depend on the KL distance between the output posterior and a fixed prior P.
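Before recalling the KL divergence and the PAC-Bayes bound itself, the following minimal sketch illustrates how simple the learning task under consideration is: a plain ERM for 1-D thresholds that returns any hypothesis consistent with a realizable sample. The helper names and the sampling routine are illustrative only.

```python
import random

def sample_realizable(k_star, n, m):
    # Draw m labelled examples from a distribution realizable by h_{k*}.
    xs = [random.randint(1, n) for _ in range(m)]
    return [(x, -1 if x <= k_star else +1) for x in xs]

def erm_threshold(sample):
    # Return a threshold k consistent with the sample: the largest x labelled -1
    # (or 0 if every example is labelled +1), so that h_k(x) = -1 iff x <= k.
    negatives = [x for x, y in sample if y == -1]
    return max(negatives) if negatives else 0

def empirical_error(k, sample):
    # L_S(h_k): fraction of examples misclassified by the threshold h_k.
    predict = lambda x: -1 if x <= k else +1
    return sum(predict(x) != y for x, y in sample) / len(sample)
```

With O(log(1/δ)/ε) samples, such an ERM already satisfies the stated PAC guarantee; the question the paper addresses is whether any comparable guarantee can be certified through a PAC-Bayes analysis.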
Recall that the KL divergence between a distribution P and a distribution Q is defined as follows (see Footnote 1): $\mathrm{KL}(P\|Q) = \mathbb{E}_{x\sim P}\left[\log\frac{P(x)}{Q(x)}\right]$. Then, the classical PAC-Bayes bound asserts the following: Theorem 1 (PAC-Bayes Generalization Bound [22]). Let D be a distribution over examples, let P be a prior distribution over hypotheses, and let δ > 0. Denote by S a sample of size m drawn independently from D. Then, the following event occurs with probability at least 1 − δ: for every posterior distribution Q, $L_D(Q) \le L_S(Q) + O\left(\sqrt{\frac{\mathrm{KL}(Q\|P) + \ln(\sqrt{m}/\delta)}{m}}\right)$. The above bound relates the generalization error to the KL divergence between the posterior and the prior. Remarkably, the prior distribution P can be chosen as a function of the target distribution D, allowing one to obtain distribution-dependent generalization bounds. Since the pioneering work of McAllester [21], many variations on the PAC-Bayes bounds have been proposed. Notably, Seeger et al. [31] and Catoni [9] provided bounds that are known to converge at rate 1/m in the realizable case (see also [15] for an up-to-date survey). We note that our constructions are all provided in the realizable setting, hence readily apply. 3 Main Result We next present the main result in this manuscript. Proofs are provided in the full version [19]. The statements use the following function Φ(m, γ, n), which is defined for m, n > 1 and γ ∈ (0, 1): $\Phi(m, \gamma, n) = \frac{\log^{(m)}(n)}{(10m/\gamma)^{3m}}$. Here, $\log^{(k)}(x)$ denotes the iterated logarithm, i.e. $\log^{(k)}(x) = \underbrace{\log(\log(\cdots\log(x)))}_{k\ \text{times}}$. An important observation is that $\lim_{n\to\infty}\Phi(m, \gamma, n) = \infty$ for every fixed m and γ. Theorem 2 (Main Result). Let n, m > 1 be integers, and let γ ∈ (0, 1). Consider the class $H_n$ of thresholds over the domain $X_n = [n]$. Then, for any learning algorithm A which is defined on samples of size m, there exists a realizable distribution $D = D_A$ such that for any prior P the following event occurs with probability at least 1/16 over the input sample $S \sim D^m$: $\mathrm{KL}(Q_S\|P) = \tilde{\Omega}\left(\frac{\gamma^2}{m^2}\log\frac{\Phi(m, \gamma, n)}{m}\right)$ or $L_D(Q_S) > \frac{1}{2} - \gamma - \frac{m}{\Phi(m, \gamma, n)}$, where $Q_S$ denotes the posterior outputted by A. To demonstrate how this result implies a limitation of the PAC-Bayes framework, pick γ = 1/4 and consider any algorithm A which learns thresholds over the natural numbers $X_{\mathbb{N}} = \mathbb{N}$ with confidence 1 − δ ≥ 99/100, error ε < 1/2 − γ = 1/4, and m examples (see Footnote 2). Since Φ(m, 1/4, n) tends to infinity with n for any fixed m, the above result implies the existence of a realizable distribution $D_n$ supported on $X_n \subseteq \mathbb{N}$ such that the PAC-Bayes bound with respect to any possible prior P will produce vacuous bounds. We summarize it in the following corollary. (Footnote 1: we use here the standard convention that if $P(\{x : Q(x) = 0\}) > 0$ then $\mathrm{KL}(P\|Q) = \infty$. Footnote 2: we note in passing that any Empirical Risk Minimizer learns thresholds with these parameters using fewer than 50 examples.) Corollary 1 (PAC-learnability of linear classifiers cannot be explained by PAC-Bayes). Let $H_{\mathbb{N}}$ denote the class of thresholds over $X_{\mathbb{N}} = \mathbb{N}$ and let m > 0. Then, for every algorithm A that maps input samples S of size m to output posteriors $Q_S$, and for every arbitrarily large N > 0, there exists a realizable distribution D such that, for any prior P, with probability at least 1/16 over $S \sim D^m$ one of the following holds: $\mathrm{KL}(Q_S\|P) > N$ or $L_D(Q_S) > 1/4$.
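To make Theorem 1 concrete, the following sketch evaluates the right-hand side of the bound for a Gibbs classifier over a finite hypothesis list. The constant hidden in the O(·) is set to 1 here purely for illustration, and all names are placeholders rather than the paper's notation made executable.

```python
import math

def kl_divergence(q, p):
    # KL(Q || P) for discrete distributions given as lists of probabilities.
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

def empirical_gibbs_error(q, hypotheses, sample):
    # L_S(Q): posterior-weighted empirical 0/1 error of the Gibbs classifier.
    def err(h):
        return sum(h(x) != y for x, y in sample) / len(sample)
    return sum(qi * err(h) for qi, h in zip(q, hypotheses))

def pac_bayes_bound(q, p, hypotheses, sample, delta=0.05):
    # Theorem 1 with the O(.) constant taken as 1: an upper bound on L_D(Q)
    # holding with probability at least 1 - delta over the sample.
    m = len(sample)
    kl = kl_divergence(q, p)
    slack = math.sqrt((kl + math.log(math.sqrt(m) / delta)) / m)
    return empirical_gibbs_error(q, hypotheses, sample) + slack
```

If the posterior concentrates on hypotheses determined by the sample while the prior must remain sample-independent, the KL term — and hence the bound — can blow up; this is exactly the effect that Theorem 2 and Corollary 1 formalize.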
A different interpretation of Theorem 2 is that in order to derive meaningful PAC-Bayes generalization bounds for PAC-learning thresholds over a finite domain $\mathcal{X}_n$, the sample complexity must grow to infinity with the domain size $n$ (it is at least $\Omega(\log^\star(n))$). In contrast, the true sample complexity of this problem is $O(\log(1/\delta)/\epsilon)$, which is independent of $n$.

4 Technical Overview

A common approach to proving impossibility results in computer science (and in machine learning in particular) exploits a Minmax principle, whereby one specifies a fixed hard distribution over inputs, and establishes the desired impossibility result for any algorithm with respect to random inputs from that distribution. As an example, consider the “No-Free-Lunch Theorem” which establishes that the VC dimension lower bounds the sample complexity of PAC-learning a class $\mathcal{H}$. Here, one fixes the distribution to be uniform over a shattered set of size $d = \mathrm{VC}(\mathcal{H})$, and argues that every learning algorithm must observe $\Omega(d)$ examples. (See e.g. Theorem 5.1 in [32].) Such “Minmax” proofs establish a stronger assertion: they apply even to algorithms that “know” the input-distribution. For example, the No-Free-Lunch Theorem applies even to learning algorithms that are designed given the knowledge that the marginal distribution is uniform over some shattered set.

Interestingly, such an approach is bound to fail in proving Theorem 2. The reason is that if the marginal distribution $\mathcal{D}_{\mathcal{X}}$ over $\mathcal{X}_n$ is fixed, then one can pick an $\epsilon/2$-cover³ $\mathcal{C}_n \subseteq \mathcal{H}_n$ of size $|\mathcal{C}_n| = O(1/\epsilon)$, and use any Empirical Risk Minimizer for $\mathcal{C}_n$. Then, by picking the prior distribution $P$ to be uniform over $\mathcal{C}_n$, one obtains a PAC-Bayes bound which scales with the entropy $H(P) = \log|\mathcal{C}_n| = O(\log(1/\epsilon))$, and yields a $\mathrm{poly}(1/\epsilon, \log(1/\delta))$ generalization bound, which is independent of $n$. In other words, in the context of Theorem 2, there is no single distribution which is “hard” for all algorithms. Thus, to overcome this difficulty one must come up with a “method” which assigns to any given algorithm $A$ a “hard” distribution $\mathcal{D} = \mathcal{D}_A$, which witnesses Theorem 2 with respect to $A$. The challenge is that $A$ is an arbitrary algorithm; e.g. it may be improper⁴ or add different sorts of noise to its output classifier. We refer the reader to [26, 25, 3] for a line of work which explores in detail a similar “failure” of the Minmax principle in the context of PAC learning with low mutual information.

³I.e. $\mathcal{C}_n$ satisfies that $(\forall h \in \mathcal{H}_n)(\exists c \in \mathcal{C}_n) : \Pr_{x\sim \mathcal{D}_{\mathcal{X}}}(c(x) \neq h(x)) \le \epsilon/2$.
⁴I.e. $A$ may output hypotheses which are not thresholds, or Gibbs-classifiers supported on hypotheses which are not thresholds.

The method we use in the proof of Theorem 2 exploits Ramsey Theory. In a nutshell, Ramsey Theory provides powerful tools which allow us to detect, for any learning algorithm, a large homogeneous set such that the behavior of $A$ on inputs from the homogeneous set is highly regular. Then, we consider the uniform distribution over the homogeneous set to establish Theorem 2. We note that similar applications of Ramsey Theory in proving lower bounds in computer science date back to the 80's [24]. For more recent usages see e.g. [8, 11, 10, 1]. Our proof closely follows the argument of Alon et al. [1], which establishes an impossibility result for learning $\mathcal{H}_n$ by differentially-private algorithms.

Technical Comparison with the Work by Alon et al. [1]. For readers who are familiar with the work of [1], let us summarize the main differences between the two proofs. The main challenge in extending the technique from [1] to prove Theorem 2 is that PAC-Bayes bounds are only required to hold for typical samples.
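Before continuing the comparison with [1], here is a small numerical sketch (ours) of the covering construction above, under the added assumption that the fixed marginal is uniform over $[n]$; it only illustrates why a single fixed hard distribution cannot exist.

```python
import math

def threshold_cover(n, eps):
    """An (eps/2)-cover of H_n under the uniform marginal on {1, ..., n}.

    Two thresholds h_j and h_k disagree on |j - k| points, i.e. on a
    |j - k| / n fraction of the uniform marginal, so keeping every step-th
    threshold gives a cover whose size is O(1/eps), independent of n.
    """
    step = max(1, math.floor(eps * n / 2))
    return list(range(0, n + 1, step))

for n in [10**3, 10**6, 10**9]:
    C = threshold_cover(n, eps=0.1)
    # A uniform prior over C gives KL(Q||P) <= log|C| for any Q supported on C.
    print(n, len(C), math.log(len(C)))
```

The printed cover size (and hence the KL term of the resulting PAC-Bayes bound) stays constant as $n$ grows, which is exactly why the hard distribution in Theorem 2 must be tailored to the algorithm.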
The requirement to handle typical samples is unlike the notion of differential privacy (which was the focus of [1]), which is defined with respect to all samples. Thus, establishing a lower bound in the context of differential privacy is easier: one only needs to demonstrate a single sample for which privacy is breached. However, to prove Theorem 2 one has to demonstrate that the lower bound applies to many samples. Concretely, this affects the following parts of the proof: (i) The Ramsey argument in the current manuscript (Lemma 1) is more complex: to overcome the above difficulty we needed to modify the coloring, and the overall construction is more convoluted. (ii) Once Ramsey's Theorem is applied and the homogeneous subset $R_n \subseteq \mathcal{X}_n$ is derived, one still needs to derive a lower bound on the PAC-Bayes quantity. This requires a technical argument (Lemma 2), which is tailored to the definition of PAC-Bayes. Again, this lemma is more complicated than the corresponding lemma in [1]. (iii) Even with Lemma 1 and Lemma 2 in hand, the remaining derivation of Theorem 2 still requires a careful analysis which involves defining several “bad” events and bounding their probabilities. Again, this is all a consequence of the fact that the PAC-Bayes quantity is an “average-case” complexity measure.

4.1 Proof Sketch and Key Definitions

The proof of Theorem 2 consists of two steps: (i) detecting a hard distribution $\mathcal{D} = \mathcal{D}_A$ which witnesses Theorem 2 with respect to the assumed algorithm $A$, and (ii) establishing the conclusion of Theorem 2 given the hard distribution $\mathcal{D}$. The first part is combinatorial (it exploits Ramsey Theory), and the second part is more information-theoretic. For the purpose of exposition, we focus in this technical overview on a specific algorithm $A$. This will make the introduction of the key definitions and the presentation of the main technical tools more accessible.

The algorithm $A$. Let $S = \langle (x_1, y_1), \ldots, (x_m, y_m)\rangle$ be an input sample. The algorithm $A$ outputs the posterior distribution $Q_S$ which is defined as follows: let $h_{x_i} = \mathbb{1}[x > x_i] - \mathbb{1}[x \le x_i]$ denote the threshold corresponding to the $i$'th input example. The posterior $Q_S$ is supported on $\{h_{x_i}\}_{i=1}^m$, and to each $h_{x_i}$ it assigns a probability according to a decreasing function of its empirical risk. (So, hypotheses with lower risk are more probable.) The specific choice of the decreasing function does not matter, but for concreteness let us pick the function $\exp(-x)$. Thus,
$$Q_S(h_{x_i}) \propto \exp\bigl(-L_S(h_{x_i})\bigr). \tag{1}$$
While one can directly prove that the above algorithm does not admit a PAC-Bayes analysis, we provide here an argument which follows the lines of the general case. We start by explaining the key property of Homogeneity, which allows us to detect the hard distribution.

4.1.1 Detecting a Hard Distribution: Homogeneity

The first step in the proof of Theorem 2 takes the given algorithm and identifies a large subset of the domain on which its behavior is Homogeneous. In particular, we will soon see that the algorithm $A$ is Homogeneous on the entire domain $\mathcal{X}_n$. In order to define Homogeneity, we use the following equivalence relation between samples:

Definition 1 (Equivalent Samples). Let $S = \langle (x_1, y_1), \ldots, (x_m, y_m)\rangle$ and $S' = \langle (x'_1, y'_1), \ldots, (x'_m, y'_m)\rangle$ be two samples. We say that $S$ and $S'$ are equivalent if for all $i, j \le m$ the following holds:
1. $x_i \le x_j \iff x'_i \le x'_j$, and
2. $y_i = y'_i$.
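To make these definitions concrete, the following is a small Python sketch (ours, not from the paper) of the posterior in Equation (1) together with the equivalence relation of Definition 1; the function names are our own. A worked example of Definition 1 follows below.

```python
import numpy as np

def gibbs_posterior(xs, ys):
    """The posterior of Equation (1): Q_S(h_{x_i}) proportional to exp(-L_S(h_{x_i}))."""
    risks = np.array([np.mean(np.where(xs > xi, 1, -1) != ys) for xi in xs])
    weights = np.exp(-risks)
    return weights / weights.sum()

def prob_plus_one(x, xs, ys):
    """Pr_{h ~ Q_S}[h(x) = +1]: total Q_S-mass of the thresholds lying strictly below x."""
    q = gibbs_posterior(xs, ys)
    return q[xs < x].sum()

def equivalent(xs, ys, xt, yt):
    """Definition 1: equal label vectors and the same order relation among the points."""
    same_labels = all(a == b for a, b in zip(ys, yt))
    same_order = all((xs[i] <= xs[j]) == (xt[i] <= xt[j])
                     for i in range(len(xs)) for j in range(len(xs)))
    return same_labels and same_order
```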
For example, $\langle(1,-),(5,+),(8,+)\rangle$ and $\langle(10,-),(70,+),(100,+)\rangle$ are equivalent, but $\langle(3,-),(6,+),(4,+)\rangle$ is not equivalent to them (because of Item 1).

For a point $x \in \mathcal{X}_n$ let $\mathrm{pos}(x;S)$ denote the number of examples in $S$ that are less than or equal to $x$:
$$\mathrm{pos}(x;S) = \bigl|\{x_i \in \bar S : x_i \le x\}\bigr|. \tag{2}$$
For a sample $S = \langle (x_1,y_1),\ldots,(x_m,y_m)\rangle$ let $\pi(S)$ denote the order-type of $S$:
$$\pi(S) = \bigl(\mathrm{pos}(x_1;S), \mathrm{pos}(x_2;S), \ldots, \mathrm{pos}(x_m;S)\bigr). \tag{3}$$
So, the samples $\langle(1,-),(5,+),(8,+)\rangle$ and $\langle(10,-),(70,+),(100,+)\rangle$ have order-type $\pi = (1,2,3)$, whereas $\langle(3,-),(6,+),(4,+)\rangle$ has order-type $\pi = (1,3,2)$. Note that $S, S'$ are equivalent if and only if they have the same labels-vectors and the same order-type. Thus, we encode the equivalence class of a sample by the pair $(\pi, \bar y)$, where $\pi$ denotes its order-type and $\bar y = (y_1, \ldots, y_m)$ denotes its labels-vector. The pair $(\pi, \bar y)$ is called the equivalence-type of $S$.

We claim that $A$ satisfies the following property of Homogeneity:

Property 1 (Homogeneity). The algorithm $A$ possesses the following property: for every two equivalent samples $S, S'$ and every $x, x' \in \mathcal{X}_n$ such that $\mathrm{pos}(x,S) = \mathrm{pos}(x',S')$,
$$\Pr_{h\sim Q_S}[h(x) = 1] = \Pr_{h'\sim Q_{S'}}[h'(x') = 1],$$
where $Q_S, Q_{S'}$ denote the Gibbs-classifiers outputted by $A$ on the samples $S, S'$.

In short, Homogeneity means that the probability that $h\sim Q_S$ satisfies $h(x)=1$ depends only on $\mathrm{pos}(x,S)$ and on the equivalence-type of $S$. To see that $A$ is indeed homogeneous, let $S, S'$ be equivalent samples and let $Q_S, Q_{S'}$ denote the corresponding Gibbs-classifiers outputted by $A$. Then, for every $x, x'$ such that $\mathrm{pos}(x,S) = \mathrm{pos}(x',S')$, Equation (1) yields that
$$\Pr_{h\sim Q_S}\bigl[h(x) = +1\bigr] = \sum_{x_i < x} Q_S(h_{x_i}) = \sum_{x'_i < x'} Q_{S'}(h_{x'_i}) = \Pr_{h'\sim Q_{S'}}\bigl[h'(x') = +1\bigr],$$
where in the second transition we used that $Q_S(h_{x_i}) = Q_{S'}(h_{x'_i})$ for every $i \le m$ (because $S, S'$ are equivalent), and that $x_i \le x \iff x'_i \le x'$ for every $i$ (because $\mathrm{pos}(x,S) = \mathrm{pos}(x',S')$).

The General Case: Approximate Homogeneity. Before we continue to define the hard distribution for algorithm $A$, let us discuss how the proof of Theorem 2 handles arbitrary algorithms that are not necessarily homogeneous. The general case complicates the argument in two ways. First, the notion of Homogeneity is relaxed to an approximate variant which is defined next. Here, an order-type $\pi$ is called a permutation if $\pi(i) \neq \pi(j)$ for every distinct $i, j \le m$. (Indeed, in this case $\pi = (\mathrm{pos}(x_1;S), \ldots, \mathrm{pos}(x_m;S))$ is a permutation of $1, \ldots, m$.) Note that the order-type of $S = \langle (x_1,y_1),\ldots,(x_m,y_m)\rangle$ is a permutation if and only if all the points in $S$ are distinct (i.e. $x_i \neq x_j$ for all $i \neq j$).

Definition 2 (Approximate Homogeneity). An algorithm $B$ is $\gamma$-approximately $m$-homogeneous if the following holds: let $S, S'$ be two equivalent samples of length $m$ whose order-type is a permutation, and let $x \notin \bar S,\ x' \notin \bar S'$ be such that $\mathrm{pos}(x,S) = \mathrm{pos}(x',S')$. Then,
$$\bigl|Q_S(x) - Q_{S'}(x')\bigr| \le \frac{\gamma}{5m}, \tag{4}$$
where $Q_S, Q_{S'}$ denote the Gibbs-classifiers outputted by $B$ on the samples $S, S'$, and $Q_S(x)$ is shorthand for $\Pr_{h\sim Q_S}[h(x)=1]$.

Second, we need to identify a sufficiently large subdomain on which the assumed algorithm is approximately homogeneous. This is achieved by the next lemma, which is based on a Ramsey argument.

Lemma 1 (Large Approximately Homogeneous Sets). Let $m, n > 1$ and let $B$ be an algorithm that is defined over input samples of size $m$ over $\mathcal{X}_n$. Then, there is $\mathcal{X}' \subseteq \mathcal{X}_n$ of size $|\mathcal{X}'| \ge \Phi(m,\gamma,n)$ such that the restriction of $B$ to input samples from $\mathcal{X}'$ is $\gamma$-approximately $m$-homogeneous.

We prove Lemma 1 in the full version [19].
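Continuing the sketch above (it reuses numpy and prob_plus_one from there), the following lines compute $\mathrm{pos}(\cdot;S)$ and the order-type and check Property 1 numerically on the running pair of equivalent samples; this is only an illustration of the claim just proved, under our own naming.

```python
def pos(x, xs):
    """pos(x; S): the number of sample points less than or equal to x (Equation (2))."""
    return int(sum(xi <= x for xi in xs))

def order_type(xs):
    """pi(S) = (pos(x_1; S), ..., pos(x_m; S)) (Equation (3))."""
    return tuple(pos(xi, xs) for xi in xs)

S_x, S_y = np.array([1, 5, 8]), np.array([-1, 1, 1])
T_x, T_y = np.array([10, 70, 100]), np.array([-1, 1, 1])
assert order_type(S_x) == order_type(T_x) == (1, 2, 3)

# Property 1 for the algorithm of Equation (1): points with equal pos(.; S)
# receive exactly the same probability of the label +1.
assert pos(3, S_x) == pos(40, T_x) == 1
assert abs(prob_plus_one(3, S_x, S_y) - prob_plus_one(40, T_x, T_y)) < 1e-12
```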
For the rest of this exposition we rely on Property 1, as it simplifies the presentation of the main ideas.

The Hard Distribution $\mathcal{D}$. We are now ready to finish the first step and define the “hard” distribution $\mathcal{D}$. Define $\mathcal{D}$ to be uniform over examples $(x,y)$ such that $y = h_{n/2}(x)$. So, each drawn example $(x,y)$ satisfies that $x$ is uniform in $\mathcal{X}_n$ and $y = -1$ if and only if $x \le n/2$. In the general case, $\mathcal{D}$ will be defined in the same way with respect to the detected homogeneous subdomain.

4.1.2 Hard Distribution $\Longrightarrow$ Lower Bound: Sensitivity

We next outline the second step of the proof, which establishes Theorem 2 using the hard distribution $\mathcal{D}$. Specifically, we show that for a sample $S\sim\mathcal{D}^m$,
$$\mathrm{KL}(Q_S\|P) = \tilde{\Omega}\left(\frac{1}{m^2}\log|\mathcal{X}_n|\right),$$
with a constant probability bounded away from zero. (In the general case $|\mathcal{X}_n|$ is replaced by $\Phi(m,\gamma,n)$, the size of the homogeneous set.)

Sensitive Indices. We begin with describing the key property of homogeneous learners. Let $(\pi,\bar y)$ denote the equivalence-type of the input sample $S$. By homogeneity (Property 1), there is a list of numbers $p_0,\ldots,p_m$, which depends only on the equivalence-type $(\pi,\bar y)$, such that $\Pr_{h\sim Q_S}[h(x)=1] = p_i$ for every $x\in\mathcal{X}_n$, where $i = \mathrm{pos}(x,S)$. The crucial observation is that there exists an index $i\le m$ which is sensitive in the sense that
$$p_i - p_{i-1} \ge \frac{1}{m}. \tag{5}$$
Indeed, consider $x_j$ such that $h_{x_j} = \arg\min_k L_S(h_{x_k})$, and let $i = \mathrm{pos}(x_j,S)$. Then,
$$p_i - p_{i-1} = Q_S(h_{x_j}) = \frac{\exp\bigl(-L_S(h_{x_j})\bigr)}{\sum_{i'\le m}\exp\bigl(-L_S(h_{x_{i'}})\bigr)} \ge \frac{1}{m},$$
since the numerator is the largest among the $m$ terms in the denominator. In the general case we show that any homogeneous algorithm that learns $\mathcal{H}_n$ satisfies Equation (5) for typical samples (see the full version [19]). The intuition is that any algorithm that learns the distribution $\mathcal{D}$ must output a Gibbs-classifier $Q_S$ such that for typical points $x$, if $x > n/2$ then $\Pr_{h\sim Q_S}[h(x)=1]\approx 1$, and if $x\le n/2$ then $\Pr_{h\sim Q_S}[h(x)=1]\approx 0$. Thus, when traversing all $x$'s from $1$ up to $n$ there must be a jump between $p_{i-1}$ and $p_i$ for some $i$.

From Sensitive Indices to a Lower Bound on the KL-divergence. How do sensitive indices imply a lower bound on PAC-Bayes? This is the most technical part of the proof. The crux of it is a connection between sensitivity and the KL-divergence which we discuss next. Consider a sensitive index $i$ and let $x_j$ be the input example such that $\mathrm{pos}(x_j,S) = i$. For $\hat x\in\mathcal{X}_n$, let $S_{\hat x}$ denote the sample obtained by replacing $x_j$ with $\hat x$:
$$S_{\hat x} = \langle (x_1,y_1),\ldots,(x_{j-1},y_{j-1}),(\hat x, y_j),(x_{j+1},y_{j+1}),\ldots,(x_m,y_m)\rangle,$$
and let $Q_{\hat x} := Q_{S_{\hat x}}$ denote the posterior outputted by $A$ given the sample $S_{\hat x}$. Consider the set $I\subseteq\mathcal{X}_n$ of all points $\hat x$ such that $S_{\hat x}$ is equivalent to $S$. Equation (5) implies that for every $x,\hat x\in I$,
$$\Pr_{h\sim Q_{\hat x}}[h(x)=1] = \begin{cases} p_{i-1} & x < \hat x,\\ p_i & x > \hat x.\end{cases}$$
Combined with the fact that $p_i - p_{i-1}\ge 1/m$, this implies a lower bound on the KL-divergence between an arbitrary prior $P$ and $Q_{\hat x}$ for most $\hat x\in I$. This is summarized in the following lemma:

Lemma 2 (Sensitivity Lemma). Let $I$ be a linearly ordered set and let $\{Q_{\hat x}\}_{\hat x\in I}$ be a family of posteriors supported on $\{\pm 1\}^I$. Suppose there are $q_1 < q_2\in[0,1]$ such that for every $x,\hat x\in I$:
$$x < \hat x \implies \Pr_{h\sim Q_{\hat x}}[h(x)=1] \le q_1 + \frac{q_2-q_1}{4}, \qquad x > \hat x \implies \Pr_{h\sim Q_{\hat x}}[h(x)=1] \ge q_2 - \frac{q_2-q_1}{4}.$$
Then, for every prior distribution $P$, if $\hat x\in I$ is drawn uniformly at random, then the following event occurs with probability at least $1/4$:
$$\mathrm{KL}(Q_{\hat x}\|P) = \Omega\left((q_2-q_1)^2\,\frac{\log|I|}{\log\log|I|}\right).$$
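As a concrete illustration (ours, reusing numpy and gibbs_posterior from the sketch above) of the sensitive-index observation in Equation (5): for the algorithm of Equation (1), the list $p_0,\ldots,p_m$ is simply the cumulative posterior mass along the sorted sample, so its largest jump is the posterior weight of the empirical-risk minimizer and is therefore at least $1/m$. Lemma 2, which turns this jump into a KL lower bound, is discussed next.

```python
def jump_profile(xs, ys):
    """The list p_0, ..., p_m for the algorithm of Equation (1):
    p_i = Pr_{h ~ Q_S}[h(x) = 1] for any x with pos(x; S) = i."""
    order = np.argsort(xs)
    xs, ys = xs[order], ys[order]
    q = gibbs_posterior(xs, ys)
    # Mass of thresholds below x accumulates one sample point at a time.
    return np.concatenate(([0.0], np.cumsum(q)))

p = jump_profile(np.array([2, 9, 4, 7]), np.array([-1, 1, -1, 1]))
# The largest gap equals Q_S of the empirical-risk minimizer, hence is >= 1/m.
assert np.diff(p).max() >= 1 / 4
```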
Lemma 2 (the sensitivity lemma) tells us that in the above situation, the KL divergence between $Q_{\hat x}$ and any prior $P$, for a random choice of $\hat x$, scales in terms of two quantities: the distance between the two values, $q_2 - q_1$, and the size of $I$. The proof of Lemma 2 is provided in the full version [19]. In a nutshell, the strategy is to bound from below $\mathrm{KL}(Q_{\hat x}^r\|P^r)$, where $r$ is sufficiently small; the desired lower bound then follows from the chain rule, $\mathrm{KL}(Q_{\hat x}\|P) = \frac{1}{r}\mathrm{KL}(Q_{\hat x}^r\|P^r)$. Obtaining the lower bound with respect to the $r$-fold products is the crux of the proof. In short, we will exhibit events $E_{\hat x}$ such that $Q_{\hat x}^r(E_{\hat x})\ge\frac{1}{2}$ for every $\hat x\in I$, but $P^r(E_{\hat x})$ is tiny for $\frac{|I|}{4}$ of the $\hat x$'s. This implies a lower bound on $\mathrm{KL}(Q_{\hat x}^r\|P^r)$ since
$$\mathrm{KL}(Q_{\hat x}^r\|P^r) \ge \mathrm{KL}\bigl(Q_{\hat x}^r(E_{\hat x})\,\big\|\,P^r(E_{\hat x})\bigr)$$
by the data-processing inequality (here the right-hand side denotes the KL divergence between two Bernoulli distributions with the indicated biases).

Wrapping Up. We now continue in deriving a lower bound for $A$. Consider an input sample $S\sim\mathcal{D}^m$. In order to apply Lemma 2, fix any equivalence-type $(\pi,\bar y)$ with a sensitive index $i$ and let $x_j$ be such that $\mathrm{pos}(x_j;S) = i$. The key step is to condition the random sample $S$ on $(\pi,\bar y)$ as well as on $\{x_t\}_{t=1}^m\setminus\{x_j\}$, i.e. all sample points besides the sensitive point $x_j$. Thus, only $x_j$ remains to be drawn in order to fully specify $S$. Note then that, by symmetry, $\hat x$ is uniformly distributed in a set $I\subseteq\mathcal{X}_n$, and plugging $q_1 := p_{i-1}$, $q_2 := p_i$ into Lemma 2 yields that for any prior distribution $P$:
$$\mathrm{KL}(Q_S\|P) \ge \tilde{\Omega}\left(\frac{1}{m^2}\log|I|\right),$$
with probability at least $1/4$. Note that we are not quite done, since the size $|I|$ is a random variable which depends on the type $(\pi,\bar y)$ and the sample points $\{x_k\}_{k\neq j}$. However, the distribution of $|I|$ can be analyzed by elementary tools. In particular, we show that $|I|\ge\Omega(|\mathcal{X}_n|/m^2)$ with high enough probability, which yields the desired lower bound on the PAC-Bayes quantity. (In the general case $|\mathcal{X}_n|$ is replaced by the size of the homogeneous set.)
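For completeness, here is a short sketch (ours, in the notation above) of the two standard information-theoretic facts that this strategy relies on: tensorization of the KL divergence over product measures, and the data-processing inequality applied to the indicator of the event $E_{\hat x}$. The actual argument in [19] is of course more delicate.

```latex
% Tensorization of KL for product measures (the chain rule used above):
\mathrm{KL}\!\left(Q_{\hat{x}}^{r}\,\middle\|\,P^{r}\right)
  = \mathbb{E}_{(h_1,\dots,h_r)\sim Q_{\hat{x}}^{r}}
      \Big[\textstyle\sum_{t=1}^{r}\log\tfrac{Q_{\hat{x}}(h_t)}{P(h_t)}\Big]
  = r\,\mathrm{KL}\!\left(Q_{\hat{x}}\,\middle\|\,P\right).
% Data processing via the indicator of E_{\hat{x}}: writing
% a = Q_{\hat{x}}^{r}(E_{\hat{x}}) and b = P^{r}(E_{\hat{x}}),
\mathrm{KL}\!\left(Q_{\hat{x}}^{r}\,\middle\|\,P^{r}\right)
  \;\ge\; a\log\frac{a}{b} + (1-a)\log\frac{1-a}{1-b}
  \;\ge\; a\log\frac{1}{b} - \log 2 ,
% so a >= 1/2 together with a tiny b forces KL(Q_{\hat{x}}^r || P^r),
% and hence KL(Q_{\hat{x}} || P), to be large.
```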
5 Discussion

In this work we presented a limitation of the PAC-Bayes framework by showing that PAC-learnability of one-dimensional thresholds cannot be established using PAC-Bayes. Perhaps the biggest caveat of our result is the mild dependence of the bound on the size of the domain in Theorem 2. In fact, Theorem 2 does not exclude the possibility of PAC-learning thresholds over $\mathcal{X}_n$ with sample complexity that scales with $O(\log^* n)$ such that the PAC-Bayes bound vanishes. It would be interesting to explore this possibility; one promising direction is to borrow ideas from the differential privacy literature: [4] and [6] designed a private learning algorithm for thresholds with sample complexity $\exp(\log^* n)$; this bound was later improved by [16] to $\tilde O((\log^* n)^2)$. Also, [7] showed that finite Littlestone dimension is sufficient for private learnability, and it would be interesting to extend these results to the context of PAC-Bayes. Let us note that in the context of pure differential privacy, the connection between PAC-Bayes analysis and privacy has been established in [14].

Non-uniform learning bounds. Another aspect is the implication of our work for learning algorithms beyond the uniform PAC setting. Indeed, many successful and practical algorithms exhibit sample complexity that depends on the target distribution. E.g., the k-Nearest-Neighbor algorithm eventually learns any target distribution (with a distribution-dependent rate). The first point we address in this context concerns interpolating algorithms. These are learners that achieve zero (or close to zero) training error (i.e. they interpolate the training set). Examples of such algorithms include kernel machines, boosting, random forests, as well as deep neural networks [5, 29]. PAC-Bayes analysis has been utilized in this context, for example, to provide margin-dependent generalization guarantees for kernel machines [18]. It is therefore natural to ask whether our lower bound has implications in this context. As a simple case-study, consider the 1-Nearest-Neighbour. Observe that this algorithm forms a proper and consistent learner for the class of 1-dimensional thresholds⁵, and therefore enjoys a very fast learning rate. On the other hand, our result implies that for any algorithm (including 1-Nearest-Neighbor) that is amenable to PAC-Bayes analysis, there is a distribution realizable by thresholds on which it has high population error. Thus, no algorithm with a PAC-Bayes generalization bound can match the performance of nearest-neighbour with respect to such distributions.

⁵Indeed, given any realizable sample it will output the threshold which maximizes the margin.

Finally, this work also relates to a recent attempt to explain generalization through the implicit bias of learning algorithms: it is commonly argued that the generalization performance of algorithms can be explained by an implicit algorithmic bias. Building upon the flexibility of providing distribution-dependent generalization bounds, the PAC-Bayes framework has seen a resurgence of interest in this context towards explaining generalization in large-scale modern practical algorithms [27, 28, 13, 14, 2]. Indeed, PAC-Bayes bounds seem to provide non-vacuous bounds in several relevant domains [17, 14]. Nevertheless, the work here shows that any algorithm that can learn 1D thresholds is necessarily not biased, in the PAC-Bayes sense, towards a (possibly distribution-dependent) prior. We mention that recently, [12] showed that SGD's generalization performance indeed cannot be attributed to some implicit bias of the algorithm that governs the generalization.

Broader Impact

There are no foreseen ethical or societal consequences for the research presented herein.

Acknowledgments and Disclosure of Funding

R.L is supported by an ISF grant no. 2188/20 and partially funded by an unrestricted gift from Google. Any opinions, findings, and conclusions or recommendations expressed in this work are those of the author(s) and do not necessarily reflect the views of Google. S.M is supported by the Israel Science Foundation (grant No. 1225/20), by an Azrieli Faculty Fellowship, and by a grant from the United States - Israel Binational Science Foundation (BSF). Part of this work was done while the author was at Google Research.
1. What is the main contribution of the paper regarding PAC-Bayes bounds?
2. What are the strengths of the paper's approach to proving the lower bound for a specific algorithm and generalizing it to all algorithms?
3. How does the reviewer assess the significance of the paper's findings regarding the limitations of the PAC-Bayes framework for classification?
4. What are the weaknesses of the paper regarding its potential lack of practicality when considering discretization of the parameter space?
5. Do you have any additional questions or concerns about the review or the paper it discusses?
Summary and Contributions
The paper demonstrates a scenario -- namely learning initial segments -- where any PAC-Bayes bound is vacuous, although the standard VC bound works just fine. In particular, the lower bound depends (although mildly) on the size of the domain. The lower bound is proved for any algorithm, not just usual Gibbs classifiers (e.g., exponential or Gaussian). The proof deviates from the usual minimax arguments where a hard distribution is found; here a more involved argument is needed.

Strengths
The contributions of the paper can be conceptually divided into two parts: (i) proving the lower bound for a specific but natural algorithm and (ii) generalizing the argument to all algorithms. Even the first part is quite illuminating: it shows that the KL divergence of the prior and posterior is Omega(log n), where n is the domain size. This shows a fundamental limitation of the PAC-Bayes approach (which is being used frequently these days). The second part generalizes this to arbitrary algorithms, at the cost of a weaker lower bound Omega(log*(n)). The technique of using Ramsey theory in the argument is quite interesting (although it is based on a similar argument in [1]) and may have other applications. As far as I know, this is the first paper that shows the limitations of the PAC-Bayes framework for classification. Moreover, the techniques used for the proof are novel, sophisticated, and potentially useful in other scenarios.

Weaknesses
In practice, people sometimes (often?) discretize the parameter space and then use the PAC bound on that discretized hypothesis class. This improves the bound significantly in practice (although in theory the discretization can hurt in the worst case), and the lower bound may not be that powerful for the finite hypothesis class that is built using discretization. More discussion on this would help to increase the applicability of this lower bound to practical situations.
1. What is the main contribution of the paper regarding uniform learning?
2. What are the strengths of the proposed approach, particularly in terms of its proof technique and potential for future applications?
3. Do you have any concerns or criticisms regarding the paper's claims or methodology?
4. How does the reviewer assess the significance and novelty of the paper's findings?
5. Are there any suggestions for improving the paper's presentation or emphasizing its key contributions?
Summary and Contributions
1. The paper demonstrates (via lower bounds on the standard PAC-Bayes bounds) that uniform learning cannot be explained by the standard PAC-Bayes bounds.
2. The authors introduce a new (to PAC-Bayes) proof technique to demonstrate this using Ramsey Theory. The connection to Ramsey Theory is quite interesting as an avenue for future work on lower bounds.

Strengths
1. The claims appear to be sound.
2. The proof technique is interesting and could likely be modified and used for more problems in the future for deriving lower bounds. This will give the paper some long-term impact.
3. While the overall result was not surprising at first, the fact that the choice of adversarial distribution does not depend on the prior IS very surprising and interesting. The authors should aim to highlight this more and explain why this is more difficult than allowing the adversarial distribution to depend on P, because it dramatically affected how excited I was about the result.

Weaknesses
1. The result is not surprising at first glance, since PAC-Bayes results are essentially non-uniform learning bounds (the sample complexity depends on the identity of the best hypothesis) rather than uniform learning bounds (sample complexity independent of the identity of the best hypothesis), where the hypothesis space can be seen as "gradated" based on the prior probability intensity.
2. The paper should explain earlier and more intuitively why having the adversarial distribution choice not depend on P is challenging and more interesting.
Title A Limitation of the PAC-Bayes Framework Abstract PAC-Bayes is a useful framework for deriving generalization bounds which was introduced by McAllester (’98). This framework has the flexibility of deriving distributionand algorithm-dependent bounds, which are often tighter than VCrelated uniform convergence bounds. In this manuscript we present a limitation for the PAC-Bayes framework. We demonstrate an easy learning task which is not amenable to a PAC-Bayes analysis. Specifically, we consider the task of linear classification in 1D; it is well-known that this task is learnable using just O(log(1/δ)/ ) examples. On the other hand, we show that this fact can not be proved using a PAC-Bayes analysis: for any algorithm that learns 1-dimensional linear classifiers there exists a (realizable) distribution for which the PAC-Bayes bound is arbitrarily large. 1 Introduction The classical setting of supervised binary classification considers learning algorithms that receive (binary) labelled examples and are required to output a predictor or a classifier that predicts the label of new and unseen examples. Within this setting, Probably Approximately Correct (PAC) generalization bounds quantify the success of an algorithm to approximately predict with high probability. The PAC-Bayes framework, introduced in [22, 34] and further developed in [21, 20, 30], provides PAC-flavored bounds to Bayesian algorithms that produce Gibbs-classifiers (also called stochastic-classifiers). These are classifiers that, instead of outputting a single classifier, output a probability distribution over the family of classifiers. Their performance is measured by the expected success of prediction where expectation is taken with respect to both sampled data and sampled classifier. A PAC-Bayes generalization bound relates the generalization error of the algorithm to a KL distance between the stochastic output classifier and some prior distribution P . In more detail, the generalization bound is comprised of two terms: first, the empirical error of the output Gibbs-classifier, and second, the KL distance between the output Gibbs classifier and some arbitrary (but sampleindependent) prior distribution. This standard bound captures a basic intuition that a good learner needs to balance between bias, manifested in the form of a prior, and fitting the data, which is measured by the empirical loss. A natural task is then, to try and characterize the potential as well as limitations of such Gibbs-learners that are amenable to PAC-Bayes analysis. As far as the potential, several past results established the strength and utility of this framework (e.g. [33, 31, 18, 13, 17]). In this work we focus on the complementary task, and present the first limitation result showing that there are classes that are learnable, even in the strong distribution-independent setting of PAC, but do not admit any algorithm that is amenable to a non-vacuous PAC-Bayes analysis. We stress that this is true even if we exploit the bound to its fullest and allow any algorithm and any possible, potentially distribution-dependent, prior. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. More concretely, we consider the class of 1-dimensional thresholds, i.e. the class of linear classifiers over the real line. It is a well known fact that this class is learnable and enjoys highly optimistic sample complexity. 
Perhaps surprisingly, though, we show that any Gibbs-classifier that learns the class of thresholds must output posteriors from an unbounded set. We emphasize that the result holds even for priors that depend on the data distribution. From a technical perspective, our proof exploits and expands a technique that was recently introduced by Alon et al. [1] to establish limitations on differentially-private PAC learning algorithms. The argument here follows similar lines, and we believe that these similarities in fact highlight a potentially powerful method for deriving further limitation results, especially in the context of stability.
2 Preliminaries
2.1 Problem Setup
We consider the standard setting of binary classification. Let X denote the domain and Y = {±1} the label space. We study learning algorithms that observe as input a sample S of labelled examples drawn independently from an unknown target distribution D, supported on X × Y. The output of the algorithm is a hypothesis h : X → Y, and its goal is to minimize the 0/1-loss, which is defined by LD(h) = E_{(x,y)∼D}[1[h(x) ≠ y]]. We will focus on the setting where the distribution D is realizable with respect to a fixed hypothesis class H ⊆ Y^X which is known in advance. That is, it is assumed that there exists h ∈ H such that LD(h) = 0. Let S = 〈(x1, y1), . . . , (xm, ym)〉 ∈ (X × Y)^m be a sample of labelled examples. The empirical error LS with respect to S is defined by LS(h) = (1/m) Σ_{i=1}^m 1[h(xi) ≠ yi]. We will use the following notation: for a sample S = 〈(x1, y1), . . . , (xm, ym)〉, let S̄ denote the underlying set of unlabeled examples, S̄ = {xi : i ≤ m}.
The Class of Thresholds. For k ∈ N let hk : N → {±1} denote the threshold function hk(x) = −1 if x ≤ k, and hk(x) = +1 if x > k. The class of thresholds HN is the class HN := {hk : k ∈ N} over the domain XN := N. Similarly, for a finite n ∈ N let Hn denote the class of all thresholds restricted to the domain Xn := [n] = {1, . . . , n}. Note that S is realizable with respect to HN if and only if either (i) yi = +1 for all i ≤ m, or (ii) there exists 1 ≤ j ≤ m such that yi = −1 if and only if xi ≤ xj. A basic fact in statistical learning is that HN is PAC-learnable. That is, there exists an algorithm A such that for every realizable distribution D, if A is given a sample of size O(log(1/δ)/ε) examples drawn from D, then with probability at least 1 − δ, the output hypothesis hS satisfies LD(hS) ≤ ε. In fact, any algorithm A which returns a hypothesis hk ∈ HN that is consistent with the input sample will satisfy the above guarantee. Such algorithms are called empirical risk minimizers (ERMs). We stress that the above sample complexity bound is independent of the domain size. In particular it applies to Hn for every n, as well as to the infinite class HN. For further reading, we refer to textbooks on the subject, such as [32, 23].
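The ERM guarantee quoted above is easy to see concretely in code. The following sketch is our own illustration rather than anything from the paper; the function names, the uniform marginal over {1, ..., n}, and the particular consistent rule (return the largest negatively labelled point) are our choices. It draws a realizable sample, runs an ERM for thresholds, and computes its exact 0/1 population loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_sample(k_star, n, m):
    """Draw m labelled examples from a realizable distribution:
    x uniform on {1, ..., n}, y = h_{k_star}(x), i.e. y = +1 iff x > k_star."""
    xs = rng.integers(1, n + 1, size=m)
    return [(int(x), 1 if x > k_star else -1) for x in xs]

def erm_threshold(S):
    """A simple ERM for thresholds: return the largest negatively labelled
    point (or 0 if there is none); on realizable samples this is consistent."""
    neg = [x for x, y in S if y == -1]
    return max(neg) if neg else 0

def population_loss(k, k_star, n):
    """Exact 0/1 loss of h_k against the target h_{k_star} under the uniform
    marginal on {1, ..., n}: the two thresholds disagree on |k - k_star| points."""
    return abs(k - k_star) / n

n, k_star, m = 10_000, 6_283, 50
S = draw_sample(k_star, n, m)
print("L_D(h_S) =", population_loss(erm_threshold(S), k_star, n))
```

Running this with larger n leaves the loss essentially unchanged, which is the domain-size independence stressed above.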
2.2 PAC-Bayes Bounds
PAC-Bayes bounds are concerned with stochastic classifiers, or Gibbs-classifiers. A Gibbs-classifier is defined by a distribution Q over hypotheses. The distribution Q is sometimes referred to as a posterior. The loss of a Gibbs-classifier with respect to a distribution D is given by the expected loss over the drawn hypothesis and test point, namely LD(Q) = E_{h∼Q,(x,y)∼D}[1[h(x) ≠ y]]. A key advantage of the PAC-Bayes framework is its flexibility in deriving generalization bounds that do not depend on a hypothesis class. Instead, they provide bounds that depend on the KL distance between the output posterior and a fixed prior P. Recall that the KL divergence between a distribution P and a distribution Q is defined as follows¹: KL(P‖Q) = E_{x∼P}[log(P(x)/Q(x))]. Then, the classical PAC-Bayes bound asserts the following:
Theorem 1 (PAC-Bayes Generalization Bound [22]). Let D be a distribution over examples, let P be a prior distribution over hypotheses, and let δ > 0. Denote by S a sample of size m drawn independently from D. Then, the following event occurs with probability at least 1 − δ: for every posterior distribution Q, LD(Q) ≤ LS(Q) + O(√((KL(Q‖P) + ln(√m/δ)) / m)).
The above bound relates the generalization error to the KL divergence between the posterior and the prior. Remarkably, the prior distribution P can be chosen as a function of the target distribution D, allowing one to obtain distribution-dependent generalization bounds. Since the pioneering work of McAllester [21], many variations on the PAC-Bayes bounds have been proposed. Notably, Seeger et al. [31] and Catoni [9] provided bounds that are known to converge at rate 1/m in the realizable case (see also [15] for an up-to-date survey). We note that our constructions are all provided in the realizable setting, hence readily apply.
3 Main Result
We next present the main result in this manuscript. Proofs are provided in the full version [19]. The statements use the following function Φ(m, γ, n), which is defined for m, n > 1 and γ ∈ (0, 1): Φ(m, γ, n) = log^(m)(n) / (10m/γ)^(3m). Here, log^(k)(x) denotes the iterated logarithm, i.e. log^(k)(x) = log(log(· · · log(x) · · · )), with the logarithm applied k times. An important observation is that lim_{n→∞} Φ(m, γ, n) = ∞ for every fixed m and γ.
Theorem 2 (Main Result). Let n, m > 1 be integers, and let γ ∈ (0, 1). Consider the class Hn of thresholds over the domain Xn = [n]. Then, for any learning algorithm A which is defined on samples of size m, there exists a realizable distribution D = DA such that for any prior P the following event occurs with probability at least 1/16 over the input sample S ∼ D^m: KL(QS‖P) = Ω̃((γ²/m²) · log(Φ(m, γ, n)/m)) or LD(QS) > 1/2 − γ − m/Φ(m, γ, n), where QS denotes the posterior outputted by A.
To demonstrate how this result implies a limitation of the PAC-Bayes framework, pick γ = 1/4 and consider any algorithm A which learns thresholds over the natural numbers XN = N with confidence 1 − δ ≥ 99/100, error ε < 1/2 − γ = 1/4, and m examples². Since Φ(m, 1/4, n) tends to infinity with n for any fixed m, the above result implies the existence of a realizable distribution Dn supported on Xn ⊆ N such that the PAC-Bayes bound with respect to any possible prior P will produce vacuous bounds. We summarize it in the following corollary.
¹ We use here the standard convention that if P({x : Q(x) = 0}) > 0 then KL(P‖Q) = ∞.
² We note in passing that any Empirical Risk Minimizer learns thresholds with these parameters using < 50 examples.
Corollary 1 (PAC-learnability of linear classifiers cannot be explained by PAC-Bayes). Let HN denote the class of thresholds over XN = N and let m > 0. Then, for every algorithm A that maps input samples S of size m to output posteriors QS, and for every arbitrarily large N > 0, there exists a realizable distribution D such that, for any prior P, with probability at least 1/16 over S ∼ D^m one of the following holds: KL(QS‖P) > N or LD(QS) > 1/4.
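As a numerical companion to Theorem 1, the sketch below evaluates the bound for the finite class Hn with a uniform prior and a posterior that is uniform over m thresholds appearing in a sample. This is our own illustration and not from the paper: the hidden constant in the O(·) is taken to be 1, and the prior and posterior choices are arbitrary. For this prior the KL term equals log(n/m), which already hints at the role the prior plays: to keep the bound small it must place substantial mass near the posterior.

```python
import numpy as np

def kl(Q, P):
    """KL(Q||P) for discrete distributions given as dicts {hypothesis: prob}.
    Assumes P puts positive mass wherever Q does (otherwise KL is infinite)."""
    return sum(q * np.log(q / P[h]) for h, q in Q.items() if q > 0)

def pac_bayes_rhs(emp_loss, Q, P, m, delta):
    """Right-hand side of Theorem 1 with the O(.) constant taken to be 1."""
    return emp_loss + np.sqrt((kl(Q, P) + np.log(np.sqrt(m) / delta)) / m)

n, m, delta = 10**5, 50, 0.01
prior = {k: 1.0 / n for k in range(n)}             # uniform prior over H_n
sample_thresholds = range(1000, 1000 + m)          # thresholds seen in a sample
posterior = {k: 1.0 / m for k in sample_thresholds}

# A consistent posterior has zero empirical loss, so the bound is driven by KL:
print("KL(Q||P) =", kl(posterior, prior))          # equals log(n/m) here
print("bound    =", pac_bayes_rhs(0.0, posterior, prior, m, delta))
```

Of course, Theorem 2 concerns every prior, including distribution-dependent ones, so a computation with one fixed prior is only an illustration of how the KL term enters the bound.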
A different interpretation of Theorem 2 is that in order to derive meaningful PAC-Bayes generalization bounds for PAC-learning thresholds over a finite domain Xn, the sample complexity must grow to infinity with the domain size n (it is at least Ω(log*(n))). In contrast, the true sample complexity of this problem is O(log(1/δ)/ε), which is independent of n.
4 Technical Overview
A common approach to proving impossibility results in computer science (and in machine learning in particular) exploits a Minmax principle, whereby one specifies a fixed hard distribution over inputs, and establishes the desired impossibility result for any algorithm with respect to random inputs from that distribution. As an example, consider the "No-Free-Lunch Theorem" which establishes that the VC dimension lower bounds the sample complexity of PAC-learning a class H. Here, one fixes the distribution to be uniform over a shattered set of size d = VC(H), and argues that every learning algorithm must observe Ω(d) examples. (See e.g. Theorem 5.1 in [32].) Such "Minmax" proofs establish a stronger assertion: they apply even to algorithms that "know" the input distribution. For example, the No-Free-Lunch Theorem applies even to learning algorithms that are designed given the knowledge that the marginal distribution is uniform over some shattered set. Interestingly, such an approach is bound to fail in proving Theorem 2. The reason is that if the marginal distribution DX over Xn is fixed, then one can pick an ε/2-cover³ Cn ⊆ Hn of size |Cn| = O(1/ε), and use any Empirical Risk Minimizer for Cn. Then, by picking the prior distribution P to be uniform over Cn, one obtains a PAC-Bayes bound which scales with the entropy H(P) = log|Cn| = O(log(1/ε)), and yields a poly(1/ε, log(1/δ)) generalization bound, which is independent of n. In other words, in the context of Theorem 2, there is no single distribution which is "hard" for all algorithms. Thus, to overcome this difficulty one must come up with a "method" which assigns to any given algorithm A a "hard" distribution D = DA, which witnesses Theorem 2 with respect to A. The challenge is that A is an arbitrary algorithm; e.g. it may be improper⁴ or add different sorts of noise to its output classifier. We refer the reader to [26, 25, 3] for a line of work which explores in detail a similar "failure" of the Minmax principle in the context of PAC learning with low mutual information. The method we use in the proof of Theorem 2 exploits Ramsey Theory. In a nutshell, Ramsey Theory provides powerful tools which allow one to detect, for any learning algorithm, a large homogeneous set such that the behavior of A on inputs from the homogeneous set is highly regular. Then, we consider the uniform distribution over the homogeneous set to establish Theorem 2. We note that similar applications of Ramsey Theory in proving lower bounds in computer science date back to the 80's [24]. For more recent usages see e.g. [8, 11, 10, 1]. Our proof closely follows the argument of Alon et al. [1], which establishes an impossibility result for learning Hn by differentially-private algorithms.
Technical Comparison with the Work by Alon et al. [1]. For readers who are familiar with the work of [1], let us summarize the main differences between the two proofs. The main challenge in extending the technique from [1] to prove Theorem 2 is that PAC-Bayes bounds are only required to hold for typical samples.
This is unlike the notion of differential privacy (which was the focus of [1]), which is defined with respect to all samples. Thus, establishing a lower bound in the context of differential privacy is easier: one only needs to demonstrate a single sample for which privacy is breached. However, to prove Theorem 2 one has to demonstrate that the lower bound applies to many samples. Concretely, this affects the following parts of the proof: (i) The Ramsey argument in the current manuscript (Lemma 1) is more complex: to overcome the above difficulty we needed to modify the coloring, and the overall construction is more convoluted. (ii) Once Ramsey's Theorem is applied and the homogeneous subset Rn ⊆ Xn is derived, one still needs to derive a lower bound on the PAC-Bayes quantity. This requires a technical argument (Lemma 2), which is tailored to the definition of PAC-Bayes. Again, this lemma is more complicated than the corresponding lemma in [1]. (iii) Even with Lemma 1 and Lemma 2 in hand, the remaining derivation of Theorem 2 still requires a careful analysis which involves defining several "bad" events and bounding their probabilities. Again, this is all a consequence of the fact that the PAC-Bayes quantity is an "average-case" complexity measure.
³ I.e. Cn satisfies that (∀h ∈ Hn)(∃c ∈ Cn) : Pr_{x∼DX}(c(x) ≠ h(x)) ≤ ε/2.
⁴ I.e. A may output hypotheses which are not thresholds, or Gibbs-classifiers supported on hypotheses which are not thresholds.
4.1 Proof Sketch and Key Definitions
The proof of Theorem 2 consists of two steps: (i) detecting a hard distribution D = DA which witnesses Theorem 2 with respect to the assumed algorithm A, and (ii) establishing the conclusion of Theorem 2 given the hard distribution D. The first part is combinatorial (it exploits Ramsey Theory), and the second part is more information-theoretic. For the purpose of exposition, we focus in this technical overview on a specific algorithm A. This will make the introduction of the key definitions and the presentation of the main technical tools more accessible.
The algorithm A. Let S = 〈(x1, y1), . . . , (xm, ym)〉 be an input sample. The algorithm A outputs the posterior distribution QS which is defined as follows: let hxi = 1[x > xi] − 1[x ≤ xi] denote the threshold corresponding to the i'th input example. The posterior QS is supported on {hxi}_{i=1}^m, and to each hxi it assigns a probability according to a decreasing function of its empirical risk. (So, hypotheses with lower risk are more probable.) The specific choice of the decreasing function does not matter, but for concreteness let us pick the function exp(−x). Thus, QS(hxi) ∝ exp(−LS(hxi)). (1)
While one can directly prove that the above algorithm does not admit a PAC-Bayes analysis, we provide here an argument which follows the lines of the general case. We start by explaining the key property of Homogeneity, which allows us to detect the hard distribution.
4.1.1 Detecting a Hard Distribution: Homogeneity
The first step in the proof of Theorem 2 takes the given algorithm and identifies a large subset of the domain on which its behavior is Homogeneous. In particular, we will soon see that the algorithm A is Homogeneous on the entire domain Xn. In order to define Homogeneity, we use the following equivalence relation between samples:
Definition 1 (Equivalent Samples). Let S = 〈(x1, y1), . . . , (xm, ym)〉 and S′ = 〈(x′1, y′1), . . . , (x′m, y′m)〉 be two samples. We say that S and S′ are equivalent if for all i, j ≤ m the following holds: 1. xi ≤ xj ⟺ x′i ≤ x′j, and 2. yi = y′i.
For example, 〈(1,−), (5,+), (8,+)〉 and 〈(10,−), (70,+), (100,+)〉 are equivalent, but 〈(3,−), (6,+), (4,+)〉 is not equivalent to them (because of Item 1). For a point x ∈ Xn let pos(x;S) denote the number of examples in S that are less than or equal to x: pos(x;S) = |{xi ∈ S̄ : xi ≤ x}|. (2) For a sample S = 〈(x1, y1), . . . , (xm, ym)〉 let π(S) denote the order-type of S: π(S) = (pos(x1;S), pos(x2;S), . . . , pos(xm;S)). (3) So, the samples 〈(1,−), (5,+), (8,+)〉 and 〈(10,−), (70,+), (100,+)〉 have order-type π = (1, 2, 3), whereas 〈(3,−), (6,+), (4,+)〉 has order-type π = (1, 3, 2). Note that S, S′ are equivalent if and only if they have the same labels-vector and the same order-type. Thus, we encode the equivalence class of a sample by the pair (π, ȳ), where π denotes its order-type and ȳ = (y1, . . . , ym) denotes its labels-vector. The pair (π, ȳ) is called the equivalence-type of S. We claim that A satisfies the following property of Homogeneity:
Property 1 (Homogeneity). The algorithm A possesses the following property: for every two equivalent samples S, S′ and every x, x′ ∈ Xn such that pos(x, S) = pos(x′, S′), Pr_{h∼QS}[h(x) = 1] = Pr_{h′∼QS′}[h′(x′) = 1], where QS, QS′ denote the Gibbs-classifiers outputted by A on the samples S, S′.
In short, Homogeneity means that the probability that h ∼ QS satisfies h(x) = 1 depends only on pos(x, S) and on the equivalence-type of S. To see that A is indeed homogeneous, let S, S′ be equivalent samples and let QS, QS′ denote the corresponding Gibbs-classifiers outputted by A. Then, for every x, x′ such that pos(x, S) = pos(x′, S′), Equation (1) yields that: Pr_{h∼QS}[h(x) = +1] = Σ_{xi<x} QS(hxi) = Σ_{x′i<x′} QS′(hx′i) = Pr_{h′∼QS′}[h′(x′) = +1], where in the second transition we used that QS(hxi) = QS′(hx′i) for every i ≤ m (because S, S′ are equivalent), and that xi ≤ x ⟺ x′i ≤ x′ for every i (because pos(x, S) = pos(x′, S′)).
The General Case: Approximate Homogeneity. Before we continue to define the hard distribution for algorithm A, let us discuss how the proof of Theorem 2 handles arbitrary algorithms that are not necessarily homogeneous. The general case complicates the argument in two ways. First, the notion of Homogeneity is relaxed to an approximate variant which is defined next. Here, an order-type π is called a permutation if π(i) ≠ π(j) for every distinct i, j ≤ m. (Indeed, in this case π = (π(x1), . . . , π(xm)) is a permutation of 1, . . . , m.) Note that the order-type of S = 〈(x1, y1), . . . , (xm, ym)〉 is a permutation if and only if all the points in S are distinct (i.e. xi ≠ xj for all i ≠ j).
Definition 2 (Approximate Homogeneity). An algorithm B is γ-approximately m-homogeneous if the following holds: let S, S′ be two equivalent samples of length m whose order-type is a permutation, and let x ∉ S̄, x′ ∉ S̄′ such that pos(x, S) = pos(x′, S′). Then, |QS(x) − QS′(x′)| ≤ γ/(5m), (4) where QS, QS′ denote the Gibbs-classifiers outputted by B on the samples S, S′.
Second, we need to identify a sufficiently large subdomain on which the assumed algorithm is approximately homogeneous. This is achieved by the next lemma, which is based on a Ramsey argument.
Lemma 1 (Large Approximately Homogeneous Sets). Let m, n > 1 and let B be an algorithm that is defined over input samples of size m over Xn. Then, there is X′ ⊆ Xn of size |X′| ≥ Φ(m, γ, n) such that the restriction of B to input samples from X′ is γ-approximately m-homogeneous. We prove Lemma 1 in the full version [19].
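The illustrative algorithm A, the quantity pos(x;S), the equivalence-type, and Property 1 can all be spot-checked in a few lines. The sketch below is our own code (the function names and the concrete samples are ours, not the paper's); it implements Eq. (1) and verifies that the two equivalent samples of the running example induce the same prediction probability at points with equal pos(·).

```python
import numpy as np

def algorithm_A(S):
    """Posterior of the illustrative algorithm A (Eq. (1)): weight proportional
    to exp(-L_S(h_{x_i})) on the threshold at each sample point x_i."""
    xs = [x for x, _ in S]
    losses = [np.mean([(1 if x > t else -1) != y for x, y in S]) for t in xs]
    w = np.exp(-np.array(losses))
    return dict(zip(xs, (w / w.sum()).tolist()))    # threshold -> probability

def pos(x, S):
    """pos(x; S): number of sample points that are <= x (Eq. (2))."""
    return sum(xi <= x for xi, _ in S)

def equivalence_type(S):
    """(order-type, labels-vector), encoding the equivalence class of S (Eq. (3))."""
    return tuple(pos(xi, S) for xi, _ in S), tuple(y for _, y in S)

def prob_plus_one(Q, x):
    """Pr_{h ~ Q}[h(x) = +1] for a posterior Q supported on thresholds."""
    return sum(p for t, p in Q.items() if t < x)

# The two equivalent samples from the running example, and query points with
# equal pos(.) -- Property 1 says the two probabilities below must coincide.
S1 = [(1, -1), (5, +1), (8, +1)]
S2 = [(10, -1), (70, +1), (100, +1)]
assert equivalence_type(S1) == equivalence_type(S2)
x1, x2 = 6, 80
assert pos(x1, S1) == pos(x2, S2)
print(prob_plus_one(algorithm_A(S1), x1), prob_plus_one(algorithm_A(S2), x2))
```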
For the rest of this exposition we rely on Property 1, as it simplifies the presentation of the main ideas.
The Hard Distribution D. We are now ready to finish the first step and define the "hard" distribution D. Define D to be uniform over examples (x, y) such that y = hn/2(x). So, each drawn example (x, y) satisfies that x is uniform in Xn and y = −1 if and only if x ≤ n/2. In the general case, D will be defined in the same way with respect to the detected homogeneous subdomain.
4.1.2 Hard Distribution ⟹ Lower Bound: Sensitivity
We next outline the second step of the proof, which establishes Theorem 2 using the hard distribution D. Specifically, we show that for a sample S ∼ D^m, KL(QS‖P) = Ω̃((1/m²) · log|Xn|), with a constant probability bounded away from zero. (In the general case |Xn| is replaced by Φ(m, γ, n) – the size of the homogeneous set.)
Sensitive Indices. We begin by describing the key property of homogeneous learners. Let (π, ȳ) denote the equivalence-type of the input sample S. By homogeneity (Property 1), there is a list of numbers p0, . . . , pm, which depends only on the equivalence-type (π, ȳ), such that Pr_{h∼QS}[h(x) = 1] = pi for every x ∈ Xn, where i = pos(x, S). The crucial observation is that there exists an index i ≤ m which is sensitive in the sense that pi − pi−1 ≥ 1/m. (5) Indeed, consider xj such that hxj = argmin_k LS(hxk), and let i = pos(xj, S). Then, pi − pi−1 = QS(hxj) = exp(−LS(hxj)) / Σ_{i′≤m} exp(−LS(hxi′)) ≥ 1/m. In the general case we show that any homogeneous algorithm that learns Hn satisfies Equation (5) for typical samples (see the full version [19]). The intuition is that any algorithm that learns the distribution D must output a Gibbs-classifier QS such that for typical points x, if x > n/2 then Pr_{h∼QS}[h(x) = 1] ≈ 1, and if x ≤ n/2 then Pr_{h∼QS}[h(x) = 1] ≈ 0. Thus, when traversing all x's from 1 up to n there must be a jump between pi−1 and pi for some i.
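Equation (5) is easy to verify numerically for the illustrative algorithm A. The following sketch is our own code, not the paper's: it draws a sample from the hard distribution D defined above, computes the list p0, . . . , pm induced by A, and checks that the largest jump is at least 1/m.

```python
import numpy as np

rng = np.random.default_rng(1)

def algorithm_A_weights(S):
    """Weights of algorithm A (Eq. (1)) on the sample thresholds, sorted by x."""
    S = sorted(S)
    losses = [np.mean([(1 if x > t else -1) != y for x, y in S]) for t, _ in S]
    w = np.exp(-np.array(losses))
    return w / w.sum()

# Draw a sample from the hard distribution D (x uniform on [n], y = h_{n/2}(x)).
n, m = 10**6, 50
xs = rng.choice(n, size=m, replace=False) + 1
S = [(int(x), 1 if x > n // 2 else -1) for x in xs]

w = algorithm_A_weights(S)
p = np.concatenate([[0.0], np.cumsum(w)])     # the list p_0, ..., p_m
jumps = np.diff(p)                            # jump i equals the weight of the
print("max jump:", jumps.max(),               # i-th smallest sample threshold
      "satisfies Eq. (5):", jumps.max() >= 1 / m)
```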
From Sensitive Indices to a Lower Bound on the KL-divergence. How do sensitive indices imply a lower bound on PAC-Bayes? This is the most technical part of the proof. The crux of it is a connection between sensitivity and the KL-divergence, which we discuss next. Consider a sensitive index i and let xj be the input example such that pos(xj, S) = i. For x̂ ∈ Xn, let Sx̂ denote the sample obtained by replacing xj with x̂: Sx̂ = 〈(x1, y1), . . . , (xj−1, yj−1), (x̂, yj), (xj+1, yj+1), . . . , (xm, ym)〉, and let Qx̂ := QSx̂ denote the posterior outputted by A given the sample Sx̂. Consider the set I ⊆ Xn of all points x̂ such that Sx̂ is equivalent to S. Equation (5) implies that for every x, x̂ ∈ I, Pr_{h∼Qx̂}[h(x) = 1] = pi−1 if x < x̂, and pi if x > x̂. Combined with the fact that pi − pi−1 ≥ 1/m, this implies a lower bound on the KL-divergence between an arbitrary prior P and Qx̂ for most x̂ ∈ I. This is summarized in the following lemma:
Lemma 2 (Sensitivity Lemma). Let I be a linearly ordered set and let {Qx̂}_{x̂∈I} be a family of posteriors supported on {±1}^I. Suppose there are q1 < q2 ∈ [0, 1] such that for every x, x̂ ∈ I: x < x̂ ⟹ Pr_{h∼Qx̂}[h(x) = 1] ≤ q1 + (q2 − q1)/4, and x > x̂ ⟹ Pr_{h∼Qx̂}[h(x) = 1] ≥ q2 − (q2 − q1)/4. Then, for every prior distribution P, if x̂ ∈ I is drawn uniformly at random, the following event occurs with probability at least 1/4: KL(Qx̂‖P) = Ω((q2 − q1)² · log|I| / log log|I|).
The sensitivity lemma tells us that in the above situation, the KL divergence between Qx̂ and any prior P, for a random choice of x̂, scales in terms of two quantities: the distance between the two values, q2 − q1, and the size of I. The proof of Lemma 2 is provided in the full version [19]. In a nutshell, the strategy is to bound KL(Qx̂^r‖P^r) from below, where r is sufficiently small; the desired lower bound then follows from the chain rule, KL(Qx̂‖P) = (1/r) · KL(Qx̂^r‖P^r). Obtaining the lower bound with respect to the r-fold products is the crux of the proof. In short, we will exhibit events Ex̂ such that Qx̂^r(Ex̂) ≥ 1/2 for every x̂ ∈ I, but P^r(Ex̂) is tiny for |I|/4 of the x̂'s. This implies a lower bound on KL(Qx̂^r‖P^r) since KL(Qx̂^r‖P^r) ≥ KL(Qx̂^r(Ex̂)‖P^r(Ex̂)), by the data-processing inequality.
Wrapping Up. We now continue by deriving a lower bound for A. Consider an input sample S ∼ D^m. In order to apply Lemma 2, fix any equivalence-type (π, ȳ) with a sensitive index i and let xj be such that pos(xj;S) = i. The key step is to condition the random sample S on (π, ȳ) as well as on {xt}_{t=1}^m \ {xj} – all sample points besides the sensitive point xj. Thus, only xj remains to be drawn in order to fully specify S. Note then that, by symmetry, this remaining point (which plays the role of x̂ in Lemma 2) is uniformly distributed in a set I ⊆ Xn, and plugging q1 := pi−1, q2 := pi into Lemma 2 yields that for any prior distribution P: KL(QS‖P) ≥ Ω̃((1/m²) · log|I|), with probability at least 1/4. Note that we are not quite done, since the size |I| is a random variable which depends on the type (π, ȳ) and the sample points {xk}_{k≠j}. However, the distribution of |I| can be analyzed by elementary tools. In particular, we show that |I| ≥ Ω(|Xn|/m²) with high enough probability, which yields the desired lower bound on the PAC-Bayes quantity. (In the general case |Xn| is replaced by the size of the homogeneous set.)
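Before moving on to the discussion, the two ingredients used in the proof sketch of Lemma 2, namely the chain rule KL(Qx̂‖P) = (1/r) · KL(Qx̂^r‖P^r) for product measures and the data-processing inequality applied to an event, can be sanity-checked on a toy example. The sketch below is our own numerical check (the distributions Q and P, the value r = 3, and the chosen event E are arbitrary) and is not part of the paper's argument.

```python
import numpy as np

def kl_discrete(q, p):
    """KL between two discrete distributions given as aligned probability vectors."""
    q, p = np.asarray(q, float), np.asarray(p, float)
    mask = q > 0
    return float(np.sum(q[mask] * np.log(q[mask] / p[mask])))

def kl_binary(a, b):
    """Binary KL between the numbers a = Q(E) and b = P(E)."""
    return kl_discrete([a, 1 - a], [b, 1 - b])

Q = np.array([0.7, 0.2, 0.1])    # a toy posterior
P = np.array([0.1, 0.3, 0.6])    # a toy prior
r = 3

# (i) KL is additive over independent products: KL(Q^r || P^r) = r * KL(Q || P).
Qr = np.einsum('i,j,k->ijk', Q, Q, Q).ravel()
Pr = np.einsum('i,j,k->ijk', P, P, P).ravel()
print(kl_discrete(Qr, Pr), "==", r * kl_discrete(Q, P))

# (ii) Data-processing: for any event E, KL(Q^r || P^r) >= kl(Q^r(E), P^r(E)).
E = np.zeros(len(Qr), dtype=bool)
E[Qr.argsort()[::-1][:5]] = True   # an arbitrary event: the 5 likeliest outcomes under Q^r
print(kl_discrete(Qr, Pr), ">=", kl_binary(Qr[E].sum(), Pr[E].sum()))
```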
5 Discussion
In this work we presented a limitation of the PAC-Bayes framework by showing that PAC-learnability of one-dimensional thresholds cannot be established using PAC-Bayes. Perhaps the biggest caveat of our result is the mild dependence of the bound on the size of the domain in Theorem 2. In fact, Theorem 2 does not exclude the possibility of PAC-learning thresholds over Xn with sample complexity that scales with O(log* n) such that the PAC-Bayes bound vanishes. It would be interesting to explore this possibility; one promising direction is to borrow ideas from the differential privacy literature: [4] and [6] designed a private learning algorithm for thresholds with sample complexity exp(log* n); this bound was later improved by [16] to Õ((log* n)²). Also, [7] showed that finite Littlestone dimension is sufficient for private learnability, and it would be interesting to extend these results to the context of PAC-Bayes. Let us note that in the context of pure differential privacy, the connection between PAC-Bayes analysis and privacy has been established in [14].
Non-uniform learning bounds. Another aspect is the implication of our work for learning algorithms beyond the uniform PAC setting. Indeed, many successful and practical algorithms exhibit sample complexity that depends on the target distribution. E.g., the k-Nearest-Neighbor algorithm eventually learns any target distribution (with a distribution-dependent rate). The first point we address in this context concerns interpolating algorithms. These are learners that achieve zero (or close to zero) training error (i.e. they interpolate the training set). Examples of such algorithms include kernel machines, boosting, random forests, as well as deep neural networks [5, 29]. PAC-Bayes analysis has been utilized in this context, for example, to provide margin-dependent generalization guarantees for kernel machines [18]. It is therefore natural to ask whether our lower bound has implications in this context. As a simple case study, consider 1-Nearest-Neighbor. Observe that this algorithm forms a proper and consistent learner for the class of 1-dimensional thresholds⁵, and therefore enjoys a very fast learning rate. On the other hand, our result implies that for any algorithm (including 1-Nearest-Neighbor) that is amenable to PAC-Bayes analysis, there is a distribution realizable by thresholds on which it has high population error. Thus, no algorithm with a PAC-Bayes generalization bound can match the performance of nearest-neighbor with respect to such distributions.
⁵ Indeed, given any realizable sample it will output the threshold which maximizes the margin.
Finally, this work also relates to a recent attempt to explain generalization through the implicit bias of learning algorithms: it is commonly argued that the generalization performance of algorithms can be explained by an implicit algorithmic bias. Building upon the flexibility of providing distribution-dependent generalization bounds, the PAC-Bayes framework has seen a resurgence of interest in this context towards explaining generalization in large-scale modern practical algorithms [27, 28, 13, 14, 2]. Indeed, PAC-Bayes bounds seem to provide non-vacuous bounds in several relevant domains [17, 14]. Nevertheless, the work here shows that any algorithm that can learn 1D thresholds is necessarily not biased, in the PAC-Bayes sense, towards a (possibly distribution-dependent) prior. We mention that, recently, [12] showed that SGD's generalization performance indeed cannot be attributed to some implicit bias of the algorithm that governs the generalization.
Broader Impact
There are no foreseen ethical or societal consequences for the research presented herein.
Acknowledgments and Disclosure of Funding
R.L is supported by ISF grant no. 2188/20 and partially funded by an unrestricted gift from Google. Any opinions, findings, and conclusions or recommendations expressed in this work are those of the author(s) and do not necessarily reflect the views of Google. S.M is supported by the Israel Science Foundation (grant No. 1225/20), by an Azrieli Faculty Fellowship, and by a grant from the United States - Israel Binational Science Foundation (BSF). Part of this work was done while the author was at Google Research.
1. What is the main contribution of the paper regarding the PAC-Bayes framework? 2. What are the strengths and weaknesses of the paper in terms of its technical content? 3. How does the paper's impossibility theorem relate to previous works on PAC-Bayes bounds? 4. Can the results of the paper be applied to other variations of PAC-Bayes bounds, such as those proposed by Seeger (2002) and Catoni (2007)? 5. Is there any potential connection between the paper's findings and the concept of "weighted majority vote" in PAC-Bayes analysis?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
The central theoretical result of this paper is an impossibility theorem. It shows that McAllester's PAC-Bayes bound cannot explain the small sample complexity of a one-dimensional linear classification learning problem.
Strengths
Rigorously stating the strengths and the limitations of learning theories is crucial for ensuring that our science evolves on solid bases. It seems that the paper contributes to the PAC-Bayes framework in this direction.
Weaknesses
The paper is technically heavy for my expertise, so I can only raise questions about its content. Even if they turn out to be naive, discussing them in the paper would help other readers to understand the scope of this work. A first concern is the fact that the paper presents solely (Theorem 1) the PAC-Bayes bound of McAllester (1999), converging at rate sqrt(1/m). Since this pioneering work, many variations on the PAC-Bayes bounds have been proposed. Notably, Seeger (2002)'s and Catoni (2007)'s bounds are known to converge at rate 1/m when the empirical risk is zero (see also Guedj (2019) for an up-to-date overview of the PAC-Bayes literature). Do the given impossibility theorem and its proof remain the same in these settings? I also wonder whether the issue with the PAC-Bayes theorem comes from the fact that, by considering the Gibbs predictor (an average of predictors), the aim of the PAC-Bayes estimator is to model a [0,1] target, P(y=1|x), as stated in the paragraph of Lines 238-242. Would it be natural (or not) that such a regressor is not as effective at modeling the {0,1} thresholded distribution as when the predictor hypothesis class is restricted to linear binary classifiers? Related to the latter point, of potential interest is the line of work adapting PAC-Bayes results to the "weighted majority vote" (e.g., Germain et al., 2015), where one wants to bound the loss of the so-called Bayes classifier B_Q(x) = sgn(E_Q(h(x))) ∈ {-1,1} instead of the Gibbs randomized classifier with E G_Q(x) = E_Q(h(x)) ∈ [-1,1]. Note that the latter expected output of the Gibbs classifier, E G_Q(x), can be interpreted as a regressor.
References:
Catoni (2007). PAC-Bayesian Supervised Classification: The Thermodynamics of Statistical Learning. Institute of Mathematical Statistics.
Germain, Lacasse, Laviolette, Marchand, Roy (2015). Risk bounds for the majority vote: from a PAC-Bayesian analysis to a learning algorithm. JMLR.
Guedj (2019). A Primer on PAC-Bayesian Learning. Proceedings of the second congress of the French Mathematical Society.
Seeger (2002). PAC-Bayesian generalization bounds for Gaussian processes. JMLR.
NIPS
Title A Limitation of the PAC-Bayes Framework Abstract PAC-Bayes is a useful framework for deriving generalization bounds which was introduced by McAllester (’98). This framework has the flexibility of deriving distributionand algorithm-dependent bounds, which are often tighter than VCrelated uniform convergence bounds. In this manuscript we present a limitation for the PAC-Bayes framework. We demonstrate an easy learning task which is not amenable to a PAC-Bayes analysis. Specifically, we consider the task of linear classification in 1D; it is well-known that this task is learnable using just O(log(1/δ)/ ) examples. On the other hand, we show that this fact can not be proved using a PAC-Bayes analysis: for any algorithm that learns 1-dimensional linear classifiers there exists a (realizable) distribution for which the PAC-Bayes bound is arbitrarily large. 1 Introduction The classical setting of supervised binary classification considers learning algorithms that receive (binary) labelled examples and are required to output a predictor or a classifier that predicts the label of new and unseen examples. Within this setting, Probably Approximately Correct (PAC) generalization bounds quantify the success of an algorithm to approximately predict with high probability. The PAC-Bayes framework, introduced in [22, 34] and further developed in [21, 20, 30], provides PAC-flavored bounds to Bayesian algorithms that produce Gibbs-classifiers (also called stochastic-classifiers). These are classifiers that, instead of outputting a single classifier, output a probability distribution over the family of classifiers. Their performance is measured by the expected success of prediction where expectation is taken with respect to both sampled data and sampled classifier. A PAC-Bayes generalization bound relates the generalization error of the algorithm to a KL distance between the stochastic output classifier and some prior distribution P . In more detail, the generalization bound is comprised of two terms: first, the empirical error of the output Gibbs-classifier, and second, the KL distance between the output Gibbs classifier and some arbitrary (but sampleindependent) prior distribution. This standard bound captures a basic intuition that a good learner needs to balance between bias, manifested in the form of a prior, and fitting the data, which is measured by the empirical loss. A natural task is then, to try and characterize the potential as well as limitations of such Gibbs-learners that are amenable to PAC-Bayes analysis. As far as the potential, several past results established the strength and utility of this framework (e.g. [33, 31, 18, 13, 17]). In this work we focus on the complementary task, and present the first limitation result showing that there are classes that are learnable, even in the strong distribution-independent setting of PAC, but do not admit any algorithm that is amenable to a non-vacuous PAC-Bayes analysis. We stress that this is true even if we exploit the bound to its fullest and allow any algorithm and any possible, potentially distribution-dependent, prior. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. More concretely, we consider the class of 1-dimensional thresholds, i.e. the class of linear classifiers over the real line. It is a well known fact that this class is learnable and enjoys highly optimistic sample complexity. 
Perhaps surprisingly, though, we show that any Gibbs-classifier that learns the class of thresholds, must output posteriors from an unbounded set. We emphasize that the result is provided even for priors that depend on the data distribution. From a technical perspective our proof exploits and expands a technique that was recently introduced by Alon et al. [1] to establish limitations on differentially-private PAC learning algorithms. The argument here follow similar lines, and we believe that these similarities in fact highlight a potentially powerful method to derive further limitation results, especially in the context of stability. 2 Preliminaries 2.1 Problem Setup We consider the standard setting of binary classification. Let X denote the domain and Y = {±1} the label space. We study learning algorithms that observe as input a sample S of labelled examples drawn independently from an unknown target distribution D, supported on X × Y . The output of the algorithm is an hypothesis h : X → Y , and its goal is to minimize the 0/1-loss, which is defined by: LD(h) = E (x,y)∼D [ 1[h(x) 6= y] ] . We will focus on the setting where the distribution D is realizable with respect to a fixed hypothesis classH ⊆ YX which is known in advance. That is, it is assumed that there exists h ∈ H such that: LD(h) = 0. Let S = 〈(x1, y1), . . . , (xm, ym)〉 ∈ (X × Y)m be a sample of labelled examples. The empirical error LS with respect to S is defined by LS(h) = 1 m m∑ i=1 1[h(x) 6= y]. We will use the following notation: for a sample S = 〈(x1, y1), . . . (xm, ym)〉, let S denote the underlying set of unlabeled examples S = {xi : i ≤ m}. The Class of Thresholds. For k ∈ N let hk : N→ {±1} denote the threshold function hk(x) = { −1 x ≤ k +1 x > k. The class of thresholds HN is the class HN := {hk : k ∈ N} over the domain XN := N. Similarly, for a finite n ∈ N let Hn denote the class of all thresholds restricted to the domain Xn := [n] = {1, . . . , n}. Note that S is realizable with respect to HN if and only if either (i) yi = +1 for all i ≤ m, or (ii) there exists 1 ≤ j ≤ m such that yi = −1 if and only if xi ≤ xj . A basic fact in statistical learning is thatHN is PAC-learnable. That is, there exists an algorithm A such that for every realizable distributionD, ifA is given a sample of sizeO( log 1/δ ) examples drawn from D, then with probability at least 1− δ, the output hypothesis hS satisfies LD(hS) ≤ . In fact, any algorithm A which returns an hypothesis hk ∈ HN which is consistent with the input sample, will satisfy the above guarantee. Such algorithms are called empirical risk minimizers (ERMs). We stress that the above sample complexity bound is independent of the domain size. In particular it applies to Hn for every n, as well as to the infinite class HN. For further reading, we refer to text books on the subject, such as [32, 23]. 2.2 PAC-Bayes Bounds PAC Bayes bounds are concerned with stochastic-classifiers, or Gibbs-classifiers. A Gibbs-classifier is defined by a distribution Q over hypotheses. The distribution Q is sometimes referred to as a posterior. The loss of a Gibbs-classifier with respect to a distribution D is given by the expected loss over the drawn hypothesis and test point, namely: LD(Q) = E h∼Q,(x,y)∼D [1 [ h(x) 6= y] ] . A key advantage of the PAC-Bayes framework is its flexibility of deriving generalization bounds that do not depend on an hypothesis class. Instead, they provide bounds that depend on the KL distance between the output posterior and a fixed prior P . 
Recall that the KL divergence between a distribution P and a distribution Q is defined as follows1: KL (P‖Q) = E x∼P [ log P (x) Q(x) ] . Then, the classical PAC-Bayes bound asserts the following: Theorem 1 (PAC-Bayes Generalization Bound [22]). Let D be a distribution over examples, let P be a prior distribution over hypothesis, and let δ > 0. Denote by S a sample of size m drawn independently from D. Then, the following event occurs with probability at least 1− δ: for every posterior distribution Q, LD(Q) ≤ LS(Q) +O (√ KL (Q‖P ) + ln √ m/δ m ) . The above bound relates the generalization error to the KL divergence between the posterior and the prior. Remarkably, the prior distribution P can be chosen as a function of the target distribution D, allowing to obtain distribution-dependent generalization bounds. Since this pioneer work of McAllester [21], many variations on the PAC-Bayes bounds have been proposed. Notably, Seeger et al. [31] and Catoni [9] provided bounds that are known to converge at rate 1/m in the realizable case (see also [15] for an up-to-date survey). We note that our constructions are all provided in the realizable setting, hence readily apply. 3 Main Result We next present the main result in this manuscript. Proofs are provided in the full version [19]. The statements use the following function Φ(m, γ, n), which is defined for m,n > 1 and γ ∈ (0, 1): Φ(m, γ, n) = log(m)(n) ( 10mγ ) 3m . Here, log(k)(x) denotes the iterated logarithm, i.e. log(k)(x) = log(log . . . (log(x)))︸ ︷︷ ︸ k times . An important observation is that limn→∞Φ(m, γ, n) =∞ for every fixed m and γ. Theorem 2 (Main Result). Let n,m > 1 be integers, and let γ ∈ (0, 1). Consider the class Hn of thresholds over the domain Xn = [n]. Then, for any learning algorithm A which is defined on samples of size m, there exists a realizable distribution D = DA such that for any prior P the following event occurs with probability at least 1/16 over the input sample S ∼ Dm, KL (QS‖P ) = Ω̃ ( γ2 m2 log (Φ(m, γ, n) m )) or LD(QS) > 1/2− γ − m Φ(m, γ, n) , where QS denotes the posterior outputted by A. To demonstrate how this result implies a limitation of the PAC-Bayes framework, pick γ = 1/4 and consider any algorithm A which learns thresholds over the natural numbers XN = N with confidence 1− δ ≥ 99/100, error < 1/2− γ = 1/4, and m examples2. Since Φ(m, 1/4, n) tends to infinity with n for any fixedm, the above result implies the existence of a realizable distributionDn supported onXn ⊆ N such that the PAC-Bayes bound with respect to any possible prior P will produce vacuous bounds. We summarize it in the following corollary. 1We use here the standard convention that if P ({x : Q(x) = 0}) > 0 then KL (P‖Q) =∞. 2We note in passing that any Empirical Risk Minimizer learns thresholds with these parameters using < 50 examples. Corollary 1 (PAC-learnability of Linear classifiers cannot be explained by PAC-Bayes). Let HN denote the class of thresholds over XN = N and let m > 0. Then, for every algorithm A that maps inputs sample S of size m to output posteriors QS and for every arbitrarily large N > 0 there exists a realizable distribution D such that, for any prior P , with probability at least 1/16 over S ∼ Dm on of the following holds: KL (QS‖P ) > N or, LD(QS) > 1/4. 
A different interpretation of Theorem 2 is that in order to derive meaningful PAC-Bayes generalization bounds for PAC-learning thresholds over a finite domain Xn, the sample complexity must grow to infinity with the domain size n (it is at least Ω(log?(n))). In contrast, the true sample complexity of this problem is O(log(1/δ)/ ) which is independent of n. 4 Technical Overview A common approach of proving impossibility results in computer science (and in machine learning in particular) exploits a Minmax principle, whereby one specifies a fixed hard distribution over inputs, and establishes the desired impossibility result for any algorithm with respect to random inputs from that distribution. As an example, consider the “No-Free-Lunch Theorem” which establishes that the VC dimension lower bounds the sample complexity of PAC-learning a classH. Here, one fixes the distribution to be uniform over a shattered set of size d = VC(H), and argues that every learning algorithm must observe Ω(d) examples. (See e.g. Theorem 5.1 in [32].) Such “Minmax” proofs establish a stronger assertion: they apply even to algorithms that “know” the input-distribution. For example, the No-Free-Lunch Theorem applies even to learning algorithms that are designed given the knowledge that the marginal distribution is uniform over some shattered set. Interestingly, such an approach is bound to fail in proving Theorem 2. The reason is that if the marginal distribution DX over Xn is fixed, then one can pick an /2-cover3 Cn ⊆ Hn of size |Cn| = O(1/ ), and use any Empirical Risk Minimizer for Cn. Then, by picking the prior distribution P to be uniform over Cn, one obtains a PAC-Bayes bound which scales with the entropy H(P ) = log|Cn| = O(log(1/ )), and yields a poly(1/ , log(1/δ)) generalization bound, which is independent of n. In other words, in the context of Theorem 2, there is no single distribution which is “hard” for all algorithms. Thus, to overcome this difficulty one must come up with a “method” which assigns to any given algorithm A a “hard” distribution D = DA, which witnesses Theorem 2 with respect to A. The challenge is that A is an arbitrary algorithm; e.g. it may be improper4 or add different sorts of noise to its output classifier. We refer the reader to [26, 25, 3] for a line of work which explores in detail a similar “failure” of the Minmax principle in the context of PAC learning with low mutual information. The method we use in the proof of Theorem 2 exploits Ramsey Theory. In a nutshell, Ramsey Theory provides powerful tools which allow to detect, for any learning algorithm, a large homogeneous set such that the behavior of A on inputs from the homogeneous set is highly regular. Then, we consider the uniform distribution over the homogeneous set to establish Theorem 2. We note that similar applications of Ramsey Theory in proving lower bounds in computer science date back to the 80’s [24]. For more recent usages see e.g. [8, 11, 10, 1]. Our proof closely follows the argument of Alon et al. [1], which establishes an impossibility result for learningHn by differentially-private algorithms. Technical Comparison with the Work by Alon et al. [1]. For readers who are familiar with the work of [1], let us summarize the main differences between the two proofs. The main challenge in extending the technique from [1] to prove Theorem 2 is that PAC-Bayes bounds are only required to hold for typical samples. 
This is unlike the notion of differential-privacy (which was the focus of [1]) that is defined with respect to all samples. Thus, establishing a lower bound in the context of differential privacy is easier: one only needs to demonstrate a single sample for which privacy is 3I.e. Cn satisfies that (∀h ∈ Hn)(∃c ∈ Cn) : Prx∼DX (c(x) 6= h(x)) ≤ /2. 4I.e. A may output hypotheses which are not thresholds, or Gibbs-classifiers supported on hypotheses which are not thresholds. breached. However, to prove Theorem 2 one has to demonstrate that the lower bound applies to many samples. Concretely, this affects the following parts of the proof: (i) The Ramsey argument in the current manuscript (Lemma 1) is more complex: to overcome the above difficulty we needed to modify the coloring and the overall construction is more convoluted. (ii) Once Ramsey Theorem is applied and the homogeneous subset Rn ⊆ Xn is derived, one still needs to derive a lower bound on the PAC-Bayes quantity. This requires a technical argument (Lemma 2), which is tailored to the definition of PAC-Bayes. Again, this lemma is more complicated than the corresponding lemma in [1]. (iii) Even with Lemma 1 and Lemma 2 in hand, the remaining derivation of Theorem 2 still requires a careful analysis which involves defining several “bad” events and bounding their probabilities. Again, this is all a consequence of that the PAC-Bayes quantity is an “average-case” complexity measure. 4.1 Proof Sketch and Key Definitions The proof of Theorem 2 consists of two steps: (i) detecting a hard distribution D = DA which witnesses Theorem 2 with respect to the assumed algorithm A, and (ii) establishing the conclusion of Theorem 2 given the hard distribution D. The first part is combinatorial (exploits Ramsey Theory), and the second part is more information-theoretic. For the purpose of exposition, we focus in this technical overview, on a specific algorithm A. This will make the introduction of the key definitions and presentation of the main technical tools more accessible. The algorithmA. Let S = 〈(x1, y1), . . . , (xm, ym)〉 be an input sample. The algorithmA outputs the posterior distribution QS which is defined as follows: let hxi = 1[x > xi]− 1[x ≤ xi] denote the threshold corresponding to the i’th input example. The posterior QS is supported on {hxi}mi=1, and to each hxi it assigns a probability according to a decreasing function of its empirical risk. (So, hypotheses with lower risk are more probable.) The specific choice of the decreasing function does not matter, but for concreteness let us pick the function exp(−x). Thus, QS(hxi) ∝ exp ( −LS(hxi) ) . (1) While one can directly prove that the above algorithm does not admit a PAC-Bayes analysis, we provide here an argument which follows the lines of the general case. We start by explaining the key property of Homogeneity, which allows to detect the hard distribution. 4.1.1 Detecting a Hard Distribution: Homogeneity The first step in the proof of Theorem 2 takes the given algorithm and identifies a large subset of the domain on which its behavior is Homogeneous. In particular, we will soon see that the algorithm A is Homogeneous on the entire domain Xn. In order to define Homogeneity, we use the following equivalence relation between samples: Definition 1 (Equivalent Samples). Let S = 〈(x1, y1), . . . , (xm, ym)〉 and S′ = 〈(x′1, y′1), . . . , (x′m, y′m)〉 be two samples. We say that S and S′ are equivalent if for all i, j ≤ m the following holds. 1. xi ≤ xj ⇐⇒ x′i ≤ x′j , and 2. yi = y′i. 
For example, 〈(1,−), (5,+), (8,+)〉 and 〈(10,−), (70,+), (100,+)〉 are equivalent, but 〈(3,−), (6,+), (4,+)〉 is not equivalent to them (because of Item 1). For a point x ∈ Xn let pos(x;S) denote the number of examples in S that are less than or equal to x: pos(x;S) = ∣∣∣{xi ∈ S : xi ≤ x}∣∣∣. (2) For a sample S = 〈(x1, y1), . . . , (xm, ym)〉 let π(S) denote the order-type of S: π(S) = (pos(x1;S), pos(x2;S), . . . , pos(xm;S)). (3) So, the samples 〈(1,−), (5,+), (8,+)〉 and 〈(10,−), (70,+), (100,+)〉 have order-type π = (1, 2, 3), whereas 〈(3,−), (6,+), (4,+)〉 has order-type π = (1, 3, 2). Note that S, S′ are equivalent if and only if they have the same labels-vectors and the same order-type. Thus, we encode the equivalence class of a sample by the pair (π, ȳ), where π denotes its order-type and ȳ = (y1 . . . ym) denotes its labels-vector. The pair (π, y) is called the equivalence-type of S. We claim that A satisfies the following property of Homogeneity: Property 1 (Homogeneity). The algorithm A possesses the following property: for every two equivalent samples S, S′ and every x, x′ ∈ Xn such that pos(x, S) = pos(x′, S′), Pr h∼QS [h(x) = 1] = Pr h′∼QS′ [h′(x′) = 1], where QS , QS′ denote the Gibbs-classifier outputted by A on the samples S, S′. In short, Homogeneity means that the probability h ∼ QS satisfies h(x) = 1 depends only on pos(x, S) and on the equivalence-type of S. To see that A is indeed homogeneous, let S, S′ be equivalent samples and let QS , QS′ denote the corresponding Gibbs-classifiers outputted byA. Then, for every x, x′ such that pos(x, S) = pos(x′, S′), Equation (1) yields that: Pr h∼QS [ h(x) = +1 ] = ∑ xi<x QS(hxi) = ∑ x′i<x ′ QS′(hx′i) = Prh′∼QS′ [ h′(x′) = +1 ] , where in the second transition we used that QS(hxi) = QS′(hx′i) for every i ≤ m (because S, S ′ are equivalent), and that xi ≤ x ⇐⇒ x′i ≤ x′, for every i (because pos(x, S) = pos(x′, S′)). The General Case: Approximate Homogeneity. Before we continue to define the hard distribution for algorithm A, let us discuss how the proof of Theorem 2 handles arbitrary algorithms that are not necessarily homogeneous. The general case complicates the argument in two ways. First, the notion of Homogeneity is relaxed to an approximate variant which is defined next. Here, an order type π is called a permutation if π(i) 6= π(j) for every distinct i, j ≤ m. (Indeed, in this case π = (π(x1) . . . π(xm)) is a permutation of 1 . . .m.) Note that the order type of S = 〈(x1, y1) . . . (xm, ym))〉 is a permutation if and only if all the points in S are distinct (i.e. xi 6= xj for all i 6= j). Definition 2 (Approximate Homogeneity). An algorithm B is γ-approximatelym-homogeneous if the following holds: let S, S′ be two equivalent samples of length m whose order-type is a permutation, and let x /∈ S, x′ /∈ S′ such that pos(x, S) = pos(x′, S′). Then, |QS(x)−QS′(x′)| ≤ γ 5m , (4) where QS , QS′ denote the Gibbs-classifier outputted by B on the samples S, S′. Second, we need to identify a sufficiently large subdomain on which the assumed algorithm is approximately homogeneous. This is achieved by the next lemma, which is based on a Ramsey argument. Lemma 1 (Large Approximately Homogeneous Sets ). Let m,n > 1 and let B be an algorithm that is defined over input samples of size m over Xn. Then, there is X ′ ⊆ Xn of size |X ′| ≥ Φ(m, γ, n) such that the restriction of B to input samples from X ′ is γ-approximate m-homogeneous. We prove Lemma 1 in the full version [19]. 
For the rest of this exposition we rely on Property 1 as it simplifies the presentation of the main ideas. The Hard Distribution D. We are now ready to finish the first step and define the “hard” distribution D. Define D to be uniform over examples (x, y) such that y = hn/2(x). So, each drawn example (x, y) satisfies that x is uniform in Xn and y = −1 if and only if x ≤ n/2. In the general case, D will be defined in the same way with respect to the detected homogeneous subdomain. 4.1.2 Hard Distribution =⇒ Lower Bound: Sensitivity We next outline the second step of the proof, which establishes Theorem 2 using the hard distribution D. Specifically, we show that for a sample S ∼ Dm, KL (QS‖P ) = Ω̃ ( 1 m2 log(|Xn|) ) , with a constant probability bounded away from zero. (In the general case |Xn| is replaced by Φ(m, γ, n) – the size of the homogeneous set.) Sensitive Indices. We begin with describing the key property of homogeneous learners. Let (π, ȳ) denote the equivalence-type of the input sample S. By homogeneity (Property 1), there is a list of numbers p0, . . . , pm, which depends only on the order-type (π, ȳ), such that Prh∼QS [h(x) = 1] = pi for every x ∈ Xn, where i = pos(x, S). The crucial observation is that there exists an index i ≤ m′ which is sensitive in the sense that pi − pi−1 ≥ 1 m . (5) Indeed, consider xj such that hxj = arg mink LS(hxk), and let i = pos(xj , S). Then, pi − pi−1 = LS(hxj )∑ i′≤m LS(hxi′ ) ≥ 1 m . In the general case we show that any homogeneous algorithm that learnsHn satisfies Equation (5) for typical samples (see the full version [19]). The intuition is that any algorithm that learns the distribution D must output a Gibbs-classifier QS such that for typical points x, if x > n/2 then Prh∼QS [h(x) = 1] ≈ 1, and if x ≤ n/2 then Prh∼QS [h(x) = 1] ≈ 0. Thus, when traversing all x’s from 1 up to n there must be a jump between pi−1 and pi for some i. From Sensitive Indices to a Lower Bound on the KL-divergence. How do sensitive indices imply a lower bound on PAC-Bayes? This is the most technical part of the proof. The crux of it is a connection between sensitivity and the KL-divergence which we discuss next. Consider a sensitive index i and let xj be the input example such that pos(xj , S) = i. For x̂ ∈ Xn, let Sx̂ denote the sample obtained by replacing xj with x̂: Sx̂ = 〈(x1, y1), . . . , (xj−1, yj−1), (x̂j , yj), (xj+1, yj+1) . . . (xm, ym).〉, and let Qx̂ := QSx̂ denote the posterior outputted by A given the sample Sx̂. Consider the set I ⊆ Xn of all points x̂ such that Sx̂ is equivalent to S. Equation (5) implies that that for every x, x̂ ∈ I , Pr h∼Qx̂ [h(x) = 1] = { pi−1 x < x̂, pi x > x̂. Combined with the fact that pi− pi−1 ≥ 1/m, this implies a lower bound on KL-divergence between an arbitrary prior P and Qx̂ for most x̂ ∈ I . This is summarized in the following lemma: Lemma 2 (Sensitivity Lemma). Let I be a linearly ordered set and let {Qx̂}x̂∈I be a family of posteriors supported on {±1}I . Suppose there are q1 < q2 ∈ [0, 1] such that for every x, x̂ ∈ I: x < x̂ =⇒ Pr h∼Qx̂ [h(x) = 1] ≤ q1 + q2 − q1 4 , x > x̂ =⇒ Pr h∼Qx̂ [h(x) = 1] ≥ q2 − q2 − q1 4 . Then, for every prior distribution P , if x̂ ∈ I is drawn uniformly at random, then the following event occurs with probability at least 1/4: KL (Qx̂‖P ) = Ω ( (q2 − q1)2 log|I| log log|I| ) . 
The sensitivity lemma tells us that in the above situation, the KL divergence between Qx̂ and any prior P , for a random choice x̂, scales in terms of two quantities: the distance between the two values, q2 − q1, and the size of I . The proof of Lemma 2 is provided in the full version [19]. In a nutshell, the strategy is to bound from below KL (Qrx̂‖P r), where r is sufficiently small; the desired lower bound then follows from the chain rule, KL (Qx̂‖P ) = 1rKL (Q r x̂‖P r). Obtaining the lower bound with respect to the r-fold products is the crux of the proof. In short, we will exhibit events Ex̂ such that Qrx̂(Ex̂) ≥ 12 for every x̂ ∈ I , but P r(Ex̂) is tiny for |I|4 of the x̂’s. This implies a lower bound on KL (Q r x̂‖P r) since KL (Qrx̂‖P r) ≥ KL (Qrx̂(Ex̂)‖P r(Ex̂)) , by the data-processing inequality. Wrapping Up. We now continue in deriving a lower bound for A. Consider an input sample S ∼ Dm. In order to apply Lemma 2, fix any equivalence-type (π, y) with a sensitive index i and let xj be such that pos(xj ;S) = i. The key step is to condition the random sample S on (π, y) as well as on {xt}mt=1 \ {xj} – all sample points besides the sensitive point xj . Thus, only xj is remained to be drawn in order to fully specify S. Note then, that by symmetry x̂ is uniformly distributed in a set I ⊆ Xn, and plugging q1 := pi, q2 := pi−1 in Lemma 2 yields that for any prior distribution P : KL (QS‖P ) ≥ Ω̃ ( 1 m2 log(|I|) ) , with probability at least 1/4. Note that we are not quite done since the size |I| is a random variable which depends on the type (π, ȳ) and the sample points {xk}k 6=j . However, the distribution of |I| can be analyzed by elementary tools. In particular, we show that |I| ≥ Ω(|Xn|/m2) with high enough probability, which yields the desired lower bound on the PAC-Bayes quantity. (In the general case |Xn| is replaced by the size of the homogeneous set.) 5 Discussion In this work we presented a limitation for the PAC-Bayes framework by showing that PAC-learnability of one-dimensional thresholds can not be established using PAC-Bayes. Perhaps the biggest caveat of our result is the mild dependence of the bound on the size of the domain in Theorem 2. In fact, Theorem 2 does not exclude the possibility of PAC-learning thresholds over Xn with sample complexity that scale with O(log∗ n) such that the PAC-Bayes bound vanishes. It would be interesting to explore this possibility; one promising direction is to borrow ideas from the differential privacy literature: [4] and [6] designed a private learning algorithm for thresholds with sample complexity exp(log∗ n); this bound was later improved by [16] to Õ((log∗ n)2). Also, [7] showed that finite Littlestone dimension is sufficient for private learnability, and it would be interesting to extend these results to the context of PAC-Bayes. Let us note that in the context of pure differential privacy, the connection between PAC-Bayes analysis and privacy has been established in [14]. Non-uniform learning bounds Another aspect is the implication of our work to learning algorithms beyond the uniform PAC setting. Indeed, many successful and practical algorithms exhibit sample complexity that depends on the target-distribution. E.g.,the k-Nearest-Neighbor algorithm eventually learns any target-distribution (with a distribution-dependent rate). The first point we address in this context concerns interpolating algorithms. These are learners that achieve zero (or close to zero) training error (i.e. they interpolate the training set). 
Examples of such algorithms include kernel machines, boosting, random forests, as well as deep neural networks [5, 29]. PAC-Bayes analysis has been utilized in this context, for example, to provide margin-dependent generalization guarantees for kernel machines [18]. It is therefore natural to ask whether our lower bound has implications in this context. As a simple case-study, consider the 1-Nearest-Neighbour algorithm. Observe that this algorithm forms a proper and consistent learner for the class of 1-dimensional thresholds (indeed, given any realizable sample it will output the threshold which maximizes the margin), and therefore enjoys a very fast learning rate. On the other hand, our result implies that for any algorithm (including 1-Nearest-Neighbor) that is amenable to PAC-Bayes analysis, there is a distribution realizable by thresholds on which it has high population error. Thus, no algorithm with a PAC-Bayes generalization bound can match the performance of nearest-neighbour with respect to such distributions. Finally, this work also relates to a recent attempt to explain generalization through the implicit bias of learning algorithms: it is commonly argued that the generalization performance of algorithms can be explained by an implicit algorithmic bias. Building upon the flexibility of providing distribution-dependent generalization bounds, the PAC-Bayes framework has seen a resurgence of interest in this context towards explaining generalization in large-scale, modern practical algorithms [27, 28, 13, 14, 2]. Indeed, PAC-Bayes bounds seem to provide non-vacuous guarantees in several relevant domains [17, 14]. Nevertheless, the work here shows that any algorithm that can learn 1D thresholds is necessarily not biased, in the PAC-Bayes sense, towards a (possibly distribution-dependent) prior. We mention that recently, [12] showed that SGD’s generalization performance indeed cannot be attributed to some implicit bias of the algorithm that governs the generalization. Broader Impact There are no foreseen ethical or societal consequences for the research presented herein. Acknowledgments and Disclosure of Funding R.L. is supported by an ISF grant no. 2188/20 and partially funded by an unrestricted gift from Google. Any opinions, findings, and conclusions or recommendations expressed in this work are those of the author(s) and do not necessarily reflect the views of Google. S.M. is supported by the Israel Science Foundation (grant No. 1225/20), by an Azrieli Faculty Fellowship, and by a grant from the United States - Israel Binational Science Foundation (BSF). Part of this work was done while the author was at Google Research.
1. What is the focus of the paper regarding the PAC-Bayes framework? 2. What are the strengths of the paper's approach and contributions? 3. What are the limitations of the proposed method and its applications? 4. How does the reviewer assess the significance and relevance of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper presents a limitation of the PAC-Bayes framework: there are classes that are learnable, but this cannot be proved using a PAC-Bayes analysis, since there exists a realizable distribution for which the PAC-Bayes bound is arbitrarily large. Strengths This paper first states its main results and then takes considerable effort to prove its findings in a detailed and well-ordered manner. It is the first to investigate the limitation of the PAC-Bayes framework on the class of 1-dimensional thresholds. Therefore, the paper provides a significant and novel, though somewhat limited, contribution and is relevant to NeurIPS. Weaknesses This paper provides evidence that the PAC-Bayes framework cannot prove the learnability of linear classification in 1D. The proposed argument is restricted to this simple linear classification setting and can be seen as a specific counter-example to PAC-Bayes analysis. Hence, it is suggested to explore whether there exist other learnable tasks that cannot be proved with a PAC-Bayes analysis, and to specify the scope of effectiveness of PAC-Bayes analysis.
NIPS
Title Joint Modeling of Visual Objects and Relations for Scene Graph Generation Abstract An in-depth scene understanding usually requires recognizing all the objects and their relations in an image, encoded as a scene graph. Most existing approaches for scene graph generation first independently recognize each object and then predict their relations independently. Though these approaches are very efficient, they ignore the dependency between different objects as well as between their relations. In this paper, we propose a principled approach to jointly predict the entire scene graph by fully capturing the dependency between different objects and between their relations. Specifically, we establish a unified conditional random field (CRF) to model the joint distribution of all the objects and their relations in a scene graph. We carefully design the potential functions to enable relational reasoning among different objects according to knowledge graph embedding methods. We further propose an efficient and effective algorithm for inference based on meanfield variational inference, in which we first provide a warm initialization by independently predicting the objects and their relations according to the current model, followed by a few iterations of relational reasoning. Experimental results on both the relationship retrieval and zero-shot relationship retrieval tasks prove the efficiency and efficacy of our proposed approach. 1 Introduction Modern object recognition [32, 10, 35] and detection [28, 27, 57] systems excel at the perception of visual objects, which has significantly boosted many industrial applications such as intelligent surveillance [18, 49] and autonomous driving [23, 38]. To have a deeper understanding of a visual scene, detecting and recognizing the objects in the scene is however insufficient. Instead, a comprehensive cognition of visual objects and their relationships is more desirable. Scene Graph Generation (SGG) [13] is a natural way to achieve this goal, in which a graph incorporating all objects and their relations within a scene image is derived to represent its semantic structure. Most previous works for SGG [48, 55, 53, 36, 4, 37] usually first independently predict different objects in a scene and then predict their relations independently. In practice, though such methods are very efficient, they ignore the dependency between different objects and between the relations of different object pairs. For example, a car could frequently co-occur with a street, and the relation eating could always appear along with the relation sitting on. Modeling such dependency could be very important for accurate scene graph prediction, especially for rare objects and relations. There are indeed some recent works [6, 5] along this direction. For example, Dai et al. [6] explored the triplet-level label dependency among a head object, a tail object and their relation. These methods have shown very promising results, while they only explored the limited dependency within a triplet. How to capture the full dependency between different objects and between their relations within a whole scene graph remains very challenging and unexplored. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). To attain such a goal, in this paper, we propose a principled approach called Joint Modeling for Scene Graph Generation (JM-SGG) to predict the whole scene graph by jointly capturing all the label dependency within it, i.e. 
the dependency between different objects and their relations and also the interdependency between them. Specifically, we model the joint distribution of all objects and relations in a scene graph with the conditional random field (CRF) framework [17]. To flexibly model the joint distribution, the key is to define effective potential functions on both nodes (i.e. objects) and edges (i.e. relations between objects). We define the potential functions on objects according to the object representations extracted by existing neural network based object detector. It is however nontrivial to design effective potential functions on edges, since these potential functions have to capture the relation between two objects in an edge and meanwhile allow relational reasoning among different edges, which models the dependency among the relations on various edges. Inspired by the existing work of knowledge graph embedding [20], which represents entities and relations in the same embedding space and performs relational reasoning in that space, we define our potential functions according to the knowledge graph embedding method and hence allow efficient relational reasoning between different object pairs in a scene graph. Such a fully expressive model also brings challenges to both learning and inference due to the complicated structures between different random variables in the CRF, i.e. objects and their relations. We therefore further propose an efficient and effective inference algorithm based on mean-field variational inference, which is able to assist the gradient estimation for learning and derive the most likely scene graph for test. Traditional mean-field methods usually suffer from the problem of slow convergence. Instead of starting from a randomly initialized variational distribution as in traditional mean-field methods, we propose to initialize the variational distribution, i.e. the marginal distribution of each object and each relation, with a factorized tweak of JM-SGG model, and then perform a few iterations of message passing induced by the fixed-point optimality condition of mean field to refine the variational distribution, which allows our approach to enjoy both good precision and efficiency. To summarize, in this paper, we make the following contributions: • We propose Joint Modeling for Scene Graph Generation (JM-SGG) which is a fully expressive model that can capture all the label dependency in a whole scene graph. • We propose a principled mean-field variational inference algorithm to enable the efficient learning and inference of JM-SGG model. • We verify the superior performance of our method on both relationship retrieval and zero-shot relationship retrieval tasks under various settings and metrics. Also, we illustrate the efficiency and efficacy of the proposed inference algorithm by thorough analytical experiments. 2 Related Work Scene Graph Generation (SGG). This task aims to extract structured representations from scene images [13], including the category of objects and their relationships. Previous works performed SGG by propagating the information from different local regions [48, 53, 50, 36], introducing external knowledge [9, 52], employing well-designed loss functions [56, 14, 34] and performing unbiased scene graph prediction [4, 19, 37]. Most of these methods predict each object and relation label independently based on an informative representation, which fails to capture the rich label dependency within a scene graph and is thus less expressive. 
Several former works [6, 5] attempted to model such label dependency within a single relational triplet but not on the whole scene graph. Improvements over existing methods. The proposed JM-SGG model is, to our best knowledge, the first approach that jointly models all the label dependency within a scene graph, including the one within object or relation labels and the one between these two kinds of labels. To attain this goal, a unified CRF is constructed for graphical modeling, and a mean-field variational inference algorithm is designed for efficient learning and inference, which show technical contributions. Conditional Random Fields (CRFs). CRFs are a class of probabilistic graphical modeling methods which perform structured prediction upon the observed data. CRF-based approaches have been broadly studied on various computer vision problems, including segmentation [17, 43, 51, 26], superresolution [39, 46], image denoising [29, 42] and scene graph generation [6, 5]. These former works utilizing CRF for SGG [6, 5] aimed to model the conditional distribution of a single triplet upon visual representations. By comparison, our approach models the conditional distribution of a whole scene graph upon the observed scene image, which is more expressive. 3 Problem Definition and Preliminary 3.1 Problem Definition This work focuses on extracting a scene graph, i.e. a structured representation of visual scene [13], from an image. Formally, we define a scene graph as G = (yO, R). yO denotes the category labels of all objects O in the image, and it holds that yo ∈ C for each object o ∈ O, where C stands for the set of all object categories, including the “background” category. R = {(oh, r, ot)} is the set of relational triplets/edges with r ∈ T as the relation type from head object oh to tail object ot (oh, ot ∈ O), where T represents all relation types, including the type of “no relation”. In this work, we aim at jointly modeling visual objects and visual relations as defined below: Joint Scene Graph Modeling. Given an image I , we aim to jointly predict object categories yO and the relationships R among all objects, which models the joint distribution of scene graphs, i.e. p(G|I) = p(yO, R|I), with comprehensively considering the dependency within yO and R and also the interdependency between them. 3.2 Conditional Random Fields Conditional Random Field (CRF) is a discriminative undirected graphical model. Given a set of observed variables x, it models the joint distribution of labels y based on a Markov network G that specifies the dependency among all variables: p(y|x) = 1 Z(x) ∏ C φC(xC ,yC), Z(x) = ∑ y ∏ C φC(xC ,yC). (1) where φC denotes the nonnegative potential function defined over the variables in clique C (a clique is a fully-connected local subgraph), and Z(x) is a normalization constant called partition function. 4 Model In this section, we introduce Joint Modeling for Scene Graph Generation (JM-SGG). Current methods solve the problem by independently predicting each object and relation label upon an informative representation, and thus the prediction of different labels cannot fully benefit each other. JM-SGG tackles the limitation by jointly modeling all the objects and relationships in a visual scene with a unified conditional random field, which enables the prediction of various object and relation labels to sufficiently interact with each other. 
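Before the model details, a tiny numerical sketch of the generic CRF factorization in Eq. (1) may help: the example below is a hypothetical two-label CRF with made-up potentials (unrelated to the scene-graph potentials defined later), and it computes the partition function Z(x) by brute-force enumeration, which is exactly the summation that becomes intractable once the labels form a whole scene graph.

```python
import itertools
import numpy as np

# Hypothetical two-label CRF: labels y1, y2 in {-1, +1}, one observed scalar x.
x = 0.3

def node_potential(y):            # unary clique potential phi(y, x)
    return np.exp(y * x)

def edge_potential(y1, y2):       # pairwise clique potential phi(y1, y2)
    return np.exp(0.5 if y1 == y2 else -0.5)

def unnormalized(y1, y2):         # product of clique potentials in Eq. (1)
    return node_potential(y1) * node_potential(y2) * edge_potential(y1, y2)

# The partition function Z(x) sums over every joint label configuration; this brute-force
# enumeration is what becomes infeasible for a full scene graph.
configs = list(itertools.product([-1, 1], repeat=2))
Z = sum(unnormalized(y1, y2) for y1, y2 in configs)
p = {cfg: unnormalized(*cfg) / Z for cfg in configs}
print(sum(p.values()))            # sums to 1.0
```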
Nevertheless, learning and inferring this complex CRF is nontrivial, and we thus propose to use maximum likelihood estimation combined with mean-field variational inference, yielding an efficient algorithm for learning and inference. Next, we elucidate the details of our approach. 4.1 Representation In the JM-SGG model, we organize the observed scene image I and all object and relation labels in the latent scene graph (i.e. yO and R) as the nodes in a unified conditional random field. Since the interactions of these nodes are either for a single object or for the relationship between an object pair, we decompose the graphical structure of whole network into two sets of components. (1) Object components: For an object o ∈ O, we consider the dependency of its category label on its visual representation and thus connect yo with I , as shown in Fig. 1(a). (2) Relation components: for a relational triplet (oh, r, ot) ∈ R, we consider the dependency of relation type r on the visual cues in image I , and we also model the interdependency among the object and relation labels in this triplet (i.e. yoh , yot and r), which forms a relation component as Fig. 1(b) shows. By combining all object and relation components, the CRF can capture the comprehensive label dependency within a scene graph. We now define the joint distribution of scene graphs upon the observed scene image as below: pΘ(G|I) = 1 ZΘ(I) fΘ(G, I), (2) fΘ(G, I) = ∏ o∈O φ(yo, I) ∏ (oh,r,ot)∈R ψ(r, yoh , yot , I), (3) where Θ summarizes the parameters of whole model, fΘ is an unnormalized likelihood function, ZΘ denotes the partition function, and φ and ψ are the potential functions defined on object and relation components, respectively. Next, we define these potential functions based on the extracted visual representations and the correlation among different labels. Visual representation extraction. Given a scene image I , we first utilize a standard object detector (e.g. Faster R-CNN [28] in our implementation) to obtain a set of bounding boxes which potentially contain the objects in the image, and object representations zO = {zo|o ∈ O} (zo ∈ RD) are then derived by RoIAlign [11]. We regard the union bounding box over a pair of objects as their context region and again use RoIAlign to get all context representations zR = {zht|(oh, r, ot) ∈ R} (zht ∈ RD). Here, D denotes the latent dimension of objects and contexts. By denoting the whole object detector as gθ, this feature extraction process can be represented as: (zO, zR) = gθ(I). Potential function definition. The potential function φ(yo, I) for object component models the dependency of object category yo on object representation zo by measuring their affinity. To conduct such a measure, we represent each object category with a prototype [33] (i.e. a learnable embedding vector) in the continuous space, which forms a prototype set C = {Ci ∈ RD|i ∈ C} for all object categories (D denotes the dimension of object space). On such basis, we define φ(yo, I) by computing the distance between object representation zo and the prototype of object category yo: φ(yo, I) = exp ( −d(Cyo , zo) ) , (4) where d is a distance measure (e.g. Euclidean distance in our practice). The potential function ψ(r, yoh , yot , I) for relation component models the dependency of relation type r on the relevant visual representations in image I , and it also models the interdependency among the object and relation labels of a triplet (i.e. yoh , yot and r). 
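Before turning to the relation potential, here is a minimal sketch of the two ingredients just introduced, with illustrative feature dimensions and category counts; the helper names union_box and object_potential are ours, not from the released code. It shows the union bounding box used as the context region, and the prototype-distance object potential of Eq. (4).

```python
import numpy as np

def union_box(box_a, box_b):
    """Union bounding box of two [x1, y1, x2, y2] boxes, used as the context region."""
    return [min(box_a[0], box_b[0]), min(box_a[1], box_b[1]),
            max(box_a[2], box_b[2]), max(box_a[3], box_b[3])]

def object_potential(prototypes, y_o, z_o):
    """Eq. (4): phi(y_o, I) = exp(-d(C_{y_o}, z_o)) with Euclidean distance d."""
    return np.exp(-np.linalg.norm(prototypes[y_o] - z_o))

# Toy example: 3 object categories with 4-dimensional prototypes (illustrative sizes).
rng = np.random.default_rng(0)
prototypes = rng.normal(size=(3, 4))        # one learnable prototype C_i per category
z_o = rng.normal(size=4)                    # RoIAlign feature of one detected object

print(union_box([10, 10, 50, 60], [40, 20, 90, 80]))    # -> [10, 10, 90, 80]
scores = np.array([object_potential(prototypes, y, z_o) for y in range(3)])
print(scores / scores.sum())                # normalizing gives the initialization of Eq. (10)
```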
Therefore, we can factorize ψ(r, yoh , yot , I) into a term ψvisual(r, I) for modeling visual influence and another term ψtriplet(r, yoh , yot) for modeling the label consistency within a triplet: ψ(r, yoh , yot , I) = ψvisual(r, I)ψtriplet(r, yoh , yot). (5) Similarly, for measuring r in the continuous space, a prototype set T = {Tj ∈ RK |j ∈ T } is constructed for all relation types (K denotes the dimension of relation space). We consider two kinds of visual representations that affect the prediction of relation type r, i.e. the context representation zht and the head and tail object representations zoh and zot . The influence of context representation can be easily measured by projecting context representation zht to the relation space and computing its distance to the prototype of relation type r. However, measuring the influence of head and tail object representations and evaluating the label consistency within a triplet are nontrivial, which require to model the ternary correlation among head object, tail object and their relationship. Inspired by the idea of TransR [20], an effective knowledge graph embedding technique, we model such ternary correlation by treating each relation as a translation vector from head object embedding to tail object embedding in the same embedding space. Specifically, we first apply the translation vector Tr specified by relation r to head object embedding, and then compute the distance between the translated embedding and tail object embedding. Based on these thoughts, we define ψvisual(r, I) and ψtriplet(r, yoh , yot) as follows: ψvisual(r, I) = exp ( − ( d(Tr,Mczht) + d(Mozoh + Tr,Mozot) )) , (6) ψtriplet(r, yoh , yot) = exp ( −d(MoCyoh + Tr,MoCyot ) ) , (7) where Mc ∈ RK×D denotes the projection matrix mapping from context space to relation space, and Mo ∈ RK×D is the projection matrix mapping from object space to relation space. Next, we state how to learn the parameters in JM-SGG model. 4.2 Learning In the learning phase, we seek to learn the parameters C, T, Mc and Mo of potential function and the parameters θ of object detector by maximum likelihood estimation, where Θ summarizes all these parameters. Specifically, we aim to maximize the expectation of log-likelihood function log pΘ(G|I) with respect to the data distribution pd, i.e. L(Θ) = EG∼pd [ log pΘ(G|I) ] , by performing gradient ascent. The gradient of the objective function L(Θ) with respect to Θ can be computed as below: ∇ΘL(Θ) = EG∼pd [∇Θ log fΘ(G, I)]− EG∼pΘ [∇Θ log fΘ(G, I)], (8) where pΘ is the model distribution that approximates pd (i.e. the conditional distribution pΘ(G|I) defined by JM-SGG model). This formula has been broadly adopted in the literature [12, 3, 7], and we provide the proof in supplementary material. In practice, we estimate the first expectation in Eq. (8) with the ground-truth scene graphs in a mini-batch. The estimation of the second expectation in Eq. (8) requires to sample scene graphs from the model distribution, which is nontrivial due to the intractable partition function ZΘ(I) that sums over all possible scene graphs. One solution is to run the Markov Chain Monte Carlo (MCMC) sampler, but its computational cost is high, and we therefore use mean-field variational inference for more efficient sampling (the detailed scheme is stated in Sec. 4.3). Instead of fixing the parameters of a pre-trained object detector during learning as in former works [53, 36, 37, 34], we fine-tune the parameters of object detector during maximum likelihood learning. 
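The following PyTorch sketch puts Eqs. (3)–(8) together for a toy scene. All sizes, variable names, and the particular sample graph are illustrative assumptions, and random tensors stand in for the detector features; it is not the released implementation. It shows how the object and relation potentials define the unnormalized log-likelihood log f_Θ(G, I), and how the gradient in Eq. (8) is estimated as a positive (data) phase minus a negative (model-sample) phase.

```python
import torch

# Illustrative sizes (not taken from the paper): D = feature dim, K = relation-space dim,
# C = number of object categories, T = number of relation types.
D, K, C, T = 8, 6, 5, 4

# Learnable parameters of the potential functions (Sec. 4.1).
obj_proto = torch.randn(C, D, requires_grad=True)    # object prototypes C_i
rel_trans = torch.randn(T, K, requires_grad=True)    # relation translations T_j
M_c = torch.randn(K, D, requires_grad=True)          # context -> relation space
M_o = torch.randn(K, D, requires_grad=True)          # object  -> relation space

def log_phi(y_o, z_o):
    # log of Eq. (4): -||C_{y_o} - z_o||
    return -torch.norm(obj_proto[y_o] - z_o)

def log_psi(r, y_h, y_t, z_h, z_t, z_ctx):
    # log of Eqs. (5)-(7): visual term plus TransR-style triplet term
    visual = -(torch.norm(rel_trans[r] - M_c @ z_ctx)
               + torch.norm(M_o @ z_h + rel_trans[r] - M_o @ z_t))
    triplet = -torch.norm(M_o @ obj_proto[y_h] + rel_trans[r] - M_o @ obj_proto[y_t])
    return visual + triplet

def log_f(graph, z_obj, z_ctx):
    # Unnormalized log-likelihood log f_Theta(G, I) of Eq. (3).
    obj_labels, triplets = graph
    total = sum(log_phi(y, z_obj[o]) for o, y in enumerate(obj_labels))
    for (h, r, t) in triplets:
        total = total + log_psi(r, obj_labels[h], obj_labels[t],
                                z_obj[h], z_obj[t], z_ctx[(h, t)])
    return total

# Toy scene with two objects and one relation; random tensors replace detector features.
z_obj = torch.randn(2, D)
z_ctx = {(0, 1): torch.randn(D)}
G_data   = ([1, 3], [(0, 2, 1)])     # ground-truth labels (positive phase)
G_sample = ([1, 0], [(0, 1, 1)])     # a graph sampled from the variational q(G) (negative phase)

# Eq. (8): gradient of the data log-potential minus gradient of the model-sample log-potential.
objective = log_f(G_data, z_obj, z_ctx) - log_f(G_sample, z_obj, z_ctx)
(-objective).backward()              # .grad now holds a stochastic estimate of -grad L(Theta)
```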
In this way, the detector can extract more precise object and context representations by learning the likelihoods of whole scene graphs. Also, we apply a traditional bounding box regression constraint Lreg(Θ) [28] to the detector for preserving its localization capability, and these two learning objectives share the same weight. Next, we introduce the inference scheme for JM-SGG model. 4.3 Inference The inference phase aims to compute the conditional distribution pΘ(G|I) defined by JM-SGG model and also sample from it. Exact inference is always infeasible due to the complex structures among the latent variables yO andR of the scene graph as well as the intractable partition function. Therefore, we approximate pΘ(G|I) with a variational distribution qΘ(G) via the mean-field approximation [31, 24]: qΘ(G) = ∏ o∈O qΘ(yo) ∏ (oh,r,ot)∈R qΘ(r), (9) where each factor qΘ(yo) and qΘ(r) defines a categorical distribution, i.e. ∑ yo∈C qΘ(yo) = 1 and∑ r∈T qΘ(r) = 1. In this variational distribution, all object and relation labels are assumed to be independent, and it shares the same set of parameters Θ with pΘ(G|I), which greatly reduces the number of parameters needed for variational inference. For brevity, we will omit Θ in the following distribution notations, e.g. simplifying qΘ(G) as q(G). In general, we are seeking for a variational distribution that satisfies the factorization in Eq. (9) and also maximizes the variational lower bound L(q) = Eq(G)[log p(G, I)− log q(G)] (i.e. equivalent to minimizing the KL divergence between q(G) and p(G|I)). Typically, this is achieved by optimizing the variational distribution with fixed-point iterations [44, 45], which can however be inefficient, especially for the images with many objects. We thus design an inference algorithm that appropriately initializes each factor in q(G) and iteratively updates all factors. Intuitively, factor initialization is similar to existing SGG methods, where object and relation labels are predicted independently; factor update can be viewed as a refinement procedure, which makes the predictions from the initialization step more consistent. With factor initialization and factor update, the proposed inference method combines the advantages of both existing methods and CRFs, i.e. efficiency and consistency. Factor initialization. For initialization, we neglect the interdependency among different object and relation labels, i.e. omitting the potential function ψtriplet(r, yoh , yot) in p(G|I), yielding a simplified model distribution p̂(G|I). In this way, we can easily derive the following factors for initialization which makes q(G) = p̂(G|I): q(yo) = φ(yo, I)∑ y′o∈C φ(y′o, I) ∀o ∈ O, (10) q(r) = ψvisual(r, I)∑ r′∈T ψvisual(r ′, I) ∀(oh, r, ot) ∈ R. (11) See supplementary material for the proof. Intuitively, we initialize each factor by only considering its dependency on visual representations, and, on such basis, label interdependency will then be taken into account to refine each factor. In such an initialization approach, the computation of different factors is independent with each other and thus can be done efficiently in a parallel manner. In Sec. 6.1, we empirically illustrate the better convergence performance of this initialization scheme compared to the random initialization which is commonly employed in previous works [45, 22]. Factor update. Based on these initialized factors, we perform update by taking into account the interdependency among the object and relation labels in scene graph, i.e. 
using the full expression of p(G|I) with potential function ψtriplet(r, yoh , yot). In the mean-field formulation of Eq. (9), if we are to update one factor q(yo) (or q(r)) with all other factors fixed, its optimum q∗(yo) (or q∗(r)) which maximizes the variational lower bound L(q) can be specified by the following expression: log q∗(yo) = log φ(yo, I) + ∑ (o,r,ot)∈R ∑ yot∈C ∑ r∈T q(yot)q(r) logψtriplet(r, yo, yot) + ∑ (oh,r,o)∈R ∑ yoh∈C ∑ r∈T q(yoh)q(r) logψtriplet(r, yoh , yo) + const ∀o ∈ O, (12) log q∗(r) = logψvisual(r, I) + ∑ yoh∈C ∑ yot∈C q(yoh)q(yot) logψtriplet(r, yoh , yot) + const ∀(oh, r, ot) ∈ R. (13) The proof is provided in supplementary material. During computation, we omit the additive constants above, since they can be naturally eliminated when computing normalized q∗(yo) and q∗(r), i.e. taking the exponential of both sides and normalizing q∗(yo) over C and q∗(r) over T . Taking a close look at Eqs. (12) and (13), we can find that each factor is updated by aggregating the information from its neighboring factors (e.g. from the factors q(yoh) and q(yot) of head and tail objects to the factor q(r) of their relation), which can be efficiently implemented by matrix multiplication as in message passing neural networks [8]. In practice, we simultaneously update all factors in a single iteration based on the states of factors in last iteration, i.e. performing asynchronous message passing in mean field [41, 47], which forms an efficient iterative update scheme. We analyze the efficiency and efficacy of this update scheme in Secs. 6.1 and 6.2. Algorithm 1 Inference algorithm of JM-SGG. Input: Scene image I , iteration number NT . Output: Factors {q(yo)}, {q(r)} of q(G). Initialize {q(yo)}, {q(r)} by Eqs. (10), (11). for t = 1 to NT do Derive {log q∗(yo)}, {log q∗(r)} by Eqs. (12), (13). Update all factors: {q(yo)} ← {softmax(log q∗(yo))}, {q(r)} ← {softmax(log q∗(r))}. end for Inference algorithm. The whole inference algorithm is summarized in Alg. 1. Upon on the input scene image I , we first initialize each factor in q(G) by Eqs. (10) and (11). After that, we perform factor update for NT iterations. In each iteration, the log-optimum of each factor is computed based on the factors of last iteration by Eqs. (12) and (13), and the normalized factors are then derived by softmax for update. Sampling strategy. After such an iterative inference, we obtain a factorized variational distribution q(G) which well approximates the conditional distribution p(G|I) defined by JM-SGG model. Now, instead of sampling from the intractable model distribution p(G|I), we can easily sample scene graphs from q(G) by independently drawing each object/relation label from the corresponding factor (i.e. q(yo) or q(r)), where each factor is a categorical distribution. In practice, we sample NS scene graphs from q(G) for each image in a mini-batch, yielding totally NSNB samples for estimating the second expectation term in∇ΘL(Θ) (Eq. (8)), where NB denotes batch size. Prediction strategy. At the test time, we need to infer the scene graph with the highest probability in p(G|I), and it can also be efficiently done using the variational distribution q(G). In specific, based on the factorized definition of q(G), we can easily select the object category (or relation type) with the highest probability in each factor q(yo) (or q(r)), and the selected object and relation labels together form a scene graph that well approximates the most likely scene graph with respect to the model distribution p(G|I). 
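A NumPy sketch of Algorithm 1 is given below, under the assumption that all log-potentials have been precomputed as dense arrays; the function and argument names are ours. Factor initialization follows Eqs. (10)–(11), each iteration applies Eqs. (12)–(13) simultaneously to all factors, and prediction takes the arg-max of each factor.

```python
import numpy as np

def softmax(logits, axis=-1):
    logits = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(logits)
    return e / e.sum(axis=axis, keepdims=True)

def mean_field_inference(log_phi, log_psi_vis, log_psi_tri, edges, n_iter=2):
    """Sketch of Algorithm 1.
    log_phi:     (n_obj, C) array  -- log object potentials, Eq. (4)
    log_psi_vis: {(h, t): (T,)}    -- log visual relation potentials, Eq. (6)
    log_psi_tri: (T, C, C) array   -- log triplet potentials, Eq. (7), indexed [r, y_h, y_t]
    edges:       list of (head, tail) object-index pairs carrying a relation variable
    """
    # Factor initialization, Eqs. (10)-(11): visual evidence only.
    q_obj = softmax(log_phi, axis=1)                          # q(y_o)
    q_rel = {e: softmax(log_psi_vis[e]) for e in edges}       # q(r)

    for _ in range(n_iter):
        # Factor update, Eqs. (12)-(13): all factors are recomputed from the factors of
        # the previous iteration (message passing between neighboring factors).
        obj_logits = log_phi.copy()
        for (h, t) in edges:
            r_mass = q_rel[(h, t)]                                            # (T,)
            obj_logits[h] += np.einsum('r,t,rht->h', r_mass, q_obj[t], log_psi_tri)
            obj_logits[t] += np.einsum('r,h,rht->t', r_mass, q_obj[h], log_psi_tri)
        rel_new = {}
        for (h, t) in edges:
            logits = log_psi_vis[(h, t)] + np.einsum('h,t,rht->r', q_obj[h], q_obj[t], log_psi_tri)
            rel_new[(h, t)] = softmax(logits)
        q_obj, q_rel = softmax(obj_logits, axis=1), rel_new

    # Prediction: take the arg-max of each factor independently.
    return q_obj.argmax(axis=1), {e: q_rel[e].argmax() for e in edges}
```

Sampling the N_S scene graphs needed to estimate the second expectation in Eq. (8) then amounts to drawing each label from its categorical factor, e.g. np.random.choice(C, p=q_obj[o]) for object o.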
Similar prediction strategies have been widely used in previous works that employed mean-field methods [15, 40]. 5 Experiments 5.1 Experimental Setup Dataset. We use the Visual Genome (VG) dataset [16] (CC BY 4.0 License), a large-scale database with structured image concepts, for evaluation. We use the pre-processed VG from Xu et al. [48] (MIT License) which contains 108k images with 150 object categories and 50 relation types. Following previous works [53, 36, 37], we employ the original split with 70% images for training and 30% images for test, and 5k images randomly sampled from the training split are held out for validation. Evaluation tasks. We evaluate the proposed method on two widely studied tasks: • Relationship Retrieval (RR). This task examines model’s comprehensive capability of localizing and classifying objects and their relationships. It is further divided into three sub-tasks from easy to hard: (1) Predicate Classification (PredCls): predict the predicate/relation of all object pairs using the ground-truth bounding boxes and object labels; (2) Scene Graph Classification (SGCls): predict all object categories and relation types given the ground-truth bounding boxes; (3) Scene Graph Generation (SGGen): localize the objects in an image and simultaneously predict their categories and all relations, where an object is regarded as correctly detected if it has at least 0.5 IoU overlap with the ground-truth box. Since two evaluation protocols were typically used in the literature, we adopt two metrics in our experiments, i.e. computing the recall for each relation type and reporting the mean (mR@k) [21, 48, 53] and computing a single recall for all relation types (R@k) [4, 37, 34], where we use both 50 and 100 for k as in previous works [21, 48, 4]. Following Xu et al. [48], we apply the graph constraint that only one relation is obtained for each ordered object pair. Totally, we report model’s performance on 12 configurations. • Zero-Shot Relationship Retrieval (ZSRR). This task was first introduced by Lu et al. [21] to evaluate model’s ability of identifying the head-relation-tail triplets that have not been observed during training. For this task, we employ the metric Zero-Shot Recall@k (ZSR@k) and conduct evaluation under three settings, i.e. PredCls, SGCls and SGGen. Also, the configurations where k equals to 50 and 100 are both evaluated. Performance comparisons. We compare the proposed method with existing scene graph generation algorithms, including IMP+ [48] (a re-implementation of IMP by Zellers et al. [53]), VTransE [55], FREQ [53], Motifs [53], KERN [4], VCTree [36], VCTree-TDE [37], VCTree-EBM [34] and GBNet-β [52]. We adapt the results on the metric mR@k from original papers, and the results on the metric R@k and ZSR@k are evaluated by the released source code for some methods, i.e. VTransE, VCTree-TDE and VCTree-EBM on R@k, and KERN and GB-Net-β on ZSR@k. 5.2 Implementation Details Model details. Following previous works [48, 55, 53, 4, 52], we adopt the Faster R-CNN [28] with a VGG-16 [32] backbone as object detector, and the VGG-16 backbone is initialized with the weights of the model pre-trained on ImageNet [30]. We use the same detector configuration as Zellers et al. [53] for fair comparison. The dimension D of object and context space and the dimension K of relation space are both set as 4096, i.e. the output dimension of the fc7 layer of VGG-16. Our method is implemented under PyTorch [25], and the source code will be released for reproducibility. 
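For concreteness, the triplet ranking and Recall@k protocol described above can be sketched as follows. This simplified version scores each predicted triplet by its probability product and ignores the bounding-box IoU matching that the full SGGen evaluation additionally requires; the names and the toy example are ours. mR@k would compute this recall separately per relation type and average the results.

```python
import numpy as np

def recall_at_k(pred_triplets, scores, gt_triplets, k=50):
    """Fraction of ground-truth (head, relation, tail) triplets recovered among the top-k
    predictions, each prediction being scored by its probability product (see above)."""
    order = np.argsort(-np.asarray(scores))
    top_k = {pred_triplets[i] for i in order[:k]}
    hits = sum(1 for t in gt_triplets if t in top_k)
    return hits / max(len(gt_triplets), 1)

# Toy usage (labels only; the real SGGen metric additionally matches boxes by IoU >= 0.5).
preds  = [("man", "riding", "horse"), ("man", "wearing", "hat"), ("horse", "on", "grass")]
scores = [0.9, 0.4, 0.7]
gt     = [("man", "riding", "horse"), ("horse", "on", "grass")]
print(recall_at_k(preds, scores, gt, k=2))   # -> 1.0
```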
Training details. In our experiments, the object detector is first pre-trained by an SGD optimizer (batch size: 4, initial learning rate: 0.001, momentum: 0.9, weight decay: 5× 10−4) for 20 epochs, and the learning rate is multiplied by 0.1 after the 10th epoch. During maximum likelihood learning, we train the potential functions and fine-tune the object detector with another SGD optimizer (batch size: 4, potential function learning rate: 0.001, detector learning rate: 0.0001, momentum: 0.9, weight decay: 5×10−4) for 10 epochs, and the learning rate is multiplied by 0.1 after the 5th epoch. Without otherwise specified, the iteration number NT is set as 1 for training and 2 for test, and the per image sampling size NS is set as 3. These hyperparameters are selected by the grid search on validation set, and their sensitivities are analyzed in Sec. 6.2. An NVIDIA Tesla V100 GPU is used for training. Evaluation details. As stated in Sec. 4.3, we independently predict each object category and relation type by selecting the most likely one in the corresponding factor of variational distribution. The objects predicted as “background” are discarded along with the relations linking to them, and the relations predicted as “no relation” are also removed. To derive a ranked triplet list for RR and ZSRR tasks, we save the probability of each object and relation and compute the probability product within each head-relation-tail triplet, and all triplets are then ranked according to the values of their probability products in a descending order. We report model’s performance at the last epoch. 5.3 Experimental Results Relationship Retrieval (RR). In Tab. 1, we compare our method with existing approaches under 12 settings of the RR task. It can be observed that the proposed JM-SGG model achieves the best performance on 10 of 12 settings. In particular, compared to the state-of-the-art VCTree-TDE [37], a previous work dedicated to addressing unbiased scene graph prediction, JM-SGG performs better on 4 of 6 settings for unbiased prediction (i.e. the settings using metric mR@k). We think these superior results are mainly ascribed to the proposed joint scene graph modeling, in which the class imbalance among different relation types is mitigated by emphasizing the role of these sample-scarce relation types under the context of whole scene graphs. Zero-Shot Relationship Retrieval (ZSRR). Tab. 2 reports the performance of various approaches on 6 settings of the ZSRR task. The comparison with FREQ [53] is not included on this task, since this baseline method can only predict the relational triplets appearing in the training set. We can observe that the JM-SGG model outperforms existing methods on all 6 settings, and, especially, a 34% performance gain on ZSR@50 is achieved on the SGCls sub-task. These results illustrate the effectiveness of JM-SGG on discovering the novel relational triplets that have not been observed during learning. 6 Analysis 6.1 Ablation Study Ablation study for joint scene graph modeling. To better verify the effectiveness of joint scene graph modeling, we study a variant of JM-SGG which models the joint distribution of an individual relational triplet instead of the whole scene graph, denoted as JM-SGG (triplet) (see supplementary material for more details). In Tabs. 
1 and 2, JM-SGG clearly outperforms JM-SGG (triplet) on all metrics including the metric mR@k for unbiased prediction, which demonstrates the benefit of joint scene graph modeling on mitigating the class imbalance among different relation types. Ablation study for factor initialization. In this experiment, we compare the proposed initialization method (Eqs. (10) and (11)) with the random initialization which randomly initializes the categorical distribution for each factor q(yo) and q(r) in variational distribution q(G). Under these two initialization schemes, we respectively plot model’s performance after different iterations of factor update in Fig. 2(a). After four iterations, two schemes converge to the solutions with comparable performance, while our initialization approach shows a faster convergence (i.e. converge after two iterations). Ablation study for factor update. In this part, we study another configuration where the initialized factors are directly used for scene graph prediction without factor update, denoted as JM-SGG (w/o FU). In Tabs. 1 and 2, the superior performance of JM-SGG over JM-SGG (w/o FU) verifies the necessity of performing factor update to refine the initial label predictions. Ablation study on modeling head-relation-tail triplets. Previous works [55, 5] used TransE [2] to model the relation between two objects, while our method employs TransR [20] to model headrelation-tail triplets. To investigate the effectiveness of such a model design, we substitute TransR with TransE in our model, named as JM-SGG (TransE). Specifically, this model variant regards object and relation embeddings lie in the same space, and thus the projection matrix Mo is removed from two relation potential terms ψvisual and ψtriplet. In Tab. 3, it can be observed that TransR clearly outperforms TransE in the JM-SGG model, which demonstrates the importance of modeling objects and relations in two distinct embedding spaces. 6.2 Sensitivity Analysis Sensitivity of iteration number NT . In Fig. 2(b), we plot the performance of JM-SGG model under different iteration numbers. It can be observed that, for training, one iteration of factor update is enough to derive a decent variational distribution for the sampling purpose; for test, two iterations are required to converge to the optimal approximation of the model distribution. Sensitivity of per image sampling size NS . We vary the value of per image sampling size NS for learning and plot the corresponding model performance in Fig. 2(c). We can observe that through sampling at least three scene graphs from the variational distribution for each image, the second expectation term in Eq. (8) can be well estimated, which stably enhances model performance. 6.3 Visualization In Fig. 3, we visualize the typical scene graphs generated by JM-SGG model, in which the results with and without applying factor update are respectively shown. In these two examples, factor update succeeds in correcting some wrong relation labels (e.g. person has jean→ person wearing jean) by considering the dependency among different object and relation labels. More visualization results are provided in the supplementary material. 7 Conclusions and Future Work In this work, we propose the Joint Modeling for Scene Graph Generation (JM-SGG) model. This model is able to jointly capture the dependency among all object and relation labels in the scene graph, and its learning and inference can be efficiently performed using the mean-field variational inference algorithm. 
The extensive experiments on both relationship retrieval and zero-shot relationship retrieval tasks demonstrate the superiority of JM-SGG model. The current JM-SGG model cannot be directly used for visual reasoning, and its inference method makes a strong assumption of fully factorized variational distribution. Therefore, our future work will include exploring downstream visual reasoning tasks (e.g. visual question answering [1] and visual commonsense reasoning [54]) based on JM-SGG model and further improving our approximate inference algorithm (e.g. by defining more expressive variational distribution). 8 Broader Impacts This research project focuses on predicting objects and their relations in a visual scene by fully capturing the dependency among all objects and relations, and the predicted object and relation labels are further organized as a scene graph. Compared to the conventional visual recognition systems that only predict objects, our approach is able to simultaneously provide object and relationship prediction. This merit enables more in-depth scene understanding and can potentially benefit many real-world applications, like intelligent surveillance and autonomous driving. However, it cannot be denied that the annotation process for a scene graph generation model is labor-intensive. For example, 11.5 objects and 6.2 relations, on average, are required to be annotated for each image in the Visual Genome dataset, and the dataset contains 108k images in total. Therefore, how to train a scene graph generation model in a more efficient way by using less labeled data remains to be further explored. Acknowledgments and Disclosure of Funding This project was supported by the Natural Sciences and Engineering Research Council (NSERC) Discovery Grant, the Canada CIFAR AI Chair Program, collaboration grants between Microsoft Research and Mila, Samsung Electronics Co., Ldt., Amazon Faculty Research Award, Tencent AI Lab Rhino-Bird Gift Fund and a NRC Collaborative R&D Project (AI4D-CORE-08). This project was also partially funded by IVADO Fundamental Research Project grant PRF2019-3583139727. Bingbing Ni is supported by National Science Foundation of China (U20B2072, 61976137). The authors would like to thank Zhaocheng Zhu, Louis-Pascal Xhonneux and Zuobai Zhang for providing constructive advices during this project, and also appreciate the Student Innovation Center of SJTU for providing GPUs.
1. What is the main contribution of the paper in scene graph generation? 2. What are the strengths of the proposed approach, particularly in its embedding-based representation and unified modeling of potential functions? 3. Are there any concerns regarding the novelty of the paper's ideas, considering previous works such as Cong et al. (2018)? 4. How does the reviewer assess the clarity, quality, and structure of the paper's content? 5. What are the limitations of the experimental results, and how could they be improved?
Summary Of The Paper Review
Summary Of The Paper This paper presents a conditional random field (CRF) based joint modeling of objects and relations for the task of scene graph generation. Unlike previous relation discovery methods, this paper employs embedding-based relation feature representation, and thus enables an unified modeling of the unitary and clique potential functions. Using MCMC sampler and mean-field variational inference, the proposed JM-SGG can effectively update the relation triplets utilizing the contextual label dependency. The experiments show that JM-SGG has good performances on the relationship retrieval task, even under the zero-shot setting. Review Overall, this study is helpful to researchers in the fields of visual understanding, especially scene graph generation. It is happy to see how the probabilistic graph model is successfully integrated into scene graph generation. This paper is well written, clearly illustrated, and appropriately structured. The experimental results also validate the claims. But in my opinion, conditioned random field and mean-field variational inference are not at the first time applied into scene graph generation. For example, Cong et al. 2018, Scene Graph Generation via Conditional Random Fields has introduced some important techniques used in this paper. Even though this paper was just `published' in arxiv, it would be appreciated to discuss their technical connections, and how the proposed method outperforms. Another concern lies in the experiments. At first, why not in addition report the results by non-graph constraint? Each ordered object pair should contain more than one relation if the relation label cannot convincingly exclude each other. For example, <person, has, jean> is not wrong, even though <person, wearing, jean> may be a more precise description. It would be nice if the proposed JM-SGG can capture the most possible relations for a particular object pair.
NIPS
Title Joint Modeling of Visual Objects and Relations for Scene Graph Generation Abstract An in-depth scene understanding usually requires recognizing all the objects and their relations in an image, encoded as a scene graph. Most existing approaches for scene graph generation first independently recognize each object and then predict their relations independently. Though these approaches are very efficient, they ignore the dependency between different objects as well as between their relations. In this paper, we propose a principled approach to jointly predict the entire scene graph by fully capturing the dependency between different objects and between their relations. Specifically, we establish a unified conditional random field (CRF) to model the joint distribution of all the objects and their relations in a scene graph. We carefully design the potential functions to enable relational reasoning among different objects according to knowledge graph embedding methods. We further propose an efficient and effective algorithm for inference based on meanfield variational inference, in which we first provide a warm initialization by independently predicting the objects and their relations according to the current model, followed by a few iterations of relational reasoning. Experimental results on both the relationship retrieval and zero-shot relationship retrieval tasks prove the efficiency and efficacy of our proposed approach. 1 Introduction Modern object recognition [32, 10, 35] and detection [28, 27, 57] systems excel at the perception of visual objects, which has significantly boosted many industrial applications such as intelligent surveillance [18, 49] and autonomous driving [23, 38]. To have a deeper understanding of a visual scene, detecting and recognizing the objects in the scene is however insufficient. Instead, a comprehensive cognition of visual objects and their relationships is more desirable. Scene Graph Generation (SGG) [13] is a natural way to achieve this goal, in which a graph incorporating all objects and their relations within a scene image is derived to represent its semantic structure. Most previous works for SGG [48, 55, 53, 36, 4, 37] usually first independently predict different objects in a scene and then predict their relations independently. In practice, though such methods are very efficient, they ignore the dependency between different objects and between the relations of different object pairs. For example, a car could frequently co-occur with a street, and the relation eating could always appear along with the relation sitting on. Modeling such dependency could be very important for accurate scene graph prediction, especially for rare objects and relations. There are indeed some recent works [6, 5] along this direction. For example, Dai et al. [6] explored the triplet-level label dependency among a head object, a tail object and their relation. These methods have shown very promising results, while they only explored the limited dependency within a triplet. How to capture the full dependency between different objects and between their relations within a whole scene graph remains very challenging and unexplored. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). To attain such a goal, in this paper, we propose a principled approach called Joint Modeling for Scene Graph Generation (JM-SGG) to predict the whole scene graph by jointly capturing all the label dependency within it, i.e. 
the dependency between different objects and their relations and also the interdependency between them. Specifically, we model the joint distribution of all objects and relations in a scene graph with the conditional random field (CRF) framework [17]. To flexibly model the joint distribution, the key is to define effective potential functions on both nodes (i.e. objects) and edges (i.e. relations between objects). We define the potential functions on objects according to the object representations extracted by existing neural network based object detector. It is however nontrivial to design effective potential functions on edges, since these potential functions have to capture the relation between two objects in an edge and meanwhile allow relational reasoning among different edges, which models the dependency among the relations on various edges. Inspired by the existing work of knowledge graph embedding [20], which represents entities and relations in the same embedding space and performs relational reasoning in that space, we define our potential functions according to the knowledge graph embedding method and hence allow efficient relational reasoning between different object pairs in a scene graph. Such a fully expressive model also brings challenges to both learning and inference due to the complicated structures between different random variables in the CRF, i.e. objects and their relations. We therefore further propose an efficient and effective inference algorithm based on mean-field variational inference, which is able to assist the gradient estimation for learning and derive the most likely scene graph for test. Traditional mean-field methods usually suffer from the problem of slow convergence. Instead of starting from a randomly initialized variational distribution as in traditional mean-field methods, we propose to initialize the variational distribution, i.e. the marginal distribution of each object and each relation, with a factorized tweak of JM-SGG model, and then perform a few iterations of message passing induced by the fixed-point optimality condition of mean field to refine the variational distribution, which allows our approach to enjoy both good precision and efficiency. To summarize, in this paper, we make the following contributions: • We propose Joint Modeling for Scene Graph Generation (JM-SGG) which is a fully expressive model that can capture all the label dependency in a whole scene graph. • We propose a principled mean-field variational inference algorithm to enable the efficient learning and inference of JM-SGG model. • We verify the superior performance of our method on both relationship retrieval and zero-shot relationship retrieval tasks under various settings and metrics. Also, we illustrate the efficiency and efficacy of the proposed inference algorithm by thorough analytical experiments. 2 Related Work Scene Graph Generation (SGG). This task aims to extract structured representations from scene images [13], including the category of objects and their relationships. Previous works performed SGG by propagating the information from different local regions [48, 53, 50, 36], introducing external knowledge [9, 52], employing well-designed loss functions [56, 14, 34] and performing unbiased scene graph prediction [4, 19, 37]. Most of these methods predict each object and relation label independently based on an informative representation, which fails to capture the rich label dependency within a scene graph and is thus less expressive. 
Several former works [6, 5] attempted to model such label dependency within a single relational triplet but not on the whole scene graph. Improvements over existing methods. The proposed JM-SGG model is, to our best knowledge, the first approach that jointly models all the label dependency within a scene graph, including the one within object or relation labels and the one between these two kinds of labels. To attain this goal, a unified CRF is constructed for graphical modeling, and a mean-field variational inference algorithm is designed for efficient learning and inference, which show technical contributions. Conditional Random Fields (CRFs). CRFs are a class of probabilistic graphical modeling methods which perform structured prediction upon the observed data. CRF-based approaches have been broadly studied on various computer vision problems, including segmentation [17, 43, 51, 26], superresolution [39, 46], image denoising [29, 42] and scene graph generation [6, 5]. These former works utilizing CRF for SGG [6, 5] aimed to model the conditional distribution of a single triplet upon visual representations. By comparison, our approach models the conditional distribution of a whole scene graph upon the observed scene image, which is more expressive. 3 Problem Definition and Preliminary 3.1 Problem Definition This work focuses on extracting a scene graph, i.e. a structured representation of visual scene [13], from an image. Formally, we define a scene graph as G = (yO, R). yO denotes the category labels of all objects O in the image, and it holds that yo ∈ C for each object o ∈ O, where C stands for the set of all object categories, including the “background” category. R = {(oh, r, ot)} is the set of relational triplets/edges with r ∈ T as the relation type from head object oh to tail object ot (oh, ot ∈ O), where T represents all relation types, including the type of “no relation”. In this work, we aim at jointly modeling visual objects and visual relations as defined below: Joint Scene Graph Modeling. Given an image I , we aim to jointly predict object categories yO and the relationships R among all objects, which models the joint distribution of scene graphs, i.e. p(G|I) = p(yO, R|I), with comprehensively considering the dependency within yO and R and also the interdependency between them. 3.2 Conditional Random Fields Conditional Random Field (CRF) is a discriminative undirected graphical model. Given a set of observed variables x, it models the joint distribution of labels y based on a Markov network G that specifies the dependency among all variables: p(y|x) = 1 Z(x) ∏ C φC(xC ,yC), Z(x) = ∑ y ∏ C φC(xC ,yC). (1) where φC denotes the nonnegative potential function defined over the variables in clique C (a clique is a fully-connected local subgraph), and Z(x) is a normalization constant called partition function. 4 Model In this section, we introduce Joint Modeling for Scene Graph Generation (JM-SGG). Current methods solve the problem by independently predicting each object and relation label upon an informative representation, and thus the prediction of different labels cannot fully benefit each other. JM-SGG tackles the limitation by jointly modeling all the objects and relationships in a visual scene with a unified conditional random field, which enables the prediction of various object and relation labels to sufficiently interact with each other. 
Nevertheless, learning and inferring this complex CRF is nontrivial, and we thus propose to use maximum likelihood estimation combined with mean-field variational inference, yielding an efficient algorithm for learning and inference. Next, we elucidate the details of our approach. 4.1 Representation In the JM-SGG model, we organize the observed scene image I and all object and relation labels in the latent scene graph (i.e. yO and R) as the nodes in a unified conditional random field. Since the interactions of these nodes are either for a single object or for the relationship between an object pair, we decompose the graphical structure of whole network into two sets of components. (1) Object components: For an object o ∈ O, we consider the dependency of its category label on its visual representation and thus connect yo with I , as shown in Fig. 1(a). (2) Relation components: for a relational triplet (oh, r, ot) ∈ R, we consider the dependency of relation type r on the visual cues in image I , and we also model the interdependency among the object and relation labels in this triplet (i.e. yoh , yot and r), which forms a relation component as Fig. 1(b) shows. By combining all object and relation components, the CRF can capture the comprehensive label dependency within a scene graph. We now define the joint distribution of scene graphs upon the observed scene image as below: pΘ(G|I) = 1 ZΘ(I) fΘ(G, I), (2) fΘ(G, I) = ∏ o∈O φ(yo, I) ∏ (oh,r,ot)∈R ψ(r, yoh , yot , I), (3) where Θ summarizes the parameters of whole model, fΘ is an unnormalized likelihood function, ZΘ denotes the partition function, and φ and ψ are the potential functions defined on object and relation components, respectively. Next, we define these potential functions based on the extracted visual representations and the correlation among different labels. Visual representation extraction. Given a scene image I , we first utilize a standard object detector (e.g. Faster R-CNN [28] in our implementation) to obtain a set of bounding boxes which potentially contain the objects in the image, and object representations zO = {zo|o ∈ O} (zo ∈ RD) are then derived by RoIAlign [11]. We regard the union bounding box over a pair of objects as their context region and again use RoIAlign to get all context representations zR = {zht|(oh, r, ot) ∈ R} (zht ∈ RD). Here, D denotes the latent dimension of objects and contexts. By denoting the whole object detector as gθ, this feature extraction process can be represented as: (zO, zR) = gθ(I). Potential function definition. The potential function φ(yo, I) for object component models the dependency of object category yo on object representation zo by measuring their affinity. To conduct such a measure, we represent each object category with a prototype [33] (i.e. a learnable embedding vector) in the continuous space, which forms a prototype set C = {Ci ∈ RD|i ∈ C} for all object categories (D denotes the dimension of object space). On such basis, we define φ(yo, I) by computing the distance between object representation zo and the prototype of object category yo: φ(yo, I) = exp ( −d(Cyo , zo) ) , (4) where d is a distance measure (e.g. Euclidean distance in our practice). The potential function ψ(r, yoh , yot , I) for relation component models the dependency of relation type r on the relevant visual representations in image I , and it also models the interdependency among the object and relation labels of a triplet (i.e. yoh , yot and r). 
The potential function ψ(r, y_{o_h}, y_{o_t}, I) for the relation component models the dependency of relation type r on the relevant visual representations in image I, and it also models the interdependency among the object and relation labels of a triplet (i.e. y_{o_h}, y_{o_t} and r). Therefore, we factorize ψ(r, y_{o_h}, y_{o_t}, I) into a term ψ_visual(r, I) modeling the visual influence and another term ψ_triplet(r, y_{o_h}, y_{o_t}) modeling the label consistency within a triplet:

ψ(r, y_{o_h}, y_{o_t}, I) = ψ_visual(r, I) ψ_triplet(r, y_{o_h}, y_{o_t}).   (5)

Similarly, for measuring r in the continuous space, a prototype set T = {T_j ∈ R^K | j ∈ T} is constructed for all relation types (K denotes the dimension of the relation space). We consider two kinds of visual representations that affect the prediction of relation type r, i.e. the context representation z_{ht} and the head and tail object representations z_{o_h} and z_{o_t}. The influence of the context representation can be easily measured by projecting z_{ht} to the relation space and computing its distance to the prototype of relation type r. However, measuring the influence of the head and tail object representations and evaluating the label consistency within a triplet are nontrivial, since they require modeling the ternary correlation among the head object, the tail object and their relationship. Inspired by the idea of TransR [20], an effective knowledge graph embedding technique, we model such ternary correlation by treating each relation as a translation vector from the head object embedding to the tail object embedding in the same embedding space. Specifically, we first apply the translation vector T_r specified by relation r to the head object embedding, and then compute the distance between the translated embedding and the tail object embedding. Based on these considerations, we define ψ_visual(r, I) and ψ_triplet(r, y_{o_h}, y_{o_t}) as follows:

ψ_visual(r, I) = exp(−(d(T_r, M_c z_{ht}) + d(M_o z_{o_h} + T_r, M_o z_{o_t}))),   (6)

ψ_triplet(r, y_{o_h}, y_{o_t}) = exp(−d(M_o C_{y_{o_h}} + T_r, M_o C_{y_{o_t}})),   (7)

where M_c ∈ R^{K×D} denotes the projection matrix mapping from the context space to the relation space, and M_o ∈ R^{K×D} is the projection matrix mapping from the object space to the relation space. Next, we state how to learn the parameters of the JM-SGG model.

4.2 Learning

In the learning phase, we seek to learn the parameters C, T, M_c and M_o of the potential functions and the parameters θ of the object detector by maximum likelihood estimation, where Θ summarizes all these parameters. Specifically, we aim to maximize the expectation of the log-likelihood log p_Θ(G|I) with respect to the data distribution p_d, i.e. L(Θ) = E_{G∼p_d}[log p_Θ(G|I)], by performing gradient ascent. The gradient of the objective function L(Θ) with respect to Θ can be computed as below:

∇_Θ L(Θ) = E_{G∼p_d}[∇_Θ log f_Θ(G, I)] − E_{G∼p_Θ}[∇_Θ log f_Θ(G, I)],   (8)

where p_Θ is the model distribution that approximates p_d (i.e. the conditional distribution p_Θ(G|I) defined by the JM-SGG model). This formula has been broadly adopted in the literature [12, 3, 7], and we provide the proof in the supplementary material. In practice, we estimate the first expectation in Eq. (8) with the ground-truth scene graphs in a mini-batch. Estimating the second expectation in Eq. (8) requires sampling scene graphs from the model distribution, which is nontrivial due to the intractable partition function Z_Θ(I) that sums over all possible scene graphs. One solution is to run a Markov chain Monte Carlo (MCMC) sampler, but its computational cost is high, and we therefore use mean-field variational inference for more efficient sampling (the detailed scheme is stated in Sec. 4.3).
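The sketch below shows one way the two expectations in Eq. (8) could be turned into a loss for a stochastic-gradient step; log_f, the graph objects and the sampling routine are placeholders, and the model samples are assumed to come from the variational distribution q(G) introduced in Sec. 4.3.

```python
import torch

def mle_loss(log_f, data_graphs, sampled_graphs):
    """Negative of the objective whose gradient is Eq. (8); minimizing it ascends L(Theta).

    log_f          -- callable returning log f_Theta(G, I), the sum of log-potentials of a scene graph
    data_graphs    -- ground-truth scene graphs of the mini-batch (first expectation)
    sampled_graphs -- scene graphs drawn from q(G) (Monte Carlo estimate of the second expectation)
    """
    positive = torch.stack([log_f(G) for G in data_graphs]).mean()
    negative = torch.stack([log_f(G) for G in sampled_graphs]).mean()
    return -(positive - negative)   # autograd on this loss gives the negative of the gradient in Eq. (8)
```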
Instead of fixing the parameters of a pre-trained object detector during learning as in former works [53, 36, 37, 34], we fine-tune the parameters of the object detector during maximum likelihood learning. In this way, the detector can extract more precise object and context representations by learning the likelihoods of whole scene graphs. In addition, we apply a traditional bounding box regression constraint L_reg(Θ) [28] to the detector to preserve its localization capability, and these two learning objectives share the same weight. Next, we introduce the inference scheme of the JM-SGG model.

4.3 Inference

The inference phase aims to compute the conditional distribution p_Θ(G|I) defined by the JM-SGG model and also to sample from it. Exact inference is infeasible due to the complex structure among the latent variables y_O and R of the scene graph as well as the intractable partition function. Therefore, we approximate p_Θ(G|I) with a variational distribution q_Θ(G) via the mean-field approximation [31, 24]:

q_Θ(G) = ∏_{o∈O} q_Θ(y_o) ∏_{(o_h,r,o_t)∈R} q_Θ(r),   (9)

where each factor q_Θ(y_o) and q_Θ(r) defines a categorical distribution, i.e. ∑_{y_o∈C} q_Θ(y_o) = 1 and ∑_{r∈T} q_Θ(r) = 1. In this variational distribution, all object and relation labels are assumed to be independent, and it shares the same set of parameters Θ with p_Θ(G|I), which greatly reduces the number of parameters needed for variational inference. For brevity, we omit Θ in the following distribution notations, e.g. simplifying q_Θ(G) as q(G).

In general, we seek a variational distribution that satisfies the factorization in Eq. (9) and maximizes the variational lower bound L(q) = E_{q(G)}[log p(G, I) − log q(G)] (equivalent to minimizing the KL divergence between q(G) and p(G|I)). Typically, this is achieved by optimizing the variational distribution with fixed-point iterations [44, 45], which can however be inefficient, especially for images with many objects. We thus design an inference algorithm that appropriately initializes each factor in q(G) and then iteratively updates all factors. Intuitively, factor initialization is similar to existing SGG methods, where object and relation labels are predicted independently; factor update can be viewed as a refinement procedure, which makes the predictions from the initialization step more consistent. With factor initialization and factor update, the proposed inference method combines the advantages of existing methods and CRFs, i.e. efficiency and consistency.

Factor initialization. For initialization, we neglect the interdependency among different object and relation labels, i.e. we omit the potential function ψ_triplet(r, y_{o_h}, y_{o_t}) in p(G|I), yielding a simplified model distribution p̂(G|I). In this way, we can easily derive the following factors for initialization, which make q(G) = p̂(G|I):

q(y_o) = φ(y_o, I) / ∑_{y'_o∈C} φ(y'_o, I)   ∀o ∈ O,   (10)

q(r) = ψ_visual(r, I) / ∑_{r'∈T} ψ_visual(r', I)   ∀(o_h, r, o_t) ∈ R.   (11)

See the supplementary material for the proof. Intuitively, we initialize each factor by only considering its dependency on the visual representations; on this basis, label interdependency is then taken into account to refine each factor. In such an initialization approach, the computation of the different factors is independent of each other and can thus be done efficiently in a parallel manner. In Sec. 6.1, we empirically illustrate the better convergence of this initialization scheme compared to the random initialization commonly employed in previous works [45, 22].
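In implementation terms, Eqs. (10) and (11) amount to a softmax over log-potentials, applied to every object and every candidate edge in parallel. A small hedged sketch with illustrative sizes:

```python
import torch

num_objects, num_classes = 8, 151      # 150 object categories + "background" (illustrative)
num_pairs, num_relations = 56, 51      # 50 relation types + "no relation"; 56 = 8 * 7 ordered pairs

# Log-potentials computed from visual features only (random stand-ins here):
log_phi = torch.randn(num_objects, num_classes)          # log phi(y_o, I) for every object
log_psi_visual = torch.randn(num_pairs, num_relations)   # log psi_visual(r, I) for every edge

# Eqs. (10) and (11): normalize each potential over its own label set.
q_y = torch.softmax(log_phi, dim=-1)          # q(y_o), each row sums to 1
q_r = torch.softmax(log_psi_visual, dim=-1)   # q(r), each row sums to 1
```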
Factor update. Based on these initialized factors, we perform the update by taking into account the interdependency among the object and relation labels in the scene graph, i.e. by using the full expression of p(G|I) with the potential function ψ_triplet(r, y_{o_h}, y_{o_t}). In the mean-field formulation of Eq. (9), if we update one factor q(y_o) (or q(r)) with all other factors fixed, its optimum q*(y_o) (or q*(r)) which maximizes the variational lower bound L(q) is given by the following expressions:

log q*(y_o) = log φ(y_o, I) + ∑_{(o,r,o_t)∈R} ∑_{y_{o_t}∈C} ∑_{r∈T} q(y_{o_t}) q(r) log ψ_triplet(r, y_o, y_{o_t})
  + ∑_{(o_h,r,o)∈R} ∑_{y_{o_h}∈C} ∑_{r∈T} q(y_{o_h}) q(r) log ψ_triplet(r, y_{o_h}, y_o) + const   ∀o ∈ O,   (12)

log q*(r) = log ψ_visual(r, I) + ∑_{y_{o_h}∈C} ∑_{y_{o_t}∈C} q(y_{o_h}) q(y_{o_t}) log ψ_triplet(r, y_{o_h}, y_{o_t}) + const   ∀(o_h, r, o_t) ∈ R.   (13)

The proof is provided in the supplementary material. During computation, we omit the additive constants above, since they are naturally eliminated when computing the normalized q*(y_o) and q*(r), i.e. when taking the exponential of both sides and normalizing q*(y_o) over C and q*(r) over T. Taking a close look at Eqs. (12) and (13), we can see that each factor is updated by aggregating information from its neighboring factors (e.g. from the factors q(y_{o_h}) and q(y_{o_t}) of the head and tail objects to the factor q(r) of their relation), which can be efficiently implemented by matrix multiplication as in message passing neural networks [8]. In practice, we simultaneously update all factors in a single iteration based on the states of the factors in the last iteration, i.e. performing asynchronous message passing in mean field [41, 47], which forms an efficient iterative update scheme. We analyze the efficiency and efficacy of this update scheme in Secs. 6.1 and 6.2.

Algorithm 1 Inference algorithm of JM-SGG.
Input: Scene image I, iteration number N_T.
Output: Factors {q(y_o)}, {q(r)} of q(G).
Initialize {q(y_o)}, {q(r)} by Eqs. (10), (11).
for t = 1 to N_T do
    Derive {log q*(y_o)}, {log q*(r)} by Eqs. (12), (13).
    Update all factors: {q(y_o)} ← {softmax(log q*(y_o))}, {q(r)} ← {softmax(log q*(r))}.
end for

Inference algorithm. The whole inference procedure is summarized in Alg. 1. Given the input scene image I, we first initialize each factor in q(G) by Eqs. (10) and (11). After that, we perform factor update for N_T iterations. In each iteration, the log-optimum of each factor is computed from the factors of the last iteration by Eqs. (12) and (13), and the normalized factors are then derived by softmax for the update.

Sampling strategy. After this iterative inference, we obtain a factorized variational distribution q(G) which well approximates the conditional distribution p(G|I) defined by the JM-SGG model. Now, instead of sampling from the intractable model distribution p(G|I), we can easily sample scene graphs from q(G) by independently drawing each object/relation label from the corresponding factor (i.e. q(y_o) or q(r)), where each factor is a categorical distribution. In practice, we sample N_S scene graphs from q(G) for each image in a mini-batch, yielding N_S N_B samples in total for estimating the second expectation term in ∇_Θ L(Θ) (Eq. (8)), where N_B denotes the batch size.

Prediction strategy. At test time, we need to infer the scene graph with the highest probability under p(G|I), and this can also be done efficiently using the variational distribution q(G). Specifically, based on the factorized definition of q(G), we can simply select the object category (or relation type) with the highest probability in each factor q(y_o) (or q(r)), and the selected object and relation labels together form a scene graph that well approximates the most likely scene graph with respect to the model distribution p(G|I). Similar prediction strategies have been widely used in previous works that employed mean-field methods [15, 40].
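To summarize Sec. 4.3 operationally, the sketch below is one possible unbatched rendering of Alg. 1 together with Eqs. (12) and (13); it assumes a precomputed label-compatibility tensor for log ψ_triplet shared by all edges, and all names and shapes are illustrative rather than the authors' released code.

```python
import torch

def mean_field_inference(log_phi, log_psi_visual, log_psi_triplet, edges, num_iters=2):
    """Sketch of Alg. 1.

    log_phi         -- (num_objects, num_classes)                  log phi(y_o, I)
    log_psi_visual  -- (num_edges, num_relations)                  log psi_visual(r, I) per edge
    log_psi_triplet -- (num_relations, num_classes, num_classes)   log psi_triplet(r, y_h, y_t) from Eq. (7)
    edges           -- list of (head_index, tail_index) object pairs
    """
    q_y = torch.softmax(log_phi, dim=-1)           # factor initialization, Eq. (10)
    q_r = torch.softmax(log_psi_visual, dim=-1)    # factor initialization, Eq. (11)
    for _ in range(num_iters):
        new_log_q_y = log_phi.clone()
        new_log_q_r = log_psi_visual.clone()
        for e, (h, t) in enumerate(edges):
            # Eq. (13): expected log psi_triplet under q(y_h) q(y_t), one value per relation type.
            new_log_q_r[e] += torch.einsum('a,b,rab->r', q_y[h], q_y[t], log_psi_triplet)
            # Eq. (12): messages from this edge to its head and tail objects.
            new_log_q_y[h] += torch.einsum('r,b,rab->a', q_r[e], q_y[t], log_psi_triplet)
            new_log_q_y[t] += torch.einsum('r,a,rab->b', q_r[e], q_y[h], log_psi_triplet)
        # All factors are updated simultaneously from the previous iteration's states.
        q_y = torch.softmax(new_log_q_y, dim=-1)
        q_r = torch.softmax(new_log_q_r, dim=-1)
    return q_y, q_r
```

In practice the loop over edges would be replaced by the batched matrix multiplications mentioned above, and the returned factors serve both the sampling strategy (draw labels from each categorical factor) and the prediction strategy (take the argmax of each factor).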
5 Experiments

5.1 Experimental Setup

Dataset. We use the Visual Genome (VG) dataset [16] (CC BY 4.0 License), a large-scale database with structured image concepts, for evaluation. We use the pre-processed VG from Xu et al. [48] (MIT License), which contains 108k images with 150 object categories and 50 relation types. Following previous works [53, 36, 37], we employ the original split with 70% of the images for training and 30% for test, and 5k images randomly sampled from the training split are held out for validation.

Evaluation tasks. We evaluate the proposed method on two widely studied tasks:

• Relationship Retrieval (RR). This task examines the model's comprehensive capability of localizing and classifying objects and their relationships. It is further divided into three sub-tasks from easy to hard: (1) Predicate Classification (PredCls): predict the predicate/relation of all object pairs using the ground-truth bounding boxes and object labels; (2) Scene Graph Classification (SGCls): predict all object categories and relation types given the ground-truth bounding boxes; (3) Scene Graph Generation (SGGen): localize the objects in an image and simultaneously predict their categories and all relations, where an object is regarded as correctly detected if it has at least 0.5 IoU overlap with the ground-truth box. Since two evaluation protocols are typically used in the literature, we adopt two metrics in our experiments, i.e. computing the recall for each relation type and reporting the mean (mR@k) [21, 48, 53], and computing a single recall over all relation types (R@k) [4, 37, 34], where we use both 50 and 100 for k as in previous works [21, 48, 4]. Following Xu et al. [48], we apply the graph constraint that only one relation is obtained for each ordered object pair. In total, we report the model's performance on 12 configurations.

• Zero-Shot Relationship Retrieval (ZSRR). This task was first introduced by Lu et al. [21] to evaluate the model's ability to identify head-relation-tail triplets that have not been observed during training. For this task, we employ the metric Zero-Shot Recall@k (ZSR@k) and conduct the evaluation under three settings, i.e. PredCls, SGCls and SGGen. The configurations where k equals 50 and 100 are both evaluated.

Performance comparisons. We compare the proposed method with existing scene graph generation algorithms, including IMP+ [48] (a re-implementation of IMP by Zellers et al. [53]), VTransE [55], FREQ [53], Motifs [53], KERN [4], VCTree [36], VCTree-TDE [37], VCTree-EBM [34] and GB-Net-β [52]. We take the results on the metric mR@k from the original papers, and the results on the metrics R@k and ZSR@k are evaluated with the released source code for some methods, i.e. VTransE, VCTree-TDE and VCTree-EBM on R@k, and KERN and GB-Net-β on ZSR@k.

5.2 Implementation Details

Model details. Following previous works [48, 55, 53, 4, 52], we adopt Faster R-CNN [28] with a VGG-16 [32] backbone as the object detector, and the VGG-16 backbone is initialized with the weights of the model pre-trained on ImageNet [30]. We use the same detector configuration as Zellers et al. [53] for a fair comparison. The dimension D of the object and context space and the dimension K of the relation space are both set to 4096, i.e. the output dimension of the fc7 layer of VGG-16. Our method is implemented in PyTorch [25], and the source code will be released for reproducibility.
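For reference, the difference between the two RR metrics above can be summarized by the following simplified per-image sketch; it matches labeled triplets exactly, omits box localization and the graph constraint, and ignores that the benchmark aggregates per-type recalls over the whole test set rather than per image.

```python
from collections import defaultdict

def recall_at_k(pred, gt, k=50):
    """R@k: fraction of ground-truth triplets recovered by the top-k ranked predictions.
    Triplets are (head_label, relation, tail_label) tuples; pred is sorted by score."""
    hits = set(pred[:k]) & set(gt)
    return len(hits) / max(len(gt), 1)

def mean_recall_at_k(pred, gt, k=50):
    """mR@k: recall computed separately for each relation type and then averaged over types,
    so that frequent relation types cannot dominate the score."""
    gt_by_rel = defaultdict(list)
    for triplet in gt:
        gt_by_rel[triplet[1]].append(triplet)   # group ground truth by relation type
    per_type = [recall_at_k(pred, trips, k) for trips in gt_by_rel.values()]
    return sum(per_type) / max(len(per_type), 1)
```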
Training details. In our experiments, the object detector is first pre-trained by an SGD optimizer (batch size: 4, initial learning rate: 0.001, momentum: 0.9, weight decay: 5 × 10^−4) for 20 epochs, and the learning rate is multiplied by 0.1 after the 10th epoch. During maximum likelihood learning, we train the potential functions and fine-tune the object detector with another SGD optimizer (batch size: 4, potential function learning rate: 0.001, detector learning rate: 0.0001, momentum: 0.9, weight decay: 5 × 10^−4) for 10 epochs, and the learning rate is multiplied by 0.1 after the 5th epoch. Unless otherwise specified, the iteration number N_T is set to 1 for training and 2 for test, and the per-image sampling size N_S is set to 3. These hyperparameters are selected by grid search on the validation set, and their sensitivities are analyzed in Sec. 6.2. An NVIDIA Tesla V100 GPU is used for training.

Evaluation details. As stated in Sec. 4.3, we independently predict each object category and relation type by selecting the most likely one in the corresponding factor of the variational distribution. The objects predicted as “background” are discarded along with the relations linking to them, and the relations predicted as “no relation” are also removed. To derive a ranked triplet list for the RR and ZSRR tasks, we save the probability of each object and relation, compute the probability product within each head-relation-tail triplet, and rank all triplets by their probability products in descending order. We report the model's performance at the last epoch.

5.3 Experimental Results

Relationship Retrieval (RR). In Tab. 1, we compare our method with existing approaches under the 12 settings of the RR task. It can be observed that the proposed JM-SGG model achieves the best performance on 10 of 12 settings. In particular, compared to the state-of-the-art VCTree-TDE [37], a previous work dedicated to unbiased scene graph prediction, JM-SGG performs better on 4 of 6 settings for unbiased prediction (i.e. the settings using the metric mR@k). We think these superior results are mainly ascribed to the proposed joint scene graph modeling, in which the class imbalance among different relation types is mitigated by emphasizing the role of sample-scarce relation types in the context of whole scene graphs.

Zero-Shot Relationship Retrieval (ZSRR). Tab. 2 reports the performance of various approaches on the 6 settings of the ZSRR task. The comparison with FREQ [53] is not included for this task, since this baseline can only predict the relational triplets appearing in the training set. We can observe that the JM-SGG model outperforms existing methods on all 6 settings; in particular, a 34% performance gain on ZSR@50 is achieved on the SGCls sub-task. These results illustrate the effectiveness of JM-SGG at discovering novel relational triplets that have not been observed during learning.

6 Analysis

6.1 Ablation Study

Ablation study for joint scene graph modeling. To better verify the effectiveness of joint scene graph modeling, we study a variant of JM-SGG which models the joint distribution of an individual relational triplet instead of the whole scene graph, denoted as JM-SGG (triplet) (see the supplementary material for more details).
In Tabs. 1 and 2, JM-SGG clearly outperforms JM-SGG (triplet) on all metrics, including the metric mR@k for unbiased prediction, which demonstrates the benefit of joint scene graph modeling in mitigating the class imbalance among different relation types.

Ablation study for factor initialization. In this experiment, we compare the proposed initialization method (Eqs. (10) and (11)) with random initialization, which randomly initializes the categorical distribution of each factor q(y_o) and q(r) in the variational distribution q(G). Under these two initialization schemes, we plot the model's performance after different numbers of factor-update iterations in Fig. 2(a). After four iterations, the two schemes converge to solutions with comparable performance, while our initialization approach converges faster (i.e. after two iterations).

Ablation study for factor update. In this part, we study another configuration where the initialized factors are directly used for scene graph prediction without factor update, denoted as JM-SGG (w/o FU). In Tabs. 1 and 2, the superior performance of JM-SGG over JM-SGG (w/o FU) verifies the necessity of performing factor update to refine the initial label predictions.

Ablation study on modeling head-relation-tail triplets. Previous works [55, 5] used TransE [2] to model the relation between two objects, while our method employs TransR [20] to model head-relation-tail triplets. To investigate the effectiveness of this design, we substitute TransR with TransE in our model, denoted as JM-SGG (TransE). Specifically, this model variant assumes that object and relation embeddings lie in the same space, and thus the projection matrix M_o is removed from the two relation potential terms ψ_visual and ψ_triplet. In Tab. 3, it can be observed that TransR clearly outperforms TransE in the JM-SGG model, which demonstrates the importance of modeling objects and relations in two distinct embedding spaces.

6.2 Sensitivity Analysis

Sensitivity of the iteration number N_T. In Fig. 2(b), we plot the performance of the JM-SGG model under different iteration numbers. It can be observed that, for training, one iteration of factor update is enough to derive a decent variational distribution for sampling; for test, two iterations are required to converge to the optimal approximation of the model distribution.

Sensitivity of the per-image sampling size N_S. We vary the per-image sampling size N_S for learning and plot the corresponding model performance in Fig. 2(c). We can observe that by sampling at least three scene graphs from the variational distribution for each image, the second expectation term in Eq. (8) can be well estimated, which stably enhances model performance.

6.3 Visualization

In Fig. 3, we visualize typical scene graphs generated by the JM-SGG model, showing the results with and without factor update. In these two examples, factor update succeeds in correcting some wrong relation labels (e.g. person has jean → person wearing jean) by considering the dependency among different object and relation labels. More visualization results are provided in the supplementary material.

7 Conclusions and Future Work

In this work, we propose the Joint Modeling for Scene Graph Generation (JM-SGG) model. This model jointly captures the dependency among all object and relation labels in a scene graph, and its learning and inference can be performed efficiently using the mean-field variational inference algorithm.
Extensive experiments on both the relationship retrieval and zero-shot relationship retrieval tasks demonstrate the superiority of the JM-SGG model. The current JM-SGG model cannot be directly used for visual reasoning, and its inference method makes the strong assumption of a fully factorized variational distribution. Therefore, our future work will include exploring downstream visual reasoning tasks (e.g. visual question answering [1] and visual commonsense reasoning [54]) based on the JM-SGG model and further improving our approximate inference algorithm (e.g. by defining a more expressive variational distribution).

8 Broader Impacts

This research project focuses on predicting objects and their relations in a visual scene by fully capturing the dependency among all objects and relations, and the predicted object and relation labels are further organized as a scene graph. Compared to conventional visual recognition systems that only predict objects, our approach is able to provide object and relationship prediction simultaneously. This merit enables more in-depth scene understanding and can potentially benefit many real-world applications, like intelligent surveillance and autonomous driving. However, it cannot be denied that the annotation process for a scene graph generation model is labor-intensive. For example, 11.5 objects and 6.2 relations, on average, need to be annotated for each image in the Visual Genome dataset, and the dataset contains 108k images in total. Therefore, how to train a scene graph generation model more efficiently with less labeled data remains to be further explored.

Acknowledgments and Disclosure of Funding

This project was supported by the Natural Sciences and Engineering Research Council (NSERC) Discovery Grant, the Canada CIFAR AI Chair Program, collaboration grants between Microsoft Research and Mila, Samsung Electronics Co., Ltd., an Amazon Faculty Research Award, the Tencent AI Lab Rhino-Bird Gift Fund and an NRC Collaborative R&D Project (AI4D-CORE-08). This project was also partially funded by IVADO Fundamental Research Project grant PRF2019-3583139727. Bingbing Ni is supported by the National Science Foundation of China (U20B2072, 61976137). The authors would like to thank Zhaocheng Zhu, Louis-Pascal Xhonneux and Zuobai Zhang for providing constructive advice during this project, and also thank the Student Innovation Center of SJTU for providing GPUs.
1. What is the focus of the paper in terms of scene understanding?
2. Can you describe the proposed model for joint prediction of the entire scene graph?
3. How does the model capture dependency among different objects and relations?
4. What are some of the key techniques used in the proposed model, such as CRF and knowledge graph embedding?
5. How were the learnable parameters trained, and what was the initialization method used for the partition function?
6. Can you summarize the findings from the experiments conducted on the Visual Genome dataset and the ablation study?
7. How does the paper contribute to the scene understanding task, and what are its strengths and weaknesses?
Summary Of The Paper Review
Summary Of The Paper
This paper proposes joint prediction of the entire scene graph that fully captures the dependency among different objects and relations using a unified conditional random field. The proposed model can be summarized as follows: it starts from joint modeling of object and relation component labels and visual features. Joint modeling uses a CRF to capture the comprehensive dependency among objects via an unnormalized likelihood and a partition function. In addition, the potential functions used for the object and relation components compute affinity through distances to learnable prototypes. Finally, knowledge graph embedding techniques are used to project different embeddings into a common space (e.g. context space to relation space, object space to relation space). All the learnable parameters are trained by maximizing the log-likelihood, and mean-field variational inference is used to sample efficiently despite the intractable partition function. Mean-field inference is initialized with independently predicted labels and refined by factor updates over the triplet components. The authors have conducted extensive experiments on the Visual Genome dataset and a brief ablation study that provides good insight into the method.

Review
The main idea of the paper, joint modeling of objects and relations using a CRF and a mean-field variational algorithm, is novel and significant for the SGG task. The paper is well written, although the introduction and the description of the long list of components hinder its readability. The combination of CRF and knowledge graph techniques will encourage exploration of the commonality between scene graphs and knowledge graphs, making this a relevant work in the area.
Several former works [6, 5] attempted to model such label dependency within a single relational triplet but not on the whole scene graph. Improvements over existing methods. The proposed JM-SGG model is, to our best knowledge, the first approach that jointly models all the label dependency within a scene graph, including the one within object or relation labels and the one between these two kinds of labels. To attain this goal, a unified CRF is constructed for graphical modeling, and a mean-field variational inference algorithm is designed for efficient learning and inference, which show technical contributions. Conditional Random Fields (CRFs). CRFs are a class of probabilistic graphical modeling methods which perform structured prediction upon the observed data. CRF-based approaches have been broadly studied on various computer vision problems, including segmentation [17, 43, 51, 26], superresolution [39, 46], image denoising [29, 42] and scene graph generation [6, 5]. These former works utilizing CRF for SGG [6, 5] aimed to model the conditional distribution of a single triplet upon visual representations. By comparison, our approach models the conditional distribution of a whole scene graph upon the observed scene image, which is more expressive. 3 Problem Definition and Preliminary 3.1 Problem Definition This work focuses on extracting a scene graph, i.e. a structured representation of visual scene [13], from an image. Formally, we define a scene graph as G = (yO, R). yO denotes the category labels of all objects O in the image, and it holds that yo ∈ C for each object o ∈ O, where C stands for the set of all object categories, including the “background” category. R = {(oh, r, ot)} is the set of relational triplets/edges with r ∈ T as the relation type from head object oh to tail object ot (oh, ot ∈ O), where T represents all relation types, including the type of “no relation”. In this work, we aim at jointly modeling visual objects and visual relations as defined below: Joint Scene Graph Modeling. Given an image I , we aim to jointly predict object categories yO and the relationships R among all objects, which models the joint distribution of scene graphs, i.e. p(G|I) = p(yO, R|I), with comprehensively considering the dependency within yO and R and also the interdependency between them. 3.2 Conditional Random Fields Conditional Random Field (CRF) is a discriminative undirected graphical model. Given a set of observed variables x, it models the joint distribution of labels y based on a Markov network G that specifies the dependency among all variables: p(y|x) = 1 Z(x) ∏ C φC(xC ,yC), Z(x) = ∑ y ∏ C φC(xC ,yC). (1) where φC denotes the nonnegative potential function defined over the variables in clique C (a clique is a fully-connected local subgraph), and Z(x) is a normalization constant called partition function. 4 Model In this section, we introduce Joint Modeling for Scene Graph Generation (JM-SGG). Current methods solve the problem by independently predicting each object and relation label upon an informative representation, and thus the prediction of different labels cannot fully benefit each other. JM-SGG tackles the limitation by jointly modeling all the objects and relationships in a visual scene with a unified conditional random field, which enables the prediction of various object and relation labels to sufficiently interact with each other. 
Nevertheless, learning and inferring this complex CRF is nontrivial, and we thus propose to use maximum likelihood estimation combined with mean-field variational inference, yielding an efficient algorithm for learning and inference. Next, we elucidate the details of our approach. 4.1 Representation In the JM-SGG model, we organize the observed scene image I and all object and relation labels in the latent scene graph (i.e. yO and R) as the nodes in a unified conditional random field. Since the interactions of these nodes are either for a single object or for the relationship between an object pair, we decompose the graphical structure of whole network into two sets of components. (1) Object components: For an object o ∈ O, we consider the dependency of its category label on its visual representation and thus connect yo with I , as shown in Fig. 1(a). (2) Relation components: for a relational triplet (oh, r, ot) ∈ R, we consider the dependency of relation type r on the visual cues in image I , and we also model the interdependency among the object and relation labels in this triplet (i.e. yoh , yot and r), which forms a relation component as Fig. 1(b) shows. By combining all object and relation components, the CRF can capture the comprehensive label dependency within a scene graph. We now define the joint distribution of scene graphs upon the observed scene image as below: pΘ(G|I) = 1 ZΘ(I) fΘ(G, I), (2) fΘ(G, I) = ∏ o∈O φ(yo, I) ∏ (oh,r,ot)∈R ψ(r, yoh , yot , I), (3) where Θ summarizes the parameters of whole model, fΘ is an unnormalized likelihood function, ZΘ denotes the partition function, and φ and ψ are the potential functions defined on object and relation components, respectively. Next, we define these potential functions based on the extracted visual representations and the correlation among different labels. Visual representation extraction. Given a scene image I , we first utilize a standard object detector (e.g. Faster R-CNN [28] in our implementation) to obtain a set of bounding boxes which potentially contain the objects in the image, and object representations zO = {zo|o ∈ O} (zo ∈ RD) are then derived by RoIAlign [11]. We regard the union bounding box over a pair of objects as their context region and again use RoIAlign to get all context representations zR = {zht|(oh, r, ot) ∈ R} (zht ∈ RD). Here, D denotes the latent dimension of objects and contexts. By denoting the whole object detector as gθ, this feature extraction process can be represented as: (zO, zR) = gθ(I). Potential function definition. The potential function φ(yo, I) for object component models the dependency of object category yo on object representation zo by measuring their affinity. To conduct such a measure, we represent each object category with a prototype [33] (i.e. a learnable embedding vector) in the continuous space, which forms a prototype set C = {Ci ∈ RD|i ∈ C} for all object categories (D denotes the dimension of object space). On such basis, we define φ(yo, I) by computing the distance between object representation zo and the prototype of object category yo: φ(yo, I) = exp ( −d(Cyo , zo) ) , (4) where d is a distance measure (e.g. Euclidean distance in our practice). The potential function ψ(r, yoh , yot , I) for relation component models the dependency of relation type r on the relevant visual representations in image I , and it also models the interdependency among the object and relation labels of a triplet (i.e. yoh , yot and r). 
Therefore, we can factorize ψ(r, yoh , yot , I) into a term ψvisual(r, I) for modeling visual influence and another term ψtriplet(r, yoh , yot) for modeling the label consistency within a triplet: ψ(r, yoh , yot , I) = ψvisual(r, I)ψtriplet(r, yoh , yot). (5) Similarly, for measuring r in the continuous space, a prototype set T = {Tj ∈ RK |j ∈ T } is constructed for all relation types (K denotes the dimension of relation space). We consider two kinds of visual representations that affect the prediction of relation type r, i.e. the context representation zht and the head and tail object representations zoh and zot . The influence of context representation can be easily measured by projecting context representation zht to the relation space and computing its distance to the prototype of relation type r. However, measuring the influence of head and tail object representations and evaluating the label consistency within a triplet are nontrivial, which require to model the ternary correlation among head object, tail object and their relationship. Inspired by the idea of TransR [20], an effective knowledge graph embedding technique, we model such ternary correlation by treating each relation as a translation vector from head object embedding to tail object embedding in the same embedding space. Specifically, we first apply the translation vector Tr specified by relation r to head object embedding, and then compute the distance between the translated embedding and tail object embedding. Based on these thoughts, we define ψvisual(r, I) and ψtriplet(r, yoh , yot) as follows: ψvisual(r, I) = exp ( − ( d(Tr,Mczht) + d(Mozoh + Tr,Mozot) )) , (6) ψtriplet(r, yoh , yot) = exp ( −d(MoCyoh + Tr,MoCyot ) ) , (7) where Mc ∈ RK×D denotes the projection matrix mapping from context space to relation space, and Mo ∈ RK×D is the projection matrix mapping from object space to relation space. Next, we state how to learn the parameters in JM-SGG model. 4.2 Learning In the learning phase, we seek to learn the parameters C, T, Mc and Mo of potential function and the parameters θ of object detector by maximum likelihood estimation, where Θ summarizes all these parameters. Specifically, we aim to maximize the expectation of log-likelihood function log pΘ(G|I) with respect to the data distribution pd, i.e. L(Θ) = EG∼pd [ log pΘ(G|I) ] , by performing gradient ascent. The gradient of the objective function L(Θ) with respect to Θ can be computed as below: ∇ΘL(Θ) = EG∼pd [∇Θ log fΘ(G, I)]− EG∼pΘ [∇Θ log fΘ(G, I)], (8) where pΘ is the model distribution that approximates pd (i.e. the conditional distribution pΘ(G|I) defined by JM-SGG model). This formula has been broadly adopted in the literature [12, 3, 7], and we provide the proof in supplementary material. In practice, we estimate the first expectation in Eq. (8) with the ground-truth scene graphs in a mini-batch. The estimation of the second expectation in Eq. (8) requires to sample scene graphs from the model distribution, which is nontrivial due to the intractable partition function ZΘ(I) that sums over all possible scene graphs. One solution is to run the Markov Chain Monte Carlo (MCMC) sampler, but its computational cost is high, and we therefore use mean-field variational inference for more efficient sampling (the detailed scheme is stated in Sec. 4.3). Instead of fixing the parameters of a pre-trained object detector during learning as in former works [53, 36, 37, 34], we fine-tune the parameters of object detector during maximum likelihood learning. 
In this way, the detector can extract more precise object and context representations by learning the likelihoods of whole scene graphs. Also, we apply a traditional bounding box regression constraint Lreg(Θ) [28] to the detector for preserving its localization capability, and these two learning objectives share the same weight. Next, we introduce the inference scheme for JM-SGG model. 4.3 Inference The inference phase aims to compute the conditional distribution pΘ(G|I) defined by JM-SGG model and also sample from it. Exact inference is always infeasible due to the complex structures among the latent variables yO andR of the scene graph as well as the intractable partition function. Therefore, we approximate pΘ(G|I) with a variational distribution qΘ(G) via the mean-field approximation [31, 24]: qΘ(G) = ∏ o∈O qΘ(yo) ∏ (oh,r,ot)∈R qΘ(r), (9) where each factor qΘ(yo) and qΘ(r) defines a categorical distribution, i.e. ∑ yo∈C qΘ(yo) = 1 and∑ r∈T qΘ(r) = 1. In this variational distribution, all object and relation labels are assumed to be independent, and it shares the same set of parameters Θ with pΘ(G|I), which greatly reduces the number of parameters needed for variational inference. For brevity, we will omit Θ in the following distribution notations, e.g. simplifying qΘ(G) as q(G). In general, we are seeking for a variational distribution that satisfies the factorization in Eq. (9) and also maximizes the variational lower bound L(q) = Eq(G)[log p(G, I)− log q(G)] (i.e. equivalent to minimizing the KL divergence between q(G) and p(G|I)). Typically, this is achieved by optimizing the variational distribution with fixed-point iterations [44, 45], which can however be inefficient, especially for the images with many objects. We thus design an inference algorithm that appropriately initializes each factor in q(G) and iteratively updates all factors. Intuitively, factor initialization is similar to existing SGG methods, where object and relation labels are predicted independently; factor update can be viewed as a refinement procedure, which makes the predictions from the initialization step more consistent. With factor initialization and factor update, the proposed inference method combines the advantages of both existing methods and CRFs, i.e. efficiency and consistency. Factor initialization. For initialization, we neglect the interdependency among different object and relation labels, i.e. omitting the potential function ψtriplet(r, yoh , yot) in p(G|I), yielding a simplified model distribution p̂(G|I). In this way, we can easily derive the following factors for initialization which makes q(G) = p̂(G|I): q(yo) = φ(yo, I)∑ y′o∈C φ(y′o, I) ∀o ∈ O, (10) q(r) = ψvisual(r, I)∑ r′∈T ψvisual(r ′, I) ∀(oh, r, ot) ∈ R. (11) See supplementary material for the proof. Intuitively, we initialize each factor by only considering its dependency on visual representations, and, on such basis, label interdependency will then be taken into account to refine each factor. In such an initialization approach, the computation of different factors is independent with each other and thus can be done efficiently in a parallel manner. In Sec. 6.1, we empirically illustrate the better convergence performance of this initialization scheme compared to the random initialization which is commonly employed in previous works [45, 22]. Factor update. Based on these initialized factors, we perform update by taking into account the interdependency among the object and relation labels in scene graph, i.e. 
using the full expression of p(G|I) with potential function ψtriplet(r, yoh , yot). In the mean-field formulation of Eq. (9), if we are to update one factor q(yo) (or q(r)) with all other factors fixed, its optimum q∗(yo) (or q∗(r)) which maximizes the variational lower bound L(q) can be specified by the following expression: log q∗(yo) = log φ(yo, I) + ∑ (o,r,ot)∈R ∑ yot∈C ∑ r∈T q(yot)q(r) logψtriplet(r, yo, yot) + ∑ (oh,r,o)∈R ∑ yoh∈C ∑ r∈T q(yoh)q(r) logψtriplet(r, yoh , yo) + const ∀o ∈ O, (12) log q∗(r) = logψvisual(r, I) + ∑ yoh∈C ∑ yot∈C q(yoh)q(yot) logψtriplet(r, yoh , yot) + const ∀(oh, r, ot) ∈ R. (13) The proof is provided in supplementary material. During computation, we omit the additive constants above, since they can be naturally eliminated when computing normalized q∗(yo) and q∗(r), i.e. taking the exponential of both sides and normalizing q∗(yo) over C and q∗(r) over T . Taking a close look at Eqs. (12) and (13), we can find that each factor is updated by aggregating the information from its neighboring factors (e.g. from the factors q(yoh) and q(yot) of head and tail objects to the factor q(r) of their relation), which can be efficiently implemented by matrix multiplication as in message passing neural networks [8]. In practice, we simultaneously update all factors in a single iteration based on the states of factors in last iteration, i.e. performing asynchronous message passing in mean field [41, 47], which forms an efficient iterative update scheme. We analyze the efficiency and efficacy of this update scheme in Secs. 6.1 and 6.2. Algorithm 1 Inference algorithm of JM-SGG. Input: Scene image I , iteration number NT . Output: Factors {q(yo)}, {q(r)} of q(G). Initialize {q(yo)}, {q(r)} by Eqs. (10), (11). for t = 1 to NT do Derive {log q∗(yo)}, {log q∗(r)} by Eqs. (12), (13). Update all factors: {q(yo)} ← {softmax(log q∗(yo))}, {q(r)} ← {softmax(log q∗(r))}. end for Inference algorithm. The whole inference algorithm is summarized in Alg. 1. Upon on the input scene image I , we first initialize each factor in q(G) by Eqs. (10) and (11). After that, we perform factor update for NT iterations. In each iteration, the log-optimum of each factor is computed based on the factors of last iteration by Eqs. (12) and (13), and the normalized factors are then derived by softmax for update. Sampling strategy. After such an iterative inference, we obtain a factorized variational distribution q(G) which well approximates the conditional distribution p(G|I) defined by JM-SGG model. Now, instead of sampling from the intractable model distribution p(G|I), we can easily sample scene graphs from q(G) by independently drawing each object/relation label from the corresponding factor (i.e. q(yo) or q(r)), where each factor is a categorical distribution. In practice, we sample NS scene graphs from q(G) for each image in a mini-batch, yielding totally NSNB samples for estimating the second expectation term in∇ΘL(Θ) (Eq. (8)), where NB denotes batch size. Prediction strategy. At the test time, we need to infer the scene graph with the highest probability in p(G|I), and it can also be efficiently done using the variational distribution q(G). In specific, based on the factorized definition of q(G), we can easily select the object category (or relation type) with the highest probability in each factor q(yo) (or q(r)), and the selected object and relation labels together form a scene graph that well approximates the most likely scene graph with respect to the model distribution p(G|I). 
Similar prediction strategies have been widely used in previous works that employed mean-field methods [15, 40]. 5 Experiments 5.1 Experimental Setup Dataset. We use the Visual Genome (VG) dataset [16] (CC BY 4.0 License), a large-scale database with structured image concepts, for evaluation. We use the pre-processed VG from Xu et al. [48] (MIT License) which contains 108k images with 150 object categories and 50 relation types. Following previous works [53, 36, 37], we employ the original split with 70% images for training and 30% images for test, and 5k images randomly sampled from the training split are held out for validation. Evaluation tasks. We evaluate the proposed method on two widely studied tasks: • Relationship Retrieval (RR). This task examines model’s comprehensive capability of localizing and classifying objects and their relationships. It is further divided into three sub-tasks from easy to hard: (1) Predicate Classification (PredCls): predict the predicate/relation of all object pairs using the ground-truth bounding boxes and object labels; (2) Scene Graph Classification (SGCls): predict all object categories and relation types given the ground-truth bounding boxes; (3) Scene Graph Generation (SGGen): localize the objects in an image and simultaneously predict their categories and all relations, where an object is regarded as correctly detected if it has at least 0.5 IoU overlap with the ground-truth box. Since two evaluation protocols were typically used in the literature, we adopt two metrics in our experiments, i.e. computing the recall for each relation type and reporting the mean (mR@k) [21, 48, 53] and computing a single recall for all relation types (R@k) [4, 37, 34], where we use both 50 and 100 for k as in previous works [21, 48, 4]. Following Xu et al. [48], we apply the graph constraint that only one relation is obtained for each ordered object pair. Totally, we report model’s performance on 12 configurations. • Zero-Shot Relationship Retrieval (ZSRR). This task was first introduced by Lu et al. [21] to evaluate model’s ability of identifying the head-relation-tail triplets that have not been observed during training. For this task, we employ the metric Zero-Shot Recall@k (ZSR@k) and conduct evaluation under three settings, i.e. PredCls, SGCls and SGGen. Also, the configurations where k equals to 50 and 100 are both evaluated. Performance comparisons. We compare the proposed method with existing scene graph generation algorithms, including IMP+ [48] (a re-implementation of IMP by Zellers et al. [53]), VTransE [55], FREQ [53], Motifs [53], KERN [4], VCTree [36], VCTree-TDE [37], VCTree-EBM [34] and GBNet-β [52]. We adapt the results on the metric mR@k from original papers, and the results on the metric R@k and ZSR@k are evaluated by the released source code for some methods, i.e. VTransE, VCTree-TDE and VCTree-EBM on R@k, and KERN and GB-Net-β on ZSR@k. 5.2 Implementation Details Model details. Following previous works [48, 55, 53, 4, 52], we adopt the Faster R-CNN [28] with a VGG-16 [32] backbone as object detector, and the VGG-16 backbone is initialized with the weights of the model pre-trained on ImageNet [30]. We use the same detector configuration as Zellers et al. [53] for fair comparison. The dimension D of object and context space and the dimension K of relation space are both set as 4096, i.e. the output dimension of the fc7 layer of VGG-16. Our method is implemented under PyTorch [25], and the source code will be released for reproducibility. 
Training details. In our experiments, the object detector is first pre-trained by an SGD optimizer (batch size: 4, initial learning rate: 0.001, momentum: 0.9, weight decay: 5× 10−4) for 20 epochs, and the learning rate is multiplied by 0.1 after the 10th epoch. During maximum likelihood learning, we train the potential functions and fine-tune the object detector with another SGD optimizer (batch size: 4, potential function learning rate: 0.001, detector learning rate: 0.0001, momentum: 0.9, weight decay: 5×10−4) for 10 epochs, and the learning rate is multiplied by 0.1 after the 5th epoch. Without otherwise specified, the iteration number NT is set as 1 for training and 2 for test, and the per image sampling size NS is set as 3. These hyperparameters are selected by the grid search on validation set, and their sensitivities are analyzed in Sec. 6.2. An NVIDIA Tesla V100 GPU is used for training. Evaluation details. As stated in Sec. 4.3, we independently predict each object category and relation type by selecting the most likely one in the corresponding factor of variational distribution. The objects predicted as “background” are discarded along with the relations linking to them, and the relations predicted as “no relation” are also removed. To derive a ranked triplet list for RR and ZSRR tasks, we save the probability of each object and relation and compute the probability product within each head-relation-tail triplet, and all triplets are then ranked according to the values of their probability products in a descending order. We report model’s performance at the last epoch. 5.3 Experimental Results Relationship Retrieval (RR). In Tab. 1, we compare our method with existing approaches under 12 settings of the RR task. It can be observed that the proposed JM-SGG model achieves the best performance on 10 of 12 settings. In particular, compared to the state-of-the-art VCTree-TDE [37], a previous work dedicated to addressing unbiased scene graph prediction, JM-SGG performs better on 4 of 6 settings for unbiased prediction (i.e. the settings using metric mR@k). We think these superior results are mainly ascribed to the proposed joint scene graph modeling, in which the class imbalance among different relation types is mitigated by emphasizing the role of these sample-scarce relation types under the context of whole scene graphs. Zero-Shot Relationship Retrieval (ZSRR). Tab. 2 reports the performance of various approaches on 6 settings of the ZSRR task. The comparison with FREQ [53] is not included on this task, since this baseline method can only predict the relational triplets appearing in the training set. We can observe that the JM-SGG model outperforms existing methods on all 6 settings, and, especially, a 34% performance gain on ZSR@50 is achieved on the SGCls sub-task. These results illustrate the effectiveness of JM-SGG on discovering the novel relational triplets that have not been observed during learning. 6 Analysis 6.1 Ablation Study Ablation study for joint scene graph modeling. To better verify the effectiveness of joint scene graph modeling, we study a variant of JM-SGG which models the joint distribution of an individual relational triplet instead of the whole scene graph, denoted as JM-SGG (triplet) (see supplementary material for more details). In Tabs. 
1 and 2, JM-SGG clearly outperforms JM-SGG (triplet) on all metrics, including the metric mR@k for unbiased prediction, which demonstrates the benefit of joint scene graph modeling in mitigating the class imbalance among different relation types. Ablation study for factor initialization. In this experiment, we compare the proposed initialization method (Eqs. (10) and (11)) with random initialization, which randomly initializes the categorical distribution of each factor q(yo) and q(r) in the variational distribution q(G). Under these two initialization schemes, we plot the model’s performance after different numbers of factor-update iterations in Fig. 2(a). After four iterations, the two schemes converge to solutions with comparable performance, while our initialization approach converges faster (i.e. within two iterations). Ablation study for factor update. In this part, we study another configuration where the initialized factors are directly used for scene graph prediction without factor update, denoted as JM-SGG (w/o FU). In Tabs. 1 and 2, the superior performance of JM-SGG over JM-SGG (w/o FU) verifies the necessity of performing factor update to refine the initial label predictions. Ablation study on modeling head-relation-tail triplets. Previous works [55, 5] used TransE [2] to model the relation between two objects, while our method employs TransR [20] to model head-relation-tail triplets. To investigate the effectiveness of this model design, we substitute TransR with TransE in our model, denoted as JM-SGG (TransE). Specifically, this model variant assumes that object and relation embeddings lie in the same space, and thus the projection matrix Mo is removed from the two relation potential terms ψvisual and ψtriplet. In Tab. 3, it can be observed that TransR clearly outperforms TransE in the JM-SGG model, which demonstrates the importance of modeling objects and relations in two distinct embedding spaces. 6.2 Sensitivity Analysis Sensitivity of iteration number NT. In Fig. 2(b), we plot the performance of the JM-SGG model under different iteration numbers. It can be observed that, for training, one iteration of factor update is enough to derive a decent variational distribution for sampling; for testing, two iterations are required to converge to the optimal approximation of the model distribution. Sensitivity of per-image sampling size NS. We vary the per-image sampling size NS for learning and plot the corresponding model performance in Fig. 2(c). We can observe that, by sampling at least three scene graphs from the variational distribution for each image, the second expectation term in Eq. (8) can be well estimated, which stably enhances model performance. 6.3 Visualization In Fig. 3, we visualize typical scene graphs generated by the JM-SGG model, showing the results with and without factor update. In these two examples, factor update succeeds in correcting some wrong relation labels (e.g. person has jean → person wearing jean) by considering the dependency among different object and relation labels. More visualization results are provided in the supplementary material. 7 Conclusions and Future Work In this work, we propose the Joint Modeling for Scene Graph Generation (JM-SGG) model. This model is able to jointly capture the dependency among all object and relation labels in the scene graph, and its learning and inference can be efficiently performed using the mean-field variational inference algorithm. 
Extensive experiments on both the relationship retrieval and zero-shot relationship retrieval tasks demonstrate the superiority of the JM-SGG model. The current JM-SGG model cannot be directly used for visual reasoning, and its inference method makes a strong assumption of a fully factorized variational distribution. Therefore, our future work will include exploring downstream visual reasoning tasks (e.g. visual question answering [1] and visual commonsense reasoning [54]) based on the JM-SGG model and further improving our approximate inference algorithm (e.g. by defining a more expressive variational distribution). 8 Broader Impacts This research project focuses on predicting the objects and their relations in a visual scene by fully capturing the dependency among all objects and relations, and the predicted object and relation labels are further organized as a scene graph. Compared to conventional visual recognition systems that only predict objects, our approach is able to simultaneously provide object and relationship predictions. This merit enables more in-depth scene understanding and can potentially benefit many real-world applications, like intelligent surveillance and autonomous driving. However, it cannot be denied that the annotation process for a scene graph generation model is labor-intensive. For example, 11.5 objects and 6.2 relations, on average, need to be annotated for each image in the Visual Genome dataset, and the dataset contains 108k images in total. Therefore, how to train a scene graph generation model more efficiently with less labeled data remains to be explored. Acknowledgments and Disclosure of Funding This project was supported by the Natural Sciences and Engineering Research Council (NSERC) Discovery Grant, the Canada CIFAR AI Chair Program, collaboration grants between Microsoft Research and Mila, Samsung Electronics Co., Ltd., Amazon Faculty Research Award, Tencent AI Lab Rhino-Bird Gift Fund and an NRC Collaborative R&D Project (AI4D-CORE-08). This project was also partially funded by IVADO Fundamental Research Project grant PRF2019-3583139727. Bingbing Ni is supported by the National Science Foundation of China (U20B2072, 61976137). The authors would like to thank Zhaocheng Zhu, Louis-Pascal Xhonneux and Zuobai Zhang for providing constructive advice during this project, and also appreciate the Student Innovation Center of SJTU for providing GPUs.
1. What is the focus of the paper regarding scene graph generation? 2. What are the strengths of the proposed method, particularly in its design choices and theoretical analysis? 3. Do you have any concerns regarding the originality of the proposed method? 4. How does the reviewer assess the empirical evaluation of the paper, particularly in terms of comparisons with other works and the choice of dataset? 5. What are the suggestions for improving the clarity and readability of the paper's content? 6. How has the author addressed the reviewer's concerns in their response?
Summary Of The Paper Review
Summary Of The Paper The paper proposes a new method for supervised scene graph generation from images. The new method models dependencies between objects and their relations as a conditional random field (CRF), conditioned on the output of an object detector. To facilitate inference in the CRF, the deterministic prediction by the object detector is used to initialize a mean field approximation to the posterior. This is then updated iteratively via message passing. The method is shown to yield improved performance in relationship retrieval on the Visual Genome dataset. Review Method The intuition behind the proposed method is clear, and the design choices made in its implementation are plausible. The theoretical analysis is correct as far as I can tell. The use of the mean field approximation is a limitation, but one that is reasonable and also discussed in the paper. Originality The introduction and related work sections claim that the proposed method is the first to model dependencies across the entire scene graph. I don't believe that this is the case: At the least, Suhail et al. [33] propose an energy-based model that uses a graph neural network to assign energy values to scene graphs, which will be able to model arbitrary dependencies between the nodes. They also use a similar inference procedure, by first obtaining an initial scene graph via a deterministic predictor, and then iteratively updating it to maximize the likelihood (energy) under the probabilistic model. While the proposed method may be the first to model dependencies via a traditional graphical model like a CRF, and there may be some benefits to that, such a claim would need to be discussed and supported by evidence. Empirical Evaluation The model is evaluated in the standard setting of relationship retrieval on the Visual Genome dataset, with Table 1 indicating that the proposed method performs almost universally better than previous methods. However, given that the authors report baseline numbers for VCTree, VCTree-TDE, and VCTree with the EBM loss from [33], I am very confused that they chose not to also report the results for VCTree-TDE with EBM loss [33]. This appears to be the current state of the art on this benchmark, and also beats the proposed method on most of the reported metrics. Am I missing any reason why this wouldn't be a fair comparison? More generally, the robustness of the evaluation could be improved by including another dataset, such as GQA. Clarity Overall, the proposed method is explained in a clear and reproducible way. The paper could benefit from another editing pass, e.g., the caption of Fig. 2(c) is broken. The readability of some of the math, especially formulas (12) and (13), could be improved, e.g., by avoiding nested subscripts such as y_{o_h}, and merging summation signs. Summary Overall, the paper proposes a plausible method, but the discussion of related work and the empirical evaluation raise questions. I am open to increasing my score if these can be addressed during the response phase. Update In their response, the authors have agreed to remove the claim that they are the first to jointly model scene graphs. They have also demonstrated that their contribution is orthogonal to the unbiased prediction techniques employed in VCTree-TDE, and that the two can be combined to achieve even better performance. This addresses my two main concerns. I have therefore raised my score from 5 to 6.
NIPS
Title Joint Modeling of Visual Objects and Relations for Scene Graph Generation Abstract An in-depth scene understanding usually requires recognizing all the objects and their relations in an image, encoded as a scene graph. Most existing approaches for scene graph generation first independently recognize each object and then predict their relations independently. Though these approaches are very efficient, they ignore the dependency between different objects as well as between their relations. In this paper, we propose a principled approach to jointly predict the entire scene graph by fully capturing the dependency between different objects and between their relations. Specifically, we establish a unified conditional random field (CRF) to model the joint distribution of all the objects and their relations in a scene graph. We carefully design the potential functions to enable relational reasoning among different objects according to knowledge graph embedding methods. We further propose an efficient and effective algorithm for inference based on meanfield variational inference, in which we first provide a warm initialization by independently predicting the objects and their relations according to the current model, followed by a few iterations of relational reasoning. Experimental results on both the relationship retrieval and zero-shot relationship retrieval tasks prove the efficiency and efficacy of our proposed approach. 1 Introduction Modern object recognition [32, 10, 35] and detection [28, 27, 57] systems excel at the perception of visual objects, which has significantly boosted many industrial applications such as intelligent surveillance [18, 49] and autonomous driving [23, 38]. To have a deeper understanding of a visual scene, detecting and recognizing the objects in the scene is however insufficient. Instead, a comprehensive cognition of visual objects and their relationships is more desirable. Scene Graph Generation (SGG) [13] is a natural way to achieve this goal, in which a graph incorporating all objects and their relations within a scene image is derived to represent its semantic structure. Most previous works for SGG [48, 55, 53, 36, 4, 37] usually first independently predict different objects in a scene and then predict their relations independently. In practice, though such methods are very efficient, they ignore the dependency between different objects and between the relations of different object pairs. For example, a car could frequently co-occur with a street, and the relation eating could always appear along with the relation sitting on. Modeling such dependency could be very important for accurate scene graph prediction, especially for rare objects and relations. There are indeed some recent works [6, 5] along this direction. For example, Dai et al. [6] explored the triplet-level label dependency among a head object, a tail object and their relation. These methods have shown very promising results, while they only explored the limited dependency within a triplet. How to capture the full dependency between different objects and between their relations within a whole scene graph remains very challenging and unexplored. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). To attain such a goal, in this paper, we propose a principled approach called Joint Modeling for Scene Graph Generation (JM-SGG) to predict the whole scene graph by jointly capturing all the label dependency within it, i.e. 
the dependency between different objects and their relations and also the interdependency between them. Specifically, we model the joint distribution of all objects and relations in a scene graph with the conditional random field (CRF) framework [17]. To flexibly model the joint distribution, the key is to define effective potential functions on both nodes (i.e. objects) and edges (i.e. relations between objects). We define the potential functions on objects according to the object representations extracted by existing neural network based object detector. It is however nontrivial to design effective potential functions on edges, since these potential functions have to capture the relation between two objects in an edge and meanwhile allow relational reasoning among different edges, which models the dependency among the relations on various edges. Inspired by the existing work of knowledge graph embedding [20], which represents entities and relations in the same embedding space and performs relational reasoning in that space, we define our potential functions according to the knowledge graph embedding method and hence allow efficient relational reasoning between different object pairs in a scene graph. Such a fully expressive model also brings challenges to both learning and inference due to the complicated structures between different random variables in the CRF, i.e. objects and their relations. We therefore further propose an efficient and effective inference algorithm based on mean-field variational inference, which is able to assist the gradient estimation for learning and derive the most likely scene graph for test. Traditional mean-field methods usually suffer from the problem of slow convergence. Instead of starting from a randomly initialized variational distribution as in traditional mean-field methods, we propose to initialize the variational distribution, i.e. the marginal distribution of each object and each relation, with a factorized tweak of JM-SGG model, and then perform a few iterations of message passing induced by the fixed-point optimality condition of mean field to refine the variational distribution, which allows our approach to enjoy both good precision and efficiency. To summarize, in this paper, we make the following contributions: • We propose Joint Modeling for Scene Graph Generation (JM-SGG) which is a fully expressive model that can capture all the label dependency in a whole scene graph. • We propose a principled mean-field variational inference algorithm to enable the efficient learning and inference of JM-SGG model. • We verify the superior performance of our method on both relationship retrieval and zero-shot relationship retrieval tasks under various settings and metrics. Also, we illustrate the efficiency and efficacy of the proposed inference algorithm by thorough analytical experiments. 2 Related Work Scene Graph Generation (SGG). This task aims to extract structured representations from scene images [13], including the category of objects and their relationships. Previous works performed SGG by propagating the information from different local regions [48, 53, 50, 36], introducing external knowledge [9, 52], employing well-designed loss functions [56, 14, 34] and performing unbiased scene graph prediction [4, 19, 37]. Most of these methods predict each object and relation label independently based on an informative representation, which fails to capture the rich label dependency within a scene graph and is thus less expressive. 
Several former works [6, 5] attempted to model such label dependency within a single relational triplet but not on the whole scene graph. Improvements over existing methods. The proposed JM-SGG model is, to our best knowledge, the first approach that jointly models all the label dependency within a scene graph, including the one within object or relation labels and the one between these two kinds of labels. To attain this goal, a unified CRF is constructed for graphical modeling, and a mean-field variational inference algorithm is designed for efficient learning and inference, which show technical contributions. Conditional Random Fields (CRFs). CRFs are a class of probabilistic graphical modeling methods which perform structured prediction upon the observed data. CRF-based approaches have been broadly studied on various computer vision problems, including segmentation [17, 43, 51, 26], superresolution [39, 46], image denoising [29, 42] and scene graph generation [6, 5]. These former works utilizing CRF for SGG [6, 5] aimed to model the conditional distribution of a single triplet upon visual representations. By comparison, our approach models the conditional distribution of a whole scene graph upon the observed scene image, which is more expressive. 3 Problem Definition and Preliminary 3.1 Problem Definition This work focuses on extracting a scene graph, i.e. a structured representation of visual scene [13], from an image. Formally, we define a scene graph as G = (yO, R). yO denotes the category labels of all objects O in the image, and it holds that yo ∈ C for each object o ∈ O, where C stands for the set of all object categories, including the “background” category. R = {(oh, r, ot)} is the set of relational triplets/edges with r ∈ T as the relation type from head object oh to tail object ot (oh, ot ∈ O), where T represents all relation types, including the type of “no relation”. In this work, we aim at jointly modeling visual objects and visual relations as defined below: Joint Scene Graph Modeling. Given an image I , we aim to jointly predict object categories yO and the relationships R among all objects, which models the joint distribution of scene graphs, i.e. p(G|I) = p(yO, R|I), with comprehensively considering the dependency within yO and R and also the interdependency between them. 3.2 Conditional Random Fields Conditional Random Field (CRF) is a discriminative undirected graphical model. Given a set of observed variables x, it models the joint distribution of labels y based on a Markov network G that specifies the dependency among all variables: p(y|x) = 1 Z(x) ∏ C φC(xC ,yC), Z(x) = ∑ y ∏ C φC(xC ,yC). (1) where φC denotes the nonnegative potential function defined over the variables in clique C (a clique is a fully-connected local subgraph), and Z(x) is a normalization constant called partition function. 4 Model In this section, we introduce Joint Modeling for Scene Graph Generation (JM-SGG). Current methods solve the problem by independently predicting each object and relation label upon an informative representation, and thus the prediction of different labels cannot fully benefit each other. JM-SGG tackles the limitation by jointly modeling all the objects and relationships in a visual scene with a unified conditional random field, which enables the prediction of various object and relation labels to sufficiently interact with each other. 
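As a concrete reference for the general CRF formulation in Eq. (1), the following toy sketch (our own illustration, not part of the JM-SGG implementation) enumerates all label assignments of a small discrete model to evaluate the partition function exactly. Such brute-force enumeration is only feasible for tiny graphs, which is precisely why an approximate inference scheme such as the mean-field procedure of Sec. 4.3 is needed for whole scene graphs.

```python
import itertools
import numpy as np

def crf_probability(y, unary, pairwise, edges):
    """Toy discrete CRF: p(y | x) as in Eq. (1).

    unary:    list of 1-D arrays, unary potentials phi_i(y_i | x), already
              conditioned on the observation x.
    pairwise: dict mapping an edge (i, j) to a matrix psi_ij(y_i, y_j).
    edges:    list of (i, j) index pairs defining the pairwise cliques.
    """
    def unnormalized(assignment):
        score = 1.0
        for i, pot in enumerate(unary):
            score *= pot[assignment[i]]
        for (i, j) in edges:
            score *= pairwise[(i, j)][assignment[i], assignment[j]]
        return score

    num_labels = [len(pot) for pot in unary]
    # Partition function Z(x): sum of unnormalized scores over all assignments.
    Z = sum(unnormalized(a)
            for a in itertools.product(*[range(k) for k in num_labels]))
    return unnormalized(y) / Z

# Tiny example: two binary labels connected by one edge.
unary = [np.array([1.0, 2.0]), np.array([3.0, 1.0])]
pairwise = {(0, 1): np.array([[2.0, 0.5], [0.5, 2.0]])}
print(crf_probability((0, 0), unary, pairwise, edges=[(0, 1)]))
```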
Nevertheless, learning and inferring this complex CRF is nontrivial, and we thus propose to use maximum likelihood estimation combined with mean-field variational inference, yielding an efficient algorithm for learning and inference. Next, we elucidate the details of our approach. 4.1 Representation In the JM-SGG model, we organize the observed scene image I and all object and relation labels in the latent scene graph (i.e. yO and R) as the nodes in a unified conditional random field. Since the interactions of these nodes are either for a single object or for the relationship between an object pair, we decompose the graphical structure of whole network into two sets of components. (1) Object components: For an object o ∈ O, we consider the dependency of its category label on its visual representation and thus connect yo with I , as shown in Fig. 1(a). (2) Relation components: for a relational triplet (oh, r, ot) ∈ R, we consider the dependency of relation type r on the visual cues in image I , and we also model the interdependency among the object and relation labels in this triplet (i.e. yoh , yot and r), which forms a relation component as Fig. 1(b) shows. By combining all object and relation components, the CRF can capture the comprehensive label dependency within a scene graph. We now define the joint distribution of scene graphs upon the observed scene image as below: pΘ(G|I) = 1 ZΘ(I) fΘ(G, I), (2) fΘ(G, I) = ∏ o∈O φ(yo, I) ∏ (oh,r,ot)∈R ψ(r, yoh , yot , I), (3) where Θ summarizes the parameters of whole model, fΘ is an unnormalized likelihood function, ZΘ denotes the partition function, and φ and ψ are the potential functions defined on object and relation components, respectively. Next, we define these potential functions based on the extracted visual representations and the correlation among different labels. Visual representation extraction. Given a scene image I , we first utilize a standard object detector (e.g. Faster R-CNN [28] in our implementation) to obtain a set of bounding boxes which potentially contain the objects in the image, and object representations zO = {zo|o ∈ O} (zo ∈ RD) are then derived by RoIAlign [11]. We regard the union bounding box over a pair of objects as their context region and again use RoIAlign to get all context representations zR = {zht|(oh, r, ot) ∈ R} (zht ∈ RD). Here, D denotes the latent dimension of objects and contexts. By denoting the whole object detector as gθ, this feature extraction process can be represented as: (zO, zR) = gθ(I). Potential function definition. The potential function φ(yo, I) for object component models the dependency of object category yo on object representation zo by measuring their affinity. To conduct such a measure, we represent each object category with a prototype [33] (i.e. a learnable embedding vector) in the continuous space, which forms a prototype set C = {Ci ∈ RD|i ∈ C} for all object categories (D denotes the dimension of object space). On such basis, we define φ(yo, I) by computing the distance between object representation zo and the prototype of object category yo: φ(yo, I) = exp ( −d(Cyo , zo) ) , (4) where d is a distance measure (e.g. Euclidean distance in our practice). The potential function ψ(r, yoh , yot , I) for relation component models the dependency of relation type r on the relevant visual representations in image I , and it also models the interdependency among the object and relation labels of a triplet (i.e. yoh , yot and r). 
Therefore, we can factorize ψ(r, yoh , yot , I) into a term ψvisual(r, I) for modeling visual influence and another term ψtriplet(r, yoh , yot) for modeling the label consistency within a triplet: ψ(r, yoh , yot , I) = ψvisual(r, I)ψtriplet(r, yoh , yot). (5) Similarly, for measuring r in the continuous space, a prototype set T = {Tj ∈ RK |j ∈ T } is constructed for all relation types (K denotes the dimension of relation space). We consider two kinds of visual representations that affect the prediction of relation type r, i.e. the context representation zht and the head and tail object representations zoh and zot . The influence of context representation can be easily measured by projecting context representation zht to the relation space and computing its distance to the prototype of relation type r. However, measuring the influence of head and tail object representations and evaluating the label consistency within a triplet are nontrivial, which require to model the ternary correlation among head object, tail object and their relationship. Inspired by the idea of TransR [20], an effective knowledge graph embedding technique, we model such ternary correlation by treating each relation as a translation vector from head object embedding to tail object embedding in the same embedding space. Specifically, we first apply the translation vector Tr specified by relation r to head object embedding, and then compute the distance between the translated embedding and tail object embedding. Based on these thoughts, we define ψvisual(r, I) and ψtriplet(r, yoh , yot) as follows: ψvisual(r, I) = exp ( − ( d(Tr,Mczht) + d(Mozoh + Tr,Mozot) )) , (6) ψtriplet(r, yoh , yot) = exp ( −d(MoCyoh + Tr,MoCyot ) ) , (7) where Mc ∈ RK×D denotes the projection matrix mapping from context space to relation space, and Mo ∈ RK×D is the projection matrix mapping from object space to relation space. Next, we state how to learn the parameters in JM-SGG model. 4.2 Learning In the learning phase, we seek to learn the parameters C, T, Mc and Mo of potential function and the parameters θ of object detector by maximum likelihood estimation, where Θ summarizes all these parameters. Specifically, we aim to maximize the expectation of log-likelihood function log pΘ(G|I) with respect to the data distribution pd, i.e. L(Θ) = EG∼pd [ log pΘ(G|I) ] , by performing gradient ascent. The gradient of the objective function L(Θ) with respect to Θ can be computed as below: ∇ΘL(Θ) = EG∼pd [∇Θ log fΘ(G, I)]− EG∼pΘ [∇Θ log fΘ(G, I)], (8) where pΘ is the model distribution that approximates pd (i.e. the conditional distribution pΘ(G|I) defined by JM-SGG model). This formula has been broadly adopted in the literature [12, 3, 7], and we provide the proof in supplementary material. In practice, we estimate the first expectation in Eq. (8) with the ground-truth scene graphs in a mini-batch. The estimation of the second expectation in Eq. (8) requires to sample scene graphs from the model distribution, which is nontrivial due to the intractable partition function ZΘ(I) that sums over all possible scene graphs. One solution is to run the Markov Chain Monte Carlo (MCMC) sampler, but its computational cost is high, and we therefore use mean-field variational inference for more efficient sampling (the detailed scheme is stated in Sec. 4.3). Instead of fixing the parameters of a pre-trained object detector during learning as in former works [53, 36, 37, 34], we fine-tune the parameters of object detector during maximum likelihood learning. 
In this way, the detector can extract more precise object and context representations by learning the likelihoods of whole scene graphs. Also, we apply a traditional bounding box regression constraint Lreg(Θ) [28] to the detector for preserving its localization capability, and these two learning objectives share the same weight. Next, we introduce the inference scheme for JM-SGG model. 4.3 Inference The inference phase aims to compute the conditional distribution pΘ(G|I) defined by JM-SGG model and also sample from it. Exact inference is always infeasible due to the complex structures among the latent variables yO andR of the scene graph as well as the intractable partition function. Therefore, we approximate pΘ(G|I) with a variational distribution qΘ(G) via the mean-field approximation [31, 24]: qΘ(G) = ∏ o∈O qΘ(yo) ∏ (oh,r,ot)∈R qΘ(r), (9) where each factor qΘ(yo) and qΘ(r) defines a categorical distribution, i.e. ∑ yo∈C qΘ(yo) = 1 and∑ r∈T qΘ(r) = 1. In this variational distribution, all object and relation labels are assumed to be independent, and it shares the same set of parameters Θ with pΘ(G|I), which greatly reduces the number of parameters needed for variational inference. For brevity, we will omit Θ in the following distribution notations, e.g. simplifying qΘ(G) as q(G). In general, we are seeking for a variational distribution that satisfies the factorization in Eq. (9) and also maximizes the variational lower bound L(q) = Eq(G)[log p(G, I)− log q(G)] (i.e. equivalent to minimizing the KL divergence between q(G) and p(G|I)). Typically, this is achieved by optimizing the variational distribution with fixed-point iterations [44, 45], which can however be inefficient, especially for the images with many objects. We thus design an inference algorithm that appropriately initializes each factor in q(G) and iteratively updates all factors. Intuitively, factor initialization is similar to existing SGG methods, where object and relation labels are predicted independently; factor update can be viewed as a refinement procedure, which makes the predictions from the initialization step more consistent. With factor initialization and factor update, the proposed inference method combines the advantages of both existing methods and CRFs, i.e. efficiency and consistency. Factor initialization. For initialization, we neglect the interdependency among different object and relation labels, i.e. omitting the potential function ψtriplet(r, yoh , yot) in p(G|I), yielding a simplified model distribution p̂(G|I). In this way, we can easily derive the following factors for initialization which makes q(G) = p̂(G|I): q(yo) = φ(yo, I)∑ y′o∈C φ(y′o, I) ∀o ∈ O, (10) q(r) = ψvisual(r, I)∑ r′∈T ψvisual(r ′, I) ∀(oh, r, ot) ∈ R. (11) See supplementary material for the proof. Intuitively, we initialize each factor by only considering its dependency on visual representations, and, on such basis, label interdependency will then be taken into account to refine each factor. In such an initialization approach, the computation of different factors is independent with each other and thus can be done efficiently in a parallel manner. In Sec. 6.1, we empirically illustrate the better convergence performance of this initialization scheme compared to the random initialization which is commonly employed in previous works [45, 22]. Factor update. Based on these initialized factors, we perform update by taking into account the interdependency among the object and relation labels in scene graph, i.e. 
using the full expression of p(G|I) with the potential function ψ_triplet(r, y_{o_h}, y_{o_t}). In the mean-field formulation of Eq. (9), if we are to update one factor q(y_o) (or q(r)) with all other factors fixed, its optimum q∗(y_o) (or q∗(r)), which maximizes the variational lower bound L(q), can be specified by the following expressions: log q∗(y_o) = log φ(y_o, I) + ∑_{(o,r,o_t)∈R} ∑_{y_{o_t}∈C} ∑_{r∈T} q(y_{o_t}) q(r) log ψ_triplet(r, y_o, y_{o_t}) + ∑_{(o_h,r,o)∈R} ∑_{y_{o_h}∈C} ∑_{r∈T} q(y_{o_h}) q(r) log ψ_triplet(r, y_{o_h}, y_o) + const, ∀o ∈ O, (12) and log q∗(r) = log ψ_visual(r, I) + ∑_{y_{o_h}∈C} ∑_{y_{o_t}∈C} q(y_{o_h}) q(y_{o_t}) log ψ_triplet(r, y_{o_h}, y_{o_t}) + const, ∀(o_h, r, o_t) ∈ R. (13) The proof is provided in the supplementary material. During computation, we omit the additive constants above, since they are naturally eliminated when computing the normalized q∗(y_o) and q∗(r), i.e. taking the exponential of both sides and normalizing q∗(y_o) over C and q∗(r) over T. Taking a close look at Eqs. (12) and (13), we can see that each factor is updated by aggregating the information from its neighboring factors (e.g. from the factors q(y_{o_h}) and q(y_{o_t}) of the head and tail objects to the factor q(r) of their relation), which can be efficiently implemented by matrix multiplication as in message passing neural networks [8]. In practice, we simultaneously update all factors in a single iteration based on the states of the factors in the last iteration, i.e. performing asynchronous message passing in mean field [41, 47], which forms an efficient iterative update scheme. We analyze the efficiency and efficacy of this update scheme in Secs. 6.1 and 6.2.
Algorithm 1 Inference algorithm of JM-SGG.
Input: Scene image I, iteration number NT.
Output: Factors {q(y_o)}, {q(r)} of q(G).
  Initialize {q(y_o)}, {q(r)} by Eqs. (10), (11).
  for t = 1 to NT do
    Derive {log q∗(y_o)}, {log q∗(r)} by Eqs. (12), (13).
    Update all factors: {q(y_o)} ← {softmax(log q∗(y_o))}, {q(r)} ← {softmax(log q∗(r))}.
  end for
Inference algorithm. The whole inference algorithm is summarized in Alg. 1. Given the input scene image I, we first initialize each factor in q(G) by Eqs. (10) and (11). After that, we perform factor update for NT iterations. In each iteration, the log-optimum of each factor is computed from the factors of the last iteration by Eqs. (12) and (13), and the normalized factors are then derived by softmax for the update. Sampling strategy. After this iterative inference, we obtain a factorized variational distribution q(G) which well approximates the conditional distribution p(G|I) defined by the JM-SGG model. Now, instead of sampling from the intractable model distribution p(G|I), we can easily sample scene graphs from q(G) by independently drawing each object/relation label from the corresponding factor (i.e. q(y_o) or q(r)), where each factor is a categorical distribution. In practice, we sample NS scene graphs from q(G) for each image in a mini-batch, yielding NS × NB samples in total for estimating the second expectation term in ∇ΘL(Θ) (Eq. (8)), where NB denotes the batch size. Prediction strategy. At test time, we need to infer the scene graph with the highest probability under p(G|I), and this can also be done efficiently using the variational distribution q(G). Specifically, based on the factorized definition of q(G), we can easily select the object category (or relation type) with the highest probability in each factor q(y_o) (or q(r)), and the selected object and relation labels together form a scene graph that well approximates the most likely scene graph with respect to the model distribution p(G|I). 
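To complement Alg. 1, the following is a minimal sketch of factor initialization and factor update for a single image, written with numpy. It assumes that relation factors are kept for every ordered pair of candidate objects and that the log-potentials of Eqs. (4), (6) and (7) have already been computed as dense arrays; the variable names are ours and do not come from the released code.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def mean_field_inference(log_phi, log_psi_visual, log_psi_triplet, num_iters=2):
    """Sketch of Alg. 1 for one image with n candidate objects.

    log_phi:         [n, C]    object log-potentials, log phi(y_o, I) in Eq. (4)
    log_psi_visual:  [n, n, T] relation log-potentials, log psi_visual in Eq. (6),
                               indexed by (head, tail, relation type)
    log_psi_triplet: [T, C, C] label-consistency log-potentials, log psi_triplet
                               in Eq. (7), indexed by (relation, head label, tail label)
    Returns the factors q(y_o) and q(r) of the variational distribution q(G).
    """
    n = log_phi.shape[0]
    pair_mask = 1.0 - np.eye(n)              # exclude self-pairs (head != tail)

    # Factor initialization (Eqs. (10) and (11)): visual evidence only.
    q_obj = softmax(log_phi, axis=-1)        # [n, C]
    q_rel = softmax(log_psi_visual, axis=-1) # [n, n, T]

    # Factor update (Eqs. (12) and (13)): all factors are updated simultaneously
    # from the previous iteration's states.
    for _ in range(num_iters):
        # Messages to an object factor from edges where it is the head / the tail.
        msg_as_head = np.einsum('ht,htr,td,rcd->hc',
                                pair_mask, q_rel, q_obj, log_psi_triplet)
        msg_as_tail = np.einsum('ht,htr,hc,rcd->td',
                                pair_mask, q_rel, q_obj, log_psi_triplet)
        # Message to a relation factor from its head and tail object factors.
        msg_to_rel = np.einsum('hc,td,rcd->htr', q_obj, q_obj, log_psi_triplet)

        q_obj_new = softmax(log_phi + msg_as_head + msg_as_tail, axis=-1)
        q_rel_new = softmax(log_psi_visual + msg_to_rel, axis=-1)
        q_obj, q_rel = q_obj_new, q_rel_new

    return q_obj, q_rel

# Prediction strategy: take the most likely label in each factor, e.g.
# obj_labels = q_obj.argmax(axis=-1); rel_labels = q_rel.argmax(axis=-1)
```

The three einsum calls correspond to the two summation terms of Eq. (12) and the summation term of Eq. (13), respectively, which is why each update reduces to dense tensor contractions rather than explicit loops over factors.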
1. What is the main contribution of the paper regarding scene graph generation? 2. What are the strengths and weaknesses of the proposed method compared to prior works? 3. Do you have any concerns regarding the originality and quality of the paper's content? 4. How does the reviewer assess the clarity and significance of the paper's writing? 5. Are there any suggestions or recommendations for improving the paper or its experimental results?
Summary Of The Paper Review
Summary Of The Paper The paper proposes a method for generating scene graphs of images by modeling the objects and relationships with a Conditional Random Field. The paper claims that other existing methods do not model the dependency between all the objects and relations in an image. The proposed method is meant for modeling all the relations jointly which could lead to better performance in generating scene graphs. The potential function for object components is defined using the representation obtained from an object detector network. The potential function for relationship components is defined using the knowledge graph embedding method which represents both the objects and relationships in the same embedding space. A mean-field approximation of the probability distribution of the CRF is obtained by treating the labels for objects and relations as independent of each other. A message passing algorithm is used to iteratively update the factors and perform inference with the mean-field approximation. This is also helpful for sampling the scene graphs during the learning phase. The experiments are performed on the VisualGenome dataset and results are shown for a few different tasks like Relationship Predicate Classification, Scene Graph Classification and Generation. The paper also includes results on the task of zero shot relationship retrieval that evaluates the model’s ability to identify head-relation-tail combinations not observed during training. Review I have a few concerns regarding the Originality, Quality and Clarity. The concerns are marked with the [-ve] prefix. Originality [-ve] The claim that other works don’t model all the relationships in an image is not true. The idea of using factors for relationships and aggregating information over all the relationships using an iterative message passing algorithm has been used by W.Cong et al. [4]. The Algorithm 1 in [4] has a message passing step that gathers information from the neighbors of a node in a graph, thus aggregating information from the entire graph over the iterations. The aggregation method is not just special in [4], it is also a pretty standard way to do inference over graphs. [-ve] The paper claims to propose a novel inference algorithm. However, it is not clear what aspect of the inference algorithm is novel when compared to that of existing mean-field approximation for CRF models. This inference algorithm has been used for CRFs in many prior computer vision works. Ex: Krahenbuhl and Kolton, NeurIPS 2011. Quality [-ve] The results are better than some of the state of the art methods. While the paper claims that this is due to joint scene graph modeling (line 337), I think a more thorough analysis would be required to justify the claim. The effect of the proposed relationship potential function that is different from other works is not fully measured. The definitions of the potential functions seem to be the main distinguishing contribution when compared to works like Cong et al. [4]. The work in [4] uses TransE (Bordes et al. 2013) whereas this paper uses TransR [19]. Could this be the one of the reasons for the improved results? This could be verified with an ablation experiment where TransR is substituted with TransE. I am suggesting this experiment because the results from the ablation settings which don’t perform full scene graph modeling (triplet-only and without factor update) are already better in some settings when compared to the other methods. 
The results from the model without factor update are better than those from the triplet-only model, which does one step of factor update. This is an interesting result which needs some explanation. Does this imply that the results can be noisy if we don’t perform full scene graph modeling? It would be good to mention this explicitly, which adds to the motivation for doing full scene graph modeling. The experiments section provides a good sensitivity analysis showing how the results converge with an increasing number of factor-update iterations. The code is provided for reproducing the results. Clarity [-ve] The paper is written well to provide a clear explanation of the method along with the proofs of the mathematical lemmas. However, the related work section is lacking since it doesn’t mention how the proposed method is different from and/or better than existing works. Significance The results are better than the state-of-the-art methods in many settings. The work is useful for the community as an example of an alternative method for scene graph generation. While state-of-the-art methods like Dhingra et al., CVPR 2021 show much better performance on the same tasks using Transformers, the paper proposes a viable method that uses CRFs. It is beneficial to the community since it answers the question “How would CRF models perform on the task of scene graph generation?” Other suggestions The experiments use VGG16 as the backbone for comparison with other methods. However, it would be good to include additional results with ResNet as the backbone since the latest methods have shown improvements when switching from VGG16 to ResNet. Ex: Dhingra et al., CVPR 2021.
NIPS
Title Meta-Query-Net: Resolving Purity-Informativeness Dilemma in Open-set Active Learning Abstract Unlabeled data examples awaiting annotations contain open-set noise inevitably. A few active learning studies have attempted to deal with this open-set noise for sample selection by filtering out the noisy examples. However, because focusing on the purity of examples in a query set leads to overlooking the informativeness of the examples, the best balancing of purity and informativeness remains an important question. In this paper, to solve this purity-informativeness dilemma in open-set active learning, we propose a novel Meta-Query-Net (MQ-Net) that adaptively finds the best balancing between the two factors. Specifically, by leveraging the multi-round property of active learning, we train MQ-Net using a query set without an additional validation set. Furthermore, a clear dominance relationship between unlabeled examples is effectively captured by MQ-Net through a novel skyline regularization. Extensive experiments on multiple open-set active learning scenarios demonstrate that the proposed MQ-Net achieves 20.14% improvement in terms of accuracy, compared with the state-of-the-art methods. 1 Introduction The success of deep learning in many complex tasks highly depends on the availability of massive data with well-annotated labels, which are very costly to obtain in practice [1]. Active learning (AL) is one of the popular learning frameworks to reduce the high human-labeling cost, where a small number of maximally-informative examples are selected by a query strategy and labeled by an oracle repeatedly [2]. Numerous query (i.e., sample selection) strategies, mainly categorized into uncertaintybased sampling [3, 4, 5] and diversity-based sampling [6, 7, 8], have succeeded in effectively reducing the labeling cost while achieving high model performance. Despite their success, most standard AL approaches rely on a strict assumption that all unlabeled examples should be cleanly collected from a pre-defined domain called in-distribution (IN), even before being labeled [9]. This assumption is unrealistic in practice since the unlabeled examples are mostly collected from rather casual data curation processes such as web-crawling. Notably, in the Google search engine, the precision of image retrieval is reported to be 82% on average, and it is worsened to 48% for unpopular entities [10, 11]. That is, such collected unlabeled data naturally involves open-set noise, which is defined as a set of the examples collected from different domains called out-of-distribution (OOD) [12]. In general, standard AL approaches favor the examples either highly uncertain in predictions or highly diverse in representations as a query for labeling. However, the addition of open-set noise makes these two measures fail to identify informative examples; the OOD examples also exhibit high uncertainty and diversity because they share neither class-distinctive features nor other inductive biases with IN examples [14, 15]. As a result, an active learner is confused and likely to query the OOD examples to ∗Corresponding authors. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). a human-annotator for labeling. Human annotators would disregard the OOD examples because they are unnecessary for the target task, thereby wasting the labeling budget. 
Therefore, the problem of active learning with open-set noise, which we call open-set active learning, has emerged as a new important challenge for real-world applications. Recently, a few studies have attempted to deal with the open-set noise for active learning [13, 16]. They commonly try to increase the purity of examples in a query set, which is defined as the proportion of IN examples, by effectively filtering out the OOD examples. However, whether focusing on the purity is needed throughout the entire training period remains a question. In Figure 1(a), let’s consider an open-set AL task with a binary classification of cats and dogs, where the images of other animals, e.g., horses and wolves, are regarded as OOD examples. It is clear that the group of high purity and high informativeness (HP-HI) is the most preferable for sample selection. However, when comparing the group of high purity and low informativeness (HP-LI) and that of low purity and high informativeness (LP-HI), the preference between these two groups of examples is not clear, but rather contingent on the learning stage and the ratio of OOD examples. Thus, we coin a new term “purity-informativeness dilemma” to call attention to the best balancing of purity and informativeness. Figures 1(b) and 1(c) illustrate the purity-informativeness dilemma. The standard AL approach, LL[5], puts more weight on the examples of high informativeness (denoted as HI-focused), while the existing open-set AL approach, CCAL [13], puts more weight on those of high purity (denoted as HP-focused). The HP-focused approach improves the test accuracy more significantly than the HI-focused one at earlier AL rounds, meaning that pure as well as easy examples are more beneficial. In contrast, the HI-focused approach beats the HP-focused one at later AL rounds, meaning that highly informative examples should be selected even at the expense of purity. Furthermore, comparing a low OOD (noise) ratio in Figure 1(b) and a high OOD ratio in Figure 1(c), the shift from HP-dominance to HI-dominance tends to occur later at a higher OOD ratio, which renders this dilemma more difficult. In this paper, to solve the purity-informativeness dilemma in open-set AL, we propose a novel metamodel Meta-Query-Net (MQ-Net) that adaptively finds the best balancing between the two factors. A key challenge is the best balancing is unknown in advance. The meta-model is trained to assign higher priority for in-distribution examples over OOD examples as well as for more informative examples among in-distribution ones. The input to the meta-model, which includes the target and OOD labels, is obtained for free from each AL round’s query set by leveraging the multi-round property of AL. Moreover, the meta-model is optimized more stably through a novel regularization inspired by the skyline query [17, 18] popularly used in multi-objective optimization. As a result, MQ-Net can guide the learning of the target model by providing the best balancing between purity and informativeness throughout the entire training period. Overall, our main contributions are summarized as follows: 1. We formulate the purity-informativeness dilemma, which hinders the usability of open-set AL in real-world applications. 2. As our answer to the dilemma, we propose a novel AL framework, MQ-Net, which keeps finding the best trade-off between purity and informativeness. 3. 
Extensive experiments on CIFAR10, CIFAR100, and ImageNet show that MQ-Net improves the classifier accuracy consistently, by up to 20.14%, when the OOD ratio changes from 10% to 60%. 2 Related Work 2.1 Active Learning and Open-set Recognition Active Learning is a learning framework to reduce the human labeling cost by finding the most informative examples given unlabeled data [9, 19]. One popular direction is uncertainty-based sampling. Typical approaches have exploited prediction probability, e.g., soft-max confidence [20, 3], margin [21], and entropy [22]. Some approaches obtain uncertainty by Monte Carlo Dropout on multiple forward passes [23, 24, 25]. LL [5] predicts the loss of examples by jointly learning a loss prediction module with a target model. Meanwhile, diversity-based sampling has also been widely studied. To incorporate diversity, most methods use a clustering [6, 26] or coreset selection algorithm [7]. Notably, CoreSet [7] finds the set of examples having the highest distance coverage on the entire unlabeled data. BADGE [8] is a hybrid of uncertainty- and diversity-based sampling which uses k-means++ clustering in the gradient embedding space. However, this family of approaches is not appropriate for open-set AL since they do not consider how to handle the OOD examples for query selection. Open-set Recognition (OSR) is a detection task to recognize the examples outside of the target domain [12]. Closely related to this purpose, OOD detection has been actively studied [27]. Recent work can be categorized into classifier-dependent, density-based, and self-supervised approaches. The classifier-dependent approach leverages a pre-trained classifier and introduces several scoring functions, such as Uncertainty [28], ODIN [29], Mahalanobis distance (MD) [30], and Energy [31]. Recently, ReAct [32] shows that rectifying penultimate activations can enhance most of the aforementioned classifier-dependent OOD scores. The density-based approach learns an auxiliary generative model like a variational auto-encoder to compute likelihood-based OOD scores [33, 34, 35]. Most self-supervised approaches leverage contrastive learning [36, 37, 38]. CSI shows that contrasting with distributionally-shifted augmentations can considerably enhance the OSR performance [36]. The OSR performance of classifier-dependent approaches degrades significantly if the classifier performs poorly [39]. Similarly, the performance of density-based and self-supervised approaches heavily depends on the amount of clean IN data [35, 36]. Therefore, open-set active learning cannot be resolved by simply applying OSR approaches, since it is difficult to obtain high-quality classifiers and sufficient IN data at early AL rounds. 2.2 Open-set Active Learning Two recent approaches have attempted to handle the open-set noise for AL [13, 16]. Both approaches try to increase purity in query selection by effectively filtering out the OOD examples. CCAL [13] learns two contrastive coding models, one each for calculating the informativeness and OODness of an example, and combines the two scores using a heuristic balancing rule. SIMILAR [16] selects a pure and core set of examples that maximize the distance on the entire unlabeled data while minimizing the distance to the identified OOD data.
However, we found that CCAL and SIMILAR are often worse than standard AL methods, since they always put higher weights on purity although informativeness should be emphasized when the open-set noise ratio is small or in later AL rounds. This calls for developing a new solution to carefully find the best balance between purity and informativeness. 3 Purity-Informativeness Dilemma in Open-set Active Learning 3.1 Problem Statement Let DIN and DOOD be the IN and OOD data distributions, where the label of examples from DOOD does not belong to any of the k known labels Y = {yi}ki=1. Then, an unlabeled set is a mixture of IN and OOD examples, U = {XIN, XOOD}, i.e., XIN ∼ DIN and XOOD ∼ DOOD. In open-set AL, a human oracle is requested to assign a known label y to an IN example x ∈ XIN with a labeling cost cIN, while an OOD example x ∈ XOOD is marked as open-set noise with a labeling cost cOOD. AL imposes restrictions on the labeling budget b every round. It starts with a small labeled set SL, consisting of both labeled IN and OOD examples. The initial labeled set SL improves by adding a small but maximally-informative labeled query set SQ per round, i.e., SL←SL∪SQ, where the labeling cost for SQ by the oracle does not exceed the labeling budget b. Hence, the goal of open-set AL is to construct the optimal query set S∗Q, minimizing the loss for the unseen target IN data. The difference from standard AL is that the labeling cost for OOD examples is introduced, where the labeling budget is wasted when OOD examples are misclassified as informative ones. Formally, let C(·) be the labeling cost function for a given unlabeled set; then, each round of open-set AL is formulated to find the best query set S∗Q as

$$S_Q^{*} = \operatorname*{argmin}_{S_Q:\; C(S_Q)\leq b} \;\mathbb{E}_{(x,y)\in T_{IN}}\Big[\,\ell_{cls}\big(f(x;\Theta_{S_L\cup S_Q}),\,y\big)\Big],\quad \text{where}\;\; C(S_Q)=\sum_{x\in S_Q}\Big(\mathbb{1}_{[x\in X_{IN}]}\,c_{IN}+\mathbb{1}_{[x\in X_{OOD}]}\,c_{OOD}\Big). \quad (1)$$

Here, f(·; ΘSL∪SQ) denotes the target model trained on only the IN examples in SL ∪ SQ, and ℓcls is a certain loss function, e.g., cross-entropy, for classification. For each AL round, all the examples in S∗Q are removed from the unlabeled set U and then added to the accumulated labeled set SL with their labels. This procedure repeats for the total number r of rounds. 3.2 Purity-Informativeness Dilemma An ideal approach for open-set AL would be to increase both purity and informativeness of a query set by completely suppressing the selection of OOD examples and accurately querying the most informative examples among the remaining IN examples. However, the ideal approach is infeasible because overly emphasizing purity in query selection does not promote example informativeness and vice versa. Specifically, OOD examples with low purity scores mostly exhibit high informativeness scores because they share neither class-distinctive features nor other inductive biases with the IN examples [14, 15]. We call this trade-off in query selection the purity-informativeness dilemma, which is a new finding that we expect to trigger subsequent work. To address this dilemma, we need to consider the proper weights of a purity score and an informativeness score when they are combined. Let P(x) be a purity score of an example x, which can be measured by any existing OOD score, e.g., negative energy [31], and I(x) be an informativeness score of an example x from any standard AL strategy, e.g., uncertainty [3] and diversity [26]. Next, suppose zx = ⟨P(x), I(x)⟩ is the tuple of available purity and informativeness scores for an example x.
Then, a score combination function Φ(zx), where zx = ⟨P(x), I(x)⟩, is defined to return an overall score that indicates the necessity of including x in the query set. Given two unlabeled examples xi and xj, if P(xi) > P(xj) and I(xi) > I(xj), it is clear that xi should be favored over xj, i.e., Φ(zxi) > Φ(zxj). However, due to the purity-informativeness dilemma, if P(xi) > P(xj) and I(xi) < I(xj), or P(xi) < P(xj) and I(xi) > I(xj), it is very challenging to determine the dominance between Φ(zxi) and Φ(zxj). In order to design Φ(·), we mainly focus on leveraging meta-learning, which is a more agnostic approach to resolving the dilemma than heuristic approaches such as linear combination and multiplication. 4 Meta-Query-Net We propose a meta-model, named Meta-Query-Net (MQ-Net), which aims to learn a meta-score function for the purpose of identifying a query set. In the presence of open-set noise, MQ-Net outputs the meta-score for unlabeled examples to achieve the best balance between purity and informativeness in the selected query set. In this section, we introduce the notion of a self-validation set to guide the meta-model in a supervised manner and then present the meta-objective of MQ-Net for training. Then, we propose a novel skyline constraint used in optimization, which helps MQ-Net capture the obvious preference among unlabeled examples when a clear dominance exists. Next, we present a way of converting the purity and informativeness scores estimated by existing methods for use in MQ-Net. Note that training MQ-Net is not expensive because it builds a light meta-model on a small self-validation set. The overview of MQ-Net is illustrated in Figure 2. 4.1 Training Objective with Self-validation Set The parameters w of MQ-Net Φ(·;w) are optimized in a supervised manner. For clean supervision, validation data is required for training. Without assuming a hard-to-obtain clean validation set, we propose to use a self-validation set, which is instantaneously generated in every AL round. In detail, we obtain a labeled query set SQ from the oracle, consisting of a labeled IN set and an identified OOD set, in every round. Since the query set SQ is unseen to the target model Θ and the meta-model w at the current round, we can exploit it as a self-validation set to train MQ-Net. This self-validation set eliminates the need for a clean validation set in meta-learning. Given the ground-truth labels in the self-validation set, it is feasible to guide MQ-Net to be trained to resolve the purity-informativeness dilemma by designing an appropriate meta-objective. It is based on the cross-entropy loss for classification because the loss value of training examples has been proven to be effective in identifying highly informative examples [5]. The conventional loss value produced by a target model Θ is masked to be zero if x ∈ XOOD, since OOD examples are useless for AL:

$$\ell_{mce}(x)=\mathbb{1}_{[l_x=1]}\,\ell_{ce}\big(f(x;\Theta),\,y\big), \quad (2)$$

where l_x is the true binary IN label, i.e., 1 for IN examples and 0 for OOD examples, which can be reliably obtained from the self-validation set. This masked loss, ℓmce, preserves the informativeness of IN examples while excluding OOD examples. Given self-validation data SQ, the meta-objective is defined such that MQ-Net parameterized by w outputs a high (or low) meta-score Φ(zx;w) if an example x’s masked loss value is large (or small):

$$\mathcal{L}(S_Q)=\sum_{i\in S_Q}\sum_{j\in S_Q}\max\Big(0,\,-\mathrm{Sign}\big(\ell_{mce}(x_i),\,\ell_{mce}(x_j)\big)\cdot\big(\Phi(z_{x_i};w)-\Phi(z_{x_j};w)+\eta\big)\Big)$$
$$\text{s.t.}\;\;\forall x_i, x_j,\;\text{if}\; P(x_i)>P(x_j)\;\text{and}\; I(x_i)>I(x_j),\;\text{then}\;\Phi(z_{x_i};w)>\Phi(z_{x_j};w), \quad (3)$$
where η > 0 is a constant margin for the ranking loss, and Sign(a, b) is an indicator function that returns +1 if a > b, 0 if a = b, and −1 otherwise. Hence, Φ(zxi;w) is forced to be higher than Φ(zxj;w) if ℓmce(xi) > ℓmce(xj); in contrast, Φ(zxi;w) is forced to be lower than Φ(zxj;w) if ℓmce(xi) < ℓmce(xj). Two OOD examples do not affect the optimization because there is no priority between them, i.e., ℓmce(xi) = ℓmce(xj). In addition to the ranking loss, we add a regularization term named the skyline constraint (i.e., the second line) to the meta-objective in Eq. (3); it is inspired by the skyline query, which aims to narrow down a search space in a large-scale database by keeping only those items that are not worse than any other [17, 18]. Specifically, in the case of P(xi) > P(xj) and I(xi) > I(xj), the condition Φ(zxi;w) > Φ(zxj;w) must hold in our objective, and hence we adopt this proposition as the skyline constraint. This simple yet intuitive regularization is very helpful for achieving a meta-model that better judges the importance of purity or informativeness. We provide an ablation study on the skyline constraint in Section 5.4. 4.2 Architecture of MQ-Net MQ-Net is parameterized by a multi-layer perceptron (MLP), a widely-used deep learning architecture for meta-learning [40]. A challenge here is that the proposed skyline constraint in Eq. (3) does not hold with a standard MLP model. To satisfy the skyline constraint, the meta-score function Φ(·;w) should be a monotonic non-decreasing function, because the output (meta-score) of MQ-Net for an example xi must be higher than that for another example xj if the two factors (purity and informativeness) of xi are both higher than those of xj. The MLP model consists of multiple matrix multiplications with non-linear activation functions such as ReLU and Sigmoid. In order for the MLP model to be monotonically non-decreasing, all the parameters in w for Φ(·;w) should be non-negative, as proven by Theorem 4.1. Theorem 4.1. For any MLP meta-model w with non-decreasing activation functions, a meta-score function Φ(z;w) : R^d → R satisfies the skyline constraint if w ⪰ 0 and z (∈ R^d) ⪰ 0, where ⪰ is the component-wise inequality. Proof. An MLP model involves matrix multiplication and composition with activation functions, which are characterized by three basic operators: (1) addition: h(z) = f(z) + g(z), (2) multiplication: h(z) = f(z) × g(z), and (3) composition: h(z) = f ◦ g(z). These three operators are guaranteed to be non-decreasing functions if the parameters of the MLP model are all non-negative, because the non-negative weights guarantee that all decomposed scalar operations in the MLP are non-decreasing functions. Combining the three operators, the MLP model Φ(z;w), where w ⪰ 0, naturally becomes a monotonic non-decreasing function of each input dimension. Refer to Appendix A for the complete proof. In implementation, non-negative weights are guaranteed by applying a ReLU function to the meta-model parameters. Since the ReLU function is differentiable (almost everywhere), MQ-Net can be trained with the proposed objective in an end-to-end manner. With this simple modification, the skyline constraint is preserved successfully without introducing any complex loss-based regularization term. The only remaining condition is that each input of MQ-Net must be a vector of non-negative entries.
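To make Sections 4.1-4.2 concrete, below is a minimal PyTorch sketch of the masked loss (Eq. (2)), the pairwise ranking meta-objective (Eq. (3)), the skyline-preserving monotonic MLP (Theorem 4.1), and the score conversion described in Section 4.3.1 below. The class and function names (MQNet, masked_ce, ranking_meta_loss, to_meta_input) are illustrative rather than taken from the official repository, and details such as weight initialization may differ from the released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MQNet(nn.Module):
    """Sketch of a 2-layer monotonic MLP meta-model.

    Raw weight matrices are passed through ReLU before use, so the effective
    weights are non-negative; with non-decreasing activations, the meta-score
    is then non-decreasing in each input dimension (Theorem 4.1). Bias terms
    do not affect monotonicity and are left unconstrained in this sketch.
    """
    def __init__(self, in_dim: int = 2, hidden_dim: int = 64):
        super().__init__()
        self.w1 = nn.Parameter(torch.rand(in_dim, hidden_dim) * 0.1)
        self.b1 = nn.Parameter(torch.zeros(hidden_dim))
        self.w2 = nn.Parameter(torch.rand(hidden_dim, 1) * 0.1)
        self.b2 = nn.Parameter(torch.zeros(1))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, 2) non-negative purity/informativeness inputs.
        h = torch.sigmoid(z @ F.relu(self.w1) + self.b1)
        return (h @ F.relu(self.w2) + self.b2).squeeze(-1)


def masked_ce(logits: torch.Tensor, labels: torch.Tensor, is_in: torch.Tensor) -> torch.Tensor:
    """Masked cross-entropy of Eq. (2): zero for OOD examples (is_in == 0).

    OOD entries of `labels` can be any dummy class index; their loss is masked out.
    """
    per_example = F.cross_entropy(logits, labels, reduction="none")
    return per_example * is_in.float()


def ranking_meta_loss(meta_scores: torch.Tensor, masked_losses: torch.Tensor,
                      margin: float = 1.0) -> torch.Tensor:
    """Pairwise ranking loss of Eq. (3) over a self-validation (query) set."""
    diff = meta_scores.unsqueeze(1) - meta_scores.unsqueeze(0)              # Phi(z_i) - Phi(z_j)
    sign = torch.sign(masked_losses.unsqueeze(1) - masked_losses.unsqueeze(0))
    # Pairs with equal masked loss (e.g., two OOD examples) have sign 0
    # and therefore contribute nothing to the loss.
    return torch.clamp(-sign * (diff + margin), min=0).sum()


def to_meta_input(ood_scores: torch.Tensor, al_scores: torch.Tensor) -> torch.Tensor:
    """Score conversion of Section 4.3.1: z_x = <P(x), I(x)> with all entries > 0."""
    def z_norm(s):
        return (s - s.mean()) / (s.std() + 1e-8)
    purity = torch.exp(z_norm(-ood_scores))          # P(x) = Exp(Normalize(-O(x)))
    informativeness = torch.exp(z_norm(al_scores))   # I(x) = Exp(Normalize(Q(x)))
    return torch.stack([purity, informativeness], dim=1)
```

In a given AL round, one would form z = to_meta_input(ood_scores, al_scores) for the unlabeled pool, query the examples with the highest MQNet scores within the budget, and then minimize ranking_meta_loss on the newly labeled query set; the exact training schedule follows the pseudocode in Appendix B.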
4.3 Active Learning with MQ-Net 4.3.1 Meta-input Conversion MQ-Net receives zx = ⟨P(x), I(x)⟩ and then returns a meta-score for query selection. All the scores for the input of MQ-Net should be positive to preserve the skyline constraint, i.e., z ⪰ 0. Existing OOD and AL query scores are converted to the meta-input. The methods used for calculating the scores are orthogonal to our framework. The OOD score O(·) is conceptually the opposite of purity and varies in its scale; hence, we convert it to a purity score by P(x) = Exp(Normalize(−O(x))), where Normalize(·) is the z-score normalization. This conversion guarantees that the purity score is positive. Similarly, for the informativeness score, we convert an existing AL query score Q(·) to I(x) = Exp(Normalize(Q(x))). For the z-score normalization, we compute the mean and standard deviation of O(x) or Q(x) over the unlabeled examples. The mean and standard deviation are recomputed before meta-training in each round and used for the z-score normalization at that round. 4.3.2 Overall Procedure For each AL round, a target model is trained via stochastic gradient descent (SGD) on mini-batches sampled from the IN examples in the current labeled set SL. Based on the current target model, the purity and informativeness scores are computed by using certain OOD and AL query scores. The querying phase is then performed by selecting the examples SQ with the highest meta-scores within the labeling budget b. The query set SQ is used as the self-validation set for training MQ-Net at the current AL round. The trained MQ-Net is used at the next AL round. The alternating procedure of updating the target model and the meta-model repeats for a given number r of AL rounds. The pseudocode of MQ-Net can be found in Appendix B. 5 Experiments 5.1 Experiment Setting Datasets. We perform the active learning task on three benchmark datasets: CIFAR10 [41], CIFAR100 [41], and ImageNet [42]. Following the ‘split-dataset’ setup in the open-world learning literature [13, 16, 43], we divide each dataset into two subsets: (1) the target set with IN classes and (2) the noise set with OOD classes. Specifically, CIFAR10 is split into the target set with four classes and the noise set with the remaining six classes; CIFAR100 into the two sets with 40 and 60 classes; and ImageNet into the two sets with 50 and 950 classes. The entire target set is used as the unlabeled IN data, while only a part of the classes in the noise set is selected as the unlabeled OOD data according to the given noise ratio. In addition, following the OOD detection literature [28, 33], we also consider the ‘cross-dataset’ setup, which mixes a certain dataset with two external OOD datasets collected from different domains, such as LSUN [44] and Places365 [45]. For the sake of space, we present all the results on the cross-dataset setup in Appendix D. Algorithms. We compare MQ-Net with random selection, four standard AL approaches, and two recent open-set AL approaches. • Standard AL: The four methods perform AL without any processing for open-set noise: (1) CONF [3] queries the most uncertain examples with the lowest softmax confidence in the prediction, (2) CORESET [7] queries the most diverse examples with the highest coverage in the representation space, (3) LL [5] queries the examples having the largest predicted loss by jointly learning a loss prediction module, and (4) BADGE [8] considers both uncertainty and diversity by querying the most representative examples in the gradient space via k-means++ clustering [46].
• Open-set AL: The two methods tend to put more weight on the examples with high purity: (1) CCAL [13] learns two contrastive coding models for calculating informativeness and OODness, and then combines the two scores into one using a heuristic balancing rule, and (2) SIMILAR [16] selects a pure and core set of examples that maximize the distance coverage on the entire unlabeled data while minimizing the distance coverage to the already labeled OOD data. For all the experiments, regarding the two inputs of MQ-Net, we mainly use CSI [36] and LL [5] for calculating the purity and informativeness scores, respectively. For CSI, as in CCAL, we train a contrastive learner on the entire unlabeled set with open-set noise since the clean in-distribution set is not available in open-set AL. The ablation study in Section 5.4 shows that MQ-Net is also effective with other OOD and AL scores as its input. Implementation Details. We repeat the three steps—training, querying, and labeling—of AL. The total number r of rounds is set to 10. Following the prior open-set AL setup [13, 16], we set the labeling cost cIN = 1 for IN examples and cOOD = 1 for OOD examples. For the class-split setup, the labeling budget b per round is set to 500 for CIFAR10/100 and 1,000 for ImageNet. Regarding the open-set noise ratio τ, we configure four different levels from light to heavy noise in {10%, 20%, 40%, 60%}. In the case of τ = 0% (no noise), MQ-Net naturally discards the purity score and only uses the informativeness score for query selection, since the self-validation set does not contain any OOD examples. The initial labeled set is selected uniformly at random from the entire unlabeled set within the labeling budget b. For the architecture of MQ-Net, we use a 2-layer MLP with a hidden dimension size of 64 and the Sigmoid activation function. We report the average results of five runs with different class splits. We did not use any pre-trained networks. See Appendix C for more implementation details and training configurations. All methods are implemented with PyTorch 1.8.0 and executed on a single NVIDIA Tesla V100 GPU. The code is available at https://github.com/kaist-dmlab/MQNet. 5.2 Open-set Noise Robustness 5.2.1 Results over AL Rounds Figure 3 illustrates the test accuracy of the target model over AL rounds on the two CIFAR datasets. MQ-Net achieves the highest test accuracy in most AL rounds, thereby reaching the best test accuracy at the final round in every case for various datasets and noise ratios. Compared with the two existing open-set AL methods, CCAL and SIMILAR, MQ-Net shows a steeper improvement in test accuracy over rounds by resolving the purity-informativeness dilemma in query selection. For example, the performance gap between MQ-Net and the two open-set AL methods gets larger after the sixth round, as shown in Figure 3(b), because CCAL and SIMILAR mainly depend on purity in query selection, which provides less informative examples to the classifier. For a better classifier, informative examples should be favored at later AL rounds because the labeled set already contains a sufficient number of IN examples. In contrast, MQ-Net keeps improving the test accuracy even in later AL rounds by finding the best balancing between purity and informativeness in its query set. More analysis of MQ-Net associated with the purity-informativeness dilemma is discussed in Section 5.3.
5.2.2 Results with Varying Noise Ratios Table 1 summarizes the last test accuracy at the final AL round for three datasets with varying levels of open-set noise. Overall, the last test accuracy of MQ-Net is the best in every case. This consistent superiority indicates that MQ-Net successfully finds the best trade-off between purity and informativeness in terms of AL accuracy regardless of the noise ratio. In general, the performance improvement becomes larger as the noise ratio increases. On the other hand, the two open-set AL approaches are even worse than the four standard AL approaches when the noise ratio is less than or equal to 20%. Especially in CIFAR10, which is relatively easier than the other datasets, CCAL and SIMILAR are inferior to the non-robust AL method, LL, even with 40% noise. This trend confirms that increasing informativeness is more crucial than increasing purity when the noise ratio is small; highly informative examples are still beneficial when the performance of a classifier is saturated in the presence of open-set noise. An in-depth analysis of the low accuracy of the existing open-set AL approaches in a low noise ratio is presented in Appendix E.

Table 2: Effect of the meta inputs on MQ-Net (CIFAR10, 4:6 split).
Noise Ratio                  10%     20%     40%     60%
Standard AL  BADGE           92.80   91.73   89.27   86.83
Open-set AL  CCAL            90.55   89.99   88.87   87.49
MQ-Net       CONF-ReAct      93.21   91.89   89.54   87.99
MQ-Net       CONF-CSI        93.28   92.40   91.43   89.37
MQ-Net       LL-ReAct        92.34   91.85   90.08   88.41
MQ-Net       LL-CSI          93.10   92.10   91.48   89.51

Table 3: Efficacy of the self-validation set (CIFAR10, 4:6 split).
Noise Ratio                  10%     20%     40%     60%
MQ-Net       Query set       93.10   92.10   91.48   89.51
MQ-Net       Random          92.10   91.75   90.88   87.65

Table 4: Efficacy of the skyline constraint.
Noise Ratio                  10%     20%     40%     60%
MQ-Net       w/ skyline      93.10   92.10   91.48   89.51
MQ-Net       w/o skyline     87.25   86.29   83.61   81.67

5.3 Answers to the Purity-Informativeness Dilemma The high robustness of MQ-Net in Table 1 and Figure 3 is mainly attributed to its ability to keep finding the best trade-off between purity and informativeness. Figure 4(a) illustrates the preference change of MQ-Net between purity and informativeness throughout the AL rounds. As the rounds progress, MQ-Net automatically raises the importance of informativeness over purity; the slope of the tangent line keeps steepening from −0.74 to −1.21. This trend implies that more informative examples need to be labeled as the target classifier matures. That is, as the model performance increases, ‘fewer but highly-informative’ examples are more impactful than ‘more but less-informative’ examples. Figure 4(b) describes the preference change of MQ-Net with varying noise ratios. Contrary to the trend over AL rounds, as the noise ratio gets higher, MQ-Net prefers purity more over informativeness. 5.4 Ablation Studies Various Combinations of Meta-input. MQ-Net can design its purity and informativeness scores by leveraging diverse metrics in the existing OOD detection and AL literature. Table 2 shows the final-round test accuracy on CIFAR10 for the four variants of score combinations, each of which is constructed by combining one of two purity scores with one of two informativeness scores; the purity scores are induced by two recent OOD detection methods, ReAct [32] and CSI [36], while the informativeness scores are converted from two existing AL methods, CONF and LL. “CONF-ReAct” denotes a variant that uses ReAct as the purity score and CONF as the informativeness score.
Overall, all variants perform better than the standard and open-set AL baselines at every noise level. Refer to Table 2 for a detailed comparison. This result indicates that MQ-Net generalizes across different types of meta-input owing to the learning flexibility of MLPs. Interestingly, the variants using CSI as the purity score are consistently better than those using ReAct. ReAct, a classifier-dependent OOD score, performs poorly in earlier AL rounds. A detailed analysis of the two OOD detectors, ReAct and CSI, over AL rounds can be found in Appendix F. Efficacy of Self-validation Set. MQ-Net can be trained with an independent validation set instead of the proposed self-validation set. We generate the independent validation set by randomly sampling the same number of examples as the self-validation set, with their ground-truth labels, from the entire data that does not overlap with the unlabeled set used for AL. As can be seen from Table 3, our self-validation set performs better than the random validation set. The two validation sets have a major difference in data distributions: the self-validation set mainly consists of the examples with the highest meta-scores among the remaining unlabeled data per round, while the random validation set consists of random examples. We conclude that the meta-score of MQ-Net has the potential for constructing a high-quality validation set in addition to query selection. Efficacy of Skyline Constraint. Table 4 demonstrates the final-round test accuracy of MQ-Net with and without the skyline constraint. For the latter, a standard 2-layer MLP is used as the meta-network architecture without any modification. The performance of MQ-Net degrades significantly without the skyline constraint, meaning that the non-constrained MLP can easily overfit to the small-sized self-validation set, thereby assigning high output scores to less-pure and less-informative examples. Therefore, violating the skyline constraint in optimization makes it hard for MQ-Net to balance the purity and informativeness scores in query selection. Efficacy of Meta-objective. MQ-Net keeps finding the best balance between purity and informativeness over multiple AL rounds by repeatedly minimizing the meta-objective in Eq. (3). To validate its efficacy, we compare it with two simple alternatives based on heuristic balancing rules, namely linear combination and multiplication, denoted as P(x) + I(x) and P(x) · I(x), respectively. Following the default setting of MQ-Net, we use CSI for P(x) and LL for I(x). Table 5 shows the AL performance of the two alternatives and MQ-Net for the split-dataset setup on CIFAR10 with noise ratios of 20% and 40%. MQ-Net beats the two alternatives after the second AL round, where MQ-Net starts balancing purity and informativeness with its meta-objective. This result implies that our meta-objective successfully finds the best balance between purity and informativeness by emphasizing informativeness over purity at the later AL rounds. 5.5 Effect of Varying OOD Labeling Cost The labeling cost for OOD examples can vary across data domains. To validate the robustness of MQ-Net under diverse labeling scenarios, we conduct an additional study adjusting the labeling cost cOOD for the OOD examples. Table 6 summarizes the performance change with four different labeling costs (i.e., 0.5, 1, 2, and 4). The two standard AL methods, CONF and CORESET, and two open-set AL methods, CCAL and SIMILAR, are compared with MQ-Net.
Overall, MQ-Net consistently outperforms the four baselines regardless of the labeling cost. Meanwhile, CCAL and SIMILAR are more robust to a higher labeling cost than CONF and CORESET; CCAL and SIMILAR, which favor high-purity examples, query more IN examples than CONF and CORESET, so they are less affected by the labeling cost, especially when it is high. 6 Conclusion We propose MQ-Net, a novel meta-model for open-set active learning that deals with the purity-informativeness dilemma. MQ-Net finds the best balancing between the two factors, adapting to the noise ratio and the status of the target model. A clean validation set for the meta-model is obtained for free by exploiting the procedure of active learning. A ranking loss with the skyline constraint optimizes MQ-Net so that its output is a legitimate meta-score that preserves the obvious ordering between examples. MQ-Net is shown to yield the best test accuracy throughout all active learning rounds, thereby empirically validating our solution to the purity-informativeness dilemma. Overall, we expect that our work will improve the practical usability of active learning with open-set noise. Acknowledgement This work was supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2020-0-00862, DB4DL: High-Usability and Performance In-Memory Distributed DBMS for Deep Learning). The experiments were conducted courtesy of NAVER Smart Machine Learning (NSML) [47].
1. What is the focus and contribution of the paper regarding the "purity-informativeness dilemma" problem? 2. What are the strengths of the proposed MQ-Net algorithm, particularly in its design and implementation? 3. What are the weaknesses of the paper, especially regarding the lack of discussion on certain aspects? 4. Do you have any concerns or suggestions regarding the architecture of MQ-Net and its performance comparisons with other alternatives? 5. What are the limitations of the paper, and how could they be addressed in future works?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper proposes the "purity-informativeness dilemma" problem and a meta-learning algorithm, MQ-Net, to adaptively balance the purity and informativeness scores for better sample selection. The experimental results verify the validity of the identified problem and the proposed solution. Strengths And Weaknesses Strengths The "purity-informativeness dilemma" problem is essential, and the analysis is convincing. The design and implementation of MQ-Net are reasonable. The active learning architecture is clean and clear, with significantly better performance. Weaknesses The paper does not discuss the benefit of training the MQ-Net in each round. The performance comparison between MQ-Net and other simple alternatives is not discussed. Those alternatives may include heuristic rules like 1/((P(x) − 1)^2 + (I(x) − 1)^2), logistic regressions, and ranking SVMs. The architecture of MQ-Net is not clearly stated. The activation functions are not reported, and the layer number is only reported in the appendix. Questions As said in the weakness part: Is it necessary to train the MQ-Net in each round? What if we train it in the first round (or first several rounds) and fix it in the remaining rounds? Figure 4 indicates that the function learned by the MQ-Net is fairly simple. How does MQ-Net's performance differ from those simple alternatives? Limitations The authors mentioned one limitation: they only consider one purity score and one informativeness score as input. They left the multi-score input as future work. However, the multi-score version can be implemented intuitively by adding more input dimensions to MQ-Net. Since the authors had already computed many scores in Section 5.4, the multi-score version is very convenient to implement. Furthermore, when more scores are used, there would be a score selection problem in MQ-Net. A solution to that problem would increase the quality of the paper.
NIPS
Title Meta-Query-Net: Resolving Purity-Informativeness Dilemma in Open-set Active Learning Abstract Unlabeled data examples awaiting annotations contain open-set noise inevitably. A few active learning studies have attempted to deal with this open-set noise for sample selection by filtering out the noisy examples. However, because focusing on the purity of examples in a query set leads to overlooking the informativeness of the examples, the best balancing of purity and informativeness remains an important question. In this paper, to solve this purity-informativeness dilemma in open-set active learning, we propose a novel Meta-Query-Net (MQ-Net) that adaptively finds the best balancing between the two factors. Specifically, by leveraging the multi-round property of active learning, we train MQ-Net using a query set without an additional validation set. Furthermore, a clear dominance relationship between unlabeled examples is effectively captured by MQ-Net through a novel skyline regularization. Extensive experiments on multiple open-set active learning scenarios demonstrate that the proposed MQ-Net achieves 20.14% improvement in terms of accuracy, compared with the state-of-the-art methods. 1 Introduction The success of deep learning in many complex tasks highly depends on the availability of massive data with well-annotated labels, which are very costly to obtain in practice [1]. Active learning (AL) is one of the popular learning frameworks to reduce the high human-labeling cost, where a small number of maximally-informative examples are selected by a query strategy and labeled by an oracle repeatedly [2]. Numerous query (i.e., sample selection) strategies, mainly categorized into uncertaintybased sampling [3, 4, 5] and diversity-based sampling [6, 7, 8], have succeeded in effectively reducing the labeling cost while achieving high model performance. Despite their success, most standard AL approaches rely on a strict assumption that all unlabeled examples should be cleanly collected from a pre-defined domain called in-distribution (IN), even before being labeled [9]. This assumption is unrealistic in practice since the unlabeled examples are mostly collected from rather casual data curation processes such as web-crawling. Notably, in the Google search engine, the precision of image retrieval is reported to be 82% on average, and it is worsened to 48% for unpopular entities [10, 11]. That is, such collected unlabeled data naturally involves open-set noise, which is defined as a set of the examples collected from different domains called out-of-distribution (OOD) [12]. In general, standard AL approaches favor the examples either highly uncertain in predictions or highly diverse in representations as a query for labeling. However, the addition of open-set noise makes these two measures fail to identify informative examples; the OOD examples also exhibit high uncertainty and diversity because they share neither class-distinctive features nor other inductive biases with IN examples [14, 15]. As a result, an active learner is confused and likely to query the OOD examples to ∗Corresponding authors. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). a human-annotator for labeling. Human annotators would disregard the OOD examples because they are unnecessary for the target task, thereby wasting the labeling budget. 
Therefore, the problem of active learning with open-set noise, which we call open-set active learning, has emerged as a new important challenge for real-world applications. Recently, a few studies have attempted to deal with the open-set noise for active learning [13, 16]. They commonly try to increase the purity of examples in a query set, which is defined as the proportion of IN examples, by effectively filtering out the OOD examples. However, whether focusing on the purity is needed throughout the entire training period remains a question. In Figure 1(a), let’s consider an open-set AL task with a binary classification of cats and dogs, where the images of other animals, e.g., horses and wolves, are regarded as OOD examples. It is clear that the group of high purity and high informativeness (HP-HI) is the most preferable for sample selection. However, when comparing the group of high purity and low informativeness (HP-LI) and that of low purity and high informativeness (LP-HI), the preference between these two groups of examples is not clear, but rather contingent on the learning stage and the ratio of OOD examples. Thus, we coin a new term “purity-informativeness dilemma” to call attention to the best balancing of purity and informativeness. Figures 1(b) and 1(c) illustrate the purity-informativeness dilemma. The standard AL approach, LL[5], puts more weight on the examples of high informativeness (denoted as HI-focused), while the existing open-set AL approach, CCAL [13], puts more weight on those of high purity (denoted as HP-focused). The HP-focused approach improves the test accuracy more significantly than the HI-focused one at earlier AL rounds, meaning that pure as well as easy examples are more beneficial. In contrast, the HI-focused approach beats the HP-focused one at later AL rounds, meaning that highly informative examples should be selected even at the expense of purity. Furthermore, comparing a low OOD (noise) ratio in Figure 1(b) and a high OOD ratio in Figure 1(c), the shift from HP-dominance to HI-dominance tends to occur later at a higher OOD ratio, which renders this dilemma more difficult. In this paper, to solve the purity-informativeness dilemma in open-set AL, we propose a novel metamodel Meta-Query-Net (MQ-Net) that adaptively finds the best balancing between the two factors. A key challenge is the best balancing is unknown in advance. The meta-model is trained to assign higher priority for in-distribution examples over OOD examples as well as for more informative examples among in-distribution ones. The input to the meta-model, which includes the target and OOD labels, is obtained for free from each AL round’s query set by leveraging the multi-round property of AL. Moreover, the meta-model is optimized more stably through a novel regularization inspired by the skyline query [17, 18] popularly used in multi-objective optimization. As a result, MQ-Net can guide the learning of the target model by providing the best balancing between purity and informativeness throughout the entire training period. Overall, our main contributions are summarized as follows: 1. We formulate the purity-informativeness dilemma, which hinders the usability of open-set AL in real-world applications. 2. As our answer to the dilemma, we propose a novel AL framework, MQ-Net, which keeps finding the best trade-off between purity and informativeness. 3. 
Extensive experiments on CIFAR10, CIFAR100, and ImageNet show that MQ-Net improves the classifier accuracy consistently when the OOD ratio changes from 10% to 60% by up to 20.14%. 2 Related Work 2.1 Active Learning and Open-set Recognition Active Learning is a learning framework to reduce the human labeling cost by finding the most informative examples given unlabeled data [9, 19]. One popular direction is uncertainty-based sampling. Typical approaches have exploited prediction probability, e.g., soft-max confidence [20, 3], margin [21], and entropy [22]. Some approaches obtain uncertainty by Monte Carlo Dropout on multiple forward passes [23, 24, 25]. LL [5] predicts the loss of examples by jointly learning a loss prediction module with a target model. Meanwhile, diversity-based sampling has also been widely studied. To incorporate diversity, most methods use a clustering [6, 26] or coreset selection algorithm [7]. Notably, CoreSet [7] finds the set of examples having the highest distance coverage on the entire unlabeled data. BADGE [8] is a hybrid of uncertainty- and diversity-based sampling which uses k-means++ clustering in the gradient embedding space. However, this family of approaches is not appropriate for open-set AL since they do not consider how to handle the OOD examples for query selection. Open-set Recognition (OSR) is a detection task to recognize the examples outside of the target domain [12]. Closely related to this purpose, OOD detection has been actively studied [27]. Recent work can be categorized into classifier-dependent, density-based, and self-supervised approaches. The classifier-dependent approach leverages a pre-trained classifier and introduces several scoring functions, such as Uncertainty [28], ODIN [29], mahalanobis distance (MD) [30], and Energy[31]. Recently, ReAct [32] shows that rectifying penultimate activations can enhance most of the aforementioned classifier-dependent OOD scores. The density-based approach learns an auxiliary generative model like a variational auto-encoder to compute likelihood-based OOD scores [33, 34, 35]. Most self-supervised approaches leverage contrastive learning [36, 37, 38]. CSI shows that contrasting with distributionally-shifted augmentations can considerably enhance the OSR performance [36]. The OSR performance of classifier-dependent approaches degrades significantly if the classifier performs poorly [39]. Similarly, the performance of density-based and self-supervised approaches heavily resorts to the amount of clean IN data [35, 36]. Therefore, open-set active learning is a challenging problem to be resolved by simply applying the OSR approaches since it is difficult to obtain high-quality classifiers and sufficient IN data at early AL rounds. 2.2 Open-set Active learning Two recent approaches have attempted to handle the open-set noise for AL [13, 16]. Both approaches try to increase purity in query selection by effectively filtering out the OOD examples. CCAL [13] learns two contrastive coding models each for calculating informativeness and OODness of an example, and combines the two scores using a heuristic balancing rule. SIMILAR [16] selects a pure and core set of examples that maximize the distance on the entire unlabeled data while minimizing the distance to the identified OOD data. 
However, we found that CCAL and SIMILAR are often worse than standard AL methods, since they always put higher weights on purity although informativeness should be emphasized when the open-set noise ratio is small or in later AL rounds. This calls for developing a new solution to carefully find the best balance between purity and informativeness. 3 Purity-Informativeness Dilemma in Open-set Active Learning 3.1 Problem Statement Let DIN and DOOD be the IN and OOD data distributions, where the label of examples from DOOD does not belong to any of the k known labels Y = {yi}ki=1. Then, an unlabeled set is a mixture of IN and OOD examples, U = {XIN , XOOD}, i.e., XIN ∼ DIN and XOOD ∼ DOOD. In the open-set AL, a human oracle is requested to assign a known label y to an IN example x ∈ XIN with a labeling cost cIN , while an OOD example x ∈ XOOD is marked as open-set noise with a labeling cost cOOD. AL imposes restrictions on the labeling budget b every round. It starts with a small labeled set SL, consisting of both labeled IN and OOD examples. The initial labeled set SL improves by adding a small but maximally-informative labeled query set SQ per round, i.e., SL←SL∪SQ, where the labeling cost for SQ by the oracle does not exceed the labeling budget b. Hence, the goal of open-set AL is defined to construct the optimal query set S∗Q, minimizing the loss for the unseen target IN data. The difference from standard AL is that the labeling cost for OOD examples is introduced, where the labeling budget is wasted when OOD examples are misclassified as informative ones. Formally, let C(·) be the labeling cost function for a given unlabeled set; then, each round of open-set AL is formulated to find the best query set S∗Q as S∗Q = argmin SQ: C(SQ)≤b E(x,y)∈TIN [ ℓcls ( f(x; ΘSL∪SQ), y )] , where C(SQ) = ∑ x∈SQ ( 1[x∈XIN ]cIN + 1[x∈XOOD]cOOD ) . (1) Here, f(·; ΘSL∪SQ) denotes the target model trained on only IN examples in SL ∪ SQ, and ℓcls is a certain loss function, e.g., cross-entropy, for classification. For each AL round, all the examples in S∗Q are removed from the unlabeled set U and then added to the accumulated labeled set SL with their labels. This procedure repeats for the total number r of rounds. 3.2 Purity-Informativeness Dilemma An ideal approach for open-set AL would be to increase both purity and informativeness of a query set by completely suppressing the selection of OOD examples and accurately querying the most informative examples among the remaining IN examples. However, the ideal approach is infeasible because overly emphasizing purity in query selection does not promote example informativeness and vice versa. Specifically, OOD examples with low purity scores mostly exhibit high informativeness scores because they share neither class-distinctive features nor other inductive biases with the IN examples [14, 15]. We call this trade-off in query selection as the purity-informativeness dilemma, which is our new finding expected to trigger a lot of subsequent work. To address this dilemma, we need to consider the proper weights of a purity score and an informative score when they are combined. Let P(x) be a purity score of an example x which can be measured by any existing OOD scores, e.g., negative energy [31], and I(x) be an informativeness score of an example x from any standard AL strategies, e.g., uncertainty [3] and diversity [26]. Next, supposing zx = ⟨P(x), I(x)⟩ is a tuple of available purity and informativeness scores for an example x. 
Then, a score combination function Φ(zx), where zx = ⟨P(x), I(x)⟩, is defined to return an overall score that indicates the necessity of x being included in the query set. Given two unlabeled examples xi and xj , if P(xi) > P(xj) and I(xi) > I(xj), it is clear to favor xi over xj based on Φ(zxi) > Φ(zxj ). However, due to the purity-informativeness dilemma, if P(xi) > P(xj) and I(xi) < I(xj) or P(xi) < P(xj) and I(xi) > I(xj), it is very challenging to determine the dominance between Φ(zxi) and Φ(zxj ). In order to design Φ(·), we mainly focus on leveraging meta-learning, which is a more agnostic approach to resolve the dilemma other than several heuristic approaches, such as linear combination and multiplication. 4 Meta-Query-Net We propose a meta-model, named Meta-Query-Net (MQ-Net), which aims to learn a meta-score function for the purpose of identifying a query set. In the presence of open-set noise, MQ-Net outputs the meta-score for unlabeled examples to achieve the best balance between purity and informativeness in the selected query set. In this section, we introduce the notion of a self-validation set to guide the meta-model in a supervised manner and then demonstrate the meta-objective of MQ-Net for training. Then, we propose a novel skyline constraint used in optimization, which helps MQ-Net capture the obvious preference among unlabeled examples when a clear dominance exists. Next, we present a way of converting the purity and informativeness scores estimated by existing methods for use in MQ-Net. Note that training MQ-Net is not expensive because it builds a light meta-model on a small self-validation set. The overview of MQ-Net is illustrated in Figure 2. 4.1 Training Objective with Self-validation Set The parameters w contained in MQ-Net Φ(·;w) is optimized in a supervised manner. For clean supervision, validation data is required for training. Without assuming a hard-to-obtain clean validation set, we propose to use a self-validation set, which is instantaneously generated in every AL round. In detail, we obtain a labeled query set SQ by the oracle, consisting of a labeled IN set and an identified OOD set in every round. Since the query set SQ is unseen for the target model Θ and the meta-model w at the current round, we can exploit it as a self-validation set to train MQ-Net. This self-validation set eliminates the need for a clean validation set in meta-learning. Given the ground-truth labels in the self-validation set, it is feasible to guide MQ-Net to be trained to resolve the purity-informativeness dilemma by designing an appropriate meta-objective. It is based on the cross-entropy loss for classification because the loss value of training examples has been proven to be effective in identifying high informativeness examples [5]. The conventional loss value by a target model Θ is masked to be zero if x ∈ XOOD since OOD examples are useless for AL, ℓmce(x) = 1[lx=1]ℓce ( f(x; Θ), y ) , (2) where l is a true binary IN label, i.e., 1 for IN examples and 0 for OOD examples, which can be reliably obtained from the self-validation set. This masked loss, ℓmce, preserves the informativeness of IN examples while excluding OOD examples. Given a self-validation data SQ, the meta-objective is defined such that MQ-Net parameterized by w outputs a high (or low) meta-score Φ(zx;w) if an example x’s masked loss value is large (or small), L(SQ)= ∑ i∈SQ ∑ j∈SQ max ( 0,−Sign ( ℓmce(xi), ℓmce(xj) ) · ( Φ(zxi ;w)− Φ(zxj ;w) + η )) s.t. 
∀xi, xj , if P(xi) > P(xj) and I(xi) > I(xj), then Φ(zxi ;w) > Φ(zxj ;w), (3) where η > 0 is a constant margin for the ranking loss, and Sign(a, b) is an indicator function that returns +1 if a > b, 0 if a = b, and −1 otherwise. Hence, Φ(zxi ;w) is forced to be higher than Φ(zxj ;w) if ℓmce(xi) > ℓmce(xj); in contrast, Φ(zxi ;w) is forced to be lower than Φ(zxj ;w) if ℓmce(xi) < ℓmce(xj). Two OOD examples do not affect the optimization because they do not have any priority between them, i.e., ℓmce(xi) = ℓmce(xj). In addition to the ranking loss, we add a regularization term named the skyline constraint (i.e., the second line) in the meta-objective Eq. (3), which is inspired by the skyline query which aims to narrow down a search space in a large-scale database by keeping only those items that are not worse than any other [17, 18]. Specifically, in the case of P(xi) > P(xj) and I(xi) > I(xj), the condition Φ(zxi ;w) > Φ(zxj ;w) must hold in our objective, and hence we make this proposition as the skyline constraint. This simple yet intuitive regularization is very helpful for achieving a meta-model that better judges the importance of purity or informativeness. We provide an ablation study on the skyline constraint in Section 5.4. 4.2 Architecture of MQ-Net MQ-Net is parameterized by a multi-layer perceptron (MLP), a widely-used deep learning architecture for meta-learning [40]. A challenge here is that the proposed skyline constraint in Eq. (3) does not hold with a standard MLP model. To satisfy the skyline constraint, the meta-score function Φ(·;w) should be a monotonic non-decreasing function because the output (meta-score) of MQ-Net for an example xi must be higher than that for another example xj if the two factors (purity and informativeness) of xi are both higher than those of xj . The MLP model consists of multiple matrix multiplications with non-linear activation functions such as ReLU and Sigmoid. In order for the MLP model to be monotonically non-decreasing, all the parameters in w for Φ(·;w) should be non-negative, as proven by Theorem 4.1. Theorem 4.1. For any MLP meta-model w with non-decreasing activation functions, a meta-score function Φ(z;w) : Rd → R holds the skyline constraints if w ⪰ 0 and z(∈ Rd) ⪰ 0, where ⪰ is the component-wise inequality. Proof. An MLP model is involved with matrix multiplication and composition with activation functions, which are characterized by three basic operators: (1) addition: h(z) = f(z) + g(z), (2) multiplication: h(z) = f(z)× g(z), and (3) composition: h(z) = f ◦ g(z). These three operators are guaranteed to be non-decreasing functions if the parameters of the MLP model are all nonnegative, because the non-negative weights guarantee all decomposed scalar operations in MLP to be non-decreasing functions. Combining the three operators, the MLP model Φ(z;w), where w ⪰ 0, naturally becomes a monotonic non-decreasing function for each input dimension. Refer to Appendix A for the complete proof. In implementation, non-negative weights are guaranteed by applying a ReLU function to meta-model parameters. Since the ReLU function is differentiable, MQ-Net can be trained with the proposed objective in an end-to-end manner. Putting this simple modification, the skyline constraint is preserved successfully without introducing any complex loss-based regularization term. The only remaining condition is that each input of MQ-Net must be a vector of non-negative entries. 
4.3 Active Learning with MQ-Net 4.3.1 Meta-input Conversion MQ-Net receives zx = ⟨P(x), I(x)⟩ and then returns a meta-score for query selection. All the scores for the input of MQ-Net should be positive to preserve the skyline constraint, i.e., z ⪰ 0. Existing OOD and AL query scores are converted to the meta-input. The methods used for calculating the scores are orthogonal to our framework. The OOD score O(·) is conceptually the opposite of purity and varies in its scale; hence, we convert it to a purity score by P(x) = Exp(Normalize(−O(x))), where Normalize(·) is the z-score normalization. This conversion guarantees the purity score to be positive. Similarly, for the informativeness score, we convert an existing AL query score Q(·) to I(x) = Exp(Normalize(Q(x))). For the z-score normalization, we compute the mean and standard deviation of O(x) or Q(x) over the unlabeled examples. Such mean and standard deviation are iteratively computed before the meta-training, and used for the z-score normalization at that round. 4.3.2 Overall Procedure For each AL round, a target model is trained via stochastic gradient descent (SGD) on mini-batches sampled from the IN examples in the current labeled set SL. Based on the current target model, the purity and informative scores are computed by using certain OOD and AL query scores. The querying phase is then performed by selecting the examples SQ with the highest meta-scores within the labeling budget b. The query set SQ is used as the self-validation set for training MQ-Net at the current AL round. The trained MQ-Net is used at the next AL round. The alternating procedure of updating the target model and the meta-model repeats for a given number r of AL rounds. The pseudocode of MQ-Net can be found in Appendix B. 5 Experiments 5.1 Experiment Setting Datasets. We perform the active learning task on three benchmark datasets; CIFAR10 [41], CIFAR100 [41], and ImageNet [42]. Following the ‘split-dataset’ setup in open-world learning literature [13, 16, 43], we divide each dataset into two subsets: (1) the target set with IN classes and (2) the noise set with OOD classes. Specifically, CIFAR10 is split into the target set with four classes and the noise set with the rest six classes; CIFAR100 into the two sets with 40 and 60 classes; and ImageNet into the two sets with 50 and 950 classes. The entire target set is used as the unlabeled IN data, while only a part of classes in the noise set is selected as the unlabeled OOD data according to the given noise ratio. In addition, following OOD detection literature [28, 33], we also consider the ‘cross-dataset’ setup, which mixes a certain dataset with two external OOD datasets collected from different domains, such as LSUN [44] and Places365 [45]. For sake of space, we present all the results on the cross-dataset setup in Appendix D. Algorithms. We compare MQ-Net with a random selection, four standard AL, and two recent open-set AL approaches. • Standard AL: The four methods perform AL without any processing for open-set noise: (1) CONF [3] queries the most uncertain examples with the lowest softmax confidence in the prediction, (2) CORESET [7] queries the most diverse examples with the highest coverage in the representation space, (3) LL [5] queries the examples having the largest predicted loss by jointly learning a loss prediction module, and (4) BADGE [8] considers both uncertainty and diversity by querying the most representative examples in the gradient via k-means++ clustering [46]. 
• Open-set AL: The two methods tend to put more weight on the examples with high purity: (1) CCAL [13] learns two contrastive coding models for calculating informativeness and OODness, and then it combines the two scores into one using a heuristic balancing rule, and (2) SIMILAR [16] selects a pure and core set of examples that maximize the distance coverage on the entire unlabeled data while minimizing the distance coverage to the already labeled OOD data. For all the experiments, regarding the two inputs of MQ-Net, we mainly use CSI [36] and LL [5] for calculating the purity and informativeness scores, respectively. For CSI, as in CCAL, we train a contrastive learner on the entire unlabeled set with open-set noise since the clean in-distribution set is not available in open-set AL. The ablation study in Section 5.4 shows that MQ-Net is also effective with other OOD and AL scores as its input. Implementation Details. We repeat the three steps—training, querying, and labeling—of AL. The total number r of rounds is set to 10. Following the prior open-set AL setup [13, 16], we set the labeling cost cIN = 1 for IN examples and cOOD = 1 for OOD examples. For the class-split setup, the labeling budget b per round is set to 500 for CIFAR10/100 and 1, 000 for ImageNet. Regarding the open-set noise ratio τ , we configure four different levels from light to heavy noise in {10%, 20%, 40%, 60%}. In the case of τ = 0% (no noise), MQ-Net naturally discards the purity score and only uses the informativeness score for query selection, since the self-validation set does not contain any OOD examples. The initial labeled set is randomly selected uniformly at random from the entire unlabeled set within the labeling budget b. For the architecture of MQ-Net, we use a 2-layer MLP with the hidden dimension size of 64 and the Sigmoid activation fuction. We report the average results of five runs with different class splits. We did not use any pre-trained networks. See Appendix C for more implementation details with training configurations. All methods are implemented with PyTorch 1.8.0 and executed on a single NVIDIA Tesla V100 GPU. The code is available at https://github.com/kaist-dmlab/MQNet. 5.2 Open-set Noise Robustness 5.2.1 Results over AL Rounds Figure 3 illustrates the test accuracy of the target model over AL rounds on the two CIFAR datasets. MQ-Net achieves the highest test accuracy in most AL rounds, thereby reaching the best test accuracy at the final round in every case for various datasets and noise ratios. Compared with the two existing open-set AL methods, CCAL and SIMILAR, MQ-Net shows a steeper improvement in test accuracy over rounds by resolving the purity-informativeness dilemma in query selection. For example, the performance gap between MQ-Net and the two open-set AL methods gets larger after the sixth round, as shown in Figure 3(b), because CCAL and SIMILAR mainly depend on purity in query selection, which conveys less informative information to the classifier. For a better classifier, informative examples should be favored at a later AL round due to the sufficient number of IN examples in the labeled set. In contrast, MQ-Net keeps improving the test accuracy even in a later AL round by finding the best balancing between purity and informativeness in its query set. More analysis of MQ-Net associated with the purity-informativeness dilemma is discussed in Section 5.3. 
5.2.2 Results with Varying Noise Ratios Table 1 summarizes the last test accuracy at the final AL round for the three datasets with varying levels of open-set noise. Overall, the last test accuracy of MQ-Net is the best in every case. This superiority indicates that MQ-Net successfully finds the best trade-off between purity and informativeness in terms of AL accuracy regardless of the noise ratio. In general, the performance improvement becomes larger as the noise ratio increases. On the other hand, the two open-set AL approaches are even worse than the four standard AL approaches when the noise ratio is less than or equal to 20%. In particular, on CIFAR10, which is relatively easier than the other datasets, CCAL and SIMILAR are inferior to the non-robust AL method, LL, even with 40% noise. This trend confirms that increasing informativeness is more crucial than increasing purity when the noise ratio is small; highly informative examples are still beneficial even when the performance of a classifier is saturated in the presence of open-set noise. An in-depth analysis of the low accuracy of the existing open-set AL approaches at a low noise ratio is presented in Appendix E.

Table 2: Effect of the meta inputs on MQ-Net. Dataset: CIFAR10 (4:6 split).
                Noise Ratio    10%     20%     40%     60%
Standard AL     BADGE          92.80   91.73   89.27   86.83
Open-set AL     CCAL           90.55   89.99   88.87   87.49
MQ-Net          CONF-ReAct     93.21   91.89   89.54   87.99
MQ-Net          CONF-CSI       93.28   92.40   91.43   89.37
MQ-Net          LL-ReAct       92.34   91.85   90.08   88.41
MQ-Net          LL-CSI         93.10   92.10   91.48   89.51

Table 3: Efficacy of the self-validation set. Dataset: CIFAR10 (4:6 split).
                Noise Ratio    10%     20%     40%     60%
MQ-Net          Query set      93.10   92.10   91.48   89.51
MQ-Net          Random         92.10   91.75   90.88   87.65

Table 4: Efficacy of the skyline constraint.
                Noise Ratio    10%     20%     40%     60%
MQ-Net          w/ skyline     93.10   92.10   91.48   89.51
MQ-Net          w/o skyline    87.25   86.29   83.61   81.67

5.3 Answers to the Purity-Informativeness Dilemma The high robustness of MQ-Net in Table 1 and Figure 3 is mainly attributed to its ability to keep finding the best trade-off between purity and informativeness. Figure 4(a) illustrates the preference change of MQ-Net between purity and informativeness throughout the AL rounds. As the rounds progress, MQ-Net automatically raises the importance of informativeness over purity; the slope of the tangent line keeps steepening from −0.74 to −1.21. This trend implies that more informative examples need to be labeled when the target classifier becomes mature. That is, as the model performance increases, ‘fewer but highly-informative’ examples are more impactful than ‘more but less-informative’ examples in terms of improving the model performance. Figure 4(b) describes the preference change of MQ-Net with varying noise ratios. Contrary to the trend over AL rounds, as the noise ratio gets higher, MQ-Net prefers purity over informativeness.

5.4 Ablation Studies Various Combinations of Meta-input. MQ-Net can build its purity and informativeness scores from diverse metrics in the existing OOD detection and AL literature. Table 2 shows the final round test accuracy on CIFAR10 for the four variants of score combinations, each of which is constructed by a combination of two purity scores and two informativeness scores; each purity score is induced by one of the two recent OOD detection methods, ReAct [32] and CSI [36], while each informativeness score is converted from one of the two existing AL methods, CONF and LL. “CONF-ReAct” denotes a variant that uses ReAct as the purity score and CONF as the informativeness score. 
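As a reference for the skyline-constraint ablation (Table 4), the following is a minimal PyTorch sketch of a skyline-respecting MQ-Net: a 2-layer MLP whose weight matrices are passed through ReLU in the forward pass, so the meta-score is monotonically non-decreasing in both purity and informativeness (Theorem 4.1). The hidden size of 64 and the Sigmoid activation follow Section 5.1; the initialization is our assumption, and for simplicity we clamp only the weight matrices, since additive biases do not affect monotonicity. This is our reading, not the released code.

```python
# Sketch of a skyline-constrained MQ-Net (our reading of Sections 4.2 and 5.1).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MQNet(nn.Module):
    def __init__(self, in_dim: int = 2, hidden_dim: int = 64):
        super().__init__()
        self.w1 = nn.Parameter(0.1 * torch.rand(in_dim, hidden_dim))
        self.b1 = nn.Parameter(torch.zeros(hidden_dim))
        self.w2 = nn.Parameter(0.1 * torch.rand(hidden_dim, 1))
        self.b2 = nn.Parameter(torch.zeros(1))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # ReLU on the weight matrices keeps them non-negative, which together
        # with the non-negative input z makes the meta-score monotonically
        # non-decreasing in each input dimension (the skyline constraint).
        # Removing these two ReLUs yields the "w/o skyline" variant of Table 4.
        h = torch.sigmoid(z @ F.relu(self.w1) + self.b1)
        return (h @ F.relu(self.w2) + self.b2).squeeze(-1)  # meta-score Phi(z; w)
```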
Overall, all variants perform better than the standard and open-set AL baselines at every noise level; refer to Table 2 for the detailed comparison. This result indicates that MQ-Net generalizes over different types of meta-input owing to the learning flexibility of MLPs. Interestingly, the variants using CSI as the purity score are consistently better than those using ReAct. ReAct, a classifier-dependent OOD score, performs poorly in earlier AL rounds. A detailed analysis of the two OOD detectors, ReAct and CSI, over AL rounds can be found in Appendix F.

Efficacy of Self-validation Set. MQ-Net can be trained with an independent validation set, instead of using the proposed self-validation set. We generate the independent validation set by randomly sampling the same number of examples as the self-validation set, with their ground-truth labels, from the entire data not overlapping with the unlabeled set used for AL. As can be seen from Table 3, our self-validation set performs better than the random validation set. The two validation sets have a major difference in data distributions; the self-validation set mainly consists of the examples with the highest meta-scores among the remaining unlabeled data per round, while the random validation set consists of random examples. We conclude that the meta-score of MQ-Net has the potential for constructing a high-quality validation set in addition to query selection.

Efficacy of Skyline Constraint. Table 4 demonstrates the final round test accuracy of MQ-Net with or without the skyline constraint. For the latter, a standard 2-layer MLP is used as the meta-network architecture without any modification. The performance of MQ-Net degrades significantly without the skyline constraint, meaning that the non-constrained MLP can easily overfit to the small-sized self-validation set, thereby assigning high output scores to less-pure and less-informative examples. Therefore, violating the skyline constraint in optimization makes it hard for MQ-Net to balance the purity and informativeness scores in query selection.

Efficacy of Meta-objective. MQ-Net keeps finding the best balance between purity and informativeness over multiple AL rounds by repeatedly minimizing the meta-objective in Eq. (3). To validate its efficacy, we compare it with two simple alternatives based on heuristic balancing rules, namely linear combination and multiplication, denoted as P(x) + I(x) and P(x) · I(x), respectively. Following the default setting of MQ-Net, we use CSI for P(x) and LL for I(x). Table 5 shows the AL performance of the two alternatives and MQ-Net for the split-dataset setup on CIFAR10 with noise ratios of 20% and 40%. MQ-Net beats the two alternatives from the second AL round onward, where MQ-Net starts balancing purity and informativeness with its meta-objective. This result implies that our meta-objective successfully finds the best balance between purity and informativeness by emphasizing informativeness over purity at the later AL rounds.

5.5 Effect of Varying OOD Labeling Cost The labeling cost for OOD examples could vary with respect to data domains. To validate the robustness of MQ-Net under diverse labeling scenarios, we conduct an additional study of adjusting the labeling cost cOOD for the OOD examples. Table 6 summarizes the performance change with four different labeling costs (i.e., 0.5, 1, 2, and 4). The two standard AL methods, CONF and CORESET, and the two open-set AL methods, CCAL and SIMILAR, are compared with MQ-Net. 
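To complement the meta-objective ablation above, here is a minimal PyTorch sketch of the masked loss of Eq. (2) and the pairwise ranking loss of Eq. (3) as we read them; the margin value and tensor layout are our own choices, and the skyline constraint itself is assumed to be handled by the non-negative-weight architecture rather than by this loss.

```python
# Sketch of the meta-objective in Eqs. (2)-(3) over a self-validation batch
# (our transcription, not the released implementation).
import torch

def masked_ce(ce_loss: torch.Tensor, is_in: torch.Tensor) -> torch.Tensor:
    """Eq. (2): zero out the loss of OOD examples so they never look informative."""
    return ce_loss * is_in.float()

def meta_objective(meta_scores: torch.Tensor,   # Phi(z_x; w) for each example
                   ce_loss: torch.Tensor,       # target-model cross-entropy per example
                   is_in: torch.Tensor,         # 1 for IN, 0 for OOD (from the oracle)
                   eta: float = 0.1) -> torch.Tensor:
    l = masked_ce(ce_loss, is_in)
    sign = torch.sign(l.unsqueeze(1) - l.unsqueeze(0))          # Sign(l_i, l_j)
    diff = meta_scores.unsqueeze(1) - meta_scores.unsqueeze(0)  # Phi_i - Phi_j
    # Pairwise hinge term max(0, -Sign(l_i, l_j) * (Phi_i - Phi_j + eta)) of Eq. (3).
    return torch.clamp(-sign * (diff + eta), min=0).sum()
```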
Overall, MQ-Net consistently outperforms the four baselines regardless of the labeling cost. Meanwhile, CCAL and SIMILAR are more robust to a higher labeling cost than CONF and CORESET; because they favor high-purity examples, they query more IN examples than CONF and CORESET and are therefore less affected by the labeling cost, especially when it is high.

6 Conclusion We propose MQ-Net, a novel meta-model for open-set active learning that deals with the purity-informativeness dilemma. MQ-Net finds the best balance between the two factors, being adaptive to the noise ratio and the target model status. A clean validation set for the meta-model is obtained for free by exploiting the procedure of active learning. A ranking loss with the skyline constraint optimizes MQ-Net so that its output is a legitimate meta-score that keeps the obvious order of two examples. MQ-Net is shown to yield the best test accuracy throughout the entire active learning rounds, thereby empirically demonstrating the correctness of our solution to the purity-informativeness dilemma. Overall, we expect that our work will raise the practical usability of active learning with open-set noise.

Acknowledgement This work was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2020-0-00862, DB4DL: High-Usability and Performance In-Memory Distributed DBMS for Deep Learning). The experiment was conducted by the courtesy of NAVER Smart Machine Learning (NSML) [47].
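For completeness, the toy snippet below ties the sketches above together with one MQ-Net update on a fake self-validation set; random numbers stand in for real OOD scores, AL scores, target-model losses, and oracle labels, so it only illustrates the data flow described in Section 4.3, not the released training recipe.

```python
# Toy usage example building on the MQNet, to_meta_input, and meta_objective
# sketches above; all inputs are synthetic placeholders.
import numpy as np
import torch

rng = np.random.default_rng(0)
torch.manual_seed(0)

mq_net = MQNet()
opt = torch.optim.Adam(mq_net.parameters(), lr=1e-3)

n = 500                                           # size of the fake self-validation set
z = to_meta_input(rng.normal(size=n), rng.normal(size=n))   # fake O(x), Q(x)
z = torch.as_tensor(z, dtype=torch.float32)
ce = torch.rand(n)                                # fake per-example CE losses
is_in = (torch.rand(n) > 0.4).float()             # fake IN/OOD labels from the oracle

loss = meta_objective(mq_net(z), ce, is_in)       # Eq. (3) on the fake batch
opt.zero_grad()
loss.backward()
opt.step()
```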
1. What is the focus and contribution of the paper on open-set noise problems in active learning? 2. What are the strengths of the proposed approach, particularly in its design and effectiveness? 3. What are the weaknesses of the paper, especially regarding its definition of open-set noise, motivation, experiments, and comparisons with other works? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any questions or concerns regarding the paper's methodology, such as the choice of classifier-dependent approach, use of CSI and LL for calculating purity and informativeness scores, and lack of error bars in experiments?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper introduces the open-set noise problem that can arise in active learning, illustrates the purity-informativeness dilemma, and proposes a meta-query net (MQ-Net) as a plugin for normal active learning processes. The paper is well-written up to the experimental part and easy to follow. Strengths And Weaknesses Strengths: The idea of adding an MQ-Net to normal active learning is interesting; it is much like LL4AL, which adds an extra plugin/network, here used to estimate whether the unlabeled data belong to OOD data or not. The design is both effective and flexible. Additionally, L(SQ) inherently contains a multi-objective (Pareto) optimization design (P(xi) > P(xj) and I(xi) > I(xj) for finding Pareto fronts). It is interesting. Weaknesses: there are still several problems left in this paper. The first is the definition of the open-set noise problem. I think it is just the ID/OOD problem, which is properly defined as class mismatch in CCAL and as OOD data scenarios in SIMILAR; there is no need to create a new concept and call it the open-set noise problem. This concept is wider: for instance, some instances really belong to the ID data distribution but contain noise in x; they are not OOD data but still contain noise. Yet the authors only conduct experiments following class-mismatch settings in the experimental part. The second is the motivation of the purity-informativeness dilemma: 1) the authors provide an example in Figure 1. This example is not convincing enough since LL4AL and CCAL are both non-typical methods; LL4AL is jointly trained with a LossNet and CCAL uses SimCLR/CSI for extracting features, so they are not comparable. Additionally, the example only shows the first 10 rounds and only low-noise (10% and 20% OOD rate) situations; 2) Why can't we maintain purity all the time and at the same time acquire high informativeness? In addition, if there is a method that achieves the ideal/optimal effect, the proportion of OOD samples in the unlabeled data pool will naturally become higher and higher, and more attention should be attached to purity. The third is the experiments: there are no error bars for the conducted experiments. The experimental results on CCAL and SIMILAR are indeed very strange; in low-noise situations (10% and 20% OOD data rate), they are even worse than a typical uncertainty-based sampling strategy (e.g., CONF). I have contacted the authors of CCAL and SIMILAR to ask them whether their models would perform worse than typical uncertainty-based measures like CONF and ENTROPY; the author of the SIMILAR paper said "If there is low-noise then it should only be less challenging and the performance should at least be consistent and better than MARGIN." The CCAL authors showed me their new experiments on low-noise data scenarios, which were also better than typical uncertainty-based measures. Since the authors did not provide the code (only the pdf version of the appendix), I cannot check the implementation. Is it a fair comparison? Questions Most of the questions are listed in the previous part. Questions: The definition of open-set noise (see the previous part). The motivation/example of the purity-informativeness dilemma contains a contradiction (see the previous part). In my view, in the ideal case, it is purity rather than informativeness that should be emphasized in later AL processes. In lines 63-64, the authors said "The input to the meta-model, which includes the target and OOD labels, is obtained for free from each AL round’s query set by leveraging the multi-round property of AL." 
This is an advantage, learned from the queried ID and OOD samples. But the learning is for updating the META model to better output Φ(⟨P(x), I(x)⟩; w), instead of using it to get better P(x) and I(x); it feels like it is just training a classifier, whereas in deep learning tasks the feature representation and classifier are jointly trained. In line 95, why choose a classifier-dependent approach to get a meta-model, and what is the motivation? Did the authors compare with other density- and self-supervised-based methods? In lines 277-278, why use CSI and LL for calculating the purity and informativeness scores? Especially LL: some studies show that LL is not stable in some tasks due to the joint training with the LossNet. Is MQ-Net jointly trained with the basic classifier (e.g., ResNet18), like LL, or not? Regarding Equation 3, the situation where both P(xi) > P(xj) and I(xi) > I(xj) is apparently about finding Pareto fronts if one regards it as a Pareto-optimization problem. Could the authors provide some discussion of the situation where the number of Pareto-front examples (the data samples that satisfy both P(xi) > P(xj) and I(xi) > I(xj)) is less than the batch size (labeling budget b in the main paper) in the active learning process? Is the ResNet18 pre-trained? Did the authors conduct repeated trials per experiment? (the no-error-bar problem, mentioned in the previous part) The experimental results of the CCAL, SIMILAR, and standard AL approaches (mentioned in the previous part). In lines 284-285, the authors already defined the cost of querying OOD data samples; why is it not an evaluation metric in the later experimental result analysis? It would be important for the authors to present the cost of querying OOD data samples: a lower cost means that less of the labeling budget is wasted. I hope the authors can provide convincing responses in the rebuttal; I will increase my score if the authors can persuade me. Small typo: Line 261, "OOD" not "ODD". Limitations The authors adequately addressed the limitations and potential negative societal impact of their work.
NIPS
1. What is the main contribution of the paper regarding active learning with out-of-distribution data? 2. What are the strengths of the proposed method, particularly in its ability to adaptively combine different metrics on unlabeled data? 3. What are the weaknesses of the paper, such as computational expense and lack of clarity in certain aspects? 4. How does the reviewer assess the significance and originality of the work compared to other approaches in the field? 5. Are there any questions or concerns regarding the presentation and explanation of the proposed method and its results?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper Active learning typically involves querying and labeling points according to their informativeness, e.g. some notion of uncertainty or diversity. However, when there is out-of-distribution (OOD) data in the unlabeled dataset, such criteria may result in querying these OOD points, which wastes the labeling budget. Recent work has focused on improving the purity of queried points by filtering out all the OOD data and selecting informative in-distribution (IN) points. This paper challenges the notion that filtering OOD data needs to be done to the same extent at each round of active learning. Instead, it may be true that at later rounds, informative but impure OOD data points could be useful to query. There is hence a purity-informativeness dilemma that presents a dynamic tradeoff in the querying process. The paper introduces a meta-model, Meta-Query-Net (MQ-Net) that learns a score that is a function of both purity and informativeness scores updated after each round of AL using the labeled self-validation set already created from previous rounds. They also enforce a skyline constraint to encode ordering of the scores and implement it via nonnegative constraints. Empirically, MQ-Net outperforms both AL that doesn’t account for OOD data, and AL that is OOD aware but static throughout rounds. They also show how MQ-Net gradually prioritizes informativeness more than purity in later rounds, and perform ablation studies with varying purity/informativeness scores, removing the skyline constraint, and replacing the self-validation set with a random validation set. Strengths And Weaknesses Strengths Significance: The presence of OOD data can indeed confound standard querying approaches in AL and is an important problem in real datasets. It is unclear how to prioritize potential signal in such data dynamically, which this paper takes a principled step towards. Originality: the idea of using a self-validation set to dynamically meta-score points is novel to me and can inspire new ways of adaptively combining different metrics on unlabeled data. Quality: thorough experiments and ablations convince me of MQ-Net’s utility. Weaknesses Quality: it seems like MQ-Net would be more computationally expensive than previous approaches since a model is trained at each round. It would be interesting to see how much longer MQ-Net takes. Presentation: as someone who does not publish in this area, I had a few comments about clarity (in questions below). Questions Q1. Given that MQ-Net requires training a model at each round, how long does MQ-Net take to run versus other baselines like CCAL and SIMILAR? Q2. A running example of what the OOD data and informative versus non-informative data looks like would be helpful. For instance, Figure 1a explains the purity-informative dilemma, but it is not clear what the dataset and task are. Q3. Equation 1 is defined as the optimal query set approach, but is not mentioned otherwise in the paper. It is not clear what the purpose of presenting the upper limit of a query set is. Also, the cost constraint in equation 1 is used in MQ-Net but is not mentioned in that section. Q4. The intuitive interpretation of L ( S Q ) was not clear. The main idea from previous sections was that we want a loss that emphasizes informativeness more in later rounds. How does this loss function do that? In particular, it would be good to just have some further discussion about the optimal ranking learned by L ( S Q ) , case by case. 
E.g., OOD examples are always scored lower than IN examples, but amongst OOD examples they are roughly ranked by their dot product of purity and information. Q5. Typo: lines 52-53, “The HP-focused approach improves….than the HP-focused one at earlier AL rounds” Limitations N/A
NIPS
However, we found that CCAL and SIMILAR are often worse than standard AL methods, since they always put higher weights on purity although informativeness should be emphasized when the open-set noise ratio is small or in later AL rounds. This calls for developing a new solution to carefully find the best balance between purity and informativeness. 3 Purity-Informativeness Dilemma in Open-set Active Learning 3.1 Problem Statement Let DIN and DOOD be the IN and OOD data distributions, where the label of examples from DOOD does not belong to any of the k known labels Y = {y_1, ..., y_k}. Then, an unlabeled set is a mixture of IN and OOD examples, U = {XIN, XOOD}, i.e., XIN ∼ DIN and XOOD ∼ DOOD. In open-set AL, a human oracle is requested to assign a known label y to an IN example x ∈ XIN with a labeling cost cIN, while an OOD example x ∈ XOOD is marked as open-set noise with a labeling cost cOOD. AL imposes a restriction on the labeling budget b every round. It starts with a small labeled set SL, consisting of both labeled IN and OOD examples. The initial labeled set SL improves by adding a small but maximally-informative labeled query set SQ per round, i.e., SL ← SL ∪ SQ, where the labeling cost for SQ charged by the oracle does not exceed the labeling budget b. Hence, the goal of open-set AL is to construct the optimal query set S*_Q that minimizes the loss on the unseen target IN data. The difference from standard AL is that a labeling cost for OOD examples is introduced: the labeling budget is wasted whenever OOD examples are misclassified as informative ones. Formally, let C(·) be the labeling cost function for a given unlabeled set; then, each round of open-set AL is formulated to find the best query set S*_Q as

$$S^*_Q = \operatorname*{argmin}_{S_Q:\; C(S_Q) \le b} \; \mathbb{E}_{(x,y)\in T_{IN}}\Big[\ell_{cls}\big(f(x;\Theta_{S_L\cup S_Q}),\, y\big)\Big], \quad \text{where}\;\; C(S_Q) = \sum_{x\in S_Q}\Big(\mathbb{1}_{[x\in X_{IN}]}\, c_{IN} + \mathbb{1}_{[x\in X_{OOD}]}\, c_{OOD}\Big). \tag{1}$$

Here, f(·; Θ_{SL∪SQ}) denotes the target model trained on only the IN examples in SL ∪ SQ, and ℓcls is a certain loss function, e.g., cross-entropy, for classification. For each AL round, all the examples in S*_Q are removed from the unlabeled set U and then added to the accumulated labeled set SL with their labels. This procedure repeats for the total number r of rounds. 3.2 Purity-Informativeness Dilemma An ideal approach for open-set AL would be to increase both purity and informativeness of a query set by completely suppressing the selection of OOD examples and accurately querying the most informative examples among the remaining IN examples. However, the ideal approach is infeasible because overly emphasizing purity in query selection does not promote example informativeness and vice versa. Specifically, OOD examples with low purity scores mostly exhibit high informativeness scores because they share neither class-distinctive features nor other inductive biases with the IN examples [14, 15]. We call this trade-off in query selection the purity-informativeness dilemma, which is our new finding expected to trigger a lot of subsequent work. To address this dilemma, we need to consider the proper weights of a purity score and an informativeness score when they are combined. Let P(x) be a purity score of an example x, which can be measured by any existing OOD score, e.g., negative energy [31], and let I(x) be an informativeness score of an example x from any standard AL strategy, e.g., uncertainty [3] and diversity [26]. Next, suppose zx = ⟨P(x), I(x)⟩ is the tuple of available purity and informativeness scores for an example x.
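To make the notation concrete, the sketch below (Python/NumPy) forms the score tuples zx and enforces the labeling-cost constraint C(SQ) ≤ b of Eq. (1) during querying. The greedy selection rule and the raw score values are illustrative assumptions, not the paper's actual procedure, which ranks examples by the learned meta-score introduced next.

```python
import numpy as np

def score_tuples(purity: np.ndarray, informativeness: np.ndarray) -> np.ndarray:
    """Stack per-example purity P(x) and informativeness I(x) into tuples z_x = <P(x), I(x)>."""
    return np.stack([purity, informativeness], axis=1)

def labeling_cost(is_in: np.ndarray, c_in: float = 1.0, c_ood: float = 1.0) -> float:
    """C(S_Q) from Eq. (1): querying an IN example costs c_in, an OOD example costs c_ood."""
    return float(np.sum(np.where(is_in, c_in, c_ood)))

def query_under_budget(selection_score: np.ndarray, is_in: np.ndarray, budget: float,
                       c_in: float = 1.0, c_ood: float = 1.0) -> list:
    """Query the highest-scoring examples until the labeling budget b is exhausted.
    `is_in` is only revealed by the oracle after an example is queried, so it is used
    here purely for budget bookkeeping, never for filtering candidates in advance."""
    chosen, spent = [], 0.0
    for idx in np.argsort(-selection_score):
        cost = c_in if is_in[idx] else c_ood
        if spent + cost > budget:
            break
        chosen.append(int(idx))
        spent += cost
    return chosen
```

With the default costs cIN = cOOD = 1 this reduces to picking the top-b scoring examples per round.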
Then, a score combination function Φ(zx), where zx = ⟨P(x), I(x)⟩, is defined to return an overall score that indicates the necessity of x being included in the query set. Given two unlabeled examples xi and xj , if P(xi) > P(xj) and I(xi) > I(xj), it is clear to favor xi over xj based on Φ(zxi) > Φ(zxj ). However, due to the purity-informativeness dilemma, if P(xi) > P(xj) and I(xi) < I(xj) or P(xi) < P(xj) and I(xi) > I(xj), it is very challenging to determine the dominance between Φ(zxi) and Φ(zxj ). In order to design Φ(·), we mainly focus on leveraging meta-learning, which is a more agnostic approach to resolve the dilemma other than several heuristic approaches, such as linear combination and multiplication. 4 Meta-Query-Net We propose a meta-model, named Meta-Query-Net (MQ-Net), which aims to learn a meta-score function for the purpose of identifying a query set. In the presence of open-set noise, MQ-Net outputs the meta-score for unlabeled examples to achieve the best balance between purity and informativeness in the selected query set. In this section, we introduce the notion of a self-validation set to guide the meta-model in a supervised manner and then demonstrate the meta-objective of MQ-Net for training. Then, we propose a novel skyline constraint used in optimization, which helps MQ-Net capture the obvious preference among unlabeled examples when a clear dominance exists. Next, we present a way of converting the purity and informativeness scores estimated by existing methods for use in MQ-Net. Note that training MQ-Net is not expensive because it builds a light meta-model on a small self-validation set. The overview of MQ-Net is illustrated in Figure 2. 4.1 Training Objective with Self-validation Set The parameters w contained in MQ-Net Φ(·;w) is optimized in a supervised manner. For clean supervision, validation data is required for training. Without assuming a hard-to-obtain clean validation set, we propose to use a self-validation set, which is instantaneously generated in every AL round. In detail, we obtain a labeled query set SQ by the oracle, consisting of a labeled IN set and an identified OOD set in every round. Since the query set SQ is unseen for the target model Θ and the meta-model w at the current round, we can exploit it as a self-validation set to train MQ-Net. This self-validation set eliminates the need for a clean validation set in meta-learning. Given the ground-truth labels in the self-validation set, it is feasible to guide MQ-Net to be trained to resolve the purity-informativeness dilemma by designing an appropriate meta-objective. It is based on the cross-entropy loss for classification because the loss value of training examples has been proven to be effective in identifying high informativeness examples [5]. The conventional loss value by a target model Θ is masked to be zero if x ∈ XOOD since OOD examples are useless for AL, ℓmce(x) = 1[lx=1]ℓce ( f(x; Θ), y ) , (2) where l is a true binary IN label, i.e., 1 for IN examples and 0 for OOD examples, which can be reliably obtained from the self-validation set. This masked loss, ℓmce, preserves the informativeness of IN examples while excluding OOD examples. Given a self-validation data SQ, the meta-objective is defined such that MQ-Net parameterized by w outputs a high (or low) meta-score Φ(zx;w) if an example x’s masked loss value is large (or small), L(SQ)= ∑ i∈SQ ∑ j∈SQ max ( 0,−Sign ( ℓmce(xi), ℓmce(xj) ) · ( Φ(zxi ;w)− Φ(zxj ;w) + η )) s.t. 
∀xi, xj , if P(xi) > P(xj) and I(xi) > I(xj), then Φ(zxi ;w) > Φ(zxj ;w), (3) where η > 0 is a constant margin for the ranking loss, and Sign(a, b) is an indicator function that returns +1 if a > b, 0 if a = b, and −1 otherwise. Hence, Φ(zxi ;w) is forced to be higher than Φ(zxj ;w) if ℓmce(xi) > ℓmce(xj); in contrast, Φ(zxi ;w) is forced to be lower than Φ(zxj ;w) if ℓmce(xi) < ℓmce(xj). Two OOD examples do not affect the optimization because they do not have any priority between them, i.e., ℓmce(xi) = ℓmce(xj). In addition to the ranking loss, we add a regularization term named the skyline constraint (i.e., the second line) in the meta-objective Eq. (3), which is inspired by the skyline query which aims to narrow down a search space in a large-scale database by keeping only those items that are not worse than any other [17, 18]. Specifically, in the case of P(xi) > P(xj) and I(xi) > I(xj), the condition Φ(zxi ;w) > Φ(zxj ;w) must hold in our objective, and hence we make this proposition as the skyline constraint. This simple yet intuitive regularization is very helpful for achieving a meta-model that better judges the importance of purity or informativeness. We provide an ablation study on the skyline constraint in Section 5.4. 4.2 Architecture of MQ-Net MQ-Net is parameterized by a multi-layer perceptron (MLP), a widely-used deep learning architecture for meta-learning [40]. A challenge here is that the proposed skyline constraint in Eq. (3) does not hold with a standard MLP model. To satisfy the skyline constraint, the meta-score function Φ(·;w) should be a monotonic non-decreasing function because the output (meta-score) of MQ-Net for an example xi must be higher than that for another example xj if the two factors (purity and informativeness) of xi are both higher than those of xj . The MLP model consists of multiple matrix multiplications with non-linear activation functions such as ReLU and Sigmoid. In order for the MLP model to be monotonically non-decreasing, all the parameters in w for Φ(·;w) should be non-negative, as proven by Theorem 4.1. Theorem 4.1. For any MLP meta-model w with non-decreasing activation functions, a meta-score function Φ(z;w) : Rd → R holds the skyline constraints if w ⪰ 0 and z(∈ Rd) ⪰ 0, where ⪰ is the component-wise inequality. Proof. An MLP model is involved with matrix multiplication and composition with activation functions, which are characterized by three basic operators: (1) addition: h(z) = f(z) + g(z), (2) multiplication: h(z) = f(z)× g(z), and (3) composition: h(z) = f ◦ g(z). These three operators are guaranteed to be non-decreasing functions if the parameters of the MLP model are all nonnegative, because the non-negative weights guarantee all decomposed scalar operations in MLP to be non-decreasing functions. Combining the three operators, the MLP model Φ(z;w), where w ⪰ 0, naturally becomes a monotonic non-decreasing function for each input dimension. Refer to Appendix A for the complete proof. In implementation, non-negative weights are guaranteed by applying a ReLU function to meta-model parameters. Since the ReLU function is differentiable, MQ-Net can be trained with the proposed objective in an end-to-end manner. Putting this simple modification, the skyline constraint is preserved successfully without introducing any complex loss-based regularization term. The only remaining condition is that each input of MQ-Net must be a vector of non-negative entries. 
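A minimal PyTorch sketch of Sections 4.1-4.2 is given below: the masked cross-entropy of Eq. (2), the pairwise ranking loss of Eq. (3), and a two-layer MLP whose weights are passed through ReLU so that Φ(·;w) is monotonically non-decreasing in each input (Theorem 4.1). The hidden width (64) and Sigmoid activation follow the implementation details reported in the experiments section; the margin value, weight initialization, and placeholder labels for OOD examples are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotoneMQNet(nn.Module):
    """Phi(z; w): 2-layer MLP whose effective weights are ReLU(raw weights) >= 0,
    so the meta-score is monotonically non-decreasing in purity and informativeness."""
    def __init__(self, in_dim: int = 2, hidden: int = 64):
        super().__init__()
        self.w1 = nn.Parameter(0.1 * torch.rand(hidden, in_dim))
        self.b1 = nn.Parameter(torch.zeros(hidden))
        self.w2 = nn.Parameter(0.1 * torch.rand(1, hidden))
        self.b2 = nn.Parameter(torch.zeros(1))

    def forward(self, z: torch.Tensor) -> torch.Tensor:           # z: (batch, 2), entries > 0
        h = torch.sigmoid(F.linear(z, F.relu(self.w1), self.b1))  # Sigmoid is non-decreasing
        return F.linear(h, F.relu(self.w2), self.b2).squeeze(-1)

def masked_ce(logits: torch.Tensor, labels: torch.Tensor, is_in: torch.Tensor) -> torch.Tensor:
    """Eq. (2): per-example cross-entropy, zeroed out for OOD examples.
    OOD examples can carry an arbitrary placeholder label since their loss is masked anyway."""
    return F.cross_entropy(logits, labels, reduction="none") * is_in.float()

def ranking_loss(scores: torch.Tensor, l_mce: torch.Tensor, eta: float = 0.1) -> torch.Tensor:
    """Eq. (3): sum over pairs of max(0, -Sign(l_i, l_j) * (Phi_i - Phi_j + eta)).
    Pairs of two OOD examples have sign 0 and therefore do not contribute."""
    sign = torch.sign(l_mce.unsqueeze(1) - l_mce.unsqueeze(0))    # (n, n) pairwise Sign(l_i, l_j)
    diff = scores.unsqueeze(1) - scores.unsqueeze(0)              # Phi_i - Phi_j
    return torch.clamp(-sign * (diff + eta), min=0).sum()
```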
4.3 Active Learning with MQ-Net 4.3.1 Meta-input Conversion MQ-Net receives zx = ⟨P(x), I(x)⟩ and then returns a meta-score for query selection. All the scores for the input of MQ-Net should be positive to preserve the skyline constraint, i.e., z ⪰ 0. Existing OOD and AL query scores are converted to the meta-input. The methods used for calculating the scores are orthogonal to our framework. The OOD score O(·) is conceptually the opposite of purity and varies in its scale; hence, we convert it to a purity score by P(x) = Exp(Normalize(−O(x))), where Normalize(·) is the z-score normalization. This conversion guarantees the purity score to be positive. Similarly, for the informativeness score, we convert an existing AL query score Q(·) to I(x) = Exp(Normalize(Q(x))). For the z-score normalization, we compute the mean and standard deviation of O(x) or Q(x) over the unlabeled examples. Such mean and standard deviation are iteratively computed before the meta-training, and used for the z-score normalization at that round. 4.3.2 Overall Procedure For each AL round, a target model is trained via stochastic gradient descent (SGD) on mini-batches sampled from the IN examples in the current labeled set SL. Based on the current target model, the purity and informative scores are computed by using certain OOD and AL query scores. The querying phase is then performed by selecting the examples SQ with the highest meta-scores within the labeling budget b. The query set SQ is used as the self-validation set for training MQ-Net at the current AL round. The trained MQ-Net is used at the next AL round. The alternating procedure of updating the target model and the meta-model repeats for a given number r of AL rounds. The pseudocode of MQ-Net can be found in Appendix B. 5 Experiments 5.1 Experiment Setting Datasets. We perform the active learning task on three benchmark datasets; CIFAR10 [41], CIFAR100 [41], and ImageNet [42]. Following the ‘split-dataset’ setup in open-world learning literature [13, 16, 43], we divide each dataset into two subsets: (1) the target set with IN classes and (2) the noise set with OOD classes. Specifically, CIFAR10 is split into the target set with four classes and the noise set with the rest six classes; CIFAR100 into the two sets with 40 and 60 classes; and ImageNet into the two sets with 50 and 950 classes. The entire target set is used as the unlabeled IN data, while only a part of classes in the noise set is selected as the unlabeled OOD data according to the given noise ratio. In addition, following OOD detection literature [28, 33], we also consider the ‘cross-dataset’ setup, which mixes a certain dataset with two external OOD datasets collected from different domains, such as LSUN [44] and Places365 [45]. For sake of space, we present all the results on the cross-dataset setup in Appendix D. Algorithms. We compare MQ-Net with a random selection, four standard AL, and two recent open-set AL approaches. • Standard AL: The four methods perform AL without any processing for open-set noise: (1) CONF [3] queries the most uncertain examples with the lowest softmax confidence in the prediction, (2) CORESET [7] queries the most diverse examples with the highest coverage in the representation space, (3) LL [5] queries the examples having the largest predicted loss by jointly learning a loss prediction module, and (4) BADGE [8] considers both uncertainty and diversity by querying the most representative examples in the gradient via k-means++ clustering [46]. 
• Open-set AL: The two methods tend to put more weight on the examples with high purity: (1) CCAL [13] learns two contrastive coding models for calculating informativeness and OODness, and then it combines the two scores into one using a heuristic balancing rule, and (2) SIMILAR [16] selects a pure and core set of examples that maximize the distance coverage on the entire unlabeled data while minimizing the distance coverage to the already labeled OOD data. For all the experiments, regarding the two inputs of MQ-Net, we mainly use CSI [36] and LL [5] for calculating the purity and informativeness scores, respectively. For CSI, as in CCAL, we train a contrastive learner on the entire unlabeled set with open-set noise since the clean in-distribution set is not available in open-set AL. The ablation study in Section 5.4 shows that MQ-Net is also effective with other OOD and AL scores as its input. Implementation Details. We repeat the three steps—training, querying, and labeling—of AL. The total number r of rounds is set to 10. Following the prior open-set AL setup [13, 16], we set the labeling cost cIN = 1 for IN examples and cOOD = 1 for OOD examples. For the class-split setup, the labeling budget b per round is set to 500 for CIFAR10/100 and 1, 000 for ImageNet. Regarding the open-set noise ratio τ , we configure four different levels from light to heavy noise in {10%, 20%, 40%, 60%}. In the case of τ = 0% (no noise), MQ-Net naturally discards the purity score and only uses the informativeness score for query selection, since the self-validation set does not contain any OOD examples. The initial labeled set is randomly selected uniformly at random from the entire unlabeled set within the labeling budget b. For the architecture of MQ-Net, we use a 2-layer MLP with the hidden dimension size of 64 and the Sigmoid activation fuction. We report the average results of five runs with different class splits. We did not use any pre-trained networks. See Appendix C for more implementation details with training configurations. All methods are implemented with PyTorch 1.8.0 and executed on a single NVIDIA Tesla V100 GPU. The code is available at https://github.com/kaist-dmlab/MQNet. 5.2 Open-set Noise Robustness 5.2.1 Results over AL Rounds Figure 3 illustrates the test accuracy of the target model over AL rounds on the two CIFAR datasets. MQ-Net achieves the highest test accuracy in most AL rounds, thereby reaching the best test accuracy at the final round in every case for various datasets and noise ratios. Compared with the two existing open-set AL methods, CCAL and SIMILAR, MQ-Net shows a steeper improvement in test accuracy over rounds by resolving the purity-informativeness dilemma in query selection. For example, the performance gap between MQ-Net and the two open-set AL methods gets larger after the sixth round, as shown in Figure 3(b), because CCAL and SIMILAR mainly depend on purity in query selection, which conveys less informative information to the classifier. For a better classifier, informative examples should be favored at a later AL round due to the sufficient number of IN examples in the labeled set. In contrast, MQ-Net keeps improving the test accuracy even in a later AL round by finding the best balancing between purity and informativeness in its query set. More analysis of MQ-Net associated with the purity-informativeness dilemma is discussed in Section 5.3. 
5.2.2 Results with Varying Noise Ratios Table 1 summarizes the last test accuracy at the final AL round for three datasets with varying levels of open-set noise. Overall, the last test accuracy of MQ-Net is the best in every case. This superiority concludes that MQ-Net successfully finds the best trade-off between purity and informativeness in terms of AL accuracy regardless of the noise ratio. In general, the performance improvement becomes larger with the increase in the noise ratio. On the other hand, the two open-set AL approaches are even worse than the four standard AL approaches when the noise ratio is less than or equal to 20%. Especially, in CIFAR10 relatively easier than others, CCAL and SIMILAR are inferior to the non-robust AL method, LL, even with 40% noise. This trend confirms that increasing informativeness is more crucial than increasing purity when the noise ratio is small; highly informative examples are still beneficial when the performance of a classifier is saturated in the presence of open-set noise. An in-depth analysis on the low accuracy of the existing open-set AL approaches in a low noise ratio is presented in Appendix E. Table 2: Effect of the meta inputs on MQ-Net. Dataset CIFAR10 (4:6 split) Noise Ratio 10% 20% 40% 60% Standard AL BADGE 92.80 91.73 89.27 86.83 Open-set AL CCAL 90.55 89.99 88.87 87.49 MQ-Net CONF-ReAct 93.21 91.89 89.54 87.99 CONF-CSI 93.28 92.40 91.43 89.37 LL-ReAct 92.34 91.85 90.08 88.41 LL-CSI 93.10 92.10 91.48 89.51 Table 3: Efficacy of the self-validation set. Dataset CIFAR10 (4:6 split) Noise Ratio 10% 20% 40% 60% MQ-Net Query set 93.10 92.10 91.48 89.51 Random 92.10 91.75 90.88 87.65 Table 4: Efficacy of the skyline constraint. Noise Ratio 10% 20% 40% 60% MQ-Net w/ skyline 93.10 92.10 91.48 89.51 w/o skyline 87.25 86.29 83.61 81.67 5.3 Answers to the Purity-Informativeness Dilemma The high robustness of MQ-Net in Table 1 and Figure 3 is mainly attributed to its ability to keep finding the best trade-off between purity and informativeness. Figure 4(a) illustrates the preference change of MQ-Net between purity and informativeness throughout the AL rounds. As the round progresses, MQ-Net automatically raises the importance of informativeness rather than purity; the slope of the tangent line keeps steepening from −0.74 to −1.21. This trend implies that more informative examples are required to be labeled when the target classifier becomes mature. That is, as the model performance increases, ‘fewer but highly-informative’ examples are more impactful than ‘more but less-informative’ examples in terms of improving the model performance. Figure 4(b) describes the preference change of MQ-Net with varying noise ratios. Contrary to the trend over AL rounds, as the noise ratio gets higher, MQ-Net prefers purity more over informativeness. 5.4 Ablation Studies Various Combination of Meta-input. MQ-Net can design its purity and informativeness scores by leveraging diverse metrics in the existing OOD detection and AL literature. Table 2 shows the final round test accuracy on CIFAR10 for the four variants of score combinations, each of which is constructed by a combination of two purity scores and two informativeness scores; each purity score is induced by the two recent OOD detection methods, ReAct [32] and CSI [36], while each informativeness score is converted from the two existing AL methods, CONF and LL. “CONF-ReAct” denotes a variant that uses ReAct as the purity score and CONF as the informativeness score. 
Overall, all variants perform better than standard and open-set AL baselines in every noise level. Refer to Table 2 for detailed comparison. This result concludes that MQ-Net can be generalized over different types of meta-input owing to the learning flexibility of MLPs. Interestingly, the variant using CSI as the purity score is consistently better than those using ReAct. ReAct, a classifier-dependent OOD score, performs poorly in earlier AL rounds. A detailed analysis of the two OOD detectors, ReAct and CSI, over AL rounds can be found in Appendix F. Efficacy of Self-validation Set. MQ-Net can be trained with an independent validation set, instead of using the proposed self-validation set. We generate the independent validation set by randomly sampling the same number of examples as the self-validation set with their ground-truth labels from the entire data not overlapped with the unlabeled set used for AL. As can be seen from Table 3, it is of interest to see that our self-validation set performs better than the random validation set. The two validation sets have a major difference in data distributions; the self-validation set mainly consists of the examples with highest meta-scores among the remaining unlabeled data per round, while the random validation set consists of random examples. We conclude that the meta-score of MQ-Net has the potential for constructing a high-quality validation set in addition to query selection. Efficacy of Skyline Constraint. Table 4 demonstrates the final round test accuracy of MQ-Net with or without the skyline constraint. For the latter, a standard 2-layer MLP is used as the meta-network architecture without any modification. The performance of MQ-Net degrades significantly without the skyline constraint, meaning that the non-constrained MLP can easily overfit to the small-sized self-validation set, thereby assigning high output scores on less-pure and less-informative examples. Therefore, the violation of the skyline constraint in optimization makes MQ-Net hard to balance between the purity and informativeness scores in query selection. Efficacy of Meta-objective. MQ-Net keeps finding the best balance between purity and informativeness over multiple AL rounds by repeatedly minimizing the meta-objective in Eq. (3). To validate its efficacy, we compare it with two simple alternatives based on heuristic balancing rules such as linear combination and multiplication, denoted as P(x) + I(x) and P(x) · I(x), respectively. Following the default setting of MQ-Net, we use LL for P(x) and CSI for I(x). Table 5 shows the AL performance of the two alternatives and MQ-Net for the split-dataset setup on CIFAR10 with the noise ratios of 20% and 40%. MQ-Net beats the two alternatives after the second AL round where MQ-Net starts balancing purity and informativeness with its meta-objective. This result implies that our meta-objective successfully finds the best balance between purity and informativeness by emphasizing informativeness over purity at the later AL rounds. 5.5 Effect of Varying OOD Labeling Cost The labeling cost for OOD examples could vary with respect to data domains. To validate the robustness of MQ-Net on diverse labeling scenarios, we conduct an additional study of adjusting the labeling cost cOOD for the OOD examples. Table 6 summarizes the performance change with four different labeling costs (i.e., 0.5, 1, 2, and 4). The two standard AL methods, CONF and CORESET, and two open-set AL methods, CCAL and SIMILAR, are compared with MQ-Net. 
Overall, MQ-Net consistently outperforms the four baselines regardless of the labeling cost. Meanwhile, CCAL and SIMILAR are more robust to the higher labeling cost than CONF and CORESET; CCAL and SIMILAR, which favor high purity examples, query more IN examples than CONF and CORESET, so they are less affected by the labeling cost, especially when it is high. 6 Conclusion We propose MQ-Net, a novel meta-model for open-set active learning that deals with the purityinformativeness dilemma. MQ-Net finds the best balancing between the two factors, being adaptive to the noise ratio and target model status. A clean validation set for the meta-model is obtained for free by exploiting the procedure of active learning. A ranking loss with the skyline constraint optimizes MQ-Net to make the output a legitimate meta-score that keeps the obvious order of two examples. MQ-Net is shown to yield the best test accuracy throughout the entire active learning rounds, thereby empirically proving the correctness of our solution to the purity-informativeness dilemma. Overall, we expect that our work will raise the practical usability of active learning with open-set noise. Acknowledgement This work was supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2020-0-00862, DB4DL: HighUsability and Performance In-Memory Distributed DBMS for Deep Learning). The experiment was conducted by the courtesy of NAVER Smart Machine Learning (NSML) [47].
1. What is the focus of the paper, and what practical problem does it aim to solve? 2. What are the strengths and weaknesses of the proposed approach, particularly regarding its intuitive sense and compatibility with other measures? 3. Are there any concerns or questions about the experimental results, such as their significance, resource usage, and comparison to random acquisition baselines? 4. How does the reviewer assess the clarity and readability of certain parts of the paper, including the problem statement and figures? 5. Does the reviewer have any minor questions or comments regarding specific details in the paper, such as z-score normalization, Figure 4, activation functions, and terminology? 6. Is there anything that the reviewer considers a limitation of the paper, even though it's discussed in the appendix?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The paper discusses the purity-informativeness dilemma for open-set active learning, where excessively increasing purity can lead to a loss of informativeness. As mitigation, the authors propose a new meta-heuristic, called MQ-Net, which learns to monotonically combine a purity and an informativeness metric. MQ-Net is trained on the acquired query sets in alternation with using it for query selection.
Strengths And Weaknesses Strengths:
- The paper addresses a practically relevant problem.
- The proposed heuristic intuitively makes sense and is compatible with different choices of purity or informativeness measures.
- The overhead of the heuristic is small and it seems to be easy to implement.
Weaknesses:
- In contrast to the answers in the checklist, no code is provided. It is also unclear how the hyperparameters of MQ-Net were chosen, and there is no report about the resource usage.
- Without reported standard deviations, it is hard to assess the significance of the empirical results; since the authors already report running the experiments five times with different random seeds, it is unclear why they decided not to report them. In addition, there is no comparison against the performance of a random acquisition baseline; having it would justify the necessity of using an active learning strategy at all.
- Some parts of the paper could be improved in readability, e.g., in the problem statement, X_IN is initially defined as a set of in-distribution samples (without their labels), but later on, T_IN, coming from the same distribution D_IN, contains pairs of (input, target).
Questions Major questions:
- Could you additionally provide the standard deviation across the five different runs?
- What computing infrastructure was used? In particular, I wonder about the resource requirements for experiments conducted on the ImageNet dataset.
- How do you do the z-score normalization, i.e., over which parts do you compute the mean and standard deviation?
Minor questions:
- Figure 4: it appears that all red OOD samples received a high purity score - shouldn't it be vice versa?
- In Theorem 4.1, the constraint that the activation function needs to be monotonically non-decreasing is not mentioned (it is, however, in the appendix). This is an important requirement, and while most commonly used activation functions are monotonically non-decreasing, some, e.g., Swish, are not.
- In the problem statement, you introduce different costs for annotation of OOD and IN examples. To me it is not clear what would motivate this. In the experiments, you then set the same costs for OOD and IN labeling, as in related work.
Minor comments: L52-53: One of the "HP" should be something else; L80: directions -> direction; L89: k-MEANS -> k-MEANS++; L115: more higher -> higher; L120: delimma -> dilemma; L187: a -> the; L324-325: this could make a good plot, e.g., x=AL round, y=noise ratio, color=slope; L363: legit -> legitimate / valid; Figure 4 is very small and thus hard to read.
Limitations Yes, although the discussion is only present in the appendix.
NIPS
Title Biologically-Plausible Determinant Maximization Neural Networks for Blind Separation of Correlated Sources Abstract Extraction of latent sources of complex stimuli is critical for making sense of the world. While the brain solves this blind source separation (BSS) problem continuously, its algorithms remain unknown. Previous work on biologically-plausible BSS algorithms assumed that observed signals are linear mixtures of statistically independent or uncorrelated sources, limiting the domain of applicability of these algorithms. To overcome this limitation, we propose novel biologically-plausible neural networks for the blind separation of potentially dependent/correlated sources. Differing from previous work, we assume some general geometric, not statistical, conditions on the source vectors allowing separation of potentially dependent/correlated sources. Concretely, we assume that the source vectors are sufficiently scattered in their domains which can be described by certain polytopes. Then, we consider recovery of these sources by the Det-Max criterion, which maximizes the determinant of the output correlation matrix to enforce a similar spread for the source estimates. Starting from this normative principle, and using a weighted similarity matching approach that enables arbitrary linear transformations adaptable by local learning rules, we derive two-layer biologically-plausible neural network algorithms that can separate mixtures into sources coming from a variety of source domains. We demonstrate that our algorithms outperform other biologically-plausible BSS algorithms on correlated source separation problems. 1 Introduction Our brains constantly and effortlessly extract latent causes, or sources, of complex visual, auditory or olfactory stimuli sensed by sensory organs [1–11]. This extraction is mostly done without any instruction, in an unsupervised manner, making the process an instance of the blind source separation (BSS) problem [12, 13]. Indeed, visual and auditory cortical receptive fields were argued to be the result of performing BSS on natural images [1, 2] and sounds [4]. The wide-spread use of BSS in the brain suggests the existence of generic circuit motifs that perform this task [14]. Consequently, the literature on biologically-plausible neural network algorithms for BSS is growing [15–19]. Because BSS is an underdetermined inverse problem, BSS algorithms make generative assumptions on observations. In most instances of the biologically-plausible BSS algorithms, complex stimuli are assumed to be linear mixtures of latent sources. This assumption is particularly fruitful and is used to model, for example, natural images [1, 20], and responses of olfactory neurons to complex odorants [21–23]. However, linear mixing by itself is not sufficient for source identifiability; further assumptions are needed. Previous work on biologically-plausible algorithms for BSS of linear mixtures assumed sources to be statistically independent [17, 19, 24] or uncorrelated [16, 18]. However, these assumptions are very limiting when considering real data where sources can themselves be correlated. In this paper, we address the limitation imposed by independence assumptions and provide biologically-plausible BSS neural networks that can separate potentially correlated sources.
We achieve this by considering various general geometric identifiability conditions on sources instead of statistical assumptions like independence or uncorrelatedness. In particular, 1) we make natural assumptions on the domains of source vectors–like nonnegativity, sparsity, anti-sparsity or boundedness (Figure 1)–and 2) we assume that latent source vectors are sufficiently spread in their domain [25, 26]. Because these identifiability conditions are not stochastic in nature, our neural networks are able to separate both independent and dependent sources. We derive our biologically-plausible algorithms from a normative principle. A common method for exploiting our geometric identifiability conditions is to disperse latent vector estimates across their presumed domain by maximizing the determinant of their sample correlation matrix, i.e., the Det-Max approach [25, 27–30]. Starting from a Det-Max objective function with constraints that specify the domain of source vectors, and using mathematical tools introduced for mapping optimization algorithms to adaptive Hebbian neural networks [18, 31, 32], we derive two-layered neural networks that can separate potentially correlated sources from their linear mixtures (Figure 2). These networks contain feedforward, recurrent and feedback synaptic connections updated via Hebbian or anti-Hebbian update rules. The domain of latent sources determines the structure of the output layer of the neural network (Figure 2, Table 1 and Appendix D). In summary, our main contributions in this article are the following: • We propose a normative framework for generating biologically plausible neural networks that are capable of separating correlated sources from their mixtures by deriving them from a Det-Max objective function subject to source domain constraints. • Our framework can handle infinitely many source types by exploiting their source domain topology. • We demonstrate the performance of our networks in simulations with synthetic and realistic data. 1.1 Other related work Several algorithms for separation of linearly mixed and correlated sources have been proposed outside the domain of biologically-plausible BSS. These algorithms make other forms of assumptions on the latent sources. Nonnegative matrix factorization (NMF) assumes that the latent vectors are nonnegative [13, 33–35]. Simplex structured matrix factorization (SSMF) assumes that the latent vectors are members of the unit-simplex [25, 36, 37]. Sparse component analysis (SCA) often assumes that the latent vectors lie in the unity `1-norm-ball [30, 38–42]. Antisparse bounded component analysis (BCA) assumes latent vectors are in the `1-norm-ball [28, 29, 43]. Recently introduced polytopic matrix factorization (PMF) extends the identifiability-enabling domains to infinitely many polytopes obeying a particular symmetry restriction [26, 44, 45]. The mapping of optimization algorithms to biologically-plausible neural networks have been formalized in the similarity matching framework [31, 32, 46, 47]. Several BSS algorithms were proposed within this framework: 1) Nonnegative Similarity Matching (NSM) [16, 48] separates linear mixtures of uncorrelated nonnegative sources, 2) [19] separates independent sources, and 3) Bounded Similarity Matching (BSM) separates uncorrelated anti-sparse bounded sources from `1-norm-ball [18]. BSM introduced a weighted inner product-based similarity criterion, referred to as the weighted similarity matching (WSM). 
Compared to these algorithm, the neural network algorithms we propose in this article 1) cover more general source domains, 2) handle potentially correlated sources, 3) use a two-layer WSM architecture (relative to single layer WSM architecture of BSM, which is not capable of generating arbitrary linear transformations) and 4) offer a general framework for neural-network-based optimization of the Det-Max criterion. 2 Problem statement 2.1 Sources We assume that there are n real-valued sources, represented by the vector s 2 P , where P is a particular subset of Rn. Our algorithms will address a wide range of source domains. We list some examples before giving a more general criterion: • Bounded sparse sources: A natural convex domain choice for sparse sources is the unit `1 norm ball B`1 = {s | ksk1 1} (Figure 1.(a)). The use of `1-norm as a convex (non)sparsity measure has been quite successful with various applications including sparse dictionary learning/component analysis [30, 39, 41, 49, 50] and modeling of V1 receptive fields [2]. • Bounded anti-sparse sources: A common domain choice for anti-sparse sources is the unit `1- norm-ball: B`1 = {s | ksk1 1} (Figure 1.(b)). If vectors drawn from B`1 are well-spread inside this set, some samples would contain near-peak magnitude values simultaneously at all their components. The potential equal spreading of values among the components justifies the term “anti-sparse” [51] or “democratic” [52] component representations. This choice is well-suited for both applications in natural images and digital communication constellations [28, 43]. • Normalized nonnegative sources: Simplex structured matrix factorization [25, 36, 37] uses the unit simplex [35, 53] = {s | s 0,1T s = 1} (Figure 1.(c)) as the source domain. Nonnegativity of sources naturally arises in biological context, for example in demixing olfactory mixtures [54]. • Nonnegative bounded anti-sparse sources: A non-degenerate polytopic choice of the nonnegative sources can be obtained through the combination of anti-sparseness and nonnegativity constraints. This corresponds to the intersection of B`1 with the nonnegative orthant Rn+, represented as B`1,+ = B`1 \ Rn+ [26] (Figure 1.(d)). • Nonnegative bounded sparse sources: Another polytopic choice for nonnegative sources can be obtained through combination of the sparsity and nonnegativity constraints which yields the intersection of B`1 with the nonnegative orthant R+, [26]: B`1,+ = B`1 \ Rn+ (Figure 1.(e)). Except the unit simplex , all the examples above are examples of an infinite set of identifiable polytopes whose symmetry groups are restricted to the combinations of component permutations and sign alterations as formalized in PMF framework for BSS [44]. Further, in- stead of a homogeneous choice of features, such as sparsity and nonnegativity, globally imposed on all elements of the component vector, we can assign these attributes at the subvector level and still obtain identifiable polytopes. For example, the reference [26] provides the set Pex = ⇢ s 2 R3 s1, s2 2 [ 1, 1], s3 2 [0, 1], s1 s2 1 1, s2 s3 1 1 , as a simple il- lustration of such polytopes with heterogeneous structure where s3 is nonnegative, s1, s2 are signed, and [ s1 s2 ] T , [ s2 s3 ] T are sparse subvectors, while sparsity is not globally imposed. 
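As a small numerical illustration of these identifiable domains, the following NumPy sketch implements membership tests for the sets in Figure 1 and for the heterogeneous example polytope Pex quoted above. The tolerance parameter is an assumption added for floating-point robustness, and the reading of the partly garbled Pex constraints (ℓ1-norms of the subvectors bounded by one) is our interpretation.

```python
import numpy as np

def in_l1_ball(s, tol=1e-9):        # bounded sparse sources: ||s||_1 <= 1
    return np.sum(np.abs(s)) <= 1 + tol

def in_linf_ball(s, tol=1e-9):      # bounded anti-sparse sources: ||s||_inf <= 1
    return np.max(np.abs(s)) <= 1 + tol

def in_unit_simplex(s, tol=1e-9):   # normalized nonnegative sources: s >= 0, 1^T s = 1
    return bool(np.all(s >= -tol)) and abs(np.sum(s) - 1.0) <= tol

def in_nonneg_linf(s, tol=1e-9):    # B_linf intersected with the nonnegative orthant
    return bool(np.all(s >= -tol)) and in_linf_ball(s, tol)

def in_nonneg_l1(s, tol=1e-9):      # B_l1 intersected with the nonnegative orthant
    return bool(np.all(s >= -tol)) and in_l1_ball(s, tol)

def in_P_ex(s, tol=1e-9):
    """Heterogeneous example polytope of [26]: s1, s2 in [-1, 1], s3 in [0, 1],
    with sparsity imposed only on the subvectors: |s1| + |s2| <= 1 and |s2| + |s3| <= 1."""
    s1, s2, s3 = s
    return (abs(s1) <= 1 + tol and abs(s2) <= 1 + tol and -tol <= s3 <= 1 + tol
            and abs(s1) + abs(s2) <= 1 + tol and abs(s2) + abs(s3) <= 1 + tol)
```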
In this article, we concentrate on particular source domains including the unit simplex, and the subset of identifiable polytopes for which attributes such as sparsity and nonnegativity are defined at the subvector level, in the general form

$$\mathcal{P} = \left\{ \mathbf{s} \in \mathbb{R}^n \;\middle|\; s_i \in [-1,1] \;\forall i \in I_s,\;\; s_i \in [0,1] \;\forall i \in I_+,\;\; \|\mathbf{s}_{J_k}\|_1 \le 1,\; J_k \subseteq \mathbb{Z}_n,\; k \in \mathbb{Z}_L \right\}, \tag{1}$$

where I+ ⊆ Z_n is the index set for nonnegative sources, I_s is its complement, s_{J_k} is the subvector constructed from the elements with indices in J_k, and L is the number of sparsity constraints imposed at the subvector level. The Det-Max criterion for BSS is based on the assumption that the source samples are well-spread in their presumed domain. The references [55] and [26] provide precise conditions on the scattering of source samples which guarantee their identifiability for the unit simplex and polytopes, respectively. Appendix A provides a brief summary of these conditions. We emphasize that our assumptions about the sources are deterministic. Therefore, our proposed algorithms do not exploit any stochastic assumptions such as independence or uncorrelatedness, and can separate both independent and dependent (potentially correlated) sources. 2.2 Mixing The sources s_t are mixed through a mixing matrix A ∈ R^{m×n}:

$$\mathbf{x}_t = \mathbf{A}\mathbf{s}_t, \quad t \in \mathbb{Z}. \tag{2}$$

We only consider the (over)determined case with m ≥ n and assume that the mixing matrix is full-rank. While we consider noiseless mixtures to achieve perfect separability, the optimization setting proposed for the online algorithm features a particular objective function that safeguards against potential noise presence. We use S(t) = [s_1 ... s_t] ∈ R^{n×t} and X(t) = [x_1 ... x_t] ∈ R^{m×t} to represent data snapshot matrices, at time t, for sources and mixtures, respectively. 2.3 Separation The goal of source separation is to obtain an estimate of S(t) from the mixture measurements X(t) when the mixing matrix A is unknown. We use the notation y_t to refer to source estimates, which are linear transformations of observations, i.e., y_i = W x_i, where W ∈ R^{n×m}. We define Y(t) = [y_1 y_2 ... y_t] ∈ R^{n×t} as the output snapshot matrix. "Ideal separation" is defined as the condition where the outputs are scaled and permuted versions of the original sources, i.e., they satisfy y_t = P Λ s_t, where P is a permutation matrix and Λ is a full-rank diagonal matrix. 3 Determinant maximization based blind source separation Among several alternative solution methods for the BSS problem, the determinant-maximization (Det-Max) criterion has been proposed within the NMF, BCA, and PMF frameworks [26–28, 30, 35, 44]. Here, the separator is trained to maximize the (log-)determinant of the sample correlation matrix of the separator outputs, J(W) = log(det(R̂_y(t))), where R̂_y(t) is the sample correlation matrix R̂_y(t) = (1/t) Σ_{i=1}^{t} y_i y_i^T = (1/t) Y(t)Y(t)^T. Further, during the training process, the separator outputs are constrained to lie inside the presumed source domain, i.e., P. As a result, we can pose the corresponding optimization problem as [26, 35]

$$\underset{\mathbf{Y}(t)}{\text{maximize}} \;\; \log\big(\det(\mathbf{Y}(t)\mathbf{Y}(t)^T)\big) \tag{3a}$$
$$\text{subject to} \;\; \mathbf{y}_i \in \mathcal{P}, \quad i = 1, \ldots, t, \tag{3b}$$

where we ignored the constant 1/t term. Here, the determinant of the correlation matrix acts as a spread measure for the output samples. If the original source samples {s_1, ..., s_t} are sufficiently scattered inside the source domain P, as described in Section 2.1 and Appendix A, then the global solution of this optimization can be shown to achieve perfect separation [26, 35, 55].
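A direct NumPy rendering of the Det-Max objective (3a), using slogdet for numerical stability, is sketched below; the toy comparison at the end is only meant to show that well-scattered outputs score higher than collapsed (rank-deficient) ones.

```python
import numpy as np

def detmax_objective(Y: np.ndarray) -> float:
    """log det of the sample correlation matrix R_y = (1/t) Y Y^T, for Y of shape (n, t)."""
    n, t = Y.shape
    R_y = (Y @ Y.T) / t
    sign, logabsdet = np.linalg.slogdet(R_y)     # numerically safer than log(det(.))
    return logabsdet if sign > 0 else -np.inf    # degenerate outputs score -inf

# Toy illustration: outputs spread over [-1, 1]^3 versus outputs squeezed onto a line.
rng = np.random.default_rng(0)
Y_spread = rng.uniform(-1, 1, size=(3, 1000))
Y_collapsed = np.outer(rng.uniform(-1, 1, size=3), rng.uniform(-1, 1, size=1000))
print(detmax_objective(Y_spread) > detmax_objective(Y_collapsed))   # True
```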
4 An alternative optimization formulation of determinant-maximization based on weighted similarity matching Here, we reformulate the Det-Max problem 3 described above in a way that allows derivation of a biologically-plausible neural network for the linear BSS setup in Section 2. Our formulation applies to all source types discusses in 2.1. We propose the following optimization problem: minimize Y(t),H(t),D1,11(t),...D1,nn(t),D1(t) D2,11(t),...D2,nn(t),D2(t) nX i=1 log(D1,ii(t)) + nX i=1 log(D2,ii(t)) (4a) subject to X(t)TX(t) H(t)TD1(t)H(t) = 0, (4b) H(t)TH(t) Y(t)TD2(t)Y(t) = 0, (4c) yi 2 P, i = 1, . . . , n, (4d) Dl(t) = diag(Dl,11(t), . . . , Dl,nn(t)), l = 1, 2, (4e) Dl,11(t), Dl,22(t), . . . , Dl,nn(t) > 0, l = 1, 2 (4f) Here, X(t) 2 Rm⇥t is the matrix containing input (mixture) vectors, Y(t) 2 Rn⇥t is the matrix containing output vectors, H(t) 2 Rn⇥t is a slack variable containing an intermediate signal {hi 2 Rn, i = 1, . . . , t}, corresponding to the hidden layer of the neural network implementation in Section 5, in its columns H(t) = [ h1 h2 . . . ht ]. Dl,11(t), Dl,22(t), . . . , Dl,nn(t) for l = 1, 2 are nonnegative slack variables to be described below, and Dl is the diagonal matrix containing weights Dl,ii for i = 1, . . . , n and l = 1, 2. The constraint (4d) ensures that the outputs lie in the presumed domain of sources. This problem is related to the weighted similarity matching (WSM) objective introduced in [18]. Constraints (4b) and (4c) define two separate WSM conditions. In particular, the equality constraint in (4b) is a WSM constraint between inputs and the intermediate signal H(t). This constraint imposes that the pairwise weighted correlations of the signal {hi, i = 1, . . . , t} are the same as correlations among the elements of the input signal {xi, i = 1, . . . , t}, i.e., xTi xj = hTi D1(t)hj , 8i, j 2 {1, . . . , t}. D1,11(t), D1,22(t), . . . , D1,nn(t) correspond to inner product weights used in these equalities. Similarly, the equality constraint in (4c) defines a WSM constraint between the intermediate signal and outputs. This equality can be written as hT i hj = yTi D2(t)yj , i, j 2 {1, . . . , t}, and D2,11(t), D2,22(t), . . . , D2,nn(t) correspond to the inner product weights used in these equalities. The optimization involves minimizing the logarithm of the determinant of the weighting matrices. Now we state the relation between our WSM-based objective and the original Det-Max criterion (3). Theorem 1. If X(t) is full column-rank, then global optimal Y(t) solutions of (3) and (4) coincide. Proof of Theorem 1. See Appendix B for the proof. The proof relies on a lemma that states that the optimization constraints enforce inputs and outputs to be related by an arbitrary linear transformation. 5 Biologically-plausible neural networks for WSM-based BSS The optimization problems we considered so far were in an offline setting, where all inputs are observed together and all outputs are produced together. However, biology operates in an online fashion, observing an input and producing the corresponding output, before seeing the next input. Therefore, in this section, we first introduce an online version of the batch WSM-problem (4). Then we show that the corresponding gradient descent algorithm leads to a two-layer neural network with biologically-plausible local update rules. 5.1 Online optimization setting for WSM-based BSS We first propose an online extension of WSM-based BSS (4). 
In the online setting, past outputs cannot be altered, but past inputs and outputs still carry valuable information about solving the BSS problem. We will write down an optimization problem whose goal is to produce the sources yt given a mixture xt, while exploiting information from all the fixed previous inputs and outputs. We first introduce our notation. We consider exponential weighting of the signals as a recipe for dynamical adjustment to potential nonstationarity in the data. We define the weighted input data snapshot matrix by time t as, X (t) = ⇥ t 1x1 . . . xt 1 xt ⇤ = X(t) (t), where is the forgetting factor and (t) = diag( t 1, . . . , , 1). The exponential weighting emphasizes recent mixtures by reducing the impact of past samples. Similarly, we define the corresponding weighted output snapshot matrix for output as Y(t) = ⇥ t 1y1 . . . yt 1 yt ⇤ = Y(t) (t), and the hidden layer vectors as H(t) = ⇥ t 1h1 . . . ht 1 ht ⇤ = H(t) (t). We further define ⌧ = limt!1 = P t 1 k=0 2k = 11 2 as a measure of the effective time window length for sample correlation calculations based on the exponential weights. In order to derive an online cost function, we first converted equality constraints in (4b) and (4c) to similarity matching cost functions J1(H(t),D1(t)) = 12⌧2 kX (t) T X (t) H(t)TD1(t)H(t)k2F , J2(H(t),D2(t),Y(t)) = 1 2⌧2 kH(t) T H(t) Y(t)TD2(t)Y(t)k2F . Then, a weighted combination of similarity matching costs and the objective function in (4a) yields the final cost function J (H(t),D1(t),D2(t),Y(t)) = SM [ J1(H(t),D1(t)) + (1 )J2(H(t),D2(t),Y(t))] +(1 SM )[ nX k=1 log(D1,kk(t)) + nX k=1 log(D2,kk(t))]. (5) Here, 2 [0, 1] and SM 2 [0, 1] are parameters that convexly combine similarity matching costs and the objective function. Finally, we can state the online optimization problem for determining the current output yt, the corresponding hidden state ht and for updating the gain parameters Dl(t) for l = 1, 2, as minimize yt,ht,D1,11(t),...D1,nn(t),D1(t) D2,11(t),...D2,nn(t),D2(t) J (H(t),D1(t),D2(t),Y(t)) (6a) subject to yt 2 P, (6b) Dl(t) = diag(Dl,11(t), . . . , Dl,nn(t)), l = 1, 2, (6c) Dl,11(t), Dl,22(t), . . . , Dl,nn(t) > 0, l = 1, 2 (6d) As shown in Appendix C.1, part of J that depends on ht and yt can be written as C(ht,yt) = 2h T t D1MH(t)D1(t)ht 4h T t D1(t)WHX(t)xt +2yT t D2(t)MY (t)D2(t)yt 4y T t D2(t)WY H(t)ht + 2h T t MH(t)ht, (7) where the dependence on past inputs and outputs appear in the weighted correlation matrices: MH(t) = 1 ⌧ P t 1 k=1( 2)t 1 khkhTk , WHX(t) = 1 ⌧ P t 1 k=1( 2)t 1 khkxTk , WY H(t) = 1 ⌧ P t 1 k=1( 2)t 1 kykhTk , MY (t) = 1 ⌧ P t 1 k=1( 2)t 1 kykyTk . (8) 5.2 Description of network dynamics for bounded anti-sparse sources We now show that the gradient-descent minimization of the online WSM cost function in (6) can be interpreted as the dynamics of a neural network with local learning rules. The exact network architecture is determined by the presumed identifiable source domain P , which can be chosen in infinitely many ways. In this section, we concentrate on the domain choice P = B1 as an illustrative example. In Section 5.3, we discuss how to generalize the results of this section by modifying the output layer for different identifiable source domains. 
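Before turning to the update equations, the exponentially weighted (cross-)correlation matrices of Eq. (8) can be maintained with simple rank-one recursions, which is how they appear as synaptic weights in the network (cf. the synaptic update rules given at the end of Section 5.2). In this NumPy sketch, lam stands for the forgetting factor whose symbol is introduced in Section 5.1 (assumed to be λ here, since the symbol is garbled in the extracted text), and the initialization values are assumptions.

```python
import numpy as np

class WeightedCorrelations:
    """Exponentially weighted correlation matrices of Eq. (8), maintained recursively.
    lam is the forgetting factor; tau = 1 / (1 - lam**2) is the effective window length."""
    def __init__(self, n: int, m: int, lam: float = 0.999):
        self.lam2 = lam ** 2
        self.M_H = 1e-3 * np.eye(n)      # hidden-layer autocorrelation  -> layer-1 recurrent weights
        self.M_Y = 1e-3 * np.eye(n)      # output-layer autocorrelation  -> layer-2 recurrent weights
        self.W_HX = np.zeros((n, m))     # hidden-input cross-correlation -> layer-1 feedforward weights
        self.W_YH = np.zeros((n, n))     # output-hidden cross-correlation -> layer-2 feedforward weights

    def update(self, x: np.ndarray, h: np.ndarray, y: np.ndarray) -> None:
        """Rank-one (Hebbian) updates; each term uses only pre- and post-synaptic activities."""
        a, b = self.lam2, 1.0 - self.lam2
        self.M_H  = a * self.M_H  + b * np.outer(h, h)
        self.M_Y  = a * self.M_Y  + b * np.outer(y, y)
        self.W_HX = a * self.W_HX + b * np.outer(h, x)
        self.W_YH = a * self.W_YH + b * np.outer(y, h)
```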
We start by writing the update expressions for the optimization variables based on the gradients of J (ht,yt,D1(t),D2(t)): Update dynamics for ht: Following previous work [48, 56], and using the gradient of (7) in (A.8) with respect to ht, we can write down an update dynamics for ht in the form dv(⌧) d⌧ = v(⌧) SM [((1 )M̄H(t) + D1(t)M̄H(t)D1(t))h(⌧) + D1(t)WHX(t)x(⌧) + (1 )WY H(t) T D2(t)y(⌧)] (9) ht,i(⌧) = A vi(⌧) SM Hii(t)((1 ) + D1,ii(t) 2) ! , for i = 1, . . . n, (10) where H(t) is a diagonal matrix containing diagonal elements of MH(t) and M̄H(t) = MH(t) H(t), (·) is the clipping function, defined as A(x) = ⇢ x A x A, Asign(x) otherwise. . This dy- namics can be shown to minimize (7) [56]. Here v(⌧) is an internal variable that could be interpreted as the voltage dynamics of a biological neuron, and is defined based on a linear transformation of ht in (A.9). Equation (9) defines v(⌧) dynamics from the gradient of (7) with respect to ht in (A.8). Due to the positive definite linear map in (A.9), the expression in (A.8) also serves as a descent direction for v(⌧). Furthermore, (·) function is the projection onto AB1, where [ A,A] is the presumed dynamic range for the components of ht. We note that there is no explicit constraint set for ht in the online optimization setting of Section 5.1, and therefore, A can be chosen as large as desired in the actual implementation. We included the nonlinearity in (10) to model the limited dynamic range of an actual (biological) neuron. Update dynamics for output yt: We write the update dynamics for the output yt, based on (A.11) as du(⌧) d⌧ = u(⌧) +WY H(t)h(⌧) M̄Y (t)D2(t)y(⌧), (11) yt,i(⌧) = 1 ✓ ui(⌧) Y ii(t)D2,ii(t) ◆ , for i = 1, . . . n, (12) which is derived using the same approach for ht, where we used the descent direction expression in (A.11), and the substitution in (A.12). Here, Y (t) is a diagonal matrix containing diagonal elements of MY (t) and M̄Y (t) = MY (t) Y (t). Note that the nonlinear mapping 1(·) is the projection onto the presumed domain of sources, i.e., P = B1, which is elementwise clipping operation. The state space representations in (9)-(10) and (11)-(12) correspond to a two-layer recurrent neural network with input xt, hidden layer activation ht, output layer activation yt, WHX (WTHX ) and WY H (WTY H ) are the feedforward (feedback) synaptic weight matrices for the first and the second layers, respectively, and M̄H and M̄Y are recurrent synaptic weight matrices for the first and the second layers, respectively. The corresponding neural network schematic is provided in Figure 2.(b). The gain and synaptic weight dynamics below describe the learning mechanism for this network: Update dynamics for gains Dl,ii: Using the derivative of the cost function with respect to D1,ii in (A.13), we can write the dynamics corresponding to the gain variable D1,ii as µD1 dD1,ii(t) dt = ( SM )(kMHi,:k 2 D1(t) kWHXi,:k 2 2) (1 SM ) 1 D1,ii(t) , (13) where µD1 corresponds to the learning time-constant. Similarly, for the gain variable D2,ii, the corresponding coefficient dynamics expression based on (A.14) is given by µD2 dD2,ii(t) dt = ( SM )(kMY i,:k 2 D2(t) kWY Hi,:k 2 2) (1 SM ) 1 D2,ii(t) , (14) where µD2 corresponds to the learning time-constant. The inverses of the inner product weights Dl,ii correspond to homeostatic gain parameters. 
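As an illustration of how (11)-(12) can be simulated, the sketch below runs an Euler discretization of the output-layer dynamics for P = B_∞. The leaky-integrator signs, the step size, the iteration count and the initialization are assumptions made for this sketch; it is not the authors' exact solver.

```python
import numpy as np

def clip_pm1(x):
    """Elementwise projection onto [-1, 1], i.e., onto the B_infinity source domain."""
    return np.clip(x, -1.0, 1.0)

def run_output_dynamics(h, WYH, MY, D2_diag, n_steps=200, dt=0.05):
    """Schematic Euler discretization of the output dynamics (11)-(12).
    Assumes the diagonal of MY and the entries of D2_diag are strictly positive."""
    Lam_Y = np.diag(MY).copy()          # diagonal of MY
    MY_bar = MY - np.diag(Lam_Y)        # off-diagonal (recurrent) part of MY
    u = np.zeros(WYH.shape[0])          # internal ("voltage") variable
    y = np.zeros_like(u)
    for _ in range(n_steps):
        du = -u + WYH @ h - MY_bar @ (D2_diag * y)   # leaky integration, eq. (11)
        u = u + dt * du
        y = clip_pm1(u / (Lam_Y * D2_diag))          # output nonlinearity, eq. (12)
    return y
```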
The inspection of the gain updates in (13) and (14) leads to an interesting observation: whether the corresponding gain is going to increase or decrease depends on the balance between the norms of the recurrent and the feedforward synaptic strengths, which are the statistical indicators of the recent output and input activations, respectively. Hence, the homeostatic gain of the neuron will increase (decrease) if the level of recent output activations falls behind (surpasses) the level of recent input activations to balance input/output energy levels. The resulting dynamics align with the experimental homeostatic balance observed in biological neurons [57]. Based on the definitions of the synaptic weight matrices in (8), we can write their updates as MH(t+ 1) = 2 MH(t) + (1 2)hth T t , MY (t+ 1) = 2 MY (t) + (1 2)yty T t , (15) WHX(t+ 1) = 2 WHX(t) + (1 2)htx T t , WY H(t+ 1) = 2 WY H(t) + (1 2)yth T t . These updates are local in the sense that they only depend on variables available to the synapse, and hence are biologically plausible. 5.3 Det-max WSM neural network examples for more general source domains Det-Max Neural Network obtained for the source domain P = B1 in Section 5.2 can be extended to more general identifiable source domains by only changing the output dynamics. In Appendix D, we provide illustrative examples for different identifiable domain choices. Table 1 summarizes the output dynamics obtained for the identifiable source domain examples in Figure 1. We can make the following observations on Table 1: (1) For sparse and unit simplex settings, there is an additional inhibitory neuron which takes input from all outputs and whose activation is the inhibitory signal 1(⌧), (2) The source attributes, which are globally defined over all sources, determine the activation functions at the output layer. The proposed framework can be applied to any polytope described by (1) for which the corresponding Det-Max neural network will contain combinations of activation functions in Table 1 as illustrated in Figure 2.(a). 6 Numerical experiments In this section, we illustrate the applications of the proposed WSM-based BSS framework for both synthetic and natural sources. More details on these experiments and additional examples are provided in Appendix E, including sparse dictionary learning. Our implementation code is publicly available1. 6.1 Synthetically correlated source separation In order to illustrate the correlated source separation capability of the proposed WSM neural networks, we consider a numerical experiment with five copula-T distributed (uniform and correlated) sources. For the correlation calibration matrix for these sources, we use Toeplitz matrix whose first row is [1 ⇢ ⇢ ⇢ ⇢]. The ⇢ parameter determines the correlation level, and we considered the range [0, 0.6] for this parameter. These sources are mixed with a 10⇥5 random matrix with independent and identically distributed (i.i.d.) standard normal random variables. The mixtures are corrupted by i.i.d. 1https://github.com/BariscanBozkurt/Biologically-Plausible-DetMaxNNs-for-Blind-Source-Separation normal noise corresponding to 30dB signal-to-noise ratio (SNR) level. In this experiment, we employ the nonnegative-antisparse-WSM neural network (Figure 8 in Appendix D.2) whose activation functions at the output layer are nonnegative-clipping functions, as the sources are nonnegative uniform random variables. 
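The experimental setup of this subsection can be sketched as follows. The snippet generates correlated uniform sources through a t-copula with the equicorrelation (Toeplitz) calibration matrix described above, mixes them with a 10×5 i.i.d. standard normal matrix, and adds noise at 30 dB SNR. The copula degrees of freedom, the random seeds and the sample count are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from scipy import stats

def generate_copula_t_sources(n_src=5, n_samples=10000, rho=0.4, df=4, seed=0):
    """Correlated sources with uniform [0,1] marginals via a t-copula whose
    Toeplitz calibration matrix has first row [1, rho, ..., rho]."""
    rng = np.random.default_rng(seed)
    Sigma = np.full((n_src, n_src), rho)
    np.fill_diagonal(Sigma, 1.0)
    L = np.linalg.cholesky(Sigma)
    z = L @ rng.standard_normal((n_src, n_samples))   # correlated Gaussians
    g = rng.chisquare(df, size=n_samples) / df        # chi-square mixing variable
    t_samples = z / np.sqrt(g)                        # multivariate t samples
    return stats.t.cdf(t_samples, df)                 # map to uniform marginals

def mix_and_add_noise(S, n_mix=10, snr_db=30, seed=1):
    """Random i.i.d. standard normal mixing plus additive noise at the given SNR."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n_mix, S.shape[0]))
    X = A @ S
    noise = rng.standard_normal(X.shape)
    scale = np.sqrt((X.var() / noise.var()) * 10 ** (-snr_db / 10))
    return X + scale * noise, A

S = generate_copula_t_sources(rho=0.4)
X, A = mix_and_add_noise(S)
```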
[Figure: SINR (dB) versus the correlation parameter ρ for nonnegative anti-sparse source separation, comparing WSM, NSM, ICA-InfoMax, LD-InfoMax and PMF.] We observe that the performance of the batch Det-Max algorithms, i.e., LD-InfoMax and PMF, is also robust against source correlations. Furthermore, due to their batch nature, these algorithms typically achieved better performance than our online-restricted neural network, as expected. 6.2 Image separation To further illustrate the correlated source separation advantage of our approach, we consider a natural image separation scenario. For this example, we have three RGB images of size 324 × 432 × 3 as sources (Figure 4.(a)). The sample Pearson correlation coefficients between the images are ρ_12 = 0.263, ρ_13 = 0.066, ρ_23 = 0.333. We use a random 5 × 3 mixing matrix whose entries are drawn from an i.i.d. standard normal distribution. The corresponding mixtures are shown in Figure 4.(b). We applied the ICA, NSM and WSM algorithms to the mixtures. Figures 4.(c), (d) and (e) show the corresponding outputs. High-resolution versions of all images in this example are available in Appendix E.4, in addition to comparisons with the LD-InfoMax and PMF algorithms. The Infomax ICA algorithm's outputs have an SINR level of 13.92 dB, and this performance is perceivable as residual interference effects in the corresponding output images. The NSM algorithm achieves a significantly higher SINR level of 17.45 dB, and the output images visually reflect this better performance. Our algorithm achieves the best SINR level of 27.49 dB, and the corresponding outputs closely resemble the original source images. 7 Discussion and Conclusion We proposed a general framework for generating biologically plausible neural networks that are capable of separating correlated sources from their linear mixtures, and demonstrated their successful correlated source separation capability on synthetic and natural sources. Another motivation for our work is to link network structure with function. This is a long-standing goal of neuroscience; however, examples where this link can be achieved are limited. Our work provides concrete examples where clear links can be established between a network's architecture, i.e., the number of interneurons, the connections between interneurons and output neurons, and the nonlinearities (frequency-current curves), and its function, i.e., the type of source separation or feature extraction problem the network solves. These links may provide insights and interpretations that might generalize to real biological circuits. Our networks suffer from the same limitations as other recurrent biologically-plausible BSS networks. First, certain hyperparameters can significantly influence algorithm performance (see Appendix E.9). In particular, the inner product gains (D_ii) are sensitive to the combined choices of algorithm parameters, which require careful tuning. Second, the numerical experiments with our neural networks are relatively slow due to the recursive computations in (9)-(10) and (11)-(12) for the hidden layer and output vectors, which is common to all biologically plausible recurrent source separation networks (see Appendix F). This could perhaps be addressed by early-stopping the recursive computation [60]. Acknowledgments and Disclosure of Funding This work was supported by a KUIS AI Center Research Award. C. Pehlevan acknowledges support from the Intel Corporation.
1. What is the focus and contribution of the paper on blind source separation?
2. What are the strengths of the proposed approach, particularly in terms of its biological plausibility and novelty?
3. What are the weaknesses of the paper, especially regarding the experiments and comparisons with other works?
4. How does the reviewer assess the clarity, quality, significance, and reproducibility of the paper's content?
5. Are there any concerns or limitations regarding the proposed method?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The authors present a biologically-plausible algorithm for blind source separation (BSS) of correlated input sources. By applying the weighted similarity matching (WSM) approach to the Det-Max optimization algorithm used for BSS, the authors show that they can derive two-layered Hebbian neural networks that are able to separate correlated sources from linear mixtures. They then compare their method against Independent Components Analysis (ICA) and Nonnegative Similarity Matching (NSM) methods in two tasks: one with artificial signals and another with natural images. In the task with artificial signals, the authors show that their algorithm is robust against correlation in the input sources, whereas ICA and NSM degrade as measured by SINR. The authors report results for the task of separating mixtures of natural images that are in line with the former results, and they cherry-pick an example that illustrates the superior performance of their algorithm.
Strengths And Weaknesses
The paper offers a general framework for deriving neural networks from the Det-Max approach, which is a novel contribution. They are the first (to the best of my knowledge) to propose a biologically-plausible algorithm that separates correlated sources. Overall, it seems a worthy contribution in terms of novelty. In terms of quality, I think the paper is very solid, and there are no obvious errors that I can see. The only experiment that I am missing is one that compares WSM with algorithms that are able to separate correlated sources and reports how the performance compares there. I think this could be a necessary addition for judging the rating of the paper. The paper is fairly clearly written, but I would suggest adding a section that states more clearly what the contributions of the paper are. I like that the authors highlight the limitations of their approach. In terms of significance, I think the paper is relevant for understanding how brains may accomplish BSS and for investigating novel biologically-inspired algorithmic improvements that can help advance state-of-the-art methods of BSS with correlated sources.
Questions
Please add more experiments and report SINR comparing against algorithms that are able to separate correlated sources.
Limitations
Yes.
NIPS
Title Biologically-Plausible Determinant Maximization Neural Networks for Blind Separation of Correlated Sources Abstract Extraction of latent sources of complex stimuli is critical for making sense of the world. While the brain solves this blind source separation (BSS) problem continuously, its algorithms remain unknown. Previous work on biologically-plausible BSS algorithms assumed that observed signals are linear mixtures of statistically independent or uncorrelated sources, limiting the domain of applicability of these algorithms. To overcome this limitation, we propose novel biologically-plausible neural networks for the blind separation of potentially dependent/correlated sources. Differing from previous work, we assume some general geometric, not statistical, conditions on the source vectors allowing separation of potentially dependent/correlated sources. Concretely, we assume that the source vectors are sufficiently scattered in their domains which can be described by certain polytopes. Then, we consider recovery of these sources by the Det-Max criterion, which maximizes the determinant of the output correlation matrix to enforce a similar spread for the source estimates. Starting from this normative principle, and using a weighted similarity matching approach that enables arbitrary linear transformations adaptable by local learning rules, we derive two-layer biologically-plausible neural network algorithms that can separate mixtures into sources coming from a variety of source domains. We demonstrate that our algorithms outperform other biologically-plausible BSS algorithms on correlated source separation problems. 1 Introduction Our brains constantly and effortlessly extract latent causes, or sources, of complex visual, auditory or olfactory stimuli sensed by sensory organs [1–11]. This extraction is mostly done without any instruction, in an unsupervised manner, making the process an instance of the blind source separation (BSS) problem [12, 13]. Indeed, visual and auditory cortical receptive fields were argued to be the result of performing BSS on natural images [1, 2] and sounds [4]. The wide-spread use of BSS in the brain suggests the existence of generic circuit motifs that perform this task [14]. Consequently, the literature on biologically-plausible neural network algorithms for BSS is growing [15–19]. Because BSS is an underdetermined inverse problem, BSS algorithms make generative assumptions on observations. In most instances of the biologically-plausible BSS algorithms, complex stimuli are assumed to be linear mixtures of latent sources. This assumption is particularly fruitful and is used to model, for example, natural images [1, 20], and responses of olfactory neurons to complex odorants [21–23]. However, linear mixing by itself is not sufficient for source identifiability; further assumptions are needed. Previous work on biologically-plausible algorithms for BSS of linear mixtures 36th Conference on Neural Information Processing Systems (NeurIPS 2022). assumed sources to be statistically independent [17, 19, 24] or uncorrelated [16, 18]. However, these assumptions are very limiting when considering real data where sources can themselves be correlated. In this paper, we address the limitation imposed by independence assumptions and provide biologically-plausible BSS neural networks that can separate potentially correlated sources. 
We achieve this by considering various general geometric identifiability conditions on sources instead of statistical assumptions like independence or uncorrelatedness. In particular, 1) we make natural assumptions on the domains of source vectors–like nonnegativity, sparsity, anti-sparsity or boundedness (Figure 1)–and 2) we assume that latent source vectors are sufficiently spread in their domain [25, 26]. Because these identifiability conditions are not stochastic in nature, our neural networks are able to separate both independent and dependent sources. We derive our biologically-plausible algorithms from a normative principle. A common method for exploiting our geometric identifiability conditions is to disperse latent vector estimates across their presumed domain by maximizing the determinant of their sample correlation matrix, i.e., the Det-Max approach [25, 27–30]. Starting from a Det-Max objective function with constraints that specify the domain of source vectors, and using mathematical tools introduced for mapping optimization algorithms to adaptive Hebbian neural networks [18, 31, 32], we derive two-layered neural networks that can separate potentially correlated sources from their linear mixtures (Figure 2). These networks contain feedforward, recurrent and feedback synaptic connections updated via Hebbian or anti-Hebbian update rules. The domain of latent sources determines the structure of the output layer of the neural network (Figure 2, Table 1 and Appendix D). In summary, our main contributions in this article are the following: • We propose a normative framework for generating biologically plausible neural networks that are capable of separating correlated sources from their mixtures by deriving them from a Det-Max objective function subject to source domain constraints. • Our framework can handle infinitely many source types by exploiting their source domain topology. • We demonstrate the performance of our networks in simulations with synthetic and realistic data. 1.1 Other related work Several algorithms for separation of linearly mixed and correlated sources have been proposed outside the domain of biologically-plausible BSS. These algorithms make other forms of assumptions on the latent sources. Nonnegative matrix factorization (NMF) assumes that the latent vectors are nonnegative [13, 33–35]. Simplex structured matrix factorization (SSMF) assumes that the latent vectors are members of the unit-simplex [25, 36, 37]. Sparse component analysis (SCA) often assumes that the latent vectors lie in the unity `1-norm-ball [30, 38–42]. Antisparse bounded component analysis (BCA) assumes latent vectors are in the `1-norm-ball [28, 29, 43]. Recently introduced polytopic matrix factorization (PMF) extends the identifiability-enabling domains to infinitely many polytopes obeying a particular symmetry restriction [26, 44, 45]. The mapping of optimization algorithms to biologically-plausible neural networks have been formalized in the similarity matching framework [31, 32, 46, 47]. Several BSS algorithms were proposed within this framework: 1) Nonnegative Similarity Matching (NSM) [16, 48] separates linear mixtures of uncorrelated nonnegative sources, 2) [19] separates independent sources, and 3) Bounded Similarity Matching (BSM) separates uncorrelated anti-sparse bounded sources from `1-norm-ball [18]. BSM introduced a weighted inner product-based similarity criterion, referred to as the weighted similarity matching (WSM). 
Compared to these algorithm, the neural network algorithms we propose in this article 1) cover more general source domains, 2) handle potentially correlated sources, 3) use a two-layer WSM architecture (relative to single layer WSM architecture of BSM, which is not capable of generating arbitrary linear transformations) and 4) offer a general framework for neural-network-based optimization of the Det-Max criterion. 2 Problem statement 2.1 Sources We assume that there are n real-valued sources, represented by the vector s 2 P , where P is a particular subset of Rn. Our algorithms will address a wide range of source domains. We list some examples before giving a more general criterion: • Bounded sparse sources: A natural convex domain choice for sparse sources is the unit `1 norm ball B`1 = {s | ksk1 1} (Figure 1.(a)). The use of `1-norm as a convex (non)sparsity measure has been quite successful with various applications including sparse dictionary learning/component analysis [30, 39, 41, 49, 50] and modeling of V1 receptive fields [2]. • Bounded anti-sparse sources: A common domain choice for anti-sparse sources is the unit `1- norm-ball: B`1 = {s | ksk1 1} (Figure 1.(b)). If vectors drawn from B`1 are well-spread inside this set, some samples would contain near-peak magnitude values simultaneously at all their components. The potential equal spreading of values among the components justifies the term “anti-sparse” [51] or “democratic” [52] component representations. This choice is well-suited for both applications in natural images and digital communication constellations [28, 43]. • Normalized nonnegative sources: Simplex structured matrix factorization [25, 36, 37] uses the unit simplex [35, 53] = {s | s 0,1T s = 1} (Figure 1.(c)) as the source domain. Nonnegativity of sources naturally arises in biological context, for example in demixing olfactory mixtures [54]. • Nonnegative bounded anti-sparse sources: A non-degenerate polytopic choice of the nonnegative sources can be obtained through the combination of anti-sparseness and nonnegativity constraints. This corresponds to the intersection of B`1 with the nonnegative orthant Rn+, represented as B`1,+ = B`1 \ Rn+ [26] (Figure 1.(d)). • Nonnegative bounded sparse sources: Another polytopic choice for nonnegative sources can be obtained through combination of the sparsity and nonnegativity constraints which yields the intersection of B`1 with the nonnegative orthant R+, [26]: B`1,+ = B`1 \ Rn+ (Figure 1.(e)). Except the unit simplex , all the examples above are examples of an infinite set of identifiable polytopes whose symmetry groups are restricted to the combinations of component permutations and sign alterations as formalized in PMF framework for BSS [44]. Further, in- stead of a homogeneous choice of features, such as sparsity and nonnegativity, globally imposed on all elements of the component vector, we can assign these attributes at the subvector level and still obtain identifiable polytopes. For example, the reference [26] provides the set Pex = ⇢ s 2 R3 s1, s2 2 [ 1, 1], s3 2 [0, 1], s1 s2 1 1, s2 s3 1 1 , as a simple il- lustration of such polytopes with heterogeneous structure where s3 is nonnegative, s1, s2 are signed, and [ s1 s2 ] T , [ s2 s3 ] T are sparse subvectors, while sparsity is not globally imposed. 
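To make the source-domain examples above concrete, the sketch below implements simple membership tests for a few of the domains in Figure 1 and for the heterogeneous example set P_ex described in the text. The function names and tolerances are illustrative assumptions of this sketch.

```python
import numpy as np

def in_linf_ball(s, tol=1e-9):
    """Bounded anti-sparse sources: s in the unit l_infinity ball."""
    return np.max(np.abs(s)) <= 1 + tol

def in_l1_ball(s, tol=1e-9):
    """Bounded sparse sources: s in the unit l_1 ball."""
    return np.sum(np.abs(s)) <= 1 + tol

def in_unit_simplex(s, tol=1e-9):
    """Normalized nonnegative sources: s nonnegative and summing to one."""
    return bool(np.all(s >= -tol)) and abs(np.sum(s) - 1.0) <= tol

def in_nonneg_linf_ball(s, tol=1e-9):
    """Nonnegative bounded anti-sparse sources: intersection with the nonnegative orthant."""
    return bool(np.all(s >= -tol)) and in_linf_ball(s, tol)

def in_example_polytope(s, tol=1e-9):
    """Membership test for the heterogeneous example set P_ex: s1, s2 in [-1, 1],
    s3 in [0, 1], |s1| + |s2| <= 1 and |s2| + |s3| <= 1."""
    s1, s2, s3 = s
    return (abs(s1) <= 1 + tol and abs(s2) <= 1 + tol and -tol <= s3 <= 1 + tol
            and abs(s1) + abs(s2) <= 1 + tol and abs(s2) + abs(s3) <= 1 + tol)
```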
In this article, we concentrate on particular source domains including the unit simplex, and the subset of identifiable polytopes for which the attributes such as sparsity and nonnegativity are defined at the subvector level in the general form P = s 2 Rn si 2 [ 1, 1] 8i 2 Is, si 2 [0, 1] 8i 2 I+, ksJkk1 1, Jk ✓ Zn, k 2 ZL , (1) where I+ ✓ Zn is the index set for nonnegative sources, and Is is its complement, sJk is the subvector constructed from the elements with indices in Jk, and L is the number of sparsity constraints imposed in the subvector level. The Det-Max criterion for BSS is based on the assumption that the source samples are well-spread in their presumed domain. The references [55] and [26] provide precise conditions on the scattering of source samples which guarantee their identifiability for the unit simplex and polytopes, respectively. Appendix A provides a brief summary of these conditions. We emphasize that our assumptions about the sources are deterministic. Therefore, our proposed algorithms do not exploit any stochastic assumptions such as independence or uncorrelatedness, and can separate both independent and dependent (potentially correlated) sources. 2.2 Mixing The sources st are mixed through a mixing matrix A 2 Rm⇥n. xt = Ast, t 2 Z. (2) We only consider the (over)determined case with m n and assume that the mixing matrix is fullrank. While we consider noiseless mixtures to achieve perfect separability, the optimization setting proposed for the online algorithm features a particular objective function that safeguards against potential noise presence. We use S(t) = [ s1 . . . st ] 2 Rn⇥t and X(t) = [ x1 . . . xt ] 2 Rm⇥t to represent data snapshot matrices, at time t, for sources and mixtures, respectively. 2.3 Separation The goal of the source separation is to obtain an estimate of S(t) from the mixture measurements X(t) when the mixing matrix A is unknown. We use the notation yt to refer to source estimates, which are linear transformations of observations, i.e., yi = Wxi, where W 2 Rn⇥m. We define Y(t) = [ y1 y2 . . . yt ] 2 Rn⇥t as the output snapshot matrix. "Ideal separation" is defined as the condition where the outputs are scaled and permuted versions of original sources, i.e., they satisfy yt = P⇤st, where P is a permutation matrix, and ⇤ is a full rank diagonal matrix. 3 Determinant maximization based blind source separation Among several alternative solution methods for the BSS problem, the determinant-maximization (DetMax) criterion has been proposed within the NMF, BCA, and PMF frameworks, [26–28, 30, 35, 44]. Here, the separator is trained to maximize the (log)-determinant of the sample correlation matrix for the separator outputs, J(W) = log(det(R̂y(t))), where R̂y(t) is the sample correlation matrix R̂y(t) = 1 t P t i=1 yiy T i = 1 t Y(t)Y(t)T . Further, during the training process, the separator outputs are constrained to lie inside the presumed source domain, i.e. P . As a result, we can pose the corresponding optimization problem as [26, 35] maximize Y(t) log(det(Y(t)Y(t)T )) (3a) subject to yi 2 P, i = 1, . . . , t, (3b) where we ignored the constant 1 t term. Here, the determinant of the correlation matrix acts as a spread measure for the output samples. If the original source samples {s1, . . . , st} are sufficiently scattered inside the source domain P , as described in Section 2.1 and Appendix A, then the global solution of this optimization can be shown to achieve perfect separation [26, 35, 55]. 
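As a quick illustration of the Det-Max objective (3a), the sketch below evaluates the log-determinant of the sample correlation matrix for a well-spread set of samples in the unit l_infinity ball and for a "squeezed" set whose components are pushed toward one another; the spread set attains the larger objective. The helper name and the toy data are assumptions of this sketch.

```python
import numpy as np

def det_max_objective(Y):
    """log det of the sample correlation matrix (1/t) Y Y^T, as in (3a)."""
    t = Y.shape[1]
    R = (Y @ Y.T) / t
    sign, logdet = np.linalg.slogdet(R)
    return logdet if sign > 0 else -np.inf

rng = np.random.default_rng(0)
Y_spread = rng.uniform(-1, 1, size=(3, 5000))        # well-spread in the l_infinity ball
Y_squeezed = 0.5 * Y_spread + 0.5 * Y_spread[0]      # components correlated with each other
print(det_max_objective(Y_spread))                   # larger objective for the spread samples
print(det_max_objective(Y_squeezed))
```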
4 An alternative optimization formulation of determinant-maximization based on weighted similarity matching Here, we reformulate the Det-Max problem 3 described above in a way that allows derivation of a biologically-plausible neural network for the linear BSS setup in Section 2. Our formulation applies to all source types discusses in 2.1. We propose the following optimization problem: minimize Y(t),H(t),D1,11(t),...D1,nn(t),D1(t) D2,11(t),...D2,nn(t),D2(t) nX i=1 log(D1,ii(t)) + nX i=1 log(D2,ii(t)) (4a) subject to X(t)TX(t) H(t)TD1(t)H(t) = 0, (4b) H(t)TH(t) Y(t)TD2(t)Y(t) = 0, (4c) yi 2 P, i = 1, . . . , n, (4d) Dl(t) = diag(Dl,11(t), . . . , Dl,nn(t)), l = 1, 2, (4e) Dl,11(t), Dl,22(t), . . . , Dl,nn(t) > 0, l = 1, 2 (4f) Here, X(t) 2 Rm⇥t is the matrix containing input (mixture) vectors, Y(t) 2 Rn⇥t is the matrix containing output vectors, H(t) 2 Rn⇥t is a slack variable containing an intermediate signal {hi 2 Rn, i = 1, . . . , t}, corresponding to the hidden layer of the neural network implementation in Section 5, in its columns H(t) = [ h1 h2 . . . ht ]. Dl,11(t), Dl,22(t), . . . , Dl,nn(t) for l = 1, 2 are nonnegative slack variables to be described below, and Dl is the diagonal matrix containing weights Dl,ii for i = 1, . . . , n and l = 1, 2. The constraint (4d) ensures that the outputs lie in the presumed domain of sources. This problem is related to the weighted similarity matching (WSM) objective introduced in [18]. Constraints (4b) and (4c) define two separate WSM conditions. In particular, the equality constraint in (4b) is a WSM constraint between inputs and the intermediate signal H(t). This constraint imposes that the pairwise weighted correlations of the signal {hi, i = 1, . . . , t} are the same as correlations among the elements of the input signal {xi, i = 1, . . . , t}, i.e., xTi xj = hTi D1(t)hj , 8i, j 2 {1, . . . , t}. D1,11(t), D1,22(t), . . . , D1,nn(t) correspond to inner product weights used in these equalities. Similarly, the equality constraint in (4c) defines a WSM constraint between the intermediate signal and outputs. This equality can be written as hT i hj = yTi D2(t)yj , i, j 2 {1, . . . , t}, and D2,11(t), D2,22(t), . . . , D2,nn(t) correspond to the inner product weights used in these equalities. The optimization involves minimizing the logarithm of the determinant of the weighting matrices. Now we state the relation between our WSM-based objective and the original Det-Max criterion (3). Theorem 1. If X(t) is full column-rank, then global optimal Y(t) solutions of (3) and (4) coincide. Proof of Theorem 1. See Appendix B for the proof. The proof relies on a lemma that states that the optimization constraints enforce inputs and outputs to be related by an arbitrary linear transformation. 5 Biologically-plausible neural networks for WSM-based BSS The optimization problems we considered so far were in an offline setting, where all inputs are observed together and all outputs are produced together. However, biology operates in an online fashion, observing an input and producing the corresponding output, before seeing the next input. Therefore, in this section, we first introduce an online version of the batch WSM-problem (4). Then we show that the corresponding gradient descent algorithm leads to a two-layer neural network with biologically-plausible local update rules. 5.1 Online optimization setting for WSM-based BSS We first propose an online extension of WSM-based BSS (4). 
In the online setting, past outputs cannot be altered, but past inputs and outputs still carry valuable information about solving the BSS problem. We will write down an optimization problem whose goal is to produce the sources yt given a mixture xt, while exploiting information from all the fixed previous inputs and outputs. We first introduce our notation. We consider exponential weighting of the signals as a recipe for dynamical adjustment to potential nonstationarity in the data. We define the weighted input data snapshot matrix by time t as, X (t) = ⇥ t 1x1 . . . xt 1 xt ⇤ = X(t) (t), where is the forgetting factor and (t) = diag( t 1, . . . , , 1). The exponential weighting emphasizes recent mixtures by reducing the impact of past samples. Similarly, we define the corresponding weighted output snapshot matrix for output as Y(t) = ⇥ t 1y1 . . . yt 1 yt ⇤ = Y(t) (t), and the hidden layer vectors as H(t) = ⇥ t 1h1 . . . ht 1 ht ⇤ = H(t) (t). We further define ⌧ = limt!1 = P t 1 k=0 2k = 11 2 as a measure of the effective time window length for sample correlation calculations based on the exponential weights. In order to derive an online cost function, we first converted equality constraints in (4b) and (4c) to similarity matching cost functions J1(H(t),D1(t)) = 12⌧2 kX (t) T X (t) H(t)TD1(t)H(t)k2F , J2(H(t),D2(t),Y(t)) = 1 2⌧2 kH(t) T H(t) Y(t)TD2(t)Y(t)k2F . Then, a weighted combination of similarity matching costs and the objective function in (4a) yields the final cost function J (H(t),D1(t),D2(t),Y(t)) = SM [ J1(H(t),D1(t)) + (1 )J2(H(t),D2(t),Y(t))] +(1 SM )[ nX k=1 log(D1,kk(t)) + nX k=1 log(D2,kk(t))]. (5) Here, 2 [0, 1] and SM 2 [0, 1] are parameters that convexly combine similarity matching costs and the objective function. Finally, we can state the online optimization problem for determining the current output yt, the corresponding hidden state ht and for updating the gain parameters Dl(t) for l = 1, 2, as minimize yt,ht,D1,11(t),...D1,nn(t),D1(t) D2,11(t),...D2,nn(t),D2(t) J (H(t),D1(t),D2(t),Y(t)) (6a) subject to yt 2 P, (6b) Dl(t) = diag(Dl,11(t), . . . , Dl,nn(t)), l = 1, 2, (6c) Dl,11(t), Dl,22(t), . . . , Dl,nn(t) > 0, l = 1, 2 (6d) As shown in Appendix C.1, part of J that depends on ht and yt can be written as C(ht,yt) = 2h T t D1MH(t)D1(t)ht 4h T t D1(t)WHX(t)xt +2yT t D2(t)MY (t)D2(t)yt 4y T t D2(t)WY H(t)ht + 2h T t MH(t)ht, (7) where the dependence on past inputs and outputs appear in the weighted correlation matrices: MH(t) = 1 ⌧ P t 1 k=1( 2)t 1 khkhTk , WHX(t) = 1 ⌧ P t 1 k=1( 2)t 1 khkxTk , WY H(t) = 1 ⌧ P t 1 k=1( 2)t 1 kykhTk , MY (t) = 1 ⌧ P t 1 k=1( 2)t 1 kykyTk . (8) 5.2 Description of network dynamics for bounded anti-sparse sources We now show that the gradient-descent minimization of the online WSM cost function in (6) can be interpreted as the dynamics of a neural network with local learning rules. The exact network architecture is determined by the presumed identifiable source domain P , which can be chosen in infinitely many ways. In this section, we concentrate on the domain choice P = B1 as an illustrative example. In Section 5.3, we discuss how to generalize the results of this section by modifying the output layer for different identifiable source domains. 
We start by writing the update expressions for the optimization variables based on the gradients of J (ht,yt,D1(t),D2(t)): Update dynamics for ht: Following previous work [48, 56], and using the gradient of (7) in (A.8) with respect to ht, we can write down an update dynamics for ht in the form dv(⌧) d⌧ = v(⌧) SM [((1 )M̄H(t) + D1(t)M̄H(t)D1(t))h(⌧) + D1(t)WHX(t)x(⌧) + (1 )WY H(t) T D2(t)y(⌧)] (9) ht,i(⌧) = A vi(⌧) SM Hii(t)((1 ) + D1,ii(t) 2) ! , for i = 1, . . . n, (10) where H(t) is a diagonal matrix containing diagonal elements of MH(t) and M̄H(t) = MH(t) H(t), (·) is the clipping function, defined as A(x) = ⇢ x A x A, Asign(x) otherwise. . This dy- namics can be shown to minimize (7) [56]. Here v(⌧) is an internal variable that could be interpreted as the voltage dynamics of a biological neuron, and is defined based on a linear transformation of ht in (A.9). Equation (9) defines v(⌧) dynamics from the gradient of (7) with respect to ht in (A.8). Due to the positive definite linear map in (A.9), the expression in (A.8) also serves as a descent direction for v(⌧). Furthermore, (·) function is the projection onto AB1, where [ A,A] is the presumed dynamic range for the components of ht. We note that there is no explicit constraint set for ht in the online optimization setting of Section 5.1, and therefore, A can be chosen as large as desired in the actual implementation. We included the nonlinearity in (10) to model the limited dynamic range of an actual (biological) neuron. Update dynamics for output yt: We write the update dynamics for the output yt, based on (A.11) as du(⌧) d⌧ = u(⌧) +WY H(t)h(⌧) M̄Y (t)D2(t)y(⌧), (11) yt,i(⌧) = 1 ✓ ui(⌧) Y ii(t)D2,ii(t) ◆ , for i = 1, . . . n, (12) which is derived using the same approach for ht, where we used the descent direction expression in (A.11), and the substitution in (A.12). Here, Y (t) is a diagonal matrix containing diagonal elements of MY (t) and M̄Y (t) = MY (t) Y (t). Note that the nonlinear mapping 1(·) is the projection onto the presumed domain of sources, i.e., P = B1, which is elementwise clipping operation. The state space representations in (9)-(10) and (11)-(12) correspond to a two-layer recurrent neural network with input xt, hidden layer activation ht, output layer activation yt, WHX (WTHX ) and WY H (WTY H ) are the feedforward (feedback) synaptic weight matrices for the first and the second layers, respectively, and M̄H and M̄Y are recurrent synaptic weight matrices for the first and the second layers, respectively. The corresponding neural network schematic is provided in Figure 2.(b). The gain and synaptic weight dynamics below describe the learning mechanism for this network: Update dynamics for gains Dl,ii: Using the derivative of the cost function with respect to D1,ii in (A.13), we can write the dynamics corresponding to the gain variable D1,ii as µD1 dD1,ii(t) dt = ( SM )(kMHi,:k 2 D1(t) kWHXi,:k 2 2) (1 SM ) 1 D1,ii(t) , (13) where µD1 corresponds to the learning time-constant. Similarly, for the gain variable D2,ii, the corresponding coefficient dynamics expression based on (A.14) is given by µD2 dD2,ii(t) dt = ( SM )(kMY i,:k 2 D2(t) kWY Hi,:k 2 2) (1 SM ) 1 D2,ii(t) , (14) where µD2 corresponds to the learning time-constant. The inverses of the inner product weights Dl,ii correspond to homeostatic gain parameters. 
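The correlation matrices appearing in these dynamics are maintained by local rank-one updates, given as equation (15) in the next paragraph. As a preview, a minimal sketch of those updates is given below; the forgetting-factor value is an illustrative assumption.

```python
import numpy as np

def update_synapses(MH, MY, WHX, WYH, h, x, y, lam=0.999):
    """Local Hebbian/anti-Hebbian synaptic weight updates following (15), using only
    quantities available to each synapse (pre- and post-synaptic activations)."""
    a = lam ** 2
    MH  = a * MH  + (1 - a) * np.outer(h, h)
    MY  = a * MY  + (1 - a) * np.outer(y, y)
    WHX = a * WHX + (1 - a) * np.outer(h, x)
    WYH = a * WYH + (1 - a) * np.outer(y, h)
    return MH, MY, WHX, WYH
```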
The inspection of the gain updates in (13) and (14) leads to an interesting observation: whether the corresponding gain is going to increase or decrease depends on the balance between the norms of the recurrent and the feedforward synaptic strengths, which are the statistical indicators of the recent output and input activations, respectively. Hence, the homeostatic gain of the neuron will increase (decrease) if the level of recent output activations falls behind (surpasses) the level of recent input activations to balance input/output energy levels. The resulting dynamics align with the experimental homeostatic balance observed in biological neurons [57]. Based on the definitions of the synaptic weight matrices in (8), we can write their updates as MH(t+ 1) = 2 MH(t) + (1 2)hth T t , MY (t+ 1) = 2 MY (t) + (1 2)yty T t , (15) WHX(t+ 1) = 2 WHX(t) + (1 2)htx T t , WY H(t+ 1) = 2 WY H(t) + (1 2)yth T t . These updates are local in the sense that they only depend on variables available to the synapse, and hence are biologically plausible. 5.3 Det-max WSM neural network examples for more general source domains Det-Max Neural Network obtained for the source domain P = B1 in Section 5.2 can be extended to more general identifiable source domains by only changing the output dynamics. In Appendix D, we provide illustrative examples for different identifiable domain choices. Table 1 summarizes the output dynamics obtained for the identifiable source domain examples in Figure 1. We can make the following observations on Table 1: (1) For sparse and unit simplex settings, there is an additional inhibitory neuron which takes input from all outputs and whose activation is the inhibitory signal 1(⌧), (2) The source attributes, which are globally defined over all sources, determine the activation functions at the output layer. The proposed framework can be applied to any polytope described by (1) for which the corresponding Det-Max neural network will contain combinations of activation functions in Table 1 as illustrated in Figure 2.(a). 6 Numerical experiments In this section, we illustrate the applications of the proposed WSM-based BSS framework for both synthetic and natural sources. More details on these experiments and additional examples are provided in Appendix E, including sparse dictionary learning. Our implementation code is publicly available1. 6.1 Synthetically correlated source separation In order to illustrate the correlated source separation capability of the proposed WSM neural networks, we consider a numerical experiment with five copula-T distributed (uniform and correlated) sources. For the correlation calibration matrix for these sources, we use Toeplitz matrix whose first row is [1 ⇢ ⇢ ⇢ ⇢]. The ⇢ parameter determines the correlation level, and we considered the range [0, 0.6] for this parameter. These sources are mixed with a 10⇥5 random matrix with independent and identically distributed (i.i.d.) standard normal random variables. The mixtures are corrupted by i.i.d. 1https://github.com/BariscanBozkurt/Biologically-Plausible-DetMaxNNs-for-Blind-Source-Separation normal noise corresponding to 30dB signal-to-noise ratio (SNR) level. In this experiment, we employ the nonnegative-antisparse-WSM neural network (Figure 8 in Appendix D.2) whose activation functions at the output layer are nonnegative-clipping functions, as the sources are nonnegative uniform random variables. 
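The results that follow are reported in terms of SINR. One common way to compute it, accounting for the permutation and scaling ambiguities of BSS, is sketched below; the paper's exact evaluation code may differ, and the brute-force permutation search is only practical for small n.

```python
import numpy as np
from itertools import permutations

def sinr_db(S, Y):
    """Mean SINR (dB) of estimates Y against ground-truth sources S, maximized over
    permutations, with the scaling ambiguity handled by a least-squares fit."""
    n = S.shape[0]
    best = -np.inf
    for perm in permutations(range(n)):
        sinrs = []
        for i, j in enumerate(perm):
            s, y = S[i], Y[j]
            alpha = (y @ s) / (s @ s)        # optimal scale of source s explaining output y
            signal = alpha * s
            resid = y - signal               # interference plus noise
            sinrs.append(10 * np.log10((signal @ signal) / (resid @ resid)))
        best = max(best, np.mean(sinrs))
    return best
```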
[Figure: SINR (dB) versus the correlation parameter ρ for nonnegative anti-sparse source separation, comparing WSM, NSM, ICA-InfoMax, LD-InfoMax and PMF.] We observe that the performance of the batch Det-Max algorithms, i.e., LD-InfoMax and PMF, is also robust against source correlations. Furthermore, due to their batch nature, these algorithms typically achieved better performance than our online-restricted neural network, as expected. 6.2 Image separation To further illustrate the correlated source separation advantage of our approach, we consider a natural image separation scenario. For this example, we have three RGB images of size 324 × 432 × 3 as sources (Figure 4.(a)). The sample Pearson correlation coefficients between the images are ρ_12 = 0.263, ρ_13 = 0.066, ρ_23 = 0.333. We use a random 5 × 3 mixing matrix whose entries are drawn from an i.i.d. standard normal distribution. The corresponding mixtures are shown in Figure 4.(b). We applied the ICA, NSM and WSM algorithms to the mixtures. Figures 4.(c), (d) and (e) show the corresponding outputs. High-resolution versions of all images in this example are available in Appendix E.4, in addition to comparisons with the LD-InfoMax and PMF algorithms. The Infomax ICA algorithm's outputs have an SINR level of 13.92 dB, and this performance is perceivable as residual interference effects in the corresponding output images. The NSM algorithm achieves a significantly higher SINR level of 17.45 dB, and the output images visually reflect this better performance. Our algorithm achieves the best SINR level of 27.49 dB, and the corresponding outputs closely resemble the original source images. 7 Discussion and Conclusion We proposed a general framework for generating biologically plausible neural networks that are capable of separating correlated sources from their linear mixtures, and demonstrated their successful correlated source separation capability on synthetic and natural sources. Another motivation for our work is to link network structure with function. This is a long-standing goal of neuroscience; however, examples where this link can be achieved are limited. Our work provides concrete examples where clear links can be established between a network's architecture, i.e., the number of interneurons, the connections between interneurons and output neurons, and the nonlinearities (frequency-current curves), and its function, i.e., the type of source separation or feature extraction problem the network solves. These links may provide insights and interpretations that might generalize to real biological circuits. Our networks suffer from the same limitations as other recurrent biologically-plausible BSS networks. First, certain hyperparameters can significantly influence algorithm performance (see Appendix E.9). In particular, the inner product gains (D_ii) are sensitive to the combined choices of algorithm parameters, which require careful tuning. Second, the numerical experiments with our neural networks are relatively slow due to the recursive computations in (9)-(10) and (11)-(12) for the hidden layer and output vectors, which is common to all biologically plausible recurrent source separation networks (see Appendix F). This could perhaps be addressed by early-stopping the recursive computation [60]. Acknowledgments and Disclosure of Funding This work was supported by a KUIS AI Center Research Award. C. Pehlevan acknowledges support from the Intel Corporation.
1. What is the focus and contribution of the paper regarding the Blind Source Separation (BSS) problem?
2. What are the strengths and weaknesses of the proposed online weighted similarity matching algorithm (WSM)?
3. How did the authors set the hyperparameters in Sections E.2 and E.3, and did they perform a sensitivity analysis?
4. Are there any experiments conducted using real data, and why is it important to include such experiments?
5. Why did the authors not compare their method with the Det-Max criterion, which solves the same problem mathematically but seems easier to optimize?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This work focuses on the BSS problem and solves it by imposing some geometrical priors on the sources via an online weighted similarity matching algorithm (WSM). Since WSM does not use statistical independence of the sources to recover them, sources can be recovered even if they are correlated. WSM is benchmarked on two datasets (a synthetic and a toy dataset) and is shown to yield better performance than ICA or nonnegative similarity matching (NSM).
Strengths And Weaknesses
Strengths:
- A general framework that applies to a large set of priors on the sources
- An online algorithm with local update rules that is therefore more biologically plausible
Weaknesses:
- Many hyperparameters to set and no clear rules on how to set them (but the paper is transparent about this, which is a good point)
- Experiments are on synthetic data and a toy dataset that does not really correspond to any realistic problem
- Missing comparison with the Det-Max criterion
I quickly reviewed the code: it lacks documentation and unit tests are missing, but it is overall well structured and readable. I would still advise the authors to document every public function, write unit tests and examples, and set up continuous integration so that other researchers can easily build upon their work.
Questions
Hyperparameter settings: There are many hyperparameters to set in this method. Did you perform a sensitivity analysis to see in which range they work? Is there a way to find default values that would work for all kinds of problems? Can you explain how you chose the values in Sections E.2 and E.3?
Experiments on synthetic data: While the experiments on synthetic data show what they are meant to show, it would have been nice to see some experiments with real systems. ICA is used in many different settings: astronomy, neuroscience, finance (see Section 7 of Hyvärinen, Aapo, and Erkki Oja. "Independent component analysis: algorithms and applications." Neural Networks 13.4-5 (2000): 411-430.). Did you try any experiments with real data?
Missing comparison with the Det-Max criterion: According to your Theorem 1, Problem (3) with the Det-Max criterion and Problem (4), which yields the equations for the WSM updates, solve essentially the same problem. Therefore it would have seemed natural to check whether they yield the same results in the synthetic experiments. Did you perform this comparison? The Det-Max criterion is mathematically much simpler and seems easier to optimize. Why should practitioners use WSM instead?
Limitations
Ethical limitations are properly discussed
NIPS
Title Biologically-Plausible Determinant Maximization Neural Networks for Blind Separation of Correlated Sources Abstract Extraction of latent sources of complex stimuli is critical for making sense of the world. While the brain solves this blind source separation (BSS) problem continuously, its algorithms remain unknown. Previous work on biologically-plausible BSS algorithms assumed that observed signals are linear mixtures of statistically independent or uncorrelated sources, limiting the domain of applicability of these algorithms. To overcome this limitation, we propose novel biologically-plausible neural networks for the blind separation of potentially dependent/correlated sources. Differing from previous work, we assume some general geometric, not statistical, conditions on the source vectors allowing separation of potentially dependent/correlated sources. Concretely, we assume that the source vectors are sufficiently scattered in their domains which can be described by certain polytopes. Then, we consider recovery of these sources by the Det-Max criterion, which maximizes the determinant of the output correlation matrix to enforce a similar spread for the source estimates. Starting from this normative principle, and using a weighted similarity matching approach that enables arbitrary linear transformations adaptable by local learning rules, we derive two-layer biologically-plausible neural network algorithms that can separate mixtures into sources coming from a variety of source domains. We demonstrate that our algorithms outperform other biologically-plausible BSS algorithms on correlated source separation problems. 1 Introduction Our brains constantly and effortlessly extract latent causes, or sources, of complex visual, auditory or olfactory stimuli sensed by sensory organs [1–11]. This extraction is mostly done without any instruction, in an unsupervised manner, making the process an instance of the blind source separation (BSS) problem [12, 13]. Indeed, visual and auditory cortical receptive fields were argued to be the result of performing BSS on natural images [1, 2] and sounds [4]. The wide-spread use of BSS in the brain suggests the existence of generic circuit motifs that perform this task [14]. Consequently, the literature on biologically-plausible neural network algorithms for BSS is growing [15–19]. Because BSS is an underdetermined inverse problem, BSS algorithms make generative assumptions on observations. In most instances of the biologically-plausible BSS algorithms, complex stimuli are assumed to be linear mixtures of latent sources. This assumption is particularly fruitful and is used to model, for example, natural images [1, 20], and responses of olfactory neurons to complex odorants [21–23]. However, linear mixing by itself is not sufficient for source identifiability; further assumptions are needed. Previous work on biologically-plausible algorithms for BSS of linear mixtures 36th Conference on Neural Information Processing Systems (NeurIPS 2022). assumed sources to be statistically independent [17, 19, 24] or uncorrelated [16, 18]. However, these assumptions are very limiting when considering real data where sources can themselves be correlated. In this paper, we address the limitation imposed by independence assumptions and provide biologically-plausible BSS neural networks that can separate potentially correlated sources. 
We achieve this by considering various general geometric identifiability conditions on sources instead of statistical assumptions like independence or uncorrelatedness. In particular, 1) we make natural assumptions on the domains of source vectors–like nonnegativity, sparsity, anti-sparsity or boundedness (Figure 1)–and 2) we assume that latent source vectors are sufficiently spread in their domain [25, 26]. Because these identifiability conditions are not stochastic in nature, our neural networks are able to separate both independent and dependent sources. We derive our biologically-plausible algorithms from a normative principle. A common method for exploiting our geometric identifiability conditions is to disperse latent vector estimates across their presumed domain by maximizing the determinant of their sample correlation matrix, i.e., the Det-Max approach [25, 27–30]. Starting from a Det-Max objective function with constraints that specify the domain of source vectors, and using mathematical tools introduced for mapping optimization algorithms to adaptive Hebbian neural networks [18, 31, 32], we derive two-layered neural networks that can separate potentially correlated sources from their linear mixtures (Figure 2). These networks contain feedforward, recurrent and feedback synaptic connections updated via Hebbian or anti-Hebbian update rules. The domain of latent sources determines the structure of the output layer of the neural network (Figure 2, Table 1 and Appendix D). In summary, our main contributions in this article are the following: • We propose a normative framework for generating biologically plausible neural networks that are capable of separating correlated sources from their mixtures by deriving them from a Det-Max objective function subject to source domain constraints. • Our framework can handle infinitely many source types by exploiting their source domain topology. • We demonstrate the performance of our networks in simulations with synthetic and realistic data. 1.1 Other related work Several algorithms for separation of linearly mixed and correlated sources have been proposed outside the domain of biologically-plausible BSS. These algorithms make other forms of assumptions on the latent sources. Nonnegative matrix factorization (NMF) assumes that the latent vectors are nonnegative [13, 33–35]. Simplex structured matrix factorization (SSMF) assumes that the latent vectors are members of the unit-simplex [25, 36, 37]. Sparse component analysis (SCA) often assumes that the latent vectors lie in the unity `1-norm-ball [30, 38–42]. Antisparse bounded component analysis (BCA) assumes latent vectors are in the `1-norm-ball [28, 29, 43]. Recently introduced polytopic matrix factorization (PMF) extends the identifiability-enabling domains to infinitely many polytopes obeying a particular symmetry restriction [26, 44, 45]. The mapping of optimization algorithms to biologically-plausible neural networks have been formalized in the similarity matching framework [31, 32, 46, 47]. Several BSS algorithms were proposed within this framework: 1) Nonnegative Similarity Matching (NSM) [16, 48] separates linear mixtures of uncorrelated nonnegative sources, 2) [19] separates independent sources, and 3) Bounded Similarity Matching (BSM) separates uncorrelated anti-sparse bounded sources from `1-norm-ball [18]. BSM introduced a weighted inner product-based similarity criterion, referred to as the weighted similarity matching (WSM). 
Compared to these algorithm, the neural network algorithms we propose in this article 1) cover more general source domains, 2) handle potentially correlated sources, 3) use a two-layer WSM architecture (relative to single layer WSM architecture of BSM, which is not capable of generating arbitrary linear transformations) and 4) offer a general framework for neural-network-based optimization of the Det-Max criterion. 2 Problem statement 2.1 Sources We assume that there are n real-valued sources, represented by the vector s 2 P , where P is a particular subset of Rn. Our algorithms will address a wide range of source domains. We list some examples before giving a more general criterion: • Bounded sparse sources: A natural convex domain choice for sparse sources is the unit `1 norm ball B`1 = {s | ksk1 1} (Figure 1.(a)). The use of `1-norm as a convex (non)sparsity measure has been quite successful with various applications including sparse dictionary learning/component analysis [30, 39, 41, 49, 50] and modeling of V1 receptive fields [2]. • Bounded anti-sparse sources: A common domain choice for anti-sparse sources is the unit `1- norm-ball: B`1 = {s | ksk1 1} (Figure 1.(b)). If vectors drawn from B`1 are well-spread inside this set, some samples would contain near-peak magnitude values simultaneously at all their components. The potential equal spreading of values among the components justifies the term “anti-sparse” [51] or “democratic” [52] component representations. This choice is well-suited for both applications in natural images and digital communication constellations [28, 43]. • Normalized nonnegative sources: Simplex structured matrix factorization [25, 36, 37] uses the unit simplex [35, 53] = {s | s 0,1T s = 1} (Figure 1.(c)) as the source domain. Nonnegativity of sources naturally arises in biological context, for example in demixing olfactory mixtures [54]. • Nonnegative bounded anti-sparse sources: A non-degenerate polytopic choice of the nonnegative sources can be obtained through the combination of anti-sparseness and nonnegativity constraints. This corresponds to the intersection of B`1 with the nonnegative orthant Rn+, represented as B`1,+ = B`1 \ Rn+ [26] (Figure 1.(d)). • Nonnegative bounded sparse sources: Another polytopic choice for nonnegative sources can be obtained through combination of the sparsity and nonnegativity constraints which yields the intersection of B`1 with the nonnegative orthant R+, [26]: B`1,+ = B`1 \ Rn+ (Figure 1.(e)). Except the unit simplex , all the examples above are examples of an infinite set of identifiable polytopes whose symmetry groups are restricted to the combinations of component permutations and sign alterations as formalized in PMF framework for BSS [44]. Further, in- stead of a homogeneous choice of features, such as sparsity and nonnegativity, globally imposed on all elements of the component vector, we can assign these attributes at the subvector level and still obtain identifiable polytopes. For example, the reference [26] provides the set Pex = ⇢ s 2 R3 s1, s2 2 [ 1, 1], s3 2 [0, 1], s1 s2 1 1, s2 s3 1 1 , as a simple il- lustration of such polytopes with heterogeneous structure where s3 is nonnegative, s1, s2 are signed, and [ s1 s2 ] T , [ s2 s3 ] T are sparse subvectors, while sparsity is not globally imposed. 
In this article, we concentrate on particular source domains including the unit simplex and the subset of identifiable polytopes for which attributes such as sparsity and nonnegativity are defined at the subvector level, in the general form
$\mathcal{P} = \{ s \in \mathbb{R}^n \mid s_i \in [-1, 1]\ \forall i \in I_s,\ s_i \in [0, 1]\ \forall i \in I_+,\ \|s_{J_k}\|_1 \le 1,\ J_k \subseteq \mathbb{Z}_n,\ k \in \mathbb{Z}_L \}$, (1)
where $I_+ \subseteq \mathbb{Z}_n$ is the index set for nonnegative sources, $I_s$ is its complement, $s_{J_k}$ is the subvector constructed from the elements with indices in $J_k$, and $L$ is the number of sparsity constraints imposed at the subvector level.

The Det-Max criterion for BSS is based on the assumption that the source samples are well-spread in their presumed domain. The references [55] and [26] provide precise conditions on the scattering of source samples which guarantee their identifiability for the unit simplex and polytopes, respectively. Appendix A provides a brief summary of these conditions. We emphasize that our assumptions about the sources are deterministic. Therefore, our proposed algorithms do not exploit any stochastic assumptions such as independence or uncorrelatedness, and can separate both independent and dependent (potentially correlated) sources.

2.2 Mixing

The sources $s_t$ are mixed through a mixing matrix $A \in \mathbb{R}^{m \times n}$:
$x_t = A s_t, \quad t \in \mathbb{Z}$. (2)
We only consider the (over)determined case with $m \ge n$ and assume that the mixing matrix is full-rank. While we consider noiseless mixtures to achieve perfect separability, the optimization setting proposed for the online algorithm features a particular objective function that safeguards against potential noise presence. We use $S(t) = [\, s_1\ \dots\ s_t \,] \in \mathbb{R}^{n \times t}$ and $X(t) = [\, x_1\ \dots\ x_t \,] \in \mathbb{R}^{m \times t}$ to represent the data snapshot matrices, at time $t$, for sources and mixtures, respectively.

2.3 Separation

The goal of source separation is to obtain an estimate of $S(t)$ from the mixture measurements $X(t)$ when the mixing matrix $A$ is unknown. We use the notation $y_t$ to refer to source estimates, which are linear transformations of the observations, i.e., $y_i = W x_i$, where $W \in \mathbb{R}^{n \times m}$. We define $Y(t) = [\, y_1\ y_2\ \dots\ y_t \,] \in \mathbb{R}^{n \times t}$ as the output snapshot matrix. "Ideal separation" is defined as the condition where the outputs are scaled and permuted versions of the original sources, i.e., they satisfy $y_t = P \Lambda s_t$, where $P$ is a permutation matrix and $\Lambda$ is a full-rank diagonal matrix.

3 Determinant maximization based blind source separation

Among several alternative solution methods for the BSS problem, the determinant-maximization (Det-Max) criterion has been proposed within the NMF, BCA, and PMF frameworks [26–28, 30, 35, 44]. Here, the separator is trained to maximize the (log-)determinant of the sample correlation matrix of the separator outputs, $J(W) = \log(\det(\hat{R}_y(t)))$, where $\hat{R}_y(t)$ is the sample correlation matrix $\hat{R}_y(t) = \frac{1}{t} \sum_{i=1}^{t} y_i y_i^T = \frac{1}{t} Y(t) Y(t)^T$. Further, during the training process, the separator outputs are constrained to lie inside the presumed source domain $\mathcal{P}$. As a result, we can pose the corresponding optimization problem as [26, 35]
maximize over $Y(t)$: $\log(\det(Y(t) Y(t)^T))$ (3a)
subject to $y_i \in \mathcal{P}, \quad i = 1, \dots, t$, (3b)
where we ignored the constant $\frac{1}{t}$ term. Here, the determinant of the correlation matrix acts as a spread measure for the output samples. If the original source samples $\{s_1, \dots, s_t\}$ are sufficiently scattered inside the source domain $\mathcal{P}$, as described in Section 2.1 and Appendix A, then the global solution of this optimization can be shown to achieve perfect separation [26, 35, 55].
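As a quick numerical illustration of the Det-Max objective in (3a) (a sketch under our own assumptions, not code from the paper), the log-determinant of the output sample correlation matrix is large when the outputs are well spread in their domain and drops sharply when they are confined to a lower-dimensional subspace:

```python
import numpy as np

def det_max_objective(Y, eps=1e-12):
    """log det((1/t) Y Y^T), i.e., the Det-Max objective (3a) up to a constant."""
    n, t = Y.shape
    R = (Y @ Y.T) / t
    sign, logabsdet = np.linalg.slogdet(R + eps * np.eye(n))
    return logabsdet

rng = np.random.default_rng(0)
n, t = 5, 10000
Y_spread = rng.uniform(-1.0, 1.0, size=(n, t))                    # well spread in the l_inf ball
Y_flat = np.outer(rng.uniform(-1, 1, n), rng.uniform(-1, 1, t))   # rank-1: confined to a line
print(det_max_objective(Y_spread))   # roughly n * log(1/3) for i.i.d. Uniform(-1, 1) samples
print(det_max_objective(Y_flat))     # far smaller (near-singular correlation matrix)
```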
4 An alternative optimization formulation of determinant maximization based on weighted similarity matching

Here, we reformulate the Det-Max problem (3) described above in a way that allows the derivation of a biologically-plausible neural network for the linear BSS setup in Section 2. Our formulation applies to all source types discussed in Section 2.1. We propose the following optimization problem:
minimize over $Y(t), H(t), D_1(t), D_2(t)$:
$\sum_{i=1}^{n} \log(D_{1,ii}(t)) + \sum_{i=1}^{n} \log(D_{2,ii}(t))$ (4a)
subject to
$X(t)^T X(t) - H(t)^T D_1(t) H(t) = 0$, (4b)
$H(t)^T H(t) - Y(t)^T D_2(t) Y(t) = 0$, (4c)
$y_i \in \mathcal{P}, \quad i = 1, \dots, t$, (4d)
$D_l(t) = \mathrm{diag}(D_{l,11}(t), \dots, D_{l,nn}(t)), \quad l = 1, 2$, (4e)
$D_{l,11}(t), D_{l,22}(t), \dots, D_{l,nn}(t) > 0, \quad l = 1, 2$. (4f)
Here, $X(t) \in \mathbb{R}^{m \times t}$ is the matrix containing the input (mixture) vectors, $Y(t) \in \mathbb{R}^{n \times t}$ is the matrix containing the output vectors, and $H(t) \in \mathbb{R}^{n \times t}$ is a slack variable containing an intermediate signal $\{h_i \in \mathbb{R}^n, i = 1, \dots, t\}$, corresponding to the hidden layer of the neural network implementation in Section 5, in its columns $H(t) = [\, h_1\ h_2\ \dots\ h_t \,]$. $D_{l,11}(t), D_{l,22}(t), \dots, D_{l,nn}(t)$ for $l = 1, 2$ are nonnegative slack variables to be described below, and $D_l$ is the diagonal matrix containing the weights $D_{l,ii}$ for $i = 1, \dots, n$ and $l = 1, 2$. The constraint (4d) ensures that the outputs lie in the presumed domain of sources.

This problem is related to the weighted similarity matching (WSM) objective introduced in [18]. Constraints (4b) and (4c) define two separate WSM conditions. In particular, the equality constraint in (4b) is a WSM constraint between the inputs and the intermediate signal $H(t)$. This constraint imposes that the pairwise weighted correlations of the signal $\{h_i, i = 1, \dots, t\}$ are the same as the correlations among the elements of the input signal $\{x_i, i = 1, \dots, t\}$, i.e., $x_i^T x_j = h_i^T D_1(t) h_j$ for all $i, j \in \{1, \dots, t\}$. $D_{1,11}(t), D_{1,22}(t), \dots, D_{1,nn}(t)$ correspond to the inner product weights used in these equalities. Similarly, the equality constraint in (4c) defines a WSM constraint between the intermediate signal and the outputs. This equality can be written as $h_i^T h_j = y_i^T D_2(t) y_j$ for $i, j \in \{1, \dots, t\}$, and $D_{2,11}(t), D_{2,22}(t), \dots, D_{2,nn}(t)$ correspond to the inner product weights used in these equalities. The optimization involves minimizing the logarithm of the determinant of the weighting matrices.

Now we state the relation between our WSM-based objective and the original Det-Max criterion (3).

Theorem 1. If $X(t)$ is full column-rank, then the global optimal $Y(t)$ solutions of (3) and (4) coincide.

Proof of Theorem 1. See Appendix B for the proof. The proof relies on a lemma that states that the optimization constraints enforce inputs and outputs to be related by an arbitrary linear transformation.

5 Biologically-plausible neural networks for WSM-based BSS

The optimization problems we considered so far were in an offline setting, where all inputs are observed together and all outputs are produced together. However, biology operates in an online fashion, observing an input and producing the corresponding output before seeing the next input. Therefore, in this section, we first introduce an online version of the batch WSM problem (4). Then we show that the corresponding gradient descent algorithm leads to a two-layer neural network with biologically-plausible local update rules.

5.1 Online optimization setting for WSM-based BSS

We first propose an online extension of WSM-based BSS (4).
In the online setting, past outputs cannot be altered, but past inputs and outputs still carry valuable information about solving the BSS problem. We will write down an optimization problem whose goal is to produce the source estimate $y_t$ given a mixture $x_t$, while exploiting information from all the fixed previous inputs and outputs.

We first introduce our notation. We consider exponential weighting of the signals as a recipe for dynamical adjustment to potential nonstationarity in the data. We define the weighted input data snapshot matrix by time $t$ as $\mathcal{X}(t) = [\, \lambda^{t-1} x_1\ \dots\ \lambda x_{t-1}\ x_t \,] = X(t) \Lambda(t)$, where $\lambda$ is the forgetting factor and $\Lambda(t) = \mathrm{diag}(\lambda^{t-1}, \dots, \lambda, 1)$. The exponential weighting emphasizes recent mixtures by reducing the impact of past samples. Similarly, we define the corresponding weighted output snapshot matrix as $\mathcal{Y}(t) = [\, \lambda^{t-1} y_1\ \dots\ \lambda y_{t-1}\ y_t \,] = Y(t) \Lambda(t)$, and the weighted hidden layer snapshot matrix as $\mathcal{H}(t) = [\, \lambda^{t-1} h_1\ \dots\ \lambda h_{t-1}\ h_t \,] = H(t) \Lambda(t)$. We further define $\tau = \lim_{t \to \infty} \sum_{k=0}^{t-1} \lambda^{2k} = \frac{1}{1 - \lambda^2}$ as a measure of the effective time window length for the sample correlation calculations based on the exponential weights.

In order to derive an online cost function, we first convert the equality constraints in (4b) and (4c) to similarity matching cost functions
$J_1(H(t), D_1(t)) = \frac{1}{2\tau^2} \| \mathcal{X}(t)^T \mathcal{X}(t) - \mathcal{H}(t)^T D_1(t) \mathcal{H}(t) \|_F^2$,
$J_2(H(t), D_2(t), Y(t)) = \frac{1}{2\tau^2} \| \mathcal{H}(t)^T \mathcal{H}(t) - \mathcal{Y}(t)^T D_2(t) \mathcal{Y}(t) \|_F^2$.
Then, a weighted combination of the similarity matching costs and the objective function in (4a) yields the final cost function
$J(H(t), D_1(t), D_2(t), Y(t)) = \beta_{SM} [\, \gamma J_1(H(t), D_1(t)) + (1 - \gamma) J_2(H(t), D_2(t), Y(t)) \,] + (1 - \beta_{SM}) [\, \sum_{k=1}^{n} \log(D_{1,kk}(t)) + \sum_{k=1}^{n} \log(D_{2,kk}(t)) \,]$. (5)
Here, $\gamma \in [0, 1]$ and $\beta_{SM} \in [0, 1]$ are parameters that convexly combine the similarity matching costs and the objective function. Finally, we can state the online optimization problem for determining the current output $y_t$ and the corresponding hidden state $h_t$, and for updating the gain parameters $D_l(t)$ for $l = 1, 2$, as
minimize over $y_t, h_t, D_1(t), D_2(t)$: $J(H(t), D_1(t), D_2(t), Y(t))$ (6a)
subject to $y_t \in \mathcal{P}$, (6b)
$D_l(t) = \mathrm{diag}(D_{l,11}(t), \dots, D_{l,nn}(t)), \quad l = 1, 2$, (6c)
$D_{l,11}(t), D_{l,22}(t), \dots, D_{l,nn}(t) > 0, \quad l = 1, 2$. (6d)
As shown in Appendix C.1, the part of $J$ that depends on $h_t$ and $y_t$ can be written as
$C(h_t, y_t) = 2\gamma\, h_t^T D_1(t) M_H(t) D_1(t) h_t - 4\gamma\, h_t^T D_1(t) W_{HX}(t) x_t + 2(1-\gamma)\, y_t^T D_2(t) M_Y(t) D_2(t) y_t - 4(1-\gamma)\, y_t^T D_2(t) W_{YH}(t) h_t + 2(1-\gamma)\, h_t^T M_H(t) h_t$, (7)
where the dependence on past inputs and outputs appears through the weighted correlation matrices
$M_H(t) = \frac{1}{\tau} \sum_{k=1}^{t-1} (\lambda^2)^{t-1-k} h_k h_k^T$, $\quad W_{HX}(t) = \frac{1}{\tau} \sum_{k=1}^{t-1} (\lambda^2)^{t-1-k} h_k x_k^T$,
$W_{YH}(t) = \frac{1}{\tau} \sum_{k=1}^{t-1} (\lambda^2)^{t-1-k} y_k h_k^T$, $\quad M_Y(t) = \frac{1}{\tau} \sum_{k=1}^{t-1} (\lambda^2)^{t-1-k} y_k y_k^T$. (8)

5.2 Description of network dynamics for bounded anti-sparse sources

We now show that the gradient-descent minimization of the online WSM cost function in (6) can be interpreted as the dynamics of a neural network with local learning rules. The exact network architecture is determined by the presumed identifiable source domain $\mathcal{P}$, which can be chosen in infinitely many ways. In this section, we concentrate on the domain choice $\mathcal{P} = \mathcal{B}_{\ell_\infty}$ as an illustrative example. In Section 5.3, we discuss how to generalize the results of this section by modifying the output layer for different identifiable source domains.
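The weighted correlation matrices in (8) admit simple recursive updates, which is what makes the online setting tractable. The following minimal NumPy sketch (our own illustration; the class name and the small identity initialization are assumptions, not from the paper) maintains these statistics sample by sample:

```python
import numpy as np

class WeightedCorrelations:
    """Exponentially weighted (cross-)correlation matrices of Eq. (8),
    updated recursively with forgetting factor lam."""
    def __init__(self, n, m, lam=0.999, init=1e-3):
        self.lam2 = lam ** 2
        self.M_H = init * np.eye(n)       # hidden-hidden correlations
        self.M_Y = init * np.eye(n)       # output-output correlations
        self.W_HX = np.zeros((n, m))      # hidden-input cross-correlations
        self.W_YH = np.zeros((n, n))      # output-hidden cross-correlations

    def update(self, x, h, y):
        a = self.lam2
        self.M_H = a * self.M_H + (1 - a) * np.outer(h, h)
        self.M_Y = a * self.M_Y + (1 - a) * np.outer(y, y)
        self.W_HX = a * self.W_HX + (1 - a) * np.outer(h, x)
        self.W_YH = a * self.W_YH + (1 - a) * np.outer(y, h)
```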
We start by writing the update expressions for the optimization variables based on the gradients of $J(h_t, y_t, D_1(t), D_2(t))$:

Update dynamics for $h_t$: Following previous work [48, 56], and using the gradient of (7) with respect to $h_t$ given in (A.8), we can write down an update dynamics for $h_t$ in the form
$\frac{dv(\tau)}{d\tau} = -v(\tau) + \beta_{SM} [\, -((1-\gamma) \bar{M}_H(t) + \gamma D_1(t) \bar{M}_H(t) D_1(t)) h(\tau) + \gamma D_1(t) W_{HX}(t) x(\tau) + (1-\gamma) W_{YH}(t)^T D_2(t) y(\tau) \,]$, (9)
$h_{t,i}(\tau) = \sigma_A\!\left( \frac{v_i(\tau)}{\beta_{SM} \Lambda_{H,ii}(t) ((1-\gamma) + \gamma D_{1,ii}(t)^2)} \right), \quad i = 1, \dots, n$, (10)
where $\Lambda_H(t)$ is the diagonal matrix containing the diagonal elements of $M_H(t)$, $\bar{M}_H(t) = M_H(t) - \Lambda_H(t)$, and $\sigma_A(\cdot)$ is the clipping function defined as $\sigma_A(x) = x$ for $-A \le x \le A$ and $\sigma_A(x) = A\,\mathrm{sign}(x)$ otherwise. This dynamics can be shown to minimize (7) [56]. Here $v(\tau)$ is an internal variable that could be interpreted as the voltage dynamics of a biological neuron, and is defined through a linear transformation of $h_t$ in (A.9). Equation (9) defines the $v(\tau)$ dynamics from the gradient of (7) with respect to $h_t$ in (A.8). Due to the positive definite linear map in (A.9), the expression in (A.8) also serves as a descent direction for $v(\tau)$. Furthermore, the $\sigma_A(\cdot)$ function is the projection onto $A \mathcal{B}_{\ell_\infty}$, where $[-A, A]$ is the presumed dynamic range for the components of $h_t$. We note that there is no explicit constraint set for $h_t$ in the online optimization setting of Section 5.1, and therefore $A$ can be chosen as large as desired in the actual implementation. We included the nonlinearity in (10) to model the limited dynamic range of an actual (biological) neuron.

Update dynamics for the output $y_t$: We write the update dynamics for the output $y_t$, based on (A.11), as
$\frac{du(\tau)}{d\tau} = -u(\tau) + W_{YH}(t) h(\tau) - \bar{M}_Y(t) D_2(t) y(\tau)$, (11)
$y_{t,i}(\tau) = \sigma_1\!\left( \frac{u_i(\tau)}{\Lambda_{Y,ii}(t) D_{2,ii}(t)} \right), \quad i = 1, \dots, n$, (12)
which is derived using the same approach as for $h_t$, where we used the descent direction expression in (A.11) and the substitution in (A.12). Here, $\Lambda_Y(t)$ is the diagonal matrix containing the diagonal elements of $M_Y(t)$ and $\bar{M}_Y(t) = M_Y(t) - \Lambda_Y(t)$. Note that the nonlinear mapping $\sigma_1(\cdot)$ is the projection onto the presumed domain of sources, i.e., $\mathcal{P} = \mathcal{B}_{\ell_\infty}$, which is an elementwise clipping operation.

The state space representations in (9)-(10) and (11)-(12) correspond to a two-layer recurrent neural network with input $x_t$, hidden layer activation $h_t$, and output layer activation $y_t$, where $W_{HX}$ ($W_{HX}^T$) and $W_{YH}$ ($W_{YH}^T$) are the feedforward (feedback) synaptic weight matrices for the first and second layers, respectively, and $\bar{M}_H$ and $\bar{M}_Y$ are the recurrent synaptic weight matrices for the first and second layers, respectively. The corresponding neural network schematic is provided in Figure 2.(b). The gain and synaptic weight dynamics below describe the learning mechanism of this network:

Update dynamics for the gains $D_{l,ii}$: Using the derivative of the cost function with respect to $D_{1,ii}$ in (A.13), we can write the dynamics corresponding to the gain variable $D_{1,ii}$ as
$\mu_{D_1} \frac{dD_{1,ii}(t)}{dt} = \gamma \beta_{SM} \left( \| M_{H,i,:} \|^2_{D_1(t)} - \| W_{HX,i,:} \|^2_2 \right) - (1 - \beta_{SM}) \frac{1}{D_{1,ii}(t)}$, (13)
where $\mu_{D_1}$ corresponds to the learning time-constant. Similarly, for the gain variable $D_{2,ii}$, the corresponding coefficient dynamics expression based on (A.14) is given by
$\mu_{D_2} \frac{dD_{2,ii}(t)}{dt} = (1-\gamma) \beta_{SM} \left( \| M_{Y,i,:} \|^2_{D_2(t)} - \| W_{YH,i,:} \|^2_2 \right) - (1 - \beta_{SM}) \frac{1}{D_{2,ii}(t)}$, (14)
where $\mu_{D_2}$ corresponds to the learning time-constant. The inverses of the inner product weights $D_{l,ii}$ correspond to homeostatic gain parameters.
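To make the recursive computation of $h_t$ and $y_t$ concrete, the following sketch discretizes the dynamics (9)-(12) as reconstructed above and runs them for a single input (our own illustration; the step size, iteration count, zero initialization, and the toy statistics in the usage example are assumptions, and the correlation-matrix diagonals are assumed to be positive):

```python
import numpy as np

def clip(x, a):
    """Elementwise projection onto [-a, a]."""
    return np.clip(x, -a, a)

def run_dynamics(x, M_H, M_Y, W_HX, W_YH, D1, D2,
                 beta_sm=0.5, gamma=0.5, A=10.0, n_iter=500, step=0.05):
    """Iterate discretized versions of (9)-(12) for one input x to obtain (h_t, y_t)."""
    n = D1.shape[0]
    d1, d2 = np.diag(D1), np.diag(D2)
    Lam_H, Lam_Y = np.diag(np.diag(M_H)), np.diag(np.diag(M_Y))
    Mbar_H, Mbar_Y = M_H - Lam_H, M_Y - Lam_Y
    v, u = np.zeros(n), np.zeros(n)
    h, y = np.zeros(n), np.zeros(n)
    for _ in range(n_iter):
        dv = -v + beta_sm * (-((1 - gamma) * Mbar_H + gamma * D1 @ Mbar_H @ D1) @ h
                             + gamma * D1 @ W_HX @ x
                             + (1 - gamma) * W_YH.T @ D2 @ y)
        du = -u + W_YH @ h - Mbar_Y @ D2 @ y
        v, u = v + step * dv, u + step * du
        h = clip(v / (beta_sm * np.diag(Lam_H) * ((1 - gamma) + gamma * d1 ** 2)), A)
        y = clip(u / (np.diag(Lam_Y) * d2), 1.0)   # projection onto the l_inf ball
    return h, y

rng = np.random.default_rng(0)
n, m = 5, 10
M_H, M_Y, D1, D2 = np.eye(n), np.eye(n), np.eye(n), np.eye(n)
W_HX = 0.1 * rng.standard_normal((n, m))
W_YH = 0.1 * rng.standard_normal((n, n))
h, y = run_dynamics(rng.standard_normal(m), M_H, M_Y, W_HX, W_YH, D1, D2)
```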
The inspection of the gain updates in (13) and (14) leads to an interesting observation: whether the corresponding gain is going to increase or decrease depends on the balance between the norms of the recurrent and feedforward synaptic strengths, which are the statistical indicators of the recent output and input activations, respectively. Hence, the homeostatic gain of the neuron will increase (decrease) if the level of recent output activations falls behind (surpasses) the level of recent input activations, so as to balance input/output energy levels. The resulting dynamics align with the experimental homeostatic balance observed in biological neurons [57].

Based on the definitions of the synaptic weight matrices in (8), we can write their updates as
$M_H(t+1) = \lambda^2 M_H(t) + (1 - \lambda^2) h_t h_t^T$, $\quad M_Y(t+1) = \lambda^2 M_Y(t) + (1 - \lambda^2) y_t y_t^T$, (15)
$W_{HX}(t+1) = \lambda^2 W_{HX}(t) + (1 - \lambda^2) h_t x_t^T$, $\quad W_{YH}(t+1) = \lambda^2 W_{YH}(t) + (1 - \lambda^2) y_t h_t^T$.
These updates are local in the sense that they only depend on variables available to the synapse, and hence are biologically plausible.

5.3 Det-Max WSM neural network examples for more general source domains

The Det-Max neural network obtained for the source domain $\mathcal{P} = \mathcal{B}_{\ell_\infty}$ in Section 5.2 can be extended to more general identifiable source domains by only changing the output dynamics. In Appendix D, we provide illustrative examples for different identifiable domain choices. Table 1 summarizes the output dynamics obtained for the identifiable source domain examples in Figure 1. We can make the following observations on Table 1: (1) for the sparse and unit simplex settings, there is an additional inhibitory neuron which takes input from all outputs and whose activation provides the inhibitory signal to the output neurons; (2) the source attributes, which are globally defined over all sources, determine the activation functions at the output layer. The proposed framework can be applied to any polytope described by (1), for which the corresponding Det-Max neural network will contain combinations of the activation functions in Table 1, as illustrated in Figure 2.(a).

6 Numerical experiments

In this section, we illustrate applications of the proposed WSM-based BSS framework for both synthetic and natural sources. More details on these experiments and additional examples, including sparse dictionary learning, are provided in Appendix E. Our implementation code is publicly available¹.

6.1 Synthetically correlated source separation

In order to illustrate the correlated source separation capability of the proposed WSM neural networks, we consider a numerical experiment with five copula-T distributed (uniform and correlated) sources. For the correlation calibration matrix of these sources, we use a Toeplitz matrix whose first row is $[1\ \rho\ \rho\ \rho\ \rho]$. The $\rho$ parameter determines the correlation level, and we considered the range $[0, 0.6]$ for this parameter. These sources are mixed with a $10 \times 5$ random matrix with independent and identically distributed (i.i.d.) standard normal entries. The mixtures are corrupted by i.i.d. normal noise corresponding to a 30dB signal-to-noise ratio (SNR) level. In this experiment, we employ the nonnegative-antisparse-WSM neural network (Figure 8 in Appendix D.2) whose activation functions at the output layer are nonnegative clipping functions, as the sources are nonnegative uniform random variables.

¹ https://github.com/BariscanBozkurt/Biologically-Plausible-DetMaxNNs-for-Blind-Source-Separation
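For readers who want to reproduce the data-generation side of this experiment, the following sketch (our own construction; the helper name and the degrees-of-freedom value are assumptions, as they are not stated here in the paper) draws correlated Uniform[0,1] sources through a t-copula with the Toeplitz calibration matrix above, mixes them, and adds noise at roughly 30dB SNR:

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.stats import t as student_t

def copula_t_uniform_sources(n, T, rho, df=4, rng=None):
    """Correlated Uniform[0,1] sources via a t-copula; the calibration matrix
    is Toeplitz with first row [1, rho, ..., rho]."""
    rng = np.random.default_rng(rng)
    Sigma = toeplitz(np.r_[1.0, rho * np.ones(n - 1)])
    Z = np.linalg.cholesky(Sigma) @ rng.standard_normal((n, T))
    g = rng.chisquare(df, size=T) / df
    return student_t.cdf(Z / np.sqrt(g), df=df)    # uniform marginals, correlated

rng = np.random.default_rng(0)
n, m, T, rho = 5, 10, 100000, 0.4
S = copula_t_uniform_sources(n, T, rho, rng=rng)            # n x T source snapshot
A = rng.standard_normal((m, n))                             # random mixing matrix
X = A @ S
noise_std = np.sqrt((X ** 2).mean() / 10 ** (30 / 10))      # ~30 dB SNR
X = X + noise_std * rng.standard_normal(X.shape)
```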
[Figure 3: SINR (dB) versus the correlation parameter ρ for the nonnegative anti-sparse source separation experiment, comparing WSM, NSM, ICA-Infomax, LD-InfoMax and PMF.]

We observe that the performance of the batch Det-Max algorithms, i.e., LD-InfoMax and PMF, is also robust against source correlations. Furthermore, due to their batch nature, these algorithms typically achieved better performance results than our neural network with its online restriction, as expected.

6.2 Image separation

To further illustrate the correlated source separation advantage of our approach, we consider a natural image separation scenario. For this example, we have three RGB images of size $324 \times 432 \times 3$ as sources (Figure 4.(a)). The sample Pearson correlation coefficients between the images are $\rho_{12} = 0.263$, $\rho_{13} = 0.066$, $\rho_{23} = 0.333$. We use a random $5 \times 3$ mixing matrix whose entries are drawn from an i.i.d. standard normal distribution. The corresponding mixtures are shown in Figure 4.(b). We applied the ICA, NSM and WSM algorithms to the mixtures. Figures 4.(c), (d) and (e) show the corresponding outputs. High-resolution versions of all images in this example are available in Appendix E.4, in addition to comparisons with the LD-InfoMax and PMF algorithms. The Infomax ICA algorithm's outputs have an SINR level of 13.92dB, and this performance is perceivable as residual interference effects in the corresponding output images. The NSM algorithm achieves a significantly higher SINR level of 17.45dB, and the output images visually reflect this better performance. Our algorithm achieves the best SINR level of 27.49dB, and the corresponding outputs closely resemble the original source images.

7 Discussion and Conclusion

We proposed a general framework for generating biologically plausible neural networks that are capable of separating correlated sources from their linear mixtures, and demonstrated their successful correlated source separation capability on synthetic and natural sources.

Another motivation for our work is to link network structure with function. This is a long-standing goal of neuroscience; however, examples where this link can be achieved are limited. Our work provides concrete examples where clear links can be established between a network's architecture (i.e., the number of interneurons, the connections between interneurons and output neurons, and the nonlinearities, or frequency-current curves) and its function, the type of source separation or feature extraction problem the network solves. These links may provide insights and interpretations that might generalize to real biological circuits.

Our networks suffer from the same limitations as other recurrent biologically-plausible BSS networks. First, certain hyperparameters can significantly influence algorithm performance (see Appendix E.9). In particular, the inner product gains ($D_{l,ii}$) are sensitive to the combined choices of algorithm parameters, which require careful tuning. Second, the numerical experiments with our neural networks are relatively slow due to the recursive computations in (9)-(10) and (11)-(12) for the hidden layer and output vectors, which is common to all biologically plausible recurrent source separation networks (see Appendix F). This could perhaps be addressed by early-stopping the recursive computation [60].

Acknowledgments and Disclosure of Funding

This work/research was supported by the KUIS AI Center Research Award. C. Pehlevan acknowledges support from the Intel Corporation.
1. What is the focus and contribution of the paper on blind source separation? 2. What are the strengths of the proposed approach, particularly in its ability to generalize and extend previous work? 3. What are the weaknesses of the paper regarding its claims and comparisons with other works? 4. How does the reviewer assess the clarity and quality of the paper's content? 5. What are the limitations of the proposed approach, and how do they compare to existing algorithms designed to solve the problem at hand?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This work follows a recent line of work on formulating blind source separation problems as solutions of similarity matching objective functions. This work greatly expands existing work in the domain by proposing a geometric interpretation and an objective function related to the Det-Max approach. Also, the formalism allows for the derivation of a biologically-plausible and online learning algorithm. Indeed, the model can be implemented by a two-layer neural network with local learning rules.
Strengths And Weaknesses The authors have covered a broad class of blind source separation problems. These problems generalize well-known problems for which biologically plausible learning algorithms did or did not already exist. There is a broad modeling literature on BSS, and a growing one on similarity matching, and these generalizations and extensions of both are very novel. Although it is very technical, the manuscript is well written and is fairly easy to follow. I found three minor weaknesses in this paper that can be easily addressed. One is that the paper claims that the resulting algorithm is online, but only presents these results in the appendix. I would have appreciated some space in the main paper being allocated to that, as it is rather central to the paper. The second one is related to the context of the problem. The paper is at the interface of signal processing, machine learning, and neuroscience, and it is a bit much to ask the reader to be well versed in all the different BSS problems covered in the paper. A bit more context for each of the problems would help the reader understand the importance of each of these problems and why building such biologically plausible networks would be useful. Are natural data mixed or present in the form presented in this paper? Finally, the work mainly compares to nonnegative similarity matching and Infomax, but it could be interesting to see how the model compares to existing algorithms designed to solve the problem at hand, not only biologically inspired ones.
Questions My questions relate to the minor weaknesses found above: Can you propose a concise presentation of the online result showing that your model can operate in that setting? Can you explain why a biologically plausible model would have to solve such problems? Can you compare your model to existing algorithms that were designed to solve it, and not only bio-inspired ones?
Limitations The authors have addressed the limitations of the paper in the last section.
NIPS
Title Biologically-Plausible Determinant Maximization Neural Networks for Blind Separation of Correlated Sources

Abstract Extraction of latent sources of complex stimuli is critical for making sense of the world. While the brain solves this blind source separation (BSS) problem continuously, its algorithms remain unknown. Previous work on biologically-plausible BSS algorithms assumed that observed signals are linear mixtures of statistically independent or uncorrelated sources, limiting the domain of applicability of these algorithms. To overcome this limitation, we propose novel biologically-plausible neural networks for the blind separation of potentially dependent/correlated sources. Differing from previous work, we assume some general geometric, not statistical, conditions on the source vectors allowing separation of potentially dependent/correlated sources. Concretely, we assume that the source vectors are sufficiently scattered in their domains which can be described by certain polytopes. Then, we consider recovery of these sources by the Det-Max criterion, which maximizes the determinant of the output correlation matrix to enforce a similar spread for the source estimates. Starting from this normative principle, and using a weighted similarity matching approach that enables arbitrary linear transformations adaptable by local learning rules, we derive two-layer biologically-plausible neural network algorithms that can separate mixtures into sources coming from a variety of source domains. We demonstrate that our algorithms outperform other biologically-plausible BSS algorithms on correlated source separation problems.

1 Introduction Our brains constantly and effortlessly extract latent causes, or sources, of complex visual, auditory or olfactory stimuli sensed by sensory organs [1–11]. This extraction is mostly done without any instruction, in an unsupervised manner, making the process an instance of the blind source separation (BSS) problem [12, 13]. Indeed, visual and auditory cortical receptive fields were argued to be the result of performing BSS on natural images [1, 2] and sounds [4]. The wide-spread use of BSS in the brain suggests the existence of generic circuit motifs that perform this task [14]. Consequently, the literature on biologically-plausible neural network algorithms for BSS is growing [15–19]. Because BSS is an underdetermined inverse problem, BSS algorithms make generative assumptions on observations. In most instances of the biologically-plausible BSS algorithms, complex stimuli are assumed to be linear mixtures of latent sources. This assumption is particularly fruitful and is used to model, for example, natural images [1, 20], and responses of olfactory neurons to complex odorants [21–23]. However, linear mixing by itself is not sufficient for source identifiability; further assumptions are needed. Previous work on biologically-plausible algorithms for BSS of linear mixtures assumed sources to be statistically independent [17, 19, 24] or uncorrelated [16, 18]. However, these assumptions are very limiting when considering real data where sources can themselves be correlated. In this paper, we address the limitation imposed by independence assumptions and provide biologically-plausible BSS neural networks that can separate potentially correlated sources.
1. What is the focus and contribution of the paper regarding blind source separation for correlated sources? 2. What are the strengths and weaknesses of the proposed approach compared to prior works? 3. How does the reviewer assess the clarity, quality, and novelty of the paper's content? 4. What are the limitations of the proposed method regarding computational costs and comparisons with other methods?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The paper proposes a novel neural network architecture to perform a blind source separation task, specifically addressing the case of correlated sources. The proposed architecture is shallow (2 or 3 layers) and derived in an online manner to maximize biologically plausibility. The proposed architecture is shown to solve a determinant-maximization problem, as proved in Theorem 1 (line 172), and to adequately solve two synthetically correlated source separation toy examples. Strengths And Weaknesses Disclaimer: Due to my lack of expertise on the topic, I am not able to assess the soundness of the mathematical model nor the correctness of the theorem proof. I have also not read carefully the very long appendix (31 pages, 17 figures). Due to the length of the full manuscript, a journal might be a better publishing venue to benefit from more extensive reviews. Originality The blind source separation problem tackled by the paper is a well-known problem, addressed by a wide literature of methods. The originality seems to resides in the special case of correlated sources (as opposed to e.g. ICA which specifically assumes independent sources). The proposed approach builds upon a recent framework called weighted similarity matching (WSM) and introduced in [15]. The difference with related works is very briefly discussed lines 70-74. Clarity The paper is clearly written, but the relation to prior work is somewhat limited. A number of existing methods are listed, but the difference with the proposed method is not clearly explained. The paper would benefit from being more pedagogical about its different original contributions. Quality The numerical experiments to demonstrate the effectiveness of the model are quite limited. It would have been interesting to compare the proposed method with more methods. Questions What makes the proposed method well suited from correlated sources? Figure 3/4 only compare the proposed method with ICA and NSM, despite a larger number of related works listed in section 1.1: NMF, SSMF, SCA, BCA, PMF, BSM. Why not comparing with these other methods? Limitations The paper stresses the computational cost of the proposed method, but does not give concrete examples. What is the computational cost of the proposed method (including hyperparameter tuning) in the two proposed numerical experiments? What is the computational cost of ICA and NSM?
NIPS
Title An Investigation into Whitening Loss for Self-supervised Learning Abstract A desirable objective in self-supervised learning (SSL) is to avoid feature collapse. Whitening loss guarantees collapse avoidance by minimizing the distance between embeddings of positive pairs under the conditioning that the embeddings from different views are whitened. In this paper, we propose a framework with an informative indicator to analyze whitening loss, which provides a clue to demystify several interesting phenomena as well as a pivoting point connecting to other SSL methods. We reveal that batch whitening (BW) based methods do not impose whitening constraints on the embedding, but they only require the embedding to be full-rank. This full-rank constraint is also sufficient to avoid dimensional collapse. Based on our analysis, we propose channel whitening with random group partition (CW-RGP), which exploits the advantages of BW-based methods in preventing collapse and avoids their disadvantages requiring large batch size. Experimental results on ImageNet classification and COCO object detection reveal that the proposed CW-RGP possesses a promising potential for learning good representations. The code is available at https://github.com/winci-ai/CW-RGP. 1 Introduction Self-supervised learning (SSL) has made significant progress over the last several years [1, 19, 6, 16, 8], almost reaching the performance of supervised baselines on many downstream tasks [33, 24, 35]. Several recent approaches rely on a joint embedding architecture in which a dual pair of networks are trained to produce similar embeddings for different views of the same image [8]. Such methods aim to learn representations that are invariant to transformation of the same input. One main challenge with the joint embedding architectures is how to prevent a collapse of representation, in which the two branches ignore the inputs and produce identical and constant output representations [6, 8]. One line of work uses contrastive learning methods that attract different views from the same image (positive pairs) while pull apart different images (negative pairs), which can prevent constant outputs from the solution space [43]. While the concept is simple, these methods need large batch size to obtain a good performance [19, 6, 37]. Another line of work tries to directly match the positive targets without introducing negative pairs. A seminal approach, BYOL [16], shows that an extra predictor and momentum is essential for representation learning. SimSiam [8] further generalizes [16] by empirically showing that stop-gradient is essential for preventing trivial solutions. Recent works generalize the collapse problem into dimensional collapse [21, 25]2 where the embedding vectors only span a lower-dimensional subspace and would be highly correlated. Therefore, the embedding vector dimensions would vary together and contain redundant information. To prevent the dimensional ∗equal contribution corresponding author (huangleiAI@buaa.edu.cn). This work was partially done while Lei Huang was a visiting scholar at Mohamed bin Zayed University of Artificial Intelligence, UAE. 2This collapse is also referred to informational collapse in [2]. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). collapse, whitening loss is proposed by only minimizing the distance between embeddings of positive pairs under the condition that embeddings from different views are whitened [12, 21]. 
A typical way is using batch whitening (BW) and imposing the loss on the whitened output [12, 21], which obtains promising results. Although whitening loss has theoretical guarantee in avoiding collapse, we experimentally observe that this guarantee depends on which kind of whitening transformation [26] is used in practice (see Section 3.2 for details). This interesting observation challenges the motivations of whitening loss for SSL. Besides, the motivation of whitening loss is that the whitening operation can remove the correlation among axes [21] and a whitened representation ensures the examples scattered in a spherical distribution [12]. Based on this argument, one can use the whitened output as the representation for downstream tasks, but it is not used in practice. To this end, this paper investigates whitening loss and tries to demystify these interesting observations. Our contributions are as follows: • We decompose the symmetric formulation of whitening loss into two asymmetric losses, where each asymmetric loss requires an online network to match a whitened target. This mechanism provides a pivoting point connecting to other methods, and a way to understand why certain whitening transformation fails to avoid dimensional collapse. • Our analysis shows that BW based methods do not impose whitening constraints on the embedding, but they only require the embedding to be full-rank. This full-rank constraint is also sufficient to avoid dimensional collapse. • We propose channel whitening with random group partition (CW-RGP), which exploits the advantages of BW-based method in preventing collapse and avoids their disadvantages requiring large batch size. Experimental results on ImageNet classification and COCO object detection show that CW-RGP has promising potential in learning good representation. 2 Related Work A desirable objective in self-supervised learning is to avoid feature collapse. Contrastive learning prevents collapse by attracting positive samples closer, and spreading negative samples apart [43, 44]. In these methods, negative samples play an important role and need to be well designed [34, 1, 20]. One typical mechanism is building a memory bank with a momentum encoder to provide consistent negative samples, proposed in MoCos [19], yielding promising results [19, 7, 9, 30]. Other works include SimCLR [6] addresses that more negative samples in a batch with strong data augmentations perform better. Contrastive methods require large batch sizes or memory banks, which tends to be costly, promoting the questions whether negative pairs is necessary. Non-contrastive methods aim to accomplish SSL without introducing negative pairs explicitly [3, 4, 31, 16, 8]. One typical way to avoid representational collapse is the introduction of asymmetric network architecture. BYOL [16] appends a predictor after the online network and introduce momentum into the target network. SimSiam [8] further simplifies BYOL by removing the momentum mechanism, and shows that stop-gradient to target network serves as an alternative approximation to the momentum encoder. Other progress includes an asymmetric pipeline with a self-distillation loss for Vision Transformers [5]. 
It remains unclear how the asymmetric network avoids collapse without negative pairs, leaving debates on batch normalization (BN) [14, 41, 36] and stop-gradient [8, 46], even though preliminary works have attempted to analyze the training dynamics theoretically under certain assumptions [40] and to build a connection between asymmetric networks and contrastive learning methods [39]. Our work provides a pivoting point connecting asymmetric networks to whitening loss in avoiding collapse. Whitening loss has a theoretical guarantee in avoiding collapse by minimizing the distance of positive pairs under the condition that the embeddings from different views are whitened [45, 12, 21, 2]. One way to obtain a whitened output is imposing a whitening penalty as a regularization on the embedding (the so-called soft whitening), which is proposed in Barlow Twins [45], VICReg [2] and CCA-SSG [47]. Another way is using batch whitening (BW) [22] (the so-called hard whitening), which is used in W-MSE [12] and Shuffled-DBN [21]. We propose a different hard whitening method, channel whitening (CW), which serves the same function: it ensures that all the singular values of the transformed output are one, thereby avoiding collapse. But CW is more numerically stable and works better when the batch size is small, compared to BW. Furthermore, our CW with random group partition (CW-RGP) can effectively control the extent of the constraint on the embedding and obtains better performance in practice. We note that a recent work, ICL [48], proposes to decorrelate instances, like CW, but with several significant differences in technical details. ICL uses "stop-gradient" for the whitening matrix, while CW requires back-propagation through the whitening transformation. Besides, ICL uses extra pre-conditioning on the covariance and whitening matrices, which is essential for its numerical stability, while CW does not use extra pre-conditioning and can work well since it encourages the embedding to be full-rank. 3 Exploring Whitening Loss for SSL 3.1 Preliminaries Let x denote the input sampled uniformly from a set of images D, and T denote the set of data transformations available for augmentation. We consider the Siamese network fθ(·) parameterized by θ. It takes as input two randomly augmented views, x1 = T1(x) and x2 = T2(x), where T1,2 ∈ T. The network fθ(·) is trained with an objective function that minimizes the distance between embeddings obtained from different views of the same image: $\mathcal{L}(x, \theta) = \mathbb{E}_{x \sim D,\, T_{1,2} \sim \mathcal{T}}\, \ell\big(f_\theta(T_1(x)), f_\theta(T_2(x))\big)$, (1) where ℓ(·, ·) is a loss function. In particular, the Siamese network usually consists of an encoder Eθe(·) and a projector Gθg(·). Their outputs h = Eθe(T(x)) and z = Gθg(h) are referred to as the encoding and the embedding, respectively. We summarize the notations and use the corresponding capital letters to denote mini-batch data in Figure 1. Under this notation, we have fθ(·) = Gθg(Eθe(·)) with learnable parameters θ = {θe, θg}. The encoding h is usually used as the representation for evaluation, by either training a linear classifier [19] or transferring to downstream tasks. This is because h is shown to obtain significantly better performance than the embedding z [6, 8]. The mean square error (MSE) of L2-normalized vectors is usually used as the loss function [8]: $\ell(z_1, z_2) = \big\| \tfrac{z_1}{\|z_1\|_2} - \tfrac{z_2}{\|z_2\|_2} \big\|_2^2$, (2) where ‖ · ‖2 denotes the L2 norm. This loss is also equivalent to the negative cosine similarity, up to a scale of 1/2 and an optimization-irrelevant constant. Collapse and Whitening Loss. While minimizing Eqn.
1, a trivial solution known as collapse could occur such that fθ(x) ≡ c, ∀x ∈ D. The state of collapse provides no gradients for learning and offers no information for discrimination. Moreover, a weaker collapse condition called dimensional collapse can easily arise, in which the projected features collapse into a low-dimensional manifold. As illustrated in [21], dimensional collapse is associated with strong correlations between axes, which motivates the use of whitening methods to avoid dimensional collapse. The general idea of whitening loss [12] is to minimize Eqn. 1 under the condition that embeddings from different views are whitened, which can be formulated as follows (see footnote 3): $\min_\theta \mathcal{L}(x; \theta) = \mathbb{E}_{x \sim D,\, T_{1,2} \sim \mathcal{T}}\, \ell(z_1, z_2), \ \text{s.t.}\ \operatorname{cov}(z_i, z_i) = I,\ i \in \{1, 2\}$. (3) Whitening loss provides a theoretical guarantee in avoiding (dimensional) collapse, since the embedding is whitened with all axes decorrelated [12, 21]. While it is difficult to directly solve the problem of Eqn. 3, Ermolov et al. [12] propose to whiten the mini-batch embedding Z ∈ Rdz×m using batch whitening (BW) [22, 38] and impose the loss on the whitened output Ẑ ∈ Rdz×m, given a mini-batch input X of size m, as follows: $\min_\theta \mathcal{L}(X; \theta) = \mathbb{E}_{X \sim D,\, T_{1,2} \sim \mathcal{T}}\, \|\hat{Z}_1 - \hat{Z}_2\|_F^2 \ \ \text{with}\ \ \hat{Z}_i = \Phi(Z_i),\ i \in \{1, 2\}$, (4) where Φ(·) denotes the whitening transformation over mini-batch data. (Footnote 3: The dual-view formulation can be extended to s different views, as shown in [12].) Whitening Transformations. There are an infinite number of possible whitening matrices, as shown in [26, 22], since any whitened data with a rotation is still whitened. To simplify notation, we assume Z is centered by $Z := Z(I - \tfrac{1}{m}\mathbf{1}\mathbf{1}^T)$. Ermolov et al. [12] propose W-MSE, which uses Cholesky decomposition (CD) whitening in Eqn. 4: $\Phi_{CD}(Z) = L^{-1}Z$, where L is a lower triangular matrix from the Cholesky decomposition, with $LL^T = \Sigma$. Here $\Sigma = \tfrac{1}{m}ZZ^T$ is the covariance matrix of the embedding. Hua et al. [21] use zero-phase component analysis (ZCA) whitening [22] in Eqn. 4: $\Phi_{ZCA} = U\Lambda^{-\frac{1}{2}}U^T$, where $\Lambda = \operatorname{diag}(\lambda_1, \ldots, \lambda_{d_z})$ and $U = [u_1, \ldots, u_{d_z}]$ are the eigenvalues and associated eigenvectors of Σ, i.e., $U\Lambda U^T = \Sigma$. Another well-known whitening is principal component analysis (PCA) whitening: $\Phi_{PCA} = \Lambda^{-\frac{1}{2}}U^T$ [26, 22]. 3.2 Empirical Investigation on Whitening Loss In this section, we conduct experiments to investigate the effects of different whitening transformations Φ(·) used in Eqn. 4 for SSL. Besides, we investigate the performance of different features (including the encoding H, the embedding Z and the whitened output Ẑ) used as the representation for evaluation. For illustration, we first define the rank and stable-rank [42] of a matrix as follows: Definition 1. Given a matrix A ∈ Rd×m, d ≤ m, we denote by {λ1, ..., λd} the singular values of A in descending order, with the convention λ1 > 0. The rank of A is the number of its non-zero singular values, denoted as $\operatorname{Rank}(A) = \sum_{i=1}^{d} I(\lambda_i > 0)$, where I(·) is the indicator function. The stable-rank of A is denoted as $r(A) = \sum_{i=1}^{d} \frac{\lambda_i}{\lambda_1}$. By definition, Rank(A) can be a good indicator to evaluate the extent of dimensional collapse of A, and r(A) can be an indicator to evaluate the extent of whitening of A. It can be demonstrated that r(A) ≤ Rank(A) ≤ d [42]. Note that if A is fully whitened with covariance matrix $AA^T = mI$, we have r(A) = Rank(A) = d.
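As a concrete reference for the transformations and indicators defined above, the following NumPy sketch (our illustration, not the authors' code; the small regularization constant and tolerances are our choices) whitens a centered mini-batch embedding with ZCA, CD (Cholesky) or PCA whitening and reports the Rank and stable-rank indicators of the result.

```python
import numpy as np

def whiten(Z, kind="zca", eps=1e-7):
    """Batch whitening of an embedding Z (d x m): returns Zhat with (1/m) Zhat Zhat^T ~= I."""
    d, m = Z.shape
    Zc = Z - Z.mean(axis=1, keepdims=True)            # center over the mini-batch
    Sigma = Zc @ Zc.T / m + eps * np.eye(d)           # covariance, lightly regularized
    lam, U = np.linalg.eigh(Sigma)                    # Sigma = U diag(lam) U^T
    if kind == "zca":
        Phi = U @ np.diag(lam ** -0.5) @ U.T          # U Lambda^{-1/2} U^T
    elif kind == "pca":
        Phi = np.diag(lam ** -0.5) @ U.T              # Lambda^{-1/2} U^T
    else:                                             # "cd": Cholesky factor L with L L^T = Sigma
        Phi = np.linalg.inv(np.linalg.cholesky(Sigma))
    return Phi @ Zc

def rank_and_stable_rank(A, tol=1e-6):
    """Rank = number of non-zero singular values; stable-rank = sum_i lambda_i / lambda_1."""
    s = np.linalg.svd(A, compute_uv=False)
    return int((s > tol * s[0]).sum()), float(s.sum() / s[0])

Z = np.random.randn(64, 256)                          # d = 64 channels, m = 256 examples
for kind in ("zca", "pca", "cd"):
    Zhat = whiten(Z, kind)
    whitened = np.allclose(Zhat @ Zhat.T / Z.shape[1], np.eye(64), atol=1e-3)
    print(kind, whitened, rank_and_stable_rank(Zhat))
```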
We also define the normalized rank as $\widehat{\operatorname{Rank}}(A) = \operatorname{Rank}(A)/d$ and the normalized stable-rank as $\hat{r}(A) = r(A)/d$, for comparing the extent of dimensional collapse and whitening, respectively, across matrices with different dimensions. PCA Whitening Fails to Avoid Dimensional Collapse. We compare the effects of the ZCA, CD and PCA transformations for whitening loss, evaluated on CIFAR-10 using the standard setup for SSL (see Section 4.1 for details). Besides, we also provide the result of batch normalization (BN), which only performs standardization without decorrelating the axes, and the ‘Plain’ method, which imposes the loss directly on the embedding. From Figure 2, we observe that naively training a Siamese network (‘Plain’) results in collapse both of the embedding (Figure 2(c)) and of the encoding (Figure 2(d)), which significantly hampers the performance (Figure 2(a)), although its training loss becomes close to zero (Figure 2(b)). We also observe that an extra BN imposed on the embedding prevents collapse to a point. However, it suffers from dimensional collapse, where the ranks of the embedding and encoding are significantly low, which also hampers the performance. ZCA and CD whitening both maintain a high rank of the embedding and encoding by decorrelating the axes, ensuring high linear evaluation accuracy. However, we note that PCA whitening shows significantly different behavior: PCA whitening cannot decrease the loss and cannot even avoid dimensional collapse, which also leads to significantly downgraded performance. This interesting observation challenges the motivations of whitening loss for SSL. We defer the analyses and illustration to Section 3.3. Whitened Output is not a Good Representation. As introduced before, the motivation of whitening loss for SSL is that the whitening operation can remove the correlation among axes [21] and a whitened representation ensures that the examples are scattered in a spherical distribution [12], which is sufficient to avoid collapse. Based on this argument, one should use the whitened output Ẑ as the representation for downstream tasks, rather than the encoding H that is commonly used. This raises the questions of whether H is well whitened and whether the whitened output is a good feature. We conduct experiments to compare the performance of whitening loss when using H, Z and Ẑ as representations for evaluation, respectively. The results are shown in Figure 3. We observe that using the whitened output Ẑ as a representation gives significantly worse performance than using H. Furthermore, we find that the normalized stable rank of H is significantly smaller than 100%, which suggests that H is not well whitened. These results show that the whitened output is not a good representation. 3.3 Analysing Decomposition of Whitening Loss For clarity, we use a mini-batch input of size m. Given one mini-batch input X with two augmented views, Eqn. 4 can be formulated as: $\mathcal{L}(X) = \frac{1}{m}\|\hat{Z}_1 - \hat{Z}_2\|_F^2$. (5) Let us consider a proxy loss described as: $\mathcal{L}'(X) = \underbrace{\tfrac{1}{m}\|\hat{Z}_1 - (\hat{Z}_2)_{st}\|_F^2}_{\mathcal{L}'_1} + \underbrace{\tfrac{1}{m}\|(\hat{Z}_1)_{st} - \hat{Z}_2\|_F^2}_{\mathcal{L}'_2}$, (6) where (·)st indicates the stop-gradient operation. It is easy to demonstrate that $\frac{\partial \mathcal{L}}{\partial \theta} = \frac{\partial \mathcal{L}'}{\partial \theta}$ (see the supplementary materials for the proof). That is, the optimization dynamics of L are equivalent to those of L′. By looking into the first term of Eqn. 6, we have: $\mathcal{L}'_1 = \frac{1}{m}\|\phi(Z_1)Z_1 - (\hat{Z}_2)_{st}\|_F^2$. (7) Here, we can view φ(Z1) as a predictor that depends on Z1 during forward propagation, and Ẑ2 as a whitened target with $r(\hat{Z}_2) = \operatorname{Rank}(\hat{Z}_2) = d_z$.
In this way, we find that minimizing L′1 only requires the embedding Z1 to be full-rank with Rank(Ẑ1) = dz, as stated by the following proposition. Proposition 1. Let $\mathcal{A} = \arg\min_{Z_1} \mathcal{L}'_1(Z_1)$. We have that $\mathcal{A}$ is not an empty set, and $\forall Z_1 \in \mathcal{A}$, $Z_1$ is full-rank. Furthermore, for any $\{\sigma_i\}_{i=1}^{d_z}$ with $\sigma_1 \geq \sigma_2 \geq \ldots \geq \sigma_{d_z} > 0$, we construct $\tilde{\mathcal{A}} = \{Z_1 \mid Z_1 = U_2\, \operatorname{diag}(\sigma_1, \sigma_2, \ldots, \sigma_{d_z})\, V_2^T\}$, where $U_2 \in \mathbb{R}^{d_z \times d_z}$ and $V_2 \in \mathbb{R}^{m \times d_z}$ are from the singular value decomposition of $\hat{Z}_2$, i.e., $U_2(\sqrt{m}\,I)V_2^T = \hat{Z}_2$. When we use ZCA whitening, we have $\tilde{\mathcal{A}} \subseteq \mathcal{A}$. The proof is shown in the supplementary materials. Proposition 1 states that there are infinitely many full-rank matrices that are optima when minimizing L′1 w.r.t. Z1. Therefore, minimizing L′1 only requires the embedding Z1 to be full-rank with Rank(Ẑ1) = dz, and does not necessarily constrain Z1 to be whitened with r(Z1) = dz. A similar analysis also applies to L′2: minimizing L′2 only requires Z2 to be full-rank. Therefore, the BW-based methods shown in Eqn. 4 do not impose whitening constraints on the embedding as formulated in Eqn. 3; they only require the embedding to be full-rank. This full-rank constraint is also sufficient to avoid dimensional collapse for the embedding, even though it is a weaker constraint than whitening. Our analysis further implies that whitening loss in its symmetric formulation (Eqn. 5) can be decomposed into two asymmetric losses (Eqn. 6), where each asymmetric loss requires an online network to match a whitened target. This mechanism provides a pivot connecting to other methods, and a clue to understanding why PCA whitening fails to avoid dimensional collapse for SSL. Connection to Asymmetric Methods. The asymmetric formulation of whitening loss shown in Eqn. 7 bears resemblance to the asymmetric methods without negative pairs, e.g., SimSiam [8]. In these methods, an extra predictor is incorporated and the stop-gradient is essential for avoiding collapse. In particular, SimSiam uses the objective: $\mathcal{L}(X) = \frac{1}{m}\|P_{\theta_p}(\cdot) \circ Z_1 - (Z_2)_{st}\|_F^2 + \frac{1}{m}\|P_{\theta_p}(\cdot) \circ Z_2 - (Z_1)_{st}\|_F^2$, (8) where Pθp(·) is the predictor with learnable parameters θp. By contrasting Eqn. 7 and the first term of Eqn. 8, we find that: 1) BW-based whitening loss ensures a whitened target Ẑ2, while SimSiam does not put constraints on the target Z2; 2) SimSiam uses a learnable predictor Pθp(·), which is shown to empirically avoid collapse by matching the rank of the covariance matrix through back-propagation [40], while BW-based whitening loss has an implicit predictor φ(Z1) depending on the input itself, which is a full-rank matrix by design. Based on this analysis, we find that BW-based whitening loss can surely avoid collapse if the loss converges well, while SimSiam cannot provide such a guarantee in avoiding collapse. A similar analysis also applies to BYOL [16], except that BYOL uses a momentum target network for providing the target signal. Connection to Soft Whitening. VICReg [2] also encourages whitened embeddings produced from different views, but by imposing a whitening penalty as a regularization on the embedding, which is called soft whitening. In particular, given a mini-batch input, the objective of VICReg is as follows (see footnote 4): $\mathcal{L}(X) = \frac{1}{m}\|Z_1 - Z_2\|_F^2 + \alpha \sum_{i=1}^{2} \big\|\tfrac{1}{m}Z_iZ_i^T - \lambda I\big\|_F^2$, (9) where α ≥ 0 is the penalty factor. Similarly, we can use a proxy loss for VICReg; considering only its term corresponding to optimizing Z1 (similar to Eqn. 7), we have: $\mathcal{L}'_{VICReg}(X) = \frac{1}{m}\|Z_1 - (Z_2)_{st}\|_F^2 + \alpha\big\|\tfrac{1}{m}Z_1Z_1^T - \lambda I\big\|_F^2$.
(10) Based on this formulation, we observe that VICReg requires the embedding Z1 to be whitened by 1) the additional whitening penalty, and 2) fitting the (expected) whitened targets Z2. (Footnote 4: Note the slight difference that VICReg uses a margin loss on the diagonal of the covariance, while our notation uses an MSE loss.) By contrasting Eqns. 7 and 10, we highlight that the so-called hard whitening methods, like W-MSE [12], only impose full-rank constraints on the embedding, while soft whitening methods indeed impose whitening constraints. A similar analysis also applies to Barlow Twins [45], except that the whitening/decorrelation penalty is imposed on the cross-covariance matrix of embeddings from different views. Connection to Other Non-contrastive Methods. SwAV [4], a clustering-based method, uses a "swapped" prediction mechanism where the cluster assignment (code) of a view is predicted from the representation of another view, by minimizing the following objective: $\mathcal{L}(X) = \ell(C^TZ_1, (Q_2)_{st}) + \ell(C^TZ_2, (Q_1)_{st})$. (11) Here, C is the prototype matrix learned by back-propagation, Qi is the predicted code with equal-partition and high-entropy constraints, and SwAV uses a cross-entropy loss as ℓ(·, ·) to match the distributions. The constraints on Qi are approximately satisfied during optimization by using the iterative Sinkhorn-Knopp algorithm conditioned on the input $C^TZ_i$. Note that SwAV explicitly uses stop-gradient when it calculates the target Qi. By contrasting Eqn. 7 and the first term of Eqn. 11, we find that: 1) SwAV can be viewed as an online network matching a target with constraints, like BW-based whitening loss, even though the constraints imposed on the targets differ between them; 2) from the perspective of asymmetric structure, SwAV indeed uses a linear predictor $C^T$ that is also learned by back-propagation, like SimSiam, while BW-based whitening loss has an implicit predictor φ(Z1) depending on the input itself. A similar analysis also applies to DINO [5], which further simplifies the formulation of SwAV by removing the prototype matrix and directly matching the output of another view, from the perspective of knowledge distillation. DINO uses centering and sharpening operations to impose the constraints on the target (the output of another view). One significant difference between DINO and whitening loss is that DINO uses population statistics for centering, calculated by a moving average, while whitening loss uses the mini-batch statistics for whitening. Why PCA Whitening Fails to Avoid Dimensional Collapse? Based on Eqn. 7, we note that whitening loss can favorably provide full-rank constraints on the embedding under the condition that the online network can match the whitened targets well. We experimentally find that PCA-based whitening loss provides a volatile sequence of whitened targets during training, as shown in Figure 4(a). It is difficult for the online network to match such a target signal with significant variation, resulting in a minimal decrease in the whitening loss (see Figure 2). Furthermore, we observe that PCA-based whitening loss also has significantly varying whitening-matrix sequences {φt(·)} (Figure 4(b)), even given the same input data. This coincides with the observation in [16, 8] that an unstable predictor results in significantly degenerate performance. Our observations are also in accordance with the arguments in [22, 23] that PCA-based BW shows significant stochasticity.
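As a rough numerical illustration of this argument (a toy sketch on synthetic Gaussian data, not a reproduction of the paper's training runs), one can compare how much the ZCA and PCA whitening matrices change between two mini-batches drawn from the same data; the rotational ambiguity of the eigenvectors typically makes the PCA matrix far less stable.

```python
import numpy as np

def whitening_matrix(Z, kind, eps=1e-7):
    """ZCA or PCA whitening matrix computed from a centered mini-batch Z (d x m)."""
    Zc = Z - Z.mean(axis=1, keepdims=True)
    lam, U = np.linalg.eigh(Zc @ Zc.T / Z.shape[1] + eps * np.eye(Z.shape[0]))
    inv_sqrt = np.diag(lam ** -0.5)
    return U @ inv_sqrt @ U.T if kind == "zca" else inv_sqrt @ U.T

rng = np.random.default_rng(0)
d, m = 32, 128
pool = rng.standard_normal((d, 4 * m))                      # a fixed pool of examples
for kind in ("zca", "pca"):
    diffs = []
    for _ in range(50):
        Z1 = pool[:, rng.choice(4 * m, m, replace=False)]   # two mini-batches from
        Z2 = pool[:, rng.choice(4 * m, m, replace=False)]   # the same distribution
        P1 = whitening_matrix(Z1, kind)
        P2 = whitening_matrix(Z2, kind)
        diffs.append(np.linalg.norm(P1 - P2) / np.linalg.norm(P1))
    print(kind, "mean relative change of whitening matrix:", round(float(np.mean(diffs)), 3))
# PCA is typically much less stable here: eigenvector order and sign are not identifiable
# when eigenvalues are close, so Lambda^{-1/2} U^T can change drastically across batches.
```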
We note that ZCA whitening can provide relatively stable sequences of whitened targets and whitening matrices during training (Figure 4), which ensures stable training for SSL. This is likely due to the property of ZCA-based whitening that it minimizes the total squared distance between the original and whitened variables [26, 22]. Why Whitened Output is not a Good Representation? A whitened output removes the correlation among axes [21] and ensures that the examples are scattered in a spherical distribution [12], which bears resemblance to contrastive learning, where different examples are pulled apart. We conduct experiments to compare SimCLR [6], BYOL [16], VICReg [2] and W-MSE [12], and monitor the cosine similarity over all negative pairs, the stable-rank and the rank during training. From Figure 5, we find that all methods can achieve a high rank on the encoding. This is driven by the improved extent of whitening on the embedding. Furthermore, we observe that the negative-pair cosine similarity decreases during training, while the extent of stable-rank increases, for all methods. This observation suggests that a representation with a stronger extent of whitening is more likely to have less similarity among different examples. We further conduct experiments to validate this argument, using VICReg with a varying penalty factor α (Eqn. 10) to adjust the extent of whitening on the embedding (Figure 5(d)). Therefore, a whitened output leads to a state in which all examples have dissimilar features. This state can break the potential manifold that the examples in the same class belong to, which makes the learning more difficult [17]. A similar analysis for contrastive learning is also shown in [6], where classes represented by the projected output (embedding) are not well separated, compared to the encoding. 4 Channel Whitening with Random Group Partition One main weakness of BW-based whitening loss is that the whitening operation requires the number of examples (mini-batch size) m to be larger than the number of channels d, to avoid numerical instability (see footnote 5). This requirement limits its usage in scenarios where a large batch of training data cannot fit into memory. Based on the previous analysis, the whitening loss can be viewed as an online learner matching a whitened target with all singular values being one. We note that the key of whitening loss is that it conducts a transformation φ : Z → Ẑ ensuring that the singular values of Ẑ are one. We thus propose channel whitening (CW), which ensures the examples in a mini-batch are orthogonal: Centering: $Z_c = (I - \tfrac{1}{d}\mathbf{1}\mathbf{1}^T)Z$, Whitening: $\hat{Z} = Z_c\Phi$, (12) where $\Phi \in \mathbb{R}^{m \times m}$ is the ‘whitening matrix’ derived from the corresponding ‘covariance matrix’ $\Sigma' = \tfrac{1}{d-1}Z_c^TZ_c$. In our implementation, we use ZCA whitening to obtain Φ. CW ensures the examples in a mini-batch are orthogonal to each other, with $\hat{Z}^T\hat{Z} = \tfrac{1}{d-1}I$. This means CW has the same ability as BW for SSL in avoiding dimensional collapse, by providing a target Ẑ whose singular values are all one. More importantly, one significant advantage of CW is that it obtains numerical stability when the batch size is small, since the condition that d > m can be satisfied by design (e.g., we can set the channel number d of the embedding to be larger than the batch size m). Besides, we find that CW can amplify the full-rank constraints on the embedding by dividing the channels/neurons into random groups, as we will illustrate. Random Group Partition.
Given the embedding Z ∈ Rd×m, d > m, we divide it into g ≥ 1 groups $\{Z^{(i)} \in \mathbb{R}^{\frac{d}{g}\times m}\}_{i=1}^{g}$, where we assume that d is divisible by g and ensure $\frac{d}{g} > m$. We then perform CW on each Z(i), i = 1, ..., g. Note that the ranks of Z and Z(i) are all at most m. Therefore, CW with group partition provides g constraints with Rank(Z(i)) = m on the embedding, compared to CW without group partition, which provides only one constraint with Rank(Z) = m. Although CW with group partition can provide more full-rank constraints for mini-batch data, we find that it can also make the population data correlated if the group partition is the same throughout training, which decreases the rank and does not improve accuracy in our experiments (Figure 6). We find that random group partition, which randomly divides the channels/neurons into groups for each iteration (mini-batch of data), can alleviate this issue and obtain improved performance (Figure 6). We call our method channel whitening with random group partition (CW-RGP), and provide the full algorithm and PyTorch-style code in the supplementary materials. We note that Hua et al. [21] use a similar idea for BW, called Shuffled-DBN. However, Shuffled-DBN cannot effectively amplify the full-rank constraints by using more groups, since BW-based methods require $m > \frac{d}{g}$ to avoid numerical instability. (Footnote 5: An empirical setting is m = 2d, which can obtain good performance as shown in [12, 21].) We further show that CW-RGP works remarkably better than Shuffled-DBN in the subsequent experiments. We attribute these results to the ability of CW-RGP to amplify the full-rank constraints by using groups. 4.1 Experiments for Empirical Study
Table 2: Comparisons on ImageNet linear classification. All are based on a ResNet-50 encoder. The table is mostly inherited from [8].
Method | Batch size | 100 eps | 200 eps
SimCLR [6] | 4096 | 66.5 | 68.3
MoCo v2 [7] | 256 | 67.4 | 69.9
BYOL [16] | 4096 | 66.5 | 70.6
SwAV [4] | 4096 | 66.5 | 69.1
SimSiam [8] | 256 | 68.1 | 70.0
W-MSE 4 [12] | 4096 | 69.4 | -
Zero-CL [48] | 1024 | 68.9 | -
BYOL [16] (repro.) | 512 | 66.1 | 69.2
SwAV [4] (repro.) | 512 | 65.8 | 67.9
W-MSE 4 [12] (repro.) | 512 | 66.7 | 67.9
CW-RGP 4 (ours) | 512 | 69.7 | 71.0
In this section, we conduct experiments to validate the effectiveness of our proposed CW-RGP. We evaluate the performance of CW-RGP for classification on CIFAR-10, CIFAR-100 [28], STL-10 [10], TinyImageNet [29] and ImageNet [11]. We also evaluate its effectiveness in transfer learning for a model pre-trained using CW-RGP. We run the experiments on one workstation with 4 GPUs. For more details of the implementation and training protocol, please refer to the supplementary materials. Evaluation for Classification We first conduct experiments on small- and medium-size datasets (including CIFAR-10, CIFAR-100, STL-10 and TinyImageNet), strictly following the setup of the W-MSE paper [12]. Our CW-RGP inherits the advantages of W-MSE in exploiting different views. CW-RGP 2 and CW-RGP 4 indicate our method with s = 2 and s = 4 positive views extracted per image, respectively, similar to W-MSE [12]. The baseline results shown in Table 1 are partly inherited from [12], except that we reproduce certain baselines under the same training and evaluation settings as in [12] (some different hyper-parameter settings are given in the supplementary materials). We observe that CW-RGP obtains the highest accuracy on almost all the datasets except Tiny-ImageNet. Besides, CW-RGP with 4 views is generally better than with 2, similar to W-MSE. These results show that CW-RGP is a competitive SSL method.
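For illustration, here is a minimal NumPy sketch of channel whitening with a random group partition, following Eqn. 12 and the grouping described above. It is our reconstruction for exposition, not the authors' released PyTorch code; in actual training the whitening transformation is back-propagated through, and the group size and regularization below are arbitrary choices.

```python
import numpy as np

def channel_whiten(Z, eps=1e-6):
    """Channel whitening (Eqn. 12): center over channels, then right-multiply by the
    ZCA 'whitening matrix' of the m x m 'covariance' (1/(d-1)) Zc^T Zc."""
    d, m = Z.shape
    Zc = Z - Z.mean(axis=0, keepdims=True)           # (I - (1/d) 1 1^T) Z
    Sigma = Zc.T @ Zc / (d - 1) + eps * np.eye(m)    # m x m, requires d > m
    lam, U = np.linalg.eigh(Sigma)
    Phi = U @ np.diag(lam ** -0.5) @ U.T             # ZCA whitening matrix
    return Zc @ Phi

def cw_rgp(Z, g, rng):
    """Channel whitening with a random group partition: permute channels, whiten each
    of the g groups of d/g channels independently, then undo the permutation."""
    d, m = Z.shape
    assert d % g == 0 and d // g > m, "each group must have more channels than examples"
    perm = rng.permutation(d)                        # fresh random partition per mini-batch
    out = np.empty_like(Z, dtype=float)
    for i in range(g):
        idx = perm[i * (d // g):(i + 1) * (d // g)]
        out[idx] = channel_whiten(Z[idx])
    return out

rng = np.random.default_rng(0)
Z = rng.standard_normal((512, 64))                   # d = 512 channels, m = 64 examples
Zhat = cw_rgp(Z, g=4, rng=rng)
print(Zhat.shape)                                    # groups of 128 channels, each whitened
```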
We also confirm that CW with random group partition could obtain a higher performance than BW (and with random group partition), comparing CW-RGP to W-MSE and Shuffled-DBN. We then conduct experiments on large-scale ImageNet, strictly following the setup of SimSiam paper [8]. The results of baselines shown in Table 2 are mostly reported in [8], except that the result of W-MSE 4 is from the W-MSE paper [12] and we reproduce BYOL [16], SwAV [4] and W-MSE 4 [12] under a batch size of 512 based on the same training and evaluation settings as in [8] for fairness. CW-RGP 4 is trained with a batch size of 512 and gets the highest accuracy among all methods under both 100 and 200 epochs training. We find that our CW-RGP can also work well when combined with the whitening penalty used in VICReg. Note that we also try a batch size of 256 under 100-epoch training, which gets the top-1 accuracy of 69.5%. Transfer to downstream tasks We examine the representation quality by transferring our model to other tasks, including VOC [13] object detection, COCO [32] object detection and instance segmentation. We use the baseline (except for the pre-training model, the others are exactly the same) of the detection codebase from MoCo [19] for CW-RGP to produce the results. The results of baselines shown in Table3 are mostly inherited from [8]. We clearly observe that CW-RGP performs better than or on par with these state-of-the-art approaches on COCO object detection and instance segmentation, which shows the great potential of CW-RGP in transferring to downstream tasks. Ablation for Random Group Partition. We also conduct experiments to show the advantages of random group partition for channel whitening. We use ‘CW’, ‘CWGP’ and ‘CW-RGP’ to indicate channel whitening without group partition, with group partition and with random group partition, respectively. We further consider the setup with s = 2 and s = 4 positive views. We use the same setup as in Table 1 and show the results in Table 4. We have similar observation as in Figure 6 that CW with random group partition improves the performance. Ablation for Batch Size. Here, we conduct experiments to empirically show the advantages of CW over BW, in terms of the stability using different batch size. We train CW and BW on ImageNet-100, using batch size ranging in {32, 64, 128, 256}. Figure 7 shows the results. We can find that CW is more robust for small batch size training. 5 Conclusion and Limitation In this paper, we invested whitening loss for SSL, and observed several interesting phenomena with further clarification based on our analysis framework. We showed that batch whitening (BW) based methods only require the embedding to be full-rank, which is also a sufficient condition for collapse avoidance. We proposed channel whitening with random group partition (CW-RGP) that is well motivated theoretically in avoiding a collapse and has been validated empirically in learning good representation. Limitation. Our work only shows how to avoid collapse by using whitening loss, but does not explicitly show what should be the extent of whitening of a good representation. We note that a concurrent work addresses this problem by connecting the eigenspectrum of a representation to a power law [15], and shows the coefficient of the power law is a strong indicator for the effects of representation. We believe our work can be further extended when combined with the analyses from [15]. 
Besides, our work does not answer how the projector affects the extents of whitening between encoding and embedding [18], which is important to answer why encoding is usually used as a representation for evaluation, rather than the whitened output or embedding. Our attempts, shown in supplementary materials, provide preliminary results, but does not offer an answer to this question. Acknowledgement This work was partially supported by the National Key Research and Development Plan of China under Grant 2021ZD0112901, National Natural Science Foundation of China (Grant No. 62106012), the Fundamental Research Funds for the Central Universities.
1. What is the focus of the paper regarding self-supervised learning and whitening losses? 2. What are the strengths and weaknesses of the proposed channel whitening with random group partition (CW-RGP)? 3. Do you have any concerns about the interpretations provided in the paper regarding PCA and whitened outputs? 4. How does the reviewer assess the comparisons in Table 2, particularly regarding under-trained baselines like BYOL/SWAV? 5. What is the novelty of the proposed method compared to previous works such as [47], and how does it contribute to the field of self-supervised learning? 6. Are there any minor issues or suggestions for improvements in the paper, such as including references for all baselines, explaining the estimation of the whitening matrix, or clarifying the meaning of "d=2" and "d=4"? 7. Are there any potential limitations or negative societal impacts associated with the proposed approach that should be considered?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The paper provides an analysis of different whitening losses used in Self-supervised learning, seeking to interpret some empirical observations, e.g. the connection between whitening losses and the asymmetric methods (BYOL/SimSiam), why PCA does not work, and why whitened outputs are not good representations. The paper also proposes channel whitening with random group partition (CW-RGP), which is shown to be an effective whitening loss. CW-RGP is evaluated on image recognition (ImageNet1k and 4 other benchmarks), object detection (VOC/COCO), and instance segmentation (COCO). Strengths And Weaknesses Strengths i) the proposed random group partition is technically sound; ii) the performance of the proposed CW-RGP method looks promising on some benchmarks e.g. COCO object detection/ instance segmentation; iii) an ablation study on the batch size is provided; iv) the writing is clear in general. Weaknesses i) The interpretations about “why PCA does not work” and “why the whitened output is not good” are not convincing. To me, the explanation could be much simpler and more intuitive: the batch whitened outputs rely on the batch statistics: an image may have different whitened representations when computed in different mini-batches. When using PCA, the descriptor of an image relies on the eigenvectors (U in L136), which may change dramatically across mini-batches. This explains why BW-based approaches prefer large batch sizes. It explains the experimental results shown in Fig. 4. It also explains why whitened outputs are not good representations, i.e. experiments in Fig. 3. Note that the predictors in the asymmetric methods (L197), on the other hand, do not rely on batch statistics, which I believe is a key difference. ii) The comparisons in Table 2 may not show the full picture, e.g. baselines like BYOL/SWAV may be significantly under-trained. Here, the batch size for BYOL/SWAV is 4096. When trained for fewer epochs (e.g. 200 epochs), a large batch size may hurt the performance as it leads to significantly fewer training iterations. It would be better if baselines like BYOL/SWAV-batch-size-512-epoch-100/200 are also included. iv) Channel whitening has been proposed before in [47] for the same task. As [47] has been published in ICLR 2022. I’m not sure if [47] could be considered a concurrent work. Compared to [47], the new content is the random group partition. This extra design may not be enough for NeurIPS. Overall, I believe [47] should at least be included as a baseline, and the ablation on the random group partition should be included. Minor issues i) L18-19 “two networks are trained …[8]”, I think there is only one network ii) L286: “rand” → “random” iii) Table 1, Simsim → SimSiam iv) Table 1, references are included for some baselines (SimCLR, BYOL), but not all (e.g. Shuffled-DBN, W-MSE) v) Table 2 & 3, it would be better if references are included. Questions i) When evaluating the performance of the whitened representation (L167, Fig.3) I wonder if the whitening matrix (\Phi(Z)) is estimated per mini-batch or over the whole training set? ii) L308-309, are “d=2” and “d=4” here referring to “g=2” and “g=4”? Limitations Limitations are discussed in the main paper. Potential negative societal impacts are not discussed.
NIPS
Title An Investigation into Whitening Loss for Self-supervised Learning Abstract A desirable objective in self-supervised learning (SSL) is to avoid feature collapse. Whitening loss guarantees collapse avoidance by minimizing the distance between embeddings of positive pairs under the conditioning that the embeddings from different views are whitened. In this paper, we propose a framework with an informative indicator to analyze whitening loss, which provides a clue to demystify several interesting phenomena as well as a pivoting point connecting to other SSL methods. We reveal that batch whitening (BW) based methods do not impose whitening constraints on the embedding, but they only require the embedding to be full-rank. This full-rank constraint is also sufficient to avoid dimensional collapse. Based on our analysis, we propose channel whitening with random group partition (CW-RGP), which exploits the advantages of BW-based methods in preventing collapse and avoids their disadvantages requiring large batch size. Experimental results on ImageNet classification and COCO object detection reveal that the proposed CW-RGP possesses a promising potential for learning good representations. The code is available at https://github.com/winci-ai/CW-RGP. 1 Introduction Self-supervised learning (SSL) has made significant progress over the last several years [1, 19, 6, 16, 8], almost reaching the performance of supervised baselines on many downstream tasks [33, 24, 35]. Several recent approaches rely on a joint embedding architecture in which a dual pair of networks are trained to produce similar embeddings for different views of the same image [8]. Such methods aim to learn representations that are invariant to transformation of the same input. One main challenge with the joint embedding architectures is how to prevent a collapse of representation, in which the two branches ignore the inputs and produce identical and constant output representations [6, 8]. One line of work uses contrastive learning methods that attract different views from the same image (positive pairs) while pull apart different images (negative pairs), which can prevent constant outputs from the solution space [43]. While the concept is simple, these methods need large batch size to obtain a good performance [19, 6, 37]. Another line of work tries to directly match the positive targets without introducing negative pairs. A seminal approach, BYOL [16], shows that an extra predictor and momentum is essential for representation learning. SimSiam [8] further generalizes [16] by empirically showing that stop-gradient is essential for preventing trivial solutions. Recent works generalize the collapse problem into dimensional collapse [21, 25]2 where the embedding vectors only span a lower-dimensional subspace and would be highly correlated. Therefore, the embedding vector dimensions would vary together and contain redundant information. To prevent the dimensional ∗equal contribution corresponding author (huangleiAI@buaa.edu.cn). This work was partially done while Lei Huang was a visiting scholar at Mohamed bin Zayed University of Artificial Intelligence, UAE. 2This collapse is also referred to informational collapse in [2]. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). collapse, whitening loss is proposed by only minimizing the distance between embeddings of positive pairs under the condition that embeddings from different views are whitened [12, 21]. 
A typical way is using batch whitening (BW) and imposing the loss on the whitened output [12, 21], which obtains promising results. Although whitening loss has theoretical guarantee in avoiding collapse, we experimentally observe that this guarantee depends on which kind of whitening transformation [26] is used in practice (see Section 3.2 for details). This interesting observation challenges the motivations of whitening loss for SSL. Besides, the motivation of whitening loss is that the whitening operation can remove the correlation among axes [21] and a whitened representation ensures the examples scattered in a spherical distribution [12]. Based on this argument, one can use the whitened output as the representation for downstream tasks, but it is not used in practice. To this end, this paper investigates whitening loss and tries to demystify these interesting observations. Our contributions are as follows: • We decompose the symmetric formulation of whitening loss into two asymmetric losses, where each asymmetric loss requires an online network to match a whitened target. This mechanism provides a pivoting point connecting to other methods, and a way to understand why certain whitening transformation fails to avoid dimensional collapse. • Our analysis shows that BW based methods do not impose whitening constraints on the embedding, but they only require the embedding to be full-rank. This full-rank constraint is also sufficient to avoid dimensional collapse. • We propose channel whitening with random group partition (CW-RGP), which exploits the advantages of BW-based method in preventing collapse and avoids their disadvantages requiring large batch size. Experimental results on ImageNet classification and COCO object detection show that CW-RGP has promising potential in learning good representation. 2 Related Work A desirable objective in self-supervised learning is to avoid feature collapse. Contrastive learning prevents collapse by attracting positive samples closer, and spreading negative samples apart [43, 44]. In these methods, negative samples play an important role and need to be well designed [34, 1, 20]. One typical mechanism is building a memory bank with a momentum encoder to provide consistent negative samples, proposed in MoCos [19], yielding promising results [19, 7, 9, 30]. Other works include SimCLR [6] addresses that more negative samples in a batch with strong data augmentations perform better. Contrastive methods require large batch sizes or memory banks, which tends to be costly, promoting the questions whether negative pairs is necessary. Non-contrastive methods aim to accomplish SSL without introducing negative pairs explicitly [3, 4, 31, 16, 8]. One typical way to avoid representational collapse is the introduction of asymmetric network architecture. BYOL [16] appends a predictor after the online network and introduce momentum into the target network. SimSiam [8] further simplifies BYOL by removing the momentum mechanism, and shows that stop-gradient to target network serves as an alternative approximation to the momentum encoder. Other progress includes an asymmetric pipeline with a self-distillation loss for Vision Transformers [5]. 
It remains not clear how the asymmetric network avoids collapse without negative pairs, leaving the debates on batch normalization (BN) [14, 41, 36] and stop-gradient [8, 46], even though preliminary works have attempted to analyze the training dynamics theoretical with certain assumptions [40] and build a connection between asymmetric network with contrastive learning methods [39]. Our work provides a pivoting point connecting asymmetric network to profound whitening loss in avoiding collapse. Whitening loss has theoretical guarantee in avoiding collapse by minimizing the distance of positive pairs under the conditioning that the embeddings from different views are whitened [45, 12, 21, 2]. One way to obtain whitened output is imposing a whitening penalty as regularization on embedding— the so-called soft whitening, which is proposed in Barlow Twins [45], VICReg [2] and CCA-SSG [47]. Another way is using batch whitening (BW) [22]—the so-called hard whitening, which is used in W-MSE [12] and Shuffled-DBN [21]. We propose a different hard whitening method—channel whitening (CW) that has the same function that ensures all the singular values of transformed output being one for avoiding collapse. But CW is more numerical stable and works better when batch size is small, compared to BW. Furthermore, our CW with random group partition (CW-RGP) can effectively control the extent of constraint on embedding and obtain better performance in practice. We note that a recent work ICL [48] proposes to decorrelate instances, like CW but having several significant differences in technical details. ICL uses "stop-gradient" for the whitening matrix, while CW requires back-propagation through the whitening transformation. Besides, ICL uses extra pre-conditioning on the covariance and whitening matrices, which is essential for the numerical stability, while CW does not use extra pre-conditioning and can work well since it encourages the embedding to be full-rank. 3 Exploring Whitening Loss for SSL 3.1 Preliminaries Let x denote the input sampled uniformly from a set of images D, and T denote the set of data transformations available for augmentation. We consider the Siamese network fθ(·) parameterized by θ. It takes as input two randomly augmented views, x1 = T1(x) and x2 = T2(x), where T1,2 ∈ T. The network fθ(·) is trained with an objective function that minimizes the distance between embeddings obtained from different views of the same image: L(x, θ) = Ex∼D, T1,2∼T ` ( fθ(T1(x)), fθ(T2(x)) ) . (1) where `(·, ·) is a loss function. In particular, the Siamese network usually consists of an encoder Eθe(·) and a projector Gθg (·). Their outputs h = Eθe(T (x)) and z = Gθg (h) are referred to as encoding and embedding, respectively. We summarize the notations and use the corresponding capital letters denoting mini-batch data in Figure 1. Under this notation, we have fθ(·) = Gθg (Eθe(·)) with learnable parameters θ = {θe, θg}. The encoding h is usually used as representation for evaluation by either training a linear classifier [19] or transferring to downstream tasks. This is due to that h is shown to obtain significantly better performance than the embedding z [6, 8]. The mean square error (MSE) of L2−normalized vectors is usually used as the loss function [8]: `(z1, z2) = ‖ z1 ‖z1‖2 − z2 ‖z2‖2 ‖22, (2) where ‖ · ‖2 denotes the L2 norm. This loss is also equivalent to the negative cosine similarity, up to a scale of 12 and an optimization irrelevant constant. Collapse and Whitening Loss. While minimizing Eqn. 
1, a trivial solution known as collapse could occur such that fθ(x) ≡ c, ∀x ∈ D. The state of collapse will provide no gradients for learning and offer no information for discrimination. Moreover, a weaker collapse condition called dimensional collapse can be easily arrived, for which the projected features collapse into a low-dimensional manifold. As illustrated in [21], dimensional collapse is associated with strong correlations between axes, which motivates the use of whitening method in avoiding the dimensional collapse. The general idea of whitening loss [12] is to minimize Eqn. 1, under the condition that embeddings from different views are whitened, which can be formulated as3: min θ L(x; θ) = Ex∼D, T1,2∼T `(z1, z2), s.t. cov(zi, zi) = I, i ∈ {1, 2}. (3) Whitening loss provides theoretical guarantee in avoiding (dimensional) collapse, since the embedding is whitened with all axes decorrelated [12, 21]. While it is difficult to directly solve the problem of Eqn. 3, Ermolov et al. [12] propose to whiten the mini-batch embedding Z ∈ Rdz×m using batch whitening (BW) [22, 38] and impose the loss on the whitened output Ẑ ∈ Rdz×m, given the mini-batch inputs X with size of m, as follows: min θ L(X; θ) = EX∼D, T1,2∼T ‖Ẑ1 − Ẑ2‖2F with Ẑi = Φ(Zi), i ∈ {1, 2}, (4) where Φ(·) denotes the whitening transformation over mini-batch data. 3The dual view formulation can be extended to s different views, as shown in [12]. Whitening Transformations. There are an infinite number of possible whitening matrices, as shown in [26, 22], since any whitened data with a rotation is still whitened. For simplifying notation, we assume Z is centered by Z := Z(I − 1m11 T ). Ermolov et al. [12] propose W-MSE that uses Cholesky decomposition (CD) whitening: ΦCD(Z) = L−1Z in Eqn. 4, where L is a lower triangular matrix from the Cholesky decomposition, with LLT = Σ. Here Σ = 1mZZ T is the covariance matrix of the embedding. Hua et al. [21] use zero-phase component analysis (ZCA) whitening [22] in Eqn. 4: ΦZCA = UΛ − 12UT , where Λ = diag(λ1, . . . , λdz ) and U = [u1, ...,udz ] are the eigenvalues and associated eigenvectors of Σ, i.e., UΛUT = Σ. Another famous whitening is principal components analysis (PCA) whitening: ΦPCA = Λ− 1 2UT [26, 22]. 3.2 Empirical Investigation on Whitening Loss In this section, we conduct experiments to investigate the effects of different whitening transformations Φ(·) used in Eqn. 4 for SSL. Besides, we investigate the performances of different features (including encoding H, embedding Z and the whitened output Ẑ) used as representation for evaluation. For illustration, we first define the rank and stable-rank [42] of a matrix as follows: Definition 1. Given a matrix A ∈ Rd×m, d ≤ m, we denote {λ1, ..., λd} the singular values of A in a descent order with convention λ1 > 0. The rank of A is the number of its non-zero singular values, denoted as Rank(A) = ∑d i=1 I(λi > 0), where I(·) is the indicator function. The stable-rank of A is denoted as r(A) = ∑d i=1 λi λ1 . By definition, Rank(A) can be a good indicator to evaluate the extent of dimensional collapse of A, and r(A) can be an indicator to evaluate the extent of whitening of A. It can be demonstrated that r(A) ≤ Rank(A) ≤ d [42]. Note that if A is fully whitened with covariance matrix AAT = mI, we have r(A) = Rank(A) = d. 
We also define normalized rank as R̂ank(A) = Rank(A)d and normalized stable-rank as r̂(A) = r(A)d , for comparing the extent of dimensional collapse and whitening of matrices with different dimensions, respectively. PCA Whitening Fails to Avoid Dimensional Collapse. We compare the effects of ZCA, CD, PCA transformations for whitening loss, evaluated on CIFAR-10 using the standard setup for SSL (see Section 4.1 for details). Besides, we also provide the result of batch normalization (BN) that only performs standardization without decorrelating the axes, and the ‘Plain’ method that imposes the loss directly on embedding. From Figure 2, we observe that naively training a Siamese network (‘Plain’) results in collapse both on the embedding (Figure 2(c)) and encoding (Figure 2(d)), which significantly hampers the performance (Figure 2(a)), although its training loss becomes close to zero (Figure 2(b)). We also observe that an extra BN imposed on the embedding prevents collapse to a point. However, it suffers from the dimensional collapse where the rank of embedding and encoding are significantly low, which also hampers the performance. ZCA and CD whitening both maintain high rank of embedding and encoding by decorrelating the axes, ensuring high linear evaluation accuracy. However, we note that PCA whitening shows significantly different behaviors: PCA whitening cannot decrease the loss and even cannot avoid the dimensional collapse, which also leads to significantly downgraded performance. This interesting observation challenges the motivations of whitening loss for SSL. We defer the analyses and illustration in Section 3.3. Whitened Output is not a Good Representation. As introduced before, the motivation of whitening loss for SSL is that the whitening operation can remove the correlation among axes [21] and a whitened representation ensures that the examples scattered in a spherical distribution [12], which is sufficient to avoid collapse. Based on this argument, one should use the whitened output Ẑ as the representation for downstream tasks, rather than the encoding H that is commonly used. This raises questions that whether H is well whitened and whether the whitened output is a good feature. We conduct experiments to compare the performances of whitening loss, when using H, Z and Ẑ as representations for evaluation respectively. The results are shown in Figure 3. We observe that using whitened output Ẑ as a representation has significantly worse performance than using H. Furthermore, we find that the normalized stable rank of H is significantly smaller than 100%, which suggests that H is not well whitened. These results show that the whitened output could not be a good representation. 3.3 Analysing Decomposition of Whitening Loss For clarity, we use the mini-batch input with size of m. Given one mini-batch input X with two augmented views, Eqn. 4 can be formulated as: L(X) = 1 m ‖Ẑ1 − Ẑ2‖2F . (5) Let us consider a proxy loss described as: L ′ (X) = 1 m ‖Ẑ1 − (Ẑ2)st‖2F︸ ︷︷ ︸ L′1 + 1 m ‖(Ẑ1)st − Ẑ2‖2F︸ ︷︷ ︸ L′2 , (6) where (·)st indicates the stop-gradient operation. It is easy to demonstrate that ∂L∂θ = ∂L ′ ∂θ (see supplementary materials for proof). That is, the optimization dynamics of L is equivalent to L′ . By looking into the first term of Eqn. 6, we have: L ′ 1 = 1 m ‖φ(Z1)Z1 − (Ẑ2)st‖2F . (7) Here, we can view φ(Z1) as a predictor that depends on Z1 during forward propagation, and Ẑ2 as a whitened target with r(Ẑ2) = Rank(Ẑ2) = dz . 
In this way, we find that minimizing L ′ 1 only requires the embedding Z1 being full-rank with Rank(Ẑ1) = dz , as stated by following proposition. Proposition 1. Let A = argminZ1L ′ 1(Z1). We have that A is not an empty set, and ∀Z1 ∈ A, Z1 is full-rank. Furthermore, for any {σi}dzi=1 with σ1 ≥ σ2 ≥, ..., σdz > 0, we construct à = {Z1|Z1 = U2 diag(σ1, σ2, ..., σdz ) V T 2 , where U2 ∈ Rdz×dz and V2 ∈ Rm×dz are from the singular value decomposition of Ẑ2, i.e., U2( √ mI)VT2 = Ẑ2. When we use ZCA whitening, we have à ⊆ A. The proof is shown in supplementary materials. Proposition 1 states that there are infinity matrix with full-rank that is the optimum when minimizing L′1 w.r.t. Z1. Therefore, minimizing L ′ 1 only requires the embedding Z1 being full-rank with Rank(Ẑ1) = dz , and does not necessarily impose the constraints on Z1 to be whitened with r(Z1) = dz . Similar analysis also applies to L ′ 2 and minimizing L′2 requires Z2 being full-rank. Therefore, BW-based methods shown in Eqn. 4 do not impose whitening constraints on the embedding as formulated in Eqn. 3, but they only require the embedding to be full-rank. This full-rank constraint is also sufficient to avoid dimensional collapse for embedding, even though it is a weaker constraint than whitening. Our analysis further implies that whitening loss in its symmetric formulation (Eqn. 5) can be decomposed into two asymmetric losses (Eqn. 6), where each asymmetric loss requires an online network to match a whitened target. This mechanism provides a pivot connecting to other methods, and a clue to understand why PCA whitening fails to avoid dimensional collapse for SSL. Connection to Asymmetric Methods. The asymmetric formulation of whitening loss shown in Eqn. 7 bears resemblance to those asymmetry methods without negative pairs, e.g., SimSiam [8]. In these methods, an extra predictor is incorporated and the stop-gradient is essential for avoid collapse. In particular, SimSiam uses the objective as: L(X) = 1 m ‖Pθp(·) ◦ Z1 − (Z2)st‖2F + 1 m ‖Pθp(·) ◦ Z2 − (Z1)st‖2F , (8) where Pθp(·) is the predictor with learnable parameters θp. By contrasting Eqn. 7 and the first term of Eqn. 8, we find that: 1) BW-based whitening loss ensures a whitened target Ẑ2, while SimSiam does not put constraint on the target Z2; 2) SimSiam uses a learnable predictor Pθp(·), which is shown to empirically avoid collapse by matching the rank of the covariance matrix by back-propagation [40], while BW-based whitening loss has an implicit predictor φ(Z1) depending on the input itself, which is a full-rank matrix by design. Based on this analysis, we find that BW-based whitening loss can surely avoid collapse if the loss converges well, while Simsian can not provide such a guarantee in avoiding collapse. Similar analysis also applies to BYOL [16], except that BYOL uses a momentum target network for providing target signal. Connection to Soft Whitening. VICReg [2] also encourages whitened embedding produced from different views, but by imposing a whitening penalty as a regularization on the embedding, which is called soft whitening. In particular, given a mini-batch input, the objective of VICReg is as follows4: L(X) = 1 m ‖Z1 − Z2‖2F + α 2∑ i=1 (‖ 1 m ZiZ T i − λI‖2F ), (9) where α ≥ 0 is the penalty factor. Similarly, we can use a proxy loss for VICReg and considering its term corresponding to optimizing Z1 only (similar to Eqn. 7), we have: L ′ V ICReg(X) = 1 m ‖Z1 − (Z2)st‖2F + α‖ 1 m Z1Z T 1 − λI‖2F . 
Based on this formulation, we observe that VICReg requires the embedding Z1 to be whitened through 1) the additional whitening penalty and 2) fitting the (expected) whitened targets Z2. By contrasting Eqns. 7 and 10, we highlight that the so-called hard whitening methods, like W-MSE [12], only impose full-rank constraints on the embedding, while soft whitening methods indeed impose whitening constraints. A similar analysis also applies to Barlow Twins [45], except that the whitening/decorrelation penalty is imposed on the cross-covariance matrix of the embeddings from different views. Connection to Other Non-contrastive Methods. SwAV [4], a clustering-based method, uses a "swapped" prediction mechanism where the cluster assignment (code) of a view is predicted from the representation of another view, by minimizing the following objective: L(X) = ℓ(C^T Z1, (Q2)_st) + ℓ(C^T Z2, (Q1)_st). (11) Here, C is the prototype matrix learned by back-propagation, Qi is the predicted code with equal-partition and high-entropy constraints, and SwAV uses the cross-entropy loss as ℓ(·, ·) to match the distributions. The constraints on Qi are approximately satisfied during optimization by using the iterative Sinkhorn-Knopp algorithm conditioned on the input C^T Zi. Note that SwAV explicitly uses stop-gradient when it calculates the target Qi. By contrasting Eqn. 7 and the first term of Eqn. 11, we find that: 1) SwAV can be viewed as an online network matching a target with constraints, like BW-based whitening loss, even though the constraints imposed on the targets differ; 2) from the perspective of asymmetric structure, SwAV indeed uses a linear predictor C^T that is also learned by back-propagation, like SimSiam, while BW-based whitening loss has an implicit predictor φ(Z1) depending on the input itself. A similar analysis also applies to DINO [5], which further simplifies the formulation of SwAV, from the perspective of knowledge distillation, by removing the prototype matrix and directly matching the output of the other view. DINO uses centering and sharpening operations to impose the constraints on the target (the output of the other view). One significant difference between DINO and whitening loss is that DINO uses population statistics for centering, calculated by a moving average, while whitening loss uses mini-batch statistics for whitening. Why PCA Whitening Fails to Avoid Dimensional Collapse? Based on Eqn. 7, we note that whitening loss can favorably provide full-rank constraints on the embedding under the condition that the online network can match the whitened targets well. We experimentally find that PCA-based whitening loss provides a volatile sequence of whitened targets during training, as shown in Figure 4(a). It is difficult for the online network to match such a target signal with significant variation, resulting in minimal decrease in the whitening loss (see Figure 2). Furthermore, we observe that PCA-based whitening loss also has significantly varying whitening-matrix sequences {φt(·)} (Figure 4(b)), even given the same input data. This coincides with the observation in [16, 8], where an unstable predictor results in significantly degraded performance. Our observations are also in accordance with the arguments in [22, 23] that PCA-based BW exhibits significantly large stochasticity.
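One concrete, toy-scale illustration of such variability (an illustration of the stochasticity point above, not an experiment from the paper): the eigendecomposition underlying both transformations is only defined up to the signs (and, for repeated eigenvalues, the ordering) of the eigenvectors, and flipping an eigenvector's sign leaves the ZCA-whitened output unchanged but changes the PCA-whitened output.

```python
import torch

torch.manual_seed(0)
z = torch.randn(8, 32)                         # a toy (d, m) embedding
z = z - z.mean(dim=1, keepdim=True)

lam, u = torch.linalg.eigh(z @ z.t() / z.shape[1])
lam_isqrt = torch.diag(lam.clamp_min(1e-5).rsqrt())

u_flip = u.clone()
u_flip[:, 0] = -u_flip[:, 0]                   # an equally valid eigenbasis

pca_a = lam_isqrt @ u.t() @ z                  # Phi_PCA = Lambda^{-1/2} U^T
pca_b = lam_isqrt @ u_flip.t() @ z
zca_a = u @ lam_isqrt @ u.t() @ z              # Phi_ZCA = U Lambda^{-1/2} U^T
zca_b = u_flip @ lam_isqrt @ u_flip.t() @ z

print((pca_a - pca_b).abs().max())             # large: the PCA-whitened target changes
print((zca_a - zca_b).abs().max())             # ~0: the ZCA-whitened target is unchanged
```

Both outputs are valid whitenings of z; the difference lies only in how stable the target is across iterations.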
We note that ZCA whitening can provide relatively stable sequences of whitened targets and whitening matrices during training (Figure 4), which ensures stable training for SSL. This is likely due to the property of ZCA-based whitening that it minimizes the total squared distance between the original and whitened variables [26, 22]. Why Whitened Output is not a Good Representation? A whitened output removes the correlation among axes [21] and ensures the examples are scattered in a spherical distribution [12], which bears resemblance to contrastive learning, where different examples are pulled apart. We conduct experiments to compare SimCLR [6], BYOL [16], VICReg [2] and W-MSE [12], and monitor the cosine similarity of all negative pairs, as well as the stable-rank and rank, during training. From Figure 5, we find that all methods can achieve a high rank on the encoding. This is driven by the improved extent of whitening on the embedding. Furthermore, we observe that the negative-pair cosine similarity decreases during training, while the stable-rank increases, for all methods. This observation suggests that a representation with a stronger extent of whitening is more likely to have lower similarity among different examples. We further conduct experiments to validate this argument, using VICReg with a varying penalty factor α (Eqn. 10) to adjust the extent of whitening on the embedding (Figure 5(d)). Therefore, a whitened output leads to a state in which all examples have dissimilar features. This state can break the potential manifold that examples of the same class belong to, which makes learning more difficult [17]. A similar analysis for contrastive learning is also shown in [6], where classes represented by the projected output (embedding) are not well separated, compared to the encoding. 4 Channel Whitening with Random Group Partition One main weakness of BW-based whitening loss is that the whitening operation requires the number of examples (mini-batch size) m to be larger than the number of channels d, to avoid numerical instability (an empirical setting is m = 2d, which obtains good performance, as shown in [12, 21]). This requirement limits its usage in scenarios where a large batch of training data cannot fit into memory. Based on the previous analysis, the whitening loss can be viewed as an online learner matching a whitened target with all singular values being one. We note that the key to whitening loss is that it conducts a transformation φ : Z → Ẑ, ensuring that the singular values of Ẑ are one. We thus propose channel whitening (CW), which ensures that the examples in a mini-batch are orthogonal: Centering: Zc = (I − (1/d) 1·1^T) Z, Whitening: Ẑ = Zc Φ, (12) where Φ ∈ R^{m×m} is the ‘whitening matrix’ derived from the corresponding ‘covariance matrix’ Σ′ = (1/(d−1)) Zc^T Zc. In our implementation, we use ZCA whitening to obtain Φ. CW ensures that the examples in a mini-batch are orthogonal to each other, with Ẑ^T Ẑ = (1/(d−1)) I. This means CW has the same ability as BW in avoiding dimensional collapse for SSL, by providing a target Ẑ whose singular values are one. More importantly, one significant advantage of CW is that it obtains numerical stability when the batch size is small, since the condition d > m can be ensured by design (e.g., we can set the channel number d of the embedding to be larger than the batch size m). Besides, we find that CW can amplify the full-rank constraints on the embedding by dividing the channels/neurons into random groups, as we will illustrate.
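The following is a minimal sketch of the channel-whitening step in Eqn. 12; the function name and the eps stabilizer are illustrative, and the authors provide their full algorithm and PyTorch-style code in the supplementary materials.

```python
import torch

def channel_whiten(z, eps=1e-5):
    """Eqn. 12 sketch: z is (d, m) with d > m. Center across channels, then whiten the
    m x m 'covariance' so that the m examples in the mini-batch become orthogonal."""
    d, m = z.shape
    zc = z - z.mean(dim=0, keepdim=True)                  # (I - (1/d) 1 1^T) Z
    cov = zc.t() @ zc / (d - 1)                           # Sigma' in R^{m x m}
    lam, u = torch.linalg.eigh(cov)
    phi = u @ torch.diag((lam + eps).rsqrt()) @ u.t()     # ZCA-style whitening matrix
    return zc @ phi                                       # whitened output, examples orthogonal
```

Because the matrix being decomposed here is m × m, the operation stays well conditioned as long as d > m, which is exactly the design condition stated above.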
Random Group Partition. Given the embedding Z ∈ R^{d×m}, d > m, we divide it into g ≥ 1 groups {Z(i) ∈ R^{(d/g)×m}}, i = 1, ..., g, where we assume that d is divisible by g and ensure d/g > m. We then perform CW on each Z(i), i = 1, ..., g. Note that the ranks of Z and Z(i) are all at most m. Therefore, CW with group partition provides g constraints of Rank(Z(i)) = m on the embedding, compared to CW without group partition, which provides only one constraint of Rank(Z) = m. Although CW with group partition can provide more full-rank constraints for mini-batch data, we find that it can also make the population data correlated if the group partition stays the same throughout training, which decreases the rank and does not improve accuracy in our experiments (Figure 6). We find that random group partition, which randomly divides the channels/neurons into groups for each iteration (mini-batch), can alleviate this issue and obtain improved performance (Figure 6). We call our method channel whitening with random group partition (CW-RGP), and provide the full algorithm and PyTorch-style code in the supplementary materials (a simplified sketch is also given below). We note that Hua et al. [21] use a similar idea for BW, called Shuffled-DBN. However, Shuffled-DBN cannot effectively amplify the full-rank constraints by using more groups, since BW-based methods require m > d/g to avoid numerical instability. We further show that CW-RGP works remarkably better than Shuffled-DBN in the subsequent experiments. We attribute this result to the ability of CW-RGP to amplify the full-rank constraints by using groups. 4.1 Experiments for Empirical Study
Table 2: Comparisons on ImageNet linear classification. All are based on a ResNet-50 encoder. The table is mostly inherited from [8].
Method | Batch size | 100 eps | 200 eps
SimCLR [6] | 4096 | 66.5 | 68.3
MoCo v2 [7] | 256 | 67.4 | 69.9
BYOL [16] | 4096 | 66.5 | 70.6
SwAV [4] | 4096 | 66.5 | 69.1
SimSiam [8] | 256 | 68.1 | 70.0
W-MSE 4 [12] | 4096 | 69.4 | -
Zero-CL [48] | 1024 | 68.9 | -
BYOL [16] (repro.) | 512 | 66.1 | 69.2
SwAV [4] (repro.) | 512 | 65.8 | 67.9
W-MSE 4 [12] (repro.) | 512 | 66.7 | 67.9
CW-RGP 4 (ours) | 512 | 69.7 | 71.0
In this section, we conduct experiments to validate the effectiveness of our proposed CW-RGP. We evaluate the performance of CW-RGP for classification on CIFAR-10, CIFAR-100 [28], STL-10 [10], Tiny-ImageNet [29] and ImageNet [11]. We also evaluate the effectiveness of a CW-RGP pre-trained model in transfer learning. We run the experiments on one workstation with 4 GPUs. For more details of the implementation and training protocol, please refer to the supplementary materials. Evaluation for Classification. We first conduct experiments on small and medium-sized datasets (including CIFAR-10, CIFAR-100, STL-10 and Tiny-ImageNet), strictly following the setup of the W-MSE paper [12]. Our CW-RGP inherits the advantages of W-MSE in exploiting different views. CW-RGP 2 and CW-RGP 4 indicate our method with s = 2 and s = 4 positive views extracted per image, respectively, similar to W-MSE [12]. The baseline results shown in Table 1 are partly inherited from [12], except that we reproduce certain baselines under the same training and evaluation settings as in [12] (some differing hyper-parameter settings are given in the supplementary materials). We observe that CW-RGP obtains the highest accuracy on almost all the datasets except Tiny-ImageNet. Besides, CW-RGP with 4 views is generally better than with 2, similar to W-MSE. These results show that CW-RGP is a competitive SSL method.
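As referenced above, here is a simplified sketch of channel whitening with random group partition, reusing the channel_whiten function from the Eqn. 12 sketch in Section 4; the group count g and the assertion are illustrative, and the authors' full PyTorch-style code is in their supplementary materials.

```python
import torch

def cw_rgp(z, g=2, eps=1e-5):
    """Channel whitening with a random group partition that is re-drawn every iteration.
    channel_whiten(...) is the Eqn. 12 sketch given earlier in this section."""
    d, m = z.shape
    assert d % g == 0 and d // g > m, "each group needs more channels than examples"
    out = torch.empty_like(z)
    for idx in torch.randperm(d).chunk(g):      # random partition of the d channels
        out[idx] = channel_whiten(z[idx], eps)
    return out
```

Each group contributes its own Rank(Z(i)) = m constraint, and re-drawing the partition at every mini-batch is what avoids the fixed-group correlation issue described above.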
We also confirm that CW with random group partition obtains higher performance than BW, both with and without random group partition, by comparing CW-RGP to Shuffled-DBN and W-MSE. We then conduct experiments on large-scale ImageNet, strictly following the setup of the SimSiam paper [8]. The baseline results shown in Table 2 are mostly reported in [8], except that the result of W-MSE 4 is from the W-MSE paper [12] and we reproduce BYOL [16], SwAV [4] and W-MSE 4 [12] under a batch size of 512, based on the same training and evaluation settings as in [8], for fairness. CW-RGP 4 is trained with a batch size of 512 and achieves the highest accuracy among all methods under both 100- and 200-epoch training. We find that our CW-RGP can also work well when combined with the whitening penalty used in VICReg. Note that we also try a batch size of 256 under 100-epoch training, which achieves a top-1 accuracy of 69.5%. Transfer to downstream tasks. We examine the representation quality by transferring our model to other tasks, including VOC [13] object detection and COCO [32] object detection and instance segmentation. We use the detection codebase from MoCo [19] to produce the CW-RGP results, keeping everything identical except the pre-trained model. The baseline results shown in Table 3 are mostly inherited from [8]. We clearly observe that CW-RGP performs better than, or on par with, these state-of-the-art approaches on COCO object detection and instance segmentation, which shows the great potential of CW-RGP in transferring to downstream tasks. Ablation for Random Group Partition. We also conduct experiments to show the advantages of random group partition for channel whitening. We use ‘CW’, ‘CWGP’ and ‘CW-RGP’ to indicate channel whitening without group partition, with group partition, and with random group partition, respectively. We further consider the setups with s = 2 and s = 4 positive views. We use the same setup as in Table 1 and show the results in Table 4. We have a similar observation as in Figure 6: CW with random group partition improves the performance. Ablation for Batch Size. Here, we conduct experiments to empirically show the advantages of CW over BW in terms of stability under different batch sizes. We train CW and BW on ImageNet-100, using batch sizes in {32, 64, 128, 256}. Figure 7 shows the results. We find that CW is more robust for small-batch training. 5 Conclusion and Limitation In this paper, we investigated whitening loss for SSL and clarified several interesting phenomena based on our analysis framework. We showed that batch whitening (BW) based methods only require the embedding to be full-rank, which is also a sufficient condition for collapse avoidance. We proposed channel whitening with random group partition (CW-RGP), which is theoretically well motivated for avoiding collapse and has been validated empirically for learning good representations. Limitation. Our work only shows how to avoid collapse by using whitening loss, but does not explicitly show what the extent of whitening of a good representation should be. We note that a concurrent work addresses this problem by connecting the eigenspectrum of a representation to a power law [15], and shows that the coefficient of the power law is a strong indicator of the effectiveness of the representation. We believe our work can be further extended when combined with the analyses from [15].
Besides, our work does not answer how the projector affects the extent of whitening of the encoding versus the embedding [18], which is important for answering why the encoding is usually used as the representation for evaluation, rather than the whitened output or the embedding. Our attempts, shown in the supplementary materials, provide preliminary results but do not offer an answer to this question. Acknowledgement This work was partially supported by the National Key Research and Development Plan of China under Grant 2021ZD0112901, the National Natural Science Foundation of China (Grant No. 62106012), and the Fundamental Research Funds for the Central Universities.
1. What is the focus of the paper, and what are the key contributions of the proposed channel whitening method? 2. What are the strengths of the paper regarding its organization, writing, and experimentation? 3. What are the weaknesses or limitations of the paper, particularly in terms of comparisons with other methods? 4. Do you have any questions or concerns regarding the classification of whitening loss-based methods or the absence of certain methods in specific tables? 5. How do you assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper gives a thorough investigation into whitening loss applied in self-supervised learning. Based on their analysis, the authors propose a channel whitening method named CW-RGP. From their experiments on various datasets, CW-RGP gets the SOTA in most cases. Strengths And Weaknesses Strengths: 1, The writing is nicely done and the paper is organized well for understanding. 2, The experiments are quite sufficient, including the analysis experiments and comparison experiments. 3, The understanding of whitening loss is convincing and there are also some new indicators proposed for future study. Weakness: Overall, I didn't see many disadvantages in this paper; I'm just curious about several points below. 1, The Barlow Twins and VICReg methods are not present in all tables. In the related work, the authors have classified them as soft whitening, so I think some comparison with them should not be neglected. 2, Although Barlow Twins might get even better results than CW-RGP (from my own earlier experiments), I don't think this is very important. This paper mainly studies the whitening loss, thus I think the comparison with whitening-loss-based methods (such as W-MSE) is more important. Questions I wonder whether the Barlow Twins method can be better. Also, grouping the experiments based on their techniques would make it easier to show the contribution, especially since this paper belongs to the whitening-loss branch. Finally, the comparison methods in Tables 1, 2, and 3 are a bit different; in particular, I was looking for the W-MSE results in Table 3. Can you explain a bit? Limitations None
NIPS
Title An Investigation into Whitening Loss for Self-supervised Learning Abstract A desirable objective in self-supervised learning (SSL) is to avoid feature collapse. Whitening loss guarantees collapse avoidance by minimizing the distance between embeddings of positive pairs under the conditioning that the embeddings from different views are whitened. In this paper, we propose a framework with an informative indicator to analyze whitening loss, which provides a clue to demystify several interesting phenomena as well as a pivoting point connecting to other SSL methods. We reveal that batch whitening (BW) based methods do not impose whitening constraints on the embedding, but they only require the embedding to be full-rank. This full-rank constraint is also sufficient to avoid dimensional collapse. Based on our analysis, we propose channel whitening with random group partition (CW-RGP), which exploits the advantages of BW-based methods in preventing collapse and avoids their disadvantages requiring large batch size. Experimental results on ImageNet classification and COCO object detection reveal that the proposed CW-RGP possesses a promising potential for learning good representations. The code is available at https://github.com/winci-ai/CW-RGP. 1 Introduction Self-supervised learning (SSL) has made significant progress over the last several years [1, 19, 6, 16, 8], almost reaching the performance of supervised baselines on many downstream tasks [33, 24, 35]. Several recent approaches rely on a joint embedding architecture in which a dual pair of networks are trained to produce similar embeddings for different views of the same image [8]. Such methods aim to learn representations that are invariant to transformation of the same input. One main challenge with the joint embedding architectures is how to prevent a collapse of representation, in which the two branches ignore the inputs and produce identical and constant output representations [6, 8]. One line of work uses contrastive learning methods that attract different views from the same image (positive pairs) while pull apart different images (negative pairs), which can prevent constant outputs from the solution space [43]. While the concept is simple, these methods need large batch size to obtain a good performance [19, 6, 37]. Another line of work tries to directly match the positive targets without introducing negative pairs. A seminal approach, BYOL [16], shows that an extra predictor and momentum is essential for representation learning. SimSiam [8] further generalizes [16] by empirically showing that stop-gradient is essential for preventing trivial solutions. Recent works generalize the collapse problem into dimensional collapse [21, 25]2 where the embedding vectors only span a lower-dimensional subspace and would be highly correlated. Therefore, the embedding vector dimensions would vary together and contain redundant information. To prevent the dimensional ∗equal contribution corresponding author (huangleiAI@buaa.edu.cn). This work was partially done while Lei Huang was a visiting scholar at Mohamed bin Zayed University of Artificial Intelligence, UAE. 2This collapse is also referred to informational collapse in [2]. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). collapse, whitening loss is proposed by only minimizing the distance between embeddings of positive pairs under the condition that embeddings from different views are whitened [12, 21]. 
A typical way is using batch whitening (BW) and imposing the loss on the whitened output [12, 21], which obtains promising results. Although whitening loss has theoretical guarantee in avoiding collapse, we experimentally observe that this guarantee depends on which kind of whitening transformation [26] is used in practice (see Section 3.2 for details). This interesting observation challenges the motivations of whitening loss for SSL. Besides, the motivation of whitening loss is that the whitening operation can remove the correlation among axes [21] and a whitened representation ensures the examples scattered in a spherical distribution [12]. Based on this argument, one can use the whitened output as the representation for downstream tasks, but it is not used in practice. To this end, this paper investigates whitening loss and tries to demystify these interesting observations. Our contributions are as follows: • We decompose the symmetric formulation of whitening loss into two asymmetric losses, where each asymmetric loss requires an online network to match a whitened target. This mechanism provides a pivoting point connecting to other methods, and a way to understand why certain whitening transformation fails to avoid dimensional collapse. • Our analysis shows that BW based methods do not impose whitening constraints on the embedding, but they only require the embedding to be full-rank. This full-rank constraint is also sufficient to avoid dimensional collapse. • We propose channel whitening with random group partition (CW-RGP), which exploits the advantages of BW-based method in preventing collapse and avoids their disadvantages requiring large batch size. Experimental results on ImageNet classification and COCO object detection show that CW-RGP has promising potential in learning good representation. 2 Related Work A desirable objective in self-supervised learning is to avoid feature collapse. Contrastive learning prevents collapse by attracting positive samples closer, and spreading negative samples apart [43, 44]. In these methods, negative samples play an important role and need to be well designed [34, 1, 20]. One typical mechanism is building a memory bank with a momentum encoder to provide consistent negative samples, proposed in MoCos [19], yielding promising results [19, 7, 9, 30]. Other works include SimCLR [6] addresses that more negative samples in a batch with strong data augmentations perform better. Contrastive methods require large batch sizes or memory banks, which tends to be costly, promoting the questions whether negative pairs is necessary. Non-contrastive methods aim to accomplish SSL without introducing negative pairs explicitly [3, 4, 31, 16, 8]. One typical way to avoid representational collapse is the introduction of asymmetric network architecture. BYOL [16] appends a predictor after the online network and introduce momentum into the target network. SimSiam [8] further simplifies BYOL by removing the momentum mechanism, and shows that stop-gradient to target network serves as an alternative approximation to the momentum encoder. Other progress includes an asymmetric pipeline with a self-distillation loss for Vision Transformers [5]. 
It remains not clear how the asymmetric network avoids collapse without negative pairs, leaving the debates on batch normalization (BN) [14, 41, 36] and stop-gradient [8, 46], even though preliminary works have attempted to analyze the training dynamics theoretical with certain assumptions [40] and build a connection between asymmetric network with contrastive learning methods [39]. Our work provides a pivoting point connecting asymmetric network to profound whitening loss in avoiding collapse. Whitening loss has theoretical guarantee in avoiding collapse by minimizing the distance of positive pairs under the conditioning that the embeddings from different views are whitened [45, 12, 21, 2]. One way to obtain whitened output is imposing a whitening penalty as regularization on embedding— the so-called soft whitening, which is proposed in Barlow Twins [45], VICReg [2] and CCA-SSG [47]. Another way is using batch whitening (BW) [22]—the so-called hard whitening, which is used in W-MSE [12] and Shuffled-DBN [21]. We propose a different hard whitening method—channel whitening (CW) that has the same function that ensures all the singular values of transformed output being one for avoiding collapse. But CW is more numerical stable and works better when batch size is small, compared to BW. Furthermore, our CW with random group partition (CW-RGP) can effectively control the extent of constraint on embedding and obtain better performance in practice. We note that a recent work ICL [48] proposes to decorrelate instances, like CW but having several significant differences in technical details. ICL uses "stop-gradient" for the whitening matrix, while CW requires back-propagation through the whitening transformation. Besides, ICL uses extra pre-conditioning on the covariance and whitening matrices, which is essential for the numerical stability, while CW does not use extra pre-conditioning and can work well since it encourages the embedding to be full-rank. 3 Exploring Whitening Loss for SSL 3.1 Preliminaries Let x denote the input sampled uniformly from a set of images D, and T denote the set of data transformations available for augmentation. We consider the Siamese network fθ(·) parameterized by θ. It takes as input two randomly augmented views, x1 = T1(x) and x2 = T2(x), where T1,2 ∈ T. The network fθ(·) is trained with an objective function that minimizes the distance between embeddings obtained from different views of the same image: L(x, θ) = Ex∼D, T1,2∼T ` ( fθ(T1(x)), fθ(T2(x)) ) . (1) where `(·, ·) is a loss function. In particular, the Siamese network usually consists of an encoder Eθe(·) and a projector Gθg (·). Their outputs h = Eθe(T (x)) and z = Gθg (h) are referred to as encoding and embedding, respectively. We summarize the notations and use the corresponding capital letters denoting mini-batch data in Figure 1. Under this notation, we have fθ(·) = Gθg (Eθe(·)) with learnable parameters θ = {θe, θg}. The encoding h is usually used as representation for evaluation by either training a linear classifier [19] or transferring to downstream tasks. This is due to that h is shown to obtain significantly better performance than the embedding z [6, 8]. The mean square error (MSE) of L2−normalized vectors is usually used as the loss function [8]: `(z1, z2) = ‖ z1 ‖z1‖2 − z2 ‖z2‖2 ‖22, (2) where ‖ · ‖2 denotes the L2 norm. This loss is also equivalent to the negative cosine similarity, up to a scale of 12 and an optimization irrelevant constant. Collapse and Whitening Loss. While minimizing Eqn. 
1, a trivial solution known as collapse could occur such that fθ(x) ≡ c, ∀x ∈ D. The state of collapse will provide no gradients for learning and offer no information for discrimination. Moreover, a weaker collapse condition called dimensional collapse can be easily arrived, for which the projected features collapse into a low-dimensional manifold. As illustrated in [21], dimensional collapse is associated with strong correlations between axes, which motivates the use of whitening method in avoiding the dimensional collapse. The general idea of whitening loss [12] is to minimize Eqn. 1, under the condition that embeddings from different views are whitened, which can be formulated as3: min θ L(x; θ) = Ex∼D, T1,2∼T `(z1, z2), s.t. cov(zi, zi) = I, i ∈ {1, 2}. (3) Whitening loss provides theoretical guarantee in avoiding (dimensional) collapse, since the embedding is whitened with all axes decorrelated [12, 21]. While it is difficult to directly solve the problem of Eqn. 3, Ermolov et al. [12] propose to whiten the mini-batch embedding Z ∈ Rdz×m using batch whitening (BW) [22, 38] and impose the loss on the whitened output Ẑ ∈ Rdz×m, given the mini-batch inputs X with size of m, as follows: min θ L(X; θ) = EX∼D, T1,2∼T ‖Ẑ1 − Ẑ2‖2F with Ẑi = Φ(Zi), i ∈ {1, 2}, (4) where Φ(·) denotes the whitening transformation over mini-batch data. 3The dual view formulation can be extended to s different views, as shown in [12]. Whitening Transformations. There are an infinite number of possible whitening matrices, as shown in [26, 22], since any whitened data with a rotation is still whitened. For simplifying notation, we assume Z is centered by Z := Z(I − 1m11 T ). Ermolov et al. [12] propose W-MSE that uses Cholesky decomposition (CD) whitening: ΦCD(Z) = L−1Z in Eqn. 4, where L is a lower triangular matrix from the Cholesky decomposition, with LLT = Σ. Here Σ = 1mZZ T is the covariance matrix of the embedding. Hua et al. [21] use zero-phase component analysis (ZCA) whitening [22] in Eqn. 4: ΦZCA = UΛ − 12UT , where Λ = diag(λ1, . . . , λdz ) and U = [u1, ...,udz ] are the eigenvalues and associated eigenvectors of Σ, i.e., UΛUT = Σ. Another famous whitening is principal components analysis (PCA) whitening: ΦPCA = Λ− 1 2UT [26, 22]. 3.2 Empirical Investigation on Whitening Loss In this section, we conduct experiments to investigate the effects of different whitening transformations Φ(·) used in Eqn. 4 for SSL. Besides, we investigate the performances of different features (including encoding H, embedding Z and the whitened output Ẑ) used as representation for evaluation. For illustration, we first define the rank and stable-rank [42] of a matrix as follows: Definition 1. Given a matrix A ∈ Rd×m, d ≤ m, we denote {λ1, ..., λd} the singular values of A in a descent order with convention λ1 > 0. The rank of A is the number of its non-zero singular values, denoted as Rank(A) = ∑d i=1 I(λi > 0), where I(·) is the indicator function. The stable-rank of A is denoted as r(A) = ∑d i=1 λi λ1 . By definition, Rank(A) can be a good indicator to evaluate the extent of dimensional collapse of A, and r(A) can be an indicator to evaluate the extent of whitening of A. It can be demonstrated that r(A) ≤ Rank(A) ≤ d [42]. Note that if A is fully whitened with covariance matrix AAT = mI, we have r(A) = Rank(A) = d. 
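As a quick illustration of Definition 1, the sketch below computes the two indicators from singular values; the relative tolerance used to decide whether a singular value counts as non-zero is an assumption of this sketch, since the definition above counts exact zeros.

```python
import torch

def rank_and_stable_rank(a, tol=1e-6):
    """Rank = number of (numerically) non-zero singular values;
    stable-rank = sum of singular values divided by the largest one."""
    s = torch.linalg.svdvals(a)                 # singular values in descending order
    rank = int((s > tol * s[0]).sum())
    stable_rank = float(s.sum() / s[0])
    return rank, stable_rank

# Sanity check: a fully whitened A with A A^T = m * I gives r(A) = Rank(A) = d.
d, m = 16, 64
q = torch.linalg.qr(torch.randn(m, d)).Q        # (m, d) with orthonormal columns
a = q.t() * (m ** 0.5)                          # rows orthonormal, scaled by sqrt(m)
print(rank_and_stable_rank(a))                  # -> (16, ~16.0)
```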
We also define normalized rank as R̂ank(A) = Rank(A)d and normalized stable-rank as r̂(A) = r(A)d , for comparing the extent of dimensional collapse and whitening of matrices with different dimensions, respectively. PCA Whitening Fails to Avoid Dimensional Collapse. We compare the effects of ZCA, CD, PCA transformations for whitening loss, evaluated on CIFAR-10 using the standard setup for SSL (see Section 4.1 for details). Besides, we also provide the result of batch normalization (BN) that only performs standardization without decorrelating the axes, and the ‘Plain’ method that imposes the loss directly on embedding. From Figure 2, we observe that naively training a Siamese network (‘Plain’) results in collapse both on the embedding (Figure 2(c)) and encoding (Figure 2(d)), which significantly hampers the performance (Figure 2(a)), although its training loss becomes close to zero (Figure 2(b)). We also observe that an extra BN imposed on the embedding prevents collapse to a point. However, it suffers from the dimensional collapse where the rank of embedding and encoding are significantly low, which also hampers the performance. ZCA and CD whitening both maintain high rank of embedding and encoding by decorrelating the axes, ensuring high linear evaluation accuracy. However, we note that PCA whitening shows significantly different behaviors: PCA whitening cannot decrease the loss and even cannot avoid the dimensional collapse, which also leads to significantly downgraded performance. This interesting observation challenges the motivations of whitening loss for SSL. We defer the analyses and illustration in Section 3.3. Whitened Output is not a Good Representation. As introduced before, the motivation of whitening loss for SSL is that the whitening operation can remove the correlation among axes [21] and a whitened representation ensures that the examples scattered in a spherical distribution [12], which is sufficient to avoid collapse. Based on this argument, one should use the whitened output Ẑ as the representation for downstream tasks, rather than the encoding H that is commonly used. This raises questions that whether H is well whitened and whether the whitened output is a good feature. We conduct experiments to compare the performances of whitening loss, when using H, Z and Ẑ as representations for evaluation respectively. The results are shown in Figure 3. We observe that using whitened output Ẑ as a representation has significantly worse performance than using H. Furthermore, we find that the normalized stable rank of H is significantly smaller than 100%, which suggests that H is not well whitened. These results show that the whitened output could not be a good representation. 3.3 Analysing Decomposition of Whitening Loss For clarity, we use the mini-batch input with size of m. Given one mini-batch input X with two augmented views, Eqn. 4 can be formulated as: L(X) = 1 m ‖Ẑ1 − Ẑ2‖2F . (5) Let us consider a proxy loss described as: L ′ (X) = 1 m ‖Ẑ1 − (Ẑ2)st‖2F︸ ︷︷ ︸ L′1 + 1 m ‖(Ẑ1)st − Ẑ2‖2F︸ ︷︷ ︸ L′2 , (6) where (·)st indicates the stop-gradient operation. It is easy to demonstrate that ∂L∂θ = ∂L ′ ∂θ (see supplementary materials for proof). That is, the optimization dynamics of L is equivalent to L′ . By looking into the first term of Eqn. 6, we have: L ′ 1 = 1 m ‖φ(Z1)Z1 − (Ẑ2)st‖2F . (7) Here, we can view φ(Z1) as a predictor that depends on Z1 during forward propagation, and Ẑ2 as a whitened target with r(Ẑ2) = Rank(Ẑ2) = dz . 
In this way, we find that minimizing L ′ 1 only requires the embedding Z1 being full-rank with Rank(Ẑ1) = dz , as stated by following proposition. Proposition 1. Let A = argminZ1L ′ 1(Z1). We have that A is not an empty set, and ∀Z1 ∈ A, Z1 is full-rank. Furthermore, for any {σi}dzi=1 with σ1 ≥ σ2 ≥, ..., σdz > 0, we construct à = {Z1|Z1 = U2 diag(σ1, σ2, ..., σdz ) V T 2 , where U2 ∈ Rdz×dz and V2 ∈ Rm×dz are from the singular value decomposition of Ẑ2, i.e., U2( √ mI)VT2 = Ẑ2. When we use ZCA whitening, we have à ⊆ A. The proof is shown in supplementary materials. Proposition 1 states that there are infinity matrix with full-rank that is the optimum when minimizing L′1 w.r.t. Z1. Therefore, minimizing L ′ 1 only requires the embedding Z1 being full-rank with Rank(Ẑ1) = dz , and does not necessarily impose the constraints on Z1 to be whitened with r(Z1) = dz . Similar analysis also applies to L ′ 2 and minimizing L′2 requires Z2 being full-rank. Therefore, BW-based methods shown in Eqn. 4 do not impose whitening constraints on the embedding as formulated in Eqn. 3, but they only require the embedding to be full-rank. This full-rank constraint is also sufficient to avoid dimensional collapse for embedding, even though it is a weaker constraint than whitening. Our analysis further implies that whitening loss in its symmetric formulation (Eqn. 5) can be decomposed into two asymmetric losses (Eqn. 6), where each asymmetric loss requires an online network to match a whitened target. This mechanism provides a pivot connecting to other methods, and a clue to understand why PCA whitening fails to avoid dimensional collapse for SSL. Connection to Asymmetric Methods. The asymmetric formulation of whitening loss shown in Eqn. 7 bears resemblance to those asymmetry methods without negative pairs, e.g., SimSiam [8]. In these methods, an extra predictor is incorporated and the stop-gradient is essential for avoid collapse. In particular, SimSiam uses the objective as: L(X) = 1 m ‖Pθp(·) ◦ Z1 − (Z2)st‖2F + 1 m ‖Pθp(·) ◦ Z2 − (Z1)st‖2F , (8) where Pθp(·) is the predictor with learnable parameters θp. By contrasting Eqn. 7 and the first term of Eqn. 8, we find that: 1) BW-based whitening loss ensures a whitened target Ẑ2, while SimSiam does not put constraint on the target Z2; 2) SimSiam uses a learnable predictor Pθp(·), which is shown to empirically avoid collapse by matching the rank of the covariance matrix by back-propagation [40], while BW-based whitening loss has an implicit predictor φ(Z1) depending on the input itself, which is a full-rank matrix by design. Based on this analysis, we find that BW-based whitening loss can surely avoid collapse if the loss converges well, while Simsian can not provide such a guarantee in avoiding collapse. Similar analysis also applies to BYOL [16], except that BYOL uses a momentum target network for providing target signal. Connection to Soft Whitening. VICReg [2] also encourages whitened embedding produced from different views, but by imposing a whitening penalty as a regularization on the embedding, which is called soft whitening. In particular, given a mini-batch input, the objective of VICReg is as follows4: L(X) = 1 m ‖Z1 − Z2‖2F + α 2∑ i=1 (‖ 1 m ZiZ T i − λI‖2F ), (9) where α ≥ 0 is the penalty factor. Similarly, we can use a proxy loss for VICReg and considering its term corresponding to optimizing Z1 only (similar to Eqn. 7), we have: L ′ V ICReg(X) = 1 m ‖Z1 − (Z2)st‖2F + α‖ 1 m Z1Z T 1 − λI‖2F . 
(10) Based on this formulation, we observe that VICReg requires embedding Z1 to be whitened by, 1) the additional whitening penalty, and 2) fitting the (expected) whitened targets Z2. By contrasting Eqns. 7 and 10, we highlight that the so-called hard whitening methods, like W-MSE [12], only impose fullrank constraints on the embedding, while soft whitening methods indeed impose whitening constraints. Similar analysis also applies to Barlow Twins [45], except that the whitening/decorrelation penalty is imposed on the cross-covariance matrix of embedding from different views. Connection to Other Non-contrastive Methods. SwAV [4], a clustering-based method, uses a "swapped" prediction mechanism where the cluster assignment (code) of a view is predicted from the representation of another view, by minimizing the following objective: L(X) = `(CTZ1, (Q2)st) + `(CTZ2, (Q1)st). (11) Here, C is the prototype matrix learned by back-propagation, Qi is the predicted code with equalpartition and high-entropy constraints, and SwAV uses cross-entropy loss as `(·, ·) to match the distributions. The constraints on Qi are approximately satisfied during optimization, by using the iterative Sinkhorn-Knopp algorithm conditioned on the input CTZi. Note that SwAV explicitly uses stop-gradient when it calculates the target Qi. By contrasting Eqn. 7 and the first term of Eqn. 11, we find that: 1) SwAV can be viewed as an online network to match a target with constraints, like BW-based whitening loss, even thought the constraints imposed on the targets between them are 4Note the slight difference where VICReg uses margin loss on the diagonal of covariance, while our notation uses MSE loss. different; 2) From the perspective of asymmetric structure, SwAV indeed uses a linear predictor CT that is also learned by back-propagation like SimSiam, while BW-based whitening loss has an implicit predictor φ(Z1) depending on the input itself. Similar analysis also applies to DINO [5], which further simplifies the formulation of SwAV by removing the prototype matrix and directly matching the output of another view, from the view of knowledge distillation. DINO uses centering and sharpening operations to impose the constraints on the target (output of another view). One significant difference between DINO and whitening loss is that DINO uses population statistics of centering calculated by moving average, while whitening loss uses the mini-batch statistics of whitening. Why PCA Whitening Fails to Avoid Dimensional Collapse? Based on Eqn. 7, we note that whitening loss can favorably provide full-rank constraints on the embedding under the condition that the online network can match the whitened targets well. We experimentally find that PCA-based whitening loss provides volatile sequence of whitened targets during training, as shown in Figure 4(a). It is difficult for the online network to match such a target signal with significant variation, resulting in minimal decrease in the whitening loss (see Figure 2). Furthermore, we observe that PCA-based whitening loss has also significantly varying whitening matrix sequences {φt(·)} (Figure 4(b)), even given the same input data. This coincides with the observation in [16, 8], where an unstable predictor results in significant degenerate performance. Our observations are also in accordance with the arguments in [22, 23] that PCA-based BW shows significantly large stochasticity. 
We note that ZCA whitening can provide relatively stable sequences of whitened targets and whitening matrix during training (Figure 4), which ensures stable training for SSL. This is likely due to the property of ZCA-based whitening that minimizes the total squared distance between the original and whitened variables [26, 22]. Why Whitened Output is not a Good Representation? A whitened output removes the correlation among axes [21] and ensures the examples scattered in a spherical distribution [12], which bears resemblance to contrastive learning where different examples are pulled away. We conduct experiments to compare SimCLR [6], BYOL [16], VICReg [2] and W-MSE [12], and monitor the cosine similarity for all negative pairs, stable-rank and rank during training. From Figure 5, we find that all methods can achieve a high rank on the encoding. This is driven by the improved extent of whitening on the embedding. Furthermore, we observe that the negatives cosine similarity decreases during the training, while the extent of stable-rank increases, for all methods. This observation suggests that a representation with stronger extent of whitening is more likely to have less similarity among different examples. We further conduct experiments to validate this argument, using VICReg with varying penalty factor α (Eqn. 10) to adjust the extent of whitening on embedding (Figure 5(d)). Therefore, a whitened output leads to the state that all examples have dissimilar features. This state can break the potential manifold the examples in the same class belong to, which makes the learning more difficult [17]. Similar analysis for contrastive learning is also shown in [6], where classes represented by the projected output (embedding) are not well separated, compared to encoding. 4 Channel Whitening with Random Group Partition One main weakness of BW-based whitening loss is that the whitening operation requires the number of examples (mini-batch size) m to be larger than the size of channels d, to avoid numerical instability5. This requirement limits its usage in scenarios where large batch of training data cannot be fit into the memory. Based on previous analysis, the whitening loss can be viewed as an online learner to match a whitened target with all singular values being one. We note the key of whitening loss is that it conducts a transformation φ : Z → Ẑ, ensuring that the singular values of Ẑ are one. We thus propose channel whitening (CW) that ensures the examples in a mini-batch are orthogonal: Centering : Zc = (I− 1 d 1 · 1T )Z, Whitening : Ẑ = ZcΦ, (12) where Φ ∈ Rm×m is the ‘whitening matrix’ that is derived from the corresponding ‘covariance matrix’: Σ ′ = 1d−1Z T c Zc. In our implementation, we use ZCA whitening to obtain Φ. CW ensures the examples in a mini-batch are orthogonal to each other, with ẐT Ẑ = 1d−1I. This means CW has the same ability as BW for SSL in avoiding the dimensional collapse, by providing target Ẑ whose singular values are one. More importantly, one significant advantage of CW is that it can obtain numerical stability when the batch size is small, since the condition that d > m can be obtained by design (e.g., we can set the channel number of embedding d to be larger than the batch size m). Besides, we find that CW can amplify the full-rank constraints on the embedding by dividing the channels/neurons into random groups, as we will illustrate. Random Group Partition. 
Given the embedding Z ∈ Rd×m, d > m, we divide it into g ≥ 1 groups {Z(i) ∈ R d g×m}gi=1, where we assume that d is divisible by g and ensure dg > m. We then perform CW on each Z(i), i = 1, ..., g. Note that the ranks of Z and Z(i) are all at most m. Therefore, CW with group partition provides g constraints with Rank(Z(i)) = m on embedding, compared to CW without group partition that only one constraint with Rank(Z) = m. Although CW with group partition can provide more full-rank constraints for mini-batch data, we find that it can also make the population data correlated, if group partition is all the same during training, which decreases the rank and does not improve the performance in accuracy by our experiments (Figure 6). We find random group partition, which randomly divide the channels/neurons into group for each iteration (mini-batch data), can alleviate this issue and obtain an improved performance, from Figure 6. We call our method as channel whitening with random group partition (CW-RGP), and provide the full algorithm and PyTorch-style code in supplementary materials. We note that Hua et al. [21] use a similar idea for BW, called Shuffled-DBN. However Shuffled-DBN cannot well amplify the full-rank constraints by using more groups, since BW-based methods require m > dg to avoid numerical instability. We further show that CW-RGP works remarkably better than 5An empirical setting is m = 2d that can obtain good performance as shown in [12, 21]. Shuffled-DBN in the subsequent experiments. We attribute this results to the ability of CW-RGP in amplifying the full-rank constraints by using groups. 4.1 Experiments for Empirical Study Table 2: Comparisons on ImageNet linear classification. All are based on ResNet-50 encoder. The table is mostly inherited from [8]. Method Batch size 100 eps 200 eps SimCLR [6] 4096 66.5 68.3 MoCo v2 [7] 256 67.4 69.9 BYOL [16] 4096 66.5 70.6 SwAV [4] 4096 66.5 69.1 SimSiam [8] 256 68.1 70.0 W-MSE 4 [12] 4096 69.4 - Zero-CL [48] 1024 68.9 - BYOL [16] (repro.) 512 66.1 69.2 SwAV [4] (repro.) 512 65.8 67.9 W-MSE 4 [12] (repro.) 512 66.7 67.9 CW-RGP 4 (ours) 512 69.7 71.0 In this section, we conduct experiments to validate the effectiveness of our proposed CW-RGP. We evaluate the performances of CW-RGP for classification on CIFAR-10, CIFAR-100 [28], STL-10 [10], TinyImageNet [29] and ImageNet [11]. We also evaluate the effectiveness in transfer learning, for a pre-trained model using CW-RGP. We run the experiments on one workstation with 4 GPUs. For more details of implementation and training protocol, please refer to supplementary materials. Evaluation for Classification We first conduct experiments on small and medium size datasets (including CIFAR-10, CIFAR-100, STL-10 and TinyImageNet), strictly following the setup of W-MSE paper [12]. Our CW-RGP inherits the advantages of W-MSE in exploiting different views. CW-RGP 2 and CW-RGP 4 indicate our methods with s = 2 and s = 4 positive views extracted per image respectively, similar to W-MSE [12]. The results of baselines shown in Table. 1 are partly inherited in [12], except that we reproduce certain baselines under the same training and evaluation settings as in [12] (some different hyper-parameter settings are shown in supplementary materials). We observe that CW-RGP obtains the highest accuracy on almost all the datasets except Tiny-ImageNet. Besides, CW-RGP with 4 views are generally better than 2, similar to W-MSE. These results show that CW-RGP is a competitive SSL method. 
We also confirm that CW with random group partition could obtain a higher performance than BW (and with random group partition), comparing CW-RGP to W-MSE and Shuffled-DBN. We then conduct experiments on large-scale ImageNet, strictly following the setup of SimSiam paper [8]. The results of baselines shown in Table 2 are mostly reported in [8], except that the result of W-MSE 4 is from the W-MSE paper [12] and we reproduce BYOL [16], SwAV [4] and W-MSE 4 [12] under a batch size of 512 based on the same training and evaluation settings as in [8] for fairness. CW-RGP 4 is trained with a batch size of 512 and gets the highest accuracy among all methods under both 100 and 200 epochs training. We find that our CW-RGP can also work well when combined with the whitening penalty used in VICReg. Note that we also try a batch size of 256 under 100-epoch training, which gets the top-1 accuracy of 69.5%. Transfer to downstream tasks We examine the representation quality by transferring our model to other tasks, including VOC [13] object detection, COCO [32] object detection and instance segmentation. We use the baseline (except for the pre-training model, the others are exactly the same) of the detection codebase from MoCo [19] for CW-RGP to produce the results. The results of baselines shown in Table3 are mostly inherited from [8]. We clearly observe that CW-RGP performs better than or on par with these state-of-the-art approaches on COCO object detection and instance segmentation, which shows the great potential of CW-RGP in transferring to downstream tasks. Ablation for Random Group Partition. We also conduct experiments to show the advantages of random group partition for channel whitening. We use ‘CW’, ‘CWGP’ and ‘CW-RGP’ to indicate channel whitening without group partition, with group partition and with random group partition, respectively. We further consider the setup with s = 2 and s = 4 positive views. We use the same setup as in Table 1 and show the results in Table 4. We have similar observation as in Figure 6 that CW with random group partition improves the performance. Ablation for Batch Size. Here, we conduct experiments to empirically show the advantages of CW over BW, in terms of the stability using different batch size. We train CW and BW on ImageNet-100, using batch size ranging in {32, 64, 128, 256}. Figure 7 shows the results. We can find that CW is more robust for small batch size training. 5 Conclusion and Limitation In this paper, we invested whitening loss for SSL, and observed several interesting phenomena with further clarification based on our analysis framework. We showed that batch whitening (BW) based methods only require the embedding to be full-rank, which is also a sufficient condition for collapse avoidance. We proposed channel whitening with random group partition (CW-RGP) that is well motivated theoretically in avoiding a collapse and has been validated empirically in learning good representation. Limitation. Our work only shows how to avoid collapse by using whitening loss, but does not explicitly show what should be the extent of whitening of a good representation. We note that a concurrent work addresses this problem by connecting the eigenspectrum of a representation to a power law [15], and shows the coefficient of the power law is a strong indicator for the effects of representation. We believe our work can be further extended when combined with the analyses from [15]. 
Besides, our work does not answer how the projector affects the extent of whitening of the encoding versus the embedding [18], which is important for answering why the encoding is usually used as the representation for evaluation, rather than the whitened output or the embedding. Our attempts, shown in the supplementary materials, provide preliminary results but do not offer an answer to this question. Acknowledgement This work was partially supported by the National Key Research and Development Plan of China under Grant 2021ZD0112901, the National Natural Science Foundation of China (Grant No. 62106012), and the Fundamental Research Funds for the Central Universities.
1. What is the focus and contribution of the paper regarding whitening loss in self-supervised learning? 2. What are the strengths of the proposed approach, particularly in its ability to prevent collapse and achieve state-of-the-art results? 3. Do you have any concerns or suggestions regarding the paper's content, such as providing more details and intuitions behind certain explanations or comparing the proposed approach with other methods like DINO and SwAV? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any limitations or potential negative impacts of the proposed approach that the authors should discuss?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper Self-supervised learning approaches need to avoid collapse to a trivial representation. This has been tackled by various approaches in the literature, including the use of negatives, the use of asymmetric networks, etc. The use of whitening loss has been explored by some recent works. This paper studies this whitening loss and various variants of the whitening transformation used in practice. The paper investigates some issues with the transformations used and proposes a new random-group-partition-based channel whitening approach to prevent collapse. The approach works well for large batch sizes, which is shown through experiments on datasets like ImageNet and transfer to COCO. Strengths And Weaknesses Strengths: The paper seeks to analyze various approaches used for whitening of feature representations used for SSL. The paper is very well written. The paper does a good job at explaining some of the preliminaries. Previous approaches have been analyzed extensively through experiments based on public repositories. The paper decomposes the whitening loss and connects it to common SSL approaches. Obtains state-of-the-art results on standard benchmark datasets. I think the paper will be of significance to the wider ML/SSL community. Minor Weaknesses, suggestions and questions: Since Section 3.3 is one of the most important contributions of this paper, I would suggest having some more details & intuitions behind the explanations. Especially: L185-196. This could perhaps be included in the supplementary material. Can the centering+sharpening operation used in DINO & the equipartition constraint used in SwAV be looked at through a similar analysis? L280: To ensure d/g > m, do you modify the last layer of the projection head? How does the proposed approach compare to state-of-the-art on training time? Do you have some thoughts on why the gap in performance decreases at 100 ep vs 200 ep (Table 2)? What about W-MSE 4 @ 200? I agree with the motivation in L263-266. I think from a practical standpoint, it would be a better idea to report the required GPU memory & GPUs per approach in Table 2 in addition to the batch size. Questions Please refer to the section on "Strengths And Weaknesses" for questions and comments. Limitations The authors have adequately discussed limitations of the proposed approach. While the work is more at a fundamental level, I do urge the authors to include some discussion on potential negative impacts.
NIPS
Title Learning Deep Embeddings with Histogram Loss Abstract We suggest a loss for learning deep embeddings. The new loss does not introduce parameters that need to be tuned and results in very good embeddings across a range of datasets and problems. The loss is computed by estimating two distribution of similarities for positive (matching) and negative (non-matching) sample pairs, and then computing the probability of a positive pair to have a lower similarity score than a negative pair based on the estimated similarity distributions. We show that such operations can be performed in a simple and piecewise-differentiable manner using 1D histograms with soft assignment operations. This makes the proposed loss suitable for learning deep embeddings using stochastic optimization. In the experiments, the new loss performs favourably compared to recently proposed alternatives. 1 Introduction Deep feed-forward embeddings play a crucial role across a wide range of tasks and applications in image retrieval [1, 8, 15], biometric verification [3, 5, 13, 17, 22, 25, 28], visual product search [21], finding sparse and dense image correspondences [20, 29], etc. Under this approach, complex input patterns (e.g. images) are mapped into a high-dimensional space through a chain of feed-forward transformations, while the parameters of the transformations are learned from a large amount of supervised data. The objective of the learning process is to achieve the proximity of semanticallyrelated patterns (e.g. faces of the same person) and avoid the proximity of semantically-unrelated (e.g. faces of different people) in the target space. In this work, we focus on simple similarity measures such as Euclidean distance or scalar products, as they allow fast evaluation, the use of approximate search methods, and ultimately lead to faster and more scalable systems. Despite the ubiquity of deep feed-forward embeddings, learning them still poses a challenge and is relatively poorly understood. While it is not hard to write down a loss based on tuples of training points expressing the above-mentioned objective, optimizing such a loss rarely works “out of the box” for complex data. This is evidenced by the broad variety of losses, which can be based on pairs, triplets or quadruplets of points, as well as by a large number of optimization tricks employed in recent works to reach state-of-the-art, such as pretraining for the classification task while restricting fine-tuning to top layers only [13, 25], combining the embedding loss with the classification loss [22], using complex data sampling such as mining “semi-hard” training triplets [17]. Most of the proposed losses and optimization tricks come with a certain number of tunable parameters, and the quality of the final embedding is often sensitive to them. Here, we propose a new loss function for learning deep embeddings. In designing this function we strive to avoid highly-sensitive parameters such as margins or thresholds of any kind. While processing a batch of data points, the proposed loss is computed in two stages. Firstly, the two one-dimensional distributions of similarities in the embedding space are estimated, one corresponding to similarities between matching (positive) pairs, the other corresponding to similarities between non-matching (negative) pairs. The distributions are estimated in a simple non-parametric ways 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. 
(as histograms with linearly-interpolated values-to-bins assignments). In the second stage, the overlap between the two distributions is computed by estimating the probability that two points sampled from the two distributions are in the wrong order, i.e. that a random negative pair has a higher similarity than a random positive pair. The two stages are implemented in a piecewise-differentiable manner, thus allowing the loss (i.e. the overlap between the distributions) to be minimized using standard backpropagation.
The number of bins in the histograms is the only tunable parameter associated with our loss, and it can be set according to the batch size independently of the data itself. In the experiments, we fix this parameter (and the batch size) and demonstrate the versatility of the loss by applying it to four different image datasets of varying complexity and nature. Comparing the new loss to the state of the art reveals its favourable performance. Overall, we hope that the proposed loss will be used as an “out-of-the-box” solution for learning deep embeddings that requires little tuning and leads to results close to the state of the art.
2 Related work Recent works on learning embeddings use deep architectures (typically ConvNets [8, 10]) and stochastic optimization. Below we review the loss functions that have been used in recent works.
Classification losses. It has been observed in [8] and confirmed later in multiple works (e.g. [15]) that deep networks trained for classification can be used for deep embedding. In particular, it is sufficient to consider an intermediate representation arising in one of the last layers of the deep network. The normalization is added post-hoc. Many of the works mentioned below pre-train their embeddings as a part of classification networks.
Pairwise losses. Methods that use pairwise losses sample pairs of training points and score them independently. The pioneering work on deep embeddings [3] penalizes the deviation from the unit cosine similarity for positive pairs and the deviation from −1 or −0.9 for negative pairs. Perhaps the most popular of pairwise losses is the contrastive loss [5, 20], which minimizes the distances in the positive pairs and tries to maximize the distances in the negative pairs as long as these distances are smaller than some margin M. Several works pointed out that attempting to collapse all positive pairs may lead to excessive overfitting and therefore suggested losses that mitigate this effect, e.g. a double-margin contrastive loss [12], which drops to zero for positive pairs as long as their distances fall beyond the second (smaller) margin. Finally, several works use non-hinge-based pairwise losses, such as log-sum-exp and cross-entropy on the similarity values, that softly encourage the similarity to be high for positive pairs and low for negative pairs (e.g. [25, 28]). The main problem with pairwise losses is that the margin parameters might be hard to tune, especially since the distributions of distances or similarities can change dramatically as learning progresses. While most works “skip” the burn-in period by initializing the embedding to a network pre-trained for classification [25], [22] further demonstrated the benefit of admixing the classification loss during the fine-tuning stage (which brings in another parameter).
Triplet losses.
While pairwise losses care about the absolute values of the distances of positive and negative pairs, the quality of an embedding ultimately depends on the relative ordering between positive and negative distances (or similarities). Indeed, the embedding meets the needs of most practical applications as long as the similarities of positive pairs are greater than the similarities of negative pairs [19, 27]. The most popular class of losses for metric learning therefore considers triplets of points x_0, x^+, x^-, where x_0, x^+ form a positive pair and x_0, x^- form a negative pair, and measures the difference in their distances or similarities. A triplet-based loss can then, e.g., be aggregated over all triplets using a hinge function of these differences. Triplet-based losses are popular for large-scale embedding learning [4] and in particular for deep embeddings [13, 14, 17, 21, 29]. Setting the margin in the triplet hinge-loss still represents a challenge, as does sampling “correct” triplets, since the majority of them quickly become associated with zero loss. On the other hand, focusing sampling on the hardest triplets can prevent efficient learning [17]. Triplet-based losses generally make learning less constrained than pairwise losses. This is because for a low-loss embedding, the characteristic distance separating positive and negative pairs can vary across the embedding space (depending on the location of x_0), which is not possible for pairwise losses. In some situations, such added flexibility can increase overfitting.
Quadruplet losses. Quadruplet-based losses are similar to triplet-based losses as they are computed by looking at the differences in distances/similarities of positive pairs and negative pairs. In the case of quadruplet-based losses, the compared positive and negative pairs do not share a common point (as they do for triplet-based losses). Quadruplet-based losses do not allow the flexibility of triplet-based losses discussed above (as they include comparisons of positive and negative pairs located in different parts of the embedding space). At the same time, they are not as rigid as pairwise losses, as they only penalize the relative ordering of positive and negative pairs. Nevertheless, despite these appealing properties, quadruplet-based losses remain rarely used and confined to “shallow” embeddings [9, 31]. We are unaware of deep embedding approaches using quadruplet losses. A potential problem with quadruplet-based losses in the large-scale setting is that the number of all quadruplets is even larger than the number of triplets. Among all groups of losses, our approach is most related to quadruplet-based ones, and can be seen as a way to organize the learning of deep embeddings with a quadruplet-based loss in an efficient and (almost) parameter-free manner.
3 Histogram loss We now describe our loss function and then relate it to the quadruplet-based loss. Our loss (Figure 1) is defined for a batch of examples X = {x_1, x_2, ..., x_N} and a deep feed-forward network f(·; θ), where θ represents the learnable parameters of the network. We assume that the last layer of the network performs length-normalization, so that the embedded vectors {y_i = f(x_i; θ)} are L2-normalized. We further assume that we know which elements should match each other and which ones should not. Let m_ij be +1 if x_i and x_j form a positive pair (correspond to a match) and m_ij be −1 if x_i and x_j are known to form a negative pair (these labels can be derived from class labels or be specified otherwise). A small sketch of how such pair labels and the corresponding similarities can be formed from a labeled batch is given below.
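The following is a minimal sketch, not the authors' code, of how the pair labels m_ij and the two sets of pairwise similarities could be built from a batch of class labels and network outputs. It assumes PyTorch, and all names (pair_labels_and_similarities, embeddings, labels) are illustrative.

```python
# A hedged sketch, assuming PyTorch; not the authors' implementation.
import torch
import torch.nn.functional as F

def pair_labels_and_similarities(embeddings, labels):
    """embeddings: (N, D) raw network outputs; labels: (N,) integer class ids."""
    y = F.normalize(embeddings, p=2, dim=1)            # L2-normalize, as assumed above
    sims = y @ y.t()                                    # s_ij = <y_i, y_j>, values in [-1, 1]
    same = labels.unsqueeze(0) == labels.unsqueeze(1)   # True where class labels match
    m = same.int() * 2 - 1                              # m_ij in {+1, -1}
    off_diag = ~torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pos_sims = sims[(m == 1) & off_diag]                # samples from S+
    neg_sims = sims[(m == -1) & off_diag]               # samples from S-
    return pos_sims, neg_sims
```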
Given {m_ij} and {y_i} we can estimate the two probability distributions p^+ and p^- corresponding to the similarities in positive and negative pairs respectively. In particular, S^+ = {s_ij = ⟨x_i, x_j⟩ | m_ij = +1} and S^- = {s_ij = ⟨x_i, x_j⟩ | m_ij = −1} can be regarded as sample sets from these two distributions. Although the samples in these sets are not independent, we keep all of them to ensure a large sample size. Given the sample sets S^+ and S^-, we can use any statistical approach to estimate p^+ and p^-. The fact that these distributions are one-dimensional and bounded to [−1; +1] simplifies the task. Perhaps the most obvious choice in this case is fitting simple histograms with uniformly spaced bins, and we use this approach in our experiments. We therefore consider R-dimensional histograms H^+ and H^-, with the nodes t_1 = −1, t_2, ..., t_R = +1 uniformly filling [−1; +1] with the step \Delta = \frac{2}{R-1}. We estimate the value h_r^+ of the histogram H^+ at each node as:

h_r^+ = \frac{1}{|S^+|} \sum_{(i,j):\, m_{ij}=+1} \delta_{i,j,r}    (1)

where (i, j) spans all positive pairs of points in the batch. The weights \delta_{i,j,r} are chosen so that each pair sample is assigned to the two adjacent nodes:

\delta_{i,j,r} = \begin{cases} (s_{ij} - t_{r-1})/\Delta, & \text{if } s_{ij} \in [t_{r-1}; t_r], \\ (t_{r+1} - s_{ij})/\Delta, & \text{if } s_{ij} \in [t_r; t_{r+1}], \\ 0, & \text{otherwise}. \end{cases}    (2)

We thus use linear interpolation for each entry in the pair set when assigning it to the two nodes. The estimation of H^- proceeds analogously. Note that the described approach is equivalent to using a “triangular” kernel for density estimation; other kernel functions can be used as well [2]. Once we have the estimates for the distributions p^+ and p^-, we use them to estimate the probability that the similarity in a random negative pair is higher than the similarity in a random positive pair (the probability of reverse). Generally, this probability can be estimated as:

p_{\text{reverse}} = \int_{-1}^{1} p^-(x) \left[ \int_{-1}^{x} p^+(y)\, dy \right] dx = \int_{-1}^{1} p^-(x)\, \Phi^+(x)\, dx = \mathbb{E}_{x \sim p^-}\left[ \Phi^+(x) \right],    (3)

where \Phi^+(x) is the CDF (cumulative distribution function) of p^+(x). The integral (3) can then be approximated and computed as:

L(X, \theta) = \sum_{r=1}^{R} \left( h_r^- \sum_{q=1}^{r} h_q^+ \right) = \sum_{r=1}^{R} h_r^- \phi_r^+,    (4)

where L is our loss function (the histogram loss) computed for the batch X and the embedding parameters \theta, which approximates the reverse probability; \phi_r^+ = \sum_{q=1}^{r} h_q^+ is the cumulative sum of the histogram H^+.

Importantly, the loss (4) is differentiable w.r.t. the pairwise similarities s ∈ S^+ and s ∈ S^-. Indeed, it is straightforward to obtain \partial L / \partial h_r^- = \sum_{q=1}^{r} h_q^+ and \partial L / \partial h_r^+ = \sum_{q=r}^{R} h_q^- from (4). Furthermore, from (1) and (2) it follows that:

\frac{\partial h_r^+}{\partial s_{ij}} = \begin{cases} +\frac{1}{\Delta |S^+|}, & \text{if } s_{ij} \in [t_{r-1}; t_r], \\ -\frac{1}{\Delta |S^+|}, & \text{if } s_{ij} \in [t_r; t_{r+1}], \\ 0, & \text{otherwise}, \end{cases}    (5)

for any s_ij such that m_ij = +1 (and analogously for \partial h_r^- / \partial s_{ij}). Finally, \partial s_{ij} / \partial x_i = x_j and \partial s_{ij} / \partial x_j = x_i. One can thus backpropagate the loss to the scalar-product similarities, then further to the individual embedded points, and then further into the deep embedding network. A minimal code sketch of this two-stage computation is given below.
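The following is a minimal, self-contained sketch of the Histogram loss of equations (1)-(4), assuming PyTorch; pos_sims and neg_sims are 1-D tensors of similarities for positive and negative pairs (e.g. produced as in the earlier sketch), and the function and argument names are illustrative rather than taken from the authors' released Caffe code.

```python
# A hedged sketch of equations (1)-(4), assuming PyTorch; not the authors' Caffe code.
import torch

def soft_histogram(sims, R=100):
    """Soft-assign similarities in [-1, 1] to R uniformly spaced nodes (eqs. 1-2)."""
    nodes = torch.linspace(-1.0, 1.0, R, device=sims.device)   # nodes t_1, ..., t_R
    delta = 2.0 / (R - 1)                                       # node step
    # triangular (linear-interpolation) weights to the two adjacent nodes
    w = torch.clamp(1.0 - torch.abs(sims.unsqueeze(1) - nodes.unsqueeze(0)) / delta, min=0.0)
    return w.sum(dim=0) / sims.numel()                          # normalized histogram h_r

def histogram_loss(pos_sims, neg_sims, R=100):
    h_pos = soft_histogram(pos_sims, R)
    h_neg = soft_histogram(neg_sims, R)
    phi_pos = torch.cumsum(h_pos, dim=0)        # cumulative sum, an estimate of Phi+
    return torch.sum(h_neg * phi_pos)           # eq. (4): estimated probability of reverse
```

Because every step is built from differentiable tensor operations, the gradients of equation (5) are obtained automatically by backpropagation in such an implementation.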
Relation to quadruplet loss. Our loss first estimates the probability distributions of similarities for positive and negative pairs in a semi-parametric way (using histograms), and then computes the probability of reverse using these distributions via equation (4). An alternative and purely non-parametric way would be to consider all possible pairs of positive and negative pairs contained in the batch and to estimate this probability from such a set of pairs of pairs. This would correspond to evaluating a quadruplet-based loss similarly to [9, 31]. The number of pairs of pairs in a batch, however, tends to be quartic (a fourth-degree polynomial) in the batch size, rendering exhaustive sampling impractical. This is in contrast to our loss, for which the separation into two stages brings the complexity down to quadratic in the batch size. Another efficient loss based on quadruplets is introduced in [24]. The training is done pairwise, but the threshold separating positive and negative pairs is also learned. We note that quadruplet-based losses as in [9, 31] often encourage the positive pairs to be more similar than negative pairs by some non-zero margin. It is also easy to incorporate such a non-zero margin into our method by defining the loss to be:

L_\mu(X, \theta) = \sum_{r=1}^{R} \left( h_r^- \sum_{q=1}^{r+\mu} h_q^+ \right),    (6)

where the new loss effectively enforces the margin \mu\Delta. We, however, do not use such a modification in our experiments (preliminary experiments did not show any benefit from introducing the margin).

4 Experiments In this section we present the results of embedding learning. We compare our loss to state-of-the-art pairwise and triplet losses, which have been reported in recent works to give state-of-the-art performance on these datasets.

Baselines. In particular, we have evaluated the Binomial Deviance loss [28]. While we are aware only of its use in person re-identification approaches, in our experiments it performed very well for product image search and bird recognition, significantly outperforming the baseline pairwise (contrastive) loss reported in [21], once its parameters are tuned. The binomial deviance loss is defined as:

J_{\text{dev}} = \sum_{i,j \in I} w_{i,j} \ln\left( \exp\left(-\alpha (s_{i,j} - \beta)\, m_{i,j}\right) + 1 \right),    (7)

where I is the set of training image indices, and s_{i,j} is the similarity measure between the i-th and j-th images (i.e. s_{i,j} = cosine(x_i, x_j)). Furthermore, m_{i,j} and w_{i,j} are the learning supervision and scaling factors respectively:

m_{i,j} = \begin{cases} 1, & \text{if } (i, j) \text{ is a positive pair}, \\ -C, & \text{if } (i, j) \text{ is a negative pair}, \end{cases} \qquad w_{i,j} = \begin{cases} \frac{1}{n_1}, & \text{if } (i, j) \text{ is a positive pair}, \\ \frac{1}{n_2}, & \text{if } (i, j) \text{ is a negative pair}, \end{cases}    (8)

where n_1 and n_2 are the numbers of positive and negative pairs in the training set (or mini-batch) respectively, and \alpha and \beta are hyper-parameters. Parameter C is the negative cost for balancing weights for positive and negative pairs that was introduced in [28]. Our experimental results suggest that the quality of the embedding is sensitive to this parameter. Therefore, in the experiments we report results for two versions of the loss: with C = 10, which is close to optimal for the re-identification datasets, and with C = 25, which is close to optimal for the product and bird datasets. We have also computed the results for the Lifted Structured Similarity Softmax (LSSS) loss [21] on the CUB-200-2011 [26] and Online Products [21] datasets and additionally applied it to the re-identification datasets. The Lifted Structured Similarity Softmax loss is triplet-based and uses a sophisticated triplet sampling strategy that was shown in [21] to outperform the standard triplet-based loss. Additionally, we performed experiments for the triplet loss [18] that uses “semi-hard negative” triplet sampling. Such sampling considers only triplets violating the margin, but still having the positive distance smaller than the negative distance. A small sketch of the binomial deviance baseline is given after this paragraph.
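The following is a hedged sketch of the Binomial Deviance baseline of equations (7)-(8), assuming the same cosine-similarity matrix and {+1, −1} pair-label matrix as in the earlier sketch; the default values of alpha, beta and C are placeholders rather than the tuned settings reported in the experiments, and n_1, n_2 are taken per mini-batch.

```python
# A hedged sketch of eqs. (7)-(8), assuming PyTorch; hyper-parameter values are placeholders.
import torch
import torch.nn.functional as F

def binomial_deviance_loss(sims, m, alpha=2.0, beta=0.5, C=25.0):
    """sims: (N, N) cosine similarities; m: (N, N) pair labels with m_ij in {+1, -1}."""
    pos = (m == 1)   # self-pairs are included here for simplicity
    neg = (m == -1)
    # eq. (8): supervision factors and weights balancing positive and negative pairs
    m_ij = torch.where(pos, torch.tensor(1.0, device=sims.device),
                            torch.tensor(-C, device=sims.device))
    w = torch.where(pos, 1.0 / pos.sum().clamp(min=1), 1.0 / neg.sum().clamp(min=1))
    # eq. (7): softplus(-alpha * (s_ij - beta) * m_ij) equals ln(exp(-alpha*(s-beta)*m) + 1)
    return torch.sum(w * F.softplus(-alpha * (sims - beta) * m_ij))
```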
Datasets and evaluation metrics. We have evaluated the above-mentioned loss functions on four datasets: CUB-200-2011 [26], CUHK03 [11], Market-1501 [30] and Online Products [21]. All these datasets have been used for evaluating embedding learning methods. The CUB-200-2011 dataset includes 11,788 images of 200 classes corresponding to different bird species. As in [21] we use the first 100 classes for training (5,864 images) and the remaining classes for testing (5,924 images). The Online Products dataset includes 120,053 images of 22,634 classes. Classes correspond to a number of online products from eBay.com. There are approximately 5.3 images for each product. We used the standard split from [21]: 11,318 classes (59,551 images) are used for training and 11,316 classes (60,502 images) are used for testing. The images from the CUB-200-2011 and the Online Products datasets are resized to 256 by 256, keeping the original aspect ratio (padding is done when needed).

The CUHK03 dataset is commonly used for the person re-identification task. It includes 13,164 images of 1,360 pedestrians captured from 3 pairs of cameras. Each identity is observed by two cameras and has 4.8 images in each camera on average. Following most of the previous works we use the “CUHK03-labeled” version of the dataset with manually-annotated bounding boxes. According to the CUHK03 evaluation protocol, the 1,360 identities are split into 1,160 identities for training, 100 for validation and 100 for testing. We use the first split from the CUHK03 standard split set which is provided with the dataset. The Market-1501 dataset includes 32,643 images of 1,501 pedestrians; each pedestrian is captured by several cameras (from two to six). The dataset is divided randomly into a test set of 750 identities and a train set of 751 identities.

Following [21, 28, 30], we report the Recall@K metric for all the datasets. Recall@K is the probability of getting the right match among the first K gallery candidates sorted by similarity. For CUB-200-2011 and Online Products, every test image is used as the query in turn and the remaining images are used as the gallery correspondingly. In contrast, for CUHK03 single-shot results are reported. This means that one image for each identity from the test set is chosen randomly in each of its two camera views. Recall@K values for 100 random query-gallery sets are averaged to compute the final result for a given split. For the Market-1501 dataset, we use the multi-shot protocol (as is done in most other works), as there are many images of the same person in the gallery set. A small sketch of the Recall@K computation is given below.
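For reference, the following is a small sketch of the Recall@K metric as defined above: the fraction of queries for which at least one correct match appears among the K most similar gallery items. It assumes NumPy arrays of L2-normalized embeddings and integer labels, with the query already excluded from its own gallery where the protocol requires it; all names are illustrative.

```python
# A hedged sketch of the Recall@K metric; assumes the query is not present in the gallery.
import numpy as np

def recall_at_k(query_emb, query_labels, gallery_emb, gallery_labels, ks=(1, 5, 10, 20)):
    sims = query_emb @ gallery_emb.T                  # cosine similarities (L2-normalized inputs)
    order = np.argsort(-sims, axis=1)                 # gallery indices, most similar first
    ranked_labels = gallery_labels[order]             # labels of the ranked candidates
    hits = ranked_labels == query_labels[:, None]     # True where the candidate is a correct match
    return {k: float(np.mean(hits[:, :k].any(axis=1))) for k in ks}
```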
Architectures used. For training on the CUB-200-2011 and the Online Products datasets we used the same architecture as in [21], which coincides with the GoogLeNet architecture [23] up to the ‘pool5’ and the inner product layers, while the last layer is used to compute the embedding vectors. The GoogLeNet part is pretrained on ImageNet ILSVRC [16] and the last layer is trained from scratch. As in [21], all GoogLeNet layers are fine-tuned with a learning rate that is ten times less than the learning rate of the last layer. We set the embedding size to 512 for all the experiments with this architecture. We reproduced the results for the LSSS loss [21] for these two datasets. For the architectures that use the Binomial Deviance loss, the Histogram loss and the Triplet loss, the number of iterations and the parameter values (for the former) are chosen using the validation set. For training on CUHK03 and Market-1501 we used the Deep Metric Learning (DML) architecture introduced in [28]. It has three CNN streams for the three parts of the pedestrian image (head and upper torso, torso, lower torso and legs). Each of the streams consists of 2 convolution layers followed by the ReLU non-linearity and max-pooling. The first convolution layers for the three streams have shared weights. Descriptors are produced by the last 500-dimensional inner product layer that has the concatenated outputs of the three streams as an input.

Batches are formed by sampling a set of classes and then a number of images for each sampled class in the batch. We iterate over all the classes and all the images corresponding to the classes, sampling images in turn. The sequences of the classes and of the corresponding images are shuffled for every new epoch. CUB-200-2011 and Market-1501 include more than ten images per class on average, so we limit the number of images of the same class in the batch to ten for the experiments on these datasets. We used ADAM [7] for stochastic optimization in all of the experiments. For all losses the learning rate is set to 1e-4 for all the experiments except those on the CUB-200-2011 dataset, for which we found a learning rate of 1e-5 more effective. For the re-identification datasets the learning rate was decreased by a factor of 10 after 100K iterations; for the other experiments the learning rate was fixed. The number of iterations for each method was chosen using the validation set.

Results. The Recall@K values for the experiments on CUB-200-2011, Online Products, CUHK03 and Market-1501 are shown in Figure 3 and Figure 4. The Binomial Deviance loss (7) gives the best results for CUB-200-2011 and Online Products with the C parameter set to 25. We previously checked several values of C on the CUB-200-2011 dataset and found the value C = 25 to be the optimal one. We also observed that with smaller values of C the results are significantly worse than those presented in Figure 3 (left); for C equal to 2 the best Recall@1 is 43.50%. For CUHK03 the situation is reversed: the Histogram loss gives a boost of 2.64% over the Binomial Deviance loss with C = 10 (which we found to be optimal for this dataset). The results are shown in Figure 4 (left). Embedding distributions of the positive and negative pairs from the CUHK03 test set for different methods are shown in Figures 5b, 5c and 5d. For the Market-1501 dataset our method also outperforms the Binomial Deviance loss for both values of C. In contrast to the experiments with CUHK03, the Binomial Deviance loss appeared to perform better with C set to 25 than to 10 for Market-1501.

We have also investigated how the size of the histogram bin affects the model performance for the Histogram loss. As shown in Figure 2 (left), the results for CUB-200-2011 remain stable for bin sizes of 0.005, 0.01, 0.02 and 0.04 (these values correspond to 400, 200, 100 and 50 bins in the histograms). In our method, the distributions of similarities of the training data are estimated by the distributions of similarities within mini-batches. Therefore we also show results for the Histogram loss for various batch sizes (Figure 2, right). Larger batches are preferable: for CUHK03, Recall@K for a batch size of 256 is uniformly better than Recall@K for 128 and 64. We also observed similar behaviour for Market-1501. Additionally, we present our final results (batch size set to 256) for CUHK03 and Market-1501 in Table 1. For CUHK03, Recall@K values for 5 random splits were averaged.

Table 1: Final Recall@K (%) results for batch size 256.
Dataset       r = 1   r = 5   r = 10   r = 15   r = 20
CUHK03        65.77   92.85   97.62    98.94    99.43
Market-1501   59.47   80.73   86.94    89.28    91.09
To the best of our knowledge, these results corresponded to the state of the art on CUHK03 and Market-1501 at the moment of submission. To summarize the results of the comparison: the new (Histogram) loss gives the best results on the two person re-identification problems. For CUB-200-2011 and Online Products it came very close to the best loss (Binomial Deviance with C = 25). Interestingly, the Histogram loss uniformly outperformed the triplet-based LSSS loss [21] in our experiments, including the two datasets from [21]. Importantly, the new loss does not require tuning any parameters associated with it (though we have found learning with our loss to be sensitive to the learning rate).

5 Conclusion In this work we have suggested a new loss function for learning deep embeddings, called the Histogram loss. Like most previous losses, it is based on the idea of making the distributions of the similarities of the positive and negative pairs less overlapping. Unlike other losses used for deep embeddings, the new loss comes with virtually no parameters that need to be tuned. It also incorporates information across a large number of quadruplets formed from training samples in the mini-batch and implicitly takes all such quadruplets into account. We have demonstrated the competitive results of the new loss on a number of datasets. In particular, the Histogram loss outperformed other losses for the person re-identification problem on the CUHK03 and Market-1501 datasets. The code for Caffe [6] is available at: https://github.com/madkn/HistogramLoss. Acknowledgement: This research is supported by the Russian Ministry of Science and Education grant RFMEFI57914X0071.
1. What is the main contribution of the paper regarding the histogram loss function for deep network embeddings? 2. How does the proposed method compare to other state-of-the-art embedding loss functions, specifically binomial deviance loss and Lifted Structured Similarity Softmax? 3. What are the strengths and weaknesses of the paper's approach to learning embeddings? 4. Do you have any questions or concerns about the experimental setup, such as the choice of datasets, architectures, and optimization algorithms? 5. How does the paper's method differ from previous works in terms of being parameter-free, and how does this impact its performance and appeal?
Review
Review The authors provide a new loss function for learning embeddings in deep networks, called the histogram loss. This loss is based on a pairwise classification: whether two samples belong to the same class or not. In particular, the authors suggest looking at the similarity distribution of the embeddings on the L2 unit sphere (all embeddings are L2-normalized). The idea is to look at the distribution of the similar embeddings (positive pairs) and the distribution of the non-similar ones (negative pairs) and make the probability that a positive pair has a smaller score than a negative pair smaller.
After reviewing previous work in the area (Section 2), in Section 3 they develop a method to estimate the Histogram loss. They begin with the definition of the histogram (eq. 1 and 2) and, based on it, give the definition of $p_{reverse}$, which they later want to minimize (eq. 3). Afterward, they present an approximation to $p_{reverse}$ and show that it is differentiable. They conclude Section 3 by showing a connection to the quadruplet loss.
In Section 4 the authors compare their method to other state-of-the-art embedding losses; specifically, they consider the binomial deviance loss (eq. 7 and 8) and the Lifted Structured Similarity Softmax (LSSS; eq. 9). They try these losses on 4 datasets: CUB-200-2011 [25], CUHK03 [11], Market-1501 [29] and Online Products. In addition, different architectures for the neural network are used: for training on CUB-200-2011 and Online Products they use GoogLeNet, while for CUHK03 and Market-1501 they use the Deep Metric Learning architecture. The optimization algorithm used in this work is Adam (for stochastic optimization). They show the results using the Recall@K performance on all the datasets with the different methods in Figures 3 and 4. The authors show how their method is sometimes competitive with and sometimes outperforms previous state-of-the-art algorithms in the embeddings area.
I like and see the benefit of the authors' method for embeddings. The (almost) parameter-free method is very appealing and has its benefits. The authors simply show the results of their algorithm and demonstrate that it works. Also, after many works where I had to read the supplementary material in order to understand the work, I appreciate the authors' non-supplementary paper! Also, their thorough comparison to other methods in terms of loss functions, datasets, and different parameters is very good. Also, I like the choice of the datasets, where there are many classes, which seems natural to this kind of loss.
Two things bother me in this work: 1) The authors do not explain what Recall@K is, so I needed to go back through the papers "Deep Metric Learning via Lifted Structured Feature Embedding" and then "Product quantization for nearest neighbor search". I could postulate what the definition is in this case, but it still affected the self-contained nature of this work. 2) After doing so much profound work, it was not clear to me why each dataset has a different set of graphs. I miss a good reason why such a comparison is adequate in this context, especially in light of the fact that the authors did most of the work for a thorough comparison.
Typos: * In the Figure 5 caption, should it read "Red is for *positive* pairs, green is for *negative* pairs"?
NIPS
1. What is the focus of the paper in terms of its contribution to the field? 2. How does the proposed cost function differ from existing ones, and what are its advantages? 3. What are the strengths of the paper regarding its clarity and presentation? 4. Are there any concerns or criticisms regarding the experimental results? 5. How does the reviewer assess the potential impact of the proposed cost function on future research and applications?
Review
Review This paper deals with an interesting problem with wide applicability, namely the design of cost functions for learning deep embeddings. After an overview of existing cost functions, the authors introduce a cost function which is based on the histograms of similarities between positive and negative pairs and which satisfies the important property that it is differentiable and allows for learning using backpropagation. In my opinion, the paper is well written and it presents an intuitive cost function that may end up being used by a large number of people. A minor weakness is that I think the authors could have done a better job of motivating the cost function from an intuitive perspective; it was easy to understand the underlying motivations for the design, but that was mainly because I think the cost function makes a lot of sense. As for the experiments, I would say that they support the authors' statement that the proposed cost function may become the standard choice at some point in the future, but they also illustrate that the benefits are not substantial for all datasets. The paper also contains a small number of grammatical errors, but overall it was a pleasure to read the paper.
NIPS
Title Learning Deep Embeddings with Histogram Loss Abstract We suggest a loss for learning deep embeddings. The new loss does not introduce parameters that need to be tuned and results in very good embeddings across a range of datasets and problems. The loss is computed by estimating two distribution of similarities for positive (matching) and negative (non-matching) sample pairs, and then computing the probability of a positive pair to have a lower similarity score than a negative pair based on the estimated similarity distributions. We show that such operations can be performed in a simple and piecewise-differentiable manner using 1D histograms with soft assignment operations. This makes the proposed loss suitable for learning deep embeddings using stochastic optimization. In the experiments, the new loss performs favourably compared to recently proposed alternatives. 1 Introduction Deep feed-forward embeddings play a crucial role across a wide range of tasks and applications in image retrieval [1, 8, 15], biometric verification [3, 5, 13, 17, 22, 25, 28], visual product search [21], finding sparse and dense image correspondences [20, 29], etc. Under this approach, complex input patterns (e.g. images) are mapped into a high-dimensional space through a chain of feed-forward transformations, while the parameters of the transformations are learned from a large amount of supervised data. The objective of the learning process is to achieve the proximity of semanticallyrelated patterns (e.g. faces of the same person) and avoid the proximity of semantically-unrelated (e.g. faces of different people) in the target space. In this work, we focus on simple similarity measures such as Euclidean distance or scalar products, as they allow fast evaluation, the use of approximate search methods, and ultimately lead to faster and more scalable systems. Despite the ubiquity of deep feed-forward embeddings, learning them still poses a challenge and is relatively poorly understood. While it is not hard to write down a loss based on tuples of training points expressing the above-mentioned objective, optimizing such a loss rarely works “out of the box” for complex data. This is evidenced by the broad variety of losses, which can be based on pairs, triplets or quadruplets of points, as well as by a large number of optimization tricks employed in recent works to reach state-of-the-art, such as pretraining for the classification task while restricting fine-tuning to top layers only [13, 25], combining the embedding loss with the classification loss [22], using complex data sampling such as mining “semi-hard” training triplets [17]. Most of the proposed losses and optimization tricks come with a certain number of tunable parameters, and the quality of the final embedding is often sensitive to them. Here, we propose a new loss function for learning deep embeddings. In designing this function we strive to avoid highly-sensitive parameters such as margins or thresholds of any kind. While processing a batch of data points, the proposed loss is computed in two stages. Firstly, the two one-dimensional distributions of similarities in the embedding space are estimated, one corresponding to similarities between matching (positive) pairs, the other corresponding to similarities between non-matching (negative) pairs. The distributions are estimated in a simple non-parametric ways 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. 
(as histograms with linearly-interpolated values-to-bins assignments). In the second stage, the overlap between the two distributions is computed by estimating the probability that the two points sampled from the two distribution are in a wrong order, i.e. that a random negative pair has a higher similarity than a random positive pair. The two stages are implemented in a piecewise-differentiable manner, thus allowing to minimize the loss (i.e. the overlap between distributions) using standard backpropagation. The number of bins in the histograms is the only tunable parameter associated with our loss, and it can be set according to the batch size independently of the data itself. In the experiments, we fix this parameter (and the batch size) and demonstrate the versatility of the loss by applying it to four different image datasets of varying complexity and nature. Comparing the new loss to state-of-the-art reveals its favourable performance. Overall, we hope that the proposed loss will be used as an “out-of-the-box” solution for learning deep embeddings that requires little tuning and leads to close to the state-of-the-art results. 2 Related work Recent works on learning embeddings use deep architectures (typically ConvNets [8, 10]) and stochastic optimization. Below we review the loss functions that have been used in recent works. Classification losses. It has been observed in [8] and confirmed later in multiple works (e.g. [15]) that deep networks trained for classification can be used for deep embedding. In particular, it is sufficient to consider an intermediate representation arising in one of the last layers of the deep network. The normalization is added post-hoc. Many of the works mentioned below pre-train their embeddings as a part of the classification networks. Pairwise losses. Methods that use pairwise losses sample pairs of training points and score them independently. The pioneering work on deep embeddings [3] penalizes the deviation from the unit cosine similarity for positive pairs and the deviation from −1 or −0.9 for negative pairs. Perhaps, the most popular of pairwise losses is the contrastive loss [5, 20], which minimizes the distances in the positive pairs and tries to maximize the distances in the negative pairs as long as these distances are smaller than some margin M . Several works pointed to the fact that attempting to collapse all positive pairs may lead to excessive overfitting and therefore suggested losses that mitigate this effect, e.g. a double-margin contrastive loss [12], which drops to zero for positive pairs as long as their distances fall beyond the second (smaller) margin. Finally, several works use non-hinge based pairwise losses such as log-sum-exp and cross-entropy on the similarity values that softly encourage the similarity to be high for positive values and low for negative values (e.g. [25, 28]). The main problem with pairwise losses is that the margin parameters might be hard to tune, especially since the distributions of distances or similarities can be changing dramatically as the learning progresses. While most works “skip” the burn-in period by initializing the embedding to a network pre-trained for classification [25], [22] further demonstrated the benefit of admixing the classification loss during the fine-tuning stage (which brings in another parameter). Triplet losses. 
While pairwise losses care about the absolute values of distances of positive and negative pairs, the quality of embeddings ultimately depends on the relative ordering between positive and negative distances (or similarities). Indeed, the embedding meets the needs of most practical applications as long as the similarities of positive pairs are greater than similarities of negative pairs [19, 27]. The most popular class of losses for metric learning therefore consider triplets of points x0, x+, x−, where x0, x+ form a positive pair and x0, x− form a negative pair and measure the difference in their distances or similarities. Triplet-based loss can then e.g. be aggregated over all triplets using a hinge function of these differences. Triplet-based losses are popular for large-scale embedding learning [4] and in particular for deep embeddings [13, 14, 17, 21, 29]. Setting the margin in the triplet hinge-loss still represents the challenge, as well as sampling “correct” triplets, since the majority of them quickly become associated with zero loss. On the other hand, focusing sampling on the hardest triplets can prevent efficient learning [17]. Triplet-based losses generally make learning less constrained than pairwise losses. This is because for a low-loss embedding, the characteristic distance separating positive and negative pairs can vary across the embedding space (depending on the location of x0), which is not possible for pairwise losses. In some situations, such added flexibility can increase overfitting. Quadruplet losses. Quadruplet-based losses are similar to triplet-based losses as they are computed by looking at the differences in distances/similarities of positive pairs and negative pairs. In the case of quadruplet-based losses, the compared positive and negative pairs do not share a common point (as they do for triplet-based losses). Quadruplet-based losses do not allow the flexibility of tripletbased losses discussed above (as they includes comparisons of positive and negative pairs located in different parts of the embedding space). At the same time, they are not as rigid as pairwise losses, as they only penalize the relative ordering for negative pairs and positive pairs. Nevertheless, despite these appealing properties, quadruplet-based losses remain rarely-used and confined to “shallow” embeddings [9, 31]. We are unaware of deep embedding approaches using quadruplet losses. A potential problem with quadruplet-based losses in the large-scale setting is that the number of all quadruplets is even larger than the number of triplets. Among all groups of losses, our approach is most related to quadruplet-based ones, and can be seen as a way to organize learning of deep embeddings with a quarduplet-based loss in an efficient and (almost) parameter-free manner. 3 Histogram loss We now describe our loss function and then relate it to the quadruplet-based loss. Our loss (Figure 1) is defined for a batch of examples X = {x1, x2, . . . xN} and a deep feedforward network f(·; θ), where θ represents learnable parameters of the network. We assume that the last layer of the network performs length-normalization, so that the embedded vectors {yi = f(xi; θ)} are L2-normalized. We further assume that we know which elements should match to each other and which ones are not. Let mij be +1 if xi and xj form a positive pair (correspond to a match) and mij be −1 if xi and xj are known to form a negative pair (these labels can be derived from class labels or be specified otherwise). 
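As a concrete illustration of this setup, the short sketch below builds the pairwise similarities s_ij and the masks encoding m_ij = +1 / −1 from a batch of embeddings and integer class labels. The use of PyTorch and the function name are our own assumptions for illustration, not part of the paper or its released code.

```python
import torch
import torch.nn.functional as F

def pairwise_similarities(embeddings, labels):
    """Return cosine similarities s_ij and boolean masks for m_ij = +1 / -1 pairs."""
    y = F.normalize(embeddings, p=2, dim=1)             # the paper assumes L2-normalized outputs
    sims = y @ y.t()                                     # s_ij = <y_i, y_j>, values in [-1, 1]
    same = labels.unsqueeze(0) == labels.unsqueeze(1)    # True where class labels match
    off_diag = ~torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    return sims, same & off_diag, ~same                  # similarities, positive mask, negative mask
```

Everything that follows consumes only the similarity values and these two masks.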
Given $\{m_{ij}\}$ and $\{y_i\}$ we can estimate the two probability distributions $p^+$ and $p^-$ corresponding to the similarities in positive and negative pairs respectively. In particular, $S^+ = \{s_{ij} = \langle y_i, y_j\rangle \mid m_{ij} = +1\}$ and $S^- = \{s_{ij} = \langle y_i, y_j\rangle \mid m_{ij} = -1\}$ can be regarded as sample sets from these two distributions. Although the samples in these sets are not independent, we keep all of them to ensure a large sample size. Given the sample sets $S^+$ and $S^-$, we can use any statistical approach to estimate $p^+$ and $p^-$. The fact that these distributions are one-dimensional and bounded to $[-1; +1]$ simplifies the task. Perhaps the most obvious choice in this case is fitting simple histograms with uniformly spaced bins, and we use this approach in our experiments. We therefore consider $R$-dimensional histograms $H^+$ and $H^-$, with nodes $t_1 = -1, t_2, \ldots, t_R = +1$ uniformly filling $[-1; +1]$ with step $\Delta = \frac{2}{R-1}$. We estimate the value $h^+_r$ of the histogram $H^+$ at each node as:
$$h^+_r = \frac{1}{|S^+|} \sum_{(i,j)\,:\,m_{ij}=+1} \delta_{i,j,r}, \qquad (1)$$
where $(i, j)$ spans all positive pairs of points in the batch. The weights $\delta_{i,j,r}$ are chosen so that each pair sample is assigned to the two adjacent nodes:
$$\delta_{i,j,r} = \begin{cases} (s_{ij} - t_{r-1})/\Delta, & \text{if } s_{ij} \in [t_{r-1}; t_r],\\ (t_{r+1} - s_{ij})/\Delta, & \text{if } s_{ij} \in [t_r; t_{r+1}],\\ 0, & \text{otherwise}. \end{cases} \qquad (2)$$
We thus use linear interpolation for each entry in the pair set when assigning it to the two nodes. The estimation of $H^-$ proceeds analogously. Note that the described approach is equivalent to using a "triangular" kernel for density estimation; other kernel functions can be used as well [2]. Once we have the estimates for the distributions $p^+$ and $p^-$, we use them to estimate the probability that the similarity in a random negative pair is higher than the similarity in a random positive pair (the probability of reverse). Generally, this probability can be estimated as:
$$p_{\text{reverse}} = \int_{-1}^{1} p^-(x) \left[ \int_{-1}^{x} p^+(y)\, dy \right] dx = \int_{-1}^{1} p^-(x)\, \Phi^+(x)\, dx = \mathbb{E}_{x \sim p^-}\!\left[\Phi^+(x)\right], \qquad (3)$$
where $\Phi^+(x)$ is the CDF (cumulative distribution function) of $p^+(x)$. The integral (3) can then be approximated and computed as:
$$L(X, \theta) = \sum_{r=1}^{R} \left( h^-_r \sum_{q=1}^{r} h^+_q \right) = \sum_{r=1}^{R} h^-_r\, \phi^+_r, \qquad (4)$$
where $L$ is our loss function (the histogram loss) computed for the batch $X$ and the embedding parameters $\theta$, which approximates the reverse probability; $\phi^+_r = \sum_{q=1}^{r} h^+_q$ is the cumulative sum of the histogram $H^+$. Importantly, the loss (4) is differentiable w.r.t. the pairwise similarities $s \in S^+$ and $s \in S^-$. Indeed, it is straightforward to obtain $\frac{\partial L}{\partial h^-_r} = \sum_{q=1}^{r} h^+_q$ and $\frac{\partial L}{\partial h^+_r} = \sum_{q=r}^{R} h^-_q$ from (4). Furthermore, from (1) and (2) it follows that:
$$\frac{\partial h^+_r}{\partial s_{ij}} = \begin{cases} +\frac{1}{\Delta |S^+|}, & \text{if } s_{ij} \in [t_{r-1}; t_r],\\ -\frac{1}{\Delta |S^+|}, & \text{if } s_{ij} \in [t_r; t_{r+1}],\\ 0, & \text{otherwise}, \end{cases} \qquad (5)$$
for any $s_{ij}$ such that $m_{ij} = +1$ (and analogously for $\frac{\partial h^-_r}{\partial s_{ij}}$). Finally, $\frac{\partial s_{ij}}{\partial y_i} = y_j$ and $\frac{\partial s_{ij}}{\partial y_j} = y_i$. One can thus backpropagate the loss to the scalar-product similarities, then further to the individual embedded points, and then further into the deep embedding network. Relation to quadruplet loss. Our loss first estimates the probability distributions of similarities for positive and negative pairs in a semi-parametric way (using histograms), and then computes the probability of reverse using these distributions via equation (4). An alternative and purely non-parametric way would be to consider all possible pairs of positive and negative pairs contained in the batch and to estimate this probability from such a set of pairs of pairs. This would correspond to evaluating a quadruplet-based loss similarly to [9, 31].
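Before returning to the comparison with quadruplet losses, here is a minimal, hedged sketch of equations (1)–(4): similarities are soft-assigned to R uniformly spaced nodes, the two normalized histograms are built, and p− is integrated against the CDF of p+. Autograd reproduces the gradients of equation (5), so no manual backward pass is written; the function names and the default R are illustrative choices, not taken from the paper's released code.

```python
import torch

def histogram_loss(sims, pos_mask, neg_mask, R=100):
    nodes = torch.linspace(-1.0, 1.0, R, device=sims.device)   # nodes t_1, ..., t_R
    delta = 2.0 / (R - 1)                                       # bin step Delta

    def soft_histogram(s):
        # Triangular (linear-interpolation) weights of eq. (2), averaged over the
        # pair set as in eq. (1); the result is a length-R histogram summing to 1.
        w = 1.0 - (s.unsqueeze(1) - nodes.unsqueeze(0)).abs() / delta
        return w.clamp(min=0).mean(dim=0)

    h_pos = soft_histogram(sims[pos_mask])      # H^+
    h_neg = soft_histogram(sims[neg_mask])      # H^-
    cdf_pos = torch.cumsum(h_pos, dim=0)        # phi^+_r, the CDF of H^+
    return (h_neg * cdf_pos).sum()              # eq. (4): estimate of p_reverse
```

With the helper from the previous sketch, a training step would compute loss = histogram_loss(*pairwise_similarities(net(images), labels)) and backpropagate as usual.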
The number of pairs of pairs in a batch, however, tends to be quartic (a fourth-degree polynomial) in the batch size, rendering exhaustive sampling impractical. This is in contrast to our loss, for which the separation into two stages brings down the complexity to quadratic in the batch size. Another efficient loss based on quadruplets is introduced in [24]. The training is done pairwise, but the threshold separating positive and negative pairs is also learned. We note that quadruplet-based losses as in [9, 31] often encourage the positive pairs to be more similar than negative pairs by some non-zero margin. It is also easy to incorporate such a non-zero margin into our method by defining the loss to be:
$$L_\mu(X, \theta) = \sum_{r=1}^{R} \left( h^-_r \sum_{q=1}^{r+\mu} h^+_q \right), \qquad (6)$$
where the new loss effectively enforces the margin $\mu\Delta$. We however do not use such a modification in our experiments (preliminary experiments did not show any benefit of introducing the margin). 4 Experiments In this section we present the results of embedding learning. We compare our loss to state-of-the-art pairwise and triplet losses, which have been reported in recent works to give state-of-the-art performance on these datasets. Baselines. In particular, we have evaluated the Binomial Deviance loss [28]. While we are aware only of its use in person re-identification approaches, in our experiments it performed very well for product image search and bird recognition, significantly outperforming the baseline pairwise (contrastive) loss reported in [21], once its parameters are tuned. The Binomial Deviance loss is defined as:
$$J_{\text{dev}} = \sum_{i,j \in I} w_{i,j} \ln\!\left(e^{-\alpha (s_{i,j} - \beta) m_{i,j}} + 1\right), \qquad (7)$$
where $I$ is the set of training image indices, and $s_{i,j}$ is the similarity measure between the $i$th and $j$th images (i.e. $s_{i,j} = \text{cosine}(x_i, x_j)$). Furthermore, $m_{i,j}$ and $w_{i,j}$ are the learning supervision and scaling factors respectively:
$$m_{i,j} = \begin{cases} 1, & \text{if } (i,j) \text{ is a positive pair},\\ -C, & \text{if } (i,j) \text{ is a negative pair}, \end{cases} \qquad w_{i,j} = \begin{cases} \frac{1}{n_1}, & \text{if } (i,j) \text{ is a positive pair},\\ \frac{1}{n_2}, & \text{if } (i,j) \text{ is a negative pair}, \end{cases} \qquad (8)$$
where $n_1$ and $n_2$ are the numbers of positive and negative pairs in the training set (or mini-batch) correspondingly, and $\alpha$ and $\beta$ are hyper-parameters. The parameter $C$ is the negative cost for balancing the weights of positive and negative pairs that was introduced in [28]. Our experimental results suggest that the quality of the embedding is sensitive to this parameter. Therefore, in the experiments we report results for two versions of the loss: with $C = 10$, which is close to optimal for the re-identification datasets, and with $C = 25$, which is close to optimal for the product and bird datasets. We have also computed the results for the Lifted Structured Similarity Softmax (LSSS) loss [21] on the CUB-200-2011 [26] and Online Products [21] datasets and additionally applied it to the re-identification datasets. The Lifted Structured Similarity Softmax loss is triplet-based and uses a sophisticated triplet sampling strategy that was shown in [21] to outperform the standard triplet-based loss. Additionally, we performed experiments for the triplet loss [18] that uses "semi-hard negative" triplet sampling. Such sampling considers only triplets violating the margin, but still having the positive distance smaller than the negative distance. Datasets and evaluation metrics. We have evaluated the above-mentioned loss functions on four datasets: CUB-200-2011 [26], CUHK03 [11], Market-1501 [30] and Online Products [21]. All these datasets have been used for evaluating methods of solving embedding learning tasks.
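For reference, the sketch below is a hedged implementation of the Binomial Deviance baseline in equations (7)–(8), written with the softplus identity ln(1 + exp(x)) for numerical stability. The default values for alpha and beta are illustrative assumptions; C corresponds to the negative cost discussed above.

```python
import torch.nn.functional as F

def binomial_deviance_loss(sims, pos_mask, neg_mask, alpha=2.0, beta=0.5, C=25.0):
    s_pos, s_neg = sims[pos_mask], sims[neg_mask]
    # ln(exp(-alpha * (s - beta) * m) + 1) = softplus(-alpha * (s - beta) * m)
    pos_term = F.softplus(-alpha * (s_pos - beta) * 1.0).mean()    # m_ij = +1, weight 1/n1
    neg_term = F.softplus(-alpha * (s_neg - beta) * (-C)).mean()   # m_ij = -C, weight 1/n2
    return pos_term + neg_term
```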
The CUB-200-2011 dataset includes 11,788 images of 200 classes corresponding to different birds species. As in [21] we use the first 100 classes for training (5,864 images) and the remaining classes for testing (5,924 images). The Online Products dataset includes 120,053 images of 22,634 classes. Classes correspond to a number of online products from eBay.com. There are approximately 5.3 images for each product. We used the standard split from [21]: 11,318 classes (59,551 images) are used for training and 11,316 classes (60,502 images) are used for testing. The images from the CUB-200-2011 and the Online Products datasets are resized to 256 by 256, keeping the original aspect ratio (padding is done when needed). The CUHK03 dataset is commonly used for the person re-identification task. It includes 13,164 images of 1,360 pedestrians captured from 3 pairs of cameras. Each identity is observed by two cameras and has 4.8 images in each camera on average. Following most of the previous works we use the “CUHK03-labeled” version of the dataset with manually-annotated bounding boxes. According to the CUHK03 evaluation protocol, 1,360 identities are split into 1,160 identities for training, 100 for validation and 100 for testing. We use the first split from the CUHK03 standard split set which is provided with the dataset. The Market-1501 dataset includes 32,643 images of 1,501 pedestrians, each pedestrian is captured by several cameras (from two to six). The dataset is divided randomly into the test set of 750 identities and the train set of 751 identities. Following [21, 28, 30], we report Recall@K1 metric for all the datasets. For CUB-200-2011 and Online products, every test image is used as the query in turn and remaining images are used as the gallery correspondingly. In contrast, for CUHK03 single-shot results are reported. This means that one image for each identity from the test set is chosen randomly in each of its two camera views. Recall@K values for 100 random query-gallery sets are averaged to compute the final result for a given split. For the Market-1501 dataset, we use the multi-shot protocol (as is done in most other works), as there are many images of the same person in the gallery set. Architectures used. For training on the CUB-200-2011 and the Online Products datasets we used the same architecture as in [21], which conincides with the GoogleNet architecture [23] up to the ‘pool5’ and the inner product layers, while the last layer is used to compute the embedding vectors. The GoogleNet part is pretrained on ImageNet ILSVRC [16] and the last layer is trained from scratch. As in [21], all GoogLeNet layers are fine-tuned with the learning rate that is ten times less than 1Recall@K is the probability of getting the right match among first K gallery candidates sorted by similarity. the learning rate of the last layer. We set the embedding size to 512 for all the experiments with this architecture. We reproduced the results for the LSSS loss [21] for these two datasets. For the architectures that use the Binomial Deviance loss, Histogram loss and Triplet loss the iteration number and the parameters value (for the former) are chosen using the validation set. For training on CUHK03 and Market-1501 we used the Deep Metric Learning (DML) architecture introduced in [28]. It has three CNN streams for the three parts of the pedestrian image (head and upper torso, torso, lower torso and legs). 
Each of the streams consists of 2 convolution layers followed by the ReLU non-linearity and max-pooling. The first convolution layers for the three streams have shared weights. Descriptors are produced by the last 500-dimensional inner product layer that takes the concatenated outputs of the three streams as an input. Batch construction. Batches are assembled by sampling classes and then images for each sampled class in the batch. We iterate over all the classes and all the images corresponding to the classes, sampling images in turn. The sequences of the classes and of the corresponding images are shuffled for every new epoch. CUB-200-2011 and Market-1501 include more than ten images per class on average, so we limit the number of images of the same class in the batch to ten for the experiments on these datasets. We used ADAM [7] for stochastic optimization in all of the experiments. For all losses the learning rate is set to 1e-4 for all the experiments except those on the CUB-200-2011 dataset, for which we found a learning rate of 1e-5 more effective. For the re-identification datasets the learning rate was decreased by a factor of 10 after 100K iterations; for the other experiments the learning rate was fixed. The number of iterations for each method was chosen using the validation set. Results. The Recall@K values for the experiments on CUB-200-2011, Online Products, CUHK03 and Market-1501 are shown in Figure 3 and Figure 4. The Binomial Deviance loss (7) gives the best results for CUB-200-2011 and Online Products with the C parameter set to 25. We previously checked several values of C on the CUB-200-2011 dataset and found the value C = 25 to be the optimal one. We also observed that with smaller values of C the results are significantly worse than those presented in Figure 3-left (for C equal to 2 the best Recall@1 is 43.50%). For CUHK03 the situation is reversed: the Histogram loss gives a boost of 2.64% over the Binomial Deviance loss with C = 10 (which we found to be optimal for this dataset). The results are shown in Figure 4-left. Embedding distributions of the positive and negative pairs from the CUHK03 test set for different methods are shown in Figures 5b, 5c and 5d. For the Market-1501 dataset our method also outperforms the Binomial Deviance loss for both values of C. In contrast to the experiments with CUHK03, the Binomial Deviance loss appeared to perform better with C set to 25 than to 10 for Market-1501. We have also investigated how the size of the histogram bin affects the model performance for the Histogram loss. As shown in Figure 2-left, the results for CUB-200-2011 remain stable for bin sizes equal to 0.005, 0.01, 0.02 and 0.04 (these values correspond to 400, 200, 100 and 50 bins in the histograms). In our method, distributions of similarities of training data are estimated by distributions of similarities within mini-batches. Therefore we also show results for the Histogram loss for various batch size values (Figure 2-right). Larger batches are preferable: for CUHK03, Recall@K for a batch size of 256 is uniformly better than Recall@K for 128 and 64. We also observed similar behaviour for Market-1501. Additionally, we present our final results (batch size set to 256) for CUHK03 and Market-1501 in Table 1. For CUHK03, Recall@K values for 5 random splits were averaged.

Table 1: Final results (Recall@K, %) with the batch size set to 256.
Dataset       r = 1   r = 5   r = 10   r = 15   r = 20
CUHK03        65.77   92.85   97.62    98.94    99.43
Market-1501   59.47   80.73   86.94    89.28    91.09
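Two short sketches tied to the protocol above: the class-balanced batch construction (shuffled classes, at most ten images per class), and the Recall@K metric reported in the tables and figures. Both are minimal illustrations under our own naming assumptions, not the released Caffe code.

```python
import random
from collections import defaultdict

def make_batches(labels, batch_size=256, per_class=10):
    """Group image indices by class, shuffle, and fill batches class by class."""
    by_class = defaultdict(list)
    for idx, lab in enumerate(labels):
        by_class[lab].append(idx)
    classes = list(by_class)
    random.shuffle(classes)                      # class order re-shuffled every epoch
    batch, batches = [], []
    for c in classes:
        imgs = by_class[c][:]
        random.shuffle(imgs)
        batch.extend(imgs[:per_class])           # at most `per_class` images of one class
        while len(batch) >= batch_size:
            batches.append(batch[:batch_size])
            batch = batch[batch_size:]
    if batch:
        batches.append(batch)
    return batches
```

```python
import torch
import torch.nn.functional as F

def recall_at_k(query_emb, query_labels, gallery_emb, gallery_labels, k=1):
    """Fraction of queries whose K most similar gallery items contain a same-class match."""
    sims = F.normalize(query_emb, dim=1) @ F.normalize(gallery_emb, dim=1).t()
    topk = sims.topk(k, dim=1).indices
    hits = (gallery_labels[topk] == query_labels.unsqueeze(1)).any(dim=1)
    return hits.float().mean().item()
```

When the query set and the gallery coincide (as for CUB-200-2011 and Online Products), the self-similarity of each query should be masked out before taking the top K.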
To the best of our knowledge, these results corresponded to state-of-the-art on CUHK03 and Market-1501 at the moment of submission. To summarize the results of the comparison: the new (Histogram) loss gives the best results on the two person re-identification problems. For CUB-200-2011 and Online Products it came very close to the best loss (Binomial Deviance with C = 25). Interestingly, the histogram loss uniformly outperformed the triplet-based LSSS loss [21] in our experiments including two datasets from [21]. Importantly, the new loss does not require to tune parameters associated with it (though we have found learning with our loss to be sensitive to the learning rate). 5 Conclusion In this work we have suggested a new loss function for learning deep embeddings, called the Histogram loss. Like most previous losses, it is based on the idea of making the distributions of the similarities of the positive and negative pairs less overlapping. Unlike other losses used for deep embeddings, the new loss comes with virtually no parameters that need to be tuned. It also incorporates information across a large number of quadruplets formed from training samples in the mini-batch and implicitly takes into account all of such quadruplets. We have demonstrated the competitive results of the new loss on a number of datasets. In particular, the Histogram loss outperformed other losses for the person re-identification problem on CUHK03 and Market-1501 datasets. The code for Caffe [6] is available at: https://github.com/madkn/HistogramLoss. Acknowledgement: This research is supported by the Russian Ministry of Science and Education grant RFMEFI57914X0071.
1. What is the focus of the paper regarding deep embeddings? 2. What are the strengths of the proposed histogram loss? 3. Do you have any concerns or questions regarding the paper's experiments or comparisons with other works?
Review
Review This paper presents a new loss for learning deep embeddings, which is measured by the overlap between the distributions of similarities for positive and negative point pairs. Since this loss is differentiable, it can be backpropagated into a deep embedding network. The experimental results on several tasks, such as person re-identification, image search and fine-grained bird recognition, reveal the effectiveness of the new loss for embedding learning. The proposed histogram loss is new and well-designed. It has several advantages, including: 1. It is piecewise-differentiable, so that it can be minimized by standard backpropagation. 2. It has only one tunable parameter, which is not even sensitive. 3. Its computational complexity is relatively low, which makes learning more efficient. My concerns about the paper include three aspects: 1. The authors said they did not use equation (6) in their experiments. Why? It seems to be an interesting idea worth trying. 2. In the experiments, the batch size was fixed. As the distributions of similarities for positive and negative point pairs are computed in a batch, how do the results change when varying the batch size? 3. Since the main competitor, LSSS [21], also reports results on CARS196, it’s better to test the proposed loss on it as well.
NIPS
1. What is the focus and contribution of the paper on deep embeddings? 2. What are the strengths of the proposed approach, particularly in terms of its simplicity and computational efficiency? 3. How does the Histogram Loss compare to other loss functions, such as the quadruplet loss, in terms of its effectiveness and practicality? 4. What are the limitations of the proposed method, if any? 5. How do the experimental results support the claims made by the authors regarding the competitiveness of the Histogram Loss?
Review
Review The paper has proposed a new loss function for learning deep embeddings, called the Histogram Loss. The new loss function is based on the idea of making the distributions of the similarities of the positive and negative pairs less overlapping. The new loss has virtually no parameters compared to other losses. The paper also demonstrated the competitive results of the new loss on a few datasets. Strengths of the paper: 1. The paper is well and clearly written. The authors have explained their methods clearly and have analysed the relation to the quadruplet loss. The experiment settings are also explained in detail. 2. The paper has proposed a novel loss function called the Histogram Loss. The new loss function has virtually no parameters. An alternative way can achieve a similar goal using the quadruplet loss, but with an impractical complexity. The new loss brings down the complexity to quadratic in batch size. 3. The experiments have shown the competitive results of the new loss function. The paper has conducted extensive experiments on a few datasets and is convincing enough to show consistent results of the new loss.
Title Learning Deep Embeddings with Histogram Loss Abstract We suggest a loss for learning deep embeddings. The new loss does not introduce parameters that need to be tuned and results in very good embeddings across a range of datasets and problems. The loss is computed by estimating two distribution of similarities for positive (matching) and negative (non-matching) sample pairs, and then computing the probability of a positive pair to have a lower similarity score than a negative pair based on the estimated similarity distributions. We show that such operations can be performed in a simple and piecewise-differentiable manner using 1D histograms with soft assignment operations. This makes the proposed loss suitable for learning deep embeddings using stochastic optimization. In the experiments, the new loss performs favourably compared to recently proposed alternatives. 1 Introduction Deep feed-forward embeddings play a crucial role across a wide range of tasks and applications in image retrieval [1, 8, 15], biometric verification [3, 5, 13, 17, 22, 25, 28], visual product search [21], finding sparse and dense image correspondences [20, 29], etc. Under this approach, complex input patterns (e.g. images) are mapped into a high-dimensional space through a chain of feed-forward transformations, while the parameters of the transformations are learned from a large amount of supervised data. The objective of the learning process is to achieve the proximity of semanticallyrelated patterns (e.g. faces of the same person) and avoid the proximity of semantically-unrelated (e.g. faces of different people) in the target space. In this work, we focus on simple similarity measures such as Euclidean distance or scalar products, as they allow fast evaluation, the use of approximate search methods, and ultimately lead to faster and more scalable systems. Despite the ubiquity of deep feed-forward embeddings, learning them still poses a challenge and is relatively poorly understood. While it is not hard to write down a loss based on tuples of training points expressing the above-mentioned objective, optimizing such a loss rarely works “out of the box” for complex data. This is evidenced by the broad variety of losses, which can be based on pairs, triplets or quadruplets of points, as well as by a large number of optimization tricks employed in recent works to reach state-of-the-art, such as pretraining for the classification task while restricting fine-tuning to top layers only [13, 25], combining the embedding loss with the classification loss [22], using complex data sampling such as mining “semi-hard” training triplets [17]. Most of the proposed losses and optimization tricks come with a certain number of tunable parameters, and the quality of the final embedding is often sensitive to them. Here, we propose a new loss function for learning deep embeddings. In designing this function we strive to avoid highly-sensitive parameters such as margins or thresholds of any kind. While processing a batch of data points, the proposed loss is computed in two stages. Firstly, the two one-dimensional distributions of similarities in the embedding space are estimated, one corresponding to similarities between matching (positive) pairs, the other corresponding to similarities between non-matching (negative) pairs. The distributions are estimated in a simple non-parametric ways 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. 
(as histograms with linearly-interpolated values-to-bins assignments). In the second stage, the overlap between the two distributions is computed by estimating the probability that the two points sampled from the two distribution are in a wrong order, i.e. that a random negative pair has a higher similarity than a random positive pair. The two stages are implemented in a piecewise-differentiable manner, thus allowing to minimize the loss (i.e. the overlap between distributions) using standard backpropagation. The number of bins in the histograms is the only tunable parameter associated with our loss, and it can be set according to the batch size independently of the data itself. In the experiments, we fix this parameter (and the batch size) and demonstrate the versatility of the loss by applying it to four different image datasets of varying complexity and nature. Comparing the new loss to state-of-the-art reveals its favourable performance. Overall, we hope that the proposed loss will be used as an “out-of-the-box” solution for learning deep embeddings that requires little tuning and leads to close to the state-of-the-art results. 2 Related work Recent works on learning embeddings use deep architectures (typically ConvNets [8, 10]) and stochastic optimization. Below we review the loss functions that have been used in recent works. Classification losses. It has been observed in [8] and confirmed later in multiple works (e.g. [15]) that deep networks trained for classification can be used for deep embedding. In particular, it is sufficient to consider an intermediate representation arising in one of the last layers of the deep network. The normalization is added post-hoc. Many of the works mentioned below pre-train their embeddings as a part of the classification networks. Pairwise losses. Methods that use pairwise losses sample pairs of training points and score them independently. The pioneering work on deep embeddings [3] penalizes the deviation from the unit cosine similarity for positive pairs and the deviation from −1 or −0.9 for negative pairs. Perhaps, the most popular of pairwise losses is the contrastive loss [5, 20], which minimizes the distances in the positive pairs and tries to maximize the distances in the negative pairs as long as these distances are smaller than some margin M . Several works pointed to the fact that attempting to collapse all positive pairs may lead to excessive overfitting and therefore suggested losses that mitigate this effect, e.g. a double-margin contrastive loss [12], which drops to zero for positive pairs as long as their distances fall beyond the second (smaller) margin. Finally, several works use non-hinge based pairwise losses such as log-sum-exp and cross-entropy on the similarity values that softly encourage the similarity to be high for positive values and low for negative values (e.g. [25, 28]). The main problem with pairwise losses is that the margin parameters might be hard to tune, especially since the distributions of distances or similarities can be changing dramatically as the learning progresses. While most works “skip” the burn-in period by initializing the embedding to a network pre-trained for classification [25], [22] further demonstrated the benefit of admixing the classification loss during the fine-tuning stage (which brings in another parameter). Triplet losses. 
While pairwise losses care about the absolute values of distances of positive and negative pairs, the quality of embeddings ultimately depends on the relative ordering between positive and negative distances (or similarities). Indeed, the embedding meets the needs of most practical applications as long as the similarities of positive pairs are greater than similarities of negative pairs [19, 27]. The most popular class of losses for metric learning therefore consider triplets of points x0, x+, x−, where x0, x+ form a positive pair and x0, x− form a negative pair and measure the difference in their distances or similarities. Triplet-based loss can then e.g. be aggregated over all triplets using a hinge function of these differences. Triplet-based losses are popular for large-scale embedding learning [4] and in particular for deep embeddings [13, 14, 17, 21, 29]. Setting the margin in the triplet hinge-loss still represents the challenge, as well as sampling “correct” triplets, since the majority of them quickly become associated with zero loss. On the other hand, focusing sampling on the hardest triplets can prevent efficient learning [17]. Triplet-based losses generally make learning less constrained than pairwise losses. This is because for a low-loss embedding, the characteristic distance separating positive and negative pairs can vary across the embedding space (depending on the location of x0), which is not possible for pairwise losses. In some situations, such added flexibility can increase overfitting. Quadruplet losses. Quadruplet-based losses are similar to triplet-based losses as they are computed by looking at the differences in distances/similarities of positive pairs and negative pairs. In the case of quadruplet-based losses, the compared positive and negative pairs do not share a common point (as they do for triplet-based losses). Quadruplet-based losses do not allow the flexibility of tripletbased losses discussed above (as they includes comparisons of positive and negative pairs located in different parts of the embedding space). At the same time, they are not as rigid as pairwise losses, as they only penalize the relative ordering for negative pairs and positive pairs. Nevertheless, despite these appealing properties, quadruplet-based losses remain rarely-used and confined to “shallow” embeddings [9, 31]. We are unaware of deep embedding approaches using quadruplet losses. A potential problem with quadruplet-based losses in the large-scale setting is that the number of all quadruplets is even larger than the number of triplets. Among all groups of losses, our approach is most related to quadruplet-based ones, and can be seen as a way to organize learning of deep embeddings with a quarduplet-based loss in an efficient and (almost) parameter-free manner. 3 Histogram loss We now describe our loss function and then relate it to the quadruplet-based loss. Our loss (Figure 1) is defined for a batch of examples X = {x1, x2, . . . xN} and a deep feedforward network f(·; θ), where θ represents learnable parameters of the network. We assume that the last layer of the network performs length-normalization, so that the embedded vectors {yi = f(xi; θ)} are L2-normalized. We further assume that we know which elements should match to each other and which ones are not. Let mij be +1 if xi and xj form a positive pair (correspond to a match) and mij be −1 if xi and xj are known to form a negative pair (these labels can be derived from class labels or be specified otherwise). 
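As a small illustration, the pair labels m_ij can be derived from per-sample class labels as in the following sketch (a hypothetical helper of our own in PyTorch, not taken from the paper's released Caffe code):

```python
# Hypothetical helper (not from the paper's code): build the matrix of pair labels
# m_ij from integer class labels, with +1 for matching pairs and -1 for non-matching.
import torch

def pair_labels(labels: torch.Tensor) -> torch.Tensor:
    """labels: (N,) integer class ids -> (N, N) matrix of +1 / -1 pair labels."""
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    # Self-pairs on the diagonal come out as +1 here; in practice they would be
    # excluded when forming the positive-pair sample set.
    return same.int() * 2 - 1
```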
Given {m_ij} and {y_i} we can estimate the two probability distributions p+ and p− corresponding to the similarities in positive and negative pairs respectively. In particular, S+ = {s_ij = ⟨x_i, x_j⟩ | m_ij = +1} and S− = {s_ij = ⟨x_i, x_j⟩ | m_ij = −1} can be regarded as sample sets from these two distributions. Although samples in these sets are not independent, we keep all of them to ensure a large sample size. Given sample sets S+ and S−, we can use any statistical approach to estimate p+ and p−. The fact that these distributions are one-dimensional and bounded to [−1; +1] simplifies the task. Perhaps the most obvious choice in this case is fitting simple histograms with uniformly spaced bins, and we use this approach in our experiments. We therefore consider R-dimensional histograms H+ and H−, with the nodes t_1 = −1, t_2, ..., t_R = +1 uniformly filling [−1; +1] with the step Δ = 2/(R − 1). We estimate the value h_r^+ of the histogram H+ at each node as

h_r^+ = \frac{1}{|S^+|} \sum_{(i,j):\, m_{ij}=+1} \delta_{i,j,r}, (1)

where (i, j) spans all positive pairs of points in the batch. The weights δ_{i,j,r} are chosen so that each pair sample is assigned to the two adjacent nodes:

\delta_{i,j,r} = \begin{cases} (s_{ij} - t_{r-1})/\Delta, & \text{if } s_{ij} \in [t_{r-1}; t_r], \\ (t_{r+1} - s_{ij})/\Delta, & \text{if } s_{ij} \in [t_r; t_{r+1}], \\ 0, & \text{otherwise}. \end{cases} (2)

We thus use linear interpolation for each entry in the pair set when assigning it to the two nodes. The estimation of H− proceeds analogously. Note that the described approach is equivalent to using a “triangular” kernel for density estimation; other kernel functions can be used as well [2]. Once we have the estimates for the distributions p+ and p−, we use them to estimate the probability of the similarity in a random negative pair being greater than the similarity in a random positive pair (the probability of reverse). Generally, this probability can be estimated as

p_{\text{reverse}} = \int_{-1}^{1} p^-(x) \left[ \int_{-1}^{x} p^+(y)\, dy \right] dx = \int_{-1}^{1} p^-(x)\, \Phi^+(x)\, dx = \mathbb{E}_{x \sim p^-}\!\left[ \Phi^+(x) \right], (3)

where Φ+(x) is the CDF (cumulative distribution function) of p+(x). The integral (3) can then be approximated and computed as

L(X, \theta) = \sum_{r=1}^{R} \left( h_r^- \sum_{q=1}^{r} h_q^+ \right) = \sum_{r=1}^{R} h_r^- \phi_r^+, (4)

where L is our loss function (the histogram loss) computed for the batch X and the embedding parameters θ, which approximates the reverse probability; \phi_r^+ = \sum_{q=1}^{r} h_q^+ is the cumulative sum of the histogram H+. Importantly, the loss (4) is differentiable w.r.t. the pairwise similarities s ∈ S+ and s ∈ S−. Indeed, it is straightforward to obtain \partial L / \partial h_r^- = \sum_{q=1}^{r} h_q^+ and \partial L / \partial h_r^+ = \sum_{q=r}^{R} h_q^- from (4). Furthermore, from (1) and (2) it follows that

\frac{\partial h_r^+}{\partial s_{ij}} = \begin{cases} +\frac{1}{\Delta |S^+|}, & \text{if } s_{ij} \in [t_{r-1}; t_r], \\ -\frac{1}{\Delta |S^+|}, & \text{if } s_{ij} \in [t_r; t_{r+1}], \\ 0, & \text{otherwise}, \end{cases} (5)

for any s_{ij} such that m_{ij} = +1 (and analogously for \partial h_r^- / \partial s_{ij}). Finally, \partial s_{ij} / \partial x_i = x_j and \partial s_{ij} / \partial x_j = x_i. One can thus backpropagate the loss to the scalar product similarities, then further to the individual embedded points, and then further into the deep embedding network.
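To make the two-stage computation concrete, the following is a minimal sketch of the Histogram loss for one batch. It is our own illustration in PyTorch (the authors' released code is in Caffe), and the function name and default number of nodes R are our choices; the batch is assumed to contain at least one positive and one negative pair.

```python
import torch

def histogram_loss(embeddings, labels, R=100):
    """embeddings: (N, D) L2-normalized vectors y_i; labels: (N,) class ids; R: number of histogram nodes."""
    sims = embeddings @ embeddings.t()                            # pairwise scalar-product similarities s_ij
    same = labels.unsqueeze(0) == labels.unsqueeze(1)             # positive-pair indicator
    upper = torch.triu(torch.ones_like(sims), diagonal=1).bool()  # count each (i, j) pair once, skip i = j
    s_pos = sims[same & upper]                                    # samples from S+
    s_neg = sims[(~same) & upper]                                 # samples from S-

    nodes = torch.linspace(-1.0, 1.0, R, device=sims.device)      # nodes t_1, ..., t_R
    delta = 2.0 / (R - 1)                                         # step between adjacent nodes

    def soft_histogram(s):
        # Linear ("triangular") assignment of each similarity to its two adjacent nodes, as in Eqs. (1)-(2).
        w = torch.clamp(1.0 - (s.unsqueeze(1) - nodes.unsqueeze(0)).abs() / delta, min=0.0)
        return w.sum(dim=0) / s.numel()

    h_pos = soft_histogram(s_pos)                                 # H+
    h_neg = soft_histogram(s_neg)                                 # H-
    phi_pos = torch.cumsum(h_pos, dim=0)                          # cumulative sums, i.e. the estimated CDF of p+
    return (h_neg * phi_pos).sum()                                # Eq. (4): estimated probability of reverse
```

Because every step is piecewise-differentiable, gradients with respect to the similarities, and hence the embeddings, follow automatically, matching Eq. (5).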
Relation to quadruplet loss. Our loss first estimates the probability distributions of similarities for positive and negative pairs in a semi-parametric way (using histograms), and then computes the probability of reverse using these distributions via equation (4). An alternative and purely non-parametric way would be to consider all possible pairs of positive and negative pairs contained in the batch and to estimate this probability from such a set of pairs of pairs. This would correspond to evaluating a quadruplet-based loss similarly to [9, 31]. The number of pairs of pairs in a batch, however, tends to be quartic (a fourth-degree polynomial) in the batch size, rendering exhaustive sampling impractical. This is in contrast to our loss, for which the separation into two stages brings down the complexity to quadratic in the batch size. Another efficient loss based on quadruplets is introduced in [24]. The training is done pairwise, but the threshold separating positive and negative pairs is also learned. We note that quadruplet-based losses as in [9, 31] often encourage the positive pairs to be more similar than negative pairs by some non-zero margin. It is also easy to incorporate such a non-zero margin into our method by defining the loss to be

L_\mu(X, \theta) = \sum_{r=1}^{R} \left( h_r^- \sum_{q=1}^{r+\mu} h_q^+ \right), (6)

where the new loss effectively enforces the margin µΔ. We, however, do not use such a modification in our experiments (preliminary experiments do not show any benefit of introducing the margin).

4 Experiments

In this section we present the results of embedding learning. We compare our loss to state-of-the-art pairwise and triplet losses, which have been reported in recent works to give state-of-the-art performance on these datasets. Baselines. In particular, we have evaluated the Binomial Deviance loss [28]. While we are aware only of its use in person re-identification approaches, in our experiments it performed very well for product image search and bird recognition, significantly outperforming the baseline pairwise (contrastive) loss reported in [21], once its parameters are tuned. The Binomial Deviance loss is defined as

J_{\text{dev}} = \sum_{i,j \in I} w_{i,j} \ln\!\left( \exp\!\big( -\alpha (s_{i,j} - \beta)\, m_{i,j} \big) + 1 \right), (7)

where I is the set of training image indices and s_{i,j} is the similarity measure between the i-th and j-th images (i.e. s_{i,j} = cosine(x_i, x_j)). Furthermore, m_{i,j} and w_{i,j} are the learning supervision and scaling factors respectively:

m_{i,j} = \begin{cases} 1, & \text{if } (i,j) \text{ is a positive pair}, \\ -C, & \text{if } (i,j) \text{ is a negative pair}, \end{cases} \qquad w_{i,j} = \begin{cases} \frac{1}{n_1}, & \text{if } (i,j) \text{ is a positive pair}, \\ \frac{1}{n_2}, & \text{if } (i,j) \text{ is a negative pair}, \end{cases} (8)

where n_1 and n_2 are the numbers of positive and negative pairs in the training set (or mini-batch) correspondingly, and α and β are hyper-parameters. Parameter C is the negative cost for balancing weights for positive and negative pairs that was introduced in [28]. Our experimental results suggest that the quality of the embedding is sensitive to this parameter. Therefore, in the experiments we report results for two versions of the loss: with C = 10, which is close to optimal for the re-identification datasets, and with C = 25, which is close to optimal for the product and bird datasets. We have also computed the results for the Lifted Structured Similarity Softmax (LSSS) loss [21] on the CUB-200-2011 [26] and Online Products [21] datasets and additionally applied it to the re-identification datasets. The Lifted Structured Similarity Softmax loss is triplet-based and uses a sophisticated triplet sampling strategy that was shown in [21] to outperform the standard triplet-based loss. Additionally, we performed experiments for the triplet loss [18] that uses “semi-hard negative” triplet sampling. Such sampling considers only triplets violating the margin, but still having the positive distance smaller than the negative distance.
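For reference, a minimal sketch of the Binomial Deviance baseline of Eqs. (7)-(8) is given below, under the same assumptions as the previous snippet (our own PyTorch illustration; the default values of alpha and beta are placeholders, and C = 25 follows one of the settings discussed in the text):

```python
import torch

def binomial_deviance_loss(sims, pos_mask, neg_mask, alpha=2.0, beta=0.5, C=25.0):
    """sims: (N, N) cosine similarities; pos_mask / neg_mask: boolean matrices marking labeled pairs."""
    n1, n2 = pos_mask.sum(), neg_mask.sum()                   # numbers of positive / negative pairs
    m = torch.zeros_like(sims)
    m[pos_mask], m[neg_mask] = 1.0, -C                        # supervision factors m_ij from Eq. (8)
    w = torch.zeros_like(sims)
    w[pos_mask], w[neg_mask] = 1.0 / n1, 1.0 / n2             # balancing weights w_ij from Eq. (8)
    dev = torch.log1p(torch.exp(-alpha * (sims - beta) * m))  # ln(exp(-alpha (s - beta) m) + 1)
    return (w * dev)[pos_mask | neg_mask].sum()
```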
Datasets and evaluation metrics. We have evaluated the above-mentioned loss functions on four datasets: CUB-200-2011 [26], CUHK03 [11], Market-1501 [30] and Online Products [21]. All these datasets have been used for evaluating methods of solving embedding learning tasks. The CUB-200-2011 dataset includes 11,788 images of 200 classes corresponding to different bird species. As in [21] we use the first 100 classes for training (5,864 images) and the remaining classes for testing (5,924 images). The Online Products dataset includes 120,053 images of 22,634 classes. Classes correspond to a number of online products from eBay.com. There are approximately 5.3 images for each product. We used the standard split from [21]: 11,318 classes (59,551 images) are used for training and 11,316 classes (60,502 images) are used for testing. The images from the CUB-200-2011 and the Online Products datasets are resized to 256 by 256, keeping the original aspect ratio (padding is done when needed). The CUHK03 dataset is commonly used for the person re-identification task. It includes 13,164 images of 1,360 pedestrians captured from 3 pairs of cameras. Each identity is observed by two cameras and has 4.8 images in each camera on average. Following most of the previous works we use the “CUHK03-labeled” version of the dataset with manually-annotated bounding boxes. According to the CUHK03 evaluation protocol, 1,360 identities are split into 1,160 identities for training, 100 for validation and 100 for testing. We use the first split from the CUHK03 standard split set which is provided with the dataset. The Market-1501 dataset includes 32,643 images of 1,501 pedestrians; each pedestrian is captured by several cameras (from two to six). The dataset is divided randomly into a test set of 750 identities and a train set of 751 identities. Following [21, 28, 30], we report the Recall@K metric for all the datasets (Recall@K is the probability of getting the right match among the first K gallery candidates sorted by similarity). For CUB-200-2011 and Online Products, every test image is used as the query in turn and the remaining images are used as the gallery correspondingly. In contrast, for CUHK03 single-shot results are reported. This means that one image for each identity from the test set is chosen randomly in each of its two camera views. Recall@K values for 100 random query-gallery sets are averaged to compute the final result for a given split. For the Market-1501 dataset, we use the multi-shot protocol (as is done in most other works), as there are many images of the same person in the gallery set. Architectures used. For training on the CUB-200-2011 and the Online Products datasets we used the same architecture as in [21], which coincides with the GoogLeNet architecture [23] up to the ‘pool5’ and the inner product layers, while the last layer is used to compute the embedding vectors. The GoogLeNet part is pretrained on ImageNet ILSVRC [16] and the last layer is trained from scratch. As in [21], all GoogLeNet layers are fine-tuned with a learning rate that is ten times less than the learning rate of the last layer. We set the embedding size to 512 for all the experiments with this architecture. We reproduced the results for the LSSS loss [21] for these two datasets. For the architectures that use the Binomial Deviance loss, Histogram loss and Triplet loss, the number of iterations and the parameter values (for the former) are chosen using the validation set. For training on CUHK03 and Market-1501 we used the Deep Metric Learning (DML) architecture introduced in [28]. It has three CNN streams for the three parts of the pedestrian image (head and upper torso, torso, lower torso and legs).
Each of the streams consists of 2 convolution layers followed by the ReLU non-linearity and max-pooling. The first convolution layers for the three streams have shared weights. Descriptors are produced by the last 500-dimensional inner product layer that has the concatenated outputs of the three streams as an input.

Table 1: Final results (Recall@K, %) for CUHK03 and Market-1501 with batch size set to 256.
Dataset       r = 1    r = 5    r = 10    r = 15    r = 20
CUHK03        65.77    92.85    97.62     98.94     99.43
Market-1501   59.47    80.73    86.94     89.28     91.09

Each batch is formed by sampling several classes and a number of images for each sampled class in the batch. We iterate over all the classes and all the images corresponding to the classes, sampling images in turn. The sequences of the classes and of the corresponding images are shuffled for every new epoch. CUB-200-2011 and Market-1501 include more than ten images per class on average, so we limit the number of images of the same class in the batch to ten for the experiments on these datasets. We used ADAM [7] for stochastic optimization in all of the experiments. For all losses the learning rate is set to 1e-4 for all the experiments except the ones on the CUB-200-2011 dataset, for which we have found a learning rate of 1e-5 more effective. For the re-identification datasets the learning rate was decreased by a factor of 10 after 100K iterations; for the other experiments the learning rate was fixed. The number of iterations for each method was chosen using the validation set. Results. The Recall@K values for the experiments on CUB-200-2011, Online Products, CUHK03 and Market-1501 are shown in Figure 3 and Figure 4. The Binomial Deviance loss (7) gives the best results for CUB-200-2011 and Online Products with the C parameter set to 25. We previously checked several values of C on the CUB-200-2011 dataset and found the value C = 25 to be the optimal one. We also observed that with smaller values of C the results are significantly worse than those presented in Figure 3-left (for C equal to 2 the best Recall@1 is 43.50%). For CUHK03 the situation is reversed: the Histogram loss gives a boost of 2.64% over the Binomial Deviance loss with C = 10 (which we found to be optimal for this dataset). The results are shown in Figure 4-left. Embedding distributions of the positive and negative pairs from the CUHK03 test set for different methods are shown in Figures 5b, 5c and 5d. For the Market-1501 dataset our method also outperforms the Binomial Deviance loss for both values of C. In contrast to the experiments with CUHK03, the Binomial Deviance loss appeared to perform better with C set to 25 than to 10 for Market-1501. We have also investigated how the size of the histogram bin affects the model performance for the Histogram loss. As shown in Figure 2-left, the results for CUB-200-2011 remain stable for bin sizes equal to 0.005, 0.01, 0.02 and 0.04 (these values correspond to 400, 200, 100 and 50 bins in the histograms). In our method, distributions of similarities of training data are estimated by distributions of similarities within mini-batches. Therefore we also show results for the Histogram loss for various batch size values (Figure 2-right). Larger batches are preferable: for CUHK03, Recall@K for batch size 256 is uniformly better than Recall@K for 128 and 64. We also observed similar behaviour for Market-1501. Additionally, we present our final results (batch size set to 256) for CUHK03 and Market-1501 in Table 1. For CUHK03, Recall@K values for 5 random splits were averaged.
To the best of our knowledge, these results corresponded to state-of-the-art on CUHK03 and Market-1501 at the moment of submission. To summarize the results of the comparison: the new (Histogram) loss gives the best results on the two person re-identification problems. For CUB-200-2011 and Online Products it came very close to the best loss (Binomial Deviance with C = 25). Interestingly, the histogram loss uniformly outperformed the triplet-based LSSS loss [21] in our experiments including two datasets from [21]. Importantly, the new loss does not require to tune parameters associated with it (though we have found learning with our loss to be sensitive to the learning rate). 5 Conclusion In this work we have suggested a new loss function for learning deep embeddings, called the Histogram loss. Like most previous losses, it is based on the idea of making the distributions of the similarities of the positive and negative pairs less overlapping. Unlike other losses used for deep embeddings, the new loss comes with virtually no parameters that need to be tuned. It also incorporates information across a large number of quadruplets formed from training samples in the mini-batch and implicitly takes into account all of such quadruplets. We have demonstrated the competitive results of the new loss on a number of datasets. In particular, the Histogram loss outperformed other losses for the person re-identification problem on CUHK03 and Market-1501 datasets. The code for Caffe [6] is available at: https://github.com/madkn/HistogramLoss. Acknowledgement: This research is supported by the Russian Ministry of Science and Education grant RFMEFI57914X0071.
1. What is the focus of the paper regarding deep learning approaches? 2. What is the novel aspect of the proposed loss function compared to existing approaches? 3. How does the reviewer assess the advantage of the proposed loss function? 4. What are the concerns regarding the experiments and comparisons with other works? 5. Can you provide additional information or figures to support your claims?
Review
Review The paper addresses the problem of learning feature embeddings using deep learning approaches. Existing approaches can be grouped based on which loss configuration of positive and negative tuples they use to carry out the learning, ranging from pairs to quadruplets. The authors of this paper instead propose a two-stage loss: first, the similarity distributions of positive and negative pairs in the batch are estimated; then the overlap between the two distributions is computed. The goal is to reduce the overlap between the two distributions. Advantageously, the proposed loss has only one tunable parameter (the number of bins of the histograms approximating the positive and negative distributions), to which it is quite robust as shown by the authors in Figure 2. From the point of view of results, the authors tested the proposed loss on four datasets: on two of them they showed lower performance w.r.t. state-of-the-art, on the other two (related to person re-identification) they showed the best performance. I think that designing a loss that takes into account the statistics of the batch positive/negative examples rather than only pairs (resp. triplets, quadruplets) of them is a good idea. It also constitutes a more general approach that potentially encloses the pairs/triplets/quadruplets losses. The fact that there are basically no parameters to tune in the loss is a further advantage w.r.t. the pairs/triplets/quadruplets losses (the only parameter to tune is the number of bins in the histograms that approximate the positive/negative distributions). What is slightly less convincing to me are the proposed experiments. The authors tested the proposed loss on 4 datasets comprising online products and person re-identification. On the first two datasets the proposed loss performs slightly worse than Binomial Deviance (BD) and similar to Lifted Structured Similarity Softmax (LSSS). It is not clear to me whether the parameters for the BD were tuned or not: only the results for C=25 and C=10 are shown. Therefore there is the chance that the BD performance could increase. On the two person re-identification datasets, instead, the Histogram loss outperforms the other two approaches. Figure 5 compares the positive and negative distributions after training. What is strange to me is the fact that the Histogram loss does not separate the two populations significantly more than the other approaches. What I would like to see here is a figure showing how the distributions look before and after training.
NIPS
1. What is the novel contribution of the paper in the field of deep metric embeddings? 2. How does the proposed loss function differ from other existing losses in terms of hyperparameter absence? 3. Can you explain the intuition behind the proposed loss function and how it encourages separation in the embedding space? 4. What are the strengths and weaknesses of the experimental results presented in the paper? 5. Are there any limitations or areas for improvement in the proposed approach?
Review
Review This paper introduces a new differentiable loss for training deep metric embeddings. The main advantage of this loss is the absence of hyperparameters. In essence, given a sample batch, the loss estimates the probability that a negative sample pair's inner product is larger than a positive sample pair's inner product. Experiments were conducted on visual datasets and were evaluated using the Recall@K metric. The loss function is intuitive and well-presented. In theory, this loss encourages the inner products of positive and negative sample pairs to be distinct, therefore effectively separating the embedding space into different classes. Experimental results show that the histogram loss improves over many existing loss functions, but is outperformed by the binomial deviance loss on the CUB-200-2011 and Online Products datasets. Many baseline results are missing for the CUHK03 and Market-1501 datasets.
NIPS
Title CASTLE: Regularization via Auxiliary Causal Graph Discovery Abstract Regularization improves generalization of supervised models to out-of-sample data. Prior works have shown that prediction in the causal direction (effect from cause) results in lower testing error than the anti-causal direction. However, existing regularization methods are agnostic of causality. We introduce Causal Structure Learning (CASTLE) regularization and propose to regularize a neural network by jointly learning the causal relationships between variables. CASTLE learns the causal directed acyclical graph (DAG) as an adjacency matrix embedded in the neural network’s input layers, thereby facilitating the discovery of optimal predictors. Furthermore, CASTLE efficiently reconstructs only the features in the causal DAG that have a causal neighbor, whereas reconstruction-based regularizers suboptimally reconstruct all input features. We provide a theoretical generalization bound for our approach and conduct experiments on a plethora of synthetic and real publicly available datasets demonstrating that CASTLE consistently leads to better out-of-sample predictions as compared to other popular benchmark regularizers. 1 Introduction A primary concern of machine learning, and deep learning in particular, is generalization performance on out-of-sample data. Over-parameterized deep networks efficiently learn complex models and are, therefore, susceptible to overfit to training data. Common regularization techniques to mitigate overfitting include data augmentation [1, 2], dropout [3, 4, 5], adversarial training [6], label smoothing [7], and layer-wise strategies [8, 9, 10] to name a few. However, these methods are agnostic of the causal relationships between variables limiting their potential to identify optimal predictors based on graphical topology, such as the causal parents of the target variable. An alternative approach to regularization leverages supervised reconstruction, which has been proven theoretically and demonstrated empirically to improve generalization performance by obligating hidden bottleneck layers to reconstruct input features [11, 12]. However, supervised auto-encoders suboptimally reconstruct all features, including those without causal neighbors, i.e., adjacent cause or effect nodes. Naively reconstructing these variables does not improve regularization and representation learning for the predictive model. In some cases, it may be harmful to generalization performance, e.g., reconstructing a random noise variable. ∗Equal contribution 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. Although causality has been a topic of research for decades, only recently has cause and effect relationships been incorporated into machine learning methodologies and research. Recently, researchers at the confluence of machine learning and causal modeling have advanced causal discovery [13, 14], causal inference [15, 16], model explainability [17], domain adaptation [18, 19, 20] and transfer learning [21] among countless others. The existing synergy between these two disciplines has been recognized for some time [22], and recent work suggests that causality can improve and complement machine learning regularization [23, 24, 25]. 
Furthermore, many recent causal works have demonstrated and acknowledged the optimality of predicting in the causal direction, i.e., predicting effect from cause, which results in less test error than predicting in the anti-causal direction [21, 26, 27, 28]. Contributions. In this work, we introduce a novel regularization method called CASTLE (CAusal STructure LEarning) regularization. CASTLE regularization uses causal graph discovery as an auxiliary task when training a supervised model to improve the generalization performance of the primary prediction task. Specifically, CASTLE learns the causal directed acyclical graph (DAG) under continuous optimization as an adjacency matrix embedded in a feed-forward neural network’s input layers. By jointly learning the causal graph, CASTLE can surpass the benefits provided by feature selection regularizers by identifying optimal predictors, such as the target variable’s causal parents. Additionally, CASTLE further improves upon auto-encoder-based regularization [12] by reconstructing only the input features that have neighbors (adjacent nodes) in the causal graph. Regularization of a predictive model to satisfy the causal relationships among feature and target variables effectively guide the model towards the direction of better out-of-sample generalization guarantees. We provide a theoretical generalization bound for CASTLE and demonstrate improved performance against a variety of benchmark methods on a plethora of real and synthetic datasets. 2 Related Works We compare to the related work in the simplest supervised learning setting where we desire learning a function from some featuresX to a target variable Y given some data of the variables X and Y to improve out-of-sample generalization within the same distribution. This is a significant departure from the branches of machine learning algorithms, such as in semi-supervised learning and domain adaptation, where the regularizer is constructed with information other than variablesX and Y . Regularization controls model complexity and mitigates overfitting. `1 [29] and `2 [30] regularization are commonly used regularization approaches where the former is used when a sparse model is preferred. For deep neural networks, dropout regularization [3, 4, 5] has been shown to be superior in practice to `p regularization techniques. Other capacity-based regularization techniques commonly used in practice include early stopping [31], parameter sharing [31], gradient clipping [32], batch normalization [33], data augmentation [2], weight noise [34], and MixUp [35] to name a few. Normbased regularizers with sparsity, e.g. Lasso [29], are used to guide feature selection for supervised models. The work of [12] on supervised auto-encoders (SAE) theoretically and empirically shows that adding a reconstruction loss of the input features functions as a regularizer for predictive models. However, this method does not select which features to reconstruct and therefore suffers performance degradation when tasked to reconstruct features that are noise or unrelated to the target variables. Two existing works [25, 23] attempt to draw the connection between causality and regularization. Based on an analogy between overfitting and confounding in linear models, [25] proposed a method to determine the regularization hyperparameter in linear Ridge or Lasso regression models by estimating the strength of confounding. [23] use causality detectors [36, 27] to weight a sparsity regularizer, e.g. 
`1, for performing non-linear causality analysis and generating multivariate causal hypotheses. Neither of the works has the same objective as us — improving the generalization performance of supervised learning models, nor do they overlap methodologically by using causal DAG discovery. Causal discovery is an NP-hard problem that requires a brute-force search through a non-convex combinatorial search space, limiting the existing algorithms to reaching global optima for only small problems. Recent approaches have successfully accelerated these methods by using a novel acyclicity constraint and formulating the causal discovery problem as a continuous optimization over real matrices (avoiding combinatorial search) in the linear [37] and nonlinear [38, 39] cases. CASTLE incorporates these recent causal discovery approaches of [37, 38] to improve regularization for prediction problems in general. As shown in Table 1, CASTLE regularization provides two additional benefits: causal prediction and target selection. First, CASTLE identifies causal predictors (e.g., causal parents if they exist) rather than correlated features. Furthermore, CASTLE improves upon reconstruction regularization by only reconstructing features that have neighbors in the underlying DAG. We refer to this advantage as “target selection”. Collectively these benefits contribute to the improved generalization of CASTLE. Next we introduce our notation (Section 3.1) and provide more details of these benefits (Section 3.2). 3 Methodology In this section, we provide a problem formulation with causal preliminaries for CASTLE. Then we provide a motivational discussion, regularizer methodology, and generalization theory for CASTLE. 3.1 Problem Formulation In the standard supervised learning setting, we denote the input feature variables and target variable, byX = [X1, ..., Xd] ∈ X and Y ∈ Y , respectively, where X ⊆ Rd is a d-dimensional feature space and Y ⊆ R is a one-dimensional target space. Let PX,Y denote the joint distribution of the features and target. Let [N ] denote the set {1, ..., N}. We observe a dataset, D = { (Xi, Yi), i ∈ [N ] } , consisting of N i.i.d. samples drawn from PX,Y . The goal of a supervised learning algorithm A is to find a predictive model, fY : X → Y , in a hypothesis space H that can explain the association between the features and the target variable. In the learning algorithm A, the predictive model f̂Y is trained on a finite number of samples in D, to predict well on the out-of-sample data generated from the same distribution PX,Y . However, overfitting, a mismatch between training and testing performance of f̂Y , can occur if the hypothesis spaceH is too complex and the training data fails to represent the underlying distribution PX,Y . This motivates the usage of regularization to reduce the hypothesis space’s complexityH so that the learning algorithm A will only find the desired function to explain the data. Assumptions of the underlying distribution dictate regularization choice. For example, if we believe only a subset of features is associated with the label Y , then `1 regularization [29] can be beneficial in creating sparsity for feature selection. CASTLE regularization is based on the assumption that a causal DAG exists among the input features and target variable. 
In the causal framework of [40], a causal structure over a set of variables X is a DAG in which each vertex v ∈ V corresponds to a distinct element in X, and each edge e ∈ E represents a direct functional relationship between two neighboring variables. Formally, we assume the variables in our dataset satisfy a nonparametric structural equation model (NPSEM) as defined in Definition 1. The word "nonparametric" means we do not make any assumptions about the underlying functions fi in the NPSEM. In this work, we characterize optimal learning by a predictive model as discovering the function Y = fY(Pa(Y), uY) in the NPSEM [40]. Definition 1. (NPSEMs) Given a DAG G = (V = [d+1], E), the random variables X̃ = [Y, X] satisfy an NPSEM if Xi = fi(Pa(Xi), ui), i ∈ [d+1], where Pa(Xi) denotes the parents (direct causes) of Xi in G and u_{[d+1]} = (u1, ..., u_{d+1}) are random noise variables. 3.2 Why CASTLE regularization matters We now present a graphical example to explain the two benefits of CASTLE mentioned in Section 2, causal prediction and target selection. Consider Figure 1, where we are given nine feature variables X1, ..., X9 and a target variable Y. Causal Prediction. The target variable Y is generated by a function fY(Pa(Y), uY) from Definition 1, where the parents of Y are Pa(Y) = {X2, X3}. In CASTLE regularization, we train a predictive model f̂Y jointly with learning the DAG among X and Y. The features that the model uses to predict Y are the causal parents of Y in the learned DAG. Such a model is sample-efficient in uncovering the true function fY(Pa(Y), uY) and generalizes well on out-of-sample data. Our theoretical analysis in Section 3.4 validates this advantage when there exists a DAG structure among the variables X and Y. However, there may exist other variables that predict Y more accurately than the causal parents Pa(Y). For example, if the function from Y to X8 is a one-to-one linear mapping, we can predict Y trivially from the feature X8. In our objective function introduced later, the prediction loss of Y is weighted higher than the causal regularizer. Among predictive models with a similar prediction loss for Y, our objective function still prefers the model that minimizes the causal regularizer and uses the causal parents. However, it will favor the easier predictor if one exists and gives a much lower prediction loss for Y. In this case, the learned DAG may differ from the true DAG, but we reiterate that we are focused on the problem of generalization rather than causal discovery. Target Selection. Consider the variables X5, X6 and X7, which share parents (X2 and X3) with Y in Figure 1. The functions X5 = f5(X2, u5), X6 = f6(X3, u6), and X7 = f7(X3, u7) may have some learnable similarity (e.g., basis functions and representations) with Y = fY(X2, X3, uY) that we can exploit by training a shared predictive model of Y with the auxiliary task of predicting X5, X6 and X7. From the causal graph topology, CASTLE discovers the optimal features that should act as the auxiliary task for learning fY. CASTLE learns the related functions jointly in a shared model, which is proven to improve the generalization performance of predicting Y by learning shared basis functions and representations [41]. 3.3 CASTLE regularization Let X̃ = Y × X denote the data space, P(X,Y) = P_X̃ the data distribution, and ‖·‖_F the Frobenius norm. We define the random variables X̃ = [X̃1, X̃2, ..., X̃_{d+1}] := [Y, X1, ..., Xd] ∈ X̃.
Let X = [X1, ..., Xd] denote the N × d input data matrix, Y the N-dimensional label vector, and X̃ = [Y, X] the N × (d+1) matrix containing the data of all the variables in the DAG. To facilitate exposition, we first introduce CASTLE in the linear setting. Here, the parameters are a (d+1) × (d+1) adjacency matrix W with zeros on the diagonal. The objective function is given as

Ŵ ∈ argmin_W (1/N) ‖Y − X̃ W_{:,1}‖² + λ R_DAG(X̃, W),   (1)

where W_{:,1} is the first column of W. We define the DAG regularization loss R_DAG(X̃, W) as

R_DAG(X̃, W) = L_W + R_W + β V_W,   (2)

where L_W = (1/N) ‖X̃ − X̃W‖²_F, R_W = (Tr(e^{W ∘ W}) − d − 1)², V_W is the ℓ1 norm of W, ∘ is the Hadamard product, and e^M is the matrix exponential of M. The DAG loss R_DAG(X̃, W) was introduced in [37] for learning linear DAGs by continuous optimization. Here we use it as the regularizer for our linear regression model Y = X̃ W_{:,1} + ε. From Theorem 1 in [37], we know that the graph given by W is a DAG if and only if R_W = 0. The prediction Ŷ = X̃ W_{:,1} is the projection of Y onto the parents of Y in the learned DAG. This increases the stability of linear regression when collinearity or multicollinearity among the input features is an issue. Continuous optimization for learning nonparametric causal DAGs has been proposed in the prior work of [38]. In a similar manner, we also adapt CASTLE to nonlinear cases. Suppose the predictive model for Y and the function generating each feature Xk in the causal DAG are parameterized by an M-layer feed-forward neural network fΘ : X̃ → X̃ with ReLU activations and layer size h. Figure 2 shows the network architecture of fΘ. This joint network can be instantiated as d+1 sub-networks fk with shared hidden layers, where fk is responsible for reconstructing the variable X̃k. We let W_1^k denote the (d+1) × h weight matrix in the input layer of fk, k ∈ [d+1]. We set the k-th row of W_1^k to zero so that fk does not utilize X̃k in its prediction of X̃k. We let W_m, m = 2, ..., M−1, denote the weight matrices in the network's shared hidden layers, and W_M = [W_M^1, ..., W_M^{d+1}] denotes the h × (d+1) weight matrix of the output layer. Explicitly, we define the sub-network fk as

fk(X̃) = φ(··· φ(φ(X̃ W_1^k) W_2) ··· W_{M−1}) W_M^k,   (3)

where φ(·) is the ReLU activation function. The function fΘ is given as fΘ(X̃) = [f1(X̃), ..., f_{d+1}(X̃)]. We also let fΘ(X̃) denote the prediction for the N-sample matrix X̃, where [fΘ(X̃)]_{i,k} = fk(X̃_i) for i ∈ [N] and k ∈ [d+1]. All network parameters are collected into the sets

Θ1 = {W_1^k}_{k=1}^{d+1},   Θ = Θ1 ∪ {W_m}_{m=2}^{M}.   (4)

The training objective function of fΘ is

Θ̂ ∈ argmin_Θ (1/N) ‖Y − [fΘ(X̃)]_{:,1}‖² + λ R_DAG(X̃, fΘ).   (5)

The DAG loss R_DAG(X̃, fΘ) is given as

R_DAG(X̃, fΘ) = L_N(fΘ) + R_{Θ1} + β V_{Θ1}.   (6)

Because the k-th row of the input weight matrix W_1^k is set to zero, L_N(fΘ) = (1/N) ‖X̃ − fΘ(X̃)‖²_F differs from the standard reconstruction loss in auto-encoders (e.g., SAE) by only allowing the model to reconstruct each feature and the target variable from the others. In contrast, auto-encoders reconstruct each feature using all the features, including itself. V_{Θ1} is the ℓ1 norm of the weight matrices W_1^k in Θ1, and the term R_{Θ1} is given as

R_{Θ1} = (Tr(e^{M ∘ M}) − d − 1)²,   (7)

where M is a (d+1) × (d+1) matrix such that [M]_{k,j} is the ℓ2-norm of the k-th row of the matrix W_1^j. When the acyclicity loss R_{Θ1} is minimized, the sub-networks f1, ..., f_{d+1} form a DAG among the variables; R_{Θ1} obligates the sub-networks to reconstruct only the input features that have neighbors (adjacent nodes) in the learned DAG.
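To make (6)-(7) concrete, the following is a minimal NumPy sketch of how R_DAG could be computed from the input weight matrices. It is an illustration under the conventions above (each W_1^j is (d+1) × h with its j-th row held at zero, and recon holds the sub-network outputs fΘ(X̃)); the function and variable names are ours, and this is a sketch rather than the authors' released implementation.

```python
import numpy as np
from scipy.linalg import expm

def castle_dag_regularizer(W1_list, X_tilde, recon, beta=1.0):
    """Sketch of R_DAG in Eqs. (6)-(7): masked reconstruction loss,
    acyclicity penalty on M, and an L1 sparsity term.

    W1_list : list of d+1 input weight matrices; W1_list[j] has shape
              (d+1, h), and row k holds the weights from input X~_k into
              sub-network j (row j is kept at zero).
    X_tilde : (N, d+1) array of [Y, X_1, ..., X_d].
    recon   : (N, d+1) array whose column j is f_j(X~), the sub-network output.
    """
    N, dp1 = X_tilde.shape

    # L_N(f_Theta): each variable is reconstructed from the others only,
    # because row j of W1_list[j] is zero.
    L_N = np.sum((X_tilde - recon) ** 2) / N

    # M[k, j] = ||row k of W_1^j||_2, i.e. a weighted adjacency matrix.
    M = np.stack([np.linalg.norm(W1, axis=1) for W1 in W1_list], axis=1)

    # R_Theta1 = (Tr(exp(M o M)) - d - 1)^2, zero iff M encodes a DAG.
    R_acyc = (np.trace(expm(M * M)) - dp1) ** 2

    # V_Theta1: L1 norm of the input weight matrices.
    V_l1 = sum(np.abs(W1).sum() for W1 in W1_list)

    return L_N + R_acyc + beta * V_l1
```

The acyclicity term follows the trace-of-matrix-exponential characterization of [37]: it vanishes exactly when M, viewed as a weighted adjacency matrix over the d+1 variables, corresponds to a DAG.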
We note that converting the nonlinear version of CASTLE into a linear form can be accomplished by removing all the hidden and output layers and setting the dimension h of the input weight matrices to 1 in (3), i.e., fk(X̃) = X̃ W_1^k and fΘ(X̃) = [X̃ W_1^1, ..., X̃ W_1^{d+1}] = X̃W, which recovers the linear model in (1)-(2). Managing computational complexity. If the number of features is large, it is computationally expensive to train all the sub-networks simultaneously. We can mitigate this by sub-sampling. At each iteration of gradient descent, we randomly sample a subset of features to reconstruct and only minimize the prediction loss and reconstruction loss on these sub-sampled features. Note that this does not introduce a hidden-confounder issue, since Y and each sub-sampled feature are still predicted from all the other variables. The sparsity and DAG constraints on the weight matrices are unchanged at each iteration. In this way, we keep the training complexity per iteration at a manageable level, approximately at the time and space complexity of training a few networks jointly. We include experiments on CASTLE scalability with respect to input feature size in Appendix C. 3.4 Generalization bound for CASTLE regularization In this section, we analyze theoretically why CASTLE regularization can improve generalization performance by introducing a generalization bound for our model in Figure 2. Our bound is based on the PAC-Bayesian learning theory in [42, 43, 44]. Here, we re-interpret the DAG regularizer as a special prior or assumption on the input weight matrices of our model and use existing PAC-Bayes theory to prove the generalization of our algorithm. Traditionally, PAC-Bayes bounds are only applied to randomized models, such as Bayesian or Gibbs classifiers. Here, our bound is applied to our deterministic model by using the recent derandomization formalism from [45, 46]. We acknowledge that developing tighter and non-vacuous generalization bounds for deep neural networks is still a challenging and evolving topic in learning theory. The bounds are often stated with many constants arising from different steps of the proof. For the reader's convenience, we provide a simplified version of our bound in Theorem 1. The proof, details (e.g., the constants), and discussions about the assumptions are provided in Appendix A. We begin with a few assumptions before stating our bound. Assumption 1. For any sample X̃ = (Y, X) ∼ P_X̃, X̃ has bounded ℓ2 norm, i.e., ‖X̃‖_2 ≤ B for some B > 0. Assumption 2. The loss function L(fΘ) = ‖fΘ(X̃) − X̃‖² is sub-Gaussian under the distribution P_X̃ with a variance factor s², i.e., ∀t > 0, E_{P_X̃}[exp(t(L(fΘ) − L_P(fΘ)))] ≤ exp(t²s²/2). Theorem 1. Let fΘ : X̃ → X̃ be an M-layer ReLU feed-forward network with layer size h, each of whose weight matrices has spectral norm bounded by κ. Then, under Assumptions 1 and 2, for any δ, γ > 0, with probability 1 − δ over a training set of N i.i.d. samples, for any Θ in (4), we have

L_P(fΘ) ≤ 4 L_N(fΘ) + (1/N) [ R_{Θ1} + C1 (V_{Θ1} + V_{Θ2}) + log(8/δ) ] + C3,   (8)

where L_P(fΘ) is the expected reconstruction loss of X̃ under P_X̃; L_N(fΘ), V_{Θ1} and R_{Θ1} are defined in (6)-(7); V_{Θ2} is the ℓ2 norm of the network weights in the output and shared hidden layers; and C1 and C3 are constants depending on γ, d, h, B, s and M. The statistical properties of the reconstruction loss in learning linear DAGs, e.g.,
L_W = (1/N) ‖X̃ − X̃W‖²_F, have been well studied in the literature: the loss minimizer provably recovers a true DAG with high probability on finite samples, and hence is consistent for both Gaussian SEMs [47] and non-Gaussian SEMs [48, 49]. Note also that the regularizers R_W and R_{Θ1} are not part of the results in [47, 48, 49]. However, the works of [37, 38] empirically show that using R_W or R_{Θ1} on top of the reconstruction loss leads to more efficient and more accurate DAG learning than existing approaches. Our theoretical result on the reconstruction loss explains the benefit of R_W or R_{Θ1} for the generalization performance of predicting Y. This provides theoretical support for our CASTLE regularizer in supervised learning. However, the objectives of DAG discovery, e.g., identifying the Markov blanket of Y, are beyond the scope of our analysis. The bound in (8) justifies R_{Θ1} in general, in both the linear and nonlinear cases, if the underlying distribution P_X̃ is factorized according to some causal DAG. We note that the expected loss L_P(fΘ) is upper bounded by the empirical loss L_N(fΘ), the norms V_{Θ1} and V_{Θ2}, and R_{Θ1}, which measures (via the acyclicity constraint) how close the model is to a DAG. From (8), it is clear that not minimizing R_{Θ1} is an acceptable strategy asymptotically, i.e., in the large-sample limit (large N), because R_{Θ1}/N becomes negligible. This aligns with the consistency theory in [47, 48, 49] for linear models. However, for small N, a preferred strategy is to train a model fΘ by minimizing L_N(fΘ) and R_{Θ1} jointly. The two objectives do not conflict, because the samples are generated under the DAG structure in P_X̃. Minimizing R_{Θ1} can decrease the upper bound on L_P(fΘ) in (8), improve the generalization performance of fΘ, and facilitate the convergence of fΘ to the true model. If P_X̃ does not correspond to any causal DAG, such as for image data, then there will be a tradeoff between minimizing R_{Θ1} and L_N(fΘ). In this case, R_{Θ1} becomes harder to minimize, and generalization may not benefit from adding CASTLE. However, this is a rare case, since causal structure is inherent in most datasets. Our experiments in the next section demonstrate that CASTLE regularization outperforms popular regularizers on a variety of datasets. 4 Experiments In this section, we empirically evaluate CASTLE as a regularization method for improving generalization performance. We present our benchmark methods and training architecture, followed by our synthetic and publicly available data results. Benchmarks. We benchmark CASTLE against common regularizers, including: early stopping (Baseline) [31], L1 [29], L2 [30], dropout [3] with drop rates of 20% and 50%, denoted DO(0.2) and DO(0.5) respectively, SAE [12], batch normalization (BN) [33], data augmentation or input noise (IN) [2], and MixUp (MU) [35], in no particular order. For each regularizer with tunable hyperparameters, we performed a standard grid search. For the weight-decay regularizers L1 and L2 we searched over λ_{ℓp} ∈ {0.1, 0.01, 0.001}, and for input noise we use Gaussian noise with mean 0 and standard deviation σ ∈ {0.1, 0.01, 0.001}. L1 and L2 were applied at every dense layer. BN and DO were applied after every dense layer and are active only during training. Because each regularization method converges at a different rate, we use early stopping on a validation set to terminate each benchmark's training; early stopping alone is what we refer to as our Baseline. Network architecture and training. We implemented CASTLE in TensorFlow; a simplified, framework-agnostic sketch of the objective this implementation optimizes is given below.
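The sketch evaluates the objective in (5) for the nonlinear model of (3): the masked sub-network forward pass, the squared prediction error on Y (the first column of X̃, index 0 in code), and λ times R_DAG, reusing the castle_dag_regularizer sketch given after (7). The parameter container theta and its keys are hypothetical names of ours under the same assumed conventions, not the API of the released code.

```python
import numpy as np

def castle_forward(theta, X_tilde):
    """Joint forward pass of the d+1 sub-networks in Eq. (3).

    theta["W1_list"]  : list of d+1 input matrices, each (d+1, h), with
                        row k of the k-th matrix fixed to zero.
    theta["W_shared"] : list of (h, h) matrices for the shared hidden layers.
    theta["W_out"]    : (h, d+1) output matrix; column k is the head for X~_k.
    """
    relu = lambda a: np.maximum(a, 0.0)
    cols = []
    for k, W1k in enumerate(theta["W1_list"]):
        h = relu(X_tilde @ W1k)           # sub-network-specific input layer
        for Wm in theta["W_shared"]:
            h = relu(h @ Wm)              # shared hidden layers
        cols.append(h @ theta["W_out"][:, k])
    return np.stack(cols, axis=1)         # (N, d+1) reconstruction of [Y, X]

def castle_objective(theta, X_tilde, lam=1.0, beta=1.0):
    """Training objective of Eq. (5): prediction loss on Y plus lambda * R_DAG."""
    recon = castle_forward(theta, X_tilde)
    pred_loss = np.mean((X_tilde[:, 0] - recon[:, 0]) ** 2)  # Y is column 0 here
    # castle_dag_regularizer is the sketch given after Eq. (7).
    r_dag = castle_dag_regularizer(theta["W1_list"], X_tilde, recon, beta=beta)
    return pred_loss + lam * r_dag
```

In training, this scalar objective is minimized with a gradient-based optimizer (Adam in the experiments below), and the architecture and training details are as follows.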
Our proposed architecture comprises d+1 sub-networks with shared hidden layers, as shown in Figure 2. In the linear case, V_W is the ℓ1 norm of W. In the nonlinear case, V_{Θ1} is the ℓ1 norm of the input weight matrices W_1^k, k ∈ [d+1]. To make a clear comparison with L2 regularization, we exclude the capacity term V_{Θ2} from CASTLE, although it is part of our generalization bound in (8). Since predicting the target variable is our primary task, we benchmark CASTLE against a common network architecture for this task. Specifically, we use a network with two hidden layers of d+1 neurons with ReLU activation. Each benchmark method is initialized and seeded identically with the same random weights. For dataset preprocessing, all continuous variables are standardized to have a mean of 0 and a variance of 1. Each model is trained using the Adam optimizer with a learning rate of 0.001 for up to a maximum of 200 epochs. An early-stopping regime halts training with a patience of 30 epochs. 4.1 Regularization on Synthetic Data Synthetic data generation. Given a DAG G, we generate functional relationships between each variable and its respective parent(s), with additive Gaussian noise (mean 0, variance 1) applied to each variable. In the linear case, each variable equals the sum of its parents plus noise. In the nonlinear case, each variable equals the sum of the sigmoid of its parents plus noise. We provide further details on our synthetic DGP and pseudocode in Appendix B. Consider Table 2: using our nonlinear DGP, we generated 1000 test samples according to the DAG in Figure 1. We then used 10-fold cross-validation to train and validate each benchmark on training sets of varying size n. Each model was evaluated on the test set using the weights saved at the lowest validation error. Table 2 shows that CASTLE improves over all experimental benchmarks. We present similar results for our linear experiments in Appendix B. Code is provided at https://bitbucket.org/mvdschaar/mlforhealthlabpub. Dissecting CASTLE. In the synthetic environment, we know the causal relationships with certainty. We analyze three aspects of CASTLE regularization using synthetic data. Because we are comparing across randomly simulated DAGs with differing functional relationships, the magnitude of the regression testing error varies between runs. To normalize for this, we examine model performance in terms of each model's average rank over the folds. If we have r regularizers, the best and worst possible ranks are r and one, respectively (i.e., the higher the rank, the better). We used 10-fold cross-validation to terminate model training and tested each model on a held-out test set of 1000 samples. First, we examine the impact of increasing the feature size, i.e., the DAG vertex cardinality |G|. We do this by randomly generating a DAG of size |G| ∈ {10, 50, 100, 150} with 50|G| training samples. We repeat this ten times for each DAG cardinality. On the left-hand side of Figure 3, CASTLE has the highest rank of all benchmarks and does not degrade with increasing |G|. Second, we analyze the impact of increasing dataset size. We randomly generate DAGs of size |G| ∈ {10, 50, 100, 150}, which we use to create datasets of α|G| samples, where α ∈ {20, 50, 100, 150, 200}. We repeat this ten times for each dataset size. In the middle plot of Figure 3, we see that CASTLE has superior performance for all dataset sizes, and, as expected, all benchmark methods (except for SAE) start to converge toward the average rank at large data sizes (α = 200).
Third, we analyze our method's sensitivity to noise variables, i.e., variables disconnected from the target variable in G. We randomly generate DAGs of size |G| = 50 to create datasets with 50|G| samples. We randomly add v ∈ {20i}_{i=0}^{5} (i.e., 0, 20, ..., 100) noise variables, normally distributed with mean 0 and unit variance. We repeat this process for ten different DAG instantiations. The results on the right-hand side of Figure 3 show that our method is not sensitive to the existence of disconnected noise variables, whereas SAE performance degrades as the number of uncorrelated input features increases. This highlights the benefit of target selection based on the DAG topology. In Appendix C, we provide an analysis of the adjacency matrix weights learned under various random DAG configurations, e.g., a target with parents, an orphaned target, etc. There, we highlight CASTLE in comparison to SAE for target selection by showing that the adjacency matrix weights for noise variables are near zero. We also provide a sensitivity analysis of the parameter λ from (5) and results for additional experiments demonstrating that CASTLE does not reconstruct noisy (neighborless) variables in the underlying causal DAG. 4.2 Regularization on Real Data We perform regression and classification experiments on a spectrum of publicly available datasets from [50], including Boston Housing (BH), Wine Quality (WQ), Facebook Metrics (FB), Bioconcentration (BC), Student Performance (SP), Community (CM), Contraception Choice (CC), Pima Diabetes (PD), Las Vegas Ratings (LV), Statlog Heart (SH), and Retinopathy (RP). For each dataset, we randomly reserve 20% of the samples as a testing set. We perform 10-fold cross-validation on the remaining 80%. As the results in Table 3 show, CASTLE provides improved regularization across all datasets for both regression and classification tasks. Additionally, CASTLE consistently ranks as the top regularizer (shown graphically in Appendix C.3), with no benchmark method emerging as a consensus runner-up. This emphasizes the stability of CASTLE as a reliable regularizer. In Appendix C, we provide additional experiments on several other datasets, an ablation study highlighting our sources of gain, and real-world dataset statistics. 5 Conclusion We have introduced CASTLE regularization, a novel regularization method that jointly learns the causal graph to improve generalization performance in comparison to existing capacity-based and reconstruction-based regularization methods. We used existing PAC-Bayes theory to provide a theoretical generalization bound for CASTLE. We have shown experimentally that CASTLE is insensitive to increasing feature dimensionality, dataset size, and uncorrelated noise variables. Furthermore, we have shown that CASTLE regularization improves performance on a plethora of real datasets and, in the worst case, never degrades performance. We hope that CASTLE will serve as a general-purpose regularizer that can be leveraged by the entire machine learning community. Broader Impact One of the big challenges of machine learning, and deep learning in particular, is generalization to out-of-sample data. Regularization is necessary and is used to prevent overfitting, thereby promoting generalization. In this work, we have presented a novel regularization method inspired by causality. Since the applicability of our approach spans all problems where causal relationships exist between variables, there are countless beneficiaries of our research.
Apart from the general machine learning community, the beneficiaries of our research include practitioners in the social sciences (sociology, psychology, etc.), the natural sciences (physics, biology, etc.), and healthcare, among countless others. These fields have already been exploiting causality for some time and serve as a natural launchpad for deploying and leveraging CASTLE. With that said, our method does not immediately apply to certain architectures, such as CNNs, where causal relationships between inputs are ambiguous or perhaps non-existent. Acknowledgments This work was supported by GlaxoSmithKline (GSK), the US Office of Naval Research (ONR), and the National Science Foundation (NSF) grant 1722516. We thank all reviewers for their invaluable comments and suggestions.
1. What is the main contribution of the paper, and how does it differ from previous works? 2. What are the strengths of the proposed approach, particularly in its application of existing ideas? 3. What are the weaknesses of the paper, especially regarding its empirical results and comparisons with other methods? 4. How does the reviewer assess the clarity and novelty of the paper's content? 5. Are there any suggestions for improving the paper, such as exploring different regularizers or evaluating the method on non-linear networks?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The proposed work introduces a regularization method for improving generalization performance by learning a causal graph. The proposed work also discovers the optimal features based on the topology of the causal graph, and predicting these features acts as an auxiliary task for learning a predictive model. This is in contrast to methods that use a reconstruction loss over all the features as a regularizer and force the model to reconstruct features regardless of any direct dependency in the discovered DAG. --- After reading the author response, I found the additional baselines to be compelling experimental evidence, so I weakly recommend acceptance. Strengths The paper is very clearly written. The proposed work uses the idea of DAGs with NO TEARS in the feature space of a neural network. As far as the reviewer is aware, this has not been tried before. The overall idea is not new, but the application of the existing idea is new. Weaknesses The empirical results seem a bit weak. In Table 2, it is not obvious to me why the performance with BN, input noise, and weight noise is worse than the baseline. Similarly, in Table 3, the proposed method only marginally improves performance on the classification tasks. It would also be interesting to see the effect of popular regularizers for non-linear networks, such as mixup [1] and manifold mixup [2]. [1] Mixup https://arxiv.org/abs/1710.09412 [2] Manifold Mixup https://arxiv.org/abs/1806.05236
NIPS
1. What is the main contribution of the paper, and how does it relate to the motivation? 2. What are the strengths of the proposed approach, particularly in terms of its ability to prevent overfitting? 3. What are the weaknesses of the paper regarding the form of the regularizer and the lack of theoretical analysis and experimental results for model selection consistency? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any questions or concerns regarding the applicability of the proposed method in real-world scenarios, especially when dealing with high-dimensional data?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper proposes CASTLE, a regularizer that aims to enforce that the input vertices form a DAG and to reconstruct the nodes in the Markov blanket of the output. The method can be extended to MLPs. A generalization bound is derived to guarantee that the method can achieve acceptable out-of-sample generalization. Experiments on synthetic and real-world datasets show improvements compared with other regularization methods. Strengths This work assumes that the covariates and the response variable are generated according to a DAG. Correspondingly, the authors propose to introduce R(W) as a regularizer that enforces the learned weight coefficients of the covariates and the response variable to form a DAG. Besides, compared with previous work that leverages a reconstruction loss as a regularizer, the authors further introduce V(W) as a regularizer to enforce that only the nodes related to Y (in the sense of its Markov blanket) are reconstructed, which can further help avoid overfitting. In a nutshell, the proposed method aligns well with the motivation, which is reasonable. Besides, the authors give a theoretical analysis in the form of an out-of-sample generalization bound. The experimental results support the claims and show the superiority of the proposed method in terms of out-of-sample generalization. Weaknesses This work does not specify the form of V(W); is it \ell_1-type sparsity? What's more, this work lacks theoretical analysis and experimental results to support that such a regularizer can identify the Markov blanket of Y, i.e., model selection consistency, at least in linear cases. Besides, in many realistic applications such as image classification, the covariates are sensory-level, hence it may not be reasonable to assume a DAG among these covariates and the response variable. It would be better if the work could stand on a more general assumption.
NIPS
Title CASTLE: Regularization via Auxiliary Causal Graph Discovery Abstract Regularization improves generalization of supervised models to out-of-sample data. Prior works have shown that prediction in the causal direction (effect from cause) results in lower testing error than the anti-causal direction. However, existing regularization methods are agnostic of causality. We introduce Causal Structure Learning (CASTLE) regularization and propose to regularize a neural network by jointly learning the causal relationships between variables. CASTLE learns the causal directed acyclical graph (DAG) as an adjacency matrix embedded in the neural network’s input layers, thereby facilitating the discovery of optimal predictors. Furthermore, CASTLE efficiently reconstructs only the features in the causal DAG that have a causal neighbor, whereas reconstruction-based regularizers suboptimally reconstruct all input features. We provide a theoretical generalization bound for our approach and conduct experiments on a plethora of synthetic and real publicly available datasets demonstrating that CASTLE consistently leads to better out-of-sample predictions as compared to other popular benchmark regularizers. 1 Introduction A primary concern of machine learning, and deep learning in particular, is generalization performance on out-of-sample data. Over-parameterized deep networks efficiently learn complex models and are, therefore, susceptible to overfit to training data. Common regularization techniques to mitigate overfitting include data augmentation [1, 2], dropout [3, 4, 5], adversarial training [6], label smoothing [7], and layer-wise strategies [8, 9, 10] to name a few. However, these methods are agnostic of the causal relationships between variables limiting their potential to identify optimal predictors based on graphical topology, such as the causal parents of the target variable. An alternative approach to regularization leverages supervised reconstruction, which has been proven theoretically and demonstrated empirically to improve generalization performance by obligating hidden bottleneck layers to reconstruct input features [11, 12]. However, supervised auto-encoders suboptimally reconstruct all features, including those without causal neighbors, i.e., adjacent cause or effect nodes. Naively reconstructing these variables does not improve regularization and representation learning for the predictive model. In some cases, it may be harmful to generalization performance, e.g., reconstructing a random noise variable. ∗Equal contribution 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. Although causality has been a topic of research for decades, only recently has cause and effect relationships been incorporated into machine learning methodologies and research. Recently, researchers at the confluence of machine learning and causal modeling have advanced causal discovery [13, 14], causal inference [15, 16], model explainability [17], domain adaptation [18, 19, 20] and transfer learning [21] among countless others. The existing synergy between these two disciplines has been recognized for some time [22], and recent work suggests that causality can improve and complement machine learning regularization [23, 24, 25]. 
Furthermore, many recent causal works have demonstrated and acknowledged the optimality of predicting in the causal direction, i.e., predicting effect from cause, which results in less test error than predicting in the anti-causal direction [21, 26, 27, 28]. Contributions. In this work, we introduce a novel regularization method called CASTLE (CAusal STructure LEarning) regularization. CASTLE regularization uses causal graph discovery as an auxiliary task when training a supervised model to improve the generalization performance of the primary prediction task. Specifically, CASTLE learns the causal directed acyclical graph (DAG) under continuous optimization as an adjacency matrix embedded in a feed-forward neural network’s input layers. By jointly learning the causal graph, CASTLE can surpass the benefits provided by feature selection regularizers by identifying optimal predictors, such as the target variable’s causal parents. Additionally, CASTLE further improves upon auto-encoder-based regularization [12] by reconstructing only the input features that have neighbors (adjacent nodes) in the causal graph. Regularization of a predictive model to satisfy the causal relationships among feature and target variables effectively guide the model towards the direction of better out-of-sample generalization guarantees. We provide a theoretical generalization bound for CASTLE and demonstrate improved performance against a variety of benchmark methods on a plethora of real and synthetic datasets. 2 Related Works We compare to the related work in the simplest supervised learning setting where we desire learning a function from some featuresX to a target variable Y given some data of the variables X and Y to improve out-of-sample generalization within the same distribution. This is a significant departure from the branches of machine learning algorithms, such as in semi-supervised learning and domain adaptation, where the regularizer is constructed with information other than variablesX and Y . Regularization controls model complexity and mitigates overfitting. `1 [29] and `2 [30] regularization are commonly used regularization approaches where the former is used when a sparse model is preferred. For deep neural networks, dropout regularization [3, 4, 5] has been shown to be superior in practice to `p regularization techniques. Other capacity-based regularization techniques commonly used in practice include early stopping [31], parameter sharing [31], gradient clipping [32], batch normalization [33], data augmentation [2], weight noise [34], and MixUp [35] to name a few. Normbased regularizers with sparsity, e.g. Lasso [29], are used to guide feature selection for supervised models. The work of [12] on supervised auto-encoders (SAE) theoretically and empirically shows that adding a reconstruction loss of the input features functions as a regularizer for predictive models. However, this method does not select which features to reconstruct and therefore suffers performance degradation when tasked to reconstruct features that are noise or unrelated to the target variables. Two existing works [25, 23] attempt to draw the connection between causality and regularization. Based on an analogy between overfitting and confounding in linear models, [25] proposed a method to determine the regularization hyperparameter in linear Ridge or Lasso regression models by estimating the strength of confounding. [23] use causality detectors [36, 27] to weight a sparsity regularizer, e.g. 
`1, for performing non-linear causality analysis and generating multivariate causal hypotheses. Neither of the works has the same objective as us — improving the generalization performance of supervised learning models, nor do they overlap methodologically by using causal DAG discovery. Causal discovery is an NP-hard problem that requires a brute-force search through a non-convex combinatorial search space, limiting the existing algorithms to reaching global optima for only small problems. Recent approaches have successfully accelerated these methods by using a novel acyclicity constraint and formulating the causal discovery problem as a continuous optimization over real matrices (avoiding combinatorial search) in the linear [37] and nonlinear [38, 39] cases. CASTLE incorporates these recent causal discovery approaches of [37, 38] to improve regularization for prediction problems in general. As shown in Table 1, CASTLE regularization provides two additional benefits: causal prediction and target selection. First, CASTLE identifies causal predictors (e.g., causal parents if they exist) rather than correlated features. Furthermore, CASTLE improves upon reconstruction regularization by only reconstructing features that have neighbors in the underlying DAG. We refer to this advantage as “target selection”. Collectively these benefits contribute to the improved generalization of CASTLE. Next we introduce our notation (Section 3.1) and provide more details of these benefits (Section 3.2). 3 Methodology In this section, we provide a problem formulation with causal preliminaries for CASTLE. Then we provide a motivational discussion, regularizer methodology, and generalization theory for CASTLE. 3.1 Problem Formulation In the standard supervised learning setting, we denote the input feature variables and target variable, byX = [X1, ..., Xd] ∈ X and Y ∈ Y , respectively, where X ⊆ Rd is a d-dimensional feature space and Y ⊆ R is a one-dimensional target space. Let PX,Y denote the joint distribution of the features and target. Let [N ] denote the set {1, ..., N}. We observe a dataset, D = { (Xi, Yi), i ∈ [N ] } , consisting of N i.i.d. samples drawn from PX,Y . The goal of a supervised learning algorithm A is to find a predictive model, fY : X → Y , in a hypothesis space H that can explain the association between the features and the target variable. In the learning algorithm A, the predictive model f̂Y is trained on a finite number of samples in D, to predict well on the out-of-sample data generated from the same distribution PX,Y . However, overfitting, a mismatch between training and testing performance of f̂Y , can occur if the hypothesis spaceH is too complex and the training data fails to represent the underlying distribution PX,Y . This motivates the usage of regularization to reduce the hypothesis space’s complexityH so that the learning algorithm A will only find the desired function to explain the data. Assumptions of the underlying distribution dictate regularization choice. For example, if we believe only a subset of features is associated with the label Y , then `1 regularization [29] can be beneficial in creating sparsity for feature selection. CASTLE regularization is based on the assumption that a causal DAG exists among the input features and target variable. 
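To make the sparsity-based feature-selection example above concrete, the short sketch below (scikit-learn, with a synthetic dataset and an arbitrary penalty strength, both of which are illustrative assumptions rather than anything from the paper) shows how an ℓ1 penalty drives the coefficients of irrelevant features toward zero.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, d = 500, 10
X = rng.normal(size=(n, d))
# Only the first two features influence the target; the remaining eight are irrelevant.
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * rng.normal(size=n)

# The l1 penalty drives the coefficients of the irrelevant features to (near) zero.
lasso = Lasso(alpha=0.05).fit(X, y)
print(np.round(lasso.coef_, 3))
```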
In the causal framework of [40], a causal structure of a set of variables X is a DAG in which each vertex v ∈ V corresponds to a distinct element in X , and each edge e ∈ E represents direct functional relationships between two neighboring variables. Formally, we assume the variables in our dataset satisfy a nonparametric structural equation model (NPSEM) as defined in Definition 1. The word “nonparametric” means we do not make any assumption on the underlying functions fi in the NPSEM. In this work, we characterize optimal learning by a predictive model as discovering the function Y = fY (Pa(Y ), uY ) in NPSEM [40]. Definition 1. (NPSEMs) Given a DAG G = (V = [d+ 1], E), the random variables X̃ = [Y,X] satisfy a NPSEM if Xi = fi(Pa(Xi), ui), i ∈ [d+ 1], where Pa(i) is the parents (direct causes) of Xi in G and u[d+1] are some random noise variables. 3.2 Why CASTLE regularization matters We now present a graphical example to explain the two benefits of CASTLE mentioned in Section 2, causal prediction and target selection. Consider Figure 1 where we are given nine feature variables X1, ..., X9 and a target variable Y . Causal Prediction. The target variable Y is generated by a function fY (Pa(Y ), uY ) from Definition 1 where the parents of Y are Pa(Y ) = {X2, X3}. In CASTLE regularization, we train a predictive model f̂Y jointly with learning the DAG amongX and Y . The features that the model uses to predict Y are the causal parents of Y in the learned DAG. Such a model is sample efficient in uncovering the true function fY (Pa(Y ), uY ) and generalizes well on the out-of-sample data. Our theoretical analysis in Section 3.4 validates this advantage when there exists a DAG structure among the variablesX and Y . However, there may exist other variables that predict Y more accurately than the causal parents Pa(Y ). For example, if the function from Y to X8 is a one-to-one linear mapping, we can predict Y trivially from the feature X8. In our objective function introduced later, the prediction loss of Y will be weighted higher than the causal regularizer. Among the predictive models with a similar prediction loss of Y , our objective function still prefers to use the model, which minimizes the causal regularizer and uses the causal parents. However, it would favor the easier predictor if one exists and gives a much lower prediction loss of Y . In this case, the learned DAG may differ from the true DAG, but we reiterate that we are focused on the problem of generalization rather than causal discovery. Target Selection. Consider the variables X5, X6 and X7 which share parents (X2 and X3) with Y in Figure 1. The functions X5 = f5(X2, u5), X6 = f6(X3, u6), and X7 = f7(X3, u7) may have some learnable similarity (e.g. basis functions and representations) with Y = fY (X2, X3, uY ), that we can exploit by training a shared predictive model of Y with the auxiliary task of predicting X5, X6 and X7. From the causal graph topology, CASTLE discovers the optimal features that should act as the auxiliary task for learning fY . CASTLE learns the related functions jointly in a shared model, which is proven to improve the generalization performance of predicting Y by learning shared basis functions and representations [41]. 3.3 CASTLE regularization Let X̃ = Y × X denote the data space, P(X,Y ) = PX̃ the data distribution, and ‖ · ‖F the Frobenius norm. We define random variables X̃ = [X̃1, X̃2, ..., X̃d+1] := [Y,X1, ..., Xd] ∈ X̃ . 
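Before turning to the CASTLE objective, it may help to make the Figure 1 example from Section 3.2 concrete. The following sketch samples data from a small NPSEM in which Y has parents X2 and X3 and the siblings X5, X6, X7 share those parents; the functional forms, noise scales, and the extra disconnected noise variable are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Root causes (no parents): drawn directly from their noise distributions.
x2 = rng.normal(size=n)
x3 = rng.normal(size=n)

# Target: a function of its causal parents X2 and X3 plus exogenous noise u_Y.
y = np.tanh(x2) + 0.5 * x3 ** 2 + 0.1 * rng.normal(size=n)

# Siblings of Y (children of the same parents): useful auxiliary reconstruction targets.
x5 = np.sin(x2) + 0.1 * rng.normal(size=n)
x6 = 0.8 * x3 + 0.1 * rng.normal(size=n)
x7 = np.abs(x3) + 0.1 * rng.normal(size=n)

# A hypothetical disconnected noise variable: reconstructing it cannot help predict Y.
x_noise = rng.normal(size=n)

X = np.column_stack([x2, x3, x5, x6, x7, x_noise])
```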
Let X = [X1, . . . , Xd] denote the N × d input data matrix, Y the N-dimensional label vector, and X̃ = [Y, X] the N × (d + 1) matrix that contains the data of all the variables in the DAG. To facilitate exposition, we first introduce CASTLE in the linear setting. Here, the parameters are a (d + 1) × (d + 1) adjacency matrix W with zeros on the diagonal. The objective function is given as
$$\hat{W} \in \arg\min_{W} \; \frac{1}{N}\big\|Y - \tilde{X}W_{:,1}\big\|^2 + \lambda\, \mathcal{R}_{\mathrm{DAG}}(\tilde{X}, W), \qquad (1)$$
where $W_{:,1}$ is the first column of $W$. We define the DAG regularization loss $\mathcal{R}_{\mathrm{DAG}}(\tilde{X}, W)$ as
$$\mathcal{R}_{\mathrm{DAG}}(\tilde{X}, W) = L_W + R_W + \beta V_W, \qquad (2)$$
where $L_W = \frac{1}{N}\|\tilde{X} - \tilde{X}W\|_F^2$, $R_W = \big(\mathrm{Tr}\big(e^{W \circ W}\big) - d - 1\big)^2$, $V_W$ is the $\ell_1$ norm of $W$, $\circ$ is the Hadamard product, and $e^{M}$ is the matrix exponential of $M$. The DAG loss $\mathcal{R}_{\mathrm{DAG}}(\tilde{X}, W)$ is introduced in [37] for learning linear DAGs by continuous optimization. Here we use it as the regularizer for our linear regression model $Y = \tilde{X}W_{:,1} + \epsilon$. From Theorem 1 in [37], we know the graph given by $W$ is a DAG if and only if $R_W = 0$. The prediction $\hat{Y} = \tilde{X}W_{:,1}$ is the projection of Y onto the parents of Y in the learned DAG. This increases the stability of linear regression when issues pertaining to collinearity or multicollinearity among the input features appear. Continuous optimization for learning nonparametric causal DAGs has been proposed in the prior work of [38]. In a similar manner, we also adapt CASTLE to nonlinear cases. Suppose the predictive model for Y and the function generating each feature $X_k$ in the causal DAG are parameterized by an $M$-layer feed-forward neural network $f_\Theta : \tilde{\mathcal{X}} \to \tilde{\mathcal{X}}$ with ReLU activations and layer size $h$. Figure 2 shows the network architecture of $f_\Theta$. This joint network can be instantiated as $d + 1$ sub-networks $f_k$ with shared hidden layers, where $f_k$ is responsible for reconstructing the feature $\tilde{X}_k$. We let $W_1^k$ denote the $h \times (d + 1)$ weight matrix in the input layer of $f_k$, $k \in [d + 1]$. We set the $k$-th column of $W_1^k$ to zero such that $f_k$ does not utilize $\tilde{X}_k$ in its prediction of $\tilde{X}_k$. We let $W_m$, $m = 2, \ldots, M - 1$, denote the weight matrices in the network's shared hidden layers, and $W_M = [W_M^1, \ldots, W_M^{d+1}]$ denotes the $h \times (d + 1)$ weight matrix in the output layer. Explicitly, we define the sub-network $f_k$ as
$$f_k(\tilde{X}) = \phi\Big(\cdots \phi\big(\phi\big(\tilde{X} W_1^k\big) W_2\big) \cdots W_{M-1}\Big) W_M^k, \qquad (3)$$
where $\phi(\cdot)$ is the ReLU activation function. The function $f_\Theta$ is given as $f_\Theta(\tilde{X}) = [f_1(\tilde{X}), \ldots, f_{d+1}(\tilde{X})]$. Let $f_\Theta(\tilde{X})$ denote the prediction for the $N$-sample data matrix $\tilde{X}$, where $[f_\Theta(\tilde{X})]_{i,k} = f_k(\tilde{X}_i)$, $i \in [N]$ and $k \in [d + 1]$. All network parameters are collected into sets as
$$\Theta_1 = \{W_1^k\}_{k=1}^{d+1}, \qquad \Theta = \Theta_1 \cup \{W_m\}_{m=2}^{M}. \qquad (4)$$
The training objective function of $f_\Theta$ is
$$\hat{\Theta} \in \arg\min_{\Theta} \; \frac{1}{N}\big\|Y - [f_\Theta(\tilde{X})]_{:,1}\big\|^2 + \lambda\, \mathcal{R}_{\mathrm{DAG}}(\tilde{X}, f_\Theta). \qquad (5)$$
The DAG loss $\mathcal{R}_{\mathrm{DAG}}(\tilde{X}, f_\Theta)$ is given as
$$\mathcal{R}_{\mathrm{DAG}}(\tilde{X}, f_\Theta) = L_N(f_\Theta) + R_{\Theta_1} + \beta V_{\Theta_1}. \qquad (6)$$
Because the $k$-th column of the input weight matrix $W_1^k$ is set to zero, $L_N(f_\Theta) = \frac{1}{N}\|\tilde{X} - f_\Theta(\tilde{X})\|_F^2$ differs from the standard reconstruction loss in auto-encoders (e.g. SAE) by only allowing the model to reconstruct each feature and target variable from the others. In contrast, auto-encoders reconstruct each feature using all the features, including itself. $V_{\Theta_1}$ is the $\ell_1$ norm of the weight matrices $W_1^k$ in $\Theta_1$, and the term $R_{\Theta_1}$ is given as
$$R_{\Theta_1} = \big(\mathrm{Tr}\big(e^{\mathbf{M} \circ \mathbf{M}}\big) - d - 1\big)^2, \qquad (7)$$
where $\mathbf{M}$ is a $(d + 1) \times (d + 1)$ matrix such that $[\mathbf{M}]_{k,j}$ is the $\ell_2$-norm of the $k$-th row of the matrix $W_1^j$. When the acyclicity loss $R_{\Theta_1}$ is minimized, the sub-networks $f_1, \ldots, f_{d+1}$ form a DAG among the variables; $R_{\Theta_1}$ obligates the sub-networks to reconstruct only the input features that have neighbors (adjacent nodes) in the learned DAG.
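As a minimal illustration of Eqs. (1)–(2), the following NumPy sketch computes the linear CASTLE loss for a candidate adjacency matrix. It is not the authors' implementation, and the hyperparameter values for λ and β are placeholders.

```python
import numpy as np
from scipy.linalg import expm

def castle_linear_loss(W, X_tilde, lam=1.0, beta=1.0):
    """Linear CASTLE objective from Eqs. (1)-(2).

    W       : (d+1, d+1) adjacency matrix with a zero diagonal.
    X_tilde : (N, d+1) data matrix whose first column is the target Y.
    """
    N, d_plus_1 = X_tilde.shape
    d = d_plus_1 - 1

    # Prediction loss: Y is regressed on its (learned) parents via the first column of W.
    Y = X_tilde[:, 0]
    pred_loss = np.mean((Y - X_tilde @ W[:, 0]) ** 2)

    # Reconstruction loss L_W = (1/N) ||X_tilde - X_tilde W||_F^2.
    recon_loss = np.sum((X_tilde - X_tilde @ W) ** 2) / N

    # Acyclicity penalty R_W = (tr(exp(W o W)) - d - 1)^2, zero iff W encodes a DAG.
    acyc = (np.trace(expm(W * W)) - d - 1) ** 2

    # l1 sparsity term V_W.
    sparsity = np.abs(W).sum()

    return pred_loss + lam * (recon_loss + acyc + beta * sparsity)
```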
We note that converting the nonlinear version of CASTLE into a linear form can be accomplished by removing all the hidden layers and output layers and setting the dimension $h$ of the input weight matrices to 1 in (3), i.e., $f_k(\tilde{X}) = \tilde{X}W_1^k$ and $f_\Theta(\tilde{X}) = [\tilde{X}W_1^1, \ldots, \tilde{X}W_1^{d+1}] = \tilde{X}W$, which is the linear model in (1)–(2). Managing computational complexity. If the number of features is large, it is computationally expensive to train all the sub-networks simultaneously. We can mitigate this by sub-sampling. At each iteration of gradient descent, we randomly sample a subset of features to reconstruct and only minimize the prediction loss and reconstruction loss on these sub-sampled features. Note that we do not have a hidden confounders issue here, since Y and the sub-sampled features are predicted by all the features except themselves. The sparsity DAG constraint on the weight matrices is unchanged at each iteration. In this case, we keep the training complexity per iteration at a manageable level, approximately the time and space complexity of training a few networks jointly. We include experiments on CASTLE scalability with respect to input feature size in Appendix C. 3.4 Generalization bound for CASTLE regularization In this section, we analyze theoretically why CASTLE regularization can improve generalization performance by introducing a generalization bound for our model in Figure 2. Our bound is based on the PAC-Bayesian learning theory in [42, 43, 44]. Here, we re-interpret the DAG regularizer as a special prior or assumption on the input weight matrices of our model and use existing PAC-Bayes theory to prove the generalization of our algorithm. Traditionally, PAC-Bayes bounds are only applied to randomized models, such as Bayesian or Gibbs classifiers. Here, our bound is applied to our deterministic model by using the recent derandomization formalism from [45, 46]. We acknowledge and note that developing tighter and non-vacuous generalization bounds for deep neural networks is still a challenging and evolving topic in learning theory. The bounds are often stated with many constants from different steps of the proof. For reader convenience, we provide the simplified version of our bound in Theorem 1. The proof, details (e.g., the constants), and discussions about the assumptions are provided in Appendix A. We begin with a few assumptions before stating our bound. Assumption 1. For any sample $\tilde{X} = (Y, X) \sim P_{\tilde{X}}$, $\tilde{X}$ has a bounded $\ell_2$ norm, i.e., $\|\tilde{X}\|_2 \leq B$ for some $B > 0$. Assumption 2. The loss function $L(f_\Theta) = \|f_\Theta(\tilde{X}) - \tilde{X}\|^2$ is sub-Gaussian under the distribution $P_{\tilde{X}}$ with a variance factor $s^2$, i.e., for all $t > 0$, $\mathbb{E}_{P_{\tilde{X}}}\big[\exp\big(t\,(L(f_\Theta) - L_P(f_\Theta))\big)\big] \leq \exp\big(\tfrac{t^2 s^2}{2}\big)$. Theorem 1. Let $f_\Theta : \tilde{\mathcal{X}} \to \tilde{\mathcal{X}}$ be an $M$-layer ReLU feed-forward network with layer size $h$, each of whose weight matrices has spectral norm bounded by $\kappa$. Then, under Assumptions 1 and 2, for any $\delta, \gamma > 0$, with probability $1 - \delta$ over a training set of $N$ i.i.d. samples, for any $\Theta$ in (4), we have:
$$L_P(f_\Theta) \leq 4L_N(f_\Theta) + \frac{1}{N}\Big[R_{\Theta_1} + C_1\big(V_{\Theta_1} + V_{\Theta_2}\big) + \log\big(\tfrac{8}{\delta}\big)\Big] + C_2, \qquad (8)$$
where $L_P(f_\Theta)$ is the expected reconstruction loss of $\tilde{X}$ under $P_{\tilde{X}}$, $L_N(f_\Theta)$, $V_{\Theta_1}$ and $R_{\Theta_1}$ are defined in (6)–(7), $V_{\Theta_2}$ is the $\ell_2$ norm of the network weights in the output and shared hidden layers, and $C_1$ and $C_2$ are constants depending on $\gamma, d, h, B, s$ and $M$. The statistical properties of the reconstruction loss in learning linear DAGs, e.g.
$L_W = \frac{1}{N}\|\tilde{X} - \tilde{X}W\|_F^2$, have been well studied in the literature: the loss minimizer provably recovers a true DAG with high probability on finite samples, and hence is consistent for both Gaussian SEM [47] and non-Gaussian SEM [48, 49]. Note also that the regularizers $R_W$ and $R_{\Theta_1}$ are not part of the results in [47, 48, 49]. However, the works of [37, 38] empirically show that using $R_W$ or $R_{\Theta_1}$ on top of the reconstruction loss leads to more efficient and more accurate DAG learning than existing approaches. Our theoretical result on the reconstruction loss explains the benefit of $R_W$ or $R_{\Theta_1}$ for the generalization performance of predicting Y. This provides theoretical support for our CASTLE regularizer in supervised learning. However, the objectives of DAG discovery, e.g., identifying the Markov Blanket of Y, are beyond the scope of our analysis. The bound in (8) justifies $R_{\Theta_1}$ in general, including linear or nonlinear cases, if the underlying distribution $P_{\tilde{X}}$ is factorized according to some causal DAG. We note that the expected loss $L_P(f_\Theta)$ is upper bounded by the empirical loss $L_N(f_\Theta)$, $V_{\Theta_1}$, $V_{\Theta_2}$, and $R_{\Theta_1}$, which measures how close (via the acyclicity constraint) the model is to a DAG. From (8) it is obvious that not minimizing $R_{\Theta_1}$ is an acceptable strategy asymptotically, or in the large-sample limit (large N), because $R_{\Theta_1}/N$ becomes negligible. This aligns with the consistency theory in [47, 48, 49] for linear models. However, for small N, a preferred strategy is to train a model $f_\Theta$ by minimizing $L_N(f_\Theta)$ and $R_{\Theta_1}$ jointly. This would be trivial because the samples are generated under the DAG structure in $P_{\tilde{X}}$. Minimizing $R_{\Theta_1}$ can decrease the upper bound of $L_P(f_\Theta)$ in (8), improve the generalization performance of $f_\Theta$, and facilitate the convergence of $f_\Theta$ to the true model. If $P_{\tilde{X}}$ does not correspond to any causal DAG, such as image data, then there will be a tradeoff between minimizing $R_{\Theta_1}$ and $L_N(f_\Theta)$. In this case, $R_{\Theta_1}$ becomes harder to minimize, and generalization may not benefit from adding CASTLE. However, this is a rare case since causal structure exists inherently in most datasets. Our experiments in the next section demonstrate that CASTLE regularization outperforms popular regularizers on a variety of datasets. 4 Experiments In this section, we empirically evaluate CASTLE as a regularization method for improving generalization performance. We present our benchmark methods and training architecture, followed by our synthetic and publicly available data results. Benchmarks. We benchmark CASTLE against common regularizers that include: early stopping (Baseline) [31], L1 [29], L2 [30], dropout [3] with drop rates of 20% and 50%, denoted as DO(0.2) and DO(0.5) respectively, SAE [12], batch normalization (BN) [33], data augmentation or input noise (IN) [2], and MixUp (MU) [35], in no particular order. For each regularizer with tunable hyperparameters, we performed a standard grid search. For the weight decay regularizers L1 and L2 we searched over $\lambda_{\ell_p} \in \{0.1, 0.01, 0.001\}$, and for input noise we use Gaussian noise with a mean of 0 and standard deviation σ ∈ {0.1, 0.01, 0.01}. L1 and L2 were applied at every dense layer. BN and DO were applied after every dense layer and were active only during training. Because each regularization method converges at a different rate, we use early stopping on a validation set to terminate each benchmark training, which we refer to as our Baseline. Network architecture and training. We implemented CASTLE in TensorFlow 2.
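Consistent with the TensorFlow 2 implementation mentioned above, the sketch below shows one way to compute the acyclicity term $R_{\Theta_1}$ of Eq. (7) from masked input-layer weight matrices. The (d + 1) × h storage layout, the masking of the self-row, and all sizes are assumptions made purely for illustration; the authors' released code may organize these weights differently.

```python
import tensorflow as tf

def castle_acyclicity_penalty(input_weights):
    """R_Theta1 from Eq. (7). Convention assumed here: input_weights[j] has
    shape (d+1, h), and its k-th row holds the weights through which variable
    k feeds the sub-network that reconstructs variable j."""
    # M[k, j] = l2-norm of row k of the j-th input weight matrix.
    cols = [tf.norm(W, axis=1) for W in input_weights]          # each (d+1,)
    M = tf.stack(cols, axis=1)                                  # (d+1, d+1)
    d_plus_1 = tf.cast(tf.shape(M)[0], M.dtype)
    # (tr(exp(M o M)) - d - 1)^2 is zero exactly when M encodes a DAG.
    return tf.square(tf.linalg.trace(tf.linalg.expm(M * M)) - d_plus_1)

# Illustrative setup: d+1 masked input weight matrices (self-row zeroed out).
d, h = 10, 32
input_weights = []
for j in range(d + 1):
    W = tf.Variable(tf.random.normal((d + 1, h), stddev=0.1))
    mask = 1.0 - tf.one_hot(j, depth=d + 1)    # remove the path from variable j to f_j
    input_weights.append(W * mask[:, None])    # in training, apply the mask in the forward pass

r_theta1 = castle_acyclicity_penalty(input_weights)
```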
Our proposed architecture comprises d + 1 sub-networks with shared hidden layers, as shown in Figure 2. In the linear case, $V_W$ is the $\ell_1$ norm of W. In the nonlinear case, $V_{\Theta_1}$ is the $\ell_1$ norm of the input weight matrices $W_1^k$, $k \in [d + 1]$. To make a clear comparison with L2 regularization, we exclude the capacity term $V_{\Theta_2}$ from CASTLE, although it is part of our generalization bound in (8). Since we predict the target variable as our primary task, we benchmark CASTLE against this common network architecture. Specifically, we use a network with two hidden layers of d + 1 neurons with ReLU activation. Each benchmark method is initialized and seeded identically with the same random weights. For dataset preprocessing, all continuous variables are standardized to a mean of 0 and a variance of 1. Each model is trained using the Adam optimizer with a learning rate of 0.001 for up to a maximum of 200 epochs. An early stopping regime halts training with a patience of 30 epochs. 4.1 Regularization on Synthetic Data Synthetic data generation. Given a DAG G, we generate functional relationships between each variable and its respective parent(s), with additive Gaussian noise applied to each variable with a mean of 0 and a variance of 1. In the linear case, each variable is equal to the sum of its parents plus noise. In the nonlinear case, each variable is equal to the sum of the sigmoid of its parents plus noise. We provide further details on our synthetic DGP and pseudocode in Appendix B. Consider Table 2: using our nonlinear DGP, we generated 1000 test samples according to the DAG in Figure 1. We then used 10-fold cross-validation to train and validate each benchmark on varying training sets of size n. Each model was evaluated on the test set from the weights saved at the lowest validation error. Table 2 shows that CASTLE improves over all experimental benchmarks. We present similar results for our linear experiments in Appendix B. Code is provided at https://bitbucket.org/mvdschaar/mlforhealthlabpub. Dissecting CASTLE. In the synthetic environment, we know the causal relationships with certainty. We analyze three aspects of CASTLE regularization using synthetic data. Because we are comparing across randomly simulated DAGs with differing functional relationships, the magnitude of regression testing error will vary between runs. To normalize this, we examine model performance in terms of each model's average rank over the folds. If we have r regularizers, the worst and best possible ranks are one and r, respectively (i.e., the higher the rank, the better). We used 10-fold cross-validation to terminate model training and tested each model on a held-out test set of 1000 samples. First, we examine the impact of increasing the feature size or DAG vertex cardinality |G|. We do this by randomly generating a DAG of size |G| ∈ {10, 50, 100, 150} with 50|G| training samples. We repeat this ten times for each DAG cardinality. On the left-hand side of Fig. 3, CASTLE has the highest rank of all benchmarks and does not degrade with increasing |G|. Second, we analyze the impact of increasing dataset size. We randomly generate DAGs of size |G| ∈ {10, 50, 100, 150}, which we use to create datasets of α|G| samples, where α ∈ {20, 50, 100, 150, 200}. We repeat this ten times for each dataset size. In the middle plot of Figure 3, we see that CASTLE has superior performance for all dataset sizes, and, as expected, all benchmark methods (except for SAE) start to converge toward the average rank at large data sizes (α = 200).
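For reference, here is a minimal sketch of the nonlinear synthetic data-generating process described above (each variable is the sum of the sigmoid of its parents plus unit-variance Gaussian noise). The random-DAG construction and edge probability are illustrative assumptions, since the paper's exact generation procedure is given in its Appendix B.

```python
import numpy as np

def random_dag(d, p_edge, rng):
    """Random DAG as an upper-triangular adjacency matrix, so the index
    order 0, 1, ..., d-1 is already a valid topological (causal) order."""
    adj = rng.random((d, d)) < p_edge
    return np.triu(adj, k=1).astype(float)

def sample_nonlinear_sem(adj, n, rng):
    """Each variable is the sum of the sigmoid of its parents plus
    unit-variance Gaussian noise (the nonlinear DGP described above)."""
    d = adj.shape[0]
    X = np.zeros((n, d))
    for j in range(d):                       # upper-triangular => parents precede children
        parents = np.flatnonzero(adj[:, j])
        contrib = 0.0
        if parents.size:
            contrib = (1.0 / (1.0 + np.exp(-X[:, parents]))).sum(axis=1)
        X[:, j] = contrib + rng.normal(size=n)
    return X

rng = np.random.default_rng(0)
G = random_dag(d=10, p_edge=0.3, rng=rng)
data = sample_nonlinear_sem(G, n=50 * 10, rng=rng)   # 50|G| training samples
```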
1. What is the focus and contribution of the paper on regularization methods? 2. What are the strengths of the proposed approach, particularly in terms of its flexibility and generalizability? 3. What are the weaknesses of the paper, especially regarding its writing quality, methodology, and experiment setup? 4. Do you have any concerns or suggestions regarding the proposed regularization method? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The authors propose a regularization method (CASTLE) based on learned causal relationships among input variables. The proposed method first learns a DAG representing the causal relationships and then reconstructs each input variable using its parents in the DAG. Strengths This paper proposes a novel regularization method based on a causal DAG. Rather than using prior knowledge to construct the DAG, the authors propose to learn the DAG, which makes the method flexible and generalizable. In addition, the authors have shown extensive effort in evaluating the proposed method against other regularization baselines on many public datasets. Weaknesses This work suffers from several major weaknesses. First of all, the writing and organization of this paper are disappointing; the writing quality makes the paper difficult for readers to understand. Moreover, several claims about the methodology are not well supported. In addition, the experimental setup is flawed and needs further improvement.
NIPS
Title CASTLE: Regularization via Auxiliary Causal Graph Discovery Abstract Regularization improves generalization of supervised models to out-of-sample data. Prior works have shown that prediction in the causal direction (effect from cause) results in lower testing error than the anti-causal direction. However, existing regularization methods are agnostic of causality. We introduce Causal Structure Learning (CASTLE) regularization and propose to regularize a neural network by jointly learning the causal relationships between variables. CASTLE learns the causal directed acyclical graph (DAG) as an adjacency matrix embedded in the neural network’s input layers, thereby facilitating the discovery of optimal predictors. Furthermore, CASTLE efficiently reconstructs only the features in the causal DAG that have a causal neighbor, whereas reconstruction-based regularizers suboptimally reconstruct all input features. We provide a theoretical generalization bound for our approach and conduct experiments on a plethora of synthetic and real publicly available datasets demonstrating that CASTLE consistently leads to better out-of-sample predictions as compared to other popular benchmark regularizers. 1 Introduction A primary concern of machine learning, and deep learning in particular, is generalization performance on out-of-sample data. Over-parameterized deep networks efficiently learn complex models and are, therefore, susceptible to overfit to training data. Common regularization techniques to mitigate overfitting include data augmentation [1, 2], dropout [3, 4, 5], adversarial training [6], label smoothing [7], and layer-wise strategies [8, 9, 10] to name a few. However, these methods are agnostic of the causal relationships between variables limiting their potential to identify optimal predictors based on graphical topology, such as the causal parents of the target variable. An alternative approach to regularization leverages supervised reconstruction, which has been proven theoretically and demonstrated empirically to improve generalization performance by obligating hidden bottleneck layers to reconstruct input features [11, 12]. However, supervised auto-encoders suboptimally reconstruct all features, including those without causal neighbors, i.e., adjacent cause or effect nodes. Naively reconstructing these variables does not improve regularization and representation learning for the predictive model. In some cases, it may be harmful to generalization performance, e.g., reconstructing a random noise variable. ∗Equal contribution 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. Although causality has been a topic of research for decades, only recently has cause and effect relationships been incorporated into machine learning methodologies and research. Recently, researchers at the confluence of machine learning and causal modeling have advanced causal discovery [13, 14], causal inference [15, 16], model explainability [17], domain adaptation [18, 19, 20] and transfer learning [21] among countless others. The existing synergy between these two disciplines has been recognized for some time [22], and recent work suggests that causality can improve and complement machine learning regularization [23, 24, 25]. 
Furthermore, many recent causal works have demonstrated and acknowledged the optimality of predicting in the causal direction, i.e., predicting effect from cause, which results in less test error than predicting in the anti-causal direction [21, 26, 27, 28]. Contributions. In this work, we introduce a novel regularization method called CASTLE (CAusal STructure LEarning) regularization. CASTLE regularization uses causal graph discovery as an auxiliary task when training a supervised model to improve the generalization performance of the primary prediction task. Specifically, CASTLE learns the causal directed acyclical graph (DAG) under continuous optimization as an adjacency matrix embedded in a feed-forward neural network’s input layers. By jointly learning the causal graph, CASTLE can surpass the benefits provided by feature selection regularizers by identifying optimal predictors, such as the target variable’s causal parents. Additionally, CASTLE further improves upon auto-encoder-based regularization [12] by reconstructing only the input features that have neighbors (adjacent nodes) in the causal graph. Regularization of a predictive model to satisfy the causal relationships among feature and target variables effectively guide the model towards the direction of better out-of-sample generalization guarantees. We provide a theoretical generalization bound for CASTLE and demonstrate improved performance against a variety of benchmark methods on a plethora of real and synthetic datasets. 2 Related Works We compare to the related work in the simplest supervised learning setting where we desire learning a function from some featuresX to a target variable Y given some data of the variables X and Y to improve out-of-sample generalization within the same distribution. This is a significant departure from the branches of machine learning algorithms, such as in semi-supervised learning and domain adaptation, where the regularizer is constructed with information other than variablesX and Y . Regularization controls model complexity and mitigates overfitting. `1 [29] and `2 [30] regularization are commonly used regularization approaches where the former is used when a sparse model is preferred. For deep neural networks, dropout regularization [3, 4, 5] has been shown to be superior in practice to `p regularization techniques. Other capacity-based regularization techniques commonly used in practice include early stopping [31], parameter sharing [31], gradient clipping [32], batch normalization [33], data augmentation [2], weight noise [34], and MixUp [35] to name a few. Normbased regularizers with sparsity, e.g. Lasso [29], are used to guide feature selection for supervised models. The work of [12] on supervised auto-encoders (SAE) theoretically and empirically shows that adding a reconstruction loss of the input features functions as a regularizer for predictive models. However, this method does not select which features to reconstruct and therefore suffers performance degradation when tasked to reconstruct features that are noise or unrelated to the target variables. Two existing works [25, 23] attempt to draw the connection between causality and regularization. Based on an analogy between overfitting and confounding in linear models, [25] proposed a method to determine the regularization hyperparameter in linear Ridge or Lasso regression models by estimating the strength of confounding. [23] use causality detectors [36, 27] to weight a sparsity regularizer, e.g. 
`1, for performing non-linear causality analysis and generating multivariate causal hypotheses. Neither of the works has the same objective as us — improving the generalization performance of supervised learning models, nor do they overlap methodologically by using causal DAG discovery. Causal discovery is an NP-hard problem that requires a brute-force search through a non-convex combinatorial search space, limiting the existing algorithms to reaching global optima for only small problems. Recent approaches have successfully accelerated these methods by using a novel acyclicity constraint and formulating the causal discovery problem as a continuous optimization over real matrices (avoiding combinatorial search) in the linear [37] and nonlinear [38, 39] cases. CASTLE incorporates these recent causal discovery approaches of [37, 38] to improve regularization for prediction problems in general. As shown in Table 1, CASTLE regularization provides two additional benefits: causal prediction and target selection. First, CASTLE identifies causal predictors (e.g., causal parents if they exist) rather than correlated features. Furthermore, CASTLE improves upon reconstruction regularization by only reconstructing features that have neighbors in the underlying DAG. We refer to this advantage as “target selection”. Collectively these benefits contribute to the improved generalization of CASTLE. Next we introduce our notation (Section 3.1) and provide more details of these benefits (Section 3.2). 3 Methodology In this section, we provide a problem formulation with causal preliminaries for CASTLE. Then we provide a motivational discussion, regularizer methodology, and generalization theory for CASTLE. 3.1 Problem Formulation In the standard supervised learning setting, we denote the input feature variables and target variable, byX = [X1, ..., Xd] ∈ X and Y ∈ Y , respectively, where X ⊆ Rd is a d-dimensional feature space and Y ⊆ R is a one-dimensional target space. Let PX,Y denote the joint distribution of the features and target. Let [N ] denote the set {1, ..., N}. We observe a dataset, D = { (Xi, Yi), i ∈ [N ] } , consisting of N i.i.d. samples drawn from PX,Y . The goal of a supervised learning algorithm A is to find a predictive model, fY : X → Y , in a hypothesis space H that can explain the association between the features and the target variable. In the learning algorithm A, the predictive model f̂Y is trained on a finite number of samples in D, to predict well on the out-of-sample data generated from the same distribution PX,Y . However, overfitting, a mismatch between training and testing performance of f̂Y , can occur if the hypothesis spaceH is too complex and the training data fails to represent the underlying distribution PX,Y . This motivates the usage of regularization to reduce the hypothesis space’s complexityH so that the learning algorithm A will only find the desired function to explain the data. Assumptions of the underlying distribution dictate regularization choice. For example, if we believe only a subset of features is associated with the label Y , then `1 regularization [29] can be beneficial in creating sparsity for feature selection. CASTLE regularization is based on the assumption that a causal DAG exists among the input features and target variable. 
In the causal framework of [40], a causal structure of a set of variables X is a DAG in which each vertex v ∈ V corresponds to a distinct element in X , and each edge e ∈ E represents direct functional relationships between two neighboring variables. Formally, we assume the variables in our dataset satisfy a nonparametric structural equation model (NPSEM) as defined in Definition 1. The word “nonparametric” means we do not make any assumption on the underlying functions fi in the NPSEM. In this work, we characterize optimal learning by a predictive model as discovering the function Y = fY (Pa(Y ), uY ) in NPSEM [40]. Definition 1. (NPSEMs) Given a DAG G = (V = [d+ 1], E), the random variables X̃ = [Y,X] satisfy a NPSEM if Xi = fi(Pa(Xi), ui), i ∈ [d+ 1], where Pa(i) is the parents (direct causes) of Xi in G and u[d+1] are some random noise variables. 3.2 Why CASTLE regularization matters We now present a graphical example to explain the two benefits of CASTLE mentioned in Section 2, causal prediction and target selection. Consider Figure 1 where we are given nine feature variables X1, ..., X9 and a target variable Y . Causal Prediction. The target variable Y is generated by a function fY (Pa(Y ), uY ) from Definition 1 where the parents of Y are Pa(Y ) = {X2, X3}. In CASTLE regularization, we train a predictive model f̂Y jointly with learning the DAG amongX and Y . The features that the model uses to predict Y are the causal parents of Y in the learned DAG. Such a model is sample efficient in uncovering the true function fY (Pa(Y ), uY ) and generalizes well on the out-of-sample data. Our theoretical analysis in Section 3.4 validates this advantage when there exists a DAG structure among the variablesX and Y . However, there may exist other variables that predict Y more accurately than the causal parents Pa(Y ). For example, if the function from Y to X8 is a one-to-one linear mapping, we can predict Y trivially from the feature X8. In our objective function introduced later, the prediction loss of Y will be weighted higher than the causal regularizer. Among the predictive models with a similar prediction loss of Y , our objective function still prefers to use the model, which minimizes the causal regularizer and uses the causal parents. However, it would favor the easier predictor if one exists and gives a much lower prediction loss of Y . In this case, the learned DAG may differ from the true DAG, but we reiterate that we are focused on the problem of generalization rather than causal discovery. Target Selection. Consider the variables X5, X6 and X7 which share parents (X2 and X3) with Y in Figure 1. The functions X5 = f5(X2, u5), X6 = f6(X3, u6), and X7 = f7(X3, u7) may have some learnable similarity (e.g. basis functions and representations) with Y = fY (X2, X3, uY ), that we can exploit by training a shared predictive model of Y with the auxiliary task of predicting X5, X6 and X7. From the causal graph topology, CASTLE discovers the optimal features that should act as the auxiliary task for learning fY . CASTLE learns the related functions jointly in a shared model, which is proven to improve the generalization performance of predicting Y by learning shared basis functions and representations [41]. 3.3 CASTLE regularization Let X̃ = Y × X denote the data space, P(X,Y ) = PX̃ the data distribution, and ‖ · ‖F the Frobenius norm. We define random variables X̃ = [X̃1, X̃2, ..., X̃d+1] := [Y,X1, ..., Xd] ∈ X̃ . 
Let X = [ X1, ...,Xd ] denote the N × d input data matrix, Y the N -dimensional label vector, X̃ = [Y,X] the N × (d+ 1) matrix that contains data of all the variables in the DAG. To facilitate exposition, we first introduce CASTLE in the linear setting. Here, the parameters are a (d+ 1)× (d+ 1) adjacency matrix W with zero in the diagonal. The objective function is given as Ŵ ∈ min W 1 N ‖Y − X̃W:,1‖ 2 + λRDAG(X̃,W) (1) where W:,1 is the first column of W. We define the DAG regularization lossRDAG(X̃,W) as RDAG(X̃,W) = LW +RW + βVW. (2) where LW = 1N ‖X̃ − X̃W‖ 2 F , RW = ( Tr ( eW W ) − d − 1 )2 , VW is the `1 norm of W, is the Hadamard product, and eM is the matrix exponential of M. The DAG loss RDAG(X̃,W) is introduced in [37] for learning linear DAG by continuous optimization. Here we use it as the regularizer for our linear regression model Y = X̃W:,1 + . From Theorem 1 in [37], we know the graph given by W is a DAG if and only if RW = 0. The prediction Ŷ = X̃W:,1 is the projection of Y onto the parents of Y in the learned DAG. This increases the stability of linear regression when issues pertaining to collinearity or multicollinearity among the input features appear. Continuous optimization for learning nonparametric causal DAGs has been proposed in the prior work by [38]. In a similar manner, we also adapt CASTLE to nonlinear cases. Suppose the predictive model for Y and the function generating each feature Xk in the causal DAG are parameterized by an M -layer feed-forward neural network fΘ : X̃ → X̃ with ReLU activations and layer size h. Figure 2 shows the network architecture of fΘ. This joint network can be instantiated as a d+ 1 sub-network fk with shared hidden layers, where fk is responsible for reconstructing the feature X̃k. We let Wk1 denote the h× (d+ 1) weight matrix in the input layer of fk, k ∈ [d+ 1]. We set the k-th column of Wk1 to zero such that fk does not utilize X̃k in its prediction of X̃k. We let Wm, m = 2, ..,M − 1 denote the weight matrices in the network’s shared hidden layers, and WM = [W1M , ...,W d+1 M ] denotes the h× (d+ 1) weight matrix in the output layer. Explicitly, we define the sub-network fk as fk(X̃) = φ ( · · ·φ ( φ ( X̃Wk1 ) W2 ) · · ·WM−1 ) WkM , (3) where φ(·) is the ReLU activation function. The function fΘ is given as fΘ(X̃) = [f1(X̃), ..., fd+1(X̃)]. Let fΘ(X̃) denote the prediction for the N samples matrix X̃ where [fΘ(X̃)]i,k = fk(X̃i), i ∈ [N ] and k ∈ [d+ 1]. All network parameters are collected into sets as Θ1 = {Wk1}d+1k=1, Θ = Θ1 ∪ {Wm} M k=2 (4) The training objective function of fΘ is Θ ∈ min Θ 1 N ∥∥Y − [fΘ(X̃)]:,1∥∥2 + λRDAG(X̃, fΘ). (5) The DAG lossRDAG ( X̃, fΘ ) is given as RDAG ( X̃, fΘ ) = LN (fΘ) +RΘ1 + βVΘ1 . (6) Because the k-th column of the input weight matrix Wk1 is set to zero, LN (fΘ) = 1N ∥∥X̃−fΘ(X̃)∥∥2F differs from the standard reconstruction loss in auto-encoders (e.g. SAE) by only allowing the model to reconstruct each feature and target variable from the others. In contrast, auto-encoders reconstruct each feature using all the features including itself. VΘ1 is the `1 norm of the weight matrices Wk1 in Θ1, and the termRΘ1 is given as, RΘ1 = (Tr ( eM M ) − d− 1)2, (7) where M is a (d + 1) × (d + 1) matrix such that [M]k,j is the `2-norm of the k-th row of the matrix Wj1. When the acyclicity lossRΘ1 is minimized, the sub-networks f1, . . . fd+1 forms a DAG among the variables;RΘ1 obligates the sub-networks to reconstruct only the input features that have neighbors (adjacent nodes) in the learned DAG. 
We note that converting the nonlinear version of CASTLE into a linear form can be accomplished by removing all the hidden layers and output layers and setting the dimension h of the input weight matrices to be 1 in (3), i.e., fk(X̃) = X̃Wk1 and fΘ(X̃) = [X̃W 1 1, ..., X̃W d+1 1 ] = X̃W, which is the linear model in (1-2). Managing computational complexity. If the number of features is large, it is computationally expensive to train all the sub-networks simultaneously. We can mitigate this by sub-sampling. At each iteration of gradient descent, we randomly sample a subset of features to reconstruct and only minimize the prediction loss and reconstruction loss on these sub-sampled features. Note that we do not have a hidden confounders issue here, since Y and the sub-sampled features are predicted by all the features except itself. The sparsity DAG constraint on the weight matrices is unchanged at each iteration. In this case, we keep the training complexity per iteration at a manageable level approximately around the computational time and space complexity of training a few networks jointly. We include experiments on CASTLE scalability with respect to input feature size in Appendix C. 3.4 Generalization bound for CASTLE regularization In this section, we analyze theoretically why CASTLE regularization can improve the generalization performance by introducing a generalization bound for our model in Figure 2. Our bound is based on the PAC-Bayesian learning theory in [42, 43, 44]. Here, we re-interpret the DAG regularizer as a special prior or assumption on the input weight matrices of our model and use existing PAC-Bayes theory to prove the generalization of our algorithm. Traditionally, PAC-Bayes bounds are only applied to randomized models, such as Bayesian or Gibbs classifiers. Here, our bound is applied to our deterministic model by using the recent derandomization formalism from [45, 46]. We acknowledge and note that developing tighter and non-vacuous generalization bounds for deep neural networks is still a challenging and evolving topic in learning theory. The bounds are often stated with many constants from different steps of the proof. For reader convenience, we provide the simplified version of our bound in Theorem 1. The proof, details (e.g., the constants), and discussions about the assumptions are provided in Appendix A. We begin with a few assumptions before stating our bound. Assumption 1. For any sample X̃ = (Y,X) ∼ PX̃ , X̃ has bounded `2 norm s.t. ‖X̃‖2 ≤ B, for some B > 0. Assumption 2. The loss function L(fΘ) = ‖fΘ(X̃)− X̃‖2 is sub-Gaussian under the distribution PX̃ with a variance factor s 2 s.t. ∀t > 0, EPX̃ [ exp ( t ( L(fΘ)− LP (fΘ) ))] ≤ exp( t 2s2 2 ). Theorem 1. Let fΘ : X̃ → X̃ be a M -layer ReLU feed-forward network with layer size h, and each of its weight matrices has the spectral norm bounded by κ. Then, under Assumptions 1 and 2, for any δ, γ > 0, with probability 1− δ over a training set of N i.i.d samples, for any Θ in (4), we have: LP (fΘ) ≤ 4LN (fΘ) + 1N [ RΘ1 + C1(VΘ1 + VΘ2) + log ( 8 δ )] + C3 (8) where LP (fΘ) is the expected reconstruction loss of X̃ under PX̃ , LN (fΘ), VΘ1 and RΘ1 are defined in (6-7), VΘ2 is the `2 norm of the network weights in the output and shared hidden layers, and C1 and C2 are some constants depending on γ, d, h,B, s and M . The statistical properties of the reconstruction loss in learning linear DAGs, e.g. 
LW = 1N ‖X̃ − WX̃‖2F , have been well studied in the literature: the loss minimizer provably recovers a true DAG with high probability on finite-samples, and hence is consistent for both Gaussian SEM [47] and non-Gaussian SEM [48, 49]. Note also that the regularizerRW orRΘ1 are not a part of the results in [47, 48, 49]. However, the works of [37, 38] empirically show that using RW or RΘ1 on top of the reconstruction loss leads to more efficient and more accurate DAG learning than existing approaches. Our theoretical result on the reconstruction loss explains the benefit ofRW orRΘ1 for the generalization performance of predicting Y . This provides theoretical support for our CASTLE regularizer in supervised learning. However, the objectives of DAG discovery, e.g., identifying the Markov Blanket of Y , is beyond the scope of our analysis. The bound in (8) justifies RΘ1 in general, including linear or nonlinear cases, if the underlying distribution PX̃ is factorized according to some causal DAG. We note that the expected loss LP (fΘ) is upper bounded by the empirical loss LN (fΘ), VΘ1 , VΘ1 andRΘ1 which measures how close (via acyclicity constraint) the model is to a DAG. From (8) it is obvious that not minimizingRΘ1 is an acceptable strategy asymptotically or in the large samples limit (large N ) because RΘ1/N becomes negligible. This aligns with the consistency theory in [47, 48, 49] for linear models. However for small N , a preferred strategy is to train a model fΘ by minimizing LN (fΘ) andRΘ1 jointly. This would be trivial because the samples are generated under the DAG structure in PX̃ . MinimizingRΘ1 can decrease the upper bound of LP (fΘ) in (8), improve the generalization performance of fΘ, as well as facilitate the convergence of fΘ to the true model. If PX̃ does not correspond to any causal DAG, such as image data, then there will be a tradeoff between minimizing RΘ1 and LN (fΘ). In this case, RΘ1 becomes harder to minimize, and generalization may not benefit from adding CASTLE. However, this is a rare case since causal structure exists in most datasets inherently. Our experiments demonstrate that CASTLE regularization outperforms popular regularizers on a variety of datasets in the next section. 4 Experiments In this section, we empirically evaluate CASTLE as a regularization method for improving generalization performance. We present our benchmark methods and training architecture, followed by our synthetic and publicly available data results. Benchmarks. We benchmark CASTLE against common regularizers that include: early stopping (Baseline) [31], L1 [29], L2 [30], dropout [3] with drop rate of 20% and 50% denoted as DO(0.2) and DO(0.5) respectively, SAE [12], batch normalization (BN) [33], data augmentation or input noise (IN) [2], and MixUp (MU) [35], in no particular order. For each regularizer with tunable hyperparameters we performed a standard grid search. For the weight decay regularizers L1 and L2 we searched for λ`p ∈ {0.1, 0.01, 0.001}, and for input noise we use a Gaussian noise with mean of 0 and standard deviation σ ∈ {0.1, 0.01, 0.01}. L1 and L2 were applied at every dense layer. BN and DO were applied after every dense layer and active only during training. Because each regularization method converges at different rates, we use early stopping on a validation set to terminate each benchmark training, which we refer to as our Baseline. Network architecture and training. We implemented CASTLE in Tensorflow2. 
Our proposed architecture is comprised of d + 1 sub-networks with shared hidden layers, as shown in Figure 2. In the linear case, V_W is the ℓ1 norm of W. In the nonlinear case, V_{Θ_1} is the ℓ1 norm of the input weight matrices W^k_1, k ∈ [d + 1]. To make a clear comparison with L2 regularization, we exclude the capacity term V_{Θ_2} from CASTLE, although it is a part of our generalization bound in (8). Since predicting the target variable is the primary task, all benchmarks share this common predictive network architecture. Specifically, we use a network with two hidden layers of d + 1 neurons with ReLU activation. Each benchmark method is initialized and seeded identically with the same random weights. For dataset preprocessing, all continuous variables are standardized to a mean of 0 and a variance of 1. Each model is trained using the Adam optimizer with a learning rate of 0.001 for up to a maximum of 200 epochs. An early-stopping regime halts training with a patience of 30 epochs. 4.1 Regularization on Synthetic Data Synthetic data generation. Given a DAG G, we generate functional relationships between each variable and its respective parent(s), with additive Gaussian noise applied to each variable with a mean of 0 and variance of 1. In the linear case, each variable is equal to the sum of its parents plus noise. In the nonlinear case, each variable is equal to the sum of the sigmoid of its parents plus noise. We provide further details on our synthetic DGP and pseudocode in Appendix B; a minimal sketch is also given below. Consider Table 2: using our nonlinear DGP, we generated 1000 test samples according to the DAG in Figure 1. We then used 10-fold cross-validation to train and validate each benchmark on varying training sets of size n. Each model was evaluated on the test set from the weights saved at the lowest validation error. Table 2 shows that CASTLE improves over all experimental benchmarks. We present similar results for our linear experiments in Appendix B. Code is provided at https://bitbucket.org/mvdschaar/mlforhealthlabpub. Dissecting CASTLE. In the synthetic environment, we know the causal relationships with certainty. We analyze three aspects of CASTLE regularization using synthetic data. Because we are comparing across randomly simulated DAGs with differing functional relationships, the magnitude of the regression testing error will vary between runs. To normalize this, we examine model performance in terms of each model's average rank over the folds. If we have r regularizers, the worst and best possible ranks are one and r, respectively (i.e., the higher the rank, the better). We used 10-fold cross-validation to terminate model training and tested each model on a held-out test set of 1000 samples. First, we examine the impact of increasing the feature size or DAG vertex cardinality |G|. We do this by randomly generating a DAG of size |G| ∈ {10, 50, 100, 150} with 50|G| training samples. We repeat this ten times for each DAG cardinality. On the left-hand side of Fig. 3, CASTLE has the highest rank of all benchmarks and does not degrade with increasing |G|. Second, we analyze the impact of increasing dataset size. We randomly generate DAGs of size |G| ∈ {10, 50, 100, 150}, which we use to create datasets of α|G| samples, where α ∈ {20, 50, 100, 150, 200}. We repeat this ten times for each dataset size. In the middle plot of Figure 3, we see that CASTLE has superior performance for all dataset sizes, and, as expected, all benchmark methods (except for SAE) start to converge toward the average rank at large data sizes (α = 200).
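Returning to the synthetic data generation described above, a minimal NumPy sketch of one plausible reading follows. The edge density of 0.5, the topological ordering via an upper-triangular adjacency matrix, and the function name are illustrative assumptions; the authors' exact pseudocode is in their Appendix B.

```python
import numpy as np

def sample_dag_data(num_nodes, num_samples, nonlinear=True, seed=0):
    """Sample from a random DAG: each variable is the sum of its parents
    (linear) or the sum of the sigmoid of its parents (nonlinear),
    plus additive N(0, 1) noise."""
    rng = np.random.default_rng(seed)
    # An upper-triangular adjacency matrix is acyclic by construction.
    A = np.triu(rng.random((num_nodes, num_nodes)) < 0.5, k=1).astype(float)
    X = np.zeros((num_samples, num_nodes))
    for j in range(num_nodes):           # node indices are a topological order
        parents = np.nonzero(A[:, j])[0]
        contrib = np.zeros(num_samples)
        if parents.size:
            P = X[:, parents]
            contrib = (1.0 / (1.0 + np.exp(-P))).sum(axis=1) if nonlinear else P.sum(axis=1)
        X[:, j] = contrib + rng.normal(0.0, 1.0, size=num_samples)
    return A, X

A, X = sample_dag_data(num_nodes=10, num_samples=500)
```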
Third, we analyze our method’s sensitivity to noise variables, i.e., variables disconnected from the target variable in G. We randomly generate DAGs of size |G| = 50 to create datasets with 50|G| samples. We randomly add v ∈ {20i}_{i=0}^{5} = {0, 20, 40, 60, 80, 100} noise variables, normally distributed with zero mean and unit variance. We repeat this process for ten different DAG instantiations. The results on the right-hand side of Figure 3 show that our method is not sensitive to the existence of disconnected noise variables, whereas SAE performance degrades as the number of uncorrelated input features increases. This highlights the benefit of target selection based on the DAG topology. In Appendix C, we provide an analysis of the adjacency matrix weights that are learned under various random DAG configurations, e.g., target with parents, orphaned target, etc. There, we highlight CASTLE in comparison to SAE for target selection by showing that the adjacency matrix weights for noise variables are near zero. We also provide a sensitivity analysis on the parameter λ from (5) and results for additional experiments demonstrating that CASTLE does not reconstruct noisy (neighborless) variables in the underlying causal DAG. 4.2 Regularization on Real Data We perform regression and classification experiments on a spectrum of publicly available datasets from [50], including Boston Housing (BH), Wine Quality (WQ), Facebook Metrics (FB), Bioconcentration (BC), Student Performance (SP), Community (CM), Contraception Choice (CC), Pima Diabetes (PD), Las Vegas Ratings (LV), Statlog Heart (SH), and Retinopathy (RP). For each dataset, we randomly reserve 20% of the samples for a testing set. We perform 10-fold cross-validation on the remaining 80%. As the results in Table 3 show, CASTLE provides improved regularization across all datasets for both regression and classification tasks. Additionally, CASTLE consistently ranks as the top regularizer (shown graphically in Appendix C.3), with no definitive benchmark method emerging as a consensus runner-up. This emphasizes the stability of CASTLE as a reliable regularizer. In Appendix C, we provide additional experiments on several other datasets, an ablation study highlighting our sources of gain, and real-world dataset statistics. 5 Conclusion We have introduced CASTLE regularization, a novel regularization method that jointly learns the causal graph to improve generalization performance in comparison to existing capacity-based and reconstruction-based regularization methods. We used existing PAC-Bayes theory to provide a theoretical generalization bound for CASTLE. We have shown experimentally that CASTLE is insensitive to increasing feature dimensionality, dataset size, and uncorrelated noise variables. Furthermore, we have shown that CASTLE regularization improves performance on a plethora of real datasets and, in the worst case, never degrades performance. We hope that CASTLE will play a role as a general-purpose regularizer that can be leveraged by the entire machine learning community. Broader Impact One of the big challenges of machine learning, and deep learning in particular, is generalization to out-of-sample data. Regularization is used to prevent overfitting, thereby promoting generalization. In this work, we have presented a novel regularization method inspired by causality. Since the applicability of our approach spans all problems where causal relationships exist between variables, there are countless beneficiaries of our research.
Apart from the general machine learning community, the beneficiaries of our research include practitioners in the social sciences (sociology, psychology, etc.), natural sciences (physics, biology, etc.), and healthcare among countless others. These fields have already been exploiting causality for some time and serve as a natural launch-pad for deploying and leveraging CASTLE. With that said, our method does not immediately apply to certain architectures, such as CNNs, where causal relationships are ambiguous or perhaps non-existent. Acknowledgments This work was supported by GlaxoSmithKline (GSK), the US Office of Naval Research (ONR), and the National Science Foundation (NSF) 1722516. We thank all reviewers for their invaluable comments and suggestions.
1. What is the focus of the paper regarding improving supervised learning performance?
2. What are the strengths of the proposed approach, particularly in exploiting causal structure?
3. What are the weaknesses of the paper, especially regarding scalability and computational cost?
4. How does the method improve interpretability in feature selection and target selection?
5. Can you provide more information on the empirical results and their significance?
6. How does the method compare to other regularization techniques in terms of computational cost and prediction improvement?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
The aim of this paper is to improve the performance of supervised learning on out-of-bag samples. In the case of deep networks, regularization helps mitigate overfitting but does not exploit the structure of the feature variables and their relation to the outcome when the DGP can be represented by a causal DAG. The authors propose CASTLE, which jointly learns the causal graph while performing regularization. In particular, the adjacency matrix of the learned DAG is used in the input layers of the neural network, which translates to the penalty function being decomposed into the reconstruction loss found in SAE, a (new) acyclicity loss, and a capacity-based regularizer of the adjacency matrices. Unlike other approaches, CASTLE improves upon capacity-based and auto-encoder-based regularization by exploiting the DAG structure for identification of causal predictors (parents of Y, if they exist) and for target selection for reconstruction regularization (features that have neighbours in the underlying DAG). The main contributions lie in (1) combining the results from SAEs in [12] with continuous optimization for DAG learning in [39], [40]; and (2) borrowing results from the PAC-Bayes literature to derive an upper bound on the expected reconstruction loss under the DGP. CASTLE regularization was tested in the case where the causal DAG is parameterized by an M-layer feed-forward neural network with ReLU activations and layer size h, and is shown to outperform several benchmark regularizers on both synthetic and real-world datasets.
Strengths
This work clearly lies at the intersection of causal discovery and machine learning. By learning the DGP, the complexity of the FNN is reduced because the adjacency matrix is embedded in the input layer. The method relies on non-parametric SEMs as opposed to any particular parametric form (e.g. need not be exponential family). The theoretical results seem sound and are based on viewing the DAG regularizer as a prior for the (structure of) the input weight matrices. The empirical results are encouraging. One significant point not stated is the gain in interpretability. While the overall goal is prediction, this approach is less black-box than a standard FNN because the choice of variables for feature selection or target selection is grounded in the structure of the learned causal DAG. I would imagine this to be a selling point to critics of black-box modelling. Further, exploiting recent results in continuous optimization for learning non-parametric DAGs makes the proposed method more feasible. This work has the potential to become a go-to regularizer, pending the scalability of the method. The main idea (exploiting causal structure in NN regularization) is neat. Since it seems to readily take previous results in causal discovery and embed them in previous results on SAEs, the results are not as novel per se, but that may be outweighed by the potential significance of the method.
Weaknesses
The main weakness is scalability and the lack of available runtime information. There is no mention of how long it takes to run the algorithm. While the goal is improving out-of-bag prediction, what is the computational cost associated with this prediction improvement, particularly compared to SAEs? If the gains in OOB prediction error are minimal but the computational cost formidable, that is worth contemplating.
On the other hand, since the adjacency matrix is embedded in the input layers' weight matrices, is there a computational gain over some of these other methods? Finally, the simulations looked at DAGs having 10 to 150 nodes, which can still be considered low for some applications. Again, does the computational burden of learning the causal structure outweigh the gains made in OOB prediction error?
NIPS
Title 3D Multi-bodies: Fitting Sets of Plausible 3D Human Models to Ambiguous Image Data Abstract We consider the problem of obtaining dense 3D reconstructions of humans from single and partially occluded views. In such cases, the visual evidence is usually insufficient to identify a 3D reconstruction uniquely, so we aim at recovering several plausible reconstructions compatible with the input data. We suggest that ambiguities can be modelled more effectively by parametrizing the possible body shapes and poses via a suitable 3D model, such as SMPL for humans. We propose to learn a multi-hypothesis neural network regressor using a best-of-M loss, where each of the M hypotheses is constrained to lie on a manifold of plausible human poses by means of a generative model. We show that our method outperforms alternative approaches in ambiguous pose recovery on standard benchmarks for 3D humans, and in heavily occluded versions of these benchmarks. (∗ Work completed during an internship at Facebook AI Research.) 1 Introduction We are interested in reconstructing 3D human pose from the observation of single 2D images. As humans, we have no problem in predicting, at least approximately, the 3D structure of most scenes, including the pose and shape of other people, even from a single view. However, 2D images notoriously [9] do not contain sufficient geometric information to allow recovery of the third dimension. Hence, single-view reconstruction is only possible in a probabilistic sense and the goal is to make the posterior distribution as sharp as possible, by learning a strong prior on the space of possible solutions. Recent progress in single-view 3D pose reconstruction has been impressive. Methods such as HMR [17], GraphCMR [20] and SPIN [19] formulate this task as learning a deep neural network that maps 2D images to the parameters of a 3D model of the human body, usually SMPL [26]. These methods work well in general, but not always (fig. 2). Their main weakness is processing heavily occluded images of the object. When a large part of the object is missing, say the lower body of a sitting human, they output reconstructions that are often implausible. Since they can produce only one hypothesis as output, they very likely learn to approximate the mean of the posterior distribution, which may not correspond to any plausible pose. Unfortunately, this failure modality is rather common in applications due to scene clutter and crowds. In this paper, we propose a solution to this issue. Specifically, we consider the challenge of recovering 3D mesh reconstructions of complex articulated objects such as humans from highly ambiguous image data, often containing significant occlusions of the object. Clearly, it is generally impossible to reconstruct the object uniquely if too much evidence is missing; however, we can still predict a set containing all possible reconstructions (see fig. 1), making this set as small as possible. While ambiguous pose reconstruction has been previously investigated, as far as we know, this is the first paper that looks specifically at a deep learning approach for ambiguous reconstructions of the full human mesh. Our primary contribution is to introduce a principled multi-hypothesis framework to model the ambiguities in monocular pose recovery.
In the literature, such multiple-hypothesis networks are often trained with a so-called best-of-M loss — namely, during training, the loss is incurred only by the best of the M hypotheses, back-propagating gradients from that alone [12]. In this work we opt for the best-of-M approach since it has been shown to outperform alternatives (such as variational auto-encoders or mixture density networks) in tasks that are similar to our 3D human pose recovery and that have constrained output spaces [34]. Figure 2: Top: pretrained SPIN model (trained on full images) tested on an ambiguous example (input image crop, prediction, full image); Bottom: SPIN model after fine-tuning to ambiguous (masked) examples. Note the network tends to regress to the mean over plausible poses, shown by predicting the missing legs vertically downward — arguably the average position over the training dataset. A major drawback of the best-of-M approach is that it only guarantees that one of the hypotheses lies close to the correct solution; however, it says nothing about the plausibility, or lack thereof, of the other M − 1 hypotheses, which can be arbitrarily ‘bad’.² [² Theoretically, best-of-M can minimize its loss by optimally quantizing the posterior distribution (in the sense of minimum expected distortion), which would be desirable for coverage. However, this is not the only solution that optimizes the best-of-M training loss, as in the end it is sufficient that one hypothesis per training sample is close to the ground truth. In fact, this is exactly what happens; for instance, during training, hypotheses in best-of-M are known to easily become degenerate and ‘die off’, a clear symptom of this problem.] Not only does this mean that most of the hypotheses may be uninformative, but in an application we are also unable to tell which hypothesis should be used, and we might very well pick a ‘bad’ one. This also has a detrimental effect during learning because it makes gradients sparse, as prediction errors are back-propagated only through one of the M hypotheses for each training image. In order to address these issues, our first contribution is a hypothesis reprojection loss that forces each member of the multi-hypothesis set to correctly reproject to 2D image keypoint annotations. The main benefit is to constrain the whole predicted set of meshes to be consistent with the observed image, not just the best hypothesis, also addressing gradient sparsity. Next, we observe that another drawback of best-of-M pipelines is that they are tied to a particular value of M, whereas in applications we are often interested in tuning the number of hypotheses considered. Furthermore, minimizing the reprojection loss makes hypotheses geometrically consistent with the observation, but not necessarily likely. Our second contribution is thus to improve the flexibility of best-of-M models by allowing them to output any smaller number n < M of hypotheses, while at the same time making these hypotheses more representative of likely poses. The new method, which we call n-quantized-best-of-M, does so by quantizing the best-of-M model's output, weighted by an explicit pose prior learned by means of normalizing flows. To summarise, our key contributions are as follows.
First, we deal with the challenge of 3D mesh reconstruction for articulated objects such as humans in ambiguous scenarios. Second, we introduce an n-quantized-best-of-M mechanism to allow best-of-M models to generate an arbitrary number of n < M predictions. Third, we introduce a mode-wise re-projection loss for multi-hypothesis prediction, to ensure that the predicted hypotheses are all consistent with the input. Empirically, we achieve state-of-the-art monocular mesh recovery accuracy on Human36M, its more challenging version augmented with heavy occlusions, and the 3DPW datasets. Our ablation study validates each of our modelling choices, demonstrating their positive effect. 2 Related work There is ample literature on recovering the pose of 3D models from images. We break this into five categories: methods that reconstruct 3D points directly, methods that reconstruct the parameters of a 3D model of the object via optimization, methods that do the latter via learning-based regression, hybrid methods, and methods which deal with uncertainty in 3D human reconstruction. Reconstructing 3D body points without a model. Several papers have focused on the problem of estimating 3D body points from 2D observations [3, 29, 33, 41, 20]. Of these, Martinez et al. [27] introduced a particularly simple pipeline based on a shallow neural network. In this work, we aim at recovering the full 3D surface of a human body, rather than only lifting sparse keypoints. Fitting 3D models via direct optimization. Several methods fit the parameters of a 3D model such as SMPL [25] or SCAPE [3] to 2D observations using an optimization algorithm to iteratively improve the fitting quality. While early approaches such as [10, 37] required some manual intervention, the SMPLify method of Bogo et al. [5] was perhaps the first to fit SMPL to 2D keypoints fully automatically. SMPL was then extended to use silhouettes, multiple views, and multiple people in [21, 13, 48]. Recent optimization methods such as [16, 32, 46] have significantly increased the scale of the models and data that can be handled. Fitting 3D models via learning-based regression. More recently, methods have focused on regressing the parameters of the 3D models directly, in a feed-forward manner, generally by learning a deep neural network [42, 43, 30, 31, 17]. Due to the scarcity of 3D ground truth data for humans in the wild, most of these methods train a deep regressor using a mix of datasets with 3D and 2D annotations in the form of 3D MoCap markers, 2D keypoints and silhouettes. Among those, HMR of Kanazawa et al. [17] and GraphCMR of Kolotouros et al. [20] stand out as particularly effective. Hybrid methods. Other authors have also combined optimization and learning-based regression methods. In most cases, the integration is done by using a deep regressor to initialize the optimization algorithm [37, 21, 33, 31, 44]. However, recently Kolotouros et al. [19] have shown strong results by integrating the optimization loop in learning the deep neural network that performs the regression, thereby exploiting the weak cues available in 2D keypoints. Modelling ambiguities in 3D human reconstruction. Several previous papers have looked at the problem of modelling ambiguous 3D human pose reconstructions. Early work includes Sminchisescu and Triggs [39], Sidenbladh et al. [36] and Sminchisescu et al. [38]. More recently, Akhter and Black [1] learn a prior over human skeleton joint angles (but not directly a prior on the SMPL parameters) from a MoCap dataset.
Li and Lee [22] use the Mixture Density Networks model of [4] to capture ambiguous 3D reconstructions of sparse human body keypoints directly in physical space. Sharma et al. [35] learn a conditional variational auto-encoder to model ambiguous reconstructions as a posterior distribution; they also propose two scoring methods to extract a single 3D reconstruction from the distribution. Cheng et al. [7] tackle the problem of video 3D reconstruction in the presence of occlusions, and show that temporal cues can be used to disambiguate the solution. While our method shares the goal of correctly handling prediction uncertainty, we differ by applying our method to predicting the full mesh of the human body. This is arguably a more challenging scenario due to the increased complexity of the desired 3D shape. Finally, some recent concurrent works also consider building priors over 3D human pose using normalizing flows. Xu et al. [47] release a prior for their new GHUM/GHUML model, and Zanfir et al. [49] build a prior on SMPL joint angles to constrain their weakly-supervised network. Our method differs as we learn our prior on 3D SMPL joints. 3 Preliminaries Before discussing our method, we describe the necessary background, starting from SMPL. SMPL. SMPL is a model of the human body parameterized by axis-angle rotations θ ∈ R^69 of 23 body joints, the shape coefficients β ∈ R^10 modelling shape variations, and a global rotation γ ∈ R^3. SMPL defines a skinning function S : (θ, β, γ) ↦ V that maps the body parameters to the vertices V ∈ R^{6890×3} of a 3D mesh. Predicting the SMPL parameters from a single image. Given an image I containing a person, the goal is to recover the SMPL parameters (θ, β, γ) that provide the best 3D reconstruction of it. Existing algorithms [18] cast this as learning a deep network G(I) = (θ, β, γ, t) that predicts the SMPL parameters as well as the translation t ∈ R^3 of the perspective camera observing the person. We assume a fixed set of camera parameters. During training, the camera is used to constrain the reconstructed 3D mesh and the annotated 2D keypoints to be consistent. Since most datasets only contain annotations for a small set of keypoints ([11] is an exception), and since these keypoints do not correspond directly to any of the SMPL mesh vertices, we need a mechanism to translate between them. This mechanism is a fixed linear regressor J : V ↦ X that maps the SMPL mesh vertices V = S(G(I)) to the 3D locations X = J(V) = J(S(G(I))) of the K joints. Then, the projections π_t(X) of the 3D joint positions into image I can be compared to the available 2D annotations. Normalizing flows. The idea of normalizing flows (NF) is to represent a complex distribution p(X) on a random variable X as a much simpler distribution p(z) on a transformed version z = f(X) of X. The transformation f is learned so that p(z) has a fixed shape, usually a Normal p(z) ∼ N(0, 1). Furthermore, f itself must be invertible and smooth. In this paper, we utilize a particular version of NF dubbed RealNVP [8]. A more detailed explanation of NF and RealNVP has been deferred to the supplementary. 4 Method We start from a neural network architecture that implements the function G(I) = (θ, β, γ, t) described above. As shown in SPIN [19], the HMR [18] architecture attains state-of-the-art results for this task, so we use it here. However, the resulting regressor G(I), given an input image I, can only produce a single unique solution.
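Before continuing with the multi-hypothesis extension, the normalizing-flow preliminaries above can be made concrete with a toy NumPy sketch of a single affine coupling layer in the spirit of RealNVP. The actual architecture, dimensionality, and training used by the authors are in their supplementary; the 6-D example, the fixed "coupling networks", and the single-layer setup here are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 6                                       # toy pose dimensionality (assumption)
W_s = rng.normal(size=(D // 2, D // 2))
W_t = rng.normal(size=(D // 2, D // 2))

def coupling_forward(x):
    """One affine coupling layer z = f(x): the first half passes through,
    the second half is scaled and shifted by functions of the first half."""
    x1, x2 = x[: D // 2], x[D // 2:]
    s, t = np.tanh(x1 @ W_s), x1 @ W_t      # stand-ins for small learned networks
    z = np.concatenate([x1, x2 * np.exp(s) + t])
    log_det = s.sum()                       # log |det df/dx| of an affine coupling
    return z, log_det

def log_prob(x):
    """Change of variables: log p(x) = log N(f(x); 0, I) + log |det df/dx|."""
    z, log_det = coupling_forward(x)
    log_base = -0.5 * (z @ z + D * np.log(2 * np.pi))
    return log_base + log_det

print(log_prob(rng.normal(size=D)))
```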
In general, and in particular for cases with a high degree of reconstruction ambiguity, we are interested in predicting a set of plausible 3D poses rather than a single one. We thus extend our model to explicitly produce a set of M different hypotheses Gm(I) = (θm, βm, γm, tm), m = 1, . . . , M. This is easily achieved by modifying HMR's final output layer to produce a tensor M times larger, effectively stacking the hypotheses. In what follows, we describe the learning scheme that drives the monocular predictor G to achieve an optimal coverage of the plausible poses consistent with the input image. Our method is summarized in fig. 3. 4.1 Learning with multiple hypotheses For learning the model, we assume to have a training set of N images {Ii}i=1,...,N, each cropped around a person. Furthermore, for each training image Ii we assume to know (1) the 2D locations Yi of the body joints, (2) their 3D locations Xi, and (3) the ground-truth SMPL fit (θi, βi, γi). Depending on the setup, some of these quantities can be inferred from the others (e.g. we can use the function J to convert the SMPL parameters to the 3D joints Xi and then the camera projection to obtain Yi). Best-of-M loss. Given a single input image, our network predicts a set of poses, where at least one should be similar to the ground truth annotation Xi. This is captured by the best-of-M loss [12]: $$\mathcal{L}_{\mathrm{best}}(J,G;m^*) = \frac{1}{N}\sum_{i=1}^{N}\big\|X_i - \hat{X}^{m^*_i}(I_i)\big\|, \qquad m^*_i = \operatorname*{argmin}_{m=1,\dots,M}\big\|X_i - \hat{X}^{m}(I_i)\big\|, \quad (1)$$ where X̂m(Ii) = J(Gm(V(Ii))) are the 3D joints estimated by the m-th SMPL predictor Gm(Ii) applied to image Ii. In this way, only the best hypothesis is steered to match the ground truth, leaving the other hypotheses free to sample the space of ambiguous solutions. During the computation of this loss, we also extract the best index m∗i for each training example. Limitations of best-of-M. As noted in section 1, best-of-M only guarantees that one of the M hypotheses is a good solution, but says nothing about the other ones. Furthermore, in applications we are often interested in modulating the number of hypotheses generated, but the best-of-M regressor G(I) only produces a fixed number M of output hypotheses, and changing M would require retraining from scratch, which is intractable. We first address these issues by introducing a method that allows us to train a best-of-M model for a large M once and leverage it later to generate an arbitrary number of n < M hypotheses without the need of retraining, while ensuring that these are good representatives of likely body poses. n-quantized-best-of-M. Formally, given a set of M predictions X̂M(I) = {X̂1(I), ..., X̂M(I)}, we seek to generate a smaller n-sized set X̄n(I) = {X̄1(I), ..., X̄n(I)} which preserves the information contained in X̂M. In other words, X̄n optimally quantizes X̂M. To this end, we interpret the output of the best-of-M model as a set of choices X̂M(I) for the possible pose. These poses are of course not all equally likely, but it is difficult to infer their probability from (1). We thus work with the following approximation. We consider the prior p(X) on possible poses (defined in the next section), and set: $$p(X\,|\,I) = p\big(X\,\big|\,\hat{\mathcal{X}}^M(I)\big) = \sum_{i=1}^{M}\delta\big(X - \hat{X}^{i}(I)\big)\,\frac{p\big(\hat{X}^{i}(I)\big)}{\sum_{k=1}^{M}p\big(\hat{X}^{k}(I)\big)}. \quad (2)$$ This amounts to using the best-of-M output as a conditioning set (i.e. an unweighted selection of plausible poses) and then using the prior p(X) to weight the samples in this set.
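A minimal NumPy sketch of the two ingredients just introduced — the best-of-M selection in eq. (1) and the prior-based weighting of hypotheses in eq. (2) — is given below; these weights are what the quantization step that follows operates on. The joint count, the number of hypotheses, and the stand-in prior log-densities are illustrative assumptions (in the full method the log-densities come from the learned flow).

```python
import numpy as np
from scipy.special import logsumexp

def best_hypothesis_index(X_gt, X_hyp):
    """Index m* of the hypothesis closest to the ground-truth joints (eq. 1)."""
    errs = np.linalg.norm(X_hyp - X_gt[None], axis=(1, 2))   # (M,)
    return int(np.argmin(errs))

def hypothesis_weights(log_prior):
    """Normalized weights p(X^i) / sum_k p(X^k) from per-hypothesis prior
    log-densities (eq. 2), computed stably in log space."""
    return np.exp(log_prior - logsumexp(log_prior))

# toy example: M = 4 hypotheses over K = 17 joints in 3D
rng = np.random.default_rng(0)
X_gt = rng.normal(size=(17, 3))
X_hyp = X_gt[None] + 0.1 * rng.normal(size=(4, 17, 3))
log_prior = rng.normal(size=4)            # stand-in for the flow's log p(X^i)
print(best_hypothesis_index(X_gt, X_hyp), hypothesis_weights(log_prior))
```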
With the weighted samples, we can then run K-means [24] to further quantize the best-of-M output while minimizing the quantization energy E: $$E(\bar{\mathcal{X}}\,|\,\hat{\mathcal{X}}) = \mathbb{E}_{p(X|I)}\Big[\min_{j=1,\dots,n}\|X - \bar{X}^{j}\|^2\Big] = \sum_{i=1}^{M}\frac{p\big(\hat{X}^{i}(I)\big)}{\sum_{k=1}^{M}p\big(\hat{X}^{k}(I)\big)}\,\min_{j=1,\dots,n}\big\|\hat{X}^{i}(I) - \bar{X}^{j}\big\|^2. \quad (3)$$ This can be done efficiently on GPU — for our problem, K-means consumes less than 20% of the execution time of the entire forward pass of our method. Learning the pose prior with normalizing flows. In order to obtain p(X), we propose to learn a normalizing flow model in the form of the RealNVP network f described in section 3 and the supplementary. RealNVP optimizes the log-likelihood Lnf(f) of the training ground-truth 3D skeletons {X1, ..., XN} annotated in their corresponding images {I1, ..., IN}: $$\mathcal{L}_{\mathrm{nf}}(f) = -\frac{1}{N}\sum_{i=1}^{N}\log p(X_i) = -\frac{1}{N}\sum_{i=1}^{N}\Big(\log\mathcal{N}\big(f(X_i)\big) - \sum_{l=1}^{L}\log\Big|\frac{df_l(X_i^l)}{dX_i^l}\Big|\Big). \quad (4)$$ 2D re-projection loss. Since the best-of-M loss optimizes a single prediction at a time, some members of the ensemble X̂(I) often drift away from the manifold of plausible human body shapes, ultimately becoming ‘dead’ predictions that are never selected as the best hypothesis m∗. In order to prevent this, we further utilize a re-projection loss that acts across all hypotheses for a given image. More specifically, we constrain the set of 3D reconstructions to lie on projection rays passing through the 2D input keypoints with the following hypothesis re-projection loss: $$\mathcal{L}_{\mathrm{ri}}(J,G) = \frac{1}{N}\sum_{i=1}^{N}\sum_{m=1}^{M}\big\|Y_i - \pi_{t_i}\big(\hat{X}^{m}(I_i)\big)\big\|. \quad (5)$$ Note that many of our training images exhibit significant occlusion, so Yi may contain invisible or missing points. We handle this by masking Lri to prevent these points from contributing to the loss. SMPL loss. The final loss terms, introduced by prior work [18, 31, 19], penalize deviations between the predicted and ground truth SMPL parameters. For our method, these are only applied to the best hypothesis m∗i found above: $$\mathcal{L}_{\theta}(G;m^*) = \frac{1}{N}\sum_{i=1}^{N}\big\|\theta_i - G_{\theta,m^*_i}(I_i)\big\|; \qquad \mathcal{L}_{V}(G;m^*) = \frac{1}{N}\sum_{i=1}^{N}\big\|S(\theta_i,\beta_i,\gamma_i) - S\big(G_{(\theta,\beta,\gamma),m^*_i}(I_i)\big)\big\| \quad (6)$$ $$\mathcal{L}_{\beta}(G;m^*) = \frac{1}{N}\sum_{i=1}^{N}\big\|\beta_i - G_{\beta,m^*_i}(I_i)\big\|; \qquad \mathcal{L}_{\mathrm{rb}}(G;m^*) = \frac{1}{N}\sum_{i=1}^{N}\big\|Y_i - \pi_{t_i}\big(\hat{X}^{m^*_i}(I_i)\big)\big\| \quad (7)$$ Note here we use Lrb to refer to a 2D re-projection error between the best hypothesis and the ground truth 2D points Yi. This differs from the earlier loss Lri, which is applied across all modes to enforce consistency with the visible input points. Note that we could have used eqs. (6) and (7) to select the best hypothesis m∗i, but this would entail an unmanageable memory footprint due to the requirement of SMPL-meshing every hypothesis before the best-of-M selection. Overall loss. The model is thus trained to minimize: $$\mathcal{L}(J,G) = \lambda_{\mathrm{ri}}\mathcal{L}_{\mathrm{ri}}(J,G) + \lambda_{\mathrm{best}}\mathcal{L}_{\mathrm{best}}(J,G;m^*) + \lambda_{\theta}\mathcal{L}_{\theta}(J,G;m^*) + \lambda_{\beta}\mathcal{L}_{\beta}(J,G;m^*) + \lambda_{V}\mathcal{L}_{V}(J,G;m^*) + \lambda_{\mathrm{rb}}\mathcal{L}_{\mathrm{rb}}(J,G;m^*) \quad (8)$$ where m∗ is given in eq. (1) and λri, λbest, λθ, λβ, λV, λrb are weighting factors. We use a consistent set of SMPL loss weights across all experiments, λbest = 25.0, λθ = 1.0, λβ = 0.001, λV = 1.0, and set λri = 1.0. Since the training of the normalizing flow f is independent of the rest of the model, we train f separately by optimizing Lnf with a weight of λnf = 1.0. Samples from our trained normalizing flow are shown in fig. 4. 5 Experiments In this section we compare our method to several strong baselines. We start by describing the datasets and the baselines, followed by a quantitative and a qualitative evaluation. As common practice, we train on subjects S1, S5, S6, S7 and S8, and test on S9 and S11. 3DPW is only used for evaluation and, following [20], we evaluate on its test set.
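For reference, a short sketch of the prior-weighted quantization step of eq. (3) is given below, using scikit-learn's weighted K-means and returning the cluster centroids as the n representatives. Returning centroids (rather than, say, the nearest original hypotheses) is one plausible reading of the energy in eq. (3), and the function name and toy shapes are assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def quantize_hypotheses(X_hyp, weights, n):
    """Quantize M pose hypotheses (M, K, 3) down to n representatives by
    running K-means on the flattened poses with prior-derived sample weights,
    approximately minimizing the energy in eq. (3)."""
    M = X_hyp.shape[0]
    flat = X_hyp.reshape(M, -1)
    km = KMeans(n_clusters=n, n_init=10, random_state=0)
    km.fit(flat, sample_weight=weights)
    return km.cluster_centers_.reshape(n, *X_hyp.shape[1:])

# toy usage with M = 4 hypotheses over 17 joints and weights from the prior
rng = np.random.default_rng(1)
X_hyp = rng.normal(size=(4, 17, 3))
weights = np.array([0.1, 0.5, 0.3, 0.1])
X_bar = quantize_hypotheses(X_hyp, weights, n=2)
```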
Our evaluation is consistent with [19, 20]: we report two metrics that compare the lifted dense 3D SMPL shape to the ground truth mesh, Mean Per Joint Position Error (MPJPE) and Reconstruction Error (RE). For H36M, all errors are computed using an evaluation scheme known as “Protocol #2”. Please refer to the supplementary for a detailed explanation of MPJPE and RE. Multipose metrics. MPJPE and RE are traditional metrics that assume a single correct ground-truth prediction for a given 2D observation. As mentioned above, such an assumption is rarely correct due to the inherent ambiguity of the monocular 3D shape estimation task. We thus also report MPJPE-n/RE-n, extensions of MPJPE/RE used in [22] that enable an evaluation of n different shape hypotheses. In more detail, to evaluate an algorithm, we allow it to output n possible predictions and, out of this set, we select the one that minimizes the MPJPE/RE metric. We report results for n ∈ {1, 5, 10, 25}. Ambiguous H36M/3DPW (AH36M/A3DPW). Since H36M is captured in a controlled environment, it rarely depicts challenging real-world scenarios such as body occlusions that are the main source of ambiguity in the single-view 3D shape estimation problem. Hence, we construct an adapted version of H36M with synthetically-generated occlusions (fig. 5) by randomly hiding a subset of the 2D keypoints and re-computing an image crop around the remaining visible joints. Please refer to the supplementary for details of the occlusion generation process. While 3DPW does contain real scenes, for completeness, we also evaluate on a noisy, and thus more challenging, version (A3DPW) generated according to the aforementioned strategy. Baselines. Our method is compared to two multi-pose prediction baselines. For fairness, both baselines extend the same (state-of-the-art) trunk architecture as we use, and all methods have access to the same training data. SMPL-MDN follows [22] and outputs parameters of a mixture density model over the set of SMPL log-rotation pose parameters. Since a naïve implementation of the MDN model leads to poor performance (≈ 200 mm MPJPE-n at n = 5 on H36M), we introduced several improvements that allow optimization of the total loss in eq. (8). SMPL-CVAE, the second baseline, is a conditional variational autoencoder [40] combined with our trunk network. SMPL-CVAE consists of an encoding network that maps a ground truth SMPL mesh V to a Gaussian vector z, which is fed together with an encoding of the image to generate a mesh V′ such that V′ ≈ V. At test time, we sample n plausible human meshes by drawing z ∼ N(0, 1) to evaluate with MPJPE-n/RE-n. More details of both SMPL-CVAE and SMPL-MDN have been deferred to the supplementary material. For completeness, we also compare to three more baselines that tackle the standard single-mesh prediction problem: HMR [17], GraphCMR [31], and SPIN [19], where the latter currently attains state-of-the-art performance on H36M/3DPW. All methods were trained on H36M [14], MPI-INF3DHP [28], LSP [15], MPII [2] and COCO [23]. 5.1 Results Table 1 contains a comprehensive summary of the results on all 3 benchmarks. Our method outperforms SMPL-CVAE and SMPL-MDN in all metrics on all datasets. For SMPL-CVAE, we found that the encoding network often “cheats” during training by transporting all information about the ground truth, instead of only encoding the modes of ambiguity.
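To make the multipose metrics above concrete, the following is a minimal NumPy sketch of MPJPE-n (the error of the best of n hypotheses). The joint count is an illustrative assumption, and the rigid alignment used by Protocol #2 and by RE is deliberately omitted here for brevity.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error: average Euclidean distance over joints."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def mpjpe_n(hypotheses, gt):
    """MPJPE-n: the error of the best of the n supplied hypotheses."""
    return min(mpjpe(h, gt) for h in hypotheses)

rng = np.random.default_rng(0)
gt = rng.normal(size=(17, 3))
hyps = [gt + 0.05 * rng.normal(size=(17, 3)) for _ in range(5)]
print(mpjpe_n(hyps, gt))
```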
The reason for the lower performance of SMPL-MDN is probably the representation of the probability in the space of log-rotations, rather than in the space of vertices. Modelling the MDN in the space of model vertices would be more convenient, as it is more relevant to the final evaluation metric that aggregates per-vertex errors; however, fitting such a high-dimensional (dim = 6890 × 3) Gaussian mixture is prohibitively costly. Furthermore, it is very encouraging to observe that our method is also able to outperform the single-mode baselines [17, 20, 19] on the single-mode MPJPE on both H36M and 3DPW. This comes as a surprise, since our method has not been optimized for this mode of operation. The difference is more significant for 3DPW, probably because 3DPW is not used for training and, hence, the normalizing flow prior acts as an effective filter of predicted outlier poses. Qualitative results are shown in fig. 6. Ablation study. We further conduct an ablative study on 3DPW that removes components of our method and measures the incurred change in performance. More specifically, we: 1) ablate the hypothesis reprojection loss; 2) set p(X|I) = Uniform in eq. (3), effectively removing the normalizing flow component and executing unweighted K-means in n-quantized-best-of-M. Table 2 demonstrates that removing either contribution decreases performance, validating our design choices. 6 Conclusions In this work, we have explored the seldom-visited problem of representing the set of plausible 3D meshes corresponding to a single ambiguous input image of a human. To this end, we have proposed a novel method that trains a single multi-hypothesis best-of-M model and, using a novel n-quantized-best-of-M strategy, allows sampling an arbitrary number n < M of hypotheses. Importantly, this proposed quantization technique leverages a normalizing flow model that effectively filters out predicted hypotheses that are unnatural. Empirical evaluation reveals performance superior to several strong probabilistic baselines on Human36M, its challenging ambiguous version, and on 3DPW. Our method encounters occasional failure cases, such as when tested on individuals with unusual shape (e.g. obese people), since we have very few of these examples in the training set. Tackling such cases would make for interesting and worthwhile future work. Acknowledgements The authors would like to thank Richard Turner for useful technical discussions relating to normalizing flows, and Philippa Liggins, Thomas Roddick and Nicholas Biggs for proof reading. This work was entirely funded by Facebook AI Research. Broader impact Our method improves the ability of machines to understand human body poses in images and videos. Understanding people automatically may arguably be misused by bad actors. However, importantly, our method is not a form of biometrics, as it does not allow the identification of people. Rather, only their overall body shape and pose are reconstructed, and these details are insufficient for unique identification. In particular, individual facial features are not reconstructed at all. Furthermore, our method is an improvement of existing capabilities, but does not introduce a radical new capability in machine learning. Thus our contribution is unlikely to facilitate misuse of technology which is already available to anyone. Finally, any potential negative use of a technology should be balanced against positive uses.
Understanding body poses has many legitimate applications in VR and AR, medicine, assistance to the elderly, assistance to the visually impaired, autonomous driving, human-machine interaction, image and video categorization, platform integrity, etc.
1. What is the focus and contribution of the paper on 3D human body estimation?
2. What are the strengths of the proposed approach, particularly in terms of its ability to handle ambiguity?
3. What are the weaknesses of the paper regarding its claims and comparisons with other works?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. Are there any questions or concerns regarding the methodology, such as the use of a pose prior or the choice of losses?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
The paper addresses the problem of estimating multiple 3D human bodies that are consistent with a single 2D image. This is important because the 3D pose and shape are ambiguous. Here they explicitly model this ambiguity and produce multiple samples, all of which are consistent with the 2D data. This could be useful for later stages of processing (like tracking).
Strengths
Classical methods for human pose estimation have addressed this problem (some additional references below) but I do not know of a deep learning approach that has done so. This is useful. The numerical results look good and the qualitative results are also good. The method seems to represent valid and sample poses. The paper introduces new losses to make this possible and these are likely to be picked up by others. The multi-hypothesis framework is their key contribution and I think it is a nice contribution to the current field. To make this work, the paper introduces an "n-quantized-best-of-M" loss, which seems useful.
Weaknesses
The authors acknowledge that ambiguous human pose has been considered before (Lines 34-36). They claim to be the first to look at full meshes. This is both a bit narrow and probably not true. Certainly the papers I cite below used meshes, just not learned body models like SMPL. I think these lines should be replaced by a clearer statement of the contribution. Is this the first method to do this in a deep learning framework? The paper does not mention whether code will be made available.
Re the pose prior:
* It is not clear from the paper what data is used to train the normalizing flow pose prior.
* Why does fig 4 show samples from the prior as stick figures rather than SMPL bodies? This makes me think the prior is not over SMPL parameters. This is not clear.
* In Table 2, it seems that the pose prior makes very little difference. I find it surprising that replacing it with a *uniform* prior works nearly as well. Am I reading this wrong? If a uniform prior works, why bother with the fancy prior?
Re the quantitative results: It is surprising that the method is better for M=1 mode. If I am not wrong, this really reduces to HMR with a different pose prior. All the best-of-M stuff shouldn't play a role. Is the method trained like SPIN? In my experience, SPIN is a hard baseline to beat, so if you can explain why you beat it for M=1, this will be interesting to your audience.
NIPS
Title 3D Multi-bodies: Fitting Sets of Plausible 3D Human Models to Ambiguous Image Data Abstract We consider the problem of obtaining dense 3D reconstructions of humans from single and partially occluded views. In such cases, the visual evidence is usually insufficient to identify a 3D reconstruction uniquely, so we aim at recovering several plausible reconstructions compatible with the input data. We suggest that ambiguities can be modelled more effectively by parametrizing the possible body shapes and poses via a suitable 3D model, such as SMPL for humans. We propose to learn a multi-hypothesis neural network regressor using a best-of-M loss, where each of the M hypotheses is constrained to lie on a manifold of plausible human poses by means of a generative model. We show that our method outperforms alternative approaches in ambiguous pose recovery on standard benchmarks for 3D humans, and in heavily occluded versions of these benchmarks. 1 Introduction We are interested in reconstructing 3D human pose from the observation of single 2D images. As humans, we have no problem in predicting, at least approximately, the 3D structure of most scenes, including the pose and shape of other people, even from a single view. However, 2D images notoriously [9] do not contain sufficient geometric information to allow recovery of the third dimension. Hence, single-view reconstruction is only possible in a probabilistic sense and the goal is to make the posterior distribution as sharp as possible, by learning a strong prior on the space of possible solutions. Recent progress in single-view 3D pose reconstruction has been impressive. Methods such as HMR [17], GraphCMR [20] and SPIN [19] formulate this task as learning a deep neural network that maps 2D images to the parameters of a 3D model of the human body, usually SMPL [26]. These methods work well in general, but not always (fig. 2). Their main weakness is processing heavily occluded images of the object. When a large part of the object is missing, say the lower body of a sitting human, they output reconstructions that are often implausible. Since they can produce only one hypothesis as output, they very likely learn to approximate the mean of the posterior distribution, which may not correspond to any plausible pose. Unfortunately, this failure modality is rather common in applications due to scene clutter and crowds. In this paper, we propose a solution to this issue. Specifically, we consider the challenge of recovering 3D mesh reconstructions of complex articulated objects such as humans from highly ambiguous ∗work completed during internship at Facebook AI Research 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. image data, often containing significant occlusions of the object. Clearly, it is generally impossible to reconstruct the object uniquely if too much evidence is missing; however, we can still predict a set containing all possible reconstructions (see fig. 1), making this set as small as possible. While ambiguous pose reconstruction has been previously investigated, as far as we know, this is the first paper that looks specifically at a deep learning approach for ambiguous reconstructions of the full human mesh. Our primary contribution is to introduce a principled multi-hypothesis framework to model the ambiguities in monocular pose recovery. 
In the literature, such multiple-hypotheses networks are often trained with a so-called best-of-M loss — namely, during training, the loss is incurred only by the best of the M hypothesis, back-propagating gradients from that alone [12]. In this work we opt for the best-of-M approach since it has been show to outperform alternatives (such as variational auto-encoders or mixture density networks) in tasks that are similar to our 3D human pose recovery, and which have constrained output spaces [34]. Input image crop Prediction Full image PredictionInput masked image Full image Pr ed ic tio ns o f S PI N tra in ed o n fu ll po se s Pr ed ic tio ns o f S PI N m as ke d im ag es SP IN tr ai ne d on fu ll im ag es SP IN tr ai ne d on m as ke d im ag es Figure 2: Top: Pretrained SPIN model tested on an ambiguous example, Bottom: SPIN model after fine-tuning to ambiguous examples. Note the network tends to regress to the mean over plausible poses, shown by predicting the missing legs vertically downward — arguably the average position over the training dataset. A major drawback of the best-of-M approach is that it only guarantees that one of the hypotheses lies close to the correct solution; however, it says nothing about the plausibility, or lack thereof, of the other M − 1 hypotheses, which can be arbitrarily ‘bad’.2 Not only does this mean that most of the hypotheses may be uninformative, but in an application we are also unable to tell which hypothesis should be used, and we might very well pick a ‘bad’ one. This has also a detrimental effect during learning because it makes gradients sparse as prediction errors are back-propagated only through one of the M hypotheses for each training image. In order to address these issues, our first contribution is a hypothesis reprojection loss that forces each member of the multi-hypothesis set to correctly reproject to 2D image keypoint annotations. The main benefit is to constrain the whole predicted set of meshes to be consistent with the observed image, not just the best hypothesis, also addressing gradient sparsity. Next, we observe that another drawback of the best-of-M pipelines is to be tied to a particular value of M , whereas in applications we are often interested in tuning the num- ber of hypothesis considered. Furthermore, minimizing the reprojection loss makes hypotheses geometrically consistent with the observation, but not necessarily likely. Our second contribution is thus to improve the flexibility of best-of-M models by allowing them to output any smaller number 2 Theoretically, best-of-M can minimize its loss by quantizing optimally (in the sense of minimum expected distortion) the posterior distribution, which would be desirable for coverage. However, this is not the only solution that optimizes the best-of-M training loss, as in the end it is sufficient that one hypothesis per training sample is close to the ground truth. In fact, this is exactly what happens; for instance, during training hypotheses in best-of-M are known to easily become degenerate and ‘die off’, a clear symptom of this problem. n < M of hypotheses while at the same time making these hypotheses more representative of likely poses. The new method, which we call n-quantized-best-of-M , does so by quantizing the best-of-M model to output weighed by a explicit pose prior, learned by means of normalizing flows. To summarise, our key contributions are as follows. 
First, we deal with the challenge of 3D mesh reconstruction for articulated objects such as humans in ambiguous scenarios. Second, we introduce a n-quantized-best-of-M mechanism to allow best-of-M models to generate an arbitrary number of n < M predictions. Third, we introduce a mode-wise re-projection loss for multi-hypothesis prediction, to ensure that predicted hypotheses are all consistent with the input. Empirically, we achieve state-of-the-art monocular mesh recovery accuracy on Human36M, its more challenging version augmented with heavy occlusions, and the 3DPW datasets. Our ablation study validates each of our modelling choices, demonstrating their positive effect. 2 Related work There is ample literature on recovering the pose of 3D models from images. We break this into five categories: methods that reconstruct 3D points directly, methods that reconstruct the parameters of a 3D model of the object via optimization, methods that do the latter via learning-based regression, hybrid methods and methods which deal with uncertainty in 3D human reconstruction. Reconstructing 3D body points without a model. Several papers have focused on the problem of estimating 3D body points from 2D observations [3, 29, 33, 41, 20]. Of these, Martinez et al. [27] introduced a particularly simple pipeline based on a shallow neural network. In this work, we aim at recovering the full 3D surface of a human body, rather than only lifting sparse keypoints. Fitting 3D models via direct optimization. Several methods fit the parameters of a 3D model such as SMPL [25] or SCAPE [3] to 2D observations using an optimization algorithm to iteratively improve the fitting quality. While early approaches such as [10, 37] required some manual intervention, the SMPLify method of Bogo et al. [5] was perhaps the first to fit SMPL to 2D keypoints fully automatically. SMPL was then extended to use silhouette, multiple views, and multiple people in [21, 13, 48]. Recent optimization methods such as [16, 32, 46] have significantly increased the scale of the models and data that can be handled. Fitting 3D models via learning-based regression. More recently, methods have focused on regressing the parameters of the 3D models directly, in a feed-forward manner, generally by learning a deep neural network [42, 43, 30, 31, 17]. Due to the scarcity of 3D ground truth data for humans in the wild, most of these methods train a deep regressor using a mix of datasets with 3D and 2D annotations in form of 3D MoCap markers, 2D keypoints and silhouettes. Among those, HMR of Kanazawa et al. [17] and GraphCMR of Kolotouros et al. [20] stand out as particularly effective. Hybrid methods. Other authors have also combined optimization and learning-based regression methods. In most cases, the integration is done by using a deep regressor to initialize the optimization algorithm [37, 21, 33, 31, 44]. However, recently Kolotouros et al. [19] has shown strong results by integrating the optimization loop in learning the deep neural network that performs the regression, thereby exploiting the weak cues available in 2D keypoints. Modelling ambiguities in 3D human reconstruction. Several previous papers have looked at the problem of modelling ambiguous 3D human pose reconstructions. Early work includes Sminchisescu and Triggs [39], Sidenbladh et al. [36] and Sminchisescu et al. [38]. More recently, Akhter and Black [1] learn a prior over human skeleton joint angles (but not directly a prior on the SMPL parameters) from a MoCap dataset. 
Li and Lee [22] use the Mixture Density Networks model of [4] to capture ambiguous 3D reconstructions of sparse human body keypoints directly in physical space. Sharma et al. [35] learn a conditional variational auto-encoder to model ambiguous reconstructions as a posterior distribution; they also propose two scoring methods to extract a single 3D reconstruction from the distribution. Cheng et al. [7] tackle the problem of video 3D reconstruction in the presence of occlusions, and show that temporal cues can be used to disambiguate the solution. While our method is similar in the goal of correctly handling the prediction uncertainty, we differ by applying our method to predicting full mesh of the human body. This is arguably a more challenging scenario due to the increased complexity of the desired 3D shape. X̂1 X̂2 X̂M . . . hypothesesM Finally, some recent concurrent works also consider building priors over 3D human pose using normalizing flows. Xu et al. [47] release a prior for their new GHUM/GHUML model, and Zanfir et al. [49] build a prior on SMPL joint angles to constrain their weakly-supervised network. Our method differs as we learn our prior on 3D SMPL joints. 3 Preliminaries Before discussing our method, we describe the necessary background, starting from SMPL. SMPL. SMPL is a model of the human body parameterized by axis-angle rotations θ ∈ R69 of 23 body joints, the shape coefficients β ∈ R10 modelling shape variations, and a global rotation γ ∈ R3. SMPL defines a skinning function S : (θ, β, γ) 7→ V that maps the body parameters to the vertices V ∈ R6890×3 of a 3D mesh. Predicting the SMPL parameters from a single image. Given an image I containing a person, the goal is to recover the SMPL parameters (θ, β, γ) that provide the best 3D reconstruction of it. Existing algorithms [18] cast this as learning a deep network G(I) = (θ, β, γ, t) that predicts the SMPL parameters as well as the translation t ∈ R3 of the perspective camera observing the person. We assume a fixed set of camera parameters. During training, the camera is used to constrain the reconstructed 3D mesh and the annotated 2D keypoints to be consistent. Since most datasets only contain annotations for a small set of keypoints ([11] is an exception), and since these keypoints do not correspond directly to any of the SMPL mesh vertices, we need a mechanism to translate between them. This mechanism is a fixed linear regressor J : V 7→ X that maps the SMPL mesh vertices V = S(G(I)) to the 3D locations X = J(V ) = J(S(G(I))) of the K joints. Then, the projections πt(X) of the 3D joint positions into image I can be compared to the available 2D annotations. Normalizing flows. The idea of normalizing flows (NF) is to represent a complex distribution p(X) on a random variable X as a much simpler distribution p(z) on a transformed version z = f(X) of X . The transformation f is learned so that p(z) has a fixed shape, usually a Normal p(z) ∼ N (0, 1). Furthermore, f itself must be invertible and smooth. In this paper, we utilize a particular version of NF dubbed RealNVP [8]. A more detailed explanation of NF and RealNVP has been deferred to the supplementary. 4 Method We start from a neural network architecture that implements the function G(I) = (θ, β, γ, t) described above. As shown in SPIN [19], the HMR [18] architecture attains state-of-the-art results for this task, so we use it here. However, the resulting regressor G(I), given an input image I , can only produce a single unique solution. 
In general, and in particular for cases with a high degree of reconstruction ambiguity, we are interested in predicting a set of plausible 3D poses rather than a single one. We thus extend our model to explicitly produce a set of M different hypotheses G_m(I) = (θ_m, β_m, γ_m, t_m), m = 1, . . . , M. This is easily achieved by modifying HMR's final output layer to produce a tensor M times larger, effectively stacking the hypotheses. In what follows, we describe the learning scheme that drives the monocular predictor G to achieve an optimal coverage of the plausible poses consistent with the input image. Our method is summarized in fig. 3.

4.1 Learning with multiple hypotheses

For learning the model, we assume to have a training set of N images {I_i}_{i=1,...,N}, each cropped around a person. Furthermore, for each training image I_i we assume to know (1) the 2D locations Y_i of the body joints, (2) their 3D locations X_i, and (3) the ground-truth SMPL fit (θ_i, β_i, γ_i). Depending on the setup, some of these quantities can be inferred from the others (e.g. we can use the function J to convert the SMPL parameters to the 3D joints X_i and then the camera projection to obtain Y_i).

Best-of-M loss. Given a single input image, our network predicts a set of poses, where at least one should be similar to the ground-truth annotation X_i. This is captured by the best-of-M loss [12]:

\[
\mathcal{L}_{\text{best}}(J, G; m^*) = \frac{1}{N} \sum_{i=1}^{N} \big\| X_i - \hat{X}^{m^*_i}(I_i) \big\|,
\qquad
m^*_i = \operatorname*{argmin}_{m=1,\dots,M} \big\| X_i - \hat{X}^{m}(I_i) \big\|,
\tag{1}
\]

where X̂^m(I_i) = J(S(G_m(I_i))) are the 3D joints estimated by the m-th SMPL predictor G_m applied to image I_i. In this way, only the best hypothesis is steered to match the ground truth, leaving the other hypotheses free to sample the space of ambiguous solutions. During the computation of this loss, we also extract the best index m^*_i for each training example.

Limitations of best-of-M. As noted in section 1, best-of-M only guarantees that one of the M hypotheses is a good solution, but says nothing about the other ones. Furthermore, in applications we are often interested in modulating the number of hypotheses generated, but the best-of-M regressor G(I) only produces a fixed number M of output hypotheses, and changing M would require retraining from scratch, which is intractable. We first address these issues by introducing a method that allows us to train a best-of-M model for a large M once and leverage it later to generate an arbitrary number of n < M hypotheses without the need of retraining, while ensuring that these are good representatives of likely body poses.

n-quantized-best-of-M. Formally, given a set of M predictions X̂^M(I) = {X̂^1(I), ..., X̂^M(I)}, we seek to generate a smaller n-sized set X̄^n(I) = {X̄^1(I), ..., X̄^n(I)} which preserves the information contained in X̂^M. In other words, X̄^n optimally quantizes X̂^M. To this end, we interpret the output of the best-of-M model as a set of choices X̂^M(I) for the possible pose. These poses are of course not all equally likely, but it is difficult to infer their probability from (1). We thus work with the following approximation. We consider the prior p(X) on possible poses (defined in the next section), and set:

\[
p(X \mid I) = p(X \mid \hat{X}^M(I)) = \sum_{i=1}^{M} \delta\big(X - \hat{X}^i(I)\big)\, \frac{p(\hat{X}^i(I))}{\sum_{k=1}^{M} p(\hat{X}^k(I))}.
\tag{2}
\]

This amounts to using the best-of-M output as a conditioning set (i.e. an unweighted selection of plausible poses) and then using the prior p(X) to weight the samples in this set.
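As a rough illustration of the best-of-M loss in eq. (1) above, the sketch below selects, for every training sample, the hypothesis closest to the ground-truth joints and incurs a loss only on it. The helper name and tensor layout are assumptions, and the mean per-joint L2 distance is used here as one plausible reading of the norm in eq. (1).

```python
import torch

def best_of_m_loss(pred_joints, gt_joints):
    """Best-of-M loss (eq. 1): only the hypothesis closest to the ground truth is penalised.

    pred_joints: (B, M, K, 3) 3D joints of the M hypotheses
    gt_joints:   (B, K, 3)    ground-truth 3D joints X_i
    Returns the scalar loss and the per-sample best index m*_i (reused by the SMPL losses).
    """
    # distance of every hypothesis to the ground truth (mean per-joint L2 as matching cost)
    err = (pred_joints - gt_joints[:, None]).norm(dim=-1).mean(dim=-1)   # (B, M)
    m_star = err.argmin(dim=1)                                           # (B,)
    loss = err.gather(1, m_star[:, None]).mean()                         # gradient flows through the best only
    return loss, m_star
```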
With the weighted samples, we can then run K-means [24] to further quantize the best-of-M output while minimizing the quantization energy E:

\[
E(\bar{X} \mid \hat{X}) = \mathbb{E}_{p(X \mid I)}\Big[ \min_{j=1,\dots,n} \| X - \bar{X}^j \|^2 \Big]
= \sum_{i=1}^{M} \frac{p(\hat{X}^i(I))}{\sum_{k=1}^{M} p(\hat{X}^k(I))} \, \min_{j=1,\dots,n} \| \hat{X}^i(I) - \bar{X}^j \|^2.
\tag{3}
\]

This can be done efficiently on the GPU; for our problem, K-means consumes less than 20% of the execution time of the entire forward pass of our method.

Learning the pose prior with normalizing flows. In order to obtain p(X), we propose to learn a normalizing flow model in the form of the RealNVP network f described in section 3 and the supplementary. RealNVP is trained by minimizing the negative log-likelihood L_nf(f) of the ground-truth 3D skeletons {X_1, ..., X_N} annotated in their corresponding images {I_1, ..., I_N}:

\[
\mathcal{L}_{\text{nf}}(f) = -\frac{1}{N} \sum_{i=1}^{N} \log p(X_i)
= -\frac{1}{N} \sum_{i=1}^{N} \Big( \log \mathcal{N}(f(X_i)) - \sum_{l=1}^{L} \log \Big| \frac{d f_l(X_i^l)}{d X_i^l} \Big| \Big).
\tag{4}
\]

2D re-projection loss. Since the best-of-M loss optimizes a single prediction at a time, some members of the ensemble X̂(I) often drift away from the manifold of plausible human body shapes, ultimately becoming 'dead' predictions that are never selected as the best hypothesis m^*. In order to prevent this, we further utilize a re-projection loss that acts across all hypotheses for a given image. More specifically, we constrain the set of 3D reconstructions to lie on projection rays passing through the 2D input keypoints with the following hypothesis re-projection loss:

\[
\mathcal{L}_{\text{ri}}(J, G) = \frac{1}{N} \sum_{i=1}^{N} \sum_{m=1}^{M} \big\| Y_i - \pi_{t_i}(\hat{X}^m(I_i)) \big\|.
\tag{5}
\]

Note that many of our training images exhibit significant occlusion, so Y_i may contain invisible or missing points. We handle this by masking L_ri to prevent these points from contributing to the loss.

SMPL loss. The final loss terms, introduced by prior work [18, 31, 19], penalize deviations between the predicted and ground-truth SMPL parameters. For our method, these are only applied to the best hypothesis m^*_i found above:

\[
\mathcal{L}_{\theta}(G; m^*) = \frac{1}{N} \sum_{i=1}^{N} \| \theta_i - G_{\theta, m^*_i}(I_i) \|;
\qquad
\mathcal{L}_{V}(G; m^*) = \frac{1}{N} \sum_{i=1}^{N} \| S(\theta_i, \beta_i, \gamma_i) - S(G_{(\theta,\beta,\gamma), m^*_i}(I_i)) \|
\tag{6}
\]

\[
\mathcal{L}_{\beta}(G; m^*) = \frac{1}{N} \sum_{i=1}^{N} \| \beta_i - G_{\beta, m^*_i}(I_i) \|;
\qquad
\mathcal{L}_{\text{rb}}(G; m^*) = \frac{1}{N} \sum_{i=1}^{N} \| Y_i - \pi_{t_i}(\hat{X}^{m^*_i}(I_i)) \|
\tag{7}
\]

Note here we use L_rb to refer to a 2D re-projection error between the best hypothesis and the ground-truth 2D points Y_i. This differs from the earlier loss L_ri, which is applied across all modes to enforce consistency with the visible input points. Note that we could have used eqs. (6) and (7) to select the best hypothesis m^*_i, but this would entail an unmanageable memory footprint due to the requirement of SMPL-meshing every hypothesis before the best-of-M selection.

Overall loss. The model is thus trained to minimize:

\[
\mathcal{L}(J, G) = \lambda_{\text{ri}} \mathcal{L}_{\text{ri}}(J, G) + \lambda_{\text{best}} \mathcal{L}_{\text{best}}(J, G; m^*) + \lambda_{\theta} \mathcal{L}_{\theta}(J, G; m^*) + \lambda_{\beta} \mathcal{L}_{\beta}(J, G; m^*) + \lambda_{V} \mathcal{L}_{V}(J, G; m^*) + \lambda_{\text{rb}} \mathcal{L}_{\text{rb}}(J, G; m^*)
\tag{8}
\]

where m^* is given in eq. (1) and λ_ri, λ_best, λ_θ, λ_β, λ_V, λ_rb are weighting factors. We use a consistent set of SMPL loss weights across all experiments: λ_best = 25.0, λ_θ = 1.0, λ_β = 0.001, λ_V = 1.0, and we set λ_ri = 1.0. Since the training of the normalizing flow f is independent of the rest of the model, we train f separately by optimizing L_nf with a weight of λ_nf = 1.0. Samples from our trained normalizing flow are shown in fig. 4.

5 Experiments

In this section we compare our method to several strong baselines. We start by describing the datasets and the baselines, followed by a quantitative and a qualitative evaluation. As is common practice for H36M, we train on subjects S1, S5, S6, S7 and S8, and test on S9 and S11. 3DPW is only used for evaluation and, following [20], we evaluate on its test set.
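Before turning to the experimental comparison, the n-quantized-best-of-M step of eqs. (2)-(3) above can be sketched as weighted K-means over the M hypotheses, with weights given by the normalizing-flow prior. The paper runs K-means on the GPU; the CPU scikit-learn version below is only meant to illustrate the weighting logic, and the function name and array shapes are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def quantize_hypotheses(X_hat, log_prior, n):
    """n-quantized-best-of-M sketch: reduce M hypotheses to n representatives with
    K-means, weighting each hypothesis by the flow prior p(X_hat^i) as in eq. (2).

    X_hat:     (M, K, 3) predicted 3D joint hypotheses for one image
    log_prior: (M,)      log p(X_hat^i) evaluated with the normalizing-flow prior
    n:         number of representatives to keep (n < M)
    """
    M = X_hat.shape[0]
    flat = X_hat.reshape(M, -1)                 # flatten joints so each hypothesis is one point
    w = np.exp(log_prior - log_prior.max())     # numerically stable un-normalised weights
    w = w / w.sum()                             # the normalisation of eq. (2)
    km = KMeans(n_clusters=n, n_init=10).fit(flat, sample_weight=w)
    return km.cluster_centers_.reshape(n, X_hat.shape[1], X_hat.shape[2])
```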
Our evaluation is consistent with [19, 20]: we report two metrics that compare the lifted dense 3D SMPL shape to the ground-truth mesh, Mean Per Joint Position Error (MPJPE) and Reconstruction Error (RE). For H36M, all errors are computed using an evaluation scheme known as "Protocol #2". Please refer to the supplementary for a detailed explanation of MPJPE and RE.

Multi-pose metrics. MPJPE and RE are traditional metrics that assume a single correct ground-truth prediction for a given 2D observation. As mentioned above, such an assumption is rarely correct due to the inherent ambiguity of the monocular 3D shape estimation task. We thus also report MPJPE-n/RE-n, extensions of MPJPE/RE used in [22] that enable an evaluation of n different shape hypotheses. In more detail, to evaluate an algorithm, we allow it to output n possible predictions and, out of this set, we select the one that minimizes the MPJPE/RE metric. We report results for n ∈ {1, 5, 10, 25}.

Ambiguous H36M/3DPW (AH36M/A3DPW). Since H36M is captured in a controlled environment, it rarely depicts challenging real-world scenarios such as body occlusions that are the main source of ambiguity in the single-view 3D shape estimation problem. Hence, we construct an adapted version of H36M with synthetically-generated occlusions (fig. 5) by randomly hiding a subset of the 2D keypoints and re-computing an image crop around the remaining visible joints. Please refer to the supplementary for details of the occlusion generation process. While 3DPW does contain real scenes, for completeness, we also evaluate on a noisy, and thus more challenging, version (A3DPW) generated according to the aforementioned strategy.

Baselines. Our method is compared to two multi-pose prediction baselines. For fairness, both baselines extend the same (state-of-the-art) trunk architecture as we use, and all methods have access to the same training data. SMPL-MDN follows [22] and outputs the parameters of a mixture density model over the set of SMPL log-rotation pose parameters. Since a naïve implementation of the MDN model leads to poor performance (≈ 200 mm MPJPE-n for n = 5 on H36M), we introduced several improvements that allow optimization of the total loss in eq. (8). SMPL-CVAE, the second baseline, is a conditional variational autoencoder [40] combined with our trunk network. SMPL-CVAE consists of an encoding network that maps a ground-truth SMPL mesh V to a Gaussian vector z, which is fed together with an encoding of the image to generate a mesh V′ such that V′ ≈ V. At test time, we sample n plausible human meshes by drawing z ∼ N(0, 1) to evaluate with MPJPE-n/RE-n. More details of both SMPL-CVAE and SMPL-MDN are deferred to the supplementary material.

For completeness, we also compare to three more baselines that tackle the standard single-mesh prediction problem: HMR [17], GraphCMR [31], and SPIN [19], where the latter currently attains state-of-the-art performance on H36M/3DPW. All methods were trained on H36M [14], MPI-INF-3DHP [28], LSP [15], MPII [2] and COCO [23].

5.1 Results

Table 1 contains a comprehensive summary of the results on all 3 benchmarks. Our method outperforms SMPL-CVAE and SMPL-MDN in all metrics on all datasets. For SMPL-CVAE, we found that the encoding network often "cheats" during training by transporting all information about the ground truth, instead of only encoding the modes of ambiguity.
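For clarity, the hypothesis-selection rule behind the MPJPE-n numbers reported above can be sketched as follows. RE-n would additionally apply a rigid Procrustes alignment before computing the error, which is omitted here; the function name and tensor shapes are assumptions of this sketch.

```python
import torch

def mpjpe_n(pred_joints, gt_joints):
    """MPJPE-n: score only the hypothesis (out of n) that best matches the ground truth.

    pred_joints: (n, K, 3) hypotheses for one test image (root-aligned, in mm)
    gt_joints:   (K, 3)    ground-truth 3D joints
    """
    per_hyp = (pred_joints - gt_joints[None]).norm(dim=-1).mean(dim=-1)  # (n,) MPJPE of each hypothesis
    return per_hyp.min()                                                 # best hypothesis defines the metric
```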
The reason for the lower performance of SMPL-MDN is probably the representation of the probability in the space of log-rotations, rather than in the space of vertices. Modelling the MDN in the space of model vertices would be more convenient, as it is more directly related to the final evaluation metric that aggregates per-vertex errors; however, fitting such a high-dimensional (dim = 6890 × 3) Gaussian mixture is prohibitively costly.

Furthermore, it is very encouraging to observe that our method is also able to outperform the single-mode baselines [17, 20, 19] on the single-mode MPJPE on both H36M and 3DPW. This comes as a surprise, since our method has not been optimized for this mode of operation. The difference is more significant for 3DPW, probably because 3DPW is not used for training and, hence, the normalizing flow prior acts as an effective filter of predicted outlier poses. Qualitative results are shown in fig. 6.

Ablation study. We further conduct an ablative study on 3DPW that removes components of our method and measures the incurred change in performance. More specifically, we: 1) ablate the hypothesis re-projection loss; 2) set p(X|I) = Uniform in eq. (3), effectively removing the normalizing flow component and executing unweighted K-means in n-quantized-best-of-M. Table 2 demonstrates that removing either contribution decreases performance, validating our design choices.

6 Conclusions

In this work, we have explored a seldom visited problem of representing the set of plausible 3D meshes corresponding to a single ambiguous input image of a human. To this end, we have proposed a novel method that trains a single multi-hypothesis best-of-M model and, using a novel n-quantized-best-of-M strategy, allows sampling an arbitrary number n < M of hypotheses. Importantly, the proposed quantization technique leverages a normalizing flow model that effectively filters out predicted hypotheses that are unnatural. Empirical evaluation reveals performance superior to several strong probabilistic baselines on Human36M, its challenging ambiguous version, and on 3DPW. Our method encounters occasional failure cases, such as when tested on individuals with unusual shape (e.g. obese people), since we have very few such examples in the training set. Tackling such cases would make for interesting and worthwhile future work.

Acknowledgements

The authors would like to thank Richard Turner for useful technical discussions relating to normalizing flows, and Philippa Liggins, Thomas Roddick and Nicholas Biggs for proof reading. This work was entirely funded by Facebook AI Research.

Broader impact

Our method improves the ability of machines to understand human body poses in images and videos. Understanding people automatically may arguably be misused by bad actors. However, importantly, our method is not a form of biometric as it does not allow the identification of people. Rather, only their overall body shape and pose is reconstructed, and these details are insufficient for unique identification. In particular, individual facial features are not reconstructed at all. Furthermore, our method is an improvement of existing capabilities, but does not introduce a radical new capability in machine learning. Thus our contribution is unlikely to facilitate misuse of technology which is already available to anyone. Finally, any potential negative use of a technology should be balanced against positive uses.
Understanding body poses has many legitimate applications in VR and AR, medicine, assistance to the elderly, assistance to the visually impaired, autonomous driving, human-machine interaction, image and video categorization, platform integrity, and more.
1. What is the focus and contribution of the paper on 3D pose and reconstruction? 2. What are the strengths of the proposed approach, particularly in tackling severe occlusions and partial views? 3. What are the weaknesses of the paper, such as the lack of technical contribution and reliance on previous methods? 4. Do you have any concerns about the method's ability to ensure a diverse set of outputs and its potential collapse to a single mesh? 5. Would you like more information on how the normalizing flows are conditioned on the input image I in the learning process of p(X|I)? 6. How do you think the method could be improved regarding the evaluation of the diversity of the hypotheses pool and the worst solution? 7. What is your opinion on the value of n (number of clusters) used in the quantitative evaluation and its impact on the results?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
The paper introduces a method for obtaining multiple plausible solutions for the problem of 3D pose and reconstruction in monocular images. The proposed method mostly targets images with severe occlusions and partial views, where multiple poses could match the evidence. The method outputs a fixed number of hypotheses which, at testing time, are clustered based on a pose prior learned using normalizing flows. To ensure that all learned hypotheses match the pose in the image, the authors propose to use a 2D reconstruction loss on each hypothesis (not only on the best one). Results are shown on H36M (and a cropped variant of it) and the 3DPW dataset.

Strengths
- The paper proposes to tackle a severely occluded/partially visible scenario where many previous methods either fail or output only a single plausible pose, whereas multiple ones could be plausible. This can have a significant impact on real-world images where this scenario is often encountered.

Weaknesses
The technical contribution of the paper is small. The authors rely on a previous method, HMR, with a slightly modified output. The proposed losses are not new (2D reprojection loss, loss on SMPL parameters, etc.). The main contribution is a way to generate and select a plausible set of outputs. However, there are several issues that the proposed method raises:
- How does the method ensure that the output set of M meshes is diverse and does not collapse to a single one? (has been addressed in the rebuttal)
- Details regarding the learning of p(X|I) are missing; that is, it is unclear how the normalizing flows are conditioned on the input image I.
- In the experimental section, the authors only evaluate the *best* out of M solutions. In order to have a better understanding of the capabilities of the method, it would have been good to show results for all of them, or at least the error for the worst solution. Also, an evaluation of the diversity of the hypotheses pool is missing. (has been addressed in the rebuttal)
- What is the value of n (number of clusters) in tables 1 and 2? The visual results (figure 6 for example) suggest that n=3, but is the same value used in the quantitative evaluation? (has been addressed in the rebuttal)
1. What is the main contribution of the paper in 3D pose estimation? 2. What are the strengths of the proposed method, particularly in its simplicity and effectiveness? 3. What are the weaknesses of the paper regarding its comparisons with prior works and the selection of the best pose? 4. How does the reviewer assess the clarity and quality of the paper's content? 5. Are there any concerns regarding the use of the curated AH36M dataset and the lack of code and data release? 6. Would including failure cases and limitations benefit future research building upon this work?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
The paper tackles the problem of 3D pose estimation from heavily occluded images. To solve the problem, a model that predicts multiple poses and a multiple-hypothesis loss called the "best-of-M loss" are proposed. A quantization scheme and pose prior are developed to select the n likely poses out of M predictions at test time.

Strengths
- The problem addressed by this paper is quite important and there is not much prior work tackling it. Even state-of-the-art 3D pose estimation methods suffer from heavy occlusions.
- The proposed method is neat, simple and works effectively, as validated by experiments.
- Experiments and ablative analysis are quite strong. Several strong baselines are implemented and analyzed by the authors.
- The paper is very well written. It is quite clear and easy to follow.

Weaknesses
- In L104-112 several prior works are listed. I understand that the task the authors tackle is predicting the full mesh, but why is the proposed method better than [21] or [6]? What makes the proposed approach better than previous methods? From the experiments, the performance difference is clear. However, I am missing the core insights/motivations behind the approach.
- In L230, it is indicated that "we allow it (3D pose regressor) to output M possible predictions and, out of this set, we select the one that minimizes the MPJPE/RE metric". The comparison here seems a bit unfair. Instead of using oracle poses, the authors could compute the MPJPE/RE for all of the M (or maybe n out of M) poses and then report the median error.
- It is not clearly indicated whether the curated AH36M dataset is used for training. If so, did other methods, e.g. HMR and SPIN, have access to AH36M data during training for a fair comparison?
- There is no promise to release the code and the data. Even though the method is explained clearly, a standard implementation would be quite helpful for the research community.
- There is no failure cases/limitations section. It would be insightful to include such information for researchers who would like to build on this work.
NIPS
Title 3D Multi-bodies: Fitting Sets of Plausible 3D Human Models to Ambiguous Image Data Abstract We consider the problem of obtaining dense 3D reconstructions of humans from single and partially occluded views. In such cases, the visual evidence is usually insufficient to identify a 3D reconstruction uniquely, so we aim at recovering several plausible reconstructions compatible with the input data. We suggest that ambiguities can be modelled more effectively by parametrizing the possible body shapes and poses via a suitable 3D model, such as SMPL for humans. We propose to learn a multi-hypothesis neural network regressor using a best-of-M loss, where each of the M hypotheses is constrained to lie on a manifold of plausible human poses by means of a generative model. We show that our method outperforms alternative approaches in ambiguous pose recovery on standard benchmarks for 3D humans, and in heavily occluded versions of these benchmarks. 1 Introduction We are interested in reconstructing 3D human pose from the observation of single 2D images. As humans, we have no problem in predicting, at least approximately, the 3D structure of most scenes, including the pose and shape of other people, even from a single view. However, 2D images notoriously [9] do not contain sufficient geometric information to allow recovery of the third dimension. Hence, single-view reconstruction is only possible in a probabilistic sense and the goal is to make the posterior distribution as sharp as possible, by learning a strong prior on the space of possible solutions. Recent progress in single-view 3D pose reconstruction has been impressive. Methods such as HMR [17], GraphCMR [20] and SPIN [19] formulate this task as learning a deep neural network that maps 2D images to the parameters of a 3D model of the human body, usually SMPL [26]. These methods work well in general, but not always (fig. 2). Their main weakness is processing heavily occluded images of the object. When a large part of the object is missing, say the lower body of a sitting human, they output reconstructions that are often implausible. Since they can produce only one hypothesis as output, they very likely learn to approximate the mean of the posterior distribution, which may not correspond to any plausible pose. Unfortunately, this failure modality is rather common in applications due to scene clutter and crowds. In this paper, we propose a solution to this issue. Specifically, we consider the challenge of recovering 3D mesh reconstructions of complex articulated objects such as humans from highly ambiguous ∗work completed during internship at Facebook AI Research 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. image data, often containing significant occlusions of the object. Clearly, it is generally impossible to reconstruct the object uniquely if too much evidence is missing; however, we can still predict a set containing all possible reconstructions (see fig. 1), making this set as small as possible. While ambiguous pose reconstruction has been previously investigated, as far as we know, this is the first paper that looks specifically at a deep learning approach for ambiguous reconstructions of the full human mesh. Our primary contribution is to introduce a principled multi-hypothesis framework to model the ambiguities in monocular pose recovery. 
In the literature, such multiple-hypotheses networks are often trained with a so-called best-of-M loss — namely, during training, the loss is incurred only by the best of the M hypothesis, back-propagating gradients from that alone [12]. In this work we opt for the best-of-M approach since it has been show to outperform alternatives (such as variational auto-encoders or mixture density networks) in tasks that are similar to our 3D human pose recovery, and which have constrained output spaces [34]. Input image crop Prediction Full image PredictionInput masked image Full image Pr ed ic tio ns o f S PI N tra in ed o n fu ll po se s Pr ed ic tio ns o f S PI N m as ke d im ag es SP IN tr ai ne d on fu ll im ag es SP IN tr ai ne d on m as ke d im ag es Figure 2: Top: Pretrained SPIN model tested on an ambiguous example, Bottom: SPIN model after fine-tuning to ambiguous examples. Note the network tends to regress to the mean over plausible poses, shown by predicting the missing legs vertically downward — arguably the average position over the training dataset. A major drawback of the best-of-M approach is that it only guarantees that one of the hypotheses lies close to the correct solution; however, it says nothing about the plausibility, or lack thereof, of the other M − 1 hypotheses, which can be arbitrarily ‘bad’.2 Not only does this mean that most of the hypotheses may be uninformative, but in an application we are also unable to tell which hypothesis should be used, and we might very well pick a ‘bad’ one. This has also a detrimental effect during learning because it makes gradients sparse as prediction errors are back-propagated only through one of the M hypotheses for each training image. In order to address these issues, our first contribution is a hypothesis reprojection loss that forces each member of the multi-hypothesis set to correctly reproject to 2D image keypoint annotations. The main benefit is to constrain the whole predicted set of meshes to be consistent with the observed image, not just the best hypothesis, also addressing gradient sparsity. Next, we observe that another drawback of the best-of-M pipelines is to be tied to a particular value of M , whereas in applications we are often interested in tuning the num- ber of hypothesis considered. Furthermore, minimizing the reprojection loss makes hypotheses geometrically consistent with the observation, but not necessarily likely. Our second contribution is thus to improve the flexibility of best-of-M models by allowing them to output any smaller number 2 Theoretically, best-of-M can minimize its loss by quantizing optimally (in the sense of minimum expected distortion) the posterior distribution, which would be desirable for coverage. However, this is not the only solution that optimizes the best-of-M training loss, as in the end it is sufficient that one hypothesis per training sample is close to the ground truth. In fact, this is exactly what happens; for instance, during training hypotheses in best-of-M are known to easily become degenerate and ‘die off’, a clear symptom of this problem. n < M of hypotheses while at the same time making these hypotheses more representative of likely poses. The new method, which we call n-quantized-best-of-M , does so by quantizing the best-of-M model to output weighed by a explicit pose prior, learned by means of normalizing flows. To summarise, our key contributions are as follows. 
First, we deal with the challenge of 3D mesh reconstruction for articulated objects such as humans in ambiguous scenarios. Second, we introduce a n-quantized-best-of-M mechanism to allow best-of-M models to generate an arbitrary number of n < M predictions. Third, we introduce a mode-wise re-projection loss for multi-hypothesis prediction, to ensure that predicted hypotheses are all consistent with the input. Empirically, we achieve state-of-the-art monocular mesh recovery accuracy on Human36M, its more challenging version augmented with heavy occlusions, and the 3DPW datasets. Our ablation study validates each of our modelling choices, demonstrating their positive effect. 2 Related work There is ample literature on recovering the pose of 3D models from images. We break this into five categories: methods that reconstruct 3D points directly, methods that reconstruct the parameters of a 3D model of the object via optimization, methods that do the latter via learning-based regression, hybrid methods and methods which deal with uncertainty in 3D human reconstruction. Reconstructing 3D body points without a model. Several papers have focused on the problem of estimating 3D body points from 2D observations [3, 29, 33, 41, 20]. Of these, Martinez et al. [27] introduced a particularly simple pipeline based on a shallow neural network. In this work, we aim at recovering the full 3D surface of a human body, rather than only lifting sparse keypoints. Fitting 3D models via direct optimization. Several methods fit the parameters of a 3D model such as SMPL [25] or SCAPE [3] to 2D observations using an optimization algorithm to iteratively improve the fitting quality. While early approaches such as [10, 37] required some manual intervention, the SMPLify method of Bogo et al. [5] was perhaps the first to fit SMPL to 2D keypoints fully automatically. SMPL was then extended to use silhouette, multiple views, and multiple people in [21, 13, 48]. Recent optimization methods such as [16, 32, 46] have significantly increased the scale of the models and data that can be handled. Fitting 3D models via learning-based regression. More recently, methods have focused on regressing the parameters of the 3D models directly, in a feed-forward manner, generally by learning a deep neural network [42, 43, 30, 31, 17]. Due to the scarcity of 3D ground truth data for humans in the wild, most of these methods train a deep regressor using a mix of datasets with 3D and 2D annotations in form of 3D MoCap markers, 2D keypoints and silhouettes. Among those, HMR of Kanazawa et al. [17] and GraphCMR of Kolotouros et al. [20] stand out as particularly effective. Hybrid methods. Other authors have also combined optimization and learning-based regression methods. In most cases, the integration is done by using a deep regressor to initialize the optimization algorithm [37, 21, 33, 31, 44]. However, recently Kolotouros et al. [19] has shown strong results by integrating the optimization loop in learning the deep neural network that performs the regression, thereby exploiting the weak cues available in 2D keypoints. Modelling ambiguities in 3D human reconstruction. Several previous papers have looked at the problem of modelling ambiguous 3D human pose reconstructions. Early work includes Sminchisescu and Triggs [39], Sidenbladh et al. [36] and Sminchisescu et al. [38]. More recently, Akhter and Black [1] learn a prior over human skeleton joint angles (but not directly a prior on the SMPL parameters) from a MoCap dataset. 
Li and Lee [22] use the Mixture Density Networks model of [4] to capture ambiguous 3D reconstructions of sparse human body keypoints directly in physical space. Sharma et al. [35] learn a conditional variational auto-encoder to model ambiguous reconstructions as a posterior distribution; they also propose two scoring methods to extract a single 3D reconstruction from the distribution. Cheng et al. [7] tackle the problem of video 3D reconstruction in the presence of occlusions, and show that temporal cues can be used to disambiguate the solution. While our method is similar in its goal of correctly handling prediction uncertainty, we differ by applying our method to predicting the full mesh of the human body. This is arguably a more challenging scenario due to the increased complexity of the desired 3D shape. Finally, some recent concurrent works also consider building priors over 3D human pose using normalizing flows. Xu et al. [47] release a prior for their new GHUM/GHUML model, and Zanfir et al. [49] build a prior on SMPL joint angles to constrain their weakly-supervised network. Our method differs as we learn our prior on 3D SMPL joints.

3 Preliminaries

Before discussing our method, we describe the necessary background, starting from SMPL.

SMPL. SMPL is a model of the human body parameterized by axis-angle rotations θ ∈ R^69 of 23 body joints, shape coefficients β ∈ R^10 modelling shape variations, and a global rotation γ ∈ R^3. SMPL defines a skinning function S : (θ, β, γ) ↦ V that maps the body parameters to the vertices V ∈ R^{6890×3} of a 3D mesh.

Predicting the SMPL parameters from a single image. Given an image I containing a person, the goal is to recover the SMPL parameters (θ, β, γ) that provide the best 3D reconstruction of it. Existing algorithms [18] cast this as learning a deep network G(I) = (θ, β, γ, t) that predicts the SMPL parameters as well as the translation t ∈ R^3 of the perspective camera observing the person. We assume a fixed set of camera parameters. During training, the camera is used to constrain the reconstructed 3D mesh and the annotated 2D keypoints to be consistent. Since most datasets only contain annotations for a small set of keypoints ([11] is an exception), and since these keypoints do not correspond directly to any of the SMPL mesh vertices, we need a mechanism to translate between them. This mechanism is a fixed linear regressor J : V ↦ X that maps the SMPL mesh vertices V = S(G(I)) to the 3D locations X = J(V) = J(S(G(I))) of the K joints. Then, the projections π_t(X) of the 3D joint positions into image I can be compared to the available 2D annotations.

Normalizing flows. The idea of normalizing flows (NF) is to represent a complex distribution p(X) on a random variable X as a much simpler distribution p(z) on a transformed version z = f(X) of X. The transformation f is learned so that p(z) has a fixed shape, usually a Normal p(z) ∼ N(0, 1). Furthermore, f itself must be invertible and smooth. In this paper, we utilize a particular version of NF dubbed RealNVP [8]. A more detailed explanation of NF and RealNVP is deferred to the supplementary.

4 Method

We start from a neural network architecture that implements the function G(I) = (θ, β, γ, t) described above. As shown in SPIN [19], the HMR [18] architecture attains state-of-the-art results for this task, so we use it here. However, the resulting regressor G(I), given an input image I, can only produce a single unique solution.
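To make the preliminaries concrete, the following minimal sketch illustrates the fixed linear joint regressor J and a simple pinhole projection π_t used to compare 3D joints against 2D annotations. This is an illustrative sketch, not the paper's implementation: the focal length, image size, joint count, and the random stand-ins for the SMPL vertices and regressor are all assumptions.

```python
import numpy as np

def joints_from_vertices(J_regressor, vertices):
    """Map SMPL mesh vertices (6890, 3) to K 3D joints via a fixed linear regressor (K, 6890)."""
    return J_regressor @ vertices  # (K, 3)

def project_perspective(X, t, focal=5000.0, img_size=224.0):
    """Project 3D joints X (K, 3) with a simple pinhole camera after translating by t (3,)."""
    Xc = X + t                                          # joints in the camera frame
    xy = Xc[:, :2] / np.clip(Xc[:, 2:3], 1e-6, None)    # perspective divide
    return focal * xy + img_size / 2.0                  # pixel coords, principal point at image centre

# Toy usage with random data standing in for a real SMPL mesh and regressor.
V = np.random.randn(6890, 3)
J_reg = np.random.rand(24, 6890)
J_reg /= J_reg.sum(axis=1, keepdims=True)
keypoints_2d = project_perspective(joints_from_vertices(J_reg, V), t=np.array([0.0, 0.0, 5.0]))
```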
In general, and in particular for cases with a high degree of reconstruction ambiguity, we are interested in predicting a set of plausible 3D poses rather than a single one. We thus extend our model to explicitly produce a set of M different hypotheses G_m(I) = (θ_m, β_m, γ_m, t_m), m = 1, ..., M. This is easily achieved by modifying HMR's final output layer to produce a tensor M times larger, effectively stacking the hypotheses. In what follows, we describe the learning scheme that drives the monocular predictor G to achieve an optimal coverage of the plausible poses consistent with the input image. Our method is summarized in fig. 3.

4.1 Learning with multiple hypotheses

For learning the model, we assume access to a training set of N images {I_i}_{i=1,...,N}, each cropped around a person. Furthermore, for each training image I_i we assume to know (1) the 2D locations Y_i of the body joints, (2) their 3D locations X_i, and (3) the ground-truth SMPL fit (θ_i, β_i, γ_i). Depending on the setup, some of these quantities can be inferred from the others (e.g. we can use the function J to convert the SMPL parameters to the 3D joints X_i and then the camera projection to obtain Y_i).

Best-of-M loss. Given a single input image, our network predicts a set of poses, where at least one should be similar to the ground-truth annotation X_i. This is captured by the best-of-M loss [12]:

$$\mathcal{L}_{\text{best}}(J, G; m^*) = \frac{1}{N}\sum_{i=1}^{N} \big\| X_i - \hat{X}^{m^*_i}(I_i) \big\|, \qquad m^*_i = \operatorname*{argmin}_{m=1,\dots,M} \big\| X_i - \hat{X}^{m}(I_i) \big\|, \tag{1}$$

where X̂^m(I_i) = J(S(G_m(I_i))) are the 3D joints estimated by the m-th SMPL predictor G_m(I_i) applied to image I_i. In this way, only the best hypothesis is steered to match the ground truth, leaving the other hypotheses free to sample the space of ambiguous solutions. During the computation of this loss, we also extract the best index m*_i for each training example.

Limitations of best-of-M. As noted in section 1, best-of-M only guarantees that one of the M hypotheses is a good solution, but says nothing about the other ones. Furthermore, in applications we are often interested in modulating the number of hypotheses generated, but the best-of-M regressor G(I) only produces a fixed number of output hypotheses M, and changing M would require retraining from scratch, which is intractable. We first address these issues by introducing a method that allows us to train a best-of-M model for a large M once and leverage it later to generate an arbitrary number n < M of hypotheses without the need for retraining, while ensuring that these are good representatives of likely body poses.

n-quantized-best-of-M. Formally, given a set of M predictions X̂_M(I) = {X̂^1(I), ..., X̂^M(I)}, we seek to generate a smaller n-sized set X̄_n(I) = {X̄^1(I), ..., X̄^n(I)} which preserves the information contained in X̂_M. In other words, X̄_n optimally quantizes X̂_M. To this end, we interpret the output of the best-of-M model as a set of choices X̂_M(I) for the possible pose. These poses are of course not all equally likely, but it is difficult to infer their probability from (1). We thus work with the following approximation. We consider the prior p(X) on possible poses (defined in the next section), and set:

$$p(X \mid I) = p(X \mid \hat{\mathcal{X}}_M(I)) = \sum_{i=1}^{M} \delta\big(X - \hat{X}^i(I)\big) \, \frac{p(\hat{X}^i(I))}{\sum_{k=1}^{M} p(\hat{X}^k(I))}. \tag{2}$$

This amounts to using the best-of-M output as a conditioning set (i.e. an unweighted selection of plausible poses) and then using the prior p(X) to weight the samples in this set.
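As a concrete illustration of the best-of-M loss in eq. (1), a minimal PyTorch-style sketch is given below. The tensor shapes and the per-joint averaging inside the norm are assumptions made for illustration; the paper's exact norm and reduction may differ.

```python
import torch

def best_of_m_loss(pred_joints, gt_joints):
    """
    pred_joints: (B, M, K, 3) - M hypotheses of K 3D joints per image
    gt_joints:   (B, K, 3)    - ground-truth 3D joints
    Returns the best-of-M loss of eq. (1) and the per-image index m* of the best hypothesis.
    """
    # Error of each hypothesis: average joint distance to the ground truth.
    err = (pred_joints - gt_joints.unsqueeze(1)).norm(dim=-1).mean(dim=-1)  # (B, M)
    best_err, m_star = err.min(dim=1)  # gradients flow only through the selected hypothesis
    return best_err.mean(), m_star
```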
With the weighted samples, we can then run K-means [24] to further quantize the best-of-M output while minimizing the quantization energy E:

$$E(\bar{\mathcal{X}} \mid \hat{\mathcal{X}}) = \mathbb{E}_{p(X \mid I)}\Big[ \min_{\bar{X}^j \in \{\bar{X}^1, \dots, \bar{X}^n\}} \big\| X - \bar{X}^j \big\|^2 \Big] = \sum_{i=1}^{M} \frac{p(\hat{X}^i(I))}{\sum_{k=1}^{M} p(\hat{X}^k(I))} \min_{\bar{X}^j \in \{\bar{X}^1, \dots, \bar{X}^n\}} \big\| \hat{X}^i(I) - \bar{X}^j \big\|^2. \tag{3}$$

This can be done efficiently on GPU — for our problem, K-means consumes less than 20% of the execution time of the entire forward pass of our method.

Learning the pose prior with normalizing flows. In order to obtain p(X), we propose to learn a normalizing flow model in the form of the RealNVP network f described in section 3 and the supplementary. RealNVP optimizes the negative log-likelihood L_nf(f) of the training ground-truth 3D skeletons {X_1, ..., X_N} annotated in their corresponding images {I_1, ..., I_N}:

$$\mathcal{L}_{\text{nf}}(f) = -\frac{1}{N}\sum_{i=1}^{N} \log p(X_i) = -\frac{1}{N}\sum_{i=1}^{N} \left( \log \mathcal{N}(f(X_i)) - \sum_{l=1}^{L} \log \left| \frac{d f_l(X^l_i)}{d X^l_i} \right| \right). \tag{4}$$

2D re-projection loss. Since the best-of-M loss optimizes a single prediction at a time, often some members of the ensemble X̂(I) drift away from the manifold of plausible human body shapes, ultimately becoming ‘dead’ predictions that are never selected as the best hypothesis m*. In order to prevent this, we further utilize a re-projection loss that acts across all hypotheses for a given image. More specifically, we constrain the set of 3D reconstructions to lie on projection rays passing through the 2D input keypoints with the following hypothesis re-projection loss:

$$\mathcal{L}_{\text{ri}}(J, G) = \frac{1}{N}\sum_{i=1}^{N}\sum_{m=1}^{M} \big\| Y_i - \pi_{t_i}(\hat{X}^m(I_i)) \big\|. \tag{5}$$

Note that many of our training images exhibit significant occlusion, so Y may contain invisible or missing points. We handle this by masking L_ri to prevent these points from contributing to the loss.

SMPL loss. The final loss terms, introduced by prior work [18, 31, 19], penalize deviations between the predicted and ground-truth SMPL parameters. For our method, these are only applied to the best hypothesis m*_i found above:

$$\mathcal{L}_{\theta}(G; m^*) = \frac{1}{N}\sum_{i=1}^{N} \big\| \theta_i - G_{\theta, m^*_i}(I_i) \big\|; \qquad \mathcal{L}_{V}(G; m^*) = \frac{1}{N}\sum_{i=1}^{N} \big\| S(\theta_i, \beta_i, \gamma_i) - S(G_{(\theta,\beta,\gamma), m^*_i}(I_i)) \big\| \tag{6}$$

$$\mathcal{L}_{\beta}(G; m^*) = \frac{1}{N}\sum_{i=1}^{N} \big\| \beta_i - G_{\beta, m^*_i}(I_i) \big\|; \qquad \mathcal{L}_{\text{rb}}(G; m^*) = \frac{1}{N}\sum_{i=1}^{N} \big\| Y_i - \pi_{t_i}(\hat{X}^{m^*_i}(I_i)) \big\| \tag{7}$$

Note here we use L_rb to refer to a 2D re-projection error between the best hypothesis and the ground-truth 2D points Y_i. This differs from the earlier loss L_ri, which is applied across all modes to enforce consistency with the visible input points. Note that we could have used eqs. (6) and (7) to select the best hypothesis m*_i, but it would entail an unmanageable memory footprint due to the requirement of SMPL-meshing every hypothesis before the best-of-M selection.

Overall loss. The model is thus trained to minimize:

$$\mathcal{L}(J, G) = \lambda_{\text{ri}}\mathcal{L}_{\text{ri}}(J, G) + \lambda_{\text{best}}\mathcal{L}_{\text{best}}(J, G; m^*) + \lambda_{\theta}\mathcal{L}_{\theta}(J, G; m^*) + \lambda_{\beta}\mathcal{L}_{\beta}(J, G; m^*) + \lambda_{V}\mathcal{L}_{V}(J, G; m^*) + \lambda_{\text{rb}}\mathcal{L}_{\text{rb}}(J, G; m^*) \tag{8}$$

where m* is given in eq. (1) and λ_ri, λ_best, λ_θ, λ_β, λ_V, λ_rb are weighting factors. We use a consistent set of SMPL loss weights across all experiments, λ_best = 25.0, λ_θ = 1.0, λ_β = 0.001, λ_V = 1.0, and set λ_ri = 1.0. Since the training of the normalizing flow f is independent of the rest of the model, we train f separately by optimizing L_nf with a weight of λ_nf = 1.0. Samples from our trained normalizing flow are shown in fig. 4.

5 Experiments

In this section we compare our method to several strong baselines. We start by describing the datasets and the baselines, followed by a quantitative and a qualitative evaluation. Following common practice for H36M, we train on subjects S1, S5, S6, S7 and S8, and test on S9 and S11. 3DPW is only used for evaluation and, following [20], we evaluate on its test set.
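Returning to the n-quantized-best-of-M step of eqs. (2)–(3), the sketch below illustrates one way to quantize the M hypotheses into n modes using the learned pose prior as sample weights for K-means. The function name flow_log_prob, the flattened hypothesis layout, and the standard-normal stand-in prior in the usage example are assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def quantize_hypotheses(hypotheses, flow_log_prob, n):
    """
    hypotheses:    (M, D) flattened 3D joint predictions from the best-of-M head
    flow_log_prob: callable mapping (M, D) -> (M,) log-densities under the learned pose prior
    n:             number of output modes (n < M)
    Returns (n, D) quantized hypotheses minimizing the weighted quantization energy of eq. (3).
    """
    logp = flow_log_prob(hypotheses)
    w = np.exp(logp - logp.max())      # unnormalized weights, numerically stabilized
    w = w / w.sum()                    # weights of eq. (2)
    km = KMeans(n_clusters=n, n_init=10).fit(hypotheses, sample_weight=w)
    return km.cluster_centers_

# Toy usage with a standard-normal "prior" standing in for the trained RealNVP.
M, D = 100, 72
hyps = np.random.randn(M, D)
modes = quantize_hypotheses(hyps, lambda X: -0.5 * (X ** 2).sum(axis=1), n=5)
```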
Our evaluation is consistent with [19, 20] - we report two metrics that compare the lifted dense 3D SMPL shape to the ground truth mesh: Mean Per Joint Position Error (MPJPE), Reconstruction Error (RE). For H36M, all errors are computed using an evaluation scheme known as “Protocol #2”. Please refer to supplementary for a detailed explanation of MPJPE and RE. Multipose metrics. MPJPE and RE are traditional metrics that assume a single correct ground truth pre- diction for a given 2D observation. As mentioned above, such an assumption is rarely correct due to the inherent ambiguity of the monocular 3D shape estimation task. We thus also report MPJPE- n/RE-n an extension of MPJPE RE used in [22], that enables an evaluation of n different shape hypotheses. In more detail, to evaluate an algorithm, we allow it to output n possible predictions and, out of this set, we select the one that minimizes the MPJPE/RE metric. We report results for n ∈ {1, 5, 10, 25}. Ambiguous H36M/3DPW (AH36M/A3DPW). Since H36M is captured in a controlled environment, it rarely depicts challenging real-world scenarios such as body occlusions that are the main source of ambiguity in the single-view 3D shape estimation problem. Hence, we construct an adapted version of H36M with synthetically-generated occlusions (fig. 5) by randomly hiding a subset of the 2D keypoints and re-computing an image crop around the remaining visible joints. Please refer to the supplementary for details of the occlusion generation process. While 3DPW does contain real scenes, for completeness, we also evaluate on a noisy, and thus more challenging version (A3DPW) generated according to the aforementioned strategy. Baselines Our method is compared to two multi-pose prediction baselines. For fairness, both baselines extend the same (state-of-the-art) trunk architecture as we use, and all methods have access to the same training data. SMPL-MDN follows [22] and outputs parameters of a mixture density model over the set of SMPL log-rotation pose parameters. Since a naïve implementation of the MDN model leads to poor performance (≈ 200mm MPJPE-n = 5 on H36M), we introduced several improvements that allow optimization of the total loss eq. (8). SMPL-CVAE, the second baseline, is a conditional variational autoencoder [40] combined with our trunk network. SMPL-CVAE consists of an encoding network that maps a ground truth SMPL mesh V to a gaussian vector z which is fed together with an encoding of the image to generate a mesh V ′ such that V ′ ≈ V . At test time, we sample n plausible human meshes by drawing z ∼ N (0, 1) to evaluate with MPJPE-n/RE-n. More details of both SMPL-CVAE and SMPL-MDN have been deferred to the supplementary material. For completeness, we also compare to three more baselines that tackle the standard single-mesh prediction problem: HMR [17], GraphCMR [31], and SPIN [19], where the latter currently attain state-of-the-art performance on H36M/3DPW. All methods were trained on H36M [14], MPI-INF3DHP [28], LSP [15], MPII [2] and COCO [23]. 5.1 Results Table 1 contains a comprehensive summary of the results on all 3 benchmarks. Our method outperforms the SMPL-CVAE and SMPL-MDN in all metrics on all datasets. For SMPL-CVAE, we found that the encoding network often “cheats” during training by transporting all information about the ground truth, instead of only encoding the modes of ambiguity. 
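For reference, the sketch below computes the MPJPE-n metric described above for a single example: the MPJPE of whichever of the n hypotheses is closest to the ground truth. The rigid (Procrustes) alignment used for RE is omitted, and the shapes are illustrative assumptions.

```python
import numpy as np

def mpjpe_n(pred_joints, gt_joints):
    """
    pred_joints: (n, K, 3) - n hypotheses for one example
    gt_joints:   (K, 3)    - ground-truth 3D joints
    Returns the MPJPE of the hypothesis closest to the ground truth (MPJPE-n).
    """
    per_hyp = np.linalg.norm(pred_joints - gt_joints[None], axis=-1).mean(axis=-1)  # (n,)
    return per_hyp.min()
```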
The reason for a lower performance of SMPL-MDN is probably the representation of the probability in the space of log-rotations, rather in the space of vertices. Modelling the MDN in the space of model vertices would be more convenient due to being more relevant to the final evaluation metric that aggregates per-vertex errors, however, fitting such high-dimensional (dim=6890× 3) Gaussian mixture is prohibitively costly. Furthermore, it is very encouraging to observe that our method is also able to outperform the singlemode baselines [17, 20, 19] on the single mode MPJPE on both H36M and 3DPW. This comes as a surprise since our method has not been optimized for this mode of operation. The difference is more significant for 3DPW which probably happens because 3DPW is not used for training and, hence, the normalizing flow prior acts as an effective filter of predicted outlier poses. Qualitiative results are shown in fig. 6. Ablation study. We further conduct an ablative study on 3DPW that removes components of our method and measures the incurred change in performance. More specifically, we: 1) ablate the hypothesis reprojection loss; 2) set p(X|I) = Uniform in eq. (3), effectively removing the normalizing flow component and executing unweighted K-Means in n-quantized-best-of-M . Table 2 demonstrates that removing both contributions decreases performance, validating our design choices. 6 Conclusions In this work, we have explored a seldom visited problem of representing the set of plausible 3D meshes corresponding to a single ambiguous input image of a human. To this end, we have pro- posed a novel method that trains a single multi-hypothesis best-of-M model and, using a novel n-quantized-best-of-M strategy, allows to sample an arbitrary number n < M of hypotheses. Importantly, this proposed quantization technique leverages a normalizing flow model, that effectively filters out the predicted hypotheses that are unnatural. Empirical evaluation reveals performance superior to several strong probabilistic baselines on Human36M, its challenging ambiguous version, and on 3DPW. Our method encounters occasional failure cases, such as when tested on individuals with unusual shape (e.g. obese people), since we have very few of these examples in the training set. Tackling such cases would make for interesting and worthwhile future work. Acknowledgements The authors would like to thank Richard Turner for useful technical discussions relating to normalizing flows, and Philippa Liggins, Thomas Roddick and Nicholas Biggs for proof reading. This work was entirely funded by Facebook AI Research. Broader impact Our method improves the ability of machines to understand human body poses in images and videos. Understanding people automatically may arguably be misused by bad actors. However, importantly, our method is not a form of biometric as it does not allow the identification of people. Rather, only their overall body shape and pose is reconstructed, but these details are insufficient for unique identification. In particular, individual facial features are not reconstructed at all. Furthermore, our method is an improvement of existing capabilities, but does not introduce a radical new capability in machine learning. Thus our contribution is unlikely to facilitate misuse of technology which is already available to anyone. Finally, any potential negative use of a technology should be balanced against positive uses. 
Understanding body poses has many legitimate applications in VR and AR, medical, assistance to the elderly, assistance to the visual impaired, autonomous driving, human-machine interactions, image and video categorization, platform integrity, etc.
1. What is the main contribution of the paper?
2. What are the strengths of the proposed approach?
3. What are the weaknesses or limitations of the method?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
This paper proposes a multi-hypothesis neural network regressor to recover several plausible reconstructions that are compatible with the input data. The proposed regressor is trained with both a best-of-M loss and a hypothesis re-projection loss. The flexibility of the regressor is improved by quantizing the best-of-M model via the so-called n-quantized-best-of-M method. Both quantitative and qualitative results demonstrate the effectiveness of the proposed methods.
Strengths
This paper focuses on the interesting problem of recovering several dense 3D reconstructions from single, partially occluded views. The proposed method is novel and technically practicable. The n-quantized-best-of-M method is well designed, making the best-of-M model more flexible for other applications. The experimental results demonstrate the effectiveness of the proposed methods.
Weaknesses
This paper assumes that the 2D locations of the body joints are known for all input images and utilizes a re-projection loss to constrain all hypotheses. However, accurately capturing 2D joint locations in partially occluded views is itself a challenging task, which limits the flexibility of the proposed method. More implementation details should be provided to improve the reproducibility of the proposed method, e.g., what kind of training images are used as input for the H36M and AH36M datasets, the batch size, weight decay, number of epochs, etc.
NIPS
Title Why So Pessimistic? Estimating Uncertainties for Offline RL through Ensembles, and Why Their Independence Matters Abstract Motivated by the success of ensembles for uncertainty estimation in supervised learning, we take a renewed look at how ensembles ofQ-functions can be leveraged as the primary source of pessimism for offline reinforcement learning (RL). We begin by identifying a critical flaw in a popular algorithmic choice used by many ensemble-based RL algorithms, namely the use of shared pessimistic target values when computing each ensemble member’s Bellman error. Through theoretical analyses and construction of examples in toy MDPs, we demonstrate that shared pessimistic targets can paradoxically lead to value estimates that are effectively optimistic. Given this result, we propose MSG, a practical offline RL algorithm that trains an ensemble of Q-functions with independently computed targets based on completely separate networks, and optimizes a policy with respect to the lower confidence bound of predicted action values. Our experiments on the popular D4RL and RL Unplugged offline RL benchmarks demonstrate that on challenging domains such as antmazes, MSG with deep ensembles surpasses highly well-tuned state-of-the-art methods by a wide margin. Additionally, through ablations on benchmarks domains, we verify the critical significance of using independently trained Q-functions, and study the role of ensemble size. Finally, as using separate networks per ensemble member can become computationally costly with larger neural network architectures, we investigate whether efficient ensemble approximations developed for supervised learning can be similarly effective, and demonstrate that they do not match the performance and robustness of MSG with separate networks, highlighting the need for new efforts into efficient uncertainty estimation directed at RL. 1 Introduction Offline reinforcement learning (RL), also referred to as batch RL [1], is a problem setting in which one is provided a dataset of interactions with an environment in the form of a Markov decision process (MDP), and the goal is to learn an effective policy exclusively from this fixed dataset. Offline RL holds the promise of data-efficiency through data reuse, and improved safety due to minimizing the need for policy rollouts. As a result, offline RL has been the subject of significant renewed interest in the machine learning literature [2]. One common approach to offline RL in the model-free setting is to use approximate dynamic programming (ADP) to learn a Q-value function via iterative regression to backed-up target values. The predominant algorithmic philosophy with most success in ADP-based offline RL is to encourage 36th Conference on Neural Information Processing Systems (NeurIPS 2022). obtained policies to remain close to the support set of the available offline data. A large variety of methods have been developed for enforcing such constraints, examples of which include regularizing policies with behavior cloning objectives [3, 4], performing updates only on actions observed inside [5, 6, 7, 8] or close to [9] the offline dataset, and regularizing value functions to underestimate the value of actions not seen in the dataset [10, 11, 12]. The need for such regularizers arises from inevitable inaccuracies in value estimation when function approximation, bootstrapping, and off-policy learning – i.e. The Deadly Triad [13] – are involved. 
In offline RL in particular, such inaccuracies cannot be resolved through additional interactions with the MDP. Thus, remaining close to the offline dataset limits opportunities for catastrophic inaccuracies to arise. However, recent works have argued that the aforementioned constraints can be overly pessimistic, and instead opt for approaches that take into consideration the uncertainty about the value function [14, 15, 16], thus re-focusing the offline RL problem to that of deriving accurate lower confidence bounds (LCB) of Q-values. In the empirical supervised learning literature, deep network ensembles (definition in Appendix L) and their more efficient variants have been shown to be the most effective approaches for uncertainty estimation, towards learning calibrated estimates and confidence bounds with modern neural network function approximators [17]. Motivated by this, in our work we take a renewed look intoQ-ensembles, and study how to leverage them as the primary source of pessimism for offline RL. In deep RL, a very popular algorithmic choice is to use an ensemble of Q-functions to obtain pessimistic value estimates and combat overestimation bias [18]. Specifically, in the policy evaluation procedure, all Q-networks are updated towards a shared pessimistic temporal difference target. Similarly in offline RL, in addition to the main offline RL objective that they propose, several existing methods use such Q-ensembles [10, 3, 19, 20, 21, 22, 23, 8]. We begin by mathematically characterizing a critical flaw in the aforementioned ensembling procedure. Specifically, we demonstrate that using shared pessimistic targets can paradoxically lead to Q-estimates which are in fact optimistic! We verify our finding by constructing pedagogical toy MDPs. These results demonstrate that the formulation of using shared pessimistic targets is fundamentally ill-formed. To resolve this problem, we propose Model Standard-deviation Gradients (MSG), an ensemble-based offline RL algorithm. In MSG, each Q-network is trained independently, without sharing targets. Crucially, ensembles trained with independent target values will always provide pessimistic value estimates. The pessimistic lower-confidence bound (LCB) value estimate – computed as the mean minus standard deviation of the Q-value ensemble – is then used to update the policy being trained. Evaluating MSG on the established D4RL [24] and RL Unplugged [25] benchmarks for offline RL, we demonstrate that MSG matches, and in the more challenging domains such as antmazes, significantly exceeds the prior state-of-the-art. Additionally, through a series of ablation experiments on benchmark domains, we verify the significance of our theoretical findings, study the role of ensemble size, and highlight the settings in which ensembles provide the most benefit. The use of ensembles will inevitably be a computational bottleneck when applying offline RL to domains requiring large neural network models. Hence, as a final analysis, we investigate whether the favorable performance of MSG can be obtained through the use of modern efficient ensemble approaches which have been successful in the supervised learning literature [26, 27, 28, 17]. We demonstrate that while efficient ensembles are competitive with the state-of-the-art on simpler offline RL benchmark domains, similar to many popular offline RL methods they fail on more challenging tasks, and cannot recover the performance and robustness of MSG using full ensembles with separate neural networks. 
Our work highlights some of the unique and often overlooked challenges of ensemble-based uncertainty estimation in offline RL. Given the strong performance of MSG, we hope our work motivates increased focus into efficient and stable ensembling techniques directed at RL, and that it highlights intriguing research questions for the community of neural network uncertainty estimation researchers whom thus far have not employed sequential domains such as offline RL as a testbed for validating modern uncertainty estimation techniques. 2 Related Work Uncertainty estimation is a core component of RL, since an agent only has a limited view into the mechanics of the environment through its available experience data. Traditionally, uncertainty estimation has been key to developing proper exploration strategies such as upper confidence bound (UCB) and Thompson sampling [29], in which an agent is encouraged to seek out paths where its uncertainty is high. Offline RL presents an alternative paradigm, where the agent must act conservatively and is thus encouraged to seek out paths where its uncertainty is low [14]. In either case, proper and accurate estimation of uncertainties is paramount. To this end, much research has been produced with the aim of devising provably correct uncertainty estimates [30, 31, 32], or at least bounds on uncertainty that are good enough for acting exploratorily [33] or conservatively [34]. However, these approaches require exceedingly simple environment structure, typically either a finite discrete state and action space or linear spaces with linear dynamics and rewards. While theoretical guarantees for uncertainty estimation are more limited in practical situations with deep neural network function approximators, a number of works have been able to achieve practical success, for example using deep network analogues for count-based uncertainty [35], Bayesian uncertainty [36, 37], and bootstrapping [38, 39]. Many of these methods employ ensembles. In fact, in continuous control RL, it is common to use an ensemble of two value functions and use their minimum for computing a target value during Bellman error minimization [18]. A number of works in offline RL have extended this to propose backing up minimums or lower confidence bound estimates over larger ensembles [3, 10, 19, 20, 22, 23, 21]. In our work, we continue to find that ensembles are extremely useful for acting conservatively, but the manner in which ensembles are used is critical. Specifically our proposed MSG algorithm advocates for using independently learned ensembles, without sharing of target values, and this important design decision is supported by empirical evidence. The widespread success of ensembles for uncertainty estimation in RL echoes similar findings in supervised deep learning. While there exist proposals for more technical approaches to uncertainty estimation [40, 41, 42], ensembles have repeatedly been found to perform best empirically [26, 43]. Much of the active literature on ensembles in supervised learning is concerned with computational efficiency, with various proposals for reducing the compute or memory footprint of training and inference on large ensembles [28, 44, 27]. While these approaches have been able to achieve impressive results in supervised learning, our empirical results suggest that their performance suffers significantly in challenging offline RL settings compared to deep ensembles. 3 Pessimistic Q-Ensembles: Independent or Shared Targets? 
In this section we identify a critical flaw in how ensembles are commonly employed – in offline as well as online RL – for obtaining pessimistic value estimates [10, 3, 19, 20, 21, 22, 23, 8], which can paradoxically lead to an optimism bonus! We begin by mathematically characterizing this problem and presenting a simple fix. Subsequently, we leverage our results to construct pedagogical toy MDPs demonstrating the practical importance of the identified problem and solution.

3.1 Mathematical Characterization

[Algorithm box: pessimistic policy evaluation with a Q-function ensemble]
1. Initialize θ_i for all i ∈ Z.
2. For t = 1, 2, ...:
   • For each (s, a, r, s′) ∈ D and i ∈ Z, compute target values y_i(r, s′, π).
   • For each i ∈ Z, update θ_i to optimize the regression objective (1/|D|) Σ_{(s,a,r,s′)∈D} (Q_{θ_i}(s, a) − y_i(r, s′, π))².
3. Return a pessimistic Q-value function Q_pessimistic based on the trained ensemble.

We assume access to a dataset D composed of (s, a, r, s′) transition tuples from a Markov Decision Process (MDP) determined by a tuple M = ⟨S, A, R, P, γ⟩, corresponding to state space, action space, reward function, transition dynamics, and discount, respectively. As is standard in RL, we do not assume any knowledge of R, P, other than that implicitly provided by the dataset D. In this section, for clarity of exposition, we assume that the policies we consider are deterministic, and that our MDPs do not have terminal states. We consider Q-value ensemble members given by a parameterization Q_{θ_i}, where i indexes into some set Z, which is finite in practice but may be infinite or uncountable in theory. We assume Z has an associated probability space allowing us to make expectation E or variance V computations over the ensemble members. Given a fixed policy π, a general dynamic-programming-based procedure for obtaining pessimistic value estimates is outlined by the iterative regression described in the box above.

A key algorithmic choice in this recipe is where pessimism should be introduced. This can be done by either (a) pessimistically aggregating Q-values after training, i.e. inside Step 3, or (b) also incorporating pessimism during Step 2, by using a shared pessimistic target value y. Through our review of the offline RL (as well as online RL) literature, we have observed that the most common approach is the latter, where the targets are pessimistic, shared, and identical across ensemble members [10, 3, 19, 20, 21, 22, 23, 8]. Specifically, they are computed as

$$y_i(r, s', \pi) = \mathrm{PO}\big(\{\, r + \gamma\, Q_{\theta_i}(s', \pi(s')),\ \forall i \in \mathcal{Z} \,\}\big)$$

with PO being a desired pessimism operator aggregating the TD target values of the ensemble members (e.g. "mean minus standard deviation", or "minimum").

In this section, our goal is to compare these two alternative approaches. For our analysis, we will use "mean minus standard deviation" (a lower confidence bound (LCB)) as our pessimism operator, and use the notation Q_LCB in place of Q_pessimistic (defined in the box above). Under the LCB pessimism operator we will have:

Independent Targets (Method 1): $y_i(r, s', \pi) = r + \gamma \cdot Q_{\theta_i}(s', \pi(s'))$

Shared Targets (Method 2): $y_i(r, s', \pi) = r + \gamma \cdot \big( \mathbb{E}_{\text{ens}}[Q_{\theta_i}(s', \pi(s'))] - \sqrt{\mathbb{V}_{\text{ens}}[Q_{\theta_i}(s', \pi(s'))]} \big)$

For both we have: $Q_{\text{LCB}}(s, a) = \mathbb{E}_{\text{ens}}[Q_{\theta_i}(s, a)] - \sqrt{\mathbb{V}_{\text{ens}}[Q_{\theta_i}(s, a)]}$

To characterize the form of Q_LCB when using complex neural networks, we refer to the work on infinite-width neural networks, namely the Neural Tangent Kernel (NTK) [45]. We consider Q-value ensemble members, Q_{θ_i}, which all share the same infinite-width neural network architecture (and thus the same NTK parameterization).
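To make the distinction concrete, the sketch below shows how the per-member TD targets would be formed under the two methods; the tensor shapes, names, and use of the default (unbiased) standard deviation are illustrative assumptions rather than any particular implementation.

```python
import torch

def per_member_targets(r, q_next_ens, shared=False, gamma=0.99):
    """
    r:          (B,)   rewards
    q_next_ens: (N, B) each ensemble member's target-network Q-value at (s', pi(s'))
    Returns the per-member TD targets y_i, shape (N, B).
    """
    if shared:
        # Shared pessimistic targets (Method 2): all members regress to the same LCB value.
        lcb = q_next_ens.mean(dim=0) - q_next_ens.std(dim=0)            # (B,)
        return r + gamma * lcb.unsqueeze(0).expand_as(q_next_ens)
    # Independent targets (Method 1): member i regresses to its own bootstrap.
    return r + gamma * q_next_ens

# In either case the pessimistic estimate used downstream is
# Q_LCB(s, a) = mean_i Q_i(s, a) - std_i Q_i(s, a).
```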
As noted in the algorithm box above, and as is the case in deep ensembles [43], the only difference amongst ensemble members Q_{θ_i} is in their initial weights θ_i sampled from the neural network's initial weight distribution. Before presenting our results, we establish some notation relevant to the infinite-width and NTK regime. Let X, R, X′ denote data matrices containing (s, a), r, and (s′, π(s′)) appearing in the offline dataset D; i.e., the k-th transition (s, a, r, s′) in D is represented by the k-th rows in X, R, X′. Let A, B denote two data matrices, where similar to X, X′, each row contains a state-action tuple (s, a) ∈ S × A. The NTK, which governs the training dynamics of the infinitely-wide neural network, is then given by the outer product of gradients of the neural network at initialization:

$$\hat{\Theta}^{(0)}_i(A, B) := \nabla_\theta Q_{\theta_i}(A) \cdot \nabla_\theta Q_{\theta_i}(B)^T \big|_{t=0},$$

where we overload notation Q_{θ_i}(A) to represent the column vector containing Q-values. At infinite width in the NTK regime, Θ̂^(0)_i(A, B) converges to a deterministic kernel (i.e. does not depend on the random weight sample θ_i), and hence is the same for all ensemble members. Thus, hereafter we will remove the index i from the notation of the NTK kernel and simply write Θ̂^(0)(A, B). With our notation in place, we define

$$C := \hat{\Theta}^{(0)}(\mathcal{X}', \mathcal{X}) \cdot \hat{\Theta}^{(0)}(\mathcal{X}, \mathcal{X})^{-1}.$$

Intuitively, C is a |D| × |D| matrix where the element at column q, row p, captures a notion of similarity between (s, a) in the q-th row of X, and (s′, π(s′)) in the p-th row of X′. We now have all the necessary machinery to characterize the form of Q_LCB:

Theorem 3.1. For a given (s, a) ∈ S × A, let Q^(0)_{θ_i}(s, a) denote Q_{θ_i}(s, a)|_{t=0} (value at initialization), with θ sampled from the initial weight distribution. After t + 1 iterations of pessimistic policy evaluation, the LCB value estimate for (s′, π(s′)) ∈ X′ is given by,

Independent Targets (Method 1):
$$Q^{(t+1)}_{\text{LCB}}(\mathcal{X}') = O(\gamma^t \|C\|^t) + \underbrace{(1 + \dots + \gamma^t C^t)}_{\text{backup term}} C R - \sqrt{\mathbb{E}_{\text{ens}}\Big[\Big(\underbrace{(1 + \dots + \gamma^t C^t)}_{\text{backup term}} \big(Q^{(0)}_{\theta_i}(\mathcal{X}') - C Q^{(0)}_{\theta_i}(\mathcal{X})\big)\Big)^2\Big]} \tag{1}$$

Shared Targets (Method 2):
$$Q^{(t+1)}_{\text{LCB}}(\mathcal{X}') = O(\gamma^t \|C\|^t) + \underbrace{(1 + \dots + \gamma^t C^t)}_{\text{backup term}} C R - \underbrace{(1 + \dots + \gamma^t C^t)}_{\text{backup term}} \sqrt{\mathbb{E}_{\text{ens}}\Big[\Big(Q^{(0)}_{\theta_i}(\mathcal{X}') - C Q^{(0)}_{\theta_i}(\mathcal{X})\Big)^2\Big]} \tag{2}$$

where the square and square-root operations are applied element-wise.¹ Please refer to Appendix F for the proof.

As can be seen, the equations for the pessimistic LCB value estimates in both settings are similar, only differing in the third term. The first term is negligible and tends towards zero as the number of iterations of policy evaluation increases. The second term shared by both variants corresponds to the expected result of the policy evaluation procedure without any pessimism (as before, we mean expectation under θ sampled from the initial weight distribution). Accordingly, the differing third term in each variant exactly corresponds to the "pessimism" or "penalty" induced by that variant. Considering the available offline RL dataset D as a restricted MDP in itself, we see that the use of Independent Targets (Method 1) leads to a pessimism term that performs "backups" along the trajectories that the policy would experience in this restricted MDP (using the geometric term 1 + ··· + γ^t C^t) before computing a variance estimate. Meanwhile the use of Shared Targets (Method 2) does the reverse – it first computes a variance term and then performs the "backups".
While this difference may seem inconsequential, it becomes critical when one realizes that in Equation 2 for Shared Targets (Method 2), the pessimism term (third term) may become positive, i.e. a negative penalty, yielding an effectively optimistic LCB estimate. Critically, with Independent Targets (Method 1), this problem cannot occur.

3.2 Validating Theoretical Predictions

[Box: offline data generating procedure]
1. Initialize empty X, R, X′.
2. For N episodes:
   • sample s ∼ N(0, I)
   • For T steps:
     – sample a ∼ N(0, I)
     – sample s′ ∼ N(0, I)
     – set π(s′) ← a
     – add (s, a) to X
     – add r ∼ N(0, I) to R
     – add (s′, π(s′)) to X′
     – set s ← s′
3. Return the offline dataset X, R, X′.

In this section we demonstrate that our analysis is not solely a theoretical result concerning the idiosyncrasies of infinite-width neural networks, but that it is rather straightforward to construct combinations of an MDP, offline data, and a policy that lead to the critical flaw of an optimistic LCB estimate. Let d_s, d_a denote the dimensionality of state and action vectors respectively. We consider an MDP whose initial state distribution is a spherical multivariate normal distribution N(0, I), and whose transition function is given by P(s′|s, a) = N(0, I). Consider the procedure for generating our offline data matrices, described in the box above. This procedure returns data matrices X, R, X′ by generating N episodes of length T, using a behavior policy a ∼ N(0, I). In this generation process, we set the policy we seek to pessimistically evaluate, π, to always apply the behavior policy's action in state s to the next state s′.

To construct our examples, we consider the setting where we use linear models to represent Q_{θ_i}, with the initial weight distribution being a spherical multivariate normal distribution, N(0, I). With linear models, the equations for Q_LCB take an identical form to those in Theorem 3.1. Given the described data generating process and our choice of linear function approximation, we can compute the pessimism term for Shared Targets (Method 2) (i.e. the third term in Theorem 3.1, Equation 2). We implement this computation in a simple Python script, which we include in the supplementary material. We choose d_s = 30, d_a = 30, γ = 0.5, N = 5, T = 5, and t = 1000 (t is the exponent in the geometric term above). We run this simulation 1000 times, each with a different random seed. After filtering simulation runs to ensure γ‖C‖ < 1 (as discussed in an earlier footnote), we observe that 221 of the simulation runs result in an optimistic LCB bonus, meaning that in those experiments, the pessimism term was in fact positive for some (s′, π(s′)) ∈ X′. We have made the Python notebook implementing this experiment available in our supplementary material. For further intriguing investigations in pedagogical toy MDPs regarding the structure of uncertainties, we strongly encourage the interested reader to refer to Appendix G.

¹ Note that if γ‖C‖ ≥ 1, dynamic programming is liable to diverge in either setting. In our discussions, we avoid this degenerate case and assume γ‖C‖ < 1.

4 Model Standard-deviation Gradients (MSG)

It is important to note that even if the pessimism term does not become positive for a particular combination of MDPs, offline datasets, and policies, the fact that it can occur highlights that the formulation of Shared Targets is fundamentally ill-formed.
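The following condensed sketch illustrates the kind of script described in Section 3.2 above: with linear Q-functions the NTK reduces to the Gram matrix of the (s, a) features, so the Shared-Targets penalty from Theorem 3.1 (Equation 2) can be computed directly and checked for sign. The hyperparameters follow the text, but the seed — and hence whether this particular run happens to exhibit the optimistic case — is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)
ds, da, gamma, N, T, t = 30, 30, 0.5, 5, 5, 1000

# Generate offline data matrices X (rows: (s, a)) and Xp (rows: (s', pi(s'))) per the box above.
rows_X, rows_Xp = [], []
for _ in range(N):
    s = rng.normal(size=ds)
    for _ in range(T):
        a = rng.normal(size=da)
        s_next = rng.normal(size=ds)
        rows_X.append(np.concatenate([s, a]))
        rows_Xp.append(np.concatenate([s_next, a]))  # pi replays the behavior action in s'
        s = s_next
X, Xp = np.asarray(rows_X), np.asarray(rows_Xp)

# For a linear Q_theta(x) = theta^T x, grad_theta Q = x, so the NTK is the plain Gram matrix.
C = Xp @ X.T @ np.linalg.inv(X @ X.T)

if gamma * np.linalg.norm(C, 2) < 1.0:            # discard divergent runs, as in the text
    # Backup term (1 + gamma*C + ... + gamma^t C^t).
    G, P = np.eye(len(C)), np.eye(len(C))
    for _ in range(t):
        P = gamma * C @ P
        G += P
    # With theta ~ N(0, I), sqrt(E_ens[(Q0(Xp) - C Q0(X))^2]) is the row norm of (Xp - C @ X).
    shared_penalty = -G @ np.linalg.norm(Xp - C @ X, axis=1)
    print("optimistic LCB for some (s', pi(s')):", bool((shared_penalty > 0).any()))
else:
    print("gamma * ||C|| >= 1; skip this seed")
```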
To resolve this problem we propose Model Standard-deviation Gradients (MSG), an offline RL algorithm which leverages ensembles to approximate the LCB using the approach of Independent Targets.

4.1 Policy Evaluation and Optimization in MSG

MSG follows an actor-critic setup. At the beginning of training, we create an ensemble of N Q-functions by taking N samples from the initial weight distribution. During training, in each iteration, we first perform policy evaluation by estimating Q_LCB for the current policy, and subsequently optimize the policy through gradient ascent on Q_LCB.

Policy Evaluation. As motivated by our analysis in Section 3, we train the ensemble Q-functions independently using the standard least-squares Q-evaluation loss,

$$\mathcal{L}(\theta_i) = \mathbb{E}_{(s,a,r,s')\sim\mathcal{D}}\Big[ \big( Q_{\theta_i}(s, a) - y_i(r, s', \pi) \big)^2 \Big]; \qquad y_i = r + \gamma \cdot \mathbb{E}_{a'\sim\pi(s')}\big[ Q_{\bar{\theta}_i}(s', a') \big] \tag{3}$$

where θ_i, θ̄_i denote the parameters and target network parameters for the i-th Q-function. In each iteration, as is common practice, we do not update the Q-functions until convergence, and instead update the networks using a single gradient step. In practice, the expectation in L(θ_i) is estimated by a minibatch, and the expectation in y_i is estimated with a single action sample from the policy. After every update to the Q-function parameters, their corresponding target parameters are updated to be an exponential moving average of the parameters in the standard fashion.

Policy Optimization. As in standard deep actor-critic algorithms, policy evaluation steps (learning Q) are interleaved with policy optimization steps (learning π). In MSG, we optimize the policy through gradient ascent on Q_LCB. Specifically, our proposed policy optimization objective in MSG is,

$$\mathcal{L}(\pi) = \mathbb{E}_{s\sim\mathcal{D},\, a\sim\pi(s)}\big[ Q_{\text{LCB}}(s, a) \big] = \mathbb{E}_{s\sim\mathcal{D},\, a\sim\pi(s)}\Big[ \mathbb{E}_{\text{ens}}[Q_{\theta_i}(s, a)] + \beta \sqrt{\mathbb{V}_{\text{ens}}[Q_{\theta_i}(s, a)]} \Big] \tag{4}$$

where β ≤ 0 is a hyperparameter that determines the amount of pessimism.

4.2 The Trade-Off Between Trust and Pessimism

While our hope is to leverage the implicit generalization capabilities of neural networks to estimate proper LCBs beyond states and actions in the finite dataset D, neural network architectures can be fundamentally biased, or we can simply be in a setting with insufficient data coverage, such that the generalization capability of those networks is limited. To this end, we augment the policy evaluation objective of MSG (L(θ_i), equation 3) with a support constraint regularizer inspired by CQL [11]²:

$$\mathcal{H}(\theta_i) = \mathbb{E}_{s\sim\mathcal{D},\, a\sim\pi(s)}\big[ Q_{\theta_i}(s, a) \big] - \mathbb{E}_{(s,a)\sim\mathcal{D}}\big[ Q_{\theta_i}(s, a) \big].$$

This regularizer encourages the Q-functions to increase the values for actions seen in the dataset D, while decreasing the values of the actions of the current policy. Practically, we estimate the latter expectation of H using the states in the mini-batch, and we approximate the former expectation using a single sample from the policy. We control the contribution of H(θ_i) by weighting this term with a weight parameter α. The full critic loss is thus given by,

$$\mathcal{L}(\theta_1, \dots, \theta_N) = \sum_{i=1}^{N} \big( \mathcal{L}(\theta_i) + \alpha \mathcal{H}(\theta_i) \big) \tag{5}$$

² Instead of a CQL-style value regularizer, other forms of support constraints, such as a behavioral cloning regularizer on the policy, could potentially be used.

Empirically, as evidenced by our results in Appendix A.2, we have observed that such a regularizer can be necessary in two situations: 1) The first scenario is where the offline dataset only contains a narrow data distribution (e.g., imitation learning datasets only containing expert data). We believe
this is because the power of ensembles comes from predicting a value distribution for unseen (s, a) based on the available training data. Thus, if no data for sub-optimal actions is present, ensembles cannot make accurate predictions and increased pessimism viaH becomes necessary. 2) The second scenario is where environment dynamics can be chaotic (e.g. Gym [46] hopper and walker2d). In such domains it would be beneficial to remain close to the observed data in the offline dataset. Pseudo-code for our proposed MSG algorithm can be viewed in Algorithm Box 1. 5 Experiments In this section we seek to empirically answer the following questions: 1) How well does MSG perform compared to current state-of-the-art in offline RL? 2) Are the theoretical differences in ensembling approaches (Section 3) practically relevant? 3) When and how does ensemble size affect perfomance? 4) Can we match the performance of MSG through efficient ensemble approximations developed in the supervised learning literature? 5.1 Offline RL Benchmarks D4RL Gym Domains We begin by evaluating MSG on the Gym domains (halfcheetah, hopper, walker2d) of the D4RL offline RL benchmark [24], using the medium, medium-replay, medium-expert, and expert data settings. Our results presented in Appendix A.2 (summarized in Figure 4) demonstrates that MSG is competitive with well-tuned state-of-the-art methods CQL [11] and F-BRC [12]. D4RL Antmaze Domains Due to the narrow range of behaviors in Gym environments, offline datasets for these domains tend to be very similar to imitation learning datasets. As a result, many prior offline RL approaches that perform well on D4RL Gym fail on harder tasks that require stitching trajectories through dynamic programming (c.f. [48]). An example of such tasks are the D4RL antmaze settings, in particular those in the antmaze-medium and antmaze-large environments. The data for antmaze tasks consists of many episodes of an Ant agent [46] running along arbitrary paths in a maze. The agent is tasked with using this data to learn a point-to-point navigation policy from one corner of the maze to the opposite corner, where rewards are given by a sparse signal that is 1 when near the desired end location in the maze – at which point the episode is terminated – and 0 otherwise. The undirected, extremely sparse reward nature of antmaze tasks make them very challenging, especially for the large maze sizes. Table 1 and Appendix B.2 present our results. To the best of our knowledge, the antmaze domains are considered unsolved, with few prior works reporting non-zero results on the large mazes [11, 48]. As can be seen, MSG obtains results that far exceed the prior state-of-the-art results reported by [48]. While some works that use specialized hierarchical approaches have reported strong results as well [49], it is notable that MSG is able to solve these challenging tasks with standard architectures and training procedures, and this shows the power that ensembling can provide – as long as the ensembling is performed properly! RL Unplugged In addition to the D4RL benchmark, we evaluate MSG on the RL Unplugged benchmark [25]. Our results are presented in Figure 1. We compare to results for Behavioral Cloning (BC) and two state-of-the-art methods in these domains, Critic-Regularized Regression (CRR) [7] and MuZero Unplugged [47]. Due to computational constraints when using deep ensembles, we use the same network architectures as we used for D4RL experiments. 
The networks we use are approximately 160 -th the size of those used by the BC, CRR, and MuZero Unplugged baselines in terms of number of parameters. We observe that MSG is on par with or exceeds the current state-of-the-art on all tasks with the exception of humanoid.run, which appears to require the larger architectures used by the baseline methods. Experimental details can be found in Appendix C. Benchmark Conclusion Prior work has demonstrated that many offline RL approaches that perform well on Gym domains, fail to succeed on much more challenging domains [48]. Our results demonstrate that through uncertainty estimation with deep ensembles, MSG is able to very significantly outperform prior work on very challenging benchmark domains such as the D4RL antmazes. 5.2 Ensemble Ablations Independence in Ensembles Ablation In Section 3, through theoretical arguments and toy experiments we demonstrated the importance of training using “Independent" ensembles. Here, we seek to validate the significance of our theoretical findings using offline RL benchmarks, by comparing Independent targets (as in MSG), to Shared-LCB and Shared-Min targets. Our results are presented in Appendices A.3 and B.3, with a summary in Figures 3 and 4. In the Gym domains (Appendix A.3), with ensemble size N = 4, Shared-LCB significantly underperforms MSG. In fact, not using ensembles at all (N = 1) outperforms Shared-LCB. With ensemble size N = 4, Shared-Min is on par with MSG. When the ensemble size is increased to N = 64 (Figure 7), we observe the performance of Shared-Min drops significantly on 7/12 D4RL Gym settings. In constrast, the performance of MSG is stable and does not change. In the challenging antmaze domains (Appendix B.3), for both ensemble sizes N = 4 and N = 64, Shared-LCB and Shared-Min targets completely fail to solve the tasks, while for both ensemble sizes MSG exceeds the prior state-of-the-art (Table 1), IQL [48]. Independence in Ensembles Conclusion Our experiments corroborate the theoretical results in Section 3, demonstrating that Independent targets are critical to the success of MSG. These results are particularly striking when one considers that the implementations for MSG, Shared-LCB, and Shared-Min differ by only 2 lines of code. Ensemble Size Ablation An important ablation is to understand the role of ensemble size in MSG. In the Gym domains, Figure 5 demonstrates that increasing the number of ensembles from 4 to 64 does not result in a noticeable change in performance. In the antmaze domains, we evaluate MSG under ensemble sizes {1, 4, 16, 64}. Figure 2 presents our results. Our key takeaways are as follows: • For the harder antmaze-large tasks, there is a clear upward trend as ensemble size increases. • Using a small ensemble size (e.g. N = 4) is already quite good, but more sensitive to hyperparam- eter choice especially on the harder tasks. • Very small ensemble sizes benefit more from using α > 0 3. However, across the board, using α = 0 is preferable to using too large of a value for α – with the exception of N = 1 which cannot take advantange of the benefits of ensembling. • When using lower values of β, lower values of α should be used. Ensemble Size Conclusion In domains such as D4RL Gym where offline datasets are qualitatively similar to imitation learning datasets, larger ensembles do not result in noticeable gains. In domains such as D4RL antmaze which contain more data diversity, larger ensembles significantly improve the performance of agents. 
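For reference when reading the ablations above, the sketch below illustrates, under assumed tensor shapes and helper names, the MSG critic and actor objectives of Section 4 (eqs. 3–5 and 4); the commented line marks the single place where switching to a shared pessimistic target would yield the Shared-LCB/Shared-Min variants discussed above. This is a minimal sketch, not the authors' implementation; in practice the critic loss is minimized with respect to the Q-parameters only and the actor loss with respect to the policy only.

```python
import torch

def msg_losses(q_ens, q_targ_ens, policy, batch, alpha=0.0, beta=-4.0, gamma=0.99):
    """
    q_ens(s, a), q_targ_ens(s, a): callables returning (N, B) Q-values for an ensemble of size N.
    policy(s): returns actions for states s.
    batch: dict of tensors with keys "s", "a", "r", "s2" (next states).
    Returns (critic_loss, actor_loss).
    """
    s, a, r, s2 = batch["s"], batch["a"], batch["r"], batch["s2"]

    # Critic (eqs. 3 and 5): every member regresses to its OWN bootstrapped target.
    # Replacing the next line with a shared mean-minus-std (or min) target gives the
    # Shared-LCB / Shared-Min ablations.
    with torch.no_grad():
        targets = r.unsqueeze(0) + gamma * q_targ_ens(s2, policy(s2))       # (N, B)
    q_data = q_ens(s, a)                                                    # (N, B)
    q_pi = q_ens(s, policy(s).detach())                                     # (N, B), CQL-style term
    critic_loss = ((q_data - targets) ** 2).mean(dim=1).sum()               # sum_i L(theta_i)
    critic_loss = critic_loss + alpha * (q_pi - q_data).mean(dim=1).sum()   # + alpha * H(theta_i)

    # Actor (eq. 4): gradient ascent on the LCB, i.e. minimize -(mean + beta * std), beta <= 0.
    q_new = q_ens(s, policy(s))                                             # (N, B)
    actor_loss = -(q_new.mean(dim=0) + beta * q_new.std(dim=0)).mean()
    return critic_loss, actor_loss
```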
5.3 Efficient Ensembles Thus far we have demonstrated the significant performance gains attainable through MSG. An important concern however, is that of parameter and computational efficiency: Deep ensembles of Q-networks result in an N -fold increase in memory and compute usage, both in the policy evaluation and policy optimization phases of actor-critic training. While this might not be a significant problem in offline RL benchmark domains due to small model footprints4, it becomes a major bottleneck with larger architectures such as those used in language and vision domains. To this end, we evaluate whether recent advances in “Efficient Ensemble" approaches from the supervised learning literature transfer well to the problem of offline RL. Specifically, the efficient ensemble approaches we consider are: Multi-Head Ensembles [26, 50, 51], MIMO Ensembles [27], and Batch Ensembles [28]. For a description of these efficient ensembling approaches please refer to Appendix E. A runtime comparison of different ensembling approaches can be viewed in Table 2. D4RL Gym Domains Appendix A.4 presents our results in the D4RL Gym domains with ensemble size N = 4 (summary in Figure 4). Amongst the considered efficient ensemble approaches, Batch Ensembles [28] result in the best performance, which follows findings from the supervised learning literature [17]. 3As a reminder, α is the weight of the CQL-style regularizer loss discussed in Section 4.2. 4All our experiments were ran on a single Nvidia P100 GPU. D4RL Antmaze Domains Appendix B.4 presents our results in the D4RL antmaze domains for both ensemble sizes of N = 4 and N = 64 (summary in Figure 3). As can be seen, compared to MSG with deep ensembles (separate networks), the efficient ensemble approaches we consider are very unreliable, and fail for most hyperparameter choices. Efficient Ensembles Conclusion We believe the observations in this section very clearly motivate future work in developing efficient uncertainty estimation approaches that are better suited to the domain of reinforcement learning. To facilitate this direction of research, in our codebase we have included a complete boilerplate example of an offline RL agent, amenable to drop-in implementation of novel uncertainty-estimation techniques. 6 Discussion & Future Work Our work has highlighted the significant power of ensembling as a mechanism for uncertainty estimation for offline RL. In this work we took a renewed look into Q-ensembles, and studied how to leverage them as the primary source of pessimism for offline RL. Through theoretical analyses and toy constructions, we demonstrated a critical flaw in the popular approach of using shared targets for obtaining pessimistic Q-values, and demonstrated that it can in fact lead to optimistic estimates. Using a simple fix, we developed a practical deep offline RL algorithm, MSG, which resulted in large performance gains on established offline RL benchmarks. As demonstrated by our experimental results, an important outstanding direction is to study how we can design improved efficient ensemble approximations, as we have demonstrated that current approaches used in supervised learning are not nearly as effective as MSG with ensembles that use separate networks. We hope that this work engenders new efforts from the community of neural network uncertainty estimation researchers towards developing efficient uncertainty estimation techniques directed at reinforcement learning. 
Acknowledgments and Disclosure of Funding We would like to thank Yasaman Bahri for insightful discussions regarding infinite-width neural networks. We would like to thank Laura Graesser for providing a detailed review of our work. We would like to thank conference reviewers for posing important questions that helped clarify the organization of this manuscript.
1. What is the focus and contribution of the paper regarding offline reinforcement learning?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its theoretical analysis and empirical validation?
3. How does the reviewer assess the significance and limitations of the paper's content?
4. What are the concerns regarding the sufficient pessimism induced by independent training, and how does it relate to existing offline RL implementations?
5. How does the paper address the issue of data assumptions required for the proposed method to work properly as an offline RL method?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The paper revisits the design of ensemble critics in offline RL. The authors argue that the common design where critics in the ensemble sharing the same, pessimistic target function in learning can lead to actually optimistic critics. The authors analyze this phenomenon theoretically under the NTK assumption, and present toy simulation examples. The authors further use this insight to design an offline RL algorithm MSG, which gives SoTA results on common offline RL benchmarks. In these experiments, they show that the separating the targets is a key to the algorithm's superior performance. ********** Comments for Rebuttal ************* Thanks for the rebuttal and the clarification. While they address some of my concerns, my main concern stay the same. As stated in the original review, the main issue I have is "whether the proposal here is sufficient to design a full offline RL algorithm or just provide an important note on implementation choice". The rebuttal also points out "it is the objective of our work to advocate for relying on uncertainty estimation as the main source of pessimism for offline RL". Let's examine this question from two aspects based on the paper and rebuttal . From the theoretical side, the paper provides that Theorem 3.1, which compares the iterates of Q L C B of Independent Targets and the Shared Targets. However, it does not show "how" good pessimistic the estimate of Independent Targets is. In the review, I mentioned "in general optimizing a pessimistic critic or being more pessimistic does not imply good performance in offline RL." because whether a pessimistic critic is useful or not depends on how "tight" the under estimation is and where it is pessimistic. Being more pessimistic is not always good, e.g., estimating all values as V m i n is pessimistic but it's obviously useless. The current theory does not provide enough insights to how good the pessimistic estimate is, or how good the learned policy based on such value estimate will be. This is why I said "the significance of the theoretical results are rather limited" in the review. For empirical side, I think to demonstrate the authors claim, it is necessary to show that SoTA can be achieved with α = 0 . However, the current results do not support that fully. While I agree that in Figure 3, α = 0 is among the best performing results. I also think that Figure 3 does not provide a conclusive answer, as it is missing results of larger α value for β = 0 , as there is an increasing trend. This was pointed out in the review. It's also hard to compare Fig 3 and Fig 2 directly I think the failure of using α = 0 in simpler mujoco domains is actually showing that the proposed approach "alone" might not be sufficient to provide enough pessimism "broadly". I do not agree with the rebuttal's statement that these environments can be solved by behavior cloning. All the methods the authors listed there are not pure behavior cloning, which mimics all actions, as all of them perform some reasoning what actions are better based on the rewards. Therefore I don't think this is a good excuse of using the CQL term here. Yes, I agree that the proposed method works with α = 0 with Antmaze which is considered as a harder domain, but is it because of the reason that the authors mentioned? if this is the case, why does not perform well in simpler domain. Or is it because of something that is related to the structure of Antmaze environment and dataset? Currently, we don't have sufficient evidence to tell. 
Thus, I think that currently the paper provides insufficient results to show that the proposed uncertainty estimation "alone" can achieve SoTA offline RL results. Nonetheless, I also think that this paper provides more than sufficient reasons to show that it would be a good design choice for improving an existing value-based offline RL algorithm. Therefore, I keep my original recommendation.

Strengths And Weaknesses

Strengths: This paper presents an overlooked finding. It provides some theoretical reasons to back it up. It also provides empirical validation. The writing of this paper is clear and the experimental results are thorough.

Weaknesses: While the authors provide explanations of why the common usage of Shared Targets may lead to optimism, the current results are not conclusive. In particular, while existing offline RL implementations use Shared Targets, the pessimism of Shared Targets is not the main source of pessimism but rather an implementation detail. Therefore, it is unclear whether the proposal here is sufficient to design a full offline RL algorithm or just provides an important note on an implementation choice. This factor limits the significance of the paper. It is also unclear what data assumptions are needed for the proposed method to work properly as an offline RL method.

Questions

My main concern is whether the independent ensemble idea here is indeed sufficient to induce adequate pessimism for offline RL, as the authors claim. Here are a few reasons why it does not seem to be the case.

Theorem 3.1 compares the results of a pessimistic policy evaluation procedure. The authors argue that using Shared Targets may lead to optimism, compared with the Independent Targets version. However, the significance of such a comparison is quite limited for the following reasons. First, to my knowledge, I do not know of existing offline RL methods introducing pessimism purely based on what the authors describe in the Shared Targets update rule. While that update rule is used in some implementations, it is usually not the main source of pessimism that the offline RL algorithm uses but just a detail. So the theoretical comparison here may not be realistic, even if we acknowledge the NTK approximation part. Second, the authors study a version based on E[Q] - sqrt{V[Q]}. A more common implementation I know of is pointwise pessimism, i.e. min_i Q_i(s,a), which is used e.g. in the CQL and TD3+BC implementations based on double Q-networks. I doubt the comparison would carry through to this more realistic case, because using pointwise pessimism would be strictly more pessimistic than the Independent Targets.

The above theoretical analysis is limited to offline policy evaluation. It is unclear how it can be translated into policy optimization, as in general optimizing a pessimistic critic or being more pessimistic does not imply good performance in offline RL. Therefore, the significance of the theoretical results is rather limited in my mind, unless the authors discuss in detail how the proposed procedure affects the performance of the learned policy.

Another reason why the pessimism induced by Independent Training is not sufficient is that in MSG, the authors introduce the difference term H(\theta^i) as part of the objective. The authors write "Empirically, as evidenced by our results in Appendix A.2, we have observed that such a regularizer can be necessary". This seems to imply that the pessimism of the Independent Targets cannot serve as the main source of pessimism on its own.
In particular, the authors suggest that using this extra regularizer H(\theta^i) is necessary when the data distribution is narrow. Since addressing the lack of coverage is the main focus of offline RL (say, compared with off-policy RL), this again shows that the proposed pessimism is not sufficient for offline RL.

The above insufficiency is partially shown in the experimental results as well. For instance, in Figure 3, MSG does not perform the best when alpha=0. In the experiments with beta=0, the authors should also try even larger alpha to get a fuller picture of the performance and of how the hyperparameters interact. From these results, it is unclear whether the good performance of MSG is due to using the ensemble uncertainty as the primary source of uncertainty.

In summary, while this paper presents an interesting observation, showing that using Independent Targets can be more pessimistic than using Shared Targets, such results do not directly address the main research question of offline RL for the reasons above. (But the title "Why So Pessimistic?" seems to want to convey that we should be less pessimistic?) It is therefore hard for me to position this paper. This paper presents a good and thorough empirical study of how a certain design knob in implementation can affect performance. However, the results here do not provide enough insight to design offline RL methods more broadly outside the tested implementation here.

Limitations

The authors discuss limitations, such as the extra complexity needed by the proposed method, well. However, in my view, the results here are more limited than what the authors claim, for the reasons above.
NIPS
Title Why So Pessimistic? Estimating Uncertainties for Offline RL through Ensembles, and Why Their Independence Matters

Abstract

Motivated by the success of ensembles for uncertainty estimation in supervised learning, we take a renewed look at how ensembles of Q-functions can be leveraged as the primary source of pessimism for offline reinforcement learning (RL). We begin by identifying a critical flaw in a popular algorithmic choice used by many ensemble-based RL algorithms, namely the use of shared pessimistic target values when computing each ensemble member’s Bellman error. Through theoretical analyses and construction of examples in toy MDPs, we demonstrate that shared pessimistic targets can paradoxically lead to value estimates that are effectively optimistic. Given this result, we propose MSG, a practical offline RL algorithm that trains an ensemble of Q-functions with independently computed targets based on completely separate networks, and optimizes a policy with respect to the lower confidence bound of predicted action values. Our experiments on the popular D4RL and RL Unplugged offline RL benchmarks demonstrate that on challenging domains such as antmazes, MSG with deep ensembles surpasses highly well-tuned state-of-the-art methods by a wide margin. Additionally, through ablations on benchmark domains, we verify the critical significance of using independently trained Q-functions, and study the role of ensemble size. Finally, as using separate networks per ensemble member can become computationally costly with larger neural network architectures, we investigate whether efficient ensemble approximations developed for supervised learning can be similarly effective, and demonstrate that they do not match the performance and robustness of MSG with separate networks, highlighting the need for new efforts into efficient uncertainty estimation directed at RL.

1 Introduction

Offline reinforcement learning (RL), also referred to as batch RL [1], is a problem setting in which one is provided a dataset of interactions with an environment in the form of a Markov decision process (MDP), and the goal is to learn an effective policy exclusively from this fixed dataset. Offline RL holds the promise of data-efficiency through data reuse, and improved safety due to minimizing the need for policy rollouts. As a result, offline RL has been the subject of significant renewed interest in the machine learning literature [2]. One common approach to offline RL in the model-free setting is to use approximate dynamic programming (ADP) to learn a Q-value function via iterative regression to backed-up target values. The predominant algorithmic philosophy with most success in ADP-based offline RL is to encourage obtained policies to remain close to the support set of the available offline data. A large variety of methods have been developed for enforcing such constraints, examples of which include regularizing policies with behavior cloning objectives [3, 4], performing updates only on actions observed inside [5, 6, 7, 8] or close to [9] the offline dataset, and regularizing value functions to underestimate the value of actions not seen in the dataset [10, 11, 12]. The need for such regularizers arises from inevitable inaccuracies in value estimation when function approximation, bootstrapping, and off-policy learning – i.e. The Deadly Triad [13] – are involved.
In offline RL in particular, such inaccuracies cannot be resolved through additional interactions with the MDP. Thus, remaining close to the offline dataset limits opportunities for catastrophic inaccuracies to arise. However, recent works have argued that the aforementioned constraints can be overly pessimistic, and instead opt for approaches that take into consideration the uncertainty about the value function [14, 15, 16], thus re-focusing the offline RL problem to that of deriving accurate lower confidence bounds (LCB) of Q-values.

In the empirical supervised learning literature, deep network ensembles (definition in Appendix L) and their more efficient variants have been shown to be the most effective approaches for uncertainty estimation, towards learning calibrated estimates and confidence bounds with modern neural network function approximators [17]. Motivated by this, in our work we take a renewed look into Q-ensembles, and study how to leverage them as the primary source of pessimism for offline RL.

In deep RL, a very popular algorithmic choice is to use an ensemble of Q-functions to obtain pessimistic value estimates and combat overestimation bias [18]. Specifically, in the policy evaluation procedure, all Q-networks are updated towards a shared pessimistic temporal difference target. Similarly in offline RL, in addition to the main offline RL objective that they propose, several existing methods use such Q-ensembles [10, 3, 19, 20, 21, 22, 23, 8]. We begin by mathematically characterizing a critical flaw in the aforementioned ensembling procedure. Specifically, we demonstrate that using shared pessimistic targets can paradoxically lead to Q-estimates which are in fact optimistic! We verify our finding by constructing pedagogical toy MDPs. These results demonstrate that the formulation of using shared pessimistic targets is fundamentally ill-formed.

To resolve this problem, we propose Model Standard-deviation Gradients (MSG), an ensemble-based offline RL algorithm. In MSG, each Q-network is trained independently, without sharing targets. Crucially, ensembles trained with independent target values will always provide pessimistic value estimates. The pessimistic lower-confidence bound (LCB) value estimate – computed as the mean minus standard deviation of the Q-value ensemble – is then used to update the policy being trained.

Evaluating MSG on the established D4RL [24] and RL Unplugged [25] benchmarks for offline RL, we demonstrate that MSG matches, and in the more challenging domains such as antmazes, significantly exceeds the prior state-of-the-art. Additionally, through a series of ablation experiments on benchmark domains, we verify the significance of our theoretical findings, study the role of ensemble size, and highlight the settings in which ensembles provide the most benefit.

The use of ensembles will inevitably be a computational bottleneck when applying offline RL to domains requiring large neural network models. Hence, as a final analysis, we investigate whether the favorable performance of MSG can be obtained through the use of modern efficient ensemble approaches which have been successful in the supervised learning literature [26, 27, 28, 17]. We demonstrate that while efficient ensembles are competitive with the state-of-the-art on simpler offline RL benchmark domains, similar to many popular offline RL methods they fail on more challenging tasks, and cannot recover the performance and robustness of MSG using full ensembles with separate neural networks.
Our work highlights some of the unique and often overlooked challenges of ensemble-based uncertainty estimation in offline RL. Given the strong performance of MSG, we hope our work motivates increased focus into efficient and stable ensembling techniques directed at RL, and that it highlights intriguing research questions for the community of neural network uncertainty estimation researchers who thus far have not employed sequential domains such as offline RL as a testbed for validating modern uncertainty estimation techniques.

2 Related Work

Uncertainty estimation is a core component of RL, since an agent only has a limited view into the mechanics of the environment through its available experience data. Traditionally, uncertainty estimation has been key to developing proper exploration strategies such as upper confidence bound (UCB) and Thompson sampling [29], in which an agent is encouraged to seek out paths where its uncertainty is high. Offline RL presents an alternative paradigm, where the agent must act conservatively and is thus encouraged to seek out paths where its uncertainty is low [14]. In either case, proper and accurate estimation of uncertainties is paramount. To this end, much research has been produced with the aim of devising provably correct uncertainty estimates [30, 31, 32], or at least bounds on uncertainty that are good enough for acting exploratorily [33] or conservatively [34]. However, these approaches require exceedingly simple environment structure, typically either a finite discrete state and action space or linear spaces with linear dynamics and rewards.

While theoretical guarantees for uncertainty estimation are more limited in practical situations with deep neural network function approximators, a number of works have been able to achieve practical success, for example using deep network analogues for count-based uncertainty [35], Bayesian uncertainty [36, 37], and bootstrapping [38, 39]. Many of these methods employ ensembles. In fact, in continuous control RL, it is common to use an ensemble of two value functions and use their minimum for computing a target value during Bellman error minimization [18]. A number of works in offline RL have extended this to propose backing up minimums or lower confidence bound estimates over larger ensembles [3, 10, 19, 20, 22, 23, 21]. In our work, we continue to find that ensembles are extremely useful for acting conservatively, but the manner in which ensembles are used is critical. Specifically, our proposed MSG algorithm advocates for using independently learned ensembles, without sharing of target values, and this important design decision is supported by empirical evidence.

The widespread success of ensembles for uncertainty estimation in RL echoes similar findings in supervised deep learning. While there exist proposals for more technical approaches to uncertainty estimation [40, 41, 42], ensembles have repeatedly been found to perform best empirically [26, 43]. Much of the active literature on ensembles in supervised learning is concerned with computational efficiency, with various proposals for reducing the compute or memory footprint of training and inference on large ensembles [28, 44, 27]. While these approaches have been able to achieve impressive results in supervised learning, our empirical results suggest that their performance suffers significantly in challenging offline RL settings compared to deep ensembles.

3 Pessimistic Q-Ensembles: Independent or Shared Targets?
In this section we identify a critical flaw in how ensembles are commonly employed – in offline as well as online RL – for obtaining pessimistic value estimates [10, 3, 19, 20, 21, 22, 23, 8, 21], which can paradoxically lead to an optimism bonus! We begin by mathematically characterizing this problem and presenting a simple fix. Subsequently, we leverage our results to construct pedagogical toy MDPs demonstrating the practical importance of the identified problem and solution.

3.1 Mathematical Characterization

Pessimistic policy evaluation recipe (algorithm box):
1. Initialize θ_i for all i ∈ Z.
2. For t = 1, 2, . . . :
   • For each (s, a, r, s′) ∈ D and i ∈ Z, compute target values y_i(r, s′, π).
   • For each i ∈ Z, update θ_i to optimize the regression objective (1/|D|) Σ_{(s,a,r,s′)∈D} (Q_{θ_i}(s, a) − y_i(r, s′, π))²
3. Return a pessimistic Q-value function Q_pessimistic based on the trained ensemble.

We assume access to a dataset D composed of (s, a, r, s′) transition tuples from a Markov Decision Process (MDP) determined by a tuple M = 〈S, A, R, P, γ〉, corresponding to state space, action space, reward function, transition dynamics, and discount, respectively. As is standard in RL, we do not assume any knowledge of R, P other than that implicitly provided by the dataset D. In this section, for clarity of exposition, we assume that the policies we consider are deterministic, and that our MDPs do not have terminal states. We consider Q-value ensemble members given by a parameterization Q_{θ_i}, where i indexes into some set Z, which is finite in practice but may be infinite or uncountable in theory. We assume Z has an associated probability space allowing us to make expectation E or variance V computations over the ensemble members. Given a fixed policy π, a general dynamic programming based procedure for obtaining pessimistic value estimates is outlined by the iterative regression described in the box above.

A key algorithmic choice in this recipe is where pessimism should be introduced. This can be done by either (a) pessimistically aggregating Q-values after training, i.e. inside Step 3, or (b) also incorporating pessimism during Step 2, by using a shared pessimistic target value y. Through our review of the offline RL (as well as online RL) literature, we have observed that the most common approach is the latter, where the targets are pessimistic, shared, and identical across ensemble members [10, 3, 19, 20, 21, 22, 23, 8]. Specifically, they are computed as y_i(r, s′, π) = PO({r + γ Q_{θ_i}(s′, π(s′)), ∀i ∈ Z}), with PO being a desired pessimism operator aggregating the TD target values of the ensemble members (e.g. "mean minus standard deviation" or "minimum"). In this section, our goal is to compare these two alternative approaches. For our analysis, we will use "mean minus standard deviation" (a lower confidence bound (LCB)) as our pessimism operator, and use the notation Q_LCB in place of Q_pessimistic (defined in the box above). Under the LCB pessimism operator we will have:

Independent Targets (Method 1): y_i(r, s′, π) = r + γ · Q_{θ_i}(s′, π(s′))
Shared Targets (Method 2): y_i(r, s′, π) = r + γ · ( E_ens[Q_{θ_i}(s′, π(s′))] − √(V_ens[Q_{θ_i}(s′, π(s′))]) )
For both we have: Q_LCB(s, a) = E_ens[Q_{θ_i}(s, a)] − √(V_ens[Q_{θ_i}(s, a)])

To characterize the form of Q_LCB when using complex neural networks, we refer to the work on infinite-width neural networks, namely the Neural Tangent Kernel (NTK) [45]. We consider Q-value ensemble members, Q_{θ_i}, which all share the same infinite-width neural network architecture (and thus the same NTK parameterization).
As noted in the algorithm box above, and as is the case in deep ensembles [43], the only difference amongst ensemble members Q_{θ_i} is in their initial weights θ_i sampled from the neural network's initial weight distribution.

Before presenting our results, we establish some notation relevant to the infinite-width and NTK regime. Let X, R, X′ denote data matrices containing the (s, a), r, and (s′, π(s′)) appearing in the offline dataset D; i.e., the k-th transition (s, a, r, s′) in D is represented by the k-th rows of X, R, X′. Let A, B denote two data matrices where, similar to X, X′, each row contains a state-action tuple (s, a) ∈ S × A. The NTK, which governs the training dynamics of the infinitely-wide neural network, is then given by the outer product of gradients of the neural network at initialization: Θ̂^{(0)}_i(A, B) := ∇_θ Q_{θ_i}(A) · ∇_θ Q_{θ_i}(B)^T |_{t=0}, where we overload notation Q_{θ_i}(A) to represent the column vector containing Q-values. At infinite width in the NTK regime, Θ̂^{(0)}_i(A, B) converges to a deterministic kernel (i.e. it does not depend on the random weight sample θ_i), and hence is the same for all ensemble members. Thus, hereafter we will remove the index i from the notation of the NTK kernel and simply write Θ̂^{(0)}(A, B). With our notation in place, we define C := Θ̂^{(0)}(X′, X) · Θ̂^{(0)}(X, X)^{−1}. Intuitively, C is a |D| × |D| matrix where the element at row p, column q, captures a notion of similarity between the (s, a) in the qth row of X and the (s′, π(s′)) in the pth row of X′. We now have all the necessary machinery to characterize the form of Q_LCB:

Theorem 3.1. For a given (s, a) ∈ S × A, let Q^{(0)}_{θ_i}(s, a) denote Q_{θ_i}(s, a)|_{t=0} (value at initialization), with θ sampled from the initial weight distribution. After t + 1 iterations of pessimistic policy evaluation, the LCB value estimate for (s′, π(s′)) ∈ X′ is given by,

Independent Targets (Method 1):
Q^{(t+1)}_LCB(X′) = O(γ^t ‖C‖^t) + (1 + … + γ^t C^t) C R − √( E_ens[ ( (1 + … + γ^t C^t) (Q^{(0)}_{θ_i}(X′) − C Q^{(0)}_{θ_i}(X)) )² ] )   (1)

Shared Targets (Method 2):
Q^{(t+1)}_LCB(X′) = O(γ^t ‖C‖^t) + (1 + … + γ^t C^t) C R − (1 + … + γ^t C^t) √( E_ens[ ( Q^{(0)}_{θ_i}(X′) − C Q^{(0)}_{θ_i}(X) )² ] )   (2)

where the factor (1 + … + γ^t C^t) is the "backup term", and the square and square-root operations are applied element-wise. Please refer to Appendix F for the proof. (Footnote 1: Note that if γ‖C‖ ≥ 1, dynamic programming is liable to diverge in either setting. In our discussions, we avoid this degenerate case and assume γ‖C‖ < 1.)

As can be seen, the equations for the pessimistic LCB value estimates in both settings are similar, only differing in the third term. The first term is negligible and tends towards zero as the number of iterations of policy evaluation increases. The second term, shared by both variants, corresponds to the expected result of the policy evaluation procedure without any pessimism (as before, we mean expectation under θ sampled from the initial weight distribution). Accordingly, the differing third term in each variant exactly corresponds to the "pessimism" or "penalty" induced by that variant. Considering the available offline RL dataset D as a restricted MDP in itself, we see that the use of Independent Targets (Method 1) leads to a pessimism term that performs "backups" along the trajectories that the policy would experience in this restricted MDP (using the geometric term 1 + · · · + γ^t C^t) before computing a variance estimate. Meanwhile the use of Shared Targets (Method 2) does the reverse – it first computes a variance term and then performs the "backups". While this difference may seem inconsequential, it becomes critical when one realizes that in Equation 2 for Shared Targets (Method 2), the pessimism term (third term) may become positive, i.e. a negative penalty, yielding an effectively optimistic LCB estimate. Intuitively, this is because the backup matrix 1 + … + γ^t C^t can contain negative entries, and in Method 2 it is applied after the element-wise square root, so it can flip the sign of the penalty; in Method 1 the square root is applied last, so the penalty −√(·) is always non-positive. Critically, with Independent Targets (Method 1), this problem cannot occur.
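To make the difference between the two update rules concrete, here is a minimal NumPy sketch of how the per-member targets could be formed in Step 2 of the recipe above. This is our own illustration rather than any reference implementation; the array layout (one row per ensemble member) and names such as `compute_targets` are our assumptions.

```python
import numpy as np

def compute_targets(next_q, rewards, gamma, shared):
    """TD targets for every ensemble member.

    next_q:  array (num_members, num_transitions) of Q_{theta_i}(s', pi(s')).
    rewards: array (num_transitions,).
    shared=False -> Method 1 (Independent Targets): each member bootstraps itself.
    shared=True  -> Method 2 (Shared Targets): all members regress to one LCB target.
    """
    if not shared:
        return rewards[None, :] + gamma * next_q
    lcb = next_q.mean(axis=0) - next_q.std(axis=0)
    return np.broadcast_to(rewards + gamma * lcb, next_q.shape)

def q_lcb(q_values):
    """Pessimistic aggregation used by both methods in Step 3: mean minus std."""
    return q_values.mean(axis=0) - q_values.std(axis=0)

# Example with a 4-member ensemble over 3 transitions.
rng = np.random.default_rng(0)
next_q = rng.normal(size=(4, 3))
rewards = rng.normal(size=3)
print(compute_targets(next_q, rewards, gamma=0.99, shared=False))  # per-member targets
print(compute_targets(next_q, rewards, gamma=0.99, shared=True))   # identical rows
```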
3.2 Validating Theoretical Predictions

Data generation procedure (box):
1. Initialize empty X, R, X′.
2. For N episodes:
   • sample s ∼ N(0, I)
   • For T steps:
      – sample a ∼ N(0, I)
      – sample s′ ∼ N(0, I)
      – set π(s′) ← a
      – Add (s, a) to X
      – Add r ∼ N(0, I) to R
      – Add (s′, π(s′)) to X′
      – Set s ← s′
3. Return the offline dataset X, R, X′.

In this section we demonstrate that our analysis is not solely a theoretical result concerning the idiosyncrasies of infinite-width neural networks, but that it is rather straightforward to construct combinations of an MDP, offline data, and a policy that lead to the critical flaw of an optimistic LCB estimate. Let ds, da denote the dimensionality of state and action vectors respectively. We consider an MDP whose initial state distribution is a spherical multivariate normal distribution N(0, I), and whose transition function is given by P(s′|s, a) = N(0, I). Consider the procedure for generating our offline data matrices, described in the box above. This procedure returns data matrices X, R, X′ by generating N episodes of length T, using a behavior policy a ∼ N(0, I). In this generation process, we set the policy we seek to pessimistically evaluate, π, to always apply the behavior policy's action in state s to the next state s′. To construct our examples, we consider the setting where we use linear models to represent Q_{θ_i}, with the initial weight distribution being a spherical multivariate normal distribution, N(0, I). With linear models, the equations for Q_LCB take an identical form to those in Theorem 3.1. Given the described data generating process and our choice of linear function approximation, we can compute the pessimism term for the Shared Targets (Method 2) (i.e. the third term in Theorem 3.1, Equation 2). We implement this computation in a simple Python script, which we include in the supplementary material. We choose ds = 30, da = 30, γ = 0.5, N = 5, T = 5, and t = 1000 (t is the exponent in the geometric term above). We run this simulation 1000 times, each with a different random seed. After filtering simulation runs to ensure γ‖C‖ < 1 (as discussed in the earlier footnote), we observe that 221 of the simulation runs result in an optimistic LCB bonus, meaning that in those experiments, the pessimism term was in fact positive for some (s′, π(s′)) ∈ X′. We have made the Python notebook implementing this experiment available in our supplementary material; a sketch of the computation it performs is shown below. For further intriguing investigations in pedagogical toy MDPs regarding the structure of uncertainties, we strongly encourage the interested reader to refer to Appendix G.
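The released notebook is not reproduced here, but the computation it describes can be sketched as follows. This is our own reconstruction under the assumptions stated in the text (linear Q-functions with N(0, I) initial weights, the data-generation box above, and the Shared Targets penalty of Theorem 3.1, Equation 2); function and variable names are ours, and the authors' script may differ in details.

```python
import numpy as np

def generate_data(num_episodes, horizon, ds, da, rng):
    """Offline data matrices X, R, X' following the generation box above."""
    X, R, Xp = [], [], []
    for _ in range(num_episodes):
        s = rng.normal(size=ds)
        for _ in range(horizon):
            a = rng.normal(size=da)
            sp = rng.normal(size=ds)
            X.append(np.concatenate([s, a]))     # (s, a)
            R.append(rng.normal())               # r
            Xp.append(np.concatenate([sp, a]))   # (s', pi(s')) with pi(s') = a
            s = sp
    return np.array(X), np.array(R), np.array(Xp)

def shared_targets_penalty(X, Xp, gamma, t):
    """Third term of Theorem 3.1, Eq. (2), for linear Q-functions."""
    # For linear models the kernel is a feature inner product, so
    # C = Theta(X', X) Theta(X, X)^{-1} = Xp X^T (X X^T)^{-1}.
    C = Xp @ X.T @ np.linalg.inv(X @ X.T)
    if gamma * np.linalg.norm(C, 2) >= 1:        # degenerate case of Footnote 1
        return None
    backup = np.eye(len(C))                      # 1 + gamma C + ... + gamma^t C^t
    power = np.eye(len(C))
    for _ in range(t):
        power = gamma * C @ power
        backup = backup + power
    # With Q0 linear in weights w ~ N(0, I), Q0(X') - C Q0(X) = (Xp - C X) w,
    # so sqrt(E_ens[(.)^2]) is the row-wise norm of (Xp - C X).
    return -backup @ np.linalg.norm(Xp - C @ X, axis=1)

rng = np.random.default_rng(0)
optimistic = valid = 0
for _ in range(200):  # the paper runs 1000 seeds; fewer here keeps the sketch fast
    X, R, Xp = generate_data(num_episodes=5, horizon=5, ds=30, da=30, rng=rng)
    penalty = shared_targets_penalty(X, Xp, gamma=0.5, t=1000)
    if penalty is not None:
        valid += 1
        optimistic += bool(penalty.max() > 0)
print(f"{optimistic} of {valid} valid runs have a positive (optimistic) pessimism term")
```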
4 Model Standard-deviation Gradients (MSG)

It is important to note that even if the pessimism term does not become positive for a particular combination of MDPs, offline datasets, and policies, the fact that it can occur highlights that the formulation of Shared Targets is fundamentally ill-formed. To resolve this problem we propose Model Standard-deviation Gradients (MSG), an offline RL algorithm which leverages ensembles to approximate the LCB using the approach of Independent Targets.

4.1 Policy Evaluation and Optimization in MSG

MSG follows an actor-critic setup. At the beginning of training, we create an ensemble of N Q-functions by taking N samples from the initial weight distribution. During training, in each iteration, we first perform policy evaluation by estimating Q_LCB for the current policy, and subsequently optimize the policy through gradient ascent on Q_LCB.

Policy Evaluation. As motivated by our analysis in Section 3, we train the ensemble Q-functions independently using the standard least-squares Q-evaluation loss,

L(θ_i) = E_{(s,a,r,s′)∼D}[ (Q_{θ_i}(s, a) − y_i(r, s′, π))² ];   y_i = r + γ · E_{a′∼π(s′)}[ Q_{θ̄_i}(s′, a′) ]   (3)

where θ_i, θ̄_i denote the parameters and target network parameters for the ith Q-function. In each iteration, as is common practice, we do not update the Q-functions until convergence, and instead update the networks using a single gradient step. In practice, the expectation in L(θ_i) is estimated by a minibatch, and the expectation in y_i is estimated with a single action sample from the policy. After every update to the Q-function parameters, their corresponding target parameters are updated to be an exponential moving average of the parameters in the standard fashion.

Policy Optimization. As in standard deep actor-critic algorithms, policy evaluation steps (learning Q) are interleaved with policy optimization steps (learning π). In MSG, we optimize the policy through gradient ascent on Q_LCB. Specifically, our proposed policy optimization objective in MSG is,

L(π) = E_{s∼D, a∼π(s)}[ Q_LCB(s, a) ] = E_{s∼D, a∼π(s)}[ E_ens[Q_{θ_i}(s, a)] + β √(V_ens[Q_{θ_i}(s, a)]) ]   (4)

where β ≤ 0 is a hyperparameter that determines the amount of pessimism.

4.2 The Trade-Off Between Trust and Pessimism

While our hope is to leverage the implicit generalization capabilities of neural networks to estimate proper LCBs beyond states and actions in the finite dataset D, neural network architectures can be fundamentally biased, or we can simply be in a setting with insufficient data coverage, such that the generalization capability of those networks is limited. To this end, we augment the policy evaluation objective of MSG (L(θ_i), Equation 3) with a support constraint regularizer inspired by CQL [11]:

H(θ_i) = E_{s∼D, a∼π(s)}[ Q_{θ_i}(s, a) ] − E_{(s,a)∼D}[ Q_{θ_i}(s, a) ]

(Footnote 2: Instead of a CQL-style value regularizer, other forms of support constraints such as a behavioral cloning regularizer on the policy could potentially be used.) This regularizer encourages the Q-functions to increase the values for actions seen in the dataset D, while decreasing the values of the actions of the current policy. Practically, we estimate the latter expectation of H using the state-action pairs in the mini-batch, and we approximate the former expectation using a single sample from the policy. We control the contribution of H(θ_i) by weighting this term with weight parameter α. The full critic loss is thus given by,

L(θ_1, . . . , θ_N) = Σ_{i=1}^{N} ( L(θ_i) + α H(θ_i) )   (5)

A minimal sketch of these objectives is given below.
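To make Equations (3)–(5) concrete, below is a minimal PyTorch-style sketch of the MSG critic and policy objectives. This is our own illustrative code, not the authors' released implementation; `q_ensemble`, `target_ensemble`, and `policy` are assumed callables, and details such as target-network updates, minibatch sampling, and stochastic-policy reparameterization are omitted.

```python
import torch

def msg_critic_loss(q_ensemble, target_ensemble, policy, batch, gamma, alpha):
    """Full critic loss of Eq. (5): independent TD loss (Eq. 3) per member,
    plus the CQL-style support regularizer H, summed over the ensemble."""
    s, a, r, s_next = batch["s"], batch["a"], batch["r"], batch["s_next"]
    total = 0.0
    for q, q_target in zip(q_ensemble, target_ensemble):
        with torch.no_grad():
            a_next = policy(s_next)                    # single policy sample
            y = r + gamma * q_target(s_next, a_next)   # independent target for member i
        td_loss = ((q(s, a) - y) ** 2).mean()
        # H(theta_i): minimizing it pushes Q down on policy actions
        # and up on dataset actions.
        h_reg = q(s, policy(s)).mean() - q(s, a).mean()
        total = total + td_loss + alpha * h_reg
    return total

def msg_policy_loss(q_ensemble, policy, s, beta):
    """Negative LCB objective of Eq. (4); beta <= 0 sets the amount of pessimism."""
    a = policy(s)
    qs = torch.stack([q(s, a) for q in q_ensemble])    # shape (N, batch)
    q_lcb = qs.mean(dim=0) + beta * qs.std(dim=0)
    return -q_lcb.mean()    # gradient ascent on Q_LCB == descent on this loss
```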
Empirically, as evidenced by our results in Appendix A.2, we have observed that such a regularizer can be necessary in two situations: 1) The first scenario is where the offline dataset only contains a narrow data distribution (e.g., imitation learning datasets only containing expert data). We believe this is because the power of ensembles comes from predicting a value distribution for unseen (s, a) based on the available training data. Thus, if no data for sub-optimal actions is present, ensembles cannot make accurate predictions and increased pessimism via H becomes necessary. 2) The second scenario is where environment dynamics can be chaotic (e.g. Gym [46] hopper and walker2d). In such domains it would be beneficial to remain close to the observed data in the offline dataset. Pseudo-code for our proposed MSG algorithm can be viewed in Algorithm Box 1.

5 Experiments

In this section we seek to empirically answer the following questions: 1) How well does MSG perform compared to the current state-of-the-art in offline RL? 2) Are the theoretical differences in ensembling approaches (Section 3) practically relevant? 3) When and how does ensemble size affect performance? 4) Can we match the performance of MSG through efficient ensemble approximations developed in the supervised learning literature?

5.1 Offline RL Benchmarks

D4RL Gym Domains. We begin by evaluating MSG on the Gym domains (halfcheetah, hopper, walker2d) of the D4RL offline RL benchmark [24], using the medium, medium-replay, medium-expert, and expert data settings. Our results presented in Appendix A.2 (summarized in Figure 4) demonstrate that MSG is competitive with well-tuned state-of-the-art methods CQL [11] and F-BRC [12].

D4RL Antmaze Domains. Due to the narrow range of behaviors in Gym environments, offline datasets for these domains tend to be very similar to imitation learning datasets. As a result, many prior offline RL approaches that perform well on D4RL Gym fail on harder tasks that require stitching trajectories through dynamic programming (c.f. [48]). Examples of such tasks are the D4RL antmaze settings, in particular those in the antmaze-medium and antmaze-large environments. The data for antmaze tasks consists of many episodes of an Ant agent [46] running along arbitrary paths in a maze. The agent is tasked with using this data to learn a point-to-point navigation policy from one corner of the maze to the opposite corner, where rewards are given by a sparse signal that is 1 when near the desired end location in the maze – at which point the episode is terminated – and 0 otherwise. The undirected, extremely sparse reward nature of antmaze tasks makes them very challenging, especially for the large maze sizes. Table 1 and Appendix B.2 present our results. To the best of our knowledge, the antmaze domains are considered unsolved, with few prior works reporting non-zero results on the large mazes [11, 48]. As can be seen, MSG obtains results that far exceed the prior state-of-the-art results reported by [48]. While some works that use specialized hierarchical approaches have reported strong results as well [49], it is notable that MSG is able to solve these challenging tasks with standard architectures and training procedures, and this shows the power that ensembling can provide – as long as the ensembling is performed properly!

RL Unplugged. In addition to the D4RL benchmark, we evaluate MSG on the RL Unplugged benchmark [25]. Our results are presented in Figure 1. We compare to results for Behavioral Cloning (BC) and two state-of-the-art methods in these domains, Critic-Regularized Regression (CRR) [7] and MuZero Unplugged [47]. Due to computational constraints when using deep ensembles, we use the same network architectures as we used for D4RL experiments.
The networks we use are approximately 1/60-th the size of those used by the BC, CRR, and MuZero Unplugged baselines in terms of number of parameters. We observe that MSG is on par with or exceeds the current state-of-the-art on all tasks with the exception of humanoid.run, which appears to require the larger architectures used by the baseline methods. Experimental details can be found in Appendix C.

Benchmark Conclusion. Prior work has demonstrated that many offline RL approaches that perform well on Gym domains fail to succeed on much more challenging domains [48]. Our results demonstrate that through uncertainty estimation with deep ensembles, MSG is able to very significantly outperform prior work on very challenging benchmark domains such as the D4RL antmazes.

5.2 Ensemble Ablations

Independence in Ensembles Ablation. In Section 3, through theoretical arguments and toy experiments we demonstrated the importance of training using "Independent" ensembles. Here, we seek to validate the significance of our theoretical findings using offline RL benchmarks, by comparing Independent targets (as in MSG) to Shared-LCB and Shared-Min targets. Our results are presented in Appendices A.3 and B.3, with a summary in Figures 3 and 4. In the Gym domains (Appendix A.3), with ensemble size N = 4, Shared-LCB significantly underperforms MSG. In fact, not using ensembles at all (N = 1) outperforms Shared-LCB. With ensemble size N = 4, Shared-Min is on par with MSG. When the ensemble size is increased to N = 64 (Figure 7), we observe that the performance of Shared-Min drops significantly on 7/12 D4RL Gym settings. In contrast, the performance of MSG is stable and does not change. In the challenging antmaze domains (Appendix B.3), for both ensemble sizes N = 4 and N = 64, Shared-LCB and Shared-Min targets completely fail to solve the tasks, while for both ensemble sizes MSG exceeds the prior state-of-the-art (Table 1), IQL [48].

Independence in Ensembles Conclusion. Our experiments corroborate the theoretical results in Section 3, demonstrating that Independent targets are critical to the success of MSG. These results are particularly striking when one considers that the implementations for MSG, Shared-LCB, and Shared-Min differ by only 2 lines of code; a sketch of what this difference looks like is given at the end of this subsection.

Ensemble Size Ablation. An important ablation is to understand the role of ensemble size in MSG. In the Gym domains, Figure 5 demonstrates that increasing the ensemble size from 4 to 64 does not result in a noticeable change in performance. In the antmaze domains, we evaluate MSG under ensemble sizes {1, 4, 16, 64}. Figure 2 presents our results. Our key takeaways are as follows:
• For the harder antmaze-large tasks, there is a clear upward trend as ensemble size increases.
• Using a small ensemble size (e.g. N = 4) is already quite good, but is more sensitive to hyperparameter choice, especially on the harder tasks.
• Very small ensemble sizes benefit more from using α > 0 (as a reminder, α is the weight of the CQL-style regularizer loss discussed in Section 4.2). However, across the board, using α = 0 is preferable to using too large a value for α – with the exception of N = 1, which cannot take advantage of the benefits of ensembling.
• When using lower values of β, lower values of α should be used.

Ensemble Size Conclusion. In domains such as D4RL Gym where offline datasets are qualitatively similar to imitation learning datasets, larger ensembles do not result in noticeable gains. In domains such as D4RL antmaze which contain more data diversity, larger ensembles significantly improve the performance of agents.
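To illustrate the point about the two-line difference, the sketch below (our own code, not the authors') shows how the bootstrap target for each ensemble member could be formed under the three ablated variants; everything else in the training step is identical. `target_ensemble` and `policy` are placeholders.

```python
import torch

def td_targets(target_ensemble, policy, r, s_next, gamma, mode):
    """Per-member TD targets for the three ablated variants.

    mode: "independent" (MSG), "shared_lcb", or "shared_min".
    Returns a tensor of shape (num_members, batch), one row of targets per member.
    """
    with torch.no_grad():
        a_next = policy(s_next)
        next_qs = torch.stack([q(s_next, a_next) for q in target_ensemble])
        if mode == "independent":      # MSG: each member bootstraps from itself
            return r + gamma * next_qs
        if mode == "shared_lcb":       # one shared mean-minus-std target for all members
            shared = next_qs.mean(dim=0) - next_qs.std(dim=0)
        else:                          # "shared_min": one shared minimum target
            shared = next_qs.min(dim=0).values
        return (r + gamma * shared).unsqueeze(0).expand_as(next_qs)
```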
5.3 Efficient Ensembles

Thus far we have demonstrated the significant performance gains attainable through MSG. An important concern, however, is that of parameter and computational efficiency: deep ensembles of Q-networks result in an N-fold increase in memory and compute usage, both in the policy evaluation and policy optimization phases of actor-critic training. While this might not be a significant problem in offline RL benchmark domains due to small model footprints (all our experiments were run on a single Nvidia P100 GPU), it becomes a major bottleneck with larger architectures such as those used in language and vision domains. To this end, we evaluate whether recent advances in "Efficient Ensemble" approaches from the supervised learning literature transfer well to the problem of offline RL. Specifically, the efficient ensemble approaches we consider are: Multi-Head Ensembles [26, 50, 51], MIMO Ensembles [27], and Batch Ensembles [28]. For a description of these efficient ensembling approaches please refer to Appendix E; a small sketch of the multi-head variant is shown at the end of this subsection. A runtime comparison of different ensembling approaches can be viewed in Table 2.

D4RL Gym Domains. Appendix A.4 presents our results in the D4RL Gym domains with ensemble size N = 4 (summary in Figure 4). Amongst the considered efficient ensemble approaches, Batch Ensembles [28] result in the best performance, which follows findings from the supervised learning literature [17].

D4RL Antmaze Domains. Appendix B.4 presents our results in the D4RL antmaze domains for both ensemble sizes of N = 4 and N = 64 (summary in Figure 3). As can be seen, compared to MSG with deep ensembles (separate networks), the efficient ensemble approaches we consider are very unreliable and fail for most hyperparameter choices.

Efficient Ensembles Conclusion. We believe the observations in this section very clearly motivate future work in developing efficient uncertainty estimation approaches that are better suited to the domain of reinforcement learning. To facilitate this direction of research, in our codebase we have included a complete boilerplate example of an offline RL agent, amenable to drop-in implementation of novel uncertainty-estimation techniques.
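As one example of what these efficient approximations look like, the sketch below shows a multi-head critic: a single shared torso with N value heads in place of N separate networks. This is our own minimal illustration of the idea (the authors' variants are described in Appendix E); layer sizes and names are placeholders.

```python
import torch
import torch.nn as nn

class MultiHeadQ(nn.Module):
    """Multi-head ensemble critic: one shared torso, N independent value heads."""

    def __init__(self, state_dim, action_dim, num_heads=4, hidden=256):
        super().__init__()
        self.torso = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.heads = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(num_heads)])

    def forward(self, s, a):
        feats = self.torso(torch.cat([s, a], dim=-1))
        return torch.cat([head(feats) for head in self.heads], dim=-1)  # (batch, N)

    def lcb(self, s, a, beta=-1.0):
        qs = self.forward(s, a)
        return qs.mean(dim=-1) + beta * qs.std(dim=-1)

# Usage: critic = MultiHeadQ(state_dim=17, action_dim=6); value = critic.lcb(s, a)
```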
6 Discussion & Future Work

Our work has highlighted the significant power of ensembling as a mechanism for uncertainty estimation for offline RL. In this work we took a renewed look into Q-ensembles, and studied how to leverage them as the primary source of pessimism for offline RL. Through theoretical analyses and toy constructions, we demonstrated a critical flaw in the popular approach of using shared targets for obtaining pessimistic Q-values, and demonstrated that it can in fact lead to optimistic estimates. Using a simple fix, we developed a practical deep offline RL algorithm, MSG, which resulted in large performance gains on established offline RL benchmarks. As demonstrated by our experimental results, an important outstanding direction is to study how we can design improved efficient ensemble approximations, as we have demonstrated that current approaches used in supervised learning are not nearly as effective as MSG with ensembles that use separate networks. We hope that this work engenders new efforts from the community of neural network uncertainty estimation researchers towards developing efficient uncertainty estimation techniques directed at reinforcement learning.

Acknowledgments and Disclosure of Funding

We would like to thank Yasaman Bahri for insightful discussions regarding infinite-width neural networks. We would like to thank Laura Graesser for providing a detailed review of our work. We would like to thank conference reviewers for posing important questions that helped clarify the organization of this manuscript.
1. What is the focus and contribution of the paper regarding ensemble-based pessimism in offline RL?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical and empirical analyses?
3. What are the weaknesses of the paper, especially regarding its assumptions and limitations in broader value iteration?
4. Do you have any questions regarding the paper's content or its presentation?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper

This paper studies ensemble-based pessimism in offline RL from both theoretical and empirical aspects: it gives a mathematical analysis through the NTK to show that shared pessimistic targets can paradoxically lead to Q-estimates which are in fact optimistic, and it proposes MSG, which trains each Q-network independently, with experiments on D4RL and RL Unplugged tasks.

Strengths And Weaknesses

Strengths: Formally analyzes offline RL methods based on Q-ensemble pessimism for infinite-width neural networks, and shows the pessimism term may become positive with shared targets. Empirically verifies the effectiveness of the algorithm by combining the Q-ensemble with CQL.

Weaknesses: The NTK assumptions in Section 3.1 and the Gaussian assumptions in Section 3.2 seem limited in broader value iteration. The Q-ensemble needs to be combined with CQL to obtain reasonable performance. The use of CQL makes it difficult to analyze the source of performance improvement. However, there exist several methods (e.g., EDAC in NeurIPS 2021 and PBRL in ICLR 2022) that perform Q-ensembles for offline RL with uncertainty alone. The experimental results are not complete for the D4RL benchmark.

Questions

In line 191, why may the pessimism term of Method 2 become positive, while it cannot be positive in Method 1? In line 781, how do we get Eq. 20, 21, 22 from Eq. 19? In line 791, how do we get Eq. 31, 32 from Eq. 30?

Limitations

N/A
NIPS
Title Why So Pessimistic? Estimating Uncertainties for Offline RL through Ensembles, and Why Their Independence Matters Abstract Motivated by the success of ensembles for uncertainty estimation in supervised learning, we take a renewed look at how ensembles ofQ-functions can be leveraged as the primary source of pessimism for offline reinforcement learning (RL). We begin by identifying a critical flaw in a popular algorithmic choice used by many ensemble-based RL algorithms, namely the use of shared pessimistic target values when computing each ensemble member’s Bellman error. Through theoretical analyses and construction of examples in toy MDPs, we demonstrate that shared pessimistic targets can paradoxically lead to value estimates that are effectively optimistic. Given this result, we propose MSG, a practical offline RL algorithm that trains an ensemble of Q-functions with independently computed targets based on completely separate networks, and optimizes a policy with respect to the lower confidence bound of predicted action values. Our experiments on the popular D4RL and RL Unplugged offline RL benchmarks demonstrate that on challenging domains such as antmazes, MSG with deep ensembles surpasses highly well-tuned state-of-the-art methods by a wide margin. Additionally, through ablations on benchmarks domains, we verify the critical significance of using independently trained Q-functions, and study the role of ensemble size. Finally, as using separate networks per ensemble member can become computationally costly with larger neural network architectures, we investigate whether efficient ensemble approximations developed for supervised learning can be similarly effective, and demonstrate that they do not match the performance and robustness of MSG with separate networks, highlighting the need for new efforts into efficient uncertainty estimation directed at RL. 1 Introduction Offline reinforcement learning (RL), also referred to as batch RL [1], is a problem setting in which one is provided a dataset of interactions with an environment in the form of a Markov decision process (MDP), and the goal is to learn an effective policy exclusively from this fixed dataset. Offline RL holds the promise of data-efficiency through data reuse, and improved safety due to minimizing the need for policy rollouts. As a result, offline RL has been the subject of significant renewed interest in the machine learning literature [2]. One common approach to offline RL in the model-free setting is to use approximate dynamic programming (ADP) to learn a Q-value function via iterative regression to backed-up target values. The predominant algorithmic philosophy with most success in ADP-based offline RL is to encourage 36th Conference on Neural Information Processing Systems (NeurIPS 2022). obtained policies to remain close to the support set of the available offline data. A large variety of methods have been developed for enforcing such constraints, examples of which include regularizing policies with behavior cloning objectives [3, 4], performing updates only on actions observed inside [5, 6, 7, 8] or close to [9] the offline dataset, and regularizing value functions to underestimate the value of actions not seen in the dataset [10, 11, 12]. The need for such regularizers arises from inevitable inaccuracies in value estimation when function approximation, bootstrapping, and off-policy learning – i.e. The Deadly Triad [13] – are involved. 
In offline RL in particular, such inaccuracies cannot be resolved through additional interactions with the MDP. Thus, remaining close to the offline dataset limits opportunities for catastrophic inaccuracies to arise. However, recent works have argued that the aforementioned constraints can be overly pessimistic, and instead opt for approaches that take into consideration the uncertainty about the value function [14, 15, 16], thus re-focusing the offline RL problem to that of deriving accurate lower confidence bounds (LCB) of Q-values. In the empirical supervised learning literature, deep network ensembles (definition in Appendix L) and their more efficient variants have been shown to be the most effective approaches for uncertainty estimation, towards learning calibrated estimates and confidence bounds with modern neural network function approximators [17]. Motivated by this, in our work we take a renewed look intoQ-ensembles, and study how to leverage them as the primary source of pessimism for offline RL. In deep RL, a very popular algorithmic choice is to use an ensemble of Q-functions to obtain pessimistic value estimates and combat overestimation bias [18]. Specifically, in the policy evaluation procedure, all Q-networks are updated towards a shared pessimistic temporal difference target. Similarly in offline RL, in addition to the main offline RL objective that they propose, several existing methods use such Q-ensembles [10, 3, 19, 20, 21, 22, 23, 8]. We begin by mathematically characterizing a critical flaw in the aforementioned ensembling procedure. Specifically, we demonstrate that using shared pessimistic targets can paradoxically lead to Q-estimates which are in fact optimistic! We verify our finding by constructing pedagogical toy MDPs. These results demonstrate that the formulation of using shared pessimistic targets is fundamentally ill-formed. To resolve this problem, we propose Model Standard-deviation Gradients (MSG), an ensemble-based offline RL algorithm. In MSG, each Q-network is trained independently, without sharing targets. Crucially, ensembles trained with independent target values will always provide pessimistic value estimates. The pessimistic lower-confidence bound (LCB) value estimate – computed as the mean minus standard deviation of the Q-value ensemble – is then used to update the policy being trained. Evaluating MSG on the established D4RL [24] and RL Unplugged [25] benchmarks for offline RL, we demonstrate that MSG matches, and in the more challenging domains such as antmazes, significantly exceeds the prior state-of-the-art. Additionally, through a series of ablation experiments on benchmark domains, we verify the significance of our theoretical findings, study the role of ensemble size, and highlight the settings in which ensembles provide the most benefit. The use of ensembles will inevitably be a computational bottleneck when applying offline RL to domains requiring large neural network models. Hence, as a final analysis, we investigate whether the favorable performance of MSG can be obtained through the use of modern efficient ensemble approaches which have been successful in the supervised learning literature [26, 27, 28, 17]. We demonstrate that while efficient ensembles are competitive with the state-of-the-art on simpler offline RL benchmark domains, similar to many popular offline RL methods they fail on more challenging tasks, and cannot recover the performance and robustness of MSG using full ensembles with separate neural networks. 
Our work highlights some of the unique and often overlooked challenges of ensemble-based uncertainty estimation in offline RL. Given the strong performance of MSG, we hope our work motivates increased focus into efficient and stable ensembling techniques directed at RL, and that it highlights intriguing research questions for the community of neural network uncertainty estimation researchers whom thus far have not employed sequential domains such as offline RL as a testbed for validating modern uncertainty estimation techniques. 2 Related Work Uncertainty estimation is a core component of RL, since an agent only has a limited view into the mechanics of the environment through its available experience data. Traditionally, uncertainty estimation has been key to developing proper exploration strategies such as upper confidence bound (UCB) and Thompson sampling [29], in which an agent is encouraged to seek out paths where its uncertainty is high. Offline RL presents an alternative paradigm, where the agent must act conservatively and is thus encouraged to seek out paths where its uncertainty is low [14]. In either case, proper and accurate estimation of uncertainties is paramount. To this end, much research has been produced with the aim of devising provably correct uncertainty estimates [30, 31, 32], or at least bounds on uncertainty that are good enough for acting exploratorily [33] or conservatively [34]. However, these approaches require exceedingly simple environment structure, typically either a finite discrete state and action space or linear spaces with linear dynamics and rewards. While theoretical guarantees for uncertainty estimation are more limited in practical situations with deep neural network function approximators, a number of works have been able to achieve practical success, for example using deep network analogues for count-based uncertainty [35], Bayesian uncertainty [36, 37], and bootstrapping [38, 39]. Many of these methods employ ensembles. In fact, in continuous control RL, it is common to use an ensemble of two value functions and use their minimum for computing a target value during Bellman error minimization [18]. A number of works in offline RL have extended this to propose backing up minimums or lower confidence bound estimates over larger ensembles [3, 10, 19, 20, 22, 23, 21]. In our work, we continue to find that ensembles are extremely useful for acting conservatively, but the manner in which ensembles are used is critical. Specifically our proposed MSG algorithm advocates for using independently learned ensembles, without sharing of target values, and this important design decision is supported by empirical evidence. The widespread success of ensembles for uncertainty estimation in RL echoes similar findings in supervised deep learning. While there exist proposals for more technical approaches to uncertainty estimation [40, 41, 42], ensembles have repeatedly been found to perform best empirically [26, 43]. Much of the active literature on ensembles in supervised learning is concerned with computational efficiency, with various proposals for reducing the compute or memory footprint of training and inference on large ensembles [28, 44, 27]. While these approaches have been able to achieve impressive results in supervised learning, our empirical results suggest that their performance suffers significantly in challenging offline RL settings compared to deep ensembles. 3 Pessimistic Q-Ensembles: Independent or Shared Targets? 
In this section we identify a critical flaw in how ensembles are commonly employed – in offline as well as online RL – for obtaining pessimistic value estimates [10, 3, 19, 20, 21, 22, 23, 8, 21], which can paradoxically lead to an optimism bonus! We begin by mathematically characterizing this problem and presenting a simple fix. Subsequently, we leverage our results to construct pedagogical toy MDPs demonstrating the practical importance of the identified problem and solution. 3.1 Mathematical Characterization 1. Initialize θi for all i ∈ Z. 2. For t = 1, 2, . . . : • For each (s, a, r, s′) ∈ D and i ∈ Z com- pute target values yi(r, s′, π). • For each i ∈ Z, update θi to optimize the regression objective 1 |D| ∑ (s,a,r,s′)∈D (Qθi(s, a)−yi(r, s′, π))2 3. Return a pessimistic Q-value function Qpessimistic based on the trained ensemble. We assume access to a dataset D composed of (s, a, r, s′) transition tuples from a Markov Decision Process (MDP) determined by a tuple M = 〈S,A,R,P, γ〉, corresponding to state space, action space, reward function, transitions dynamics, and discount, respectively. As is standard in RL, we do not assume any knowledge of R,P , other than that implicitly provided by the dataset D. In this section, for clarity of exposition, we assume that the policies we consider are deterministic, and that our MDPs do not have terminal states. We consider Q-value ensemble members given by a parameterization Qθi , where i indexes into some set Z, which is finite in practice but may be infinite or uncountable in theory. We assume Z has an associated probability space allowing us to make expectation E or variance V computations over the ensemble members. Given a fixed policy π, a general dynamic programming based procedure for obtaining pessimistic value estimates is outlined by the iterative regression described in the box above. A key algorithmic choice in this recipe is where pessimism should be introduced. This can be done by either (a) pessimistically aggregating Q-values after training, i.e. inside Step 3, or (b) also incorporating pessimism during Step 2, by using a shared pessimistic target value y. Through our review of the offline RL (as well as online RL) literature, we have observed that the most common approach is the latter, where the targets are pessimistic, shared, and identical across ensemble members [10, 3, 19, 20, 21, 22, 23, 8]. Specifically, they are computed as, yi(r, s′, π) = PO({r + γQθi(s′, π(s′)),∀i ∈ Z}) with PO being a desired pessimism operator aggregating the TD target values of the ensemble members (e.g. “mean minus standard deviation", or “minimum"). In this section, our goal is to compare these two alternative approaches. For our analysis, we will use “mean minus standard deviation" (a lower confidence bound (LCB)) as our pessimism operator, and use the notation QLCB in place of Qpessimistic (defined in the box above). Under the LCB pessimism operator we will have: Independent Targets (Method 1): yi(r, s′, π) = r + γ ·Qθi(s′, π(s′)) Shared Targets (Method 2): yi(r, s′, π) = r + γ · ( Eens [Qθi(s ′, π(s′))]− √ Vens [Qθi(s′, π(s′))] ) For both we have: QLCB(s, a) = Eens [Qθi(s, a)]− √ Vens [Qθi(s, a)] To characterize the form of QLCB when using complex neural networks, we refer to the work on infinite-width neural networks, namely the Neural Tangent Kernel (NTK) [45]. We consider Q-value ensemble members, Qθi , which all share the same infinite-width neural network architecture (and thus the same NTK parameterization). 
As noted in the algorithm box above, and as is the case in deep ensembles [43], the only difference amongst ensemble members Qθi is in their initial weights θi sampled from the neural network’s initial weight distribution. Before presenting our results, we establish some notation relevant to the infinite-width and NTK regime. Let X , R,X ′ denote data matrices containing (s, a), r, and (s′, π(s′)) appearing in the offline dataset D; i.e., the k-th transition (s, a, r, s′) in D is represented by the k-th rows in X , R,X ′. Let A,B denote two data matrices, where similar to X ,X ′, each row contains a state-action tuple (s, a) ∈ S ×A. The NTK, which governs the training dynamics of the infinitely-wide neural network, is then given by the outer product of gradients of the neural network at initialization: Θ̂(0)i (A,B) := ∇θQθi(A) · ∇θQθi(B)T |t=0, where we overload notation Qθi(A) to represent the column vector containing Q-values. At infinite-width in the NTK regime, Θ̂(0)i (A,B) converges to a deterministic kernel (i.e. does not depend on the random weight sample θi), and hence is the same for all ensemble members. Thus, hereafter we will remove the index i from the notation of the NTK kernel and simply write, Θ̂(0)(A,B). With our notation in place, we define, C := Θ̂(0)(X ′,X ) · Θ̂(0)(X ,X )−1. Intuitively, C is a |D| × |D| matrix where the element at column q, row p, captures a notion of similarity between (s, a) in the qth row of X , and (s′, π(s′)) in the pth row of X ′. We now have all the necessary machinery to characterize the form of QLCB: Theorem 3.1. For a given (s, a) ∈ S×A, letQ(0)θi (s, a) denoteQθi(s, a)|t=0 (value at initialization), with θ sampled from the initial weight distribution. After t + 1 iterations of pessimistic policy evaluation, the LCB value estimate for (s′, π(s′)) ∈ X ′ is given by, Independent Targets (Method 1): (1) Q (t+1) LCB (X ′) = O(γt‖C‖t) + (1 + . . .+ γ tCt)︸ ︷︷ ︸ backup term CR− √√√√Eens[( (1 + . . .+ γtCt)︸ ︷︷ ︸ backup term (Q (0) θi (X ′)− CQ (0) θi (X )) )2] Shared Targets (Method 2): (2) Q (t+1) LCB (X ′) = O(γt‖C‖t) + (1 + . . .+ γ tCt)︸ ︷︷ ︸ backup term CR− (1 + . . .+ γtCt)︸ ︷︷ ︸ backup term √ Eens [( Q (0) θi (X ′)− CQ (0) θi (X ) )2] where the square and square-root operations are applied element-wise.1 Please refer to Appendix F for the proof. As can be seen, the equations for the pessimistic LCB value estimates in both settings are similar, only differing in the third term. The first term is negligible and tends towards zero as the number of iterations of policy evaluation increases. The second term shared by both variants corresponds to the expected result of the policy evaluation procedure without any pessimism (as before, we mean expectation under θ sampled from the initial weight distribution). Accordingly, the differing third term in each variant exactly corresponds to the “pessimism” or “penalty” induced by that variant. Considering the available offline RL dataset D as a restricted MDP in itself, we see that the use of Independent Targets (Method 1) leads to a pessimism term that performs “backups" along the trajectories that the policy would experience in this restricted MDP (using the geometric term 1 + · · ·+ γtCt) before computing a variance estimate. Meanwhile the use of Shared Targets (Method 2) does the reverse – it first computes a variance term and then performs the “backups". 
While this difference may seem inconsequential, it becomes critical when one realizes that in Equation 2 for Shared Targets (Method 2), the pessimism term (the third term) may become positive, i.e. a negative penalty, yielding an effectively optimistic LCB estimate. Critically, with Independent Targets (Method 1), this problem cannot occur.

3.2 Validating Theoretical Predictions

Procedure for generating the toy offline dataset:
1. Initialize empty X, R, X′
2. For N episodes:
   • sample s ∼ N(0, I)
   • For T steps:
     – sample a ∼ N(0, I)
     – sample s′ ∼ N(0, I)
     – set π(s′) ← a
     – Add (s, a) to X
     – Add r ∼ N(0, I) to R
     – Add (s′, π(s′)) to X′
     – Set s ← s′
3. Return the offline dataset X, R, X′

In this section we demonstrate that our analysis is not solely a theoretical result concerning the idiosyncrasies of infinite-width neural networks; rather, it is straightforward to construct combinations of an MDP, offline data, and a policy that lead to the critical flaw of an optimistic LCB estimate. Let ds, da denote the dimensionality of state and action vectors respectively. We consider an MDP whose initial state distribution is a spherical multivariate normal distribution N(0, I), and whose transition function is given by P(s′|s, a) = N(0, I). Consider the procedure for generating our offline data matrices, described in the box above. This procedure returns data matrices X, R, X′ by generating N episodes of length T, using a behavior policy a ∼ N(0, I). In this generation process, we set the policy we seek to pessimistically evaluate, π, so that in the next state s′ it applies the same action the behavior policy took in state s (i.e. π(s′) ← a).

To construct our examples, we consider the setting where we use linear models to represent Qθi, with the initial weight distribution being a spherical multivariate normal distribution, N(0, I). With linear models, the equations for QLCB take an identical form to those in Theorem 3.1. Given the described data generating process and our choice of linear function approximation, we can compute the pessimism term for Shared Targets (Method 2) (i.e. the third term in Theorem 3.1, Equation 2). We implement this computation in a simple Python script, which we include in the supplementary material. We choose ds = 30, da = 30, γ = 0.5, N = 5, T = 5, and t = 1000 (t is the exponent in the geometric term above). We run this simulation 1000 times, each with a different random seed. After filtering simulation runs to ensure γ‖C‖ < 1 (as discussed in footnote 1), we observe that 221 of the simulation runs result in an optimistic LCB bonus, meaning that in those experiments the pessimism term was in fact positive for some (s′, π(s′)) ∈ X′. We have made the Python notebook implementing this experiment available in our supplementary material. For further intriguing investigations in pedagogical toy MDPs regarding the structure of uncertainties, we strongly encourage the interested reader to refer to Appendix G.

¹ Note that if γ‖C‖ ≥ 1, dynamic programming is liable to diverge in either setting. In our discussions, we avoid this degenerate case and assume γ‖C‖ < 1.

It is important to note that even if the pessimism term does not become positive for a particular combination of MDPs, offline datasets, and policies, the fact that it can occur highlights that the formulation of Shared Targets is fundamentally ill-formed.
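A self-contained sketch of this construction is given below. It follows the data-generation box and the linear-model setup described above, and evaluates the ensemble expectation exactly using the closed form E_w[((X′ − CX)w)_k²] = ‖(X′ − CX)_k‖² for weights w ∼ N(0, I). It is our own reconstruction rather than the authors' released notebook, so details such as the number of seeds are assumptions, and the count of optimistic runs it prints will not exactly reproduce the 221/1000 figure above.

```python
import numpy as np

def generate_dataset(rng, ds=30, da=30, n_episodes=5, horizon=5):
    """Toy offline data: random states/actions/rewards, with pi(s') set to the
    behavior action a taken in s (as in the box above)."""
    X, R, Xp = [], [], []
    for _ in range(n_episodes):
        s = rng.normal(size=ds)
        for _ in range(horizon):
            a = rng.normal(size=da)
            sp = rng.normal(size=ds)
            X.append(np.concatenate([s, a]))
            R.append(rng.normal())
            Xp.append(np.concatenate([sp, a]))       # pi(s') := a
            s = sp
    return np.array(X), np.array(R), np.array(Xp)

def shared_target_penalty(X, Xp, gamma=0.5, t=1000):
    """Third term of Theorem 3.1, Eq. 2, for linear Q-functions with N(0, I) initial
    weights. For linear models the NTK is Theta(A, B) = A B^T, so C = X' X^T (X X^T)^-1."""
    C = Xp @ X.T @ np.linalg.inv(X @ X.T)
    S, term = np.eye(len(X)), np.eye(len(X))
    for _ in range(t):                                # geometric backup term I + gC + ... + g^t C^t
        term = gamma * term @ C
        S += term
    # E_w[((X' - C X) w)_k^2] = ||(X' - C X)_k||^2 for w ~ N(0, I)
    per_row_std = np.linalg.norm(Xp - C @ X, axis=1)
    return -S @ per_row_std, C

optimistic_runs = 0
for seed in range(200):
    X, _, Xp = generate_dataset(np.random.default_rng(seed))
    penalty, C = shared_target_penalty(X, Xp)
    if 0.5 * np.linalg.norm(C, 2) >= 1:               # gamma * ||C|| >= 1: skip divergent cases
        continue
    if (penalty > 0).any():                           # positive "penalty" = optimism bonus
        optimistic_runs += 1
print("runs with an optimistic shared-target LCB:", optimistic_runs)
```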
4 Model Standard-deviation Gradients (MSG)

To resolve this problem we propose Model Standard-deviation Gradients (MSG), an offline RL algorithm which leverages ensembles to approximate the LCB using the approach of Independent Targets.

4.1 Policy Evaluation and Optimization in MSG

MSG follows an actor-critic setup. At the beginning of training, we create an ensemble of N Q-functions by taking N samples from the initial weight distribution. During training, in each iteration, we first perform policy evaluation by estimating QLCB for the current policy, and subsequently optimize the policy through gradient ascent on QLCB.

Policy Evaluation As motivated by our analysis in Section 3, we train the ensemble Q-functions independently using the standard least-squares Q-evaluation loss,
$L(\theta_i) = \mathbb{E}_{(s,a,r,s') \sim D}\Big[\big(Q_{\theta_i}(s, a) - y_i(r, s', \pi)\big)^2\Big]; \qquad y_i = r + \gamma \cdot \mathbb{E}_{a' \sim \pi(s')}\big[Q_{\bar{\theta}_i}(s', a')\big] \qquad (3)$
where θi, θ̄i denote the parameters and target network parameters for the i-th Q-function. In each iteration, as is common practice, we do not update the Q-functions until convergence, and instead update the networks using a single gradient step. In practice, the expectation in L(θi) is estimated by a minibatch, and the expectation in yi is estimated with a single action sample from the policy. After every update to the Q-function parameters, their corresponding target parameters are updated to be an exponential moving average of the parameters, in the standard fashion.

Policy Optimization As in standard deep actor-critic algorithms, policy evaluation steps (learning Q) are interleaved with policy optimization steps (learning π). In MSG, we optimize the policy through gradient ascent on QLCB. Specifically, our proposed policy optimization objective in MSG is,
$L(\pi) = \mathbb{E}_{s \sim D, a \sim \pi(s)}\big[Q_{\mathrm{LCB}}(s, a)\big] = \mathbb{E}_{s \sim D, a \sim \pi(s)}\Big[\mathbb{E}_{\mathrm{ens}}[Q_{\theta_i}(s, a)] + \beta \sqrt{\mathbb{V}_{\mathrm{ens}}[Q_{\theta_i}(s, a)]}\Big] \qquad (4)$
where β ≤ 0 is a hyperparameter that determines the amount of pessimism.

4.2 The Trade-Off Between Trust and Pessimism

While our hope is to leverage the implicit generalization capabilities of neural networks to estimate proper LCBs beyond states and actions in the finite dataset D, neural network architectures can be fundamentally biased, or we can simply be in a setting with insufficient data coverage, such that the generalization capability of those networks is limited. To this end, we augment the policy evaluation objective of MSG (L(θi), Equation 3) with a support constraint regularizer inspired by CQL [11]²:
$H(\theta_i) = \mathbb{E}_{s \sim D, a \sim \pi(s)}\big[Q_{\theta_i}(s, a)\big] - \mathbb{E}_{(s,a) \sim D}\big[Q_{\theta_i}(s, a)\big].$
This regularizer encourages the Q-functions to increase the values for actions seen in the dataset D, while decreasing the values of the actions of the current policy. Practically, we estimate the latter expectation using the state-action pairs in the mini-batch, and we approximate the former expectation using the mini-batch states with a single action sample from the policy. We control the contribution of H(θi) by weighting this term with weight parameter α. The full critic loss is thus given by,
$L(\theta_1, \dots, \theta_N) = \sum_{i=1}^{N} \big(L(\theta_i) + \alpha H(\theta_i)\big) \qquad (5)$

² Instead of a CQL-style value regularizer, other forms of support constraints such as a behavioral cloning regularizer on the policy could potentially be used.
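The following is a minimal sketch of one MSG update step, assembling Equations 3-5 and the LCB policy objective. It assumes an off-the-shelf PyTorch setup; the network sizes, dimensions, learning rates, and hyperparameter values are placeholders of our choosing and do not reflect the paper's actual architectures or tuned settings (see Algorithm Box 1 and the appendices for those).

```python
import copy
import torch
import torch.nn as nn

obs_dim, act_dim, n_members = 17, 6, 4
gamma, beta, alpha, tau = 0.99, -4.0, 0.0, 0.005   # beta <= 0 controls pessimism, alpha the H term

def mlp(inp, out):
    return nn.Sequential(nn.Linear(inp, 256), nn.ReLU(), nn.Linear(256, out))

critics = [mlp(obs_dim + act_dim, 1) for _ in range(n_members)]   # independently initialized
targets = [copy.deepcopy(c) for c in critics]
actor = nn.Sequential(mlp(obs_dim, act_dim), nn.Tanh())
critic_opt = torch.optim.Adam([p for c in critics for p in c.parameters()], lr=3e-4)
actor_opt = torch.optim.Adam(actor.parameters(), lr=3e-4)

def q_values(nets, s, a):
    # returns an (n_members, batch) tensor of Q-values
    return torch.stack([net(torch.cat([s, a], dim=-1)).squeeze(-1) for net in nets])

def update(s, a, r, s2):
    # ---- policy evaluation: each member regresses to ITS OWN target (Eq. 3) ----
    with torch.no_grad():
        y = r.unsqueeze(0) + gamma * q_values(targets, s2, actor(s2))   # independent targets
    q = q_values(critics, s, a)
    td_loss = ((q - y) ** 2).mean()
    # optional CQL-style support regularizer H (Eqs. 4.2 / 5); alpha = 0 disables it
    reg = (q_values(critics, s, actor(s).detach()) - q).mean()
    critic_opt.zero_grad(); (td_loss + alpha * reg).backward(); critic_opt.step()

    # ---- policy optimization: ascend the ensemble LCB (Eq. 4) ----
    q_pi = q_values(critics, s, actor(s))
    actor_loss = -(q_pi.mean(0) + beta * q_pi.std(0)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Polyak-average the target networks
    for c, tgt in zip(critics, targets):
        for p, tp in zip(c.parameters(), tgt.parameters()):
            tp.data.mul_(1 - tau).add_(tau * p.data)

# one update on a random batch of 8 transitions
B = 8
update(torch.randn(B, obs_dim), torch.randn(B, act_dim), torch.randn(B), torch.randn(B, obs_dim))
```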
Empirically, as evidenced by our results in Appendix A.2, we have observed that such a regularizer can be necessary in two situations. 1) The first scenario is where the offline dataset only contains a narrow data distribution (e.g., imitation learning datasets only containing expert data). We believe this is because the power of ensembles comes from predicting a value distribution for unseen (s, a) based on the available training data. Thus, if no data for sub-optimal actions is present, ensembles cannot make accurate predictions and increased pessimism via H becomes necessary. 2) The second scenario is where environment dynamics can be chaotic (e.g. Gym [46] hopper and walker2d). In such domains it would be beneficial to remain close to the observed data in the offline dataset.

Pseudo-code for our proposed MSG algorithm can be viewed in Algorithm Box 1.

5 Experiments

In this section we seek to empirically answer the following questions: 1) How well does MSG perform compared to the current state-of-the-art in offline RL? 2) Are the theoretical differences in ensembling approaches (Section 3) practically relevant? 3) When and how does ensemble size affect performance? 4) Can we match the performance of MSG through efficient ensemble approximations developed in the supervised learning literature?

5.1 Offline RL Benchmarks

D4RL Gym Domains We begin by evaluating MSG on the Gym domains (halfcheetah, hopper, walker2d) of the D4RL offline RL benchmark [24], using the medium, medium-replay, medium-expert, and expert data settings. Our results presented in Appendix A.2 (summarized in Figure 4) demonstrate that MSG is competitive with well-tuned state-of-the-art methods CQL [11] and F-BRC [12].

D4RL Antmaze Domains Due to the narrow range of behaviors in Gym environments, offline datasets for these domains tend to be very similar to imitation learning datasets. As a result, many prior offline RL approaches that perform well on D4RL Gym fail on harder tasks that require stitching trajectories through dynamic programming (c.f. [48]). Examples of such tasks are the D4RL antmaze settings, in particular those in the antmaze-medium and antmaze-large environments. The data for antmaze tasks consists of many episodes of an Ant agent [46] running along arbitrary paths in a maze. The agent is tasked with using this data to learn a point-to-point navigation policy from one corner of the maze to the opposite corner, where rewards are given by a sparse signal that is 1 when near the desired end location in the maze – at which point the episode is terminated – and 0 otherwise. The undirected, extremely sparse reward nature of antmaze tasks makes them very challenging, especially for the large maze sizes. Table 1 and Appendix B.2 present our results. To the best of our knowledge, the antmaze domains are considered unsolved, with few prior works reporting non-zero results on the large mazes [11, 48]. As can be seen, MSG obtains results that far exceed the prior state-of-the-art results reported by [48]. While some works that use specialized hierarchical approaches have reported strong results as well [49], it is notable that MSG is able to solve these challenging tasks with standard architectures and training procedures, and this shows the power that ensembling can provide – as long as the ensembling is performed properly!

RL Unplugged In addition to the D4RL benchmark, we evaluate MSG on the RL Unplugged benchmark [25]. Our results are presented in Figure 1. We compare to results for Behavioral Cloning (BC) and two state-of-the-art methods in these domains, Critic-Regularized Regression (CRR) [7] and MuZero Unplugged [47]. Due to computational constraints when using deep ensembles, we use the same network architectures as we used for the D4RL experiments.
The networks we use are approximately 1/60th the size of those used by the BC, CRR, and MuZero Unplugged baselines in terms of number of parameters. We observe that MSG is on par with or exceeds the current state-of-the-art on all tasks with the exception of humanoid.run, which appears to require the larger architectures used by the baseline methods. Experimental details can be found in Appendix C.

Benchmark Conclusion Prior work has demonstrated that many offline RL approaches that perform well on Gym domains fail to succeed on much more challenging domains [48]. Our results demonstrate that, through uncertainty estimation with deep ensembles, MSG is able to very significantly outperform prior work on very challenging benchmark domains such as the D4RL antmazes.

5.2 Ensemble Ablations

Independence in Ensembles Ablation In Section 3, through theoretical arguments and toy experiments, we demonstrated the importance of training using “Independent” ensembles. Here, we seek to validate the significance of our theoretical findings using offline RL benchmarks, by comparing Independent targets (as in MSG) to Shared-LCB and Shared-Min targets. Our results are presented in Appendices A.3 and B.3, with a summary in Figures 3 and 4. In the Gym domains (Appendix A.3), with ensemble size N = 4, Shared-LCB significantly underperforms MSG. In fact, not using ensembles at all (N = 1) outperforms Shared-LCB. With ensemble size N = 4, Shared-Min is on par with MSG. When the ensemble size is increased to N = 64 (Figure 7), we observe the performance of Shared-Min drops significantly on 7/12 D4RL Gym settings. In contrast, the performance of MSG is stable and does not change. In the challenging antmaze domains (Appendix B.3), for both ensemble sizes N = 4 and N = 64, Shared-LCB and Shared-Min targets completely fail to solve the tasks, while for both ensemble sizes MSG exceeds the prior state-of-the-art (Table 1), IQL [48].

Independence in Ensembles Conclusion Our experiments corroborate the theoretical results in Section 3, demonstrating that Independent targets are critical to the success of MSG. These results are particularly striking when one considers that the implementations for MSG, Shared-LCB, and Shared-Min differ by only 2 lines of code.

Ensemble Size Ablation An important ablation is to understand the role of ensemble size in MSG. In the Gym domains, Figure 5 demonstrates that increasing the number of ensemble members from 4 to 64 does not result in a noticeable change in performance. In the antmaze domains, we evaluate MSG under ensemble sizes {1, 4, 16, 64}. Figure 2 presents our results. Our key takeaways are as follows:
• For the harder antmaze-large tasks, there is a clear upward trend as ensemble size increases.
• Using a small ensemble size (e.g. N = 4) is already quite good, but is more sensitive to hyperparameter choice, especially on the harder tasks.
• Very small ensemble sizes benefit more from using α > 0 (see footnote 3). However, across the board, using α = 0 is preferable to using too large a value for α – with the exception of N = 1, which cannot take advantage of the benefits of ensembling.
• When using lower values of β, lower values of α should be used.

³ As a reminder, α is the weight of the CQL-style regularizer loss discussed in Section 4.2.

Ensemble Size Conclusion In domains such as D4RL Gym, where offline datasets are qualitatively similar to imitation learning datasets, larger ensembles do not result in noticeable gains. In domains such as D4RL antmaze, which contain more data diversity, larger ensembles significantly improve the performance of agents.
5.3 Efficient Ensembles

Thus far we have demonstrated the significant performance gains attainable through MSG. An important concern, however, is that of parameter and computational efficiency: deep ensembles of Q-networks result in an N-fold increase in memory and compute usage, both in the policy evaluation and policy optimization phases of actor-critic training. While this might not be a significant problem in offline RL benchmark domains due to small model footprints (footnote 4), it becomes a major bottleneck with larger architectures such as those used in language and vision domains. To this end, we evaluate whether recent advances in “Efficient Ensemble” approaches from the supervised learning literature transfer well to the problem of offline RL. Specifically, the efficient ensemble approaches we consider are: Multi-Head Ensembles [26, 50, 51], MIMO Ensembles [27], and Batch Ensembles [28]. For a description of these efficient ensembling approaches please refer to Appendix E. A runtime comparison of different ensembling approaches can be viewed in Table 2.

⁴ All our experiments were run on a single Nvidia P100 GPU.

D4RL Gym Domains Appendix A.4 presents our results in the D4RL Gym domains with ensemble size N = 4 (summary in Figure 4). Amongst the considered efficient ensemble approaches, Batch Ensembles [28] result in the best performance, which follows findings from the supervised learning literature [17].

D4RL Antmaze Domains Appendix B.4 presents our results in the D4RL antmaze domains for both ensemble sizes of N = 4 and N = 64 (summary in Figure 3). As can be seen, compared to MSG with deep ensembles (separate networks), the efficient ensemble approaches we consider are very unreliable, and fail for most hyperparameter choices.

Efficient Ensembles Conclusion We believe the observations in this section very clearly motivate future work in developing efficient uncertainty estimation approaches that are better suited to the domain of reinforcement learning. To facilitate this direction of research, in our codebase we have included a complete boilerplate example of an offline RL agent, amenable to drop-in implementation of novel uncertainty-estimation techniques.
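For concreteness, below is a rough sketch of the idea behind one of these approaches, BatchEnsemble: a single shared weight matrix modulated by per-member rank-1 factors, so memory grows far more slowly than with separate networks. The layer is a generic NumPy illustration with sign-initialized factors; it is not the architecture, initialization, or framework used in the paper's experiments (see Appendix E for those details).

```python
import numpy as np

class BatchEnsembleDense:
    """BatchEnsemble-style dense layer: one shared weight matrix W plus per-member
    rank-1 factors (r_i, s_i) and biases, so member i effectively uses W * (r_i s_i^T)."""

    def __init__(self, in_dim, out_dim, n_members, rng):
        self.W = rng.normal(scale=1.0 / np.sqrt(in_dim), size=(in_dim, out_dim))  # shared weights
        self.r = rng.choice([-1.0, 1.0], size=(n_members, in_dim))    # per-member input factors
        self.s = rng.choice([-1.0, 1.0], size=(n_members, out_dim))   # per-member output factors
        self.b = np.zeros((n_members, out_dim))                       # per-member biases

    def __call__(self, x):
        # x: (n_members, batch, in_dim) -> (n_members, batch, out_dim)
        return ((x * self.r[:, None, :]) @ self.W) * self.s[:, None, :] + self.b[:, None, :]

rng = np.random.default_rng(0)
layer = BatchEnsembleDense(in_dim=23, out_dim=256, n_members=4, rng=rng)
x = np.repeat(rng.normal(size=(1, 8, 23)), 4, axis=0)   # same batch routed to all 4 members
print(layer(x).shape)                                    # (4, 8, 256): one set of activations per member
```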
6 Discussion & Future Work

Our work has highlighted the significant power of ensembling as a mechanism for uncertainty estimation for offline RL. In this work we took a renewed look into Q-ensembles, and studied how to leverage them as the primary source of pessimism for offline RL. Through theoretical analyses and toy constructions, we demonstrated a critical flaw in the popular approach of using shared targets for obtaining pessimistic Q-values, and demonstrated that it can in fact lead to optimistic estimates. Using a simple fix, we developed a practical deep offline RL algorithm, MSG, which resulted in large performance gains on established offline RL benchmarks. As demonstrated by our experimental results, an important outstanding direction is to study how we can design improved efficient ensemble approximations, as we have demonstrated that current approaches used in supervised learning are not nearly as effective as MSG with ensembles that use separate networks. We hope that this work engenders new efforts from the community of neural network uncertainty estimation researchers towards developing efficient uncertainty estimation techniques directed at reinforcement learning.

Acknowledgments and Disclosure of Funding

We would like to thank Yasaman Bahri for insightful discussions regarding infinite-width neural networks. We would like to thank Laura Graesser for providing a detailed review of our work. We would like to thank conference reviewers for posing important questions that helped clarify the organization of this manuscript.
1. What is the main contribution of the paper regarding uncertainty estimation in reinforcement learning? 2. What are the strengths of the proposed method, particularly in its theoretical analysis and empirical studies? 3. Do you have any questions or concerns regarding the paper's findings and claims? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any limitations or potential drawbacks of the proposed approach that the reviewer identifies?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This paper discusses uncertainty estimation in RL, which is an alternative way to induce pessimism in offline RL. Though uncertainty estimation through ensembles has been proposed in the offline RL literature, this paper points out a critical flaw in how the LCB is incorporated into actor-critic based algorithms. It shows theoretically that previous algorithms, which regress the different Q functions to shared pessimistic target values and do policy evaluation based on the LCB, can sometimes lead to over-estimation of the Q function. To address this, the paper proposes a simple fix: in the Bellman backup stage, instead of regressing the different Q values to a shared LCB estimate, simply regress them to independent Q targets. Empirically, it shows better performance in challenging tasks that require stitching. Beyond this, the paper also examines how different efficient ensemble methods work in the RL setting, and there still seems to be a large performance gap compared with deep ensembles, which opens up more interesting questions about efficient ensemble methods in the RL setting.

Strengths And Weaknesses
Strengths:
[1]. The paper is well-written and very easy to follow.
[2]. It discusses a major flaw in how to incorporate LCB estimates in offline RL algorithms, which seems to have been overlooked in the literature but empirically seems to make a big difference. The theoretical claim is well-supported.
[3]. It has a comprehensive set of empirical studies, which covers various aspects of the applicability of the method, such as ensemble size and hyper-parameter sensitivity. I do appreciate the authors' effort in discussing how to transfer efficient ensemble methods from the supervised learning setting to the RL setup, to make the approach more computationally efficient, even though there are some negative results there.
Weaknesses:
[1]. Maybe I am misunderstanding something, but I do feel some of the claims and findings are not explained in a super clear way; see the Questions section for details.
[2]. The experiment section does show that incorporating the independent target leads to better performance in the challenging tasks; it would be great to see that this results from a better LCB estimate. We see the overestimation issues in the toy task, and it would be really helpful to see, in the challenging tasks, that the shared target does lead to over-estimation, which is the reason that the method helps.
[3]. Section 4.2 seems a little bit out of the picture. Since alpha=0 seems to work great in most cases while the authors state that the regularizer might help in some narrow data regimes, is there any empirical study supporting this?

Questions
[1]. The comparison of independent and shared targets is interesting. Could the authors summarize the literature in a clearer way with respect to how prior works do ensembling, and mark the subtle differences? This is missing in Section 2, but these subtle differences seem to make a big difference, as discussed in this paper.
[2]. From L154-155, it seems the shared targets are doing this "doubling pessimism" in both the policy evaluation and learning steps, which actually seems a little weird to me; could the authors point out the exact literature that does this? I can imagine either (1) using the independent target, or (2) doing the evaluation pessimistically but, in learning, taking the mean of the Q instead of the LCB. Just curious, did the authors try (2)?
[3].
Following up on [2], the shared target does seem to apply more pessimism, yet theoretically it can lead to over-estimation. Any comments on connecting this intuition to the theoretical results? It seems related to the correlation of the transition (s, a) pairs and the properties of the C matrix. Could the authors comment more on when the shared target leads to over-pessimism or optimism, and on practical guidelines?
[4]. The toy example in Section 3.2 is a little bit confusing to me; what is the rationale for using the policy that plays the same action a in the next state?
[5]. The discussion in L327-337 shows that the shared target has worse performance as ensemble size decreases; any explanation for this?

Limitations
Yes.
NIPS
Title Why So Pessimistic? Estimating Uncertainties for Offline RL through Ensembles, and Why Their Independence Matters Abstract Motivated by the success of ensembles for uncertainty estimation in supervised learning, we take a renewed look at how ensembles of Q-functions can be leveraged as the primary source of pessimism for offline reinforcement learning (RL). We begin by identifying a critical flaw in a popular algorithmic choice used by many ensemble-based RL algorithms, namely the use of shared pessimistic target values when computing each ensemble member's Bellman error. Through theoretical analyses and construction of examples in toy MDPs, we demonstrate that shared pessimistic targets can paradoxically lead to value estimates that are effectively optimistic. Given this result, we propose MSG, a practical offline RL algorithm that trains an ensemble of Q-functions with independently computed targets based on completely separate networks, and optimizes a policy with respect to the lower confidence bound of predicted action values. Our experiments on the popular D4RL and RL Unplugged offline RL benchmarks demonstrate that on challenging domains such as antmazes, MSG with deep ensembles surpasses highly well-tuned state-of-the-art methods by a wide margin. Additionally, through ablations on benchmark domains, we verify the critical significance of using independently trained Q-functions, and study the role of ensemble size. Finally, as using separate networks per ensemble member can become computationally costly with larger neural network architectures, we investigate whether efficient ensemble approximations developed for supervised learning can be similarly effective, and demonstrate that they do not match the performance and robustness of MSG with separate networks, highlighting the need for new efforts into efficient uncertainty estimation directed at RL.

1 Introduction

Offline reinforcement learning (RL), also referred to as batch RL [1], is a problem setting in which one is provided a dataset of interactions with an environment in the form of a Markov decision process (MDP), and the goal is to learn an effective policy exclusively from this fixed dataset. Offline RL holds the promise of data-efficiency through data reuse, and improved safety due to minimizing the need for policy rollouts. As a result, offline RL has been the subject of significant renewed interest in the machine learning literature [2]. One common approach to offline RL in the model-free setting is to use approximate dynamic programming (ADP) to learn a Q-value function via iterative regression to backed-up target values. The predominant algorithmic philosophy with most success in ADP-based offline RL is to encourage obtained policies to remain close to the support set of the available offline data. A large variety of methods have been developed for enforcing such constraints, examples of which include regularizing policies with behavior cloning objectives [3, 4], performing updates only on actions observed inside [5, 6, 7, 8] or close to [9] the offline dataset, and regularizing value functions to underestimate the value of actions not seen in the dataset [10, 11, 12]. The need for such regularizers arises from inevitable inaccuracies in value estimation when function approximation, bootstrapping, and off-policy learning – i.e. The Deadly Triad [13] – are involved.
In offline RL in particular, such inaccuracies cannot be resolved through additional interactions with the MDP. Thus, remaining close to the offline dataset limits opportunities for catastrophic inaccuracies to arise. However, recent works have argued that the aforementioned constraints can be overly pessimistic, and instead opt for approaches that take into consideration the uncertainty about the value function [14, 15, 16], thus re-focusing the offline RL problem to that of deriving accurate lower confidence bounds (LCB) of Q-values. In the empirical supervised learning literature, deep network ensembles (definition in Appendix L) and their more efficient variants have been shown to be the most effective approaches for uncertainty estimation, towards learning calibrated estimates and confidence bounds with modern neural network function approximators [17]. Motivated by this, in our work we take a renewed look intoQ-ensembles, and study how to leverage them as the primary source of pessimism for offline RL. In deep RL, a very popular algorithmic choice is to use an ensemble of Q-functions to obtain pessimistic value estimates and combat overestimation bias [18]. Specifically, in the policy evaluation procedure, all Q-networks are updated towards a shared pessimistic temporal difference target. Similarly in offline RL, in addition to the main offline RL objective that they propose, several existing methods use such Q-ensembles [10, 3, 19, 20, 21, 22, 23, 8]. We begin by mathematically characterizing a critical flaw in the aforementioned ensembling procedure. Specifically, we demonstrate that using shared pessimistic targets can paradoxically lead to Q-estimates which are in fact optimistic! We verify our finding by constructing pedagogical toy MDPs. These results demonstrate that the formulation of using shared pessimistic targets is fundamentally ill-formed. To resolve this problem, we propose Model Standard-deviation Gradients (MSG), an ensemble-based offline RL algorithm. In MSG, each Q-network is trained independently, without sharing targets. Crucially, ensembles trained with independent target values will always provide pessimistic value estimates. The pessimistic lower-confidence bound (LCB) value estimate – computed as the mean minus standard deviation of the Q-value ensemble – is then used to update the policy being trained. Evaluating MSG on the established D4RL [24] and RL Unplugged [25] benchmarks for offline RL, we demonstrate that MSG matches, and in the more challenging domains such as antmazes, significantly exceeds the prior state-of-the-art. Additionally, through a series of ablation experiments on benchmark domains, we verify the significance of our theoretical findings, study the role of ensemble size, and highlight the settings in which ensembles provide the most benefit. The use of ensembles will inevitably be a computational bottleneck when applying offline RL to domains requiring large neural network models. Hence, as a final analysis, we investigate whether the favorable performance of MSG can be obtained through the use of modern efficient ensemble approaches which have been successful in the supervised learning literature [26, 27, 28, 17]. We demonstrate that while efficient ensembles are competitive with the state-of-the-art on simpler offline RL benchmark domains, similar to many popular offline RL methods they fail on more challenging tasks, and cannot recover the performance and robustness of MSG using full ensembles with separate neural networks. 
Our work highlights some of the unique and often overlooked challenges of ensemble-based uncertainty estimation in offline RL. Given the strong performance of MSG, we hope our work motivates increased focus into efficient and stable ensembling techniques directed at RL, and that it highlights intriguing research questions for the community of neural network uncertainty estimation researchers whom thus far have not employed sequential domains such as offline RL as a testbed for validating modern uncertainty estimation techniques. 2 Related Work Uncertainty estimation is a core component of RL, since an agent only has a limited view into the mechanics of the environment through its available experience data. Traditionally, uncertainty estimation has been key to developing proper exploration strategies such as upper confidence bound (UCB) and Thompson sampling [29], in which an agent is encouraged to seek out paths where its uncertainty is high. Offline RL presents an alternative paradigm, where the agent must act conservatively and is thus encouraged to seek out paths where its uncertainty is low [14]. In either case, proper and accurate estimation of uncertainties is paramount. To this end, much research has been produced with the aim of devising provably correct uncertainty estimates [30, 31, 32], or at least bounds on uncertainty that are good enough for acting exploratorily [33] or conservatively [34]. However, these approaches require exceedingly simple environment structure, typically either a finite discrete state and action space or linear spaces with linear dynamics and rewards. While theoretical guarantees for uncertainty estimation are more limited in practical situations with deep neural network function approximators, a number of works have been able to achieve practical success, for example using deep network analogues for count-based uncertainty [35], Bayesian uncertainty [36, 37], and bootstrapping [38, 39]. Many of these methods employ ensembles. In fact, in continuous control RL, it is common to use an ensemble of two value functions and use their minimum for computing a target value during Bellman error minimization [18]. A number of works in offline RL have extended this to propose backing up minimums or lower confidence bound estimates over larger ensembles [3, 10, 19, 20, 22, 23, 21]. In our work, we continue to find that ensembles are extremely useful for acting conservatively, but the manner in which ensembles are used is critical. Specifically our proposed MSG algorithm advocates for using independently learned ensembles, without sharing of target values, and this important design decision is supported by empirical evidence. The widespread success of ensembles for uncertainty estimation in RL echoes similar findings in supervised deep learning. While there exist proposals for more technical approaches to uncertainty estimation [40, 41, 42], ensembles have repeatedly been found to perform best empirically [26, 43]. Much of the active literature on ensembles in supervised learning is concerned with computational efficiency, with various proposals for reducing the compute or memory footprint of training and inference on large ensembles [28, 44, 27]. While these approaches have been able to achieve impressive results in supervised learning, our empirical results suggest that their performance suffers significantly in challenging offline RL settings compared to deep ensembles. 3 Pessimistic Q-Ensembles: Independent or Shared Targets? 
1. What is the main contribution of the paper regarding offline RL using ensembles? 2. What are the strengths and weaknesses of the proposed method compared to existing methods? 3. Do you have any questions or concerns about the theoretical results and their gap with practical situations? 4. How do the experimental results support the effectiveness of the proposed method, and what are the limitations of the presented empirical evaluation? 5. Are there any minor issues or typos in the paper that need attention?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The paper observes a problem in existing pessimism estimation in offline RL using ensembles: using shared targets for all ensemble updates. The paper instead proposes to update each ensemble member individually and apply the pessimism at policy updates. The paper derives the update form of both methods in the NTK setting and shows that the update method with shared targets could even result in optimism, which is also shown with some synthetic simulation data. Finally, the paper evaluates the proposed method on several offline RL benchmarks and shows its empirical competitiveness.

Strengths And Weaknesses Strengths Overall the paper is well written, easy to follow, and the technical part seems correct. The paper makes a good observation about how existing offline RL methods update the Q-values of their ensembles: in hindsight, one really does not need to incorporate the pessimism into the function update procedure, but can instead just apply pessimism during the policy update. This also seems to agree with theoretical RL algorithms: one can perform the regular Bellman updates (or perform elimination in version-space algorithms) and define the policy with an LCB, or take the minimum over the remaining set of functions for pessimism (for example, [1]). Although it may not be obvious under what kind of conditions, in the NTK setting, using the shared target could result in optimism, the following subsection provides good evidence that this can indeed happen. It would be better to provide a more intuitive scenario or even a closed-form construction. The paper provides extensive and convincing experiments, including a) good ablation experiments that cover the different kinds of shared-target updating methods (such as Shared-LCB ensembles, Shared-Min deep ensembles, and different numbers of ensemble members); b) the paper tries many different hyperparameters for the baselines, so the baselines seem to be fine-tuned for the final presentation of the results; c) the experiments are performed on extensive benchmarks.

Weaknesses The theoretical results provide very good intuition into the problems of previous pessimism estimation in offline deep RL methods, but since the result is based on the NTK setting, there is still a gap to practical situations. The results presented in Table 1 use different hyperparameters for different tasks, which likely undermines the empirical merits of the proposed algorithm.

References [1] Xie, Tengyang, et al. "Bellman-consistent pessimism for offline reinforcement learning." Advances in Neural Information Processing Systems 34 (2021): 6683-6694.

Questions Major Questions This question is less about the proposed algorithm itself: for the shared-target update methods, the target itself already contains a pessimism term, i.e., it is already an LCB estimate. Why, for the policy update, is another LCB term added into the Q_LCB estimate? Why not just directly use E[Q_{θ_i}]? It seems like it is doing pessimism twice. Although it looks like removing the LCB term from Q_LCB would not affect the theoretical result by much (you just subtract 1 from the backup term?), which of course leads to optimism more easily. In addition to the existing baselines, the following baselines could also be of interest: a) taking the min over the ensemble with individual targets; b) every ensemble member is bootstrapped from its own target, but a common LCB term is also subtracted.

Some minor problems: The formula in the algorithm box in section 3.1 overflows.
Why is the γ in section 3.2 chosen to be very small (instead of 0.9)? The y-axes are inconsistent between Fig.2 and Fig.3. Also, in Fig.3, what does each dot mean? It seems a little hard to interpret the plot. Why is line 763 true for θ_lin(0) = 0?

Limitations The overall algorithm has good intuition and motivation, but the introduction of the additional term in section 4.2 looks irrelevant to the rest of the paper. From the experiments, this term seems crucial to good performance of the algorithm and thus unavoidable in the current version. Although it makes sense that some kind of regularization may be needed for unseen actions, this additional term does undermine the overall message a little. The ablation using a more condensed surrogate for the ensemble is a good experiment, and as the paper already suggests, it would be better if a more efficient way of obtaining pessimism could be derived, which seems beyond the scope of this paper.
NIPS
Title Meta-Auto-Decoder for Solving Parametric Partial Differential Equations

Abstract Many important problems in science and engineering require solving the so-called parametric partial differential equations (PDEs), i.e., PDEs with different physical parameters, boundary conditions, shapes of computation domains, etc. Recently, building learning-based numerical solvers for parametric PDEs has become an emerging new field. One category of methods such as the Deep Galerkin Method (DGM) and Physics-Informed Neural Networks (PINNs) aim to approximate the solution of the PDEs. They are typically unsupervised and mesh-free, but require going through the time-consuming network training process from scratch for each set of parameters of the PDE. Another category of methods such as Fourier Neural Operator (FNO) and Deep Operator Network (DeepONet) try to approximate the solution mapping directly. Being fast with only one forward inference for each PDE parameter without retraining, they often require a large corpus of paired input-output observations drawn from numerical simulations, and most of them need a predefined mesh as well. In this paper, we propose Meta-Auto-Decoder (MAD), a mesh-free and unsupervised deep learning method that enables the pre-trained model to be quickly adapted to equation instances by implicitly encoding (possibly heterogeneous) PDE parameters as latent vectors. The proposed method MAD can be interpreted by manifold learning in infinite-dimensional spaces, granting it a geometric insight. Extensive numerical experiments show that the MAD method exhibits faster convergence speed without losing accuracy compared with other deep learning-based methods. The project page with code is available: https://gitee.com/mindspore/mindscience/tree/master/MindElec/.

∗The first two authors contributed equally to this paper, and Bin Dong is the corresponding author. Zhanhong Ye proposed MAD-L and explained the effectiveness of the MAD method from the perspective of manifold learning. Xiang Huang proposed MAD-LM on the basis of MAD-L and completed all the experiments in the paper. Xiang Huang performed this work during an internship at Huawei. 36th Conference on Neural Information Processing Systems (NeurIPS 2022).
1 Introduction Many important problems in science and engineering, such as inverse problems, control and optimization, risk assessment, and uncertainty quantification [1, 2], require solving the so-called parametric PDEs, i.e., partial differential equations (PDEs) with different physical parameters, boundary conditions, or solution regions. Mathematically, these parametric PDEs can be formulated as:

L^{γ1}_{x̃} u = 0, x̃ ∈ Ω ⊂ R^d,   B^{γ2}_{x̃} u = 0, x̃ ∈ ∂Ω,   (1)

where L^{γ1} and B^{γ2} are partial differential operators parametrized by γ1 and γ2, respectively, and x̃ denotes the independent variable in spatiotemporal-dependent PDEs. Given U = U(Ω; R^{du}) and the space of parameters A, η = (γ1, γ2, Ω) ∈ A is the variable parameter of the PDEs and u ∈ U is the solution of the PDEs. Note that the form of η considered here is very general with possible heterogeneity allowed, since the computational domain shape Ω and the functions defined on this domain or its boundary (which may be involved in γ1, γ2) are obviously of different types. Solving parametric PDEs requires learning an infinite-dimensional operator G : A → U that maps any PDE parameter η to its corresponding solution uη (i.e., the solution mapping). In recent years, learning-based PDE solvers have become very popular, and it is generally believed that learning-based PDE solvers have the potential to improve efficiency [3, 4, 5]. The learning-based PDE solvers can be grouped into two categories in terms of the objects that are approximated by neural networks (NN), i.e., the approximation of the solution uη and the approximation of the solution mapping G.

NN as a new ansatz of solution. Approaches of this kind approximate the solution of the PDEs with a neural network and mainly rely on governing equations and boundary conditions (or their variants) to train the neural networks. For example, PINNs [3] and DGM [6] constrain the output of deep neural networks to satisfy the given governing equations and boundary conditions. The Deep Ritz Method (DRM) [7] exploits the variational form of PDEs and can be used to solve PDEs that can be reformulated as equivalent energy minimization problems. Based on a weak formulation of PDEs, the Weak Adversarial Network (WAN) [8] parameterizes the weak solution and test functions as primal and adversarial neural networks, respectively. These neural approximation methods can work in an unsupervised manner, without the need to generate labeled data from conventional computational methods. However, all these methods treat different PDE parameters as independent tasks, and need to retrain the neural network from scratch for each PDE parameter. When a large number of tasks with different PDE parameters need to be solved, these methods are computationally expensive and impractical. In order to mitigate the retraining cost, E and Yu [7] recommend a transfer learning method that uses a model trained for one task as the initial model to train another task. However, according to our experiments, the transfer learning method is not always effective in improving convergence speed (see Sec.3.1, 3.3).

NN as a new ansatz of solution mapping. Approaches of this kind use neural networks to learn the solution mapping between two infinite-dimensional function spaces [9, 10, 11, 12, 13]. For example, PDE-Nets [9, 10] are among the earliest neural operators; they are specifically designed convolutional neural networks inspired by finite difference approximations of PDEs.
They are able to uncover hidden PDE models from observed dynamical data and perform fast and accurate predictions at the same time. DeepONet [11] uses two subnets to encode the parameters and location variables of the PDEs separately, and merges them together to compute the solution. FNO [13] utilizes the fast Fourier transform to build the neural operator architecture and learn the solution mapping between two infinite-dimensional function spaces. A significant advantage of these approaches is that once the neural network is trained, the prediction time is almost negligible. Although they have demonstrated promising results across a wide range of applications, several issues remain. First, the data acquisition cost is prohibitive in complex physical, biological, or engineering systems, and the generalization ability of these models is poor when there is not enough labeled data [14]. Second, most of these methods [9, 10, 12, 13] require a predefined mesh and utilize the labeled data on the mesh for training and inference. Third, simply applying one forward inference may lead to unsatisfactory generalization, especially in out-of-distribution (OOD) settings (i.e., when PDE parameters for training and inference are drawn from different probability distributions). Finally, these operators directly take the PDE parameter η as network input, which brings inconvenience in network implementation if η is heterogeneous. The recently proposed Physics-Informed DeepONet (PI-DeepONet) [15] can learn a mesh-free solution mapping without any labeled data and retraining. However, it needs to collect a large number of training samples in the parameter space A to obtain an acceptable accuracy (see Sec.3.1), and is still inflexible in dealing with heterogeneous PDE parameters.

Meta-Learning. Different from conventional machine learning that learns to do a given task, meta-learning learns to improve the learning algorithm itself based on multiple learning episodes over a distribution of related tasks. As a result, meta-learning can handle new tasks faster and better. In this field, the Model-Agnostic Meta-Learning (MAML) [16] algorithm and its variants [17, 18, 19] have been widely used. These algorithms try to find an initial model with good generalization ability such that it can be adapted to new tasks with a small number of gradient updates. For example, MAML [16] first trains a meta-model with a good initialization weight on a variety of learning tasks, which is then fine-tuned on a new task through a few steps of gradient descent to get the target model. The Reptile [18] algorithm eliminates the second-order derivatives of the MAML algorithm by repeatedly sampling a task, training on it, and moving the initialization towards the trained weights on that task. Borrowing ideas from meta-learning may inspire new ways to solve parametric PDEs, where different PDE parameters correspond to different tasks. To the best of our knowledge, Meta-MgNet [20] is the first work that views solving parametric PDEs as a meta-learning problem; it is based on hypernetworks and the multigrid algorithm. Meta-MgNet utilizes the similarity between tasks to generate good smoothing operators adaptively, and thereby accelerates the solution process, but it is not directly applicable to PDEs for which the multigrid algorithm is not available. Recently, the Reptile algorithm has also been used to accelerate PDE solving in [21]. However, MAML and Reptile are not always effective in improving convergence speed (see Sec.3.2 and 3.3).

Our contributions.
We propose Meta-Auto-Decoder (MAD), a mesh-free and unsupervised deep learning method that enables the pre-trained model to be quickly adapted to equation instances by implicitly encoding heterogeneous PDE parameters as latent vectors. Different from Meta-MgNet, MAD makes use of the similarity between tasks from the perspective of manifold learning, and tries to learn a nonlinear approximation of the solution manifold. We construct the ansatz of the solution as a neural network of the form uθ(x̃, z). By taking the spatial (or spatial-temporal) coordinate x̃ directly as the network input, an unsupervised training loss is allowed, and a mesh is no longer required. As the additional input z varies, uθ(x̃, z) moves on a manifold in an infinite-dimensional function space, which may be an approximation of the true solution manifold for certain θ. The PDE parameter η is implicitly encoded into z by applying the auto-decoder architecture motivated by [22], regardless of the possible heterogeneity. When a new task comes, MAD achieves fast transfer by projecting the new task onto the manifold and fine-tuning the manifold at the same time. The main contributions of this paper are summarized as follows:
• A mesh-free and unsupervised deep neural network approach is proposed to solve parametric PDEs. Based on the meta-learning concept, once the neural network is pre-trained, solving a new task involves only a small number of iterations. In addition, the auto-decoder architecture adopted by MAD can realize auto-encoding of heterogeneous PDE parameters.
• The mathematical intuition behind the MAD method is analyzed from the perspective of manifold learning. In short, a neural network is pre-trained to approximate the solution manifold, and the required solution is searched for on the solution manifold or in a neighborhood of the solution manifold.
• Extensive numerical experiments are carried out to demonstrate the effectiveness of our method, which show that MAD can significantly improve the convergence speed and has good extrapolation ability in OOD settings.

2 Methodology

2.1 Meta-Auto-Decoder

We adopt the meta-learning concept to realize fast solution of parametric PDEs. Our basic idea is to first learn some universal meta-knowledge from a set of sampled tasks in the pre-training stage, and then solve a new task quickly by combining the task-specific knowledge with the shared meta-knowledge in the fine-tuning stage. We also adapt the auto-decoder architecture in [22], and introduce uθ(x̃, z) to approximate the solutions of parametric PDEs. The architecture of uθ(x̃, z) is shown in Fig.1. A physics-informed loss is used for training, making the proposed method unsupervised. Putting all these together, we propose a new method, Meta-Auto-Decoder (MAD), to solve parametric PDEs. For the rest of the subsection, the loss function and the two stages of training are explained in detail. To enable unsupervised learning, given any PDE parameter η ∈ A, the physics-informed loss Lη : U → [0,∞) associated with Eq.(1),

Lη[u] = ‖L^{γ1}_{x̃} u‖²_{L²(Ω)} + λ_bc ‖B^{γ2}_{x̃} u‖²_{L²(∂Ω)},   (2)

is considered, where λ_bc > 0 is a weighting coefficient. The Monte Carlo estimate of Lη[u] is

L̂η[u] = (1/M_r) ∑_{j=1}^{M_r} ‖L^{γ1}_{x̃} u(x̃^r_j)‖²_2 + (λ_bc/M_bc) ∑_{j=1}^{M_bc} ‖B^{γ2}_{x̃} u(x̃^bc_j)‖²_2,   (3)

where {x̃^r_j}_{j∈{1,...,M_r}} and {x̃^bc_j}_{j∈{1,...,M_bc}} are two sets of random sampling points in Ω and ∂Ω, respectively. This task-specific loss L̂η[u] can be computed by automatic differentiation [23], and will be used in the pre-training stage and the fine-tuning stage.
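As a concrete preview of how this loss is used in the two training stages described next, here is a minimal PyTorch-style sketch of the auto-decoder setup: one network uθ(x̃, z) shared across tasks, one trainable latent vector per training task, and the Monte Carlo physics-informed loss above as the training signal. All class names, layer sizes, optimizers, and hyperparameters below are our own illustrative choices, not the authors' implementation (which is written in MindSpore).

```python
import torch
import torch.nn as nn

class AutoDecoder(nn.Module):
    """u_theta(x, z): spatial(-temporal) coordinates concatenated with a latent vector."""
    def __init__(self, coord_dim, latent_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(coord_dim + latent_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, z):
        # x: (M, coord_dim) sample points, z: (1, latent_dim) task latent vector
        return self.net(torch.cat([x, z.expand(x.shape[0], -1)], dim=-1))

def pretrain(model, task_losses, latent_dim, steps=1000, sigma=1.0, lr=1e-3):
    """Pre-training: jointly optimize theta and one latent z_i per sampled task.
    task_losses[i] is a callable mapping a function u(x) to its physics-informed loss."""
    latents = [torch.zeros(1, latent_dim, requires_grad=True) for _ in task_losses]
    opt = torch.optim.Adam(list(model.parameters()) + latents, lr=lr)
    for _ in range(steps):
        loss = sum(L(lambda x, z=z: model(x, z)) + z.pow(2).sum() / sigma**2
                   for L, z in zip(task_losses, latents))
        opt.zero_grad(); loss.backward(); opt.step()
    return latents

def finetune(model, z_init, task_loss, steps=100, sigma=1.0, lr=1e-3, tune_weights=False):
    """Fine-tuning for a new PDE parameter: MAD-L optimizes only z (theta frozen),
    MAD-LM sets tune_weights=True and also updates the network weights."""
    z = z_init.detach().clone().requires_grad_(True)
    params = [z] + (list(model.parameters()) if tune_weights else [])
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(steps):
        loss = task_loss(lambda x: model(x, z)) + z.pow(2).sum() / sigma**2
        opt.zero_grad(); loss.backward(); opt.step()
    return z
```

Each `task_losses[i]` callable stands for the Monte Carlo loss L̂_{ηi} of Eq. (3), and the z penalty matches the ‖z‖²/σ² regularization that appears in the pre-training and fine-tuning objectives stated next.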
In the pre-training stage, through minimizing the loss function, a pre-trained model parametrized by θ∗ is learned for all tasks, and each task is paired with its own decoded latent vector z∗_i. Such a pre-trained model is considered the meta-knowledge, as it is learned from the distribution of all tasks, and the learned latent vector z∗_i is the task-specific knowledge. When solving a new task in the fine-tuning stage, we keep the model weight θ∗ fixed and minimize the loss by fine-tuning the latent vector z. Alternatively, we may unfreeze θ and allow it to be fine-tuned along with z. These two fine-tuning strategies give rise to different versions of MAD, which are called MAD-L and MAD-LM, respectively. The corresponding problems of pre-training and fine-tuning are formulated as follows:

Pre-training Stage Given N randomly generated PDE parameters η1, . . . , ηN ∈ A, both MAD-L and MAD-LM solve the following optimization problem

({z∗_i}_{i∈{1,...,N}}, θ∗) = arg min_{θ, {z_i}_{i∈{1,...,N}}} ∑_{i=1}^{N} ( L̂_{ηi}[uθ(·, z_i)] + (1/σ²)‖z_i‖² ),   (4)

where θ∗ is the optimal model weight, {z∗_i}_{i∈{1,...,N}} are the optimal latent vectors for the different PDE parameters, and L̂_{ηi} is defined in Eq.(3). The regularization (1/σ²)‖z_i‖² is added for training stability.

Fine-tuning Stage (MAD-L) Given a new PDE parameter ηnew, MAD-L keeps θ∗ fixed and minimizes the following loss function to get

z∗_new = arg min_z L̂_{ηnew}[uθ∗(·, z)] + (1/σ²)‖z‖²,   (5)

then uθ∗(·, z∗_new) is the approximate solution of the PDE with parameter ηnew. To speed up convergence, we can set the initial value of z to the z∗_i obtained during pre-training for which ηi is nearest to ηnew (see footnote 2 for an example of the distance between PDE parameters).

Fine-tuning Stage (MAD-LM) MAD-LM fine-tunes the model weight θ and the latent vector z simultaneously, and solves the following optimization problem

(z∗_new, θ∗_new) = arg min_{z,θ} L̂_{ηnew}[uθ(·, z)] + (1/σ²)‖z‖²   (6)

with initial model weight θ∗. This produces an alternative approximate solution uθ∗_new(·, z∗_new). The latent vector is initialized in the same way as in MAD-L.

Remark 1 The MAD method has several key advantages compared with existing methods. Besides being mesh-free and unsupervised, it can deal with heterogeneous PDE parameters painlessly, since η is not taken as the network input and is encoded into z in an implicit way. Introduction of the meta-knowledge θ∗ accelerates the fine-tuning process, which can be better understood in light of the manifold learning perspective. For MAD-LM, the accuracy on OOD tasks is likely to be at least comparable with training from scratch based on PINNs. Although the fine-tuning process of MAD is still slower than one forward inference of a neural-network solution mapping, the advantages presented above can make it more suitable for some real applications.

Remark 2 If we replace the physics-informed loss by a certain supervised loss, the MAD-L method coincides with the DeepSDF algorithm [22]. Despite this, the field of solving parametric PDEs is quite different from 3D shape representation in computer graphics. Moreover, the introduction of model-weight fine-tuning in MAD-LM can significantly improve solution accuracy, as is explained intuitively in Sec.2.2 and 2.3 and validated by the numerical experiments in Sec.3.

2.2 Manifold Learning Interpretation of MAD-L

We interpret how the MAD-L method works from the manifold learning perspective, which also provides a new interpretation of the DeepSDF algorithm [22]. For the rest of this section, the domain Ω is fixed and excluded from η for simplicity.
Now, we consider the following scenario. Scenario 1 The set of solutions G(A) = {G(η) | η ∈ A} ⊂ U is contained in a low-dimensional structure. To be more specific, there is a finite-dimensional space Z = Rl (with l dimU) and a Lipschitz continuous mapping Ḡ : Z → U , such that G(A) ⊆ Ḡ(Z). In other words, for any η ∈ A, there exists z ∈ Z satisfying Ḡ(z) = G(η). The mapping Ḡ is Lipschitz continuous if and only if there exists some C > 0 such that ∥∥Ḡ(z)− Ḡ(z′) ∥∥ U ≤ C‖z − z ′‖ for all z, z′ ∈ Z. This Lipschitz continuous constraint excludes highly irregular mappings like space-filling curves. When A is a finite-dimensional space and G is Lipschitz continuous, the parametric PDE would fall into this scenario (just take Z = A, Ḡ = G). Since dimZ dimU (the latter is usually infinity) holds, we may view the mapping Ḡ as some sort of “decoder”, and Z is the corresponding latent vector space, despite of the fact that there doesn’t exist an “encoder”. In many cases, Ḡ(Z) ⊂ U forms an embedded submanifold, and therefore our MAD method can be viewed as a manifold-learning approach. Once the mapping Ḡ is learned as above, then for a given parameter η, searching for the solution uη in the whole space U is no longer needed. Instead, we may focus on the smaller subset Ḡ(Z), i.e. the class of functions in U that is parametrized by Z, since uη = G(η) ∈ Ḡ(Z) holds for any η ∈ A. We then solve the optimization problem zη = arg min z Lη[Ḡ(z)], (7) 2For example, if A is a space of functions, we can discretize a function into a vector and then find the Euclidean distance between the two vectors as the distance between two PDE parameters. and Ḡ(zη) is the approximate solution. Assuming that the dimension ofZ is chosen (either empirically or through trial and error), the aim is to find the mapping Ḡ. Since such a mapping is usually complex and hard to design by hand, we consider the θ-parametrized3 version Gθ : Z → U , and find the best θ automatically by solving an optimization problem. Gθ can be constructed in the simple form Gθ(z)(x̃) = uθ(x̃, z), (8) where uθ is a neural network whose input is the concatenation of x̃ ∈ Rd and z ∈ Rl. The next step is to find the optimal model weight θ via training, with the target being G(A) ⊆ Gθ(Z). Assuming that the PDE parameters are generated from a probability distribution η ∼ pA, then G(η) ∈ Gθ(Z) holds almost surely if and only if d(θ) = E η∼pA [ dU ( uη, Gθ(Z) )] = E η∼pA [ min z ∥∥uη − uθ(·, z)∥∥U] = 0, (9) which suggests taking θ∗ = arg minθ d(θ). In case we do not have direct access to the exact solutions uη , the equivalent4 condition d′(θ) = E η∼pA [ min z Lη[uθ(·, z)] ] = 0 (10) is considered, and d′(θ) becomes the alternative loss to be minimized.In the specific implementation, the expectation on η ∼ pA is estimated by Monte Carlo samples η1, . . . , ηN , and the optimal network weight θ is taken to be θ∗ ≈ arg min θ 1 N N∑ i=1 min zi Lηi [uθ(·, zi)]. (11) We further estimate the physics-informed loss Lη using Monte Carlo method to obtain Eq.(4). After that, when a new PDE parameter ηnew ∈ A comes, a direct adaptation of Eq.(7) would then give rise to the fine-tuning process Eq.(5), since uθ∗(·, z) = Gθ∗(z) ≈ Ḡ(z) holds. An intuitive illustration of how MAD-L works from the manifold learning perspective is given in Fig.2(a). 3Two types of parametrization are considered here. The latent vector z parametrizes a point on the manifold Ḡ(Z) or Gθ(Z), and θ parametrizes the shape of the entire manifold Gθ(Z). 
4Assume that the solution of Eq.(1) is unique for all η ∈ A, and u ∈ U is the solution if and only if Lη[u] = 0. A Visualization Example An ordinary differential equation (ODE) is used to visualize the pretraining and fine-tuning processes of MAD-L. Consider the following problem with domain Ω = (−π, π) ⊂ R: du dx = 2(x− η) cos ( (x− η)2 ) , u(±π) = sin ( (±π − η)2 ) . (12) We sample 20 points equidistantly on the interval [0, 2] as variable ODE parameters, and randomly select one ηnew for fine-tuning stage and the rest {ηi}i∈{1,··· ,19} for pre-training stage. MAD-L generates a sequence of (θ(m), {z(m)i }i∈{1,··· ,19}) in pre-training stage, and terminates at m = 200 with the optimal (θ∗, {z∗i }i∈{1,··· ,19}). The infinite-dimensional function space U = C([−π, π]) is projected onto a 2-dimensional plane using Principal Component Analysis (PCA). Fig.3(a) visualizes how Gθ(Z) gradually fits G(A) in pre-training stage. The set of exact solutions G(A) forms a 1-dimensional manifold (i.e. the red solid curve), and the marked points {G(ηi)}i∈{1,··· ,19} represent the corresponding ODE parameters used for pre-training. Each dotted curve represents a solution set G(m)θ (Z) obtained by the neural network at the m-th iteration with the points Gθ(m)(z (m) i ) = uθ(m)(·, z (m) i ) also marked on the curve. As the number of iterations m increases, the network weight θ = θ(m) updates, making the dotted curves evolve and finally fit the red solid curve, i.e., the target manifold G(A). Fig.3(b) illustrates the fine-tuning process for a given new ODE parameter ηnew ∈ A. As in Fig.3(a), the red solid curve represents the set of exact solutions G(A), while the cyan dotted curve represents the solution set Gθ∗(Z) = Gθ(200)(Z) obtained by the pre-trained network. As z = z (m) new updates (i.e., through fine-tuning z), the marked point Gθ∗(z (m) new ) moves on the cyan dotted curve, and finally converges to the approximate solution Gθ∗(z∗new) = Gθ∗(z (12) new ) ≈ G(ηnew). 2.3 Manifold Learning Interpretation of MAD-LM The MAD-L method is designed for Scenario 1. However, many parametric PDEs encountered in real applications do not fall into this scenario, especially when the parameter set A of PDEs is an infinite-dimensional function space. Simply applying MAD-L method to these PDE solving problems would likely lead to unsatisfactory results. However, MAD-LM works in a more general scenario, and thus has the potential of getting improved performance for a wide range of parametric PDE problems. This alternative scenario is given as follows. Scenario 2 The solution setG(A) ⊂ U can be approximated by a set with low-dimensional structure, in the sense that there is a finite-dimensional space Z = Rl (with l dimU) and a Lipschitz continuous mapping Ḡ : Z → U , such that G(A) is contained in the c-neighborhood of Ḡ(Z) ⊂ U , where c is a relatively small constant. In other words, for any η ∈ A, there exists some z ∈ Z satisfying ‖Ḡ(z)−G(η)‖U ≤ c. Appendix A gives an example of G(A) that falls into Scenario 2 but not Scenario 1. In this new scenario, similar derivation leads to the same pre-training stage, which is used to find the initial decoder mapping Gθ∗ ≈ Ḡ. However, in the fine-tuning stage, simply fine-tuning the latent vector z won’t give a satisfactory solution in general due to the existence of the c-gap. Therefore, we have to fine-tune the model weight θ with the latent vector z simultaneously, and solve the optimization problem Eq.(6). 
It produces a new decoder Gθ∗_new(z∗_new) specific to the parameter ηnew. An intuitive illustration is given in Fig.2(b).

3 Numerical Experiments

To evaluate the effectiveness of the MAD method, we apply it to solve three parametric PDEs: (1) Burgers' equation with variable initial conditions; (2) Maxwell's equations with variable equation coefficients; and (3) Laplace's equation with variable solution domains and boundary conditions (heterogeneous PDE parameters). Accuracy of the model is measured by the average relative L2 error (abbreviated as L2 error) between predicted solutions and reference solutions, and we provide the mean value and the 95% confidence interval of the L2 error. We compare MAD with other methods including learning from scratch (abbreviated as From-Scratch), Transfer-Learning [7], MAML [16, 17], Reptile [18] and PI-DeepONet [15]. For each experiment, the PDE parameters are divided into two sets: S1 and S2. Parameters in S1 correspond to sample tasks for pre-training, and parameters in S2 correspond to new tasks for fine-tuning. See Appendix B for the default experimental setup; more detailed experimental setups and results for Burgers' equation, Maxwell's equations and Laplace's equation are given in Appendices C, D and E, respectively. Unless otherwise specified, all the experiments are conducted with MindSpore (https://www.mindspore.cn/).

3.1 Burgers' Equation

We consider the 1-D Burgers' equation:

∂u/∂t + u ∂u/∂x = ν ∂²u/∂x², x ∈ (0, 1), t ∈ (0, 1],   u(x, 0) = u0(x), x ∈ (0, 1).   (13)

Eq.(13) can model one-dimensional flow of a viscous fluid, where u is the velocity, ν is the viscosity coefficient, and the initial condition u0(x) is the changing parameter of the PDE, i.e. η = u0(x). The initial condition u0(x) is generated using a Gaussian random field (GRF) [24] according to u0(x) ∼ N(0; 100(−∆ + 9I)^{−3}) with periodic boundary conditions. Fig.4(a) shows the mean L2 error of all methods as the number of training iterations increases. All methods converge to nearly the same accuracy (mean L2 error close to 0.013) except for MAD-L, which we suspect is due to the c-gap introduced in Sec.2.3. In terms of convergence speed, From-Scratch and Transfer-Learning need about 1200 iterations to converge, whereas MAML, Reptile and MAD-LM need about 200 iterations to converge. MAD-LM has the fastest convergence speed, requiring only 17% of the training iterations of From-Scratch. In this experiment, Transfer-Learning does not show any advantage over From-Scratch, which means that Transfer-Learning fails to obtain any useful knowledge in the pre-training stage. PI-DeepONet can directly perform inference for unseen PDE parameters in S2, so it has no fine-tuning process. Table 1 shows the comparison of the mean L2 error of PI-DeepONet and MAD under different numbers of training samples in S1. The results show that PI-DeepONet has a strong dependence on the number of training samples, and its mean L2 error is remarkably high when S1 is small. Moreover, its mean L2 error is significantly higher than that of MAD-L or MAD-LM in all cases. In the above experiments, the ηs in S1 and S2 come from the same GRF, so we can assume that the tasks in the pre-training stage come from the same task distribution as the tasks in the fine-tuning stage. We investigate the extrapolation capability of MAD, that is, the tasks in the fine-tuning stage come from a different task distribution than those in the pre-training stage. Specifically, S1 is still the same as above, but S2 is generated from N(0; 100(−∆ + 25I)^{−2.5}).
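For reference, the interior residual of Eq. (13) that enters the physics-informed loss can be computed with automatic differentiation roughly as follows. This is an illustrative sketch, and the default viscosity value is our own placeholder since ν is not specified in this excerpt.

```python
import torch

def burgers_residual(u_fn, x, t, nu=0.01):
    """Pointwise PINN residual u_t + u * u_x - nu * u_xx for Eq. (13).
    u_fn maps (x, t) -> u; nu is a placeholder viscosity."""
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    u = u_fn(x, t)
    u_x, u_t = torch.autograd.grad(u.sum(), (x, t), create_graph=True)
    u_xx, = torch.autograd.grad(u_x.sum(), x, create_graph=True)
    return u_t + u * u_x - nu * u_xx
```

Squaring this residual at the interior sample points and adding the weighted initial/boundary terms gives the task loss L̂_η of Eq. (3) for one draw of u0.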
Fig.5(a) shows the results of the extrapolation experiments. Since the distribution of tasks has changed, the manifold learned in the pre-training stage fits G(A) worse, so MAD-L exhibits worse accuracy than in Fig.4(a). However, as in Fig.4(a), the convergence speed of MAD-LM in Fig.5(a) is also better than that of the other methods. This shows that the extrapolation capability of MAD is also better than that of the other methods in this example.

3.2 Time-Domain Maxwell's Equations

We consider the time-domain 2-D Maxwell's equations with a point source in the transverse electric (TE) mode [25]:

∂Ex/∂t = (1/(ε0 εr)) ∂Hz/∂y,   ∂Ey/∂t = −(1/(ε0 εr)) ∂Hz/∂x,   ∂Hz/∂t = −(1/(µ0 µr)) (∂Ey/∂x − ∂Ex/∂y + J),   (14)

where Ex, Ey and Hz are the electromagnetic fields and J is the point source term. The equation coefficients ε0 and µ0 are the permittivity and permeability in vacuum, respectively. The equation coefficients εr and µr are the relative permittivity and relative permeability of the media, respectively. [26] uses a modified PINNs method to solve Eq.(14) with fixed εr = 1 and µr = 1. However, in this paper, (εr, µr) are variable parameters of the PDEs, i.e., η = (εr, µr), which corresponds to the media properties in the simulation region. Fig.4(b) shows that all methods converge to similar accuracy (mean L2 error close to 0.04), and MAD-LM achieves the lowest mean L2 error (0.028). In terms of convergence speed, MAD-L and MAD-LM are clearly superior to the other methods. It is worth noting that MAML fails to converge in the pre-training stage; therefore its data are missing in Fig.4(b). We suspect the reason is that the singularity brought by the point source and the computation of second-order derivatives pose great difficulties for the optimization problem. Reptile also does not show good generalization ability, probably due to the same singularity problem. We run an extrapolation experiment, where (εr, µr) in S1 comes from [1, 5]², but in the fine-tuning stage we only consider the (εr, µr) = (7, 7) case. Because the extrapolated task does not lie in the task distribution of the pre-training stage, the point G(ηnew) corresponding to the extrapolated task in the function space U is not on the learned manifold Gθ∗(Z), which causes MAD-L to converge to poor accuracy. Fig.5(b) indicates that MAD-LM is significantly faster than From-Scratch and Reptile in convergence speed while maintaining high accuracy. Notably, Transfer-Learning also exhibits faster convergence than From-Scratch and Reptile. This is because (εr, µr) = (4, 5) is randomly selected in the pre-training stage and is very close to (εr, µr) = (7, 7) in Euclidean distance.

3.3 Laplace's Equation

We consider the 2-D Laplace's equation as follows:

∂²u/∂x² + ∂²u/∂y² = 0, (x, y) ∈ Ω,   u(x, y) = g(x, y), (x, y) ∈ ∂Ω,   (15)

where the shape of Ω and the boundary condition g(x, y) are the variable parameters of the PDE, i.e. η = (Ω, g(x, y)). In this experiment, we use a triangular domain Ω and vary the shape of Ω by randomly choosing three points on the circumference of a unit circle to form the triangle. Given that h is the boundary condition on the unit circle, we use a GRF to generate h ∼ N(0, 10^{3/2}(−∆+100I)^{−3}) with periodic boundary conditions. The analytical solution of Laplace's equation on the unit circle can be obtained by the Fourier method. Then, we use the analytical solution on the three sides of the triangle as the boundary condition g(x, y).
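One simple way to realize the random triangular domains described above is to sample three angles on the unit circle; the snippet below is our own illustrative sketch, not the authors' data-generation code.

```python
import numpy as np

def sample_triangle_domain(rng):
    """Sample a triangular domain by picking three points on the unit circle."""
    angles = np.sort(rng.uniform(0.0, 2.0 * np.pi, size=3))
    return np.stack([np.cos(angles), np.sin(angles)], axis=-1)  # (3, 2) vertex array

vertices = sample_triangle_domain(np.random.default_rng(0))
```

The boundary condition g on each triangle edge would then be read off from the Fourier-series solution induced by the sampled h on the circle.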
The variable PDE parameters include the shape of the solution domain (the shape of the triangle) and the boundary conditions on the three sides of the triangle, so the PDE parameters here are heterogeneous. MAD can implicitly encode such heterogeneous PDE parameters as latent vectors conveniently, whereas PI-DeepONet is unable to handle this case without further adaptations. Fig.4(c) shows that all methods finally converge to similar accuracy (mean L2 error close to 0.001), and MAD-LM achieves the lowest mean L2 error. MAD-L, MAD-LM and Reptile show good generalization capability and excellent convergence speed, whereas Transfer-Learning and MAML do not show any advantage over From-Scratch. Summary of experimental results. Achieving fast adaptation is the major focus of this paper, and solutions within a reasonable precision need to be found. Indeed, in many control and inverse problems, a higher precision in solving the forward problem (such as parametric PDEs) does not always lead to better results. For example, a solution with about 5% relative error is already enough for Maxwell’s equations in certain engineering scenarios. We are therefore interested in reducing the cost of solving the PDE with a new set of parameters by using only a relatively small number of iterations, in order to obtain an accurate enough solution in practice. The advantages of MAD (especially MAD-LM) is directly validated in the numerical experiments, as it achieves very fast convergence in the early stage of the training process. Some other applications may focus more on the final precision, and is not as sensitive to the training cost. In this alternate criterion, the superiority of the MAD method becomes less obvious in our test cases except Maxwell’s equations, but its performance is still comparable to other methods. 4 Conclusions In this paper, a novel mesh-free and unsupervised deep learning method MAD is proposed for solving parametric PDEs based on meta-learning idea. A good initial model is obtained in pre-training stage to learn useful information from a set of sampled tasks, which is then used to help solve the parametric PDEs quickly in fine-tuning stage. Moreover, MAD can implicitly encode heterogeneous PDE parameters as latent vectors. The effectiveness of MAD method is analyzed from the perspective of manifold learning and verified by extensive numerical experiments. Acknowledgments This work was supported by National Key R&D Program of China under Grant No. 2021ZD0110400.
1. What is the focus and contribution of the paper regarding parametric PDE solutions? 2. What are the strengths of the proposed approach, particularly in its application to design optimization and parameter discovery? 3. What are the weaknesses of the paper, such as the need for larger-scale experiments and comparisons with other numerical methods? 4. Do you have any questions regarding the experimental setup and the introduction of extra parameters in the NN for MAD experiments? 5. How does the reviewer assess the quality and limitations of the paper's content, including its potential negative societal impact?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper introduces a manifold learning approach for speeding up solutions to parametric PDEs, inspired by DeepSDF [22]. They associate a latent vector with each instance of the PDE, and learn a solution manifold using an initial dataset of PDE instances. During inference, they either search for the closest latent vector using gradient-based optimization (MAD-L) or fine-tune the network parameters together with the latent vector (MAD-LM). They report strong performance improvements compared to meta-learning, PINNs from scratch, and DeepONets.

Strengths And Weaknesses The paper attacks an important problem. Oftentimes we have to solve many versions of a PDE, either for design optimization or parameter discovery. One of the strengths of using neural networks as an ansatz for PDE solutions is that we can use techniques such as meta-learning to speed up the solution process. This paper adapts methods from the shape representation literature to this problem, and shows consistent performance improvements. The contribution is simple but works well, so I think it is solid. I like the sections on the manifold learning intuition, especially the simple ODE example, as I think it really motivates why the algorithm works. It would be nice to see results from larger-scale experiments. Could you demonstrate a case study where you would want to solve a sequence of PDEs, and where your meta-learning setup gives large performance gains? I think this would help motivate the problem of parametric PDEs much better.

Questions For the experiments with MAD, you had to introduce extra parameters in the NN to take the latent vector as input. Does this significantly increase the parameter count? Could you give the number of NN parameters for the "From-Scratch" experiments vs the "MAD-L/LM" experiments? It would be really nice to see a comparison of these methods to more standard numerical methods (such as FEM/spectral methods). It would be interesting to see if MAD-L can get close to or even surpass more classical methods in terms of performance. Quality-of-life nits: Could you add descriptions of the figures into the captions themselves as well? It becomes difficult to scroll back and forth.

Limitations It was really nice seeing the cases where MAD-L failed. I think comparisons with more standard numerical methods could also make it clear where this method stands in terms of potential limitations compared to the state of the art. Potential negative societal impact was not assessed.
NIPS
Title Meta-Auto-Decoder for Solving Parametric Partial Differential Equations Abstract Many important problems in science and engineering require solving the so-called parametric partial differential equations (PDEs), i.e., PDEs with different physical parameters, boundary conditions, shapes of computation domains, etc. Recently, building learning-based numerical solvers for parametric PDEs has become an emerging new field. One category of methods such as the Deep Galerkin Method (DGM) and Physics-Informed Neural Networks (PINNs) aim to approximate the solution of the PDEs. They are typically unsupervised and mesh-free, but require going through the time-consuming network training process from scratch for each set of parameters of the PDE. Another category of methods such as Fourier Neural Operator (FNO) and Deep Operator Network (DeepONet) try to approximate the solution mapping directly. Being fast with only one forward inference for each PDE parameter without retraining, they often require a large corpus of paired input-output observations drawn from numerical simulations, and most of them need a predefined mesh as well. In this paper, we propose Meta-AutoDecoder (MAD), a mesh-free and unsupervised deep learning method that enables ∗The first two authors contributed equally to this paper, and Bin Dong is the corresponding author. Zhanhong Ye proposed MAD-L and explain the effectiveness of the MAD method from the perspective of manifold learning. Huang Xiang proposed MAD-LM on the basis of MAD-L and completed all the experiments in the paper. Xiang Huang performed this work during an internship at Huawei. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). the pre-trained model to be quickly adapted to equation instances by implicitly encoding (possibly heterogenous) PDE parameters as latent vectors. The proposed method MAD can be interpreted by manifold learning in infinite-dimensional spaces, granting it a geometric insight. Extensive numerical experiments show that the MAD method exhibits faster convergence speed without losing accuracy than other deep learning-based methods. The project page with code is available: https://gitee.com/mindspore/mindscience/tree/master/MindElec/. N/A ∗The first two authors contributed equally to this paper, and Bin Dong is the corresponding author. Zhanhong Ye proposed MAD-L and explain the effectiveness of the MAD method from the perspective of manifold learning. Huang Xiang proposed MAD-LM on the basis of MAD-L and completed all the experiments in the paper. Xiang Huang performed this work during an internship at Huawei. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). the pre-trained model to be quickly adapted to equation instances by implicitly encoding (possibly heterogenous) PDE parameters as latent vectors. The proposed method MAD can be interpreted by manifold learning in infinite-dimensional spaces, granting it a geometric insight. Extensive numerical experiments show that the MAD method exhibits faster convergence speed without losing accuracy than other deep learning-based methods. The project page with code is available: https://gitee.com/mindspore/mindscience/tree/master/MindElec/. 
1 Introduction Many important problems in science and engineering, such as inverse problems, control and optimization, risk assessment, and uncertainty quantification [1, 2], require solving the so-called parametric PDEs, i.e., partial differential equations (PDEs) with different physical parameters, boundary conditions, or solution regions. Mathematically, they require to solve the so-called parametric PDEs that can be formulated as: Lγ1x̃ u = 0, x̃ ∈ Ω ⊂ R d, Bγ2x̃ u = 0, x̃ ∈ ∂Ω (1) where Lγ1 and Bγ2 are partial differential operators parametrized by γ1 and γ2, respectively, and x̃ denotes the independent variable in spatiotemporal-dependent PDEs. Given U = U(Ω;Rdu) and the space of parameters A, η = (γ1, γ2,Ω) ∈ A is the variable parameter of the PDEs and u ∈ U is the solution of the PDEs. Note that the form of η considered here is very general with possible heterogeneity allowed, since the computational domain shape Ω and the functions defined on this domain or its boundary (which may be involved in γ1, γ2) is obviously of different type. Solving parametric PDEs requires to learn an infinite-dimensional operator G : A → U that map any PDE parameter η to its corresponding solution uη (i.e., the solution mapping). In recent years, learning-based PDE solvers have become very popular, and it is generally believed that learning-based PDE solvers have the potential to improve efficiency [3, 4, 5]. The learning-based PDE solvers can be categorized into two categories in terms of the objects that are approximated by neural networks (NN), i.e., the approximation of the solution uη and the approximation of the solution mapping G. NN as a new ansatz of solution. This kind of approaches approximate the solution of the PDEs with a neural network and mainly rely on governing equations and boundary conditions (or their variants) to train the neural networks. For example, PINNs [3] and DGM [6] constrain the output of deep neural networks to satisfy the given governing equations and boundary conditions. Deep Ritz Method (DRM) [7] exploits the variational form of PDEs and can be used to solve PDEs that can be reformulated as equivalent energy minimization problems. Based on a weak formulation of PDEs, Weak Adversarial Network (WAN) [8] parameterizes the weak solution and test functions as primal and adversarial neural networks, respectively. These neural approximation methods can work in an unsupervised manner, without the need to generate labeled data from conventional computational methods. However, all these methods treat different PDE parameters as independent tasks, and need to retrain the neural network from scratch for each PDE parameter. When a large number of tasks with different PDE parameters need to be solved, these methods are computationally expensive and impractical. In order to mitigate retraining cost, E and Yu [7] recommends a transfer learning method that uses a model trained for one task as the initial model to train another task. However, according to our experiments, transfer learning method is not always effective in improving convergence speed (see Sec.3.1, 3.3). NN as a new ansatz of solution mapping. This kind of approaches use neural networks to learn the solution mapping between two infinite-dimensional function spaces [9, 10, 11, 12, 13]. For example, PDE-Nets [9, 10] are among the earliest neural operators that are specifically designed convolutional neural networks inspired by finite difference approximations of PDEs. 
They are able to uncover hidden PDE models from observed dynamical data and perform fast and accurate predictions at the same time. DeepONet [11] uses two subnets to encode the parameters and location variables of the PDEs separately, and merge them together to compute the solution. FNO [13] utilizes fast Fourier transform to build the neural operator architecture and learn the solution mapping between two infinite-dimensional function spaces. A significant advantage of these approaches is that once the neural network is trained, the prediction time is almost negligible. Although they have demonstrated promising results across a wide range of applications, several issues occur. First, the data acquisition cost is prohibitive in complex physical, biological, or engineering systems, and the generalization ability of these models is poor when there is not enough labeled data [14]. Second, most of these methods [9, 10, 12, 13] require a predefined mesh and utilize the labeled data on the mesh for training and inference. Third, simply applying one forward inference may lead to unsatisfactory generalization, especially on out-of-distribution (OOD) settings (i.e., PDE parameters for training and inference are from different probability distributions). Finally, these operators directly takes the PDE parameter η as network input, which would bring inconvenience in network implementation if η is heterogeneous. The recently proposed Physics-Informed DeepONet (PI-DeepONet) [15] can learn a mesh-free solution mapping without any labeled data and retraining. However, it needs to collect a large number of training samples in the parameter space A to obtain an acceptable accuracy (see Sec.3.1), and is still inflexible dealing with heterogeneous PDE parameters. Meta-Learning. Different from conventional machine learning that learns to do a given task, meta-learning learns to improve the learning algorithm itself based on multiple learning episodes over a distribution of related tasks. As a result, meta-learning can handle new tasks faster and better. In this field, the Model-Agnostic Meta-Learning (MAML) [16] algorithm and its variants [17, 18, 19] have beed widely used. These algorithms try to find an initial model with good generalization ability such that it can be adapted to new tasks with a small number of gradient updates. For example, MAML [16] firstly trains a meta-model with good initialization weight on a variety of learning tasks, which is then fine-tuned on a new task through a few steps of gradient descent to get the target model. The Reptile [18] algorithm eliminates second-order derivatives in MAML algorithm by repeatedly sampling a task, training on it, and moving the initialization towards the trained weight on that task. Borrowing the idea of meta-learning may inspire new ways to solve parametric PDEs, where different PDE parameters correspond to different tasks. To the best of our knowledge, Meta-MgNet [20] is the first work that view solving parametric PDEs as a meta-learning problem, which is based on hypernet and the multigrid algorithm. Meta-MgNet utilizes the similarity between tasks to generate good smoothing operators adaptively, and thereby accelerates the solution process, but is not directly applicable to PDEs on which the multigrid algorithm is not available. Recently, the Reptile algorithm is also used to accelerate the PDE solving problems in [21]. However, MAML and Reptile are not always effective in improving convergence speed (see Sec.3.2 and 3.3). Our contributions. 
We propose Meta-Auto-Decoder (MAD), a mesh-free and unsupervised deep learning method that enables the pre-trained model to be quickly adapted to equation instances by implicitly encoding heterogeneous PDE parameters as latent vectors. Different from Meta-MgNet, MAD makes use of the similarity between tasks from the perspective of manifold learning, and tries to learn a nonlinear approximation of the solution manifold. We construct the ansatz of solution as a neural network in the form uθ(x̃, z). By taking the spatial (or spatial-temporal) coordinate x̃ directly as the network input, unsupervised training loss is allowed, and a mesh is no longer required. As the additional input z varies, uθ(x̃, z) moves on a manifold in an infinite-dimensional function space, which may be an approximation of the true solution manifold for certain θ. The PDE parameter η is implicitly encoded into z by applying the auto-decoder architecture motivated by [22], regardless of the possible heterogeneity. When a new task comes, MAD achieves fast transfer by projecting the new task to the manifold and fine-tuning the manifold at the same time. The main contributions of this paper are summarized as follows: • A mesh-free and unsupervised deep neural network approach is proposed to solve parametric PDEs. Based on meta-learning concept, once the neural network is pre-trained, solving a new task involves only a small number of iterations. In addition, the auto-decoder architecture adopted by MAD can realize auto-encoding of heterogeneous PDE parameters. • The mathematical intuition behind the MAD method is analyzed from the perspective of manifold learning. In short, a neural network is pre-trained to approximate the solution manifold, and the required solution is searched on the solution manifold or in a neighborhood of the solution manifold. • Extensive numerical experiments are carried out to demonstrate the effectiveness of our method, which show that MAD can significantly improve the convergence speed and has good extrapolation ability for OOD settings. 2 Methodology 2.1 Meta-Auto-Decoder We adopt meta-learning concept to realize fast solution of parametric PDEs. Our basic idea is to first learn some universal meta-knowledge from a set of sampled tasks in the pre-training stage, and then solve a new task quickly by combining the task-specific knowledge with the shared meta-knowledge in the fine-tuning stage. We also adapt the auto-decoder architecture in [22], and introduce uθ(x̃, z) to approximate the solutions of parametric PDEs. The architecture of uθ(x̃, z) is shown in Fig.1. A physics-informed loss is used for training, making the proposed method unsupervised. Putting all these together, we propose a new method Meta-Auto-Decoder (MAD) to solve parametric PDEs. For the rest of the subsection, the loss function and the two stages of training will be explained in details. To enable unsupervised learning, given any PDE parameter η ∈ A, the physics-informed loss Lη : U → [0,∞) about Eq.(1) Lη[u] = ‖Lγ1x̃ u‖ 2 L2(Ω) + λbc‖B γ2 x̃ u‖ 2 L2(∂Ω) (2) is considered, where λbc > 0 is a weighting coefficient. The Monte Carlo estimate of Lη[u] is L̂η[u] = 1 Mr Mr∑ j=1 ∥∥∥Lγ1x̃ u(x̃rj)∥∥∥2 2 + λbc Mbc Mbc∑ j=1 ∥∥∥Bγ2x̃ u(x̃bcj )∥∥∥2 2 , (3) where {x̃rj}j∈{1,...,Mr} and {x̃bcj }j∈{1,...,Mbc} are two sets of random sampling points in Ω and ∂Ω, respectively. This task-specific loss L̂η[u] can be computed by automatic differentiation [23], and will be used in the pre-training stage and the fine-tuing stage. 
In the pre-training stage, through minimizing the loss function, a pre-trained model parametrized by θ∗ is learned for all tasks and each task is paired with its own decoded latent vector z∗i . Such a pre-trained model is considered as the meta knowledge as it is learned from the distribution of all tasks and the learned latent vector z∗i is the task-specific knowledge. When solving a new task in the fine-tuning stage, keep the model weight θ∗ fixed and minimize the loss by fine-tuning the latent vector z. Alternatively, we may unfreeze θ and allow it to be fine-tuned along with z. These two fine-tuning strategies give rise to different versions of MAD, which are called MAD-L and MAD-LM, respectively. The corresponding problems of pre-training and fine-tuning are formulated as follows: Pre-training Stage Given N randomly generated PDE parameters η1, . . . , ηN ∈ A, both MAD-L and MAD-LM solve the following optimization problem ({z∗i }i∈{1,...,N}, θ∗) = arg min θ,{zi}i∈{1,...,N} N∑ i=1 ( L̂ηi [uθ(·, zi)] + 1 σ2 ‖zi‖2 ) , (4) where θ∗ is the optimal model weight, {z∗i }i∈{1,...,N} are the optimal latent vectors for different PDE parameters, and L̂ηi is defined in Eq.(3). The regularization 1σ2 ‖zi‖ 2 is added for training stability. Fine-tuning Stage (MAD-L) Given a new PDE parameter ηnew, MAD-L keeps θ∗ fixed, and minimizes the following loss function to get z∗new = arg min z L̂ηnew [uθ∗(·, z)] + 1 σ2 ‖z‖2, (5) then uθ∗(·, z∗new) is the approximate solution of PDEs with parameter ηnew. To speed up convergence, we can set the initial value of z to z∗i obtained during pre-training where ηi is the nearest 2 to ηnew. Fine-tuning Stage (MAD-LM) MAD-LM fine-tunes the model weight θ with the latent vector z simultaneously, and solves the following optimization problem (z∗new, θ ∗ new) = arg min z,θ L̂ηnew [uθ(·, z)] + 1 σ2 ‖z‖2 (6) with initial model weight θ∗. This would produce an alternative approximate solution uθ∗new(·, z ∗ new). The latent vector is initialized in the same way as MAD-L. Remark 1 The MAD method has several key advantages compared with existing methods. Besides being mesh-free and unsupervised, it can deal with heterogeneous PDE parameters painlessly, since η is not taken as the network input, and is encoded into z in an implicit way. Introduction of the meta-knowledge θ∗ would accelerate the fine-tuning process, which can be better understood in the light of the manifold learning perspective. For MAD-LM, the accuracy on OOD tasks is likely to be at least comparable with training from scratch based on PINNs. Although the fine-tuning process of MAD is still slower than one forward inference of a neural network solution mapping, the advantages presented above can make it more suitable for some real applications. Remark 2 If we replace the physics-informed loss by certain supervised loss, the MAD-L method would then coincide with the DeepSDF algorithm [22]. Despite of this, the field of solving parametric PDEs is quite different from 3D shape representation in computer graphics. Moreover, the introduction of model weight fine-tuning in MAD-LM can significantly improve solution accuracy, as is explained intuitively in Sec.2.2,2.3 and validated by numerical experiments in Sec.3. 2.2 Manifold Learning Interpretation of MAD-L We interpret how the MAD-L method works from the manifold learning perspective, which also provides a new interpretation of the DeepSDF algorithm [22]. For the rest of this section, the domain Ω is fixed and excluded from η for simplicity. 
Now, we consider the following scenario.

Scenario 1. The set of solutions G(A) = {G(η) | η ∈ A} ⊂ U is contained in a low-dimensional structure. To be more specific, there is a finite-dimensional space Z = R^l (with l ≪ dim U) and a Lipschitz continuous mapping Ḡ : Z → U such that G(A) ⊆ Ḡ(Z). In other words, for any η ∈ A, there exists z ∈ Z satisfying Ḡ(z) = G(η).

The mapping Ḡ is Lipschitz continuous if and only if there exists some C > 0 such that \|\bar{G}(z) - \bar{G}(z')\|_U \le C \|z - z'\| for all z, z' ∈ Z. This Lipschitz continuity constraint excludes highly irregular mappings such as space-filling curves. When A is a finite-dimensional space and G is Lipschitz continuous, the parametric PDE falls into this scenario (simply take Z = A, Ḡ = G). Since dim Z ≪ dim U (the latter is usually infinite), we may view the mapping Ḡ as a sort of "decoder" with Z the corresponding latent vector space, even though no explicit "encoder" exists. In many cases, Ḡ(Z) ⊂ U forms an embedded submanifold, and therefore our MAD method can be viewed as a manifold-learning approach.

Once the mapping Ḡ is learned as above, then for a given parameter η, searching for the solution u_η in the whole space U is no longer needed. Instead, we may focus on the smaller subset Ḡ(Z), i.e. the class of functions in U parametrized by Z, since u_η = G(η) ∈ Ḡ(Z) holds for any η ∈ A. We then solve the optimization problem

z_\eta = \arg\min_{z} L_\eta[\bar{G}(z)],   (7)

and Ḡ(z_η) is the approximate solution. Assuming that the dimension of Z is chosen (either empirically or through trial and error), the aim is to find the mapping Ḡ. Since such a mapping is usually complex and hard to design by hand, we consider a θ-parametrized version G_θ : Z → U and find the best θ automatically by solving an optimization problem. (Two types of parametrization appear here: the latent vector z parametrizes a point on the manifold Ḡ(Z) or G_θ(Z), while θ parametrizes the shape of the entire manifold G_θ(Z).) G_θ can be constructed in the simple form

G_\theta(z)(\tilde{x}) = u_\theta(\tilde{x}, z),   (8)

where u_θ is a neural network whose input is the concatenation of x̃ ∈ R^d and z ∈ R^l. The next step is to find the optimal model weight θ via training, with the target being G(A) ⊆ G_θ(Z). Assuming that the PDE parameters are generated from a probability distribution η ∼ p_A, then G(η) ∈ G_θ(Z) holds almost surely if and only if

d(\theta) = \mathbb{E}_{\eta \sim p_A}\big[ d_U\big(u_\eta, G_\theta(Z)\big) \big] = \mathbb{E}_{\eta \sim p_A}\Big[ \min_{z} \big\| u_\eta - u_\theta(\cdot, z) \big\|_U \Big] = 0,   (9)

which suggests taking θ* = arg min_θ d(θ). In case we do not have direct access to the exact solutions u_η, the equivalent condition

d'(\theta) = \mathbb{E}_{\eta \sim p_A}\Big[ \min_{z} L_\eta[u_\theta(\cdot, z)] \Big] = 0   (10)

is considered (the equivalence holds under the assumption that the solution of Eq.(1) is unique for all η ∈ A, and that u ∈ U is the solution if and only if L_η[u] = 0), and d'(θ) becomes the alternative loss to be minimized. In the specific implementation, the expectation over η ∼ p_A is estimated by Monte Carlo samples η_1, . . . , η_N, and the optimal network weight is taken to be

\theta^* \approx \arg\min_{\theta} \frac{1}{N} \sum_{i=1}^{N} \min_{z_i} L_{\eta_i}[u_\theta(\cdot, z_i)].   (11)

We further estimate the physics-informed loss L_η by the Monte Carlo method to obtain Eq.(4). After that, when a new PDE parameter η_new ∈ A arrives, a direct adaptation of Eq.(7) gives rise to the fine-tuning process Eq.(5), since u_{\theta^*}(\cdot, z) = G_{\theta^*}(z) ≈ Ḡ(z) holds. An intuitive illustration of how MAD-L works from the manifold learning perspective is given in Fig.2(a).
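For completeness, the equivalence between Eq.(9) and Eq.(10) invoked above follows in a few lines from the uniqueness assumption; the following is only a sketch that ignores attainment-of-infimum issues:

d'(\theta)=0
\;\Longleftrightarrow\; \min_{z} L_{\eta}\big[u_\theta(\cdot,z)\big]=0 \ \text{ for } p_A\text{-a.e. } \eta
\;\Longleftrightarrow\; \exists\, z:\ L_\eta[u_\theta(\cdot,z)]=0
\;\Longleftrightarrow\; \exists\, z:\ u_\theta(\cdot,z)=u_\eta \ \text{ (uniqueness of the solution)}
\;\Longleftrightarrow\; d_U\big(u_\eta,\,G_\theta(Z)\big)=0 \ \text{ for } p_A\text{-a.e. } \eta
\;\Longleftrightarrow\; d(\theta)=0.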
A Visualization Example. An ordinary differential equation (ODE) is used to visualize the pre-training and fine-tuning processes of MAD-L. Consider the following problem on the domain Ω = (−π, π) ⊂ R:

\frac{du}{dx} = 2(x - \eta)\cos\big((x - \eta)^2\big), \qquad u(\pm\pi) = \sin\big((\pm\pi - \eta)^2\big).   (12)

We sample 20 points equidistantly on the interval [0, 2] as the variable ODE parameters, and randomly select one η_new for the fine-tuning stage and the rest \{\eta_i\}_{i\in\{1,\dots,19\}} for the pre-training stage. MAD-L generates a sequence of (\theta^{(m)}, \{z_i^{(m)}\}_{i\in\{1,\dots,19\}}) in the pre-training stage and terminates at m = 200 with the optimum (\theta^*, \{z_i^*\}_{i\in\{1,\dots,19\}}). The infinite-dimensional function space U = C([−π, π]) is projected onto a 2-dimensional plane using Principal Component Analysis (PCA); a short script reproducing this setup is sketched at the end of this section. Fig.3(a) visualizes how G_θ(Z) gradually fits G(A) in the pre-training stage. The set of exact solutions G(A) forms a 1-dimensional manifold (the red solid curve), and the marked points \{G(\eta_i)\}_{i\in\{1,\dots,19\}} correspond to the ODE parameters used for pre-training. Each dotted curve represents the solution set G_{\theta^{(m)}}(Z) obtained by the neural network at the m-th iteration, with the points G_{\theta^{(m)}}(z_i^{(m)}) = u_{\theta^{(m)}}(\cdot, z_i^{(m)}) also marked on the curve. As the number of iterations m increases, the network weight θ = θ^{(m)} updates, making the dotted curves evolve and finally fit the red solid curve, i.e., the target manifold G(A). Fig.3(b) illustrates the fine-tuning process for a given new ODE parameter η_new ∈ A. As in Fig.3(a), the red solid curve represents the set of exact solutions G(A), while the cyan dotted curve represents the solution set G_{\theta^*}(Z) = G_{\theta^{(200)}}(Z) obtained by the pre-trained network. As z = z_{new}^{(m)} updates (i.e., through fine-tuning z), the marked point G_{\theta^*}(z_{new}^{(m)}) moves on the cyan dotted curve, and finally converges to the approximate solution G_{\theta^*}(z_{new}^*) = G_{\theta^*}(z_{new}^{(12)}) \approx G(\eta_{new}).

2.3 Manifold Learning Interpretation of MAD-LM

The MAD-L method is designed for Scenario 1. However, many parametric PDEs encountered in real applications do not fall into this scenario, especially when the parameter set A is an infinite-dimensional function space. Simply applying the MAD-L method to such PDE solving problems would likely lead to unsatisfactory results. MAD-LM, however, works in a more general scenario, and thus has the potential of improved performance for a wider range of parametric PDE problems. This alternative scenario is given as follows.

Scenario 2. The solution set G(A) ⊂ U can be approximated by a set with low-dimensional structure, in the sense that there is a finite-dimensional space Z = R^l (with l ≪ dim U) and a Lipschitz continuous mapping Ḡ : Z → U such that G(A) is contained in the c-neighborhood of Ḡ(Z) ⊂ U, where c is a relatively small constant. In other words, for any η ∈ A, there exists some z ∈ Z satisfying \|\bar{G}(z) - G(\eta)\|_U \le c.

Appendix A gives an example of G(A) that falls into Scenario 2 but not Scenario 1. In this new scenario, a similar derivation leads to the same pre-training stage, which is used to find the initial decoder mapping G_{\theta^*} ≈ Ḡ. In the fine-tuning stage, however, fine-tuning the latent vector z alone will in general not give a satisfactory solution, due to the existence of the c-gap. Therefore, we have to fine-tune the model weight θ and the latent vector z simultaneously, and solve the optimization problem Eq.(6). This produces a new decoder G_{\theta^*_{new}}, and the corresponding approximate solution G_{\theta^*_{new}}(z^*_{new}), specific to the parameter η_new. An intuitive illustration is given in Fig.2(b).
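The visualization example of Eq.(12) is easy to reproduce, since its exact solution is u_η(x) = sin((x − η)^2): differentiating gives du/dx = 2(x − η)cos((x − η)^2), and the boundary values match. The sketch below (NumPy/scikit-learn; the grid resolution and other details are our assumptions rather than the authors' setup) generates the exact solution manifold and its 2-D PCA projection used for the red curve in Fig.3.

```python
import numpy as np
from sklearn.decomposition import PCA

# Exact solutions of Eq.(12): u_eta(x) = sin((x - eta)^2), eta in [0, 2].
xs = np.linspace(-np.pi, np.pi, 256)                 # discretization of C([-pi, pi])
etas = np.linspace(0.0, 2.0, 20)                     # the 20 ODE parameters
solutions = np.stack([np.sin((xs - e) ** 2) for e in etas])   # shape (20, 256)

# Project the discretized function space onto 2 principal components, as in Fig.3;
# the exact solution set traces a 1-D curve in this plane.
pca = PCA(n_components=2)
coords = pca.fit_transform(solutions)                # shape (20, 2)

# During pre-training, the network snapshots u_{theta^(m)}(., z_i^(m)) would be
# evaluated on the same grid and mapped with the same `pca.transform`, giving the
# dotted curves that gradually approach the exact manifold.
```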
3 Numerical Experiments

To evaluate the effectiveness of the MAD method, we apply it to solve three parametric PDEs: (1) Burgers' equation with variable initial conditions; (2) Maxwell's equations with variable equation coefficients; and (3) Laplace's equation with variable solution domains and boundary conditions (heterogeneous PDE parameters). Accuracy is measured by the average relative L2 error (abbreviated as L2 error) between predicted solutions and reference solutions, and we report the mean value and the 95% confidence interval of the L2 error. We compare MAD with other methods including learning from scratch (abbreviated as From-Scratch), Transfer-Learning [7], MAML [16, 17], Reptile [18] and PI-DeepONet [15]. For each experiment, the PDE parameters are divided into two sets: S1 and S2. Parameters in S1 correspond to sample tasks for pre-training, and parameters in S2 correspond to new tasks for fine-tuning. See Appendix B for the default experimental setup; more detailed setups and results for Burgers' equation, Maxwell's equations and Laplace's equation are given in Appendix C, D and E, respectively. Unless otherwise specified, all experiments are conducted with MindSpore (https://www.mindspore.cn/).

3.1 Burgers' Equation

We consider the 1-D Burgers' equation

\frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} = \nu \frac{\partial^2 u}{\partial x^2}, \quad x \in (0, 1),\ t \in (0, 1], \qquad u(x, 0) = u_0(x), \quad x \in (0, 1).   (13)

Eq.(13) models the one-dimensional flow of a viscous fluid, where u is the velocity, ν is the viscosity coefficient, and the initial condition u_0(x) is the varying parameter of the PDE, i.e. η = u_0(x). The initial condition u_0(x) is generated using a Gaussian random field (GRF) [24] according to u_0(x) ∼ N(0, 100(−∆ + 9I)^{−3}) with periodic boundary conditions.

Fig.4(a) shows the mean L2 error of all methods as the number of training iterations increases. All methods converge to nearly the same accuracy (mean L2 error close to 0.013) except for MAD-L, which is probably due to the c-gap introduced in Sec.2.3. In terms of convergence speed, From-Scratch and Transfer-Learning need about 1200 iterations to converge, whereas MAML, Reptile and MAD-LM need about 200 iterations. MAD-LM has the fastest convergence, requiring only 17% of the training iterations of From-Scratch. In this experiment, Transfer-Learning shows no advantage over From-Scratch, which means that Transfer-Learning fails to extract any useful knowledge in the pre-training stage. PI-DeepONet can directly perform inference for unseen PDE parameters in S2, so it has no fine-tuning process. Table 1 compares the mean L2 error of PI-DeepONet and MAD under different numbers of training samples in S1. The results show that PI-DeepONet depends strongly on the number of training samples, and its mean L2 error is remarkably high when S1 is small. Moreover, its mean L2 error is significantly higher than that of MAD-L or MAD-LM in all cases.

In the above experiments, the ηs in S1 and S2 come from the same GRF, so the tasks in the pre-training stage can be assumed to come from the same task distribution as those in the fine-tuning stage. We also investigate the extrapolation capability of MAD, where the tasks in the fine-tuning stage come from a different task distribution than those in the pre-training stage. Specifically, S1 is kept the same as above, but S2 is generated from N(0, 100(−∆ + 25I)^{−2.5}).
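The Gaussian random fields used for u_0 (and for the out-of-distribution variant just described) can be sampled spectrally on a periodic grid. The sketch below shows one common Karhunen-Loève construction for N(0, σ²(−∆ + τ²I)^{−γ}) on [0, 1] with periodic boundary conditions; the truncation level and normalization convention are our assumptions, since such scalings differ between implementations.

```python
import numpy as np

def sample_grf_periodic(n_grid=1024, sigma2=100.0, tau2=9.0, gamma=3.0,
                        n_modes=128, rng=None):
    """Draw one approximate sample from N(0, sigma2 * (-Laplacian + tau2*I)^(-gamma))
    on [0, 1] with periodic boundary conditions via a truncated Karhunen-Loeve expansion.
    The constant (k = 0) mode is dropped so the sample has zero mean."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.linspace(0.0, 1.0, n_grid, endpoint=False)
    u = np.zeros(n_grid)
    for k in range(1, n_modes + 1):
        lam = sigma2 * ((2.0 * np.pi * k) ** 2 + tau2) ** (-gamma)  # covariance eigenvalue
        a, b = rng.standard_normal(2)
        u += np.sqrt(lam) * (a * np.sqrt(2.0) * np.cos(2 * np.pi * k * x)
                             + b * np.sqrt(2.0) * np.sin(2 * np.pi * k * x))
    return x, u

# In-distribution Burgers initial conditions:  sample_grf_periodic(sigma2=100, tau2=9,  gamma=3.0)
# Extrapolation (OOD) initial conditions:      sample_grf_periodic(sigma2=100, tau2=25, gamma=2.5)
```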
Fig.5(a) shows the results of the extrapolation experiments. Since the task distribution has changed, the manifold learned in the pre-training stage fits G(A) less well, so MAD-L exhibits worse accuracy than in Fig.4(a). However, as in Fig.4(a), the convergence speed of MAD-LM in Fig.5(a) is still better than that of the other methods, which shows that the extrapolation capability of MAD is also better than that of the other methods in this example.

3.2 Time-Domain Maxwell's Equations

We consider the time-domain 2-D Maxwell's equations with a point source in the transverse electric (TE) mode [25]:

\frac{\partial E_x}{\partial t} = \frac{1}{\epsilon_0 \epsilon_r} \frac{\partial H_z}{\partial y}, \qquad \frac{\partial E_y}{\partial t} = -\frac{1}{\epsilon_0 \epsilon_r} \frac{\partial H_z}{\partial x}, \qquad \frac{\partial H_z}{\partial t} = -\frac{1}{\mu_0 \mu_r} \Big( \frac{\partial E_y}{\partial x} - \frac{\partial E_x}{\partial y} + J \Big),   (14)

where E_x, E_y and H_z are the electromagnetic fields and J is the point source term. The coefficients ε_0 and µ_0 are the permittivity and permeability in vacuum, respectively, and ε_r and µ_r are the relative permittivity and relative permeability of the media. [26] uses a modified PINNs method to solve Eq.(14) with fixed ε_r = 1 and µ_r = 1. In this paper, however, (ε_r, µ_r) are the variable parameters of the PDEs, i.e., η = (ε_r, µ_r), which corresponds to the media properties in the simulation region.

Fig.4(b) shows that all methods converge to similar accuracy (mean L2 error close to 0.04), and MAD-LM achieves the lowest mean L2 error (0.028). In terms of convergence speed, MAD-L and MAD-LM are clearly superior to the other methods. It is worth noting that MAML fails to converge in the pre-training stage, so its data are missing from Fig.4(b). We conjecture that the singularity introduced by the point source, together with the computation of second-order derivatives, makes the optimization problem very difficult. Reptile also does not show good generalization ability, probably due to the same singularity problem.

We also conduct an extrapolation experiment, where (ε_r, µ_r) in S1 is drawn from [1, 5]^2, but in the fine-tuning stage we only consider the case (ε_r, µ_r) = (7, 7). Because the extrapolated task does not lie in the task distribution of the pre-training stage, the point G(η_new) corresponding to the extrapolated task in the function space U is not on the learned manifold G_{θ*}(Z), which causes MAD-L to converge to poor accuracy. Fig.5(b) indicates that MAD-LM is significantly faster than From-Scratch and Reptile in convergence speed while maintaining high accuracy. Notably, Transfer-Learning also exhibits faster convergence than From-Scratch and Reptile. This is because (ε_r, µ_r) = (4, 5) was randomly selected in the pre-training stage and is very close to (ε_r, µ_r) = (7, 7) in Euclidean distance.

3.3 Laplace's Equation

We consider the 2-D Laplace's equation

\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0, \quad (x, y) \in \Omega, \qquad u(x, y) = g(x, y), \quad (x, y) \in \partial\Omega,   (15)

where the shape of Ω and the boundary condition g(x, y) are the variable parameters of the PDE, i.e. η = (Ω, g(x, y)). In this experiment, we use a triangular domain Ω and vary its shape by randomly choosing three points on the circumference of a unit circle to form the triangle. Letting h be the boundary condition on the unit circle, we use a GRF to generate h ∼ N(0, 10^{3/2}(−∆ + 100I)^{−3}) with periodic boundary conditions. The analytical solution of Laplace's equation on the unit circle can be obtained by the Fourier method. We then use this analytical solution, restricted to the three sides of the triangle, as the boundary condition g(x, y).
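The heterogeneous parameter η = (Ω, g) of this experiment can be generated exactly as described: pick three points on the unit circle, represent the boundary data on the circle by a (smooth, GRF-like) Fourier series, extend it harmonically into the disk, and evaluate the result on the triangle's sides. The sketch below is illustrative; the number of Fourier modes and the coefficient decay are our assumptions rather than the authors' exact GRF.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1) Random triangular domain: three points on the unit circle.
angles = np.sort(rng.uniform(0.0, 2.0 * np.pi, size=3))
vertices = np.stack([np.cos(angles), np.sin(angles)], axis=1)      # shape (3, 2)

# 2) Boundary data on the circle, represented by Fourier coefficients with
#    decaying variance (mimicking a smooth GRF sample; the scaling is an assumption).
n_modes = 32
ns = np.arange(1, n_modes + 1)
a = rng.standard_normal(n_modes) / (ns ** 2)
b = rng.standard_normal(n_modes) / (ns ** 2)

def harmonic_u(x, y):
    """Solution of Laplace's equation on the unit disk with the above boundary data,
    written as the Fourier series u(r, t) = sum_n r^n (a_n cos(nt) + b_n sin(nt))."""
    r = np.hypot(x, y)
    t = np.arctan2(y, x)
    return sum(r ** n * (a[n - 1] * np.cos(n * t) + b[n - 1] * np.sin(n * t)) for n in ns)

# 3) The boundary condition g on each triangle side is this analytical solution
#    restricted to that side, evaluated at collocation points.
def sample_edge(p, q, m=64):
    s = rng.uniform(0.0, 1.0, size=(m, 1))
    pts = (1 - s) * p + s * q
    return pts, harmonic_u(pts[:, 0], pts[:, 1])

edges = [sample_edge(vertices[i], vertices[(i + 1) % 3]) for i in range(3)]
```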
The variable PDE parameters include the shape of the solution domain (the shape of the triangle) and the boundary conditions on the three sides of the triangle, so the PDE parameters here are heterogeneous. MAD can conveniently encode such heterogeneous PDE parameters as latent vectors, whereas PI-DeepONet is unable to handle this case without further adaptations. Fig.4(c) shows that all methods finally converge to similar accuracy (mean L2 error close to 0.001), and MAD-LM achieves the lowest mean L2 error. MAD-L, MAD-LM and Reptile show good generalization capability and excellent convergence speed, whereas Transfer-Learning and MAML do not show any advantage over From-Scratch.

Summary of experimental results. Achieving fast adaptation is the major focus of this paper, with solutions required only to a reasonable precision. Indeed, in many control and inverse problems, higher precision in solving the forward problem (such as parametric PDEs) does not always lead to better results. For example, a solution with about 5% relative error is already sufficient for Maxwell's equations in certain engineering scenarios. We are therefore interested in reducing the cost of solving the PDE with a new set of parameters by using only a relatively small number of iterations, so as to obtain a sufficiently accurate solution in practice. The advantages of MAD (especially MAD-LM) are directly validated in the numerical experiments, as it achieves very fast convergence in the early stage of the training process. Some other applications may focus more on the final precision and be less sensitive to the training cost. Under this alternative criterion, the superiority of the MAD method becomes less obvious in our test cases except for Maxwell's equations, but its performance is still comparable to the other methods.

4 Conclusions

In this paper, a novel mesh-free and unsupervised deep learning method, MAD, is proposed for solving parametric PDEs based on the meta-learning idea. A good initial model is obtained in the pre-training stage by learning useful information from a set of sampled tasks, which is then used to help solve the parametric PDEs quickly in the fine-tuning stage. Moreover, MAD can implicitly encode heterogeneous PDE parameters as latent vectors. The effectiveness of the MAD method is analyzed from the perspective of manifold learning and verified by extensive numerical experiments.

Acknowledgments

This work was supported by National Key R&D Program of China under Grant No. 2021ZD0110400.
1. What is the main contribution of the paper regarding parameterized PDEs?
2. What are the strengths and weaknesses of the proposed method, particularly in its novelty and comparisons with prior works?
3. How does the reviewer assess the effectiveness and efficiency of the method's optimization objectives?
4. What are the limitations of the paper regarding its experimental analysis and potential applications in more complex PDE problems?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The paper proposes a method for learning the solutions of parameterized PDEs, where the PDE takes a set of parameters as an input. The paper introduces a neural network architecture (MAD) which takes coordinates as well as a hidden representation, which is implicitly encoded for each PDE parameter. After training, the proposed method solves an optimization problem either to 1) find the best hidden representation or 2) find the best hidden representation and the best model parameters (i.e., weights and biases) simultaneously.
Strengths And Weaknesses
[+] a simple, but effective idea to extend learning PDE solutions to parameterized PDE settings
[-] novelty: learning hidden representations of parameterized PDEs at training time and exploring the latent space to find the best hidden representation for new parameter instances have been studied in the reduced-order modeling context (Eq (7.3) in Lee and Carlberg, JCP, 2020), which can be considered an equivalent idea to MAD-L. Although the context and the optimization objective (Eq.(5)) are different, the idea of exploring the latent space for the best hidden representation of parameterized PDEs is not new.
Questions
- How does the size of the latent vector affect the performance?
- How many samples (S1) would be required to learn a meaningful latent space?
- What is the computational overhead of solving the optimization problems in MAD-L and MAD-LM, respectively (compared to other baselines)?
- For the extrapolation task, how would the model perform if the testing parameters are generated in the following way: the training parameters are generated in a bounding box and the test parameters are chosen outside of the box?
Limitations
The proposed method, in general, is technically sound, but would be more appreciated if the authors provided more in-depth analysis of the experimental results, e.g., intuitions for setting the size of the latent dimension, the size of the decoder, the choice of optimizers, the choice of parameter set-ups, and so on. I believe all these choices would be PDE-specific and it would be great if the authors could provide some general guidelines for making choices on those matters. Also, it would be great to see whether this method can generalize to more complex PDE problems.
NIPS
Title
Meta-Auto-Decoder for Solving Parametric Partial Differential Equations

Abstract
Many important problems in science and engineering require solving the so-called parametric partial differential equations (PDEs), i.e., PDEs with different physical parameters, boundary conditions, shapes of computation domains, etc. Recently, building learning-based numerical solvers for parametric PDEs has become an emerging new field. One category of methods such as the Deep Galerkin Method (DGM) and Physics-Informed Neural Networks (PINNs) aim to approximate the solution of the PDEs. They are typically unsupervised and mesh-free, but require going through the time-consuming network training process from scratch for each set of parameters of the PDE. Another category of methods such as Fourier Neural Operator (FNO) and Deep Operator Network (DeepONet) try to approximate the solution mapping directly. Being fast with only one forward inference for each PDE parameter without retraining, they often require a large corpus of paired input-output observations drawn from numerical simulations, and most of them need a predefined mesh as well. In this paper, we propose Meta-Auto-Decoder (MAD), a mesh-free and unsupervised deep learning method that enables the pre-trained model to be quickly adapted to equation instances by implicitly encoding (possibly heterogeneous) PDE parameters as latent vectors. The proposed method MAD can be interpreted by manifold learning in infinite-dimensional spaces, granting it a geometric insight. Extensive numerical experiments show that the MAD method exhibits faster convergence speed without losing accuracy than other deep learning-based methods. The project page with code is available: https://gitee.com/mindspore/mindscience/tree/master/MindElec/.

(The first two authors contributed equally to this paper, and Bin Dong is the corresponding author. Zhanhong Ye proposed MAD-L and explained the effectiveness of the MAD method from the perspective of manifold learning. Xiang Huang proposed MAD-LM on the basis of MAD-L and completed all the experiments in the paper. Xiang Huang performed this work during an internship at Huawei.)
1 Introduction

Many important problems in science and engineering, such as inverse problems, control and optimization, risk assessment, and uncertainty quantification [1, 2], require solving the so-called parametric PDEs, i.e., partial differential equations (PDEs) with different physical parameters, boundary conditions, or solution regions. Mathematically, such problems can be formulated as

\mathcal{L}^{\gamma_1}_{\tilde{x}} u = 0, \quad \tilde{x} \in \Omega \subset \mathbb{R}^d, \qquad \mathcal{B}^{\gamma_2}_{\tilde{x}} u = 0, \quad \tilde{x} \in \partial\Omega,   (1)

where \mathcal{L}^{\gamma_1} and \mathcal{B}^{\gamma_2} are partial differential operators parametrized by γ_1 and γ_2, respectively, and x̃ denotes the independent variable in spatiotemporal-dependent PDEs. Given U = U(Ω; R^{d_u}) and the space of parameters A, η = (γ_1, γ_2, Ω) ∈ A is the variable parameter of the PDEs and u ∈ U is the solution of the PDEs. Note that the form of η considered here is very general and allows for heterogeneity, since the shape of the computational domain Ω and the functions defined on this domain or its boundary (which may be involved in γ_1, γ_2) are obviously of different types. Solving parametric PDEs requires learning an infinite-dimensional operator G : A → U that maps any PDE parameter η to its corresponding solution u_η (i.e., the solution mapping).

In recent years, learning-based PDE solvers have become very popular, and it is generally believed that they have the potential to improve efficiency [3, 4, 5]. Learning-based PDE solvers fall into two categories according to the objects approximated by neural networks (NN), i.e., approximation of the solution u_η and approximation of the solution mapping G.

NN as a new ansatz of the solution. These approaches approximate the solution of the PDEs with a neural network and rely mainly on governing equations and boundary conditions (or their variants) to train the network. For example, PINNs [3] and DGM [6] constrain the output of deep neural networks to satisfy the given governing equations and boundary conditions. The Deep Ritz Method (DRM) [7] exploits the variational form of PDEs and can be used to solve PDEs that can be reformulated as equivalent energy minimization problems. Based on a weak formulation of PDEs, the Weak Adversarial Network (WAN) [8] parameterizes the weak solution and test functions as primal and adversarial neural networks, respectively. These neural approximation methods can work in an unsupervised manner, without the need to generate labeled data from conventional computational methods. However, all of these methods treat different PDE parameters as independent tasks and need to retrain the neural network from scratch for each PDE parameter. When a large number of tasks with different PDE parameters need to be solved, these methods are computationally expensive and impractical. To mitigate the retraining cost, E and Yu [7] recommend a transfer learning method that uses a model trained for one task as the initial model for another task. However, according to our experiments, transfer learning is not always effective in improving convergence speed (see Sec.3.1, 3.3).

NN as a new ansatz of the solution mapping. These approaches use neural networks to learn the solution mapping between two infinite-dimensional function spaces [9, 10, 11, 12, 13]. For example, PDE-Nets [9, 10] are among the earliest neural operators; they are specifically designed convolutional neural networks inspired by finite difference approximations of PDEs.
They are able to uncover hidden PDE models from observed dynamical data and perform fast and accurate predictions at the same time. DeepONet [11] uses two subnets to encode the parameters and location variables of the PDEs separately, and merges them together to compute the solution. FNO [13] utilizes the fast Fourier transform to build the neural operator architecture and learn the solution mapping between two infinite-dimensional function spaces. A significant advantage of these approaches is that once the neural network is trained, the prediction time is almost negligible. Although they have demonstrated promising results across a wide range of applications, several issues arise. First, the data acquisition cost is prohibitive in complex physical, biological, or engineering systems, and the generalization ability of these models is poor when there is not enough labeled data [14]. Second, most of these methods [9, 10, 12, 13] require a predefined mesh and utilize the labeled data on the mesh for training and inference. Third, a single forward inference may lead to unsatisfactory generalization, especially in out-of-distribution (OOD) settings (i.e., PDE parameters for training and inference are drawn from different probability distributions). Finally, these operators directly take the PDE parameter η as the network input, which is inconvenient in network implementation when η is heterogeneous. The recently proposed Physics-Informed DeepONet (PI-DeepONet) [15] can learn a mesh-free solution mapping without any labeled data and retraining. However, it needs to collect a large number of training samples in the parameter space A to obtain acceptable accuracy (see Sec.3.1), and it remains inflexible when dealing with heterogeneous PDE parameters.

Meta-Learning. Different from conventional machine learning, which learns to do a given task, meta-learning learns to improve the learning algorithm itself based on multiple learning episodes over a distribution of related tasks. As a result, meta-learning can handle new tasks faster and better. In this field, the Model-Agnostic Meta-Learning (MAML) [16] algorithm and its variants [17, 18, 19] have been widely used. These algorithms try to find an initial model with good generalization ability such that it can be adapted to new tasks with a small number of gradient updates. For example, MAML [16] first trains a meta-model with a good initialization weight on a variety of learning tasks, which is then fine-tuned on a new task through a few steps of gradient descent to obtain the target model. The Reptile [18] algorithm eliminates the second-order derivatives of MAML by repeatedly sampling a task, training on it, and moving the initialization towards the weights trained on that task. Borrowing ideas from meta-learning may inspire new ways to solve parametric PDEs, where different PDE parameters correspond to different tasks. To the best of our knowledge, Meta-MgNet [20] is the first work that views solving parametric PDEs as a meta-learning problem; it is based on hypernetworks and the multigrid algorithm. Meta-MgNet utilizes the similarity between tasks to generate good smoothing operators adaptively and thereby accelerates the solution process, but it is not directly applicable to PDEs for which the multigrid algorithm is unavailable. Recently, the Reptile algorithm has also been used to accelerate PDE solving in [21]. However, MAML and Reptile are not always effective in improving convergence speed (see Sec.3.2 and 3.3).
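As a point of reference for the comparisons in Sec.3, the Reptile baseline described in words above can be summarized in a few lines. The following is a generic sketch of the algorithm of [18]; the inner-loop optimizer, step sizes, and the task/loss interface are our assumptions, not the exact configuration used in the experiments.

```python
import copy
import torch

def reptile_pretrain(model, sample_task, task_loss, meta_steps=1000,
                     inner_steps=10, inner_lr=1e-3, meta_lr=0.1):
    """Reptile: repeatedly adapt a copy of the model to one sampled task with a few
    gradient steps, then move the shared initialization towards the adapted weights."""
    for _ in range(meta_steps):
        task = sample_task()                          # e.g. one PDE parameter eta_i
        adapted = copy.deepcopy(model)
        opt = torch.optim.Adam(adapted.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            loss = task_loss(adapted, task)           # e.g. a physics-informed loss
            opt.zero_grad()
            loss.backward()
            opt.step()
        with torch.no_grad():                         # theta <- theta + eps * (theta_task - theta)
            for p, q in zip(model.parameters(), adapted.parameters()):
                p.add_(meta_lr * (q - p))
    return model
```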
1. What is the focus and contribution of the paper on mesh-free and unsupervised deep learning?
2. What are the strengths of the proposed approach, particularly in terms of its experimental validation?
3. What are the weaknesses of the paper, especially regarding its comparison with other works and its pretraining process?
4. Do you have any concerns or questions about the pretraining stage of the proposed method?
5. Are there any limitations or potential drawbacks of the method that should be considered?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This paper proposes Meta-Auto-Decoder (MAD), a mesh-free and unsupervised deep learning method that uses pre-training by encoding PDE parameters as latent vectors. MAD has a manifold learning interpretation. Experiments show that MAD is faster than other deep learning-based methods.
Strengths And Weaknesses
Strengths:
- Well-motivated and clear presentation of contributions
- Well-validated experimental results
Weaknesses:
- Many concepts are shared with Meta-MgNet, yet the method is not empirically compared with Meta-MgNet
- The pretraining part is a bit unclear to me (detailed in the questions below)
Questions
- When pretraining, what data is used? In the paper, it says "randomly generated". Does this mean both the target and pretraining data are randomly generated? What knowledge is shared among these data?
- When comparing efficiency, is the pretraining stage taken into consideration for MAD?
Limitations
None so far.
NIPS
Title
Early Convolutions Help Transformers See Better

Abstract
Vision transformer (ViT) models exhibit substandard optimizability. In particular, they are sensitive to the choice of optimizer (AdamW vs. SGD), optimizer hyperparameters, and training schedule length. In comparison, modern convolutional neural networks are easier to optimize. Why is this the case? In this work, we conjecture that the issue lies with the patchify stem of ViT models, which is implemented by a stride-p p×p convolution (p = 16 by default) applied to the input image. This large-kernel plus large-stride convolution runs counter to typical design choices of convolutional layers in neural networks. To test whether this atypical design choice causes an issue, we analyze the optimization behavior of ViT models with their original patchify stem versus a simple counterpart where we replace the ViT stem by a small number of stacked stride-two 3×3 convolutions. While the vast majority of computation in the two ViT designs is identical, we find that this small change in early visual processing results in markedly different training behavior in terms of the sensitivity to optimization settings as well as the final model accuracy. Using a convolutional stem in ViT dramatically increases optimization stability and also improves peak performance (by ∼1-2% top-1 accuracy on ImageNet-1k), while maintaining flops and runtime. The improvement can be observed across the wide spectrum of model complexities (from 1G to 36G flops) and dataset scales (from ImageNet-1k to ImageNet-21k). These findings lead us to recommend using a standard, lightweight convolutional stem for ViT models in this regime as a more robust architectural choice compared to the original ViT model design.

1 Introduction

Vision transformer (ViT) models [13] offer an alternative design paradigm to convolutional neural networks (CNNs) [24]. ViTs replace the inductive bias towards local processing inherent in convolutions with global processing performed by multi-headed self-attention [43]. The hope is that this design has the potential to improve performance on vision tasks, akin to the trends observed in natural language processing [11]. While investigating this conjecture, researchers face another unexpected difference between ViTs and CNNs: ViT models exhibit substandard optimizability. ViTs are sensitive to the choice of optimizer [41] (AdamW [27] vs. SGD), to the selection of dataset-specific learning hyperparameters [13, 41], to training schedule length, to network depth [42], etc. These issues render former training recipes and intuitions ineffective and impede research. Convolutional neural networks, in contrast, are exceptionally easy and robust to optimize. Simple training recipes based on SGD, basic data augmentation, and standard hyperparameter values have been widely used for years [19]. Why does this difference exist between ViT and CNN models? In this paper we hypothesize that the issue lies primarily in the early visual processing performed by ViT. ViT "patchifies" the input image into p×p non-overlapping patches to form the transformer encoder's input set. This patchify stem is implemented as a stride-p p×p convolution, with p = 16 as a default value. This large-kernel plus large-stride convolution runs counter to the typical design choices used in CNNs, where best practices have converged to a small stack of stride-two 3×3 kernels as the network's stem (e.g., [30, 36, 39]).
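The two stems being contrasted are each only a few lines of code. The sketch below (PyTorch; the channel widths, normalization, and activation choices are illustrative assumptions, not the paper's exact configuration) writes the ViT patchify stem as a single stride-16, 16×16 convolution and a typical convolutional stem as a stack of stride-two 3×3 convolutions reaching the same 16× downsampling.

```python
import torch.nn as nn

embed_dim = 768  # ViT-B embedding dimension (illustrative)

# Patchify stem: non-overlapping 16x16 patches == one stride-16, 16x16 convolution.
patchify_stem = nn.Conv2d(3, embed_dim, kernel_size=16, stride=16)

# Convolutional stem: a small stack of stride-two 3x3 convolutions (total stride 16),
# followed by a 1x1 convolution to match the transformer embedding dimension.
def conv3x3_down(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, kernel_size=3, stride=2, padding=1, bias=False),
        nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True),
    )

conv_stem = nn.Sequential(
    conv3x3_down(3, 64),
    conv3x3_down(64, 128),
    conv3x3_down(128, 256),
    conv3x3_down(256, 512),
    nn.Conv2d(512, embed_dim, kernel_size=1),
)

# Either stem maps a (B, 3, 224, 224) image to (B, embed_dim, 14, 14); flattening the
# spatial dimensions yields the 196 input tokens of the transformer encoder.
```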
To test this hypothesis, we minimally change the early visual processing of ViT by replacing its patchify stem with a standard convolutional stem consisting of only ∼5 convolutions, see Figure 1. To compensate for the small addition in flops, we remove one transformer block to maintain parity in flops and runtime. We observe that even though the vast majority of the computation in the two ViT designs is identical, this small change in early visual processing results in markedly different training behavior in terms of the sensitivity to optimization settings as well as the final model accuracy. In extensive experiments we show that replacing the ViT patchify stem with a more standard convolutional stem (i) allows ViT to converge faster (§5.1), (ii) enables, for the first time, the use of either AdamW or SGD without a significant drop in accuracy (§5.2), (iii) brings ViT’s stability w.r.t. learning rate and weight decay closer to that of modern CNNs (§5.3), and (iv) yields improvements in ImageNet [10] top-1 error of ∼1-2 percentage points (§6). We consistently observe these improvements across a wide spectrum of model complexities (from 1G flops to 36G flops) and dataset scales (ImageNet-1k to ImageNet-21k). These results show that injecting some convolutional inductive bias into ViTs can be beneficial under commonly studied settings. We did not observe evidence that the hard locality constraint in early layers hampers the representational capacity of the network, as might be feared [9]. In fact we observed the opposite, as ImageNet results improve even with larger-scale models and larger-scale data when using a convolution stem. Moreover, under carefully controlled comparisons, we find that ViTs are only able to surpass state-of-the-art CNNs when equipped with a convolutional stem (§6). We conjecture that restricting convolutions in ViT to early visual processing may be a crucial design choice that strikes a balance between (hard) inductive biases and the representation learning ability of transformer blocks. Evidence comes by comparison to the “hybrid ViT” presented in [13], which uses 40 convolutional layers (most of a ResNet-50) and shows no improvement over the default ViT. This perspective resonates with the findings of [9], who observe that early transformer blocks prefer to learn more local attention patterns than later blocks. Finally we note that exploring the design of hybrid CNN/ViT models is not a goal of this work; rather we demonstrate that simply using a minimal convolutional stem with ViT is sufficient to dramatically change its optimization behavior. In summary, the findings presented in this paper lead us to recommend using a standard, lightweight convolutional stem for ViT models in the analyzed dataset scale and model complexity spectrum as a more robust and higher performing architectural choice compared to the original ViT model design. 2 Related Work Convolutional neural networks (CNNs). The breakthrough performance of the AlexNet [23] CNN [15, 24] on ImageNet classification [10] transformed the field of recognition, leading to the development of higher performing architectures, e.g., [19, 36, 37, 48], and scalable training methods [16, 21]. These architectures are now core components in object detection (e.g., [34]), instance segmentation (e.g., [18]), and semantic segmentation (e.g., [26]). CNNs are typically trained with stochastic gradient descent (SGD) and are widely considered to be easy to optimize. Self-attention in vision models. 
Transformers [43] are revolutionizing natural language processing by enabling scalable training. Transformers use multi-headed self-attention, which performs global information processing and is strictly more general than convolution [6]. Wang et al. [46] show that (single-headed) self-attention is a form of non-local means [2] and that integrating it into a ResNet [19] improves several tasks. Ramachandran et al. [32] explore this direction further with stand-alone self-attention networks for vision. They report difficulties in designing an attention-based network stem and present a bespoke solution that avoids convolutions. In contrast, we demonstrate the benefits of a convolutional stem. Zhao et al. [53] explore a broader set of self-attention operations with hard-coded locality constraints, more similar to standard CNNs. Vision transformer (ViT). Dosovitskiy et al. [13] apply a transformer encoder to image classification with minimal vision-specific modifications. As the counterpart of input token embeddings, they partition the input image into, e.g., 16×16 pixel, non-overlapping patches and linearly project them to the encoder’s input dimension. They report lackluster results when training on ImageNet-1k, but demonstrate state-of-the-art transfer learning when using large-scale pretraining data. ViTs are sensitive to many details of the training recipe, e.g., they benefit greatly from AdamW [27] compared to SGD and require careful learning rate and weight decay selection. ViTs are generally considered to be difficult to optimize compared to CNNs (e.g., see [13, 41, 42]). Further evidence of challenges comes from Chen et al. [4] who report ViT optimization instability in self-supervised learning (unlike with CNNs), and find that freezing the patchify stem at its random initialization improves stability. ViT improvements. ViTs are gaining rapid interest in part because they may offer a novel direction away from CNNs. Touvron et al. [41] show that with more regularization and stronger data augmentation ViT models achieve competitive accuracy on ImageNet-1k alone (cf . [13]). Subsequently, works concurrent with our own explore numerous other ViT improvements. Dominant themes include multi-scale networks [14, 17, 25, 45, 50], increasing depth [42], and locality priors [5, 9, 17, 47, 49]. In [9], d’Ascoli et al. modify multi-head self-attention with a convolutional bias at initialization and show that this prior improves sample efficiency and ImageNet accuracy. Resonating with our work, [5, 17, 47, 49] present models with convolutional stems, but do not analyze optimizability (our focus). Discussion. Unlike the concurrent work on locality priors in ViT, our focus is studying optimizability under minimal ViT modifications in order to derive crisp conclusions. Our perspective brings several novel observations: by adding only ∼5 convolutions to the stem, ViT can be optimized well with either AdamW or SGD (cf . all prior works use AdamW to avoid large drops in accuracy [41]), it becomes less sensitive to the specific choice of learning rate and weight decay, and training converges faster. We also observe a consistent improvement in ImageNet top-1 accuracy across a wide spectrum of model complexities (1G flops to 36G flops) and dataset scales (ImageNet-1k to ImageNet-21k). These results suggest that a (hard) convolutional bias early in the network does not compromise representational capacity, as conjectured in [9], and is beneficial within the scope of this study. 
3 Vision Transformer Architectures Next, we review vision transformers [13] and describe the convolutional stems used in our work. The vision transformer (ViT). ViT first partitions an input image into non-overlapping p×p patches and linearly projects each patch to a d-dimensional feature vector using a learned weight matrix. A patch size of p = 16 and an image size of 224×224 are typical. The resulting patch embeddings (plus positional embeddings and a learned classification token embedding) are processed by a standard transformer encoder [43, 44] followed by a classification head. Using common network nomenclature, we refer to the portion of ViT before the transformer blocks as the network’s stem. ViT’s stem is a specific case of convolution (stride-p, p×p kernel), but we will refer to it as the patchify stem and reserve the terminology of convolutional stem for stems with a more conventional CNN design with multiple layers of overlapping convolutions (i.e., with stride smaller than the kernel size). ViTP models. Prior work proposes ViT models of various sizes, such as ViT-Tiny, ViT-Small, ViT-Base, etc. [13, 41]. To facilitate comparisons with CNNs, which are typically standardized to 1 gigaflop (GF), 2GF, 4GF, 8GF, etc., we modify the original ViT models to obtain models at about these complexities. Details are given in Table 1 (left). For easier comparison with CNNs of similar flops, and to avoid subjective size names, we refer the models by their flops, e.g., ViTP -4GF in place of ViT-Small. We use the P subscript to indicate that these models use the original patchify stem. Convolutional stem design. We adopt a typical minimalist convolutional stem design by stacking 3×3 convolutions [36], followed by a single 1×1 convolution at the end to match the d-dimensional input of the transformer encoder. These stems quickly downsample a 224×224 input image using overlapping strided convolutions to 14×14, matching the number of inputs created by the standard patchify stem. We follow a simple design pattern: all 3×3 convolutions either have stride 2 and double the number of output channels or stride 1 and keep the number of output channels constant. We enforce that the stem accounts for approximately the computation of one transformer block of the corresponding model so that we can easily control for flops by removing one transformer block when using the convolutional stem instead of the patchify stem. Our stem design was chosen to be purposefully simple and we emphasize that it was not designed to maximize model accuracy. ViTC models. To form a ViT model with a convolutional stem, we simply replace the patchify stem with its counterpart convolutional stem and remove one transformer block to compensate for the convolutional stem’s extra flops (see Figure 1). We refer to the modified ViT with a convolutional stem as ViTC . Configurations for ViTC at various complexities are given in Table 1 (right); corresponding ViTP and ViTC models match closely on all complexity metrics including flops and runtime. Convolutional stem details. Our convolutional stem designs use four, four, and six 3×3 convolutions for the 1GF, 4GF, and 18GF models, respectively. The output channels are [24, 48, 96, 192], [48, 96, 192, 384], and [64, 128, 128, 256, 256, 512], respectively. All 3×3 convolutions are followed by batch norm (BN) [21] and then ReLU [29], while the final 1×1 convolution is not, to be consistent with the original patchify stem. 
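As an illustration, here is a minimal PyTorch sketch of the 1GF convolutional stem implied by the description above (four stride-two 3×3 convolutions with BN and ReLU and output channels [24, 48, 96, 192], followed by a single 1×1 projection). This is a reconstruction from the text rather than the authors' released code, and the transformer width `embed_dim` is a placeholder.

```python
import torch
from torch import nn


def conv_stem(channels=(24, 48, 96, 192), embed_dim: int = 192) -> nn.Sequential:
    """Minimal convolutional stem: stacked stride-two 3x3 conv + BN + ReLU blocks,
    followed by a single 1x1 convolution (no BN/ReLU) that matches the transformer width.

    Four stride-two convolutions downsample a 224x224 image to 14x14, matching the
    token count produced by the 16x16 patchify stem.
    """
    layers, in_ch = [], 3
    for out_ch in channels:
        layers += [
            nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        ]
        in_ch = out_ch
    layers.append(nn.Conv2d(in_ch, embed_dim, kernel_size=1))  # project to the transformer input dim
    return nn.Sequential(*layers)


feats = conv_stem()(torch.randn(2, 3, 224, 224))
print(feats.shape)  # torch.Size([2, 192, 14, 14]) -> flattened to 196 tokens downstream
```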
Eventually, matching stem flops to transformer block flops results in an unreasonably large stem, thus ViTC-36GF uses the same stem as ViTC-18GF. Convolutions in ViT. Dosovitskiy et al. [13] also introduced a “hybrid ViT” architecture that blends a modified ResNet [19] (BiT-ResNet [22]) with a transformer encoder. In their hybrid model, the patchify stem is replaced by a partial BiT-ResNet-50 that terminates at the output of the conv4 stage or the output of an extended conv3 stage. These image embeddings replace the standard patchify stem embeddings. This partial BiT-ResNet-50 stem is deep, with 40 convolutional layers. In this work, we explore lightweight convolutional stems that consist of only 5 to 7 convolutions in total, instead of the 40 used by the hybrid ViT. Moreover, we emphasize that the goal of our work is not to explore the hybrid ViT design space, but rather to study the optimizability effects of simply replacing the patchify stem with a minimal convolutional stem that follows standard CNN design practices. 4 Measuring Optimizability It has been noted in the literature that ViT models are challenging to optimize, e.g., they may achieve only modest performance when trained on a mid-size dataset (ImageNet-1k) [13], are sensitive to data augmentation [41] and optimizer choice [41], and may perform poorly when made deeper [42]. We empirically observed the general presence of such difficulties through the course of our experiments and informally refer to such optimization characteristics collectively as optimizability. Models with poor optimizability can yield very different results when hyperparameters are varied, which can lead to seemingly bizarre observations, e.g., removing erasing data augmentation [54] causes a catastrophic drop in ImageNet accuracy in [41]. Quantitative metrics to measure optimizability are needed to allow for more robust comparisons. In this section, we establish the foundations of such comparisons; we extensively test various models using these optimizability measures in §5. Training length stability. Prior works train ViT models for lengthy schedules, e.g., 300 to 400 epochs on ImageNet is typical (at the extreme, [17] trains models for 1000 epochs), since results at a formerly common 100-epoch schedule are substantially worse (2-4% lower top-1 accuracy, see §5.1). In the context of ImageNet, we define top-1 accuracy at 400 epochs as an approximate asymptotic result, i.e., training for longer will not meaningfully improve top-1 accuracy, and we compare it to the accuracy of models trained for only 50, 100, or 200 epochs. We define training length stability as the gap to asymptotic accuracy. Intuitively, it’s a measure of convergence speed. Models that converge faster offer obvious practical benefits, especially when training many model variants. Optimizer stability. Prior works use AdamW [27] to optimize ViT models from random initialization. Results of SGD are not typically presented and we are only aware of Touvron et al. [41]’s report of a dramatic ∼7% drop in ImageNet top-1 accuracy. In contrast, widely used CNNs, such as ResNets, can be optimized equally well with either SGD or AdamW (see §5.2) and SGD (always with momentum) is typically used in practice. SGD has the practical benefit of having fewer hyperparameters (e.g., tuning AdamW’s β2 can be important [3]) and requiring 50% less optimizer state memory, which can ease scaling. We define optimizer stability as the accuracy gap between AdamW and SGD. 
Like training length stability, we use optimizer stability as a proxy for the ease of optimization of a model. Hyperparameter (lr, wd) stability. Learning rate (lr) and weight decay (wd) are among the most important hyperparameters governing optimization with SGD and AdamW. New models and datasets often require a search for their optimal values as the choice can dramatically affect results. It is desirable to have a model and optimizer that yield good results for a wide range of learning rate and weight decay values. We will explore this hyperparameter stability by comparing the error distribution functions (EDFs) [30] of models trained with various choices of lr and wd. In this setting, to create an EDF for a model we randomly sample values of lr and wd and train the model accordingly. Distributional estimates, like those provided by EDFs, give a more complete view of the characteristics of models that point estimates cannot reveal [30, 31]. We will review EDFs in §5.3. Peak performance. The maximum possible performance of each model is the most commonly used metric in previous literature and it is often provided without carefully controlling training details such as data augmentations, regularization methods, number of epochs, and lr, wd tuning. To make more robust comparisons, we define peak performance as the result of a model at 400 epochs using its best-performing optimizer and parsimoniously tuned lr and wd values (details in §6), while fixing justifiably good values for all other variables that have a known impact on training. Peak performance results for ViTs and CNNs under these carefully controlled training settings are presented in §6. 5 Stability Experiments In this section we test the stability of ViT models with the original patchify (P ) stem vs. the convolutional (C) stem defined in §3. For reference, we also train RegNetY [12, 31], a state-of-the-art CNN that is easy to optimize and serves as a reference point for good stability. We conduct experiments using ImageNet-1k [10]’s standard training and validation sets, and report top-1 error. Following [12], for all results, we carefully control training settings and we use a minimal set of data augmentations that still yields strong results, for details see §5.4. In this section, unless noted, for each model we use the optimal lr and wd found under a 50 epoch schedule (see Appendix). 5.1 Training Length Stability We first explore how rapidly networks converge to their asymptotic error on ImageNet-1k, i.e., the highest possible accuracy achievable by training for many epochs. We approximate asymptotic error as a model’s error using a 400 epoch schedule based on observing diminishing returns from 200 to 400. We consider a grid of 24 experiments for ViT: {P , C} stems × {1, 4, 18} GF model sizes × {50, 100, 200, 400} epochs. For reference we also train RegNetY at {1, 4, 16} GF. We use the best optimizer choice for each model (AdamW for ViT models and SGD for RegNetY models). Results. Figure 2 shows the absolute error deltas (∆top-1) between 50, 100, and 200 epoch schedules and asymptotic performance (at 400 epochs). ViTC demonstrates faster convergence than ViTP across the model complexity spectrum, and closes much of the gap to the rate of CNN convergence. The improvement is most significant in the shortest training schedule (50 epoch), e.g., ViTP -1GF has a 10% error delta, while ViTC-1GF reduces this to about 6%. 
This opens the door to applications that execute a large number of short-scheduled experiments, such as neural architecture search. 5.2 Optimizer Stability We next explore how well AdamW and SGD optimize ViT models with the two stem types. We consider the following grid of 48 ViT experiments: {P, C} stems × {1, 4, 18} GF sizes × {50, 100, 200, 400} epochs × {AdamW, SGD} optimizers. As a reference, we also train 24 RegNetY baselines, one for each complexity regime, epoch length, and optimizer. Results. Figure 3 shows the results. As a baseline, RegNetY models show virtually no gap when trained using either SGD or AdamW (the difference of ∼0.1-0.2% is within noise). On the other hand, ViTP models suffer a dramatic drop when trained with SGD across all settings (of up to 10% for larger models and longer training schedules). With a convolutional stem, ViTC models exhibit much smaller error gaps between SGD and AdamW across all training schedules and model complexities, including in larger models and longer schedules, where the gap is reduced to less than 0.2%. In other words, both RegNetY and ViTC can be easily trained via either SGD or AdamW, but ViTP cannot. [Figure: EDF curves (cumulative probability vs. ∆top-1) for 1GF, 4GF, and 18GF models, comparing ViTP, ViTC, and RegNetY.] 5.3 Learning Rate and Weight Decay Stability Next, we characterize how sensitive different model families are to changes in learning rate (lr) and weight decay (wd) under both AdamW and SGD optimizers. To quantify this, we make use of error distribution functions (EDFs) [30]. An EDF is computed by sorting a set of results from low-to-high error and plotting the cumulative proportion of results as error increases; see [30] for details. In particular, we generate EDFs of a model as a function of lr and wd. The intuition is that if a model is robust to these hyperparameter choices, the EDF will be steep (all models will perform similarly), while if the model is sensitive, the EDF will be shallow (performance will be spread out). We test 6 ViT models ({P, C} × {1, 4, 18} GF) and 3 RegNetY models ({1, 4, 16} GF). For each model and each optimizer, we compute an EDF by randomly sampling 64 (lr, wd) pairs with learning rate and weight decay sampled in a fixed-width interval around their optimal values for that model and optimizer (see the Appendix for sampling details). Rather than plotting absolute error in the EDF, we plot ∆top-1 error between the best result (obtained with the optimal lr and wd) and the observed result. Due to the large number of models, we train each for only 50 epochs. Results. Figure 4 shows scatterplots and EDFs for models trained by AdamW. Figure 5 shows SGD results. In all cases we see that ViTC significantly improves the lr and wd stability over ViTP for both optimizers. This indicates that ViTC is more robust than ViTP to the specific choice of lr and wd. 5.4 Experimental Details In all experiments we train with a single half-period cosine learning rate decay schedule with a 5-epoch linear learning rate warm-up [16]. We use a minibatch size of 2048. Crucially, weight decay is not applied to the gain factors found in normalization layers nor to bias parameters anywhere in the model; we found that decaying these parameters can dramatically reduce top-1 accuracy for small models and short schedules. For inference, we use an exponential moving average (EMA) of the model weights (e.g., [8]). The lr and wd used in this section are reported in the Appendix.
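A rough sketch of how the two details flagged above (excluding normalization gains and biases from weight decay, and the 5-epoch linear warm-up followed by a half-period cosine decay) are commonly implemented in PyTorch. This is an illustration under those assumptions rather than the authors' training code; the lr, wd, and epoch values below are placeholders taken loosely from the text.

```python
import math

import torch
from torch import nn


def split_decay_groups(model: nn.Module, weight_decay: float):
    """Put normalization gains and all biases (1-D parameters) in a no-decay group."""
    decay, no_decay = [], []
    for p in model.parameters():
        if not p.requires_grad:
            continue
        (no_decay if p.ndim <= 1 else decay).append(p)
    return [
        {"params": decay, "weight_decay": weight_decay},
        {"params": no_decay, "weight_decay": 0.0},
    ]


model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8))  # stand-in for a real network
optimizer = torch.optim.SGD(split_decay_groups(model, weight_decay=2.4e-5), lr=2.54, momentum=0.9)

# 5-epoch linear warm-up followed by a single half-period cosine decay (per-epoch granularity).
total_epochs, warmup_epochs = 50, 5


def lr_scale(epoch: int) -> float:
    if epoch < warmup_epochs:
        return (epoch + 1) / warmup_epochs
    t = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return 0.5 * (1.0 + math.cos(math.pi * t))


scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_scale)
```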
Other hyperparameters use defaults: SGD momentum is 0.9 and AdamW’s β1 = 0.9 and β2 = 0.999. Regularization and data augmentation. We use a simplified training recipe compared to recent work such as DeiT [41], which we found to be equally effective across a wide spectrum of model complexities and dataset scales. We use AutoAugment [7], mixup [52] (α = 0.8), CutMix [51] (α = 1.0), and label smoothing [38] (ε = 0.1). We prefer this setup because it is similar to common settings for CNNs (e.g., [12]) except for stronger mixup and the addition of CutMix (ViTs benefit from both, while CNNs are not harmed). We compare this recipe to the one used for DeiT models in the Appendix, and observe that our setup provides substantially faster training convergence, likely because we remove repeating augmentation [1, 20], which is known to slow training [1]. 6 Peak Performance A model’s peak performance is the most commonly used metric in network design. It represents what is possible with the best-known-so-far settings and naturally evolves over time. Making fair comparisons between different models is desirable but fraught with difficulty. Simply citing results from prior work may be negatively biased against that work as it was unable to incorporate newer, yet applicable improvements. Here, we strive to provide a fairer comparison between state-of-the-art CNNs, ViTP, and ViTC. We identify a set of factors and then strike a pragmatic balance between which subset to optimize for each model vs. which subset shares a constant value across all models. In our comparison, all models share the same epochs (400), use of model weight EMA, and set of regularization and augmentation methods (as specified in §5.4). All CNNs are trained with SGD with an lr of 2.54 and wd of 2.4e−5; we found this single choice worked well across all models, as similarly observed in [12]. For all ViT models we found AdamW with an lr/wd of 1.0e−3/0.24 was effective, except for the 36GF models. For these larger models we tested a few settings and found an lr/wd of 6.0e−4/0.28 to be more effective for both ViTP-36GF and ViTC-36GF models. For training and inference, ViTs use 224×224 resolution (we do not fine-tune at higher resolutions), while the CNNs use (often larger) optimized resolutions specified in [12, 39]. Given this protocol, we compare ViTP, ViTC, and CNNs across a spectrum of model complexities (1GF to 36GF) and dataset scales (directly training on ImageNet-1k vs. pretraining on ImageNet-21k and then fine-tuning on ImageNet-1k). Results. Figure 6 shows a progression of results. Each plot shows ImageNet-1k val top-1 error vs. ImageNet-1k epoch training time (we time models in PyTorch on 8 32GB Volta GPUs; batch inference time is highly correlated with training time, but we report epoch time as it is easy to interpret and does not depend on the use case). The left plot compares several state-of-the-art CNNs. RegNetY and RegNetZ [12] achieve similar results across the training speed spectrum and outperform EfficientNets [39]. Surprisingly, ResNets [19] are highly competitive at fast runtimes, showing that under a fairer comparison these years-old models perform substantially better than often reported (cf. [39]). The middle plot compares two representative CNNs (ResNet and RegNetY) to ViTs, still using only ImageNet-1k training. The baseline ViTP underperforms RegNetY across the entire model complexity spectrum. To our surprise, ViTP also underperforms ResNets in this regime. ViTC is more competitive and outperforms CNNs in the middle-complexity range. The right plot compares the same models but with ImageNet-21k pretraining (details in Appendix).
In this setting ViT models demonstrate a greater capacity to benefit from the larger-scale data: now ViTC strictly outperforms both ViTP and RegNetY. Interestingly, the original ViTP does not outperform a state-of-the-art CNN even when trained on this much larger dataset. Exact numerical results are presented in Table 2 for reference. This table also highlights that flop counts are not significantly correlated with runtime, but that activations are (see Appendix for more details), as also observed by [12]. E.g., EfficientNets are slow relative to their flops while ViTs are fast. These results verify that ViTC’s convolutional stem improves not only optimization stability, as seen in the previous section, but also peak performance. Moreover, this benefit can be seen across the model complexity and dataset scale spectrum. Perhaps surprisingly, given the recent excitement over ViT, we find that ViTP struggles to compete with state-of-the-art CNNs. We only observe improvements over CNNs when using both large-scale pretraining data and the proposed convolutional stem. 7 Conclusion In this work we demonstrated that the optimization challenges of ViT models are linked to the large-stride, large-kernel convolution in ViT’s patchify stem. The seemingly trivial change of replacing this patchify stem with a simple convolutional stem leads to a remarkable change in optimization behavior. With the convolutional stem, ViT (termed ViTC) converges faster than the original ViT (termed ViTP) (§5.1), trains well with either AdamW or SGD (§5.2), improves learning rate and weight decay stability (§5.3), and improves ImageNet top-1 error by ∼1-2% (§6). These results are consistent across a wide spectrum of model complexities (1GF to 36GF) and dataset scales (ImageNet-1k to ImageNet-21k). Our results indicate that injecting a small dose of convolutional inductive bias into the early stages of ViTs can be hugely beneficial. Looking forward, we are interested in the theoretical foundation of why such a minimal architectural modification can have such a large (positive) impact on optimizability. We are also interested in studying larger models. Our preliminary explorations into 72GF models reveal that the convolutional stem still improves top-1 error; however, we also find that a new form of instability arises that causes training error to randomly spike, especially for ViTC. Acknowledgements. We thank Hervé Jegou, Hugo Touvron, and Kaiming He for valuable feedback.
1. What is the main contribution of the paper regarding the use of convolutional stem/operator in Vision Transformers? 2. What are the strengths of the paper in terms of its experiments, baselines, and simulations? 3. What are the limitations of the paper regarding its scope and potential applications beyond object recognition tasks? 4. How does the reviewer assess the novelty and significance of the paper's findings? 5. Are there any questions or concerns regarding the paper's connection to previous works in computer vision and neuroscience-inspired perceptual systems?
Summary Of The Paper Review
Summary Of The Paper This paper tackles a very important question and pin-points an important dissociation: when the computational complexity of two models is equalized, one with a first-stage convolutional operator and one without (both feeding a Vision Transformer), it is the model with the convolutional stem/operator, which is only needed in the first layer and which exploits the locality structure of images, that shows increased accuracy, faster convergence, and more flexible optimizability. Review Strengths: This paper presents all the correct ingredients for acceptance at any computer vision venue -- despite this being NeurIPS: it presents a strong set of experiments, baselines, and several simulations run under different conditions (e.g., GFlops and hidden units in the ViTs) that have been carefully tuned to show the stability of the results, despite the lack of error bars. Overall this paper should be accepted because it presents a simple idea that has been rigorously tested through various experimental manipulations, and the novelty is presented quite clearly in Figure 1. The take-home message (see above) is simple: if you want better results (accuracy, faster convergence, flexible optimizability) when training a Vision Transformer, add a convolutional stem/layer in the early block of visual processing (somewhat surprisingly similar to how the human visual system operates in early stages of processing). I have no follow-up questions to the authors, but I would like to add some limitations that prevent me from increasing my score (which is already very positive), as this paper had the potential for a more interdisciplinary scope beyond the Transformer movement. Limitations: There is more to computer vision (and vision in general) than object recognition and ImageNet-based experiments. Several follow-up questions arise, such as: how would a convolutional stem aid a perceptual system if the task were not object recognition (the title's use of 'seeing' suggests object recognition is the ultimate standard)? More critically, experiments on robustness to either common corruptions (Hendrycks & Dietterich, ICLR 2019) or adversarial attacks such as the PGD variants should be explored in future work. Not to be nosey, but these are indeed important questions, as several authors have shown that there is a trade-off between accuracy and robustness (Zhang et al., ICML 2019). I do wish the paper added more discussion of the ML-theoretical advantages of convolution when a locality constraint is induced by the image structure, as explored in Poggio (PNAS 2020) and, recently, Deza, Banburski, Liao & Poggio (arXiv 2021). Similarly, it would be nice if the paper connected the importance of the convolutional operator as a biologically plausible inductive bias (e.g., V1 Gabor-like filters; Hubel & Wiesel in the 1960s) with recent trends in computer vision that have started to revisit the importance of inductive biases in two-stage, neuroscience-inspired perceptual systems, such as the work of Dapello, Marques et al. on adversarial robustness (NeurIPS 2020), Parthasarathy & Simoncelli (arXiv 2020) on texture recognition with a self-supervised objective, and Deza & Konkle (arXiv 2020) on scene recognition with a foveated module. Perhaps a small section could be added to the related work covering these and other works that have explored the importance of locality, convolutional operators in CNNs, and two-stage models.
Other works worth adding that are particularly relevant to the theory of locality and convolutional operators as a critical computation in machines: Elsayed et al., "Revisiting Spatial Invariance with Low-Rank Local Connectivity," ICML 2020; Pogodin et al., "Towards Biologically Plausible Convolutional Networks," arXiv 2021; d'Ascoli et al., "Finding the Needle in the Haystack with Convolutions: On the Benefits of Architectural Bias," NeurIPS 2019.
NIPS
Title Early Convolutions Help Transformers See Better Abstract Vision transformer (ViT) models exhibit substandard optimizability. In particular, they are sensitive to the choice of optimizer (AdamW vs. SGD), optimizer hyperparameters, and training schedule length. In comparison, modern convolutional neural networks are easier to optimize. Why is this the case? In this work, we conjecture that the issue lies with the patchify stem of ViT models, which is implemented by a stride-p p×p convolution (p = 16 by default) applied to the input image. This large-kernel plus large-stride convolution runs counter to typical design choices of convolutional layers in neural networks. To test whether this atypical design choice causes an issue, we analyze the optimization behavior of ViT models with their original patchify stem versus a simple counterpart where we replace the ViT stem by a small number of stacked stride-two 3×3 convolutions. While the vast majority of computation in the two ViT designs is identical, we find that this small change in early visual processing results in markedly different training behavior in terms of the sensitivity to optimization settings as well as the final model accuracy. Using a convolutional stem in ViT dramatically increases optimization stability and also improves peak performance (by ∼1-2% top-1 accuracy on ImageNet-1k), while maintaining flops and runtime. The improvement can be observed across the wide spectrum of model complexities (from 1G to 36G flops) and dataset scales (from ImageNet-1k to ImageNet-21k). These findings lead us to recommend using a standard, lightweight convolutional stem for ViT models in this regime as a more robust architectural choice compared to the original ViT model design. 1 Introduction Vision transformer (ViT) models [13] offer an alternative design paradigm to convolutional neural networks (CNNs) [24]. ViTs replace the inductive bias towards local processing inherent in convolutions with global processing performed by multi-headed self-attention [43]. The hope is that this design has the potential to improve performance on vision tasks, akin to the trends observed in natural language processing [11]. While investigating this conjecture, researchers face another unexpected difference between ViTs and CNNs: ViT models exhibit substandard optimizability. ViTs are sensitive to the choice of optimizer [41] (AdamW [27] vs. SGD), to the selection of dataset specific learning hyperparameters [13, 41], to training schedule length, to network depth [42], etc. These issues render former training recipes and intuitions ineffective and impede research. Convolutional neural networks, in contrast, are exceptionally easy and robust to optimize. Simple training recipes based on SGD, basic data augmentation, and standard hyperparameter values have been widely used for years [19]. Why does this difference exist between ViT and CNN models? In this paper we hypothesize that the issues lies primarily in the early visual processing performed by ViT. ViT “patchifies” the input image into p×p non-overlapping patches to form the transformer encoder’s input set. This patchify stem is implemented as a stride-p p×p convolution, with p = 16 as a default value. This large-kernel plus large-stride convolution runs counter to the typical design 35th Conference on Neural Information Processing Systems (NeurIPS 2021). choices used in CNNs, where best-practices have converged to a small stack of stride-two 3×3 kernels as the network’s stem (e.g., [30, 36, 39]). 
To test this hypothesis, we minimally change the early visual processing of ViT by replacing its patchify stem with a standard convolutional stem consisting of only ∼5 convolutions, see Figure 1. To compensate for the small addition in flops, we remove one transformer block to maintain parity in flops and runtime. We observe that even though the vast majority of the computation in the two ViT designs is identical, this small change in early visual processing results in markedly different training behavior in terms of the sensitivity to optimization settings as well as the final model accuracy. In extensive experiments we show that replacing the ViT patchify stem with a more standard convolutional stem (i) allows ViT to converge faster (§5.1), (ii) enables, for the first time, the use of either AdamW or SGD without a significant drop in accuracy (§5.2), (iii) brings ViT’s stability w.r.t. learning rate and weight decay closer to that of modern CNNs (§5.3), and (iv) yields improvements in ImageNet [10] top-1 error of ∼1-2 percentage points (§6). We consistently observe these improvements across a wide spectrum of model complexities (from 1G flops to 36G flops) and dataset scales (ImageNet-1k to ImageNet-21k). These results show that injecting some convolutional inductive bias into ViTs can be beneficial under commonly studied settings. We did not observe evidence that the hard locality constraint in early layers hampers the representational capacity of the network, as might be feared [9]. In fact we observed the opposite, as ImageNet results improve even with larger-scale models and larger-scale data when using a convolution stem. Moreover, under carefully controlled comparisons, we find that ViTs are only able to surpass state-of-the-art CNNs when equipped with a convolutional stem (§6). We conjecture that restricting convolutions in ViT to early visual processing may be a crucial design choice that strikes a balance between (hard) inductive biases and the representation learning ability of transformer blocks. Evidence comes by comparison to the “hybrid ViT” presented in [13], which uses 40 convolutional layers (most of a ResNet-50) and shows no improvement over the default ViT. This perspective resonates with the findings of [9], who observe that early transformer blocks prefer to learn more local attention patterns than later blocks. Finally we note that exploring the design of hybrid CNN/ViT models is not a goal of this work; rather we demonstrate that simply using a minimal convolutional stem with ViT is sufficient to dramatically change its optimization behavior. In summary, the findings presented in this paper lead us to recommend using a standard, lightweight convolutional stem for ViT models in the analyzed dataset scale and model complexity spectrum as a more robust and higher performing architectural choice compared to the original ViT model design. 2 Related Work Convolutional neural networks (CNNs). The breakthrough performance of the AlexNet [23] CNN [15, 24] on ImageNet classification [10] transformed the field of recognition, leading to the development of higher performing architectures, e.g., [19, 36, 37, 48], and scalable training methods [16, 21]. These architectures are now core components in object detection (e.g., [34]), instance segmentation (e.g., [18]), and semantic segmentation (e.g., [26]). CNNs are typically trained with stochastic gradient descent (SGD) and are widely considered to be easy to optimize. Self-attention in vision models. 
Transformers [43] are revolutionizing natural language processing by enabling scalable training. Transformers use multi-headed self-attention, which performs global information processing and is strictly more general than convolution [6]. Wang et al. [46] show that (single-headed) self-attention is a form of non-local means [2] and that integrating it into a ResNet [19] improves several tasks. Ramachandran et al. [32] explore this direction further with stand-alone self-attention networks for vision. They report difficulties in designing an attention-based network stem and present a bespoke solution that avoids convolutions. In contrast, we demonstrate the benefits of a convolutional stem. Zhao et al. [53] explore a broader set of self-attention operations with hard-coded locality constraints, more similar to standard CNNs. Vision transformer (ViT). Dosovitskiy et al. [13] apply a transformer encoder to image classification with minimal vision-specific modifications. As the counterpart of input token embeddings, they partition the input image into, e.g., 16×16 pixel, non-overlapping patches and linearly project them to the encoder’s input dimension. They report lackluster results when training on ImageNet-1k, but demonstrate state-of-the-art transfer learning when using large-scale pretraining data. ViTs are sensitive to many details of the training recipe, e.g., they benefit greatly from AdamW [27] compared to SGD and require careful learning rate and weight decay selection. ViTs are generally considered to be difficult to optimize compared to CNNs (e.g., see [13, 41, 42]). Further evidence of challenges comes from Chen et al. [4] who report ViT optimization instability in self-supervised learning (unlike with CNNs), and find that freezing the patchify stem at its random initialization improves stability. ViT improvements. ViTs are gaining rapid interest in part because they may offer a novel direction away from CNNs. Touvron et al. [41] show that with more regularization and stronger data augmentation ViT models achieve competitive accuracy on ImageNet-1k alone (cf . [13]). Subsequently, works concurrent with our own explore numerous other ViT improvements. Dominant themes include multi-scale networks [14, 17, 25, 45, 50], increasing depth [42], and locality priors [5, 9, 17, 47, 49]. In [9], d’Ascoli et al. modify multi-head self-attention with a convolutional bias at initialization and show that this prior improves sample efficiency and ImageNet accuracy. Resonating with our work, [5, 17, 47, 49] present models with convolutional stems, but do not analyze optimizability (our focus). Discussion. Unlike the concurrent work on locality priors in ViT, our focus is studying optimizability under minimal ViT modifications in order to derive crisp conclusions. Our perspective brings several novel observations: by adding only ∼5 convolutions to the stem, ViT can be optimized well with either AdamW or SGD (cf . all prior works use AdamW to avoid large drops in accuracy [41]), it becomes less sensitive to the specific choice of learning rate and weight decay, and training converges faster. We also observe a consistent improvement in ImageNet top-1 accuracy across a wide spectrum of model complexities (1G flops to 36G flops) and dataset scales (ImageNet-1k to ImageNet-21k). These results suggest that a (hard) convolutional bias early in the network does not compromise representational capacity, as conjectured in [9], and is beneficial within the scope of this study. 
3 Vision Transformer Architectures Next, we review vision transformers [13] and describe the convolutional stems used in our work. The vision transformer (ViT). ViT first partitions an input image into non-overlapping p×p patches and linearly projects each patch to a d-dimensional feature vector using a learned weight matrix. A patch size of p = 16 and an image size of 224×224 are typical. The resulting patch embeddings (plus positional embeddings and a learned classification token embedding) are processed by a standard transformer encoder [43, 44] followed by a classification head. Using common network nomenclature, we refer to the portion of ViT before the transformer blocks as the network’s stem. ViT’s stem is a specific case of convolution (stride-p, p×p kernel), but we will refer to it as the patchify stem and reserve the terminology of convolutional stem for stems with a more conventional CNN design with multiple layers of overlapping convolutions (i.e., with stride smaller than the kernel size). ViTP models. Prior work proposes ViT models of various sizes, such as ViT-Tiny, ViT-Small, ViT-Base, etc. [13, 41]. To facilitate comparisons with CNNs, which are typically standardized to 1 gigaflop (GF), 2GF, 4GF, 8GF, etc., we modify the original ViT models to obtain models at about these complexities. Details are given in Table 1 (left). For easier comparison with CNNs of similar flops, and to avoid subjective size names, we refer the models by their flops, e.g., ViTP -4GF in place of ViT-Small. We use the P subscript to indicate that these models use the original patchify stem. Convolutional stem design. We adopt a typical minimalist convolutional stem design by stacking 3×3 convolutions [36], followed by a single 1×1 convolution at the end to match the d-dimensional input of the transformer encoder. These stems quickly downsample a 224×224 input image using overlapping strided convolutions to 14×14, matching the number of inputs created by the standard patchify stem. We follow a simple design pattern: all 3×3 convolutions either have stride 2 and double the number of output channels or stride 1 and keep the number of output channels constant. We enforce that the stem accounts for approximately the computation of one transformer block of the corresponding model so that we can easily control for flops by removing one transformer block when using the convolutional stem instead of the patchify stem. Our stem design was chosen to be purposefully simple and we emphasize that it was not designed to maximize model accuracy. ViTC models. To form a ViT model with a convolutional stem, we simply replace the patchify stem with its counterpart convolutional stem and remove one transformer block to compensate for the convolutional stem’s extra flops (see Figure 1). We refer to the modified ViT with a convolutional stem as ViTC . Configurations for ViTC at various complexities are given in Table 1 (right); corresponding ViTP and ViTC models match closely on all complexity metrics including flops and runtime. Convolutional stem details. Our convolutional stem designs use four, four, and six 3×3 convolutions for the 1GF, 4GF, and 18GF models, respectively. The output channels are [24, 48, 96, 192], [48, 96, 192, 384], and [64, 128, 128, 256, 256, 512], respectively. All 3×3 convolutions are followed by batch norm (BN) [21] and then ReLU [29], while the final 1×1 convolution is not, to be consistent with the original patchify stem. 
Eventually, matching stem flops to transformer block flops results in an unreasonably large stem, thus ViTC-36GF uses the same stem as ViTC-18GF. Convolutions in ViT. Dosovitskiy et al. [13] also introduced a “hybrid ViT” architecture that blends a modified ResNet [19] (BiT-ResNet [22]) with a transformer encoder. In their hybrid model, the patchify stem is replaced by a partial BiT-ResNet-50 that terminates at the output of the conv4 stage or the output of an extended conv3 stage. These image embeddings replace the standard patchify stem embeddings. This partial BiT-ResNet-50 stem is deep, with 40 convolutional layers. In this work, we explore lightweight convolutional stems that consist of only 5 to 7 convolutions in total, instead of the 40 used by the hybrid ViT. Moreover, we emphasize that the goal of our work is not to explore the hybrid ViT design space, but rather to study the optimizability effects of simply replacing the patchify stem with a minimal convolutional stem that follows standard CNN design practices. 4 Measuring Optimizability It has been noted in the literature that ViT models are challenging to optimize, e.g., they may achieve only modest performance when trained on a mid-size dataset (ImageNet-1k) [13], are sensitive to data augmentation [41] and optimizer choice [41], and may perform poorly when made deeper [42]. We empirically observed the general presence of such difficulties through the course of our experiments and informally refer to such optimization characteristics collectively as optimizability. Models with poor optimizability can yield very different results when hyperparameters are varied, which can lead to seemingly bizarre observations, e.g., removing erasing data augmentation [54] causes a catastrophic drop in ImageNet accuracy in [41]. Quantitative metrics to measure optimizability are needed to allow for more robust comparisons. In this section, we establish the foundations of such comparisons; we extensively test various models using these optimizability measures in §5. Training length stability. Prior works train ViT models for lengthy schedules, e.g., 300 to 400 epochs on ImageNet is typical (at the extreme, [17] trains models for 1000 epochs), since results at a formerly common 100-epoch schedule are substantially worse (2-4% lower top-1 accuracy, see §5.1). In the context of ImageNet, we define top-1 accuracy at 400 epochs as an approximate asymptotic result, i.e., training for longer will not meaningfully improve top-1 accuracy, and we compare it to the accuracy of models trained for only 50, 100, or 200 epochs. We define training length stability as the gap to asymptotic accuracy. Intuitively, it’s a measure of convergence speed. Models that converge faster offer obvious practical benefits, especially when training many model variants. Optimizer stability. Prior works use AdamW [27] to optimize ViT models from random initialization. Results of SGD are not typically presented and we are only aware of Touvron et al. [41]’s report of a dramatic ∼7% drop in ImageNet top-1 accuracy. In contrast, widely used CNNs, such as ResNets, can be optimized equally well with either SGD or AdamW (see §5.2) and SGD (always with momentum) is typically used in practice. SGD has the practical benefit of having fewer hyperparameters (e.g., tuning AdamW’s β2 can be important [3]) and requiring 50% less optimizer state memory, which can ease scaling. We define optimizer stability as the accuracy gap between AdamW and SGD. 
Like training length stability, we use optimizer stability as a proxy for the ease of optimization of a model. Hyperparameter (lr, wd) stability. Learning rate (lr) and weight decay (wd) are among the most important hyperparameters governing optimization with SGD and AdamW. New models and datasets often require a search for their optimal values as the choice can dramatically affect results. It is desirable to have a model and optimizer that yield good results for a wide range of learning rate and weight decay values. We will explore this hyperparameter stability by comparing the error distribution functions (EDFs) [30] of models trained with various choices of lr and wd. In this setting, to create an EDF for a model we randomly sample values of lr and wd and train the model accordingly. Distributional estimates, like those provided by EDFs, give a more complete view of the characteristics of models that point estimates cannot reveal [30, 31]. We will review EDFs in §5.3. Peak performance. The maximum possible performance of each model is the most commonly used metric in previous literature and it is often provided without carefully controlling training details such as data augmentations, regularization methods, number of epochs, and lr, wd tuning. To make more robust comparisons, we define peak performance as the result of a model at 400 epochs using its best-performing optimizer and parsimoniously tuned lr and wd values (details in §6), while fixing justifiably good values for all other variables that have a known impact on training. Peak performance results for ViTs and CNNs under these carefully controlled training settings are presented in §6. 5 Stability Experiments In this section we test the stability of ViT models with the original patchify (P ) stem vs. the convolutional (C) stem defined in §3. For reference, we also train RegNetY [12, 31], a state-of-the-art CNN that is easy to optimize and serves as a reference point for good stability. We conduct experiments using ImageNet-1k [10]’s standard training and validation sets, and report top-1 error. Following [12], for all results, we carefully control training settings and we use a minimal set of data augmentations that still yields strong results, for details see §5.4. In this section, unless noted, for each model we use the optimal lr and wd found under a 50 epoch schedule (see Appendix). 5.1 Training Length Stability We first explore how rapidly networks converge to their asymptotic error on ImageNet-1k, i.e., the highest possible accuracy achievable by training for many epochs. We approximate asymptotic error as a model’s error using a 400 epoch schedule based on observing diminishing returns from 200 to 400. We consider a grid of 24 experiments for ViT: {P , C} stems × {1, 4, 18} GF model sizes × {50, 100, 200, 400} epochs. For reference we also train RegNetY at {1, 4, 16} GF. We use the best optimizer choice for each model (AdamW for ViT models and SGD for RegNetY models). Results. Figure 2 shows the absolute error deltas (∆top-1) between 50, 100, and 200 epoch schedules and asymptotic performance (at 400 epochs). ViTC demonstrates faster convergence than ViTP across the model complexity spectrum, and closes much of the gap to the rate of CNN convergence. The improvement is most significant in the shortest training schedule (50 epoch), e.g., ViTP -1GF has a 10% error delta, while ViTC-1GF reduces this to about 6%. 
This opens the door to applications that execute a large number of short-scheduled experiments, such as neural architecture search. 5.2 Optimizer Stability We next explore how well AdamW and SGD optimize ViT models with the two stem types. We consider the following grid of 48 ViT experiments: {P , C} stems × {1, 4, 18} GF sizes × {50, 100, 200, 400} epochs × {AdamW, SGD} optimizers. As a reference, we also train 24 RegNetY baselines, one for each complexity regime, epoch length, and optimizer. Results. Figure 3 shows the results. As a baseline, RegNetY models show virtually no gap when trained using either SGD or AdamW (the difference ∼0.1-0.2% is within noise). On the other hand, ViTP models suffer a dramatic drop when trained with SGD across all settings (of up to 10% for larger models and longer training schedules). With a convolutional stem, ViTC models exhibit much smaller error gaps between SGD and AdamW across all training schedules and model complexities, including in larger models and longer schedules, where the gap is reduced to less than 0.2%. In other words, both RegNetY and ViTC can be easily trained via either SGD or AdamW, but ViTP cannot. 0.0 0.2 0.4 0.6 0.8 1.0 cu m ul at iv e pr ob . 1GF models ViTP ViTC RegNetY 4GF models ViTP ViTC RegNetY 18GF models ViTP ViTC RegNetY 5.3 Learning Rate and Weight Decay Stability Next, we characterize how sensitive different model families are to changes in learning rate (lr) and weight decay (wd) under both AdamW and SGD optimizers. To quantify this, we make use of error distribution functions (EDFs) [30]. An EDF is computed by sorting a set of results from low-to-high error and plotting the cumulative proportion of results as error increases, see [30] for details. In particular, we generate EDFs of a model as a function of lr and wd. The intuition is that if a model is robust to these hyperparameter choices, the EDF will be steep (all models will perform similarly), while if the model is sensitive, the EDF will be shallow (performance will be spread out). We test 6 ViT models ({P , C} × {1, 4, 18} GF) and 3 RegNetY models ({1, 4, 16} GF). For each model and each optimizer, we compute an EDF by randomly sampling 64 (lr, wd) pairs with learning rate and weight decay sampled in a fixed width interval around their optimal values for that model and optimizer (see the Appendix for sampling details). Rather than plotting absolute error in the EDF, we plot ∆top-1 error between the best result (obtained with the optimal lr and wd) and the observed result. Due to the large number of models, we train each for only 50 epochs. Results. Figure 4 shows scatterplots and EDFs for models trained by AdamW. Figure 5 shows SGD results. In all cases we see that ViTC significantly improves the lr and wd stability over ViTP for both optimizers. This indicates that the lr and wd are easier to optimize for ViTC than for ViTP . 5.4 Experimental Details In all experiments we train with a single half-period cosine learning rate decay schedule with a 5-epoch linear learning rate warm-up [16]. We use a minibatch size of 2048. Crucially, weight decay is not applied to the gain factors found in normalization layers nor to bias parameters anywhere in the model; we found that decaying these parameters can dramatically reduce top-1 accuracy for small models and short schedules. For inference, we use an exponential moving average (EMA) of the model weights (e.g., [8]). The lr and wd used in this section are reported in the Appendix. 
Other hyperparameters use defaults: SGD momentum is 0.9 and AdamW’s β1 = 0.9 and β2 = 0.999. Regularization and data augmentation. We use a simplified training recipe compared to recent work such as DeiT [41], which we found to be equally effective across a wide spectrum of model complexities and dataset scales. We use AutoAugment [7], mixup [52] (α = 0.8), CutMix [51] (α = 1.0), and label smoothing [38] ( = 0.1). We prefer this setup because it is similar to common settings for CNNs (e.g., [12]) except for stronger mixup and the addition of CutMix (ViTs benefit from both, while CNNs are not harmed). We compare this recipe to the one used for DeiT models in the Appendix, and observe that our setup provides substantially faster training convergence likely because we remove repeating augmentation [1, 20], which is known to slow training [1]. 6 Peak Performance A model’s peak performance is the most commonly used metric in network design. It represents what is possible with the best-known-so-far settings and naturally evolves over time. Making fair comparisons between different models is desirable but fraught with difficulty. Simply citing results from prior work may be negatively biased against that work as it was unable to incorporate newer, yet applicable improvements. Here, we strive to provide a fairer comparison between state-of-the-art CNNs, ViTP , and ViTC . We identify a set of factors and then strike a pragmatic balance between which subset to optimize for each model vs. which subset share a constant value across all models. In our comparison, all models share the same epochs (400), use of model weight EMA, and set of regularization and augmentation methods (as specified in §5.4). All CNNs are trained with SGD with lr of 2.54 and wd of 2.4e−5; we found this single choice worked well across all models, as similarly observed in [12]. For all ViT models we found AdamW with a lr/wd of 1.0e−3/0.24 was effective, except for the 36GF models. For these larger models we tested a few settings and found a lr/wd of 6.0e−4/0.28 to be more effective for both ViTP -36GF and ViTC-36GF models. For training and inference, ViTs use 224×224 resolution (we do not fine-tune at higher resolutions), while the CNNs use (often larger) optimized resolutions specified in [12, 39]. Given this protocol, we compare ViTP , ViTC , and CNNs across a spectrum of model complexities (1GF to 36GF) and dataset scales (directly training on ImageNet-1k vs. pretraining on ImageNet-21k and then fine-tuning on ImageNet-1k). Results. Figure 6 shows a progression of results. Each plot shows ImageNet-1k val top-1 error vs. ImageNet-1k epoch training time.1 The left plot compares several state-of-the-art CNNs. RegNetY and RegNetZ [12] achieve similar results across the training speed spectrum and outperform EfficientNets [39]. Surprisingly, ResNets [19] are highly competitive at fast runtimes, showing that under a fairer comparison these years-old models perform substantially better than often reported (cf . [39]). The middle plot compares two representative CNNs (ResNet and RegNetY) to ViTs, still using only ImageNet-1k training. The baseline ViTP underperforms RegNetY across the entire model complexity spectrum. To our surprise, ViTP also underperforms ResNets in this regime. ViTC is more competitive and outperforms CNNs in the middle-complexity range. The right plot compares the same models but with ImageNet-21k pretraining (details in Appendix). 
In this setting ViT models demonstrate a greater capacity to benefit from the larger-scale data: now ViTC strictly outperforms both ViTP and RegNetY. Interestingly, the original ViTP does not outperform a state-of-the-art CNN even when trained on this much larger dataset. Numerical results are presented in Table 2 for reference to exact values. This table also highlights that flop counts are not significantly correlated with runtime, but that activations are (see Appendix for more details), as also observed by [12]. E.g., EfficientNets are slow relative to their flops while ViTs are fast. (Footnote 1: We time models in PyTorch on 8 32GB Volta GPUs. We note that batch inference time is highly correlated with training time, but we report epoch time as it is easy to interpret and does not depend on the use case.) These results verify that ViTC ’s convolutional stem improves not only optimization stability, as seen in the previous section, but also peak performance. Moreover, this benefit can be seen across the model complexity and dataset scale spectrum. Perhaps surprisingly, given the recent excitement over ViT, we find that ViTP struggles to compete with state-of-the-art CNNs. We only observe improvements over CNNs when using both large-scale pretraining data and the proposed convolutional stem. 7 Conclusion In this work we demonstrated that the optimization challenges of ViT models are linked to the large-stride, large-kernel convolution in ViT’s patchify stem. The seemingly trivial change of replacing this patchify stem with a simple convolutional stem leads to a remarkable change in optimization behavior. With the convolutional stem, ViT (termed ViTC) converges faster than the original ViT (termed ViTP ) (§5.1), trains well with either AdamW or SGD (§5.2), improves learning rate and weight decay stability (§5.3), and improves ImageNet top-1 error by ∼1-2% (§6). These results are consistent across a wide spectrum of model complexities (1GF to 36GF) and dataset scales (ImageNet-1k to ImageNet-21k). Our results indicate that injecting a small dose of convolutional inductive bias into the early stages of ViTs can be hugely beneficial. Looking forward, we are interested in the theoretical foundation of why such a minimal architectural modification can have such large (positive) impact on optimizability. We are also interested in studying larger models. Our preliminary explorations into 72GF models reveal that the convolutional stem still improves top-1 error; however, we also find that a new form of instability arises that causes training error to randomly spike, especially for ViTC . Acknowledgements. We thank Hervé Jegou, Hugo Touvron, and Kaiming He for valuable feedback.
1. What is the focus of the paper regarding ViT models, and what are the authors' findings?
2. What are the strengths of the paper, particularly in its analysis and comparisons with other works?
3. Do you have any concerns or criticisms regarding the paper's content or methodology?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper
The authors perform an extensive study investigating the stem of ViT models and replacing it with one that consists of a stack of downsampling 3x3 convolutions. The study reveals that this simple change makes ViT training more robust, faster to converge, but also scales better (cf. i21k pre-training).
Review
I have to admit that I was initially set to reject the paper, because I have counter-evidence of the main claim that "ViT cannot be optimized by sgdm": I have been training original patch-stem ViT models with SGDM for a while, to the same accuracy as AdamW (but unpublished), and am certain that the negative results reported in DeiT are due to confounders, such as not properly tuning sgdm. However, carefully reading the paper made me change my mind by 180°. This paper is really good and needs to be accepted. Many statements reflect my thoughts: It is crucial to tune lr/wd, or one gets misleading results such as "random erasing is necessary", and actually tuning those reveals it (and other tricks) to be unnecessary. Especially for ViT and EfficientNet models, flops and #params are not a good measurement to put on the X axis. We need to compare architectures in ideal settings for each. I find the very last presented finding, Fig6(right), to be the most interesting: the ViT_C seems to scale better than both ViT_P and Re{s/g}Nets. This is very promising.
Now some criticism on the paper:
- line 112: I would argue that the conv-stem is not "strictly beneficial" since: A) the patchify stem has led to some interesting recent ideas which are simply not possible with a conv-stem, such as using different random augmentations per patch. B) It re-introduces BatchNorm into the model, with all of its problems. I wish the authors had tried using GroupNorm here too.
- line 94: the cited papers do not really support "optimization difficulty".
- Why do we need the final 1x1 conv in the stem? Couldn't we simply have the last 3x3 conv output the required number of channels? That would further simplify the design.
- line 139: "we do not optimize the stem to maximize model accuracy". This is not really true to spirit, given AppendixA. Even if the best stem ends up being the one that authors first thought of. What if a different one from AppendixA was found to perform substantially better? Surely, that one would have been proposed instead. Simply drop this sentence.
- Reusing optimal lr/wd from 50ep for up to 400ep. This is known to be suboptimal (cf AdamW paper itself) and generally for much longer training, wd needs to be decreased. It likely does not invalidate the results though.
- It is not explicitly stated what lr/wd are used in 5.1 and 5.2. Is it the best ones found in 5.3? If not, this might be an issue.
- Does the SGDM sweep also use "decoupled" WD as per AdamW? Do the authors actually use decoupled WD from AdamW? (That is not commonly used in practice, but better for such sweeps.) By "decoupled", I mean doing w_t+1 = w_t - lr * grad w_t - wd * w_t, as opposed to w_t+1 = w_t - lr * grad w_t - wd * lr * w_t, see also the AdamW paper.
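For readers unfamiliar with the distinction the reviewer draws, the two SGD weight-decay variants can be written as one-line updates (momentum omitted); this is an illustrative sketch, not code from the paper or the review.

```python
# Decoupled vs. lr-coupled weight decay for a plain SGD step (momentum omitted).
import numpy as np

def sgd_step_decoupled(w, grad, lr, wd):
    # w_{t+1} = w_t - lr * grad - wd * w_t
    return w - lr * grad - wd * w

def sgd_step_coupled(w, grad, lr, wd):
    # w_{t+1} = w_t - lr * grad - wd * lr * w_t   (classic L2 regularization)
    return w - lr * grad - lr * wd * w

w, g = np.array([1.0, -2.0]), np.array([0.1, 0.3])
print(sgd_step_decoupled(w, g, lr=0.5, wd=1e-4))
print(sgd_step_coupled(w, g, lr=0.5, wd=1e-4))
```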
NIPS
Title Early Convolutions Help Transformers See Better Abstract Vision transformer (ViT) models exhibit substandard optimizability. In particular, they are sensitive to the choice of optimizer (AdamW vs. SGD), optimizer hyperparameters, and training schedule length. In comparison, modern convolutional neural networks are easier to optimize. Why is this the case? In this work, we conjecture that the issue lies with the patchify stem of ViT models, which is implemented by a stride-p p×p convolution (p = 16 by default) applied to the input image. This large-kernel plus large-stride convolution runs counter to typical design choices of convolutional layers in neural networks. To test whether this atypical design choice causes an issue, we analyze the optimization behavior of ViT models with their original patchify stem versus a simple counterpart where we replace the ViT stem by a small number of stacked stride-two 3×3 convolutions. While the vast majority of computation in the two ViT designs is identical, we find that this small change in early visual processing results in markedly different training behavior in terms of the sensitivity to optimization settings as well as the final model accuracy. Using a convolutional stem in ViT dramatically increases optimization stability and also improves peak performance (by ∼1-2% top-1 accuracy on ImageNet-1k), while maintaining flops and runtime. The improvement can be observed across the wide spectrum of model complexities (from 1G to 36G flops) and dataset scales (from ImageNet-1k to ImageNet-21k). These findings lead us to recommend using a standard, lightweight convolutional stem for ViT models in this regime as a more robust architectural choice compared to the original ViT model design. 1 Introduction Vision transformer (ViT) models [13] offer an alternative design paradigm to convolutional neural networks (CNNs) [24]. ViTs replace the inductive bias towards local processing inherent in convolutions with global processing performed by multi-headed self-attention [43]. The hope is that this design has the potential to improve performance on vision tasks, akin to the trends observed in natural language processing [11]. While investigating this conjecture, researchers face another unexpected difference between ViTs and CNNs: ViT models exhibit substandard optimizability. ViTs are sensitive to the choice of optimizer [41] (AdamW [27] vs. SGD), to the selection of dataset-specific learning hyperparameters [13, 41], to training schedule length, to network depth [42], etc. These issues render former training recipes and intuitions ineffective and impede research. Convolutional neural networks, in contrast, are exceptionally easy and robust to optimize. Simple training recipes based on SGD, basic data augmentation, and standard hyperparameter values have been widely used for years [19]. Why does this difference exist between ViT and CNN models? In this paper we hypothesize that the issue lies primarily in the early visual processing performed by ViT. ViT “patchifies” the input image into p×p non-overlapping patches to form the transformer encoder’s input set. This patchify stem is implemented as a stride-p p×p convolution, with p = 16 as a default value. This large-kernel plus large-stride convolution runs counter to the typical design choices used in CNNs, where best-practices have converged to a small stack of stride-two 3×3 kernels as the network’s stem (e.g., [30, 36, 39]).
To test this hypothesis, we minimally change the early visual processing of ViT by replacing its patchify stem with a standard convolutional stem consisting of only ∼5 convolutions, see Figure 1. To compensate for the small addition in flops, we remove one transformer block to maintain parity in flops and runtime. We observe that even though the vast majority of the computation in the two ViT designs is identical, this small change in early visual processing results in markedly different training behavior in terms of the sensitivity to optimization settings as well as the final model accuracy. In extensive experiments we show that replacing the ViT patchify stem with a more standard convolutional stem (i) allows ViT to converge faster (§5.1), (ii) enables, for the first time, the use of either AdamW or SGD without a significant drop in accuracy (§5.2), (iii) brings ViT’s stability w.r.t. learning rate and weight decay closer to that of modern CNNs (§5.3), and (iv) yields improvements in ImageNet [10] top-1 error of ∼1-2 percentage points (§6). We consistently observe these improvements across a wide spectrum of model complexities (from 1G flops to 36G flops) and dataset scales (ImageNet-1k to ImageNet-21k). These results show that injecting some convolutional inductive bias into ViTs can be beneficial under commonly studied settings. We did not observe evidence that the hard locality constraint in early layers hampers the representational capacity of the network, as might be feared [9]. In fact we observed the opposite, as ImageNet results improve even with larger-scale models and larger-scale data when using a convolution stem. Moreover, under carefully controlled comparisons, we find that ViTs are only able to surpass state-of-the-art CNNs when equipped with a convolutional stem (§6). We conjecture that restricting convolutions in ViT to early visual processing may be a crucial design choice that strikes a balance between (hard) inductive biases and the representation learning ability of transformer blocks. Evidence comes by comparison to the “hybrid ViT” presented in [13], which uses 40 convolutional layers (most of a ResNet-50) and shows no improvement over the default ViT. This perspective resonates with the findings of [9], who observe that early transformer blocks prefer to learn more local attention patterns than later blocks. Finally we note that exploring the design of hybrid CNN/ViT models is not a goal of this work; rather we demonstrate that simply using a minimal convolutional stem with ViT is sufficient to dramatically change its optimization behavior. In summary, the findings presented in this paper lead us to recommend using a standard, lightweight convolutional stem for ViT models in the analyzed dataset scale and model complexity spectrum as a more robust and higher performing architectural choice compared to the original ViT model design. 2 Related Work Convolutional neural networks (CNNs). The breakthrough performance of the AlexNet [23] CNN [15, 24] on ImageNet classification [10] transformed the field of recognition, leading to the development of higher performing architectures, e.g., [19, 36, 37, 48], and scalable training methods [16, 21]. These architectures are now core components in object detection (e.g., [34]), instance segmentation (e.g., [18]), and semantic segmentation (e.g., [26]). CNNs are typically trained with stochastic gradient descent (SGD) and are widely considered to be easy to optimize. Self-attention in vision models. 
Transformers [43] are revolutionizing natural language processing by enabling scalable training. Transformers use multi-headed self-attention, which performs global information processing and is strictly more general than convolution [6]. Wang et al. [46] show that (single-headed) self-attention is a form of non-local means [2] and that integrating it into a ResNet [19] improves several tasks. Ramachandran et al. [32] explore this direction further with stand-alone self-attention networks for vision. They report difficulties in designing an attention-based network stem and present a bespoke solution that avoids convolutions. In contrast, we demonstrate the benefits of a convolutional stem. Zhao et al. [53] explore a broader set of self-attention operations with hard-coded locality constraints, more similar to standard CNNs. Vision transformer (ViT). Dosovitskiy et al. [13] apply a transformer encoder to image classification with minimal vision-specific modifications. As the counterpart of input token embeddings, they partition the input image into, e.g., 16×16 pixel, non-overlapping patches and linearly project them to the encoder’s input dimension. They report lackluster results when training on ImageNet-1k, but demonstrate state-of-the-art transfer learning when using large-scale pretraining data. ViTs are sensitive to many details of the training recipe, e.g., they benefit greatly from AdamW [27] compared to SGD and require careful learning rate and weight decay selection. ViTs are generally considered to be difficult to optimize compared to CNNs (e.g., see [13, 41, 42]). Further evidence of challenges comes from Chen et al. [4] who report ViT optimization instability in self-supervised learning (unlike with CNNs), and find that freezing the patchify stem at its random initialization improves stability. ViT improvements. ViTs are gaining rapid interest in part because they may offer a novel direction away from CNNs. Touvron et al. [41] show that with more regularization and stronger data augmentation ViT models achieve competitive accuracy on ImageNet-1k alone (cf . [13]). Subsequently, works concurrent with our own explore numerous other ViT improvements. Dominant themes include multi-scale networks [14, 17, 25, 45, 50], increasing depth [42], and locality priors [5, 9, 17, 47, 49]. In [9], d’Ascoli et al. modify multi-head self-attention with a convolutional bias at initialization and show that this prior improves sample efficiency and ImageNet accuracy. Resonating with our work, [5, 17, 47, 49] present models with convolutional stems, but do not analyze optimizability (our focus). Discussion. Unlike the concurrent work on locality priors in ViT, our focus is studying optimizability under minimal ViT modifications in order to derive crisp conclusions. Our perspective brings several novel observations: by adding only ∼5 convolutions to the stem, ViT can be optimized well with either AdamW or SGD (cf . all prior works use AdamW to avoid large drops in accuracy [41]), it becomes less sensitive to the specific choice of learning rate and weight decay, and training converges faster. We also observe a consistent improvement in ImageNet top-1 accuracy across a wide spectrum of model complexities (1G flops to 36G flops) and dataset scales (ImageNet-1k to ImageNet-21k). These results suggest that a (hard) convolutional bias early in the network does not compromise representational capacity, as conjectured in [9], and is beneficial within the scope of this study. 
3 Vision Transformer Architectures Next, we review vision transformers [13] and describe the convolutional stems used in our work. The vision transformer (ViT). ViT first partitions an input image into non-overlapping p×p patches and linearly projects each patch to a d-dimensional feature vector using a learned weight matrix. A patch size of p = 16 and an image size of 224×224 are typical. The resulting patch embeddings (plus positional embeddings and a learned classification token embedding) are processed by a standard transformer encoder [43, 44] followed by a classification head. Using common network nomenclature, we refer to the portion of ViT before the transformer blocks as the network’s stem. ViT’s stem is a specific case of convolution (stride-p, p×p kernel), but we will refer to it as the patchify stem and reserve the terminology of convolutional stem for stems with a more conventional CNN design with multiple layers of overlapping convolutions (i.e., with stride smaller than the kernel size). ViTP models. Prior work proposes ViT models of various sizes, such as ViT-Tiny, ViT-Small, ViT-Base, etc. [13, 41]. To facilitate comparisons with CNNs, which are typically standardized to 1 gigaflop (GF), 2GF, 4GF, 8GF, etc., we modify the original ViT models to obtain models at about these complexities. Details are given in Table 1 (left). For easier comparison with CNNs of similar flops, and to avoid subjective size names, we refer to the models by their flops, e.g., ViTP -4GF in place of ViT-Small. We use the P subscript to indicate that these models use the original patchify stem. Convolutional stem design. We adopt a typical minimalist convolutional stem design by stacking 3×3 convolutions [36], followed by a single 1×1 convolution at the end to match the d-dimensional input of the transformer encoder. These stems quickly downsample a 224×224 input image to 14×14 using overlapping strided convolutions, matching the number of inputs created by the standard patchify stem. We follow a simple design pattern: all 3×3 convolutions either have stride 2 and double the number of output channels or stride 1 and keep the number of output channels constant. We enforce that the stem accounts for approximately the computation of one transformer block of the corresponding model so that we can easily control for flops by removing one transformer block when using the convolutional stem instead of the patchify stem. Our stem design was chosen to be purposefully simple and we emphasize that it was not designed to maximize model accuracy. ViTC models. To form a ViT model with a convolutional stem, we simply replace the patchify stem with its counterpart convolutional stem and remove one transformer block to compensate for the convolutional stem’s extra flops (see Figure 1). We refer to the modified ViT with a convolutional stem as ViTC . Configurations for ViTC at various complexities are given in Table 1 (right); corresponding ViTP and ViTC models match closely on all complexity metrics including flops and runtime. Convolutional stem details. Our convolutional stem designs use four, four, and six 3×3 convolutions for the 1GF, 4GF, and 18GF models, respectively. The output channels are [24, 48, 96, 192], [48, 96, 192, 384], and [64, 128, 128, 256, 256, 512], respectively. All 3×3 convolutions are followed by batch norm (BN) [21] and then ReLU [29], while the final 1×1 convolution is not, to be consistent with the original patchify stem.
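To make the design pattern above concrete, a minimal PyTorch sketch of the 4GF convolutional stem is given below. It follows the stated channel progression [48, 96, 192, 384] with stride-2 3×3 convolutions, BN and ReLU, and a final 1×1 projection; the embedding dimension d = 384 and the exact module composition are assumptions on our part, not the authors' reference implementation.

```python
# A minimal sketch of the ViT_C 4GF convolutional stem described above (assumed details).
import torch
from torch import nn

def conv_stem_4gf(d: int = 384) -> nn.Sequential:
    channels = [3, 48, 96, 192, 384]        # each 3x3, stride-2 conv doubles the width
    layers = []
    for c_in, c_out in zip(channels[:-1], channels[1:]):
        layers += [
            nn.Conv2d(c_in, c_out, kernel_size=3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True),
        ]
    layers.append(nn.Conv2d(channels[-1], d, kernel_size=1))  # no BN/ReLU, matching the patchify stem
    return nn.Sequential(*layers)

# The stem maps a 224x224 image to a 14x14 grid of d-dim embeddings, i.e. the same
# 196 tokens produced by the stride-16 patchify stem.
x = torch.randn(2, 3, 224, 224)
tokens = conv_stem_4gf()(x).flatten(2).transpose(1, 2)   # shape (2, 196, 384)
print(tokens.shape)
```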
Eventually, matching stem flops to transformer block flops results in an unreasonably large stem, thus ViTC-36GF uses the same stem as ViTC-18GF. Convolutions in ViT. Dosovitskiy et al. [13] also introduced a “hybrid ViT” architecture that blends a modified ResNet [19] (BiT-ResNet [22]) with a transformer encoder. In their hybrid model, the patchify stem is replaced by a partial BiT-ResNet-50 that terminates at the output of the conv4 stage or the output of an extended conv3 stage. These image embeddings replace the standard patchify stem embeddings. This partial BiT-ResNet-50 stem is deep, with 40 convolutional layers. In this work, we explore lightweight convolutional stems that consist of only 5 to 7 convolutions in total, instead of the 40 used by the hybrid ViT. Moreover, we emphasize that the goal of our work is not to explore the hybrid ViT design space, but rather to study the optimizability effects of simply replacing the patchify stem with a minimal convolutional stem that follows standard CNN design practices. 4 Measuring Optimizability It has been noted in the literature that ViT models are challenging to optimize, e.g., they may achieve only modest performance when trained on a mid-size dataset (ImageNet-1k) [13], are sensitive to data augmentation [41] and optimizer choice [41], and may perform poorly when made deeper [42]. We empirically observed the general presence of such difficulties through the course of our experiments and informally refer to such optimization characteristics collectively as optimizability. Models with poor optimizability can yield very different results when hyperparameters are varied, which can lead to seemingly bizarre observations, e.g., removing erasing data augmentation [54] causes a catastrophic drop in ImageNet accuracy in [41]. Quantitative metrics to measure optimizability are needed to allow for more robust comparisons. In this section, we establish the foundations of such comparisons; we extensively test various models using these optimizability measures in §5. Training length stability. Prior works train ViT models for lengthy schedules, e.g., 300 to 400 epochs on ImageNet is typical (at the extreme, [17] trains models for 1000 epochs), since results at a formerly common 100-epoch schedule are substantially worse (2-4% lower top-1 accuracy, see §5.1). In the context of ImageNet, we define top-1 accuracy at 400 epochs as an approximate asymptotic result, i.e., training for longer will not meaningfully improve top-1 accuracy, and we compare it to the accuracy of models trained for only 50, 100, or 200 epochs. We define training length stability as the gap to asymptotic accuracy. Intuitively, it’s a measure of convergence speed. Models that converge faster offer obvious practical benefits, especially when training many model variants. Optimizer stability. Prior works use AdamW [27] to optimize ViT models from random initialization. Results of SGD are not typically presented and we are only aware of Touvron et al. [41]’s report of a dramatic ∼7% drop in ImageNet top-1 accuracy. In contrast, widely used CNNs, such as ResNets, can be optimized equally well with either SGD or AdamW (see §5.2) and SGD (always with momentum) is typically used in practice. SGD has the practical benefit of having fewer hyperparameters (e.g., tuning AdamW’s β2 can be important [3]) and requiring 50% less optimizer state memory, which can ease scaling. We define optimizer stability as the accuracy gap between AdamW and SGD. 
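A toy sketch of the two gap metrics defined above, computed from hypothetical (made-up) top-1 results, is shown below purely for illustration.

```python
# Training length stability = top-1 error at a short schedule minus the ~asymptotic
# 400-epoch error; optimizer stability = |error(SGD) - error(AdamW)|. Numbers are made up.
top1_error = {  # (optimizer, epochs) -> top-1 error (%)
    ("adamw", 50): 26.1, ("adamw", 100): 24.0, ("adamw", 200): 22.9, ("adamw", 400): 22.4,
    ("sgd", 400): 22.6,
}

asymptotic = top1_error[("adamw", 400)]
training_length_stability = {ep: top1_error[("adamw", ep)] - asymptotic for ep in (50, 100, 200)}
optimizer_stability = abs(top1_error[("sgd", 400)] - top1_error[("adamw", 400)])

print(training_length_stability)   # e.g. {50: 3.7, 100: 1.6, 200: 0.5}
print(optimizer_stability)         # e.g. 0.2
```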
Like training length stability, we use optimizer stability as a proxy for the ease of optimization of a model. Hyperparameter (lr, wd) stability. Learning rate (lr) and weight decay (wd) are among the most important hyperparameters governing optimization with SGD and AdamW. New models and datasets often require a search for their optimal values as the choice can dramatically affect results. It is desirable to have a model and optimizer that yield good results for a wide range of learning rate and weight decay values. We will explore this hyperparameter stability by comparing the error distribution functions (EDFs) [30] of models trained with various choices of lr and wd. In this setting, to create an EDF for a model we randomly sample values of lr and wd and train the model accordingly. Distributional estimates, like those provided by EDFs, give a more complete view of the characteristics of models that point estimates cannot reveal [30, 31]. We will review EDFs in §5.3. Peak performance. The maximum possible performance of each model is the most commonly used metric in previous literature and it is often provided without carefully controlling training details such as data augmentations, regularization methods, number of epochs, and lr, wd tuning. To make more robust comparisons, we define peak performance as the result of a model at 400 epochs using its best-performing optimizer and parsimoniously tuned lr and wd values (details in §6), while fixing justifiably good values for all other variables that have a known impact on training. Peak performance results for ViTs and CNNs under these carefully controlled training settings are presented in §6. 5 Stability Experiments In this section we test the stability of ViT models with the original patchify (P ) stem vs. the convolutional (C) stem defined in §3. For reference, we also train RegNetY [12, 31], a state-of-the-art CNN that is easy to optimize and serves as a reference point for good stability. We conduct experiments using ImageNet-1k [10]’s standard training and validation sets, and report top-1 error. Following [12], for all results, we carefully control training settings and we use a minimal set of data augmentations that still yields strong results, for details see §5.4. In this section, unless noted, for each model we use the optimal lr and wd found under a 50 epoch schedule (see Appendix). 5.1 Training Length Stability We first explore how rapidly networks converge to their asymptotic error on ImageNet-1k, i.e., the highest possible accuracy achievable by training for many epochs. We approximate asymptotic error as a model’s error using a 400 epoch schedule based on observing diminishing returns from 200 to 400. We consider a grid of 24 experiments for ViT: {P , C} stems × {1, 4, 18} GF model sizes × {50, 100, 200, 400} epochs. For reference we also train RegNetY at {1, 4, 16} GF. We use the best optimizer choice for each model (AdamW for ViT models and SGD for RegNetY models). Results. Figure 2 shows the absolute error deltas (∆top-1) between 50, 100, and 200 epoch schedules and asymptotic performance (at 400 epochs). ViTC demonstrates faster convergence than ViTP across the model complexity spectrum, and closes much of the gap to the rate of CNN convergence. The improvement is most significant in the shortest training schedule (50 epoch), e.g., ViTP -1GF has a 10% error delta, while ViTC-1GF reduces this to about 6%. 
1. What is the focus of the paper regarding ViT?
2. What are the strengths of the proposed approach, particularly in terms of its impact on optimization stability and robustness?
3. What are the weaknesses of the paper, especially regarding its claims and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or concerns regarding the paper's methodology, results, or conclusions?
Summary Of The Paper Review
Summary Of The Paper
The paper studies the impact of the stem in ViT. Specifically, the authors propose to replace the patch-based stem in ViT with a standard, lightweight convolutional stem, and show that this simple revision leads to substantially improved stability wrt the choices of optimizers and several other hyper-parameters. The resulting model achieves comparable performance with strong convnet baselines when trained on ImageNet-1K and has a better accuracy-speed tradeoff than both ViTs and convnets when trained on a larger dataset (ImageNet-21K).
Review
Strengths:
- Overall, the paper is very well written and easy to follow.
- An extensive amount of ablation studies has been provided to back up the main claim that the conv stem can improve ViT's optimization stability/robustness.
- IMO the most interesting part of the manuscript is not the resulting model itself, but rather the methodology. For example, the notion of "optimizability" examined in the paper can be potentially used as an additional metric to evaluate new architectures and provide complementary insights other than accuracy or speed.
Weaknesses:
- The overall message is interesting but not strong enough to be exciting: In most ablation studies, ViT_c behaves like something in the middle of ViT and ConvNets (e.g., see Figure 2, Figure 3 and Figure 5). Those results are interesting, but on the other hand are more or less expected given the hybrid nature of the model. More importantly, although the authors provided extensive empirical evidence to back up their claim that a conv stem is helpful, the critical question of why it is effective is left unanswered. An in-depth analysis of the underlying reason for those empirical observations would make the paper a lot more informative.
- The paper claims their model outperforms the state-of-the-art convnets, but I'm not sure whether it is true: while RegNet is a competitive baseline, there are more recent networks with stronger performance (e.g., NFNets).
- There are missing details about the ImageNet-21K training setup that I cannot find in the paper or appendix. E.g., what is the input resolution during fine-tuning (224 or 384)?
Overall, I'm leaning positive about the paper because the methodology is interesting and I believe the data points would be useful for future studies. On the negative side, not much insight was provided to answer why conv stems are helpful, and I believe answering that technical question would substantially strengthen the work.
========= post rebuttal ==========
Raising my rating to 7 since some of my technical questions have been addressed in the author's response.
NIPS
Title Differentially Private Bagging: Improved utility and cheaper privacy than subsample-and-aggregate Abstract Differential Privacy is a popular and well-studied notion of privacy. In the era of big data that we are in, privacy concerns are becoming ever more prevalent and thus differential privacy is being turned to as one such solution. A popular method for ensuring differential privacy of a classifier is known as subsample-and-aggregate, in which the dataset is divided into distinct chunks and a model is learned on each chunk, after which it is aggregated. This approach allows for easy analysis of the model on the data and thus differential privacy can be easily applied. In this paper, we extend this approach by dividing the data several times (rather than just once) and learning models on each chunk within each division. The first benefit of this approach is the natural improvement of utility by aggregating models trained on a more diverse range of subsets of the data (as demonstrated by the well-known bagging technique). The second benefit is that, through analysis that we provide in the paper, we can derive tighter differential privacy guarantees when several queries are made to this mechanism. In order to derive these guarantees, we introduce the upwards and downwards moments accountants and derive bounds for these moments accountants in a data-driven fashion. We demonstrate the improvements our model makes over standard subsample-and-aggregate in two datasets (Heart Failure (private) and UCI Adult (public)). 1 Introduction In the era of big data that we live in today, privacy concerns are becoming ever more prevalent. It falls to the researchers using the data to ensure that adequate measures are taken so that any results that are put into the public domain (such as the parameters of a model learned on the data) do not disclose sensitive attributes of the real data. For example, it is well known that the high capacity of deep neural networks can cause the networks to "memorize" training data; if such a network’s parameters were made public, it may be possible to deduce some of the training data that was used to train the model, thus resulting in real data being leaked to the public. Several attempts have been made at rigorously defining what it means for an algorithm, or an algorithm’s output, to be "private". One particularly attractive and well-researched notion is that of differential privacy [1]. Differential privacy is a formal definition that requires that the distribution of the output of a (necessarily probabilistic) algorithm not be too different when a single data point is included in the dataset or not. Typical methods for enforcing differential privacy involve bounding the effect that inclusion of a single sample can have on the output and then adding noise (typically Laplacian or Gaussian) proportional to this effect. The most difficult step in this process is in attaining a good bound on the effect of inclusion. One method for bypassing this difficulty is to build a classifier by dividing up the dataset into distinct subsets, training a separate classifier on each chunk, and then aggregating these classifiers. The effect of a single sample is then bounded by the fact that it was used only to train exactly one of these models and thus its inclusion or exclusion will affect only that model’s output.
By dividing the data into smaller chunks, we learn more models and thus the one model that a sample can affect becomes a smaller "fraction" of the overall model, resulting in a smaller effect that any one sample has on the model as a whole. This method is commonly referred to as subsample-and-aggregate [2, 3, 4]. In this work, we propose an extension to the subsample-and-aggregate methodology that has similarities with bagging [5]. Fig. 1 depicts the key methodological difference between standard subsample-and-aggregate and our proposed framework, Differentially Private Bagging (DPBag), namely that we partition the dataset many times. This multiple-partitioning not only improves utility by building a better predictor, but also enjoys stronger privacy guarantees due to the fact that the effect of adding or removing a single sample can be more tightly bounded within our framework. In order to prove these guarantees, we introduce the personalised moments accountants, which are data-driven variants of the moments accountant [6], that allow us to track the privacy loss with respect to each sample in the dataset and then deduce the final privacy loss by taking the maximum loss over all samples. The personalised moments accountant also lends itself to allowing for personalised differential privacy [7] in which we may wish to allow each individual to specify their own privacy parameters. We demonstrate the efficacy of our model on two classification tasks, showing that our model is an improvement over the standard subsample-and-aggregate algorithm. 2 Related Works Several works have proposed methods for differentially private classification. Of particular interest is the method of [6], in which they propose a method for differentially private training of deep neural networks. In particular, they introduce a new piece of mathematical machinery, the moments accountant. The moments accountant allows for more efficient composition of differentially private mechanisms than either simple or advanced composition [1]. Fortunately, the moments accountant is not exclusive to deep networks and has proven to be useful in other works. In this paper, we use two variants of the moments accountant, which we refer to collectively as the personalised moments accountants. Our algorithm lends itself naturally to being able to derive tighter bounds on these personalised moments accountants than would be possible on the "global" moments accountant. Most other methods use the subsample-and-aggregate framework (first discussed in [2]) to guarantee differential privacy. A popular, recent subsample-and-aggregate method is Private Aggregation of Teacher Ensembles (PATE), proposed in [8]. Their main contribution is to provide a data-driven bound on the moments accountant for a given query to the subsample-and-aggregate mechanism that they claim significantly reduces the privacy cost over the standard data-independent bound. This is further built on in [9] by adding a mechanism that first determines whether or not a query will be too expensive to answer, only answering those that are sufficiently cheap. Both works use standard subsample-and-aggregate in which the data is partitioned only once. Our method is more fundamental than PATE, in the sense that the techniques used by PATE to improve on subsample-and-aggregate would also be applicable to our differentially private bagging algorithm.
The bound they derive in [8] on the moments accountant should translate to our personalised moments accountants in the same way the data-independent bound does (i.e. by multiplying the dependence on the inverse noise scale by a data-driven value) and as such our method would provide privacy improvements over PATE similar to the improvements it provides over standard subsample-and-aggregate. We give an example of our conjectured result for PATE in the Supplementary Materials for clarity. Another method that utilises subsample-and-aggregate is [10], in which they use the distance to instability framework [4] combined with subsample-and-aggregate to privately determine whether a query can be answered without adding any noise to it. In cases where the query can be answered, no privacy cost is incurred. Whenever the query cannot be answered, no answer is given but a privacy cost is incurred. Unfortunately, the gains to be had by applying our method over basic subsample-and-aggregate to their work are not clear, but we believe that at the very least, the utility of the answer provided may be improved on due to the ensemble having a higher utility in our case (and the same privacy guarantees will hold that they prove). In [11], they build a method for learning a differentially private decision tree. Although they apply bagging to their framework, they do not do so to create privacy, but only to improve the utility of their learned classifier. The privacy analysis they provide is performed only on each individual tree and not on the ensemble as a whole. 3 Differential Privacy Let us denote the feature space by X , the set of possible class labels by C and write U = X × C. Let us denote by D the collection of all possible datasets consisting of points in U . We will write D to denote a dataset in D, so that D = {u_i}_{i=1}^N = {(x_i, y_i)}_{i=1}^N for some N. We first provide some preliminaries on differential privacy [1] before describing our method; we refer interested readers to [1] for a thorough exposition of differential privacy. We will denote an algorithm by M, which takes as input a dataset D and outputs a value from some output space, R. Definition 1 (Neighboring Datasets [1]). Two datasets D, D′ are said to be neighboring if ∃u ∈ U s.t. D \ {u} = D′ or D′ \ {u} = D. Definition 2 (Differential Privacy [1]). A randomized algorithm, M, is (ε, δ)-differentially private if for all S ⊂ R and for all neighboring datasets D, D′: P(M(D) ∈ S) ≤ e^ε P(M(D′) ∈ S) + δ, where P is taken with respect to the randomness of M. Differential privacy provides an intuitively understandable notion of privacy - a particular sample’s inclusion or exclusion in the dataset does not change the probability of a particular outcome very much: it does so by a multiplicative factor of e^ε and an additive amount, δ. 4 Differentially Private Bagging In order to enforce differential privacy, we must bound the effect of a sample’s inclusion or exclusion on the output of the model. To do this, we propose a model for which the maximal effect can be easily deduced and moreover, for which we can actually show a lesser maximal effect by analysing the training procedure and deriving data-driven privacy guarantees. We begin by considering k (random) partitions of the dataset, D1, ..., Dk with Di = {Di1, ..., Din} for each i, where Dij is a set of size ⌊|D|/n⌋ or ⌈|D|/n⌉. We then train a "teacher" model, Tij, on each of these sets (i.e. Tij is trained on Dij).
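The repeated partitioning can be illustrated with a short sketch; this is an assumed implementation written for exposition (the function name and index bookkeeping are ours), not the authors' code.

```python
# Build k random partitions of the dataset indices; each partition splits the indices
# into n disjoint subsets, so every sample lands in exactly one subset per partition
# and therefore helps train exactly k teachers. I(u) records those (i, j) pairs.
import numpy as np

def build_partitions(num_samples, k, n, seed=0):
    rng = np.random.default_rng(seed)
    partitions = []                                       # partitions[i][j] = indices of D_ij
    teachers_of = {u: [] for u in range(num_samples)}     # I(u)
    for i in range(k):
        perm = rng.permutation(num_samples)
        chunks = np.array_split(perm, n)                  # sizes floor(|D|/n) or ceil(|D|/n)
        partitions.append(chunks)
        for j, chunk in enumerate(chunks):
            for u in chunk:
                teachers_of[int(u)].append((i, j))
    return partitions, teachers_of

partitions, I = build_partitions(num_samples=10, k=3, n=2)
print(I[0])   # e.g. [(0, 1), (1, 0), (2, 0)]: sample 0 is used by exactly k = 3 teachers
```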
We note that each sample u ∈ D is in precisely one set from each partition and thus in precisely k sets overall; it is therefore used to train k teachers. We collect the indices of the corresponding teachers in the set I(u) = {(i, j) : u ∈ Dij} and denote by T (u) = {Tij : (i, j) ∈ I(u)} the set of teachers trained using the sample u. Given a new sample to classify x ∈ X , we first compute for each class the number of teachers that output that class, nc(x) = |{(i, j) : Tij(x) = c}|. The model then classifies the sample as ĉ(x) = arg max{nc(x) : c ∈ C}, i.e. by classifying it as the class with the most votes. To make the output differentially private, we can add independent Laplacian noise to each of the resulting counts before taking the arg max, so that the classification becomes c̃λ(x) = arg max{nc(x) + Yc : c ∈ C} where Yc, c ∈ C are independent Lap(k/λ) random variables and where λ is a hyper-parameter of our model. We scale the noise to the number of partitions because the number of partitions is precisely the total number of teachers that any individual sample can affect. Thus the (naive) bound on the ℓ1-sensitivity of this algorithm is k, giving us the following theorem, which tells us that our differentially private bagging algorithm is at least as private as the standard subsample-and-aggregate mechanism, independent of the number of partitions used. Theorem 1. With k partitions and n teachers per partition, c̃λ is 2λ-differentially private with respect to the data D. Proof. This follows immediately from noting that the ℓ1-sensitivity of nc(x) is k. See [1]. We note that the standard subsample-and-aggregate algorithm can be recovered from ours by setting k = 1. In the next section, we will derive tighter bounds on the differential privacy of our bagging algorithm when several queries are made to the classifier. 4.1 Personalised Moments Accountants In order to provide tighter differential privacy guarantees for our method, we now introduce the personalised moments accountants. Like the original moments accountant from [6], these will allow us to compose a sequence of differentially private mechanisms more efficiently than using standard or advanced composition [1]. We begin with a preliminary definition (found in [6]). Definition 3 (Privacy Loss and Privacy Loss Random Variable [6]). Let M : D → R be a randomized algorithm, with D and D′ a pair of neighbouring datasets. Let aux be any auxiliary input. For any outcome o ∈ R, we define the privacy loss at o to be: c(o; M, aux, D, D′) = log [ P(M(D, aux) = o) / P(M(D′, aux) = o) ], with the privacy loss random variable, C, being defined by C(M, aux, D, D′) = c(M(D, aux), aux, D, D′), i.e. the random variable defined by evaluating the privacy loss at a sample from M(D, aux). In defining the moments accountant, an intermediate quantity, referred to by [6] as the "l-th moment", is introduced. We divide the definition of this l-th moment into a downwards and an upwards version (corresponding to whether D′ is obtained by either removing or adding an element to D, respectively). We do this because the upwards moments accountant must be bounded among all possible points u ∈ U that could be added, whereas the downwards moments accountants need only consider the points that are already in D. Definition 4. Let D be some dataset and let u ∈ D. Let aux be any auxiliary input. Then the downwards moments accountant is given by α̌M(l; aux, D, u) = log E(exp(l · C(M, aux, D, D \ {u}))). Definition 5. Let D be some dataset.
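Below is a small, assumed sketch of the noisy-argmax aggregation c̃λ described above; the function name and interface are illustrative, not the authors' code.

```python
# Count teacher votes per class and add Laplace noise of scale k/lambda before the argmax.
import numpy as np

def noisy_vote(teacher_predictions, num_classes, k, lam, rng):
    """teacher_predictions: iterable of class labels, one per teacher (k * n of them)."""
    counts = np.bincount(np.asarray(teacher_predictions), minlength=num_classes)
    noisy = counts + rng.laplace(loc=0.0, scale=k / lam, size=num_classes)
    return int(np.argmax(noisy))

rng = np.random.default_rng(0)
preds = [0, 1, 1, 1, 0, 1]          # hypothetical votes from k * n = 6 teachers
print(noisy_vote(preds, num_classes=2, k=3, lam=2.0, rng=rng))
```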
Then the upwards moments accountant is defined as α̂M(l; aux, D) = max_{u∈U} log E(exp(l · C(M, aux, D, D ∪ {u}))). We can recover the original moments accountant from [6], αM(l), as αM(l) = max_{aux,D} {α̂M(l; aux, D), max_u α̌M(l; aux, D, u)}. (1) We will use this fact, together with the two theorems in the following subsection, to calculate the final global privacy loss of our mechanism. 4.2 Results inherited from the Moments Accountant The following two theorems state two properties that our personalised moments accountants share with the original moments accountant. Note that the composability in Theorem 2 is being applied to each personalised moments accountant individually. Theorem 2 (Composability). Suppose that an algorithm M consists of a sequence of adaptive algorithms (i.e. algorithms that take as auxiliary input the outputs of the previous algorithms) M1, ..., Mm where Mi : ∏_{j=1}^{i−1} Rj × D → Ri. Then, for any l, α̌M(l; D, u) ≤ ∑_{i=1}^{m} α̌Mi(l; D, u) and α̂M(l; D) ≤ ∑_{i=1}^{m} α̂Mi(l; D). Proof. The statement of this theorem is a variation on Theorem 2 from [6], applied to the personalised moments accountants. Their proof involves proving this stronger result. See [6], Theorem 2 proof. Theorem 3 ((ε, δ) from α(l) [6]). Let δ > 0. Any mechanism M is (ε, δ)-differentially private for ε = min_l [αM(l) + log(1/δ)] / l. (2) Proof. See [6], Theorem 2. Theorem 2 means that bounding each personalised moments accountant individually could provide a significant improvement on the overall bound for the moments accountant. Combined with Eq. 1, we can first sum over successive steps of the algorithm and then take the maximum. In contrast, original approaches that bound only the overall moments accountant at each step essentially compute αM(l) = ∑_{i=1}^{m} max_{aux,D} {α̂Mi(l; aux, D), max_u α̌Mi(l; aux, D, u)}. (3) Our approach of bounding the personalised moments accountant allows us to compute the bound as αM(l) = max_{aux,D} {∑_{i=1}^{m} α̂Mi(l; aux, D), max_u ∑_{i=1}^{m} α̌Mi(l; aux, D, u)} (4) which is strictly smaller whenever there is not some personalised moments accountant that is always larger than all other personalised moments accountants. The bounds we derive in the following subsection and the subsequent remarks will make clear why this is an unlikely scenario. 4.3 Bounding the Personalised Moments Accountants Having defined the personalised moments accountants, we can now state our main theorems, which provide a data-dependent bound on the personalised moments accountant for a single query to c̃λ. Theorem 4 (Downwards bound). Let xnew ∈ X be a new point to classify. For each c ∈ C and each u ∈ D, define the quantities nc(xnew; u) = |{(i, j) ∈ I(u) : Tij(xnew) = c}| / k, i.e. nc(xnew; u) is the fraction of teachers that were trained on a dataset containing u that output class c when classifying xnew. Let m(xnew; u) = max_c {1 − nc(xnew; u)}. Then α̌c̃λ(xnew)(l; D, u) ≤ 2λ²m(xnew; u)²l(l + 1). (5) Proof. (Sketch.) The theorem follows from the fact that m(xnew; u) is the maximum change that can occur in the vote fractions, nc, c ∈ C, when the sample u is removed from the training of each model in T (u), corresponding to all teachers that were not already voting for the minority class switching their vote to the minority class. m can thus be thought of as the personalised ℓ1-sensitivity of a specific query to our algorithm, and so the standard sensitivity-based argument gives us that c̃λ(xnew) is 2λm(xnew; u)-differentially private with respect to removing u.
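The data-driven update implied by Theorem 4 can be sketched as follows; this is an illustrative (assumed) implementation of the per-sample quantity m(xnew; u) and the corresponding accountant increment, with hypothetical inputs.

```python
# For one query x_new and one training sample u: compute m(x_new; u) from the votes of the
# k teachers trained on u, then add 2 * lambda^2 * m^2 * l * (l + 1) to the downwards
# personalised moments accountant at every order l.
import numpy as np

def update_downwards_accountant(alpha_u, teacher_preds, teachers_of_u, num_classes, lam, max_l):
    """alpha_u: array of length max_l; entry l-1 holds the accountant at order l.
    teacher_preds: dict (i, j) -> predicted class for the current query x_new."""
    k = len(teachers_of_u)
    votes = np.bincount([teacher_preds[ij] for ij in teachers_of_u], minlength=num_classes)
    m = np.max(1.0 - votes / k)                      # m(x_new; u)
    for l in range(1, max_l + 1):
        alpha_u[l - 1] += 2.0 * lam**2 * m**2 * l * (l + 1)
    return alpha_u

alpha = np.zeros(32)
preds = {(0, 0): 1, (1, 2): 1, (2, 1): 0}            # hypothetical votes of the k = 3 teachers in T(u)
alpha = update_downwards_accountant(alpha, preds, list(preds), num_classes=2, lam=2.0, max_l=32)
print(alpha[:3])
```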
The bound on the (downwards) moments accountant then follows using a similar argument to the proof of Prop. 3.3 in [12]. To prove the upwards bound, we must understand what happens when we add a point to our training data: it will be added to a training set for precisely one teacher in each of the k partitions. Each dataset in a partition will either be of size ⌈|D|/n⌉ or ⌊|D|/n⌋. We assume (without loss of generality) that a new point is added to the first dataset in each partition that contains ⌊|D|/n⌋ samples. We collect the indices of these datasets in I(∗) and denote the set of teachers trained on these subsets by T(∗).
Theorem 5 (Upwards bound). Let xnew ∈ X be a new point to classify. For each c ∈ C, define the quantity
nc(xnew; ∗) = |{(i, j) ∈ I(∗) : Tij(xnew) = c}| / k,
i.e. nc(xnew; ∗) is the fraction of teachers whose training set would receive the new point that output class c when classifying xnew. Let m(xnew; ∗) = max_c {1 − nc(xnew; ∗)}. Then
α̂c̃λ(xnew)(l; D) ≤ 2λ²m(xnew; ∗)²l(l + 1). (6)
Proof. The proof is exactly as for Theorem 4, replacing I(u) and T(u) with I(∗) and T(∗).
The standard bound on the moments accountant of a 2λ-differentially private algorithm is 2λ²l(l + 1) (see [12]). Thus, our theorems introduce a factor of m(xnew; u)². Note that by definition m ≤ 1, so our bound is never looser and is in general tighter. It should be noted, however, that for a single query this bound may not improve on the naive 2λ²l(l + 1) bound, since in that case Equations 3 and 4 are equal. If there is any training sample u ∈ D ∪ {∗} and any class c ∈ C for which all teachers in T(u) classify xnew as some class other than c, then m(xnew; u) = 1. However, over the course of several queries it is unlikely that each set of teachers T(u) always excludes some class, and as such the total bound according to Theorems 2, 4 and 5 is lower than if we just used the naive bound. In the case of binary classification, for example, the bounds are only the same if there is some set of teachers that is always unanimous when classifying new samples.
Remarks. (i) m(xnew; u) is smallest when the teachers in T(u) are divided evenly among the classes when classifying xnew. This is intuitive: in such a situation u provides very little information about how to classify xnew, and thus little is leaked about u when we classify xnew. (ii) m(u) is bounded below by 1 − 1/|C|, so our method provides the biggest improvements for binary classification, with the improvements decaying as the number of classes increases. (iii) When k = 1, m(u) is always 1, because nc is 1 for some c ∈ C and 0 for all remaining classes; from this we recover the standard bound of 2λ²l(l + 1) used for subsample-and-aggregate. (iv) For Eq. 3 and 4 to be equal, there must exist some u∗ for which m(xnew; u∗) > m(xnew; u) for all u and xnew. This amounts to there being some set of teachers (corresponding to u∗) that is in more agreement than every other set of teachers for every new point they are asked to classify. Other than in this unlikely scenario, Eq. 4 will be strictly smaller than Eq. 3.
4.4 Semi-supervised knowledge transfer
We now discuss how best to leverage the fact that the best gains from our approach come from answering several queries (as implied by Equations 3 and 4).
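Before describing the knowledge-transfer procedure, the following sketch shows how the per-query bounds of Theorems 4 and 5 accumulate in the personalised accountants and are then converted into a final (ε, δ) guarantee via Theorem 3 and Eq. (4). The array layout and function names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def update_accountants(alpha, m_per_sample, lam):
    """Add the per-query bound 2*lam^2*m^2*l*(l+1) (Theorems 4 and 5) to every
    personalised accountant; alpha has shape (num_samples + 1, L), with the last
    row playing the role of the upwards accountant for the hypothetical added point."""
    L = alpha.shape[1]
    ls = np.arange(1, L + 1)
    return alpha + 2.0 * lam**2 * np.outer(m_per_sample**2, ls * (ls + 1))

def epsilon_from_accountants(alpha, delta):
    """Theorem 3 combined with Eq. (4): the sum over queries is already held in alpha,
    so take the max over the personalised accountants, then minimise over the order l."""
    ls = np.arange(1, alpha.shape[1] + 1)
    return float(((alpha.max(axis=0) + np.log(1.0 / delta)) / ls).min())
```

Tracking each sample's accountant separately and only maximising at the end is exactly the difference between Eq. (3) and Eq. (4): a query is only expensive for the samples whose teachers largely agree on it.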
We first note that the vanilla subsampleand-aggregate method does not derive data-dependent privacy guarantees for an individual query, and thus, for a fixed and δ, the number of queries that can be answered by the mechanism is known in advance. In contrast, because our data-driven bounds on the personalised moments accountants depend on the queries themselves, the cost of any given query is not known in advance and as such the number of queries we can answer before using up our privacy allowance ( ) is unknown. Unfortunately, we cannot simply answer queries until the allowance is used up, because the number of queries that we answer is a function of the data itself and thus we would need to introduce a differentially private mechanism for determining when to stop (such as calculating and δ after each query using smooth-sensitivity as proposed in [8]). Instead, we follow [8] and leverage the fact that we can answer more queries than standard subsample-and-aggregate to train a student model using unlabelled public data. The final output of our algorithm will then be a trained classifier that can be queried indefinitely. To train this model, we take unlabelled public data P = {x̃1, x̃2, ...} and label it using c̃λ until the privacy allowance has been used up. This will result in a (privately) labelled dataset P̃ = {(x̃1, y1), ..., (x̃p, yp)} where p is the number of queries answered. We train a student model, S, on this dataset and the resultant classifier can now be used to answer any future queries. Because of our data-driven bound on the personalised moments accountant, we will typically have that p > q where q is the number of queries that can be answered by a standard subsampleand-aggregate procedure. The pseudo-code for learning a differentially private student classifier using our differentially private bagging model is given in Algorithm 1 (pseudo-code for training a student model using standard subsample-and-aggregate is given in the Supplementary Materials for comparison). Note that the majority of for loops (in particular the one on line 18) can be parallelized. Algorithm 1 Semi-supervised differentially private knowledge transfer using multiple partitions 1: Input: , δ, D, batch size nmb, number of partitions k, number of teachers per partition n, noise size λ, maximum order of moments to be explored, L, unlabelled public data Dpub 2: Initialize: {θi,jT } k,n i=1,j=1, θS , ̂ = 0, α(l;x) = 0 for l = 1, ..., L, x ∈ D ∪ {∗} 3: Create n partitions of the dataset which are each made up of n disjoint subsets of the data Di,j , i = 1, ..., n, j = 1, ..., k such that ⋃ iDi,j = D and Di1,j ∩ Di2,j = ∅ for all i1 6= i2, j 4: Set I(∗) = {(n, 1), ..., (n, k)} 5: while Teachers have not converged do 6: for i = 1, ..., n do 7: for j = 1, ..., k do 8: Sample (x1, y1), ..., (xnmb , ynmb) i.i.d.∼ Di,j 9: Update teacher, Ti,j , using SGD 10: ∇θi,jT − [∑nmb s=1 ∑ c∈C ys,c log(T c i,j(xs)) ] (multi-task cross-entropy loss) 11: while ̂ < do 12: Sample x1, ..., xnmb ∼ Dpub 13: for s = 1, ..., nmb do 14: rs ← c̃λ(xs) 15: Update the element-wise moments accountants 16: nc ← |{(i,j):Ti,j(xs)=c}|k for c ∈ C 17: for x ∈ D ∪ {∗} do 18: nc(x)← |{(i,j)∈I(x):Ti,j(xs)=c}|k for c ∈ C, x ∈ D 19: m(x)← maxc{1− nc(x)} 20: for l = 1, ..., L do 21: α(l;x)← α(l;x) + 2λ2m(x)2l(l + 1) 22: Update the student, S, using SGD 23: ∇θS − ∑nmb s=1 ∑ c∈C rs,c logS c(xs) (multi-task cross-entropy loss) 24: ̂← min l [ max x α(l;x)+log( 1δ ) l ] 25: Output: S Theorem 6. 
The output of Algorithm 1 is (ε, δ)-differentially private with respect to D.
Proof. This follows from Theorems 2, 3, 4 and 5.
5 Experiments
In this section we compare our method (DPBag) against the standard subsample-and-aggregate framework (SAA) to illustrate the improvements that can be achieved at a fundamental level by using our model. Additionally, we compare against our method without the improved privacy bound (DPBag-) to quantify which improvements are due to the bagging procedure and which are due to our improved privacy bound. We perform the experiments on two real-world datasets: Heart Failure and UCI Adult (dataset description and results for UCI Adult can be found in the Supplementary Materials). An implementation of DPBag can be found at https://bitbucket.org/mvdschaar/mlforhealthlabpub/src/master/alg/dpbag/.
Heart Failure dataset: The Heart Failure dataset is a private dataset consisting of 24175 patients who have suffered heart failure. We set the label of each patient as 3-year all-cause mortality, excluding all patients who are censored before 3 years. The dataset contains 29 features, and 10387 of the 24175 patients (43.0%) die within 3 years. We randomly divide the data into 3 disjoint subsets: (1) a training set (33%), (2) public data (33%), and (3) a testing set (33%). In the main paper, we use logistic regression for the teacher and student models in both algorithms; additional results for the Gradient Boosting Method (GBM) can be found in the Supplementary Materials. We set δ = 10⁻⁵. We vary ε ∈ {1, 3, 5}, n ∈ {50, 100, 250} and k ∈ {10, 50, 100}. In all cases we set λ = 2/n. To save space, we report DPBag results for n ∈ {100, 250}, k ∈ {50, 100} and SAA results for n = 250 (the best performing) in the main manuscript, with full tables reported in the Supplementary Materials. Results reported are the mean of 10 runs of each experiment.
5.1 Results
In Table 1 we report the accuracy, AUROC and AUPRC of the 3 methods, and we also report these for a non-privately trained baseline model (NPB), allowing us to quantify how much has been "lost due to privacy". In Table 2, we report the total number of queries that could be made to each differentially private classifier before the privacy budget was used up. In Table 1 we see that DPBag outperforms standard SAA for all values of ε, with Table 2 showing that our method allows for a significant increase in the number of public samples that can be labelled (almost 100% more for ε = 3). The optimal number of teachers, n, varies with ε for both DPBag and SAA. We see that for ε = 1, n = 250 performs best, but as we increase ε the optimal number of teachers decreases. For small ε and small n, very few public samples can be labelled and so the student does not have enough data to learn from. On the other hand, for large ε and large n, the number of answered queries is much larger, to the point where the limiting factor is no longer the number of labels but instead the quality of the labels. Since we scale the noise to the number of teachers, the label quality improves with fewer teachers because each teacher is trained on a larger portion of the training data. This is reflected by both DPBag and SAA. In the SAA results, the performance does not saturate as quickly with respect to ε because the number of queries that corresponds to ε for SAA is smaller than for DPBag.
As expected, we see that DPBag- sits between SAA and DPBag: it enjoys performance gains due to a stronger underlying model, and thus more accurately labelled training samples for the student, but the improved privacy bound of DPBag allows more samples to be labelled, so further gains are still made. Table 2 also sheds light on the behavior of DPBag with respect to k. We see in Table 1 that both k = 50 and k = 100 can provide the best performance (depending on n and ε). In Table 2, the number of queries that can be answered increases with k. This implies that, as expected, as we increase k the quantity m(u) gets closer to 0.5, and so each query costs less. However, when m(u) is close to 0.5 for all samples u in the dataset, neither class will have a clear majority and thus the labels are more susceptible to flipping due to the added noise. k = 50 appears to balance this trade-off when ε is larger (and so we can already answer more queries), while when ε is smaller we see that answering more queries is more important than answering them well, so k = 100 is preferred.
6 Discussion
In this work, we introduced a new methodology for developing a differentially private classifier. Building on the ideas of subsample-and-aggregate, we divide the dataset several times, allowing us to derive tighter, data-dependent bounds on the privacy cost of a query to our mechanism. To do so, we defined the personalised moments accountants, which we use to accumulate the privacy loss of a query with respect to each sample in the dataset (and any potentially added sample) individually. A key advantage of our model, like subsample-and-aggregate, is that it is model-agnostic and can be applied using any base learner, with the differential privacy guarantees holding regardless of the learner used. We believe this work opens up several interesting avenues for future research: (i) the privacy guarantees could potentially be improved by making assumptions about the base learners used; (ii) the personalised moments accountants naturally allow for the development of an algorithm that affords each sample a different level of differential privacy, i.e. personalised differential privacy [7]; (iii) we believe bounds such as those derived in [8] and [9] that rely on the subsample-and-aggregate method will have natural analogues with respect to our bagging procedure, corresponding to tighter bounds on the personalised moments accountants than can be shown for the global moments accountant using simple subsample-and-aggregate (see the discussion in the Supplementary Materials).
Acknowledgments
This work was supported by the National Science Foundation (NSF grants 1462245 and 1533983), and the US Office of Naval Research (ONR).
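To tie the pieces together, the following self-contained toy run mimics the pipeline of Sections 4 and 5 on synthetic data: k partitions of n subsets, logistic-regression teachers, noisy-vote labelling of public points, and personalised accountants with early stopping. Every size, constant and name here is an illustrative assumption (in particular, λ is chosen small so the toy answers a reasonable number of queries, and the upwards accountant for a hypothetical added point is omitted for brevity); this is a sketch, not the authors' released implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)                   # synthetic binary labels
k, n, lam, eps, delta, L = 10, 5, 0.05, 3.0, 1e-5, 32

teachers, I = {}, [[] for _ in range(len(X))]
for i in range(k):                                         # k random partitions of n subsets each
    for j, idx in enumerate(np.array_split(rng.permutation(len(X)), n)):
        teachers[(i, j)] = LogisticRegression(max_iter=200).fit(X[idx], y[idx])
        for u in idx:
            I[u].append((i, j))                            # membership map I(u)

ls = np.arange(1, L + 1)
alpha = np.zeros((len(X), L))                              # downwards personalised accountants
labelled = []
for x in rng.normal(size=(500, 5)):                        # unlabelled "public" queries
    votes = {t: int(clf.predict(x[None])[0]) for t, clf in teachers.items()}
    counts = np.array([sum(v == c for v in votes.values()) for c in (0, 1)])
    labelled.append((x, int(np.argmax(counts + rng.laplace(scale=k / lam, size=2)))))
    m = np.array([1.0 - min(sum(votes[t] == c for t in I[u]) / k for c in (0, 1))
                  for u in range(len(X))])                 # m(x; u) for every training sample
    alpha += 2 * lam**2 * np.outer(m**2, ls * (ls + 1))    # Theorem 4 per-query bounds
    eps_hat = ((alpha.max(axis=0) + np.log(1 / delta)) / ls).min()
    if eps_hat >= eps:                                     # privacy budget spent
        break
print(f"labelled {len(labelled)} public points at (eps={eps}, delta={delta})")
# A student classifier would now be fit on `labelled` and used to answer all future queries.
```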
1. What is the focus of the paper regarding differential privacy? 2. What are the strengths of the proposed approach, particularly in terms of the personalized moments accountant? 3. How does the reviewer assess the clarity and quality of the paper's content? 4. Are there any weaknesses or limitations in the paper's analysis or experiments? 5. What is the significance of the paper's contribution to the field of differential privacy?
Review
Review The authors consider differentially private "bagging." A single dataset is divided into k partitions. Each partition splits the data into n subsets, for a total of kn subsets. This is analogous to classical bagging since a single datapoint can appear in multiple subsets (in this case k). Furthermore, each subset will be used to train a classifier. A final classifier takes the majority vote of the individual classifiers with Lap(k/lambda) noise added to n_c, the number of sub-classifiers that output label c. This immediately yields a lambda-DP bound on the counts, and therefore on the label of the final classifier. The authors are next interested in answering a sequence of queries using this classifier. The key analytic tool for understanding this setting is the personalized moments accountant, which allows for a stronger composition of the privacy guarantees for each individual query than simple composition, advanced composition, or even the standard moments accountant. Ordinarily the l-th moment accountant is defined with respect to the privacy-loss random variable defined on two neighboring datasets. The privacy random variable is defined by drawing an outcome from the mechanism applied to the first dataset and computing the log odds ratio of the outcome with respect to the neighboring dataset. The worst-case (across datasets) l-th moment of this random variable essentially defines the l-th moments accountant. In this work, the authors split the definition above into neighboring datasets that add an element, and neighboring datasets that subtract an element. They then show that all the same composition theorems hold for these "upwards" and "downwards" moments accountants individually. Much of the proof here simply mirrors the original moments accountant. They offer this as a tool for attaining better privacy bounds by first applying composition before taking the maximum between the upwards and downwards accountants. With this decomposition they can bound the moments accountant for the bagging setting by a data-dependent quantity. Given a new query point, for each class c and user u, we ask what fraction of classifiers that utilize user u's data classify this point as c. If there is any user and class for which this fraction is 1, the personalized moments accountant yields a bound equivalent to the standard moments accountant. However, if the query point induces disagreement for all users, the bound is strictly better than the moments accountant. Across many query points, we should expect the latter case to sometimes happen, allowing us to use less privacy budget (although any budgeting will be data-dependent). This is borne out in the experiments provided in the paper. The paper is well-written and easy to understand.
NIPS
Title Differentially Private Bagging: Improved utility and cheaper privacy than subsample-and-aggregate Abstract Differential Privacy is a popular and well-studied notion of privacy. In the era of big data that we are in, privacy concerns are becoming ever more prevalent and thus differential privacy is being turned to as one such solution. A popular method for ensuring differential privacy of a classifier is known as subsample-and-aggregate, in which the dataset is divided into distinct chunks and a model is learned on each chunk, after which it is aggregated. This approach allows for easy analysis of the model on the data and thus differential privacy can be easily applied. In this paper, we extend this approach by dividing the data several times (rather than just once) and learning models on each chunk within each division. The first benefit of this approach is the natural improvement of utility by aggregating models trained on a more diverse range of subsets of the data (as demonstrated by the well-known bagging technique). The second benefit is that, through analysis that we provide in the paper, we can derive tighter differential privacy guarantees when several queries are made to this mechanism. In order to derive these guarantees, we introduce the upwards and downwards moments accountants and derive bounds for these moments accountants in a data-driven fashion. We demonstrate the improvements our model makes over standard subsample-and-aggregate in two datasets (Heart Failure (private) and UCI Adult (public)). 1 Introduction In the era of big data that we live in today, privacy concerns are becoming ever more prevalent. It falls to the researchers using the data to ensure that adequate measures are taken to ensure any results that are put into the public domain (such as the parameters of a model learned on the data) do not disclose sensitive attributes of the real data. For example, it is well known that the high capacity of deep neural networks can cause the networks to "memorize" training data; if such a network’s parameters were made public, it may be possible to deduce some of the training data that was used to train the model, thus resulting in real data being leaked to the public. Several attempts have been made at rigorously defining what it means for an algorithm, or an algorithm’s output, to be "private". One particularly attractive and well-researched notion is that of differential privacy [1]. Differential privacy is a formal definition that requires that the distribution of the output of a (necessarily probabilistic) algorithm not be too different when a single data point is included in the dataset or not. Typical methods for enforcing differential privacy involve bounding 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. the effect that inclusion of a single sample can have on the output and then adding noise (typically Laplacian or Gaussian) proportional to this effect. The most difficult step in this process is in attaining a good bound on the effect of inclusion. One method for bypassing this difficulty, is to build a classifier by dividing up the dataset into distinct subsets, training a separate classifier on each chunk, and then aggregating these classifiers. The effect of a single sample is then bounded by the fact that it was used only to train exactly one of these models and thus its inclusion or exclusion will affect only that model’s output. 
By dividing the data into smaller chunks, we learn more models and thus the one model that a sample can effect becomes a smaller "fraction" of the overall model, thus resulting in a smaller effect that any one sample has on the model as a whole. This method is commonly referred to as subsample-and-aggregate [2, 3, 4]. In this work, we propose an extension to the subsample-and-aggregate methodology that has similarities with bagging [5]. Fig. 1 depicts the key methodological difference between standard subsample-and-aggregate and our proposed framework, Differentially Private Bagging (DPBag), namely that we partition the dataset many times. This multiple-partitioning not only improves utility by building a better predictor, but also enjoys stronger privacy guarantees due to the fact that the effect of adding or removing a single sample can be more tightly bounded within our framework. In order to prove these guarantees, we introduce the personalised moments accountants, which are data-driven variants of the moments accountant [6], that allow us to track the privacy loss with respect to each sample in the dataset and then deduce the final privacy loss by taking the maximum loss over all samples. The personalised moments accountant also lends itself to allowing for personalised differential privacy [7] in which we may wish to allow each individual to specify their own privacy parameters. We demonstrate the efficacy of our model on two classification tasks, demonstrating that our model is an improvement over the standard subsample-and-aggregate algorithm. 2 Related Works Several works have proposed methods for differentially private classification. Of particular interest is the method of [6], in which they propose a method for differentially private training of deep neural networks. In particular, they introduce a new piece of mathematical machinery, the moments accountant. The moments accountant allows for more efficient composition of differentially private mechanisms than either simple or advanced composition [1]. Fortunately, the moments accountant is not exclusive to deep networks and has proven to be useful in other works. In this paper, we use two variants of the moments accountant, which we refer to collectively as the personalised moments accountants. Our algorithm lends itself naturally to being able to derive tighter bounds on these personalised moments accountants than would be possible on the "global" moments accountant. Most other methods use the subsample-and-aggregate framework (first discussed in [2]) to guarantee differential privacy. A popular, recent subsample-and-aggregate method is Private Aggregation of Teacher Ensembles (PATE), proposed in [8]. Their main contribution is to provide a data-driven bound on the moments accountant for a given query to the subsample-and-aggregate mechanism that they claim significantly reduces the privacy cost over the standard data-independent bound. This is further built on in [9] by adding a mechanism that first determines whether or not a query will be too expensive to answer or not, only answering those that are sufficiently cheap. Both works use standard subsample-and-aggregate in which the data is partitioned only once. Our method is more fundamental than PATE, in the sense that the techniques used by PATE to improve on subsample-and-aggregate would also be applicable to our differentially private bagging algorithm. 
The bound they derive in [8] on the moments accountant should translate to our personalised moments accountants in the same way the data-independent bound does (i.e. by multiplying the dependence on the inverse noise scale by a data-driven value) and as such our method would provide privacy improvements over PATE similar to the improvements it provides over standard subsample-and-aggregate. We give an example of our conjectured result for PATE in the Supplementary Materials for clarity. Another method that utilises subsample-and-aggregate is [10], in which they use the distance to instability framework [4] combined with subsample-and-aggregate to privately determine whether a query can be answered without adding any noise to it. In cases where the query can be answered, no privacy cost is incurred. Whenever the query cannot be answered, no answer is given but a privacy cost is incurred. Unfortunately, the gains to be had by applying our method over basic subsample-and-aggregate to their work are not clear, but we believe that at the very least, the utility of the answer provided may be improved on due to the ensemble having a higher utility in our case (and the same privacy guarantees will hold that they prove). In [11], they build a method for learning a differentially private decision tree. Although they apply bagging to their framework, they do not do so to create privacy, but only to improve the utility of their learned classifier. The privacy analysis they provide is performed only on each individual tree and not on the ensemble as a whole. 3 Differential Privacy Let us denote the feature space by X , the set of possible class labels by C and write U = X × C. Let us denote by D the collection of all possible datasets consisting of points in U . We will write D to denote a dataset in D, so that D = {ui}Ni=1 = {(xi, yi)}Ni=1 for some N . We first provide some preliminaries on differential privacy [1] before describing our method; we refer interested readers to [1] for a thorough exposition of differential privacy. We will denote an algorithm byM, which takes as input a dataset D and outputs a value from some output space,R. Definition 1 (Neighboring Datasets [1]). Two datasets D,D′ are said to be neighboring if ∃u ∈ U s.t. D \ {u} = D′ or D′ \ {u} = D. Definition 2 (Differential Privacy [1]). A randomized algorithm,M, is ( , δ)-differentially private if for all S ⊂ R and for all neighboring datasets D,D′: P(M(D) ∈ S) ≤ e P(M(D′) ∈ S) + δ where P is taken with respect to the randomness ofM. Differential privacy provides an intuitively understandable notion of privacy - a particular sample’s inclusion or exclusion in the dataset does not change the probability of a particular outcome very much: it does so by a multiplicative factor of e and an additive amount, δ. 4 Differentially Private Bagging In order to enforce differential privacy, we must bound the effect of a sample’s inclusion or exclusion on the output of the model. In order to do this, we propose a model for which the maximal effect can be easily deduced and moreover, for which we can actually show a lesser maximal effect by analysing the training procedure and deriving data-driven privacy guarantees. We begin by considering k (random) partitions of the dataset, D1, ...,Dk with Di = {Di1, ..., Din} for each i, where Dij is a set of size b |D| n c or d |D| n e. We then train a "teacher" model, Tij on each of these sets (i.e. Tij is trained on Dij). 
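As a concrete illustration of this construction, the sketch below builds the k random partitions of n near-equal subsets and the per-sample index sets I(u) used by the personalised accountants defined in Section 4. The function name and the use of NumPy are assumptions made for illustration, not the authors' code.

```python
import numpy as np

def build_partitions(num_samples, k, n, seed=0):
    """k independent random partitions, each made of n near-equal disjoint subsets D_ij,
    together with the membership map I(u) = {(i, j) : u in D_ij} for every sample u."""
    rng = np.random.default_rng(seed)
    partitions = []                                    # partitions[i][j] = indices of D_ij
    membership = [[] for _ in range(num_samples)]      # membership[u] = I(u), always k pairs
    for i in range(k):
        order = rng.permutation(num_samples)
        subsets = np.array_split(order, n)             # sizes floor(|D|/n) or ceil(|D|/n)
        partitions.append(subsets)
        for j, idx in enumerate(subsets):
            for u in idx:
                membership[u].append((i, j))
    return partitions, membership

# Each teacher T_ij would then be fit on the rows indexed by partitions[i][j];
# setting k = 1 recovers the single split used by standard subsample-and-aggregate.
partitions, membership = build_partitions(num_samples=1000, k=10, n=50)
assert all(len(I_u) == 10 for I_u in membership)        # every sample trains exactly k teachers
```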
1. What is the focus of the paper, and what are the modifications made to the moments accountant analysis of Abadi et al? 2. What are the strengths and weaknesses of the proposed approach compared to prior works? 3. How does the reviewer assess the clarity and organization of the paper's content, particularly in Section 4? 4. Are there any typos or errors in the paper that need to be addressed? 5. What additional analyses or comparisons would help strengthen the paper's contributions?
Review
Review The authors modify the moments accountant analysis of Abadi et al. for a subsample-and-aggregate style algorithm. The key technical idea is that their analysis treats privacy loss differently when a data point is removed versus added. The results seem plausible and rigorous (although I did not verify all details), but I wish more effort had gone toward comparing the results here to the analog without the separate downwards and upwards moment accounting, to help show the power of this technique. At many times the prose was too high-level/imprecise to help me understand the significance of the pieces not immediately inherited from Abadi et al.
Comments:
* Avoid opinion language like "we believe" in comparing techniques qualitatively and speculating about their future impact.
* The paragraph before 4.2 seems to be the main idea, but it could use some clarification. How much better should we expect (4) to be than (3)? You make a comment about how it is "unlikely" that the two bounds are the same, but what does unlikely mean in this sentence? More rigorous claims along these lines could strengthen the paper.
* The paper should be reorganized so Section 4 is (at least almost) all new contributions; as it is, almost all of 4.1 is just inherited from Abadi et al.
* Use $\ell$ instead of $l$ for readability.
* Is there a typo in Thm 2? alpha does not appear to be defined with u as a parameter.
* Thm 3: "The mechanism" -> "Any mechanism"
* m is defined to be the fraction of teachers that voted for all but the least popular vote c_min, which is different from the claim at Line 203 that unanimity is the only way to get m=1. Thus Line 203 seems to be an overstatement. Can you clarify?
* The simulations are useful baselines, but a formal accuracy guarantee is really required in 4.3 to assess the quality of this technique.
NIPS
Title Differentially Private Bagging: Improved utility and cheaper privacy than subsample-and-aggregate Abstract Differential Privacy is a popular and well-studied notion of privacy. In the era of big data that we are in, privacy concerns are becoming ever more prevalent and thus differential privacy is being turned to as one such solution. A popular method for ensuring differential privacy of a classifier is known as subsample-and-aggregate, in which the dataset is divided into distinct chunks and a model is learned on each chunk, after which it is aggregated. This approach allows for easy analysis of the model on the data and thus differential privacy can be easily applied. In this paper, we extend this approach by dividing the data several times (rather than just once) and learning models on each chunk within each division. The first benefit of this approach is the natural improvement of utility by aggregating models trained on a more diverse range of subsets of the data (as demonstrated by the well-known bagging technique). The second benefit is that, through analysis that we provide in the paper, we can derive tighter differential privacy guarantees when several queries are made to this mechanism. In order to derive these guarantees, we introduce the upwards and downwards moments accountants and derive bounds for these moments accountants in a data-driven fashion. We demonstrate the improvements our model makes over standard subsample-and-aggregate in two datasets (Heart Failure (private) and UCI Adult (public)). 1 Introduction In the era of big data that we live in today, privacy concerns are becoming ever more prevalent. It falls to the researchers using the data to ensure that adequate measures are taken to ensure any results that are put into the public domain (such as the parameters of a model learned on the data) do not disclose sensitive attributes of the real data. For example, it is well known that the high capacity of deep neural networks can cause the networks to "memorize" training data; if such a network’s parameters were made public, it may be possible to deduce some of the training data that was used to train the model, thus resulting in real data being leaked to the public. Several attempts have been made at rigorously defining what it means for an algorithm, or an algorithm’s output, to be "private". One particularly attractive and well-researched notion is that of differential privacy [1]. Differential privacy is a formal definition that requires that the distribution of the output of a (necessarily probabilistic) algorithm not be too different when a single data point is included in the dataset or not. Typical methods for enforcing differential privacy involve bounding 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. the effect that inclusion of a single sample can have on the output and then adding noise (typically Laplacian or Gaussian) proportional to this effect. The most difficult step in this process is in attaining a good bound on the effect of inclusion. One method for bypassing this difficulty, is to build a classifier by dividing up the dataset into distinct subsets, training a separate classifier on each chunk, and then aggregating these classifiers. The effect of a single sample is then bounded by the fact that it was used only to train exactly one of these models and thus its inclusion or exclusion will affect only that model’s output. 
By dividing the data into smaller chunks, we learn more models and thus the one model that a sample can effect becomes a smaller "fraction" of the overall model, thus resulting in a smaller effect that any one sample has on the model as a whole. This method is commonly referred to as subsample-and-aggregate [2, 3, 4]. In this work, we propose an extension to the subsample-and-aggregate methodology that has similarities with bagging [5]. Fig. 1 depicts the key methodological difference between standard subsample-and-aggregate and our proposed framework, Differentially Private Bagging (DPBag), namely that we partition the dataset many times. This multiple-partitioning not only improves utility by building a better predictor, but also enjoys stronger privacy guarantees due to the fact that the effect of adding or removing a single sample can be more tightly bounded within our framework. In order to prove these guarantees, we introduce the personalised moments accountants, which are data-driven variants of the moments accountant [6], that allow us to track the privacy loss with respect to each sample in the dataset and then deduce the final privacy loss by taking the maximum loss over all samples. The personalised moments accountant also lends itself to allowing for personalised differential privacy [7] in which we may wish to allow each individual to specify their own privacy parameters. We demonstrate the efficacy of our model on two classification tasks, demonstrating that our model is an improvement over the standard subsample-and-aggregate algorithm. 2 Related Works Several works have proposed methods for differentially private classification. Of particular interest is the method of [6], in which they propose a method for differentially private training of deep neural networks. In particular, they introduce a new piece of mathematical machinery, the moments accountant. The moments accountant allows for more efficient composition of differentially private mechanisms than either simple or advanced composition [1]. Fortunately, the moments accountant is not exclusive to deep networks and has proven to be useful in other works. In this paper, we use two variants of the moments accountant, which we refer to collectively as the personalised moments accountants. Our algorithm lends itself naturally to being able to derive tighter bounds on these personalised moments accountants than would be possible on the "global" moments accountant. Most other methods use the subsample-and-aggregate framework (first discussed in [2]) to guarantee differential privacy. A popular, recent subsample-and-aggregate method is Private Aggregation of Teacher Ensembles (PATE), proposed in [8]. Their main contribution is to provide a data-driven bound on the moments accountant for a given query to the subsample-and-aggregate mechanism that they claim significantly reduces the privacy cost over the standard data-independent bound. This is further built on in [9] by adding a mechanism that first determines whether or not a query will be too expensive to answer or not, only answering those that are sufficiently cheap. Both works use standard subsample-and-aggregate in which the data is partitioned only once. Our method is more fundamental than PATE, in the sense that the techniques used by PATE to improve on subsample-and-aggregate would also be applicable to our differentially private bagging algorithm. 
The bound they derive in [8] on the moments accountant should translate to our personalised moments accountants in the same way the data-independent bound does (i.e. by multiplying the dependence on the inverse noise scale by a data-driven value) and as such our method would provide privacy improvements over PATE similar to the improvements it provides over standard subsample-and-aggregate. We give an example of our conjectured result for PATE in the Supplementary Materials for clarity. Another method that utilises subsample-and-aggregate is [10], in which they use the distance to instability framework [4] combined with subsample-and-aggregate to privately determine whether a query can be answered without adding any noise to it. In cases where the query can be answered, no privacy cost is incurred. Whenever the query cannot be answered, no answer is given but a privacy cost is incurred. Unfortunately, the gains to be had by applying our method over basic subsample-and-aggregate to their work are not clear, but we believe that at the very least, the utility of the answer provided may be improved on due to the ensemble having a higher utility in our case (and the same privacy guarantees will hold that they prove). In [11], they build a method for learning a differentially private decision tree. Although they apply bagging to their framework, they do not do so to create privacy, but only to improve the utility of their learned classifier. The privacy analysis they provide is performed only on each individual tree and not on the ensemble as a whole. 3 Differential Privacy Let us denote the feature space by X , the set of possible class labels by C and write U = X × C. Let us denote by D the collection of all possible datasets consisting of points in U . We will write D to denote a dataset in D, so that D = {ui}Ni=1 = {(xi, yi)}Ni=1 for some N . We first provide some preliminaries on differential privacy [1] before describing our method; we refer interested readers to [1] for a thorough exposition of differential privacy. We will denote an algorithm byM, which takes as input a dataset D and outputs a value from some output space,R. Definition 1 (Neighboring Datasets [1]). Two datasets D,D′ are said to be neighboring if ∃u ∈ U s.t. D \ {u} = D′ or D′ \ {u} = D. Definition 2 (Differential Privacy [1]). A randomized algorithm,M, is ( , δ)-differentially private if for all S ⊂ R and for all neighboring datasets D,D′: P(M(D) ∈ S) ≤ e P(M(D′) ∈ S) + δ where P is taken with respect to the randomness ofM. Differential privacy provides an intuitively understandable notion of privacy - a particular sample’s inclusion or exclusion in the dataset does not change the probability of a particular outcome very much: it does so by a multiplicative factor of e and an additive amount, δ. 4 Differentially Private Bagging In order to enforce differential privacy, we must bound the effect of a sample’s inclusion or exclusion on the output of the model. In order to do this, we propose a model for which the maximal effect can be easily deduced and moreover, for which we can actually show a lesser maximal effect by analysing the training procedure and deriving data-driven privacy guarantees. We begin by considering k (random) partitions of the dataset, D1, ...,Dk with Di = {Di1, ..., Din} for each i, where Dij is a set of size b |D| n c or d |D| n e. We then train a "teacher" model, Tij on each of these sets (i.e. Tij is trained on Dij). 
We note that each sample u ∈ D is in precisely one set from each partition and thus in precisely k sets overall; it is therefore used to train k teachers. We collect the indices of the corresponding teachers in the set I(u) = {(i, j) : u ∈ Dij} and denote by T(u) = {Tij : (i, j) ∈ I(u)} the set of teachers trained using the sample u. Given a new sample to classify, x ∈ X, we first compute for each class the number of teachers that output that class, nc(x) = |{(i, j) : Tij(x) = c}|. The model then classifies the sample as ĉ(x) = arg max{nc(x) : c ∈ C}, i.e. as the class with the most votes. To make the output differentially private, we can add independent Laplacian noise to each of the resulting counts before taking the arg max, so that the classification becomes c̃λ(x) = arg max{nc(x) + Yc : c ∈ C}, where Yc, c ∈ C, are independent Lap(k/λ) random variables and λ is a hyper-parameter of our model. We scale the noise to the number of partitions because the number of partitions is precisely the total number of teachers that any individual sample can affect. Thus the (naive) bound on the ℓ1-sensitivity of this algorithm is k, giving us the following theorem, which tells us that our differentially private bagging algorithm is at least as private as the standard subsample-and-aggregate mechanism, independent of the number of partitions used.

Theorem 1. With k partitions and n teachers per partition, c̃λ is 2λ-differentially private with respect to the data D.

Proof. This follows immediately from noting that the ℓ1-sensitivity of nc(x) is k. See [1].

We note that the standard subsample-and-aggregate algorithm can be recovered from ours by setting k = 1. In the next section, we will derive tighter bounds on the differential privacy of our bagging algorithm when several queries are made to the classifier.

4.1 Personalised Moments Accountants

In order to provide tighter differential privacy guarantees for our method, we now introduce the personalised moments accountants. Like the original moments accountant from [6], these will allow us to compose a sequence of differentially private mechanisms more efficiently than using standard or advanced composition [1]. We begin with a preliminary definition (found in [6]).

Definition 3 (Privacy Loss and Privacy Loss Random Variable [6]). Let M : 𝒟 → R be a randomized algorithm, with D and D′ a pair of neighbouring datasets. Let aux be any auxiliary input. For any outcome o ∈ R, we define the privacy loss at o to be: c(o; M, aux, D, D′) = log [ P(M(D, aux) = o) / P(M(D′, aux) = o) ], with the privacy loss random variable, C, being defined by C(M, aux, D, D′) = c(M(D, aux); M, aux, D, D′), i.e. the random variable defined by evaluating the privacy loss at a sample from M(D, aux).

In defining the moments accountant, an intermediate quantity, referred to by [6] as the "l-th moment", is introduced. We divide the definition of this l-th moment into a downwards and an upwards version (corresponding to whether D′ is obtained by removing or adding an element to D, respectively). We do this because the upwards moments accountant must be bounded over all possible points u ∈ U that could be added, whereas the downwards moments accountants need only consider the points that are already in D.

Definition 4. Let D be some dataset and let u ∈ D. Let aux be any auxiliary input. Then the downwards moments accountant is given by α̌M(l; aux, D, u) = log E[exp(l · C(M, aux, D, D \ {u}))].

Definition 5. Let D be some dataset.
Then the upwards moments accountant is defined as α̂M(l; aux, D) = max_{u∈U} log E[exp(l · C(M, aux, D, D ∪ {u}))].

We can recover the original moments accountant from [6], αM(l), as

αM(l) = max_{aux,D} { α̂M(l; aux, D), max_u α̌M(l; aux, D, u) }.   (1)

We will use this fact, together with the two theorems in the following subsection, to calculate the final global privacy loss of our mechanism.

4.2 Results inherited from the Moments Accountant

The following two theorems state two properties that our personalised moments accountants share with the original moments accountant. Note that the composability in Theorem 2 is being applied to each personalised moments accountant individually.

Theorem 2 (Composability). Suppose that an algorithm M consists of a sequence of adaptive algorithms (i.e. algorithms that take as auxiliary input the outputs of the previous algorithms) M1, ..., Mm, where Mi : (∏_{j=1}^{i−1} Rj) × 𝒟 → Ri. Then, for any l,

α̌M(l; D, u) ≤ Σ_{i=1}^{m} α̌Mi(l; D, u)   and   α̂M(l; D) ≤ Σ_{i=1}^{m} α̂Mi(l; D).

Proof. The statement of this theorem is a variation on Theorem 2 from [6], applied to the personalised moments accountants. Their proof involves proving this stronger result. See [6], Theorem 2 proof.

Theorem 3 ((ε, δ) from α(l) [6]). Let δ > 0. Any mechanism M is (ε, δ)-differentially private for

ε = min_l [ (αM(l) + log(1/δ)) / l ].   (2)

Proof. See [6], Theorem 2.

Theorem 2 means that bounding each personalised moments accountant individually could provide a significant improvement on the overall bound for the moments accountant. Combined with Eq. 1, we can first sum over successive steps of the algorithm and then take the maximum. In contrast, original approaches that bound only the overall moments accountant at each step essentially compute

αM(l) = Σ_{i=1}^{m} max_{aux,D} { α̂Mi(l; aux, D), max_u α̌Mi(l; aux, D, u) }.   (3)

Our approach of bounding the personalised moments accountants allows us to compute the bound as

αM(l) = max_{aux,D} { Σ_{i=1}^{m} α̂Mi(l; aux, D), max_u Σ_{i=1}^{m} α̌Mi(l; aux, D, u) },   (4)

which is strictly smaller whenever there is not some personalised moments accountant that is always larger than all other personalised moments accountants. The bounds we derive in the following subsection and the subsequent remarks will make clear why this is an unlikely scenario.
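To illustrate how Theorems 2 and 3 combine with Eq. 4, the short sketch below (our own simplified illustration, not code from the paper) keeps one row of accumulated moments per tracked sample (including the "upwards" virtual sample), sums moments over queries as in Theorem 2, takes the maximum over samples only at the end as in Eq. 4, and then converts to ε for a given δ via Eq. 2.

```python
import numpy as np

def epsilon_from_accountants(alpha, delta):
    """alpha[s, l-1] holds the l-th moment for tracked sample s, already
    summed over all answered queries (Theorem 2).  Following Eq. 4 we take
    the maximum over samples per order l, then optimise over l (Theorem 3)."""
    num_samples, L = alpha.shape
    alpha_global = alpha.max(axis=0)              # max over samples, one value per order l
    orders = np.arange(1, L + 1)
    eps_candidates = (alpha_global + np.log(1.0 / delta)) / orders
    return float(eps_candidates.min())

# Example: 3 tracked samples, moments of order 1..32 accumulated over many queries.
rng = np.random.default_rng(1)
alpha = np.cumsum(rng.uniform(0.0, 0.01, size=(3, 32)), axis=1)
print(epsilon_from_accountants(alpha, delta=1e-5))
```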
4.3 Bounding the Personalised Moments Accountants

Having defined the personalised moments accountants, we can now state our main theorems, which provide a data-dependent bound on the personalised moments accountant for a single query to c̃λ.

Theorem 4 (Downwards bound). Let xnew ∈ X be a new point to classify. For each c ∈ C and each u ∈ D, define the quantity nc(xnew; u) = |{(i, j) ∈ I(u) : Tij(xnew) = c}| / k, i.e. nc(xnew; u) is the fraction of teachers that were trained on a dataset containing u that output class c when classifying xnew. Let m(xnew; u) = max_c {1 − nc(xnew; u)}. Then

α̌_{c̃λ(xnew)}(l; D, u) ≤ 2λ²m(xnew; u)²l(l + 1).   (5)

Proof. (Sketch.) The theorem follows from the fact that m(xnew; u) is the maximum change that can occur in the vote fractions nc, c ∈ C, when the sample u is removed from the training of each model in T(u), corresponding to all teachers that were not already voting for the minority class switching their vote to the minority class. m can thus be thought of as the personalised ℓ1-sensitivity of a specific query to our algorithm, and so the standard sensitivity-based argument gives us that c̃λ(xnew) is 2λm(xnew; u)-differentially private with respect to removing u. The bound on the (downwards) moments accountant then follows using a similar argument to the proof of Prop. 3.3 in [12].

To prove the upwards bound, we must understand what happens when we add a point to our training data - which is that it will be added to a training set for precisely 1 teacher in each of the k partitions. Each dataset in a partition will either be of size ⌈|D|/n⌉ or ⌊|D|/n⌋. We assume (without loss of generality) that a new point is added to the first dataset in each partition that contains ⌊|D|/n⌋ samples. We collect the indices of these datasets in I(∗) and denote the set of teachers trained on these subsets by T(∗).

Theorem 5 (Upwards bound). Let xnew ∈ X be a new point to classify. For each c ∈ C, define the quantity nc(xnew; ∗) = |{(i, j) ∈ I(∗) : Tij(xnew) = c}| / k, i.e. nc(xnew; ∗) is the fraction of teachers whose training set would receive the new point that output class c when classifying xnew. Let m(xnew; ∗) = max_c {1 − nc(xnew; ∗)}. Then

α̂_{c̃λ(xnew)}(l; D) ≤ 2λ²m(xnew; ∗)²l(l + 1).   (6)

Proof. The proof is exactly as for Theorem 4, replacing I(u) and T(u) with I(∗) and T(∗).

The standard bound on the moments accountant of a 2λ-differentially private algorithm is 2λ²l(l + 1) (see [12]). Thus, our theorems introduce a factor of m(xnew; u)². Note that by definition m ≤ 1, and thus our bound is always at least as tight as the standard one and is in general tighter. It should be noted, however, that for a single query, this bound may not improve on the naive 2λ²l(l + 1) bound, since in that case equations 3 and 4 are equal. If there is any training sample u ∈ D ∪ {∗} and any class c ∈ C for which all teachers in T(u) classify xnew as some class other than c, then m(xnew; u) = 1. However, over the course of several queries, it is unlikely that each set of teachers T(u) always excludes some class, and as such the total bound according to Theorems 2, 4 and 5 is lower than if we just used the naive bound. In the case of binary classification, for example, the bounds are only the same if there is some set of teachers that is always unanimous when classifying new samples.

Remarks. (i) m(xnew; u) is smallest when the teachers in T(u) are divided evenly among the classes when classifying xnew; this is intuitive because in such a situation u is providing very little information about how to classify xnew, and thus little is being leaked about u when we classify xnew. (ii) m(u) is bounded below by 1 − 1/|C|, and so our method will provide the biggest improvements for binary classification, with the improvements decaying as the number of classes increases. (iii) When k = 1, m(u) is always 1, because nc is 1 for some c ∈ C and 0 for all remaining classes, and from this we recover the standard bound of 2λ²l(l + 1) used for subsample-and-aggregate. (iv) For Eq. 3 and 4 to be equal, there must exist some u∗ for which m(xnew; u∗) > m(xnew; u) for all u and xnew. This amounts to there being some set of teachers (corresponding to u∗) that is in more agreement than every other set of teachers for every new point they are asked to classify. Other than in this unlikely scenario, Eq. 4 will be strictly smaller than Eq. 3.
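The data-dependent quantity in Theorems 4 and 5 is easy to compute once the votes of each sample's teacher set are known. The sketch below is our own illustration of how m(xnew; u) and the per-query increments 2λ²m²l(l + 1) of Eqs. 5 and 6 could be evaluated; votes_per_sample is an assumed data structure mapping each sample u (and the placeholder '*') to the class votes of the k teachers in T(u) on the current query point.

```python
import numpy as np

def moment_increments(votes_per_sample, num_classes, lam, L):
    """votes_per_sample: dict mapping each sample id u (and the symbol '*') to
    the length-k array of class labels predicted by the teachers in T(u) for
    the current query.  Returns, for every u, the increments
    2 * lam^2 * m(u)^2 * l * (l + 1) for l = 1..L (Eqs. 5 and 6)."""
    orders = np.arange(1, L + 1)
    increments = {}
    for u, votes in votes_per_sample.items():
        k = len(votes)
        fractions = np.bincount(votes, minlength=num_classes) / k
        m_u = np.max(1.0 - fractions)      # personalised sensitivity m(x_new; u)
        increments[u] = 2.0 * lam**2 * m_u**2 * orders * (orders + 1)
    return increments

# Example: two training samples plus the 'upwards' placeholder '*', k = 10 teachers each.
votes = {
    "u1": np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 0]),   # evenly split -> small m, cheap query
    "u2": np.array([1] * 10),                          # unanimous    -> m = 1, full cost
    "*":  np.array([0, 1, 0, 1, 1, 0, 0, 1, 1, 0]),
}
print({u: inc[:3] for u, inc in moment_increments(votes, num_classes=2, lam=1.0, L=16).items()})
```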
4.4 Semi-supervised knowledge transfer

We now discuss how best to leverage the fact that the best gains from our approach come from answering several queries (as implied by equations 3 and 4). We first note that the vanilla subsample-and-aggregate method does not derive data-dependent privacy guarantees for an individual query, and thus, for a fixed ε and δ, the number of queries that can be answered by the mechanism is known in advance. In contrast, because our data-driven bounds on the personalised moments accountants depend on the queries themselves, the cost of any given query is not known in advance, and as such the number of queries we can answer before using up our privacy allowance (ε) is unknown. Unfortunately, we cannot simply answer queries until the allowance is used up, because the number of queries that we answer is a function of the data itself and thus we would need to introduce a differentially private mechanism for determining when to stop (such as calculating ε and δ after each query using smooth sensitivity, as proposed in [8]). Instead, we follow [8] and leverage the fact that we can answer more queries than standard subsample-and-aggregate to train a student model using unlabelled public data. The final output of our algorithm will then be a trained classifier that can be queried indefinitely. To train this model, we take unlabelled public data P = {x̃1, x̃2, ...} and label it using c̃λ until the privacy allowance has been used up. This will result in a (privately) labelled dataset P̃ = {(x̃1, y1), ..., (x̃p, yp)}, where p is the number of queries answered. We train a student model, S, on this dataset, and the resultant classifier can now be used to answer any future queries. Because of our data-driven bound on the personalised moments accountants, we will typically have that p > q, where q is the number of queries that can be answered by a standard subsample-and-aggregate procedure. The pseudo-code for learning a differentially private student classifier using our differentially private bagging model is given in Algorithm 1 (pseudo-code for training a student model using standard subsample-and-aggregate is given in the Supplementary Materials for comparison). Note that the majority of for loops (in particular the loop over x ∈ D ∪ {∗} on lines 17–21) can be parallelized.

Algorithm 1 Semi-supervised differentially private knowledge transfer using multiple partitions
1: Input: ε, δ, D, batch size nmb, number of partitions k, number of teachers per partition n, noise size λ, maximum order of moments to be explored L, unlabelled public data Dpub
2: Initialize: {θ_T^{i,j}} for i = 1, ..., n, j = 1, ..., k; θ_S; ε̂ ← 0; α(l; x) ← 0 for l = 1, ..., L, x ∈ D ∪ {∗}
3: Create k partitions of the dataset, each made up of n disjoint subsets of the data Di,j, i = 1, ..., n, j = 1, ..., k, such that ∪_i Di,j = D and Di1,j ∩ Di2,j = ∅ for all i1 ≠ i2 and all j
4: Set I(∗) = {(n, 1), ..., (n, k)}
5: while teachers have not converged do
6:    for i = 1, ..., n do
7:       for j = 1, ..., k do
8:          Sample (x1, y1), ..., (xnmb, ynmb) i.i.d. from Di,j
9:          Update teacher Ti,j using SGD with gradient
10:         ∇_{θ_T^{i,j}} [ − Σ_{s=1}^{nmb} Σ_{c∈C} ys,c log(T_{i,j}^c(xs)) ]  (multi-class cross-entropy loss)
11: while ε̂ < ε do
12:    Sample x1, ..., xnmb ∼ Dpub
13:    for s = 1, ..., nmb do
14:       rs ← c̃λ(xs)
15:       Update the element-wise moments accountants:
16:       nc ← |{(i, j) : Ti,j(xs) = c}| / k for c ∈ C
17:       for x ∈ D ∪ {∗} do
18:          nc(x) ← |{(i, j) ∈ I(x) : Ti,j(xs) = c}| / k for c ∈ C
19:          m(x) ← max_c {1 − nc(x)}
20:          for l = 1, ..., L do
21:             α(l; x) ← α(l; x) + 2λ²m(x)²l(l + 1)
22:    Update the student, S, using SGD with gradient
23:    ∇_{θ_S} [ − Σ_{s=1}^{nmb} Σ_{c∈C} rs,c log S^c(xs) ]  (multi-class cross-entropy loss)
24:    ε̂ ← min_l [ (max_x α(l; x) + log(1/δ)) / l ]
25: Output: S
Theorem 6. The output of Algorithm 1 is (ε, δ)-differentially private with respect to D.

Proof. This follows from Theorems 2, 3, 4 and 5.

5 Experiments

In this section we compare our method (DPBag) against the standard subsample-and-aggregate framework (SAA) to illustrate the improvements that can be achieved at a fundamental level by using our model. Additionally, we compare against our method without the improved privacy bound (DPBag-) to quantify the improvements that are due to the bagging procedure and those that are due to our improved privacy bound. We perform the experiments on two real-world datasets: Heart Failure and UCI Adult (dataset description and results for UCI Adult can be found in the Supplementary Materials). An implementation of DPBag can be found at https://bitbucket.org/mvdschaar/mlforhealthlabpub/src/master/alg/dpbag/.

Heart Failure dataset: The Heart Failure dataset is a private dataset consisting of 24175 patients who have suffered heart failure. We set the label of each patient as 3-year all-cause mortality, excluding all patients who are censored before 3 years. The total number of features is 29 and the number of patients is 24175. Among the 24175 patients, 10387 (43.0%) die within 3 years. We randomly divide the data into 3 disjoint subsets: (1) a training set (33%), (2) public data (33%), and (3) a testing set (33%). In the main paper, we use logistic regression for the teacher and student models in both algorithms; additional results for the Gradient Boosting Method (GBM) can be found in the Supplementary Materials. We set δ = 10^(−5). We vary ε ∈ {1, 3, 5}, n ∈ {50, 100, 250} and k ∈ {10, 50, 100}. In all cases we set λ = 2n. To save space, we report DPBag results for n ∈ {100, 250}, k ∈ {50, 100} and SAA results for n = 250 (the best performing) in the main manuscript, with full tables reported in the Supplementary Materials. Results reported are the mean of 10 runs of each experiment.

5.1 Results

In Table 1 we report the accuracy, AUROC and AUPRC of the 3 methods, and we also report these for a non-privately trained baseline model (NPB), allowing us to quantify how much has been "lost due to privacy". In Table 2, we report the total number of queries that could be made to each differentially private classifier before the privacy budget was used up. In Table 1 we see that DPBag outperforms standard SAA for all values of ε, with Table 2 showing that our method allows for a significant increase in the number of public samples that can be labelled (almost 100% more for ε = 3). The optimal number of teachers, n, varies with ε for both DPBag and SAA. We see that for ε = 1, n = 250 performs best, but as we increase ε the optimal number of teachers decreases. For small ε and small n, very few public samples can be labelled and so the student does not have enough data to learn from. On the other hand, for large ε and large n, the number of answered queries is much larger, to the point where the limiting factor is no longer the number of labels but instead their quality. Since we scale the noise to the number of teachers, the label quality improves with fewer teachers because each teacher is trained on a larger portion of the training data. This is reflected by both DPBag and SAA. In the SAA results, the performance does not saturate as quickly with respect to ε because the number of queries that corresponds to a given ε is smaller for SAA than for DPBag.
As expected, we see that DPBag- sits between SAA and DPBag, enjoying performance gains due to a stronger underlying model, and thus more accurately labelled training samples for the student, while the improved privacy bound of DPBag allows more samples to be labelled and thus further gains are still made. Table 2 also sheds light on the behavior of DPBag with respect to k. We see in Table 1 that both k = 50 and k = 100 can provide the best performance (depending on n and ε). In Table 2, the number of queries that can be answered increases with k. This implies that (as expected), as we increase k, the quantity m(u) gets closer to 0.5, and so each query costs less. However, when m(u) is close to 0.5 for all samples u in the dataset, neither class will have a clear majority, and thus the labels are more susceptible to flipping due to the added noise. k = 50 appears to balance this trade-off when ε is larger (and so we can already answer more queries); when ε is smaller, answering more queries matters more than answering them well, so k = 100 is preferred.

6 Discussion

In this work, we introduced a new methodology for developing a differentially private classifier. Building on the ideas of subsample-and-aggregate, we divide the dataset several times, allowing us to derive tighter, data-dependent bounds on the privacy cost of a query to our mechanism. To do so, we defined the personalised moments accountants, which we use to accumulate the privacy loss of a query with respect to each sample in the dataset (and any potentially added sample) individually. A key advantage of our model, like subsample-and-aggregate, is that it is model agnostic and can be applied using any base learner, with the differential privacy guarantees holding regardless of the learner used. We believe this work opens up several interesting avenues for future research: (i) the privacy guarantees could potentially be improved by making assumptions about the base learners used; (ii) the personalised moments accountants naturally allow for the development of an algorithm that affords each sample a different level of differential privacy, i.e. personalised differential privacy [7]; (iii) we believe bounds such as those derived in [8] and [9] that rely on the subsample-and-aggregate method will have natural analogues with respect to our bagging procedure, corresponding to tighter bounds on the personalised moments accountants than can be shown for the global moments accountant using simple subsample-and-aggregate (see the discussion in the Supplementary Materials).

Acknowledgments

This work was supported by the National Science Foundation (NSF grants 1462245 and 1533983), and the US Office of Naval Research (ONR).
1. What is the focus of the paper, and what is the author's contribution to the field? 2. What are the strengths and weaknesses of the proposed approach compared to existing works? 3. How relevant is the paper's comparison to other works in the field, and what are some limitations of the proposed method? 4. What are the reviewer's main concerns regarding the paper's content and relevance? 5. How does the reviewer assess the novelty and impact of the paper's findings?
Review
Review The paper explores a remarkably simple but consequential improvement on the standard sample-and-aggregate framework. We have two main concerns about the paper. First, it is the relatively niche appeal of the result - for pragmatical reasons, sample-and-aggregate, or PATE, frameworks are very rarely used. Second, the paper compares its personalized accountant mechanism with the original sample-and-aggregate, outperforming it slightly. A more relevant comparison would have been with either PATE, or the "Scalable PATE" (ICLR 2018), both of which apply their own versions of data-dependent accounting mechanisms. In its current form the paper is somewhat half-baked: rather than improving on the latest state-of-the-art, it uses as the main benchmark a 2007 paper.
NIPS
Title Sliding Window Algorithms for k-Clustering Problems Abstract The sliding window model of computation captures scenarios in which data is arriving continuously, but only the latest w elements should be used for analysis. The goal is to design algorithms that update the solution efficiently with each arrival rather than recomputing it from scratch. In this work, we focus on k-clustering problems such as k-means and k-median. In this setting, we provide simple and practical algorithms that offer stronger performance guarantees than previous results. Empirically, we show that our methods store only a small fraction of the data, are orders of magnitude faster, and find solutions with costs only slightly higher than those returned by algorithms with access to the full dataset. 1 Introduction Data clustering is a central tenet of unsupervised machine learning. One version of the problem can be phrased as grouping data into k clusters so that elements within the same cluster are similar to each other. Classic formulations of this question include the k-median and k-means problems for which good approximation algorithms are known [1, 44]. Unfortunately, these algorithms often do not scale to large modern datasets requiring researchers to turn to parallel [8], distributed [9], and streaming methods. In the latter model, points arrive one at a time and the goal is to find algorithms that quickly update a small sketch (or summary) of the input data that can then be used to compute an approximately optimal solution. One significant limitation of the classic data stream model is that it ignores the time when a data point arrived; in fact, all of the points in the input are treated with equal significance. However, in practice, it is often important (and sometimes necessary) to restrict the computation to very recent data. This restriction may be due to data freshness—e.g., when training a model on recent events, data from many days ago may be less relevant compared to data from the previous hour. Another motivation arises from legal reasons, e.g., data privacy laws such as the General Data Protection Regulation (GDPR), encourage and mandate that companies not retain certain user data beyond a specified period. This has resulted in many products including a data retention policy [54]. Such recency requirements can be modeled by the sliding window model. Here the goal is to maintain a small sketch of the input data, just as with the streaming model, and then use only this sketch to approximate the solution on the last w elements of the stream. Clustering in the sliding window model is the main question that we study in this work. A trivial solution simply maintains the w elements in the window and recomputes the clusters from scratch at each step. We intend to find solutions that use less space, and are more efficient at processing each 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. new element. In particular, we present an algorithm which uses space linear in k, and polylogarithmic in w, but still attains a constant factor approximation. Related Work Clustering. Clustering is a fundamental problem in unsupervised machine learning and has application in a disparate variety of settings, including data summarization, exploratory data analysis, matrix approximations and outlier detection [39, 41, 46, 50]. 
One of the most studied formulations in clustering of metric spaces is that of finding k centers that minimize an objective consisting of the `p norm of the distances of all points to their closest center. For p ∈ {1, 2,∞} this problem corresponds to k-median, k-means, and k-center, respectively, which are NP-hard, but constant factor approximation algorithms are known [1, 34, 44]. Several techniques have been used to tackle these problems at scale, including dimensionality reduction [45], core-sets [6], distributed algorithms [5], and streaming methods reviewed later. To clarify between Euclidean or general metric spaces, we note that our results work on arbitrary general metric spaces. The hardness results in the literature hold even for special case of Euclidean metrics and the constant factor approximation algorithms hold for the general metric spaces. Streaming model. Significant attention has been devoted to models for analyzing large-scale datasets that evolve over time. The streaming model of computation is of the most well-known (see [49] for a survey) and focuses on defining low-memory algorithms for processing data arriving one item at a time. A number of interesting results are known in this model ranging from the estimation of stream statistics [3, 10], to submodular optimization [7], to graph problems [2, 30, 42], and many others. Clustering is also well studied in this setting, including algorithms for k-median, k-means, and k-center in the insertion-only stream case [6, 20, 35]. Sliding window streaming model. The sliding window model significantly increases the difficultly of the problem, since deletions need to be handled as well. Several techniques are known, including the exponential histogram framework [27] that addresses weakly additive set functions, and the smooth histogram framework [18] that is suited for functions that are well-behaved and possesses a sufficiently small constant approximation. Since many problems, such as k-clustering, do not fit into these two categories, a number of algorithms have been developed for specific problems such as submodular optimization [14, 21, 29], graph sparsification [26], minimizing the enclosing ball [55], private heavy hitters [54], diversity maximization [14] and linear algebra operations [15]. Sliding window algorithms find also applications in data summarization [23]. Turning to sliding window algorithms for clustering, for the k-center problem Cohen et al. [25] show a (6 + )-approximation using O(k log ∆) space and per point update time of O(k2 log ∆), where ∆ is the spread of the metric, i.e. the ratio of the largest to the smallest pairwise distances. For k-median and k-means, [17] give constant factor approximation algorithms that use O(k3 log6 w) space and per point update time of O(poly(k, logw)).1 Their bound is polylogarithmic in w, but cubic in k, making it impractical unless k w.2 In this paper we improve their bounds and give a simpler algorithm with only linear dependency of k. Furthermore we show experimentally (Figure 1 and Table 1) that our algorithm is faster and uses significantly less memory than the one presented in [17] even with very small values k (i.e., k ≥ 4). In a different approach, [56] study a variant where one receives points in batches and uses heuristics to reduce the space and time. Their approach does provide approximation guarantees but it applies only to the Euclidean k-means case. 
Recently, [32] studied clustering problems in the distributed sliding window model, but these results are not applicable to our setting. The more challenging fully-dynamic stream case has also received attention [16, 38]. Contrary to our result for the sliding window case, in the fully-dynamic case, obtaining a Õ(k) memory, low update time algorithm, for the arbitrary metric k-clustering case with general `p norms is an open problem. For the special case of d-dimensional Euclidean spaces for k-means, there are positive results—[38] give Õ(kd4)-space core-set with 1 + approximation. Dynamic algorithms have also been studied in a consistent model [24, 43], but there the objective is to minimize the number of changes to the solution as the input evolves, rather than minimizing the approximation ratio and space used. Finally, a relaxation of the fully dynamic model that allows only 1We note that the authors assume that the cost of any solution is polynomial in w. We chose to state our bounds explicitly, which introduces a dependence on the ratio of the max and min costs of the solution. 2We note here that in some practical applications k can be large. For instance, in spam and abuse [53], near-duplicate detection [37] or reconciliation tasks [52]. a limited number of deletions has also been addressed [33, 48]. The only work related to clustering is that of submodular maximization [48] which includes exemplar-based clustering as a special case. Our Contributions We simplify and improve the state-of-the-art of k-clustering sliding window algorithms, resulting in lower memory algorithms. Specifically, we: • Introduce a simple new algorithm for k-clustering in the sliding window setting (Section 3.2). The algorithm is an example of a more general technique that we develop for minimization problems in this setting. (Section 3). • Prove that the algorithm needs space linear in k to obtain a constant approximate solution (Theorem 3.4), thus improving over the best previously known result which required Ω(k3) space. • Show empirically that the algorithm is orders of magnitude faster, more space efficient, and more accurate than previous solutions, even for small values of k (Section 4). 2 Preliminaries Let X be a set of arbitrary points, and d : X ×X → R be a distance function. We assume that (X,d) is an arbitrary metric space, that is, d is non-negative, symmetric, and satisfies the triangle inequality. For simplicity of exposition we will make a series of additional assumptions, in supplementary material, we explain how we can remove all these assumptions. We assume that the distances are normalized to lie between 1 and ∆. We will also consider weighted instances of our problem where, in addition, we are given a function weight : X → Z denoting the multiplicity of the point. The k-clustering family of problems asks to find a set of k cluster centers that minimizes a particular objective function. For a point x and a set of points Y = {y1, y2, . . . , ym}, we let d(x, Y ) = miny∈Y d(x, y), and let cl(x, Y ) be the point that realizes it, arg miny∈Y d(x, y). The cost of a set of centers C is: fp(X, C) = ∑ x∈X d p(x, C). Similarly for weighted instances, we have fp(X,weight, C) = ∑ x∈X weight(x)d p(x, C). Note that for p = 2, this is precisely the k-MEDOIDS problem.3 For p = 1, the above encodes the k-MEDIAN problem. When p is clear from the context, we will drop the subscript. 
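As a concrete reference for the objective just defined, here is a direct transcription of the cost f_p for both unweighted and weighted instances. Euclidean distance is used purely for illustration (the paper's results hold for arbitrary metrics), and the function name is our own.

```python
import numpy as np

def cost(points, centers, p=1, weights=None):
    """f_p(X, C) = sum_x weight(x) * d(x, C)^p, where d(x, C) is the distance
    from x to its closest center.  Euclidean distance is used here only as an
    example metric."""
    points = np.asarray(points, dtype=float)
    centers = np.asarray(centers, dtype=float)
    # Pairwise distances between every point and every center.
    dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1)
    nearest = dists.min(axis=1)
    if weights is None:
        weights = np.ones(len(points))
    return float(np.sum(np.asarray(weights) * nearest**p))

X = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 0.0]])
C = np.array([[0.5, 0.0], [10.0, 0.0]])
print(cost(X, C, p=1))   # k-median-style cost: 0.5 + 0.5 + 0 = 1.0
print(cost(X, C, p=2))   # k-means-style cost:  0.25 + 0.25 + 0 = 0.5
```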
We also refer to the optimum cost for a particular instance (X, d) as OPTp(X), and to the optimal clustering as C*p(X) = {c*1, c*2, . . . , c*k}, shortening to C* when clear from context. Throughout the paper, we assume that p is a constant with p ≥ 1. While mapping a point to its nearest cluster is optimal, any map µ : X → X will produce a valid clustering. In a slight abuse of notation we extend the definition of fp to say fp(X, µ) = Σ_{x∈X} d(x, µ(x))^p. In this work, we are interested in algorithms for sliding window problems; we refer to the window size as w and to the set of elements in the active window as W, and we use n for the size of the entire stream, typically n ≫ w. We denote by Xt the t-th element of the stream and by X[a,b] the subset of the stream from time a to b (both included). For simplicity of exposition, we assume that we have access to a lower bound m and an upper bound M on the cost of the optimal solution in any sliding window.⁴

We use two tools repeatedly in our analysis. The first is the relaxed triangle inequality: for p ≥ 1 and any x, y, z ∈ X, we have d(x, y)^p ≤ 2^(p−1)(d(x, z)^p + d(z, y)^p). The second is the fact that the value of the optimum solution of a clustering problem does not change drastically if the points are shifted around by a small amount. This is captured by Lemma 2.1, which was first proved in [35]. For completeness we present its proof in the supplementary material.

Lemma 2.1. Given a set of points X = {x1, . . . , xn}, consider a multiset Y = {y1, . . . , yn} such that Σ_i d^p(xi, yi) ≤ α OPTp(X), for a constant α. Let B* be the optimal k-clustering solution for Y. Then fp(X, B*) ∈ O((1 + α) OPTp(X)).

³ In the Euclidean space, if the centers do not need to be part of the input, then setting p = 2 recovers the k-MEANS problem.
⁴ These assumptions are not necessary. In the supplementary material, we explain how we estimate them in our experiments and how, from a theoretical perspective, we can remove the assumptions.

Given a set of points X, a mapping µ : X → Y, and a weighted instance defined by (Y, weight), we say that the weighted instance is consistent with µ if, for all y ∈ Y, we have that weight(y) = |{x ∈ X | µ(x) = y}|. We say it is ε-consistent (for constant ε ≥ 0) if, for all y ∈ Y, we have that |{x ∈ X | µ(x) = y}| ≤ weight(y) ≤ (1 + ε)|{x ∈ X | µ(x) = y}|. Finally, we remark that the k-clustering problem is NP-hard, so our focus will be on finding efficient approximation algorithms. We say that we obtain an α approximation for a clustering problem if fp(X, C) ≤ α · OPTp(X). The best-known approximation factors for all the problems that we consider are constant [1, 19, 36]. Additionally, since the algorithms work in arbitrary metric spaces, we measure update time in terms of distance function evaluations and use the number of points as space cost (all other costs are negligible).

3 Algorithm and Analysis

The starting point of our clustering is the development of an efficient sketching technique that, given a stream of points X, a mapping µ, and a time τ, returns a weighted instance that is ε-consistent with µ for the points inserted at or after τ. To see why having such a sketch is useful, suppose µ has a cost a constant factor larger than the cost of the optimal solution. Then we could get an approximation to the sliding window problem by computing an approximately optimal clustering on the weighted instance (see Lemma 2.1).
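The notion of ε-consistency used throughout is easy to check directly. The helper below is our own small illustration of the defining inequalities; the dictionary-based representation of µ and of the stored weights is an assumption made only for this example.

```python
from collections import Counter

def is_eps_consistent(mapping, weight, eps):
    """mapping: dict x -> y assigning every point of X to a sketch point y.
    weight: dict y -> stored weight.  Checks, for every y,
    |{x : mu(x) = y}| <= weight(y) <= (1 + eps) * |{x : mu(x) = y}|."""
    true_counts = Counter(mapping.values())
    for y, w in weight.items():
        exact = true_counts.get(y, 0)
        if not (exact <= w <= (1 + eps) * exact):
            return False
    # Every sketch point that receives some x must carry a stored weight.
    return all(y in weight for y in true_counts)

mapping = {"x1": "c1", "x2": "c1", "x3": "c2"}
print(is_eps_consistent(mapping, {"c1": 2, "c2": 1}, eps=0.05))   # True (exactly consistent)
print(is_eps_consistent(mapping, {"c1": 3, "c2": 1}, eps=0.05))   # False (3 > 2.1)
```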
To develop such a sketch, we begin by relaxing our goal: we allow our sketch to return a weighted instance that is ε-consistent with µ for the entire stream X, as opposed to the substream starting at Xτ. Although a single sketch with this property is not enough to obtain a good algorithm for the overall problem, we design a sliding window algorithm that builds multiple such sketches in parallel. We can show that it is enough to maintain a polylogarithmic number of carefully chosen sketches to guarantee that we can return a good approximation to the optimal solution in the active window. In Subsection 3.1 we describe how we construct a single efficient sketch. Then, in Subsection 3.2, we describe how we can combine different sketches to obtain a good approximation. All of the missing proofs of the lemmas and the pseudo-code for all the missing algorithms are presented in the supplementary material.

3.1 Augmented Meyerson Sketch

Our sketching technique builds upon previous clustering algorithms developed for the streaming model of computation. Among these, a powerful approach is the sketch introduced for facility location problems by Meyerson [47]. At its core, given an approximate lower bound to the value of the optimum solution, Meyerson's algorithm constructs a set C of size O(k log ∆), known as a sketch, and a consistent weighted instance, such that, with constant probability, fp(X, C) ∈ O(OPTp(X)). Given such a sketch, it is easy both to amplify the success probability to be arbitrarily close to 1 by running multiple copies in parallel, and to reduce the number of centers to k by keeping track of the number of points assigned to each c ∈ C and then clustering this weighted instance into k groups. What makes the sketch appealing in practice is its easy construction: each arriving point is added as a new center with some carefully chosen probability. If a new point does not make it as a center, it is assigned to the nearest existing center, and the latter's weight is incremented by 1. Meyerson's algorithm was initially designed for online problems, and then adapted to the streaming computation model, where points arrive one at a time but are never deleted. To solve the sliding window problem naively, one can simply start a new sketch with every newly arriving point, but this is inefficient. To overcome these limitations we extend the Meyerson sketch. In particular, there are two challenges that we face in sliding window models: 1. The weight of each cluster is not monotonically increasing, as points that are assigned to the cluster time out and are dropped from the window. 2. The designated center of each cluster may itself expire and be removed from the window, requiring us to pick a new representative for the cluster. Using some auxiliary bookkeeping we can augment the classic Meyerson sketch to return a weighted instance that is ε-consistent with a mapping µ whose cost is a constant factor larger than the cost of the optimal solution for the entire stream X.
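For intuition, here is a highly simplified, insertion-only sketch of the basic Meyerson construction described above, without the sliding-window augmentation, the expiring-center bookkeeping, or the parallel repetitions. The class name, the exact opening probability, and the facility_cost parameter (playing the role of the guess on the optimum) are our own simplifications rather than the paper's actual AugmentedMeyerson procedure.

```python
import numpy as np

class SimpleMeyersonSketch:
    """Toy, insertion-only Meyerson-style sketch: each arriving point either
    opens a new center (with probability proportional to its distance cost)
    or is assigned to its nearest center, whose weight is incremented."""

    def __init__(self, facility_cost, p=2, seed=0):
        self.facility_cost = facility_cost
        self.p = p
        self.rng = np.random.default_rng(seed)
        self.centers = []          # points kept as centers
        self.weights = []          # weights[i] = number of points assigned to centers[i]

    def insert(self, x):
        x = np.asarray(x, dtype=float)
        if not self.centers:
            self.centers.append(x); self.weights.append(1)
            return
        dists = [np.linalg.norm(x - c) ** self.p for c in self.centers]
        j = int(np.argmin(dists))
        # Open a new center with probability min(1, d^p / facility_cost).
        if self.rng.random() < min(1.0, dists[j] / self.facility_cost):
            self.centers.append(x); self.weights.append(1)
        else:
            self.weights[j] += 1   # assign x to its closest existing center

sketch = SimpleMeyersonSketch(facility_cost=0.5)
for pt in np.random.default_rng(1).normal(size=(200, 2)):
    sketch.insert(pt)
print(len(sketch.centers), sum(sketch.weights))   # few centers, total weight 200
```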
More precisely:

Lemma 3.1. Let w be the size of the sliding window, let ε ∈ (0, 1) be a constant, and let t be the current time. Let (X, d) be a metric space and fix γ ∈ (0, 1). The augmented Meyerson algorithm computes an implicit mapping µ : X → C and an ε-consistent weighted instance (C, ŵeight) for all substreams X[τ,t] with τ ≥ t − w, such that, with probability 1 − γ, we have |C| ≤ 2^(2p+8) k log γ^(−1) log ∆ and fp(X[τ,t], C) ≤ 2^(2p+8) OPTp(X). The algorithm uses space O(k log γ^(−1) log ∆ log(M/m)(log M + log w + log ∆)) and stores the cost of the consistent mapping, f(X, µ), and allows a 1 + ε approximation to the cost of the ε-consistent mapping, denoted by f̂(X[τ,t], µ).

This is the ε-consistent mapping that is computed by the augmented Meyerson algorithm. In Section 2, M and m are defined as the upper and lower bounds on the cost of the optimal solution. Note that when M/m and ∆ are polynomial in w,⁵ the above space bound is O(k log γ^(−1) log^3(w)).

3.2 Sliding Window Algorithm

In the previous section we have shown that we can use the Meyerson sketch to retain enough information to output a solution, using the points in the active window, whose cost is comparable to the cost of the optimum computed on the whole stream. However, we need an algorithm that is competitive with the cost of the optimum solution computed solely on the elements in the sliding window. We give some intuition behind our algorithm before delving into the details. Suppose we had a good guess λ* on the value of the optimum solution, and imagine splitting the input x1, x2, . . . , xt into blocks A1 = {x1, x2, . . . , xb1}, A2 = {xb1+1, . . . , xb2}, etc., with the constraints that (i) each block has optimum cost smaller than λ*, and (ii) each block is also maximal, that is, adding the next element to the block causes its cost to exceed λ*. It is easy to see that any sliding window whose optimal solution has cost λ* overlaps at most two blocks. The idea behind our algorithm is that, if we started an augmented Meyerson sketch in each block and we obtain a good mapping for the suffix of the first of these two blocks, we can recover a good approximate solution for the sliding window. We now show how to formalize this idea.

During the execution of the algorithm, we first discretize the possible values of the optimum solution and run a set of sketches for each value of λ. Specifically, for each guess λ, we run Algorithm 1 to compute the AugmentedMeyerson sketches for two consecutive substreams, Aλ and Bλ, of the input stream X. (The full pseudocode of AugmentedMeyerson is available in the supplementary material.) When a new point, x, arrives we check whether the k-clustering cost of the solution computed on the sketch after adding x to Bλ exceeds λ. If not, we add it to the sketch for Bλ; if so, we reset the Bλ substream to x and rename the old sketch of Bλ as Aλ. Thus the algorithm maintains two sketches, on consecutive subintervals. Notice that the cost of each sketch is at most λ, and each sketch is grown to be maximal before being reset. We remark that to convert the Meyerson sketch to a k-clustering solution, we need to run a k-clustering algorithm on the weighted instance given by the sketch. Since the problem is NP-hard, let ALG denote any ρ-approximate algorithm, such as the one by [36]. Let S(Z) = (Y(Z), weight(Z)) denote the augmented Meyerson sketch built on a (sub)stream Z, with Y(Z) as the centers and weight(Z) as the (approximate) weight function. We denote by ALG(S(Z)) the solution obtained by running ALG over the weighted instance S(Z). Let f̂p(S(Z), ALG(S(Z))) be the estimated cost of the solution ALG(S(Z)) over the stream Z obtained by the sketch S(Z). We show that we can implement a function f̂p that operates only on the information in the augmented Meyerson sketch S(Z) and gives a β ∈ O(ρ) approximation to the cost on the unabridged input.
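Before stating the guarantees, here is a schematic of the two-sketch maintenance just described, written as our own illustration: for a fixed threshold λ it stores the raw substreams A_λ and B_λ and re-clusters from scratch with a crude Lloyd's-algorithm cost estimate, whereas the actual algorithm never stores raw points and instead updates the augmented Meyerson sketches and uses f̂_p(S, ALG(S)) as the cost estimate. The function names and the k-means stand-in are assumptions of this sketch only.

```python
import numpy as np

def kmeans_cost(points, k, iters=10, seed=0):
    """Crude stand-in for running ALG and estimating the cost (Lloyd's algorithm).
    The real algorithm runs ALG on the weighted Meyerson sketch, not on raw points."""
    pts = np.asarray(points, dtype=float)
    if len(pts) <= k:
        return 0.0
    rng = np.random.default_rng(seed)
    centers = pts[rng.choice(len(pts), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(pts[:, None] - centers[None], axis=-1)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pts[labels == c].mean(axis=0)
    d = np.linalg.norm(pts[:, None] - centers[None], axis=-1)
    return float((d.min(axis=1) ** 2).sum())

def compute_sketch_pair(stream, k, lam):
    """Maintain two consecutive substreams A_lambda, B_lambda such that the
    estimated clustering cost of B_lambda never exceeds the threshold lam."""
    A, B = [], []
    for x in stream:
        if kmeans_cost(B + [x], k) <= lam:
            B.append(x)                  # x keeps B_lambda below the threshold
        else:
            A, B = B, [x]                # B_lambda was maximal: rotate and restart
    return A, B

stream = np.random.default_rng(2).normal(size=(300, 2)).tolist()
A, B = compute_sketch_pair(stream, k=3, lam=5.0)
print(len(A), len(B))
```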
Lemma 3.2 (Approximate solution and approximate cost from a sketch). Using an approximation algorithm ALG, from the augmented Meyerson sketch S(Z), with probability ≥ 1 − γ, we can output a solution ALG(S(Z)) and an estimate f̂p(S(Z), ALG(S(Z))) of its cost such that fp(Z, ALG(S(Z))) ≤ f̂p(S(Z), ALG(S(Z))) ≤ β(ρ) fp(Z, OPT(Z)) for a constant β(ρ) ≤ 2^(3p+6) ρ depending only on the approximation factor ρ of ALG.

⁵ We note that prior work [17, 25] makes similar assumptions to get a bound depending on w.

Algorithm 1 Meyerson Sketches, ComputeSketches(X, w, λ, m, M, ∆)
1: Input: A sequence of points X = x0, x1, x2, . . . , xn. The size of the window w. Cost threshold λ. A lower bound m and upper bound M on the cost of the optimal solution, and an upper bound ∆ on the distances.
2: Output: Two sketches for the stream, S1 and S2.
3: S1 ← AugmentedMeyerson(∅, w, m, M, ∆); S2 ← AugmentedMeyerson(∅, w, m, M, ∆)
4: Aλ ← ∅; Bλ ← ∅ (Recall that Aλ, Bλ are sets and S1 and S2 the corresponding sketches. Note that the content of the sets is not stored explicitly.)
5: for x ∈ X do
6:    Let Stemp be computed by AugmentedMeyerson(Bλ ∪ {x}, w, m, M, ∆). (Note: it can be computed by adding x to a copy of the sketch maintained by S2.)
7:    if f̂p(Stemp, ALG(Stemp)) ≤ λ then
8:       Add x to the stream of the sketch S2. (Bλ ← Bλ ∪ {x}, S2 ← AugmentedMeyerson(Bλ, w, m, M, ∆))
9:    else
10:      S1 ← S2; S2 ← AugmentedMeyerson({x}, w, m, M, ∆). (Aλ ← Bλ; Bλ ← {x})
11:   end if
12: end for
13: Return (S1, S2, and start and end times of Aλ and Bλ)

Composition of sketches from sub-streams. Before presenting the global sliding window algorithm that uses these pairs of sketches, we introduce some additional notation. Let S(Z) be the augmented Meyerson sketch computed over the stream Z. Let Suffixτ(S(Z)) denote the sketch obtained from a sketch S for the points that arrived after τ. This can be done using the operations defined in the supplementary material. We say that a time τ is contained in a substream A if A contains elements inserted on or after time τ. Finally, we define Aτ as the suffix of A that contains elements starting at time τ. Given two sketches S(A) and S(B) computed over two disjoint substreams A, B, let S(A) ∪ S(B) be the sketch obtained by joining the centers of S(A) and S(B) (and summing their respective weights) into a single instance. We now prove a key property of the augmented Meyerson sketches we defined before.

Lemma 3.3 (Composition with a suffix of the stream). Given two substreams A, B (with possibly B = ∅) and a time τ in A, let ALG be a constant approximation algorithm for the k-clustering problem. If OPTp(A) ≤ O(OPTp(Aτ ∪ B)), then, with probability ≥ 1 − O(γ), we have fp(Aτ ∪ B, ALG(Suffixτ(S(A)) ∪ S(B))) ≤ O(OPTp(Aτ ∪ B)).

The main idea of the proof is to show that Suffixτ(S(A)) ∪ S(B) is ε-consistent with a good mapping from Aτ ∪ B and then, by using a technique similar to Lemma 2.1, show that we can compute a constant approximation from an ε-consistent sketch.

Algorithm 2 Our main algorithm. Input: X, m, M, ∆, approx. factor of ALG (β) and δ.
1: Λ ← {m, (1 + δ)m, . . . , 2^p β(1 + δ)M}
2: for λ ∈ Λ do
3:    Sλ,1, Sλ,2 ← ComputeSketches(X, w, λ, m, M, ∆)
4: end for
5: if Bλ* = W for some λ* then return ALG(Sλ*,2)
6: λ* ← min({λ : Aλ ⊈ W})
7: τ ← max(|X| − w, 1)
8: if W ∩ Aλ* ≠ ∅ then return ALG(Suffixτ(Sλ*,1) ∪ Sλ*,2)
9: else return ALG(Suffixτ(Sλ*,2))

Final algorithm. We can now present the full algorithm in Algorithm 2. As mentioned before, we run multiple copies of ComputeSketches in parallel, for geometrically increasing values of λ. For each value of λ, we maintain the pair of sketches over the stream X.
Finally, we compute the centers using such sketches. If we get lucky, and for the sliding window W there exists a subsequence where Bλ∗ is precisely W , we use the appropriate sketch and return ALG(Sλ∗,2). Otherwise, we find the smallest λ∗ for which Aλ is not a subset of W . We then use the pair of sketches associated with Aλ∗ and Bλ∗ , combining the sketch of the suffix of Aλ∗ that intersects with W , and the sketch on Bλ∗ . The main result is that this algorithm provides a constant approximation of the k-clustering problem, for any p ≥ 1, with probability at least 1 − γ, using space linear in k and logarithmic in other parameters. The total running time of the algorithm depends on the complexity of ALG. Let T (n, k) be the complexity of solving an instance of k-clustering with size n points using ALG. Theorem 3.4. With probability 1 − γ, Algorithm 2, outputs an O(1)-approximation for the sliding window k-clustering problem using space: O ( k log(∆)(log(∆) + log(w) + log(M)) log2(M/m) log(γ−1 log(M/m)) ) and total update time O(T (k log(∆), k) log2(M/m) log(γ−1 log(M/m)) (log(∆) + log(w) + log(M)). We remark that if M and ∆ are polynomial in w, then the total space is O(k log4 w log(logw/γ)) and the total update time is O(T (k logw, k) log3(w) log(logw/γ)). The main component in the constant approximation factor of Theorem 3.4 statement comes from the 23p+5ρ approximation for the insertion-only case [43]. Here p is the norm, and ρ is the offline algorithm factor. Given the composition operation in our analysis in addition to applying triangle inequality and some other steps, we end up with an approximation factor ≈ 28p+6ρ. We do not aim to optimize for this approximation factor, however it could be an interesting future direction. 4 Empirical Evaluation We now describe the methodology of our empirical evaluation before providing our experiments results. We report only the main results in the section, more details on the experiments and results are in supplementary material. Our code is available open-source on github6. All datasets used are publicly-available. Datasets. We used 3 real-world datasets from the UCI Repository [28] that have been used in previous experiments on k-clustering for data streams settings: SKINTYPE [12], n = 245057, d = 4, SHUTTLE, n = 58000, d = 9, and COVERTYPE [13], n = 581012, d = 54. Consistent with previous work, we stream all points in the natural order (as they are stored in the dataset). We also use 4 publicly-available synthetic dataset from [31] (the S-Set series) that have ground-truth clusters. We use 4 datasets (s1, s2, s3, s4) that are increasingly harder to cluster and have each k = 15 ground-truth clusters. Consistent with previous work, we stream the points in random order (as they are sorted by ground truth in the dataset). In all datasets, we pre-process each dataset to have zero mean and unit standard deviation in each dimension. All experiments use Euclidean distance, we focus on the the K-MEANS objective (p = 2) which we use as cost. We use k-means++ [4] as the solver ALG to extract the solution from our sketch. Parameters. We vary the number of centers, k, from 4 to 40 and window size, w, from 10,000 to 40,000. We experiment with δ = [0.1, 0.2] and set = 0.05 (empirically the results are robust to wide settings of ). Metrics. We focus on three key metrics: cost of the clustering, maximum space requirement of our sketch, and average running time of the update function. 
To give an implementation independent view into space and update time, we report as space usage the number of points stored, and as update time the number of distance evaluations. All of the other costs are negligible by comparison. Baselines. We consider the following baselines. Batch K-Means++: We use k-means++ over the entire window as a proxy for the optimum, since the latter is NP-hard to compute. At every insertion, we report the best solution over 10 runs of k-means++ on the window. Observe that this is inefficient as it requires Ω(w) space and Ω(kw) run time per update. Sampling: We maintain a random sample of points from the active window, and then run k-means++ on the sample. This allows us to evaluate the performance of a baseline, at the same space cost of our algorithm. SODA16: We also evaluated the only previously published algorithm for this setting in [17]. We note that we made some practical modifications to further improve the performance of our algorithm which we report in the supplementary material. 6https://github.com/google-research/google-research/tree/master/sliding_window_ clustering/ Comparison with previous work. We begin by comparing our algorithm to the previously published algorithm of [17]. The baseline in this paragraph is SODA16 algorithm in [17]. We confirm empirically that the memory use of this baseline already exceeds the size of the sliding window for very small k, and that it is significantly slower than our algorithm. Figure 1 shows the space used by our algorithm and by the baseline over the COVERTYPE dataset for a |W | = 10,000 and different k. We confirm that our algorithm’s memory grows linearly in k while the baseline grows super-linearly in k and that for k > 10 the baseline costs more than storing the entire window. In Table 1 we show that our algorithm is significantly faster and uses less memory than the SODA16 already for small values of k. In the supplementary material we show that the difference is even larger for bigger values of k. Given the inefficiency of the SODA16 baseline, for the rest of the section we do not run experiments with it. Cost of the solution. We now take a look at how the cost of the solution evolves over time during the execution of our algorithm. In Figure 2 we plot the cost of the solution obtained by our algorithm (Sketch), our proxy for the optimum (KM++) and the sampling baseline (Sampling Baseline) on the COVERTYPE dataset. The sampling baseline is allowed to store the same number of points stored by our algorithm (at the same point in time). We use k = 20, |W | = 40,000, and δ = 0.2. The plot is obtained by computing the cost of the algorithms every 100 timesteps. Observe that our algorithm closely tracks that of the offline algorithm result, even as the cost fluctuates up and down. Our algorithm’s cost is always close to that of the off-line algorithm and significantly better than the random sampling baseline Update time and space tradeoff. We now investigate the time and space tradeoff of our algorithm. As a baseline we look at the cost required simply to recompute the solution using k-means++ at every time step. In Table 2 (δ = 0.2) we focus on the COVERTYPE dataset, the other results are similar. Table 2 shows the percent of the sliding window data points stored (Space) and the percent of update time (Time) of our algorithm vs a single run of k-means++ over the window. 
In the supplementary material we show that the savings become larger (at parity of k) as |W | grows and that we always store a small fraction of the window, providing order-of-magnitude speed ups (e.g., we use < 0.5% of the time of the baseline for k = 10, |W | = 40,000). Here the baseline is the k-means++ algorithm. Recovering ground-truth clusters. We evaluated the accuracy of the clusters produced by our algorithm on a dataset with ground-truth clusters using the well known V-Measure accuracy definition for clustering [51]. We observe that on all datasets our algorithm performs better than the sampling baseline and in line with the offline k-means++. For example, on the s1 our algorithm gets V-Measure of 0.969, while k-means++ gets 0.969 and sampling gets 0.933. The full results are available in the supplementary material. 5 Conclusion We present the first algorithms for the k-clustering problem on sliding windows with space linear in k. Empirically we observe that the algorithm performs much better than the analytic bounds, and it allows to store only a small fraction of the input. A natural avenue for future work is to give a tighter analysis, and reduce this gap between theory and practice. Broader Impact Clustering is a fundamental unsupervised machine learning problem that lies at the core of multiple real-world applications. In this paper, we address the problem of clustering in a sliding window setting. As we argued in the introduction, the sliding window model allows us to discard old data which is a core principle in data retention policies. Whenever a clustering algorithm is used on user data it is important to consider the impact it may have on the users. In this work we focus on the algorithmic aspects of the problem and we do not address other considerations of using clustering that may be needed in practical settings. For instance, there is a burgeoning literature on fairness considerations in unsupervised methods, including clustering, which further delves into these issues. We refer to this literature [22, 40, 11] for addressing such issues. Funding Transparency Statement No third-party funding has been used for this research.
1. What is the focus and contribution of the paper regarding clustering in the sliding window model? 2. What are the strengths of the proposed approach, particularly in terms of its simplicity and improvement in space requirements? 3. What are the weaknesses of the paper, especially regarding its experimental analysis and lack of clarity in the proof? 4. Do you have any concerns about the applicability or practicality of the proposed method? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper considers clustering in the sliding window model. The problem is as follows: - There is a stream of points and a sliding window of size w. - At any point we wish to maintain a k-clustering of good objective value for the w last points (i.e. for the window consisting of w points) The main result of the paper is a new algorithm that maintains a constant-factor approximation at any time and uses space linear in k. I do think this is a nice result that is obtained in a quite simple way. We obtain basically a Meyerson sketch (well known technique in streaming) of two sets of data points so that the current window w is a subset of the two. To achieve this, several technicalities are needed such as running the algorithm for the possible values lambda of the current value of the optimal clustering etc. EDIT AFTER AUTHOR REBUTTAL: I read the author rebuttal and it did not change my rather favorable opinion of the paper (even though I think the argument that previous works didn't state the guarantee explicitly so we don't do either is not super convincing). Strengths - Natural problem. - Quite clean new algorithm that improves the space requirement both in theory and in experiments. Weaknesses - The experiments are run with an algorithm that is quite different from the one that is theoretically analyzed. The authors expand on this in the appendix but it worries me in that the other algorithms (especially SODA16) was not optimized in a similar way. - The constant-factor approximation is not stated. To be honest, I didn't understand why it was stated as the sketches only loses a small factor so I believe that the factor is quite good. However, when I tried to verify it, I found the Appendix in particular the proof of Theorem D.4 (lines 825-837) badly written and therefore unnecessary hard to understand what the constant in the approximation guarantee is.
NIPS
Title Sliding Window Algorithms for k-Clustering Problems Abstract The sliding window model of computation captures scenarios in which data is arriving continuously, but only the latest w elements should be used for analysis. The goal is to design algorithms that update the solution efficiently with each arrival rather than recomputing it from scratch. In this work, we focus on k-clustering problems such as k-means and k-median. In this setting, we provide simple and practical algorithms that offer stronger performance guarantees than previous results. Empirically, we show that our methods store only a small fraction of the data, are orders of magnitude faster, and find solutions with costs only slightly higher than those returned by algorithms with access to the full dataset. 1 Introduction Data clustering is a central tenet of unsupervised machine learning. One version of the problem can be phrased as grouping data into k clusters so that elements within the same cluster are similar to each other. Classic formulations of this question include the k-median and k-means problems for which good approximation algorithms are known [1, 44]. Unfortunately, these algorithms often do not scale to large modern datasets requiring researchers to turn to parallel [8], distributed [9], and streaming methods. In the latter model, points arrive one at a time and the goal is to find algorithms that quickly update a small sketch (or summary) of the input data that can then be used to compute an approximately optimal solution. One significant limitation of the classic data stream model is that it ignores the time when a data point arrived; in fact, all of the points in the input are treated with equal significance. However, in practice, it is often important (and sometimes necessary) to restrict the computation to very recent data. This restriction may be due to data freshness—e.g., when training a model on recent events, data from many days ago may be less relevant compared to data from the previous hour. Another motivation arises from legal reasons, e.g., data privacy laws such as the General Data Protection Regulation (GDPR), encourage and mandate that companies not retain certain user data beyond a specified period. This has resulted in many products including a data retention policy [54]. Such recency requirements can be modeled by the sliding window model. Here the goal is to maintain a small sketch of the input data, just as with the streaming model, and then use only this sketch to approximate the solution on the last w elements of the stream. Clustering in the sliding window model is the main question that we study in this work. A trivial solution simply maintains the w elements in the window and recomputes the clusters from scratch at each step. We intend to find solutions that use less space, and are more efficient at processing each 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. new element. In particular, we present an algorithm which uses space linear in k, and polylogarithmic in w, but still attains a constant factor approximation. Related Work Clustering. Clustering is a fundamental problem in unsupervised machine learning and has application in a disparate variety of settings, including data summarization, exploratory data analysis, matrix approximations and outlier detection [39, 41, 46, 50]. 
One of the most studied formulations in clustering of metric spaces is that of finding k centers that minimize an objective consisting of the `p norm of the distances of all points to their closest center. For p ∈ {1, 2,∞} this problem corresponds to k-median, k-means, and k-center, respectively, which are NP-hard, but constant factor approximation algorithms are known [1, 34, 44]. Several techniques have been used to tackle these problems at scale, including dimensionality reduction [45], core-sets [6], distributed algorithms [5], and streaming methods reviewed later. To clarify between Euclidean or general metric spaces, we note that our results work on arbitrary general metric spaces. The hardness results in the literature hold even for special case of Euclidean metrics and the constant factor approximation algorithms hold for the general metric spaces. Streaming model. Significant attention has been devoted to models for analyzing large-scale datasets that evolve over time. The streaming model of computation is of the most well-known (see [49] for a survey) and focuses on defining low-memory algorithms for processing data arriving one item at a time. A number of interesting results are known in this model ranging from the estimation of stream statistics [3, 10], to submodular optimization [7], to graph problems [2, 30, 42], and many others. Clustering is also well studied in this setting, including algorithms for k-median, k-means, and k-center in the insertion-only stream case [6, 20, 35]. Sliding window streaming model. The sliding window model significantly increases the difficultly of the problem, since deletions need to be handled as well. Several techniques are known, including the exponential histogram framework [27] that addresses weakly additive set functions, and the smooth histogram framework [18] that is suited for functions that are well-behaved and possesses a sufficiently small constant approximation. Since many problems, such as k-clustering, do not fit into these two categories, a number of algorithms have been developed for specific problems such as submodular optimization [14, 21, 29], graph sparsification [26], minimizing the enclosing ball [55], private heavy hitters [54], diversity maximization [14] and linear algebra operations [15]. Sliding window algorithms find also applications in data summarization [23]. Turning to sliding window algorithms for clustering, for the k-center problem Cohen et al. [25] show a (6 + )-approximation using O(k log ∆) space and per point update time of O(k2 log ∆), where ∆ is the spread of the metric, i.e. the ratio of the largest to the smallest pairwise distances. For k-median and k-means, [17] give constant factor approximation algorithms that use O(k3 log6 w) space and per point update time of O(poly(k, logw)).1 Their bound is polylogarithmic in w, but cubic in k, making it impractical unless k w.2 In this paper we improve their bounds and give a simpler algorithm with only linear dependency of k. Furthermore we show experimentally (Figure 1 and Table 1) that our algorithm is faster and uses significantly less memory than the one presented in [17] even with very small values k (i.e., k ≥ 4). In a different approach, [56] study a variant where one receives points in batches and uses heuristics to reduce the space and time. Their approach does provide approximation guarantees but it applies only to the Euclidean k-means case. 
Recently, [32] studied clustering problems in the distributed sliding window model, but these results are not applicable to our setting. The more challenging fully-dynamic stream case has also received attention [16, 38]. Contrary to our result for the sliding window case, in the fully-dynamic case, obtaining a Õ(k) memory, low update time algorithm, for the arbitrary metric k-clustering case with general `p norms is an open problem. For the special case of d-dimensional Euclidean spaces for k-means, there are positive results—[38] give Õ(kd4)-space core-set with 1 + approximation. Dynamic algorithms have also been studied in a consistent model [24, 43], but there the objective is to minimize the number of changes to the solution as the input evolves, rather than minimizing the approximation ratio and space used. Finally, a relaxation of the fully dynamic model that allows only 1We note that the authors assume that the cost of any solution is polynomial in w. We chose to state our bounds explicitly, which introduces a dependence on the ratio of the max and min costs of the solution. 2We note here that in some practical applications k can be large. For instance, in spam and abuse [53], near-duplicate detection [37] or reconciliation tasks [52]. a limited number of deletions has also been addressed [33, 48]. The only work related to clustering is that of submodular maximization [48] which includes exemplar-based clustering as a special case. Our Contributions We simplify and improve the state-of-the-art of k-clustering sliding window algorithms, resulting in lower memory algorithms. Specifically, we: • Introduce a simple new algorithm for k-clustering in the sliding window setting (Section 3.2). The algorithm is an example of a more general technique that we develop for minimization problems in this setting. (Section 3). • Prove that the algorithm needs space linear in k to obtain a constant approximate solution (Theorem 3.4), thus improving over the best previously known result which required Ω(k3) space. • Show empirically that the algorithm is orders of magnitude faster, more space efficient, and more accurate than previous solutions, even for small values of k (Section 4). 2 Preliminaries Let X be a set of arbitrary points, and d : X ×X → R be a distance function. We assume that (X,d) is an arbitrary metric space, that is, d is non-negative, symmetric, and satisfies the triangle inequality. For simplicity of exposition we will make a series of additional assumptions, in supplementary material, we explain how we can remove all these assumptions. We assume that the distances are normalized to lie between 1 and ∆. We will also consider weighted instances of our problem where, in addition, we are given a function weight : X → Z denoting the multiplicity of the point. The k-clustering family of problems asks to find a set of k cluster centers that minimizes a particular objective function. For a point x and a set of points Y = {y1, y2, . . . , ym}, we let d(x, Y ) = miny∈Y d(x, y), and let cl(x, Y ) be the point that realizes it, arg miny∈Y d(x, y). The cost of a set of centers C is: fp(X, C) = ∑ x∈X d p(x, C). Similarly for weighted instances, we have fp(X,weight, C) = ∑ x∈X weight(x)d p(x, C). Note that for p = 2, this is precisely the k-MEDOIDS problem.3 For p = 1, the above encodes the k-MEDIAN problem. When p is clear from the context, we will drop the subscript. 
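To make the objective concrete, here is a minimal Python sketch of the cost f_p(X, weight, C) for (weighted) instances as defined above. It is purely illustrative: the function names are our own, and Euclidean distance is used only as one example of a metric.

```python
import math

def dist(x, y):
    # Euclidean distance; any metric satisfying the triangle inequality works here.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def closest_center(x, centers):
    # cl(x, C): the center realizing d(x, C).
    return min(centers, key=lambda c: dist(x, c))

def cost(points, centers, p=2, weight=None):
    # f_p(X, weight, C) = sum_x weight(x) * d(x, C)^p  (weight defaults to 1 per point).
    total = 0.0
    for x in points:
        w = 1 if weight is None else weight[x]
        total += w * dist(x, closest_center(x, centers)) ** p
    return total

# Example: k-means-style cost (p = 2) of two centers on a toy instance.
X = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)]
C = [(0.0, 0.0), (5.0, 5.0)]
print(cost(X, C, p=2))  # 0.01: only the point (0.1, 0.0) contributes
```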
We also refer to the optimum cost for a particular instance (X, d) as OPTp(X), and the optimal clustering as C∗p(X) = {c∗1, c∗2, . . . , c∗k} , shortening to C∗ when clear from context. Throughout the paper, we assume that p is a constant with p ≥ 1. While mapping a point to its nearest cluster is optimal, any map µ : X → X will produce a valid clustering. In a slight abuse of notation we extend the definition of fp to say fp(X,µ) =∑ x∈X d(x, µ(x)) p. In this work, we are interested in algorithms for sliding window problems, we refer to the window size as w and to the set of elements in the active window as W , and we use n for the size of the entire stream, typically n w. We denote by Xt the t-th element of the stream and by X[a,b] the subset of the stream from time a to b (both included). For simplicity of exposition, we assume that we have access to a lower bound m and upper bound M of the cost of the optimal solution in any sliding window.4 We use two tools repeatedly in our analysis. The first is the relaxed triangle inequality. For p ≥ 1 and any x, y, z ∈ X , we have: d(x, y)p ≤ 2p−1(d(x, z)p + d(z, y)p). The second is the fact that the value of the optimum solution of a clustering problem does not change drastically if the points are shifted around by a small amount. This is captured by Lemma 2.1 which was first proved in [35]. For completeness we present its proof in the supplementary material. Lemma 2.1. Given a set of points X = {x1, . . . , xn} consider a multiset Y = {y1, . . . , yn} such that ∑ i d p(xi, yi) ≤ αOPTp(X), for a constant α. Let B∗ be the optimal k-clustering solution for Y . Then fp(X,B∗) ∈ O((1 + α)OPTp(X)). 3In the Euclidean space, if the centers do not need to be part of the input, then setting p = 2 recovers the k-MEANS problem. 4These assumptions are not necessary. In the supplementary material, we explain how we estimate them in our experiments and how from a theoretical perspective we can remove the assumptions. Given a set of points X , a mapping µ : X → Y , and a weighted instance defined by (Y,weight), we say that the weighted instance is consistent with µ, if for all y ∈ Y , we have that weight(y) = |{x ∈ X| µ(x) = y}|. We say it is -consistent (for constant ≥ 0), if for all y ∈ Y , we have that |{x ∈ X | µ(x) = y}| ≤ weight(y) ≤ (1 + )|{x ∈ X | µ(x) = y}|. Finally, we remark that the k-clustering problem is NP-hard, so our focus will be on finding efficient approximation algorithms. We say that we obtain an α approximation for a clustering problem if fp(X, C) ≤ α · OPTp(X). The best-known approximation factor for all the problems that we consider are constant [1, 19, 36]. Additionally, since the algorithms work in arbitrary metric spaces, we measure update time in terms of distance function evaluations and use the number of points as space cost (all other costs are negligible). 3 Algorithm and Analysis The starting point of our clustering is the development of efficient sketching technique that, given a stream of points, X , a mapping µ, and a time, τ , returns a weighted instance that is -consistent with µ for the points inserted at or after τ . To see why having such a sketch is useful, suppose µ has a cost a constant factor larger than the cost of the optimal solution. Then we could get an approximation to the sliding window problem by computing an approximately optimal clustering on the weighted instance (see Lemma 2.1). 
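As a small illustration of the epsilon-consistency definition above, the following sketch checks whether a weighted instance (Y, weight) is epsilon-consistent with a mapping mu. The helper names are hypothetical and only meant to mirror the definition, not code from the paper.

```python
def is_eps_consistent(points, mu, weight, eps):
    """Check |{x : mu(x) = y}| <= weight(y) <= (1 + eps) * |{x : mu(x) = y}| for all y."""
    counts = {}
    for x in points:
        y = mu[x]
        counts[y] = counts.get(y, 0) + 1
    for y, true_count in counts.items():
        if not (true_count <= weight.get(y, 0) <= (1 + eps) * true_count):
            return False
    return True

# Toy example: three points mapped onto two representatives.
X = ["x1", "x2", "x3"]
mu = {"x1": "y1", "x2": "y1", "x3": "y2"}
print(is_eps_consistent(X, mu, {"y1": 2, "y2": 1}, eps=0.05))  # True: weights are exact
print(is_eps_consistent(X, mu, {"y1": 3, "y2": 1}, eps=0.05))  # False: y1 overcounted by 50%
```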
To develop such a sketch, we begin by relaxing our goal by allowing our sketch to return a weighted instance that is -consistent with µ for the entire stream X as opposed to the substream starting at Xτ . Although a single sketch with this property is not enough to obtain a good algorithm for the overall problem, we design a sliding window algorithm that builds multiple such sketches in parallel. We can show that it is enough to maintain a polylogarithmic number of carefully chosen sketches to guarantee that we can return a good approximation to the optimal solution in the active window. In subsection 3.1 we describe how we construct a single efficient sketch. Then, in the subsection 3.2, we describe how we can combine different sketches to obtain a good approximation. All of the missing proofs of the lemmas and the pseudo-code for all the missing algorithms are presented in the supplementary material. 3.1 Augmented Meyerson Sketch Our sketching technique builds upon previous clustering algorithms developed for the streaming model of computation. Among these, a powerful approach is the sketch introduced for facility location problems by Meyerson [47]. At its core, given an approximate lower bound to the value of the optimum solution, Meyerson’s algorithm constructs a set C of sizeO(k log ∆), known as a sketch, and a consistent weighted instance, such that, with constant probability, fp(X, C) ∈ O(OPTp(X)). Given such a sketch, it is easy to both: amplify the success probability to be arbitrarily close to 1 by running multiple copies in parallel, and reduce the number of centers to k by keeping track of the number of points assigned to each c ∈ C and then clustering this weighted instance into k groups. What makes the sketch appealing in practice is its easy construction—each arriving point is added as a new center with some carefully chosen probability. If a new point does not make it as a center, it is assigned to the nearest existing center, and the latter’s weight is incremented by 1. Meyerson algorithm was initially designed for online problems, and then adapted to algorithms in the streaming computation model, where points arrive one at a time but are never deleted. To solve the sliding window problem naively, one can simply start a new sketch with every newly arriving point, but this is inefficient. To overcome these limitations we extend the Meyerson sketch. In particular, there are two challenges that we face in sliding window models: 1. The weight of each cluster is not monotonically increasing, as points that are assigned to the cluster time out and are dropped from the window. 2. The designated center of each cluster may itself expire and be removed from the window, requiring us to pick a new representative for the cluster. Using some auxiliary bookkeeping we can augment the classic Meyerson sketch to return a weighted instance that is -consistent with a mapping µ whose cost is a constant factor larger than the cost of the optimal solution for the entire stream X . More precisely, Lemma 3.1. Let w be the size of the sliding window, ∈ (0, 1) be a constant and t the current time. Let (X,d) be a metric space and fix γ ∈ (0, 1). The augmented Meyerson algorithm computes an implicit mapping µ : X → C, and an -consistent weighted instance (C, ŵeight) for all substreamsX[τ,t] with τ ≥ t−w, such that, with probability 1−γ, we have: |C| ≤ 22p+8k log γ−1 log ∆ and fp(X[τ,t], C) ≤ 22p+8 OPTp(X). 
The algorithm uses spaceO(k log γ−1 log ∆ log(M/m)(logM+logw+log ∆)) and stores the cost of the consistent mapping, f(X,µ), and allows a 1 + approximation to the cost of the -consistent mapping, denoted by f̂(X[τ,t], µ). This is the -consistent mapping that is computed by the augmented Meyerson algorithm. In section 2, M and m are defined as the upper and lower bounds on the cost of the optimal solution. Note that when M/m and ∆ are polynomial in w,5 the above space bound is O(k log γ−1 log3(w)). 3.2 Sliding Window Algorithm In the previous section we have shown that we can the Meyerson sketch to have enough information to output a solution using the points in the active window whose cost is comparable to the cost of the optimal computed on the whole stream. However, we need an algorithm that is competitive with the cost of the optimum solution computed solely on the elements in the sliding window. We give some intuition behind our algorithm before delving into the details. Suppose we had a good guess on the value of the optimum solution, λ∗ and imagine splitting the input x1, x2, . . . , xt into blocks A1 = {x1, x2, . . . , xb1}, A2 = {xb1+1, . . . , xb2}, etc. with the constraints that (i) each block has optimum cost smaller than λ∗, and (ii) is also maximal, that is adding the next element to the block causes its cost to exceed λ∗. It is easy to see, that any sliding window of optimal solution of cost λ∗ overlaps at most two blocks. The idea behind our algorithm is that, if we started an augmented Meyerson sketch in each block, and we obtain a good mapping for the suffix of the first of these two blocks, we can recover a good approximate solution for the sliding window. We now show how to formalize this idea. During the execution of the algorithm, we first discretize the possible values of the optimum solution, and run a set of sketches for each value of λ. Specifically, for each guess λ, we run Algorithm 1 to compute the AugmentedMeyerson for two consecutive substreams, Aλ and Bλ, of the input stream X . (The full pseudocode of AugmentedMeyerson is available in the supplementary material.) When a new point, x, arrives we check whether the k-clustering cost of the solution computed on the sketch after adding x to Bλ exceeds λ. If not, we add it to the sketch for Bλ, if so we reset the Bλ substream to x, and rename the old sketch of Bλ as Aλ. Thus the algorithm maintains two sketches, on consecutive subintervals. Notice that the cost of each sketch is at most λ, and each sketch is grown to be maximal before being reset. We remark that to convert the Meyerson sketch to a k-clustering solution, we need to run a k-clustering algorithm on the weighted instance given by the sketch. Since the problem is NP-hard, let ALG denote any ρ-approximate algorithm, such as the one by [36]. Let S(Z) = (Y (Z),weight(Z)) denote the augmented Meyerson sketch built on a (sub)stream Z, with Y (Z) as the centers, and weight(Z) as the (approximate) weight function. We denote by ALG(S(Z)) the solution obtained by running ALG over the weighted instance S(Z). Let f̂p(S(Z),ALG(S(Z))) be the estimated cost of the solution ALG(S(Z)) over the stream Z obtained by the sketch S(Z). We show that we can implement a function f̂p that operates only on the information in the augmented Meyerson sketch S(Z) and gives a β ∈ O(ρ) approximation to the cost on the unabridged input. Lemma 3.2 (Approximate solution and approximate cost from a sketch). 
Using an approximation algorithm ALG, from the augmented Meyerson sketch S(Z), with probability ≥ 1− γ, we can output a solution ALG(S(Z)) and an estimate f̂p(S(Z),ALG(S(Z))) of its cost s.t. fp(Z,ALG(S(Z))) ≤ f̂p(S(Z),ALG(S(Z))) ≤ β(ρ)fp(Z,OPT(Z)) for a constant β(ρ) ≤ 23p+6ρ depending only the approximation factor ρ of ALG. 5We note that prior work [17, 25] makes similar assumptions to get a bound depending on w. Algorithm 1 Meyerson Sketches, ComputeSketches(X,w, λ,m,M,∆) 1: Input: A sequence of points X = x0, x1, x2, . . . , xn. The size of the window w. Cost threshold λ. A lower bound m and upper bound M of the cost of the optimal solution and upper bound on distances ∆. 2: Output: Two sketches for the stream S1 and S2. 3: S1 ← AugmentedMeyerson(∅, w,m,M,∆); S2 ← AugmentedMeyerson(∅, w,m,M,∆) 4: Aλ ← ∅; Bλ ← ∅ (Recall that Aλ, Bλ are sets and S1 and S2 the corresponding sketches. Note that the content of the sets is not stored explicitly.) 5: for x ∈ X do 6: Let Stemp be computed by AugmentedMeyerson(Bλ ∪ {x}, w,m,M,∆) . (Note: it can be computed by adding x to a copy of the sketch maintained by S2) 7: if f̂p(Stemp,ALG(Stemp)) ≤ λ then 8: Add x to the stream of the sketch S2. (Bλ ← Bλ ∪ {x}, S2 ← AugmentedMeyerson(Bλ, w,m,M,∆)) 9: else 10: S1 ← S2; S2 ← AugmentedMeyerson({x}, w,m,M,∆). (Aλ ← Bλ; Bλ ← {x}) 11: end if 12: end for 13: Return (S1, S2, and start and end times of Aλ and Bλ) Composition of sketches from sub-streams Before presenting the global sliding window algorithm that uses these pairs of sketches, we introduce some additional notation. Let S(Z) be the augmented Meyerson sketch computed over the stream Z. Let Suffixτ (S(Z)) denote the sketch obtained from a sketch S for the points that arrived after τ . This can be done using the operations defined in the supplementary material. We say that a time τ is contained in a substream A if A contains elements inserted on or after time τ . Finally we define Aτ as the suffix of A that contains elements starting at time τ . Given two sketches S(A), and S(B) computed over two disjoint substreams A,B, let S(A) ∪ S(B) be the sketch obtained by joining the centers of S(A) and S(B) (and summing their respective weights) in a single instance. We now prove a key property of the augmented Meyerson sketches we defined before. Lemma 3.3 (Composition with a Suffix of stream). Given two substreams A,B (with possibly B = ∅) and a time τ in A, let ALG be a constant approximation algorithm for the k-clustering problem. Then if OPTp(A) ≤ O(OPTp(Aτ ∪ B), then, with probability ≥ 1 − O(γ), we have fp(Aτ ∪B,ALG(Suffixτ (S(A)) ∪ S(B))) ≤ O(OPTp(Aτ ∪B)). The main idea of the proof is to show that Suffixτ (S(A))∪S(B) is -consistent with a good mapping from Aτ ∪ B and then by using a technique similar to Lemma 2.1 show that we can compute a constant approximation from an -consistent sketch. Algorithm 2 Our main algorithm. Input: X,m,M,∆, approx. factor of ALG (β) and δ. 1: Λ← {m, (1 + δ)m, . . . , 2pβ(1 + δ)M} 2: for λ ∈ Λ do 3: Sλ,1, Sλ,2 ← ComputeSketches(X,w, λ,m,M,∆) 4: end for 5: if Bλ∗ = W for some λ∗ then return ALG(Sλ∗,2) 6: λ∗ ← min({λ : Aλ 6⊆W}) 7: τ ← max(|X| − w, 1) 8: if W ∩Aλ∗ 6= ∅ then return ALG(Suffixτ (Sλ∗,1) ∪ Sλ∗,2) 9: else return ALG(Suffixτ (Sλ∗,2)) Final algorithm. We can now present the full algorithm in Algorithm 2. As mentioned before, we run multiple copies of ComputeSketches in parallel, for geometrically increasing values of λ. For each value of λ, we maintain the pair of sketches over the stream X . 
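Before describing how the centers are extracted, here is a heavily simplified Python sketch of what one ComputeSketches instance (Algorithm 1) does for a single guess lambda, built on top of the classic Meyerson insertion rule from Section 3.1. This is only an illustrative skeleton under our own simplifying assumptions: it omits the augmented bookkeeping for expired points and weights, the failure-probability amplification, and the exact insertion probabilities of the paper, and the stand-in for ALG (keeping the k heaviest centers) replaces the real rho-approximation such as k-means++. The names MeyersonSketch, estimate_cost, and compute_sketches are ours.

```python
import copy
import math
import random

def dist(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

class MeyersonSketch:
    """Classic (non-augmented) Meyerson-style sketch: a small set of weighted centers."""
    def __init__(self, lower_bound, k):
        self.facility_cost = lower_bound / k      # opening cost derived from the cost guess
        self.centers = []                         # list of (center, weight) pairs

    def add(self, x):
        d = min((dist(x, c) for c, _ in self.centers), default=float("inf"))
        if random.random() < min(1.0, d / self.facility_cost):
            self.centers.append((x, 1))           # open x as a new center
        else:
            i = min(range(len(self.centers)), key=lambda j: dist(x, self.centers[j][0]))
            c, w = self.centers[i]
            self.centers[i] = (c, w + 1)          # assign x to its nearest center

def estimate_cost(sketch, k, p=2):
    # Stand-in for f_p-hat(S, ALG(S)): here "ALG" just keeps the k heaviest centers;
    # the paper runs a real rho-approximation (e.g. k-means++) on the weighted instance.
    if len(sketch.centers) <= k:
        return 0.0
    chosen = [c for c, _ in sorted(sketch.centers, key=lambda cw: -cw[1])[:k]]
    return sum(w * min(dist(c, q) for q in chosen) ** p for c, w in sketch.centers)

def compute_sketches(stream, lam, lower_bound, k):
    """Two-sketch rotation of Algorithm 1 for a single cost guess lambda."""
    S1, S2 = MeyersonSketch(lower_bound, k), MeyersonSketch(lower_bound, k)
    for x in stream:
        tmp = copy.deepcopy(S2)                   # tentatively add x to a copy of S2
        tmp.add(x)
        if estimate_cost(tmp, k) <= lam:
            S2 = tmp                              # x joins the B_lambda sketch
        else:
            S1 = S2                               # A_lambda <- B_lambda
            S2 = MeyersonSketch(lower_bound, k)   # B_lambda <- {x}
            S2.add(x)
    return S1, S2
```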
Finally, we compute the centers using such sketches. If we get lucky, and for the sliding window W there exists a subsequence where Bλ∗ is precisely W , we use the appropriate sketch and return ALG(Sλ∗,2). Otherwise, we find the smallest λ∗ for which Aλ is not a subset of W . We then use the pair of sketches associated with Aλ∗ and Bλ∗ , combining the sketch of the suffix of Aλ∗ that intersects with W , and the sketch on Bλ∗ . The main result is that this algorithm provides a constant approximation of the k-clustering problem, for any p ≥ 1, with probability at least 1 − γ, using space linear in k and logarithmic in other parameters. The total running time of the algorithm depends on the complexity of ALG. Let T (n, k) be the complexity of solving an instance of k-clustering with size n points using ALG. Theorem 3.4. With probability 1 − γ, Algorithm 2, outputs an O(1)-approximation for the sliding window k-clustering problem using space: O ( k log(∆)(log(∆) + log(w) + log(M)) log2(M/m) log(γ−1 log(M/m)) ) and total update time O(T (k log(∆), k) log2(M/m) log(γ−1 log(M/m)) (log(∆) + log(w) + log(M)). We remark that if M and ∆ are polynomial in w, then the total space is O(k log4 w log(logw/γ)) and the total update time is O(T (k logw, k) log3(w) log(logw/γ)). The main component in the constant approximation factor of Theorem 3.4 statement comes from the 23p+5ρ approximation for the insertion-only case [43]. Here p is the norm, and ρ is the offline algorithm factor. Given the composition operation in our analysis in addition to applying triangle inequality and some other steps, we end up with an approximation factor ≈ 28p+6ρ. We do not aim to optimize for this approximation factor, however it could be an interesting future direction. 4 Empirical Evaluation We now describe the methodology of our empirical evaluation before providing our experiments results. We report only the main results in the section, more details on the experiments and results are in supplementary material. Our code is available open-source on github6. All datasets used are publicly-available. Datasets. We used 3 real-world datasets from the UCI Repository [28] that have been used in previous experiments on k-clustering for data streams settings: SKINTYPE [12], n = 245057, d = 4, SHUTTLE, n = 58000, d = 9, and COVERTYPE [13], n = 581012, d = 54. Consistent with previous work, we stream all points in the natural order (as they are stored in the dataset). We also use 4 publicly-available synthetic dataset from [31] (the S-Set series) that have ground-truth clusters. We use 4 datasets (s1, s2, s3, s4) that are increasingly harder to cluster and have each k = 15 ground-truth clusters. Consistent with previous work, we stream the points in random order (as they are sorted by ground truth in the dataset). In all datasets, we pre-process each dataset to have zero mean and unit standard deviation in each dimension. All experiments use Euclidean distance, we focus on the the K-MEANS objective (p = 2) which we use as cost. We use k-means++ [4] as the solver ALG to extract the solution from our sketch. Parameters. We vary the number of centers, k, from 4 to 40 and window size, w, from 10,000 to 40,000. We experiment with δ = [0.1, 0.2] and set = 0.05 (empirically the results are robust to wide settings of ). Metrics. We focus on three key metrics: cost of the clustering, maximum space requirement of our sketch, and average running time of the update function. 
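Before turning to the measurements, note that the preprocessing and the solver ALG just described can be reproduced with standard tooling. Below is a minimal scikit-learn sketch of that setup (per-dimension standardization, Euclidean distance, k-means++ on a weighted instance); it is our own illustration and may differ from the authors' exact pipeline. Also note that scikit-learn's KMeans places centers anywhere in Euclidean space (the k-MEANS variant of footnote 3), whereas the algorithmic results hold for general metrics.

```python
import numpy as np
from sklearn.cluster import KMeans

def standardize(X):
    # Zero mean, unit standard deviation in each dimension, as in the experimental setup.
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=0)) / X.std(axis=0)

def solve_weighted_instance(centers, weights, k, runs=10):
    # ALG: k-means++ on the weighted instance extracted from the sketch.
    # n_init=runs mirrors reporting the best of several k-means++ runs.
    km = KMeans(n_clusters=k, init="k-means++", n_init=runs)
    km.fit(np.asarray(centers, dtype=float), sample_weight=np.asarray(weights, dtype=float))
    return km.cluster_centers_

# Toy usage: cluster a sketch of 100 weighted points into k = 15 groups.
rng = np.random.default_rng(0)
pts = standardize(rng.normal(size=(100, 4)))
w = rng.integers(1, 50, size=100)
print(solve_weighted_instance(pts, w, k=15).shape)  # (15, 4)
```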
To give an implementation independent view into space and update time, we report as space usage the number of points stored, and as update time the number of distance evaluations. All of the other costs are negligible by comparison. Baselines. We consider the following baselines. Batch K-Means++: We use k-means++ over the entire window as a proxy for the optimum, since the latter is NP-hard to compute. At every insertion, we report the best solution over 10 runs of k-means++ on the window. Observe that this is inefficient as it requires Ω(w) space and Ω(kw) run time per update. Sampling: We maintain a random sample of points from the active window, and then run k-means++ on the sample. This allows us to evaluate the performance of a baseline, at the same space cost of our algorithm. SODA16: We also evaluated the only previously published algorithm for this setting in [17]. We note that we made some practical modifications to further improve the performance of our algorithm which we report in the supplementary material. 6https://github.com/google-research/google-research/tree/master/sliding_window_ clustering/ Comparison with previous work. We begin by comparing our algorithm to the previously published algorithm of [17]. The baseline in this paragraph is SODA16 algorithm in [17]. We confirm empirically that the memory use of this baseline already exceeds the size of the sliding window for very small k, and that it is significantly slower than our algorithm. Figure 1 shows the space used by our algorithm and by the baseline over the COVERTYPE dataset for a |W | = 10,000 and different k. We confirm that our algorithm’s memory grows linearly in k while the baseline grows super-linearly in k and that for k > 10 the baseline costs more than storing the entire window. In Table 1 we show that our algorithm is significantly faster and uses less memory than the SODA16 already for small values of k. In the supplementary material we show that the difference is even larger for bigger values of k. Given the inefficiency of the SODA16 baseline, for the rest of the section we do not run experiments with it. Cost of the solution. We now take a look at how the cost of the solution evolves over time during the execution of our algorithm. In Figure 2 we plot the cost of the solution obtained by our algorithm (Sketch), our proxy for the optimum (KM++) and the sampling baseline (Sampling Baseline) on the COVERTYPE dataset. The sampling baseline is allowed to store the same number of points stored by our algorithm (at the same point in time). We use k = 20, |W | = 40,000, and δ = 0.2. The plot is obtained by computing the cost of the algorithms every 100 timesteps. Observe that our algorithm closely tracks that of the offline algorithm result, even as the cost fluctuates up and down. Our algorithm’s cost is always close to that of the off-line algorithm and significantly better than the random sampling baseline Update time and space tradeoff. We now investigate the time and space tradeoff of our algorithm. As a baseline we look at the cost required simply to recompute the solution using k-means++ at every time step. In Table 2 (δ = 0.2) we focus on the COVERTYPE dataset, the other results are similar. Table 2 shows the percent of the sliding window data points stored (Space) and the percent of update time (Time) of our algorithm vs a single run of k-means++ over the window. 
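Since the space and time numbers above are reported in implementation-independent units (points stored and distance evaluations), one simple way to obtain such measurements is to instrument the distance function and the sketches with counters. The small wrapper below is a hypothetical illustration of that bookkeeping, not the authors' actual experimental harness.

```python
import math

class MetricCounter:
    """Wrap a distance function and count how many times it is evaluated."""
    def __init__(self, dist_fn):
        self.dist_fn = dist_fn
        self.evaluations = 0

    def __call__(self, x, y):
        self.evaluations += 1
        return self.dist_fn(x, y)

def report(points_stored, counter, window_size, window_evals):
    # Percentages relative to the trivial baseline that stores the whole window
    # and re-runs k-means++ on it at every step.
    space_pct = 100.0 * points_stored / window_size
    time_pct = 100.0 * counter.evaluations / window_evals
    return space_pct, time_pct

# Usage sketch: pass a MetricCounter wherever the algorithm needs distances.
d = MetricCounter(lambda x, y: math.dist(x, y))
d((0.0, 0.0), (3.0, 4.0))
print(d.evaluations)  # 1
```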
In the supplementary material we show that the savings become larger (for the same k) as |W| grows and that we always store a small fraction of the window, providing order-of-magnitude speedups (e.g., we use < 0.5% of the time of the baseline for k = 10, |W| = 40,000). Here the baseline is the k-means++ algorithm.

Recovering ground-truth clusters. We evaluated the accuracy of the clusters produced by our algorithm on the datasets with ground-truth clusters using the well-known V-Measure accuracy measure for clustering [51]. We observe that on all datasets our algorithm performs better than the sampling baseline and in line with the offline k-means++. For example, on s1 our algorithm achieves a V-Measure of 0.969, while k-means++ gets 0.969 and sampling gets 0.933. The full results are available in the supplementary material.

5 Conclusion

We present the first algorithms for the k-clustering problem on sliding windows with space linear in k. Empirically we observe that the algorithm performs much better than the analytic bounds and allows us to store only a small fraction of the input. A natural avenue for future work is to give a tighter analysis and reduce this gap between theory and practice.

Broader Impact

Clustering is a fundamental unsupervised machine learning problem that lies at the core of many real-world applications. In this paper, we address the problem of clustering in a sliding window setting. As we argued in the introduction, the sliding window model allows us to discard old data, which is a core principle in data retention policies. Whenever a clustering algorithm is used on user data it is important to consider the impact it may have on the users. In this work we focus on the algorithmic aspects of the problem and do not address other considerations of using clustering that may be needed in practical settings. For instance, there is a burgeoning literature on fairness considerations in unsupervised methods, including clustering, which further delves into these issues. We refer to this literature [22, 40, 11] for addressing such issues.

Funding Transparency Statement

No third-party funding has been used for this research.
1. What is the focus and contribution of the paper regarding k-clustering in data streams?
2. What are the strengths of the proposed approach, particularly in utilizing Meyerson's method?
3. What are the weaknesses of the paper, especially regarding the approximation factor in clustering objectives?
4. Do you have any questions about the theoretical guarantees provided in the paper?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
This paper introduces an algorithm to perform k-clustering on a sliding window in data streams. It uses augmented Meyerson sketches on two substreams of the data to create an O(k polylog(w))-size weighted instance for each window, and then runs a known clustering algorithm (say ALG, with approximation factor ρ) on this instance. This results in a constant factor times ρ approximation to the clustering objective on the data window under consideration. Theoretical guarantees on the algorithm's performance are given. Extensive experiments are performed to evaluate the memory consumption, runtime, and accuracy of this algorithm.

Added after reading author rebuttal: I think the authors have adequately addressed the concerns about the constant approximation factor in the rebuttal. I agree that those factors are rather pessimistic when compared with the experimental results. The authors have promised to discuss the intuition behind the Meyerson sketch and add some details of experiments on k-median. Based on the novelty of the result and other theoretical contributions, I maintain my score.

Strengths
This paper utilizes the Meyerson method, which has been widely used in the online setting. The algorithm overcomes the technical challenge of maintaining the weighted points only in the current sliding window (and "forgetting" old data) by using two substreams and building the sketches on them, which I think is a clever idea. The theoretical guarantees and the intuition behind most of the steps in the algorithm are explained clearly. This paper improves the cubic dependency of the space requirement on k in [17] to a linear dependency, which is a significant improvement in settings where k is large. The experimental results (in the main paper and the appendices) support the theoretical guarantees well. They show significant improvements in memory utilization and run time while obtaining results comparable to k-means++.

Weaknesses
The approximation factor of the clustering objective of the weighted instance constructed by the Meyerson sketch is large. Even though the algorithm guarantees a constant factor, even for p = 1 the approximation factor can be as large as 2^9 times the approximation factor of the clustering algorithm used. Can the authors explain how this value compares with previous results in the streaming and sliding window settings?
NIPS
Title Sliding Window Algorithms for k-Clustering Problems Abstract The sliding window model of computation captures scenarios in which data is arriving continuously, but only the latest w elements should be used for analysis. The goal is to design algorithms that update the solution efficiently with each arrival rather than recomputing it from scratch. In this work, we focus on k-clustering problems such as k-means and k-median. In this setting, we provide simple and practical algorithms that offer stronger performance guarantees than previous results. Empirically, we show that our methods store only a small fraction of the data, are orders of magnitude faster, and find solutions with costs only slightly higher than those returned by algorithms with access to the full dataset. 1 Introduction Data clustering is a central tenet of unsupervised machine learning. One version of the problem can be phrased as grouping data into k clusters so that elements within the same cluster are similar to each other. Classic formulations of this question include the k-median and k-means problems for which good approximation algorithms are known [1, 44]. Unfortunately, these algorithms often do not scale to large modern datasets requiring researchers to turn to parallel [8], distributed [9], and streaming methods. In the latter model, points arrive one at a time and the goal is to find algorithms that quickly update a small sketch (or summary) of the input data that can then be used to compute an approximately optimal solution. One significant limitation of the classic data stream model is that it ignores the time when a data point arrived; in fact, all of the points in the input are treated with equal significance. However, in practice, it is often important (and sometimes necessary) to restrict the computation to very recent data. This restriction may be due to data freshness—e.g., when training a model on recent events, data from many days ago may be less relevant compared to data from the previous hour. Another motivation arises from legal reasons, e.g., data privacy laws such as the General Data Protection Regulation (GDPR), encourage and mandate that companies not retain certain user data beyond a specified period. This has resulted in many products including a data retention policy [54]. Such recency requirements can be modeled by the sliding window model. Here the goal is to maintain a small sketch of the input data, just as with the streaming model, and then use only this sketch to approximate the solution on the last w elements of the stream. Clustering in the sliding window model is the main question that we study in this work. A trivial solution simply maintains the w elements in the window and recomputes the clusters from scratch at each step. We intend to find solutions that use less space, and are more efficient at processing each 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. new element. In particular, we present an algorithm which uses space linear in k, and polylogarithmic in w, but still attains a constant factor approximation. Related Work Clustering. Clustering is a fundamental problem in unsupervised machine learning and has application in a disparate variety of settings, including data summarization, exploratory data analysis, matrix approximations and outlier detection [39, 41, 46, 50]. 
One of the most studied formulations in clustering of metric spaces is that of finding k centers that minimize an objective consisting of the `p norm of the distances of all points to their closest center. For p ∈ {1, 2,∞} this problem corresponds to k-median, k-means, and k-center, respectively, which are NP-hard, but constant factor approximation algorithms are known [1, 34, 44]. Several techniques have been used to tackle these problems at scale, including dimensionality reduction [45], core-sets [6], distributed algorithms [5], and streaming methods reviewed later. To clarify between Euclidean or general metric spaces, we note that our results work on arbitrary general metric spaces. The hardness results in the literature hold even for special case of Euclidean metrics and the constant factor approximation algorithms hold for the general metric spaces. Streaming model. Significant attention has been devoted to models for analyzing large-scale datasets that evolve over time. The streaming model of computation is of the most well-known (see [49] for a survey) and focuses on defining low-memory algorithms for processing data arriving one item at a time. A number of interesting results are known in this model ranging from the estimation of stream statistics [3, 10], to submodular optimization [7], to graph problems [2, 30, 42], and many others. Clustering is also well studied in this setting, including algorithms for k-median, k-means, and k-center in the insertion-only stream case [6, 20, 35]. Sliding window streaming model. The sliding window model significantly increases the difficultly of the problem, since deletions need to be handled as well. Several techniques are known, including the exponential histogram framework [27] that addresses weakly additive set functions, and the smooth histogram framework [18] that is suited for functions that are well-behaved and possesses a sufficiently small constant approximation. Since many problems, such as k-clustering, do not fit into these two categories, a number of algorithms have been developed for specific problems such as submodular optimization [14, 21, 29], graph sparsification [26], minimizing the enclosing ball [55], private heavy hitters [54], diversity maximization [14] and linear algebra operations [15]. Sliding window algorithms find also applications in data summarization [23]. Turning to sliding window algorithms for clustering, for the k-center problem Cohen et al. [25] show a (6 + )-approximation using O(k log ∆) space and per point update time of O(k2 log ∆), where ∆ is the spread of the metric, i.e. the ratio of the largest to the smallest pairwise distances. For k-median and k-means, [17] give constant factor approximation algorithms that use O(k3 log6 w) space and per point update time of O(poly(k, logw)).1 Their bound is polylogarithmic in w, but cubic in k, making it impractical unless k w.2 In this paper we improve their bounds and give a simpler algorithm with only linear dependency of k. Furthermore we show experimentally (Figure 1 and Table 1) that our algorithm is faster and uses significantly less memory than the one presented in [17] even with very small values k (i.e., k ≥ 4). In a different approach, [56] study a variant where one receives points in batches and uses heuristics to reduce the space and time. Their approach does provide approximation guarantees but it applies only to the Euclidean k-means case. 
Recently, [32] studied clustering problems in the distributed sliding window model, but these results are not applicable to our setting. The more challenging fully-dynamic stream case has also received attention [16, 38]. Contrary to our result for the sliding window case, in the fully-dynamic case, obtaining a Õ(k) memory, low update time algorithm, for the arbitrary metric k-clustering case with general `p norms is an open problem. For the special case of d-dimensional Euclidean spaces for k-means, there are positive results—[38] give Õ(kd4)-space core-set with 1 + approximation. Dynamic algorithms have also been studied in a consistent model [24, 43], but there the objective is to minimize the number of changes to the solution as the input evolves, rather than minimizing the approximation ratio and space used. Finally, a relaxation of the fully dynamic model that allows only 1We note that the authors assume that the cost of any solution is polynomial in w. We chose to state our bounds explicitly, which introduces a dependence on the ratio of the max and min costs of the solution. 2We note here that in some practical applications k can be large. For instance, in spam and abuse [53], near-duplicate detection [37] or reconciliation tasks [52]. a limited number of deletions has also been addressed [33, 48]. The only work related to clustering is that of submodular maximization [48] which includes exemplar-based clustering as a special case. Our Contributions We simplify and improve the state-of-the-art of k-clustering sliding window algorithms, resulting in lower memory algorithms. Specifically, we: • Introduce a simple new algorithm for k-clustering in the sliding window setting (Section 3.2). The algorithm is an example of a more general technique that we develop for minimization problems in this setting. (Section 3). • Prove that the algorithm needs space linear in k to obtain a constant approximate solution (Theorem 3.4), thus improving over the best previously known result which required Ω(k3) space. • Show empirically that the algorithm is orders of magnitude faster, more space efficient, and more accurate than previous solutions, even for small values of k (Section 4). 2 Preliminaries Let X be a set of arbitrary points, and d : X ×X → R be a distance function. We assume that (X,d) is an arbitrary metric space, that is, d is non-negative, symmetric, and satisfies the triangle inequality. For simplicity of exposition we will make a series of additional assumptions, in supplementary material, we explain how we can remove all these assumptions. We assume that the distances are normalized to lie between 1 and ∆. We will also consider weighted instances of our problem where, in addition, we are given a function weight : X → Z denoting the multiplicity of the point. The k-clustering family of problems asks to find a set of k cluster centers that minimizes a particular objective function. For a point x and a set of points Y = {y1, y2, . . . , ym}, we let d(x, Y ) = miny∈Y d(x, y), and let cl(x, Y ) be the point that realizes it, arg miny∈Y d(x, y). The cost of a set of centers C is: fp(X, C) = ∑ x∈X d p(x, C). Similarly for weighted instances, we have fp(X,weight, C) = ∑ x∈X weight(x)d p(x, C). Note that for p = 2, this is precisely the k-MEDOIDS problem.3 For p = 1, the above encodes the k-MEDIAN problem. When p is clear from the context, we will drop the subscript. 
We also refer to the optimum cost for a particular instance (X, d) as OPTp(X), and the optimal clustering as C∗p(X) = {c∗1, c∗2, . . . , c∗k} , shortening to C∗ when clear from context. Throughout the paper, we assume that p is a constant with p ≥ 1. While mapping a point to its nearest cluster is optimal, any map µ : X → X will produce a valid clustering. In a slight abuse of notation we extend the definition of fp to say fp(X,µ) =∑ x∈X d(x, µ(x)) p. In this work, we are interested in algorithms for sliding window problems, we refer to the window size as w and to the set of elements in the active window as W , and we use n for the size of the entire stream, typically n w. We denote by Xt the t-th element of the stream and by X[a,b] the subset of the stream from time a to b (both included). For simplicity of exposition, we assume that we have access to a lower bound m and upper bound M of the cost of the optimal solution in any sliding window.4 We use two tools repeatedly in our analysis. The first is the relaxed triangle inequality. For p ≥ 1 and any x, y, z ∈ X , we have: d(x, y)p ≤ 2p−1(d(x, z)p + d(z, y)p). The second is the fact that the value of the optimum solution of a clustering problem does not change drastically if the points are shifted around by a small amount. This is captured by Lemma 2.1 which was first proved in [35]. For completeness we present its proof in the supplementary material. Lemma 2.1. Given a set of points X = {x1, . . . , xn} consider a multiset Y = {y1, . . . , yn} such that ∑ i d p(xi, yi) ≤ αOPTp(X), for a constant α. Let B∗ be the optimal k-clustering solution for Y . Then fp(X,B∗) ∈ O((1 + α)OPTp(X)). 3In the Euclidean space, if the centers do not need to be part of the input, then setting p = 2 recovers the k-MEANS problem. 4These assumptions are not necessary. In the supplementary material, we explain how we estimate them in our experiments and how from a theoretical perspective we can remove the assumptions. Given a set of points X , a mapping µ : X → Y , and a weighted instance defined by (Y,weight), we say that the weighted instance is consistent with µ, if for all y ∈ Y , we have that weight(y) = |{x ∈ X| µ(x) = y}|. We say it is -consistent (for constant ≥ 0), if for all y ∈ Y , we have that |{x ∈ X | µ(x) = y}| ≤ weight(y) ≤ (1 + )|{x ∈ X | µ(x) = y}|. Finally, we remark that the k-clustering problem is NP-hard, so our focus will be on finding efficient approximation algorithms. We say that we obtain an α approximation for a clustering problem if fp(X, C) ≤ α · OPTp(X). The best-known approximation factor for all the problems that we consider are constant [1, 19, 36]. Additionally, since the algorithms work in arbitrary metric spaces, we measure update time in terms of distance function evaluations and use the number of points as space cost (all other costs are negligible). 3 Algorithm and Analysis The starting point of our clustering is the development of efficient sketching technique that, given a stream of points, X , a mapping µ, and a time, τ , returns a weighted instance that is -consistent with µ for the points inserted at or after τ . To see why having such a sketch is useful, suppose µ has a cost a constant factor larger than the cost of the optimal solution. Then we could get an approximation to the sliding window problem by computing an approximately optimal clustering on the weighted instance (see Lemma 2.1). 
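As a quick sanity check of the relaxed triangle inequality d(x, y)^p ≤ 2^(p-1) (d(x, z)^p + d(z, y)^p) used throughout the analysis, the following few lines verify it numerically on random Euclidean points. This is purely illustrative; the inequality holds for any metric and any constant p ≥ 1.

```python
import math
import random

def relaxed_triangle_holds(x, y, z, p):
    d = math.dist
    return d(x, y) ** p <= 2 ** (p - 1) * (d(x, z) ** p + d(z, y) ** p) + 1e-9

random.seed(0)
for _ in range(10000):
    pts = [tuple(random.uniform(-1, 1) for _ in range(3)) for _ in range(3)]
    for p in (1, 2, 3):
        assert relaxed_triangle_holds(*pts, p=p)
print("relaxed triangle inequality held on all sampled triples")
```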
To develop such a sketch, we begin by relaxing our goal by allowing our sketch to return a weighted instance that is -consistent with µ for the entire stream X as opposed to the substream starting at Xτ . Although a single sketch with this property is not enough to obtain a good algorithm for the overall problem, we design a sliding window algorithm that builds multiple such sketches in parallel. We can show that it is enough to maintain a polylogarithmic number of carefully chosen sketches to guarantee that we can return a good approximation to the optimal solution in the active window. In subsection 3.1 we describe how we construct a single efficient sketch. Then, in the subsection 3.2, we describe how we can combine different sketches to obtain a good approximation. All of the missing proofs of the lemmas and the pseudo-code for all the missing algorithms are presented in the supplementary material. 3.1 Augmented Meyerson Sketch Our sketching technique builds upon previous clustering algorithms developed for the streaming model of computation. Among these, a powerful approach is the sketch introduced for facility location problems by Meyerson [47]. At its core, given an approximate lower bound to the value of the optimum solution, Meyerson’s algorithm constructs a set C of sizeO(k log ∆), known as a sketch, and a consistent weighted instance, such that, with constant probability, fp(X, C) ∈ O(OPTp(X)). Given such a sketch, it is easy to both: amplify the success probability to be arbitrarily close to 1 by running multiple copies in parallel, and reduce the number of centers to k by keeping track of the number of points assigned to each c ∈ C and then clustering this weighted instance into k groups. What makes the sketch appealing in practice is its easy construction—each arriving point is added as a new center with some carefully chosen probability. If a new point does not make it as a center, it is assigned to the nearest existing center, and the latter’s weight is incremented by 1. Meyerson algorithm was initially designed for online problems, and then adapted to algorithms in the streaming computation model, where points arrive one at a time but are never deleted. To solve the sliding window problem naively, one can simply start a new sketch with every newly arriving point, but this is inefficient. To overcome these limitations we extend the Meyerson sketch. In particular, there are two challenges that we face in sliding window models: 1. The weight of each cluster is not monotonically increasing, as points that are assigned to the cluster time out and are dropped from the window. 2. The designated center of each cluster may itself expire and be removed from the window, requiring us to pick a new representative for the cluster. Using some auxiliary bookkeeping we can augment the classic Meyerson sketch to return a weighted instance that is -consistent with a mapping µ whose cost is a constant factor larger than the cost of the optimal solution for the entire stream X . More precisely, Lemma 3.1. Let w be the size of the sliding window, ∈ (0, 1) be a constant and t the current time. Let (X,d) be a metric space and fix γ ∈ (0, 1). The augmented Meyerson algorithm computes an implicit mapping µ : X → C, and an -consistent weighted instance (C, ŵeight) for all substreamsX[τ,t] with τ ≥ t−w, such that, with probability 1−γ, we have: |C| ≤ 22p+8k log γ−1 log ∆ and fp(X[τ,t], C) ≤ 22p+8 OPTp(X). 
The algorithm uses spaceO(k log γ−1 log ∆ log(M/m)(logM+logw+log ∆)) and stores the cost of the consistent mapping, f(X,µ), and allows a 1 + approximation to the cost of the -consistent mapping, denoted by f̂(X[τ,t], µ). This is the -consistent mapping that is computed by the augmented Meyerson algorithm. In section 2, M and m are defined as the upper and lower bounds on the cost of the optimal solution. Note that when M/m and ∆ are polynomial in w,5 the above space bound is O(k log γ−1 log3(w)). 3.2 Sliding Window Algorithm In the previous section we have shown that we can the Meyerson sketch to have enough information to output a solution using the points in the active window whose cost is comparable to the cost of the optimal computed on the whole stream. However, we need an algorithm that is competitive with the cost of the optimum solution computed solely on the elements in the sliding window. We give some intuition behind our algorithm before delving into the details. Suppose we had a good guess on the value of the optimum solution, λ∗ and imagine splitting the input x1, x2, . . . , xt into blocks A1 = {x1, x2, . . . , xb1}, A2 = {xb1+1, . . . , xb2}, etc. with the constraints that (i) each block has optimum cost smaller than λ∗, and (ii) is also maximal, that is adding the next element to the block causes its cost to exceed λ∗. It is easy to see, that any sliding window of optimal solution of cost λ∗ overlaps at most two blocks. The idea behind our algorithm is that, if we started an augmented Meyerson sketch in each block, and we obtain a good mapping for the suffix of the first of these two blocks, we can recover a good approximate solution for the sliding window. We now show how to formalize this idea. During the execution of the algorithm, we first discretize the possible values of the optimum solution, and run a set of sketches for each value of λ. Specifically, for each guess λ, we run Algorithm 1 to compute the AugmentedMeyerson for two consecutive substreams, Aλ and Bλ, of the input stream X . (The full pseudocode of AugmentedMeyerson is available in the supplementary material.) When a new point, x, arrives we check whether the k-clustering cost of the solution computed on the sketch after adding x to Bλ exceeds λ. If not, we add it to the sketch for Bλ, if so we reset the Bλ substream to x, and rename the old sketch of Bλ as Aλ. Thus the algorithm maintains two sketches, on consecutive subintervals. Notice that the cost of each sketch is at most λ, and each sketch is grown to be maximal before being reset. We remark that to convert the Meyerson sketch to a k-clustering solution, we need to run a k-clustering algorithm on the weighted instance given by the sketch. Since the problem is NP-hard, let ALG denote any ρ-approximate algorithm, such as the one by [36]. Let S(Z) = (Y (Z),weight(Z)) denote the augmented Meyerson sketch built on a (sub)stream Z, with Y (Z) as the centers, and weight(Z) as the (approximate) weight function. We denote by ALG(S(Z)) the solution obtained by running ALG over the weighted instance S(Z). Let f̂p(S(Z),ALG(S(Z))) be the estimated cost of the solution ALG(S(Z)) over the stream Z obtained by the sketch S(Z). We show that we can implement a function f̂p that operates only on the information in the augmented Meyerson sketch S(Z) and gives a β ∈ O(ρ) approximation to the cost on the unabridged input. Lemma 3.2 (Approximate solution and approximate cost from a sketch). 
Using an approximation algorithm ALG, from the augmented Meyerson sketch S(Z), with probability ≥ 1− γ, we can output a solution ALG(S(Z)) and an estimate f̂p(S(Z),ALG(S(Z))) of its cost s.t. fp(Z,ALG(S(Z))) ≤ f̂p(S(Z),ALG(S(Z))) ≤ β(ρ)fp(Z,OPT(Z)) for a constant β(ρ) ≤ 23p+6ρ depending only the approximation factor ρ of ALG. 5We note that prior work [17, 25] makes similar assumptions to get a bound depending on w. Algorithm 1 Meyerson Sketches, ComputeSketches(X,w, λ,m,M,∆) 1: Input: A sequence of points X = x0, x1, x2, . . . , xn. The size of the window w. Cost threshold λ. A lower bound m and upper bound M of the cost of the optimal solution and upper bound on distances ∆. 2: Output: Two sketches for the stream S1 and S2. 3: S1 ← AugmentedMeyerson(∅, w,m,M,∆); S2 ← AugmentedMeyerson(∅, w,m,M,∆) 4: Aλ ← ∅; Bλ ← ∅ (Recall that Aλ, Bλ are sets and S1 and S2 the corresponding sketches. Note that the content of the sets is not stored explicitly.) 5: for x ∈ X do 6: Let Stemp be computed by AugmentedMeyerson(Bλ ∪ {x}, w,m,M,∆) . (Note: it can be computed by adding x to a copy of the sketch maintained by S2) 7: if f̂p(Stemp,ALG(Stemp)) ≤ λ then 8: Add x to the stream of the sketch S2. (Bλ ← Bλ ∪ {x}, S2 ← AugmentedMeyerson(Bλ, w,m,M,∆)) 9: else 10: S1 ← S2; S2 ← AugmentedMeyerson({x}, w,m,M,∆). (Aλ ← Bλ; Bλ ← {x}) 11: end if 12: end for 13: Return (S1, S2, and start and end times of Aλ and Bλ) Composition of sketches from sub-streams Before presenting the global sliding window algorithm that uses these pairs of sketches, we introduce some additional notation. Let S(Z) be the augmented Meyerson sketch computed over the stream Z. Let Suffixτ (S(Z)) denote the sketch obtained from a sketch S for the points that arrived after τ . This can be done using the operations defined in the supplementary material. We say that a time τ is contained in a substream A if A contains elements inserted on or after time τ . Finally we define Aτ as the suffix of A that contains elements starting at time τ . Given two sketches S(A), and S(B) computed over two disjoint substreams A,B, let S(A) ∪ S(B) be the sketch obtained by joining the centers of S(A) and S(B) (and summing their respective weights) in a single instance. We now prove a key property of the augmented Meyerson sketches we defined before. Lemma 3.3 (Composition with a Suffix of stream). Given two substreams A,B (with possibly B = ∅) and a time τ in A, let ALG be a constant approximation algorithm for the k-clustering problem. Then if OPTp(A) ≤ O(OPTp(Aτ ∪ B), then, with probability ≥ 1 − O(γ), we have fp(Aτ ∪B,ALG(Suffixτ (S(A)) ∪ S(B))) ≤ O(OPTp(Aτ ∪B)). The main idea of the proof is to show that Suffixτ (S(A))∪S(B) is -consistent with a good mapping from Aτ ∪ B and then by using a technique similar to Lemma 2.1 show that we can compute a constant approximation from an -consistent sketch. Algorithm 2 Our main algorithm. Input: X,m,M,∆, approx. factor of ALG (β) and δ. 1: Λ← {m, (1 + δ)m, . . . , 2pβ(1 + δ)M} 2: for λ ∈ Λ do 3: Sλ,1, Sλ,2 ← ComputeSketches(X,w, λ,m,M,∆) 4: end for 5: if Bλ∗ = W for some λ∗ then return ALG(Sλ∗,2) 6: λ∗ ← min({λ : Aλ 6⊆W}) 7: τ ← max(|X| − w, 1) 8: if W ∩Aλ∗ 6= ∅ then return ALG(Suffixτ (Sλ∗,1) ∪ Sλ∗,2) 9: else return ALG(Suffixτ (Sλ∗,2)) Final algorithm. We can now present the full algorithm in Algorithm 2. As mentioned before, we run multiple copies of ComputeSketches in parallel, for geometrically increasing values of λ. For each value of λ, we maintain the pair of sketches over the stream X . 
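To complement the pseudocode above, here is a small Python sketch of the guess grid Λ and of the λ* selection rule used by Algorithm 2 (line 6). It is a hedged illustration under our own assumptions: the names are ours, only the start times of the A_λ blocks are tracked, and the exact-match case (B_λ = W) and the Suffix/union operations on sketches are omitted.

```python
def guess_grid(m, M, delta, p, beta):
    # Lambda = {m, (1 + delta) m, (1 + delta)^2 m, ..., 2^p beta (1 + delta) M}
    grid, lam = [], float(m)
    top = (2 ** p) * beta * (1 + delta) * M
    while lam <= top:
        grid.append(lam)
        lam *= (1 + delta)
    return grid

def pick_lambda_star(a_starts, window_start):
    """Smallest guess lambda whose block A_lambda is NOT a subset of the window W,
    i.e. A_lambda started before the window did. a_starts maps lambda -> start of A_lambda."""
    candidates = [lam for lam, a_start in a_starts.items() if a_start < window_start]
    return min(candidates) if candidates else None

# Toy usage: grid of guesses between m = 1 and M = 10^6 with delta = 0.2, p = 2, beta = 9.
grid = guess_grid(1, 1e6, delta=0.2, p=2, beta=9)
print(len(grid), grid[0], round(grid[-1]))
# Suppose the window starts at time 900 and the first few A_lambda blocks start at these times:
a_starts = {lam: 1000 - i * 50 for i, lam in enumerate(grid[:6])}
print(pick_lambda_star(a_starts, window_start=900))
```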
Finally, we compute the centers using such sketches. If we are lucky and there exists a λ∗ for which the substream Bλ∗ is precisely the sliding window W, we use the corresponding sketch and return ALG(Sλ∗,2). Otherwise, we find the smallest λ∗ for which Aλ is not a subset of W. We then use the pair of sketches associated with Aλ∗ and Bλ∗, combining the sketch of the suffix of Aλ∗ that intersects with W, and the sketch on Bλ∗. The main result is that this algorithm provides a constant approximation of the k-clustering problem, for any p ≥ 1, with probability at least 1 − γ, using space linear in k and logarithmic in other parameters. The total running time of the algorithm depends on the complexity of ALG. Let T (n, k) be the complexity of solving an instance of k-clustering on n points using ALG. Theorem 3.4. With probability 1 − γ, Algorithm 2 outputs an O(1)-approximation for the sliding window k-clustering problem using space: O ( k log(∆)(log(∆) + log(w) + log(M)) log2(M/m) log(γ−1 log(M/m)) ) and total update time O(T (k log(∆), k) log2(M/m) log(γ−1 log(M/m))(log(∆) + log(w) + log(M))). We remark that if M and ∆ are polynomial in w, then the total space is O(k log4 w log(logw/γ)) and the total update time is O(T (k logw, k) log3(w) log(logw/γ)). The main component in the constant approximation factor in the statement of Theorem 3.4 comes from the 23p+5ρ approximation for the insertion-only case [43]. Here p is the norm, and ρ is the approximation factor of the offline algorithm. Given the composition operation in our analysis, in addition to applying the triangle inequality and some other steps, we end up with an approximation factor ≈ 28p+6ρ. We do not aim to optimize this approximation factor; improving it could be an interesting direction for future work. 4 Empirical Evaluation We now describe the methodology of our empirical evaluation before presenting our experimental results. We report only the main results in this section; more details on the experiments and results are in the supplementary material. Our code is available open-source on github6. All datasets used are publicly available. Datasets. We used 3 real-world datasets from the UCI Repository [28] that have been used in previous experiments on k-clustering for data stream settings: SKINTYPE [12], n = 245057, d = 4, SHUTTLE, n = 58000, d = 9, and COVERTYPE [13], n = 581012, d = 54. Consistent with previous work, we stream all points in the natural order (as they are stored in the dataset). We also use 4 publicly available synthetic datasets from [31] (the S-Set series) that have ground-truth clusters. We use 4 datasets (s1, s2, s3, s4) that are increasingly harder to cluster and each have k = 15 ground-truth clusters. Consistent with previous work, we stream the points in random order (as they are sorted by ground truth in the dataset). In all datasets, we pre-process the points to have zero mean and unit standard deviation in each dimension. All experiments use the Euclidean distance, and we focus on the K-MEANS objective (p = 2), which we use as the cost. We use k-means++ [4] as the solver ALG to extract the solution from our sketch. Parameters. We vary the number of centers, k, from 4 to 40 and the window size, w, from 10,000 to 40,000. We experiment with δ = [0.1, 0.2] and set ε = 0.05 (empirically the results are robust to a wide range of settings of ε). Metrics. We focus on three key metrics: cost of the clustering, maximum space requirement of our sketch, and average running time of the update function.
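One simple, implementation-independent way to obtain such measurements is to count stored points and distance evaluations directly; the tiny wrapper below illustrates the latter (it is our own sketch, not part of the paper's released code).

```python
class CountingMetric:
    """Wrap a distance function and count how many times it is evaluated,
    which serves as an implementation-independent proxy for update time."""

    def __init__(self, dist):
        self.dist = dist
        self.calls = 0

    def __call__(self, x, y):
        self.calls += 1
        return self.dist(x, y)

# Usage sketch: pass CountingMetric(euclidean_distance) to the clustering routine,
# read .calls after each update, and report the number of points currently kept
# in the sketches as the space usage.
```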
To give an implementation independent view into space and update time, we report as space usage the number of points stored, and as update time the number of distance evaluations. All of the other costs are negligible by comparison. Baselines. We consider the following baselines. Batch K-Means++: We use k-means++ over the entire window as a proxy for the optimum, since the latter is NP-hard to compute. At every insertion, we report the best solution over 10 runs of k-means++ on the window. Observe that this is inefficient as it requires Ω(w) space and Ω(kw) run time per update. Sampling: We maintain a random sample of points from the active window, and then run k-means++ on the sample. This allows us to evaluate the performance of a baseline, at the same space cost of our algorithm. SODA16: We also evaluated the only previously published algorithm for this setting in [17]. We note that we made some practical modifications to further improve the performance of our algorithm which we report in the supplementary material. 6https://github.com/google-research/google-research/tree/master/sliding_window_ clustering/ Comparison with previous work. We begin by comparing our algorithm to the previously published algorithm of [17]. The baseline in this paragraph is SODA16 algorithm in [17]. We confirm empirically that the memory use of this baseline already exceeds the size of the sliding window for very small k, and that it is significantly slower than our algorithm. Figure 1 shows the space used by our algorithm and by the baseline over the COVERTYPE dataset for a |W | = 10,000 and different k. We confirm that our algorithm’s memory grows linearly in k while the baseline grows super-linearly in k and that for k > 10 the baseline costs more than storing the entire window. In Table 1 we show that our algorithm is significantly faster and uses less memory than the SODA16 already for small values of k. In the supplementary material we show that the difference is even larger for bigger values of k. Given the inefficiency of the SODA16 baseline, for the rest of the section we do not run experiments with it. Cost of the solution. We now take a look at how the cost of the solution evolves over time during the execution of our algorithm. In Figure 2 we plot the cost of the solution obtained by our algorithm (Sketch), our proxy for the optimum (KM++) and the sampling baseline (Sampling Baseline) on the COVERTYPE dataset. The sampling baseline is allowed to store the same number of points stored by our algorithm (at the same point in time). We use k = 20, |W | = 40,000, and δ = 0.2. The plot is obtained by computing the cost of the algorithms every 100 timesteps. Observe that our algorithm closely tracks that of the offline algorithm result, even as the cost fluctuates up and down. Our algorithm’s cost is always close to that of the off-line algorithm and significantly better than the random sampling baseline Update time and space tradeoff. We now investigate the time and space tradeoff of our algorithm. As a baseline we look at the cost required simply to recompute the solution using k-means++ at every time step. In Table 2 (δ = 0.2) we focus on the COVERTYPE dataset, the other results are similar. Table 2 shows the percent of the sliding window data points stored (Space) and the percent of update time (Time) of our algorithm vs a single run of k-means++ over the window. 
In the supplementary material we show that the savings become larger (at parity of k) as |W | grows and that we always store a small fraction of the window, providing order-of-magnitude speed ups (e.g., we use < 0.5% of the time of the baseline for k = 10, |W | = 40,000). Here the baseline is the k-means++ algorithm. Recovering ground-truth clusters. We evaluated the accuracy of the clusters produced by our algorithm on a dataset with ground-truth clusters using the well known V-Measure accuracy definition for clustering [51]. We observe that on all datasets our algorithm performs better than the sampling baseline and in line with the offline k-means++. For example, on the s1 our algorithm gets V-Measure of 0.969, while k-means++ gets 0.969 and sampling gets 0.933. The full results are available in the supplementary material. 5 Conclusion We present the first algorithms for the k-clustering problem on sliding windows with space linear in k. Empirically we observe that the algorithm performs much better than the analytic bounds, and it allows to store only a small fraction of the input. A natural avenue for future work is to give a tighter analysis, and reduce this gap between theory and practice. Broader Impact Clustering is a fundamental unsupervised machine learning problem that lies at the core of multiple real-world applications. In this paper, we address the problem of clustering in a sliding window setting. As we argued in the introduction, the sliding window model allows us to discard old data which is a core principle in data retention policies. Whenever a clustering algorithm is used on user data it is important to consider the impact it may have on the users. In this work we focus on the algorithmic aspects of the problem and we do not address other considerations of using clustering that may be needed in practical settings. For instance, there is a burgeoning literature on fairness considerations in unsupervised methods, including clustering, which further delves into these issues. We refer to this literature [22, 40, 11] for addressing such issues. Funding Transparency Statement No third-party funding has been used for this research.
1. What is the main contribution of the paper regarding clustering algorithms? 2. What are the strengths of the proposed approach, particularly in terms of its theoretical analysis and efficiency? 3. Do you have any concerns about the approximation guarantee provided by the algorithm? 4. How does the reviewer assess the novelty and significance of the paper's contributions?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper presents an algorithm for k-clustering in a sliding window streaming model, where k-clustering means the generalization of k-median and k-means to any fixed l_p-norm. The main theoretical result is an algorithm that achieves O(1)-approximation for points in arbitrary metric space and thus includes the prevalent of Euclidean metric, which is also used in the experimental evaluation. This algorithm is for sliding window streaming, where the algorithm repeatedly solves the clustering problem on the w most recent points in the stream (for parameter w). While the minimal requirement is to estimate the cost of a k-clustering, this algorithm also reports k center points. The usual motivation for this model is to allow old data to expire, and analyze only recent data. As the paper mentions, expiration of old data might also be required by policies and restrictions on data retention, and therefore this model may be more valuable and timely than it seems initially. This main theorem shows that the algorithm's space complexity is about O(k (log w)^4), improving over the previous bound which grows like k^3. This should also lead to improved running time, which is more difficult to compare as it depends on running an approximation algorithm for offline. At a high level, the low space complexity follows by employing a well-known algorithm by Meyerson, which is a very simple strategy to subsample the points to something like O(k\log w), with only O(1)-factor loss in the objective. This paper views this subsample as a small sketch, because it indeed suffices to k-cluster this subsample (viewed as a weighted set). However, this approach is not applicable to the sliding window model, and the paper has to carefully manipulate the stream before applying Meyerson's sketch (polylog(w)-many times), in an ingenious manner. I should note that standard methods for the sliding window model (like smooth histograms) are not applicable here. In this sense, the paper really solves a difficult problem. ADDED AFTER AUTHOR REBUTTAL. I understand the clarifications. My evaluation has not changed. Strengths The paper solves a difficult theoretical problem, using new ideas. The results are applicable to a broad range of k-clustering problems, including different objectives (e.g., k-median and k-means) and every metric space (including Euclidean) The experimental evaluation shows that this new algorithm is quite efficient in comparison with the previous algorithm and other baseline solutions, and yields solutions with low cost (objective function) The sliding window model may be more valuable and timely than it seems initially. Weaknesses The theoretical guarantee is O(1)-approximation, which could be a large constant, and not say 1+epsilon or even a small explicit constant like 2
NIPS
Title Sliding Window Algorithms for k-Clustering Problems Abstract The sliding window model of computation captures scenarios in which data is arriving continuously, but only the latest w elements should be used for analysis. The goal is to design algorithms that update the solution efficiently with each arrival rather than recomputing it from scratch. In this work, we focus on k-clustering problems such as k-means and k-median. In this setting, we provide simple and practical algorithms that offer stronger performance guarantees than previous results. Empirically, we show that our methods store only a small fraction of the data, are orders of magnitude faster, and find solutions with costs only slightly higher than those returned by algorithms with access to the full dataset. 1 Introduction Data clustering is a central tenet of unsupervised machine learning. One version of the problem can be phrased as grouping data into k clusters so that elements within the same cluster are similar to each other. Classic formulations of this question include the k-median and k-means problems for which good approximation algorithms are known [1, 44]. Unfortunately, these algorithms often do not scale to large modern datasets requiring researchers to turn to parallel [8], distributed [9], and streaming methods. In the latter model, points arrive one at a time and the goal is to find algorithms that quickly update a small sketch (or summary) of the input data that can then be used to compute an approximately optimal solution. One significant limitation of the classic data stream model is that it ignores the time when a data point arrived; in fact, all of the points in the input are treated with equal significance. However, in practice, it is often important (and sometimes necessary) to restrict the computation to very recent data. This restriction may be due to data freshness—e.g., when training a model on recent events, data from many days ago may be less relevant compared to data from the previous hour. Another motivation arises from legal reasons, e.g., data privacy laws such as the General Data Protection Regulation (GDPR), encourage and mandate that companies not retain certain user data beyond a specified period. This has resulted in many products including a data retention policy [54]. Such recency requirements can be modeled by the sliding window model. Here the goal is to maintain a small sketch of the input data, just as with the streaming model, and then use only this sketch to approximate the solution on the last w elements of the stream. Clustering in the sliding window model is the main question that we study in this work. A trivial solution simply maintains the w elements in the window and recomputes the clusters from scratch at each step. We intend to find solutions that use less space, and are more efficient at processing each 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. new element. In particular, we present an algorithm which uses space linear in k, and polylogarithmic in w, but still attains a constant factor approximation. Related Work Clustering. Clustering is a fundamental problem in unsupervised machine learning and has application in a disparate variety of settings, including data summarization, exploratory data analysis, matrix approximations and outlier detection [39, 41, 46, 50]. 
One of the most studied formulations in clustering of metric spaces is that of finding k centers that minimize an objective consisting of the `p norm of the distances of all points to their closest center. For p ∈ {1, 2,∞} this problem corresponds to k-median, k-means, and k-center, respectively, which are NP-hard, but constant factor approximation algorithms are known [1, 34, 44]. Several techniques have been used to tackle these problems at scale, including dimensionality reduction [45], core-sets [6], distributed algorithms [5], and streaming methods reviewed later. To clarify between Euclidean or general metric spaces, we note that our results work on arbitrary general metric spaces. The hardness results in the literature hold even for special case of Euclidean metrics and the constant factor approximation algorithms hold for the general metric spaces. Streaming model. Significant attention has been devoted to models for analyzing large-scale datasets that evolve over time. The streaming model of computation is of the most well-known (see [49] for a survey) and focuses on defining low-memory algorithms for processing data arriving one item at a time. A number of interesting results are known in this model ranging from the estimation of stream statistics [3, 10], to submodular optimization [7], to graph problems [2, 30, 42], and many others. Clustering is also well studied in this setting, including algorithms for k-median, k-means, and k-center in the insertion-only stream case [6, 20, 35]. Sliding window streaming model. The sliding window model significantly increases the difficultly of the problem, since deletions need to be handled as well. Several techniques are known, including the exponential histogram framework [27] that addresses weakly additive set functions, and the smooth histogram framework [18] that is suited for functions that are well-behaved and possesses a sufficiently small constant approximation. Since many problems, such as k-clustering, do not fit into these two categories, a number of algorithms have been developed for specific problems such as submodular optimization [14, 21, 29], graph sparsification [26], minimizing the enclosing ball [55], private heavy hitters [54], diversity maximization [14] and linear algebra operations [15]. Sliding window algorithms find also applications in data summarization [23]. Turning to sliding window algorithms for clustering, for the k-center problem Cohen et al. [25] show a (6 + )-approximation using O(k log ∆) space and per point update time of O(k2 log ∆), where ∆ is the spread of the metric, i.e. the ratio of the largest to the smallest pairwise distances. For k-median and k-means, [17] give constant factor approximation algorithms that use O(k3 log6 w) space and per point update time of O(poly(k, logw)).1 Their bound is polylogarithmic in w, but cubic in k, making it impractical unless k w.2 In this paper we improve their bounds and give a simpler algorithm with only linear dependency of k. Furthermore we show experimentally (Figure 1 and Table 1) that our algorithm is faster and uses significantly less memory than the one presented in [17] even with very small values k (i.e., k ≥ 4). In a different approach, [56] study a variant where one receives points in batches and uses heuristics to reduce the space and time. Their approach does provide approximation guarantees but it applies only to the Euclidean k-means case. 
Recently, [32] studied clustering problems in the distributed sliding window model, but these results are not applicable to our setting. The more challenging fully-dynamic stream case has also received attention [16, 38]. Contrary to our result for the sliding window case, in the fully-dynamic case, obtaining a Õ(k) memory, low update time algorithm, for the arbitrary metric k-clustering case with general `p norms is an open problem. For the special case of d-dimensional Euclidean spaces for k-means, there are positive results—[38] give Õ(kd4)-space core-set with 1 + approximation. Dynamic algorithms have also been studied in a consistent model [24, 43], but there the objective is to minimize the number of changes to the solution as the input evolves, rather than minimizing the approximation ratio and space used. Finally, a relaxation of the fully dynamic model that allows only 1We note that the authors assume that the cost of any solution is polynomial in w. We chose to state our bounds explicitly, which introduces a dependence on the ratio of the max and min costs of the solution. 2We note here that in some practical applications k can be large. For instance, in spam and abuse [53], near-duplicate detection [37] or reconciliation tasks [52]. a limited number of deletions has also been addressed [33, 48]. The only work related to clustering is that of submodular maximization [48] which includes exemplar-based clustering as a special case. Our Contributions We simplify and improve the state-of-the-art of k-clustering sliding window algorithms, resulting in lower memory algorithms. Specifically, we: • Introduce a simple new algorithm for k-clustering in the sliding window setting (Section 3.2). The algorithm is an example of a more general technique that we develop for minimization problems in this setting. (Section 3). • Prove that the algorithm needs space linear in k to obtain a constant approximate solution (Theorem 3.4), thus improving over the best previously known result which required Ω(k3) space. • Show empirically that the algorithm is orders of magnitude faster, more space efficient, and more accurate than previous solutions, even for small values of k (Section 4). 2 Preliminaries Let X be a set of arbitrary points, and d : X ×X → R be a distance function. We assume that (X,d) is an arbitrary metric space, that is, d is non-negative, symmetric, and satisfies the triangle inequality. For simplicity of exposition we will make a series of additional assumptions, in supplementary material, we explain how we can remove all these assumptions. We assume that the distances are normalized to lie between 1 and ∆. We will also consider weighted instances of our problem where, in addition, we are given a function weight : X → Z denoting the multiplicity of the point. The k-clustering family of problems asks to find a set of k cluster centers that minimizes a particular objective function. For a point x and a set of points Y = {y1, y2, . . . , ym}, we let d(x, Y ) = miny∈Y d(x, y), and let cl(x, Y ) be the point that realizes it, arg miny∈Y d(x, y). The cost of a set of centers C is: fp(X, C) = ∑ x∈X d p(x, C). Similarly for weighted instances, we have fp(X,weight, C) = ∑ x∈X weight(x)d p(x, C). Note that for p = 2, this is precisely the k-MEDOIDS problem.3 For p = 1, the above encodes the k-MEDIAN problem. When p is clear from the context, we will drop the subscript. 
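To make the objective concrete, the following small function evaluates fp for a Euclidean instance. The paper's guarantees hold for arbitrary metric spaces; the Euclidean distance is used here only because it is the setting of the experiments, and the function name and signature are our own.

```python
import numpy as np

def clustering_cost(X, centers, p=2, weight=None):
    """f_p(X, weight, C) = sum_x weight(x) * d(x, C)^p with d the Euclidean distance.

    X : (n, d) array of points, centers : (k, d) array, weight : optional (n,) multiplicities.
    p = 1 gives the k-median objective, p = 2 the k-means-style objective used in the experiments.
    """
    X = np.asarray(X, dtype=float)
    C = np.asarray(centers, dtype=float)
    dists = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)  # (n, k) pairwise distances
    closest = dists.min(axis=1) ** p                               # d(x, C)^p for each point
    w = np.ones(len(X)) if weight is None else np.asarray(weight, dtype=float)
    return float(np.sum(w * closest))
```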
We also refer to the optimum cost for a particular instance (X, d) as OPTp(X), and the optimal clustering as C∗p(X) = {c∗1, c∗2, . . . , c∗k}, shortening to C∗ when clear from context. Throughout the paper, we assume that p is a constant with p ≥ 1. While mapping a point to its nearest cluster is optimal, any map µ : X → X will produce a valid clustering. In a slight abuse of notation we extend the definition of fp to say fp(X,µ) = ∑x∈X d(x, µ(x))p. In this work, we are interested in algorithms for sliding window problems; we refer to the window size as w and to the set of elements in the active window as W, and we use n for the size of the entire stream, typically n ≫ w. We denote by Xt the t-th element of the stream and by X[a,b] the subset of the stream from time a to b (both included). For simplicity of exposition, we assume that we have access to a lower bound m and upper bound M of the cost of the optimal solution in any sliding window.4 We use two tools repeatedly in our analysis. The first is the relaxed triangle inequality. For p ≥ 1 and any x, y, z ∈ X, we have: d(x, y)p ≤ 2p−1(d(x, z)p + d(z, y)p). The second is the fact that the value of the optimum solution of a clustering problem does not change drastically if the points are shifted around by a small amount. This is captured by Lemma 2.1, which was first proved in [35]. For completeness we present its proof in the supplementary material. Lemma 2.1. Given a set of points X = {x1, . . . , xn}, consider a multiset Y = {y1, . . . , yn} such that ∑i dp(xi, yi) ≤ αOPTp(X), for a constant α. Let B∗ be the optimal k-clustering solution for Y. Then fp(X,B∗) ∈ O((1 + α)OPTp(X)). 3In the Euclidean space, if the centers do not need to be part of the input, then setting p = 2 recovers the k-MEANS problem. 4These assumptions are not necessary. In the supplementary material, we explain how we estimate them in our experiments and how from a theoretical perspective we can remove the assumptions. Given a set of points X, a mapping µ : X → Y, and a weighted instance defined by (Y,weight), we say that the weighted instance is consistent with µ if, for all y ∈ Y, we have that weight(y) = |{x ∈ X | µ(x) = y}|. We say it is ε-consistent (for a constant ε ≥ 0) if, for all y ∈ Y, we have that |{x ∈ X | µ(x) = y}| ≤ weight(y) ≤ (1 + ε)|{x ∈ X | µ(x) = y}|. Finally, we remark that the k-clustering problem is NP-hard, so our focus will be on finding efficient approximation algorithms. We say that we obtain an α approximation for a clustering problem if fp(X, C) ≤ α · OPTp(X). The best-known approximation factors for all the problems that we consider are constant [1, 19, 36]. Additionally, since the algorithms work in arbitrary metric spaces, we measure update time in terms of distance function evaluations and use the number of points as space cost (all other costs are negligible). 3 Algorithm and Analysis The starting point of our clustering is the development of an efficient sketching technique that, given a stream of points X, a mapping µ, and a time τ, returns a weighted instance that is ε-consistent with µ for the points inserted at or after τ. To see why having such a sketch is useful, suppose µ has a cost that is a constant factor larger than the cost of the optimal solution. Then we could get an approximation to the sliding window problem by computing an approximately optimal clustering on the weighted instance (see Lemma 2.1).
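The ε-consistency condition is easy to check operationally for a finite instance; the following illustrative snippet does so (names are ours, and points are assumed hashable).

```python
from collections import Counter

def is_eps_consistent(mapping, weight, eps):
    """Check that (Y, weight) is eps-consistent with mu: for every sketch point y,
    |{x : mu(x) = y}| <= weight(y) <= (1 + eps) * |{x : mu(x) = y}|.

    mapping : dict x -> y encoding mu, weight : dict y -> weight(y).
    """
    counts = Counter(mapping.values())
    keys = set(counts) | set(weight)
    return all(counts[y] <= weight.get(y, 0) <= (1 + eps) * counts[y] for y in keys)

# Example: three points mapped onto two sketch points, with slightly inflated weights.
mu = {"x1": "y1", "x2": "y1", "x3": "y2"}
print(is_eps_consistent(mu, {"y1": 2, "y2": 1}, eps=0.05))   # True  (exactly consistent)
print(is_eps_consistent(mu, {"y1": 3, "y2": 1}, eps=0.05))   # False (y1 over-counted by 50%)
```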
To develop such a sketch, we begin by relaxing our goal: we allow our sketch to return a weighted instance that is ε-consistent with µ for the entire stream X, as opposed to the substream starting at Xτ. Although a single sketch with this property is not enough to obtain a good algorithm for the overall problem, we design a sliding window algorithm that builds multiple such sketches in parallel. We can show that it is enough to maintain a polylogarithmic number of carefully chosen sketches to guarantee that we can return a good approximation to the optimal solution in the active window. In Subsection 3.1 we describe how we construct a single efficient sketch. Then, in Subsection 3.2, we describe how we can combine different sketches to obtain a good approximation. All of the missing proofs of the lemmas and the pseudo-code for all the missing algorithms are presented in the supplementary material. 3.1 Augmented Meyerson Sketch Our sketching technique builds upon previous clustering algorithms developed for the streaming model of computation. Among these, a powerful approach is the sketch introduced for facility location problems by Meyerson [47]. At its core, given an approximate lower bound to the value of the optimum solution, Meyerson’s algorithm constructs a set C of size O(k log ∆), known as a sketch, and a consistent weighted instance, such that, with constant probability, fp(X, C) ∈ O(OPTp(X)). Given such a sketch, it is easy to both: amplify the success probability to be arbitrarily close to 1 by running multiple copies in parallel, and reduce the number of centers to k by keeping track of the number of points assigned to each c ∈ C and then clustering this weighted instance into k groups. What makes the sketch appealing in practice is its easy construction—each arriving point is added as a new center with some carefully chosen probability. If a new point does not make it as a center, it is assigned to the nearest existing center, and the latter’s weight is incremented by 1. Meyerson’s algorithm was initially designed for online problems, and then adapted to algorithms in the streaming computation model, where points arrive one at a time but are never deleted. To solve the sliding window problem naively, one can simply start a new sketch with every newly arriving point, but this is inefficient. To overcome these limitations, we extend the Meyerson sketch. In particular, there are two challenges that we face in sliding window models: 1. The weight of each cluster is not monotonically increasing, as points that are assigned to the cluster time out and are dropped from the window. 2. The designated center of each cluster may itself expire and be removed from the window, requiring us to pick a new representative for the cluster. Using some auxiliary bookkeeping we can augment the classic Meyerson sketch to return a weighted instance that is ε-consistent with a mapping µ whose cost is a constant factor larger than the cost of the optimal solution for the entire stream X. More precisely, Lemma 3.1. Let w be the size of the sliding window, let ε ∈ (0, 1) be a constant, and let t be the current time. Let (X, d) be a metric space and fix γ ∈ (0, 1). The augmented Meyerson algorithm computes an implicit mapping µ : X → C, and an ε-consistent weighted instance (C, ŵeight) for all substreams X[τ,t] with τ ≥ t − w, such that, with probability 1 − γ, we have: |C| ≤ 22p+8k log γ−1 log ∆ and fp(X[τ,t], C) ≤ 22p+8 OPTp(X).
The algorithm uses space O(k log γ−1 log ∆ log(M/m)(logM + logw + log ∆)) and stores the cost of the consistent mapping, f(X,µ), and allows a 1 + ε approximation to the cost of the ε-consistent mapping, denoted by f̂(X[τ,t], µ). This is the ε-consistent mapping that is computed by the augmented Meyerson algorithm. In Section 2, M and m are defined as the upper and lower bounds on the cost of the optimal solution. Note that when M/m and ∆ are polynomial in w,5 the above space bound is O(k log γ−1 log3(w)). 3.2 Sliding Window Algorithm In the previous section we have shown that we can use the Meyerson sketch to retain enough information to output a solution using the points in the active window whose cost is comparable to the cost of the optimum computed on the whole stream. However, we need an algorithm that is competitive with the cost of the optimum solution computed solely on the elements in the sliding window. We give some intuition behind our algorithm before delving into the details. Suppose we had a good guess on the value of the optimum solution, λ∗, and imagine splitting the input x1, x2, . . . , xt into blocks A1 = {x1, x2, . . . , xb1}, A2 = {xb1+1, . . . , xb2}, etc., with the constraints that (i) each block has optimum cost smaller than λ∗, and (ii) each block is maximal, that is, adding the next element to the block would cause its cost to exceed λ∗. It is easy to see that any sliding window whose optimal solution has cost λ∗ overlaps at most two blocks. The idea behind our algorithm is that, if we start an augmented Meyerson sketch in each block and obtain a good mapping for the suffix of the first of these two blocks, we can recover a good approximate solution for the sliding window. We now show how to formalize this idea. During the execution of the algorithm, we first discretize the possible values of the optimum solution, and run a set of sketches for each value of λ. Specifically, for each guess λ, we run Algorithm 1 to compute the AugmentedMeyerson for two consecutive substreams, Aλ and Bλ, of the input stream X. (The full pseudocode of AugmentedMeyerson is available in the supplementary material.) When a new point, x, arrives we check whether the k-clustering cost of the solution computed on the sketch after adding x to Bλ exceeds λ. If not, we add it to the sketch for Bλ; if so, we reset the Bλ substream to x, and rename the old sketch of Bλ as Aλ. Thus the algorithm maintains two sketches, on consecutive subintervals. Notice that the cost of each sketch is at most λ, and each sketch is grown to be maximal before being reset. We remark that to convert the Meyerson sketch to a k-clustering solution, we need to run a k-clustering algorithm on the weighted instance given by the sketch. Since the problem is NP-hard, let ALG denote any ρ-approximate algorithm, such as the one by [36]. Let S(Z) = (Y (Z),weight(Z)) denote the augmented Meyerson sketch built on a (sub)stream Z, with Y (Z) as the centers, and weight(Z) as the (approximate) weight function. We denote by ALG(S(Z)) the solution obtained by running ALG over the weighted instance S(Z). Let f̂p(S(Z),ALG(S(Z))) be the estimated cost of the solution ALG(S(Z)) over the stream Z obtained by the sketch S(Z). We show that we can implement a function f̂p that operates only on the information in the augmented Meyerson sketch S(Z) and gives a β ∈ O(ρ) approximation to the cost on the unabridged input. Lemma 3.2 (Approximate solution and approximate cost from a sketch).
Using an approximation algorithm ALG, from the augmented Meyerson sketch S(Z), with probability ≥ 1 − γ, we can output a solution ALG(S(Z)) and an estimate f̂p(S(Z),ALG(S(Z))) of its cost s.t. fp(Z,ALG(S(Z))) ≤ f̂p(S(Z),ALG(S(Z))) ≤ β(ρ)fp(Z,OPT(Z)) for a constant β(ρ) ≤ 23p+6ρ depending only on the approximation factor ρ of ALG. 5We note that prior work [17, 25] makes similar assumptions to get a bound depending on w. Algorithm 1 Meyerson Sketches, ComputeSketches(X,w, λ,m,M,∆) 1: Input: A sequence of points X = x0, x1, x2, . . . , xn. The size of the window w. Cost threshold λ. A lower bound m and upper bound M of the cost of the optimal solution and upper bound on distances ∆. 2: Output: Two sketches for the stream S1 and S2. 3: S1 ← AugmentedMeyerson(∅, w,m,M,∆); S2 ← AugmentedMeyerson(∅, w,m,M,∆) 4: Aλ ← ∅; Bλ ← ∅ (Recall that Aλ, Bλ are sets and S1 and S2 the corresponding sketches. Note that the content of the sets is not stored explicitly.) 5: for x ∈ X do 6: Let Stemp be computed by AugmentedMeyerson(Bλ ∪ {x}, w,m,M,∆). (Note: it can be computed by adding x to a copy of the sketch maintained by S2) 7: if f̂p(Stemp,ALG(Stemp)) ≤ λ then 8: Add x to the stream of the sketch S2. (Bλ ← Bλ ∪ {x}, S2 ← AugmentedMeyerson(Bλ, w,m,M,∆)) 9: else 10: S1 ← S2; S2 ← AugmentedMeyerson({x}, w,m,M,∆). (Aλ ← Bλ; Bλ ← {x}) 11: end if 12: end for 13: Return (S1, S2, and start and end times of Aλ and Bλ) Composition of sketches from sub-streams Before presenting the global sliding window algorithm that uses these pairs of sketches, we introduce some additional notation. Let S(Z) be the augmented Meyerson sketch computed over the stream Z. Let Suffixτ (S(Z)) denote the sketch obtained from a sketch S for the points that arrived after τ. This can be done using the operations defined in the supplementary material. We say that a time τ is contained in a substream A if A contains elements inserted on or after time τ. Finally we define Aτ as the suffix of A that contains elements starting at time τ. Given two sketches S(A) and S(B) computed over two disjoint substreams A,B, let S(A) ∪ S(B) be the sketch obtained by joining the centers of S(A) and S(B) (and summing their respective weights) in a single instance. We now prove a key property of the augmented Meyerson sketches we defined before. Lemma 3.3 (Composition with a Suffix of stream). Given two substreams A,B (with possibly B = ∅) and a time τ in A, let ALG be a constant approximation algorithm for the k-clustering problem. Then, if OPTp(A) ≤ O(OPTp(Aτ ∪ B)), with probability ≥ 1 − O(γ), we have fp(Aτ ∪B,ALG(Suffixτ (S(A)) ∪ S(B))) ≤ O(OPTp(Aτ ∪B)). The main idea of the proof is to show that Suffixτ (S(A)) ∪ S(B) is ε-consistent with a good mapping from Aτ ∪ B, and then, by using a technique similar to Lemma 2.1, to show that we can compute a constant approximation from an ε-consistent sketch. Algorithm 2 Our main algorithm. Input: X,m,M,∆, approx. factor of ALG (β) and δ. 1: Λ← {m, (1 + δ)m, . . . , 2pβ(1 + δ)M} 2: for λ ∈ Λ do 3: Sλ,1, Sλ,2 ← ComputeSketches(X,w, λ,m,M,∆) 4: end for 5: if Bλ∗ = W for some λ∗ then return ALG(Sλ∗,2) 6: λ∗ ← min({λ : Aλ ⊈ W}) 7: τ ← max(|X| − w, 1) 8: if W ∩ Aλ∗ ≠ ∅ then return ALG(Suffixτ (Sλ∗,1) ∪ Sλ∗,2) 9: else return ALG(Suffixτ (Sλ∗,2)) Final algorithm. We can now present the full algorithm in Algorithm 2. As mentioned before, we run multiple copies of ComputeSketches in parallel, for geometrically increasing values of λ. For each value of λ, we maintain the pair of sketches over the stream X.
Finally, we compute the centers using such sketches. If we are lucky and there exists a λ∗ for which the substream Bλ∗ is precisely the sliding window W, we use the corresponding sketch and return ALG(Sλ∗,2). Otherwise, we find the smallest λ∗ for which Aλ is not a subset of W. We then use the pair of sketches associated with Aλ∗ and Bλ∗, combining the sketch of the suffix of Aλ∗ that intersects with W, and the sketch on Bλ∗. The main result is that this algorithm provides a constant approximation of the k-clustering problem, for any p ≥ 1, with probability at least 1 − γ, using space linear in k and logarithmic in other parameters. The total running time of the algorithm depends on the complexity of ALG. Let T (n, k) be the complexity of solving an instance of k-clustering on n points using ALG. Theorem 3.4. With probability 1 − γ, Algorithm 2 outputs an O(1)-approximation for the sliding window k-clustering problem using space: O ( k log(∆)(log(∆) + log(w) + log(M)) log2(M/m) log(γ−1 log(M/m)) ) and total update time O(T (k log(∆), k) log2(M/m) log(γ−1 log(M/m))(log(∆) + log(w) + log(M))). We remark that if M and ∆ are polynomial in w, then the total space is O(k log4 w log(logw/γ)) and the total update time is O(T (k logw, k) log3(w) log(logw/γ)). The main component in the constant approximation factor in the statement of Theorem 3.4 comes from the 23p+5ρ approximation for the insertion-only case [43]. Here p is the norm, and ρ is the approximation factor of the offline algorithm. Given the composition operation in our analysis, in addition to applying the triangle inequality and some other steps, we end up with an approximation factor ≈ 28p+6ρ. We do not aim to optimize this approximation factor; improving it could be an interesting direction for future work. 4 Empirical Evaluation We now describe the methodology of our empirical evaluation before presenting our experimental results. We report only the main results in this section; more details on the experiments and results are in the supplementary material. Our code is available open-source on github6. All datasets used are publicly available. Datasets. We used 3 real-world datasets from the UCI Repository [28] that have been used in previous experiments on k-clustering for data stream settings: SKINTYPE [12], n = 245057, d = 4, SHUTTLE, n = 58000, d = 9, and COVERTYPE [13], n = 581012, d = 54. Consistent with previous work, we stream all points in the natural order (as they are stored in the dataset). We also use 4 publicly available synthetic datasets from [31] (the S-Set series) that have ground-truth clusters. We use 4 datasets (s1, s2, s3, s4) that are increasingly harder to cluster and each have k = 15 ground-truth clusters. Consistent with previous work, we stream the points in random order (as they are sorted by ground truth in the dataset). In all datasets, we pre-process the points to have zero mean and unit standard deviation in each dimension. All experiments use the Euclidean distance, and we focus on the K-MEANS objective (p = 2), which we use as the cost. We use k-means++ [4] as the solver ALG to extract the solution from our sketch. Parameters. We vary the number of centers, k, from 4 to 40 and the window size, w, from 10,000 to 40,000. We experiment with δ = [0.1, 0.2] and set ε = 0.05 (empirically the results are robust to a wide range of settings of ε). Metrics. We focus on three key metrics: cost of the clustering, maximum space requirement of our sketch, and average running time of the update function.
To give an implementation independent view into space and update time, we report as space usage the number of points stored, and as update time the number of distance evaluations. All of the other costs are negligible by comparison. Baselines. We consider the following baselines. Batch K-Means++: We use k-means++ over the entire window as a proxy for the optimum, since the latter is NP-hard to compute. At every insertion, we report the best solution over 10 runs of k-means++ on the window. Observe that this is inefficient as it requires Ω(w) space and Ω(kw) run time per update. Sampling: We maintain a random sample of points from the active window, and then run k-means++ on the sample. This allows us to evaluate the performance of a baseline, at the same space cost of our algorithm. SODA16: We also evaluated the only previously published algorithm for this setting in [17]. We note that we made some practical modifications to further improve the performance of our algorithm which we report in the supplementary material. 6https://github.com/google-research/google-research/tree/master/sliding_window_ clustering/ Comparison with previous work. We begin by comparing our algorithm to the previously published algorithm of [17]. The baseline in this paragraph is SODA16 algorithm in [17]. We confirm empirically that the memory use of this baseline already exceeds the size of the sliding window for very small k, and that it is significantly slower than our algorithm. Figure 1 shows the space used by our algorithm and by the baseline over the COVERTYPE dataset for a |W | = 10,000 and different k. We confirm that our algorithm’s memory grows linearly in k while the baseline grows super-linearly in k and that for k > 10 the baseline costs more than storing the entire window. In Table 1 we show that our algorithm is significantly faster and uses less memory than the SODA16 already for small values of k. In the supplementary material we show that the difference is even larger for bigger values of k. Given the inefficiency of the SODA16 baseline, for the rest of the section we do not run experiments with it. Cost of the solution. We now take a look at how the cost of the solution evolves over time during the execution of our algorithm. In Figure 2 we plot the cost of the solution obtained by our algorithm (Sketch), our proxy for the optimum (KM++) and the sampling baseline (Sampling Baseline) on the COVERTYPE dataset. The sampling baseline is allowed to store the same number of points stored by our algorithm (at the same point in time). We use k = 20, |W | = 40,000, and δ = 0.2. The plot is obtained by computing the cost of the algorithms every 100 timesteps. Observe that our algorithm closely tracks that of the offline algorithm result, even as the cost fluctuates up and down. Our algorithm’s cost is always close to that of the off-line algorithm and significantly better than the random sampling baseline Update time and space tradeoff. We now investigate the time and space tradeoff of our algorithm. As a baseline we look at the cost required simply to recompute the solution using k-means++ at every time step. In Table 2 (δ = 0.2) we focus on the COVERTYPE dataset, the other results are similar. Table 2 shows the percent of the sliding window data points stored (Space) and the percent of update time (Time) of our algorithm vs a single run of k-means++ over the window. 
In the supplementary material we show that the savings become larger (at parity of k) as |W | grows and that we always store a small fraction of the window, providing order-of-magnitude speed ups (e.g., we use < 0.5% of the time of the baseline for k = 10, |W | = 40,000). Here the baseline is the k-means++ algorithm. Recovering ground-truth clusters. We evaluated the accuracy of the clusters produced by our algorithm on a dataset with ground-truth clusters using the well known V-Measure accuracy definition for clustering [51]. We observe that on all datasets our algorithm performs better than the sampling baseline and in line with the offline k-means++. For example, on the s1 our algorithm gets V-Measure of 0.969, while k-means++ gets 0.969 and sampling gets 0.933. The full results are available in the supplementary material. 5 Conclusion We present the first algorithms for the k-clustering problem on sliding windows with space linear in k. Empirically we observe that the algorithm performs much better than the analytic bounds, and it allows to store only a small fraction of the input. A natural avenue for future work is to give a tighter analysis, and reduce this gap between theory and practice. Broader Impact Clustering is a fundamental unsupervised machine learning problem that lies at the core of multiple real-world applications. In this paper, we address the problem of clustering in a sliding window setting. As we argued in the introduction, the sliding window model allows us to discard old data which is a core principle in data retention policies. Whenever a clustering algorithm is used on user data it is important to consider the impact it may have on the users. In this work we focus on the algorithmic aspects of the problem and we do not address other considerations of using clustering that may be needed in practical settings. For instance, there is a burgeoning literature on fairness considerations in unsupervised methods, including clustering, which further delves into these issues. We refer to this literature [22, 40, 11] for addressing such issues. Funding Transparency Statement No third-party funding has been used for this research.
1. What is the focus and contribution of the paper regarding k-clustering problems? 2. What are the strengths of the proposed approach, particularly in terms of its application in the sliding window model? 3. What are the weaknesses of the paper, especially regarding the significance of its improvements? 4. Do you have any concerns or suggestions regarding the use of coresets in the proposed method? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions I would like to mention that I have reviewed this paper in the past (submitted to another venue). - Consider following definitions: - k-clustering problems (k-means/median/center) problems are well known. - Sketching is an algorithmic technique where a small summary of the input Data is maintained for approximation some specific property of the data. - Sliding window algorithm is an algorithm in the streaming model such that the algorithm gives guarantee for the last w items of data seen in the stream. - The paper gives sketching algorithm in the sliding window model. Previous known results gave either sketching algorithm for the entire dataset seen so far in the stream or gave algorithm with worse dependency on the space requirement (k^3 versus k). Here are some comments about the writeup: 1. Lines 60-71: It will be nice to also have the comparison with previous work with respect to running time in this paragraph. 2. You maintain two sketches during the execution of the algorithm that suffices for finding a good set of centers for the sliding window. Do you think maintaining coresets similarly might also work? Coresets are powerful objects in the context of k-means/median clustering. It may be worthwhile adding a discussion in case you have given this some thought. 3. Any comment/discussion on the tightness of the approximation bounds obtained would be nice even though I understand that obtaining the tightest possible approximation ratio is not the main agenda of this work. 4. It may be better to state clearly what m and M are in Lemma 3.1. 5. Line 186: "Note that when M and ...". Did you mean M/m instead of M? I have read author rebuttal. There is no change in my review post rebuttal. Strengths - Meyerson’s sketching technique is a simple and practical algorithm in the context of k-clustering problems. The paper extends this to the sliding window model. People interested in using sketching in the sliding window model should find this interesting. Weaknesses There are settings where improvement from k^3 to k is significant. I am not sure if this improvement is interesting in most settings.
NIPS
Title Multivariate Convolutional Sparse Coding for Electromagnetic Brain Signals Abstract Frequency-specific patterns of neural activity are traditionally interpreted as sustained rhythmic oscillations, and related to cognitive mechanisms such as attention, high level visual processing or motor control. While alpha waves (8–12 Hz) are known to closely resemble short sinusoids, and thus are revealed by Fourier analysis or wavelet transforms, there is an evolving debate that electromagnetic neural signals are composed of more complex waveforms that cannot be analyzed by linear filters and traditional signal representations. In this paper, we propose to learn dedicated representations of such recordings using a multivariate convolutional sparse coding (CSC) algorithm. Applied to electroencephalography (EEG) or magnetoencephalography (MEG) data, this method is able to learn not only prototypical temporal waveforms, but also associated spatial patterns so their origin can be localized in the brain. Our algorithm is based on alternated minimization and a greedy coordinate descent solver that leads to state-of-the-art running time on long time series. To demonstrate the implications of this method, we apply it to MEG data and show that it is able to recover biological artifacts. More remarkably, our approach also reveals the presence of non-sinusoidal mu-shaped patterns, along with their topographic maps related to the somatosensory cortex. 1 Introduction Neural activity recorded via measurements of the electrical potential over the scalp by electroencephalography (EEG), or magnetic fields by magnetoencephalography (MEG), can be used to investigate human cognitive processes and certain pathologies. Such recordings consist of dozens to hundreds of simultaneously recorded signals, for durations going from minutes to hours. In order to describe and quantify neural activity in such multi-gigabyte data, it is classical to decompose the signal in predefined representations such as the Fourier or wavelet bases. It leads to canonical frequency bands such as theta (4–8 Hz), alpha (8–12 Hz), or beta (15–30 Hz) (Buzsaki, 2006), in which signal power can be quantified. While such linear analyses have had significant impact in neuroscience, there is now a debate regarding whether neural activity consists more of transient bursts of isolated events rather than rhythmically sustained oscillations (van Ede et al., 2018). To study the transient events and the morphology of the waveforms (Mazaheri and Jensen, 2008; Cole and Voytek, 2017), which matter in cognition and for our understanding of pathologies (Jones, 2016; Cole et al., 2017), there is a clear need to go beyond traditionally employed signal processing methodologies (Cole and Voytek, 2018). For instance, a classic Fourier analysis fails to distinguish alpha-rhythms from mu-rhythms, which have the same peak frequency at around 10 Hz, but whose waveforms are different (Cole and Voytek, 2017; Hari and Puce, 2017). The key to many modern statistical analyses of complex data such as natural images, sounds or neural time series is the estimation of data-driven representations. Dictionary learning is one family 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. of techniques, which consists in learning atoms (or patterns) that offer sparse data approximations. When working with long signals in which events can happen at any instant, one idea is to learn shift-invariant atoms. 
They can offer better signal approximations than generic bases such as Fourier or wavelets, since they are not limited to narrow frequency bands. Multiple approaches have been proposed to solve this shift-invariant dictionary learning problem, such as MoTIF (Jost et al., 2006), the sliding window matching (Gips et al., 2017), the adaptive waveform learning (Hitziger et al., 2017), or the learning of recurrent waveform (Brockmeier and Príncipe, 2016), yet they all have several limitations, as discussed in Jas et al. (2017). A more popular approach, especially in image processing, is the convolutional sparse coding (CSC) model (Jas et al., 2017; Pachitariu et al., 2013; Kavukcuoglu et al., 2010; Zeiler et al., 2010; Heide et al., 2015; Wohlberg, 2016b; Šorel and Šroubek, 2016; Grosse et al., 2007; Mailhé et al., 2008). The idea is to cast the problem as an optimization problem, representing the signal as a sum of convolutions between atoms and activation signals. The CSC approach has been quite successful in several fields such as computer vision (Kavukcuoglu et al., 2010; Zeiler et al., 2010; Heide et al., 2015; Wohlberg, 2016b; Šorel and Šroubek, 2016), biomedical imaging (Jas et al., 2017; Pachitariu et al., 2013), and audio signal processing (Grosse et al., 2007; Mailhé et al., 2008), yet it was essentially developed for univariate signals. Interestingly, images can be multivariate such as color or hyper-spectral images, yet most CSC methods only consider gray scale images. To the best of our knowledge, the only reference to multivariate CSC is Wohlberg (2016a), where the author proposes two models well suited for 3-channel images. In the case of EEG and MEG recordings, neural activity is instantaneously and linearly spread across channels, due to Maxwell’s equations (Hari and Puce, 2017). The same temporal patterns are reproduced on all channels with different intensities, which depend on each activity’s location in the brain. To exploit this property, we propose to use a rank-1 constraint on each multivariate atom. This idea has been mentioned in (Barthélemy et al., 2012, 2013), but was considered less flexible than the full-rank model. Moreover, their proposed optimization techniques are not specific to shift-invariant models, and not scalable to long signals. Multivariate shift-invariant rank-1 decomposition of EEG has also been considered with matching pursuit (Durka et al., 2005), but without learning the atoms, which are fixed Gabor filters. Contribution In this study, we develop a multivariate model for CSC, using a rank-1 constraint on the atoms to account for the instantaneous spreading of an electromagnetic source over all the channels. We also propose efficient optimization strategies, namely a locally greedy coordinate descent (LGCD, Moreau et al. 2018), and precomputation steps for faster gradient computations. We provide multiple numerical evaluations of our method, which show the highly competitive running time on both univariate and multivariate models, even when working with hundreds of channels. We also demonstrate the estimation performance of the multivariate model by recovering patterns on low signal-to-noise ratio (SNR) data. Finally, we illustrate our method with atoms learned on multivariate MEG data, that thanks to the rank-1 model can be localized in the brain for clinical or cognitive neuroscience studies. Notation A multivariate signal with T time points in RP is noted X ∈ RP×T , while x ∈ RT is a univariate signal. 
We index time with brackets X[t] ∈ Rp, while Xi ∈ RT is the channel i in X . For a vector v ∈ RP we define the `q norm as ‖v‖q = ( ∑ i |vi|q) 1/q, and for a multivariate signal X ∈ RP×T , we define the time-wise `q norm as ‖X‖q = ( ∑T t=1 ‖X[t]‖qq)1/q. The transpose of a matrix U is denoted by U>. For a multivariate signal X ∈ RP×T , X is obtained by reversal of the temporal dimension, i.e., X [t] = X[T + 1− t]. The convolution of two signals z ∈ RT−L+1 and d ∈ RL is denoted by z ∗ d ∈ RT . For D ∈ RP×L, z ∗D is obtained by convolving every row of D by z. For D′ ∈ RP×L, D ∗̃ D′ ∈ R2L−1 is obtained by summing the convolution between each row of D and D′: D ∗̃ D′ = ∑P p=1Dp ∗D′p . We note [a, b] the set of real numbers between a and b, and Ja, bK the set of integers between a and b. We define T̃ as T − L+ 1. 2 Multivariate Convolutional Sparse Coding In this section, we introduce the convolutional sparse coding (CSC) models used in this work. We focus on 1D-convolution, although these models can be naturally extended to higher order signals such as images by using the proper convolution operators. Univariate CSC The CSC formulation adopted in this work follows the shift-invariant sparse coding (SISC) model from Grosse et al. (2007). It is defined as follows: min {dk}k,{znk }k,n N∑ n=1 1 2 ∥∥∥∥∥xn − K∑ k=1 znk ∗ dk ∥∥∥∥∥ 2 2 + λ K∑ k=1 ‖znk ‖1 , s.t. ‖dk‖22 ≤ 1 and znk ≥ 0 , (1) where {xn}Nn=1 ⊂ RT areN observed signals, λ > 0 is the regularization parameter, {dk}Kk=1 ⊂ RL are the K temporal atoms we aim to learn, and {znk }Kk=1 ⊂ RT̃ are K signals of activations, a.k.a. the code associated with xn. This model assumes that the coding signals znk are sparse, in the sense that only few entries are nonzero in each signal. In this work, we also assume that the entries of znk are positive, which means that the temporal patterns are present each time with the same polarity. Multivariate CSC The multivariate formulation uses an additional dimension on the signals and on the atoms, since the signal is recorded over P channels (mapping to space locations): min {Dk}k,{znk }k,n N∑ n=1 1 2 ∥∥∥∥∥Xn − K∑ k=1 znk ∗Dk ∥∥∥∥∥ 2 2 + λ K∑ k=1 ‖znk ‖1, s.t. ‖Dk‖22 ≤ 1 and znk ≥ 0 , (2) where {Xn}Nn=1 ⊂ RP×T are N observed multivariate signals, {Dk}Kk=1 ⊂ RP×L are the spatiotemporal atoms, and {znk }Kk=1 ⊂ RT̃ are the sparse activations associated with Xn. Multivariate CSC with rank-1 constraint This model is similar to the multivariate case but it adds a rank-1 constraint on the dictionary, Dk = ukv > k ∈ RP×L, with uk ∈ RP being the pattern over channels and vk ∈ RL the pattern over time. The optimization problem boils down to: min {uk}k,{vk}k,{znk }k,n N∑ n=1 1 2 ∥∥∥∥∥Xn − K∑ k=1 znk ∗ (ukv>k ) ∥∥∥∥∥ 2 2 + λ K∑ k=1 ‖znk ‖1 , s.t. ‖uk‖22 ≤ 1 , ‖vk‖22 ≤ 1 and znk ≥ 0 . (3) The rank-1 constraint is consistent with Maxwell’s equations and the physical model of electrophysiological signals like EEG or MEG, where each source is linearly spread instantaneously over channels with a constant topographic map (Hari and Puce, 2017). Using this assumption, one aims to improve the estimation of patterns under the presence of independent noise over channels. Moreover, it can help separating overlapped sources which are inherently rank-1 but whose sum is generally of higher rank. Finally, as explained below, several computations can be factorized to speed up computations. 
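As a concrete rendering of the rank-1 model in (3), the function below evaluates its objective for a single multivariate signal; it is a didactic sketch under our own naming and shape conventions, not the optimized implementation evaluated in the paper.

```python
import numpy as np

def rank1_csc_objective(X, u, v, z, lam):
    """Objective of the rank-1 multivariate CSC model for one signal.

    X : (P, T) signal, u : (K, P) spatial patterns, v : (K, L) temporal patterns,
    z : (K, T - L + 1) nonnegative activations, lam : regularization strength.
    """
    P, T = X.shape
    K, L = v.shape
    X_hat = np.zeros((P, T))
    for k in range(K):
        # z_k * (u_k v_k^T): convolve the activation with the temporal pattern,
        # then spread the result across channels with the spatial pattern u_k.
        X_hat += np.outer(u[k], np.convolve(z[k], v[k]))
    return 0.5 * np.sum((X - X_hat) ** 2) + lam * np.sum(np.abs(z))
```

The general multivariate model (2) is recovered by replacing the rank-1 term np.outer(u[k], np.convolve(z[k], v[k])) with a channel-wise convolution of z_k with a full-rank atom D_k in R^{P x L}.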
Noise model  Note that our models use Gaussian noise, whereas one can also use an alpha-stable noise distribution to better handle strong artifacts, as proposed by Jas et al. (2017). Importantly, our contribution is orthogonal to their work, and one can easily extend the multivariate models to alpha-stable noise distributions, by using their EM algorithm and by turning the ℓ2 loss in (3) into a weighted ℓ2 loss. Also, our experiments used artifact-free datasets, so the Gaussian noise model is appropriate.

3  Model estimation

Problems (1), (2) and (3) share the same structure. They are convex in each variable but not jointly convex. The resolution is done using a block coordinate descent approach which alternately minimizes the objective function over one block of the variables. In this section, we describe this approach for the multivariate CSC with rank-1 constraint (3), updating iteratively the activations z_k^n, the spatial patterns u_k, and the temporal patterns v_k.

3.1  Z-step: solving for the activations

Given K fixed atoms D_k and a regularization parameter λ > 0, the Z-step aims to retrieve the NK activation signals z_k^n ∈ R^{T̃} associated with the signals X^n ∈ R^{P×T} by solving the following ℓ1-regularized optimization problem:

$$\min_{\{z_k^n\}_{k,n},\ z_k^n \ge 0} \; \frac{1}{2} \Big\| X^n - \sum_{k=1}^{K} z_k^n * D_k \Big\|_2^2 + \lambda \sum_{k=1}^{K} \|z_k^n\|_1 \;. \qquad (4)$$

This problem is convex in z_k^n and can be efficiently solved. In Chalasani et al. (2013), the authors proposed an algorithm based on FISTA (Beck and Teboulle, 2009) to solve it. Bristow et al. (2013) introduced a method based on ADMM (Boyd et al., 2011) to compute the activation signals z_k^n efficiently. These two methods are detailed and compared by Wohlberg (2016b), who also made use of the fast Fourier transform (FFT) to accelerate the computations. Recently, Jas et al. (2017) proposed to use L-BFGS (Byrd et al., 1995) to improve on first-order methods. Finally, Kavukcuoglu et al. (2010) adapted the greedy coordinate descent (GCD) to solve this convolutional sparse coding problem. However, for long signals, these techniques can be quite slow due to the computation of the gradient (FISTA, ADMM, L-BFGS) or the choice of the best coordinate to update in GCD, which are operations that scale linearly in T. A way to alleviate this limitation is to use a locally greedy coordinate descent (LGCD) strategy, presented recently in Moreau et al. (2018). Note that problem (4) is independent for each signal X^n. The computation of each z^n can thus be parallelized, independently of the technique selected to solve the optimization (Jas et al., 2017). Therefore, we omit the superscript n in the following subsection to simplify the notation.

Algorithm 1: Locally greedy coordinate descent (LGCD)
  Input: signal X, atoms D_k, number of segments M, stopping parameter ε > 0, z_k initialization
  Initialize β_k[t] with (5).
  repeat
    for m = 1 to M do
      Compute z′_k[t] = max((β_k[t] − λ) / ‖D_k‖₂², 0) for (k, t) ∈ C_m
      Choose (k₀, t₀) = argmax_{(k,t) ∈ C_m} |z_k[t] − z′_k[t]|
      Update β with (6)
      Update the current point estimate: z_{k₀}[t₀] ← z′_{k₀}[t₀]
  until ‖z − z′‖_∞ < ε

Coordinate descent (CD)  The key idea of coordinate descent is to update our estimate of the solution one coordinate z_k[t] at a time. For (4), it is possible to compute the optimal value z′_k[t] of one coordinate z_k[t] given that all the others are fixed.
Indeed, problem (4) restricted to one coordinate has a closed-form solution given by:

$$z'_k[t] = \max\left( \frac{\beta_k[t] - \lambda}{\|D_k\|_2^2},\; 0 \right), \quad \text{with} \quad \beta_k[t] = \left[ \overline{D_k} \,\tilde{*}\, \Big( X - \sum_{l=1}^{K} z_l * D_l + z_k[t]\, e_t * D_k \Big) \right][t] \qquad (5)$$

where e_t ∈ R^{T̃} is the canonical basis vector with value 1 at index t and 0 elsewhere. When updating the coefficient z_{k₀}[t₀] to the value z′_{k₀}[t₀], β is updated with:

$$\beta_k^{(q+1)}[t] = \beta_k^{(q)}[t] + \big( \overline{D_{k_0}} \,\tilde{*}\, D_k \big)[t - t_0]\, \big( z_{k_0}[t_0] - z'_{k_0}[t_0] \big), \quad \forall (k, t) \ne (k_0, t_0) \;. \qquad (6)$$

The term $(\overline{D_{k_0}} \,\tilde{*}\, D_k)[t - t_0]$ is zero for |t − t₀| ≥ L. Thus, only K(2L − 1) coefficients of β need to be changed (Kavukcuoglu et al., 2010). At each iteration, the CD algorithm updates one coordinate to this optimal value. The coordinate to update can be chosen with different strategies, such as the cyclic strategy which iterates over all coordinates (Friedman et al., 2007), the randomized CD (Nesterov, 2010; Richtárik and Takáč, 2014) which chooses a coordinate at random at each iteration, or the greedy CD (Osher and Li, 2009) which chooses the coordinate farthest from its optimal value.

Locally greedy coordinate descent (LGCD)  The choice of a coordinate selection strategy results from a tradeoff between the computational cost of each iteration and the improvement it provides. For cyclic and randomized strategies, the iteration complexity is O(KL), as the coordinate selection can be performed in constant time. The greedy selection of a coordinate is more expensive, as it is linear in the signal length, O(KT̃). However, greedy selection is more efficient iteration-wise (Nutini et al., 2015). Moreau et al. (2018) proposed to consider a locally greedy selection strategy for CD. The coordinate to update is chosen greedily in one of M subsegments of the signal, i.e., at iteration q, the selected coordinate is:

$$(k_0, t_0) = \underset{(k,t) \in C_m}{\arg\max} \; |z_k[t] - z'_k[t]| \;, \quad m \equiv q\ (\mathrm{mod}\ M) + 1 \;, \qquad (7)$$

with $C_m = ⟦1, K⟧ \times ⟦(m-1)\tilde T / M,\; m \tilde T / M⟧$. With this strategy, the coordinate selection complexity is linear in the length of the considered subsegment, O(KT̃/M). By choosing M = ⌊T̃/(2L − 1)⌋, the complexity of each update is the same as the complexity of random and cyclic coordinate selection, O(KL). We detail the steps of LGCD in Algorithm 1. This algorithm is particularly efficient when the z_k are sparse. Indeed, in this case, only a few coefficients need to be updated in the signal, resulting in a low number of iterations. Computational complexities are detailed in Table 1.

Relation with matching pursuit (MP)  Note that the greedy CD is strongly related to the well-known matching pursuit (MP) algorithm (Locatello et al., 2018). The main difference is that MP solves a slightly different problem, where the ℓ1 regularization is replaced with an ℓ0 constraint. Therefore, the size of the support is a fixed parameter in MP, whereas it is controlled by the regularization parameter λ in our case. In terms of algorithm, both methods update one coordinate at a time selected greedily, but MP does not apply the soft-thresholding in (5).
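Before moving on to the D-step, the Z-step updates (5)–(7) can be illustrated with a deliberately naive NumPy sketch for a single univariate signal. This is not the authors' implementation: β is recomputed from scratch at every pass instead of being maintained with the O(KL) update (6), so it trades speed for clarity, and all names (lgcd_z_step, n_segments, ...) are our own.

```python
import numpy as np


def lgcd_z_step(x, D, reg, n_segments=10, max_iter=1000, tol=1e-8):
    """Naive LGCD solver for the univariate Z-step (problem (4), one signal).

    x : array, shape (T,)        observed signal
    D : array, shape (K, L)      temporal atoms d_k
    reg : float                  regularization parameter lambda
    Returns z : array, shape (K, T - L + 1), non-negative activations.
    """
    K, L = D.shape
    T = x.shape[0]
    T_valid = T - L + 1
    z = np.zeros((K, T_valid))
    norms = (D ** 2).sum(axis=1)                     # ||d_k||_2^2
    bounds = np.linspace(0, T_valid, n_segments + 1).astype(int)

    for _ in range(max_iter):
        max_diff = 0.0
        for m in range(n_segments):                  # loop over segments C_m
            # beta_k[t], recomputed naively instead of using update (6)
            residual = x - sum(np.convolve(z[k], D[k]) for k in range(K))
            beta = np.array([np.correlate(residual, D[k], mode="valid")
                             for k in range(K)]) + z * norms[:, None]
            z_new = np.maximum((beta - reg) / norms[:, None], 0.0)   # eq. (5)
            lo, hi = bounds[m], bounds[m + 1]
            diff = np.abs(z[:, lo:hi] - z_new[:, lo:hi])
            k0, t0 = np.unravel_index(diff.argmax(), diff.shape)     # eq. (7)
            max_diff = max(max_diff, diff[k0, t0])
            z[k0, lo + t0] = z_new[k0, lo + t0]      # greedy coordinate update
        if max_diff < tol:                           # stopping criterion
            break
    return z
```

A multivariate version would only change how the residual and correlations are computed (summing over channels with the ∗̃ operator); the selection and soft-thresholding logic stays the same.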
3.2  D-step: solving for the atoms

Given KN fixed activation signals z_k^n ∈ R^{T̃}, associated with the signals X^n ∈ R^{P×T}, the D-step aims to update the K spatial patterns u_k ∈ R^P and the K temporal patterns v_k ∈ R^L, by solving:

$$\min_{\|u_k\|_2 \le 1,\ \|v_k\|_2 \le 1} E \;, \quad \text{where} \quad E \triangleq \sum_{n=1}^{N} \frac{1}{2} \Big\| X^n - \sum_{k=1}^{K} z_k^n * (u_k v_k^\top) \Big\|_2^2 \;. \qquad (8)$$

Problem (8) is convex in each block of variables {u_k}_k and {v_k}_k, but not jointly convex. Therefore, we optimize first {u_k}_k, then {v_k}_k, using in both cases a projected gradient descent with an Armijo backtracking line-search (Wright and Nocedal, 1999) to find a good step size. These steps are detailed in Algorithm A.1.

Gradients with respect to u_k and v_k  The gradient of E with respect to {u_k}_k and {v_k}_k can be computed using the chain rule. First, we compute the gradient with respect to a full atom D_k = u_k v_k^⊤ ∈ R^{P×L}:

$$\nabla_{D_k} E = \sum_{n=1}^{N} \overline{z_k^n} * \Big( X^n - \sum_{l=1}^{K} z_l^n * D_l \Big) = \Phi_k - \sum_{l=1}^{K} \Psi_{k,l} * D_l \;, \qquad (9)$$

where we reordered this expression to define Φ_k ∈ R^{P×L} and Ψ_{k,l} ∈ R^{2L−1}. These terms are both constant during a D-step and can thus be precomputed to accelerate the computation of the gradients and of the cost function E. We detail these computations in the supplementary materials (see Section A.1). Computational complexities are detailed in Table 1. Note that the dependence on T is present only in the precomputations, which makes the following iterations very fast. Without precomputations, the complexity of each gradient computation in the D-step would be O(NKTLP).

3.3  Initialization

The activation sub-problem (Z-step) is regularized with an ℓ1-norm, which induces sparsity: the higher the regularization parameter λ, the higher the sparsity. Therefore, there exists a value λ_max above which the sub-problem solution is always zero (Hastie et al., 2015). As λ_max depends on the atoms D_k and on the signals X^n, its value changes after each D-step. In particular, its value might change a lot between the initialization and the first D-step. This is problematic since we cannot use a regularization λ above this initial λ_max, even though the following λ_max might be higher. The standard strategy to initialize CSC methods is to generate random atoms with Gaussian white noise. However, as these atoms generally correlate poorly with the signals, the initial value of λ_max is low compared to the following ones. For example, on the MEG dataset described later on, we found that the initial λ_max is about 1/3 of the following ones in the univariate case, with L = 32. In the multivariate case, it is even more problematic: with P = 204, we could have an initial λ_max as low as 1/20 of the following ones. To fix this problem, we propose to initialize the dictionary with random chunks of the signal, projecting each chunk onto a rank-1 approximation using a singular value decomposition. We noticed on the MEG dataset that the initial λ_max was then about the same value as the following ones, which enables the use of higher regularization parameters. We used this scheme in all our experiments.
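The random-chunk, rank-1 initialization just described could be sketched as follows. This is a hedged, minimal version written by us, not the released code: the function name, argument layout, and the choice to drop the leading singular value (since the atoms are norm-constrained anyway) are illustrative assumptions.

```python
import numpy as np


def init_rank1_dictionary(X, n_atoms, n_times_atom, seed=0):
    """Initialize (u_k, v_k) from random signal chunks projected to rank 1.

    X : array, shape (N, P, T)   multivariate signals
    Returns u : (n_atoms, P) and v : (n_atoms, n_times_atom), unit-norm rows.
    """
    rng = np.random.default_rng(seed)
    N, P, T = X.shape
    u = np.zeros((n_atoms, P))
    v = np.zeros((n_atoms, n_times_atom))
    for k in range(n_atoms):
        n = rng.integers(N)
        t0 = rng.integers(T - n_times_atom + 1)
        chunk = X[n, :, t0:t0 + n_times_atom]              # random chunk (P, L)
        U, s, Vt = np.linalg.svd(chunk, full_matrices=False)
        u[k] = U[:, 0]                                     # best rank-1 spatial map
        v[k] = Vt[0]                                       # best rank-1 temporal pattern
    return u, v
```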
4  Experiments

All numerical experiments were run using Python (Python Software Foundation, 2017) and our code is publicly available online at https://alphacsc.github.io/.

Speed performance  To illustrate the performance of our optimization strategy, we monitored its convergence speed on a real MEG dataset. The somatosensory dataset from the MNE software (Gramfort et al., 2013, 2014) contains responses to median nerve stimulation. We consider only gradiometer channels and we used the following parameters: T = 134 700, N = 2, K = 8, and L = 128. First, we compared our strategy against three state-of-the-art univariate CSC solvers available online. The first was developed by Garcia-Cardona and Wohlberg (2017) and is based on ADMM. The second and third were developed by Jas et al. (2017), and are respectively based on FISTA and L-BFGS. All solvers shared the same objective function, but as the problem is non-convex, the solvers are not guaranteed to reach the same local minima, even though we started from the same initial settings. Hence, for a fair comparison, we computed the convergence curves relative to each local minimum, and averaged them over 10 different initializations. The results, presented in Figure 1(a, b), demonstrate the competitiveness of our method for reasonable choices of λ. Indeed, a higher regularization parameter leads to sparser activations z_k^n, on which LGCD is particularly efficient. Then, we also compared our method against a multivariate ADMM solver developed by Wohlberg (2016a). As this solver was quite slow on these long signals, we limited our experiments to P = 5 channels. The results, presented in Figure 1(c, d), show that our method is faster than the competing method for large λ. More benchmarks are available in the supplementary materials.

Scaling with the number of channels  The multivariate model involves an extra dimension P, but its impact on the computational complexity of our solver is limited. Figure 2 shows the average running times of the Z-step and the D-step. Timings are normalized w.r.t. the timings for a single channel. The running times are computed using the same signals from the somatosensory dataset, with the following parameters: T = 26 940, N = 10, K = 2, L = 128. We can see that the scaling of these operations is sub-linear in P. For the Z-step, only the initial computations for the first β_k and the constants $\overline{D_k} \,\tilde{*}\, D_l$ depend linearly on P, so that the complexity increase is limited compared to the complexity of solving the optimization problem (4). For the D-step, the scaling to compute the gradients is linear in P. However, the most expensive operation here is the computation of the constants Ψ_{k,l}, which does not depend on P.

Finding patterns in low SNR signals  Since the multivariate model has access to more data, we would expect it to perform better than the univariate model, especially for low SNR signals. To demonstrate this, we compare the two models when varying the number of channels P and the SNR of the data. The original dictionary contains two temporal patterns, a square and a triangle, presented in Figure 3(a). The spatial maps are designed with a sine and a cosine, and the first channel's amplitude is forced to 1 to make sure both atoms are present even with only one channel. The signals are obtained by convolving the atoms with activation signals z_k^n, where the activation locations are sampled uniformly in ⟦1, T̃⟧ × ⟦1, K⟧ with 5% non-zero activations, and the amplitudes are uniformly sampled in [0, 1]. Then, a Gaussian white noise with variance σ is added to the signal. We fixed N = 100, L = 64 and T̃ = 640 for our simulated signals. We can see in Figure 3(a) the temporal patterns recovered for σ = 10^{-3} using only one channel and using 5 channels. While the patterns recovered with one channel are very noisy, the multivariate model with rank-1 constraint recovers the original atoms accurately. This can be expected, as the univariate model is ill-defined in this situation, where some atoms are superimposed. For the rank-1 model, as the atoms have different spatial maps, the problem is easier.
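To make this simulation setup concrete, such signals could be generated along the following lines. This is a hedged sketch, not the authors' script: the exact square/triangle shapes and the sine/cosine spatial profiles are only loosely specified in the text, so the choices below (simulate_signals, its arguments, and the waveform details) are illustrative assumptions.

```python
import numpy as np


def simulate_signals(n_trials=100, n_channels=5, n_times_valid=640,
                     n_times_atom=64, density=0.05, sigma=1e-3, seed=0):
    """Sketch of the low-SNR simulation: two rank-1 atoms (square and triangle
    waveforms, sine/cosine spatial maps with the first channel forced to 1),
    sparse non-negative activations, additive Gaussian noise of variance sigma."""
    rng = np.random.default_rng(seed)
    L, T_valid = n_times_atom, n_times_valid
    T = T_valid + L - 1

    # Temporal patterns: a square and a triangle, normalized to unit norm.
    v = np.zeros((2, L))
    v[0, L // 4: 3 * L // 4] = 1.0                        # square
    v[1] = np.concatenate([np.linspace(0, 1, L // 2),     # triangle
                           np.linspace(1, 0, L - L // 2)])
    v /= np.linalg.norm(v, axis=1, keepdims=True)

    # Spatial maps: sine and cosine profiles, first channel forced to 1.
    grid = np.linspace(0, np.pi, n_channels)
    u = np.stack([np.sin(grid), np.cos(grid)])
    u[:, 0] = 1.0

    X = np.zeros((n_trials, n_channels, T))
    for n in range(n_trials):
        for k in range(2):
            # 5% non-zero activations, amplitudes uniform in [0, 1]
            z = np.where(rng.random(T_valid) < density, rng.random(T_valid), 0.0)
            X[n] += np.outer(u[k], np.convolve(z, v[k]))
        X[n] += np.sqrt(sigma) * rng.standard_normal((n_channels, T))
    return X, u, v


X, u_true, v_true = simulate_signals()
print(X.shape)  # (100, 5, 703)
```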
Then, we evaluate the learned temporal atoms. Due to permutation and sign ambiguity, we compute the ℓ2-norm of the difference between the estimated temporal patterns v̂_k and the ground truths, v_k or −v_k, over all permutations S(K), i.e.,

$$\mathrm{loss}(\hat v) = \min_{s \in S(K)} \sum_{k=1}^{K} \min\big( \|\hat v_{s(k)} - v_k\|_2^2,\; \|\hat v_{s(k)} + v_k\|_2^2 \big) \;. \qquad (10)$$

Multiple values of λ were tested and the best loss is reported in Figure 3(b) for varying noise levels σ. We observe that, independently of the noise level, the multivariate rank-1 model outperforms the univariate one. This is true even for good SNR, as using multiple channels disambiguates the separation of overlapped patterns.

Examples of atoms in real MEG signals  We show the results of our algorithm on experimental data, using the MNE somatosensory dataset (Gramfort et al., 2013, 2014). This dataset contains MEG recordings of one patient receiving median nerve stimulations. Here we first extract N = 103 trials from the data. Each trial lasts 6 s with a sampling frequency of 150 Hz (T = 900). We selected only gradiometer channels, leading to P = 204 channels. The signals were notch-filtered to remove the power-line noise, and high-pass filtered at 2 Hz to remove the low-frequency trend, i.e., low-frequency drift artifacts which contribute a lot to the variance of the raw signals. We learned K = 40 atoms with L = 150 using the rank-1 multivariate CSC model, with a regularization λ = 0.2 λ_max.

Figure 4(a) shows a recovered non-sinusoidal brain rhythm which resembles the well-known mu-rhythm. The mu-rhythm has been implicated in motor-related activity (Hari, 2006) and is centered around 9–11 Hz. Indeed, while its power is concentrated in the same frequency band as the alpha rhythm, it has a very different spatial topography (Figure 4(b)). In Figure 4(c), the power spectral density (PSD) shows two components of the mu-rhythm – one at around 9 Hz, and a harmonic at 18 Hz, as previously reported in (Hari, 2006). Based on our analysis, it is clear that the 18 Hz component is simply a harmonic of the mu-rhythm, even though a Fourier-based analysis could lead us to falsely conclude that the data contained beta-rhythms. Finally, due to the rank-1 nature of our atoms, it is straightforward to fit an equivalent current dipole (Tuomisto et al., 1983) to interpret the origin of the signal. Figure 4(d) shows that the atom does indeed localize in the primary somatosensory cortex, the so-called S1 region, with a 59.3% goodness of fit. For results on more MEG datasets, see Section B.2. It notably includes mu-shaped atoms from S2.

5  Conclusion

Many neuroscientific debates today are centered around the morphology of the signals under consideration. For instance, are alpha-rhythms asymmetric (Mazaheri and Jensen, 2008)? Are frequency-specific patterns the result of sustained oscillations or transient bursts (van Ede et al., 2018)? In this paper, we presented a multivariate extension to the CSC problem applied to MEG data to help answer such questions. In the original CSC formulation, the signal is expressed as a convolution of atoms and their activations. Our method extends this to the case of multiple channels and imposes a rank-1 constraint on the atoms to account for the instantaneous propagation of electromagnetic fields. We demonstrate the usefulness of our method on publicly available multivariate MEG data. Not only are we able to recover neurologically plausible atoms, we are also able to find temporal waveforms which are non-sinusoidal.
Empirical evaluations show that our solvers are significantly faster than existing CSC methods, even in the univariate case (single channel). The algorithm scales sub-linearly with the number of channels, which means it can be employed even for dense sensor arrays with 200–300 sensors, leading to better estimation of the patterns and of their origin in the brain.

Acknowledgment  This work was supported by the ERC Starting Grant SLAB ERC-YStG-676943 and by the ANR THALAMEEG ANR-14-NEUC-0002-01.
1. What is the main contribution of the paper regarding convolutional sparse coding?
2. What are the strengths and weaknesses of the proposed approach compared to previous works?
3. How does the reviewer assess the efficiency and scalability of the method?
4. What are the potential applications of the multivariate extension in EEG decomposition?
5. How does the reviewer evaluate the results and experiments presented in the paper?
6. Are there any suggestions or recommendations for future improvements or comparisons with other algorithms?
Review
Review
This work extends convolutional sparse coding to the multivariate case with a focus on multichannel EEG decomposition. This corresponds to a non-convex minimization problem, and a local minimum is found via an alternating optimization. Reasonably efficient bookkeeping (precomputation of certain factors, and windowing for locally greedy coordinate descent) is used to improve scalability. The locally greedy coordinate descent cycles through time windows, but computes a greedy coordinate descent within each window. As spatial patterns are essential for understanding EEG, this multivariate extension is an important contribution. The work is clearly presented. The results demonstrate an efficient implementation. Some preliminary experiments show the potential for automatically learning brain waveforms.

Weaknesses: In previous work, Jas et al. [13] consider a different noise model, whereas this model assumes Gaussian noise rather than alpha-stable noise. The authors should comment on how this work could be extended to a different noise model. Parameter/model selection ($L$ and $K$) is not discussed for the real-world signal (lines 235–253). Choosing $K$ too large can lead to cases where some atoms are never used or updated, and other cases where the same waveform appears as different atoms (perhaps at different shifts). Perspectives on these real-world considerations should be mentioned. All the learned waveforms should be reported in the supplement.

There are obvious similarities between greedy coordinate descent and the well-known matching pursuit algorithm, which is not mentioned. With the non-negative restriction, matching pursuit for time-series would have the same update as in Eq. 5, without the shrinkage by lambda on the numerator. Assuming unit-norm patterns, the greedy coordinate descent strategy (Eq. 7) would match the matching pursuit selection of maximal inner product. Thus, the only change is that matching pursuit is done without shrinkage (lambda). Using matching pursuit, the remainder of the multivariate framework would not need to change. However, to be as efficient as the locally greedy version, a windowed version of matching pursuit should be done, which itself would be a novel algorithm. At least the issues with $\lambda_{max}$ could be replaced by using a fixed cardinality on the support. The authors could consider a discussion of the relation to matching pursuit, as it has been used for both template matching (including the multivariate case) and learning waveforms in neural signals. Also, to facilitate future comparisons, the cardinality of the support of the sparse code for different values of lambda could be investigated and reported. Relevant reference: Piotr J. Durka, Artur Matysiak, Eduardo Martínez Montes, Pedro Valdés Sosa, Katarzyna J. Blinowska, Multichannel matching pursuit and EEG inverse solutions, Journal of Neuroscience Methods, Volume 148, Issue 1, 2005, Pages 49-59.

Based on my understanding of the reasoning, the authors believe the improved convergence efficiency of a complete greedy coordinate descent would not compensate for the increased computational complexity of the complete greedy search. However, the gap between the locally and globally greedy versions could be narrowed: couldn't the algorithm be improved by recomputing the differences in Eq. 7 after the update in each window, keeping them in a priority queue, and then proceeding to choose the best window? This could be implemented after the first loop through the $M$ windows in Algorithm 1.
Lines 218–227: How are the spatial maps in this example created? From the text it appears they are distinct for the two atoms. But the multivariate approach still would be better than single channel even if the spatial maps were the same, due to spatial independence of noise. This could be clarified.

Figure 4: How was this atom selected (by manual inspection)? Is it representative or simply the most mu-like waveform? How often does this atom appear? Are there similar atoms to it with different spatial patterns?

Minor issues:
Line 20: "is central" is perhaps an overstatement in the context of neuroscience. "can be used to investigate" would be more reasonable.
Line 22: duration -> durations
Line 83: "aka"
Line 98: overlapping -> overlapped
Line 104: multivariate with -> multivariate CSC with
Equations 2, 3 & 4: Should indicate the full sets of parameters as arguments in the minimizations, especially to distinguish the case for a single multivariate signal in Equation 4.
Line 246: "9 Hz.,"
Line 252: "In notably includes"
Line 178: values -> value
Line 179: "allows to use"
Line 335: 'et al.' in author list for reference [29] without additional authors.
Line 356: 'et al.' in author list for reference [38] with only one more author.

-----
The authors have answered many of my questions. I look forward to a final version that will incorporate the additional discussion.
NIPS
1. What is the main contribution of the paper in terms of extending convolutional sparse coding for electromagnetic brain signal analysis?
2. How does the proposed method impose a rank-one constraint on every dictionary atom, and what is the motivation behind this approach?
3. Can you explain the locally greedy coordinate descent (LGCD) method developed in a previous work and how it's used in this paper?
4. How does the proposed method compare to existing ones in terms of efficiency, and what are the practical advantages of using LGCD?
5. Can you provide more details about the real MEG data experiment, such as the number of dictionary atoms used and how the results were obtained?
6. How does the proposed rank-one model compare to the multivariate model without the constraint, and what are the differences in terms of computational speed and parameter reduction?
7. Are there any limitations or potential drawbacks to the proposed method, and how might they be addressed in future research?
Review
Review
This paper presents an extension of convolutional sparse coding for electromagnetic brain signal analysis. The idea is to impose a rank-one constraint on every dictionary atom (channel x time), which is justified by the instantaneous nature of the forward model. An efficient algorithm for estimating the sparse codes is developed based on the locally greedy coordinate descent (LGCD) method developed in previous work. Experiments on real MEG data show that the algorithm is more efficient than existing ones and that the method can learn reasonable pairs of spatial and temporal atoms related to the mu-rhythm located at the somatosensory area.

The use of a rank-one constraint is well motivated for applications to MEG or EEG. Although technically rather straightforward, the combination with CSC in this particular application context sounds like a fairly original idea and will be practically very useful. The application to real MEG data demonstrates its validity well. The presentation is clear, although the paper seems to put too much emphasis on algorithmic details rather than on the data analysis results.

About the timing comparison, the L-BFGS method by Jas et al. [13] appears to have a similar performance to the proposed method in the univariate case (Fig 1b). The improvement for large lambda is evident, but the difference is relatively small compared to the difference from the other methods. I suppose the L-BFGS method can be readily applied to the multivariate or rank-one constraint cases by just replacing the Z-step, so the practical advantage of using LGCD may not be very large. The computational speed appears to be similar between the proposed multivariate and rank-one models, although I suppose the rank-one constraint generally improves the speed by reducing the number of parameters. If this could be shown in some setting, it may increase the relevance of the proposed rank-one model.

About the MEG application, the figure only displays one atom obtained by the rank-one model. Since the rank-one constraint is the key idea of the proposed method, it should be compared with the multivariate model without the constraint (possibly with a post-hoc rank-one approximation). Moreover, what was the number of dictionary atoms in this experiment? It would be better to include some comments on what the other atoms look like, as well as on the robustness of the result shown here.

Some symbols like [[1,T]] (=1,2,..,T) or that of the floor function are used without definition. It would be helpful to readers if their definitions were provided in the text.
NIPS
Title Multivariate Convolutional Sparse Coding for Electromagnetic Brain Signals Abstract Frequency-specific patterns of neural activity are traditionally interpreted as sustained rhythmic oscillations, and related to cognitive mechanisms such as attention, high level visual processing or motor control. While alpha waves (8–12 Hz) are known to closely resemble short sinusoids, and thus are revealed by Fourier analysis or wavelet transforms, there is an evolving debate that electromagnetic neural signals are composed of more complex waveforms that cannot be analyzed by linear filters and traditional signal representations. In this paper, we propose to learn dedicated representations of such recordings using a multivariate convolutional sparse coding (CSC) algorithm. Applied to electroencephalography (EEG) or magnetoencephalography (MEG) data, this method is able to learn not only prototypical temporal waveforms, but also associated spatial patterns so their origin can be localized in the brain. Our algorithm is based on alternated minimization and a greedy coordinate descent solver that leads to state-of-the-art running time on long time series. To demonstrate the implications of this method, we apply it to MEG data and show that it is able to recover biological artifacts. More remarkably, our approach also reveals the presence of non-sinusoidal mu-shaped patterns, along with their topographic maps related to the somatosensory cortex. 1 Introduction Neural activity recorded via measurements of the electrical potential over the scalp by electroencephalography (EEG), or magnetic fields by magnetoencephalography (MEG), can be used to investigate human cognitive processes and certain pathologies. Such recordings consist of dozens to hundreds of simultaneously recorded signals, for durations going from minutes to hours. In order to describe and quantify neural activity in such multi-gigabyte data, it is classical to decompose the signal in predefined representations such as the Fourier or wavelet bases. It leads to canonical frequency bands such as theta (4–8 Hz), alpha (8–12 Hz), or beta (15–30 Hz) (Buzsaki, 2006), in which signal power can be quantified. While such linear analyses have had significant impact in neuroscience, there is now a debate regarding whether neural activity consists more of transient bursts of isolated events rather than rhythmically sustained oscillations (van Ede et al., 2018). To study the transient events and the morphology of the waveforms (Mazaheri and Jensen, 2008; Cole and Voytek, 2017), which matter in cognition and for our understanding of pathologies (Jones, 2016; Cole et al., 2017), there is a clear need to go beyond traditionally employed signal processing methodologies (Cole and Voytek, 2018). For instance, a classic Fourier analysis fails to distinguish alpha-rhythms from mu-rhythms, which have the same peak frequency at around 10 Hz, but whose waveforms are different (Cole and Voytek, 2017; Hari and Puce, 2017). The key to many modern statistical analyses of complex data such as natural images, sounds or neural time series is the estimation of data-driven representations. Dictionary learning is one family 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. of techniques, which consists in learning atoms (or patterns) that offer sparse data approximations. When working with long signals in which events can happen at any instant, one idea is to learn shift-invariant atoms. 
They can offer better signal approximations than generic bases such as Fourier or wavelets, since they are not limited to narrow frequency bands. Multiple approaches have been proposed to solve this shift-invariant dictionary learning problem, such as MoTIF (Jost et al., 2006), the sliding window matching (Gips et al., 2017), the adaptive waveform learning (Hitziger et al., 2017), or the learning of recurrent waveform (Brockmeier and Príncipe, 2016), yet they all have several limitations, as discussed in Jas et al. (2017). A more popular approach, especially in image processing, is the convolutional sparse coding (CSC) model (Jas et al., 2017; Pachitariu et al., 2013; Kavukcuoglu et al., 2010; Zeiler et al., 2010; Heide et al., 2015; Wohlberg, 2016b; Šorel and Šroubek, 2016; Grosse et al., 2007; Mailhé et al., 2008). The idea is to cast the problem as an optimization problem, representing the signal as a sum of convolutions between atoms and activation signals. The CSC approach has been quite successful in several fields such as computer vision (Kavukcuoglu et al., 2010; Zeiler et al., 2010; Heide et al., 2015; Wohlberg, 2016b; Šorel and Šroubek, 2016), biomedical imaging (Jas et al., 2017; Pachitariu et al., 2013), and audio signal processing (Grosse et al., 2007; Mailhé et al., 2008), yet it was essentially developed for univariate signals. Interestingly, images can be multivariate such as color or hyper-spectral images, yet most CSC methods only consider gray scale images. To the best of our knowledge, the only reference to multivariate CSC is Wohlberg (2016a), where the author proposes two models well suited for 3-channel images. In the case of EEG and MEG recordings, neural activity is instantaneously and linearly spread across channels, due to Maxwell’s equations (Hari and Puce, 2017). The same temporal patterns are reproduced on all channels with different intensities, which depend on each activity’s location in the brain. To exploit this property, we propose to use a rank-1 constraint on each multivariate atom. This idea has been mentioned in (Barthélemy et al., 2012, 2013), but was considered less flexible than the full-rank model. Moreover, their proposed optimization techniques are not specific to shift-invariant models, and not scalable to long signals. Multivariate shift-invariant rank-1 decomposition of EEG has also been considered with matching pursuit (Durka et al., 2005), but without learning the atoms, which are fixed Gabor filters. Contribution In this study, we develop a multivariate model for CSC, using a rank-1 constraint on the atoms to account for the instantaneous spreading of an electromagnetic source over all the channels. We also propose efficient optimization strategies, namely a locally greedy coordinate descent (LGCD, Moreau et al. 2018), and precomputation steps for faster gradient computations. We provide multiple numerical evaluations of our method, which show the highly competitive running time on both univariate and multivariate models, even when working with hundreds of channels. We also demonstrate the estimation performance of the multivariate model by recovering patterns on low signal-to-noise ratio (SNR) data. Finally, we illustrate our method with atoms learned on multivariate MEG data, that thanks to the rank-1 model can be localized in the brain for clinical or cognitive neuroscience studies. Notation A multivariate signal with T time points in RP is noted X ∈ RP×T , while x ∈ RT is a univariate signal. 
We index time with brackets X[t] ∈ Rp, while Xi ∈ RT is the channel i in X . For a vector v ∈ RP we define the `q norm as ‖v‖q = ( ∑ i |vi|q) 1/q, and for a multivariate signal X ∈ RP×T , we define the time-wise `q norm as ‖X‖q = ( ∑T t=1 ‖X[t]‖qq)1/q. The transpose of a matrix U is denoted by U>. For a multivariate signal X ∈ RP×T , X is obtained by reversal of the temporal dimension, i.e., X [t] = X[T + 1− t]. The convolution of two signals z ∈ RT−L+1 and d ∈ RL is denoted by z ∗ d ∈ RT . For D ∈ RP×L, z ∗D is obtained by convolving every row of D by z. For D′ ∈ RP×L, D ∗̃ D′ ∈ R2L−1 is obtained by summing the convolution between each row of D and D′: D ∗̃ D′ = ∑P p=1Dp ∗D′p . We note [a, b] the set of real numbers between a and b, and Ja, bK the set of integers between a and b. We define T̃ as T − L+ 1. 2 Multivariate Convolutional Sparse Coding In this section, we introduce the convolutional sparse coding (CSC) models used in this work. We focus on 1D-convolution, although these models can be naturally extended to higher order signals such as images by using the proper convolution operators. Univariate CSC The CSC formulation adopted in this work follows the shift-invariant sparse coding (SISC) model from Grosse et al. (2007). It is defined as follows: min {dk}k,{znk }k,n N∑ n=1 1 2 ∥∥∥∥∥xn − K∑ k=1 znk ∗ dk ∥∥∥∥∥ 2 2 + λ K∑ k=1 ‖znk ‖1 , s.t. ‖dk‖22 ≤ 1 and znk ≥ 0 , (1) where {xn}Nn=1 ⊂ RT areN observed signals, λ > 0 is the regularization parameter, {dk}Kk=1 ⊂ RL are the K temporal atoms we aim to learn, and {znk }Kk=1 ⊂ RT̃ are K signals of activations, a.k.a. the code associated with xn. This model assumes that the coding signals znk are sparse, in the sense that only few entries are nonzero in each signal. In this work, we also assume that the entries of znk are positive, which means that the temporal patterns are present each time with the same polarity. Multivariate CSC The multivariate formulation uses an additional dimension on the signals and on the atoms, since the signal is recorded over P channels (mapping to space locations): min {Dk}k,{znk }k,n N∑ n=1 1 2 ∥∥∥∥∥Xn − K∑ k=1 znk ∗Dk ∥∥∥∥∥ 2 2 + λ K∑ k=1 ‖znk ‖1, s.t. ‖Dk‖22 ≤ 1 and znk ≥ 0 , (2) where {Xn}Nn=1 ⊂ RP×T are N observed multivariate signals, {Dk}Kk=1 ⊂ RP×L are the spatiotemporal atoms, and {znk }Kk=1 ⊂ RT̃ are the sparse activations associated with Xn. Multivariate CSC with rank-1 constraint This model is similar to the multivariate case but it adds a rank-1 constraint on the dictionary, Dk = ukv > k ∈ RP×L, with uk ∈ RP being the pattern over channels and vk ∈ RL the pattern over time. The optimization problem boils down to: min {uk}k,{vk}k,{znk }k,n N∑ n=1 1 2 ∥∥∥∥∥Xn − K∑ k=1 znk ∗ (ukv>k ) ∥∥∥∥∥ 2 2 + λ K∑ k=1 ‖znk ‖1 , s.t. ‖uk‖22 ≤ 1 , ‖vk‖22 ≤ 1 and znk ≥ 0 . (3) The rank-1 constraint is consistent with Maxwell’s equations and the physical model of electrophysiological signals like EEG or MEG, where each source is linearly spread instantaneously over channels with a constant topographic map (Hari and Puce, 2017). Using this assumption, one aims to improve the estimation of patterns under the presence of independent noise over channels. Moreover, it can help separating overlapped sources which are inherently rank-1 but whose sum is generally of higher rank. Finally, as explained below, several computations can be factorized to speed up computations. 
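To make the rank-1 formulation (3) concrete, the following NumPy sketch evaluates its objective for given spatial patterns uk, temporal patterns vk, and activations znk. The function name and array layout are illustrative assumptions for exposition; this is not the interface of the released alphacsc package.

```python
import numpy as np

def rank1_csc_objective(X, u, v, z, reg):
    """Objective of Eq. (3): 0.5 * squared reconstruction error + lambda * l1 penalty.

    X : (N, P, T)         observed multivariate signals
    u : (K, P)            spatial patterns, assumed to satisfy ||u_k||_2 <= 1
    v : (K, L)            temporal patterns, assumed to satisfy ||v_k||_2 <= 1
    z : (N, K, T - L + 1) non-negative activations
    """
    N, P, T = X.shape
    K, L = v.shape
    obj = 0.0
    for n in range(N):
        recon = np.zeros((P, T))
        for k in range(K):
            # rank-1 atom D_k = u_k v_k^T, so z_k^n * D_k = outer(u_k, z_k^n * v_k)
            recon += np.outer(u[k], np.convolve(z[n, k], v[k]))
        obj += 0.5 * np.sum((X[n] - recon) ** 2) + reg * np.abs(z[n]).sum()
    return obj
```

The sketch also illustrates why the rank-1 structure saves computation: each atom is stored as P + L numbers instead of P x L, and the temporal convolution is performed once per atom rather than once per channel.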
Noise model Note that our models use a Gaussian noise, whereas one can also use an alpha-stable noise distribution to better handle strong artifacts, as proposed by Jas et al. (2017). Importantly, our contribution is orthogonal to their work, and one can easily extend multivariate models to alpha-stable noise distributions, by using their EM algorithm and by updating the `2 loss into a weighted `2 loss in (3). Also, our experiments used artifact-free datasets, so the Gaussian noise model is appropriate. 3 Model estimation Problems (1), (2) and (3) share the same structure. They are convex in each variable but not jointly convex. The resolution is done by using a block coordinate descent approach which minimizes alternately the objective function over one block of the variables. In this section, we describe this approach on the multivariate CSC with rank-1 constraint case (3), updating iteratively the activations znk , the spatial patterns uk, and the temporal pattern vk. 3.1 Z-step: solving for the activations Given K fixed atoms Dk and a regularization parameter λ > 0, the Z-step aims to retrieve the NK activation signals znk ∈ RT̃ associated to the signals Xn ∈ RP×T by solving the following Algorithm 1: Locally greedy coordinate descent (LGCD) Input :Signal X , atoms Dk, number of segments M , stopping parameter > 0, zk initialization Initialize βk[t] with (5). repeat for m = 1 to M do Compute z′k[t] = max ( βk[t]−λ ‖Dk‖22 , 0 ) for (k, t) ∈ Cm Choose (k0, t0) = arg max (k,t)∈Cm |zk[t]− z′k[t]| Update β with (6) Update the current point estimate zk0 [t0]← z′k0 [t0] until ‖z − z′‖∞ < `1-regularized optimization problem: min {znk }k,n znk≥0 1 2 ∥∥∥∥∥Xn − K∑ k=1 znk ∗Dk ∥∥∥∥∥ 2 2 + λ K∑ k=1 ‖znk ‖1 . (4) This problem is convex in znk and can be efficiently solved. In Chalasani et al. (2013), the authors proposed an algorithm based on FISTA (Beck and Teboulle, 2009) to solve it. Bristow et al. (2013) introduced a method based on ADMM (Boyd et al., 2011) to compute efficiently the activation signals znk . These two methods are detailed and compared by Wohlberg (2016b), which also made use of the fast Fourier transform (FFT) to accelerate the computations. Recently, Jas et al. (2017) proposed to use L-BFGS (Byrd et al., 1995) to improve on first order methods. Finally, Kavukcuoglu et al. (2010) adapted the greedy coordinate descent (GCD) to solve this convolutional sparse coding problem. However, for long signals, these techniques can be quite slow due the computation of the gradient (FISTA, ADMM, L-BFGS) or the choice of the best coordinate to update in GCD, which are operations that scale linearly in T . A way to alleviate this limitation is to use a locally greedy coordinate descent (LGCD) strategy, presented recently in Moreau et al. (2018). Note that problem (4) is independent for each signal Xn. The computation of each zn can thus be parallelized, independently of the technique selected to solve the optimization (Jas et al., 2017). Therefore, we omit the superscript n in the following subsection to simplify the notation. Coordinate descent (CD) The key idea of coordinate descent is to update our estimate of the solution one coordinate zk[t] at a time. For (4), it is possible to compute the optimal value z′k[t] of one coordinate zk[t] given that all the others are fixed. 
Indeed, the problem (4) restricted to one coordinate has a closed-form solution given by: z′k[t] = max ( βk[t]− λ ‖Dk‖22 , 0 ) , with βk[t] = [ D k ∗̃ ( X − K∑ l=1 zl ∗Dl + zk[t]et ∗Dk )] [t] (5) where et ∈ RT̃ is the canonical basis vector with value 1 at index t and 0 elsewhere. When updating the coefficient zk0 [t0] to the value z ′ k0 [t0], β is updated with: β (q+1) k [t] = β (q) k [t] + (D k0 ∗̃ Dk)[t− t0](zk0 [t0]− z ′ k0 [t0]), ∀(k, t) 6= (k0, t0) . (6) The term (D k0 ∗̃ Dk)[t− t0] is zero for |t− t0| ≥ L. Thus, only K(2L− 1) coefficients of β need to be changed (Kavukcuoglu et al., 2010). The CD algorithm updates at each iteration a coordinate to this optimal value. The coordinate to update can be chosen with different strategies, such as the cyclic strategy which iterates over all coordinates (Friedman et al., 2007), the randomized CD (Nesterov, 2010; Richtárik and Takáč, 2014) which chooses a coordinate at random for each iteration, or the greedy CD (Osher and Li, 2009) which chooses the coordinate the farthest from its optimal value. Locally greedy coordinate descent (LGCD) The choice of a coordinate selection strategy results of a tradeoff between the computational cost of each iteration and the improvement it provides. For cyclic and randomized strategies, the iteration complexity is O(KL) as the coordinate selection can be performed in constant time. The greedy selection of a coordinate is more expensive as it is linear in the signal length O(KT̃ ). However, greedy selection is more efficient iteration-wise (Nutini et al., 2015). Moreau et al. (2018) proposed to consider a locally greedy selection strategy for CD. The coordinate to update is chosen greedily in one of M subsegments of the signal, i.e., at iteration q, the selected coordinate is: (k0, t0) = arg max (k,t)∈Cm |zk[t]− z′k[t]| , m ≡ q (mod M) + 1 , (7) with Cm = J1,KK× J(m−1)T̃ /M,mT̃/MK. With this strategy, the coordinate selection complexity is linear in the length of the considered subsegment O(KT̃/M). By choosing M = bT̃ /(2L− 1)c, the complexity of update is the same as the complexity of random and cyclic coordinate selection, O(KL). We detail the steps of LGCD in Algorithm 1. This algorithm is particularly efficient when the zk are sparser. Indeed, in this case, only few coefficients need to be updated in the signal, resulting in a low number of iterations. Computational complexities are detailed in Table 1. Relation with matching pursuit (MP) Note that the greedy CD is strongly related to the wellknown matching pursuit (MP) algorithm (Locatello et al., 2018). The main difference is that MP solves a slightly different problem, where the `1 regularization is replaced with an `0 constraint. Therefore, the size of the support is a fixed parameter in MP, whereas it is controlled by the regularization parameter λ in our case. In term of algorithm, both methods update one coordinate at a time selected greedily, but MP does not apply a soft-thresholding in (5). 3.2 D-step: solving for the atoms Given KN fixed activation signals znk ∈ RT̃ , associated to signals Xn ∈ RP×T , the D-step aims to update the K spatial patterns uk ∈ RP and K temporal patterns vk ∈ RL, by solving: min ‖uk‖2≤1 ‖vk‖2≤1 E, where E ∆= N∑ n=1 1 2 ‖Xn − K∑ k=1 znk ∗ (ukv>k )‖22 . (8) The problem (8) is convex in each block of variables {uk}k and {vk}k, but not jointly convex. 
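Before moving on to the D-step, the Z-step updates (5)-(7) can be sketched for the univariate case in a few lines of NumPy. This is an illustrative simplification rather than the released implementation: it keeps a running residual and recomputes βk[t] on the fly instead of maintaining β incrementally as in Algorithm 1, and it uses a fixed number of iterations rather than the ‖z − z′‖∞ stopping rule.

```python
import numpy as np

def lgcd_z_step(x, d, reg, n_seg=10, n_iter=1000):
    """Locally greedy coordinate descent for the univariate Z-step (Eq. 4).

    x : (T,) signal,  d : (K, L) temporal atoms,  reg : regularization lambda.
    Returns non-negative activations z of shape (K, T - L + 1).
    """
    K, L = d.shape
    T_hat = x.shape[0] - L + 1
    z = np.zeros((K, T_hat))
    residual = x.copy()                        # x - sum_k z_k * d_k
    norms = (d ** 2).sum(axis=1)               # ||d_k||_2^2
    bounds = np.linspace(0, T_hat, n_seg + 1).astype(int)
    for it in range(n_iter):
        lo, hi = bounds[it % n_seg], bounds[it % n_seg + 1]
        best, best_diff = None, 0.0
        for k in range(K):
            for t in range(lo, hi):
                # beta_k[t]: correlate d_k with the residual, coordinate (k, t) added back
                beta = d[k] @ (residual[t:t + L] + z[k, t] * d[k])
                z_new = max((beta - reg) / norms[k], 0.0)      # soft-threshold, Eq. (5)
                if abs(z_new - z[k, t]) > best_diff:
                    best, best_diff = (k, t, z_new), abs(z_new - z[k, t])
        if best is not None:                   # greedy update within the segment, Eq. (7)
            k0, t0, z_new = best
            residual[t0:t0 + L] -= (z_new - z[k0, t0]) * d[k0]
            z[k0, t0] = z_new
    return z
```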
Therefore, we optimize first {uk}k, then {vk}k, using in both cases a projected gradient descent with an Armijo backtracking line-search (Wright and Nocedal, 1999) to find a good step size. These steps are detailed in Algorithm A.1. Gradient relative to uk and vk The gradient of E relatively to {uk}k and {vk}k can be computed using the chain rule. First, we compute the gradient relatively to a full atom Dk = ukv > k ∈ RP×L: ∇DkE = N∑ n=1 (znk ) ∗ ( Xn − K∑ l=1 znl ∗Dl ) = Φk − K∑ l=1 Ψk,l ∗Dl , (9) where we reordered this expression to define Φk ∈ RP×L and Ψk,l ∈ R2L−1. These terms are both constant during a D-step and can thus be precomputed to accelerate the computation of the gradients and the cost function E. We detail these computations in the supplementary materials (see Section A.1). Computational complexities are detailed in Table 1. Note that the dependence in T is present only in the precomputations, which makes the following iterations very fast. Without precomputations, the complexity of each gradient computation in the D-step would beO(NKTLP ). 3.3 Initialization The activations sub-problem (Z-step) is regularized with a `1-norm, which induces sparsity: the higher the regularization parameter λ, the higher the sparsity. Therefore, there exists a value λmax above which the sub-problem solution is always zeros (Hastie et al., 2015). As λmax depends on the atoms Dk and on the signals Xn, its value changes after each D-step. In particular, its value might change a lot between the initialization and the first D-step. This is problematic since we cannot use a regularization λ above this initial λmax, even though the following λmax might be higher. The standard strategy to initialize CSC methods is to generate random atoms with Gaussian white noise. However, as these atoms generally poorly correlate with the signals, the initial value of λmax is low compared to the following ones. For example, on the MEG dataset described later on, we found that the initial λmax is about 1/3 of the following ones in the univariate case, with L = 32. On the multivariate case, it is even more problematic as with P = 204, we could have an initial λmax as low as 1/20 of the following ones. To fix this problem, we propose to initialize the dictionary with random chunks of the signal, projecting each chunk on a rank-1 approximation using singular value decomposition. We noticed on the MEG dataset that the initial λmax was then about the same value as the following ones, which enables the use of higher regularization parameters. We used this scheme in all our experiments. 4 Experiments All numerical experiments were run using Python (Python Software Foundation, 2017) and our code is publicly available online at https://alphacsc.github.io/. Speed performance To illustrate the performance of our optimization strategy, we monitored its convergence speed on a real MEG dataset. The somatosensory dataset from the MNE software (Gram- fort et al., 2013, 2014) contains responses to median nerve stimulation. We consider only gradiometers channels and we used the following parameters: T = 134 700, N = 2, K = 8, and L = 128. First we compared our strategy against three state-of-the-art univariate CSC solvers available online. The first was developed by Garcia-Cardona and Wohlberg (2017) and is based on ADMM. The second and third were developed by Jas et al. (2017), and are respectively based on FISTA and L-BFGS. 
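As a brief aside on the initialization scheme of Section 3.3, the rank-1 projection of random signal chunks can be sketched as below; the function name and array layout are illustrative assumptions rather than the released code.

```python
import numpy as np

def init_rank1_atoms(X, n_atoms, L, seed=0):
    """Draw random chunks of the signals and keep their best rank-1 approximation.

    X : (N, P, T) multivariate signals.  Returns u : (K, P) and v : (K, L).
    """
    rng = np.random.default_rng(seed)
    N, P, T = X.shape
    u = np.zeros((n_atoms, P))
    v = np.zeros((n_atoms, L))
    for k in range(n_atoms):
        n = rng.integers(N)
        t0 = rng.integers(T - L + 1)
        chunk = X[n, :, t0:t0 + L]                  # (P, L) chunk of one signal
        U, s, Vt = np.linalg.svd(chunk, full_matrices=False)
        u[k], v[k] = U[:, 0], Vt[0]                 # leading singular vectors (unit norm)
    return u, v
```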
All solvers shared the same objective function, but as the problem is non-convex, the solvers are not guaranteed to reach the same local minima, even though we started from the same initial settings. Hence, for a fair comparison, we computed the convergence curves relative to each local minimum, and averaged them over 10 different initializations. The results, presented in Figure 1(a, b), demonstrate the competitiveness of our method, for reasonable choices of λ. Indeed, a higher regularization parameter leads to sparser activations znk , on which LGCD is particularly efficient. Then, we also compared our method against a multivariate ADMM solver developed by Wohlberg (2016a). As this solver was quite slow on these long signals, we limited our experiments to P = 5 channels. The results, presented in Figure 1(c, d), show that our method is faster than the competing method for large λ. More benchmarks are available in the supplementary materials. Scaling with the number of channels The multivariate model involves an extra dimension P but its impact on the computational complexity of our solver is limited. Figure 2 shows the average running times of the Z-step and the D-step. Timings are normalized w.r.t. the timings for a single channel. The running times are computed using the same signals from the somatosensory dataset, with the following parameters: T = 26 940, N = 10, K = 2, L = 128. We can see that the scaling of these three operations is sub-linear in P . For the Z-step, only the initial computations for the first βk and the constants D k ∗̃ Dl depend linearly on P so that the complexity increase is limited compared to the complexity of solving the optimization problem (4). For the D-step, the scaling to compute the gradients is linear with P . However, the most expensive operations here are the computation of the constant Ψk, which does not on P . Finding patterns in low SNR signals Since the multivariate model has access to more data, we would expect it to perform better compared to the univariate model especially for low SNR signals. To demonstrate this, we compare the two models when varying the number of channels P and the SNR of the data. The original dictionary contains two temporal patterns, a square and a triangle, presented in Figure 3(a). The spatial maps are designed with a sine and a cosine, and the first channel’s amplitude is forced to 1 to make sure both atoms are present even with only one channel. The signals are obtained by convolving the atoms with activation signals znk , where the activation locations are sampled uniformly in J1, T̃ K× J1,KK with 5% non-zero activations, and the amplitudes are uniformly sampled in [0, 1]. Then, a Gaussian white noise with variance σ is added to the signal. We fixed N = 100, L = 64 and T̃ = 640 for our simulated signals. We can see in Figure 3(a) the temporal patterns recovered for σ = 10−3 using only one channel and using 5 channels. While the patterns recovered with one channel are very noisy, the multivariate model with rank-1 constraint recovers the original atoms accurately. This can be expected as the univariate model is ill-defined in this situation, where some atoms are superimposed. For the rank-1 model, as the atoms have different spatial maps, the problem is easier. Then, we evaluate the learned temporal atoms. 
Due to permutation and sign ambiguity, we compute the `2-norm of the difference between the temporal pattern v̂k and the ground truths, vk or −vk, for all permutations S(K) i.e., loss(v̂) = min s∈S(K) K∑ k=1 min ( ‖v̂s(k) − vk‖22, ‖v̂s(k) + vk‖22 ) . (10) Multiple values of λ were tested and the best loss is reported in Figure 3(b) for varying noise levels σ. We observe that independently of the noise level, the multivariate rank-1 model outperforms the univariate one. This is true even for good SNR, as using multiple channels disambiguates the separation of overlapped patterns. Examples of atoms in real MEG signals: We show the results of our algorithm on experimental data, using the MNE somatosensory dataset (Gramfort et al., 2013, 2014). This dataset contains MEG recordings of one patient receiving median nerve stimulations. Here we first extract N = 103 trials from the data. Each trial lasts 6 s with a sampling frequency of 150 Hz (T = 900). We selected only gradiometer channels, leading to P = 204 channels. The signals were notch-filtered to remove the power-line noise, and high-pass filtered at 2 Hz to remove the low-frequency trend, i.e. to remove low frequency drift artifacts which contribute a lot to the variance of the raw signals. We learned K = 40 atoms with L = 150 using a rank-1 multivariate CSC model, with a regularization λ = 0.2λmax. Figure 4(a) shows a recovered non-sinusoidal brain rhythm which resembles the well-known murhythm. The mu-rhythm has been implicated in motor-related activity (Hari, 2006) and is centered around 9–11 Hz. Indeed, while the power is concentrated in the same frequency band as the alpha, it has a very different spatial topography (Figure 4(b)). In Figure 4(c), the power spectral density (PSD) shows two components of the mu-rhythm – one at around 9 Hz, and a harmonic at 18 Hz as previously reported in (Hari, 2006). Based on our analysis, it is clear that the 18 Hz component is simply a harmonic of the mu-rhythm even though a Fourier-based analysis could lead us to falsely conclude that the data contained beta-rhythms. Finally, due to the rank-1 nature of our atoms, it is straightforward to fit an equivalent current dipole (Tuomisto et al., 1983) to interpret the origin of the signal. Figure 4(d) shows that the atom does indeed localize in the primary somatosensory cortex, or the so-called S1 region with a 59.3% goodness of fit. For results on more MEG datasets, see Section B.2. It notably includes mu-shaped atoms from S2. 5 Conclusion Many neuroscientific debates today are centered around the morphology of the signals under consideration. For instance, are alpha-rhythms asymmetric (Mazaheri and Jensen, 2008) ? Are frequency specific patterns the result of sustained oscillations or transient bursts (van Ede et al., 2018) ? In this paper, we presented a multivariate extension to the CSC problem applied to MEG data to help answer such questions. In the original CSC formulation, the signal is expressed as a convolution of atoms and their activations. Our method extends this to the case of multiple channels and imposes a rank-1 constraint on the atoms to account for the instantaneous propagation of electromagnetic fields. We demonstrate the usefulness of our method on publicly available multivariate MEG data. Not only are we able to recover neurologically plausible atoms, but also we are able to find temporal waveforms which are non-sinusoidal. 
Empirical evaluations show that our solvers are significantly faster compared to existing CSC methods even for the univariate case (single channel). The algorithm scales sublinearly with the number of channels which means it can be employed even for dense sensor arrays with 200-300 sensors, leading to better estimation of the patterns and their origin in the brain. Acknowledgment This work was supported by the ERC Starting Grant SLAB ERC-YStG-676943 and by the ANR THALAMEEG ANR-14-NEUC-0002-01.
1. What is the focus and contribution of the paper on convolutional sparse coding?
2. What are the strengths and weaknesses of the proposed method, particularly in its optimization and application to characterize MEG signals?
3. Do you have any questions regarding the paper's explanation of the rank-1 constraint and its impact on the results?
4. How can the paper improve its clarity and detail in describing the experimental dataset and findings?
5. Are there any limitations or potential biases in the study that should be acknowledged and addressed?
Review
Review
This paper proposes a convolutional sparse coding (CSC) method for multivariate signals. The problem is simplified by imposing a rank-1 constraint on the dictionary and optimized by alternating steps based on locally greedy coordinate descent. The method runs efficiently and is applied to characterize MEG signals from the brain, identifying differently shaped signals within the same frequency band. The paper is quite well written. Here are some of my concerns.
1. The intuition for why the rank-1 constraint is used and why it would yield good results is not well explained.
2. The y-axis labels in Fig. 1(a) and (c) are missing.
3. There needs to be more description of the datasets used in the experiments, e.g., what MEG data are and why they are measured, and what the sample size is.
4. Do the findings in the experiments have any relationship with the existing neuroscience literature? Is there any clinical justification or interpretation?
NIPS
Title Deep Active Learning by Leveraging Training Dynamics Abstract Active learning theories and methods have been extensively studied in classical statistical learning settings. However, deep active learning, i.e., active learning with deep learning models, is usually based on empirical criteria without solid theoretical justification, thus suffering from heavy doubts when some of those fail to provide benefits in real applications. In this paper, by exploring the connection between the generalization performance and the training dynamics, we propose a theory-driven deep active learning method (dynamicAL) which selects samples to maximize training dynamics. In particular, we prove that the convergence speed of training and the generalization performance are positively correlated under the ultra-wide condition and show that maximizing the training dynamics leads to better generalization performance. Furthermore, to scale up to large deep neural networks and data sets, we introduce two relaxations for the subset selection problem and reduce the time complexity from polynomial to constant. Empirical results show that dynamicAL not only outperforms the other baselines consistently but also scales well on large deep learning models. We hope our work would inspire more attempts on bridging the theoretical findings of deep networks and practical impacts of deep active learning in real applications. 1 Introduction Training deep learning (DL) models usually requires large amount of high-quality labeled data [1] to optimize a model with a massive number of parameters. The acquisition of such annotated data is usually time-consuming and expensive, making it unaffordable in the fields that require high domain expertise. A promising approach for minimizing the labeling effort is active learning (AL), which aims to identify and label the maximally informative samples, so that a high-performing classifier can be trained with minimal labeling effort [2]. Under classical statistical learning settings, theories of active learning have been extensively studied from the perspective of VC dimension [3]. As a result, a variety of methods have been proposed, such as (i) the version-space-based approaches, which require maintaining a set of models [4, 5], and (ii) the clustering-based approaches, which assume that the data within the same cluster have pure labels [6]. However, the theoretical analyses for these classical settings may not hold for over-parameterized deep neural networks where the traditional wisdom is ineffective [1]. For example, margin-based methods select the labeling examples in the vicinity of the learned decision boundary [7, 8]. However, in the over-parameterized regime, every labeled example could potentially be near the learned decision boundary [9]. As a result, theoretically, such analysis can hardly guide us to design practical active 36th Conference on Neural Information Processing Systems (NeurIPS 2022). learning methods. Besides, empirically, multiple deep active learning works, borrowing observations and insights from the classical theories and methods, have been observed unable to outperform their passive learning counterparts in a few application scenarios [10, 11]. On the other hand, the analysis of neural network’s optimization and generalization performance has witnessed several exciting developments in recent years in terms of the deep learning theory [12, 13, 14]. 
It is shown that the training dynamics of deep neural networks using gradient descent can be characterized by the Neural Tangent Kernel (NTK) of infinite [12] or finite [15] width networks. This is further leveraged to characterize the generalization of over-parameterized networks through Rademacher complexity analysis [13, 16]. We are therefore inspired to ask: How can we design a practical and generic active learning method for deep neural networks with theoretical justifications? To answer this question, we firstly explore the connection between the model performance on testing data and the convergence speed on training data for the over-parameterized deep neural networks. Based on the NTK framework [12, 13], we theoretically show that if a deep neural network converges faster (“Train Faster”), then it tends to have better generalization performance (“Generalize Better”), which matches the existing observations [17, 18, 19, 20, 21]. Motivated by the aforementioned connection, we first introduce Training Dynamics, the derivative of training loss with respect to iteration, as a proxy to quantitatively describe the training process. On top of it, we formally propose our generic and theoretically-motivated deep active learning method, dynamicAL, which will query labels for a subset of unlabeled samples that maximally increase the training dynamics. In order to compute the training dynamics by merely using the unlabeled samples, we leverage two relaxations Pseudo-labeling and Subset Approximation to solve this non-trivial subset selection problem. Our relaxed approaches are capable of effectively estimating the training dynamics as well as efficiently solving the subset selection problem by reducing the complexity from O(N b) to O(b). In theory, we coin a new term Alignment to measure the length of the label vector’s projection on the neural tangent kernel space. Then, we demonstrate that higher alignment usually comes with a faster convergence speed and a lower generalization bound. Furthermore, with the help of the maximum mean discrepancy [22], we extend the previous analysis to an active learning setting where the i.i.d. assumption may not hold. Finally, we show that alignment is positively correlated with our active learning goal, training dynamics, which implies that maximizing training dynamics will lead to better generalization performance. Regarding experiments, we have empirically verified our theory by conducting extensive experiments on three datasets, CIFAR10 [23], SVHN [24], and Caltech101 [25] using three types of network structures: vanilla CNN, ResNet [26], and VGG [27]. We first show that the result of the subset selection problem delivered by the subset approximation is close to the global optimal solution. Furthermore, under the active learning setting, our method not only outperforms other baselines but also scales well on large deep learning models. The main contributions of our paper can be summarized as follows: • We propose a theory-driven deep active learning method, dynamicAL, inspired by the observation of “train faster, generalize better”. To this end, we introduce the Training Dynamics, as a proxy to describe the training process. • We demonstrate that the convergence speed of training and the generalization performance is strongly (positively) correlated under the ultra-wide condition; we also show that maximizing the training dynamics will lead to a lower generalization error in the scenario of active learning. • Our method is easy to implement. 
We conduct extensive experiments to evaluate the effectiveness of dynamicAL and empirically show that our method consistently outperforms other methods in a wide range of active learning settings. 2 Background Notation. We use the random variable x ∈ X to represent the input data feature and y ∈ Y as the label where K is the number of classes and [K] := {1, 2, ...,K}. We are given non-degenerated a data source D with unknown distribution p(x, y). We further denote the concatenation of x as X = [x1, x2, ..., xM ] ⊤ and that of y as Y = [y1, y2, ..., yM ]⊤. We consider a deep learning classifier hθ(x) = argmax σ(f(x; θ)) : x → y parameterized by θ ∈ Rp, where σ(·) is the softmax function and f is a neural network. Let ⊗ be the Kronecker Product and IK ∈ RK×K be an identity matrix. Active learning. The goal of active learning is to improve the learning efficiency of a model with a limited labeling budget. In this work, we consider the pool-based AL setup, where a finite data set S = {(xl, yl)}Ml=1 with M points are i.i.d. sampled from p(x, y) as the (initial) labeled set. The AL model receives an unlabeled data set U sampled from p(x) and request labels according to p(y|x) for any x ∈ U in each query round. There are R rounds in total, and for each round, a query set Q consisting of b unlabeled samples can be queried. The total budget size B = b×R. Neural Tangent Kernel. The Neural Tangent Kernel [12] has been widely applied to analyze the dynamics of neural networks. If a neural network is sufficiently wide, properly initialized, and trained by gradient descent with infinitesimal step size (i.e., gradient flow), then the neural network is equivalent to kernel regression predictor with a deterministic kernel Θ(·, ·), called Neural Tangent Kernel (NTK). When minimizing the mean squared error loss, at the iteration t, the dynamics of the neural network f has a closed-form expression: df(X ; θ(t)) dt = −Kt(X ,X ) (f(X ; θ(t))− Y) , (1) where θ(t) denotes the parameter of the neural network at iteration t, Kt(X ,X ) ∈ R|X |×K×|X|×K is called the empirical NTK and Ki,jt (x, x′) = ∇θf i(x; θ(t))⊤∇θf j(x′; θ(t)) is the inner product of the gradient of the i-th class probability and the gradient of the j-th class probability for two samples x, x′ ∈ X and i, j ∈ [K]. The time-variant kernel Kt(·, ·) is equivalent to the (time-invariant) NTK with a high probability, i.e., if the neural network is sufficiently wide and properly initialized, then: Kt(X ,X ) = Θ(X ,X )⊗ IK . (2) The final learned neural network at iteration t, is equivalent to the kernel regression solution with respect to the NTK [14]. For any input x and training data {X,Y } we have, f(x; θ(t)) ≈ Θ(x,X)⊤Θ(X,X)−1(I − e−ηΘ(X,X)t)Y, (3) where η is the learning rate, Θ(x,X) is the NTK matrix between input x and all samples in training data X . 3 Method In section 3.1, we introduce the notion of training dynamics which can be used to describe the training process. Then, in section 3.2, based on the training dynamics, we propose dynamicAL. In section 3.3, we discuss the connection between dynamicAL and existing deep active learning methods. 3.1 Training dynamics In this section, we introduce the notion of training dynamics. The cross-entropy loss over the labeled set S is defined as: L(S) = ∑ (xl,yl)∈S ℓ(f(xl; θ), yl) = − ∑ (xl,yl)∈S ∑ i∈[K] yil log σ i(f(xl; θ)), (4) where σi(f(x; θ)) = exp(f i(x;θ))∑ j exp(f j(x;θ)) . 
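The empirical NTK entries K^{ij}_t(x, x') defined above, which reappear in the training dynamics of Eq. (8), are simply inner products of per-class parameter gradients. A minimal PyTorch sketch is shown below; the model is assumed to map a batch of inputs to logits of shape (batch, K), and this is an illustration rather than the authors' implementation.

```python
import torch

def empirical_ntk_entry(model, x, x_prime, i, j):
    """K_t^{ij}(x, x') = <grad_theta f^i(x; theta), grad_theta f^j(x'; theta)>."""
    params = [p for p in model.parameters() if p.requires_grad]

    def class_grad(inp, cls):
        logit = model(inp.unsqueeze(0))[0, cls]           # scalar f^cls(inp; theta)
        grads = torch.autograd.grad(logit, params)
        return torch.cat([g.reshape(-1) for g in grads])

    return torch.dot(class_grad(x, i), class_grad(x_prime, j))
```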
We first analyze the dynamics of the training loss, with respect to iteration t, on one labeled sample (derivation is in Appendix A.1): ∂ℓ(f(x; θ), y) ∂t = − ∑ i ( yi − σi(f(x; θ)) ) ∇θf i(x; θ)∇⊤t θ. (5) For neural networks trained by gradient descent, if the learning rate η is small, then ∇tθ = θt+1−θt = −η ∂ ∑ (xl,yl)∈S ℓ(f(xl;θ),yl) ∂θ . Taking the partial derivative of the training loss with respect to the parameters, we have (the derivation of the following equation can be found in Appendix A.2): ∂ℓ(f(x; θ), y) ∂θ = ∑ j∈[K] ( σj(f(x; θ))− yj )∂f j(x; θ) ∂θ . (6) Therefore, we can further get the following result for the dynamics of training loss: ∂ℓ(f(x; θ), y) ∂t = −η ∑ i ( σi(f(x; θ))− yi )∑ j ∑ (x l ′ ,y l ′ )∈S ∇θf i(x; θ)⊤∇θf j(xl′ ; θ) ( σj(f(xl′ ; θ))− y j l ′ ) . (7) Furthermore, we define di(X,Y ) = σi(f(X; θ))− Y i and Y i is the label vector of all samples for i-th class. Then, the training dynamics (dynamics of training loss) over training set S, computed with the empirical NTK Kij(X,X), is denoted by G(S) ∈ R: G(S) = −1 η ∑ (xl,yl)∈S ∂ℓ(f(xl; θ), yl) ∂t = ∑ i ∑ j di(X,Y )⊤Kij(X,X)dj(X,Y ). (8) 3.2 Active learning by activating training dynamics Before we present dynamicAL, we state Proposition 1, which serves as the theoretical guidance for dynamicAL and will be proved in Section 4. Proposition 1. For deep neural networks, converging faster leads to a lower worst-case generalization error. Motivated by the connection between convergence speed and generalization performance, we propose the general-purpose active learning method, dynamicAL, which aims to accelerate the convergence by querying labels for unlabeled samples. As we described in the previous section, the training dynamics can be used to describe the training process. Therefore, we employ the training dynamics as a proxy to design an active learning method. Specifically, at each query round, dynamicAL will query labels for samples which maximize the training dynamics G(S), i.e., Q = argmaxQ⊆UG(S ∪Q), s.t. |Q| = b, (9) where Q is the corresponding data set for Q with ground-truth labels. Notice that when applying the above objective in practice, we are facing two major challenges. First, G(S ∪Q) cannot be directly computed, because the label information of unlabeled examples is not available before the query. Second, the subset selection problem can be computationally prohibitive if enumerating all possible sets with size b. Therefore, we employ the following two relaxations to make this maximization problem to be solved with constant time complexity. Pseudo labeling. To estimate the training dynamics, we use the predicted label ŷu for sample xu in the unlabeled data set U to compute G. Note, the effectiveness of this adaptation has been demonstrated in the recent gradient-based methods [11, 28], which compute the gradient as if the model’s current prediction on the example is the true label. Therefore, the maximization problem in Equation (9) is changed to, Q = argmaxQ⊆UG(S ∪ Q̂). (10) where Q̂ is the corresponding data set for Q with pseudo labels ŶQ. Subset approximation. The subset selection problem of Equation (10) still requires enumerating all possible subsets of U with size b, which is O(nb). We simplify the selection problem to the following problem without causing any change on the result, argmaxQ⊆UG(S ∪ Q̂) = argmaxQ⊆U∆(Q̂|S), (11) where ∆(Q̂|S) = G(S ∪ Q̂)−G(S) is defined as the change of training dynamics. 
We approximate the change of training dynamics caused by query set Q using the summation of the change of training dynamics caused by each sample in the query set. Then the maximization problem can be converted to Equation (12) which can be solved by a greedy algorithm with O(b). Q = argmaxQ⊆U ∑ (x,ŷ)∈Q̂ ∆({(x, ŷ)}|S), s.t. |Q| = b. (12) To further show the approximated result is reasonably good, we decompose the change of training dynamics as (derivation in Appendix A.4): ∆(Q̂|S) = ∑ (x,ŷ)∈Q̂ ∆({(x, ŷ)}|S) + ∑ (x,ŷ),(x′,ŷ′)∈Q̂ di(x, ŷ)⊤Kij(x, x′)dj(x′, ŷ′), (13) where Kij(x, x′) is the empirical NTK. The first term in the right hand side is the approximated change of training dynamics. Then, we further define the Approximation Ratio (14) which measures the approximation quality, R(Q̂|S) = ∑ (x,ŷ)∈Q̂ ∆({(x, ŷ)}|S) ∆(Q̂|S) . (14) We empirically measure the expectation of the Approximation Ratio on two data sets with two different neural networks under three different batch sizes. As shown in Figure 4, the expectation EQ∼UR(Q̂|S) ≈ 1 when the model is converged. Therefore, the approximated result delivered by the greedy algorithm is close to the global optimal solution of the original maximization problem, Equation (10), especially when the model is converged. Based on the above two approximations, we present the proposed method dynamicAL in Algorithm 1. As described below, the algorithm starts by training a neural network f(·; θ) on the initial labeled set S until convergence. Then, for every unlabeled sample xu, we compute pseudo label ŷu and the change of training dynamics ∆({(xu, ŷu)}|S). After that, dynamicAL will query labels for top-b samples causing the maximal change on training dynamics, train the neural network on the extended labeled set, and repeat the process. Note, to keep close to the theoretical analysis, re-initialization is not used after each query, which also enables dynamicAL to get rid of the computational overhead of retraining the deep neural networks every time. Algorithm 1 Deep Active Learning by Leveraging Training Dynamics Input: Neural network f(·; θ), unlabeled sample set U , initial labeled set S, number of query round R, query batch size b. for r = 1 to R do Train f(·; θ) on S with cross-entropy loss until convergence. for xu ∈ U do Compute its pseudo label ŷu = argmaxf(xu; θ). Compute ∆({(xu, ŷu)}|S). end for Select b query samples Q with the highest ∆ values, and request their labels from the oracle. Update the labeled data set S = S ∪Q . end for return Final model f(·; θ). 3.3 Relation to existing methods Although existing deep active learning methods are usually designed based on heuristic criteria, some of them have empirically shown their effectiveness [11, 29, 30]. We surprisingly found that our theoretically-motivated method dynamicAL has some connections with those existing methods from the perspective of active learning criterion. The proposed active learning criterion in Equation (12) can be explicitly written as (derivation in Appendix A.5): ∆({(xu,ŷu)}|S) = ∥∇θℓ(f(xu; θ), ŷu)∥2 + 2 ∑ (x,y)∈S ∇θℓ(f(xu; θ), ŷu)⊤∇θℓ(f(x; θ), y). (15) Note. The first term of the right-hand side can be interpreted as the square of gradient length (2- norm) which reflects the uncertainty of the model on the example and has been wildly used as an active learning criterion in some existing works [30, 11, 31]. The second term can be viewed as the influence function [32] with identity hessian matrix. 
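For illustration, the per-candidate score in Equation (15) can be computed with ordinary per-example gradients. The PyTorch sketch below is a hedged paraphrase of the criterion, not the authors' released code: `labeled` is assumed to be an iterable of (x, y) pairs with class-index labels, `unlabeled` an iterable of inputs, and the model is assumed to output logits.

```python
import torch
import torch.nn.functional as F

def flat_grad(model, x, y):
    """Flattened parameter gradient of the cross-entropy loss on one example."""
    params = [p for p in model.parameters() if p.requires_grad]
    target = torch.as_tensor(y).view(1)
    loss = F.cross_entropy(model(x.unsqueeze(0)), target)
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])

def dynamical_scores(model, labeled, unlabeled):
    """Score each unlabeled example by Eq. (15), using its pseudo label."""
    g_labeled = sum(flat_grad(model, x, y) for x, y in labeled)   # summed gradient over S
    scores = []
    for x in unlabeled:
        with torch.no_grad():
            pseudo = model(x.unsqueeze(0)).argmax(dim=1)[0]        # pseudo label for x_u
        g_u = flat_grad(model, x, pseudo)
        scores.append((g_u @ g_u + 2.0 * (g_u @ g_labeled)).item())
    return scores                                                  # query the top-b scores
```

The relative weight of the two terms corresponds to the γ = 2 setting studied in Section 5.2.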
And recently, [29] has empirically shown that the effectiveness of using the influence function with identity hessian matrix as active learning criterion. We hope our theoretical analysis can also shed some light on the interpretation of previous methods. 4 Theoretical analysis In this section, we study the correlation between the convergence rate of the training loss and the generalization error under the ultra-wide condition [12, 13]. We define a measure named alignment to quantify the convergence rate and further show its connection with generalization bound. The analysis provides a theoretical guarantee for the phenomenon of “Train Faster, Generalize Better” as well as our active learning method dynamicAL with a rigorous treatment. Finally, we show that the active learning proxy, training dynamics, is correlated with alignment, which indicates that increasing the training dynamics leads to larger convergence rate and better generalization performance. We leave all proofs of theorems and details of verification experiments in Appendix B and D respectively. 4.1 Train faster provably generalize better Given an ultra-wide neural network, the gradient descent can achieve a near-zero training error [12, 33] and its generalization ability in unseen data can be bounded [13]. It is shown that both the convergence and generalization of a neural network can be analyzed using the NTK [13]. However, the question what is the relation between the convergence rate and the generalization bound has not been answered. We formally give a solution by introducing the concept of alignment, which is defined as follows: Definition 1 (Alignment). Given a data set S = {X,Y }, the alignment is a measure of correlation between X and Y projected in the NTK space. In particular, the alignment can be computed by A(X,Y ) = Tr[Y ⊤Θ(X,X)Y ] = ∑K k=1 ∑n i=1 λi(v⃗ ⊤ i Y k)2. In the following, we will demonstrate why “Train Faster” leads to “Generalize Better” through alignment. In particular, the relation of the convergence rate and the generalization bound with alignment is analyzed. The convergence rate of gradient descent for ultra-wide networks is presented in following lemma: Lemma 1 (Convergence Analysis with NTK, Theorem 4.1 of [13]). Suppose λ0 = λmin(Θ) > 0 for all subsets of data samples. For δ ∈ (0, 1), if m = Ω( n 7 λ40δ 4ϵ2 ) and η = O(λ0n2 ), with probability at least 1− δ, the network can achieve near-zero training error, ∥Y − f(X; θ(t))∥2 = √√√√ K∑ k=1 n∑ i=1 (1− ηλi)2t(v⃗⊤i Y k)2 ± ϵ, (16) where n denotes the number of training samples and m denotes the width of hidden layers. The NTK Θ = V ⊤ΛV with Λ = {λi}ni=1 is a diagonal matrix of eigenvalues and V = {v⃗i}ni=1 is a unitary matrix. In this lemma, we take mean square error (MSE) loss as an example for the convenience of illustration. The conclusion can be extended to other loss functions such as cross-entropy loss (see Appendix B.2 in [14]). From the lemma, we find the convergence rate is governed by the dominant term (16) as Et(X,Y ) = √∑K k=1 ∑n i=1(1− ηλi)2t(v⃗⊤i Y k)2, which is correlated with the alignment: Theorem 1 (Relationship between the convergence rate and alignment). Under the same assumptions as in Lemma 1, the convergence rate described by Et satisfies, Tr[Y ⊤Y ]− 2tηA(X,Y ) ≤ E2t (X,Y ) ≤ Tr[Y ⊤Y ]− ηA(X,Y ). (17) Remark 1. In the above theorem, we demonstrate that the alignment can measure the convergence rate. 
Especially, we find that both the upper bound and the lower bound of error Et(X,Y ) are inversely proportional to the alignment, which implies that higher alignment will lead to achieving faster convergence. Now we analyze the generalization performance of the proposed method through complexity analysis. We demonstrate that the ultra-wide networks can achieve a reasonable generalization bound. Lemma 2 (Generalization bound with NTK, Theorem 5.1 of [13]). Suppose data S = {(xi, yi)}ni=1 are i.i.d. samples from a non-degenerate distribution p(x, y), and m ≥ poly(n, λ−10 , δ−1). Consider any loss function ℓ : R× R → [0, 1] that is 1-Lipschitz, then with probability at least 1− δ over the random initialization, the network trained by gradient descent for T ≥ Ω( 1ηλ0 log n δ ) iterations has population risk Lp = E(x,y)∼p(x,y)[ℓ(fT (x; θ), y)] that is bounded as follows: Lp ≤ √ 2Tr[Y ⊤Θ−1(X,X)Y ] n +O (√ log n λ0δ n ) . (18) In this lemma, we show that the dominant term in the generalization upper bound is B(X,Y ) =√ 2Tr[Y ⊤Θ−1Y ] n . In the following theorem, we further prove that this bound is inversely proportional to the alignment A(X,Y ). Theorem 2 (Relationship between the generalization bound and alignment). Under the same assump- tions as in Lemma 2, if we define the generalization upper bound as B(X,Y ) = √ 2Tr[Y ⊤Θ−1Y ] n , then it can be bounded with the alignment as follows: Tr2[Y ⊤Y ] A(X,Y ) ≤ n 2 B2(X,Y ) ≤ λmax λmin Tr2[Y ⊤Y ] A(X,Y ) . (19) Remark 2. Theorems 1 and 2 reveal that the cause for the correlated phenomenons “Train Faster” and “Generalize Better” is the projection of label vector on the NTK space (alignment). 4.2 “ Train Faster, Generalize Better ” for active learning In the NTK framework [13], the empirical average requires data in S is i.i.d. samples (Lemma 2). However, this assumption may not hold in the active learning setting with multiple query rounds, because the training data is composed by i.i.d. sampled initial label set and samples queried by active learning policy. To extend the previous analysis principle to active learning, we follow [34] to reformulate the Lemma 2 as: Lp ≤ (Lp − Lq) + √ 2Tr[Y ⊤Θ−1(X,X)Y ] n +O (√ log n λ0δ n ) , (20) where Lq = E(x,y)∼q(x,y)[ℓ(f(x; θ), y)], q(x, y) denotes the data distribution after query, and X,Y includes initial training samples and samples after query. There is a new term in the upper bound, which is the difference between the true risk under different data distributions. Lp − Lq =E(x,y)∼p(x,y)[ℓ(f(x; θ), y)]− E(x,y)∼q(x,y)[ℓ(f(x; θ), y)] (21) Though in active learning the data distribution for the labeled samples may be different from the original distribution, they share the same conditional probability p(y|x). We define g(x) =∫ y ℓ(f(x; θ), y)p(y|x)dy, and then we have: Lp − Lq = ∫ x g(x)p(x)dx− ∫ x g(x)q(x)dx. (22) To measure the distance between two distributions, we employ the Maximum Mean Discrepancy (MMD) with neural tangent kernel [35] (derivation in Appendix B.3). Lp − Lq ≤ MMD(S0, S,HΘ) +O (√C ln(1/δ) n ) . (23) Slightly overloading the notation, we denote the initial labeled set as S0, HΘ as the associated Reproducing Kernel Hilbert Space for the NTK Θ, and ∀x, x′ ∈ S,Θ(x, x′) ≤ C. Note, MMD(S0, S,HΘ) is the empirical measure for MMD(p(x), q(x),HΘ). We empirically compute MMD and the dominant term of the generalization upper bound B under the active learning setting with our method dynamicAL. 
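As a point of reference, the empirical quantity MMD(S0, S, HΘ) is the standard kernel MMD estimate computed with the NTK as the kernel. A generic sketch is given below; the RBF kernel appears only as a stand-in, since the NTK computation itself follows the authors' Appendix D.1 and is not reproduced here.

```python
import numpy as np

def empirical_mmd_sq(X0, X1, kernel):
    """Biased (V-statistic) estimate of MMD^2 between samples X0 and X1."""
    K00 = np.array([[kernel(a, b) for b in X0] for a in X0])
    K11 = np.array([[kernel(a, b) for b in X1] for a in X1])
    K01 = np.array([[kernel(a, b) for b in X1] for a in X0])
    return K00.mean() + K11.mean() - 2.0 * K01.mean()

# Stand-in kernel for illustration; the paper uses the neural tangent kernel here.
rbf = lambda a, b, gamma=1e-3: float(np.exp(-gamma * np.sum((a - b) ** 2)))
```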
As shown in Figure 1, on CIFAR10 with a CNN target model (three convolutional layers with global average pooling), the initial labeled set size |S| = 500, query round R = 1 and budget size b ∈ {250, 500, 1000}, we observe that, under different active learning settings, the MMD is always much smaller than the B. Besides, we further investigate the MMD and B for R ≥ 2 and observe the similar results. Therefore, the lemma 2 still holds for the target model with dynamicAL. More results and discussions for R ≥ 2 are in Appendix E.4 and the computation details of MMD and NTK are in Appendix D.1. 4.3 Alignment and training dynamics in active learning In this section, we show the relationship between the alignment and the training dynamics. To be consistent with the previous theoretical analysis (Theorem 1 and 2), we use the training dynamics with mean square error under the ultrawidth condition, which can be expressed as GMSE(S) = Tr [ (f(X; θ)− Y )⊤Θ(X,X)(f(X; θ)− Y ) ] . Due to the limited space, we leave the derivation in Appendix A.3. To further quantitatively evaluate the correlation between GMSE(S ∪Q) and A(X∥XQ, Y ∥YQ), we utilize the Kendall τ coefficient [36] to empirically measure their relation. As shown in Figure 2, for CNN on CIFAR10 with active learning setting, where |S| = 500 and |Q| = 250, there is a strong agreement between GMSE(S ∪Q) and A(X∥XQ, Y ∥YQ), which further indicates that increasing the training dynamics will lead to a faster convergence and better generalization performance. More details about this verification experiment are in Appendix D.2. 5 Experiments 5.1 Experiment setup Baselines. We compare dynamicAL with the following eight baselines: Random, Corset, Confidence Sampling (Conf), Margin Sampling (Marg), Entropy, and Active Learning by Learning (ALBL), Batch Active learning by Diverse Gradient Embeddings (BADGE). Description of baseline methods is in Appendix E.1. Data sets and Target Model. We evaluate all the methods on three benchmark data sets, namely, CIFAR10 [23], SVHN [24], and Caltech101 [25]. We use accuracy as the evaluation metric and report the mean value of 5 runs. We consider three neural network architectures: vanilla CNN, ResNet18 [26], and VGG11 [27]. For each model, we keep the hyper-parameters used in their official implementations. More information about the implementation is in Appendix C.1. Active Learning Protocol. Following the previous evaluation protocol [11], we compare all those active learning methods in a batch-mode setup with an initial set size M = 500 for all those three data sets, batch size b varying from {250, 500, 1000}. For the selection of test set, we use the benchmark split of the CIFAR10 [23], SVHN [24] and sample 20% from each class to form the test set for the Caltech101 [25]. 5.2 Results and analysis The main experimental results have been provided as plots due to the limited space. We also provide tables in which we report the mean and standard deviation for each plot in Appendix E.3. Overall results. The average test accuracy at each query round is shown in Figure 3. Our method dynamicAL can consistently outperform other methods for all query rounds. This suggests that dynamicAL is a good choice regardless of the labeling budget. And, we notice dynamicAL can work well on data sets with a large class number, such as Caltech101. However, the previous state-of-the-art method, BADGE, cannot be scaled up to those data sets, because the required memory is linear with the number of classes. 
Besides, because dynamicAL depends on pseudo labeling, a relatively large initial labeled set can provide advantages for dynamicAL. Therefore, it is important to examine whether dynamicAL can work well with a small initial labeled set. As shown in Figure 3, dynamicAL is able to work well with a relatively small initial labeled set (M = 500). Due to the limited space, we only show the result under three different settings in Figure 3. More evaluation results are in Appendix E.2. Moreover, although the re-initialization trick makes dynamicAL deviate from the dynamics analysis, we investigate the effect of it to dynamicAL and provide the empirical observations and analysis in Appendix E.5. Effect of query size and query round. Given the total label budget B, the increasing of query size always leads to the decreasing of query round. We study the influence of different query size and query round on dynamicAL from two perspectives. First, we study the expected approximation ratio with different query batch sizes on different data sets. As shown in Figure 4, under different settings the expected approximation ratio always converges to 1 with the increase of training epochs, which further indicates that the query set selected by using the approximated change of training dynamics is a reasonably good result for the query set selection problem. Second, we study influence of query round for actual performance of target models. The performance for different target models on different data sets with total budge size B = 1000 is shown in Table 1. For certain query budget, our active learning algorithm can be further improved if more query rounds are allowed. Comparison with different variants. The active learning criterion of dynamicAL can be written as∑ (x,y)∈S ∥∇θℓ(f(x; θu), ŷu)∥2 + γ∇θℓ(f(xu; θ), ŷu)⊤∇θℓ(f(x; θ), y). We empirically show the performance for γ ∈ {0, 1, 2,∞} in Figure 5. With γ = 0, the criterion is close to the expected gradient length method [31]. And with γ = ∞, the selected samples are same with the samples selected by using the influence function with identity hessian matrix criterion [29]. As shown in Figure 5, the model achieves the best performance with γ = 2, which is aligned with the value indicated by the theoretical analysis (Equation 15). The result confirms the importance of theoretical analysis for the design of deep active learning methods. 6 Related work Neural Tangent Kernel (NTK): Recent study has shown that under proper conditions, an infinitewidth neural network can be simplified as a linear model with Neural Tangent Kernel (NTK) [12]. Since then, NTK has become a powerful theoretical tool to analyze the behavior of deep learning architecture (CNN, GNN, RNN) [33, 37, 38], random initialization [39], stochastic neural network [40], and graph neural network [41] from its output dynamics and to characterize the convergence and generalization error [13]. Besides, [15] studies the finite-width NTK, aiming at making the NTK more practical. Active Learning: Active learning aims at interactively query labels for unlabeled data points to maximize model performances [2]. Among others, there are two popular strategies for active learning, i.e., diversity sampling [42, 43, 44] and uncertainty sampling [45, 46, 47, 11, 48, 49, 29]. Recently, several papers proposed to use gradient to measure uncertainty [49, 11, 29]. However, those methods need to compute gradient for each class, and thus they can hardly be applied on data sets with a large class number. 
Besides, recent works [50, 51] leverage the NTK to analyze contextual bandits with streaming data, but these analyses are hard to apply to our pool-based setting.

7 Conclusion
In this work, we bridge the gap between theoretical findings on deep neural networks and real-world deep active learning applications. By exploring the connection between the generalization performance and the training dynamics, we propose a theory-driven method, dynamicAL, which selects samples to maximize the training dynamics. We prove that the convergence speed of training and the generalization performance are strongly (positively) correlated under the ultra-wide condition, and we show that maximizing the training dynamics leads to a lower generalization error. Empirically, our work shows that dynamicAL not only consistently outperforms strong baselines across various settings, but also scales well on large deep learning models.

8 Acknowledgment
This work is supported by the National Science Foundation (IIS-1947203, IIS-2117902, IIS-2137468, IIS-2134079, and CNS-2125626), a joint ACES-ICGA funding initiative via USDA Hatch ILLU802-946, and Agriculture and Food Research Initiative (AFRI) grant no. 2020-67021-32799/project accession no. 1024178 from the USDA National Institute of Food and Agriculture. The views and conclusions are those of the authors and should not be interpreted as representing the official policies of the funding agencies or the government.
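As a supplementary illustration of the selection rule behind dynamicAL (Algorithm 1 and the variant comparison in Section 5.2), the following is a minimal PyTorch-style sketch of the per-sample score ∥∇θℓ(xu, ŷu)∥² + γ⟨∇θℓ(xu, ŷu), Σ_(x,y)∈S ∇θℓ(x, y)⟩ computed with pseudo labels. This is a hedged sketch under our own assumptions (the data format as lists of (x, y) tensor pairs, and all function names), not the authors' released implementation.

```python
import torch

def grad_vec(model, loss_fn, x, y):
    # Flattened gradient of the per-sample loss w.r.t. all trainable parameters.
    loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])

def dynamical_scores(model, loss_fn, labeled, unlabeled, gamma=2.0):
    # Score(x_u) = ||g_u||^2 + gamma * <g_u, sum of labeled-set gradients>,
    # where g_u uses the model's pseudo label for x_u (gamma = 2 recovers Equation 15).
    g_sum = torch.zeros_like(grad_vec(model, loss_fn, *labeled[0]))
    for x, y in labeled:                        # accumulate the labeled-set gradient once
        g_sum = g_sum + grad_vec(model, loss_fn, x, y)
    scores = []
    for x in unlabeled:
        with torch.no_grad():
            y_hat = model(x.unsqueeze(0)).argmax(dim=1).squeeze(0)   # pseudo label
        g_u = grad_vec(model, loss_fn, x, y_hat)
        scores.append((g_u @ g_u + gamma * (g_u @ g_sum)).item())
    return scores                               # query labels for the top-b scoring points
```

In practice one would call this with a trained classifier and `loss_fn = torch.nn.CrossEntropyLoss()`, then request labels for the b unlabeled points with the largest scores.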
1. What is the focus and contribution of the paper on active learning for deep learning? 2. What are the strengths of the proposed approach, particularly in its experimental performance? 3. What are the weaknesses of the paper regarding its theoretical analysis and notations? 4. Do you have any concerns or suggestions regarding the paper's limitations and potential negative societal impact?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The paper proposes an active learning method for deep learning (named dynamicAL) that selects data points which maximize the training dynamics. The paper proves a relationship between the convergence speed of training and generalization and shows experimentally that dynamicAL can outperform other popular active learning baselines.

Strengths And Weaknesses
Strengths:
The method proposed in the paper is novel and works well compared to other baselines.
The experiments are extensive.
The paper is well-written.

Weaknesses:
The main theoretical results in Section 4.1 (Theorem 1 and 2) are only for iid data, which do not hold for the active learning setting. Although Section 4.2 gives some discussions for the active learning setting, the paper does not prove any formal results for the setting.
Some notations can be simplified to make the formulas cleaner; for example in Eq 13, the subscript u for x, y, etc. can be dropped without affecting the meaning of the equation. The same is also true for other equations such as 12, 14, etc.

Questions
Can you prove formal results similar to Theorem 1 and 2 for the active learning setting in Section 4.2 and 4.3?

Limitations
There is no potential negative societal impact.
NIPS
1. What is the focus and contribution of the paper on active learning? 2. What are the strengths of the proposed approach, particularly in terms of its theoretical foundation and empirical validation? 3. What are the weaknesses and limitations of the paper, especially regarding its reliance on NTK theory and practical applicability? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any concerns or questions about the algorithm's runtime, clustering, and correlation with final generalization performance?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The paper describes DynamicAL, an algorithm for active learning based on an NTK-based analysis of training dynamics. The algorithm is based on the notion that fast training results in better generalization. The authors describe "alignment" as a measure of convergence rate and show that this term affects the generalization bounds derived in the NTK regime. The authors then show that the proxy used in DynamicAL ("training dynamics") correlates with alignment and therefore could be used as a proxy. The paper proposes a greedy variant of the algorithm and shows that it approximates the non-greedy variant well while being significantly faster. The paper shows that the resulting algorithm outperforms existing baselines on a few models/datasets.

Strengths And Weaknesses
Strengths:
Good presentation, well structured.
Well motivated; the main claims of the paper are well justified from a theoretical point of view and verified empirically (Section 4).
The resulting algorithm is simple to implement and could be considered a generalization of existing methods (as mentioned in Section 3.3).

Weaknesses:
The derivation and algorithm are based heavily on NTK theory, but practically sized networks have been shown to deviate far from this regime (see [1, 2], for example). This paper does not address this issue; at least some discussion of it would be useful.
Unclear computational requirements compared to other methods, and no runtime analysis (this part, while possible to infer from the algorithm description, could be stated more explicitly).
Improvements over baselines seem rather marginal (although this is somewhat expected in a somewhat saturated field).

[1] https://arxiv.org/abs/2007.15801
[2] https://arxiv.org/abs/2010.15110

Questions
What is the runtime of the algorithm? It appears that calculating Equation 15 would scale linearly with |S|, which seems quite undesirable.
Does the algorithm suffer from clustering due to the greedy approximation? As an example, suppose that every single item in the dataset is duplicated 10 times and the query size is 10. Will the algorithm not pick the exact same image 10 times? (See the distinction between BALD and BatchBALD, for example.)
Is the experiment in Section 4.3 calculating G using pseudo-labels or ground-truth labels? It would be interesting to see the effect of changing pseudo-labels to ground-truth labels on the results in Section 4.3.
While Section 4.3 shows that G (training dynamics) correlates with A (alignment), it would be useful to see how well G correlates with the final generalization performance, as this is the true objective of the active learning algorithm.

Limitations
See the questions and strengths/weaknesses section.
NIPS
Title Deep Active Learning by Leveraging Training Dynamics Abstract Active learning theories and methods have been extensively studied in classical statistical learning settings. However, deep active learning, i.e., active learning with deep learning models, is usually based on empirical criteria without solid theoretical justification, thus suffering from heavy doubts when some of those fail to provide benefits in real applications. In this paper, by exploring the connection between the generalization performance and the training dynamics, we propose a theory-driven deep active learning method (dynamicAL) which selects samples to maximize training dynamics. In particular, we prove that the convergence speed of training and the generalization performance are positively correlated under the ultra-wide condition and show that maximizing the training dynamics leads to better generalization performance. Furthermore, to scale up to large deep neural networks and data sets, we introduce two relaxations for the subset selection problem and reduce the time complexity from polynomial to constant. Empirical results show that dynamicAL not only outperforms the other baselines consistently but also scales well on large deep learning models. We hope our work would inspire more attempts on bridging the theoretical findings of deep networks and practical impacts of deep active learning in real applications. 1 Introduction Training deep learning (DL) models usually requires large amount of high-quality labeled data [1] to optimize a model with a massive number of parameters. The acquisition of such annotated data is usually time-consuming and expensive, making it unaffordable in the fields that require high domain expertise. A promising approach for minimizing the labeling effort is active learning (AL), which aims to identify and label the maximally informative samples, so that a high-performing classifier can be trained with minimal labeling effort [2]. Under classical statistical learning settings, theories of active learning have been extensively studied from the perspective of VC dimension [3]. As a result, a variety of methods have been proposed, such as (i) the version-space-based approaches, which require maintaining a set of models [4, 5], and (ii) the clustering-based approaches, which assume that the data within the same cluster have pure labels [6]. However, the theoretical analyses for these classical settings may not hold for over-parameterized deep neural networks where the traditional wisdom is ineffective [1]. For example, margin-based methods select the labeling examples in the vicinity of the learned decision boundary [7, 8]. However, in the over-parameterized regime, every labeled example could potentially be near the learned decision boundary [9]. As a result, theoretically, such analysis can hardly guide us to design practical active 36th Conference on Neural Information Processing Systems (NeurIPS 2022). learning methods. Besides, empirically, multiple deep active learning works, borrowing observations and insights from the classical theories and methods, have been observed unable to outperform their passive learning counterparts in a few application scenarios [10, 11]. On the other hand, the analysis of neural network’s optimization and generalization performance has witnessed several exciting developments in recent years in terms of the deep learning theory [12, 13, 14]. 
It is shown that the training dynamics of deep neural networks using gradient descent can be characterized by the Neural Tangent Kernel (NTK) of infinite [12] or finite [15] width networks. This is further leveraged to characterize the generalization of over-parameterized networks through Rademacher complexity analysis [13, 16]. We are therefore inspired to ask: How can we design a practical and generic active learning method for deep neural networks with theoretical justifications? To answer this question, we firstly explore the connection between the model performance on testing data and the convergence speed on training data for the over-parameterized deep neural networks. Based on the NTK framework [12, 13], we theoretically show that if a deep neural network converges faster (“Train Faster”), then it tends to have better generalization performance (“Generalize Better”), which matches the existing observations [17, 18, 19, 20, 21]. Motivated by the aforementioned connection, we first introduce Training Dynamics, the derivative of training loss with respect to iteration, as a proxy to quantitatively describe the training process. On top of it, we formally propose our generic and theoretically-motivated deep active learning method, dynamicAL, which will query labels for a subset of unlabeled samples that maximally increase the training dynamics. In order to compute the training dynamics by merely using the unlabeled samples, we leverage two relaxations Pseudo-labeling and Subset Approximation to solve this non-trivial subset selection problem. Our relaxed approaches are capable of effectively estimating the training dynamics as well as efficiently solving the subset selection problem by reducing the complexity from O(N b) to O(b). In theory, we coin a new term Alignment to measure the length of the label vector’s projection on the neural tangent kernel space. Then, we demonstrate that higher alignment usually comes with a faster convergence speed and a lower generalization bound. Furthermore, with the help of the maximum mean discrepancy [22], we extend the previous analysis to an active learning setting where the i.i.d. assumption may not hold. Finally, we show that alignment is positively correlated with our active learning goal, training dynamics, which implies that maximizing training dynamics will lead to better generalization performance. Regarding experiments, we have empirically verified our theory by conducting extensive experiments on three datasets, CIFAR10 [23], SVHN [24], and Caltech101 [25] using three types of network structures: vanilla CNN, ResNet [26], and VGG [27]. We first show that the result of the subset selection problem delivered by the subset approximation is close to the global optimal solution. Furthermore, under the active learning setting, our method not only outperforms other baselines but also scales well on large deep learning models. The main contributions of our paper can be summarized as follows: • We propose a theory-driven deep active learning method, dynamicAL, inspired by the observation of “train faster, generalize better”. To this end, we introduce the Training Dynamics, as a proxy to describe the training process. • We demonstrate that the convergence speed of training and the generalization performance is strongly (positively) correlated under the ultra-wide condition; we also show that maximizing the training dynamics will lead to a lower generalization error in the scenario of active learning. • Our method is easy to implement. 
We conduct extensive experiments to evaluate the effectiveness of dynamicAL and empirically show that our method consistently outperforms other methods in a wide range of active learning settings. 2 Background Notation. We use the random variable x ∈ X to represent the input data feature and y ∈ Y as the label where K is the number of classes and [K] := {1, 2, ...,K}. We are given non-degenerated a data source D with unknown distribution p(x, y). We further denote the concatenation of x as X = [x1, x2, ..., xM ] ⊤ and that of y as Y = [y1, y2, ..., yM ]⊤. We consider a deep learning classifier hθ(x) = argmax σ(f(x; θ)) : x → y parameterized by θ ∈ Rp, where σ(·) is the softmax function and f is a neural network. Let ⊗ be the Kronecker Product and IK ∈ RK×K be an identity matrix. Active learning. The goal of active learning is to improve the learning efficiency of a model with a limited labeling budget. In this work, we consider the pool-based AL setup, where a finite data set S = {(xl, yl)}Ml=1 with M points are i.i.d. sampled from p(x, y) as the (initial) labeled set. The AL model receives an unlabeled data set U sampled from p(x) and request labels according to p(y|x) for any x ∈ U in each query round. There are R rounds in total, and for each round, a query set Q consisting of b unlabeled samples can be queried. The total budget size B = b×R. Neural Tangent Kernel. The Neural Tangent Kernel [12] has been widely applied to analyze the dynamics of neural networks. If a neural network is sufficiently wide, properly initialized, and trained by gradient descent with infinitesimal step size (i.e., gradient flow), then the neural network is equivalent to kernel regression predictor with a deterministic kernel Θ(·, ·), called Neural Tangent Kernel (NTK). When minimizing the mean squared error loss, at the iteration t, the dynamics of the neural network f has a closed-form expression: df(X ; θ(t)) dt = −Kt(X ,X ) (f(X ; θ(t))− Y) , (1) where θ(t) denotes the parameter of the neural network at iteration t, Kt(X ,X ) ∈ R|X |×K×|X|×K is called the empirical NTK and Ki,jt (x, x′) = ∇θf i(x; θ(t))⊤∇θf j(x′; θ(t)) is the inner product of the gradient of the i-th class probability and the gradient of the j-th class probability for two samples x, x′ ∈ X and i, j ∈ [K]. The time-variant kernel Kt(·, ·) is equivalent to the (time-invariant) NTK with a high probability, i.e., if the neural network is sufficiently wide and properly initialized, then: Kt(X ,X ) = Θ(X ,X )⊗ IK . (2) The final learned neural network at iteration t, is equivalent to the kernel regression solution with respect to the NTK [14]. For any input x and training data {X,Y } we have, f(x; θ(t)) ≈ Θ(x,X)⊤Θ(X,X)−1(I − e−ηΘ(X,X)t)Y, (3) where η is the learning rate, Θ(x,X) is the NTK matrix between input x and all samples in training data X . 3 Method In section 3.1, we introduce the notion of training dynamics which can be used to describe the training process. Then, in section 3.2, based on the training dynamics, we propose dynamicAL. In section 3.3, we discuss the connection between dynamicAL and existing deep active learning methods. 3.1 Training dynamics In this section, we introduce the notion of training dynamics. The cross-entropy loss over the labeled set S is defined as: L(S) = ∑ (xl,yl)∈S ℓ(f(xl; θ), yl) = − ∑ (xl,yl)∈S ∑ i∈[K] yil log σ i(f(xl; θ)), (4) where σi(f(x; θ)) = exp(f i(x;θ))∑ j exp(f j(x;θ)) . 
We first analyze the dynamics of the training loss, with respect to iteration t, on one labeled sample (derivation is in Appendix A.1): ∂ℓ(f(x; θ), y) ∂t = − ∑ i ( yi − σi(f(x; θ)) ) ∇θf i(x; θ)∇⊤t θ. (5) For neural networks trained by gradient descent, if the learning rate η is small, then ∇tθ = θt+1−θt = −η ∂ ∑ (xl,yl)∈S ℓ(f(xl;θ),yl) ∂θ . Taking the partial derivative of the training loss with respect to the parameters, we have (the derivation of the following equation can be found in Appendix A.2): ∂ℓ(f(x; θ), y) ∂θ = ∑ j∈[K] ( σj(f(x; θ))− yj )∂f j(x; θ) ∂θ . (6) Therefore, we can further get the following result for the dynamics of training loss: ∂ℓ(f(x; θ), y) ∂t = −η ∑ i ( σi(f(x; θ))− yi )∑ j ∑ (x l ′ ,y l ′ )∈S ∇θf i(x; θ)⊤∇θf j(xl′ ; θ) ( σj(f(xl′ ; θ))− y j l ′ ) . (7) Furthermore, we define di(X,Y ) = σi(f(X; θ))− Y i and Y i is the label vector of all samples for i-th class. Then, the training dynamics (dynamics of training loss) over training set S, computed with the empirical NTK Kij(X,X), is denoted by G(S) ∈ R: G(S) = −1 η ∑ (xl,yl)∈S ∂ℓ(f(xl; θ), yl) ∂t = ∑ i ∑ j di(X,Y )⊤Kij(X,X)dj(X,Y ). (8) 3.2 Active learning by activating training dynamics Before we present dynamicAL, we state Proposition 1, which serves as the theoretical guidance for dynamicAL and will be proved in Section 4. Proposition 1. For deep neural networks, converging faster leads to a lower worst-case generalization error. Motivated by the connection between convergence speed and generalization performance, we propose the general-purpose active learning method, dynamicAL, which aims to accelerate the convergence by querying labels for unlabeled samples. As we described in the previous section, the training dynamics can be used to describe the training process. Therefore, we employ the training dynamics as a proxy to design an active learning method. Specifically, at each query round, dynamicAL will query labels for samples which maximize the training dynamics G(S), i.e., Q = argmaxQ⊆UG(S ∪Q), s.t. |Q| = b, (9) where Q is the corresponding data set for Q with ground-truth labels. Notice that when applying the above objective in practice, we are facing two major challenges. First, G(S ∪Q) cannot be directly computed, because the label information of unlabeled examples is not available before the query. Second, the subset selection problem can be computationally prohibitive if enumerating all possible sets with size b. Therefore, we employ the following two relaxations to make this maximization problem to be solved with constant time complexity. Pseudo labeling. To estimate the training dynamics, we use the predicted label ŷu for sample xu in the unlabeled data set U to compute G. Note, the effectiveness of this adaptation has been demonstrated in the recent gradient-based methods [11, 28], which compute the gradient as if the model’s current prediction on the example is the true label. Therefore, the maximization problem in Equation (9) is changed to, Q = argmaxQ⊆UG(S ∪ Q̂). (10) where Q̂ is the corresponding data set for Q with pseudo labels ŶQ. Subset approximation. The subset selection problem of Equation (10) still requires enumerating all possible subsets of U with size b, which is O(nb). We simplify the selection problem to the following problem without causing any change on the result, argmaxQ⊆UG(S ∪ Q̂) = argmaxQ⊆U∆(Q̂|S), (11) where ∆(Q̂|S) = G(S ∪ Q̂)−G(S) is defined as the change of training dynamics. 
We approximate the change of training dynamics caused by query set Q using the summation of the change of training dynamics caused by each sample in the query set. Then the maximization problem can be converted to Equation (12) which can be solved by a greedy algorithm with O(b). Q = argmaxQ⊆U ∑ (x,ŷ)∈Q̂ ∆({(x, ŷ)}|S), s.t. |Q| = b. (12) To further show the approximated result is reasonably good, we decompose the change of training dynamics as (derivation in Appendix A.4): ∆(Q̂|S) = ∑ (x,ŷ)∈Q̂ ∆({(x, ŷ)}|S) + ∑ (x,ŷ),(x′,ŷ′)∈Q̂ di(x, ŷ)⊤Kij(x, x′)dj(x′, ŷ′), (13) where Kij(x, x′) is the empirical NTK. The first term in the right hand side is the approximated change of training dynamics. Then, we further define the Approximation Ratio (14) which measures the approximation quality, R(Q̂|S) = ∑ (x,ŷ)∈Q̂ ∆({(x, ŷ)}|S) ∆(Q̂|S) . (14) We empirically measure the expectation of the Approximation Ratio on two data sets with two different neural networks under three different batch sizes. As shown in Figure 4, the expectation EQ∼UR(Q̂|S) ≈ 1 when the model is converged. Therefore, the approximated result delivered by the greedy algorithm is close to the global optimal solution of the original maximization problem, Equation (10), especially when the model is converged. Based on the above two approximations, we present the proposed method dynamicAL in Algorithm 1. As described below, the algorithm starts by training a neural network f(·; θ) on the initial labeled set S until convergence. Then, for every unlabeled sample xu, we compute pseudo label ŷu and the change of training dynamics ∆({(xu, ŷu)}|S). After that, dynamicAL will query labels for top-b samples causing the maximal change on training dynamics, train the neural network on the extended labeled set, and repeat the process. Note, to keep close to the theoretical analysis, re-initialization is not used after each query, which also enables dynamicAL to get rid of the computational overhead of retraining the deep neural networks every time. Algorithm 1 Deep Active Learning by Leveraging Training Dynamics Input: Neural network f(·; θ), unlabeled sample set U , initial labeled set S, number of query round R, query batch size b. for r = 1 to R do Train f(·; θ) on S with cross-entropy loss until convergence. for xu ∈ U do Compute its pseudo label ŷu = argmaxf(xu; θ). Compute ∆({(xu, ŷu)}|S). end for Select b query samples Q with the highest ∆ values, and request their labels from the oracle. Update the labeled data set S = S ∪Q . end for return Final model f(·; θ). 3.3 Relation to existing methods Although existing deep active learning methods are usually designed based on heuristic criteria, some of them have empirically shown their effectiveness [11, 29, 30]. We surprisingly found that our theoretically-motivated method dynamicAL has some connections with those existing methods from the perspective of active learning criterion. The proposed active learning criterion in Equation (12) can be explicitly written as (derivation in Appendix A.5): ∆({(xu,ŷu)}|S) = ∥∇θℓ(f(xu; θ), ŷu)∥2 + 2 ∑ (x,y)∈S ∇θℓ(f(xu; θ), ŷu)⊤∇θℓ(f(x; θ), y). (15) Note. The first term of the right-hand side can be interpreted as the square of gradient length (2- norm) which reflects the uncertainty of the model on the example and has been wildly used as an active learning criterion in some existing works [30, 11, 31]. The second term can be viewed as the influence function [32] with identity hessian matrix. 
And recently, [29] has empirically shown that the effectiveness of using the influence function with identity hessian matrix as active learning criterion. We hope our theoretical analysis can also shed some light on the interpretation of previous methods. 4 Theoretical analysis In this section, we study the correlation between the convergence rate of the training loss and the generalization error under the ultra-wide condition [12, 13]. We define a measure named alignment to quantify the convergence rate and further show its connection with generalization bound. The analysis provides a theoretical guarantee for the phenomenon of “Train Faster, Generalize Better” as well as our active learning method dynamicAL with a rigorous treatment. Finally, we show that the active learning proxy, training dynamics, is correlated with alignment, which indicates that increasing the training dynamics leads to larger convergence rate and better generalization performance. We leave all proofs of theorems and details of verification experiments in Appendix B and D respectively. 4.1 Train faster provably generalize better Given an ultra-wide neural network, the gradient descent can achieve a near-zero training error [12, 33] and its generalization ability in unseen data can be bounded [13]. It is shown that both the convergence and generalization of a neural network can be analyzed using the NTK [13]. However, the question what is the relation between the convergence rate and the generalization bound has not been answered. We formally give a solution by introducing the concept of alignment, which is defined as follows: Definition 1 (Alignment). Given a data set S = {X,Y }, the alignment is a measure of correlation between X and Y projected in the NTK space. In particular, the alignment can be computed by A(X,Y ) = Tr[Y ⊤Θ(X,X)Y ] = ∑K k=1 ∑n i=1 λi(v⃗ ⊤ i Y k)2. In the following, we will demonstrate why “Train Faster” leads to “Generalize Better” through alignment. In particular, the relation of the convergence rate and the generalization bound with alignment is analyzed. The convergence rate of gradient descent for ultra-wide networks is presented in following lemma: Lemma 1 (Convergence Analysis with NTK, Theorem 4.1 of [13]). Suppose λ0 = λmin(Θ) > 0 for all subsets of data samples. For δ ∈ (0, 1), if m = Ω( n 7 λ40δ 4ϵ2 ) and η = O(λ0n2 ), with probability at least 1− δ, the network can achieve near-zero training error, ∥Y − f(X; θ(t))∥2 = √√√√ K∑ k=1 n∑ i=1 (1− ηλi)2t(v⃗⊤i Y k)2 ± ϵ, (16) where n denotes the number of training samples and m denotes the width of hidden layers. The NTK Θ = V ⊤ΛV with Λ = {λi}ni=1 is a diagonal matrix of eigenvalues and V = {v⃗i}ni=1 is a unitary matrix. In this lemma, we take mean square error (MSE) loss as an example for the convenience of illustration. The conclusion can be extended to other loss functions such as cross-entropy loss (see Appendix B.2 in [14]). From the lemma, we find the convergence rate is governed by the dominant term (16) as Et(X,Y ) = √∑K k=1 ∑n i=1(1− ηλi)2t(v⃗⊤i Y k)2, which is correlated with the alignment: Theorem 1 (Relationship between the convergence rate and alignment). Under the same assumptions as in Lemma 1, the convergence rate described by Et satisfies, Tr[Y ⊤Y ]− 2tηA(X,Y ) ≤ E2t (X,Y ) ≤ Tr[Y ⊤Y ]− ηA(X,Y ). (17) Remark 1. In the above theorem, we demonstrate that the alignment can measure the convergence rate. 
In particular, both the upper bound and the lower bound on the error $E_t(X,Y)$ decrease as the alignment increases, which implies that higher alignment leads to faster convergence.

Now we analyze the generalization performance of the proposed method through a complexity analysis. We show that ultra-wide networks can achieve a reasonable generalization bound.

Lemma 2 (Generalization bound with NTK, Theorem 5.1 of [13]). Suppose the data $S = \{(x_i, y_i)\}_{i=1}^n$ are i.i.d. samples from a non-degenerate distribution $p(x,y)$, and $m \geq \mathrm{poly}(n, \lambda_0^{-1}, \delta^{-1})$. Consider any loss function $\ell : \mathbb{R} \times \mathbb{R} \to [0,1]$ that is 1-Lipschitz. Then, with probability at least $1-\delta$ over the random initialization, the network trained by gradient descent for $T \geq \Omega\!\left(\frac{1}{\eta\lambda_0}\log\frac{n}{\delta}\right)$ iterations has population risk $L_p = \mathbb{E}_{(x,y)\sim p(x,y)}[\ell(f_T(x;\theta), y)]$ bounded as follows:

$L_p \;\leq\; \sqrt{\frac{2\,\mathrm{Tr}[Y^\top \Theta^{-1}(X,X)\, Y]}{n}} \;+\; O\!\left(\sqrt{\frac{\log\frac{n}{\lambda_0\delta}}{n}}\right).$  (18)

This lemma shows that the dominant term in the generalization upper bound is $B(X,Y) = \sqrt{\frac{2\,\mathrm{Tr}[Y^\top \Theta^{-1} Y]}{n}}$. In the following theorem, we further prove that this bound is inversely proportional to the alignment $\mathcal{A}(X,Y)$.

Theorem 2 (Relationship between the generalization bound and alignment). Under the same assumptions as in Lemma 2, if we define the generalization upper bound as $B(X,Y) = \sqrt{\frac{2\,\mathrm{Tr}[Y^\top \Theta^{-1} Y]}{n}}$, then it can be bounded via the alignment as follows:

$\frac{\mathrm{Tr}^2[Y^\top Y]}{\mathcal{A}(X,Y)} \;\leq\; \frac{n}{2}\, B^2(X,Y) \;\leq\; \frac{\lambda_{\max}}{\lambda_{\min}}\,\frac{\mathrm{Tr}^2[Y^\top Y]}{\mathcal{A}(X,Y)}.$  (19)

Remark 2. Theorems 1 and 2 reveal that the cause of the correlated phenomena “Train Faster” and “Generalize Better” is the projection of the label vector onto the NTK space (the alignment).

4.2 “Train Faster, Generalize Better” for active learning

In the NTK framework [13], the empirical average requires the data in S to be i.i.d. samples (Lemma 2). However, this assumption may not hold in the active learning setting with multiple query rounds, because the training data are composed of an i.i.d. sampled initial labeled set together with samples queried by the active learning policy. To extend the previous analysis to active learning, we follow [34] and reformulate Lemma 2 as:

$L_p \;\leq\; (L_p - L_q) + \sqrt{\frac{2\,\mathrm{Tr}[Y^\top \Theta^{-1}(X,X)\, Y]}{n}} + O\!\left(\sqrt{\frac{\log\frac{n}{\lambda_0\delta}}{n}}\right),$  (20)

where $L_q = \mathbb{E}_{(x,y)\sim q(x,y)}[\ell(f(x;\theta), y)]$, $q(x,y)$ denotes the data distribution after the query, and X, Y include the initial training samples and the samples obtained after the query. There is a new term in the upper bound, namely the difference between the true risks under the two data distributions:

$L_p - L_q = \mathbb{E}_{(x,y)\sim p(x,y)}[\ell(f(x;\theta), y)] - \mathbb{E}_{(x,y)\sim q(x,y)}[\ell(f(x;\theta), y)].$  (21)

Although in active learning the distribution of the labeled samples may differ from the original distribution, the two share the same conditional probability $p(y|x)$. We define $g(x) = \int_y \ell(f(x;\theta), y)\, p(y|x)\, dy$, and then we have:

$L_p - L_q = \int_x g(x)\, p(x)\, dx - \int_x g(x)\, q(x)\, dx.$  (22)

To measure the distance between the two distributions, we employ the Maximum Mean Discrepancy (MMD) with the neural tangent kernel [35] (derivation in Appendix B.3):

$L_p - L_q \;\leq\; \mathrm{MMD}(S_0, S, \mathcal{H}_\Theta) + O\!\left(\sqrt{\frac{C \ln(1/\delta)}{n}}\right).$  (23)

Slightly overloading notation, we denote by $S_0$ the initial labeled set and by $\mathcal{H}_\Theta$ the Reproducing Kernel Hilbert Space associated with the NTK $\Theta$, and we assume $\Theta(x, x') \leq C$ for all $x, x' \in S$. Note that $\mathrm{MMD}(S_0, S, \mathcal{H}_\Theta)$ is the empirical estimate of $\mathrm{MMD}(p(x), q(x), \mathcal{H}_\Theta)$. We empirically compute the MMD and the dominant term of the generalization upper bound B under the active learning setting with our method dynamicAL.
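To make the quantities in this section concrete, the following is a minimal, illustrative sketch of how the alignment $\mathcal{A}(X,Y)$, the bound term $B(X,Y)$, and the NTK-based MMD could be computed from an empirical NTK. It assumes, for simplicity, that the kernel is the gradient inner product of a single scalar model output (the paper's full multi-class NTK has a block structure), processes examples one at a time, and adds a small damping term when inverting the kernel for numerical stability; none of these choices are prescribed by the paper, and all names are hypothetical.

```python
# Hedged sketch: alignment A(X, Y), bound term B(X, Y), and the NTK-MMD
# from Section 4, using flattened per-example gradient features.
import torch

def grad_features(model, X):
    """Rows are flattened parameter gradients of the (summed) model output per example."""
    params = [p for p in model.parameters() if p.requires_grad]
    rows = []
    for x in X:
        out = model(x.unsqueeze(0)).sum()   # scalar output; sums logits if multi-class
        grads = torch.autograd.grad(out, params)
        rows.append(torch.cat([g.reshape(-1) for g in grads]))
    return torch.stack(rows)                # shape: [n, num_params]

def empirical_ntk(model, X1, X2):
    return grad_features(model, X1) @ grad_features(model, X2).T

def alignment(model, X, Y):
    # A(X, Y) = Tr[Y^T Theta(X, X) Y]
    Y2 = Y.reshape(X.shape[0], -1)
    theta = empirical_ntk(model, X, X)
    return (Y2 * (theta @ Y2)).sum()

def bound_term(model, X, Y, damping=1e-4):
    # B(X, Y) = sqrt(2 Tr[Y^T Theta^{-1} Y] / n); damping is an assumed stabilizer.
    n = X.shape[0]
    Y2 = Y.reshape(n, -1)
    theta = empirical_ntk(model, X, X)
    theta = theta + damping * torch.eye(n, dtype=theta.dtype, device=theta.device)
    sol = torch.linalg.solve(theta, Y2)     # Theta^{-1} Y
    return torch.sqrt(2.0 * (Y2 * sol).sum() / n)

def ntk_mmd(model, X0, X1):
    # Empirical (biased) MMD between the initial set S0 and the full set S
    # in the RKHS induced by the empirical NTK.
    k00 = empirical_ntk(model, X0, X0).mean()
    k11 = empirical_ntk(model, X1, X1).mean()
    k01 = empirical_ntk(model, X0, X1).mean()
    return torch.sqrt(torch.clamp(k00 + k11 - 2.0 * k01, min=0.0))
```

A comparison of ntk_mmd(model, X0, X) against bound_term(model, X, Y) is the kind of check reported next.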
As shown in Figure 1, on CIFAR10 with a CNN target model (three convolutional layers with global average pooling), an initial labeled set of size |S| = 500, query round R = 1, and budget size b ∈ {250, 500, 1000}, we observe that, under the different active learning settings, the MMD is always much smaller than B. We further investigate the MMD and B for R ≥ 2 and observe similar results. Therefore, Lemma 2 still holds for the target model trained with dynamicAL. More results and discussion for R ≥ 2 are in Appendix E.4, and the computation details of the MMD and the NTK are in Appendix D.1.

4.3 Alignment and training dynamics in active learning

In this section, we show the relationship between the alignment and the training dynamics. To be consistent with the previous theoretical analysis (Theorems 1 and 2), we use the training dynamics with the mean squared error under the ultra-wide condition, which can be expressed as $G_{\mathrm{MSE}}(S) = \mathrm{Tr}\!\left[(f(X;\theta) - Y)^\top \Theta(X,X)\, (f(X;\theta) - Y)\right]$. Due to limited space, we leave the derivation to Appendix A.3. To quantitatively evaluate the correlation between $G_{\mathrm{MSE}}(S \cup Q)$ and $\mathcal{A}(X \| X_Q,\, Y \| Y_Q)$, we use the Kendall τ coefficient [36] to empirically measure their relation. As shown in Figure 2, for a CNN on CIFAR10 in an active learning setting with |S| = 500 and |Q| = 250, there is strong agreement between $G_{\mathrm{MSE}}(S \cup Q)$ and $\mathcal{A}(X \| X_Q,\, Y \| Y_Q)$, which further indicates that increasing the training dynamics leads to faster convergence and better generalization performance. More details about this verification experiment are in Appendix D.2.

5 Experiments

5.1 Experiment setup

Baselines. We compare dynamicAL with the following baselines: Random, Coreset, Confidence Sampling (Conf), Margin Sampling (Marg), Entropy, Active Learning by Learning (ALBL), and Batch Active learning by Diverse Gradient Embeddings (BADGE). Descriptions of the baseline methods are in Appendix E.1.

Data sets and Target Model. We evaluate all methods on three benchmark data sets, namely CIFAR10 [23], SVHN [24], and Caltech101 [25]. We use accuracy as the evaluation metric and report the mean over 5 runs. We consider three neural network architectures: a vanilla CNN, ResNet18 [26], and VGG11 [27]. For each model, we keep the hyper-parameters used in its official implementation. More information about the implementation is in Appendix C.1.

Active Learning Protocol. Following the previous evaluation protocol [11], we compare all the active learning methods in a batch-mode setup with an initial set size of M = 500 for all three data sets and a query batch size b varying over {250, 500, 1000}. For the test sets, we use the benchmark splits of CIFAR10 [23] and SVHN [24], and we sample 20% from each class to form the test set for Caltech101 [25].

5.2 Results and analysis

The main experimental results are provided as plots due to limited space. We also provide tables reporting the mean and standard deviation for each plot in Appendix E.3.

Overall results. The average test accuracy at each query round is shown in Figure 3. Our method dynamicAL consistently outperforms the other methods at all query rounds, which suggests that dynamicAL is a good choice regardless of the labeling budget. We also note that dynamicAL works well on data sets with a large number of classes, such as Caltech101, whereas the previous state-of-the-art method, BADGE, cannot be scaled up to such data sets because its memory requirement is linear in the number of classes.
Besides, because dynamicAL depends on pseudo labeling, a relatively large initial labeled set can provide advantages for dynamicAL. It is therefore important to examine whether dynamicAL can work well with a small initial labeled set. As shown in Figure 3, dynamicAL works well with a relatively small initial labeled set (M = 500). Due to limited space, we only show the results under three different settings in Figure 3; more evaluation results are in Appendix E.2. Moreover, although the re-initialization trick makes dynamicAL deviate from the dynamics analysis, we investigate its effect on dynamicAL and provide empirical observations and analysis in Appendix E.5.

Effect of query size and query round. Given a total label budget B, increasing the query size reduces the number of query rounds. We study the influence of different query sizes and query rounds on dynamicAL from two perspectives. First, we study the expected approximation ratio with different query batch sizes on different data sets. As shown in Figure 4, under different settings the expected approximation ratio always converges to 1 as the number of training epochs increases, which further indicates that the query set selected using the approximated change of training dynamics is a reasonably good solution of the query set selection problem. Second, we study the influence of the number of query rounds on the actual performance of the target models. The performance of different target models on different data sets with total budget size B = 1000 is shown in Table 1. For a fixed query budget, our active learning algorithm can be further improved if more query rounds are allowed.

Comparison with different variants. The active learning criterion of dynamicAL can be written as $\|\nabla_\theta \ell(f(x_u;\theta), \hat{y}_u)\|^2 + \gamma \sum_{(x,y)\in S} \nabla_\theta \ell(f(x_u;\theta), \hat{y}_u)^\top \nabla_\theta \ell(f(x;\theta), y)$. We empirically show the performance for γ ∈ {0, 1, 2, ∞} in Figure 5. With γ = 0, the criterion is close to the expected gradient length method [31], and with γ = ∞, the selected samples are the same as those selected by the influence function with identity Hessian matrix criterion [29]. As shown in Figure 5, the model achieves the best performance with γ = 2, which aligns with the value indicated by the theoretical analysis (Equation (15)). This result confirms the importance of theoretical analysis for the design of deep active learning methods.

6 Related work

Neural Tangent Kernel (NTK): Recent studies have shown that, under proper conditions, an infinite-width neural network can be simplified to a linear model with the Neural Tangent Kernel (NTK) [12]. Since then, the NTK has become a powerful theoretical tool for analyzing the behavior of deep learning architectures (CNNs, GNNs, RNNs) [33, 37, 38], random initialization [39], stochastic neural networks [40], and graph neural networks [41] from their output dynamics, and for characterizing convergence and generalization error [13]. Besides, [15] studies the finite-width NTK, aiming to make the NTK more practical.

Active Learning: Active learning aims to interactively query labels for unlabeled data points to maximize model performance [2]. Among others, there are two popular strategies for active learning: diversity sampling [42, 43, 44] and uncertainty sampling [45, 46, 47, 11, 48, 49, 29]. Recently, several papers proposed using gradients to measure uncertainty [49, 11, 29]. However, those methods need to compute a gradient for each class and thus can hardly be applied to data sets with a large number of classes.
Besides, recent works [50, 51] leverage the NTK to analyze contextual bandits with streaming data, which is difficult to apply in our pool-based setting.

7 Conclusion

In this work, we bridge the gap between theoretical findings on deep neural networks and real-world deep active learning applications. By exploring the connection between the generalization performance and the training dynamics, we propose a theory-driven method, dynamicAL, which selects samples to maximize the training dynamics. We prove that the convergence speed of training and the generalization performance are strongly (positively) correlated under the ultra-wide condition, and we show that maximizing the training dynamics leads to lower generalization error. Empirically, we show that dynamicAL not only consistently outperforms strong baselines across various settings, but also scales well to large deep learning models.

8 Acknowledgment

This work is supported by National Science Foundation (IIS-1947203, IIS-2117902, IIS-2137468, IIS-2134079, and CNS-2125626), a joint ACES-ICGA funding initiative via USDA Hatch ILLU802-946, and Agriculture and Food Research Initiative (AFRI) grant no. 2020-67021-32799/project accession no. 1024178 from the USDA National Institute of Food and Agriculture. The views and conclusions are those of the authors and should not be interpreted as representing the official policies of the funding agencies or the government.
1. What is the focus of the paper regarding active learning in deep neural networks? 2. What are the strengths and weaknesses of the proposed approach, particularly in its notation and empirical NTK representation? 3. How does the reviewer assess the correlation between training dynamics and generalization performance in the paper's experiments? 4. What are some limitations regarding the selection of points acquired by dynamicAL compared to other acquisition functions? 5. How does the reviewer suggest improving the empirical analysis of dynamicAL versus other acquisition functions? 6. Are there any connections between this work and self-supervised learning methods that the authors could explore?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper In this paper, the authors address the problem of active learning in the setting where the learning model is over-parameterized with respect to the data. They note that theories of active learning are based upon assumptions that hold for hypothesis classes with lower parameterizations but may not hold in the over-parameterized setting, and propose to use newer tools from generalization theory to theoretically ground active learning for deep neural networks. They propose to use tools for studying the relationship between training dynamics and generalization performance (such as the NTK) as the basis for new deep active learning heuristics to select unlabeled data points in the pooled setting, eschewing traditional methods that select points that maximize either labeled data diversity or model uncertainty. They demonstrate that selecting data points which maximize the rate at which models train (the rate of descent of the training loss as a function of the number of parameter updates) is positively correlated with a lower generalization bound, thus establishing it as a bona fide heuristic for the acquisition function.

Strengths And Weaknesses

Lines 49-52: Will definitely have to ensure that TD can be computed or well-estimated quickly, lest this be impractical.

Line 100: K_{i,j}(x, x') is the inner product of the gradient of the i-th class probability and the gradient of the j-th class probability evaluated at x and x' respectively. I think? It's not totally clear, and having to flip back to the notation section much higher up is not ideal.

Line 122: Similar criticism here. This section includes a lot of notation; it would help the reader here to better understand equation (8) to recall that K_{ij}(X, X) is the empirical NTK.

Line 150: This is a common issue in active learning. This can bias the selection of points, since the point-wise contribution to G(S) is not necessarily additive. See work by Farquhar et al. 2021 for an example in the case of using uncertainty to acquire labels. A test to see how vulnerable G(S) is to this phenomenon would be to run selection with b = 1, and then with b increasing in various intervals, to see how frequently the single points acquired with b = 1 appear in the set of b points acquired in a single batch selection.

Line 154: I presume that K_{ij} is the empirical NTK, but it would help the reader to specify what it represents, as it's not super clear from the derivations in lines 149-155.

Lines 176-178: While using an IF-derived acquisition function makes some sense (in the sense that high-influence points will by identification provoke large-magnitude changes in the model parameters), it's not clear that it's a better or more valuable way to acquire points.

Line 194: "of" missing here.

Lines 217-219: In Theorem 1, the factor ν will change as t → ∞, since it depends on the reciprocal of n^2. Is this accounted for in the proof?

Line 260: I would reformulate this sentence here, since evaluating one experiment for two query rounds on one network and dataset does not do much to convince the reader that Lemma 2 holds. I think the authors should broaden the datasets they use to establish validity across other datasets, especially with different class imbalances.

Line 263: This figure is nice, but it lacks some context. How good is the correlation between the training dynamics proxy and alignment?
What would help readers understand this better would be a comparison of different acquisition functions and how they correlate with alignment. This correlation argument would become much stronger.

Section 5: There are two key points missing in the empirical analysis of dynamicAL versus other acquisition functions. The first is an analysis of the degree of overlap between the points selected by dynamicAL and other methods: how many of the same points are chosen, and does the membership of the sets of acquired points converge over time, or do they diverge as successive rounds of acquisition take place? The second is that in both Figures 3 and 5 we see only the means but not the variances of each set of retraining experiments. I suspect that there are two components contributing to the performance of each algorithm: the first is the selection of the points acquired, as well as the smaller differences in training dynamics for each network due to initialization, the batch order in which the data is presented to the models, or even GPU noise. It would be more convincing to see error bars, so that we can gauge how much of an effect the selection of points has on the accuracy.

Questions

Lines 284-285: I would recommend trying this out on datasets with more severe class imbalances. Each of CIFAR10, SVHN and Caltech101 is rather well behaved and doesn't exhibit any class imbalance, making them odd choices for active learning. Perhaps this is why there does not seem to be much of a drastic difference in the results of Figure 3?

Line 162: The use of pseudo labeling in this paper connects this work to recent work on self-supervised learning. I wonder if the authors have considered the implications of what current theories of self-supervised learning can do to inform this work, or what this work could do to transform a self-supervised learning method into a few-shot method that would perform better?

Limitations

Yes, they have.
NIPS
1. What is the main contribution of the paper regarding active learning and training dynamics? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its originality, quality, clarity, and significance? 3. Do you have any concerns or suggestions regarding the experimental results and their presentation? 4. How does the reviewer assess the correlation between convergence speed and generalization in the paper's analysis? 5. Are there any limitations in the paper that need to be addressed?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The authors prove that convergence speed of training and generalization are positively correlated. Based on this observation, the authors proposed a novel active learning algorithm (dynamicAL), which maximizes training dynamics. DynamicAL is theoretically motivated and generalizes existing methods such as uncertainty sampling methods and influence function methods. Strengths And Weaknesses Originality: The paper examines the training dynamics of Neural Tangent Kernel, which is suitable for theoretical analysis. The authors proved that convergence speed and generalization are positively correlated. The authors proposed a new objective for an active learning algorithm: optimize the speed of training convergence. Quality: The results are novel and interesting. However, the quality of the bound is not clear. For example, the authors introduce the notion of label alignment and show that label alignment is correlated with convergence and generalization. Based on this correlation, the authors conclude that convergence and generalization are positively correlated. While it is logical, I believe it oversimplifies the dependency between convergence and generalization given that the bound is an approximation and the quality of the bound was not assessed. Experimental results can be improved by measuring performance for all rounds (> 10 rounds until the training set is exhausted). Currently, the authors evaluate and compared methods for less than 10 query rounds. It is not clear if the proposed method outperforms the baselines in all rounds. The improvement is rather small (often less than 1%). Clarity: In general, the paper is easy to read. However, its flow and presentation can be improved. For example, the authors first present the method and later present a theoretical analysis of “Train faster → generalize better”. I suggest that some of the theoretical results can be presented together with the method to improve the presentation quality. Other, unnecessary lemmas should be moved to the appendix. Then, the authors can expand the theoretical section and discussion of related work sections. Significance: The authors’ idea of optimizing for convergence speed is interesting and of potential significance, as it can be explored by a wider community in other fields. However, the authors should improve the presentation and clarify of the paper, while also updating experimental results. Questions Can you compare the proposed method with other baseline methods for rounds > 10? Can you discuss the tightness of the bound in the relationships between convergence and generalization? Limitations Yes
NIPS
Title If Influence Functions are the Answer, Then What is the Question?

Abstract

Influence functions efficiently estimate the effect of removing a single training data point on a model's learned parameters. While influence estimates align well with leave-one-out retraining for linear models, recent works have shown this alignment is often poor in neural networks. In this work, we investigate the specific factors that cause this discrepancy by decomposing it into five separate terms. We study the contributions of each term on a variety of architectures and datasets and how they vary with factors such as network width and training time. While practical influence function estimates may be a poor match to leave-one-out retraining for nonlinear networks, we show that they are often a good approximation to a different object we term the proximal Bregman response function (PBRF). Since the PBRF can still be used to answer many of the questions motivating influence functions, such as identifying influential or mislabeled examples, our results suggest that current algorithms for influence function estimation give more informative results than previous error analyses would suggest.

1 Introduction

The influence function [Hampel, 1974, Cook, 1979] is a classic technique from robust statistics that estimates the effect of deleting a single data example (or a group of data examples) from a training dataset. Formally, given a neural network with learned parameters θ⋆ trained on a dataset D, we are interested in the parameters θ⋆_{−z} learned by training on a dataset D − {z} constructed by deleting a single training example z from D. By taking the second-order Taylor approximation to the cost function around θ⋆, influence functions approximate the parameters θ⋆_{−z} without the computationally prohibitive cost of retraining the model. Since Koh and Liang [2017] first deployed influence functions in machine learning, influence functions have been used to solve various tasks such as explaining a model's predictions [Koh and Liang, 2017, Han et al., 2020], relabelling harmful training examples [Kong et al., 2021], carrying out data poisoning attacks [Koh et al., 2022], increasing fairness in models' predictions [Brunet et al., 2019, Schulam and Saria, 2019], and learning data augmentation techniques [Lee et al., 2020]. When the training objective is strongly convex (e.g., as in logistic regression with L2 regularization), influence functions are expected to align well with leave-one-out (LOO) or leave-k-out retraining [Koh and Liang, 2017, Koh et al., 2019, Izzo et al., 2021]. However, Basu et al. [2020a] showed that influence functions in neural networks often do not accurately predict the effect of retraining the model and concluded that influence estimates are often “fragile” and “erroneous”. Because of the poor match between influence estimates and LOO retraining, influence function methods are often evaluated with alternative metrics such as the detection rate of maliciously corrupted examples using influence scores [Khanna et al., 2019, Koh and Liang, 2017, Schioppa et al., 2021, K and Søgaard, 2021]. However, these indirect signals make it difficult to develop algorithmic improvements to influence function estimation.
If one is interested in improving certain aspects of influence function estimation, such as the linear system solver, it would be preferable to have a well-defined quantity that influence function estimators are approximating so that algorithmic choices could be directly evaluated based on the accuracy of their estimates. In this work, we investigate the source of the discrepancy between influence functions and LOO retraining in neural networks. We decompose the discrepancy into five components: (1) the difference between cold-start and warm-start response functions (a concept elaborated on below), (2) an implicit proximity regularizer, (3) influence estimation on non-converged parameters, (4) linearization, and (5) approximate solution of a linear system. This decomposition was chosen to capture all gaps and errors caused by approximations and assumptions made in applying influence functions to neural networks. We empirically evaluate the contributions of each component on binary classification, regression, image reconstruction, image classification, and language modeling tasks and show that, across all tasks, components (1–3) are most responsible for the discrepancy between influence functions and LOO retraining. We further investigate how the contribution of each component changes in response to the change in network width and depth, weight decay, training time, damping, and the number of data points being removed. Moreover, we show that while influence functions for neural networks are often a poor match to LOO retraining, they are a much better match to what we term the proximal Bregman response function (PBRF). Intuitively, the PBRF approximates the effect of removing a data point while trying to keep the predictions consistent with those of the (partially) trained model. From this perspective, we reframe misalignment components (1–3) as simply reflecting the difference between LOO retraining and the PBRF. The gap between the influence function estimate and the PRBF only comes from sources (4) and (5), which we found empirically to be at least an order of magnitude smaller for most neural networks. As a result, on a wide variety of tasks, influence functions closely align with the PBRF while failing to approximate the effect of retraining the model, as shown in Figure 1. The PBRF can be used for many of the same use cases that have motivated influence functions, such as finding influential or mislabeled examples [Schioppa et al., 2021] and carrying out data poisoning attacks [Koh and Liang, 2017, Koh et al., 2022], and can therefore be considered an alternative to LOO retraining as a gold standard for evaluating influence functions. Hence, we conclude that influence functions applied to neural networks are not inherently “fragile” as is often believed [Basu et al., 2020a], but instead can be seen as giving accurate answers to a different question than is normally assumed. 2 Related Work Instance-based interpretability methods are a class of techniques that explain a model’s predictions in terms of the examples on which the model was trained. Methods of this type include TracIn [Pruthi et al., 2020], Representer Point Selection [Yeh et al., 2018], Grad-Cos and Grad-Dot [Charpiat et al., 2019, Hanawa et al., 2021], MMD-critic [Kim et al., 2016], unconditional counterfactual explanations [Wachter et al., 2018], and of central focus in this paper, influence functions. 
Since its adoption in machine learning by Koh and Liang [2017], multiple extensions and improvements upon influence functions have also been proposed, such as variants that use Fisher kernels [Khanna et al., 2019], higher-order approximations [Basu et al., 2020b], tricks for faster and scalable inference [Guo et al., 2021, Schioppa et al., 2021], group influence formulations [Koh et al., 2019, Basu et al., 2020b], and relative local weighting [Barshan et al., 2020]. However, many of these methods rely on the same strong assumptions made in the original influence function derivation that the objective needs to be strongly convex and influence functions must be computed on the optimal parameters. In general, influence functions are assumed to approximate the effects of leave-one-out (LOO) retraining from scratch, the parameters of the network that are trained without a data point of interest. Hence, measuring the quality of influence functions is often performed by analyzing the correlation between LOO retraining and influence function estimations [Koh and Liang, 2017, Basu et al., 2020a,b, Yang and Chaudhuri, 2022]. However, recent empirical analyses have demonstrated the fragility of influence functions and a fundamental misalignment between their assumed and actual effects [Basu et al., 2020a, Ghorbani et al., 2019, K and Søgaard, 2021]. For example, Basu et al. [2020a] argued that the accuracy of influence functions in deep networks is highly sensitive to network width and depth, weight decay strength, inverse-Hessian vector product estimation methodology, and test query point by measuring the alignment between influence functions and LOO retraining. Because of the inherent misalignment between influence estimations and LOO retraining in neural networks, many works often evaluate the accuracy of the influence functions on an alternative metric, such as the recovery rate of maliciously mislabelled or poisoned data using influence functions [Khanna et al., 2019, Koh and Liang, 2017, Schioppa et al., 2021, K and Søgaard, 2021]. In this work, instead of interpreting the misalignment between influence functions and LOO retraining as a failure, we claim that it simply reflects that influence functions answer a different question than is typically assumed. 3 Background Consider a prediction task from an input space X to a target space T where we are given a finite training dataset Dtrain = {(x(i), t(i))}Ni=1. Given a data point z = (x, t), let y = f(θ,x) be the prediction of the network parameterized by θ ∈ Rd and L(y, t) be the loss (e.g., squared error or cross-entropy). We aim to solve the following optimization problem: θ⋆ = argmin θ∈Rd J (θ) = argmin θ∈Rd 1 N N∑ i=1 L(f(θ,x(i)), t(i)), (1) where J (·) is the cost function. If the regularization (e.g., L2 regularization) is imposed in the cost function, we fold the regularization terms into the loss function. We summarize the notation used in this paper in Appendix A. 3.1 Downweighting a Training Example The training objective in Eqn. 1 aims to find the parameters that minimize the average loss on all training examples. Herein, we are interested in studying the change in optimal model parameters when a particular training example z = (x, t) ∈ Dtrain is removed from the training dataset, or more generally, when the data point z is downweighted by an amount ϵ ∈ R. Formally, this corresponds to minimizing the following downweighted objective: θ⋆−z,ϵ = argmin θ∈Rd Q−z(θ, ϵ) = argmin θ∈Rd J (θ)− L(f(θ,x), t)ϵ. 
(2) When ϵ = 1/N, the downweighted objective reduces to the cost over the dataset with the example z removed, up to a constant factor. To see how the optimum of the downweighted objective responds to changes in the downweighting factor ϵ, we define the response function r⋆−z : R → Rd by:

r^\star_{-z}(\epsilon) = \operatorname*{argmin}_{\theta \in \mathbb{R}^d} Q_{-z}(\theta, \epsilon), \quad (3)

where we assume that the downweighted objective is strongly convex and hence the solution to the downweighted objective is unique given some factor ϵ. Under these assumptions, note that r⋆−z(0) = θ⋆ and the response function is differentiable at 0 by the Implicit Function Theorem [Krantz and Parks, 2002, Griewank and Walther, 2008]. Influence functions approximate the response function by performing a first-order Taylor expansion around ϵ0 = 0:

r^\star_{-z,\mathrm{lin}}(\epsilon) = r^\star_{-z}(\epsilon_0) + \left.\frac{d r^\star_{-z}}{d\epsilon}\right|_{\epsilon=\epsilon_0} (\epsilon - \epsilon_0) = \theta^\star + \left(\nabla^2_{\theta} \mathcal{J}(\theta^\star)\right)^{-1} \nabla_{\theta} \mathcal{L}(f(\theta^\star, \mathbf{x}), t)\, \epsilon. \quad (4)

We refer readers to Van der Vaart [2000] and Appendix B for a detailed derivation. The optimal parameters trained without z can then be approximated by plugging ϵ = 1/N into Eqn. 4. Influence functions can further approximate the loss of a particular test point ztest = (xtest, ttest) when a data point z is eliminated from the training set using the chain rule [Koh and Liang, 2017]:

\mathcal{L}(f(r^\star_{-z,\mathrm{lin}}(1/N), \mathbf{x}_{\mathrm{test}}), t_{\mathrm{test}}) \approx \mathcal{L}(f(\theta^\star, \mathbf{x}_{\mathrm{test}}), t_{\mathrm{test}}) + \frac{1}{N} \nabla_{\theta} \mathcal{L}(f(\theta^\star, \mathbf{x}_{\mathrm{test}}), t_{\mathrm{test}})^\top \left.\frac{d r^\star_{-z}}{d\epsilon}\right|_{\epsilon=0} = \mathcal{L}(f(\theta^\star, \mathbf{x}_{\mathrm{test}}), t_{\mathrm{test}}) + \frac{1}{N} \nabla_{\theta} \mathcal{L}(f(\theta^\star, \mathbf{x}_{\mathrm{test}}), t_{\mathrm{test}})^\top \left(\nabla^2_{\theta} \mathcal{J}(\theta^\star)\right)^{-1} \nabla_{\theta} \mathcal{L}(f(\theta^\star, \mathbf{x}), t). \quad (5)

3.2 Influence Function Estimation in Neural Networks

Influence functions face two main challenges when deployed on neural networks. First, the influence estimation (shown in Eqn. 4) requires computing an inverse Hessian-vector product (iHVP). Unfortunately, storing and inverting the Hessian requires O(d^3) operations and is infeasible to compute for modern neural networks. Instead, Koh and Liang [2017] tractably approximate the iHVP using truncated non-linear conjugate gradient (CG) [Martens et al., 2010] or the LiSSA algorithm [Agarwal et al., 2016]. Both approaches avoid explicit computation of the Hessian inverse (see Appendix G for details) and only require O(Nd) operations to approximate the influence function. Second, the derivation of influence functions assumes a strongly convex objective, which is often not satisfied for neural networks. The Hessian may be singular, especially when the parameters have not fully converged, due to non-positive eigenvalues. To enforce positive-definiteness of the Hessian, Koh and Liang [2017] add a damping term in the iHVP. Teso et al. [2021] further approximate the Hessian with the Fisher information matrix (which is equivalent to the Gauss-Newton Hessian [Martens, 2014] for commonly used loss functions such as cross-entropy) as follows:

r^\star_{-z,\mathrm{damp},\mathrm{lin}}(\epsilon) \approx \theta^\star + \left(J_{y\theta^\star}^\top H_{y^\star} J_{y\theta^\star} + \lambda I\right)^{-1} \nabla_{\theta} \mathcal{L}(f(\theta^\star, \mathbf{x}), t)\, \epsilon, \quad (6)

where J_{yθ⋆} is the parameter-output Jacobian and H_{y⋆} is the Hessian of the cost with respect to the network outputs, both evaluated at the optimal parameters θ⋆. Here, G⋆ = J_{yθ⋆}^⊤ H_{y⋆} J_{yθ⋆} is the Gauss-Newton Hessian (GNH) and λ > 0 is a damping term to ensure the invertibility of the GNH. Unlike the Hessian, the GNH is guaranteed to be positive semidefinite as long as the loss function is convex as a function of the network outputs [Martens et al., 2010].

4 Understanding the Discrepancy between Influence Function and LOO Retraining in Neural Networks

In this section, we investigate several factors responsible for the misalignment between influence functions and LOO retraining.
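Before turning to the decomposition, it may help to make the quantity being analyzed concrete. The sketch below illustrates, under simplifying assumptions, how the damped influence estimate of Section 3.2 could be computed for a small model: the inverse Hessian-vector product is approximated with conjugate gradient on Hessian-vector products obtained by double backward. This is a minimal illustration rather than the authors' implementation; it uses the Hessian directly instead of the Gauss-Newton approximation, omits the LiSSA variant used in the experiments, and names such as `model`, `loss_fn`, `x_train`, and `t_train` are placeholders.

```python
import torch

def flat_grad(scalar, params, create_graph=False):
    # Flattened gradient of a scalar w.r.t. a list of parameters.
    grads = torch.autograd.grad(scalar, params, create_graph=create_graph)
    return torch.cat([g.reshape(-1) for g in grads])

def hvp(loss_closure, params, v):
    # Hessian-vector product via double backward; the loss is recomputed so
    # that each call (one per CG iteration) builds a fresh graph.
    g = flat_grad(loss_closure(), params, create_graph=True)
    return flat_grad(g @ v, params)

def conjugate_gradient(mvp, b, damping, iters=100, tol=1e-10):
    # Solve (H + damping * I) x = b with plain conjugate gradient.
    x = torch.zeros_like(b)
    r = b.clone()
    p = r.clone()
    rs_old = r @ r
    for _ in range(iters):
        Ap = mvp(p) + damping * p
        alpha = rs_old / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if rs_new.sqrt() < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

def influence_on_test_loss(model, loss_fn, x_train, t_train, z, z_test, damping=1e-3):
    # Predicted change in test loss when z is removed (Eqn. 5 with epsilon = 1/N),
    # using a damped iHVP in place of the exact inverse Hessian.
    params = [p for p in model.parameters() if p.requires_grad]
    N = x_train.shape[0]
    train_loss = lambda: loss_fn(model(x_train), t_train)

    x_test, t_test = z_test
    test_grad = flat_grad(loss_fn(model(x_test), t_test), params)

    # s_test = (H + damping * I)^{-1} grad_test
    s_test = conjugate_gradient(lambda v: hvp(train_loss, params, v),
                                test_grad, damping)

    x_z, t_z = z
    z_grad = flat_grad(loss_fn(model(x_z), t_z), params)
    return (s_test @ z_grad) / N
```

With sufficient damping, CG behaves reasonably even when the Hessian is not positive definite, but this is only a sketch of the estimator being discussed, not a drop-in replacement for the GNH/LiSSA procedure used later in the experiments.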
Specifically, we decompose the misalignment into five separate terms: (1) the warm-start gap, (2) the damping gap, (3) the non-convergence gap, (4) the linearization error, and (5) the solver error. This decomposition captures all approximations and assumption violations when deploying influence functions in neural networks. By summing the parameter (or outputs) differences introduced by each term we can bound the parameter (or outputs) difference between LOO retraining and influence estimates. We use the term “gap” rather than “error” for the first three terms to emphasize that they reflect differences between solutions to different influence-related questions, rather than actual errors. For all models we investigate, we find that the first three sources dominate the misalignment, indicating that the misalignment reflects not algorithmic errors but rather the fact that influence function estimators are answering a different question from what is normally assumed. All proximal objectives are summarized in Table 1 and we provide the derivations in Appendix B. 4.1 Warm-Start Gap: Non-Strongly Convex Training Objective By taking a first-order Taylor approximation of the response function at ϵ0 = 0 (Eqn. 4), influence functions approximate the effect of removing a data point z at a local neighborhood of the optimum θ⋆. Hence, influence approximation has a more natural connection to the retraining scheme that initializes the network at the current optimum θ⋆ (warm-start retraining) than the scheme that initializes the network randomly (cold-start retraining). The warm-start optimum is equivalent to the cold-start optimum when the objective is strongly convex (where the solution to the response function is unique), making the influence estimation close to the LOO retraining on logistic regression with L2 regularization. However, the equivalence between warm-start and coldstart optima is not typically guaranteed in neural networks [Vicol et al., 2022a]. Particularly, in the overparametrized regime (N < d), neural networks exhibit multiple global optima, and their converged solutions depend highly on the specifics of the optimization dynamics [Lee et al., 2019, Arora et al., 2019, Bartlett et al., 2020, Amari et al., 2020]. For quadratic cost functions, gradient descent with initialization θ0 converges to the optimum that achieves the minimum L2 distance from θ0 [Hastie et al., 2022]. This phenomenon of the converged parameters being dependent on the initialization hinders influence functions from accurately predicting the effect of retraining the model from scratch as shown in Figure 2. We denote the discrepancy between cold-start and warm-start optima as warm-start gap. 4.2 Proximity Gap: Addition of Damping Term in iHVP In practical settings, we often impose a damping term (Eqn. 6) in influence approximations to ensure that the cost Hessian is positive-definite and hence invertible. As adding a damping term in influence estimations is equivalent to adding L2 regularization to the cost function [Martens et al., 2010], when damping is used, influence functions can be seen as linearizing the following proximal response function at ϵ0 = 0: r⋆−z,damp(ϵ) = argmin θ∈Rd Q−z(θ, ϵ) + λ 2 ∥θ − θ⋆∥2. (7) See Appendix B.2 for the derivation. Note that λ > 0 is a damping strength and our use of “proximal” is based on the notion of proximal equilibria [Farnia and Ozdaglar, 2020]. Intuitively, the proximal objective in Eqn. 
7 not only minimizes the downweighted objective but also encourages the parameters to stay close to the optimal parameters at ϵ0 = 0. Hence, when the damping term is used in the iHVP, influence functions aim at approximating the warm-start retraining scheme with a proximity term that penalizes the L2 distance between the new estimate and the optimal parameters. We call the discrepancy between the warm-start and proximal warm-start optima the proximity gap. Interestingly, past works have observed that for quadratic cost functions, early stopping has a similar effect to L2 regularization [Vicol et al., 2022a, Ali et al., 2019]. Therefore, the proximal response function can be thought of as capturing how gradient descent will respond to a dataset perturbation if it takes only a limited number of steps starting from the warm-start solution.

4.3 Non-Convergence Gap: Influence Estimation on Non-Converged Parameters

Thus far, our analysis has assumed that influence functions are computed on fully converged parameters θ⋆ at which the gradient of the cost is 0. However, in neural network training, we often terminate the optimization procedure before reaching the exact optimum for several reasons, including limited computational resources or the desire to avoid overfitting [Bengio, 2012]. In such situations, much of the change in the parameters from LOO retraining simply reflects the effect of training for longer, rather than the effect of removing a training example, as illustrated in Figure 3. What we desire from influence functions is to understand the effect of removing the training example; the effect of extended training is simply a nuisance. Therefore, to the extent that this factor contributes to the misalignment between influence functions and LOO retraining, influence functions are arguably more useful than LOO retraining. Since training the network to convergence may be impractical or undesirable, we instead modify the response function by replacing the original training objective with a similar one for which the (possibly non-converged) final parameters θs are optimal. Here, we assume the loss function L(·, ·) is convex as a function of the network outputs; this is true for commonly used loss functions such as squared error or cross-entropy. We replace the training loss with a term that penalizes mismatch to the predictions made by θs (hence implying that θs is optimal). Our proximal Bregman response function (PBRF) is defined as follows:

r^b_{-z,\mathrm{damp}}(\epsilon) = \operatorname*{argmin}_{\theta \in \mathbb{R}^d} \frac{1}{N} \sum_{i=1}^{N} D_{\mathcal{L}^{(i)}}\!\left(f(\theta, \mathbf{x}^{(i)}), f(\theta^s, \mathbf{x}^{(i)})\right) - \mathcal{L}(f(\theta, \mathbf{x}), t)\,\epsilon + \frac{\lambda}{2} \|\theta - \theta^s\|^2, \quad (8)

where D_{\mathcal{L}^{(i)}}(·, ·) is the Bregman divergence defined as:

D_{\mathcal{L}^{(i)}}(\mathbf{y}, \mathbf{y}^s) = \mathcal{L}(\mathbf{y}, t^{(i)}) - \mathcal{L}(\mathbf{y}^s, t^{(i)}) - \nabla_{\mathbf{y}} \mathcal{L}(\mathbf{y}^s, t^{(i)})^\top (\mathbf{y} - \mathbf{y}^s). \quad (9)

The PBRF defined in Eqn. 8 is composed of three terms. The first term measures the functional discrepancy between the current estimate and the parameters θs in Bregman divergence, and its role is to prevent the new estimate from drastically altering the predictions on the training dataset. One way of understanding this term in the cases of squared error or cross-entropy losses is that it is equivalent to the training error on a dataset where the original training labels are replaced with soft targets obtained from the predictions made by θs. The second term is the negative loss on the data point z = (x, t), which aims to respond to the deletion of a training example. The final term is simply the proximity term described before.
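To make the PBRF concrete, the following sketch shows one way the proximal Bregman objective of Eqn. 8 could be optimized in practice by fine-tuning from the (possibly non-converged) parameters θs for a modest number of steps. It is a minimal illustration rather than the authors' implementation: `model`, `loss_fn`, `x_train`, `t_train`, and the optimizer settings are placeholder assumptions, the code assumes a mean-reduction loss and a dataset small enough to process in one batch, and setting ϵ = 1/N corresponds to removing a single example.

```python
import copy
import torch

def pbrf_finetune(model, loss_fn, x_train, t_train, z, epsilon,
                  damping=1e-3, steps=100, lr=1e-2):
    # Optimize the proximal Bregman objective of Eqn. 8, warm-started at the
    # (possibly non-converged) parameters theta_s currently held by `model`.
    theta_s = copy.deepcopy(model)              # frozen reference network
    for p in theta_s.parameters():
        p.requires_grad_(False)

    new_model = copy.deepcopy(model)            # parameters being optimized
    opt = torch.optim.SGD(new_model.parameters(), lr=lr)
    x_z, t_z = z                                # deleted example, batched as [1, ...]

    for _ in range(steps):
        opt.zero_grad()
        y = new_model(x_train)

        # Reference predictions and their output-space gradient, used to form
        # the Bregman divergence D_L(y, y_s) of Eqn. 9.
        y_s = theta_s(x_train).detach().requires_grad_(True)
        loss_s = loss_fn(y_s, t_train)
        grad_y_s = torch.autograd.grad(loss_s, y_s)[0]

        # With a mean-reduction loss this equals (1/N) sum_i D_{L(i)}(y_i, y_i^s).
        bregman = (loss_fn(y, t_train) - loss_s.detach()
                   - (grad_y_s * (y - y_s.detach())).sum())

        # Downweighting term: subtract the loss on the deleted point, scaled by epsilon.
        removed = loss_fn(new_model(x_z), t_z) * epsilon

        # Proximity term keeping the parameters close to theta_s.
        prox = sum(((p - q) ** 2).sum()
                   for p, q in zip(new_model.parameters(), theta_s.parameters()))

        objective = bregman - removed + 0.5 * damping * prox
        objective.backward()
        opt.step()

    return new_model

# Example: approximate the PBRF for deleting the first training example.
# pbrf_model = pbrf_finetune(model, loss_fn, x_train, t_train,
#                            (x_train[:1], t_train[:1]), epsilon=1.0 / len(x_train))
```

The warm start and the explicit proximity term mirror how the PBRF is positioned relative to warm-start retraining in the discussion above; only the number of fine-tuning steps and learning rate are arbitrary choices here.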
In Appendix B.3, we further show that the influence function on non-converged parameters is equivalent to the first-order approximation of PBRF instead of the first-order approximation of proximal response function for linear models. Rather than computing the LOO retrained parameters by performing K additional optimization steps under the original training objective, we can instead perform K optimization steps under the proximal Bregman objective. The difference between the resulting parameter vectors is what we call the non-convergence gap. 4.4 Linearization Error: A First-order Taylor Approximation of the Response Function The key idea behind influence functions is the linearization of the response function. To simulate the local approximations made in influence functions, we define the linearized PBRF as: rb−z,damp,lin(ϵ) = argmin θ∈Rd 1 N N∑ i=1 DL(i)quad (flin(θ,x (i)), f(θs,x(i))) −∇θL(f(θs,x), t)⊤θϵ+ λ 2 ∥θ − θs∥2, (10) where Lquad(·, ·) is the second-order expansion of the loss around ys and flin(·, ·) is the linearization of the network outputs with respect to the parameters. The optimal solution to the linearized PBRF is equivalent to the influence estimation at the parameters θs with the GNH approximation and a damping term λ (see Appendix B.4 for the derivation). As the linearized PBRF relies on several local approximations, the linearization error increases when the downweighting factor magnitude |ϵ| is large or the PBRF is highly non-linear. We refer to the discrepancy between the PBRF and linearized PBRF as the linearization error. 4.5 Solver Error: A Crude Approximation of iHVP As the precise computation of the iHVP is computationally infeasible, in practice, we use truncated CG or LiSSA to efficiently approximate influence functions [Koh and Liang, 2017]. Unfortunately, these efficient linear solvers introduce additional error by crudely approximating the iHVP. Moreover, different linear solvers can introduce specific biases in the influence estimation. For example, Vicol et al. [2022b] show that the truncated LiSSA algorithm implicitly adds an additional damping term in the iHVP. We use solver error to refer to the difference between the linearized PBRF and the influence estimation computed by a linear solver. Interestingly, Koh and Liang [2017] reported that the LiSSA algorithm gave more accurate results than CG. We have determined that this difference resulted not from any inherent algorithmic advantage to LiSSA, but rather from the fact that the software used different damping strengths for the two algorithms, thereby resulting in different weightings of the proximity term in the proximal response function. 5 PBRF: The Question Influence Functions are Really Answering The PBRF (Eqn. 8) approximates the effect of removing a data point while trying to keep the predictions consistent with those of the (partially) trained model. Since the discrepancy between the PBRF and influence function estimates is only due to the linearization and solver errors, the PBRF can be thought of as better representing the question that influence functions are trying to answer. Reframing influence functions in this way means that the PBRF can be regarded as a gold-standard ground truth for evaluating methods for influence function approximation. Existing analyses of influence functions [Basu et al., 2020a] rely on generating LOO retraining ground truth estimates by imposing strong L2 regularization or training till convergence without early stopping. 
However, these conditions do not accurately reflect the typical way neural networks are trained in practice. In contrast, our PBRF formulation does not require the addition of any regularizers or modified training regimes and can be easily optimized. In addition, although the PBRF may not necessarily align with LOO retraining due to the warm-start, proximity, and non-convergence gaps, the motivating use cases for influence functions typically do not rely on exact LOO retraining. This means that the PBRF can be used in place of LOO retraining for many tasks such as identifying influential or mislabelled examples, as demonstrated in Appendix D.3. In these cases, influence functions are still useful since they provide an efficient way of approximating PBRF estimates. 6 Experiments Our experiments investigate the following questions: (1) What factors discussed in Section 4 contribute most to the misalignment between influence functions and LOO retraining? (2) While influence functions fail to approximate the effect of retraining, do they accurately approximate the PBRF? (3) How do changes in weight decay, damping, the number of total epochs, and the number of removed training examples affect each source of misalignment? In all experiments, we first train the base network with the entire dataset to obtain the parameters θs. We repeat the training procedure 20 times with a different random training example deleted. The cold-start retraining begins from the same initialization used to train θs. All proximal objectives are trained with initialization θs for 50% of the epochs used to train the base network. Lastly, we use the LiSSA algorithm with GNH approximation to compute influence functions. Since we are primarily interested in the effect of deleting a data point on model’s predictions, we measure the discrepancy of each gap and error using the average L2 distance between networks’ outputs E(x,·)∼Dtrain [∥f(θ,x)− f(θ′,x)∥] on the training dataset. We provide the full experimental set-up and additional experiments in Appendix C and D, respectively. 6.1 Influence Misalignment Decomposition We first applied our decomposition to various models trained on a broad range of tasks covering binary classification, regression, image reconstruction, image classification, and language modeling. The summary of our results is provided in Figure 4 and Table 5 (Appendix E). Across all tasks, we found that the first three sources dominate the misalignment, indicating influence function estimators are answering a different question from what is normally assumed. Small linearization and solver errors indicate that influence functions accurately answer the modified question (PBRF). Logistic Regression. We analyzed the logistic regression (LR) model trained on the Cancer and Diabetes classification datasets from the UCI collection [Dua and Graff, 2017]. We trained the model using L-BFGS [Liu and Nocedal, 1989] with L2 regularization of 0.01 and damping term of λ = 0.001. As the training objective is strongly convex and the base model parameters were trained till convergence, in Table 5, we observed that each source of misalignment is significantly low. Hence, in the case of logistic regression with L2 regularization, influence functions accurately capture the effect of retraining the model without a data point. Multilayer Perceptron. Next, we applied our analysis to the 2-hidden layer Multilayer Perceptron (MLP) with ReLU activations. 
We conducted the experiments in two settings: (1) regression on the Concrete and Energy datasets from the UCI collection and (2) image classification on 10% of the MNIST [Deng, 2012] and FashionMNIST [Xiao et al., 2017] datasets, following the set-up from Koh and Liang [2017] and Basu et al. [2020a]. We trained the networks for 1000 epochs using stochastic gradient descent (SGD) with a batch size of 128 and set a damping strength of λ = 0.001. As opposed to linear models, MLPs violate the assumptions in the influence derivation and we observed an increase in gaps and errors on all five factors. We observed that warm-start, proximity, and the non-convergence gaps contribute more to the misalignment than linearization and solver errors. The average network’s predictions for PBRF were similar to that computed by the LiSSA algorithm, demonstrating that influence functions are still a good approximation to PBRF. Autoencoder. Next, we applied our framework to an 8-layer autoencoder (AE) on the full MNIST dataset. We followed the experimental set-up from Martens and Grosse [2015], where the encoder and decoder each consist of 4 fully-connected layers with sigmoid activation functions. We trained the network for 1000 epochs using SGD with momentum. We set the batch size to 1024, used L2 regularization of 10−5 with a damping factor of λ = 0.001. In accordance with the findings from our MLP experiments, the warm-start, proximity, and non-convergence gaps were more significant than the linearization and solver errors, and influence functions accurately predicted the PBRF. Convolutional Neural Networks. To investigate the source of discrepancy on largerscale networks, we trained a set of convolutional neural networks of increasing complexity and size. Namely, LeNet [Lecun et al., 1998], AlexNet [Krizhevsky et al., 2012], VGG13 Simonyan and Zisserman [2014], and ResNet-20 [He et al., 2015] were trained on 10% of the MNIST dataset and the full CIFAR10 [Krizhevsky, 2009] dataset. We trained the base network for 200 epochs on both datasets with a batch size of 128. For MNIST, we kept the learning rate fixed throughout training, while for CIFAR10, we decayed the learning rate by a factor of 5 at epochs 60, 120, and 160, following Zagoruyko and Komodakis [2016]. We used L2 regularization with strength 5 · 10−4 and a damping factor of λ = 0.001. Consistent with the findings from our MLP and autoencoder experiments, the first three gaps were more significant than linearization and solver errors. We further compared influence functions’ approximations on the difference in test loss when a random training data point is removed with the value obtained from cold-start retraining, warm-start retraining, and PBRF in Table 2. We used both Pearson [Sedgwick, 2012] and Spearman rank-order correlation [Spearman, 1961] to measure the alignment. While the test loss predicted by influence functions does not align well with the values obtained by cold-start and warm-start retraining schemes, they show high correlations when compared to the estimates given by PBRF. Transformer. Finally, we trained 2-layer Transformer language models on the Penn Treebank (PTB) [Marcus et al., 1993] dataset. We set the number of hidden dimensions to 256 and the number of attention heads to 2. As we observed that model overfits after a few epochs of training, we trained the base network for 10 epochs using Adam. 
Notably, we observed that the non-convergence gap had the most considerable contribution to the discrepancy between influence functions and LOO retraining. Consistent with our previous findings, the first tree gaps had more impact on the discrepancy compared to linearization and solver errors. 6.2 Factors in Influence Misalignment We further analyzed how the contribution of each component changes in response to changes in network width and depth, training time, weight decay, damping, and the percentage of data removed. We used an MLP trained on 10% of the MNIST dataset and summarized results in Figure 5. Width and Depth. As we increase the width of the network, we observe a decrease in the linearization error. This is consistent with previous observations that networks behave more linearly as the width is increased [Lee et al., 2019]. In contrast to the findings from Basu et al. [2020a], we did not observe a strong relationship between the contribution of the components and the depth of the network. Training Time. Unsurprisingly, as we increase the number of training epochs, we observe a decrease in the non-convergence gap. We hypothesize that, as we increase the training epoch, the cost gradient reaches 0, resulting in better alignment between the proximal response function and PBRF. Weight Decay. The weight decay allows the training objective to be better conditioned. Consequently, as weight decay increases, the training objective may act more as a strictly convex objective, resulting in a decrease in overall discrepancy for all components. Basu et al. [2020a] also found that the alignment between influence functions and LOO retraining increases as weight decay increases. Damping. A higher damping term makes linear systems better conditioned, allowing solvers to find accurate solutions in fewer iterations [Demmel, 1997], thereby reducing the solver error. Furthermore, the higher proximity term keeps the parameters close to θs, reducing the linearization error. On the other hand, increasing the effective proximity penalty directly increases the proximity gap. Percentage of Training Examples Removed. As we remove more training examples from the dataset the PBRF becomes more non-linear and we observe a sharp increase in the linearization error. The cost landscape is also more likely to change as we remove more training examples, and we observe a corresponding increase in the warm-start gap. 7 Conclusion In this paper, we investigate the sources of the discrepancy between influence functions and LOO retraining in neural networks. We decompose this difference into five distinct components: the warmstart gap, proximity gap, non-convergence gap, linearization error, and solver error. We empirically evaluate the contributions of each of these components on a wide variety of architectures and datasets and investigate how they change with factors such as network size and regularization. Our results show that the first three components are most responsible for the discrepancy between influence functions and LOO retraining. We further introduce the proximal Bregman response function (PBRF) to better capture the behavior of influence functions in neural networks. Compared to LOO retraining, the PBRF is more easily calculated and correlates better with influence functions, meaning it is an attractive alternative gold standard for evaluating influence functions. 
Although the PBRF may not necessarily align with LOO retraining, it can still be applied in many of the motivating use cases for influence functions. We conclude that influence functions in neural networks are not necessarily “fragile”, but instead are giving accurate answers to a different question than is normally assumed. Acknowledgements We would like to thank Pang Wei Koh for the helpful discussions. Resources used in this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute (www.vectorinstitute.ai/partners).
1. What is the focus of the paper regarding influence function-based methods? 2. What are the strengths of the paper, particularly in its analysis and hypothesis development? 3. What are the weaknesses of the paper, especially regarding its proposal of a new score? 4. Do you have any concerns about the paper's direction and goals? 5. What are the limitations of the paper, as mentioned by the authors?
Summary Of The Paper
Influence function-based methods analyze the effect of adding, removing, or perturbing a single data point on a parametric model using a linear approximation around the optimal parameters. Previous works have pointed out that this approximation is poor for deep neural networks. In this paper, the authors push the analysis further, examine the factors that affect the approximation, and argue that the performance of influence function methods is not necessarily reflected in LOO retraining but can instead be measured against the proximal Bregman response function. In the experiments, the authors thoroughly analyze how different factors affect the discrepancy between influence functions and LOO retraining and show that the PBRF aligns better with influence function methods.

Strengths And Weaknesses
Strengths:
- The authors perform analysis and make hypotheses on an important problem (the discrepancy between influence functions and LOO) and point out five sources that affect the misalignment.
- The authors point out that the PBRF better captures the behaviour of influence functions.
- The experimental evaluations are thorough.

Weaknesses:
- Although identifying the sources that lead to the misalignment is quite interesting, I am not convinced that introducing a different score solely to analyze influence functions is useful or necessary. Influence function methods were initially proposed as an interpretability tool to analyze the model's behaviour when data points are removed. Introducing and emphasizing the PBRF seems to lose sight of this goal and leads in a less meaningful direction.
- In general, I think the structure and writing of this paper need to be improved.

Questions
See above.

Limitations
The authors briefly mentioned this in the Q/A.
Title If Influence Functions are the Answer, Then What is the Question? Abstract Influence functions efficiently estimate the effect of removing a single training data point on a model’s learned parameters. While influence estimates align well with leave-one-out retraining for linear models, recent works have shown this alignment is often poor in neural networks. In this work, we investigate the specific factors that cause this discrepancy by decomposing it into five separate terms. We study the contributions of each term on a variety of architectures and datasets and how they vary with factors such as network width and training time. While practical influence function estimates may be a poor match to leave-one-out retraining for nonlinear networks, we show that they are often a good approximation to a different object we term the proximal Bregman response function (PBRF). Since the PBRF can still be used to answer many of the questions motivating influence functions such as identifying influential or mislabeled examples, our results suggest that current algorithms for influence function estimation give more informative results than previous error analyses would suggest. 1 Introduction The influence function [Hampel, 1974, Cook, 1979] is a classic technique from robust statistics that estimates the effect of deleting a single data example (or a group of data examples) from a training dataset. Formally, given a neural network with learned parameters θ⋆ trained on a dataset D, we are interested in the parameters θ⋆−z learned by training on a dataset D − {z} constructed by deleting a single training example z from D. By taking the second-order Taylor approximation to the cost function around θ⋆, influence functions approximate the parameters θ⋆−z without the computationally prohibitive cost of retraining the model. Since Koh and Liang [2017] first deployed influence functions in machine learning, influence functions have been used to solve various tasks such as explaining model’s predictions [Koh and Liang, 2017, Han et al., 2020], relabelling harmful training examples [Kong et al., 2021], carrying out data poisoning attacks [Koh et al., 2022], increasing fairness in models’ predictions [Brunet et al., 2019, Schulam and Saria, 2019], and learning data augmentation techniques [Lee et al., 2020]. When the training objective is strongly convex (e.g., as in logistic regression with L2 regularization), influence functions are expected to align well with leave-one-out (LOO) or leave-k-out retraining [Koh and Liang, 2017, Koh et al., 2019, Izzo et al., 2021]. However, Basu et al. [2020a] showed that influence functions in neural networks often do not accurately predict the effect of retraining the model and concluded that influence estimates are often “fragile” and “erroneous”. Because of the poor match between influence estimates and LOO retraining, influence function methods are often evaluated with alternative metrics such as the detection rate of maliciously corrupted examples using influence scores [Khanna et al., 2019, Koh and Liang, 2017, Schioppa et al., 2021, K and Søgaard, 36th Conference on Neural Information Processing Systems (NeurIPS 2022). 2021]. However, these indirect signals make it difficult to develop algorithmic improvements to influence function estimation. 
If one is interested in improving certain aspects of influence function estimation, such as the linear system solver, it would be preferable to have a well-defined quantity that influence function estimators are approximating so that algorithmic choices could be directly evaluated based on the accuracy of their estimates. In this work, we investigate the source of the discrepancy between influence functions and LOO retraining in neural networks. We decompose the discrepancy into five components: (1) the difference between cold-start and warm-start response functions (a concept elaborated on below), (2) an implicit proximity regularizer, (3) influence estimation on non-converged parameters, (4) linearization, and (5) approximate solution of a linear system. This decomposition was chosen to capture all gaps and errors caused by approximations and assumptions made in applying influence functions to neural networks. We empirically evaluate the contributions of each component on binary classification, regression, image reconstruction, image classification, and language modeling tasks and show that, across all tasks, components (1–3) are most responsible for the discrepancy between influence functions and LOO retraining. We further investigate how the contribution of each component changes in response to the change in network width and depth, weight decay, training time, damping, and the number of data points being removed. Moreover, we show that while influence functions for neural networks are often a poor match to LOO retraining, they are a much better match to what we term the proximal Bregman response function (PBRF). Intuitively, the PBRF approximates the effect of removing a data point while trying to keep the predictions consistent with those of the (partially) trained model. From this perspective, we reframe misalignment components (1–3) as simply reflecting the difference between LOO retraining and the PBRF. The gap between the influence function estimate and the PRBF only comes from sources (4) and (5), which we found empirically to be at least an order of magnitude smaller for most neural networks. As a result, on a wide variety of tasks, influence functions closely align with the PBRF while failing to approximate the effect of retraining the model, as shown in Figure 1. The PBRF can be used for many of the same use cases that have motivated influence functions, such as finding influential or mislabeled examples [Schioppa et al., 2021] and carrying out data poisoning attacks [Koh and Liang, 2017, Koh et al., 2022], and can therefore be considered an alternative to LOO retraining as a gold standard for evaluating influence functions. Hence, we conclude that influence functions applied to neural networks are not inherently “fragile” as is often believed [Basu et al., 2020a], but instead can be seen as giving accurate answers to a different question than is normally assumed. 2 Related Work Instance-based interpretability methods are a class of techniques that explain a model’s predictions in terms of the examples on which the model was trained. Methods of this type include TracIn [Pruthi et al., 2020], Representer Point Selection [Yeh et al., 2018], Grad-Cos and Grad-Dot [Charpiat et al., 2019, Hanawa et al., 2021], MMD-critic [Kim et al., 2016], unconditional counterfactual explanations [Wachter et al., 2018], and of central focus in this paper, influence functions. 
Since its adoption in machine learning by Koh and Liang [2017], multiple extensions and improvements upon influence functions have also been proposed, such as variants that use Fisher kernels [Khanna et al., 2019], higher-order approximations [Basu et al., 2020b], tricks for faster and scalable inference [Guo et al., 2021, Schioppa et al., 2021], group influence formulations [Koh et al., 2019, Basu et al., 2020b], and relative local weighting [Barshan et al., 2020]. However, many of these methods rely on the same strong assumptions made in the original influence function derivation that the objective needs to be strongly convex and influence functions must be computed on the optimal parameters. In general, influence functions are assumed to approximate the effects of leave-one-out (LOO) retraining from scratch, the parameters of the network that are trained without a data point of interest. Hence, measuring the quality of influence functions is often performed by analyzing the correlation between LOO retraining and influence function estimations [Koh and Liang, 2017, Basu et al., 2020a,b, Yang and Chaudhuri, 2022]. However, recent empirical analyses have demonstrated the fragility of influence functions and a fundamental misalignment between their assumed and actual effects [Basu et al., 2020a, Ghorbani et al., 2019, K and Søgaard, 2021]. For example, Basu et al. [2020a] argued that the accuracy of influence functions in deep networks is highly sensitive to network width and depth, weight decay strength, inverse-Hessian vector product estimation methodology, and test query point by measuring the alignment between influence functions and LOO retraining. Because of the inherent misalignment between influence estimations and LOO retraining in neural networks, many works often evaluate the accuracy of the influence functions on an alternative metric, such as the recovery rate of maliciously mislabelled or poisoned data using influence functions [Khanna et al., 2019, Koh and Liang, 2017, Schioppa et al., 2021, K and Søgaard, 2021]. In this work, instead of interpreting the misalignment between influence functions and LOO retraining as a failure, we claim that it simply reflects that influence functions answer a different question than is typically assumed. 3 Background Consider a prediction task from an input space X to a target space T where we are given a finite training dataset Dtrain = {(x(i), t(i))}Ni=1. Given a data point z = (x, t), let y = f(θ,x) be the prediction of the network parameterized by θ ∈ Rd and L(y, t) be the loss (e.g., squared error or cross-entropy). We aim to solve the following optimization problem: θ⋆ = argmin θ∈Rd J (θ) = argmin θ∈Rd 1 N N∑ i=1 L(f(θ,x(i)), t(i)), (1) where J (·) is the cost function. If the regularization (e.g., L2 regularization) is imposed in the cost function, we fold the regularization terms into the loss function. We summarize the notation used in this paper in Appendix A. 3.1 Downweighting a Training Example The training objective in Eqn. 1 aims to find the parameters that minimize the average loss on all training examples. Herein, we are interested in studying the change in optimal model parameters when a particular training example z = (x, t) ∈ Dtrain is removed from the training dataset, or more generally, when the data point z is downweighted by an amount ϵ ∈ R. Formally, this corresponds to minimizing the following downweighted objective: θ⋆−z,ϵ = argmin θ∈Rd Q−z(θ, ϵ) = argmin θ∈Rd J (θ)− L(f(θ,x), t)ϵ. 
(2) When ϵ = 1/N, the downweighted objective reduces to the cost over the dataset with the example z removed, up to a constant factor. To see how the optimum of the downweighted objective responds to changes in the downweighting factor ϵ, we define the response function r⋆−z : R → Rd by: r⋆−z(ϵ) = argmin θ∈Rd Q−z(θ, ϵ), (3) where we assume that the downweighted objective is strongly convex and hence the solution to the downweighted objective is unique given some factor ϵ. Under these assumptions, note that r⋆−z(0) = θ ⋆ and the response function is differentiable at 0 by the Implicit Function Theorem [Krantz and Parks, 2002, Griewank and Walther, 2008]. Influence functions approximate the response function by performing a first-order Taylor expansion around ϵ0 = 0: r⋆−z,lin(ϵ) = r ⋆ −z(ϵ0) + dr⋆−z dϵ ∣∣∣∣ ϵ=ϵ0 (ϵ− ϵ0) = θ⋆ + (∇2θJ (θ⋆))−1∇θL(f(θ⋆,x), t)ϵ. (4) We refers readers to Van der Vaart [2000] and Appendix B for a detailed derivation. The optimal parameters trained without z can then be approximated by plugging in ϵ = 1/N to Eqn. 4. Influence functions can further approximate the loss of a particular test point ztest = (xtest, ttest) when a data point z is eliminated from the training set using the chain rule [Koh and Liang, 2017]: L(f(r⋆−z,lin (1/N) ,xtest), ttest) ≈ L(f(θ⋆,xtest), ttest) + 1 N ∇θL(f(θ⋆,xtest), ttest)⊤ dr⋆z dϵ ∣∣∣∣ ϵ=0 = L(f(θ⋆,xtest), ttest) + 1 N ∇θL(f(θ⋆,xtest), ttest)⊤(∇2θJ (θ⋆))−1∇θL(f(θ⋆,x), t). (5) 3.2 Influence Function Estimation in Neural Networks Influence functions face two main challenges when deployed on neural networks. First, the influence estimation (shown in Eqn. 4) requires computing an inverse Hessian-vector product (iHVP). Unfortunately, storing and inverting the Hessian requires O(d3) operations and is infeasible to compute for modern neural networks. Instead, Koh and Liang [2017] tractably approximate the iHVP using truncated non-linear conjugate gradient (CG) [Martens et al., 2010] or the LiSSA algorithm [Agarwal et al., 2016]. Both approaches avoid explicit computation of the Hessian inverse (see Appendix G for details) and only require O(Nd) operations to approximate the influence function. Second, the derivation of influence functions assumes a strongly convex objective, which is often not satisfied for neural networks. The Hessian may be singular, especially when the parameters have not fully converged, due to non-positive eigenvalues. To enforce positive-definiteness of the Hessian, Koh and Liang [2017] add a damping term in the iHVP. Teso et al. [2021] further approximate the Hessian with the Fisher information matrix (which is equivalent to the Gauss-Newton Hessian [Martens, 2014] for commonly used loss functions such as cross-entropy) as follows: r⋆−z,damp,lin(ϵ) ≈ θ⋆ + (J⊤yθ⋆Hy⋆Jyθ⋆ + λI)−1∇θL(f(θ⋆,x), t)ϵ, (6) where Jyθ⋆ is the parameter-output Jacobian and Hy⋆ is the Hessian of the cost with respect to the network outputs both evaluated on the optimal parameters θ⋆. Here, G⋆ = J⊤yθ⋆Hy⋆Jyθ⋆ is the Gauss-Newton Hessian (GNH) and λ > 0 is a damping term to ensure the invertibility of GNH. Unlike the Hessian, the GNH is guaranteed to be positive semidefinite as long as the loss function is convex as a function of the network outputs [Martens et al., 2010]. 4 Understanding the Discrepancy between Influence Function and LOO Retraining in Neural Networks In this section, we investigate several factors responsible for the misalignment between influence functions and LOO retraining. 
Specifically, we decompose the misalignment into five separate terms: (1) the warm-start gap, (2) the damping gap, (3) the non-convergence gap, (4) the linearization error, and (5) the solver error. This decomposition captures all approximations and assumption violations when deploying influence functions in neural networks. By summing the parameter (or outputs) differences introduced by each term we can bound the parameter (or outputs) difference between LOO retraining and influence estimates. We use the term “gap” rather than “error” for the first three terms to emphasize that they reflect differences between solutions to different influence-related questions, rather than actual errors. For all models we investigate, we find that the first three sources dominate the misalignment, indicating that the misalignment reflects not algorithmic errors but rather the fact that influence function estimators are answering a different question from what is normally assumed. All proximal objectives are summarized in Table 1 and we provide the derivations in Appendix B. 4.1 Warm-Start Gap: Non-Strongly Convex Training Objective By taking a first-order Taylor approximation of the response function at ϵ0 = 0 (Eqn. 4), influence functions approximate the effect of removing a data point z at a local neighborhood of the optimum θ⋆. Hence, influence approximation has a more natural connection to the retraining scheme that initializes the network at the current optimum θ⋆ (warm-start retraining) than the scheme that initializes the network randomly (cold-start retraining). The warm-start optimum is equivalent to the cold-start optimum when the objective is strongly convex (where the solution to the response function is unique), making the influence estimation close to the LOO retraining on logistic regression with L2 regularization. However, the equivalence between warm-start and coldstart optima is not typically guaranteed in neural networks [Vicol et al., 2022a]. Particularly, in the overparametrized regime (N < d), neural networks exhibit multiple global optima, and their converged solutions depend highly on the specifics of the optimization dynamics [Lee et al., 2019, Arora et al., 2019, Bartlett et al., 2020, Amari et al., 2020]. For quadratic cost functions, gradient descent with initialization θ0 converges to the optimum that achieves the minimum L2 distance from θ0 [Hastie et al., 2022]. This phenomenon of the converged parameters being dependent on the initialization hinders influence functions from accurately predicting the effect of retraining the model from scratch as shown in Figure 2. We denote the discrepancy between cold-start and warm-start optima as warm-start gap. 4.2 Proximity Gap: Addition of Damping Term in iHVP In practical settings, we often impose a damping term (Eqn. 6) in influence approximations to ensure that the cost Hessian is positive-definite and hence invertible. As adding a damping term in influence estimations is equivalent to adding L2 regularization to the cost function [Martens et al., 2010], when damping is used, influence functions can be seen as linearizing the following proximal response function at ϵ0 = 0: r⋆−z,damp(ϵ) = argmin θ∈Rd Q−z(θ, ϵ) + λ 2 ∥θ − θ⋆∥2. (7) See Appendix B.2 for the derivation. Note that λ > 0 is a damping strength and our use of “proximal” is based on the notion of proximal equilibria [Farnia and Ozdaglar, 2020]. Intuitively, the proximal objective in Eqn. 
7 not only minimizes the downweighted objective but also encourages the parameters to stay close to the optimal parameters at ϵ0 = 0. Hence, when the damping term is used in the iHVP, influence functions aim at approximating the warm-start retraining scheme with a proximity term that penalizes the L2 distance between the new estimate and the optimal parameters. We call the discrepancy between the warm-start and proximal warm-start optima the proximity gap. Interestingly, past works have observed that for quadratic cost functions, early stopping has a similar effect to L2 regularization [Vicol et al., 2022a, Ali et al., 2019]. Therefore, the proximal response function can be thought of as capturing how gradient descent will respond to a dataset perturbation if it takes only a limited number of steps starting from the warm-start solution. 4.3 Non-Convergence Gap: Influence Estimation on Non-Converged Parameters Thus far, our analysis has assumed that influence functions are computed on fully converged parameters θ⋆ at which the gradient of the cost is 0. However, in neural network training, we often terminate the optimization procedure before reaching the exact optimum due to several reasons, including having limited computational resources or to avoid overfitting [Bengio, 2012]. In such situations, much of the change in the parameters from LOO retraining simply reflects the effect of training for longer, rather than the effect of removing a training example, as illustrated in Figure 3. What we desire from influence functions is to understand the effect of removing the training example; the effect of extended training is simply a nuisance. Therefore, to the extent that this factor contributes to the misalignment between influence functions and LOO retraining, influence functions are arguably more useful than LOO retraining. Since training the network to convergence may be impractical or undesirable, we instead modify the response function by replacing the original training objective with a similar one for which the (possibly non-converged) final parameters θs are optimal. Here, we assume the loss function L(·, ·) is convex as a function of the network outputs; this is true for commonly used loss functions such as squared error or cross-entropy. We replace the training loss with a term that penalizes mismatch to the predictions made by θs (hence implying that θs is optimal). Our proximal Bregman response function (PBRF) is defined as follows: rb−z,damp(ϵ) = argmin θ∈Rd 1 N N∑ i=1 DL(i)(f(θ,x (i)), f(θs,x(i)))− L(f(θ,x), t)ϵ+ λ 2 ∥θ − θs∥2, (8) where DL(i)(·, ·) is the Bregman divergence defined as: DL(i)(y,y s) = L(y, t(i))− L(ys, t(i))−∇yL(ys, t(i))⊤(y − ys). (9) The PBRF defined in Eqn. 8 is composed of three terms. The first term measures the functional discrepancy between the current estimate and the parameters θs in Bregman divergence, and its role is to prevent the new estimate from drastically altering the predictions on the training dataset. One way of understanding this term in the cases of squared error or cross-entropy losses is that it is equivalent to the training error on a dataset where the original training labels are replaced with soft targets obtained from the predictions made by θs. The second term is the negative loss on the data point z = (x, t), which aims to respond to the deletion of a training example. The final term is simply the proximity term described before. 
In Appendix B.3, we further show that the influence function on non-converged parameters is equivalent to the first-order approximation of PBRF instead of the first-order approximation of proximal response function for linear models. Rather than computing the LOO retrained parameters by performing K additional optimization steps under the original training objective, we can instead perform K optimization steps under the proximal Bregman objective. The difference between the resulting parameter vectors is what we call the non-convergence gap. 4.4 Linearization Error: A First-order Taylor Approximation of the Response Function The key idea behind influence functions is the linearization of the response function. To simulate the local approximations made in influence functions, we define the linearized PBRF as: rb−z,damp,lin(ϵ) = argmin θ∈Rd 1 N N∑ i=1 DL(i)quad (flin(θ,x (i)), f(θs,x(i))) −∇θL(f(θs,x), t)⊤θϵ+ λ 2 ∥θ − θs∥2, (10) where Lquad(·, ·) is the second-order expansion of the loss around ys and flin(·, ·) is the linearization of the network outputs with respect to the parameters. The optimal solution to the linearized PBRF is equivalent to the influence estimation at the parameters θs with the GNH approximation and a damping term λ (see Appendix B.4 for the derivation). As the linearized PBRF relies on several local approximations, the linearization error increases when the downweighting factor magnitude |ϵ| is large or the PBRF is highly non-linear. We refer to the discrepancy between the PBRF and linearized PBRF as the linearization error. 4.5 Solver Error: A Crude Approximation of iHVP As the precise computation of the iHVP is computationally infeasible, in practice, we use truncated CG or LiSSA to efficiently approximate influence functions [Koh and Liang, 2017]. Unfortunately, these efficient linear solvers introduce additional error by crudely approximating the iHVP. Moreover, different linear solvers can introduce specific biases in the influence estimation. For example, Vicol et al. [2022b] show that the truncated LiSSA algorithm implicitly adds an additional damping term in the iHVP. We use solver error to refer to the difference between the linearized PBRF and the influence estimation computed by a linear solver. Interestingly, Koh and Liang [2017] reported that the LiSSA algorithm gave more accurate results than CG. We have determined that this difference resulted not from any inherent algorithmic advantage to LiSSA, but rather from the fact that the software used different damping strengths for the two algorithms, thereby resulting in different weightings of the proximity term in the proximal response function. 5 PBRF: The Question Influence Functions are Really Answering The PBRF (Eqn. 8) approximates the effect of removing a data point while trying to keep the predictions consistent with those of the (partially) trained model. Since the discrepancy between the PBRF and influence function estimates is only due to the linearization and solver errors, the PBRF can be thought of as better representing the question that influence functions are trying to answer. Reframing influence functions in this way means that the PBRF can be regarded as a gold-standard ground truth for evaluating methods for influence function approximation. Existing analyses of influence functions [Basu et al., 2020a] rely on generating LOO retraining ground truth estimates by imposing strong L2 regularization or training till convergence without early stopping. 
5 PBRF: The Question Influence Functions are Really Answering The PBRF (Eqn. 8) approximates the effect of removing a data point while trying to keep the predictions consistent with those of the (partially) trained model. Since the discrepancy between the PBRF and influence function estimates is only due to the linearization and solver errors, the PBRF can be thought of as better representing the question that influence functions are trying to answer. Reframing influence functions in this way means that the PBRF can be regarded as a gold-standard ground truth for evaluating methods for influence function approximation. Existing analyses of influence functions [Basu et al., 2020a] rely on generating LOO retraining ground truth estimates by imposing strong L2 regularization or training till convergence without early stopping. However, these conditions do not accurately reflect the typical way neural networks are trained in practice. In contrast, our PBRF formulation does not require the addition of any regularizers or modified training regimes and can be easily optimized. In addition, although the PBRF may not necessarily align with LOO retraining due to the warm-start, proximity, and non-convergence gaps, the motivating use cases for influence functions typically do not rely on exact LOO retraining. This means that the PBRF can be used in place of LOO retraining for many tasks such as identifying influential or mislabelled examples, as demonstrated in Appendix D.3. In these cases, influence functions are still useful since they provide an efficient way of approximating PBRF estimates. 6 Experiments Our experiments investigate the following questions: (1) What factors discussed in Section 4 contribute most to the misalignment between influence functions and LOO retraining? (2) While influence functions fail to approximate the effect of retraining, do they accurately approximate the PBRF? (3) How do changes in weight decay, damping, the number of total epochs, and the number of removed training examples affect each source of misalignment? In all experiments, we first train the base network with the entire dataset to obtain the parameters θ^s. We repeat the training procedure 20 times, each time with a different random training example deleted. The cold-start retraining begins from the same initialization used to train θ^s. All proximal objectives are trained with initialization θ^s for 50% of the epochs used to train the base network. Lastly, we use the LiSSA algorithm with the GNH approximation to compute influence functions. Since we are primarily interested in the effect of deleting a data point on the model's predictions, we measure the discrepancy of each gap and error using the average L2 distance between the networks' outputs, \mathbb{E}_{(x, \cdot) \sim \mathcal{D}_{\mathrm{train}}}[\lVert f(\theta, x) - f(\theta', x) \rVert], on the training dataset. We provide the full experimental set-up and additional experiments in Appendix C and D, respectively.
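This discrepancy measure is straightforward to compute; the sketch below is a minimal PyTorch version of it, assuming two trained networks and a training-set DataLoader are available. The function name output_discrepancy and the toy driver are illustrative, not part of the paper's codebase.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

@torch.no_grad()
def output_discrepancy(model_a, model_b, loader, device="cpu"):
    """Average L2 distance between two networks' outputs over a dataset."""
    total, count = 0.0, 0
    model_a.eval()
    model_b.eval()
    for x, _ in loader:
        x = x.to(device)
        diff = model_a(x) - model_b(x)
        total += diff.reshape(len(x), -1).norm(dim=1).sum().item()  # per-example L2 norm
        count += len(x)
    return total / count

if __name__ == "__main__":
    torch.manual_seed(0)
    data = TensorDataset(torch.randn(256, 8), torch.zeros(256, dtype=torch.long))
    loader = DataLoader(data, batch_size=64)
    net_a, net_b = torch.nn.Linear(8, 3), torch.nn.Linear(8, 3)
    # e.g. the warm-start gap is this quantity evaluated between the cold-start
    # and warm-start retrained networks.
    print(output_discrepancy(net_a, net_b, loader))
```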
6.1 Influence Misalignment Decomposition We first applied our decomposition to various models trained on a broad range of tasks covering binary classification, regression, image reconstruction, image classification, and language modeling. The summary of our results is provided in Figure 4 and Table 5 (Appendix E). Across all tasks, we found that the first three sources dominate the misalignment, indicating that influence function estimators are answering a different question from what is normally assumed. Small linearization and solver errors indicate that influence functions accurately answer the modified question (PBRF). Logistic Regression. We analyzed the logistic regression (LR) model trained on the Cancer and Diabetes classification datasets from the UCI collection [Dua and Graff, 2017]. We trained the model using L-BFGS [Liu and Nocedal, 1989] with L2 regularization of 0.01 and a damping term of λ = 0.001. As the training objective is strongly convex and the base model parameters were trained till convergence, we observed in Table 5 that each source of misalignment is very small. Hence, in the case of logistic regression with L2 regularization, influence functions accurately capture the effect of retraining the model without a data point. Multilayer Perceptron. Next, we applied our analysis to a 2-hidden-layer Multilayer Perceptron (MLP) with ReLU activations. We conducted the experiments in two settings: (1) regression on the Concrete and Energy datasets from the UCI collection and (2) image classification on 10% of the MNIST [Deng, 2012] and FashionMNIST [Xiao et al., 2017] datasets, following the set-up from Koh and Liang [2017] and Basu et al. [2020a]. We trained the networks for 1000 epochs using stochastic gradient descent (SGD) with a batch size of 128 and set a damping strength of λ = 0.001. As opposed to linear models, MLPs violate the assumptions in the influence derivation, and we observed an increase in gaps and errors on all five factors. We observed that the warm-start, proximity, and non-convergence gaps contribute more to the misalignment than the linearization and solver errors. The average network predictions under the PBRF were similar to those predicted by the LiSSA algorithm, demonstrating that influence functions are still a good approximation to the PBRF. Autoencoder. Next, we applied our framework to an 8-layer autoencoder (AE) on the full MNIST dataset. We followed the experimental set-up from Martens and Grosse [2015], where the encoder and decoder each consist of 4 fully-connected layers with sigmoid activation functions. We trained the network for 1000 epochs using SGD with momentum. We set the batch size to 1024 and used L2 regularization of 10⁻⁵ with a damping factor of λ = 0.001. In accordance with the findings from our MLP experiments, the warm-start, proximity, and non-convergence gaps were more significant than the linearization and solver errors, and influence functions accurately predicted the PBRF. Convolutional Neural Networks. To investigate the source of discrepancy on larger-scale networks, we trained a set of convolutional neural networks of increasing complexity and size. Namely, LeNet [Lecun et al., 1998], AlexNet [Krizhevsky et al., 2012], VGG13 [Simonyan and Zisserman, 2014], and ResNet-20 [He et al., 2015] were trained on 10% of the MNIST dataset and the full CIFAR10 [Krizhevsky, 2009] dataset. We trained the base network for 200 epochs on both datasets with a batch size of 128. For MNIST, we kept the learning rate fixed throughout training, while for CIFAR10, we decayed the learning rate by a factor of 5 at epochs 60, 120, and 160, following Zagoruyko and Komodakis [2016]. We used L2 regularization with strength 5 × 10⁻⁴ and a damping factor of λ = 0.001. Consistent with the findings from our MLP and autoencoder experiments, the first three gaps were more significant than the linearization and solver errors. We further compared, in Table 2, influence functions' approximations of the difference in test loss when a random training data point is removed with the values obtained from cold-start retraining, warm-start retraining, and the PBRF. We used both Pearson [Sedgwick, 2012] and Spearman rank-order correlation [Spearman, 1961] to measure the alignment. While the test loss predicted by influence functions does not align well with the values obtained by the cold-start and warm-start retraining schemes, it shows high correlations with the estimates given by the PBRF.
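The comparison reported in Table 2 amounts to correlating, across the removed training examples, the change in test loss predicted by influence functions with the change measured under each scheme. A minimal sketch of that computation is below; the arrays are synthetic placeholders standing in for measured values, not the paper's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholders for the predicted/measured change in test loss when each of the
# 20 sampled training examples is removed (replace with real measurements).
influence = rng.normal(size=20)                    # influence-function estimates
pbrf = influence + 0.05 * rng.normal(size=20)      # stand-in: closely aligned with influence
cold_start = rng.normal(size=20)                   # stand-in: weakly aligned with influence

for name, target in [("PBRF", pbrf), ("cold-start retraining", cold_start)]:
    pearson_r, _ = stats.pearsonr(influence, target)
    spearman_rho, _ = stats.spearmanr(influence, target)
    print(f"vs {name}: Pearson {pearson_r:.2f}, Spearman {spearman_rho:.2f}")
```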
Transformer. Finally, we trained 2-layer Transformer language models on the Penn Treebank (PTB) [Marcus et al., 1993] dataset. We set the number of hidden dimensions to 256 and the number of attention heads to 2. As we observed that the model overfits after a few epochs of training, we trained the base network for 10 epochs using Adam. Notably, we observed that the non-convergence gap had the largest contribution to the discrepancy between influence functions and LOO retraining. Consistent with our previous findings, the first three gaps had more impact on the discrepancy than the linearization and solver errors. 6.2 Factors in Influence Misalignment We further analyzed how the contribution of each component changes in response to changes in network width and depth, training time, weight decay, damping, and the percentage of data removed. We used an MLP trained on 10% of the MNIST dataset and summarized the results in Figure 5. Width and Depth. As we increase the width of the network, we observe a decrease in the linearization error. This is consistent with previous observations that networks behave more linearly as the width is increased [Lee et al., 2019]. In contrast to the findings from Basu et al. [2020a], we did not observe a strong relationship between the contribution of the components and the depth of the network. Training Time. Unsurprisingly, as we increase the number of training epochs, we observe a decrease in the non-convergence gap. We hypothesize that, as training proceeds, the cost gradient approaches 0, resulting in better alignment between the proximal response function and the PBRF. Weight Decay. Weight decay makes the training objective better conditioned. Consequently, as weight decay increases, the training objective may act more like a strictly convex objective, resulting in a decrease in the overall discrepancy for all components. Basu et al. [2020a] also found that the alignment between influence functions and LOO retraining increases as weight decay increases. Damping. A higher damping term makes linear systems better conditioned, allowing solvers to find accurate solutions in fewer iterations [Demmel, 1997], thereby reducing the solver error. Furthermore, a larger proximity term keeps the parameters close to θ^s, reducing the linearization error. On the other hand, increasing the effective proximity penalty directly increases the proximity gap. Percentage of Training Examples Removed. As we remove more training examples from the dataset, the PBRF becomes more non-linear and we observe a sharp increase in the linearization error. The cost landscape is also more likely to change as we remove more training examples, and we observe a corresponding increase in the warm-start gap. 7 Conclusion In this paper, we investigate the sources of the discrepancy between influence functions and LOO retraining in neural networks. We decompose this difference into five distinct components: the warm-start gap, proximity gap, non-convergence gap, linearization error, and solver error. We empirically evaluate the contributions of each of these components on a wide variety of architectures and datasets and investigate how they change with factors such as network size and regularization. Our results show that the first three components are most responsible for the discrepancy between influence functions and LOO retraining. We further introduce the proximal Bregman response function (PBRF) to better capture the behavior of influence functions in neural networks. Compared to LOO retraining, the PBRF is more easily calculated and correlates better with influence functions, meaning it is an attractive alternative gold standard for evaluating influence functions.
Although the PBRF may not necessarily align with LOO retraining, it can still be applied in many of the motivating use cases for influence functions. We conclude that influence functions in neural networks are not necessarily “fragile”, but instead are giving accurate answers to a different question than is normally assumed. Acknowledgements We would like to thank Pang Wei Koh for the helpful discussions. Resources used in this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute (www.vectorinstitute.ai/partners).
1. What is the focus and contribution of the paper regarding influence functions in neural networks? 2. What are the strengths of the proposed approach, particularly in its analysis and decomposition of the discrepancy between influence functions and leave-one-out retraining? 3. What are the weaknesses of the paper, especially in the selection of components and the consideration of the proximal Bregman response function as the gold standard? 4. Do you have any concerns or questions regarding the experimental datasets used in the study? 5. How do the authors address the limitations of their work?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper studies influence functions. Influence functions work well with leave-one-out retraining for linear models, but they often have poor performance in neural networks. The authors investigate the specific factors that cause this discrepancy and decompose it into five terms. The authors then study the contributions of each term. Furthermore, the authors show that influence functions are often a good approximation to the proximal Bregman response function (PBRF). The experimental results suggest that current algorithms for influence function estimation can still give more informative results than previous error analyses would suggest. In my opinion, the main contribution of this work is a more in-depth analysis of influence functions, especially as it applies to neural networks. Overall, the paper is well organized and well written. Strengths And Weaknesses Strengths: This work investigates the source of the discrepancy between influence functions and LOO retraining in neural networks, and further decomposes the discrepancy into five detailed components. The authors evaluate the contributions of each component on binary classification, regression, image classification, and language modeling tasks. The experimental results show that influence functions are a much better match to the proximal Bregman response function (PBRF), and can also give more informative results than previous error analyses would suggest. Weaknesses: What is the underlying reason for selecting the 5 components, and why are these components considered important? These are not explicitly stated in the paper. The authors consider that the PBRF can be regarded as the gold standard, and I think more rigorous proofs are needed. Questions Here are some questions for the authors: As mentioned in the weaknesses, what is the underlying reason for selecting the 5 components? Are there any considerations in the selection of experimental datasets? Limitations The authors adequately addressed the limitations.
NIPS
1. What is the focus and contribution of the paper regarding influence estimates and leave-one-out retraining? 2. What are the strengths and weaknesses of the proposed decomposition and its connection to proximal Bregman response function (PBRF)? 3. Do you have any questions or suggestions regarding the empirical analysis and use cases of PBRF? 4. How does the reviewer assess the clarity, significance, and limitations of the paper's content? 5. Are there any specific areas or equations in the paper that require further explanation or clarification?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The paper studies what causes influence estimates to misalign with leave-one-out retraining. Specifically, the authors decompose the discrepancy into five separate terms and empirically show the contributions of each term under a variety of settings (e.g., model architectures and datasets). Strengths And Weaknesses Originality: The paper shows the connections between influence estimates and the decomposition into the five terms, and finds that influence estimates are highly correlated with the proposed proximal Bregman response function (PBRF). The results are overall interesting. Quality: The decomposition is technically sound and the empirical analysis of the decomposition is well conducted. However, I suggest the authors show some use cases of the PBRF empirically. As demonstrated in the experiments, the PBRF is highly correlated with influence estimates. Does it offer some advantage over influence estimates in use cases such as data poisoning, improving fairness, and explainability of the model? I think these are also key questions to answer. Clarity: The paper is well written and easy to read. However, some technical details of the paper are not well explained. For example, I am a little confused by the introduction of the linearization error (from Eq. 7 to Eq. 9). I am not sure what the purpose is of performing the second-order expansion of the loss around y^s. Significance: This paper provides an in-depth understanding of influence functions by trying to bridge the gap between influence functions and LOO retraining. It could impact the community in the long run. Questions What is the purpose of the derivation from Eq. 7 to Eq. 9? In Eq. 8, the authors mention "f_lin(·, ·) is the linearization of the network outputs with respect to the parameters". What do you mean by "the linearization of the network outputs"? Is there any practical advantage of using the PBRF over influence estimates? Limitations The authors have adequately addressed the limitations and potential negative societal impact of their work.
NIPS
Title If Influence Functions are the Answer, Then What is the Question? Abstract Influence functions efficiently estimate the effect of removing a single training data point on a model’s learned parameters. While influence estimates align well with leave-one-out retraining for linear models, recent works have shown this alignment is often poor in neural networks. In this work, we investigate the specific factors that cause this discrepancy by decomposing it into five separate terms. We study the contributions of each term on a variety of architectures and datasets and how they vary with factors such as network width and training time. While practical influence function estimates may be a poor match to leave-one-out retraining for nonlinear networks, we show that they are often a good approximation to a different object we term the proximal Bregman response function (PBRF). Since the PBRF can still be used to answer many of the questions motivating influence functions such as identifying influential or mislabeled examples, our results suggest that current algorithms for influence function estimation give more informative results than previous error analyses would suggest. 1 Introduction The influence function [Hampel, 1974, Cook, 1979] is a classic technique from robust statistics that estimates the effect of deleting a single data example (or a group of data examples) from a training dataset. Formally, given a neural network with learned parameters θ⋆ trained on a dataset D, we are interested in the parameters θ⋆−z learned by training on a dataset D − {z} constructed by deleting a single training example z from D. By taking the second-order Taylor approximation to the cost function around θ⋆, influence functions approximate the parameters θ⋆−z without the computationally prohibitive cost of retraining the model. Since Koh and Liang [2017] first deployed influence functions in machine learning, influence functions have been used to solve various tasks such as explaining model’s predictions [Koh and Liang, 2017, Han et al., 2020], relabelling harmful training examples [Kong et al., 2021], carrying out data poisoning attacks [Koh et al., 2022], increasing fairness in models’ predictions [Brunet et al., 2019, Schulam and Saria, 2019], and learning data augmentation techniques [Lee et al., 2020]. When the training objective is strongly convex (e.g., as in logistic regression with L2 regularization), influence functions are expected to align well with leave-one-out (LOO) or leave-k-out retraining [Koh and Liang, 2017, Koh et al., 2019, Izzo et al., 2021]. However, Basu et al. [2020a] showed that influence functions in neural networks often do not accurately predict the effect of retraining the model and concluded that influence estimates are often “fragile” and “erroneous”. Because of the poor match between influence estimates and LOO retraining, influence function methods are often evaluated with alternative metrics such as the detection rate of maliciously corrupted examples using influence scores [Khanna et al., 2019, Koh and Liang, 2017, Schioppa et al., 2021, K and Søgaard, 36th Conference on Neural Information Processing Systems (NeurIPS 2022). 2021]. However, these indirect signals make it difficult to develop algorithmic improvements to influence function estimation. 
If one is interested in improving certain aspects of influence function estimation, such as the linear system solver, it would be preferable to have a well-defined quantity that influence function estimators are approximating so that algorithmic choices could be directly evaluated based on the accuracy of their estimates. In this work, we investigate the source of the discrepancy between influence functions and LOO retraining in neural networks. We decompose the discrepancy into five components: (1) the difference between cold-start and warm-start response functions (a concept elaborated on below), (2) an implicit proximity regularizer, (3) influence estimation on non-converged parameters, (4) linearization, and (5) approximate solution of a linear system. This decomposition was chosen to capture all gaps and errors caused by approximations and assumptions made in applying influence functions to neural networks. We empirically evaluate the contributions of each component on binary classification, regression, image reconstruction, image classification, and language modeling tasks and show that, across all tasks, components (1–3) are most responsible for the discrepancy between influence functions and LOO retraining. We further investigate how the contribution of each component changes in response to the change in network width and depth, weight decay, training time, damping, and the number of data points being removed. Moreover, we show that while influence functions for neural networks are often a poor match to LOO retraining, they are a much better match to what we term the proximal Bregman response function (PBRF). Intuitively, the PBRF approximates the effect of removing a data point while trying to keep the predictions consistent with those of the (partially) trained model. From this perspective, we reframe misalignment components (1–3) as simply reflecting the difference between LOO retraining and the PBRF. The gap between the influence function estimate and the PRBF only comes from sources (4) and (5), which we found empirically to be at least an order of magnitude smaller for most neural networks. As a result, on a wide variety of tasks, influence functions closely align with the PBRF while failing to approximate the effect of retraining the model, as shown in Figure 1. The PBRF can be used for many of the same use cases that have motivated influence functions, such as finding influential or mislabeled examples [Schioppa et al., 2021] and carrying out data poisoning attacks [Koh and Liang, 2017, Koh et al., 2022], and can therefore be considered an alternative to LOO retraining as a gold standard for evaluating influence functions. Hence, we conclude that influence functions applied to neural networks are not inherently “fragile” as is often believed [Basu et al., 2020a], but instead can be seen as giving accurate answers to a different question than is normally assumed. 2 Related Work Instance-based interpretability methods are a class of techniques that explain a model’s predictions in terms of the examples on which the model was trained. Methods of this type include TracIn [Pruthi et al., 2020], Representer Point Selection [Yeh et al., 2018], Grad-Cos and Grad-Dot [Charpiat et al., 2019, Hanawa et al., 2021], MMD-critic [Kim et al., 2016], unconditional counterfactual explanations [Wachter et al., 2018], and of central focus in this paper, influence functions. 
Since its adoption in machine learning by Koh and Liang [2017], multiple extensions and improvements upon influence functions have also been proposed, such as variants that use Fisher kernels [Khanna et al., 2019], higher-order approximations [Basu et al., 2020b], tricks for faster and scalable inference [Guo et al., 2021, Schioppa et al., 2021], group influence formulations [Koh et al., 2019, Basu et al., 2020b], and relative local weighting [Barshan et al., 2020]. However, many of these methods rely on the same strong assumptions made in the original influence function derivation that the objective needs to be strongly convex and influence functions must be computed on the optimal parameters. In general, influence functions are assumed to approximate the effects of leave-one-out (LOO) retraining from scratch, the parameters of the network that are trained without a data point of interest. Hence, measuring the quality of influence functions is often performed by analyzing the correlation between LOO retraining and influence function estimations [Koh and Liang, 2017, Basu et al., 2020a,b, Yang and Chaudhuri, 2022]. However, recent empirical analyses have demonstrated the fragility of influence functions and a fundamental misalignment between their assumed and actual effects [Basu et al., 2020a, Ghorbani et al., 2019, K and Søgaard, 2021]. For example, Basu et al. [2020a] argued that the accuracy of influence functions in deep networks is highly sensitive to network width and depth, weight decay strength, inverse-Hessian vector product estimation methodology, and test query point by measuring the alignment between influence functions and LOO retraining. Because of the inherent misalignment between influence estimations and LOO retraining in neural networks, many works often evaluate the accuracy of the influence functions on an alternative metric, such as the recovery rate of maliciously mislabelled or poisoned data using influence functions [Khanna et al., 2019, Koh and Liang, 2017, Schioppa et al., 2021, K and Søgaard, 2021]. In this work, instead of interpreting the misalignment between influence functions and LOO retraining as a failure, we claim that it simply reflects that influence functions answer a different question than is typically assumed. 3 Background Consider a prediction task from an input space X to a target space T where we are given a finite training dataset Dtrain = {(x(i), t(i))}Ni=1. Given a data point z = (x, t), let y = f(θ,x) be the prediction of the network parameterized by θ ∈ Rd and L(y, t) be the loss (e.g., squared error or cross-entropy). We aim to solve the following optimization problem: θ⋆ = argmin θ∈Rd J (θ) = argmin θ∈Rd 1 N N∑ i=1 L(f(θ,x(i)), t(i)), (1) where J (·) is the cost function. If the regularization (e.g., L2 regularization) is imposed in the cost function, we fold the regularization terms into the loss function. We summarize the notation used in this paper in Appendix A. 3.1 Downweighting a Training Example The training objective in Eqn. 1 aims to find the parameters that minimize the average loss on all training examples. Herein, we are interested in studying the change in optimal model parameters when a particular training example z = (x, t) ∈ Dtrain is removed from the training dataset, or more generally, when the data point z is downweighted by an amount ϵ ∈ R. Formally, this corresponds to minimizing the following downweighted objective: θ⋆−z,ϵ = argmin θ∈Rd Q−z(θ, ϵ) = argmin θ∈Rd J (θ)− L(f(θ,x), t)ϵ. 
(2) When ϵ = 1/N, the downweighted objective reduces to the cost over the dataset with the example z removed, up to a constant factor. To see how the optimum of the downweighted objective responds to changes in the downweighting factor ϵ, we define the response function r⋆−z : R → Rd by: r⋆−z(ϵ) = argmin θ∈Rd Q−z(θ, ϵ), (3) where we assume that the downweighted objective is strongly convex and hence the solution to the downweighted objective is unique given some factor ϵ. Under these assumptions, note that r⋆−z(0) = θ ⋆ and the response function is differentiable at 0 by the Implicit Function Theorem [Krantz and Parks, 2002, Griewank and Walther, 2008]. Influence functions approximate the response function by performing a first-order Taylor expansion around ϵ0 = 0: r⋆−z,lin(ϵ) = r ⋆ −z(ϵ0) + dr⋆−z dϵ ∣∣∣∣ ϵ=ϵ0 (ϵ− ϵ0) = θ⋆ + (∇2θJ (θ⋆))−1∇θL(f(θ⋆,x), t)ϵ. (4) We refers readers to Van der Vaart [2000] and Appendix B for a detailed derivation. The optimal parameters trained without z can then be approximated by plugging in ϵ = 1/N to Eqn. 4. Influence functions can further approximate the loss of a particular test point ztest = (xtest, ttest) when a data point z is eliminated from the training set using the chain rule [Koh and Liang, 2017]: L(f(r⋆−z,lin (1/N) ,xtest), ttest) ≈ L(f(θ⋆,xtest), ttest) + 1 N ∇θL(f(θ⋆,xtest), ttest)⊤ dr⋆z dϵ ∣∣∣∣ ϵ=0 = L(f(θ⋆,xtest), ttest) + 1 N ∇θL(f(θ⋆,xtest), ttest)⊤(∇2θJ (θ⋆))−1∇θL(f(θ⋆,x), t). (5) 3.2 Influence Function Estimation in Neural Networks Influence functions face two main challenges when deployed on neural networks. First, the influence estimation (shown in Eqn. 4) requires computing an inverse Hessian-vector product (iHVP). Unfortunately, storing and inverting the Hessian requires O(d3) operations and is infeasible to compute for modern neural networks. Instead, Koh and Liang [2017] tractably approximate the iHVP using truncated non-linear conjugate gradient (CG) [Martens et al., 2010] or the LiSSA algorithm [Agarwal et al., 2016]. Both approaches avoid explicit computation of the Hessian inverse (see Appendix G for details) and only require O(Nd) operations to approximate the influence function. Second, the derivation of influence functions assumes a strongly convex objective, which is often not satisfied for neural networks. The Hessian may be singular, especially when the parameters have not fully converged, due to non-positive eigenvalues. To enforce positive-definiteness of the Hessian, Koh and Liang [2017] add a damping term in the iHVP. Teso et al. [2021] further approximate the Hessian with the Fisher information matrix (which is equivalent to the Gauss-Newton Hessian [Martens, 2014] for commonly used loss functions such as cross-entropy) as follows: r⋆−z,damp,lin(ϵ) ≈ θ⋆ + (J⊤yθ⋆Hy⋆Jyθ⋆ + λI)−1∇θL(f(θ⋆,x), t)ϵ, (6) where Jyθ⋆ is the parameter-output Jacobian and Hy⋆ is the Hessian of the cost with respect to the network outputs both evaluated on the optimal parameters θ⋆. Here, G⋆ = J⊤yθ⋆Hy⋆Jyθ⋆ is the Gauss-Newton Hessian (GNH) and λ > 0 is a damping term to ensure the invertibility of GNH. Unlike the Hessian, the GNH is guaranteed to be positive semidefinite as long as the loss function is convex as a function of the network outputs [Martens et al., 2010]. 4 Understanding the Discrepancy between Influence Function and LOO Retraining in Neural Networks In this section, we investigate several factors responsible for the misalignment between influence functions and LOO retraining. 
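For concreteness, the damped estimate of Eqns. (4)–(6) can be made explicit in a setting where the gradient and Hessian are available in closed form. The following is a minimal NumPy sketch for L2-regularized logistic regression (for which the Hessian coincides with the Gauss-Newton Hessian); the function names, default hyperparameters, and setup are illustrative only and are not taken from the paper's implementation.

import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def grad_loss(theta, x, t, l2):
    # Gradient of the per-example loss: cross-entropy with the L2 term folded in.
    return (sigmoid(x @ theta) - t) * x + l2 * theta

def cost_hessian(theta, X, T, l2):
    # Hessian of the average cost J; for logistic regression this equals the GNH.
    p = sigmoid(X @ theta)
    H = (X * (p * (1.0 - p))[:, None]).T @ X / X.shape[0]
    return H + l2 * np.eye(theta.size)

def influence_params(theta_star, X, T, z_idx, l2=1e-2, damping=1e-3):
    # Eqn. (4) with eps = 1/N and a damping term as in Eqn. (6):
    # first-order estimate of the parameters after removing example z_idx.
    N = X.shape[0]
    H = cost_hessian(theta_star, X, T, l2) + damping * np.eye(theta_star.size)
    g = grad_loss(theta_star, X[z_idx], T[z_idx], l2)
    return theta_star + np.linalg.solve(H, g) / N

def influence_test_loss(theta_star, X, T, z_idx, x_test, t_test, l2=1e-2, damping=1e-3):
    # Eqn. (5): predicted change in the loss on a test point when example z_idx is removed.
    N = X.shape[0]
    H = cost_hessian(theta_star, X, T, l2) + damping * np.eye(theta_star.size)
    g_test = grad_loss(theta_star, x_test, t_test, l2)
    g_z = grad_loss(theta_star, X[z_idx], T[z_idx], l2)
    return g_test @ np.linalg.solve(H, g_z) / N

For neural networks, the explicit linear solve is replaced by the truncated CG or LiSSA approximations described above, which is precisely where the solver error analyzed later in this section enters.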
Specifically, we decompose the misalignment into five separate terms: (1) the warm-start gap, (2) the damping gap, (3) the non-convergence gap, (4) the linearization error, and (5) the solver error. This decomposition captures all approximations and assumption violations when deploying influence functions in neural networks. By summing the parameter (or outputs) differences introduced by each term we can bound the parameter (or outputs) difference between LOO retraining and influence estimates. We use the term “gap” rather than “error” for the first three terms to emphasize that they reflect differences between solutions to different influence-related questions, rather than actual errors. For all models we investigate, we find that the first three sources dominate the misalignment, indicating that the misalignment reflects not algorithmic errors but rather the fact that influence function estimators are answering a different question from what is normally assumed. All proximal objectives are summarized in Table 1 and we provide the derivations in Appendix B. 4.1 Warm-Start Gap: Non-Strongly Convex Training Objective By taking a first-order Taylor approximation of the response function at ϵ0 = 0 (Eqn. 4), influence functions approximate the effect of removing a data point z at a local neighborhood of the optimum θ⋆. Hence, influence approximation has a more natural connection to the retraining scheme that initializes the network at the current optimum θ⋆ (warm-start retraining) than the scheme that initializes the network randomly (cold-start retraining). The warm-start optimum is equivalent to the cold-start optimum when the objective is strongly convex (where the solution to the response function is unique), making the influence estimation close to the LOO retraining on logistic regression with L2 regularization. However, the equivalence between warm-start and coldstart optima is not typically guaranteed in neural networks [Vicol et al., 2022a]. Particularly, in the overparametrized regime (N < d), neural networks exhibit multiple global optima, and their converged solutions depend highly on the specifics of the optimization dynamics [Lee et al., 2019, Arora et al., 2019, Bartlett et al., 2020, Amari et al., 2020]. For quadratic cost functions, gradient descent with initialization θ0 converges to the optimum that achieves the minimum L2 distance from θ0 [Hastie et al., 2022]. This phenomenon of the converged parameters being dependent on the initialization hinders influence functions from accurately predicting the effect of retraining the model from scratch as shown in Figure 2. We denote the discrepancy between cold-start and warm-start optima as warm-start gap. 4.2 Proximity Gap: Addition of Damping Term in iHVP In practical settings, we often impose a damping term (Eqn. 6) in influence approximations to ensure that the cost Hessian is positive-definite and hence invertible. As adding a damping term in influence estimations is equivalent to adding L2 regularization to the cost function [Martens et al., 2010], when damping is used, influence functions can be seen as linearizing the following proximal response function at ϵ0 = 0: r⋆−z,damp(ϵ) = argmin θ∈Rd Q−z(θ, ϵ) + λ 2 ∥θ − θ⋆∥2. (7) See Appendix B.2 for the derivation. Note that λ > 0 is a damping strength and our use of “proximal” is based on the notion of proximal equilibria [Farnia and Ozdaglar, 2020]. Intuitively, the proximal objective in Eqn. 
7 not only minimizes the downweighted objective but also encourages the parameters to stay close to the optimal parameters at ϵ0 = 0. Hence, when the damping term is used in the iHVP, influence functions aim at approximating the warm-start retraining scheme with a proximity term that penalizes the L2 distance between the new estimate and the optimal parameters. We call the discrepancy between the warm-start and proximal warm-start optima the proximity gap. Interestingly, past works have observed that for quadratic cost functions, early stopping has a similar effect to L2 regularization [Vicol et al., 2022a, Ali et al., 2019]. Therefore, the proximal response function can be thought of as capturing how gradient descent will respond to a dataset perturbation if it takes only a limited number of steps starting from the warm-start solution. 4.3 Non-Convergence Gap: Influence Estimation on Non-Converged Parameters Thus far, our analysis has assumed that influence functions are computed on fully converged parameters θ⋆ at which the gradient of the cost is 0. However, in neural network training, we often terminate the optimization procedure before reaching the exact optimum due to several reasons, including having limited computational resources or to avoid overfitting [Bengio, 2012]. In such situations, much of the change in the parameters from LOO retraining simply reflects the effect of training for longer, rather than the effect of removing a training example, as illustrated in Figure 3. What we desire from influence functions is to understand the effect of removing the training example; the effect of extended training is simply a nuisance. Therefore, to the extent that this factor contributes to the misalignment between influence functions and LOO retraining, influence functions are arguably more useful than LOO retraining. Since training the network to convergence may be impractical or undesirable, we instead modify the response function by replacing the original training objective with a similar one for which the (possibly non-converged) final parameters θs are optimal. Here, we assume the loss function L(·, ·) is convex as a function of the network outputs; this is true for commonly used loss functions such as squared error or cross-entropy. We replace the training loss with a term that penalizes mismatch to the predictions made by θs (hence implying that θs is optimal). Our proximal Bregman response function (PBRF) is defined as follows: rb−z,damp(ϵ) = argmin θ∈Rd 1 N N∑ i=1 DL(i)(f(θ,x (i)), f(θs,x(i)))− L(f(θ,x), t)ϵ+ λ 2 ∥θ − θs∥2, (8) where DL(i)(·, ·) is the Bregman divergence defined as: DL(i)(y,y s) = L(y, t(i))− L(ys, t(i))−∇yL(ys, t(i))⊤(y − ys). (9) The PBRF defined in Eqn. 8 is composed of three terms. The first term measures the functional discrepancy between the current estimate and the parameters θs in Bregman divergence, and its role is to prevent the new estimate from drastically altering the predictions on the training dataset. One way of understanding this term in the cases of squared error or cross-entropy losses is that it is equivalent to the training error on a dataset where the original training labels are replaced with soft targets obtained from the predictions made by θs. The second term is the negative loss on the data point z = (x, t), which aims to respond to the deletion of a training example. The final term is simply the proximity term described before. 
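To make Eqns. (8)–(9) concrete, the following is a minimal PyTorch sketch of the proximal Bregman objective on a training batch. It assumes that loss_fn uses mean reduction over the batch, that y_s_cached stores the cached outputs f(θs, x) for that batch, that (x_z, t_z) is the removed example (with a leading batch dimension), and that theta_s_flat is the flattened parameter vector θs; the names and structure are illustrative rather than the authors' implementation.

import torch

def bregman_term(loss_fn, y, y_s, t):
    # Eqn. (9), averaged over the batch when loss_fn uses mean reduction:
    # L(y, t) - L(y_s, t) - grad_y L(y_s, t)^T (y - y_s).
    y_s = y_s.detach().requires_grad_(True)
    loss_s = loss_fn(y_s, t)
    (grad_s,) = torch.autograd.grad(loss_s, y_s)
    return loss_fn(y, t) - loss_s.detach() - (grad_s.detach() * (y - y_s.detach())).sum()

def pbrf_objective(model, loss_fn, X, T, y_s_cached, x_z, t_z, theta_s_flat, eps, lam):
    # Eqn. (8): Bregman fit to the cached predictions, minus eps times the loss on
    # the removed example z, plus the L2 proximity term with damping strength lam.
    y = model(X)
    breg = bregman_term(loss_fn, y, y_s_cached, T)
    removed = loss_fn(model(x_z), t_z)
    theta = torch.nn.utils.parameters_to_vector(model.parameters())
    prox = 0.5 * lam * torch.sum((theta - theta_s_flat) ** 2)
    return breg - eps * removed + prox

Optimizing this objective for a limited number of steps starting from θs, with ϵ = 1/N, yields the PBRF target against which influence estimates are compared.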
In Appendix B.3, we further show that the influence function on non-converged parameters is equivalent to the first-order approximation of PBRF instead of the first-order approximation of proximal response function for linear models. Rather than computing the LOO retrained parameters by performing K additional optimization steps under the original training objective, we can instead perform K optimization steps under the proximal Bregman objective. The difference between the resulting parameter vectors is what we call the non-convergence gap. 4.4 Linearization Error: A First-order Taylor Approximation of the Response Function The key idea behind influence functions is the linearization of the response function. To simulate the local approximations made in influence functions, we define the linearized PBRF as: rb−z,damp,lin(ϵ) = argmin θ∈Rd 1 N N∑ i=1 DL(i)quad (flin(θ,x (i)), f(θs,x(i))) −∇θL(f(θs,x), t)⊤θϵ+ λ 2 ∥θ − θs∥2, (10) where Lquad(·, ·) is the second-order expansion of the loss around ys and flin(·, ·) is the linearization of the network outputs with respect to the parameters. The optimal solution to the linearized PBRF is equivalent to the influence estimation at the parameters θs with the GNH approximation and a damping term λ (see Appendix B.4 for the derivation). As the linearized PBRF relies on several local approximations, the linearization error increases when the downweighting factor magnitude |ϵ| is large or the PBRF is highly non-linear. We refer to the discrepancy between the PBRF and linearized PBRF as the linearization error. 4.5 Solver Error: A Crude Approximation of iHVP As the precise computation of the iHVP is computationally infeasible, in practice, we use truncated CG or LiSSA to efficiently approximate influence functions [Koh and Liang, 2017]. Unfortunately, these efficient linear solvers introduce additional error by crudely approximating the iHVP. Moreover, different linear solvers can introduce specific biases in the influence estimation. For example, Vicol et al. [2022b] show that the truncated LiSSA algorithm implicitly adds an additional damping term in the iHVP. We use solver error to refer to the difference between the linearized PBRF and the influence estimation computed by a linear solver. Interestingly, Koh and Liang [2017] reported that the LiSSA algorithm gave more accurate results than CG. We have determined that this difference resulted not from any inherent algorithmic advantage to LiSSA, but rather from the fact that the software used different damping strengths for the two algorithms, thereby resulting in different weightings of the proximity term in the proximal response function. 5 PBRF: The Question Influence Functions are Really Answering The PBRF (Eqn. 8) approximates the effect of removing a data point while trying to keep the predictions consistent with those of the (partially) trained model. Since the discrepancy between the PBRF and influence function estimates is only due to the linearization and solver errors, the PBRF can be thought of as better representing the question that influence functions are trying to answer. Reframing influence functions in this way means that the PBRF can be regarded as a gold-standard ground truth for evaluating methods for influence function approximation. Existing analyses of influence functions [Basu et al., 2020a] rely on generating LOO retraining ground truth estimates by imposing strong L2 regularization or training till convergence without early stopping. 
However, these conditions do not accurately reflect the typical way neural networks are trained in practice. In contrast, our PBRF formulation does not require the addition of any regularizers or modified training regimes and can be easily optimized. In addition, although the PBRF may not necessarily align with LOO retraining due to the warm-start, proximity, and non-convergence gaps, the motivating use cases for influence functions typically do not rely on exact LOO retraining. This means that the PBRF can be used in place of LOO retraining for many tasks such as identifying influential or mislabelled examples, as demonstrated in Appendix D.3. In these cases, influence functions are still useful since they provide an efficient way of approximating PBRF estimates. 6 Experiments Our experiments investigate the following questions: (1) What factors discussed in Section 4 contribute most to the misalignment between influence functions and LOO retraining? (2) While influence functions fail to approximate the effect of retraining, do they accurately approximate the PBRF? (3) How do changes in weight decay, damping, the number of total epochs, and the number of removed training examples affect each source of misalignment? In all experiments, we first train the base network with the entire dataset to obtain the parameters θs. We repeat the training procedure 20 times with a different random training example deleted. The cold-start retraining begins from the same initialization used to train θs. All proximal objectives are trained with initialization θs for 50% of the epochs used to train the base network. Lastly, we use the LiSSA algorithm with GNH approximation to compute influence functions. Since we are primarily interested in the effect of deleting a data point on model’s predictions, we measure the discrepancy of each gap and error using the average L2 distance between networks’ outputs E(x,·)∼Dtrain [∥f(θ,x)− f(θ′,x)∥] on the training dataset. We provide the full experimental set-up and additional experiments in Appendix C and D, respectively. 6.1 Influence Misalignment Decomposition We first applied our decomposition to various models trained on a broad range of tasks covering binary classification, regression, image reconstruction, image classification, and language modeling. The summary of our results is provided in Figure 4 and Table 5 (Appendix E). Across all tasks, we found that the first three sources dominate the misalignment, indicating influence function estimators are answering a different question from what is normally assumed. Small linearization and solver errors indicate that influence functions accurately answer the modified question (PBRF). Logistic Regression. We analyzed the logistic regression (LR) model trained on the Cancer and Diabetes classification datasets from the UCI collection [Dua and Graff, 2017]. We trained the model using L-BFGS [Liu and Nocedal, 1989] with L2 regularization of 0.01 and damping term of λ = 0.001. As the training objective is strongly convex and the base model parameters were trained till convergence, in Table 5, we observed that each source of misalignment is significantly low. Hence, in the case of logistic regression with L2 regularization, influence functions accurately capture the effect of retraining the model without a data point. Multilayer Perceptron. Next, we applied our analysis to the 2-hidden layer Multilayer Perceptron (MLP) with ReLU activations. 
We conducted the experiments in two settings: (1) regression on the Concrete and Energy datasets from the UCI collection and (2) image classification on 10% of the MNIST [Deng, 2012] and FashionMNIST [Xiao et al., 2017] datasets, following the set-up from Koh and Liang [2017] and Basu et al. [2020a]. We trained the networks for 1000 epochs using stochastic gradient descent (SGD) with a batch size of 128 and set a damping strength of λ = 0.001. As opposed to linear models, MLPs violate the assumptions in the influence derivation and we observed an increase in gaps and errors on all five factors. We observed that warm-start, proximity, and the non-convergence gaps contribute more to the misalignment than linearization and solver errors. The average network’s predictions for PBRF were similar to that computed by the LiSSA algorithm, demonstrating that influence functions are still a good approximation to PBRF. Autoencoder. Next, we applied our framework to an 8-layer autoencoder (AE) on the full MNIST dataset. We followed the experimental set-up from Martens and Grosse [2015], where the encoder and decoder each consist of 4 fully-connected layers with sigmoid activation functions. We trained the network for 1000 epochs using SGD with momentum. We set the batch size to 1024, used L2 regularization of 10−5 with a damping factor of λ = 0.001. In accordance with the findings from our MLP experiments, the warm-start, proximity, and non-convergence gaps were more significant than the linearization and solver errors, and influence functions accurately predicted the PBRF. Convolutional Neural Networks. To investigate the source of discrepancy on largerscale networks, we trained a set of convolutional neural networks of increasing complexity and size. Namely, LeNet [Lecun et al., 1998], AlexNet [Krizhevsky et al., 2012], VGG13 Simonyan and Zisserman [2014], and ResNet-20 [He et al., 2015] were trained on 10% of the MNIST dataset and the full CIFAR10 [Krizhevsky, 2009] dataset. We trained the base network for 200 epochs on both datasets with a batch size of 128. For MNIST, we kept the learning rate fixed throughout training, while for CIFAR10, we decayed the learning rate by a factor of 5 at epochs 60, 120, and 160, following Zagoruyko and Komodakis [2016]. We used L2 regularization with strength 5 · 10−4 and a damping factor of λ = 0.001. Consistent with the findings from our MLP and autoencoder experiments, the first three gaps were more significant than linearization and solver errors. We further compared influence functions’ approximations on the difference in test loss when a random training data point is removed with the value obtained from cold-start retraining, warm-start retraining, and PBRF in Table 2. We used both Pearson [Sedgwick, 2012] and Spearman rank-order correlation [Spearman, 1961] to measure the alignment. While the test loss predicted by influence functions does not align well with the values obtained by cold-start and warm-start retraining schemes, they show high correlations when compared to the estimates given by PBRF. Transformer. Finally, we trained 2-layer Transformer language models on the Penn Treebank (PTB) [Marcus et al., 1993] dataset. We set the number of hidden dimensions to 256 and the number of attention heads to 2. As we observed that model overfits after a few epochs of training, we trained the base network for 10 epochs using Adam. 
Notably, we observed that the non-convergence gap had the largest contribution to the discrepancy between influence functions and LOO retraining. Consistent with our previous findings, the first three gaps had more impact on the discrepancy than the linearization and solver errors. 6.2 Factors in Influence Misalignment We further analyzed how the contribution of each component changes in response to changes in network width and depth, training time, weight decay, damping, and the percentage of data removed. We used an MLP trained on 10% of the MNIST dataset and summarized results in Figure 5. Width and Depth. As we increase the width of the network, we observe a decrease in the linearization error. This is consistent with previous observations that networks behave more linearly as the width is increased [Lee et al., 2019]. In contrast to the findings from Basu et al. [2020a], we did not observe a strong relationship between the contribution of the components and the depth of the network. Training Time. Unsurprisingly, as we increase the number of training epochs, we observe a decrease in the non-convergence gap. We hypothesize that, as the number of training epochs increases, the cost gradient approaches 0, resulting in better alignment between the proximal response function and the PBRF. Weight Decay. Weight decay makes the training objective better conditioned. Consequently, as weight decay increases, the training objective may act more as a strictly convex objective, resulting in a decrease in overall discrepancy for all components. Basu et al. [2020a] also found that the alignment between influence functions and LOO retraining increases as weight decay increases. Damping. A higher damping term makes linear systems better conditioned, allowing solvers to find accurate solutions in fewer iterations [Demmel, 1997], thereby reducing the solver error. Furthermore, the higher proximity term keeps the parameters close to θs, reducing the linearization error. On the other hand, increasing the effective proximity penalty directly increases the proximity gap. Percentage of Training Examples Removed. As we remove more training examples from the dataset, the PBRF becomes more non-linear and we observe a sharp increase in the linearization error. The cost landscape is also more likely to change as we remove more training examples, and we observe a corresponding increase in the warm-start gap. 7 Conclusion In this paper, we investigate the sources of the discrepancy between influence functions and LOO retraining in neural networks. We decompose this difference into five distinct components: the warm-start gap, proximity gap, non-convergence gap, linearization error, and solver error. We empirically evaluate the contributions of each of these components on a wide variety of architectures and datasets and investigate how they change with factors such as network size and regularization. Our results show that the first three components are most responsible for the discrepancy between influence functions and LOO retraining. We further introduce the proximal Bregman response function (PBRF) to better capture the behavior of influence functions in neural networks. Compared to LOO retraining, the PBRF is more easily calculated and correlates better with influence functions, meaning it is an attractive alternative gold standard for evaluating influence functions.
Although the PBRF may not necessarily align with LOO retraining, it can still be applied in many of the motivating use cases for influence functions. We conclude that influence functions in neural networks are not necessarily “fragile”, but instead are giving accurate answers to a different question than is normally assumed. Acknowledgements We would like to thank Pang Wei Koh for the helpful discussions. Resources used in this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute (www.vectorinstitute.ai/partners).
1. What is the focus of the paper regarding influence function-based methods? 2. What are the strengths of the proposed approach, particularly in its comprehensive analysis? 3. What are the weaknesses of the paper, especially regarding the choice of metric and the need for further substantiation? 4. Do you have any concerns or suggestions regarding the experimental design and the inclusion of more complex datasets? 5. How does the reviewer assess the overall quality and contribution of the paper, and what are their suggestions for improvement?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The paper tackles a very important topic about the evaluation of influence function based methods. Currently, the effectiveness of influence functions are based on (a) Direct metrics such as Leave-out-retraining OR (b) Indirect metrics such as detection of mislabeled samples etc. The authors propose to use proximal Bregman response function as a way to measure the effectiveness or alignment of influence functions, especially for deep models. Strengths And Weaknesses Strengths: The paper does a comprehensive analysis of the different components (warm start, non-convergence, linearization, addition of iHVP etc. ) involved in the misalignment between the influence function and the LOO training objectives. The analysis is interesting and well-executed. The authors acknowledge that in principle the deep models are only partially trained as they always don't converge to the optima. Hence, taking this information into account and designing a gold-standard ground-truth is impactful to understand the effectiveness of influence functions. The array of experiments are solid and cover a wide range of architectures and datasets / domains. The paper is a good extension to Basu et al (ICLR 2021)[1] and follows up that paper with more in-depth experiments. Weaknesses: While the PBRF metric aligns well with the influence score, I would like to see some more motivation about why the authors thought it’s a good metric to evaluate influence functions on. The current version of the paper lacks this motivation or information corresponding to it. If we already know IFs are good at other metrics such as detection of mislabeled examples / relabeling, why do we require a fix to the evaluation procedure of influence functions. It would be good if the authors could substantiate on why alignment of IF with a gold standard ground-truth is important as opposed to testing them on other indirect but more practical metrics such as detection of mislabeled examples / relabeling. While the authors cover a wide range of experiments, I would like to see some experiments on more complex datasets such as Imagenet-1k. A focused analysis on Imagenet-1k would make the paper stronger. [1]. https://arxiv.org/abs/2006.14651?context=stat.ML Questions The questions are added in the Weaknesses section. Limitations Lack of motivation on the design of the PBRF metric. More analysis is required on Imagenet scale datasets. Although IF is tricky to compute on Imagenet, it would be beneficial to understand the proposed metric at scale. Overall, I feel the paper is strong and does a well laid out analysis on the alignment of influence estimation and LOO re-training which is lacking in the community. I would also urge the authors to think about in what practical settings and how this metric can be used instead of the commonly used indirect metrics such as mislabeled data points detection.
NIPS
Title Content Provider Dynamics and Coordination in Recommendation Ecosystems Abstract Recommendation Systems like YouTube are vibrant ecosystems with two types of users: Content consumers (those who watch videos) and content providers (those who create videos). While the computational task of recommending relevant content is largely solved, designing a system that guarantees high social welfare for all stakeholders is still in its infancy. In this work, we investigate the dynamics of content creation using a game-theoretic lens. Employing a stylized model that was recently suggested by other works, we show that the dynamics will always converge to a pure Nash Equilibrium (PNE), but the convergence rate can be exponential. We complement the analysis by proposing an efficient PNE computation algorithm via a combinatorial optimization problem that is of independent interest. 1 Introduction Recommendation systems (RSs hereinafter) play a major role in our life nowadays. Many modern RSs, like YouTube, Medium, or Spotify, recommend content created by others and go far beyond recommendations. They are vibrant ecosystems with multiple stakeholders and are responsible for the well-being of all of them. For example, in the online publishing platform Medium, the platform should be profitable; suggest relevant content to the content consumers (readers); and support the content providers (authors). In light of this ecosystem approach, research on RSs has shifted from determining consumers’ taste (e.g., the Netflix Prize challenge [9, 25]) to other aspects like fairness, ethics, and long-term welfare [5, 29, 31, 35, 37, 40–42, 44]. Understanding content providers and their utility1 is still in its infancy. Content providers produce a constant supply of content (e.g., articles in Medium, videos on YouTube), and are hence indispensable. Successful content providers rely on the RS for some part of their income: Advertising, affiliated marketing, sponsorship, and merchandise; thus, unsatisfied content providers might decide to provide a different type of content or even abandon the RS. To illustrate, a content provider who is unsatisfied with her exposure, which is heavily correlated with her income from the RS, can switch to another type of content or seek another niche. Such downstream effects are detrimental to content consumer satisfaction because they change the available content the RS can recommend. The synergy between content providers and consumers is thus fragile, and solidifying one side solidifies the other. 1We use the term utility to address the well-being of the content providers, and social welfare for the well-being of the content consumers. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. In this paper, we investigate the dynamics of RSs using a stylized model in which content providers are strategic. Content providers obtain utility from displays of their content and are willing to change the content they offer to increase their utility. These fluctuations change not only the utility of the providers but also the social welfare of the consumers, defined as the quality of their proposed content. We show that the provider dynamics always converges to a stable point (namely, a pure Nash equilibrium), but the convergence time may be long. This observation suggests a more centralized approach, in which the RS coordinates the providers, and leads to fast convergence. 
While our model is stylized, we believe it offers insights into more general, real-world RSs. The game-theoretic modeling allows counterfactual reasoning about the content that could-have-beengenerated, which is impossible to achieve using existing data-sets and small online experiments. Our analysis advocates increased awareness to content providers and their incentives, a behavior that rarely exists these days in RSs.2 Our contribution We explore the ecosystem using the following game-theoretic model, and use the blogging terminology to simplify the discussion. We consider a set of players (i.e., content providers), each selects a topic to write from a predefined set of topics (e.g., economics, sports, medieval movies, etc.). Each player has a quality w.r.t. each topic, quantifying relevance and attractiveness of that author’s content if she writes on that topic, and a conversion rate. Given a selection of topics (namely, a strategy profile), the RS serves users who consume content. All queries concerned with a topic are modeled as the demand for that topic. The utility every player obtains is the sum of displays her content receives (affected by the demand for topics and the operating RS) multiplied by the conversion rate. The game-theoretic model we adopt in service is suggested by Ben Basat et al. [4] and is well-justified by later research [8, 42]. Technically, we deal with the question of reaching a stable point—a point in which none of the players can deviate from her selected topic and increase her utility. We are interested in the convergence time and the welfare of the system in these stable points. We first explore the decentralized approach: Better-response learning dynamics (see, e.g., [16, 21]), in which players asynchronously deviate to improve their utility (an arbitrary player to an arbitrary strategy, as long as she improves upon her current utility). We show that every better-response dynamic converges, thereby extending prior work [8]. Through a careful recursive construction, we show a negative result: The convergence time can be exponential in the number of topics. Long convergence time suggests a different approach. We consider the scenario in which the RS could act centrally, and support the process of matching players with topics. We devise an algorithm that computes an equilibrium fast (roughly squared in the input size). To solve this computational challenge, which is a mixture of matching and load-balancing, we propose a novel combinatorial optimization problem that is of independent interest. Conceptually, we offer a qualitative grounding for the advantages of coordination and intervention3 in the content provider dynamics. Our analysis relies on the assumption of complete knowledge of all model parameters, in particular the qualities. While unrealistic in practice, we expect that incomplete information will only exacerbate the problems we address. The main takeaway from this paper is that RSs are not self-regulated markets, and as much as suggesting authors topics to write on can lead to a significant increase in the system’s stability. We discuss some practical ways of reaching this goal in Section 5. Related work Strikingly, content provider welfare and their fair treatment were only suggested very recently in the Recommendation Systems and Information Retrieval communities [12, 14, 18, 35, 40, 46]. All of these works do not model the incentives of content providers explicitly, and consequently cannot offer a what-if analysis like ours. 
Our model is similar to those employed in several recent papers [4, 5, 7, 8, 30]. Ben-Porat et al. [8] study a model that is a special case of ours, and show that every learning dynamic converges. Our Theorem 1 recovers and extends their convergence results. Moreover, unlike this work, they do not address convergence time, social welfare, and centralized equilibrium computation. Other works [5, 7, 30] aim to design recommendation mechanisms that mitigate strategic behavior and 2There are some exceptions, e.g., YouTube instructing providers how to find their niche [1]. However, these are sporadic, primitive, and certainly do not enjoy recent technological advancements like collaborative filtering. 3We do not say that the RSs should dictate authors what to write; instead, it should suggest to each author profitable topics that he/she can write on competently to increase her utility. lead to long-term welfare. On the negative side, their mechanisms might knowingly recommend inferior content to some consumers. We see their work as parallel to ours, as in this work we focus on the prevailing recommendation approach—recommending the best-fitting content. We suggest that a centralized approach, in which the RS orchestrates the player-topic matching, can significantly improve the time until the system reaches stability (in the form of equilibrium). Furthermore, we envision that our approach can also lead to high social welfare, as we discuss in Section 5. More broadly, an ever-growing body of research deals with fairness considerations in Machine Learning [15, 17, 36, 38, 45]. In the context of RSs, a related line of research suggests fairer ranking methods to improve the overall performance [11, 26, 43]. For example, Yao and Huang [43] propose metrics mitigating discrimination in collaborative-filtering methods that arise from learning from historical data. Despite not always being explicit, the ultimate goal of fairness imposition is to achieve long-term welfare [28]. Our paper and analysis share a similar flavor: To achieve high stability via faster convergence, RSs should coordinate the process of content selection. 2 Model We consider the following recommendation ecosystem, where for concreteness we continue with the blog authors4 example. There is a set of authors P , each owning a blog. We further assume that each blog is concerned with a single topic, from a predefined topic set T . We assume P and T are finite, and denote |P| = P and |T | = T . The strategy space of each player is thus T ; she selects the topic she writes on. A pure strategy profile is a tuple a = (a1, . . . aP ) of topic selections, where aj is the topic selected by author j. For every author j and topic k, there is a quality that quantifies the relevance and attractiveness of j’s blog if she picks the topic k. We denote by Q the quality matrix, for Q ∈ [0, 1]P×T . The RS serves users who consume content. We do not distinguish individual consumers, but rather model the need for content as a demand for each topic. A demand distribution D over the topics T is publicly known, where we use D(k) to denote the demand mass for topic k ∈ T . W.l.o.g., we assume that D(1) ≥ D(2) ≥ . . . ≥ D(m). The recommendation functionR matches demand with available blogs. Given the demand for topic k, a strategy profile a, and the qualityQ of the blogs for the selected topics in a, the recommendation function R recommends content, possibly in a randomized manner. 
It is well-known that content consumers pay most of their attention to highly ranked content [13, 22, 24, 27]; therefore, we assume for simplicity thatR recommends one content solely. For ease of notation, we denoteRj(Q, k,a) as the probability that author j is ranked first under the distribution R(Q, k,a) (or rather, author j’s content is ranked first). While blog readers admire high-quality recommended blogs, blog authors care for payoffs. As described in Section 1, authors draw monetary rewards from attracting readers in various ways. We model this payoff abstractly using a conversion matrix C, C ∈ [0, 1]P×T . We assume that every blog reader grants Cj,k monetary units to author j when she writes on topic k. For example, if author j only cares for exposure, namely the number of impressions her blog receives, then Cj,k = 1 for every k ∈ T . Alternatively, if author j cares for the engagement of readers in her blog, then the conversion Cj,k should be somewhat correlated with the qualityQj,k. We will return to these two special cases later on, in Subsection 3.1. The utility of author j under a strategy profile a is given by Uj(a) def = ∑ k∈T 1aj=k · D(k) · Rj(Q, k,a) · Cj,k. (1) Overall, we represent a game as a tuple 〈P, T ,D,Q, C,R,U〉, where P is the authors, T is the topics, D is the demand for topics,Q and C are the quality and conversion matrices,R is the recommendation function, and U is the utility function. Recommending the Highest Quality Content In this paper, we focus on the RS that recommends blogs of the highest quality, breaking ties randomly. Such a behavior is intuitive and well-justified in the literature [3, 10, 23, 39]. More formally, let Bk(a) denote the highest quality of a blog written on topic k under the profile a, i.e., Bk(a) def = maxj∈P{1aj=k · Qj,k}. Furthermore, let Hk(a) denote the set of authors whose documents have the highest quality among those who write on topic k under 4We use authors and players interchangeably. a, Hk(a) def = {j ∈ P | 1aj=k · Qj,k = Bk(a)}. The recommendation function Rtop is therefore defined as Rtopj (Q, k,a) def = { 1 |Hk(a)| j ∈ Hk(a) 0 otherwise . Consequently, we can reformulate the utility function from Equation (1) in the following succinct form,5 Uj(a) def = ∑ k∈T 1aj=k · D(k) |Hk(a)| · Cj,k. (2) From here on, sinceRtop and U are fully determined by the rest of the objects, we omit them from the game representation; hence, we represent every game by the more concise tuple 〈P, T ,D,Q, C〉. Quality-Conversion Assumption Throughout the paper, we make the following Assumption 1 about the relation between quality and conversion. Assumption 1. For every topic k ∈ T and every two authors j1, j2 ∈ P , Qj1,k ≥ Qj2,k ⇒ Cj1,k ≥ Cj2,k. Intuitively, Assumption 1 implies that quality and conversion are correlated given the topic. For every topic k, if authors j1 and j2 write on topic k and j1’s content has a weakly better quality, then j1’s content has also a weakly better conversion. This assumption plays a crucial role in our analysis; we discuss relaxing it in Section 5. Solution Concepts The social welfare of the readers is the average weighted quality. Formally, given a strategy profile a, SW (a) def = ∑ k∈T D(k) ∑ j∈P Rj(Q, k,a)Qj,k. (3) As the recommendation functionRtop always recommends the highest quality content, we can have the following more succinct representation of social welfare, SW (a) = ∑ k∈T D(k)Bk(a). However, social welfare maximization does not concern author utility. 
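For illustration, the quantities in Eqns. (2)–(3) under Rtop, together with a unilateral-deviation check, can be computed directly. The following NumPy sketch (function names are ours, topics indexed from 0) reproduces the numbers of the two-author example given below.

import numpy as np

def utilities(profile, D, Q, C):
    # Author utilities under R^top (Eqn. (2)); profile[j] is the topic chosen by author j.
    P = len(profile)
    u = np.zeros(P)
    for k in set(profile):
        writers = [j for j in range(P) if profile[j] == k]
        best = max(Q[j, k] for j in writers)
        top = [j for j in writers if Q[j, k] == best]  # H_k(a): tied authors share the demand
        for j in top:
            u[j] = D[k] * C[j, k] / len(top)
    return u

def social_welfare(profile, D, Q):
    # Reader welfare (Eqn. (3)): demand-weighted quality of the recommended blog per selected topic.
    P = len(profile)
    return sum(D[k] * max(Q[j, k] for j in range(P) if profile[j] == k) for k in set(profile))

def is_pne(profile, D, Q, C, num_topics):
    # Checks that no author has a strictly improving unilateral change of topic.
    u = utilities(profile, D, Q, C)
    for j in range(len(profile)):
        for k in range(num_topics):
            dev = list(profile)
            dev[j] = k
            if utilities(dev, D, Q, C)[j] > u[j] + 1e-12:
                return False
    return True

# The two-author, two-topic example discussed next (topics 0 and 1):
Q = np.array([[1.0, 1/3], [2/3, 1/3]])
C = np.array([[1/3, 1.0], [1/5, 1.0]])
D = np.array([3/5, 2/5])
print(utilities([0, 1], D, Q, C))    # approx [0.2, 0.4]
print(social_welfare([0, 1], D, Q))  # approx 0.73
print(is_pne([0, 1], D, Q, C, 2))    # True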
Authors may be willing to deviate from the socially optimal profile if such a deviation is beneficial in terms of utility. Consequently, we seek stable solutions, as captured by the property of pure Nash equilibrium (hereinafter PNE). We say that a strategy profile a is a PNE if for every author j and topic k, Uj(a) ≥ Uj(a−j , k), where a−j is the tuple obtained by deleting the j’s entry of a. It is worth noting that while mixed Nash equilibrium is guaranteed to exist in finite games, a PNE generally does not exist in games. However, as we show later on, it always exists in our class of games. Example To clarify our notation and setting, we provide the following example. Consider a game with two players (P = 2), two topics (T = 2) and the demand distribution D such that D(1) = 3/5,D(2) = 2/5. Let the quality and conversion matrices be Q = ( 1 1/3 2/3 1/3 ) , C = ( 1/3 1 1/5 1 ) . Consider the strategy profile (a1, a2) = (1, 1). Author 1 is more competent that author 2 on topic 1, since Q1,1 = 1 > Q2,1 = 23 ; thus, the utility of author 1 under the profile (1, 1) is U1(1, 1) = D(1) · Rtop1 (Q, 1, (1, 1)) · C1,1 = 35 · 1 · 1 3 = 1 5 . On the other hand, author 2 gets U2(1, 1) = 35 · 0 · 1 5 = 0. Author 2 has a beneficial deviation: Under the profile (1, 2), her utility is U2(1, 2) = 25 ·1 ·1 = 2 5 , while the utility of author 1 remains the same, U1(1, 2) = 1 5 . For the strategy profile (2, 2), both authors have the same quality; thus,Rtop1 (Q, 2, (2, 2)) = R top 2 (Q, 2, (2, 2)) = 12 . As for the utilities, U1(2, 2) = U2(2, 2) = 25 · 1 2 · 1 = 1 5 . Overall, we see that both (1, 2) and (2, 2) are PNEs, since the authors do not have beneficial deviations. However, the social welfare of these PNEs is different: SW (1, 2) = 35 · 1 + 2 5 · 1 3 ≈ 0.73, yet SW (2, 2) = 3 5 · 0 + 2 5 1 3 ≈ 0.13. 5In case no author writes on topic k under a,R do not make any recommendation. As reflected in the utility function U through the indicator 1aj=k, readers associated with a non-selected topic k do not contribute to any author’s utility. 3 Decentralized Approach In this section, we consider the prevailing, decentralized approach. Starting from an arbitrary profile, authors interact asynchronously, each improving her utility in every time step. Such dynamics is widely-known in the Game Theory literature as better-response dynamics (hereinafter, BRDs). Studying BRDs is a robust approach for assuring the environment reaches a stable point, while making minimal assumption on the information of the players. Two central questions about BRDs in games are a) whether any BRD converges; and b) what is the convergence rate. We show that the answer to the first question is in the affirmative. For the second question, we show through an intricate combinatorial construction a result of negative flavor: The convergence rate can be exponential in the number of topics T . 3.1 Better-Response Dynamic Convergence Before we go on, we define BRDs formally. Given a strategy profile a, we say that a′j ∈ T is a better response of author j w.r.t. a if Uj(a−j , a′j) > Uj(a). A BRD is a sequence of profiles (a1,a2, . . . ), where at every step i + 1 exactly one author better-responds to ai, i.e., there exists an author j(i) such that ai+1 = (ai−j(i), a i+1 j(i)) and Uj(i)(a i+1) > Uj(i)(ai). A BRD can start from any arbitrary profile, and include improvements of any arbitrary author at any arbitrary step (assuming she has a better response in that time step). If a BRD a1, . . . 
,al converges, namely no player can better respond to al, then by definition al is a PNE. Our goal is to show that every BRD of any game in our class of games converges. If there exists an infinite BRD, then it must contain cycles as the number of different strategy profiles is finite. Equivalently, nonexistence of improvement cycles suggests that any BRD will converge to a PNE [32]. General techniques for showing BRD convergence in games are rare, and are typically based on coming up with a potential function [6, 21, 34] or a natural lexicographic order [2, 19]. However, as already established by prior work [8, Proposition 1], our class of game does not fit into the category of an exact potential function; and a lexicographic order does not seem to arise naturally. Ben-Porat et al. [8] prove BRD convergence for two sub-classes of games: Games where C is identically 1, and games with C = Q. Interestingly, they prove BRD convergence for each sub-class separately using different arguments. We extend their technique to deal with any conversion matrix C that satisfies Assumption 1. Theorem 1. If a game G satisfies Assumption 1, then every BRD in G converges to a PNE. 3.2 Rate of Convergence We now move on to the second question proposed in the beginning of the section, which deals with convergence rate. The convergence rate is the worst-case length of any BRD. Recall that a BRD can start from a PNE and thus converge after one step, and hence the worst-case approach we offer here is justified. Our next theorem lower bounds the worst case convergence rate by an exponential factor in the number of topics T . This result is illuminating as it shows that in the worst case, although convergence is guaranteed, it may not be reachable in feasible time. Theorem 2. Consider P ≥ 1 and T ≥ 2. There exist games satisfying Assumption 1 with |P| = P and |T | = T , in which there are BRDs with at least ( T−2 P + 1 )P steps. Proof sketch of Theorem 2. The proof relies on a recursive construction. We construct a game and an improvement path with at least the length specified in the theorem. To balance rigor and intuition, we present here a special case of our general construction and defer the formal proof to the appendix. Consider the game with P = 3, T = 5, D(k) = 15 for every k ∈ T and Q = C = c 2c 3c 4c 5c c 9c 8c 7c 6c c 10c 11c 12c 13c for c = 1PT . The first column of the matrix, which is associated with the quality of topic 1, is identical for all authors. The snake-shape path in the matrix is always greater than the value c in the first column, and is monotonically increasing (top-down). The immediate implications are a) odd players improve their quality when deviating to a topic with a greater index, while even players improve their quality when deviating to a topic with a smaller index (which is not topic 1); and b) every player is more competent than all the players that precede her on every topic but topic 1. The initial profile is a0 = (1, 1, . . . , 1). We construct the BRD that appears in Figure 1.6 It comprises three types of steps: Purple, green and yellow. In purple steps, author 1 deviates to a topic with a higher index. In yellow steps, author 2 deviates to the topic selected by author 1 (e.g., in a5) or author 3 deviates to the topic selected by author 2 (e.g., in a19). Green steps always follow yellow steps. 
In green steps, the author whose topic was selected in the previous step by an author with a higher index deviates back to topic 1 (e.g., author 1 in a6 after author 2 selects topic 5 in a5, or author 2 in a20 after author 3 selects topic 2 in a19). In steps a1 − a4, only author 1 deviates (purple steps). This is also the recursive path in a game with author 1 solely (disregarding the entries of the other players). Then, in a5, author 2 deviates to topic 5 (yellow). Since author 2 is more competent than author 1 in every topic (excluding topic 1), author 1’s utility equals zero. Then, author 1 deviates to back topic 1 in a6 (green). This goes on until step a18—author 1 improves, author 2 ties, and author 1 returns to topic 1. Steps a1 − a18 comprise the recursive path for two players. Until step a18, author 3 did not move. Then, in step a19, author 3 deviates to topic 2. Author 3 is more competent than author 2, so in a20 author 2 returns to topic 1. In steps a21 − a32 authors 1 and 2 follow the same logic as before, but they overlook topic 2 (since author 3, who is more competent than both of them, selects it). In steps a33 − a34 author 3 deviates to topic 3, and then author 2 returns to topic 1. In steps a35 − a41 authors 1 and 2 follow the same logic as before, but they overlook both topics 2 and 3. The path continues similarly until we reach the profile a48. Notice that the latter profile is not an equilibrium, but we end the path at this point for the sake of the analysis. This path is indeed exponential—for every step author i makes, for 1 < i ≤ 3, author i − 1 makes at least twice as many (in fact, much more than that; see the formal proof for more details). Theorem 2 implies that there are BRDs of length ( T−2 P + 1 )P , which is O(exp(T )) for large enough P . Furthermore, if the number of topics T and the authors P are in the same order of magnitude, then length is also exponential in P . 4 Centralized Approach - Equilibrium Computation To remedy the long convergence rate, in this section we propose an efficient algorithm for PNE computation. The algorithm is a matching application and relies on a novel graph-theoretic notion. To motivate the matching perspective, we reconsider social welfare (see Equation (3)) and neglect strategic aspects momentarily. We can find a social welfare-maximizing profile using the following matching reduction. We construct a bipartite graph, one side being the authors and the other side being the topics. The weight on each edge (j, k) is Qj,kD(k), the quality author j has on topic k times the user mass on that topic. Notice that every author can only select one strategy (topic). Furthermore, for the purpose of social welfare maximization, it suffices to consider candidate profiles in which every topic is selected by at most one author. Consequently, a maximum weighted matching 6An accessible version of Figure 1 appears in the appendix. of this graph corresponds to the social welfare maximizer. By using, e.g., the Hungarian algorithm, the problem of finding a social welfare-maximizing profile can be solved in O(max{P, T}3). However, equilibrium profiles and social welfare-maximizing profiles typically do not coincide (see the celebrated work on the Price of Anarchy [33]). The maximum matching that we proposed in the previous paragraph is susceptible to beneficial devotions; therefore, it is not stable in the equilibrium sense.7 There exist many variants of stable matching in the literature, but virtually none fit the equilibrium stability we seek. 
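As a point of reference, the welfare-maximizing matching described above is immediate to compute; the instability discussed next is the hard part. The sketch below assumes D and Q are NumPy arrays (demand vector and quality matrix), that there are at least as many topics as authors, and uses SciPy's linear_sum_assignment, which solves the same assignment problem for which the text invokes the Hungarian algorithm; the function name is ours.

import numpy as np
from scipy.optimize import linear_sum_assignment

def welfare_maximizing_profile(D, Q):
    # Edge weight of (author j, topic k) is Q[j, k] * D(k); a maximum-weight
    # author-topic matching then maximizes the social welfare of Eqn. (3),
    # since it suffices to consider profiles with at most one author per topic.
    weights = Q * D[None, :]
    authors, topics = linear_sum_assignment(weights, maximize=True)
    profile = dict(zip(authors.tolist(), topics.tolist()))  # author -> assigned topic
    return profile, weights[authors, topics].sum()

Such a profile maximizes reader welfare, but authors may still have beneficial deviations from it, which is what motivates the sequential, saturated-set-based matching of Algorithm 1 below.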
In particular, the deferred acceptance algorithm [20] cannot be used since several players can select the same topic and thus the matching is not one-to-one. If we create several copies of the same topic (a common practice for the deferred acceptance algorithm), high-quality players would block low-quality authors matched to it (unlike several medical students with varying qualities that are matched to the same hospital). In the remainder of this section, we propose a sequential matching technique to compute a PNE. Our approach contributes to the matching literature and is based on the definition of saturated sets. Due to our extensive use of graph theory in what follows, we introduce a few notational conventions. We denote a graph by G = (V,E). For a subset W ⊂ V , the induced sub-graph G[W ] is the graph whose vertex set is W and whose edge set consists of all the edges in E that have both endpoints in W . We use the standard notation NG(W ) to denote the neighbors of the vertices W in the graph G. A matching M in G is a set of pairwise non-adjacent edges. For our application, we care mostly about bipartite graphs; thus, we denote V = X ∪ Y . An X-saturating matching is a matching that covers every node in X . Hall’s Marriage Theorem, a fundamental result in combinatorics, gives necessary and sufficient conditions for the existence of perfect matching. The theorem asserts that there exists an X-saturated matching in G if and only if for every subset W ⊆ X , |W | ≤ |NG(W )|. In other words, the size of every subset in X does not exceed the number of its neighbors. The essential property we use in the PNE algorithm is saturated sets. Definition 1 (Saturated set). Let G = (X ∪ Y,E) be a finite bipartite graph. A set W ⊆ X is called saturated if |W | = |NG(W )|. Of course, this definition naturally extends beyond bipartite graphs. Furthermore, if for every other saturated set W ′ it holds that |W | ≥ |W ′|, we say that W is a maximum saturated set. Despite its striking simplicity, to the best of our knowledge, this notion of saturated sets did not receive enough attention in the CS literature (under this name or a different one), and is therefore interesting in its own right. 4.1 PNE Computation We now turn to discuss the intuition behind Algorithm 1, which computes a PNE efficiently. By and large, Algorithm 1 can be seen as a best-response dynamic. It starts from a null profile (assigning all players to a factitious topic with zero user mass) and then determines the order of best-responding. The input is the entire game description,8 as described in Section 2. In Lines 1-5 we initialize the variables we use. T̃ is the set of unmatched topics; Lk is a lower bound on the load on topic k, namely the ongoing number of players we matched to it; X,Y and E are the elements of the bipartite graph G (Y stores the set of unmatched players); and a∗ is a non-valid, empty profile that we construct as the algorithm advances. The for loop in Line 6 goes as follows. We first find the set of highest-quality players for every topic k, denoted Ak (Line 7). These players can block the others from playing k because their quality is higher, and thus we prioritize them in our sequential process. Afterwards, we set k∗ to be the most profitable topic under the current partial matching (Line 8). That is, for every topic k, we consider the set of most profitable players w.r.t. k and their potential utility if matched to k. 
The term D(k)Cj,k/(Lk + 1) upper bounds the utility of every player j ∈ Ak (see Equation (2)), in case we match Lk + 1 or more players to topic k (we might increase the load Lk in later iterations). We subsequently update Lk∗ in Line 9. We now move to the bipartite graph G. In Line 10, we create a new node x, which is the Lk∗-copy of topic k∗ (we store this information about x). We add x to the left side of G, X (Line 11), and connect x to the players of Ak∗ in Y (Line 12).
Algorithm 1: PNE computation
Input: A game description 〈P, T ,D,Q, C〉
Output: A PNE a
1  T̃ ← T // available topics
2  ∀k ∈ T : Lk ← 0 // loads on topics
3  X ← ∅, Y ← P, E ← ∅
4  G ← (X ∪ Y,E)
5  a∗ ← (∅)^P // empty profile
6  for t = 1 . . . P
7      ∀k ∈ T̃ : Ak ← argmax_{j∈Y} Qj,k
8      set k∗ ∈ argmax_{k∈T̃} { max_{j∈Ak} D(k)Cj,k/(Lk + 1) }
9      Lk∗ ← Lk∗ + 1
10     create a new node x associated with topic k∗
11     X.add(x)
12     E.add({(x, j) : j ∈ Ak∗})
13     let W ⊆ X be the maximum saturated set in G
14     if W ≠ ∅ then
15         find a maximum matching M in G[W ∪ Y ]
16         ∀j ∈ NG(W ) : a∗_j ← Topic(M(j))
17         Y.remove(NG(W ))
18         X.remove(W )
19         T̃ .remove(Topics(W )) // see Line 10
20 return a∗
7There are exceptions, of course. In degenerate cases where Q has no ties, the game is essentially a stable marriage problem. 8For the sake of illustration, we assume P ≤ T . If that is not the case, we can add enough topics with zero mass D to achieve it. Noticeably, a PNE in the new game can be converted to a PNE in the original game.
Line 13 is the crux of the algorithm: We find a subset W of X that is the maximum saturated set. We will justify our use of the article the in the previous sentence later on, as well as describe the implications of having a saturated set in this dynamically constructed graph. If W is empty, we continue to the next iteration of the for loop. But if W is non-empty, we enter the if block in Line 14. We find a maximum matching M in the induced graph G[W ∪ Y ]. We will later prove that G[W ∪ Y ] satisfies Hall's marriage condition, and thus |M | = |W | = |NG(W )|. In Line 16 we use M to set the strategies of the players in NG(W ): Every player j ∈ NG(W ) is matched to the topic associated with the node M(j) ∈ W . In Lines 17-19 we remove the newly matched players NG(W ) from Y , the topic copies W from X , and the topics associated with W from the set of unmatched topics T̃ . We repeat this process until all players are matched. Let us explain the implications of having a non-empty saturated set in G. Focus on the first time a non-empty saturated set W was found in Line 13, and denote the iteration index by t′. The set W is composed of nodes associated with several topics (association in the sense explained for Line 10); each topic may have several copies. Importantly, every time we add a node x to X with an associated topic k, we increase the load Lk; hence, in iteration t′, Lk accurately reflects the number of copies of k in X . Furthermore, k was selected for the (Lk + 1)-th time, suggesting that it is more profitable than other topics. With a few more arguments, we show that all Lk copies of k must be in W . Crucially, if we match the players in NG(W ) they cannot have beneficial deviations. We formalize this intuition via Theorem 3. Theorem 3. If the input game G satisfies Assumption 1, then Algorithm 1 returns a PNE of G.
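To complement the walkthrough above, here is a compact Python sketch of Algorithm 1. It is a simplified reading, not the authors' implementation: the efficient maximum-saturated-set routine of Lemma 1 (discussed next) is replaced by a naive brute-force search over subsets of X, so it is only suitable for small instances, and all function and variable names (find_pne, max_saturated_set, and so on) are ours. P ≤ T is assumed, as in footnote 8.

```python
from itertools import combinations

def max_matching(adj, left):
    """Kuhn's augmenting-path algorithm; adj maps a left node to a set of right nodes."""
    match = {}                                   # right node -> left node
    def try_augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if v not in match or try_augment(match[v], seen):
                    match[v] = u
                    return True
        return False
    for u in left:
        try_augment(u, set())
    return {u: v for v, u in match.items()}      # left node -> matched right node

def max_saturated_set(adj, X):
    """Naive search for the largest non-empty W with |W| == |N_G(W)| (Definition 1)."""
    for r in range(len(X), 0, -1):
        for W in combinations(X, r):
            if len(set().union(*(adj[x] for x in W))) == len(W):
                return set(W)
    return set()

def find_pne(D, Q, C):
    P, T = len(Q), len(Q[0])
    avail = set(range(T))                        # T~, the still-available topics
    load = [0] * T                               # L_k
    X, Y, adj = [], set(range(P)), {}            # X holds (topic, copy-index) nodes
    a = [None] * P
    for _ in range(P):
        # Line 7: A_k is the set of highest-quality unmatched players for topic k
        A = {k: [j for j in Y if Q[j][k] == max(Q[i][k] for i in Y)] for k in avail}
        # Line 8: pick the most profitable topic under the current loads
        k_star = max(avail, key=lambda k: max(D[k] * C[j][k] for j in A[k]) / (load[k] + 1))
        load[k_star] += 1                        # Line 9
        x = (k_star, load[k_star])               # Lines 10-12: add the L_{k*}-copy of k*
        X.append(x)
        adj[x] = set(A[k_star])
        W = max_saturated_set(adj, X)            # Line 13
        if W:                                    # Lines 14-19
            for node, j in max_matching({w: adj[w] for w in W}, W).items():
                a[j] = node[0]                   # Topic(M(j))
            matched = set().union(*(adj[w] for w in W))   # N_G(W)
            Y -= matched
            X = [x for x in X if x not in W]
            avail -= {w[0] for w in W}
            for x in X:
                adj[x] &= Y                      # drop edges to removed players
    return a
```

On the two-author, two-topic example of Section 2, find_pne([3/5, 2/5], [[1, 1/3], [2/3, 1/3]], [[1/3, 1], [1/5, 1]]) returns one of that game's equilibria (with zero-indexed authors and topics).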
We now move on to discuss the run-time of Algorithm 1. The only two lines that require a non-trivial discussion are Lines 13 and 15. As we describe in Lemma 1 below, finding the maximum saturated set includes finding a maximum matching, and thus we need not recompute it in Line 15. We therefore focus solely on the complexity of finding the maximum saturated set in G. The following Lemma 1 shows that as long as a bipartite graph G satisfies Hall's marriage condition, we can find the maximum saturated set W efficiently. Because of the independent interest in this combinatorial problem, we state it in its full generality. Lemma 1. Let G = (V,E) be a bipartite graph that satisfies Hall's marriage condition. There exists an algorithm that finds the maximum saturated set of G in time O(√|V | · |E|). The proof of this basic lemma appears in the appendix. The sketch of the proof is as follows. Let G = (X ∪ Y,E) be a graph satisfying Hall's marriage condition. We first compute a maximum matching M of G. Since Hall's marriage condition holds, we are guaranteed that M is an X-saturating matching. We then devise a technique to find whether a node x ∈ X participates in at least one saturated set. We show that nodes participating in saturated sets are reachable from the set of unmatched nodes in Y via a variation of alternating paths, and thus can be identified quickly. By the end of this procedure, we have a set X ′ ⊆ X such that every x ∈ X ′ participates in at least one saturated set. The last part is showing that under the marriage condition, every union of saturated sets is a saturated set. As a result, we conclude that X ′ is the maximum saturated set. Using Lemma 1, we can bound the run-time of Algorithm 1. Corollary 1. Algorithm 1 can be implemented in running time of O(P^2.5 · T ). 5 Discussion With great effort, companies like Amazon turned the "you bought that, would you also be interested in this" feature into a significant source of revenue. In this paper, we suggest that a "you wrote this, would you also be interested in writing on that?" feature could be revolutionary as well—contributing to better social welfare of content consumers, as well as the utility of content providers. Such a policy could be implemented in practice by a direct recommendation to providers, or by a more moderate action like nudging content providers to experiment with different types of content. To support our vision of content provider coordination in RSs even further, we show in the appendix that the ratio between the social welfare of the best equilibrium and the worst equilibrium is unbounded. Indeed, such coordination between content providers may lead to a significant lift in social welfare. More broadly, we note that maximizing the overall welfare of RSs with multiple stakeholders is an important challenge that goes way beyond this paper (see, e.g., [12]). From a technical perspective, this work suggests a variety of open questions. First, the challenge of computing the social welfare-maximizing equilibrium is still open. Second, as we show in the appendix, if Assumption 1 does not hold, BRDs may not converge. A recent work [5] demonstrates that using randomization in the recommendation function R in a non-trivial manner can break this divergence. Finding a reasonable way to do so (in terms of social welfare) in our model is left as an open question. Third, implementing cooperation using other solution concepts like no-regret learning and correlated or coarse-correlated equilibrium is also a natural extension of this work.
Lastly, our modeling neglects many real-world aspects of RSs: Providers join and leave the system, demand for content changes over time, providers create content of several types, etc. Future work with more complex modeling is required for implementing our ideas in real-world applications. Broader Impact It is well-understood in the Machine Learning community that economic aspects must be incorporated into machine learning algorithms. In that view, estimating content satisfaction in RSs is not enough. As we argue in this paper, content providers depend on the system for some part of their income; thus, better treatment of content providers makes them the main beneficiaries of the stance this paper offers. We envision that RSs that coordinate their content providers (and hence the content available for recommendation) will suffer from fewer fluctuations, be deemed fairer by all their stakeholders, and enjoy long-term consumer engagement. Acknowledgements We thank the anonymous reviewers for providing helpful and insightful comments. The work of O. Ben-Porat is partially funded by a PhD fellowship from JPMorgan Chase & Co. The work of M. Tennenholtz is funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement no. 740435).
1. What is the main contribution of the paper regarding the content generation game model? 2. What are the strengths of the paper, particularly in terms of the proof's soundness and significance? 3. Do you have any concerns or questions regarding the paper's focus on pure Nash equilibria and better response dynamics? 4. How does the reviewer assess the relevance of the paper to the NeurIPS community and its practical applications? 5. Are there any other related works that the reviewer thinks the authors should have considered or discussed in the paper?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The authors consider the model of a content generation game in a recommender system. The particular model was introduced by Ben Basat et al., JAIR 2017 and has previously been studied in a series of papers by Ben-Porat and coauthors published in NeurIPS, EC and AAAI. The model studied is as follows: It's a game between authors. Each author chooses a topic to write about. They are shown to a viewer randomly if they are one of the highest quality and derive a value equal to a conversion factor that depends on the author and the topic. The authors extend previous work by showing that in this game better response dynamics converge under a more general conversion structure that satisfies simple monotonicity properties (content of higher quality has a higher conversion rate). They show that better response dynamics can still take exponential time to converge and provide an algorithm for directly computing a pure Nash equilibrium in these games. Strengths Soundness of the claims: The authors justify all their claims with complete proofs. The proofs are involved but very thorough. I was able to follow all the details that I tried to understand. Significance and novelty: The proofs provided by the authors are non-trivial. They require novel ideas and constructions. Relevance to the NeurIPS community: The model studied by the authors has previously been published in JMLR, AAAI, NeurIPS, EC. It also seems to be of practical relevance as it captures the natural competition between content creators on platforms like Medium. Weaknesses Significance and novelty: Considering this work by itself, I wonder why pure Nash equilibria and better response dynamics are the main concepts to study in this model. Would no-regret learning and correlated/coarse-correlated equilibrium be a more natural model? Isn't this better thought of as a repeated game between content creators? At the least, wouldn't authors choose a mixed strategy where they write about a variety of topics and see how they fare compared to their competition? Perhaps this is covered by previous work but might be useful to repeat, to justify further effort on pure Nash equilibria and better response dynamics. I was also curious about the focus on better response dynamics instead of best response dynamics. In particular, since the authors' last result is about computing a pure Nash equilibrium, can the pure Nash equilibrium be reached using best response dynamics? My apologies if there is a theorem that says that best response and better response are equivalent here. Post-rebuttal: I am still not fully convinced why better response dynamics are the right dynamics to consider. I am curious to know if any best response dynamic is guaranteed to converge in a polynomial number of steps. As for mixed vs pure strategies - the authors are free to choose any topic that gives them the best utility so it is still conceivable that they could randomize between a few different topics.
NIPS
Title Content Provider Dynamics and Coordination in Recommendation Ecosystems Abstract Recommendation Systems like YouTube are vibrant ecosystems with two types of users: Content consumers (those who watch videos) and content providers (those who create videos). While the computational task of recommending relevant content is largely solved, designing a system that guarantees high social welfare for all stakeholders is still in its infancy. In this work, we investigate the dynamics of content creation using a game-theoretic lens. Employing a stylized model that was recently suggested by other works, we show that the dynamics will always converge to a pure Nash Equilibrium (PNE), but the convergence rate can be exponential. We complement the analysis by proposing an efficient PNE computation algorithm via a combinatorial optimization problem that is of independent interest. 1 Introduction Recommendation systems (RSs hereinafter) play a major role in our life nowadays. Many modern RSs, like YouTube, Medium, or Spotify, recommend content created by others and go far beyond recommendations. They are vibrant ecosystems with multiple stakeholders and are responsible for the well-being of all of them. For example, in the online publishing platform Medium, the platform should be profitable; suggest relevant content to the content consumers (readers); and support the content providers (authors). In light of this ecosystem approach, research on RSs has shifted from determining consumers’ taste (e.g., the Netflix Prize challenge [9, 25]) to other aspects like fairness, ethics, and long-term welfare [5, 29, 31, 35, 37, 40–42, 44]. Understanding content providers and their utility1 is still in its infancy. Content providers produce a constant supply of content (e.g., articles in Medium, videos on YouTube), and are hence indispensable. Successful content providers rely on the RS for some part of their income: Advertising, affiliated marketing, sponsorship, and merchandise; thus, unsatisfied content providers might decide to provide a different type of content or even abandon the RS. To illustrate, a content provider who is unsatisfied with her exposure, which is heavily correlated with her income from the RS, can switch to another type of content or seek another niche. Such downstream effects are detrimental to content consumer satisfaction because they change the available content the RS can recommend. The synergy between content providers and consumers is thus fragile, and solidifying one side solidifies the other. 1We use the term utility to address the well-being of the content providers, and social welfare for the well-being of the content consumers. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. In this paper, we investigate the dynamics of RSs using a stylized model in which content providers are strategic. Content providers obtain utility from displays of their content and are willing to change the content they offer to increase their utility. These fluctuations change not only the utility of the providers but also the social welfare of the consumers, defined as the quality of their proposed content. We show that the provider dynamics always converges to a stable point (namely, a pure Nash equilibrium), but the convergence time may be long. This observation suggests a more centralized approach, in which the RS coordinates the providers, and leads to fast convergence. 
While our model is stylized, we believe it offers insights into more general, real-world RSs. The game-theoretic modeling allows counterfactual reasoning about the content that could-have-beengenerated, which is impossible to achieve using existing data-sets and small online experiments. Our analysis advocates increased awareness to content providers and their incentives, a behavior that rarely exists these days in RSs.2 Our contribution We explore the ecosystem using the following game-theoretic model, and use the blogging terminology to simplify the discussion. We consider a set of players (i.e., content providers), each selects a topic to write from a predefined set of topics (e.g., economics, sports, medieval movies, etc.). Each player has a quality w.r.t. each topic, quantifying relevance and attractiveness of that author’s content if she writes on that topic, and a conversion rate. Given a selection of topics (namely, a strategy profile), the RS serves users who consume content. All queries concerned with a topic are modeled as the demand for that topic. The utility every player obtains is the sum of displays her content receives (affected by the demand for topics and the operating RS) multiplied by the conversion rate. The game-theoretic model we adopt in service is suggested by Ben Basat et al. [4] and is well-justified by later research [8, 42]. Technically, we deal with the question of reaching a stable point—a point in which none of the players can deviate from her selected topic and increase her utility. We are interested in the convergence time and the welfare of the system in these stable points. We first explore the decentralized approach: Better-response learning dynamics (see, e.g., [16, 21]), in which players asynchronously deviate to improve their utility (an arbitrary player to an arbitrary strategy, as long as she improves upon her current utility). We show that every better-response dynamic converges, thereby extending prior work [8]. Through a careful recursive construction, we show a negative result: The convergence time can be exponential in the number of topics. Long convergence time suggests a different approach. We consider the scenario in which the RS could act centrally, and support the process of matching players with topics. We devise an algorithm that computes an equilibrium fast (roughly squared in the input size). To solve this computational challenge, which is a mixture of matching and load-balancing, we propose a novel combinatorial optimization problem that is of independent interest. Conceptually, we offer a qualitative grounding for the advantages of coordination and intervention3 in the content provider dynamics. Our analysis relies on the assumption of complete knowledge of all model parameters, in particular the qualities. While unrealistic in practice, we expect that incomplete information will only exacerbate the problems we address. The main takeaway from this paper is that RSs are not self-regulated markets, and as much as suggesting authors topics to write on can lead to a significant increase in the system’s stability. We discuss some practical ways of reaching this goal in Section 5. Related work Strikingly, content provider welfare and their fair treatment were only suggested very recently in the Recommendation Systems and Information Retrieval communities [12, 14, 18, 35, 40, 46]. All of these works do not model the incentives of content providers explicitly, and consequently cannot offer a what-if analysis like ours. 
Our model is similar to those employed in several recent papers [4, 5, 7, 8, 30]. Ben-Porat et al. [8] study a model that is a special case of ours, and show that every learning dynamic converges. Our Theorem 1 recovers and extends their convergence results. Moreover, unlike this work, they do not address convergence time, social welfare, and centralized equilibrium computation. Other works [5, 7, 30] aim to design recommendation mechanisms that mitigate strategic behavior and 2There are some exceptions, e.g., YouTube instructing providers how to find their niche [1]. However, these are sporadic, primitive, and certainly do not enjoy recent technological advancements like collaborative filtering. 3We do not say that the RSs should dictate authors what to write; instead, it should suggest to each author profitable topics that he/she can write on competently to increase her utility. lead to long-term welfare. On the negative side, their mechanisms might knowingly recommend inferior content to some consumers. We see their work as parallel to ours, as in this work we focus on the prevailing recommendation approach—recommending the best-fitting content. We suggest that a centralized approach, in which the RS orchestrates the player-topic matching, can significantly improve the time until the system reaches stability (in the form of equilibrium). Furthermore, we envision that our approach can also lead to high social welfare, as we discuss in Section 5. More broadly, an ever-growing body of research deals with fairness considerations in Machine Learning [15, 17, 36, 38, 45]. In the context of RSs, a related line of research suggests fairer ranking methods to improve the overall performance [11, 26, 43]. For example, Yao and Huang [43] propose metrics mitigating discrimination in collaborative-filtering methods that arise from learning from historical data. Despite not always being explicit, the ultimate goal of fairness imposition is to achieve long-term welfare [28]. Our paper and analysis share a similar flavor: To achieve high stability via faster convergence, RSs should coordinate the process of content selection. 2 Model We consider the following recommendation ecosystem, where for concreteness we continue with the blog authors4 example. There is a set of authors P , each owning a blog. We further assume that each blog is concerned with a single topic, from a predefined topic set T . We assume P and T are finite, and denote |P| = P and |T | = T . The strategy space of each player is thus T ; she selects the topic she writes on. A pure strategy profile is a tuple a = (a1, . . . aP ) of topic selections, where aj is the topic selected by author j. For every author j and topic k, there is a quality that quantifies the relevance and attractiveness of j’s blog if she picks the topic k. We denote by Q the quality matrix, for Q ∈ [0, 1]P×T . The RS serves users who consume content. We do not distinguish individual consumers, but rather model the need for content as a demand for each topic. A demand distribution D over the topics T is publicly known, where we use D(k) to denote the demand mass for topic k ∈ T . W.l.o.g., we assume that D(1) ≥ D(2) ≥ . . . ≥ D(m). The recommendation functionR matches demand with available blogs. Given the demand for topic k, a strategy profile a, and the qualityQ of the blogs for the selected topics in a, the recommendation function R recommends content, possibly in a randomized manner. 
It is well-known that content consumers pay most of their attention to highly ranked content [13, 22, 24, 27]; therefore, we assume for simplicity thatR recommends one content solely. For ease of notation, we denoteRj(Q, k,a) as the probability that author j is ranked first under the distribution R(Q, k,a) (or rather, author j’s content is ranked first). While blog readers admire high-quality recommended blogs, blog authors care for payoffs. As described in Section 1, authors draw monetary rewards from attracting readers in various ways. We model this payoff abstractly using a conversion matrix C, C ∈ [0, 1]P×T . We assume that every blog reader grants Cj,k monetary units to author j when she writes on topic k. For example, if author j only cares for exposure, namely the number of impressions her blog receives, then Cj,k = 1 for every k ∈ T . Alternatively, if author j cares for the engagement of readers in her blog, then the conversion Cj,k should be somewhat correlated with the qualityQj,k. We will return to these two special cases later on, in Subsection 3.1. The utility of author j under a strategy profile a is given by Uj(a) def = ∑ k∈T 1aj=k · D(k) · Rj(Q, k,a) · Cj,k. (1) Overall, we represent a game as a tuple 〈P, T ,D,Q, C,R,U〉, where P is the authors, T is the topics, D is the demand for topics,Q and C are the quality and conversion matrices,R is the recommendation function, and U is the utility function. Recommending the Highest Quality Content In this paper, we focus on the RS that recommends blogs of the highest quality, breaking ties randomly. Such a behavior is intuitive and well-justified in the literature [3, 10, 23, 39]. More formally, let Bk(a) denote the highest quality of a blog written on topic k under the profile a, i.e., Bk(a) def = maxj∈P{1aj=k · Qj,k}. Furthermore, let Hk(a) denote the set of authors whose documents have the highest quality among those who write on topic k under 4We use authors and players interchangeably. a, Hk(a) def = {j ∈ P | 1aj=k · Qj,k = Bk(a)}. The recommendation function Rtop is therefore defined as Rtopj (Q, k,a) def = { 1 |Hk(a)| j ∈ Hk(a) 0 otherwise . Consequently, we can reformulate the utility function from Equation (1) in the following succinct form,5 Uj(a) def = ∑ k∈T 1aj=k · D(k) |Hk(a)| · Cj,k. (2) From here on, sinceRtop and U are fully determined by the rest of the objects, we omit them from the game representation; hence, we represent every game by the more concise tuple 〈P, T ,D,Q, C〉. Quality-Conversion Assumption Throughout the paper, we make the following Assumption 1 about the relation between quality and conversion. Assumption 1. For every topic k ∈ T and every two authors j1, j2 ∈ P , Qj1,k ≥ Qj2,k ⇒ Cj1,k ≥ Cj2,k. Intuitively, Assumption 1 implies that quality and conversion are correlated given the topic. For every topic k, if authors j1 and j2 write on topic k and j1’s content has a weakly better quality, then j1’s content has also a weakly better conversion. This assumption plays a crucial role in our analysis; we discuss relaxing it in Section 5. Solution Concepts The social welfare of the readers is the average weighted quality. Formally, given a strategy profile a, SW (a) def = ∑ k∈T D(k) ∑ j∈P Rj(Q, k,a)Qj,k. (3) As the recommendation functionRtop always recommends the highest quality content, we can have the following more succinct representation of social welfare, SW (a) = ∑ k∈T D(k)Bk(a). However, social welfare maximization does not concern author utility. 
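To make Equations (2) and (3) concrete, the following sketch evaluates author utilities, social welfare, and beneficial deviations. It uses the two-author, two-topic instance of the worked example in the next paragraph, with authors and topics zero-indexed; all function names are ours.

```python
# A small sketch of Equations (2) and (3) and a beneficial-deviation check,
# on the two-author, two-topic instance of the worked example below.
D = [3 / 5, 2 / 5]
Q = [[1, 1 / 3], [2 / 3, 1 / 3]]
C = [[1 / 3, 1], [1 / 5, 1]]

def top_authors(a, k):
    """H_k(a): authors writing on topic k whose quality there is highest."""
    writers = [j for j in range(len(Q)) if a[j] == k]
    if not writers:
        return []
    best = max(Q[j][k] for j in writers)
    return [j for j in writers if Q[j][k] == best]

def utility(a, j):
    """Equation (2): author j's utility under profile a, with the R^top recommender."""
    k = a[j]
    H = top_authors(a, k)
    return D[k] * C[j][k] / len(H) if j in H else 0.0

def social_welfare(a):
    """Equation (3) under R^top: sum_k D(k) * B_k(a)."""
    return sum(D[k] * max((Q[j][k] for j in range(len(Q)) if a[j] == k), default=0.0)
               for k in range(len(D)))

def beneficial_deviation(a):
    """Return some (author, topic) improving that author's utility, or None if a is a PNE."""
    for j in range(len(Q)):
        for k in range(len(D)):
            if utility(a[:j] + [k] + a[j + 1:], j) > utility(a, j):
                return j, k
    return None

print(utility([0, 0], 1))            # 0.0: author 1 is blocked by author 0 on topic 0
print(beneficial_deviation([0, 0]))  # (0, 1): moving alone to topic 1 raises author 0's utility
print(beneficial_deviation([0, 1]))  # None: this is the PNE (1, 2) of the example below
print(social_welfare([0, 1]))        # ~0.733
```

The same helpers can be reused to enumerate all pure profiles of a small game and list its equilibria.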
Authors may be willing to deviate from the socially optimal profile if such a deviation is beneficial in terms of utility. Consequently, we seek stable solutions, as captured by the property of pure Nash equilibrium (hereinafter PNE). We say that a strategy profile a is a PNE if for every author j and topic k, Uj(a) ≥ Uj(a−j , k), where a−j is the tuple obtained by deleting the j’s entry of a. It is worth noting that while mixed Nash equilibrium is guaranteed to exist in finite games, a PNE generally does not exist in games. However, as we show later on, it always exists in our class of games. Example To clarify our notation and setting, we provide the following example. Consider a game with two players (P = 2), two topics (T = 2) and the demand distribution D such that D(1) = 3/5,D(2) = 2/5. Let the quality and conversion matrices be Q = ( 1 1/3 2/3 1/3 ) , C = ( 1/3 1 1/5 1 ) . Consider the strategy profile (a1, a2) = (1, 1). Author 1 is more competent that author 2 on topic 1, since Q1,1 = 1 > Q2,1 = 23 ; thus, the utility of author 1 under the profile (1, 1) is U1(1, 1) = D(1) · Rtop1 (Q, 1, (1, 1)) · C1,1 = 35 · 1 · 1 3 = 1 5 . On the other hand, author 2 gets U2(1, 1) = 35 · 0 · 1 5 = 0. Author 2 has a beneficial deviation: Under the profile (1, 2), her utility is U2(1, 2) = 25 ·1 ·1 = 2 5 , while the utility of author 1 remains the same, U1(1, 2) = 1 5 . For the strategy profile (2, 2), both authors have the same quality; thus,Rtop1 (Q, 2, (2, 2)) = R top 2 (Q, 2, (2, 2)) = 12 . As for the utilities, U1(2, 2) = U2(2, 2) = 25 · 1 2 · 1 = 1 5 . Overall, we see that both (1, 2) and (2, 2) are PNEs, since the authors do not have beneficial deviations. However, the social welfare of these PNEs is different: SW (1, 2) = 35 · 1 + 2 5 · 1 3 ≈ 0.73, yet SW (2, 2) = 3 5 · 0 + 2 5 1 3 ≈ 0.13. 5In case no author writes on topic k under a,R do not make any recommendation. As reflected in the utility function U through the indicator 1aj=k, readers associated with a non-selected topic k do not contribute to any author’s utility. 3 Decentralized Approach In this section, we consider the prevailing, decentralized approach. Starting from an arbitrary profile, authors interact asynchronously, each improving her utility in every time step. Such dynamics is widely-known in the Game Theory literature as better-response dynamics (hereinafter, BRDs). Studying BRDs is a robust approach for assuring the environment reaches a stable point, while making minimal assumption on the information of the players. Two central questions about BRDs in games are a) whether any BRD converges; and b) what is the convergence rate. We show that the answer to the first question is in the affirmative. For the second question, we show through an intricate combinatorial construction a result of negative flavor: The convergence rate can be exponential in the number of topics T . 3.1 Better-Response Dynamic Convergence Before we go on, we define BRDs formally. Given a strategy profile a, we say that a′j ∈ T is a better response of author j w.r.t. a if Uj(a−j , a′j) > Uj(a). A BRD is a sequence of profiles (a1,a2, . . . ), where at every step i + 1 exactly one author better-responds to ai, i.e., there exists an author j(i) such that ai+1 = (ai−j(i), a i+1 j(i)) and Uj(i)(a i+1) > Uj(i)(ai). A BRD can start from any arbitrary profile, and include improvements of any arbitrary author at any arbitrary step (assuming she has a better response in that time step). If a BRD a1, . . . 
,al converges, namely no player can better respond to al, then by definition al is a PNE. Our goal is to show that every BRD of any game in our class of games converges. If there exists an infinite BRD, then it must contain cycles as the number of different strategy profiles is finite. Equivalently, nonexistence of improvement cycles suggests that any BRD will converge to a PNE [32]. General techniques for showing BRD convergence in games are rare, and are typically based on coming up with a potential function [6, 21, 34] or a natural lexicographic order [2, 19]. However, as already established by prior work [8, Proposition 1], our class of game does not fit into the category of an exact potential function; and a lexicographic order does not seem to arise naturally. Ben-Porat et al. [8] prove BRD convergence for two sub-classes of games: Games where C is identically 1, and games with C = Q. Interestingly, they prove BRD convergence for each sub-class separately using different arguments. We extend their technique to deal with any conversion matrix C that satisfies Assumption 1. Theorem 1. If a game G satisfies Assumption 1, then every BRD in G converges to a PNE. 3.2 Rate of Convergence We now move on to the second question proposed in the beginning of the section, which deals with convergence rate. The convergence rate is the worst-case length of any BRD. Recall that a BRD can start from a PNE and thus converge after one step, and hence the worst-case approach we offer here is justified. Our next theorem lower bounds the worst case convergence rate by an exponential factor in the number of topics T . This result is illuminating as it shows that in the worst case, although convergence is guaranteed, it may not be reachable in feasible time. Theorem 2. Consider P ≥ 1 and T ≥ 2. There exist games satisfying Assumption 1 with |P| = P and |T | = T , in which there are BRDs with at least ( T−2 P + 1 )P steps. Proof sketch of Theorem 2. The proof relies on a recursive construction. We construct a game and an improvement path with at least the length specified in the theorem. To balance rigor and intuition, we present here a special case of our general construction and defer the formal proof to the appendix. Consider the game with P = 3, T = 5, D(k) = 15 for every k ∈ T and Q = C = c 2c 3c 4c 5c c 9c 8c 7c 6c c 10c 11c 12c 13c for c = 1PT . The first column of the matrix, which is associated with the quality of topic 1, is identical for all authors. The snake-shape path in the matrix is always greater than the value c in the first column, and is monotonically increasing (top-down). The immediate implications are a) odd players improve their quality when deviating to a topic with a greater index, while even players improve their quality when deviating to a topic with a smaller index (which is not topic 1); and b) every player is more competent than all the players that precede her on every topic but topic 1. The initial profile is a0 = (1, 1, . . . , 1). We construct the BRD that appears in Figure 1.6 It comprises three types of steps: Purple, green and yellow. In purple steps, author 1 deviates to a topic with a higher index. In yellow steps, author 2 deviates to the topic selected by author 1 (e.g., in a5) or author 3 deviates to the topic selected by author 2 (e.g., in a19). Green steps always follow yellow steps. 
In green steps, the author whose topic was selected in the previous step by an author with a higher index deviates back to topic 1 (e.g., author 1 in a6 after author 2 selects topic 5 in a5, or author 2 in a20 after author 3 selects topic 2 in a19). In steps a1 − a4, only author 1 deviates (purple steps). This is also the recursive path in a game with author 1 solely (disregarding the entries of the other players). Then, in a5, author 2 deviates to topic 5 (yellow). Since author 2 is more competent than author 1 in every topic (excluding topic 1), author 1’s utility equals zero. Then, author 1 deviates to back topic 1 in a6 (green). This goes on until step a18—author 1 improves, author 2 ties, and author 1 returns to topic 1. Steps a1 − a18 comprise the recursive path for two players. Until step a18, author 3 did not move. Then, in step a19, author 3 deviates to topic 2. Author 3 is more competent than author 2, so in a20 author 2 returns to topic 1. In steps a21 − a32 authors 1 and 2 follow the same logic as before, but they overlook topic 2 (since author 3, who is more competent than both of them, selects it). In steps a33 − a34 author 3 deviates to topic 3, and then author 2 returns to topic 1. In steps a35 − a41 authors 1 and 2 follow the same logic as before, but they overlook both topics 2 and 3. The path continues similarly until we reach the profile a48. Notice that the latter profile is not an equilibrium, but we end the path at this point for the sake of the analysis. This path is indeed exponential—for every step author i makes, for 1 < i ≤ 3, author i − 1 makes at least twice as many (in fact, much more than that; see the formal proof for more details). Theorem 2 implies that there are BRDs of length ( T−2 P + 1 )P , which is O(exp(T )) for large enough P . Furthermore, if the number of topics T and the authors P are in the same order of magnitude, then length is also exponential in P . 4 Centralized Approach - Equilibrium Computation To remedy the long convergence rate, in this section we propose an efficient algorithm for PNE computation. The algorithm is a matching application and relies on a novel graph-theoretic notion. To motivate the matching perspective, we reconsider social welfare (see Equation (3)) and neglect strategic aspects momentarily. We can find a social welfare-maximizing profile using the following matching reduction. We construct a bipartite graph, one side being the authors and the other side being the topics. The weight on each edge (j, k) is Qj,kD(k), the quality author j has on topic k times the user mass on that topic. Notice that every author can only select one strategy (topic). Furthermore, for the purpose of social welfare maximization, it suffices to consider candidate profiles in which every topic is selected by at most one author. Consequently, a maximum weighted matching 6An accessible version of Figure 1 appears in the appendix. of this graph corresponds to the social welfare maximizer. By using, e.g., the Hungarian algorithm, the problem of finding a social welfare-maximizing profile can be solved in O(max{P, T}3). However, equilibrium profiles and social welfare-maximizing profiles typically do not coincide (see the celebrated work on the Price of Anarchy [33]). The maximum matching that we proposed in the previous paragraph is susceptible to beneficial devotions; therefore, it is not stable in the equilibrium sense.7 There exist many variants of stable matching in the literature, but virtually none fit the equilibrium stability we seek. 
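Returning for a moment to the decentralized dynamics of Section 3, the self-contained sketch below runs a better-response dynamic on the three-author, five-topic instance from the proof sketch of Theorem 2 (Q = C with c = 1/(PT)). The deviation rule here (the first improving move found) is ours; with it the dynamic converges quickly, whereas the exponential lower bound of Theorem 2 relies on an adversarially chosen order of deviations.

```python
c = 1 / 15                                  # c = 1/(P*T) with P = 3, T = 5
Q = [[c, 2 * c, 3 * c, 4 * c, 5 * c],
     [c, 9 * c, 8 * c, 7 * c, 6 * c],
     [c, 10 * c, 11 * c, 12 * c, 13 * c]]
C = Q                                       # the construction uses C = Q
D = [1 / 5] * 5

def utility(a, j):
    """Equation (2) with the highest-quality recommender and uniform tie breaking."""
    k = a[j]
    best = max(Q[i][k] for i in range(len(Q)) if a[i] == k)
    ties = sum(1 for i in range(len(Q)) if a[i] == k and Q[i][k] == best)
    return D[k] * C[j][k] / ties if Q[j][k] == best else 0.0

def better_response_dynamic(a):
    """Apply the first improving deviation found until no author can improve (a PNE)."""
    a, steps, improved = list(a), 0, True
    while improved:
        improved = False
        for j in range(len(Q)):
            for k in range(len(D)):
                b = a[:j] + [k] + a[j + 1:]
                if utility(b, j) > utility(a, j):
                    a, steps, improved = b, steps + 1, True
                    break
            if improved:
                break
    return a, steps

profile, steps = better_response_dynamic([0, 0, 0])
print(profile, steps)   # a PNE is reached; the step count depends on the deviation order
```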
In particular, the deferred acceptance algorithm [20] cannot be used since several players can select the same topic and thus the matching is not one-to-one. If we create several copies of the same topic (a common practice for the deferred acceptance algorithm), high-quality players would block low-quality authors matched to it (unlike several medical students with varying qualities that are matched to the same hospital). In the remainder of this section, we propose a sequential matching technique to compute a PNE. Our approach contributes to the matching literature and is based on the definition of saturated sets. Due to our extensive use of graph theory in what follows, we introduce a few notational conventions. We denote a graph by G = (V,E). For a subset W ⊂ V , the induced sub-graph G[W ] is the graph whose vertex set is W and whose edge set consists of all the edges in E that have both endpoints in W . We use the standard notation NG(W ) to denote the neighbors of the vertices W in the graph G. A matching M in G is a set of pairwise non-adjacent edges. For our application, we care mostly about bipartite graphs; thus, we denote V = X ∪ Y . An X-saturating matching is a matching that covers every node in X . Hall’s Marriage Theorem, a fundamental result in combinatorics, gives necessary and sufficient conditions for the existence of perfect matching. The theorem asserts that there exists an X-saturated matching in G if and only if for every subset W ⊆ X , |W | ≤ |NG(W )|. In other words, the size of every subset in X does not exceed the number of its neighbors. The essential property we use in the PNE algorithm is saturated sets. Definition 1 (Saturated set). Let G = (X ∪ Y,E) be a finite bipartite graph. A set W ⊆ X is called saturated if |W | = |NG(W )|. Of course, this definition naturally extends beyond bipartite graphs. Furthermore, if for every other saturated set W ′ it holds that |W | ≥ |W ′|, we say that W is a maximum saturated set. Despite its striking simplicity, to the best of our knowledge, this notion of saturated sets did not receive enough attention in the CS literature (under this name or a different one), and is therefore interesting in its own right. 4.1 PNE Computation We now turn to discuss the intuition behind Algorithm 1, which computes a PNE efficiently. By and large, Algorithm 1 can be seen as a best-response dynamic. It starts from a null profile (assigning all players to a factitious topic with zero user mass) and then determines the order of best-responding. The input is the entire game description,8 as described in Section 2. In Lines 1-5 we initialize the variables we use. T̃ is the set of unmatched topics; Lk is a lower bound on the load on topic k, namely the ongoing number of players we matched to it; X,Y and E are the elements of the bipartite graph G (Y stores the set of unmatched players); and a∗ is a non-valid, empty profile that we construct as the algorithm advances. The for loop in Line 6 goes as follows. We first find the set of highest-quality players for every topic k, denoted Ak (Line 7). These players can block the others from playing k because their quality is higher, and thus we prioritize them in our sequential process. Afterwards, we set k∗ to be the most profitable topic under the current partial matching (Line 8). That is, for every topic k, we consider the set of most profitable players w.r.t. k and their potential utility if matched to k. 
The term D(k)Cj,k/Lk+1 upper bounds the utility of every player j ∈ Ak (see Equation (2)), in case we match Lk + 1 or more players to topic k (we might increase the load Lk in later iterations). We subsequently update LK∗ in Line 9. We now move to the bipartite graph G. In Line 10, we create a new node x, which is the Lk∗ -copy of topic k∗ (we store this information about x). We add x to the left side of G, X (Line 11), and connect 7There are exceptions, of course. In degenerate cases whereQ has no ties, the game is essentially a stable marriage problem. 8For the sake of illustration, we assume P ≤ T . If that is not the case, we can add enough topics with zero mass D to achieve it. Noticeably, a PNE in the new game can be converted to a PNE in the original game. Algorithm 1: PNE computation Input: A game description 〈P, T ,D,Q, C〉 Output: A PNE a 1 T̃ ← T // available topics 2 ∀k ∈ T : Lk ← 0 // loads on topic 3 X ← ∅, Y ← P, E ← ∅ 4 G← (X ∪ Y,E) 5 a∗ ← (∅)m // empty profile 6 for t = 1 . . . P 7 ∀k ∈ T̃ : Ak ← argmaxj∈Y Qj,k 8 set k∗ ∈ argmaxk∈T̃ { maxj∈Ak D(k)Cj,k Lk+1 } 9 Lk∗ ← Lk∗ + 1 \\for loop continues... 10 create a new node x associated with topic k∗ 11 X.add(x) 12 E.add ({(x, j) : j ∈ Ak∗}) 13 Let W ⊆ X be the maximum saturated set in G 14 if W 6= ∅ then 15 find a maximum matching M in G[W ∪ Y ] 16 ∀j ∈ NG(W ) : a∗j ← Topic(M(j)) 17 Y.remove(NG(W )) 18 X.remove(W ) 19 T̃ .remove(Topics(W )) // see Line 10 20 return a∗ x to the players of Ak∗ in Y (Line 12). Line 13 is the crux of the algorithm: We find a subset W of X that is the maximum saturated set. We will justify our use of the article the in the previous sentence later on, as well as describe the implications of having a saturated set in this dynamically constructed graph. If W is empty, we continue to the next iteration of the for loop. But if W is non-empty, we enter the if block in Line 14. We find a maximum matching M in the induced graph G[W ∪ Y ]. We will later prove that G[W ∪ Y ] satisfies Hall’s marriage condition, and thus |M | = |W | = |NG(W )|. In Line 16 we use M to set the strategies of the players in NG(W ): Every player j ∈ NG(W ) is matched to the topic associated with the node M(j) ∈ W . In Lines 17-19 we remove the newly matched players NG(W ) from Y , the topic copies W from X , and the topics associated with W from the set of unmatched topics T̃ . We repeat this process until all players are matched. Let us explain the implications of having a non-empty saturated set in G. Focus on the first time a non-empty saturated set W was found in Line 13, and denote the iteration index by t′. The set W is composed of nodes associated with several topics (association in the sense we explain about Line 10); each one may have several copies. Importantly, every time we add a node x to X with an associated topic k, we increased the load Lk; hence, in iteration t′, Lk accurately reflects the number of copies of k in X . Furthermore, k was selected for the Lk + 1 time, suggesting that it is more profitable than other topics. With a few more arguments, we show that all Lk copies of k must be in W . Crucially, if we match the players in NG(W ) they cannot have beneficial deviations. We formalize this intuition via Theorem 3. Theorem 3. If the input game G satisfies Assumption 1, then Algorithm 1 returns a PNE of G. We now move on to discuss its run-time. The only two lines that require a non-trivial discussion are Lines 13 and 15. 
As we describe in Lemma 1 below, finding the maximum saturated set includes finding a maximum matching, and thus we need not recompute it in Line 15. We therefore focus on the complexity of finding the saturated set in G solely. The following Lemma 1 shows that as long as a bipartite G satisfies Hall’s marriage condition, we can find the maximum saturated set W efficiently. Because of the independent interest in this combinatorial problem, we state it in its full generality. Lemma 1. Let G = (V,E) be a bipartite graph that satisfies Hall’s marriage condition. There exists an algorithm that finds the maximum saturated set of G in time O( √ |V ||E|). The proof of this basic lemma appears in the appendix. The sketch of the proof is as follows. Let G = (X ∪ Y,E) be a graph satisfying Hall’s marriage condition. We first compute a maximum matching M of G. Since Hall’s marriage condition holds, we are guaranteed that M is an Xsaturating matching. We then devise a technique to find whether a node x ∈ X participates in at least one saturated set. We show that nodes participating in saturated sets are reachable from the set of unmatched nodes in Y via a variation of alternating paths, and thus can be identified quickly. By the end of this procedure, we have a set X ′ ⊆ X such that every x ∈ X ′ participates in at least one saturated set. The last part is showing that under the marriage condition, every union of saturated sets is a saturated set. As a result, we conclude that X ′ is the maximum saturated set. Using Lemma 1, we can bound the run-time of Algorithm 1. Corollary 1. Algorithm 1 can be implemented in running time of O(P 2.5 · T ). 5 Discussion With great effort, companies like Amazon turned the “you bought that, would you also be interested in this” feature into a significant source of revenue. In this paper, we suggest that a “you wrote this, would you also be interested in writing on that?” feature could be revolutionary as well—contributing to better social welfare of content consumers, as well as the utility of content providers. Such a policy could be implemented in practice by a direct recommendation to providers, or by a more moderate action like nudging content providers to experiment with a different set of contents. To support our vision of content provider coordination in RSs even further, we show in the appendix that the ratio between the social welfare of the best equilibrium and the worst equilibrium is unbounded. Indeed, such a coordination between content providers may lead to a significant lift in social welfare. More broadly, we note that maximizing the overall welfare of RSs with multiple stakeholders is an important challenge that goes way beyond this paper (see, e.g., [12]). From a technical perspective, this work suggests a variety of open questions. First, the challenge of computing the social welfare-maximizing equilibrium is still open. Second, as we show in the appendix that if Assumption 1 does not hold, BRDs may not converge. A recent work [5] demonstrates that using randomization in the recommendation functionR in a non-trivial manner can break this divergence. Finding a reasonable way to do so (in terms of social welfare) in our model is left as an open question. Third, implementing cooperation using other solution concepts like no-regret learning and correlated or coarse-correlated equilibrium are also natural extensions of this work. 
Lastly, our modeling neglects many real-world aspects of RSs: Providers join and leave the system, demand for content changes over time, providers create content of several types, etc. Future work with a more complex modeling is required for implementing our ideas in real-world applications. Broader Impact It is well-understood in the Machine Learning community that economic aspects must be incorporated into machine learning algorithms. In that view, estimating content satisfaction in RSs is not enough. As we argue in this paper, content providers depend on the system for some part of their income; thus, their better treatment makes them the main beneficiaries of the stance this paper offers. We envision that RSs that will coordinate their content providers (and hence the content available for recommendation) will suffer from less fluctuations, be deemed fairer by all their stakeholders, and will enjoy long-term consumer engagement. Acknowledgements We thank the anonymous reviewers for providing helpful and insightful comments. The work of O. Ben-Porat is partially funded by a PhD fellowship from JPMorgan Chase & Co. The work of M. Tennenholtz is funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement n◦ 740435).
1. What is the focus of the paper in terms of game theory? 2. What are the main contributions and results presented in the paper? 3. How does the reviewer assess the significance and originality of the paper's content compared to prior works? 4. Are there any concerns or questions regarding the paper's methodology, particularly in handling tie quality scores? 5. How does the reviewer evaluate the paper's relevance to practical scenarios and its potential impact on future research?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper studies the following game that captures the decisions content providers must make in choosing the topics they cover: Each player (content provider) selects from a finite set of topics. If player j writes on topic k, the quality of this content is an exogenously determined parameter q_{jk}. Readers interested in each topic choose the highest quality content provided (with ties broken uniformly at random), and the content providers are rewarded based on the fraction of readers they get, multiplied by a conversion rate that depends on the content provider and the topic. The main results of the paper are the following theoretical results: this game always has a pure-strategy Nash equilibrium, the better-response dynamics always converges but might take exponential time, and there is an efficient algorithm to compute a Nash equilibrium. The game is essentially something in between a congestion game with player-specific payoff functions and a stable marriage game. Given all the previous work on similar settings (in particular the AAAI'19 paper), the results are not surprising, although actually working out all the technical details is non-trivial. It's worth noting that (if I'm not mistaken) when there is no ties in quality parameters, the game is essentially a stable marriage game, and a Nash equilibrium can be found by an author-proposing algorithm. Therefore, all the complications of Algorithm 1 has to do with ties among quality scores. In terms of motivation, the paper falls under the category of theory papers loosely motivated by a practical scenario. In particular, the model is really about how authors choose a topic to write about, and there's no real connection to the recommender systems, since the model assumes a trivial model of recommendation (the recommender system that always picks an item with the highest quality, for known quality scores). Strengths Nice theoretical results. The algorithm for computing a NE is non-trivial and interesting. Weaknesses The marginal contribution over the previous work (e.g., AAAI'19 paper) is not that substantial. Problem is not very well motivated.
NIPS
Title Content Provider Dynamics and Coordination in Recommendation Ecosystems Abstract Recommendation Systems like YouTube are vibrant ecosystems with two types of users: Content consumers (those who watch videos) and content providers (those who create videos). While the computational task of recommending relevant content is largely solved, designing a system that guarantees high social welfare for all stakeholders is still in its infancy. In this work, we investigate the dynamics of content creation using a game-theoretic lens. Employing a stylized model that was recently suggested by other works, we show that the dynamics will always converge to a pure Nash Equilibrium (PNE), but the convergence rate can be exponential. We complement the analysis by proposing an efficient PNE computation algorithm via a combinatorial optimization problem that is of independent interest. 1 Introduction Recommendation systems (RSs hereinafter) play a major role in our life nowadays. Many modern RSs, like YouTube, Medium, or Spotify, recommend content created by others and go far beyond recommendations. They are vibrant ecosystems with multiple stakeholders and are responsible for the well-being of all of them. For example, in the online publishing platform Medium, the platform should be profitable; suggest relevant content to the content consumers (readers); and support the content providers (authors). In light of this ecosystem approach, research on RSs has shifted from determining consumers’ taste (e.g., the Netflix Prize challenge [9, 25]) to other aspects like fairness, ethics, and long-term welfare [5, 29, 31, 35, 37, 40–42, 44]. Understanding content providers and their utility1 is still in its infancy. Content providers produce a constant supply of content (e.g., articles in Medium, videos on YouTube), and are hence indispensable. Successful content providers rely on the RS for some part of their income: Advertising, affiliated marketing, sponsorship, and merchandise; thus, unsatisfied content providers might decide to provide a different type of content or even abandon the RS. To illustrate, a content provider who is unsatisfied with her exposure, which is heavily correlated with her income from the RS, can switch to another type of content or seek another niche. Such downstream effects are detrimental to content consumer satisfaction because they change the available content the RS can recommend. The synergy between content providers and consumers is thus fragile, and solidifying one side solidifies the other. 1We use the term utility to address the well-being of the content providers, and social welfare for the well-being of the content consumers. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. In this paper, we investigate the dynamics of RSs using a stylized model in which content providers are strategic. Content providers obtain utility from displays of their content and are willing to change the content they offer to increase their utility. These fluctuations change not only the utility of the providers but also the social welfare of the consumers, defined as the quality of their proposed content. We show that the provider dynamics always converges to a stable point (namely, a pure Nash equilibrium), but the convergence time may be long. This observation suggests a more centralized approach, in which the RS coordinates the providers, and leads to fast convergence. 
While our model is stylized, we believe it offers insights into more general, real-world RSs. The game-theoretic modeling allows counterfactual reasoning about the content that could-have-beengenerated, which is impossible to achieve using existing data-sets and small online experiments. Our analysis advocates increased awareness to content providers and their incentives, a behavior that rarely exists these days in RSs.2 Our contribution We explore the ecosystem using the following game-theoretic model, and use the blogging terminology to simplify the discussion. We consider a set of players (i.e., content providers), each selects a topic to write from a predefined set of topics (e.g., economics, sports, medieval movies, etc.). Each player has a quality w.r.t. each topic, quantifying relevance and attractiveness of that author’s content if she writes on that topic, and a conversion rate. Given a selection of topics (namely, a strategy profile), the RS serves users who consume content. All queries concerned with a topic are modeled as the demand for that topic. The utility every player obtains is the sum of displays her content receives (affected by the demand for topics and the operating RS) multiplied by the conversion rate. The game-theoretic model we adopt in service is suggested by Ben Basat et al. [4] and is well-justified by later research [8, 42]. Technically, we deal with the question of reaching a stable point—a point in which none of the players can deviate from her selected topic and increase her utility. We are interested in the convergence time and the welfare of the system in these stable points. We first explore the decentralized approach: Better-response learning dynamics (see, e.g., [16, 21]), in which players asynchronously deviate to improve their utility (an arbitrary player to an arbitrary strategy, as long as she improves upon her current utility). We show that every better-response dynamic converges, thereby extending prior work [8]. Through a careful recursive construction, we show a negative result: The convergence time can be exponential in the number of topics. Long convergence time suggests a different approach. We consider the scenario in which the RS could act centrally, and support the process of matching players with topics. We devise an algorithm that computes an equilibrium fast (roughly squared in the input size). To solve this computational challenge, which is a mixture of matching and load-balancing, we propose a novel combinatorial optimization problem that is of independent interest. Conceptually, we offer a qualitative grounding for the advantages of coordination and intervention3 in the content provider dynamics. Our analysis relies on the assumption of complete knowledge of all model parameters, in particular the qualities. While unrealistic in practice, we expect that incomplete information will only exacerbate the problems we address. The main takeaway from this paper is that RSs are not self-regulated markets, and as much as suggesting authors topics to write on can lead to a significant increase in the system’s stability. We discuss some practical ways of reaching this goal in Section 5. Related work Strikingly, content provider welfare and their fair treatment were only suggested very recently in the Recommendation Systems and Information Retrieval communities [12, 14, 18, 35, 40, 46]. All of these works do not model the incentives of content providers explicitly, and consequently cannot offer a what-if analysis like ours. 
2There are some exceptions, e.g., YouTube instructing providers how to find their niche [1]. However, these are sporadic, primitive, and certainly do not enjoy recent technological advancements like collaborative filtering. 3We do not say that the RS should dictate to authors what to write; instead, it should suggest to each author profitable topics that she can write on competently to increase her utility. Our model is similar to those employed in several recent papers [4, 5, 7, 8, 30]. Ben-Porat et al. [8] study a model that is a special case of ours, and show that every learning dynamic converges. Our Theorem 1 recovers and extends their convergence results. Moreover, unlike the present work, they do not address convergence time, social welfare, and centralized equilibrium computation. Other works [5, 7, 30] aim to design recommendation mechanisms that mitigate strategic behavior and lead to long-term welfare. On the negative side, their mechanisms might knowingly recommend inferior content to some consumers. We see their work as parallel to ours, as in this work we focus on the prevailing recommendation approach—recommending the best-fitting content. We suggest that a centralized approach, in which the RS orchestrates the player-topic matching, can significantly improve the time until the system reaches stability (in the form of equilibrium). Furthermore, we envision that our approach can also lead to high social welfare, as we discuss in Section 5. More broadly, an ever-growing body of research deals with fairness considerations in Machine Learning [15, 17, 36, 38, 45]. In the context of RSs, a related line of research suggests fairer ranking methods to improve the overall performance [11, 26, 43]. For example, Yao and Huang [43] propose metrics mitigating discrimination in collaborative-filtering methods that arise from learning from historical data. Despite not always being explicit, the ultimate goal of fairness imposition is to achieve long-term welfare [28]. Our paper and analysis share a similar flavor: To achieve high stability via faster convergence, RSs should coordinate the process of content selection. 2 Model We consider the following recommendation ecosystem, where for concreteness we continue with the blog authors4 example. There is a set of authors P, each owning a blog. We further assume that each blog is concerned with a single topic, from a predefined topic set T. We assume P and T are finite, and denote |P| = P and |T| = T. The strategy space of each player is thus T; she selects the topic she writes on. A pure strategy profile is a tuple a = (a1, . . . , aP) of topic selections, where aj is the topic selected by author j. For every author j and topic k, there is a quality that quantifies the relevance and attractiveness of j's blog if she picks topic k. We denote by Q the quality matrix, where Q ∈ [0, 1]P×T. The RS serves users who consume content. We do not distinguish individual consumers, but rather model the need for content as a demand for each topic. A demand distribution D over the topics T is publicly known, where we use D(k) to denote the demand mass for topic k ∈ T. W.l.o.g., we assume that D(1) ≥ D(2) ≥ . . . ≥ D(T). The recommendation function R matches demand with available blogs. Given the demand for topic k, a strategy profile a, and the quality Q of the blogs for the selected topics in a, the recommendation function R recommends content, possibly in a randomized manner.
It is well-known that content consumers pay most of their attention to highly ranked content [13, 22, 24, 27]; therefore, we assume for simplicity that R recommends a single piece of content. For ease of notation, we denote by Rj(Q, k, a) the probability that author j is ranked first under the distribution R(Q, k, a) (or rather, the probability that author j's content is ranked first). While blog readers admire high-quality recommended blogs, blog authors care for payoffs. As described in Section 1, authors draw monetary rewards from attracting readers in various ways. We model this payoff abstractly using a conversion matrix C, C ∈ [0, 1]P×T. We assume that every blog reader grants Cj,k monetary units to author j when she writes on topic k. For example, if author j only cares for exposure, namely the number of impressions her blog receives, then Cj,k = 1 for every k ∈ T. Alternatively, if author j cares for the engagement of readers in her blog, then the conversion Cj,k should be somewhat correlated with the quality Qj,k. We will return to these two special cases later on, in Subsection 3.1. The utility of author j under a strategy profile a is given by $U_j(a) \stackrel{\text{def}}{=} \sum_{k \in T} \mathbb{1}_{a_j = k} \cdot D(k) \cdot R_j(Q, k, a) \cdot C_{j,k}$. (1) Overall, we represent a game as a tuple 〈P, T, D, Q, C, R, U〉, where P is the set of authors, T is the set of topics, D is the demand for topics, Q and C are the quality and conversion matrices, R is the recommendation function, and U is the utility function. Recommending the Highest Quality Content In this paper, we focus on the RS that recommends blogs of the highest quality, breaking ties randomly. Such a behavior is intuitive and well-justified in the literature [3, 10, 23, 39]. More formally, let Bk(a) denote the highest quality of a blog written on topic k under the profile a, i.e., $B_k(a) \stackrel{\text{def}}{=} \max_{j \in P}\{\mathbb{1}_{a_j = k} \cdot Q_{j,k}\}$. Furthermore, let Hk(a) denote the set of authors whose documents have the highest quality among those who write on topic k under a, $H_k(a) \stackrel{\text{def}}{=} \{j \in P \mid \mathbb{1}_{a_j = k} \cdot Q_{j,k} = B_k(a)\}$. 4We use authors and players interchangeably. The recommendation function Rtop is therefore defined as $R^{\text{top}}_j(Q, k, a) \stackrel{\text{def}}{=} \begin{cases} \frac{1}{|H_k(a)|} & j \in H_k(a) \\ 0 & \text{otherwise} \end{cases}$. Consequently, we can reformulate the utility function from Equation (1) in the following succinct form,5 $U_j(a) \stackrel{\text{def}}{=} \sum_{k \in T} \mathbb{1}_{a_j = k} \cdot \frac{D(k)}{|H_k(a)|} \cdot C_{j,k}$. (2) From here on, since Rtop and U are fully determined by the rest of the objects, we omit them from the game representation; hence, we represent every game by the more concise tuple 〈P, T, D, Q, C〉. Quality-Conversion Assumption Throughout the paper, we make the following Assumption 1 about the relation between quality and conversion. Assumption 1. For every topic k ∈ T and every two authors j1, j2 ∈ P, $Q_{j_1,k} \ge Q_{j_2,k} \Rightarrow C_{j_1,k} \ge C_{j_2,k}$. Intuitively, Assumption 1 implies that quality and conversion are correlated given the topic. For every topic k, if authors j1 and j2 write on topic k and j1's content has a weakly better quality, then j1's content also has a weakly better conversion. This assumption plays a crucial role in our analysis; we discuss relaxing it in Section 5. Solution Concepts The social welfare of the readers is the demand-weighted average quality of the recommended content. Formally, given a strategy profile a, $SW(a) \stackrel{\text{def}}{=} \sum_{k \in T} D(k) \sum_{j \in P} R_j(Q, k, a) \, Q_{j,k}$. (3) As the recommendation function Rtop always recommends the highest quality content, we can have the following more succinct representation of social welfare, $SW(a) = \sum_{k \in T} D(k) B_k(a)$. However, social welfare maximization does not concern author utility.
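To make Equations (1)-(3) concrete, the following minimal Python sketch computes author utilities under Rtop and the social welfare of a profile. It is an illustration we add here rather than the authors' code; the names (utilities, social_welfare) and the use of exact fractions are our own choices, and the matrices are the ones from the worked example given below.

from fractions import Fraction as F

# Illustrative sketch; topics and authors are 0-indexed.
D = [F(3, 5), F(2, 5)]                      # demand per topic
Q = [[F(1), F(1, 3)], [F(2, 3), F(1, 3)]]   # quality matrix Q
C = [[F(1, 3), F(1)], [F(1, 5), F(1)]]      # conversion matrix C

def utilities(a):
    """Author utilities under Rtop, Equation (2); a[j] is the topic chosen by author j."""
    P, T = len(Q), len(D)
    u = [F(0)] * P
    for k in range(T):
        writers = [j for j in range(P) if a[j] == k]
        if not writers:
            continue
        best = max(Q[j][k] for j in writers)
        H = [j for j in writers if Q[j][k] == best]   # the tie set H_k(a)
        for j in H:
            u[j] += D[k] * C[j][k] / len(H)
    return u

def social_welfare(a):
    """SW(a) = sum over topics k of D(k) * B_k(a), the succinct form above."""
    return sum(D[k] * max([Q[j][k] for j in range(len(Q)) if a[j] == k], default=F(0))
               for k in range(len(D)))

print(utilities([0, 0]), utilities([0, 1]))   # [1/5, 0] and [1/5, 2/5]
print(social_welfare([0, 1]))                 # 11/15, roughly 0.73

Checking all unilateral deviations with utilities in the same way is one direct way to verify that a given profile is stable in the sense defined next.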
Authors may be willing to deviate from the socially optimal profile if such a deviation is beneficial in terms of utility. Consequently, we seek stable solutions, as captured by the property of pure Nash equilibrium (hereinafter PNE). We say that a strategy profile a is a PNE if for every author j and topic k, Uj(a) ≥ Uj(a−j, k), where a−j is the tuple obtained by deleting the j-th entry of a. It is worth noting that while a mixed Nash equilibrium is guaranteed to exist in finite games, a PNE is not guaranteed to exist in general games. However, as we show later on, it always exists in our class of games. Example To clarify our notation and setting, we provide the following example. Consider a game with two players (P = 2), two topics (T = 2) and the demand distribution D such that D(1) = 3/5, D(2) = 2/5. Let the quality and conversion matrices be $Q = \begin{pmatrix} 1 & 1/3 \\ 2/3 & 1/3 \end{pmatrix}, \quad C = \begin{pmatrix} 1/3 & 1 \\ 1/5 & 1 \end{pmatrix}$. Consider the strategy profile (a1, a2) = (1, 1). Author 1 is more competent than author 2 on topic 1, since $Q_{1,1} = 1 > Q_{2,1} = 2/3$; thus, the utility of author 1 under the profile (1, 1) is $U_1(1,1) = D(1) \cdot R^{\text{top}}_1(Q, 1, (1,1)) \cdot C_{1,1} = \frac{3}{5} \cdot 1 \cdot \frac{1}{3} = \frac{1}{5}$. On the other hand, author 2 gets $U_2(1,1) = \frac{3}{5} \cdot 0 \cdot \frac{1}{5} = 0$. Author 2 has a beneficial deviation: Under the profile (1, 2), her utility is $U_2(1,2) = \frac{2}{5} \cdot 1 \cdot 1 = \frac{2}{5}$, while the utility of author 1 remains the same, $U_1(1,2) = \frac{1}{5}$. For the strategy profile (2, 2), both authors have the same quality; thus, $R^{\text{top}}_1(Q, 2, (2,2)) = R^{\text{top}}_2(Q, 2, (2,2)) = \frac{1}{2}$. As for the utilities, $U_1(2,2) = U_2(2,2) = \frac{2}{5} \cdot \frac{1}{2} \cdot 1 = \frac{1}{5}$. Overall, we see that both (1, 2) and (2, 2) are PNEs, since the authors do not have beneficial deviations. However, the social welfare of these PNEs is different: $SW(1,2) = \frac{3}{5} \cdot 1 + \frac{2}{5} \cdot \frac{1}{3} \approx 0.73$, yet $SW(2,2) = \frac{3}{5} \cdot 0 + \frac{2}{5} \cdot \frac{1}{3} \approx 0.13$. 5In case no author writes on topic k under a, R does not make any recommendation. As reflected in the utility function U through the indicator 1aj=k, readers associated with a non-selected topic k do not contribute to any author's utility. 3 Decentralized Approach In this section, we consider the prevailing, decentralized approach. Starting from an arbitrary profile, authors interact asynchronously, each improving her utility in every time step. Such dynamics are widely known in the Game Theory literature as better-response dynamics (hereinafter, BRDs). Studying BRDs is a robust approach for assuring the environment reaches a stable point, while making minimal assumptions about the information available to the players. Two central questions about BRDs in games are a) whether any BRD converges; and b) what is the convergence rate. We show that the answer to the first question is in the affirmative. For the second question, we show through an intricate combinatorial construction a result of negative flavor: The convergence rate can be exponential in the number of topics T. 3.1 Better-Response Dynamic Convergence Before we go on, we define BRDs formally. Given a strategy profile a, we say that a′j ∈ T is a better response of author j w.r.t. a if Uj(a−j, a′j) > Uj(a). A BRD is a sequence of profiles (a1, a2, . . . ), where at every step i + 1 exactly one author better-responds to ai, i.e., there exists an author j(i) such that $a^{i+1} = (a^i_{-j(i)}, a^{i+1}_{j(i)})$ and $U_{j(i)}(a^{i+1}) > U_{j(i)}(a^i)$. A BRD can start from any arbitrary profile, and include improvements of any arbitrary author at any arbitrary step (assuming she has a better response in that time step). If a BRD a1, . . . , al
converges, namely no player can better respond to al, then by definition al is a PNE. Our goal is to show that every BRD of any game in our class of games converges. If there exists an infinite BRD, then it must contain cycles, as the number of different strategy profiles is finite. Equivalently, nonexistence of improvement cycles implies that any BRD will converge to a PNE [32]. General techniques for showing BRD convergence in games are rare, and are typically based on coming up with a potential function [6, 21, 34] or a natural lexicographic order [2, 19]. However, as already established by prior work [8, Proposition 1], our class of games does not admit an exact potential function; and a lexicographic order does not seem to arise naturally. Ben-Porat et al. [8] prove BRD convergence for two sub-classes of games: Games where C is identically 1, and games with C = Q. Interestingly, they prove BRD convergence for each sub-class separately using different arguments. We extend their technique to deal with any conversion matrix C that satisfies Assumption 1. Theorem 1. If a game G satisfies Assumption 1, then every BRD in G converges to a PNE. 3.2 Rate of Convergence We now move on to the second question proposed in the beginning of the section, which deals with convergence rate. The convergence rate is the worst-case length of any BRD. Recall that a BRD can start from a PNE and thus converge after one step, and hence the worst-case approach we offer here is justified. Our next theorem lower bounds the worst-case convergence rate by an exponential factor in the number of topics T. This result is illuminating as it shows that in the worst case, although convergence is guaranteed, it may not be reachable in feasible time. Theorem 2. Consider P ≥ 1 and T ≥ 2. There exist games satisfying Assumption 1 with |P| = P and |T| = T, in which there are BRDs with at least $\left(\frac{T-2}{P} + 1\right)^P$ steps. Proof sketch of Theorem 2. The proof relies on a recursive construction. We construct a game and an improvement path with at least the length specified in the theorem. To balance rigor and intuition, we present here a special case of our general construction and defer the formal proof to the appendix. Consider the game with P = 3, T = 5, D(k) = 1/5 for every k ∈ T and $Q = C = \begin{pmatrix} c & 2c & 3c & 4c & 5c \\ c & 9c & 8c & 7c & 6c \\ c & 10c & 11c & 12c & 13c \end{pmatrix}$ for c = 1/(PT). The first column of the matrix, which is associated with the quality of topic 1, is identical for all authors. The snake-shape path in the matrix is always greater than the value c in the first column, and is monotonically increasing (top-down). The immediate implications are a) odd players improve their quality when deviating to a topic with a greater index, while even players improve their quality when deviating to a topic with a smaller index (which is not topic 1); and b) every player is more competent than all the players that precede her on every topic but topic 1. The initial profile is a0 = (1, 1, . . . , 1). We construct the BRD that appears in Figure 1.6 It comprises three types of steps: Purple, green and yellow. In purple steps, author 1 deviates to a topic with a higher index. In yellow steps, author 2 deviates to the topic selected by author 1 (e.g., in a5) or author 3 deviates to the topic selected by author 2 (e.g., in a19). Green steps always follow yellow steps.
In green steps, the author whose topic was selected in the previous step by an author with a higher index deviates back to topic 1 (e.g., author 1 in a6 after author 2 selects topic 5 in a5, or author 2 in a20 after author 3 selects topic 2 in a19). In steps a1 − a4, only author 1 deviates (purple steps). This is also the recursive path in a game with author 1 solely (disregarding the entries of the other players). Then, in a5, author 2 deviates to topic 5 (yellow). Since author 2 is more competent than author 1 in every topic (excluding topic 1), author 1's utility equals zero. Then, author 1 deviates back to topic 1 in a6 (green). This goes on until step a18—author 1 improves, author 2 ties, and author 1 returns to topic 1. Steps a1 − a18 comprise the recursive path for two players. Until step a18, author 3 did not move. Then, in step a19, author 3 deviates to topic 2. Author 3 is more competent than author 2, so in a20 author 2 returns to topic 1. In steps a21 − a32 authors 1 and 2 follow the same logic as before, but they overlook topic 2 (since author 3, who is more competent than both of them, selects it). In steps a33 − a34 author 3 deviates to topic 3, and then author 2 returns to topic 1. In steps a35 − a41 authors 1 and 2 follow the same logic as before, but they overlook both topics 2 and 3. The path continues similarly until we reach the profile a48. Notice that the latter profile is not an equilibrium, but we end the path at this point for the sake of the analysis. This path is indeed exponential—for every step author i makes, for 1 < i ≤ 3, author i − 1 makes at least twice as many (in fact, much more than that; see the formal proof for more details). Theorem 2 implies that there are BRDs of length $\left(\frac{T-2}{P} + 1\right)^P$, which is O(exp(T)) for large enough P. Furthermore, if the number of topics T and the authors P are in the same order of magnitude, then the length is also exponential in P. 4 Centralized Approach - Equilibrium Computation To remedy the long convergence time, in this section we propose an efficient algorithm for PNE computation. The algorithm is a matching application and relies on a novel graph-theoretic notion. To motivate the matching perspective, we reconsider social welfare (see Equation (3)) and neglect strategic aspects momentarily. We can find a social welfare-maximizing profile using the following matching reduction. We construct a bipartite graph, one side being the authors and the other side being the topics. The weight on each edge (j, k) is Qj,kD(k), the quality author j has on topic k times the user mass on that topic. Notice that every author can only select one strategy (topic). Furthermore, for the purpose of social welfare maximization, it suffices to consider candidate profiles in which every topic is selected by at most one author. Consequently, a maximum weighted matching of this graph corresponds to the social welfare maximizer. 6An accessible version of Figure 1 appears in the appendix. By using, e.g., the Hungarian algorithm, the problem of finding a social welfare-maximizing profile can be solved in O(max{P, T}^3). However, equilibrium profiles and social welfare-maximizing profiles typically do not coincide (see the celebrated work on the Price of Anarchy [33]). The maximum matching that we proposed in the previous paragraph is susceptible to beneficial deviations; therefore, it is not stable in the equilibrium sense.7 There exist many variants of stable matching in the literature, but virtually none fit the equilibrium stability we seek.
In particular, the deferred acceptance algorithm [20] cannot be used since several players can select the same topic and thus the matching is not one-to-one. If we create several copies of the same topic (a common practice for the deferred acceptance algorithm), high-quality players would block low-quality authors matched to it (unlike several medical students with varying qualities that are matched to the same hospital). In the remainder of this section, we propose a sequential matching technique to compute a PNE. Our approach contributes to the matching literature and is based on the definition of saturated sets. Due to our extensive use of graph theory in what follows, we introduce a few notational conventions. We denote a graph by G = (V, E). For a subset W ⊂ V, the induced sub-graph G[W] is the graph whose vertex set is W and whose edge set consists of all the edges in E that have both endpoints in W. We use the standard notation NG(W) to denote the neighbors of the vertices W in the graph G. A matching M in G is a set of pairwise non-adjacent edges. For our application, we care mostly about bipartite graphs; thus, we denote V = X ∪ Y. An X-saturating matching is a matching that covers every node in X. Hall's Marriage Theorem, a fundamental result in combinatorics, gives necessary and sufficient conditions for the existence of such a matching. The theorem asserts that there exists an X-saturating matching in G if and only if for every subset W ⊆ X, |W| ≤ |NG(W)|. In other words, the size of every subset in X does not exceed the number of its neighbors. The essential property we use in the PNE algorithm is saturated sets. Definition 1 (Saturated set). Let G = (X ∪ Y, E) be a finite bipartite graph. A set W ⊆ X is called saturated if |W| = |NG(W)|. Of course, this definition naturally extends beyond bipartite graphs. Furthermore, if for every other saturated set W′ it holds that |W| ≥ |W′|, we say that W is a maximum saturated set. Despite its striking simplicity, to the best of our knowledge, this notion of saturated sets has not received enough attention in the CS literature (under this name or a different one), and is therefore interesting in its own right. 4.1 PNE Computation We now turn to discuss the intuition behind Algorithm 1, which computes a PNE efficiently. By and large, Algorithm 1 can be seen as a best-response dynamic. It starts from a null profile (assigning all players to a fictitious topic with zero user mass) and then determines the order of best-responding. The input is the entire game description,8 as described in Section 2. In Lines 1-5 we initialize the variables we use. T̃ is the set of unmatched topics; Lk is a lower bound on the load on topic k, namely the ongoing number of players we matched to it; X, Y and E are the elements of the bipartite graph G (Y stores the set of unmatched players); and a∗ is a non-valid, empty profile that we construct as the algorithm advances. The for loop in Line 6 goes as follows. We first find the set of highest-quality players for every topic k, denoted Ak (Line 7). These players can block the others from playing k because their quality is higher, and thus we prioritize them in our sequential process. Afterwards, we set k∗ to be the most profitable topic under the current partial matching (Line 8). That is, for every topic k, we consider the set of most profitable players w.r.t. k and their potential utility if matched to k.
The term D(k)Cj,k/(Lk + 1) upper bounds the utility of every player j ∈ Ak (see Equation (2)), in case we match Lk + 1 or more players to topic k (we might increase the load Lk in later iterations). We subsequently update Lk∗ in Line 9. We now move to the bipartite graph G. In Line 10, we create a new node x, which is the Lk∗-copy of topic k∗ (we store this information about x). We add x to the left side of G, X (Line 11), and connect x to the players of Ak∗ in Y (Line 12). 7There are exceptions, of course. In degenerate cases where Q has no ties, the game is essentially a stable marriage problem. 8For the sake of illustration, we assume P ≤ T. If that is not the case, we can add enough topics with zero mass D to achieve it. Noticeably, a PNE in the new game can be converted to a PNE in the original game.
Algorithm 1: PNE computation
Input: A game description 〈P, T, D, Q, C〉
Output: A PNE a
1 T̃ ← T // available topics
2 ∀k ∈ T : Lk ← 0 // loads on topics
3 X ← ∅, Y ← P, E ← ∅
4 G ← (X ∪ Y, E)
5 a∗ ← (∅)^P // empty profile of length P
6 for t = 1 . . . P
7   ∀k ∈ T̃ : Ak ← argmax_{j ∈ Y} Qj,k
8   set k∗ ∈ argmax_{k ∈ T̃} { max_{j ∈ Ak} D(k)Cj,k / (Lk + 1) }
9   Lk∗ ← Lk∗ + 1
10  create a new node x associated with topic k∗
11  X.add(x)
12  E.add({(x, j) : j ∈ Ak∗})
13  let W ⊆ X be the maximum saturated set in G
14  if W ≠ ∅ then
15    find a maximum matching M in G[W ∪ Y]
16    ∀j ∈ NG(W) : a∗j ← Topic(M(j))
17    Y.remove(NG(W))
18    X.remove(W)
19    T̃.remove(Topics(W)) // see Line 10
20 return a∗
Line 13 is the crux of the algorithm: We find a subset W of X that is the maximum saturated set. We will justify our use of the article the in the previous sentence later on, as well as describe the implications of having a saturated set in this dynamically constructed graph. If W is empty, we continue to the next iteration of the for loop. But if W is non-empty, we enter the if block in Line 14. We find a maximum matching M in the induced graph G[W ∪ Y]. We will later prove that G[W ∪ Y] satisfies Hall's marriage condition, and thus |M| = |W| = |NG(W)|. In Line 16 we use M to set the strategies of the players in NG(W): Every player j ∈ NG(W) is matched to the topic associated with the node M(j) ∈ W. In Lines 17-19 we remove the newly matched players NG(W) from Y, the topic copies W from X, and the topics associated with W from the set of unmatched topics T̃. We repeat this process until all players are matched. Let us explain the implications of having a non-empty saturated set in G. Focus on the first time a non-empty saturated set W was found in Line 13, and denote the iteration index by t′. The set W is composed of nodes associated with several topics (association in the sense we explain about Line 10); each one may have several copies. Importantly, every time we add a node x to X with an associated topic k, we increase the load Lk; hence, in iteration t′, Lk accurately reflects the number of copies of k in X. Furthermore, k was selected for the Lk + 1 time, suggesting that it is more profitable than other topics. With a few more arguments, we show that all Lk copies of k must be in W. Crucially, if we match the players in NG(W) they cannot have beneficial deviations. We formalize this intuition via Theorem 3. Theorem 3. If the input game G satisfies Assumption 1, then Algorithm 1 returns a PNE of G. We now move on to discuss its run-time. The only two lines that require a non-trivial discussion are Lines 13 and 15.
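Before turning to the run-time discussion, the following Python sketch illustrates the bookkeeping of Algorithm 1. It is our own illustration and not the authors' implementation: the names (compute_pne, max_saturated_set, bipartite_matching) are ours, the maximum saturated set is found here by brute force purely for clarity (Lemma 1 below gives the efficient routine), and we assume P ≤ T as in footnote 8, with topics and players 0-indexed.

from itertools import combinations

def bipartite_matching(adj):
    """Augmenting-path matching; adj maps each left node to a set of right nodes.
    Returns a dict mapping matched right nodes to left nodes."""
    match = {}
    def augment(x, seen):
        for y in adj[x]:
            if y not in seen:
                seen.add(y)
                if y not in match or augment(match[y], seen):
                    match[y] = x
                    return True
        return False
    for x in adj:
        augment(x, set())
    return match

def max_saturated_set(adj):
    """Brute-force maximum saturated set W (|W| = |N(W)|); Lemma 1 replaces this with an efficient method."""
    nodes, best = list(adj), ()
    for r in range(1, len(nodes) + 1):
        for W in combinations(nodes, r):
            if len(set().union(*(adj[x] for x in W))) == len(W):
                best = W
    return set(best)

def compute_pne(D, Q, C):
    P, T = len(Q), len(D)
    load, available, Y = [0] * T, set(range(T)), set(range(P))
    adj, a = {}, [None] * P            # adj: topic-copy node (k, copy) -> highest-quality candidates
    for _ in range(P):                                     # Lines 6-19
        A = {k: {j for j in Y if Q[j][k] == max(Q[i][k] for i in Y)} for k in available}
        k_star = max(available,                            # Line 8
                     key=lambda k: max(D[k] * C[j][k] / (load[k] + 1) for j in A[k]))
        load[k_star] += 1                                  # Line 9
        adj[(k_star, load[k_star])] = set(A[k_star])       # Lines 10-12
        W = max_saturated_set(adj)                         # Line 13
        if W:                                              # Lines 14-19
            match = bipartite_matching({x: adj[x] for x in W})
            for player, node in match.items():
                a[player] = node[0]                        # assign the topic of the matched copy
            Y -= set(match)
            for x in W:
                available.discard(x[0])
                del adj[x]
            for x in adj:
                adj[x] -= set(match)
    return a

# The two-player example from Section 2 (0-indexed topics); the returned profile is a PNE of that game.
print(compute_pne([0.6, 0.4], [[1, 1/3], [2/3, 1/3]], [[1/3, 1], [1/5, 1]]))

The sketch matches several players at once whenever a saturated set appears, exactly as in Lines 14-19; the expensive part is Line 13, which is the subject of the run-time discussion that follows.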
As we describe in Lemma 1 below, finding the maximum saturated set includes finding a maximum matching, and thus we need not recompute it in Line 15. We therefore focus on the complexity of finding the saturated set in G solely. The following Lemma 1 shows that as long as a bipartite graph G satisfies Hall's marriage condition, we can find the maximum saturated set W efficiently. Because of the independent interest in this combinatorial problem, we state it in its full generality. Lemma 1. Let G = (V, E) be a bipartite graph that satisfies Hall's marriage condition. There exists an algorithm that finds the maximum saturated set of G in time $O(\sqrt{|V|}\,|E|)$. The proof of this basic lemma appears in the appendix. The sketch of the proof is as follows. Let G = (X ∪ Y, E) be a graph satisfying Hall's marriage condition. We first compute a maximum matching M of G. Since Hall's marriage condition holds, we are guaranteed that M is an X-saturating matching. We then devise a technique to find whether a node x ∈ X participates in at least one saturated set. We show that nodes participating in saturated sets are reachable from the set of unmatched nodes in Y via a variation of alternating paths, and thus can be identified quickly. By the end of this procedure, we have a set X′ ⊆ X such that every x ∈ X′ participates in at least one saturated set. The last part is showing that under the marriage condition, every union of saturated sets is a saturated set. As a result, we conclude that X′ is the maximum saturated set. Using Lemma 1, we can bound the run-time of Algorithm 1. Corollary 1. Algorithm 1 can be implemented in running time of $O(P^{2.5} \cdot T)$. 5 Discussion With great effort, companies like Amazon turned the “you bought that, would you also be interested in this” feature into a significant source of revenue. In this paper, we suggest that a “you wrote this, would you also be interested in writing on that?” feature could be revolutionary as well—contributing to better social welfare of content consumers, as well as the utility of content providers. Such a policy could be implemented in practice by a direct recommendation to providers, or by a more moderate action like nudging content providers to experiment with a different set of contents. To support our vision of content provider coordination in RSs even further, we show in the appendix that the ratio between the social welfare of the best equilibrium and the worst equilibrium is unbounded. Indeed, such a coordination between content providers may lead to a significant lift in social welfare. More broadly, we note that maximizing the overall welfare of RSs with multiple stakeholders is an important challenge that goes way beyond this paper (see, e.g., [12]). From a technical perspective, this work suggests a variety of open questions. First, the challenge of computing the social welfare-maximizing equilibrium is still open. Second, as we show in the appendix, if Assumption 1 does not hold, BRDs may not converge. A recent work [5] demonstrates that using randomization in the recommendation function R in a non-trivial manner can break this divergence. Finding a reasonable way to do so (in terms of social welfare) in our model is left as an open question. Third, implementing cooperation using other solution concepts like no-regret learning and correlated or coarse-correlated equilibria is also a natural extension of this work.
Lastly, our modeling neglects many real-world aspects of RSs: Providers join and leave the system, demand for content changes over time, providers create content of several types, etc. Future work with more complex modeling is required for implementing our ideas in real-world applications. Broader Impact It is well-understood in the Machine Learning community that economic aspects must be incorporated into machine learning algorithms. In that view, estimating content satisfaction in RSs is not enough. As we argue in this paper, content providers depend on the system for some part of their income; thus, their better treatment makes them the main beneficiaries of the stance this paper offers. We envision that RSs that coordinate their content providers (and hence the content available for recommendation) will suffer from fewer fluctuations, be deemed fairer by all their stakeholders, and will enjoy long-term consumer engagement. Acknowledgements We thank the anonymous reviewers for providing helpful and insightful comments. The work of O. Ben-Porat is partially funded by a PhD fellowship from JPMorgan Chase & Co. The work of M. Tennenholtz is funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement no. 740435).
1. What is the focus and contribution of the paper on recommendation systems? 2. What are the strengths of the proposed approach, particularly in terms of its novelty and theoretical analysis? 3. What are the weaknesses of the paper, especially regarding the chosen algorithm and its limitations? 4. Do you have any concerns or questions regarding the paper's assumptions or conclusions? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper formalizes a game-theoretic scenario between content providers (players) of the recommendation system. With the solution concept of pure Nash equilibrium, the decentralized algorithm, better-response dynamics, is shown to converge, possibly in exponential time, while a centralized algorithm is designed to find a PNE efficiently. The convergence result indicates the existence of a PNE in such a game, which is important, and the centralized algorithm mainly leverages perfect matchings to find the corresponding PNE. Strengths The game formalization in this paper is quite novel and characterizes most properties of practical recommendation systems. Most proofs, as far as I checked, are correct and well-written. The topic is relevant to the NeurIPS community, as the authors are trying to justify the significance of the recommendation system. Weaknesses The choice of better-response dynamics (BRD) is questionable. BRD is not a popular algorithm for finding NE, being neither efficient nor guaranteed to converge in most games. I believe it is mainly used to prove the existence of PNE through the non-existence of improvement cycles in BRD, which is well-written although relatively not novel. Meanwhile, BRD should also not be considered a practical algorithm that may be adopted by each content provider alone, since BRD still requires much information about the conversion matrix and demand function. In other words, the exponential running time of BRD is expected, and somewhat meaningless.
NIPS
Title Content Provider Dynamics and Coordination in Recommendation Ecosystems Abstract Recommendation Systems like YouTube are vibrant ecosystems with two types of users: Content consumers (those who watch videos) and content providers (those who create videos). While the computational task of recommending relevant content is largely solved, designing a system that guarantees high social welfare for all stakeholders is still in its infancy. In this work, we investigate the dynamics of content creation using a game-theoretic lens. Employing a stylized model that was recently suggested by other works, we show that the dynamics will always converge to a pure Nash Equilibrium (PNE), but the convergence rate can be exponential. We complement the analysis by proposing an efficient PNE computation algorithm via a combinatorial optimization problem that is of independent interest. 1 Introduction Recommendation systems (RSs hereinafter) play a major role in our life nowadays. Many modern RSs, like YouTube, Medium, or Spotify, recommend content created by others and go far beyond recommendations. They are vibrant ecosystems with multiple stakeholders and are responsible for the well-being of all of them. For example, in the online publishing platform Medium, the platform should be profitable; suggest relevant content to the content consumers (readers); and support the content providers (authors). In light of this ecosystem approach, research on RSs has shifted from determining consumers’ taste (e.g., the Netflix Prize challenge [9, 25]) to other aspects like fairness, ethics, and long-term welfare [5, 29, 31, 35, 37, 40–42, 44]. Understanding content providers and their utility1 is still in its infancy. Content providers produce a constant supply of content (e.g., articles in Medium, videos on YouTube), and are hence indispensable. Successful content providers rely on the RS for some part of their income: Advertising, affiliated marketing, sponsorship, and merchandise; thus, unsatisfied content providers might decide to provide a different type of content or even abandon the RS. To illustrate, a content provider who is unsatisfied with her exposure, which is heavily correlated with her income from the RS, can switch to another type of content or seek another niche. Such downstream effects are detrimental to content consumer satisfaction because they change the available content the RS can recommend. The synergy between content providers and consumers is thus fragile, and solidifying one side solidifies the other. 1We use the term utility to address the well-being of the content providers, and social welfare for the well-being of the content consumers. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. In this paper, we investigate the dynamics of RSs using a stylized model in which content providers are strategic. Content providers obtain utility from displays of their content and are willing to change the content they offer to increase their utility. These fluctuations change not only the utility of the providers but also the social welfare of the consumers, defined as the quality of their proposed content. We show that the provider dynamics always converges to a stable point (namely, a pure Nash equilibrium), but the convergence time may be long. This observation suggests a more centralized approach, in which the RS coordinates the providers, and leads to fast convergence. 
While our model is stylized, we believe it offers insights into more general, real-world RSs. The game-theoretic modeling allows counterfactual reasoning about the content that could-have-beengenerated, which is impossible to achieve using existing data-sets and small online experiments. Our analysis advocates increased awareness to content providers and their incentives, a behavior that rarely exists these days in RSs.2 Our contribution We explore the ecosystem using the following game-theoretic model, and use the blogging terminology to simplify the discussion. We consider a set of players (i.e., content providers), each selects a topic to write from a predefined set of topics (e.g., economics, sports, medieval movies, etc.). Each player has a quality w.r.t. each topic, quantifying relevance and attractiveness of that author’s content if she writes on that topic, and a conversion rate. Given a selection of topics (namely, a strategy profile), the RS serves users who consume content. All queries concerned with a topic are modeled as the demand for that topic. The utility every player obtains is the sum of displays her content receives (affected by the demand for topics and the operating RS) multiplied by the conversion rate. The game-theoretic model we adopt in service is suggested by Ben Basat et al. [4] and is well-justified by later research [8, 42]. Technically, we deal with the question of reaching a stable point—a point in which none of the players can deviate from her selected topic and increase her utility. We are interested in the convergence time and the welfare of the system in these stable points. We first explore the decentralized approach: Better-response learning dynamics (see, e.g., [16, 21]), in which players asynchronously deviate to improve their utility (an arbitrary player to an arbitrary strategy, as long as she improves upon her current utility). We show that every better-response dynamic converges, thereby extending prior work [8]. Through a careful recursive construction, we show a negative result: The convergence time can be exponential in the number of topics. Long convergence time suggests a different approach. We consider the scenario in which the RS could act centrally, and support the process of matching players with topics. We devise an algorithm that computes an equilibrium fast (roughly squared in the input size). To solve this computational challenge, which is a mixture of matching and load-balancing, we propose a novel combinatorial optimization problem that is of independent interest. Conceptually, we offer a qualitative grounding for the advantages of coordination and intervention3 in the content provider dynamics. Our analysis relies on the assumption of complete knowledge of all model parameters, in particular the qualities. While unrealistic in practice, we expect that incomplete information will only exacerbate the problems we address. The main takeaway from this paper is that RSs are not self-regulated markets, and as much as suggesting authors topics to write on can lead to a significant increase in the system’s stability. We discuss some practical ways of reaching this goal in Section 5. Related work Strikingly, content provider welfare and their fair treatment were only suggested very recently in the Recommendation Systems and Information Retrieval communities [12, 14, 18, 35, 40, 46]. All of these works do not model the incentives of content providers explicitly, and consequently cannot offer a what-if analysis like ours. 
Our model is similar to those employed in several recent papers [4, 5, 7, 8, 30]. Ben-Porat et al. [8] study a model that is a special case of ours, and show that every learning dynamic converges. Our Theorem 1 recovers and extends their convergence results. Moreover, unlike this work, they do not address convergence time, social welfare, and centralized equilibrium computation. Other works [5, 7, 30] aim to design recommendation mechanisms that mitigate strategic behavior and 2There are some exceptions, e.g., YouTube instructing providers how to find their niche [1]. However, these are sporadic, primitive, and certainly do not enjoy recent technological advancements like collaborative filtering. 3We do not say that the RSs should dictate authors what to write; instead, it should suggest to each author profitable topics that he/she can write on competently to increase her utility. lead to long-term welfare. On the negative side, their mechanisms might knowingly recommend inferior content to some consumers. We see their work as parallel to ours, as in this work we focus on the prevailing recommendation approach—recommending the best-fitting content. We suggest that a centralized approach, in which the RS orchestrates the player-topic matching, can significantly improve the time until the system reaches stability (in the form of equilibrium). Furthermore, we envision that our approach can also lead to high social welfare, as we discuss in Section 5. More broadly, an ever-growing body of research deals with fairness considerations in Machine Learning [15, 17, 36, 38, 45]. In the context of RSs, a related line of research suggests fairer ranking methods to improve the overall performance [11, 26, 43]. For example, Yao and Huang [43] propose metrics mitigating discrimination in collaborative-filtering methods that arise from learning from historical data. Despite not always being explicit, the ultimate goal of fairness imposition is to achieve long-term welfare [28]. Our paper and analysis share a similar flavor: To achieve high stability via faster convergence, RSs should coordinate the process of content selection. 2 Model We consider the following recommendation ecosystem, where for concreteness we continue with the blog authors4 example. There is a set of authors P , each owning a blog. We further assume that each blog is concerned with a single topic, from a predefined topic set T . We assume P and T are finite, and denote |P| = P and |T | = T . The strategy space of each player is thus T ; she selects the topic she writes on. A pure strategy profile is a tuple a = (a1, . . . aP ) of topic selections, where aj is the topic selected by author j. For every author j and topic k, there is a quality that quantifies the relevance and attractiveness of j’s blog if she picks the topic k. We denote by Q the quality matrix, for Q ∈ [0, 1]P×T . The RS serves users who consume content. We do not distinguish individual consumers, but rather model the need for content as a demand for each topic. A demand distribution D over the topics T is publicly known, where we use D(k) to denote the demand mass for topic k ∈ T . W.l.o.g., we assume that D(1) ≥ D(2) ≥ . . . ≥ D(m). The recommendation functionR matches demand with available blogs. Given the demand for topic k, a strategy profile a, and the qualityQ of the blogs for the selected topics in a, the recommendation function R recommends content, possibly in a randomized manner. 
It is well-known that content consumers pay most of their attention to highly ranked content [13, 22, 24, 27]; therefore, we assume for simplicity thatR recommends one content solely. For ease of notation, we denoteRj(Q, k,a) as the probability that author j is ranked first under the distribution R(Q, k,a) (or rather, author j’s content is ranked first). While blog readers admire high-quality recommended blogs, blog authors care for payoffs. As described in Section 1, authors draw monetary rewards from attracting readers in various ways. We model this payoff abstractly using a conversion matrix C, C ∈ [0, 1]P×T . We assume that every blog reader grants Cj,k monetary units to author j when she writes on topic k. For example, if author j only cares for exposure, namely the number of impressions her blog receives, then Cj,k = 1 for every k ∈ T . Alternatively, if author j cares for the engagement of readers in her blog, then the conversion Cj,k should be somewhat correlated with the qualityQj,k. We will return to these two special cases later on, in Subsection 3.1. The utility of author j under a strategy profile a is given by Uj(a) def = ∑ k∈T 1aj=k · D(k) · Rj(Q, k,a) · Cj,k. (1) Overall, we represent a game as a tuple 〈P, T ,D,Q, C,R,U〉, where P is the authors, T is the topics, D is the demand for topics,Q and C are the quality and conversion matrices,R is the recommendation function, and U is the utility function. Recommending the Highest Quality Content In this paper, we focus on the RS that recommends blogs of the highest quality, breaking ties randomly. Such a behavior is intuitive and well-justified in the literature [3, 10, 23, 39]. More formally, let Bk(a) denote the highest quality of a blog written on topic k under the profile a, i.e., Bk(a) def = maxj∈P{1aj=k · Qj,k}. Furthermore, let Hk(a) denote the set of authors whose documents have the highest quality among those who write on topic k under 4We use authors and players interchangeably. a, Hk(a) def = {j ∈ P | 1aj=k · Qj,k = Bk(a)}. The recommendation function Rtop is therefore defined as Rtopj (Q, k,a) def = { 1 |Hk(a)| j ∈ Hk(a) 0 otherwise . Consequently, we can reformulate the utility function from Equation (1) in the following succinct form,5 Uj(a) def = ∑ k∈T 1aj=k · D(k) |Hk(a)| · Cj,k. (2) From here on, sinceRtop and U are fully determined by the rest of the objects, we omit them from the game representation; hence, we represent every game by the more concise tuple 〈P, T ,D,Q, C〉. Quality-Conversion Assumption Throughout the paper, we make the following Assumption 1 about the relation between quality and conversion. Assumption 1. For every topic k ∈ T and every two authors j1, j2 ∈ P , Qj1,k ≥ Qj2,k ⇒ Cj1,k ≥ Cj2,k. Intuitively, Assumption 1 implies that quality and conversion are correlated given the topic. For every topic k, if authors j1 and j2 write on topic k and j1’s content has a weakly better quality, then j1’s content has also a weakly better conversion. This assumption plays a crucial role in our analysis; we discuss relaxing it in Section 5. Solution Concepts The social welfare of the readers is the average weighted quality. Formally, given a strategy profile a, SW (a) def = ∑ k∈T D(k) ∑ j∈P Rj(Q, k,a)Qj,k. (3) As the recommendation functionRtop always recommends the highest quality content, we can have the following more succinct representation of social welfare, SW (a) = ∑ k∈T D(k)Bk(a). However, social welfare maximization does not concern author utility. 
Authors may be willing to deviate from the socially optimal profile if such a deviation is beneficial in terms of utility. Consequently, we seek stable solutions, as captured by the property of pure Nash equilibrium (hereinafter PNE). We say that a strategy profile a is a PNE if for every author j and topic k, Uj(a) ≥ Uj(a−j , k), where a−j is the tuple obtained by deleting the j’s entry of a. It is worth noting that while mixed Nash equilibrium is guaranteed to exist in finite games, a PNE generally does not exist in games. However, as we show later on, it always exists in our class of games. Example To clarify our notation and setting, we provide the following example. Consider a game with two players (P = 2), two topics (T = 2) and the demand distribution D such that D(1) = 3/5,D(2) = 2/5. Let the quality and conversion matrices be Q = ( 1 1/3 2/3 1/3 ) , C = ( 1/3 1 1/5 1 ) . Consider the strategy profile (a1, a2) = (1, 1). Author 1 is more competent that author 2 on topic 1, since Q1,1 = 1 > Q2,1 = 23 ; thus, the utility of author 1 under the profile (1, 1) is U1(1, 1) = D(1) · Rtop1 (Q, 1, (1, 1)) · C1,1 = 35 · 1 · 1 3 = 1 5 . On the other hand, author 2 gets U2(1, 1) = 35 · 0 · 1 5 = 0. Author 2 has a beneficial deviation: Under the profile (1, 2), her utility is U2(1, 2) = 25 ·1 ·1 = 2 5 , while the utility of author 1 remains the same, U1(1, 2) = 1 5 . For the strategy profile (2, 2), both authors have the same quality; thus,Rtop1 (Q, 2, (2, 2)) = R top 2 (Q, 2, (2, 2)) = 12 . As for the utilities, U1(2, 2) = U2(2, 2) = 25 · 1 2 · 1 = 1 5 . Overall, we see that both (1, 2) and (2, 2) are PNEs, since the authors do not have beneficial deviations. However, the social welfare of these PNEs is different: SW (1, 2) = 35 · 1 + 2 5 · 1 3 ≈ 0.73, yet SW (2, 2) = 3 5 · 0 + 2 5 1 3 ≈ 0.13. 5In case no author writes on topic k under a,R do not make any recommendation. As reflected in the utility function U through the indicator 1aj=k, readers associated with a non-selected topic k do not contribute to any author’s utility. 3 Decentralized Approach In this section, we consider the prevailing, decentralized approach. Starting from an arbitrary profile, authors interact asynchronously, each improving her utility in every time step. Such dynamics is widely-known in the Game Theory literature as better-response dynamics (hereinafter, BRDs). Studying BRDs is a robust approach for assuring the environment reaches a stable point, while making minimal assumption on the information of the players. Two central questions about BRDs in games are a) whether any BRD converges; and b) what is the convergence rate. We show that the answer to the first question is in the affirmative. For the second question, we show through an intricate combinatorial construction a result of negative flavor: The convergence rate can be exponential in the number of topics T . 3.1 Better-Response Dynamic Convergence Before we go on, we define BRDs formally. Given a strategy profile a, we say that a′j ∈ T is a better response of author j w.r.t. a if Uj(a−j , a′j) > Uj(a). A BRD is a sequence of profiles (a1,a2, . . . ), where at every step i + 1 exactly one author better-responds to ai, i.e., there exists an author j(i) such that ai+1 = (ai−j(i), a i+1 j(i)) and Uj(i)(a i+1) > Uj(i)(ai). A BRD can start from any arbitrary profile, and include improvements of any arbitrary author at any arbitrary step (assuming she has a better response in that time step). If a BRD a1, . . . 
,al converges, namely no player can better respond to al, then by definition al is a PNE. Our goal is to show that every BRD of any game in our class of games converges. If there exists an infinite BRD, then it must contain cycles as the number of different strategy profiles is finite. Equivalently, nonexistence of improvement cycles suggests that any BRD will converge to a PNE [32]. General techniques for showing BRD convergence in games are rare, and are typically based on coming up with a potential function [6, 21, 34] or a natural lexicographic order [2, 19]. However, as already established by prior work [8, Proposition 1], our class of game does not fit into the category of an exact potential function; and a lexicographic order does not seem to arise naturally. Ben-Porat et al. [8] prove BRD convergence for two sub-classes of games: Games where C is identically 1, and games with C = Q. Interestingly, they prove BRD convergence for each sub-class separately using different arguments. We extend their technique to deal with any conversion matrix C that satisfies Assumption 1. Theorem 1. If a game G satisfies Assumption 1, then every BRD in G converges to a PNE. 3.2 Rate of Convergence We now move on to the second question proposed in the beginning of the section, which deals with convergence rate. The convergence rate is the worst-case length of any BRD. Recall that a BRD can start from a PNE and thus converge after one step, and hence the worst-case approach we offer here is justified. Our next theorem lower bounds the worst case convergence rate by an exponential factor in the number of topics T . This result is illuminating as it shows that in the worst case, although convergence is guaranteed, it may not be reachable in feasible time. Theorem 2. Consider P ≥ 1 and T ≥ 2. There exist games satisfying Assumption 1 with |P| = P and |T | = T , in which there are BRDs with at least ( T−2 P + 1 )P steps. Proof sketch of Theorem 2. The proof relies on a recursive construction. We construct a game and an improvement path with at least the length specified in the theorem. To balance rigor and intuition, we present here a special case of our general construction and defer the formal proof to the appendix. Consider the game with P = 3, T = 5, D(k) = 15 for every k ∈ T and Q = C = c 2c 3c 4c 5c c 9c 8c 7c 6c c 10c 11c 12c 13c for c = 1PT . The first column of the matrix, which is associated with the quality of topic 1, is identical for all authors. The snake-shape path in the matrix is always greater than the value c in the first column, and is monotonically increasing (top-down). The immediate implications are a) odd players improve their quality when deviating to a topic with a greater index, while even players improve their quality when deviating to a topic with a smaller index (which is not topic 1); and b) every player is more competent than all the players that precede her on every topic but topic 1. The initial profile is a0 = (1, 1, . . . , 1). We construct the BRD that appears in Figure 1.6 It comprises three types of steps: Purple, green and yellow. In purple steps, author 1 deviates to a topic with a higher index. In yellow steps, author 2 deviates to the topic selected by author 1 (e.g., in a5) or author 3 deviates to the topic selected by author 2 (e.g., in a19). Green steps always follow yellow steps. 
In green steps, the author whose topic was selected in the previous step by an author with a higher index deviates back to topic 1 (e.g., author 1 in a6 after author 2 selects topic 5 in a5, or author 2 in a20 after author 3 selects topic 2 in a19). In steps a1 − a4, only author 1 deviates (purple steps). This is also the recursive path in a game with author 1 solely (disregarding the entries of the other players). Then, in a5, author 2 deviates to topic 5 (yellow). Since author 2 is more competent than author 1 in every topic (excluding topic 1), author 1’s utility equals zero. Then, author 1 deviates to back topic 1 in a6 (green). This goes on until step a18—author 1 improves, author 2 ties, and author 1 returns to topic 1. Steps a1 − a18 comprise the recursive path for two players. Until step a18, author 3 did not move. Then, in step a19, author 3 deviates to topic 2. Author 3 is more competent than author 2, so in a20 author 2 returns to topic 1. In steps a21 − a32 authors 1 and 2 follow the same logic as before, but they overlook topic 2 (since author 3, who is more competent than both of them, selects it). In steps a33 − a34 author 3 deviates to topic 3, and then author 2 returns to topic 1. In steps a35 − a41 authors 1 and 2 follow the same logic as before, but they overlook both topics 2 and 3. The path continues similarly until we reach the profile a48. Notice that the latter profile is not an equilibrium, but we end the path at this point for the sake of the analysis. This path is indeed exponential—for every step author i makes, for 1 < i ≤ 3, author i − 1 makes at least twice as many (in fact, much more than that; see the formal proof for more details). Theorem 2 implies that there are BRDs of length ( T−2 P + 1 )P , which is O(exp(T )) for large enough P . Furthermore, if the number of topics T and the authors P are in the same order of magnitude, then length is also exponential in P . 4 Centralized Approach - Equilibrium Computation To remedy the long convergence rate, in this section we propose an efficient algorithm for PNE computation. The algorithm is a matching application and relies on a novel graph-theoretic notion. To motivate the matching perspective, we reconsider social welfare (see Equation (3)) and neglect strategic aspects momentarily. We can find a social welfare-maximizing profile using the following matching reduction. We construct a bipartite graph, one side being the authors and the other side being the topics. The weight on each edge (j, k) is Qj,kD(k), the quality author j has on topic k times the user mass on that topic. Notice that every author can only select one strategy (topic). Furthermore, for the purpose of social welfare maximization, it suffices to consider candidate profiles in which every topic is selected by at most one author. Consequently, a maximum weighted matching 6An accessible version of Figure 1 appears in the appendix. of this graph corresponds to the social welfare maximizer. By using, e.g., the Hungarian algorithm, the problem of finding a social welfare-maximizing profile can be solved in O(max{P, T}3). However, equilibrium profiles and social welfare-maximizing profiles typically do not coincide (see the celebrated work on the Price of Anarchy [33]). The maximum matching that we proposed in the previous paragraph is susceptible to beneficial devotions; therefore, it is not stable in the equilibrium sense.7 There exist many variants of stable matching in the literature, but virtually none fit the equilibrium stability we seek. 
In particular, the deferred acceptance algorithm [20] cannot be used since several players can select the same topic and thus the matching is not one-to-one. If we create several copies of the same topic (a common practice for the deferred acceptance algorithm), high-quality players would block low-quality authors matched to it (unlike several medical students with varying qualities that are matched to the same hospital). In the remainder of this section, we propose a sequential matching technique to compute a PNE. Our approach contributes to the matching literature and is based on the definition of saturated sets. Due to our extensive use of graph theory in what follows, we introduce a few notational conventions. We denote a graph by G = (V,E). For a subset W ⊂ V , the induced sub-graph G[W ] is the graph whose vertex set is W and whose edge set consists of all the edges in E that have both endpoints in W . We use the standard notation NG(W ) to denote the neighbors of the vertices W in the graph G. A matching M in G is a set of pairwise non-adjacent edges. For our application, we care mostly about bipartite graphs; thus, we denote V = X ∪ Y . An X-saturating matching is a matching that covers every node in X . Hall’s Marriage Theorem, a fundamental result in combinatorics, gives necessary and sufficient conditions for the existence of perfect matching. The theorem asserts that there exists an X-saturated matching in G if and only if for every subset W ⊆ X , |W | ≤ |NG(W )|. In other words, the size of every subset in X does not exceed the number of its neighbors. The essential property we use in the PNE algorithm is saturated sets. Definition 1 (Saturated set). Let G = (X ∪ Y,E) be a finite bipartite graph. A set W ⊆ X is called saturated if |W | = |NG(W )|. Of course, this definition naturally extends beyond bipartite graphs. Furthermore, if for every other saturated set W ′ it holds that |W | ≥ |W ′|, we say that W is a maximum saturated set. Despite its striking simplicity, to the best of our knowledge, this notion of saturated sets did not receive enough attention in the CS literature (under this name or a different one), and is therefore interesting in its own right. 4.1 PNE Computation We now turn to discuss the intuition behind Algorithm 1, which computes a PNE efficiently. By and large, Algorithm 1 can be seen as a best-response dynamic. It starts from a null profile (assigning all players to a factitious topic with zero user mass) and then determines the order of best-responding. The input is the entire game description,8 as described in Section 2. In Lines 1-5 we initialize the variables we use. T̃ is the set of unmatched topics; Lk is a lower bound on the load on topic k, namely the ongoing number of players we matched to it; X,Y and E are the elements of the bipartite graph G (Y stores the set of unmatched players); and a∗ is a non-valid, empty profile that we construct as the algorithm advances. The for loop in Line 6 goes as follows. We first find the set of highest-quality players for every topic k, denoted Ak (Line 7). These players can block the others from playing k because their quality is higher, and thus we prioritize them in our sequential process. Afterwards, we set k∗ to be the most profitable topic under the current partial matching (Line 8). That is, for every topic k, we consider the set of most profitable players w.r.t. k and their potential utility if matched to k. 
The term D(k) · C_{j,k}/(L_k + 1) upper bounds the utility of every player j ∈ A_k (see Equation (2)), in case we match L_k + 1 or more players to topic k (we might increase the load L_k in later iterations). We subsequently update L_{k∗} in Line 9. We now move to the bipartite graph G. In Line 10, we create a new node x, which is the L_{k∗}-copy of topic k∗ (we store this information about x). We add x to the left side of G, X (Line 11), and connect x to the players of A_{k∗} in Y (Line 12).
7 There are exceptions, of course. In degenerate cases where Q has no ties, the game is essentially a stable marriage problem.
8 For the sake of illustration, we assume P ≤ T. If that is not the case, we can add enough topics with zero mass D to achieve it. Noticeably, a PNE in the new game can be converted to a PNE in the original game.

Algorithm 1: PNE computation
Input: A game description 〈P, T, D, Q, C〉
Output: A PNE a
1   T̃ ← T                               // available topics
2   ∀k ∈ T : L_k ← 0                     // loads on topics
3   X ← ∅, Y ← P, E ← ∅
4   G ← (X ∪ Y, E)
5   a∗ ← (∅)^m                           // empty profile
6   for t = 1 . . . P
7       ∀k ∈ T̃ : A_k ← argmax_{j∈Y} Q_{j,k}
8       set k∗ ∈ argmax_{k∈T̃} { max_{j∈A_k} D(k) · C_{j,k} / (L_k + 1) }
9       L_{k∗} ← L_{k∗} + 1
10      create a new node x associated with topic k∗
11      X.add(x)
12      E.add({(x, j) : j ∈ A_{k∗}})
13      let W ⊆ X be the maximum saturated set in G
14      if W ≠ ∅ then
15          find a maximum matching M in G[W ∪ Y]
16          ∀j ∈ N_G(W) : a∗_j ← Topic(M(j))
17          Y.remove(N_G(W))
18          X.remove(W)
19          T̃.remove(Topics(W))          // see Line 10
20  return a∗

Line 13 is the crux of the algorithm: we find a subset W of X that is the maximum saturated set. We will justify our use of the article "the" in the previous sentence later on, as well as describe the implications of having a saturated set in this dynamically constructed graph. If W is empty, we continue to the next iteration of the for loop. But if W is non-empty, we enter the if block in Line 14. We find a maximum matching M in the induced graph G[W ∪ Y]. We will later prove that G[W ∪ Y] satisfies Hall’s marriage condition, and thus |M| = |W| = |N_G(W)|. In Line 16 we use M to set the strategies of the players in N_G(W): every player j ∈ N_G(W) is matched to the topic associated with the node M(j) ∈ W. In Lines 17-19 we remove the newly matched players N_G(W) from Y, the topic copies W from X, and the topics associated with W from the set of unmatched topics T̃. We repeat this process until all players are matched. Let us explain the implications of having a non-empty saturated set in G. Focus on the first time a non-empty saturated set W is found in Line 13, and denote the iteration index by t′. The set W is composed of nodes associated with several topics (association in the sense we explain about Line 10); each topic may have several copies. Importantly, every time we add a node x to X with an associated topic k, we increase the load L_k; hence, in iteration t′, L_k accurately reflects the number of copies of k in X. Furthermore, k was selected for the (L_k + 1)-th time, suggesting that it is more profitable than other topics. With a few more arguments, we show that all L_k copies of k must be in W. Crucially, if we match the players in N_G(W), they cannot have beneficial deviations. We formalize this intuition via Theorem 3.

Theorem 3. If the input game G satisfies Assumption 1, then Algorithm 1 returns a PNE of G.

We now move on to discuss its run-time. The only two lines that require a non-trivial discussion are Lines 13 and 15.
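To make the control flow above concrete, here is a hedged Python transcription of Algorithm 1. The data-structure choices are illustrative only; the maximum saturated set of Definition 1 is found by brute force over subsets of X (standing in for the efficient routine of Lemma 1 below), and the maximum matching of Line 15 uses Kuhn's augmenting-path algorithm.

```python
# Didactic sketch of Algorithm 1 (not the authors' code). players/topics are lists of
# ids; D[k] is the user mass, Q[j][k] the quality, and C[j][k] the quantity in Eq. (2).
from itertools import combinations

def compute_pne(players, topics, D, Q, C):
    avail = set(topics)                     # T~: topics not yet matched
    L = {k: 0 for k in topics}              # loads
    X, Y, E = [], set(players), set()       # topic-copy nodes, unmatched players, edges
    topic_of, a = {}, {}                    # copy node -> topic, profile under construction

    def N(W):                               # N_G(W): players adjacent to nodes in W
        return {j for (x, j) in E if x in W}

    def max_saturated_set():                # brute-force stand-in for Lemma 1
        for size in range(len(X), 0, -1):
            for W in map(set, combinations(X, size)):
                if len(N(W)) == len(W):
                    return W
        return set()

    def max_matching(W):                    # Kuhn's augmenting paths on G[W ∪ Y]
        match = {}                          # player -> copy node
        def augment(x, seen):
            for (xx, j) in E:
                if xx != x or j in seen:
                    continue
                seen.add(j)
                if j not in match or augment(match[j], seen):
                    match[j] = x
                    return True
            return False
        for x in W:
            augment(x, set())
        return match

    for _ in range(len(players)):           # Lines 6-12
        A = {k: [j for j in Y if Q[j][k] == max(Q[i][k] for i in Y)] for k in avail}
        k_star = max(avail, key=lambda k: max(D[k] * C[j][k] for j in A[k]) / (L[k] + 1))
        L[k_star] += 1
        node = ("copy", k_star, L[k_star])  # the L_{k*}-th copy of topic k*
        topic_of[node] = k_star
        X.append(node)
        E |= {(node, j) for j in A[k_star]}

        W = max_saturated_set()             # Line 13
        if W:                               # Lines 14-19
            matched = N(W)
            for j, x in max_matching(W).items():
                a[j] = topic_of[x]
            Y -= matched
            X = [x for x in X if x not in W]
            E = {(x, j) for (x, j) in E if x not in W and j not in matched}
            avail -= {topic_of[x] for x in W}
    return a                                # under Assumption 1, a PNE (Theorem 3)
```

The brute-force subroutine is exponential in |X|; the remainder of this section shows how Lemma 1 brings the whole computation down to polynomial time.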
As we describe in Lemma 1 below, finding the maximum saturated set includes finding a maximum matching, and thus we need not recompute it in Line 15. We therefore focus solely on the complexity of finding the saturated set in G. The following Lemma 1 shows that as long as a bipartite graph G satisfies Hall’s marriage condition, we can find the maximum saturated set W efficiently. Because of the independent interest in this combinatorial problem, we state it in its full generality.

Lemma 1. Let G = (V, E) be a bipartite graph that satisfies Hall’s marriage condition. There exists an algorithm that finds the maximum saturated set of G in time O(√|V| · |E|).

The proof of this basic lemma appears in the appendix. The sketch of the proof is as follows. Let G = (X ∪ Y, E) be a graph satisfying Hall’s marriage condition. We first compute a maximum matching M of G. Since Hall’s marriage condition holds, we are guaranteed that M is an X-saturating matching. We then devise a technique to determine whether a node x ∈ X participates in at least one saturated set. We show that nodes participating in saturated sets are reachable from the set of unmatched nodes in Y via a variation of alternating paths, and thus can be identified quickly. By the end of this procedure, we have a set X′ ⊆ X such that every x ∈ X′ participates in at least one saturated set. The last part is showing that under the marriage condition, every union of saturated sets is a saturated set. As a result, we conclude that X′ is the maximum saturated set. Using Lemma 1, we can bound the run-time of Algorithm 1.

Corollary 1. Algorithm 1 can be implemented in running time O(P^2.5 · T).

5 Discussion

With great effort, companies like Amazon turned the “you bought that, would you also be interested in this” feature into a significant source of revenue. In this paper, we suggest that a “you wrote this, would you also be interested in writing on that?” feature could be revolutionary as well—contributing to better social welfare of content consumers, as well as the utility of content providers. Such a policy could be implemented in practice by a direct recommendation to providers, or by a more moderate action like nudging content providers to experiment with a different set of contents. To support our vision of content provider coordination in RSs even further, we show in the appendix that the ratio between the social welfare of the best equilibrium and the worst equilibrium is unbounded. Indeed, such coordination among content providers may lead to a significant lift in social welfare. More broadly, we note that maximizing the overall welfare of RSs with multiple stakeholders is an important challenge that goes way beyond this paper (see, e.g., [12]). From a technical perspective, this work suggests a variety of open questions. First, the challenge of computing the social welfare-maximizing equilibrium is still open. Second, as we show in the appendix, if Assumption 1 does not hold, BRDs may not converge. A recent work [5] demonstrates that using randomization in the recommendation function R in a non-trivial manner can break this divergence. Finding a reasonable way to do so (in terms of social welfare) in our model is left as an open question. Third, implementing cooperation using other solution concepts, like no-regret learning and correlated or coarse-correlated equilibria, is another natural extension of this work.
Lastly, our modeling neglects many real-world aspects of RSs: providers join and leave the system, demand for content changes over time, providers create content of several types, etc. Future work with more complex modeling is required for implementing our ideas in real-world applications.

Broader Impact

It is well understood in the Machine Learning community that economic aspects must be incorporated into machine learning algorithms. In that view, estimating content satisfaction in RSs is not enough. As we argue in this paper, content providers depend on the system for some part of their income; thus, treating them better makes them the main beneficiaries of the stance this paper offers. We envision that RSs that coordinate their content providers (and hence the content available for recommendation) will suffer from fewer fluctuations, be deemed fairer by all their stakeholders, and enjoy long-term consumer engagement.

Acknowledgements

We thank the anonymous reviewers for providing helpful and insightful comments. The work of O. Ben-Porat is partially funded by a PhD fellowship from JPMorgan Chase & Co. The work of M. Tennenholtz is funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement no. 740435).
1. What is the main contribution of the paper regarding game-theoretical dynamics? 2. What are the strengths of the proposed model and results, particularly in terms of convergence and computation? 3. Do you have any concerns or questions about the paper's assumptions and its impact on social welfare? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The work studies the game-theoretical dynamics of "ecosystems" such as blogging platforms. On the one hand we have the content producers, that is, bloggers or filmmakers; on the other hand, the content consumers, that is, the final users, who demand different amounts of content on different topics. On top of this there is a recommender system/platform that suggests contents on the basis of their topics and quality. The only actual players are the producers, who try to maximize their profit (for example, the number of readers of their blog). To this end, each producer can adapt the topic of its content, for example by opening a blog on a more profitable subject. But social welfare, measured as the average quality of the blogs, is at stake too. This complex scenario is formalized as a non-cooperative game. The paper gives two main results. First, under natural assumptions, if each producer sequentially makes a move to increase profit, then the game converges to a pure Nash equilibrium, although this may require a large number of steps. Second, a pure Nash equilibrium for the system can be computed reasonably fast, in polynomial time; this is done in an interesting way by (loosely speaking) computing max-weight perfect matchings. Strengths The subject is of interest to the NeurIPS community (although perhaps not exactly central). The work gives a very clear message with two/three main results. The results are nontrivial (the model is complex and not easy to analyse). The fact that the system converges to a pure Nash equilibrium is interesting (it is not obvious that a pure equilibrium strategy exists, unlike for mixed strategies, i.e., distributions). The lower bound construction for the convergence time is neat and insightful. The computation of the equilibrium in polynomial time is interesting as well. The work is well presented and pleasant to read. At a higher level, the work sheds light on the interplay between profit maximization (the bloggers' point of view), social welfare maximization (the readers' point of view), and system design (the recommender system's point of view). This is different from the traditional recommender system problem, that is, suggesting relevant content to users, and different techniques are used. One drawback is that the model is complex, but this is not a fault of the paper. Rather, it is necessary in order to model both the content provider side (bloggers) and the user side (readers) while giving a role to the recommender system/platform. A second drawback is that the social welfare "disappears" in the paper, unless I am missing something. That is, the two results of the paper are oblivious to the social welfare of the system. They are a function of the content producers' utilities but not of the average quality of the content. The only relationship is in the (natural) assumption that higher quality carries higher profit, everything else being equal. I find this a bit weird given that the paper brings as motivation the study of long-term social welfare in these dynamic systems. Weaknesses One drawback is that the model is complex, but this is not a fault of the paper. Rather, it is necessary in order to model both the content provider side (bloggers) and the user side (readers) while giving a role to the recommender system/platform. A second drawback is that the social welfare "disappears" in the paper, unless I am missing something. That is, the two results of the paper are oblivious to the social welfare of the system.
They are a function of the content producers' utilities but not of the average quality of the content. The only relationship is in the (natural) assumption that higher quality carries higher profit everything else being equal. I find this a bit weird given that the paper brings as motivation the study of long-term social welfare in these dynamic systems.
NIPS
Title The NetHack Learning Environment Abstract Progress in Reinforcement Learning (RL) algorithms goes hand-in-hand with the development of challenging environments that test the limits of current methods. While existing RL environments are either sufficiently complex or based on fast simulation, they are rarely both. Here, we present the NetHack Learning Environment (NLE), a scalable, procedurally generated, stochastic, rich, and challenging environment for RL research based on the popular single-player terminal-based roguelike game, NetHack. We argue that NetHack is sufficiently complex to drive long-term research on problems such as exploration, planning, skill acquisition, and language-conditioned RL, while dramatically reducing the computational resources required to gather a large amount of experience. We compare NLE and its task suite to existing alternatives, and discuss why it is an ideal medium for testing the robustness and systematic generalization of RL agents. We demonstrate empirical success for early stages of the game using a distributed Deep RL baseline and Random Network Distillation exploration, alongside qualitative analysis of various agents trained in the environment. NLE is open source and available at https://github.com/facebookresearch/nle. 1 Introduction Recent advances in (Deep) Reinforcement Learning (RL) have been driven by the development of novel simulation environments, such as the Arcade Learning Environment (ALE) [9], StarCraft [64, 69], BabyAI [16], Obstacle Tower [38], Minecraft [37, 29, 35], and Procgen Benchmark [18]. These environments introduced new challenges for state-of-the-art methods and demonstrated failure modes of existing RL approaches. For example, Montezuma’s Revenge highlighted that methods performing well on other ALE tasks were not able to successfully learn in this sparse-reward environment. This sparked a long line of research on novel methods for exploration [e.g., 8, 66, 53] and learning from demonstrations [e.g., 31, 62, 6]. However, this progress has limits: the current best approach on this environment, Go-Explore [22, 23], overfits to specific properties of ALE and Montezuma’s Revenge. While Go-Explore is an impressive solution for Montezuma’s Revenge, it exploits the determinism of environment transitions, allowing it to memorize sequences of actions that lead to previously visited states from which the agent can continue to explore. We are interested in surpassing the limits of deterministic or repetitive settings and seek a simulation environment that is complex and modular enough to test various open research challenges such as exploration, planning, skill acquisition, memory, and transfer. However, since state-of-the-art RL approaches still require millions or even billions of samples, simulation environments need to be fast to allow RL agents to perform many interactions per second. Among attempts to surpass the limits of deterministic or repetitive settings, procedurally generated environments are a promising path towards testing systematic generalization of RL methods [e.g., 39, 38, 60, 18]. Here, the game state is generated programmatically in every episode, making it extremely unlikely for an agent to visit the exact same state more than once during its lifetime. Existing procedurally generated RL environments are either costly to run [e.g., 69, 37, 38] or are, as we argue, of limited complexity [e.g., 17, 19, 7].
To address these issues, we present the NetHack Learning Environment (NLE), a procedurally generated environment that strikes a balance between complexity and speed. It is a fully-featured Gym environment [11] around the popular open-source terminal-based single-player turn-based “dungeon-crawler” game, NetHack [43]. Aside from procedurally generated content, NetHack is an attractive research platform as it contains hundreds of enemy and object types, it has complex and stochastic environment dynamics, and there is a clearly defined goal (descend the dungeon, retrieve an amulet, and ascend). Furthermore, NetHack is difficult to master for human players, who often rely on external knowledge to learn about strategies and NetHack’s complex dynamics and secrets.1 Thus, in addition to a guide book [58, 59] released with NetHack itself, many extensive community-created documents exist, outlining various strategies for the game [e.g., 50, 25]. In summary, we make the following core contributions: (i) we present NLE, a fast but complex and feature-rich Gym environment for RL research built around the popular terminal-based game, NetHack, (ii) we release an initial suite of tasks in the environment and demonstrate that novel tasks can be added easily, (iii) we introduce baseline models trained using IMPALA [24] and Random Network Distillation (RND) [13], a popular exploration bonus, resulting in agents that learn diverse policies for early stages of NetHack, and (iv) we demonstrate the benefit of NetHack’s symbolic observation space by presenting in-depth qualitative analyses of trained agents. 2 NetHack: a Frontier for Reinforcement Learning Research In traditional so-called roguelike games (e.g., Rogue, Hack, NetHack, and Dungeon Crawl Stone Soup) the player acts turn-by-turn in a procedurally generated grid-world environment, with game dynamics strongly focused on exploration, resource management, and continuous discovery of entities and game mechanics [IRDC, 2008]. These games are designed to provide a steep learning curve and a constant level of challenge and surprise to the player. They are generally extremely difficult to win even once, let alone to master, i.e., win regularly and multiple times in a row. As advocated by [39, 38, 18], procedurally generated environments are a promising direction for testing systematic generalization of RL agents. We argue that such environments need to be both sufficiently complex and fast to run to serve as a challenging long-term research testbed. In Section 2.1, we illustrate that NetHack contains many desirable properties, making it an excellent candidate for driving long-term research in RL. We introduce NLE in Section 2.2, an initial suite of tasks in Section 2.3, an evaluation protocol for measuring progress towards solving NetHack in Section 2.4, as well as baseline models in Section 2.5. 2.1 NetHack NetHack is one of the oldest and most popular roguelikes, originally released in 1987 as a successor to Hack, an open-source implementation of the original Rogue game. At the beginning of the game, the player takes the role of a hero who is placed into a dungeon and tasked with finding the Amulet of Yendor to offer it to an in-game deity. To do so, the player has to descend to the bottom of over 50 procedurally generated levels to retrieve the amulet and then subsequently escape the dungeon, unlocking five extremely challenging final levels (the four Elemental Planes and the Astral Plane). 
Many aspects of the game are procedurally generated and follow stochastic dynamics. For example, the overall structure of the dungeon is somewhat linear, but the exact location of places of interest (e.g., the Oracle) and the structure of branching sub-dungeons (e.g., the Gnomish Mines) are determined randomly. The procedurally generated content of each level makes it highly unlikely that a player will ever experience the exact same situation more than once. This provides a fundamental challenge to learning systems and a degree of complexity that enables us to more effectively evaluate an agent’s ability to generalize. It also disqualifies current state-of-the-art exploration methods such as Go-Explore [22, 23] that are based on a goal-conditioned policy to navigate to previously visited states. Moreover, states in NetHack are composed of hundreds of possible symbols, resulting in an enormous combinatorial observation space.2 It is an open question how to best project this symbolic space to a low-dimensional representation appropriate for methods like Go-Explore. For example, Ecoffet et al.’s heuristic of downsampling images of states to measure their similarity, used as an exploration bonus, will likely not work for large symbolic and procedurally generated environments. NetHack provides further variation through different hero roles (e.g., monk, valkyrie, wizard, tourist), races (human, elf, dwarf, gnome, orc), and random starting inventories (see Appendix A for details). Consequently, NetHack poses unique challenges to the research community and requires novel ways to determine state similarity and, likely, entirely new exploration frameworks.
1 “NetHack is largely based on discovering secrets and tricks during gameplay. It can take years for one to become well-versed in them, and even experienced players routinely discover new ones.” [26]
Figure 2: The hero (@) has to cross water (}) to get past Medusa (@, out of the hero’s line of sight) down the staircase (>) to the next level.
To provide a glimpse into the complexity of NetHack’s environment dynamics, we closely follow the educational example given by “Mr Wendal” on YouTube.3 At a specific point in the game, the hero has to get past Medusa’s Island (see Figure 2 for an example). Medusa’s Island is surrounded by water } that the agent has to cross. Water can rust and corrode the hero’s metallic weapons ) and armor [. Applying a can of grease ( prevents rusting and corrosion. Furthermore, going into water will make a hero’s inventory wet, erasing scrolls ? and spellbooks + that they carry. Applying a can of grease to a bag or sack ( will make it a waterproof container for items. But the sea can also contain a kraken ; that can grab and drown the hero, leading to instant death. Applying a can of grease to a hero’s armor prevents the kraken from grabbing the hero. However, a cursed can of grease will grease the hero’s hands instead and they will drop their weapon and rings. One can use a towel ( to wipe off grease. To reach Medusa @, the hero can alternatively use magic to freeze the water and turn it into walkable ice .. Wearing snow boots [ will help the hero not to slip. When Medusa is in the hero’s line of sight, her gaze will petrify and instantly kill—the hero should use a towel to cover their eyes to fight Medusa, or even apply a mirror ( to petrify her with her own gaze. There are many other entities a hero must learn to face, many of which appear rarely even across multiple games, especially the most powerful monsters. These entities are often compositional, for example a monster might be a wolf d, which shares some characteristics with other in-game canines such as coyotes d or hell hounds d.
To help a player learn, NetHack provides in-game messages describing many of the hero’s interactions (see the top of Figure 1).4 Learning to capture these interesting and somewhat realistic albeit abstract dynamics poses challenges for multi-modal and language-conditioned RL [46]. NetHack is an extremely long game. Successful expert episodes usually last tens of thousands of turns, while average successful runs can easily last hundreds of thousands of turns, spanning multiple days of play-time. Compared to testbeds with long episode horizons such as StarCraft and Dota 2, NetHack’s “episodes” are one or two orders of magnitude longer, and they vary wildly depending on the policy. Moreover, several official conducts exist in NetHack that make the game even more challenging, e.g., by not wearing any armor throughout the game (see Appendix A for more). Finally, in comparison to other classic roguelike games, NetHack’s popularity has attracted a larger number of contributors to its community. Consequently, there exists a comprehensive game wiki [50] and many so-called spoilers [25] that provide advice to players. Due to the randomized nature of NetHack, this advice is general in nature (e.g., explaining the behavior of various entities) and not a step-by-step guide. These texts could be used for language-assisted RL along the lines of [72]. Lastly, there is also a large public repository of human replay data (over five million games) hosted on the NetHack Alt.org (NAO) servers, with hundreds of finished games per day on average [47]. This extensive dataset could spur research advances in imitation learning, inverse RL, and learning from demonstrations [1, 3].
2 Information about the over 450 items and 580 monster types, as well as the environment dynamics involving these entities, can be found in the NetHack Wiki [50] and to some extent in the NetHack Guidebook [59].
3 youtube.com/watch?v=SjuTyJlgLJ8

2.2 The NetHack Learning Environment

The NetHack Learning Environment (NLE) is built on NetHack 3.6.6, the 36th public release of NetHack, which was released on March 8th, 2020 and is the latest available version of the game at the time of publication of this paper. NLE is designed to provide a common, turn-based (i.e., synchronous) RL interface around the standard terminal interface of NetHack. We use the game as-is as the backend for our NLE environment, leaving the game dynamics unchanged. We modified the source code to add more control over the random number generator for seeding the environment, as well as to expose the game’s internal state to our Python frontend. By default, the observation space consists of the elements glyphs, chars, colors, specials, blstats, message, inv_glyphs, inv_strs, inv_letters, as well as inv_oclasses. The elements glyphs, chars, colors, and specials are tensors representing the (batched) 2D symbolic observation of the dungeon; blstats is a vector of agent coordinates and other character attributes (“bottom-line stats”, e.g., health points, strength, dexterity, hunger level; normally displayed in the bottom area of the GUI), message is a tensor representing the current message shown to the player (normally displayed in the top area of the GUI), and the inv_* elements are padded tensors representing the hero’s inventory items. More details about the default observation space and possible extensions can be found in Appendix B. The environment has 93 available actions, corresponding to all the actions a human player can take in NetHack.
More precisely, the action space is composed of 77 command actions and 16 movement actions. The movement actions are split into eight “one-step” compass directions (i.e., the agent moves a single step in a given direction) and eight “move far” compass directions (i.e., the agent moves in the specified direction until it runs into some entity). The 77 command actions include eating, opening, kicking, reading, praying as well as many others. We refer the reader to Appendix C as well as to the NetHack Guidebook [59] for the full table of actions and NetHack commands. NLE comes with a Gym interface [11] and includes multiple pre-defined tasks with different reward functions and action spaces (see next section and Appendix E for details). We designed the interface to be lightweight, achieving competitive speeds with Gym-based ALE (see Appendix D for a rough comparison). Finally, NLE also includes a dashboard to analyze NetHack runs recorded as terminal tty recordings. This allows NLE users to analyze replays of the agent’s behavior at an arbitrary speed and provides an interface to visualize action distributions and game events (see Appendix H for details). NLE is available under an open source license at https://github.com/facebookresearch/nle. 4An example interaction after applying a figurine of an Archon: “You set the figurine on the ground and it transforms. You get a bad feeling about this. The Archon hits! You are blinded by the Archon’s radiance! You stagger. . . It hits! You die. . . But wait. . . Your medallion feels warm! You feel much better! The medallion crumbles to dust! You survived that attempt on your life.” 2.3 Tasks NLE aims to make it easy for researchers to probe the behavior of their agents by defining new tasks with only a few lines of code, enabled by NetHack’s symbolic observation space as well as its rich entities and environment dynamics. To demonstrate that NetHack is a suitable testbed for advancing RL, we release a set of initial tasks for tractable subgoals in the game: navigating to a staircase down to the next level, navigating to a staircase while being accompanied by a pet, locating and eating edibles, collecting gold, maximizing in-game score, scouting to discover unseen parts of the dungeon, and finding the oracle. These tasks are described in detail in Appendix E, and, as we demonstrate in our experiments, lead to unique challenges and diverse behaviors of trained agents. 2.4 Evaluation Protocol We lay out a protocol and provide guidance for evaluating future work on NLE in a reproducible manner. The overall goal of NLE is to train agents that can solve NetHack. An episode in the full game of NetHack is considered solved if the agent retrieves the Amulet of Yendor and offers it to its co-aligned deity in the Astral Plane, thereby ascending to demigodhood. We declare NLE to be solved once agents can be trained to consecutively ascend (ten episodes without retry) to demigodhood on unseen seeds given a random role, race, alignment, and gender combination. Since the environment is procedurally generated and stochastic, evaluating on held-out unseen seeds ensures we test systematic generalization of agents. As of October 2020, NAO reports the longest streak of human ascensions on NetHack 3.6.x to be 61; the role, race, etc. are not necessarily randomized for these ascension streaks. Since we believe that this goal is out of reach for machine learning approaches in the foreseeable future, we recommend comparing models on the score task in the meantime. 
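For orientation, a minimal interaction loop with the score task might look as follows. This mirrors the usage example in the NLE repository at the time of writing; the exact task id, observation keys, and the Gym step signature should be treated as assumptions that may change across releases.

```python
# Minimal NLE usage sketch (details follow the 2020-era Gym API and the NLE README).
import gym
import nle  # noqa: F401  -- importing nle registers the NetHack tasks with Gym

env = gym.make("NetHackScore-v0")
obs = env.reset()                        # every reset procedurally generates a new dungeon
print(obs["glyphs"].shape)               # (21, 79) symbolic dungeon map, one of the default keys

done, episode_return = False, 0.0
while not done:
    action = env.action_space.sample()   # random policy, purely for illustration
    obs, reward, done, info = env.step(action)
    episode_return += reward
print("episode return:", episode_return)
```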
Using NetHack’s in-game score as the measure for progress has caveats. For example, expert human players can solve NetHack while minimizing the score [see 50, “Score” entry, for details]. NAO reports ascension scores for NetHack 3.6.x ranging from the low hundreds of thousands to tens of millions. Although we believe training agents to maximize the in-game score is likely insufficient for solving the game, the in-game score is still a sensible proxy for incremental progress on NLE as it is a function of, among other things, the dungeon depth that the agent reached, the number of enemies it killed, the amount of gold it collected, as well as the knowledge it gathered about potions, scrolls, and wands. When reporting results on NLE, we require future work to state the full character specification (e.g., mon-hum-neu-mal), all NetHack options that were used (e.g., whether or not autopickup was used), which actions were allowed (see Table 1), which actions or action-sequences were hardcoded (e.g., engraving [see 50, “Elbereth” as an example]), and how many different seeds were used during training. We ask authors to report the average score obtained on 1000 episodes of randomly sampled and previously unseen seeds. We do not impose any restrictions during training, but at test time any save scumming (i.e., saving and loading previous checkpoints of the episode) or manipulation of the random number generator [e.g., 2] is forbidden.

2.5 Baseline Models

For our baseline models, we encode the multi-modal observation o_t as follows. Let the observation o_t at time step t be a tuple (g_t, z_t) consisting of the 21 × 79 matrix of glyph identifiers and a 21-dimensional vector containing agent stats such as its (x, y)-coordinate, health points, experience level, and so on. We produce three dense representations based on the observation (see Figure 3). For each of the 5991 possible glyphs in NetHack (monsters, items, dungeon features, etc.), we learn a k-dimensional vector embedding. We apply a ConvNet (red) to all visible glyph embeddings as well as another ConvNet (blue) to the 9×9 crop of glyphs around the agent to create a dedicated egocentric representation for improved generalization [32, 71]. We found this egocentric representation to be an important component during preliminary experiments. Furthermore, we use an MLP to encode the hero’s stats (green). These vectors are concatenated and processed by another MLP to produce a low-dimensional latent representation o_t of the observation. Finally, we employ a recurrent policy parameterized by an LSTM [33] to obtain the action distribution. For baseline results on the tasks above, we use a reduced action space that includes the movement, search, kick, and eat actions. For the main experiments, we train the agent’s policy for 1B steps in the environment using IMPALA [24] as implemented in TorchBeast [44]. Throughout training, we change NetHack’s seed for procedurally generating the environment after every episode. To demonstrate NetHack’s variability based on the character configuration, we train with four different agent characters: a neutral human male monk (mon-hum-neu-mal), a lawful dwarf female valkyrie (val-dwa-law-fem), a chaotic elf male wizard (wiz-elf-cha-mal), and a neutral human female tourist (tou-hum-neu-fem). More implementation details can be found in Appendix F. In addition, we present results using Random Network Distillation (RND) [13], a popular exploration technique for Deep RL.
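The encoder just described can be sketched in PyTorch as follows. Channel widths, kernel sizes, and the number of layers here are illustrative placeholders rather than the configuration used in the paper (which is specified in Appendix F).

```python
# Hedged PyTorch sketch of the baseline observation encoder: glyph embeddings, a ConvNet
# over the full 21x79 map, a ConvNet over the 9x9 egocentric crop, an MLP over the
# bottom-line stats, then a joint MLP and an LSTM core. Sizes are illustrative only.
import torch
import torch.nn as nn

NUM_GLYPHS, EMB_DIM, HIDDEN = 5991, 32, 256

class NetHackEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.glyph_emb = nn.Embedding(NUM_GLYPHS, EMB_DIM)
        self.map_cnn = nn.Sequential(
            nn.Conv2d(EMB_DIM, 32, 3, padding=1), nn.ELU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ELU(), nn.Flatten())
        self.crop_cnn = nn.Sequential(
            nn.Conv2d(EMB_DIM, 32, 3, padding=1), nn.ELU(), nn.Flatten())
        self.stats_mlp = nn.Sequential(nn.Linear(21, 64), nn.ELU())
        joint_in = 32 * 21 * 79 + 32 * 9 * 9 + 64
        self.joint_mlp = nn.Sequential(nn.Linear(joint_in, HIDDEN), nn.ELU())
        self.core = nn.LSTM(HIDDEN, HIDDEN, batch_first=True)

    def forward(self, glyphs, crop, stats, core_state=None):
        # glyphs: (B, 21, 79) int64, crop: (B, 9, 9) int64, stats: (B, 21) float32
        g = self.glyph_emb(glyphs).permute(0, 3, 1, 2)   # (B, EMB_DIM, 21, 79)
        c = self.glyph_emb(crop).permute(0, 3, 1, 2)     # (B, EMB_DIM, 9, 9)
        z = torch.cat([self.map_cnn(g), self.crop_cnn(c), self.stats_mlp(stats)], dim=-1)
        h = self.joint_mlp(z).unsqueeze(1)               # add a time dimension of 1
        out, core_state = self.core(h, core_state)
        return out.squeeze(1), core_state                # (B, HIDDEN) latent per step

# Shape check with a dummy batch of 4 observations.
enc = NetHackEncoder()
latent, _ = enc(torch.randint(0, NUM_GLYPHS, (4, 21, 79)),
                torch.randint(0, NUM_GLYPHS, (4, 9, 9)),
                torch.randn(4, 21))
print(latent.shape)  # torch.Size([4, 256])
```

A policy head and value head on top of the LSTM output, trained with IMPALA, would complete an agent of the kind described above.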
As previously discussed, exploration techniques which require returning to previously visited states, such as Go-Explore, are not suitable for use in NLE, but RND does not have this restriction. RND encourages agents to visit unfamiliar states by using the prediction error of a fixed random network as an intrinsic exploration reward, which has proven effective for hard exploration games such as Montezuma’s Revenge [12]. The intrinsic reward obtained from RND can create “reward bridges” between states which provide sparse extrinsic environmental rewards, thereby enabling the agent to discover new sources of extrinsic reward that it otherwise would not have reached. We replace the baseline network’s pixel-based feature extractor with the symbolic feature extractor described above for our baseline model, and use the best configuration of other RND hyperparameters documented by the authors (see Appendix G for full details).

3 Experiments and Results

We present quantitative results on the suite of tasks included in NLE using a standard distributed Deep RL baseline and a popular exploration method, before additionally analyzing agent behavior qualitatively. For each model and character combination, we present results of the mean episode return over the last 100 episodes averaged for five runs in Figure 5. We discuss results for individual tasks below (see Table 5 in the appendix for full details). Staircase: Our agents learn to navigate the dungeon to the staircase > with a success rate of 77.26% for the monk, 50.42% for the tourist, 74.62% for the valkyrie, and 80.42% for the wizard. What surprised us is that agents learn to reliably kick in locked doors. This is a costly action to explore as the agent loses health points and might even die when accidentally kicking against walls. Similarly, the agent has to learn to reliably search for hidden passages and secret doors. Often, this involves using the search action many times in a row, sometimes even at many locations on the map (e.g., around all walls inside a room). Since NLE is procedurally generated, during training agents might encounter easier environment instances and use the acquired skills to accelerate learning on the harder ones [60, 18]. With a small probability, the staircase down might be generated near the agent’s starting position. Using RND exploration, we observe substantial gains in the success rate for the monk (+13.58pp), tourist (+6.52pp), and valkyrie (+16.34pp) roles, but lower results for the wizard role (−12.96pp). Pet: Finding the staircase while taking care of the hero’s pet (e.g., the starting kitten f or little dog d) is a harder task as the pet might get killed or fall into a trap door, making it impossible for the agent to successfully complete the episode. Compared to the staircase task, the agent success rates are generally lower (62.02% for monk, 25.66% for tourist, 63.30% for valkyrie, and 66.80% for wizard). Again, RND exploration provides consistent and substantial gains. Eat: This task highlights the importance of testing with different character classes in NetHack. The monk and tourist start with a number of edible items (e.g., food rations %, apples % and oranges %). A sub-optimal strategy is to consume all of these comestibles right at the start of the episode, potentially risking choking to death. In contrast, the other roles have to hunt for food, which our agents learn to do slowly over time for the valkyrie and wizard roles.
Under more pressure to quickly learn a sustainable food strategy, the valkyrie learns to outlast other roles and survives the longest in the game (on average 1713 time steps). Interestingly, RND exploration leads to consistently worse results for this task. Gold: Locating gold $ in NetHack provides a relatively sparse reward signal. Still, our agents learn to collect decent amounts during training and learn to descend to deeper dungeon levels in search of more. For example, monk agents reach dungeon level 4.2 on average for the CNN baseline and even 5.0 using RND exploration. Score: As discussed in Section 2.4, we believe this task is the best candidate for comparing future methods regarding progress on NetHack. However, it is questionable whether a reward function based on NetHack’s in-game score is sufficient for training agents to solve the game. Our agents average a score of 748 for monk, 11 for tourist, 573 for valkyrie, and 314 for wizard, with RND exploration again providing substantial gains (e.g., increasing the average score to 780 for monk). The resulting agents explore much of the early stages of the game, reaching dungeon level 5.4 on average for the monk, with the deepest descent to level 11 achieving a high score of 4260 while leveling up to experience level 7 (see Table 6 in the appendix). Scout: The scout task shows a trend that is similar to the score task. Interestingly, we observe a lower experience level and in-game score, but agents descend, on average, similarly deep into the dungeon (e.g., level 5.5 for monk). This is sensible, since a policy that avoids fighting monsters, thereby lowering the chances of premature death, will not increase the in-game score as fast or level up the character as quickly, thus keeping the difficulty of spawned monsters low. We note that delaying leveling up in order to avoid encountering stronger enemies early in the game is a known strategy human players adopt in NetHack [e.g. 50, “Why do I keep dying?” entry, January 2019 version]. Oracle: None of our agents find the Oracle @ (except for one lucky valkyrie episode). Locating the Oracle is a difficult exploration task. Even if the agent learns to make its way down the dungeon levels, it needs to search many, potentially branching, levels of the dungeon. Thus, we believe this task serves as a challenging benchmark for exploration methods in procedurally generated environments in the short term. Long term, many tasks harder than this (e.g., reaching Minetown, Mines’ End, Medusa’s Island, The Castle, Vlad’s Tower, Moloch’s Sanctum, etc.) can be easily defined in NLE with very few lines of code.

3.1 Generalization Analysis

Akin to [18], we evaluate agents trained on a limited set of seeds while still testing on 100 held-out seeds. We find that test performance increases monotonically with the size of the set of seeds that the agent is trained on. Figure 4 shows this effect for the score and staircase tasks. Training only on a limited number of seeds leads to high training performance, but poor generalization. The gap between training and test performance becomes narrow when training with at least 1000 seeds, indicating that at that point agents are exposed to sufficient variation during training to make memorization infeasible. We also investigate how model capacity affects performance by comparing agents with five different hidden sizes for the final layer (of the architecture described in Section 2.5).
Figure 7 in the appendix shows that increasing the model capacity improves results on the score but not on the staircase task, indicating that it is an important hyperparameter to consider, as also noted by [18]. 3.2 Qualitative Analysis We analyse the cause for death of our agents during training and present results in Figure 9 in the appendix. We notice that starvation and traps become a less prominent cause of death over time, most likely because our agents, when starting to learn to descend dungeon levels and fight monsters, are more likely to die in combat before they starve or get killed by a trap. In the score and scout tasks, our agents quickly learn to avoid eating rotten corpses, but food poisoning becomes again prominent towards the end of training. We can see that gnome lords G, gnome kings G, chameleons :, and even mind flayers h become a more prominent cause of death over time, which can be explained with our agents leveling up and descending deeper into the dungeon. Chameleons are a particularly interesting entity in NetHack as they regularly change their form to a random animal or monster, thereby adversarially confusing our agent with rarely seen symbols for which it has not yet learned a meaningful representation (similar to unknown words in natural language processing). We release a set of high-score recordings of our agents (see Appendix J on how to view them via a browser or terminal). 4 Related Work Progress in RL has historically been achieved both by algorithmic innovations as well as development of novel environments to train and evaluate agents. Below, we review recent RL environments and delineate their strengths and weaknesses as testbeds for current methods and future research. Recent Game-Based Environments: Retro video games have been a major catalyst for Deep RL research. ALE [9] provides a unified interface to Atari 2600 games, which enables testing of RL algorithms on high-dimensional visual observations quickly and cheaply, resulting in numerous Deep RL publications over the years [4]. The Gym Retro environment [51] expands the list of classic games, but focuses on evaluating visual generalization and transfer learning on a single game, Sonic The Hedgehog. Both StarCraft: BroodWar and StarCraft II have been successfully employed as RL environments [64, 69] for research on, for example, planning [52, 49], multi-agent systems [27, 63], imitation learning [70], and model-free reinforcement learning [70]. However, the complexity of these games creates a high entry barrier both in terms of computational resources required as well as intricate baseline models that require a high degree of domain knowledge to be extended. 3D games have proven to be useful testbeds for tasks such as navigation and embodied reasoning. Vizdoom [42] modifies the classic first-person shooter game Doom to construct an API for visual control; DeepMind Lab [7] presents a game engine based on Quake III Arena to allow for the creation of tasks based on the dynamics of the original game; Project Malmo [37], MineRL [29] and CraftAssist [35] provide visual and symbolic interfaces to the popular Minecraft game. While Minecraft is also procedurally generated and has complex environment dynamics that an agent needs to learn about, it is much more computationally demanding than NetHack (see Table 4 in the appendix). As a consequence, the focus has been on learning from demonstrations [29]. 
More recent work has produced game-like environments with procedurally generated elements, such as the Procgen Benchmark [18], MazeExplorer [30], and the Obstacle Tower environment [38]. However, we argue that, compared to NetHack or Minecraft, these environments do not provide the depth likely necessary to serve as long-term RL testbeds due to limited number of entities and environment interactions that agents have to learn to master. In contrast, NetHack agents have to acquire knowledge about complex environment dynamics of hundreds of entities (dungeon features, items, monsters etc.) to do well in a game that humans often take years of practice to solve. In conclusion, none of the current benchmarks combine a fast simulator with a procedurally generated environment, a hard exploration problem, a wide variety of complex environment dynamics, and numerous types of static and interactive entities. The unique combination of challenges present in NetHack makes NLE well-suited for driving research towards more general and robust RL algorithms. Roguelikes as Reinforcement Learning Testbeds: We are not the first to argue for roguelike games to be used as testbeds for RL. Asperti et al. [5] present an interface to Rogue, the very first roguelike game and one of the simplest roguelikes in terms of game dynamics and difficulty. They show that policies trained with model-free RL algorithms can successfully learn rudimentary navigation. Similarly, Kanagawa and Kaneko [41] present an environment inspired by Rogue that provides a parameterizable generation of Rogue levels. Like us, Dannenhauer et al. [20] argue that roguelike games could be a useful RL testbed. They discuss the roguelike game Dungeon Crawl Stone Soup, but their position paper provides neither an RL environment nor experiments to validate their claims. Most similar to our work is gym_nethack [14, 15], which offers a Gym environment based on NetHack 3.6.0. We commend the authors for introducing NetHack as an RL environment, and to the best of our knowledge they were the first to suggest the idea. However, there are several design choices that limit the impact and longevity of their version as a research testbed. First, they heavily modified NetHack to enable agent interaction. In the process, gym_nethack disables various crucial game mechanics to simplify the game, its environment dynamics, and the resulting optimal policies. This includes removing obstacles like boulders, traps, and locked doors as well as all item identification mechanics, making items much easier to employ and the overall environment much closer to its simpler predecessor, Rogue. Additionally, these modifications tie the environment to a particular version of the game. This is not ideal as (i) players tend to use new versions of the game as they are released, hence, publicly available human data becomes progressively incompatible, thereby limiting the amount of data that can be used for learning from demonstrations; (ii) older versions of NetHack tend to include well-documented exploits which may be discovered by agents (see Appendix I for exploits used in programmatic bots). In contrast, NLE is designed to make the interaction with NetHack as close as possible to the one experienced by humans playing the full game. NLE is the only environment exposing the entire game in all its complexity, allowing for larger-scale experimentation to push the boundaries of RL research. 
5 Conclusion and Future Work

The NetHack Learning Environment is a fast, complex, procedurally generated environment for advancing research in RL. We demonstrate that current state-of-the-art model-free RL serves as a sensible baseline, and we provide an in-depth analysis of learned agent behaviors. NetHack provides interesting challenges for exploration methods given the extremely large number of possible states and wide variety of environment dynamics to discover. Previously proposed formulations of intrinsic motivation based on seeking novelty [8, 53, 13] or maximizing surprise [56, 12, 57] are likely insufficient to make progress on NetHack given that an agent will constantly find itself in novel states or observe unexpected environment dynamics. NetHack poses further challenges since, in order to win, an agent needs to acquire a wide range of skills such as collecting resources, fighting monsters, eating, manipulating objects, casting spells, or taking care of its pet, to name just a few. The multilevel dependencies present in NetHack could inspire progress in hierarchical RL and long-term planning [21, 40, 55, 68]. Transfer to unseen game characters, environment dynamics, or level layouts can be evaluated [67]. Furthermore, its richness and constant challenge make NetHack an interesting benchmark for lifelong learning [45, 54, 61, 48]. In addition, the extensive documentation about NetHack can enable research on using prior (natural language) knowledge for learning, which could lead to improvements in generalization and sample efficiency [10, 46, 72, 36]. Lastly, NetHack can also drive research on learning from demonstrations [1, 3] since a large collection of replay data is available. In sum, we argue that the NetHack Learning Environment strikes an excellent balance between complexity and speed while encompassing a variety of challenges for the research community. For future versions of the environment, we plan to support NetHack 3.7 once it is released, as it will further increase the variability of observations via Themed Rooms. This version will also introduce scripting in the Lua language, which we will leverage to enable users to create their own custom sandbox tasks, directly tapping into NetHack and its rich universe of entities and their complex interactions to define custom RL tasks.

6 Broader Impact

To bridge the gap between the constrained world of video and board games, and the open and unpredictable real world, there is a need for environments and tasks which challenge the limits of current Reinforcement Learning (RL) approaches. Some excellent challenges have been put forth over the years, demanding increases in the complexity of policies needed to solve a problem or in the scale needed to deal with increasingly photorealistic, complex environments. In contrast, our work seeks to be extremely fast to run while still testing the generalization and exploration abilities of agents in an environment which is rich, procedurally generated, and in which reward is sparse. The impact of solving these problems with minimal environment-specific heuristics lies in the development of RL algorithms which produce sample-efficient, robust, and general policies capable of more readily dealing with the uncertain and changing dynamics of “real world” environments. We do not solve these problems here, but rather provide the challenge and the testbed against which such improvements can be produced and evaluated.
Auxiliary to this, and in line with growing concerns that progress in Deep RL is more the result of industrial labs having privileged access to the resources required to run environments and agents on a massive scale, the environment presented here is computationally cheap to run and to collect data in. This democratizes access for researchers in more resource-constrained labs, while not sacrificing the difficulty and richness of the environment. We hope that as a result of this, and of the more general need to develop sample-efficient agents with fewer data, the environmental impact of research using our environment will be reduced compared to more visually sophisticated ones. Acknowledgements We thank the NetHack DevTeam for creating and continuously extending this amazing game over the last decades. We thank Paul Winner, Bart House, M. Drew Streib, Mikko Juola, Florian Mayer, Philip H.S. Torr, Stephen Roller, Minqi Jiang, Vegard Mella, Eric Hambro, Fabio Petroni, Mikayel Samvelyan, Vitaly Kurin, Arthur Szlam, Sebastian Riedel, Antoine Bordes, Gabriel Synnaeve, Jeremy Reizenstein, as well as the NeurIPS 2020, ICML 2020, and BeTR-RL 2020 reviewers and area chairs for their valuable feedback. Nantas Nardelli is supported by EPSRC/MURI grant EP/N019474/1. Finally, we would like to pay tribute to the 863,918,816 simulated NetHack heroes who lost their lives in the name of science for this project (thus far).
1. What is the focus and contribution of the paper regarding the introduction of a new RL benchmark adapted from NetHack? 2. What are the strengths of the proposed environment, particularly in terms of efficiency and potential for NLP-related work? 3. What are the weaknesses of the paper, especially regarding the lack of baseline algorithms and special train/test splits for skill acquisition and composition? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any additional suggestions or recommendations for improving the paper, such as manual decomposition and composition of tasks for explicit demonstration of skill acquisition and combination?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper introduces a new RL benchmark adapted from NetHack. The authors describe the complexity of the game and run some basic baselines like IMPALA. Strengths It takes some engineering effort to produce a benchmark with solid baselines. This environment is also efficient to generate, which makes it appealing to many RL researchers. For a benchmark that aims for more complexity, the discussion of the failure cases is useful for researchers who follow. After reading the author response, I do think this environment has a lot more potential, especially the possibility of doing NLP-related work with the wiki. I improve the score by 1. Weaknesses The main weakness would be the lack of baseline algorithms, especially as an RL benchmark. It would make the paper stronger if most standard RL algorithms were tested. Also, as a benchmark which aims for skill acquisition, the paper should provide some special train/test split to show skill acquisition and composition, like in MetaWorld. [After reading the author feedback] 1. I agree that training on a large set of RL baselines is not that crucial for this paper. It's a minor comment for me. 2. On special train/test splits: I am glad to hear that the current environment would support generating completely new maps based on the seeds. However, what I have in mind is something close to the policy sketch environment [1], where you can decompose Task A into several subtasks that can be combined into Task B. I understand that right now this skill decomposition/composition is probably already there in the current environment; nevertheless, it would be very useful if the authors could manually decide this split to explicitly show it. E.g., Task A "Go To Dungeon": Subtasks: Navigate, OpenDoor (find key first), fight monster. Task B "Feed Pet": Subtasks: Navigate (find food), OpenDoor, Feed. [1] Andreas, Jacob, Dan Klein, and Sergey Levine. "Modular multitask reinforcement learning with policy sketches." International Conference on Machine Learning. 2017.
NIPS
Title The NetHack Learning Environment Abstract Progress in Reinforcement Learning (RL) algorithms goes hand-in-hand with the development of challenging environments that test the limits of current methods. While existing RL environments are either sufficiently complex or based on fast simulation, they are rarely both. Here, we present the NetHack Learning Environment (NLE), a scalable, procedurally generated, stochastic, rich, and challenging environment for RL research based on the popular single-player terminal-based roguelike game, NetHack. We argue that NetHack is sufficiently complex to drive long-term research on problems such as exploration, planning, skill acquisition, and language-conditioned RL, while dramatically reducing the computational resources required to gather a large amount of experience. We compare NLE and its task suite to existing alternatives, and discuss why it is an ideal medium for testing the robustness and systematic generalization of RL agents. We demonstrate empirical success for early stages of the game using a distributed Deep RL baseline and Random Network Distillation exploration, alongside qualitative analysis of various agents trained in the environment. NLE is open source and available at https://github.com/facebookresearch/nle. 1 Introduction Recent advances in (Deep) Reinforcement Learning (RL) have been driven by the development of novel simulation environments, such as the Arcade Learning Environment (ALE) [9], StarCraft [64, 69], BabyAI [16], Obstacle Tower [38], Minecraft [37, 29, 35], and Procgen Benchmark [18]. These environments introduced new challenges for state-of-the-art methods and demonstrated failure modes of existing RL approaches. For example, Montezuma’s Revenge highlighted that methods performing well on other ALE tasks were not able to successfully learn in this sparse-reward environment. This sparked a long line of research on novel methods for exploration [e.g., 8, 66, 53] and learning from demonstrations [e.g., 31, 62, 6]. However, this progress has limits: the current best approach on this environment, Go-Explore [22, 23], overfits to specific properties of ALE and Montezuma’s Revenge. While Go-Explore is an impressive solution for Montezuma’s Revenge, it exploits the determinism of environment transitions, allowing it to memorize sequences of actions that lead to previously visited states from which the agent can continue to explore. We are interested in surpassing the limits of deterministic or repetitive settings and seek a simulation environment that is complex and modular enough to test various open research challenges such as exploration, planning, skill acquisition, memory, and transfer. However, since state-of-the-art RL approaches still require millions or even billions of samples, simulation environments need to be fast to allow RL agents to perform many interactions per second. Among attempts to surpass the limits of deterministic or repetitive settings, procedurally generated environments are a promising path towards testing systematic generalization of RL methods [e.g., 39, 38, 60, 18]. Here, the game state is generated programmatically in every episode, making it extremely unlikely for an agent to visit the exact state more than once during its lifetime. Existing procedurally generated RL environments are either costly to run [e.g., 69, 37, 38] or are, as we argue, of limited complexity [e.g., 17, 19, 7].
To address these issues, we present the NetHack Learning Environment (NLE), a procedurally generated environment that strikes a balance between complexity and speed. It is a fully-featured Gym environment [11] around the popular open-source terminal-based single-player turn-based “dungeon-crawler” game, NetHack [43]. Aside from procedurally generated content, NetHack is an attractive research platform as it contains hundreds of enemy and object types, it has complex and stochastic environment dynamics, and there is a clearly defined goal (descend the dungeon, retrieve an amulet, and ascend). Furthermore, NetHack is difficult to master for human players, who often rely on external knowledge to learn about strategies and NetHack’s complex dynamics and secrets.1 Thus, in addition to a guide book [58, 59] released with NetHack itself, many extensive community-created documents exist, outlining various strategies for the game [e.g., 50, 25]. In summary, we make the following core contributions: (i) we present NLE, a fast but complex and feature-rich Gym environment for RL research built around the popular terminal-based game, NetHack, (ii) we release an initial suite of tasks in the environment and demonstrate that novel tasks can be added easily, (iii) we introduce baseline models trained using IMPALA [24] and Random Network Distillation (RND) [13], a popular exploration bonus, resulting in agents that learn diverse policies for early stages of NetHack, and (iv) we demonstrate the benefit of NetHack’s symbolic observation space by presenting in-depth qualitative analyses of trained agents. 2 NetHack: a Frontier for Reinforcement Learning Research In traditional so-called roguelike games (e.g., Rogue, Hack, NetHack, and Dungeon Crawl Stone Soup) the player acts turn-by-turn in a procedurally generated grid-world environment, with game dynamics strongly focused on exploration, resource management, and continuous discovery of entities and game mechanics [IRDC, 2008]. These games are designed to provide a steep learning curve and a constant level of challenge and surprise to the player. They are generally extremely difficult to win even once, let alone to master, i.e., win regularly and multiple times in a row. As advocated by [39, 38, 18], procedurally generated environments are a promising direction for testing systematic generalization of RL agents. We argue that such environments need to be both sufficiently complex and fast to run to serve as a challenging long-term research testbed. In Section 2.1, we illustrate that NetHack contains many desirable properties, making it an excellent candidate for driving long-term research in RL. We introduce NLE in Section 2.2, an initial suite of tasks in Section 2.3, an evaluation protocol for measuring progress towards solving NetHack in Section 2.4, as well as baseline models in Section 2.5. 2.1 NetHack NetHack is one of the oldest and most popular roguelikes, originally released in 1987 as a successor to Hack, an open-source implementation of the original Rogue game. At the beginning of the game, the player takes the role of a hero who is placed into a dungeon and tasked with finding the Amulet of Yendor to offer it to an in-game deity. To do so, the player has to descend to the bottom of over 50 procedurally generated levels to retrieve the amulet and then subsequently escape the dungeon, unlocking five extremely challenging final levels (the four Elemental Planes and the Astral Plane). 
Many aspects of the game are procedurally generated and follow stochastic dynamics. For example, the overall structure of the dungeon is somewhat linear, but the exact location of places of interest (e.g., the Oracle) and the structure of branching sub-dungeons (e.g., the Gnomish Mines) are determined randomly. The procedurally generated content of each level makes it highly unlikely that a player will ever experience the exact same situation more than once. This provides a fundamental challenge to learning systems and a degree of complexity that enables us to more effectively evaluate an agent’s ability to generalize. It also disqualifies current state-of-the-art exploration methods such as Go-Explore [22, 23] that are based on a goal-conditioned policy to navigate to previously visited states. (Footnote 1: “NetHack is largely based on discovering secrets and tricks during gameplay. It can take years for one to become well-versed in them, and even experienced players routinely discover new ones.” [26]) Moreover, states in NetHack are composed of hundreds of possible symbols, resulting in an enormous combinatorial observation space.2 It is an open question how to best project this symbolic space to a low-dimensional representation appropriate for methods like Go-Explore. For example, Ecoffet et al.’s heuristic of downsampling images of states to measure their similarity to be used as an exploration bonus will likely not work for large symbolic and procedurally generated environments. NetHack provides further variation by different hero roles (e.g., monk, valkyrie, wizard, tourist), races (human, elf, dwarf, gnome, orc) and random starting inventories (see Appendix A for details). Consequently, NetHack poses unique challenges to the research community and requires novel ways to determine state similarity and, likely, entirely new exploration frameworks.
[Figure 2 (terminal screenshot): The hero (@) has to cross water (}) to get past Medusa (@, out of the hero’s line of sight) down the staircase (>) to the next level.] To provide a glimpse into the complexity of NetHack’s environment dynamics, we closely follow the educational example given by “Mr Wendal” on YouTube.3 At a specific point in the game, the hero has to get past Medusa’s Island (see Figure 2 for an example). Medusa’s Island is surrounded by water } that the agent has to cross. Water can rust and corrode the hero’s metallic weapons ) and armor [. Applying a can of grease ( prevents rusting and corrosion. Furthermore, going into water will make a hero’s inventory wet, erasing scrolls ? and spellbooks + that they carry. Applying a can of grease to a bag or sack ( will make it a waterproof container for items. But the sea can also contain a kraken ; that can grab and drown the hero, leading to instant death. Applying a can of grease to a hero’s armor prevents the kraken from grabbing the hero. However, a cursed can of grease will grease the hero’s hands instead and they will drop their weapon and rings. One can use a towel ( to wipe off grease. To reach Medusa @, the hero can alternatively use magic to freeze the water and turn it into walkable ice .. Wearing snow boots [ will help the hero not to slip. When Medusa is in the hero’s line of sight, her gaze will petrify and instantly kill; the hero should use a towel to cover their eyes to fight Medusa, or even apply a mirror ( to petrify her with her own gaze. There are many other entities a hero must learn to face, many of which appear rarely even across multiple games, especially the most powerful monsters. These entities are often compositional, for example a monster might be a wolf d, which shares some characteristics with other in-game canines such as coyotes d or hell hounds d.
To help a player learn, NetHack provides in-game messages describing many of the hero’s interactions (see the top of Figure 1).4 (Footnote 2: Information about the over 450 items and 580 monster types, as well as environment dynamics involving these entities, can be found in the NetHack Wiki [50] and to some extent in the NetHack Guidebook [59]. Footnote 3: youtube.com/watch?v=SjuTyJlgLJ8) Learning to capture these interesting and somewhat realistic albeit abstract dynamics poses challenges for multi-modal and language-conditioned RL [46]. NetHack is an extremely long game. Successful expert episodes usually last tens of thousands of turns, while average successful runs can easily last hundreds of thousands of turns, spanning multiple days of play-time. Compared to testbeds with long episode horizons such as StarCraft and Dota 2, NetHack’s “episodes” are one or two orders of magnitude longer, and they vary wildly depending on the policy. Moreover, several official conducts exist in NetHack that make the game even more challenging, e.g., by not wearing any armor throughout the game (see Appendix A for more). Finally, in comparison to other classic roguelike games, NetHack’s popularity has attracted a larger number of contributors to its community. Consequently, there exists a comprehensive game wiki [50] and many so-called spoilers [25] that provide advice to players. Due to the randomized nature of NetHack, this advice is general in nature (e.g., explaining the behavior of various entities) and not a step-by-step guide. These texts could be used for language-assisted RL along the lines of [72]. Lastly, there is also a large public repository of human replay data (over five million games) hosted on the NetHack Alt.org (NAO) servers, with hundreds of finished games per day on average [47]. This extensive dataset could spur research advances in imitation learning, inverse RL, and learning from demonstrations [1, 3]. 2.2 The NetHack Learning Environment The NetHack Learning Environment (NLE) is built on NetHack 3.6.6, the 36th public release of NetHack, which was released on March 8th, 2020 and is the latest available version of the game at the time of publication of this paper. NLE is designed to provide a common, turn-based (i.e., synchronous) RL interface around the standard terminal interface of NetHack. We use the game as-is as the backend for our NLE environment, leaving the game dynamics unchanged. We added more control over the random number generator (for seeding the environment) to the source code, as well as various modifications to expose the game’s internal state to our Python frontend. By default, the observation space consists of the elements glyphs, chars, colors, specials, blstats, message, inv_glyphs, inv_strs, inv_letters, as well as inv_oclasses. The elements glyphs, chars, colors, and specials are tensors representing the (batched) 2D symbolic observation of the dungeon; blstats is a vector of agent coordinates and other character attributes (“bottom-line stats”, e.g., health points, strength, dexterity, hunger level; normally displayed in the bottom area of the GUI), message is a tensor representing the current message shown to the player (normally displayed in the top area of the GUI), and the inv_* elements are padded tensors representing the hero’s inventory items. More details about the default observation space and possible extensions can be found in Appendix B. The environment has 93 available actions, corresponding to all the actions a human player can take in NetHack.
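To make the default observation space described above concrete, the following is a minimal usage sketch. It is not taken from the paper: it assumes the pip-installable nle package from the NLE repository, which registers Gym ids such as "NetHackScore-v0" when imported, and it uses the dictionary keys and shapes listed above.

import gym
import nle  # noqa: F401 -- importing registers the NetHack Gym environments (assumed id below)

env = gym.make("NetHackScore-v0")
obs = env.reset()
# The default observation is a dictionary of arrays, as described above.
print(obs["glyphs"].shape)   # (21, 79) symbolic map of glyph identifiers
print(obs["chars"].shape)    # (21, 79) characters shown on the terminal screen
print(obs["blstats"])        # agent coordinates and "bottom-line" stats
print(obs["message"])        # padded tensor holding the current top-line message
print(env.action_space)      # discrete space over the allowed NetHack actions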
More precisely, the action space is composed of 77 command actions and 16 movement actions. The movement actions are split into eight “one-step” compass directions (i.e., the agent moves a single step in a given direction) and eight “move far” compass directions (i.e., the agent moves in the specified direction until it runs into some entity). The 77 command actions include eating, opening, kicking, reading, praying as well as many others. We refer the reader to Appendix C as well as to the NetHack Guidebook [59] for the full table of actions and NetHack commands. NLE comes with a Gym interface [11] and includes multiple pre-defined tasks with different reward functions and action spaces (see next section and Appendix E for details). We designed the interface to be lightweight, achieving competitive speeds with Gym-based ALE (see Appendix D for a rough comparison). Finally, NLE also includes a dashboard to analyze NetHack runs recorded as terminal tty recordings. This allows NLE users to analyze replays of the agent’s behavior at an arbitrary speed and provides an interface to visualize action distributions and game events (see Appendix H for details). NLE is available under an open source license at https://github.com/facebookresearch/nle. 4An example interaction after applying a figurine of an Archon: “You set the figurine on the ground and it transforms. You get a bad feeling about this. The Archon hits! You are blinded by the Archon’s radiance! You stagger. . . It hits! You die. . . But wait. . . Your medallion feels warm! You feel much better! The medallion crumbles to dust! You survived that attempt on your life.” 2.3 Tasks NLE aims to make it easy for researchers to probe the behavior of their agents by defining new tasks with only a few lines of code, enabled by NetHack’s symbolic observation space as well as its rich entities and environment dynamics. To demonstrate that NetHack is a suitable testbed for advancing RL, we release a set of initial tasks for tractable subgoals in the game: navigating to a staircase down to the next level, navigating to a staircase while being accompanied by a pet, locating and eating edibles, collecting gold, maximizing in-game score, scouting to discover unseen parts of the dungeon, and finding the oracle. These tasks are described in detail in Appendix E, and, as we demonstrate in our experiments, lead to unique challenges and diverse behaviors of trained agents. 2.4 Evaluation Protocol We lay out a protocol and provide guidance for evaluating future work on NLE in a reproducible manner. The overall goal of NLE is to train agents that can solve NetHack. An episode in the full game of NetHack is considered solved if the agent retrieves the Amulet of Yendor and offers it to its co-aligned deity in the Astral Plane, thereby ascending to demigodhood. We declare NLE to be solved once agents can be trained to consecutively ascend (ten episodes without retry) to demigodhood on unseen seeds given a random role, race, alignment, and gender combination. Since the environment is procedurally generated and stochastic, evaluating on held-out unseen seeds ensures we test systematic generalization of agents. As of October 2020, NAO reports the longest streak of human ascensions on NetHack 3.6.x to be 61; the role, race, etc. are not necessarily randomized for these ascension streaks. Since we believe that this goal is out of reach for machine learning approaches in the foreseeable future, we recommend comparing models on the score task in the meantime. 
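As a hedged illustration of the reporting protocol above (average score over many episodes on previously unseen, randomly sampled seeds), a rough evaluation loop might look as follows. The random policy is only a placeholder for a trained agent, the episode count is configurable, and the "NetHackScore-v0" id and four-tuple step API are assumptions carried over from the sketch above.

import gym
import nle  # noqa: F401

def average_score(num_episodes=1000):
    env = gym.make("NetHackScore-v0")
    returns = []
    for _ in range(num_episodes):
        obs = env.reset()                       # fresh procedurally generated dungeon
        done, episode_return = False, 0.0
        while not done:
            action = env.action_space.sample()  # placeholder for a trained policy
            obs, reward, done, info = env.step(action)
            episode_return += reward
        returns.append(episode_return)
    return sum(returns) / len(returns)

print(average_score(num_episodes=10))  # use 1000 episodes when reporting results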
Using NetHack’s in-game score as the measure for progress has caveats. For example, expert human players can solve NetHack while minimizing the score [see 50, “Score” entry, for details]. NAO reports ascension scores for NetHack 3.6.x ranging from the low hundreds of thousands to tens of millions. Although we believe training agents to maximize the in-game score is likely insufficient for solving the game, the in-game score is still a sensible proxy for incremental progress on NLE as it is a function of, among other things, the dungeon depth that the agent reached, the number of enemies it killed, the amount of gold it collected, as well as the knowledge it gathered about potions, scrolls, and wands. When reporting results on NLE, we require future work to state the full character specification (e.g., mon-hum-neu-mal), all NetHack options that were used (e.g., whether or not autopickup was used), which actions were allowed (see Table 1), which actions or action-sequences were hardcoded (e.g., engraving [see 50, “Elbereth” as an example]) and how many different seeds were used during training. We ask authors to report the average score obtained on 1000 episodes of randomly sampled and previously unseen seeds. We do not impose any restrictions during training, but at test time any save scumming (i.e., saving and loading previous checkpoints of the episode) or manipulation of the random number generator [e.g., 2] is forbidden. 2.5 Baseline Models For our baseline models, we encode the multi-modal observation ot as follows. Let the observation ot at time step t be a tuple (gt, zt) consisting of the 21 × 79 matrix of glyph identifiers and a 21-dimensional vector containing agent stats such as its (x, y)-coordinate, health points, experience level, and so on. We produce three dense representations based on the observation (see Figure 3). For each of the 5991 possible glyphs in NetHack (monsters, items, dungeon features, etc.), we learn a k-dimensional vector embedding. We apply a ConvNet (red) to all visible glyph embeddings as well as another ConvNet (blue) to the 9×9 crop of glyphs around the agent to create a dedicated egocentric representation for improved generalization [32, 71]. We found this egocentric representation to be an important component during preliminary experiments. Furthermore, we use an MLP to encode the hero’s stats (green). These vectors are concatenated and processed by another MLP to produce a low-dimensional latent representation ot of the observation. Finally, we employ a recurrent policy parameterized by an LSTM [33] to obtain the action distribution. For baseline results on the tasks above, we use a reduced action space that includes the movement, search, kick, and eat actions. For the main experiments, we train the agent’s policy for 1B steps in the environment using IMPALA [24] as implemented in TorchBeast [44]. Throughout training, we change NetHack’s seed for procedurally generating the environment after every episode. To demonstrate NetHack’s variability based on the character configuration, we train with four different agent characters: a neutral human male monk (mon-hum-neu-mal), a lawful dwarf female valkyrie (val-dwa-law-fem), a chaotic elf male wizard (wiz-elf-cha-mal), and a neutral human female tourist (tou-hum-neu-fem). More implementation details can be found in Appendix F. In addition, we present results using Random Network Distillation (RND) [13], a popular exploration technique for Deep RL.
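The baseline encoder described above can be summarized in a rough PyTorch sketch. This is not the authors' implementation: the embedding dimension, channel counts, kernel sizes, and hidden sizes are illustrative assumptions; only the overall structure (glyph embeddings, a ConvNet over the full glyph map, a ConvNet over the 9x9 egocentric crop, an MLP over the bottom-line stats, a joint MLP, and an LSTM core) follows the description.

import torch
import torch.nn as nn

NUM_GLYPHS, K = 5991, 32  # number of glyph ids from the paper; embedding size K is an assumption

class NetHackEncoder(nn.Module):
    def __init__(self, stats_dim=21, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(NUM_GLYPHS, K)
        def conv_stack():
            return nn.Sequential(
                nn.Conv2d(K, 16, kernel_size=3, padding=1), nn.ELU(),
                nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ELU(),
                nn.Flatten())
        self.map_conv = conv_stack()    # full 21x79 glyph map
        self.crop_conv = conv_stack()   # 9x9 egocentric crop around the agent
        self.stats_mlp = nn.Sequential(nn.Linear(stats_dim, hidden), nn.ELU())
        self.joint_mlp = nn.LazyLinear(hidden)  # fuses the three representations
        self.core = nn.LSTM(hidden, hidden, batch_first=True)

    def forward(self, glyphs, crop, stats, core_state=None):
        # glyphs: (B, 21, 79) glyph ids; crop: (B, 9, 9) ids around the agent; stats: (B, stats_dim)
        g = self.embed(glyphs).permute(0, 3, 1, 2)  # (B, K, 21, 79)
        c = self.embed(crop).permute(0, 3, 1, 2)    # (B, K, 9, 9)
        parts = [self.map_conv(g), self.crop_conv(c), self.stats_mlp(stats.float())]
        o = torch.relu(self.joint_mlp(torch.cat(parts, dim=-1)))
        out, core_state = self.core(o.unsqueeze(1), core_state)
        return out.squeeze(1), core_state           # feed into policy and value heads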
As previously discussed, exploration techniques which require returning to previously visited states such as Go-Explore are not suitable for use in NLE, but RND does not have this restriction. RND encourages agents to visit unfamiliar states by using the prediction error of a fixed random network as an intrinsic exploration reward, which has proven effective for hard exploration games such as Montezuma’s Revenge [12]. The intrinsic reward obtained from RND can create “reward bridges” between states which provide sparse extrinsic environmental rewards, thereby enabling the agent to discover new sources of extrinsic reward that it otherwise would not have reached. We replace the baseline network’s pixel-based feature extractor with the symbolic feature extractor described above for the baseline model, and use the best configuration of other RND hyperparameters documented by the authors (see Appendix G for full details). 3 Experiments and Results We present quantitative results on the suite of tasks included in NLE using a standard distributed Deep RL baseline and a popular exploration method, before additionally analyzing agent behavior qualitatively. For each model and character combination, we present results of the mean episode return over the last 100 episodes averaged for five runs in Figure 5. We discuss results for individual tasks below (see Table 5 in the appendix for full details). Staircase: Our agents learn to navigate the dungeon to the staircase > with a success rate of 77.26% for the monk, 50.42% for the tourist, 74.62% for the valkyrie, and 80.42% for the wizard. What surprised us is that agents learn to reliably kick in locked doors. This is a costly action to explore as the agent loses health points and might even die when accidentally kicking against walls. Similarly, the agent has to learn to reliably search for hidden passages and secret doors. Often, this involves using the search action many times in a row, sometimes even at many locations on the map (e.g., around all walls inside a room). Since NLE is procedurally generated, during training agents might encounter easier environment instances and use the acquired skills to accelerate learning on the harder ones [60, 18]. With a small probability, the staircase down might be generated near the agent’s starting position. Using RND exploration, we observe substantial gains in the success rate for the monk (+13.58pp), tourist (+6.52pp), and valkyrie (+16.34pp) roles, while results are lower for the wizard role (−12.96pp). Pet: Finding the staircase while taking care of the hero’s pet (e.g., the starting kitten f or little dog d) is a harder task as the pet might get killed or fall into a trap door, making it impossible for the agent to successfully complete the episode. Compared to the staircase task, the agent success rates are generally lower (62.02% for monk, 25.66% for tourist, 63.30% for valkyrie, and 66.80% for wizard). Again, RND exploration provides consistent and substantial gains. Eat: This task highlights the importance of testing with different character classes in NetHack. The monk and tourist start with a number of edible items (e.g., food rations %, apples % and oranges %). A sub-optimal strategy is to consume all of these comestibles right at the start of the episode, potentially risking choking to death. In contrast, the other roles have to hunt for food, which our agents learn to do slowly over time for the valkyrie and wizard roles.
By having more pressure to quickly learn a sustainable food strategy, the valkyrie learns to outlast other roles and survives the longest in the game (on average 1713 time steps). Interestingly, RND exploration leads to consistently worse results for this task. Gold: Locating gold $ in NetHack provides a relatively sparse reward signal. Still, our agents learn to collect decent amounts during training and learn to descend to deeper dungeon levels in search of more. For example, monk agents reach dungeon level 4.2 on average for the CNN baseline and even 5.0 using RND exploration. Score: As discussed in Section 2.4, we believe this task is the best candidate for comparing future methods regarding progress on NetHack. However, it is questionable whether a reward function based on NetHack’s in-game score is sufficient for training agents to solve the game. Our agents average a score of 748 for monk, 11 for tourist, 573 for valkyrie, and 314 for wizard, with RND exploration again providing substantial gains (e.g., increasing the average score to 780 for monk). The resulting agents explore much of the early stages of the game, reaching dungeon level 5.4 on average for the monk with the deepest descent to level 11, achieving a high score of 4260 while leveling up to experience level 7 (see Table 6 in the appendix). Scout: The scout task shows a trend that is similar to the score task. Interestingly, we observe a lower experience level and in-game score, but agents descend, on average, similarly deep into the dungeon (e.g., level 5.5 for monk). This is sensible, since a policy that avoids fighting monsters, thereby lowering the chances of premature death, will not increase the in-game score as fast or level up the character as quickly, thus keeping the difficulty of spawned monsters low. We note that delaying leveling up in order to avoid encountering stronger enemies early in the game is a known strategy human players adopt in NetHack [e.g., 50, “Why do I keep dying?” entry, January 2019 version]. Oracle: None of our agents find the Oracle @ (except for one lucky valkyrie episode). Locating the Oracle is a difficult exploration task. Even if the agent learns to make its way down the dungeon levels, it needs to search many, potentially branching, levels of the dungeon. Thus, we believe this task serves as a challenging benchmark for exploration methods in procedurally generated environments in the short term. Long term, many tasks harder than this (e.g., reaching Minetown, Mines’ End, Medusa’s Island, The Castle, Vlad’s Tower, Moloch’s Sanctum etc.) can be easily defined in NLE with very few lines of code. 3.1 Generalization Analysis Akin to [18], we evaluate agents trained on a limited set of seeds while still testing on 100 held-out seeds. We find that test performance increases monotonically with the size of the set of seeds that the agent is trained on. Figure 4 shows this effect for the score and staircase tasks. Training only on a limited number of seeds leads to high training performance, but poor generalization. The gap between training and test performance becomes narrow when training with at least 1000 seeds, indicating that at that point agents are exposed to sufficient variation during training to make memorization infeasible. We also investigate how model capacity affects performance by comparing agents with five different hidden sizes for the final layer (of the architecture described in Section 2.5).
Figure 7 in the appendix shows that increasing the model capacity improves results on the score but not on the staircase task, indicating that it is an important hyperparameter to consider, as also noted by [18]. 3.2 Qualitative Analysis We analyse the cause for death of our agents during training and present results in Figure 9 in the appendix. We notice that starvation and traps become a less prominent cause of death over time, most likely because our agents, when starting to learn to descend dungeon levels and fight monsters, are more likely to die in combat before they starve or get killed by a trap. In the score and scout tasks, our agents quickly learn to avoid eating rotten corpses, but food poisoning becomes again prominent towards the end of training. We can see that gnome lords G, gnome kings G, chameleons :, and even mind flayers h become a more prominent cause of death over time, which can be explained with our agents leveling up and descending deeper into the dungeon. Chameleons are a particularly interesting entity in NetHack as they regularly change their form to a random animal or monster, thereby adversarially confusing our agent with rarely seen symbols for which it has not yet learned a meaningful representation (similar to unknown words in natural language processing). We release a set of high-score recordings of our agents (see Appendix J on how to view them via a browser or terminal). 4 Related Work Progress in RL has historically been achieved both by algorithmic innovations as well as development of novel environments to train and evaluate agents. Below, we review recent RL environments and delineate their strengths and weaknesses as testbeds for current methods and future research. Recent Game-Based Environments: Retro video games have been a major catalyst for Deep RL research. ALE [9] provides a unified interface to Atari 2600 games, which enables testing of RL algorithms on high-dimensional visual observations quickly and cheaply, resulting in numerous Deep RL publications over the years [4]. The Gym Retro environment [51] expands the list of classic games, but focuses on evaluating visual generalization and transfer learning on a single game, Sonic The Hedgehog. Both StarCraft: BroodWar and StarCraft II have been successfully employed as RL environments [64, 69] for research on, for example, planning [52, 49], multi-agent systems [27, 63], imitation learning [70], and model-free reinforcement learning [70]. However, the complexity of these games creates a high entry barrier both in terms of computational resources required as well as intricate baseline models that require a high degree of domain knowledge to be extended. 3D games have proven to be useful testbeds for tasks such as navigation and embodied reasoning. Vizdoom [42] modifies the classic first-person shooter game Doom to construct an API for visual control; DeepMind Lab [7] presents a game engine based on Quake III Arena to allow for the creation of tasks based on the dynamics of the original game; Project Malmo [37], MineRL [29] and CraftAssist [35] provide visual and symbolic interfaces to the popular Minecraft game. While Minecraft is also procedurally generated and has complex environment dynamics that an agent needs to learn about, it is much more computationally demanding than NetHack (see Table 4 in the appendix). As a consequence, the focus has been on learning from demonstrations [29]. 
More recent work has produced game-like environments with procedurally generated elements, such as the Procgen Benchmark [18], MazeExplorer [30], and the Obstacle Tower environment [38]. However, we argue that, compared to NetHack or Minecraft, these environments do not provide the depth likely necessary to serve as long-term RL testbeds due to limited number of entities and environment interactions that agents have to learn to master. In contrast, NetHack agents have to acquire knowledge about complex environment dynamics of hundreds of entities (dungeon features, items, monsters etc.) to do well in a game that humans often take years of practice to solve. In conclusion, none of the current benchmarks combine a fast simulator with a procedurally generated environment, a hard exploration problem, a wide variety of complex environment dynamics, and numerous types of static and interactive entities. The unique combination of challenges present in NetHack makes NLE well-suited for driving research towards more general and robust RL algorithms. Roguelikes as Reinforcement Learning Testbeds: We are not the first to argue for roguelike games to be used as testbeds for RL. Asperti et al. [5] present an interface to Rogue, the very first roguelike game and one of the simplest roguelikes in terms of game dynamics and difficulty. They show that policies trained with model-free RL algorithms can successfully learn rudimentary navigation. Similarly, Kanagawa and Kaneko [41] present an environment inspired by Rogue that provides a parameterizable generation of Rogue levels. Like us, Dannenhauer et al. [20] argue that roguelike games could be a useful RL testbed. They discuss the roguelike game Dungeon Crawl Stone Soup, but their position paper provides neither an RL environment nor experiments to validate their claims. Most similar to our work is gym_nethack [14, 15], which offers a Gym environment based on NetHack 3.6.0. We commend the authors for introducing NetHack as an RL environment, and to the best of our knowledge they were the first to suggest the idea. However, there are several design choices that limit the impact and longevity of their version as a research testbed. First, they heavily modified NetHack to enable agent interaction. In the process, gym_nethack disables various crucial game mechanics to simplify the game, its environment dynamics, and the resulting optimal policies. This includes removing obstacles like boulders, traps, and locked doors as well as all item identification mechanics, making items much easier to employ and the overall environment much closer to its simpler predecessor, Rogue. Additionally, these modifications tie the environment to a particular version of the game. This is not ideal as (i) players tend to use new versions of the game as they are released, hence, publicly available human data becomes progressively incompatible, thereby limiting the amount of data that can be used for learning from demonstrations; (ii) older versions of NetHack tend to include well-documented exploits which may be discovered by agents (see Appendix I for exploits used in programmatic bots). In contrast, NLE is designed to make the interaction with NetHack as close as possible to the one experienced by humans playing the full game. NLE is the only environment exposing the entire game in all its complexity, allowing for larger-scale experimentation to push the boundaries of RL research. 
5 Conclusion and Future Work The NetHack Learning Environment is a fast, complex, procedurally generated environment for advancing research in RL. We demonstrate that current state-of-the-art model-free RL serves as a sensible baseline, and we provide an in-depth analysis of learned agent behaviors. NetHack provides interesting challenges for exploration methods given the extremely large number of possible states and wide variety of environment dynamics to discover. Previously proposed formulations of intrinsic motivation based on seeking novelty [8, 53, 13] or maximizing surprise [56, 12, 57] are likely insufficient to make progress on NetHack given that an agent will constantly find itself in novel states or observe unexpected environment dynamics. NetHack poses further challenges since, in order to win, an agent needs to acquire a wide range of skills such as collecting resources, fighting monsters, eating, manipulating objects, casting spells, or taking care of its pet, to name just a few. The multilevel dependencies present in NetHack could inspire progress in hierarchical RL and long-term planning [21, 40, 55, 68]. Transfer to unseen game characters, environment dynamics, or level layouts can be evaluated [67]. Furthermore, its richness and constant challenge make NetHack an interesting benchmark for lifelong learning [45, 54, 61, 48]. In addition, the extensive documentation about NetHack can enable research on using prior (natural language) knowledge for learning, which could lead to improvements in generalization and sample efficiency [10, 46, 72, 36]. Lastly, NetHack can also drive research on learning from demonstrations [1, 3] since a large collection of replay data is available. In sum, we argue that the NetHack Learning Environment strikes an excellent balance between complexity and speed while encompassing a variety of challenges for the research community. For future versions of the environment, we plan to support NetHack 3.7 once it is released, as it will further increase the variability of observations via Themed Rooms. This version will also introduce scripting in the Lua language, which we will leverage to enable users to create their custom sandbox tasks, directly tapping into NetHack and its rich universe of entities and their complex interactions to define custom RL tasks. 6 Broader Impact To bridge the gap between the constrained world of video and board games, and the open and unpredictable real world, there is a need for environments and tasks which challenge the limits of current Reinforcement Learning (RL) approaches. Some excellent challenges have been put forth over the years, demanding increases in the complexity of policies needed to solve a problem or in the scale needed to deal with increasingly photorealistic, complex environments. In contrast, our work seeks to be extremely fast to run while still testing the generalization and exploration abilities of agents in an environment which is rich, procedurally generated, and in which reward is sparse. The impact of solving these problems with minimal environment-specific heuristics lies in the development of RL algorithms which produce sample-efficient, robust, and general policies capable of more readily dealing with the uncertain and changing dynamics of “real world” environments. We do not solve these problems here, but rather provide the challenge and the testbed against which such improvements can be produced and evaluated.
Auxiliary to this, and in line with growing concerns that progress in Deep RL is increasingly the result of industrial labs having privileged access to the resources required to run environments and agents on a massive scale, the environment presented here is computationally cheap to run and to collect data in. This democratizes access for researchers in more resource-constrained labs, while not sacrificing the difficulty and richness of the environment. We hope that as a result of this, and of the more general need to develop sample-efficient agents that require less data, the environmental impact of research using our environment will be reduced compared to more visually sophisticated ones. Acknowledgements We thank the NetHack DevTeam for creating and continuously extending this amazing game over the last decades. We thank Paul Winner, Bart House, M. Drew Streib, Mikko Juola, Florian Mayer, Philip H.S. Torr, Stephen Roller, Minqi Jiang, Vegard Mella, Eric Hambro, Fabio Petroni, Mikayel Samvelyan, Vitaly Kurin, Arthur Szlam, Sebastian Riedel, Antoine Bordes, Gabriel Synnaeve, Jeremy Reizenstein, as well as the NeurIPS 2020, ICML 2020, and BeTR-RL 2020 reviewers and area chairs for their valuable feedback. Nantas Nardelli is supported by EPSRC/MURI grant EP/N019474/1. Finally, we would like to pay tribute to the 863,918,816 simulated NetHack heroes who lost their lives in the name of science for this project (thus far).
1. What is the main contribution of the paper regarding the NetHack Learning Environment (NLE)? 2. What are the strengths of the proposed environment, particularly in its suitability for research in reinforcement learning? 3. What are the weaknesses of the paper, especially regarding its novelty compared to prior works? 4. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions [edit] The authors took the time to answer my concerns. I agree that my initial assessment might not have been completely in line with my comments. I adjusted accordingly. The authors present the NetHack Learning Environment (NLE), a gym environment based on the NetHack game. They present a set of mini-games, where the goal is to solve tractable problems. They also demonstrate the suitability of the simulator as a testbed for RL by solving these mini-games using several RL baselines. Strengths The authors clearly motivate the need for learning environments which are accessible, fast, and suitable as a long-term research testbed. The focus of the paper being on presenting the characteristics of the NLE, the experimental section is brief. However, the authors made an effort to test their different initial testbeds on different RL baselines to show that NLE was indeed a suitable environment for research in RL. The need for new RL environments is well motivated. Given the observations and actions related to this simulator, I would argue that it concerns learning and reasoning close to the symbolic level, and very few simulators in the field are as complex and diverse. Weaknesses [edit] Following the comments by the authors, I agree that there is a clear difference and a substantial improvement compared to previous versions of the environment. The proposed simulator is a wrapper around an existing game, together with a set of smaller tasks defined within this environment. Other wrappers of NetHack were presented in [14, 15]. The authors clearly argue in favor of their version of the simulator, and I would agree that theirs is less restricted and allows for larger-scale experimentation and maintainability over time. So, I would say that the technical contribution is there, but the novelty is not really present.
NIPS
Title The NetHack Learning Environment Abstract Progress in Reinforcement Learning (RL) algorithms goes hand-in-hand with the development of challenging environments that test the limits of current methods. While existing RL environments are either sufficiently complex or based on fast simulation, they are rarely both. Here, we present the NetHack Learning Environment (NLE), a scalable, procedurally generated, stochastic, rich, and challenging environment for RL research based on the popular single-player terminal-based roguelike game, NetHack. We argue that NetHack is sufficiently complex to drive long-term research on problems such as exploration, planning, skill acquisition, and language-conditioned RL, while dramatically reducing the computational resources required to gather a large amount of experience. We compare NLE and its task suite to existing alternatives, and discuss why it is an ideal medium for testing the robustness and systematic generalization of RL agents. We demonstrate empirical success for early stages of the game using a distributed Deep RL baseline and Random Network Distillation exploration, alongside qualitative analysis of various agents trained in the environment. NLE is open source and available at https://github.com/facebookresearch/nle. 1 Introduction Recent advances in (Deep) Reinforcement Learning (RL) have been driven by the development of novel simulation environments, such as the Arcade Learning Environment (ALE) [9], StarCraft [64, 69], BabyAI [16], Obstacle Tower [38], Minecraft [37, 29, 35], and Procgen Benchmark [18]. These environments introduced new challenges for state-of-the-art methods and demonstrated failure modes of existing RL approaches. For example, Montezuma’s Revenge highlighted that methods performing well on other ALE tasks were not able to successfully learn in this sparse-reward environment. This sparked a long line of research on novel methods for exploration [e.g., 8, 66, 53] and learning from demonstrations [e.g., 31, 62, 6]. However, this progress has limits: the current best approach on this environment, Go-Explore [22, 23], overfits to specific properties of ALE and Montezuma’s Revenge. While Go-Explore is an impressive solution for Montezuma’s Revenge, it exploits the determinism of environment transitions, allowing it to memorize sequences of actions that lead to previously visited states from which the agent can continue to explore. We are interested in surpassing the limits of deterministic or repetitive settings and seek a simulation environment that is complex and modular enough to test various open research challenges such as exploration, planning, skill acquisition, memory, and transfer. However, since state-of-the-art RL approaches still require millions or even billions of samples, simulation environments need to be fast to allow RL agents to perform many interactions per second. Among attempts to surpass the limits of deterministic or repetitive settings, procedurally generated environments are a promising path towards testing systematic generalization of RL methods [e.g., 39, 38, 60, 18]. Here, the game state is generated programmatically in every episode, making it extremely unlikely for an agent to visit the exact state more than once during its lifetime. Existing procedurally generated RL environments are either costly to run [e.g., 69, 37, 38] or are, as we argue, of limited complexity [e.g., 17, 19, 7].
To address these issues, we present the NetHack Learning Environment (NLE), a procedurally generated environment that strikes a balance between complexity and speed. It is a fully-featured Gym environment [11] around the popular open-source terminal-based single-player turn-based “dungeon-crawler” game, NetHack [43]. Aside from procedurally generated content, NetHack is an attractive research platform as it contains hundreds of enemy and object types, it has complex and stochastic environment dynamics, and there is a clearly defined goal (descend the dungeon, retrieve an amulet, and ascend). Furthermore, NetHack is difficult to master for human players, who often rely on external knowledge to learn about strategies and NetHack’s complex dynamics and secrets.1 Thus, in addition to a guide book [58, 59] released with NetHack itself, many extensive community-created documents exist, outlining various strategies for the game [e.g., 50, 25]. In summary, we make the following core contributions: (i) we present NLE, a fast but complex and feature-rich Gym environment for RL research built around the popular terminal-based game, NetHack, (ii) we release an initial suite of tasks in the environment and demonstrate that novel tasks can be added easily, (iii) we introduce baseline models trained using IMPALA [24] and Random Network Distillation (RND) [13], a popular exploration bonus, resulting in agents that learn diverse policies for early stages of NetHack, and (iv) we demonstrate the benefit of NetHack’s symbolic observation space by presenting in-depth qualitative analyses of trained agents. 2 NetHack: a Frontier for Reinforcement Learning Research In traditional so-called roguelike games (e.g., Rogue, Hack, NetHack, and Dungeon Crawl Stone Soup) the player acts turn-by-turn in a procedurally generated grid-world environment, with game dynamics strongly focused on exploration, resource management, and continuous discovery of entities and game mechanics [IRDC, 2008]. These games are designed to provide a steep learning curve and a constant level of challenge and surprise to the player. They are generally extremely difficult to win even once, let alone to master, i.e., win regularly and multiple times in a row. As advocated by [39, 38, 18], procedurally generated environments are a promising direction for testing systematic generalization of RL agents. We argue that such environments need to be both sufficiently complex and fast to run to serve as a challenging long-term research testbed. In Section 2.1, we illustrate that NetHack contains many desirable properties, making it an excellent candidate for driving long-term research in RL. We introduce NLE in Section 2.2, an initial suite of tasks in Section 2.3, an evaluation protocol for measuring progress towards solving NetHack in Section 2.4, as well as baseline models in Section 2.5. 2.1 NetHack NetHack is one of the oldest and most popular roguelikes, originally released in 1987 as a successor to Hack, an open-source implementation of the original Rogue game. At the beginning of the game, the player takes the role of a hero who is placed into a dungeon and tasked with finding the Amulet of Yendor to offer it to an in-game deity. To do so, the player has to descend to the bottom of over 50 procedurally generated levels to retrieve the amulet and then subsequently escape the dungeon, unlocking five extremely challenging final levels (the four Elemental Planes and the Astral Plane). 
Many aspects of the game are procedurally generated and follow stochastic dynamics. For example, the overall structure of the dungeon is somewhat linear, but the exact location of places of interest (e.g., the Oracle) and the structure of branching sub-dungeons (e.g., the Gnomish Mines) are determined randomly. The procedurally generated content of each level makes it highly unlikely that a player will ever experience the exact same situation more than once. This provides a fundamental challenge to learning systems and a degree of complexity that enables us to more effectively evaluate an agent’s ability to generalize. It also disqualifies current state-of-the-art exploration methods such as Go-Explore [22, 23] that are based on a goal-conditioned policy to navigate to previously visited states. (Footnote 1: “NetHack is largely based on discovering secrets and tricks during gameplay. It can take years for one to become well-versed in them, and even experienced players routinely discover new ones.” [26]) Moreover, states in NetHack are composed of hundreds of possible symbols, resulting in an enormous combinatorial observation space.2 It is an open question how to best project this symbolic space to a low-dimensional representation appropriate for methods like Go-Explore. For example, Ecoffet et al.’s heuristic of downsampling images of states to measure their similarity to be used as an exploration bonus will likely not work for large symbolic and procedurally generated environments. NetHack provides further variation by different hero roles (e.g., monk, valkyrie, wizard, tourist), races (human, elf, dwarf, gnome, orc) and random starting inventories (see Appendix A for details). Consequently, NetHack poses unique challenges to the research community and requires novel ways to determine state similarity and, likely, entirely new exploration frameworks.
[Figure 2 (terminal screenshot): The hero (@) has to cross water (}) to get past Medusa (@, out of the hero’s line of sight) down the staircase (>) to the next level.] To provide a glimpse into the complexity of NetHack’s environment dynamics, we closely follow the educational example given by “Mr Wendal” on YouTube.3 At a specific point in the game, the hero has to get past Medusa’s Island (see Figure 2 for an example). Medusa’s Island is surrounded by water } that the agent has to cross. Water can rust and corrode the hero’s metallic weapons ) and armor [. Applying a can of grease ( prevents rusting and corrosion. Furthermore, going into water will make a hero’s inventory wet, erasing scrolls ? and spellbooks + that they carry. Applying a can of grease to a bag or sack ( will make it a waterproof container for items. But the sea can also contain a kraken ; that can grab and drown the hero, leading to instant death. Applying a can of grease to a hero’s armor prevents the kraken from grabbing the hero. However, a cursed can of grease will grease the hero’s hands instead and they will drop their weapon and rings. One can use a towel ( to wipe off grease. To reach Medusa @, the hero can alternatively use magic to freeze the water and turn it into walkable ice .. Wearing snow boots [ will help the hero not to slip. When Medusa is in the hero’s line of sight, her gaze will petrify and instantly kill; the hero should use a towel to cover their eyes to fight Medusa, or even apply a mirror ( to petrify her with her own gaze. There are many other entities a hero must learn to face, many of which appear rarely even across multiple games, especially the most powerful monsters. These entities are often compositional, for example a monster might be a wolf d, which shares some characteristics with other in-game canines such as coyotes d or hell hounds d.
To help a player learn, NetHack provides in-game messages describing many of the hero’s interactions (see the top of Figure 1).4 (Footnote 2: Information about the over 450 items and 580 monster types, as well as environment dynamics involving these entities, can be found in the NetHack Wiki [50] and to some extent in the NetHack Guidebook [59]. Footnote 3: youtube.com/watch?v=SjuTyJlgLJ8) Learning to capture these interesting and somewhat realistic albeit abstract dynamics poses challenges for multi-modal and language-conditioned RL [46]. NetHack is an extremely long game. Successful expert episodes usually last tens of thousands of turns, while average successful runs can easily last hundreds of thousands of turns, spanning multiple days of play-time. Compared to testbeds with long episode horizons such as StarCraft and Dota 2, NetHack’s “episodes” are one or two orders of magnitude longer, and they vary wildly depending on the policy. Moreover, several official conducts exist in NetHack that make the game even more challenging, e.g., by not wearing any armor throughout the game (see Appendix A for more). Finally, in comparison to other classic roguelike games, NetHack’s popularity has attracted a larger number of contributors to its community. Consequently, there exists a comprehensive game wiki [50] and many so-called spoilers [25] that provide advice to players. Due to the randomized nature of NetHack, this advice is general in nature (e.g., explaining the behavior of various entities) and not a step-by-step guide. These texts could be used for language-assisted RL along the lines of [72]. Lastly, there is also a large public repository of human replay data (over five million games) hosted on the NetHack Alt.org (NAO) servers, with hundreds of finished games per day on average [47]. This extensive dataset could spur research advances in imitation learning, inverse RL, and learning from demonstrations [1, 3]. 2.2 The NetHack Learning Environment The NetHack Learning Environment (NLE) is built on NetHack 3.6.6, the 36th public release of NetHack, which was released on March 8th, 2020 and is the latest available version of the game at the time of publication of this paper. NLE is designed to provide a common, turn-based (i.e., synchronous) RL interface around the standard terminal interface of NetHack. We use the game as-is as the backend for our NLE environment, leaving the game dynamics unchanged. We added more control over the random number generator (for seeding the environment) to the source code, as well as various modifications to expose the game’s internal state to our Python frontend. By default, the observation space consists of the elements glyphs, chars, colors, specials, blstats, message, inv_glyphs, inv_strs, inv_letters, as well as inv_oclasses. The elements glyphs, chars, colors, and specials are tensors representing the (batched) 2D symbolic observation of the dungeon; blstats is a vector of agent coordinates and other character attributes (“bottom-line stats”, e.g., health points, strength, dexterity, hunger level; normally displayed in the bottom area of the GUI), message is a tensor representing the current message shown to the player (normally displayed in the top area of the GUI), and the inv_* elements are padded tensors representing the hero’s inventory items. More details about the default observation space and possible extensions can be found in Appendix B. The environment has 93 available actions, corresponding to all the actions a human player can take in NetHack.
The environment has 93 available actions, corresponding to all the actions a human player can take in NetHack. More precisely, the action space is composed of 77 command actions and 16 movement actions. The movement actions are split into eight "one-step" compass directions (i.e., the agent moves a single step in a given direction) and eight "move far" compass directions (i.e., the agent moves in the specified direction until it runs into some entity). The 77 command actions include eating, opening, kicking, reading, and praying, as well as many others. We refer the reader to Appendix C as well as to the NetHack Guidebook [59] for the full table of actions and NetHack commands.

NLE comes with a Gym interface [11] and includes multiple pre-defined tasks with different reward functions and action spaces (see the next section and Appendix E for details). We designed the interface to be lightweight, achieving competitive speeds with Gym-based ALE (see Appendix D for a rough comparison). Finally, NLE also includes a dashboard to analyze NetHack runs recorded as terminal tty recordings. This allows NLE users to analyze replays of the agent's behavior at an arbitrary speed and provides an interface to visualize action distributions and game events (see Appendix H for details). NLE is available under an open source license at https://github.com/facebookresearch/nle.

2.3 Tasks

NLE aims to make it easy for researchers to probe the behavior of their agents by defining new tasks with only a few lines of code, enabled by NetHack's symbolic observation space as well as its rich entities and environment dynamics. To demonstrate that NetHack is a suitable testbed for advancing RL, we release a set of initial tasks for tractable subgoals in the game: navigating to a staircase down to the next level, navigating to a staircase while being accompanied by a pet, locating and eating edibles, collecting gold, maximizing in-game score, scouting to discover unseen parts of the dungeon, and finding the oracle. These tasks are described in detail in Appendix E and, as we demonstrate in our experiments, lead to unique challenges and diverse behaviors of trained agents.

2.4 Evaluation Protocol

We lay out a protocol and provide guidance for evaluating future work on NLE in a reproducible manner. The overall goal of NLE is to train agents that can solve NetHack. An episode in the full game of NetHack is considered solved if the agent retrieves the Amulet of Yendor and offers it to its co-aligned deity in the Astral Plane, thereby ascending to demigodhood. We declare NLE to be solved once agents can be trained to consecutively ascend (ten episodes without retry) to demigodhood on unseen seeds given a random role, race, alignment, and gender combination. Since the environment is procedurally generated and stochastic, evaluating on held-out unseen seeds ensures we test systematic generalization of agents. As of October 2020, NAO reports the longest streak of human ascensions on NetHack 3.6.x to be 61; the role, race, etc. are not necessarily randomized for these ascension streaks. Since we believe that this goal is out of reach for machine learning approaches in the foreseeable future, we recommend comparing models on the score task in the meantime.
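Until the full game can be solved, comparisons on the score task reduce to a simple loop: run a trained policy on previously unseen seeds and report the mean episode return. The sketch below illustrates this under stated assumptions: `policy` is a placeholder for a trained agent, and the generic Gym-style seed() call stands in for NLE's actual seeding API, which may differ; the required number of evaluation episodes is given in the reporting requirements below.

```python
import gym
import nle  # noqa: F401

def evaluate(policy, episodes=1000, first_seed=1_000_000):
    """Mean episode return of `policy` on held-out seeds (illustrative sketch)."""
    env = gym.make("NetHackScore-v0")   # assumed id of the score task
    returns = []
    for i in range(episodes):
        env.seed(first_seed + i)        # placeholder: seeds disjoint from training
        obs, done, total = env.reset(), False, 0.0
        while not done:
            obs, reward, done, _ = env.step(policy(obs))
            total += reward
        returns.append(total)
    env.close()
    return sum(returns) / len(returns)

# Usage (hypothetical): mean_score = evaluate(my_trained_policy)
```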
Using NetHack's in-game score as the measure for progress has caveats. For example, expert human players can solve NetHack while minimizing the score [see 50, "Score" entry, for details]. NAO reports ascension scores for NetHack 3.6.x ranging from the low hundreds of thousands to tens of millions. Although we believe training agents to maximize the in-game score is likely insufficient for solving the game, the in-game score is still a sensible proxy for incremental progress on NLE, as it is a function of, among other things, the dungeon depth that the agent reached, the number of enemies it killed, the amount of gold it collected, as well as the knowledge it gathered about potions, scrolls, and wands.

When reporting results on NLE, we require future work to state the full character specification (e.g., mon-hum-neu-mal), all NetHack options that were used (e.g., whether or not autopickup was used), which actions were allowed (see Table 1), which actions or action-sequences were hardcoded (e.g., engraving [see 50, "Elbereth" as an example]), and how many different seeds were used during training. We ask authors to report the average score obtained on 1000 episodes of randomly sampled and previously unseen seeds. We do not impose any restrictions during training, but at test time any save scumming (i.e., saving and loading previous checkpoints of the episode) or manipulation of the random number generator [e.g., 2] is forbidden.

2.5 Baseline Models

For our baseline models, we encode the multi-modal observation o_t as follows. Let the observation o_t at time step t be a tuple (g_t, z_t) consisting of the 21 × 79 matrix of glyph identifiers and a 21-dimensional vector containing agent stats such as its (x, y)-coordinate, health points, experience level, and so on. We produce three dense representations based on the observation (see Figure 3). For each of the 5991 possible glyphs in NetHack (monsters, items, dungeon features, etc.), we learn a k-dimensional vector embedding. We apply a ConvNet (red) to all visible glyph embeddings as well as another ConvNet (blue) to the 9×9 crop of glyphs around the agent to create a dedicated egocentric representation for improved generalization [32, 71]. We found this egocentric representation to be an important component during preliminary experiments. Furthermore, we use an MLP to encode the hero's stats (green). These vectors are concatenated and processed by another MLP to produce a low-dimensional latent representation o_t of the observation. Finally, we employ a recurrent policy parameterized by an LSTM [33] to obtain the action distribution. For baseline results on the tasks above, we use a reduced action space that includes the movement, search, kick, and eat actions.

For the main experiments, we train the agent's policy for 1B steps in the environment using IMPALA [24] as implemented in TorchBeast [44]. Throughout training, we change NetHack's seed for procedurally generating the environment after every episode. To demonstrate NetHack's variability based on the character configuration, we train with four different agent characters: a neutral human male monk (mon-hum-neu-mal), a lawful dwarf female valkyrie (val-dwa-law-fem), a chaotic elf male wizard (wiz-elf-cha-mal), and a neutral human female tourist (tou-hum-neu-fem). More implementation details can be found in Appendix F.
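A minimal PyTorch sketch of this encoder is given below. It is an illustration under stated assumptions rather than the authors' implementation: the embedding size, channel counts, hidden sizes, and pooling layer are placeholder choices, and the IMPALA policy and value heads are omitted.

```python
# Sketch of the baseline encoder: glyph embeddings, a ConvNet over the full
# 21 x 79 glyph map, a ConvNet over the egocentric 9 x 9 crop, an MLP over the
# bottom-line stats, a fusion MLP, and an LSTM core. Sizes are illustrative.
import torch
import torch.nn as nn

NUM_GLYPHS = 5991   # possible glyph identifiers (from the text)
BLSTATS_DIM = 21    # agent stats vector (from the text)

class BaselineEncoder(nn.Module):
    def __init__(self, emb_dim=32, hidden_dim=128):
        super().__init__()
        self.glyph_emb = nn.Embedding(NUM_GLYPHS, emb_dim)
        self.map_cnn = nn.Sequential(                      # full dungeon view
            nn.Conv2d(emb_dim, 32, 3, padding=1), nn.ELU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ELU(),
            nn.AdaptiveAvgPool2d((5, 5)), nn.Flatten(),
        )
        self.crop_cnn = nn.Sequential(                     # egocentric 9x9 crop
            nn.Conv2d(emb_dim, 32, 3, padding=1), nn.ELU(), nn.Flatten(),
        )
        self.stats_mlp = nn.Sequential(nn.Linear(BLSTATS_DIM, hidden_dim), nn.ELU())
        fused_dim = 32 * 5 * 5 + 32 * 9 * 9 + hidden_dim
        self.fuse_mlp = nn.Sequential(nn.Linear(fused_dim, hidden_dim), nn.ELU())
        self.core = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)

    def forward(self, glyphs, crop, blstats, core_state=None):
        # glyphs: (B, 21, 79) int64, crop: (B, 9, 9) int64, blstats: (B, 21) float32
        g = self.glyph_emb(glyphs).permute(0, 3, 1, 2)     # (B, emb, 21, 79)
        c = self.glyph_emb(crop).permute(0, 3, 1, 2)       # (B, emb, 9, 9)
        feats = torch.cat(
            [self.map_cnn(g), self.crop_cnn(c), self.stats_mlp(blstats)], dim=-1)
        latent = self.fuse_mlp(feats).unsqueeze(1)         # one time step
        out, core_state = self.core(latent, core_state)
        return out.squeeze(1), core_state                  # feed a policy head here
```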
In addition, we present results using Random Network Distillation (RND) [13], a popular exploration technique for Deep RL. As previously discussed, exploration techniques which require returning to previously visited states, such as Go-Explore, are not suitable for use in NLE, but RND does not have this restriction. RND encourages agents to visit unfamiliar states by using the prediction error of a fixed random network as an intrinsic exploration reward, which has proven effective for hard exploration games such as Montezuma's Revenge [12]. The intrinsic reward obtained from RND can create "reward bridges" between states which provide sparse extrinsic environmental rewards, thereby enabling the agent to discover new sources of extrinsic reward that it otherwise would not have reached. We replace the baseline network's pixel-based feature extractor with the symbolic feature extractor described above, and use the best configuration of the other RND hyperparameters documented by the authors (see Appendix G for full details).
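The heart of RND is compact enough to sketch. The module below is a hedged illustration, not the configuration used in these experiments: the intrinsic bonus is the prediction error of a fixed, randomly initialized target network, and the input is assumed to be an already-encoded observation feature vector with placeholder dimensions.

```python
import torch
import torch.nn as nn

class RNDBonus(nn.Module):
    """Random Network Distillation bonus on encoded observations (sketch)."""

    def __init__(self, obs_dim=128, feat_dim=64):
        super().__init__()
        self.target = nn.Sequential(       # fixed random network
            nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))
        self.predictor = nn.Sequential(    # trained to imitate the target
            nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))
        for p in self.target.parameters():
            p.requires_grad_(False)

    def forward(self, obs_feat):
        with torch.no_grad():
            tgt = self.target(obs_feat)
        pred = self.predictor(obs_feat)
        # Per-state intrinsic reward; its mean also serves as the predictor's loss.
        return ((pred - tgt) ** 2).mean(dim=-1)
```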
3 Experiments and Results

We present quantitative results on the suite of tasks included in NLE using a standard distributed Deep RL baseline and a popular exploration method, before additionally analyzing agent behavior qualitatively. For each model and character combination, we present results of the mean episode return over the last 100 episodes averaged for five runs in Figure 5. We discuss results for individual tasks below (see Table 5 in the appendix for full details).

Staircase: Our agents learn to navigate the dungeon to the staircase > with a success rate of 77.26% for the monk, 50.42% for the tourist, 74.62% for the valkyrie, and 80.42% for the wizard. What surprised us is that agents learn to reliably kick in locked doors. This is a costly action to explore, as the agent loses health points and might even die when accidentally kicking against walls. Similarly, the agent has to learn to reliably search for hidden passages and secret doors. Often, this involves using the search action many times in a row, sometimes even at many locations on the map (e.g., around all walls inside a room). Since NLE is procedurally generated, during training agents might encounter easier environment instances and use the acquired skills to accelerate learning on the harder ones [60, 18]. With a small probability, the staircase down might be generated near the agent's starting position. Using RND exploration, we observe substantial gains in the success rate for the monk (+13.58pp), tourist (+6.52pp), and valkyrie (+16.34pp) roles, but lower results for the wizard role (−12.96pp).

Pet: Finding the staircase while taking care of the hero's pet (e.g., the starting kitten f or little dog d) is a harder task, as the pet might get killed or fall into a trap door, making it impossible for the agent to successfully complete the episode. Compared to the staircase task, the agent success rates are generally lower (62.02% for monk, 25.66% for tourist, 63.30% for valkyrie, and 66.80% for wizard). Again, RND exploration provides consistent and substantial gains.

Eat: This task highlights the importance of testing with different character classes in NetHack. The monk and tourist start with a number of edible items (e.g., food rations %, apples % and oranges %). A sub-optimal strategy is to consume all of these comestibles right at the start of the episode, potentially risking choking to death. In contrast, the other roles have to hunt for food, which our agents learn to do slowly over time for the valkyrie and wizard roles. By having more pressure to quickly learn a sustainable food strategy, the valkyrie learns to outlast other roles and survives the longest in the game (on average 1713 time steps). Interestingly, RND exploration leads to consistently worse results for this task.

Gold: Locating gold $ in NetHack provides a relatively sparse reward signal. Still, our agents learn to collect decent amounts during training and learn to descend to deeper dungeon levels in search for more. For example, monk agents reach dungeon level 4.2 on average for the CNN baseline and even 5.0 using RND exploration.

Score: As discussed in Section 2.4, we believe this task is the best candidate for comparing future methods regarding progress on NetHack. However, it is questionable whether a reward function based on NetHack's in-game score is sufficient for training agents to solve the game. Our agents average a score of 748 for monk, 11 for tourist, 573 for valkyrie, and 314 for wizard, with RND exploration again providing substantial gains (e.g., increasing the average score to 780 for monk). The resulting agents explore much of the early stages of the game, reaching dungeon level 5.4 on average for the monk, with the deepest descent to level 11 achieving a high score of 4260 while leveling up to experience level 7 (see Table 6 in the appendix).

Scout: The scout task shows a trend that is similar to the score task. Interestingly, we observe a lower experience level and in-game score, but agents descend, on average, similarly deep into the dungeon (e.g., level 5.5 for monk). This is sensible, since a policy that avoids fighting monsters, thereby lowering the chances of premature death, will not increase the in-game score as fast or level up the character as quickly, thus keeping the difficulty of spawned monsters low. We note that delaying leveling up in order to avoid encountering stronger enemies early in the game is a known strategy human players adopt in NetHack [e.g., 50, "Why do I keep dying?" entry, January 2019 version].

Oracle: None of our agents find the Oracle @ (except for one lucky valkyrie episode). Locating the Oracle is a difficult exploration task. Even if the agent learns to make its way down the dungeon levels, it needs to search many, potentially branching, levels of the dungeon. Thus, we believe this task serves as a challenging benchmark for exploration methods in procedurally generated environments in the short term. Long term, many tasks harder than this (e.g., reaching Minetown, Mines' End, Medusa's Island, The Castle, Vlad's Tower, Moloch's Sanctum, etc.) can be easily defined in NLE with very few lines of code.

3.1 Generalization Analysis

Akin to [18], we evaluate agents trained on a limited set of seeds while still testing on 100 held-out seeds. We find that test performance increases monotonically with the size of the set of seeds that the agent is trained on. Figure 4 shows this effect for the score and staircase tasks. Training only on a limited number of seeds leads to high training performance, but poor generalization. The gap between training and test performance becomes narrow when training with at least 1000 seeds, indicating that at that point agents are exposed to sufficient variation during training to make memorization infeasible. We also investigate how model capacity affects performance by comparing agents with five different hidden sizes for the final layer (of the architecture described in Section 2.5). Figure 7 in the appendix shows that increasing the model capacity improves results on the score but not on the staircase task, indicating that it is an important hyperparameter to consider, as also noted by [18].
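The seed-restriction setup used in this analysis can be made concrete with a small sketch: training episodes draw from a fixed pool of seeds whose size is varied, while evaluation episodes draw from a disjoint, held-out range. The helper below is illustrative; its names and the way seeds are applied to the environment are assumptions.

```python
import random

def make_seed_sampler(train_pool_size, held_out_start=1_000_000, held_out_size=100):
    """Sample training seeds from a fixed pool and test seeds from a
    disjoint held-out range (illustrative sketch)."""
    train_pool = list(range(train_pool_size))
    def sample(train=True):
        if train:
            return random.choice(train_pool)
        return held_out_start + random.randrange(held_out_size)
    return sample

# e.g., sampler = make_seed_sampler(1000); seed = sampler(train=True)
```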
3.2 Qualitative Analysis

We analyse the causes of death of our agents during training and present results in Figure 9 in the appendix. We notice that starvation and traps become a less prominent cause of death over time, most likely because our agents, when starting to learn to descend dungeon levels and fight monsters, are more likely to die in combat before they starve or get killed by a trap. In the score and scout tasks, our agents quickly learn to avoid eating rotten corpses, but food poisoning becomes prominent again towards the end of training. We can see that gnome lords G, gnome kings G, chameleons :, and even mind flayers h become a more prominent cause of death over time, which can be explained by our agents leveling up and descending deeper into the dungeon. Chameleons are a particularly interesting entity in NetHack, as they regularly change their form to a random animal or monster, thereby adversarially confusing our agent with rarely seen symbols for which it has not yet learned a meaningful representation (similar to unknown words in natural language processing). We release a set of high-score recordings of our agents (see Appendix J on how to view them via a browser or terminal).

4 Related Work

Progress in RL has historically been achieved both by algorithmic innovations and by the development of novel environments to train and evaluate agents. Below, we review recent RL environments and delineate their strengths and weaknesses as testbeds for current methods and future research.

Recent Game-Based Environments: Retro video games have been a major catalyst for Deep RL research. ALE [9] provides a unified interface to Atari 2600 games, which enables testing of RL algorithms on high-dimensional visual observations quickly and cheaply, resulting in numerous Deep RL publications over the years [4]. The Gym Retro environment [51] expands the list of classic games, but focuses on evaluating visual generalization and transfer learning on a single game, Sonic The Hedgehog. Both StarCraft: BroodWar and StarCraft II have been successfully employed as RL environments [64, 69] for research on, for example, planning [52, 49], multi-agent systems [27, 63], imitation learning [70], and model-free reinforcement learning [70]. However, the complexity of these games creates a high entry barrier, both in terms of the computational resources required and the intricate baseline models that require a high degree of domain knowledge to be extended.

3D games have proven to be useful testbeds for tasks such as navigation and embodied reasoning. Vizdoom [42] modifies the classic first-person shooter game Doom to construct an API for visual control; DeepMind Lab [7] presents a game engine based on Quake III Arena to allow for the creation of tasks based on the dynamics of the original game; Project Malmo [37], MineRL [29] and CraftAssist [35] provide visual and symbolic interfaces to the popular Minecraft game. While Minecraft is also procedurally generated and has complex environment dynamics that an agent needs to learn about, it is much more computationally demanding than NetHack (see Table 4 in the appendix). As a consequence, the focus has been on learning from demonstrations [29].
More recent work has produced game-like environments with procedurally generated elements, such as the Procgen Benchmark [18], MazeExplorer [30], and the Obstacle Tower environment [38]. However, we argue that, compared to NetHack or Minecraft, these environments do not provide the depth likely necessary to serve as long-term RL testbeds due to limited number of entities and environment interactions that agents have to learn to master. In contrast, NetHack agents have to acquire knowledge about complex environment dynamics of hundreds of entities (dungeon features, items, monsters etc.) to do well in a game that humans often take years of practice to solve. In conclusion, none of the current benchmarks combine a fast simulator with a procedurally generated environment, a hard exploration problem, a wide variety of complex environment dynamics, and numerous types of static and interactive entities. The unique combination of challenges present in NetHack makes NLE well-suited for driving research towards more general and robust RL algorithms. Roguelikes as Reinforcement Learning Testbeds: We are not the first to argue for roguelike games to be used as testbeds for RL. Asperti et al. [5] present an interface to Rogue, the very first roguelike game and one of the simplest roguelikes in terms of game dynamics and difficulty. They show that policies trained with model-free RL algorithms can successfully learn rudimentary navigation. Similarly, Kanagawa and Kaneko [41] present an environment inspired by Rogue that provides a parameterizable generation of Rogue levels. Like us, Dannenhauer et al. [20] argue that roguelike games could be a useful RL testbed. They discuss the roguelike game Dungeon Crawl Stone Soup, but their position paper provides neither an RL environment nor experiments to validate their claims. Most similar to our work is gym_nethack [14, 15], which offers a Gym environment based on NetHack 3.6.0. We commend the authors for introducing NetHack as an RL environment, and to the best of our knowledge they were the first to suggest the idea. However, there are several design choices that limit the impact and longevity of their version as a research testbed. First, they heavily modified NetHack to enable agent interaction. In the process, gym_nethack disables various crucial game mechanics to simplify the game, its environment dynamics, and the resulting optimal policies. This includes removing obstacles like boulders, traps, and locked doors as well as all item identification mechanics, making items much easier to employ and the overall environment much closer to its simpler predecessor, Rogue. Additionally, these modifications tie the environment to a particular version of the game. This is not ideal as (i) players tend to use new versions of the game as they are released, hence, publicly available human data becomes progressively incompatible, thereby limiting the amount of data that can be used for learning from demonstrations; (ii) older versions of NetHack tend to include well-documented exploits which may be discovered by agents (see Appendix I for exploits used in programmatic bots). In contrast, NLE is designed to make the interaction with NetHack as close as possible to the one experienced by humans playing the full game. NLE is the only environment exposing the entire game in all its complexity, allowing for larger-scale experimentation to push the boundaries of RL research. 
5 Conclusion and Future Work

The NetHack Learning Environment is a fast, complex, procedurally generated environment for advancing research in RL. We demonstrate that current state-of-the-art model-free RL serves as a sensible baseline, and we provide an in-depth analysis of learned agent behaviors. NetHack provides interesting challenges for exploration methods given the extremely large number of possible states and wide variety of environment dynamics to discover. Previously proposed formulations of intrinsic motivation based on seeking novelty [8, 53, 13] or maximizing surprise [56, 12, 57] are likely insufficient to make progress on NetHack, given that an agent will constantly find itself in novel states or observe unexpected environment dynamics. NetHack poses further challenges since, in order to win, an agent needs to acquire a wide range of skills such as collecting resources, fighting monsters, eating, manipulating objects, casting spells, or taking care of their pet, to name just a few. The multilevel dependencies present in NetHack could inspire progress in hierarchical RL and long-term planning [21, 40, 55, 68]. Transfer to unseen game characters, environment dynamics, or level layouts can be evaluated [67]. Furthermore, its richness and constant challenge make NetHack an interesting benchmark for lifelong learning [45, 54, 61, 48]. In addition, the extensive documentation about NetHack can enable research on using prior (natural language) knowledge for learning, which could lead to improvements in generalization and sample efficiency [10, 46, 72, 36]. Lastly, NetHack can also drive research on learning from demonstrations [1, 3] since a large collection of replay data is available. In sum, we argue that the NetHack Learning Environment strikes an excellent balance between complexity and speed while encompassing a variety of challenges for the research community.

For future versions of the environment, we plan to support NetHack 3.7 once it is released, as it will further increase the variability of observations via Themed Rooms. This version will also introduce scripting in the Lua language, which we will leverage to enable users to create their own custom sandbox tasks, directly tapping into NetHack and its rich universe of entities and their complex interactions to define custom RL tasks.

6 Broader Impact

To bridge the gap between the constrained world of video and board games, and the open and unpredictable real world, there is a need for environments and tasks which challenge the limits of current Reinforcement Learning (RL) approaches. Some excellent challenges have been put forth over the years, demanding increases in the complexity of policies needed to solve a problem or in the scale needed to deal with increasingly photorealistic, complex environments. In contrast, our work seeks to be extremely fast to run while still testing the generalization and exploration abilities of agents in an environment which is rich, procedurally generated, and in which reward is sparse. The impact of solving these problems with minimal environment-specific heuristics lies in the development of RL algorithms which produce sample-efficient, robust, and general policies capable of more readily dealing with the uncertain and changing dynamics of "real world" environments. We do not solve these problems here, but rather provide the challenge and the testbed against which such improvements can be produced and evaluated.
Auxiliary to this, and in line with growing concerns that progress in Deep RL is more the result of industrial labs having privileged access to the resources required to run environments and agents on a massive scale, the environment presented here is computationally cheap to run and to collect data in. This democratizes access for researchers in more resource-constrained labs, while not sacrificing the difficulty and richness of the environment. We hope that as a result of this, and of the more general need to develop sample-efficient agents with fewer data, the environmental impact of research using our environment will be reduced compared to more visually sophisticated ones. Acknowledgements We thank the NetHack DevTeam for creating and continuously extending this amazing game over the last decades. We thank Paul Winner, Bart House, M. Drew Streib, Mikko Juola, Florian Mayer, Philip H.S. Torr, Stephen Roller, Minqi Jiang, Vegard Mella, Eric Hambro, Fabio Petroni, Mikayel Samvelyan, Vitaly Kurin, Arthur Szlam, Sebastian Riedel, Antoine Bordes, Gabriel Synnaeve, Jeremy Reizenstein, as well as the NeurIPS 2020, ICML 2020, and BeTR-RL 2020 reviewers and area chairs for their valuable feedback. Nantas Nardelli is supported by EPSRC/MURI grant EP/N019474/1. Finally, we would like to pay tribute to the 863,918,816 simulated NetHack heroes who lost their lives in the name of science for this project (thus far).
1. What is the main contribution of the paper, and how does it fill a gap in existing research? 2. What are the strengths of the proposed environment, and how does it challenge current exploration methods? 3. What are the weaknesses of the paper, and how could they be addressed? 4. How does the paper demonstrate the challenges of training agents in the NetHack environment? 5. What opportunities does the environment offer for combining NLP and RL?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
This paper presents the NetHack Learning Environment, a terminal-based grid-world environment for RL research. The NetHack environment is procedurally generated, with a large observation space and complex stochastic game dynamics, creating challenges for current exploration methods while providing fast simulation for efficient training of RL agents. To demonstrate different challenges and behaviors emerging from the environment, the paper proposes 5 tasks together with an evaluation protocol, and uses a distributed RL method and an exploration method to evaluate agents' performance on them, showing the challenges of training agents in such an environment.
Strengths
The main strength of the paper is in the environment, which will certainly be useful for the RL/embodied AI community. The NetHack environment proposed in the paper seems to fill a gap in existing environments for RL research, which can help develop new RL algorithms, but also new problems related to embodied intelligence. The environment is procedurally generated and stochastic, which avoids having agents memorize past episodes in order to solve the game, and makes some of the existing exploration methods such as Go-Explore fail. While the observations are symbolic, they contain a large number of symbols corresponding to the different game elements, as well as natural language, creating opportunities for combining NLP and RL. The game entities are compositional, meaning that agents can reason about common attributes to interact with entities of different classes (line 108). Given the popularity of the game by which it is inspired, there is a corpus of natural language, game demonstrations, and other knowledge bases that can bring interesting problems in offline learning. Despite these features, the game is faster than simpler environments widely used in RL research (such as ALE), making it useful for research under non-industrial computing resources. The proposed tasks and evaluation create a measurable benchmark for making progress, given the difficulty of solving the full NetHack game. While the proposed architecture for the baselines is simple, the methods used are fair with respect to the state of the art, and the low performance demonstrates the challenges in the proposed tasks. The qualitative results give a good understanding of the skills that need to be learned by agents and some of the challenges that appear in performing the tasks. The paper is well written, and the tasks and experiments are clear, as well as the explanation of the environment.
Weaknesses
While the environment description is clear, it would have helped to provide an image of the environment that matched the description when it is first presented. For those unfamiliar with the environment, the suggested YouTube video takes the reader away from the paper, and it is difficult to follow the description in lines 94-111 without a screenshot or an indication of which second of the video to focus on. Another part that I find missing is a description of the resources used to train the baselines for the proposed tasks, and the time it took to train. While the supplementary materials give a good idea of the environment speed, those measurements are (as claimed) using a random policy, which may differ from the times taken by a learned policy. It would be good to get a sense of the resources needed to train agents in the environment.
NIPS
}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}} }}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}} }}.}}}}}P.}}}}}......}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}..".}}}...}}}}} }...}}.....}}}}}....}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}...............} }....}}}}}}}}}}....}}}..}}}}}}}}}}}.......}}}}}}}}}}}}}}}}..}}.....}}}...}} }....}}}}}}}}b....}}}}.[}}}}}}%................}}}}}}}}}}}.}}}}.....}}...}} }....}}}}}}}}}}}}.}}}}.}}}}}}M-----------------.}}}}}}}}}}}}}}}}}.........} }....}}}}}}}}}}}}}}}}}}.}}}...|.......^.......|...}}}}}}}}}}}}}}}}}}}....}} }.....}.}}s...}}}}}}}}}.}}....--------+--------....}}}}}}..}}}}}}}}}}}...}} }..oo..}}}}.%}}}}}}}}}}}}}........|.......|........}}}}}....}}}}}}}}}}}}}}} }.....}}}}}}}}}}}}}}}}}}}}........|.>.....|^....^..}}}}}...}}}}}}}}}.}}}}}} }.....}}}}}}}}}}}}}}}}}}}}....--------+--------....}}}}}}.}.}}}}}}}}}}}}}}} }....@.}}}}}}}}}}}}}}}}}}}}...|.......^.......|...}}}}}}}}}}}}}}}}}.}}}}}}} }.......}}}}}}}..}}}}}}}}}}}}.-----------------.}}}}}}}}}}}}}}}}}....}}}}}} }....f..p}}.}}...U}}}}}}}}}}}}.......Y.........}}}}}..}}}}}}}}}.......}}}}} }.......}}}}}}}......}}}}}}}}}}}}}}..v....}}}}}}}}}..C..}}}}}}...}}..}}}}}} }.....}}}}}}}}}}}.Y...}}}}}}}}}}}}}}}}}}}}}}.}}}}}}}..B}}}}}}}}}....}}}}}}} }}..}}}}}}}}}}}}}....}}}}}}}}}}}}}}}}}}}}}}...}}..}}}}}}}.}}.}}}}.^}}}}}}}} }}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}} }}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}} Georg XIII the Thaumaturge St:7 Dx:14 Co:17 In:19 Wi:10 Ch:10 Neutral S: Dlvl:6 $:0 HP:52(52) Pw:28(73) AC:5 Xp:7/921 T:7451 Hungry 1Figure 2: The hero (@) has to cross water (}) to get past Medusa (@, out of the hero’s line of sight) down the staircase (>) to the next level. To provide a glimpse into the complexity of NetHack’s environment dynamics, we closely follow the educational example given by “Mr Wendal” on YouTube.3 At a specific point in the game, the hero has to get past Medusa’s Island (see Figure 2 for an example). Medusa’s Island is surrounded by water } that the agent has to cross. Water can rust and corrode the hero’s metallic weapons ) and armor [. Applying a can of grease ( prevents rusting and corrosion. Furthermore, going into water will make a hero’s inventory wet, erasing scrolls ? and spellbooks + that they carry. Applying a can of grease to a bag or sack ( will make it a waterproof container for items. But the sea can also contain a kraken ; that can grab and drown the hero, leading to instant death. Applying a can of grease to a hero’s armor prevents the kraken from grabbing the hero. However, a cursed can of grease will grease the hero’s hands instead and they will drop their weapon and rings. One can use a towel ( to wipe off grease. To reach Medusa @, the hero can alternatively use magic to freeze the water and turn it into walkable ice .. Wearing snow boots [ will help the hero not to slip. When Medusa is in the hero’s line of sight, her gaze will petrify and instantly kill—the hero should use a towel to cover their eyes to fight Medusa, or even apply a mirror ( to petrify her with her own gaze. There are many other entities a hero must learn to face, many of which appear rarely even across multiple games, especially the most powerful monsters. These entities are often compositional, for example a monster might be a wolf d, which shares some characteristics with other in-game canines such as coyotes d or hell hounds d. 
To help a player learn, NetHack provides in-game messages 2Information about the over 450 items and 580 monster types, as well as environment dynamics involving these entities can be found in the NetHack Wiki [50] and to some extent in the NetHack Guidebook [59]. 3youtube.com/watch?v=SjuTyJlgLJ8 describing many of the hero’s interactions (see the top of Figure 1).4 Learning to capture these interesting and somewhat realistic albeit abstract dynamics poses challenges for multi-modal and language-conditioned RL [46]. NetHack is an extremely long game. Successful expert episodes usually last tens of thousands of turns, while average successful runs can easily last hundreds of thousands of turns, spawning multiple days of play-time. Compared to testbeds with long episode horizons such as StarCraft and Dota 2, NetHack’s “episodes” are one or two orders of magnitude longer, and they wildly vary depending on the policy. Moreover, several official conducts exist in NetHack that make the game even more challenging, e.g., by not wearing any armor throughout the game (see Appendix A for more). Finally, in comparison to other classic roguelike games, NetHack’s popularity has attracted a larger number of contributors to its community. Consequently, there exists a comprehensive game wiki [50] and many so-called spoilers [25] that provide advice to players. Due to the randomized nature of NetHack, this advice is general in nature (e.g., explaining the behavior of various entities) and not a step-by-step guide. These texts could be used for language-assisted RL along the lines of [72]. Lastly, there is also a large public repository of human replay data (over five million games) hosted on the NetHack Alt.org (NAO) servers, with hundreds of finished games per day on average [47]. This extensive dataset could spur research advances in imitation learning, inverse RL, and learning from demonstrations [1, 3]. 2.2 The NetHack Learning Environment The NetHack Learning Environment (NLE) is built on NetHack 3.6.6, the 36th public release of NetHack, which was released on March 8th, 2020 and is the latest available version of the game at the time of publication of this paper. NLE is designed to provide a common, turn-based (i.e., synchronous) RL interface around the standard terminal interface of NetHack. We use the game as-is as the backend for our NLE environment, leaving the game dynamics unchanged. We added to the source code more control over the random number generator for seeding the environment, as well as various modifications to expose the game’s internal state to our Python frontend. By default, the observation space consists of the elements glyphs, chars, colors, specials, blstats, message, inv_glyphs, inv_strs, inv_letters, as well as inv_oclasses. The elements glyphs, chars, colors, and specials are tensors representing the (batched) 2D symbolic observation of the dungeon; blstats is a vector of agent coordinates and other character attributes (“bottom-line stats”, e.g., health points, strength, dexterity, hunger level; normally displayed in the bottom area of the GUI), message is a tensor representing the current message shown to the player (normally displayed in the top area of the GUI), and the inv_* elements are padded tensors representing the hero’s inventory items. More details about the default observation space and possible extensions can be found in Appendix B. The environment has 93 available actions, corresponding to all the actions a human player can take in NetHack. 
More precisely, the action space is composed of 77 command actions and 16 movement actions. The movement actions are split into eight “one-step” compass directions (i.e., the agent moves a single step in a given direction) and eight “move far” compass directions (i.e., the agent moves in the specified direction until it runs into some entity). The 77 command actions include eating, opening, kicking, reading, praying as well as many others. We refer the reader to Appendix C as well as to the NetHack Guidebook [59] for the full table of actions and NetHack commands. NLE comes with a Gym interface [11] and includes multiple pre-defined tasks with different reward functions and action spaces (see next section and Appendix E for details). We designed the interface to be lightweight, achieving competitive speeds with Gym-based ALE (see Appendix D for a rough comparison). Finally, NLE also includes a dashboard to analyze NetHack runs recorded as terminal tty recordings. This allows NLE users to analyze replays of the agent’s behavior at an arbitrary speed and provides an interface to visualize action distributions and game events (see Appendix H for details). NLE is available under an open source license at https://github.com/facebookresearch/nle. 4An example interaction after applying a figurine of an Archon: “You set the figurine on the ground and it transforms. You get a bad feeling about this. The Archon hits! You are blinded by the Archon’s radiance! You stagger. . . It hits! You die. . . But wait. . . Your medallion feels warm! You feel much better! The medallion crumbles to dust! You survived that attempt on your life.” 2.3 Tasks NLE aims to make it easy for researchers to probe the behavior of their agents by defining new tasks with only a few lines of code, enabled by NetHack’s symbolic observation space as well as its rich entities and environment dynamics. To demonstrate that NetHack is a suitable testbed for advancing RL, we release a set of initial tasks for tractable subgoals in the game: navigating to a staircase down to the next level, navigating to a staircase while being accompanied by a pet, locating and eating edibles, collecting gold, maximizing in-game score, scouting to discover unseen parts of the dungeon, and finding the oracle. These tasks are described in detail in Appendix E, and, as we demonstrate in our experiments, lead to unique challenges and diverse behaviors of trained agents. 2.4 Evaluation Protocol We lay out a protocol and provide guidance for evaluating future work on NLE in a reproducible manner. The overall goal of NLE is to train agents that can solve NetHack. An episode in the full game of NetHack is considered solved if the agent retrieves the Amulet of Yendor and offers it to its co-aligned deity in the Astral Plane, thereby ascending to demigodhood. We declare NLE to be solved once agents can be trained to consecutively ascend (ten episodes without retry) to demigodhood on unseen seeds given a random role, race, alignment, and gender combination. Since the environment is procedurally generated and stochastic, evaluating on held-out unseen seeds ensures we test systematic generalization of agents. As of October 2020, NAO reports the longest streak of human ascensions on NetHack 3.6.x to be 61; the role, race, etc. are not necessarily randomized for these ascension streaks. Since we believe that this goal is out of reach for machine learning approaches in the foreseeable future, we recommend comparing models on the score task in the meantime. 
Using NetHack’s in-game score as the measure for progress has caveats. For example, expert human players can solve NetHack while minimizing the score [see 50, “Score” entry, for details]. NAO reports ascension scores for NetHack 3.6.x ranging from the low hundreds of thousands to tens of millions. Although we believe training agents to maximize the in-game score is likely insufficient for solving the game, the in-game score is still a sensible proxy for incremental progress on NLE as it is a function of, among other things, the dungeon depth that the agent reached, the number of enemies it killed, the amount of gold it collected, as well as the knowledge it gathered about potions, scrolls, and wands. When reporting results on NLE, we require future work to state the full character specification (e.g., mon-hum-neu-mal), all NetHack options that were used (e.g., whether or not autopickup was used), which actions were allowed (see Table 1), which actions or action-sequences were hardcoded (e.g., engraving [see 50, “Elbereth” as an example]) and how many different seeds were used during training. We ask to report the average score obtained on 1000 episodes of randomly sampled and previously unseen seeds. We do not impose any restrictions during training, but at test time any save scumming (i.e., saving and loading previous checkpoints of the episode) or manipulation of the random number generator [e.g., 2] is forbidden. 2.5 Baseline Models For our baseline models, we encode the multi-modal observation ot as follows. Let the observation ot at time step t be a tuple (gt, zt) consisting of the 21 × 79 matrix of glyph identifiers and a 21- dimensional vector containing agent stats such as its (x, y)-coordinate, health points, experience level, and so on. We produce three dense representations based on the observation (see Figure 3). For every of the 5991 possible glyphs in NetHack (monsters, items, dungeon features, etc.), we learn a k-dimensional vector embedding. We apply a ConvNet (red) to all visible glyph embeddings as well as another ConvNet (blue) to the 9×9 crop of glyphs around the agent to create a dedicated egocentric representation for improved generalization [32, 71]. We found this egocentric representation to be an important component during preliminary experiments. Furthermore, we use an MLP to encode the hero’s stats (green). These vectors are concatenated and processed by another MLP to produce a low-dimensional latent representation ot of the observation. Finally, we employ a recurrent policy parameterized by an LSTM [33] to obtain the action distribution. For baseline results on the tasks above, we use a reduced action space that includes the movement, search, kick, and eat actions. For the main experiments, we train the agent’s policy for 1B steps in the environment using IMPALA [24] as implemented in TorchBeast [44]. Throughout training, we change NetHack’s seed for procedurally generating the environment after every episode. To demonstrate NetHack’s variability based on the character configuration, we train with four different agent characters: a neutral human male monk (mon-hum-neu-mal), a lawful dwarf female valkyrie (val-dwa-law-fem), a chaotic elf male wizard (wiz-elf-cha-mal), and a neutral human female tourist (tou-hum-neu-fem). More implementation details can be found in Appendix F. In addition, we present results using Random Network Distillation (RND) [13], a popular exploration technique for Deep RL. 
As previously discussed, exploration techniques which require returning to previously visited states such as Go-Explore are not suitable for use in NLE, but RND does not have this restriction. RND encourages agents to visit unfamiliar states by using the prediction error of a fixed random network as an intrinsic exploration reward, which has proven effective for hard exploration games such as Montezuma’s Revenge [12]. The intrinsic reward obtained from RND can create “reward bridges” between states which provide sparse extrinsic environmental rewards, thereby enabling the agent to discover new sources of extrinsic reward that it otherwise would not have reached. We replace the baseline network’s pixel-based feature extractor with the symbolic feature extractor described above for the baseline model, and use the best configuration of other RND hyperparameters documented by the authors (see Appendix G for full details). 3 Experiments and Results We present quantitative results on the suite of tasks included in NLE using a standard distributed Deep RL baseline and a popular exploration method, before additionally analyzing agent behavior qualitatively. For each model and character combination, we present results of the mean episode return over the last 100 episodes averaged for five runs in Figure 5. We discuss results for individual tasks below (see Table 5 in the appendix for full details). Staircase: Our agents learning to navigate the dungeon to the staircase > with a success rate of 77.26% for the monk, 50.42% for the tourist, 74.62% for the valkyrie, and 80.42% for the wizard. What surprised us is that agents learn to reliably kick in locked doors. This is a costly action to explore as the agent loses health points and might even die when accidentally kicking against walls. Similarly, the agent has to learn to reliably search for hidden passages and secret doors. Often, this involves using the search action many times in a row, sometimes even at many locations on the map (e.g., around all walls inside a room). Since NLE is procedurally generated, during training agents might encounter easier environment instances and use the acquired skills to accelerate learning on the harder ones [60, 18]. With a small probability, the staircase down might be generated near the agent’s starting position. Using RND exploration, we observe substantial gains in the success rate for the monk (+13.58pp), tourist (+6.52pp) and valkyrie (+16.34pp) roles, while lower results for wizard roles (−12.96pp). Pet: Finding the staircase while taking care of the hero’s pet (e.g., the starting kitten f or little dog d) is a harder task as the pet might get killed or fall into a trap door, making it impossible for the agent to successfully complete the episode. Compared to the staircase task, the agent success rates are generally lower (62.02% for monk, 25.66% for tourist, 63.30% for valkyrie, and wizard 66.80%). Again, RND exploration provides consistent and substantial gains. Eat: This tasks highlights the importance of testing with different character classes in NetHack. The monk and tourist start with a number edible items (e.g., food rations %, apples % and oranges %). A sub-optimal strategy is to consume all of these comestibles right at the start of the episode, potentially risking choking to death. In contrast, the other roles have to hunt for food, which our agents learn to do slowly over time for the valkyrie and wizard roles. 
By having more pressure to quickly learn a sustainable food strategy, the valkyrie learns to outlast other roles and survives the longest in the game (on average 1713 time steps). Interestingly, RND exploration leads to consistently worse results for this task. Gold: Locating gold $ in NetHack provides a relatively sparse reward signal. Still, our agents learn to collect decent amounts during training and learn to descend to deeper dungeon levels in search for more. For example, monk agents reach dungeon level 4.2 on average for the CNN baseline and even 5.0 using RND exploration. Score: As discussed in Section 2.4, we believe this task is the best candidate for comparing future methods regarding progress on NetHack. However, it is questionable whether a reward function based on NetHack’s in-game score is sufficient for training agents to solve the game. Our agents average at a score of 748 for monk, 11 for tourist, 573 for valkyrie, and 314 for wizard, with RND exploration again providing substantial gains (e.g. increasing the average score to 780 for monk). The resulting agents explore much of the early stages of the game, reaching dungeon level 5.4 on average for the monk with the deepest descent to level 11 achieving a high score of 4260 while leveling up to experience level 7 (see Table 6 in the appendix). Scout: The scout task shows a trend that is similar to the score task. Interestingly, we observe a lower experience level and in-game score, but agents descend, on average, similarly deep into the dungeon (e.g. level 5.5 for monk). This is sensible, since a policy that avoids to fight monsters, thereby lowering the chances of premature death, will not increase the in-game score as fast or level up the character as quickly, thus keeping the difficulty of spawned monsters low. We note that delaying to level up in order to avoid encountering stronger enemies early in the game is a known strategy human players adopt in NetHack [e.g. 50, “Why do I keep dying?” entry, January 2019 version]. Oracle: None of our agents find the Oracle @ (except for one lucky valkyrie episode). Locating the Oracle is a difficult exploration task. Even if the agent learns to make its way down the dungeon levels, it needs to search many, potentially branching, levels of the dungeon. Thus, we believe this task serves as a challenging benchmark for exploration methods in procedurally generated environments in the short term. Long term, many tasks harder than this (e.g., reaching Minetown, Mines’ End, Medusa’s Island, The Castle, Vlad’s Tower, Moloch’s Sanctum etc.) can be easily defined in NLE with very few lines of code. 3.1 Generalization Analysis Akin to [18], we evaluate agents trained on a limited set of seeds while still testing on 100 held-out seeds. We find that test performance increases monotonically with the size of the set of seeds that the agent is trained on. Figure 4 shows this effect for the score and staircase tasks. Training only on a limited number of seeds leads to high training performance, but poor generalization. The gap between training and test performance becomes narrow when training with at least 1000 seeds, indicating that at that point agents are exposed to sufficient variation during training to make memorization infeasible. We also investigate how model capacity affects performance by comparing agents with five different hidden sizes for the final layer (of the architecture described in Section 2.5). 
Figure 7 in the appendix shows that increasing the model capacity improves results on the score task but not on the staircase task, indicating that it is an important hyperparameter to consider, as also noted by [18].

3.2 Qualitative Analysis We analyse the causes of death of our agents during training and present results in Figure 9 in the appendix. We notice that starvation and traps become a less prominent cause of death over time, most likely because our agents, once they start to learn to descend dungeon levels and fight monsters, are more likely to die in combat before they starve or get killed by a trap. In the score and scout tasks, our agents quickly learn to avoid eating rotten corpses, but food poisoning again becomes prominent towards the end of training. We can see that gnome lords G, gnome kings G, chameleons :, and even mind flayers h become a more prominent cause of death over time, which can be explained by our agents leveling up and descending deeper into the dungeon. Chameleons are a particularly interesting entity in NetHack, as they regularly change their form to a random animal or monster, thereby adversarially confusing our agent with rarely seen symbols for which it has not yet learned a meaningful representation (similar to unknown words in natural language processing). We release a set of high-score recordings of our agents (see Appendix J on how to view them via a browser or terminal).

4 Related Work Progress in RL has historically been achieved both by algorithmic innovations and by the development of novel environments to train and evaluate agents. Below, we review recent RL environments and delineate their strengths and weaknesses as testbeds for current methods and future research.

Recent Game-Based Environments: Retro video games have been a major catalyst for Deep RL research. ALE [9] provides a unified interface to Atari 2600 games, which enables testing of RL algorithms on high-dimensional visual observations quickly and cheaply, resulting in numerous Deep RL publications over the years [4]. The Gym Retro environment [51] expands the list of classic games, but focuses on evaluating visual generalization and transfer learning on a single game, Sonic The Hedgehog. Both StarCraft: BroodWar and StarCraft II have been successfully employed as RL environments [64, 69] for research on, for example, planning [52, 49], multi-agent systems [27, 63], imitation learning [70], and model-free reinforcement learning [70]. However, the complexity of these games creates a high entry barrier, both in terms of the computational resources required and the intricate baseline models that require a high degree of domain knowledge to be extended.

3D games have proven to be useful testbeds for tasks such as navigation and embodied reasoning. ViZDoom [42] modifies the classic first-person shooter game Doom to construct an API for visual control; DeepMind Lab [7] presents a game engine based on Quake III Arena to allow for the creation of tasks based on the dynamics of the original game; Project Malmo [37], MineRL [29] and CraftAssist [35] provide visual and symbolic interfaces to the popular Minecraft game. While Minecraft is also procedurally generated and has complex environment dynamics that an agent needs to learn about, it is much more computationally demanding than NetHack (see Table 4 in the appendix). As a consequence, the focus has been on learning from demonstrations [29].
More recent work has produced game-like environments with procedurally generated elements, such as the Procgen Benchmark [18], MazeExplorer [30], and the Obstacle Tower environment [38]. However, we argue that, compared to NetHack or Minecraft, these environments do not provide the depth likely necessary to serve as long-term RL testbeds, due to the limited number of entities and environment interactions that agents have to learn to master. In contrast, NetHack agents have to acquire knowledge about the complex environment dynamics of hundreds of entities (dungeon features, items, monsters, etc.) to do well in a game that humans often take years of practice to solve. In conclusion, none of the current benchmarks combine a fast simulator with a procedurally generated environment, a hard exploration problem, a wide variety of complex environment dynamics, and numerous types of static and interactive entities. The unique combination of challenges present in NetHack makes NLE well-suited for driving research towards more general and robust RL algorithms.

Roguelikes as Reinforcement Learning Testbeds: We are not the first to argue for roguelike games to be used as testbeds for RL. Asperti et al. [5] present an interface to Rogue, the very first roguelike game and one of the simplest roguelikes in terms of game dynamics and difficulty. They show that policies trained with model-free RL algorithms can successfully learn rudimentary navigation. Similarly, Kanagawa and Kaneko [41] present an environment inspired by Rogue that provides a parameterizable generation of Rogue levels. Like us, Dannenhauer et al. [20] argue that roguelike games could be a useful RL testbed. They discuss the roguelike game Dungeon Crawl Stone Soup, but their position paper provides neither an RL environment nor experiments to validate their claims. Most similar to our work is gym_nethack [14, 15], which offers a Gym environment based on NetHack 3.6.0. We commend the authors for introducing NetHack as an RL environment, and to the best of our knowledge they were the first to suggest the idea. However, there are several design choices that limit the impact and longevity of their version as a research testbed. First, they heavily modify NetHack to enable agent interaction. In the process, gym_nethack disables various crucial game mechanics to simplify the game, its environment dynamics, and the resulting optimal policies. This includes removing obstacles like boulders, traps, and locked doors, as well as all item identification mechanics, making items much easier to employ and the overall environment much closer to its simpler predecessor, Rogue. Additionally, these modifications tie the environment to a particular version of the game. This is not ideal, as (i) players tend to use new versions of the game as they are released; hence, publicly available human data becomes progressively incompatible, thereby limiting the amount of data that can be used for learning from demonstrations; and (ii) older versions of NetHack tend to include well-documented exploits which may be discovered by agents (see Appendix I for exploits used in programmatic bots). In contrast, NLE is designed to make the interaction with NetHack as close as possible to the one experienced by humans playing the full game. NLE is the only environment exposing the entire game in all its complexity, allowing for larger-scale experimentation to push the boundaries of RL research.
5 Conclusion and Future Work The NetHack Learning Environment is a fast, complex, procedurally generated environment for advancing research in RL. We demonstrate that current state-of-the-art model-free RL serves as a sensible baseline, and we provide an in-depth analysis of learned agent behaviors. NetHack provides interesting challenges for exploration methods, given the extremely large number of possible states and the wide variety of environment dynamics to discover. Previously proposed formulations of intrinsic motivation based on seeking novelty [8, 53, 13] or maximizing surprise [56, 12, 57] are likely insufficient to make progress on NetHack, given that an agent will constantly find itself in novel states or observe unexpected environment dynamics. NetHack poses further challenges since, in order to win, an agent needs to acquire a wide range of skills such as collecting resources, fighting monsters, eating, manipulating objects, casting spells, or taking care of its pet, to name just a few. The multilevel dependencies present in NetHack could inspire progress in hierarchical RL and long-term planning [21, 40, 55, 68]. Transfer to unseen game characters, environment dynamics, or level layouts can be evaluated [67]. Furthermore, its richness and constant challenge make NetHack an interesting benchmark for lifelong learning [45, 54, 61, 48]. In addition, the extensive documentation about NetHack can enable research on using prior (natural language) knowledge for learning, which could lead to improvements in generalization and sample efficiency [10, 46, 72, 36]. Lastly, NetHack can also drive research on learning from demonstrations [1, 3], since a large collection of replay data is available. In sum, we argue that the NetHack Learning Environment strikes an excellent balance between complexity and speed while encompassing a variety of challenges for the research community. For future versions of the environment, we plan to support NetHack 3.7 once it is released, as it will further increase the variability of observations via Themed Rooms. This version will also introduce scripting in the Lua language, which we will leverage to enable users to create custom sandbox tasks, directly tapping into NetHack and its rich universe of entities and their complex interactions to define custom RL tasks.

6 Broader Impact To bridge the gap between the constrained world of video and board games and the open and unpredictable real world, there is a need for environments and tasks which challenge the limits of current Reinforcement Learning (RL) approaches. Some excellent challenges have been put forth over the years, demanding increases in the complexity of policies needed to solve a problem or in the scale needed to deal with increasingly photorealistic, complex environments. In contrast, our work seeks to be extremely fast to run while still testing the generalization and exploration abilities of agents in an environment which is rich, procedurally generated, and in which reward is sparse. The impact of solving these problems with minimal environment-specific heuristics lies in the development of RL algorithms which produce sample-efficient, robust, and general policies capable of more readily dealing with the uncertain and changing dynamics of "real world" environments. We do not solve these problems here, but rather provide the challenge and the testbed against which such improvements can be produced and evaluated.
Auxiliary to this, and in line with growing concerns that progress in Deep RL is driven largely by industrial labs having privileged access to the resources required to run environments and agents at massive scale, the environment presented here is computationally cheap to run and to collect data in. This democratizes access for researchers in more resource-constrained labs, while not sacrificing the difficulty and richness of the environment. We hope that as a result of this, and of the more general push to develop sample-efficient agents that require less data, the environmental impact of research using our environment will be reduced compared to that of more visually sophisticated ones.

Acknowledgements We thank the NetHack DevTeam for creating and continuously extending this amazing game over the last decades. We thank Paul Winner, Bart House, M. Drew Streib, Mikko Juola, Florian Mayer, Philip H.S. Torr, Stephen Roller, Minqi Jiang, Vegard Mella, Eric Hambro, Fabio Petroni, Mikayel Samvelyan, Vitaly Kurin, Arthur Szlam, Sebastian Riedel, Antoine Bordes, Gabriel Synnaeve, Jeremy Reizenstein, as well as the NeurIPS 2020, ICML 2020, and BeTR-RL 2020 reviewers and area chairs for their valuable feedback. Nantas Nardelli is supported by EPSRC/MURI grant EP/N019474/1. Finally, we would like to pay tribute to the 863,918,816 simulated NetHack heroes who lost their lives in the name of science for this project (thus far).
1. What is the focus and contribution of the paper regarding simulation environments for decision systems? 2. What are the strengths of the proposed environment, particularly in its description and potential uses? 3. What are the weaknesses of the paper, especially in terms of analysis and results? 4. How does the reviewer assess the usefulness and suitability of the environment compared to other alternatives?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The manuscript proposes a simulation environment that can be employed for training/evaluating/analyzing decision systems - including training RL models. The environment is built on a game - NetHack. Strengths Not being familiar with the game or the environment in detail, I think that the author(s) did a reasonably good job in describing the tasks that can employ this environment. However, for a learning algorithm designer to select this environment to train on, more information is needed. Weaknesses We need such environments for training our models. The analysis and results presented in the paper are insufficient for selecting this environment vs. alternatives.
NIPS
Title Distilling Representations from GAN Generator via Squeeze and Span Abstract In recent years, generative adversarial networks (GANs) have been an actively studied topic and shown to successfully produce high-quality realistic images in various domains. The controllable synthesis ability of GAN generators suggests that they maintain informative, disentangled, and explainable image representations, but leveraging and transferring their representations to downstream tasks is largely unexplored. In this paper, we propose to distill knowledge from GAN generators by squeezing and spanning their representations. We squeeze the generator features into representations that are invariant to semantic-preserving transformations through a network before they are distilled into the student network. We span the distilled representation of the synthetic domain to the real domain by also using real training data, to remedy the mode collapse of GANs and boost the student network performance in the real domain. Experiments justify the efficacy of our method and reveal its great significance in self-supervised representation learning. Code is available at https://github.com/yangyu12/squeeze-and-span. 1 Introduction Generative adversarial networks (GANs) [23] continue to achieve impressive image synthesis results thanks to large datasets and recent advances in network architecture design [5, 36, 37, 34]. GANs synthesize not only realistic images but also steerable ones towards specific content or styles [22, 52, 49, 33, 57, 53, 32]. These properties motivate a rich body of works to adopt powerful pretrained GANs for various computer vision tasks, including part segmentation [68, 56, 61], 3D reconstruction [67], and image alignment [48, 45], showing the strengths of GANs in the few-label regime. GANs typically produce fine-grained, disentangled, and explainable representations, which allow for higher data efficiency and better generalization [42, 68, 56, 61, 67, 48]. Prior works on GAN-based representation learning focus on learned features from either a discriminator network [50] or an encoder network mapping images back into the latent space [19, 17, 18]. However, there is still inadequate exploration of how to leverage or transfer the representations learned in generators. Inspired by the recent success of [68, 56, 61], we hypothesize that representations produced in generator networks are rich and informative for downstream discriminative tasks. Hence, this paper proposes to distill representations from feature maps of a pretrained generator network into a student network (see Fig. 1). In particular, we present a novel "squeeze-and-span" technique to distill knowledge from a generator into a representation network that is transferred to a downstream task (throughout the paper, the terms "representation network" and "student network" are used interchangeably, as are "generator network" and "teacher network"). Unlike the discriminator network, the generator network is not directly transferable to downstream image recognition tasks, as it cannot ingest an image as input, only a latent vector. Hence, we distill generator network representations into a representation network that can be further transferred to the target task. When fed a synthesized image, the representation network is optimized to produce representations similar to the generator network's.
However, the generator representations are very high-dimensional and not all of them are informative for the downstream task. Thus, we propose a squeeze module that purifies generator representations to be invariant to semantic-preserving transformations through an MLP and an augmentation strategy. As the joint optimization of the squeeze module and representation network can lead to a trivial solution (e.g. mapping representations to zero vector), we employ variance-covariance regularization in [3] while maximizing the agreement between the two networks. Finally, to address the potential domain gap between synthetic and real images, we span the learned representation of synthetic images by training the representation network additionally on real images. We evaluate our distilled representations on CIFAR10, CIFAR100 and STL10 with linear classification tasks as commonly done in representation learning. Experimental results show that squeezing and spanning generator representations outperforms methods that build on discriminator and encoding images into latent space. Moreover, our method achieves better results than discriminative SSL algorithms, including SimSiam [10] and VICReg [3] on CIFAR10 and CIFAR100, and competitive results on STL10, showing significant potential for transferable representation learning. Our contributions can be summarized as follows: We (1) provide a new taxonomy of representation and transfer learning in generative adversarial networks based on the location of the representations, (2) propose a novel “squeeze-and-span” framework to distill representations in the GAN generator and transfer them for downstream tasks, (3) empirically show the promise of utilizing generator features to benefit self-supervised representation learning. 2 Related Work GANs for Representation Learning. Significant progress has been made on the interpretability, manipulability, and versatility of the latent space and representation of GANs [36, 37, 34, 35]. It inspires a broad spectrum of GAN-based applications, such as semantic segmentation [68, 56, 61], visual alignment [48, 45], and 3D reconstruction [67], where GAN representations are leveraged to synthesize supervision signals efficiently. As GAN can be trained unsupervised, its representations are transferred to downstream tasks. DCGAN [50] proposes a convolutional GAN and uses the pre-trained discriminator for image classification. BiGAN [17] adopts an inverse mapping strategy to transfer the real domain knowledge for representation learning. While ALI [19] improves this idea with a stochastic network instead of a deterministic one, BigBiGAN [18] extends BiGAN with BigGAN [5] for large scale representation learning. GHFeat [59] trains a post hoc encoder that maps given images back into style codes of style-based GANs [36, 37, 35] for image representation. These works leverage or transfer representations from either discriminators or encoders. In contrast, our method reveals that the generator of a pre-trained GAN is typically more suitable for representation transfer with a proper distillation strategy. Knowledge Distillation (KD) aims at training a small student network, under the supervision of a relatively large teacher network [31]. In terms of the knowledge source, it can be broadly divided into logit-based KD and feature-based KD. Logit-based KD methods [41, 60, 12] optimize the divergence loss between the predicted class distributions, usually called logits or soft labels, of the teacher and student network. 
Feature-based KD methods [38, 2, 54] adopt the teacher model's intermediate layers as supervisory signals for the student. FitNet [51] introduces the output of hidden layers of the teacher network as supervision. AT [63] proposes to match attention maps between the teacher and student. FSP [62] calculates the flow between layers as guidance for distillation. Likewise, our method distills knowledge from the intermediate layers of a pre-trained GAN generator. Self-Supervised Representation Learning (SSL) pursues learning general transferable representations from unlabelled data. To produce informative self-supervision signals, the design of handcrafted pretext tasks has flourished for a long time, including jigsaw puzzle completion [46], relative position prediction [15, 16], rotation perception [21], inpainting [47], colorization [40, 65], masked image modeling [27, 58], etc. Instead of performing intra-instance prediction, contrastive learning-based SSL methods explore inter-instance relations. Applying the InfoNCE loss or its variants [26], they typically partition informative positive/negative data subsets and attempt to attract positive pairs while repelling negative ones. The MoCo series [28, 8, 11] introduces an offline memory bank to store large numbers of negative samples for contrast and a momentum encoder to keep them consistent. SimCLR [7] adopts an end-to-end manner to provide negatives in a mini-batch and introduces substantial data augmentation and a projection head to improve the performance significantly. Surprisingly, without negative pairs, BYOL [25] proposes a simple asymmetric SSL framework in which the momentum branch applies a stop gradient to avoid model collapse. It inspires a series of in-depth explorations, such as SimSiam [9], Barlow Twins [64], VICReg [3], etc. In this paper, despite the same end goal of obtaining transferable representations and the use of techniques from VICReg [3], we study the transferability of generator representations in pretrained GANs to discriminative tasks, use asymmetric instead of siamese networks, and design effective distillation strategies.

3 Rethinking GAN Representations Let $G : \mathcal{W} \rightarrow \mathcal{X}$ denote a generator network that maps a latent variable in $\mathcal{W}$ to an image in $\mathcal{X}$. An unconditional GAN trains $G$ adversarially against a discriminator network $D : \mathcal{X} \rightarrow [0, 1]$ that estimates the realness of given images,
$\min_G \max_D \; \mathbb{E}_{\mathbf{w} \sim P(\mathbf{w})}\big[\log\big(1 - D(G(\mathbf{w}))\big)\big] + \mathbb{E}_{\mathbf{x}}\big[\log D(\mathbf{x})\big].$ (1)
The adversarial learning does not require any human supervision and therefore allows for learning representations in an unsupervised way. In this paper, we show that the type of GAN representation and how it is obtained has a large effect on its transferability. To illustrate the impact on transferability, Fig. 2 plots the embedded 2D points of three different types of representations from an unconditional GAN, where color is assigned based on the class labels (the GAN is trained on CIFAR10; we use UMAP embeddings [44] for dimensionality reduction, and since the class labels of generated images are unknown, they are inferred by a classifier that is trained on the CIFAR10 training set and achieves around 95% top-1 accuracy on the CIFAR10 validation set). Note that we describe each representation in the following paragraphs.

Discriminator Feature The discriminator $D$, which is tasked to distinguish real and fake images, can be transferred to various recognition tasks [50]. Formally, let $D = d^{(L)} \circ d^{(L-1)} \circ \cdots \circ d^{(1)}$ denote the decomposition of the discriminator into $L$ consecutive layers. As shown in Fig. 1(a), given an image $\mathbf{x}$, the discriminator representation can be extracted by concatenating the features after average pooling from each discriminator block output,
$\mathbf{h}^d = [\mu(\mathbf{h}^d_1), \ldots, \mu(\mathbf{h}^d_L)], \quad \text{where } \mathbf{h}^d_i = d^{(i)} \circ \cdots \circ d^{(1)}(\mathbf{x}),$ (2)
where $\mu$ denotes the average pooling operator. However, Fig. 2(a) shows that the clusters of discriminator features are not significantly correlated with class information, indicating that real/fake discrimination does not necessarily relate to class separation.
Latent Variable An alternative way of transferring GAN representations is through the latent variable $\mathbf{w}$ [19, 17, 18]. In particular, one can invert the generator such that a latent-variable representation of a generated image is extracted through a learned encoder $E$; the representations of the encoder can then be transferred to a downstream task. While some works jointly train the encoder with the generator and discriminator [19, 17, 18], we consider training a post hoc encoder [6] given a fixed pre-trained generator $G$, as this provides a more consistent comparison with the other two strategies:
$E^* = \arg\min_E \; \mathbb{E}_{\mathbf{w} \sim P(\mathbf{w}),\, \mathbf{x} = G(\mathbf{w})} \big[ \|G(E(\mathbf{x})) - \mathbf{x}\|_1 + \mathcal{L}_{\mathrm{percep}}(G(E(\mathbf{x})), \mathbf{x}) + \lambda \|E(\mathbf{x}) - \mathbf{w}\|_2^2 \big],$ (3)
where $\mathcal{L}_{\mathrm{percep}}$ denotes the LPIPS loss [66] and $\lambda = 1.0$ is used to balance the loss terms. The key assumption behind this strategy is that latent variables encode various characteristics of generated images (e.g. [33, 57]), and hence extracting them from generated images results in learning transferable representations. Fig. 2(b) visualizes the embedding of the latent variables (in the StyleGAN family, to achieve more disentangled latent variables, the prior latent variable, which follows a standard normal distribution, is mapped into a learnable latent space via an MLP before being fed into the generator; in our work, "latent variables" refers to these transformed variables, also known as latent variables in W+ space in other works [1]). It shows that samples from the same class are neither clustered together nor well separated from other classes. In other words, latent variables do not disentangle the class information while encoding other information about image synthesis.

Generator Feature An overlooked practice is to utilize generator features. Typically, GAN generators transform a low-resolution (e.g. 4×4) feature map to a higher-resolution one (e.g. 256×256) and further synthesize images from the final feature map [17, 36] or from multi-scale feature maps [37]. The image synthesis is performed hierarchically: feature maps from low to high resolution encode the low-frequency to high-frequency components for composing an image signal [35]. This understanding is also evidenced by image editing works [22, 52, 49, 53, 32], which show that interfering with low-resolution feature maps leads to a structural and high-level change of an image, while altering high-resolution feature maps only induces subtle appearance changes. Therefore, generator features contain valuable hierarchical knowledge about an image. Formally, let $G = g^{(L)} \circ g^{(L-1)} \circ \cdots \circ g^{(1)}$ denote the decomposition of the generator into $L$ consecutive layers. Given a latent variable $\mathbf{w} \sim P(\mathbf{w})$ drawn from a prior distribution, we consider the concatenated features average-pooled from each generator block output,
$\mathbf{h}^g = [\mu(\mathbf{h}^g_1), \ldots, \mu(\mathbf{h}^g_L)], \quad \text{where } \mathbf{h}^g_i = g^{(i)} \circ \cdots \circ g^{(1)}(\mathbf{w}).$ (4)
As Fig. 2(c) shows, generator features within the same class are naturally clustered. This result suggests that generators contain identifiable representations that can be transferred for downstream tasks. However, as GANs do not natively provide a reverse model for the accurate recovery of generator features, it is still inconvenient to extract generator features for any given image. This limitation motivates us to distill the valuable features from GAN generators.
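To make Eq. (4) concrete, the following is a minimal PyTorch sketch of how the concatenated, average-pooled per-block generator features could be collected for a batch of latent codes. The block decomposition (`initial_input`, `synthesis_blocks`) is a hypothetical interface used only for illustration, not the actual StyleGAN2-ADA API used in the experiments.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def generator_features(generator, w: torch.Tensor) -> torch.Tensor:
    """Collect h^g = [mu(h^g_1), ..., mu(h^g_L)] from a pretrained generator (Eq. 4).

    Assumes `generator.synthesis_blocks` is an iterable of per-resolution blocks,
    each mapping (features, w) -> features; real StyleGAN code differs in detail.
    """
    pooled = []
    h = generator.initial_input(w)               # hypothetical: constant / low-res input
    for block in generator.synthesis_blocks:     # hypothetical attribute
        h = block(h, w)                          # h^g_i, a (N, C_i, H_i, W_i) feature map
        pooled.append(F.adaptive_avg_pool2d(h, 1).flatten(1))  # mu(h^g_i): (N, C_i)
    return torch.cat(pooled, dim=1)              # concatenated teacher features h^g
```

In the method itself, these pooled features are not used raw; they are further compressed by the squeeze module $T_\phi$ described in the next section before serving as the distillation target.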
Figure 3: Squeeze and span representation from the GAN generator. Left: the pretrained generator $G$ and the squeeze module $T_\phi$ constitute the teacher network, producing squeezed representations that are further distilled into a student network $S_\theta$ (Squeeze part). The student network is also trained on real data (Span part). Right: the generator structure and our squeeze module. We average-pool (denoted as $\mu$) the feature maps from each synthesis block and transform them with a linear layer plus an MLP, termed the squeeze module.

4 Squeeze-and-Span Representations from GAN Generator This section introduces the "Squeeze-and-Span" technique to distill representations from GANs into a student network, which can then be readily transferred to downstream tasks, e.g. image classification. Let $S_\theta : \mathcal{X} \rightarrow \mathcal{H}$ denote a student network that maps a given image into the representation space. A naive way of representation learning tasks the student network with predicting the teacher representation, which can be formulated as the following optimization problem:
$\min_\theta \; \mathbb{E}_{\mathbf{w} \sim P(\mathbf{w})} \, \|S_\theta(G(\mathbf{w})) - \mathbf{h}^g(\mathbf{w})\|_2^2,$ (5)
where we use the mean squared error to measure the prediction loss and $\mathbf{h}^g(\mathbf{w})$ to denote the dependence of $\mathbf{h}^g$ on $\mathbf{w}$. However, this formulation has two problems. First, representations extracted through multiple layers of the generator are likely to contain information that is necessary for image synthesis but significantly redundant for downstream tasks. Second, as the student network is only optimized on synthetic images, it is likely to perform poorly at extracting features from real images in the downstream task, due to the potential domain gap between real and synthetic images. To mitigate these issues, we propose the "Squeeze and Span" technique as illustrated in Fig. 3.

4.1 Squeezing Informative Representations To alleviate the first issue, that the generator representation may contain a large portion of information irrelevant to downstream tasks, we introduce a squeeze (or bottleneck) module $T_\phi$ (Fig. 3) that squeezes informative representations out of the generator representation. In addition, we transform the generated image via a semantic-preserving image transformation $a$ (e.g. color jittering and cropping) before feeding it to the student network. Eq. 5 can then be rewritten as
$\min_{\theta, \phi} \; \mathcal{L}_{\mathrm{RD}} = \mathbb{E}_{\mathbf{w} \sim P(\mathbf{w}),\, a \sim \mathcal{A}} \, \|S_\theta(a[G(\mathbf{w})]) - T_\phi(\mathbf{h}^g(\mathbf{w}))\|_2^2,$ (6)
where the image transformation $a$ is randomly sampled from $\mathcal{A}$. In words, we seek to distill compact representations from the generator among the ones that are invariant to the data augmentation $\mathcal{A}$, inspired by the success of recent self-supervised methods [7, 10]. An informal interpretation is that, similar to Chen & He [10], if we consider the alternating subproblem that fixes $\theta$ and solves for $\phi$, the optimal solution would have the effect of $T_{\phi^*}(\mathbf{h}^g(\mathbf{w})) \approx \mathbb{E}_{a \sim \mathcal{A}}\, S_\theta(a[G(\mathbf{w})])$, which implies that Eq. 6 encourages $T_\phi$ to squeeze out a transformation-invariant representation.
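As an illustration of Eq. (6), the following is a minimal, hedged sketch of the squeeze distillation term: the student sees an augmented synthetic image while the squeeze module sees the pooled generator features, and the two outputs are matched with a mean squared error. The `augment`, `squeeze_module`, and `student` callables, and the reuse of the `generator_features` helper from the previous sketch, are placeholder assumptions rather than the exact training code.

```python
import torch
import torch.nn.functional as F

def squeeze_distillation_loss(generator, squeeze_module, student, augment, w):
    """L_RD of Eq. (6): || S_theta(a[G(w)]) - T_phi(h^g(w)) ||_2^2, averaged over the batch."""
    with torch.no_grad():                       # the pretrained generator stays frozen
        x_syn = generator(w)                    # synthesized images G(w)
        h_g = generator_features(generator, w)  # pooled per-block features h^g(w), see sketch above
    z_teacher = squeeze_module(h_g)             # T_phi squeezes the features into a compact vector
    z_student = student(augment(x_syn))         # S_theta applied to an augmented view a[G(w)]
    return F.mse_loss(z_student, z_teacher)
```

On its own this objective admits the degenerate constant solution discussed next, which is why the variance and covariance regularizers of Eqs. (9) and (10) are added on top of it.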
However, similar to the siamese networks in SSL [10], there exists a trivial solution to Eq. 6: both the squeeze module and the student network degenerate to outputting a constant for any input. Therefore, we borrow techniques from SSL methods and add regularization terms to the distillation loss. In particular, we employ variance-covariance regularization [3] to explicitly encourage the representation dimensions to be uncorrelated and to vary sufficiently. Formally, for a mini-batch of $N$ samples, we denote the squeezed generator representations and the student representations by
$Z_g = [T_\phi(\mathbf{h}^g(\mathbf{w}_1)), T_\phi(\mathbf{h}^g(\mathbf{w}_2)), \ldots, T_\phi(\mathbf{h}^g(\mathbf{w}_N))] \in \mathbb{R}^{M \times N},$ (7)
$Z_s = [S_\theta(a_1[G(\mathbf{w}_1)]), S_\theta(a_2[G(\mathbf{w}_2)]), \ldots, S_\theta(a_N[G(\mathbf{w}_N)])] \in \mathbb{R}^{M \times N},$ (8)
where $\mathbf{w}_i \sim P(\mathbf{w})$ and $a_i \sim \mathcal{A}$ denote random samples of the latent variable and the data augmentation operator. The variance loss encourages the standard deviation of each representation dimension to be greater than 1,
$\mathcal{L}_{\mathrm{var}}(Z) = \frac{1}{M} \sum_{j=1}^{M} \max\big(0,\, 1 - \sqrt{\mathrm{Var}(z_j) + \epsilon}\big),$ (9)
where $z_j$ represents the $j$-th dimension of the representation $z$. The covariance loss encourages any pair of dimensions to be uncorrelated,
$\mathcal{L}_{\mathrm{cov}}(Z) = \frac{1}{M} \sum_{i \neq j} [C(Z)]^2_{ij}, \quad \text{where } C(Z) = \frac{1}{N-1} \sum_{i=1}^{N} (\mathbf{z}_i - \bar{\mathbf{z}})(\mathbf{z}_i - \bar{\mathbf{z}})^\top, \quad \bar{\mathbf{z}} = \frac{1}{N} \sum_{i=1}^{N} \mathbf{z}_i.$ (10)
To this end, the loss for squeezing representations from the generator into the student network can be summarized as
$\mathcal{L}_{\mathrm{squeeze}} = \lambda \mathcal{L}_{\mathrm{RD}} + \mu \, [\mathcal{L}_{\mathrm{var}}(Z_s) + \mathcal{L}_{\mathrm{var}}(Z_g)] + \nu \, [\mathcal{L}_{\mathrm{cov}}(Z_s) + \mathcal{L}_{\mathrm{cov}}(Z_g)].$ (11)
Discussion Our work differs from multi-view representation learning methods [3, 10] in the following aspects. (1) Our work studies the transfer of a generative model that does not natively lend itself to representation extraction, whereas most multi-view representation learning learns representations with discriminative pretext tasks. (2) Unlike the typical Siamese networks in multi-view representation learning, the two networks in our work are asymmetric: one takes in noise and outputs an image, and the other works in the reverse fashion. (3) While most multi-view representation learning methods learn representation networks from scratch, our work distills representations from a pre-trained model. Specifically, whereas most SSL methods create multi-view representations by transforming input images in multiple ways, we instead pursue different representation views from a well-trained data generator.
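For completeness, here is a small, illustrative PyTorch sketch of the variance and covariance regularizers of Eqs. (9) and (10), written for a batch-first matrix of shape (N, M), i.e. the transpose of the notation above; the epsilon value and the exact reductions are assumptions in the spirit of VICReg rather than the paper's exact settings.

```python
import torch

def variance_loss(z: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    """Eq. (9): hinge on the per-dimension standard deviation, pushing it above 1.
    z has shape (N, M): N samples, M representation dimensions."""
    std = torch.sqrt(z.var(dim=0) + eps)
    return torch.clamp(1.0 - std, min=0.0).mean()

def covariance_loss(z: torch.Tensor) -> torch.Tensor:
    """Eq. (10): penalize squared off-diagonal entries of the covariance matrix C(Z)."""
    n, m = z.shape
    z = z - z.mean(dim=0, keepdim=True)
    cov = (z.T @ z) / (n - 1)                     # (M, M) covariance matrix
    off_diag = cov - torch.diag(torch.diag(cov))  # zero out the diagonal
    return off_diag.pow(2).sum() / m

# L_squeeze (Eq. 11), with lam / mu / nu as weighting hyperparameters:
# loss = lam * l_rd + mu * (variance_loss(z_s) + variance_loss(z_g)) \
#        + nu * (covariance_loss(z_s) + covariance_loss(z_g))
```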
4.2 Spanning Representations from Synthetic to Real Domain Here we address the second problem, the domain gap between the synthetic and real domains, which stems from two factors. First, the synthesized images may be of low quality. This aspect has improved greatly with recent GAN modelling [37, 35] and is not our concern here. Second, and more importantly, GANs are notorious for the mode collapse issue, meaning that the synthetic data may cover only part of the modes of the real data distribution. In other words, the synthetic dataset appears to be a subset of the real dataset. To mitigate the harm of mode collapse, we include real data in the training data of the student network. In particular, in each training step, synthetic data and real data together make up a mini-batch of training data. For synthetic data, the aforementioned squeeze loss is employed. For real data, we employ the original VICReg loss. Specifically, given a mini-batch of real data $\{\mathbf{x}^r_i\}_{i=1}^N$, each image $\mathbf{x}^r_i$ is transformed twice with random data augmentation to obtain two views $a_i(\mathbf{x}^r_i)$ and $a'_i(\mathbf{x}^r_i)$, where $a_i, a'_i \sim \mathcal{A}$. The corresponding representations $Z_r$ and $Z'_r$ are obtained by feeding the transformed images into $S_\theta$ similarly to Eq. 8. Then the loss on real data is computed as
$\mathcal{L}_{\mathrm{span}} = \lambda \mathcal{L}'_{\mathrm{RD}} + \mu \, [\mathcal{L}_{\mathrm{var}}(Z_r) + \mathcal{L}_{\mathrm{var}}(Z'_r)] + \nu \, [\mathcal{L}_{\mathrm{cov}}(Z_r) + \mathcal{L}_{\mathrm{cov}}(Z'_r)],$ (12)
where $\mathcal{L}'_{\mathrm{RD}}$ denotes a self-distillation term measuring the distance between the two-view representations of real images. The overall loss is computed by simply combining the synthetic-data loss and the real-data loss as $\mathcal{L}_{\mathrm{total}} = \alpha \mathcal{L}_{\mathrm{squeeze}} + (1 - \alpha) \mathcal{L}_{\mathrm{span}}$, where $\alpha = 0.5$ denotes the proportion of synthetic data in a mini-batch of training samples. From a technical perspective, spanning representations may look like a simple combination of representation distillation and SSL using VICReg [3]. We interpret this combination as spanning the representation from the synthetic domain to the real domain: the representation is dominantly learned in the synthetic domain and generalized to the real domain, and the student network learns to fuse the representation spaces of the two domains into a consistent one during this spanning process. Our experimental evaluation shows that "squeeze and span" can outperform VICReg on real data, suggesting that the squeezed representations make a nontrivial contribution to the learned representation.

5 Experiments 5.1 Setup Dataset and pre-trained GAN Our methods are mainly evaluated on CIFAR10, CIFAR100, STL10, ImageNet100, and ImageNet. CIFAR10 and CIFAR100 [39] are two image datasets containing small images at 32×32 resolution with 10 and 100 classes, respectively, both split into 50,000 images for training and 10,000 for validation. STL-10 [13], which is derived from ImageNet [14], includes images at 96×96 resolution over 10 classes. STL-10 contains 500 labeled images per class (i.e. 5K in total) with an additional 100K unlabeled images for training, and 800 labeled images per class for testing. ImageNet100 [55] contains images of 100 classes, among which 126,689 images are regarded as the train split and 5,000 images are taken as the validation split. ImageNet [14] is a popular large-scale image dataset of 1000 classes, which is split into 1,281,167 training images and 50,000 validation images. We adopt StyleGAN2-ADA (https://github.com/NVlabs/stylegan2-ada-pytorch) for representation distillation since it has good stability and high performance. GANs are all pre-trained on the training split. More details can be found in the supplementary material.

Implementation details The squeeze module uses linear layers to transform the generator features into vectors with 2048 dimensions, which are then summed up and fed into a three-layer MLP to get a 2048-d teacher representation. On CIFAR10 and CIFAR100, we use a ResNet18 [30] of the CIFAR variant as the backbone. On STL10, we use ResNet18 as the backbone. On ImageNet100 and ImageNet, we use ResNet50 as the backbone. On top of the backbone network, a five-layer MLP is added for producing the representation. We use an SGD optimizer with a cosine learning rate decay [43] schedule to optimize our models. The actual learning rate is linearly scaled according to the ratio of batch size to 256, i.e. base_lr × batch_size/256 [24]. We follow the common practice in SSL [7, 55, 29] and evaluate the distilled representation with a linear classification task. More details are available in the supplementary material.

5.2 Transferring GAN Representation Compared methods In this section, we justify the advantage of distilling generator representations by comparing the performance of different ways of transferring GAN representations.
In particular, we consider the following competitors: • Discriminator. As the discriminator network receives an image as input and is thus ready for representation extraction, we directly extract features (either the single penultimate feature or multiple features as in Eq. 2) using a pre-trained discriminator and train a linear classifier on top of them. • Encoding. We train a post hoc encoder, with or without real images involved in the training process, as in Eq. 3. • Distilling latent variable. We employ the vanilla distillation or the squeeze method on latent variables, with data augmentation enabled. • Distilling generator feature. Our method as described in Section 4.

Results Table 1 presents the comparison results, from which we can draw the following conclusions. (1) Representation distillation, whether from the latent variable or the generator feature, significantly outperforms the discriminator and encoding baselines. We think this is because image reconstruction and realness discrimination are not suitable pretext tasks for representation learning. (2) Distillation from the latent variable achieves comparable performance to distillation from the generator feature, despite the former showing entangled class information (Fig. 2). This result can be attributed to the projection head in the student network. (3) Our method works significantly better than vanilla distillation, which does not employ a squeeze module. This result suggests that our method squeezes out more informative representations that help to improve student performance.

5.3 Comparison to SSL Linear classification We further compare our methods to SSL algorithms such as SimSiam [10] and VICReg [3] in different training data domains: real, synthetic, and a mixture of real and synthetic. Table 2 presents the linear classification results, from which we want to highlight the following points. (1) Both SimSiam and VICReg perform worse when pre-trained on only synthetic data than on only real data, indicating the existence of a domain gap between synthetic and real data. (2) Our methods outperform SimSiam and VICReg in the synthetic and mixture domains, suggesting that distillation of generator features contributes additional improvement to SSL. (3) Our "Squeeze and Span" is the best among all competitors on CIFAR10, CIFAR100, and STL10. (4) Our method outperforms VICReg by a large margin (6.90% Top-1 Acc) on ImageNet100 and by a clear increase (0.48% Top-1 Acc) on ImageNet.

Transfer learning As one goal of representation learning is transferability to other datasets, we further conduct a comprehensive transfer learning evaluation. We follow the protocol in [20] and use its released source code (https://github.com/linusericsson/ssl-transfer) to conduct a thorough transfer learning evaluation of our models pre-trained on ImageNet100/ImageNet. In particular, the learned representations are mainly evaluated for (1) linear classification on 11 datasets including Aircraft, Caltech101, Cars, CIFAR10, CIFAR100, DTD, Flowers, Food, Pets, SUN397, and VOC2007; (2) finetuning on three downstream tasks and datasets, including object detection on PASCAL VOC, surface normal estimation on NYUv2, and semantic segmentation on ADEChallenge2016. Please refer to [20] for the details of the evaluation protocol. The results are presented in Table 3 and Table 4, where we make the following observations. (1) As depicted in Table 3, our method achieves better transferability than VICReg on the mixed data, whether pre-trained on ImageNet100 or ImageNet.
Our method beats VICReg on nearly all other datasets, and the improvement in average accuracy is 3.40 points with models pre-trained on ImageNet100 and 1.00 points with models pre-trained on ImageNet. (2) As depicted in Table 4, representations learned with our method can be transferred well to various downstream tasks such as object detection, surface normal estimation, and semantic segmentation, and consistently show higher performance than VICReg. We believe these results suggest that generator features have strong transferability and great promise to contribute to self-supervised representation learning.

5.4 Ablation Study Effect of squeeze and span The effect of our method is studied by adding modules one by one to the vanilla version of representation distillation (a). (a)→(b): after adding data augmentation, a significant improvement can be observed, suggesting that representation invariance to data augmentation is crucial for linear classification performance. This result inspires us to make the teacher representation more invariant. (b)→(c): the learnable $T_\phi$ is introduced to squeeze out an invariant representation as the teacher. However, trivial performance is obtained (10% top-1 accuracy, no better than random guessing), implying that the models learn trivial solutions, probably constant outputs. (c)→(d) & (e): the regularization terms are added, and the student network now achieves meaningful performance, which indicates that the trivial solution is prevented. Moreover, using both regularizations achieves the best performance, outperforming (b) without "squeeze". (e)→(f): the training data is supplemented with real data, i.e. adding "span", and the performance is further improved.

Domain gap issue We calculate the squared MMD [4] between the representations of synthetic and real data to measure their gap in representation space. Table 6 shows that "Squeeze and Span" (Sq&Sp) reduces the MMD compared to "Squeeze" by an order of magnitude on CIFAR10 and CIFAR100 and by a large margin on STL10, clearly justifying the efficacy of "span" in reducing the domain gap.

Impact of generator We further compare the performance of our method when using GAN checkpoints of different quality. Fig. 4 shows the top-1 accuracy with respect to FID, which indicates the quality of the GAN. It is not surprising that GAN quality significantly impacts our method: the higher the quality of the generator we utilize, the higher the performance of the learned representation. It is noteworthy that a moderately trained GAN (FID < 11.03) is already able to contribute an additional performance improvement on CIFAR100 when compared to VICReg trained on a mixture of synthetic and real data. In the appendix, we further analyze the impact of generator feature choices and GAN architectures on the distillation performance.

6 Conclusions This paper proposes to "squeeze and span" representations from the GAN generator to extract transferable representations for downstream tasks like image classification. The key techniques, "squeeze" and "span", aim to mitigate two issues: the GAN generator contains information necessary for image synthesis but unnecessary for downstream tasks, and there is a domain gap between synthetic and real data. Experimental results justify the effectiveness of our method and show its great promise in self-supervised representation learning. We hope more attention can be drawn to studying GANs for representation learning.
Limitation and future work The current form of our work still has several limitations that need to be studied in the future. (1) Since we distill representations from GANs, the performance of the learned representation relies on the quality of the pretrained GAN and is thus limited by the performance of existing GAN techniques. Whether a prematurely trained GAN can also contribute to self-supervised representation learning, and how to distill from it effectively, is therefore an interesting open problem. (2) In this paper, the squeeze module uses the widely used transformation invariance as the learning objective of representation distillation. We leave other learning objectives tailored to specific downstream tasks as future work. (3) A more comprehensive, larger-scale empirical study is left as future work to further exhibit the potential of our method. Acknowledgments and Disclosure of Funding This work was supported by the National Key R&D Program of China under Grant 2018AAA0102801, National Natural Science Foundation of China under Grant 61620106005, EPSRC Visual AI grant EP/T028572/1.
1. What is the primary contribution of the paper regarding using pre-trained generators for downstream tasks? 2. What are the strengths of the proposed approach, particularly in its novelty and improvements over existing methods? 3. Do you have any concerns or questions about the 'span' module and its similarity to VICReg? 4. How convinced are you of the claims made about the 'span' module's ability to address mode collapse and the gap between real and fake data? 5. Is there sufficient motivation for using the generator for distillation instead of the classifier?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The main idea of the paper is to use the pre-trained generator network of a GAN to improve the downstream task of classification. Specifically, the paper uses a VICReg-based method to train a network S (called the student) that takes real inputs and applies different augmentations before feeding them to S; the feature outputs of these augmentations are regularized to be similar (invariant to the transformation) while the variance-covariance terms are kept large (this part is called the "span" module). Then, the method applies the pre-trained generator to synthesize images, and these images are input into S to produce features which are regularized to match the features obtained from the concatenation of the feature maps (passed through a module called the "squeeze" module) that generated the synthetic image. The losses for synthetic images are similar to those on real images, except for the distillation loss between the generator's features and the output of the student's features. The experiments on low-resolution datasets (CIFAR-10, CIFAR-100 and STL10) show improvements over two existing works, SimSiam and VICReg, trained on the real, fake or mixed data. The paper conducted an ablation study to understand the contributions of different components. However, the paper is written the other way around: starting with the "squeeze" module and then improving it with the "span" module, which I think is meant to make the idea more interesting (but probably leaves the reader a bit confused about what the actual contribution is). In this order, the paper investigates different ways to extract features from a GAN: from the discriminator, the latent variable, and the generator. The 2D visualisation and experiments suggest the features from the generator are the most discriminative. Inspired by the generator's features, the paper introduces two modules, "squeeze" and "span", to extract the generator features for distillation. Strengths And Weaknesses Strength I think the main contribution of this paper is the "squeeze" module and how the generator is used to improve existing self-supervised models. It is novel enough to me, and it brings some improvements (2%) over the simple approach of mixing the real/fake datasets. The paper is well-written and easy to read. Weakness From my perspective, counting the "span" module as a contribution is questionable, as it is just similar to the VICReg method. Therefore, I am not sure the claims that the span module addresses mode collapse and the gap between real and fake data are convincing enough, unless the authors can bring some theoretical / empirical results to show that. Otherwise, it is not well motivated why we need the generator for distillation instead of the classifier. At the moment, I rate this as borderline accept, but the rating can be increased or decreased depending also on the comments of other reviewers, who may point out something I am missing, and on the rebuttal. Questions See above Limitations Yes
NIPS
Title Distilling Representations from GAN Generator via Squeeze and Span Abstract In recent years, generative adversarial networks (GANs) have been an actively studied topic and shown to successfully produce high-quality realistic images in various domains. The controllable synthesis ability of GAN generators suggests that they maintain informative, disentangled, and explainable image representations, but leveraging and transferring their representations to downstream tasks is largely unexplored. In this paper, we propose to distill knowledge from GAN generators by squeezing and spanning their representations. We squeeze the generator features into representations that are invariant to semantic-preserving transformations through a network before they are distilled into the student network. We span the distilled representation of the synthetic domain to the real domain by also using real training data to remedy the mode collapse of GANs and boost the student network performance in a real domain. Experiments justify the efficacy of our method and reveal its great significance in self-supervised representation learning. Code is available at https://github.com/yangyu12/squeeze-and-span. 1 Introduction Generative adversarial networks (GANs) [23] continue to achieve impressive image synthesis results thanks to large datasets and recent advances in network architecture design [5, 36, 37, 34]. GANs synthesize not only realistic images but also steerable ones towards specific content or styles [22, 52, 49, 33, 57, 53, 32]. These properties motivate a rich body of works to adopt powerful pretrained GANs for various computer vision tasks, including part segmentation [68, 56, 61], 3D reconstruction [67], image alignment [48, 45], showing the strengths of GANs in the few-label regime. GANs typically produce fine-grained, disentangled, and explainable representations, which allow for higher data efficiency and better generalization [42, 68, 56, 61, 67, 48]. Prior works on GAN-based representation learning focus on learned features from either a discriminator network [50] or an encoder network mapping images back into the latent space [19, 17, 18]. However, there is still inadequate exploration about how to leverage or transfer the learned representations in generators. Inspired by the recent success of [68, 56, 61], we hypothesize that representations produced in generator networks are rich and informative for downstream discriminative tasks. Hence, this paper proposes to distill representations from feature maps of a pretrained generator network into a student network (see Fig. 1). In particular, we present a novel “squeeze-and-span” technique to distill knowledge from a generator into a representation network2 that is transferred to a downstream task. Unlike transferring discrimi- ∗Equal Contribution 2Throughout the paper, two terms“representation network” and “student network” are used interchangeably, as are the “generator network” and “teacher network”. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). nator network, generator network is not directly transferable to downstream image recognition tasks, as it cannot ingest image input but a latent vector. Hence, we distill generator network representations into a representation network that can be further transferred to the target task. When fed in a synthesized image, the representation network is optimized to produce similar representations to the generator network’s. 
However, the generator representations are very high-dimensional and not all of them are informative for the downstream task. Thus, we propose a squeeze module that purifies generator representations to be invariant to semantic-preserving transformations through an MLP and an augmentation strategy. As the joint optimization of the squeeze module and representation network can lead to a trivial solution (e.g. mapping representations to zero vector), we employ variance-covariance regularization in [3] while maximizing the agreement between the two networks. Finally, to address the potential domain gap between synthetic and real images, we span the learned representation of synthetic images by training the representation network additionally on real images. We evaluate our distilled representations on CIFAR10, CIFAR100 and STL10 with linear classification tasks as commonly done in representation learning. Experimental results show that squeezing and spanning generator representations outperforms methods that build on discriminator and encoding images into latent space. Moreover, our method achieves better results than discriminative SSL algorithms, including SimSiam [10] and VICReg [3] on CIFAR10 and CIFAR100, and competitive results on STL10, showing significant potential for transferable representation learning. Our contributions can be summarized as follows: We (1) provide a new taxonomy of representation and transfer learning in generative adversarial networks based on the location of the representations, (2) propose a novel “squeeze-and-span” framework to distill representations in the GAN generator and transfer them for downstream tasks, (3) empirically show the promise of utilizing generator features to benefit self-supervised representation learning. 2 Related Work GANs for Representation Learning. Significant progress has been made on the interpretability, manipulability, and versatility of the latent space and representation of GANs [36, 37, 34, 35]. It inspires a broad spectrum of GAN-based applications, such as semantic segmentation [68, 56, 61], visual alignment [48, 45], and 3D reconstruction [67], where GAN representations are leveraged to synthesize supervision signals efficiently. As GAN can be trained unsupervised, its representations are transferred to downstream tasks. DCGAN [50] proposes a convolutional GAN and uses the pre-trained discriminator for image classification. BiGAN [17] adopts an inverse mapping strategy to transfer the real domain knowledge for representation learning. While ALI [19] improves this idea with a stochastic network instead of a deterministic one, BigBiGAN [18] extends BiGAN with BigGAN [5] for large scale representation learning. GHFeat [59] trains a post hoc encoder that maps given images back into style codes of style-based GANs [36, 37, 35] for image representation. These works leverage or transfer representations from either discriminators or encoders. In contrast, our method reveals that the generator of a pre-trained GAN is typically more suitable for representation transfer with a proper distillation strategy. Knowledge Distillation (KD) aims at training a small student network, under the supervision of a relatively large teacher network [31]. In terms of the knowledge source, it can be broadly divided into logit-based KD and feature-based KD. Logit-based KD methods [41, 60, 12] optimize the divergence loss between the predicted class distributions, usually called logits or soft labels, of the teacher and student network. 
Feature-based KD methods [38, 2, 54] adopt the teacher model’s intermediate layers as supervisory signals for the student. FitNet [51] introduces the output of hidden layers of the teacher network as supervision. AT [63] proposes to match attention maps between the teacher and student. FSP [62] calculates flow between layers as guidance for distillation. Likewise, our method distills knowledge from intermediate layers from a pre-trained GAN generator. Self-Supervised Representation Learning (SSL) pursues learning general transferable representations from unlabelled data. To produce informative self-supervision signals, the design of handcrafted pretext tasks has flourished for a long time, including jigsaw puzzle completion [46], relative position prediction [15, 16], rotation perception [21], inpainting [47], colorization [40, 65], masked image modeling [27, 58], etc. Instead of performing intra-instance prediction, contrastive learning-based SSL methods explore inter-instance relation. Applying the InfoNCE loss or its variants [26], they typically partition informative positive/negative data subsets and attempt to attract positive pairs while repelling negative ones. MoCo series [28, 8, 11] introduce an offline memory bank to store large negative samples for contrast and a momentum encoder to make them consistent. SimCLR [7] adopts an end-to-end manner to provide negatives in a mini-batch and introduce substantial data augmentation and a projection head to improve the performance significantly. Surprisingly, without negative pairs, BYOL [25] proposes a simple asymmetry SSL framework with the momentum branch applying the stop gradient to avoid model collapse. It inspires a series of in-deep explorations, such as SimSiam [9], Barlow Twins [64], VICReg [3], etc. In this paper, despite the same end goal of obtaining transferable representations and the use of techniques from VICReg [3], we study the transferability of generator representations in pretrained GANs to discriminative tasks, use asymmetric instead of siamese networks, and design effective distillation strategies. 3 Rethinking GAN Representations Let G : W → X denote a generator network that maps a latent variable in W to an image in X . An unconditional GAN trains G adversarially against a discriminator network D : X → [0, 1] that estimates the realness of the given images, max G min D E log(1−D(G(w))) + logD(x). (1) The adversarial learning does not require any human supervision and therefore allows for learning representations in an unsupervised way. In this paper, we show that the type of GAN representations and how they are obtained has a large effect on their transferrability. To illustrate the impact on the transferability, Fig. 2 plots the embedded 2D points of three different type of representations from an unconditional GAN, where color is assigned based on the class labels.3. Note that we describe each representation in the following paragraphs. Discriminator Feature The discriminator D, which is tasked to distinguish real and fake images, can be transferred to various recognition tasks [50]. Formally, let D = d(L) ◦ d(L−1) ◦ · · · ◦ d(1) denote the decomposition of a discriminator into L consecutive layers. As shown in Fig. 1(a), given an image x, the discriminator representation can be extracted by concatenating the features after average pooling from each discriminator block output, hd = [µ(hd1), . . . , µ(h d L)], where h d i = d (i) ◦ · · · ◦ d(1)(x), (2) where µ denotes the average pooling operator. However, Fig. 
Latent Variable An alternative way of transferring a GAN representation is through its latent variable w [19, 17, 18]. In particular, one can invert the generator so that a latent-variable representation of a generated image is extracted through a learned encoder E. The representations of the encoder can then be transferred to a downstream task. While some works jointly train the encoder with the generator and discriminator [19, 17, 18], we consider training a post hoc encoder [6] given a fixed pre-trained generator G, as this provides a more consistent comparison with the other two strategies:

$E^* = \arg\min_E \; \mathbb{E}_{w \sim P(w),\, x = G(w)} \left[ \|G(E(x)) - x\|_1 + \mathcal{L}_{\text{percep}}(G(E(x)), x) + \lambda \|E(x) - w\|_2^2 \right],$ (3)

where $\mathcal{L}_{\text{percep}}$ denotes the LPIPS loss [66] and λ = 1.0 is used to balance the loss terms. The key assumption behind this strategy is that latent variables encode various characteristics of generated images (e.g. [33, 57]), and hence extracting them from generated images results in learning transferable representations. Fig. 2(b) visualizes the embedding of latent variables (see footnote 4). It shows that samples from the same class are neither clustered together nor separated from those of other classes. In other words, latent variables do not disentangle the class information while encoding other information about image synthesis.

Generator Feature An overlooked practice is to utilize generator features. Typically, GAN generators transform a low-resolution (e.g. 4×4) feature map into a higher-resolution one (e.g. 256×256) and synthesize images from the final feature map [17, 36] or from multi-scale feature maps [37]. The image synthesis is performed hierarchically: feature maps from low to high resolution encode the low-frequency to high-frequency components of the image signal [35]. This understanding is also evidenced by image editing works [22, 52, 49, 53, 32], which show that interfering with low-resolution feature maps leads to structural and high-level changes of an image, while altering high-resolution feature maps only induces subtle appearance changes. Therefore, generator features contain valuable hierarchical knowledge about an image. Formally, let $G = g^{(L)} \circ g^{(L-1)} \circ \cdots \circ g^{(1)}$ denote the decomposition of a generator into L consecutive layers. Given a latent variable w ∼ P(w) drawn from a prior distribution, we consider the concatenated features average-pooled from each generator block output,

$h^g = [\mu(h^g_1), \dots, \mu(h^g_L)], \quad \text{where } h^g_i = g^{(i)} \circ \cdots \circ g^{(1)}(w).$ (4)

As Fig. 2(c) shows, generator features within the same class are naturally clustered. This result suggests that generators contain identifiable representations that can be transferred to downstream tasks. However, as GANs do not come with a reverse model for accurately recovering generator features, it is still inconvenient to extract generator features for an arbitrary given image. This limitation motivates us to distill the valuable features from GAN generators.

Footnote 3: The GAN is trained on CIFAR10. We use UMAP embeddings [44] for dimensionality reduction. As the class labels of the generated images are unknown, they are inferred by a classifier that is trained on the CIFAR10 training set and achieves around 95% top-1 accuracy on the CIFAR10 validation set.
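As a concrete reference for the post hoc encoder objective in Equ. 3 above, a single training step might look like the sketch below; the encoder E, generator G, the perceptual_loss callable (standing in for LPIPS), the latent dimension, and lam are all placeholder assumptions rather than the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def encoder_inversion_step(E, G, perceptual_loss, optimizer,
                           batch_size=8, w_dim=512, lam=1.0):
    """One step of Equ. 3: sample w, synthesize x = G(w), and train E so that
    G(E(x)) reconstructs x while E(x) regresses the original latent w.
    G is assumed frozen (requires_grad=False); the optimizer holds only E's parameters."""
    w = torch.randn(batch_size, w_dim)        # w ~ P(w) (placeholder prior)
    with torch.no_grad():
        x = G(w)                               # target image from the frozen generator
    w_hat = E(x)
    x_rec = G(w_hat)                           # gradients flow back to E only
    loss = (F.l1_loss(x_rec, x)                # ||G(E(x)) - x||_1
            + perceptual_loss(x_rec, x)        # perceptual (LPIPS-style) term
            + lam * F.mse_loss(w_hat, w))      # stands in for lambda * ||E(x) - w||_2^2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```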
Footnote 4: In the StyleGAN family, to achieve more disentangled latent variables, the prior latent variable, which follows a standard normal distribution, is mapped into a learnable latent space via an MLP before being fed into the generator. In our work, we refer to latent variables as the transformed ones, which are also known as latent variables in the W+ space in other works [1].

Figure 3: Squeeze and span representation from the GAN generator. Left: the pretrained generator G and the squeeze module Tφ constitute the teacher network, producing squeezed representations that are further distilled into a student network Sθ (squeeze part). The student network is also trained on real data (span part). Right: the generator structure and our squeeze module. We average pool (denoted as µ) the feature maps from each synthesis block and transform them with a linear layer plus an MLP, termed the squeeze module.

4 Squeeze-and-Span Representations from GAN Generator

This section introduces the “Squeeze-and-Span” technique to distill representations from GANs into a student network, which can then be readily transferred to downstream tasks, e.g. image classification. Let Sθ : X → H denote a student network that maps a given image into the representation space. A naive way of representation learning is to task the student network with predicting the teacher representation, which can be formulated as the following optimization problem:

$\min_\theta \; \mathbb{E}_{w \sim P(w)} \, \|S_\theta(G(w)) - h^g(w)\|_2^2,$ (5)

where we use the mean squared error to measure the prediction loss and $h^g(w)$ to denote the dependence of $h^g$ on w. However, this formulation has two problems. First, representations extracted through multiple layers of the generator are likely to contain a significant amount of information that is necessary for image synthesis but redundant for downstream tasks. Second, as the student network is only optimized on synthetic images, it is likely to perform poorly in extracting features from real images in the downstream task due to the potential domain gap between real and synthetic images. To mitigate these issues, we propose the “Squeeze and Span” technique illustrated in Fig. 3.

4.1 Squeezing Informative Representations

To alleviate the first issue, that the generator representation may contain a large portion of information irrelevant to downstream tasks, we introduce a squeeze (or bottleneck) module Tφ (Fig. 3) that squeezes informative representations out of the generator representation. In addition, we transform the generated image via a semantic-preserving image transformation a (e.g. color jittering and cropping) before feeding it to the student network. Equ. 5 can then be rewritten as

$\min_{\theta, \phi} \; \mathcal{L}_{\text{RD}} = \mathbb{E}_{w \sim P(w),\, a \sim \mathcal{A}} \, \|S_\theta(a[G(w)]) - T_\phi(h^g(w))\|_2^2,$ (6)

where the image transformation a is randomly sampled from $\mathcal{A}$. In words, we seek to distill compact representations from the generator that are invariant to the data augmentations $\mathcal{A}$, inspired by the success of recent self-supervised methods [7, 10]. An informal interpretation is that, similar to Chen & He [10], if we consider the alternating subproblem that fixes θ and solves for φ, the optimal solution would satisfy $T_{\phi^*}(h^g(w)) \approx \mathbb{E}_{a \sim \mathcal{A}} \, S_\theta(a[G(w)])$, which implies that Equ. 6 encourages Tφ to squeeze out a transformation-invariant representation.
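A minimal sketch of the distillation term in Equ. 6 is given below; the augment function, the squeeze module T_phi, the student S_theta, and the helper that collects pooled generator block features are assumed placeholders, and the collapse-preventing regularizers are only added in the next part.

```python
import torch

def squeeze_distillation_loss(G, pooled_generator_features, T_phi, S_theta, augment, w):
    """L_RD of Equ. 6: the student, fed an augmented synthetic image a[G(w)],
    predicts the squeezed generator representation T_phi(h^g(w))."""
    with torch.no_grad():                        # the pre-trained generator stays frozen
        x_g = G(w)                               # synthesized image
        h_g = pooled_generator_features(G, w)    # h^g(w) as in Equ. 4
        # (in practice x_g and h_g come from a single generator forward pass)
    z_teacher = T_phi(h_g)                       # trainable squeeze module
    z_student = S_theta(augment(x_g))            # student sees a transformed view
    # squared L2 distance per sample, averaged over the batch
    return ((z_student - z_teacher) ** 2).sum(dim=1).mean()
```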
However, similar to the siamese networks in SSL [10], Equ. 6 admits a trivial solution: both the squeeze module and the student network degenerate to outputting a constant for any input. We therefore borrow techniques from SSL methods and add regularization terms to the distillation loss. In particular, we employ variance-covariance regularization [3] to explicitly encourage the representation dimensions to be uncorrelated and to vary sufficiently. Formally, in a mini-batch of N samples, we denote the squeezed generator representations and the student representations by

$Z_g = [T_\phi(h^g(w_1)), T_\phi(h^g(w_2)), \dots, T_\phi(h^g(w_N))] \in \mathbb{R}^{M \times N},$ (7)
$Z_s = [S_\theta(a_1[G(w_1)]), S_\theta(a_2[G(w_2)]), \dots, S_\theta(a_N[G(w_N)])] \in \mathbb{R}^{M \times N},$ (8)

where $w_i \sim P(w)$ and $a_i \sim \mathcal{A}$ denote a random sample of the latent variable and of the data augmentation operator, respectively. The variance loss encourages the standard deviation of each representation dimension to be greater than 1,

$\mathcal{L}_{\text{var}}(Z) = \frac{1}{M} \sum_{j=1}^{M} \max\left(0,\, 1 - \sqrt{\mathrm{Var}(z_j) + \epsilon}\right),$ (9)

where $z_j$ represents the j-th dimension of the representation z. The covariance loss encourages every pair of dimensions to be uncorrelated,

$\mathcal{L}_{\text{cov}}(Z) = \frac{1}{M} \sum_{i \neq j} [C(Z)]^2_{ij}, \quad \text{where } C(Z) = \frac{1}{N-1} \sum_{i=1}^{N} (z_i - \bar{z})(z_i - \bar{z})^\top, \quad \bar{z} = \frac{1}{N} \sum_{i=1}^{N} z_i.$ (10)

Altogether, the loss for squeezing representations from the generator into the student network is

$\mathcal{L}_{\text{squeeze}} = \lambda \mathcal{L}_{\text{RD}} + \mu \left[\mathcal{L}_{\text{var}}(Z_s) + \mathcal{L}_{\text{var}}(Z_g)\right] + \nu \left[\mathcal{L}_{\text{cov}}(Z_s) + \mathcal{L}_{\text{cov}}(Z_g)\right].$ (11)

Discussion Our work differs from multi-view representation learning methods [3, 10] in the following aspects. (1) Our work studies the transfer of a generative model that does not originally favor representation extraction, whereas most multi-view representation learning methods learn representations with discriminative pretext tasks. (2) Unlike the typical Siamese networks in multi-view representation learning, the two networks in our work are asymmetric: one takes in noise and outputs an image, and the other works in the reverse fashion. (3) While most multi-view representation learning methods learn representation networks from scratch, our work distills representations from a pre-trained model. Specifically, most SSL methods create multi-view representations by transforming input images in multiple ways; we instead pursue different representation views from a well-trained data generator.
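To make Section 4.1 concrete, the regularizers and the combined squeeze loss of Equs. 9–11 can be sketched as follows; representations are handled as (batch, dim) tensors, and the loss weights and ε are illustrative placeholders rather than the paper's settings.

```python
import torch

def variance_loss(z, eps=1e-4):
    """Equ. 9: hinge keeping the per-dimension standard deviation above 1."""
    std = torch.sqrt(z.var(dim=0) + eps)
    return torch.relu(1.0 - std).mean()

def covariance_loss(z):
    """Equ. 10: penalize squared off-diagonal entries of the covariance matrix."""
    n, m = z.shape
    z = z - z.mean(dim=0)
    cov = (z.T @ z) / (n - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    return (off_diag ** 2).sum() / m

def squeeze_loss(z_student, z_teacher, lam=1.0, mu=1.0, nu=0.05):
    """Equ. 11: distillation term plus regularization of both representation sets."""
    l_rd = ((z_student - z_teacher) ** 2).sum(dim=1).mean()
    l_var = variance_loss(z_student) + variance_loss(z_teacher)
    l_cov = covariance_loss(z_student) + covariance_loss(z_teacher)
    return lam * l_rd + mu * l_var + nu * l_cov
```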
4.2 Spanning Representations from Synthetic to Real Domain

Here we address the second problem, the gap between the synthetic and real domains, which arises from two factors. First, the synthesized images may be of low quality; this aspect has improved considerably with recent GAN models [37, 35] and is not our concern here. Second, and more importantly, GANs are notorious for the mode collapse issue, meaning that synthetic data may cover only part of the modes of the real data distribution. In other words, the synthetic dataset effectively behaves like a subset of the real dataset. To mitigate the harm of mode collapse, we include real data in the training data of the student network. In particular, in each training step a mini-batch consists of both synthetic and real data. For synthetic data, the aforementioned squeeze loss is employed. For real data, we employ the original VICReg loss. Specifically, given a mini-batch of real data $\{x^r_i\}_{i=1}^{N}$, each image $x^r_i$ is transformed twice with random data augmentation to obtain two views $a_i(x^r_i)$ and $a'_i(x^r_i)$, where $a_i, a'_i \sim \mathcal{A}$. The corresponding representations $Z_r$ and $Z'_r$ are obtained by feeding the transformed images into Sθ, similarly to Equ. 8. The loss on real data is then computed as

$\mathcal{L}_{\text{span}} = \lambda \mathcal{L}'_{\text{RD}} + \mu \left[\mathcal{L}_{\text{var}}(Z_r) + \mathcal{L}_{\text{var}}(Z'_r)\right] + \nu \left[\mathcal{L}_{\text{cov}}(Z_r) + \mathcal{L}_{\text{cov}}(Z'_r)\right],$ (12)

where $\mathcal{L}'_{\text{RD}}$ denotes a self-distillation term measuring the distance between the representations of the two views of the real images. The overall loss simply combines the synthetic-data loss and the real-data loss as $\mathcal{L}_{\text{total}} = \alpha \mathcal{L}_{\text{squeeze}} + (1 - \alpha) \mathcal{L}_{\text{span}}$, where α = 0.5 is the proportion of synthetic data in a mini-batch of training samples. From a technical perspective, spanning may look like a simple combination of representation distillation and SSL with VICReg [3]. We interpret this combination as spanning the representation from the synthetic domain to the real domain: the representation is dominantly learned in the synthetic domain and generalized to the real domain, and the student network learns to fuse the representation spaces of the two domains into a consistent one in the spanning process. Our experimental evaluation shows that “squeeze and span” can outperform VICReg on real data, suggesting that the squeezed representations make a nontrivial contribution to the learned representation.

5 Experiments

5.1 Setup

Dataset and pre-trained GAN Our methods are mainly evaluated on CIFAR10, CIFAR100, STL10, ImageNet100, and ImageNet. CIFAR10 and CIFAR100 [39] are two image datasets containing small images at 32×32 resolution with 10 and 100 classes, respectively, both split into 50,000 images for training and 10,000 for validation. STL-10 [13], which is derived from ImageNet [14], includes images at 96×96 resolution over 10 classes; it contains 500 labeled images per class (i.e. 5K in total) with an additional 100K unlabeled images for training and 800 labeled images per class for testing. ImageNet100 [55] contains images of 100 classes, among which 126,689 images form the train split and 5,000 images form the validation split. ImageNet [14] is a popular large-scale image dataset of 1000 classes, split into 1,281,167 training images and 50,000 validation images. We adopt StyleGAN2-ADA (https://github.com/NVlabs/stylegan2-ada-pytorch) for representation distillation since it has good stability and high performance. GANs are all pre-trained on the training split. More details can be found in the supplementary material.

Implementation details The squeeze module uses linear layers to transform the generator features into vectors with 2048 dimensions, which are then summed up and fed into a three-layer MLP to obtain a 2048-d teacher representation. On CIFAR10 and CIFAR100, we use a ResNet18 [30] of the CIFAR variant as the backbone. On STL10, we use ResNet18 as the backbone. On ImageNet100 and ImageNet, we use ResNet50 as the backbone. On top of the backbone network, a five-layer MLP is added for producing the representation. We use the SGD optimizer with a cosine learning rate decay [43] schedule to optimize our models. The actual learning rate is linearly scaled according to the ratio of batch size to 256, i.e. base_lr × batch_size/256 [24]. We follow the common practice in SSL [7, 55, 29] and evaluate the distilled representation with a linear classification task. More details are available in the supplementary material.
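As a concrete illustration of this optimization recipe (linear learning-rate scaling plus SGD with cosine decay), a sketch is given below; the base learning rate, momentum, weight decay, and epoch count are assumed values, not the paper's exact hyperparameters.

```python
import torch

def build_optimizer(model, batch_size, base_lr=0.05, epochs=200,
                    momentum=0.9, weight_decay=5e-4):
    """SGD with cosine learning-rate decay; the actual learning rate is
    linearly scaled as base_lr * batch_size / 256."""
    lr = base_lr * batch_size / 256
    optimizer = torch.optim.SGD(model.parameters(), lr=lr,
                                momentum=momentum, weight_decay=weight_decay)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)
    return optimizer, scheduler
```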
5.2 Transferring GAN Representation

Compared methods In this section, we justify the advantage of distilling generator representations by comparing the performance of different ways of transferring GAN representations. In particular, we consider the following competitors:
• Discriminator. As the discriminator network receives an image as input and is readily usable for representation extraction, we directly extract features, either the single penultimate feature or multiple features (Equ. 2), from a pre-trained discriminator and train a linear classifier on top of them.
• Encoding. We train a post hoc encoder, with or without real images involved in the training process, as in Equ. 3.
• Distilling latent variable. We apply the vanilla distillation or the squeeze method to latent variables, with data augmentation enabled.
• Distilling generator feature. Our method, as described in Section 4.

Results Table 1 presents the comparison results, from which we can draw the following conclusions. (1) Representation distillation, whether from the latent variable or from the generator feature, significantly outperforms the discriminator and encoding baselines. We think this is because image reconstruction and realness discrimination are not suitable pretext tasks for representation learning. (2) Distillation from the latent variable achieves performance comparable to distillation from the generator feature, despite the fact that the former shows entangled class information (Fig. 2). This result can be attributed to the projection head in the student network. (3) Our method works significantly better than vanilla distillation, which does not employ a squeeze module. This result suggests that our method squeezes out more informative representations that help improve the student's performance.

5.3 Comparison to SSL

Linear classification We further compare our methods to SSL algorithms such as SimSiam [10] and VICReg [3] in different training data domains: real, synthetic, and a mixture of real and synthetic. Table 2 presents the linear classification results, from which we want to highlight the following points. (1) Both SimSiam and VICReg perform worse when pre-trained on only synthetic data than on only real data, indicating the existence of a domain gap between synthetic and real data. (2) Our methods outperform SimSiam and VICReg in the synthetic and mixture domains, suggesting that distillation of the generator feature contributes extra improvement over SSL. (3) Our "Squeeze and Span" is the best among all competitors on CIFAR10, CIFAR100, and STL10. (4) Our method outperforms VICReg by a large margin (6.90% Top-1 Acc) on ImageNet100 and by a clear increase (0.48% Top-1 Acc) on ImageNet.

Transfer learning As one goal of representation learning is transferability to other datasets, we further conduct a comprehensive transfer learning evaluation. We follow the protocol in [20] and use its released source code (https://github.com/linusericsson/ssl-transfer) to conduct a thorough transfer learning evaluation for our models pre-trained on ImageNet100/ImageNet. In particular, the learned representations are evaluated for (1) linear classification on 11 datasets, including Aircraft, Caltech101, Cars, CIFAR10, CIFAR100, DTD, Flowers, Food, Pets, SUN397, and VOC2007; and (2) finetuning on three downstream tasks and datasets: object detection on PASCAL VOC, surface normal estimation on NYUv2, and semantic segmentation on ADEChallenge2016. Please refer to [20] for the details of the evaluation protocol.
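For reference, the linear-classification protocol used in these evaluations can be sketched as follows; the backbone, feature dimension, class count, and optimizer settings are placeholders for the standard frozen-backbone linear probe.

```python
import torch
import torch.nn as nn

def linear_probe_epoch(backbone, classifier, loader, optimizer):
    """Standard linear evaluation: keep the backbone frozen and train only a
    linear classifier on top of its features; returns top-1 accuracy."""
    backbone.eval()
    criterion = nn.CrossEntropyLoss()
    correct, total = 0, 0
    for images, labels in loader:
        with torch.no_grad():
            feats = backbone(images)       # frozen representation
        logits = classifier(feats)         # trainable linear layer
        loss = criterion(logits, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        correct += (logits.argmax(dim=1) == labels).sum().item()
        total += labels.numel()
    return correct / total

# Example wiring (dimensions illustrative):
# classifier = nn.Linear(512, 10)
# optimizer = torch.optim.SGD(classifier.parameters(), lr=0.1, momentum=0.9)
```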
The results are presented in Table 3 and Table 4, from which we make the following observations. (1) As depicted in Table 3, our method achieves better transferability than VICReg on the mixed data, whether pre-trained on ImageNet100 or ImageNet. Our method beats VICReg on nearly all other datasets, and the improvement in average accuracy is 3.40 with models pre-trained on ImageNet100 and 1.00 with models pre-trained on ImageNet. (2) As depicted in Table 4, representations learned with our method transfer well to various downstream tasks such as object detection, surface normal estimation, and semantic segmentation, and consistently show higher performance than VICReg. We believe these results suggest that generator features have strong transferability and great promise to contribute to self-supervised representation learning.

5.4 Ablation Study

Effect of squeeze and span The effect of our method is studied by adding modules to the vanilla version of representation distillation (a) one by one. (a) → (b): after adding data augmentation, a significant improvement is observed, suggesting that representations invariant to data augmentation are crucial for linear classification performance. This result inspires us to make the teacher representation more invariant. (b) → (c): the learnable Tφ is introduced to squeeze out an invariant representation as the teacher. However, trivial performance (10% top-1 accuracy, no better than random guessing) is obtained, implying that the model learns a trivial solution, probably a constant output. (c) → (d) & (e): the regularization terms are added, and the student network now achieves meaningful performance, which indicates that the trivial solution is prevented. Moreover, using both regularizations achieves the best performance, outperforming (b), which lacks the "squeeze". (e) → (f): the training data is supplemented with real data, i.e. adding the "span", and the performance is further improved.

Domain gap issue We calculate the squared MMD [4] between representations of synthetic and real data to measure their gap in representation space. Table 6 shows that "Squeeze and Span" (Sq&Sp) reduces the MMD compared to "Squeeze" by an order of magnitude on CIFAR10 and CIFAR100 and by a large margin on STL10, clearly justifying the efficacy of "span" in reducing the domain gap.

Impact of generator We further compare the performance of our method when using GAN checkpoints of different quality. Fig. 4 shows the top-1 accuracy with respect to FID, which indicates the quality of the GAN. It is not surprising that GAN quality significantly impacts our method: the higher the quality of the generator we utilize, the better the learned representation. It is noteworthy that a moderately trained GAN (FID < 11.03) is already able to contribute additional performance improvement on CIFAR100 when compared to VICReg trained on a mixture of synthetic and real data. In the appendix, we further analyze the impact of generator feature choices and GAN architectures on the distillation performance.
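To make the domain-gap measurement above concrete, a squared-MMD estimate between synthetic and real representations might look like the following sketch; it assumes an RBF kernel with a fixed bandwidth, since the kernel choice is not specified in this excerpt.

```python
import torch

def squared_mmd(x, y, sigma=1.0):
    """Biased estimate of the squared MMD between two sets of representations
    (one row per sample) under an RBF kernel with bandwidth sigma."""
    def rbf(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))
    return rbf(x, x).mean() + rbf(y, y).mean() - 2 * rbf(x, y).mean()

# Example: gap between representations of synthetic and real images.
z_synthetic = torch.randn(256, 128)
z_real = torch.randn(256, 128) + 0.5
print(squared_mmd(z_synthetic, z_real).item())
```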
6 Conclusions

This paper proposes to "squeeze and span" representations from the GAN generator to extract transferable representations for downstream tasks such as image classification. The key techniques, "squeeze" and "span", aim to mitigate two issues: the generator representation contains information necessary for image synthesis but unnecessary for downstream tasks, and there is a domain gap between synthetic and real data. Experimental results justify the effectiveness of our method and show its great promise for self-supervised representation learning. We hope more attention can be drawn to studying GANs for representation learning.

Limitation and future work The current form of our work still has several limitations that need to be studied in the future. (1) Since we distill representations from GANs, the performance of the learned representation relies on the quality of the pretrained GAN and is thus limited by the performance of current GAN techniques. Whether a prematurely trained GAN can also contribute to self-supervised representation learning, and how to distill it effectively, is therefore an interesting problem. (2) In this paper, the squeeze module adopts the widely used transformation-invariance as the learning objective of representation distillation. We leave other learning objectives tailored to specific downstream tasks as future work. (3) A more comprehensive, larger-scale empirical study is left as future work to further exhibit the potential of our method.

Acknowledgments and Disclosure of Funding This work was supported by the National Key R&D Program of China under Grant 2018AAA0102801, National Natural Science Foundation of China under Grant 61620106005, and EPSRC Visual AI grant EP/T028572/1.
1. What is the focus of the paper regarding GANs and downstream tasks? 2. What are the strengths and weaknesses of the proposed method in utilizing generator features? 3. Do you have any concerns about the experiments conducted in the paper? 4. How does the reviewer assess the significance and practicality of the proposed approach? 5. Are there any limitations mentioned in the paper that the reviewer agrees with?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This paper proposes to distill representations from GANs for downstream tasks. Specifically, it emphasizes the importance of generator features and proposes a squeeze-and-span solution to effectively extract such generator features as image representations. To do so, it learns a feature prediction network that predicts generator features transformed under some kind of semantic-preserving transformations, so that generator features of real images can be directly computed. Moreover, to reduce the gap between real and fake images and to avoid the generator representation of a real image suffering from GANs' issues such as mode collapse, the proposed method further applies a spanning operation that trains the feature prediction network also on real images using VICReg.
Strengths And Weaknesses
Strengths:
The task of distilling GAN features for downstream perception tasks is definitely a worth-exploring direction.
The manuscript is easy to follow.
The visualization included in Figure 2 well motivates the use of generator features over latent variables and discriminator features.
Weaknesses:
One missing related work: [a] Generative Hierarchical Features from Synthesizing Images, CVPR 2021, which also utilizes generator features for various downstream tasks.
Experiments should be significantly extended. Specifically, the authors only conduct experiments on low-resolution images from CIFAR10, CIFAR100 and SVHN. As pointed out by the authors as the third point of the limitations, the proposed method should be validated on more challenging datasets with higher-resolution images. I believe this is a must. Since practical value is an important aspect of self-supervised learning methods, CIFAR10, CIFAR100 and SVHN are insufficient to reflect the richness and informativeness of generator features.
The authors only used StyleGAN2-ADA as the target generator to distill from, which is not sufficient. Studying the generator features of different GAN architectures is important for studying the effectiveness of generator features, since different architectures naturally have different inductive biases.
The authors only used distilled generator features on the test set of the same dataset the GAN is trained on. However, one property of self-supervised methods is that their representations are generally useful across datasets. The authors should include such a study.
The authors only used distilled generator features for image classification. Other downstream tasks are needed to study the pros and cons of generator features as a self-supervised representation. In fact, maybe the authors should follow the standard protocol for evaluating self-supervised methods.
Just concatenating all layer features as the representation provided by the generator is insufficient to obtain solid observations. The authors should include studies that use different features or combinations from the single generator.
Questions
Overall I think the current experiments are significantly insufficient to comprehensively study the effectiveness and different properties of GAN generator features as self-supervised representations. My detailed concerns are covered in the weaknesses. I believe studying generator features as self-supervised representations across generator architectures, feature depths, downstream tasks, as well as test datasets will lead to many more insightful observations.
Limitations
The authors discussed the limitations in the main manuscript, which look reasonable to me.
NIPS
Title Distilling Representations from GAN Generator via Squeeze and Span Abstract In recent years, generative adversarial networks (GANs) have been an actively studied topic and have been shown to successfully produce high-quality realistic images in various domains. The controllable synthesis ability of GAN generators suggests that they maintain informative, disentangled, and explainable image representations, but leveraging and transferring their representations to downstream tasks is largely unexplored. In this paper, we propose to distill knowledge from GAN generators by squeezing and spanning their representations. We squeeze the generator features into representations that are invariant to semantic-preserving transformations through a network before they are distilled into the student network. We span the distilled representation from the synthetic domain to the real domain by also using real training data, to remedy the mode collapse of GANs and boost the student network's performance in the real domain. Experiments justify the efficacy of our method and reveal its great significance for self-supervised representation learning. Code is available at https://github.com/yangyu12/squeeze-and-span.

1 Introduction

Generative adversarial networks (GANs) [23] continue to achieve impressive image synthesis results thanks to large datasets and recent advances in network architecture design [5, 36, 37, 34]. GANs synthesize not only realistic images but also images steerable towards specific content or styles [22, 52, 49, 33, 57, 53, 32]. These properties motivate a rich body of work adopting powerful pretrained GANs for various computer vision tasks, including part segmentation [68, 56, 61], 3D reconstruction [67], and image alignment [48, 45], showing the strengths of GANs in the few-label regime. GANs typically produce fine-grained, disentangled, and explainable representations, which allow for higher data efficiency and better generalization [42, 68, 56, 61, 67, 48]. Prior works on GAN-based representation learning focus on features learned by either a discriminator network [50] or an encoder network mapping images back into the latent space [19, 17, 18]. However, how to leverage or transfer the representations learned in generators is still inadequately explored. Inspired by the recent success of [68, 56, 61], we hypothesize that representations produced in generator networks are rich and informative for downstream discriminative tasks. Hence, this paper proposes to distill representations from the feature maps of a pretrained generator network into a student network (see Fig. 1). In particular, we present a novel “squeeze-and-span” technique to distill knowledge from a generator into a representation network (see footnote 2) that is transferred to a downstream task. Unlike the discriminator network, the generator network is not directly transferable to downstream image recognition tasks, as it ingests a latent vector rather than an image. Hence, we distill generator representations into a representation network that can be further transferred to the target task. When fed a synthesized image, the representation network is optimized to produce representations similar to the generator network's.

∗Equal Contribution. Footnote 2: Throughout the paper, the two terms “representation network” and “student network” are used interchangeably, as are “generator network” and “teacher network”.
However, the generator representations are very high-dimensional, and not all of them are informative for the downstream task. Thus, we propose a squeeze module that purifies generator representations to be invariant to semantic-preserving transformations through an MLP and an augmentation strategy. As the joint optimization of the squeeze module and the representation network can lead to a trivial solution (e.g. mapping representations to a zero vector), we employ the variance-covariance regularization of [3] while maximizing the agreement between the two networks. Finally, to address the potential domain gap between synthetic and real images, we span the learned representation of synthetic images by additionally training the representation network on real images. We evaluate our distilled representations on CIFAR10, CIFAR100 and STL10 with linear classification tasks, as commonly done in representation learning. Experimental results show that squeezing and spanning generator representations outperforms methods that build on the discriminator or on encoding images into the latent space. Moreover, our method achieves better results than discriminative SSL algorithms, including SimSiam [10] and VICReg [3], on CIFAR10 and CIFAR100, and competitive results on STL10, showing significant potential for transferable representation learning. Our contributions can be summarized as follows: we (1) provide a new taxonomy of representation and transfer learning in generative adversarial networks based on the location of the representations, (2) propose a novel “squeeze-and-span” framework to distill representations in the GAN generator and transfer them to downstream tasks, and (3) empirically show the promise of utilizing generator features to benefit self-supervised representation learning.

2 Related Work

GANs for Representation Learning. Significant progress has been made on the interpretability, manipulability, and versatility of the latent space and representations of GANs [36, 37, 34, 35]. This inspires a broad spectrum of GAN-based applications, such as semantic segmentation [68, 56, 61], visual alignment [48, 45], and 3D reconstruction [67], where GAN representations are leveraged to synthesize supervision signals efficiently. As GANs can be trained without supervision, their representations can be transferred to downstream tasks. DCGAN [50] proposes a convolutional GAN and uses the pre-trained discriminator for image classification. BiGAN [17] adopts an inverse mapping strategy to transfer real-domain knowledge for representation learning. While ALI [19] improves this idea with a stochastic network instead of a deterministic one, BigBiGAN [18] extends BiGAN with BigGAN [5] for large-scale representation learning. GHFeat [59] trains a post hoc encoder that maps given images back into the style codes of style-based GANs [36, 37, 35] for image representation. These works leverage or transfer representations from either discriminators or encoders. In contrast, our method reveals that the generator of a pre-trained GAN is typically more suitable for representation transfer with a proper distillation strategy.

Knowledge Distillation (KD) aims at training a small student network under the supervision of a relatively large teacher network [31]. In terms of the knowledge source, it can be broadly divided into logit-based KD and feature-based KD. Logit-based KD methods [41, 60, 12] optimize a divergence loss between the predicted class distributions, usually called logits or soft labels, of the teacher and student networks.
Feature-based KD methods [38, 2, 54] adopt the teacher model’s intermediate layers as supervisory signals for the student. FitNet [51] introduces the output of hidden layers of the teacher network as supervision. AT [63] proposes to match attention maps between the teacher and student. FSP [62] calculates flow between layers as guidance for distillation. Likewise, our method distills knowledge from intermediate layers from a pre-trained GAN generator. Self-Supervised Representation Learning (SSL) pursues learning general transferable representations from unlabelled data. To produce informative self-supervision signals, the design of handcrafted pretext tasks has flourished for a long time, including jigsaw puzzle completion [46], relative position prediction [15, 16], rotation perception [21], inpainting [47], colorization [40, 65], masked image modeling [27, 58], etc. Instead of performing intra-instance prediction, contrastive learning-based SSL methods explore inter-instance relation. Applying the InfoNCE loss or its variants [26], they typically partition informative positive/negative data subsets and attempt to attract positive pairs while repelling negative ones. MoCo series [28, 8, 11] introduce an offline memory bank to store large negative samples for contrast and a momentum encoder to make them consistent. SimCLR [7] adopts an end-to-end manner to provide negatives in a mini-batch and introduce substantial data augmentation and a projection head to improve the performance significantly. Surprisingly, without negative pairs, BYOL [25] proposes a simple asymmetry SSL framework with the momentum branch applying the stop gradient to avoid model collapse. It inspires a series of in-deep explorations, such as SimSiam [9], Barlow Twins [64], VICReg [3], etc. In this paper, despite the same end goal of obtaining transferable representations and the use of techniques from VICReg [3], we study the transferability of generator representations in pretrained GANs to discriminative tasks, use asymmetric instead of siamese networks, and design effective distillation strategies. 3 Rethinking GAN Representations Let G : W → X denote a generator network that maps a latent variable in W to an image in X . An unconditional GAN trains G adversarially against a discriminator network D : X → [0, 1] that estimates the realness of the given images, max G min D E log(1−D(G(w))) + logD(x). (1) The adversarial learning does not require any human supervision and therefore allows for learning representations in an unsupervised way. In this paper, we show that the type of GAN representations and how they are obtained has a large effect on their transferrability. To illustrate the impact on the transferability, Fig. 2 plots the embedded 2D points of three different type of representations from an unconditional GAN, where color is assigned based on the class labels.3. Note that we describe each representation in the following paragraphs. Discriminator Feature The discriminator D, which is tasked to distinguish real and fake images, can be transferred to various recognition tasks [50]. Formally, let D = d(L) ◦ d(L−1) ◦ · · · ◦ d(1) denote the decomposition of a discriminator into L consecutive layers. As shown in Fig. 1(a), given an image x, the discriminator representation can be extracted by concatenating the features after average pooling from each discriminator block output, hd = [µ(hd1), . . . , µ(h d L)], where h d i = d (i) ◦ · · · ◦ d(1)(x), (2) where µ denotes the average pooling operator. However, Fig. 
2(a) shows that the cluster of discriminator features is not significantly correlated with class information indicating that real/fake discrimination does not necessarily relate to class separation. Latent Variable An alternative way of transferring GAN representation is through its latent variable w [19, 17, 18]. In particular, one can invert the generator such that it can extract a latent variable representation of the generated image through a learned encoder E. Then the representations of the encoder can be transferred to a downstream task. While some works jointly trains the encoder with the generator and discriminator [19, 17, 18], we consider training a post hoc encoder [6] given a fixed pre-trained generator G, as this provides more consistent comparison with the other two strategies: E∗ = arg min E Ew∼P (w),x=G(w) [ ‖G(E(x))− x‖1 + Lpercep(G(E(x)),x) + λ‖E(x)−w‖22 ] , (3) where Lpercep denotes the LPIPS loss [66] and λ = 1.0 is used to balance different loss terms. The key assumption behind this strategy is that latent variables encode various characteristics of generated images (e.g. [33, 57]) and hence extracting them from generated images result in learning transferrable representations. Fig. 2(b) visualizes the embedding of latent variables4. It shows that samples from the same classes are not clustered together and distant from other ones. In other words, latent variables do not disentangle the class information while encoding other information about image synthesis. Generator Feature An overlooked practice is to utilize generator features. Typically, GAN generators transform a low-resolution (e.g. 4×4) feature map to a higher-resolution one (e.g. 256×256) and further synthesize images from the final feature map [17, 36] or multi-scale feature maps [37]. The image synthesis is performed hierarchically: feature map from low to high resolution encodes the low-frequency to high-frequency component for composing an image signal [35]. This understanding is also evidenced by image editing works [22, 52, 49, 53, 32] which show that interfering with low-resolution feature maps leads to a structural and high-level change of an image, and altering high-resolution feature maps only induces subtle appearance changes. Therefore, generator features contain valuable hierarchical knowledge about an image. Formally, let G = g(L) ◦ g(L−1) ◦ · · · ◦ g(1) denote the decomposition of a discriminator into L consecutive layers. Given a latent variable w ∼ P (w) drawn from a prior distribution, we consider the concatenated features average pooled from each generator block output, hg = [µ(hg1), . . . , µ(h g L)], where h g i = g (i) ◦ · · · ◦ g(1)(w). (4) As Fig. 2(c) shows, generator features within the same class are naturally clustered. This result suggests that generators contain identifiable representations that can be transferred for downstream tasks. However, as GANs do not initially provide a reverse model for the accurate recovery of generator features, it is still inconvenient to extract generator features for any given image. This limitation motivates us to distill the valuable features from GAN generators. 3The GAN is trained trained on CIFAR10. We use UMAP embeddings [44] for dimensionality reduction. As the class labels of the generated images are unknown, they are inferred by a classifier that is trained on CIFAR10 training set and achieves around 95% top-1 accuracy on CIFAR10 validation set. 
4In the family of StyleGAN, to achieve more disentangled latent variables, the prior latent variable that observes standard normal distribution is mapped into a learnable latent space via an MLP before fed into the generator. In our work, we refer to latent variables as the transformed ones, which are also known as latent variables in W+ space in other works [1] 𝐰~𝑷(𝐰) Synthesis 4x4 Synthesis 8x8 Synthesis 16x16 Synthesis 32x32 toRGB M L P toRGB toRGB toRGB 𝜇 & FC Const Up Up Up 𝜇 & FC 𝜇 & FC 𝜇 & FC 𝐱𝐠 Squeeze module 𝑇𝜙 Generator 𝐺 𝑮 𝑻𝝓 𝑺𝜽 𝓛𝐯𝐚𝐫 & 𝓛𝐜𝐨𝐯 𝐱𝐠 𝒂 𝐱𝐠 𝒂 𝐱𝐫 𝒂′ 𝐱𝒓 𝓛𝐯𝐚𝐫 & 𝓛𝐜𝐨𝐯 𝓛𝐯𝐚𝐫 & 𝓛𝐜𝐨𝐯 𝓛𝐯𝐚𝐫 & 𝓛𝐜𝐨𝐯 𝓛𝐑𝐃 ′ 𝓛𝐑𝐃 𝐰~𝑷(𝐰) Teacher Student Squeeze Span Teacher Figure 3: Squeeze and span representation from the GAN generator. Left: pretrained generator G and squeeze module Tφ constitute teacher network to produce squeezed representations which are further distilled into a student network Sθ (Squeeze part). The student network is also trained on real data (Span part). Right: the generator structure and our squeeze module. We average pool (denoted as µ) the feature maps from each synthesis block and transform them with a linear layer plus an MLP, termed squeeze module. 4 Squeeze-and-Span Representations from GAN Generator This section introduces the “Squeeze-and-Span” technique to distill representation from GANs into a student network, which can then be readily transferred for downstream tasks, e.g. image classification. Let Sθ : X → H denote a student network that maps a given image into representation space. A naive way of representation learning can be achieved by tasking the student network to predict the teacher representation, which can be formulated as the following optimization problem: min θ Ew∼P (w) ‖Sθ(G(w))− hg(w)‖22, (5) where we use mean squared error to measure the prediction loss and hg(w) to denote the dependence of hg on w. However, this formulation has two problems. First, representations extracted through multiple layers of the generator are likely to contain significantly redundant information for downstream tasks but necessary for image synthesis. Second, as the student network is only optimized on synthetic images, it is likely to perform poorly in extracting features from real images in the downstream task due to the potential domain gap between real and synthetic images. To mitigate these issues, we propose the “Squeeze and Span” technique as illustrated in Fig. 3. 4.1 Squeezing Informative Representations To alleviate the first issue that generator representation may contain a big portion of irrelevant information for downstream tasks, we introduce a squeeze (or bottleneck) module Tφ (Fig. 3) that squeezes informative representations out of the generator representation. In addition, we transform the generated image via a semantic-preserving image transformation a (e.g. color jittering and cropping) before feeding it to the student work. Equ. 5 can be rewritten as min θ,φ LRD = Ew∼P (w),a∼A ‖Sθ(a[G(w)])− Tφ(hg(w))‖22, (6) where image transformation a is randomly sampled from A. In words, we seek to distill compact representations from the generator among the ones that are invariant to data augmentationA, inspired from the success of recent self-supervised methods [7, 10]. An informal intepretation is that, similar to Chen & He [10], considering one of the alternate subproblems that fix θ and solve φ, the optimal solution would result in the effect of Tφ∗(hg(w)) ≈ Ea∼A Sθ(a[G(w)]), which implies Equ 6 encourages Tφ to squeeze out transformation-invariant representation. 
However, similar to the siamese network in SSL [10], there exists a trivial solution to Equ. 6: both the squeeze module and the student network degenerate to output constant for any input. Therefore, we consult the techniques from SSL methods and add regularization terms to the distillation loss. In particular, we employ variance-covariance [3] to explicitly regularize representations to be significantly uncorrelated and varied in each dimension. Formally, in a mini-batch of N samples, we denote the squeezed generator representations and student representations with Zg = [Tφ(h g(w1)), Tφ(h g(w2)), . . . , Tφ(h g(wN ))] ∈ RM×N , (7) Zs = [Sθ(a1[G(w1)]), Sθ(a2[G(w2)]), . . . , Sθ(aN [G(wN )])] ∈ RM×N , (8) where wi ∼ P (w) and ai ∼ A denote random sample of latent variable and data augmentation operator. The variance loss is introduced to encourage the standard deviation of each representation dimension to be greater than 1, Lvar(Z) = 1 M M∑ j=1 max(0, 1− √ Var(zj) + ), (9) where zj represents the j-th dimension in representation z. The covariance loss is introduced to encourage the correlation of any pair of dimensions to be uncorrelated, Lcov(Z) = 1M ∑ i 6=j [C(Z)] 2 ij , where C(Z) = 1N−1 ∑N i=1(zi − z̄)(zi − z̄)>, z̄ = 1 N ∑N i=1 zi. (10) To this end, the loss function of squeezing representations from the generator into the student network can be summarized as Lsqueeze = λLRD + µ [Lvar(Zf ) + Lvar(Zg)] + ν [Lcov(Zf ) + Lcov(Zg)] . (11) Discussion Our work differs from multi-view representation learning methods [3, 10] in the following aspects. (1) Our work studies the transfer of the generative model that does not originally favor representation extraction, whereas most multi-view representation learning learns representation with discriminative pretext tasks. (2) Unlike typical Siamese networks in multi-view representation learning, the two networks in our work are asymmetric: one takes in noise and outputs an image and the other works in the reverse fashion. (3) While most multi-view representation learning methods learn representation networks from scratch, our work distills representations from a pre-trained model. In specific, most SSL methods create multiview representations by transforming input images in multiple ways, we instead pursue different representation views from a well-trained data generator. 4.2 Spanning Representations from Synthetic to Real Domain Here we address the second problem, the domain between synthetic and real domains, due to two factors. First, the synthesized images may be of low quality. This aspect has been improved a lot with recent GAN modelling [37, 35] and is out of our concern. Second, more importantly, GAN is notorious for the mode collapse issue, suggesting the synthetic data can only cover partial modes of real data distribution. In other words, the synthetic dataset appears to be a subset of the real dataset. To undermine the harm of mode collapse, we include the real data in the training data of the student network. In particular, in each training step, synthetic data and real data consist of a mini-batch of training data. For synthetic data, the aforementioned squeeze loss is employed. For real data, we employ the original VICReg to compute loss. Specifically, given a mini-batch of real data {xri }Ni=1, each image xri is transformed twice with random data augmentation to obtain two views ai(x r i ) and a′i(x r i ), where ai, a ′ i ∼ A. 
The corresponding representations Zr and Z ′r are obtained by feeding the transformed images into Sθ similarly to Equ. 8. Then the loss on real data is computed as Lspan = λL′RD + µ [Lvar(Zr) + Lvar(Z ′r)] + ν [Lcov(Zr) + Lcov(Z ′r)] , (12) where L′RD denotes a self-distillation by measuring the distance of two-view representations on real images. The overall loss is computed by simply combine the generated data loss and real data loss as Ltotal = αLsqueeze + (1 − α)Lspan, where α = 0.5 denotes the proportion of synthetic data in a mini-batch of training samples. From a technical perspective, spanning representation seems to be a combination of representation distillation and SSL using VICReg [3]. We interpret this combination as spanning representation from the synthetic domain to the real domain. The representation is dominantly learned in the synthetic domain and generalized to the real domain. The student network learns to fuse representation spaces of two domains into a consistent one in the spanning process. Our experimental evaluation shows that “squeeze and span” can outperform VICReg on real data, suggesting that the squeezed representations do have a nontrivial contribution to the learned representation. 5 Experiments 5.1 Setup Dataset and pre-trained GAN Our methods are mainly evaluated on CIFAR10, CIFAR100, and STL10, ImageNet100, and ImageNet. CIFAR10 and CIFAR100 [39] are two image datasets containing small images at 32×32 resolution with 10 and 100 classes, respectively, and both split into 50,000 images for training and 10,000 for validation. STL-10 [13], which is derived from the ImageNet [14], includes images at 96×96 resolution over 10 classes. STL-10 contains 500 labeled images per class (i.e. 5K in total) with an additional 100K unlabeled images for training and 800 labeled images for testing. ImageNet100 [55] contains images of 100 classes, among which 126,689 images are regarded as the train split and 5,000 images are taken as the validation split. ImageNet [14] is a popular large-scale image dataset of 1000 classes, which is split into 1,281,167 images as training set and 50,000 images as validation set. We adopt StyleGAN2-ADA5 for representation distillation since it has good stability and high performance. GANs are all pre-trained on training split. More details can be referred in the supplementary material. Implementation details The squeeze module uses linear layers to transform the generator features into vectors with 2048 dimensions, which are then summed up and fed into a three-layer MLP to get a 2048-d teacher representation. On CIFAR10 and CIFAR100, we use ResNet18 [30] of the CIFAR variant as the backbone. On STL10, we use ResNet18 as the backbone. On ImageNet100 and ImageNet, we use ResNet50 as the backbone. On top of the backbone network, a five-layer MLP is added for producing representation. We use SGD optimizer with cosine learning rate decay [43] scheduler to optimize our models. The actual learning rate is linearly scaled according to the ratio of batch size to 256, i.e. base_lr × batch_size/256 [24]. We follow the common practice in SSL [7, 55, 29] to evaluate the distilled representation with linear classification task. More details are available in the supplementary material. 5.2 Transferring GAN Representation Compared methods In this section, we justify the advantage of distilling generator representations by comparing the performance of different ways of transferring GAN representation. 
In particular, we consider the following competitors: • Discriminator. As the discriminator network receives image as input and is ready for representation extraction, we directly extract features, single penultimate features, or multiple features (Equ 2), using a pre-trained discriminator and train a linear classifier on top of them. • Encoding. We train a post hoc encoder with or without real images involved in the training process as in Equ 3. • Distilling latent variable. We employ the vanilla distillation or squeeze method on latent variables with data augmentation engaged. • Distilling generator feature. Our method as described in Section 4. Results Table 1 presents the comparison results, from which we can draw the following conclusions. (1) Representation distillation, whether from the latent variable or generator feature, significantly outperforms discriminator and encoding. We think this is because image reconstruction and realness discrimination are not suitable pretext tasks for representation learning. (2) Distillation from latent variable achieve comparable performance to distillation from generator feature, despite that the former one show entangled class information (Fig. 2). This result can be attributed to a projection head in the student network. (3) Our method works significantly better than vanilla distillation which does not employ a squeeze module. This result suggests that our method squeeze more informative representation that can help to improve the student performance. 5.3 Comparison to SSL Linear classification We further compare our methods to SSL algorithms such as SimSiam [10] and VICReg [3] in different training data domains: real, synthetic, and a mixture of real and synthetic. 5https://github.com/NVlabs/stylegan2-ada-pytorch Table 2 presents the linear classification results, from which we want to highlight the following points. (1) Both SimSiam and VICReg perform worse when pre-trained on only synthetic data than only real data, indicating the existence of a domain gap between synthetic data and real data. (2) Our methods outperform SimSiam and VICReg in synthetic and mixture domains, suggesting distillation of generator feature contributes extra improvement SSL. (3) Our "Squeeze and Span" is the best among all the competitors on CIFAR10, CIFAR100, and STL10. (4) Our method outperforms VICReg with a large margin (6.90% Top-1 Acc) on ImageNet100 and a clear increase (0.48% Top-1 Acc) on ImageNet. Transfer learning As one goal of representation learning is its transferability to other datasets, we further conduct a comprehensive transfer learning evaluation. We follow the protocol in [20] and use its released source code6 to conduct a thorough transfer learning evaluation for our pre-trained models on ImageNet100/ImageNet. In particular, the learned representations are mainly evaluated for (1) linear classification on 11 datasets including Aircraft, Caltech101, Cars, CIFAR10, CIFAR100, DTD, Flowers, Food, Pets, SUN397, and VOC2007; (2) finetuning on three downstream tasks and datasets, including object detection on PASCAL VOC, surface normal estimation on NYUv2, and semantic segmentation on ADEChallenge2016. Please refer to [20] for the details of evaluation protocol. The results are presented in Table 3 and Table 4, where we have the following observation (1) As depicted in Table 3, our method achieves better transferability than VICReg on the mixed data no matter pre-trained on ImageNet100 or ImageNet. 
Our method beats VICReg on nearly all other datasets and the improvement on average accuracy is 3.40 with models pre-trained on ImageNet100 and 1.00 with models pre-trained on ImageNet. (2) As depicted in Table 4, representations learned with our method can be well transferred to various downstream tasks such as object detection, 6https://github.com/linusericsson/ssl-transfer surface normal estimation and semantic segmentation, and consistently show higher performance than VICReg. We believe these results suggest that generator features have strong transferability and great promise to contribute to self-supervised representation learning. 5.4 Ablation Study Effect of squeeze and span The effect of our method is studied by adding modules to the vanilla version of representation distillation (a) one by one. (a)→ (b): After added data augmentation, significant improvement can be observed, suggesting that invariant representation to data augmentation is crucial for linear classification performance. This result inspires us to make teacher representation more invariant. (b)→ (c): the learnable Tφ is introduced to squeeze out invariant representation as teacher. However, trivial performance (10% top-1 accuracy, no better than random guess) is obtained, implying models learn trivial solutions, probably constant output. (c)→ (d) & (e): regularization terms are added, and the student network now achieves meaningful performance, which indicates the trivial solution is prevented. Moreover, using both regularizations achieves the best performance, which outperforms (b) without "squeeze". (e)→ (f): training data is supplemented with real data, i.e. adding "span", the performance is further improved. Domain gap issue We calculate the squared MMD [4] of representations of synthetic and real data to measure their gap in representation space. Table 6 shows “Squeeze and Span” (Sq&Sp) reducesthe MMD compared to “Squeeze” by an order of magnitude on CIFAR10 and CIFAR100 and a large margin on STL10, clearly justifying the efficacy of “span” as reducing the domain gap. Impact of generator We further compare the performance of our method when we use GAN checkpoints of different quality. Fig. 4 shows the top-1 accuracy with respect to FID, which indicates the quality of GAN. It is not surprising that GAN quality significantly impacts our method. The higher the quality of generator we utilize, the higher performance of learned representation our method can attain. It is noteworthy that a moderately trained GAN (FID < 11.03) is already able to contribute additional performance improvement on CIFAR100 when compared to VICReg trained on a mixture of synthetic and real data. In the appendix, we further analyze the im- pact of generator feature choices and GAN architectures on the distillation performance. 6 Conclusions This paper proposes to "squeeze and span" representation from the GAN generator to extract transferable representation for downstream tasks like image classification. The key techniques, "squeeze" and "span", aim to mitigate issues that the GAN generator contains the information necessary for image synthesis but unnecessary for downstream tasks and the domain gap between synthetic and real data. Experimental results justify the effectiveness of our method and show its great promise in self-supervised representation learning. We hope more attention can be drawn to studying GAN for representation learning. 
Limitations and future work The current form of our work still has several limitations that need to be studied in the future. (1) Since we distill representations from GANs, the performance of the learned representation relies on the quality of the pretrained GAN and is thus limited by the performance of GAN techniques. Therefore, whether a prematurely trained GAN can also contribute to self-supervised representation learning, and how to distill it effectively, is an interesting open problem. (2) In this paper, the squeeze module uses the widely adopted transformation invariance as the learning objective of representation distillation. We leave other learning objectives tailored to specific downstream tasks as future work. (3) A more comprehensive, larger-scale empirical study is left as future work to further exhibit the potential of our method. Acknowledgments and Disclosure of Funding This work was supported by the National Key R&D Program of China under Grant 2018AAA0102801, the National Natural Science Foundation of China under Grant 61620106005, and EPSRC Visual AI grant EP/T028572/1.
1. What is the focus and contribution of the paper on representation learning? 2. What are the strengths of the proposed approach, particularly in terms of its motivation and technique? 3. What are the weaknesses of the paper, especially regarding its novelty and technical aspects? 4. Do you have any concerns about the effectiveness of the proposed method on large-scale datasets? 5. Are there any limitations to the paper's content that need to be acknowledged?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The paper proposes a method to distill representations from a pretrained GAN generator for downstream classification tasks. The paper further proposes a "squeeze" technique to make the representation compact, and a "span" technique to mitigate the synthetic-to-real domain gap. Experimental results on small datasets including CIFAR10, CIFAR100, and STL10 show its effectiveness in self-supervised representation learning. Strengths And Weaknesses Strengths This paper is well-written and easy to follow. The idea of distilling representations from GAN generators is well-motivated. The proposed squeeze and span techniques mitigate the mentioned problems effectively. Weaknesses The proposed squeeze and span techniques are heavily inspired by VICReg, which diminishes the novelty of the paper. From a technical perspective, distillation from the "latent variable" and from the "generator feature" are not that different. This can be verified by the numbers in Table 1: the vanilla distillation and squeeze accuracies are very similar. I also wonder what the accuracies of "squeeze and span" applied to the "latent variable" would be, and how those would compare to the proposed full model. It is hard to assess the effectiveness of the proposed method without experiments on ImageNet for such an empirical paper. I understand that limited computational resources might be a concern, but any form of ImageNet, such as a low-resolution 64x64 version or TinyImageNet (which has 200 classes), would be helpful. Post Rebuttal I thank the authors for their response. Some of my concerns have been addressed, but my concern about missing results on large-scale datasets still remains. Therefore, I keep my initial rating but lean towards acceptance of this paper. Questions See "Weaknesses". Limitations Yes.
NIPS
Title Space-time Mixing Attention for Video Transformer Abstract This paper is on video recognition using Transformers. Very recent attempts in this area have demonstrated promising results in terms of recognition accuracy, yet they have been also shown to induce, in many cases, significant computational overheads due to the additional modelling of the temporal information. In this work, we propose a Video Transformer model the complexity of which scales linearly with the number of frames in the video sequence and hence induces no overhead compared to an image-based Transformer model. To achieve this, our model makes two approximations to the full space-time attention used in Video Transformers: (a) It restricts time attention to a local temporal window and capitalizes on the Transformer’s depth to obtain full temporal coverage of the video sequence. (b) It uses efficient space-time mixing to attend jointly spatial and temporal locations without inducing any additional cost on top of a spatial-only attention model. We also show how to integrate 2 very lightweight mechanisms for global temporal-only attention which provide additional accuracy improvements at minimal computational cost. We demonstrate that our model produces very high recognition accuracy on the most popular video recognition datasets while at the same time being significantly more efficient than other Video Transformer models. Code for our method is made available here. 1 Introduction Video recognition – in analogy to image recognition – refers to the problem of recognizing events of interest in video sequences such as human activities. Following the tremendous success of Transformers in sequential data, specifically in Natural Language Processing (NLP) [39, 5], Vision Transformers were very recently shown to outperform CNNs for image recognition too [48, 13, 35], signaling a paradigm shift on how visual understanding models should be constructed. In light of this, in this paper, we propose a Video Transformer model as an appealing and promising solution for improving the accuracy of video recognition models.
A direct, natural extension of Vision Transformers to the spatio-temporal domain is to perform the self-attention jointly across all $S$ spatial locations and $T$ temporal locations. Full space-time attention though has complexity $O(T^2S^2)$, making such a model computationally heavy and, hence, impractical even when compared with the 3D-based convolutional models. As such, our aim is to exploit the temporal information present in video streams while minimizing the computational burden within the Transformer framework for efficient video recognition. A baseline solution to this problem is to consider spatial-only attention followed by temporal averaging, which has complexity $O(TS^2)$. Similar attempts to reduce the cost of full space-time attention have been recently proposed in [3, 1]. These methods have demonstrated promising results in terms of video recognition accuracy, yet they have been also shown to induce, in most of the cases, significant computational overheads compared to the baseline (spatial-only) method due to the additional modelling of the temporal information. Our main contribution in this paper is a Video Transformer model that has complexity $O(TS^2)$ and, hence, is as efficient as the baseline model, yet, as our results show, it outperforms recently/concurrently proposed work [3, 1] in terms of efficiency (i.e. accuracy/FLOP) by significant margins. To achieve this our model makes two approximations to the full space-time attention used in Video Transformers: (a) It restricts time attention to a local temporal window and capitalizes on the Transformer’s depth to obtain full temporal coverage of the video sequence. (b) It uses efficient space-time mixing to attend jointly spatial and temporal locations without inducing any additional cost on top of a spatial-only attention model. Fig. 1 shows the proposed approximation to space-time attention. We also show how to integrate two very lightweight mechanisms for global temporal-only attention, which provide additional accuracy improvements at minimal computational cost. We demonstrate that our model is surprisingly effective in terms of capturing long-term dependencies and producing very high recognition accuracy on the most popular video recognition datasets, including Something-Something-v2 [17], Kinetics [4] and Epic Kitchens [9], while at the same time being significantly more efficient than other Video Transformer models. 2 Related work Video recognition: Standard solutions are based on CNNs and can be broadly classified into two categories: 2D- and 3D-based approaches. 2D-based approaches process each frame independently to extract frame-based features which are then aggregated temporally with some sort of temporal modeling (e.g. temporal averaging) performed at the end of the network [42, 26, 27]. The works of [26, 27] use the “shift trick” [45] to have some temporal modeling at a layer level. 3D-based approaches [4, 16, 36] are considered the current state-of-the-art as they can typically learn stronger temporal models via 3D convolutions. However, they also incur higher computational and memory costs. To alleviate this, a large body of works attempt to improve their efficiency via spatial and/or temporal factorization [38, 37, 15]. CNN vs ViT: Historically, video recognition approaches tend to mimic the architectures used for image classification (e.g. from AlexNet [23] to [20] or from ResNet [18] and ResNeXt [47] to [16]).
After revolutionizing NLP [39, 32], very recently, Transformer-based architectures showed promising results on large scale image classification too [13]. While self-attention and attention were previously used in conjunction with CNNs at a layer or block level [6, 50, 33], the Vision Transformer (ViT) of Dosovitskiy et al. [13] is the first convolution-free, Transformer-based architecture that achieves state-of-the-art on ImageNet [11]. Video Transformer: Recently/concurrently with our work, vision transformer architectures, derived from [13], were used for video recognition [3, 1], too. Because performing full space-time attention is computationally prohibitive (i.e. $O(T^2S^2)$), their main focus is on reducing this via temporal and spatial factorization. In TimeSformer [3], the authors propose applying spatial and temporal attention in an alternating manner, reducing the complexity to $O(T^2S + TS^2)$. In a similar fashion, ViViT [1] explores several avenues for space-time factorization. In addition, they also proposed to adapt the patch embedding process from [13] to 3D (i.e. video) data. Our work proposes a completely different approximation to full space-time attention that is also efficient. To this end, we firstly restrict full space-time attention to a local temporal window, which is reminiscent of [2] but applied here to space-time attention and video recognition (see footnote 1). Secondly, we define a local joint space-time attention which we show can be implemented efficiently via the “shift trick” [45].

3 Method

Video Transformer: We are given a video clip $X \in \mathbb{R}^{T \times H \times W \times C}$ ($C = 3$). Following ViT [13], each frame is divided into $K \times K$ non-overlapping patches which are then mapped into visual tokens using a linear embedding layer $E \in \mathbb{R}^{3K^2 \times d}$. Since self-attention is permutation invariant, in order to preserve the information regarding the location of each patch within space and time we also learn two positional embeddings, one for space, $p_s \in \mathbb{R}^{1 \times S \times d}$, and one for time, $p_t \in \mathbb{R}^{T \times 1 \times d}$. These are then added to the initial visual tokens. Finally, the token sequence is processed by $L$ Transformer layers. The visual token at layer $l$, spatial location $s$ and temporal location $t$ is denoted as:

$z^l_{s,t} \in \mathbb{R}^{d}, \quad l = 0, \dots, L-1, \; s = 0, \dots, S-1, \; t = 0, \dots, T-1. \quad (1)$

In addition to the $ST$ visual tokens extracted from the video, a special classification token $z^l_{cls} \in \mathbb{R}^{d}$ is prepended to the token sequence [12]. The $l$-th Transformer layer processes the visual tokens $Z^l \in \mathbb{R}^{(ST+1) \times d}$ of the previous layer using a series of Multi-head Self-Attention (MSA), Layer Normalization (LN), and MLP ($\mathbb{R}^{d} \to \mathbb{R}^{4d} \to \mathbb{R}^{d}$) layers as follows:

$Y^l = \mathrm{MSA}(\mathrm{LN}(Z^{l-1})) + Z^{l-1}, \quad (2)$

$Z^l = \mathrm{MLP}(\mathrm{LN}(Y^l)) + Y^l. \quad (3)$

The main computation of a single full space-time Self-Attention (SA) head boils down to calculating:

$y^l_{s,t} = \sum_{t'=0}^{T-1} \sum_{s'=0}^{S-1} \mathrm{Softmax}\{(q^l_{s,t} \cdot k^l_{s',t'}) / \sqrt{d_h}\}\, v^l_{s',t'}, \quad s = 0, \dots, S-1, \; t = 0, \dots, T-1, \quad (4)$

where $q^l_{s,t}, k^l_{s,t}, v^l_{s,t} \in \mathbb{R}^{d_h}$ are the query, key, and value vectors computed from $z^l_{s,t}$ (after LN) using embedding matrices $W_q, W_k, W_v \in \mathbb{R}^{d \times d_h}$. Finally, the output of the $h$ heads is concatenated and projected using embedding matrix $W_h \in \mathbb{R}^{h d_h \times d}$. The complexity of the full model is: $O(3hTSdd_h)$ (qkv projections) $+\, O(2hT^2S^2d_h)$ (MSA for $h$ attention heads) $+\, O(TS(hd_h)d)$ (multi-head projection) $+\, O(4TSd^2)$ (MLP) (see footnote 2). From these terms, our goal is to reduce the cost $O(2T^2S^2d_h)$ (for a single attention head) of the full space-time attention, which is the dominant term (see footnote 3).
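To illustrate why this term dominates, the following is a minimal sketch (not the paper's code) of single-head full space-time self-attention: all TS tokens attend all TS tokens, producing the $O(T^2S^2)$ attention matrix of Eq. 4. Shapes and names are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): single-head full space-time self-attention
# over a clip of T frames with S patch tokens per frame, showing the O(T^2 S^2)
# attention matrix of Eq. 4. Shapes and names are illustrative assumptions.
import torch

def full_space_time_attention(z, w_q, w_k, w_v):
    # z: (T, S, d) visual tokens; w_q / w_k / w_v: (d, d_h) projection matrices.
    T, S, d = z.shape
    d_h = w_q.shape[1]
    q = z.reshape(T * S, d) @ w_q                            # (T*S, d_h)
    k = z.reshape(T * S, d) @ w_k                            # (T*S, d_h)
    v = z.reshape(T * S, d) @ w_v                            # (T*S, d_h)
    attn = torch.softmax(q @ k.t() / d_h ** 0.5, dim=-1)     # (T*S, T*S): the O(T^2 S^2) term
    y = attn @ v                                              # (T*S, d_h)
    return y.reshape(T, S, d_h)

# With S = 196 and T = 8 (the paper's typical setting), this attention matrix already has
# (8 * 196)**2 ≈ 2.46M entries per head and per layer, which motivates the approximations below.
```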
For clarity, from now on, we will drop constant terms and $d_h$ to report complexity unless necessary. Hence, the complexity of the full space-time attention is $O(T^2S^2)$. Our baseline is a model that performs a simple approximation to the full space-time attention by applying, at each Transformer layer, spatial-only attention:

$y^l_{s,t} = \sum_{s'=0}^{S-1} \mathrm{Softmax}\{(q^l_{s,t} \cdot k^l_{s',t}) / \sqrt{d_h}\}\, v^l_{s',t}, \quad s = 0, \dots, S-1, \; t = 0, \dots, T-1, \quad (5)$

the complexity of which is $O(TS^2)$. Notably, the complexity of the proposed space-time mixing attention is also $O(TS^2)$. Following spatial-only attention, simple temporal averaging is performed on the class tokens, $z_{final} = \frac{1}{T}\sum_{t} z^{L-1}_{t,cls}$, to obtain a single feature that is fed to the linear classifier.

Footnote 1: Other attempts of exploiting local attention can be found in [29, 7, 49], however they are also different in scope, task/domain and implementation. Footnote 2: For this work, we used $S = 196$, $T = \{8, 16, 32\}$ and $d = 768$ (for a ViT-B backbone). Footnote 3: The MLP complexity is by no means negligible, however the focus of this work (similarly to [3, 1]) is on reducing the complexity of the self-attention component.

Recent work by [3, 1] has focused on reducing the cost $O(T^2S^2)$ of the full space-time attention of Eq. 4. Bertasius et al. [3] proposed the factorised attention:

$\tilde{y}^l_{s,t} = \sum_{t'=0}^{T-1} \mathrm{Softmax}\{(q^l_{s,t} \cdot k^l_{s,t'}) / \sqrt{d_h}\}\, v^l_{s,t'}, \qquad y^l_{s,t} = \sum_{s'=0}^{S-1} \mathrm{Softmax}\{(\tilde{q}^l_{s,t} \cdot \tilde{k}^l_{s',t}) / \sqrt{d_h}\}\, \tilde{v}^l_{s',t}, \quad s = 0, \dots, S-1, \; t = 0, \dots, T-1, \quad (6)$

where $\tilde{q}^l_{s,t}, \tilde{k}^l_{s',t}, \tilde{v}^l_{s',t}$ are new query, key and value vectors calculated from $\tilde{y}^l_{s,t}$ (see footnote 4). The above model reduces complexity to $O(T^2S + TS^2)$. However, temporal attention is performed for a fixed spatial location, which is ineffective when there is camera or object motion and there is spatial misalignment between frames. The work of [1] is concurrent to ours and proposes the following approximation: $L_s$ Transformer layers perform spatial-only attention as in Eq. 5 (each with complexity $O(S^2)$). Following this, there are $L_t$ Transformer layers performing temporal-only attention on the class tokens $z^{L_s}_t$. The complexity of the temporal-only attention is, in general, $O(T^2)$.

Footnote 4: More precisely, Eq. 6 holds for $h = 1$ heads. For $h > 1$, the different heads $\tilde{y}^{l,h}_{s,t}$ are concatenated and projected to produce $\tilde{y}^l_{s,t}$.

Our model aims to better approximate the full space-time self-attention (SA) of Eq. 4 while keeping complexity to $O(TS^2)$, i.e. inducing no further complexity to a spatial-only model. To achieve this, we make a first approximation to perform full space-time attention but restricted to a local temporal window $[-t_w, t_w]$:

$y^l_{s,t} = \sum_{t'=t-t_w}^{t+t_w} \sum_{s'=0}^{S-1} \mathrm{Softmax}\{(q^l_{s,t} \cdot k^l_{s',t'}) / \sqrt{d_h}\}\, v^l_{s',t'} = \sum_{t'=t-t_w}^{t+t_w} V^l_{t'} a^l_{t'}, \quad s = 0, \dots, S-1, \; t = 0, \dots, T-1, \quad (7)$

where $V^l_{t'} = [v^l_{0,t'}; v^l_{1,t'}; \dots; v^l_{S-1,t'}] \in \mathbb{R}^{d_h \times S}$ and $a^l_{t'} = [a^l_{0,t'}, a^l_{1,t'}, \dots, a^l_{S-1,t'}] \in \mathbb{R}^{S}$ is the vector with the corresponding attention weights. Eq. 7 shows that, for a single Transformer layer, $y^l_{s,t}$ is a spatio-temporal combination of the visual tokens in the local window $[-t_w, t_w]$. It follows that, after $k$ Transformer layers, $y^{l+k}_{s,t}$ will be a spatio-temporal combination of the visual tokens in the local window $[-kt_w, kt_w]$, which in turn conveniently allows to perform spatio-temporal attention over the whole clip. For example, for $t_w = 1$ and $k = 4$, the local window becomes $[-4, 4]$, which spans the whole video clip for the typical case $T = 8$. The complexity of the local self-attention of Eq. 7 is $O(T(2t_w + 1)^2 S^2)$. To reduce this even further, we make a second approximation on top of the first one as follows: the attention between spatial locations $s$ and $s'$ according to the model of Eq. 7 is:

$\sum_{t'=t-t_w}^{t+t_w} \mathrm{Softmax}\{(q^l_{s,t} \cdot k^l_{s',t'}) / \sqrt{d_h}\}\, v^l_{s',t'}, \quad (8)$

i.e. it requires the calculation of $2t_w + 1$ attentions, one per temporal location over $[-t_w, t_w]$. Instead, we propose to calculate a single attention over $[-t_w, t_w]$, which can be achieved by $q^l_{s,t}$ attending $k^l_{s',-t_w:t_w} \triangleq [k^l_{s',t-t_w}; \dots; k^l_{s',t+t_w}] \in \mathbb{R}^{(2t_w+1)d_h}$. Note that to match the dimensions of $q^l_{s,t}$ and $k^l_{s',-t_w:t_w}$ a further projection of $k^l_{s',-t_w:t_w}$ to $\mathbb{R}^{d_h}$ is normally required, which has complexity $O((2t_w+1)d_h^2)$ and hence compromises the goal of an efficient implementation. To alleviate this we use the “shift trick” [45, 26], which allows us to perform both zero-cost dimensionality reduction, space-time mixing and attention (between $q^l_{s,t}$ and $k^l_{s',-t_w:t_w}$) in $O(d_h)$. In particular, each $t' \in [-t_w, t_w]$ is assigned $d^{t'}_h$ channels from $d_h$ (i.e. $\sum_{t'} d^{t'}_h = d_h$). Let $k^l_{s',t'}(d^{t'}_h) \in \mathbb{R}^{d^{t'}_h}$ denote the operator for indexing the $d^{t'}_h$ channels from $k^l_{s',t'}$. Then, a new key vector is constructed as:

$\tilde{k}^l_{s',-t_w:t_w} \triangleq [k^l_{s',t-t_w}(d^{t-t_w}_h), \dots, k^l_{s',t+t_w}(d^{t+t_w}_h)] \in \mathbb{R}^{d_h}. \quad (9)$

Fig. 2 shows how the key vector $\tilde{k}^l_{s',-t_w:t_w}$ is constructed. In a similar way, we also construct a new value vector $\tilde{v}^l_{s',-t_w:t_w}$. Finally, the proposed approximation to the full space-time attention is given by:

$y^l_{s,t} = \sum_{s'=0}^{S-1} \mathrm{Softmax}\{(q^l_{s,t} \cdot \tilde{k}^l_{s',-t_w:t_w}) / \sqrt{d_h}\}\, \tilde{v}^l_{s',-t_w:t_w}, \quad s = 0, \dots, S-1, \; t = 0, \dots, T-1. \quad (10)$

This has the complexity of a spatial-only attention ($O(TS^2)$) and hence it is more efficient than previously proposed video transformers [3, 1]. Our model also provides a better approximation to the full space-time attention and, as shown by our results, it significantly outperforms [3, 1]. Temporal Attention aggregation: The final set of class tokens $z^{L-1}_{t,cls}$, $0 \le t \le T-1$, are used to generate the predictions. To this end, we propose to consider the following options: (a) simple temporal averaging, $z_{final} = \frac{1}{T}\sum_t z^{L-1}_{t,cls}$, as in the case of our baseline. (b) An obvious limitation of temporal averaging is that the output is treated purely as an ensemble of per-frame features and, hence, completely ignores the temporal ordering between them. To address this, we propose to use a lightweight Temporal Attention (TA) mechanism that will attend to the $T$ classification tokens. In particular, a $z_{final}$ token attends the sequence $[z^{L-1}_{0,cls}, \dots, z^{L-1}_{T-1,cls}]$ using a temporal Transformer layer and is then fed as input to the classifier. This is akin to the (concurrent) work of [1], with the difference being that in our model we found that a single TA layer suffices whereas [1] uses $L_t$. A consequence of this is that the complexity of our layer is $O(T)$ vs $O(2(L_t - 1)T^2 + T)$ of [1]. Summary token: As an alternative to TA, herein, we also propose a simple lightweight mechanism for information exchange between different frames at intermediate layers of the network. Given the set of tokens for each frame $t$, $Z^{l-1}_t \in \mathbb{R}^{(S+1) \times d_h}$ (constructed by concatenating all tokens $z^{l-1}_{s,t}$, $s = 0, \dots, S$), we compute a new set of $R$ tokens $Z^l_{r,t} = \phi(Z^{l-1}_t) \in \mathbb{R}^{R \times d_h}$ which summarize the frame information and hence are named “Summary” tokens. These are then appended to the visual tokens of all frames to calculate the keys and values, so that the query vectors attend the original keys plus the Summary tokens. Herein, we explore the case that $\phi(.)$
performs simple spatial averaging zl0,t = 1 S ∑ s z l s,t over the tokens of each frame (R = 1 for this case). Note that, forR = 1, the extra cost that the Summary token induces is O(TS). X-ViT: We call the Video Transformer based on the proposed (a) space-time mixing attention and (b) lightweight global temporal attention (or summary token) as X-ViT. 4 Results 4.1 Experimental setup Datasets: We train and evaluate the proposed models on the following datasets (all datasets are publicly available for research purposes): Kinetics-400 and 600: The Kinetics [21] dataset consists of short clips (typically 10 sec long sampled from YouTube) labeled using 400 and 600 classes, respectively. Due to the removal of some videos from YouTube, the version of the dataset used in this paper consists of approximately 261K clips for Kinetics-400. Note, that these amounts are lower than the original version of the datasets and thus might represent a negative performance bias when compared with prior works. Something-Something-v2 (SSv2): The SSv2 [17] dataset consists of 220,487 short videos (of duration between 2 and 6 sec) that depict humans performing pre-defined basic actions with everyday objects. Because the objects and backgrounds in the videos are consistent across different action classes, this dataset tends to require stronger temporal modeling. Due to this, we conducted most of our ablation studies on SSv2 to better analyze the importance of the proposed components. Epic Kitchens-100 (Epic-100): is an egocentric large scale action recognition dataset consisting of more than 90,000 action segments spanning 100 hours of recordings in home environments, capturing daily activities [10]. The dataset is labeled using 97 verb classes and 300 noun classes. The evaluation results are reported using the standard action recognition protocol: the network predicts the “verb” and the “noun” using two heads. The predictions are then merged to construct an “action” which is used to report the accuracy. Training details: All models, unless otherwise stated, were trained using the following scheduler and training procedure: specifically, our models were trained using SGD with momentum (0.9) and a cosine scheduler [28] (with linear warmup) for 35 epochs on SSv2, 50 on Epic-100 and 30 on Kinetics. The base learning rate, set at a batch size of 128, was 0.05 (0.03 for Kinetics). To prevent over-fitting we made use of the following augmentation techniques: random scaling (0.9× to 1.3×) and cropping, random flipping (with probability of 0.5; not for SSv2) and autoaugment [8]. In addition, for SSv2 and Epic-100, we also applied random erasing (probability=0.5, min. area=0.02, max. area=1/3, min. aspect=0.3) [52] and label smoothing (λ = 0.3) [34] while, for Kinetics, we used mixup [51] (α = 0.4). The backbone models follow closely the ViT architecture of Dosovitskiy et al. [13]. Most experiments were performed using the ViT-B/16 variant (L = 12, h = 12, d = 768, K = 16), where L represents the number of transformer layers, h the number of heads, d the embedding dimension and K the patch size. We initialized our models from a pretrained ImageNet-21k [11] ViT model. The spatial positional encoding ps was initialized from the pretrained 2D model and the temporal one, pt, with zeros so that it does not have a great impact on the tokens early on during training. The models were trained on 8 V100 GPUs using PyTorch [30]. Testing details: Unless otherwise stated, we used ViT-B/16 and T = 8 frames. 
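As a supplement to the training details above, here is a minimal sketch of the described optimization recipe (SGD with momentum 0.9 and a cosine schedule with linear warmup). It is an assumption-laden illustration rather than the released training code; in particular, the warmup length is not specified in the paper and is a placeholder.

```python
# Minimal sketch (assumed, not the released training code) of the optimization recipe
# described in "Training details" above: SGD with momentum 0.9 and a cosine schedule
# with linear warmup. The model, the dataloader and the warmup length are placeholders.
import math
import torch

def build_optimizer_and_scheduler(model, epochs=35, warmup_epochs=2, base_lr=0.05):
    # base_lr = 0.05 at batch size 128 and epochs = 35 follow the SSv2 setting above;
    # warmup_epochs is an assumption.
    optimizer = torch.optim.SGD(model.parameters(), lr=base_lr, momentum=0.9)

    def lr_lambda(epoch):
        if epoch < warmup_epochs:                                   # linear warmup
            return (epoch + 1) / warmup_epochs
        progress = (epoch - warmup_epochs) / max(1, epochs - warmup_epochs)
        return 0.5 * (1.0 + math.cos(math.pi * progress))           # cosine decay

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
    return optimizer, scheduler
```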
We mostly used Temporal Attention (TA) for temporal aggregation. We report accuracy results for 1 × 3 views (1 temporal clip and 3 spatial crops) departing from the common approach of using up to 10 × 3 views [26, 16]. The 1× 3 views setting was also used in Bertasius et al. [3]. To measure the variation between runs, we trained one of the 8–frame models 5 times. The results varied by ±0.4%. 4.2 Ablation studies Throughout this section, we study the effect of varying certain design choices and different components of our method. Because SSv2 tends to require a more fine-grained temporal modeling, unless otherwise specified, all results reported, in this section, are on the SSv2. Table 2: Effect of: (a) proposed SA position, (b) temporal aggregation and number of Temporal Attention (TA) layers, (c) space-time mixing qkv vectors and (d) amount of mixed channels on SSv2. (a) Effect of applying the proposed SA to certain layers. Transform. layers Top-1 Top-5 1st half 61.7 86.5 2nd half 61.6 86.3 Half (odd. pos) 61.2 86.4 All 62.6 87.8 (b) Effect of number of TA layers. 0 corresponds to temporal averaging. #. TA layers Top-1 Top-5 0 (temp. avg.) 62.4 87.8 1 64.4 89.3 2 64.5 89.3 3 64.5 89.3 (c) Effect of space-time mixing. x denotes the input token before qkv projection. Query produces equivalent results with key and thus omitted. x key value Top-1 Top-5 7 7 7 56.6 83.5 X 7 7 63.1 88.8 7 X 7 63.1 88.8 7 7 X 62.5 88.6 7 X X 64.4 89.3 (d) Effect of amount of mixed channels. * uses temp. avg. aggregation. 0%* 0% 25% 50% 100% 45.2 56.6 64.3 64.4 62.5 Effect of local window size: Table 1 shows the accuracy of our model by varying the local window size [−tw, tw] used in the proposed space-time mixing attention. Firstly, we observe that the proposed model is significantly superior to our baseline (tw = 0) which uses spatial-only attention. Secondly, a window of tw = 1 produces the best results. This shows that more gradual increase of the effective window size that is attended is more beneficial compared to more aggressive ones, i.e. the case where tw = 2. A performance degradation for the case tw = 2 could be attributed to boundary effects (handled by filling with zeros) which are aggravated as tw increases. Based on these results, we chose to use tw = 1 for the models reported hereafter. For short to medium long videos, it seems that tw = 1 suffices as the temporal receptive field size increases as we advance in depth in the model allowing it to capture a larger effective temporal window. For the datasets used, as explained earlier, after a few transformer layers the whole clip is effectively covered. However, for significantly longer video sequences, larger window sizes may perform better. Effect of SA position: We explored which layers should the proposed space-time mixing attention be applied to within the network. Specifically, we explored the following variants: Applying it to the first L/2 layers, to the last L/2 layers, to every odd indexed layer and, finally, to all layers. As the results from Table 2a show, the exact layers within the network that self-attention is applied to do not matter; what matters is the number of layers it is applied to. We attribute this result to the increased temporal receptive field and cross-frame interactions. Effect of temporal aggregation: Herein, we compare the two methods used for temporal aggregation: simple temporal averaging [41] and the proposed Temporal Attention (TA) mechanism. 
Given that our model already incorporates temporal information through the proposed space-time attention, we also explored how many TA layers are needed. As shown in Table 2b, replacing temporal averaging with one TA layer improves the Top-1 accuracy from 62.5% to 64.4%. Increasing the number of layers further yields no additional benefits. In Table 2d, we also report the accuracy of spatial-only attention (0% mixing) plus TA aggregation. In the absence of the pro- posed space-time mixing attention, the TA layer alone is unable to compensate, scoring only 56.6%. In the same table, 45.2% is the accuracy of a model trained without the proposed local attention and TA layer (i.e. using a temporal pooling for aggregation). Overall, the results highlight the need of having both components in our final model. For the next two ablation studies, we used 1 TA layer. Effect of space-time mixing qkv vectors: Paramount to our work is the proposed space-time mixing attention of Eq. 10 which is implemented by constructing k̃ls′,−tw:tw and ṽ l s′,−tw:tw efficiently via channel indexing (see Eq. 9). Space-time mixing though can be applied in several different ways in the model. For completeness, herein, we study the effect of applying space-time mixing to various combinations for the key, value and to the input token prior to qkv projection. As shown in Table 2c, the combination corresponding to our model (i.e. space-time mixing applied to the key and value) significantly outperforms all other variants by up to 2%. This result is important as it confirms that our model, derived from the proposed approximation to the local space-time attention, gives the best results when compared to other non-well motivated variants. Effect of amount of space-time mixing: We define as ρdh the total number of channels coming from the adjacent frames in the local temporal window [−tw, tw] (i.e. ∑tw t′=−tw,t6=0 d t′ h = ρdh) when constructing k̃ls′,−tw:tw (see Section 3). Herein, we study the effect of ρ on the model’s accuracy. As the results from Table 2d show, the optimal ρ is between 25% and 50%. Increasing ρ to 100% (i.e. all channels are coming from adjacent frames) unsurprisingly degrades the performance as it excludes the case t′ = t when performing the self-attention. Effect of Summary token: Herein, we compare Temporal Attention with Summary token on SSv2 and Kinetics-400. We used both datasets for this case as they require different type of understanding: fine-grained temporal (SSv2) and spatial content (Kinetics-400). From Table 4, we conclude that the Summary token compares favorable on Kinetics-400 but not on SSv2 showing that it is more useful in terms of capturing spatial information. Since the improvement is small, we conclude that 1 TA layer is the best global attention-based mechanism for improving the accuracy of our method adding also negligible computational cost. Effect of number of input frames: Herein we evaluate the impact of increasing the number of input frames T from 8 to 16 and 32. We note that, for our method, this change results in a linear increase in complexity. As the results from Table 7 show, increasing the number of frames from 8 to 16 offers a 1.8% boost in Top-1 accuracy on SSv2. Moreover, increasing the number of frames to 32 improves the performance by a further 0.2%, offering diminishing returns. Similar behavior can be observed on Kinetics and Epic-100 in Tables 5 and 8. 
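To connect the mixing-ratio and key/value ablations above with Eqs. 9-10, here is a hedged sketch of the space-time mixing attention for t_w = 1. It is not the released implementation: the tensor layout, the symmetric channel split controlled by rho, and the explicit copy used to emulate the zero-cost channel indexing are all assumptions made for illustration.

```python
# Minimal sketch (not the released code) of the space-time mixing of Eqs. 9-10 for t_w = 1:
# a fraction `rho` of each key/value's channels is taken from the previous and next frames,
# after which attention stays spatial-only per frame. The shift is emulated here with an
# explicit copy for clarity; the paper implements it as zero-cost channel indexing.
import torch

def temporal_channel_mix(x, rho=0.5):
    # x: (T, S, d_h) keys or values; returns a tensor of the same shape whose first
    # channels come from frames t-1 and t+1 (zero-padded at the clip boundaries).
    T, S, d_h = x.shape
    n = int(rho * d_h) // 2            # channels borrowed from each neighbouring frame
    out = x.clone()
    out[1:, :, :n] = x[:-1, :, :n]     # channels [0:n) taken from frame t-1
    out[0, :, :n] = 0
    out[:-1, :, n:2 * n] = x[1:, :, n:2 * n]   # channels [n:2n) taken from frame t+1
    out[-1, :, n:2 * n] = 0
    return out                         # remaining channels stay at frame t

def space_time_mixing_attention(q, k, v, rho=0.5):
    # q, k, v: (T, S, d_h). Attention is computed independently per frame, so the cost
    # matches spatial-only attention, O(T S^2), while keys/values carry neighbouring-frame info.
    d_h = q.shape[-1]
    k_mix, v_mix = temporal_channel_mix(k, rho), temporal_channel_mix(v, rho)
    attn = torch.softmax(q @ k_mix.transpose(-2, -1) / d_h ** 0.5, dim=-1)  # (T, S, S)
    return attn @ v_mix

# Setting rho between 0.25 and 0.5 corresponds to the best-performing range in Table 2d,
# while rho = 1.0 removes all same-frame channels, matching the reported degradation.
```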
Effect of number of tokens and different model sizes: Herein, we vary the number of input tokens by changing the patch size K. As the results from Table 3 show, even when the number of tokens decreases significantly (e.g. ViT-B/32 or ViT-S/32) our approach is still able to produce results of satisfactory accuracy. The benefit of that is having a model which is significantly more efficient. Similar concusions can be observed when the model size (in terms of parameters and FLOPs) is varied. Our approach provides consistent results in all cases, showcasing its ability to scale well from tiny (XViT-T) to large (XViT-L) models. Latency and throughput considerations: While the channel shifting operation used by the proposed space-time mixing attention is zero-FLOP, there is still a small cost associated with memory movement operations. In order to ascertain that the induced cost does not introduce noticeable performance degradation, we benchmarked a Vit-B/16 (8× frames) model using spatial-only attention and the proposed space-time mixing attention on 8 V100 GPUs and a batch size of 128. A model with spatial-only attention has a throughput of 312 fps while our model has 304 fps. 4.3 Comparison to state-of-the-art Our best model uses the proposed space-time mixing attention in all the Transformer layers and performs temporal aggregation using a single lightweight temporal transformer layer as described in Section 3. Unless otherwise specified, we report the results using the 1× 3 configuration for the views (1 temporal and 3 spatial) for all datasets. Regarding related work on transformer-based video recognition [1, 3], we included their very best models trained on the same data as our models. For TimeSformer, this is typically the TimeSformer-L version. For ViVit, we used the 16x2 configuration, with factorized-encoding for Epic-100 and SS-v2 (as reported in Tables 6d and 6e in [1]) and the full version for Kinetics (as reported in Table 6a in [1]). On Kinetics-400, we match the current state-of-the-art while having significantly lower computational complexity than the next two best recently proposed methods that also use Transformer-based architectures: 20× fewer FLOPs than ViVit [1] and 8× fewer than TimeSformer-L [3]. Note that both models from [1, 3] and ours were initialized from a ViT model pretrained on ImageNet-21k [11] and take as input frames at a resolution of 224 × 224px. Similar conclusions can be drawn from Table 6 which reports our results on Kinetics-600. On SSv2, we match and surpass the current state-of-the-art, especially in terms of Top-5 accuracy (ours: 90.7% vs ViViT: 89.8% [1]) using models that are 14× (16 frames) and 9× (32 frames) faster. Finally, we observe similar outcomes on Epic-100 where we set a new state-of-the-art, showing large improvements especially for “Verb” accuracy, while again being more efficient. 5 Ethical considerations and broader impact Current high-performing video recognition models tend to have high computational demands for both training and testing and, by extension, significant environmental costs. This is especially true for the transformer-based architectures. Our research introduces a novel approach that matches and surpasses the current state-ofthe-art while being significantly more efficient thanks to the linear scaling of the complexity with respect to the number of frames. We hope such models will offer noticeable reduction in power consumption while setting at the same time a solid base for future research. 
We will release code and models to facilitate this. Moreover, and similarly to most data-driven systems, bias from the training data can potentially affect the fairness of the model. As such, we suggest to take this aspect into consideration when deploying the models into real-world scenarios. 6 Conclusions We presented a novel approximation to the full space-time attention that is amenable to an efficient implementation and applied it to video recognition. Our approximation has the same computational cost as spatial-only attention yet the resulting video Transformer model was shown to be significantly more efficient than recently proposed Video Transformers [3, 1]. By no means this paper proposes a complete solution to video recognition using video Transformers. Future efforts could include combining our approaches with other architectures than the standard ViT, removing the dependency on pre-trained models and applying the model to other video-related tasks like detection and segmentation. Finally, further research is required for deploying our models on low power/resource devices.
1. What is the focus and contribution of the paper on video recognition? 2. What are the strengths of the proposed video transformer model, particularly in terms of efficiency and effectiveness? 3. What are the weaknesses of the paper regarding its claims, experiments, and comparisons with other works? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any concerns or limitations regarding the proposed approach that the reviewer would like to highlight?
Summary Of The Paper Review
Summary Of The Paper This paper presents a video Transformer model for video recognition. It limits the temporal window to the local area and proposes efficient space-time mixing to reduce computational costs. On Kinetics, Something-Something v2, and Epic Kitchens datasets, the proposed video transformer model is more effective than other transformer-based models yet has the same computational cost as spatial-only attention models. Review For originality, this paper extends the mixing technique in TSM to the Transformer based backbone architecture, to obtain a space-time mixing attention in Video Transformer. This combination is straightforward and shown to be effective for video recognition. The paper is well written and well organized. Using the proposed methods, the paper achieves similar performance with much lower computation costs on Kinetics-400, Something-Something v2, and Epic Kitchens datasets. Concerns: The experimental results are not sufficient enough. This paper only presents one version of the X-ViT model with different input frames but did not verify the effectiveness of the proposed approach on different sizes of models. The novelty is somewhat limited. This paper is more like a combination of TSM and Vision Transformer. In Table 1, a larger temporal window size would lead to worse performance on video recognition. This may limit the application scenarios and hurt the generality of the proposed approach. Number of parameters and speed of different models should be compared. Other approaches could benefit from using more temporal clips, such as 4x3. But the presented approach could only achieve slightly better when using more temporal clips. Please explain why.
NIPS
Title Space-time Mixing Attention for Video Transformer Abstract This paper is on video recognition using Transformers. Very recent attempts in this area have demonstrated promising results in terms of recognition accuracy, yet they have been also shown to induce, in many cases, significant computational overheads due to the additional modelling of the temporal information. In this work, we propose a Video Transformer model the complexity of which scales linearly with the number of frames in the video sequence and hence induces no overhead compared to an image-based Transformer model. To achieve this, our model makes two approximations to the full space-time attention used in Video Transformers: (a) It restricts time attention to a local temporal window and capitalizes on the Transformer’s depth to obtain full temporal coverage of the video sequence. (b) It uses efficient space-time mixing to attend jointly spatial and temporal locations without inducing any additional cost on top of a spatial-only attention model. We also show how to integrate 2 very lightweight mechanisms for global temporal-only attention which provide additional accuracy improvements at minimal computational cost. We demonstrate that our model produces very high recognition accuracy on the most popular video recognition datasets while at the same time being significantly more efficient than other Video Transformer models. Code for our method is made available here. N/A This paper is on video recognition using Transformers. Very recent attempts in this area have demonstrated promising results in terms of recognition accuracy, yet they have been also shown to induce, in many cases, significant computational overheads due to the additional modelling of the temporal information. In this work, we propose a Video Transformer model the complexity of which scales linearly with the number of frames in the video sequence and hence induces no overhead compared to an image-based Transformer model. To achieve this, our model makes two approximations to the full space-time attention used in Video Transformers: (a) It restricts time attention to a local temporal window and capitalizes on the Transformer’s depth to obtain full temporal coverage of the video sequence. (b) It uses efficient space-time mixing to attend jointly spatial and temporal locations without inducing any additional cost on top of a spatial-only attention model. We also show how to integrate 2 very lightweight mechanisms for global temporal-only attention which provide additional accuracy improvements at minimal computational cost. We demonstrate that our model produces very high recognition accuracy on the most popular video recognition datasets while at the same time being significantly more efficient than other Video Transformer models. Code for our method is made available here. 1 Introduction Video recognition – in analogy to image recognition – refers to the problem of recognizing events of interest in video sequences such as human activities. Following the tremendous success of Transformers in sequential data, specifically in Natural Language Processing (NLP) [39, 5], Vision Transformers were very recently shown to outperform CNNs for image recognition too [48, 13, 35], signaling a paradigm shift on how visual understanding models should be constructed. In light of this, in this paper, we propose a Video Transformer model as an appealing and promising solution for improving the accuracy of video recognition models. 
A direct, natural extension of Vision Transformers to the spatio-temporal domain is to perform the self-attention jointly across all S spatial locations and T temporal locations. Full space-time attention though has complexityO(T 2S2) making such a model computationally heavy and, hence, impractical even when compared with the 3D-based convolutional models. As such, our aim is to exploit the temporal information present in video streams while minimizing the computational burden within the Transformer framework for efficient video recognition. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). A baseline solution to this problem is to consider spatial-only attention followed by temporal averaging, which has complexity O(TS2). Similar attempts to reduce the cost of full space-time attention have been recently proposed in [3, 1]. These methods have demonstrated promising results in terms of video recognition accuracy, yet they have been also shown to induce, in most of the cases, significant computational overheads compared to the baseline (spatial-only) method due to the additional modelling of the temporal information. Our main contribution in this paper is a Video Transformer model that has complexity O(TS2) and, hence, is as efficient as the baseline model, yet, as our results show, it outperforms recently/concurrently proposed work [3, 1] in terms of efficiency (i.e. accuracy/FLOP) by significant margins. To achieve this our model makes two approximations to the full space-time attention used in Video Transformers: (a) It restricts time attention to a local temporal window and capitalizes on the Transformer’s depth to obtain full temporal coverage of the video sequence. (b) It uses efficient space-time mixing to attend jointly spatial and temporal locations without inducing any additional cost on top of a spatial-only attention model. Fig. 1 shows the proposed approximation to space-time attention. We also show how to integrate two very lightweight mechanisms for global temporal-only attention, which provide additional accuracy improvements at minimal computational cost. We demonstrate that our model is surprisingly effective in terms of capturing long-term dependencies and producing very high recognition accuracy on the most popular video recognition datasets, including Something-Something-v2 [17], Kinetics [4] and Epic Kitchens [9], while at the same time being significantly more efficient than other Video Transformer models. 2 Related work Video recognition: Standard solutions are based on CNNs and can be broadly classified into two categories: 2D- and 3D-based approaches. 2D-based approaches process each frame independently to extract frame-based features which are then aggregated temporally with some sort of temporal modeling (e.g. temporal averaging) performed at the end of the network [42, 26, 27]. The works of [26, 27] use the “shift trick” [45] to have some temporal modeling at a layer level. 3D-based approaches [4, 16, 36] are considered the current state-of-the-art as they can typically learn stronger temporal models via 3D convolutions. However, they also incur higher computational and memory costs. To alleviate this, a large body of works attempt to improve their efficiency via spatial and/or temporal factorization [38, 37, 15]. CNN vs ViT: Historically, video recognition approaches tend to mimic the architectures used for image classification (e.g. from AlexNet [23] to [20] or from ResNet [18] and ResNeXt [47] to [16]). 
After revolutionizing NLP [39, 32], very recently, Transformer-based architectures showed promising results on large scale image classification too [13]. While self-attention and attention were previously used in conjunction with CNNs at a layer or block level [6, 50, 33], the Vision Transformer (ViT) of Dosovitskiy et al. [13] is the first convolution-free, Transformer-based architecture that achieves state-of-the-art on ImageNet [11]. Video Transformer: Recently/concurrently with our work, vision transformer architectures, derived from [13], were used for video recognition [3, 1], too. Because performing full space-time attention is computationally prohibitive (i.e. O(T 2S2)), their main focus is on reducing this via temporal and spatial factorization. In TimeSformer [3], the authors propose applying spatial and temporal attention in an alternating manner reducing the complexity to O(T 2S + TS2). In a similar fashion, ViViT [1] explores several avenues for space-time factorization. In addition, they also proposed to adapt the patch embedding process from [13] to 3D (i.e. video) data. Our work proposes a completely different approximation to full space-time attention that is also efficient. To this end, we firstly restrict full space-time attention to a local temporal window which is reminiscent of [2] but applied here to space-time attention and video recognition 1. Secondly, we define a local joint space-time attention which we show that can be implemented efficiently via the “shift trick” [45]. 3 Method Video Transformer: We are given a video clip X ∈ RT×H×W×C (C = 3). Following ViT [13], each frame is divided into K ×K non-overlapping patches which are then mapped into visual tokens using a linear embedding layer E ∈ R3K2×d. Since self-attention is permutation invariant, in order to preserve the information regarding the location of each patch within space and time we also learn two positional embeddings, one for space: ps ∈ R1×S×d and one for time: pt ∈ RT×1×d. These are then added to the initial visual tokens. Finally, the token sequence is processed by L Transformer layers. The visual token at layer l, spatial location s and temporal location t is denoted as: zls,t ∈ Rd, l = 0, . . . , L− 1, s = 0, . . . , S − 1, t = 0, . . . , T − 1. (1) In addition to the ST visual tokens extracted from the video, a special classification token zlcls ∈ Rd is prepended to the token sequence [12]. The l−th Transformer layer processes the visual tokens Zl ∈ R(ST+1)×d of the previous layer using a series of Multi-head Self-Attention (MSA), Layer Normalization (LN), and MLP (Rd → R4d → Rd) layers as follows: Yl = MSA(LN(Zl−1)) + Zl−1, (2) Zl = MLP(LN(Yl)) +Yl. (3) The main computation of a single full space-time Self-Attention (SA) head boils down to calculating: yls,t = T−1∑ t′=0 S−1∑ s′=0 Softmax{(qls,t · kls′,t′)/ √ dh}vls′,t′ , { s=0,...,S−1 t=0,...,T−1 } (4) where qls,t,k l s,t,v l s,t ∈ Rdh are the query, key, and value vectors computed from zls,t (after LN) using embedding matrices Wq,Wk,Wv ∈ Rd×dh . Finally, the output of the h heads is concatenated and projected using embedding matrix Wh ∈ Rhdh×d. The complexity of the full model is: O(3hTSddh) (qkv projections) +O(2hT 2S2dh) (MSA for h attention heads) +O(TS(hdh)d) (multi-head projection) +O(4TSd2) (MLP) 2. From these terms, our goal is to reduce the costO(2T 2S2dh) (for a single attention head) of the full space-time attention which is the dominant term 3. 
For clarity, from now on, we will drop constant terms and dh to report complexity unless necessary. Hence, the complexity of the full space-time attention is O(T 2S2). Our baseline is a model that performs a simple approximation to the full space-time attention by applying, at each Transformer layer, spatial-only attention: yls,t = S−1∑ s′=0 Softmax{(qls,t · kls′,t)/ √ dh}vls′,t, { s=0,...,S−1 t=0,...,T−1 } (5) 1Other attempts of exploiting local attention can be found in [29, 7, 49], however they are also different in scope, task/domain and implementation. 2For this work, we used S = 196, T = {8, 16, 32} and d = 768 (for a ViT-B backbone). 3The MLP complexity is by no means negligible, however the focus of this work (similarly to [3, 1]) is on reducing the complexity of the self-attention component. the complexity of which is O(TS2). Notably, the complexity of the proposed space-time mixing attention is also O(TS2). Following spatial-only attention, simple temporal averaging is performed on the class tokens zfinal = 1T ∑ t zL−1t,cls to obtain a single feature that is fed to the linear classifier. Recent work by [3, 1] has focused on reducing the cost O(T 2S2) of the full space-time attention of Eq. 4. Bertasius et al. [3] proposed the factorised attention: ỹls,t = T−1∑ t′=0 Softmax{(qls,t · kls,t′)/ √ dh}vls,t′ , yls,t = S−1∑ s′=0 Softmax{q̃ls,t · k̃ls′,t)/ √ dh}ṽls′,t, { s = 0, . . . , S − 1 t = 0, . . . , T − 1 } , (6) where q̃ls,t, k̃ l s′,tṽ l s′,t are new query, key and value vectors calculated from ỹ l s,t 4. The above model reduces complexity to O(T 2S + TS2). However, temporal attention is performed for a fixed spatial location which is ineffective when there is camera or object motion and there is spatial misalignment between frames. The work of [1] is concurrent to ours and proposes the following approximation: Ls Transformer layers perform spatial-only attention as in Eq. 5 (each with complexity O(S2)). Following this, there are Lt Transformer layers performing temporal-only attention on the class tokens zLst . The complexity of the temporal-only attention is, in general, O(T 2). Our model aims to better approximate the full space-time self-attention (SA) of Eq. 4 while keeping complexity to O(TS2), i.e. inducing no further complexity to a spatial-only model. To achieve this, we make a first approximation to perform full space-time attention but restricted to a local temporal window [−tw, tw]: yls,t = t+tw∑ t′=t−tw S−1∑ s′=0 Softmax{(qls,t · kls′,t′)/ √ dh}vls′,t′ = t+tw∑ t′=t−tw Vlt′a l t′ , { s=0,...,S−1 t=0,...,T−1 } (7) where Vlt′ = [v l 0,t′ ;v l 1,t′ ; . . . ;v l S−1,t′ ] ∈ Rdh×S and alt′ = [al0,t′ , al1,t′ , . . . , alS−1,t′ ] ∈ RS is the vector with the corresponding attention weights. Eq. 7 shows that, for a single Transformer layer, yls,t is a spatio-temporal combination of the visual tokens in the local window [−tw, tw]. It follows that, after k Transformer layers, yl+ks,t will be a spatio-temporal combination of the visual tokens in the local window [−ktw, ktw] which in turn conveniently allows to perform spatio-temporal attention over the whole clip. For example, for tw = 1 and k = 4, the local window becomes [−4, 4] which spans the whole video clip for the typical case T = 8. The complexity of the local self-attention of Eq. 7 is O(T (2tw + 1)2S2). To reduce this even further, we make a second approximation on top of the first one as follows: the attention between spatial locations s and s′ according to the model of Eq. 
7 is: t+tw∑ t′=t−tw Softmax{(qls,t · kls′,t′)/ √ dh}vls′,t′ , (8) i.e. it requires the calculation of 2tw +1 attentions, one per temporal location over [−tw, tw]. Instead, we propose to calculate a single attention over [−tw, tw] which can be achieved by qls,t attending kls′,−tw:tw , [k l s′,t−tw ; . . . ;k l s′,t+tw ] ∈ R(2tw+1)dh . Note that to match the dimensions of qls,t and kls′,−tw:tw a further projection of k l s′,−tw:tw to R dh is normally required which has complexity O((2tw +1)d 2 h) and hence compromises the goal of an efficient implementation. To alleviate this we use the “shift trick” [45, 26] which allows to perform both zero-cost dimensionality reduction, spacetime mixing and attention (between qls,t and k l s′,−tw:tw ) in O(dh). In particular, each t ′ ∈ [−tw, tw] is assigned dt ′ h channels from dh (i.e. ∑ t′ d t′ h = dh). Let k l s′,t′(d t′ h ) ∈ Rd t′ h denote the operator for 4More precisely, Eq. 6 holds for h = 1 heads. For h > 1, the different heads ỹl,hs,t are concatenated and projected to produce ỹls,t. indexing the dt ′ h channels from k l s′,t′ . Then, a new key vector is constructed as: k̃ls′,−tw:tw , [k l s′,t−tw(d t−tw h ), . . . ,k l s′,t+tw(d t+tw h )] ∈ R dh . (9) Fig. 2 shows how the key vector k̃ls′,−tw:tw is constructed. In a similar way, we also construct a new value vector ṽls′,−tw:tw . Finally, the proposed approximation to the full space-time attention is given by: ylss,t = S−1∑ s′=0 Softmax{(qlss,t · k̃ls′,−tw:tw/ √ dh}ṽls′,−tw:tw , { s=0,...,S−1 t=0,...,T−1 } . (10) This has the complexity of a spatial-only attention (O(TS2)) and hence it is more efficient than previously proposed video transformers [3, 1]. Our model also provides a better approximation to the full space-time attention and as shown by our results it significantly outperforms [3, 1]. Temporal Attention aggregation: The final set of the class tokens zL−1t,cls , 0 ≤ t ≤ L− 1 are used to generate the predictions. To this end, we propose to consider the following options: (a) simple temporal averaging zfinal = 1T ∑ t z L−1 t,cls as in the case of our baseline. (b) An obvious limitation of temporal averaging is that the output is treated purely as an ensemble of per-frame features and, hence, completely ignores the temporal ordering between them. To address this, we propose to use a lightweight Temporal Attention (TA) mechanism that will attend to the T classification tokens. In particular a zfinal token attends the sequence [zL−10,cls, . . . , z L−1 T−1,cls] using a temporal Transformer layer and then fed as input to the classifier. This is akin to the (concurrent) work of [1] with the difference being that in our model we found that a single TA layer suffices whereas [1] uses Lt. A consequence of this is that the complexity of our layer is O(T ) vs O(2(Lt − 1)T 2 + T ) of [1]. Summary token: As an alternative to TA, herein, we also propose a simple lightweight mechanism for information exchange between different frames at intermediate layers of the network. Given the set of tokens for each frame t, Zl−1t ∈ R(S+1)×dh (constructed by concatenating all tokens zl−1s,t , s = 0, . . . , S), we compute a new set of R tokens Z l r,t = φ(Z l−1 t ) ∈ RR×dh which summarize the frame information and hence are named “Summary” tokens. These are then, appended to the visual tokens of all frames to calculate the keys and values so that the query vectors attend the original keys plus the Summary tokens. Herein, we explore the case that φ(.) 
performs simple spatial averaging zl0,t = 1 S ∑ s z l s,t over the tokens of each frame (R = 1 for this case). Note that, forR = 1, the extra cost that the Summary token induces is O(TS). X-ViT: We call the Video Transformer based on the proposed (a) space-time mixing attention and (b) lightweight global temporal attention (or summary token) as X-ViT. 4 Results 4.1 Experimental setup Datasets: We train and evaluate the proposed models on the following datasets (all datasets are publicly available for research purposes): Kinetics-400 and 600: The Kinetics [21] dataset consists of short clips (typically 10 sec long sampled from YouTube) labeled using 400 and 600 classes, respectively. Due to the removal of some videos from YouTube, the version of the dataset used in this paper consists of approximately 261K clips for Kinetics-400. Note, that these amounts are lower than the original version of the datasets and thus might represent a negative performance bias when compared with prior works. Something-Something-v2 (SSv2): The SSv2 [17] dataset consists of 220,487 short videos (of duration between 2 and 6 sec) that depict humans performing pre-defined basic actions with everyday objects. Because the objects and backgrounds in the videos are consistent across different action classes, this dataset tends to require stronger temporal modeling. Due to this, we conducted most of our ablation studies on SSv2 to better analyze the importance of the proposed components. Epic Kitchens-100 (Epic-100): is an egocentric large scale action recognition dataset consisting of more than 90,000 action segments spanning 100 hours of recordings in home environments, capturing daily activities [10]. The dataset is labeled using 97 verb classes and 300 noun classes. The evaluation results are reported using the standard action recognition protocol: the network predicts the “verb” and the “noun” using two heads. The predictions are then merged to construct an “action” which is used to report the accuracy. Training details: All models, unless otherwise stated, were trained using the following scheduler and training procedure: specifically, our models were trained using SGD with momentum (0.9) and a cosine scheduler [28] (with linear warmup) for 35 epochs on SSv2, 50 on Epic-100 and 30 on Kinetics. The base learning rate, set at a batch size of 128, was 0.05 (0.03 for Kinetics). To prevent over-fitting we made use of the following augmentation techniques: random scaling (0.9× to 1.3×) and cropping, random flipping (with probability of 0.5; not for SSv2) and autoaugment [8]. In addition, for SSv2 and Epic-100, we also applied random erasing (probability=0.5, min. area=0.02, max. area=1/3, min. aspect=0.3) [52] and label smoothing (λ = 0.3) [34] while, for Kinetics, we used mixup [51] (α = 0.4). The backbone models follow closely the ViT architecture of Dosovitskiy et al. [13]. Most experiments were performed using the ViT-B/16 variant (L = 12, h = 12, d = 768, K = 16), where L represents the number of transformer layers, h the number of heads, d the embedding dimension and K the patch size. We initialized our models from a pretrained ImageNet-21k [11] ViT model. The spatial positional encoding ps was initialized from the pretrained 2D model and the temporal one, pt, with zeros so that it does not have a great impact on the tokens early on during training. The models were trained on 8 V100 GPUs using PyTorch [30]. Testing details: Unless otherwise stated, we used ViT-B/16 and T = 8 frames. 
4 Results

4.1 Experimental setup

Datasets: We train and evaluate the proposed models on the following datasets (all datasets are publicly available for research purposes). Kinetics-400 and 600: The Kinetics [21] dataset consists of short clips (typically 10 sec long, sampled from YouTube) labeled using 400 and 600 classes, respectively. Due to the removal of some videos from YouTube, the version of the dataset used in this paper consists of approximately 261K clips for Kinetics-400. Note that these amounts are lower than in the original versions of the datasets and thus might represent a negative performance bias when comparing with prior works. Something-Something-v2 (SSv2): The SSv2 [17] dataset consists of 220,487 short videos (of duration between 2 and 6 sec) that depict humans performing pre-defined basic actions with everyday objects. Because the objects and backgrounds in the videos are consistent across different action classes, this dataset tends to require stronger temporal modeling. Due to this, we conducted most of our ablation studies on SSv2 to better analyze the importance of the proposed components. Epic Kitchens-100 (Epic-100): an egocentric large-scale action recognition dataset consisting of more than 90,000 action segments spanning 100 hours of recordings in home environments, capturing daily activities [10]. The dataset is labeled using 97 verb classes and 300 noun classes. The evaluation results are reported using the standard action recognition protocol: the network predicts the “verb” and the “noun” using two heads, and the predictions are then merged to construct an “action” which is used to report the accuracy.

Training details: All models, unless otherwise stated, were trained using the following scheduler and training procedure: our models were trained using SGD with momentum (0.9) and a cosine scheduler [28] (with linear warmup) for 35 epochs on SSv2, 50 on Epic-100 and 30 on Kinetics. The base learning rate, set at a batch size of 128, was 0.05 (0.03 for Kinetics). To prevent over-fitting we made use of the following augmentation techniques: random scaling (0.9× to 1.3×) and cropping, random flipping (with probability 0.5; not for SSv2) and autoaugment [8]. In addition, for SSv2 and Epic-100, we also applied random erasing (probability=0.5, min. area=0.02, max. area=1/3, min. aspect=0.3) [52] and label smoothing (λ = 0.3) [34], while for Kinetics we used mixup [51] (α = 0.4). The backbone models closely follow the ViT architecture of Dosovitskiy et al. [13]. Most experiments were performed using the ViT-B/16 variant (L = 12, h = 12, d = 768, K = 16), where L is the number of transformer layers, h the number of heads, d the embedding dimension and K the patch size. We initialized our models from a pretrained ImageNet-21k [11] ViT model. The spatial positional encoding $\mathbf{p}_s$ was initialized from the pretrained 2D model and the temporal one, $\mathbf{p}_t$, with zeros, so that it does not have a great impact on the tokens early on during training. The models were trained on 8 V100 GPUs using PyTorch [30].

Testing details: Unless otherwise stated, we used ViT-B/16 and T = 8 frames. We mostly used Temporal Attention (TA) for temporal aggregation. We report accuracy results for 1 × 3 views (1 temporal clip and 3 spatial crops), departing from the common approach of using up to 10 × 3 views [26, 16]. The 1 × 3 views setting was also used in Bertasius et al. [3]. To measure the variation between runs, we trained one of the 8-frame models 5 times; the results varied by ±0.4%.

4.2 Ablation studies

Throughout this section, we study the effect of varying certain design choices and different components of our method. Because SSv2 tends to require more fine-grained temporal modeling, unless otherwise specified all results reported in this section are on SSv2.

Table 2: Effect of (a) the position of the proposed SA, (b) temporal aggregation and number of Temporal Attention (TA) layers, (c) space-time mixing of the qkv vectors, and (d) the amount of mixed channels, on SSv2.
(a) Effect of applying the proposed SA to certain layers (Top-1 / Top-5): 1st half: 61.7 / 86.5; 2nd half: 61.6 / 86.3; half (odd positions): 61.2 / 86.4; all layers: 62.6 / 87.8.
(b) Effect of the number of TA layers, where 0 corresponds to temporal averaging (Top-1 / Top-5): 0 (temp. avg.): 62.4 / 87.8; 1: 64.4 / 89.3; 2: 64.5 / 89.3; 3: 64.5 / 89.3.
(c) Effect of space-time mixing, where x denotes the input token before the qkv projection; the query produces equivalent results to the key and is thus omitted (Top-1 / Top-5): no mixing: 56.6 / 83.5; x only: 63.1 / 88.8; key only: 63.1 / 88.8; value only: 62.5 / 88.6; key + value (our model): 64.4 / 89.3.
(d) Effect of the amount of mixed channels (Top-1), where * uses temp. avg. aggregation: 0%*: 45.2; 0%: 56.6; 25%: 64.3; 50%: 64.4; 100%: 62.5.

Effect of local window size: Table 1 shows the accuracy of our model when varying the local window size [−t_w, t_w] used in the proposed space-time mixing attention. Firstly, we observe that the proposed model is significantly superior to our baseline (t_w = 0), which uses spatial-only attention. Secondly, a window of t_w = 1 produces the best results. This shows that a more gradual increase of the effective window size that is attended is more beneficial than a more aggressive one, i.e. the case where t_w = 2. The performance degradation for t_w = 2 could be attributed to boundary effects (handled by filling with zeros), which are aggravated as t_w increases. Based on these results, we chose t_w = 1 for the models reported hereafter. For short to medium-length videos, t_w = 1 seems to suffice, as the temporal receptive field grows as we advance in depth in the model, allowing it to capture a larger effective temporal window. For the datasets used, as explained earlier, after a few transformer layers the whole clip is effectively covered. However, for significantly longer video sequences, larger window sizes may perform better.

Effect of SA position: We explored to which layers within the network the proposed space-time mixing attention should be applied. Specifically, we explored the following variants: applying it to the first L/2 layers, to the last L/2 layers, to every odd-indexed layer and, finally, to all layers. As the results in Table 2a show, the exact layers within the network that self-attention is applied to do not matter; what matters is the number of layers it is applied to. We attribute this result to the increased temporal receptive field and cross-frame interactions.

Effect of temporal aggregation: Herein, we compare the two methods used for temporal aggregation: simple temporal averaging [41] and the proposed Temporal Attention (TA) mechanism.
Given that our model already incorporates temporal information through the proposed space-time attention, we also explored how many TA layers are needed. As shown in Table 2b, replacing temporal averaging with one TA layer improves the Top-1 accuracy from 62.5% to 64.4%. Increasing the number of layers further yields no additional benefits. In Table 2d, we also report the accuracy of spatial-only attention (0% mixing) plus TA aggregation. In the absence of the proposed space-time mixing attention, the TA layer alone is unable to compensate, scoring only 56.6%. In the same table, 45.2% is the accuracy of a model trained without the proposed local attention and without the TA layer (i.e. using temporal pooling for aggregation). Overall, the results highlight the need for both components in our final model. For the next two ablation studies, we used 1 TA layer.

Effect of space-time mixing qkv vectors: Paramount to our work is the proposed space-time mixing attention of Eq. 10, which is implemented by constructing $\tilde{\mathbf{k}}^l_{s',-t_w:t_w}$ and $\tilde{\mathbf{v}}^l_{s',-t_w:t_w}$ efficiently via channel indexing (see Eq. 9). Space-time mixing, though, can be applied in several different ways in the model. For completeness, herein, we study the effect of applying space-time mixing to various combinations of the key, the value and the input token prior to the qkv projection. As shown in Table 2c, the combination corresponding to our model (i.e. space-time mixing applied to the key and value) significantly outperforms all other variants, by up to 2%. This result is important as it confirms that our model, derived from the proposed approximation to the local space-time attention, gives the best results when compared to other, less well-motivated variants.

Effect of amount of space-time mixing: We define $\rho d_h$ as the total number of channels coming from the adjacent frames in the local temporal window $[-t_w, t_w]$ (i.e. $\sum_{t'=-t_w, t'\neq 0}^{t_w} d_h^{t'} = \rho d_h$) when constructing $\tilde{\mathbf{k}}^l_{s',-t_w:t_w}$ (see Section 3). Herein, we study the effect of $\rho$ on the model’s accuracy. As the results in Table 2d show, the optimal $\rho$ is between 25% and 50%. Increasing $\rho$ to 100% (i.e. all channels coming from adjacent frames) unsurprisingly degrades the performance, as it excludes the case $t' = t$ when performing the self-attention.

Effect of Summary token: Herein, we compare Temporal Attention with the Summary token on SSv2 and Kinetics-400. We used both datasets for this case as they require different types of understanding: fine-grained temporal (SSv2) and spatial content (Kinetics-400). From Table 4, we conclude that the Summary token compares favorably on Kinetics-400 but not on SSv2, showing that it is more useful for capturing spatial information. Since the improvement is small, we conclude that 1 TA layer is the best global attention-based mechanism for improving the accuracy of our method, while also adding negligible computational cost.

Effect of number of input frames: Herein we evaluate the impact of increasing the number of input frames T from 8 to 16 and 32. We note that, for our method, this change results in a linear increase in complexity. As the results in Table 7 show, increasing the number of frames from 8 to 16 offers a 1.8% boost in Top-1 accuracy on SSv2. Moreover, increasing the number of frames to 32 improves the performance by a further 0.2%, offering diminishing returns. Similar behavior can be observed on Kinetics and Epic-100 in Tables 5 and 8.
Effect of number of tokens and different model sizes: Herein, we vary the number of input tokens by changing the patch size K. As the results in Table 3 show, even when the number of tokens decreases significantly (e.g. ViT-B/32 or ViT-S/32), our approach is still able to produce results of satisfactory accuracy, with the benefit of a significantly more efficient model. Similar conclusions can be drawn when the model size (in terms of parameters and FLOPs) is varied. Our approach provides consistent results in all cases, showcasing its ability to scale well from tiny (XViT-T) to large (XViT-L) models.

Latency and throughput considerations: While the channel shifting operation used by the proposed space-time mixing attention is zero-FLOP, there is still a small cost associated with memory movement operations. In order to ascertain that the induced cost does not introduce noticeable performance degradation, we benchmarked a ViT-B/16 (8-frame) model using spatial-only attention and the proposed space-time mixing attention on 8 V100 GPUs and a batch size of 128. A model with spatial-only attention has a throughput of 312 fps, while our model has 304 fps.

4.3 Comparison to state-of-the-art

Our best model uses the proposed space-time mixing attention in all the Transformer layers and performs temporal aggregation using a single lightweight temporal transformer layer, as described in Section 3. Unless otherwise specified, we report the results using the 1 × 3 configuration for the views (1 temporal and 3 spatial) for all datasets. Regarding related work on transformer-based video recognition [1, 3], we included their very best models trained on the same data as our models. For TimeSformer, this is typically the TimeSformer-L version. For ViViT, we used the 16x2 configuration, with factorized encoding for Epic-100 and SSv2 (as reported in Tables 6d and 6e in [1]) and the full version for Kinetics (as reported in Table 6a in [1]). On Kinetics-400, we match the current state-of-the-art while having significantly lower computational complexity than the next two best recently proposed methods that also use Transformer-based architectures: 20× fewer FLOPs than ViViT [1] and 8× fewer than TimeSformer-L [3]. Note that both models from [1, 3] and ours were initialized from a ViT model pretrained on ImageNet-21k [11] and take as input frames at a resolution of 224 × 224px. Similar conclusions can be drawn from Table 6, which reports our results on Kinetics-600. On SSv2, we match and surpass the current state-of-the-art, especially in terms of Top-5 accuracy (ours: 90.7% vs ViViT: 89.8% [1]), using models that are 14× (16 frames) and 9× (32 frames) faster. Finally, we observe similar outcomes on Epic-100, where we set a new state-of-the-art, showing large improvements especially for “Verb” accuracy, while again being more efficient.

5 Ethical considerations and broader impact

Current high-performing video recognition models tend to have high computational demands for both training and testing and, by extension, significant environmental costs. This is especially true for transformer-based architectures. Our research introduces a novel approach that matches and surpasses the current state-of-the-art while being significantly more efficient thanks to the linear scaling of the complexity with respect to the number of frames. We hope such models will offer a noticeable reduction in power consumption while at the same time setting a solid base for future research.
We will release code and models to facilitate this. Moreover, and similarly to most data-driven systems, bias from the training data can potentially affect the fairness of the model. As such, we suggest taking this aspect into consideration when deploying the models in real-world scenarios.

6 Conclusions

We presented a novel approximation to the full space-time attention that is amenable to an efficient implementation and applied it to video recognition. Our approximation has the same computational cost as spatial-only attention, yet the resulting video Transformer model was shown to be significantly more efficient than recently proposed Video Transformers [3, 1]. By no means does this paper propose a complete solution to video recognition using video Transformers. Future efforts could include combining our approach with architectures other than the standard ViT, removing the dependency on pre-trained models, and applying the model to other video-related tasks such as detection and segmentation. Finally, further research is required for deploying our models on low-power/resource-constrained devices.
1. What is the focus and contribution of the paper on Video Transformer? 2. What are the strengths of the proposed approach, particularly in terms of efficiency and scalability? 3. Do you have any concerns regarding the choice of hyperparameters, especially the local temporal window size? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any limitations or potential improvements regarding the applicability of the proposed method to different datasets or scenarios?
Summary Of The Paper Review
Summary Of The Paper
In this paper, the authors proposed a Video Transformer model whose complexity scales linearly (instead of quadratically) with the number of frames in the video sequence. The key ideas are 1) restricting time attention to a local temporal window, and 2) using efficient space-time mixing to attend jointly to spatial and temporal locations. They empirically verified that the proposed method outperforms existing state-of-the-art models on popular action recognition datasets.
Review
The paper is well-written overall, with full details of the model in Section 3 and an intuitive demonstration in Figure 1. It looks like t_w, the additional hyper-parameter controlling the local window size, is very important, and its optimal value may vary significantly from dataset to dataset. Although the authors included empirical evidence in Table 1, I still believe that this may need to be tuned differently for other datasets. Especially when a dataset contains relatively longer videos, and thus less homogeneous scenes, the optimal value for t_w may be significantly larger than 1. I guess the optimal value was 1 since SSv2 contains mostly homogeneous and short videos. This interpretation is aligned with the main idea of this work that attention only within a short temporal window is enough, as in Figure 1 (d). It is okay to limit the scope of this work to these kinds of short videos and to action recognition (as opposed to general, topical video classification for longer videos like YouTube-8M or TVR), but the scope should still be mentioned clearly. Other than the point in #2, the experiments were conducted clearly and show impressive results on multiple datasets. Ablation studies were also conducted nicely.
NIPS
1. What is the main contribution of the paper regarding efficient self-attention operations for video models? 2. What are the strengths and weaknesses of the proposed method compared to prior works such as ViViT and TimeSFormer? 3. How does the reviewer assess the novelty and similarity of the proposed method with respect to previous research, including TSM and other transformer papers? 4. Are there any concerns or suggestions regarding the experimental results, such as the choice of units for reporting runtime and the inclusion of additional information in Table 4? 5. Are there any minor points or typos that should be addressed in the revision, such as the shared reference for Timesformer and Vivit in Table 5?
Summary Of The Paper Review
Summary Of The Paper
The authors propose a more efficient self-attention operation, specifically for video models, such that the compute requirements increase linearly with the length of the video. The authors' contributions consist of two main things: 1) self-attention along the temporal axis is limited to a smaller window instead of the whole sequence (with multiple transformer layers, the temporal "receptive field" grows to cover the whole video), and 2) the authors efficiently perform "space-time mixing" by constructing the keys and values for attention by concatenating features from different frames. This operation requires 0 floating point operations, but is still able to combine temporal information from adjacent frames effectively. The authors show good experimental results. They achieve similar results to ViViT [1] and TimeSformer [3] whilst using far fewer GFLOPs, on Kinetics, SSv2 and Epic Kitchens. The paper is also well written in general.
Review
The method appears fairly straightforward to implement, is efficient in terms of GFLOPs and achieves comparable results to ViViT and TimeSformer whilst using fewer GFLOPs. However, I do think the novelty is quite limited: the idea of restricting self-attention to more localised patterns has been explored before in a number of papers, e.g. [A, B, C, D], among others. The authors should discuss these papers more. The authors' proposed method for efficient "space-time mixing" is very similar to that of TSM [42]. Moreover, although I am not aware of other transformer papers that do efficient "space-time mixing" in exactly the same way, it does seem quite similar in spirit to Model 4 of ViViT [1], and a discussion of this is completely missing in the paper. On a related note, the authors often refer to recent video-transformer papers [1, 3]. However, both of these papers propose multiple different models, and it is not clear which models in those papers the authors are referring to. For example, in Figure 1 the attention pattern for TimeSformer is their "Divided Space-Time Attention", and is also Model 3 of ViViT. On Page 5, when the authors compare their "Temporal Attention aggregation" to ViViT, this refers only to Model 2 of [1]. These should be clarified in the revision. In the experiments section, the authors state they match state-of-the-art results while being "significantly faster". Actually, the authors only report GFLOPs, so it is not correct to say that they are "faster". Rather, the proposed model uses fewer floating point operations, and so the authors should use more precise language here. (The same applies to the discussion of results in the supplementary.) A further point here is that the authors' "shift trick" does not require any FLOPs, but does increase the runtime. Note that TSM [42] also shows that the impact of the "shift trick" on runtime is architecture-specific, with many caveats. When the authors report runtime on Page 8, it is reported with "frames/second" as the unit. A better unit would be "video clips / second", as that is independent of the number of frames used (this is a model hyperparameter) and thus more comparable across different methods. Am I correct that, as the authors process 8 frames per video, 312 frames/second = 39 video clips / second? Table 4 would also be clearer if the number of tokens and/or the number of frames processed by the model were included as separate columns.
Minor points
Table 5: TimeSformer and ViViT have the same reference.
Line 155 to 156: [1] also shows that L_t = 1 works well, and this is identical to the authors' temporal aggregation approach.
References
[A] N. Parmar et al. Image Transformer. ICML 2018.
[B] R. Child et al. Generating Long Sequences with Sparse Transformers. ICML 2019.
[C] I. Beltagy et al. Longformer: The Long-Document Transformer. (The authors cited this, but did not discuss it in context.)
[D] M. Zaheer et al. Big Bird: Transformers for Longer Sequences. NeurIPS 2020.
NIPS
Title
Space-time Mixing Attention for Video Transformer
Abstract
This paper is on video recognition using Transformers. Very recent attempts in this area have demonstrated promising results in terms of recognition accuracy, yet they have also been shown to induce, in many cases, significant computational overheads due to the additional modelling of the temporal information. In this work, we propose a Video Transformer model the complexity of which scales linearly with the number of frames in the video sequence and hence induces no overhead compared to an image-based Transformer model. To achieve this, our model makes two approximations to the full space-time attention used in Video Transformers: (a) It restricts time attention to a local temporal window and capitalizes on the Transformer’s depth to obtain full temporal coverage of the video sequence. (b) It uses efficient space-time mixing to attend jointly to spatial and temporal locations without inducing any additional cost on top of a spatial-only attention model. We also show how to integrate two very lightweight mechanisms for global temporal-only attention which provide additional accuracy improvements at minimal computational cost. We demonstrate that our model produces very high recognition accuracy on the most popular video recognition datasets while at the same time being significantly more efficient than other Video Transformer models. Code for our method is made available here.

1 Introduction

Video recognition – in analogy to image recognition – refers to the problem of recognizing events of interest in video sequences, such as human activities. Following the tremendous success of Transformers on sequential data, specifically in Natural Language Processing (NLP) [39, 5], Vision Transformers were very recently shown to outperform CNNs for image recognition too [48, 13, 35], signaling a paradigm shift in how visual understanding models should be constructed. In light of this, in this paper, we propose a Video Transformer model as an appealing and promising solution for improving the accuracy of video recognition models.
A direct, natural extension of Vision Transformers to the spatio-temporal domain is to perform the self-attention jointly across all S spatial locations and T temporal locations. Full space-time attention, though, has complexity $O(T^2S^2)$, making such a model computationally heavy and, hence, impractical even when compared with 3D-based convolutional models. As such, our aim is to exploit the temporal information present in video streams while minimizing the computational burden within the Transformer framework for efficient video recognition.

A baseline solution to this problem is to consider spatial-only attention followed by temporal averaging, which has complexity $O(TS^2)$. Similar attempts to reduce the cost of full space-time attention have recently been proposed in [3, 1]. These methods have demonstrated promising results in terms of video recognition accuracy, yet they have also been shown to induce, in most cases, significant computational overheads compared to the baseline (spatial-only) method due to the additional modelling of the temporal information.

Our main contribution in this paper is a Video Transformer model that has complexity $O(TS^2)$ and, hence, is as efficient as the baseline model, yet, as our results show, it outperforms recently/concurrently proposed work [3, 1] in terms of efficiency (i.e. accuracy/FLOP) by significant margins. To achieve this, our model makes two approximations to the full space-time attention used in Video Transformers: (a) It restricts time attention to a local temporal window and capitalizes on the Transformer’s depth to obtain full temporal coverage of the video sequence. (b) It uses efficient space-time mixing to attend jointly to spatial and temporal locations without inducing any additional cost on top of a spatial-only attention model. Fig. 1 shows the proposed approximation to space-time attention. We also show how to integrate two very lightweight mechanisms for global temporal-only attention, which provide additional accuracy improvements at minimal computational cost. We demonstrate that our model is surprisingly effective in terms of capturing long-term dependencies and producing very high recognition accuracy on the most popular video recognition datasets, including Something-Something-v2 [17], Kinetics [4] and Epic Kitchens [9], while at the same time being significantly more efficient than other Video Transformer models.

2 Related work

Video recognition: Standard solutions are based on CNNs and can be broadly classified into two categories: 2D- and 3D-based approaches. 2D-based approaches process each frame independently to extract frame-based features, which are then aggregated temporally with some sort of temporal modeling (e.g. temporal averaging) performed at the end of the network [42, 26, 27]. The works of [26, 27] use the “shift trick” [45] to obtain some temporal modeling at the layer level. 3D-based approaches [4, 16, 36] are considered the current state-of-the-art as they can typically learn stronger temporal models via 3D convolutions. However, they also incur higher computational and memory costs. To alleviate this, a large body of work attempts to improve their efficiency via spatial and/or temporal factorization [38, 37, 15].

CNN vs ViT: Historically, video recognition approaches have tended to mimic the architectures used for image classification (e.g. from AlexNet [23] to [20], or from ResNet [18] and ResNeXt [47] to [16]).
After revolutionizing NLP [39, 32], very recently, Transformer-based architectures showed promising results on large-scale image classification too [13]. While self-attention and attention were previously used in conjunction with CNNs at a layer or block level [6, 50, 33], the Vision Transformer (ViT) of Dosovitskiy et al. [13] is the first convolution-free, Transformer-based architecture that achieves state-of-the-art on ImageNet [11].

Video Transformer: Recently/concurrently with our work, vision transformer architectures derived from [13] were used for video recognition [3, 1], too. Because performing full space-time attention is computationally prohibitive (i.e. $O(T^2S^2)$), their main focus is on reducing this via temporal and spatial factorization. In TimeSformer [3], the authors propose applying spatial and temporal attention in an alternating manner, reducing the complexity to $O(T^2S + TS^2)$. In a similar fashion, ViViT [1] explores several avenues for space-time factorization. In addition, they also proposed to adapt the patch embedding process from [13] to 3D (i.e. video) data. Our work proposes a completely different approximation to full space-time attention that is also efficient. To this end, we firstly restrict full space-time attention to a local temporal window, which is reminiscent of [2] but applied here to space-time attention and video recognition.¹ Secondly, we define a local joint space-time attention which we show can be implemented efficiently via the “shift trick” [45].

3 Method

Video Transformer: We are given a video clip $\mathbf{X} \in \mathbb{R}^{T \times H \times W \times C}$ ($C = 3$). Following ViT [13], each frame is divided into $K \times K$ non-overlapping patches which are then mapped into visual tokens using a linear embedding layer $\mathbf{E} \in \mathbb{R}^{3K^2 \times d}$. Since self-attention is permutation invariant, in order to preserve the information regarding the location of each patch within space and time we also learn two positional embeddings, one for space, $\mathbf{p}_s \in \mathbb{R}^{1 \times S \times d}$, and one for time, $\mathbf{p}_t \in \mathbb{R}^{T \times 1 \times d}$. These are then added to the initial visual tokens. Finally, the token sequence is processed by $L$ Transformer layers. The visual token at layer $l$, spatial location $s$ and temporal location $t$ is denoted as

$\mathbf{z}^l_{s,t} \in \mathbb{R}^d, \quad l = 0, \dots, L-1, \; s = 0, \dots, S-1, \; t = 0, \dots, T-1.$  (1)

In addition to the $ST$ visual tokens extracted from the video, a special classification token $\mathbf{z}^l_{cls} \in \mathbb{R}^d$ is prepended to the token sequence [12]. The $l$-th Transformer layer processes the visual tokens $\mathbf{Z}^l \in \mathbb{R}^{(ST+1) \times d}$ of the previous layer using a series of Multi-head Self-Attention (MSA), Layer Normalization (LN), and MLP ($\mathbb{R}^d \to \mathbb{R}^{4d} \to \mathbb{R}^d$) layers as follows:

$\mathbf{Y}^l = \mathrm{MSA}(\mathrm{LN}(\mathbf{Z}^{l-1})) + \mathbf{Z}^{l-1},$  (2)
$\mathbf{Z}^l = \mathrm{MLP}(\mathrm{LN}(\mathbf{Y}^l)) + \mathbf{Y}^l.$  (3)

The main computation of a single full space-time Self-Attention (SA) head boils down to calculating:

$\mathbf{y}^l_{s,t} = \sum_{t'=0}^{T-1} \sum_{s'=0}^{S-1} \mathrm{Softmax}\{(\mathbf{q}^l_{s,t} \cdot \mathbf{k}^l_{s',t'}) / \sqrt{d_h}\}\, \mathbf{v}^l_{s',t'}, \quad s = 0, \dots, S-1, \; t = 0, \dots, T-1,$  (4)

where $\mathbf{q}^l_{s,t}, \mathbf{k}^l_{s,t}, \mathbf{v}^l_{s,t} \in \mathbb{R}^{d_h}$ are the query, key, and value vectors computed from $\mathbf{z}^l_{s,t}$ (after LN) using embedding matrices $\mathbf{W}_q, \mathbf{W}_k, \mathbf{W}_v \in \mathbb{R}^{d \times d_h}$. Finally, the output of the $h$ heads is concatenated and projected using embedding matrix $\mathbf{W}_h \in \mathbb{R}^{hd_h \times d}$. The complexity of the full model is: $O(3hTSdd_h)$ (qkv projections) $+ O(2hT^2S^2d_h)$ (MSA for $h$ attention heads) $+ O(TS(hd_h)d)$ (multi-head projection) $+ O(4TSd^2)$ (MLP).² From these terms, our goal is to reduce the cost $O(2T^2S^2d_h)$ (for a single attention head) of the full space-time attention, which is the dominant term.³
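To make the dominant $O(T^2S^2)$ term tangible, the following is a minimal PyTorch-style sketch, our own illustration rather than the released code, of a single full space-time SA head as in Eq. (4). The $(TS) \times (TS)$ attention matrix is exactly where the quadratic cost comes from; tensor names and the batching convention are assumptions.

```python
import torch

def full_space_time_attention(z, Wq, Wk, Wv):
    """One full space-time SA head over all T*S visual tokens (cf. Eq. 4).

    z:  (B, T*S, d)  visual tokens of a clip (class token omitted for brevity)
    Wq, Wk, Wv: (d, d_h) query/key/value projection matrices
    Returns: (B, T*S, d_h)
    """
    q, k, v = z @ Wq, z @ Wk, z @ Wv               # (B, T*S, d_h) each
    d_h = q.shape[-1]
    # The (T*S) x (T*S) attention matrix is the O(T^2 S^2) bottleneck.
    attn = torch.softmax(q @ k.transpose(-2, -1) / d_h ** 0.5, dim=-1)
    return attn @ v

# Illustrative sizes from the paper: S = 196, T = 8, d = 768; one head with d_h = 64
B, T, S, d, d_h = 1, 8, 196, 768, 64
z = torch.randn(B, T * S, d)
Wq, Wk, Wv = torch.randn(3, d, d_h)
y = full_space_time_attention(z, Wq, Wk, Wv)       # attention matrix has (T*S)^2 ~ 2.5M entries
```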
For clarity, from now on, we will drop constant terms and $d_h$ to report complexity unless necessary. Hence, the complexity of the full space-time attention is $O(T^2S^2)$.

Our baseline is a model that performs a simple approximation to the full space-time attention by applying, at each Transformer layer, spatial-only attention:

$\mathbf{y}^l_{s,t} = \sum_{s'=0}^{S-1} \mathrm{Softmax}\{(\mathbf{q}^l_{s,t} \cdot \mathbf{k}^l_{s',t}) / \sqrt{d_h}\}\, \mathbf{v}^l_{s',t}, \quad s = 0, \dots, S-1, \; t = 0, \dots, T-1,$  (5)

the complexity of which is $O(TS^2)$. Notably, the complexity of the proposed space-time mixing attention is also $O(TS^2)$. Following spatial-only attention, simple temporal averaging is performed on the class tokens, $\mathbf{z}_{final} = \frac{1}{T}\sum_t \mathbf{z}^{L-1}_{t,cls}$, to obtain a single feature that is fed to the linear classifier.

¹Other attempts of exploiting local attention can be found in [29, 7, 49], however they are also different in scope, task/domain and implementation. ²For this work, we used $S = 196$, $T = \{8, 16, 32\}$ and $d = 768$ (for a ViT-B backbone). ³The MLP complexity is by no means negligible, however the focus of this work (similarly to [3, 1]) is on reducing the complexity of the self-attention component.

Recent work by [3, 1] has focused on reducing the cost $O(T^2S^2)$ of the full space-time attention of Eq. 4. Bertasius et al. [3] proposed the factorised attention:

$\tilde{\mathbf{y}}^l_{s,t} = \sum_{t'=0}^{T-1} \mathrm{Softmax}\{(\mathbf{q}^l_{s,t} \cdot \mathbf{k}^l_{s,t'}) / \sqrt{d_h}\}\, \mathbf{v}^l_{s,t'}, \qquad \mathbf{y}^l_{s,t} = \sum_{s'=0}^{S-1} \mathrm{Softmax}\{(\tilde{\mathbf{q}}^l_{s,t} \cdot \tilde{\mathbf{k}}^l_{s',t}) / \sqrt{d_h}\}\, \tilde{\mathbf{v}}^l_{s',t}, \quad s = 0, \dots, S-1, \; t = 0, \dots, T-1,$  (6)

where $\tilde{\mathbf{q}}^l_{s,t}, \tilde{\mathbf{k}}^l_{s',t}, \tilde{\mathbf{v}}^l_{s',t}$ are new query, key and value vectors calculated from $\tilde{\mathbf{y}}^l_{s,t}$.⁴ The above model reduces complexity to $O(T^2S + TS^2)$. However, temporal attention is performed for a fixed spatial location, which is ineffective when there is camera or object motion and there is spatial misalignment between frames. The work of [1] is concurrent to ours and proposes the following approximation: $L_s$ Transformer layers perform spatial-only attention as in Eq. 5 (each with complexity $O(S^2)$). Following this, there are $L_t$ Transformer layers performing temporal-only attention on the class tokens $\mathbf{z}^{L_s}_t$. The complexity of the temporal-only attention is, in general, $O(T^2)$.

Our model aims to better approximate the full space-time self-attention (SA) of Eq. 4 while keeping complexity to $O(TS^2)$, i.e. inducing no further complexity to a spatial-only model. To achieve this, we make a first approximation to perform full space-time attention but restricted to a local temporal window $[-t_w, t_w]$:

$\mathbf{y}^l_{s,t} = \sum_{t'=t-t_w}^{t+t_w} \sum_{s'=0}^{S-1} \mathrm{Softmax}\{(\mathbf{q}^l_{s,t} \cdot \mathbf{k}^l_{s',t'}) / \sqrt{d_h}\}\, \mathbf{v}^l_{s',t'} = \sum_{t'=t-t_w}^{t+t_w} \mathbf{V}^l_{t'} \mathbf{a}^l_{t'}, \quad s = 0, \dots, S-1, \; t = 0, \dots, T-1,$  (7)

where $\mathbf{V}^l_{t'} = [\mathbf{v}^l_{0,t'}; \mathbf{v}^l_{1,t'}; \dots; \mathbf{v}^l_{S-1,t'}] \in \mathbb{R}^{d_h \times S}$ and $\mathbf{a}^l_{t'} = [a^l_{0,t'}, a^l_{1,t'}, \dots, a^l_{S-1,t'}] \in \mathbb{R}^S$ is the vector with the corresponding attention weights. Eq. 7 shows that, for a single Transformer layer, $\mathbf{y}^l_{s,t}$ is a spatio-temporal combination of the visual tokens in the local window $[-t_w, t_w]$. It follows that, after $k$ Transformer layers, $\mathbf{y}^{l+k}_{s,t}$ will be a spatio-temporal combination of the visual tokens in the local window $[-kt_w, kt_w]$, which in turn conveniently allows to perform spatio-temporal attention over the whole clip. For example, for $t_w = 1$ and $k = 4$, the local window becomes $[-4, 4]$, which spans the whole video clip for the typical case $T = 8$. The complexity of the local self-attention of Eq. 7 is $O(T(2t_w + 1)^2 S^2)$.

To reduce this even further, we make a second approximation on top of the first one as follows: the attention between spatial locations $s$ and $s'$ according to the model of Eq. 7 is:
$\sum_{t'=t-t_w}^{t+t_w} \mathrm{Softmax}\{(\mathbf{q}^l_{s,t} \cdot \mathbf{k}^l_{s',t'}) / \sqrt{d_h}\}\, \mathbf{v}^l_{s',t'},$  (8)

i.e. it requires the calculation of $2t_w + 1$ attentions, one per temporal location over $[-t_w, t_w]$. Instead, we propose to calculate a single attention over $[-t_w, t_w]$, which can be achieved by $\mathbf{q}^l_{s,t}$ attending $\mathbf{k}^l_{s',-t_w:t_w} \triangleq [\mathbf{k}^l_{s',t-t_w}; \dots; \mathbf{k}^l_{s',t+t_w}] \in \mathbb{R}^{(2t_w+1)d_h}$. Note that to match the dimensions of $\mathbf{q}^l_{s,t}$ and $\mathbf{k}^l_{s',-t_w:t_w}$, a further projection of $\mathbf{k}^l_{s',-t_w:t_w}$ to $\mathbb{R}^{d_h}$ is normally required, which has complexity $O((2t_w+1)d_h^2)$ and hence compromises the goal of an efficient implementation. To alleviate this, we use the “shift trick” [45, 26], which allows to perform both zero-cost dimensionality reduction, space-time mixing and attention (between $\mathbf{q}^l_{s,t}$ and $\mathbf{k}^l_{s',-t_w:t_w}$) in $O(d_h)$. In particular, each $t' \in [-t_w, t_w]$ is assigned $d_h^{t'}$ channels from $d_h$ (i.e. $\sum_{t'} d_h^{t'} = d_h$). Let $\mathbf{k}^l_{s',t'}(d_h^{t'}) \in \mathbb{R}^{d_h^{t'}}$ denote the operator for indexing the $d_h^{t'}$ channels from $\mathbf{k}^l_{s',t'}$. Then, a new key vector is constructed as:

$\tilde{\mathbf{k}}^l_{s',-t_w:t_w} \triangleq [\mathbf{k}^l_{s',t-t_w}(d_h^{t-t_w}), \dots, \mathbf{k}^l_{s',t+t_w}(d_h^{t+t_w})] \in \mathbb{R}^{d_h}.$  (9)

Fig. 2 shows how the key vector $\tilde{\mathbf{k}}^l_{s',-t_w:t_w}$ is constructed. In a similar way, we also construct a new value vector $\tilde{\mathbf{v}}^l_{s',-t_w:t_w}$. Finally, the proposed approximation to the full space-time attention is given by:

$\mathbf{y}^l_{s,t} = \sum_{s'=0}^{S-1} \mathrm{Softmax}\{(\mathbf{q}^l_{s,t} \cdot \tilde{\mathbf{k}}^l_{s',-t_w:t_w}) / \sqrt{d_h}\}\, \tilde{\mathbf{v}}^l_{s',-t_w:t_w}, \quad s = 0, \dots, S-1, \; t = 0, \dots, T-1.$  (10)

This has the complexity of a spatial-only attention ($O(TS^2)$) and hence it is more efficient than previously proposed video transformers [3, 1]. Our model also provides a better approximation to the full space-time attention and, as shown by our results, it significantly outperforms [3, 1].

⁴More precisely, Eq. 6 holds for $h = 1$ heads. For $h > 1$, the different heads $\tilde{\mathbf{y}}^{l,h}_{s,t}$ are concatenated and projected to produce $\tilde{\mathbf{y}}^l_{s,t}$.

Temporal Attention aggregation: The final set of class tokens $\mathbf{z}^{L-1}_{t,cls}$, $0 \le t \le T-1$, are used to generate the predictions. To this end, we propose to consider the following options: (a) simple temporal averaging $\mathbf{z}_{final} = \frac{1}{T}\sum_t \mathbf{z}^{L-1}_{t,cls}$, as in the case of our baseline. (b) An obvious limitation of temporal averaging is that the output is treated purely as an ensemble of per-frame features and, hence, completely ignores the temporal ordering between them. To address this, we propose to use a lightweight Temporal Attention (TA) mechanism that will attend to the $T$ classification tokens. In particular, a $\mathbf{z}_{final}$ token attends the sequence $[\mathbf{z}^{L-1}_{0,cls}, \dots, \mathbf{z}^{L-1}_{T-1,cls}]$ using a temporal Transformer layer and is then fed as input to the classifier. This is akin to the (concurrent) work of [1], with the difference being that in our model we found that a single TA layer suffices whereas [1] uses $L_t$. A consequence of this is that the complexity of our layer is $O(T)$ vs $O(2(L_t-1)T^2 + T)$ of [1].

Summary token: As an alternative to TA, herein, we also propose a simple lightweight mechanism for information exchange between different frames at intermediate layers of the network. Given the set of tokens for each frame $t$, $\mathbf{Z}^{l-1}_t \in \mathbb{R}^{(S+1) \times d_h}$ (constructed by concatenating all tokens $\mathbf{z}^{l-1}_{s,t}$, $s = 0, \dots, S$), we compute a new set of $R$ tokens $\mathbf{Z}^l_{r,t} = \phi(\mathbf{Z}^{l-1}_t) \in \mathbb{R}^{R \times d_h}$ which summarize the frame information and hence are named “Summary” tokens. These are then appended to the visual tokens of all frames to calculate the keys and values so that the query vectors attend the original keys plus the Summary tokens. Herein, we explore the case that $\phi(\cdot)$
performs simple spatial averaging zl0,t = 1 S ∑ s z l s,t over the tokens of each frame (R = 1 for this case). Note that, forR = 1, the extra cost that the Summary token induces is O(TS). X-ViT: We call the Video Transformer based on the proposed (a) space-time mixing attention and (b) lightweight global temporal attention (or summary token) as X-ViT. 4 Results 4.1 Experimental setup Datasets: We train and evaluate the proposed models on the following datasets (all datasets are publicly available for research purposes): Kinetics-400 and 600: The Kinetics [21] dataset consists of short clips (typically 10 sec long sampled from YouTube) labeled using 400 and 600 classes, respectively. Due to the removal of some videos from YouTube, the version of the dataset used in this paper consists of approximately 261K clips for Kinetics-400. Note, that these amounts are lower than the original version of the datasets and thus might represent a negative performance bias when compared with prior works. Something-Something-v2 (SSv2): The SSv2 [17] dataset consists of 220,487 short videos (of duration between 2 and 6 sec) that depict humans performing pre-defined basic actions with everyday objects. Because the objects and backgrounds in the videos are consistent across different action classes, this dataset tends to require stronger temporal modeling. Due to this, we conducted most of our ablation studies on SSv2 to better analyze the importance of the proposed components. Epic Kitchens-100 (Epic-100): is an egocentric large scale action recognition dataset consisting of more than 90,000 action segments spanning 100 hours of recordings in home environments, capturing daily activities [10]. The dataset is labeled using 97 verb classes and 300 noun classes. The evaluation results are reported using the standard action recognition protocol: the network predicts the “verb” and the “noun” using two heads. The predictions are then merged to construct an “action” which is used to report the accuracy. Training details: All models, unless otherwise stated, were trained using the following scheduler and training procedure: specifically, our models were trained using SGD with momentum (0.9) and a cosine scheduler [28] (with linear warmup) for 35 epochs on SSv2, 50 on Epic-100 and 30 on Kinetics. The base learning rate, set at a batch size of 128, was 0.05 (0.03 for Kinetics). To prevent over-fitting we made use of the following augmentation techniques: random scaling (0.9× to 1.3×) and cropping, random flipping (with probability of 0.5; not for SSv2) and autoaugment [8]. In addition, for SSv2 and Epic-100, we also applied random erasing (probability=0.5, min. area=0.02, max. area=1/3, min. aspect=0.3) [52] and label smoothing (λ = 0.3) [34] while, for Kinetics, we used mixup [51] (α = 0.4). The backbone models follow closely the ViT architecture of Dosovitskiy et al. [13]. Most experiments were performed using the ViT-B/16 variant (L = 12, h = 12, d = 768, K = 16), where L represents the number of transformer layers, h the number of heads, d the embedding dimension and K the patch size. We initialized our models from a pretrained ImageNet-21k [11] ViT model. The spatial positional encoding ps was initialized from the pretrained 2D model and the temporal one, pt, with zeros so that it does not have a great impact on the tokens early on during training. The models were trained on 8 V100 GPUs using PyTorch [30]. Testing details: Unless otherwise stated, we used ViT-B/16 and T = 8 frames. 
We mostly used Temporal Attention (TA) for temporal aggregation. We report accuracy results for 1 × 3 views (1 temporal clip and 3 spatial crops) departing from the common approach of using up to 10 × 3 views [26, 16]. The 1× 3 views setting was also used in Bertasius et al. [3]. To measure the variation between runs, we trained one of the 8–frame models 5 times. The results varied by ±0.4%. 4.2 Ablation studies Throughout this section, we study the effect of varying certain design choices and different components of our method. Because SSv2 tends to require a more fine-grained temporal modeling, unless otherwise specified, all results reported, in this section, are on the SSv2. Table 2: Effect of: (a) proposed SA position, (b) temporal aggregation and number of Temporal Attention (TA) layers, (c) space-time mixing qkv vectors and (d) amount of mixed channels on SSv2. (a) Effect of applying the proposed SA to certain layers. Transform. layers Top-1 Top-5 1st half 61.7 86.5 2nd half 61.6 86.3 Half (odd. pos) 61.2 86.4 All 62.6 87.8 (b) Effect of number of TA layers. 0 corresponds to temporal averaging. #. TA layers Top-1 Top-5 0 (temp. avg.) 62.4 87.8 1 64.4 89.3 2 64.5 89.3 3 64.5 89.3 (c) Effect of space-time mixing. x denotes the input token before qkv projection. Query produces equivalent results with key and thus omitted. x key value Top-1 Top-5 7 7 7 56.6 83.5 X 7 7 63.1 88.8 7 X 7 63.1 88.8 7 7 X 62.5 88.6 7 X X 64.4 89.3 (d) Effect of amount of mixed channels. * uses temp. avg. aggregation. 0%* 0% 25% 50% 100% 45.2 56.6 64.3 64.4 62.5 Effect of local window size: Table 1 shows the accuracy of our model by varying the local window size [−tw, tw] used in the proposed space-time mixing attention. Firstly, we observe that the proposed model is significantly superior to our baseline (tw = 0) which uses spatial-only attention. Secondly, a window of tw = 1 produces the best results. This shows that more gradual increase of the effective window size that is attended is more beneficial compared to more aggressive ones, i.e. the case where tw = 2. A performance degradation for the case tw = 2 could be attributed to boundary effects (handled by filling with zeros) which are aggravated as tw increases. Based on these results, we chose to use tw = 1 for the models reported hereafter. For short to medium long videos, it seems that tw = 1 suffices as the temporal receptive field size increases as we advance in depth in the model allowing it to capture a larger effective temporal window. For the datasets used, as explained earlier, after a few transformer layers the whole clip is effectively covered. However, for significantly longer video sequences, larger window sizes may perform better. Effect of SA position: We explored which layers should the proposed space-time mixing attention be applied to within the network. Specifically, we explored the following variants: Applying it to the first L/2 layers, to the last L/2 layers, to every odd indexed layer and, finally, to all layers. As the results from Table 2a show, the exact layers within the network that self-attention is applied to do not matter; what matters is the number of layers it is applied to. We attribute this result to the increased temporal receptive field and cross-frame interactions. Effect of temporal aggregation: Herein, we compare the two methods used for temporal aggregation: simple temporal averaging [41] and the proposed Temporal Attention (TA) mechanism. 
Given that our model already incorporates temporal information through the proposed space-time attention, we also explored how many TA layers are needed. As shown in Table 2b, replacing temporal averaging with one TA layer improves the Top-1 accuracy from 62.5% to 64.4%. Increasing the number of layers further yields no additional benefits. In Table 2d, we also report the accuracy of spatial-only attention (0% mixing) plus TA aggregation. In the absence of the pro- posed space-time mixing attention, the TA layer alone is unable to compensate, scoring only 56.6%. In the same table, 45.2% is the accuracy of a model trained without the proposed local attention and TA layer (i.e. using a temporal pooling for aggregation). Overall, the results highlight the need of having both components in our final model. For the next two ablation studies, we used 1 TA layer. Effect of space-time mixing qkv vectors: Paramount to our work is the proposed space-time mixing attention of Eq. 10 which is implemented by constructing k̃ls′,−tw:tw and ṽ l s′,−tw:tw efficiently via channel indexing (see Eq. 9). Space-time mixing though can be applied in several different ways in the model. For completeness, herein, we study the effect of applying space-time mixing to various combinations for the key, value and to the input token prior to qkv projection. As shown in Table 2c, the combination corresponding to our model (i.e. space-time mixing applied to the key and value) significantly outperforms all other variants by up to 2%. This result is important as it confirms that our model, derived from the proposed approximation to the local space-time attention, gives the best results when compared to other non-well motivated variants. Effect of amount of space-time mixing: We define as ρdh the total number of channels coming from the adjacent frames in the local temporal window [−tw, tw] (i.e. ∑tw t′=−tw,t6=0 d t′ h = ρdh) when constructing k̃ls′,−tw:tw (see Section 3). Herein, we study the effect of ρ on the model’s accuracy. As the results from Table 2d show, the optimal ρ is between 25% and 50%. Increasing ρ to 100% (i.e. all channels are coming from adjacent frames) unsurprisingly degrades the performance as it excludes the case t′ = t when performing the self-attention. Effect of Summary token: Herein, we compare Temporal Attention with Summary token on SSv2 and Kinetics-400. We used both datasets for this case as they require different type of understanding: fine-grained temporal (SSv2) and spatial content (Kinetics-400). From Table 4, we conclude that the Summary token compares favorable on Kinetics-400 but not on SSv2 showing that it is more useful in terms of capturing spatial information. Since the improvement is small, we conclude that 1 TA layer is the best global attention-based mechanism for improving the accuracy of our method adding also negligible computational cost. Effect of number of input frames: Herein we evaluate the impact of increasing the number of input frames T from 8 to 16 and 32. We note that, for our method, this change results in a linear increase in complexity. As the results from Table 7 show, increasing the number of frames from 8 to 16 offers a 1.8% boost in Top-1 accuracy on SSv2. Moreover, increasing the number of frames to 32 improves the performance by a further 0.2%, offering diminishing returns. Similar behavior can be observed on Kinetics and Epic-100 in Tables 5 and 8. 
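To make the channel indexing behind the mixed-channel ratio $\rho$ (ablated above) concrete, below is a small, hedged sketch of how the temporally mixed keys of Eq. (9) can be built with a zero-FLOP channel shift for $t_w = 1$. The function and variable names are our own, and the exact channel allocation in the released code may differ; boundary frames are zero-padded, consistent with the boundary handling described in the window-size ablation.

```python
import torch

def temporal_channel_mix(k, rho=0.5):
    """Build the mixed keys k~ of Eq. (9) by channel shifting, for a window t_w = 1.

    k:   (B, T, S, d_h) per-frame key vectors
    rho: fraction of the d_h channels borrowed from the t-1 and t+1 frames
    """
    B, T, S, d_h = k.shape
    n = int(rho * d_h / 2)                     # channels taken from each neighbouring frame
    mixed = k.clone()
    if n == 0:
        return mixed                           # rho = 0 reduces to spatial-only attention
    # first n channels come from frame t-1 (shift forward in time, zero-pad at t = 0)
    mixed[:, 1:, :, :n] = k[:, :-1, :, :n]
    mixed[:, 0, :, :n] = 0
    # last n channels come from frame t+1 (shift backward in time, zero-pad at t = T-1)
    mixed[:, :-1, :, -n:] = k[:, 1:, :, -n:]
    mixed[:, -1, :, -n:] = 0
    return mixed

k = torch.randn(2, 8, 196, 64)                 # (B, T, S, d_h)
k_mixed = temporal_channel_mix(k, rho=0.5)
# Applying plain spatial-only attention to (q, k_mixed, v_mixed) then realises the
# space-time mixing attention of Eq. (10) at the baseline O(T S^2) cost.
```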
Effect of number of tokens and different model sizes: Herein, we vary the number of input tokens by changing the patch size K. As the results from Table 3 show, even when the number of tokens decreases significantly (e.g. ViT-B/32 or ViT-S/32) our approach is still able to produce results of satisfactory accuracy. The benefit of that is having a model which is significantly more efficient. Similar concusions can be observed when the model size (in terms of parameters and FLOPs) is varied. Our approach provides consistent results in all cases, showcasing its ability to scale well from tiny (XViT-T) to large (XViT-L) models. Latency and throughput considerations: While the channel shifting operation used by the proposed space-time mixing attention is zero-FLOP, there is still a small cost associated with memory movement operations. In order to ascertain that the induced cost does not introduce noticeable performance degradation, we benchmarked a Vit-B/16 (8× frames) model using spatial-only attention and the proposed space-time mixing attention on 8 V100 GPUs and a batch size of 128. A model with spatial-only attention has a throughput of 312 fps while our model has 304 fps. 4.3 Comparison to state-of-the-art Our best model uses the proposed space-time mixing attention in all the Transformer layers and performs temporal aggregation using a single lightweight temporal transformer layer as described in Section 3. Unless otherwise specified, we report the results using the 1× 3 configuration for the views (1 temporal and 3 spatial) for all datasets. Regarding related work on transformer-based video recognition [1, 3], we included their very best models trained on the same data as our models. For TimeSformer, this is typically the TimeSformer-L version. For ViVit, we used the 16x2 configuration, with factorized-encoding for Epic-100 and SS-v2 (as reported in Tables 6d and 6e in [1]) and the full version for Kinetics (as reported in Table 6a in [1]). On Kinetics-400, we match the current state-of-the-art while having significantly lower computational complexity than the next two best recently proposed methods that also use Transformer-based architectures: 20× fewer FLOPs than ViVit [1] and 8× fewer than TimeSformer-L [3]. Note that both models from [1, 3] and ours were initialized from a ViT model pretrained on ImageNet-21k [11] and take as input frames at a resolution of 224 × 224px. Similar conclusions can be drawn from Table 6 which reports our results on Kinetics-600. On SSv2, we match and surpass the current state-of-the-art, especially in terms of Top-5 accuracy (ours: 90.7% vs ViViT: 89.8% [1]) using models that are 14× (16 frames) and 9× (32 frames) faster. Finally, we observe similar outcomes on Epic-100 where we set a new state-of-the-art, showing large improvements especially for “Verb” accuracy, while again being more efficient. 5 Ethical considerations and broader impact Current high-performing video recognition models tend to have high computational demands for both training and testing and, by extension, significant environmental costs. This is especially true for the transformer-based architectures. Our research introduces a novel approach that matches and surpasses the current state-ofthe-art while being significantly more efficient thanks to the linear scaling of the complexity with respect to the number of frames. We hope such models will offer noticeable reduction in power consumption while setting at the same time a solid base for future research. 
We will release code and models to facilitate this. Moreover, and similarly to most data-driven systems, bias from the training data can potentially affect the fairness of the model. As such, we suggest to take this aspect into consideration when deploying the models into real-world scenarios. 6 Conclusions We presented a novel approximation to the full space-time attention that is amenable to an efficient implementation and applied it to video recognition. Our approximation has the same computational cost as spatial-only attention yet the resulting video Transformer model was shown to be significantly more efficient than recently proposed Video Transformers [3, 1]. By no means this paper proposes a complete solution to video recognition using video Transformers. Future efforts could include combining our approaches with other architectures than the standard ViT, removing the dependency on pre-trained models and applying the model to other video-related tasks like detection and segmentation. Finally, further research is required for deploying our models on low power/resource devices.
1. What is the main contribution of the paper regarding reducing computation cost in Transformer frameworks? 2. What are the strengths and weaknesses of the proposed space-time mixing attention module? 3. How does the reviewer assess the clarity, quality, originality, and significance of the paper's content? 4. Are there any concerns or suggestions regarding the experimental values, computational complexity, and ablation studies provided in the paper?
Summary Of The Paper Review
Summary Of The Paper

In this paper, the authors focus on reducing the computation cost caused by the space-time attention modeling in the Transformer framework for video recognition. Specifically, the proposal includes 1) a space-time mixing attention module that mixes channels from neighboring frames within a local window as key-value tokens, and 2) a temporal attention layer at the end of the model to aggregate global temporal information. The authors provide promising results on Kinetics-400 and SSv2 in the paper.

Review

Originality: The major proposal is the space-time mixing attention module. The authors introduce the “channel shift trick [42]” to the space-time attention module to aggregate both spatial and temporal features while reducing computation complexity. They have shown promising results (Table 5-6) to support their claims. I think the idea is novel and clearly differs from previous works (TimeSformer [3] and ViViT [3]).

Quality: The submission is technically sound. The authors have provided extensive experiments and ablation studies to support their major claims. The weaknesses are listed below.
- The authors should provide the typical experimental values of T, S, and d (Line 98-101); otherwise, it may be hard to figure out the dominant computation cost of a full model. For example, when 2d > TS, the complexity of the MLP is larger than that of the MSA.
- I’m not sure about the complexity mentioned in Line 129. In my understanding, in the local temporal window of Eq. 7 there are (2t + 1)S tokens, and I think the complexity should be O(T(2t + 1)^2 S^2) ~ O(T t^2 S^2), rather than O((2t + 1)T S^2).
- In my understanding, the space-temporal mixing attention mainly aggregates temporal information by aggregating a subset of channels from neighboring frames to constitute the features of a new key-value token (Line 137-140). I think another intuitive option is reducing the channels of each frame (e.g., reduce to 1/t channels) and then concatenating them together as the final key-value tokens. Such a design can aggregate temporal information while also avoiding introducing additional overhead. We suggest the authors clarify their differences and prove the superiority of their proposal over the abovementioned option.
- I think the results of Table 2 (b) look weird as the results of three different models are totally the same. We suggest the authors carefully check the results and report the error bar of the results.
- The authors have highlighted the computation complexity and cost multiple times in the paper. Therefore, I think it is important to provide latency (i.e., runtime), the number of input frames, and the number of tokens for all the models in Table 5 and Table 6 to support their claims.

Clarity: I think this paper is well organized; however, some details of the model are not clear enough.
- It’s hard for me to totally understand the summary token (Line 159-166). We suggest the authors provide illustrations of the temporal attention aggregation and summary token to demonstrate their functions and differences.
- In Table 2 (d), * (44.2) indicates temporal average aggregation; however, it is reported as 55.6% in Line 241. The authors should check it and clarify it.

Significance: This paper proposes a novel way to reduce computation cost for spatial-temporal attention modeling. The authors have shown promising FLOPs and accuracy in standard benchmarks. Latency and more details of Table 5 and Table 6 are expected in the final version.
Overall, I think it is useful and helpful to others to follow similar ideas to build transformer networks for video tasks.
NIPS
Title Temporal Positive-unlabeled Learning for Biomedical Hypothesis Generation via Risk Estimation Abstract Understanding the relationships between biomedical terms like viruses, drugs, and symptoms is essential in the fight against diseases. Many attempts have been made to introduce the use of machine learning to the scientific process of hypothesis generation (HG), which refers to the discovery of meaningful implicit connections between biomedical terms. However, most existing methods fail to truly capture the temporal dynamics of scientific term relations and also assume unobserved connections to be irrelevant (i.e., in a positive-negative (PN) learning setting). To break these limits, we formulate this HG problem as future connectivity prediction task on a dynamic attributed graph via positive-unlabeled (PU) learning. Then, the key is to capture the temporal evolution of node pair (term pair) relations from just the positive and unlabeled data. We propose a variational inference model to estimate the positive prior, and incorporate it in the learning of node pair embeddings, which are then used for link prediction. Experiment results on real-world biomedical term relationship datasets and case study analyses on a COVID-19 dataset validate the effectiveness of the proposed model. 1 Introduction Recently, the study of co-relationships between biomedical entities is increasingly gaining attention. The ability to predict future relationships between biomedical entities like diseases, drugs, and genes enhances the chances of early detection of disease outbreaks and reduces the time required to detect probable disease characteristics. For instance, in 2020, the COVID-19 outbreak pushed the world to a halt with scientists working tediously to study the disease characteristics for containment, cure, and vaccine. An increasing number of articles encompassing new knowledge and discoveries from these studies were being published daily [1]. However, with the accelerated growth rate of publications, the manual process of reading to extract undiscovered knowledge increasingly becomes a tedious and time-consuming task beyond the capability of individual researchers. In an effort towards an advanced knowledge discovery process, computers have been introduced to play an ever-greater role in the scientific process with automatic hypothesis generation (HG). The study of automated HG has attracted considerable attention in recent years [41, 25, 45, 47]. Several previous works proposed techniques based on association rules [25, 18, 47], clustering and topic modeling [45, 44, 5], text mining [43, 42], and others [28, 49, 39]. However, these previous works fail to truly utilize the crucial information encapsulated in the dynamic nature of scientific discoveries and assume that the unobserved relationships denote a non-relevant relationship (negative). To model the historical evolution of term pair relations, we formulate HG on a term relationship graph G = {V,E}, which is decomposed into a sequence of attributed graphlets G = {G1, G2, ..., GT }, where the graphlet at time t is defined as, 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. Definition 1. Temporal graphlet: A temporal graphlet Gt = {V t, Et, xtv} is a temporal subgraph at time step t, which consists of nodes (terms) V t satisfying V 1 ⊆ V 2, ...,⊆ V T and the observed co-occurrence between these terms Et satisfying E1 ⊆ E2, ...,⊆ ET . And xtv is the node attribute. 
Example of the node terms can be covid-19, fever, cough, Zinc, hepatitis B virus etc. When two terms co-occurred at time t in scientific discovery, a link between them is added to Et, and the nodes are added to V t if they haven’t been added. Definition 2. Hypothesis Generation (HG): Given G = {G1, G2, ..., GT }, the target is to predict which nodes unlinked in V T should be linked (a hypothesis is generated between these nodes). We address the HG problem by modeling how Et was formed from t = 1 to T (on a dynamic graph), rather than using only ET (on a static graph). In the design of learning model, it is clear to us the observed edges are positive. However, we are in a dilemma whether the unobserved edges are positive or negative. The prior work simply set them to be negative, learning in a positive-negative (PN) setting) based on a closed world assumption that unobserved connections are irrelevant (negative) [39, 28, 4]. We set the learning with a more realistic assumption that the unobserved connections are a mixture of positive and negative term relations (unlabeled), a.k.a. Positive-unlabeled (PU) learning, which is different from semi-supervised PN learning that assumes a known set of labeled negative samples. For the observed positive samples in PU learning, they are assumed to be selected entirely at random from the set of all positive examples [16]. This assumption facilitates and simplifies both theoretical analysis and algorithmic design since the probability of observing the label of a positive example is constant. However, estimating this probability value from the positive-unlabeled data is nontrivial. We propose a variational inference model to estimate the positive prior and incorporate it in the learning of node pair embeddings, which are then used for link prediction (hypothesis generation). We highlight the contributions of this work as follows. 1) Methodology: we propose a PU learning approach on temporal graphs. It differs from other existing approaches that learn in a conventional PN setting on static graphs. In addition, we estimate the positive prior via a variational inference model, rather than setting by prior knowledge. 2) Application: to the best of our knowledge, this is the first the application of PU learning on the HG problem, and on dynamic graphs. We applied the proposed model on real-world graphs of terms in scholarly publications published from 1945 to 2020. Each of the three graphs has around 30K nodes and 1-2 million edges. The model is trained end-to-end and shows superior performance on HG. Case studies demonstrate our new and valid findings of the positive relationship between medical terms, including newly observed terms that were not observed in training. 2 Related Work of PU Learning In PU learning, since the negative samples are not available, a classifier is trained to minimize the expected misclassification rate for both the positive and unlabeled samples. One group of study [32, 31, 33, 22] proposed a two-step solution: 1) identifying reliable negative samples, and 2) learning a classifier based on the labeled positives and reliable negatives using a (semi)-supervised technique. Another group of studies [36, 30, 26, 17, 40] considered the unlabeled samples as negatives with label noise. Hence, they place higher penalties on misclassified positive examples or tune a hyperparameter based on suitable PU evaluation metrics. 
Such a proposed framework follows the SCAR (Selected Completely at Random) assumption since the noise for negative samples is constant.

PU Learning via Risk Estimation. Recently, the use of the unbiased risk estimator has gained attention [12, 14, 15, 48]. The goal is to minimize the expected classification risk to obtain an empirical risk minimizer. Given an input representation $h$ (in our case the node pair representation to be learned), let $f : \mathbb{R}^d \to \mathbb{R}$ be an arbitrary decision function and $l : \mathbb{R} \times \{\pm 1\} \to \mathbb{R}$ be the loss function calculating the incurred loss $l(f(h), y)$ of predicting an output $f(h)$ when the true value is $y$. Function $l$ has a variety of forms and is determined by application needs [29, 13]. In PN learning, the empirical risk minimizer $\hat{f}_{PN}$ is obtained by minimizing the PN risk $\hat{R}(f)$ w.r.t. a class prior of $\pi_P$:

$\hat{R}(f) = \pi_P \hat{R}^+_P(f) + \pi_N \hat{R}^-_N(f),$  (1)

where $\pi_N = 1 - \pi_P$, $\hat{R}^+_P(f) = \frac{1}{n_P}\sum_{i=1}^{n_P} l(f(h^P_i), +1)$ and $\hat{R}^-_N(f) = \frac{1}{n_N}\sum_{i=1}^{n_N} l(f(h^N_i), -1)$. The variables $n_P$ and $n_N$ are the numbers of positive and negative samples, respectively. PU learning has to exploit the fact that $\pi_N p_N(h) = p(h) - \pi_P p_P(h)$, due to the absence of negative samples. The second part of Eq. (1) can be reformulated as:

$\pi_N \hat{R}^-_N(f) = \hat{R}^-_U - \pi_P \hat{R}^-_P(f),$  (2)

where $R^-_U = \mathbb{E}_{h \sim p(h)}[l(f(h), -1)]$ and $R^-_P = \mathbb{E}_{h \sim p(h \mid y=+1)}[l(f(h), -1)]$. Furthermore, the classification risk can then be approximated by:

$\hat{R}_{PU}(f) = \pi_P \hat{R}^+_P(f) + \hat{R}^-_U(f) - \pi_P \hat{R}^-_P(f),$  (3)

where $\hat{R}^-_P(f) = \frac{1}{n_P}\sum_{i=1}^{n_P} l(f(h^P_i), -1)$, $\hat{R}^-_U(f) = \frac{1}{n_U}\sum_{i=1}^{n_U} l(f(h^U_i), -1)$, and $n_U$ is the number of unlabeled data samples. To obtain an empirical risk minimizer $\hat{f}_{PU}$ for the PU learning framework, $\hat{R}_{PU}(f)$ needs to be minimized. Kiryo et al. noted that the model tends to suffer from overfitting on the training data when the model $f$ is made too flexible [29]. To alleviate this problem, the authors proposed the use of a non-negative risk estimator for PU learning:

$\tilde{R}_{PU}(f) = \pi_P \hat{R}^+_P(f) + \max\{0,\; \hat{R}^-_U - \pi_P \hat{R}^-_P(f)\}.$  (4)

It works by explicitly constraining the training risk of PU to be non-negative. The key challenge in practical PU learning is that the prior $\pi_P$ is unknown.

Prior Estimation. Knowledge of the class prior $\pi_P$ is quintessential to estimating the classification risk. In PU learning for our node pairs, we represent a sample as $\{h, s, y\}$, where $h$ is the node pair representation (to be learned), $s$ indicates if the pair relationship is observed (labeled, $s = 1$) or unobserved (unlabeled, $s = 0$), and $y$ denotes the true class (positive or negative). We have only the positive samples labeled: $p(y = 1 \mid s = 1) = 1$. If $s = 0$, the sample can belong to either the positive or negative class. PU learning commonly runs with the Selected Completely at Random (SCAR) assumption, which postulates that the labeled sample set is a random subset of the positive sample set [16, 6, 8]. The probability of selecting a positive sample to observe can be denoted as $p(s = 1 \mid y = 1, h)$. The SCAR assumption means: $p(s = 1 \mid y = 1, h) = p(s = 1 \mid y = 1)$. However, it is hard to estimate $\pi_P = p(y = 1)$ with only a small set of observed samples ($s = 1$) and a large set of unobserved samples ($s = 0$) [7]. Solutions have been tried by i) estimating from a validation set of a fully labeled data set (all with $s = 1$ and knowing $y = 1$ or $-1$) [29, 10]; ii) estimating from background knowledge; and iii) estimating directly from the PU data [16, 6, 8, 27, 14]. In this paper, we focus on estimating the prior directly from the PU data.
Specifically, unlike the other methods, we propose a scalable method based on deep variational inference to jointly estimate the prior and train the classification model end-to-end. The proposed deep variational inference uses the KL-divergence to estimate the parameters of the class mixture model distributions of the positive and negative class, in contrast to the method proposed in [14], which uses penalized L1 divergences to assign higher penalties to class priors that scale the positive distribution as more than the total distribution.

3 PU learning on Temporal Attributed Networks

3.1 Model Design

The architecture of our Temporal Relationship Predictor (TRP) model is shown in Fig. 1. For a given pair of nodes $a^{ij} = \langle v_i, v_j \rangle$ in any temporal graphlet $G^t$, the main steps used in the training process of TRP for calculating the connectivity prediction score $p^t(a^{ij})$ are given in Algorithm 1. The testing process also uses the same Algorithm 1 (with $t = T$), calculating $p^T(a^{ij})$ for node pairs that have not been connected in $G^{T-1}$. The connectivity prediction score is calculated in line 6 of Algorithm 1 by $p^t(a^{ij}) = f_C(h^t_{a^{ij}}; \theta_C)$, where $\theta_C$ is the classification network parameter, and the embedding vector $h^t_{a^{ij}}$ for the pair $a^{ij}$ is iteratively updated in lines 1-5. These iterations of updating $h^t_{a^{ij}}$ are shown as the recurrent structure in Fig. 1 (a), followed by the classifier $f_C(\cdot; \theta_C)$. The recurrent update function $h^\tau_{a^{ij}} = f_A(h^{\tau-1}_{a^{ij}}, z^\tau_{v_i}, z^\tau_{v_j}; \theta_A)$, $\tau = 1 \dots t$, in line 4 is shown in Fig. 1 (b), and has a Gated Recurrent Unit (GRU) network at its core:

$P = \sigma_g(W^P f_m(z^\tau_{v_i}, z^\tau_{v_j}) + U^P h^{\tau-1}_{a^{ij}} + b^P),$
$r = \sigma_g(W^r f_m(z^\tau_{v_i}, z^\tau_{v_j}) + U^r h^{\tau-1}_{a^{ij}} + b^r),$
$\tilde{h}^\tau_{a^{ij}} = \sigma_{h'}(W f_m(z^\tau_{v_i}, z^\tau_{v_j}) + r \circ U h^{\tau-1}_{a^{ij}} + b),$
$h^\tau_{a^{ij}} = P \circ \tilde{h}^\tau_{a^{ij}} + (1 - P) \circ h^{\tau-1}_{a^{ij}},$  (5)

where $\circ$ denotes element-wise multiplication, $\sigma$ is a nonlinear activation function, and $f_m(\cdot)$ is an aggregation function. In this study, we use a max pool aggregation. The variables $\{W, U\}$ are the weights. The inputs to function $f_A$ include $h^{\tau-1}_{a^{ij}}$, the embedding vector from the previous step, and $\{z^\tau_{v_i}, z^\tau_{v_j}\}$, the representations of nodes $v_i$ and $v_j$ after aggregating their neighborhoods, $z^\tau_v = f_G(x^\tau_v, x^\tau_{Nr(v)}; \theta_G)$, given in line 3. The aggregation function $f_G$ takes as input the node feature $x^\tau_v$ and the neighboring node features $x^\tau_{Nr(v)}$ and goes through the aggregation block shown in Fig. 1 (c). The aggregation network $f_G(\cdot; \theta_G)$ is implemented following GraphSAGE [21], which is one of the most popular graph neural networks for aggregating a node and its neighbors.

Algorithm 1: Calculate the future connection score for a term pair $a^{ij} = \langle v_i, v_j \rangle$
Input: $\mathcal{G} = \{G^1, G^2, \dots, G^T\}$ with node features $x^t_v$, a node pair $a^{ij} = \langle v_i, v_j \rangle$ in $G^t$, and an initialized pair embedding vector $h^0_{a^{ij}}$ (e.g., by zeros)
Result: $p^t_{a^{ij}}$, the connectivity prediction score for the node pair $a^{ij}$
1: for $\tau \leftarrow 1 \cdots t$ do
2:   Obtain the current node features $x^\tau_v$ ($v = v_i, v_j$) of both nodes (terms) $v_i, v_j$, as well as $x^\tau_{Nr(v)}$ ($v = v_i, v_j$), the node features of sampled neighboring nodes of $v_i, v_j$;
3:   Aggregate the neighborhood information of node $v = v_i, v_j$: $z^\tau_v = f_G(x^\tau_v, x^\tau_{Nr(v)}; \theta_G)$;
4:   Update the embedding vector for the node pair: $h^\tau_{a^{ij}} = f_A(h^{\tau-1}_{a^{ij}}, z^\tau_{v_i}, z^\tau_{v_j}; \theta_A)$;
5: end for
6: Return $p^t_{a^{ij}} = f_C(h^t_{a^{ij}}; \theta_C)$

The loss function in our problem, $l(p^t(a^{ij}), y)$, evaluates the loss incurred by predicting a connectivity $p^t(a^{ij}) = f_C(h^t_{a^{ij}}; \theta_C)$ when the ground truth is $y$.
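As a rough, illustrative sketch of lines 1-6 of Algorithm 1 (not the authors' implementation, which uses TensorFlow), the snippet below rolls the pair embedding of Eq. (5) over the graphlet sequence with a standard GRU cell standing in for $f_A$; $f_G$ (a GraphSAGE-style aggregator) and $f_C$ are abstracted as callables, and all names and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class PairEmbeddingUpdater(nn.Module):
    """Sketch of f_A: recurrent update of the pair embedding h_{a_ij} (cf. Eq. 5)."""

    def __init__(self, node_dim=128, pair_dim=128):
        super().__init__()
        # A standard GRU cell plays the role of the gates P and r in Eq. (5).
        self.gru = nn.GRUCell(node_dim, pair_dim)

    def forward(self, h_prev, z_i, z_j):
        z_pair = torch.max(z_i, z_j)           # f_m: max-pool aggregation of the two nodes
        return self.gru(z_pair, h_prev)

def connection_score(graphlets, f_G, f_A, f_C, pair, pair_dim=128):
    """Algorithm 1: roll the pair embedding over G^1..G^t, then score the pair."""
    v_i, v_j = pair
    h = torch.zeros(1, pair_dim)               # h^0 initialised with zeros
    for G in graphlets:                        # tau = 1..t
        z_i = f_G(G, v_i)                      # GraphSAGE-style neighbourhood aggregation
        z_j = f_G(G, v_j)
        h = f_A(h, z_i, z_j)                   # line 4 / Eq. (5)
    return f_C(h)                              # line 6: p^t(a_ij)
```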
For constructing the training set for our PU learning in the dynamic graph, we first clarify the label notations. For one pair $a^{ij}$ from a graph $G^t$, its label $y^{ij}_t = 1$ (positive) if the two nodes have a link observed in $G^{t+1}$ (they have an edge $\in E^{t+1}$, observed in the next time step), i.e., $s^{ij}_t = 1$. Otherwise, when no link is observed between them in $G^{t+1}$, $a^{ij}$ is unlabeled, i.e., $s^{ij}_t = 0$, since $y^{ij}_t$ can be either 1 or -1. Since we consider an insertion-only graphlet sequence, $V^1 \subseteq V^2 \subseteq \dots \subseteq V^T$ and $E^1 \subseteq E^2 \subseteq \dots \subseteq E^T$, $y^{ij}_t = 1$ is maintained for all future steps after $t$ (once positive, always positive). At the final step $t = T$, all pairs with observed connections already have $y^{ij}_T = 1$; our objective is to predict the connectivity score for those pairs with $s^{ij}_T = 0$. Our loss function is defined following the unbiased risk estimator in Eq. (3):

$\mathcal{L}_R = \pi_P \hat{R}^+_P(f_C) + \hat{R}^-_U(f_C) - \pi_P \hat{R}^-_P(f_C),$  (6)

where $\hat{R}^+_P(f_C) = \frac{1}{|H_P|}\sum_{a^{ij} \in H_P} 1/(1 + \exp(p^t(a^{ij})))$, $\hat{R}^-_U(f_C) = \frac{1}{|H_U|}\sum_{a^{ij} \in H_U} 1/(1 + \exp(-p^t(a^{ij})))$, and $\hat{R}^-_P(f_C) = \frac{1}{|H_P|}\sum_{a^{ij} \in H_P} 1/(1 + \exp(-p^t(a^{ij})))$, with the positive samples $H_P$ and unlabeled samples $H_U$, when taking $l$ as the sigmoid loss function. $\mathcal{L}_R$ can be adjusted with the non-negative constraint in Eq. (4), with the same definitions of $\hat{R}^+_P(f)$, $\hat{R}^-_U$, and $\hat{R}^-_P(f)$.

3.2 Prior Estimation

The positive prior $\pi_P$ is a key factor in $\mathcal{L}_R$ to be addressed. The samples we have from $\mathcal{G}$ are only positive $H_P$ and unlabeled $H_U$. Due to the absence of negative samples and of prior knowledge, we present an estimate of the class prior from the distribution of $h$, which is the pair embedding from $f_A$. Without loss of generality, we assume that the learned $h$ of all samples has a Gaussian mixture distribution with two components, one for the positive samples and the other for the negative samples, although they are unlabeled. The mixture distribution is parameterized by $\beta$, including the mean, co-variance matrix and mixing coefficient of each component. We learn the mixture distribution using stochastic variational inference [24] via the “Bayes by Backprop” technique [9]. The use of variational inference has been shown to have the ability to model salient properties of the data generation mechanism and avoid singularities. The idea is to find the variational distribution variables $\theta^*$ that minimize the Kullback-Leibler (KL) divergence between the variational distribution $q(\beta \mid \theta)$ and the true posterior distribution $p(\beta \mid h)$:

$\theta^* = \arg\min_\theta \mathcal{L}_E,$  (7)

where $\mathcal{L}_E = \mathrm{KL}(q(\beta \mid \theta) \,\|\, p(\beta \mid h)) = \mathrm{KL}(q(\beta \mid \theta) \,\|\, p(\beta)) - \mathbb{E}_{q(\beta \mid \theta)}[\log p(h \mid \beta)]$. The resulting cost function $\mathcal{L}_E$ on the right of Eq. (7) is known as the (negative) “evidence lower bound” (ELBO). The second term in $\mathcal{L}_E$ is the likelihood of $h$ fitting the Gaussian mixture with parameter $\beta$, $\mathbb{E}_{q(\beta \mid \theta)}[\log p(h \mid \beta)]$, while the first term is referred to as the complexity cost [9]. We optimize the ELBO using stochastic gradient descent. With $\theta^*$, the positive prior is then estimated as

$\pi_P = q(\beta^\pi_i \mid \theta^*), \quad i = \arg\max_{k=1,2} |C_k|,$  (8)

where $C_1 = \{h \in H_P : p(h \mid \beta_1) > p(h \mid \beta_2)\}$ and $C_2 = \{h \in H_P : p(h \mid \beta_2) > p(h \mid \beta_1)\}$.

3.3 Parameter Learning

To train the three networks $f_A(\cdot; \theta_A)$, $f_G(\cdot; \theta_G)$, and $f_C(\cdot; \theta_C)$ for connectivity score prediction, we jointly optimize $\mathcal{L} = \sum_{t=1}^{T} \mathcal{L}_{R_t} + \mathcal{L}_{E_t}$, using Adam over the model parameters. Loss $\mathcal{L}_R$ is the PU classification risk as described in Section 3.1, and $\mathcal{L}_E$ is the loss of prior estimation as described in Section 3.2. Note that during training, $y^T_{a^{ij}} = y^{T-1}_{a^{ij}}$ since we do not observe $G^T$ in training. This is to enforce prediction consistency.
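To make the training objective concrete, here is a minimal sketch of the uPU risk of Eq. (6) and its non-negative variant (Eq. (4)) with the sigmoid loss, written over the predicted scores of positive and unlabeled pairs; the function names and the way $\pi_P$ is passed in are illustrative assumptions rather than the authors' code.

```python
import torch

def sigmoid_loss(scores, y):
    """l(f(h), y) = 1 / (1 + exp(y * f(h))), the sigmoid loss used in Eq. (6)."""
    return torch.sigmoid(-y * scores)

def pu_risk(scores_pos, scores_unl, pi_p, non_negative=False):
    """Unbiased PU risk (Eq. 3/6); set non_negative=True for the nnPU variant (Eq. 4)."""
    r_pos_plus = sigmoid_loss(scores_pos, +1.0).mean()    # R^+_P: positives labelled +1
    r_pos_minus = sigmoid_loss(scores_pos, -1.0).mean()   # R^-_P: positives labelled -1
    r_unl_minus = sigmoid_loss(scores_unl, -1.0).mean()   # R^-_U: unlabeled labelled -1
    neg_part = r_unl_minus - pi_p * r_pos_minus
    if non_negative:
        neg_part = torch.clamp(neg_part, min=0.0)         # max{0, .} of Eq. (4)
    return pi_p * r_pos_plus + neg_part

# scores p^t(a_ij) for observed (positive) and unobserved (unlabeled) pairs
scores_pos = torch.randn(32, requires_grad=True)
scores_unl = torch.randn(256, requires_grad=True)
loss = pu_risk(scores_pos, scores_unl, pi_p=0.3, non_negative=True)
loss.backward()
```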
4 Experimental Evaluation 4.1 Dataset and Experimental Setup The graphs on which we apply our model are constructed from the title and abstract of papers published in the biomedical fields from 1949 to 2020. The nodes are the biomedical terms, while the edges linking two nodes indicate the co-occurrence of the two terms. Note that we focus only on the co-occurrence relation and leave the polarity of the relationships for future study. To evaluate the model’s adaptivity in different scientific domains, we construct three graphs from papers relevant to COVID-19, Immunotherapy, and Virology. The graph statistics are shown in Table 1. To set up the training and testing data for TRP model, we split the graph by year intervals (5 years for COVID-19 or 10 years for Virology and Immunotherapy). We use splits of {G1, G2, ..., GT−1} for training, and use connections newly added in the final split GT for testing. Since baseline models do not work on dynamic data, hence we train on GT−1 and test on new observations made in GT . Therefore in testing, the positive pairs are those linked in GT but not in GT−1, i.e., ET \ ET−1, which can be new connections between nodes already existing in GT−1, or between a new node in GT and another node in GT−1, or between two new nodes in GT . All other unlinked node pairs in GT are unlabeled. At each t = 1, ..., T − 1, graph Gt is incrementally updated from Gt−1 by adding new nodes (biomedical terms) and their links. For the node feature vector xtv, we extract its term description and convert to a 300-dimensional feature vector by applying the latent semantic analysis (LSI). The missing term and context attributes are filled with zero vectors. If this node already exists before time t, the context features are updated with the new information about them in discoveries, and publications. In the inference (testing) stage, the new nodes in GT are only presented with their feature vectors xTv . The connections to these isolated nodes are predicted by our TRP model. We implement TRP using the Tensorflow library. Each GPU based experiment was conducted on an Nvidia 1080TI GPU. In all our experiments, we set the hidden dimensions to d = 128. For each neural network based model, we performed a grid search over the learning rate lr = {1e−2, 5e−3, 1e−3, 5e−2}, For the prior estimation, we adopted Gaussian, square-root inverse Gamma, and Dirichlet distributions to model the mean, co-variance matrix and mixing coefficient variational posteriors respectively. 4.2 Comparison Methods and Performance Matrices We evaluate our proposed TRP model in several variants and by comparing with several competitors: 1) TRP variants: a) TRP-PN - the same framework but in PN setting (i.e., treating all unobserved samples as negative, rather than unlabeled); b) TRP-nnPU - trained using the non-negative risk estimator Eq. (4) or the equation defined in section 3.1 for our problem; and c) TRP-uPU - trained using the unbiased PU risk estimator Eq. (3), the equation defined in section 3.1 for our problem. The comparison of these variants will show the impact of different risk estimators. 2) SOTA PU learning: the state-of-the-art (SOTA) PU learning methods taking input h from the SOTA node embedding models, which can be based on LSI [11], node2vec [20], DynAE [19] and GraphSAGE [21]. Since node2vec learns only from the graph structure, we concatenate the node2vec embeddings with the text (term and context) attributes to obtain an enriched node representation. 
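As a simplified stand-in for the variational prior estimator of Eqs. (7)-(8) (the paper fits the two-component mixture with stochastic variational inference and learns it jointly with the classifier), the following sketch conveys the same idea with a maximum-likelihood Gaussian mixture: fit two components on all pair embeddings, pick the component that best explains the labeled positives, and read $\pi_P$ off its mixing coefficient. The names and the use of scikit-learn are our assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def estimate_positive_prior(h_all, h_pos):
    """Stand-in for Eqs. (7)-(8): fit a 2-component mixture on all pair embeddings,
    find the component with the larger |C_k| over the labeled positives, and return
    its mixing coefficient as the estimate of pi_P."""
    gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
    gmm.fit(h_all)
    assignments = gmm.predict(h_pos)                  # most likely component per positive
    pos_component = np.bincount(assignments, minlength=2).argmax()
    return gmm.weights_[pos_component]

# h_all: embeddings of positive + unlabeled pairs; h_pos: the positive subset
rng = np.random.default_rng(0)
h_all = np.vstack([rng.normal(0, 1, (700, 16)), rng.normal(3, 1, (300, 16))])
h_pos = rng.normal(3, 1, (100, 16))
pi_p = estimate_positive_prior(h_all, h_pos)          # roughly 0.3 for this toy data
```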
Unlike our TRP that learns h for one pair of nodes, these models learn embedding vectors for individual nodes. Then, h of one pair from baselines is defined as the concatenation of the embedding vector of two nodes. We observe from the results that a concatenation of node2vec and LSI embeddings had the most competitive performance compared to others. Hence we only report the results based on concatenated embeddings for all the baselines methods. The used SOTA PU learning methods include [16] by reweighting all examples, and models with different estimation of the class prior such as SAR-EM [8] (an EM-based SAR-PU method), SCAR-KM2 [38], SCAR-C [8], SCAR-TIcE [6], and pen-L1 [14]. 3) Supervised: weaker but simpler logistic regression applied also h. We measure the performance using four different metrics. These metrics are the Macro-F1 score (F1-M), F1 score of observed connections (F1-S), F1 score adapted to PU learning (F1-P) [7, 30], and the label ranking average precision score (LRAP), where the goal is to give better rank to the positive node pairs. In all metrics used, higher values are preferred. 4.3 Evaluation Results Table 2 shows that TRP-uPU always has the superior performance over all other baselines across the datasets due to its ability to capture and utilize temporal, structural, and textual information (learning better h) and also the better class prior estimator. Among TRP variants, TRP-uPU has higher or equal F1 values comparing to the other two, indicating the benefit of using the unbiased risk estimator. On the LRAP scores, TRP-uPU and TRP-PN have the same performance on promoting the rank of the positive samples. Note that the results in Table 2 are from the models trained with their best learning rate, which is an important parameter that should be tuned in gradient-based optimizer, by either exhaustive search or advanced auto-machine learning [35]. To further investigate the performance of TRP variants, we show in Figure 2 their F1-S at different learning rate in trained from 1 to 10 epochs. We notice that TRP-uPU has a stable performance across different epochs and learning rates. This advantage is attributed to the unbiased PU risk estimation, which learns from only positive samples with no assumptions on the negative samples. We also found interesting that nnPU was worse than uPU in our experimental results. However, it is not uncommon for uPU to outperform nnPU in evaluation with real-world datasets. Similar observations were found in the results in [14, 17]. In our case, we attribute this observation to the joint optimization of the loss from the classifier and the prior estimation. Specifically, in the loss of uPU (Eq. (3)), πP affects both R̂+P (f) and R̂ − P (f). However, in the loss of nnPU (Eq. (4)), πP only weighted R̂+P (f) when R̂ − U − πP R̂ − P (f) is negative. In real-world applications, especially when the true prior is unknown, the loss selection affects the estimation of πP , and thus the final classification results. TRP-PN is not as stable as TRP-uPU due to the strict assumption of unobserved samples as negative. 
[Figure 2: F1-S of TRP-PN, TRP-uPU and TRP-nnPU on COVID-19, Virology and Immunotherapy when trained with learning rates from 1e-3 to 5e-2.]

4.4 Incremental Prediction

In Figure 3, we compare the performance of the top-performing PU learning methods on different year splits. We train TRP and the other baseline methods on data until $t-1$ and evaluate the performance on predicting the testing pairs in $t$. It is expected to see a performance gain over the incremental training process, as more and more data are used. We show F1-P due to the similar pattern on other metrics. We observe that the TRP models display an incremental learning curve across the three datasets and outperformed all other models.

[Figure 3: F1-Score over evaluation year splits for SAR-EM, SCAR-C, Elkan, TRP-PN, TRP-nnPU and TRP-uPU on COVID-19 (2001-2005 to 2016-2020), Virology (1980-1989 to 2010-2019) and Immunotherapy.]

4.5 Qualitative Analysis

We conduct a qualitative analysis of the results obtained by TRP-uPU on the COVID-19 dataset. This investigation is to qualitatively check the meaningfulness of the paired terms, e.g., can the term covid-19 be paired meaningfully with other terms. We designed two evaluations. First, we restricted our training data to 2015 and earlier, i.e., excluding the new terms in 2016-2020 in the COVID-19 graph, such as covid-19 and sars-cov-2. The trained model then predicts the connectivity between covid-19 as a new term and other terms, which can be either a new term or a term existing before 2015. Since new terms like covid-19 were not in the training graph, their term features were initialized as defined in Section 4.1. The top terms predicted to be connected with covid-19 are shown in Table 3 (top), with the verification in the COVID-19 graph of 2016-2020. We notice that the top terms are truly relevant to covid-19, and we do observe their connection in the evaluation graph. For instance, Cough, Fever, SARS, Hand (washing of hands) were known to be relevant to covid-19 at the time of writing this paper. In the second evaluation, we trained the model on the full COVID-19 data (≤ 2020) and then predicted to which terms covid-19 will be connected, although they have not yet been connected in the graph as of 2020.¹ We show the results in Table 3 (bottom), and verified the top-ranked terms by manually searching recent research articles online. We did find that there exist discussions between covid-19 and some top-ranked terms; for example, [3] discusses how covid-19 affected the market of Chromium oxide and [23] discusses caring for people living with Hepatitis B virus during the covid-19 spread.

¹The dataset used in this analysis was downloaded in early March 2020 from https://www.semanticscholar.org/cord19/download

4.6 Pair Embedding Visualization

We further analyze the node pair embeddings learned by TRP-uPU on the COVID-19 data by visualizing them with t-SNE [34]. For clear visibility, we sample 800 pairs and visualize the learned embeddings in Figure 4. We denote with colors the observed labels in comparison with the predicted labels.
We observe that the true positives (observed in $G^T$ and correctly predicted as positives - blue) and the unobserved negatives (not observed in $G^T$ and predicted as negatives - red) are well separated. This clear separation indicates that the learned h appropriately grouped the positive and negative (predicted) pairs into distinct clusters. We also observe that the unobserved positives (not observed in $G^T$ but predicted as positives - green) and the true positives are close. This supports our motivation behind conducting PU learning: the unlabeled samples are a mixture of positive and negative samples, rather than just negative samples. We observe that several unobserved positives are relationships such as the one between Tobacco and covid-19. Although these terms are not connected in the graph we study, several articles have shown a link between them [2, 46, 37]. 5 Conclusion In this paper, we propose TRP - a temporal risk estimation PU learning strategy for predicting the relationship between biomedical terms found in texts. TRP is shown to have advantages in capturing the temporal evolution of term-term relationships and in minimizing the unbiased risk with a positive prior estimator based on variational inference. The quantitative experiments and analyses show that TRP outperforms several state-of-the-art PU learning methods. The qualitative analyses also show the effectiveness and usefulness of the proposed method. For future work, we see opportunities such as predicting the relationship strength between drugs and diseases (TRP for a regression task). We can also substitute the experimental compatibility of terms for the term co-occurrence used in this study. Acknowledgments and Disclosure of Funding The research reported in this publication was supported by funding from the Computational Bioscience Research Center (CBRC), King Abdullah University of Science and Technology (KAUST), under award number URF/1/1976-31-01, and NSFC No 61828302. Additional revenue related to this work: a student internship at Sony Computer Science Laboratories Inc. We would like to acknowledge the great contribution of Sucheendra K. Palaniappan and The Systems Biology Institute to this work for the initial problem definition and data collection. 6 Broader Impact TRP can be adopted in a wide range of applications involving node pairs in a graph structure. Examples include the prediction of relationships or similarities between two social beings, the prediction of items that should be purchased together, the discovery of compatibility between drugs and diseases, and many more. Our proposed model can be used to capture and analyze the temporal relationship of node pairs in an incremental dynamic graph. Moreover, it is especially useful when only samples of a given class (e.g., positive) are available, but it is uncertain whether the unlabeled samples are positive or negative. In line with this fact, TRP treats the unlabeled data as a mixture of positive and negative samples, rather than treating them all as negative. Thus TRP is a flexible classification model learned from the positive and unlabeled data. While there could be several applications of our proposed model, we focus on the automatic biomedical hypothesis generation (HG) task, which refers to the discovery of meaningful implicit connections between biomedical terms. The use of HG systems has many benefits, such as a faster understanding of relationships between biomedical terms like viruses, drugs, and symptoms, which is essential in the fight against diseases.
With the use of HG systems, new hypotheses about undiscovered knowledge can be made from already published scholarly literature with minimal uncertainty. Scientific research and discovery is a continuous process. Hence, our proposed model can be used to predict pairwise relationships when it is not enough to know which items are related; one must also learn how the connections have been formed (in a dynamic process). However, there are some potential risks of hypothesis generation from biomedical papers. 1) Publications might be faulty (containing wrong results), which can lead to a bad estimate of future relationships. However, this is a challenging problem, as even experts in the field might be misled by faulty results. 2) Access to full publication text (or even abstracts) is not always readily available, leading to a lack of sufficient data for a good understanding of the studied terms, and hence an inaccurate representation h and degraded generation performance. 3) It is hard to interpret and explain the learning process, for example, which term features the learned embedding vectors relate to, or how much neighboring terms contribute in the dynamic evolution process. 4) To validate the predicted future relationships, background knowledge or a biologist is often needed to evaluate the prediction. Scientific discovery often means exploring new, nontraditional paths. PU learning lifts the restriction on undiscovered relations, keeping them under investigation for the probability of being positive, rather than dismissing all unobserved relations as negative. This is the key value of our work in this paper.
1. What is the main contribution of the paper, and how does it relate to the Hypothesis Generation (HG) problem? 2. What are the strengths of the proposed approach, particularly in combining different techniques? 3. What are the weaknesses of the paper, especially regarding its comparisons with other works and reproducibility? 4. How could the writing and presentation of the paper be improved? 5. Are there any concerns about the application of the proposed method to real-world scenarios, such as COVID-19?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper proposes to use temporal PU learning to tackle the hypothesis generation (HG) problem. Specifically, the authors formulate the HG problem as future connectivity prediction on a dynamic attributed graph. The authors claim that the experiments on COVID-19 datasets validate the effectiveness of the proposed model. Strengths 1. The idea of this paper is very novel, which combines PU learning, GRU, and graphSAGE. 2. The application to COVID-19 is much appreciated. Weaknesses 1. The writing and presentation of this paper can be further improved. 2. The proposed method is not compared with the SOTA methods, and some important baselines are missing. 3. The codes and datasets are not provided, which decreases the reproducibility. I understand that the data may be confidential, but the authors should at least provide the codes for their algorithm.
NIPS
Title Temporal Positive-unlabeled Learning for Biomedical Hypothesis Generation via Risk Estimation Abstract Understanding the relationships between biomedical terms like viruses, drugs, and symptoms is essential in the fight against diseases. Many attempts have been made to introduce the use of machine learning to the scientific process of hypothesis generation (HG), which refers to the discovery of meaningful implicit connections between biomedical terms. However, most existing methods fail to truly capture the temporal dynamics of scientific term relations and also assume unobserved connections to be irrelevant (i.e., in a positive-negative (PN) learning setting). To break these limits, we formulate this HG problem as future connectivity prediction task on a dynamic attributed graph via positive-unlabeled (PU) learning. Then, the key is to capture the temporal evolution of node pair (term pair) relations from just the positive and unlabeled data. We propose a variational inference model to estimate the positive prior, and incorporate it in the learning of node pair embeddings, which are then used for link prediction. Experiment results on real-world biomedical term relationship datasets and case study analyses on a COVID-19 dataset validate the effectiveness of the proposed model. 1 Introduction Recently, the study of co-relationships between biomedical entities is increasingly gaining attention. The ability to predict future relationships between biomedical entities like diseases, drugs, and genes enhances the chances of early detection of disease outbreaks and reduces the time required to detect probable disease characteristics. For instance, in 2020, the COVID-19 outbreak pushed the world to a halt with scientists working tediously to study the disease characteristics for containment, cure, and vaccine. An increasing number of articles encompassing new knowledge and discoveries from these studies were being published daily [1]. However, with the accelerated growth rate of publications, the manual process of reading to extract undiscovered knowledge increasingly becomes a tedious and time-consuming task beyond the capability of individual researchers. In an effort towards an advanced knowledge discovery process, computers have been introduced to play an ever-greater role in the scientific process with automatic hypothesis generation (HG). The study of automated HG has attracted considerable attention in recent years [41, 25, 45, 47]. Several previous works proposed techniques based on association rules [25, 18, 47], clustering and topic modeling [45, 44, 5], text mining [43, 42], and others [28, 49, 39]. However, these previous works fail to truly utilize the crucial information encapsulated in the dynamic nature of scientific discoveries and assume that the unobserved relationships denote a non-relevant relationship (negative). To model the historical evolution of term pair relations, we formulate HG on a term relationship graph G = {V,E}, which is decomposed into a sequence of attributed graphlets G = {G1, G2, ..., GT }, where the graphlet at time t is defined as, 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. Definition 1. Temporal graphlet: A temporal graphlet Gt = {V t, Et, xtv} is a temporal subgraph at time step t, which consists of nodes (terms) V t satisfying V 1 ⊆ V 2, ...,⊆ V T and the observed co-occurrence between these terms Et satisfying E1 ⊆ E2, ...,⊆ ET . And xtv is the node attribute. 
Example of the node terms can be covid-19, fever, cough, Zinc, hepatitis B virus etc. When two terms co-occurred at time t in scientific discovery, a link between them is added to Et, and the nodes are added to V t if they haven’t been added. Definition 2. Hypothesis Generation (HG): Given G = {G1, G2, ..., GT }, the target is to predict which nodes unlinked in V T should be linked (a hypothesis is generated between these nodes). We address the HG problem by modeling how Et was formed from t = 1 to T (on a dynamic graph), rather than using only ET (on a static graph). In the design of learning model, it is clear to us the observed edges are positive. However, we are in a dilemma whether the unobserved edges are positive or negative. The prior work simply set them to be negative, learning in a positive-negative (PN) setting) based on a closed world assumption that unobserved connections are irrelevant (negative) [39, 28, 4]. We set the learning with a more realistic assumption that the unobserved connections are a mixture of positive and negative term relations (unlabeled), a.k.a. Positive-unlabeled (PU) learning, which is different from semi-supervised PN learning that assumes a known set of labeled negative samples. For the observed positive samples in PU learning, they are assumed to be selected entirely at random from the set of all positive examples [16]. This assumption facilitates and simplifies both theoretical analysis and algorithmic design since the probability of observing the label of a positive example is constant. However, estimating this probability value from the positive-unlabeled data is nontrivial. We propose a variational inference model to estimate the positive prior and incorporate it in the learning of node pair embeddings, which are then used for link prediction (hypothesis generation). We highlight the contributions of this work as follows. 1) Methodology: we propose a PU learning approach on temporal graphs. It differs from other existing approaches that learn in a conventional PN setting on static graphs. In addition, we estimate the positive prior via a variational inference model, rather than setting by prior knowledge. 2) Application: to the best of our knowledge, this is the first the application of PU learning on the HG problem, and on dynamic graphs. We applied the proposed model on real-world graphs of terms in scholarly publications published from 1945 to 2020. Each of the three graphs has around 30K nodes and 1-2 million edges. The model is trained end-to-end and shows superior performance on HG. Case studies demonstrate our new and valid findings of the positive relationship between medical terms, including newly observed terms that were not observed in training. 2 Related Work of PU Learning In PU learning, since the negative samples are not available, a classifier is trained to minimize the expected misclassification rate for both the positive and unlabeled samples. One group of study [32, 31, 33, 22] proposed a two-step solution: 1) identifying reliable negative samples, and 2) learning a classifier based on the labeled positives and reliable negatives using a (semi)-supervised technique. Another group of studies [36, 30, 26, 17, 40] considered the unlabeled samples as negatives with label noise. Hence, they place higher penalties on misclassified positive examples or tune a hyperparameter based on suitable PU evaluation metrics. 
Such a proposed framework follows the SCAR (Selected Completely at Random) assumption since the noise for negative samples is constant.
PU Learning via Risk Estimation Recently, the use of unbiased risk estimators has gained attention [12, 14, 15, 48]. The goal is to minimize the expected classification risk to obtain an empirical risk minimizer. Given an input representation $h$ (in our case the node pair representation to be learned), let $f : \mathbb{R}^d \to \mathbb{R}$ be an arbitrary decision function and $l : \mathbb{R} \times \{\pm 1\} \to \mathbb{R}$ be the loss function calculating the incurred loss $l(f(h), y)$ of predicting an output $f(h)$ when the true value is $y$. Function $l$ has a variety of forms, and is determined by application needs [29, 13]. In PN learning, the empirical risk minimizer $\hat{f}_{PN}$ is obtained by minimizing the PN risk $\hat{R}(f)$ w.r.t. a class prior of $\pi_P$:
\[ \hat{R}(f) = \pi_P \hat{R}^+_P(f) + \pi_N \hat{R}^-_N(f), \tag{1} \]
where $\pi_N = 1 - \pi_P$, $\hat{R}^+_P(f) = \frac{1}{n_P}\sum_{i=1}^{n_P} l(f(h^P_i), +1)$ and $\hat{R}^-_N(f) = \frac{1}{n_N}\sum_{i=1}^{n_N} l(f(h^N_i), -1)$. The variables $n_P$ and $n_N$ are the numbers of positive and negative samples, respectively. PU learning has to exploit the fact that $\pi_N p_N(h) = p(h) - \pi_P p_P(h)$, due to the absence of negative samples. The second part of Eq. (1) can be reformulated as:
\[ \pi_N \hat{R}^-_N(f) = \hat{R}^-_U - \pi_P \hat{R}^-_P(f), \tag{2} \]
where $R^-_U = \mathbb{E}_{h \sim p(h)}[l(f(h), -1)]$ and $R^-_P = \mathbb{E}_{h \sim p(h \mid y = +1)}[l(f(h), -1)]$. Furthermore, the classification risk can then be approximated by:
\[ \hat{R}_{PU}(f) = \pi_P \hat{R}^+_P(f) + \hat{R}^-_U(f) - \pi_P \hat{R}^-_P(f), \tag{3} \]
where $\hat{R}^-_P(f) = \frac{1}{n_P}\sum_{i=1}^{n_P} l(f(h^P_i), -1)$, $\hat{R}^-_U(f) = \frac{1}{n_U}\sum_{i=1}^{n_U} l(f(h^U_i), -1)$, and $n_U$ is the number of unlabeled data samples. To obtain an empirical risk minimizer $\hat{f}_{PU}$ for the PU learning framework, $\hat{R}_{PU}(f)$ needs to be minimized. Kiryo et al. noted that the model tends to suffer from overfitting on the training data when the model $f$ is made too flexible [29]. To alleviate this problem, the authors proposed the use of a non-negative risk estimator for PU learning:
\[ \tilde{R}_{PU}(f) = \pi_P \hat{R}^+_P(f) + \max\{0, \hat{R}^-_U - \pi_P \hat{R}^-_P(f)\}. \tag{4} \]
It works in fact by explicitly constraining the training risk of PU to be non-negative. The key challenge in practical PU learning is the unknown of prior $\pi_P$.
Prior Estimation The knowledge of the class prior $\pi_P$ is quintessential to estimating the classification risk. In PU learning for our node pairs, we represent a sample as $\{h, s, y\}$, where $h$ is the node pair representation (to be learned), $s$ indicates if the pair relationship is observed (labeled, $s = 1$) or unobserved (unlabeled, $s = 0$), and $y$ denotes the true class (positive or negative). We have only the positive samples labeled: $p(y = 1 \mid s = 1) = 1$. If $s = 0$, the sample can belong to either the positive or negative class. PU learning runs commonly with the Selected Completely at Random (SCAR) assumption, which postulates that the labeled sample set is a random subset of the positive sample set [16, 6, 8]. The probability of selecting a positive sample to observe can be denoted as $p(s = 1 \mid y = 1, h)$. The SCAR assumption means: $p(s = 1 \mid y = 1, h) = p(s = 1 \mid y = 1)$. However, it is hard to estimate $\pi_P = p(y = 1)$ with only a small set of observed samples ($s = 1$) and a large set of unobserved samples ($s = 0$) [7]. Solutions have been tried by i) estimating from a validation set of a fully labeled data set (all with $s = 1$ and knowing $y = 1$ or $-1$) [29, 10]; ii) estimating from the background knowledge; and iii) estimating directly from the PU data [16, 6, 8, 27, 14]. In this paper, we focus on estimating the prior directly from the PU data.
Specifically, unlike the other methods, we propose a scalable method based on deep variational inference to jointly estimate the prior and train the classification model end-to-end. The proposed deep variational inference uses KL-divergence to estimate the parameters of class mixture model distributions of the positive and negative class, in contrast to the method proposed in [14], which uses penalized L1 divergences to assign higher penalties to class priors that scale the positive distribution as more than the total distribution.
3 PU learning on Temporal Attributed Networks 3.1 Model Design The architecture of our Temporal Relationship Predictor (TRP) model is shown in Fig. 1. For a given pair of nodes $a_{ij} = \langle v_i, v_j \rangle$ in any temporal graphlet $G^t$, the main steps used in the training process of TRP for calculating the connectivity prediction score $p_t(a_{ij})$ are given in Algorithm 1. The testing process also uses the same Algorithm 1 (with $t = T$), calculating $p_T(a_{ij})$ for node pairs that have not been connected in $G^{T-1}$. The connectivity prediction score is calculated in line 6 of Algorithm 1 by $p_t(a_{ij}) = f_C(h^t_{a_{ij}}; \theta_C)$, where $\theta_C$ is the classification network parameter, and the embedding vector $h^t_{a_{ij}}$ for the pair $a_{ij}$ is iteratively updated in lines 1-5. These iterations of updating $h^t_{a_{ij}}$ are shown as the recurrent structure in Fig. 1 (a), followed by the classifier $f_C(\cdot; \theta_C)$. The recurrent update function $h^\tau_{a_{ij}} = f_A(h^{\tau-1}_{a_{ij}}, z^\tau_{v_i}, z^\tau_{v_j}; \theta_A)$, $\tau = 1 \ldots t$, in line 4 is shown in Fig. 1 (b), and has a Gated recurrent unit (GRU) network at its core,
\[ P = \sigma_g\big(W^z f_m(z^\tau_{v_i}, z^\tau_{v_j}) + U^P h^{\tau-1}_{a_{ij}} + b^P\big), \quad r = \sigma_g\big(W^r f_m(z^\tau_{v_i}, z^\tau_{v_j}) + U^r h^{\tau-1}_{a_{ij}} + b^r\big), \]
\[ \tilde{h}^\tau_{a_{ij}} = \sigma_{h'}\big(W f_m(z^\tau_{v_i}, z^\tau_{v_j}) + r \circ U h^{\tau-1}_{a_{ij}} + b\big), \quad h^\tau_{a_{ij}} = P \circ \tilde{h}^\tau_{a_{ij}} + (1 - P) \circ h^{\tau-1}_{a_{ij}}. \tag{5} \]
Algorithm 1: Calculate the future connection score for term pairs $a_{ij} = \langle v_i, v_j \rangle$
Input: $\mathcal{G} = \{G^1, G^2, \ldots, G^T\}$ with node features $x^t_v$, a node pair $a_{ij} = \langle v_i, v_j \rangle$ in $G^t$, and an initialized pair embedding vector $h^0_{a_{ij}}$ (e.g., by zeros)
Result: $p_t(a_{ij})$, the connectivity prediction score for the node pair $a_{ij}$
1 for $\tau \leftarrow 1 \cdots t$ do
2   Obtain the current node feature $x^\tau_v$ ($v = v_i, v_j$) of both nodes (terms) $v_i, v_j$, as well as $x^\tau_{Nr(v)}$ ($v = v_i, v_j$), the node features of sampled neighboring nodes for $v_i, v_j$;
3   Aggregate the neighborhood information of node $v = v_i, v_j$: $z^\tau_v = f_G(x^\tau_v, x^\tau_{Nr(v)}; \theta_G)$;
4   Update the embedding vector for the node pair: $h^\tau_{a_{ij}} = f_A(h^{\tau-1}_{a_{ij}}, z^\tau_{v_i}, z^\tau_{v_j}; \theta_A)$;
5 end
6 Return $p_t(a_{ij}) = f_C(h^t_{a_{ij}}; \theta_C)$
where $\circ$ denotes element-wise multiplication, $\sigma$ is a nonlinear activation function, and $f_m(\cdot)$ is an aggregation function. In this study, we use a max pool aggregation. The variables $\{W, U\}$ are the weights. The inputs to function $f_A$ include: $h^{\tau-1}_{a_{ij}}$, the embedding vector in the previous step; and $\{z^\tau_{v_i}, z^\tau_{v_j}\}$, the representations of nodes $v_i$ and $v_j$ after aggregation of their neighborhood, $z^\tau_v = f_G(x^\tau_v, x^\tau_{Nr(v)}; \theta_G)$, given in line 3. The aggregation function $f_G$ takes as input the node feature $x^\tau_v$ and the neighboring node features $x^\tau_{Nr(v)}$ and goes through the aggregation block shown in Fig. 1 (c). The aggregation network $f_G(\cdot; \theta_G)$ is implemented following GraphSAGE [21], which is one of the most popular graph neural networks for aggregating a node and its neighbors. The loss function in our problem $l(p_t(a_{ij}), y)$ evaluates the loss incurred by predicting a connectivity $p_t(a_{ij}) = f_C(h^t_{a_{ij}}; \theta_C)$ when the ground truth is $y$.
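To make the recurrent scoring procedure concrete, below is a minimal NumPy sketch of Algorithm 1 together with the GRU-style update of Eq. (5). It is only an illustrative sketch under simplifying assumptions, not the paper's TensorFlow implementation: the weight names in params, the placeholder neighborhood aggregator f_G (standing in for the GraphSAGE network), the element-wise maximum of the two node vectors used as the max-pool aggregation f_m, and the single linear classifier w_C are all assumptions introduced here.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_pair_update(h_prev, z_i, z_j, params):
    """One step of f_A (Eq. 5): GRU-style update of the pair embedding.
    `params` holds weight matrices/vectors for the update gate P, reset
    gate r, and candidate state."""
    m = np.maximum(z_i, z_j)  # f_m: element-wise max of the two node vectors
    P = sigmoid(params["Wz"] @ m + params["UP"] @ h_prev + params["bP"])
    r = sigmoid(params["Wr"] @ m + params["Ur"] @ h_prev + params["br"])
    h_tilde = np.tanh(params["W"] @ m + r * (params["U"] @ h_prev) + params["b"])
    return P * h_tilde + (1.0 - P) * h_prev

def predict_pair_score(graphlets, vi, vj, f_G, params, w_C, d=128):
    """Algorithm 1: roll the pair embedding through the graphlet sequence
    and return the connectivity score p_t(a_ij). `f_G(graphlet, v)` is
    assumed to return the aggregated node representation z_v."""
    h = np.zeros(d)                               # h^0_{a_ij}
    for G_tau in graphlets:                       # tau = 1 ... t
        z_i = f_G(G_tau, vi)                      # line 3: neighborhood aggregation
        z_j = f_G(G_tau, vj)
        h = gru_pair_update(h, z_i, z_j, params)  # line 4: pair embedding update
    return sigmoid(w_C @ h)                       # line 6: classifier f_C

A call such as predict_pair_score([G1, G2, G3], vi, vj, f_G, params, w_C) would roll the pair embedding through three graphlets and return a score in (0, 1).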
For constructing the training set for our PU learning in the dynamic graph, we first clarify the label notations. For one pair $a_{ij}$ from a graph $G^t$, its label $y^{ij}_t = 1$ (positive) if the two nodes have a link observed in $G^{t+1}$ (they have an edge $\in E^{t+1}$, observed in the next time step), i.e., $s^{ij}_t = 1$. Otherwise, when no link is observed between them in $G^{t+1}$, $a_{ij}$ is unlabeled, i.e., $s^{ij}_t = 0$, since $y^{ij}_t$ can be either 1 or -1. Since we consider an insertion-only graphlet sequence, $V^1 \subseteq V^2 \subseteq \cdots \subseteq V^T$ and $E^1 \subseteq E^2 \subseteq \cdots \subseteq E^T$, $y^{ij}_t = 1$ is maintained for all future steps after $t$ (once positive, always positive). At the final step $t = T$, all pairs with observed connections already have $y^{ij}_T = 1$; our objective is to predict the connectivity score for those pairs with $s^{ij}_T = 0$. Our loss function is defined following the unbiased risk estimator in Eq. (3),
\[ \mathcal{L}_R = \pi_P \hat{R}^+_P(f_C) + \hat{R}^-_U(f_C) - \pi_P \hat{R}^-_P(f_C), \tag{6} \]
where $\hat{R}^+_P(f_C) = \frac{1}{|H_P|}\sum_{a_{ij} \in H_P} \frac{1}{1 + \exp(p_t(a_{ij}))}$, $\hat{R}^-_U(f_C) = \frac{1}{|H_U|}\sum_{a_{ij} \in H_U} \frac{1}{1 + \exp(-p_t(a_{ij}))}$, and $\hat{R}^-_P(f_C) = \frac{1}{|H_P|}\sum_{a_{ij} \in H_P} \frac{1}{1 + \exp(-p_t(a_{ij}))}$, with the positive samples $H_P$ and unlabeled samples $H_U$, when taking $l$ as the sigmoid loss function. $\mathcal{L}_R$ can be adjusted with the non-negative constraint in Eq. (4), with the same definition of $\hat{R}^+_P(f)$, $\hat{R}^-_U$, and $\hat{R}^-_P(f)$.
3.2 Prior Estimation The positive prior $\pi_P$ is a key factor in $\mathcal{L}_R$ to be addressed. The samples we have from $\mathcal{G}$ are only positive $H_P$ and unlabeled $H_U$. Due to the absence of negative samples and of prior knowledge, we present an estimate of the class prior from the distribution of $h$, which is the pair embedding from $f_A$. Without loss of generality, we assume that the learned $h$ of all samples has a Gaussian mixture distribution of two components, one is for the positive samples, while the other is for the negative samples although they are unlabeled. The mixture distribution is parameterized by $\beta$, including the mean, co-variance matrix and mixing coefficient of each component. We learn the mixture distribution using stochastic variational inference [24] via the “Bayes by Backprop” technique [9]. The use of variational inference has been shown to have the ability to model salient properties of the data generation mechanism and avoid singularities. The idea is to find variational distribution variables $\theta^*$ that minimize the Kullback-Leibler (KL) divergence between the variational distribution $q(\beta|\theta)$ and the true posterior distribution $p(\beta|h)$:
\[ \theta^* = \arg\min_\theta \mathcal{L}_E, \tag{7} \]
where $\mathcal{L}_E = \mathrm{KL}(q(\beta|\theta) \,\|\, p(\beta|h)) = \mathrm{KL}(q(\beta|\theta) \,\|\, p(\beta)) - \mathbb{E}_{q(\beta|\theta)}[\log p(h|\beta)]$. The resulting cost function $\mathcal{L}_E$ on the right of Eq. (7) is known as the (negative) “evidence lower bound” (ELBO). The second term in $\mathcal{L}_E$ is the likelihood of $h$ fitting to the mixture Gaussian with parameter $\beta$: $\mathbb{E}_{q(\beta|\theta)}[\log p(h|\beta)]$, while the first term is referred to as the complexity cost [9]. We optimize the ELBO using stochastic gradient descent. With $\theta^*$, the positive prior is then estimated as
\[ \pi_P = q(\beta^\pi_i \mid \theta^*), \quad i = \arg\max_{k=1,2} |C_k|, \tag{8} \]
where $C_1 = \{h \in H_P : p(h|\beta_1) > p(h|\beta_2)\}$ and $C_2 = \{h \in H_P : p(h|\beta_2) > p(h|\beta_1)\}$.
3.3 Parameter Learning To train the three networks $f_A(\cdot; \theta_A)$, $f_G(\cdot; \theta_G)$, $f_C(\cdot; \theta_C)$ for connectivity score prediction, we jointly optimize $\mathcal{L} = \sum_{t=1}^{T} \mathcal{L}_{R_t} + \mathcal{L}_{E_t}$, using Adam over the model parameters. Loss $\mathcal{L}_R$ is the PU classification risk as described in Section 3.1, and $\mathcal{L}_E$ is the loss of prior estimation as described in Section 3.2. Note that during training, $y^{T}_{a_{ij}} = y^{T-1}_{a_{ij}}$ since we do not observe $G^T$ in training. This is to enforce prediction consistency.
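As a rough, self-contained illustration of how the two losses fit together, the sketch below evaluates the sigmoid-loss PU risk of Eq. (6), optionally with the non-negative correction of Eq. (4), and a simplified positive-prior estimate. The prior estimator here is an assumption-laden stand-in: it replaces the Bayes-by-Backprop variational mixture of Section 3.2 with an EM-fitted scikit-learn GaussianMixture and reads pi_P off the mixing weight of the component that claims most of the labelled positives, which is not the authors' exact procedure.

import numpy as np
from sklearn.mixture import GaussianMixture

def sigmoid_loss(score, y):
    """Sigmoid loss l(f(h), y) = 1 / (1 + exp(y * f(h)))."""
    return 1.0 / (1.0 + np.exp(y * score))

def pu_risk(scores_pos, scores_unl, pi_p, non_negative=False):
    """Unbiased PU risk of Eq. (3)/(6); optionally the non-negative
    variant of Eq. (4). `scores_pos` / `scores_unl` are arrays of raw
    scores p_t(a_ij) for positive and unlabeled pairs."""
    r_pos_plus = sigmoid_loss(scores_pos, +1).mean()    # \hat{R}^+_P
    r_pos_minus = sigmoid_loss(scores_pos, -1).mean()   # \hat{R}^-_P
    r_unl_minus = sigmoid_loss(scores_unl, -1).mean()   # \hat{R}^-_U
    neg_part = r_unl_minus - pi_p * r_pos_minus
    if non_negative:
        neg_part = max(0.0, neg_part)
    return pi_p * r_pos_plus + neg_part

def estimate_positive_prior(h_pos, h_all, seed=0):
    """Simplified stand-in for Section 3.2: fit a two-component Gaussian
    mixture to all pair embeddings, pick the component that claims most
    of the labelled positives, and use its mixing coefficient as pi_P."""
    gmm = GaussianMixture(n_components=2, covariance_type="full",
                          random_state=seed).fit(h_all)
    comp_of_pos = gmm.predict(h_pos)   # most likely component per positive pair
    k = np.argmax(np.bincount(comp_of_pos, minlength=2))
    return gmm.weights_[k]

In the full model, these two quantities play the roles of L_R and the byproduct of L_E, and would be minimized jointly with Adam rather than computed in isolation as above.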
4 Experimental Evaluation 4.1 Dataset and Experimental Setup The graphs on which we apply our model are constructed from the title and abstract of papers published in the biomedical fields from 1949 to 2020. The nodes are the biomedical terms, while the edges linking two nodes indicate the co-occurrence of the two terms. Note that we focus only on the co-occurrence relation and leave the polarity of the relationships for future study. To evaluate the model’s adaptivity in different scientific domains, we construct three graphs from papers relevant to COVID-19, Immunotherapy, and Virology. The graph statistics are shown in Table 1. To set up the training and testing data for TRP model, we split the graph by year intervals (5 years for COVID-19 or 10 years for Virology and Immunotherapy). We use splits of {G1, G2, ..., GT−1} for training, and use connections newly added in the final split GT for testing. Since baseline models do not work on dynamic data, hence we train on GT−1 and test on new observations made in GT . Therefore in testing, the positive pairs are those linked in GT but not in GT−1, i.e., ET \ ET−1, which can be new connections between nodes already existing in GT−1, or between a new node in GT and another node in GT−1, or between two new nodes in GT . All other unlinked node pairs in GT are unlabeled. At each t = 1, ..., T − 1, graph Gt is incrementally updated from Gt−1 by adding new nodes (biomedical terms) and their links. For the node feature vector xtv, we extract its term description and convert to a 300-dimensional feature vector by applying the latent semantic analysis (LSI). The missing term and context attributes are filled with zero vectors. If this node already exists before time t, the context features are updated with the new information about them in discoveries, and publications. In the inference (testing) stage, the new nodes in GT are only presented with their feature vectors xTv . The connections to these isolated nodes are predicted by our TRP model. We implement TRP using the Tensorflow library. Each GPU based experiment was conducted on an Nvidia 1080TI GPU. In all our experiments, we set the hidden dimensions to d = 128. For each neural network based model, we performed a grid search over the learning rate lr = {1e−2, 5e−3, 1e−3, 5e−2}, For the prior estimation, we adopted Gaussian, square-root inverse Gamma, and Dirichlet distributions to model the mean, co-variance matrix and mixing coefficient variational posteriors respectively. 4.2 Comparison Methods and Performance Matrices We evaluate our proposed TRP model in several variants and by comparing with several competitors: 1) TRP variants: a) TRP-PN - the same framework but in PN setting (i.e., treating all unobserved samples as negative, rather than unlabeled); b) TRP-nnPU - trained using the non-negative risk estimator Eq. (4) or the equation defined in section 3.1 for our problem; and c) TRP-uPU - trained using the unbiased PU risk estimator Eq. (3), the equation defined in section 3.1 for our problem. The comparison of these variants will show the impact of different risk estimators. 2) SOTA PU learning: the state-of-the-art (SOTA) PU learning methods taking input h from the SOTA node embedding models, which can be based on LSI [11], node2vec [20], DynAE [19] and GraphSAGE [21]. Since node2vec learns only from the graph structure, we concatenate the node2vec embeddings with the text (term and context) attributes to obtain an enriched node representation. 
Unlike our TRP that learns h for one pair of nodes, these models learn embedding vectors for individual nodes. Then, h of one pair from baselines is defined as the concatenation of the embedding vector of two nodes. We observe from the results that a concatenation of node2vec and LSI embeddings had the most competitive performance compared to others. Hence we only report the results based on concatenated embeddings for all the baselines methods. The used SOTA PU learning methods include [16] by reweighting all examples, and models with different estimation of the class prior such as SAR-EM [8] (an EM-based SAR-PU method), SCAR-KM2 [38], SCAR-C [8], SCAR-TIcE [6], and pen-L1 [14]. 3) Supervised: weaker but simpler logistic regression applied also h. We measure the performance using four different metrics. These metrics are the Macro-F1 score (F1-M), F1 score of observed connections (F1-S), F1 score adapted to PU learning (F1-P) [7, 30], and the label ranking average precision score (LRAP), where the goal is to give better rank to the positive node pairs. In all metrics used, higher values are preferred. 4.3 Evaluation Results Table 2 shows that TRP-uPU always has the superior performance over all other baselines across the datasets due to its ability to capture and utilize temporal, structural, and textual information (learning better h) and also the better class prior estimator. Among TRP variants, TRP-uPU has higher or equal F1 values comparing to the other two, indicating the benefit of using the unbiased risk estimator. On the LRAP scores, TRP-uPU and TRP-PN have the same performance on promoting the rank of the positive samples. Note that the results in Table 2 are from the models trained with their best learning rate, which is an important parameter that should be tuned in gradient-based optimizer, by either exhaustive search or advanced auto-machine learning [35]. To further investigate the performance of TRP variants, we show in Figure 2 their F1-S at different learning rate in trained from 1 to 10 epochs. We notice that TRP-uPU has a stable performance across different epochs and learning rates. This advantage is attributed to the unbiased PU risk estimation, which learns from only positive samples with no assumptions on the negative samples. We also found interesting that nnPU was worse than uPU in our experimental results. However, it is not uncommon for uPU to outperform nnPU in evaluation with real-world datasets. Similar observations were found in the results in [14, 17]. In our case, we attribute this observation to the joint optimization of the loss from the classifier and the prior estimation. Specifically, in the loss of uPU (Eq. (3)), πP affects both R̂+P (f) and R̂ − P (f). However, in the loss of nnPU (Eq. (4)), πP only weighted R̂+P (f) when R̂ − U − πP R̂ − P (f) is negative. In real-world applications, especially when the true prior is unknown, the loss selection affects the estimation of πP , and thus the final classification results. TRP-PN is not as stable as TRP-uPU due to the strict assumption of unobserved samples as negative. 
[Figure 2: F1-S of TRP-PN, TRP-uPU, and TRP-nnPU at learning rates from 1e-3 to 5e-2 on the COVID-19, Virology, and Immunotherapy datasets.]
4.4 Incremental Prediction In Figure 3, we compare the performance of the top-performing PU learning methods on different year splits. We train TRP and the other baseline methods on data until t − 1 and evaluate their performance on predicting the testing pairs in t. We expect to see performance gains over the incremental training process, as more and more data are used. We show F1-P since the other metrics display a similar pattern. We observe that the TRP models display an incremental learning curve across the three datasets and outperform all other models.
[Figure 3: F1 score per evaluation year for SAR-EM, SCAR-C, Elkan, TRP-PN, TRP-nnPU, and TRP-uPU on the COVID-19 (2001-2020), Virology (1980-2019), and Immunotherapy datasets.]
4.5 Qualitative Analysis We conduct a qualitative analysis of the results obtained by TRP-uPU on the COVID-19 dataset. This investigation qualitatively checks the meaningfulness of the paired terms, e.g., whether the term covid-19 can be meaningfully paired with other terms. We designed two evaluations. First, we restrict the training data to years up to 2015, i.e., excluding the terms that first appear in 2016-2020 in the COVID-19 graph, such as covid-19 and sars-cov-2. The trained model then predicts the connectivity between covid-19 as a new term and other terms, which can also be new terms or terms existing before 2015. Since new terms like covid-19 were not in the training graph, their term features were initialized as defined in Section 4.1. The top terms predicted to be connected with covid-19 are shown in Table 3 (top), together with their verification in the 2016-2020 COVID-19 graph. We notice that the top terms are truly relevant to covid-19, and we do observe their connections in the evaluation graph. For instance, Cough, Fever, SARS, and Hand (washing of hands) were known to be relevant to covid-19 at the time of writing this paper. In the second evaluation, we train the model on the full COVID-19 data (up to 2020) and then predict to which terms covid-19 will be connected, even though they have not yet been connected in the graph as of 2020.¹ We show the results in Table 3 (bottom) and verified the top-ranked terms by manually searching recent research articles online. We did find discussions relating covid-19 to some of the top-ranked terms; for example, [3] discusses how covid-19 affected the market of Chromium oxide and [23] discusses caring for people living with Hepatitis B virus during the covid-19 spread.
4.6 Pair Embedding Visualization We further analyze the node pair embeddings learned by TRP-uPU on the COVID-19 data by visualizing them with t-SNE [34]. For clear visibility, we sample 800 pairs and visualize the learned embeddings in Figure 4. We denote with colors the observed labels in comparison with the predicted labels.
¹ Dataset used in this analysis was downloaded in early March 2020 from https://www.semanticscholar.org/cord19/download
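For reference, a visualization in the spirit of Figure 4 can be produced along the following lines. This is only a sketch: the array names (h_pairs for the learned pair embeddings, observed and predicted for 0/1 NumPy label arrays) are hypothetical, and the paper's exact sampling of 800 pairs and color scheme are not reproduced.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_pair_embeddings(h_pairs, observed, predicted, seed=0):
    """Project pair embeddings to 2-D with t-SNE and color them by the
    combination of the observed label (whether the pair is linked in the
    final graph) and the model's prediction."""
    xy = TSNE(n_components=2, random_state=seed).fit_transform(h_pairs)
    groups = {
        "true positive (observed, predicted +)": (observed == 1) & (predicted == 1),
        "unobserved negative (unobserved, predicted -)": (observed == 0) & (predicted == 0),
        "unobserved positive (unobserved, predicted +)": (observed == 0) & (predicted == 1),
    }
    for name, mask in groups.items():
        plt.scatter(xy[mask, 0], xy[mask, 1], s=8, label=name)
    plt.legend()
    plt.show()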
We observe that the true positives (observed in $G^T$ and correctly predicted as positives - blue) and the unobserved negatives (not observed in $G^T$ and predicted as negatives - red) are well separated. This clear separation indicates that the learned h appropriately grouped the positive and negative (predicted) pairs into distinct clusters. We also observe that the unobserved positives (not observed in $G^T$ but predicted as positives - green) and the true positives are close. This supports our motivation behind conducting PU learning: the unlabeled samples are a mixture of positive and negative samples, rather than just negative samples. We observe that several unobserved positives are relationships such as the one between Tobacco and covid-19. Although these terms are not connected in the graph we study, several articles have shown a link between them [2, 46, 37]. 5 Conclusion In this paper, we propose TRP - a temporal risk estimation PU learning strategy for predicting the relationship between biomedical terms found in texts. TRP is shown to have advantages in capturing the temporal evolution of term-term relationships and in minimizing the unbiased risk with a positive prior estimator based on variational inference. The quantitative experiments and analyses show that TRP outperforms several state-of-the-art PU learning methods. The qualitative analyses also show the effectiveness and usefulness of the proposed method. For future work, we see opportunities such as predicting the relationship strength between drugs and diseases (TRP for a regression task). We can also substitute the experimental compatibility of terms for the term co-occurrence used in this study. Acknowledgments and Disclosure of Funding The research reported in this publication was supported by funding from the Computational Bioscience Research Center (CBRC), King Abdullah University of Science and Technology (KAUST), under award number URF/1/1976-31-01, and NSFC No 61828302. Additional revenue related to this work: a student internship at Sony Computer Science Laboratories Inc. We would like to acknowledge the great contribution of Sucheendra K. Palaniappan and The Systems Biology Institute to this work for the initial problem definition and data collection. 6 Broader Impact TRP can be adopted in a wide range of applications involving node pairs in a graph structure. Examples include the prediction of relationships or similarities between two social beings, the prediction of items that should be purchased together, the discovery of compatibility between drugs and diseases, and many more. Our proposed model can be used to capture and analyze the temporal relationship of node pairs in an incremental dynamic graph. Moreover, it is especially useful when only samples of a given class (e.g., positive) are available, but it is uncertain whether the unlabeled samples are positive or negative. In line with this fact, TRP treats the unlabeled data as a mixture of positive and negative samples, rather than treating them all as negative. Thus TRP is a flexible classification model learned from the positive and unlabeled data. While there could be several applications of our proposed model, we focus on the automatic biomedical hypothesis generation (HG) task, which refers to the discovery of meaningful implicit connections between biomedical terms. The use of HG systems has many benefits, such as a faster understanding of relationships between biomedical terms like viruses, drugs, and symptoms, which is essential in the fight against diseases.
With the use of HG systems, new hypotheses about undiscovered knowledge can be made from already published scholarly literature with minimal uncertainty. Scientific research and discovery is a continuous process. Hence, our proposed model can be used to predict pairwise relationships when it is not enough to know which items are related; one must also learn how the connections have been formed (in a dynamic process). However, there are some potential risks of hypothesis generation from biomedical papers. 1) Publications might be faulty (containing wrong results), which can lead to a bad estimate of future relationships. However, this is a challenging problem, as even experts in the field might be misled by faulty results. 2) Access to full publication text (or even abstracts) is not always readily available, leading to a lack of sufficient data for a good understanding of the studied terms, and hence an inaccurate representation h and degraded generation performance. 3) It is hard to interpret and explain the learning process, for example, which term features the learned embedding vectors relate to, or how much neighboring terms contribute in the dynamic evolution process. 4) To validate the predicted future relationships, background knowledge or a biologist is often needed to evaluate the prediction. Scientific discovery often means exploring new, nontraditional paths. PU learning lifts the restriction on undiscovered relations, keeping them under investigation for the probability of being positive, rather than dismissing all unobserved relations as negative. This is the key value of our work in this paper.
1. What is the focus and contribution of the paper regarding the transformation of the HG problem into a PU learning problem? 2. What are the strengths of the proposed algorithm, particularly in terms of its sound and clear ideas and process? 3. What are the weaknesses of the paper, especially regarding the intractable problem of class prior estimation and suitability of the Gaussian mixture model? 4. Do you have any concerns or questions regarding the experimental results and the comparison between nnPU and uPU? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper designs a novel algorithm named TPR in PU-Learning to predict future connectivity on a dynamic attributed graph (HG problem). The algorithm can be divided into two parts. The first part is to transform the HG problem into a PU learning problem, so that the unbiased risk estimator based PU methods can be applied. The second part is about estimation of the positive prior also by optimizing an objective function related to ELBO. Strengths The ideas and process of the algorithm are sound and clear, and the proposed methods have comparably high accuracy on ground-truth datasets. The major contribution of this paper is to transform a practically important problem, mecial HG problem, into a PU learning problem. It shows the potential of the PU learning in real-world applications besides image classification. Weaknesses 1) The class prior estimation is an intractable problem for PU learning, and it is hard to identify the quantity from data without the assumption of irreducibility (the negative distribution cannot be a mixture that contains positive distribution). The authors seem to avoid this problem by using the Guassian mixture model with two components, but it leads to another problem: Is the GMM suitable for the data? 2) This is related to the above one. Can authors show (at least by numerical results) that the class prior is well estimated in experiments? I am curious how the ratio of positive dataset to unlabeled dataset influences effectiveness of the model. 3) It is interesting to see that nnPU is worse than uPU in experiments. I think more analysis is required, because the to optimal PU classifier must satisfy the "the non-negative restriction of the risk estimation" according to the theoretical analysis. ==== Update after rebuttal: I thank the authors for addressing the main points. But more details on the estimation of class prior is still required.
NIPS
Title Temporal Positive-unlabeled Learning for Biomedical Hypothesis Generation via Risk Estimation Abstract Understanding the relationships between biomedical terms like viruses, drugs, and symptoms is essential in the fight against diseases. Many attempts have been made to introduce the use of machine learning to the scientific process of hypothesis generation (HG), which refers to the discovery of meaningful implicit connections between biomedical terms. However, most existing methods fail to truly capture the temporal dynamics of scientific term relations and also assume unobserved connections to be irrelevant (i.e., in a positive-negative (PN) learning setting). To break these limits, we formulate this HG problem as future connectivity prediction task on a dynamic attributed graph via positive-unlabeled (PU) learning. Then, the key is to capture the temporal evolution of node pair (term pair) relations from just the positive and unlabeled data. We propose a variational inference model to estimate the positive prior, and incorporate it in the learning of node pair embeddings, which are then used for link prediction. Experiment results on real-world biomedical term relationship datasets and case study analyses on a COVID-19 dataset validate the effectiveness of the proposed model. 1 Introduction Recently, the study of co-relationships between biomedical entities is increasingly gaining attention. The ability to predict future relationships between biomedical entities like diseases, drugs, and genes enhances the chances of early detection of disease outbreaks and reduces the time required to detect probable disease characteristics. For instance, in 2020, the COVID-19 outbreak pushed the world to a halt with scientists working tediously to study the disease characteristics for containment, cure, and vaccine. An increasing number of articles encompassing new knowledge and discoveries from these studies were being published daily [1]. However, with the accelerated growth rate of publications, the manual process of reading to extract undiscovered knowledge increasingly becomes a tedious and time-consuming task beyond the capability of individual researchers. In an effort towards an advanced knowledge discovery process, computers have been introduced to play an ever-greater role in the scientific process with automatic hypothesis generation (HG). The study of automated HG has attracted considerable attention in recent years [41, 25, 45, 47]. Several previous works proposed techniques based on association rules [25, 18, 47], clustering and topic modeling [45, 44, 5], text mining [43, 42], and others [28, 49, 39]. However, these previous works fail to truly utilize the crucial information encapsulated in the dynamic nature of scientific discoveries and assume that the unobserved relationships denote a non-relevant relationship (negative). To model the historical evolution of term pair relations, we formulate HG on a term relationship graph G = {V,E}, which is decomposed into a sequence of attributed graphlets G = {G1, G2, ..., GT }, where the graphlet at time t is defined as, 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. Definition 1. Temporal graphlet: A temporal graphlet Gt = {V t, Et, xtv} is a temporal subgraph at time step t, which consists of nodes (terms) V t satisfying V 1 ⊆ V 2, ...,⊆ V T and the observed co-occurrence between these terms Et satisfying E1 ⊆ E2, ...,⊆ ET . And xtv is the node attribute. 
Example of the node terms can be covid-19, fever, cough, Zinc, hepatitis B virus etc. When two terms co-occurred at time t in scientific discovery, a link between them is added to Et, and the nodes are added to V t if they haven’t been added. Definition 2. Hypothesis Generation (HG): Given G = {G1, G2, ..., GT }, the target is to predict which nodes unlinked in V T should be linked (a hypothesis is generated between these nodes). We address the HG problem by modeling how Et was formed from t = 1 to T (on a dynamic graph), rather than using only ET (on a static graph). In the design of learning model, it is clear to us the observed edges are positive. However, we are in a dilemma whether the unobserved edges are positive or negative. The prior work simply set them to be negative, learning in a positive-negative (PN) setting) based on a closed world assumption that unobserved connections are irrelevant (negative) [39, 28, 4]. We set the learning with a more realistic assumption that the unobserved connections are a mixture of positive and negative term relations (unlabeled), a.k.a. Positive-unlabeled (PU) learning, which is different from semi-supervised PN learning that assumes a known set of labeled negative samples. For the observed positive samples in PU learning, they are assumed to be selected entirely at random from the set of all positive examples [16]. This assumption facilitates and simplifies both theoretical analysis and algorithmic design since the probability of observing the label of a positive example is constant. However, estimating this probability value from the positive-unlabeled data is nontrivial. We propose a variational inference model to estimate the positive prior and incorporate it in the learning of node pair embeddings, which are then used for link prediction (hypothesis generation). We highlight the contributions of this work as follows. 1) Methodology: we propose a PU learning approach on temporal graphs. It differs from other existing approaches that learn in a conventional PN setting on static graphs. In addition, we estimate the positive prior via a variational inference model, rather than setting by prior knowledge. 2) Application: to the best of our knowledge, this is the first the application of PU learning on the HG problem, and on dynamic graphs. We applied the proposed model on real-world graphs of terms in scholarly publications published from 1945 to 2020. Each of the three graphs has around 30K nodes and 1-2 million edges. The model is trained end-to-end and shows superior performance on HG. Case studies demonstrate our new and valid findings of the positive relationship between medical terms, including newly observed terms that were not observed in training. 2 Related Work of PU Learning In PU learning, since the negative samples are not available, a classifier is trained to minimize the expected misclassification rate for both the positive and unlabeled samples. One group of study [32, 31, 33, 22] proposed a two-step solution: 1) identifying reliable negative samples, and 2) learning a classifier based on the labeled positives and reliable negatives using a (semi)-supervised technique. Another group of studies [36, 30, 26, 17, 40] considered the unlabeled samples as negatives with label noise. Hence, they place higher penalties on misclassified positive examples or tune a hyperparameter based on suitable PU evaluation metrics. 
Such a proposed framework follows the SCAR (Selected Completely at Random) assumption since the noise for negative samples is constant.
PU Learning via Risk Estimation Recently, the use of unbiased risk estimators has gained attention [12, 14, 15, 48]. The goal is to minimize the expected classification risk to obtain an empirical risk minimizer. Given an input representation $h$ (in our case the node pair representation to be learned), let $f : \mathbb{R}^d \to \mathbb{R}$ be an arbitrary decision function and $l : \mathbb{R} \times \{\pm 1\} \to \mathbb{R}$ be the loss function calculating the incurred loss $l(f(h), y)$ of predicting an output $f(h)$ when the true value is $y$. Function $l$ has a variety of forms, and is determined by application needs [29, 13]. In PN learning, the empirical risk minimizer $\hat{f}_{PN}$ is obtained by minimizing the PN risk $\hat{R}(f)$ w.r.t. a class prior of $\pi_P$:
\[ \hat{R}(f) = \pi_P \hat{R}^+_P(f) + \pi_N \hat{R}^-_N(f), \tag{1} \]
where $\pi_N = 1 - \pi_P$, $\hat{R}^+_P(f) = \frac{1}{n_P}\sum_{i=1}^{n_P} l(f(h^P_i), +1)$ and $\hat{R}^-_N(f) = \frac{1}{n_N}\sum_{i=1}^{n_N} l(f(h^N_i), -1)$. The variables $n_P$ and $n_N$ are the numbers of positive and negative samples, respectively. PU learning has to exploit the fact that $\pi_N p_N(h) = p(h) - \pi_P p_P(h)$, due to the absence of negative samples. The second part of Eq. (1) can be reformulated as:
\[ \pi_N \hat{R}^-_N(f) = \hat{R}^-_U - \pi_P \hat{R}^-_P(f), \tag{2} \]
where $R^-_U = \mathbb{E}_{h \sim p(h)}[l(f(h), -1)]$ and $R^-_P = \mathbb{E}_{h \sim p(h \mid y = +1)}[l(f(h), -1)]$. Furthermore, the classification risk can then be approximated by:
\[ \hat{R}_{PU}(f) = \pi_P \hat{R}^+_P(f) + \hat{R}^-_U(f) - \pi_P \hat{R}^-_P(f), \tag{3} \]
where $\hat{R}^-_P(f) = \frac{1}{n_P}\sum_{i=1}^{n_P} l(f(h^P_i), -1)$, $\hat{R}^-_U(f) = \frac{1}{n_U}\sum_{i=1}^{n_U} l(f(h^U_i), -1)$, and $n_U$ is the number of unlabeled data samples. To obtain an empirical risk minimizer $\hat{f}_{PU}$ for the PU learning framework, $\hat{R}_{PU}(f)$ needs to be minimized. Kiryo et al. noted that the model tends to suffer from overfitting on the training data when the model $f$ is made too flexible [29]. To alleviate this problem, the authors proposed the use of a non-negative risk estimator for PU learning:
\[ \tilde{R}_{PU}(f) = \pi_P \hat{R}^+_P(f) + \max\{0, \hat{R}^-_U - \pi_P \hat{R}^-_P(f)\}. \tag{4} \]
It works in fact by explicitly constraining the training risk of PU to be non-negative. The key challenge in practical PU learning is the unknown of prior $\pi_P$.
Prior Estimation The knowledge of the class prior $\pi_P$ is quintessential to estimating the classification risk. In PU learning for our node pairs, we represent a sample as $\{h, s, y\}$, where $h$ is the node pair representation (to be learned), $s$ indicates if the pair relationship is observed (labeled, $s = 1$) or unobserved (unlabeled, $s = 0$), and $y$ denotes the true class (positive or negative). We have only the positive samples labeled: $p(y = 1 \mid s = 1) = 1$. If $s = 0$, the sample can belong to either the positive or negative class. PU learning runs commonly with the Selected Completely at Random (SCAR) assumption, which postulates that the labeled sample set is a random subset of the positive sample set [16, 6, 8]. The probability of selecting a positive sample to observe can be denoted as $p(s = 1 \mid y = 1, h)$. The SCAR assumption means: $p(s = 1 \mid y = 1, h) = p(s = 1 \mid y = 1)$. However, it is hard to estimate $\pi_P = p(y = 1)$ with only a small set of observed samples ($s = 1$) and a large set of unobserved samples ($s = 0$) [7]. Solutions have been tried by i) estimating from a validation set of a fully labeled data set (all with $s = 1$ and knowing $y = 1$ or $-1$) [29, 10]; ii) estimating from the background knowledge; and iii) estimating directly from the PU data [16, 6, 8, 27, 14]. In this paper, we focus on estimating the prior directly from the PU data.
Specifically, unlike the other methods, we propose a scalable method based on deep variational inference to jointly estimate the prior and train the classification model end-to-end. The proposed deep variational inference uses KL-divergence to estimate the parameters of class mixture model distributions of the positive and negative class, in contrast to the method proposed in [14], which uses penalized L1 divergences to assign higher penalties to class priors that scale the positive distribution as more than the total distribution.
3 PU learning on Temporal Attributed Networks 3.1 Model Design The architecture of our Temporal Relationship Predictor (TRP) model is shown in Fig. 1. For a given pair of nodes $a_{ij} = \langle v_i, v_j \rangle$ in any temporal graphlet $G^t$, the main steps used in the training process of TRP for calculating the connectivity prediction score $p_t(a_{ij})$ are given in Algorithm 1. The testing process also uses the same Algorithm 1 (with $t = T$), calculating $p_T(a_{ij})$ for node pairs that have not been connected in $G^{T-1}$. The connectivity prediction score is calculated in line 6 of Algorithm 1 by $p_t(a_{ij}) = f_C(h^t_{a_{ij}}; \theta_C)$, where $\theta_C$ is the classification network parameter, and the embedding vector $h^t_{a_{ij}}$ for the pair $a_{ij}$ is iteratively updated in lines 1-5. These iterations of updating $h^t_{a_{ij}}$ are shown as the recurrent structure in Fig. 1 (a), followed by the classifier $f_C(\cdot; \theta_C)$. The recurrent update function $h^\tau_{a_{ij}} = f_A(h^{\tau-1}_{a_{ij}}, z^\tau_{v_i}, z^\tau_{v_j}; \theta_A)$, $\tau = 1 \ldots t$, in line 4 is shown in Fig. 1 (b), and has a Gated recurrent unit (GRU) network at its core,
\[ P = \sigma_g\big(W^z f_m(z^\tau_{v_i}, z^\tau_{v_j}) + U^P h^{\tau-1}_{a_{ij}} + b^P\big), \quad r = \sigma_g\big(W^r f_m(z^\tau_{v_i}, z^\tau_{v_j}) + U^r h^{\tau-1}_{a_{ij}} + b^r\big), \]
\[ \tilde{h}^\tau_{a_{ij}} = \sigma_{h'}\big(W f_m(z^\tau_{v_i}, z^\tau_{v_j}) + r \circ U h^{\tau-1}_{a_{ij}} + b\big), \quad h^\tau_{a_{ij}} = P \circ \tilde{h}^\tau_{a_{ij}} + (1 - P) \circ h^{\tau-1}_{a_{ij}}. \tag{5} \]
Algorithm 1: Calculate the future connection score for term pairs $a_{ij} = \langle v_i, v_j \rangle$
Input: $\mathcal{G} = \{G^1, G^2, \ldots, G^T\}$ with node features $x^t_v$, a node pair $a_{ij} = \langle v_i, v_j \rangle$ in $G^t$, and an initialized pair embedding vector $h^0_{a_{ij}}$ (e.g., by zeros)
Result: $p_t(a_{ij})$, the connectivity prediction score for the node pair $a_{ij}$
1 for $\tau \leftarrow 1 \cdots t$ do
2   Obtain the current node feature $x^\tau_v$ ($v = v_i, v_j$) of both nodes (terms) $v_i, v_j$, as well as $x^\tau_{Nr(v)}$ ($v = v_i, v_j$), the node features of sampled neighboring nodes for $v_i, v_j$;
3   Aggregate the neighborhood information of node $v = v_i, v_j$: $z^\tau_v = f_G(x^\tau_v, x^\tau_{Nr(v)}; \theta_G)$;
4   Update the embedding vector for the node pair: $h^\tau_{a_{ij}} = f_A(h^{\tau-1}_{a_{ij}}, z^\tau_{v_i}, z^\tau_{v_j}; \theta_A)$;
5 end
6 Return $p_t(a_{ij}) = f_C(h^t_{a_{ij}}; \theta_C)$
where $\circ$ denotes element-wise multiplication, $\sigma$ is a nonlinear activation function, and $f_m(\cdot)$ is an aggregation function. In this study, we use a max pool aggregation. The variables $\{W, U\}$ are the weights. The inputs to function $f_A$ include: $h^{\tau-1}_{a_{ij}}$, the embedding vector in the previous step; and $\{z^\tau_{v_i}, z^\tau_{v_j}\}$, the representations of nodes $v_i$ and $v_j$ after aggregation of their neighborhood, $z^\tau_v = f_G(x^\tau_v, x^\tau_{Nr(v)}; \theta_G)$, given in line 3. The aggregation function $f_G$ takes as input the node feature $x^\tau_v$ and the neighboring node features $x^\tau_{Nr(v)}$ and goes through the aggregation block shown in Fig. 1 (c). The aggregation network $f_G(\cdot; \theta_G)$ is implemented following GraphSAGE [21], which is one of the most popular graph neural networks for aggregating a node and its neighbors. The loss function in our problem $l(p_t(a_{ij}), y)$ evaluates the loss incurred by predicting a connectivity $p_t(a_{ij}) = f_C(h^t_{a_{ij}}; \theta_C)$ when the ground truth is $y$.
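To make the recurrent scoring procedure concrete, below is a minimal NumPy sketch of Algorithm 1 together with the GRU-style update of Eq. (5). It is only an illustrative sketch under simplifying assumptions, not the paper's TensorFlow implementation: the weight names in params, the placeholder neighborhood aggregator f_G (standing in for the GraphSAGE network), the element-wise maximum of the two node vectors used as the max-pool aggregation f_m, and the single linear classifier w_C are all assumptions introduced here.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_pair_update(h_prev, z_i, z_j, params):
    """One step of f_A (Eq. 5): GRU-style update of the pair embedding.
    `params` holds weight matrices/vectors for the update gate P, reset
    gate r, and candidate state."""
    m = np.maximum(z_i, z_j)  # f_m: element-wise max of the two node vectors
    P = sigmoid(params["Wz"] @ m + params["UP"] @ h_prev + params["bP"])
    r = sigmoid(params["Wr"] @ m + params["Ur"] @ h_prev + params["br"])
    h_tilde = np.tanh(params["W"] @ m + r * (params["U"] @ h_prev) + params["b"])
    return P * h_tilde + (1.0 - P) * h_prev

def predict_pair_score(graphlets, vi, vj, f_G, params, w_C, d=128):
    """Algorithm 1: roll the pair embedding through the graphlet sequence
    and return the connectivity score p_t(a_ij). `f_G(graphlet, v)` is
    assumed to return the aggregated node representation z_v."""
    h = np.zeros(d)                               # h^0_{a_ij}
    for G_tau in graphlets:                       # tau = 1 ... t
        z_i = f_G(G_tau, vi)                      # line 3: neighborhood aggregation
        z_j = f_G(G_tau, vj)
        h = gru_pair_update(h, z_i, z_j, params)  # line 4: pair embedding update
    return sigmoid(w_C @ h)                       # line 6: classifier f_C

A call such as predict_pair_score([G1, G2, G3], vi, vj, f_G, params, w_C) would roll the pair embedding through three graphlets and return a score in (0, 1).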
For constructing the training set for our PU learning in the dynamic graph, we first clarify the label notations. For one pair $a_{ij}$ from a graph $G^t$, its label $y^{ij}_t = 1$ (positive) if the two nodes have a link observed in $G^{t+1}$ (they have an edge $\in E^{t+1}$, observed in the next time step), i.e., $s^{ij}_t = 1$. Otherwise, when no link is observed between them in $G^{t+1}$, $a_{ij}$ is unlabeled, i.e., $s^{ij}_t = 0$, since $y^{ij}_t$ can be either 1 or -1. Since we consider an insertion-only graphlet sequence, $V^1 \subseteq V^2 \subseteq \cdots \subseteq V^T$ and $E^1 \subseteq E^2 \subseteq \cdots \subseteq E^T$, $y^{ij}_t = 1$ is maintained for all future steps after $t$ (once positive, always positive). At the final step $t = T$, all pairs with observed connections already have $y^{ij}_T = 1$; our objective is to predict the connectivity score for those pairs with $s^{ij}_T = 0$. Our loss function is defined following the unbiased risk estimator in Eq. (3),
\[ \mathcal{L}_R = \pi_P \hat{R}^+_P(f_C) + \hat{R}^-_U(f_C) - \pi_P \hat{R}^-_P(f_C), \tag{6} \]
where $\hat{R}^+_P(f_C) = \frac{1}{|H_P|}\sum_{a_{ij} \in H_P} \frac{1}{1 + \exp(p_t(a_{ij}))}$, $\hat{R}^-_U(f_C) = \frac{1}{|H_U|}\sum_{a_{ij} \in H_U} \frac{1}{1 + \exp(-p_t(a_{ij}))}$, and $\hat{R}^-_P(f_C) = \frac{1}{|H_P|}\sum_{a_{ij} \in H_P} \frac{1}{1 + \exp(-p_t(a_{ij}))}$, with the positive samples $H_P$ and unlabeled samples $H_U$, when taking $l$ as the sigmoid loss function. $\mathcal{L}_R$ can be adjusted with the non-negative constraint in Eq. (4), with the same definition of $\hat{R}^+_P(f)$, $\hat{R}^-_U$, and $\hat{R}^-_P(f)$.
3.2 Prior Estimation The positive prior $\pi_P$ is a key factor in $\mathcal{L}_R$ to be addressed. The samples we have from $\mathcal{G}$ are only positive $H_P$ and unlabeled $H_U$. Due to the absence of negative samples and of prior knowledge, we present an estimate of the class prior from the distribution of $h$, which is the pair embedding from $f_A$. Without loss of generality, we assume that the learned $h$ of all samples has a Gaussian mixture distribution of two components, one is for the positive samples, while the other is for the negative samples although they are unlabeled. The mixture distribution is parameterized by $\beta$, including the mean, co-variance matrix and mixing coefficient of each component. We learn the mixture distribution using stochastic variational inference [24] via the “Bayes by Backprop” technique [9]. The use of variational inference has been shown to have the ability to model salient properties of the data generation mechanism and avoid singularities. The idea is to find variational distribution variables $\theta^*$ that minimize the Kullback-Leibler (KL) divergence between the variational distribution $q(\beta|\theta)$ and the true posterior distribution $p(\beta|h)$:
\[ \theta^* = \arg\min_\theta \mathcal{L}_E, \tag{7} \]
where $\mathcal{L}_E = \mathrm{KL}(q(\beta|\theta) \,\|\, p(\beta|h)) = \mathrm{KL}(q(\beta|\theta) \,\|\, p(\beta)) - \mathbb{E}_{q(\beta|\theta)}[\log p(h|\beta)]$. The resulting cost function $\mathcal{L}_E$ on the right of Eq. (7) is known as the (negative) “evidence lower bound” (ELBO). The second term in $\mathcal{L}_E$ is the likelihood of $h$ fitting to the mixture Gaussian with parameter $\beta$: $\mathbb{E}_{q(\beta|\theta)}[\log p(h|\beta)]$, while the first term is referred to as the complexity cost [9]. We optimize the ELBO using stochastic gradient descent. With $\theta^*$, the positive prior is then estimated as
\[ \pi_P = q(\beta^\pi_i \mid \theta^*), \quad i = \arg\max_{k=1,2} |C_k|, \tag{8} \]
where $C_1 = \{h \in H_P : p(h|\beta_1) > p(h|\beta_2)\}$ and $C_2 = \{h \in H_P : p(h|\beta_2) > p(h|\beta_1)\}$.
3.3 Parameter Learning To train the three networks $f_A(\cdot; \theta_A)$, $f_G(\cdot; \theta_G)$, $f_C(\cdot; \theta_C)$ for connectivity score prediction, we jointly optimize $\mathcal{L} = \sum_{t=1}^{T} \mathcal{L}_{R_t} + \mathcal{L}_{E_t}$, using Adam over the model parameters. Loss $\mathcal{L}_R$ is the PU classification risk as described in Section 3.1, and $\mathcal{L}_E$ is the loss of prior estimation as described in Section 3.2. Note that during training, $y^{T}_{a_{ij}} = y^{T-1}_{a_{ij}}$ since we do not observe $G^T$ in training. This is to enforce prediction consistency.
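As a rough, self-contained illustration of how the two losses fit together, the sketch below evaluates the sigmoid-loss PU risk of Eq. (6), optionally with the non-negative correction of Eq. (4), and a simplified positive-prior estimate. The prior estimator here is an assumption-laden stand-in: it replaces the Bayes-by-Backprop variational mixture of Section 3.2 with an EM-fitted scikit-learn GaussianMixture and reads pi_P off the mixing weight of the component that claims most of the labelled positives, which is not the authors' exact procedure.

import numpy as np
from sklearn.mixture import GaussianMixture

def sigmoid_loss(score, y):
    """Sigmoid loss l(f(h), y) = 1 / (1 + exp(y * f(h)))."""
    return 1.0 / (1.0 + np.exp(y * score))

def pu_risk(scores_pos, scores_unl, pi_p, non_negative=False):
    """Unbiased PU risk of Eq. (3)/(6); optionally the non-negative
    variant of Eq. (4). `scores_pos` / `scores_unl` are arrays of raw
    scores p_t(a_ij) for positive and unlabeled pairs."""
    r_pos_plus = sigmoid_loss(scores_pos, +1).mean()    # \hat{R}^+_P
    r_pos_minus = sigmoid_loss(scores_pos, -1).mean()   # \hat{R}^-_P
    r_unl_minus = sigmoid_loss(scores_unl, -1).mean()   # \hat{R}^-_U
    neg_part = r_unl_minus - pi_p * r_pos_minus
    if non_negative:
        neg_part = max(0.0, neg_part)
    return pi_p * r_pos_plus + neg_part

def estimate_positive_prior(h_pos, h_all, seed=0):
    """Simplified stand-in for Section 3.2: fit a two-component Gaussian
    mixture to all pair embeddings, pick the component that claims most
    of the labelled positives, and use its mixing coefficient as pi_P."""
    gmm = GaussianMixture(n_components=2, covariance_type="full",
                          random_state=seed).fit(h_all)
    comp_of_pos = gmm.predict(h_pos)   # most likely component per positive pair
    k = np.argmax(np.bincount(comp_of_pos, minlength=2))
    return gmm.weights_[k]

In the full model, these two quantities play the roles of L_R and the byproduct of L_E, and would be minimized jointly with Adam rather than computed in isolation as above.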
4 Experimental Evaluation 4.1 Dataset and Experimental Setup The graphs on which we apply our model are constructed from the titles and abstracts of papers published in the biomedical fields from 1949 to 2020. The nodes are the biomedical terms, while the edges linking two nodes indicate the co-occurrence of the two terms. Note that we focus only on the co-occurrence relation and leave the polarity of the relationships for future study. To evaluate the model’s adaptivity in different scientific domains, we construct three graphs from papers relevant to COVID-19, Immunotherapy, and Virology. The graph statistics are shown in Table 1. To set up the training and testing data for the TRP model, we split the graph by year intervals (5 years for COVID-19, 10 years for Virology and Immunotherapy). We use the splits {G^1, G^2, ..., G^{T−1}} for training, and use connections newly added in the final split G^T for testing. Since the baseline models do not work on dynamic data, we train them on G^{T−1} and test on new observations made in G^T. Therefore, in testing, the positive pairs are those linked in G^T but not in G^{T−1}, i.e., E^T \ E^{T−1}, which can be new connections between nodes already existing in G^{T−1}, between a new node in G^T and another node in G^{T−1}, or between two new nodes in G^T. All other unlinked node pairs in G^T are unlabeled. At each t = 1, ..., T − 1, graph G^t is incrementally updated from G^{t−1} by adding new nodes (biomedical terms) and their links. For the node feature vector x^t_v, we extract the term description and convert it to a 300-dimensional feature vector by applying latent semantic indexing (LSI). Missing term and context attributes are filled with zero vectors. If a node already exists before time t, its context features are updated with the new information about it in discoveries and publications. In the inference (testing) stage, the new nodes in G^T are only presented with their feature vectors x^T_v. The connections to these isolated nodes are predicted by our TRP model. We implement TRP using the TensorFlow library. Each GPU-based experiment was conducted on an Nvidia 1080TI GPU. In all our experiments, we set the hidden dimension to d = 128. For each neural-network-based model, we performed a grid search over the learning rate lr ∈ {1e−2, 5e−3, 1e−3, 5e−2}. For the prior estimation, we adopted Gaussian, square-root inverse Gamma, and Dirichlet distributions to model the mean, co-variance matrix and mixing coefficient variational posteriors, respectively. 4.2 Comparison Methods and Performance Metrics We evaluate our proposed TRP model in several variants and by comparing with several competitors: 1) TRP variants: a) TRP-PN - the same framework but in the PN setting (i.e., treating all unobserved samples as negative, rather than unlabeled); b) TRP-nnPU - trained using the non-negative risk estimator in Eq. (4), as adapted to our problem in Section 3.1; and c) TRP-uPU - trained using the unbiased PU risk estimator in Eq. (3), as adapted to our problem in Section 3.1. The comparison of these variants shows the impact of the different risk estimators. 2) SOTA PU learning: the state-of-the-art (SOTA) PU learning methods taking as input h from the SOTA node embedding models, which can be based on LSI [11], node2vec [20], DynAE [19] and GraphSAGE [21]. Since node2vec learns only from the graph structure, we concatenate the node2vec embeddings with the text (term and context) attributes to obtain an enriched node representation.
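For illustration, the enriched node representations and the pair inputs fed to these baseline PU learners can be assembled as in the following small sketch; the dictionary-based interface and variable names are our own assumptions, not the authors' code.

```python
import numpy as np

def baseline_pair_representation(node2vec_emb, lsi_emb, u, v):
    """Pair representation h used by the baseline (non-TRP) PU learners.

    node2vec_emb, lsi_emb : dicts mapping a term id to its embedding vector
    u, v                  : the two terms of a candidate pair
    """
    # enrich each node's structural embedding with its textual (LSI) attributes
    node_u = np.concatenate([node2vec_emb[u], lsi_emb[u]])
    node_v = np.concatenate([node2vec_emb[v], lsi_emb[v]])
    # the pair representation is the concatenation of the two node vectors
    return np.concatenate([node_u, node_v])
```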
Unlike our TRP, which learns h for a pair of nodes, these models learn embedding vectors for individual nodes. The h of one pair for the baselines is therefore defined as the concatenation of the embedding vectors of the two nodes. We observe from the results that the concatenation of node2vec and LSI embeddings had the most competitive performance compared to the other combinations. Hence we only report the results based on concatenated embeddings for all the baseline methods. The SOTA PU learning methods used include the method of [16], which reweights all examples, and models with different estimations of the class prior, such as SAR-EM [8] (an EM-based SAR-PU method), SCAR-KM2 [38], SCAR-C [8], SCAR-TIcE [6], and pen-L1 [14]. 3) Supervised: a weaker but simpler logistic regression, also applied to h. We measure the performance using four different metrics: the Macro-F1 score (F1-M), the F1 score of observed connections (F1-S), the F1 score adapted to PU learning (F1-P) [7, 30], and the label ranking average precision score (LRAP), where the goal is to give a better rank to the positive node pairs. For all metrics used, higher values are preferred. 4.3 Evaluation Results Table 2 shows that TRP-uPU always has superior performance over all other baselines across the datasets, due to its ability to capture and utilize temporal, structural, and textual information (learning a better h) and also to its better class prior estimator. Among the TRP variants, TRP-uPU has higher or equal F1 values compared to the other two, indicating the benefit of using the unbiased risk estimator. On the LRAP scores, TRP-uPU and TRP-PN have the same performance in promoting the rank of the positive samples. Note that the results in Table 2 are from the models trained with their best learning rate, which is an important parameter that should be tuned in gradient-based optimizers, by either exhaustive search or advanced auto-machine learning [35]. To further investigate the performance of the TRP variants, we show in Figure 2 their F1-S at different learning rates when trained for 1 to 10 epochs. We notice that TRP-uPU has stable performance across different epochs and learning rates. This advantage is attributed to the unbiased PU risk estimation, which learns from only positive samples with no assumptions on the negative samples. We also found it interesting that nnPU was worse than uPU in our experimental results. However, it is not uncommon for uPU to outperform nnPU in evaluations on real-world datasets; similar observations were reported in [14, 17]. In our case, we attribute this observation to the joint optimization of the classifier loss and the prior estimation. Specifically, in the loss of uPU (Eq. (3)), π_P affects both R̂^+_P(f) and R̂^−_P(f). However, in the loss of nnPU (Eq. (4)), π_P only weights R̂^+_P(f) when R̂^−_U − π_P R̂^−_P(f) is negative. In real-world applications, especially when the true prior is unknown, the loss selection affects the estimation of π_P, and thus the final classification results. TRP-PN is not as stable as TRP-uPU, due to its strict assumption that unobserved samples are negative.
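To make the distinction between the two risk estimators concrete, the following is a minimal NumPy sketch of the empirical uPU risk of Eq. (6) and the non-negative (nnPU) correction of Eq. (4), using the sigmoid loss; the function names and batch-level means are illustrative assumptions, and in training these quantities would be computed on mini-batches with automatic differentiation.

```python
import numpy as np

def sigmoid_loss(scores, y):
    """Sigmoid loss l(f(h), y) = 1 / (1 + exp(y * f(h)))."""
    return 1.0 / (1.0 + np.exp(y * scores))

def pu_risk(scores_pos, scores_unl, pi_p, non_negative=False):
    """Empirical PU risk of Eq. (6); with non_negative=True, the nnPU correction of Eq. (4).

    scores_pos : connectivity scores p_t(a_ij) of the labeled positive pairs H_P
    scores_unl : connectivity scores of the unlabeled pairs H_U
    pi_p       : (estimated) positive class prior
    """
    r_pos_plus  = np.mean(sigmoid_loss(scores_pos, +1))  # \hat{R}^+_P(f)
    r_unl_minus = np.mean(sigmoid_loss(scores_unl, -1))  # \hat{R}^-_U(f)
    r_pos_minus = np.mean(sigmoid_loss(scores_pos, -1))  # \hat{R}^-_P(f)

    neg_part = r_unl_minus - pi_p * r_pos_minus
    if non_negative:
        neg_part = max(0.0, neg_part)  # nnPU clamps the negative-part estimate at zero
    return pi_p * r_pos_plus + neg_part  # uPU risk when non_negative=False
```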
[Figure 2: F1-S of the TRP variants (TRP-PN, TRP-uPU, TRP-nnPU) on COVID-19, Virology, and Immunotherapy, for learning rates 1e-3, 5e-3, 1e-2, and 5e-2.]
4.4 Incremental Prediction In Figure 3, we compare the performance of the top-performing PU learning methods on different year splits. We train TRP and the other baseline methods on data until t − 1 and evaluate their performance on predicting the testing pairs in t. Performance gains are expected over the incremental training process, as more and more data are used. We show F1-P, since the other metrics follow a similar pattern. We observe that the TRP models display an incremental learning curve across the three datasets and outperform all other models.
[Figure 3: F1 score per evaluation-year split (COVID-19: 2001-2020; Virology: 1980-2019; Immunotherapy) for SAR-EM, SCAR-C, Elkan, TRP-PN, TRP-nnPU, and TRP-uPU.]
4.5 Qualitative Analysis We conduct a qualitative analysis of the results obtained by TRP-uPU on the COVID-19 dataset. This investigation qualitatively checks the meaningfulness of the paired terms, e.g., can the term covid-19 be paired meaningfully with other terms? We designed two evaluations. First, we restricted our training data to years up to 2015, i.e., excluding the new terms of 2016-2020 in the COVID-19 graph, such as covid-19 and sars-cov-2. The trained model then predicts the connectivity between covid-19, as a new term, and other terms, which can also be new terms or terms existing before 2015. Since new terms like covid-19 were not in the training graph, their term features were initialized as defined in Section 4.1. The top terms predicted to be connected with covid-19 are shown in Table 3 (top), with verification in the COVID-19 graph of 2016-2020. We notice that the top terms are truly relevant to covid-19, and we do observe their connections in the evaluation graph. For instance, Cough, Fever, SARS, and Hand (washing of hands) were known to be relevant to covid-19 at the time of writing this paper. In the second evaluation, we trained the model on the full COVID-19 data (≤ 2020) and then predicted to which terms covid-19 will be connected, among those not yet connected to it in the graph up to 2020.^1 We show the results in Table 3 (bottom), and verified the top-ranked terms by manually searching recent research articles online. We did find discussions involving covid-19 and some top-ranked terms; for example, [3] discusses how covid-19 affected the market for Chromium oxide, and [23] discusses caring for people living with the Hepatitis B virus during the covid-19 spread. 4.6 Pair Embedding Visualization We further analyze the node pair embeddings learned by TRP-uPU on the COVID-19 data by visualizing them with t-SNE [34]. For clear visibility, we sample 800 pairs and visualize the learned embeddings in Figure 4. We denote with colors the observed labels in comparison with the predicted labels. (^1 The dataset used in this analysis was downloaded in early March 2020 from https://www.semanticscholar.org/cord19/download)
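A visualization of this kind can be produced with a few lines of scikit-learn; the sketch below is our own illustration, and the label encoding, sampling, and plotting choices are assumptions rather than the authors' exact procedure.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_pair_embeddings(H, labels, n_samples=800, seed=0):
    """Project sampled pair embeddings to 2-D with t-SNE and color them by label.

    H      : (n, d) array of learned pair embeddings h
    labels : (n,) integer array, e.g. 0 = unobserved negative, 1 = true positive,
             2 = unobserved positive (an illustrative encoding)
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(H), size=min(n_samples, len(H)), replace=False)
    emb_2d = TSNE(n_components=2, random_state=seed).fit_transform(H[idx])
    plt.scatter(emb_2d[:, 0], emb_2d[:, 1], c=labels[idx], s=8, cmap="coolwarm")
    plt.title("t-SNE of TRP pair embeddings")
    plt.show()
```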
We observe that the true positives (observed in G^T and correctly predicted as positives - blue) and the unobserved negatives (not observed in G^T and predicted as negatives - red) are far apart. This clear separation indicates that the learned h appropriately groups the positive and negative (predicted) pairs into distinct clusters. We also observe that the unobserved positives (not observed in G^T but predicted as positives - green) and the true positives are close. This supports our motivation for conducting PU learning: the unlabeled samples are a mixture of positive and negative samples, rather than just negative samples. We observe that several unobserved positives are relationships such as the one between Tobacco and covid-19. Although these terms are not connected in the graph we study, several articles have shown a link between them [2, 46, 37]. 5 Conclusion In this paper, we propose TRP - a temporal risk estimation PU learning strategy for predicting the relationships between biomedical terms found in texts. TRP is shown to have advantages in capturing the temporal evolution of term-term relationships and in minimizing the unbiased risk with a positive prior estimator based on variational inference. The quantitative experiments and analyses show that TRP outperforms several state-of-the-art PU learning methods. The qualitative analyses also show the effectiveness and usefulness of the proposed method. For future work, we see opportunities such as predicting the relationship strength between drugs and diseases (TRP for a regression task). We can also substitute the experimental compatibility of terms for the term co-occurrence used in this study. Acknowledgments and Disclosure of Funding The research reported in this publication was supported by funding from the Computational Bioscience Research Center (CBRC), King Abdullah University of Science and Technology (KAUST), under award number URF/1/1976-31-01, and NSFC No. 61828302. Additional revenue related to this work: a student internship at Sony Computer Science Laboratories Inc. We would like to acknowledge the great contribution of Sucheendra K. Palaniappan and The Systems Biology Institute to this work, for the initial problem definition and data collection. 6 Broader Impact TRP can be adopted in a wide range of applications involving node pairs in a graph structure: for instance, the prediction of relationships or similarities between two social beings, the prediction of items that should be purchased together, the discovery of compatibility between drugs and diseases, and many more. Our proposed model can be used to capture and analyze the temporal relationships of node pairs in an incremental dynamic graph. Moreover, it is especially useful when only samples of a given class (e.g., positive) are available, but it is uncertain whether the unlabeled samples are positive or negative. Aligned with this fact, TRP treats the unlabeled data as a mixture of negative and positive samples, rather than as all negative. Thus TRP is a flexible classification model learned from positive and unlabeled data. While there could be several applications of our proposed model, we focus on the automatic biomedical hypothesis generation (HG) task, which refers to the discovery of meaningful implicit connections between biomedical terms. The use of HG systems has many benefits, such as a faster understanding of relationships between biomedical terms like viruses, drugs, and symptoms, which is essential in the fight against diseases.
With the use of HG systems, new hypotheses with minimal uncertainty about undiscovered knowledge can be made from already published scholarly literature. Scientific research and discovery is a continuous process. Hence, our proposed model can be used to predict pairwise relationships when it is not enough to know which items are related, but also how the connections have been formed (in a dynamic process). However, there are some potential risks in generating hypotheses from biomedical papers. 1) Publications might be faulty (with faulty or wrong results), which can result in a bad estimate of future relationships. However, this is a challenging problem, as even experts in the field might be misled by faulty results. 2) Access to full publication texts (or even abstracts) is not always readily available, leading to a lack of sufficient data for a good understanding of the studied terms, and hence to inaccurate h and degraded generation performance. 3) It is hard to interpret and explain the learning process, for example, which term features the learned embedding vectors relate to, or what the contribution of neighboring terms is in the dynamic evolution process. 4) Validating the predicted future relationships often requires background knowledge or a biologist to evaluate the prediction. Scientific discovery often involves exploring new, nontraditional paths. PU learning lifts the restriction on undiscovered relations, keeping them under investigation for their probability of being positive, rather than dismissing all unobserved relations as negative. This is the key value of our work in this paper.
1. What is the focus and contribution of the paper regarding the hypothesis generation problem? 2. What are the strengths of the proposed approach, particularly in terms of its application of positive-unlabeled learning and variational inference? 3. What are the weaknesses of the paper, especially regarding the introduction of L^E and the lack of ablation studies? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper is aimed at addressing the hypothesis generation problem. It treats the HG problem as connectivity prediction by capturing the features on a temporally dynamic graph with positive-unlabeled learning. Also, the authors propose a variational inference model to estimate the positive prior to help node embedding learning. I have read the rebuttal. Strengths (1) The authors propose the Temporal Relationship Predictor model to calculate the future connection score for term pairs based on a PU learning framework. It’s the first application of PU learning to the HG problem and to dynamic graphs. (2) To acquire a more reliable positive prior, the authors propose a variational inference method, treating the learned pair embeddings as following a Gaussian mixture distribution and minimizing the KL divergence to estimate the GMM parameters \beta. (3) Experimental results show the effectiveness of the TRP method, which achieves the SOTA result in PU learning. The authors also give a detailed analysis and visualization of the results. Weaknesses (1) The introduction of L^E in Eq. (7) could be clearer. (2) It would be better to conduct some ablation studies to show the effectiveness of the prior estimate.
NIPS
1. What is the focus and contribution of the paper regarding graph evolution? 2. What are the strengths of the proposed approach, particularly in treating unobserved links? 3. What are the weaknesses of the paper, especially in terms of exposition and confidence intervals?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper contributes a method for modeling the evolution of connections in a graph that considers links that are unobserved so far as unlabeled rather than negative. It is applied to a hypothesis generation problem by modeling the cooccurrence of biomedical terms in paper titles and abstracts over the last 75 years. Strengths The treatment of links that are unobserved so far as unlabeled rather than assuming they are negative/absent makes a lot of sense. The approach of modeling the temporal evolution of the graph also seems advantageous. Weaknesses The exposition of the methodology is dense and hard to follow (Section 3). Is there a way to provide confidence intervals on the values in Table 2?
NIPS
Title COLA: Decentralized Linear Learning Abstract Decentralized machine learning is a promising emerging paradigm in view of global challenges of data ownership and privacy. We consider learning of linear classification and regression models, in the setting where the training data is decentralized over many user devices, and the learning algorithm must run on-device, on an arbitrary communication network, without a central coordinator. We propose COLA, a new decentralized training algorithm with strong theoretical guarantees and superior practical performance. Our framework overcomes many limitations of existing methods, and achieves communication efficiency, scalability, elasticity as well as resilience to changes in data and allows for unreliable and heterogeneous participating devices. 1 Introduction With the immense growth of data, decentralized machine learning has become not only attractive but a necessity. Personal data from, for example, smart phones, wearables and many other mobile devices is sensitive and exposed to a great risk of data breaches and abuse when collected by a centralized authority or enterprise. Nevertheless, many users have gotten accustomed to giving up control over their data in return for useful machine learning predictions (e.g. recommendations), which benefit from joint training on the data of all users combined in a centralized fashion. In contrast, decentralized learning aims at learning this same global machine learning model, without any central server. Instead, we only rely on distributed computations of the devices themselves, with each user’s data never leaving its device of origin. While increasing research progress has been made towards this goal, major challenges in terms of the privacy aspects as well as algorithmic efficiency, robustness and scalability remain to be addressed. Motivated by the aforementioned challenges, we make progress in this work on the important problem of training generalized linear models in a fully decentralized environment. Existing research on decentralized optimization, min_{x∈R^n} F(x), can be categorized into two main directions. The seminal line of work started by Bertsekas and Tsitsiklis in the 1980s, cf. [Tsitsiklis et al., 1986], tackles this problem by splitting the parameter vector x by coordinates/components among the devices. A second, more recent line of work, including e.g. [Nedic and Ozdaglar, 2009, Duchi et al., 2012, Shi et al., 2015, Mokhtari and Ribeiro, 2016, Nedic et al., 2017], addresses sum-structured F(x) = Σ_k F_k(x), where F_k is the local cost function of node k. This structure is closely related to empirical risk minimization in a learning setting. See e.g. [Cevher et al., 2014] for an overview of both directions. While the first line of work typically only provides convergence guarantees for smooth objectives F, the second approach often suffers from a “lack of consensus”, that is, the minimizers of {F_k}_k are typically different since the data is in general not distributed i.i.d. between devices. Contributions. In this paper, our main contribution is to propose COLA, a new decentralized framework for training generalized linear models with convergence guarantees.
Our scheme resolves both described issues in existing approaches, using techniques from primal-dual optimization, and can be seen as a generalization of COCOA [Smith et al., 2018] to the decentralized setting. More specifically, the proposed algorithm offers
- Convergence Guarantees: Linear and sublinear convergence rates are guaranteed for strongly convex and general convex objectives, respectively. Our results are free of the restrictive assumptions made by stochastic methods [Zhang et al., 2015, Wang et al., 2017], which require an i.i.d. data distribution over all devices.
- Communication Efficiency and Usability: Employing a data-local subproblem between each communication round, COLA not only achieves communication efficiency but also allows the re-use of existing efficient single-machine solvers for on-device learning. We provide practical decentralized primal-dual certificates to diagnose the learning progress.
- Elasticity and Fault Tolerance: Unlike sum-structured approaches such as SGD, COLA is provably resilient to changes in the data, in the network topology, and to participating devices disappearing, straggling or re-appearing in the network.
Our implementation is publicly available under github.com/epfml/cola . 1.1 Problem statement Setup. Many machine learning and signal processing models are formulated as a composite convex optimization problem of the form min_u l(u) + r(u), where l is a convex loss function of a linear predictor over data and r is a convex regularizer. Some cornerstone applications include e.g. logistic regression, SVMs, Lasso, and generalized linear models, each combined with or without L1, L2 or elastic-net regularization. Following the setup of [Dünner et al., 2016, Smith et al., 2018], these training problems can be mapped to either of the two following formulations, which are dual to each other:
min_{x∈R^n} [ F_A(x) := f(Ax) + Σ_i g_i(x_i) ]   (A)
min_{w∈R^d} [ F_B(w) := f*(w) + Σ_i g_i*(−A_i^⊤ w) ],   (B)
where f*, g_i* are the convex conjugates of f and g_i, respectively. Here x ∈ R^n is a parameter vector and A := [A_1; . . . ; A_n] ∈ R^{d×n} is a data matrix with column vectors A_i ∈ R^d, i ∈ [n]. We assume that f is smooth (Lipschitz gradient) and that g(x) := Σ_{i=1}^n g_i(x_i) is separable. Data partitioning. As in [Jaggi et al., 2014, Dünner et al., 2016, Smith et al., 2018], we assume the dataset A is distributed over K machines according to a partition {P_k}_{k=1}^K of the columns of A. Note that this convention maintains the flexibility of partitioning the training dataset either by samples (through mapping applications to (B), e.g. for SVMs) or by features (through mapping applications to (A), e.g. for Lasso or L1-regularized logistic regression). For x ∈ R^n, we write x_[k] ∈ R^n for the n-vector with elements (x_[k])_i := x_i if i ∈ P_k and (x_[k])_i := 0 otherwise, and analogously A_[k] ∈ R^{d×n_k} for the corresponding set of local data columns on node k, which is of size n_k = |P_k|. Network topology. We consider the task of jointly training a global machine learning model in a decentralized network of K nodes. Its connectivity is modelled by a mixing matrix W ∈ R_+^{K×K}. More precisely, W_ij ∈ [0, 1] denotes the connection strength between nodes i and j, with a non-zero weight indicating the existence of a pairwise communication link. We assume W to be symmetric and doubly stochastic, which means each row and column of W sums to one. The spectral properties of W used in this paper are that the eigenvalues of W are real, and 1 = λ_1(W) ≥ · · · ≥ λ_n(W) ≥ −1.
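As a concrete instance of formulation (A) (our own worked example; the response vector b is not part of the notation above), the Lasso takes f to be the least-squares loss and each g_i the (scaled) absolute value:

```latex
% Lasso as an instance of (A): choose
%   f(v) = \tfrac{1}{2}\|v - b\|_2^2   (smooth, Lipschitz gradient),
%   g_i(x_i) = \lambda |x_i|           (separable),
% which gives
\min_{x \in \mathbb{R}^n} \; F_A(x) \;=\; \tfrac{1}{2}\,\|Ax - b\|_2^2 \;+\; \lambda\,\|x\|_1 .
```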
Let the second largest magnitude of the eigenvalues of W be := max{| 2(W)|, | n(W)|}. 1 is called the spectral gap, a quantity well-studied in graph theory and network analysis. The spectral gap measures the level of connectivity among nodes. In the extreme case when W is diagonal, and thus an identity matrix, the spectral gap is 0 and there is no communication among nodes. To ensure convergence of decentralized algorithms, we impose the standard assumption of positive spectral gap of the network which includes all connected graphs, such as e.g. a ring or 2-D grid topology, see also Appendix B for details. 1.2 Related work Research in decentralized optimization dates back to the 1980s with the seminal work of Bertsekas and Tsitsiklis, cf. [Tsitsiklis et al., 1986]. Their framework focuses on the minimization of a (smooth) function by distributing the components of the parameter vector x among agents. In contrast, a second more recent line of work [Nedic and Ozdaglar, 2009, Duchi et al., 2012, Shi et al., 2015, Mokhtari and Ribeiro, 2016, Nedic et al., 2017, Scaman et al., 2017, 2018] considers minimization of a sum of individual local cost-functions F (x) = P i Fi(x), which are potentially non-smooth. Our work here can be seen as bridging the two scenarios to the primal-dual setting (A) and (B). While decentralized optimization is a relatively mature area in the operations research and automatic control communities, it has recently received a surge of attention for machine learning applications, see e.g. [Cevher et al., 2014]. Decentralized gradient descent (DGD) with diminishing stepsizes was proposed by [Nedic and Ozdaglar, 2009, Jakovetic et al., 2012], showing convergence to the optimal solution at a sublinear rate. [Yuan et al., 2016] further prove that DGD will converge to the neighborhood of a global optimum at a linear rate when used with a constant stepsize for strongly convex objectives. [Shi et al., 2015] present EXTRA, which offers a significant performance boost compared to DGD by using a gradient tracking technique. [Nedic et al., 2017] propose the DIGing algorithm to handle a time-varying network topology. For a static and symmetric W , DIGing recovers EXTRA by redefining the two mixing matrices in EXTRA. The dual averaging method [Duchi et al., 2012] converges at a sublinear rate with a dynamic stepsize. Under a strong convexity assumption, decomposition techniques such as decentralized ADMM (DADMM, also known as consensus ADMM) have linear convergence for time-invariant undirected graphs, if subproblems are solved exactly [Shi et al., 2014, Wei and Ozdaglar, 2013]. DADMM+ [Bianchi et al., 2016] is a different primal-dual approach with more efficient closed-form updates in each step (as compared to ADMM), and is proven to converge but without a rate. Compared to COLA, neither of DADMM and DADMM+ can be flexibly adapted to the communication-computation tradeoff due to their fixed update definition, and both require additional hyperparameters to tune in each use-case (including the ⇢ from ADMM). Notably COLA shows superior performance compared to DIGing and decentralized ADMM in our experiments. [Scaman et al., 2017, 2018] present lower complexity bounds and optimal algorithms for objectives in the form F (x) = P i Fi(x). Specifically, [Scaman et al., 2017] assumes each Fi(x) is smooth and strongly convex, and [Scaman et al., 2018] assumes each Fi(x) is Lipschitz continuous and convex. Additionally [Scaman et al., 2018] needs a boundedness constraint for the input problem. 
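Returning briefly to the network model of Section 1.1, the mixing matrix and spectral gap on which these decentralized methods (and COLA) depend can be constructed and checked numerically. The sketch below builds a Metropolis-weights mixing matrix, as used in the experiments of Section 4 (see Appendix B), for a ring of K = 16 nodes and reports the spectral gap 1 − β; the helper name and the ring example are illustrative only, not part of the released code.

```python
import numpy as np

def metropolis_weights(adjacency):
    """Metropolis-Hastings mixing matrix for an undirected graph.

    W is symmetric and doubly stochastic whenever `adjacency` is symmetric
    with a zero diagonal.
    """
    K = adjacency.shape[0]
    deg = adjacency.sum(axis=1)
    W = np.zeros((K, K))
    for i in range(K):
        for j in range(K):
            if adjacency[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

# Example: ring topology on K = 16 nodes.
K = 16
adj = np.zeros((K, K), dtype=bool)
for i in range(K):
    adj[i, (i + 1) % K] = adj[(i + 1) % K, i] = True

W = metropolis_weights(adj)
eigvals = np.linalg.eigvalsh(W)                 # real eigenvalues, ascending
beta = max(abs(eigvals[0]), abs(eigvals[-2]))   # second largest magnitude
print("doubly stochastic:", np.allclose(W.sum(0), 1), np.allclose(W.sum(1), 1))
print("spectral gap 1 - beta =", 1 - beta)
```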
In contrast, COLA can handle non-smooth and non-strongly convex objectives (A) and (B), suited to the mentioned applications in machine learning and signal processing. For smooth nonconvex models, [Lian et al., 2017] demonstrate that a variant of decentralized parallel SGD can outperform the centralized variant when the network latency is high. They further extend it to the asynchronous setting [Lian et al., 2018] and to deal with large data variance among nodes [Tang et al., 2018a] or with unreliable network links [Tang et al., 2018b]. For the decentralized, asynchronous consensus optimization, [Wu et al., 2018] extends the existing PG-EXTRA and proves convergence of the algorithm. [Sirb and Ye, 2018] proves a O(K/✏2) rate for stale and stochastic gradients. [Lian et al., 2018] achieves O(1/✏) rate and has linear speedup with respect to number of workers. In the distributed setting with a central server, algorithms of the COCOA family [Yang, 2013, Jaggi et al., 2014, Ma et al., 2015, Dünner et al., 2018]—see [Smith et al., 2018] for a recent overview— are targeted for problems of the forms (A) and (B). For convex models, COCOA has shown to significantly outperform competing methods including e.g., ADMM, distributed SGD etc. Other centralized algorithm representatives are parallel SGD variants such as [Agarwal and Duchi, 2011, Zinkevich et al., 2010] and more recent distributed second-order methods [Zhang and Lin, 2015, Reddi et al., 2016, Gargiani, 2017, Lee and Chang, 2017, Dünner et al., 2018, Lee et al., 2018]. In this paper we extend COCOA to the challenging decentralized environment—with no central coordinator—while maintaining all of its nice properties. We are not aware of any existing primaldual methods in the decentralized setting, except the recent work of [Smith et al., 2017] on federated learning for the special case of multi-task learning problems. Federated learning was first described by [Konecnỳ et al., 2015, 2016, McMahan et al., 2017] as decentralized learning for on-device learning applications, combining a global shared model with local personalized models. Current Algorithm 1: COLA: Communication-Efficient Decentralized Linear Learning 1 Input: Data matrix A distributed column-wise according to partition {Pk}Kk=1. Mixing matrix W . Aggregation parameter 2 [0, 1], and local subproblem parameter 0 as in (1). Starting point x(0) := 0 2 Rn, v(0) := 0 2 Rd, v(0)k := 0 2 Rd 8 k = 1, . . .K; 2 for t = 0, 1, 2, . . . , T do 3 for k 2 {1, 2, . . . ,K} in parallel over all nodes do 4 compute locally averaged shared vector v(t+ 1 2 ) k := PK l=1 Wklv (t) l 5 x[k] ⇥-approximate solution to subproblem (1) at v (t+ 12 ) k 6 update local variable x(t+1)[k] := x (t) [k] + x[k] 7 compute update of local estimate vk := A[k] x[k] 8 v(t+1)k := v (t+ 12 ) k + K vk 9 end 10 end federated optimization algorithms (like FedAvg in [McMahan et al., 2017]) are still close to the centralized setting. In contrast, our work provides a fully decentralized alternative algorithm for federated learning with generalized linear models. 2 The decentralized algorithm: COLA The COLA framework is summarized in Algorithm 1. For a given input problem we map it to either of the (A) or (B) formulation, and define the locally stored dataset A[k] and local part of the weight vector x[k] in node k accordingly. While v = Ax is the shared state being communicated in COCOA, this is generally unknown to a node in the fully decentralized setting. 
Instead, we maintain vk, a local estimate of v in node k, and use it as a surrogate in the algorithm. New data-local quadratic subproblems. During a computation step, node k locally solves the following minimization problem min x[k]2Rn G 0 k ( x[k];vk,x[k]), (1) where G 0 k ( x[k];vk,x[k]) := 1 K f(vk) +rf(vk) >A[k] x[k] + 0 2⌧ A[k] x[k] 2 + P i2Pk gi(xi + ( x[k])i). (2) Crucially, this subproblem only depends on the local data A[k], and local vectors vl from the neighborhood of the current node k. In contrast, in COCOA [Smith et al., 2018] the subproblem is defined in terms of a global aggregated shared vector vc := Ax 2 Rd, which is not available in the decentralized setting.2 The aggregation parameter 2 [0, 1] does not need to be tuned; in fact, we use the default := 1 throughout the paper, see [Ma et al., 2015] for a discussion. Once is settled, a safe choice of the subproblem relaxation parameter 0 is given as 0 := K. 0 can be additionally tightened using an improved Hessian subproblem (Appendix E.3). Algorithm description. At time t on node k, v(t+ 1 2 ) k is a local estimate of the shared variable after a communication step (i.e. gossip mixing). The local subproblem (1) based on this estimate is solved 2 Subproblem interpretation: Note that for the special case of := 1, 0 := K, by smoothness of f , our subproblem in (2) is an upper bound on min x[k]2Rn 1 K f(A(x+K x[k])) + P i2Pk gi(xi + ( x[k])i), (3) which is a scaled block-coordinate update of block k of the original objective (A). This assumes that we have consensus vk ⌘ Ax 8 k. For quadratic objectives (i.e. when f ⌘ k.k22 and A describes the quadratic), the equality of the formulations (2) and (3) holds. Furthermore, by convexity of f , the sum of (3) is an upper bound on the centralized updates f(x + x) + g(x + x). Both inequalities quantify the overhead of the distributed algorithm over the centralized version, see also [Yang, 2013, Ma et al., 2015, Smith et al., 2018] for the non-decentralized case. and yields x[k]. Then we calculate vk := A[k] x[k], and update the local shared vector v (t+1) k . We allow the local subproblem to be solved approximately: Assumption 1 (⇥-approximation solution). Let ⇥ 2 [0, 1] be the relative accuracy of the local solver (potentially randomized), in the sense of returning an approximate solution x[k] at each step t, s.t. E[G 0k ( x[k];vk,x[k]) G 0 k ( x ? [k];vk,x[k])] G 0k ( 0 ;vk,x[k]) G 0 k ( x ? [k];vk,x[k]) ⇥, where x?[k] 2 argmin x2RnG 0 k ( x[k];vk,x[k]), for each k 2 [K]. Elasticity to network size, compute resources and changing data—and fault tolerance. Realworld communication networks are not homogeneous and static, but greatly vary in availability, computation, communication and storage capacity. Also, the training data is subject to changes. While these issues impose significant challenges for most existing distributed training algorithms, we hereby show that COLA offers adaptivity to such dynamic and heterogenous scenarios. Scalability and elasticity in terms of availability and computational capacity can be modelled by a node-specific local accuracy parameter ⇥k in Assumption 1, as proposed by [Smith et al., 2017]. The more resources node k has, the more accurate (smaller) ⇥k we can use. The same mechanism also allows dealing with fault tolerance and stragglers, which is crucial e.g. on a network of personal devices. 
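For concreteness, one synchronous round of Algorithm 1 can be sketched as follows, with γ = 1 and σ' = γK as in the rest of the paper. The array layout, the function names, and the `local_solver` callable (whose accuracy realizes the per-node Θk from Assumption 1) are illustrative assumptions, not the released implementation.

```python
import numpy as np

def cola_round(W, A_blocks, x_blocks, v, grad_f, local_solver, tau, gamma=1.0):
    """One synchronous COLA round (Algorithm 1), gamma = 1, sigma' = gamma * K.

    A_blocks[k]: local columns A_[k];  x_blocks[k]: local coordinates x_[k];
    v[k]: node k's local estimate of Ax;  local_solver: approximate solver
    for the quadratic subproblem (1), e.g. a few passes of coordinate descent.
    """
    K = len(A_blocks)
    sigma_p = gamma * K
    v_half = W @ v                      # gossip mixing: v_k^(t+1/2) = sum_l W_kl v_l^(t)
    v_new = v_half.copy()
    for k in range(K):                  # runs in parallel over nodes in practice
        g = grad_f(v_half[k])           # gradient of f at the local estimate
        dx = local_solver(A_blocks[k], x_blocks[k], v_half[k], g, sigma_p, tau)
        x_blocks[k] = x_blocks[k] + dx  # update the local block of x
        dv = A_blocks[k] @ dx           # local change of the shared estimate
        v_new[k] = v_half[k] + gamma * K * dv
    return x_blocks, v_new
```

In a deployment, only the v-vectors are exchanged between neighboring nodes; the x-blocks and the data never leave their device of origin.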
More specifically, when a new node k joins the network, its x[k] variables are initialized to 0; when node k leaves, its x[k] is frozen, and its subproblem is not touched anymore (i.e. ⇥k = 1). Using the same approach, we can adapt to dynamic changes in the dataset—such as additions and removal of local data columns—by adjusting the size of the local weight vector accordingly. Unlike gradient-based methods and ADMM, COLA does not require parameter tuning to converge, increasing resilience to drastic changes. Extension to improved second-order subproblems. In the centralized setting, it has recently been shown that the Hessian information of f can be properly utilized to define improved local subproblems [Lee and Chang, 2017, Dünner et al., 2018]. Similar techniques can be applied to COLA as well, details on which are left in Appendix E. Extension to time-varying graphs. Similar to scalability and elasticity, it is also straightforward to extend COLA to a time varying graph under proper assumptions. If we use the time-varying model in [Nedic et al., 2017, Assumption 1], where an undirected graph is connected with B gossip steps, then changing COLA to perform B communication steps and one computation step per round still guarantees convergence. Details of this setup are provided in Appendix E. 3 On the convergence of COLA In this section we present a convergence analysis of the proposed decentralized algorithm COLA for both general convex and strongly convex objectives. In order to capture the evolution of COLA, we reformulate the original problem (A) by incorporating both x and local estimates {vk}Kk=1 minx,{vk}Kk=1 HA(x, {vk} K k=1) := 1 K PK k=1 f(vk) + g(x), (DA) such that vk = Ax, k = 1, ...,K. While the consensus is not always satisfied during Algorithm 1, the following relations between the decentralized objective and the original one (A) always hold. All proofs are deferred to Appendix C. Lemma 1. Let {vk} and x be the iterates generated during the execution of Algorithm 1. At any timestep, it holds that 1 K PK k=1 vk = Ax, (4) FA(x) HA(x, {vk}Kk=1) FA(x) + 12⌧K PK k=1 kvk Axk 2 . (5) The dual problem and duality gap of the decentralized objective (DA) are given in Lemma 2. Lemma 2 (Decentralized Dual Function and Duality Gap). The Lagrangian dual of the decentralized formation (DA) is min{wk}Kk=1 HB({wk} K k=1) := 1 K PK k=1 f ⇤(wk) + Pn i=1 g ⇤ i ⇣ A>i ( 1K PK k=1 wk) ⌘ . (DB) Given primal variables {x, {vk}Kk=1} and dual variables {wk}Kk=1, the duality gap is: GH(x, {vk}Kk=1, {wk}Kk=1) := 1K P k(f(vk)+f ⇤(wk))+g(x)+ Pn i=1 g ⇤ i 1K P k A > i wk . (6) If the dual variables are fixed to the optimality condition wk = rf(vk), then the dual variables can be omitted in the argument list of duality gap, namely GH(x, {vk}Kk=1). Note that the decentralized duality gap generalizes the duality gap of COCOA: when consensus is ensured, i.e., vk ⌘ Ax and wk ⌘ rf(Ax), the decentralized duality gap recovers that of COCOA. 3.1 Linear rate for strongly convex objectives We use the following data-dependent quantities in our main theorems k := maxx[k]2Rn A[k]x[k] 2 /kx[k]k2, max = maxk=1,...,K k, := PK k=1 knk. (7) If {gi} are strongly convex, COLA achieves the following linear rate of convergence. Theorem 1 (Strongly Convex gi). Consider Algorithm 1 with := 1 and let ⇥ be the quality of the local solver in Assumption 1. Let gi be µg-strongly convex for all i 2 [n] and let f be 1/⌧ -smooth. Let ̄ 0 := (1 + ) 0, ↵ := (1 + (1 ) 2 36(1+⇥) ) 1 and ⌘ := (1 ⇥)(1 ↵) s0 = ⌧µg ⌧µg+ max̄0 2 [0, 1]. 
(8) Then after T iterations of Algorithm 1 with 3 T 1+⌘s0⌘s0 log "(0) H "H , it holds that E ⇥ HA(x(T ), {v(T )k }Kk=1) HA(x?, {v?k}Kk=1) ⇤ "H. Furthermore, after T iterations with T 1+⌘s0⌘s0 log ✓ 1 ⌘s0 "(0) H "GH , ◆ we have the expected duality gap E[GH(x(T ), { PK k=1 Wklv (T ) l }Kk=1)] "GH . 3.2 Sublinear rate for general convex objectives Models such as sparse logistic regression, Lasso, group Lasso are non-strongly convex. For such models, we show that COLA enjoys a O(1/T ) sublinear rate of convergence for all network topologies with a positive spectral gap. Theorem 2 (Non-strongly Convex Case). Consider Algorithm 1, using a local solver of quality ⇥. Let gi(·) have L-bounded support, and let f be (1/⌧)-smooth. Let "GH > 0 be the desired duality gap. Then after T iterations where T T0 +max ⇢l 1 ⌘ m , 4L2 ̄0 ⌧"GH⌘ , T0 t0 + 2 ⌘ ⇣ 8L2 ̄0 ⌧"GH 1 ⌘ + t0 max ⇢ 0, ⇠ 1+⌘ ⌘ log 2⌧(HA(x (0),{v(0)l }) HA(x ?,{v?})) 4L2 ̄0 ⇡ and ̄ 0 := (1+ ) 0, ↵ := (1+ (1 ) 2 36(1+⇥) ) 1 and ⌘ := (1 ⇥)(1 ↵). We have that the expected duality gap satisfies E ⇥ GH(x̄, {v̄k}Kk=1, {w̄k}Kk=1) ⇤ "GH at the averaged iterate x̄ := 1T T0 PT 1 t=T0+1 x(t), and v0k := PK l=1 Wklvl and v̄k := 1 T T0 PT 1 t=T0+1 (v0k) (t) and w̄k := 1 T T0 PT 1 t=T0+1 rf((v0k)(t)). Note that the assumption of bounded support for the gi functions is not restrictive in the general convex case, as discussed e.g. in [Dünner et al., 2016]. 3"(0) H := HA(x (0), {v(0)k } K k=1) HA(x ?, {v?k} K k=1) is the initial suboptimality. 3.3 Local certificates for global accuracy Accuracy certificates for the training error are very useful for practitioners to diagnose the learning progress. In the centralized setting, the duality gap serves as such a certificate, and is available as a stopping criterion on the master node. In the decentralized setting of our interest, this is more challenging as consensus is not guaranteed. Nevertheless, we show in the following Proposition 1 that certificates for the decentralized objective (DA) can be computed from local quantities: Proposition 1 (Local Certificates). Assume gi has L-bounded support, and let Nk := {j : Wjk > 0} be the set of nodes accessible to node k. Then for any given " > 0, we have GH(x; {vk}Kk=1) ", if for all k = 1, . . . ,K the following two local conditions are satisfied: v>k rf(vk) + X i2Pk gi(xi) + g ⇤ i ( A>i rf(vk)) " 2K (9) rf(vk) 1|Nk| P j2Nk rf(vj) 2 ⇣PK k=1 n 2 k k ⌘ 1/2 1 2L p K ", (10) The local conditions (9) and (10) have a clear interpretation. The first one ensures the duality gap of the local subproblem given by vk as on the left hand side of (9) is small. The second condition (10) guarantees that consensus violation is bounded, by ensuring that the gradient of each node is similar to its neighborhood nodes. Remark 1. The resulting certificate from Proposition 1 is local, in the sense that no global vector aggregations are needed to compute it. For a certificate on the global objective, the boolean flag of each local condition (9) and (10) being satisfied or not needs to be shared with all nodes, but this requires extremely little communication. Exact values of the parameters and PK k=1 n 2 k k are not required to be known, and any valid upper bound can be used instead. We can use the local certificates to avoid unnecessary work on local problems which are already optimized, as well as to continuously quantify how newly arriving local data has to be re-optimized in the case of online training. 
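A minimal sketch of how node k might evaluate the two conditions of Proposition 1 is given below. The helper names are hypothetical, and the right-hand-side thresholds of (9) and (10), which depend on ε, K, L and the quantities σk, nk, are assumed to be supplied by the caller (any valid upper bound works, as noted in Remark 1).

```python
import numpy as np

def local_certificate(x_local, A_local, v_k, grad_f, g_list, g_conj_list,
                      neighbor_grads, tol_gap, tol_consensus):
    """Check the two local conditions of Proposition 1 on node k.

    `tol_gap` and `tol_consensus` stand in for the right-hand sides of (9)
    and (10); all argument names are illustrative.
    """
    grad = grad_f(v_k)
    # Condition (9): the local duality gap at v_k is small.
    local_gap = float(v_k @ grad)
    for i in range(A_local.shape[1]):
        local_gap += g_list[i](x_local[i]) + g_conj_list[i](-A_local[:, i] @ grad)
    ok_gap = local_gap <= tol_gap
    # Condition (10): gradient close to the neighborhood average,
    # i.e. bounded consensus violation.
    avg_neighbor_grad = np.mean(neighbor_grads, axis=0)
    ok_consensus = np.linalg.norm(grad - avg_neighbor_grad) <= tol_consensus
    return ok_gap and ok_consensus
```

Only the boolean outcome of this check needs to be shared with other nodes to certify the global objective.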
The local certificates can also be used to quantify the contribution of newly joining or departing nodes, which is particularly useful in the elastic scenario described above.

4 Experimental results

Here we illustrate the advantages of COLA in three respects: firstly, we investigate its behavior on different network topologies and with varying subproblem quality Θ; secondly, we compare COLA with state-of-the-art decentralized baselines: (1) DIGing [Nedic et al., 2017], which generalizes the gradient-tracking technique of the EXTRA algorithm [Shi et al., 2015], and (2) Decentralized ADMM (a.k.a. consensus ADMM), which extends the classical ADMM (Alternating Direction Method of Multipliers) method [Boyd et al., 2011] to the decentralized setting [Shi et al., 2014, Wei and Ozdaglar, 2013]; finally, we show that COLA works in the challenging unreliable-network environment where each node has a certain chance to drop out of the network. We implement all algorithms in PyTorch with an MPI backend. The decentralized network topology is simulated by running one thread per graph node, on a 2×12-core Intel Xeon CPU E5-2680 v3 server with 256 GB RAM. Table 1 describes the datasets4 used in the experiments. For Lasso, the columns of A are features. For ridge regression, the columns are features and samples for COLA primal and COLA dual, respectively. The order of columns is shuffled once before being distributed across the nodes. Due to space limits, details on the experimental configurations are included in Appendix D.

Effect of approximation quality Θ. We study the convergence behavior in terms of the approximation quality Θ. Here, Θ is controlled by the number of data passes on subproblem (1) per node. Figure 1 shows that increasing the number of local passes always reduces the number of iterations (i.e., communication rounds) for COLA. However, given a fixed network bandwidth, this leads to a clear trade-off in overall wall-clock time, reflecting the cost of both communication and computation: more local passes mean fewer communication rounds, but also more time spent solving subproblems. These observations suggest that one can adjust Θ for each node to handle system heterogeneity, as discussed at the end of Section 2.

Effect of graph topology. Fixing K = 16, we test the performance of COLA on 5 different topologies: ring, 2-connected cycle, 3-connected cycle, 2D grid and complete graph. The mixing matrix W is given by Metropolis weights for all test cases (details in Appendix B). Convergence curves are plotted in Figure 3. One can observe that COLA converges monotonically for all topologies and that, especially when all nodes in the network are equal, a smaller β leads to a faster convergence rate. This is consistent with the intuition that the spectral gap 1 − β measures the connectivity level of the topology.

Superior performance compared to baselines. We compare COLA with DIGing and D-ADMM for strongly convex and general convex problems. For general convex objectives, we use Lasso regression with λ = 10⁻⁴ on the webspam dataset; for the strongly convex objective, we use Ridge regression with λ = 10⁻⁵ on the URL reputation dataset. For Ridge regression, we can map COLA to both the primal and the dual problem. Figure 2 traces the results in terms of log-suboptimality. One can observe that for both general and strongly convex objectives, COLA significantly outperforms DIGing and decentralized ADMM in terms of the number of communication rounds and computation time.
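As noted under "Effect of approximation quality Θ" above, Θ is controlled by how many local data passes are spent on subproblem (1). A minimal sketch of such a local solver for the Lasso mapping (gi = λ|·| in formulation (A)) is given below; it is an illustrative proximal coordinate-descent sketch under these assumptions, not the solver used in the reported experiments.

```python
import numpy as np

def soft_threshold(z, thresh):
    return np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)

def local_lasso_subproblem(A_loc, x_loc, v_k, grad_f_vk, lam, tau, sigma_p, n_passes=1):
    """Approximately solve the local subproblem (1) for g_i = lam * |.| with
    `n_passes` of coordinate descent; more passes means a better (smaller)
    Theta_k.  The constant term (1/K) f(v_k) is dropped since it does not
    affect the minimizer.
    """
    d, n_k = A_loc.shape
    dx = np.zeros(n_k)
    r = np.zeros(d)                          # r = A_loc @ dx, maintained incrementally
    for _ in range(n_passes):
        for i in range(n_k):
            a = A_loc[:, i]
            h = (sigma_p / tau) * (a @ a)    # coordinate curvature
            if h == 0.0:
                continue
            q = a @ grad_f_vk + (sigma_p / tau) * (a @ r)   # coordinate gradient
            t_old = x_loc[i] + dx[i]
            t_new = soft_threshold(t_old - q / h, lam / h)
            delta = t_new - t_old
            dx[i] += delta
            r += delta * a
    return dx
```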
While DIGing and D-ADMM need parameter tuning to ensure convergence and efficiency, COLA is much easier to deploy as it is parameter free. Additionally, the convergence guarantees of ADMM rely on exact subproblem solvers, whereas an inexact solver is allowed for COLA.

4 https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/

Fault tolerance to unreliable nodes. Assume each node of the network only has a chance p of participating in each round. If a new node k joins the network, its local variables are initialized as x[k] = 0; if node k leaves the network, x[k] is frozen with Θk = 1. All remaining nodes dynamically adjust their weights to maintain the doubly stochastic property of W. We run COLA on such unreliable networks for different values of p and show the results in Figure 4. First, one can observe that for all p > 0 the suboptimality decreases monotonically as COLA progresses. It is also clear from the results that a smaller dropout rate (a larger p) leads to faster convergence of COLA.

5 Discussion and conclusions

In this work we have studied training generalized linear models in the fully decentralized setting. We proposed a communication-efficient decentralized framework, termed COLA, which is free of parameter tuning. We proved that it has a sublinear rate of convergence for general convex problems, allowing e.g. L1 regularizers, and a linear rate of convergence for strongly convex objectives. Our scheme offers primal-dual certificates which are useful in the decentralized setting. We demonstrated that COLA offers full adaptivity to heterogeneous distributed systems on arbitrary network topologies, is adaptive to changes in network size and data, and offers fault tolerance and elasticity. Future research directions include improved subproblems, extensions to network topologies given by directed graphs, and recent communication compression schemes [Stich et al., 2018].

Acknowledgments. We thank Prof. Bharat K. Bhargava for fruitful discussions. We acknowledge funding from SNSF grant 200021_175796, Microsoft Research JRC project ‘Coltrain’, as well as a Google Focused Research Award.
1. What is the main contribution of the paper regarding decentralized learning of linear models?
2. What are the strengths and weaknesses of the proposed algorithm compared to prior works like CoCoA?
3. Do you have any questions or concerns about the technical aspects of the paper, such as convergence rates, network topology, and dependence on the spectral gap?
4. How does the reviewer assess the quality and usefulness of the presented experiments?
5. Are there any suggestions for improving the presentation and clarity of the paper, such as providing concrete examples, interpreting the form of the subproblem, and discussing the communication cost?
6. What is the significance of the derived convergence rates for the decentralized setting, and how do they contribute to the literature of decentralized optimization?
Review
Review This paper deals with learning linear models in a decentralized setting, where each node holds a subset of the dataset (features or data points, depending on the application) and communication can only occur between neighboring nodes in a connected network graph. The authors extend the CoCoA algorithm, originally designed for the distributed (master/slave) setting. They provide convergence rates as well as numerical comparisons. The authors should state more clearly that they are extending CoCoA to the decentralized setting. The adaptation of the setup, the local subproblems and the algorithm itself are fairly direct by restricting the information accessible by each node to its direct neighbors (instead of having access to information from all nodes). Despite the limited originality in the algorithm design, the main technical contribution is the derivation of convergence rates for the decentralized setting. This non-trivial result makes the paper an interesting contribution to the literature of decentralized optimization. In particular, the resulting algorithm has several properties (inherited from CoCoA) that are useful in the decentralized setting. I have a few technical questions: 1/ Can you clarify and comment the dependence of the network topology (spectral gap) on the convergence rates of Thm 1-3? A typical dependence is on the inverse of the square root of the spectral gap (see e.g. [1, 2]). In Thm 2, why is there no dependence on the spectral gap? 2/ CoCoA achieves linear convergence even when only g_i is strongly convex (not f), but this decentralized version only achieves sublinear convergence in this case (Theorem 2). Is there a fundamental reason for this or is it due to the proof technique? 3/ The convergence guarantees hold for the quantity \bar{x}, which is an average over several iterations. As each node holds a block of the iterate x^(t) at each iteration, it looks like making the model \bar{x} available to all nodes as learning progresses (so they can use the best current model to make predictions) could require a lot of additional communication. Is there a way around this? Regarding the experiments, experimenting with only 16 nodes is quite disappointing for the decentralized setting, which is largely motivated by the scalability to large networks. It would be much more convincing if the authors can show that their algorithm still behaves well compared to competitors on networks with many more nodes (this can be done by simulation if needed). The clarity of the presentation can be improved: - The authors remain very elusive on what the matrix A represents in practice. They do not explicitly mention that depending on the task, the dataset must be split sample-wise or feature-wise (sometimes, both are possible if both the the primal and the dual match the assumptions). Giving a few concrete examples would really help the reader understand better the possible application scenarios. - This aspect is also very unclear in the experiments. For each task, what is distributed? And how are features/samples distributed across nodes? - It would be useful to give an interpretation of the form of the subproblem (3). Other comments/questions: - Lines 33-34: most decentralized optimization algorithms for sum-structured problems do not rely on an i.i.d. assumption or completely fail when it is violated (but they can of course be slower). 
- Line 139: the dependence of the communication cost on d (which can be the number of features or the total number of samples) should be made clear. Depending on the task and dataset, this dependence on d may make the algorithm quite inefficient in communication. - How was the rho parameter of ADMM set in the experiments? - It is false to argue that the proposed algorithm does not have any parameter to select. At the very least, one should carefully choose \Theta, the subproblem approximation parameter. There may be additional parameters for the local solver. Typos: - Before eq (2): "Let set" --> "Let the set" - Line 132: "Appendix E.2" --> "Appendix E.1" - Line 178: missing tilde on G - Line 180: we recovers References: [1] Duchi et al. Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling. IEEE TAC 2012. [2] Colin et al. Gossip Dual Averaging for Decentralized Optimization of Pairwise Functions. ICML 2016. ============ After Rebuttal ============ Thanks for the clarifications and updated results.
NIPS
1. What is the focus of the paper regarding learning a regularized linear model over a graph of agents?
2. What are the strengths and weaknesses of the proposed optimization algorithm compared to prior works such as Condat-Vu's algorithm and decentralized algorithms like DADMM+?
3. How does the reviewer assess the convergence rates provided in the paper, particularly in Theorem 1 and Theorem 2?
4. What minor comments or suggestions does the reviewer have regarding the paper's content and presentation?
Review
Review

The authors consider the problem of learning a (regularized) linear model over a graph of agents. At each iteration, the agents exchange with their neighbors and perform an (approximate) proximal gradient step. They provide linear and sublinear rates depending on the strong convexity of the functions. Despite interesting ideas, including the rate derivations, this paper lacks connections with the literature and needs some clarifications in its present form.

[Distributed Setup] The distributed setup is rather well explained, although it is not exactly standard, as the data matrix is split by the *columns* (i.e. the features) among the agents; usually, the lines/examples are split. This difference should be more clearly mentioned and motivated in a practical context (near l64 page 2). As the simulations are performed on a single machine, one would expect a justification for considering this kind of data splitting and network topology.
* Then, the communications are done using a doubly-stochastic mixing matrix, as commonly accepted in gossip theory. The paragraph before Assumption 1 is unclear: it is well known that as soon as an (undirected) graph is connected, the spectral gap is positive; stating this would make Assumption 1 more readable.

[Optimization algorithm] As the authors mention (Sec 1.2, e.g. 108), the proposed algorithm falls into the class of primal-dual methods.
* At first glance, it seems close (as the matrix A cannot be inverted) to Condat-Vu's algorithm (Condat, L. (2013). A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms. Journal of Optimization Theory and Applications, 158(2), 460-479). Decentralized algorithms based on this algorithm were proposed, see e.g. Bianchi, P., Hachem, W., & Iutzeler, F. (2016). A coordinate descent primal-dual algorithm and application to distributed asynchronous optimization. IEEE Transactions on Automatic Control, 61(10), 2947-2957. The synchronous version (DADMM+ in their paper) indeed looks close to what you propose and should definitely be compared to the proposed method.
* In Line 5 of the algorithm you approximately solve problem (2), which leads to two concerns: (i) why: minimizing (2) seems equivalent to a proximity operation on the corresponding g_i at a point that is a gradient step on the coordinates of f with stepsize tau/sigma'; this seems to be computable exactly quite cheaply, especially for the Lasso and Ridge problems considered in the experiments; (ii) how: I don't get how to obtain approximate solutions of this problem apart from not computing some coordinates, but then theta can be arbitrarily bad and so are the rates.
* I do not see a discussion on how to choose gamma. Furthermore, it seems rather explicit that the gradient step is performed with the inverse of the associated Lipschitz constant.

[Convergence rates] The derivations seem valid but their significance is somewhat obfuscated.
* In Theorem 1, the assumption that g has bounded support (or rather that the iterates stay in a bounded subspace) is rather strong, but OK; what bothers me is that (i) it is hard to decipher the rate; a simple case with theta = 0, gamma = 1 might help. In addition, the assumption on rho would also require some explanation.
* In Theorem 2, it looks like we actually have the usual conditioning/T rate, but clarifications are also welcome.
* In Theorem 3, two conditions bother me: (i) mu tau >= D_1 seems like a minimal conditioning of the problem to have a linear rate; (ii) the condition on beta looks like a condition on how well connected the graph has to be to get the linear rate. Again, this is not discussed.

Minor comments:
* l91 "bridgin"G
* l232: the authors say "CoLa is free of tuning parameters", but the gradient step is actually natively performed with the inverse Lipschitz constant, which is a possibility in most algorithms of the literature (and can be made local, as extended in Apx E1). Furthermore, there is also the parameter gamma to tune, theta to adapt...

============ After Rebuttal ============
After reading the author feedback and the other reviews, I upgraded my score to 6. The main reasons for the upgrade are:
* the improvement of their main theorem to any connected graph
* a misconception on my side about the difficulty of pb. (2)
However, it still seems important to me to clarify the non-triviality of (2), to mention the relations between CoCoA and ADMM, and to clarify that gamma and Theta are not hyper-parameters (cf. l. 12-14 of the feedback).
NIPS
Title COLA: Decentralized Linear Learning Abstract Decentralized machine learning is a promising emerging paradigm in view of global challenges of data ownership and privacy. We consider learning of linear classification and regression models, in the setting where the training data is decentralized over many user devices, and the learning algorithm must run ondevice, on an arbitrary communication network, without a central coordinator. We propose COLA, a new decentralized training algorithm with strong theoretical guarantees and superior practical performance. Our framework overcomes many limitations of existing methods, and achieves communication efficiency, scalability, elasticity as well as resilience to changes in data and allows for unreliable and heterogeneous participating devices. 1 Introduction With the immense growth of data, decentralized machine learning has become not only attractive but a necessity. Personal data from, for example, smart phones, wearables and many other mobile devices is sensitive and exposed to a great risk of data breaches and abuse when collected by a centralized authority or enterprise. Nevertheless, many users have gotten accustomed to giving up control over their data in return for useful machine learning predictions (e.g. recommendations), which benefits from joint training on the data of all users combined in a centralized fashion. In contrast, decentralized learning aims at learning this same global machine learning model, without any central server. Instead, we only rely on distributed computations of the devices themselves, with each user’s data never leaving its device of origin. While increasing research progress has been made towards this goal, major challenges in terms of the privacy aspects as well as algorithmic efficiency, robustness and scalability remain to be addressed. Motivated by aforementioned challenges, we make progress in this work addressing the important problem of training generalized linear models in a fully decentralized environment. Existing research on decentralized optimization, minx2Rn F (x), can be categorized into two main directions. The seminal line of work started by Bertsekas and Tsitsiklis in the 1980s, cf. [Tsitsiklis et al., 1986], tackles this problem by splitting the parameter vector x by coordinates/components among the devices. A second more recent line of work including e.g. [Nedic and Ozdaglar, 2009, Duchi et al., 2012, Shi et al., 2015, Mokhtari and Ribeiro, 2016, Nedic et al., 2017] addresses sum-structured F (x) = P k Fk(x) where Fk is the local cost function of node k. This structure is closely related to empirical risk minimization in a learning setting. See e.g. [Cevher et al., 2014] for an overview of both directions. While the first line of work typically only provides convergence guarantees for smooth objectives F , the second approach often suffers from a “lack of consensus”, that is, the minimizers of {Fk}k are typically different since the data is not distributed i.i.d. between devices in general. ⇤These two authors contributed equally 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. Contributions. In this paper, our main contribution is to propose COLA, a new decentralized framework for training generalized linear models with convergence guarantees. 
Our scheme resolves both described issues in existing approaches, using techniques from primal-dual optimization, and can be seen as a generalization of COCOA [Smith et al., 2018] to the decentralized setting. More specifically, the proposed algorithm offers - Convergence Guarantees: Linear and sublinear convergence rates are guaranteed for strongly convex and general convex objectives respectively. Our results are free of the restrictive assumptions made by stochastic methods [Zhang et al., 2015, Wang et al., 2017], which requires i.i.d. data distribution over all devices. - Communication Efficiency and Usability: Employing a data-local subproblem between each communication round, COLA not only achieves communication efficiency but also allows the re-use of existing efficient single-machine solvers for on-device learning. We provide practical decentralized primal-dual certificates to diagnose the learning progress. - Elasticity and Fault Tolerance: Unlike sum-structured approaches such as SGD, COLA is provably resilient to changes in the data, in the network topology, and participating devices disappearing, straggling or re-appearing in the network. Our implementation is publicly available under github.com/epfml/cola . 1.1 Problem statement Setup. Many machine learning and signal processing models are formulated as a composite convex optimization problem of the form min u l(u) + r(u), where l is a convex loss function of a linear predictor over data and r is a convex regularizer. Some cornerstone applications include e.g. logistic regression, SVMs, Lasso, generalized linear models, each combined with or without L1, L2 or elastic-net regularization. Following the setup of [Dünner et al., 2016, Smith et al., 2018], these training problems can be mapped to either of the two following formulations, which are dual to each other min x2Rn ⇥ FA(x) := f(Ax) + P i gi(xi) ⇤ (A) min w2Rd ⇥ FB(w) := f ⇤(w) + P i g ⇤ i ( A>i w) ⇤ , (B) where f⇤, g⇤i are the convex conjugates of f and gi, respectively. Here x 2 Rn is a parameter vector and A := [A1; . . . ;An] 2 Rd⇥n is a data matrix with column vectors Ai 2 Rd, i 2 [n]. We assume that f is smooth (Lipschitz gradient) and g(x) := Pn i=1 gi(xi) is separable. Data partitioning. As in [Jaggi et al., 2014, Dünner et al., 2016, Smith et al., 2018], we assume the dataset A is distributed over K machines according to a partition {Pk}Kk=1 of the columns of A. Note that this convention maintains the flexibility of partitioning the training dataset either by samples (through mapping applications to (B), e.g. for SVMs) or by features (through mapping applications to (A), e.g. for Lasso or L1-regularized logistic regression). For x 2 Rn, we write x[k] 2 Rn for the n-vector with elements (x[k])i := xi if i 2 Pk and (x[k])i := 0 otherwise, and analogously A[k] 2 Rd⇥nk for the corresponding set of local data columns on node k, which is of size nk = |Pk|. Network topology. We consider the task of joint training of a global machine learning model in a decentralized network of K nodes. Its connectivity is modelled by a mixing matrix W 2 RK⇥K+ . More precisely, Wij 2 [0, 1] denotes the connection strength between nodes i and j, with a non-zero weight indicating the existence of a pairwise communication link. We assume W to be symmetric and doubly stochastic, which means each row and column of W sums to one. The spectral properties of W used in this paper are that the eigenvalues of W are real, and 1 = 1(W) · · · n(W) 1. 
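As a quick sanity check of these requirements on W (symmetry, double stochasticity, real eigenvalues bounded by one in magnitude, and the positive spectral gap assumed for connected graphs, as defined just below), the following sketch verifies them numerically for a small hand-written ring matrix. The helper name and the example matrix are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def check_mixing_matrix(W, tol=1e-10):
    """Illustrative check of the stated requirements on W: symmetric, doubly
    stochastic, real eigenvalues in [-1, 1], and a positive spectral gap."""
    assert np.allclose(W, W.T, atol=tol), "W must be symmetric"
    assert np.allclose(W.sum(axis=0), 1.0, atol=tol), "columns must sum to 1"
    assert np.allclose(W.sum(axis=1), 1.0, atol=tol), "rows must sum to 1"
    ev = np.sort(np.linalg.eigvalsh(W))           # real, since W is symmetric
    assert ev[0] >= -1.0 - tol and np.isclose(ev[-1], 1.0, atol=tol)
    second_largest_magnitude = max(abs(ev[-2]), abs(ev[0]))
    spectral_gap = 1.0 - second_largest_magnitude
    assert spectral_gap > tol, "connected graphs have a positive spectral gap"
    return spectral_gap

# Hand-written example: a 4-node ring with self-weight 1/2 and neighbour weight 1/4.
W_ring = np.array([[0.50, 0.25, 0.00, 0.25],
                   [0.25, 0.50, 0.25, 0.00],
                   [0.00, 0.25, 0.50, 0.25],
                   [0.25, 0.00, 0.25, 0.50]])
print("spectral gap:", check_mixing_matrix(W_ring))   # 0.5 for this example
```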
Let the second largest magnitude of the eigenvalues of W be := max{| 2(W)|, | n(W)|}. 1 is called the spectral gap, a quantity well-studied in graph theory and network analysis. The spectral gap measures the level of connectivity among nodes. In the extreme case when W is diagonal, and thus an identity matrix, the spectral gap is 0 and there is no communication among nodes. To ensure convergence of decentralized algorithms, we impose the standard assumption of positive spectral gap of the network which includes all connected graphs, such as e.g. a ring or 2-D grid topology, see also Appendix B for details. 1.2 Related work Research in decentralized optimization dates back to the 1980s with the seminal work of Bertsekas and Tsitsiklis, cf. [Tsitsiklis et al., 1986]. Their framework focuses on the minimization of a (smooth) function by distributing the components of the parameter vector x among agents. In contrast, a second more recent line of work [Nedic and Ozdaglar, 2009, Duchi et al., 2012, Shi et al., 2015, Mokhtari and Ribeiro, 2016, Nedic et al., 2017, Scaman et al., 2017, 2018] considers minimization of a sum of individual local cost-functions F (x) = P i Fi(x), which are potentially non-smooth. Our work here can be seen as bridging the two scenarios to the primal-dual setting (A) and (B). While decentralized optimization is a relatively mature area in the operations research and automatic control communities, it has recently received a surge of attention for machine learning applications, see e.g. [Cevher et al., 2014]. Decentralized gradient descent (DGD) with diminishing stepsizes was proposed by [Nedic and Ozdaglar, 2009, Jakovetic et al., 2012], showing convergence to the optimal solution at a sublinear rate. [Yuan et al., 2016] further prove that DGD will converge to the neighborhood of a global optimum at a linear rate when used with a constant stepsize for strongly convex objectives. [Shi et al., 2015] present EXTRA, which offers a significant performance boost compared to DGD by using a gradient tracking technique. [Nedic et al., 2017] propose the DIGing algorithm to handle a time-varying network topology. For a static and symmetric W , DIGing recovers EXTRA by redefining the two mixing matrices in EXTRA. The dual averaging method [Duchi et al., 2012] converges at a sublinear rate with a dynamic stepsize. Under a strong convexity assumption, decomposition techniques such as decentralized ADMM (DADMM, also known as consensus ADMM) have linear convergence for time-invariant undirected graphs, if subproblems are solved exactly [Shi et al., 2014, Wei and Ozdaglar, 2013]. DADMM+ [Bianchi et al., 2016] is a different primal-dual approach with more efficient closed-form updates in each step (as compared to ADMM), and is proven to converge but without a rate. Compared to COLA, neither of DADMM and DADMM+ can be flexibly adapted to the communication-computation tradeoff due to their fixed update definition, and both require additional hyperparameters to tune in each use-case (including the ⇢ from ADMM). Notably COLA shows superior performance compared to DIGing and decentralized ADMM in our experiments. [Scaman et al., 2017, 2018] present lower complexity bounds and optimal algorithms for objectives in the form F (x) = P i Fi(x). Specifically, [Scaman et al., 2017] assumes each Fi(x) is smooth and strongly convex, and [Scaman et al., 2018] assumes each Fi(x) is Lipschitz continuous and convex. Additionally [Scaman et al., 2018] needs a boundedness constraint for the input problem. 
In contrast, COLA can handle non-smooth and non-strongly convex objectives (A) and (B), suited to the mentioned applications in machine learning and signal processing. For smooth nonconvex models, [Lian et al., 2017] demonstrate that a variant of decentralized parallel SGD can outperform the centralized variant when the network latency is high. They further extend it to the asynchronous setting [Lian et al., 2018] and to deal with large data variance among nodes [Tang et al., 2018a] or with unreliable network links [Tang et al., 2018b]. For the decentralized, asynchronous consensus optimization, [Wu et al., 2018] extends the existing PG-EXTRA and proves convergence of the algorithm. [Sirb and Ye, 2018] proves a O(K/✏2) rate for stale and stochastic gradients. [Lian et al., 2018] achieves O(1/✏) rate and has linear speedup with respect to number of workers. In the distributed setting with a central server, algorithms of the COCOA family [Yang, 2013, Jaggi et al., 2014, Ma et al., 2015, Dünner et al., 2018]—see [Smith et al., 2018] for a recent overview— are targeted for problems of the forms (A) and (B). For convex models, COCOA has shown to significantly outperform competing methods including e.g., ADMM, distributed SGD etc. Other centralized algorithm representatives are parallel SGD variants such as [Agarwal and Duchi, 2011, Zinkevich et al., 2010] and more recent distributed second-order methods [Zhang and Lin, 2015, Reddi et al., 2016, Gargiani, 2017, Lee and Chang, 2017, Dünner et al., 2018, Lee et al., 2018]. In this paper we extend COCOA to the challenging decentralized environment—with no central coordinator—while maintaining all of its nice properties. We are not aware of any existing primaldual methods in the decentralized setting, except the recent work of [Smith et al., 2017] on federated learning for the special case of multi-task learning problems. Federated learning was first described by [Konecnỳ et al., 2015, 2016, McMahan et al., 2017] as decentralized learning for on-device learning applications, combining a global shared model with local personalized models. Current Algorithm 1: COLA: Communication-Efficient Decentralized Linear Learning 1 Input: Data matrix A distributed column-wise according to partition {Pk}Kk=1. Mixing matrix W . Aggregation parameter 2 [0, 1], and local subproblem parameter 0 as in (1). Starting point x(0) := 0 2 Rn, v(0) := 0 2 Rd, v(0)k := 0 2 Rd 8 k = 1, . . .K; 2 for t = 0, 1, 2, . . . , T do 3 for k 2 {1, 2, . . . ,K} in parallel over all nodes do 4 compute locally averaged shared vector v(t+ 1 2 ) k := PK l=1 Wklv (t) l 5 x[k] ⇥-approximate solution to subproblem (1) at v (t+ 12 ) k 6 update local variable x(t+1)[k] := x (t) [k] + x[k] 7 compute update of local estimate vk := A[k] x[k] 8 v(t+1)k := v (t+ 12 ) k + K vk 9 end 10 end federated optimization algorithms (like FedAvg in [McMahan et al., 2017]) are still close to the centralized setting. In contrast, our work provides a fully decentralized alternative algorithm for federated learning with generalized linear models. 2 The decentralized algorithm: COLA The COLA framework is summarized in Algorithm 1. For a given input problem we map it to either of the (A) or (B) formulation, and define the locally stored dataset A[k] and local part of the weight vector x[k] in node k accordingly. While v = Ax is the shared state being communicated in COCOA, this is generally unknown to a node in the fully decentralized setting. 
Instead, we maintain vk, a local estimate of v in node k, and use it as a surrogate in the algorithm. New data-local quadratic subproblems. During a computation step, node k locally solves the following minimization problem min x[k]2Rn G 0 k ( x[k];vk,x[k]), (1) where G 0 k ( x[k];vk,x[k]) := 1 K f(vk) +rf(vk) >A[k] x[k] + 0 2⌧ A[k] x[k] 2 + P i2Pk gi(xi + ( x[k])i). (2) Crucially, this subproblem only depends on the local data A[k], and local vectors vl from the neighborhood of the current node k. In contrast, in COCOA [Smith et al., 2018] the subproblem is defined in terms of a global aggregated shared vector vc := Ax 2 Rd, which is not available in the decentralized setting.2 The aggregation parameter 2 [0, 1] does not need to be tuned; in fact, we use the default := 1 throughout the paper, see [Ma et al., 2015] for a discussion. Once is settled, a safe choice of the subproblem relaxation parameter 0 is given as 0 := K. 0 can be additionally tightened using an improved Hessian subproblem (Appendix E.3). Algorithm description. At time t on node k, v(t+ 1 2 ) k is a local estimate of the shared variable after a communication step (i.e. gossip mixing). The local subproblem (1) based on this estimate is solved 2 Subproblem interpretation: Note that for the special case of := 1, 0 := K, by smoothness of f , our subproblem in (2) is an upper bound on min x[k]2Rn 1 K f(A(x+K x[k])) + P i2Pk gi(xi + ( x[k])i), (3) which is a scaled block-coordinate update of block k of the original objective (A). This assumes that we have consensus vk ⌘ Ax 8 k. For quadratic objectives (i.e. when f ⌘ k.k22 and A describes the quadratic), the equality of the formulations (2) and (3) holds. Furthermore, by convexity of f , the sum of (3) is an upper bound on the centralized updates f(x + x) + g(x + x). Both inequalities quantify the overhead of the distributed algorithm over the centralized version, see also [Yang, 2013, Ma et al., 2015, Smith et al., 2018] for the non-decentralized case. and yields x[k]. Then we calculate vk := A[k] x[k], and update the local shared vector v (t+1) k . We allow the local subproblem to be solved approximately: Assumption 1 (⇥-approximation solution). Let ⇥ 2 [0, 1] be the relative accuracy of the local solver (potentially randomized), in the sense of returning an approximate solution x[k] at each step t, s.t. E[G 0k ( x[k];vk,x[k]) G 0 k ( x ? [k];vk,x[k])] G 0k ( 0 ;vk,x[k]) G 0 k ( x ? [k];vk,x[k]) ⇥, where x?[k] 2 argmin x2RnG 0 k ( x[k];vk,x[k]), for each k 2 [K]. Elasticity to network size, compute resources and changing data—and fault tolerance. Realworld communication networks are not homogeneous and static, but greatly vary in availability, computation, communication and storage capacity. Also, the training data is subject to changes. While these issues impose significant challenges for most existing distributed training algorithms, we hereby show that COLA offers adaptivity to such dynamic and heterogenous scenarios. Scalability and elasticity in terms of availability and computational capacity can be modelled by a node-specific local accuracy parameter ⇥k in Assumption 1, as proposed by [Smith et al., 2017]. The more resources node k has, the more accurate (smaller) ⇥k we can use. The same mechanism also allows dealing with fault tolerance and stragglers, which is crucial e.g. on a network of personal devices. 
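To make the computation and communication steps just described concrete, here is a minimal single-machine sketch of one COLA round from the viewpoint of node k, instantiated for Lasso (f(v) = ½‖v − b‖², so τ = 1, and gᵢ = λ|·|), with γ = 1 and the safe choice σ′ = γK mentioned in the text. The local subproblem (1) is solved approximately by a few passes of proximal coordinate descent, mirroring how Θ is controlled in the experiments; the choice of solver, the soft-thresholding step and all names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def soft_threshold(u, t):
    return np.sign(u) * max(abs(u) - t, 0.0)

def local_subproblem_cd(A_k, x_k, grad_f_vk, lam, sigma_p, tau, n_passes):
    """Approximately solve the local quadratic subproblem (1) for Lasso (g_i = lam*|.|)
    with proximal coordinate descent. More passes means a better local solution,
    i.e. a smaller Theta in the sense of Assumption 1. Illustrative sketch only."""
    n_k = A_k.shape[1]
    dx = np.zeros(n_k)
    Adx = np.zeros(A_k.shape[0])
    for _ in range(n_passes):
        for i in range(n_k):
            a_i = A_k[:, i]
            L_i = (sigma_p / tau) * (a_i @ a_i) + 1e-12   # exact coordinate curvature
            g_i = a_i @ grad_f_vk + (sigma_p / tau) * (a_i @ Adx)
            z_old = x_k[i] + dx[i]
            z_new = soft_threshold(z_old - g_i / L_i, lam / L_i)
            Adx += (z_new - z_old) * a_i
            dx[i] = z_new - x_k[i]
    return dx, Adx

def cola_round_node_k(A_k, x_k, v_all, W_row_k, b, lam, K, gamma=1.0, n_passes=3):
    """One round of Algorithm 1 at node k for Lasso: gossip mixing, approximate
    local solve, local updates. The gamma*K scaling of the v_k update keeps the
    average of the local estimates consistent with Ax (cf. Lemma 1)."""
    tau = 1.0                      # f(v) = 0.5*||v - b||^2 is 1-smooth
    sigma_p = gamma * K            # safe choice sigma' := gamma * K
    v_half = W_row_k @ v_all       # line 4: locally averaged shared vector
    grad_f = v_half - b            # gradient of f at the local estimate
    dx, Adx = local_subproblem_cd(A_k, x_k, grad_f, lam, sigma_p, tau, n_passes)
    x_k_new = x_k + gamma * dx     # line 6: update the local coordinates of x
    v_k_new = v_half + gamma * K * Adx   # line 8: update the local estimate
    return x_k_new, v_k_new

# Toy usage: K = 4 nodes, node 0 holds the first 5 of 20 columns.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 20)); b = rng.standard_normal(30)
A_0, x_0 = A[:, :5], np.zeros(5)
v_all = np.zeros((4, 30))                      # all local estimates start at 0
W_row_0 = np.array([0.5, 0.25, 0.0, 0.25])     # node 0's row of the mixing matrix
x_0, v_0 = cola_round_node_k(A_0, x_0, v_all, W_row_0, b, lam=1e-4, K=4)
```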
More specifically, when a new node k joins the network, its x[k] variables are initialized to 0; when node k leaves, its x[k] is frozen, and its subproblem is not touched anymore (i.e. ⇥k = 1). Using the same approach, we can adapt to dynamic changes in the dataset—such as additions and removal of local data columns—by adjusting the size of the local weight vector accordingly. Unlike gradient-based methods and ADMM, COLA does not require parameter tuning to converge, increasing resilience to drastic changes. Extension to improved second-order subproblems. In the centralized setting, it has recently been shown that the Hessian information of f can be properly utilized to define improved local subproblems [Lee and Chang, 2017, Dünner et al., 2018]. Similar techniques can be applied to COLA as well, details on which are left in Appendix E. Extension to time-varying graphs. Similar to scalability and elasticity, it is also straightforward to extend COLA to a time varying graph under proper assumptions. If we use the time-varying model in [Nedic et al., 2017, Assumption 1], where an undirected graph is connected with B gossip steps, then changing COLA to perform B communication steps and one computation step per round still guarantees convergence. Details of this setup are provided in Appendix E. 3 On the convergence of COLA In this section we present a convergence analysis of the proposed decentralized algorithm COLA for both general convex and strongly convex objectives. In order to capture the evolution of COLA, we reformulate the original problem (A) by incorporating both x and local estimates {vk}Kk=1 minx,{vk}Kk=1 HA(x, {vk} K k=1) := 1 K PK k=1 f(vk) + g(x), (DA) such that vk = Ax, k = 1, ...,K. While the consensus is not always satisfied during Algorithm 1, the following relations between the decentralized objective and the original one (A) always hold. All proofs are deferred to Appendix C. Lemma 1. Let {vk} and x be the iterates generated during the execution of Algorithm 1. At any timestep, it holds that 1 K PK k=1 vk = Ax, (4) FA(x) HA(x, {vk}Kk=1) FA(x) + 12⌧K PK k=1 kvk Axk 2 . (5) The dual problem and duality gap of the decentralized objective (DA) are given in Lemma 2. Lemma 2 (Decentralized Dual Function and Duality Gap). The Lagrangian dual of the decentralized formation (DA) is min{wk}Kk=1 HB({wk} K k=1) := 1 K PK k=1 f ⇤(wk) + Pn i=1 g ⇤ i ⇣ A>i ( 1K PK k=1 wk) ⌘ . (DB) Given primal variables {x, {vk}Kk=1} and dual variables {wk}Kk=1, the duality gap is: GH(x, {vk}Kk=1, {wk}Kk=1) := 1K P k(f(vk)+f ⇤(wk))+g(x)+ Pn i=1 g ⇤ i 1K P k A > i wk . (6) If the dual variables are fixed to the optimality condition wk = rf(vk), then the dual variables can be omitted in the argument list of duality gap, namely GH(x, {vk}Kk=1). Note that the decentralized duality gap generalizes the duality gap of COCOA: when consensus is ensured, i.e., vk ⌘ Ax and wk ⌘ rf(Ax), the decentralized duality gap recovers that of COCOA. 3.1 Linear rate for strongly convex objectives We use the following data-dependent quantities in our main theorems k := maxx[k]2Rn A[k]x[k] 2 /kx[k]k2, max = maxk=1,...,K k, := PK k=1 knk. (7) If {gi} are strongly convex, COLA achieves the following linear rate of convergence. Theorem 1 (Strongly Convex gi). Consider Algorithm 1 with := 1 and let ⇥ be the quality of the local solver in Assumption 1. Let gi be µg-strongly convex for all i 2 [n] and let f be 1/⌧ -smooth. Let ̄ 0 := (1 + ) 0, ↵ := (1 + (1 ) 2 36(1+⇥) ) 1 and ⌘ := (1 ⇥)(1 ↵) s0 = ⌧µg ⌧µg+ max̄0 2 [0, 1]. 
(8) Then after T iterations of Algorithm 1 with 3 T 1+⌘s0⌘s0 log "(0) H "H , it holds that E ⇥ HA(x(T ), {v(T )k }Kk=1) HA(x?, {v?k}Kk=1) ⇤ "H. Furthermore, after T iterations with T 1+⌘s0⌘s0 log ✓ 1 ⌘s0 "(0) H "GH , ◆ we have the expected duality gap E[GH(x(T ), { PK k=1 Wklv (T ) l }Kk=1)] "GH . 3.2 Sublinear rate for general convex objectives Models such as sparse logistic regression, Lasso, group Lasso are non-strongly convex. For such models, we show that COLA enjoys a O(1/T ) sublinear rate of convergence for all network topologies with a positive spectral gap. Theorem 2 (Non-strongly Convex Case). Consider Algorithm 1, using a local solver of quality ⇥. Let gi(·) have L-bounded support, and let f be (1/⌧)-smooth. Let "GH > 0 be the desired duality gap. Then after T iterations where T T0 +max ⇢l 1 ⌘ m , 4L2 ̄0 ⌧"GH⌘ , T0 t0 + 2 ⌘ ⇣ 8L2 ̄0 ⌧"GH 1 ⌘ + t0 max ⇢ 0, ⇠ 1+⌘ ⌘ log 2⌧(HA(x (0),{v(0)l }) HA(x ?,{v?})) 4L2 ̄0 ⇡ and ̄ 0 := (1+ ) 0, ↵ := (1+ (1 ) 2 36(1+⇥) ) 1 and ⌘ := (1 ⇥)(1 ↵). We have that the expected duality gap satisfies E ⇥ GH(x̄, {v̄k}Kk=1, {w̄k}Kk=1) ⇤ "GH at the averaged iterate x̄ := 1T T0 PT 1 t=T0+1 x(t), and v0k := PK l=1 Wklvl and v̄k := 1 T T0 PT 1 t=T0+1 (v0k) (t) and w̄k := 1 T T0 PT 1 t=T0+1 rf((v0k)(t)). Note that the assumption of bounded support for the gi functions is not restrictive in the general convex case, as discussed e.g. in [Dünner et al., 2016]. 3"(0) H := HA(x (0), {v(0)k } K k=1) HA(x ?, {v?k} K k=1) is the initial suboptimality. 3.3 Local certificates for global accuracy Accuracy certificates for the training error are very useful for practitioners to diagnose the learning progress. In the centralized setting, the duality gap serves as such a certificate, and is available as a stopping criterion on the master node. In the decentralized setting of our interest, this is more challenging as consensus is not guaranteed. Nevertheless, we show in the following Proposition 1 that certificates for the decentralized objective (DA) can be computed from local quantities: Proposition 1 (Local Certificates). Assume gi has L-bounded support, and let Nk := {j : Wjk > 0} be the set of nodes accessible to node k. Then for any given " > 0, we have GH(x; {vk}Kk=1) ", if for all k = 1, . . . ,K the following two local conditions are satisfied: v>k rf(vk) + X i2Pk gi(xi) + g ⇤ i ( A>i rf(vk)) " 2K (9) rf(vk) 1|Nk| P j2Nk rf(vj) 2 ⇣PK k=1 n 2 k k ⌘ 1/2 1 2L p K ", (10) The local conditions (9) and (10) have a clear interpretation. The first one ensures the duality gap of the local subproblem given by vk as on the left hand side of (9) is small. The second condition (10) guarantees that consensus violation is bounded, by ensuring that the gradient of each node is similar to its neighborhood nodes. Remark 1. The resulting certificate from Proposition 1 is local, in the sense that no global vector aggregations are needed to compute it. For a certificate on the global objective, the boolean flag of each local condition (9) and (10) being satisfied or not needs to be shared with all nodes, but this requires extremely little communication. Exact values of the parameters and PK k=1 n 2 k k are not required to be known, and any valid upper bound can be used instead. We can use the local certificates to avoid unnecessary work on local problems which are already optimized, as well as to continuously quantify how newly arriving local data has to be re-optimized in the case of online training. 
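Below is a sketch of how one node could evaluate the two local conditions of Proposition 1 for the Lasso case, where gᵢ = λ|·| and its conjugate gᵢ* is the indicator of [−λ, λ]: condition (9) is a local duality-gap check and condition (10) bounds the disagreement of ∇f(vₖ) with the neighbourhood average. The right-hand-side thresholds involve the constants of Proposition 1 and are left as caller-supplied arguments rather than re-derived here; the function name and structure are illustrative assumptions.

```python
import numpy as np

def local_certificate(A_k, x_k, v_k, grad_neighbors, b, lam, thr_gap, thr_consensus):
    """Illustrative evaluation of the two local conditions of Proposition 1 at one node,
    for Lasso: f(v) = 0.5*||v - b||^2 and g_i = lam*|.|, whose conjugate g_i* is the
    indicator of [-lam, lam]. The right-hand-side thresholds involve the constants of
    (9) and (10) and are supplied by the caller rather than re-derived here."""
    grad = v_k - b                        # gradient of f at the local estimate v_k
    corr = A_k.T @ grad                   # A_i^T grad f(v_k) for the local columns
    # Condition (9): the local duality gap is small. g_i*(-A_i^T grad) is 0 when
    # |A_i^T grad| <= lam and +inf otherwise, in which case the condition fails.
    if np.any(np.abs(corr) > lam + 1e-12):
        gap_ok = False
    else:
        local_gap = v_k @ grad + lam * np.sum(np.abs(x_k))
        gap_ok = local_gap <= thr_gap
    # Condition (10): the local gradient agrees with the neighbourhood average.
    consensus_ok = np.linalg.norm(grad - grad_neighbors.mean(axis=0)) <= thr_consensus
    return gap_ok and consensus_ok

# Toy check with x = 0 (far from optimal, so the certificate should not fire).
rng = np.random.default_rng(0)
A_k = rng.standard_normal((30, 5)); b = rng.standard_normal(30)
print(local_certificate(A_k, np.zeros(5), np.zeros(30), np.zeros((2, 30)), b,
                        lam=1e-4, thr_gap=1e-3, thr_consensus=1e-3))
```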
The local certificates can also be used to quantify the contribution of newly joining or departing nodes, which is particularly useful in the elastic scenario described above.

4 Experimental results

Here we illustrate the advantages of COLA in three respects. First, we investigate its behaviour on different network topologies and with varying subproblem quality Θ. Second, we compare COLA with state-of-the-art decentralized baselines: (1) DIGing [Nedic et al., 2017], which generalizes the gradient-tracking technique of the EXTRA algorithm [Shi et al., 2015], and (2) decentralized ADMM (a.k.a. consensus ADMM), which extends the classical ADMM (Alternating Direction Method of Multipliers) [Boyd et al., 2011] to the decentralized setting [Shi et al., 2014, Wei and Ozdaglar, 2013]. Finally, we show that COLA works in the challenging unreliable-network environment where each node has a certain chance of dropping out of the network.

We implement all algorithms in PyTorch with an MPI backend. The decentralized network topology is simulated by running one thread per graph node, on a 2×12-core Intel Xeon CPU E5-2680 v3 server with 256 GB RAM. Table 1 describes the datasets used in the experiments (all from https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/). For Lasso, the columns of A are features. For ridge regression, the columns are features and samples for COLA primal and COLA dual, respectively. The order of the columns is shuffled once before being distributed across the nodes. Due to space limits, details on the experimental configurations are given in Appendix D.

Effect of approximation quality Θ. We study the convergence behavior in terms of the approximation quality Θ. Here, Θ is controlled by the number of data passes on subproblem (1) per node. Figure 1 shows that increasing the number of local data passes (i.e., improving the subproblem approximation) always reduces the number of iterations, and hence the number of communication rounds, of COLA. However, for a fixed network bandwidth this creates a clear trade-off in overall wall-clock time, reflecting the cost of both communication and computation: more local passes mean fewer communication rounds, but also more time spent solving the subproblems. These observations suggest that one can adjust Θ per node to handle system heterogeneity, as discussed at the end of Section 2.

Effect of graph topology. Fixing K = 16, we test the performance of COLA on 5 different topologies: ring, 2-connected cycle, 3-connected cycle, 2D grid and complete graph. The mixing matrix W is given by Metropolis weights for all test cases (details in Appendix B; a small sketch constructing such mixing matrices and comparing their spectral gaps is given after the baseline comparison below). Convergence curves are plotted in Figure 3. One can observe that COLA converges monotonically for all topologies and, especially when all nodes in the network are equal, a smaller β leads to a faster convergence rate. This is consistent with the intuition that the spectral gap 1 − β measures the connectivity level of the topology.

Superior performance compared to baselines. We compare COLA with DIGing and D-ADMM on strongly convex and general convex problems. For the general convex objective, we use Lasso regression with λ = 10⁻⁴ on the webspam dataset; for the strongly convex objective, we use ridge regression with λ = 10⁻⁵ on the URL reputation dataset. For ridge regression, we can map COLA to both the primal and the dual problem. Figure 2 traces the results in terms of log-suboptimality. One can observe that for both general and strongly convex objectives, COLA significantly outperforms DIGing and decentralized ADMM in terms of both the number of communication rounds and computation time.
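As referenced in the graph-topology experiment above, the sketch below constructs Metropolis-weight mixing matrices for the cycle-based topologies used there and compares their β values, illustrating why better-connected graphs (larger spectral gap 1 − β) converge faster. The Metropolis rule Wᵢⱼ = 1/(1 + max(degᵢ, degⱼ)) is standard; the helper names, the omission of the 2D grid, and the representation of the complete graph as a (K/2)-connected cycle are illustrative assumptions.

```python
import numpy as np

def metropolis(adj):
    """Metropolis weights: W_ij = 1/(1 + max(deg_i, deg_j)) on edges, rest on the diagonal."""
    deg = adj.sum(axis=1)
    W = np.where(adj, 1.0 / (1.0 + np.maximum.outer(deg, deg)), 0.0)
    np.fill_diagonal(W, 0.0)
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))
    return W

def beta(W):
    """Second largest eigenvalue magnitude of W; 1 - beta is the spectral gap."""
    ev = np.sort(np.linalg.eigvalsh(W))
    return max(abs(ev[-2]), abs(ev[0]))

def c_connected_cycle(K, c):
    """Each node is linked to its c nearest neighbours on each side
    (c = 1 is the ring; c = K//2 recovers the complete graph)."""
    adj = np.zeros((K, K), dtype=bool)
    for i in range(K):
        for step in range(1, c + 1):
            adj[i, (i + step) % K] = adj[(i + step) % K, i] = True
    return adj

K = 16
for name, c in [("ring", 1), ("2-connected cycle", 2),
                ("3-connected cycle", 3), ("complete graph", K // 2)]:
    W = metropolis(c_connected_cycle(K, c))
    print(f"{name:18s} beta = {beta(W):.3f}  spectral gap = {1 - beta(W):.3f}")
```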
While DIGing and D-ADMM need parameter tuning to ensure convergence and efficiency, COLA is much easier to deploy as it is parameter-free. Additionally, the convergence guarantees of ADMM rely on exact subproblem solvers, whereas inexact solvers are allowed in COLA.

Fault tolerance to unreliable nodes. Assume each node of the network only participates in each round with probability p. If a new node k joins the network, its local variables are initialized as x[k] = 0; if node k leaves the network, x[k] is frozen, with Θk = 1. All remaining nodes dynamically adjust their weights to maintain the doubly stochastic property of W. We run COLA on such unreliable networks for different values of p and show the results in Figure 4. First, one can observe that for all p > 0 the suboptimality decreases monotonically as COLA progresses. It is also clear from the results that a smaller dropout rate (a larger p) leads to faster convergence of COLA.

5 Discussion and conclusions

In this work we have studied training generalized linear models in the fully decentralized setting. We proposed a communication-efficient decentralized framework, termed COLA, which is free of parameter tuning. We proved that it has a sublinear rate of convergence for general convex problems, allowing e.g. L1 regularizers, and a linear rate of convergence for strongly convex objectives. Our scheme offers primal-dual certificates, which are useful in the decentralized setting. We demonstrated that COLA offers full adaptivity to heterogeneous distributed systems on arbitrary network topologies, is adaptive to changes in network size and data, and offers fault tolerance and elasticity. Future research directions include improved subproblems, extensions to network topologies with directed graphs, and incorporating recent communication compression schemes [Stich et al., 2018].

Acknowledgments. We thank Prof. Bharat K. Bhargava for fruitful discussions. We acknowledge funding from SNSF grant 200021_175796, Microsoft Research JRC project ‘Coltrain’, as well as a Google Focused Research Award.
1. What are the strengths and weaknesses of the proposed decentralized version of the CoCoA algorithm?
2. How does the decentralized algorithm improve on CoCoA in terms of communication cost and computation complexity?
3. Is there a speedup property of the decentralized algorithm, and how does it work?
4. Are there any advantages of the decentralized algorithm over centralized algorithms in certain scenarios?
5. Can the authors provide more details about the experimental setup and the choice of multi-core machine for testing the decentralized algorithm?
Review
Review

This paper essentially proposes a decentralized version of the CoCoA algorithm, or equivalently a decentralized version of block coordinate descent. While I enjoyed reading the first section, the theory section is a little overcomplicated in terms of presentation. The authors may consider how to simplify notation and statements. Despite this, I have a few questions/concerns that the authors should respond to in the rebuttal:
- A few key theoretical results are not clear enough to me. In particular, how does the proposed decentralized algorithm improve on CoCoA in theory? The authors need to compare at least the communication cost and the computation complexity. (The authors show that the convergence rate is sort of consistent with CoCoA in the extreme case, but that may not be enough. People may want to see whether your method has any advantage over CoCoA in theory when jointly considering communication and computation cost.)
- The speedup property is not clear to me. Since the notation and the statements of the theoretical results are overly complicated, it is hard to see whether, and how, more workers will accelerate the training process.
- The experiments are performed on a multi-core machine, where communication is not the bottleneck. In my personal experience, the decentralized algorithm may not have an advantage over centralized algorithms.

=============
The authors partially addressed my concerns. A comparison to centralized methods should also be included, since in many ML applications one can choose either a centralized or a decentralized network. The authors may refer to the comparison in [14].