Dataset schema: paper_id (string, length 43), summaries (sequence), abstractText (string, 98 to 40k chars), authors (list), references (list), sections (list), year (int64, 1980 to 2020), title (string, 4 to 183 chars)
SP:8861e607941d5f65eae84cb2f2ac04066254b18a
[ "In this paper the authors propose a metric based model for few-shot learning. The goal of the proposed technique is to incorporate a prior that highlight better the dissimilarity between closely related class prototype. Thus, the proposed paper is related to prototypical neural network (use of prototype to represent a class) but differ from it by using inner product scoring as a similarity measure instead of the use of euclidean distance. There is also close similarity between the proposed method and matching network ", "The stated contributions of the paper are: (1) a method for performing few-shot learning and (2) an approach for building harder few-shot learning datasets from existing datasets. The authors describe a model for creating a task-aware embedding for different novel sets (for different image classification settings) using a nonlinear self-attention-like mechanism applied to the centroid of the global embeddings for each class. The resulting embeddings are used per class with an additional attention layer applied on the embeddings from the other classes to identify closely-related classes and consider the part of the embedding orthogonal to the attention-weighted-average of these closely-related classes. They compare the accuracy of their model vs others in the 1-shot and 5-shot setting on various datasets, including a derived dataset from CIFAR which they call Hierarchical-CIFAR." ]
Few-shot classification may involve differentiating data that belong to different levels of label granularity. Compounded by the scarcity of labeled examples in the novel classification set, relying solely on the loss function to implicitly guide the classifier to separate data based on its label might not be enough; a few-shot classifier needs to be strongly biased to perform well. In this paper, we propose a model that incorporates a simple inductive bias: focusing on differences by building a dissimilar set of class representations. The model treats a class representation as a vector and removes the components it shares with closely related class representatives. It does so through a combination of learned attention and vector orthogonalization. Our model works well on our newly introduced dataset, CIFAR-Hard, which contains different levels of label granularity. It also substantially improves performance on the fine-grained classification dataset CUB, while staying competitive on standard benchmarks such as mini-ImageNet, Omniglot, and a few-shot dataset derived from CIFAR.
[]
[ { "authors": [ "Wei-Yu Chen", "Yen-Cheng Liu", "Zsolt Kira", "Yu-Chiang Wang", "Jia-Bin Huang" ], "title": "A closer look at few-shot classification", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Li Fei-Fei", "Rob Fergus", "Pietro Perona" ], "title": "One-shot learning of object categories", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2006 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In Proceedings of the 34th International Conference on Machine LearningVolume", "year": 2017 }, { "authors": [ "Spyros Gidaris", "Nikos Komodakis" ], "title": "Dynamic few-shot visual learning without forgetting", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Nathan Hilliard", "Lawrence Phillips", "Scott Howland", "Artëm Yankov", "Courtney D Corley", "Nathan O Hodas" ], "title": "Few-shot learning with metric-agnostic conditional embeddings", "venue": "arXiv preprint arXiv:1802.04376,", "year": 2018 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. 
In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Brenden Lake", "Ruslan Salakhutdinov", "Jason Gross", "Joshua Tenenbaum" ], "title": "One shot learning of simple visual concepts", "venue": "In Proceedings of the annual meeting of the cognitive science society,", "year": 2011 }, { "authors": [ "Thomas Mensink", "Jakob Verbeek", "Florent Perronnin", "Gabriela Csurka" ], "title": "Distance-based image classification: Generalizing to new classes at near-zero cost", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2013 }, { "authors": [ "Alex Nichol", "John Schulman" ], "title": "Reptile: a scalable metalearning algorithm", "venue": "arXiv preprint arXiv:1803.02999,", "year": 2018 }, { "authors": [ "Hang Qi", "Matthew Brown", "David G Lowe" ], "title": "Low-shot learning with imprinted weights", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Sachin Ravi", "Hugo Larochelle" ], "title": "Optimization as a model for few-shot learning", "venue": null, "year": 2016 }, { "authors": [ "Oriol Vinyals", "Charles Blundell", "Timothy Lillicrap", "Daan Wierstra" ], "title": "Matching networks for one", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Progress in artificial intelligence (AI) has been rapid. AI agents have been outperforming humans in an increasing variety of tasks, such as in recognizing images on ImageNet (He et al., 2016) and in the ancient game of Go (Silver et al., 2016). However, challenges remain – systems that outperform humans usually require learning from very large-scale data. In contrast, humans only require few examples to be able to rapidly adapt to a novel task; humans are still better learners. Few-shot learning methods – which learn classes from few labeled examples – aim to bridge this gap.\nIn learning a classification algorithm from a few labeled examples, one may train the algorithm with a different set of abundantly labeled data (base set); before adapting it to the unknown examples. However, it might be the case that the available labeled examples are of different granularity level. For example, it is possible that it is only trained to differentiate between cats and dogs, but is tested on differentiating different breeds of dogs. The set of labeled examples is also very limited; in the extreme case, only one labeled example is provided for each class (called one-shot classification). A few-shot learning algorithm can learn to identify features that are important for doing well on the base set – these can be adapted to classify the few labeled examples as long as the domain remains similar. But with so few examples provided and possible differences in the task granularity, few-shot learning algorithms need to be very biased to perform well. The question is then: what kind of bias is reasonable?\nOur work. In this paper, we propose a model that performs classification in a novel task by focusing on the differences between closely related classes of its support set. Our bias is loosely inspired by how scientists often work (Mill’s method of difference): in looking for potential causes of a phenomenon, a scientist would often focus on the differences in the circumstances (features) in the instance in which the phenomenon occurred and the circumstances in instances for which the phenomenon did not occur (Mill, 1875). Our method focuses on the differences by removing components in each class representative that are shared with closely related classes.\nOur contributions.\n1. We introduce a method which incorporates a simple yet effective inductive bias: focusing on differences, and show that it works well on classifying closely related classes. Our\nresults show that the method achieves better performance on a standard fine-grained classification benchmark (CUB dataset) and on our proposed benchmark consisting of a mix of fine and coarse-grained classification tasks, CIFAR-Hard. On other commonly used benchmark datasets, CIFAR-FS, mini-ImageNet, and Omniglot, its performance is competitive with existing methods.\n2. We propose a methodology to build harder few-shot learning datasets without requiring very large hierarchically labeled datasets. Using this methodology, we build CIFAR-Hard – a mix of fine-grained and coarse-grained few-shot classification dataset derived from CIFAR-100. Our empirical evaluation shows that the currently existing methods are not well equipped to handle this scenario." }, { "heading": "2 DISSIMILARITY NETWORK", "text": "" }, { "heading": "2.1 FEW-SHOT LEARNING", "text": "In few-shot learning, we are given a base set B and a novel set N. 
The base set contains labeled examples from a large number of classes, while the novel set contains classes not found in the base set. The objective of few-shot learning is to train a classification algorithm P on the training set Xtrain = B in such a way that it generalizes to the elements of the novel set N. Some methods may also train on the small number of labeled examples from the novel set; in that case, the training set becomes Xtrain = B ∪ Nlabeled, with labeled novel set Nlabeled ⊂ N. In one-shot learning, the novel set contains only one labeled example for each class, while in k-shot learning, the novel set contains k examples for each class.

We use the episodic training proposed by Vinyals et al. (2016) to make sure that the training and test conditions match. At every step, we sample some examples to form an episode T ⊂ Xtrain to train the classification algorithm P. Each episode T consists of a small set of N labeled examples (called the support set) S = {(xi, yi)}_{i=1..N} and a set of M examples to be labeled (called the query set) Q = {xi}_{i=1..M}, where xi ∈ R^D is a D-dimensional feature vector with label yi (for computing the loss on the query set), simulating the conditions for learning the novel set N. We denote the set of labeled examples from the support set with class k ∈ {1, ..., K} in an episode as Sk.

Our method adopts the approach of similarity learning. Instead of learning a distance or similarity function, we learn a space (embedding) that works well with a fixed similarity-based classifier in that space. Specifically, our model learns to construct a space that is optimized to separate data belonging to different classes for a classifier that uses the dot product as its similarity function. We define two levels of embedding based on how the embedding utilizes task information:

Global task embedding learns an embedding function that is optimized for all episodes that it is trained upon. The assumption is that the produced embedding will learn a meaningful and general representation that is sufficient to separate data points on the novel task without knowing what classes appear in it.

Task-aware embedding removes the assumption that the learned global embedding is sufficient for the novel task. Instead, it takes the possible classes of the novel task into account. On the novel task, it embeds the query set conditioned on the support set, aware of the support set's members and giving full context to the prediction. Our model builds an explicit task-aware embedding that separates a class from the weighted average of its closely related classes. It is explicit in the sense that our model explicitly encodes such an inductive bias into its function (architecture). In contrast, an implicit task-aware embedding relies only upon its loss function to induce separation between classes.
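To make the episodic setup concrete, the following is a minimal sketch of episode construction in Python. The dictionary-based dataset layout and all names here are illustrative assumptions, not details from the paper:

```python
import random

def sample_episode(data_by_class, n_way=5, k_shot=1, m_query=16):
    """Sample one N-way, k-shot episode T = (S, Q) from a pool of classes.

    data_by_class: dict mapping a class label to a list of examples.
    Returns a support set S of n_way*k_shot (x, y) pairs and a query set Q of
    n_way*m_query pairs, with class labels remapped to 0..n_way-1 per episode.
    """
    classes = random.sample(sorted(data_by_class), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):  # per-episode label remapping
        examples = random.sample(data_by_class[cls], k_shot + m_query)
        support += [(x, label) for x in examples[:k_shot]]
        query += [(x, label) for x in examples[k_shot:]]
    return support, query
```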
" }, { "heading": "2.2 MODEL", "text": "For each class, our model, which we call the Dissimilarity Network, computes a class representative, called a prototype, which is used to classify a new data instance by comparing the instance's similarity to the prototype through an inner product. For 1-shot classification, each training example in the novel task is transformed into a prototype, while for k-shot classification, the mean of the k instance representations belonging to each class is transformed into the prototype.

The Dissimilarity Network computes prototypes that are dissimilar to one another, in the sense that the components of the class prototypes are orthogonal to the direction of the weighted average of closely related classes. This lightens the burden on the classifier, as the classes become more easily distinguishable: what remains are their differences. The model works by iteratively enhancing the representation of the prototypes through a set of learned transformations (embedding functions). The first transformation builds a global task embedding through a learned dimensionality-reduction function. The global embedding retains features that are useful for the training tasks but does not take into account the classes that are present in the novel task. The second transformation maps the prototypes into a task-aware embedding using self-attention networks, taking into account the other classes present in the task. Finally, the last transformation computes class prototypes that are dissimilar, by locally orthogonalizing the representations to the weighted average of the other closely related class representations.

When presented with a new point to classify, the model computes the global embedding for that point, transforms it using the task-aware embedding, locally orthogonalizes the point for each possible class, and selects the most similar prototype's class as the class of the new point. Figure 1 illustrates how the model constructs class representations and predicts the label of a new point." }, { "heading": "2.2.1 GLOBAL EMBEDDING", "text": "We learn a feature extraction function ff : R^D → R^H that reduces the representation of an image to an H-dimensional vector. We use deep convolutional neural networks (Krizhevsky et al., 2012), which capture the local interactions of neighboring pixels and build a hierarchical representation of them. This feature extractor constructs our first level of embedding. It is trained to extract information that is useful for all the training episodes but is not specialized to a particular novel task. We use the global embedding to compute the class mean (or prototype) ck of the H-dimensional representations of the support points,

ck = (1/|Sk|) Σ_{(xi,yi)∈Sk} ff(xi).   (1)" }, { "heading": "2.2.2 SELF-ATTENTION", "text": "By viewing the prototypes as a set of vectors C = {ci}_{i=1..K}, we can learn a set-to-set function that conditions each member of the set on every other member. This gives task-awareness to our prototypes; each member is now aware of the other class representatives for the given task.

Our set-to-set operation is based on the self-attention mechanism introduced by Vaswani et al. (2017). Given a query, an attention mechanism learns to "attend", by means of a weighted average, to different parts of the set depending on how relevant each part is to the query. In self-attention, the query is a member of the set itself. Our self-attention uses embedding functions for the query hQ, key hK, and value hV. Each embedding function is parameterized by a neural network that computes a mapping R^{K×H} → R^{K×H}. For simplicity, we assume that any input in the form of a set of vectors A is automatically cast into a matrix A ∈ R^{K×H}, and that the output is cast back into a set.
The self-attention mechanism self-attn : R^{K×H} → R^{K×H} is formulated as follows:

self-attn(C) = softmax( hQ(C) hK(C)^T / √H ) hV(C)   (2)

Intuitively, the self-attention computes a weighted average of the elements of the input matrix C ∈ R^{K×H}, representing a set of prototypes C. This operation has the effect of averaging out noisy components of the global embedding that may be relevant for other tasks but are irrelevant to the set of classes in the current novel task. The weights are obtained using the learned attention function, which in this case is parameterized by hQ and hK. In our case, each prototype ck learns to be aware of the other class representatives C \ ck by incorporating some of their components.

We use a bidirectional LSTM (BLSTM) (Hochreiter & Schmidhuber, 1997) as the attention embedding function for the query hQ and key hK, while using either the identity function or a BLSTM for hV. A BLSTM computes a concatenation of two opposing-direction sequences by sequentially feeding the elements xt ∈ R^H of its input x = [x1, ..., xT] into an LSTM, ht, ut = LSTM(xt−1, ht−1, ut−1). The computation yields a sequence of vectors, each conditioned on its neighboring elements. Our BLSTM uses context sharing between the key attention embedding hK and the query attention embedding hQ.

We are aware of the sequential nature of the BLSTM, which can be counter-intuitive as we are modeling a set-to-set operation that should not have any preference for ordering. However, we found empirically that this setup offers more performance gain than the use of a traditional linear function as the attention embedding function. The BLSTM may learn to ignore the (unimportant) set ordering due to the nature of episodic training, which exposes it to many permutations of the possible class orderings. Moreover, the attention also gives a global context of the members of the set, which could further alleviate the ordering issues (if any)." }, { "heading": "2.2.3 FOCUSING ON DIFFERENCES", "text": "We encode our inductive bias in the form of the neural network architecture, in which we remove the components that are shared among other class representatives, thereby giving the model the ability to focus only on the inter-class differences. Since we treat each prototype as a vector, one natural way to achieve this is by making the prototypes locally orthogonal to the components that are shared among other classes. We learn to find such components by using dot-product attention.

An H-dimensional prototype ck ∈ C of class k has a corresponding task-aware vector prototype wk ∈ W = self-attn(C), following the method described in Section 2.2.2. Let W′ = W \ wk. The components shared among the other classes k′ ≠ k are computed using the attention function attn : R^H × R^{(K−1)×H} → R^H:

attn(wk, W′) = softmax( wk · W′^T / √H ) W′   (3)

Essentially, through weighted averaging, it selects components from the other class prototypes based on how similar the embedded vector prototype wk is to them.

We make the prototype wk orthogonal to the shared components of W \ wk by removing its projection onto them. Specifically, we use the shared components bk = attn(wk, W \ wk) belonging to class k as the basis of the projection proj(wk, bk):

proj(wk, bk) = ((wk · bk) / ||bk||²) bk   (4)

Intuitively, this expresses the direction of the prototype wk in terms of the given basis bk (i.e., it projects wk onto the basis). It produces the component of wk that is linearly dependent on the basis, which we then remove: zk = wk − proj(wk, bk) yields a new prototype representation zk that is orthogonal to the weighted average of the components of the other closely related prototypes.
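To make Eqs. (1)-(4) concrete, the following is a minimal PyTorch-style sketch of the prototype construction. For brevity, plain linear maps stand in for the paper's BLSTM embedding functions hQ, hK, hV; f_f is assumed to be any feature extractor mapping inputs to R^H. All names and shapes are illustrative assumptions:

```python
import torch

def self_attn(C, h_q, h_k, h_v):
    """Eq. (2): scaled dot-product self-attention over the K prototypes C (K x H)."""
    H = C.shape[-1]
    scores = h_q(C) @ h_k(C).T / H ** 0.5          # (K, K)
    return torch.softmax(scores, dim=-1) @ h_v(C)  # (K, H) task-aware prototypes W

def shared_components(w_k, W_rest):
    """Eq. (3): attention-weighted average of the other prototypes, b_k."""
    H = w_k.shape[-1]
    scores = (W_rest @ w_k) / H ** 0.5             # (K-1,)
    return torch.softmax(scores, dim=-1) @ W_rest  # (H,)

def orthogonalize(w_k, b_k):
    """Eq. (4) and z_k: remove the projection of w_k onto the shared basis b_k."""
    proj = (w_k @ b_k) / (b_k @ b_k) * b_k
    return w_k - proj

def dissimilar_prototypes(support_x, support_y, f_f, h_q, h_k, h_v, n_way):
    """Build the locally orthogonal prototypes {z_k} from a support set."""
    emb = f_f(support_x)                                           # (N*k, H), Eq. (1) embedding
    C = torch.stack([emb[support_y == c].mean(0) for c in range(n_way)])
    W = self_attn(C, h_q, h_k, h_v)                                # task-aware prototypes
    Z = []
    for k in range(n_way):
        rest = torch.cat([W[:k], W[k + 1:]])                       # W \ w_k
        b_k = shared_components(W[k], rest)
        Z.append(orthogonalize(W[k], b_k))
    return torch.stack(Z), W                                       # (n_way, H) each
```

Passing torch.nn.Identity() for h_v would mirror the identity-value variant the paper reports using on most datasets.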
" }, { "heading": "2.2.4 CLASSIFICATION AND LEARNING", "text": "To classify a new point x̂, we follow transformations similar to those used to compute the locally orthogonal prototypes. First, the global embedding is computed for the point: v̂ = ff(x̂). For consistency, we follow the transformation given by self-attn. However, since we only have a single point v̂, we construct a set of length K by duplicating the point K times as the input: Ŵ = self-attn({v̂}_{i=1..K}). If the attention value embedding hV (Section 2.2.2) is an identity function (or any other elementwise function), then ŵk = v̂ for every ŵk ∈ Ŵ, since the operation takes a weighted average of a set of identical vectors. When hV is a BLSTM (or any function that operates on a set of elements), hV transforms the vector through multi-stage non-linear processing.

For a task-aware embedding ŵk ∈ Ŵ, we compute the vector that is locally orthogonal to the set of prototypes of classes k̃ ≠ k by computing ẑk = ŵk − proj(ŵk, bk), with the basis given by bk = attn(wk, W \ wk) from Section 2.2.3. Given unlabeled data x̂, these transformations give a set of locally orthogonalized vectors {ẑi}_{i=1..K} for comparison with the locally orthogonal prototypes {zi}_{i=1..K}. The Dissimilarity Network then computes a distribution over classes for the point x̂ using a softmax over inner products:

p(y = k | x̂) = exp(⟨ẑk, zk⟩) / Σ_{k′} exp(⟨ẑk′, zk′⟩)   (5)

Learning is done using the cross-entropy loss with the label of the instance.
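Continuing the sketch above, prediction under Eq. (5) might look as follows; note that the basis bk is computed from the prototypes W, while the orthogonalization is applied to the duplicated query embedding (helper functions are those defined in the previous sketch):

```python
import torch

def classify(x_hat, Z, W, f_f, h_q, h_k, h_v, n_way):
    """Predict p(y | x_hat) as in Eq. (5)."""
    v = f_f(x_hat.unsqueeze(0))                       # (1, H) global embedding
    V = v.repeat(n_way, 1)                            # duplicate the point K times
    W_hat = self_attn(V, h_q, h_k, h_v)               # task-aware query embeddings
    logits = []
    for k in range(n_way):
        rest = torch.cat([W[:k], W[k + 1:]])          # basis comes from the prototypes
        b_k = shared_components(W[k], rest)
        z_hat_k = orthogonalize(W_hat[k], b_k)        # orthogonalize the query per class
        logits.append(z_hat_k @ Z[k])                 # inner product <z_hat_k, z_k>
    return torch.softmax(torch.stack(logits), dim=0)  # Eq. (5)
```

Training would then minimize the cross-entropy between this distribution and the query label, as described above.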
" }, { "heading": "3 RELATED WORKS", "text": "There is a large body of work on few-shot learning, which started from the assumption that previously learned tasks can help in making predictions on a new task (Fei-Fei et al., 2006). The area soon gained interest from many researchers, who introduced many techniques that have contributed to huge strides of progress in few-shot learning. We quickly review some of the recently proposed methods and delve deeper into the metric learning-based methods that are most closely related to our work.

Transfer learning-based methods follow the standard transfer learning procedure (network pretraining and fine-tuning). Gidaris & Komodakis (2018); Qi et al. (2018); Chen et al. (2019) propose to directly predict the weights of the classifiers on a novel task.

Initialization-based methods address few-shot learning by finding a way to better initialize a model. Ravi & Larochelle (2016) use an LSTM as a meta-optimizer to rapidly adapt a neural network to the novel task, whereas Munkhdalai & Yu (2017) use external memory to update weights. Another line of work is concerned with finding a good initialization such that fine-tuning can be done in fewer steps (Finn et al., 2017; Nichol & Schulman, 2018; Rusu et al., 2018).

Metric and similarity learning-based methods assume that representations produced by a model on one task share some similarities with those produced on another task. Essentially, the goal is to learn a comparison model that can distinguish different classes on the novel task by measuring distance or similarity to representations produced from the support set.

Our proposed method is similar to prototypical networks (Snell et al., 2017), and earlier Mensink et al. (2013), in its use of mean class representations (or prototypes). The similarity stops there, as prototypical networks directly perform classification by comparing the distance of the new input to each prototype. They assume that the embedding function that produces the prototypes captures a sufficiently useful and general representation that is transferable to the novel set; it only computes a global task embedding. As shown in our results, this assumption breaks down when there are changes in class granularity or when the labels are fine-grained. In contrast, our method does not classify directly on the prototypes; instead, it transforms the prototypes, producing task-aware embeddings that are locally orthogonal to the shared components belonging to different classes. Classification is then performed by computing a softmax over the dot products between the new point and the task-aware prototypes.

The Dissimilarity Network uses context embedding similar to the full context embedding (FCE) extension of the matching network (Vinyals et al., 2016). However, there are glaring differences in how they operate. The matching network carries the entire support set for prediction: it predicts the label of an unknown point by computing a linear combination of the labels of its support set, so as the support set grows, memory grows linearly with it. Its full context embedding conditions the prediction on the entire support set; computing the task-aware embedding is quadratic in the number of elements of the support set. Moreover, as it does not construct an explicit reference for the classes to condition on, it is less clear how the separation of points belonging to different classes can be maximized. As pointed out by Snell et al. (2017), the FCE extension of the matching network does not make much difference.

Our model only maintains a set of prototypes and classifies a new point based on how orthogonal its representation is to the prototypes. Since we condition the prediction only on the prototypes, the cost does not grow with the size of the support set. Moreover, computing the task-aware embedding is only quadratic in the number of prototypes (i.e., labels), as opposed to the number of support examples. We also explicitly compute the representations to be dissimilar, lifting the reliance on the loss function alone to learn sufficiently separable inter-class representations.

Our similarity function can be set differently. One possible extension is to use a learned similarity or distance function similar to RelationNet (Sung et al., 2018), which we leave for future work." }, { "heading": "4 EXPERIMENTS", "text": "Datasets & scenarios. We evaluate all models on standard datasets that are widely used in few-shot learning: Omniglot, mini-ImageNet, and CUB. In addition, we evaluate all models on a CIFAR-derived dataset with two splits: a hard split and a normal split (which we elaborate on later).

Evaluation. Our evaluation follows Chen et al. (2019): we sample N classes to form N-way classification (with N = 5 unless otherwise stated). For the k-shot task, we pick k labeled instances per class as the support set and 16 instances per class for the query set. All results are averaged over 600 experiments that follow the above settings. We evaluate all models in the 1-shot and 5-shot settings, the most common settings adopted in few-shot learning.

Implementation details. All methods are trained using the Adam optimizer (Kingma & Ba, 2014) with an initial learning rate of 10^−3, which we halve every 2000 episodes.
We apply the following standard data augmentations on all datasets (except CIFAR): random crop, left-right flip, and color jittering. Following Snell et al. (2017), we use a four-layer convolutional backbone (Conv-4) with an input size of 84x84 as the feature extractor for all methods. We use the open-source implementation of Chen et al. (2019) for the other methods that we report. We pick the best performing model based on validation for the meta-learning methods, whereas for Baseline and Baseline++ (Chen et al., 2019), we follow the recommended settings prescribed in their paper. We train our model for 800 epochs. Our best performing model on all datasets, based on the validation set, uses the identity function for the value attention embedding hV, except on the CUB dataset, where it uses a BLSTM." }, { "heading": "4.1 STANDARD BENCHMARKS", "text": "Omniglot (Lake et al., 2011) consists of 1623 handwritten characters from 50 different alphabets. There are 20 examples per character, each drawn by a different person. We follow the evaluation procedure of Vinyals et al. (2016).

mini-ImageNet is a 100-class subset of the ImageNet (Deng et al., 2009) dataset (ILSVRC-12), first proposed by Vinyals et al. (2016). It consists of 600 images per class. Recent works follow the setting proposed by Ravi & Larochelle (2016), which consists of randomly selected 64 base, 16 validation, and 20 novel classes.

CUB, or Caltech-UCSD Birds 200-2011 (Wah et al., 2011), is a fine-grained classification dataset that consists of 200 classes (bird species) and 11,788 images in total. We follow the setting of Hilliard et al. (2018), which is composed of 100 base, 50 validation, and 50 novel classes.

CIFAR-FS is derived from the CIFAR dataset (Krizhevsky et al., 2009), which consists of 60,000 32x32 color images in 100 classes (belonging to 20 superclasses), with 600 images per class. Our split consists of randomly sampled 40 base, 15 validation, and 45 novel classes (details in the appendix)." }, { "heading": "4.2 HARDER BENCHMARKS", "text": "General setting. Our approach requires the source dataset to have at least two different levels of class granularity, such as the CIFAR dataset with its two levels of label granularity. ImageNet labels also form a hierarchy from which, through this method, several hard few-shot classification datasets can be derived. When different granularities of labels are absent, one may be able to construct new labels by exploiting a natural hierarchy that may be present.

Method. Consider a dataset of J labeled examples D = {(x1, y1^coarse, y1^fine), ..., (xJ, yJ^coarse, yJ^fine)} where the labels come from a two-level hierarchy: coarse-grained labels yi^coarse ∈ K^coarse and fine-grained labels yi^fine ∈ K^fine. Let K^fine_s denote the subset of labels in K^fine that belongs to coarse-grained label (superclass) s ∈ K^coarse.

For every coarse-grained label yi^coarse ∈ K^coarse, select some yi^fine ∈ K^fine that are subclasses of yi^coarse (i.e., members of K^fine_{yi^coarse}), producing a set of fine-grained labels drawn from all superclasses, K^fine_base. Construct the base set B = {(xi, yi^coarse) | (xi, yi^coarse, yi^fine) ∈ D, yi^fine ∈ K^fine_base}. The novel set can be built by taking the remaining unused data and pairing it with the fine-grained labels: N = {(xi, yi^fine) | (xi, yi^coarse, yi^fine) ∈ D, yi^fine ∉ K^fine_base}. The validation set is constructed in the same way as the novel set; we omit the details of its construction for simplicity. A code sketch of this construction follows.
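As referenced above, this is a minimal sketch of the split construction, assuming the dataset is available as a list of (x, y_coarse, y_fine) triples; the layout and names are illustrative assumptions, not the paper's tooling:

```python
import random
from collections import defaultdict

def build_hard_splits(dataset, k_base_per_super=2):
    """Build (base, novel) splits as in Section 4.2.

    dataset: list of (x, y_coarse, y_fine) triples from a two-level hierarchy.
    """
    fine_by_coarse = defaultdict(set)
    for _, y_c, y_f in dataset:
        fine_by_coarse[y_c].add(y_f)

    # Pick a few fine classes under every superclass; their data forms the
    # base set, labeled with the *coarse* labels.
    fine_base = set()
    for fines in fine_by_coarse.values():
        fine_base |= set(random.sample(sorted(fines), k_base_per_super))

    base = [(x, y_c) for x, y_c, y_f in dataset if y_f in fine_base]
    # Remaining data, paired with the *fine* labels, becomes the novel set
    # (a validation set would be carved out of it the same way).
    novel = [(x, y_f) for x, y_c, y_f in dataset if y_f not in fine_base]
    return base, novel
```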
This approach (illustrated in Figure 2) is advantageous because, in each task, the labels can vary from fine-grained to coarse-grained depending on the random selection. As such, methods that rely only on awareness of the overall novel tasks will likely fail, as they build a general embedding that works on average but is not optimized for the current task. On the other hand, a dynamic method that conditions the prediction on the support set of the given task will likely perform better.

CIFAR-Hard is derived from CIFAR-100 using the aforementioned method. We derive our harder benchmark from CIFAR-100 because it has two different levels of label granularity. The size of the dataset is also not too big, making it suitable for use as a benchmark. The details for each split are as follows (more detail in the appendix): 20 coarse-grained base classes from 40 fine-grained classes (derived from the entire set of 20 superclasses, 2 classes each); 15 validation classes (derived from 5 superclasses, 3 classes each); 45 novel classes (derived from 15 superclasses, 5 classes each)." }, { "heading": "4.3 RESULTS & DISCUSSIONS", "text": "Table 1 shows how our method fares against others. Our method performs best on a fine-grained classification task such as CUB and improves by a wide margin on its 1-shot classification task. As we suspected, there is an increasing need to focus on the differences between classes as the classification task becomes increasingly fine-grained. Despite the other methods also being trained on fine-grained classification tasks on the CUB dataset, our inductive bias still appears helpful in classifying similar-looking classes, as it further separates class representations by explicitly removing latent features shared among those classes. At a glance, it appears as if the 1-shot performance improvement of our method is significantly higher than its improvement in 5-shot. However, this is because Baseline++ uses retraining, which relies heavily on the availability of labeled data on the novel set. If we look at the rest of the methods, which are based on meta-learning (i.e., optimization-based and metric learning), they all suffer equally in both 1-shot and 5-shot, and especially in 1-shot, as it has higher variance. The difference between our method and the average of the other methods is consistent across both settings, at around 7%.

Our method also significantly surpasses competing methods on the harder benchmark, CIFAR-Hard. Its improvement is slightly higher in 5-shot, compared to the second-best performing model. This is expected: the larger the support set, the more likely our method is to find the most representative prototypes, as the variance of the sample mean decreases. Due to the way the dataset is constructed, methods must be able to dynamically adapt their classification function to the level of granularity presented in order to perform well on this dataset. As our method explicitly induces dissimilar prototypes, it fares significantly better than the others. Overall, there is an average drop of 3.7% in method performance between CIFAR-Hard and CIFAR-FS, confirming that CIFAR-Hard is, in fact, harder. On CIFAR-FS, the normal variant of the CIFAR dataset, our method performs slightly better on 5-shot classification.
On mini-ImageNet, our method performs comparably with the rest, while being slightly worse on the Omniglot dataset. This is expected, as adding more inductive bias may hurt asymptotic performance.

To summarize, our method performs best in its intended scenario: when the granularity of the labels is fine enough (or changing) that the model has to adapt to it dynamically. When the granularity of the labels is fixed, or when the labels are quite coarse, our method performs comparably to the others. When accuracy is already high, our method might fail to reach optimal asymptotic performance due to the additional constraint that we impose." }, { "heading": "5 CONCLUSION", "text": "We have proposed the Dissimilarity Network for few-shot learning, based on the idea of focusing on differences in the class representations. Our approach directly addresses the failure modes of few-shot classifiers that do not explicitly take the classification task at hand into account, which yield unsatisfactory results on tasks such as fine-grained novel classification with a coarse-grained base classification task. To demonstrate the necessity of building task-aware embeddings for such tasks, we constructed a challenging dataset, CIFAR-Hard, which we have shown to be harder than CIFAR-FS. The Dissimilarity Network introduces an architectural inductive bias that removes the components shared among classes from the prototypes by orthogonalizing them (i.e., removing their projected components) with respect to their leave-self-out weighted local average. Our method performs comparably to state-of-the-art methods on standard benchmarks such as Omniglot and mini-ImageNet, and substantially outperforms other methods on the CUB dataset and on the newly constructed CIFAR-Hard dataset."
}, { "heading": "A APPENDIX", "text": "A.1 SPLITS FOR CIFAR\nBase: forest, house, television, wolf, cloud, sweet pepper, dinosaur, tank, caterpillar, cup, sunflower, whale, can, bottle, road, crocodile, woman, bear, otter, willow tree, snail, aquarium fish, girl, trout, bowl, worm, pear, streetcar, castle, flatfish, lobster, turtle, poppy, orchid, man, seal, lamp, lawn mower, beetle, clock\nValidation: oak tree, kangaroo, mushroom, porcupine, squirrel, lizard, train, spider, keyboard, maple tree, bicycle, orange, lion, rabbit, motorcycle\nNovel: fox, boy, skyscraper, bridge, mouse, shrew, plain, possum, tiger, tulip, wardrobe, sea, couch, mountain, leopard, camel, shark, plate, dolphin, table, bee, pickup truck, palm tree, beaver, baby, bus, butterfly, ray, apple, cattle, crab, pine tree, raccoon, tractor, chair, rose, telephone, chimpanzee, snake, bed, hamster, skunk, cockroach, rocket, elephant\nA.2 SPLITS FOR CIFAR-HARD\nBase: aquatic mammals, fish, flowers, food containers, fruit and vegetables, household electrical devices, household furniture, insects, large carnivores, large manmade outdoor things, large natural outdoor scenes, large omnivores and herbivores, medium mammals, non-insect invertebrates, people, reptiles, small mammals, trees, vehicles 1, vehicles 2\nValidation: rabbit, hamster, bed, house, kangaroo, lamp, skyscraper, squirrel, castle, table, chimpanzee, telephone, television, wardrobe, elephant\nNovel: baby, beaver, beetle, bicycle, bottle, bus, butterfly, can, caterpillar, crocodile, cup, dolphin, flatfish, forest, girl, lion, lobster, man, mountain, oak tree, orange, orchid, otter, pear, pickup truck, pine tree, porcupine, possum, ray, rocket, rose, sea, shark, skunk, snail, snake, streetcar, sweet pepper, tank, tiger, tulip, turtle, willow tree, wolf, worm" } ]
2019
null
SP:3350fe8ea74f715832130fe2c8a5309b721aa24b
[ "This paper analyses a pitfall of current meta-learning algorithms, where the task can be inferred from the meta-training data alone, leaving the task-training data unused. Such a meta-learner would generalise well on the meta-training tasks, but will fail to generalise on new tasks at test time. This kind of overfitting is formalised as the memorization problem. This problem is implicitly resolved in current meta-learning algorithms by constructing mutually-exclusive meta-training tasks, which is not easy to construct in all scenarios. The paper introduces an information-theoretic meta-regularizer which forces information extraction from the task data (D) by restricting information flow from meta-parameters (\\theta) and input (x^*). Experimental evaluation with one gradient based and one contextual meta-learning method, on non-mutually-exclusive tasks bring out the mettle of the proposed regulariser. ", "This paper illustrates, identifies, and formally defines a memorization problem in meta-learning -- the model can simply memorize meta-training tasks and ignore meta-training train sets. The paper proposes to optimize the mutual information between testing predictions and the training data (given input and meta model), and upper bound it by imposing a information bottleneck between output and input+model. Unlike related work, this paper specifically is able to generalize to meta-test even when the meta-train dataset is not made confusing enough (i.e. even when model can learn well from test data in meta-train alone), making it applicable to use cases where it is hard to make the dataset confusing." ]
The ability to learn new concepts with small amounts of data is a critical aspect of intelligence that has proven challenging for deep learning methods. Meta-learning has emerged as a promising technique for leveraging data from previous tasks to enable efficient learning of new tasks. However, most meta-learning algorithms implicitly require that the meta-training tasks be mutually-exclusive, such that no single model can solve all of the tasks at once. For example, when creating tasks for few-shot image classification, prior work uses a per-task random assignment of image classes to N-way classification labels. If this is not done, the meta-learner can ignore the task training data and learn a single model that performs all of the meta-training tasks zero-shot, but does not adapt effectively to new image classes. This requirement means that the user must take great care in designing the tasks, for example by shuffling labels or removing task-identifying information from the inputs. In some domains, this makes meta-learning entirely inapplicable. In this paper, we address this challenge by designing a meta-regularization objective using information theory that places precedence on data-driven adaptation. This causes the meta-learner to decide what must be learned from the task training data and what should be inferred from the task testing input. By doing so, our algorithm can successfully use data from non-mutually-exclusive tasks to efficiently adapt to novel tasks. We demonstrate its applicability to both contextual and gradient-based meta-learning algorithms, and apply it in practical settings where applying standard meta-learning has been difficult. Our approach substantially outperforms standard meta-learning algorithms in these settings.
[ { "affiliations": [], "name": "Mingzhang Yin" }, { "affiliations": [], "name": "George Tucker" }, { "affiliations": [], "name": "Mingyuan Zhou" }, { "affiliations": [], "name": "Sergey Levine" }, { "affiliations": [], "name": "Chelsea Finn" } ]
[ { "authors": [ "Alessandro Achille", "Stefano Soatto" ], "title": "Emergence of invariance and disentanglement in deep representations", "venue": "The Journal of Machine Learning Research,", "year": 2018 }, { "authors": [ "Alexander A Alemi", "Ian Fischer", "Joshua V Dillon", "Kevin Murphy" ], "title": "Deep variational information bottleneck", "venue": "arXiv preprint arXiv:1612.00410,", "year": 2016 }, { "authors": [ "Ron Amit", "Ron Meir" ], "title": "Meta-learning by adjusting priors based on extended pac-bayes theory", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Charles Blundell", "Julien Cornebise", "Koray Kavukcuoglu", "Daan Wierstra" ], "title": "Weight uncertainty in neural networks", "venue": "arXiv preprint arXiv:1505.05424,", "year": 2015 }, { "authors": [ "Thomas M Cover", "Joy A Thomas" ], "title": "Elements of information theory", "venue": null, "year": 2012 }, { "authors": [ "Harrison Edwards", "Amos Storkey" ], "title": "Towards a neural statistician", "venue": "arXiv preprint arXiv:1606.02185,", "year": 2016 }, { "authors": [ "Li Fei-Fei" ], "title": "A bayesian approach to unsupervised one-shot learning of object categories", "venue": "In Proceedings Ninth IEEE International Conference on Computer Vision,", "year": 2003 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In Proceedings of the 34th International Conference on Machine LearningVolume", "year": 2017 }, { "authors": [ "Chelsea Finn", "Kelvin Xu", "Sergey Levine" ], "title": "Probabilistic model-agnostic meta-learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Tomer Galanti", "Lior Wolf", "Tamir Hazan" ], "title": "A theoretical framework for deep transfer learning", "venue": "Information and Inference: A Journal of the IMA,", "year": 2016 }, { "authors": [ "Marta Garnelo", "Dan Rosenbaum", "Chris J Maddison", "Tiago Ramalho", "David Saxton", "Murray Shanahan", "Yee Whye Teh", "Danilo J Rezende", "SM Eslami" ], "title": "Conditional neural processes", "venue": "arXiv preprint arXiv:1807.01613,", "year": 2018 }, { "authors": [ "Jonathan Gordon", "John Bronskill", "Matthias Bauer", "Sebastian Nowozin", "Richard E Turner" ], "title": "Metalearning probabilistic inference for prediction", "venue": "arXiv preprint arXiv:1805.09921,", "year": 2018 }, { "authors": [ "Erin Grant", "Chelsea Finn", "Sergey Levine", "Trevor Darrell", "Thomas Griffiths" ], "title": "Recasting gradientbased meta-learning as hierarchical bayes", "venue": "arXiv preprint arXiv:1801.08930,", "year": 2018 }, { "authors": [ "Simon Guiroy", "Vikas Verma", "Christopher Pal" ], "title": "Towards understanding generalization in gradientbased meta-learning", "venue": "arXiv preprint arXiv:1907.07287,", "year": 2019 }, { "authors": [ "James Harrison", "Apoorva Sharma", "Marco Pavone" ], "title": "Meta-learning priors for efficient online bayesian regression", "venue": "arXiv preprint arXiv:1807.08912,", "year": 2018 }, { "authors": [ "Muhammad Abdullah Jamal", "Guo-Jun Qi" ], "title": "Task agnostic meta-learning for few-shot learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Taesup Kim", "Jaesik Yoon", "Ousmane Dia", "Sungwoong Kim", "Yoshua Bengio", "Sungjin Ahn" ], "title": "Bayesian model-agnostic meta-learning", "venue": "arXiv preprint 
arXiv:1806.03836,", "year": 2018 }, { "authors": [ "Gregory Koch", "Richard Zemel", "Ruslan Salakhutdinov" ], "title": "Siamese neural networks for one-shot image recognition", "venue": "In ICML deep learning workshop,", "year": 2015 }, { "authors": [ "Anders Krogh", "John A Hertz" ], "title": "A simple weight decay can improve generalization", "venue": "In Advances in neural information processing systems,", "year": 1992 }, { "authors": [ "Brenden Lake", "Ruslan Salakhutdinov", "Jason Gross", "Joshua Tenenbaum" ], "title": "One shot learning of simple visual concepts", "venue": "In Proceedings of the annual meeting of the cognitive science society,", "year": 2011 }, { "authors": [ "Yoonho Lee", "Wonjae Kim", "Seungjin Choi" ], "title": "Discrete infomax codes for meta-learning", "venue": "arXiv preprint arXiv:1905.11656,", "year": 2019 }, { "authors": [ "David A McAllester" ], "title": "Pac-bayesian model averaging", "venue": "In COLT,", "year": 1999 }, { "authors": [ "Anusha Nagabandi", "Ignasi Clavera", "Simin Liu", "Ronald S Fearing", "Pieter Abbeel", "Sergey Levine", "Chelsea Finn" ], "title": "Learning to adapt in dynamic, real-world environments through metareinforcement learning", "venue": "arXiv preprint arXiv:1803.11347,", "year": 2018 }, { "authors": [ "Anastasia Pentina", "Christoph Lampert" ], "title": "A pac-bayesian bound for lifelong learning", "venue": "In International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Kate Rakelly", "Aurick Zhou", "Deirdre Quillen", "Chelsea Finn", "Sergey Levine" ], "title": "Efficient off-policy meta-reinforcement learning via probabilistic context variables", "venue": null, "year": 1903 }, { "authors": [ "Sachin Ravi", "Hugo Larochelle" ], "title": "Optimization as a model for few-shot learning", "venue": "In ICLR 2016,", "year": 2016 }, { "authors": [ "Adam Santoro", "Sergey Bartunov", "Matthew Botvinick", "Daan Wierstra", "Timothy Lillicrap" ], "title": "Metalearning with memory-augmented neural networks", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Jurgen Schmidhuber" ], "title": "Evolutionary principles in self-referential learning. On learning how to learn: The meta-meta-.", "venue": "hook.) Diploma thesis, Institut f. Informatik, Tech. Univ. Munich,", "year": 1987 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting. 
The journal of machine learning", "venue": null, "year": 1929 }, { "authors": [ "Joshua Brett Tenenbaum" ], "title": "A Bayesian framework for concept learning", "venue": "PhD thesis, Massachusetts Institute of Technology,", "year": 1999 }, { "authors": [ "Sebastian Thrun", "Lorien Pratt" ], "title": "Learning to learn", "venue": "Springer Science & Business Media,", "year": 2012 }, { "authors": [ "Naftali Tishby", "Noga Zaslavsky" ], "title": "Deep learning and the information bottleneck principle", "venue": "In 2015 IEEE Information Theory Workshop (ITW),", "year": 2015 }, { "authors": [ "Naftali Tishby", "Fernando C Pereira", "William Bialek" ], "title": "The information bottleneck method", "venue": "arXiv preprint physics/0004057,", "year": 2000 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "Oriol Vinyals", "Charles Blundell", "Timothy Lillicrap", "Daan Wierstra" ], "title": "Matching networks for one shot learning", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Yu Xiang", "Roozbeh Mottaghi", "Silvio Savarese" ], "title": "Beyond pascal: A benchmark for 3d object detection in the wild", "venue": "In IEEE Winter Conference on Applications of Computer Vision (WACV),", "year": 2014 }, { "authors": [ "Ruixiang Zhang", "Tong Che", "Zoubin Ghahramani", "Yoshua Bengio", "Yangqiu Song" ], "title": "Metagan: An adversarial approach to few-shot learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Luisa M Zintgraf", "Kyriacos Shiarlis", "Vitaly Kurin", "Katja Hofmann", "Shimon Whiteson" ], "title": "Fast context adaptation via meta-learning", "venue": "In Thirty-sixth International Conference on Machine Learning", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "The ability to learn new concepts and skills with small amounts of data is a critical aspect of intelligence that many machine learning systems lack. Meta-learning (Schmidhuber, 1987) has emerged as a promising approach for enabling systems to quickly learn new tasks by building upon experience from previous related tasks (Thrun & Pratt, 2012; Koch et al., 2015; Santoro et al., 2016; Ravi & Larochelle, 2016; Finn et al., 2017). Meta-learning accomplishes this by explicitly optimizing for few-shot generalization across a set of meta-training tasks. The meta-learner is trained such that, after being presented with a small task training set, it can accurately make predictions on test datapoints for that meta-training task.\nWhile these methods have shown promising results, current methods require careful design of the meta-training tasks to prevent a subtle form of task overfitting, distinct from standard overfitting in supervised learning. If the task can be accurately inferred from the test input alone, then the task training data can be ignored while still achieving low meta-training loss. In effect, the model will collapse to one that makes zero-shot decisions. This presents an opportunity for overfitting where the meta-learner generalizes on meta-training tasks, but fails to adapt when presented with training data from novel tasks. We call this form of overfitting the memorization problem in meta-learning because the meta-learner memorizes a function that solves all of the meta-training tasks, rather than learning to adapt.\nExisting meta-learning algorithms implicitly resolve this problem by carefully designing the metatraining tasks such that no single model can solve all tasks zero-shot; we call tasks constructed in this\nImplementation and examples available here: https://github.com/google-research/ google-research/tree/master/meta_learning_without_memorization.\nway mutually-exclusive. For example, forN -way classification, each task consists of examples from N randomly sampled classes. The N classes are labeled from 1 to N , and critically, for each task, we randomize the assignment of classes to labels {1, 2, . . . , N} (visualized in Appendix Figure 3). This ensures that the task-specific class-to-label assignment cannot be inferred from a test input alone. However, the mutually-exclusive tasks requirement places a substantial burden on the user to cleverly design the meta-training setup (e.g., by shuffling labels or omitting goal information). While shuffling labels provides a reasonable mechanism to force tasks to be mutually-exclusive with standard few-shot image classification datasets such as MiniImageNet (Ravi & Larochelle, 2016), this solution cannot be applied to all domains where we would like to utilize meta-learning. For example, consider meta-learning a pose predictor that can adapt to different objects: even if N different objects are used for meta-training, a powerful model can simply learn to ignore the training set for each task, and directly learn to predict the pose of each of the N objects. However, such a model would not be able to adapt to new objects at meta-test time.\nThe primary contributions of this work are: 1) to identify and formalize the memorization problem in meta-learning, and 2) to propose a meta-regularizer (MR) using information theory as a general approach for mitigating this problem without placing restrictions on the task distribution. 
We formally differentiate the meta-learning memorization problem from the overfitting problem in conventional supervised learning, and empirically show that naïve applications of standard regularization techniques do not solve the memorization problem in meta-learning. The key insight of our meta-regularization approach is that the model acquired when memorizing tasks is more complex than the model that results from task-specific adaptation, because the memorization model is a single model that simultaneously performs well on all tasks. It needs to contain in its weights all the information needed to do well on test points without looking at training points. Therefore, we would expect the information content of the weights of a memorization model to be larger, and hence the model should be more complex. As a result, we propose an objective that regularizes the information complexity of the meta-learned function class (motivated by Alemi et al. (2016); Achille & Soatto (2018)). Furthermore, we show that meta-regularization in MAML can be rigorously motivated by a PAC-Bayes bound on generalization. In a series of experiments on non-mutually-exclusive task distributions entailing both few-shot regression and classification, we find that memorization poses a significant challenge for both gradient-based (Finn et al., 2017) and contextual (Garnelo et al., 2018a) meta-learning methods, resulting in near-random performance on test tasks in some cases. Our meta-regularization approach enables both of these methods to achieve efficient adaptation and generalization, leading to substantial performance gains across the board on non-mutually-exclusive tasks." }, { "heading": "2 PRELIMINARIES", "text": "We focus on the standard supervised meta-learning problem (see, e.g., Finn et al. (2017)). Briefly, we assume tasks Ti are sampled from a task distribution p(T). During meta-training, for each task, we observe a set of training data Di = (xi, yi) and a set of test data D*i = (x*i, y*i), with xi = (xi1, . . . , xiK), yi = (yi1, . . . , yiK) sampled from p(x, y | Ti), and similarly for D*i. We denote the entire meta-training set as M = {Di, D*i}_{i=1..N}. The goal of meta-training is to learn a model for a new task T by leveraging what is learned during meta-training and a small amount of training data for the new task D. We use θ to denote the meta-parameters learned during meta-training and φ to denote the task-specific parameters that are computed based on the task training data.

Following Grant et al. (2018); Gordon et al. (2018), given a meta-training set M, we consider meta-learning algorithms that maximize the conditional likelihood q(ŷ* = y* | x*, θ, D), which is composed of three distributions: q(θ | M), which summarizes the meta-training data into a distribution on meta-parameters; q(φ | D, θ), which summarizes the per-task training set into a distribution on task-specific parameters; and q(ŷ* | x*, φ, θ), the predictive distribution. These distributions are learned to minimize

−(1/N) Σ_i E_{q(θ|M) q(φ|Di,θ)} [ (1/K) Σ_{(x*,y*)∈D*i} log q(ŷ* = y* | x*, φ, θ) ].   (1)

For example, in MAML (Finn et al., 2017), θ and φ are the weights of a predictor network, q(θ | M) is a delta function learned over the meta-training data, q(φ | D, θ) is a delta function centered at a point defined by gradient optimization, and φ parameterizes the predictor network q(ŷ* | x*, φ) (Grant et al., 2018). In particular, to determine the task-specific parameters φ, the task training data D and θ are used in the gradient-based adaptation step φ = θ + (α/K) Σ_{(x,y)∈D} ∇θ log q(y | x, φ = θ). A minimal sketch of this step follows.
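As referenced above, this is a minimal sketch of the MAML adaptation step using PyTorch autograd; the helper log_likelihood and all names are illustrative assumptions:

```python
import torch

def maml_adapt(theta, train_x, train_y, log_likelihood, alpha=0.01):
    """One adaptation step: phi = theta + (alpha/K) * sum_D grad_theta log q(y|x, theta).

    theta: list of tensors with requires_grad=True;
    log_likelihood(params, x, y): mean log q(y|x, params) over the K task points,
    so alpha * grad(mean) equals (alpha/K) * grad(sum).
    """
    loss = log_likelihood(theta, train_x, train_y)
    grads = torch.autograd.grad(loss, theta, create_graph=True)
    # create_graph=True keeps the adaptation differentiable, so the outer
    # meta-objective on (x*, y*) can backpropagate through phi into theta.
    return [t + alpha * g for t, g in zip(theta, grads)]
```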
Another family of meta-learning algorithms are contextual methods (Santoro et al., 2016), such as conditional neural processes (CNPs) (Garnelo et al., 2018b;a). A CNP instead defines q(φ | D, θ) as a mapping from D to a summary statistic φ (parameterized by θ). In particular, φ = aθ ◦ hθ(D) is the output of an aggregator aθ(·) applied to features hθ(D) extracted from the task training data. Then θ parameterizes a predictor network that takes φ and x* as input and produces a predictive distribution q(ŷ* | x*, φ, θ). A contextual model of this form is sketched below.

In the following sections, we describe a common pitfall for a variety of meta-learning algorithms, including MAML and CNP, and a general meta-regularization approach to prevent this pitfall.
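As referenced above, this is a minimal sketch of a CNP-style contextual model; layer sizes and names are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class CNPSketch(nn.Module):
    """phi = a(h(D)) is a permutation-invariant summary of the task training
    set; the predictor consumes (x*, phi)."""

    def __init__(self, x_dim, y_dim, h_dim=128):
        super().__init__()
        self.h = nn.Sequential(nn.Linear(x_dim + y_dim, h_dim), nn.ReLU(),
                               nn.Linear(h_dim, h_dim))
        self.predictor = nn.Sequential(nn.Linear(x_dim + h_dim, h_dim), nn.ReLU(),
                                       nn.Linear(h_dim, y_dim))

    def forward(self, train_x, train_y, test_x):
        feats = self.h(torch.cat([train_x, train_y], dim=-1))  # h_theta(D)
        phi = feats.mean(dim=0)                                # mean aggregator a_theta
        phi = phi.expand(test_x.shape[0], -1)                  # broadcast to queries
        return self.predictor(torch.cat([test_x, phi], dim=-1))
```

A model of this form can memorize exactly when the predictor learns to ignore phi, which is the failure mode characterized next.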
Complete memorization in meta-learning occurs when the learned model ignores the task training data, such that $I(\hat{y}^*;\mathcal{D}\mid x^*,\theta)=0$ (i.e., $q(\hat{y}^*\mid x^*,\theta,\mathcal{D}) = q(\hat{y}^*\mid x^*,\theta) = \mathbb{E}_{\mathcal{D}'\mid x^*}\left[q(\hat{y}^*\mid x^*,\theta,\mathcal{D}')\right]$).

Memorization describes an issue with overfitting the meta-training tasks, but it does not preclude the network from generalizing to unseen (x, y) pairs on tasks similar to the training tasks. Memorization becomes an undesired problem for generalization to new tasks when $I(y^*;\mathcal{D}\mid x^*) \gg I(\hat{y}^*;\mathcal{D}\mid x^*,\theta)$ (i.e., the task training data is necessary to make accurate predictions, even with exact inference under the data-generating distribution).

A model with the memorization problem may generalize to new datapoints in training tasks but cannot generalize to novel tasks, which distinguishes it from typical overfitting in supervised learning. In practice, we find that MAML and CNP frequently converge to this memorization solution (Table 2). For MAML, memorization can occur when a particular setting of θ that does not adapt to the task training data can achieve meta-training error comparable to a solution that adapts θ. For example, if a setting of θ can solve all of the meta-training tasks (i.e., for all (x, y) in D and D∗ the predictive error is close to zero), the optimization may converge to a stationary point of the MAML objective where minimal adaptation occurs based on the task training set (i.e., φ ≈ θ). For a novel task where it is necessary to use the task training data, MAML can in principle still leverage the task training data because the adaptation step is based on gradient descent. However, in practice, the poor initialization of θ can affect the model's ability to generalize from a small amount of data. For CNP, memorization can occur when the predictive distribution network q(ŷ∗|x∗, φ, θ) can achieve low training error without using the task training summary statistics φ. On a novel task, the network is not trained to use φ, so it is unable to use the information extracted from the task training set to generalize effectively.

In some problem domains, the memorization problem can be avoided by carefully constructing the tasks. For example, for N-way classification, each task consists of examples from N randomly sampled classes. If the classes are assigned to a random permutation of the N labels for each task, this ensures that the task-specific class-to-label assignment cannot be inferred from the test inputs alone. As a result, a model that ignores the task training data cannot achieve low training error, preventing convergence to the memorization problem. We refer to tasks constructed in this way as mutually-exclusive (a toy sketch contrasting the two constructions is given below). However, the mutually-exclusive tasks requirement places a substantial burden on the user to cleverly design the meta-training setup (e.g., by shuffling labels or omitting goal information) and cannot be applied to all domains where we would like to utilize meta-learning." }, { "heading": "4 META REGULARIZATION USING INFORMATION THEORY", "text": "At a high level, the sources of information in the predictive distribution q(ŷ∗|x∗, θ, D) are the input, the meta-parameters, and the data. The memorization problem occurs when the model encodes task information in the predictive network that is readily available from the task training set (i.e., it memorizes the task information for each meta-training task).
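Before describing our resolution, the following minimal sketch (ours, not the authors' code) makes the mutually-exclusive construction from Section 3 concrete; the toy data pool, the episode sampler, and the fixed-label rule `c % n_way` are all hypothetical simplifications of the disjoint-partition scheme the paper uses.

```python
# Contrast non-mutually-exclusive and mutually-exclusive N-way K-shot episodes.
import numpy as np

rng = np.random.default_rng(0)

def sample_episode(class_pool, n_way, k_shot, mutually_exclusive):
    """class_pool: dict mapping class id -> array of examples (toy data)."""
    classes = rng.choice(list(class_pool), size=n_way, replace=False)
    if mutually_exclusive:
        # A fresh random permutation of the N labels per task, so the
        # class-to-label assignment cannot be inferred from test inputs alone.
        labels = rng.permutation(n_way)
    else:
        # Each class keeps one fixed label across all tasks, so a single
        # network can memorize the mapping and ignore the task training data D.
        labels = np.array([int(c) % n_way for c in classes])
    support, query = [], []
    for c, lab in zip(classes, labels):
        idx = rng.choice(len(class_pool[c]), size=2 * k_shot, replace=False)
        support += [(class_pool[c][i], lab) for i in idx[:k_shot]]
        query += [(class_pool[c][i], lab) for i in idx[k_shot:]]
    return support, query

# Toy pool: 20 "classes" of 1-D dummy examples.
pool = {c: rng.normal(c, 0.1, size=(50, 1)) for c in range(20)}
D, D_star = sample_episode(pool, n_way=5, k_shot=1, mutually_exclusive=True)
```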
We could resolve this problem by encouraging the model to minimize the training error while relying on the task training dataset as much as possible for the prediction of y∗ (i.e., to maximize $I(\hat{y}^*;\mathcal{D}\mid x^*,\theta)$). Explicitly maximizing $I(\hat{y}^*;\mathcal{D}\mid x^*,\theta)$ requires an intractable marginalization over task training sets to compute q(ŷ∗|x∗, θ). Instead, we can implicitly encourage it by restricting the information flow from the other sources (x∗ and θ) to ŷ∗. To achieve both low error and low mutual information between ŷ∗ and (x∗, θ), the model must use the task training data D to make predictions, hence increasing the mutual information $I(\hat{y}^*;\mathcal{D}\mid x^*,\theta)$ and reducing memorization. In this section, we describe two tractable ways to achieve this." }, { "heading": "4.1 META REGULARIZATION ON ACTIVATIONS", "text": "Given θ, the statistical dependency between x∗ and ŷ∗ is controlled by the direct path from x∗ to ŷ∗ and the indirect path through D (see Figure 1), where the latter is desirable because it leverages the task training data. We can control the information flow between x∗ and ŷ∗ by introducing an intermediate stochastic bottleneck variable z∗ such that $q(\hat{y}^*\mid x^*,\phi,\theta) = \int q(\hat{y}^*\mid z^*,\phi,\theta)\,q(z^*\mid x^*,\theta)\,dz^*$ (Alemi et al., 2016), as shown in Figure 4. Now, we would like to maximize $I(\hat{y}^*;\mathcal{D}\mid z^*,\theta)$ to prevent memorization. We can bound this mutual information by

$$\begin{aligned} I(\hat{y}^*;\mathcal{D}\mid z^*,\theta) &\geq I(x^*;\hat{y}^*\mid \theta,z^*) = I(x^*;\hat{y}^*\mid\theta) - I(x^*;z^*\mid\theta) + I(x^*;z^*\mid\hat{y}^*,\theta) \\ &\geq I(x^*;\hat{y}^*\mid\theta) - I(x^*;z^*\mid\theta) = I(x^*;\hat{y}^*\mid\theta) - \mathbb{E}_{p(x^*)q(z^*|x^*,\theta)}\left[\log\frac{q(z^*\mid x^*,\theta)}{q(z^*\mid\theta)}\right] \\ &\geq I(x^*;\hat{y}^*\mid\theta) - \mathbb{E}\left[\log\frac{q(z^*\mid x^*,\theta)}{r(z^*)}\right] = I(x^*;\hat{y}^*\mid\theta) - \mathbb{E}\left[D_{\mathrm{KL}}(q(z^*\mid x^*,\theta)\,\|\,r(z^*))\right] \end{aligned} \tag{2}$$

where r(z∗) is a variational approximation to the marginal, and the first inequality follows from the statistical dependencies in our model (see Figure 4 and Appendix A.2 for the proof). By simultaneously minimizing $\mathbb{E}\left[D_{\mathrm{KL}}(q(z^*\mid x^*,\theta)\,\|\,r(z^*))\right]$ and maximizing the mutual information $I(x^*;\hat{y}^*\mid\theta)$, we can implicitly encourage the model to use the task training data D. For non-mutually-exclusive problems, the true label y∗ depends on x∗. If the model has the memorization problem and $I(x^*;\hat{y}^*\mid\theta)=0$, then q(ŷ∗|x∗, θ, D) = q(ŷ∗|x∗, θ) = q(ŷ∗|θ), which means the model predictions depend on neither x∗ nor D. Hence, in practical problems, the predictions generated from such a model will have low accuracy.

This suggests that minimizing the training loss in Eq. (1) can increase $I(\hat{y}^*;\mathcal{D}\mid x^*,\theta)$ or $I(x^*;\hat{y}^*\mid\theta)$. Replacing the maximization of $I(x^*;\hat{y}^*\mid\theta)$ in Eq. (2) with minimization of the training loss results in the following regularized training objective

$$\frac{1}{N}\sum_i \mathbb{E}_{q(\theta|\mathcal{M})\,q(\phi|\mathcal{D}_i,\theta)}\left[-\frac{1}{K}\sum_{(x^*,y^*)\in\mathcal{D}_i^*}\log q(\hat{y}^*=y^*\mid x^*,\phi,\theta) + \beta D_{\mathrm{KL}}(q(z^*\mid x^*,\theta)\,\|\,r(z^*))\right] \tag{3}$$

where log q(ŷ∗|x∗, φ, θ) is estimated by log q(ŷ∗|z∗, φ, θ) with z∗ ∼ q(z∗|x∗, θ), β modulates the regularizer, and r(z∗) can be set to N(z∗; 0, I). We refer to this regularizer as meta-regularization (MR) on the activations (a minimal sketch is given below).

As we demonstrate in Section 6, we find that this regularizer performs well, but in some cases can fail to prevent the memorization problem. Our hypothesis is that in these cases, the network can sidestep the information constraint by storing the prediction of y∗ in a part of z∗, which incurs only a small penalty in Eq. (3) and a small lower bound in Eq. (2)." }, { "heading": "4.2 META REGULARIZATION ON WEIGHTS", "text": "Alternatively, we can penalize the task information stored in the meta-parameters θ. Here, we provide an informal argument and give the complete argument in Appendix A.3.
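Before the argument, here is a minimal PyTorch sketch (ours, not the authors' implementation) of the per-example activation penalty from Eq. (3) in Section 4.1: the test input is passed through a stochastic Gaussian bottleneck via reparameterization, and the closed-form KL to the standard-normal prior r(z∗) is added to the task loss. The encoder heads, sizes, and β value are illustrative assumptions.

```python
import torch
import torch.nn as nn

class StochasticBottleneck(nn.Module):
    def __init__(self, in_dim, z_dim):
        super().__init__()
        self.mu = nn.Linear(in_dim, z_dim)        # hypothetical encoder heads
        self.log_var = nn.Linear(in_dim, z_dim)

    def forward(self, x):
        mu, log_var = self.mu(x), self.log_var(x)
        # Reparameterized sample z* ~ q(z*|x*, theta) = N(mu, sigma^2).
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        # KL(N(mu, sigma^2) || N(0, I)) in closed form, summed over z dims.
        kl = 0.5 * (mu.pow(2) + log_var.exp() - log_var - 1).sum(dim=-1)
        return z, kl

bottleneck = StochasticBottleneck(in_dim=128, z_dim=32)
x_star = torch.randn(16, 128)        # a toy batch of test inputs
z_star, kl = bottleneck(x_star)
beta = 1e-3                          # illustrative regularizer weight
task_nll = torch.zeros(16)           # placeholder for -log q(y*|z*, phi, theta)
objective = (task_nll + beta * kl).mean()
```

The weight-space variant developed next replaces this per-example KL with a single KL on a Gaussian distribution over θ itself (Eq. (5)).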
Analogous to the supervised setting (Achille & Soatto, 2018), given the meta-training dataset M, we consider θ as a random variable, where the randomness can be introduced by training stochasticity. We model the stochasticity over θ with a Gaussian distribution N(θ; θµ, θσ) with learned mean and variance parameters per dimension (Blundell et al., 2015; Achille & Soatto, 2018). By penalizing $I(y^*_{1:N},\mathcal{D}_{1:N};\theta\mid x^*_{1:N})$, we can limit the information about the training tasks stored in the meta-parameters θ and thus require the network to use the task training data to make accurate predictions. We can tractably upper bound it by

$$I(y^*_{1:N},\mathcal{D}_{1:N};\theta\mid x^*_{1:N}) = \mathbb{E}\left[\log\frac{q(\theta\mid\mathcal{M})}{q(\theta\mid x^*_{1:N})}\right] \leq \mathbb{E}\left[D_{\mathrm{KL}}(q(\theta\mid\mathcal{M})\,\|\,r(\theta))\right], \tag{4}$$

where r(θ) is a variational approximation to the marginal, which we set to N(θ; 0, I). In practice, we apply meta-regularization to the meta-parameters θ that are not used to adapt to the task training data and denote the other parameters as θ̃. In this way, we control the complexity of the network that can predict the test labels without using the task training data, but we do not limit the complexity of the network that processes the task training data. Our final meta-regularized objective can be written as

$$\frac{1}{N}\sum_i \mathbb{E}_{q(\theta;\theta_\mu,\theta_\sigma)\,q(\phi|\mathcal{D}_i,\tilde{\theta})}\left[-\frac{1}{K}\sum_{(x^*,y^*)\in\mathcal{D}_i^*}\log q(\hat{y}^*=y^*\mid x^*,\phi,\theta,\tilde{\theta}) + \beta D_{\mathrm{KL}}(q(\theta;\theta_\mu,\theta_\sigma)\,\|\,r(\theta))\right] \tag{5}$$

For MAML, we apply meta-regularization to the parameters uninvolved in the task adaptation. For CNP, we apply meta-regularization to the encoder parameters. The detailed algorithms are shown in Algorithms 1 and 2 in the appendix." }, { "heading": "4.3 DOES META REGULARIZATION LEAD TO BETTER GENERALIZATION?", "text": "Now that we have derived meta-regularization approaches for mitigating the memorization problem, we theoretically analyze whether meta-regularization leads to better generalization via a PAC-Bayes bound. In particular, we study meta-regularization (MR) on the weights (W) of MAML, i.e., MR-MAML (W), as a case study.

Meta-regularization on the weights of MAML uses a Gaussian distribution N(θ; θµ, θσ) to model the stochasticity in the weights. Given a task and task training data, the expected error is

$$er(\theta_\mu,\theta_\sigma,\mathcal{D},\mathcal{T}) = \mathbb{E}_{\theta\sim\mathcal{N}(\theta;\theta_\mu,\theta_\sigma),\,\phi\sim q(\phi|\theta,\mathcal{D}),\,(x^*,y^*)\sim p(x,y|\mathcal{T})}\left[L(x^*,y^*,\phi)\right], \tag{6}$$

where the prediction loss L(x∗, y∗, φ) is bounded.¹ Then, we would like to minimize the error on novel tasks

$$er(\theta_\mu,\theta_\sigma) = \mathbb{E}_{\mathcal{T}\sim p(\mathcal{T}),\,\mathcal{D}\sim p(x,y|\mathcal{T})}\left[er(\theta_\mu,\theta_\sigma,\mathcal{D},\mathcal{T})\right] \tag{7}$$

We only have a finite sample of training tasks, so computing er(θµ, θσ) is intractable, but we can form an empirical estimate:

$$\hat{er}(\theta_\mu,\theta_\sigma,\mathcal{D}_1,\mathcal{D}^*_1,\ldots,\mathcal{D}_n,\mathcal{D}^*_n) = \frac{1}{n}\sum_{i=1}^n \underbrace{\mathbb{E}_{\theta\sim\mathcal{N}(\theta;\theta_\mu,\theta_\sigma),\,\phi_i\sim q(\phi|\theta,\mathcal{D}_i)}\left[-\frac{1}{K}\sum_{(x^*,y^*)\in\mathcal{D}^*_i}\log q(\hat{y}^*=y^*\mid x^*,\phi_i)\right]}_{\hat{er}(\theta_\mu,\theta_\sigma,\mathcal{D}_i,\mathcal{D}^*_i)} \tag{8}$$

where for exposition we have assumed |D∗i| = K is the same for all tasks. We would like to relate er(θµ, θσ) and êr(θµ, θσ, D1, D∗1, ..., Dn, D∗n), but the challenge is that θµ and θσ are derived from the meta-training tasks D1, D∗1, ..., Dn, D∗n. There are two sources of generalization error: (i) error due to the finite number of observed tasks and (ii) error due to the finite number of examples observed per task. Closely following the arguments in (Amit & Meir, 2018), we apply a standard PAC-Bayes bound to each of these and combine the results with a union bound, resulting in the following theorem. Theorem 1. Let P(θ) be an arbitrary prior distribution over θ that does not depend on the meta-training data.
Then for any δ ∈ (0, 1], with probability at least 1 − δ, the following inequality holds uniformly for all choices of θµ and θσ:

$$er(\theta_\mu,\theta_\sigma) \leq \frac{1}{n}\sum_{i=1}^n \hat{er}(\theta_\mu,\theta_\sigma,\mathcal{D}_i,\mathcal{D}^*_i) + \left(\sqrt{\tfrac{1}{2(K-1)}} + \sqrt{\tfrac{1}{2(n-1)}}\right)\sqrt{D_{\mathrm{KL}}(\mathcal{N}(\theta;\theta_\mu,\theta_\sigma)\,\|\,P) + \log\tfrac{n(K+1)}{\delta}}, \tag{9}$$

where n is the number of meta-training tasks and K is the number of per-task validation datapoints.

We defer the proof to Appendix A.4. The key difference from the result in (Amit & Meir, 2018) is that we leverage the fact that the task data is split into training and validation sets.

In practice, we set P(θ) = r(θ) = N(θ; 0, I). If we can achieve a low value for the bound, then with high probability, our test error will also be low. As shown in Appendix A.4, by a first-order Taylor expansion of the second term of the RHS in Eq. (9) and setting the coefficient of the KL term as

$$\beta = \frac{\sqrt{1/(2(K-1))} + \sqrt{1/(2(n-1))}}{2\sqrt{\log(n(K+1)/\delta)}},$$

we recover the MR-MAML (W) objective (Eq. (5)). β trades off the tightness of the generalization bound against the probability that it holds. The result of this bound suggests that the proposed meta-regularization on weights does indeed improve generalization on the meta-test set.

¹In practice, L(x∗, y∗, φ) is MSE on a bounded target space or classification accuracy. We optimize the negative log-likelihood as a bound on the 0-1 loss." }, { "heading": "5 RELATED WORK", "text": "Previous works have developed approaches for mitigating various forms of overfitting in meta-learning. These approaches aim to improve generalization in several ways: by reducing the number of parameters that are adapted in MAML (Zintgraf et al., 2019), by compressing the task embedding (Lee et al., 2019), through data augmentation from a GAN (Zhang et al., 2018), by using an auxiliary objective on task gradients (Guiroy et al., 2019), and via an entropy regularization objective (Jamal & Qi, 2019). These methods all focus on the setting with mutually-exclusive task distributions. We instead recognize and formalize the memorization problem, a particular form of overfitting that manifests itself with non-mutually-exclusive tasks, and offer a general and principled solution. Unlike prior methods, our approach is applicable to both contextual and gradient-based meta-learning methods. We additionally validate that prior regularization approaches, namely TAML (Jamal & Qi, 2019), are not effective in this problem setting.

Our derivation uses a Bayesian interpretation of meta-learning (Tenenbaum, 1999; Fei-Fei et al., 2003; Edwards & Storkey, 2016; Grant et al., 2018; Gordon et al., 2018; Finn et al., 2018; Kim et al., 2018; Harrison et al., 2018). Some Bayesian meta-learning approaches place a distributional loss on the inferred task variables to constrain them to a prior distribution (Garnelo et al., 2018b; Gordon et al., 2018; Rakelly et al., 2019), which amounts to an information bottleneck on the latent task variables. Similarly, Zintgraf et al. (2019); Lee et al. (2019); Guiroy et al. (2019) aim to produce simpler or more compressed task adaptation processes. Our approach does the opposite, penalizing information from the inputs and parameters, to encourage the task-specific variables to contain greater information driven by the per-task data.

We use PAC-Bayes theory to study the generalization error of meta-learning and meta-regularization.
Pentina & Lampert (2014) extend the single-task PAC-Bayes bound (McAllester, 1999) to the multi-task setting, which quantifies the gap between the empirical error on training tasks and the expected error on new tasks. More recent research shows that, with tightened generalization bounds as the training objective, algorithms can reduce the test error for mutually-exclusive tasks (Galanti et al., 2016; Amit & Meir, 2018). Our analysis differs from these prior works in that we only include the pre-update meta-parameters in the generalization bound rather than both pre-update and post-update parameters. In the derivation, we also explicitly consider the splitting of data into the task training set and task validation set, which is aligned with the practical setting.

The memorization problem differs from overfitting in conventional supervised learning in several aspects. First, memorization occurs at the task level rather than the datapoint level, and the model memorizes functions rather than labels. In particular, within a training task, the model can generalize to new datapoints, but it fails to generalize to new tasks. Second, the source of information for achieving generalization is different. For meta-learning, the information comes from both the meta-training data and the new task's training data, whereas in the standard supervised setting the information comes only from the training data. Finally, the aim of regularization is different. In the conventional supervised setting, regularization methods such as weight decay (Krogh & Hertz, 1992), dropout (Srivastava et al., 2014), the information bottleneck (Tishby et al., 2000; Tishby & Zaslavsky, 2015), and Bayes-by-Backprop (Blundell et al., 2015) are used to balance the network complexity against the information in the data. Meta-regularization, in contrast, governs the model complexity to avoid one complex model solving all tasks, while allowing the model's dependency on the task data to be complex. We further empirically validate this difference, finding that standard regularization techniques do not solve the memorization problem." }, { "heading": "6 EXPERIMENTS", "text": "In the experimental evaluation, we aim to answer the following questions: (1) How prevalent is the memorization problem across different algorithms and domains? (2) How does the memorization problem affect the performance of algorithms on non-mutually-exclusive task distributions? (3) Is our meta-regularization approach effective for mitigating the problem, and is it compatible with multiple types of meta-learning algorithms? (4) Is the problem of memorization empirically distinct from the standard overfitting problem?

To answer these questions, we propose several meta-learning problems involving non-mutually-exclusive task distributions, including two problems that are adapted from prior benchmarks with mutually-exclusive task distributions. We consider model-agnostic meta-learning (MAML) and conditional neural processes (CNP) as representative meta-learning algorithms. We study both variants of our method in combination with MAML and CNP. When comparing meta-learning algorithms with and without meta-regularization, we use the same neural network architecture, while other hyperparameters are tuned via cross-validation per problem." }, { "heading": "6.1 SINUSOID REGRESSION", "text": "First, we consider a toy sinusoid regression problem that is non-mutually-exclusive.
The data for each task is created in the following way: the amplitude A of the sinusoid is uniformly sampled from a set of 20 equally-spaced points $\{0.1, 0.3, \ldots, 4\}$; u is sampled uniformly from [−5, 5] and y is sampled from $\mathcal{N}(A\sin(u),\, 0.1^2)$. We provide both u and the amplitude A (as a one-hot vector) as input, i.e., x = (u, A). At test time, we expand the range of the tasks by randomly sampling the data-generating amplitude A uniformly from [0.1, 4] and use a random one-hot vector as the input to the network. The meta-training tasks are a proper subset of the meta-test tasks.

Without the additional amplitude input, both MAML and CNP can easily solve the task and generalize to the meta-test tasks. However, once we add the additional amplitude input, which indicates the task identity, we find that both MAML and CNP converge to the complete memorization solution and fail to generalize well to test data (Table 1 and Appendix Figures 7 and 8). Both meta-regularized MAML (MR-MAML) and meta-regularized CNP (MR-CNP) instead converge to a solution that adapts to the data and, as a result, greatly outperform the unregularized methods." }, { "heading": "6.2 POSE PREDICTION", "text": "To illustrate the memorization problem on a more realistic task, we create a multi-task regression dataset based on the Pascal 3D data (Xiang et al., 2014) (see Appendix A.5.1 for a complete description). We randomly select 50 objects for meta-training and the other 15 objects for meta-testing. For each object, we use MuJoCo (Todorov et al., 2012) to render images with random orientations of the instance on a table, visualized in Figure 1. For the meta-learning algorithm, the observation (x) is the 128×128 gray-scale image and the label (y) is the orientation relative to a fixed canonical pose. Because the number of objects in the meta-training dataset is small, it is straightforward for a single network to memorize the canonical pose for each training object and to infer the orientation from the input image, thus achieving a low meta-training error without using D. However, this solution performs poorly at test time because it has not seen the novel objects and their canonical poses.

Optimization modes and hyperparameter sensitivity. We choose the learning rate from {0.0001, 0.0005, 0.001} for each method and β from $\{10^{-6}, 10^{-5}, \ldots, 1\}$ for meta-regularization, and report the results with the best hyperparameters (as measured on the meta-validation set) for each method. In this domain, we find that the convergence point of the meta-learning algorithm is determined by both the optimization landscape of the objective and the training dynamics, which vary due to stochastic gradients and the random initialization. In particular, we observe that there are two modes of the objective, one that corresponds to complete memorization and one that corresponds to successful adaptation to the task data. As illustrated in the Appendix, we find that models that converge to a memorization solution have lower training error than solutions that use the task training data, indicating a clear need for meta-regularization. When the meta-regularization is on the activations, the solution that the algorithms converge to depends on the learning rate, while MR on the weights consistently converges to the adaptation solution (see Appendix Figure 9 for the sensitivity analysis). This suggests that MR on the activations is not always successful at preventing memorization.
Our hypothesis is that there exists a solution in which the bottlenecked activations encode only the prediction y∗ and discard other information. Such a solution can achieve both low training MSE and low regularization loss without using the task training data, particularly if the predicted label contains a small number of bits (i.e., because the activations will then have low information complexity).

However, note that this solution does not achieve low regularization error when applying MR to the weights, because the function needed to produce the predicted label does not have low information complexity. As a result, meta-regularization on the weights does not suffer from this pathology and is robust to different learning rates. Therefore, we use regularization on weights as the proposed methodology in the following experiments and in the algorithms in Appendix A.1.

Quantitative results. We compare MAML and CNP with their meta-regularized versions (Table 2). We additionally include fine-tuning as a baseline, which trains a single network on all the instances jointly and then fine-tunes on the task training data. Meta-learning with meta-regularization (on weights) outperforms all competing methods by a large margin. We show test error as a function of the meta-regularization coefficient β in Appendix Figure 2. The curve reflects the trade-off when changing the amount of information contained in the weights. This indicates that β gives a knob that allows us to tune the degree to which the model uses the data to adapt versus relying on the prior.

Comparison to standard regularization. We compare our meta-regularization with standard regularization techniques, weight decay (Krogh & Hertz, 1992) and Bayes-by-Backprop (Blundell et al., 2015), in Table 3. We observe that simply applying standard regularization to all the weights, as in conventional supervised learning, does not solve the memorization problem, which validates that the memorization problem differs from the standard overfitting problem." }, { "heading": "6.3 OMNIGLOT AND MINIIMAGENET CLASSIFICATION", "text": "Next, we study memorization in the few-shot classification problem by adapting the few-shot Omniglot (Lake et al., 2011) and MiniImagenet (Ravi & Larochelle, 2016; Vinyals et al., 2016) benchmarks to the non-mutually-exclusive setting. In the non-mutually-exclusive N-way K-shot classification problem, each class is (randomly) assigned a fixed classification label from 1 to N. For each task, we randomly select a corresponding class for each classification label and K task training data points and K task test data points from that class². This ensures that each class takes only one classification label across tasks and different tasks are non-mutually-exclusive (see Appendix A.5.2 for details).

We evaluate MAML, TAML (Jamal & Qi, 2019), MR-MAML (ours), fine-tuning, and a nearest neighbor baseline on non-mutually-exclusive classification tasks (Table 4). We find that MR-MAML significantly outperforms previous methods on all of these tasks. To better understand the problem, for the MAML variants, we calculate the pre-update accuracy (before adaptation on the task training data) on the meta-training data in Appendix Table 5. The high pre-update meta-training accuracy and low meta-test accuracy are evidence of the memorization problem for MAML and TAML, indicating that they learn models that ignore the task data.
In contrast, MR-MAML successfully controls the pre-update accuracy to be near chance and encourages the learner to use the task training data to achieve low meta-training error, resulting in good performance at meta-test time.

Finally, we verify that meta-regularization does not degrade performance on the standard mutually-exclusive task. We evaluate performance as a function of regularization strength on the standard 20-way 1-shot Omniglot task (Appendix Figure 10), and we find that small values of β lead to slight improvements over MAML. This indicates that meta-regularization substantially improves performance in the non-mutually-exclusive setting without degrading performance in other settings." }, { "heading": "7 CONCLUSION AND DISCUSSION", "text": "Meta-learning has achieved remarkable success in few-shot learning problems. However, we identify a pitfall of current algorithms: the need to create task distributions that are mutually exclusive. This requirement restricts the domains that meta-learning can be applied to. We formalize the failure mode, i.e., the memorization problem, that results from training on non-mutually-exclusive tasks and distinguish it as a function-level overfitting problem, in contrast to the standard label-level overfitting in supervised learning.

We illustrate the memorization problem with different meta-learning algorithms on a number of domains. To address the problem, we propose an algorithm-agnostic meta-regularization (MR) approach that leverages an information-theoretic perspective of the problem. The key idea is that by placing a soft restriction on the information flow from the meta-parameters to the prediction of test-set labels, we can encourage the meta-learner to use task training data during meta-training. We achieve this by controlling the complexity of the model prior to task adaptation.

The memorization issue is quite broad and is likely to occur in a wide range of real-world applications, for example, personalized speech recognition systems, learning robots that can adapt to different environments (Nagabandi et al., 2018), and learning goal-conditioned manipulation skills using trial-and-error data. Further, this challenge may also be prevalent in other conditional prediction problems, beyond meta-learning, an interesting direction for future study. By both recognizing the challenge of memorization and developing a general and lightweight approach for solving it, we believe that this work represents an important step towards making meta-learning algorithms applicable to and effective on any problem domain.

²We assume that the number of classes in the meta-training set is larger than N." }, { "heading": "ACKNOWLEDGEMENT", "text": "The authors would like to thank Alexander A. Alemi, Kevin Murphy, Luke Metz, Abhishek Kumar and the anonymous reviewers for helpful discussions and feedback. M. Yin and M. Zhou acknowledge the support of the U.S. National Science Foundation under Grant IIS-1812699." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 ALGORITHM", "text": "We present the detailed algorithm for meta-regularization on weights with conditional neural processes (CNP) in Algorithm 1 and with model-agnostic meta-learning (MAML) in Algorithm 2. For CNP, we add the regularization on the weights θ of the encoder and leave the other weights θ̃ unrestricted. For MAML, we similarly regularize the weights θ from the input to an intermediate hidden layer and leave the weights θ̃ for adaptation unregularized. In this way, we restrict the complexity of the pre-adaptation model, not the post-adaptation model.
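As a concrete illustration of the weight-space regularizer used in Algorithms 1 and 2 below, here is a minimal PyTorch sketch (ours, not the authors' code) of the learned Gaussian q(θ; θµ, θσ) with its KL penalty to r(θ) = N(0, I), as in Eq. (5); the parameter count and β value are placeholders.

```python
import torch
import torch.nn as nn

class GaussianMetaParams(nn.Module):
    def __init__(self, num_params):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(num_params))
        self.rho = nn.Parameter(torch.full((num_params,), -3.0))  # sigma = softplus(rho)

    def sample(self):
        # Reparameterized sample theta ~ N(mu, sigma^2), one draw per meta-iteration.
        sigma = nn.functional.softplus(self.rho)
        return self.mu + sigma * torch.randn_like(sigma)

    def kl_to_standard_normal(self):
        # KL(N(mu, sigma^2) || N(0, I)) in closed form, summed over dimensions.
        sigma = nn.functional.softplus(self.rho)
        return 0.5 * (self.mu.pow(2) + sigma.pow(2) - 2 * sigma.log() - 1).sum()

q_theta = GaussianMetaParams(num_params=4096)   # illustrative size
theta = q_theta.sample()                        # used by the regularized encoder
beta = 1e-4                                     # illustrative coefficient
task_nll = torch.tensor(0.0)                    # placeholder for the averaged NLL term
meta_objective = task_nll + beta * q_theta.kl_to_standard_normal()
```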
Algorithm 1: Meta-Regularized CNP
Input: task distribution p(T); encoder weight distribution q(θ; τ) = N(θ; τ) with Gaussian parameters τ = (θµ, θσ); prior distribution r(θ) and Lagrange multiplier β; θ̃, which parameterizes the feature extractor hθ̃(·) and decoder Tθ̃(·); step size α.
Output: network parameters τ, θ̃.
1. Initialize τ, θ̃ randomly.
2. While not converged:
   a. Sample a mini-batch of tasks {Ti} from p(T).
   b. Sample θ ∼ q(θ; τ) with reparameterization.
   c. For each Ti:
      i. Sample Di = (xi, yi) and D∗i = (x∗i, y∗i) from Ti.
      ii. Encode observations zi = gθ(xi), z∗i = gθ(x∗i).
      iii. Compute the task context φi = a(hθ̃(zi, yi)) with aggregator a(·).
   d. Update θ̃ ← θ̃ + α ∇θ̃ Σi log q(y∗i | Tθ̃(z∗i, φi)).
   e. Update τ ← τ + α ∇τ [Σi log q(y∗i | Tθ̃(z∗i, φi)) − β DKL(q(θ; τ) ‖ r(θ))].

Algorithm 2: Meta-Regularized MAML
Input: task distribution p(T); weight distribution q(θ; τ) = N(θ; τ) with Gaussian parameters τ = (θµ, θσ); prior distribution r(θ) and Lagrange multiplier β; step sizes α, α′.
Output: network parameters τ, θ̃.
1. Initialize τ, θ̃ randomly.
2. While not converged:
   a. Sample a mini-batch of tasks {Ti} from p(T).
   b. Sample θ ∼ q(θ; τ) with reparameterization.
   c. For each Ti:
      i. Sample Di = (xi, yi) and D∗i = (x∗i, y∗i) from Ti.
      ii. Encode observations zi = gθ(xi), z∗i = gθ(x∗i).
      iii. Compute the task-specific parameters φi = θ̃ + α′ ∇θ̃ log q(yi | zi, θ̃).
   d. Update θ̃ ← θ̃ + α ∇θ̃ Σi log q(y∗i | z∗i, φi).
   e. Update τ ← τ + α ∇τ [Σi log q(y∗i | z∗i, φi) − β DKL(q(θ; τ) ‖ r(θ))].

Algorithm 3: Meta-Regularized Methods at Meta-Test Time
Input: meta-testing task T with training data D = (x, y) and test input x∗; optimized parameters τ, θ̃.
Output: prediction ŷ∗.
1. For k = 1 to K:
   a. Sample θk ∼ q(θ; τ).
   b. Encode observations zk = gθk(x), z∗k = gθk(x∗).
   c. Compute the task-specific parameters φk = a(hθ̃(zk, y)) for MR-CNP, and φk = θ̃ + α′ ∇θ̃ log q(y | zk, θ̃) for MR-MAML.
   d. Predict ŷ∗k ∼ q(ŷ∗ | z∗k, φk, θ̃).
2. Return the prediction ŷ∗ = (1/K) Σk ŷ∗k." }, { "heading": "A.2 META REGULARIZATION ON ACTIVATIONS", "text": "We show that I(x∗; ŷ∗|z∗, θ) ≤ I(ŷ∗; D|z∗, θ). By Figure 4, we have I(ŷ∗; x∗|θ, D, z∗) = 0. By the chain rule of mutual information, we have

$$\begin{aligned} I(\hat{y}^*;\mathcal{D}\mid z^*,\theta) &= I(\hat{y}^*;\mathcal{D}\mid z^*,\theta) + I(\hat{y}^*;x^*\mid\mathcal{D},\theta,z^*) \\ &= I(\hat{y}^*;x^*,\mathcal{D}\mid\theta,z^*) \\ &= I(x^*;\hat{y}^*\mid\theta,z^*) + I(\hat{y}^*;\mathcal{D}\mid x^*,\theta,z^*) \\ &\geq I(x^*;\hat{y}^*\mid\theta,z^*) \end{aligned} \tag{10}$$" }, { "heading": "A.3 META REGULARIZATION ON WEIGHTS", "text": "Similarly to Achille & Soatto (2018), we use ξ to denote the unknown parameters of the true data-generating distribution. This defines a joint distribution p(ξ, M, θ) = p(ξ)p(M|ξ)q(θ|M). Furthermore, we have a predictive distribution $q(\hat{y}^*\mid x^*,\mathcal{D},\theta) = \mathbb{E}_{\phi|\theta,\mathcal{D}}\left[q(\hat{y}^*\mid x^*,\phi,\theta)\right]$.

The meta-training loss in Eq. (1) is an upper bound for the cross entropy $H_{p,q}(y^*_{1:N}\mid x^*_{1:N},\mathcal{D}_{1:N},\theta)$. Using an information decomposition of the cross entropy (Achille & Soatto, 2018), we have

$$\begin{aligned} H_{p,q}(y^*_{1:N}\mid x^*_{1:N},\mathcal{D}_{1:N},\theta) =\; & H(y^*_{1:N}\mid x^*_{1:N},\mathcal{D}_{1:N},\xi) + I(\xi;y^*_{1:N}\mid x^*_{1:N},\mathcal{D}_{1:N},\theta) \\ & + \mathbb{E}\left[D_{\mathrm{KL}}(p(y^*_{1:N}\mid x^*_{1:N},\mathcal{D}_{1:N},\theta)\,\|\,q(y^*_{1:N}\mid x^*_{1:N},\mathcal{D}_{1:N},\theta))\right] \\ & + I(\mathcal{D}_{1:N};\theta\mid x^*_{1:N},\xi) - I(y^*_{1:N},\mathcal{D}_{1:N};\theta\mid x^*_{1:N},\xi). \end{aligned} \tag{11}$$

Here the only negative term is $I(y^*_{1:N},\mathcal{D}_{1:N};\theta\mid x^*_{1:N},\xi)$, which quantifies the information that the meta-parameters contain about the meta-training data beyond what can be inferred from the data-generating parameters (i.e., memorization). Without proper regularization, the cross entropy loss can be minimized by maximizing this term.
We can control its value by upper bounding it:

$$I(y^*_{1:N},\mathcal{D}_{1:N};\theta\mid x^*_{1:N},\xi) = \mathbb{E}\left[\log\frac{q(\theta\mid\mathcal{M},\xi)}{q(\theta\mid x^*_{1:N},\xi)}\right] = \mathbb{E}\left[\log\frac{q(\theta\mid\mathcal{M})}{q(\theta\mid x^*_{1:N},\xi)}\right] = \mathbb{E}\left[D_{\mathrm{KL}}(q(\theta\mid\mathcal{M})\,\|\,q(\theta\mid x^*_{1:N},\xi))\right] \leq \mathbb{E}\left[D_{\mathrm{KL}}(q(\theta\mid\mathcal{M})\,\|\,r(\theta))\right],$$

where the second equality follows because θ and ξ are conditionally independent given M. This gives the regularization in Section 4.2." }, { "heading": "A.4 PROOF OF THE PAC-BAYES GENERALIZATION BOUND", "text": "First, we prove a more general result and then specialize it. The goal of the meta-learner is to extract information about the meta-training tasks and the test task's training data to serve as a prior for test examples from the novel task. This information will be in the form of a distribution Q over possible models. When learning a new task, the meta-learner uses the task training data D and a model parameterized by θ (sampled from Q(θ)), and outputs a distribution q(φ|D, θ) over models. Our goal is to learn Q such that it performs well on novel tasks.

To formalize this, define

$$er(Q,\mathcal{D},\mathcal{T}) = \mathbb{E}_{\theta\sim Q(\theta),\,\phi\sim q(\phi|\theta,\mathcal{D}),\,(x^*,y^*)\sim p(x,y|\mathcal{T})}\left[L(\phi(x^*), y^*)\right] \tag{12}$$

where L(φ(x∗), y∗) is a loss bounded in [0, 1]. Then, we would like to minimize, over Q, the error on novel tasks

$$er(Q) = \mathbb{E}_{\mathcal{T}\sim p(\mathcal{T}),\,\mathcal{D}\sim p(x,y|\mathcal{T})}\left[er(Q,\mathcal{D},\mathcal{T})\right] \tag{13}$$

Because we only have a finite training set, computing er(Q) is intractable, but we can form an empirical estimate:

$$\hat{er}(Q,\mathcal{D}_1,\mathcal{D}^*_1,\ldots,\mathcal{D}_n,\mathcal{D}^*_n) = \frac{1}{n}\sum_{i=1}^n \underbrace{\mathbb{E}_{\theta\sim Q(\theta),\,\phi_i\sim q(\phi|\theta,\mathcal{D}_i)}\left[\frac{1}{K}\sum_{(x^*,y^*)\in\mathcal{D}^*_i} L(\phi_i(x^*), y^*)\right]}_{\hat{er}(Q,\mathcal{D}_i,\mathcal{D}^*_i)} \tag{14}$$

where for exposition we assume K = |D∗i| is the same for all i. We would like to relate er(Q) and êr(Q, D1, D∗1, ..., Dn, D∗n), but the challenge is that Q may depend on D1, D∗1, ..., Dn, D∗n due to the learning algorithm. There are two sources of generalization error: (i) error due to the finite number of observed tasks and (ii) error due to the finite number of examples observed per task. Closely following the arguments in (Amit & Meir, 2018), we apply a standard PAC-Bayes bound to each of these and combine the results with a union bound.

Theorem. Let Q(θ) be a distribution over parameters θ and let P(θ) be a prior distribution. Then for any δ ∈ (0, 1], with probability at least 1 − δ, the following inequality holds uniformly for all distributions Q:

$$er(Q) \leq \frac{1}{n}\sum_{i=1}^n \hat{er}(Q,\mathcal{D}_i,\mathcal{D}^*_i) + \left(\sqrt{\tfrac{1}{2(K-1)}} + \sqrt{\tfrac{1}{2(n-1)}}\right)\sqrt{D_{\mathrm{KL}}(Q\,\|\,P) + \log\tfrac{n(K+1)}{\delta}} \tag{15}$$

Proof. To start, we state a classical PAC-Bayes bound and use it to derive bounds on task-level and datapoint-level generalization, respectively.

Theorem 2. Let X be a sample space (i.e., a space of possible datapoints). Let P(X) be a distribution over X (i.e., a data distribution). Let Θ be a hypothesis space. Given a “loss function” l(θ, X) : Θ × X → [0, 1] and a collection of M i.i.d. random variables X1, ..., XM sampled from P(X), let π be a prior distribution over hypotheses in Θ that does not depend on the samples but may depend on the data distribution P(X). Then, for any δ ∈ (0, 1], the following bound holds uniformly for all posterior distributions ρ over Θ:

$$\mathbb{P}\left(\mathbb{E}_{X_i\sim P(X),\,\theta\sim\rho(\cdot)}\left[l(\theta,X_i)\right] \leq \frac{1}{M}\sum_{m=1}^M \mathbb{E}_{\theta\sim\rho(\cdot)}\left[l(\theta,X_m)\right] + \sqrt{\frac{1}{2(M-1)}\left(D_{\mathrm{KL}}(\rho\,\|\,\pi) + \log\frac{M}{\delta}\right)},\ \forall\rho\right) \geq 1-\delta. \tag{16}$$

Meta-level generalization. First, we bound the task-level generalization; that is, we relate er(Q) to $\frac{1}{n}\sum_{i=1}^n er(Q,\mathcal{D}_i,\mathcal{T}_i)$.
Letting the samples be Xi = (Di, Ti) and $l(\theta, X_i) = \mathbb{E}_{\phi_i\sim q(\phi|\mathcal{D}_i,\theta),\,(x^*,y^*)\sim\mathcal{T}_i}\left[L(\phi_i(x^*), y^*)\right]$, Theorem 2 says that for any δ0 ∈ (0, 1],

$$\mathbb{P}\left(er(Q) \leq \frac{1}{n}\sum_{i=1}^n er(Q,\mathcal{D}_i,\mathcal{T}_i) + \sqrt{\frac{1}{2(n-1)}\left(D_{\mathrm{KL}}(Q\,\|\,P) + \log\frac{n}{\delta_0}\right)},\ \forall Q\right) \geq 1-\delta_0, \tag{17}$$

where P is a prior over θ.

Within-task generalization. Next, we relate er(Q, Di, Ti) to êr(Q, Di, D∗i) via the PAC-Bayes bound. For a fixed task i, task training data Di, a prior π(φ|Ti) that depends only on the training data, and any δi ∈ (0, 1], we have that

$$\mathbb{P}\left(\mathbb{E}_{(x^*,y^*)\sim p(x,y|\mathcal{T}_i),\,\rho(\phi_i)}\left[L(\phi_i(x^*), y^*)\right] \leq \mathbb{E}_{\rho(\phi_i)}\left[\frac{1}{K}\sum_{(x^*,y^*)\in\mathcal{D}^*_i} L(\phi_i(x^*), y^*)\right] + \sqrt{\frac{1}{2(K-1)}\left(D_{\mathrm{KL}}(\rho\,\|\,\pi) + \log\frac{K}{\delta_i}\right)},\ \forall\rho\right) \geq 1-\delta_i.$$

Now, we choose π(φ|Ti) to be $\int P(\theta)\,q(\phi|\theta,\mathcal{D}_i)\,d\theta$ and restrict ρ(φ) to be of the form $\int Q(\theta)\,q(\phi|\theta,\mathcal{D}_i)\,d\theta$ for any Q. While π and ρ may be complicated distributions (especially if they are defined implicitly), we know that with this choice of π and ρ, $D_{\mathrm{KL}}(\rho\,\|\,\pi) \leq D_{\mathrm{KL}}(Q\,\|\,P)$ (Cover & Thomas, 2012); hence, we have

$$\mathbb{P}\left(er(Q,\mathcal{D}_i,\mathcal{T}_i) \leq \hat{er}(Q,\mathcal{D}_i,\mathcal{D}^*_i) + \sqrt{\frac{1}{2(K-1)}\left(D_{\mathrm{KL}}(Q\,\|\,P) + \log\frac{K}{\delta_i}\right)},\ \forall Q\right) \geq 1-\delta_i \tag{18}$$

Overall bound on meta-learner generalization. Combining Eqs. (17) and (18) using the union bound, we have

$$\mathbb{P}\left(er(Q) \leq \frac{1}{n}\sum_{i=1}^n \hat{er}(Q,\mathcal{D}_i,\mathcal{D}^*_i) + \sqrt{\frac{1}{2(K-1)}\left(D_{\mathrm{KL}}(Q\,\|\,P) + \log\frac{K}{\delta_i}\right)} + \sqrt{\frac{1}{2(n-1)}\left(D_{\mathrm{KL}}(Q\,\|\,P) + \log\frac{n}{\delta_0}\right)},\ \forall Q\right) \geq 1-\left(\sum_i \delta_i + \delta_0\right) \tag{19}$$

Choosing $\delta_0 = \frac{\delta}{K+1}$ and $\delta_i = \frac{K\delta}{n(K+1)}$, we then have:

$$\mathbb{P}\left(er(Q) \leq \frac{1}{n}\sum_{i=1}^n \hat{er}(Q,\mathcal{D}_i,\mathcal{D}^*_i) + \left(\sqrt{\tfrac{1}{2(K-1)}} + \sqrt{\tfrac{1}{2(n-1)}}\right)\sqrt{D_{\mathrm{KL}}(Q\,\|\,P) + \log\tfrac{n(K+1)}{\delta}},\ \forall Q\right) \geq 1-\delta. \tag{20}$$

Because n is generally large, by a Taylor expansion of the complexity term we have

$$\left(\sqrt{\tfrac{1}{2(K-1)}} + \sqrt{\tfrac{1}{2(n-1)}}\right)\sqrt{D_{\mathrm{KL}}(Q\,\|\,P) + \log\tfrac{n(K+1)}{\delta}} = \frac{1}{2\sqrt{\log(n(K+1)/\delta)}}\left(\sqrt{\tfrac{1}{2(K-1)}} + \sqrt{\tfrac{1}{2(n-1)}}\right)\left(D_{\mathrm{KL}}(Q\,\|\,P) + 2\log\tfrac{n(K+1)}{\delta}\right) + o(1)$$

Re-defining the coefficient of the KL term as β and omitting the constant and higher-order terms, we recover the meta-regularization bound in Eq. (5) when Q(θ) = N(θ; θµ, θσ)." }, { "heading": "A.5 EXPERIMENTAL DETAILS", "text": "" }, { "heading": "A.5.1 POSE PREDICTION", "text": "We create a multi-task regression dataset based on the Pascal 3D data (Xiang et al., 2014). The dataset consists of 10 classes of 3D objects such as “aeroplane”, “sofa”, “TV monitor”, etc. Each class has multiple different objects, and there are 65 objects in total. We randomly select 50 objects for meta-training and the other 15 objects for meta-testing. For each object, we use MuJoCo (Todorov et al., 2012) to render 100 images with random orientations of the instance on a table, visualized in Figure 1. For the meta-learning algorithm, the observation (x) is the 128 × 128 gray-scale image and the label (y) is the orientation re-scaled to be within [0, 10]. For each task, we randomly sample 30 (x, y) pairs for an object and evenly split them between the task training and task test data. We use a meta batch-size of 10 tasks per iteration.

For MR-CNP, we use a convolutional encoder with a fully connected bottom layer to map the input image to a 20-dimensional latent representation z and z∗ for the task training input x and test input x∗, respectively. The pairs (z, y) are concatenated and mapped by the feature extractor and aggregator, which are fully connected networks, to the 200-dimensional task summary statistics φ. The decoder is a fully connected network that maps (φ, z∗) to the prediction ŷ∗.

For MR-MAML, we use a convolutional encoder to map the input image to a 14 × 14-dimensional latent representation z and z∗. The pairs (z, y) are used in the task adaptation step to obtain a task-specific parameter φ via gradient descent.
Then z∗ is mapped to the prediction ŷ∗ with a convolutional predictor parameterized by φ. The network is trained using 5 gradient steps with learning rate 0.01 in the inner loop for adaptation, and evaluated using 20 gradient steps at test time." }, { "heading": "A.5.2 NON-MUTUALLY-EXCLUSIVE CLASSIFICATION", "text": "The Omniglot dataset consists of 20 instances of 1623 characters from 50 different alphabets. We randomly choose 1200 characters for meta-training and use the remaining characters for testing. The meta-training characters are partitioned into 60 disjoint sets for 20-way classification. The MiniImagenet dataset contains 100 classes of images, including 64 training classes, 12 validation classes, and 24 test classes. We randomly partition the 64 meta-training classes into 13 disjoint sets for 5-way classification, with one label having one fewer class of images than the others.

For MR-MAML we use a convolutional encoder similar to the pose prediction problem. The dimension of z and z∗ is 14 × 14 for Omniglot and 20 × 20 for MiniImagenet. We use a convolutional decoder for both datasets. Following (Finn et al., 2017), we use a meta batch-size of 16 for 20-way Omniglot classification and a meta batch-size of 4 for 5-way MiniImagenet classification. The meta-learning rate is chosen from {0.001, 0.005} and the β for meta-regularized methods is chosen from $\{10^{-7}, 10^{-6}, \ldots, 10^{-3}\}$. The optimal hyperparameters are chosen for each method separately via cross-validation." }, { "heading": "A.6 ADDITIONAL ILLUSTRATION AND GRAPHICAL MODEL", "text": "We show a standard few-shot classification setup in meta-learning to illustrate a mutually-exclusive task distribution, and a graphical model for the regularization on the activations." }, { "heading": "A.7 ADDITIONAL RESULTS", "text": "As shown in Figures 5, 7 and 8, when meta-learning algorithms converge to the memorization solution, the test tasks must be similar to the training tasks in order to achieve low test error. For CNP, although the task training set contains sufficient information to infer the correct amplitude, this information is ignored and the regression curve at test time is determined by the one-hot vector. As a result, CNP can only generalize to points from the curves it has seen in training (Figure 7, first row). On the other hand, MAML does use the task training data (Figures 5, 8 and Table 1); however, its performance is much worse than on the mutually-exclusive task. MR-MAML and MR-CNP avoid converging to a memorization solution and achieve excellent test performance on the sinusoid task." }, { "heading": "MR-CNP (A) MR-CNP (A) MR-CNP (W) MR-CNP (W)", "text": "In Table 5, we report the pre-update accuracy for the non-mutually-exclusive classification experiment in Section 6.3. The pre-update accuracy is obtained with the initial parameters θ rather than the task-adapted parameters φ. At meta-training time, for both MAML and MR-MAML the post-update accuracy obtained using φ gets close to 1. High pre-update accuracy reflects the memorization problem. For example, in the 20-way 1-shot Omniglot example, the pre-update accuracy for MAML is 99.2% at training time, which means only a 0.8% improvement in accuracy is due to adaptation, so the task training data is ignored to a large extent. The pre-update training accuracy for MR-MAML is 5%, which means a 95% improvement in accuracy during training is due to the adaptation.
This explains why, in Table 4, the test accuracy of MR-MAML is much higher than that of MAML at test time: the task training data is used to achieve fast adaptation. " } ]
2020
META-LEARNING WITHOUT MEMORIZATION
SP:a9c2860abb6a9df585aecea0dfb9a833458f184f
[ "In this paper the authors propose a method for training neural networks using evolutionary methods. The aim of developing this method is to provide a biological alternative to backpropagation. The authors prove that their method converges and with high probability succeeds in learning linear classification problems. Another method is also proposed which is linked to dopaminergic neurons.", "This paper argues that Artificial Neural Networks (ANNs) lack biological plausibility because of the backpropagation process. Therefore, the authors provide an alternative approach, named neural net evolution (NNE), that follows evolutionary theory. This approach uses a large number of genotypes (in the form of vectors with binary alleles) that evolve over time during training. It does not require calculating the gradient explicitly. The authors have conducted experiments on MNIST using an ANN with only one hidden layer. The experimental results show that the NNE can learn the classification task reasonably well, considering that no explicit backpropagation is used. " ]
Artificial neural networks (ANNs) lack biological plausibility, chiefly because backpropagation requires a variant of plasticity (precise changes of the synaptic weights informed by neural events that occur downstream in the neural circuit) that is profoundly incompatible with the current understanding of the animal brain. Here we propose that backpropagation can happen in evolutionary time, instead of during a lifetime, in what we call neural net evolution (NNE). In NNE the weights of the links of the neural net are sparse linear functions of the animal's genes, where each gene has two alleles, 0 and 1. In each generation, a population is generated at random based on current allele frequencies, and it is tested on the learning task. The relative performance of the two alleles of each gene over the whole population is determined, and the allele frequencies are updated via the standard population genetics equations for the weak selection regime. We prove that, under certain assumptions, NNE succeeds in learning simple labeling functions with high probability, and with polynomially many generations and individuals per generation. We test the NNE concept, with only one hidden layer, on MNIST with encouraging results. Finally, we explore a further version of biologically plausible ANNs inspired by the recent discovery in animals of dopaminergic plasticity: the strengthening of a synapse that fired when dopamine was released soon after the firing.
[]
[ { "authors": [ "Yoshua Bengio", "Dong-Hyun Lee", "Jorg Bornschein", "Zhouhan Lin" ], "title": "Towards biologically plausible deep learning", "venue": "arXiv preprint arXiv:1502.04156,", "year": 2015 }, { "authors": [ "Jeremy Bernstein", "Yu-Xiang Wang", "Kamyar Azizzadenesheli", "Animashree Anandkumar" ], "title": "SIGNSGD: compressed optimisation for non-convex problems", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Conrado Arturo Bosman", "Francisco Aboitiz" ], "title": "Functional constraints in the evolution of brain circuits", "venue": "In Front. Neurosci.,", "year": 2015 }, { "authors": [ "Reinhard Bürger" ], "title": "The mathematical theory of selection, recombination, and mutation, volume 228", "venue": null, "year": 2000 }, { "authors": [ "Erick Chastain", "Adi Livnat", "Christos Papadimitriou", "Umesh Vazirani" ], "title": "Algorithms, games, and evolution", "venue": "Proceedings of the National Academy of Sciences,", "year": 2014 }, { "authors": [ "Peter Diehl", "Matthew Cook" ], "title": "Unsupervised learning of digit recognition using spike-timing-dependent plasticity", "venue": "Frontiers in Computational Neuroscience,", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification", "venue": "IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13,", "year": 2015 }, { "authors": [ "Geoffrey E. Hinton" ], "title": "Turing award lecture: The deep learning revolution", "venue": null, "year": 2019 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Dmitry Krotov", "John J. Hopfield" ], "title": "Unsupervised learning by competing hidden units", "venue": "Proc. Natl. Acad. Sci. U.S.A.,", "year": 2019 }, { "authors": [ "Dong-Hyun Lee", "Saizheng Zhang", "Asja Fischer", "Yoshua Bengio" ], "title": "Difference target propagation", "venue": "In Machine Learning and Knowledge Discovery in Databases,", "year": 2015 }, { "authors": [ "Timothy P. Lillicrap", "Daniel Cownden", "Douglas Blair Tweed", "Colin J. Akerman" ], "title": "Random synaptic feedback weights support error backpropagation for deep learning", "venue": "In Nature communications,", "year": 2016 }, { "authors": [ "Ruta Mehta", "Ioannis Panageas", "Georgios Piliouras" ], "title": "Natural selection as an inhibitor of genetic diversity: Multiplicative weights updates algorithm and a conjecture of haploid genetics", "venue": "In Proceedings of the 2015 Conference on Innovations in Theoretical Computer Science,", "year": 2015 }, { "authors": [ "Arild Nøkland" ], "title": "Direct feedback alignment provides learning in deep neural networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Christos H. Papadimitriou", "Santosh S. Vempala" ], "title": "Random projection in the brain and computation with assemblies of neurons", "venue": "In 10th Innovations in Theoretical Computer Science Conference,", "year": 2019 }, { "authors": [ "Ali Rahimi", "Benjamin Recht" ], "title": "Random features for large-scale kernel machines", "venue": "Advances in Neural Information Processing Systems", "year": 2008 }, { "authors": [ "Kenneth O. 
Stanley", "Jeff Clune", "Joel Lehman", "Risto Miikkulainen" ], "title": "Designing neural networks through neuroevolution", "venue": "Nature Machine Intelligence,", "year": 2019 }, { "authors": [ "Santosh Vempala", "John Wilmes" ], "title": "Gradient descent for one-hidden-layer neural networks: Polynomial convergence and sq lower bounds", "venue": "Proceedings of the Thirty-Second Conference on Learning Theory,", "year": 2019 }, { "authors": [ "Sho Yagishita", "Akiko Hayashi-Takagi", "Graham CR Ellis-Davies", "Hidetoshi Urakubo", "Shin Ishii", "Haruo Kasai" ], "title": "A critical time window for dopamine actions on the structural plasticity of dendritic spines", "venue": null, "year": 2014 } ]
[ { "heading": "1 INTRODUCTION", "text": "In his Turing award lecture, neural networks pioneer Geoff Hinton opined that “evolution can't get gradients because a lot of what determines the relationship between the genotype and the phenotype is outside your control” (Hinton, 2019). We beg to differ. Evolution does have what amounts to an effective oracle access to the (indeed, complex and intractable) mapping from genotype to phenotype. The well-established equations of population genetics governing evolution under recombination (Bürger, 2000; Chastain et al., 2014) describe the way whereby the distribution of genotypes in the population is updated from one generation to the next, informed by the empirical fitness of the phenotypes during their lifetime; and these equations do bear a similarity to gradient descent and, even closer, to no-regret learning (Chastain et al., 2014). In this paper, we show that, in fact, quite effective training of neural nets can be carried out, without backpropagation, in evolutionary time.

The towering empirical success of ANNs has brought into focus their profound incongruity with what we know about the brain: backpropagation requires that synaptic weights and plasticity be informed by downstream events. Clever versions of ANNs have been proposed recently that avoid this criticism: ANNs whose backward weights are random and fixed (Lillicrap et al., 2016) and a variant that also uses random feedback weights but with zero initial conditions (Nøkland, 2016), a backpropagation interpretation of STDP (a widely accepted theory of plasticity) (Bengio et al., 2015), unsupervised learning using STDP (Diehl & Cook, 2015), ANNs driven by neural competition (Krotov & Hopfield, 2019), or ANNs with target value propagation at each layer rather than the loss gradient (Lee et al., 2015).

Here we take a very different approach. We believe that, while forward neural computation is coterminous with life, backpropagation (i.e., feedback to individual neurons and synapses about their contribution to the overall performance of the circuit) can be effectively carried out over evolutionary time. Suppose that the brain circuitry for a particular classification task, such as “food/not food”, is encoded in the animal's genes, assuming each gene to have two alleles, 0 and 1. A (haploid) genotype is a bit string. Crucially, we assume that the weight of each link of the neural network is a fixed sparse linear function of the genes. Evolution proceeds in generations. At each generation, each gene is an independent binary variable with a fixed probability of being 1. A population is sampled from this distribution of genotypes, and it experiences a sequence of inputs to the brain circuit. Fitness of each genotype depends, to some small degree, on the animal's success over its lifetime in the specific classification task. In the next generation, the allele frequencies will change slightly, depending on how each allele of each gene fared cumulatively (over both all inputs and all genotypes containing it) in the classification task. These changes follow the standard population genetics equations for the weak selection regime, see Bürger (2000); Chastain et al. (2014); weak selection means that the classification task is only one of the many biological functions (digestion, locomotion, other brain circuits, etc.) that affect the animal's fitness.

The question is, can competent brain circuits evolve this way? We offer theoretical evidence that this is indeed the case.¹
In Section 2 we prove that, if the classifier to be learned is linear, then evolution does indeed succeed in creating a neural network that classifies well, almost certainly and within a polynomial (in the dimension of the problem) number of generations, individuals per generation, and neurons per individual. We also validate our theorem through experiments on the MNIST data set. Our experiments are not meant to compete with what is happening in the ANN world. We want to make the point that competent learning can happen in life: NNE with a single hidden layer already gives surprisingly good accuracy rates (more than 90% accuracy for classifying the MNIST digits 0 to 4).

There is a different way of looking at, and motivating, our results, namely from the point of view of the study of the brain in connection to evolution. “Nothing in biology makes sense except in the light of evolution,” Theodosius Dobzhansky famously proclaimed. Neuroscientists have espoused this point of view, and evolutionary arguments come up often in the study of the brain, see for example Bosman & Aboitiz (2015). However, we are not aware of a technical discussion in the literature of the obvious existential question: Is the architecture of the brain susceptible to evolution through natural selection? Can brain circuits evolve? Our mathematical and empirical results in this paper on NNE strongly suggest that, indeed, effective brain circuits specializing in classification tasks could have evolved.

We also propose a second biologically plausible ANN-like mechanism, based on dopaminergic plasticity. It was recently established experimentally (Yagishita et al., 2014) that weights in certain synapses (in this case from the cortex to the striatum in the mouse brain, but not only) are increased if dopamine was released within 0.5-2 seconds after the synapse's firing. Intuitively, this is a reinforcement plasticity mechanism that “rewards” synapses whose firing led to a favorable outcome. Inspired by this experiment, we define dopaminergic neural nets (DNN), in which the weight of a link that fired (that is, both nodes fired during the current training example) is modified by a multiple of $(\tfrac{1}{4} - \mathrm{err}^2)$, where err is the error of the current training example. That is, links that fired are rewarded if the result was good, and actually also punished if it was not. Our experiments show that such DNNs can also learn to classify quite well, comparably to SGD.

Our Contributions. In Section 2, we give a rigorous proof that NNE with a single hidden layer succeeds in learning arbitrary linear target functions. In Section 3, we discuss experiments with NNE and DNN on MNIST." }, { "heading": "2 ANALYSIS OF NNE", "text": "A genotype can be viewed as a vector x ∈ {0, 1}^n. A probability distribution over the genotypes is given by a vector p ∈ [0, 1]^n; a genotype x is sampled by setting x(i) = 1 with probability p(i), independently for each i. The neural network corresponding to a genotype x is a feed-forward neural network (FFNN) whose weights are computed as follows. For a prediction network having m links, the weights of the links are given by Wx, where W is an m × n sparse weight generation matrix.

¹We note incidentally that NNE is of course very much distinct from neuroevolution (see the recent survey Stanley et al. (2019)), which optimizes ANN architecture and hyperparameters through genetic algorithms.
We choose the entries of W to be random and i.i.d.: with probability β, W(i, j) is chosen uniformly at random from [−1, 1], and it is 0 with probability 1 − β. The input to the network is a vector y drawn from a distribution D and has a (possibly real-valued) label ℓ(y). The output of the network on an input y is NNE_x(y). In the simplest linear case (Section 2.1), y ∈ R^m and $\mathrm{NNE}_x(y) = x^T W^T y$. In our experiments (Section 3.1), we study the case when D is the uniform distribution over MNIST, and NNE_x(·) is a 1-layer neural network with a ReLU output gate (see Section 3.1 for the formal definition).

For each genotype x, we measure its performance by computing the loss L(NNE_x(y), ℓ(y)) (this could be squared loss, cross-entropy loss, etc.). For a probability distribution p over genotypes, we define the loss as $L(p) := \mathbb{E}_{x\sim p}\mathbb{E}_{y\sim\mathcal{D}}\,L(\mathrm{NNE}_x(y), \ell(y))$. We calculate the rewards f^t(i) and f̄^t(i) as the expected negative loss whenever allele i is present and absent, respectively:

$$f^t(i) = \mathbb{E}_{x\sim p^t}\left[\mathbb{E}_{y\sim\mathcal{D}}\left[-L(\mathrm{NNE}_x(y),\ell(y))\right] \;\middle|\; x(i)=1\right] \tag{1}$$

and

$$\bar{f}^t(i) = \mathbb{E}_{x\sim p^t}\left[\mathbb{E}_{y\sim\mathcal{D}}\left[-L(\mathrm{NNE}_x(y),\ell(y))\right] \;\middle|\; x(i)=0\right]. \tag{2}$$

For the next generation we calculate

$$p = p^t(i)(1+\varepsilon f^t(i)) \quad\text{and}\quad q = (1-p^t(i))(1+\varepsilon \bar{f}^t(i)),$$

where ε is the selection strength, and we normalize p and q so that they form a probability distribution. Thus the allele probabilities for the next generation are

$$p^{t+1}(i) = \frac{p}{p+q} = \frac{p^t(i)(1+\varepsilon f^t(i))}{1+\varepsilon\bar{f}^t(i)+\varepsilon p^t(i)(f^t(i)-\bar{f}^t(i))}. \tag{3}$$

This is the standard update rule in population genetics under the weak selection assumption. The multiplier ε captures the small degree to which the performance of this task by the animal confers an evolutionary advantage leading to larger progeny.

Our first observation is that the performance per allele is in fact a function of the gradient of the loss function.

Lemma 1.

$$L(p^t) = -\bar{f}^t(i) - p^t(i)(f^t(i)-\bar{f}^t(i)) \quad\text{and}\quad \frac{\partial}{\partial p^t(i)}L(p^t) = -(f^t(i)-\bar{f}^t(i)).$$

Proof.

$$\begin{aligned} L(p^t) &= \mathbb{E}_{x\sim p^t}\mathbb{E}_{y\sim\mathcal{D}}\left[L(\mathrm{NNE}_x(y),\ell(y))\right] \\ &= p^t(i)\,\mathbb{E}_{x\sim p^t}\left[\mathbb{E}_{y\sim\mathcal{D}}\left[L(\mathrm{NNE}_x(y),\ell(y))\right] \mid x(i)=1\right] + (1-p^t(i))\,\mathbb{E}_{x\sim p^t}\left[\mathbb{E}_{y\sim\mathcal{D}}\left[L(\mathrm{NNE}_x(y),\ell(y))\right] \mid x(i)=0\right] \\ &= p^t(i)(-f^t(i)) + (1-p^t(i))(-\bar{f}^t(i)) \\ &= -\bar{f}^t(i) - p^t(i)(f^t(i)-\bar{f}^t(i)). \end{aligned}$$

Here, the last line follows from Equations (1) and (2). Now, taking the derivative w.r.t. p^t(i), we get

$$\frac{\partial}{\partial p^t(i)}L(p^t) = -(f^t(i)-\bar{f}^t(i)).$$

We use this to prove the following theorem.

Theorem 1. Fix δ > 0. Suppose $\nabla^2 L(z) \preceq H\cdot I$ for all z ∈ [0, 1]^n. Let $U := \sup_{p\in[0,1]^n} L(p)$ and $S^t := \{i\in[n] \mid \delta \leq p^t(i) \leq 1-\delta\}$. For $\varepsilon \leq \min\{1/\max\{2U,1\},\ 2/H,\ 1\}$, there is an η > 0 s.t.

$$\mathbb{E}\left(L(p^{t+1})\right) \leq L(p^t) - \eta\sum_{i\in S^t}\left(\nabla_i L(p^t)\right)^2.$$

Proof. Using Equation (3) and Lemma 1, we get

$$p^{t+1}(i) - p^t(i) = \frac{\varepsilon\, p^t(i)(1-p^t(i))(f^t(i)-\bar{f}^t(i))}{1+\varepsilon\bar{f}^t(i)+\varepsilon p^t(i)(f^t(i)-\bar{f}^t(i))} = -\frac{\varepsilon\, p^t(i)(1-p^t(i))}{1-\varepsilon L(p^t)}\,\nabla_i L(p^t) = -\gamma_i\,\nabla_i L(p^t),$$

where $\gamma_i := \varepsilon\, p^t(i)(1-p^t(i))/(1-\varepsilon L(p^t))$. For our choice of ε, we have

$$1-\varepsilon L(p^t) \geq 1-\varepsilon U \geq \tfrac{1}{2} \quad\text{and}\quad \varepsilon\, p^t(i)(1-p^t(i)) \leq \tfrac{\varepsilon}{4} \leq \tfrac{1}{2H}.$$

Therefore, γ_i ≤ 1/H. Using Taylor's theorem, there exists a z ∈ [p^t, p^{t+1}] s.t.

$$\begin{aligned} L(p^{t+1}) &= L(p^t) + (p^{t+1}-p^t)^T\nabla L(p^t) + \tfrac{1}{2}(p^{t+1}-p^t)^T\nabla^2 L(z)(p^{t+1}-p^t) \\ &\leq L(p^t) - \sum_i \gamma_i(\nabla_i L(p^t))^2 + \tfrac{H}{2}\sum_i \gamma_i^2(\nabla_i L(p^t))^2 \qquad\text{(using } \nabla^2 L(z) \preceq H I\text{)} \\ &= L(p^t) - \sum_i\left(\gamma_i - \tfrac{H}{2}\gamma_i^2\right)(\nabla_i L(p^t))^2. \end{aligned}$$

Since γ_i ≤ 1/H, we have $\gamma_i - \tfrac{H}{2}\gamma_i^2 \geq \gamma_i/2$. Therefore,

$$L(p^{t+1}) \leq L(p^t) - \sum_i \frac{\gamma_i}{2}(\nabla_i L(p^t))^2 = L(p^t) - \frac{\varepsilon}{2(1-\varepsilon L(p^t))}\sum_i p^t(i)(1-p^t(i))\left(\nabla_i L(p^t)\right)^2 \leq L(p^t) - \frac{\varepsilon\,\delta(1-\delta)}{2(1-\varepsilon B)}\sum_{i\in S^t}\left(\nabla_i L(p^t)\right)^2,$$

where $B := \inf_{p\in[0,1]^n} L(p)$. Therefore, the conclusion of the theorem holds for $\eta = \varepsilon\,\delta(1-\delta)/(2(1-\varepsilon B))$."
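To connect the analysis above to an implementation, here is a minimal NumPy sketch (ours, not the paper's code) of one NNE generation in the linear case: sample a population from the allele frequencies p, score each genotype against a hidden linear target, form the per-allele rewards of Eqs. (1)-(2), and apply the weak-selection update of Eq. (3). The sparsity pattern of W, the bounded squared loss, one input per genotype per generation, and ε = 0.05 are all illustrative simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, pop, eps = 20, 400, 500, 0.05              # dims, genes, population, epsilon
W = rng.choice([-1.0, 0.0, 1.0], size=(m, n), p=[0.05, 0.9, 0.05])  # sparse weights
a = rng.normal(size=m)
a /= np.linalg.norm(a)                           # hidden linear target, l(y) = a.y

def generation(p):
    X = (rng.random((pop, n)) < p).astype(float) # sample genotypes x ~ p
    Y = rng.normal(size=(pop, m))                # one lifetime input per animal (toy)
    preds = np.einsum('bm,mn,bn->b', Y, W, X)    # NNE_x(y) = x^T W^T y
    err = preds - Y @ a
    rewards = -np.minimum(err ** 2, 1.0)         # bounded loss keeps 1 + eps*f positive
    ones = X.sum(axis=0)
    f = (rewards @ X) / np.maximum(ones, 1.0)            # Eq. (1): allele present
    f_bar = (rewards @ (1.0 - X)) / np.maximum(pop - ones, 1.0)  # Eq. (2): absent
    num = p * (1.0 + eps * f)                            # Eq. (3), weak selection
    return num / (num + (1.0 - p) * (1.0 + eps * f_bar))

p = np.full(n, 0.5)
for t in range(200):
    p = generation(p)
```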
}, { "heading": "2.1 LEARNING LINEAR FUNCTIONS", "text": "In this section, we show that in the case of a linear target functions, with high probability, NNE converges to an allele distribution p which is arbitrarily close to the correct linear labeling. Our NNE has m input gates connected to one output gate (i.e., no hidden layers). For a genotype x, the weights of the connections are given by Wx. On input y, the NNE outputs xTWT y.\nTheorem 2 Let D be the uniform distribution over vectors in an n-dimensional unit ball. Let a be a fixed vector with ‖a‖ ≤ 1, such that the label of y is `(y) := aT y. Let W have i.i.d. entries with Wij = ± √ m/d with probability d/m and 0 with probability 1− (d/m). Then, for any δ ∈ (0, 1], with n = O(m + log(1/δ)), with probability at least 1 − δ, there exists an allele distribution p s.t. Wp = a. Moreover, with probability at least 3/4, for any ∈ (0, 1], with n = Ω(m(log(1/ )/ 2), there is an x ∈ {0, 1}n s.t. (Wx)·a‖Wx‖‖a‖ ≥ 1− .\nWe remark that the above guarantee works for every linear target function in Rm. To learn, with high confidence, the target function from among d unknown (arbitrary) linear functions, m above can be replaced by log d.\nProof. To have Wp = a, with p(i) ∈ [0, 1], it suffices that the vector a lies in the convex hull of the columns of W . This follows if the unit ball around the origin is contained in the convex hull of the vectors W (i). By duality, it suffices that every halfspace tangent to the unit ball (and not containing it) has at least one of the W (i) in it. For any single halfspace tangent to the unit ball, the probability that a random W (i) falls in it is at least a constant factor — each W (i) has squared length m in expectation and concentrated near it. Thus, the halfspace it defines carves out a cap of constant measure. Next, by the VC theorem, if n = Ω(m+ log(1/δ)), with probability 1− δ every such halfpsace will contain a column of W . This establishes E(Wx) = a.\nTo bound the error, we consider the subset of columns of W that have a nontrivial inner product with a and take their sum, i.e., let J = {i : wi · a ≥ 1√\nm ‖a‖‖wi‖} and d = |J |, and consider the\nrandom variables:\nY = 1\nd ∑ i∈J wi and Z = Y · a.\nThen by the symmetry of the distribution of W , E(Y ) points in the same direction as a (in all other directions, the truncated distribution remains symmetric and therefore has mean zero). For convenience . Then,\nVar(Z) = 1\nd ∑ i∈J Var(wi · a) ≤ c‖a‖.\nOn the other hand,\nE(‖Y ‖2) = 1 d2 ∑ i,j∈J E(wi · wj) ≤ 1 d2 ( dE(‖wi‖2) + d2E(wi · a)2 ) ≤ c1m d + c2‖a‖.\nwhere c, c1, c2 are absolute constants. So if d = Ω(m/ ), then with large probability,\nY · a ‖Y ‖‖a‖ ≥ 1−O( ). (4)\nHowever, we need this for every possible a. So we take an ( /2)-net of the unit ball in Rm (which has size at most (3/ )m). For any fixed a, by taking d = Ω\n( m log(1/ )\n2\n) , the Hoeffding bound\ntells us that the probability that (4) is violated is at most e− 2d/4 ≤ ( /4)m. Then, by a union bound, this bound on d suffices for all a. Finally, with n = Ω(m log(1/ )/ 2) whp, every cap {y : a · y ≥ 1/ √ m} has at least d columns of W in it." }, { "heading": "3 EXPERIMENTS", "text": "" }, { "heading": "3.1 NNE ON MNIST", "text": "We study the effectiveness of NNE by evaluating its classification performance on the MNIST dataset.\nTo train an NNE via evolution of T generations of genotypes, we fix a sufficiently large population size N . 
Each generation t ∈ [T ] consists of a sample of N independently sampled genotypes from the allele distribution pt, we denote this sample by Pt. This distribution is updated based on the average performance f t(i) and f̄ t(i) of all the genotypes on a task, in our case, MNIST handwirtten digit recognition task. We let the allele distribution pt evolve over T generations in this manner (see fig. 1).\nExperimental setup. We use 200 training samples for each of the digits, drawn uniformly at random from MNIST; we denote this set of training examples by S. p1, the allele distribution for the first generation, is sampled uniformly at random from [0, 1]n. We evaluate the performance of the alleles over N = 1000 genotypes. Our network has 784 input units, one hidden layer of |h1| = 1000 units with ReLU activation and an output layer of 10 units with softmax activation. We add a sparse random graph between the input layer and the hidden layer: between a neuron in the input layer and a neuron in the hidden layer, we independently add an edge with probability 0.1. The hidden layer is fully connected to the output layer. We choose β = 0.0025 for our experiments, i.e., each edge weight is a sparse random function of only β fraction of the alleles. For the input sample y, `(y) is now a one-hot encoding of the label, and NNEx (y) is the soft-max output of the network. We use the cross-entropy loss function, L(NNEx (y) , `(y)) = − ∑ c∈[C] `(y)c log (NNEx (y)c).\nIf a classifier were to randomly guess the label of an input intance, its loss function value would be α := − log (1/10). We use the relative performance of the genotype w.r.t. to a random guess for our updates. To this end, we define for a genotype x, δx := 1|S| ∑ s∈[S] max{0, α2 − L(NNEx (y) , `(y)) 2}. For each allele, we calculate the rewards f t(i) and f̄ t(i) whenever the allele is present and absent respectively.\nf t(i) = ∑ x∈Pt δxx(i)∑ x∈Pt x(i)\nand f̄ t(i) = ∑ x∈Pt δx(1− x(i))∑ x∈Pt(1− x(i)) .\nThe allele distribution for the next generation is updated using equation 3.\nNNE as described above achieves 78.8% test accuracy on the full MNIST test set. While this is somewhat far from the state of art in classification of MNIST images, our results demonstrate that very basic NNEs can perform reasonably well in this task.\nConvergence of allele distributions. We repeat for many (hundreds of thousands) generations. As our theoretical results predict (see also Mehta et al. (2015)), the vast majority of genes have allele probabilities that are very close to 0 and 1. Figure 2 shows the fraction of allele probabilities that are at a distance [x, 1− x] from 0 or 1, i.e., y is calculated as y = 1− |{i:min{p\nt(i),1−pt(i)}≤x}| n .\nNNE with output layer training. The biological implausibility objection of using stochastic gradient based updates is less acute for the output layer, since in animal brains synaptic changes due to plasticity happen at the post-synaptic neuron, and for the output layer this is the output neuron. Even then, computing exact (or approximate) gradients is a nontrivial computational task; instead we consider using just the sign of the gradient for only the output layer as a lifetime training mechanism.\nFor the same network described as above, we randomly initialize the network weights using allele distribution learned using the NNE. 
We then calculate the sign of the gradient of the output layer weights and update the weights in the opposite direction (SignSGD), using a sufficiently small learning rate $\epsilon'$, similar to stochastic gradient descent. For $i$ in the hidden layer and $j$ in the output layer, the update is

$$w_{ij} := w_{ij} - \epsilon' \cdot \mathrm{sign}\big( (z_j - \ell(y)_j)\, h_i \big) \quad (5)$$

where $h_i$ is the output of neuron $i$, and $z_j$ is the softmax output of neuron $j$ (see appendix for the proof). SignSGD has been shown to be effective for training large deep neural networks (see, e.g., Bernstein et al. (2018)).

We perform a few hundred iterations of this training using batch size 50. In this experiment (NNE + SignSGD), we obtain 86.3% accuracy on the full MNIST test set. This further demonstrates that biologically plausible neural networks can perform reasonably well in this task.

Number of genes. A crucial choice for an NNE is the number of genes. In our experiments, we use a few thousand genes; this is not unreasonable, as it is estimated that about 5,000 genes are expressed in the cells of the mammalian brain. To investigate further, we compare the performance of our algorithm with increasing values of $n$ (the number of genes). Figure 3 presents the validation accuracy trends on the same network described above for five-class [0-4] classification and for the full MNIST dataset. We observe that the accuracy rate of the network improves significantly with an increase in the number of genes. However, it requires much longer training time to achieve a desired accuracy rate.

Table 1 compares the results of all the models along with the baseline, stochastic gradient descent trained on the same subset of MNIST." }, { "heading": "3.2 DOPAMINERGIC NEURAL NETS (DNNS)", "text": "DNNs are biologically plausible ANNs based on dopaminergic plasticity. They learn by a weak form of immediate reinforcement: "rewarding" synapses whose firing led to a favourable outcome. If a connection between two neurons has fired during a training step, then its weight is increased if the square error was low (less than $\frac{1}{4}$). In this section, we demonstrate that simple DNNs can perform reasonably well for tasks like classifying the images in the MNIST dataset.

Experimental setup. For our experiments we use a network consisting of an input layer, a single hidden layer, and an output layer consisting of 784, $h$, and 10 neurons respectively. Each neuron in the input layer has a link to each neuron in the hidden layer, and its weight is initialised by the popularly used Kaiming Uniform (more commonly called He initialisation (He et al., 2015)). These weights are unchanged throughout the learning process. Recent theoretical results suggest that a large enough random layer is sufficiently rich and efficiently trainable (Vempala & Wilmes, 2019) (see also (Rahimi & Recht, 2008)).

Each neuron in the hidden layer has a link to each neuron in the output layer. The output layer outputs the softmax score. The weights of this layer are learned using plasticity based updates. On seeing an input $y$, the DNN tries to predict the label of $y$; let us denote this by $\mathrm{DNN}_W(y)$. If the DNN got the prediction correct, i.e. the loss $L(\mathrm{DNN}_W(y), \ell(y))$ is at most $\epsilon_0$, then the weight $w_{ij}$ gets increased by a small amount, provided the output neuron $j$ has low error, i.e.
$|z_j - \ell(y)_j|^2 \le 1/4$, where $z_j$ is the $j$-th coordinate of $\mathrm{DNN}_W(y)$.

Formally, the update rule is as follows for $i$ in the hidden layer and $j$ in the output layer:

$$w_{ij} = w_{ij} + \epsilon_1 \cdot \frac{\max\big\{0,\ \frac{1}{4} - |z_j - \ell(y)_j|^2\big\} \cdot \max\big\{0,\ \epsilon_0 - L(\mathrm{DNN}_W(y), \ell(y))\big\}}{\big(\frac{1}{4} - |z_j - \ell(y)_j|^2\big) \cdot \big(\epsilon_0 - L(\mathrm{DNN}_W(y), \ell(y))\big)},$$

i.e., $w_{ij}$ is increased by $\epsilon_1$ exactly when both conditions above hold.

Experimental results. To study the effectiveness of our DNN in 10-class MNIST digit classification, we compare its performance with some other standard baselines.

1. SGD: In this we use the standard stochastic gradient descent (with the Adam optimiser Kingma & Ba (2015)) based updates to train our network.

2. SignSGD: As before, we use the sign of the gradient for updates (equation 5).

Table 2 shows the results for different $h$ values. All results are after 500 epochs of training. As with NNE, we use the cross-entropy loss for all the models. We found that $\epsilon_0 = 0.75$ and $\epsilon_1 = 1$ for the DNN give reasonable performance. Our DNN gives encouraging results and is comparable to SignSGD in performance." }, { "heading": "4 DISCUSSION AND FURTHER WORK", "text": "We have presented two biologically plausible mechanisms for the evolution of neural networks, motivated by the brain, and based on evolution. One feature of these mechanisms is that they process one input instance at a time (i.e., unit batch size) and do not require the computation of gradients.

Our preliminary experiments with bio-plausible mechanisms suggest that they are promising alternatives to backpropagation and the explicit use of gradients. The results raise several interesting possibilities:

1. In our current set-up, network weights are sparse linear functions of alleles. What if we used nonlinear functions (e.g., sigmoids or ReLUs) to define the weights?

2. In our experiments, allele distributions approach 0/1 values for most coordinates. We could take advantage of this by fixing alleles that are sufficiently close to 0 or 1 and continuing only on the rest.

3. Can this approach be used, with greater success, for multi-layer networks?

4. Does NNE or DNN implicitly optimize the underlying architecture?

5. A recent model of memory creation and association in the mammalian brain is based on plasticity and inhibition (Papadimitriou & Vempala, 2019). In this model, inhibition is implemented as a cap, where the top $k$ highest-weighted input neurons of an entire layer are the ones that fire; the rest of the neurons in the layer are suppressed. Our experiments with DNN indicate that using a $k$-cap for the hidden layer with $k$ as small as 256 when $h = 100{,}000$ does not degrade performance (and even enhances it slightly), while reducing computation. Can $k$-cap help with learning?" }, { "heading": "5 APPENDIX", "text": "Lemma 2. For neuron $i$ in the hidden layer and neuron $j$ in the output layer, the gradient of the cross-entropy loss with respect to the weight $w_{ij}$ in the final layer is

$$\frac{\partial L}{\partial w_{ij}} = (z_j - \ell(y)_j)\, h_i,$$

where $h_i$ is the output of neuron $i$, and $z_j$ is the softmax output of neuron $j$.

Proof. For the hidden layer output $h$ and the final layer weights $W$, we compute the soft-max output as follows:

$$o = W^T h, \qquad z = \mathrm{softmax}(o).$$

The true output $\ell(y)$ is a one-hot encoded vector and the output of the model $z$ is the softmax of the final layer input $o$. The derivative of the loss function $L$ w.r.t.
$o_j$ is given by

$$\frac{\partial L}{\partial o_j} = -\sum_{k=1}^{C} \frac{\partial\, \ell(y)_k \log(z_k)}{\partial o_j} = -\sum_{k=1}^{C} \ell(y)_k \frac{\partial \log(z_k)}{\partial o_j} = -\sum_{k=1}^{C} \ell(y)_k \frac{1}{z_k} \frac{\partial z_k}{\partial o_j} = -\frac{\ell(y)_j}{z_j}\frac{\partial z_j}{\partial o_j} - \sum_{k\neq j} \frac{\ell(y)_k}{z_k}\frac{\partial z_k}{\partial o_j} = -\frac{\ell(y)_j}{z_j}\, z_j(1 - z_j) - \sum_{k\neq j} \frac{\ell(y)_k}{z_k}(-z_k z_j) = -\ell(y)_j + \ell(y)_j z_j + \sum_{k\neq j} \ell(y)_k z_j = -\ell(y)_j + \sum_{k=1}^{C} \ell(y)_k z_j = -\ell(y)_j + z_j \sum_{k=1}^{C} \ell(y)_k = z_j - \ell(y)_j.$$

We know that $o_j = \sum_i w_{ij} h_i$. Hence,

$$\frac{\partial o_j}{\partial w_{ij}} = h_i.$$

Then the gradient for the final layer weight $w_{ij}$ is calculated as

$$\frac{\partial L}{\partial w_{ij}} = \frac{\partial L}{\partial o_j} \cdot \frac{\partial o_j}{\partial w_{ij}} = (z_j - \ell(y)_j)\, h_i.$$" } ]
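Combining Lemma 2 with the SignSGD rule (5) of the paper above, one output-layer update is only a few lines of NumPy. This is our own illustrative sketch (the function name and its assumed shapes are ours, not the authors' code):

```python
import numpy as np

def signsgd_output_step(W_out, h, label_onehot, lr=1e-3):
    """One SignSGD update of the output layer, using the gradient
    dL/dw_ij = (z_j - l(y)_j) * h_i from Lemma 2.

    W_out        : (hidden, 10) output-layer weights
    h            : (hidden,) hidden-layer activations for one example
    label_onehot : (10,) one-hot true label
    """
    o = W_out.T @ h
    z = np.exp(o - o.max())
    z /= z.sum()                          # softmax output
    grad = np.outer(h, z - label_onehot)  # (z_j - l(y)_j) * h_i
    return W_out - lr * np.sign(grad)     # update rule (5)
```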
2,019
null
SP:823860fad022b8abde97b597f8b3881453489dc1
[ "This paper proposes a novel loss function to account for imperceptible, geometry-aware deformations of point clouds. The loss is used in two cases: generating adversarial point clouds to attack representative models of point set classifiers, and generating cooperative point clouds to improve classification confidence or accuracy. The combined geometry-aware objective is well-introduced, which mainly contains Chamfer distance term, Hausdorff distance term and local curvatures consistency term. The authors apply the geometry-aware objective to generate adversarial point clouds by adopting the framework of C&W attack. For generating cooperative point clouds, the authors introduced a training procedure to reduce the overfitting of the deformed point clouds. Most of the experiments are well-conducted, and demonstrate the effectiveness of the proposed loss function. ", "This paper describes a new targeted adversarial attack against 3D point cloud object classifiers that is robust to several countermeasures. The attack finds a point of the target class that is close to the original point cloud in terms of a more complicated metric that combines the Hausdorff distance, the Chamfer distance, and a curvature distance measure. The proposed attack is 100% successful against several different state of the art classifiers on a dataset of 1024-point clouds sampled from 25 instances of CAD models of each of 10 common objects without any countermeasures. When the Random Removal countermeasure is used, the attack is still successful almost 50% of the time even when 256 points are removed as compared to two other attacks that are only ~17% successful. When the SOR countermeasure is used, the attack is 60% successful when 64 points are removed as compared to <1% for the comparison attacks. The attack can also be used in reverse for data augmentation in training and can cut error rates almost in half." ]
Recent studies show that machine learning models are vulnerable to adversarial examples. In 2D image domain, these examples are obtained by adding imperceptible noises to natural images. This paper studies adversarial generation of point clouds by learning to deform those approximating object surfaces of certain categories. As 2D manifolds embedded in the 3D Euclidean space, object surfaces enjoy the general properties of smoothness and fairness. We thus argue that in order to achieve imperceptible surface shape deformations, adversarial point clouds should have the same properties with similar degrees of smoothness/fairness to the benign ones, while being close to the benign ones as well when measured under certain distance metrics of point clouds. To this end, we propose a novel loss function to account for imperceptible, geometry-aware deformations of point clouds, and use the proposed loss in an adversarial objective to attack representative models of point set classifiers. Experiments show that our proposed method achieves stronger attacks than existing methods, without introduction of noticeable outliers and surface irregularities. In this work, we also investigate an opposite direction that learns to deform point clouds of object surfaces in the same geometry-aware, but cooperative manner. Cooperatively generated point clouds are more favored by machine learning models in terms of improved classification confidence or accuracy. We present experiments verifying that our proposed objective succeeds in learning cooperative shape deformations.
[]
[ { "authors": [ "Anish Athalye", "Logan Engstrom", "Andrew Ilyas", "Kevin Kwok" ], "title": "Synthesizing robust adversarial examples", "venue": "arXiv preprint arXiv:1707.07397,", "year": 2017 }, { "authors": [ "Kwang-Ho Bae" ], "title": "Automated registration of unorganised point clouds from terrestrial laser scanners", "venue": "PhD thesis, Curtin University,", "year": 2006 }, { "authors": [ "Mario Botsch", "Leif Kobbelt", "Mark Pauly", "Pierre Alliez", "Bruno Lévy" ], "title": "Polygon mesh processing", "venue": "AK Peters/CRC Press,", "year": 2010 }, { "authors": [ "Yulong Cao", "Chaowei Xiao", "Benjamin Cyr", "Yimeng Zhou", "Won Park", "Sara Rampazzi", "Qi Alfred Chen", "Kevin Fu", "Z Morley Mao" ], "title": "Adversarial sensor attack on lidar-based perception in autonomous driving", "venue": "arXiv preprint arXiv:1907.06826,", "year": 2019 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Herbert Edelsbrunner", "David Kirkpatrick", "Raimund Seidel" ], "title": "On the shape of a set of points in the plane", "venue": "IEEE Transactions on information theory,", "year": 1983 }, { "authors": [ "Kevin Eykholt", "Ivan Evtimov", "Earlence Fernandes", "Bo Li", "Amir Rahmati", "Chaowei Xiao", "Atul Prakash", "Tadayoshi Kohno", "Dawn Song" ], "title": "Robust physical-world attacks on deep learning visual classification", "venue": null, "year": 2018 }, { "authors": [ "Haoqiang Fan", "Hao Su", "Leonidas J Guibas" ], "title": "A point set generation network for 3d object reconstruction from a single image", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Thibault Groueix", "Matthew Fisher", "Vladimir G. Kim", "Bryan Russell", "Mathieu Aubry" ], "title": "AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation", "venue": null, "year": 2018 }, { "authors": [ "Thibault Groueix", "Matthew Fisher", "Vladimir G. Kim", "Bryan C. Russell", "Mathieu Aubry" ], "title": "A papier-mache approach to learning 3d surface generation", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition, Jun 2018b. doi: 10.1109/cvpr.2018.00030. URL http://dx.doi.org/10.1109/cvpr.2018.00030", "year": 2018 }, { "authors": [ "Tong He", "Haibin Huang", "Li Yi", "Yuqian Zhou", "Chihao Wu", "Jue Wang", "Stefano Soatto" ], "title": "Geonet: Deep geodesic networks for point cloud analysis", "venue": null, "year": 2019 }, { "authors": [ "Hugues Hoppe", "Tony DeRose", "Tom Duchamp", "John McDonald", "Werner Stuetzle" ], "title": "Surface reconstruction from unorganized points", "venue": null, "year": 1992 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Shiyi Lan", "Ruichi Yu", "Gang Yu", "Larry S Davis" ], "title": "Modeling local geometric structure of 3d point clouds using geo-cnn", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Daniel Liu", "Ronald Yu", "Hao Su" ], "title": "Extending adversarial attacks and defenses to deep 3d point cloud", "venue": "classifiers. pp. 
2279–2283,", "year": 2019 }, { "authors": [ "Daniel Liu", "Ronald Yu", "Hao Su" ], "title": "Adversarial point perturbations on 3d objects, 2019b", "venue": null, "year": 2019 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "arXiv preprint arXiv:1706.06083,", "year": 2017 }, { "authors": [ "Charles R Qi", "Hao Su", "Kaichun Mo", "Leonidas J Guibas" ], "title": "Pointnet: Deep learning on point sets for 3d classification and segmentation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Charles Ruizhongtai Qi", "Li Yi", "Hao Su", "Leonidas J Guibas" ], "title": "Pointnet++: Deep hierarchical feature learning on point sets in a metric space", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Jiapeng Tang", "Xiaoguang Han", "Junyi Pan", "Kui Jia", "Xin Tong" ], "title": "A skeleton-bridged deep learning approach for generating meshes of complex topologies from single rgb images", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Kaiqi Wang", "Ke Chen", "Kui Jia" ], "title": "Deep cascade generation on point sets", "venue": "In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Nanyang Wang", "Yinda Zhang", "Zhuwen Li", "Yanwei Fu", "Wei Liu", "Yu-Gang Jiang" ], "title": "Pixel2mesh: Generating 3d mesh models from single rgb images", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Yue Wang", "Yongbin Sun", "Ziwei Liu", "Sanjay E. Sarma", "Michael M. Bronstein", "Justin M. 
Solomon" ], "title": "Dynamic graph cnn for learning on point clouds", "venue": "ACM Transactions on Graphics (TOG),", "year": 2019 }, { "authors": [ "Matthew Wicker", "Marta Kwiatkowska" ], "title": "Robustness of 3d deep learning in an adversarial setting", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Zhirong Wu", "Shuran Song", "Aditya Khosla", "Fisher Yu", "Linguang Zhang", "Xiaoou Tang", "Jianxiong Xiao" ], "title": "3d shapenets: A deep representation for volumetric shapes", "venue": "In CVPR,", "year": 2015 }, { "authors": [ "Chong Xiang", "Charles R Qi", "Bo Li" ], "title": "Generating 3d adversarial point clouds", "venue": null, "year": 2019 }, { "authors": [ "Chaowei Xiao", "Dawei Yang", "Bo Li", "Jia Deng", "Mingyan Liu" ], "title": "Meshadv: Adversarial meshes for visual recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Jiancheng Yang", "Qiang Zhang", "Rongyao Fang", "Bingbing Ni", "Jinxian Liu", "Qi Tian" ], "title": "Adversarial attack and defense on point sets", "venue": null, "year": 1902 }, { "authors": [ "Yaoqing Yang", "Chen Feng", "Yiru Shen", "Dong Tian" ], "title": "Foldingnet: Point cloud auto-encoder via deep grid deformation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Hang Zhou", "Kejiang Chen", "Weiming Zhang", "Han Fang", "Wenbo Zhou", "Nenghai Yu" ], "title": "Dup-net: Denoiser and upsampler network for 3d adversarial point clouds", "venue": null, "year": 2018 }, { "authors": [ "PointNet Qi" ], "title": "2017a) as our baseline due to its simple structure and good performance. The classier f can be formulated as f(P ) = γ(maxpi∈P {h(pi)}), where γ and h are two learnable parameters of the neural network. A.7 INTRODUCTION OF DEFENSIVE ALGORITHM We adopt two kinds of defense method including random removal method (RR for short) and statistic", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "The existence of adversarial examples shows the vulnerability of machine learning models, and triggers a great amount of research attention paid to either attacking and defense studies for safetycritical issues, or robustness analysis on machine learning models themselves. In existing literature, adversarial examples generally satisfy two key properties: visually imperceptible by humans and capable to mislead machine learning models. For 2D adversarial images (Szegedy et al., 2013; Carlini & Wagner, 2017; Madry et al., 2017), their imperceptible nature indicates less sensitivity by humans to the texture changes of the original images, which can be viewed as noises with small magnitudes anchored on RGB values of pixels.\nDifferent from texture changes in adversarial images, adversarial point clouds as approximate shape representations of object surface demand imperceptible shape deformations. Heuristic manners of existing several works are taken to allow shape deformations in arbitrary directions for adversarial attack effects. Consequently, they produce adversarial examples that contain obvious point outliers (Xiang et al., 2019; Liu et al., 2019a), which can thus be easily defended by simple outlier removal method such as SOR (Zhou et al., 2018).\nIn this work, we aim to adversarially perturbing benign point clouds with awareness of the underlying geometric properties, which can be generally described as some sort of approximate geometric smoothness, e.g., point-wise normals (Wang et al., 2018), curvatures (Bae, 2006), or geodesics for pairs of points (He et al., 2019), in view of the nature of object surface as 2-manifold embedded in the 3D space. To this end, we propose a novel geometry-aware loss in combination with the attack one. Our proposed geometry-aware loss is composed of three terms: the Chamfer Distance (CD) and Hausdorff Distance (HD) terms prevent geometric/topological changes of global shape via constraining point-wise perturbations, while the term encouraging consistency of local curvatures\nbetween the adversarial and benign point clouds achieves smooth local surface deformations. Extensive experiments on the ModelNet40 dataset (Wu et al., 2015) can verify the imperceptibility and effectiveness of geometry-aware adversaries.\nOur success in generating less noticeable point clouds inspires us to think of an opposite direction: is it possible to perturb raw point clouds in a cooperative, geometry-aware manner such that the resulting imperceptible perturbations of point clouds can improve classification performance with existing machine learning models? To the best of our knowledge, very few existing works are pursuing this direction, at least in the domain of 3D shape analysis. Technically, the objective of cooperative generation on point clouds is achieved by learning point perturbations in favor of either improved classification confidence or corrected class predictions. In view of their geometryaware nature, our cooperative point clouds could be practically meaningful in terms of guiding design of certain objects such that they can be better perceived by in-built machine learning systems in physical world. Experiments on the ModelNet40 again verify our motivation on cooperative generation of point clouds." 
}, { "heading": "2 RELATED WORKS", "text": "A number of adversarial attack algorithms have been proposed on 3D semantic analysis (Xiao et al., 2019; Liu et al., 2019a; Xiang et al., 2019; Yang et al., 2019; Zheng et al., 2018; Wicker & Kwiatkowska, 2019; Liu et al., 2019b). Beyond (Xiao et al., 2019) manipulating both texture and shape on the mesh level to attack image classifiers and detectors, most of adversarial algorithms on point clouds attack 3D classifiers via points attachment (Xiang et al., 2019; Yang et al., 2019), point detachment (Yang et al., 2019; Zheng et al., 2018; Wicker & Kwiatkowska, 2019) or point-wise perturbation (Xiang et al., 2019; Liu et al., 2019a; Yang et al., 2019; Liu et al., 2019b). Existing methods fail to exploit geometric properties during adversarial example generation, and thus have evident outliers and can be easily defended by SOR defense (Zhou et al., 2018), which encourage our adversaries (see Figure 2 and Table 1).\nA number of deep models have been proposed for point-based surface reconstruction such as Point Set Generation (PSG) (Fan et al., 2017), the AtlasNet (Groueix et al., 2018b), and Deep Cascade Generation (DCG) (Wang et al., 2019a) from 2D images. These methods exploiting local mesh structure into point clouds generation have inspired our geometry-aware concept, but they are directly regressing from the latent vector encoded from images rather than point-wise deformations in the proposed method.\nData argumentation on point clouds such as jittering, rotation and random scale firstly introduced in (Qi et al., 2017a) shares similar concept as our cooperative deformations, both of which generate new data to boost classification. However, the key differences between our cooperative learning and data argumentation lie in two folds: 1) optimization based vs. simple physical processing; and 2) deformations on the underlying shape vs. point density changes. Simply put, the proposed cooperative deformation can be a promising learning paradigm, while data argumentation is an effective pre-processing step to mitigate sparse sample distributions." }, { "heading": "3 METHODOLOGY", "text": "Our problem setting assumes the availability of a collection of point clouds in the input space X , which are approximate shape representations of object surfaces of certain categories. Each point cloud P ∈ X contains an orderless set of n points {pi}ni=1, with the corresponding label y ∈ Y of object category, where any p denotes the coordinates (x, y, z) in the 3D Euclidean space. In this work, we focus on machine learning models of 3D point set classification (Qi et al., 2017a;b; Wang et al., 2019b), which learn a classifier f : X → Y and expect f(P) = y for any input P with the true label y.\nP is a discrete approximation of an object surface that satisfies the general properties of smoothness and fairness (Botsch et al., 2010), which concern with the continuity and variation of (partial) derivatives of a parametric surface function; or in a more intuitive way, the properties concern with the curvatures of local surface patches. Our objective in this work is to obtain from P a deformed point cloud P ′ via learning to perturb individual points {pi}ni=1 of P . 
Since the deformation is\nexpected to be imperceptible by humans, we argue that P ′ should have the aforementioned surface properties with similar degrees of smoothness/fairness to local surface patches of P , while being globally close to P as well, measured by certain distance metrics of point sets; otherwise, humans would notice either the global, possibly topological changes of part configuration of the object surface, or those of local surface details. We present shortly in section 3.1 our technical solutions to the above objective of geometry-aware point perturbations, and discuss how such solutions can be used to generate either adversarial or cooperative point clouds in sections 3.2 and 3.3 respectively." }, { "heading": "3.1 GEOMETRY-AWARE POINT PERTURBATIONS", "text": "In 2D image domain, pixel-wise lp-norms are usually used to constrain the noise magnitudes of adversarial examples (Carlini & Wagner, 2017; Madry et al., 2017). Due to the sharply different data nature of point clouds, we consider the following learning criteria to perturb points {pi}ni=1 of P in a geometry-aware manner. These criteria are to constrain the deformation magnitudes of the resulting P ′, while taking into account prevention of point outliers and smooth regularization of local point neighborhoods. Combined use of these criteria leads to point cloud deformations that are less noticeable by humans, as verified in the comparative experiments in section 4.\nChamfer Distance Given two point sets P and P ′, the Chamfer distance computes\nCChamfer(P ′,P) = 1\nn ∑ p′∈P′ min p∈P ‖p′ − p‖22 + 1 n ∑ p∈P min p′∈P′ ‖p− p′‖22, (1)\nwhich shows that Chamfer distance is symmetric w.r.t. P andP ′. Although the Chamfer distance (1) is not a strict distance function, since the triangle inequality does not hold, it is popularly used in the recent literature of learning based 3D shape generation (Yang et al., 2018; Groueix et al., 2018a; Fan et al., 2017). It measures the distance between the two point sets by averaging over the individual deviation of any p ∈ P from P ′ and that of any p′ ∈ P ′ from P . However, Chamfer distance is less effective in prevention of outlier points in P ′, since a small portion of outliers in P ′ increases the distance (1) negligibly — one can intuitively think of outliers of a point cloud as those away from the object surface, represented by the point cloud, with relatively large distances. This shortcoming of Chamfer distance motivates us to additionally use the Hausdorff distance as introduced below.\nHausdorff Distance In this work, we use a non-symmetric Hausdorff distance between the two point sets P and P ′ that computes\nCHausdorff(P ′,P) = max p′∈P′ min p∈P ‖p′ − p‖22, (2)\nsince only the deformation of P ′ is concerned. As (2) indicates, the Hausdorff distance finds the largest one among the smallest distances of individual p′ ∈ P ′ from P . It is thus sensitive to generation of outliers in the resulting P ′. 
Distances computed by the functions (1) and (2) rely on those between the individual points $\{p'_i\}_{i=1}^n$ and $\{p_i\}_{i=1}^n$, which does not involve geometries of local surface patches associated with these individual points; consequently, the resulting $\mathcal{P}'$ could be close to $\mathcal{P}$ when measured by (1) and/or (2), but changes of geometric details at certain local patches could be clearly visible, causing failure to achieve the objective of imperceptible deformation.

Consistency of Local Curvatures. Our way of achieving geometry-aware imperceptibility is to constrain the point cloud deformation such that local patches of the surface approximated by the resulting $\mathcal{P}'$ have curvatures whose magnitudes are similar to those of the corresponding patches of the surface approximated by $\mathcal{P}$. Since computations in this work are conducted on the discrete surface approximation of point clouds, we propose discrete notions that approximately characterize curvatures of local surface patches.

More specifically, for any point $p' \in \mathcal{P}'$, we find its closest point $p \in \mathcal{P}$ by $p = \arg\min_{p\in\mathcal{P}} \|p' - p\|_2$. There exist local point neighborhoods $\mathcal{N}'_{p'} \subset \mathcal{P}'$ and $\mathcal{N}_p \subset \mathcal{P}$ respectively associated with $p'$ and $p$, which are obtained in this work by searching $k$ nearest neighbors, giving $|\mathcal{N}'_{p'}| = |\mathcal{N}_p| = k$. To capture the local geometry of $\mathcal{N}_p$, we rely on the following discrete notion

$$\kappa(p;\mathcal{P}) = \frac{1}{k}\sum_{q\in\mathcal{N}_p} \left| \left\langle \frac{q-p}{\|q-p\|_2},\, n_p \right\rangle \right|, \quad (3)$$

where $n_p$ denotes the unit normal vector of the surface at $p$. The term (3) intuitively measures the averaged angles between the normal vector and the vector defined by pointing $p$ towards each $q$ of its neighboring points. Indeed, since the normal vector $n_p$ is orthogonal to the tangent plane of the surface at $p$, each inner product in (3) characterizes how the normals vary directionally in the local neighborhood $\mathcal{N}_p$, thus approximately measuring the local, directional curvature, and an average of $|\mathcal{N}_p|$ inner products in (3) approximately measures the local, mean curvature. Note that the unit normal vector $n_p$ in (3) can be computed via eigen-decomposition over the neighborhood $\mathcal{N}_p$ (Hoppe et al., 1992). We compute $\kappa'(p';\mathcal{P}')$ in the same way as (3), with the subtle difference that instead of computing $n'_{p'}$ from $\mathcal{N}'_{p'}$, we directly use $n_p$, i.e., the unit normal vector of the point in $\mathcal{P}$ that is closest to $p'$, as a surrogate of $n'_{p'}$, since normal vectors of $\mathcal{P}$ can be pre-computed and efficiently retrieved during the deformation learning.

Given $\kappa'(p';\mathcal{P}',\mathcal{P})$ and $\kappa(p;\mathcal{P})$, we use the following criterion to encourage the consistency of local geometries between any $p' \in \mathcal{P}'$ and its closest point $p \in \mathcal{P}$:

$$\mathcal{C}_{\mathrm{Curvature}}(\mathcal{P}',\mathcal{P}) = \frac{1}{n}\sum_{p'\in\mathcal{P}'} \left\| \kappa'(p';\mathcal{P}',\mathcal{P}) - \kappa(p;\mathcal{P}) \right\|_2^2 \quad \text{s.t. } p = \arg\min_{p\in\mathcal{P}} \|p'-p\|_2, \quad (4)$$

where we write $\kappa'(p';\mathcal{P}',\mathcal{P})$ since the normal vector involved in its computation is from the corresponding point of $\mathcal{P}$. Note that terms similar to (3) are also used in (Wang et al., 2018; Tang et al., 2019) for single-view surface reconstruction. Our use of the term (3) in (4) is to encourage the consistency of local surface geometries between $\mathcal{P}'$ and $\mathcal{P}$, rather than to directly minimize (3) as in (Wang et al., 2018; Tang et al., 2019).

The Combined Geometry-aware Objective. We use the following combined objective to learn the deformed $\mathcal{P}'$ by either adversarial or cooperative perturbations of individual points of $\mathcal{P}$:

$$\mathcal{C}_{\mathrm{Geometry}}(\mathcal{P}',\mathcal{P}) = \mathcal{C}_{\mathrm{Chamfer}}(\mathcal{P}',\mathcal{P}) + \alpha \cdot \mathcal{C}_{\mathrm{Hausdorff}}(\mathcal{P}',\mathcal{P}) + \beta \cdot \mathcal{C}_{\mathrm{Curvature}}(\mathcal{P}',\mathcal{P}), \quad (5)$$

where $\alpha$ and $\beta$ are the weighting parameters.
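To make the curvature term concrete, a minimal NumPy sketch of (3)-(5) is given below, reusing the `chamfer` and `hausdorff` functions sketched earlier. It is brute-force ($O(n^2)$ neighbor search, fine for $n = 1024$), and all names are our own; the default weights $\alpha = 0.1$, $\beta = 1.0$ follow the implementation details reported in Section 4.

```python
import numpy as np

def knn_indices(P, k):
    """Indices of the k nearest neighbors of each point (excluding itself)."""
    D = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)
    return np.argsort(D, axis=1)[:, 1:k + 1]

def normals(P, k=16):
    """Unit normals via eigen-decomposition of each local neighborhood."""
    idx = knn_indices(P, k)
    N = np.empty_like(P)
    for i, nb in enumerate(idx):
        Q = P[nb] - P[nb].mean(axis=0)
        _, vecs = np.linalg.eigh(Q.T @ Q)
        N[i] = vecs[:, 0]                 # eigenvector of smallest eigenvalue
    return N

def kappa(P, n_P, k=16):
    """Approximate local mean curvature, equation (3)."""
    idx = knn_indices(P, k)
    d = P[idx] - P[:, None, :]            # (n, k, 3) offsets to neighbors
    d /= np.linalg.norm(d, axis=-1, keepdims=True)
    return np.abs((d * n_P[:, None, :]).sum(-1)).mean(axis=1)

def curvature_loss(P_adv, P, k=16):
    """Equation (4); reuses benign normals as surrogates for P_adv, as in the text."""
    n_P = normals(P, k)
    closest = ((P_adv[:, None, :] - P[None, :, :]) ** 2).sum(-1).argmin(axis=1)
    k_adv = kappa(P_adv, n_P[closest], k)
    k_ben = kappa(P, n_P, k)
    return ((k_adv - k_ben[closest]) ** 2).mean()

def geometry_loss(P_adv, P, alpha=0.1, beta=1.0):
    """Combined geometry-aware objective, equation (5)."""
    return chamfer(P_adv, P) + alpha * hausdorff(P_adv, P) \
        + beta * curvature_loss(P_adv, P)
```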
}, { "heading": "3.2 GENERATION OF ADVERSARIAL POINT CLOUDS", "text": "Assume a point cloud classification model f : X → Y . An adversarial example of a point cloud P is a crafted malicious input P ′ to the model f(·), with imperceptible deformation, such that P ′ is misclassified by the model. Let the true label of P be y ∈ Y , it means that f(P ′) 6= y. Adversarial examples can be generated either by untargeted or targeted attacks. Similar to (Xiang et al., 2019), we focus in this work on the more difficult task of targeted attack for point cloud data, which generates P ′ such that it is classified as a specified class y′ 6= y, i.e., f(P ′) = y′. Among various attack models proposed for 2D image classification (Madry et al., 2017; Carlini & Wagner, 2017), we adopt the state-of-the-art framework of C&W attack (Carlini & Wagner, 2017). Its objective can be generally written as\nmin x′\nCAdv(x ′) + λ · ‖x′ − x‖p, (6)\nwhere x is the benign signal (e.g., an image) and x′ is the adversarial example to be optimized. The misclassification loss CAdv(x′) is regularized by a λ weighted, lp-norm based term that constrains the noise magnitude of adversarial x′. To specify CAdv(x′), we assume a classification model f(·) be implemented as a deep network, and denote as g(·) the function that outputs the network logits, i.e., g(·) includes all layers of the network except the final softmax. Let the targeted label attacking x as y′, C&W attack commonly uses a margin based loss function as CAdv(x\n′) = max{maxi 6=y′ gi(x′)− gy′(x′), 0}. In this work, we adopt the C&W attack framework, and propose to replace the term of lp-norm in (6) with our geometry-aware objective (5), in order to generate adversarial point clouds with imperceptible shape deformations, giving\nmin P′\nCAdv(P ′) + λ · CGeo(P ′,P). (7)\nThe original choice of margin based CAdv(P ′) in C&W attack ceases pursuing more adversarial examples once maxi 6=y′ gi(P ′) − gy′(P ′) ≤ 0, assuming that further optimization would reduce\nthe imperceptibility of the resulting P ′. Our proposed geometry-aware (5) allows us to take a more aggressive strategy, and we propose to use the following term as our misclassification loss\nCAdv(P ′) = − log ( exp(gy′(P ′))/\n∑ i exp(gi(P ′))\n) . (8)\nOur proposed objective (7) with term (8) continues to pursue more adversarial, arguably less defendable, point clouds without introducing noticeable shape deformations, as empirically verified by our experiments in section 4." }, { "heading": "3.3 GENERATION OF COOPERATIVE POINT CLOUDS", "text": "The adversarial objective (7) motivates us to think of an opposite direction of geometry-aware point cloud deformations that are less noticable by humans. Specifically, is it possible to perturb points of any P in a cooperative, geometry-aware manner such that the resulting P ′ is more favored by machine learning models in terms of improved classification confidence/accuracy? Technically, for a given classifier f(·), this seems to be achieved trivially by optimizing\nmin P′\nCCoop(P ′) + λ · CGeo(P ′,P), (9)\nwith the cooperative loss term as\nCCoop(P ′) = − log ( exp(gy(P ′))/\n∑ i exp(gi(P ′))\n) , (10)\nwhere we simply use the true label y of P to replace the targeted attack label y′ 6= y in (8). Given knowledge of the true label y of P , the objective (9) seems only learn a deformed P ′ that overfits the classifier f(·), which could be practically less meaningful. 
In this work, we present the following interesting, but less explored, investigation based on (9), which suggests that subtle deformations of shape instances of common object categories (e.g., those in ShapeNet (Chang et al., 2015)) could be practically meaningful in terms of being better perceived by existing 3D point set classification models (Qi et al., 2017a;b; Wang et al., 2019b).

Assume we are given a set of $m$ point clouds $\{\mathcal{P}_i\}_{i=1}^m$ of different object categories. We take the following procedure on $\{\mathcal{P}_i\}_{i=1}^m$ (a code sketch of this procedure is given at the end of this section).

1. We divide $\{\mathcal{P}_i\}_{i=1}^m$ evenly into $s$ subsets, with consideration of class balance.

2. We use the first $s-1$ subsets as training data to train a classifier $f^s(\cdot)$, and use the objective (9) to cooperatively deform point clouds in the $s$-th subset (the validation set) w.r.t. $f^s(\cdot)$.

3. We perform step 2 $s$ times, using each of the $s$ subsets as the validation set, obtaining $f^i(\cdot)$, $i = 1, \dots, s$, and the aggregated collection of deformed $\{\mathcal{P}'_i\}_{i=1}^m$.

The above procedure can optionally be conducted multiple times to further increase the degree of deformations, with awareness of geometric imperceptibility. We use the obtained $\{\mathcal{P}'_i\}_{i=1}^m$ in a standard training-and-test setting of point set classification. That is, we split $\{\mathcal{P}'_i\}_{i=1}^m$ into training and test data, whose indices are the same as those of the original $\{\mathcal{P}_i\}_{i=1}^m$, and train a new classifier $\hat{f}(\cdot)$ to evaluate $\hat{f}(\cdot)$ on the split test data.

The obtained $\{\mathcal{P}'_i\}_{i=1}^m$ are indeed cooperative when the performance of $\hat{f}(\cdot)$ is improved over that of the original classifier $f(\cdot)$. In fact, we have further conducted cross-model experiments by learning $\{\mathcal{P}'_i\}_{i=1}^m$ via the above procedure with PointNet (Qi et al., 2017a), and training and evaluating $\hat{f}(\cdot)$ with other representative point set classifiers (e.g., PointNet++ (Qi et al., 2017b) and DGCNN (Wang et al., 2019b)). We have also conducted experiments by converting the obtained $\{\mathcal{P}'_i\}_{i=1}^m$ to their mesh representations (Edelsbrunner et al., 1983), and uniformly re-sampling points from the meshes to form $\{\tilde{\mathcal{P}}'_i\}_{i=1}^m$; classification performance again improves by training and testing on $\{\tilde{\mathcal{P}}'_i\}_{i=1}^m$, confirming that we are indeed cooperatively deforming the underlying object surfaces.

While our above investigations remain in the digital space, cooperative deformations of point clouds could be physically achieved either by changes of shape design for common rigid objects, or by designing special devices that actively control the reflections of laser pulses from LiDAR (Cao et al., 2019). Physical-world adversarial attacks (Athalye et al., 2017; Eykholt et al., 2018) have been actively pursued recently in the 2D image domain. Deformations of geometry-aware imperceptibility make sense here since we do not change the geometric and/or topological surface structures of the objects; otherwise the deformations would have influence on human perception or functional attributes of the objects."
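The $s$-fold procedure described above is easy to express in code. The following NumPy sketch is our own illustration, assuming `point_clouds` is an array of shape $(m, n, 3)$ and `labels` of shape $(m,)$; the `train_classifier` and `deform` callbacks (the latter optimizing objective (9)) are hypothetical placeholders.

```python
import numpy as np

def cooperative_deform(point_clouds, labels, train_classifier, deform, s=5,
                       rng=np.random.default_rng(0)):
    """Sketch of the s-fold cooperative procedure of Section 3.3.

    train_classifier(P_train, y_train) -> classifier          (assumption)
    deform(P, y, classifier) -> P' optimized with objective (9) (assumption)
    """
    m = len(point_clouds)
    folds = np.arange(m) % s
    rng.shuffle(folds)               # in practice, stratify by class for balance
    deformed = [None] * m
    for fold in range(s):
        val = folds == fold
        clf = train_classifier(point_clouds[~val], labels[~val])
        for i in np.flatnonzero(val):
            deformed[i] = deform(point_clouds[i], labels[i], clf)
    return deformed
```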
}, { "heading": "4 EXPERIMENTS", "text": "Dataset. We use point clouds of object instances from ModelNet40 (Wu et al., 2015) to evaluate our proposed algorithms. The dataset consists of 12,311 CAD models belonging to 40 semantic categories. For each CAD model, 1,024 points are uniformly sampled from its surface as the working point cloud, which is re-scaled into a unit ball following (Qi et al., 2017b).

Models and Protocols. We evaluate the adversarial and cooperative generation of point clouds based on three representative classifiers, namely PointNet (Qi et al., 2017a), PointNet++ (Qi et al., 2017b), and DGCNN (Wang et al., 2019b). For more details of the networks, please refer to A.6. In our adversarial setting, we follow the official data split for 3D classification (Qi et al., 2017a;b) and use 9,843 samples of point clouds for training the classifiers that are to be attacked; for testing, we follow (Xiang et al., 2019) and randomly select 25 samples from each testing set of the top-10 object classes (ordered by sample sizes of these classes, namely airplane, bed, bookshelf, bottle, chair, monitor, sofa, table, toilet, vase); all adversarial attack algorithms are evaluated with a white-box, targeted attack protocol. In our cooperative setting, we set the subset number as $s = 5$.

Evaluation Metrics. We respectively use the attack success rate (i.e., misclassification rate) and classification accuracy to evaluate the algorithmic effectiveness in our adversarial and cooperative settings. For both metrics, the higher, the better.

Implementation Details. We set $k = 16$ to define local point neighborhoods for computation of normals and approximate curvatures in (3). We fix $\alpha = 0.1$ and $\beta = 1.0$ for our proposed geometry-aware loss (5), and the trade-off parameter $\lambda$ in (7) and (9) is optimized via 10-step binary search. We use the Adam optimizer (Kingma & Ba, 2014) to train networks, and set its learning rate as 0.01." }, { "heading": "4.1 EVALUATION ON GENERATION OF ADVERSARIAL POINT CLOUDS", "text": "In this section, we evaluate the efficacy of our proposed geometry-aware objective (7) for generation of adversarial point clouds. We dub our method GeoAwareAdv and compare with the method of (Xiang et al., 2019), which is among the only few existing works (Liu et al., 2019a;b) addressing adversarial point clouds and takes the same white-box, targeted attack setting as our method does. The objective of (Xiang et al., 2019) strictly follows the C&W attack (6) (Carlini & Wagner, 2017), with a margin-based misclassification loss regularized by an $l_2$-norm constraining the magnitude of point perturbations. We also compare with a degenerate version of our method, dubbed GeoDegenerateAdv, which replaces the geometry-aware term (5) with a similar $l_2$-norm as in (Xiang et al., 2019). These experiments are conducted using PointNet (Qi et al., 2017a) as the classifier.

Ablation Studies. To investigate how different terms in our proposed geometry-aware loss (5) contribute, we conduct ablation studies by removing each of them from (5), and use the remaining ones in the adversarial objective (7) for point cloud generation. Figure 1 shows that these terms all together contribute to smooth and outlier-free results. Without using CD, the shape deformations go to unexpected twists. The use of HD and curvature terms largely removes generation of outliers, with the latter further improving the surface smoothness.

Comparative Results. We report comparative results of our GeoAwareAdv, GeoDegenerateAdv, and the method of (Xiang et al., 2019) in Table 1 and Figure 2. Table 1 compares the attack success rates under two defense methods, SOR (Zhou et al., 2018) and Random Removal (RR).
SOR is the current state-of-the-art method to defend against attacks on point clouds, which works by dropping certain points from a point cloud based on statistical analysis. RR simply drops points from a point cloud randomly. Based on our stronger adversarial term (8), both of our methods achieve better success rates than those of (Xiang et al., 2019) under different dropping settings of the defenses SOR and RR. More significantly, the stronger attacking effects of our GeoAwareAdv are achieved without introduction of noticeable outliers and surface irregularities, as shown in Figure 2; in contrast, the use of a simple $l_2$ norm in (Xiang et al., 2019) produces adversarial results with clear outliers. The advantages of our GeoAwareAdv are essentially due to the use of the HD and curvature terms in our geometry-aware loss (5)." }, { "heading": "4.2 EVALUATION ON GENERATING GEOMETRY-AWARE COOPERATIVE POINT CLOUDS", "text": "Comparative Evaluation. Cooperative point clouds are generated based on PointNet by optimizing objective function (9), and a variant uses point clouds re-sampled from their mesh surfaces. A number of recent classifiers are trained and evaluated on vanilla and cooperative point clouds respectively, whose results are reported in Figure 3 and Table 2. It can be observed from Table 2 that both kinds of cooperative point clouds significantly outperform the vanilla ones by a large margin, i.e., an increase of at least 6% in classification accuracy. Moreover, the performance gap between the two types of cooperative point clouds can be explained by both approximation errors of mesh reconstruction and the changes in point distribution, which encourages us to consider adversarial shape deformations on the mesh level.

Cross-model experiments. We conduct one more cross-model experiment to test the transferable characteristics of cooperative point clouds across classifiers, i.e., generating cooperative examples on PointNet, while training and evaluating other classifiers. Results in Table 2 show consistent improvement in classification accuracy for different neural classifiers, which reveals that cooperative shape deformations on point clouds can capture discriminative geometric patterns favorable for semantic object classification, independently of the neural models." }, { "heading": "5 CONCLUSIONS", "text": "In this paper, we propose a compact geometry-aware loss to constrain point-wise imperceptible perturbations, so that deformed point clouds achieve geometric smoothness and fairness of local patches and a global topological configuration similar to the original ones. Experiment results reveal the rationale of the three components in the proposed geometry-aware loss for generating adversaries without evident outliers and shape irregularities, which can thus achieve adversarial effects even when confronting the SOR defense. Moreover, cooperative generation of point clouds also demonstrates its positive effects on improving classification with awareness of geometric properties." }, { "heading": "A APPENDIX", "text": "A.1 VISUALIZATION OF MORE GEOMETRY-AWARE ADVERSARIAL EXAMPLES.

A.2 VISUALIZATION OF MORE GEOMETRY-AWARE COOPERATIVE EXAMPLES.

A.3 ABLATION STUDY OF DIFFERENT α AND β.

A.4 QUANTITATIVE RESULTS FOR FIGURE 1.

It is hard to define a metric evaluating the geometrical smoothness of a point cloud directly due to its irregularity and lack of geometric topological connections.
Hence, we introduce an approximation metric that mainly focuses on measuring the strength of isolated noisy points and the roughness of local patches, which can be defined as:

$$S(\mathcal{P}) = \max_{p\in\mathcal{P}} \sum_{q\in\mathcal{N}_p} D(q, T_{\mathcal{N}_p}), \quad (11)$$

where the estimated tangent plane of the $k$ nearest neighbors $\mathcal{N}_p$ of $p$ is denoted by $T_{\mathcal{N}_p}$, and $D$ denotes the distance function between the points and the tangent plane, for which we use the $l_2$ norm in our settings. Intuitively, we are calculating the residual of a first-order Taylor expansion of the points: we estimate the first-order approximation at a point $p$, i.e., the tangent plane $T_{\mathcal{N}_p}$, and then measure how far its neighbor points $\mathcal{N}_p$ are from this plane. Note that we set $|\mathcal{N}_p| = k$ to be small to reduce approximation error; in the following settings $k = 16$.

We evaluate $S(\mathcal{P})$ on the experiments of Figure 1, and the quantitative results are shown in Table 3. As can be seen in the table, with all the losses, the geometry-aware adversarial point clouds achieve the lowest $S(\mathcal{P})$, i.e., the point clouds are smoothest. And with many outliers, the adversarial point clouds without the Hausdorff Distance get the highest $S(\mathcal{P})$. The quantitative results of the point clouds without the Chamfer Distance are a little higher than those of the point clouds without Consistency of Local Curvatures, which is consistent with our perception that the point clouds without Consistency of Local Curvatures are also very smooth except for some high-frequency outliers near their surfaces.

A.5 USER STUDY ON AMAZON MECHANICAL TURK (AMT).

We conducted a user study on Amazon Mechanical Turk (AMT) in order to verify the imperceptible quality of our adversarial examples. We uploaded snapshots of three kinds of point clouds: the benign point clouds, the adversarial point clouds generated via the method of Xiang et al. (2019), and our geometry-aware adversarial ones. All the adversarial point clouds successfully mislead PointNet. Participants were asked to compare which one of the adversarial point clouds is more geometrically similar to the original one. The order of the two kinds of adversarial point clouds was randomized and all the images appeared in the middle of the screen on each trial. Each participant could conduct at most 30 trials and each adversarial image was shown to at most 50 different participants. In total, we conducted 1500 trials among 128 participants. Our geometry-aware adversarial point clouds were considered to be closer to the original ones in 82.06% of the trials, which indicates that our geometry-aware point clouds are more imperceptible. We think this experiment further supports our opinion that our geometry-aware adversarial examples are more imperceptible to humans. And we will attach these experiments to the appendix of our revised paper.

A.6 INTRODUCTION OF OUR POINT CLOUD CLASSIFIER.

PointNet (Qi et al., 2017a) and PointNet++ (Qi et al., 2017b) are the first attempts to explore deep point cloud classification, with the permutation invariance of points in multi-layer perceptrons (MLPs) and a symmetric function for aggregating features. Both methods can only implicitly model global semantic patterns from 3D geometry. Another group of algorithms designs graph-based convolution operations on the irregularly distributed structure of points, such as DGCNN (Wang et al., 2019b). In DGCNN, an edge convolution operation is proposed on a dynamic graph to discover local geometric manifolds.

We mainly focus on PointNet (Qi et al., 2017a) as our baseline due to its simple structure and good performance.
The classifier $f$ can be formulated as $f(\mathcal{P}) = \gamma(\max_{p_i\in\mathcal{P}}\{h(p_i)\})$, where $\gamma$ and $h$ are learnable functions of the neural network.

A.7 INTRODUCTION OF DEFENSIVE ALGORITHMS

We adopt two kinds of defense methods: the random removal method (RR for short) and the statistical outlier removal method (SOR for short) (Zhou et al., 2018).

In RR, we randomly select a subset of points from the point cloud and drop it. In SOR, we calculate the average distance $d_p$ of a point $p$ to its $k$ nearest neighbors, which can be denoted by

$$d_p = \frac{1}{k}\sum_{q\in\mathcal{N}_p}\|p - q\|_2. \quad (12)$$

We then calculate the mean $\bar{d}$ and standard deviation $\sigma_d$ over all the distances $d_p$ of points $p \in \mathcal{P}$ in the point cloud. We remove all the points that fall outside $\bar{d} \pm a \times \sigma_d$, where $a$ is set to 1.1 in Zhou et al. (2018). According to our experiments, around 100 points out of 1024 will be dropped when $a$ is set to 1.1.

However, in our settings, we drop a fixed number of points with the $m$ largest $d_p$ for a fairer comparison between different methods. We drop $m = 1, 2, 4, 8, 16, 32, 64, 128, 256$ respectively in Table 1." } ]
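For completeness, the fixed-$m$ SOR variant of equation (12) is easy to reproduce; below is our own NumPy sketch (names are ours, not the code of Zhou et al. (2018)):

```python
import numpy as np

def sor_defense(P, k=2, m=64):
    """Drop the m points with the largest average distance d_p to their
    k nearest neighbors (equation (12)); the fixed-m variant of Table 1."""
    D = np.sqrt(((P[:, None, :] - P[None, :, :]) ** 2).sum(-1))
    knn = np.sort(D, axis=1)[:, 1:k + 1]   # exclude the zero self-distance
    d_p = knn.mean(axis=1)
    keep = np.argsort(d_p)[:len(P) - m]    # keep all but the m largest d_p
    return P[keep]
```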
2,019
null
SP:f447505525e50eabf0dd4dd7a09ce763f16a233c
[ "This paper presents a new reversible flow-based graph generative model wherein the whole graph i.e., representative attributes such as node features and adjacency tensor is modeled using seperate streams of invertible flow model. This allows training of generative model using exact likelihood maximization over the underlying graph dataset.The model avoids encoding any domain specific heuristics and thus can be applied to any structured graph data. The paper focusses it applicability for molecular graphs. Given that this approach avoids sequential generation of graph, it is faster by an order of magnitude than prior models for molecular generation. Empirical experiments on couple of molecular graph data suggets that GraphNVP approach performs as well as prior approach but albeit without any rule checker.", "In this paper, a GraphNVP framework for molecular graph generation is proposed. The main difference from the previously proposed models is the use of the invertible normalizing flow idea for the generative model, which doesn’t require a separate decoder for sampling. This architecture is implemented with coupling layers combined with a multi-layer perceptron. The model is evaluated and compared on QM9 and ZINC chemical molecular datasets." ]
We propose GraphNVP, an invertible flow-based molecular graph generation model. Existing flow-based models only handle node attributes of a graph with invertible maps. In contrast, our model is the first invertible model for the whole graph components: both dequantized node attributes and the adjacency tensor are converted into latent vectors through two novel invertible flows. This decomposition yields exact likelihood maximization on graph-structured data. We decompose the generation of a graph into two steps: generation of (i) an adjacency tensor and (ii) node attributes. We empirically demonstrate that our model and the two-step generation efficiently generate valid molecular graphs with almost no duplicated molecules, although there are no domain-specific heuristics ingrained in the model. We also confirm that the sampling (generation) of graphs is faster by an order of magnitude than that of other models in our implementation. In addition, we observe that the learned latent space can be used to generate molecules with desired chemical properties. Finally, we list open problems for this new direction of fully invertible graph generation research.
[]
[ { "authors": [ "Nicola De Cao", "Thomas Kipf" ], "title": "Molgan: An implicit generative model for small molecular graphs", "venue": "arXiv preprint arXiv:1805.11973,", "year": 2018 }, { "authors": [ "Laurent Dinh", "David Krueger", "Yoshua Bengio" ], "title": "Nice: Non-linear independent components estimation", "venue": "In Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Laurent Dinh", "Jascha Sohl-Dickstein", "Samy Bengio" ], "title": "Density estimation using real nvp", "venue": "In Proceedings of International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Rafael Gómez-Bombarelli", "Jennifer N Wei", "David Duvenaud", "José Miguel Hernández-Lobato", "Benjamín Sánchez-Lengeling", "Dennis Sheberla", "Jorge Aguilera-Iparraguirre", "Timothy D Hirzel", "Ryan P Adams", "Alán Aspuru-Guzik" ], "title": "Automatic chemical design using a data-driven continuous representation of molecules", "venue": "ACS central science,", "year": 2018 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Will Grathwohl", "Ricky T.Q. Chen", "Jesse Bettencourt", "Ilya Sutskever", "David Duvenaud" ], "title": "FFJORD: Free-Form Continuous Dynamics for Scalable Reversible Generative Models", "venue": "In Proceedings of ICLR,", "year": 2019 }, { "authors": [ "Gabriel Guimaraes", "Carlos Sanchez-Lengeling", "Outeiral", "Pedro Luis Cunha Farias", "Alan AupuruGuzip" ], "title": "Object-reinforced generative adversarial networks (organ) for seuqnce generation models. arXiv, 18:1705.18043v2 [stat.ml], 2017", "venue": null, "year": 2017 }, { "authors": [ "Emiel Hoogeboom", "Jorn W.T. Peters", "Rianne van den Berg", "Max Welling" ], "title": "Integer discrete flows and lossless compression", "venue": null, "year": 1905 }, { "authors": [ "John J Irwin", "Teague Sterling", "Michael M Mysinger", "Erin S Bolstad", "Ryan G Coleman" ], "title": "Zinc: a free tool to discover chemistry for biology", "venue": "Journal of chemical information and modeling,", "year": 2012 }, { "authors": [ "Wengong Jin", "Regina Barzilay", "Tommi Jaakkola" ], "title": "Junction tree variational autoencoder for molecular graph generation", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Diederik P. Kingma", "Jimmy Lei Ba" ], "title": "Adam: a Method for Stochastic Optimization", "venue": "In Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In Proceedings of the 2nd International Conference on Learning Representations (ICLR),", "year": 2014 }, { "authors": [ "Durk P Kingma", "Prafulla Dhariwal" ], "title": "Glow: Generative flow with invertible 1x1 convolutions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Ivan Kobyzev", "Simon Prince", "Marcus A Brubaker" ], "title": "Normalizing Flows: Introduction and Ideas", "venue": "arXIv, pp. 
1908.09257", "year": 1908 }, { "authors": [ "Aviral Kumar", "Jimmy Ba", "Jamie Kiros", "Kevin Swersky" ], "title": "GRevnet: Improving Graph Neural Nets wiht Reversible Computation", "venue": "In Proceedings of the Relational Representation Learning Workshop at NeurIPS", "year": 2018 }, { "authors": [ "Matt J Kusner", "Brooks Paige", "José Miguel Hernández-Lobato" ], "title": "Grammar variational autoencoder", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 1954 }, { "authors": [ "Jenny Liu", "Aviral Kumar", "Jimmy Ba", "Jamle Kiros", "Kevin Swersky" ], "title": "Graph normalizing flows", "venue": "arXiv preprint arXiv:1905.13177,", "year": 2019 }, { "authors": [ "Qi Liu", "Miltiadis Allamanis", "Marc Brockschmidt", "Alexander Gaunt" ], "title": "Constrained graph variational autoencoders for molecule design", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Tengfei Ma", "Jie Chen", "Cao Xiao" ], "title": "Constrained generation of semantically valid graphs via regularizing variational autoencoders", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "arXiv preprint arXiv:1511.06434,", "year": 2015 }, { "authors": [ "Raghunathan Ramakrishnan", "Pavlo O Dral", "Matthias Rupp", "O Anatole Von Lilienfeld" ], "title": "Quantum chemistry structures and properties of 134 kilo molecules", "venue": "Scientific data,", "year": 2014 }, { "authors": [ "Danilo Rezende", "Shakir Mohamed" ], "title": "Variational inference with normalizing flows", "venue": "In Francis Bach and David Blei (eds.), Proceedings of the 32nd International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Michael Schlichtkrull", "Thomas N Kipf", "Peter Bloem", "Rianne Van Den Berg", "Ivan Titov", "Max Welling" ], "title": "Modeling relational data with graph convolutional networks", "venue": "In European Semantic Web Conference,", "year": 2018 }, { "authors": [ "Martin Simonovsky", "Nikos Komodakis" ], "title": "Graphvae: Towards generation of small graphs using variational autoencoders", "venue": "In International Conference on Artificial Neural Networks,", "year": 2018 }, { "authors": [ "E G Tabak", "Cristina V Turner" ], "title": "A Family of Nonparametric Density Estimation Algorithms", "venue": "Communications on Pure and Applied Mathematics,", "year": 2013 }, { "authors": [ "Esteban G. Tabak", "Eric Vanden-Eijnden" ], "title": "Density estimation by dual ascent of the loglikelihood", "venue": "Communications in Mathematical Sciences, 8(1):217–233,", "year": 2010 }, { "authors": [ "L Theis", "A van den Oord", "M Bethge" ], "title": "A note on the evaluation of generative models", "venue": "In International Conference on Learning Representations (ICLR 2016),", "year": 2016 }, { "authors": [ "Seiya Tokui", "Kenta Oono", "Shohei Hido", "Justin Clayton" ], "title": "Chainer: a Next-Generation Open Source Framework for Deep Learning", "venue": "In Proceedings of Workshop on Machine Learning Systems (LearningSys) in The Twenty-ninth Annual Conference on Neural Information Processing Systems (NIPS),", "year": 2015 }, { "authors": [ "Jul" ], "title": "PMLR", "venue": "URL http://proceedings.mlr.press/v80/you18a.html. 
", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Generation of molecules with certain desirable properties is a crucial problem in computational drug discovery. Recently, deep learning approaches are being actively studied for generating promising candidate molecules quickly. Earlier models (Kusner et al., 2017; Gómez-Bombarelli et al., 2018) depend on a string-based representation of molecules. However, recent models (Jin et al., 2018; You et al., 2018a; De Cao & Kipf, 2018) directly work on molecular graph representations and record impressive experimental results. In these studies, either variational autoencoder (VAE) (Kingma & Welling, 2014) or generative adversarial network (GAN) (Goodfellow et al., 2014; Radford et al., 2015) are used mainly to learn mappings between the graphs and their latent vector representations.\nIn this paper, we propose GraphNVP, yet another framework for molecular graph generation based on the invertible normalizing flow, which was mainly adopted for image generation tasks (Dinh et al., 2017; Kingma & Dhariwal, 2018). To capture distributions of irregular graph structure of molecules into a latent representation, we propose a novel two-step generation scheme. Specifically, GraphNVP is equipped with two latent representations for a molecular graph: first for the graph structure represented by an adjacency tensor, and second for node (atom) attributes. We introduce two types of reversible flows that work for the aforementioned two latent representations of graphs.\nRecent work by Liu et al. (2019) proposes a flow-based invertible model for transforming the node attribute matrix. However, they use a non-invertible encoder for transforming the adjacency tensor making the complete model non-invertible. Our model is the first fully invertible model for the whole graph components: both adjacency tensor and node attributes are converted into latent vectors through two novel invertible flows.\nTo sample a graph, we develop a novel two-step generation process. During the generation process, GraphNVP first generates the graph structure. Then node attributes are generated according to this structure. This two-step generation enables efficient generation of valid molecular graphs. The full reversibility of our model on graphs contributes to two major benefits: a simple architecture and precise log-likelihood maximization. A major advantage of invertible models is that we do not need to design a separate decoder for sample generation: new graph samples can be generated by simply feeding a latent vector into the same model but in the reverse order.\nIn contrast, VAE models require an encoder and a separated decoder. Decoding processes of several VAE graph generators are often quite complicated to assure valid generations (Kusner et al., 2017; Jin et al., 2018; Ma et al., 2018), and computing a graph reconstruction loss may require expensive graph matching (Simonovsky & Komodakis, 2018). The lack of an encoder in GAN models makes it challenging to manipulate the sample generation. For example, it is not straightforward to use a GAN model to generate graph samples that are similar to a query graph (e.g., lead optimization for drug discovery), while it is easy for flow-based models.\nUnlike VAEs and GANs, invertible models are capable of precise log-likelihood evaluation. 
We believe precise optimization is crucial in molecule generation for drugs, which are highly sensitive to a minor replacement of a single atom (node).\nIn the experiments, we compare the proposed flow model with several existing graph generation models using two popular molecular datasets. The proposed flow model generates molecular graphs with almost 100% uniqueness ratio: namely, the results contain almost no duplicated molecular graphs without ingrained domain expert knowledge and extra validity checks. The proposed model enjoys fast graph samplings; faster in orders of magnitude than other graph generation models in our implementation. Additionally, we show that the learned latent space can be utilized to generate molecular graphs with desired chemical properties, even though we do not encode domain expert knowledge into the model. Finally we list open problems for the development of this new direction of fully invertible graph generation researches." }, { "heading": "2 RELATED WORK", "text": "" }, { "heading": "2.1 MOLECULAR GRAPH GENERATION", "text": "We can classify the existing molecular graph generation models based on how the data distribution is learned. Most current models belong to two categories. First, VAE-based models assume a simple variational distribution for latent representation vectors (Jin et al., 2018; Liu et al., 2018; Ma et al., 2018). Second, some models implicitly learn the empirical distribution, especially based on the GAN architecture (e.g., (De Cao & Kipf, 2018; You et al., 2018a; Guimaraes et al., 2017)). Some may resort to reinforcement learning (You et al., 2018a) to alleviate the difficulty of direct optimization of the objective function. We also observe an application of autoregressive recurrent neural networks (RNN) for graphs (You et al., 2018b). In this paper, we will add a new category to this list: namely, the invertible flow.\nAdditionally, we can classify the existing models based on the process they use for generating a graph. There are mainly two choices in the generation process. One is a sequential iterative process, which generates a molecule in a step-by-step fashion by adding nodes and edges one by one (Jin et al., 2018; You et al., 2018a). The alternative is one-shot generation of molecular graphs, when the graph is generated in a single step. This process resembles commonly used image generation models (e.g., (Kingma & Dhariwal, 2018)). The former process is advantageous in (i) dealing with large molecules and (ii) forcing validity constraints on the graph (e.g., a valency condition of molecule atoms). The latter approach has a major advantage: the model is simple to formulate and implement. This is because the one-shot approach does not have to consider arbitrary permutations of the sequential steps, which can grow exponentially with the number of nodes in the graph.\nCombining these two types of classification, we summarize the current status of molecular graph generation in Table 1. In this paper, we propose the first graph generation model based on the invertible flow, with one-shot generation strategy." }, { "heading": "2.2 INVERTIBLE FLOW MODELS", "text": "To the best of our knowledge, the invertible flow was first introduced to the machine learning community by (Tabak & Vanden-Eijnden, 2010; Tabak & Turner, 2013). Later, Rezende & Mohamed (2015) and Dinh et al. (2015) leveraged deep neural networks in defining tractable invertible flows. Dinh et al. 
(2015) introduced reversible transformations for which the log-determinant calculation is tractable. These transformations, known as coupling layers, serve as the basis of recent flow-based image generation models (Dinh et al., 2017; Kingma & Dhariwal, 2018; Grathwohl et al., 2019).\nReaders are referred to the latest survey (Kobyzev et al., 2019) for general flow methodologies.\nSo far, the application of flow-based models is mostly limited to the image domain. As a few exceptions, Kumar et al. (2018) proposed flow-based invertible transformations on graphs. However, their model is only capable of modeling the node assignments and cannot learn a latent representation of the adjacency tensor; therefore, it cannot generate a graph structure. Liu et al. (2019) proposed to plug a non-invertible decoder for the adjacency tensor into this flow model afterwards, giving up training the entire graph generator in a single unified estimator. We overcome this issue by introducing two latent representations, one for node assignments and another for the adjacency tensor, to capture the unknown distributions of the graph structure and its node assignments. Thus, we consider our proposed model to be the first invertible flow-based model that can generate attributed graphs including the adjacency structure." }, { "heading": "3 GRAPHNVP: FLOW-BASED GRAPH GENERATION MODEL", "text": "" }, { "heading": "3.1 FORMULATION", "text": "We use the notation G = (A, X) to represent a graph G consisting of an adjacency tensor A and a feature matrix X. Let there be N nodes in the graph. Let M be the number of types of nodes and R be the number of types of edges. Then A ∈ {0, 1}^{N×N×R} and X ∈ {0, 1}^{N×M}. In the case of molecular graphs, G = (A, X) represents a molecule with R types of bonds (single, double, etc.) and M types of atoms (e.g., oxygen, carbon, etc.). Our objective is to learn an invertible model f_θ with parameters θ that maps G into a latent point z = f_θ(G) ∈ \mathbb{R}^D, where D = (N × N × R) + (N × M). We describe f_θ as a normalizing flow composed of multiple invertible functions.\nLet z be a latent vector drawn from a known prior distribution p_z(z) (e.g., Gaussian): z ∼ p_z(z). With the change-of-variables formula, the log probability of a given graph G can be calculated as:\n\log p_G(G) = \log p_z(z) + \log \left| \det \left( \frac{\partial z}{\partial G} \right) \right| \quad (1)\nwhere ∂z/∂G is the Jacobian of f_θ at G." }, { "heading": "3.2 GRAPH REPRESENTATION", "text": "Directly applying a continuous density model on discrete components may result in degenerate probability distributions. Therefore, we cannot directly employ the change-of-variables formula (Eq. 1) for these components. The same issue, especially modeling the discrete structure of the adjacency A, has been a problem in existing one-shot generators based on GAN (De Cao & Kipf, 2018) and VAE (Ma et al., 2018). They resort to an ad-hoc workaround: treating the adjacency tensor as a real-valued continuous tensor. In this paper we take another approach, dequantization (Theis et al., 2016), following the flow-based image generation models (Dinh et al., 2017; Kingma & Dhariwal, 2018). The dequantization process adds uniform noise to A and X and yields the dequantized graph components G′ = (A′, X′). Specifically, A′ = A + cu with u ∼ U[0, 1)^{N×N×R}, and X′ = X + cu with u ∼ U[0, 1)^{N×M}, where 0 < c < 1 is a scaling hyperparameter (c = 0.9 is adopted for our experiments).\nThis G′ is used as the input in Eq. 1.
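As a concrete illustration of the dequantization step above, a minimal numpy sketch (shapes follow the paper's N×N×R adjacency and N×M feature conventions; the function names are ours):

```python
import numpy as np

def dequantize(A, X, c=0.9, rng=np.random):
    """G' = (A', X') with A' = A + c*u and X' = X + c*u, u ~ U[0, 1), 0 < c < 1."""
    return A + c * rng.uniform(size=A.shape), X + c * rng.uniform(size=X.shape)

def quantize(A_deq, X_deq):
    """Recover the discrete graph: flooring undoes the added noise since c < 1."""
    return np.floor(A_deq), np.floor(X_deq)
```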
Note that the original discrete inputs A and X can be recovered by quantization: simply applying floor operation on each continuous value in A′ and X ′.\nHereafter, all the transformations consisting fθ are performed on dequantized inputs A′ and X ′, not on A and X . It means fθ is a bijective function that maps G′ → z: thus f−1(z) returns the dequantized G′, not the original G. However, our generative model can recover the original discrete G by performing the postprocessing quantization to inverted G′.\nThere are a few works related to discrete invertible flows such as (Hoogeboom et al., 2019; Tran et al., 2019). The former maps discrete data x to a discrete latent space. However, we prefer a smoothly distributed continuous latent space for molecule decoration and optimization applications (see Sec. 4.3). The latter can map discrete data x to a continuous z, but computation includes approximation. Approximated likelihood evaluations decreases the significance of the invertible flows against VAEs. So we do not adopt these options in this paper." }, { "heading": "3.3 COUPLING LAYERS", "text": "Based on real-valued non-volume preserving (real NVP) transformations introduced in (Dinh et al., 2017), we propose two types of reversible affine coupling layers; adjacency coupling layers and node feature coupling layers that transform the adjacency tensor A′ and the feature matrix X ′ into latent representations, zA ∈ RN×N×R and zX ∈ RN×M , respectively. We apply LX layers of node feature coupling layers to a feature matrix X ′ to obtain zX . We denote an intermediate representation of the feature matrix after applying the `th node feature coupling layer as z(`)X . Starting from z (0) X = X ′, we repeat updating rows of zX over LX layers. Each row of z (`) X corresponds to a feature vector of a node in the graph. Finally, we obtain zX = z (LX) X as the final latent representation of the feature matrix. The `th node feature coupling layer updates a single row ` of the feature matrix while keeping the rest of the input intact:\nz (`) X [`, :]← z (`−1) X [`, :] exp\n( s(z\n(`−1) X [`\n−, :], A) ) + t(z\n(`−1) X [` −, :], A), (2)\nwhere functions s and t stand for scale and translation operations, and denotes element-wise multiplication. We use zX [`−, :] to denote a latent representation matrix of X ′ excluding the `th row (node). Rest of the rows of the feature matrix will stay the same as\nz (`) X [` −, :]← z(`−1)X [` −, :]. (3)\nBoth s and t can be formulated with arbitrary nonlinear functions, as the reverse step of the model does not require inverting these functions. Therefore, we use the graph adjacency tensor A when\ncomputing invertible transformations of the node feature matrix X ′. So as functions s and t in a node feature coupling layer, we use a sequence of generic graph neural networks. It should be noted that we use the discrete adjacency tensor A, as only the node feature matrix is updated in this step. In this paper, we use a variant of Relational GCN (Schlichtkrull et al., 2018) architecture.\nLikewise, we apply LA layers of transformations for the adjacency tensor A′ to obtain the latent representation zA. We denote an intermediate representation of the adjacency tensor after applying the `th adjacency coupling as z(`)A . The `\nth adjacency coupling layer updates only a single slice of z`A with dimensions N×R as:\nz (`) A [`, :, :]← z (`−1) A [`, :, :] exp\n( s(z\n(`−1) A [`\n−, :, :]) ) + t(z\n(`−1) A [` −, :, :]). 
(4)\nThe rest of the rows will stay as it is:\nz (`) A [` −, :, :]← z(`−1)A [` −, :, :]. (5)\nFor the adjacency coupling layer, we adopt multi-layer perceptrons (MLPs) for s and t functions. Starting from z(0)A = A\n′, we repeat updating the first axis slices of zA over LA layers. Finally, we obtain zA = z (LA) A as the final latent representation of the adjacency tensor." }, { "heading": "3.3.1 MASKING PATTERNS AND PERMUTATION OVER NODES", "text": "Eqs. (2, 4) are implemented with masking patterns shown in Figure 2. Based on experimental evidence, we observe that masking zA(A′) and zX(X ′) w.r.t. the node axis performs the best. Because a single coupling layer updates one single slice of zA and zX , we need a sequence of N coupling layers at the minimum, each masking a different node, for each of the adjacency coupling and the node feature coupling layers.\nWe acknowledge that this choice of masking axis over zX and zA makes the transformations not invariant to permutations of the nodes. We can easily formulate permutation-invariant couplings by changing the slice indexing based on the non-node axes (the 3rd axis of the adjacency tensor, and the 2nd axis of the feature matrix). However, using such masking patterns results in dramatically worse performance due to the sparsity of molecular graphs. For example, organic compounds are mostly made of carbon atoms. Thus, masking the carbon column in X ′ (and zX ) results in feeding a nearly-empty matrix to the scale and the translation networks, which is almost non-informative to update the carbon column entries of X ′ and zX . We consider this permutation dependency as a limitation of the current model, and we intend to work on this issue as future work." }, { "heading": "3.4 TRAINING", "text": "During the training, we perform the forward computations shown in Figure 1 over minibatches of training data (G = (A,X)) and obtain latent representations z = concat(zA, zX). Our objective is maximizing the log likelihood pG(G) (Eq. 1) over minibatches of training data. This is implemented as minimization of the negative log likelihood using the Adam optimizer (Kingma & Ba, 2015)." }, { "heading": "3.5 TWO-STEP MOLECULAR GRAPH GENERATION", "text": "Because our proposed model is invertible, graph generation is simply executing the process shown in Figure 1 in reverse. During the training, node feature coupling and adjacency coupling can be performed in either order, as the output of one coupling module does not depend on the output of the other coupling module. However, because the node feature coupling module requires a valid adjacency tensor as an input, we also need an adjacency tensor to perform the reverse step of node feature coupling. Therefore, we apply the reverse step of adjacency coupling module first, so we get an adjacency tensor as the output. Next, the adjacency tensor is fed into the reverse step of the node feature coupling. The generation process is shown in Figure 3. In section 4, we show that this 2-step generation process can efficiently generate chemically valid molecular graphs.\n1st step: We draw a random sample z = concat(zA, zX) from prior pz and split sampled z into zA and zX . Next, we apply a sequence of inverted adjacency coupling layers on zA. As a result, we obtain a probabilistic adjacency tensor Ã′, from which we construct a discrete adjacency tensor à ∈ {0, 1}N×N×R by taking node-wise and edge-wise argmax. 2nd step: We generate a feature matrix given the sampled zX and the generated adjacency tensor Ã. 
We input à along with zX into a sequence of inverted node feature coupling layers to attain X̃ ′. Likewise, we take node-wise argmax of X̃ ′ to get discrete feature matrix X̃ ∈ {0, 1}N×M ." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 PROCEDURE", "text": "We use two popular chemical molecular datasets, QM9 (Ramakrishnan et al., 2014) and ZINC250k (Irwin et al., 2012). QM9 dataset contains 134k molecules, and ZINC-250k is made of 250k drug-like molecules randomly selected from the ZINC database. The maximum number of atoms in a molecule are 9 for the QM9 and 38 for the ZINC, respectively (excluding hydrogen). Following a standard procedure, we first kekulize molecules and then remove hydrogen atoms from them. The resulting molecules contain only single, double, and triple bonds.\nWe convert each molecule to an adjacency tensor A ∈ {0, 1}N×N×R and a feature matrix X ∈ {0, 1}N×M . N is the maximum number of atoms a molecule in a certain dataset can have. If a molecule has less than N atoms, we insert virtual nodes as padding to keep the dimensions of A and X the same for all the molecules. Because the original adjacency tensors can be sparse, we add a virtual bond edge between the atoms that do not have a bond in the molecule. Thus, an adjacency tensor consists of R=4 adjacency matrices stacked together, each corresponding to the existence\nof a certain type of bond (single, double, triple, and virtual bonds) between the atoms. The feature matrix is used to represent the type of each atom (e.g., oxygen, fluorine, etc.).\nWe use a multivariate Gaussian distribution N (0, σ2I) as prior distribution pz(z), where standard deviation σ is learned simultaneously during the training. More details are presented in the appendix." }, { "heading": "4.2 NUMERICAL EVALUATION", "text": "Following (Kingma & Dhariwal, 2018), we sample 1,000 latent vectors from a temperature-truncated normal distribution pz,T (z) (see the appendix for details) and transform them into molecular graphs by performing the reverse step of our model. We compare the performance of the proposed model with baseline models in Table 2 using following metrics. Validity (V) is the percentage of generated graphs corresponding to valid molecules. Novelty (N) is the percentage of generated valid molecules not present in the training set. Uniqueness (U) is the percentage of unique valid molecules out of all generated molecules. Reconstruction accuracy (R) is the percentage of molecules that can be reconstructed perfectly by the model: namely, the ratio of molecules G s.t. G = f−1θ (fθ (G)).\nWe choose Regularizing-VAE (RVAE) (Ma et al., 2018) and MolGAN (De Cao & Kipf, 2018) as baseline one-shot generation models. We compare with two additional models: grammar VAE(GVAE) (Kusner et al., 2017) and character VAE (CVAE)(Gómez-Bombarelli et al., 2018), which learn to generate string representations of molecules. Finally, JT-VAE (Jin et al., 2018) and CGVAE (Ma et al., 2018) as the state-of-the-art iterative generation models with complicated decoders with validity checkers.\nNotably, proposed GraphNVP guarantees 100% reconstruction accuracy, attributed to the invertible function construction of normalizing flows. Also, it is notable that GraphNVP enjoys a significantly high uniqueness ratio. Although some baselines exhibit a higher validity on QM9 dataset, the set of generated molecules contains many duplicates. 
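As a side note, the sample-based metrics defined above can be computed along the following lines (a sketch using RDKit; `smiles_list` holds the SMILES of the generated graphs and `train_smiles` the canonical training SMILES, with normalizations following the definitions above):

```python
from rdkit import Chem

def evaluate_samples(smiles_list, train_smiles):
    """Validity / Novelty / Uniqueness of a set of generated molecules."""
    valid = []
    for s in smiles_list:
        mol = Chem.MolFromSmiles(s)              # None if chemically invalid
        if mol is not None:
            valid.append(Chem.MolToSmiles(mol))  # canonical form for duplicate checks
    n = len(smiles_list)
    validity = len(valid) / n
    novelty = sum(s not in train_smiles for s in valid) / max(len(valid), 1)
    uniqueness = len(set(valid)) / n             # unique valid molecules / all samples
    return validity, novelty, uniqueness
```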
Additionally, we want to emphasize that our model generates a substantial number of valid molecules without explicitly incorporating chemical knowledge as done in some baselines (e.g., valency checks for chemical graphs in RVAE, MolGAN, JT-VAE, and CG-VAE). This is preferable because additional validity checks consume computational time (see Sec. 4.2.1) and may result in a low reconstruction accuracy (e.g., RVAE and JT-VAE). As GraphNVP does not incorporate domain-specific procedures during learning, it can be easily used for learning generative models on general graph structures. Two iterative generation models, JT-VAE (Jin et al., 2018) and CG-VAE (Liu et al., 2018), show great results in the table. However, the decoders of these models are quite complicated to implement properly and to reproduce the same performance with. In contrast, the proposed GraphNVP enjoys a simple network architecture, and its decoder is immediately available by just inverting the trained coupling layers.\nConsidering the simplicity of the model, the proposed GraphNVP achieves good performance among the latest graph generation models. We conjecture that the generation scheme of GraphNVP partly accounts for this performance. The proposed generation scheme lies between one-shot and iterative graph generation. From a higher perspective, our generation is one-shot: once we sample the latent vector z = [z_X, z_A], the final output graph is determined. On closer inspection, the inversion process is iterative: for each (inverted) ℓ-th layer of the two couplings, the network recovers an adjacency matrix or a feature vector of the ℓ-th node, given the representations of all the nodes except the ℓ-th node. One layer of partition-based affine coupling is not a super-flexible mapping, but it may be flexible enough to warp a single node’s representation." }, { "heading": "4.2.1 COMPUTATIONAL TIME FOR GRAPH GENERATION", "text": "One practically important aspect of graph generation is computational time. Training and sampling a generative model is much faster than wet-lab experiments, but the computational time is still an issue for tasks involving huge search spaces, e.g., drug search. We compare the computational time (wall-clock time) for sampling 1,000 graphs in the ZINC dataset experiment runs. The average wall-clock time (excluding preprocessing time) of GraphNVP for sampling is only 4.6 [sec] (implemented in Chainer, Tokui et al. (2015)). This is faster by an order of magnitude than several baselines (in our test runs): 193.5 [sec] for CVAE (Tensorflow), 460 [sec] for GVAE (Tensorflow), and 124 [sec] for JT-VAE (pytorch).\nThe sampling time affects the number of valid, novel, and unique molecular graphs we can collect within a unit of time. The validity of the GraphNVP samples is relatively low, but still around 40%. In contrast, the sampling time is 30 to 100 times shorter. Thus, we can obtain more (10 to 40 times) valid, novel, and unique molecules in the same computation time. Once we have obtained the generated molecules, we usually calculate or predict the value of a specific property computationally to check whether the generated molecules have the desired values. Thus, generating many molecules increases the chance of discovering a molecule with the required property. Assume we need to prepare 1 million unique, novel, and valid molecules from models trained on the ZINC dataset. With a very rough estimate, we expect GraphNVP, JT-VAE, and GVAE to require 1.1 hours, 1.5 days, and 121.5 days, respectively.
Such slow graph generations would harm the productivity of the R&D projects. Further, this will reduce the usage of cloud computing servers such as Amazon EC2, in turn reducing the monetary cost.\nThese computational time may depend on choices of frameworks and skills of implementations. However we think it is safe to say that the GraphNVP is significantly faster than other models in sampling for several reasons: the GraphNVP decoder does not involve additional chemical validity check (Jin et al., 2018), or grammatical validity-assurance for sampling (Kusner et al., 2017). Deterministic decoding of graphNVP further reduces generation time in practical scenarios since a latent vector is not needed to be decoded multiple times as done for JT-VAE." }, { "heading": "4.3 SMOOTHNESS OF THE LEARNED LATENT SPACE", "text": "Next, we qualitatively examine the learned latent space z by visualizing the latent points space. In this experiment, we randomly select a molecule from the training set and encode it into a latent vector z0 using our proposed model. Then we choose two random axes which are orthogonal to each other. We decode latent points lying on a 2-dimensional grid spanned by those two axes and with z0 as the origin. Figure 4 shows that the latent spaces learned from both QM9 (panel (a)) and ZINC dataset (panel (b)) vary smoothly such that neighboring latent points correspond to molecules with minor variations. This visualization indicates the smoothness of the learned latent space, similar to the results of existing VAE-based models (e.g., (Liu et al., 2018; Ma et al., 2018)). However, it should be noted that we decode each latent point only once unlike VAE-based models. For example, GVAE (Kusner et al., 2017) decodes each latent point 1000 times and selects the most common molecule as the representative molecule for that point. Because our decoding step is deterministic such a time-consuming measure is not needed. In practice, smoothness of the latent space is crucial for decorating a molecule: generating a slightly-modified graph by perturbing the latent representation of the source molecular graph." }, { "heading": "4.4 PROPERTY-TARGETED MOLECULE OPTIMIZATION", "text": "Our last task is to find molecules similar to a given molecule, but possessing a better chemical property. This task is known as molecular optimization in the field of chemo-informatics. We train a linear regressor on the latent space of molecules with quantitative estimate of drug-likeness (QED) of each molecule as the target chemical property. QED score quantifies how likely a molecule is to be a potential drug. We interpolate the latent vector of a randomly selected molecule along the direction of\nincreasing QED score as learned by linear regression. Figure 5 demonstrates the learned latent space and a simple linear regression yields successful molecular optimization. Here, we select a molecule with a low QED score and visualize its neighborhood. However, we note that the number of valid molecules that can be generated along a given direction varies depending on the query molecule. We show another property optimization example on QM9 dataset in the appendix.\nAlthough we could perform molecular optimization with linear regression, we believe an extensive Bayesian optimization (e.g., (Jin et al., 2018; Kusner et al., 2017)) on the latent space may provide better results." }, { "heading": "5 CONCLUSION", "text": "In this paper, we proposed GraphNVP, an invertible flow-based model for generating molecular graphs. 
Specifically, the proposed model is the first fully invertible model for the whole graph components: both of node attributes and an adjacency tensor are converted into latent vectors through two novel invertible flows. Our model can generate valid molecules with a high uniqueness score and guaranteed reconstruction ability with very simple invertible coupling flow layers. The proposed model enjoys a fast graph generation; faster in order of magnitude than other graph generation models in our implementation. In addition, we demonstrate that the learned latent space can be used to search for molecules similar to a given molecule, which maximize a desired chemical property." }, { "heading": "5.1 OPEN PROBLEMS", "text": "As the first paper for fully invertible graph generation models, we identified several open problems of this research direction. One is the permutation-invariant graph generation, which is essentially difficult to achieve by coupling-based flow layers. Another is the number of nodes in generated graphs. The current formulation of the GraphNVP must choose the maximum number of nodes within generated graphs. This is the limitation of one-shot generative models compared to iterative ones. Incorporating external validity checks would improve the validity of the generative model. There is a possibility that overfitting causes the lower validity and novelty. If this is the case then it is interesting to devise a good regularizer for reliable graph generations. Additionally, we believe more exploration of the reasons contributing to the high uniqueness ratio of the proposed model will contribute to the understanding of graph generation models in general.\nWe will provide our implementation of the proposed GraphNVP in near future." }, { "heading": "A NETWORK ARCHITECTURE DETAILS", "text": "For QM9 dataset, we use a total of 27 adjacency coupling and 36 node feature coupling layers. For ZINC dataset, we keep the number of coupling layers equal to the maximum number of atoms a ZINC molecule can have, 38. We model affine transformation (both scale and translation) of an adjacency coupling layer with a multi-layer perceptron (MLP). As mentioned in the main text, we utilize both node assignments and adjacency information in defining node feature coupling layers. However, we found affine transformations can become unstable when used to update the feature matrix with Relational-GCN (RelGCN). Therefore, we use only additive transformations in node feature coupling layers.\nWe initialize the last layer of each RelGCN and MLP with zeros, such that each affine transformation initially performs an identity function.\nWe train the models using Adam optimizer with default parameters (α = 0.001) and minibatch sizes 256 and 128 for QM9 and ZINC datasets. We use batch normalization in both types of coupling layers." }, { "heading": "B TRAINING DETAILS", "text": "For training data splits, we used the same train/test dataset splits used in (Kusner et al., 2017). We train each model for 200 epochs. We did not employ early-stopping in the experiments. We chose the model snapshot of the last (200) epoch for evaluations and demonstrations. All models are implemented using Chainer-Chemistry1 and RDKit2 libraries." }, { "heading": "C EFFECT OF TEMPERATURE", "text": "Following previous work on likelihood-based generative models (Kingma & Dhariwal, 2018), we sampled latent vectors from a temperature-truncated normal distribution. Temperature parameter handles uniqueness and validity trade off. 
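Our reading of the temperature-truncated sampling is a simple rescaling of the prior noise before inverting the flow; a one-line sketch (sigma is the learned prior standard deviation; this is an illustration, not the exact implementation):

```python
import numpy as np

def sample_latent(n, dim, sigma, temperature=0.85, rng=np.random):
    """Draw z ~ N(0, (temperature * sigma)^2 I); lower temperature concentrates samples."""
    return temperature * sigma * rng.standard_normal((n, dim))
```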
Sampling with a lower temperature results in a higher number of valid molecules at the cost of uniqueness among them. How temperature affects the validity, uniqueness, and novelty of generated molecules is shown in Figure 6. Users may tune this parameter depending on the application and its goal. In our experiments, we chose 0.85 and 0.75 as the temperature values for the QM9 and ZINC models, respectively." }, { "heading": "D EFFECT OF ADJACENCY TENSOR IN GRAPHNVP COUPLING", "text": "We performed an additional experiment to quantify the effect of A introduced in the node feature coupling. We trained an ablation model, which replaces the RelGCN layer with an MLP that does not use A. For the QM9 dataset, the validity drops to 41.8 ± 1.26%, about half the validity of the original GraphNVP model.\n1https://github.com/pfnet-research/chainer-chemistry 2https://github.com/rdkit/rdkit" }, { "heading": "E ADDITIONAL VISUALIZATIONS", "text": "Fig. 7 illustrates an example of chemical property optimization for the water-octanol partition coefficient (logP) on the QM9 dataset." } ]
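To make the masked affine coupling of Section 3.3 concrete, the following is a minimal numpy sketch of one adjacency coupling layer and its exact inverse (Eqs. 4-5); `s_net` and `t_net` stand in for the scale and translation MLPs, and this is our illustration rather than the authors' Chainer code:

```python
import numpy as np

def adjacency_coupling_forward(z, l, s_net, t_net):
    """Update slice l of z (shape (N, N, R)) using the remaining slices as context."""
    context = np.delete(z, l, axis=0)        # z[l^-, :, :]
    s, t = s_net(context), t_net(context)    # each broadcastable to z[l]'s shape (N, R)
    out = z.copy()
    out[l] = z[l] * np.exp(s) + t            # affine update of Eq. (4)
    return out

def adjacency_coupling_inverse(z, l, s_net, t_net):
    """Exact inverse: only slice l was changed, so the context is unchanged."""
    context = np.delete(z, l, axis=0)
    s, t = s_net(context), t_net(context)
    out = z.copy()
    out[l] = (z[l] - t) * np.exp(-s)
    return out
```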
2019
GRAPHNVP: AN INVERTIBLE FLOW-BASED MODEL
SP:e0dd4a62106a2c2fc6a248c601ddb8422e148864
[ "This paper proposes to learn a kernel for training MMD-GAN by optimizing over the probability distribution that defines the kernel by means of random features. This is unlike the usual setting of MMD-GAN where the kernel is parametrized by composing a fixed top-kernel with a discriminator network that is optimized during training. The main motivation for this approach is to avoid having to 'manually' fix some parameters of the top-level kernel like the bandwidth of the RBF kernel. They provide an algorithm to achieve such optimization in probability space along with some consistency results and perform experiments on MNIST and Cifar10 to demonstrate empirically the advantage of such an approach over those that fix the top-level kernel.", "This paper aims to improve the kernel selection issue of the MMD-based generative models. The author formulates the kernels via inverse Fourier transform and the goal is to learn the optimal N finite random Fourier features (RFF). The RFF samples are optimized by the proposed kernel alignment loss where the positive and negative labels are defined as samples coming from real and negative data distributions, respectively. Some theoretical analysis regarding the consistency of the learned kernel is provided. Experiment results on the IS score and FID on CIFAR-10 show improvement of the proposed methods over MMD-GAN baselines, while the results are not comparable to the original MMD-GAN due to unknown results." ]
We propose a novel supervised learning method to optimize the kernel in maximum mean discrepancy generative adversarial networks (MMD GANs). Specifically, we characterize a distributionally robust optimization problem to compute a good distribution for the random feature model of Rahimi and Recht to approximate a good kernel function. Because the distributional optimization is infinite dimensional, we consider a Monte-Carlo sample average approximation (SAA) to obtain a more tractable finite dimensional optimization problem. We subsequently leverage a particle stochastic gradient descent (SGD) method to solve the derived finite dimensional optimization problem. Based on a mean-field analysis, we then prove that the empirical distribution of the interacting particle system at each iteration of the SGD follows the path of the gradient descent flow on the Wasserstein manifold. We also establish the non-asymptotic consistency of the finite sample estimator. Our empirical evaluation on a synthetic data-set as well as the MNIST and CIFAR-10 benchmark data-sets indicates that our proposed MMD GAN model with kernel learning indeed attains higher inception scores as well as better Fréchet inception distances, and generates better images compared to the generative moment matching network (GMMN) and MMD GANs with untrained kernels.
[]
[ { "authors": [ "Patrick Billingsley" ], "title": "Convergence of probability measures", "venue": null, "year": 2013 }, { "authors": [ "Stéphane Boucheron", "Gábor Lugosi", "Pascal Massart" ], "title": "Concentration inequalities: A nonasymptotic theory of independence", "venue": "Oxford university press,", "year": 2013 }, { "authors": [ "Tong Che", "Yanran Li", "Athul Paul Jacob", "Yoshua Bengio", "Wenjie Li" ], "title": "Mode regularized generative adversarial networks", "venue": "arXiv preprint arXiv:1612.02136,", "year": 2016 }, { "authors": [ "Corinna Cortes", "Mehryar Mohri", "Afshin Rostamizadeh" ], "title": "Algorithms for learning kernels based on centered alignment", "venue": "Journal of Machine Learning Research,", "year": 2012 }, { "authors": [ "Joseph Leo Doob" ], "title": "Stochastic processes, volume 101", "venue": "New York Wiley,", "year": 1953 }, { "authors": [ "Rui Gao", "Anton J Kleywegt" ], "title": "Distributionally robust stochastic optimization with wasserstein distance", "venue": "arXiv preprint arXiv:1604.02199,", "year": 2016 }, { "authors": [ "Kay Giesecke", "Konstantinos Spiliopoulos", "Richard B Sowers" ], "title": "Default clustering in large portfolios", "venue": "Typical events. The Annals of Applied Probability,", "year": 2013 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Arthur Gretton", "Karsten Borgwardt", "Malte Rasch", "Bernhard Schölkopf", "Alex J Smola" ], "title": "A kernel method for the two-sample-problem", "venue": "In Advances in neural information processing systems,", "year": 2007 }, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Sepp Hochreiter" ], "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Adam Jakubowski" ], "title": "On the skorokhod topology", "venue": "In Annales de l’IHP Probabilités et statistiques,", "year": 1986 }, { "authors": [ "Adel Javanmard", "Marco Mondelli", "Andrea Montanari" ], "title": "Analysis of a two-layer neural network via displacement convexity", "venue": "arXiv preprint arXiv:1901.01375,", "year": 2019 }, { "authors": [ "Richard Jordan", "David Kinderlehrer", "Felix Otto" ], "title": "The variational formulation of the Fokker– Planck equation", "venue": "SIAM journal on mathematical analysis,", "year": 1998 }, { "authors": [ "Masoud Badiei Khuzani", "Na Li" ], "title": "Stochastic primal-dual method on riemannian manifolds with bounded sectional curvature", "venue": "arXiv preprint arXiv:1703.08167,", "year": 2017 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational Bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Quoc Le", "Tamás Sarlós", "Alexander Smola" ], "title": "Fastfood-computing hilbert space expansions in loglinear time", "venue": "In International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Chun-Liang Li", "Wei-Cheng Chang", "Yu Cheng", "Yiming Yang", "Barnabás Póczos" 
], "title": "Mmd GAN: Towards deeper understanding of moment matching network", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Chun-Liang Li", "Wei-Cheng Chang", "Youssef Mroueh", "Yiming Yang", "Barnabás Póczos" ], "title": "Implicit kernel learning", "venue": "arXiv preprint arXiv:1902.10214,", "year": 2019 }, { "authors": [ "Yujia Li", "Kevin Swersky", "Rich Zemel" ], "title": "Generative moment matching networks", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Shishi Luo", "Jonathan C Mattingly" ], "title": "Scaling limits of a model for selection at two", "venue": "scales. Nonlinearity,", "year": 2017 }, { "authors": [ "Colin McDiarmid" ], "title": "On the method of bounded differences", "venue": "Surveys in combinatorics,", "year": 1989 }, { "authors": [ "Song Mei", "Andrea Montanari", "Phan-Minh Nguyen" ], "title": "A mean field view of the landscape of twolayer neural networks", "venue": "Proceedings of the National Academy of Sciences,", "year": 2018 }, { "authors": [ "Song Mei", "Theodor Misiakiewicz", "Andrea Montanari" ], "title": "Mean-field theory of two-layers neural networks: dimension-free bounds and kernel limit", "venue": null, "year": 1902 }, { "authors": [ "Sebastian Nowozin", "Botond Cseke", "Ryota Tomioka" ], "title": "GAN: Training generative neural samplers using variational divergence minimization", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Felix Otto" ], "title": "The geometry of dissipative evolution equations: the porous medium equation", "venue": null, "year": 2001 }, { "authors": [ "David Pollard" ], "title": "Empirical processes: theory and applications. In NSF-CBMS regional conference series in probability and statistics, pp. 
i–86", "venue": "JSTOR,", "year": 1990 }, { "authors": [ "Yu V Prokhorov" ], "title": "Convergence of random processes and limit theorems in probability theory", "venue": "Theory of Probability & Its Applications,", "year": 1956 }, { "authors": [ "Ali Rahimi", "Benjamin Recht" ], "title": "Random features for large-scale kernel machines", "venue": "In Advances in neural information processing systems,", "year": 2008 }, { "authors": [ "Ali Rahimi", "Benjamin Recht" ], "title": "Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning", "venue": "In Advances in neural information processing systems,", "year": 2009 }, { "authors": [ "Philippe Robert" ], "title": "Stochastic networks and queues, volume 52", "venue": "Springer Science & Business Media,", "year": 2013 }, { "authors": [ "Grant M Rotskoff", "Eric Vanden-Eijnden" ], "title": "Neural networks as interacting particle systems: Asymptotic convexity of the loss landscape and universal scaling of the approximation error", "venue": "arXiv preprint arXiv:1805.00915,", "year": 2018 }, { "authors": [ "W Rudin" ], "title": "Real and complex analysis mcgraw-hill book co", "venue": "New York 3rd ed., xiv,", "year": 1987 }, { "authors": [ "Ruslan Salakhutdinov", "Geoffrey Hinton" ], "title": "Deep boltzmann machines", "venue": "In Artificial intelligence and statistics,", "year": 2009 }, { "authors": [ "Tim Salimans", "Ian Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training GANs", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Shashank Singh", "Barnabás Póczos" ], "title": "Minimax distribution estimation in wasserstein distance", "venue": "arXiv preprint arXiv:1802.08855,", "year": 2018 }, { "authors": [ "Aman Sinha", "John C Duchi" ], "title": "Learning kernels with random features", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Justin Sirignano", "Konstantinos Spiliopoulos" ], "title": "Mean field analysis of neural networks", "venue": "arXiv preprint arXiv:1805.01053,", "year": 2018 }, { "authors": [ "Tijmen Tieleman", "Geoffrey Hinton" ], "title": "Lecture 6.5-RMSprop: Divide the gradient by a running average of its recent magnitude. 
COURSERA: Neural networks for machine learning", "venue": null, "year": 2012 }, { "authors": [ "VS Varadarajan" ], "title": "On the theorem of Riesz concerning the form of linear functional", "venue": null, "year": 1958 }, { "authors": [ "Chuang Wang", "Jonathan Mattingly", "Yue Lu" ], "title": "Scaling limit: Exact and tractable analysis of online learning algorithms with applications to regularized regression and PCA", "venue": "arXiv preprint arXiv:1712.04332,", "year": 2017 }, { "authors": [ "Jeff Webb" ], "title": "Extensions of Gronwall’s inequality with quadratic growth terms and applications", "venue": "Electronic Journal of Qualitative Theory of Differential Equations,", "year": 2018 }, { "authors": [ "Kelvin Xu", "Jimmy Ba", "Ryan Kiros", "Kyunghyun Cho", "Aaron Courville", "Ruslan Salakhudinov", "Rich Zemel", "Yoshua Bengio" ], "title": "Show, attend and tell: Neural image caption generation with visual attention", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "Fisher Yu", "Ari Seff", "Yinda Zhang", "Shuran Song", "Thomas Funkhouser", "Jianxiong Xiao" ], "title": "Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop", "venue": "arXiv preprint arXiv:1506.03365,", "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "A fundamental and long-standing problem in unsupervised learning systems is to capture the underlying distribution of data. While deep generative models such as Boltzmann machines Salakhutdinov & Hinton (2009) and auto-encoding variational Bayes Kingma & Welling (2013) accomplish this task to some extent, they are inadequate for many intractable probabilistic computations that arise in maximum likelihood estimation. Moreover, in many machine learning tasks such as caption generation Xu et al. (2015), the main objective is to obtain new samples rather than to accurately estimate the underlying data distribution. Generative adverserial network (GAN) Goodfellow et al. (2014) provides a framework to directly draw new samples without estimating data distribution. It consists of a deep feedforward network to generate new samples from a base distribution (e.g. Gaussian distribution), and a discriminator network to accept or reject the generated samples. However, training GAN requires finding a Nash equilibrium of a non-convex minimax game with continuous, high-dimensional parameters. Consequently, it is highly unstable and prone to miss modes Salimans et al. (2016); Che et al. (2016). To obtain more stable models, the generative moment matching networks (GMMNs) Li et al. (2015) are proposed, wherein instead of training a discriminator network, a non-parametric statistical hypothesis test is performed to accept or reject the generated samples via the computation of the kernel maximum mean discrepancy Gretton et al. (2007). While leveraging a statistical test simplifies the loss function for training GMMN, in practice, the diversity of generated samples by GMMN is highly sensitive to the choice of the kernel. Thus, to improve the sampling performance, the kernel function also needs to be jointly optimized with the generator. Rather than\ni\noptimizing the kernel directly, the MMD GAN model Li et al. (2017) is proposed in which an embedding function is optimized in conjunction with a fixed user-defined kernel (e.g. RBF Gaussian kernel). However, there are no theoretical guarantees that the user-defined kernel is the ‘right’ kernel for embedded features.\nContributions. To address the kernel model selection problem in MMD GAN Li et al. (2017), in this paper we put forth a novel framework to learn a good kernel function from training data. Our kernel learning approach is based on a distributional optimization problem to learn a good distribution for the random feature model of Rahimi and Recht Rahimi & Recht (2008; 2009) to approximate the kernel. Since optimization with respect to the distribution of random features is infinite dimensional, we consider a Monte Carlo approximation to obtain a more tractable finite dimensional optimization problem with respect to the samples of the distribution. We then use a particle stochastic gradient descent (SGD) to solve the approximated finite dimensional optimization problem. We provide a theoretical guarantee for the consistency of the finite sample-average approximations. Based on a mean-field analysis, we also show the consistency of the proposed particle SGD. In particular, we show that when the number of particles tends to infinity, the empirical distribution of the particles in SGD follows the path of the gradient descent flow of the distributional optimization problem on the Wasserstein manifold." 
}, { "heading": "2 PRELIMINARIES OF MMD GANS", "text": "Assume we are given data {vi}ni=1 that are sampled from an unknown distribution PV with the support V . In many unsupervised tasks, we wish to attain new samples from the distribution PV without directly estimating it. Generative Adversarial Network (GAN) Goodfellow et al. (2014) provides such a framework. In vanilla GAN, a deep network G(·;W ) parameterized by W ∈ W is trained as a generator to transform the samples Z ∼ PZ ,Z ∈ Z from a user-defined distribution PZ (e.g. Gaussian) into a new sample G(Z;W ) ∼ PW , such that the distributions PW and PV are close. In addition, a discriminator network D(·; δ) parameterized by δ ∈ ∆ is also trained to reject or accept the generated samples as a realization of the data distribution. The training of the generator and discriminator networks is then accomplished via solving a minimax optimization problem as below\nmin W∈W max δ∈∆ IEPV [D(X; δ)] + IEPZ [log(1−D(G(Z;W ); δ))]. (1)\nIn the high dimensional settings, the generator trained via the min-max program of equation 1 can potentially collapse to a single mode of distribution where it always emits the same point Che et al. (2016). To overcome this shortcoming, other adversarial generative models are proposed in the literature, which propose to modify or replace the discriminator network by a statistical two-sample test based on the notion of the maximum mean discrepancy which is defined below:\nDefinition 2.1. (MAXIMUM MEAN DISCREPANCY GRETTON ET AL. (2007)) Let (X , d) be a metric space, F be a class of functions f : X → IR, and P,Q ∈ B(X ) be two probability measures from the set of all Borel probability measures B(X ) onX . The maximum mean discrepancy (MMD) between the distributions P and Q with respect to the function class F is defined below\nDF [P,Q] def = sup\nf∈F ∫ X f(x)(P −Q)(dx). (2)\nDifferent choices of the function class F in equation 2 yield different adversarial models such as Wasserstein GANs (WGAN) Arjovsky et al. (2017), f -GANs Nowozin et al. (2016), and GMMN and MMD GAN Li et al. (2017; 2015). In the latter two cases, the function class F def= {f : X → IR : ‖f‖HX ≤ 1}, where HX is a reproducing kernel Hilbert space (RKHS) of functions with a kernel K : X × X → IR, denoted by (HX ,K). Then, the squared MMD loss in equation 2 as a\nii\nmeasure of the distance between the distributions PV and PW has the following expression\nDK [PV , PW ] def = sup\nf :X→IR:‖f‖HX≤1\n∫ X f(x)(PV − PW )(dx) (3)\n= IEV ,V ′∼PV [K(V ;V ′)] + IEW ,W ′∼PW [K(W ,W ′)]− 2IEV ∼PV ,W∼PW [K(V ;W )], (4)\nwhere X = V ∪ W . Instead of training the generator via solving the minimax optimization in equation 1, the MMD GAN model of Li et al. (2015) proposes to optimize the discrepancy between two distributions via optimization of an embedding function ι : IRD 7→ IRp, p ≤ D, i.e.,\nmin W∈W max ι∈Q MMDk◦ι[PV , PW ], (5)\nwhere k : IRp × IRp → IR is a user-defined fixed kernel. In Li et al. (2015), the proposal for the kernel k : IRp × IRp → IR is a mixture of the Gaussians,\nk ◦ ι(x,y) = k(ι(x), ι(y)) = m∑ i=1 ( ‖ι(x)− ι(y)‖22 σ2i ) , (6)\nwhere the bandwidth parameters σ1, · · · , σm > 0 are manually selected. Nevertheless, in practice there is no guarantee that the user-defined kernel k(ι(x), q(y)) can capture the structure of the embedded features ι(x)." }, { "heading": "3 PROPOSED APPROACH: KERNEL LEARNING WITH RANDOM FEATURES FOR", "text": "MMD GANS\nIn this section, we first expound our kernel learning approach. 
}, { "heading": "3.1 ROBUST DISTRIBUTIONAL OPTIMIZATION FOR KERNEL LEARNING", "text": "To address the kernel model selection issue in MMD GAN Li et al. (2017), we consider a kernel optimization scheme with random features Rahimi & Recht (2008; 2009). Let $\varphi:\mathbb{R}^d\times\mathbb{R}^D\to[-1,1]$ denote the explicit feature map and let $\mu \in \mathcal{M}(\mathbb{R}^D)$ denote a probability measure from the space $\mathcal{M}(\mathbb{R}^D)$ of probability measures on $\mathbb{R}^D$. The kernel function is characterized via the explicit feature maps using the following integral equation:
$$K_\mu(x,y) = \mathbb{E}_\mu[\varphi(x;\xi)\varphi(y;\xi)] = \int_\Xi \varphi(x;\xi)\varphi(y;\xi)\,\mu(d\xi). \quad (7)$$
Let $\mathrm{MMD}_\mu[P_V,P_W] \stackrel{\text{def}}{=} \mathrm{MMD}_{K_\mu}[P_V,P_W]$. The kernel optimization problem in equation 5 can then be formulated as a distributional optimization over random features, i.e.,
$$\min_{W\in\mathcal{W}} \sup_{\mu\in\mathcal{P}}\ \mathrm{MMD}_\mu[P_V,P_W]. \quad (8)$$
Here, $\mathcal{P}$ is the set of probability distributions corresponding to a kernel class $\mathcal{K}$. In the sequel, we take $\mathcal{P}$ to be the distribution ball of radius $R$,
$$\mathcal{P} \stackrel{\text{def}}{=} \mathbb{B}^p_R(\mu_0) \stackrel{\text{def}}{=} \{\mu\in\mathcal{M}(\mathbb{R}^D) : W_p(\mu,\mu_0) \le R\}, \quad (9)$$
where $\mu_0$ is a user-defined base distribution and $W_p(\cdot,\cdot)$ is the $p$-Wasserstein distance, defined as
$$W_p(\mu_1,\mu_2) \stackrel{\text{def}}{=} \Big( \inf_{\pi\in\Pi(\mu_1,\mu_2)} \int_{\mathbb{R}^D\times\mathbb{R}^D} \|\xi_1-\xi_2\|_2^p\, d\pi(\xi_1,\xi_2) \Big)^{1/p}, \quad (10)$$
where the infimum is taken over all couplings $\pi$ of the measures $\mu_1,\mu_2 \in \mathcal{M}(\mathbb{R}^D)$, and $\Pi(\mu_1,\mu_2)$ is the set of all such couplings with marginals $\mu_1$ and $\mu_2$.

The kernel MMD loss in equation 8 is defined with respect to the unknown distributions $P_V$ of the data and $P_W$ of the model. We therefore construct an unbiased estimator of the MMD loss in equation 8 from the training samples. To describe the estimator, sample labels $y_1,\cdots,y_n \sim_{\text{i.i.d.}} \mathrm{Uniform}\{-1,+1\}$, where we assume that the numbers of positive and negative labels are balanced. In particular, consider the set of positive labels $\mathcal{I} = \{i\in\{1,2,\cdots,n\} : y_i = +1\}$ and the set of negative labels $\mathcal{J} = \{1,2,\cdots,n\}\setminus\mathcal{I}$, with cardinalities $|\mathcal{I}| = |\mathcal{J}| = \frac{n}{2}$. We consider the following assignment of features to labels:

• Positive class labels: if $y_i = +1$, sample the corresponding feature from the data distribution, $x_i = v_i \sim P_V$.

• Negative class labels: if $y_i = -1$, sample the corresponding feature from the generated distribution, $x_i = G(Z_i,W) \sim P_W$, $Z_i \sim P_Z$.

By this construction, the joint distribution $P_{Y,X}$ of labels and features has the marginals $P_{X|Y=+1} = P_V$ and $P_{X|Y=-1} = P_W$. Moreover, the following statistic, known as the kernel alignment in the literature (see, e.g., Sinha & Duchi (2016); Cortes et al. (2012)), is an unbiased estimator of the MMD loss in equation 8:
$$\min_{W\in\mathcal{W}} \sup_{\mu\in\mathcal{P}}\ \widehat{\mathrm{MMD}}_\mu[P_V,P_W] \stackrel{\text{def}}{=} \frac{8}{n(n-1)} \sum_{1\le i<j\le n} y_i y_j K_\mu(x_i,x_j). \quad (11)$$
See Appendix C.1 for the related proof. The kernel alignment in equation 11 can also be viewed through the lens of risk minimization:
$$\min_{W\in\mathcal{W}} \inf_{\mu\in\mathcal{P}}\ \widehat{\mathrm{MMD}}^\alpha_\mu[P_V,P_W] \stackrel{\text{def}}{=} \frac{8}{n(n-1)\alpha} \sum_{1\le i<j\le n} \big(\alpha y_i y_j - K_\mu(x_i,x_j)\big)^2 \quad (12a)$$
$$= \frac{8}{n(n-1)\alpha} \sum_{1\le i<j\le n} \big(\alpha y_i y_j - \mathbb{E}_\mu[\varphi(x_i;\xi)\varphi(x_j;\xi)]\big)^2. \quad (12b)$$
Here, $\alpha > 0$ is a scaling factor that determines the separation between feature vectors, and $K^* \stackrel{\text{def}}{=} \alpha y y^T$ is the ideal kernel that provides the maximal separation between the feature vectors over the training data-set, i.e., $K^*(x_i,x_j) = \alpha$ when the features have identical labels $y_i = y_j$, and $K^*(x_i,x_j) = -\alpha$ otherwise.
Upon expansion of the risk function in equation 12, it can be easily shown that it reduces to the kernel alignment in equation 11 as $\alpha \to +\infty$. Intuitively, the risk minimization in equation 12 yields a feature space in which pairwise distances are similar to those in the output space $\mathcal{Y} = \{-1,+1\}$." }, { "heading": "3.2 SAA FOR DISTRIBUTIONAL OPTIMIZATION", "text": "The distributional optimization problem in equation 8 is infinite dimensional and thus cannot be solved directly. To obtain a tractable optimization problem, instead of optimizing over the distribution $\mu$ of the random features, we optimize over i.i.d. samples (particles) $\xi^1,\cdots,\xi^N \sim_{\text{i.i.d.}} \mu$ generated from the distribution. The empirical distribution of these particles is defined as
$$\hat\mu^N(\xi) \stackrel{\text{def}}{=} \frac{1}{N}\sum_{k=1}^N \delta(\xi-\xi^k), \quad (13)$$
where $\delta(\cdot)$ is the Dirac delta function concentrated at zero. In practice, the optimization problem in equation 12 is solved via the Monte Carlo sample average approximation (SAA) of the objective function,
$$\min_{W\in\mathcal{W}} \min_{\hat\mu^N\in\mathcal{P}^N}\ \widehat{\mathrm{MMD}}^\alpha_{\hat\mu^N}[P_V,P_W] = \frac{8}{n(n-1)\alpha} \sum_{1\le i<j\le n} \Big( \alpha y_i y_j - \frac{1}{N}\sum_{k=1}^N \varphi(x_i;\xi^k)\varphi(x_j;\xi^k) \Big)^2, \quad (14)$$
where $\mathcal{P}^N \stackrel{\text{def}}{=} \mathbb{B}^N_R(\hat\mu^N_0) = \{\hat\mu^N\in\mathcal{M}(\mathbb{R}^D) : W_p(\hat\mu^N,\hat\mu^N_0) \le R\}$, and $\hat\mu^N_0$ is the empirical measure associated with the initial samples $\xi^1_0,\cdots,\xi^N_0 \sim_{\text{i.i.d.}} \mu_0$. The empirical objective in equation 14 can be optimized over the samples $\xi^1,\cdots,\xi^N$ using a particle stochastic gradient descent. For the optimization problem in equation 14, the (projected) stochastic gradient descent (SGD) takes the following recursive form:¹
$$\xi^k_{m+1} = \xi^k_m - \frac{\eta}{N}\Big( y_m\tilde y_m - \frac{1}{\alpha N}\sum_{\ell=1}^N \varphi(x_m;\xi^\ell_m)\varphi(\tilde x_m;\xi^\ell_m) \Big) \nabla_\xi\big(\varphi(x_m;\xi^k_m)\varphi(\tilde x_m;\xi^k_m)\big), \quad (15a)$$
for $k = 1,2,\cdots,N$, where $(y_m,x_m),(\tilde y_m,\tilde x_m) \sim_{\text{i.i.d.}} P_{x,y}$, $\eta \in \mathbb{R}_{>0}$ denotes the learning rate of the algorithm, and the initial particles are $\xi^1_0,\cdots,\xi^N_0 \sim_{\text{i.i.d.}} \mu_0$.

¹To avoid clutter in our subsequent analysis, the normalization factor $\frac{16}{n(n-1)}$ of the gradient is absorbed into the step-size $\eta$.

At each iteration of the SGD dynamic in equation 15, a feasible solution for the inner optimization of the empirical risk function in equation 14 is generated via the empirical measure
$$\hat\mu^N_m(\xi) = \frac{1}{N}\sum_{k=1}^N \delta(\xi-\xi^k_m). \quad (16)$$
Indeed, we prove in Section 4 that for an appropriate choice of the learning rate $\eta > 0$, the empirical measure in equation 16 remains inside the distribution ball $\hat\mu^N_m \in \mathcal{P}^N$ for all $m \in [0,NT]\cap\mathbb{N}$, and is thus a feasible solution for the empirical risk minimization in equation 14 (see Corollary 4.2.1 in Section 4)." }, { "heading": "3.3 PROPOSED MMD GAN WITH KERNEL LEARNING", "text": "In Algorithm 1, we describe the proposed MMD GAN model with the kernel learning approach described earlier. Algorithm 1 has an inner loop for the kernel training and an outer loop for training the generator, where we employ RMSprop Tieleman & Hinton (2012). Our proposed MMD GAN model is distinguished from the MMD GAN of Li et al. (2017) in that we learn a good kernel function in equation 17 of the inner loop, instead of optimizing the embedding function implemented by an auto-encoder. We note, however, that our kernel learning approach is compatible with the auto-encoder implementation of Li et al. (2017) for the dimensionality reduction of features (and particles). When an auto-encoder is included, the inner loop of Algorithm 1 must be modified to add an additional step for training the auto-encoder. To convey the main ideas more clearly, the training step of the auto-encoder is omitted from Algorithm 1. A minimal code sketch of the inner-loop particle update follows."
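As announced above, here is a minimal sketch of the inner-loop particle update of equation 17, written with the random Fourier feature map $\varphi(x;\xi) = \sqrt{2}\cos(x^T\xi + b)$ described in Sections 3.4 and 5.2. The per-particle biases, the Gaussian stand-ins for $P_V$ and $P_W$, and all constants are illustrative assumptions, and the projection onto the Wasserstein ball $\mathcal{P}^N$ is omitted, as in the displayed update of Algorithm 1.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 8, 256             # feature dimension and number of particles (illustrative)
eta, alpha = 0.5, 1.0     # learning rate and scaling factor (illustrative)

xi = rng.normal(size=(N, d))          # particles xi^1, ..., xi^N drawn from a Gaussian mu_0
b = rng.uniform(-np.pi, np.pi, N)     # per-particle random biases of the feature map

def phi(x, xi, b):
    # Random Fourier features phi(x; xi^k) = sqrt(2) cos(x^T xi^k + b^k); shape (N,).
    return np.sqrt(2.0) * np.cos(xi @ x + b)

def grad_phi(x, xi, b):
    # Gradient of phi w.r.t. each particle: -sqrt(2) sin(x^T xi^k + b^k) x; shape (N, d).
    return -np.sqrt(2.0) * np.sin(xi @ x + b)[:, None] * x[None, :]

def particle_step(xi, x, x_t, y, y_t):
    # One update of equation 17: every particle shares the scalar residual
    # (alpha y y~ - K_hat), where K_hat = (1/N) sum_l phi(x; xi^l) phi(x~; xi^l).
    p, p_t = phi(x, xi, b), phi(x_t, xi, b)
    residual = alpha * y * y_t - np.mean(p * p_t)
    # Product rule for grad_xi of phi(x; xi^k) phi(x~; xi^k).
    grad = p_t[:, None] * grad_phi(x, xi, b) + p[:, None] * grad_phi(x_t, xi, b)
    return xi - (eta / N) * residual * grad

# Toy inner loop: labels as in Section 3.1, with Gaussians standing in for P_V and P_W.
for _ in range(1000):
    y, y_t = rng.choice([-1.0, 1.0]), rng.choice([-1.0, 1.0])
    x = rng.normal(0.0, 1.0 if y > 0 else 1.5, d)
    x_t = rng.normal(0.0, 1.0 if y_t > 0 else 1.5, d)
    xi = particle_step(xi, x, x_t, y, y_t)
```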
}, { "heading": "3.4 COMPUTATIONAL COMPLEXITY ANALYSIS", "text": "Sampling the labels y, ỹ ∼i.i.d. Uniform{−1,+1} require O(1) complexity, while sampling x̃|ỹ = +1,x|y = +1 ∼ PV and x̃|ỹ = +1,x|y = −1 ∼ PW has a complexity of O(d). The computation of the stochastic gradient step in equation 17 requires computing the sum 1 N ∑N i=1 ϕ(x; ξ k)ϕ(x̃; ξk). This can be done as a separate preprocessing step prior to executing the SGD in equation 17, and requires preparing the vectors ϕ def= [ϕ(x; ξ1), · · · , ϕ(x; ξN )] and ϕ̃ def = [ϕ(x̃; ξ1), · · · , ϕ(x̃; ξN )]. Using the random feature model of Rahimi and Recht Rahimi & Recht (2008; 2009), where ϕ(x; ξ) = √\n2 cos(xT ξ+ b). Here b ∼ Uniform[−π,+π], the complexity of computing the vectors ϕ and ϕ̃ is of the order O(Nd) on a single processor. However, this construction is trivially parallelizable. Furthermore, computation can be sped up even further for certain distributions µ0. For example, the Fastfood technique can approximateϕ and ϕ̃ inO(N log(d))\n1To avoid clutter in our subsequent analysis, the normalization factor 16 n(n−1) of the gradient is omitted by\nmodifying the step-size η.\nv\nAlgorithm 1 MMD GAN with a supervised kernel learning Method (Monte-Carlo Approach) Inputs: The learning rates η̃, η > 0 , the number of iterations of discriminator per generator update T ∈ N, the batch-size n, the number of random featuresN ∈ N. Regularization parameter α > 0. whileW has not converged do\nfor t = 1, 2, · · · , T do Sample the labels y, ỹ ∼i.i.d Uniform{−1, 1}. Sample the features x|y = +1 ∼ PV , and x|y = −1 ∼ PW . Similarly, x̃|ỹ = +1 ∼ PV , and x̃|ỹ = −1 ∼ PW . For all k = 1, 2, · · · , N , update the particles,\nξ k ← ξk −\nη\nN\n( αyỹ − 1\nN N∑ k=1 ϕ(x; ξ k )ϕ(x̃; ξ k )\n) ∇ξ ( ϕ(x; ξ k )ϕ(x̃; ξ k ) ) , (17)\nend for Sample a balanced minibatch of labels {yi}ni=1 ∼i.i.d. Uniform{−1,+1}. Sample the minibatch {x}ni=1 such that xi|yi = +1 ∼ PV , and xi|yi = −1 ∼ PW for all i = 1, 2, · · · , n. Update the generator\ngW ← ∇W D̂αµ̂N [ PV , PW ] , µ̂ N = 1\nN N∑ k=1 δ(ξ − ξk). (18a)\nW ←W − η̃RMSprop(gW ,W ). (18b)\nend while\ntime for the Gaussian kernel Le et al. (2013). Updating each particle in equation 17, involves the computation of the gradient ∇ξ(ϕ(ξ; ξk)ϕ(ξ̃; ξk))) which is O(d). Thus, the complexity of one iteration of SGD for all the particles is O(Nd). Overall, one step of the kernel learning has a complexity of O(Nd). On the other hand, to attain ε-suboptimal solution maxk=1,2,··· ,N ‖ξk − ξk∗‖ ≤ ε, the SGD requires has the sample complexity O ( log( 1ε ) ) . Consequently, the computational complexity of the kernel learning is of the order of\nO(Nd log( 1ε )). To compare this complexity with that of MMD GAN is of the orderO ( B2` log( 1ε ) ) , where B is the batch size for approximation of the population MMD, and ` is the number of kernel mixtures." }, { "heading": "4 CONSISTENCY AND A MEAN-FIELD ANALYSIS", "text": "In this section, we provide theoretical guarantees for the consistency of various approximations we made to optimize the population MMD loss function in equation 8. We defer the proofs of the following theoretical results to AppendixC. 
Consistency of the finite-sample estimate: In this part, we prove that the solution of the finite sample optimization problem in equation 14 approaches the population optimum of equation 8 as the number of data points and the number of random feature samples tend to infinity.

Theorem 4.1. (NON-ASYMPTOTIC CONSISTENCY OF THE FINITE-SAMPLE ESTIMATOR) Suppose conditions (A.1)-(A.3) of Appendix C are satisfied. Consider the distribution balls $\mathcal{P}$ and $\mathcal{P}^N$ defined with respect to the 2-Wasserstein distance ($p=2$). Furthermore, consider the optimizers of the population optimization and of its finite sample estimate,
$$(W_*,\mu_*) \stackrel{\text{def}}{=} \arg\min_{W\in\mathcal{W}}\arg\sup_{\mu\in\mathcal{P}}\ \mathrm{MMD}_\mu[P_V,P_W], \quad (19a)$$
$$(\hat W^N_*,\hat\mu^N_*) \stackrel{\text{def}}{=} \arg\min_{W\in\mathcal{W}}\arg\inf_{\hat\mu^N\in\mathcal{P}^N}\ \widehat{\mathrm{MMD}}^\alpha_{\hat\mu^N}[P_V,P_W], \quad (19b)$$
respectively. Then, with probability of (at least) $1-3\varrho$ over the training data samples $\{(x_i,y_i)\}_{i=1}^n$ and the random feature samples $\{\xi^k_0\}_{k=1}^N$, the following non-asymptotic bound holds:
$$\Big|\mathrm{MMD}_{\mu_*}[P_V,P_{W_*}] - \mathrm{MMD}_{\hat\mu^N_*}[P_V,P_{\hat W^N_*}]\Big| \le \sqrt{\frac{L^2(d+2)}{N}}\,\ln^{\frac{1}{2}}\!\Big(\frac{2^8 N\,\mathrm{diam}^2(\mathcal{X})}{\varrho}\Big) + 2\max\Big\{\frac{c_1 L^2}{n}\ln^{\frac{1}{2}}\!\Big(\frac{4}{\varrho}\Big),\ \frac{c_2 R L^4}{n^2}\ln\Big(\frac{4e^{L^4/9}}{\varrho}\Big)\Big\} + \frac{8L^4}{\alpha}, \quad (20)$$
where $c_1 = 3^{\frac{1}{4}}\times 2^4$ and $c_2 = 9\times 2^{11}$.

The proof of Theorem 4.1 is presented in Appendix C.1.

Notice that three key parameters are involved in the upper bound of Theorem 4.1: the number of training samples $n$, the number of random feature samples $N$, and the regularization parameter $\alpha$. The upper bound in equation 20 thus shows that as $n, N, \alpha \to +\infty$, the solution obtained from the empirical risk minimization in equation 12 yields an MMD population value tending to the optimal value of the distributional optimization in equation 8.

Consistency of the particle SGD for solving the distributional optimization: The consistency result of Theorem 4.1 concerns the MMD value of the optimal empirical measure $\hat\mu^N_*(\xi) = \frac{1}{N}\sum_{k=1}^N \delta(\xi-\xi^k_*)$ of the empirical risk minimization in equation 14. In practice, the particle SGD is executed for a finite number of iterations and its iterates are used as an estimate of $(\xi^1_*,\cdots,\xi^N_*)$. Consequently, it is desirable to establish a similar consistency-type result for the particle SGD estimates $(\xi^1_m,\cdots,\xi^N_m)$ at the $m$-th iteration. To this end, we define the scaled empirical measure
$$\mu^N_t \stackrel{\text{def}}{=} \hat\mu^N_{\lfloor Nt\rfloor} = \frac{1}{N}\sum_{k=1}^N \delta(\xi-\xi^k_{\lfloor Nt\rfloor}), \quad 0\le t\le T. \quad (21)$$
At any time $t$, the scaled empirical measure $\mu^N_t$ is a random element, and thus $(\mu^N_t)_{0\le t\le T}$ is a measure-valued stochastic process. We characterize the evolution of its Lebesgue density $p^N_t(\xi) \stackrel{\text{def}}{=} \mu^N_t(d\xi)/d\xi$ in the following theorem:

Theorem 4.2. (MCKEAN-VLASOV MEAN-FIELD PDE) Suppose conditions (A.1)-(A.3) of Appendix C are satisfied. Further, suppose that the Radon-Nikodym derivative $q_0(\xi) = \mu_0(d\xi)/d\xi$ exists. Then, there exists a unique solution $(p^*_t(\xi))_{0\le t\le T}$ to the following non-linear partial differential equation:
$$\begin{cases} \dfrac{\partial p_t(\xi)}{\partial t} = -\dfrac{\eta}{\alpha}\displaystyle\iint_{\mathcal{X}\times\mathcal{Y}} \Big(\int_{\mathbb{R}^D}\varphi(x,\tilde\xi)\varphi(\tilde x,\tilde\xi)\,p_t(\tilde\xi)\,d\tilde\xi - \alpha y\tilde y\Big)\nabla_\xi\cdot\big(p_t(\xi)\nabla_\xi(\varphi(x;\xi)\varphi(\tilde x;\xi))\big)\,dP^{\otimes 2}_{X,Y}, \\ p_0(\xi) = q_0(\xi). \end{cases} \quad (22)$$
Moreover, the measure-valued process $\{(\mu^N_t)_{0\le t\le T}\}_{N\in\mathbb{N}}$ defined in equation 21 converges (weakly) to the unique solution $\mu^*_t(\xi) = p^*_t(\xi)\,d\xi$ as the number of particles tends to infinity, $N\to\infty$.²

²The notion of the weak convergence of a sequence of empirical measures is formally defined in the Supplementary.
Due to the mean-field analysis of Theorem 4.2, we can prove that the empirical measure $\hat\mu^N_m$ of the particles in the SGD dynamic of equation 15 remains inside the feasible distribution ball $\mathcal{P}^N$:

Corollary 4.2.1. Consider the learning rate $\eta = O\big(\frac{R^p}{T\sqrt{NT\log(2/\delta)}}\big)$ for the SGD in equation 15. Then the empirical measure $\hat\mu^N_m$ of the particles remains inside the distribution ball $\hat\mu^N_m \in \mathcal{P}^N = \{\hat\mu^N\in\mathcal{M}(\mathbb{R}^D) : W_p(\hat\mu^N,\hat\mu^N_0)\le R\}$ for all $m\in[0,NT]\cap\mathbb{N}$, with probability of (at least) $1-\delta$.

Let us make two remarks about the PDE in equation 22. First, the seminal works of Otto Otto (2001) and of Jordan et al. Jordan et al. (1998) establish a deep connection between McKean-Vlasov type PDEs of the form specified in equation 22 and gradient flows on Wasserstein manifolds. More specifically, the PDE in equation 22 can be thought of as the minimization of the energy functional
$$\inf_{\mu\in\mathcal{M}(\mathbb{R}^D)}\ E_\alpha(p_t(\xi)) \stackrel{\text{def}}{=} \frac{1}{\alpha}\int_{\mathbb{R}^D} R_\alpha(\xi,p_t(\xi))\,p_t(\xi)\,d\xi, \quad (23a)$$
$$R_\alpha(\xi,p_t(\xi)) \stackrel{\text{def}}{=} -\alpha\big(\mathbb{E}_{P_{X,Y}}[y\varphi(x;\xi)]\big)^2 + \mathbb{E}_{\tilde\xi\sim p_t}\Big[\big(\mathbb{E}_{P_X}[\varphi(x;\xi)\varphi(x;\tilde\xi)]\big)^2\Big], \quad (23b)$$
via the following gradient flow dynamics:
$$\frac{dp_t(\xi)}{dt} = -\eta\cdot\mathrm{grad}_{p_t}E_\alpha(p_t(\xi)), \quad p_0(\xi) = q_0(\xi), \quad (24)$$
where $\mathrm{grad}_{p_t}E_\alpha(p_t(\xi)) = \nabla_\xi\cdot\big(p_t(\xi)\nabla_\xi R_\alpha(\xi,p_t(\xi))\big)$ is the Riemannian gradient of $E_\alpha$ with respect to the metric of the Wasserstein manifold. This shows that when the number of particles in the particle SGD of equation 15 tends to infinity ($N\to+\infty$), their empirical distribution follows a gradient descent path for the minimization of the population version (with respect to data samples) of the distributional risk optimization in equation 12. In this sense, the particle SGD is the ‘consistent’ approximation algorithm for solving the distributional optimization."
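This gradient-flow interpretation can be probed numerically. The following hedged sketch (a self-contained variant of the Section 3.3 sketch; all constants, distributions, and the uniform initialization are illustrative) tracks the 2-D histogram of the particle cloud along the SGD trajectory, i.e., the empirical measure whose large-$N$ limit Theorem 4.2 describes. The same visualization is reported for the synthetic setup in Figure 3 of Appendix A.

```python
import numpy as np

rng = np.random.default_rng(1)
d, N, eta, alpha, lam = 2, 2000, 0.5, 1.0, 0.5   # 2-D particles for easy histogramming

xi = rng.uniform(-3.0, 3.0, size=(N, d))         # mu_0: uniform initial particles
b = rng.uniform(-np.pi, np.pi, N)

def step(xi, x, x_t, y, y_t):
    # Same update as equation 17 (cf. the sketch after Section 3.3).
    p = np.sqrt(2.0) * np.cos(xi @ x + b)
    p_t = np.sqrt(2.0) * np.cos(xi @ x_t + b)
    residual = alpha * y * y_t - np.mean(p * p_t)
    g = -np.sqrt(2.0) * np.sin(xi @ x + b)[:, None] * x[None, :]
    g_t = -np.sqrt(2.0) * np.sin(xi @ x_t + b)[:, None] * x_t[None, :]
    return xi - (eta / N) * residual * (p_t[:, None] * g + p[:, None] * g_t)

snapshots = {}
for m in range(1, 20001):
    # P_V = N(0, (1 + lam) I) for y = +1 and P_W = N(0, (1 - lam) I) for y = -1,
    # mirroring the synthetic setup of Appendix A.
    y, y_t = rng.choice([-1.0, 1.0]), rng.choice([-1.0, 1.0])
    x = rng.normal(0.0, np.sqrt(1.0 + lam) if y > 0 else np.sqrt(1.0 - lam), d)
    x_t = rng.normal(0.0, np.sqrt(1.0 + lam) if y_t > 0 else np.sqrt(1.0 - lam), d)
    xi = step(xi, x, x_t, y, y_t)
    if m in (1, 2000, 20000):
        # Empirical measure of the particles as a 2-D histogram (cf. Figure 3).
        snapshots[m] = np.histogram2d(xi[:, 0], xi[:, 1], bins=30)[0]
```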
}, { "heading": "4.1 RELATED WORKS", "text": "The mean-field description of SGD dynamics has been studied in several prior works for different information processing tasks. Wang et al. Wang et al. (2017) consider the problem of online learning for principal component analysis (PCA), and analyze the scaling limits of different online learning algorithms based on the notion of finite exchangeability. In their seminal papers, Montanari and co-authors Mei et al. (2018); Javanmard et al. (2019); Mei et al. (2019) consider the scaling limits of SGD for training a two-layer neural network, and characterize the related McKean-Vlasov PDE for the limiting distribution of the empirical measure associated with the weights of the input layer. Similar mean-field type results for two-layer neural networks have also been studied recently in Rotskoff & Vanden-Eijnden (2018); Sirignano & Spiliopoulos (2018). Our work is also related to the unpublished work of Wang et al. Wang et al., which proposes a solvable model of GANs and analyzes its scaling limits; however, our GAN model is different and is based on the notion of the kernel MMD. Our work is also closely related to the recent work of Li et al. Li et al. (2019), which proposes an implicit kernel learning method based on the following kernel definition:
$$K_h(\iota(x),\iota(y)) = \mathbb{E}_{\xi\sim\mu_0}\big[e^{\,i\,h(\xi)^T(\iota(x)-\iota(y))}\big], \quad (25)$$
where $\mu_0$ is a user-defined base distribution, and $h\in\mathcal{H}$ is a function that transforms the base distribution $\mu_0$ into a distribution $\mu$ that provides a better kernel. The work of Li et al. Li et al. (2019) thus optimizes the distribution of random features implicitly, by transforming a random variable with a function. In contrast, the distributional optimization framework proposed in this paper optimizes the distribution of random features explicitly, via their empirical measures. Perhaps more importantly from a practical perspective, our kernel learning approach does not require a user-defined function class $\mathcal{H}$. Moreover, our particle SGD method in equation 15 obviates the tuning of hyper-parameters of the implicit kernel learning method, such as the gradient penalty factor and the variance constraint factor (denoted by $\lambda_{GP}$ and $\lambda_h$, respectively, in Algorithm 1 of Li et al. (2019))." }, { "heading": "5 EMPIRICAL EVALUATION", "text": "" }, { "heading": "5.1 SYNTHETIC DATA-SET", "text": "Due to the space limitation, the experiments on the synthetic data are deferred to Appendix A; a minimal preview of the associated two-sample test appears below."
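As the preview promised above, the following sketch draws the two synthetic Gaussian populations of Appendix A, $P_V = N(0,(1+\lambda)I_{d\times d})$ and $P_W = N(0,(1-\lambda)I_{d\times d})$, and estimates the power (equation 30) of an MMD-based test. A fixed RBF kernel, its bandwidth, and the threshold $\tau$ are illustrative stand-ins for the learned-kernel statistic of equations 28 and 29.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, lam, tau, trials = 20, 200, 0.5, 0.01, 100   # all values illustrative

def mmd2(V, W, sigma=8.0):
    # Unbiased MMD^2 with a fixed RBF kernel; a stand-in for the learned-kernel
    # statistic of equation 28.
    def gram(A, B):
        sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
        return np.exp(-sq / sigma**2)
    Kvv, Kww, Kvw = gram(V, V), gram(W, W), gram(V, W)
    mV, mW = V.shape[0], W.shape[0]
    return (Kvv.sum() - np.trace(Kvv)) / (mV * (mV - 1)) \
        + (Kww.sum() - np.trace(Kww)) / (mW * (mW - 1)) - 2.0 * Kvw.mean()

# Empirical power: the fraction of trials in which H0 : P_V = P_W is rejected
# when the alternative H1 (lambda > 0) is in fact true, cf. equations 29-30.
rejections = 0
for _ in range(trials):
    V = rng.normal(0.0, np.sqrt(1.0 + lam), size=(n, d))   # samples from P_V
    W = rng.normal(0.0, np.sqrt(1.0 - lam), size=(n, d))   # samples from P_W
    rejections += int(mmd2(V, W) > tau)
print("empirical power:", rejections / trials)
```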
In Figure 1, we show the generated samples on CIFAR-10 and MNIST data-sets, using our Algorithm 1, MMD GAN Li et al. (2017) with a mixed and homogeneous Gaussian RBF kernels, and GMMN Li et al. (2015).\nFigure 1(a) shows the samples from Algorithm 1, Figure 1(b) shows the samples from MMD GAN Li et al. (2017) with a mixture RBF Gaussian kernel κ(x,y) = ∑5 k=1 κσk(x,y), where σk ∈ {1, 2, 4, 8, 16} are the bandwidths of the Gaussian kernels that are fine tuned and optimized. We observe that our MMD GAN with automatic kernel learning visually attains similar results to MMD GAN Li et al. (2017) which requires manual tuning of the hyper-parameters. In Figure 1(c), we show the MMD GAN result with a single kernel RBF Gaussian kernel whose bandwidth is manually tuned at σ = 16. Lastly, in Figure 1(d), we show the samples from GMMN Li et al. (2015) which does not exploit an auto-encoder or kernel training. Clearly, GMMN yield a poor results compared to other methods due to high dimensionality of features, as well as the lack of an efficient method to train the kernel.\nOn MNIST data-set in Figure 1,(e)-(h), the difference between our method and MMD GAN Li et al. (2017) is visually more pronounced. We observe that without a manual tuning of the kernel bandwidth and by using the particle SGD equation 15 to optimize the kernel, we attain better generated images in Figure 1(e), compared to MMD GAN with mixed RBF Gaussian kernel and manual bandwidth tuning in Figure 1(f). Moreover, using a single RBF Gaussian kernel yields a poor result regardless of the choice of its bandwidth. The generated images from GMMN is also shown in Figure 1(h).\nix\nQuantitivative comparison. To quantitatively measure the quality and diversity of generated samples, we compute the inception score (IS) Salimans et al. (2016) as well as Frèchet Inception Distance (FID) Heusel et al. (2017) on CIFAR-10 images. Intuitively, the inception score is used for GANs to measure samples quality and diversity. Accordingly, for generative models that are collapsed into a single mode of distribution, the inception score is relatively low. The FID improves on IS by actually comparing the statistics of generated samples to real samples, instead of evaluating generated samples independently.\nIn Table 1, we report the quantitative measures for different MMD GAN model using different scoring metric. Note that in Table 1 lower FID scores and higher IS scores indicate a better performance. We observe from Table 1 that our approach attain lower FID score, and higher IS score compared to MMD GAN with single Gaussian kernel (bandwidth σ = 16), and a mixture Gaussian kernel (bandwidths {1, 2, 4, 8, 16}).\nx" }, { "heading": "APPENDIX", "text": "We provide additional material to support the content presented in the paper. This Appendix is organized as follows:\nAppendix A. We provide the results of our experiments on the synthetic data-set described in Section 5.1 of the main text.\nAppendix B. We provide the results of our experiments on the LSUN and CelebA data-sets.\nAppendix C. We present the proofs of the main theoretical results of Section 4 in the main text.\nAppendix D. We present the proofs of auxiliary lemmas used to support the proof of main results.\nAppendix E. We prove additional theoretical results regarding the so-called chaoticity of the particle SGD in equation 15." 
}, { "heading": "A EXPERIMENTAL RESULTS ON THE SYNTHETIC DATA-SET", "text": "The synthetic data-set we consider is as follows:\n• The distribution of training data is PV = N(0, (1 + λ)Id×d), • The distribution of generated data is PW = N(0, (1− λ)Id×d).\nxiii\nTo reduce the dimensionality of data, we consider the embedding ι : IRd 7→ IRp,x 7→ ι(x) = Σx, where Σ ∈ IRp×d and p < d. In this case, the distribution of the embedded features are PX|Y=+1 = N(0, (1 + λ)ΣΣT ), and PX|Y=−1 = N(0, (1− λ)ΣΣT ).\nNote that λ ∈ [0, 1] is a parameter that determines the separation of distributions. In particular, the Kullback-Leibler divergece of the two multi-variate Gaussian distributions is controlled by λ ∈ [0, 1],\nDKL(PX|Y=−1, PX|Y=+1) = 1\n2\n[ log ( 1− λ 1 + λ ) − p+ p(1− λ2) ] . (26)\nIn Figure 2, we show the distributions of i.i.d. samples from the distributions PV and PW for different choices of variance parameter of λ = 0.1, λ = 0.5, and λ = 0.9. Notice that for larger λ the divergence is reduced and thus performing the two-sample test is more difficult. From Figure 2, we clearly observe that for large values of λ, the data-points from the two distributions PV and PW have a large overlap and conducting a statistical test to distinguish between these two distributions is more challenging." }, { "heading": "A.0.1 KERNEL LEARNING APPROACH", "text": "Figure 4 depicts our two-phase kernel learning procedure which we also employed in our implementations of Algorithm 1 on benchmark data-sets of Section 5.2 in the main text. The kernel learning approach consists of training the auto-encoder and the kernel optimization sequentially, i.e.,\nsup µ̂N∈PN sup ι∈Q\nM̂MD α\nKµ̂N ◦ι [PV , PW ]. (27)\nwhere the function class is defined Q def= {ι(z) = Σz,Σ ∈ IRD×d}, and (Kµ̂N ◦ ι)(x1,x2) = Kµ̂N (ι(x1), ι(x2)). Now, we consider a two-phase optimization procedure:\n• Phase (I): we fix the kernel function, and optimize the auto-encoder to compute a covariance matrix Σ for dimensionality reduction • Phase (II): we optimize the kernel based on the learned embedded features.\nThis two-phase procedure significantly improves the computational complexity of SGD as it reduces the dimensionality of random feature samples ξ ∈ IRD, D d. When the kernel function K is fixed, optimizing the auto-encoder is equivalent to the kernel learning step of Li et al. (2017).\nxiv" }, { "heading": "A.0.2 STATISTICAL HYPOTHESIS TESTING WITH THE KERNEL MMD", "text": "Let V1, · · · ,Vm ∼i.i.d. PV = N(0, (1 + λ)Id×d), and W1, · · · ,Wn ∼i.i.d. PW = N(0, (1 − λ)Id×d). Given these i.i.d. samples, the statistical test T ({Vi}mi=1, {Wi}nj=1) : Vm×Wn → {0, 1} is used to distinguish between these hypotheses:\n• Null hypothesis H0 : PV = PW (thus λ = 0), • Alternative hypothesis H1 : PV 6= PW (thus λ > 0).\nTo perform hypothesis testing via the kernel MMD, we require thatHX is a universal RKHS, defined on a compact metric space X . Universality requires that the kernel K(·, ·) be continuous and, HX be dense in C(X ). Under these conditions, the following theorem establishes that the kernel MMD is indeed a metric: Theorem A.1. (METRIZABLITY OF THE RKHS) Let F denotes a unit ball in a universal RKHS HX defined on a compact metric space X with the associated continuous kernel K(·, ·). Then, the kernel MMD is a metric in the sense that MMDK [PV , PW ] = 0 if and only if PV = PW .\nTo design a test, let µ̂Nm(ξ) = 1 N ∑N k=1 δ(ξ − ξkm) denotes the solution of SGD in equation 15 for solving the optimization problem. 
Consider the following MMD estimator consisting of two\nxv\nU -statistics and an empirical function\nM̂MDKµ̂Nm◦ι [ {Vi}mi=1, {Wi}ni=1 ] =\n1\nm(m− 1) N∑ k=1 ∑ i 6=j ϕ(ι(Vi), ξ k m)ϕ(ι(Vj), ξ k m)\n+ 1\nn(n− 1) N∑ k=1 ∑ i 6=j ϕ(ι(Wi), ξ k m)ϕ(ι(Wj), ξ k m)\n− 2 nm N∑ k=1 m∑ i=1 n∑ j=1 ϕ(ι(Wi), ξ k m)ϕ(ι(Vj), ξ k m). (28)\nGiven the samples {Vi}mi=1 and {Wi}ni=1, we design a test statistic as below\nT ({Vi}mi=1, {Wi}ni=1) def =\n{ H0 if M̂MDKµ̂Nm◦ι [ {Vi}mi=1, {Wi}ni=1 ] ≤ τ\nH1 if M̂MDKµ̂Nm◦ι [ {Vi}mi=1, {Wi}ni=1 ] > τ, . (29)\nwhere τ ∈ IR is a threshold. Notice that the unbiased MMD estimator of equation 28 can be negative despite the fact that the population MMD is non-negative. Consequently, negative values for the statistical threshold τ equation 29 are admissible. In the following simulations, we only consider non-negative values for the threshold τ .\nA Type I error is made when H0 is rejected based on the observed samples, despite the null hypothesis having generated the data. Conversely, a Type II error occurs when H0 is accepted despite the alternative hypothesis H1 being true. The significance level of a test is an upper bound on the probability of a Type I error: this is a design parameter of the test which must be set in advance, and is used to determine the threshold to which we compare the test statistic. The power of a test is the probability of rejecting the null hypothesis H0 when it is indeed incorrect. In particular,\nPower def = IP(reject H0|H1 is true). (30)\nIn this sense, the statistical power controls the probability of making Type II errors." }, { "heading": "A.0.3 EMPIRICAL RESULTS", "text": "In Figure 3, we show the evolution of the empirical measure µNm(ξ) of SGD particles by plotting the 2D histogram of the particles ξ1m, · · · , ξNm ∈ IR\nD at different iterations of SGD (D = d). Clearly, starting with a uniform distribution in 3(a), the empirical measure seemingly evolves into a Gaussian measure in Figure 3(d). The evolution to a Gaussian distribution demonstrates that the RBF Gaussian kernel corresponding to a Gaussian distribution for the random features indeed provides a good kernel function for the underlying hypothesis test with Gaussian distributions.\nIn Figure 4, we evaluate the power of the test for 100 trials of hypothesis test using the test statistics of equation 29. To obtain the result, we used an autoencoder to reduce the dimension from d = 100 to p = 50. Clearly, for the trained kernel in Panel (a) of Figure 4, the threshold τ for which Power = 1 increases after learning the kernel via the two phase procedure described earlier. In comparison, in Panel (b), we observe that training an auto-encoder only with a fixed standard Gaussian kernel K(x,y) = e−‖x−y‖ 2 2 attains lower thresholds compared to our two-phase procedure. In Panel (c), we demonstrate the case of a fixed Gaussian kernel without an auto-encoder. In this case, the threshold is significantly lower due to the large dimensionality of the data.\nFrom Figure 4, we also observe that interestingly, the phase transition in the statistical threshold τ is not sensitive to the parameter λ. This phenomenon can be justified by the fact that the kernel learning indeed improves the MMD value more significantly for smaller values of λ (i.e., more difficult hypothesis testing problems) than larger values of λ. See Figure 5.\nxvi" }, { "heading": "B EXPERIMENTAL RESULTS ON LSUN BEDROOM AND CELEBA DATA-SETS", "text": "In this section, we present additional simulations using CelebA Liu et al. 
(2015), and LSUN bedroom Yu et al. (2015) data-sets. The LSUN dataset of bedroom pictures resized to 64× 64, and the CelebA dataset of celebrity face images resized and cropped to 160 × 160. The sample generated images are shown in Figure 6.\nThe inception score for LSUN bedroom data set is 3.860 ± 0.0423 using a single Gaussian kernel with the bandwidth of σ = 16, and 4.064± 0.061 using our proposed method in Algorithm 1." }, { "heading": "C PROOFS OF MAIN THEORETICAL RESULTS", "text": "Before we delve into proofs, we state the main assumptions underlying our theoretical results:\nAssumptions:\n(A.1) The feature space X = V ∪ W ⊂ IRd is compact with a finite diameter diam(X ) < ∞, where V = support(PV ) and W = support(PW ) are the supports of the distributions PV and PW respectively.\nxvii\n(A.2) The feature maps are bounded and Lipchitz almost everywhere (a.e.) ξ ∈ IRD. In particular, supx∈X |ϕ(x; ξ)| < L0, supx∈X ‖∇ξϕ(x; ξ)‖2 ≤ L1, and supξ∈IRD ‖∇xϕ(x; ξ)‖ < L2. Let L def = max{L0, L1, L2} < +∞.\n(A.3) Let µ̂N0 (ξ) def = 1N ∑N k=1 δ(ξ − ξk0 ) denotes the empirical measure for the initial particles\nξ10, · · · , ξN0 . We assume that µ̂N0 (ξ) converges (weakly) to a deterministic measure µ0 ∈ M(IRD). Furthermore, we assume the limiting measure µ0 is absolutely continuous w.r.t. Lebesgue measure and has a compact support support(µ0) = Ξ ⊂ IRD.\nNotation: We denote vectors by lower case bold letters, e.g. x = (x1, · · · , xn) ∈ IRn, and matrices by the upper case bold letters, e.g., M = [Mij ] ∈ IRn×m. The Frobenius norm of a matrix is denoted by ‖M‖F = ∑n i=1 ∑m j=1 |Mij |2. Let IBr(x) def = {y ∈ IRd : ‖y − x‖2 ≤ r} denote the Euclidean ball of radius r centered at x. For a given metric space X , Let Cb(IRd) denote the space of bounded and continuous functions on X equipped with the usual supremum norm\n‖f‖∞ def = sup x∈X |f(x)|. (31)\nFurther, Ckb (X ) the space of all functions in Cb(X ) whose partial derivatives up to order k are bounded and continuous, and Ckc (X ) the space of functions whose partial derivatives up to order k are continuous with compact support.\nxviii\nWe denote the class of the integrable functions f with f(t) ≥ 0 a.e., on 0 ≤ t ≤ T by L1+[0, T ]. Similarly, L∞+ [0, T ] will denote the essentially bounded functions with f(t) ≥ 0 almost everywhere. For a given metric space X , we denote the Borel σ-algebra by B(X ). For a Borel set B ∈ B(X ), the measure value of the set B with respect to the measure is given by µ(B). The space of finite non-negative measures defined on X is denoted byM(X ). The Dirac measure with the unit mass at x ∈ X is denoted by δ(x). For any measure µ ∈ M(X ) and any bounded function f ∈ Cb(X ), we define\n〈µ, f〉 def= ∫ X f(x)µ(dx). (32)\nThe space M(X ) is equipped with the weak topology, i.e., a (random) sequence {µN}N∈IN converges weakly to a deterministic measure µ ∈ M(X ) if and only if 〈µN , f〉 → 〈µ, f〉 for all f ∈ Cb(X ). We denote the weak convergence by µNt\nweakly→ µ. Notice that when X is Polish, thenM(X ) equipped with the weak topology is also Polish.3 For a Polish space X , let DX ([0, T ]) denotes the Skorokhod space of the cádlág functions that take values in X defined on [0, T ]. We assume that DX ([0, T ]) is equipped with the Skorokhod’s J1-topology Billingsley (2013), which in that case DX ([0, T ]) is also a Polish space. We use asymptotic notations throughout the paper. We use the standard asymptotic notation for sequences. 
If an and bn are positive sequences, then an = O(bn) means that lim supn→∞ an/bn < ∞, whereas an = Ω(bn) means that lim infn→∞ an/bn > 0. Furthermore, an = Õ(bn) implies an = O(bnpoly log(bn)). Moreover an = o(bn) means that limn→∞ an/bn = 0 and an = ω(bn) means that limn→∞ an/bn = ∞. Lastly, we have an = Θ(bn) if an = O(bn) and an = Ω(bn). Finally, for positive a, b > 0, denote a . b if a/b is at most some universal constant. Definition C.1. (ORLICZ NORM) The Young-Orlicz modulus is a convex non-decreasing function ψ : IR+ → IR+ such that ψ(0) = 0 and ψ(x)→∞ when x→∞. Accordingly, the Orlicz norm of an integrable random variable X with respect to the modulus ψ is defined as\n‖X‖ψ def = inf{β > 0 : IE[ψ(||X| − IE[|X|]|/β)] ≤ 1}. (33)\nIn the sequel, we consider the Orlicz modulus ψν(x) def = (xν)− 1 . Accordingly, the cases of ‖ · ‖ψ2 and ‖ ·‖ψ1 norms are called the sub-Gaussian and the sub-exponential norms and have the following alternative definitions: Definition C.2. (SUB-GAUSSIAN NORM) The sub-Gaussian norm of a random variable Z, denoted by ‖Z‖ψ2 , is defined as\n‖Z‖ψ2 = sup q≥1\nq−1/2(IE|Z|q)1/q. (34)\nFor a random vector Z ∈ IRn, its sub-Gaussian norm is defined as follows\n‖Z‖ψ2 = sup x∈Sn−1 ‖〈x,Z〉‖ψ2 . (35)\nDefinition C.3. (SUB-EXPONENTIAL NORM) The sub-exponential norm of a random variable Z, denoted by ‖Z‖ψ1 , is defined as follows\n‖Z‖ψ1 = sup q≥1\nq−1(IE[|Z|q])1/q. (36)\nFor a random vector Z ∈ IRn, its sub-exponential norm is defined below\n‖Z‖ψ1 = sup x∈Sn−1 ‖〈Z,x〉‖ψ1 . (37)\n3A topological space is Polish if it is homeomorphic to a complete, separable metric space.\nxix" }, { "heading": "C.1 PROOF OF THEOREM 4.1", "text": "By the triangle inequality, we have that∣∣∣MMDµ∗ [PV , PW∗ ]−MMDµ̂N∗ [PV , PŴN∗ ]∣∣∣ ≤ A1 + A2 + A3 + A4, (38) where the terms Ai, i = 1, 2, 3, 4 are defined as follows\nA1 def = ∣∣∣MMDµ∗ [PV , PW∗ ]− min\nW∈W sup µ∈P\nM̂MDµ[PV , PW ] ∣∣∣\nA2 def = ∣∣∣ min W∈W sup µ∈P M̂MDµ[PV , PW ]− min W∈W sup µ̂N∈PN M̂MDµ̂N [ PV , PW ]∣∣∣ A3 def = ∣∣∣ min W∈W\nsup µ̂N∈PN\nM̂MDµ̂N [ PV , PW ] − M̂MDµ̂N∗ [ PV , PW ]∣∣∣ A4 def = ∣∣∣M̂MDµ̂N∗ [PV , PŴN∗ ]−MMDµ̂N∗ [PV , PŴN∗ ]∣∣∣.\nIn the sequel, we compute an upper bound for each term on the right hand side of equation 38:\nUpper bound on A1:\nFirst, notice that the squared kernel MMD loss in equation 4 can be characterized in terms of class labels and features defined in Section 3.1 as follows\nMMDµ[PV , PW ] = 4IEP⊗2x,y [yŷKµ(x, x̂)] . (39)\nTo see this equivalence, we first rewrite the right hand side of equation 39 as follows IEP⊗2y,x [yŷKµ(x, x̂)] = IP{y = +1}IP{ŷ = +1}IEx,x̂∼P⊗2x|y=+1 [Kµ(x, x̂)]\n+ IP{y = −1}IP{ŷ = −1}IEx,x̂∼P⊗2 x|y=−1 [Kµ(x, x̂)]\n− IP{y = −1}IP{ŷ = +1}IEx∼Px|y=−1,x̂∼Px|y=+1 [Kµ(x, x̂)] − IP{y = +1}IP{ŷ = −1}IEx∼Px|y=+1,x̂∼Px|y=−1 [Kµ(x, x̂)]. (40)\nNow, recall from Section 3.1 that Px|y=+1 = PV , and Px|y=−1 = PW by construction of the labels and random features. Moreover, y, ŷ ∼i.i.d. Uniform{−1,+1}, and thus IP{y = −1} = IP{y = +1} = 12 . 
Therefore, from equation 40, we derive\nIEP⊗2y,x [yŷKµ(x, x̂)] = 1\n4 IEP⊗2V\n[Kµ(x; x̂)] + 1\n4 IEP⊗2W\n[Kµ(x; x̂)]− 1\n2 IEPV ,PW [Kµ(x; x̂)]\n= 1\n4 MMDµ[PV , PW ].\nFor any givenW ∈ W , we have that∣∣∣ sup µ∈P M̂MDµ[PV , PW ]− sup µ∈P MMDµ[PV , PW ] ∣∣∣\n≤ sup µ∈P ∣∣M̂MDµ[PV , PW ]−MMDµ[PV , PW ]∣∣ = 4 sup\nµ∈P ∣∣∣∣∣ 1n(n− 1) ∑ i 6=j yiyjKµ(xi,xj)− IEP⊗2y,x [yŷKµ(x, x̂)] ∣∣∣∣∣ = 4 sup\nµ∈P ∣∣IEµ[En(ξ)]∣∣ ≤ 4\n∣∣∣∣sup µ∈P IEµ[En(ξ)] ∣∣∣∣ , xx\nwhere the error term is defined using the random features\nEn(ξ) def =\n1 n(n− 1) ∑ i 6=j yiyjϕ(xi; ξ)ϕ(xj ; ξ)− IEP⊗2x,y [yŷϕ(x; ξ)ϕ(x̂, ξ)]. (41)\nNow, we invoke the following strong duality theorem Gao & Kleywegt (2016): Theorem C.4. (STRONG DUALITY FOR ROBUST OPTIMIZATION, (GAO & KLEYWEGT, 2016, THEOREM 1)) Consider the general metric space (Ξ, d), and any normal distribution ν ∈ M(Ξ), whereM(Ξ) is the set of Borel probability measures on Ξ. Then,\nsup µ∈M(Ξ)\n{ IEµ[Ψ(ξ)] : Wp(µ, ν) ≤ R } = min\nλ≥0\n{ λRp − ∫ Ξ inf ξ∈Ξ [λdp(ξ, ζ)−Ψ(ξ)]ν(dζ) } , (42)\nprovided that Ψ is upper semi-continuous in ξ.\nUnder the strong duality of Theorem C.4, we obtain that∣∣∣ sup µ∈P M̂MDµ[PV , PW ]− sup µ∈P MMDµ[PV , PW ] ∣∣∣\n≤ 4 ∣∣∣∣minλ≥0 { λRp − ∫ IRD inf ζ∈IRD [ λ‖ξ − ζ‖p2 − En(ζ) ] µ0(dξ) }∣∣∣∣ . (43) In the sequel, let p = 2. The Moreau’s envelope Parikh & Boyd (2014) of a function f : X → IR is defined as follows\nMβf (y) def = inf x∈X\n{ 1\n2β ‖x− y‖22 + f(x)\n} , ∀y ∈ X , (44)\nwhere β > 0 is the regularization parameter. When the function f is differentiable, the following lemma can be established: Lemma C.5. (MOREAU’S ENVELOPE OF DIFFERENTIABLE FUNCTIONS) Suppose the function f : X → IR is differentiable. Then, the Moreau’s envelope defined in equation 44 has the following upper bound and lower bounds\nf(y)− β 2 ∫ 1 0 sup x∈X ‖∇f(y + s(x− y))‖22ds ≤M β f (y) ≤ f(y). (45)\nIn particular, when f is Lf -Lipschitz, we have\nf(y)− βL2f\n2 ≤Mβf (y) ≤ f(y). (46)\nThe proof is presented in Appendix D.1.\nNow, we return to Equation equation 43. We leverage the lower bound on Moreau’s envelope in equation 45 of Lemma C.5 as follows∣∣∣ sup\nµ∈P M̂MDµ[PV , PW ]− sup µ∈P MMDµ[PV , PW ] ∣∣∣ ≤ 4 ∣∣∣∣minλ≥0 { λR2 − ∫ IRD M 1 2λ −En(ξ)µ0(dξ)\n}∣∣∣∣ ≤ 4 ∣∣∣∣∣minλ≥0 { λR2 + IEµ0 [En(ξ)] + 1 4λ IEµ0 [∫ 1 0 sup ζ∈IRD ‖∇En((1− s)ξ + sζ)‖22ds\n]}∣∣∣∣∣ ≤ 4|IEµ0 [En(ξ)]|+ 4RIEµ0 [∫ 1 0 sup ζ∈IRD ‖∇En((1− s)ξ + sζ)‖22ds ] . (47)\nxxi\nLet ζ∗ = ζ∗(ξ, s) = arg supζ∈IRD ‖∇En(1 − s)ξ + sζ‖2. Then, applying the union bound in conjunction with Inequality equation 47 yields\nIP (∣∣∣ sup µ∈P M̂MDµ[PV , PW ]− sup µ∈P MMDµ[PV , PW ] ∣∣∣ ≥ δ)\n≤ IP (∣∣∣∣∣ ∫ IRD En(ξ)µ0(dξ) ∣∣∣∣∣ ≥ δ8 ) + IP (∫ IRD ∫ 1 0 ‖∇En((1− s)ξ + sζ∗)‖22dsµ0(dξ) ≥ δ 8R ) .\n(48)\nNow, we state the following lemma: Lemma C.6. (TAIL BOUNDS FOR THE FINITE SAMPLE ESTIMATION ERROR) Consider the estimation error En defined in equation 41. Then, the following statements hold:\n• Z = ‖∇En(ξ)‖22 is a sub-exponential random variable with the Orlicz norm of ‖Z‖ψ1 ≤ 9×29×L4\nn2 for every ξ ∈ IR D. Moreover,\nIP (∫ IRD ∫ 1 0 ‖∇En((1− s)ξ + sζ∗)‖22dsµ0(dξ) ≥ δ ) ≤ 2e− n2δ 9×29×L4 +L 4 9 ,\n(49)\n• En(ξ) is zero-mean sub-Gaussian random variable with the Orlicz norm of ‖En(ξ)‖ψ2 ≤ 16 √ 3L4\nn for every ξ ∈ IR D. Moreover,\nIP (∣∣∣∣∫ IRD En(ξ)µ0(dξ) ∣∣∣∣ ≥ δ) ≥ 2e− n2δ216√3L4 . 
(50) The proof of Lemma C.6 is presented in Appendix C.2.\nNow, we leverage the probability bounds equation 49 and equation 50 of Lemma C.6 to upper bound the terms on the right hand side of equation 48 as below\nIP (∣∣∣ sup µ∈P M̂MDµ[PV , PW ]− sup µ∈P MMDµ[PV , PW ] ∣∣∣ ≥ δ) ≤ 2e− n2δ2√3×211×L4 + 2e− n2δ9×212×RL4 +L49\n≤ 4max { e − n 2δ2√ 3×211×L4 , e − n 2δ 9×212×RL4 +L 4 9 } ,\n(51)\nwhere the last inequality comes from the basic inequality a+ b ≤ 2 max{a, b}. Therefore, with the probability of at least 1− %, we have that∣∣∣ sup\nµ∈P M̂MDµ[PV , PW ]− sup µ∈P MMDµ[PV , PW ] ∣∣∣ ≤ max 3 1 4 × 2 11 2 × L2 n ln 1 2 ( 4 % ) , 9× 212 ×RL4 n2 ln 4eL49 %\n , (52) for allW ∈ W . Lemma C.7. (DISTANCE BETWEEN MINIMA OF ADJACENT FUNCTIONS) Let Ψ(W ) : W → IR and Φ(W ) :W → IR. Further, suppose ‖Ψ(W )− Φ(W )‖∞ ≤ δ for some δ > 0. Then,∣∣∣∣ minW∈WΨ(W )− minW∈W Φ(W )\n∣∣∣∣ ≤ δ. (53) xxii\nSee Appendix D.4 for the proof.\nLet Ψ(W ) def= supµ∈P M̂MDµ[PV , PW ], and Φ(W ) def = supµ∈P MMDµ[PV , PW ]. Then, from Inequality equation 53, we have the following upper bound on A1\nA1 = ∣∣∣MMDµ∗ [PV , PW∗ ]− min\nW∈W sup µ∈P\nM̂MDµ[PV , PW ] ∣∣∣\n= ∣∣∣ min W∈W sup µ∈P MMDµ[PV , PW ]− min W∈W sup µ∈P M̂MDµ[PV , PW ] ∣∣∣\n≤ max\n{ 3 1 4 × 24 × L2\nn ln\n1 2\n( 4\n%\n) , 9× 211 ×RL4\nn2 ln\n( 4e L4 9\n%\n)} . (54)\nwith the probability of (at least) 1− %.\nUpper bound on A2:\nTo establish the upper bound on A2, we recall that\nM̂MDµ̂N [PV , PW ] = 1 n(n− 1) 1 N ∑ i 6=j N∑ k=1 yiyjϕ(xi; ξ k)ϕ(xj ; ξ k) (55a)\nM̂MDµ[PV , PW ] = 1 n(n− 1) ∑ i 6=j yiyjIEµ[ϕ(xi; ξ)ϕ(xj ; ξ)]. (55b)\nTherefore,∣∣∣ sup µ∈P M̂MDµ[PV , PW ]− sup µ̂N∈PN M̂MDµ̂N [PV , PW ] ∣∣∣\n≤ ∣∣∣ sup µ∈P IEµ[ϕ(xi; ξ)ϕ(xj ; ξ)]− sup µ̂N∈PN IEµ̂N [ϕ(xi; ξ)ϕ(xj ; ξ)] ∣∣∣. (56)\nHere, the last inequality is due to Theorem C.4 and the following duality results hold\nsup µ∈P IEµ[ϕ(x; ξ)ϕ(x̂; ξ)] = inf λ≥0\n{ λR2 − ∫ IRD inf ζ∈IRD {λ‖ξ − ζ‖22 − ϕ(xi; ζ)ϕ(xj ; ζ)}µ0(dξ) }\nsup µ̂N∈PN IEµ̂N [ϕ(xi; ξ)ϕ(xj ; ξ)] = inf λ≥0\n{ λR2 − 1\nN N∑ k=1 inf ζ∈IRD {λ‖ξk0 − ζ‖22 − ϕ(xi; ζ)ϕ(xj ; ζ)}\n} .\nNow, in the sequel, we establish a uniform concentration result for the following function\nTλ(x,x̂) : IR N×D 7→ IR\n(ξ10, · · · , ξN0 ) 7→ Tλ(x,x̂)(ξ10, · · · , ξN0 ) = 1\nN N∑ k=1 M 1 2λ −ϕ(x,·)ϕ(x̂,·)(ξ k 0 )− ∫ IRD M 1 2λ −ϕ(x,·)ϕ(x̂,·)(ξ)µ0(dξ).\nThen, from equation 56 we have∣∣∣ sup µ∈P M̂MDµ[PV , PW ]− sup µ̂N∈PN M̂MDµ̂N [PV , PW ] ∣∣∣ ≤ sup λ≥0 sup x,x̂∈X |Tλ(x,x̂)(ξ 1 0, · · · , ξN0 )|. (57)\nWe now closely follow the argument of Rahimi & Recht (2008) to establish a uniform concentration result with respect to the data points x, x̂ ∈ X . In particular, consider an -net cover of X ⊂ IRd.\nxxiii\nThen, we require N = ( 4diam(X ) )d balls of the radius > 0, e.g., see (Pollard, 1990, Lemma\n4.1, Section 4). Let Z = {z1, · · · , zN } ⊂ X denotes the center of the covering net. Now, let (ξ10, · · · , ξk0 , · · · , ξN0 ) ∈ IR N×D and (ξ10, · · · , ξ̃k0 , · · · , ξN0 ) ∈ IR N×D be two sequences that differs in the k-th coordinate for 1 ≤ k ≤ N . Then,∣∣T(zi,zj)(ξ10, · · · , ξk0 , · · · , ξN0 )− T(zi,zj)(ξ10, · · · , ξ̃k0 , · · · , ξN0 )∣∣ = 1\nN ∣∣∣M 12λ−ϕ(zi;·)ϕ(zj ;·)(ξk0 )−M 12λ−ϕ(zi,·)ϕ(zj ,·)(ξ̃k0 )∣∣∣. (58) Without loss of generality suppose M 1 2λ\n−ϕ(zi;·)ϕ(zj ;·)(ξ k 0 ) ≥M\n1 2λ\n−ϕ(zi;·)ϕ(zj ;·)(ξ̃ k 0 ). 
Then,\nM 1 2λ\n−ϕ(zi;·)ϕ(zj ;·)(ξ k 0 )−M\n1 2λ\n−ϕ(zi,·)ϕ(zj ,·)(ξ̃ k 0 )\n= inf ζ∈IRD\n{ λ‖ζ − ξk0‖22 − ϕ(zi; ζ)ϕ(zj ; ζ) } − inf ζ∈IRD { λ‖ζ − ξ̃k0‖22 − ϕ(zi; ζ)ϕ(zj ; ζ) } (a) ≤ −ϕ(zi; ξk0 )ϕ(zj ; ξk0 )− inf ζ∈IRD { λ‖ζ − ξ̃k0‖22 − ϕ(zi; ζ)ϕ(zj ; ζ)\n} (b) ≤ −ϕ(zi; ξk0 )ϕ(zj ; ξk0 ) + sup ζ∈IRD {ϕ(zi; ζ)ϕ(zj ; ζ)}\n(c) ≤ 2L2, (59)\nwhere (a) follows by letting ζ = ξk0 in the first optimization problem, (b) follows by using the fact that −λ‖ζ − ξ̃k0‖2 is non-positive for any ζ ∈ IR\nD and can be dropped, and (c) follows from Assumption (A.2).\nNow, plugging the upper bound in equation 59 into equation 58 yields∣∣Tλ(zi,zj)(ξ10, · · · , ξk0 , · · · , ξN0 )− Tλ(zi,zj)(ξ10, · · · , ξ̃k0 , · · · , ξN0 )∣∣ ≤ 2L2N . From McDiarmid’s Martingale inequality McDiarmid (1989) and the union bound, we obtain that\nIP ( ∪zi,zj∈Z |Tλ(zi,zj)(ξ 1 0, · · · , ξN0 )| ≥ δ ) ≤ ( 4diam(X ) )d · ( −Nδ 2\nL2\n) , (60)\nfor all λ ≥ 0. Now, consider arbitrary points (x, x̂) ∈ X ×X . Let the center of the balls containing those points be zi, zj ∈ Z , i.e., x ∈ IBε(zi) and x̂ ∈ IBε(zj) for some zi, zj ∈ Z . Then, by the triangle inequality, we have that\n|Tλ(x,x̂)(ξ10, · · · , ξN0 )− T(zi,zj)(ξ 1 0, · · · , ξN0 )|\n≤ |Tλ(x,x̂)(ξ10, · · · , ξN0 )− Tλ(zi,x̂)(ξ 1 0, · · · , ξN0 )|\n+ |Tλ(zi,x̂)(ξ 1 0, · · · , ξN0 )− Tλ(zi,zj)(ξ 1 0, · · · , ξN0 )|\n≤ ‖∇xTλ(x,x̂)(ξ10, · · · , ξN0 )‖2‖x− zi‖2 + ‖∇x̂Tλ(x,x̂)(ξ10, · · · , ξN0 )‖2‖x̂− zj‖2 ≤ 2LT , (61)\nwhere LT = LT (ξ10, · · · , ξN0 ) def = supx,x̂∈X ‖∇xTλ(x,x̂)(ξ 1 0, · · · , ξN0 )‖2 is the Lipschitz constant of the mapping T . Note that the Lipschitz constant LT is a random variable with respect to the random feature samples ξ0, · · · , ξN . Let (x∗, x̂∗) def = arg supx,x̂∈X ‖∇xTλ(x,x̂)(ξ 1 0, · · · , ξN0 )‖2.\nxxiv\nWe compute an upper bound on the second moment of the random variable LT as follows\nIEµ0 [ L2T ] = IEµ0 [ ‖∇xTλ(x∗,x̂∗)(ξ 1 0, · · · , ξN0 )‖22 ] = IEµ0 ∥∥∥∥∥ 1N N∑ k=1 ∇xM 1 2λ −ϕ(x∗;·)ϕ(x̂∗;·)(ξ k 0 )− ∫ IRD ∇xM 1 2λ −ϕ(x∗;·)ϕ(x̂∗;·)(ξ)µ0(dξ) ∥∥∥∥∥ 2\n2 = IEµ0 ∥∥∥∥∥ 1N N∑ k=1 ∇xM 1 2λ −ϕ(x∗;·)ϕ(x̂∗;·)(ξ k 0 ) ∥∥∥∥∥ 2\n2\n− IEµ0 [∥∥∥∥∫\nIRD ∇xM\n1 2λ −ϕ(x∗;·)ϕ(x̂∗;·)(ξ)µ0(dξ) ∥∥∥∥2 2 ]\n≤ 1 N2 IEµ0 ∥∥∥∥∥ N∑ k=1 ∇xM 1 2λ −ϕ(x∗;·)ϕ(x̂∗;·)(ξ k 0 ) ∥∥∥∥∥ 2\n2 . We further proceed using the triangle inequality as well as the basic inequality (a1 + a2 + · · · + aN ) 2 ≤ N(a21 + a22 + · · ·+ a2N ),\nIEµ0 [ L2T ] = 1\nN2 IEµ0 ∥∥∥∥∥ N∑ k=1 ∇xM 1 2λ −ϕ(x∗;·)ϕ(x̂∗;·)(ξ k 0 ) ∥∥∥∥∥ 2\n2 ≤ 1 N2 IEµ0 ( N∑ k=1 ∥∥∥∇xM 12λ−ϕ(x∗;·)ϕ(x̂∗;·)(ξk0 )∥∥∥2 )2\n≤ 1 N N∑ k=1 IEµ0 [∥∥∥∇xM 12λ−ϕ(x∗;·)ϕ(x̂∗;·)(ξk0 )∥∥∥22 ] . (62)\nTo proceed from equation 62, we leverage the following lemma: Lemma C.8. (MOREAU’S ENVELOP OF PARAMETRIC FUNCTIONS) Consider the parametric function f : X ×Θ→ IR and the associated Moreau’s envelope for a given θ ∈ Θ ⊂ IRd:\nMβf(·;θ)(x) = infy∈X\n{ 1\n2β ‖x− y‖22 + f(y;θ)\n} . (63)\nFurthermore, define the proximal operator as follows\nProxβf(·;θ)(x) = arg infy∈X\n{ 1\n2β ‖x− y‖22 + f(y;θ)\n} . (64)\nThen, Moreau’s envelope has the following upper bound∥∥∥∇θMβf(·;θ)(x)∥∥∥ 2 ≤ ∥∥∥∇θf (Proxβf(·;θ)(x);θ)∥∥∥ 2 . 
(65)\nThe proof is presented in Appendix D.2.\nEquipped with Inequality equation 65 of Lemma C.8, we now compute an upper bound on the right hand side of equation 62 as follows\nIEµ0 [ L2T ] ≤ 1 N N∑ k=1 IEµ0 [|ϕ(x̂∗; ξ)|2 · ‖∇xϕ(x∗; ξk0 )‖22] ≤ L4, (66)\nwhere the last inequality is due to (A.2).\nxxv\nInvoking Markov’s inequality now yields IP ( |Tλ(x,x̂)(ξ 1 0, · · · , ξN0 )− Tλ(zi,zj)(ξ 1 0, · · · , ξN0 )| ≥ δ ) = IP ( LT ≥ δ\n2 ) ≤ ( 2\nδ\n)2 IEµ0 [L 2 T ]\n≤ ( 2\nδ\n)2 L4. (67)\nNow, using the union bound, for every arbitrary pair of data points (x, x̂) ∈ X × X the following inequality holds\nIP ( |Tλ(x,x̂)(ξ 1 0, · · · , ξN0 )| ≥ δ ) ≤ IP ( |Tλ(zi,zj)(ξ 1 0, · · · , ξN0 )| ≥ δ/2 ) + IP ( |Tλ(x,x̂)(ξ 1 0, · · · , ξN0 )− Tλ(zi,zj)(ξ 1 0, · · · , ξN0 )| ≥ δ/2\n) (68)\n≤ ( 2\nδ\n)2 L4 + ( 4diam(X ) )d · ( −Nδ 2\nL2\n) .\nFollowing the proposal of Rahimi & Recht (2008), we select = (κ1/κ2) 1 d+2 , where κ1 def = (4diam(X ))d · e− 2Nλδ2 2L2λ+L4 and κ2 def = (2/δ)2L4. Then,\nIP ( sup x,x̂∈X |Tλ(x,x̂)(ξ 1 0, · · · , ξN0 )| ≥ δ ) ≤ 28 ( L2diam(X ) δ )2 · ( − Nδ 2 L2(d+ 2) ) .\nThus, with the probability of at least 1− %, the following inequality holds\nsup λ≥0 sup x,x̂∈X\n|Tλ(x,x̂)(ξ 1 0, · · · , ξN0 )| ≤\n( L2(d+ 2)\nN W\n( 28Ndiam2(X )\n%\n)) 1 2\n, (69)\nwhere W(·) is the Lambert W -function.4 Since W(x) ≤ ln(x) for x > e, we can rewrite the upper bound in terms of elementary functions\nsup λ≥0 sup x,x̂∈X\n|Tλ(x,x̂)(ξ 1 0, · · · , ξN0 )| ≤\n√ L2(d+ 2)\nN ln\n1 2\n( 28Ndiam2(X )\n%\n) , (70)\nprovided thatN is sufficiently large and/or % is sufficiently small so that 2 8Ndiam2(X )\n% ≥ e. Plugging Inequality equation 70 in equation 57 now results in the following inequality∣∣∣ sup\nµ∈P M̂MDµ[PV , PW ]− sup µ̂N∈PN M̂MDµ̂N [PV , PW ]\n∣∣∣ ≤√L2(d+ 2) N ln 1 2 ( 28Ndiam2(X ) % ) , (71)\nfor allW ∈ W . Employingequation 53 from Lemma C.7 now yields the following upper bound A2 = ∣∣∣ min W∈W sup µ∈P M̂MDµ[PV , PW ]− min W∈W sup µ̂N∈PN M̂MDµ̂N [ PV , PW ]∣∣∣ ≤ √ L2(d+ 2)\nN ln\n1 2\n( 28Ndiam2(X )\n%\n) . (72)\n4Recall that the lambert W -function is the inverse of the function f(W ) =WeW .\nxxvi\nUpper bound on A3:\nRecall that the solution of the empirical risk function of equation 14 is denoted by\n(ŴN∗ , µ̂ N ∗ )\ndef = arg min\nW∈W arg inf µ̂N∈PN M̂MD\nα µ̂N [PV , PW ]\n= arg min W∈W arg sup µ̂N∈PN\n8 n(n− 1) ∑\n1≤i<j≤n\nyiyjIEµ̂N∈PN [ϕ(xi; ξ)ϕ(xj ; ξ)]\n− 8 n(n− 1)α ∑ 1≤i<j≤n (IEµ̂N [ϕ(xi; ξ)ϕ(xj ; ξ)]) 2. (73)\nWe also define the solution of the empirical kernel alignment as follows\n(ŴN , µ̂ N ) def = arg min\nW∈W arg sup µ̂N∈PN M̂MDµ̂N [PV , PW ]\n= 8 n(n− 1) ∑\n1≤i<j≤n\nyiyjIEµ̂N [ϕ(xi; ξ)ϕ(xj ; ξ)]. (74)\nDue to the optimality of the empirical measure µ̂N∗ for the inner optimization in equation 73, the following inequality holds\nM̂MD α\nµ̂N [ PV , PŴN∗ ] ≤ M̂MD α µ̂N∗ [PV , PŴN∗ ]\n≤ 8 n(n− 1) ∑ 1≤i<j≤n yiyjIEµ̂N∗ [ϕ(xi; ξ)ϕ(xj ; ξ)]. (75)\nUpon expansion of M̂MD α\nµ̂N [ PV , PŴN∗ ] , and after rearranging the terms in equation 75, we arrive\nat M̂MDµ̂N [ PV , PŴN∗ ] − M̂MDµ̂N∗ [ PV , PŴN∗ ] = 8 n(n− 1) ∑\n1≤i<j≤n\nyiyj(IEµ̂N [ϕ(xi; ξ)ϕ(xj ; ξ)]− IEµ̂N∗ [ϕ(xi; ξ)ϕ(xj ; ξ)])\n≤ 8 n(n− 1)α ∑ 1≤i<j≤n ( IEµ̂N [ϕ(xi; ξ)ϕ(xj ; ξ)] )2 ≤ 8L 4\nα , (76)\nwhere the last inequality is due to the fact that ‖ϕ‖∞ < L by (A.1). Now, due to optimality of ŴN for the outer optimization problem in equation 74, we have\nM̂MDµ̂N [ PV , PŴN ] ≤ M̂MDµ̂N [ PV , PŴN∗ ] . (77)\nPutting together Inequalities equation 76 and equation 77 yields M̂MDµ̂N [ PV , PŴN ] − M̂MDµ̂N∗ [ PV , PŴN∗ ] ≤ 8L 4 α . 
(78)\nSimilarly, due to the optimality of the empirical measure µ̂N for the optimization in equation 74 we have that\nM̂MDµ̂N∗ [ PV , PŴN∗ ] ≤ M̂MDµ̂N∗ [ PV , PŴN ] ≤ M̂MDµ̂N [ PV , PŴN ] . (79)\nxxvii\nCombining equation 78 and equation 79 then yields A3 = ∣∣∣M̂MDµ̂N∗ [PV , PŴN∗ ]− M̂MDµ̂N [PV , PŴN ]∣∣∣ ≤ 8L4α . (80)\nUpper bound on A4:\nThe upper bound on A4 can be obtained exactly the same way as A1. Specifically, from equation 52 it follows directly that\nA4 = ∣∣∣M̂MDµ̂N∗ [PV , PŴN∗ ]−MMDµ̂N∗ [PV , PŴN∗ ]∣∣∣\n≤ sup µ̂N∈PN ∣∣∣M̂MDµ̂N [PV , PŴN∗ ]−MMDµ̂N [PV , PŴN∗ ]∣∣∣ (81) ≤ max { 3 1 4 × 2 112 × L2\nn ln\n1 2\n( 4\n%\n) , 9× 212 ×RL4\nn2 ln\n( 4e L4 9\n%\n)} . (82)\nNow, plugging the derived upper bounds in A1-A4 in equation 38 and employing the union bound completes the proof." }, { "heading": "C.2 PROOF OF THEOREM 4.2", "text": "The proof has three main ingredients and follows the standard procedure in the literature, see, e.g., Wang et al. (2017); Luo & Mattingly (2017). In the first step, we identify the mean-field limit of the particle SGD in equation 15. In the second step, we prove the convergence of the measured-valued process {(µNt )0≤t≤T } to the mean-field solution by establishing the pre-compactness of Sokhorhod space. Lastly, we prove the uniqueness of the mean-field solution of the particle SGD.\nStep 1-Identification of the scaling limit: First, we identify the weak limit of converging subsequences via the action of the empirical measure µ̂Nm(ξ) = 1 N ∑N k=1 δ(ξ − ξkm) on a test function f ∈ C3b (IR D). In particular, we use the standard techniques of computing the scaling limits from Luo & Mattingly (2017).\nRecall that the action of an empirical measure on a bounded function is defined as follows\n〈f, µ̂Nm〉 def =\n1\nN N∑ k=1 f(ξkm). (83)\nWe analyze the evolution of the empirical measure µ̂Nm via its action on a test function f ∈ C3b (IR D). Using Taylor’s expansion, we obtain 〈f, µ̂Nm+1〉 − 〈f, µ̂Nm〉 = 〈f, µ̂Nm+1〉 − 〈f, ν̂Nm+1〉\n= 1\nN N∑ k=1 f(ξkm+1)− f(ξkm)\n= 1\nN N∑ k=1 ∇f(ξkm)(ξkm+1 − ξkm)T +RNm.\nwhere RNm is a remainder term defined as follows\nRNm def =\n1\nN N∑ k=1 (ξkm+1 − ξkm)T∇2f(ξ̃k)(ξkm+1 − ξkm), (84)\nxxviii\nwhere ξ̃k def= (ξ̃k(1), · · · , ξ̃k(p)), and ξ̃k(i) ∈ [ξkm(i), ξkm+1(i)], for i = 1, 2, · · · , p.\nPlugging the difference term (ξkm+1 − ξkm) from the SGD equation in equation 15 results in\n〈f, µ̂Nm+1〉 − 〈f, µ̂Nm〉 (85)\n= η\nN2α N∑ k=1 ∇f(ξkm) ·\n(( 1\nN N∑ `=1 ϕ(xm; ξ ` m)ϕ(x̃m; ξ ` m)− αymỹm\n) ∇ξ ( ϕ(xm; ξ k m)ϕ(x̃m; ξ k m) )) +RNm.\nNow, we define the drift and Martingale terms as follows\nDNm def =\nη\nNα ∫∫ X×Y ( 〈ϕ(x, ξ)ϕ(x̃, ξ), µ̂Nm〉 − αyỹ ) (86a)\n× 〈∇f(ξ)(ϕ(x̃; ξ)∇ξϕ(x; ξ) + ϕ(x; ξ)∇ξϕ(x̃; ξ)), µ̂Nm〉dP⊗2x,y((x, y), (x̃, ỹ))\nMNm def =\nη\nNα\n( 〈ϕ(xm, ξ)ϕ(x̃m, ξ), µ̂Nm〉 − αymỹm ) (86b)\n× 〈∇f(ξ)(ϕ(x̃m; ξ)∇ξϕ(xm; ξ) + ϕ(x̃m; ξ)∇ξϕ(xm; ξ)), µ̂Nm〉 − DNm.\nrespectively. Using the definitions ofDNm andM N m in equation 86a-equation 86b, we recast Equation equation 85 as follows\n〈f, µ̂Nm+1〉 − 〈f, µ̂Nm〉 = DNm +MNm +RNm. (87) Summation over ` = 0, 1, 2 · · · ,m− 1 and using the telescopic sum yields\n〈f, µ̂Nm〉 − 〈f, µ̂N0 〉 = m−1∑ `=0 DN` + m−1∑ `=0 MN` + m−1∑ `=0 RN` . (88)\nWe also define the following continuous embedding of the drift, martingale, and the remainder terms as follows\nDNt def = bNtc∑ `=0 DN` (89a)\nMNt def = bNtc∑ `=0 MN` (89b)\nRNt def = bNtc∑ `=0 RN` , t ∈ (0, T ]. (89c)\nThe scaled empirical measure µNt def = µ̂NbNtc then can be written as follows\n〈f, µNt 〉 − 〈f, µN0 〉 = DNt +MNt +RNt . 
(90)\nSince the drift process (DNt )0≤t≤T is a piecewise cádlág process, we have\nDN` =\n∫ `+1 N\n` N\nR[µs]ds, (91)\nwhere the functional R[µs] is defined as follows\nR[µs] def = η\nα ∫∫ X×Y (〈ϕ(x, ξ)ϕ(x̃, ξ), µs〉 − αyỹ) (92)\n× 〈∇f(ξ)(ϕ(x̃; ξ)∇ξϕ(x; ξ) + ϕ(x; ξ)∇ξϕ(x̃; ξ))T , µs〉P⊗2x,y((dx,dx̃), (dy,dỹ)).\nxxix\nTherefore, the expression in equation 90 can be rewritten as follows 〈f, µNt 〉 − 〈f, µN0 〉 = ∫ t\n0\nR[µs]ds+MNt +RNt . (93)\nIn the following lemma, we prove that the remainder term sup0≤t≤T |RNt | vanishes in probabilistic sense as the number of particles tends to infinity N →∞: Lemma C.9. (LARGE N -LIMIT OF THE REMAINDER PROCESS) Consider the remainder process (RNt )0≤t≤T defined via scaling in equation 84-equation 89c. Then, there exists a constant C0 > 0 such that\nsup 0≤t≤T\n|RNt | ≤ C0T\nN\n( ηL2 + 2ηL4\nα\n) . (94)\nand thus lim supN→∞ sup0≤t≤T |RNt | = 0.\nProof. The proof is relegated to Appendix D.5.\nWe can also prove a similar result for the process defined by the remainder term:\nLemma C.10. (LARGE N -LIMIT OF THE MARTINGALE PROCESS) Consider the Martingale process (MNt )0≤t≤T defined via scaling in equation 86b-equation 89b. Then, for some constant C1 > 0, the following inequality holds\nIP ( sup\n0≤t≤T |MNt | ≥ ε ) ≤ 1 Nαε 4 √ 2L2 √ bNT cηC1(L2 + α)2. (95)\nIn particular, with the probability of at least 1− ρ, we have\nsup 0≤t≤T\n|MNt | ≤ 1 Nαρ 4 √\n2L2 √ bNT cηC1(L2 + α)2. (96)\nand thus lim supN→∞ sup0≤t≤T |MNt | = 0 almost surely.\nProof. The proof is deferred to Appendix D.6.\nNow, using the results of Lemmata C.10 and C.9 in conjunction with equation 93 yields the following mean-field equation as N →∞,\n〈µt, f〉 = 〈µ0, f〉+ η\nα ∫ t 0 (∫∫ X×Y (〈ϕ(x, ξ)ϕ(x̃, ξ), µs〉 − αyỹ) (97)\n× 〈∇f(ξ)(ϕ(x̃; ξ)∇ξϕ(x; ξ) + ϕ(x; ξ)∇ξϕ(x̃; ξ)), µs〉P⊗2x,y((dx,dx̃), (dy,dỹ))\n) ds.\nNotice that he mean-field equation in equation 97 is in the weak form. When the Lebesgue density pt(ξ) = dµt/dξ exists, the McKean-Vlasov PDE in equation 22 can be readily obtained from equation 97.\nStep 2: Pre-compactness of the Skorkhod space: To establish our results in this part of the proof, we need a definition and a theorem:\nxxx\nDefinition C.11. (TIGHTNESS) A setA of probability measures on a metric space S is tight if there exists a compact subset S0 ⊂ S such that\nν(S0) ≥ 1− ε, for all ν ∈ A, (98)\nfor all ε > 0. A sequence {XN}N∈IN of random elements of the metric space S is tight if there exists a compact subset S0 ⊂ S such that\nν(XN ∈ S0) > 1− ε, (99)\nfor all ε > 0, and all N ∈ IN.\nNow, to show the tightness of the measured valued process (µNt )0≤t≤T , we must verify Jakubowski’s criterion (Jakubowski, 1986, Thm. 1):\nTheorem C.12. (JAKUBOWSKI’S CRITERION (JAKUBOWSKI, 1986, THM. 1)) A sequence of measured-valued process {(ζNt )0≤t≤T }N∈IN is tight in DM(IRD)([0, T ]) if and only if the following two conditions are satisfied:\n(J.1) For each T > 0 and γ > 0, there exists a compact set UT,γ such that\nlim N→∞\ninf IP ( ζNt ∈ UT,γ ,∀t ∈ (0, T ] ) > 1− γ. (100)\nThis condition is referred to as the compact-containment condition.\n(J.2) There exists a family H of real-valued functions H : M(IRD) 7→ IR that separates points inM(IRD) and is closed under addition such that for every H ∈ H, the sequence {(H(ξNt ))0≤t≤T }N∈IN is tight in DIR([0, T ]).\nTo establish (J1), we closely follow the proof of (Giesecke et al., 2013, Lemma 6.1.). In particular, for each L > 0, we define SL = [0, B]p. 
Then, SB ⊂ IRp is compact, and for each t ≥ 0, and N ∈ IN, we have\nIE[µNt (IR p/SB)] =\n1\nN N∑ k=1 IP ( ‖ξkbNtc‖2 ≥ B ) (101)\n(a) ≤ 1 N N∑ k=1 IE[‖ξkbNtc‖2] B\n(102)\n(b) ≤ c0 + ηαL 2T + 2ηL4T\nB , (103)\nwhere (a) follows from Markov’s inequality, and (b) follows from the upper bound on the norm of the particles in equation 177 of Appendix D. We now define the following set\nUB = { µ ∈M(IRp) : µ(IRp/S(B+j)2) <\n1√ B + j\nfor all j ∈ IN } . (104)\nxxxi\nWe let UT,γ = UB , where UB is the completion of the set UB . By definition, UT,γ is a compact subset ofM(IRD). Now, we have\nIP ( µNt 6∈ UT,γ ) ≤ ∞∑ j=1 IP ( µNt (IR p/S(B+j)2) > 1√ B + j )\n≤ ∞∑ j=1 IE[µNt (IR p/S(B+j)2)] 1/ √ B + j\n≤ ∞∑ j=1 c0 + ηL 2T + 2(η/α)L4T (B + j)2/ √ B + j\n= ∞∑ j=1 c0 + ηL 2T + 2(η/α)L4T (B + j)3/2 . (105)\nNow, since\nlim B→∞ ∞∑ j=1 c0 + ηL 2T + 2(η/α)L4T (B + j)3/2 = 0, (106)\nthis implies that for any γ > 0, there exists a B > 0, such that\nlim N→∞\ninf IP ( µNt ∈ UB ,∀t ∈ (0, T ] ) > 1− γ. (107)\nThis completes the proof of (J.1). To verify (J.2), we consider the following class of functions\nH def= {H : ∃f ∈ C3b (IR D) such that H(µ) = 〈µ, f〉,∀µ ∈M(IRD)}. (108)\nBy definition, every function H ∈ H is continuous with respect to the weak topology ofM(IRD) and further the class of functions H separate points inM(IRD) and is closed under addition. Now, we state the following sufficient conditions to establish (J.2). The statement of the theorem is due to (Robert, 2013, Thm. C.9): Theorem C.13. (TIGHTNESS IN DIR([0, T ]), (ROBERT, 2013, THM. C.9)) A sequence {(ZNt )0≤t≤T }N∈IN is tight in DIR([0, T ]) iff for any δ > 0, we have\n(T.1) There exists > 0, such that\nIP(|ZN0 | > ) ≤ δ, (109) for all N ∈ IN.\n(T.2) For any ρ > 0, there exists σ > 0 such that\nIP ( sup\nt1,t2≤T,|t1−t2|≤ρ |ZNt1 − Z N t2 | > σ\n) ≤ δ, (110)\nThis completes the tightness proof of the of the laws of the measured-valued process {(µNt )0≤t≤T }N∈IN. Now, we verify the condition (J.2) by showing that the sufficient conditions (T.1) and (T.2) hold for function values {(H(µNt ))0≤t≤T }N∈IN, where H ∈ H and H is defined in Eq. equation 108. Now, condition (T.1) is readily verified since\nH(µN0 ) = 〈µN0 , f〉 = ∫\nIRD f(ξ)µN0 (dξ) (111) ≤ ‖f‖∞ ∫\nIRD µN0 (dξ) (112)\n≤ b, (113)\nxxxii\nwhere in the last step, we used the fact that f ∈ C3b (IR D), and hence, ‖f‖∞ ≤ b. Thus, IP(H(µN0 ) ≥ b) = 0 for all N ∈ IN, and the condition (T.1) is satisfied. Now, consider the condition (T.2). From Equation equation 93, and with 0 ≤ t1 < t2 ≤ T we have\n|H(µNt1)−H(µ N t1)| = |〈f, µ N t1〉 − 〈f, µ N t2〉| ≤ ∫ t2 t1 |R[µs]|ds+ |MNt1 −M N t2 |+ |R N t1 −R N t2 |. (114)\nTo bound the first term, recall the definition of R[µs] from equation 92. The following chain of inequalities holds,\n|R[µs]| ≤ η\nα IE P⊗2x,y\n[|〈ϕ(x, ξ)ϕ(x̃, ξ), µs〉 − αyỹ||〈∇f(ξ)(ϕ(x̃; ξ)∇ξϕ(x; ξ) + ϕ(x; ξ)∇ξϕ(x̃; ξ))T , µs〉|]\n≤ η α IE P⊗2x [(|〈ϕ(x, ξ)ϕ(x̃, ξ), µs〉|+ α)|〈∇f(ξ)(ϕ(x̃; ξ)∇ξϕ(x; ξ) + ϕ(x; ξ)∇ξϕ(x̃; ξ))T , µs〉|]. (115)\nLet I : IRD → IR, I(ξ) = 1 denotes the identity function. Notice that 〈I, µs〉 = ∫\nIRD µs(ds) = 1.\nFrom equation 115, we proceed as follows\n|R[µs]| ≤ η\nα IE P⊗2 X\n[(‖ϕ‖2∞ · |〈I, µs〉|+ α) · ‖∇f(ξ)(ϕ(x̃; ξ)∇ξϕ(x; ξ) + ϕ(x; ξ)∇ξϕ(x̃; ξ))T ‖∞ · |〈I, µs〉|]\n≤ η α IE P⊗2 X [(‖ϕ‖2∞ + α) · ‖∇f(ξ)(ϕ(x̃; ξ)∇ξϕ(x; ξ) + ϕ(x; ξ)∇ξϕ(x̃; ξ))T ‖∞]\n≤ 2η α (L2 + α)L2C1, (116)\nwhere the last inequality is due to (A.1). Therefore,∫ t2 t1 |R[µs]|ds ≤ s0|t2 − t1|, (117)\nwhere s0 def =\n2η α (L2 + α)L2C1.\nConsider the middle term of equation 114. 
Using the definition of the martingale term in equation 89b, we obtain that\n|MNt1 −M N t2 | = ∣∣∣∣∣∣ bNt1c∑ `=0 MN` − bNt2c∑ `=0 MN` ∣∣∣∣∣∣ ≤\n∣∣∣∣∣∣ bNt2c∑ `=bNt1c MN` ∣∣∣∣∣∣ . (118) In Equation of Section D, we have proved the following concentration bound\nIP(|MNm | ≥ ε) ≤ 2 ( − N 2α2ε2\n8mL4η2C21 (L 2 + α)2\n) , ∀m ∈ [0, NT ] ∩ IN. (119)\nNow, recall the alternative definition of the sub-Gaussian random variables: Definition C.14. (SUB-GAUSSIAN RANDOM VARIABLES BOUCHERON ET AL. (2013)) A random variable X is σ2-sub-Gaussian if\nIE[(λ(X − IE[X]))] ≤ (λ2σ2\n2\n) . (120)\nxxxiii\nWe enumerate a few standard consequences of sub-Gaussianity Boucheron et al. (2013). If Xi are independent and σ2i -sub-Gaussian, then ∑n i=1Xi is ∑n i=1 σ 2 i -sub-Gaussian. Moreover, X is σ2-sub-Gaussian if and only if\nIP(|X − IE[X]| ≥ ε) ≤ ( − ε 2\n2σ2\n) . (121)\nNow, it is clear from equation 119 andthat MNm is sub-Gaussian random variable with a zero mean, and with the parameter σ2m = 4mL4η2C21 (L 2 + α)2\nN2α2 . Therefore, ∑bNt2c `=bNt1cM N ` is sub-Gaussian\nwith the parameter σ2(t1, t2) def =\n2L4η2C21 (L 2 + α)2\nN2α2 (bNt1c− bNt2c+ 1)(bNt1c+ bNt2c). Con-\nsequently, from Inequality equation 118 and the concentration inequality in equation 121, we have\nIP ( sup\nt1,t2≤T,|t1−t2|≤ρ |MNt1 −M N t2 | ≥ ε\n) ≤ IP sup t1,t2≤T,|t1−t2|≤ρ ∣∣∣∣∣∣ bNt2c∑ `=bNt1c MN` ∣∣∣∣∣∣ ≥ ε (122)\n= IP ∣∣∣∣∣∣ bNt∗2c∑ `=bNt∗1c MN` ∣∣∣∣∣∣ ≥ ε (123)\n≤ 2 ( − ε 2\nσ2(t∗1, t ∗ 2)\n) (124)\n≤ 2 ( − α 2ε2\n4L4η2C21 (L 2 + α)2(ρ+ 1)T\n) , (125)\nwhere (t∗1, t ∗ 2) def = arg supt1,t2≤T,|t1−t2|≤ρ ∣∣∣∑bNt2c`=bNt1cMN` ∣∣∣. We first compute a bound for the last term of equation 114 using the definition of the scaled term RNt from equation 89c. We have\n|RNt1 −R N t2 | = ∣∣∣∣∣∣ bNt1c∑ `=0 RN` − bNt2c∑ `=0 RN` ∣∣∣∣∣∣ =\n∣∣∣∣∣∣ bNt2c∑ `=bNt1c R` ∣∣∣∣∣∣ ≤ bNt2c∑ `=bNt1c |R`|\n(a) ≤ |bNt2c − bNt1c| C0 N2 (ηL2 + (L4/α))\n(b) ≤ s1|t2 − t1|, (126)\nwhere (a) follows from the upper bound in equation 178 of Section D, and in (b) we define s1 def = C0 N (ηL2 + (L4/α)).\nxxxiv\nPutting together equation 117, equation 122, and equation 126, we conclude from Inequality equation 114 that\nIP (\nsup t1,t2≤T,|t1−t2|≤ρ\n|H(µNt1)−H(µ N t1)| ≥ σ\n) ≤ IP ( sup\nt1,t2≤T,|t1−t2|≤ρ |MNt1 −M N t2 |+ (s0 + s1)ρ ≥ σ ) ≤ 2 ( − α 2(σ − (s0 + s1)ρ)2\n4L4η2C21 (L 2 + α)2(ρ+ 1)T\n) .\nTherefore, condition (T.2) is also satisfied. Since the sufficient conditions (T.1) and (T.2) are satisfied, the condition (J.2) is satisfied. This completes the tightness proof of the measured-valued sequence {µNt }N∈IN. Now, we prove its convergence to a mean-field solution (µ∗t )0≤t≤T . Theorem C.15. (PROKHOROV’S THEOREM PROKHOROV (1956)) A subset of probability measures on a complete separable metric space is tight if and only if it is pre-compact.\nAccording to Theorem C.15, the tightness of the Skorkhod Space DM(IRD)([0, T ]) implies its precompactness which in turn implies the existence of a converging sub-sequence {(µNt )0≤t≤T }Nk of {µNt }N∈IN . Notice that {(µNt )0≤t≤T }Nk is a stochastic process defined on the Skorkhod space. Therefore, let πNk denotes the law of the converging sub-sequence {(µNt )0≤t≤T }Nk . By definition, πNk is an element of the measure spaceM(D[0,T ](M(IRD))). In the sequel, we closely follow the argument of (Wang et al., 2017, Proposition 4) to show that the limiting measure π∞ is a Dirac’s delta function concentrated at a mean-field solution µ∗t ∈ D[0,T ](M(IR\nD)). 
We define the following functional\nFt : D[0,T ](M(IRD))→ IR, µt 7→ Ft[µt] = ∣∣∣∣〈µt, f〉 − 〈µ0, f〉 − ∫ t\n0\nR[µs]ds ∣∣∣∣ . (127) We compute the expectation of the functional Ft with respect to πNk . We then have\nIEπNk [Ft(µ)] = IE[Ft[µ N t ]]\n= IE [∣∣∣∣〈µNkt , f〉 − 〈µN0 , f〉 − ∫ t 0 R[µNks ]ds ∣∣∣∣ .] . (128) Now, from Equation equation 93, we have that\n〈µNkt , f〉 − 〈µ Nk 0 , f〉 − ∫ t 0 R[µNks ]ds =M Nk t +R Nk t . (129)\nPlugging equation 129 in equation 128 gives\nIEπNk [Ft(µ)] = IE[Ft[µ Nk t ]] = IE [∣∣∣MNkt +RNkt ∣∣∣]\n≤ IE [\nsup 0≤t≤T\n|MNkt | ] + IE [\nsup 0≤t≤T\n|RNkt | ]\n= 1 Nαρ 4 √\n2L2 √ bNT cηC1(L2 + α)2 + C0T\nN (ηαL2T + 2ηL4T ), (130)\nwhere the last equality is due to the bounds in equation 94 and equation 95 of Lemmata C.9 and C.10, respectively. Taking the limit of N →∞ from equation 130 yields\nlim Nk→∞ IEπNk [|Ft[µ]|] = 0. (131)\nxxxv\nIt can be shown that the functional Ft[·] is continuous and bounded. Therefore, due the weak convergence of the sequence {πNk}Nk∈IN to π∞, equation 131 implies that\nIEπ∞ [|Ft(µ)|] = 0. (132)\nSince the identity equation 132 holds for all bounded test functions f ∈ C3b (IR D) and for all t ∈ (0, T ], it follows that π∞ is a Dirac’s delta function concentrated at a solution (µ∗t )0≤t≤T of the mean-field equation.\nStep 3: Uniqueness of a mean-field solution: Before we establish the uniqueness result we make two remarks:\nFirst, we make it clear that from the compact-containment condition (J.1) of Jakubowski’s criterion in Theorem C.12, the support of the measured-valued process (µNt )0≤t≤T = (µ̂ N bNtc)0≤t≤T is compact for all 0 ≤ t ≤ T . Moreover, in Step 2 of the proof, we established that the measure valued process (µNt )0≤t≤T converges weakly to a mean-field solution as the number of particles tends to infinity (i.e., N → ∞). Thus, all the possible solutions of the mean-field equation also have compact supports. Let Ξ̂ ⊂ IRD denotes a compact set containing the supports of all such solutions at 0 ≤ t ≤ T . In the sequel, it suffices to establish the uniqueness of the mean-field solution for the test functions with a compact domain, i.e., let f ∈ C3b (Ξ̂).\nSecond, for all bounded continuous test functions f ∈ C3b (Ξ̂), the operator f → 〈µt, f〉 is a linear operator with µt(IRD) = 1. Hence, from Riesz-Markov-Kakutani representation theorem Rudin (1987); Varadarajan (1958) by assuming µt ∈ M(IRD) , existence of unique operator implies f 7→ 〈f, µt〉 implies the existence of the unique probability measure µt. Now, we equip the measure spaceM(IRD) with the following norm\n‖µ‖ def= sup f∈C3b (Ξ̂) ‖f‖∞ 6=0 |〈f, µ〉| ‖f‖∞ . (133)\nGiven an initial measure µ0, we next prove that there exists at most one mean-field model solution by showing that there exists at most one real valued process 〈µt, f〉 corresponding to the mean-field model. Suppose (µ∗,1t )0≤t≤T , (µ ∗,2 t )0≤t≤T are two solutions satisfying the mean-field equations equation 97 with the initial distributions µ10, µ 2 0 ∈ M(IR\nD), respectively. 
For any test function f ∈ C3b (Ξ̂) we have that\n〈µ∗,1t − µ∗.2t , f〉 = 〈µ10 − µ20, f〉+ η\nα ∫ t 0 (∫∫ X×Y ( 〈ϕ(x, ξ)ϕ(x̃, ξ), µ∗,1s − µ∗,2s 〉 − αyỹ\n) (134)\n× 〈∇f(ξ)(∇ξ(ϕ(x̃; ξ)ϕ(x; ξ)))T , µ∗,1s − µ∗,2s 〉P⊗2x,y((dz,dz̃)\n) ds.\nWe bound the first term on the right side of Equation equation 134 as follows\n〈µ10 − µ20, f〉 ≤ ‖µ10 − µ20‖ · ‖f‖∞ (135) ≤ b‖µ10 − µ20‖, (136)\nwhere used the definition of the norm ‖ · ‖ on the measure spaceM(IRD) from equation 133.\nxxxvi\nFurthermore, let∫∫ X×Y αyỹ〈∇f(ξ)(∇ξ(ϕ(x̃; ξ)ϕ(x; ξ)))T , µ∗,1s − µ∗,2s 〉P⊗2x,y(d(x, y),d(x̃, ỹ))\n≤ ∫∫ X×Y α|yỹ| · |〈∇f(ξ)(∇ξ(ϕ(x̃; ξ)ϕ(x; ξ)))T , µ∗,1s − µ∗,2s 〉|P⊗2x (d(x, y),d(x̃, ỹ))\n≤ α‖µ∗,1s − µ∗,2s ‖ ∫ X ‖∇f(ξ)(∇ξ(ϕ(x̃; ξ)ϕ(x; ξ)))T ‖P⊗2x (dx,dx̃)\n≤ α‖µ∗,1s − µ∗,2s ‖ ∫ X ‖∇f(ξ)‖∞ · ‖∇ξϕ(x̃; ξ)ϕ(x; ξ)‖∞P⊗2x (dx,dx̃) ≤ αL2C1‖µ∗,1s − µ∗,2s ‖, (137)\nwhere in the last inequality, we used the fact that ‖∇f(ξ)‖ ≤ C1 since the test function is threetimes continuously differentiable f ∈ C3b (Ξ̂) on a compact support. Similarly, we have∫\nX 〈ϕ(x, ξ)ϕ(x̃, ξ), µ∗,1s − µ∗,2s 〉〈∇f(ξ)(∇ξϕ(x, ξ)ϕ(x, ξ)), µ∗,1s − µ∗,2s 〉P⊗2x (dx,dx̃) ≤ ‖µ∗,1s − µ∗,2s ‖2 ∫ X ‖ϕ(x, ξ)ϕ(x̃, ξ)‖∞‖∇f(ξ)(∇ξϕ(x, ξ)ϕ(x, ξ))T ‖∞P⊗2x (dx,dx̃)\n≤ L4C1‖µ∗,1s − µ∗,2s ‖2. (138) Putting together the inequalities in equation 136,equation 137, and equation 138 yield\n〈µ∗,1t − µ ∗,2 t , f〉 ≤ b‖µ 1 0 − µ20‖+ L2C1η ∫ t 0 ‖µ∗,1s − µ∗,2s ‖ds+ ηL4C1 α ∫ t 0 ‖µ∗,1s − µ∗,2s ‖2ds. (139)\nThe above inequality holds for all bounded functions f ∈ C3b (Ξ̂). Thus, by taking the supremum with respect to f we obtain\n‖µ∗,1t − µ ∗,2 t ‖ = sup f∈C3b (Ξ̂) 〈µ∗,1t − µ ∗,2 t , f〉 (140)\n≤ b‖µ10 − µ20‖+ L2C1η ∫ t\n0\n‖µ∗,1s − µ∗,2s ‖ds+ L4C1η\nα ∫ t 0 ‖µ∗,1s − µ∗,2s ‖2ds.\n(141)\nNow, we employ the following result which generalizes Gronewall’s inequality when higher order terms are involved: Lemma C.16. (EXTENDED GRONEWALL’S INEQUALITY, (WEBB, 2018, THM 2.1.)) Let p ∈ IN and suppose that for a.e. t ∈ [0, T ], u ∈ L∞+ [0, T ] satisfies\nut ≤ c0(t) + ∫ t\n0\n(c1(s)us + c2(s)u 2 s + · · ·+ cp+1(s)up+1s )ds, (142)\nwhere c0 ∈ L∞[0, T ] is non-decreasing, and cj ∈ L1+[0, T ] for j ∈ {1, · · · , p+ 1}. Then, if∫ T 0 cj+1(s)u j sds ≤Mj , j ∈ {1, 2, · · · , p}. (143)\nIt follows that for a.e. t ∈ [0, T ] ut ≤ c0(t) (∫ t\n0\nc1(s)ds ) (M1 + · · ·+Mp). (144)\nxxxvii\nWe now apply the extended Gronewall’s Inequality equation 142 with p = 1, c0(t) = b‖µ10 − µ20‖, 0 ≤ t ≤ T , c1(t) = ηL2C1, 0 ≤ t ≤ T , c2(t) = ηL4C1 α , 0 ≤ t ≤ T , and us = ‖µ ∗,1 s −µ∗,2s ‖. In this case, it is easy to see that M1 = 2bηTL 4C1\nα . Consequently, from equation 140 and equation 144, we obtain that\n‖µ∗,1t − µ ∗,2 t ‖ ≤ b‖µ10 − µ20‖ · ( ηL2C1t+\n2bTηL4C1 α\n) , 0 ≤ t ≤ T. (145)\nThus, starting from an initial measure µ10 = µ 2 0 = µ0, there exists at most one solution for the mean-field model equations equation 97." }, { "heading": "C.3 PROOF OF COROLLARY 4.2.1", "text": "To establish the proof, we recall from equation 88 that\n〈f, µ̂Nm〉 − 〈f, µ̂N0 〉 = m−1∑ `=0 DN` + m−1∑ `=0 MN` + m−1∑ `=0 RN` , (146)\nfor all f ∈ Cb(IRD). Recall the definition of the total variation distance TV(·, ·) on a metric space (X , d).\nTV(µ, ν) = 1\n2 sup A⊂X\n|µ(A)− ν(A)|. (147)\nThe total variation distance admits the following variational form TV(µ, ν) = sup\nf :‖f‖∞≤ 12 〈f, µ〉 − 〈f, ν〉. 
(148)\nNow, using the variation form and by taking the supremum of equation 146 with respect to the functions from the function class Fc def = {f ∈ C1/2(IRD)}, we obtain the following upper bound on the TV distance\nTV(µ̂Nm, µ̂ N 0 ) ≤\n1\n2 m−1∑ `=0 |DN` |+ 1 2 m−1∑ `=0 |MN` |+ 1 2 m−1∑ `=0 |RN` |. (149)\nBased on the upper bound equation 178 on the remainder term, we have\n|RN` | ≤ C0 N2\n( ηL2 + 2 η α L4 ) , ` ∈ [0,m− 1], (150)\nfor some constant C0 > 0. Moreover, from the concentration inequality equation 189, we also have that with the probability of at least 1− δ, the following inequality holds\n|MN` | ≤ 8 √ `ηL2C1(L 2 + α)\nNα log\n( 2\nδ\n) . (151)\nLastly, recall the definition of the drift term in equation 86a. By carrying out a similar bounding method leading to equation 116, it can be shown that\n|DN` | ≤ 2η\nNα (L2 + α)L2C1. (152)\nBy plugging equation 150, equation 151, and equation 152 into equation 149, we derive that TV ( µ̂Nm, µ̂ N 0 ) ≤ mηC0\n2N2\n( L2 + 2 L4\nα\n) + 8m √ mηL2C1(L 2 + α)\nNα log\n( 2\nδ\n) + mη\nNα (L2 + α)L2C1,\n(153)\nwith the probability of 1− δ. We now leverage the following lemma:\nxxxviii\nLemma C.17. (BOUNDED EQUIVALENCE OF THE WASSERSTEIN AND TOTAL VARIATION DISTANCES, SINGH & PÓCZOS (2018)) Suppose (X , d) is a metric space, and suppose µ and ν are Borel probability measures on X with countable support; i.e., there exists a countable set X ′ ⊆ X such that µ(X ′) = ν(X ′) = 1. Then, for any p ≥ 1, we have\nSep(X ′)(2TV(µ, ν)) 1 p ≤Wp(µ, ν) ≤ Diam(X ′)(2TV(µ, ν)) 1 p , (154)\nwhere Diam(X ′) def= supx,y∈X ′ d(x, y), and Sep(X ′) def = infx 6=y∈X ′ d(x, y).\nConsider the metric space (IRD, ‖ · ‖2). Note that the empirical measures µ̂Nm, µ̂N0 have a countable support X ′ = {ξkm}Nk=1 ∪ {ξk0}Nk=1 ⊂ IR\nD. Therefore, using the upper bounds in equation 154 of Lemma C.17 and 153, we conclude that when the step-size is of the order\nη = O (\nRp\nT √ NT log(2/δ)\n) , (155)\nthen Wp(µ̂Nm, µ̂ N 0 ) ≤ R for all m ∈ [0, NT ] ∩ IN." }, { "heading": "D PROOFS OF AUXILIARY RESULTS", "text": "" }, { "heading": "D.1 PROOF OF LEMMA C.5", "text": "The upper bound follows trivially by letting x = y in the optimization problem equation 44.\nNow, consider the lower bound. Define the function g : [0, 1] → IR, t 7→ g(t) = f(y + t(x − y)). Then, when f is differentiable, we have g′(t) = 〈x − y,∇f(y + t(x − y))〉. In addition, g(0) = f(y), and g(1) = f(x). Based on the basic identity g(1) = g(0) + ∫ 1 0 g′(s)ds, we derive\nf(x) = f(y) + ∫ 1 0 〈x− y,∇f(y + s(x− y))〉ds\n≥ f(y)− ‖x− y‖2 ∫ 1\n0\n‖∇f(y + s(x− y))‖2ds, (156)\nwhere the last step is due to the Cauchy-Schwarz inequality. Using Inequality equation 156 yields the following lower bound on Moreau’s envelope\nMβf (y) ≥ f(y) + infx∈X\n{ 1\n2β ‖x− y‖22 − ‖x− y‖2 ∫ 1 0 ‖∇f(y + s(x− y))‖2ds }\n= f(y) + inf x∈X {( 1√ 2β ‖x− y‖2 − √ β 2 ∫ 1 0 ‖∇f(y + s(x− y))‖2ds )2\n− β 2 (∫ 1 0 ‖∇f(y + s(x− y))‖2ds )2}\n≥ f(y)− β 2 sup x∈X (∫ 1 0 ‖∇f(y + s(x− y))‖2ds )2 (a) ≥ f(y)− β 2 sup x∈X ∫ 1 0 ‖∇f(y + s(x− y))‖22ds\n≥ f(y)− β 2 ∫ 1 0 sup x∈X ‖∇f(y + s(x− y))‖22ds, (157)\nwhere (a) is due to Jensen’s inequality.\nxxxix" }, { "heading": "D.2 PROOF OF LEMMA C.8", "text": "Let u ∈ IRd denotes an arbitrary unit vector ‖u‖2 = 1. From the definition of the gradient of a function, we have that 〈\n∇θMβf(·;θ)(x),u 〉 = Du[Mf(·;θ)(x)], (158)\nwhere Du[Mf(·;θ)(x)] is the directional derivative\nDu[Mf(·;θ)(x)] def = lim\nδ→0 Mf(·;θ+δu)(x)−Mf(·;θ)(x) δ . 
(159)\nWe now have\nMf(·;θ+δu)(x) = inf x∈X\n{ 1\n2β ‖x− y‖22 + f(y;θ + δu) } = inf x∈X { 1 2β ‖x− y‖22 + f(y;θ) + δ〈∇θf(y;θ),u〉 } +O(δ2)\n(a) ≤ 1 2β ‖x− Proxf(·;θ)(x)‖22 + f(Proxf(·;θ)(x);θ)\n+ δ〈∇θf(Proxf(·;θ)(x);θ),u〉+O(δ2), (160) where the inequality in (a) follows by letting y = Proxf(·;θ)(x) in the optimization problem. Now, recall that\nMf(·;θ)(x) = inf y∈X\n{ 1\n2β ‖x− y‖22 + f(y;θ)\n} ,\nProxf(·;θ)(x) = arg min y∈X\n{ 1\n2β ‖x− y‖22 + f(y;θ)\n} .\nTherefore,\nMf(·;θ)(x) = 1\n2β ‖x− Proxf(·;θ)(x)‖22 + f(Proxf(·;θ)(x);θ). (161)\nSubstitution of equation 161 in equation 160 yields Mf(·;θ+δu)(x) ≤Mf(·;θ)(x) + δ〈∇θf(Proxf(·;θ)(x);θ),u〉+O(δ2). (162)\nHence, Du[Mf(·;θ)(x)] ≤ 〈∇θf(Proxf(·;θ)(x);θ),u〉. From equation 158 and by using CauchySchwarz inequality, we compute the following bound on the inner product of the gradient with the unit vectors u ∈ IRd, ‖u‖2 = 1,〈 ∇θMβf(·;θ)(x),u 〉 ≤ ‖∇θf(Proxf(·;θ)(x);θ)‖2 · ‖u‖2 = ‖∇θf(Proxf(·;θ)(x);θ)‖2. (163)\nSince the preceding upper bound holds for all the unit vectors u ∈ IRd, we let u = ∇θMβf(·;θ)(x) ‖∇θMβf(·;θ)(x)‖2 to get Inequality equation 65." }, { "heading": "D.3 PROOF OF LEMMA C.6", "text": "Let z ∈ Sd−1 denotes an arbitrary vector on the sphere. Define the random variable Qz ( (y1,x1), · · · , (yn,xn) ) def = 〈z,∇En(ξ)〉\n= 1 n(n− 1) ∑ i 6=j yiyj ( ϕ(xi; ξ)〈z,∇ξϕ(xj ; ξ)〉+ ϕ(xj ; ξ)〈z,∇ξϕ(xi; ξ)〉 ) − IE\nP⊗2x,y\n[ yŷ ( ϕ(xi; ξ)〈z,∇ξϕ(xj ; ξ)〉+ ϕ(xj ; ξ)〈z,∇ξϕ(xi; ξ)〉 )] .\n(164)\nxl\nClearly, IEPx,y [Qz] = 0. Now, let (ŷm, x̂m) ∈ Y × X , 1 ≤ m ≤ n. By repeated application of the triangle inequality, we obtain that∣∣∣Qz((y1,x1), · · · , (ym,xm), · · · , (yn,xn))−Qz((y1,x1), · · · , (ŷm, x̂m), · · · , (yn,xn))∣∣∣\n≤ 1 n(n− 1) ∣∣∣∑ i 6=m yiϕ(xi; ξ)〈z, ym∇ξϕ(xm; ξ)− ŷm∇ξϕ(x̂m; ξ)〉 ∣∣∣\n+ 1\nn(n− 1) ∣∣∣∑ i 6=m yi〈z,∇ξϕ(xi; ξ)〉(ymϕ(xm; ξ)− ŷmϕ(x̂m; ξ)) ∣∣∣\n≤ 1 n(n− 1) ∑ i 6=m |ϕ(xi; ξ)| · ‖z‖2 · ‖ym∇ξϕ(xm; ξ)− ŷm∇ξϕ(x̂m; ξ)‖2\n+ 1 n(n− 1) ∑ i 6=m ‖z‖2 · ‖∇ξϕ(xi; ξ)‖2 · |ymϕ(xm; ξ)− ŷmϕ(x̂m; ξ)|\n≤ 4L 2\nn , (165)\nwhere the last inequality is due to assumption (A.2) and the fact that ‖z‖2 = 1 for z ∈ Sd−1. In particular, to derive Inequality equation 165, we employed the following upper bounds\n|ϕ(xi; ξ)| ≤ L, |ymϕ(xm; ξ)− ŷmϕ(x̂m; ξ)| ≤ |ϕ(xm; ξ)|+ |ϕ(x̂m; ξ)| ≤ 2L,\n‖∇ξϕ(xi; ξ)‖2 ≤ L, ‖ym∇ξϕ(xm; ξ)− ŷm∇ξϕ(x̂m; ξ)‖2 ≤ ‖∇ξϕ(xm; ξ)‖2 + ‖∇ξϕ(x̂m; ξ)‖2 ≤ 2L.\nUsing McDiarmid Martingale’s inequality McDiarmid (1989) then gives us IP (∣∣∣Qz((y1,x1), · · · , (yn,xn))∣∣∣ ≥ u) ≤ 2(− nu2\n16L4\n) , (166)\nfor x ≥ 0. Now, for every p ∈ IN, the 2p-th moment of the random variable Qz is given by IE [ Q2pz ((y1,x1), · · · , (yn,xn)) ] = ∫ IR+ 2pu2p−1IP(Qz((y1,x1), · · · , (yn,xn)) ≥ u)du\n(a) ≤ ∫\nIR+\n4pu2p−1 ( − u 2\n16nL4\n) du\n= 2(16L4/n)2pp!, (167)\nwhere (a) is due to the concentration bound in equation 166. Now Therefore,\nIE [( Q2z((y1,x1), · · · , (yn,xn))/γ2 )] = ∞∑ p=0 1 p!γ2p IE [ φ2pz ((y1,x1), · · · , (yn,xn)) ] = 1 + 2\n∑ p∈IN ( 16L4 nγ )2p = 2\n1− (16L4/nγ)2 − 1.\nFor γ = 16 √ 3L4/n, we obtain IE [( Q2z((y1,x1), · · · , (yn,xn))/γ2 )] ≤ 2. Therefore, ‖Qz‖ψ2 = ‖〈z,∇En(ξ)〉‖ψ2 ≤ 16 √ 3L4/n for all z ∈ Sn−1 and ξ ∈ IRD. Consequently, by the definition\nxli\nof the sub-Gaussian random vector in equation 35 of Definition C.2, we have ‖∇En(ξ)‖ψ2 ≤ 16 √\n3L4/n for every ξ ∈ IRD. We invoke the following lemma proved by the first author in (Khuzani & Li, 2017, Lemma 16):\nLemma D.1. (THE ORLICZ NORM OF THE SQUARED VECTOR NORMS, (KHUZANI & LI, 2017, LEMMA 16)) Consider the zero-mean random vector Z satisfying ‖Z‖ψν ≤ β for every ν ≥ 0. 
Then, ‖‖Z‖22‖ψ ν\n2 ≤ 2 · 3 2ν · β2.\nUsing Lemma D.1, we now have that ‖‖∇En(ξ)‖22‖ψ1 ≤ 4608L4/n2 for every ξ ∈ IR D. Applying the exponential Chebyshev’s inequality with β = 4608L4/n2 yields\nIP (∫ IRD ∫ 1 0 ∣∣∣‖∇En((1− s)ξ + sζ∗)‖22 − IEx,y[‖∇En((1− s)ξ + sζ∗)‖22]∣∣∣µ0(dξ) ≥ δ )\n≤ e− n2δ\n4608L4 IEx,y\n[ e ( n2 4608L4 ∫ IRD ∫ 1 0 ∣∣‖∇En((1−s)ξ+sζ∗)‖22−IEx,y [‖∇En((1−s)ξ+sζ∗)‖22]∣∣dsµ0(dξ))] (a) ≤ e− n2δ 4608L4\n∫ IRD ∫ 1 0 IEx,y [ e n2 4608L4 (|‖∇En((1−s)ξ+sζ∗)‖22−IEx,y [‖∇En((1−s)ξ+sζ∗)‖ 2 2])| ] dsµ0(dξ)\n(b) ≤ 2e− n2δ 4608L4 ,\nwhere (a) follows by Jensen’s inequality, and (b) follows from the fact that\nIEx,y\n[ e n2 4608L4 (|‖∇En((1−s)ξ+sζ∗)‖22−IEx,y [‖∇En((1−s)ξ+sζ∗)‖ 2 2])| ] ≤ 2, (168)\ndue to Definition C.1. Therefore,\nIP (∫ IRD ∫ 1 0 ‖∇En((1− s)ξ + sζ∗)‖22dsµ(dξ) ≥ δ )\n≤ 2 ( − n2(δ − ∫ 1 0 ∫ IRD\nIEx,y[‖∇En((1− s)ξ + sζ∗)‖22]dsµ0(dξ)) 4608L4\n) . (169)\nIt now remains to compute an upper bound on the expectation IEx,y[‖∇En((1− s)ξ+ sζ∗)‖22]. But this readily follows from equation 167 by letting p = 1 and z = ∇En((1−s)ξ+sζ∗)‖∇En((1−s)ξ+sζ∗)‖2 as follows\nIEx,y[‖∇En((1− s)ξ + sζ∗)‖22] = IEx,y [〈 ∇En((1− s)ξ + sζ∗) ‖∇En((1− s)ξ + sζ∗)‖2 ,∇En((1− s)ξ + sζ∗) 〉2]\n= IEx,y [ Q2 ∇En((1−s)ξ+sζ∗) ‖∇En((1−s)ξ+sζ∗)‖2 ] ≤ 29L 8\nn2 . (170)\nPlugging the expectation upper bound of equation 170 into equation 169 completes the proof of the first part of Lemma C.6.\nThe second part of Lemma C.6 follows by a similar approach and we thus omit its proof.\nxlii" }, { "heading": "D.4 PROOF OF LEMMA C.7", "text": "Let W∗ def = arg minW∈W Ψ(W ) and W def = arg minW∈W Φ(W ). Then, since |Ψ(W ) − Φ(W )| ≤ δ for allW ∈ W , we have that\n|Ψ(W∗)− Φ(W∗)| = ∣∣∣∣ minW∈WΨ(W )− Φ(W∗) ∣∣∣∣ ≤ δ. (171) Therefore,\nmin W∈W Φ(W ) ≤ Φ(W∗) ≤ min W∈W Ψ(W ) + δ. (172)\nSimilarly, it can be shown that\nmin W∈W Ψ(W ) ≤ Ψ(W ) ≤ min W∈W Φ(W ) + δ. (173)\nCombining equation 172 and equation 173 yields the desired inequality." }, { "heading": "D.5 PROOF OF LEMMA C.10", "text": "We recall the expression of the remainder term {RNm}0≤m≤NT from equation 84. For each 0 ≤ m ≤ NT , N ∈ IN, we can bound the absolute value of the remainder term as follows\n∣∣RNm∣∣ = 1N ∣∣∣∣∣ N∑ k=1 (ξkm+1 − ξkm)∇2f(ξ̃k)(ξkm+1 − ξkm)T ∣∣∣∣∣\n≤ 1 N N∑ k=1 ∣∣∣(ξkm+1 − ξkm)∇2f(ξ̃k)(ξkm+1 − ξkm)T ∣∣∣ ≤ 1 N N∑ k=1 ‖ξkm+1 − ξkm‖22 · ∥∥∇2f(ξ̃k)∥∥ F . (174)\nNext, we characterize a bound on the difference term ‖ξkm+1− ξkm‖2. To attain this goal, we use the iterations of the particle SGD in Equation equation 15. We have that\n‖ξkm+1 − ξkm‖2\n≤ η N ∥∥∥∥∥ ( ymỹm − 1 Nα N∑ k=1 ϕ(xm; ξ k m)ϕ(x̃m; ξ k m) ) ∇ξ ( ϕ(xm; ξ k m)ϕ(x̃m; ξ k m) )∥∥∥∥∥\n2\n≤ η N |ymỹm|\n( |ϕ(xm; ξkm)| · ‖∇ξϕ(x̃m; ξkm)‖2 + |ϕ(x̃m; ξkm)| · ‖∇ξϕ(xm; ξkm)‖2 ) + η\nN\n( 1\nNα N∑ k=1\n∣∣∣ϕ(xm; ξkm)ϕ(x̃m; ξkm)∣∣∣ )( |ϕ(xm; ξkm)|‖∇ξϕ(x̃m; ξkm)‖2 + |ϕ(x̃m; ξkm)|‖∇ξϕ(xm; ξkm)‖2 ) (a)\n≤ ηL 2\nN +\n2ηL4\nNα , (175)\nwhere in (a), we used the fact that ‖ϕ‖∞ < L and ‖∇ξϕ(x, ξ)‖2 < L due to (A.1), and ym, ỹm ∈ {−1, 1}. Plugging the last inequality in equation 175 yields\n|RNm| ≤ 1\nN3\n( ηL2 + 2ηL4\nα ) N∑ k=1 ‖∇2f(ξ̃k)‖F . (176)\nxliii\nWe next compute an upper bound on the Frobenious norm of the Hessian matrix ∇2f(ξ̃k). To this end, we first show that there exists a compact set C ⊂ IRD such that ξkm ∈ C for all k = 1, 2, · · · , N and all m ∈ [0, NT ] ∩ IN. For each k = 1, 2, · · · , N , from Inequality equation 175 we obtain that\n‖ξkm‖2 ≤ ‖ξkm−1‖2 + ηL2\nN +\n2ηL4\nNα\n= ‖ξk0‖2 + mηL2\nN +\n2mηL4\nNα\n≤ ‖ξk0‖2 + ηL2T + 2(η/α)L4T. 
(177)\nNow, ‖ξk0‖2 < c0 for some constant c0 > 0 since the initial samples ξ10, · · · , ξN0 are drawn from the measure µ0 whose support support(µ0) = Ξ is assumed to be compact by (A.3). From upper bound in equation 177, it thus follows that ‖ξkm‖2 < C for some constant C > 0, for all m ∈ [0, NT ]∩ IN. Now, recall that ξ̃k = (ξ̃k(1), · · · , ξ̃k(p)), where ξ̃k(i) ∈ [ξkm(i), ξkm+1(i)], i = 1, 2, · · · ,m + 1, for i = 1, 2, · · · , p. Therefore, ξ̃k ∈ C. Since all the test function f ∈ C3b (IR 3) are three-times continuously differentiable, it follows that there exists a constant C0 def = C0(T ) > 0 such that supξ̃∈C ‖∇ 2f(ξ̃)‖F < C0. From Inequality equation 176, it follows that\n|RNm| ≤ C0 N2\n( ηL2 + 2ηL4\nα\n) , m ∈ [0, NT ] ∩ IN. (178)\nNow, recall the definition of the scaled term RNt from equation 89c. Using the Inequality equation 178 as well as the definition ofRNt , we obtain\nsup 0≤t≤T\n|RNt | ≤ C0T\nN\n( ηL2 + 2ηL4\nα\n) . (179)" }, { "heading": "D.6 PROOF OF LEMMA C.10", "text": "Let Fm−1 = σ((xk, yk)0≤k≤m−1, (x̃k, ỹk)0≤k≤m−1) denotes the σ-algebra generated by the samples up to time m− 1. We define F−1 def = ∅. Further, define the following random variable\n∆Nm def = ( 〈ϕ(xm, ξ)ϕ(x̃m, ξ), µ̂Nm〉 − αymỹm ) × 〈∇f(ξ)∇ξ(ϕ(x̃m; ξ)ϕ(xm; ξ)), µ̂Nm〉. (180)\nNotice that 1N IE[∆ N m|Fm−1] = DNm . We now rewrite the martingale term in equation 86b in term of ∆Nm,\nMNm def =\nη\nNα m∑ `=0 (∆N` − IE[∆N` |F`−1]), (181)\nwith MN0 = 0.\nBy construction of MNm in equation 181, it is a Martingale IE[M N m |Fm−1] = MNm−1. We now prove that MNm has also bounded difference. To do so, we define the shorthand notations\naNm def = 〈ϕ(xm, ξ)ϕ(x̃m, ξ), µ̂Nm〉 − αymỹm, (182)\nbNm def = 〈∇f(ξ)(∇ξ(ϕ(x̃m; ξ)ϕ(xm; ξ)))T , µ̂Nm〉. (183)\nxliv\nThen, we compute\n|MNm −MNm−1| = η\nNα ∣∣∆Nm − IE[∆Nm|Fm−1]∣∣ ≤ η Nα |∆Nm|+ η Nα IE[|∆Nm||Fm−1] ≤ η Nα |aNm| · |bNm|+ η Nα IE [ |aNm| · |bNm||Fm−1 ] . (184)\nFor the difference terms, we derive that |aNm| = ∣∣∣〈ϕ(xm, ξ)ϕ(x̃m, ξ), µ̂Nm〉 − αymỹm∣∣∣\n≤ 1 N N∑ k=1 ∣∣ϕ(xm, ξkm)ϕ(x̃m, ξkm)∣∣+ α|ymỹm| ≤ L2 + α, (185)\nwhere the last step follows from the fact that ‖ϕ‖∞ ≤ L due to (A.1). Similarly, we obtain that |bNm| = |〈∇f(ξ)(∇ξ(ϕ(x̃m; ξ)ϕ(xm; ξ)))T , µ̂Nm〉|\n≤ 1 N N∑ k=1 |ϕ(x̃m; ξkm)| · ∣∣∇f(ξkm)(∇ξϕ(x̃m; ξkm))T ∣∣\n+ 1\nN N∑ k=1 |ϕ(xm; ξkm)| · |∇f(ξkm)(∇ξϕ(x̃m; ξkm))T |\n(a) ≤ L N N∑ k=1 ∣∣∇f(ξkm)(∇ξϕ(x̃m; ξkm))T ∣∣+ LN N∑ k=1 |∇f(ξkm)(∇ξϕ(x̃m; ξkm))T |\n(b) ≤ L N N∑ k=1 ‖∇f(ξkm)‖2 · ‖∇ξϕ(x̃m; ξkm)‖2 + L N N∑ k=1 ‖∇f(ξkm)‖2 · ‖∇ξϕ(x̃m; ξkm)‖2\n(c) ≤ 2L 2\nN N∑ k=1 ‖∇f(ξkm)‖2, (186)\nwhere (a) and (c) follows from (A.1), and (b) follows from the Cauchy-Schwarz inequality. From Inequality equation 177 and the ensuing disucssion in Appendix D.5, we recall that ‖ξkm‖2 < C for some constant and for all m ∈ [0, NT ] ∩ IN, and k = 1, 2, · · · , N . For the two times continuously test function f ∈ C3b (IR\nD), it then follows that |∇f(ξkm)‖2 ≤ C1 for some constant C1 > 0. The following bound can now be computed from equation 186,\n|bNm| ≤ 2C1L\n2\nN . (187)\nPlugging the upper bounds on |aNm| and |bNm| from equation 185-equation 187 into equation 184 we obtain that\n|MNm −MNm−1| ≤ 4ηC1 Nα L2(L2 + α). (188)\nThus, (MNm )m∈[0,NT ]∩IN is a Martingale process with bounded difference. From the AzumaHoeffding inequality it follows that\nIP(|MNm | ≥ ε) = IP(|MNm −MN0 | ≥ ε) ≤ 2 ( − N 2α2ε2\n8mL4η2C21 (L 2 + α)2\n) , ∀m ∈ [0, NT ] ∩ IN.\n(189)\nxlv\nTherefore, sinceMNt = MNbNtc, we have IP(|MNT | ≥ ε) ≤ 2 ( − N 2α2ε2\n8L4bNT cη2C21 (L2 + α)2\n) . 
(190)\nThen,\nIE[|M^N_T|] = ∫_0^∞ IP(|M^N_T| ≥ ε) dε\n≤ 2 ∫_0^∞ exp( −N²α²ε² / (8L⁴⌊NT⌋η²C₁²(L²+α)²) ) dε\n= (4√2 L²/(Nα)) √(⌊NT⌋ηC₁(L²+α)²). (191)\nwhere the inequality follows from equation 190.\nBy Doob’s martingale inequality Doob (1953), the following inequality holds\nIP( sup_{0≤t≤T} |M^N_t| ≥ ε ) ≤ IE[|M^N_T|]/ε (192)\n≤ (4√2 L²/(Nαε)) √(⌊NT⌋ηC₁(L²+α)²). (193)\nIn particular, with the probability of at least 1−ρ, we have\nsup_{0≤t≤T} |M^N_t| ≤ (4√2 L²/(Nαρ)) √(⌊NT⌋ηC₁(L²+α)²). (194)" }, { "heading": "E CHAOTICITY AND PROPAGATION OF CHAOS IN PARTICLE SGD", "text": "In this appendix, we establish the so-called ‘propagation of chaos’ property of particle SGD. At a high level, propagation of chaos means that when the number of samples {ξ^k}_{k=1}^N tends to infinity (N → +∞), their dynamics are decoupled. Definition E.1. (EXCHANGEABILITY) Let ν be a probability measure on a Polish space S. For N ∈ IN, we say that ν^{⊗N} is an exchangeable probability measure on the product space S^N if it is invariant under a permutation π def= (π(1), · · · , π(N)) of the indices. In particular,\nν^{⊗N}(π · B) = ν^{⊗N}(B), (195)\nfor all Borel subsets B ∈ B(S^N).\nAn interpretation of the exchangeability condition in equation 195 can be provided via De Finetti’s representation theorem, which states that the joint distribution of an infinitely exchangeable sequence of random variables is as if a random parameter were drawn from some distribution and then the random variables in question were independent and identically distributed, conditioned on that parameter.\nNext, we review the mathematical definitions of chaoticity and of the propagation of chaos in product measure spaces: Definition E.2. (CHAOTICITY) Suppose ν^{⊗N} is exchangeable. Then, the sequence {ν^{⊗N}}_{N∈IN} is ν-chaotic if, for any natural number ℓ ∈ IN and any test functions f_1, f_2, · · · , f_ℓ ∈ C²_b(S), we have\nlim_{N→∞} 〈 ∏_{k=1}^ℓ f_k(s^k), ν^{⊗N}(ds¹, · · · , ds^N) 〉 = ∏_{k=1}^ℓ 〈f_k, ν〉. (196)\nAccording to equation 196 of Definition E.2, a sequence of probability measures on the product spaces S^N is ν-chaotic if, for fixed k, the joint probability measures for the first k coordinates tend to the product measure ν(ds¹)ν(ds²) · · · ν(ds^k) = ν^{⊗k} on S^k. If the measures ν^{⊗N} are thought of as giving the joint distribution of N particles residing in the space S, then {ν^{⊗N}} is ν-chaotic if k particles out of N become more and more independent as N tends to infinity, and each particle’s distribution tends to ν. A sequence of symmetric probability measures on S^N is chaotic if it is ν-chaotic for some probability measure ν on S.\nIf a Markov process on S^N begins in a random state with the distribution ν^{⊗N}, the distribution of the state after t seconds of Markovian random motion can be expressed in terms of the transition function K_N for the Markov process. The distribution at time t > 0 is the probability measure U^N_t ν^{⊗N} defined by the kernel\nU^N_t ν^{⊗N}(B) def= ∫_{S^N} K_N(s, B, t) ν^{⊗N}(ds). (197)\nDefinition E.3. (PROPAGATION OF CHAOS) A sequence of functions\n{ K_N(s, B, t) }_{N∈IN} (198)\nwhose N-th term is a Markov transition function on S^N that satisfies the permutation condition\nK_N(s, B, t) = K_N(π · s, π · B, t), (199)\npropagates chaos if whenever {ν^{⊗N}}_{N∈IN} is chaotic, so is {U^N_t ν^{⊗N}} for any t ≥ 0, where U^N_t is defined in equation 197.\nWe note that for finite system size N, the states of the particles are not independent of each other.
However, as we prove in the following result, in the limiting system N → +∞ the particles are mutually independent. This phenomenon is known as the propagation of chaos (a.k.a. asymptotic independence): Theorem E.4. (CHAOTICITY IN PARTICLE SGD) Consider Assumptions (A.1)−(A.3). Furthermore, suppose that {ξ^k_0}_{1≤k≤N} ∼ i.i.d. µ_0 is exchangeable in the sense that the joint law is invariant under the permutation of indices. Then, at each time instant t ∈ (0, T], the scaled empirical measure µ^N_t ∈ M(IR^D) defined via the scaling\nµ^N_t(dξ¹, · · · , dξ^N) def= µ̂^N_{⌊Nt⌋}(dξ¹, · · · , dξ^N) = IP{ξ¹_{⌊Nt⌋} ∈ dξ¹, · · · , ξ^N_{⌊Nt⌋} ∈ dξ^N}, (200)\nis µ*_t-chaotic, where µ*_t is the mean-field solution of equation 97.\nProof. To establish the proof, it suffices to show that for every integer ℓ ∈ IN, and for all test functions f_1, · · · , f_ℓ ∈ C³_b(IR^D), we have\nlim sup_{N→∞} | IE[ ∏_{k=1}^ℓ f_k(ξ^k_{⌊Nt⌋}) ] − ∏_{k=1}^ℓ 〈µ*_t, f_k〉 | = 0. (201)\nUsing the triangle inequality, we now have that\n| IE[ ∏_{k=1}^ℓ f_k(ξ^k_{⌊Nt⌋}) ] − ∏_{k=1}^ℓ 〈µ*_t, f_k〉 |\n≤ | IE[ ∏_{k=1}^ℓ 〈µ̂^N_t, f_k〉 ] − ∏_{k=1}^ℓ 〈µ*_t, f_k〉 | + | IE[ ∏_{k=1}^ℓ 〈µ̂^N_t, f_k〉 ] − IE[ ∏_{k=1}^ℓ f_k(ξ^k_{⌊Nt⌋}) ] |. (202)\nFor the first term on the right side of equation 202 we have\nlim sup_{N→∞} | IE[ ∏_{k=1}^ℓ 〈µ̂^N_t, f_k〉 ] − ∏_{k=1}^ℓ 〈µ*_t, f_k〉 |\n(a) ≤ lim sup_{N→∞} IE[ | ∏_{k=1}^ℓ 〈µ̂^N_t, f_k〉 − ∏_{k=1}^ℓ 〈µ*_t, f_k〉 | ]\n(b) ≤ IE[ lim sup_{N→∞} | ∏_{k=1}^ℓ 〈µ̂^N_t, f_k〉 − ∏_{k=1}^ℓ 〈µ*_t, f_k〉 | ]\n(c) ≤ b^{ℓ−1} IE[ ∑_{k=1}^ℓ lim sup_{N→∞} | 〈µ̂^N_t, f_k〉 − 〈µ*_t, f_k〉 | ]\n(d) = 0, (203)\nwhere (a) is by Jensen’s inequality, (b) is by Fatou’s lemma, (c) follows from the basic inequality | ∏_{i=1}^N a_i − ∏_{i=1}^N b_i | ≤ ∑_{i=1}^N |a_i − b_i| for |a_i|, |b_i| ≤ 1, i = 1, 2, · · · , N, as well as the fact that 〈µ*_t, f_k〉 ≤ b and 〈µ̂^N_t, f_k〉 ≤ b for all k = 1, 2, · · · , N due to the boundedness of the test functions f_1, · · · , f_ℓ ∈ C³_b(IR^D), and (d) follows from the weak convergence µ̂^N_t → µ*_t to the mean-field solution of equation 97.\nNow, consider the second term on the right hand side of equation 202. Due to the exchangeability of the initial states (ξ^k_0)_{1≤k≤N}, the law of the random variables (ξ^k_{⌊Nt⌋})_{1≤k≤N} is also exchangeable. Therefore, we obtain that\nIE[ ∏_{k=1}^ℓ f_k(ξ^k_{⌊Nt⌋}) ] = (ℓ!/N!) IE[ ∑_{π∈Π(ℓ,N)} ∏_{k=1}^ℓ f_k(ξ^{π(k)}_{⌊Nt⌋}) ], (204)\nwhere Π(ℓ,N) is the set of all permutations of ℓ numbers selected from {1, 2, · · · , N}. Notice that the right hand side of equation 204 is the symmetrized version of its left hand side.\nFurther, by the definition of the empirical measure µ̂^N_t we obtain that\nIE[ ∏_{k=1}^ℓ 〈µ̂^N_t, f_k〉 ] = (1/N^ℓ) IE[ ∏_{k=1}^ℓ ( ∑_{m=1}^N f_k(ξ^m_{⌊Nt⌋}) ) ] (205)\n= (1/N^ℓ) IE[ ∑_{π∈Π̃(ℓ,N)} ( ∏_{k=1}^ℓ f_k(ξ^{π(k)}_{⌊Nt⌋}) ) ]. (206)\nTherefore, subtracting equation 204 from equation 205 yields\n| IE[ ∏_{k=1}^ℓ 〈µ̂^N_t, f_k〉 ] − IE[ ∏_{k=1}^ℓ f_k(ξ^k_{⌊Nt⌋}) ] | ≤ b^ℓ ( 1 − N!/(ℓ!N^ℓ) ). (207)\nHence,\nlim sup_{N→∞} | IE[ ∏_{k=1}^ℓ 〈µ̂^N_t, f_k〉 ] − IE[ ∏_{k=1}^ℓ f_k(ξ^k_{⌊Nt⌋}) ] | = 0. (208)\nCombining equations 203-208 yields the desired result." } ]
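As a sanity check on Theorem E.4, the following is a minimal numerical sketch, assuming a toy feature map ϕ(x; ξ) = tanh(⟨ξ, x⟩), Gaussian data, and arbitrary constants (none of which come from the paper's experiments): with the O(η/N) interaction of the particle SGD in equation 15 and ⌊NT⌋ update steps, the correlation between two fixed particles, estimated over independent replicas, should shrink as N grows.

```python
import numpy as np

def run_particle_sgd(N, T=20.0, eta=0.5, alpha=1.0, d=3, seed=0):
    # Toy analogue of the particle SGD in equation 15, with the assumed
    # bounded feature phi(x; xi) = tanh(<xi, x>) (bounded gradient, cf. (A.1)).
    rng = np.random.default_rng(seed)
    xi = rng.normal(size=(N, d))                  # xi^k_0 ~ mu_0, i.i.d.
    for _ in range(int(N * T)):                   # m = 0, ..., floor(NT)
        x, xt = rng.normal(size=d), rng.normal(size=d)
        y, yt = rng.choice([-1.0, 1.0], size=2)
        f, ft = np.tanh(xi @ x), np.tanh(xi @ xt)   # phi(x; xi^k), phi(x~; xi^k)
        resid = f @ ft / (N * alpha) - y * yt       # <phi phi, mu^N>/alpha - y y~
        # gradient of phi(x; xi) phi(x~; xi) w.r.t. xi, per particle
        grad = ((1 - f**2) * ft)[:, None] * x + ((1 - ft**2) * f)[:, None] * xt
        xi -= (eta / N) * resid * grad              # O(eta/N) coupled update
    return xi

# Propagation of chaos, empirically: the correlation between two fixed
# particles (estimated across independent replicas) shrinks as N grows.
for N in (5, 20, 80):
    u = np.array([run_particle_sgd(N, seed=rep)[:2, 0] for rep in range(200)])
    print(N, np.corrcoef(u[:, 0], u[:, 1])[0, 1])
```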
2019
null
SP:32d80c08e2e1a76e06e701d537264421493db122
[ "This paper proposes a functional form to model the dependence of generalization error on a held-out test set on model and dataset size. The functional form is derived based on empirical observations of the generalizing error for various model and dataset sizes (sections O1, O2, and O3) and on certain necessary criteria (C1, C4 and C5). The parameters of the function are then fit using linear regression on observed data. The authors show that the regressed function \\(\\epsilon(m,n)\\) is able to predict the generalization error for various \\(m\\) and \\(n\\) reasonably accurately.", "This work proposes a functional form for the relationship between <dataset size, model size> and generalization error, and performs an empirical study to validate it. First, it states 5 criteria that such a functional form must take, and proposes one such functional form containing 6 free coefficients that satisfy all these criteria. It then performs a rigorous empirical study consisting of 6 image datasets and 3 text datasets, each with 2 distinct architectures defined at several model scales, and trained with different dataset sizes. This process produces 42-49 data points for each <dataset, architecture> pair, and the 6 coefficients of the proposed functional form are fit to those data points, with < 2% mean deviation in accuracy. It then studies how this functional form performs at extrapolation, and finds that it still performs pretty well, with ~4.5% mean deviation in accuracy, but with additional caveats." ]
The dependency of the generalization error of neural networks on model and dataset size is of critical importance both in practice and for understanding the theory of neural networks. Nevertheless, the functional form of this dependency remains elusive. In this work, we present a functional form which approximates well the generalization error in practice. Capitalizing on the successful concept of model scaling (e.g., width, depth), we are able to simultaneously construct such a form and specify the exact models which can attain it across model/data scales. Our construction follows insights obtained from observations conducted over a range of model/data scales, in various model types and datasets, in vision and language tasks. We show that the form both fits the observations well across scales, and provides accurate predictions from small- to large-scale models and data.
[ { "affiliations": [], "name": "Jonathan S. Rosenfeld" }, { "affiliations": [], "name": "Amir Rosenfeld" }, { "affiliations": [], "name": "Yonatan Belinkov" }, { "affiliations": [], "name": "Nir Shavit" } ]
[ { "authors": [ "Zeyuan Allen-Zhu", "Yuanzhi Li", "Yingyu Liang" ], "title": "Learning and generalization in overparameterized neural networks, going beyond two layers", "venue": "arXiv preprint arXiv:1811.04918,", "year": 2018 }, { "authors": [ "Zeyuan Allen-Zhu", "Yuanzhi Li", "Zhao Song" ], "title": "On the convergence rate of training recurrent neural networks", "venue": "arXiv preprint arXiv:1810.12065,", "year": 2018 }, { "authors": [ "Sanjeev Arora", "Rong Ge", "Behnam Neyshabur", "Yi Zhang" ], "title": "Stronger generalization bounds for deep nets via a compression approach", "venue": "arXiv preprint arXiv:1802.05296,", "year": 2018 }, { "authors": [ "Michele Banko", "Eric Brill" ], "title": "Mitigating the paucity-of-data problem: Exploring the effect of training corpus size on classifier performance for natural language processing", "venue": "In Proceedings of the first international conference on Human language technology research,", "year": 2001 }, { "authors": [ "Hakan Bilen", "Basura Fernando", "Efstratios Gavves", "Andrea Vedaldi", "Stephen Gould" ], "title": "Dynamic image networks for action recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "James Bradbury", "Stephen Merity", "Caiming Xiong", "Richard Socher" ], "title": "Quasi-recurrent neural networks", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Junghwan Cho", "Kyewook Lee", "Ellie Shin", "Garry Choy", "Synho Do" ], "title": "How much data is needed to train a medical image deep learning system to achieve necessary high accuracy", "venue": "arXiv preprint arXiv:1511.06348,", "year": 2015 }, { "authors": [ "Mircea Cimpoi", "Subhransu Maji", "Iasonas Kokkinos", "Sammy Mohamed", "Andrea Vedaldi" ], "title": "Describing textures in the wild", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2014 }, { "authors": [ "Zihang Dai", "Zhilin Yang", "Yiming Yang", "Jaime Carbonell", "Quoc Le", "Ruslan Salakhutdinov" ], "title": "Transformer-XL: Attentive language models beyond a fixed-length context", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Thomas Elsken", "Jan Hendrik Metzen", "Frank Hutter" ], "title": "Neural architecture search: A survey", "venue": "Journal of Machine Learning Research,", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2016 }, { "authors": [ "Joel Hestness", "Sharan Narang", "Newsha Ardalani", "Gregory Diamos", "Heewoo Jun", "Hassan Kianinejad", "Md Patwary", "Mostofa Ali", "Yang Yang", "Yanqi Zhou" ], "title": "Deep learning scaling is predictable, empirically", "venue": "arXiv preprint arXiv:1712.00409,", "year": 2017 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Elad Hoffer", "Itay Hubara", "Daniel Soudry" ], "title": "Fix your classifier: the marginal value of training the last weight layer", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Andrew G Howard", "Menglong Zhu", "Bo Chen", "Dmitry Kalenichenko", "Weijun Wang", "Tobias Weyand", "Marco Andreetto", "Hartwig Adam" ], "title": "Mobilenets: Efficient convolutional neural 
networks for mobile vision applications", "venue": "arXiv preprint arXiv:1704.04861,", "year": 2017 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": null, "year": 2017 }, { "authors": [ "Roxana Istrate", "Florian Scheidegger", "Giovanni Mariani", "Dimitrios Nikolopoulos", "Costas Bekas", "A Cristiano I Malossi" ], "title": "Tapas: Train-less accuracy predictor for architecture search", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Mark Johnson", "Peter Anderson", "Mark Dras", "Mark Steedman" ], "title": "Predicting accuracy on large datasets from smaller pilot data", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": null, "year": 2015 }, { "authors": [ "Aaron Klein", "Stefan Falkner", "Simon Bartels", "Philipp Hennig", "Frank Hutter" ], "title": "Fast bayesian optimization of machine learning hyperparameters on large datasets", "venue": "In Artificial Intelligence and Statistics,", "year": 2017 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Tengyuan Liang", "Alexander Rakhlin", "Xiyu Zhai" ], "title": "On the risk of minimum-norm interpolants and restricted lower isometry of kernels", "venue": null, "year": 1908 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "Darts: Differentiable architecture search", "venue": "arXiv preprint arXiv:1806.09055,", "year": 2018 }, { "authors": [ "Subhransu Maji", "Esa Rahtu", "Juho Kannala", "Matthew Blaschko", "Andrea Vedaldi" ], "title": "Fine-grained visual classification of aircraft", "venue": "arXiv preprint arXiv:1306.5151,", "year": 2013 }, { "authors": [ "Stephen Merity", "Caiming Xiong", "James Bradbury", "Richard Socher" ], "title": "Pointer sentinel mixture models", "venue": "arXiv preprint arXiv:1609.07843,", "year": 2016 }, { "authors": [ "Stephen Merity", "Nitish Shirish Keskar", "Richard Socher" ], "title": "Regularizing and optimizing LSTM language models", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Antonio Valerio Miceli Barone", "Barry Haddow", "Ulrich Germann", "Rico Sennrich" ], "title": "Regularization techniques for fine-tuning in neural machine translation", "venue": "In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing,", "year": 2017 }, { "authors": [ "Tomáš Mikolov", "Martin Karafiát", "Lukáš Burget", "Jan Černockỳ", "Sanjeev Khudanpur" ], "title": "Recurrent neural network based language model", "venue": "In Eleventh Annual Conference of the International Speech Communication Association,", "year": 2010 }, { "authors": [ "Behnam Neyshabur", "Srinadh Bhojanapalli", "David McAllester", "Nati Srebro" ], "title": "Exploring generalization in deep learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Behnam Neyshabur", "Srinadh Bhojanapalli", "Nathan Srebro" ], "title": "A pac-bayesian approach to spectrally-normalized margin bounds for neural networks", "venue": "arXiv preprint arXiv:1707.09564,", "year": 2017 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith 
Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in PyTorch", "venue": "In NIPS Autodiff Workshop,", "year": 2017 }, { "authors": [ "E Real", "A Aggarwal", "Y Huang", "QV Le" ], "title": "Aging evolution for image classifier architecture search", "venue": "In AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Esteban Real", "Alok Aggarwal", "Yanping Huang", "Quoc V Le" ], "title": "Regularized evolution for image classifier architecture search", "venue": "arXiv preprint arXiv:1802.01548,", "year": 2018 }, { "authors": [ "Sylvestre-Alvise Rebuffi", "Hakan Bilen", "Andrea Vedaldi" ], "title": "Learning multiple visual domains with residual adapters", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International journal of computer vision,", "year": 2015 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Khurram Soomro", "Amir Roshan Zamir", "Mubarak Shah" ], "title": "Ucf101: A dataset of 101 human actions classes from videos in the wild", "venue": "arXiv preprint arXiv:1212.0402,", "year": 2012 }, { "authors": [ "Chen Sun", "Abhinav Shrivastava", "Saurabh Singh", "Abhinav Gupta" ], "title": "Revisiting unreasonable effectiveness of data in deep learning era", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Alon Talmor", "Jonathan Berant" ], "title": "MultiQA: An empirical investigation of generalization and transfer in reading comprehension", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Mingxing Tan", "Quoc Le" ], "title": "Efficientnet: Rethinking model scaling for convolutional neural networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Ł ukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Zifeng Wu", "Chunhua Shen", "Anton van den Hengel" ], "title": "Wider or deeper: Revisiting the resnet model for visual recognition", "venue": "arXiv preprint arXiv:1611.10080,", "year": 2016 }, { "authors": [ "Dmitry Yarotsky" ], "title": "Optimal approximation of continuous functions by very deep relu networks", "venue": "arXiv preprint arXiv:1802.03620,", "year": 2018 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "arXiv preprint arXiv:1605.07146,", "year": 2016 }, { "authors": [ "Arber Zela", "Aaron Klein", "Stefan Falkner", "Frank Hutter" ], "title": "Towards automated deep learning: Efficient joint neural architecture and hyperparameter search", "venue": "arXiv preprint arXiv:1807.06906,", "year": 2018 }, { "authors": [ "Xiangxin Zhu", "Carl Vondrick", "Deva Ramanan", "Charless C Fowlkes" ], "title": "Do we need more training data or better 
models for object detection", "venue": "In BMVC,", "year": 2012 }, { "authors": [ "Barret Zoph", "Vijay Vasudevan", "Jonathon Shlens", "Quoc V Le" ], "title": "Learning transferable architectures for scalable image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "sakovsky" ], "title": "2015): a large-scale recognition benchmark consisting of natural images of 1000 object categories with 1.28M training images spread roughly uniformly over the categories. It has 50K validation and 100K testing images. It has been the most popular large-scale benchmark for image classification methods for the better part of the last decade. CIFAR10/100 (Krizhevsky et al., 2009): 60K natural RGB images of 10 classes (100 for CIFAR100) with a train/test split of 50K/10K", "venue": null, "year": 2009 }, { "authors": [ "Hoffer" ], "title": "Scaling the models’ width is performed by multiplying the number of channels in each convolutional layer and the width of the hidden linear layers by a constant factor and rounding to the nearest integer. The ranges of width scales (and data scales) for the main experiments are detailed in Table 1b", "venue": null, "year": 2018 }, { "authors": [ "Hoffer" ], "title": "The VGG and DenseNet models were also modified for width scaling from the implementation", "venue": "Zisserman, 2014) and DenseNet (L=40,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "With the success and heightened adoption of neural networks for real world tasks, some questions remain poorly answered. For a given task and model architecture, how much data would one require to reach a prescribed performance level? How big a model would be needed?\nAddressing such questions is made especially difficult by the mounting evidence that large, deep neural networks trained on large-scale data outperform their smaller counterparts, rendering the training of high performance models prohibitively costly. Indeed, in the absence of practical answers to the above questions, surrogate approaches have proven useful. One such common approach is model scaling, where one designs and compares small-scale models, and applies the obtained architectural principles at a larger scale (e.g., Liu et al., 2018; Real et al., 2018; Zoph et al., 2018). Despite these heuristics being widely used to various degrees of success, the relation between the performance of a model in the small- and large-scale settings is not well understood. Hence, exploring the limitations or improving the efficiency of such methods remains subject to trial and error.\nIn this work we circle back to the fundamental question: what is the (functional) relation between generalization error and model and dataset sizes? Critically, we capitalize on the concept of model scaling in its strictest form: we consider the case where there is some given scaling policy that completely defines how to scale up a model from small to large scales. We include in this context all model parameters, such that traversing from one scale (in which all parameters are known) to another requires no additional resources for specifying the model (e.g., architecture search/design).\nWe empirically explore the behavior of the generalization error over a wide range of datasets and models in vision and language tasks. While the error landscape seems fairly complex at first glance, we observe the emergence of several key characteristics shared across benchmarks and domains. Chief among these characteristics is the emergence of regions where power-law behavior approximates the error well both with respect to data size, when holding model size fixed, and vice versa.\nMotivated by these observations, we establish criteria which a function approximating the error landscape should meet. We propose an intuitive candidate for such a function and evaluate its quality, both in explaining the observed error landscapes and in extrapolating from small scale (seen) to large scale (unseen) errors. Critically, our functional approximation of the error depends on both\nmodel and data sizes. We find that this function leads to a high quality fit and extrapolation. 
For instance, the mean and standard deviation of the relative errors are under 2% when fitting across all scales investigated and under 5% when extrapolating from a slimmed-down model (1/16 of the parameters) on a fraction of the training data (1/8 of the examples) on the ImageNet (Russakovsky et al., 2015) and WikiText-103 (Merity et al., 2016) datasets, with similar results for other datasets.\nTo the best of our knowledge, this is the first work that provides simultaneously:\n• A joint functional form of the generalization error landscape—as dependent on both data and model size—with few, interpretable degrees of freedom (section 5).\n• Direct and complete specification (via the scaling policy) of the model configuration attaining said generalization error across model and dataset sizes.\n• Highly accurate approximation of error measurements across model and data scales via the functional form, evaluated on different models, datasets, and tasks (section 6 ).\n• Highly accurate error prediction from small to large model and data (section 7).\nWe conclude with a discussion of some implications of our findings as a practical and principled tool for understanding network design at small scale and for efficient computation and trade-off design in general. We hope this work also provides a useful empirical leg to stand on and an invitation to search for a theory of generalization error which accounts for our findings." }, { "heading": "2 RELATED WORK", "text": "Model scaling: A number of studies have explored the effect of model scaling on performance. For instance, image classification networks can be scaled by depth (number of layers; He et al., 2016) or width (number of channels; Zagoruyko & Komodakis, 2016; Howard et al., 2017). More recently, Tan & Le (2019) demonstrated how scaling width, depth, and input resolution has combined positive effects larger than scaling each factor in isolation. However, this relationship has yet to be quantified in a predictive form – by how much will error change with model scaling? In this work, we focus on finding a constructive functional form for determining the model given a specified performance.\nData scaling: It has long been recognized that more data improves performance, and various studies report such trends in both computer vision (e.g., Zhu et al., 2012; Sun et al., 2017) and language processing tasks (e.g., Banko & Brill, 2001; Talmor & Berant, 2019). A number of prior studies observed power-law relations between the generalization error and training data size (Cho et al., 2015; Miceli Barone et al., 2017; Johnson et al., 2018). Most relevant to our work, Hestness et al. (2017) explored the effect of data size on the generalization error in vision, language, and speech tasks, and observed a strikingly consistent power-law behavior in a large set of experiments. However, while these studies point to the empirical existence of a power law in terms of data, they do not offer tools for predicting the performance given a specified model. Nor do they offer low-cost methods to specify the model configuration which would attain the power law with data dependency. Indeed, Hestness et al. 
had to search over models and their configurations at large scale to exhibit their findings, incurring prohibitive computational costs.\nIn contrast, we demonstrate a constructive recipe, where we directly predict the test performance at large scale and specify the full model configuration which attains it (with no need for large-scale search), given performance at small scale.\nPredicting model performance: Since training models at full data/model scale may be computationally prohibitive, a line of work tries to predict the performance of a given model on a given dataset, without training the model, for example by using a bank of previously trained models, dataset, and their associated performances (Istrate et al., 2019). Others have proposed to estimate performance on small data (Klein et al., 2017) or model sizes (Zoph et al., 2018; Real et al., 2019) in the context of neural architecture search (NAS). In this case, the small-scale evaluation is used to compare models at small cost, to expedite the search process; see Elsken et al. (2019) for a recent survey. Our work complements previous approaches by demonstrating a functional form that can predict large-scale performance from small-scale measurements. Moreover, our method may be integrated in NAS, addressing some of its current limitations (as discussed in section 8).\nTheoretical error bounds: Much attention has been given to theoretical explanations of the generalization capabilities of deep neural networks (Neyshabur et al., 2017a;b; Allen-Zhu et al., 2018a;b; Arora et al., 2018). While fully engaging with this literature is beyond our scope, we note that recent studies have derived bounds involving power-law dependencies in both model (Yarotsky, 2018) and data size (Liang et al., 2019). We leave it as an open question for future work to find theoretical explanations for the empirical behavior and the functional form we investigate in this work." }, { "heading": "3 EXPERIMENTAL SETUP", "text": "Notation: Let Dn = {xi, yi}ni=1 denote a labeled (training) dataset with n samples or datapoints. Let fm denote a neural network whose size is the number of parameters m, such that ŷ = fm(x) is the predicted label. Let (n,m) be the generalization error as a function of n and m, measured by a performance metric (e.g., top-1 accuracy or cross-entropy loss) on a held-out test set. We refer to this error function as the error landscape." }, { "heading": "3.1 SCALING POLICIES", "text": "Dataset scaling: We wish to scale datasets while preserving the original distribution. For image classification, we uniformly subsample all classes by a constant ratio, thus preserving the relative sample size per class. We limit the maximal sub-sampling to avoid eradicating any class. For language modeling, where the number of classes (vocabulary items) has a very long tail distribution, we randomly sample sentences such that the total number of sampled words will be a certain fraction of the original dataset. Table 1 reports the data scales we use. In all tasks the held-out test set remains untouched for evaluating the error.\nModel scaling: We are critically interested in a method where moving across scales is defined by some scaling function, such that no additional significant computation would be incurred. We thus consider the case where the model architecture is given and the model size determines how to scale it. 
Hyper-parameters: For similar reasons we wish to avoid hyper-parameter search at large scales, and thus avoid the temptation to tune hyper-parameters accordingly (learning rate, regularization, etc.). Therefore, we hold all hyper-parameters fixed. This enables us to construct a functional form that fits the error landscape and can be used to predict the error across scales while completely defining the model attaining it. We consider pros and cons of this approach in the discussion (section 8)." }, { "heading": "3.2 TASKS, MODELS, OPTIMIZERS AND DATASETS", "text": "We experiment with both vision and language tasks. We use 6 benchmark datasets for image classification and 3 for language modeling. For image classification, we train ResNet (He et al., 2016) and WRN models (Zagoruyko & Komodakis, 2016) with stochastic gradient descent (SGD). In section 6.2 we explore the effect of varying architectures and optimizers for a fixed task (CIFAR100), adding VGG16 (Simonyan & Zisserman, 2014) and DenseNet (Huang et al., 2017) models trained with both Adam (Kingma & Ba, 2015) and SGD. For language modeling, we train AWD-LSTM (Merity et al., 2018) and Transformer-XL models (Dai et al., 2019) with SGD and Adam optimizers respectively. Summary statistics are shown in Table 1, along with the range of explored scales. Appendix A gives additional information." }, { "heading": "4 OBSERVATIONS ON THE ERROR LANDSCAPE", "text": "Figures 1a and 1b respectively show an example test error landscape for width scaling of Transformer-XL on WikiText-103 and WRN-44-16 on CIFAR10. Various additional such landscapes are found in appendix C, showing largely consistent patterns. Examining the error landscapes yields the following observations:" }, { "heading": "O1 Model Scaling", "text": "O1.1 For a given dataset size, scaling up the model results in an initial decrease in test error, which then saturates to a level determined by the dataset size.¹ This behavior has been noted by Tan & Le (2019) across varied model scaling methods, although they have not engaged with the dependency on dataset size.\nO1.2 The rate of error decrease with model size appears well approximated by a power-law.\nThese two observations together can be summarized as the following relation:\n$\epsilon(m, n) \approx b(n)\, m^{-\beta(n)} + c_m(n)$   (1)\nwhere $b$, $\beta$, $c_m$ may depend on the data size $n$, s.t. as $m$ grows, $\epsilon \to c_m$. Example fits to this form (allowing $b$, $\beta$, $c_m$ to be fit per $n$) are seen in figure 2a (right) and figure 2b (right)." }, { "heading": "O2 Data scaling", "text": "O2.1 For a given model size, scaling up the dataset results in an initial increase in performance, which then saturates to a level determined by the model size.\nO2.2 The rate of error decrease with dataset size appears well approximated by a power-law.
Hestness et al. (2017) also noted a similar relationship, but did not functionally tie the saturation level to the dataset size.\nThese two observations together can be summarized as the following relation:\n$\epsilon(m, n) \approx a(m)\, n^{-\alpha(m)} + c_n(m)$   (2)\nwhere $a$, $\alpha$, $c_n$ may depend on the model size $m$, s.t. as $n$ grows, $\epsilon \to c_n$. Example fits to this form (allowing $a$, $\alpha$, $c_n$ to be fit per $m$) are seen in figure 2a (left) and figure 2b (left).\nO3 Joint properties The behavior of the error when scaling model size while holding data size fixed, and vice versa, extends to the entire error landscape in a well-behaved manner, such that the manifold $\epsilon(m, n)$ is smooth everywhere as a function of both model and data scales." }, { "heading": "5 FUNCTIONAL APPROXIMATION OF THE GENERALIZATION ERROR", "text": "" }, { "heading": "5.1 CRITERIA", "text": "Motivated by the above observations, we now consider a functional approximation for the error landscape. In particular, let us consider function families meeting the following criteria which augment and restrict our observations:\nC1 As either model or dataset size goes to zero, the expected performance is equivalent to a random-guess error level $\epsilon_0$.²\nC2 For a given dataset size, scaling up the model will result in an initial increase in performance, which will then saturate, taking the form in equation 1.\nC3 For a given model size, scaling up the dataset will result in an initial increase in performance, which will then saturate, taking the form in equation 2.\nC4 There exists an irreducible error $\epsilon_\infty$, intrinsic to the dataset.\nC5 The function must be smooth everywhere and monotonic non-increasing in terms of model and data size (observation O3).\nWhile there are many possible function families meeting the above criteria, below we propose a simple function family for our evaluation. We do not claim that this is in fact the true underlying dependency, but rather that it serves as a good approximation of the error landscape—consistent with these criteria.\n¹At some point error increase ensues; this point differs between datasets, see Appendix C for examples. ²Best guess when $m \to 0$ ($\epsilon_{0n}$) or $n \to 0$ ($\epsilon_{m0}$) need not coincide, but can, e.g., in a balanced dataset." }, { "heading": "5.2 PROPOSED FUNCTION FAMILY", "text": "As a first insightful step, consider the implications of satisfying C2 and C3 simultaneously. By examining the limiting behavior as $m$ or $n$ grow, we have:\nAs $m$ grows large: $c_m(n) \approx a(m)\, n^{-\alpha(m)} + c_n(m)$\nAs $n$ grows large: $c_n(m) \approx b(n)\, m^{-\beta(n)} + c_m(n)$\nThus, a consistent form satisfying C2 and C3 simultaneously is:\n$\epsilon(m, n) \approx a(m)\, n^{-\alpha(m)} + b(n)\, m^{-\beta(n)} + c_\infty$   (3)\nwhere $c_\infty$ is a constant not dependent on either $m$ or $n$.\nLet us now examine the simplified case where $a$, $b$, $\alpha$, $\beta$ are constant:\n$\tilde{\epsilon}(m, n) = a\, n^{-\alpha} + b\, m^{-\beta} + c_\infty$   (4)\nwhere $\alpha \geq 0$ and $\beta \geq 0$ control the global rate at which error decreases with data and model size, respectively, $a > 0$ and $b > 0$ are a form of unit conversion between data and model sizes and error, and $c_\infty > 0$ is the asymptotic lower value attainable. This function is a special case of equation 3 and meets criteria C2 and C3 by construction. Importantly, C4 and C5 are also met.\nHowever, by giving up the dependence of $a$, $b$, $\alpha$, $\beta$ on $m$, $n$, this function does not meet criterion C1. We thus need to model the transition from the initial random-guess level to the power-law region. We propose to parameterize the transition using the following envelope (complex) function:\n$\hat{\epsilon}(m, n) = \epsilon_0 \left\| \frac{\tilde{\epsilon}(m, n)}{\tilde{\epsilon}(m, n) - i\eta} \right\| = \epsilon_0 \left\| \frac{a\, n^{-\alpha} + b\, m^{-\beta} + c_\infty}{a\, n^{-\alpha} + b\, m^{-\beta} + c_\infty - i\eta} \right\|$   (5)\nwhere $i = \sqrt{-1}$. Here the simple pole at $\eta$ controls the transition point from the initial random-guess level $\epsilon_0$ as $(m, n)$ increase. As $(m, n)$ grow, $\tilde{\epsilon} \to c_\infty$ and the final irreducible error $\epsilon_\infty \triangleq \epsilon_0 c_\infty \eta^{-1}$ is approached.
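As a sanity check on these definitions, the function family is straightforward to evaluate numerically; a minimal NumPy sketch (function and parameter names are ours):

```python
import numpy as np

def eps_tilde(m, n, b, alpha, beta, c_inf):
    # power-law core of equation 4, with a = 1 (the normalization adopted below)
    return n ** (-alpha) + b * m ** (-beta) + c_inf

def eps_hat(m, n, b, alpha, beta, c_inf, eta, eps0):
    # equation 5: magnitude of the rational (complex) envelope; it transitions
    # from the random-guess level eps0 (tiny m, n) toward the irreducible
    # error ~ eps0 * c_inf / eta (large m, n)
    t = eps_tilde(m, n, b, alpha, beta, c_inf)
    return eps0 * np.abs(t / (t - 1j * eta))
```

Both functions broadcast over NumPy arrays, so an entire grid of (m, n) configurations can be evaluated at once.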
The random-guess error, $\epsilon_0$, is a known parameter determined by dataset statistics (e.g., $(N_{classes} - 1)/N_{classes}$ for a balanced dataset). Note that due to our choice of rational envelope, we can divide the form in equation 4 by a constant. Without loss of generality, let us choose $a = 1$.\nNote that while the forms in equations 3 and 4 are well motivated, the approach taken for modeling the transition is solely a convenience one. In fact, the transition(s) as a function of $m$ and $n$ may be captured in the functional forms of $a$, $b$, $\alpha$, $\beta$ or another envelope mechanism. We leave a more refined investigation of the nature of the transitions to future work." }, { "heading": "6 ERROR LANDSCAPE ESTIMATION", "text": "We wish to empirically estimate the quality of the proposed functional parameterization as a fit to the true error landscape. Let $\hat{\epsilon}(n, m; \theta)$ be the parametric function family (equation 5) approximating the error landscape $\epsilon(n, m)$, where $\theta = \{\alpha, \beta, b, c_\infty, \eta\}$.³ Define the divergence $\delta(n, m; \theta)$ as the relative difference between the estimated error $\hat{\epsilon}(m, n; \theta)$ and the true error $\epsilon(m, n)$:\n$\delta(n, m; \theta) \triangleq \frac{\hat{\epsilon}(m, n; \theta) - \epsilon(m, n)}{\epsilon(m, n)}$\nWe fit a least squares regression model to find the best parameters minimizing the divergence. In this section, we fit the function using 10-fold cross-validation across all model/data configurations $m, n$ (see Table 1) and evaluate the fit quality. (In the next section, we perform extrapolation experiments, from seen to unseen points.) We perform the fit separately for each dataset and evaluate its quality by the mean µ and standard deviation σ of the divergence δ over all points $(m, n)$. See Appendix B.1 for experimental details.\nAs figure 3 shows, estimated test accuracy is highly correlated with actual test accuracy for various datasets, with worst-case values µ < 1% and σ < 5%. Note that the number of free parameters is small (|θ| ≤ 6) compared to the number of points (42–49 model-data configurations), demonstrating the appropriateness of the proposed function for modeling the complex error landscape." }, { "heading": "6.1 A PROBE INTO DEPTH SCALING", "text": "Here we verify that our results extend to another canonical scaling policy, namely depth scaling. Figure 4a shows the error landscape with depth scaling on CIFAR10, exhibiting the same characteristics as width scaling. Figures 4b and 4c show error landscape estimation results for both cases of width and depth scaling, exhibiting small and comparable fit errors (confidence intervals < 3%).\nSince the difference in approximation quality is effectively indistinguishable when scaling depth or width orthogonally, we expect compound scaling to adhere to the same functional form. Indeed, we verified this on the publicly available (model scaling only) results for EfficientNet (Tan & Le, 2019)." }, { "heading": "6.2 ON THE VARIETY OF OPTIMIZERS AND ARCHITECTURES", "text": "Our study covers a deliberate variety of architectures (ResNet, WRN, LSTM, Transformer) and optimizers (Adam, SGD variants), following standard implementations in the literature as recommended for each dataset/model setting; see Appendix A.\n³ For image classification, we set $\epsilon_0 = (N_{classes} - 1)/N_{classes}$ (the balanced dataset case). For language modeling, we estimate $\epsilon_0$ as another parameter, such that $\theta = \{\alpha, \beta, b, c_\infty, \eta, \epsilon_0\}$ in this case.
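The least-squares fit described in section 6 can be sketched as follows, reusing eps_hat from the previous snippet (a simplified illustration: the 10-fold cross-validation and any parameter bounds or restarts used in practice are omitted):

```python
import numpy as np
from scipy.optimize import least_squares

def fit_theta(ms, ns, errs, eps0, theta0):
    # minimize the relative divergence delta(m, n; theta) over all measured points
    def residuals(theta):
        alpha, beta, b, c_inf, eta = theta
        return (eps_hat(ms, ns, b, alpha, beta, c_inf, eta, eps0) - errs) / errs
    return least_squares(residuals, np.asarray(theta0, dtype=float)).x
```

Here ms, ns, and errs are equal-length arrays of model sizes, dataset sizes, and measured test errors.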
However, the model/optimizer settings differ in multiple aspects across the different tasks, rendering the comparison of, say, different optimizers challenging. In this section we verify that the functional form holds when varying the optimizer and/or the architecture on the same task, namely image classification on CIFAR100.\nIn addition to the previously examined setting of WRN with SGD, we add four more settings: two well-known architectures (VGG and DenseNet), each trained with both SGD and Adam optimizers. See Appendix A for experimental details. Figure 5 exhibits consistent, accurate fit values across all architecture/optimizer settings, with mean divergence of µ < 1% (std: σ < 6%; confidence intervals < 4%)." }, { "heading": "7 EXTRAPOLATION", "text": "In this section, we evaluate the ability of our functional approximation to extrapolate beyond seen model/data configurations. The primary question we ask is: can we predict the error of a large model/data configuration from the errors of smaller-scale model/data configurations? To do this, we fit the least squares regression on a subset of the configurations and predict the error on larger, unseen configurations. More formally, let $(m_i, n_j)$ denote a given model/data configuration. We first estimate parameters $\theta_{ij}$ by fitting the function in equation 5 on all points of at most that size ($m \leq m_i$, $n \leq n_j$). Then we predict the error $\epsilon(m, n)$ at all points corresponding to larger configurations ($m > m_i$, $n > n_j$) using the estimated $\theta_{ij}$. Finally, we measure the divergence $\delta(m, n)$ between the estimated error and the actual error at all larger configurations. This process is illustrated in figure 6a.\nFigure 6b shows the results of one such extrapolation experiment, on ImageNet. In this case, we have fit the functional form on all configurations of model size $m \leq m_i = M/16$ and data size $n \leq n_j = N/8$, and predicted the error on all larger configurations. As the figure shows, the extrapolation is highly accurate, with a mean divergence of µ = 4.5% (std: σ = 4.7%). Figure 6c reports a similar experiment on WikiText-103. Here, again, we see very good extrapolation, with a mean divergence of µ = 0.5% (std: σ = 1.7%). Note that each extrapolation is run 10 times with different random initializations of $\theta_{ij}$ in the least squares, with negligible effect on the prediction.\nIn practice, we may be interested in extrapolation quality with different subsets of configurations. Appendix D provides detailed extrapolation results on multiple subsets of configurations, for both vision and language datasets. Generally, the extrapolation performs well when the problem is not ill-posed; ill-posedness may be caused by a lack of signal in the region of the initial “random-guess” level, or by degenerate cases such as having fewer measurements than the number of free parameters in θ.
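A minimal sketch of this extrapolation protocol, reusing fit_theta and eps_hat from the snippets above (function names are ours; the actual experiments sweep many (m_i, n_j) corners with repeated random restarts):

```python
import numpy as np

def extrapolate(ms, ns, errs, eps0, theta0, m_i, n_j):
    """Fit theta_ij on all configurations with m <= m_i and n <= n_j,
    then predict the error at every strictly larger configuration."""
    seen = (ms <= m_i) & (ns <= n_j)
    alpha, beta, b, c_inf, eta = fit_theta(ms[seen], ns[seen], errs[seen], eps0, theta0)
    big = (ms > m_i) & (ns > n_j)
    preds = eps_hat(ms[big], ns[big], b, alpha, beta, c_inf, eta, eps0)
    divergence = (preds - errs[big]) / errs[big]  # delta at the unseen points
    return preds, divergence
```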
" }, { "heading": "8 DISCUSSION AND CONCLUSION", "text": "In this work, through insights gained by the joint examination of the dependencies of generalization error on both model and data size, we arrive at criteria for functions consistent with the form of the generalization error under a given scaling policy. We consider one such function and find it to be in very good agreement with the actual behavior of the error landscape. Indeed, the agreement is strong enough that extrapolation from small to large scale becomes feasible: the function predicts the behavior of the generalization error in practice for the practical case of scaling models and data. We discuss several example implications of knowing such a functional form.\nSmall-scale network development: At the core of small-fidelity searches is the notion of performance rank comparison between models. However, small-scale and large-scale ranks are not assured to be consistent. If indeed a functional form such as the one empirically found in this work holds very generally, then, in contrast, one can safely assess scaling rank between models at small scale, with the assurance that it remains consistent. This suggests that one would be well served by searching over scaling policies; a pertinent example of such a success is Tan & Le (2019). The functional form also explains the limitation of small-scale search: once reaching the random-guess error level, where the sensitivity to scaling vanishes, the informativeness of ranking diminishes. Finally, the functional form allows direct usage of differentiable methods for NAS.\nPrincipled design: Knowing the error landscape function facilitates reasoning about the choice of $(m, n)$ attaining a specified error level. In other words, for any given error level, one can solve Eq. 5 for $m, n$ based on small-scale measurements. Thus, one can quantitatively answer design questions regarding the expected (in particular, large-scale) relations between $m$, $n$, and $\epsilon$. In fact, Eq. 5 provides direct answers to questions such as “how much data would one require to reach a prescribed performance level?” or “how big a model would be needed?” Imposing constraints is also straightforward. For instance, consider the following question: “What is the maximal model size possibly needed (useful), when the data is limited in size, $n = n_{lim}$ (for a given model architecture and scaling policy)?” For a fixed dataset size, model scaling eventually contributes marginally to error reduction and becomes negligible when $b\, m^{-\beta} \ll n_{lim}^{-\alpha}$ (Eq. 5). Define the relative contribution threshold $T$ as satisfying $T = \frac{n_{lim}^{-\alpha}}{b\, m_{max}^{-\beta}}$. (For example, $T = 10$.) Then the maximal useful model size meeting threshold $T$ is:\n$m_{max}(T) = (bT)^{1/\beta}\, n_{lim}^{\alpha/\beta}$\nSimilarly, the maximal useful amount of data for a limited-size model $m_{lim}$ is:\n$n_{max}(T) = \left(\frac{1}{bT}\right)^{1/\alpha} m_{lim}^{\beta/\alpha}$\nMoreover, Eq. 5 allows for complex design trade-offs. Generally, given some design-tradeoff cost function $C(m, n, \epsilon)$, one can minimize such cost s.t. Eq. 5. For example, consider the case of optimizing for efficient computation, which has both practical and environmental importance (Schwartz et al., 2019). Since the number of FLOPs during training is $\propto m \cdot n$ (for a constant epoch budget), the trade-off cost function may be formulated as $C(\text{FLOPs}, \epsilon) = C(mn, \epsilon)$. Further, since a constant error contour is very well approximated by $\epsilon_c = \frac{1}{n^{\alpha}} + \frac{b}{m^{\beta}}$ (Eq. 5), dataset and models may be scaled with optimal resource efficiency, with no effect on performance, by solving for:\n$\operatorname{argmin}_{m,n}\; m \cdot n \quad \text{s.t.} \quad \epsilon_c = \frac{1}{n^{\alpha}} + \frac{b}{m^{\beta}}$\nThe solution gives us the optimal-computational-efficiency ratio of model to data size: $\frac{b\beta}{\alpha}\, \frac{n^{\alpha}}{m^{\beta}} = 1$.
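Under the stated assumptions (a = 1 and constants already fit at small scale), these design rules reduce to a few lines (function names are ours):

```python
def m_max(b, alpha, beta, n_lim, T=10.0):
    # maximal useful model size when data is limited to n_lim
    return (b * T) ** (1.0 / beta) * n_lim ** (alpha / beta)

def n_max(b, alpha, beta, m_lim, T=10.0):
    # maximal useful amount of data for a model limited to m_lim
    return (1.0 / (b * T)) ** (1.0 / alpha) * m_lim ** (beta / alpha)

def efficiency_ratio(m, n, b, alpha, beta):
    # (b * beta / alpha) * n^alpha / m^beta; equals 1 at the
    # optimal-computational-efficiency allocation of model vs. data
    return (b * beta / alpha) * n ** alpha / m ** beta
```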
Limitations: We have made a few simplifying assumptions in our choice of approximating function, in particular in how to model the transition from the initial random-guess error level, and in unifying the random-guess levels of the two scenarios (small model with large data and large model with small data). We leave a more detailed examination of the behavior of the transitions from random-guess error levels and refinements of the functional form to future work.\nCritically, the restrictive nature of our scaling framework (all parameters and hyperparameters described by a policy) is both a blessing and a challenge. The blessing comes in fulfilling the goal of finding simultaneously both the form of the generalization error and the full specification of the model and hyperparameters that attain it across scales. The challenge is that we have demonstrated in this work only the case of constant hyper-parameters. We conjecture that the relation between model configuration and hyperparameter choice (Zela et al., 2018) may entail the potential to formulate hyperparameter-scaling policies similar in nature to the model-scaling policies, and that these too fall under the scope of the form we find in this work. This too will be the subject of future work.\nWe hope that this work will bring the actual functional form of the generalization error in this practical case of scaling to the fore, both in practice and as an empirical leg to stand on in the quest for its theoretical origins." }, { "heading": "ACKNOWLEDGMENTS", "text": "We thank Alexander Rakhlin, Alexander Madry, Kai Xiao, Lu Mi, Vikas Garg, Dan Alistarh, and Tommi Jaakkola for discussions and their help. We also thank the anonymous reviewers for their valuable feedback. J.R. was partly supported by the Eli and Dorothy Berman Fellowship as well as grants NSF IIS-1447786, NSF CCF-1563880 and China-Singapore Suzhou Industrial Park. A.R. was partially supported by the Air Force Office of Scientific Research USA (FA9550-18-1-0054) through a grant to John K. Tsotsos. Y.B. was partly supported by the Harvard Mind, Brain, and Behavior Initiative." }, { "heading": "A DATASETS AND MODELS", "text": "A.1 IMAGE CLASSIFICATION\nA.1.1 DATASETS\nWe evaluated our predictions on several popular image classification datasets: ImageNet (Russakovsky et al., 2015): a large-scale recognition benchmark consisting of natural images of 1000 object categories with 1.28M training images spread roughly uniformly over the categories. It has 50K validation and 100K testing images. It has been the most popular large-scale benchmark for image classification methods for the better part of the last decade. CIFAR10/100 (Krizhevsky et al., 2009): 60K natural RGB images of 10 classes (100 for CIFAR100) with a train/test split of 50K/10K. For each of the following datasets, we use the version collated, resized, and split into train/validation/test sets by Rebuffi et al. (2017). DTD (Cimpoi et al., 2014): a texture database of 47 categories and 5640 images. Aircraft (Maji et al., 2013): 10K images of 100 different aircraft classes. UCF101 (Soomro et al., 2012): originally a video action recognition dataset, converted using the method of Bilen et al. (2016) into a single image per video. It contains 13,320 images of 101 action classes.\nA.1.2 MODELS\nWe experiment with four models for image classification. We use different variants of the popular ResNet architecture (He et al., 2016) in the main experiments. For ImageNet we use ResNet-50 and build on the code from the PyTorch framework (Paszke et al., 2017) to vary the model width. For all other datasets we use WRN-44-16 (Wu et al., 2016) of varying widths, modified from the implementation of Hoffer et al. (2018).\nScaling the models’ width is performed by multiplying the number of channels in each convolutional layer and the width of the hidden linear layers by a constant factor and rounding to the nearest integer. The ranges of width scales (and data scales) for the main experiments are detailed in Table 1b.
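A PyTorch-style sketch of this width-scaling policy (illustrative only; the actual WRN/ResNet modifications follow the implementations cited above):

```python
import torch.nn as nn

def scale_width(base_channels, factor):
    # multiply the base width by a constant factor, round to the nearest integer
    return max(1, round(factor * base_channels))

def conv_bn_relu(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

# a width-scaled stem at 1/4 of the base width; the 3 input channels stay fixed
stem = conv_bn_relu(3, scale_width(64, factor=0.25))
```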
In section 6.2, we perform width scaling for two additional architectures, VGG16bn (Simonyan & Zisserman, 2014) and DenseNet (L=40, k=32) (Huang et al., 2017). The VGG and DenseNet models were also modified for width scaling from the implementation of Hoffer et al. (2018). The model scales in this case are 4−k, 0 ≤ k ≤ 5, for both VGG and DenseNet. Depth scaling, in the CIFAR10 case (section 6.1), is performed by appending extra layers within each block.\nA.1.3 TRAINING\nIn the main experiments, training is done via SGD with a momentum of 0.9, weight decay of 1e-4 and initial learning rate of 0.1. For ImageNet we train for 90 epochs, decreasing the learning rate by a multiplicative factor of 0.1 after 30 and after 60 epochs. We use a batch size of 16. For all other vision datasets we use a batch size of 128. We begin training with a learning rate of 0.1, run for 200 epochs, and reduce by a multiplicative factor of 0.1 after 80, 120, and 160 epochs.\nFor the VGG and DenseNet experiments on CIFAR100 in section 6.2, we train with both SGD and Adam optimizers. We train VGG for 170 epochs and DenseNet for 300 epochs. Adam hyperparameters are default, with an initial learning rate of 1e-3. When training with SGD, we retain the initial learning rate, batch size, momentum, and weight decay, as in the main experiment (at 0.1, 128, 0.9, and 1e-4 respectively) and follow standard stepped learning rate schedules: for VGG, a learning rate multiplicative factor of 0.1 after 80, 120, and 160 epochs; for DenseNet, a learning rate multiplicative factor of 0.1 after 150 and 225 epochs.\nA.2 LANGUAGE MODELING\nA.2.1 DATASETS\nWe evaluate on several datasets commonly used for (word-level) language modeling: Penn Treebank (Mikolov et al., 2010), WikiText-2 (Bradbury et al., 2017), and WikiText-103 (Merity et al., 2016). The PTB is a relatively small language modeling dataset of news texts, with a vocabulary of 10K unique words and about 900K/70K/80K training/validation/test words. WikiText-2 is drawn from Wikipedia articles and is both larger and richer, with a vocabulary of 33K words and 2M/210K/240K training/validation/test words. WikiText-103 is also based on Wikipedia, but larger still, with a vocabulary of 270K words and 100M training words (and the same validation and test sets as WikiText-2).\nA.2.2 MODELS\nWe experiment with two standard models for language modeling: Transformer-XL (Dai et al., 2019) and AWD-LSTM (Merity et al., 2018). Transformer-XL is a recent language modeling architecture that is based on transformer self-attention (Vaswani et al., 2017), but modified to better learn dependencies beyond a fixed length by adding a segment-level recurrence mechanism. It has achieved state-of-the-art results on multiple benchmarks. We use the official PyTorch implementation⁴ with their base configuration: 16 layers, embedding size of 410, inner dimension of 2100 in the fully-connected layers, and 10 attention heads. Training is done with Adam. See the implementation for other details. For scaling experiments, we decimate the inner dimension. We use Transformer-XL for WikiText-103.
AWD-LSTM is a long short-term memory (Hochreiter & Schmidhuber, 1997) language model with adaptive weight averaging. We use the official implementation⁵ with the recommended configuration: 3 layers, embedding size of 400, and hidden state size of 1150. Training is done with SGD. We use AWD-LSTM for PTB and WikiText-2 and follow the recommended settings for these two datasets. For scaling experiments, we decimate the hidden state size.\n⁴https://github.com/kimiyoung/transformer-xl ⁵https://github.com/salesforce/awd-lstm-lm" }, { "heading": "B ERROR ESTIMATION EXPERIMENT", "text": "B.1 EXPERIMENTAL DETAILS\nIn the experiment described in section 6, we fit a least squares regression model to find the best parameters minimizing the divergence $\delta(m, n)$, evaluated at configurations $m, n$ as in Table 1:\n$\theta^{*} = \operatorname{argmin}_{\theta} \sum_{m,n} |\delta(m, n; \theta)|^{2}$\nWe quantify the quality of the fit by the mean µ and standard deviation σ of the fitted divergence by performing standard 10-fold cross-validation over all points $(m, n)$, with confidence intervals reported as ±1 std over the folds.\nB.2 FOUND THETA VALUES" }, { "heading": "C ADDITIONAL ERROR LANDSCAPE MEASUREMENTS AND ESTIMATIONS", "text": "In this appendix, we provide error landscape measurements and estimations for all datasets, corresponding to the experiment in section 6. The results are shown in 3D graphs similar to figure 1. In each such graph, the z-axis is the logarithm of the generalization error as a function of two independent variables: the model size m and the data size n.\nThe 3D graph is deliberately portrayed in log-log-log scale, as we cover a very large range of data scales and model scales and a correspondingly wide range of errors. This view is a useful one when one wishes to evaluate both large dynamic ranges (simultaneously both very large and very small values) and is especially vivid in portraying power-law like dependencies; a power-law naturally forms a straight line in a log-log view.\nIn each figure, subfigure (a) shows the measured error landscape in log-log-log scale, where each point (blue dot) is the error resulting from training with a model/data configuration $m, n$. Subfigure (b) shows the best-fit estimated error landscape. The surface is a linear interpolation between the points, which is then projected on the model-error $(m, \epsilon)$, data-error $(n, \epsilon)$, and model-data $(m, n)$ planes. The contour plots on each one of these planes are the projections of the error landscape surface, and are useful in considering the behavior of the surface when holding one dimension constant.\nWe call to attention several interesting observations on the datasets explored:\n• As quantified rigorously in section 6, the fits perform well across error ranges. In these surfaces, one also gets a qualitative sense of the fit adequacy across the wide ranges of the dataset and model scales directly. While it is perhaps slightly difficult to assess the surface directly, a helpful view is to consider the similarity between the projections of the actual and estimated surfaces.\n• With increasing model size, the error does indeed typically remain saturated. However, in one of our tested datasets (figure 12) there was a renewed slight increase. We verify that this is indeed over-fitting, in the sense that there is no corresponding increase in the training error.
We note that the functional form we find can actually be used to steer clear of the $(m, n)$ regions where such over-fitting may occur.\n• The simplifying approach taken by considering the random-guess levels (and associated transitions) for small models or small data as identical seems to work fairly well, with some deviation apparent when examining figure 15. Indeed the simplification can hold well for balanced datasets, but need not for imbalanced ones such as in the task of language modeling. Thus, a relaxation of this simplification is expected to be important conceptually and practically.\n[Figures 7–15: measured and estimated error landscape surfaces for each dataset.]" }, { "heading": "D ADDITIONAL EXTRAPOLATION RESULTS", "text": "Here we provide detailed extrapolation results, for all datasets. All figures are structured in a similar way. Each subplot shows estimated (y-axis) vs. actual error (x-axis) (0 to 1 scale on both axes). Each subplot is located at the coordinate of the maximal data and model given for the task of performing the fit to the functional form in equation 5. This is the point at the top-right corner of the green dots in the illustration in figure 6a. The target is to find the error-landscape values for unseen, larger scales of both model and data (red points in the same illustration). Going from left to right in each figure indicates observed measurements of the error from models of an increasing fraction w.r.t. the full size. Going from bottom to top indicates observed measurements of the error from dataset sizes of an increasingly large fraction of the full dataset.\nIn each subplot, every point shows the estimated vs. actual error on a model-data configuration. Points that were given for fitting the function are colored in green, while unseen points that were not used are in red. The red points show the estimation error vs. actual error when extrapolating to all larger models and data sizes. In each subplot, the mean and standard deviation over all divergences δ at target points are given in text.\nEach experiment fit of the parameters was repeated 100 times, with different random initializations of θ. The shaded bands show one standard deviation across these runs.\nThe quality of the extrapolation is critically dependent on the signal provided in the (green) fitted points. Two limiting factors are evident by examining the figures below, which both play a role in the well-posedness of the solution:\n• The proximity to the initial random-guess level. Only upon transitioning from the initial error plateau does meaningful signal about the scaling rates become available. Indeed, for scales still in the region of, or close to, the initial error level, one sees poor extrapolation results; see figures 18, 19, and 21, and the vivid origin of this phenomenon by examining figures 11, 10, and 12.\n• A second source of ill-posedness is tied to the number of configurations used for the estimation of θ. Clearly, when this is small, one cannot expect the extrapolation to be stable. In fact, at least two measurements in each scaling dimension (model/data) are needed, and no less than the number of parameters in θ in total. Indeed, for all the plots in this appendix, the smallest scale of $m, n$ is omitted from the graph such that the lowermost row and leftmost column span exactly two model and data scales correspondingly.
Of course, there is nothing directly tying the number of points to the scale of the configurations measured, and one can decouple these two factors by taking more closely spaced samples at small scale.\n• When both of the above factors are not limiting the measurement, one readily sees that for divergences of no more than a few percent, it is sufficient to measure model/data configurations which are far removed from the configurations which one wishes to extrapolate to.\n[Figures 16–21 each show a grid of estimated-vs.-actual error subplots, indexed by Dataset Fraction (log2(n/N)) on the horizontal axis and Model Fraction (log2(m/M)) on the vertical axis; the per-subplot µ/σ annotations are omitted here.]\nFigure 16: ImageNet extrapolation results.\nFigure 17: CIFAR100 extrapolation results.\nFigure 18: Aircraft extrapolation results.
Figure 19: DTD extrapolation results.\nFigure 20: CIFAR10 extrapolation results.\nFigure 21: UCF101 extrapolation results." } ]
2019
A CONSTRUCTIVE PREDICTION OF THE GENERALIZATION ERROR ACROSS SCALES
SP:f08e59bd838b72a61a2ddcbd9027df8bca75ccea
[ "This submission studies the problem of transfer learning and fine-tuning. This submission proposes four insights: momentum hyperparameters are essential for fine-tuning; when the hyperparameters satisfy certain relationships, the results of fine-tuning are optimal; the similarity between source and target datasets influences the optimal choice of the hyperparameters; and existing regularization methods for DNNs are not effective when the datasets are dissimilar. This submission provides multiple experiments to support these claims.", "This paper studies the role of different hyperparameters in fine-tuning image recognition models on new target tasks. The authors run a large set of experiments and show that, perhaps unsurprisingly, hyperparameters matter. In particular, they show that momentum, which is typically ignored in fine-tuning, is quite important, and that the momentum values that work well depend on the similarity between the source and target datasets. They also show important correlations between momentum, learning rate, and weight decay." ]
Fine-tuning from pre-trained ImageNet models has become the de-facto standard for various computer vision tasks. Current practices for fine-tuning typically involve selecting an ad-hoc choice of hyperparameters and keeping them fixed to values normally used for training from scratch. This paper re-examines several common practices of setting hyperparameters for fine-tuning. Our findings are based on extensive empirical evaluation for fine-tuning on various transfer learning benchmarks. (1) While prior works have thoroughly investigated learning rate and batch size, momentum for fine-tuning is a relatively unexplored parameter. We find that the value of momentum also affects fine-tuning performance and connect it with previous theoretical findings. (2) Optimal hyperparameters for fine-tuning, in particular, the effective learning rate, are not only dataset dependent but also sensitive to the similarity between the source domain and target domain. This is in contrast to hyperparameters for training from scratch. (3) Reference-based regularization that keeps models close to the initial model does not necessarily apply for “dissimilar” datasets. Our findings challenge common practices of fine-tuning and encourage deep learning practitioners to rethink the hyperparameters for fine-tuning.
[ { "affiliations": [], "name": "Hao Li" }, { "affiliations": [], "name": "Pratik Chaudhari" }, { "affiliations": [], "name": "Hao Yang" }, { "affiliations": [], "name": "Michael Lam" }, { "affiliations": [], "name": "Avinash Ravichandran" }, { "affiliations": [], "name": "Rahul Bhotika" }, { "affiliations": [], "name": "Stefano Soatto" } ]
[ { "authors": [ "Alessandro Achille", "Michael Lam", "Rahul Tewari", "Avinash Ravichandran", "Subhransu Maji", "Charless Fowlkes", "Stefano Soatto", "Pietro Perona" ], "title": "Task2vec: Task embedding for meta-learning", "venue": null, "year": 1902 }, { "authors": [ "Liang-Chieh Chen", "George Papandreou", "Iasonas Kokkinos", "Kevin Murphy", "Alan L Yuille" ], "title": "Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs", "venue": "IEEE T-PAMI,", "year": 2017 }, { "authors": [ "Ekin D Cubuk", "Barret Zoph", "Dandelion Mane", "Vijay Vasudevan", "Quoc V Le" ], "title": "Autoaugment: Learning augmentation policies from data", "venue": "arXiv preprint arXiv:1805.09501,", "year": 2018 }, { "authors": [ "Yin Cui", "Yang Song", "Chen Sun", "Andrew Howard", "Serge Belongie" ], "title": "Large scale fine-grained categorization and domain-specific transfer learning", "venue": null, "year": 2018 }, { "authors": [ "Jia Deng", "Alexander C Berg", "Kai Li", "Li Fei-Fei" ], "title": "What does classifying more than 10,000 image categories tell us", "venue": "In ECCV,", "year": 2010 }, { "authors": [ "Jeff Donahue", "Yangqing Jia", "Oriol Vinyals", "Judy Hoffman", "Ning Zhang", "Eric Tzeng", "Trevor Darrell" ], "title": "Decaf: A deep convolutional activation feature for generic visual recognition", "venue": "In ICML,", "year": 2014 }, { "authors": [ "Weifeng Ge", "Yizhou Yu" ], "title": "Borrowing treasures from the wealthy: Deep transfer learning through selective joint fine-tuning", "venue": "In CPVR,", "year": 2017 }, { "authors": [ "Ross Girshick", "Jeff Donahue", "Trevor Darrell", "Jitendra Malik" ], "title": "Rich feature hierarchies for accurate object detection and semantic segmentation", "venue": "In CVPR,", "year": 2014 }, { "authors": [ "Gabriel Goh" ], "title": "Why momentum really works", "venue": "Distill, 2(4):e6,", "year": 2017 }, { "authors": [ "Priya Goyal", "Piotr Dollár", "Ross Girshick", "Pieter Noordhuis", "Lukasz Wesolowski", "Aapo Kyrola", "Andrew Tulloch", "Yangqing Jia", "Kaiming He" ], "title": "Accurate, large minibatch sgd: Training imagenet in 1 hour", "venue": "arXiv preprint arXiv:1706.02677,", "year": 2017 }, { "authors": [ "Gregory Griffin", "Alex Holub", "Pietro Perona" ], "title": "Caltech-256 object category dataset", "venue": null, "year": 2007 }, { "authors": [ "Stephen Jose Hanson", "Lorien Y. 
Pratt" ], "title": "Comparing biases for minimal network construction with back-propagation", "venue": "In NIPS,", "year": 1989 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Identity mappings in deep residual networks", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "Kaiming He", "Ross Girshick", "Piotr Dollár" ], "title": "Rethinking imagenet pre-training", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "John Hertz", "A Krogh", "Richard G Palmer" ], "title": "Introduction to the theory of neural computation", "venue": null, "year": 1991 }, { "authors": [ "Andrew G Howard", "Menglong Zhu", "Bo Chen", "Dmitry Kalenichenko", "Weijun Wang", "Tobias Weyand", "Marco Andreetto", "Hartwig Adam" ], "title": "Mobilenets: Efficient convolutional neural networks for mobile vision applications", "venue": "arXiv preprint arXiv:1704.04861,", "year": 2017 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": null, "year": 2017 }, { "authors": [ "Minyoung Huh", "Pulkit Agrawal", "Alexei A Efros" ], "title": "What makes imagenet good for transfer learning", "venue": "arXiv preprint arXiv:1608.08614,", "year": 2016 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In ICML,", "year": 2015 }, { "authors": [ "Aditya Khosla", "Nityananda Jayadevaprakash", "Bangpeng Yao", "Li Fei-Fei" ], "title": "Novel dataset for fine-grained image categorization", "venue": "In First Workshop on Fine-Grained Visual Categorization,", "year": 2011 }, { "authors": [ "Simon Kornblith", "Jonathon Shlens", "Quoc V Le" ], "title": "Do better imagenet models transfer better", "venue": null, "year": 2019 }, { "authors": [ "Jonathan Krause", "Michael Stark", "Jia Deng", "Li Fei-Fei" ], "title": "3d object representations for fine-grained categorization", "venue": "In 4th International IEEE Workshop on 3D Representation and Recognition (3dRR-13), Sydney, Australia,", "year": 2013 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In NIPS,", "year": 2012 }, { "authors": [ "Anders Krogh", "John A Hertz" ], "title": "A simple weight decay can improve generalization", "venue": "In NIPS,", "year": 1992 }, { "authors": [ "Xingjian Li", "Haoyi Xiong", "Hanchao Wang", "Yuxuan Rao", "Liping Liu", "Jun Huan" ], "title": "Delta: Deep learning transfer using feature map with attention for convolutional networks", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Xuhong Li", "Yves Grandvalet", "Franck Davoine" ], "title": "Explicit inductive bias for transfer learning with convolutional networks", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Tianyi Liu", "Zhehui Chen", "Enlu Zhou", "Tuo Zhao" ], "title": "Toward deeper understanding of nonconvex stochastic optimization with momentum using diffusion approximations", "venue": "arXiv preprint arXiv:1802.05155,", "year": 2018 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Decoupled weight decay regularization", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Dhruv Mahajan", "Ross Girshick", "Vignesh Ramanathan", 
"Kaiming He", "Manohar Paluri", "Yixuan Li", "Ashwin Bharambe", "Laurens van der Maaten" ], "title": "Exploring the limits of weakly supervised pretraining", "venue": null, "year": 2018 }, { "authors": [ "S. Maji", "J. Kannala", "E. Rahtu", "M. Blaschko", "A. Vedaldi" ], "title": "Fine-grained visual classification of aircraft", "venue": "Technical report,", "year": 2013 }, { "authors": [ "Romain Mormont", "Pierre Geurts", "Raphaël Marée" ], "title": "Comparison of deep transfer learning strategies for digital pathology", "venue": "In CVPR Workshops,", "year": 2018 }, { "authors": [ "Yurixi E Nesterov" ], "title": "A method for solving the convex programming problem with convergence rate o (1/kˆ 2)", "venue": "In Dokl. akad. nauk Sssr,", "year": 1983 }, { "authors": [ "Jiquan Ngiam", "Daiyi Peng", "Vijay Vasudevan", "Simon Kornblith", "Quoc V Le", "Ruoming Pang" ], "title": "Domain adaptive transfer learning with specialist models", "venue": "arXiv preprint arXiv:1811.07056,", "year": 2018 }, { "authors": [ "Maria-Elena Nilsback", "Andrew Zisserman" ], "title": "Automated flower classification over a large number of classes", "venue": "In Indian Conference on Computer Vision, Graphics & Image Processing", "year": 2008 }, { "authors": [ "Boris T Polyak" ], "title": "Some methods of speeding up the convergence of iteration methods", "venue": "USSR Computational Mathematics and Mathematical Physics,", "year": 1964 }, { "authors": [ "Maithra Raghu", "Chiyuan Zhang", "Jon Kleinberg", "Samy Bengio" ], "title": "Transfusion: Understanding transfer learning with applications to medical imaging", "venue": "In NIPS,", "year": 2019 }, { "authors": [ "Shaoqing Ren", "Kaiming He", "Ross Girshick", "Jian Sun" ], "title": "Faster r-cnn: Towards real-time object detection with region proposal networks", "venue": "In NIPS,", "year": 2015 }, { "authors": [ "Ali Sharif Razavian", "Hossein Azizpour", "Josephine Sullivan", "Stefan Carlsson" ], "title": "Cnn features off-the-shelf: an astounding baseline for recognition", "venue": "In CVPR workshops,", "year": 2014 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Leslie N Smith" ], "title": "Cyclical learning rates for training neural networks", "venue": "In WACV,", "year": 2017 }, { "authors": [ "Leslie N Smith" ], "title": "A disciplined approach to neural network hyper-parameters: Part 1–learning rate, batch size, momentum, and weight decay", "venue": "arXiv preprint arXiv:1803.09820,", "year": 2018 }, { "authors": [ "Leslie N Smith", "Nicholay Topin" ], "title": "Super-convergence: Very fast training of neural networks using large learning rates", "venue": "In Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications,", "year": 2019 }, { "authors": [ "Samuel L Smith", "Quoc V Le" ], "title": "A bayesian perspective on generalization and stochastic gradient descent", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Samuel L Smith", "Pieter-Jan Kindermans", "Chris Ying", "Quoc V Le" ], "title": "Don’t decay the learning rate, increase the batch size", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Chen Sun", "Abhinav Shrivastava", "Saurabh Singh", "Abhinav Gupta" ], "title": "Revisiting unreasonable effectiveness of data in deep learning era", "venue": null, "year": 2017 }, { "authors": [ "Ilya Sutskever", "James Martens", "George Dahl", "Geoffrey Hinton" ], "title": "On the 
importance of initialization and momentum in deep learning", "venue": "In ICML,", "year": 2013 }, { "authors": [ "Christian Szegedy", "Wei Liu", "Yangqing Jia", "Pierre Sermanet", "Scott Reed", "Dragomir Anguelov", "Dumitru Erhan", "Vincent Vanhoucke", "Andrew Rabinovich" ], "title": "Going deeper with convolutions", "venue": "In CVPR,", "year": 2015 }, { "authors": [ "Grant Van Horn", "Oisin Mac Aodha", "Yang Song", "Yin Cui", "Chen Sun", "Alex Shepard", "Hartwig Adam", "Pietro Perona", "Serge Belongie" ], "title": "The inaturalist species classification and detection", "venue": null, "year": 2018 }, { "authors": [ "Twan van Laarhoven" ], "title": "L2 regularization versus batch and weight normalization", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "C. Wah", "S. Branson", "P. Welinder", "P. Perona", "S. Belongie" ], "title": "The Caltech-UCSD Birds-200-2011 Dataset", "venue": "Technical Report CNS-TR-2011-001, California Institute of Technology,", "year": 2011 }, { "authors": [ "Junyuan Xie", "Tong He", "Zhi Zhang", "Hang Zhang", "Zhongyue Zhang", "Mu Li" ], "title": "Bag of tricks for image classification with convolutional neural networks", "venue": "arXiv preprint arXiv:1812.01187,", "year": 2018 }, { "authors": [ "Jason Yosinski", "Jeff Clune", "Yoshua Bengio", "Hod Lipson" ], "title": "How transferable are features in deep neural networks", "venue": "In NIPS,", "year": 2014 }, { "authors": [ "Guodong Zhang", "Chaoqi Wang", "Bowen Xu", "Roger Grosse" ], "title": "Three mechanisms of weight decay regularization", "venue": "arXiv preprint arXiv:1810.12281,", "year": 2018 }, { "authors": [ "Jian Zhang", "Ioannis Mitliagkas" ], "title": "Yellowfin and the art of momentum tuning", "venue": "arXiv preprint arXiv:1706.03471,", "year": 2017 }, { "authors": [ "Bolei Zhou", "Agata Lapedriza", "Aditya Khosla", "Aude Oliva", "Antonio Torralba" ], "title": "Places: A 10 million image database for scene recognition", "venue": "IEEE T-PAMI,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Many real-world applications often have a limited number of training instances, which makes directly training deep neural networks hard and prone to overfitting. Transfer learning with the knowledge of models learned on a similar task can help to avoid overfitting. Fine-tuning is a simple and effective approach to transfer learning and has become popular for solving new tasks, in which pre-trained models are fine-tuned with the target dataset. Specifically, fine-tuning on pre-trained ImageNet classification models (Simonyan & Zisserman, 2015; He et al., 2016b) has achieved impressive results for tasks such as object detection (Ren et al., 2015) and segmentation (He et al., 2017; Chen et al., 2017) and is becoming the de-facto standard of solving computer vision problems. It is believed that the weights learned on the source dataset with a large number of instances provide better initialization for the target task than random initialization. Even when there is enough training data, fine-tuning is still preferred as it often reduces training time significantly (He et al., 2019).\nThe common practice of fine-tuning is to adopt the default hyperparameters for training large models while using a smaller initial learning rate and a shorter learning rate schedule. It is believed that adhering to the original hyperparameters for fine-tuning with a small learning rate prevents destroying the originally learned knowledge or features. For instance, many studies conduct fine-tuning of ResNets (He et al., 2016b) with these default hyperparameters: learning rate 0.01, momentum 0.9 and weight decay 0.0001. However, the default setting is not necessarily optimal for fine-tuning on other tasks. While a few studies have performed extensive hyperparameter search for learning rate and weight decay (Mahajan et al., 2018; Kornblith et al., 2019), the momentum coefficient is rarely changed. Though the effectiveness of the hyperparameters has been studied extensively for training a model from scratch, how to set the hyperparameters for fine-tuning is not yet fully understood.\n∗Work done while at Amazon Web Services\nIn addition to using ad-hoc hyperparameters, commonly held beliefs for fine-tuning also include:\n• Fine-tuning pre-trained networks outperforms training from scratch; recent work (He et al., 2019) has already revisited this.\n• Fine-tuning from similar domains and tasks works better (Ge & Yu, 2017; Cui et al., 2018; Achille et al., 2019; Ngiam et al., 2018).\n• Explicit regularization with initial models matters for transfer learning performance (Li et al., 2018; 2019).\nAre these practices or beliefs always valid? From an optimization perspective, the difference between fine-tuning and training from scratch is all about the initialization. However, the loss landscapes of the pre-trained model and the fine-tuned solution could be much different, and so could their optimization strategies and hyperparameters. Would the hyperparameters for training from scratch still be useful for fine-tuning? In addition, most of the hyperparameters (e.g., batch size, momentum, weight decay) are frozen; will the conclusions differ when some of them are changed?\nWith these questions in mind, we re-examined the common practices for fine-tuning. We conducted extensive hyperparameter search for fine-tuning on various transfer learning benchmarks with different source models.
The goal of our work is not to obtain state-of-the-art performance on each fine-tuning task, but to understand the effectiveness of each hyperparameter for fine-tuning, avoiding unnecessary computation. We explain why certain hyperparameters work so well on certain datasets while failing on others, which can guide hyperparameter search for fine-tuning.\nOur main findings are as follows:\n• Optimal hyperparameters for fine-tuning are not only dataset dependent, but are also dependent on the similarity between the source and target domains, which is different from training from scratch. Therefore, the common practice of using optimization schedules derived from ImageNet training cannot guarantee good performance. It explains why some tasks are not achieving satisfactory results after fine-tuning because of inappropriate hyperparameter selection. Specifically, as opposed to the common practice of rarely tuning the momentum value beyond 0.9, we find that zero momentum sometimes works better for fine-tuning on tasks that are similar to the source domain, while nonzero momentum works better for target domains that are different from the source domain.\n• Hyperparameters are coupled together and it is the effective learning rate—which encapsulates the learning rate and momentum—that matters for fine-tuning performance. While the effective learning rate has been studied for training from scratch, to the best of our knowledge, no previous work investigates the effective learning rate for fine-tuning, and it is rarely considered in practice. Our observation about momentum can be explained by the fact that small momentum actually decreases the effective learning rate, which is more suitable for fine-tuning on similar tasks. We show that the optimal effective learning rate depends on the similarity between the source and target domains.\n• We find that regularization methods that were designed to keep models close to the initial model do not necessarily work for “dissimilar” datasets, especially for nets with Batch Normalization. Simple weight decay can result in performance as good as the reference-based regularization methods for fine-tuning, with a better search space." }, { "heading": "2 RELATED WORK", "text": "In transfer learning for image classification, the last layer of a pre-trained network is usually replaced with a randomly initialized fully connected layer with the same size as the number of classes in the target task (Simonyan & Zisserman, 2015). It has been shown that fine-tuning the whole network usually results in better performance than using the network as a static feature extractor (Yosinski et al., 2014; Donahue et al., 2014; Huh et al., 2016; Mormont et al., 2018; Kornblith et al., 2019). Ge & Yu (2017) select images that have similar local features from the source domain to jointly fine-tune pre-trained networks. Cui et al. (2018) estimate domain similarity with ImageNet and demonstrate that transfer learning benefits from pre-training on a similar source domain. Besides image classification, many object detection frameworks also rely on fine-tuning to improve over training from scratch (Girshick et al., 2014; Ren et al., 2015).\nMany researchers have re-examined whether fine-tuning is a necessity for obtaining good performance. Ngiam et al. (2018) find that when domains are mismatched, the effectiveness of transfer learning is negative, even when domains are intuitively similar.
Kornblith et al. (2019) examine the fine-tuning performance of various ImageNet models and find a strong correlation between ImageNet top-1 accuracy and transfer accuracy. They also find that pre-training on ImageNet provides minimal benefit for some fine-grained object classification datasets. He et al. (2019) question whether ImageNet pre-training is necessary for training object detectors; they find that training from scratch is no worse than the fine-tuning counterpart as long as the target dataset is large enough. Raghu et al. (2019) find that transfer learning yields a negligible performance boost on medical imaging applications, but speeds up convergence significantly.

There is a large literature on hyperparameter selection for training neural networks from scratch, mostly on batch size, learning rate and weight decay (Goyal et al., 2017; Smith et al., 2018; Smith & Topin, 2019). There are few works on the selection of momentum (Sutskever et al., 2013). Zhang & Mitliagkas (2017) proposed an automatic tuner for momentum and learning rate in SGD. There are also studies on the correlations among hyperparameters, such as the linear scaling rule between batch size and learning rate (Goyal et al., 2017; Smith et al., 2018; Smith, 2017). However, most of these advances in hyperparameter tuning are designed for training from scratch and have not been examined on fine-tuning tasks for computer vision problems. Most work on fine-tuning simply chooses fixed hyperparameters (Cui et al., 2018) or uses dataset-dependent learning rates (Li et al., 2018). Due to the huge computational cost of hyperparameter search, only a few works (Kornblith et al., 2019; Mahajan et al., 2018) performed large-scale grid search of learning rate and weight decay to obtain the best performance." }, { "heading": "3 TUNING HYPERPARAMETERS FOR FINE-TUNING", "text": "In this section, we first introduce the notation and experimental settings, and then present our observations on momentum, effective learning rate and regularization. The fine-tuning process is no different from learning from scratch except for the weight initialization. The goal is still to minimize the objective function $L = \frac{1}{N}\sum_{i=1}^{N} \ell(f(x_i, \theta), y_i) + \frac{\lambda}{2}\|\theta\|_2^2$, where $\ell$ is the loss function, $N$ is the number of samples, $x_i$ is the input data, $y_i$ is its label, $f$ is the neural network, $\theta$ denotes the model parameters and $\lambda$ is the regularization hyperparameter or weight decay. Momentum is widely used for accelerating and smoothing the convergence of SGD by accumulating a velocity vector in the direction of persistent loss reduction (Polyak, 1964; Sutskever et al., 2013; Goh, 2017). The commonly used Nesterov's Accelerated Gradient (Nesterov, 1983) is given by:

$$v_{t+1} = m v_t - \eta_t \frac{1}{n} \sum_{i=1}^{n} \nabla \ell(f(x_i, \theta_t + m v_t), y_i) \quad (1)$$

$$\theta_{t+1} = \theta_t + v_{t+1} - \eta \lambda \theta_t \quad (2)$$

where $\theta_t$ denotes the model parameters at iteration $t$. The hyperparameters include the learning rate $\eta_t$, batch size $n$, momentum coefficient $m \in [0, 1)$, and the weight decay $\lambda$.
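For concreteness, the following is a minimal NumPy sketch of one update per Equations 1–2; the toy gradient function and dimensions are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def nesterov_step(theta, v, grad_fn, lr=0.01, m=0.9, wd=1e-4):
    """One Nesterov update following Eqs. (1)-(2):
    the gradient is evaluated at the look-ahead point theta + m*v."""
    g = grad_fn(theta + m * v)            # mini-batch gradient at look-ahead
    v = m * v - lr * g                    # Eq. (1): velocity update
    theta = theta + v - lr * wd * theta   # Eq. (2): step plus weight decay
    return theta, v

# toy quadratic loss to illustrate usage (hypothetical)
grad_fn = lambda th: 2.0 * th
theta, v = np.ones(5), np.zeros(5)
for _ in range(100):
    theta, v = nesterov_step(theta, v, grad_fn)
```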
" }, { "heading": "3.1 EXPERIMENTAL SETTINGS", "text": "We evaluate fine-tuning on seven widely used image classification datasets, covering fine-grained object recognition, scene recognition and general object recognition. Detailed statistics of each dataset are given in Table 1. We use ImageNet (Russakovsky et al., 2015), Places-365 (Zhou et al., 2018) and iNaturalist (Van Horn et al., 2018) as source domains for pre-trained models. We resize the input images such that the aspect ratio is preserved and the shorter side is 256 pixels. The images are normalized with mean and std values calculated over ImageNet. For data augmentation, we adopt the common practices used for training ImageNet models (Szegedy et al., 2015): random mirror, random scaled cropping with scale and aspect variations, and color jittering. The augmented images are resized to 224×224. Note that state-of-the-art results could achieve even better performance by using higher-resolution images (Cui et al., 2018) or better data augmentation (Cubuk et al., 2018).

We mainly use ResNet-101-V2 (He et al., 2016a) as our base network, pre-trained on ImageNet (Russakovsky et al., 2015). Similar observations are also made on DenseNets (Huang et al., 2017) and MobileNet (Howard et al., 2017). The hyperparameters to be tuned (and their ranges) are: learning rate (0.1, 0.05, 0.01, 0.005, 0.001, 0.0001), momentum (0.99, 0.95, 0.9, 0.8, 0.0) and weight decay (0.0, 0.0001, 0.0005, 0.001). We set the default hyperparameters to batch size 256¹, learning rate 0.01, momentum 0.9 and weight decay 0.0001. To avoid insufficient training and to observe the complete convergence behavior, we use 300 epochs for fine-tuning and 600 epochs for scratch training, which is long enough for the training curves to converge. The learning rate is decayed by a factor of 0.1 at epochs 150 and 250. We report the Top-1 validation (test) error at the end of training. The total computation time for the experiments is more than 10K GPU hours.

¹For each training job with ResNet-101 and batch size 256, we use 8 NVIDIA Tesla V100 GPUs for synchronous training, where each GPU uses a batch of 32 and no SyncBN is used." }, { "heading": "3.2 EFFECT OF MOMENTUM AND DOMAIN SIMILARITY", "text": "Momentum 0.9 is the most widely used value for training from scratch (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; He et al., 2016b) and is also widely adopted for fine-tuning (Kornblith et al., 2019). To the best of our knowledge, it is rarely changed, regardless of the network architecture or target task. To check the influence of momentum on fine-tuning, we first search for the best momentum value for fine-tuning on the Birds dataset with different weight decays and learning rates. Figure 1(a) shows the performance of fine-tuning with and without weight decay. Surprisingly, momentum zero actually outperforms nonzero momentum. The optimal learning rate also increases when momentum is disabled, as shown in Figure 1(b).

To verify this observation, we further compare momentum 0.9 and 0.0 on other datasets. Table 2 shows the performance of 8 hyperparameter settings on 7 datasets. We observe a clear pattern: disabling momentum works better for Dogs, Caltech and Indoor, while momentum 0.9 works better for Cars, Aircrafts and Flowers.

Interestingly, datasets such as Dogs, Caltech, Indoor and Birds are known to have high overlap with the ImageNet dataset², while Cars and Aircrafts have been identified as datasets that benefit little from fine-tuning pre-trained ImageNet models (Kornblith et al., 2019). According to Cui et al. (2018), in which the Earth Mover's Distance (EMD) is used to calculate the similarity between ImageNet and other domains, the similarities to Dogs and Birds are 0.619 and 0.563, while the similarities to Cars, Aircrafts and Flowers are 0.560, 0.556 and 0.525³.
The relative order of similarities to ImageNet is

Dogs, Birds, Cars, Aircrafts and Flowers,

which aligns well with the transition of the optimal momentum value from 0.0 to 0.9. Following the same similarity calculation, we also verified that Caltech and Indoor are closer to ImageNet than Cars/Aircrafts/Flowers (Table 3.3).

To verify the connection between momentum and domain similarity, we further fine-tune from different source domains such as Places-365 and iNaturalist, which are known to be better source domains than ImageNet for fine-tuning on the Indoor and Birds datasets (Cui et al., 2018). We may expect that fine-tuning from iNaturalist works better for Birds with m = 0 and, similarly, Places for Indoor. Indeed, as shown in Table 3, disabling momentum improves the performance when the source and target domains are similar, such as Places for Indoor and iNaturalist for Birds.

Small momentum works better for fine-tuning on domains that are close to the source domain. One explanation for the above observations is that because the Dogs dataset is very close to ImageNet, the pre-trained ImageNet model is expected to be close to the fine-tuned solution on the Dogs dataset. In this case, momentum may not help much, as the gradient direction around the minimum can be largely random, and accumulating the momentum direction could be meaningless. Whereas, for faraway target domains (e.g., Cars and Aircrafts), where the pre-trained ImageNet model could be very different from the fine-tuned solution, the fine-tuning process is more similar to training from scratch, where large momentum stabilizes the descent directions towards the minimum. An illustration of the difference can be found in Figure 2.

²Stanford Dogs (Khosla et al., 2011) was built using images and annotations from ImageNet for the task of fine-grained image categorization. Caltech-256 has at least 200 categories that exist in ImageNet (Deng et al., 2010). Images in the CUB-Birds dataset overlap with images in ImageNet.
³The domain similarity calculation is discussed in Appendix B and the exact values can be found in Table 3.3.

Connections to early observations on decreasing momentum. Early work (Sutskever et al., 2013) pointed out that reducing momentum during the final stage of training allows finer convergence, whereas aggressive momentum prevents this. They recommended reducing momentum from 0.99 to 0.9 in the last 1000 parameter updates, but not disabling it completely. Recent work (Liu et al., 2018; Smith, 2018) showed that a large momentum helps escape saddle points but can hurt final convergence within the neighborhood of the optima, implying that momentum should be reduced at the end of training. Liu et al. (2018) find that a larger momentum introduces higher variance of noise and encourages more exploration at the beginning of optimization, and more aggressive exploitation at the end of training. They suggest that at the final stage of step-size annealing, momentum SGD should use a much smaller step size than vanilla SGD. Applied to fine-tuning, we can interpret this as follows: if the pre-trained model lies in the neighborhood of the optimal solution on the target dataset, momentum should be small. Our work provides empirical evidence that disabling momentum helps final convergence, and fine-tuning on close domains is a good exemplar."
}, { "heading": "3.3 COUPLED HYPERPARAMETERS AND THE VIEW OF EFFECTIVE LEARNING RATE", "text": "Now that we had discovered the effect of momentum by fixing other hyperparameters and only allowed momentum to change. But note that the two difficult scenarios shown in Figure 2 (b) and (c) might also be mitigated by increasing or decreasing learning rate. That is, hyperparameters are coupled and varying one hyperparameter can change the optimal values of the other hyperparameters that lead to the best performance. In addition, optimal values of certain hyperparameters depend on the values of other hyperparameters in systematic ways. For example, learning rate is entangled with batch size, momentum and weight decay. There is a notion of effective learning rate (ELR) (Hertz et al., 1991; Smith et al., 2018; Smith & Le, 2018) for SGD with momentum: η′ = η/(1−m), which was shown to be more closely related with training dynamics and final performance rather than η. The effective learning rate with m = 0.9 is 10× higher than the one with m = 0.0 if other hyperparameters are fixed, which is probably why we see an increase in optimal learning rate when momentum is disabled in Figure 1(b) and Appendix A.\nIt is the effective learning rate that matters for fine-tuning performance Because hyperparameters are coupled, looking at the performance with only one hyperparameter varied may give a\nmisleading understanding of the effect of hyperparameters. Therefore, to examine the effect of momentum, we should report the best result obtainable with and without momentum, as long as other hyperparameters explored are sufficiently explored. We re-examine previous experiments that demonstrated the importance of momentum tuning when the ELR η′ = η/(1 − m) is held fixed instead of simply fixing learning rate η. Figure 3 shows that when η′ is constant, the best performance obtained by m = 0.9 and m = 0 are almost equivalent when other hyperparameters are allowed to change. However, different ELR does result in different performance, which indicates its importance for the best performance. It explains why the common practice of changing only learning rate generally works, though changing momentum may results in the same result, they both change the ELR. In fact, as long as the initial learning rate is small enough, we can always search for the optimal momentum as it is an amplifier, making the ELR larger by a factor of 1/(1−m). Therefore, momentum does determine the search ranges of learning rate.\nOptimal ELR depends on the similarity between source domain and target domain Now that we have shown ELR is critical for fine-tuning performance, we are interested in the factors that determine the optimal ELR for a given task. Previous work (Smith & Le, 2018) found that there is an optimum ELR which maximizes the test accuracy. However, the observations are only based on scratch training on small datasets (e.g., CIFAR-10); the relationship between ELR and domain similarity, especially for fine-tuning, is still unexplored. To examine this, we search the best ELR on each fine-tuning task and reports in Fig. 4 the best validation error obtained by each ELR while allowing other hyperparameters to change. It shows the optimal ELR depends on both source domain and target domain. As shown in Fig. 4 (a-c), the optimal ELR for Dogs/Caltech/Indoor are much smaller than these for Aircrafts/Flowers/Cars when fine-tuned from ImageNet pre-trained model. Similar observations can be made on DenseNets and MobileNet. 
Optimal ELR selection based on domain similarity. So far we have made qualitative observations about the relationship between domain similarity and optimal ELR. A quantitative characterization of this relationship could reduce the hyperparameter search ranges for HPO, or even eliminate HPO by accurately predicting hyperparameters. We follow the domain similarity calculation of Cui et al. (2018) and recalculate similarity scores for all source-target domain pairs. Note that the original calculation in Cui et al. (2018) uses pre-trained JFT (Sun et al., 2017) models as the feature extractor, and these are not publicly available; we instead use an ImageNet pre-trained model or the source model as the feature extractor. As shown in Table 4, there is a good correlation between the domain similarity score and the scale of the optimal ELR: generally, the more similar the two domains, the smaller the optimal ELR. Though the optimal ELR does not correspond strictly to the domain similarity score, the score provides a reasonable prediction of its scale, such as [0.001, 0.01], [0.01, 0.1] or [0.1, 1.0], and can therefore reduce the search space for the optimal ELR. Based on this correlation, a simple strategy can be developed for optimal ELR selection given a frequently used source model: calculate domain similarities and perform exhaustive hyperparameter searches for a few reference datasets, including similar and dissimilar ones. Then, given a new dataset to fine-tune, calculate its domain similarity, compare with the scores of the reference datasets, and choose the ELR range of the reference dataset with the closest domain similarity (sketched below).
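To make the reference-dataset strategy concrete, here is a minimal sketch; the similarity scores echo the values quoted in Section 3.2, but the pairing of datasets with ELR ranges and the helper itself are hypothetical illustrations, not results from the paper.

```python
# Hypothetical reference table: (similarity to source, searched ELR range).
REFERENCES = {
    "dogs":      (0.619, (0.001, 0.01)),   # similar to ImageNet -> small ELR
    "birds":     (0.563, (0.01, 0.1)),
    "aircrafts": (0.556, (0.1, 1.0)),      # dissimilar -> large ELR
}

def elr_range_for(new_similarity: float):
    """Pick the ELR search range of the reference dataset whose
    domain-similarity score is closest to the new dataset's score."""
    name, (_, rng) = min(REFERENCES.items(),
                         key=lambda kv: abs(kv[1][0] - new_similarity))
    return name, rng

print(elr_range_for(0.60))  # -> ('dogs', (0.001, 0.01))
```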
Weight decay and learning rate. The relationship between weight decay and effective learning rate has recently been well studied (van Laarhoven, 2017; Zhang et al., 2018; Loshchilov & Hutter, 2018). It was shown that the effect of weight decay on models with BN layers is equivalent to increasing the ELR by shrinking the weight scales, i.e., $\eta' \sim \eta / \|\theta\|_2^2$. If an optimal effective learning rate exists, the optimal weight decay value $\lambda$ is inversely related to the optimal learning rate $\eta$. The 'effective' weight decay is $\lambda' = \lambda / \eta$. We show in Figure 5 that the optimal effective weight decay is also correlated with domain similarity." }, { "heading": "3.4 THE CHOICE OF REGULARIZATION", "text": "L2 regularization or weight decay is widely used for constraining model capacity (Hanson & Pratt, 1989; Krogh & Hertz, 1992). Recently, Li et al. (2018; 2019) pointed out that standard L2 regularization, which drives the parameters towards the origin, is not adequate for transfer learning. To retain the knowledge learned by the pre-trained model, reference-based regularization is used to regularize the distance between the fine-tuned weights and the pre-trained weights, so that the fine-tuned model is not too different from the initial model. Li et al. (2018) propose the L2-SP norm, i.e., $\frac{\lambda_1}{2}\|\theta' - \theta^0\|_2^2 + \frac{\lambda_2}{2}\|\theta''\|_2^2$, where $\theta'$ refers to the part of the network shared with the source network and $\theta''$ refers to the novel part, e.g., the last layer with a different number of neurons (a sketch of this penalty appears at the end of this section). While the motivation is intuitive, there are several issues with adopting reference-based regularization for fine-tuning:

• Many applications actually fine-tune on target domains that are quite different from the source domain, such as fine-tuning ImageNet models for medical imaging (Mormont et al., 2018; Raghu et al., 2019). The fine-tuned model does not necessarily have to be close to the initial model.

• The scale invariance introduced by Batch Normalization (BN) (Ioffe & Szegedy, 2015) layers enables models with different parameter scales to function identically, i.e., f(θ) = f(αθ). Therefore, even when L2 regularization drives $\|\theta\|_2^2$ towards zero, the model can still have the same functionality as the initial model. Conversely, a model can still be different even when the L2-SP norm is small.

• L2-SP regularization constrains $\theta'$ to be close to $\theta^0$, so that $\|\theta\|_2^2$ is relatively stable in comparison with L2 regularization. Given that the ELR is approximately proportional to $\eta / \|\theta\|_2^2$ and a smaller ELR is beneficial for fine-tuning from similar domains, this may explain why L2-SP provides better performance. If this is true, then simply decreasing the initial ELR should let the L2 norm function the same.

To examine these conjectures, we revisited the work of Li et al. (2018) with additional experiments. To show the effectiveness of the L2-SP norm, the authors conducted experiments on datasets such as Dogs, Caltech and Indoor, which are all close to the source domain (ImageNet or Places-365). We extend their experiments by fine-tuning on both “similar” and “dissimilar” datasets, including Birds, Cars, Aircrafts and Flowers, with both L2 and L2-SP regularization (details in Appendix D). For a fair comparison, we perform the same hyperparameter search for both methods. As expected, Table 5 shows that L2 regularization is very competitive with L2-SP on Birds, Cars, Aircrafts and Flowers, which indicates that reference-based regularization may not generalize well for fine-tuning on dissimilar domains.

We also check the change of the regularization terms during training for both methods, as well as their best hyperparameters. As shown in Figure 6, L2 regularization usually decreases the weight norm more aggressively, depending on the value of $\lambda$, while L2-SP regularization keeps the norm relatively unchanged. We can see that the optimal learning rate of L2 regularization is mostly smaller than that of L2-SP, which may compensate for the decreased weight norm, i.e., the increased ELR. Interestingly, for the Dogs dataset, both regularization terms grow much larger after a few iterations and then become stable, which means that constraining the weights to be close to the initialization is not necessarily the reason L2-SP works, even for close domains. This also seems to contradict the previous finding (Zhang et al., 2018) that weight decay functions by increasing the ELR through decreasing weight norms. However, it might be reasonable, since a large norm actually decreases the ELR, which could be helpful given the close domain similarity between Dogs and ImageNet.
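As a concrete illustration of the penalty compared against plain weight decay above, here is a minimal PyTorch-style sketch of the L2-SP norm, assuming `model` and a frozen reference copy `pretrained` with matching parameter names; it is a sketch of the formula, not the released code linked in Appendix D.

```python
import torch

def l2_sp_penalty(model, pretrained, lambda1=0.01, lambda2=0.01):
    """L2-SP: penalize the distance of shared weights to the pre-trained
    reference, plus a plain L2 term on the novel (new) parameters."""
    ref = dict(pretrained.named_parameters())
    shared, novel = 0.0, 0.0
    for name, p in model.named_parameters():
        # theta': shared with the source net (same name and shape);
        # a renamed or reshaped head (e.g., the new last layer) is theta''.
        if name in ref and p.shape == ref[name].shape:
            shared = shared + (p - ref[name].detach()).pow(2).sum()
        else:
            novel = novel + p.pow(2).sum()
    return 0.5 * lambda1 * shared + 0.5 * lambda2 * novel

# usage: loss = task_loss + l2_sp_penalty(model, pretrained_copy)
```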
}, { "heading": "4 DISCUSSION", "text": "The two extreme ways for selecting hyperparameters—performing exhaustive hyperparameter search or taking ad-hoc hyperparameters from scratch training—could be either too computationally expensive or yield inferior performance. Different from training from scratch, where the default hyperparameter setting may work well for random initialization, the choice of hyperparameters for fine-tuning is not only dataset dependent but is also influenced by the similarity between the target and source domains. The rarely tuned momentum value could also improve or impede the performance when the target domain and source domain are close given insufficiently searched learning rate. These observations connect with previous theoretical works on decreasing momentum at the end of training and effective learning rate. We further identify that the optimal effective learning rate correlates with the similarity between the source and target domains. With this understanding, one can significantly reduce the hyperparameter search space. We hope these findings could be one step towards better and efficient hyperparameter selection for fine-tuning." }, { "heading": "ACKNOWLEDGMENTS", "text": "The authors would like to thank all anonymous reviewers for their valuable feedback." }, { "heading": "A THE EFFECTIVENESS OF MOMENTUM", "text": "Searching for Optimal Momentum To check the effectiveness of momentum on fine-tuning, we can search the best momentum values for fine-tuning with fixed learning rate but different weight decay and batch size. Taking Birds dataset as an example, Figure 7 provides the convergence curves for the results shown in Figure 1(a), which shows the learning curves of fine-tuning with 6 different batch sizes and weight decay combinations. Zero momentum outperforms the nonzero momentum in 5 out of 6 configurations.\nEffective learning rate increases after disabling momentum. Figure 8 compares the performance of with and without momentum for Dogs dataset with a range of different learning rates. Note that the learning rate with similar performance generally increases 10x after changing m from 0.9 to 0.0, which is coherent with the rule of effective learning rate η′ = η/(1−m). Same observations can be made on other datasets as shown in Figure 9.\n0 50 100 150 200 250 300 Epochs\n0\n10\n20\n30\n40\n50\n60\nEr ro\nr\nresnet101_v2, n = 256, m = 0.9, = 0.0001\n= 0.1, 20.69 = 0.05, 19.07 = 0.01, 14.85 = 0.005, 13.42 = 0.001, 12.07 = 0.0005, 11.64 = 0.0001, 14.70\n(a) Caltech, m = 0.9 0 50 100 150 200 250 300 Epochs\n0\n10\n20\n30\n40\n50\n60\nEr ro\nr\nresnet101_v2, n = 256, m = 0.0, = 0.0001\n= 0.1, 14.67 = 0.05, 13.29 = 0.01, 12.11 = 0.005, 11.86 = 0.001, 14.62 = 0.0005, 19.39 = 0.0001, 81.26\n(b) Caltech, m = 0.0 0 50 100 150 200 250 300 Epochs\n0\n10\n20\n30\n40\n50\n60\nEr ro\nr\nresnet101_v2, n = 256, m = 0.9, = 0.0001\n= 0.1, 27.29 = 0.05, 25.64 = 0.01, 23.76 = 0.005, 24.59 = 0.001, 22.34 = 0.0005, 21.29 = 0.0001, 29.39\n(c) Indoor, m = 0.9 0 50 100 150 200 250 300 Epochs\n0\n10\n20\n30\n40\n50\n60\nEr ro\nr\nresnet101_v2, n = 256, m = 0.0, = 0.0001\n= 0.1, 23.46 = 0.05, 22.04 = 0.01, 21.14 = 0.005, 21.96 = 0.001, 29.69 = 0.0005, 41.00 = 0.0001, 88.23\n(d) Indoor, m = 0.0" }, { "heading": "B DOMAIN SIMILARITY", "text": "The domain similarity calculation based on Earth Mover Distance (EMD) is introduced in the section 4.1 of (Cui et al., 2018)4. Here we briefly introduce the steps. 
In Cui et al. (2018), the authors first train a ResNet-101 on the large-scale JFT dataset (Sun et al., 2017) and use it as a feature extractor. They extract features from the penultimate layer of the model for each image in the training sets of the source and target domains; for ResNet-101, the feature vector has length 2048. The features of images belonging to the same category are averaged: $g(s_i)$ denotes the average feature vector of the $i$th label in source domain $S$, and similarly $g(t_j)$ denotes the average feature vector of the $j$th label in target domain $T$. The distance between the averaged features of two labels is $d_{i,j} = \|g(s_i) - g(t_j)\|$. Each label is associated with a weight $w \in [0, 1]$ corresponding to the percentage of images with this label in the dataset. So the source domain $S$ with $m$ labels and the target domain $T$ with $n$ labels can be represented as $S = \{(s_i, w_{s_i})\}_{i=1}^{m}$ and $T = \{(t_j, w_{t_j})\}_{j=1}^{n}$. The EMD between the two domains is defined as

$$d(S, T) = \mathrm{EMD}(S, T) = \frac{\sum_{i=1,j=1}^{m,n} f_{i,j}\, d_{i,j}}{\sum_{i=1,j=1}^{m,n} f_{i,j}} \quad (3)$$

where the optimal flow $f_{i,j}$ corresponds to the least amount of total work, obtained by solving the EMD optimization problem. The domain similarity is defined as

$$\mathrm{sim}(S, T) = e^{-\gamma d(S, T)} \quad (4)$$

where $\gamma$ is 0.01. Note that the domain similarity value does not span the full range from 0 to 1.

Due to the unavailability of the large-scale JFT dataset (300× larger than ImageNet) and its pre-trained ResNet-101 model, we cannot use it for extracting features on new datasets such as Caltech-256 and MIT67-Indoor. Instead of this powerful feature representation, we use our pre-trained ImageNet model (ResNet-101) as the feature extractor. Table 4 compares the domain similarities calculated by different pre-trained models, and we can see consistent patterns across architectures: e.g., the 1st and 2nd highest similarity scores are Caltech and Dogs regardless of architecture; the 3rd and 4th highest refer to Birds and Indoor; the most dissimilar datasets are Cars, Aircrafts and Flowers, though their relative order is not exactly the same. Besides using a fixed feature extractor, an alternative is to use the source-domain model directly as the feature extractor for both the source and target domains, which may capture the transfer learning process more precisely than a uniform feature extractor.

⁴The extracted features and code are available at https://github.com/richardaecn/cvpr18-inaturalist-transfer
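The steps above can be condensed into a short sketch. This is an illustrative reimplementation that assumes per-class mean features and class frequencies have already been computed, and that the POT (Python Optimal Transport) package provides the EMD solver; it is not the code released by Cui et al. (2018).

```python
import numpy as np
import ot  # POT: Python Optimal Transport (assumed dependency)

def domain_similarity(g_s, w_s, g_t, w_t, gamma=0.01):
    """EMD-based domain similarity of Eqs. (3)-(4).

    g_s: (m, 2048) per-class mean features of the source domain
    w_s: (m,) class frequencies; likewise g_t (n, 2048) and w_t (n,).
    """
    w_s = np.asarray(w_s, dtype=np.float64); w_s = w_s / w_s.sum()
    w_t = np.asarray(w_t, dtype=np.float64); w_t = w_t / w_t.sum()
    # d_{i,j} = || g(s_i) - g(t_j) ||
    M = np.linalg.norm(g_s[:, None, :] - g_t[None, :, :], axis=-1)
    # With weights summing to 1, the optimal flows also sum to 1,
    # so ot.emd2 returns exactly the normalized EMD of Eq. (3).
    d = ot.emd2(w_s, w_t, M)
    return np.exp(-gamma * d)  # Eq. (4)
```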
" }, { "heading": "C THE EFFECTIVENESS OF BN MOMENTUM", "text": "Kornblith et al. (2019) conducted extensive fine-tuning experiments with different hyperparameters. One observation they made is that the momentum parameter of the BN layer is essential for fine-tuning. They found it useful to decrease the BN momentum parameter from its ImageNet value to max(1 − 10/s, 0.9), where s is the number of steps per epoch. This changes the default BN momentum value (0.9) when s is larger than 100, but with batch size 256 it only applies when the dataset size is larger than 25.6K. The maximum dataset size used in our experiments is Caltech-256, which is 15K, so this strategy is not applicable here.

We further validate the effect of BN momentum by performing a study similar to the one for the ELR. The goal is to identify whether there is an optimal BN momentum for a given task. For each dataset, we fine-tune the pre-trained model using the previously obtained best hyperparameters and vary only the BN momentum. In addition to the default value 0.9, we also set it to 0.0, 0.95 and 0.99. The rationale is that if BN momentum is a critical hyperparameter, we should expect significant performance differences when its value is changed from the optimum. As shown in Figure 10, $m_{bn}$ = 0.99 slightly improves the performance for some datasets; however, there is no significant performance difference among values greater than 0.9. One hypothesis is that similar domains share similar BN parameters and statistics, and BN momentum may affect how these parameters adapt. More investigation is still needed to fully understand its effectiveness.

D EXPERIMENTAL SETTINGS FOR COMPARISON OF L2 AND L2-SP

The experiments in Section 3.4 are based on the code⁵ provided by Li et al. (2018). The base network is an ImageNet pre-trained ResNet-101-V1. The model is fine-tuned with batch size 64 for 9000 iterations, and the learning rate is decayed once at iteration 6000. Following the original setting, we use momentum 0.9. We performed a grid search on learning rate and weight decay over $\eta \in \{0.02, 0.01, 0.005, 0.001, 0.0001\}$ and $\lambda_1 \in \{0.1, 0.01, 0.001, 0.0001\}$, and report the best average class error (1 − average accuracy) for both methods. For the L2-SP norm, we follow the authors' setting of a constant $\lambda_2 = 0.01$. Differing from the original setting for L2 regularization, we set $\lambda_2 = \lambda_1$ to simulate the normal L2 norm.

⁵https://github.com/holyseven/TransferLearningClassification" }, { "heading": "E DATA AUGMENTATION", "text": "Data augmentation is an important way of increasing data quantity and diversity to make models more robust, and it is even more critical for transfer learning with few instances. The effect of data augmentation can be viewed as regularization, and the choice of data augmentation can also be viewed as a hyperparameter. Most widely used data augmentation methods have verified their effectiveness for training ImageNet models, e.g., random mirror flipping, random rescaled cropping⁶ and color jittering (Szegedy et al., 2015; Xie et al., 2018).

⁶Randomly crop a rectangular region with aspect ratio randomly sampled in [3/4, 4/3] and area randomly sampled in [8%, 100%] (Szegedy et al., 2015).

Do these methods transfer to fine-tuning on other datasets? Here we compare three data augmentation settings under different momentum settings (sketched below): 1) random resized cropping: our default data augmentation; 2) random cropping: the same as the default except that we use random cropping with a fixed size; 3) random flip: simply random horizontal flipping. The training and validation errors of fine-tuning with the different data augmentation strategies and hyperparameters are shown in Figure 11 and Figure 12.
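As a concrete illustration of the three settings, here is a minimal torchvision sketch; the crop parameters follow footnote 6 where stated, the 256-pixel resize and ImageNet normalization mirror Section 3.1, and anything else is a common default rather than a value confirmed by the paper.

```python
from torchvision import transforms

normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet stats
                                 std=[0.229, 0.224, 0.225])

# 1) random resized cropping (footnote 6: area in [8%, 100%], ratio in [3/4, 4/3])
rand_resized_crop = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.08, 1.0), ratio=(3/4, 4/3)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(), normalize])

# 2) fixed-size random cropping from the 256-pixel-short-side image
rand_crop = transforms.Compose([
    transforms.Resize(256), transforms.RandomCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(), normalize])

# 3) random horizontal flip only
rand_flip = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(), normalize])
```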
The effect of data augmentation is dataset dependent and is also influenced by other hyperparameters. The first row in Figure 11 shows that advanced data augmentation with the default hyperparameters (m = 0.9 and η = 0.01) leads to overfitting on Dogs while generalizing better on Aircrafts and Flowers. Similar observations can be made in Figure 12. However, when momentum is disabled, the overfitting disappears for Dogs and Caltech. This is explainable: random resized cropping adds more variance to the gradient direction, and disabling momentum leads to a smaller ELR, which is helpful for fine-tuning from a similar domain. On the other hand, the performance of random cropping decreases when momentum is disabled. As random cropping produces training samples with less variation than random resized cropping, disabling momentum or decreasing the ELR might lead to underfitting or getting stuck in poor local minima. This can be mitigated by increasing the learning rate for random cropping, which adds variation to the gradients. As shown in Table 6, when the learning rate is increased from 0.01 to 0.05, disabling momentum shows better performance than nonzero momentum on datasets that are close, similar to the previous findings with random resized cropping.

[Figure: fine-tuning error curves over 300 epochs under the three augmentation settings (η = 0.01, n = 256), for panels (a) Caltech, m = 0.9; (b) Caltech, m = 0.0; (c) Indoor, m = 0.9; (d) Indoor, m = 0.0.]" }, { "heading": "F SOURCE DOMAINS", "text": "Pre-trained models. For most of our experiments, we use the pre-trained ResNet-101_v2 model from the MXNet GluonCV model zoo⁷. To get the pre-trained models for iNat-2017 and Places-365, we fine-tune from the ImageNet pre-trained model with the default fine-tuning hyperparameters for 60 epochs, where the learning rate is decayed at epoch 45 by a factor of 10. Table 7 reports the Top-1 error of each pre-trained model on its validation set.

Training from scratch with HPO. The default hyperparameters for training from scratch are η = 0.1, λ = 0.0001, m = 0.9 and n = 256. We train for 600 epochs and decay the learning rate at epochs 400 and 550 by a factor of 10. To perform hyperparameter optimization (HPO), we search over η ∈ {0.1, 0.2, 0.5} and λ ∈ {0.0001, 0.0005}. Figure 13 shows the training/validation errors of training from scratch on each dataset with different learning rates and weight decays. We observe that weight decay 0.0005 consistently performs better than 0.0001.

Insufficient hyperparameter search may lead to misleading conclusions. To show the importance of hyperparameter tuning, Table 8 compares the performance with and without hyperparameter tuning for both fine-tuning and training from scratch. With the default hyperparameters, some inappropriate conclusions might be drawn, e.g., “there is a significant gap between fine-tuning and training from scratch”, “fine-tuning always surpasses training from scratch” or “fine-tuning from iNat cannot beat the performance of ImageNet”. However, with HPO, these statements may not hold. For example, training from scratch surpasses the default fine-tuning result on Cars and Aircrafts, and the gap between fine-tuning and training from scratch is much smaller. Previous studies (Kornblith et al., 2019; Cui et al., 2018) also identified that datasets like Cars and Aircrafts do not benefit much from fine-tuning." } ]
2020
null
SP:e2bc61c78d53d0b72fcc5cde34368e88290371b6
[ "The paper argues for the use of attractive networks (AN) for the tasks that involve learning from noisy data. Attractor networks are recurrent in nature and use energy minimization dynamics. As motivation, the authors point to studies that give evidence for the usefulness of recurrence for visual tasks. The experiments presented show that the proposed model produces better quality images than a VAE based baseline.", "This paper presents an attractor network (AN) approach for pattern interpretation and completion. The authors propose a convolutional bipartite architecture consisting of visible (input and output) and hidden layers with weight constraints and squared and energy-based losses. To prevent vanishing/exploding gradients, temporal-difference method and leaky sigmoid activation function are exploited. Training is done by stochastic gradient descent. In experimental validation, the proposed model is able to reconstruct missing pixels in the images for bar task and supervised MNIST. And, in OMNIGLOT and CIFAR-10 experiments, the proposed approach outperforms its variants, and in super-resolution results, it outperforms the baselines. " ]
In human perception and cognition, a fundamental operation that brains perform is interpretation: constructing coherent neural states from noisy, incomplete, and intrinsically ambiguous evidence. The problem of interpretation is well matched to an early and often overlooked architecture, the attractor network—a recurrent neural net that performs constraint satisfaction, imputation of missing features, and clean up of noisy data via energy minimization dynamics. We revisit attractor nets in light of modern deep learning methods and propose a convolutional bipartite architecture with a novel training loss, activation function, and connectivity constraints. We tackle larger problems than have been previously explored with attractor nets and demonstrate their potential for image completion and super-resolution. We argue that this architecture is better motivated than ever-deeper feedforward models and is a viable alternative to more costly sampling-based generative methods on a range of supervised and unsupervised tasks.
[]
[ { "authors": [ "G. Alain", "Y. Bengio", "S. Rifai" ], "title": "Regularized auto-encoders estimate local statistics", "venue": "CoRR, abs/1211.4246,", "year": 2012 }, { "authors": [ "L.B. Almeida" ], "title": "A learning rule for asynchronous perceptrons with feedback in a combinatorial environment", "venue": "In IEEE First International Conference on Neural Networks,", "year": 1987 }, { "authors": [ "D.J. Amit" ], "title": "Modeling brain function: The world of attractor neural networks", "venue": null, "year": 1992 }, { "authors": [ "M. Bevilacqua", "A. Roumy", "C. Guillemot", "M.L. Alberi-Morel" ], "title": "Low-complexity single-image super-resolution based on nonnegative neighbor embedding", "venue": "In British Machine Vision Conference. BMVA press,", "year": 2012 }, { "authors": [ "R. Chaudhuri", "I. Fiete" ], "title": "Associative content-addressable networks with exponentially many robust stable states", "venue": "arXiv preprint arXiv:1704.02019 q-bio.NC,", "year": 2017 }, { "authors": [ "L. Dinh", "J. Sohl-Dickstein", "S. Bengio" ], "title": "Density estimation using Real NVP", "venue": "arXiv preprint arXiv:1605.08803 cs.LG,", "year": 2016 }, { "authors": [ "Y. Du", "I. Mordatch" ], "title": "Implicit generation and generalization in energy-based models", "venue": "arXiv preprint arXiv:1903.08689 cs.LG,", "year": 2019 }, { "authors": [ "T. Han", "E. Nijkamp", "X. Fang", "M. Hill", "S.-C. Zhu", "Y. Nian Wu" ], "title": "Divergence triangle for joint training of generator model, energy-based model, and inference model", "venue": "arXiv e-prints, art", "year": 2018 }, { "authors": [ "G.E. Hinton", "R.R. Salakhutdinov" ], "title": "Reducing the dimensionality of data with neural networks", "venue": null, "year": 2006 }, { "authors": [ "S. Hochreiter", "Y. Bengio", "P. Frasconi" ], "title": "Gradient flow in recurrent nets: The difficulty of learning long-term dependencies", "venue": null, "year": 2001 }, { "authors": [ "J.J. Hopfield" ], "title": "Neural networks and physical systems with emergent collective computational abilities", "venue": "Proceedings of the National Academy of Sciences,", "year": 1982 }, { "authors": [ "J.J. Hopfield" ], "title": "Neurons with graded response have collective computational properties like those of two-state neurons", "venue": "Proceedings of the National Academy of Sciences,", "year": 1984 }, { "authors": [ "J.-B. Huang", "A. Singh", "N. Ahuja" ], "title": "Single image super-resolution from transformed self-exemplars", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2015 }, { "authors": [ "D.J. Im", "S. Ahn", "R. Memisevic", "Y. Bengio" ], "title": "Denoising criterion for variational auto-encoding framework", "venue": "In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "K. Kar", "J. Kubilius", "K. Schmidt", "E.B. Issa", "J.J. DiCarlo" ], "title": "Evidence that recurrent circuits are critical to the ventral stream’s execution of core object recognition behavior", "venue": "Nature neuroscience,", "year": 2019 }, { "authors": [ "J. Kim", "J. Kwon Lee", "K. Mu Lee" ], "title": "Deeply-recursive convolutional network for image superresolution", "venue": "In Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "D.P. Kingma", "M. Welling" ], "title": "Auto-encoding variational Bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "P. 
Koiran" ], "title": "Dynamics of discrete time, continuous state Hopfield networks", "venue": "Neural Computation,", "year": 1994 }, { "authors": [ "D. Krotov", "J.J. Hopfield" ], "title": "Dense associative memory for pattern recognition", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "W.-S. Lai", "J.-B. Huang", "N. Ahuja", "M.-H. Yang" ], "title": "Fast and accurate image super-resolution with deep Laplacian pyramid networks", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2018 }, { "authors": [ "B.M. Lake", "R. Salakhutdinov", "J.B. Tenenbaum" ], "title": "Human-Level Concept Learning through Probabilistic Program Induction", "venue": "Science, 350(6266):1332–1338,", "year": 2015 }, { "authors": [ "A. Lamb", "J. Binas", "A. Goyal", "S. Subramanian", "I. Mitliagkas", "D. Kazakov", "Y. Bengio", "M.C. Mozer" ], "title": "State-reification networks: Improving generalization by modeling the distribution of hidden representations", "venue": "Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Y. LeCun", "S. Chopra", "R. Hadsell", "M. Ranzato", "F.-J. Huang" ], "title": "A tutorial on energy-based learning", "venue": null, "year": 2006 }, { "authors": [ "H. Lee", "R. Grosse", "R. Ranganath", "A.Y. Ng" ], "title": "Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations", "venue": "In Proceedings of the 26th International Conference on Machine Learning,", "year": 2009 }, { "authors": [ "G. Li", "K. Ramanathan", "N. Ning", "L. Shi", "C. Wen" ], "title": "Memory dynamics in attractor networks", "venue": "Computational Intelligence and Neuroscience,", "year": 2015 }, { "authors": [ "R. Liao", "A. Schwing", "R. Zemel", "R. Urtasun" ], "title": "Learning deep parsimonious representations", "venue": "Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "R. Liao", "Y. Xiong", "E. Fetaya", "L. Zhang", "K. Yoon", "X. Pitkow", "R. Urtasun", "R. Zemel" ], "title": "Reviving and improving recurrent back-propagation", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "D. Martin", "C. Fowlkes", "D. Tal", "J. Malik" ], "title": "A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics", "venue": "In Proceedings of the International Conference on Computer Vision (ICCV),", "year": 2001 }, { "authors": [ "J.L. McClelland", "D.E. Rumelhart" ], "title": "An interactive activation model of context effects in letter perception: I. an account of basic findings", "venue": "Psychological Review,", "year": 1981 }, { "authors": [ "M.C. Mozer" ], "title": "Attractor networks", "venue": "Oxford Companion to Consciousness,", "year": 2009 }, { "authors": [ "A. Nayebi", "D. Bear", "J. Kubilius", "K. Kar", "S. Ganguli", "D. Sussillo", "J.J. DiCarlo", "D.L. Yamins" ], "title": "Taskdriven convolutional recurrent models of the visual system", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "K. Perlin" ], "title": "An image synthesizer", "venue": "Proceedings of the 12th Annual Conference on Computer Graphics and Interactive Techniques,", "year": 1985 }, { "authors": [ "F.J. 
Pineda" ], "title": "Generalization of back-propagation to recurrent neural networks", "venue": "Physical Review Letters,", "year": 1987 }, { "authors": [ "R. Salakhutdinov", "G. Hinton" ], "title": "Deep boltzmann machines", "venue": "In Artificial intelligence and statistics,", "year": 2009 }, { "authors": [ "P. Sterzer", "A. Kleinschmidt" ], "title": "A neural basis for inference in perceptual ambiguity", "venue": "Proceedings of the National Academy of Sciences,", "year": 2007 }, { "authors": [ "R.S. Sutton" ], "title": "Learning to predict by the methods of temporal differences", "venue": "Machine Learning,", "year": 1988 }, { "authors": [ "Y. Tai", "J. Yang", "X. Liu" ], "title": "Image super-resolution via deep recursive residual network", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "H. Tang", "M. Schrimpf", "W. Lotter", "C. Moerman", "A. Paredes", "J.O. Caro", "W. Hardesty", "D. Cox", "G. Kreiman" ], "title": "Recurrent computations for visual pattern completion", "venue": "Proceedings of the National Academy of Sciences,", "year": 2018 }, { "authors": [ "P. Vincent", "H. Larochelle", "Y. Bengio", "P.-A. Manzagol" ], "title": "Extracting and composing robust features with denoising autoencoders", "venue": "In Proceedings of the 25th International Conference on Machine Learning,", "year": 2008 }, { "authors": [ "Z. Wang", "A.C. Bovik", "H.R. Sheikh", "E.P. Simoncelli" ], "title": "Image quality assessment: from error visibility to structural similarity", "venue": "IEEE Transactions on Image Processing,", "year": 2004 }, { "authors": [ "M. Welling", "M. Rosen-zvi", "G.E. Hinton" ], "title": "Exponential family harmoniums with an application to information retrieval", "venue": "Advances in Neural Information Processing Systems", "year": 2005 }, { "authors": [ "Y. Wu", "G. Wayne", "A. Graves", "T. Lillicrap" ], "title": "The Kanerva machine: A generative distributed memory", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Y. Wu", "G. Wayne", "K. Gregor", "T. Lillicrap" ], "title": "Learning attractor dynamics for generative memory", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "J. Xie", "Y. Lu", "S.-C. Zhu", "Y.N. Wu" ], "title": "A theory of generative ConvNet", "venue": "In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48,", "year": 2016 }, { "authors": [ "J. Yang", "J. Wright", "T.S. Huang", "Y. Ma" ], "title": "Image super-resolution via sparse representation", "venue": "IEEE transactions on image processing,", "year": 2010 }, { "authors": [ "R.S. Zemel", "M.C. Mozer" ], "title": "Localist attractor networks", "venue": "Neural Computation,", "year": 2001 }, { "authors": [ "R. Zeyde", "M. Elad", "M. Protter" ], "title": "On single image scale-up using sparse-representations", "venue": "In Proceedings of the International conference on curves and surfaces,", "year": 2010 } ]
[ { "heading": "1 INTRODUCTION", "text": "Under ordinary conditions, human visual perception is quick and accurate. Studying circumstances that give rise to slow or inaccurate perception can help reveal the underlying mechanisms of visual information processing. Recent investigations of occluded (Tang et al., 2018) and empirically challenging (Kar et al., 2019) scenes have led to the conclusion that recurrent brain circuits can play a critical role in object recognition. Further, recurrence can improve the classification performance of deep nets (Tang et al., 2018; Nayebi et al., 2018), specifically for the same images with which humans and animals have the most difficulty (Kar et al., 2019).\nRecurrent dynamics allow the brain to perform pattern completion, constructing a coherent neural state from noisy, incomplete, and intrinsically ambiguous evidence. This interpretive process is well matched to attractor networks (ANs) (Hopfield, 1982; 1984; Krotov and Hopfield, 2016; Zemel and Mozer, 2001), a class of dynamical neural networks that converge to fixed-point attractor states (Figure 1a). Given evidence in the form of a static input, an AN settles to an asymptotic state—an interpretation or completion—that is as consistent as possible with the evidence and with implicit knowledge embodied in the network connectivity. We show examples from our model in Figure 1b.\nANs have played a pivotal role in characterizing computation in the brain (Amit, 1992; McClelland and Rumelhart, 1981), not only perception (e.g., Sterzer and Kleinschmidt, 2007), but also language (Stowe et al., 2018) and awareness (Mozer, 2009). We revisit attractor nets in light of modern deep learning methods and propose a convolutional bipartite architecture for pattern completion tasks with a novel training loss, activation function, and connectivity constraints." }, { "heading": "2 BACKGROUND AND RELATED RESEARCH", "text": "Although ANs have been mostly neglected in the recent literature, attractor-like dynamics can be seen in many models. For example, clustering and denoising autoencoders are used to clean up internal states and improve the robustness of deep models (Liao et al., 2016; Tang et al., 2018; Lamb et al., 2019). In a range of image-processing domains, e.g., denoising, inpainting, and super-resolution, performance gains are realized by constructing deeper and deeper architectures (e.g., Lai et al., 2018). State-of-the-art results are often obtained using deep recursive architectures that replicate layers and weights (Kim et al., 2016; Tai et al., 2017), effectively implementing an unfolded-in-time recurrent net. This approach is sensible because image processing tasks are fundamentally constraint satisfaction problems: the value of any pixel depends on the values of its neighborhood, and iterative processing is required to converge on mutually consistent activation patterns. Because ANs are specifically designed to address constraint-satisfaction problems, our goal is to re-examine them from a modern deep-learning perspective.\nInterest in ANs seems to be narrow for two reasons. First, in both early (Hopfield, 1982; 1984) and recent (Li et al., 2015; Wu et al., 2018a;b; Chaudhuri and Fiete, 2017) work, ANs are characterized as content-addressable memories: activation vectors are stored and can later be retrieved with only partial information. 
However, memory retrieval does not fully characterize the model's capabilities: like its probabilistic sibling the Boltzmann machine (Hinton, 2007; Welling et al., 2005), the AN is a general computational architecture for supervised and unsupervised learning. Second, ANs have been limited by their training procedures. In Hopfield's work, ANs are trained with a simple procedure—an outer-product (Hebbian) rule—which cannot accommodate hidden units and the representational capacity they provide. Recent explorations have considered stronger training procedures (e.g., Wu et al., 2018b; Liao et al., 2018); however, as for all recurrent nets, training is complicated by the issue of vanishing/exploding gradients. To facilitate training and increase the computational power of ANs, we propose a set of extensions to the architecture and training procedures.

ANs are related to several popular architectures. Autoencoding models such as the VAE (Kingma and Welling, 2013) and denoising autoencoders (Vincent et al., 2008) can be viewed as approximating one step of attractor dynamics, directing the input toward the training-data manifold (Alain et al., 2012). These models can be applied recursively, though convergence is not guaranteed, nor is improvement in output quality over iterations. Flow-based generative models (FBGMs) (e.g., Dinh et al., 2016) are invertible density-estimation models that can map between observations and latent states. Whereas FBGMs require invertibility of the mappings, ANs require only the weaker constraint that the weights in one direction are the transpose of the weights in the other direction.

As currently formulated, energy-based models (EBMs) are feedforward density-estimation models that learn a mapping from input data to energies and are trained to assign low energy values to the data manifold (LeCun et al., 2006; Han et al., 2018; Xie et al., 2016; Du and Mordatch, 2019). Whereas AN dynamics are determined by an implicit energy function, EBM dynamics are driven by optimizing or sampling from an explicit energy function. In the AN, lowering the energy for some states raises it for others, whereas the explicit EBM energy function requires well-chosen negative samples to ensure it discriminates likely from unlikely states. Although the EBM and FBGM seem well suited for synthesis and generation tasks due to their probabilistic underpinnings, we show that ANs can also be used for generation (maximum-likelihood completion) tasks." }, { "heading": "3 CONVOLUTIONAL BIPARTITE ATTRACTOR NETS (CBANS)", "text": "Various types of recurrent nets have been shown to converge to activation fixed points, including fully interconnected networks of asynchronous binary units (Hopfield, 1982) and networks of continuous-valued units operating in continuous time (Hopfield, 1984). Most relevant to modern deep learning, Koiran (1994) identified convergence conditions for synchronous update of continuous-valued units in discrete time: given a network with state $x$, parallel updates of the full state with the standard activation rule,

$$x \leftarrow f(xW + b), \quad (1)$$

will asymptote at either a fixed point or a limit cycle of length 2. Sufficient conditions for this result are: initial $x \in [-1,+1]^n$, $W = W^\top$, $w_{ii} \geq 0$, and $f(\cdot)$ piecewise continuous and strictly increasing with $\lim_{\eta \to \pm\infty} f(\eta) = \pm 1$. The proof is cast in terms of an energy function,

$$E(x) = -\tfrac{1}{2}\, x W x^\top - x b^\top + \sum_i \int_0^{x_i} f^{-1}(\xi)\, d\xi. \quad (2)$$
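To illustrate the settling dynamics of Equations 1–2 numerically, here is a minimal NumPy sketch with tanh units on a random symmetric weight matrix; the sizes, scales, and convergence threshold are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
W = rng.normal(scale=0.1, size=(n, n))
W = 0.5 * (W + W.T)                       # symmetry: W = W^T
np.fill_diagonal(W, np.abs(np.diag(W)))   # nonnegative self-weights w_ii
b = np.zeros(n)

def energy(x):
    # Eq. (2) with f = tanh; rho is the tanh barrier of Eq. (3) below
    rho = (1 + x) * np.log1p(x) + (1 - x) * np.log1p(-x)
    return -0.5 * x @ W @ x - x @ b + rho.sum()

x = rng.uniform(-0.5, 0.5, size=n)
energies = [energy(x)]
for t in range(500):
    x_new = np.tanh(x @ W + b)            # Eq. (1): synchronous update
    energies.append(energy(x_new))
    if np.max(np.abs(x_new - x)) < 1e-8:  # settled to a fixed point
        break
    x = x_new
# Per Koiran (1994), the trajectory ends in a fixed point or a 2-cycle.
```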
With $f \equiv \tanh$, we have the barrier function:

$$\rho(x_i) \equiv \int_0^{x_i} f^{-1}(\xi)\, d\xi = (1 + x_i)\ln(1 + x_i) + (1 - x_i)\ln(1 - x_i) \quad (3)$$

To ensure a fixed point (no limit cycle > 1), asynchronous updates are sufficient because the solution of $\partial E / \partial x_i = 0$ is the standard update for unit $i$ (Equation 1). Because the energy function additively factorizes for units that have no direct connections, parallel updates of these units still ensure non-increasing energy, and hence attainment of a fixed point.

We adopt the bipartite architecture of a stacked restricted Boltzmann machine (Hinton and Salakhutdinov, 2006), with bidirectional symmetric connections between adjacent layers of units and no connectivity within a layer (Figure 1c). We distinguish between visible layers, which contain inputs and/or outputs of the net, and hidden layers. The bipartite architecture allows for units within a layer to be updated in parallel while guaranteeing strictly non-increasing energy and attainment of a local energy minimum. We thus perform layerwise updating of units, defining one iteration as a sweep from one end of the architecture to the other and back. The 8-step update sequence for the architecture in Figure 1c is shown above the network." }, { "heading": "3.1 CONVOLUTIONAL WEIGHT CONSTRAINTS", "text": "Weight constraints required for convergence can be achieved within a convolutional architecture as well (Figure 1d). In a feedforward convolutional architecture, the connectivity from layer $l$ to $l+1$ is represented by weights $W^l = \{w^l_{qrab}\}$, where $q$ and $r$ are channel indices in the destination ($l+1$) and source ($l$) layers, respectively, and $a$ and $b$ specify the relative coordinate within the kernel, such that the weight $w^l_{qrab}$ modulates the input to the unit in layer $l+1$, channel $q$, absolute position $(\alpha, \beta)$—denoted $x^{l+1}_{q\alpha\beta}$—from the unit $x^l_{r,\alpha+a,\beta+b}$. If $W^{l+1} = \{w^{l+1}_{qrab}\}$ denotes the reverse weights to channel $q$ in layer $l$ from channel $r$ in layer $l+1$, symmetry requires that

$$w^l_{q,r,a,b} = w^{l+1}_{r,q,-a,-b}. \quad (4)$$

This follows from the fact that the weights are translation invariant: the reverse mapping from $x^{l+1}_{q,\alpha,\beta}$ to $x^l_{r,\alpha+a,\beta+b}$ has the same weight as from $x^{l+1}_{q,\alpha-a,\beta-b}$ to $x^l_{r,\alpha,\beta}$, embodied in Equation 4. Implementation of the weight constraint is simple: $W^l$ is unconstrained, and $W^{l+1}$ is obtained by transposing the first two tensor dimensions of $W^l$ and flipping the indices of the last two. The convolutional bipartite architecture has energy function:

$$E(x) = -\sum_{l=1}^{L-1} \sum_q x^{l+1}_q \bullet \left( W^l_q * x^l \right) + \sum_{l=1}^{L} \sum_{q,\alpha,\beta} \rho(x^l_{q\alpha\beta}) - b^l_q\, x^l_{q\alpha\beta} \quad (5)$$

where $x^l$ is the activation in layer $l$, $b^l$ are the channel biases, $\rho(\cdot)$ is the barrier function (Equation 3), '$*$' is the convolution operator, and '$\bullet$' is the element-wise sum of the Hadamard product of tensors. The factor of $\frac{1}{2}$ ordinarily found in energy functions is not present in the first term because, in contrast to Equation 2, each second-order term in $x$ appears only once. For a similar formulation in stacked restricted Boltzmann machines, see Lee et al. (2009).
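To make the transpose-and-flip construction concrete, here is a minimal PyTorch sketch of deriving the tied top-down kernel from the bottom-up one per Equation 4; the (out-channels, in-channels, height, width) tensor layout is an assumed convention.

```python
import torch

def reverse_kernel(w_up: torch.Tensor) -> torch.Tensor:
    """Given bottom-up weights w_up with shape (q, r, a, b) =
    (out_ch, in_ch, kH, kW), return the tied top-down weights of Eq. (4):
    swap the channel dimensions and flip both spatial dimensions."""
    return torch.flip(w_up.transpose(0, 1), dims=(-2, -1))

# sanity check: applying the construction twice recovers the original kernel
w = torch.randn(8, 3, 5, 5)
assert torch.equal(reverse_kernel(reverse_kernel(w)), w)
```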
" }, { "heading": "3.2 LOSS FUNCTIONS", "text": "Evidence provided to our Convolutional Bipartite Attractor Net (hereafter, CBAN) consists of activation constraints on a subset of the visible units. The CBAN is trained to fill in or complete the activation pattern over the visible state. The manner in which evidence constrains activations depends on the nature of the evidence. In a scenario where all features are present but potentially noisy, one should treat them as soft constraints that can be overridden by the model; in a scenario where the evidence features are reliable but other features are entirely missing, one should treat the evidence as hard constraints.

We have focused on the latter scenario in our simulations, although we discuss the use of soft constraints in Appendix A. For a hard constraint, we clamp the visible units to the value of the evidence, meaning that the activation is set to the observed value and not allowed to change; energy is minimized conditioned on the clamped values. One extension of clamping is to replicate all visible units and designate one set as input, clamped to the evidence, and one set as output, which serves as the network read-out. We considered using the evidence to initialize the visible state, but initialization is inadequate to anchor the visible state and it wanders. We also considered using the evidence as a fixed bias on the input to the visible state, but the redundancy between the bias and the top-down signals from the hidden layer can prevent the CBAN from achieving the desired activations.

An obvious loss function is squared error, $\mathcal{L}_{SE} = \sum_i \|v_i - y_i\|^2$, where $i$ is an index over visible units, $v$ is the visible state, and $y$ is the target visible state. However, this loss misses a key source of error: the clamped units have zero error under it. Consequently, we replace $v_i$ with $\tilde{v}_i$, the value that unit $i$ would take were it unclamped, i.e., free to take on a value consistent with the hidden units driving it:

$$\mathcal{L}_{SE} = \sum_i \|\tilde{v}_i - y_i\|^2.$$

An alternative loss, related to the contrastive loss of the Boltzmann machine (see Appendix B), explicitly aims to ensure that the energy of the current state is higher than that of the target state. With $x = (y, h)$ being the complete state with all visible units clamped at their target values and the hidden units in some configuration $h$, and $\tilde{x} = (\tilde{v}, h)$ being the complete state with the visible units unclamped, one can define the loss

$$\mathcal{L}_{\Delta E} = E(x) - E(\tilde{x}) = \sum_i f^{-1}(\tilde{v}_i)(\tilde{v}_i - y_i) + \rho(y_i) - \rho(\tilde{v}_i).$$

We apply this loss by allowing the net to iterate for some number of steps given a partially clamped input, yielding a hidden state that is a plausible candidate to generate the target visible state. Note that $\rho(y_i)$ is constant; although it does not factor into the gradient computation, it helps interpret $\mathcal{L}_{\Delta E}$: when $\mathcal{L}_{\Delta E} = 0$, $\tilde{v} = y$. This loss is curious in that it is a function not just of the visible state but, through the term $f^{-1}(\tilde{v}_i)$, depends directly on the hidden state in the adjacent layer and the weights between these layers. A variant on $\mathcal{L}_{\Delta E}$ is based on the observation that the goal of training is only to make the two energies equal, suggesting a soft hinge loss:

$$\mathcal{L}_{\Delta E+} = \log\left(1 + \exp(E(x) - E(\tilde{x}))\right).$$

Both energy-based losses have an interpretation under the Boltzmann distribution: $\mathcal{L}_{\Delta E}$ is related to the conditional likelihood ratio of the clamped to the unclamped visible state, and $\mathcal{L}_{\Delta E+}$ is related to the conditional probability of the clamped versus the unclamped visible state:

$$\mathcal{L}_{\Delta E} = -\log \frac{p(y\,|\,h)}{p(\tilde{v}\,|\,h)} \quad \text{and} \quad \mathcal{L}_{\Delta E+} = -\log \frac{p(y\,|\,h)}{p(\tilde{v}\,|\,h) + p(y\,|\,h)}.$$
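For concreteness, a minimal sketch of these losses on the visible layer follows, assuming `v_free` holds the would-be unclamped values ṽ, `y` the targets, and `f_inv`/`rho` the inverse activation and barrier function; this is an illustration of the formulas above, not the authors' implementation.

```python
import torch

def loss_se(v_free, y):
    """Squared error on the would-be (unclamped) visible values."""
    return ((v_free - y) ** 2).sum()

def delta_e(v_free, y, f_inv, rho):
    """E(clamped) - E(unclamped), via the per-unit closed form above."""
    return (f_inv(v_free) * (v_free - y) + rho(y) - rho(v_free)).sum()

def loss_delta_e_plus(v_free, y, f_inv, rho):
    """Soft-hinge variant: softplus(z) = log(1 + exp(z))."""
    return torch.nn.functional.softplus(delta_e(v_free, y, f_inv, rho))
```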
}, { "heading": "3.3 PREVENTING VANISHING/EXPLODING GRADIENTS", "text": "Although gradient descent is a more powerful method to train the CBAN than Hopfield’s Hebb rule or the Boltzmann machine’s contrastive loss, vanishing and exploding gradients are a concern as with any recurrent net (Hochreiter et al., 2001), particularly in the CBAN which may take 50 steps to fully relax. We address the gradient issue in two ways: through intermediate training signals and through a soft sigmoid activation function.\nThe aim of the CBAN is to produce a stable interpretation asymptotically. The appropriate way to achieve this is to apply the loss once activation converges. However, the loss can be applied prior to convergence as well, essentially training the net to achieve convergence as quickly as possible, while also introducing loss gradients deep inside the unrolled net. Assume a stability criterion θ that determines the iteration t∗ at which the net has effectively converged: t∗ = mint [maxi |xi(t)− xi(t− 1)| < θ] . Training can be logically separated into pre- and post-convergence phases, which we will refer to as transient and stationary. In the stationary phase, the Almeida/Pineda algorithm (Pineda, 1987; Almeida, 1987) leverages the fact that activation is constant over iterations, permitting a computationally efficient gradient calculation with low memory requirements. In the transient phase, the loss can be injected at each step, which is exactly the temporal-difference method TD(1) (Sutton, 1988). Casting training as temporal-difference learning, one might consider other values of λ in TD(λ); for example, TD(0) trains the model to predict the visible state at the next time step, encouraging the model to reach the target state as quickly as feasible while not penalizing it for being unable to get to the target immediately.\nAny of the losses, LSE , L∆E , and L∆E+, can be applied with a weighted mixture of training in the stationary and transient phases. Although we do not report systematic experiments in this article, we consistently find that transient training with λ = 1 is as efficient and effective as weighted mixtures\nincluding stationary-phase-only training, and that λ = 1 outperforms any λ < 1, likely due to moving targets during training. We thus conduct all simulations with transient-phase training and λ = 1.\nWe propose a second method of avoiding vanishing gradients specifically due to sigmoidal activation functions: a leaky sigmoid, analogous to a leaky ReLU, which allows gradients to propagate through the net more freely. The leaky sigmoid has activation and barrier functions\nf(z) = α(z + 1)− 1 z < −1 z −1 ≤ z ≤ 1 α(z − 1) + 1 z > 1 , ρ(x) = 1 2α [ x2 + (1− α)(1 + 2x) ] if x < −1 1 2x 2 if − 1 ≤ x ≤ 1 1\n2α\n[ x2 + (1− α)(1− 2x) ] if x > 1\n.\nParameter α specifies the slope of the piecewise linear function outside the |x| < 1 interval. As α→ 0, loss gradients become flat and the CBAN fails to train well. As α→ 1, activation magnitudes can blow up and the CBAN fails to reach a fixed point. In Appendix C, we show that convergence to a fixed point is guaranteed when α||W ||1,∞ < 1, where ||W ||1,∞ = maxi ||wi||1. In practice, we have found that restricting W is unnecessary and α = 0.2 works well." }, { "heading": "4 SIMULATIONS", "text": "We report on a series of simulation studies of increasing complexity. First, we explore fully connected bipartite attractor net (FBAN) on a bar imputation task and then supervised MNIST image completion and classification. 
Second, we apply CBAN to unsupervised image completion tasks on Omniglot and CIFAR-10 and compare CBAN to CBAN variants and denoising VAEs. Lastly, we adapt CBAN to the task of super-resolution and report promising results against competing models, such as DRCN and LapSRN. Details of architectures, parameters, and training are in Appendix D. All models are trained via SGD (back propagation through time)." }, { "heading": "4.1 BAR TASK", "text": "We studied a simple inference task on partial images that have exactly one correct interpretation. Images are 5 × 5 binary pixel arrays consisting of two horizontal bars or two vertical bars. Twenty distinct images exist, shown in the top row of Figure 2. A subset of pixels is provided as evidence; examples are shown in the bottom row of Figure 2. The task is to fill in the masked pixels. Evidence is generated such that only one consistent completion exists. In some cases, a bar must be inferred without any white pixels as evidence (e.g., second column from the right). In other cases, the local evidence is consistent with both vertical and horizontal bars (e.g., first column from left).

An FBAN with one layer of 50 hidden units is sufficient for the task. Evidence is generated randomly on each trial. Evaluating on 10k random states, the model is 99.995% correct. The middle row in Figure 2 shows the FBAN response after one iteration. The net comes close to performing the task in a single shot; a second iteration of clean-up yields the asymptotic state shown in the top row.

Figure 3 shows some visible-hidden weights learned by the FBAN. Each 5 × 5 array depicts weights to/from one hidden unit. Weight sign and magnitude are indicated by coloring and area of the square, respectively. Units appear to select one row and one column, either with the same or opposite polarity. Same-polarity weights within a row or column induce coherence among pixels. Opposite-polarity weights between a row and a column allow the pixel at the intersection to activate either the row/column depending on the sign of the unit's activation.

Figure 2: Bar task: Input consists of 5 × 5 pixel arrays (rows show the completion, the response after one iteration, and the evidence) with the target being either two rows or two columns of pixels present.

Figure 3: Bar task: Weights between visible and first hidden layers.

Figure 4: MNIST completions. Row 1: target test examples, with class label coded in the bottom row. Row 2: completions produced by the FBAN. Row 3: evidence with masked regions (including class labels) in red. Row 4: the top-down 'dream' state produced by the hidden representation." }, { "heading": "4.2 SUPERVISED MNIST", "text": "We trained an FBAN with two hidden layers on a supervised version of MNIST in which the visible state consists of a 28 × 28 array for an MNIST digit and an additional vector to code the class label. For the sake of graphical convenience, we allocate 28 units to the label, using the first 20 by redundantly coding the class label in pairs of units, and ignoring the final 8 units (one possible packing is sketched below). Our architecture had 812 inputs, 200 units in the first hidden layer, and 50 units in the second. During training, all bits of the label were masked as well as one-third of image pixels.
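A minimal numpy sketch of how an image and label might be packed into the 812-unit visible state follows. The exact pairing layout for the redundant label coding is not specified in the text, so the (2k, 2k+1) scheme below is an assumption of this sketch.

```python
import numpy as np

def make_visible_state(image, label):
    """Pack a 28x28 image and a 10-way class label into the 812-unit
    visible state: 784 pixels + 28 label units, where the first 20 label
    units code the class redundantly in pairs and the last 8 are ignored.
    Activations live in [-1, 1] for tanh units."""
    pixels = image.reshape(784) * 2.0 - 1.0      # map [0, 1] -> [-1, 1]
    label_units = -np.ones(28)
    label_units[2 * label] = 1.0                  # pair (2k, 2k+1) codes class k
    label_units[2 * label + 1] = 1.0
    return np.concatenate([pixels, label_units])

v = make_visible_state(np.random.rand(28, 28), label=3)
assert v.shape == (812,)
```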
The image was masked with thresholded Perlin coherent noise (Perlin, 1985), which produces missing patches that are far more difficult to fill in than the isolated pixels produced by Bernoulli masking.

Figure 4 shows evidence provided to the FBAN for 20 random test set items in the third row. The red masks indicate unobserved pixels; the other pixels are clamped in the visible state. The unobserved pixels include those representing the class label, coded in the bottom row of the pixel array. The top row of the figure shows the target visible representation, with class labels indicated by the isolated white pixels. Even though the training loss treats all pixels as equivalent, the FBAN does learn to classify unlabeled images. On the test set, the model achieves a classification accuracy of 87.5% on Perlin-masked test images and 89.9% on noise-free test images. Note that the 20 pixels indicating class membership are no different than any other missing pixels in the input. The model learns to classify by virtue of the systematic relationship between images and labels. We can train the model with fully observed images and fully unobserved labels, and its performance is like that of any fully connected MNIST classifier, achieving an accuracy of 98.5%.

The FBAN does an excellent job of filling in missing features in Figure 4 and in further examples in Appendix F. The FBAN's interpretations of the input seem to be respectable in comparison to other recent recurrent associative memory models (Figures 8a,b). We mean no disrespect of other research efforts (which have very different foci than ours) but merely wish to indicate we are obtaining state-of-the-art results for associative memory models. Figure 8c shows some weights between visible and hidden units. Note that the weights link image pixels with multiple digit labels. These weights stand apart from the usual hidden representations found in feedforward classification networks." }, { "heading": "4.3 UNSUPERVISED OMNIGLOT", "text": "We trained a CBAN with the Omniglot data set (Lake et al., 2015), which consists of instances of 1623 characters from 50 different alphabets. The CBAN has one visible layer containing the character image, 28 × 28 × 1, and three successive hidden layers with dimensions 28 × 28 × 128, 14 × 14 × 256, and 7 × 7 × 256, all with average pooling between the layers and filters of size 3 × 3. Other network parameters and training details are presented in Appendix D. To vary the masking, we used random square patches of diameter 3-6, which remove on average 30% of the white pixels in the image.

We compared our CBAN to variants with critical properties removed: one without weight symmetry (CBAN-asym) and one in which the TD(1) training procedure is substituted for a standard squared loss at the final step (CBAN-noTD). We also compare to a convolutional denoising VAE (CD-VAE), which takes the masked image as input and outputs the completion.

Figure 5: Omniglot image completion comparison examples (left) and quantitative results (right). The top two rows of the examples show the target image and the evidence provided to the model (with missing pixels depicted in red), respectively. The subsequent rows show the image completions produced by CBAN, CBAN-asym, CBAN-noTD, and the denoising VAE.
The quantitative panels evaluate each model on PSNR and SSIM metrics; black lines indicate +1 standard error of the mean.

The CBAN with symmetric weights reaches a fixed point, whereas CBAN-asym appears to attain limit cycles of 2-10 iterations. Qualitatively, CBAN produces the best image reconstructions (Figure 5; additional completions in Appendix F). CBAN-asym and CBAN-noTD tend to hallucinate additional strokes; and CBAN-noTD and CD-VAE produce less crisp edges. Quantitatively, we assess models with two measures of reconstruction quality, peak signal-to-noise ratio (PSNR) and structural similarity (SSIM; Wang et al., 2004); larger is better on each measure. CBAN is strictly superior to the alternatives on both measures (Figure 5, right panel). CBAN completions are not merely memorized instances; the CBAN has learned structural regularities of the images, allowing it to fill in big gaps in images that, with the missing pixels, are typically uninterpretable by both classifiers and humans." }, { "heading": "4.4 UNSUPERVISED CIFAR-10", "text": "We trained a CBAN with one visible and three hidden layers on CIFAR-10 images. The visible layer is the size of the input image, 32 × 32 × 3. The successive hidden layers had dimensions 32 × 32 × 40, 16 × 16 × 120, and 8 × 8 × 440, all with filters of size 3 × 3 and average pooling between the hidden layers. Further details of architecture and training can be found in Appendix D. Figure 6 shows qualitative and quantitative comparisons of alternative models. Here, CBAN-asym performs about the same as CBAN. However, CBAN-asym typically attains bi-phasic limit cycles, and CBAN-asym sometimes produces splotchy artifacts in background regions (e.g., third image from left). CBAN-noTD and the CD-VAE are clearly inferior to CBAN. Additional CBAN image completions can be found in Appendix F.

Figure 6: CIFAR-10 image completion comparison examples (left) and quantitative results (right). Layout identical to that of Figure 5." }, { "heading": "4.5 SUPER-RESOLUTION", "text": "Deep learning models have proliferated in many domains of image processing, perhaps none more than image super-resolution, which is concerned with recovering a high-resolution image from a low-resolution image. Many specialized architectures have been developed, and although common test data sets exist, comparisons are not as simple as one would hope due to subtle differences in methodology. (For example, even the baseline method, bicubic interpolation, yields different results depending on the implementation.) We set out to explore the feasibility of using CBANs for super-resolution. Our architecture processes 40 × 40 color image patches, and the visible state included both the low- and high-resolution images, with the low-resolution version clamped and the high-resolution version read out from the net. Details can be found in Appendix D.

Table 1 presents two measures of performance, SSIM and PSNR, for the CBAN and various published alternatives. CBAN beats the baseline, bicubic interpolation, on both measures, and performs well on SSIM against some leading contenders (even beating LapSRN and DRCN on Set14 and Urban100), but poorly on PSNR. It is common for PSNR and SSIM to be in opposition: SSIM rewards crisp edges, PSNR rewards averaging toward the mean.
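For reference, here is a minimal numpy sketch of how PSNR might be computed (our illustration, assuming images scaled to [0, 1]); SSIM is usually taken from a library implementation such as scikit-image rather than written by hand.

```python
import numpy as np

def psnr(img, ref, peak=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, peak].
    Being a monotone transform of MSE, high PSNR favors predictions that
    average toward the mean of plausible outputs."""
    mse = np.mean((img - ref) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```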
The border sharpening and contrast enhancement that produce good perceptual quality and a high SSIM score (see Figure 7) are due to the fact that CBAN comes to an interpretation of the images: it imposes edges and textures in order to make the features mutually consistent. We believe that CBAN warrants further investigation for super-resolution; regardless of whether it becomes the winner in this competitive field, one can argue that it is performing a different type of computation than feedforward models like LapSRN and DRCN." }, { "heading": "5 DISCUSSION", "text": "This article revisits attractor nets, which are traditionally fully interconnected RNNs trained with recurrent back propagation (Liao et al., 2018) or the Hebbian rule (Hopfield, 1984). Our key contribution is to endow nets with a combination of properties which allows them to tackle difficult image completion problems; these properties include: convolutional weight constraints, novel loss functions, and methods for preventing vanishing/exploding gradients. In comparison to recent published results on image completion with attractor networks, our CBAN produces far more impressive results (see Appendix, Figure 8, for a contrast). The computational cost and challenge of training CBANs is no greater than those of training deep feedforward nets. CBANs seem to produce crisp images, on par with those produced by generative (e.g., energy- and flow-based) models. CBANs have potential to be applied in many contexts involving data interpretation, with the virtue that the computational resources they bring to bear on a task are dynamic and dependent on the difficulty of interpreting a given input. Although this article has focused on convolutional networks that have attractor dynamics between levels of representation, we have recently recognized the value of architectures that are fundamentally feedforward with attractor dynamics within a level. Our current research explores this variant of the CBAN as a biologically plausible account of intralaminar lateral inhibition." }, { "heading": "APPENDIX", "text": "" }, { "heading": "A USING EVIDENCE", "text": "The CBAN is probed with an observation, a constraint on the activation of a subset of visible units. For any visible unit, we must specify how an observation is used to constrain activation. The possibilities include:

• The unit is clamped, meaning that the unit activation is set to the observed value and is not allowed to change. Convergence is still guaranteed, and the energy is minimized conditional on the clamped value. However, clamping a unit has the disadvantage that any error signal back propagated to the unit will be lost (because changing the unit's input does not change its output).

• The unit is initialized to the observed value, instead of 0. This scheme has the disadvantage that activation dynamics can cause the network to wander away from the observed state. This problem occurs in practice and the consequences are so severe it is not a viable approach.

• In principle, we might try an activation rule which sets the visible unit's activation to be a convex combination of the observed value and the value that would be obtained via activation dynamics: $\alpha \times \text{observed} + (1 - \alpha) \times f(\text{net input})$. With $\alpha = 1$ this is simply the clamping scheme; with $\alpha = 0$ and an appropriate start state, this is just the initialization scheme.

• The unit has an external bias proportional to the observation.
In this scenario, the net input to a visible unit is:

$$x_i \leftarrow f(\mathbf{x}\mathbf{w}_i^T + b_i + e_i), \quad (6)$$

where $e_i \propto$ observation. The initial activation can be either 0 or the observation. One concern with this scheme is that the ideal input to a unit will depend on whether or not the unit has this additional bias. For this reason the magnitude of the bias should probably be small. However, in order to have an impact, the bias must be larger.

• We might replicate all visible units and designate one set for input (clamped) and one set for output (unclamped). The input is clamped to the observation (which may be zero). The output is allowed to settle. The hidden layer(s) would synchronize the inputs and outputs, but it could handle noisy inputs, which isn't possible with clamping. Essentially, the input would serve as a bias, but on the hidden units, not on the inputs directly.

In practice, we have found that external biases work but are not as effective as clamping. Partial clamping with $0 < \alpha < 1$ has partial effectiveness relative to clamping. And initialization is not effective; the state wanders from the initialized values. However, the replicated-visible scheme seems very promising and should be explored further." }, { "heading": "B LOSS FUNCTIONS", "text": "The training procedure for a Boltzmann machine aims to maximize the likelihood of the training data, which consist of a set of observations over the visible units. The complete states in a Boltzmann machine occur with probabilities specified by

$$p(\mathbf{x}) \propto e^{-E(\mathbf{x})/T}, \quad (7)$$

where $T$ is a computational temperature and the likelihood of a visible state is obtained by marginalizing over the hidden states. Raising the likelihood of a visible state is achieved by lowering its energy.

The Boltzmann machine learning algorithm has a contrastive loss: it tries to minimize the energy of states with the visible units clamped to training observations and maximize the energy of states with the visible units unclamped and free to take on whatever values they want. This contrastive loss is an example of an energy-based loss, which expresses the training objective in terms of the network energies.

In our model, we will define an energy-based loss via matched pairs of states: $\mathbf{x}$ is a state with the visible units clamped to observed values, and $\tilde{\mathbf{x}}$ is a state in which the visible units are unclamped, i.e., they are free to take on values consistent with the hidden units driving them. Although $\tilde{\mathbf{x}}$ could be any unclamped state, it will be most useful for training if it is related to $\mathbf{x}$ (i.e., it is a good point of contrast). To achieve this relationship, we propose to compute $(\tilde{\mathbf{x}}, \mathbf{x})$ pairs by:

1. Clamp some portion of the visible units with a training example.

2. Run the net to some iteration, at which point the full hidden state is $\mathbf{h}$. (The point of this step is to identify a hidden state that is a plausible candidate to generate the target visible state.)

3. Set $\mathbf{x}$ to be the complete state in which the hidden component of the state is $\mathbf{h}$ and the visible component is the target visible state.

4. Set $\tilde{\mathbf{x}}$ to be the complete state in which the hidden component of the state is $\mathbf{h}$ and the visible component is the fully unclamped activation pattern that would be obtained by propagating activities from the hidden units to the (unclamped) visible units.

Note that the contrastive pair at this iteration, $(\tilde{\mathbf{x}}_i, \mathbf{x}_i)$, are states close to the activation trajectory that the network is following (a code sketch of this construction follows).
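The four-step construction above can be sketched as follows. This is our skeleton, not code from the paper: `settle` and `top_down` are placeholder names standing in for the network's layerwise relaxation and its hidden-to-visible propagation.

```python
def contrastive_pair(settle, top_down, evidence, mask, y, n_iters=10):
    """Steps 1-4 of the contrastive-pair construction.
    settle(evidence, mask, n_iters) -> hidden state h after relaxing the
        net with the masked visible units clamped to `evidence`.
    top_down(h) -> unclamped visible activations driven by h.
    Returns the clamped state x and the unclamped state x_tilde,
    each as a (visible, hidden) tuple."""
    h = settle(evidence, mask, n_iters)   # steps 1-2: clamp evidence, relax
    x = (y, h)                            # step 3: target visibles paired with h
    x_tilde = (top_down(h), h)            # step 4: visibles the net would produce
    return x, x_tilde
```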
We might train the net only after it has reached convergence, but we've found that defining the loss for every iteration $i$ up until convergence improves training performance." }, { "heading": "B.1 LOSS 1: THE DIFFERENCE OF ENERGIES", "text": "

$$\begin{aligned} \mathcal{L}_{\Delta E} &= E(\mathbf{x}) - E(\tilde{\mathbf{x}}) \\ &= \Big( -\tfrac{1}{2}\mathbf{x}W\mathbf{x}^T - \mathbf{b}\mathbf{x}^T + \sum_j \int_0^{x_j} f^{-1}(\xi)\,d\xi \Big) - \Big( -\tfrac{1}{2}\tilde{\mathbf{x}}W\tilde{\mathbf{x}}^T - \mathbf{b}\tilde{\mathbf{x}}^T + \sum_j \int_0^{\tilde{x}_j} f^{-1}(\xi)\,d\xi \Big) \\ &= \sum_i \Big[ (\mathbf{w}_i\mathbf{x} + b_i)(\tilde{v}_i - v_i) + \int_0^{v_i} f^{-1}(\xi)\,d\xi - \int_0^{\tilde{v}_i} f^{-1}(\xi)\,d\xi \Big] \\ &= \sum_i f^{-1}(\tilde{v}_i)(\tilde{v}_i - v_i) + \rho(v_i) - \rho(\tilde{v}_i) \end{aligned}$$

with

$$\rho(s) = \tfrac{1}{2}(1+s)\ln(1+s) + \tfrac{1}{2}(1-s)\ln(1-s).$$

This reduction depends on $\mathbf{x}$ and $\tilde{\mathbf{x}}$ sharing the same hidden state, a bipartite architecture in which visible and hidden are interconnected, all visible-to-visible connections being zero, a tanh activation function $f$ for all units, and symmetric weights.

The goal of this loss is to move $\tilde{\mathbf{x}}$ toward $\mathbf{x}$. Once they become identical, adjusting their relative energies is not beneficial. Thus, we propose using a hinge version of the loss:

$$\mathcal{L}_{\Delta E} = \max(0, E(\mathbf{x}) - E(\tilde{\mathbf{x}}))$$ " }, { "heading": "B.2 LOSS 2: THE CONDITIONAL PROBABILITY OF CORRECT RESPONSE", "text": "This loss aims to maximize the log probability of the clamped state conditional on the choice between unclamped and clamped states. Framed as a loss, we have a negative log likelihood:

$$\mathcal{L}_{\Delta E+} = -\ln P(\mathbf{x} \mid \mathbf{x} \vee \tilde{\mathbf{x}}) = -\ln \frac{p(\mathbf{x})}{p(\tilde{\mathbf{x}}) + p(\mathbf{x})} = \ln\Big( 1 + \exp\Big( \frac{E(\mathbf{x}) - E(\tilde{\mathbf{x}})}{T} \Big) \Big)$$

The last step is attained using the Boltzmann distribution (Equation 7).

C PROOF OF CONVERGENCE OF CBAN WITH LEAKY SIGMOID ACTIVATION FUNCTION

In outline: consider the elementwise update $\mathbf{x} \leftarrow f(W\mathbf{x} + \mathbf{b})$, $\mathbf{x} \in \mathbb{R}^n$, where $f$ is the leaky sigmoid with slope $0 < \alpha < 1$ outside $[-1, 1]$, so that $|f(z)| \le \alpha(|z| - 1) + 1$ for $|z| > 1$. Suppose $\|\mathbf{x}\|_\infty \le 1 + m$ for some $m > 0$. Then $\|W\mathbf{x} + \mathbf{b}\|_\infty \le \|W\|_{1,\infty}(1+m) + \|\mathbf{b}\|_\infty$, and the update preserves $\|f(W\mathbf{x} + \mathbf{b})\|_\infty \le 1 + m$ provided $\alpha\big(\|W\|_{1,\infty}(1+m) + \|\mathbf{b}\|_\infty - 1\big) \le m$, which can be solved for $m$ whenever $\alpha\|W\|_{1,\infty} < 1$. There is a degree of freedom between $\alpha$ and $\|W\|_{1,\infty} = \max_q \|w_q\|_1$, so we can simply use $c = \alpha\|W\|_{1,\infty} < 1$ as the only parameter in the analysis, so long as we also re-parameterize $\mathbf{b}$ accordingly. In conclusion, given $c < 1$, the region of convergence must include the hypercube $\|\mathbf{x}\|_\infty \le 1 + m$; if the barrier is set to be smaller than the above region, no convergence is guaranteed." }, { "heading": "D NETWORK ARCHITECTURES AND HYPERPARAMETERS", "text": "" }, { "heading": "D.1 BAR TASK", "text": "Our architecture was a fully connected bipartite attractor net (FBAN) with one visible layer and two hidden layers having 48 and 24 channels. We trained using $\mathcal{L}_{\Delta E+}$ with the transient TD(1) procedure, defining network stability as the condition in which all changes in unit activation on successive iterations are less than 0.01 for a given input, with tanh activation functions and batches of 20 examples (the complete data set), with masks randomly generated on each epoch subject to the constraint that only one completion is consistent with the evidence. Weights between layers $l$ and $l+1$ and the biases in layer $l$ are initialized from a mean-zero Gaussian with standard deviation $0.1(\tfrac{1}{2}n_l + \tfrac{1}{2}n_{l+1} + 1)^{-1/2}$, where $n_l$ is the number of units in layer $l$ (sketched in code below). Optimization is via stochastic gradient descent with an initial learning rate of 0.01, dropped to 0.001; the gradients in a given layer of weights are L2-renormalized to be 1.0 for a batch of examples, which we refer to as SGD-L2." }, { "heading": "D.2 MNIST", "text": "Our architecture was a fully connected bipartite attractor net (FBAN) with one visible layer, a first hidden layer of 200 units, and a second hidden layer of 50 units.
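The layerwise initialization described above, used in both D.1 and D.2, can be sketched as follows. This is our illustration; applying the same standard deviation formula to the biases is an assumption, since the text does not fully pin down the bias scale.

```python
import numpy as np

def init_layer(n_in, n_out, rng=np.random.default_rng(0)):
    """Mean-zero Gaussian init with std 0.1 * (n_l/2 + n_{l+1}/2 + 1)^(-1/2)
    for the weights between adjacent layers; the bias is assumed to use
    the same std."""
    std = 0.1 * (0.5 * n_in + 0.5 * n_out + 1) ** -0.5
    W = rng.normal(0.0, std, size=(n_out, n_in))
    b = rng.normal(0.0, std, size=n_out)
    return W, b

# FBAN for supervised MNIST: 812 visible -> 200 hidden -> 50 hidden
W1, b1 = init_layer(812, 200)
W2, b2 = init_layer(200, 50)
```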
We trained using $\mathcal{L}_{\Delta E+}$ with the transient TD(1) procedure, defining network stability as the condition in which all changes in unit activation on successive iterations are less than 0.01 for a given input, with tanh activation functions and batches of 250 examples. Masks are generated randomly for each example on each epoch. The masks were produced by generating Perlin noise, frequency 7, thresholded such that one third of the pixels were obscured. Weights between layers $l$ and $l+1$ and the biases in layer $l$ are initialized from a mean-zero Gaussian with standard deviation $0.1(\tfrac{1}{2}n_l + \tfrac{1}{2}n_{l+1} + 1)^{-1/2}$, where $n_l$ is the number of units in layer $l$. Optimization is via stochastic gradient descent with learning rate 0.01; the gradients in a given layer of weights are L∞-renormalized to be 1.0 for a batch of examples, which we refer to as SGD-Linf. Target activations were scaled to lie in [-0.999, 0.999]." }, { "heading": "D.3 OMNIGLOT", "text": "The network architecture consists of four layers: one visible layer and three hidden layers. The visible layer dimensions match the input image dimensions: (28, 28, 1). The channel dimensions of the three hidden layers are 128, 256, and 512, respectively. We used filter sizes of 3 × 3 between all layers. Beyond the first hidden layer, we introduce a 2 × 2 average pooling operation followed by a half-padded convolution going from layer $l$ to layer $l+1$, and a half-padded convolution followed by a 2 × 2 nearest-neighbor interpolation going from layer $l+1$ to layer $l$. Consequently, the spatial dimensions of the hidden states, from lowest to highest, are (28, 28), (14, 14), and (7, 7). A trainable bias is applied per channel to each layer. All biases are initialized to 0, whereas kernel weights are Gaussian initialized with a standard deviation of 0.01. The CBAN used tanh activation functions and $\mathcal{L}_{SE}$ with TD(1) transient training, as described in the main text. We trained our model on 15,424 images from the Omniglot dataset (test set: 3856). The images are noised by online generation of squares that mask 20-40% of the white pixels in the image. We optimized our mean-squared error objective using Adam. The learning rate is initially set to 0.0005 and then decreased manually by a factor of 10 every 20 epochs after training epoch 100. For each batch, the network runs until the state stabilizes, where the condition for stabilization is specified as the maximum absolute difference of the full network states between stabilization steps $t$ and $t+1$ being less than 0.01. The maximum number of stabilization steps was set to 100; the average number of stabilization steps per batch over the course of training was 50.

Masks were formed by selecting patches of diameter 3-6 uniformly, in random, possibly overlapping locations, stopping when at least 25% of the white pixels have been masked.

Our CD-VAE is based on code from https://github.com/sksq96/pytorch-vae/blob/master/vaecnn.ipynb and followed the method of Im et al. (2017).

D.4 CIFAR-10

The network architecture consists of four layers: one visible layer and three hidden layers. The visible layer dimensions match the input image dimensions: (32, 32, 3). The channel dimensions of the three hidden layers are 40, 120, and 440, respectively. We used filter sizes of 3 × 3 between all layers.
Beyond the first hidden layer, we introduce a 2 × 2 average pooling operation followed by a half-padded convolution going from layer $l$ to layer $l+1$, and a half-padded convolution followed by a 2 × 2 nearest-neighbor interpolation going from layer $l+1$ to layer $l$. Consequently, the spatial dimensions of the hidden states, from lowest to highest, are (32, 32), (16, 16), and (8, 8). A trainable bias is applied per channel to each layer. All biases are initialized to 0, whereas kernel weights are Gaussian initialized with a standard deviation of 0.0001. The CBAN used tanh activation functions and $\mathcal{L}_{SE}$ with TD(1) transient training, as described in the main text. We trained our model on 50,000 images from the CIFAR-10 dataset (test set: 10,000). The images are noised by online generation of Perlin noise that masks 40% of the image. We optimized our mean-squared error objective using Adam. The learning rate is initially set to 0.0005 and then decreased manually by a factor of 10 every 20 epochs beyond training epoch 150. For each batch, the network runs until the state stabilizes, where the condition for stabilization is specified as the maximum absolute difference of the full network states between stabilization steps $t$ and $t+1$ being less than 0.01. The maximum number of stabilization steps was set to 100; the average number of stabilization steps per batch over the course of training was 50." }, { "heading": "D.5 SUPER-RESOLUTION", "text": "The network architecture consists of four layers: one visible layer and three hidden layers. The visible layer's spatial dimensions match the input patch dimensions, but it consists of 6 channels: (40, 40, 6). The low-resolution evidence patch is clamped to the bottom 3 channels of the visible state; the top 3 channels of the visible state serve as the unclamped output against which the high-resolution target patch is compared, and the loss is computed as a mean-squared error. The channel dimensions of the three hidden layers are 300, 300, and 300. We used filter sizes of 5 × 5 between all layers. All convolutions are half-padded and no average pooling operations are introduced in the SR network scheme. Consequently, the spatial dimensions of the hidden states remain constant and match the input patches of (40, 40). A trainable bias is applied per channel to each layer. All biases are initialized to 0, whereas kernel weights are Gaussian initialized with a standard deviation of 0.001.

We trained our model on 91 images from the T91 dataset (Yang et al., 2010) scaled at ×2. We optimized our mean-squared error objective using Adam. The learning rate is initially set to 0.00005 and then decreased by a factor of 10 every 10 epochs. The stability conditions described in the CBAN models for CIFAR-10 and Omniglot are repeated for the SR task, except the stability threshold was set to 0.1 halfway through training. We evaluated on four test datasets at ×2 scaling: Set5 (Bevilacqua et al., 2012), Set14 (Zeyde et al., 2010), BSD100 (Martin et al., 2001), and Urban100 (Huang et al., 2015)." }, { "heading": "E ADDITIONAL EXPERIMENTS", "text": "" }, { "heading": "E.1 CONVERGENCE THRESHOLD", "text": "The convergence threshold operates within the forward iterations of network training and determines the point at which the network's state has satisfactorily converged. Smaller thresholds result in longer training times, whereas larger thresholds yield faster training times. However, higher thresholds may negatively impact network performance.
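As a reference, the stability-based stopping rule used throughout these experiments (Section 3.3) can be sketched as follows. This is our sketch; `step` is a placeholder standing in for one full up-down sweep of layerwise updates.

```python
import numpy as np

def relax(step, x0, theta=0.01, max_iters=100):
    """Iterate the network update `step` until the state stabilizes:
    stop at the first t where max_i |x_i(t) - x_i(t-1)| < theta."""
    x = x0
    for t in range(1, max_iters + 1):
        x_new = step(x)
        if np.max(np.abs(x_new - x)) < theta:
            return x_new, t
        x = x_new
    return x, max_iters
```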
We examined the effects of convergence thresholds on network performance by training two identical CBAN models on the Omniglot dataset; see § D.3 for network architecture and training details. The only difference between the models is the convergence threshold: one model uses a convergence threshold of 0.01, whereas the other uses 1.0.

Our experimental results are shown in Table 2. We find that the higher convergence threshold provides a significant improvement in training efficiency along with increased performance on the PSNR, SSIM, and L1 reconstruction error metrics. The average number of forward iterations drops from 45 to 3 when changing from a convergence threshold of 0.01 to 1.0." }, { "heading": "E.2 LAYER UPDATING SCHEMES", "text": "Salakhutdinov and Hinton (2009) apply odd-even rules for layer updating in their work on Restricted Boltzmann Machines (RBMs). We empirically evaluate the odd-even update rule and compare it against a simple up-down update rule. The up-down update rule sequentially updates the layers, starting from the bottom and moving to the top, and then back down. We explored the two alternative update rules by training two CBAN models on the Omniglot dataset, one with the up-down update rule and the other with the odd-even update rule. See § D.3 for detailed network architecture and training details. Although the odd-even rule is suggested to be a more biologically plausible mechanism, our results show the up-down rule achieving superior performance, measured by PSNR, SSIM, and L1 reconstruction loss (see Table 3). In addition, a qualitative comparison on image completion demonstrates that the odd-even rule is less efficient than the up-down rule, and more frequently fails to fully complete the task (see Figure 9)." }, { "heading": "F ADDITIONAL RESULTS", "text": "" } ]
2019
null
SP:e963ad4e47263da9f64c76505e1853cbf8b012c4
[ "This paper studies the theoretical property of neural network's loss surface. The main contribution is to prove that the loss surface of every neural network (with arbitrary depth) with piecewise linear activations has infinite spurious local minima. Moreover, the paper further characterizes the partition of the local minima. More precisely, the loss surface is partitioned into multiple smooth and multilinear open cells and within each cell, the local minima are equally good. This result can also explain the linear neural network case where there is only one cell, implying that all local minima are global. ", "This paper studies the landscape of deep neural networks with piecewise-linear activation functions. The paper showed that under very mild assumptions, the loss surface admits infinite spurious local minima. Further, it is shown that the loss surface is partitioned into many multilinear cells. If the network is two-layer with two-piece linear activations, it is proved that within each cell every local minimum is global. " ]
Understanding the loss surface of a neural network is fundamentally important to the understanding of deep learning. This paper presents how piecewise linear activation functions substantially shape the loss surfaces of neural networks. We first prove that the loss surfaces of many neural networks have infinitely many spurious local minima, which are defined as the local minima with higher empirical risks than the global minima. Our result demonstrates that networks with piecewise linear activations differ substantially from the well-studied linear neural networks. This result holds for any neural network with arbitrary depth and arbitrary piecewise linear activation functions (excluding linear functions) under most loss functions in practice. Essentially, the underlying assumptions are consistent with most practical circumstances where the output layer is narrower than any hidden layer. In addition, the loss surface of a neural network with piecewise linear activations is partitioned into multiple smooth and multilinear cells by nondifferentiable boundaries. The constructed spurious local minima are concentrated in one cell as a valley: they are connected by a continuous path, on which empirical risk is invariant. Further, for one-hidden-layer networks, we prove that all local minima in a cell constitute an equivalence class; they are concentrated in a valley; and they are all global minima in the cell.
[ { "affiliations": [], "name": "Fengxiang He" }, { "affiliations": [], "name": "Bohan Wang" }, { "affiliations": [], "name": "Dacheng Tao" } ]
[ { "authors": [ "Zeyuan Allen-Zhu", "Yuanzhi Li", "Yingyu Liang" ], "title": "Learning and generalization in overparameterized neural networks, going beyond two layers", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Zeyuan Allen-Zhu", "Yuanzhi Li", "Zhao Song" ], "title": "A convergence theory for deep learning via overparameterization", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Marcin Andrychowicz", "Misha Denil", "Sergio Gomez", "Matthew W Hoffman", "David Pfau", "Tom Schaul", "Brendan Shillingford", "Nando De Freitas" ], "title": "Learning to learn by gradient descent by gradient descent", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Sanjeev Arora", "Nadav Cohen", "Elad Hazan" ], "title": "On the optimization of deep networks: Implicit acceleration by overparameterization", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Pierre Baldi", "Kurt Hornik" ], "title": "Neural networks and principal component analysis: Learning from examples without local minima", "venue": "Neural Networks,", "year": 1989 }, { "authors": [ "Alon Brutzkus", "Amir Globerson" ], "title": "Globally optimal gradient descent for a convnet with gaussian inputs", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Alon Brutzkus", "Amir Globerson", "Eran Malach", "Shai Shalev-Shwartz" ], "title": "SGD learns overparameterized networks that provably generalize on linearly separable data", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Anna Choromanska", "Mikael Henaff", "Michael Mathieu", "Gérard Ben Arous", "Yann LeCun" ], "title": "The loss surfaces of multilayer networks", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2015 }, { "authors": [ "Felix Draxler", "Kambis Veschgini", "Manfred Salmhofer", "Fred Hamprecht" ], "title": "Essentially no barriers in neural network energy landscape", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Simon S Du", "Jason D Lee", "Haochuan Li", "Liwei Wang", "Xiyu Zhai" ], "title": "Gradient descent finds global minima of deep neural networks", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Simon S Du", "Jason D Lee", "Yuandong Tian", "Barnabas Poczos", "Aarti Singh" ], "title": "Gradient descent learns one-hidden-layer cnn: Don’t be afraid of spurious local minima", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Simon S. 
Du", "Xiyu Zhai", "Barnabas Poczos", "Aarti Singh" ], "title": "Gradient descent provably optimizes over-parameterized neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "C Daniel Freeman", "Joan Bruna" ], "title": "Topology and geometry of half-rectified network optimization", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Timur Garipov", "Pavel Izmailov", "Dmitrii Podoprikhin", "Dmitry P Vetrov", "Andrew G Wilson" ], "title": "Loss surfaces, mode connectivity, and fast ensembling of dnns", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Micah Goldblum", "Jonas Geiping", "Avi Schwarzschild", "Michael Moeller", "Tom Goldstein" ], "title": "Truth or backpropaganda? an empirical investigation of deep learning theory", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Benjamin D. Haeffele", "Rene Vidal" ], "title": "Global optimality in neural network training", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Boris Hanin", "David Rolnick" ], "title": "Complexity of linear regions in deep networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Fengxiang He", "Tongliang Liu", "Dacheng Tao" ], "title": "Control batch size and learning rate to generalize well: Theoretical and empirical evidence", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Kenji Kawaguchi" ], "title": "Deep learning without poor local minima", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Kenji Kawaguchi", "Leslie Pack Kaelbling" ], "title": "Elimination of all bad local minima in deep learning", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2020 }, { "authors": [ "Rohith Kuditipudi", "Xiang Wang", "Holden Lee", "Yi Zhang", "Zhiyuan Li", "Wei Hu", "Sanjeev Arora", "Rong Ge" ], "title": "Explaining landscape connectivity of low-cost solutions for multilayer nets", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Thomas Laurent", "James von Brecht" ], "title": "The multilinear structure of relu networks", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Dawei Li", "Tian Ding", "Ruoyu Sun" ], "title": "Over-parameterized deep neural networks have no strict local minima for any continuous activations", "venue": "arXiv preprint arXiv:1812.11039,", "year": 2018 }, { "authors": [ "Yuanzhi Li", "Yingyu Liang" ], "title": "Learning overparameterized neural networks via stochastic gradient descent on structured data", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Yuanzhi Li", "Yang Yuan" ], "title": "Convergence analysis of two-layer neural networks with relu activation", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Shiyu Liang", "Ruoyu Sun", "Yixuan Li", "Rayadurgam Srikant" ], "title": "Understanding the loss surface of neural networks for binary 
classification", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Geert Litjens", "Thijs Kooi", "Babak Ehteshami Bejnordi", "Arnaud Arindra Adiyoso Setio", "Francesco Ciompi", "Mohsen Ghafoorian", "Jeroen Awm Van Der Laak", "Bram Van Ginneken", "Clara I Sánchez" ], "title": "A survey on deep learning in medical image analysis", "venue": "Medical Image Analysis,", "year": 2017 }, { "authors": [ "Haihao Lu", "Kenji Kawaguchi" ], "title": "Depth creates no bad local minima", "venue": "arXiv preprint arXiv:1702.08580,", "year": 2017 }, { "authors": [ "Song Mei", "Yu Bai", "Andrea Montanari" ], "title": "The landscape of empirical risk for nonconvex losses", "venue": "The Annals of Statistics,", "year": 2018 }, { "authors": [ "Quynh Nguyen" ], "title": "On connected sublevel sets in deep learning", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Quynh Nguyen", "Matthias Hein" ], "title": "Optimization landscape and expressivity of deep cnns", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Quynh Nguyen", "Mahesh Chandra Mukkamala", "Matthias Hein" ], "title": "On the loss landscape of a class of deep neural networks with no bad local valleys", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Samet Oymak", "Mahdi Soltanolkotabi" ], "title": "Overparameterized nonlinear learning: Gradient descent takes the shortest path", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Itay Safran", "Ohad Shamir" ], "title": "Spurious local minima are common in two-layer relu neural networks", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Levent Sagun", "Léon Bottou", "Yann LeCun" ], "title": "Singularity of the hessian in deep learning", "venue": "arXiv preprint arXiv:1611.07476,", "year": 2016 }, { "authors": [ "Levent Sagun", "Utku Evci", "V Ugur Guney", "Yann Dauphin", "Leon Bottou" ], "title": "Empirical analysis of the hessian of over-parametrized neural networks", "venue": "In International Conference on Learning Representations Workshop,", "year": 2018 }, { "authors": [ "David Silver", "Aja Huang", "Chris J Maddison", "Arthur Guez", "Laurent Sifre", "George Van Den Driessche", "Julian Schrittwieser", "Ioannis Antonoglou", "Veda Panneershelvam", "Marc Lanctot" ], "title": "Mastering the game of go with deep neural networks and tree", "venue": "search. 
Nature,", "year": 2016 }, { "authors": [ "Mahdi Soltanolkotabi" ], "title": "Learning relus via gradient descent", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Mahdi Soltanolkotabi", "Adel Javanmard", "Jason D Lee" ], "title": "Theoretical insights into the optimization landscape of over-parameterized shallow neural networks", "venue": "IEEE Transactions on Information Theory,", "year": 2018 }, { "authors": [ "Daniel Soudry", "Elad Hoffer" ], "title": "Exponentially vanishing sub-optimal local minima in multilayer neural networks", "venue": "In International Conference on Learning Representations Workshop,", "year": 2018 }, { "authors": [ "Grzegorz Swirszcz", "Wojciech Marian Czarnecki", "Razvan Pascanu" ], "title": "Local minima in training of deep networks", "venue": "arXiv preprint arXiv:1611:06310,", "year": 2016 }, { "authors": [ "Yuandong Tian" ], "title": "An analytical formula of population gradient for two-layered relu network and its applications in convergence and critical point analysis", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Gang Wang", "Georgios B Giannakis", "Jie Chen" ], "title": "Learning relu networks on linearly separable data: Algorithm, optimality, and generalization", "venue": "IEEE Transactions on Signal Processing,", "year": 2019 }, { "authors": [ "Ian H Witten", "Eibe Frank", "Mark A Hall", "Christopher J Pal" ], "title": "Data Mining: Practical machine learning tools and techniques", "venue": null, "year": 2016 }, { "authors": [ "Chenwei Wu", "Jiajun Luo", "Jason D Lee" ], "title": "No spurious local minima in a two hidden unit relu network", "venue": "In International Conference on Learning Representation Workshop,", "year": 2018 }, { "authors": [ "Bo Xie", "Yingyu Liang", "Le Song" ], "title": "Diverse neural network learns true target functions", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2017 }, { "authors": [ "Chulhee Yun", "Suvrit Sra", "Ali Jadbabaie" ], "title": "Global optimality conditions for deep neural networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Chulhee Yun", "Suvrit Sra", "Ali Jadbabaie" ], "title": "Efficiently testing local optimality and escaping saddles for reLU networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Chulhee Yun", "Suvrit Sra", "Ali Jadbabaie" ], "title": "Small nonlinearities in activation functions create bad local minima in neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Hongyang Zhang", "Junru Shao", "Ruslan Salakhutdinov" ], "title": "Deep neural networks with multi-branch architectures are intrinsically less non-convex", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "Xiao Zhang", "Yaodong Yu", "Lingxiao Wang", "Quanquan Gu" ], "title": "Learning one-hidden-layer relu networks via gradient descent", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "Kai Zhong", "Zhao Song", "Prateek Jain", "Peter L Bartlett", "Inderjit S Dhillon" ], "title": "Recovery guarantees for one-hidden-layer neural networks", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Pan Zhou", "Jiashi Feng" ], "title": "Empirical risk 
landscape analysis for understanding deep neural networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Yi Zhou", "Yingbin Liang" ], "title": "Critical points of neural networks: Analytical forms and landscape properties", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Yi Zhou", "Junjie Yang", "Huishuai Zhang", "Yingbin Liang", "Vahid Tarokh" ], "title": "SGD converges to global minimum in deep learning via star-convex path", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Difan Zou", "Yuan Cao", "Dongruo Zhou", "Quanquan Gu" ], "title": "Stochastic gradient descent optimizes over-parameterized deep relu networks", "venue": "Machine Learning,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Neural networks have been successfully deployed in many real-world applications (LeCun et al., 2015; Witten et al., 2016; Silver et al., 2016; He et al., 2016; Litjens et al., 2017). In spite of this, the theoretical foundations of neural networks are somewhat premature. To the many deficiencies in our knowledge of deep learning theory, the investigation into the loss surfaces of neural networks is of fundamental importance. Understanding the loss surface would be helpful in several relevant research areas, such as the ability to estimate data distributions, the optimization of neural networks, and the generalization to unseen data.\nThis paper studies the role of the nonlinearities in activation functions in shaping the loss surfaces of neural networks. Our results demonstrate that the impact of nonlinearities is profound.\nFirst, we prove that the loss surfaces of nonlinear neural networks are substantially different to those of linear neural networks, in which local minima are created equal, and also, they are all global minima (Kawaguchi, 2016; Baldi & Hornik, 1989; Lu & Kawaguchi, 2017; Freeman & Bruna, 2017; Zhou & Liang, 2018; Laurent & von Brecht, 2018; Yun et al., 2018). By contrast,\nNeural networks with arbitrary depth and arbitrary piecewise linear activations (excluding linear functions) have infinitely many spurious local minima under arbitrary continuously differentiable loss functions.\n∗Both authors contributed equally. †Bohan Wang is also affiliated with University of Science and Technology of China. This work was completed when he was a summer intern at UBTECH Sydney AI Centre, School of Computer Science, Faculty of Engineering, the University of Sydney.\nThis result only relies on four mild assumptions that cover most practical circumstances: (1) the training sample set is linearly inseparable; (2) all training sample points are distinct; (3) the output layer is narrower than the other hidden layers; and (4) there exists some turning point in the piecewise linear activations that the sum of the slops on the two sides does not equal to 0.\nOur result significantly extends the existing study on the existence of spurious local minimum. For example, Zhou & Liang (2018) prove that one-hidden-layer neural networks with two nodes in the hidden layer and two-piece linear (ReLU-like) activations have spurious local minima; Swirszcz et al. (2016) prove that ReLU networks have spurious local minima under the squared loss when most of the neurons are not activated; Safran & Shamir (2018) present a computer-assisted proof that two-layer ReLU networks have spurious local minima; a recent work (Yun et al., 2019b) have proven that neural networks with two-piece linear activations have infinite spurious local minima, but the results only apply to the networks with one hidden layer and one-dimensional outputs; and a concurrent work (Goldblum et al., 2020) proves that for multi-layer perceptrons of any depth, the performance of every local minimum on the training data equals to a linear model, which is also verified by experiments.\nThe proposed theorem is proved in three stages: (1) we prove that neural networks with one hidden layer and two-piece linear activations have spurious local minima; (2) we extend the conditions to neural networks with arbitrary hidden layers and two-piece linear activations; and (3) we further extend the conditions to neural networks with arbitrary depth and arbitrary piecewise linear activations. 
Since some parameters of the constructed spurious local minima are drawn from continuous intervals, we obtain infinitely many spurious local minima. At each stage, the proof follows a two-step strategy: (a) construct an infinite series of local minima; and (b) construct a point in the parameter space whose empirical risk is lower than the constructed local minimum in Step (a). This strategy is inspired by Yun et al. (2019b), but we have made significant and non-trivial developments.

Second, we draw a "big picture" of the loss surfaces of nonlinear neural networks. Soudry & Hoffer (2018) highlight a smooth and multilinear partition of the loss surfaces of neural networks. The nonlinearities in the piecewise linear activations partition the loss surface of any nonlinear neural network into multiple smooth and multilinear open cells. Specifically, every nonlinear point in the activation functions creates a group of non-differentiable boundaries between the cells, while the linear parts of activations correspond to the smooth and multilinear interiors. Based on this partition, we discover a degenerate nature of the large number of local minima, from the following aspects:

• Every local minimum is globally minimal within a cell. This property demonstrates that the local geometry within every cell is similar to the global geometry of linear networks, although technically, they are substantially different. It applies to any one-hidden-layer neural network with two-piece linear activations for regression under convex loss. We rigorously prove this property in two stages: (1) we prove that within every cell, the empirical risk $\hat{R}$ is convex with respect to a variable $\hat{W}$ mapped from the weights $W$ by a mapping $Q$. Therefore, the local minima with respect to the variable $\hat{W}$ are also the global minima in the cell; and then (2) we prove that local optimality is maintained under the constructed mapping. Specifically, the local minima of the empirical risk $\hat{R}$ with respect to the parameter $W$ are also the local minima with respect to the variable $\hat{W}$. We thereby prove this property by combining the convexity and the correspondence of the minima. This proof is technically novel and non-trivial, though the intuitions are natural.

• Equivalence classes and quotient space of local minimum valleys. All local minima in a cell are concentrated as a local minimum valley: on a local minimum valley, all local minima are connected with each other by a continuous path, on which the empirical risk is invariant. Further, all these local minima constitute an equivalence class. This local minimum valley may have several parallel valleys that are in the same equivalence class but do not appear because of the constraints from cell boundaries. If such constraints are ignored, all the equivalence classes constitute a quotient space. The constructed mapping $Q$ is exactly the quotient map. This result coincides with the property of mode connectivity: the minima found by gradient-based methods are connected by a path in the parameter space with almost invariant empirical risk (Garipov et al., 2018; Draxler et al., 2018; Kuditipudi et al., 2019). Additionally, this property suggests that we would need to study every local minimum valley as a whole.

• Linear collapse. Linear neural networks are covered by our theories as a simplified case.
When all activations are linear, the partitioned loss surface collapses to one single cell, in which all local minima are globally optimal, as suggested by the existing works on linear networks (Kawaguchi, 2016; Baldi & Hornik, 1989; Lu & Kawaguchi, 2017; Freeman & Bruna, 2017; Zhou & Liang, 2018; Laurent & von Brecht, 2018; Yun et al., 2018).

Notations. If $M$ is a matrix, $M_{i,j}$ denotes the $(i,j)$-th component of $M$. If $M$ is a vector, $M_i$ denotes the $i$-th component of $M$. Define $E_{ij}$ as a matrix in which the $(i,j)$-th component is 1 while all other components are 0. Also, denote by $e_i$ a vector whose $i$-th component is 1 while all others are 0. Additionally, we define $\mathbf{1}_k \in \mathbb{R}^{k \times 1}$ as a vector whose components are all 1, while those of $\mathbf{0}_{n \times m} \in \mathbb{R}^{n \times m}$ (or briefly, $\mathbf{0}$) are all 0. For brevity, $[i:j]$ denotes $\{i, \dots, j\}$." }, { "heading": "2 RELATED WORK", "text": "Some works suggest that linear neural networks have no spurious local minima. Kawaguchi (2016) proves that linear neural networks with squared loss do not have any spurious local minimum under three assumptions about the data matrix $X$ and the label matrix $Y$: (1) both matrices $XX^T$ and $XY^T$ have full rank; (2) the input layer is wider than the output layer; and (3) the eigenvalues of the matrix $YX^T(XX^T)^{-1}XY^T$ are distinct from each other. Zhou & Liang (2018) give an analytic formulation of the critical points for the loss function of deep linear networks, and thereby obtain a group of equivalent conditions for a critical point to be a global minimum. Lu & Kawaguchi (2017) prove the argument under the assumption that both matrices $X$ and $Y$ have full rank, which is even more restrictive. However, in practice, the activations of most neural networks are not linear. The nonlinearities make the loss surface extremely non-convex and even non-smooth, and therefore far different from the linear case.

The loss surfaces of over-parameterized neural networks have some special properties. Choromanska et al. (2015) empirically suggest that: (1) most local minima of over-parameterized networks are equivalent; and (2) small-size networks have spurious local minima, but the probability of finding one decreases rapidly with the network size. Li et al. (2018) prove that over-parameterized fully-connected deep neural networks with continuous activation functions and convex, differentiable loss functions have no bad strict local minimum. Nguyen et al. (2019) suggest that "sufficiently over-parameterized" neural networks have no bad local valley under the cross-entropy loss. Nguyen (2019) further suggests that the global minima of sufficiently over-parameterized neural networks are connected within a unique valley. Many other works study the convergence, generalization, and other properties of stochastic gradient descent on the loss surfaces of over-parameterized networks (Chizat & Bach; Arora et al., 2018; Brutzkus et al., 2018; Du et al., 2019; Soltanolkotabi et al., 2018; Allen-Zhu et al., 2019a;b; Oymak & Soltanolkotabi, 2019).

Many advances on the loss surfaces of neural networks are focused on other problems. Zhou & Feng (2018) and Mei et al. (2018) prove that the empirical risk surface and expected risk surface are linked. This correspondence highlights the value of investigating loss surfaces (empirical risk surfaces) for the study of generalization (the gap between empirical risks and expected risks).
Hanin & Rolnick (2019) demonstrate that the input space of neural networks with piecewise linear activations is partitioned into multiple regions, while our work focuses on the partition of the loss surface. Xie et al. (2017) prove that the training error and test error are upper bounded by the magnitude of the gradient, under the assumption that the geometric discrepancy of the parameter W is bounded. Sagun et al. (2016; 2018) present empirical results showing that the eigenvalues of the Hessian of the loss surface are two-fold: (1) a bulk centered close to zero; and (2) outliers away from the bulk. Kawaguchi & Kaelbling (2020) prove that, for almost any neural network used in practice, the spurious local minima can be eliminated by adding one unit per output unit. Tian (2017); Andrychowicz et al. (2016); Soltanolkotabi (2017); Zhong et al. (2017); Brutzkus & Globerson (2017); Li & Yuan (2017); Zou et al. (2019); Li & Liang (2018); Du et al. (2018a; 2019); Zhang et al. (2019b); Zhou et al. (2019); Wang et al. (2019) study optimization methods for neural networks. Other relevant works include Sagun et al. (2016; 2018); Nguyen & Hein (2018); Du et al. (2018b); Haeffele & Vidal (2017); Liang et al. (2018); Wu et al. (2018); Yun et al. (2019a); Zhang et al. (2019a); Kuditipudi et al. (2019); Garipov et al. (2018); Draxler et al. (2018); He et al. (2019); Kawaguchi & Kaelbling (2020).

3 NEURAL NETWORKS HAVE INFINITELY MANY SPURIOUS LOCAL MINIMA

This section investigates the existence of spurious local minima on the loss surfaces of neural networks. We find that almost all practical neural networks have infinitely many spurious local minima. This result holds for any neural network with arbitrary depth and arbitrary piecewise linear activations (excluding linear functions) under any continuously differentiable loss.

3.1 PRELIMINARIES

Consider a training sample set {(X_1, Y_1), (X_2, Y_2), . . . , (X_n, Y_n)} of size n. Suppose the dimensions of the feature X_i and the label Y_i are d_X and d_Y, respectively. By aggregating the training sample set, we obtain the feature matrix X ∈ R^{d_X×n} and the label matrix Y ∈ R^{d_Y×n}. Suppose a neural network has L layers. Denote the weight matrix, bias, and activation of the j-th layer respectively by W_j ∈ R^{d_j×d_{j−1}}, b_j ∈ R^{d_j}, and h : R^{d_j×n} → R^{d_j×n}, where d_j is the dimension of the output of the j-th layer. Also, for the input matrix X, the output of the j-th layer is denoted by Y^{(j)}, and the output of the j-th layer before the activation is denoted by Ỹ^{(j)}:

$$\tilde{Y}^{(j)} = W_j Y^{(j-1)} + b_j \mathbf{1}_n^\top, \qquad (1)$$
$$Y^{(j)} = h\left(W_j Y^{(j-1)} + b_j \mathbf{1}_n^\top\right). \qquad (2)$$

The output of the network is defined as follows,

$$\hat{Y} = h_L\left(W_L\, h_{L-1}\left(W_{L-1}\, h_{L-2}\left(\cdots h_1\left(W_1 X + b_1 \mathbf{1}_n^\top\right)\cdots\right) + b_{L-1}\mathbf{1}_n^\top\right) + b_L \mathbf{1}_n^\top\right). \qquad (3)$$

Also, we define Y^{(0)} = X, Y^{(L)} = Ŷ, d_0 = d_X, and d_L = d_Y. In some situations, we write Ŷ([W_i]_{i=1}^L, [b_i]_{i=1}^L) to clarify the parameters, and similarly for Ỹ^{(j)}, Y^{(j)}, etc.

This section discusses neural networks with piecewise linear activations. A part of the proof uses two-piece linear activations h_{s−,s+}, which are defined as follows,

$$h_{s_-,s_+}(x) = \mathbb{I}_{\{x \le 0\}}\, s_- x + \mathbb{I}_{\{x > 0\}}\, s_+ x, \qquad (4)$$

where |s_+| ≠ |s_−| and $\mathbb{I}_{\{\cdot\}}$ is the indicator function.

Remark. Piecewise linear functions are dense in the space of continuous functions. In other words, for any continuous function, we can always find a piecewise linear function that approximates it to within an arbitrarily small distance.
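To make the setup concrete, the following NumPy sketch (a minimal illustration; all widths, slopes, and data are chosen arbitrarily and are not from the paper) implements the two-piece linear activation of eq. (4) and the layer recursion of eqs. (1)-(3):

```python
# Sketch of the forward pass in eqs. (1)-(3) with the activation of eq. (4).
import numpy as np

def h(x, s_minus=0.2, s_plus=1.0):
    """Two-piece linear activation h_{s-,s+} of eq. (4); s_minus=0, s_plus=1 gives ReLU."""
    return np.where(x > 0, s_plus * x, s_minus * x)

def forward(X, weights, biases):
    """Y^(0) = X; Ytilde^(j) = W_j Y^(j-1) + b_j 1_n^T; Y^(j) = h(Ytilde^(j))."""
    Y = X
    for W, b in zip(weights, biases):
        Y = h(W @ Y + b)  # the column vector b broadcasts as b 1_n^T
    return Y

rng = np.random.default_rng(0)
d = [4, 8, 8, 2]   # d_X, d_1, d_2, d_Y; hidden layers wider than the output layer
n = 16
weights = [rng.normal(size=(d[j + 1], d[j])) for j in range(len(d) - 1)]
biases = [rng.normal(size=(d[j + 1], 1)) for j in range(len(d) - 1)]
X = rng.normal(size=(d[0], n))
print(forward(X, weights, biases).shape)   # (d_Y, n), i.e., the prediction matrix of eq. (3)
```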
This section uses continuously differentiable losses to evaluate the performance of neural networks. Continuous differentiability is defined as follows.

Definition 1 (Continuously differentiable). We call a function f : R^n → R continuously differentiable with respect to the variable x if: (1) the function f is differentiable with respect to x; and (2) the gradient ∇_x f(x) of the function f is continuous with respect to the variable x.

3.2 MAIN RESULT

The theorem in this section relies on the following assumptions.

Assumption 1. The training data cannot be fit by a linear model.
Assumption 2. All data points are distinct.
Assumption 3. All hidden layers are wider than the output layer.
Assumption 4. For the piecewise linear activations, there exists some turning point at which the sum of the slopes on the two sides does not equal 0.

To the best of our knowledge, our assumptions are the least restrictive compared with the relevant works in the literature. These assumptions are respectively justified as follows: (1) most real-world datasets are extremely complex and cannot be fit by linear models; (2) it is easy to guarantee that the data points are distinct by employing data cleansing methods; (3) for regression and many classification tasks, the width of the output layer is limited and narrower than that of the hidden layers; and (4) this assumption is invalid only for activations like f(x) = a|x|. Based on these four assumptions, we can prove the following theorem.

Theorem 1. Neural networks with arbitrary depth and arbitrary piecewise linear activations (excluding linear functions) have infinitely many spurious local minima under any continuously differentiable loss whose derivative can equal 0 only when the prediction and the label are the same.

In practice, most loss functions are continuously differentiable with a derivative that can equal 0 only when the prediction and the label are the same; examples include squared loss and cross-entropy loss (see Appendix A.1, Lemmas 2 and 3). Squared loss is a standard loss for regression and is defined as half the squared norm of the difference between the ground-truth label and the prediction:

$$l_2\left(Y_i, \hat{Y}_i\right) = \frac{1}{2}\left\|Y_i - \hat{Y}_i\right\|_F^2. \qquad (5)$$

Meanwhile, cross-entropy loss is the standard loss in multiclass classification. Treating the softmax function as a part of the loss function, it is defined as

$$l_{ce}(Y_i, \hat{Y}_i) = -\sum_{j=1}^{d_Y} Y_{i,j} \log\left(\frac{e^{\hat{Y}_{i,j}}}{\sum_{k=1}^{d_Y} e^{\hat{Y}_{i,k}}}\right). \qquad (6)$$

One can also remove Assumption 4 if Assumption 3 is replaced by the following assumption, which is mildly more restrictive (see a detailed proof in pp. 34–37).

Assumption 5. The dimensions of the layers satisfy

$$d_1 \ge d_Y + 2, \qquad d_i \ge d_Y + 1, \quad i = 2, \ldots, L - 1.$$
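As a quick numerical sanity check on the loss conditions used in Theorem 1 (an illustrative snippet, not part of the formal argument; the label and prediction below are arbitrary), the analytic gradients of eqs. (5) and (6), namely Ŷ_i − Y_i and softmax(Ŷ_i) − Y_i (cf. eq. (10) in Appendix A.1), can be verified against finite differences; neither gradient vanishes unless the prediction, or its softmax, coincides with the label:

```python
# Finite-difference check of the gradients of the losses in eqs. (5) and (6).
import numpy as np

def squared(y, yhat):                       # eq. (5)
    return 0.5 * np.sum((y - yhat) ** 2)

def cross_entropy(y, yhat):                 # eq. (6), softmax folded into the loss
    p = np.exp(yhat) / np.sum(np.exp(yhat))
    return -np.sum(y * np.log(p))

def num_grad(loss, y, yhat, eps=1e-6):
    g = np.zeros_like(yhat)
    for k in range(yhat.size):
        e = np.zeros_like(yhat)
        e[k] = eps
        g[k] = (loss(y, yhat + e) - loss(y, yhat - e)) / (2 * eps)
    return g

y = np.array([0.0, 1.0, 0.0])               # one-hot label
yhat = np.array([0.3, -0.4, 1.2])           # an arbitrary prediction
p = np.exp(yhat) / np.sum(np.exp(yhat))     # softmax(yhat)

assert np.allclose(num_grad(squared, y, yhat), yhat - y, atol=1e-4)
assert np.allclose(num_grad(cross_entropy, y, yhat), p - y, atol=1e-4)
# For a one-hot y, softmax(yhat) has strictly positive entries and can never equal y,
# matching the contradiction used in the proof of Lemma 3.
```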
Our result demonstrates that introducing nonlinearities into the activations substantially reshapes the loss surface: they bring infinitely many spurious local minima onto it. This result highlights the substantial difference from linear neural networks, for which all local minima are equally good and are therefore all global minima (Kawaguchi, 2016; Baldi & Hornik, 1989; Lu & Kawaguchi, 2017; Freeman & Bruna, 2017; Zhou & Liang, 2018; Laurent & von Brecht, 2018; Yun et al., 2018).

Some works have noticed the existence of spurious local minima on the loss surfaces of nonlinear neural networks, but with limited applicable domains (Choromanska et al., 2015; Swirszcz et al., 2016; Safran & Shamir, 2018; Yun et al., 2019b). A notable work by Yun et al. (2019b) proves that one-hidden-layer neural networks with two-piece linear (ReLU-like) activations for one-dimensional regression have infinitely many spurious local minima under squared loss. That work first constructs a series of local minima and then proves that they are spurious. This idea inspires parts of our work. However, our work makes significant and non-trivial developments that extend the conditions to arbitrary depth, arbitrary piecewise linear activations excluding linear functions, and arbitrary continuously differentiable loss.

3.3 PROOF SKELETON

This section presents the skeleton of the proof. Theorem 1 is proved in three stages. We first prove a simplified version of Theorem 1 and then extend the conditions in the last two stages. The proof is partially inspired by Yun et al. (2019b), but the proof in this paper makes non-trivial developments, and the results are significantly extended.

Yun et al. (2019b) and our paper both employ the following strategy: (a) construct a series of local minima based on a linear classifier; and (b) construct a new point with smaller empirical risk, thereby proving that the constructed local minima are spurious. However, due to the differences in the loss function and the output dimensions, the exact constructions of the local minima are substantially different.

Our extensions from Yun et al. (2019b) are three-fold. (1) From one hidden layer to arbitrary depth: to prove that networks of arbitrary depth have infinitely many spurious local minima, we develop a novel strategy that employs transformation operations to force the data to flow through the same linear parts of the activations, in order to construct the spurious local minima. (2) From squared loss to arbitrary differentiable loss: Yun et al. (2019b) calculate the analytic forms of the derivatives of the loss to construct the local minima and then prove that they are spurious. This technique cannot be transplanted to arbitrary differentiable loss functions, because we cannot assume an analytic form. To prove that the loss surface under an arbitrary differentiable loss has infinitely many spurious local minima, we employ a new proof technique based on Taylor series and a new separation lemma. (3) From one-dimensional output to arbitrary-dimensional output: to prove that the loss surface of a neural network with an arbitrary-dimensional output has infinitely many spurious local minima, we need to deal with the calculus of functions whose domain and codomain are a matrix space and a vector space, respectively. By contrast, when the output dimension is one, the codomain is only the space of real numbers.
Therefore, the extension of the output dimension significantly mounts the difficulty of the whole proof.\nStage (1): Neural networks with one hidden layer and two-piece linear activations.\nWe first prove that nonlinear neural networks with one hidden layer and two-piece linear activation functions (ReLU-like activations) have spurious local minima. The proof in this stage further follows a two-step strategy:\n(a) We first construct local minima of the empirical risk R̂ (see Appendix A.2, Lemma 4). These local minimizers are constructed based on a linear neural network which has the same network size (dimension of weight matrices) and evaluated under the same loss. The design of the hidden layer guarantees that the components of the output Ỹ (1) in the hidden layer before the activation are all positive. The activation is thus effectively reduced to a linear function. Therefore, the local geometry around the local minima with respect to the weights W is similar to those of linear neural networks. Further, the design of the output layer guarantees that its output Ŷ is the same as the linear neural network. This construction helps to utilize the results of linear neural networks to solve the problems in nonlinear neural networks.\n(b) We then prove that all the constructed local minima in Step (a) are spurious (see Appendix A.2, Theorem 4). Specifically, we assumed by Assumption 1 that the dataset cannot be fit by a linear model. Therefore, the gradient ∇Ŷ R̂ of the empirical risk R̂ with respect to the prediction Ŷ is not zero. Suppose the i-th row of the gradient ∇Ŷ R̂ is not zero. Then, we use Taylor series and a preparation lemma (see Appendix A.5, Lemma 7) to construct another point in the parameter space that has smaller empirical risk. Therefore, we prove that the constructed local minima are spurious. Furthermore, the constructions involve some parameters that are randomly picked from a continuous interval. Thus, we constructed infinitely many spurious local minima.\nStage (2) - Neural networks with arbitrary hidden layers and two-piece linear activations.\nWe extend the condition in Stage (1) to any neural network with arbitrary depth and two-piece linear activations. The proof in this stage follows the same two-step strategy but has different implementations:\n(a) We first construct a series of local minima of the empirical risk R̂ (see Appendix A.3, Lemma 5). The construction guarantees that every component of the output Ỹ (i) in each layer before the activations is positive, which secure all the input examples flow through the same part of the activations. Thereby, the nonlinear activations are reduced to linear functions. Also, our construction guarantees that the output Ŷ of the network is the same as a linear network with the same weight matrix dimensions.\n(b) We then prove that the constructed local minima are spurious (see Appendix A.3, Theorem 5). The idea is to find a point in the parameter space that has the same empirical risk R̂ with the constructed point in Stage (1), Step (b).\nStage (3) - Neural networks with arbitrary hidden layer and piecewise linear activations.\nWe further extend the conditions in Stage (2) to any neural network with arbitrary depth and arbitrary piecewise linear activations. We continue to adapt the two-step strategy in this stage:\n(a) We first construct a local minimizer of the empirical risk R̂ based on the results in Stages (1) and (2) (see Appendix A.4, Lemma 6). This construction is based on Stage (2), Step (a). 
The difference of the construction in this stage is that every linear part in activations can be a finite interval. The constructed weight matrices use several uniform scaling and translation operations to the outputs\nof hidden layers in order to guarantee that all the input training sample points flow through the same linear parts of the activations. We thereby reduce the nonlinear activations to linear functions, effectively. Also, our construction guarantees that the output Ŷ of the neural network equals to that of the corresponding linear neural network.\n(b) We then prove that the constructed local minima are spurious (see Appendix A.4). We use the same strategy in Stage (2), Step (b). Some adaptations are implemented for the new conditions." }, { "heading": "4 A BIG PICTURE OF THE LOSS SURFACE", "text": "This section draws a big picture for the loss surfaces of neural networks. Based on a recent result by Soudry & Hoffer (2018), we present four profound properties of the loss surface that collectively characterize how the nonlinearities in activations shape the loss surface." }, { "heading": "4.1 PRELIMINARIES", "text": "The discussions in this section use the following concepts.\nDefinition 2 (Open ball and open set). The open ball in H centered at x ∈ H and of radius r > 0 is defined by B(h, r) = {x : ‖x − h‖ < r}. A subset A ⊂ H of a space H is called a open set, if for every point h ∈ A, there exists a positive real r > 0, such that the open ball B(h, r) with center h and radius r is in the subset A: B(h, r) ⊂ A. Definition 3 (Interior point and interior). For a subset A ⊂ H of a spaceH, a point h ∈ A is called an interior point of A, if there exists a positive real r > 0, such that the open ball B(h, r) with center h and radius r is in the subset A: B(h, r) ⊂ A. The set of all the interior points of the set A is called the interior of the set A.\nDefinition 4 (Limit point, closure, and boundary). For a subset A ⊂ H of a spaceH, a point h ∈ A is called a limit point, if for every r > 0, the open ball B(h, r) with center h and radius r contains some point of A: B(h, r)∩A 6= ∅. The closure Ā of the set A consists of the union of the set A and all its limit points. The boundary ∂A is defined as the set of points which are in the closure of set A but not in the interior of set A.\nDefinition 5 (Multilinear). A function f : X1×X2 → Y is called multilinear if for arbitrary x11, x21 ∈ X , x12, x22 ∈ X2, and constants λ1, λ2, µ1, and µ2, we have\nf(λ1x 1 1+λ2x 2 1, µ1x 1 2+µ2x 2 2) = λ1µ1f(x 1 1, x 1 2)+λ1µ2f(x 1 1, x 2 2)+λ2µ1f(x 2 1, x 1 2)+λ2µ2f(x 2 1, x 2 2).\nRemark. The definition of “multilinear” implies that the domain of any multilinear function f is a connective and convex set, such as the smooth and multilinear cells below.\nDefinition 6 (Equivalence class, and quotient space). Suppose X is a linear space. [x] = {v ∈ X : v ∼ x} is an equivalence class, if there is an equivalent relation ∼ on [x], such that for any a, b, c ∈ [x], we have: (1) reflexivity: a ∼ a; (2) symmetry: if a ∼ b, b ∼ a; and (3) transitivity: if a ∼ b and b ∼ c, a ∼ c. The quotient space and quotient map are defined to be X/ ∼= {{v ∈ X : v ∼ x} : x ∈ X} and x→ [x], respectively." }, { "heading": "4.2 MAIN RESULTS", "text": "In this section, the loss surface is defined under convex loss with respect to the prediction Ŷ of the neural network. Convex loss covers many popular loss functions in practice, such as the squared loss for the regression tasks and many others based on norms. 
The triangle inequality of the norms secures the convexity of the corresponding loss functions. The convexity of the squared loss is checked in the appendix (see Appendix B, Lemma 8).

We now present four propositions that characterize the loss surfaces of nonlinear neural networks. These propositions give four major properties of the loss surface that collectively draw a big picture for the loss surface.

We first recall a lemma by Soudry & Hoffer (2018). It proves that the loss surfaces of neural networks have smooth and multilinear partitions.

Lemma 1 (Smooth and multilinear partition; cf. Soudry & Hoffer (2018)). The loss surfaces of neural networks of arbitrary depth with piecewise linear activations excluding linear functions are partitioned into multiple smooth and multilinear open cells, while the boundaries between them are non-differentiable.

Based on the smooth and multilinear partition, we prove four propositions as follows.

Theorem 2 (Analogous convexity). For one-hidden-layer neural networks with two-piece linear activations for regression under convex loss, within every cell, all local minima are equally good, and they are all global minima in the cell.

Theorem 3 (Equivalence classes of local minimum valleys). Suppose all conditions of Theorem 2 hold, and assume the loss function is strictly convex. Then, all local minima in a cell are concentrated as a local minimum valley: they are connected to each other by a continuous path and have the same empirical risk. Additionally, all local minima in a cell constitute an equivalence class.

Corollary 1 (Quotient space of local minimum valleys). Suppose all conditions of Theorem 3 hold. There might exist some "parallel" local minimum valleys in the equivalence class of a local minimum valley; they do not appear because of the constraints from the cell boundaries. If we ignore such constraints, all equivalence classes of local minimum valleys constitute a quotient space.

Corollary 2 (Linear collapse). The partitioned loss surface collapses to one single smooth and multilinear cell when all activations are linear.

4.3 DISCUSSIONS AND PROOF TECHNIQUES

The four propositions collectively characterize how the nonlinearities in activations shape the loss surfaces of neural networks. This section discusses the results and the structure of the proofs. Detailed proofs are omitted here and given in Appendix B.

Smooth and multilinear partition. Intuitively, the nonlinearities in the piecewise linear activation functions partition the surface into multiple smooth and multilinear cells. Zhou & Liang (2018) and Soudry & Hoffer (2018) highlight the partition of the loss surface; we restate it here to make the picture self-contained. A similar but markedly different notion recently proposed by Hanin & Rolnick (2019) demonstrates that the input data space is partitioned into multiple linear regions, while our work focuses on the partition in the parameter space.

Every local minimum is globally minimal within a cell. In convex optimization, convexity guarantees that all local minima are global minima. This theorem proves that the local minima within a cell are equally good, and that they are all global minima in the cell. This result is not surprising given the excellent training performance of deep learning algorithms; however, the proof is technically non-trivial.

Soudry & Hoffer (2018) proved that the local minima in a cell are all equally good.
However, there may exist points near the cell boundary that have smaller empirical risk and are not local minima. Unfortunately, the proof by Soudry & Hoffer (2018) cannot exclude this possibility. By contrast, our proof completely resolves this problem. Furthermore, our proof holds for any convex loss, including squared loss and cross-entropy loss, whereas the result of Soudry & Hoffer (2018) only holds for squared loss.

This property is challenging to prove because the proof techniques for linear networks cannot be transplanted here. Technically, a linear network can be expressed as the product of a sequence of weight matrices, which guarantees good geometrical properties; specifically, the effect of every linear activation function is equivalent to multiplying the output by a real constant. The loss surface within a cell of a nonlinear neural network does not have this property. Below is the skeleton of our proof.

We first prove that the empirical risk R̂ is a convex function within every cell with respect to a variable Ŵ calculated from the weights W. Therefore, all local minima of the empirical risk R̂ with respect to Ŵ are also globally optimal in the cell. Every cell corresponds to a specific series of linear parts of the activations. Therefore, in any fixed cell, the activation h_{s−,s+} can be expressed through the slopes of the corresponding linear parts as

$$\hat{R}(W_1, W_2) = \frac{1}{n}\sum_{i=1}^{n} l\left(y_i,\, W_2\, h(W_1 x_i)\right) = \frac{1}{n}\sum_{i=1}^{n} l\left(y_i,\, W_2\,\mathrm{diag}(A_{\cdot,i})\,W_1 x_i\right), \qquad (7)$$

where A_{·,i} is the i-th column of the matrix

$$A = \begin{pmatrix} h'_{s_-,s_+}\big((W_1)_{1,\cdot}\,x_1\big) & \cdots & h'_{s_-,s_+}\big((W_1)_{1,\cdot}\,x_n\big) \\ \vdots & \ddots & \vdots \\ h'_{s_-,s_+}\big((W_1)_{d_1,\cdot}\,x_1\big) & \cdots & h'_{s_-,s_+}\big((W_1)_{d_1,\cdot}\,x_n\big) \end{pmatrix}.$$

The matrix A is constituted by collecting the slopes of the activation h at every point (W_1)_{i,·} x_j.

Different entries of the matrix A can equal either of the two slopes {s_−, s_+}. Therefore, we cannot use a single constant to express the effect of the activation, and thus, even within a cell, a nonlinear network cannot be expressed as the product of a sequence of weight matrices. This difference is why the proofs for deep linear neural networks cannot be transplanted here.

Then, we prove that (see p. 40)

$$W_2\,\mathrm{diag}(A_{\cdot,i})\,W_1 x_i = A_{\cdot,i}^\top\,\mathrm{diag}(W_2)\,W_1 x_i. \qquad (8)$$

Applying eq. (8) to eq. (7), the empirical risk R̂ takes a formulation similar to that of linear neural networks,

$$\hat{R} = \frac{1}{n}\sum_{i=1}^{n} l\left(y_i,\; A_{\cdot,i}^\top\,\mathrm{diag}(W_2)\,W_1 x_i\right). \qquad (9)$$

Afterwards, define Ŵ_1 = diag(W_2) W_1 and straighten the matrix Ŵ_1 into the row vector

$$\hat{W} = \left((\hat{W}_1)_{1,\cdot} \;\; \cdots \;\; (\hat{W}_1)_{d_1,\cdot}\right).$$

Define Q : (W_1, W_2) ↦ Ŵ, and also define

$$\hat{X} = \left(A_{\cdot,1} \otimes x_1 \;\; \cdots \;\; A_{\cdot,n} \otimes x_n\right).$$

We can prove (see p. 41) that

$$\left(A_{\cdot,1}^\top \hat{W}_1 x_1 \;\; \cdots \;\; A_{\cdot,n}^\top \hat{W}_1 x_n\right) = \hat{W}\hat{X}.$$

Applying eq. (9), the empirical risk is transformed into a convex function as follows,

$$\hat{R} = \frac{1}{n}\sum_{i=1}^{n} l\left(y_i, A_{\cdot,i}^\top \hat{W}_1 x_i\right) = \frac{1}{n}\sum_{i=1}^{n} l\left(y_i, \hat{W}\hat{X}_{\cdot,i}\right).$$

We then prove that the local optimality of the empirical risk R̂ is maintained when the weights W are mapped to the variable Ŵ. Specifically, the local minima of the empirical risk R̂ with respect to the weights W are also local minima with respect to the variable Ŵ. The maintenance of optimality is not surprising, but the proof is technically non-trivial (see a detailed proof in pp. 42-43).
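The reduction above is easy to verify numerically. The following NumPy sketch (our illustration; the slopes, widths, and random data are arbitrary assumptions, and the bias terms are omitted as in eq. (7)) fixes the slope pattern A realized by a random (W1, W2) and checks that the network output agrees with the right-hand side of eq. (8) and with the linear form Ŵ X̂ built from the Kronecker features A_{·,i} ⊗ x_i:

```python
# Numerical check of eqs. (7)-(9): with the slope pattern A fixed, the output of a
# one-hidden-layer network is linear in What = (rows of diag(W2) W1, concatenated).
import numpy as np

rng = np.random.default_rng(1)
s_minus, s_plus = 0.3, 1.0
d_X, d_1, n = 4, 6, 10
W1 = rng.normal(size=(d_1, d_X))
W2 = rng.normal(size=(1, d_1))            # scalar output, as in the regression setting
X = rng.normal(size=(d_X, n))

pre = W1 @ X                               # preactivations (W1)_{j,.} x_i
A = np.where(pre > 0, s_plus, s_minus)     # the slope matrix A
out_network = W2 @ (A * pre)               # W2 diag(A_{.,i}) W1 x_i for every i

# eq. (8): W2 diag(A_{.,i}) W1 x_i = A_{.,i}^T diag(W2) W1 x_i
W1_hat = np.diag(W2[0]) @ W1
out_reduced = np.einsum('ji,ji->i', A, W1_hat @ X)
assert np.allclose(out_network[0], out_reduced)

# eq. (9): the same outputs arise as What @ Xhat with Xhat_{.,i} = kron(A_{.,i}, x_i)
What = W1_hat.reshape(1, -1)               # rows of W1_hat concatenated into a row vector
Xhat = np.stack([np.kron(A[:, i], X[:, i]) for i in range(n)], axis=1)
assert np.allclose(What @ Xhat, out_network)
print("within the cell, the model is linear in What")
```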
Equivalence classes and quotient space of local minimum valleys. The constructed mapping Q is a quotient map. Under the setting of the previous property, all local minima in a cell constitute an equivalence class; they are concentrated as a local minimum valley. However, there might exist some "parallel" local minimum valleys in the equivalence class, which do not appear because of the constraints from the cell boundaries. Further, for neural networks of arbitrary depth, we have also constructed a local minimum valley (the spurious local minima constructed in Section 3). This result explains the property of mode connectivity, namely that the minima found by gradient-based methods are connected by a path in the parameter space with almost constant empirical risk, as observed in two empirical works (Garipov et al., 2018; Draxler et al., 2018). A recent theoretical work (Kuditipudi et al., 2019) proves that dropout stability and noise stability guarantee mode connectivity.

Linear collapse. Our theories also cover the case of linear neural networks. Linear neural networks do not have any nonlinearity in their activations; correspondingly, the loss surface does not have any non-differentiable boundaries. In our theories, when there is no nonlinearity in the activations, the partitioned loss surface collapses to a single smooth, multilinear cell. All local minima therein are equally good, and they are all global minima. This result is consistent with the existing results on linear neural networks (Kawaguchi, 2016; Baldi & Hornik, 1989; Lu & Kawaguchi, 2017; Freeman & Bruna, 2017; Zhou & Liang, 2018; Laurent & von Brecht, 2018; Yun et al., 2018).

5 CONCLUSION AND FUTURE DIRECTIONS

This paper reports that the nonlinearities in activations substantially shape the loss surfaces of neural networks. First, we prove that nonlinear neural networks have infinitely many spurious local minima, in sharp contrast to the case of linear neural networks. This result holds for any neural network with an arbitrary number of hidden layers and arbitrary piecewise linear activations (excluding linear functions) under many loss functions that are popular in practice (e.g., squared loss and cross-entropy loss). It significantly extends the conditions of the relevant results and has the least restrictive assumptions, which cover most practical circumstances: (1) the training data cannot be fit by a linear model; (2) the training sample points are distinct; (3) all hidden layers are wider than the output layer; and (4) there exists some turning point of the piecewise linear activation at which the sum of the slopes on the two sides is nonzero. Second, based on a recent result that the loss surface has a smooth and multilinear partition, we draw a big picture of the loss surface from the following aspects: (1) local minima in any cell are equally good, and they are all global minima in the cell; (2) all local minima in one cell constitute an equivalence class and are concentrated as a local minimum valley; and (3) the loss surface collapses to one single cell when all activations are linear functions, which explains the results on linear neural networks. The first and second properties are rigorously proved for any one-hidden-layer nonlinear neural network with two-piece linear (ReLU-like) activations for regression tasks under convex/strictly convex loss without any other assumption.

Theoretically understanding deep learning is of vital importance to both academia and industry. A major barrier recognized by the whole community is that the loss surfaces of deep neural networks are extremely non-convex and even non-smooth. Such non-convexity and non-smoothness make the analysis of the optimization and generalization properties prohibitively difficult.
A natural idea is to bypass the geometrical properties and then approach a theoretical explanation. We argue that such “intimidating” geometrical properties are exactly the major factors that shape the properties of deep neural networks, and also the key to explaining deep learning. We propose to explore the magic of deep learning from the geometrical structures of its loss surface. Future directions towards fully understanding deep learning are summarized as follows,\n• Investigate the (potential) equivalence classes and quotient space of local minimum valleys for deep neural networks. This paper suggests a degenerate nature of the large amounts of local minima: all the local minima within one cell constitute an equivalence class. We construct a quotient map for one-hidden-layer neural networks with two-piece activations for regression. Whether deep neural networks have similar properties remains an open problem. Understanding the quotient space would be a major step of understanding the approximation, optimization, and generalization of deep learning.\n• Explore the sophisticated geometry of local minimum valleys. The quotient space of local minima suggests a strategy that treats every local minimum valley as a whole. However, the sophisticated local geometrical properties around the local minimum valleys are still premature, such as the sharpness/flatness of the local minima, the potential categorization of the local minimum valley according to their performance, and the volumes of the local minima valleys from different categories.\n• Tackle the optimization and generalization problems of deep learning. Empirical results have overwhelmingly suggested that deep learning has excellent optimization and generalization capabilities, which is, however, beyond the current theoretical understanding: (1) one can employ stochastic convex optimization methods (such as SGD) to minimize the extremely non-convex and non-smooth loss function in deep learning, which is expected to be NP-hard but practically solved by computationally cheap optimization methods; and (2) heavily-parametrized neural networks can generalize well in many tasks, which is beyond the expectation of most current theoretical frameworks based on hypothesis complexity and the variants. The sophisticated geometrical expression, if fortunately, we possess in the future, would be a compelling push to tackle the generalization and optimization muses of deep learning." }, { "heading": "ACKNOWLEDGMENTS", "text": "This work was supported by Australian Research Council Project FL-170100117. The authors sincerely appreciate Micah Goldblum and the anonymous reviewers for their constructive comments." }, { "heading": "A PROOF OF THEOREM 1", "text": "This appendix gives a detailed proof of Theorem 1 omitted from the main text. It follows the skeleton presented in Section 3.3." }, { "heading": "A.1 SQUARED LOSS AND CROSS-ENTROPY LOSS", "text": "We first check whether squared loss and cross-entropy loss are covered by the requirements of Theorem 1.\nLemma 2. The squared loss (defined by eq. 5) is continuously differentiable with respect to the prediction of the model, whose gradient of loss equal to zero when the prediction and the label are different.\nProof. Apparently, the squared loss is differentiable with respect to Ŷ . 
Specifically, the gradient with respect to Ŷ is as follows,\n∇Ŷ ∥∥∥Y − Ŷ ∥∥∥2 = 2(Y − Ŷ ) ,\nwhich is continuous with respect to Ŷ .\nAlso, when the prediction Ŷ does not equals to the label Y , we have ∇Ŷ ∥∥∥Y − Ŷ ∥∥∥2 6= 0.\nThe proof is completed.\nLemma 3. The cross-entropy loss eq. (6) is continuously differentiable with respect to the prediction of the model, whose gradient of loss equal to zero when the prediction and the label are different. Also, we assume that the ground-truth label is a one-hot vector.\nProof. For any i ∈ [1 : n], the cross-entropy loss is differentiable with respect to Ŷi. The j-th component of the gradient with respect to the prediction Ŷi is as follows,\n∂ ( −\ndY∑ k=1 Yk,i log ( eŶk,i∑dY k=1 e Ŷk,i )) ∂Ŷj,i = ( dY∑ k=1 Yk,i ) eŶj,i\ndY∑ k=1 eŶk,i − Yj,i. (10)\nwhich is continuous with respect to Ŷi. So, the cross-entropy loss is continuously differentiable with respect to Ŷi.\nAdditionally, if the gradient (eq. (10)) is zero, we have the following equations,( dY∑ k=1 Yk,i ) eŶj,i − Yj,i dY∑ k=1 eŶk,i = 0, j = 1, 2, · · · , n.\nRewrite it into the matrix form, we have dY∑ k=1 Yk,i − Y1,i −Y1,i · · · −Y1,i −Y2,i dY∑ k=1 Yk,i − Y2,i · · · −Y2,i ... ... . . . ...\n−YdY ,i · · · −YdY ,i dY∑ i=1 Yk,i − YdY ,i\n eŶ1,i eŶ2,i ... eŶdY ,i = 0.\nSince dY∑ k=1 Yk,i = 1, we can easily check the rank of the left matrix is dY − 1. So the dimension of the solution space is one. Meanwhile, we have dY∑ k=1 Yk,i − Y1,i −Y1,i · · · −Y1,i −Y2,i dY∑ k=1 Yk,i − Y2,i · · · −Y2,i ... ... . . . ...\n−YdY ,i · · · −YdY ,i dY∑ i=1 Yk,i − YdY ,i\n Y1,i Y2,i ... YdY ,i = 0.\nTherefore, 0 6= eŶk,i = λYk,i, for some λ ∈ R, which contradicts to the assumption that some of the components of Y is 0 (Yi,· is a one-hot vector).\nThe proof is completed.\nA.2 STAGE (1)\nIn Stage (1), we prove that deep neural networks with one hidden layer, two-piece linear activation hs−,s+ , and multi-dimensional outputs have infinite spurious local minima.\nThis stage is organized as follows: (a) we construct a local minimizer by Lemma 4; and (b) we prove that the local minimizer is spurious in Theorem 4 by constructing a set of parameters with smaller empirical risk.\nWithout loss of generality, we assume that s+ 6= 0. Otherwise, suppose that s+ = 0. From the definition of ReLU-like activation (eq. (4)), we have s− 6= 0. Since\nhs−,s+(x) = h−s+,−s−(−x), the output of the neural network with parameters {\n[Wi] L i=1 , [bi] L i=1 } and activation hs−,s+ equals\nto that of the neural network with parameters {\n[W ′i ] L i=1 , [b ′ i] L i=1 } and activation h−s+,−s− where\nW ′i = −Wi, b′i = −bi, i = 1, 2, · · · , L− 1 and W ′L = WL, b′L = bL. Since { [Wi] L i=1 , [bi] L i=1 } →{\n[W ′i ] L i=1 , [b ′ i] L i=1\n} is an one-to-one map, it is equivalent to consider either the two networks, with\nh−s+,−s−(x) has non-zero slope when x > 0.\nStep (a). Construct local minima of the loss surface. Lemma 4. 
Suppose that W̃ is a local minimizer of\nf(W ) 4 = 1\nn n∑ i=1 l ( Yi,W [ xi 1 ]) , (11)\nUnder Assumption 3, any one-hidden-layer neural network has a local minimum at\nŴ1 = [ [ W̃ ] ·,[1:dX ]\n0(d1−dY )×dX\n] , b̂1 = [ [ W̃ ] ·,dX+1\n− η1dY −η1d1−dY\n] , (12)\nand Ŵ2 = [ 1 s+ IdY 0dY ×(d1−dY ) ] , b̂2 = η1dY , (13)\nwhere Ŵ1 and b̂1 are respectively the weight matrix and the bias of the first layer, Ŵ2 and b̂2 are respectively the weight matrix and the bias of the second layer, and η is a negative constant with absolute value sufficiently large such that\nW̃ X̃ − η1dY 1Tn > 0, (14) where > is element-wise.\nAlso, the loss in this lemma is continuously differentiable loss whose gradient does not equals to 0 when the prediction is not the same as the ground-truth label.\nProof. We show that the empirical risk is higher in the neiborhood of {[ Ŵi ]2 i=1 , [ b̂i ]2 i=1 } , in order\nto prove that {[ Ŵi ]2 i=1 , [ b̂i ]2 i=1 } is a local minimizer.\nThe output of the first layer before the activation is\nỸ (1) = Ŵ1X + b̂11 T n = [ ŴX − η1dY 1Tn −η1d1−dY 1Tn ] .\nBecause η is a negative constant with absolute value sufficiently large such that eq. (27)) holds, the output above is positive (element-wise), the output of the neural network with parameters {Ŵ1, Ŵ2, b̂1, b̂2} is\nŶ =Ŵ2hs−,s+ ( Ŵ1X + b̂11 T n ) + b̂21 T n\n=s+Ŵ2 ( Ŵ1X + b̂11 T n ) + b̂21 T n\n=s+ [ 1 s+ IdY 0dY ×(d1−dY ) ] [ ŴX − η1dY 1Tn −η1d1−dY 1Tn ] + η1dY 1 T n\n=W̃ X̃,\nwhere X̃ is defined as\nX̃ = [ X 1Tn ] . (15)\nTherefore, the empirical risk R̂ in terms of parameters {Ŵ1, Ŵ2, b̂1, b̂2} is\nR̂ ( Ŵ1, Ŵ2, b̂1, b̂2 ) = 1\nn n∑ i=1 l ( Yi, ( W̃ X̃ ) ·,i ) = 1 n n∑ i=1 l ( Yi, W̃ [ xi 1 ]) = f(W̃ ).\nThen, we introduce a sufficiently small disturbance {\n[δWi] 2 i=1 , [δbi] 2 i=1 } into the parameters{[\nŴi ]2 i=1 , [ b̂i ]2 i=1 } . When the disturbance is sufficiently small, all components of the output\nof the first layer remain positive. Therefore, the output after the disturbance is\nŶ ([ Ŵi + δWi ]2 i=1 , [ b̂i + δbi ]2 i=1 ) = ( Ŵ2 + δW2 ) hs−,s+ (( Ŵ1 + δW1 ) X + ( b̂1 + δb1 ) 1Tn ) + ( b̂2 + δb2 ) 1Tn\n(∗) = ( Ŵ2 + δW2 ) s+ (( Ŵ1 + δW1 ) X + ( b̂1 + δb1 ) 1Tn ) + ( b̂2 + δb2 ) 1Tn\n=s+δW2 (( Ŵ1 + δW1 ) X + ( b̂1 + δb1 ) 1Tn ) + s+Ŵ2δW1X + s+Ŵ2δb11 T n + δb21 T n\n+ Ŵ2s+(Ŵ1X + b̂11 T n ) + b̂21 T n = ( s+δW2 ( Ŵ1 + δW1 ) + s+Ŵ2δW1 ) X + ( s+Ŵ2δb1 + δb2 + s+δW2 ( b̂1 + δb1 )) 1Tn\n+ Ŵ2hs−,s+(Ŵ1X + b̂11 T n ) + b̂21 T n\n=(W̃ + δ) [ X 1Tn ] ,\nwhere eq. (∗) is because all components of ( Ŵ1 + δW1 ) X + (b′1 + δb1)1 T n are positive, and δ is\ndefined as the following matrix δ = [ s+ ( Ŵ2δW1 + δW2Ŵ1 + δW2δW1 ) s+Ŵ2δb1 + δb2 + s+δW2 ( b̂1 + δb1 )] .\nTherefore, the empirical risk R̂ with respect to {[ Ŵi + δWi ]2 i=1 , [ b̂i + δbi ]2 i=1 } is\nR̂ ([ Ŵi + δWi ]2 i=1 , [ b̂i + δbi ]2 i=1 ) = 1 n n∑ i=1 l ( Yi, (( W̃ + δ ) X̃ ) ·,i )\n= 1\nn n∑ i=1 l ( Yi, ( W̃ + δ )[ xi 1 ]) = f(W̃ + δ).\nδ approaches zero when the disturbances {δW1, δW2, δb1, δb2} approach zero (element-wise). Since Ŵ is the local minimizer of f(W ), we have\nR̂ ([ Ŵi ]2 i=1 , [ b̂i ]2 i=1 ) = f(Ŵ ) ≤ f(Ŵ + δ) = R̂ ([ Ŵi + δWi ]2 i=1 , [ b̂i + δbi ]2 i=1 ) . (16)\nBecause the disturbances {δW1, δW2, δb1, δb2} are arbitrary, eq. (16) demonstrates that{[ Ŵi ]2 i=1 , [ b̂i ]2 i=1 } is a local minimizer.\nThe proof is completed.\nStep (b). Prove the constructed local minima are spurious. Theorem 4. Under the same conditions of Lemma 4 and Assumptions 1, 2, and 4, the constructed spurious local minima in Lemma 4 are spurious.\nProof. 
The minimizer W̃ is the solution of the following equation\n∇W f(W ) = 0.\nSpecifically, we have ∂f ( W̃ )\n∂Wi,j = 0, i ∈ {1, · · · , dY }, j ∈ {1, · · · , dX},\nApplying the definition of f(W ) (eq. (11)), ∂f ( W̃ )\n∂Wk,j = n∑ i=1 ∇Ŷi l ( Yi, W̃ [ xi 1 ]) Ek,j [ xi 1 ] = n∑ i=1 ( ∇Ŷi l ( Yi, W̃ [ xi 1 ])) k ([ xi 1 ]) j ,\nwhere Ŷi = W̃ [ xi 1 ] ,∇Ŷi l ( Yi, W̃ [ xi 1 ]) ∈ R1×dY . Since k, j are arbitrary in {1, · · · , dY } and\n{1, · · · , dX}, respectively, we have\nV [ XT 1n ] = 0, (17)\nwhere\nV = [( ∇Ŷ1 l ( Y1, W̃ [ x1 1 ]))T · · · ( ∇Ŷn l ( Yn, W̃ [ xn 1 ]))T] .\nWe then define Ỹ = W̃ X̃ . Applying Assumption 1, we have Ỹ − Y = ( W̃ X̃ − Y ) 6= 0,\nThus, there exists some k-th row of Ỹ − Y that does not equal to 0.\nWe can rearrange the rows of W̃ and Y simultaneously, while W̃ is maintained as the local minimizer of f(W ) and f(W̃ ) invariant1. Without loss of generality, we assume k = 1 (k is the index of the row). Set u = V1,· and vi = Ỹ1,i in Lemma 7. There exists a non-empty separation I = [1 : l′] and J = [l′ + 1 : n] of S = {1, 2, · · · , n} and a vector β ∈ RdX , such that\n(1.1) for any positive constant α small enough, and i ∈ I , j ∈ J , Ỹ1,i − αβTxi < Ỹ1,j − αβTxj ; (1.2) ∑ i∈I V1,i 6= 0.\nDefine\nη1 = Ỹ1,l′ − αβTxl′ + 1\n2\n( min\ni∈{l′+1,...,n}\n( Ỹ1,i − αβTxi ) − ( Ỹ1,l′ − αβTxl′ )) .\nApplying (1.1), for any i ∈ I\nỸ1,i − αβTxi − η1 = ( Ỹ1,i − αβTxi − Ỹ1,l′ + αβTxl′ ) − 1\n2\n( min\ni∈{l′+1,...,n}\n( Ỹ1,i − αβTxi ) − ( Ỹ1,l′ − αβTxl′ )) <0,\nwhile for any j ∈ J ,\nỸ1,j − αβTxj − η1 > 0 = ( Ỹ1,j − αβTxi − Ỹ1,l′ + αβTxl′ ) − 1\n2\n( min\ni∈{l′+1,...,n}\n( Ỹ1,i − αβTxi ) − ( Ỹ1,l′ − αβTxl′ )) ≥1\n2\n( min\ni∈{l′+1,...,n}\n( Ỹ1,i − αβTxi ) − ( Ỹ1,l′ − αβTxl′ )) >0.\nDefine γ ∈ R which satisfies\n|γ| = 1 2 min i∈{l′+1,...,st+1} αβT (xl − xi), l′ < st+1\nα, l′ = st+1\n,\nwhere st+1 is defined in Lemma 7.\nWe argue that ∣∣∣∣12 ( min i∈{l′+1,...,n} ( Ỹ1,i − αβTxi ) − ( Ỹ1,l′ − αβTxl′ ))∣∣∣∣− |γ| > 0. (18) When l′ = st+1, eq. (58) stands. Also,\nlim α→0+ γ = 0,\nlim α→0+\n( min\ni∈{l′+1,...,n}\n( Ỹ1,i − αβTxi ) − ( Ỹ1,l′ − αβTxl′ )) = min i∈{l′+1,...,n} Ỹ1,i − Ỹ1,l′ > 0.\nTherefore, we get eq. (18) when α is small enough.\nWhen l′ < st+1, eq. (57) stands. Therefore,\n|γ| = 1 2 ∣∣∣∣12 ( min i∈{l′+1,...,n} ( Ỹ1,i − αβTxi ) − ( Ỹ1,l′ − αβTxl′ ))∣∣∣∣ , which apparently leads to eq. (18).\n1f is also the function in term of Y .\nTherefore, for any i ∈ I , we have that\nỸ1,i − αβTxi − η1 + |γ|\n≤ − 1 2\n( min\ni∈{l′+1,...,n}\n( Ỹ1,i − αβTxi ) − ( Ỹ1,l′ − αβTxl′ )) + |γ|\n<0,\nwhile for any j ∈ J ,\nỸ1,j − αβTxj − η1 − |γ|\n≥1 2\n( min\ni∈{l′+1,...,n}\n( Ỹ1,i − αβTxi ) − ( Ỹ1,l′ − αβTxl′ )) − |γ|\n>0.\nFurthermore, define ηi (2 ≤ i ≤ dY ) as negative reals with absolute value sufficiently large, such that for any i ∈ [2 : dY ] and any j ∈ [1 : n],\nỸi,j − ηi > 0.\nNow we construct a point in the parameter space whose empirical risk is smaller than the proposed local minimum in Lemma 4 as follows\nW̃1 = W̃1,[1:dX ] − αβT −W̃1,[1:dX ] + αβT W̃2,[1:dX ] ...\nW̃dY ,[1:dX ] 0(d1−dY −1)×dX\n , (19)\nb̃1 = W̃1,[dX+1] − η1 + γ −W̃1,[dX+1] + η1 + γ W̃2,[dX+1] − η2\n... W̃dY ,[dX+1] − ηdY\n0(d1−dY −1)×1\n , (20)\nW̃2 = 1 s++s− − 1s++s− 0 0 · · · 0 0 · · · 0 0 0 1s+ · · · 0 ... ... ... ... ... ... 1s+ · · · 0 . . . ... ... ... ... . . . ... ... 
...\n0 0 0 0 · · · 1s+ 0 · · · 0\n , (21)\nand\nb̃2 = η1 η2 ...\nηdY , (22) where W̃i and b̃i are the weight matrix and the bias of the i-th layer, respectively.\nAfter some calculations, the network output of the first layer before the activation in terms of{[ W̃i ]2 i=1 , [ b̃i ]2 i=1 } is\nỸ (1) = W̃1X + b̃11 T n = W̃1,·X̃ − αβTX − η11Tn + γ1Tn −W̃1,·X̃ + αβTX + η11Tn + γ1Tn W̃2,·X̃ − η21Tn ...\nW̃dY ,·X̃ − ηdY 1Tn 0(d1−dY −1)×n\n .\nTherefore, the output of the whole neural network is\nŶ = W̃2hs−,s+ ( W̃1X + b̃11 T n ) + b̃21 T n\n= W̃2hs−,s+ W̃1,·X̃ − αβTX − η11Tn + γ1Tn −W̃1,·X̃ + αβTX + η11Tn + γ1Tn W̃2,·X̃ − η21Tn ...\nW̃dY ,·X̃ − ηdY 1Tn 0(d1−dY −1)×n\n + b̃21 T n .\nSpecifically, if j ≤ l′,( Ỹ (1) ([ W̃i ]2 i=1 , [ b̃i ]2 i=1 )) 1,j =W̃1,· [ xj 1 ] − αβTxj − η1 + γ\n=Ỹ1,j − αβTxj − η1 + γ < 0,( Ỹ (1) ([ W̃i ]2 i=1 , [ b̃i ]2 i=1 )) 2,j =− W̃1,· [ xj 1 ] + αβTxj + η1 + γ\n=− Ỹ1,j + αβTxj + η1 + γ > 0. Therefore, (1, j)-th component of Ŷ ([ W̃i ]2 i=1 , [ b̃i ]2 i=1 ) is(\nŶ ([ W̃i ]2 i=1 , [ b̃i ]2 i=1 )) 1,j\n= (\n1 s++s− ,− 1s++s− , 0, · · · , 0 ) hs−,s+ W̃1,·X − αβTX − η11Tn + γ1Tn −W̃1,·X + αβTX + η11Tn + γ1Tn W̃2,·X − η21Tn ...\nW̃dY ,·X − ηdY 1Tn 0d1−dY −11 T n\n ·,j\n+ η1\n= 1\ns+ + s− hs−,s+(Ỹ1,j − αβTxj − η1 + γ)−\n1\ns+ + s− hs−,s+(−Ỹ1,j + αβTxj + η1 + γ)\n+ η1\n= s−\ns+ + s− (Ỹ1,j − αβTxj − η1 + γ)− s+ s+ + s− (−Ỹ1,j + αβTxj + η1 + γ)\n+ η1\n=Ỹ1,j − αβTxj + s− − s+ s+ + s− γ; (23)\nSimilarly, when j > l′, the (1, j)-th component is( Ŷ ([ W̃i ]2 i=1 , [ b̃i ]2 i=1 )) 1,j\n= s+\ns+ + s− (Ỹ1,j − αβTxj − η1 + γ)− s− s+ + s− (−Ỹ1,j + αβTxj + η1 + γ) + η1\n=Ỹ1,j − αβTxj + s+ − s− s+ + s− γ, (24)\nand ( Ŷ ([ W̃i ]2 i=1 , [ b̃i ]2 i=1 )) i,j = s+ s+ (Ỹi,j − ηi) + ηi = Ỹi,j , i ≥ 2. (25)\nThus, the empirical risk of the neural network with parameters {[ W̃i ]2 i=1 , [ b̃i ]2 i=1 } is\nR̂ ([ W̃i ]2 i=1 , [ b̃i ]2 i=1 ) = 1\nn n∑ i=1 l ( Yi, W̃2 ( W̃1xi + b̃11 T n ) + b̃21 T n ) = 1\nn n∑ i=1 ( l ( Yi, W̃ [ xi 1 ]) +∇Ŷi l ( Yi, W̃ [ xi 1 ])( W̃2 ( W̃1xi + b̃11 T n ) + b̃21 T n − W̃ [ xi 1 ]))\n+ n∑ i=1 o (∥∥∥∥W̃2 (W̃1xi + b̃11Tn)+ b̃21Tn − W̃ [xi1 ]∥∥∥∥) . (26)\nApplying eqs. (23), (24), and (25), we have\nn∑ i=1 ( W̃2 ( W̃1xi + b̃1 ) + b̃2 − W̃ [ xi 1 ])T ∇Ŷi l ( Yi, W̃ [ xi 1 ])T (∗) =\nl′∑ i=1 V1,i(−αβTxi + s+ − s− s+ + s− γ) + n∑ i=l′+1 V1,i(−αβTxi − s+ − s− s+ + s− γ)\n=2γ l′∑ i=1 s+ − s− s+ + s− V1,i,\nwhere eq. (∗) is because\n( W̃2 ( W̃1xi + b̃1 ) + b̃2 − W̃ [ xi 1 ]) j = − αβTxj + s− − s+ s+ + s− γ, j = 1, i ≤ l′ − αβTxj − s− − s+ s+ + s− γ, j = 1, i > l′\n0, j ≥ 2\n.\nFurthermore, note that α = O(γ) (from the definition of γ). We have\nn∑ i=1 o (∥∥∥∥W̃2 (W̃1xi + b̃1)+ b̃2 − Ŵ [xi1 ]∥∥∥∥)\n= n∑ i=1 o √√√√ n∑ j=1 ( W̃2 ( W̃1xi + b̃1 ) + b̃2 − W̃ [ xi 1 ])2 j =o(γ).\nLet α be sufficiently small while sgn(γ) = −sgn ( l′∑ i=1 s+−s− s++s− V1,i ) . We have\nn∑ i=1 l ( Yi, W̃2 ( W̃1xi + b̃1 ) + b̃2 ) − n∑ i=1 l ( Yi, Ŵ [ xi 1 ])\n=2γ l′∑ i=1 s+ − s− s+ + s− V1,i + o(γ)\n(∗∗) < 0,\nwhere inequality (∗∗) comes from (1.2) (see p. 18). From Lemma 4, there exists a local minimizer {[ Ŵi ]2 i=1 , [ b̂i ]2 i=1 } with empirical risk that equals\nto f(W̃ ). Meanwhile, we just construct a point in the parameter space with empirical risk smaller than f(W̃ ).\nTherefore, {[ Ŵi ]2 i=1 , [ b̂i ]2 i=1 } is a spurious local minimum.\nThe proof is completed.\nA.3 STAGE (2)\nStage (2) proves that neural networks with arbitrary hidden layers and two-piece linear activation hs−,s+ have spurious local minima. Here, we still assume s+ 6= 0. 
We have justified this assumption in Stage (1).\nThis stage is organized similarly with Stage (1): (a) Lemma 5 constructs a local minimum; and (b) Theorem 5 proves the minimum is spurious.\nStep (a). Construct local minima of the loss surface. Lemma 5. Suppose that all the conditions of Lemma 4 hold, while the neural network has L − 1 hidden layers. Then, this network has a local minimum at\nŴ ′1 = [ [ W̃ ] ·,[1:dX ]\n0(d1−dY )×dX\n] , b̂′1 = [ [ W̃ ] ·,dX+1\n− η1dY −η1d1−dY\n] ,\nŴ ′i = 1\ns+ dY∑ j=1 Ej,j + 1 s+ di∑ j=dY +1 Ej,(dY +1), b̂ ′ i = 0 (i = 2, 3, ..., L− 1),\nand Ŵ ′L = [ 1 s+ IdY 0dY ×(dL−1−dY ) ] , b̂′L = η1dY ,\nwhere Ŵ ′i and b̂ ′ i are the weight matrix and the bias of the i-th layer, respectively, and η is a negative constant with absolute value sufficiently large such that\nW̃ X̃ − η1dY 1Tn > 0, (27)\nwhere > is element-wise.\nProof. Recall the discussion in Lemma 4 that all components of Ŵ1X + b̂11Tn are positive. Specifically,\nŴ1X + b̂11 T n = Ỹ − η1dY 1Tn −η1d1−dY 1Tn , where Ỹ is defined in Lemma 4.\nSimilar to the discussions in Lemma 4, when the parameters equal to {[ Ŵ ′i ]L i=1 , [ b̂′i ]L i=1 } , the output of the first layer before the activation function is\nỸ (1) = Ŵ ′1X + b̂ ′ 11 T n = Ỹ − η1dY 1Tn −η1d1−dY 1Tn , and\nỸ − η1dY 1Tn > 0, (28) −η1d1−dY 1Tn > 0. (29)\nHere > is defined element-wise.\nAfter the activation function, the output of the first layer is\nY (1) = hs−,s+(Ŵ ′ 1X + b̂ ′ 11 T n ) = s+(Ŵ ′ 1X + b̂ ′ 11 T n ) = s+ Ỹ − η1dY 1Tn −η1d1−dY 1Tn . We prove by induction that for all i ∈ [1 : L− 1] that\nỸ (i) > 0 , element-wise, (30)\nY (i) = s+ Ỹ − η1dY 1Tn −η1di−dY 1Tn . (31) Suppose that for 1 ≤ k ≤ L− 2, Ỹ (k) is positive (element-wise) and\nY (k) = s+ Ỹ − η1dY 1Tn −η1dk−dY 1Tn . Then the output of the (k + 1)-th layer before the activation is\nỸ (k+1) = Ŵ ′k+1Y (k) + b̂′k+11 T n\n= 1\ns+ dY∑ j=1 Ej,j + dk+1∑ j=dY +1 Ej,(dY +1) s+ Ỹ − η1dY 1Tn −η1dk−dY 1Tn =\n Ỹ − η1dY 1Tn −η1dk+1−dY 1Tn . Applying eqs. (28) and (29), we have\nỸ (k+1) = Ỹ − η1dY 1Tn −η1dk+1−dY 1Tn > 0, where > is defined element-wise. Therefore,\nY (k+1) = hs−,s+\n( Ỹ (k+1) ) = s+Ỹ (k+1) = s+ Ỹ − η1dY 1Tn −η1dk+1−dY 1Tn . We thereby prove eqs. (30) and (31).\nTherefore, Y (L) can be calculated as Ŷ = Y (L) = Ŵ ′LY (L−1) + b̂′L1 T n\n= 1\ns+\n[ IdY 0dY ×(dL−1−dY ) ] s+ Ỹ − η1dY 1Tn −η1di−dY 1Tn + η1dY 1Tn = Ỹ . (32)\nThen, we show the empirical risk is higher around {[ Ŵ ′i ]L i=1 , [ b̂′i ]L i=1 } in order to prove that{[\nŴ ′i ]L i=1 , [ b̂′i ]L i=1 } is a local minimizer.\nLet {[ Ŵ ′i + δ ′ Wi ]L i=1 , [ b̂′i + δ ′ bi ]L i=1 } be point in the parameter space which is close enough to the\npoint {[ Ŵ ′i ]L i=1 , [ b̂′i ]L i=1 } . Since the disturbances δ′Wi and δ ′ bi are both close to 0 (element-wise),\nall components of Ỹ (i) ([ Ŵ ′i + δ ′ Wi ]L i=1 , [ b̂′i + δ ′ bi ]L i=1 ) remains positive. Therefore, the output\nof the neural network in terms of parameters {[ Ŵ ′i + δ ′ Wi ]L i=1 , [ b̂′i + δ ′ bi ]L i=1 } is\nŶ ([ Ŵ ′i + δ ′ Wi ]L i=1 , [ b̂′i + δ ′ bi ]L i=1 ) =(Ŵ ′L + δ ′ WL)hs−,s+ ( . . . hs−,s+ (( Ŵ ′1 + δ ′ W1 ) X + ( b̂′1 + δ ′ b1 ) 1Tn ) . . . )\n+ ( b̂′L + δ ′ bL ) 1Tn\n=(Ŵ ′L + δ ′ WL)s+ ( . . . s+ (( Ŵ ′1 + δ ′ W1 ) X + ( b̂′1 + δ ′ b1 ) 1Tn ) . . . 
)\n+ ( b̂′L + δ ′ bL ) 1Tn\n=M1X +M21 T n , where M1 and M2 can be obtained from {[ Ŵ ′i ]L i=1 , [ b̂′i ]L i=1 } and { [δ′Wi] L i=1 , [δ ′ bi] L i=1 } through several multiplication and summation operations2.\nRewrite the output as\nM1X +M21 T n = [M1 M2] [ X 1Tn ] .\nTherefore, the empirical risk R̂ before and after the disturbance can be expressed as f(W̃ ) and f ([M1 M2]), respectively.\nWhen the disturbances {\n[δ′Wi] L i=1 , [δ ′ bi] L i=1 } approach 0 (element-wise), [M1 M2] approaches W̃ .\nTherefore, when {\n[δ′Wi] L i=1 , [δ ′ bi] L i=1\n} are all small enough, we have\nR̂ ([ Ŵ ′i + δ ′ Wi ]L i=1 , [ b̂′i + δ ′ bi ]L i=1 ) =f([M1 M2])\n≥f(W̃ ) =R̂ ([ Ŵ ′i ]L i=1 , [ b̂′i ]L i=1 ) . (33)\nSince {[ Ŵ ′i + δ ′ Wi ]L i=1 , [ b̂′i + δ ′ bi ]L i=1 } are arbitrary within a sufficiently small neighbour of{[\nŴ ′i ]L i=1 , [ b̂′i ]L i=1 } , eq. (33) yields that {[ Ŵ ′i ]2 i=1 , [ b̂′i ]2 i=1 } is a local minimizer.\nStep (b). Prove the constructed local minima are spurious. 2Since the exact form of M1 and M2 are not needed, we omit the exact formulations here.\nTheorem 5. Under the same conditions of Lemma 5 and Assumptions 1, 2, and 4, the constructed spurious local minima in Lemma 5 are spurious.\nProof. We first construct the weight matrix and bias of the i-th layer as follows,\nW̃ ′1 = W̃1, b̃ ′ 1 = b̃1,\nW̃ ′2 =\n[ W̃2\n0(d2−dY )×d1\n] , b̃′2 = λ1d2 + [ b̃2\n0(d2−dY )×1\n] ,\nW̃ ′i = 1\ns+ dY∑ i=1 Ei,i, b̃ ′ i = 0 (i = 3, 4, ..., L− 1),\nand\nW̃ ′L = 1\ns+ dY∑ i=1 Ei,i, b̃ ′ L = −λ1dY ,\nwhere W̃1, W̃2, b̃1 and b̃2 are defined by eqs. (19), (20), (21), and (22), respectively, and λ is a sufficiently large positive real such that\nŶ ([ W̃i ]2 i=1 , [ b̃i ]2 i=1 ) + λ1d21 T n > 0, (34)\nwhere > is defined element-wise. We argue that {[ W̃ ′i ]L i=1 , [ b̃′i ]L i=1 } corresponds to a smaller empirical risk than f(W̃ ) which is defined in Lemma 4.\nFirst, Theorem 4 has proved that the point { W̃1, W̃2, b̃1, b̃2 } corresponds to a smaller empirical risk\nthan f(W̃ ).\nWe prove by induction that for any i ∈ {3, 4, ..., L− 1},\nỸ (i) ([ W̃ ′i ]L i=1 , [ b̃′i ]L i=1 ) ≥ 0 , element-wise, (35)\nY (i) ([ W̃ ′i ]L i=1 , [ b̃′i ]L i=1 ) = s+ Ŷ ([ W̃i ]2 i=1 , [ b̃i ]2 i=1 ) + λ1dY 1 T n\n0(di−dY )×n . (36) Apparently the output of the first layer before the activation is\nỸ (1) ([ W̃ ′i ]L i=1 , [ b̃′i ]L i=1 ) = W̃ ′1X + b̃ ′ 11 T n = W̃1X + b̃11 T n = Ỹ (1) ([ W̃i ]2 i=1 , [ b̃i ]2 i=1 ) .\nTherefore, the output of the first layer after the activation is\nY (1) ([ W̃ ′i ]L i=1 , [ b̃′i ]L i=1 ) =hs−,s+ ( Ỹ (1) ([ W̃ ′i ]L i=1 , [ b̃′i ]L i=1 )) =hs−,s+ ( Ỹ (1) ([ W̃i ]L i=1 , [ b̃i ]L i=1\n)) =Y (1) ([ W̃i ]2 i=1 , [ b̃i ]2 i=1 ) .\nThus, the output of the second layer before the activation is Ỹ (2) ([ W̃ ′i ]L i=1 , [ b̃′i ]L i=1 ) =W̃ ′2Y (1) ([ W̃ ′i ]L i=1 , [ b̃′i ]L i=1 ) + b̃′21 T n\n=\n[ W̃2\n0(d2−dY )×d1\n] Y (1) ([ W̃ ′i ]L i=1 , [ b̃′i ]L i=1 ) + [ b̃2\n0(d2−dY )×1\n] 1Tn\n+ λ1d21 T n\n= Ŷ ([ W̃i ]2 i=1 , [ b̃i ]2 i=1 ) + λ1dY 1 T n\nλ1d2−dY 1 T n . Applying the definition of λ (eq. (34)),\nỸ (2) ([ W̃ ′i ]L i=1 , [ b̃′i ]L i=1 ) > 0 , element-wise. 
(37)\nTherefore, the output of the second layer after the activation is Y (2) ([ W̃ ′i ]L i=1 , [ b̃′i ]L i=1 ) = hs−,s+ ( Ỹ (2) ([ W̃ ′i ]L i=1 , [ b̃′i ]L i=1 ))\n= s+ Ŷ ([ W̃i ]2 i=1 , [ b̃i ]2 i=1 ) + λ1dY 1 T n\nλ1d2−dY 1 T n\n .\nMeanwhile, the output of the third layer before the activation is Ỹ (3) ([ W̃ ′i ]L i=1 , [ b̃′i ]L i=1 ) can be\ncalculated based on Y (2) ([ W̃ ′i ]L i=1 , [ b̃′i ]L i=1 ) :\nỸ (3) ([ W̃ ′i ]L i=1 , [ b̃′i ]L i=1 ) = W̃ ′3Y (2) ([ W̃ ′i ]L i=1 , [ b̃′i ]L i=1 ) + b̃′31 T n\n= 1\ns+ ( dY∑ i=1 Ei,i ) s+ Ŷ ([ W̃i ]2 i=1 , [ b̃i ]2 i=1 ) + λ1dY 1 T n\nλ1d2−dY 1 T n\n\n= Ŷ ([ W̃i ]2 i=1 , [ b̃i ]2 i=1 ) + λ1dY 1 T n\n0(d3−dY )×n . Applying eq. (37),\nỸ (3) ([ W̃ ′i ]L i=1 , [ b̃′i ]L i=1 ) ≥ 0 , element-wise. (38)\nTherefore, the output of the third layer after the activation is Y (3) ([ W̃ ′i ]L i=1 , [ b̃′i ]L i=1 ) = hs−,s+ ( Ỹ (3) ([ W̃ ′i ]L i=1 , [ b̃′i ]L i=1 )) = s+ ( Ỹ (3) ([ W̃ ′i ]L i=1 , [ b̃′i ]L i=1 ))\n= s+ Ŷ ([ W̃i ]2 i=1 , [ b̃i ]2 i=1 ) + λ1dY 1 T n\n0(d3−dY )×n\n .\nSuppose eqs. (35) and (36) hold for k (3 ≤ k ≤ L− 2), when k + 1, Ỹ (k+1) ([ W̃ ′i ]L i=1 , [ b̃′i ]L i=1 ) = W̃ ′k+1Y (k) ([ W̃ ′i ]2 i=1 , [ b̃′i ]2 i=1 ) + b̃′k+11 T n\n= 1\ns+ ( dY∑ i=1 Ei,i ) s+ Ŷ ([ W̃i ]2 i=1 , [ b̃i ]2 i=1 ) + λ1dY 1 T n\n0(dk−dY )×n\n\n= Ŷ ([ W̃i ]2 i=1 , [ b̃i ]2 i=1 ) + λ1dY 1 T n\n0(dk+1−dY )×n . Applying eq. (38),\nỸ (k+1) ([ W̃ ′i ]L i=1 , [ b̃′i ]L i=1 ) ≥ 0 , element-wise. (39)\nTherefore, the output of the (k + 1)-th layer after the activation is Y (k+1) ([ W̃ ′i ]L i=1 , [ b̃′i ]L i=1 ) =hs−,s+ ( Ỹ (k+1) ([ W̃ ′i ]L i=1 , [ b̃′i ]L i=1 )) =s+Ỹ (k+1) ([ W̃ ′i ]L i=1 , [ b̃′i ]L i=1 )\n=s+ Ŷ ([ W̃i ]2 i=1 , [ b̃i ]2 i=1 ) + λ1dY 1 T n\n0(dk+1−dY )×n . Therefore, eqs. (35) and (36) hold for any i ∈ {3, 4, ..., L− 1}. Finally, the output of the network is\nŶ ([ W̃ ′i ]L i=1 , [ b̃′i ]L i=1 ) =Y (L) ([ W̃ ′i ]L i=1 , [ b̃′i ]L i=1 ) =W̃ ′LY (L−1) ([ W̃ ′i ]L i=1 , [ b̃′i ]L i=1 ) + b̃′L1 T n\n=\n( 1\ns+ dY∑ i=1 Ei,i\n) s+ Ŷ ([ W̃i ]2 i=1 , [ b̃i ]2 i=1 ) + λ1dY 1 T n\n0(dL−1−dY )×n − λ1dY 1Tn\n=Ŷ ([ W̃i ]2 i=1 , [ b̃i ]2 i=1 ) .\nApplying Theorem 4, we have R̂ ([ W̃ ′i ]L i=1 , [ W̃ ′i ]L i=1 ) = R̂ ([ W̃i ]2 i=1 , [ b̃i ]2 i=1 ) < f(W̃ ).\nThe proof is completed.\nA.4 STAGE (3)\nFinally, we prove Theorem 1.\nThis stage also follows the two-step strategy.\nStep (a). Construct local minima of the loss surface.\nLemma 6. Suppose t is a non-differentiable point for the piece-wise linear activation function h and σ is a constant such that the activation h is differentiable in the intervals (t − σ, t) and (t, t + σ). Assume that M is a sufficiently large positive real such that\n1\nM ∥∥∥Ŵ ′1X + b̂′11Tn∥∥∥ F < σ. (40)\nLet αi be any positive real such that\nα1 = 1\n0 < αi < 1, i = 2, · · · , L− 1. (41) Then, under Assumption 3, any neural network with piecewise linear activations and L − 1 hidden layers has local minima at\nŴ ′′1 = 1\nM Ŵ ′1, b̂ ′′ 1 =\n1\nM b̂′1 + t1d1 ,\nŴ ′′i = αiŴ ′ i , b̂ ′′ i = −αiŴ ′ih(t)1di−1 + t1di +\nΠij=2αj\nM b̂′i, (i = 2, 3, ..., L− 1),\nand Ŵ ′′L =\n1\nΠLj=2αj MŴ ′L, b̂ ′′ L = − M∏L−1 j=2 αj Ŵ ′Lh(t)1dL−1 + b̂ ′ L\nwhere {[ Ŵ ′i ]L i=1 , [ b̂′i ]L i=1 } is the local minimizer constructed in Lemma 5. Also, the loss is con-\ntinuously differentiable, whose derivative with respect to the prediction Ŷi may equal to 0 only when the prediction Ŷi and label Yi are the same.\nProof. 
Define s− = lim θ→0− h′(θ) and s+ = lim θ→0+ h′(θ).\nWe then prove by induction that for all i ∈ [1 : L−1], all components of the i-th layer output before the activation Ỹ (i) ([ Ŵ ′′i ]L i=1 , [ b̂′′i ]L i=1 ) are in interval (t, t+ σ), and\nY (i) ([ Ŵ ′′i ]L i=1 , [ b̂′′i ]L i=1 ) = h(t)1di1 T n + Πij=1αj M Y (i) ([ Ŵ ′i ]L i=1 , [ b̂′i ]L i=1 ) .\nThe first layer output before the activation is, Ỹ (1) ([ Ŵ ′′i ]L i=1 , [ b̂′′i ]L i=1 ) = Ŵ ′′1 X + b̂ ′′ 11 T n = 1 M Ŵ ′1X + 1 M b̂′11 T n + t1d11 T n . (42)\nWe proved in Lemma 5 that Ŵ ′1X + b̂ ′ 11 T n is positive (element-wise). Since the Frobenius norm of a matrix is no smaller than any component’s absolute value, applying eq. (40), we have that for all i ∈ [1, d1] and j ∈ [1 : n],\n0 < 1\nM\n( Ŵ ′1X + b̂ ′ 11 T n ) ij < σ. (43)\nTherefore, (\n1 M ( Ŵ ′1X + b̂ ′ 11 T n ) ij + t ) ∈ (t, t+ σ). So,\nY (1) ([ Ŵ ′′i ]L i=1 , [ b̂′′i ]L i=1 ) =h ( Ỹ (1) ([ Ŵ ′′i ]L i=1 , [ b̂′′i ]L i=1 )) (∗) =hs−,s+ ( 1\nM Ŵ ′1X +\n1\nM b̂′11 T n\n) + h(t)1d11 T n\n= 1\nM Y (1)\n([ Ŵ ′i ]L i=1 , [ b̂′i ]L i=1 ) + h(t)1d11 T n ,\nwhere eq.(∗) is because for any x ∈ (t− σ, t+ σ), h(x) = h(t) + hs−,s+(x− t). (44)\nSuppose the above argument holds for k (1 ≤ k ≤ L− 2). Then\nỸ (k+1) ([ Ŵ ′′i ]L i=1 , [ b̂′′i ]L i=1 ) =Ŵ ′′k+1Y (k) ([ Ŵ ′′i ]L i=1 , [ b̂′′i ]L i=1 ) + b̂′′k+11 T n\n=(−αk+1Ŵ ′k+1h(t)1dk+1\n+ t1dk+1 + Πk+1i=1 αi M b̂′k+1)1 T n + αk+1Ŵ ′ k+1Y (k)\n([ Ŵ ′′i ]L i=1 , [ b̂′′i ]L i=1 ) =αk+1Ŵ ′ k+1 ( h(t)1dk1 T n +\nΠki=1αi M\nY (k) ([ Ŵ ′i ]L i=1 , [ b̂′i ]L i=1 )) + ( −αk+1Ŵ ′k+1h(t)1dk + t1dk+1 +\nΠk+1i=1 αi M b̂′k+1\n) 1Tn\n= Πk+1i=1 αi M Ŵ ′k+1Y (k)\n([ Ŵ ′i ]L i=1 , [ b̂′i ]L i=1 ) + Πk+1i=1 αi M b̂′k+11 T n + t1dk+11 T n\n=t1dk+11 T n + Πk+1i=1 αi M\nỸ (k+1) ([ Ŵ ′i ]L i=1 , [ b̂′i ]L i=1 ) .\nLemma 5 has proved that all components of Ỹ (k+1) ([ Ŵ ′i ]L i=1 , [ b̂′i ]L i=1 ) are contained in\nỸ (1) ([ Ŵ ′i ]L i=1 , [ b̂′i ]L i=1 ) . Combining\nt1d11 T n < t1d11 T n +\n1\nM Ỹ (1)\n([ Ŵ ′i ]L i=1 , [ b̂′i ]L i=1 ) < (t+ σ)1d11 T n ,\nwe have\nt1dk+11 T n < t1dk+11 T n + Πk+1i=1 αi M\nỸ (k+1) ([ Ŵ ′i ]L i=1 , [ b̂′i ]L i=1 ) (∗) < (t+ σ)1dk+11 T n .\nHere < are all element-wise, and inequality (∗) comes from the property of αi (eq. (41)). Furthermore, the (k + 1)-th layer output after the activation is Y (k+1) ([ Ŵ ′′i ]L i=1 , [ b̂′′i ]L i=1 ) =h ( Ỹ (k+1) ([ Ŵ ′′i ]L i=1 , [ b̂′′i ]L i=1\n)) =h ( t1dk+11 T n + 1\nM Ỹ (k+1)\n([ Ŵ ′i ]L i=1 , [ b̂′i ]L i=1 )) (∗) =h(t)1dk+11 T n + hs−,s+ ( Πk+1i=1 αi M Ỹ (k+1) ([ Ŵ ′i ]L i=1 , [ b̂′i ]L i=1 ))\n=h(t)1dk+11 T n + Πk+1i=1 αi M hs−,s+\n( Ỹ (k+1) ([ Ŵ ′i ]L i=1 , [ b̂′i ]L i=1 )) =h(t)1dk+11 T n +\nΠk+1i=1 αi M\nY (k+1) ([ Ŵ ′i ]L i=1 , [ b̂′i ]L i=1 ) ,\nwhere eq. (∗) is because of eq. (44). The above argument is proved for any index k ∈ {1, . . . 
, L−1}.\nTherefore, the output of the network is Y (L) ([ Ŵ ′′i ]L i=1 , [ b̂′′i ]L i=1 ) =Ŵ ′′LY (L−1) ([ Ŵ ′′i ]L i=1 , [ b̂′′i ]L i=1 ) + b̂′′L1 T n\n= M\nΠL−1i=1 αi Ŵ ′L\n( h(t)1dL−11 T n + ΠL−1i=1 αi M Y (L−1) ([ Ŵ ′i ]L i=1 , [ b̂′i ]L i=1 ))\n+ ( − M\nΠL−1i=1 αi Ŵ ′Lh(t)1dL−1 + b̂ ′ L\n) 1Tn\n=Ŵ ′LY (L−1) ([ Ŵ ′i ]L i=1 , [ b̂′i ]L i=1 ) + b̂′L1 T n\n=Y (L) ([ Ŵ ′i ]L i=1 , [ b̂′i ]L i=1 ) .\nTherefore,\nR̂ ([ Ŵ ′′i ]L i=1 , [ b̂′′i ]L i=1 ) = R̂ ([ Ŵ ′i ]L i=1 , [ b̂′i ]L i=1 ) = f(W̃ ).\nWe then introduce some small disturbances {\n[δ′′Wi] L i=1 , [δ ′′ bi] L i=1\n} into {[ Ŵ ′′i ]L i=1 , [ b̂′′i ]L i=1 } in\norder to check the local optimality.\nSince all comonents of Y (i) are in interval (t, t+σ), the activations in every hidden layers is realized at linear parts. Therefore, the output of network is\nŶ ([ Ŵ ′′i + δ ′′ Wi ]L i=1 , [ b̂′′i + δ ′′ bi ]L i=1 ) = ( Ŵ ′′L + δ ′′ WL ) h ( · · ·h (( Ŵ ′′1 + δ ′′ W1 ) X + ( b̂′′1 + δ ′′ b1 ) 1Tn ) · · · ) + ( b̂′′L + δ ′′ bL ) 1Tn\n= ( Ŵ ′′L + δ ′′ WL ) s+ ( · · · s+ (( Ŵ ′′1 + δ ′′ W1 ) X + ( b̂′′1 + δ ′′ b1 ) 1Tn ) + f(t)1d11 T n · · · ) + f(t)1dL1 T n + ( b̂′′L + δ ′′ bL ) 1Tn\n=M1X +M21 T n\n= [M1 M2] [ X 1Tn ] .\nSimilar to Lemma 5, [M1 M2] approaches W̃ as disturbances { [δWi] L i=1 , [δbi] L i=1 } approach 0\n(element-wise). Combining that W̃ is a local minimizer of f(W ), we have R̂ ([ Ŵ ′′i + δ ′′ Wi ]L i=1 , [ b̂′′i + δ ′′ bi ]L i=1 ) = f ([M1 M2]) ≥ f(W̃ ) = R̂ ([ Ŵ ′′i ]L i=1 , [ b̂′′i ]L i=1 ) .\nThe proof is completed.\nStep (b). Prove the constructed local minima are spurious.\nProof of Theorem 1. Without loss of generality, we assume that all activations are the same.\nLet t be a non-differentiable point of the piece-wise linear activation function h with s− = lim\nθ→0− h′(θ),\ns+ = lim θ→0+\nh′(θ).\nLet σ be a constant such that h is linear in interval (t− σ, t) and interval (t, t+ σ). Then construct that\nW̃ ′′1 = 1\nM W̃ ′1, b̃ ′′ 1 =\n1\nM b̃′1 + t1d1 ,\nW̃ ′′2 = 1\nM̃ W̃ ′2, b̃ ′′ 2 = t1d2 −\n1\nM̃ h(t)W̃ ′21d2 +\n1\nMM̃ b̃′2,\nW̃ ′′i = W̃ ′ i , b̃ ′′ i = −W̃ ′ih(t)1di−1 + t1di +\n1\nMM̃ b̃′i, (i = 3, 4, ..., L− 1)\nand W̃ ′′L = MM̃W̃ ′ L, b̃ ′′ L = b̃ ′ L −MM̃W̃ ′Lh(t)1L−1,\nwhere {[ W̃ ′i ]L i=1 , [ b̃′i ]L i=1 } are constructed in Theorem 5, M is a large enough positive real such that 1\nM ∥∥∥W̃ ′1X + b̃′11Tn∥∥∥ F < σ, (45)\nand M̃ a large enough positive real such that\n1\nM̃\n∥∥∥∥ 1M Ỹ (2) ([ W̃ ′i ]L i=1 , [ b̃′i ]L i=1 )∥∥∥∥ F < σ. (46)\nThen, we prove by induction that for any i ∈ [2 : L − 1], all components of Ỹ (i) ([ W̃ ′′i ]L i=1 , [ b̃′′i ]L i=1 ) are in interval (t, t+ δ), and\nY (i) ([ W̃ ′′i ]L i=1 , [ b̃′′i ]L i=1 ) = h(t)1di1 T n + 1 M̃M Y (i) ([ W̃ ′i ]L i=1 , [ b̃′i ]L i=1 ) .\nFirst,\nỸ (1) ([ W̃ ′′i ]L i=1 , [ b̃′′i ]L i=1 ) = W̃ ′′1 X + b̃ ′′ 11 T n = 1 M (W̃ ′1X + b̃ ′ 11 T n ) + t1 T d11 T n . (47)\nFor any i ∈ [1 : d1] and j ∈ [1 : n], eq. (45) implies∣∣∣∣∣ ( 1 M (W̃ ′1X + b̃ ′ 11 T n ) ) ij ∣∣∣∣∣ ≤ 1M ∥∥∥W̃ ′1X + b̃′11Tn∥∥∥F < σ. Thus, (\n1\nM (W̃ ′1X + b̃ ′ 11 T n ) + t1 T d11 T n ) ij ∈ (t− σ, t+ σ). (48)\nTherefore, the output of the first layer after the activation is Y (1) ([ W̃ ′′i ]L i=1 , [ b̃′′i ]L i=1 ) = h ( Ỹ (1) ([ W̃ ′′i ]L i=1 , [ b̃′′i ]L i=1 )) = h ( 1\nM (W̃ ′1X + b̃ ′ 11 T n ) + t1d11 T n ) (∗) = h(t)1d11 T n + hs−,s+ ( 1\nM\n( W̃ ′1X + b̃ ′ 1 )) = h(t)1d11 T n + 1\nM hs−,s+\n(( W̃ ′1X + b̃ ′ 1 )) = h(t)1d11 T n + 1\nM Y (1)\n([ W̃ ′i ]L i=1 , [ b̃′i ]L i=1 ) ,\nwhere eq. (∗) is from eq. (44) for any x ∈ (t− δ, t+ δ). 
Also,
$$\tilde Y''^{(2)} = \tilde W_2'' Y''^{(1)} + \tilde b_2''\mathbf 1_n^T = \frac{1}{\tilde M}\tilde W_2'\Big(h(t)\mathbf 1_{d_1}\mathbf 1_n^T + \frac{1}{M}\, Y'^{(1)}\Big) + t\,\mathbf 1_{d_2}\mathbf 1_n^T - \frac{1}{\tilde M}h(t)\tilde W_2'\mathbf 1_{d_1}\mathbf 1_n^T + \frac{1}{M\tilde M}\tilde b_2'\mathbf 1_n^T$$
$$= \frac{1}{\tilde M M}\tilde W_2' Y'^{(1)} + \frac{1}{M\tilde M}\tilde b_2'\mathbf 1_n^T + t\,\mathbf 1_{d_2}\mathbf 1_n^T = \frac{1}{M\tilde M}\,\tilde Y'^{(2)} + t\,\mathbf 1_{d_2}\mathbf 1_n^T.$$

Recall that in Theorem 5 we proved all components of $\tilde Y'^{(2)}$ are positive. Combining this with the definition of $\tilde M$ (eq. (46)), we have
$$t\,\mathbf 1_{d_2}\mathbf 1_n^T < \tilde Y''^{(2)} = \frac{1}{\tilde M M}\,\tilde Y'^{(2)} + t\,\mathbf 1_{d_2}\mathbf 1_n^T < (t+\sigma)\,\mathbf 1_{d_2}\mathbf 1_n^T.$$

Therefore,
$$Y''^{(2)} = h\big(\tilde Y''^{(2)}\big) = h\Big(\frac{1}{\tilde M M}\,\tilde Y'^{(2)} + t\,\mathbf 1_{d_2}\mathbf 1_n^T\Big) = h(t)\,\mathbf 1_{d_2}\mathbf 1_n^T + h_{s_-,s_+}\Big(\frac{1}{\tilde M M}\,\tilde Y'^{(2)}\Big) = h(t)\,\mathbf 1_{d_2}\mathbf 1_n^T + \frac{1}{\tilde M M}\, Y'^{(2)}.$$

Suppose the above argument holds for the $k$-th layer. The output of the $(k+1)$-th layer before the activation is
$$\tilde Y''^{(k+1)} = \tilde W_{k+1}'' Y''^{(k)} + \tilde b_{k+1}''\mathbf 1_n^T = \tilde W_{k+1}'\Big(h(t)\mathbf 1_{d_k}\mathbf 1_n^T + \frac{1}{\tilde M M}\, Y'^{(k)}\Big) + \Big(-\tilde W_{k+1}' h(t)\mathbf 1_{d_k} + t\,\mathbf 1_{d_{k+1}} + \frac{1}{M\tilde M}\tilde b_{k+1}'\Big)\mathbf 1_n^T$$
$$= \frac{1}{M\tilde M}\big(\tilde W_{k+1}' Y'^{(k)} + \tilde b_{k+1}'\mathbf 1_n^T\big) + t\,\mathbf 1_{d_{k+1}}\mathbf 1_n^T = \frac{1}{M\tilde M}\,\tilde Y'^{(k+1)} + t\,\mathbf 1_{d_{k+1}}\mathbf 1_n^T.$$

Recall that Theorem 5 proved all components of $\tilde Y'^{(k+1)}$, except those that are $0$, are contained in $\tilde Y'^{(k)}$. We have
$$t\,\mathbf 1_{d_{k+1}}\mathbf 1_n^T < \frac{1}{M\tilde M}\,\tilde Y'^{(k+1)} + t\,\mathbf 1_{d_{k+1}}\mathbf 1_n^T < (t+\sigma)\,\mathbf 1_{d_{k+1}}\mathbf 1_n^T.$$

Therefore,
$$Y''^{(k+1)} = h\big(\tilde Y''^{(k+1)}\big) = h\Big(\frac{1}{M\tilde M}\,\tilde Y'^{(k+1)} + t\,\mathbf 1_{d_{k+1}}\mathbf 1_n^T\Big) = h(t)\,\mathbf 1_{d_{k+1}}\mathbf 1_n^T + \frac{1}{M\tilde M}\, Y'^{(k+1)}.$$

Thus, the argument holds for any $k \in \{2, \dots, L-1\}$. So,
$$Y''^{(L)} = \tilde W_L'' Y''^{(L-1)} + \tilde b_L''\mathbf 1_n^T = M\tilde M\,\tilde W_L'\Big(h(t)\mathbf 1_{d_{L-1}}\mathbf 1_n^T + \frac{1}{M\tilde M}\, Y'^{(L-1)}\Big) + \tilde b_L'\mathbf 1_n^T - M\tilde M\,\tilde W_L' h(t)\mathbf 1_{d_{L-1}}\mathbf 1_n^T = Y'^{(L)}.$$

Therefore,
$$\hat R\big([\tilde W_i'']_{i=1}^L,[\tilde b_i'']_{i=1}^L\big) = \hat R\big([\tilde W_i']_{i=1}^L,[\tilde b_i']_{i=1}^L\big). \quad (49)$$

From eq. (49) and Theorem 5, we have
$$\hat R\big([\tilde W_i'']_{i=1}^L,[\tilde b_i'']_{i=1}^L\big) < f\big(\tilde W\big),$$
which completes the proof of the local minimizer.

Furthermore, since the parameter $M$ used in Lemma 6 (not the one in this proof) is arbitrary within a continuous interval (cf. eq. (40)), we have actually constructed infinitely many spurious local minima.

Theorem 1 relies on Assumption 4. We can further remove it by replacing Assumption 3 with a mildly more restrictive variant, Assumption 5.

Corollary 3. Suppose that Assumptions 1, 2, and 5 hold. Neural networks with arbitrary depth and arbitrary piecewise linear activations (excluding linear functions) have infinitely many spurious local minima under any continuously differentiable loss whose derivative can equal 0 only when the prediction and the label are the same.

Proof. The proof is delivered by modifications of Theorem 4 in Stage 1 of Theorem 1's proof.
We only need to prove the corollary under the assumption that $s_- + s_+ = 0$.

Let the local minimizer constructed in Lemma 4 be $\big\{[\hat W_i]_{i=1}^2, [\hat b_i]_{i=1}^2\big\}$. Then, we construct a point in the parameter space whose empirical risk is smaller, as follows:
$$\tilde W_1 = \begin{bmatrix} \tilde W_{1,[1:d_X]} - \alpha\beta^T \\ \tilde W_{1,[1:d_X]} \\ -\tilde W_{1,[1:d_X]} + \alpha\beta^T \\ \tilde W_{2,[1:d_X]} \\ \vdots \\ \tilde W_{d_Y,[1:d_X]} \\ 0_{(d_1-d_Y-2)\times d_X} \end{bmatrix}, \qquad \tilde b_1 = \begin{bmatrix} \tilde W_{1,[d_X+1]} - \eta_1 + \gamma \\ \tilde W_{1,[d_X+1]} - \eta \\ -\tilde W_{1,[d_X+1]} + \eta_1 + \gamma \\ \tilde W_{2,[d_X+1]} - \eta_2 \\ \vdots \\ \tilde W_{d_Y,[d_X+1]} - \eta_{d_Y} \\ 0_{(d_1-d_Y-2)\times 1} \end{bmatrix},$$
$$\tilde W_2 = \begin{bmatrix} \frac{1}{2s_+} & \frac{1}{s_+} & -\frac{1}{2s_+} & 0 & 0 & \cdots & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & \frac{1}{s_+} & 0 & \cdots & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 0 & \frac{1}{s_+} & \cdots & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & 0 & 0 & \cdots & \frac{1}{s_+} & 0 & \cdots & 0 \end{bmatrix}, \qquad \tilde b_2 = \begin{bmatrix} \eta \\ \eta_2 \\ \vdots \\ \eta_{d_Y} \end{bmatrix},$$
where $\alpha$, $\beta$, and the $\eta_i$ are defined the same as those in Theorem 4, and $\eta$ is defined by eq. (27).

Then, the output of the first layer is
$$Y^{(1)}\big([\tilde W_i]_{i=1}^2,[\tilde b_i]_{i=1}^2\big) = h_{s_-,s_+}\big(\tilde W_1 X + \tilde b_1\mathbf 1_n^T\big) = h_{s_-,s_+}\begin{bmatrix} \tilde W_{1,\cdot}X - \alpha\beta^T X - \eta_1\mathbf 1_n^T + \gamma\mathbf 1_n^T \\ \tilde W_{1,\cdot}X - \eta\mathbf 1_n^T \\ -\tilde W_{1,\cdot}X + \alpha\beta^T X + \eta_1\mathbf 1_n^T + \gamma\mathbf 1_n^T \\ \tilde W_{2,\cdot}X - \eta_2\mathbf 1_n^T \\ \vdots \\ \tilde W_{d_Y,\cdot}X - \eta_{d_Y}\mathbf 1_n^T \\ 0_{d_1-d_Y-2}\mathbf 1_n^T \end{bmatrix}.$$

Further, the output of the whole network is
$$\hat Y\big([\tilde W_i]_{i=1}^2,[\tilde b_i]_{i=1}^2\big) = \tilde W_2\, Y^{(1)}\big([\tilde W_i]_{i=1}^2,[\tilde b_i]_{i=1}^2\big) + \tilde b_2\mathbf 1_n^T, \qquad \tilde b_2\mathbf 1_n^T = \begin{bmatrix}\eta \\ \eta_2 \\ \vdots \\ \eta_{d_Y}\end{bmatrix}\mathbf 1_n^T.$$

Therefore, if $j \le l'$, the $(1,j)$-th component of $\hat Y\big([\tilde W_i]_{i=1}^2,[\tilde b_i]_{i=1}^2\big)$ is
$$\big(\tilde W_2\big)_{1,\cdot}\Big(Y^{(1)}\big([\tilde W_i]_{i=1}^2,[\tilde b_i]_{i=1}^2\big)\Big)_{\cdot,j} + \eta = \frac{1}{2s_+}\Big(s_-\big(\tilde Y_{1,j} - \alpha\beta^T x_j - \eta_1 + \gamma\big) + 2s_+\big(\tilde Y_{1,j} - \eta\big) - s_+\big(-\tilde Y_{1,j} + \alpha\beta^T x_j + \eta_1 + \gamma\big)\Big) + \eta$$
$$= \frac{1}{2s_+}\Big(-s_+\big(\tilde Y_{1,j} - \alpha\beta^T x_j - \eta_1 + \gamma\big) + 2s_+\big(\tilde Y_{1,j} - \eta\big) - s_+\big(-\tilde Y_{1,j} + \alpha\beta^T x_j + \eta_1 + \gamma\big)\Big) + \eta$$
$$= \frac{1}{2s_+}\big(2s_+\tilde Y_{1,j} - 2s_+\eta - 2s_+\gamma\big) + \eta = \tilde Y_{1,j} - \gamma.$$

Otherwise ($j > l'$), the $(1,j)$-th component of $\hat Y\big([\tilde W_i]_{i=1}^2,[\tilde b_i]_{i=1}^2\big)$ is
$$\frac{1}{2s_+}\Big(s_+\big(\tilde Y_{1,j} - \alpha\beta^T x_j - \eta_1 + \gamma\big) + 2s_+\big(\tilde Y_{1,j} - \eta\big) - s_-\big(-\tilde Y_{1,j} + \alpha\beta^T x_j + \eta_1 + \gamma\big)\Big) + \eta$$
$$= \frac{1}{2s_+}\Big(s_+\big(\tilde Y_{1,j} - \alpha\beta^T x_j - \eta_1 + \gamma\big) + 2s_+\big(\tilde Y_{1,j} - \eta\big) + s_+\big(-\tilde Y_{1,j} + \alpha\beta^T x_j + \eta_1 + \gamma\big)\Big) + \eta$$
$$= \frac{1}{2s_+}\big(2s_+\tilde Y_{1,j} - 2s_+\eta + 2s_+\gamma\big) + \eta = \tilde Y_{1,j} + \gamma,$$
and the $(i,j)$-th ($i > 1$) component of $\hat Y\big([\tilde W_i]_{i=1}^2,[\tilde b_i]_{i=1}^2\big)$ is $\tilde Y_{i,j}$.

Therefore, we have
$$\Big(\tilde W_2\big(\tilde W_1 x_i + \tilde b_1\big) + \tilde b_2 - \tilde W\begin{bmatrix}x_i\\1\end{bmatrix}\Big)_j = \begin{cases} -\gamma, & j = 1,\ i \le l'; \\ \ \ \gamma, & j = 1,\ i > l'; \\ \ \ 0, & j \ge 2. \end{cases}$$

Then, similar to Theorem 4, we have
$$\hat R\big([\tilde W_i]_{i=1}^2,[\tilde b_i]_{i=1}^2\big) - \hat R\big([\hat W_i]_{i=1}^2,[\hat b_i]_{i=1}^2\big) = \frac{1}{n}\sum_{i=1}^n l\Big(Y_i,\ \tilde W_2\big(\tilde W_1 x_i + \tilde b_1\big) + \tilde b_2\Big) - \frac{1}{n}\sum_{i=1}^n l\Big(Y_i,\ \hat W\begin{bmatrix}x_i\\1\end{bmatrix}\Big)$$
$$= \frac{1}{n}\sum_{i=1}^n \nabla_{\hat Y_i} l\Big(Y_i,\ \tilde W\begin{bmatrix}x_i\\1\end{bmatrix}\Big)\Big(\tilde W_2\big(\tilde W_1 x_i + \tilde b_1\big) + \tilde b_2 - \tilde W\begin{bmatrix}x_i\\1\end{bmatrix}\Big) + \sum_{i=1}^n o\Big(\Big\|\tilde W_2\big(\tilde W_1 x_i + \tilde b_1\big) + \tilde b_2 - \tilde W\begin{bmatrix}x_i\\1\end{bmatrix}\Big\|\Big)$$
$$= -\frac{2}{n}\sum_{i=1}^{l'} V_{1,i}\,\gamma + o(\gamma),$$
where $V$ and $l'$ are also defined the same as those in Theorem 4. When $\gamma$ is sufficiently small and $\operatorname{sgn}(\gamma) = \operatorname{sgn}\big(\sum_{i=1}^{l'} V_{1,i}\big)$, we have
$$\hat R\big([\tilde W_i]_{i=1}^2,[\tilde b_i]_{i=1}^2\big) < f\big(\tilde W\big).$$
This completes the proof of Corollary 3." }, { "heading": "A.5 A PREPARATION LEMMA", "text": "We now prove the preparation lemma used above.

Lemma 7. Suppose $u = (u_1, \dots, u_n) \in \mathbb R^{1\times n}$ satisfies $u \ne 0$ and
$$\sum_{i=1}^n u_i = 0, \quad (50)$$
while $\{x_1, \dots, x_n\} \subset \mathbb R^{m\times 1}$ is a set of vectors. Let the index set be $S = \{1, 2, \dots, n\}$.
Then, for any sequence of real numbers $\{v_1, \dots, v_n\}$, there exist a non-empty separation $I, J$ of $S$, which satisfies $I \cup J = S$ and $I \cap J = \emptyset$ with both $I$ and $J$ non-empty, and a vector $\beta \in \mathbb R^{m\times 1}$, such that:
(1.1) for any sufficiently small positive real $\alpha$, $i \in I$, and $j \in J$, we have $v_i - \alpha\beta^T x_i < v_j - \alpha\beta^T x_j$;
(1.2) $\sum_{i\in I} u_i \ne 0$.

Proof. If there exists a non-empty separation $I$ and $J$ of the index set $S$ such that (1.1) and (1.2) hold with $\beta = 0$, the lemma is apparently correct.

Otherwise, suppose that there is no non-empty separation $I$ and $J$ of $S$ such that (1.1) and (1.2) hold simultaneously when $\beta = 0$.

Some numbers $v_i$ in the sequence $(v_1, v_2, \dots, v_n)$ are possibly equal to each other. We rearrange the sequence in increasing order as follows:
$$v_1 = v_2 = \cdots = v_{s_1} < v_{s_1+1} = \cdots = v_{s_2} < \cdots < v_{s_{k-1}+1} = \cdots = v_{s_k} = v_n, \quad (51)$$
where $s_k = n$.

Then, for any $j \in \{1, 2, \dots, k-1\}$, we argue that $\sum_{i=1}^{s_j} u_i = 0$. Otherwise, suppose there exists an $s_j$ such that $\sum_{i=1}^{s_j} u_i \ne 0$. Let $I = \{1, 2, \dots, s_j\}$ and $J = \{s_j+1, \dots, n\}$. Then, when $\beta = 0$, we have
$$v_i - \alpha\beta^T x_i = v_i < v_j = v_j - \alpha\beta^T x_j, \qquad \sum_{i\in I} u_i = \sum_{i=1}^{s_j} u_i \ne 0,$$
which are exactly the arguments (1.1) and (1.2); thereby we have constructed a contrary example. Therefore, for any $j \in \{1, 2, \dots, k-1\}$, we have $\sum_{i=1}^{s_j} u_i = 0$.

Since we assume that $u \ne 0$, there exists an index $t \in \{1, \dots, k-1\}$ such that some index $i \in \{s_t+1, \dots, s_{t+1}\}$ satisfies $u_i \ne 0$. Let $l \in \{s_t+1, \dots, s_{t+1}\}$ be the index such that $x_l$ has the largest norm while $u_l \ne 0$:
$$l = \operatorname*{arg\,max}_{j \in \{s_t+1,\dots,s_{t+1}\},\ u_j \ne 0} \|x_j\|. \quad (52)$$

We further rearrange the subsequence $(v_{s_t+1}, \dots, v_{s_{t+1}})$ such that there is an index $l' \in \{s_t+1, \dots, s_{t+1}\}$ with
$$\|x_{l'}\| = \max_{j \in \{s_t+1,\dots,s_{t+1}\},\ u_j \ne 0} \|x_j\|,$$
and
$$\forall i \in \{s_t+1, \dots, l'\}:\ \langle x_{l'}, x_i\rangle \ge \|x_{l'}\|^2; \quad (53)$$
$$\forall i \in \{l'+1, \dots, s_{t+1}\}:\ \langle x_{l'}, x_i\rangle < \|x_{l'}\|^2. \quad (54)$$

It is worth noting that possibly $l' = s_{t+1}$, but this is a trivial case that does not influence the result of this lemma.

Let $I = \{1, \dots, l'\}$, $J = \{l'+1, \dots, n\}$, and $\beta = x_{l'}$. We prove (1.1) and (1.2) as follows.

Proof of argument (1.1). We argue that for any $i \in I$, $v_i - \alpha\beta^T x_i \le v_{l'} - \alpha\beta^T x_{l'}$, and for any $j \in J$, $v_j - \alpha\beta^T x_j > v_{l'} - \alpha\beta^T x_{l'}$. There are three situations:

(A) $i \in \{1, \dots, s_t\}$ and $j \in \{s_{t+1}+1, \dots, n\}$. Applying eq. (51), for any such $i$ and $j$ we have $v_i < v_{l'}$ and $v_j > v_{l'}$. Therefore, when $\alpha$ is sufficiently small, we have
$$v_i - \alpha\beta^T x_i < v_{l'} - \alpha\beta^T x_{l'}, \qquad v_j - \alpha\beta^T x_j > v_{l'} - \alpha\beta^T x_{l'}.$$

(B) $i \in \{s_t+1, \dots, l'\}$. Applying eq. (53) and because $\alpha > 0$, we have
$$-\alpha\langle\beta, x_i\rangle \le -\alpha\|\beta\|^2 = -\alpha\langle\beta, x_{l'}\rangle.$$
Since $v_i = v_{l'}$, this further leads to $v_i - \alpha\beta^T x_i \le v_{l'} - \alpha\beta^T x_{l'}$.

(C) $j \in \{l'+1, \dots, s_{t+1}\}$. Similarly, applying eq. (54) and because $\alpha > 0$, we have
$$-\alpha\langle\beta, x_j\rangle > -\alpha\|\beta\|^2 = -\alpha\langle\beta, x_{l'}\rangle.$$
Since $v_j = v_{l'}$, this further leads to $v_j - \alpha\beta^T x_j > v_{l'} - \alpha\beta^T x_{l'}$, which is exactly argument (1.1).

Proof of argument (1.2). We argue that for any $i \in \{s_t+1, \dots, l'-1\}$, $u_i = 0$. Otherwise, suppose there exists an $i \in \{s_t+1, \dots, l'-1\}$ such that $u_i \ne 0$. From eq. (52), we have $\|x_i\| \le \|x_{l'}\|$. Therefore,
$$\langle x_{l'}, x_i\rangle \le \|x_{l'}\|\,\|x_i\| \le \|x_{l'}\|^2,$$
where the first inequality is an equality only if $x_{l'}$ and $x_i$ have the same direction, while the second is an equality only if $x_i$ and $x_{l'}$ have the same norm. Because $x_{l'} \ne x_i$, at least one of the two inequalities is strict, so we obtain
$$\langle x_{l'}, x_i\rangle < \|x_{l'}\|^2,$$
which contradicts eq. (53), i.e.,
$$\langle x_{l'}, x_i\rangle \ge \|x_{l'}\|^2, \quad \forall i \in \{s_t+1, \dots, l'\}.$$
Therefore,
$$\sum_{i\in I} u_i = \sum_{i=1}^{s_t} u_i + \sum_{i=s_t+1}^{l'-1} u_i + u_{l'} = u_{l'} \ne 0,$$
which is exactly argument (1.2).

The proof is completed.

Remark. For any $i \in \{l'+1, \dots, s_{t+1}\}$, we have
$$\big(v_i - \alpha\beta^T x_i\big) - \big(v_{l'} - \alpha\beta^T x_{l'}\big) = \alpha\beta^T(x_{l'} - x_i), \quad (55)$$
while for any $j \in \{s_{t+1}+1, \dots, n\}$, we have
$$\big(v_j - \alpha\beta^T x_j\big) - \big(v_{l'} - \alpha\beta^T x_{l'}\big) = v_j - v_{l'} + \alpha\beta^T(x_{l'} - x_j). \quad (56)$$
Because $v_j > v_{l'}$, when the real number $\alpha$ is sufficiently small, we have
$$\alpha\beta^T(x_{l'} - x_i) < v_j - v_{l'} + \alpha\beta^T(x_{l'} - x_j).$$
Applying eqs. (55) and (56), we have
$$\big(v_i - \alpha\beta^T x_i\big) - \big(v_{l'} - \alpha\beta^T x_{l'}\big) < \big(v_j - \alpha\beta^T x_j\big) - \big(v_{l'} - \alpha\beta^T x_{l'}\big).$$
Therefore, if $l' < s_{t+1}$, we have
$$\min_{i \in \{l'+1,\dots,n\}} \big(v_i - \alpha\beta^T x_i\big) - \big(v_{l'} - \alpha\beta^T x_{l'}\big) = \min_{i \in \{l'+1,\dots,s_{t+1}\}} \alpha\beta^T(x_{l'} - x_i); \quad (57)$$
while if $l' = s_{t+1}$,
$$\min_{i \in \{l'+1,\dots,n\}} \big(v_i - \alpha\beta^T x_i\big) - \big(v_{l'} - \alpha\beta^T x_{l'}\big) = \min_{i \in \{l'+1,\dots,n\}} v_i - v_{l'} + \alpha\beta^T(x_{l'} - x_i). \quad (58)$$
Eqs. (57) and (58) make sense because $l' < n$. Otherwise, from Lemma 7 we would have $\sum_{i=1}^n u_i \ne 0$, which contradicts the assumption." }, { "heading": "B PROOFS OF THEOREM 2, THEOREM 3, COROLLARY 1, AND COROLLARY 2", "text": "This appendix gives the proofs of Theorem 2, Theorem 3, Corollary 1, and Corollary 2 omitted from Section 4." }, { "heading": "B.1 SQUARED LOSS", "text": "We first check that the squared loss is strictly convex, which is even more restrictive than “convex”.

Lemma 8. The empirical risk $\hat R$ under squared loss (defined by eq. (5)) is strictly convex with respect to the prediction $\hat Y$.

Proof. The second derivative of the loss under squared loss with respect to the prediction $\hat Y$ is
$$\frac{\partial^2 l(Y, \hat Y)}{\partial \hat Y^2} = \frac{\partial^2 (Y - \hat Y)^2}{\partial \hat Y^2} = 2 > 0.$$
Therefore, the empirical risk $\hat R$ under squared loss is strictly convex with respect to the prediction $\hat Y$." }, { "heading": "B.2 SMOOTH AND MULTILINEAR PARTITION.", "text": "If the activations are all linear functions, the neural network reduces to a multilinear model, and the loss surface is apparently smooth and multilinear. The nonlinearities in the activations largely reshape the landscape of the loss surface. Specifically, if the input data flows through the linear parts of every activation function, the output falls in a smooth and multilinear region of the loss surface. When some parameter changes by a sufficiently small shift, the data flow may not move out of the linear parts of the activations. This fact guarantees that each smooth and multilinear region expands to an open cell. Meanwhile, every nonlinear point of the activations is non-differentiable. If the input data flows through these nonlinear points, the corresponding empirical risk is not smooth with respect to the parameters. Therefore, the nonlinear points of the activations correspond to the non-differentiable boundaries between cells on the loss surface." }, { "heading": "B.3 EVERY LOCAL MINIMUM IS GLOBALLY MINIMAL WITHIN A CELL.", "text": "Proof of Theorem 2. In every cell, the input sample points flow through the same linear parts of the activations no matter what values the parameters take.

(1) We first prove that the empirical risk $\hat R$ equals a convex function with respect to a variable $\hat W$ that is calculated from the parameters $W$.

Suppose $(W_1, W_2)$ is a local minimum within a cell. We argue that
$$\sum_{i=1}^n l\big(y_i,\ W_2\,\mathrm{diag}(A_{\cdot,i})\,W_1 x_i\big) = \sum_{i=1}^n l\big(y_i,\ A_{\cdot,i}^T\,\mathrm{diag}(W_2)\,W_1 x_i\big), \quad (59)$$
where $A_{\cdot,i}$ is the $i$-th column of the matrix
$$A = \begin{bmatrix} h'_{s_-,s_+}\big((W_1)_{1,\cdot}x_1\big) & \cdots & h'_{s_-,s_+}\big((W_1)_{1,\cdot}x_n\big) \\ \vdots & \ddots & \vdots \\ h'_{s_-,s_+}\big((W_1)_{d_1,\cdot}x_1\big) & \cdots & h'_{s_-,s_+}\big((W_1)_{d_1,\cdot}x_n\big) \end{bmatrix}. \quad (60)$$

The left-hand side (LHS) is
$$\mathrm{LHS} = \sum_{i=1}^n l\big(y_i,\ W_2\,\mathrm{diag}(A_{\cdot,i})\,W_1 x_i\big) = \sum_{i=1}^n l\big(y_i,\ \big[(W_2)_{1,1}A_{1,i}\ \cdots\ (W_2)_{1,d_1}A_{d_1,i}\big] W_1 x_i\big).$$
Meanwhile, the right-hand side (RHS) is
$$\mathrm{RHS} = \sum_{i=1}^n l\big(y_i,\ A_{\cdot,i}^T\,\mathrm{diag}(W_2)\,W_1 x_i\big) = \sum_{i=1}^n l\big(y_i,\ \big[(W_2)_{1,1}A_{1,i}\ \cdots\ (W_2)_{1,d_1}A_{d_1,i}\big] W_1 x_i\big).$$
Apparently, LHS = RHS. Thereby we have proved eq. (59).

Afterwards, we define
$$\hat W_1 = \mathrm{diag}(W_2)\,W_1, \quad (61)$$
and then straighten the matrix $\hat W_1$ into a vector $\hat W$,
$$\hat W = \big((\hat W_1)_{1,\cdot}\ \cdots\ (\hat W_1)_{d_1,\cdot}\big). \quad (62)$$
Also define
$$\hat X = \big(A_{\cdot,1} \otimes x_1\ \cdots\ A_{\cdot,n} \otimes x_n\big). \quad (63)$$
Then, we can prove the following equations:
$$\big(A_{\cdot,1}^T\hat W_1 x_1\ \cdots\ A_{\cdot,n}^T\hat W_1 x_n\big) = \big((\hat W_1)_{1,\cdot}\ \cdots\ (\hat W_1)_{d_1,\cdot}\big)\big(A_{\cdot,1} \otimes x_1\ \cdots\ A_{\cdot,n} \otimes x_n\big) = \hat W\hat X. \quad (64)$$
Applying eq. (64), the empirical risk is transferred to a convex function as follows:
$$\hat R(W_1, W_2) = \frac{1}{n}\sum_{i=1}^n l\big(y_i,\ (A_{\cdot,i})^T\mathrm{diag}(W_2)W_1 x_i\big) = \frac{1}{n}\sum_{i=1}^n l\big(y_i,\ (A_{\cdot,i})^T\hat W_1 x_i\big) = \frac{1}{n}\sum_{i=1}^n l\big(y_i,\ \hat W\hat X_i\big). \quad (65)$$
We can see that the empirical risk is rearranged as a convex function in terms of $\hat W$, which unites the two weight matrices $W_1$ and $W_2$ and the activation $h$ together as $\hat W$. Applying eqs. (61) and (62), we have
$$\hat W = \big[(W_2)_1(W_1)_{1,\cdot}\ \cdots\ (W_2)_{d_1}(W_1)_{d_1,\cdot}\big].$$

(2) We then prove that the local minima (including global minima) of the empirical risk $\hat R$ with respect to the parameters $W$ are also local minima with respect to the corresponding variable $\hat W$.

We first prove that for any $i \in [1 : d_1 d_X]$, we have
$$e_i\hat X\nabla = 0,$$
where $\nabla$ is defined as
$$\nabla = \Big[\nabla_{(\hat W\hat X)_1} l\big(Y_1, (\hat W\hat X)_1\big)\ \cdots\ \nabla_{(\hat W\hat X)_n} l\big(Y_n, (\hat W\hat X)_n\big)\Big]^T.$$
To see this, we divide $i$ into two cases: $(W_2)_i \ne 0$ and $(W_2)_i = 0$.

Case 1: $(W_2)_i \ne 0$. A local minimizer of the empirical risk $\hat R$ with respect to the parameters $W$ satisfies
$$\frac{\partial\hat R}{\partial(W_1)_{i,j}} = 0.$$
Therefore,
$$0 = \frac{\partial\hat R}{\partial(W_1)_{i,j}} = \frac{\partial\Big(\sum_{k=1}^n l\big(Y_{\cdot,k}, (\hat W\hat X)_{\cdot,k}\big)\Big)}{\partial(W_1)_{i,j}} = \sum_{k=1}^n \big[\underbrace{0\ \cdots\ 0}_{d_X(i-1)+j-1}\ (W_2)_i\ \underbrace{0\ \cdots\ 0}_{d_Xd_1 - d_X(i-1) - j}\big]\hat X_k\,\nabla_{(\hat W\hat X)_{\cdot,k}}\, l\big(Y_{\cdot,k}, (\hat W\hat X)_{\cdot,k}\big), \quad (66)$$
where $W_2$ is a vector and $(W_2)_i$ is its $i$-th component. Then, dividing both sides of eq. (66) by $(W_2)_i$, we get
$$\big(e_{d_X(i-1)+j}\hat X\big)\nabla = 0.$$

Case 2: $(W_2)_i = 0$. Suppose $u_1 \in \mathbb R^{d_X}$ is a unit vector, $u_2 \in \mathbb R$ is a real number, and $\varepsilon$ is a small enough positive constant. Then, define a disturbance of $W_1$ and $W_2$ as follows:
$$\Delta W_1 = \big[\underbrace{0\ \cdots\ 0}_{d_X(i-1)}\ \varepsilon u_1\ \underbrace{0\ \cdots\ 0}_{d_1d_X - d_Xi}\big], \qquad \Delta W_2 = \big[\underbrace{0\ \cdots\ 0}_{i-1}\ \varepsilon^2 u_2\ \underbrace{0\ \cdots\ 0}_{d_1-i}\big].$$
When $\varepsilon$ is sufficiently small, $\Delta W_1$ and $\Delta W_2$ are also sufficiently small. Since $(W_1, W_2)$ is a local minimum, we have
$$\frac{1}{n}\sum_{k=1}^n l\big(Y_k, ((\hat W + \Delta)\hat X)_k\big) = \hat R(W_1 + \Delta W_1, W_2 + \Delta W_2) \ge \hat R(W_1, W_2) = \frac{1}{n}\sum_{k=1}^n l\big(Y_k, (\hat W\hat X)_k\big), \quad (67)$$
where $\Delta$ is defined as
$$\Delta = \big[(W_2+\Delta W_2)_1(W_1+\Delta W_1)_1\ \cdots\ (W_2+\Delta W_2)_{d_1}(W_1+\Delta W_1)_{d_1}\big] - \big[(W_2)_1(W_1)_1\ \cdots\ (W_2)_{d_1}(W_1)_{d_1}\big]$$
$$\overset{(*)}{=} \big[\underbrace{0\ \cdots\ 0}_{d_X(i-1)}\ \varepsilon^2 u_2\big(\varepsilon u_1 + (W_1)_i\big)\ \underbrace{0\ \cdots\ 0}_{d_1d_X - d_Xi}\big]. \quad (68)$$
Here, eq. $(*)$ comes from $(W_2)_i = 0$. Rearranging eq. (67) and applying Taylor's theorem, we get
$$\Delta\hat X\nabla + O\big(\|\Delta\hat X\|^2\big) \ge 0.$$
Applying eq. (68), we have
$$\big[\underbrace{0\ \cdots\ 0}_{d_X(i-1)}\ \varepsilon^2 u_2\big(\varepsilon u_1 + (W_1)_i\big)\ \underbrace{0\ \cdots\ 0}_{d_Xd_1 - id_X}\big]\hat X\nabla + \varepsilon^4\, O\Big(\big\|\big[0\ \cdots\ 0\ u_2\big(\varepsilon u_1 + (W_1)_i\big)\ 0\ \cdots\ 0\big]\hat X\big\|^2\Big)$$
$$\overset{(**)}{=} \big[\underbrace{0\ \cdots\ 0}_{d_X(i-1)}\ \varepsilon^3 u_2u_1\ \underbrace{0\ \cdots\ 0}_{d_Xd_1 - id_X}\big]\hat X\nabla + \varepsilon^4\, O(\cdot) = \varepsilon^3\big[0\ \cdots\ 0\ u_2u_1\ 0\ \cdots\ 0\big]\hat X\nabla + o(\varepsilon^3) \quad (69)$$
$$\ge 0. \quad (70)$$
Here, eq. $(**)$ can be obtained as follows.
Because $W_2$ is a local minimizer, for any component $(W_2)_i$ of $W_2$,
$$\frac{\partial\Big(\sum_{k=1}^n l\big(Y_k, (\hat W\hat X)_k\big)\Big)}{\partial(W_2)_i} = 0,$$
which leads to
$$\big[\underbrace{0\ \cdots\ 0}_{d_X(i-1)}\ (W_1)_i\ \underbrace{0\ \cdots\ 0}_{d_Xd_1 - id_X}\big]\hat X\nabla = 0.$$
When $\varepsilon$ approaches $0$, eq. (69) leads to the inequality
$$\big[\underbrace{0\ \cdots\ 0}_{d_X(i-1)}\ u_2u_1\ \underbrace{0\ \cdots\ 0}_{d_Xd_1 - id_X}\big]\hat X\nabla \ge 0.$$
Since $u_1$ and $u_2$ are arbitrarily picked (with norms equal to $1$), the inequality above further leads to
$$\big[\underbrace{0\ \cdots\ 0}_{d_X(i-1)}\ e_j\ \underbrace{0\ \cdots\ 0}_{d_Xd_1 - id_X}\big]\hat X\nabla = 0, \quad (71)$$
which finishes the proof of the argument.

Therefore, for any $i$ and $j$, we have proven that
$$e_{d_X(i-1)+j}\hat X\nabla = 0,$$
which demonstrates that $\hat X\nabla = 0$, i.e., $\hat W$ is also a local minimizer of the empirical risk
$$\hat R(W) = \sum_{i=1}^n l\big(Y_i, W\hat X_i\big). \quad (72)$$

(3) Applying the property of convex functions, $\hat W$ is a global minimizer of the empirical risk $\hat R$, which implies that $(W_1, W_2)$ is a global minimum inside this cell.

The proof is completed." }, { "heading": "B.4 EQUIVALENCE CLASSES OF LOCAL MINIMUM VALLEYS IN CELLS.", "text": "Proof of Theorem 3 and Corollary 1. In the proof of Theorem 2, we constructed a map $Q: (W_1, W_2) \to \hat W$. Further, in any fixed cell, the hypothesis represented by the neural network is uniquely determined by $\hat W$.

We first prove that all local minima in a cell are concentrated as a local minimum valley.

Since the loss function $l$ is strictly convex, the empirical risk has one unique local minimum (which is also a global minimum) with respect to $\hat W$ in every cell, provided some local minimum exists in the cell. Meanwhile, we have proved that all local minima with respect to $(W_1, W_2)$ are also local minima with respect to the corresponding $\hat W$. Therefore, all local minima with respect to $(W_1, W_2)$ correspond to one unique $\hat W$. Within a cell, when $W_1$ is expanded by a positive real factor $\alpha$ to $W_1'$ and $W_2$ is shrunk by the same positive real factor $\alpha$ to $W_2'$, we have $Q(W_1, W_2) = Q(W_1', W_2')$, i.e., $\hat W$ remains invariant.

Further, we argue that all local minima in a cell are connected with each other by a continuous path on which the empirical risk is invariant. For every pair of local minima $(W_1, W_2)$ and $(W_1', W_2')$, we have
$$\mathrm{diag}(W_2)W_1 = \mathrm{diag}(W_2')W_1'. \quad (73)$$
Since $h'_{s_-,s_+}(W_1X) = h'_{s_-,s_+}(W_1'X)$ (element-wise), for every $i \in [1, d_1]$,
$$\operatorname{sgn}\big((W_2)_i\big) = \operatorname{sgn}\big((W_2')_i\big).$$
Therefore, a continuous path from $(W_1, W_2)$ to $(W_1', W_2')$ can be constructed by finitely many moves, each of which expands a component of $W_2$ by a real constant $\alpha$ and then shrinks the corresponding row of $W_1$ by the same constant $\alpha$.

We then prove that all local minima in a cell constitute an equivalence class.

Define a relation $\sim_R$ as follows:
$$(W_1^1, W_2^1) \sim_R (W_1^2, W_2^2) \quad\text{if}\quad Q(W_1^1, W_2^1) = Q(W_1^2, W_2^2).$$
We then argue that $\sim_R$ is an equivalence relation.
The three properties of an equivalence relation are checked as follows.

(1) Reflexivity: For any $(W_1, W_2)$, we have $Q(W_1, W_2) = Q(W_1, W_2)$. Therefore, $(W_1, W_2) \sim_R (W_1, W_2)$.

(2) Symmetry: For any pair $(W_1^1, W_2^1)$ and $(W_1^2, W_2^2)$, suppose that $(W_1^1, W_2^1) \sim_R (W_1^2, W_2^2)$. Thus, $Q(W_1^1, W_2^1) = Q(W_1^2, W_2^2)$. Apparently, $Q(W_1^2, W_2^2) = Q(W_1^1, W_2^1)$. Therefore, $(W_1^2, W_2^2) \sim_R (W_1^1, W_2^1)$.

(3) Transitivity: For any $(W_1^1, W_2^1)$, $(W_1^2, W_2^2)$, and $(W_1^3, W_2^3)$, suppose that $(W_1^1, W_2^1) \sim_R (W_1^2, W_2^2)$ and $(W_1^2, W_2^2) \sim_R (W_1^3, W_2^3)$. Then, $Q(W_1^1, W_2^1) = Q(W_1^2, W_2^2)$ and $Q(W_1^2, W_2^2) = Q(W_1^3, W_2^3)$. Apparently, $Q(W_1^1, W_2^1) = Q(W_1^2, W_2^2) = Q(W_1^3, W_2^3)$. Therefore, $(W_1^1, W_2^1) \sim_R (W_1^3, W_2^3)$.

We then prove that the mapping $Q$ is the quotient map.

Define a map
$$T: (W_1, W_2) \to \big(\mathrm{diag}(W_2)W_1,\ \mathbf 1_{1\times d_1}\big).$$
We then define an operator $\oplus$ as
$$(W_1^1, W_2^1) \oplus (W_1^2, W_2^2) = T(W_1^1, W_2^1) + T(W_1^2, W_2^2);$$
the inverse of $(W_1, W_2)$ is defined to be $(-W_1, W_2)$, and the zero element is defined to be $(0, \mathbf 1_{1\times d_1})$. Obviously, the following is a linear mapping:
$$Q: \big((\mathbb R^{d_1\times d_X}, \mathbb R^{1\times d_1}), \oplus\big) \to \big(\mathbb R^{1\times d_Xd_1}, +\big).$$
For any pair $(W_1^1, W_2^1)$ and $(W_1^2, W_2^2)$, we have $(W_1^1, W_2^1) \sim_R (W_1^2, W_2^2)$ if and only if $(W_1^1, W_2^1) \oplus (-W_1^2, W_2^2) \in \mathrm{Ker}(Q)$. Therefore, the quotient space $(\mathbb R^{d_1\times d_X}, \mathbb R^{1\times d_1})/\mathrm{Ker}(Q)$ gives a definition of the equivalence relation $\sim_R$. The proof is completed." }, { "heading": "B.5 LINEAR COLLAPSE.", "text": "When there are no nonlinearities in the activations, there are apparently no non-differentiable regions on the loss surface. In other words, the loss surface is a single smooth and multilinear cell." } ]
2020
PIECEWISE LINEAR ACTIVATIONS SUBSTANTIALLY SHAPE THE LOSS SURFACES OF NEURAL NETWORKS
SP:cf11852f87d71e71dc4e5327eef4236db46fe1d5
[ "In this paper, the authors present a natural image model based on the manifold of image patches. It is similar to the Deep Image Prior in that it is untrained and has a convolutional-like structure. It leads to an optimization problem with a reconstruction loss term and an auto encoding term. The authors show empirical results in time series recovery, non-semantic inpainting, and super resolution. In the image processing tasks, the performance of the proposed algorithm is on par (sometimes slightly worse, sometimes slightly better) than that of DIP.", "This paper introduces a transformation from the deep image prior (DIP) to an embedding with an autoencoder (MMES). The authors aim to use this transformation to explain (\"in words\") why the DIP works so well and explain why convolutions are needed in the DIP. The contributions are summarised as a) providing an interpretable analogue to the convnet, b) demonstration of the proposed method's effectiveness, and c) characterisation of the DIP as a \"low-dimensional patch-manifold prior\"." ]
Deep image prior (DIP) (Ulyanov et al., 2018), which utilizes a deep convolutional network (ConvNet) structure itself as an image prior, has attracted huge attention in the computer vision community. It empirically shows the effectiveness of the ConvNet structure for various image restoration applications. However, why the DIP works so well is still unknown, and why the convolution operation is useful for image reconstruction or enhancement is not very clear. In this study, we tackle these questions. Our approach divides the convolution into “delay-embedding” and “transformation (i.e., encoder-decoder)”, and proposes a simple, but essential, image/tensor modeling method which is closely related to dynamical systems and self-similarity. The proposed method, named manifold modeling in embedded space (MMES), is implemented using a novel denoising auto-encoder in combination with a multi-way delay-embedding transform. In spite of its simplicity, the image/tensor completion, super-resolution, and deconvolution results of MMES are quite similar to, and even competitive with, those of DIP in our extensive experiments, and these results help us to reinterpret/characterize the DIP from the perspective of a “low-dimensional patch-manifold prior”.
[]
[ { "authors": [ "Guillaume Alain", "Yoshua Bengio" ], "title": "What regularized auto-encoders learn from the data-generating distribution", "venue": "The Journal of Machine Learning Research,", "year": 2014 }, { "authors": [ "Joshua Batson", "Loic Royer" ], "title": "Noise2self: Blind denoising by self-supervision", "venue": "In Proceedings of ICML,", "year": 2019 }, { "authors": [ "Chris M Bishop" ], "title": "Training with noise is equivalent to Tikhonov regularization", "venue": "Neural Computation,", "year": 1995 }, { "authors": [ "Antoni Buades", "Bartomeu Coll", "J-M Morel" ], "title": "A non-local algorithm for image denoising", "venue": "In Proceedings of CVPR,", "year": 2005 }, { "authors": [ "Sungmin Cha", "Taeeon Park", "Taesup Moon" ], "title": "Gan2gan: Generative noise learning for blind image denoising with single noisy images", "venue": null, "year": 1905 }, { "authors": [ "Andrzej Cichocki", "Rafal Zdunek", "Anh Huy Phan", "Shun-ichi Amari" ], "title": "Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-way Data Analysis and Blind Source Separation", "venue": null, "year": 2009 }, { "authors": [ "Kostadin Dabov", "Alessandro Foi", "Vladimir Katkovnik", "Karen Egiazarian" ], "title": "Image denoising by sparse 3-d transform-domain collaborative filtering", "venue": "IEEE Transactions on Image Processing,", "year": 2007 }, { "authors": [ "Max Welling Diederik P Kingma" ], "title": "Auto-encoding variational bayes", "venue": "In Proceedings of ICLR,", "year": 2014 }, { "authors": [ "Tao Ding", "Mario Sznaier", "Octavia I Camps" ], "title": "A rank minimization approach to video inpainting", "venue": "In Proceedings of ICCV,", "year": 2007 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning", "venue": "In Proceedings of ICML,", "year": 2016 }, { "authors": [ "Yosef Gandelsman", "Assaf Shocher", "Michal Irani" ], "title": "double-dip”: Unsupervised image decomposition via coupled deep-image-priors", "venue": "In Proceedings of CVPR,", "year": 2019 }, { "authors": [ "Kuang Gong", "Ciprian Catana", "Jinyi Qi", "Quanzheng Li" ], "title": "PET image reconstruction using deep image prior", "venue": "IEEE Transactions on Medical Imaging,", "year": 2018 }, { "authors": [ "William Eric Leifur Grimson" ], "title": "From Images to Surfaces: A Computational Study of the Human Early Visual System", "venue": null, "year": 1981 }, { "authors": [ "Shuhang Gu", "Lei Zhang", "Wangmeng Zuo", "Xiangchu Feng" ], "title": "Weighted nuclear norm minimization with application to image denoising", "venue": "In Proceedings of CVPR,", "year": 2014 }, { "authors": [ "Frédéric Guichard", "François Malgouyres" ], "title": "Total variation based interpolation", "venue": "In Proceedings of EUSIPCO, pp. 
1–4", "year": 1998 }, { "authors": [ "Reinhard Heckel", "Paul Hand" ], "title": "Deep decoder: Concise image representations from untrained non-convolutional networks", "venue": "arXiv preprint arXiv:1810.03982,", "year": 2018 }, { "authors": [ "Geoffrey E Hinton", "Ruslan R Salakhutdinov" ], "title": "Reducing the dimensionality of data with neural networks", "venue": null, "year": 2006 }, { "authors": [ "Frank L Hitchcock" ], "title": "The expression of a tensor or a polyadic as a sum of products", "venue": "Journal of Mathematics and Physics,", "year": 1927 }, { "authors": [ "Harold Hotelling" ], "title": "Analysis of a complex of statistical variables into principal components", "venue": "Journal of Educational Psychology,", "year": 1933 }, { "authors": [ "Aapo Hyvarinen", "Juha Karhunen", "Erkki Oja" ], "title": "Independent Component Analysis, volume 46", "venue": null, "year": 2004 }, { "authors": [ "Hui Ji", "Chaoqiang Liu", "Zuowei Shen", "Yuhong Xu" ], "title": "Robust video denoising using low rank matrix completion", "venue": "In Proceedings of CVPR,", "year": 2010 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Alexander Krull", "Tim-Oliver Buchholz", "Florian Jug" ], "title": "Noise2void-learning denoising from single noisy images", "venue": "In Proceedings of CVPR,", "year": 2019 }, { "authors": [ "Samuli Laine", "Jaakko Lehtinen", "Timo Aila" ], "title": "Self-supervised deep image denoising", "venue": "arXiv preprint arXiv:1901.10277,", "year": 2019 }, { "authors": [ "Daniel D Lee", "H Sebastian Seung" ], "title": "Learning the parts of objects by non-negative matrix factorization", "venue": null, "year": 1999 }, { "authors": [ "Jaakko Lehtinen", "Jacob Munkberg", "Jon Hasselgren", "Samuli Laine", "Tero Karras", "Miika Aittala", "Timo Aila" ], "title": "Noise2noise: Learning image restoration without clean data", "venue": "In Proceedings of ICML,", "year": 2018 }, { "authors": [ "Stan Z Li" ], "title": "Markov random field models in computer vision", "venue": "In Proceedings of ECCV,", "year": 1994 }, { "authors": [ "Ye Li", "KJ Ray Liu", "Javad Razavilar" ], "title": "A parameter estimation scheme for damped sinusoidal signals based on low-rank Hankel approximation", "venue": "IEEE Transactions on Signal Processing,", "year": 1997 }, { "authors": [ "Ji Liu", "Przemyslaw Musialski", "Peter Wonka", "Jieping Ye" ], "title": "Tensor completion for estimating missing values in visual data", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2013 }, { "authors": [ "Ivan Markovsky" ], "title": "Structured low-rank approximation and its applications", "venue": null, "year": 2008 }, { "authors": [ "Stanley Osher", "Zuoqiang Shi", "Wei Zhu" ], "title": "Low dimensional manifold model for image processing", "venue": "SIAM Journal on Imaging Sciences,", "year": 2017 }, { "authors": [ "Norman H Packard", "James P Crutchfield", "J Doyne Farmer", "Robert S Shaw" ], "title": "Geometry from a time series", "venue": "Physical Review Letters,", "year": 1980 }, { "authors": [ "Karl Pearson" ], "title": "LIII. 
On lines and planes of closest fit to systems of points in space", "venue": "The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science,", "year": 1901 }, { "authors": [ "Gabriel Peyre" ], "title": "Manifold models for signals and images", "venue": "Computer Vision and Image Understanding,", "year": 2009 }, { "authors": [ "Tomaso Poggio", "Vincent Torre", "Christof Koch" ], "title": "Computational vision and regularization theory", "venue": "Nature, 317:26,", "year": 1985 }, { "authors": [ "Mihaela Rosca", "Balaji Lakshminarayanan", "David Warde-Farley", "Shakir Mohamed" ], "title": "Variational approaches for auto-encoding generative adversarial networks", "venue": "arXiv preprint arXiv:1706.04987,", "year": 2017 }, { "authors": [ "Tamar Rott Shaham", "Tali Dekel", "Tomer Michaeli" ], "title": "Singan: Learning a generative model from a single natural image", "venue": "In Proceedings of ICCV,", "year": 2019 }, { "authors": [ "Assaf Shocher", "Nadav Cohen", "Michal Irani" ], "title": "Zero-shot super-resolution using deep internal learning", "venue": "In Proceedings of CVPR,", "year": 2018 }, { "authors": [ "Assaf Shocher", "Shai Bagon", "Phillip Isola", "Michal Irani" ], "title": "Ingan: Capturing and retargeting the DNA of a natural image", "venue": "In Proceedings of ICCV,", "year": 2019 }, { "authors": [ "Sho Sonoda", "Noboru Murata" ], "title": "Transportation analysis of denoising autoencoders: A novel method for analyzing deep neural networks", "venue": "arXiv preprint arXiv:1712.04145,", "year": 2017 }, { "authors": [ "Martin Szummer", "Rosalind W Picard" ], "title": "Temporal texture modeling", "venue": "In Proceedings of ICIP,", "year": 1996 }, { "authors": [ "Robert Tibshirani" ], "title": "Regression shrinkage and selection via the lasso", "venue": "Journal of the Royal Statistical Society: Series B (Methodological),", "year": 1996 }, { "authors": [ "Ledyard R Tucker" ], "title": "Some mathematical notes on three-mode factor analysis", "venue": null, "year": 1966 }, { "authors": [ "Dmitry Ulyanov", "Andrea Vedaldi", "Victor Lempitsky" ], "title": "Deep image prior", "venue": "In Proceedings of CVPR,", "year": 2018 }, { "authors": [ "Peter Van Overschee", "Bart De Moor" ], "title": "Subspace algorithms for the stochastic identification problem", "venue": "In Proceedings of IEEE Conference on Decision and Control,", "year": 1991 }, { "authors": [ "Dave Van Veen", "Ajil Jalal", "Mahdi Soltanolkotabi", "Eric Price", "Sriram Vishwanath", "Alexandros G Dimakis" ], "title": "Compressed sensing with deep image prior and learned regularization", "venue": "arXiv preprint arXiv:1806.06438,", "year": 2018 }, { "authors": [ "Pascal Vincent", "Hugo Larochelle", "Yoshua Bengio", "Pierre-Antoine Manzagol" ], "title": "Extracting and composing robust features with denoising autoencoders", "venue": "In Proceedings of ICML, pp", "year": 2008 }, { "authors": [ "Curtis R Vogel", "Mary E Oman" ], "title": "Fast, robust total variation-based reconstruction of noisy, blurred images", "venue": "IEEE Transactions on Image Processing,", "year": 1998 }, { "authors": [ "Wenqi Wang", "Vaneet Aggarwal", "Shuchin Aeron" ], "title": "Efficient low rank tensor ring completion", "venue": "In Proceedings of ICCV,", "year": 2017 }, { "authors": [ "Francis Williams", "Teseo Schneider", "Claudio Silva", "Denis Zorin", "Joan Bruna", "Daniele Panozzo" ], "title": "Deep geometric prior for surface reconstruction", "venue": "In Proceedings of CVPR,", "year": 2019 }, { "authors": [ "Jun Xu", "Yuan 
Huang", "Li Liu", "Fan Zhu", "Xingsong Hou", "Ling Shao" ], "title": "Noisy-as-clean: Learning unsupervised denoising from the corrupted image", "venue": null, "year": 1906 }, { "authors": [ "Yangyang Xu", "Ruru Hao", "Wotao Yin", "Zhixun Su" ], "title": "Parallel matrix factorization for low-rank tensor completion", "venue": "Inverse Problems & Imaging,", "year": 2015 }, { "authors": [ "Noam Yair", "Tomer Michaeli" ], "title": "Multi-scale weighted nuclear norm image restoration", "venue": "In Proceedings of CVPR,", "year": 2018 }, { "authors": [ "Tatsuya Yokota", "Hidekata Hontani" ], "title": "Simultaneous visual data completion and denoising based on tensor rank and total variation minimization and its primal-dual splitting algorithm", "venue": "In Proceedings of CVPR,", "year": 2017 }, { "authors": [ "Tatsuya Yokota", "Hidekata Hontani" ], "title": "Simultaneous tensor completion and denoising by noise inequality constrained convex optimization", "venue": "IEEE Access,", "year": 2019 }, { "authors": [ "Tatsuya Yokota", "Qibin Zhao", "Andrzej Cichocki" ], "title": "Smooth PARAFAC decomposition for tensor completion", "venue": "IEEE Transactions on Signal Processing,", "year": 2016 }, { "authors": [ "Tatsuya Yokota", "Burak Erem", "Seyhmus Guler", "Simon K Warfield", "Hidekata Hontani" ], "title": "Missing slice recovery for tensors using a low-rank model in embedded space", "venue": "In Proceedings of CVPR,", "year": 2018 }, { "authors": [ "Tatsuya Yokota", "Kazuya Kawai", "Muneyuki Sakata", "Yuichi Kimura", "Hidekata Hontani" ], "title": "Dynamic PET image reconstruction using nonnegative matrix factorization incorporated with deep image prior", "venue": "In Proceedings of ICCV,", "year": 2019 }, { "authors": [ "Jian Zhang", "Debin Zhao", "Wen Gao" ], "title": "Group-based sparse representation for image restoration", "venue": "IEEE Transactions on Image Processing,", "year": 2014 }, { "authors": [ "Zemin Zhang", "Gregory Ely", "Shuchin Aeron", "Ning Hao", "Misha Kilmer" ], "title": "Novel methods for multilinear data completion and de-noising based on tensor-SVD", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2014 }, { "authors": [ "Qibin Zhao", "Liqing Zhang", "Andrzej Cichocki" ], "title": "Bayesian CP factorization of incomplete tensors with automatic rank determination", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "The most important piece of information for image/tensor restoration would be the “prior” which usually converts the optimization problems from ill-posed to well-posed, and/or gives some robustness for specific noises and outliers. Many priors were studied in computer science problems such as low-rank representation (Pearson, 1901; Hotelling, 1933; Hitchcock, 1927; Tucker, 1966), smoothness (Grimson, 1981; Poggio et al., 1985; Li, 1994), sparseness (Tibshirani, 1996), non-negativity (Lee & Seung, 1999; Cichocki et al., 2009), statistical independence (Hyvarinen et al., 2004), and so on. Particularly in today’s computer vision problems, total variation (TV) (Guichard & Malgouyres, 1998; Vogel & Oman, 1998), low-rank representation (Liu et al., 2013; Ji et al., 2010; Zhao et al., 2015; Wang et al., 2017), and non-local similarity (Buades et al., 2005; Dabov et al., 2007) priors are often used for image modeling. These priors can be obtained by analyzing basic properties of natural images, and categorized as “unsupervised image modeling”.\nBy contrast, the deep image prior (DIP) (Ulyanov et al., 2018) has been come from a part of “supervised” or “data-driven” image modeling framework (i.e., deep learning) although the DIP itself is one of the state-of-the-art unsupervised image restoration methods. The method of DIP can be simply explained to only optimize an untrained (i.e., randomly initialized) fully convolutional generator network (ConvNet) for minimizing squares loss between its generated image and an observed image (e.g., noisy image), and stop the optimization before the overfitting. Ulyanov et al. (2018) explained the reason why a high-capacity ConvNet can be used as a prior by the following statement: Network resists “bad” solutions and descends much more quickly towards naturally-looking images, and its phenomenon of “impedance of ConvNet” was confirmed by toy experiments. However, most researchers could not be fully convinced from only above explanation because it is just a part of whole. One of the essential questions is why is it ConvNet? or in more practical perspective, to explain what is “priors in DIP” with simple and clear words (like smoothness, sparseness, low-rank etc) is very important.\nIn this study, we tackle the question why ConvNet is essential as an image prior, and try to translate the “deep image prior” with words. For this purpose, we divide the convolution operation into\n“embedding” and “transformation” (see Fig. 9 in Appendix). Here, the “embedding” stands for delay/shift-embedding (i.e., Hankelization) which is a copy/duplication operation of image-patches by sliding window of patch size (τ, τ). The embedding/Hankelization is a preprocessing to capture the delay/shift-invariant feature (e.g., non-local similarity) of signals/images. This “transformation” is basically linear transformation in a simple convolution operation, and it also indicates some nonlinear transformation from the ConvNet perspective.\nTo simplify the complicated “encoder-decoder” structure of ConvNet used in DIP, we consider the following network structure: Embedding H (linear), encoding φr (non-linear), decoding ψr (non-linear), and backward embedding H† (linear) (see Fig. 1). Note that its encoder-decoder part (φr, ψr) is just a simple multi-layer perceptron along the filter domain (i.e., manifold learning), and it is sandwitched between forward and backward embedding (H,H†). 
Hence, the proposed network can be characterized as Manifold Modeling in Embedded Space (MMES). The proposed MMES is designed to be as simple as possible while keeping the essential ConvNet structure. The parameters τ and r in MMES correspond to the kernel size and the filter size in a ConvNet.

When we set the horizontal dimension of the hidden tensor L to r, each τ²-dimensional fiber in H, which is a vectorization of a (τ, τ)-patch of the input image, is encoded into an r-dimensional space. Note that the volume of the hidden tensor L appears larger than that of the input/output image, but the representation ability of L is much lower than the input/output image space, since the first/last tensors (H, H′) must have Hankel structure (i.e., their representation ability is equivalent to an image) and the hidden tensor L is reduced to lower dimensions from H. Here, we assume r < τ², and this low-dimensionality indicates the existence of similar (τ, τ)-patches (i.e., self-similarity) in the image; it would provide some “impedance” which passes self-similar patches and resists/ignores others. Each fiber of the hidden tensor L represents a coordinate on the patch-manifold of the image. It should be noted that the MMES network is a special case of deep neural networks. In fact, the proposed MMES can be considered as a new kind of auto-encoder (AE) in which the convolution operations have been replaced by Hankelization in pre-processing and post-processing. Compared with a ConvNet, the forward and backward embedding operations can be implemented by convolution and transposed convolution with one-hot filters (see Fig. 12 in Appendix for details). Note that the encoder-decoder part can be implemented by multiple convolution layers with kernel size (1,1) and non-linear activations. In our model, we do not use convolution explicitly, but just apply a linear transform and a non-linear activation along the “filter domain” (i.e., the horizontal axis of the tensors in Fig. 1).

The contributions of this study can be summarized as follows: (1) a new and simple approach to image/tensor modeling is proposed which translates the ConvNet, (2) the effectiveness of the proposed method and its similarity to the DIP are demonstrated in experiments, and (3) most importantly, there is a prospect for interpreting/characterizing the DIP as a “low-dimensional patch-manifold prior”." }, { "heading": "2 RELATED WORKS", "text": "Note that the idea of a low-dimensional patch manifold itself has been proposed by Peyre (2009) and Osher et al. (2017). Peyre first formulated the patch-manifold model of natural images and solved it by dictionary learning and manifold pursuit. Osher et al. formulated a regularization function to minimize the dimension of the patch manifold, and solved the Laplace-Beltrami equation by the point integral method. In comparison with these studies, we decrease the dimension of the patch-manifold by utilizing the AE shown in Fig. 1.

A related technique, low-rank tensor modeling in embedded space, has been studied recently by Yokota et al. (2018). However, the modeling approaches differ: multi-linear subspace vs. non-linear manifold. Thus, our study can be interpreted as a manifold version of (Yokota et al., 2018) from the perspective of tensor completion methods. Note that Yokota et al. (2018) applied their model only to the tensor completion task. By contrast, we investigate tensor completion, super-resolution, and deconvolution tasks here.

Another related work is devoted to group sparse representation (GSR) (Zhang et al., 2014a).
The GSR is roughly characterized as a combination of similar-patch grouping and sparse modeling, which is analogous to our combination of embedding and manifold modeling. However, the computational cost of similar-patch grouping is obviously higher than that of embedding, and this grouping task is naturally subsumed by the manifold learning.

The main difference between the above studies and ours is the motivation: an essential and simple image model that can translate the ConvNet/DIP. The proposed MMES has many connections with the ConvNet/DIP, such as embedding, non-linear mapping, and training with noise.

From the perspective of DIP, there are several related works. First, the deep geometric prior (Williams et al., 2019) exploits the good properties of a multi-layer perceptron for the shape reconstruction problem, efficiently learning a smooth function from 2D space to 3D space. It helps us to understand DIP from the perspective of manifold learning. For example, it can be used for gray-scale image reconstruction if an image is regarded as a point cloud in 3D space (i, j, X_ij). However, this may not provide image reconstruction as good as DIP, because it just smoothly interpolates a point cloud by a surface, like a Voronoi interpolation. In particular, it cannot capture the self-similarity of natural images.

Second, the deep decoder (Heckel & Hand, 2018) reconstructs natural images from noise by non-convolutional networks, which consist of linear channel/color transforms, ReLU, channel/color normalization, and upsampling layers. In contrast to DIP, which uses an over-parameterized network, the deep decoder uses an under-parameterized network and shows its ability for image reconstruction. Although the deep decoder is a non-convolutional network, the authors emphasize the close relationship between the convolutional layers in DIP and the upsampling layers in the deep decoder. In that work, the authors state, “If there is no upsampling layer, then there is no notion of locality in the resultant image” of the deep decoder. This implies that “locality” is the essence of the image model, and the convolution/upsampling layers provide it. Furthermore, the deep decoder has a close relationship with our MMES. Note that the MMES originally/essentially has only a decoder and the inverse MDT (see Eq. (3)), and the encoder is just used for satisfying the Hankel structure. The decoder and the inverse MDT in our MMES correspond, respectively, to the linear operation and the upsampling layer in the deep decoder. Moreover, the concept of under-parameterization is also shared with our MMES.

From this, we can say that the essence of the image model is “locality”, and this locality can be provided by “convolution”, “upsampling”, or “delay-embedding”. This is why image restoration from a single image with deep convolutional networks has attracted great attention under the names of zero-shot learning, internal learning, and self-supervised learning (Shocher et al., 2018; Lehtinen et al., 2018; Krull et al., 2019; Batson & Royer, 2019; Xu et al., 2019; Cha et al., 2019; Laine et al., 2019).

Recently, two generative models learned from only a single image have been proposed: SinGAN (Shaham et al., 2019) and InGAN (Shocher et al., 2019). The key concept of both papers is to impose a constraint that local patches of the image be natural. From the perspective of constraining local patches of an image, our MMES has a close relationship with these works. However, we explicitly impose a low-dimensional manifold constraint on local patches rather than adversarial training with patch discriminators."
}, { "heading": "3 MANIFOLD MODELING IN EMBEDDED SPACE", "text": "Here, on the contrary to Section 1, we start to explain the proposed method from the concept of MMES, and we systematically derive the MMES structure from it. Conceptually, the proposed tensor reconstruction method can be formulated by\nminimize X ||Y −F(X )||2F ,\ns.t. H(X ) = [h1,h2, ...,hT ] =: H, (1) ht ∈Mr for t = 1, 2, ..., T,\nwhere Y ∈ RJ1×J2×···×JN is an observed corrupted tensor, X ∈ RI1×I2×···×IN is an estimated tensor, F : RI1×I2×···×IN → RJ1×J2×···×JN is a linear operator which represents the observation system,H : RI1×I2×···×IN → RD×T is padding and Hankelization operator with sliding window of size (τ1, τ2, ..., τN ), and we impose each column of matrixH can be sampled from an r-dimensional manifoldMr in D-dimensional Euclid space (see Appendix B for details). We have r ≤ D. For simplicity, we puttedD := ∏ n τn and T := ∏ n(In+τn−1). For tensor completion task, F := PΩ is a projection operator onto support set Ω so that the missing elements are set to be zero. For superresolution task, F is a down-sampling operator of images/tensors. For deconvolution task, F is a convolution operator with some blur kernels. Fig. 2 shows the concept of proposed manifold modeling in case of image inpainting (i.e., N = 2). We minimize the distance between observation Y and reconstruction X with its support Ω, and all patches in X should be included in some restricted manifold Mr. In other words, X is represented by the patch-manifold, and the property of the patch-manifold can be image priors. For example, low dimensionality of patch-manifold restricts the non-local similarity of images/tensors, and it would be related with “impedance” in DIP. We model X indirectly by designing the properties of patch-manifoldMr." }, { "heading": "3.1 DEFINITION OF LOW-DIMENSIONAL MANIFOLD", "text": "We consider an AE to define the r-dimensional manifold Mr in ( ∏ n τn)-dimensional Euclidean space as follows:\nMr := {ψ̂r(l) | l ∈ Rr}, (ψ̂r, φ̂r) := argmin (ψr,φr) T∑ t=1 ||ht − ψrφr(ht)||22, (2)\nwhere φr : RD → Rr is an encoder, ψr : Rr → RD is a decoder, and ψ̂rφ̂r : RD → RD is an auto-encoder constructed from {ht}Tt=1. Note that, in general, the use of AE models is a widely accepted approach for manifold learning (Hinton & Salakhutdinov, 2006). The properties of the manifoldMr are determined by the properties of φr and ψr. By employing multi-layer perceptrons (neural networks) for φr and ψr, encoder-decoder may provide a smooth manifold." }, { "heading": "3.2 PROBLEM FORMULATION", "text": "In this section, we combine the conceptual formulation (1) and the AE guided manifold constraint to derive a equivalent and more practical optimization problem. First, we redefine a tensor X as an output of generator:\nX :=H†[h1,h2, ...,hT ], where ht ∈Mr =H†[ψ̂r(l1), ψ̂r(l2), ..., ψ̂r(lT )], (3)\nAlgorithm 1 Optimization algorithm for tensor reconstruction input: Y ∈ RJ1×···×JN (corrupted tensor), F , τ , r, σ; initialize: Z ∈ RI1×···×IN , auto-encoder Ar, λ = 5.0; repeat H ← H(Z) ∈ RD×T with τ ; generate noise E ∈ RD×T with σ; LAE ← ||H −Ar(H +E)||2F ; Lrec ← 1D ||Y −F(H\n†Ar(H +E))||2F ; update (Z,Ar) by Adam for Lrec + λLAE; if Lrec < LAE then λ← 1.1λ; else λ← 0.99λ;\nuntil converge output: X̂ = H†ArH(Z) ∈ RI1×···×IN (reconstructed tensor);\nwhere lt ∈ Rr, and H† is a pseudo inverse of H. At this moment, X is a function of {lt}Tt=1, however Hankel structure of matrix H can not be always guaranteed under the unconstrained condition of lt. 
To guarantee the Hankel structure of the matrix $\boldsymbol H$, we further transform it as follows:
$$\mathcal X := H^\dagger[\hat\psi_r\hat\phi_r(\boldsymbol g_1), \hat\psi_r\hat\phi_r(\boldsymbol g_2), \dots, \hat\psi_r\hat\phi_r(\boldsymbol g_T)] = H^\dagger A_r[\boldsymbol g_1, \boldsymbol g_2, \dots, \boldsymbol g_T] = H^\dagger A_r H(\mathcal Z), \quad (4)$$
where we define $A_r: \mathbb R^{D\times T} \to \mathbb R^{D\times T}$ as the operator which auto-encodes each column of an input matrix with $(\hat\psi_r, \hat\phi_r)$, and $[\boldsymbol g_1, \boldsymbol g_2, \dots, \boldsymbol g_T]$ as a matrix which has Hankel structure and is obtained by Hankelization of some input tensor $\mathcal Z \in \mathbb R^{I_1\times I_2\times\cdots\times I_N}$. Note that $\mathcal Z$ is the most compact representation of the Hankel matrix $[\boldsymbol g_1, \boldsymbol g_2, \dots, \boldsymbol g_T]$. Eq. (4) describes the MMES network shown in Fig. 1: $H$, $\hat\phi_r$, $\hat\psi_r$, and $H^\dagger$ correspond, respectively, to the forward embedding, encoding, decoding, and backward embedding, where the encoder and decoder can be defined, e.g., by multi-layer perceptrons (i.e., repetitions of a linear transformation and a non-linear activation).

From this formulation, Problem (1) is transformed into $\min_{\mathcal Z} \|\mathcal Y - F(H^\dagger A_r H(\mathcal Z))\|_F^2$, where $A_r$ is an AE which defines the manifold $\mathcal M_r$. In this study, the AE/manifold is learned from the observed tensor $\mathcal Y$ itself; thus the optimization problem is finally formulated as
$$\min_{\mathcal Z, A_r}\ \underbrace{\|\mathcal Y - F(H^\dagger A_r H(\mathcal Z))\|_F^2}_{=: L_{\mathrm{rec}}} + \lambda\, \underbrace{\|H(\mathcal Z) - A_r H(\mathcal Z)\|_F^2}_{=: L_{\mathrm{AE}}}, \quad (5)$$
where we refer to the first and second terms as the reconstruction loss and the auto-encoding loss, respectively, and $\lambda > 0$ is a trade-off parameter balancing the two losses." }, { "heading": "3.3 OPTIMIZATION ALGORITHM", "text": "Optimization problem (5) consists of two terms: a reconstruction loss and an auto-encoding loss. The hyper-parameter $\lambda$ is set to balance both losses. Basically, $\lambda$ should be large because the auto-encoding loss should be zero. However, a very large $\lambda$ prevents minimizing the reconstruction loss and may lead to local optima. Therefore, we gradually adjust the value of $\lambda$ during the optimization process.

Algorithm 1 shows the optimization algorithm for tensor reconstruction and/or enhancement; a minimal runnable sketch of this loop, specialized to the 1-D toy setting, is given in Section 4.1. For AE learning, we employ a denoising-auto-encoder strategy (see Appendix for details).

Algorithm 1: Optimization algorithm for tensor reconstruction
  input: $\mathcal Y \in \mathbb R^{J_1\times\cdots\times J_N}$ (corrupted tensor), $F$, $\tau$, $r$, $\sigma$;
  initialize: $\mathcal Z \in \mathbb R^{I_1\times\cdots\times I_N}$, auto-encoder $A_r$, $\lambda = 5.0$;
  repeat
    $\boldsymbol H \leftarrow H(\mathcal Z) \in \mathbb R^{D\times T}$ with $\tau$;
    generate noise $\boldsymbol E \in \mathbb R^{D\times T}$ with $\sigma$;
    $L_{\mathrm{AE}} \leftarrow \|\boldsymbol H - A_r(\boldsymbol H + \boldsymbol E)\|_F^2$;
    $L_{\mathrm{rec}} \leftarrow \frac{1}{D}\|\mathcal Y - F(H^\dagger A_r(\boldsymbol H + \boldsymbol E))\|_F^2$;
    update $(\mathcal Z, A_r)$ by Adam on $L_{\mathrm{rec}} + \lambda L_{\mathrm{AE}}$;
    if $L_{\mathrm{rec}} < L_{\mathrm{AE}}$ then $\lambda \leftarrow 1.1\lambda$; else $\lambda \leftarrow 0.99\lambda$;
  until converged
  output: $\hat{\mathcal X} = H^\dagger A_r H(\mathcal Z) \in \mathbb R^{I_1\times\cdots\times I_N}$ (reconstructed tensor);

The adaptation rule for $\lambda$ is just one example and can be modified appropriately for the data at hand. Here, the trade-off parameter $\lambda$ is adjusted to keep $L_{\mathrm{rec}} > L_{\mathrm{AE}}$ while avoiding a large gap between the two losses. By exploiting the convolutional structure of $H$ and $H^\dagger$ (see Appendix B.1), the calculation flow of $L_{\mathrm{rec}}$ and $L_{\mathrm{AE}}$ can be easily implemented with neural network libraries such as TensorFlow. We employed the Adam (Kingma & Ba, 2014) optimizer for updating $(\mathcal Z, A_r)$." }, { "heading": "4 EXPERIMENTS", "text": "Here, we show selected experimental results to demonstrate the close similarity and some slight differences between DIP and MMES. First, toy examples with a time-series signal and a gray-scale image are recovered by the proposed method to show its basic behaviors. Thereafter, we show the main results in comparison with DIP and other selected methods on color-image inpainting, super-resolution, and deconvolution tasks. Additional results on optimization behavior, hyper-parameter sensitivity, and volumetric/3D image completion are shown in the Appendix." }, { "heading": "4.1 TOY EXAMPLES", "text": "In this section, we apply the proposed method to a toy example of signal recovery. Fig. 3 shows the result of this experiment. A one-dimensional time-series signal is generated from the Lorenz system and corrupted by additive Gaussian noise, random missing entries, and three block occlusions. The corrupted signal was recovered by subspace modeling (Yokota et al., 2018) and by the proposed manifold modeling in embedded space.
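As a rough illustration of Algorithm 1 in this 1-D setting, the loop below reuses forward_embedding/backward_embedding from the sketch in Section 3.2, viewing a length-n signal as a (1, 1, 1, n) "image" so that a (1, τ) window realizes 1-D Hankelization. All names are ours; we assume for illustration that missing entries of the observed signal y are marked as NaN, and the AE is a single-hidden-layer MLP for brevity (the actual encoder/decoder is a deeper MLP).

```python
import torch

tau, r, sigma, lam = 64, 3, 0.05, 5.0   # settings from the toy experiment below
n = y.numel()                            # y: observed corrupted signal (assumed given)
mask = ~torch.isnan(y)                   # F = P_Omega keeps only observed entries
y0 = torch.nan_to_num(y)
Z = torch.randn(1, 1, 1, n, requires_grad=True)
D = tau                                  # fiber dimension of the Hankel matrix
ae = torch.nn.Sequential(                # denoising AE applied column-wise
    torch.nn.Linear(D, r),
    torch.nn.LeakyReLU(),
    torch.nn.Linear(r, D),
)
opt = torch.optim.Adam([Z, *ae.parameters()], lr=1e-2)
for step in range(5000):
    H = forward_embedding(Z, (1, tau))                        # (D, T)
    A = ae((H + sigma * torch.randn_like(H)).t()).t()         # A_r(H + E)
    L_ae = ((H - A) ** 2).sum()
    X = backward_embedding(A, (1, tau), (1, n)).reshape(-1)   # H-dagger A_r H(Z)
    L_rec = (((y0 - X) * mask) ** 2).sum() / D
    opt.zero_grad()
    (L_rec + lam * L_ae).backward()
    opt.step()
    lam = lam * 1.1 if L_rec.item() < L_ae.item() else lam * 0.99
```

The final estimate is the signal obtained from the last iterate, `backward_embedding(ae(forward_embedding(Z, (1, tau)).t()).t(), (1, tau), (1, n))`, matching the output line of Algorithm 1.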
The window size of the delay-embedding was τ = 64, the lowest dimension of the auto-encoder was r = 3, and the additive noise standard deviation was set to σ = 0.05. The manifold modeling captured the structure of the Lorenz attractor much better than the subspace modeling.

Fig. 4 visualizes a two-dimensional (8, 8)-patch manifold learned by the proposed method from a gray-scale image of 'Lena' with 50% missing pixels. For this figure, we set τ = [8, 8], r = 2, and σ = 0.05. Similar patches are located near each other, and a smooth change of patterns can be observed. This implies a relationship with non-local similarity based methods (Buades et al., 2005; Dabov et al., 2007; Gu et al., 2014; Zhang et al., 2014a): the manifold modeling (i.e., the DAE) plays the key role of “patch-grouping” in the proposed method. The difference from the non-local similarity based approaches, which find patches similar to the target patch within its neighborhood area, is that the manifold modeling is “global” rather than “non-local”." }, { "heading": "4.2 COLOR IMAGE COMPLETION, ESPECIALLY FOR EXTREMELY HIGH NUMBERS OF MISSING PIXELS", "text": "In this section, we compare the performance of the proposed method with several selected unsupervised image inpainting methods: low-rank tensor completion (HaLRTC) (Liu et al., 2013), parallel low-rank matrix factorization (TMac) (Xu et al., 2015), tubal nuclear norm regularization (tSVD) (Zhang et al., 2014b), Tucker decomposition with rank increment (Tucker inc.) (Yokota et al., 2018), low-rank and total-variation (LRTV) regularization (Yokota & Hontani, 2017; 2019), smooth PARAFAC tensor completion (SPC) (Yokota et al., 2016), GSR (Zhang et al., 2014a), multi-way delay-embedding based Tucker modeling (MDT-Tucker) (Yokota et al., 2018), and DIP (Ulyanov et al., 2018). Implementation and detailed hyper-parameter settings are explained in the Appendix. Basically, we carefully tuned the hyper-parameters for all methods to achieve the best scores of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM).

Fig. 5(a) shows the eight test images and the averages of PSNR and SSIM for various missing ratios {50%, 70%, 90%, 95%, 99%} and for the selected competitive methods. The proposed method is quite competitive with DIP. Fig. 6 illustrates the results: 99% of randomly selected voxels were removed from 3D (256,256,3) tensors, and the tensors were recovered by the various methods. Basically, the low-rank priors (HaLRTC, TMac, tSVD, Tucker) could not recover such highly incomplete images. With the piecewise smoothness prior (LRTV), over-smoothed images were reconstructed, since the essential image properties could not be captured. SPC (i.e., a smoothness prior on the basis functions in low-rank tensor decomposition) provided a substantial improvement over these. MDT-Tucker improves this further by exploiting the shift-invariant multi-linear basis. GSR nicely recovered the global patterns of the images, but the details were insufficient. Finally, the images reconstructed by DIP and MMES recovered both the global and local patterns of the images." }, { "heading": "4.3 COLOR IMAGE SUPER-RESOLUTION", "text": "In this section, we compare the proposed method with selected unsupervised image super-resolution methods: bicubic interpolation, GSR (Zhang et al., 2014a), ZSSR (Shocher et al., 2018), and DIP (Ulyanov et al., 2018). Implementation and detailed hyper-parameter settings are explained in the Appendix. Basically, we carefully tuned the hyper-parameters for all methods to achieve the best scores of PSNR and SSIM.
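Across the tasks in Sections 4.2-4.4, only the linear observation operator F in eq. (5) changes. As a rough, hedged illustration of such operators (the down-sampling below uses average pooling as a stand-in for the Lanczos2 kernels actually used in these experiments, and the blur kernel is assumed given; function names are ours):

```python
import torch
import torch.nn.functional as F

def F_inpaint(X, mask):
    # Completion: P_Omega keeps only the observed entries.
    return X * mask

def F_downsample(X, scale):
    # Super-resolution stand-in: average pooling in place of Lanczos2.
    return F.avg_pool2d(X, kernel_size=scale)

def F_blur(X, kernel):
    # Deconvolution: channel-wise blur with a given odd-sized (k, k) kernel.
    C = X.shape[1]
    w = kernel.expand(C, 1, *kernel.shape)
    return F.conv2d(X, w, padding=kernel.shape[-1] // 2, groups=C)
```

Since each F is linear and differentiable, gradients of the reconstruction loss flow through it to (Z, A_r) unchanged in the optimization of eq. (5).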
Fig. 5(b) shows the PSNR and SSIM values of the computer simulation results. We used three (256,256,3) color images and six (512,512,3) color images. The super-resolution methods scaled them up from four- or eight-times down-scaled versions produced with Lanczos2 kernels. According to this quantitative evaluation, bicubic interpolation was clearly worse than the others. ZSSR worked well for up-scaling from (128,128,3); however, the performance substantially decreased when up-scaling from (64,64,3). Basically, GSR, DIP, and MMES were very competitive. In detail, DIP was slightly better than GSR, and the proposed MMES was slightly better than DIP. More detailed PSNR/SSIM values are given in Table 3 in the Appendix. Fig. 7 shows selected high-resolution images reconstructed by the four super-resolution methods. In general, the bicubic method reconstructed blurred images, which were visually worse than the others. The GSR results had smooth outlines in all images, but these were slightly blurred. ZSSR was weak for very low-resolution inputs. DIP reconstructed visually sharp images, but these had jagged artifacts along diagonal lines. The proposed MMES reconstructed sharp and smooth outlines." }, { "heading": "4.4 COLOR IMAGE DECONVOLUTION", "text": "In this section, we compare the proposed method with DIP on the image deconvolution/deblurring task. Three (256,256,3) color images were prepared and blurred using three different Gaussian filters. For DIP, we chose the best early-stopping point from {1000, 2000, ..., 10000} iterations. For MMES, we employed the fixed AE structure [32τ^2, r, 32τ^2] and the parameters τ = 4, r = 16, and σ = 0.01 for all nine cases. Fig. 8 shows the deblurred images reconstructed by DIP and MMES. Tab. 1 shows the PSNR and SSIM values of these results. We can see the similarity of the two methods both qualitatively and quantitatively." }, { "heading": "5 INTERPRETATION OF MMES TOWARD EXPLAINING DIP", "text": "It is well known that there is no mathematical definition of interpretability in machine learning, and there is no single unique definition of interpretation. We understand interpretability as the degree to which a human can consistently predict a model's results or performance. The higher the interpretability of a deep learning model, the easier it is for someone to comprehend why certain performance, predictions, or outputs can be achieved. We consider one model more interpretable than another if its performance or behavior is easier for a human to comprehend." }, { "heading": "5.1 FROM A PERSPECTIVE OF DIMENSIONALITY REDUCTION/MANIFOLD LEARNING", "text": "Manifold learning and the associated auto-encoder (AE) can be viewed as a generalized non-linear version of principal component analysis (PCA). In fact, manifold learning solves the key problem of dimensionality reduction very efficiently; in other words, manifold learning (modeling) is an approach to non-linear dimensionality reduction. Manifold modeling for this task is based on the idea that the dimensionality of many data sets is only artificially high. Although the patches of images (data points) consist of hundreds or thousands of pixels, they may be represented as a function of only a few, or at least a quite limited number of, underlying parameters. That is, the patches are actually samples from a low-dimensional manifold that is embedded in a high-dimensional space.
Manifold learning algorithms attempt to uncover these parameters in order to find a low-dimensional representation of the images.
In our MMES approach, we applied the original embedding via the multi-way delay embedding transform (MDT, or Hankelization). Our algorithm is based on the optimization of a cost function, and it works toward extracting the low-dimensional manifold that is used to describe the high-dimensional data. The manifold is described mathematically by Eq. (2), and the cost function is formulated in Eq. (5)." }, { "heading": "5.2 REGARDING OUR ATTEMPT TO INTERPRET "NOISE IMPEDANCE IN DIP" VIA MMES", "text": "As mentioned in the introduction, Ulyanov et al. (2018) reported an important noise-impedance phenomenon of ConvNet structures. Here, we provide a perspective for explaining the noise impedance in DIP through MMES.
Let us consider the sparse-land model, i.e., noise-free images are distributed along low-dimensional manifolds in the high-dimensional Euclidean space, and images perturbed by noise thicken the manifolds (make the dimension of the manifolds higher). Under this model, the density of images can be assumed to be higher along the low-dimensional noise-free image manifolds. When we assume that the image patches are sampled from a low-dimensional manifold as in the sparse-land model, it is difficult to put noisy patches on the low-dimensional manifold. Let us consider fitting the network to noisy images. In such a case, the fastest way to decrease the squared error (loss function) is to learn "similar patches", which often appear in a large set of image patches. Note that finding similar image patches for denoising is a well-known problem solved, e.g., by the BM3D algorithm, which finds similar image patches by template matching. In contrast, our auto-encoder automatically maps similar patches to nearby points on the low-dimensional manifold. When similar patches contain some noise, the low-dimensional representation tries to keep the common components of the similar patches while reducing the noise components. Indeed, Alain & Bengio (2014) proved that a (denoising) auto-encoder maps input image patches toward higher-density portions of the image space. In other words, a (denoising) auto-encoder exerts a kind of force to reconstruct the low-dimensional patch manifold, and this is our rough explanation of the noise-impedance phenomenon. Although the proposed MMES and DIP are not completely equivalent, we see many analogies and similarities, and we believe that our MMES model and the associated learning algorithm give new insight into DIP." }, { "heading": "6 DISCUSSIONS AND CONCLUSIONS", "text": "A beautiful manifold representation of complicated signals in embedded space was originally discovered in the study of dynamical system analysis (i.e., chaos analysis) for time-series signals (Packard et al., 1980). Since then, many signal processing and computer vision applications have been studied, but most methods have considered only linear approximations because of the difficulty of non-linear modeling (Van Overschee & De Moor, 1991; Szummer & Picard, 1996; Li et al., 1997; Ding et al., 2007; Markovsky, 2008). Nowadays, however, the study of non-linear/manifold modeling has progressed considerably with deep learning, and it was successfully applied in this study. Interestingly, we could apply this non-linear system analysis not only to time-series signals but also to natural color images and tensors (this is an extension from delay-embedding to multi-way shift-embedding).
To the best of our knowledge, this is the first study to apply Hankelization with an AE to general tensor data reconstruction.
MMES is a novel and simple image reconstruction model based on the low-dimensional patch-manifold prior, which has many connections to ConvNet. We believe MMES helps us understand how ConvNet/DIP works, and supports the use of DIP for various applications such as tensor/image reconstruction or enhancement (Gong et al., 2018; Yokota et al., 2019; Van Veen et al., 2018; Gandelsman et al., 2019).
Finally, we established bridges between quite different research areas such as dynamical system analysis, deep learning, and tensor modeling. The proposed method is just a prototype and can be further improved by incorporating other techniques such as regularization, multi-scale extensions, and adversarial training." }, { "heading": "A HANKELIZATION OF ONE- AND TWO-DIMENSIONAL ARRAYS", "text": "For example, Hankelization of a one-dimensional array f = [f1, f2, ..., f7] with window size τ = 3 is given by
( f1 f2 f3 f4 f5
  f2 f3 f4 f5 f6
  f3 f4 f5 f6 f7 ). (6)
We can see that the anti-diagonal elements of the above matrix are equal. Such a matrix is called a "Hankel matrix".
For a two-dimensional array
F = ( f11 f12 f13
      f21 f22 f23
      f31 f32 f33 ), (7)
we consider its unfolding and the inverse folding, given by
unfold(F) = [f11, f21, f31, f12, f22, f32, f13, f23, f33]ᵀ, and F = fold([f11, f21, f31, f12, f22, f32, f13, f23, f33]ᵀ). (8)
The point here is that we scan the matrix elements in a column-wise manner. Hankelization of this two-dimensional array (matrix) with τ = [2, 2] is given by scanning the matrix with a local (2,2)-window in a column-wise manner, unfolding each local patch, and stacking the results left-to-right. Thus, it is given as
( f11 f21 f12 f22
  f21 f31 f22 f32
  f12 f22 f13 f23
  f22 f32 f23 f33 ), (9)
which, viewed as a 2×2 block matrix of (2,2)-blocks, has blocks ( f11 f21 ; f21 f31 ), ( f12 f22 ; f22 f32 ), ( f12 f22 ; f22 f32 ), and ( f13 f23 ; f23 f33 ), where ';' separates block rows. We can see that it is not a Hankel matrix. However, it is a "block Hankel matrix" from the perspective of block matrices, i.e., matrices whose elements are themselves matrices. We can see that the block matrix itself is a Hankel matrix, and that all of its elements are Hankel matrices, too. Thus, a Hankel matrix is the special case of a block Hankel matrix in which all elements are scalars. In this paper, we simply say "Hankel structure" for block Hankel structure.
Figure 9 shows an illustrative explanation of valid convolution, which can be decomposed into delay-embedding/Hankelization and a linear transformation. The 1D valid convolution of f with a kernel h = [h1, h2, h3] can be obtained by the matrix-vector product of the Hankel matrix and h. In a similar way, 2D valid convolution can be obtained by the matrix-vector product of the block Hankel matrix and the unfolded kernel." }, { "heading": "B MULTIWAY-DELAY EMBEDDING FOR TENSORS", "text": "The multiway-delay embedding transform (MDT) is a multi-way generalization of Hankelization proposed by Yokota et al. (2018).
In (Yokota et al., 2018), MDT is defined by using the multi-linear tensor product with multiple duplication matrices and tensor reshaping. Basically, we use the same operation, but a padding operation is added.
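Before the formal multi-way definition, a minimal NumPy sketch of the plain one-dimensional Hankelization of Appendix A (Eq. (6); window size τ, no padding) may be helpful:

```python
import numpy as np

def hankelize_1d(f, tau):
    """Stack the length-tau sliding windows of f as columns of a (tau, T) Hankel matrix."""
    f = np.asarray(f)
    T = f.size - tau + 1                 # number of windows
    return np.stack([f[i:i + T] for i in range(tau)], axis=0)

H = hankelize_1d([1, 2, 3, 4, 5, 6, 7], tau=3)
# H reproduces Eq. (6): its anti-diagonal elements are constant (Hankel structure).
```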
Thus, the multiway-delay embedding used in this study is defined by
H(X) := unfold(D,T)(padτ(X) ×1 S1 ··· ×N SN), (10)
where padτ : R^{I1×···×IN} → R^{(I1+2(τ1−1))×···×(IN+2(τN−1))} is an N-dimensional reflection padding operator1 for tensors, unfold(D,T) : R^{τ1(I1+τ1−1)×···×τN(IN+τN−1)} → R^{D×T} is an unfolding operator which outputs a matrix from an input N-th order tensor, and Sn ∈ R^{τn(In+τn−1)×(In+2(τn−1))} is a duplication matrix. Fig. 10 shows the duplication matrix for window size τ.
For example, our Hankelization with reflection padding of f = [f1, f2, ..., f7] with τ = 3 is given by
( f3 f2 f1 f2 f3 f4 f5 f6 f7
  f2 f1 f2 f3 f4 f5 f6 f7 f6
  f1 f2 f3 f4 f5 f6 f7 f6 f5 ). (11)
Fig. 11 shows an example of our multiway-delay embedding in the case of second-order tensors. The overlapped patch grid is constructed by the multi-linear tensor product with the Sn. Finally, all patches are split, lined up, and vectorized.
The Moore-Penrose pseudo-inverse of H is given by
H†(H) = trimτ(fold(D,T)(H) ×1 S1† ··· ×N SN†), (12)
where Sn† := (SnᵀSn)⁻¹Snᵀ is the pseudo-inverse of Sn, fold(D,T) := unfold(D,T)⁻¹, and trimτ = padτ† is a trimming operator that removes (τn − 1) elements at the start and end of each mode. Note that H† ∘ H is an identity map, but H ∘ H† is not; it is a kind of projection.
B.1 DELAY EMBEDDING USING CONVOLUTION
Delay embedding and its pseudo-inverse can be implemented by using convolution with all one-hot-tensor windows of size (τ1, τ2, ..., τN). The one-hot-tensor windows can be obtained by folding a D-dimensional identity matrix ID ∈ R^{D×D} into ID ∈ R^{τ1×···×τN×D}. Fig. 12 shows the calculation flow of multi-way delay embedding using convolution in the case of N = 2. The multi-linear tensor product is replaced with convolution with the one-hot-tensor windows.
1For a one-dimensional array x = [x1, ..., xI]ᵀ, we have padτ(x) = [xτ, ..., x2, x1, ..., xI, xI−1, ..., xI−τ+1]ᵀ, where the first and last groups contain τ − 1 elements each.
The pseudo-inverse of the convolution with padding is given by its adjoint operation, which is called the "transposed convolution" in some neural network libraries, together with trimming and simple scaling by D⁻¹." }, { "heading": "C DESIGN OF AUTO-ENCODER", "text": "In this section, we discuss how to design the neural network architecture of the auto-encoder for restricting the manifold Mr. The simplest way is to control the value of r, which directly restricts the dimensionality of the latent space. There are many other possibilities: Tikhonov regularization (Goodfellow et al., 2016), drop-out (Gal & Ghahramani, 2016), the denoising auto-encoder (Vincent et al., 2008), the variational auto-encoder (Diederik P Kingma, 2014), the adversarial auto-encoder (Makhzani et al., 2015), alpha-GAN (Rosca et al., 2017), and so on. All of these methods have their own perspective and promise; however, their cost is not low. In this study, we select an attractive and fundamental one: the "denoising auto-encoder" (DAE) (Vincent et al., 2008). The DAE is attractive because it has a strong relationship with Tikhonov regularization (Bishop, 1995) and decreases the entropy of the data (Sonoda & Murata, 2017). Furthermore, learning with noise is also employed in the deep image prior.
Finally, we designed an auto-encoder controlled by the dimension r and the standard deviation σ of additive zero-mean Gaussian noise. Fig. 13 illustrates an example of the auto-encoder architecture used in this study. In this case, it consists of five hidden layers whose sizes are [D, D, r, D, D], with leaky ReLU activation.
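A minimal TensorFlow/Keras sketch of such an auto-encoder follows; the use of a GaussianNoise layer for the DAE corruption, the default leaky-ReLU slope, and the linear output layer are assumptions beyond what the figure specifies.

```python
import tensorflow as tf

def build_dae(D, r, sigma=0.05):
    """Denoising AE over embedded patch vectors; hidden sizes [D, D, r, D, D], leaky ReLU."""
    inputs = tf.keras.Input(shape=(D,))
    h = tf.keras.layers.GaussianNoise(sigma)(inputs)       # DAE corruption, active only in training
    for width in (D, D, r, D, D):                          # r-dimensional bottleneck in the middle
        h = tf.keras.layers.Dense(width, activation=tf.nn.leaky_relu)(h)
    outputs = tf.keras.layers.Dense(D)(h)                  # linear reconstruction of the patch vector
    return tf.keras.Model(inputs, outputs)
```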
}, { "heading": "D A SPECIAL SETTING FOR COLOR-IMAGE RECOVERY", "text": "In case of multi-channel or color image recovery case, we use a special setting of generator network because spacial pattern of individual channels are similar and the patch-manifold can be shared. Fig. 14 shows an illustration of the auto-encoder shared version of MMES in a case of color image recovery. In this case, we put three channels of input and each channel input is embedded, independently. Then, three block Hankel matrices are concatenated, and auto-encoded simultaneously. Inverted three images are stacked as a color-image (third-order tensor), and finally color-transformed. The last color-transform can be implemented by convolution layer with kernel size (1,1), and it is also optimized as parameters. It should be noted that the input three channels are not necessary to correspond to RGB, but it would be optimized as some compact color-representation." }, { "heading": "E OTHER DETAILS OF IMAGE-INPAINTING EXPERIMENTS", "text": "Here, we explain detailed experimental settings in Section 4.2.\nIn this section, we compared performance of the proposed method with several selected unsupervised image inpainting methods: low-rank tensor completion (HaLRTC) (Liu et al., 2013), parallel low-rank matrix factorization (TMac) (Xu et al., 2015), tubal nuclear norm regularization (tSVD) (Zhang et al., 2014b), Tucker decomposition with rank increment (Tucker inc.) (Yokota et al., 2018), low-rank and total-variation (LRTV) regularization2 (Yokota & Hontani, 2017; 2019), smooth PARAFAC tensor completion (SPC)3 (Yokota et al., 2016), GSR4 (Zhang et al., 2014a), multi-way\n2For LRTV, the MATLAB software was downloaded from https://sites.google.com/site/ yokotatsuya/home/software/lrtv_pds\n3For SPC, the MATLAB software was downloaded from https://sites.google.com/site/ yokotatsuya/home/software/smooth-parafac-decomposition-for-tensor-completion.\n4For GSR, each color channel was recovered, independently, using the MATLAB software downloaded from https://github.com/jianzhangcs/GSR.\ndelay embedding based Tucker modeling (MDT-Tucker)5 (Yokota et al., 2018), and DIP6 (Ulyanov et al., 2018).\nFor this experiments, hyper-parameters of all methods were tuned manually to perform the best peaksignal-to-noise ratio (PSNR) and for structural similarity (SSIM), although it would not be perfect. For DIP, we did not try the all network structures with various kernel sizes, filter sizes, and depth. We just employed “default architecture”, which the details are available in supplemental material7 of (Ulyanov et al., 2018), and employed the best results at the appropriate intermediate iterations in optimizations based on the value of PSNR. For the proposed MMES method, we adaptively selected the patch-size τ , and dimension r. Table 2 shows parameter settings of τ = [τ, τ ] and r for MMES. Noise level of denoising auto-encoder was set as σ = 0.05 for all images. For auto-encoder, same architecture shown in Fig. 13 was employed. Initial learning rate of Adam optimizer was 0.01 and we decayed the learning rate with 0.98 every 100 iterations. The optimization was stopped after 20,000 iterations for each image." 
}, { "heading": "F OTHER DETAILS OF SUPER-RESOLUTION EXPERIMENTS", "text": "Here, we explain detailed experimental settings in Section 4.3.\nIn this section, we compare performance of the proposed method with several selected unsupervised image super-resolution methods: bicubic interpolation, GSR8 (Zhang et al., 2014a), ZSSR9 and DIP (Ulyanov et al., 2018).\nIn this experiments, DIP was conducted with the best number of iterations from {1000, 2000, 3000, ..., 9000}. For four times (x4) up-scaling in MMES, we set τ = 6, r = 32, and σ = 0.1. For eight times (x8) up-scaling in MMES, we set τ = 6, r = 16, and σ = 0.1. For all images in MMES, the architecture of auto-encoder consists of three hidden layers with sizes of [8τ2, r, 8τ2]. We assumed the same Lanczos2 kernel for down-sampling system for all super-resolution methods.\nTab. 3 shows values of PSNR and SSIM of the results. We used three (256,256,3) color images, and six (512,512,3) color images. Super resolution methods scaling up them from four or eight times down-scaled images of them. According to this quantitative evaluation, bicubic interpolation was clearly worse than others. ZSSR was good for (128,128,3) color images, however the performance were substantially decreased for (64,64,3) color image. Basically, GSR, DIP, and MMES were very competitive. In detail, DIP was slightly better than GSR, and the proposed MMES was slightly better than DIP.\n5For MDT-Tucker, the MATLAB software was downloaded from https://sites.google.com/site/yokotatsuya/home/software/ mdt-tucker-decomposition-for-tensor-completion.\n6For DIP, we implemented by ourselves in Python with TensorFlow. 7https://dmitryulyanov.github.io/deep_image_prior 8For GSR, each color channel was recovered, independently, using the MATLAB software downloaded from https://github.com/jianzhangcs/GSR. We slightly modified its MATLAB code for applying it to super-resolution task.\n9For ZSSR, software was downloaded from https://github.com/assafshocher/ZSSR. We set the same Lanczos2 kernel for this super-resolution task." }, { "heading": "G OTHER EXPERIMENTAL RESULTS", "text": "G.1 OPTIMIZATION BEHAVIOR\nFor this experiment, we recovered 50% missing gray-scale image of ‘Lena’. We stopped the optimization algorithm after 20,000 iterations. Learning rate was set as 0.01, and we decayed the learning rate with 0.98 every 100 iterations. λ was adapted by Algorithm 1 every 10 iterations. Fig. 15 shows optimization behaviors of reconstructed image, reconstruction loss Lrec, auto-encoding loss LDAE, and trade-off coefficient λ. By using trade-off adjustment, the reconstruction loss and the auto-encoding loss were intersected around 1,500 iterations, and both losses were jointly decreased after the intersection point.\nG.2 HYPER-PARAMETER SENSITIVITY\nWe evaluate the sensitivity of MMES with three hyper-parameters: r, σ, and τ . First, we fixed the patch-size as (8, 8), and dimension r and noise standard deviation σ were varied. Fig. 17 shows\nthe reconstruction results of a 99% missing image of ‘Lena’ by the proposed method with different settings of (r, σ). The proposed method with very low dimension (r = 1) provided blurred results, and the proposed method with very high dimension (r = 64) provided results which have many peaks. Furthermore, some appropriate noise level (σ = 0.05) provides sharp and clean results. For reference, Fig. 16 shows the difference of DIP optimized with and without noise. 
G.2 HYPER-PARAMETER SENSITIVITY
We evaluate the sensitivity of MMES to three hyper-parameters: r, σ, and τ. First, we fixed the patch size as (8, 8), and the dimension r and noise standard deviation σ were varied. Fig. 17 shows the reconstruction results for a 99% missing image of 'Lena' by the proposed method with different settings of (r, σ). The proposed method with a very low dimension (r = 1) produced blurred results, and with a very high dimension (r = 64) it produced results with many peaks. Furthermore, an appropriate noise level (σ = 0.05) provided sharp and clean results. For reference, Fig. 16 shows the difference between DIP optimized with and without noise. From both results, the effects of learning with noise can be confirmed.
Next, we fixed the noise level as σ = 0.05, and the patch size was varied for several values of r. Fig. 18 shows the results with various patch-size settings for recovering a 99% missing image. Patch sizes τ of (8,8) or (10,10) were appropriate in this case. The patch size is very important because the variety of patch patterns depends on it. If the patch size is too large, the patch variations expand and the structure of the patch-manifold becomes complicated. By contrast, if the patch size is too small, the information obtained from the embedded matrix H is limited, and the reconstruction becomes difficult in highly missing cases. The same problem might occur in all patch-based image reconstruction methods (Buades et al., 2005; Dabov et al., 2007; Gu et al., 2014; Zhang et al., 2014a). However, good patch sizes differ across images and types/levels of corruption, and the estimation of a good patch size is an open problem. A multi-scale approach (Yair & Michaeli, 2018) may partially mitigate this issue, but the patch size is still fixed or tuned as a hyper-parameter.
G.3 VOLUMETRIC/3D IMAGE/TENSOR COMPLETION
In this section, we show the results for the MR-image/3D-tensor completion problem. The size of the MR image is (109,91,91). We randomly removed 50%, 70%, and 90% of the voxels of the original MR image and recovered the missing MR images by the proposed method and DIP. For DIP, we implemented the 3D version of the default architecture in TensorFlow, but the numbers of filters in the shallow layers were slightly reduced because of GPU memory constraints. For the proposed method, the 3D patch size was set as τ = [4, 4, 4], the lowest dimension was r = 6, and the noise level was σ = 0.05. The same architecture shown in Fig. 13 was employed.
Fig. 19 shows the reconstruction behavior of PSNR together with the final PSNR/SSIM values in this experiment. From the values of PSNR and SSIM, the proposed MMES outperformed DIP in low-rate missing cases, and it was quite competitive in highly missing cases. The degradation of DIP might be caused by an insufficient number of filters, since many more filters would be required for a 3D ConvNet than for a 2D ConvNet. Moreover, the computational times required by our MMES were significantly shorter than those of DIP in this tensor completion problem." } ]
2019
null
SP:45bf7ca342ad1752c7f7c056653137b9283c487f
[ "This paper propose an active deep learning model. By leveraging subjective Logic, they propose to decompose the entropy of a predicted class distribution into vacuity (lack of evidence) and dissonance (conflict of strong evidence). Instead of using the predicted class distribution, they estimate the supporting evidence for each class. In the actual data sampling stage, they first sample from those high-vacuity dense region, to shape the true decision boundary, and then gradually sample from those high-dissonance region to fine-tune the decision boundary. They show better performance than the baselines on both synthetic and real datasets. ", "The authors consider active deep learning. They propose decomposing predictive entropy into a) vacuity (lack of evidence) and b) dissonance (contradictory evidence). They frame this in terms of \"subjective logic\". In practice this is achieved by having the NN output the parameters of a Dirichlet, which allows an additional degree of freedom describing variance/vacuity. Dissonance is defined in terms of the support of contradictory classes. To get improved estimates of vacuity they augment the loss with a term regularizing the Dirichlet parameters to be small (low precision) at unlabelled points with higher KDE(unlablled points) than KDE(labelled points). They propose initially weighting vacuity and later dissonance as AL proceeds. Encouraging results are presented on simulated 2D data, MNIST and CIFAR10. " ]
We present a novel multi-source uncertainty prediction approach that enables deep learning (DL) models to be actively trained with much less labeled data. By leveraging the second-order uncertainty representation provided by subjective logic (SL), we conduct evidence-based theoretical analysis and formally decompose the predicted entropy over multiple classes into two distinct sources of uncertainty: vacuity and dissonance, caused by lack of evidence and conflict of strong evidence, respectively. The evidence-based entropy decomposition provides deeper insights into the nature of uncertainty, which can help effectively explore a large and high-dimensional unlabeled data space. We develop a novel loss function that augments DL based evidence prediction with uncertainty anchor sample identification through kernel density estimation (KDE). The accurately estimated multiple sources of uncertainty are systematically integrated and dynamically balanced using a data sampling function for label-efficient active deep learning (ADL). Experiments conducted over both synthetic and real data and comparison with competitive AL methods demonstrate the effectiveness of the proposed ADL model.
[]
[ { "authors": [ "Clarence W De Silva" ], "title": "Intelligent control: fuzzy logic applications", "venue": "CRC press,", "year": 2018 }, { "authors": [ "Yarin Gal" ], "title": "Uncertainty in deep learning", "venue": "University of Cambridge,", "year": 2016 }, { "authors": [ "Yarin Gal", "Riashat Islam", "Zoubin Ghahramani" ], "title": "Deep bayesian active learning with image data", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Audun Jøsang", "Jin-Hee Cho", "Feng Chen" ], "title": "Uncertainty characteristics of subjective opinions", "venue": "In Fusion,", "year": 1998 }, { "authors": [ "Alex Kendall", "Yarin Gal" ], "title": "What uncertainties do we need in bayesian deep learning for computer vision", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Andrey Malinin", "Mark Gales" ], "title": "Predictive uncertainty estimation via prior networks", "venue": "arXiv preprint arXiv:1802.10501,", "year": 2018 }, { "authors": [ "Nils J Nilsson" ], "title": "Probabilistic logic", "venue": "Artificial intelligence,", "year": 1986 }, { "authors": [ "Ozan Sener", "Silvio Savarese" ], "title": "Active learning for convolutional neural networks: A core-set approach", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Murat Sensoy", "Lance Kaplan", "Melih Kandemir" ], "title": "Evidential deep learning to quantify classification uncertainty", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Kari Sentz", "Scott Ferson" ], "title": "Combination of evidence in Dempster-Shafer theory, volume 4015", "venue": null, "year": 2002 }, { "authors": [ "Burr Settles" ], "title": "Active learning literature survey", "venue": "Technical report, University of Wisconsin-Madison Department of Computer Sciences,", "year": 2009 }, { "authors": [ "Glenn Shafer" ], "title": "A mathematical theory of evidence, volume 42", "venue": "Princeton university press,", "year": 1976 }, { "authors": [ "Oriol Vinyals", "Charles Blundell", "Timothy Lillicrap", "Daan Wierstra" ], "title": "Matching networks for one shot learning", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Dan Wang", "Yi Shang" ], "title": "A new active labeling method for deep learning", "venue": "In 2014 International joint conference on neural networks (IJCNN),", "year": 2014 }, { "authors": [ "Keze Wang", "Dongyu Zhang", "Ya Li", "Ruimao Zhang", "Liang Lin" ], "title": "Cost-effective active learning for deep image classification", "venue": "IEEE Transactions on Circuits and Systems for Video Technology,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep learning (DL) models establish dominating status among other types of supervised learning models by achieving the state-of-the-art performance in various application domains. However, such an advantage only emerges when a huge amount of labeled training data is available. This limitation slows down the pace of DL, especially when being applied to knowledge-rich domains, such as medicine, biology, and military operations, where large-scale labeled samples are too expensive to obtain from well-trained experts. Meanwhile, active learning (AL) has demonstrated great success by showing that for many supervised models, training samples are not equally important in terms of improving the model performance (Settles, 2009). As a result, a carefully selected smaller training set can achieve a model equally well or even better than a randomly selected large training set.\nAn interesting question arises, which is whether DL models can be actively trained using much less labeled data. Recent efforts show promising results in this direction through Bayesian modeling (Gal et al., 2017) and batch model sampling (Sener & Savarese, 2018). However, as DL models are commonly applied to high dimensional data such as images and videos, a fundamental challenge still remains, which is how to most effectively explore the exponentially growing sample space to select the most useful data samples for active model training. Existing AL models usually leverage the model provided information, such as estimated decision boundaries or predicted entropy for data sampling. However, the deep structure and the large number of parameters of DL models make model overfitting almost inevitable especially in the early stage of AL when only very limited training data is available. As a result, the model may provide misleading information that makes data sampling from a high-dimensional search space even more difficult. Besides a high dimensionality, complex data may contain a large number of classes and data samples from certain classes may be completely missing. Such situations are quite common for domains, such as scientific discovery (e.g., gene function prediction) and anomaly detection. AL models should be able to effectively discover these out of distribution (OOD) samples for labeling in order to achieve an overall good prediction performance.\nUncertainty sampling has been one of the most commonly used pool-based AL models. In particular, a model chooses the data sample that it is least certain about. Thus, once the sample is labeled, model uncertainty can be significantly reduced. As an information-theoretic measure, entropy provides a\ngeneral criterion for uncertainty sampling. Some commonly used sampling methods, including least confident and margin based strategies, are equivalent to entropy-based sampling in binary classification (Settles, 2009). It is also natural and straightforward to generalize to multi-class problems.\nA key challenge of entropy-based sampling for AL is that the predicted entropy may be highly inaccurate, especially in the early state of the AL. Such an issue may become more severe when training a neural network (NN)/DL active learner due to model overfitting as described above. Figure 1(a) shows the predicted entropy by an NN active learner trained using nine labeled data samples, which are in black color and evenly distributed in three classes. 
The standard softmax layer is used in the output layer to generate class probabilities over three classes, each of which is a mixture of two Gaussians. It turns out that all the data samples in the three small clusters located in the top left, top right, and bottom center are wrongly predicted with high confidence, as indicated by the low entropy. As a result, data samples from these three clusters are less likely to be selected for labeling. In contrast, the data samples that are close to the centers of the three major clusters are more likely to be selected. However, labeling these samples will have the effect of fine-tuning a wrongly predicted decision boundary, leading to a much higher (but less effective) labeling cost.
Figure 1(b) shows the result from the proposed active deep learning (ADL) model. While the samples from the small clusters are still wrongly predicted due to the lack of training data, they are predicted with much lower confidence, as indicated by the high entropy. However, even with a more accurately predicted entropy, the active learner may still sample from the centers of the three major clusters, as they are still assigned a high entropy along with the areas that cover the three smaller clusters. By performing a fine-grained analysis of uncertainty under the subjective logic (SL) framework (Jøsang, 2016), we formally decompose entropy into two distinct sources of uncertainty: vacuity and dissonance, which are caused by lack of evidence and conflict of strong evidence, respectively. By putting the vacuity and dissonance shown in Figures 1(c) and (d) together, it is interesting to see that we recover the entropy shown in Figure 1(b), which empirically verifies our theoretical results. Entropy decomposition provides further insights into the sources of uncertainty, which is instrumental in guiding the data sampling process. Intuitively, given the dataset in Figure 1, an effective sampling strategy will first choose samples from the three small clusters according to vacuity in the early stage of AL to properly establish the shape of the decision boundary. It can then fine-tune the decision boundary by sampling according to dissonance. Such an uncertainty-aware sampling strategy will be critical for a high-dimensional space with multiple competing classes, where data samples are sparsely distributed and the decision boundary becomes more complicated.
Our major contribution is threefold: (1) decomposition of entropy through evidence-based theoretical analysis of belief vacuity and belief dissonance under the SL framework; (2) a multi-source uncertainty prediction model that accurately quantifies different sources of uncertainty through kernel density regularized evidence prediction; (3) an active deep learning model that systematically integrates different types of uncertainty for effective data sampling in a high-dimensional space. Extensive experiments are conducted over both synthetic and real-world data to demonstrate the effectiveness of the proposed ADL model." }, { "heading": "2 RELATED WORK", "text": "Uncertainty Quantification in Belief/Evidence Theory: In the belief/evidence theory domain, uncertainty reasoning has been substantially explored, e.g., in Fuzzy Logic (De Silva, 2018), Dempster-Shafer Theory (DST) (Sentz et al., 2002), and Subjective Logic (SL) (Jøsang, 2016).
Unlike the efforts made in ML/DL above, belief theorists have focused on reasoning about the inherent uncertainty in information resulting from unreliable, incomplete, deceptive, and/or conflicting evidence. SL considered uncertainty in subjective opinions in terms of vacuity (i.e., lack of evidence) and vagueness (i.e., failure to discriminate a belief state) (Jøsang, 2016). Vacuity has been used as an effective vehicle to detect out-of-distribution queries through evidence learning, achieved under the typical DL setting with ample training samples (Sensoy et al., 2018). Recently, other dimensions of uncertainty have been studied, such as dissonance (due to conflicting evidence) and consonance (due to evidence about composite subsets of state values) (Jøsang et al., 2018).
Epistemic Uncertainty in Deep Learning: In DL, aleatoric uncertainty (AU) and epistemic uncertainty (EU) have been studied using Bayesian Neural Networks (BNNs) for computer vision. AU consists of homoscedastic uncertainty (i.e., constant errors for different inputs) and heteroscedastic uncertainty (i.e., different errors for different inputs) (Gal, 2016). A Bayesian DL (BDL) framework was presented to estimate both AU and EU simultaneously in regression (e.g., depth regression) and classification settings (e.g., semantic segmentation) (Kendall & Gal, 2017). A new type of uncertainty, called distributional uncertainty, is defined based on the distributional mismatch between the test and training data distributions (Malinin & Gales, 2018).
Active Learning in Deep Learning: Common AL methods other than DL-based ones are surveyed in (Settles, 2009). There are limited efforts on actively training DL models for high-dimensional data, with a few exceptions. In (Wang & Shang, 2014), an AL model was developed for DL using three metrics for data sampling: least confidence, margin sampling, and entropy. A newer approach combines recent advances in BDL with the AL framework to achieve label-efficient DL training (Gal et al., 2017). Another approach advances AL development by introducing a cost-effective strategy to automatically select and annotate high-confidence samples, which improves the traditional sample selection strategies (Wang et al., 2016). Data sampling in DL has also been approached as a core-set selection problem (Sener & Savarese, 2018), which requires a large batch to work well. Different from all existing works, the proposed ADL model decomposes the accurately estimated uncertainty into vacuity and dissonance and dynamically balances multi-source uncertainty to achieve active training of DL models with much less labeled data." }, { "heading": "3 EVIDENCE-AWARE ENTROPY DECOMPOSITION", "text": "As discussed earlier, a high entropy may be contributed by different sources of uncertainty with distinct characteristics. In this section, we conduct a fine-grained theoretical analysis of the different types of uncertainty that arise in the context of multi-class problems. The decomposition is conducted under the SL framework, which provides the key building blocks for our theoretical analysis." }, { "heading": "3.1 THEORY OF SUBJECTIVE LOGIC", "text": "SL is an uncertain probabilistic logic that is built upon probabilistic logic (PL) (Nilsson, 1986) and belief theory (BT) (Shafer, 1976) while making two unique extensions.
First, SL explicitly represents uncertainty by introducing vacuity of evidence (or uncertainty mass) in its opinion representation, which addresses the limitation of PL in modeling a lack of confidence in probabilities. Second, SL extends the traditional belief function of BT by incorporating base rates, which serve as the prior probability in Bayesian theory. The Bayesian nature of SL allows it to use second-order uncertainty to express and reason about the uncertainty mass, where second-order uncertainty is represented in terms of a probability density function (PDF) over first-order probabilities (Jøsang, 2016). In particular, for multi-class problems, we use a multinomial distribution (first-order uncertainty) to model class probabilities and a Dirichlet PDF (second-order uncertainty) to model the distribution of class probabilities. Second-order uncertainty enriches the uncertainty representation with evidence information, which plays a central role in the entropy decomposition, as detailed later.
Subjective opinions (or opinions) are the arguments in SL. In the multi-class setting, the subjective opinion of a multinomial random variable y in domain Y = {1, ..., K} is given by a triplet
ω = (b, u, a), with Σ_{k=1}^{K} bk + u = 1, (1)
where b = (b1, ..., bK)ᵀ, u, and a = (a1, ..., aK)ᵀ denote the belief mass distribution over Y, the uncertainty mass representing vacuity of evidence, and the base rate distribution over Y, respectively, and ∀k, ak ≥ 0, bk ≥ 0, u ≥ 0. The probability that y is assigned to the k-th class is given by
P(y = k) = bk + ak u, ∀k ∈ Y, (2)
which combines the belief mass with the uncertainty mass using the base rates. In the multi-class setting, ak can be regarded as the prior preference for the k-th class. When no specific preference is given, we set all base rates to 1/K.
The existing SL literature lacks a clear transition between the first-order uncertainty given in equation 2 and the second-order uncertainty expressed as a Dirichlet PDF. Here, we make this transition more explicit by introducing a set of random variables p = (p1, ..., pK)ᵀ, where p is distributed on a simplex of dimensionality K − 1. We introduce a conditional distribution P(y = k | pk) = pk, which allows us to represent the marginal distribution in equation 2 by P(y) = ∫ P(y|p) p(p) dp. We define p(p) as a Dirichlet PDF over p: Dir(p|α), where α = (α1, ..., αK)ᵀ is a K-dimensional strength vector, with αk ≥ 0 denoting the effective number of observations of the k-th class. SL explicitly introduces the uncertainty evidence through a non-informative weight W and redefines the strength parameter as
αk = rk + ak W, with rk ≥ 0, ∀k ∈ Y, (3)
where rk is the amount of evidence (or the number of observations) supporting the k-th class and W is usually set to K, i.e., the number of classes. Given the new definition of the strength parameter, the expectation of the class probabilities p = (p1, ..., pK)ᵀ is given by
E[pk] = αk / Σ_{j=1}^{K} αj = (rk + ak W) / (Σ_{j=1}^{K} rj + W), (4)
where ak = 1/K. By marginalizing out p, we can derive an evidence-based expression of the belief mass and uncertainty mass:
bk = rk / S, ∀k ∈ Y,  u = W / S,  with S = Σ_{k=1}^{K} αk. (5)
SL categorizes uncertainty into two primary sources (Jøsang, 2016): (1) basic belief uncertainty, which results from specific aspects of belief mass in isolation, and (2) intra-belief uncertainty, which results from the relationships between belief masses and uncertainty mass.
Since we focus on the multi-class setting, no composite values (i.e., values simultaneously assigned to multiple classes) are allowed. As a result, these two sources of uncertainty boil down to vacuity and dissonance, respectively, which correspond to vacuous belief and contradicting beliefs. In particular, the vacuity of an opinion, vac(ω), is captured by the uncertainty mass u, which is defined in equation 5, and the dissonance of an opinion (Jøsang et al., 2018) is defined as
diss(ω) = Σ_{k=1}^{K} [ bk Σ_{j≠k} bj Bal(bj, bk) / Σ_{j≠k} bj ],  with Bal(bj, bk) = 1 − |bj − bk| / (bj + bk) if bj bk ≠ 0, and Bal(bj, bk) = 0 if min(bj, bk) = 0, (6)
where Bal(bj, bk) is the relative mass balance function between two belief masses. The belief dissonance of an opinion is measured based on how much belief supports the individual classes. Consider a binary classification example with a binomial opinion given by (b1, b2, u, a) = (0.49, 0.49, 0.02, a). Based on equation 6, it has a dissonance value of 0.98. In this case, although the vacuity is close to zero, the high dissonance indicates that one cannot make a clear decision because both classes have the same amount of supporting evidence and hence strongly conflict with each other." }, { "heading": "3.2 EVIDENCE-BASED ENTROPY DECOMPOSITION", "text": "By leveraging the second-order uncertainty representation, we formally show that the entropy of a predicted class distribution P(y) can be decomposed into vacuity and dissonance. Our main theoretical results indicate that the uncertainty of a high-entropy data sample may be caused by either lack of evidence (i.e., high vacuity) or conflict of strong evidence (i.e., high dissonance), but not both. By clearly identifying the sources of uncertainty instead of using them in a combined form as in entropy, the evidence-based decomposition of entropy provides deeper insights into the nature of uncertainty, which provides important guidance for an AL model to more effectively explore a large and high-dimensional search space for efficient data sampling.
Lemma 1. Dissonance maximization. Given a total Dirichlet strength S = CK, where C ≥ 1 and K is the number of classes, for any opinion ω on a multinomial random variable y, we have
max diss(ω) = 1 − 1/C. (7)
Corollary 1. The dissonance diss(ω) approaches (but does not reach) 1 when all the evidence values rk are equal and S → ∞; it reaches 0 when S = K:
lim_{S→∞} diss(ω) = 1 if r1 = ··· = rK,  and  diss(ω) = 0 if S = K. (8)
Lemma 2. Vacuity maximization. For any opinion ω on a multinomial random variable y, we have 0 ≤ vac(ω) ≤ 1, and the maximum vacuity is achieved when Σ_{k=1}^{K} rk = 0.
Theorem 1. Let y denote a multinomial random variable, ωy denote its opinion, S denote its total Dirichlet strength, and H[y] be the entropy of y. H[y] can be maximized under two different and non-overlapping conditions: (1) for S = K and assuming non-informative base rates, y∗ = arg max H[y] ⇔ y∗ = arg max vac(ωy); (2) for S → ∞, y∗ = arg max H[y] ⇔ y∗ = arg max diss(ωy).
A more intuitive interpretation of the main results in Theorem 1 is as follows. A high-entropy data sample supported by strong evidence (i.e., S ≫ K) is caused by high dissonance (i.e., conflict of evidence); a high-entropy data sample supported by little evidence (i.e., S ≈ K) is caused by high vacuity (i.e., lack of evidence). Through the second-order uncertainty representation, we offer an evidence-based interpretation of entropy that allows us to identify two different sources of uncertainty that both cause a high entropy.
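To make the two quantities concrete, a minimal NumPy sketch computing vacuity (equation 5) and dissonance (equation 6) from a predicted evidence vector is given below; it reproduces the binary example above, where evidence (49, 49) with W = 2 yields vacuity 0.02 and dissonance 0.98.

```python
import numpy as np

def vacuity_dissonance(evidence, W=None):
    """Vacuity (eq. 5) and dissonance (eq. 6) of a multinomial opinion, from evidence r_k >= 0."""
    r = np.asarray(evidence, dtype=np.float64)
    K = r.size
    W = float(K) if W is None else W     # non-informative weight W = K
    S = r.sum() + W                      # total Dirichlet strength
    b = r / S                            # belief masses b_k = r_k / S
    vac = W / S                          # uncertainty mass (vacuity)
    diss = 0.0
    for k in range(K):
        other = np.delete(b, k)
        denom = other.sum()
        if denom == 0:
            continue
        bal = np.zeros_like(other)       # Bal = 0 whenever min(b_j, b_k) = 0
        mask = np.minimum(other, b[k]) > 0
        bal[mask] = 1.0 - np.abs(other[mask] - b[k]) / (other[mask] + b[k])
        diss += b[k] * (other * bal).sum() / denom
    return vac, diss

print(vacuity_dissonance([49, 49]))      # -> (0.02, 0.98), the binary example above
```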
The multi-source uncertainty will provide important information for designing a fine-grained sampling function for AL, which will be detailed in the next section." }, { "heading": "4 MULTI-SOURCE UNCERTAINTY AWARE ACTIVE DEEP LEARNING", "text": "In order to best use the uncertainty information, the ADL model should first be able to provide an accurate uncertainty estimate based on very limited training data. This, coupled with the large number of parameters of the DL model, poses a fundamental challenge due to a higher risk of model overfitting. As shown earlier, inaccurate uncertainty estimation will cause the model to miss labeling important data samples that, if labeled, could help accurately detect the decision boundary.
In addition, since both vacuity and dissonance are derived from second-order uncertainty, solely predicting the class label or its distribution does not provide sufficient information for multi-source uncertainty prediction. Instead of predicting the class label distribution, the proposed ADL model directly estimates the supporting evidence (i.e., the rk's) for each class, which is the central element used to quantify belief mass and uncertainty mass according to equation 5. To better address overfitting, we develop a novel loss function that augments DL-based evidence prediction with uncertainty anchor sample identification through kernel density estimation (KDE). These anchor samples are unlabeled data that inform the ADL which areas of the data space are less explored. Optimizing this loss function ensures that ADL predicts high vacuity over these areas. Furthermore, through KDE, these less explored areas are automatically ranked based on their data density. This nice property allows the ADL to effectively prioritize the sampling order over these areas and iteratively visit them based on their data density. Finally, we introduce our novel sampling function that systematically integrates the accurately estimated multi-source uncertainty for active deep learning." }, { "heading": "4.1 UNCERTAINTY ANCHOR SAMPLE IDENTIFICATION", "text": "Let Xu and Xt denote the sets of unlabeled candidate and training samples, respectively. The probability densities of the two populations with a kernel function k(·, ·) can be estimated as follows:
pu(x) = (1/|Xu|) Σ_{xn∈Xu} k(x, xn),  pt(x) = (1/|Xt|) Σ_{xn∈Xt} k(x, xn). (9)
Since we aim to identify unlabeled anchor samples to inform the model of areas in the space that are less explored by the training data, these samples should come from areas having high density mass with respect to pu(x) but low density mass with respect to pt(x). The problem is formalized as:
max_{A⊆Xu}  λ Σ_{x∈A} pu(x) − Σ_{x∈A} pt(x). (10)
The first term ensures that the selected area has abundant candidate data points to sample, so that it has a lower risk of containing isolated noise. The second term makes sure the selected region lies OOD with respect to the current training data. The optimal set for equation 10 is given by:
A∗ = {x | λ pu(x) − pt(x) > 0}, (11)
where λ ∈ [0, 1] is used to control the size of A∗ given the candidate and training datasets." }, { "heading": "4.2 MULTI-SOURCE UNCERTAINTY PREDICTION", "text": "The set of uncertainty anchor samples A∗ represents areas in the data space that are cohesively distributed far away from the current training data. As these data are essentially OOD with respect to the current training data, their predicted vacuity should be high, which implies low predicted evidence due to Lemma 2.
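A direct NumPy sketch of the anchor identification of equations 9-11 (Section 4.1) is given below; the RBF kernel and the default λ follow the experimental settings reported in the appendix, while vectorizing over the whole pool is an implementation choice.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """RBF kernel matrix between the rows of A (n, d) and B (m, d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def anchor_set(X_u, X_t, lam=0.005, length_scale=1.0):
    """A* = {x in X_u : lam * p_u(x) - p_t(x) > 0}, with the KDE densities of equation 9."""
    p_u = rbf_kernel(X_u, X_u, length_scale).mean(axis=1)   # density w.r.t. the unlabeled pool
    p_t = rbf_kernel(X_u, X_t, length_scale).mean(axis=1)   # density w.r.t. the training data
    return X_u[lam * p_u - p_t > 0]
```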
We leverage this information by constructing an evidence strength loss, L_Evi^(u), which forces the model to predict low evidence for xu ∈ A∗:
L_Evi^(u)(A∗, Θ) = 1(xu ∈ A∗) |1ᵀ f(xu|Θ)|, (12)
where 1(C) = 1 if C is true and 0 otherwise; ru = f(xu|Θ) is the output of the DL model, representing the predicted supporting evidence of xu, and Θ is the set of DL model parameters. Since we require rk ≥ 0, an activation layer (i.e., ReLU) is used to replace the softmax layer commonly used in other NN classifiers. The evidence strength loss is a key component of our proposed overall loss function. Samples in A∗ act as anchors that give the model a preview of areas outside its current knowledge. The model is guided to put less belief mass on those areas, leading to more accurate uncertainty estimation and eventually benefiting the multi-source uncertainty based data sampling. Furthermore, since the activation layer is used for the model output, equation 12 essentially performs l1 regularization on the last hidden layer's weight matrix and bias vector. We want to emphasize that our approach demands no additional labeling cost: the anchor samples are dynamically detected according to the current training data and put into use without their actual labels being known.
We proceed to define our overall loss function. For a training sample xi, let yi encode the ground-truth class label k by setting yik = 1 and yij = 0, ∀j ≠ k. Let Cat(ŷi = k | pi(Θ)) be the likelihood, where pi(Θ) ∼ Dir(pi | αi(Θ)) and αi(Θ) = f(xi|Θ) + W ai. We set the non-informative weight W = K and the base rates aik = 1/K, ∀k. The expected sum-of-squares loss is defined as
L^(i)(Θ) = E_{pi∼Dir(pi|αi(Θ))} ||yi − pi||₂² = Σ_{j=1}^{K} (yij² − 2 yij E[pij(Θ)] + E[pij(Θ)²]). (13)
Minimizing L^(i)(Θ) has the effect of jointly minimizing the prediction error and the variance of pi (Sensoy et al., 2018), hence reducing the uncertainty. This can be seen by using the identity E[pij(Θ)²] = E[pij(Θ)]² + Var(pij(Θ)) and rearranging the terms on the r.h.s. of equation 13. Our overall loss function is defined as:
Σ_{xi∈Xt} L^(i)(Θ) + λ1 Σ_{xu∈Xu} L_Evi^(u) + λ2 L2(Θ), (14)
where L2(Θ) is the standard L2 regularizer of the network parameters.
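In NumPy form (standing in for the actual DL framework), the loss of equations 12-14 can be sketched as follows; the closed-form Dirichlet variance used for E[p²] and the precomputed squared norm of Θ are the only ingredients beyond the equations themselves, and the λ1, λ2 defaults follow the appendix settings.

```python
import numpy as np

def expected_sse(y_onehot, evidence, W=None):
    """Per-sample expected sum-of-squares loss of eq. (13) under Dir(p | alpha)."""
    K = y_onehot.shape[-1]
    W = K if W is None else W
    alpha = evidence + W / K                        # alpha_k = r_k + a_k W with a_k = 1/K
    S = alpha.sum(axis=-1, keepdims=True)           # total Dirichlet strength
    p_mean = alpha / S                              # E[p_j]
    p_var = p_mean * (1.0 - p_mean) / (S + 1.0)     # Var(p_j) of a Dirichlet
    return ((y_onehot - p_mean) ** 2 + p_var).sum(axis=-1)

def overall_loss(y_t, ev_t, ev_anchor, theta_sq_norm, lam1=0.005, lam2=0.05):
    """Eq. (14): data term + evidence-strength term on anchor samples + L2 on the weights."""
    evi_term = np.abs(ev_anchor).sum()              # eq. (12): push anchor evidence toward zero
    return expected_sse(y_t, ev_t).sum() + lam1 * evi_term + lam2 * theta_sq_norm
```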
" }, { "heading": "4.3 DATA SAMPLING FOR ACTIVE DEEP LEARNING", "text": "According to Lemma 2, a data sample's vacuity is maximized when the model assigns zero evidence to all K classes. This indicates that the model has never seen a similar data sample in training. Annotating samples with large vacuity can help the ADL gain the most new knowledge of the data space. It has the effect of guiding the model to explore the most important areas, which is especially critical for a high-dimensional data space. In AL, the true decision boundary can easily be skewed due to limited initial training. The vacuity-aware search helps the model converge quickly to the true decision boundary without excessively sampling around the wrong one. Moreover, it is also effective for discovering new classes whose instances have never been exposed to the model, as shown in our experiments. According to Lemma 1, a data sample's dissonance is maximized when the model assigns equally high (close to infinity) evidence to all K classes. Such strong conflicting evidence from different classes indicates that the data sample is located near the decision boundary, where multiple classes heavily overlap. Annotating samples with high dissonance helps the model further fine-tune the decision boundary, leading to better discriminative power.
We design a sampling function that best leverages these two important and complementary sources of uncertainty to most effectively guide ADL. Intuitively, we would like ADL to rely more on vacuity in the early phase of AL, which can most effectively shape the decision boundary and avoid fine-tuning the wrong decision areas. As AL proceeds, dissonance should gradually gain a higher weight, which allows ADL to further fine-tune a decision boundary that has the right shape but is less accurate, aiming to maximize the discriminative power of the model. The sampling function is given by:
x∗ = arg max_{x∈Xu} [diss(ω(x)) + β vac(ω(x))], (15)
where β is an annealing coefficient to gradually balance between vacuity and dissonance based on the rationale given above. More specifically, the importance of vacuity decreases as there are fewer "vacuous" areas in the data space w.r.t. the current training data; this implies that the training data can well approximate the entire data space. Thus, one natural way to quantify β is to use the inverse of the mutual information KL(pu,t || pt), where the joint density distribution pu,t can be estimated as pu,t(x) = (1/(|Xu|+|Xt|)) Σ_{xn∈Xu∪Xt} k(x, xn). In practice, calculating the mutual information at each AL iteration is expensive. We use a heuristic surrogate: min_max = min_{xu∈Xu} max_{xt∈Xt} k(xt, xu), and set β = 1 − dT if min_max does not change within the past few AL iterations, where T denotes the current AL iteration and d is a fixed decay rate (set to 1/100K in our experiments).
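Putting equation 15 and the annealing heuristic together, a minimal sketch is given below; reading "1/100K" as 1/(100·K), the stall test, and the pre-stall value β = 1 are assumptions.

```python
import numpy as np

def select_sample(X_u, vac, diss, beta):
    """Pick x* = argmax_x [ diss(w(x)) + beta * vac(w(x)) ] over the unlabeled pool (eq. 15)."""
    return X_u[int(np.argmax(diss + beta * vac))]   # vac/diss precomputed per eqs. (5)-(6)

def anneal_beta(T, K, minmax_history, patience=3):
    """Decay beta = 1 - d*T with d = 1/(100*K) once the min-max similarity between
    unlabeled and training data has stalled; before that, keep beta at 1 (assumed)."""
    d = 1.0 / (100.0 * K)
    stalled = (len(minmax_history) > patience
               and np.allclose(minmax_history[-patience:], minmax_history[-1]))
    return max(0.0, 1.0 - d * T) if stalled else 1.0
```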
" }, { "heading": "5 EXPERIMENTS", "text": "In this section, we report our experimental results on both synthetic and real-world data. The synthetic experiment aims to verify the key theoretical properties of ADL, including entropy decomposition and multi-source uncertainty prediction, and to show how these properties contribute to AL. The real-data experiments aim to compare ADL with its competitors. We focus on testing in a classical AL environment, where the initial training set includes only limited samples from some classes, with samples from other classes completely missing. In each AL iteration, we sample one or a small batch of data instances. This is fundamentally different from some recent DL-based AL methods, such as (Sener & Savarese, 2018), which perform batch-mode sampling with a large batch size (larger than our entire labeled set). All models use the same DL architecture. For synthetic data, we adopt a 3-layer MLP with tanh activation. For real data, we use LeNet with ReLU activation." }, { "heading": "5.1 SYNTHETIC DATA", "text": "The synthetic experiment is designed to show: (1) whether ADL accurately captures the different sources of uncertainty, and (2) whether accurately estimated uncertainty leads to better AL behavior. To mimic the existence of OOD data, we generate three mixtures of Gaussians. Each mixture consists of a major cluster and a smaller (i.e., OOD) cluster with 750 and 50 samples, respectively. We center the major Gaussian components of each class in the middle of the feature space and put their corresponding OOD components away from them. In Figure 1, we show that a classical DL model with a softmax layer provides very inaccurate uncertainty estimates. In contrast, the proposed ADL model not only provides accurate entropy prediction but also successfully decomposes it into vacuity and dissonance. Figure 2 shows the uncertainty prediction results from EDL (Sensoy et al., 2018), which can also provide evidence prediction but requires ample training data. Suffering from insufficient training, EDL is inaccurate in its entropy prediction, especially for the OOD clusters. While EDL does not provide entropy decomposition, we use its predicted evidence to compute vacuity and dissonance, as shown in Figure 2. However, neither of them is accurately predicted: low vacuity is predicted for the three OOD clusters where there is no training data, and high dissonance is predicted in areas with no nearby training data to exhibit conflicting evidence.
Figure 3 shows that, the first time ADL selects at least one data sample from each OOD area, high vacuity is assigned to areas with no training data but many unlabeled data. Meanwhile, high dissonance indicates that refining the decision boundary may be more instrumental in improving model performance. A few iterations later, ADL starts to penalize vacuity. While vacuity is still accurately estimated (high in vacuous regions), it becomes less useful for sampling (since few unlabeled samples are located nearby). Two iterations later, penalizing vacuity helps choose data samples that significantly refine the decision boundary. The superior AL performance of ADL shown in Figure 3 further confirms the effectiveness of ADL's key properties demonstrated above." }, { "heading": "5.2 REAL DATA", "text": "The real-world experiments are conducted on three datasets, MNIST, notMNIST, and CIFAR-10, all of which have ten classes. To mimic a real-world AL scenario, we leave 2-5 classes out of the initial training, and there are 5 labeled samples for each available class. A good AL model is expected to discover samples of the unknown classes at an early stage to effectively improve model accuracy. We compare the proposed model with EDL (Sensoy et al., 2018) (entropy, vacuity+dissonance), BALD (Gal et al., 2017) (epistemic), and softmax (entropy, random), where the uncertainty measures used for sampling are given in parentheses. Figures 4 and 5 show that ADL consistently outperforms the other models on all three datasets. The advantages of ADL are twofold. First, entropy decomposition gives ADL the flexibility to meet distinct sampling needs at different AL phases. In the early stage, fast accuracy improvement is achieved by vacuity-guided sampling, where the most representative samples are labeled with high priority. Gradually, ADL switches to dissonance-guided sampling to refine the decision boundary by labeling the most informative samples to improve its discriminative power. In contrast, sampling methods using a unified uncertainty (e.g., epistemic uncertainty and entropy) lack such flexibility to adjust the sampling behavior, leading to either slow convergence or lower model accuracy. Second, compared with EDL, which can also perform evidence prediction, ADL is superior due to accurate uncertainty estimation via the effective loss function. For both synthetic and real data, we observe that ADL identifies samples from missing classes at least around 20% faster than EDL and the other models." }, { "heading": "6 CONCLUSION", "text": "We present a novel active deep learning model that systematically leverages two distinct sources of uncertainty, vacuity and dissonance, to effectively explore a large and high-dimensional data space for label-efficient training of DL models.
The proposed ADL model benefits from the evidence-based entropy decomposition that follows from our theoretical analysis of belief vacuity and belief dissonance under the SL framework. The multi-source uncertainty can be accurately estimated through a novel loss function that augments DL based evidence prediction with vacuity-aware regularization of the model parameters. By dynamically balancing the importance of vacuity and dissonance, a sampling function is designed to first explore the critical areas of the data space and then fine-tune the decision boundary to maximize its discriminative power. Extensive experiments conducted over both synthetic and real data help verify the theoretical properties and empirical performance of the proposed ADL model." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "PROOF OF LEMMA 1", "text": "Proof. Let $B_{jk}$ denote $\mathrm{Bal}(b_j, b_k)$. Since $0 \leq b_k \leq 1$ (as $S = \sum_{k} r_k + K$), we have $0 \leq B_{jk} \leq 1$. In addition, $B_{jk} = 1$ if $b_j = b_k \neq 0$; $B_{jk} = 0$ if $b_j b_k = 0$. Thus, we have $\sum_{j \neq k} b_j B_{jk} \leq \sum_{j \neq k} b_j$, where the equality holds when $B_{jk} = 1, \forall j$. Therefore, we have\n$\mathrm{diss}(\omega) = \sum_{k=1}^{K} b_k \left[ \frac{\sum_{j \neq k} b_j B_{jk}}{\sum_{j \neq k} b_j} \right] \leq \sum_{k=1}^{K} b_k \overset{(a)}{=} \frac{\sum_{k=1}^{K} r_k}{S} \overset{(b)}{=} \frac{S - K}{S} = 1 - \frac{1}{C} \quad (16)$\nwhere (a) is due to the definition of $b_k$ in equation 5 and (b) is due to the summation constraint in equation 1 and $W = K$." }, { "heading": "PROOF OF LEMMA 2", "text": "Proof. Using the definition of uncertainty mass in equation 5 and substituting $W$ by $K$, we have\n$0 \leq \mathrm{vac}(\omega) = \frac{K}{S} = \frac{K}{\sum_{k=1}^{K} r_k + K} \leq 1 \quad (17)$\nwhere equality is achieved when $\sum_{k=1}^{K} r_k = 0$." }, { "heading": "PROOF OF THEOREM 1", "text": "Proof. For (1), ($\Rightarrow$) is easy to show, as $S = K$ implies $\sum_{k=1}^{K} r_k = 0$ and $\mathrm{vac}(\omega_{y^*}) = 1$; for ($\Leftarrow$), using equation 2 and non-informative base rates, we have $P(y^* = k) = 1/K, \forall k$, which achieves the maximum of $H[y^*]$ as $\log K$.\nFor (2), we first prove ($\Rightarrow$). For $y^* = \arg\max H[y]$, we have $P(y^* = k) = 1/K, \forall k$. Thus, $(r_k + a_k K)/S = 1/K, \forall k$. For $S \to \infty$, denote $S = CK$ and we have $r_k/S + a_k/C = 1/K, \forall k$. Letting $C \to \infty$, we have $r_k/S \to 1/K, \forall k$. Thus, we have $y^* = \arg\max \mathrm{diss}(\omega_y)$ due to Corollary 1. To prove ($\Leftarrow$), $\mathrm{diss}(\omega_{y^*}) = 1$ implies that $r_1 = \cdots = r_K$ and $S \to \infty$. Hence, $\lim_{S \to \infty} P(y^* = k) = \lim_{S \to \infty} (r_k + a_k K)/S = 1/K$, which implies that $y^* = \arg\max H[y]$ for $S \to \infty$." }, { "heading": "ADDITIONAL EXPERIMENTAL RESULTS", "text": "We obtain similar AL curves for notMNIST and CIFAR-10 when starting AL with 7 and 8 classes as with 5 and 6 classes, which are shown in Figure 6. In Figure 7, we also report the AL performance on the three datasets when there is no missing class. ADL still achieves the best performance in all cases, with a slightly smaller advantage over the other models." }, { "heading": "EXPERIMENTAL SETTINGS", "text": "We use the Adam optimizer to train ADL for 600 epochs, setting the learning rate to 0.001. The coefficient of the evidence strength loss, $\lambda_1$, is set to 0.005 (cross validated from {0.001, 0.005, 0.03, 0.05}). The coefficient of the $L_2$ regularizer, $\lambda_2$, is set to 0.05 (cross validated from {0.001, 0.005, 0.01, 0.03, 0.05, 0.08}). The $\lambda$ for anchor sample identification in equation 10 is set to 0.005 (cross validated from {0.001, 0.005, 0.03, 0.05}). We choose RBF as our kernel function with the length scale set to 1 (cross validated from {0.01, 0.1, 1})." }, { "heading": "EFFECTIVENESS OF KDE BASED UNCERTAINTY ANCHOR SAMPLE IDENTIFICATION", "text": "In this section, we conduct an additional experiment to evaluate the effectiveness of the KDE-based uncertainty anchor sample identification method.
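As context for these experiments, the sketch below shows one plausible form of KDE-based anchor identification. Equation 10 itself appears earlier in the paper; the thresholded-density reading here is an illustrative assumption based only on the settings above (the anchor threshold lam and RBF length scale take the cross-validated values listed), not the authors' exact formulation.

import numpy as np

def kde_anchor_samples(X_train, X_unlabeled, lam=0.005, length_scale=1.0):
    # Unlabeled points with low estimated density under the current training
    # data are taken as anchors for the OOD (vacuous) regions of the space.
    d2 = ((X_unlabeled[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    k = np.exp(-d2 / (2.0 * length_scale ** 2))    # RBF kernel k(x_u, x_t)
    density = k.mean(axis=1)                       # KDE estimate w.r.t. X_train
    return np.where(density < lam)[0]              # indices of anchor samples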
Uncertainty anchor sample identification is an integral component of the ADL model, which aims to guide the model to be uncertain in the OOD areas with respect to the current training data (instead of providing the final prediction). Therefore, other more advanced kernel functions/similarity measures that are specifically designed for high-dimensional data can be used for the same purpose without affecting the overall model. However, when choosing a specific technique, it is also important to consider both the quality of the data samples and efficiency, as fast identification of these data samples is critical for AL, which is usually performed in real time. Since the model is constantly changing as it continues to explore the data space, new uncertainty data samples need to be discovered in each AL iteration. We have conducted three additional experiments to demonstrate which technique can achieve such a balance.\n• We have compared KDE with randomly selected anchor samples from the unlabeled data and with not using any anchor samples in Figure 8(a). KDE clearly outperforms random selection, which in turn performs better than not using any anchor samples. We further confirm the positive result by evaluating the min-max similarity between the unlabeled and training data. If KDE is able to identify anchor samples from the desired OOD regions of the feature space (although the estimated density in that region may not be very accurate), the sampling process would be guided correctly and the min-max similarity would increase in the next AL iteration as a result. Figure 8(b) compares the min-max similarity of KDE with random selection. The result shows that with KDE, the model covers the unlabeled feature space much more efficiently as AL moves forward.\n• We have adopted the attention kernel as a more advanced distance metric to replace the RBF kernel in the proposed anchor sample identification component. The attention kernel is the major component in the matching network (Vinyals et al., 2016), where spatial invariance is ensured by a CNN and the dimensionality of the inputs is reduced through two correlated LSTM projections. However, the attention kernel (in our current implementation) is much slower to compute than KDE, especially when facing a very large unlabeled pool, as all candidate data samples need to be re-embedded in every iteration as the training/testing data change along with AL. Thus, if the improvement is not significant (see Figure 8(a)) and the efficiency becomes a bottleneck for a large unlabeled pool, the proposed KDE approach appears to be a good choice, as it provides a good balance between quality and efficiency, which is critical for AL.\n• We have investigated the impact of the characteristic length scale used in the RBF kernel on AL performance. Figure 8(c) shows that the ADL model performance is fairly robust to the length scale and only shows minor changes with different choices." }, { "heading": "ABLATION STUDY", "text": "We have conducted a detailed ablation study to clearly demonstrate the effectiveness of each major technical component:\n• Figure 9(a) compares the proposed sampling method with different sampling criteria: entropy, vacuity only, and dissonance only. The result confirms the effectiveness of the dynamically balanced sampling method. It is interesting to see that using vacuity alone performs quite well in the initial phase but only converges to a lower accuracy in the end.
In contrast, using dissonance is slow to start but converges to a higher accuracy. The entropy curve roughly stays in the middle of the above two curves.\n• The effectiveness of using the anchor samples has already been demonstrated in Figure 8(a) by comparing ADL with not using any anchor samples and with randomly selected anchor samples from the unlabeled data.\n• Figure 9(b) shows the results using different vacuity/dissonance ratios kept fixed throughout the AL process. The dynamically balanced sampling method clearly outperforms the fixed weighting. This also demonstrates the usefulness of the proposed entropy decomposition theory. Since the sampling goal of AL changes with the accumulation of labeled data, the optimal AL behavior can only be achieved by adaptively adjusting the importance of vacuity and dissonance in the sampling function.\nFinally, we have conducted batch-mode AL and reported the results in Figure 9(c). As can be seen, as the batch size increases, the performance decreases. This is expected, as there is no special strategy to diversify the samples chosen in the same batch. We will leave this to our future work, as the current model is not designed specifically for batch-mode AL.\nSAMPLE IMAGES CHOSEN BY ADL\nIn this section, we have visualized the image samples selected by ADL in the early and later stages of active learning to help better understand the role of vacuity and dissonance in data sampling. In order to better demonstrate the effectiveness of the vacuity measurement, we start active learning with 5 classes omitted from the initial training. Later we will see how high vacuity in the early stage of active learning helps to quickly identify missing classes. Figure 10 shows the samples with the highest vacuity selected by ADL in the first 30 AL iterations. The first four of them are from missing classes. This clearly demonstrates the effectiveness of using vacuity to explore the data space. As a result, data samples from the missing classes are quickly identified and labeled. The last sample is from class '3', whose examples have already been exposed to ADL. However, the writing style of this sample is very different from other instances of the same class, which results in a high vacuity.\nFigure 11 shows the samples with the highest dissonance selected by ADL in the last 100 AL iterations. By observing their predicted belief masses, we find that the high dissonance results from conflicting beliefs over multiple classes. For example, the first sample is confused between classes '4' and '6'; the second sample is confused among classes '5', '6', and '8'; the third sample is confused among classes '4', '7', and '9'; the fourth sample is confused between classes '4' and '6'; and the fifth sample is confused between classes '0' and '9'." }, { "heading": "SOURCE CODE", "text": "The code for this work can be found at https://drive.google.com/drive/folders/1imwnOahh8HtHK_g_HSTb4TCxZ7YG04ay?usp=sharing" } ]
2019
EVIDENCE-AWARE ENTROPY DECOMPOSITION FOR ACTIVE DEEP LEARNING
SP:cc73a630ce68477bde408cc08a92a4f98eb2c597
[ "The article on \"Noisy Machines\" addresses the issue of implementing deep neural network inference on a noisy hardware computing substrate, e.g. analog accelerators. This is an important topic because analog devices allow fast and energy efficient inference, which is crucial for inference at the edge. Because of their analog nature such devices suffer from noisy computations, and in this article the case of noisy weights is studied. ", "The manuscript illustrates how a noisy neural network can reduce the learning capacity. To mitigate this loss, the authors propose a method that combines the method of \"noise injection and \"knowledge distillation\". However, from a conceptual point of view, their contribution (i.e. (10) in Section 5,) is unclear to me. Specifically, the authors are not precise about how do they merge the aforementioned previous ideas and come up with the new loss function (10). " ]
The success of deep learning has brought forth a wave of interest in computer hardware design to better meet the high demands of neural network inference. In particular, analog computing hardware has been heavily motivated specifically for accelerating neural networks, based on either electronic, optical or photonic devices, which may well achieve lower power consumption than conventional digital electronics. However, these proposed analog accelerators suffer from the intrinsic noise generated by their physical components, which makes it challenging to achieve high accuracy on deep neural networks. Hence, for successful deployment on analog accelerators, it is essential to be able to train deep neural networks to be robust to random continuous noise in the network weights, which is a somewhat new challenge in machine learning. In this paper, we advance the understanding of noisy neural networks. We outline how a noisy neural network has reduced learning capacity as a result of loss of mutual information between its input and output. To combat this, we propose using knowledge distillation combined with noise injection during training to achieve more noise robust networks, which is demonstrated experimentally across different networks and datasets, including ImageNet. Our method achieves models with as much as ∼ 2× greater noise tolerance compared with the previous best attempts, which is a significant step towards making analog hardware practical for deep learning.
[]
[ { "authors": [ "Stefano Ambrogio", "Pritish Narayanan", "Hsinyu Tsai", "Robert M Shelby", "Irem Boybat", "Carmelo di Nolfo", "Severin Sidler", "Massimo Giordano", "Martina Bodini", "Nathan CP Farinha" ], "title": "Equivalent-accuracy accelerated neural-network training using analogue", "venue": "memory. Nature,", "year": 2018 }, { "authors": [ "Yoshua Bengio", "Nicholas Léonard", "Aaron Courville" ], "title": "Estimating or propagating gradients through stochastic neurons for conditional computation", "venue": "arXiv preprint arXiv:1308.3432,", "year": 2013 }, { "authors": [ "Jonathan Binas", "Daniel Neil", "Giacomo Indiveri", "Shih-Chii Liu", "Michael Pfeiffer" ], "title": "Precise deep neural network computation on imprecise low-power analog hardware", "venue": "arXiv preprint arXiv:1606.07786,", "year": 2016 }, { "authors": [ "Irem Boybat", "Manuel Le Gallo", "SR Nandakumar", "Timoleon Moraitis", "Thomas Parnell", "Tomas Tuma", "Bipin Rajendran", "Yusuf Leblebici", "Abu Sebastian", "Evangelos Eleftheriou" ], "title": "Neuromorphic computing with multi-memristive synapses", "venue": "Nature communications,", "year": 2018 }, { "authors": [ "Ben Feinberg", "Shibo Wang", "Engin Ipek" ], "title": "Making memristive neural network accelerators reliable", "venue": "IEEE International Symposium on High Performance Computer Architecture (HPCA),", "year": 2018 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning", "venue": "In international conference on machine learning,", "year": 2016 }, { "authors": [ "Micah Goldblum", "Liam Fowl", "Soheil Feizi", "Tom Goldstein" ], "title": "Adversarially robust distillation", "venue": "CoRR, abs/1905.09747,", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Geoffrey E. Hinton", "Simon Osindero", "Yee Whye Teh" ], "title": "Distilling the knowledge in a neural network", "venue": "Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "M Hu", "CE Graves", "C Li", "Y Li", "N Ge", "E Montgomery", "N Davila", "H Jiang", "RS Williams", "JJ Yang" ], "title": "Memristor-based analog computation and neural network classification with a dot product engine. 
Advanced materials (Deerfield", "venue": null, "year": 2018 }, { "authors": [ "Shubham Jain", "Abhronil Sengupta", "Kaushik Roy", "Anand Raghunathan" ], "title": "Rx-caffe: Framework for evaluating and training deep neural networks on resistive crossbars", "venue": "arXiv preprint arXiv:1809.00072,", "year": 2018 }, { "authors": [ "Vinay Joshi", "Manuel Le Gallo", "Irem Boybat", "Simon Haefeli", "Christophe Piveteau", "Martino Dazzi", "Bipin Rajendran", "Abu Sebastian", "Evangelos Eleftheriou" ], "title": "Accurate deep neural network inference using computational phase-change memory", "venue": null, "year": 1906 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Manuel Le Gallo", "Abu Sebastian", "Roland Mathis", "Matteo Manica", "Heiner Giefers", "Tomas Tuma", "Costas Bekas", "Alessandro Curioni", "Evangelos Eleftheriou" ], "title": "Mixed-precision in-memory computing", "venue": "Nature Electronics,", "year": 2018 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Yinan Li", "Fang Liu" ], "title": "Whiteout: Gaussian adaptive noise regularization in deep neural networks", "venue": "arXiv preprint arXiv:1612.01490,", "year": 2016 }, { "authors": [ "Xing Lin", "Yair Rivenson", "Nezih T Yardimci", "Muhammed Veli", "Yi Luo", "Mona Jarrahi", "Aydogan Ozcan" ], "title": "All-optical machine learning using diffractive deep neural networks", "venue": null, "year": 2018 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Sgdr: Stochastic gradient descent with warm restarts", "venue": "arXiv preprint arXiv:1608.03983,", "year": 2016 }, { "authors": [ "Paul Micaelli", "Amos Storkey" ], "title": "Zero-shot knowledge transfer via adversarial belief matching", "venue": "Proceedings of Machine Learning Research,", "year": 2019 }, { "authors": [ "Asit Mishra", "Debbie Marr" ], "title": "Apprentice: Using knowledge distillation techniques to improve low-precision network accuracy", "venue": "arXiv preprint arXiv:1711.05852,", "year": 2017 }, { "authors": [ "Leibin Ni", "Zichuan Liu", "Hao Yu", "Rajiv V Joshi" ], "title": "An energy-efficient digital reram-crossbar-based cnn with bitwise parallelism", "venue": "IEEE Journal on Exploratory Solid-State Computational Devices and Circuits,", "year": 2017 }, { "authors": [ "Hyeonwoo Noh", "Tackgeun You", "Jonghwan Mun", "Bohyung Han" ], "title": "Regularizing deep neural networks by noise: its interpretation and optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "N. Papernot", "P. McDaniel", "X. Wu", "S. Jha", "A. 
Swami" ], "title": "Distillation as a defense to adversarial perturbations against deep neural networks", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2016 }, { "authors": [ "Antonio Polino", "Razvan Pascanu", "Dan Alistarh" ], "title": "Model compression via distillation and quantization", "venue": "arXiv preprint arXiv:1802.05668,", "year": 2018 }, { "authors": [ "Adnan Siraj Rakin", "Zhezhi He", "Deliang Fan" ], "title": "Parametric noise injection: Trainable randomness to improve deep neural network robustness against adversarial attack", "venue": "arXiv preprint arXiv:1811.09310,", "year": 2018 }, { "authors": [ "Angad S Rekhi", "Brian Zimmer", "Nikola Nedovic", "Ningxi Liu", "Rangharajan Venkatesan", "Miaorong Wang", "Brucek Khailany", "William J Dally", "C Thomas Gray" ], "title": "Analog/mixed-signal hardware error modeling for deep learning inference", "venue": "In Proceedings of the 56th Annual Design Automation Conference", "year": 2019 }, { "authors": [ "Andrew Michael Saxe", "Yamini Bansal", "Joel Dapello", "Madhu Advani", "Artemy Kolchinsky", "Brendan Daniel Tracey", "David Daniel Cox" ], "title": "On the information bottleneck theory of deep", "venue": null, "year": 2018 }, { "authors": [ "Jonathan Schwarz", "Jelena Luketina", "Wojciech M Czarnecki", "Agnieszka Grabska-Barwinska", "Yee Whye Teh", "Razvan Pascanu", "Raia Hadsell" ], "title": "Progress & compress: A scalable framework for continual learning", "venue": "arXiv preprint arXiv:1805.06370,", "year": 2018 }, { "authors": [ "Ali Shafiee", "Anirban Nag", "Naveen Muralimanohar", "Rajeev Balasubramonian", "John Paul Strachan", "Miao Hu", "R Stanley Williams", "Vivek Srikumar" ], "title": "Isaac: A convolutional neural network accelerator with in-situ analog arithmetic in crossbars", "venue": "ACM SIGARCH Computer Architecture News,", "year": 2016 }, { "authors": [ "Yichen Shen", "Nicholas C Harris", "Scott Skirlo", "Mihika Prabhu", "Tom Baehr-Jones", "Michael Hochberg", "Xin Sun", "Shijie Zhao", "Hugo Larochelle", "Dirk Englund" ], "title": "Deep learning with coherent nanophotonic circuits", "venue": "Nature Photonics,", "year": 2017 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning", "venue": null, "year": 1929 }, { "authors": [ "Maryhelen Stevenson", "Rodney Winter", "Bernard Widrow" ], "title": "Sensitivity of feedforward neural networks to weight errors", "venue": "IEEE Transactions on Neural Networks,", "year": 1990 }, { "authors": [ "Vivienne Sze", "Yu-Hsin Chen", "Tien-Ju Yang", "Joel S Emer" ], "title": "Efficient processing of deep neural networks: A tutorial and survey", "venue": "Proceedings of the IEEE,", "year": 2017 }, { "authors": [ "Naftali Tishby", "Noga Zaslavsky" ], "title": "Deep learning and the information bottleneck principle", "venue": "In 2015 IEEE Information Theory Workshop (ITW),", "year": 2015 } ]
[ { "heading": null, "text": "The success of deep learning has brought forth a wave of interest in computer hardware design to better meet the high demands of neural network inference. In particular, analog computing hardware has been heavily motivated specifically for accelerating neural networks, based on either electronic, optical or photonic devices, which may well achieve lower power consumption than conventional digital electronics. However, these proposed analog accelerators suffer from the intrinsic noise generated by their physical components, which makes it challenging to achieve high accuracy on deep neural networks. Hence, for successful deployment on analog accelerators, it is essential to be able to train deep neural networks to be robust to random continuous noise in the network weights, which is a somewhat new challenge in machine learning. In this paper, we advance the understanding of noisy neural networks. We outline how a noisy neural network has reduced learning capacity as a result of loss of mutual information between its input and output. To combat this, we propose using knowledge distillation combined with noise injection during training to achieve more noise robust networks, which is demonstrated experimentally across different networks and datasets, including ImageNet. Our method achieves models with as much as ∼ 2× greater noise tolerance compared with the previous best attempts, which is a significant step towards making analog hardware practical for deep learning." }, { "heading": "1 INTRODUCTION", "text": "Deep neural networks (DNNs) have achieved unprecedented performance over a wide variety of tasks such as computer vision, speech recognition, and natural language processing. However, DNN inference is typically very demanding in terms of compute and memory resources. Consequently, larger models are often not well suited for large-scale deployment on edge devices, which typically have meagre performance and power budgets, especially battery powered mobile and IoT devices. To address these issues, the design of specialized hardware for DNN inference has drawn great interest, and is an extremely active area of research. To date, a plethora of techniques have been proposed for designing efficient neural network hardware (Sze et al., 2017).\nIn contrast to the current status quo of predominantly digital hardware, there is significant research interest in analog hardware for DNN inference. In this approach, digital values are represented by analog quantities such as electrical voltages or light pulses, and the computation itself (e.g., multiplication and addition) proceeds in the analog domain, before eventually being converted back to digital. Analog accelerators take advantage of particular efficiencies of analog computation in exchange for losing the bit-exact precision of digital. In other words, analog compute is cheap but somewhat imprecise. Analog computation has been demonstrated in the context of DNN inference in both electronic (Binas et al., 2016), photonic (Shen et al., 2017) and optical (Lin et al., 2018) systems. Analog accelerators promise to deliver at least two orders of magnitude better performance over a conventional digital processor for deep learning workloads in both speed (Shen et al., 2017) and energy efficiency (Ni et al., 2017). 
Electronic analog DNN accelerators are arguably the most mature technology and hence will be our focus in this work.\nThe most common approach to electronic analog DNN accelerator is in-memory computing, which typically uses non-volatile memory (NVM) crossbar arrays to encode the network weights as analog values. The NVM itself can be implemented with memristive devices, such as metal-oxide resistive random-access memory (ReRAM) (Hu et al., 2018) or phase-change memory (PCM) (Le Gallo et al., 2018; Boybat et al., 2018; Ambrogio et al., 2018). The matrix-vector operations computed during inference are then performed in parallel inside the crossbar array, operating on analog quantities for weights and activations. For example, addition of two quantities encoded as electrical currents can be achieved by simply connecting the two wires together, whereby the currents will add linearly according to Kirchhoff’s current law. In this case, there is almost zero latency or energy dissipation for this operation.\nSimilarly, multiplication with a weight can be achieved by programming the NVM cell conductance to the weight value, which is then used to convert an input activation encoded as a voltage into a scaled current, following Ohm’s law. Therefore, the analog approach promises significantly improved throughput and energy efficiency. However, the analog nature of the weights makes the compute noisy, which can limit inference accuracy. For example, a simple two-layer fully-connected network with a baseline accuracy of 91.7% on digital hardware, achieves only 76.7% when implemented on an analog photonic array (Shen et al., 2017). This kind of accuracy degradation is not acceptable for most deep learning applications. Therefore, the challenge of imprecise analog hardware motivates us to study and understand noisy neural networks, in order to maintain inference accuracy under noisy analog computation.\nThe question of how to effectively learn and compute with a noisy machine is a long-standing problem of interest in machine learning and computer science (Stevenson et al., 1990; Von Neumann, 1956). In this paper, we study noisy neural networks to understand their inference performance. We also demonstrate how to train a neural network with distillation and noise injection to make it more resilient to computation noise, enabling higher inference accuracy for models deployed on analog hardware. We present empirical results that demonstrate state-of-the-art noise tolerance on multiple datasets, including ImageNet.\nThe remainder of the paper is organized as follows. Section 2 gives an overview of related work. Section 3 outlines the problem statement. Section 4 presents a more formal analysis of noisy neural networks. Section 5 gives a distillation methodology for training noisy neural networks, with experimental results. Finally, Section 6 provides a brief discussion and Section 7 closes with concluding remarks." }, { "heading": "2 RELATED WORK", "text": "Previous work broadly falls under the following categories: studying the effect of analog computation noise, analysis of noise-injection for DNNs, and use of distillation in model training.\nAnalog Computation Noise Models In Rekhi et al. (2019), the noise due to analog computation is modeled as additive parameter noise with zero-mean Gaussian distribution. The variance of this Gaussian is a function of the effective number of bits of the output of an analog computation. Similarly, the authors in Joshi et al. 
(2019) also model analog computation noise as additive Gaussian noise on the parameters, where the variance is proportional to the range of values that their PCM device can represent. Some noise models presented have included a more detailed account of device-level interactions, such as voltage drop across the analog array (Jain et al., 2018; Feinberg et al., 2018), but these are beyond the scope of this paper. In this work, we consider an additive Gaussian noise model on the weights, similar to Rekhi et al. (2019); Joshi et al. (2019), and present a novel training method that outperforms the previous work in model noise resilience.\nNoise Injection for Neural Networks Several stochastic regularization techniques based on noise injection and dropout (Srivastava et al., 2014; Noh et al., 2017; Li & Liu, 2016) have been demonstrated to be highly effective at reducing overfitting. For generalized linear models, dropout and additive noise have been shown to be equivalent to adaptive $L_2$ regularization to first order (Wager et al., 2013). Training networks with Gaussian noise added to the weights or activations can also increase robustness to a variety of adversarial attacks (Rakin et al., 2018). Bayesian neural networks replace deterministic weights with distributions in order to optimize over the posterior distribution of the weights (Kingma & Welling, 2013). Many of these methods use noise injection at inference time to approximate the weight distribution; in Gal & Ghahramani (2016) a link between Gaussian processes and dropout is established in an effort to model the uncertainty of the output of a network. A theoretical analysis by Stevenson et al. (1990) has shown that for neural networks with adaptive linear neurons, the probability of error of a noisy neural network classifier with weight noise increases with the number of layers, but is largely independent of the number of weights per neuron or neurons per layer.\nDistillation in Training Knowledge distillation (Hinton et al., 2015) is a well-known technique in which the soft labels produced by a teacher model are used to train a student model, which typically has reduced capacity. Distillation has shown merit for improving model performance across a range of scenarios, including student models lacking access to portions of training data (Micaelli & Storkey, 2019), quantized low-precision networks (Polino et al., 2018; Mishra & Marr, 2017), protection against adversarial attacks (Papernot et al., 2016; Goldblum et al., 2019), and in avoiding catastrophic forgetting for multi-task learning (Schwarz et al., 2018). To the best of our knowledge, our work is the first to combine distillation with noise injection in training to enhance model noise robustness." }, { "heading": "3 PROBLEM STATEMENT", "text": "Without loss of generality, we model a general noisy machine after a simple memristive crossbar array, similar to Shafiee et al. (2016). Figure 1 illustrates how an arbitrary neural network layer, $l$, such as a typical $3 \times 3$ convolution, can be mapped to this hardware substrate by first flattening the weights into a single large 2D matrix, $W_l$, and then programming each element of this matrix into a memristive cell in the crossbar array, which provides the required conductances $G_l$ (the reciprocal of resistance) to perform analog multiplication following Ohm's law, $i_{out} = v_{in} G$. Note that a differential pair of NVM devices is typically used to represent a signed quantity in $G_l$.
Subsequently, input activations, $x_l$, converted into continuous voltages, $v(x_l)$, are streamed into the array rows from the left-hand side. The memristive devices connect rows with columns, where the row voltages are converted into currents scaled by the programmed conductance, $G$, to generate the currents $i(y_l)$, which are differential in order to represent both positive and negative quantities with unipolar signals. The currents from each memristive device essentially add up for free where they are connected in the columns, according to Kirchhoff's current law. Finally, the differential currents are converted to bipolar voltages, $v(y_l)$, which are then digitized before adding biases and performing batch normalization and ReLU operations, which are not shown in Figure 1.\nHowever, the analog inference hardware of Figure 1 is subject to real-world non-idealities, typically attributed to variations in: 1) manufacturing process, 2) supply voltage and 3) temperature, collectively referred to as PVT variation, all of which result in noise in the system. Below we discuss the two key components in terms of analog noise modeling.\nData Converters. Digital-to-analog converter (DAC) and analog-to-digital converter (ADC) circuits are designed to be robust to PVT variation, but in practice these effects do degrade the resolution (i.e., number of bits). Therefore, we consider the effective number of bits (ENOB), which is a lower bound on resolution in the presence of non-idealities. Hence, we use activation and weight quantization with ENOB data converters and no additional converter noise modeling.\nNVM cells. Due to their analog nature, memristive NVM cells have limited precision, due to the read and write circuitry (Joshi et al., 2019). In between write and read operations, their stored value is prone to drift over time. Long-term drift can be corrected with periodic refresh operations. At shorter timescales, time-varying noise may be encountered. For most of the experiments in this paper, we model generic NVM cell noise as an additive zero-mean i.i.d. Gaussian error term on the weights of the model in each particular layer, $\Delta W_l \sim \mathcal{N}(\Delta W_l; 0, \sigma_{N,l}^2 I)$. This simple model, described more concretely in Section 5, is similar to that used by Joshi et al. (2019), which was verified on real hardware. In addition, we also investigate spatially-varying and time-varying noise models in Section 5.2 (Table 1)." }, { "heading": "4 ANALYSIS OF NOISY NEURAL NETWORKS", "text": "" }, { "heading": "4.1 BIAS VARIANCE DECOMPOSITION FOR NOISY WEIGHTS", "text": "Naively deploying an off-the-shelf pretrained model on a noisy accelerator will yield poor accuracy for a fundamental reason. Consider a neural network $f(W; x)$ with weights $W$ that maps an input $x \in \mathbb{R}^n$ to an output $y \in \mathbb{R}$. In the framework of statistical learning, $x$ and $y$ are considered to be randomly distributed following a joint probability distribution $p(x, y)$. In a noisy neural network, the weights $W$ are also randomly distributed, with distribution $p(W)$. The expected Mean Squared Error (MSE) of this noisy neural network can be decomposed as\n$\mathbb{E}_{(x,y) \sim p(x,y),\, W \sim p(W)}[(f(W;x) - y)^2] = \mathbb{E}_{(x,y) \sim p(x,y),\, W \sim p(W)}[(f(W;x) - \mathbb{E}_{W \sim p(W)}[f(W;x)] + \mathbb{E}_{W \sim p(W)}[f(W;x)] - y)^2] = \mathbb{E}_{x \sim p(x)}[\mathbb{E}_{W \sim p(W)}[(f(W;x) - \mathbb{E}_{W \sim p(W)}[f(W;x)])^2]] + \mathbb{E}_{(x,y) \sim p(x,y)}[(\mathbb{E}_{W \sim p(W)}[f(W;x)] - y)^2]. \quad (1)$\nThe first term on the right-hand side of Equation 1 is a variance loss term due to randomness in the weights and is denoted as $l_{var}$. The second term is a squared bias loss term which we call $l_{bias}$.
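The Monte-Carlo estimates of these loss terms used in the paper are formalized in Equations 2-5 below; the sketch here shows schematically how such estimates can be computed. It is illustrative only: the model, inputs, and targets are hypothetical stand-ins, and for simplicity every floating-point parameter tensor is perturbed with a single noise scale, whereas the paper references the noise per layer.

import torch

def noisy_loss_terms(model, x, y, sigma, n=100):
    # Draw n noisy instances of the network (W_i = W + dW, dW ~ N(0, sigma^2 I)),
    # average their outputs to approximate E_W[f(W; x)], and estimate the
    # variance and squared-bias loss terms of Equation 1.
    clean = {k: v.clone() for k, v in model.state_dict().items()}
    outs = []
    with torch.no_grad():
        for _ in range(n):
            noisy = {k: v + sigma * torch.randn_like(v) if v.is_floating_point() else v
                     for k, v in clean.items()}
            model.load_state_dict(noisy)
            outs.append(model(x))
        model.load_state_dict(clean)              # restore the clean weights
    outs = torch.stack(outs)                      # [n, batch, ...]
    mean_out = outs.mean(dim=0)                   # estimate of E_W[f(W; x)]
    l_var = ((outs - mean_out) ** 2).mean()       # variance loss term
    l_bias = ((mean_out - y) ** 2).mean()         # squared bias loss term
    return l_var, l_bias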
However, typically a model is trained to minimize the empirical version of the expected loss $l_{pretrained} = \mathbb{E}_{(x,y) \sim p(x,y)}[(f(\mathbb{E}[W]; x) - y)^2]$. We assume that the noise is centered, such that the pretrained weights are equal to $\mathbb{E}[W]$. A pretrained model is therefore optimized for the wrong loss function when deployed on a noisy accelerator. To show this in a more concrete way, a baseline LeNet model (32 filters in the first convolutional layer, 64 filters in the second convolutional layer, and 1024 neurons in the fully-connected layer) (LeCun et al., 1998) is trained on the MNIST dataset to 99.19% accuracy and then exposed to Gaussian noise in its weights, so that numerical values of these loss terms can be estimated. The expected value of the network output $\mathbb{E}_W[f(W;x)]$ is estimated by averaging over outputs of different instances of the network for the same input $x$. We perform inference on $n = 100$ different instances of the network and estimate the loss terms as\n$\overline{f}(W;x) = \mathbb{E}_{W \sim p(W)}[f(W;x)] \simeq \frac{1}{n} \sum_{i=1}^{n} f(W_i; x), \quad (2)$\n$\hat{l}_{var} = \frac{1}{N} \sum_{j=1}^{N} \frac{1}{n} \sum_{i=1}^{n} (f(W_i; x_j) - \overline{f}(W; x_j))^2, \quad (3)$\n$\hat{l}_{bias} = \frac{1}{N} \sum_{j=1}^{N} (\overline{f}(W; x_j) - y_j)^2, \quad (4)$\n$\hat{l}_{pretrained} = \frac{1}{N} \sum_{j=1}^{N} (f(\mathbb{E}[W]; x_j) - y_j)^2. \quad (5)$\nThe above formulas are for a network with a scalar output. They can be easily extended to the vector output case by averaging over all outputs. In the LeNet example, we take the output of the softmax layer to calculate the squared losses. The noise is assumed i.i.d. Gaussian, centered around zero with a fixed SNR $\sigma_{W,l}^2/\sigma_{N,l}^2$ in each layer $l$. The numerical values of the above losses are estimated using the entire test dataset for different noise levels. Results are shown in Figure 2(a). $\hat{l}_{bias}$ is initially equal to $\hat{l}_{pretrained}$ and $\hat{l}_{var} = 0$ when there is no noise. However, as the noise level rises, they increase in magnitude and become much more important than $\hat{l}_{pretrained}$. $\hat{l}_{var}$ overtakes $\hat{l}_{bias}$ to become the predominant loss term in a noisy LeNet at $\sigma_N/\sigma_W \simeq 0.6$. It is useful to note that $l_{bias}$ increases with noise entirely due to nonlinearity in the network, which is ReLU in the case of LeNet. In a linear model, $l_{bias}$ would be equal to $l_{pretrained}$, as we would have $f(\mathbb{E}[W]; x) = \mathbb{E}[f(W; x)]$. A model trained in a conventional manner is thus not optimized for the real loss it is going to encounter on a noisy accelerator. Special retraining is required to improve its noise tolerance. In Figure 2(a), we show how the model accuracy degrades with a rising noise level for the baseline LeNet and its deeper and wider variants. The deeper network is obtained by stacking two more convolutional layers of width 16 in front of the baseline network, and the wider network is obtained by increasing the widths of each layer in the baseline to 128, 256, and 2048, respectively. Performance degradation due to noise is worse for the deeper variant and less severe for the wider one. A more detailed discussion of the effect of network architecture on performance under noise is offered in Section 4.2." }, { "heading": "4.2 LOSS OF INFORMATION IN A NOISY NEURAL NETWORK", "text": "Information theory offers useful tools to study noise in neural networks. Mutual information $I(X;Y)$ characterizes the amount of information obtained about a random variable $X$ by observing another random variable $Y$. The mutual information between $X$ and $Y$ can be related to Shannon entropy by\n$I(X;Y) = H(Y) - H(Y|X). \quad (6)$\nMutual information has been used to understand DNNs (Tishby & Zaslavsky, 2015; Saxe et al., 2018).
Treating a noisy neural network as a noisy information channel, we can show how information about the input to the neural network diminishes as it propagates through the noisy computation. In this subsection, $X$ is the input to the neural network and $Y$ is the output. Mutual information is estimated for the baseline LeNet model and its variants using Equation 6. When there is no noise, the term $H(Y|X)$ is zero, as $Y$ is deterministic once the input to the network $X$ is known; therefore $I(X;Y)$ is just $H(Y)$ in this case. The Shannon entropy $H(Y)$ can be estimated using a standard discrete binning approach (Saxe et al., 2018). In our experiment, $Y$ is the output of the softmax layer, which is a vector of length 10. The entropy $H(Y)$ is estimated using four bins per coordinate of $Y$ by\n$\hat{H}(Y) = -\sum_{i=1}^{N} p_i \log(p_i), \quad (7)$\nwhere $p_i$ is the probability that an output falls in bin $i$. When noise is introduced to the weights, the conditional entropy $H(Y|X)$ is estimated by fixing the input $X = x$ and performing multiple noisy inferences to calculate $\hat{H}(Y|X=x)$ with the above binning approach. $\hat{H}(Y|X=x)$ is then averaged over different inputs $x$ to obtain $\hat{H}(Y|X)$. This estimate is performed for LeNet and its variants with different noise levels. Results are shown in Figure 2(b). The values are normalized to the estimate of $I(X;Y)$ at zero noise. Mutual information between the input and the output decays towards zero with increasing noise in the network weights. Furthermore, mutual information in a deeper and narrower network decays faster than in a shallower and wider network. Intuitively, information from the input undergoes more noisy compute when more layers are added to the network, while a wider network has more redundant paths for the information to flow through, thus better preserving it. An information-theoretic bound on mutual information decay as a function of network depth and width in a noisy neural network will be treated in our follow-up work. Overall, noise is damaging the learning capacity of the network. When the output of the model contains no information from its input, the network loses all ability to learn. For a noise level that is not so extreme, a significant amount of mutual information remains, which indicates that useful learning is possible even with a noisy model." }, { "heading": "5 COMBINING NOISE INJECTION AND KNOWLEDGE DISTILLATION", "text": "" }, { "heading": "5.1 METHODOLOGY", "text": "Noise injection during training is one way of exposing network training to a more realistic loss, as randomly perturbing weights simulates what happens in a real noisy analog device and forces the network to adapt to noise during training. Noise injection only happens during forward propagation in training, which can be considered as an approximation for calculating weight gradients with a straight-through estimator (STE) (Bengio et al., 2013). At each forward pass, the weight $W_l$ of layer $l$ is drawn from an i.i.d. Gaussian distribution $\mathcal{N}(W_l; W_l^0, \sigma_{N,l}^2 I)$. The noise is referenced to the range of representable weights $W_{max}^l - W_{min}^l$ in that particular layer,\n$\sigma_{N,l} = \eta (W_{max}^l - W_{min}^l), \quad (8)$\nwhere $\eta$ is a coefficient characterizing the noise level. During back propagation, gradients are calculated with clean weights $W_l^0$, and only $W_l^0$ gets updated by applying the gradient. $W_{max}^l$ and $W_{min}^l$ are hyperparameters which can be chosen with information on the weight distributions.
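A minimal sketch of this noise injection step is given below, assuming the fixed-range variant adopted later in this section (weights clipped to a fixed range and a fixed noise reference); the function name and calling convention are illustrative.

import torch

def noisy_forward_weight(w, w_min, w_max, eta):
    # Equation 8: noise std referenced to the fixed representable weight range.
    # The noise sample is constant w.r.t. w, so gradients flow straight through
    # to the clean weights, matching the STE behaviour described above.
    w_clean = w.clamp(w_min, w_max)
    sigma = eta * (w_max - w_min)
    noise = (sigma * torch.randn_like(w_clean)).detach()
    return w_clean + noise                 # used only in forward propagation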
Knowledge distillation was introduced by Hinton et al. (2015) as a way of training a smaller student model using a larger model as the teacher. For an input to the neural network $x$, the teacher model generates logits $z_i^T$, which are then turned into a probability vector by the softmax layer\n$q_i^T = \sigma(z_i^T; T) = \frac{\exp(z_i^T/T)}{\sum_j \exp(z_j^T/T)}. \quad (9)$\nThe temperature, $T$, controls the softness of the probabilities. The teacher network can generate softer labels for the student network by raising the temperature $T$. We propose to use a noise-free clean model as the teacher to train a noisy student network. The student network is trained with noise injection to match a mix of hard targets and soft targets generated by the teacher. Logits generated by the student network are denoted as $z_i^S$. A loss function with distillation for the student model can be written as\n$L(x; W^S; T) = H(\sigma(z_i^S; T=1), y_{true}) + \alpha T^2 H(\sigma(z_i^S; T), q_i^T) + R(W_0^S). \quad (10)$\nHere $H$ is the cross-entropy loss, $y_{true}$ is the one-hot encoding of the ground truth, and $R$ is the $L_2$-regularization term. The parameter $\alpha$ balances the relative strength between hard and soft targets. We follow the original implementation in Hinton et al. (2015), which includes a $T^2$ factor in front of the soft target loss to balance gradients generated from the different targets. The student model is then trained with Gaussian noise injection using this distillation loss function. Vanilla noise injection training corresponds to the case where $\alpha = 0$. If the range of weights is not constrained and the noise reference is fixed, the network soon learns that the most effective way to decrease the loss is to increase the amplitude of the weights, which increases the effective SNR. There are two possible ways to deal with this problem. Firstly, the noise reference could be re-calculated after each weight update, thus updating the noise power. Secondly, we can constrain the range of weights by clipping them to the range $[W_{min}^l, W_{max}^l]$ and use a fixed noise model during training. We found that, in general, the second method of fixing the range of weights and training for a specific noise level yields more stable training and better results. Therefore, this is the training method that we adopt in this paper. A schematic of our proposed method is shown in Figure 5 of the Appendix.\nDuring training, a clean model is first trained to its full accuracy, and then weight clipping is applied to clip weights to the range $[W_{min}^l, W_{max}^l]$. The specific range is chosen based on statistics of the weights. Fine-tuning is then applied to bring the weight-clipped clean model back to full accuracy. This model is then used as the teacher to generate soft targets. The noisy student network is initialized with the same weights as the teacher. This can be considered as a warm start to accelerate retraining. As we discussed earlier, the range of weights is fixed during training, and the noise injected into the student model is referenced to this range.\nOur method also supports training for low-precision noisy models. Quantization reflects the finite-precision conversion between analog and digital domains in an analog accelerator. Weights are uniformly quantized in the range $[W_{min}^l, W_{max}^l]$ before being exposed to noise. In a given layer, the input activations are quantized before being multiplied by noisy weights. The output results of the matrix multiplication are also quantized before adding biases and performing batch normalization, which are considered to happen in the digital domain. When training with quantization, the straight-through estimator is assumed when calculating gradients with back propagation.
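To make the objective in Equation 10 concrete, below is a minimal sketch of the distillation loss, with the $R(W_0^S)$ term assumed to be handled by the optimizer's weight decay. KL divergence is used for the soft term, which differs from the cross-entropy $H(\sigma(z^S;T), q^T)$ only by the teacher's entropy, a constant with respect to the student, so the gradients match. Names are illustrative, not the authors' code.

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, target, alpha=1.0, T=6.0):
    hard = F.cross_entropy(student_logits, target)     # hard-target term, T = 1
    soft = F.kl_div(                                   # soft-target term at temperature T
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    )
    return hard + alpha * (T ** 2) * soft              # T^2 factor as in Equation 10

In training, student_logits would come from a forward pass with injected weight noise, while teacher_logits come from the clean, weight-clipped teacher.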
" }, { "heading": "5.2 EXPERIMENTAL RESULTS", "text": "In order to establish the effectiveness of our proposed method, experiments are performed for different networks and datasets. In this section we mainly focus on bigger datasets and models, while results on LeNet and its variants, with some discussion of the effect of network architecture, can be found in Figure 6 of the Appendix. ResNets are a family of convolutional neural networks proposed by He et al. (2016), which have gained great popularity in computer vision applications. In fact, many other deep neural networks also use ResNet-like cells as their building blocks. ResNets are often used as industry-standard benchmark models to test hardware performance. The first set of experiments we present consists of a ResNet-32 model trained on the CIFAR10 dataset. In order to compare fairly with the previous work, we follow the implementation in Joshi et al. (2019) and consider a ResNet-32(v1) model on CIFAR10 with weight clipping in the range $[-2\sigma_{W,l}, 2\sigma_{W,l}]$. The teacher model is trained to an accuracy of 93.845% using stochastic gradient descent with cosine learning rate decay (Loshchilov & Hutter, 2016) and an initial learning rate of 0.1 (batch size is 128). The network is then retrained with noise injection to make it robust against noise. Retraining takes place for 150 epochs; the initial learning rate is 0.01 and decays with the same cosine profile. We performed two sets of retraining, one without distillation in the loss ($\alpha = 0$), and another with distillation loss ($\alpha = 1$). Everything else was kept equal in these retraining runs. Five different noise levels are tested with five different values of $\eta$: {0.02, 0.04, 0.057, 0.073, 0.11}. Results are shown in Figure 3(a). Every retraining run was performed twice, and inference was performed 50 times on the test dataset for each model, to generate statistically significant results. The temperature was set to $T = 6$ for the runs with distillation. We found that an intermediate temperature between 2 and 10 produces better results. The pretrained model without any retraining performs very poorly at inference time when noise is present. Retraining with Gaussian noise injection can effectively recover some accuracy, which we confirm as reported in Joshi et al. (2019). Our method of combining noise injection with knowledge distillation from the clean model further improves noise resilience by about 40% in terms of $\eta$, which is an improvement of almost $2\times$ in terms of noise power $\sigma_N^2$.\nThe actual noise level in a given device can only be estimated, and will vary from one device to another and even fluctuate depending on the physical environment in which it operates (Section 3). Therefore, it is important that any method to enhance noise robustness can tolerate a range of noise levels. Our method offers improved noise robustness, even when the actual noise at inference time is different from that injected at training time. It is shown in Figure 3(b) that the model obtained from distillation is more accurate and less sensitive to noise level differences between training and inference time. This holds for a range of different inference noise levels around the training level. In the previous experiments, we assume a fixed noise level parameterized by $\eta$.
On real analog hardware, there could be additional non-idealities, such as variation in the noise level due to temperature fluctuation and a nonuniform noise profile on different NVM cells due to statistical variation in the manufacturing process. We have conducted additional experiments to account for these effects.\nResults from the experiments are shown in Table 1. Temporal fluctuation represents noise level variation over time. The noise level $\eta$ is randomly sampled from $\mathcal{N}(\eta; \eta_0, \sigma_\eta^2)$ for each inference batch. A noise temporal fluctuation level of 10% means that $\sigma_\eta = 0.1 \eta_0$. Spatial noise level fluctuation introduces nonuniform diagonal terms in the noise covariance matrix. More concretely, each weight noise in our previous model is multiplied by a scale factor $\lambda_w$, with $\lambda_w$ drawn from a Gaussian distribution $\mathcal{N}(\lambda_w; 1, \sigma_\lambda^2)$. A noise spatial fluctuation level of 10% means that $\sigma_\lambda = 0.1$. The scale factors are generated and then fixed when the network is instantiated; therefore, the noise during network inference is non-i.i.d. in this case. Results from our experiments show that there is no significant deviation when a combination of these non-ideal noise effects is taken into account.\nThe performance of our training method is also validated with quantization. A ResNet-18(v2) model is trained with quantization to 4-bit precision (ENOB) for both weights and activations. This corresponds to 4-bit precision conversions between digital and analog domains. A subset of training data is passed through the full-precision model to calibrate the range for quantization; we choose the 0.1% and 99.9% percentiles as $q_{min}$ and $q_{max}$ for the quantizer. This range of quantization is fixed throughout training. The quantized model achieves an accuracy of 92.91% on the test dataset when no noise is present. The model is then retrained for noise robustness. The noise level is referenced to the range of quantization of the weights in one particular layer, such that $W_{min}^l = q_{min,l}$ and $W_{max}^l = q_{max,l}$. Results are shown for the same set of $\eta$ values in Figure 4(a). In the distillation retraining runs, the full-precision clean model with an accuracy of 93.87% is used as the teacher, and the temperature is set to $T = 6$. Due to the extra loss in precision imposed by aggressive quantization, the accuracy of the pretrained quantized model drops sharply with noise. At $\eta = 0.057$, the model accuracy drops to 87.5% without retraining, and further down to 80.9% at $\eta = 0.073$. Even retraining with noise injection struggles, and the model retrained with only noise injection achieves an accuracy of 90.34% at $\eta = 0.073$. Our method of combining noise injection and distillation stands out by keeping the accuracy loss within 1% of the baseline up to a noise level of $\eta \simeq 0.07$.\nOne interesting aspect of using the distillation loss during retraining with noise can be seen in Figure 4(b), which shows the evolution of model accuracy on the test dataset. When no distillation loss is used, the model suffers an accuracy drop of around 2.08% (difference between blue and orange curves) when tested with noise. The drop (difference between green and red curves) is significantly reduced to around 0.6% when the distillation loss is used. This observation indicates that training with distillation favors solutions that are less sensitive to noise.
The final model obtained with distillation is actually slightly worse when there is no noise at inference time, but becomes superior when noise is present.\nResults on the ImageNet dataset for a ResNet-50(v1) network are shown in Table 2 to demonstrate that our proposed approach scales to a large-scale dataset and a deep model. A ResNet-50 model is first trained to an accuracy of 74.942% with weight clipping in the range $[-2\sigma_{W,l}, 2\sigma_{W,l}]$. This range is fixed as the reference for the added noise. For ResNet-50 on ImageNet, only three different noise levels are explored, and the accuracy degrades very quickly beyond the noise level $\eta = 0.06$, as the model and the task are considerably more complex. Retraining runs for 30 epochs with an initial learning rate of 0.001 and cosine learning rate decay, with a batch size of 32. For distillation, we used $\alpha = 1$ and $T = 6$, as in the previous experiments. Results are collected for two independent training runs in each setting and 50 inference runs over the entire test dataset. The findings confirm that training with distillation and noise injection consistently delivers more noise-robust models. The accuracy uplift benefit also markedly increases with noise." }, { "heading": "6 DISCUSSION", "text": "Effects of distillation Knowledge distillation is a proven technique to transfer knowledge from a larger teacher model to a smaller, lower-capacity student model. This paper shows, for the first time, that distillation is also an effective way to transfer knowledge between a clean model and its noisy counterpart, with the novel approach of combining distillation with noise injection during training. We give some intuition for understanding this effect with the help of Section 4.2: a noisy neural network can be viewed as a model with reduced learning capacity by the loss-of-mutual-information argument. Distillation therefore acts to help reduce this capacity gap.\nIn our experiments, distillation shows great benefit in helping the network converge to a good solution, even with a high level of noise injected in the forward propagation step. Here, we attempt to explain this effect by the reduced sensitivity of the distillation loss. An influential work by Papernot et al. (2016) shows that distillation can be used to reduce the model sensitivity with respect to input perturbations, thus defending against some adversarial attacks. We argue that distillation can achieve a similar effect for the weights of the network. Taking the derivative of the $i$-th output of the student network $q_i^S$ at temperature $T$ with respect to a weight $w$ yields\n$\frac{\partial q_i^S}{\partial w} = \frac{1}{T} \frac{\exp(z_i/T)}{\left( \sum_j \exp(z_j/T) \right)^2} \sum_j \exp(z_j/T) \left( \frac{\partial z_i}{\partial w} - \frac{\partial z_j}{\partial w} \right). \quad (11)$\nThe $1/T$ scaling makes the output less sensitive to weight perturbation at higher temperature, thus potentially stabilizing the training when noise is injected into the weights during forward propagation. We plan to work on a more formal analysis of this argument in our future work.\nHardware Performance Benefits The improvements in noise tolerance of neural networks demonstrated in this work have a potential impact on the design of practical analog hardware accelerators for neural network inference. Increased robustness to noisy computation at the model training level potentially means that the specification of the analog hardware can be relaxed. In turn, this can make it easier to achieve the hardware specification, or even allow optimizations to further reduce the energy consumption.
An in-depth discussion of the trade-off between compute noise performance and hardware energy dissipation is beyond the scope of this paper, but we refer the interested reader to Rekhi et al. (2019) for more details. In summary, we believe that machine learning research will be a key enabler for practical analog hardware accelerators." }, { "heading": "7 CONCLUSION", "text": "Analog hardware holds the potential to significantly reduce the latency and energy consumption of neural network inference. However, analog hardware is imprecise and introduces noise during computation that limits accuracy in practice. This paper explored the training of noisy neural networks, which suffer from reduced capacity leading to accuracy loss. We propose a training methodology that trains neural networks via distillation and noise injection to increase the accuracy of models under noisy computation. Experimental results across a range of models and datasets, including ImageNet, demonstrate that this approach can almost double the network noise tolerance compared with the previous best reported values, without any changes to the model itself beyond the training method. With these improvements in the accuracy of noisy neural networks, we hope to enable the implementation of analog inference hardware in the near future." } ]
2019
NOISY MACHINES: UNDERSTANDING NOISY NEURAL NETWORKS AND ENHANCING ROBUSTNESS TO ANALOG HARDWARE ERRORS USING DISTILLATION
SP:16cb7d0da739f1e6a72efb9b18399d2d8b69f540
[ "This paper proposes to utilize Neural ODEs (NODEs) and the Level Set Method (LSM) for the task of image segmentation. The argument is that the NODE can be used to learn the force function in an LSM and solve the contour evolution process. The authors propose two architectures and demonstrate promising performance on a few image segmentation benchmarks. ", "This paper proposes to apply the Neural ODE framework (Chen et al 2018) for image segmentation. The method relies on contour delineation through Level Sets. Since contour estimation requires to solve an ODE, this naturally allows to apply the work presented in (Chen et al 2018). The method is here applied in two segmentation tasks: kidney segmentation and salient object detection." ]
We propose a novel approach for image segmentation that combines Neural Ordinary Differential Equations (NODEs) and the Level Set method. Our approach parametrizes the evolution of an initial contour with a NODE that implicitly learns from data a forcing function describing the evolution. In cases where an initial contour is not available, or to alleviate the need for careful choice or design of contour embedding functions, we propose using NODEs to directly evolve the embedding of an input image into a pixel-wise dense semantic label. We evaluate our methods on kidney segmentation (KiTS19) and on salient object detection (PASCAL-S, ECSSD and HKU-IS). In addition to improving initial contours provided by deep learning models while using a fraction of their number of parameters, our approach achieves Fβ scores that are higher than those of several state-of-the-art deep learning algorithms.
[]
[ { "authors": [ "David Adalsteinsson", "James A Sethian" ], "title": "A fast level set method for propagating interfaces", "venue": "Journal of computational physics,", "year": 1995 }, { "authors": [ "Thomas Brox", "Andrés Bruhn", "Joachim Weickert" ], "title": "Variational motion segmentation with level sets", "venue": "In European Conference on Computer Vision,", "year": 2006 }, { "authors": [ "Tony F Chan", "Luminita A Vese" ], "title": "Active contours without edges", "venue": "IEEE Transactions on image processing,", "year": 2001 }, { "authors": [ "Liang-Chieh Chen", "George Papandreou", "Florian Schroff", "Hartwig Adam" ], "title": "Rethinking atrous convolution for semantic image segmentation", "venue": "arXiv preprint arXiv:1706.05587,", "year": 2017 }, { "authors": [ "Tian Qi Chen", "Yulia Rubanova", "Jesse Bettencourt", "David K Duvenaud" ], "title": "Neural ordinary differential equations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Ming-Ming Cheng", "Niloy J Mitra", "Xiaolei Huang", "Philip HS Torr", "Shi-Min Hu" ], "title": "Global contrast based salient region detection", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2014 }, { "authors": [ "Emilien Dupont", "Arnaud Doucet", "Yee Whye Teh" ], "title": "Augmented neural odes", "venue": "arXiv preprint arXiv:1904.01681,", "year": 2019 }, { "authors": [ "Mark Everingham", "SM Ali Eslami", "Luc Van Gool", "Christopher KI Williams", "John Winn", "Andrew Zisserman" ], "title": "The pascal visual object classes challenge: A retrospective", "venue": "International journal of computer vision,", "year": 2015 }, { "authors": [ "Amir Gholami", "Kurt Keutzer", "George Biros" ], "title": "Anode: Unconditionally accurate memory-efficient gradients for neural odes", "venue": "arXiv preprint arXiv:1902.10298,", "year": 2019 }, { "authors": [ "Daniel Hartmann", "Matthias Meinke", "Wolfgang Schröder" ], "title": "The constrained reinitialization equation for level set methods", "venue": "Journal of Computational Physics,", "year": 2010 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Nicholas Heller", "Niranjan Sathianathen", "Arveen Kalapara", "Edward Walczak", "Keenan Moore", "Heather Kaluzniak", "Joel Rosenberg", "Paul Blake", "Zachary Rengel", "Makinna Oestreich" ], "title": "The kits19 challenge data: 300 kidney tumor cases with clinical context, ct semantic segmentations, and surgical outcomes", "venue": "arXiv preprint arXiv:1904.00445,", "year": 2019 }, { "authors": [ "Jie Hu", "Li Shen", "Gang Sun" ], "title": "Squeeze-and-excitation networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Ping Hu", "Bing Shuai", "Jun Liu", "Gang Wang" ], "title": "Deep level sets for salient object detection", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint 
arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Durk P Kingma", "Prafulla Dhariwal" ], "title": "Glow: Generative flow with invertible 1x1 convolutions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Gayoung Lee", "Yu-Wing Tai", "Junmo Kim" ], "title": "Deep saliency with encoded low level distance map and high level features", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Guanbin Li", "Yizhou Yu" ], "title": "Visual saliency based on multiscale deep features", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Xi Li", "Liming Zhao", "Lina Wei", "Ming-Hsuan Yang", "Fei Wu", "Yueting Zhuang", "Haibin Ling", "Jingdong Wang" ], "title": "Deepsaliency: Multi-task deep neural network model for salient object detection", "venue": "IEEE Transactions on Image Processing,", "year": 2016 }, { "authors": [ "Yin Li", "Xiaodi Hou", "Christof Koch", "James M Rehg", "Alan L Yuille" ], "title": "The secrets of salient object segmentation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2014 }, { "authors": [ "Ran Margolin", "Lihi Zelnik-Manor", "Ayellet Tal" ], "title": "How to evaluate foreground maps", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2014 }, { "authors": [ "Tan M Nguyen", "Animesh Garg", "Richard G Baraniuk", "Anima Anandkumar" ], "title": "Infocnf: An efficient conditional continuous normalizing flow with adaptive solvers", "venue": "Workshop on Invertible Neural Nets and Normalizing Flows,", "year": 2019 }, { "authors": [ "Stanley Osher", "James A Sethian" ], "title": "Fronts propagating with curvature-dependent speed: algorithms based on hamilton-jacobi formulations", "venue": "Journal of computational physics,", "year": 1988 }, { "authors": [ "Nikos Paragios", "Rachid Deriche" ], "title": "Geodesic active contours and level sets for the detection and tracking of moving objects", "venue": "IEEE Transactions on pattern analysis and machine intelligence,", "year": 2000 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in pytorch", "venue": null, "year": 2017 }, { "authors": [ "Ryan Prenger", "Rafael Valle", "Bryan Catanzaro" ], "title": "Waveglow: A flow-based generative network for speech synthesis", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2019 }, { "authors": [ "Olaf Ronneberger", "Philipp Fischer", "Thomas Brox" ], "title": "U-net: Convolutional networks for biomedical image segmentation", "venue": "In International Conference on Medical image computing and computerassisted intervention,", "year": 2015 }, { "authors": [ "Mikael Rousson", "Nikos Paragios" ], "title": "Shape priors for level set representations", "venue": "In European Conference on Computer Vision,", "year": 2002 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Mark Sussman", "Peter Smereka", "Stanley Osher" ], "title": "A level set approach for computing solutions to incompressible two-phase flow", "venue": "Journal of 
Computational physics,", "year": 1994 }, { "authors": [ "Christian Szegedy", "Wei Liu", "Yangqing Jia", "Pierre Sermanet", "Scott Reed", "Dragomir Anguelov", "Dumitru Erhan", "Vincent Vanhoucke", "Andrew Rabinovich" ], "title": "Going deeper with convolutions", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Andy Tsai", "Anthony Yezzi", "William Wells III", "Clare Tempany", "Dewey Tucker", "Ayres Fan", "W Eric Grimson", "Alan S Willsky" ], "title": "A shape-based approach to the segmentation of medical imagery using level", "venue": null, "year": 2003 }, { "authors": [ "Sirion Vittayakorn", "James Hays" ], "title": "Quality assessment for crowdsourced object annotations", "venue": "In BMVC, pp", "year": 2011 }, { "authors": [ "J Wang", "H Jiang", "Z Yuan", "MM Cheng", "X Hu", "N Zheng", "SO Detection" ], "title": "A discriminative regional feature integration approach", "venue": "IEEE Int. J. Comput. Vis,", "year": 2017 }, { "authors": [ "Lijun Wang", "Huchuan Lu", "Xiang Ruan", "Ming-Hsuan Yang" ], "title": "Deep networks for saliency detection via local estimation and global search", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2015 }, { "authors": [ "Qiong Yan", "Li Xu", "Jianping Shi", "Jiaya Jia" ], "title": "Hierarchical saliency detection", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2013 }, { "authors": [ "Matthew D Zeiler", "Rob Fergus" ], "title": "Visualizing and understanding convolutional networks", "venue": "In European conference on computer vision,", "year": 2014 }, { "authors": [ "Lishi Zhang", "Chenghan Fu", "Jia Li" ], "title": "Collaborative annotation of semantic objects in images with multi-granularity supervisions", "venue": "ACM Multimedia Conference on Multimedia Conference,", "year": 2018 }, { "authors": [ "Hong-Kai Zhao", "Tony Chan", "Barry Merriman", "Stanley Osher" ], "title": "A variational level set approach to multiphase motion", "venue": "Journal of computational physics,", "year": 1996 }, { "authors": [ "Rui Zhao", "Wanli Ouyang", "Hongsheng Li", "Xiaogang Wang" ], "title": "Saliency detection by multi-context deep learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Image segmentation is the task of delineating pixels belonging to semantic labels. The ability to automatically segment objects is important because accurate labeling is expensive and hard (Vittayakorn & Hays, 2011; Zhang et al., 2018). Automatic image segmentation can have large impact in many domains, e.g. obstacle avoidance in autonomous driving and treatment planning in medical imaging.\nAccurate classification of pixels in close proximity to inter-class boundaries remains a challenging task in image segmentation. Object boundaries can have high curvature contours or weak pixel intensity that complicate separating the object from surrounding ones. In deep CNNs (Simonyan & Zisserman, 2014; Zeiler & Fergus, 2014; Szegedy et al., 2015; He et al., 2016; Chen et al., 2017), the object-of-interest and surrounding competing objects can provide equal context to a receptive field of a boundary pixel, which can make accurate classification difficult. Humans also find it difficult to accurately label pixels near object boundaries.\nLevel Set methods (Zhao et al., 1996; Brox et al., 2006) and Active Shapes (Paragios & Deriche, 2000; Chan & Vese, 2001) have been proposed to incorporate shape and image priors to mitigate boundary ambiguities (Tsai et al., 2003; Rousson & Paragios, 2002). The Level Set method for image segmentation evolves an initial contour of an object-of-interest along the normal direction with a forcing function. A contour is represented by an embedding function, typically a signed distance function, and its evolution amounts to solving a differential equation (Osher & Sethian, 1988).\nIn this work, we extend the formulation of the level set method. Inspired by the recent progress in Neural Ordinary Diferential Equations (NODEs) (Chen et al., 2018; Dupont et al., 2019; Gholami et al., 2019), we propose to use NODEs to solve the level set formulation of the contour evolution, thus learning the forcing function in an end-to-end data driven manner. Unlike earlier attempts in combining the level set method with CNNs, we benefit from NODEs parametrization of the derivative of the contour because it allows us to incorporate external constraints that guide the contour evolution, e.g. by adding a regularization penalty to the curvature of the front or exploiting images at the evolving front by extracting appearance constraints in a non-supervised way.\nFinally, similar to experiments in (Chen et al., 2018), to alleviate the need for careful choice or design of contour embedding functions, we propose a NODE-based method that evolves an image embedding into a dense per-pixel semantic label space.\nTo the best of our knowledge, this work is the first to apply Neural ODEs to real world problems. We validate our methods on two 2D segmentation tasks: kidney segmentation in transversal slices of CT scans and salient object segmentation. Given an initial estimate of kidney via existing algorithms, our method effectively evolves the initial estimates and achieves improved kidney segmentation, as we show in Figure 1. 
On real-life salient objects, in addition to contour evolution, we use our method to directly evolve the embedding of an input image into a pixel-wise dense semantic label.\nFollowing (Hu et al., 2017), we compare against the results in (Wang et al., 2017; Li et al., 2016; Li & Yu, 2015; Zhao et al., 2015; Lee et al., 2016; Wang et al., 2015; Hu et al., 2017) and achieve ω-Fβ scores, PASCAL-S 0.668 and ECSSD 0.768, that are higher than those of several state-of-the-art algorithms. Our results suggest the potential of utilizing NODEs for solving the contour evolution of level set methods or the direct evolution of image embeddings into segmentation maps. We hope our findings will inspire future research in using NODEs for semantic segmentation tasks. We foresee that our method would allow for intervention on intermediate states of the solution of the ODE, allowing for injection of shape priors or other regularizing constraints.\nIn summary, our contributions are:\n\n• We propose to use NODEs to solve the level set formulation of the contour evolution. • We propose using NODEs to learn the forcing function in an end-to-end data-driven way. • We show NODEs can also evolve image embeddings directly into dense per-pixel semantic label spaces, which may alleviate the need for careful choice or design of contour embedding functions." }, { "heading": "2 METHODS", "text": "Suppose $I$ is a 2D image, $S$ is the contour of an object we want to segment, and $\phi$ is a contour embedding function, defined as a distance map, such that $S = \{(x, y)\,|\,\phi(x, y) = 0\}$. We assume an initial but rough contour of the object is given by a human operator or by an existing algorithm. A level set segmentation (Osher & Sethian, 1988) solves a differential equation to evolve a contour along its normal direction with a speed function $F$ as:\n$$\frac{d\phi}{dt} = |\nabla\phi|\, F \quad \text{for } t \in [0, 1], \qquad (1)$$\nwhere the initial value $\phi_0(x, y)$ is defined as the signed Euclidean distance from $(x, y)$ to the closest point on the initial contour $S_0$. The speed function $F$ is often modelled as a function of the target image $I$, the shape statistics of the object contour (derived from training shapes), or a regularizing curvature term $\nabla \cdot \frac{\nabla\phi}{\|\nabla\phi\|}$.\nIn Neural ODEs, we parametrize the derivative of the hidden state $h$ using a neural network $f_\theta$ parametrized by $\theta$:\n$$\frac{dh}{dt} = f_\theta(h, t). \qquad (2)$$\nThe relationship between Eq. 1 and Eq. 2 is immediate. In the next section, we propose two approaches that adapt NODEs to the level set method for image segmentation." }, { "heading": "2.1 CONTOUR EVOLUTION WITH NODES", "text": "We propose to solve a more general form of Eq. 1 to evolve an initial contour estimate $\hat\phi$ for image segmentation. We define the state of the NODE to be $\hat\phi$ augmented with the input image's embedding, $h$. We then advance the augmented state, $\gamma = (\hat\phi, h)$, using NODEs, which can be interpreted as estimating the speed function $F$ described in Eq. 1. Mathematically,\n$$\gamma = (\hat\phi, h), \qquad \frac{d\gamma}{dt} = f_\theta(\gamma, t) \ \text{for } t \in [0, 1], \qquad \gamma(0) = (\hat\phi(0), h(0)), \qquad \tilde\phi = \hat\phi(1) + \psi(\gamma(1)), \qquad (3)$$\nwhere $t$ is the time step in the evolution, $\gamma$ is the augmented state of the NODE, $f$ is a neural network parametrized by $\theta$, $\hat\phi(0)$ is the initial value of the distance map, $h(0)$ is the initial value of the image embedding, $\psi$ is a learned function and $\tilde\phi$ is the dense per-pixel distance map prediction. Figure 2a schematically illustrates our initial contour evolution approach. Throughout this paper, we will refer to this method as Contour Evolution."
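A minimal sketch of the Contour Evolution forward pass in Eq. 3 is given below, assuming the torchdiffeq package of Chen et al. (2018) and a single-channel input image. The distance map and image embedding are stacked along the channel dimension to form the augmented state γ; the small convolutional dynamics network, the time-channel conditioning, and the embedding width are illustrative placeholders for the UNet parametrization used in the experiments.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # reference implementation of Chen et al. (2018)

class EvolutionDynamics(nn.Module):
    """f_theta in Eq. 3: predicts d(gamma)/dt for the augmented state,
    with the scalar time t broadcast as an extra input channel."""
    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels + 1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1))

    def forward(self, t, gamma):
        t_map = torch.ones_like(gamma[:, :1]) * t
        return self.net(torch.cat([gamma, t_map], dim=1))

class ContourEvolution(nn.Module):
    """Evolves an initial distance map phi_hat jointly with an image
    embedding h over t in [0, 1], then adds the learned correction psi."""
    def __init__(self, embed_channels=8):
        super().__init__()
        self.embed = nn.Conv2d(1, embed_channels, 3, padding=1)    # h(0)
        self.dynamics = EvolutionDynamics(1 + embed_channels)      # f_theta
        self.psi = nn.Conv2d(1 + embed_channels, 1, 1)             # psi

    def forward(self, image, phi_hat0):
        gamma0 = torch.cat([phi_hat0, self.embed(image)], dim=1)   # gamma(0)
        t = torch.tensor([0.0, 1.0], device=image.device)
        gamma1 = odeint(self.dynamics, gamma0, t)[-1]              # gamma(1)
        return gamma1[:, :1] + self.psi(gamma1)                    # phi_tilde
```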
}, { "heading": "2.2 IMAGE EVOLUTION WITH NODES", "text": "In our first approach, we obtain a final optimal contour by evolving an initial estimate. In our second approach, inspired by Chen et al. (2018), we evolve an image embedding h and project it into a dense per-pixel distance map φ̃, whose zero level set defines the final segmentation map, S(t) = {(x, y)|φ(t)(x, y) = 0} . Mathematically,\ndh dt = fθ(h, t) for t ∈ [0, 1],\nh(0) = λ(I),\nφ̃ = ψ(h(1)),\n(4)\nwhere t is the time step in the evolution, f is a neural network parametrized by θ, I is an image, λ is a learned image embedding function and ψ is a learned function that maps an image embedding to a distance map. Figure 2b schematically illustrates our direct image evolution approach. Throughout this paper, we will refer to this method as Image Evolution." }, { "heading": "3 IMPLEMENTATION", "text": "In the following subsections, we describe our design choices in loss function and their regularization terms, architectures, strategies for emphasizing the evolution of the contour on a region of interest. We also detail our model initialization strategies to prevent drifting from the sub-optimal initial value, and choices of error tolerances and activation normalization." }, { "heading": "3.1 LOSS FUNCTION AND REGULARIZATION TERMS", "text": "We optimize the parameters of our NODE models, described in Figures 2a and 2b, to minimize the empirical risk computed as the Mean Squared Error (MSE) between the target (φ) and predicted (φ̃) distance maps. We remind the reader that although our techniques can access intermediate NODE states, which could allow injection of priors or other constraints, we do not explore this in our current experiments, and relay it to future work." }, { "heading": "3.2 NARROW BAND AND RE-INITIALIZATION", "text": "In the level set formulation, all levels that describe the propagating contour are tracked. Adalsteinsson & Sethian (1995) proposed limiting the evolution to the subset of levels within a narrow band of the zero level contour.\nIn our approach, we obtain the equivalent of a narrow band by applying a hyperbolic tangent nonlinearity on the evolved distance map. It effectively attenuates the contribution of levels in the optimization process. This transformation is especially valuable in refinement setups because it weights the gradients of the loss according to the proximity to the contour1.\nRe-initialization of φ is another common practice in classical level set methods. It ensures the states in the trajectory of the numerical solution remain valid distance maps. (Sussman et al., 1994; Hartmann et al., 2010) propose to first extract a zero level set of an evolving state, and re-calculate a distance map of that contour. In our experiments, we found that our optimization is not sensitive to non valid distance maps, and we did not find it necessary to reinitialize φ." }, { "heading": "3.3 PARAMETER INITIALIZATION AND LEARNING RATE RAMPUP", "text": "In tasks where the initial value is already close to the desired solution, not initializing the model parameters to represent the identity function and not using learning rate ramp up can slow down the optimization process as the model predictions can immediately drift away from the initial value.\nIn addition to using learning rate rampup, we prevent this issue by setting the weights and biases on the last layer of the NODE and Postnet layers to zero. 
This approach has been successfully used in normalizing flow models (Kingma & Dhariwal, 2018; Prenger et al., 2019)." }, { "heading": "3.4 ADAPTIVE SOLVERS AND ERROR TOLERANCES", "text": "In ordinary differential equations, adaptive step solvers vary the step size according to the error estimate of the current step and the error tolerance. If the error estimate is larger than the threshold, the step will be redone with a smaller size until the error is smaller than the error tolerance.\nThe error tolerance $e^i_{tol}$ given the current state $i$ is the sum of the absolute error tolerance $a_{tol}$ and the infinity norm of the current state $h$ weighted by the relative error tolerance $r_{tol}$:\n$$e^i_{tol} = a_{tol} + r_{tol} \cdot \|h^i\|_\infty. \qquad (5)$$\nGiven that we do not know in advance the infinity norm of $h^i$, which in our case contains the image embedding as described in Equations 3 and 4, we set the contribution of the relative error tolerance term to zero and adjust the absolute error tolerance." }, { "heading": "3.5 ACTIVATION NORMALIZATION", "text": "When the batch size is too small for using batch-size-dependent normalization schemes like BatchNorm (Ioffe & Szegedy, 2015), researchers rely on data-parallel multi-processor training setups with BatchNorm statistics reduced over all processes, for example SyncBatchNorm in the APEX library (Sarofeen et al., 2019).\nWhen training in multi-processor environments with data parallelism and NODEs with adaptive step solvers, the number of NODE function evaluations on each processor can differ. Consequently, the number of BatchNorm calls inside each NODE layer will depend on the number of function evaluations, thus making reduction over processes complex.\nIn our setup, we circumvent this issue by using GroupNorm (Wu & He, 2018) in layers where no convolution groups are used and LayerNorm (Ba et al., 2016) when convolution groups are used." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 DATASETS AND TASKS", "text": "The KiTS19 Challenge Data (Heller et al., 2019) consists of CT scans from 210 patients with tumour and kidney annotations. The MSRA10K dataset (Cheng et al., 2014) consists of 10000 images with pixel-level saliency labeling from the MSRA dataset. The PASCAL-S dataset (Li et al., 2014) consists of free-viewing fixations on a subset of 850 images from PASCAL VOC, the ECSSD dataset (Yan et al., 2013) consists of 1000 semantically meaningful but structurally complex images, and the HKU-IS dataset (Li & Yu, 2015) consists of 4447 images, including multiple disconnected salient objects.\nFor the kidney segmentation task, we prune images that do not contain a kidney and resize the data to 200 by 200, without any loss of generality given that the original images are interpolated with the same affine transformation for each patient. We divide the dataset into 7108 images from 168 patients for training and 1786 images from another 42 patients for validation.\nFor the salient object detection task, we train on the MSRA10K dataset and use 512 by 512 image crops, padding where necessary and masking the loss accordingly. We use data augmentation procedures such as scaling, horizontal flips and changes in brightness. We use all images for training and compute validation scores on the other salient object detection datasets."
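Referring back to Eq. 4 and the tolerance policy of Section 3.4, the policy amounts to a one-line change when invoking an adaptive solver. A minimal sketch of the Image Evolution forward pass under this policy is shown below, assuming torchdiffeq's odeint with the Dormand-Prince RK45 solver ("dopri5"); dynamics, lam and psi stand in for the learned networks f_theta, λ and ψ.

```python
import torch
from torchdiffeq import odeint

# Image Evolution (Eq. 4) under the tolerance policy of Section 3.4:
# because the infinity norm of the state h is unknown in advance, the
# relative tolerance is set to zero so that the per-step error budget
# in Eq. 5 is governed by the absolute tolerance alone.
def evolve_image(dynamics, lam, psi, image, atol=1e-4):
    h0 = lam(image)                                   # h(0) = lambda(I)
    t = torch.tensor([0.0, 1.0], device=image.device)
    h1 = odeint(dynamics, h0, t, method="dopri5",
                rtol=0.0, atol=atol)[-1]              # adaptive RK45 solve
    return psi(h1)                                    # phi_tilde = psi(h(1))
```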
}, { "heading": "4.2 EVALUATION METRICS", "text": "In our experiments we use three metrics: Intersection Over Union (IOU), adaptive Fβ (α-Fβ) and weighted Fβ (ω-Fβ) described in (Margolin et al., 2014).\nFor computing IOUs, we rely on the definition from the PASCAL-VOC challenge (Everingham et al., 2015) and compute it as TP /(TP +FP +FN ), where TP , FP , and FN represent true positive, false positive and false negative pixels determined over the whole validation set.\nThe α-Fβ metric is computed as the weighted F1 score, F1 ∗ (1 + β2)/β2, where we set β2 = 0.3 (Hu et al., 2017) and compute F1 over the entire validation set . For computing the weighted Fβ , we use the MatLab code provided by (Margolin et al., 2014), compute scores per image and average over all images. We understand these are the mechanism used to compute the scores reported in Hu et al. (2017)." }, { "heading": "4.3 TRAINING SETUP", "text": "All our models are trained in PyTorch (Paszke et al., 2017). We use the Adam optimizer (Kingma & Ba, 2014) with default params and learning rates between 1e-3 and 1e-4. We anneal the learning rate once the loss curves start to plateau.\nWe use the Runge-Kutta (RK-45) adaptive solver and the adjoint sensitivity method provided in (Chen, 2019). We set the relative error tolerance to zero and explore absolute error tolerances between 1e-3 and 1e-5.\nThe model architectures we evaluate include the UNet (Ronneberger et al., 2015), NODEs parametrized by UNets (NodeUnet), and an architecture inspired by DeepLabV3 (Chen et al., 2017), in which we stack NODEs (NodeStack) with Squeeze and Excitation modules (Hu et al., 2018) followed by an Atrous Spatial Pyramid Pooling Layer (Chen et al., 2017).\nThe kidney segmentation experiments were conducted on a single NVIDIA DGX-1 with 8 GPUs. The salient object detection experiments on UNet and NodeUNet were conducted on a single NVIDIA DGX-1 with 8 GPUs, and the experiments on NodeStack were conducted on 4 NVIDIA DGX-1 nodes with 32 GPUs total. We used the largest possible batch size given memory constraints.\nThe code for replicating our experiments and pre-trained weights will be made available on github." }, { "heading": "4.4 RESULTS", "text": "In this section, we provide comparative results between our methods (contour evolution and image evolution) and other methods over multiple datasets. In our experiments, our contour evolution method focuses on using a NODE to refine suboptimal contours obtained from a regression model trained to predict distance maps from an image; our image evolution method focuses on using a NODE to learn to evolve an image embedding into a distance map. We first provide results on kidney segmentation and then provide results on salient object detection." 
}, { "heading": "4.4.1 KIDNEY SEGMENTATION", "text": "In this task we compare three setups: the first trains a UNet regression model that maps an image to a distance map; the second uses our contour evolution method to refine the prediction of the aforementioned UNet model with a NODE parametrized by a UNet (NodeUNet); the third uses our image evolution method to train a NodeUnet that evolves an image into a distance map.\nWe chose the UNet model checkpoint used in the contour evolution experiment by selecting the checkpoint with the lowest validation loss on the first 8 samples of the validation set right before the UNet starts overfitting the training data and the validation loss starts going up.\nWe use the same training and validation setup for all models and provide results over the validation set below on Table 1. The UNet models provide the worse IOU scores. Our image evolution method produces goods results, showing evidence that it is possible to use NODEs to evolve an image into a distance map.\nLastly, our contour evolution method is able to improve the suboptimal initial contour provided by the UNet model and represents our best results in this experiment. These promising results show evidence that our method could generally be used to improve suboptimal models that underfit or overfit the training data. This is specially valuable for domains with scarcity of data. We provide results of the NodeUNet model trained with the contour evolution method in Figure 3." }, { "heading": "4.4.2 SALIENT OBJECT DETECTION", "text": "In this task we replicate the training setup described in Hu et al. (2017): we train our models on the MSRA10K dataset and compute their validation Fβ scores on PASCAL-S, ECSSD and HKU-IS.\nWe first evaluate the effect of different methods and model architectures on Fβ scores. We compare results from a regression model trained using the UNet architecture, a contour evolution model using the NodeUNet architecture and an image evolution model using the NodeStack architecture. We choose the UNet model for contour evolution by repeating the procedure described in 4.4.1.\nWe provide results over the validation set below on Table 2, wherein the UNet model provides the baseline Fβ scores. In all experiments, our refinement method provides 5% relative improvement over our UNet baseline, which has 3x more paremeters than our NodeUNet (5M vs 15M). We foresee that this trend will continue and models with more parameters will yield better refinement results.\nFinally, we compare our best model against a contrast based model DRFI (Wang et al., 2017) and recent deep learning based models such as MTDS (Li et al., 2016), MDF (Li & Yu, 2015), MCDL (Zhao et al., 2015), ELD (Lee et al., 2016), LEGS (Wang et al., 2015). We also compare against DLS (Hu et al., 2017), a deep learning model based on level sets.\nWhenever possible, we use the original code provided by the authors for computing scores or collect the scores from their publications. We reproduce the setup in Hu et al. (2017) by computing our PASCAL-S scores on binary maps produced by thresholding the ground truth saliency maps at 0.5. Figure 4 below illustrate our model performance on PASCAL-S, ECCSD and HKU-IS.\nTable 3 below shows that the NodeStack model (Ours) achieves the best results on all but one metric." }, { "heading": "5 DISCUSSION", "text": "In this paper, we extend the level set segmentation method to use NODEs to solve the contour evolution problem. 
We learn a forcing function in an end-to-end data-driven manner. We demonstrate that our techniques can effectively evolve rough estimates of contours into final segmentations of objects. Our techniques can also evolve an input image's embedding into a pixel-wise dense semantic label.\nExperimental results on several benchmark datasets suggest that using NODEs for image segmentation tasks is viable. Compared to state-of-the-art methods, our proposed techniques also produce favourable segmentation results.\nAlthough we benefit from NODEs' parametrization of the derivative of the contour, in this paper we do not explore the incorporation of external constraints to guide the contour evolution, and that is an area for future exploration. We also foresee that our method can generalize to 3D images.\nFinally, in some cases during our hyperparameter search we found that training the same model architecture with different learning rates and error tolerances yielded similar losses but largely different numbers of NODE function evaluations, prohibitively increasing wall-clock time. This hyperparameter search can be replaced with automated approaches such as the Gated Info CNF described in Nguyen et al. (2019), where the error tolerances are estimated by the model." } ]
2019
null
SP:34f3abe09b1ca5c5bbbf1a2e28b489fee010098e
[ "This paper tackles the problem of catastrophic forgetting when data is organized in a large number of batches of data (tasks) that are sequentially made available. To avoid catastrophic forgetting, the authors learn a VAE that generates the training data (both inputs and labels) and retrain it using samples from the new task combined with samples generated from the VAE trained in the previous tasks (generative replay). In this way, there's no need to store all past data and even the first learned batch keeps being refreshed and should not be forgotten.", "This paper combines replay and openMax approach to help continual learning. The results shows robustness on different dataset include image and audio in the continual learning condition, where the new come data has a different distribution but the model still able to maintain reasonable quality for the previously and newly come examples. To my understanding, this approach was not ground-breaking but seems a reasonable combinations." ]
We introduce a unified probabilistic approach for deep continual learning with open set recognition, based on variational Bayesian inference. Our model combines a joint probabilistic encoder with a generative model and a linear classifier that get shared across tasks. The proposed open set recognition bounds the approximate posterior by fitting regions of high density on the basis of correctly classified data points and balances open set detection with recognition errors. Catastrophic forgetting is significantly alleviated through generative replay, where the open set recognition is used to reject statistical outliers from low density areas of the class specific posterior. Our approach naturally allows for forward and backward transfer while maintaining past knowledge without the necessity of storing old data, regularization or inferring task labels. We demonstrate compelling results in the challenging continual learning scenario of incrementally expanding the single-head classifier for both class incremental visual and audio classification tasks, as well as incremental learning of datasets across modalities. We further quantitatively demonstrate our model's ability to successfully distinguish unseen unknown data from trained known tasks and subsequently prevent misclassification.
[]
[ { "authors": [ "A. Achille", "T. Eccles", "L. Matthey", "C.P. Burgess", "N. Watters", "A. Lerchner", "I. Higgins" ], "title": "Life-Long Disentangled Representation Learning with Cross-Domain Latent Homologies", "venue": "Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "O. Bachem", "M. Lucic", "A. Krause" ], "title": "Coresets for Nonparametric Estimation - the Case of DPMeans", "venue": "International Conference on Machine Learning (ICML),", "year": 2015 }, { "authors": [ "S. Becker", "M. Ackermann", "S. Lapuschkin", "K.-R. Müller", "W. Samek" ], "title": "Interpreting and Explaining Deep Neural Networks for Classification of Audio Signals", "venue": "arXiv preprint arXiv:", "year": 2018 }, { "authors": [ "A. Bendale", "T.E. Boult" ], "title": "Towards Open Set Deep Networks", "venue": "Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "T.E. Boult", "S. Cruz", "A. Dhamija", "M. Gunther", "J. Henrydoss", "W. Scheirer" ], "title": "Learning and the Unknown : Surveying Steps Toward Open World Recognition", "venue": "AAAI Conference on Artificial Intelligence (AAAI),", "year": 2019 }, { "authors": [ "X. Chen", "D.P. Kingma", "T. Salimans", "Y. Duan", "P. Dhariwal", "J. Schulman", "I. Sutskever", "P. Abbeel" ], "title": "Variational Lossy Autoencoder", "venue": "International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "T. Clanuwat", "M. Bober-Irizar", "A. Kitamoto", "A. Lamb", "K. Yamamoto", "D. Ha" ], "title": "Deep Learning for Classical Japanese Literature", "venue": "Neural Information Processing Systems (NeurIPS), Workshop on Machine Learning for Creativity and Design,", "year": 2018 }, { "authors": [ "A.R. Dhamija", "M. Günther", "T.E. Boult" ], "title": "Reducing Network Agnostophobia", "venue": "Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "S. Farquhar", "Y. Gal" ], "title": "A Unifying Bayesian View of Continual Learning", "venue": "Neural Information Processing Systems (NeurIPS) Bayesian Deep Learning Workshop,", "year": 2018 }, { "authors": [ "Y. Gal", "Z. Ghahramani" ], "title": "Dropout as a Bayesian Approximation : Representing Model Uncertainty in Deep Learning", "venue": "International Conference on Machine Learning (ICML),", "year": 2015 }, { "authors": [ "A. Gepperth", "C. Karaoguz" ], "title": "A Bio-Inspired Incremental Learning Architecture for Applied Perceptual Problems", "venue": "Cognitive Computation,", "year": 2016 }, { "authors": [ "I. Gulrajani", "K. Kumar", "A. Faruk", "A.A. Taiga", "F. Visin", "D. Vazquez", "A. Courville" ], "title": "PixelVAE: a Latent Variable Model for Natural Images", "venue": "International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification", "venue": "International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Deep Residual Learning for Image Recognition", "venue": "Computer Vision and Pattern Recognition", "year": 2016 }, { "authors": [ "I. Higgins", "L. Matthey", "A. Pal", "C. Burgess", "X. Glorot", "M. Botvinick", "S. Mohamed", "A. Lerchner" ], "title": "beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework", "venue": "International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "G.E. 
Hinton", "O. Vinyals", "J. Dean" ], "title": "Distilling the Knowledge in a Neural Network", "venue": "NeurIPS Deep Learning Workshop,", "year": 2014 }, { "authors": [ "S. Ioffe", "C. Szegedy" ], "title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", "venue": "International Conference on Machine Learning (ICML),", "year": 2015 }, { "authors": [ "R. Kemker", "M. McClure", "A. Abitino", "T. Hayes", "C. Kanan" ], "title": "Measuring Catastrophic Forgetting in Neural Networks", "venue": "AAAI Conference on Artificial Intelligence (AAAI),", "year": 2018 }, { "authors": [ "D.P. Kingma", "J.L. Ba" ], "title": "Adam: a Method for Stochastic Optimization", "venue": "International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "D.P. Kingma", "M. Welling" ], "title": "Auto-Encoding Variational Bayes", "venue": "International Conference on Learning Representations (ICLR),", "year": 2013 }, { "authors": [ "J. Kirkpatrick", "R. Pascanu", "N. Rabinowitz", "J. Veness", "G. Desjardins", "A.A. Rusu", "K. Milan", "J. Quan", "T. Ramalho", "A. Grabska-Barwinska", "D. Hassabis", "C. Clopath", "D. Kumaran", "R. Hadsell" ], "title": "Overcoming catastrophic forgetting in neural networks", "venue": "Proceedings of the National Academy of Sciences (PNAS),", "year": 2017 }, { "authors": [ "A. Krizhevsky" ], "title": "Learning Multiple Layers of Features from Tiny Images", "venue": "Technical report,", "year": 2009 }, { "authors": [ "Y. LeCun", "L. Bottou", "Y. Bengio", "P. Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Kimin Lee", "H. Lee", "Kibok Lee", "J. Shin" ], "title": "Training Confidence-Calibrated Classifiers for Detecting Out-of-Distribution Samples", "venue": "International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Kimin Lee", "Kibok Lee", "H. Lee", "J. Shin" ], "title": "A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks", "venue": "Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Z. Li", "D. Hoiem" ], "title": "Learning without forgetting", "venue": "European Conference on Computer Vision (ECCV),", "year": 2016 }, { "authors": [ "S. Liang", "Y. Li", "R. Srikant" ], "title": "Enhancing the Reliability of Out-of-distribution Image Detection in Neural Networks", "venue": "International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "D. Lopez-Paz", "M.A. Ranzato" ], "title": "Gradient Episodic Memory for Continual Learning", "venue": "Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "O. Matan", "R. Kiang", "C.E. Stenard", "B.E. Boser" ], "title": "Handwritten Character Recognition", "venue": "Using Neural Network Architectures. 4th USPS Advanced Technology Conference,", "year": 1990 }, { "authors": [ "M. McCloskey", "N.J. Cohen" ], "title": "Catastrophic Interference in Connectionist Networks : The Sequential Learning Problem. Psychology of Learning and Motivation ", "venue": "Advances in Research and Theory,", "year": 1989 }, { "authors": [ "E. Nalisnick", "A. Matsukawa", "Y.W. Teh", "D. Gorur", "B. Lakshminarayanan" ], "title": "Do Deep Generative Models Know What They Don’t Know", "venue": "International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Y. Netzer", "T. Wang", "A. Coates", "A. Bissacco", "B. 
Wu", "A.Y. Ng" ], "title": "Reading Digits in Natural Images with Unsupervised Feature Learning", "venue": "Neural Information Processing Systems (NeurIPS), Workshop on Deep Learning and Unsupervised Feature Learning,", "year": 2011 }, { "authors": [ "C.V. Nguyen", "Y. Li", "T.D. Bui", "R.E. Turner" ], "title": "Variational Continual Learning", "venue": "International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "R.C. O’Reilly", "K.A. Norman" ], "title": "Hippocampal and neocortical contributions to memory: Advances in the complementary learning systems framework", "venue": "Trends in Cognitive Sciences,", "year": 2003 }, { "authors": [ "G.I. Parisi", "R. Kemker", "J.L. Part", "C. Kanan", "S. Wermter" ], "title": "Continual Lifelong Learning with Neural Networks: A Review", "venue": "Neural Networks,", "year": 2019 }, { "authors": [ "R. Ratcliff" ], "title": "Connectionist Models of Recognition Memory: Constraints Imposed by Learning and Forgetting Functions", "venue": "Psychological Review,", "year": 1990 }, { "authors": [ "S.A. Rebuffi", "A. Kolesnikov", "G. Sperl", "C.H. Lampert" ], "title": "iCaRL: Incremental classifier and representation learning", "venue": "Computer Vision and Pattern Recognition", "year": 2017 }, { "authors": [ "A.A. Rusu", "N.C. Rabinowitz", "G. Desjardins", "H. Soyer", "J. Kirkpatrick", "K. Kavukcuoglu", "R. Pascanu", "R. Hadsell" ], "title": "Progressive Neural Networks", "venue": "arXiv preprint arXiv:", "year": 2016 }, { "authors": [ "H. Shin", "J.K. Lee", "J.J. Kim" ], "title": "Continual Learning with Deep Generative Replay", "venue": "Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "M.R.P. Thomas", "J. Ahrens", "I. Tashev" ], "title": "Probability Models For Open Set Recognition", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2014 }, { "authors": [ "C. Wu", "L. Herranz", "X. Liu", "Y. Wang", "J. van de Weijer", "B. Raducanu" ], "title": "Memory Replay GANs: learning to generate images from new categories without forgetting", "venue": "Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "H. Xiao", "K. Rasul", "R. Vollgraf" ], "title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms", "venue": "arXiv preprint arXiv:", "year": 2017 }, { "authors": [ "J. Yoon", "E. Yang", "J. Lee", "S.J. Hwang" ], "title": "Lifelong Learning with Dynamically Expandable Networks", "venue": "International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "S. Zagoruyko", "N. Komodakis" ], "title": "Wide Residual Networks", "venue": "British Machine Vision Conference (BMVC),", "year": 2016 }, { "authors": [ "F. Zenke", "B. Poole", "S. Ganguli" ], "title": "Continual Learning Through Synaptic Intelligence", "venue": "International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "G. Zhou", "S. Kihyuk", "H. 
Lee" ], "title": "Online Incremental Feature Learning with Denoising Autoencoders", "venue": "International Conference on Artificial Intelligence and Statistics,", "year": 2012 }, { "authors": [ "Zhou" ], "title": "Under review as a conference paper at ICLR 2020 B BALANCING CLASSIFICATION ACCURACY AND KULLBACK LEIBLER DIVERGENCE - THE ROLE OF β In equations 1 and 2, we have added a weight term β to the KL divergence to balance the individual loss terms, similar to the work", "venue": null, "year": 2017 }, { "authors": [ "VAE Higgins" ], "title": "β can be seen as a hyper-parameter that balances the reconstruction accuracy with the distribution approximation quality in the latent space. As mentioned in the main body, in our work, this factor also regulates the trade-off between the additional constraint imposed by the classifier, needing to be able to linearly separate the classes given z, and the quality of the approximation to the training data. As reconstruction losses are typically dependent on the inputs", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Most machine learning systems make the closed world assumption and are predominantly trained according to the isolated learning paradigm, where data is available at all times and is independently and identically distributed. However, in the context of continual learning, where tasks and data arrive in sequence, neither of these two principles is desirable. A neural network that is trained exclusively on a new task’s data forgets past knowledge and suffers from an early identified phenomenon commonly referred to as catastrophic forgetting (McCloskey & Cohen, 1989). Moreover, to overcome the closed world assumption, inclusion of a ”background” class is veritably insufficient as it is impossible to include all unseen concepts and classes explicitly in the loss function beforehand. Likewise, commonly applied thresholding of prediction values doesn’t prevent resulting large confidences for unseen classes if the data is far away from any known data (Matan et al., 1990).\nMost of the existing continual learning literature concentrates efforts on either alleviating catastrophic forgetting, maximizing knowledge transfer or addressing ways in which to efficiently store subsets of past data. These works have identified weight regularization (McCloskey & Cohen, 1989; Zenke et al., 2017; Kirkpatrick et al., 2017; Li & Hoiem, 2016; Nguyen et al., 2018) and rehearsal techniques (Ratcliff, 1990; Lopez-Paz & Ranzato, 2017; Rebuffi et al., 2017; Bachem et al., 2015) or have postulated methods based on complementary learning systems theory (O’Reilly & Norman, 2003) through dual-model with generative memory approaches (Gepperth & Karaoguz, 2016; Shin et al., 2017; Wu et al., 2018; Farquhar & Gal, 2018; Achille et al., 2018) as mechanisms against catastrophic inference. On the one hand, regularization techniques can work well in principle, but come with the caveat of relying on a new task’s proximity to previous knowledge. On the other hand, training and storing separate models, including generative models for generative rehearsal, comes at increased memory cost and doesn’t allow for full knowledge sharing, particularly to already stored models. Specifically, the transfer of already attained knowledge to benefit new tasks, known as forward transfer, as well as the potential positive impact of learning new concepts to aid in existing tasks, known as backward transfer, are crucial to any continual learning system. Generally speak-\ning, most current approaches include a set of simplifications, such as considering separate classifiers for each new task, referred to as multi-head classifiers. This scenario prevents ”cross-talk” between classifier units by not sharing them, which would otherwise rapidly decay the accuracy (Zenke et al., 2017; Kirkpatrick et al., 2017; Rusu et al., 2016; Shin et al., 2017; Gepperth & Karaoguz, 2016; Rebuffi et al., 2017; Achille et al., 2018; Nguyen et al., 2018) as newly introduced classes directly impact and confuse existing concepts. In the multi-head scenario task ids thus need to be encoded or are often assumed to be given by humans in order to know which classifier to use for prediction. Correspondingly, in generative replay, generative and discriminative models are taken to be separate models (Shin et al., 2017; Farquhar & Gal, 2018; Nguyen et al., 2018). 
Similar to regularization of a classifier, a generative model can suffer from the learned approximate posterior distribution deviating further from the true posterior with each task increment. In order to avoid catastrophic forgetting induced by learning to generate on previously generated data, previous works even store a separate generative model per task (Farquhar & Gal, 2018), in analogy to the multi-head classifier. An extended review of recent continual learning methods is provided by Parisi et al. (2019).\nA parallel thread pursues a complementary component of identifying out-of-distribution and open set examples. While current continual learning approaches typically do not include this thread yet, it can be considered crucial to any system and a necessity in order to avoid encoding task labels and to distinguish seen from unknown data. Here, multiple methods rely on using confidence values as a means of rejection through calibration (Liang et al., 2018; Lee et al., 2018b;a). Arguably this also includes Bayesian approaches using variational methods (Farquhar & Gal, 2018; Achille et al., 2018) or Monte-Carlo dropout (Gal & Ghahramani, 2015) to estimate uncertainties. The closed world assumption, however, also holds for Bayesian methods: since the approximated posterior probability cannot be computed for unknown classes, misclassification still occurs and the open space risk remains unbounded (Boult et al., 2019). Recently, Thomas et al. (2014); Bendale & Boult (2016); Dhamija et al. (2018) have proposed extreme value theory (EVT) based open set recognition to bound the open-space risk and balance it with recognition errors in deep neural networks.\nIn this work we propose a probabilistic approach to unify open set recognition with continual learning in a single deep model in order to remove or alleviate the above mentioned common simplifications. Specifically, our contributions are:\n• We introduce a single model for continual learning that combines a joint probabilistic encoder with a generative model and a linear classifier. Inspired by EVT based open set recognition (Bendale & Boult, 2016) for Softmax prediction layers, this model architecture gives rise to a natural way of open set recognition with statistical outlier rejection on the basis of the approximate posterior in Bayesian inference. The latter requires no upfront knowledge of open set data or corresponding modifications to the loss or training procedure, and can successfully prevent nonsensical predictions for unseen unknown data, a robustness feature that is currently not present in closed world continual learning systems.\n• We show how this EVT bound to the posterior can be used for both identification and rejection of statistically outlying unseen unknown data instances, as well as exclusion of generated samples from areas of low probability density. When used in generative replay this leads to significantly reduced catastrophic forgetting without storing real data.\n• We share our model across tasks and automatically expand a single linear classifier head with units for new classes, thus not requiring explicit task labels during inference.\n• We demonstrate that our approach can incrementally learn the classes of two image and one audio dataset, as well as cross-dataset scenarios across modalities, while allowing for forward and backward transfer due to weight-sharing. When presented with novel data our model is able to distinguish between unseen data from various datasets and data belonging to known tasks.
We further show that our approach readily profits from recent model advances such as variational lossy auto-encoders (Gulrajani et al., 2017; Chen et al., 2017)." }, { "heading": "2 UNIFYING CONTINUAL LEARNING WITH OPEN SET RECOGNITION", "text": "In isolated supervised machine learning the core assumption is the presence of i.i.d. data at all times, and training is conducted using a dataset $\mathcal{D} \equiv \{(x^{(n)}, y^{(n)})\}_{n=1}^{N}$, consisting of $N$ pairs of data instances $x^{(n)}$ and their corresponding labels $y^{(n)} \in \{1, \ldots, C\}$ for $C$ classes. In contrast, in continual learning task data $\mathcal{D}_t \equiv \{(x^{(n)}_t, y^{(n)}_t)\}_{n=1}^{N_t}$ with $t = 1, \ldots, T$ arrives sequentially for $T$ disjoint datasets, each with number of classes $C_t$. In our work, we consider this class incremental scenario from a perspective of variational Bayesian inference in deep neural networks (Kingma & Welling, 2013), using a model that consists of a shared encoder with variational parameters $\theta$, and a decoder and linear classifier with respective parameters $\phi$ and $\xi$. The joint probabilistic encoder learns an encoding to a latent variable $z$, over which a prior is placed, say a unit Gaussian. Using variational inference, its purpose is to approximate the true posterior to both $p_\phi(x, z)$ and $p_\xi(y, z)$. The probabilistic decoder $p_\phi(x|z)$ and probabilistic linear classifier $p_\xi(y|z)$ then return the conditional probability density of the input $x$ and target $y$ under the respective generative model given a sample $z$ from the approximate posterior $q_\theta(z|x)$. This yields a generative model $p(x, y, z)$, for which we assume a factorization and generative process of the form $p(x, y, z) = p(x|z)p(y|z)p(z)$. For variational inference with our model the following continual learning loss function thus needs to be optimized:\n$$\mathcal{L}^{UB}_t(\theta, \phi, \xi) = \sum_{\tau=1}^{t} \sum_{n=1}^{N_\tau} \Big[ \mathbb{E}_{q_{\theta,t}(z|x^{(n)}_\tau)} \big[ \log p_{\phi,t}(x^{(n)}_\tau|z) + \log p_{\xi,t}(y^{(n)}_\tau|z) \big] - \beta \, KL\big(q_{\theta,t}(z|x^{(n)}_\tau) \,||\, p(z)\big) \Big] \qquad (1)$$\nHere, we have added a weight term $\beta$ to the KL divergence to balance the individual loss terms, similar to the work of Zhou et al. (2012) and Higgins et al. (2017). This factor regulates the trade-off between the additional constraint imposed by the classifier, which needs to be able to linearly separate the classes given $z$, and the quality of the approximation to the training data. To balance this trade-off irrespective of input and latent dimensionality and number of classes, the losses are normalized according to their dimensions. Note that in practice this changes the relative scale of the losses and thus the interpretation of specific $\beta$ values with respect to the original authors Higgins et al. (2017). We provide a more detailed discussion with empirical examples for the role of $\beta$ in the supplementary section. However, equation 1 requires the presence of all data for all tasks and is thus generally not feasible for continual learning, where only the most recent task's data is assumed to be available. In the context of variational inference, two potential approaches offer solutions to this challenge: a prior-based approach using the former approximate posterior $q_{\theta,t-1}$ as the new task's prior (Nguyen et al., 2018), or estimating the likelihood of former data through generative replay or other forms of rehearsal (Farquhar & Gal, 2018; Achille et al., 2018). For our proposed model, we follow the latter line of work and let the prior remain the same at all times.
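For a single task, the objective in Eq. 1 reduces to a per-batch ELBO with an additional classification log-likelihood term. A minimal PyTorch sketch of this loss is given below; the (mu, logvar) encoder interface, the Bernoulli reconstruction likelihood, and the exact per-dimension normalization are illustrative assumptions, since the text only states that losses are normalized according to their dimensions.

```python
import torch
import torch.nn.functional as F

def cdvae_loss(x, y, encoder, decoder, classifier, beta=0.1):
    """Negative of the per-sample objective in Eq. 1 for a single task:
    reconstruction and classification negative log-likelihoods plus the
    beta-weighted KL divergence to a unit Gaussian prior."""
    mu, logvar = encoder(x)
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparametrize
    recon = F.binary_cross_entropy_with_logits(
        decoder(z), x, reduction="sum") / x[0].numel()       # per input dim
    class_nll = F.cross_entropy(classifier(z), y, reduction="sum")
    kl = -0.5 * torch.sum(
        1 + logvar - mu.pow(2) - logvar.exp()) / mu.size(1)  # per latent dim
    return (recon + class_nll + beta * kl) / x.size(0)       # per sample
```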
Making use of the generative nature of our model, we let the above upper-bound to task incremental continual learning become:\n$$\mathcal{L}_t(\theta, \phi, \xi) = \sum_{n=1}^{\tilde{N}_t} \Big[ \mathbb{E}_{q_{\theta,t}(z|\tilde{x}^{(n)}_t)} \big[ \log p_{\phi,t}(\tilde{x}^{(n)}_t|z) + \log p_{\xi,t}(\tilde{y}^{(n)}_t|z) \big] - \beta \, KL\big(q_{\theta,t}(z|\tilde{x}^{(n)}_t) \,||\, p(z)\big) \Big] + \sum_{n=1}^{N_t} \Big[ \mathbb{E}_{q_{\theta,t}(z|x^{(n)}_t)} \big[ \log p_{\phi,t}(x^{(n)}_t|z) + \log p_{\xi,t}(y^{(n)}_t|z) \big] - \beta \, KL\big(q_{\theta,t}(z|x^{(n)}_t) \,||\, p(z)\big) \Big] \qquad (2)$$\nHere, $\tilde{x}_t \sim p_{\phi,t-1}(x|z)$ with $z \sim p(z)$ is a sample from the generative model, with the corresponding label $\tilde{y}_t \sim p_{\xi,t-1}(y|z)$ obtained from the classifier. $\tilde{N}_t$ is the total number of data instances of all previously seen tasks, or alternatively a hyper-parameter. This way the expectation of the log-likelihood for all previously seen tasks is estimated, and the dataset at any point in time, $\tilde{\mathcal{D}}_t \equiv (x_t \cup \tilde{x}_t, y_t \cup \tilde{y}_t)$, is a combination of generations from seen past data distributions and the current task's real data. For each newly arriving task with novel labels, the classifier is expanded with newly initialized units. We note that whereas the loss function with generative replay in equation 2 is used for continual training, equation 1 and thus real data is always used for testing.\nThe model is further trained in a denoising fashion, where noise is added to each input $x$ to avoid over-fitting. This is preferable to weight regularization as it doesn't entail unrecoverable units that are needed to encode later stage concepts. We have accordingly coined our model Classifying Denoising Variational Auto-Encoder (CDVAE). We optionally enhance the probabilistic decoder with an autoregressive variant, where generation of a pixel's value is spatially conditioned on previous pixels (van den Oord et al., 2016; Gulrajani et al., 2017; Chen et al., 2017). Here, the denoising plays the additional crucial role of de-quantization.\nNonetheless, similar to existing dual-model approaches (Shin et al., 2017; Wu et al., 2018; Farquhar & Gal, 2018), by themselves both CDVAE and PixCDVAE models accumulate errors, as with each iteration of generative replay deviations of the approximate from the true posterior get amplified. However, in our joint model, the linear classifier directly affects the partitioning of the latent space by influencing the joint probabilistic encoder's weights, resulting in class specific areas of large probability density. This is particularly noticeable for lossy VAEs (Gulrajani et al., 2017; Chen et al., 2017) that leave the encoding of local structure to autoregressive layers and hence, in our case, attribute more influence on the latent space to the classifier. We note that these class specific areas in latent space are not necessarily encouraged for deeper classifiers; however, we would argue that with a sufficiently expressive probabilistic encoder such a classifier is not necessary. For visualization purposes, we have trained a CDVAE following the details of section 3 with a two-dimensional latent space on the class-incremental MNIST (LeCun et al., 1998) upper-bound and show the latent space embedding for the validation dataset at the end of continual learning in figure 1b. Corresponding intermediate visualizations for each task increment and for PixCDVAE can be found in the supplementary material. We take advantage of the classifier's impact on the latent space as the foundation for posterior based open set recognition and complementary generative replay with statistical outlier rejection. We refer to this extended model as Open-set Classifying Denoising Variational Auto-Encoder (OCDVAE) and PixOCDVAE respectively.
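Returning to Eq. 2, the replay objective is simply the single-task loss of the previous sketch evaluated on a mixture of real data from task t and data generated by the frozen previous model. A short sketch of assembling that mixed batch is shown below; the generation count and the frozen-model handling are assumptions.

```python
import torch

def build_replay_batch(prev_decoder, prev_classifier, real_x, real_y,
                       latent_dim, n_replay):
    """Assemble D_tilde_t = (x_t U x_tilde_t, y_t U y_tilde_t): current
    real data plus samples from the previous model, labelled by the
    previous classifier (plain generative replay, without rejection)."""
    with torch.no_grad():
        z = torch.randn(n_replay, latent_dim)        # z ~ p(z)
        x_tilde = prev_decoder(z)                    # x_tilde ~ p_phi,t-1(x|z)
        y_tilde = prev_classifier(z).argmax(dim=1)   # label from classifier
    x = torch.cat([real_x, x_tilde], dim=0)
    y = torch.cat([real_y, y_tilde], dim=0)
    return x, y   # feed to cdvae_loss from the earlier sketch
```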
An illustration of our joint probabilistic model is shown in figure 1a." }, { "heading": "2.1 OPEN SET RECOGNITION WITH BOUNDS TO THE CLASS SPECIFIC POSTERIOR", "text": "We leverage the single-headed linear classifier's presence and the resulting formation of class-specific high density regions in latent space as the basis for open set recognition. Specifically, we draw inspiration from the EVT based OpenMax approach (Bendale & Boult, 2016), which uses knowledge about extreme distance values to modify a Softmax prediction's confidence, but propose to instead bound the open-space risk by employing statistical outlier rejection on the basis of the approximate posterior in Bayesian inference. Considering a trained model at the end of task t, the EVT based open set recognition fits a Weibull distribution on the distances of each correctly classified training example's sample from the approximate posterior z(x) ∼ qθ,t(z|x) to the respective per-class sample mean. In other words, regions of high density of the approximate posterior for each class are identified for the subset of correctly identified data points, with the tail of the Weibull distribution bounding the open space as well as regions of low density. This procedure is described in algorithm 1:

Algorithm 1 Open set recognition calibration for deep variational neural networks. At the end of task t, a Weibull model fit of tail-size η is conducted to bound the per-class approximate posterior. Per-class Weibull models ρc,t with their respective shift τc,t, shape κc,t and scale λc,t parameters are returned. The CDVAE model can now be referred to as OCDVAE.
  Require: CDVAE with probabilistic encoder qθ,t(z|x) and probabilistic classifier pξ,t(y|z)
  Require: Classifier probabilities pξ,t(y|z) and samples from the approximate posterior z(x^(i)) ∼ qθ,t(z|x^(i)) for each training dataset example x^(i) in dataset D̃t
  Require: For each class c, let S_c^(i) = z(x′_c^(i)) for each correctly classified training example x′_c^(i)
  1: for c = 1, ..., C do
  2:   compute the per-class latent mean S̄c,t = mean(S_c^(i))
  3:   fit the Weibull model ρc,t = (τc,t, κc,t, λc,t) = FitWeibull(||S_c − S̄c,t||, η)
  4: return the means S̄t and Weibull models ρt

Once these bounds are established, for any novel input, the Weibull models' cumulative distribution function can be used to estimate the statistical outlier probability, based on the unknown example's sample(s) from the posterior and their distance to the class' region of highest density. If the outlier probability is larger than a prior rejection probability, the input can be considered as unknown and a false, overconfident classifier prediction avoided; conversely, it is otherwise classified into the already existing classes across all known tasks, as detailed in algorithm 2:

Algorithm 2 Open set probability estimation for unknown and uncertain inputs. At the end of any task t, novel data points are considered statistical outliers if a Weibull model's cumulative distribution function (CDF) outlier probability value exceeds a prior Ωt.
  Require: OCDVAE with probabilistic encoder qθ,t(z|x)
  Require: Per-class latent mean S̄c,t and Weibull model ρc,t, each with parameters (τc,t, κc,t, λc,t)
  1: for a novel input example x̂, sample z ∼ qθ,t(z|x̂)
  2: compute the distances to the class means: dc,t = ||S̄c,t − z||
  3: for c = 1, ..., C do
  4:   compute the Weibull CDF ωc,t(dc,t) = 1 − exp(−(||dc,t − τc,t|| / λc,t)^κc,t)
  5: reject the input if ωc,t(dc,t) > Ωt for any class c"
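Algorithms 1 and 2 can be approximated with standard tooling. The sketch below uses scipy's Weibull distribution as a stand-in for the EVT fit, fitting only the η largest per-class distances; the exact fitting routine may differ from ours, and all names are illustrative.

```python
import numpy as np
from scipy.stats import weibull_min

def calibrate_weibull(latents_per_class, tail_size):
    """Algorithm 1: one Weibull model per class, fit on the tail of the
    distances between correctly classified latent codes and the class mean."""
    models = {}
    for c, z in latents_per_class.items():           # z: (n_c, latent_dim) array
        mean_c = z.mean(axis=0)
        dists = np.linalg.norm(z - mean_c, axis=1)
        tail = np.sort(dists)[-tail_size:]           # largest distances only
        shape, shift, scale = weibull_min.fit(tail)  # (kappa, tau, lambda)
        models[c] = (mean_c, shape, shift, scale)
    return models

def outlier_probability(z_sample, models):
    """Algorithm 2: per-class Weibull CDF values for one posterior sample;
    the caller rejects the input when the value exceeds the prior Omega_t."""
    probs = {}
    for c, (mean_c, shape, shift, scale) in models.items():
        d = np.linalg.norm(z_sample - mean_c)
        probs[c] = weibull_min.cdf(d, shape, loc=shift, scale=scale)
    return probs
```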
}, { "heading": "2.2 GENERATIVE REPLAY WITH STATISTICAL OUTLIER REJECTION", "text": "As the obtained open set recognition models provide bounds between the posterior’s regions of high and low density, we can extend the use from rejection of statistical outliers for novel input examples to rejection of samples drawn directly from the prior for the purpose of generative replay. Consider generation of a data point x ∼ pφ,t(x|z). It is common practice to assume that the approximated posterior is close to the true posterior. If a sample from the prior z ∼ p(z) stems from an area of low density, one further inherently relies on the generative model’s capability for interpolation. In periodic generative rehearsal, these factors can entail accumulation of errors through increasing deviations between approximated and true posterior, as well as classifier confusion due to ambiguous examples. To inhibit the latter and as a result implicitly the former, our obtained bounds can be\nAlgorithm 3 Generative replay with outlier rejection. For generative replay after training task t, samples z ∼ p(z) are rejected if the Weibull CDF’s probability value exceeds the prior Ωt. Require: OCDVAE with probabilistic encoder qθ,t(z|x) and probabilistic classifier pξ,t(y|z) Require: Per class latent mean S̄c,t and Weibull model ρc,t, each with parameters (τc,t, κc,t, λc,t) Require: Number of samples per class Mc ∀ c = 1, . . . , C̃t with C̃t seen classes up to task t\nInitialize: mc ← 0∀ c = 1, . . . , C̃, X̃t = ∅ and Ỹt = ∅ while ∑C̃ c=1mc < ∑C̃ c=1Mc do . in parallel\n3: Sample from prior z ∼ p(z) Compute label ĉ = argmax (log pξ,t(y|z)) Calculate distance dĉ,t = ‖S̄ĉ,t − z‖\n6: Compute Weibull CDF ωĉ,t(dĉ,t) = 1− exp ( − ||dĉ,t−τĉ,t||λĉ,t )κzĉ,t if ωĉ,t < Ωt and mĉ < Mĉ then Calculate decoder x̃ ∼ pφ,t(x|z) 9: Append to dataset X̃t ← X̃t ∪ x̃ and Ỹt ← Ỹt ∪ ĉ and mĉ ← mĉ + 1\nelse reject\nexploited by rejecting samples from low density regions and replacing them with statistically inlying samples. Hence, we extend generative replay for the OCDVAE with such a rejection mechanism. We now first sample from the prior until a desired amount of statistical inliers per class is reached, whereas the label is obtained using the linear classifier and is accepted if it is in correspondence with the respective class’ Weibull model. We then proceed to generate the dataset with the probabilistic decoder. This bounded version of generative replay with statistical outlier rejection is detailed in algorithm 3. An example of MNIST images with outlier probabilities based on their sample from the prior are shown in figure 1c to illustrate the rejection of ambiguous and misclassified instances, with additional images in the supplementary material. The reason we use sampling with rejection is because our Weibull models are based on scalar distances and thus samples from the Weibull distributions cannot be inverted to high-dimensional z vectors. While this may sound detrimental to our method, we argue that both sampling from the prior z ∼ p(z) in large parallelized batches and likewise computation of a single layer classifier, even in high dimensions, is computationally negligible in contrast to the much more computationally heavy deep probabilistic decoder. The latter further only needs to be processed for accepted samples and thus does not add any computational complexity with respect to conventional generative replay. 
Note that the amount of samples that has to be drawn before a sample is accepted scales with the dimensionality of the latent space and could in principle be regarded as a limitation for very high-dimensional latent spaces. However, practical VAEs typically do not tend to profit from very high-dimensional latent spaces." }, { "heading": "3 EXPERIMENTS", "text": "Similar to recent literature (Zenke et al., 2017; Kirkpatrick et al., 2017; Farquhar & Gal, 2018; Shin et al., 2017; Parisi et al., 2019), we consider the incremental MNIST (LeCun et al., 1998) dataset, where classes arrive in groups of two, and corresponding versions of the FashionMNIST (Xiao et al., 2017) and AudioMNIST (Becker et al., 2018) datasets. For the latter we follow the authors' procedure of converting the audio recordings into spectrograms, resized to 32 × 32. In addition to this class-incremental setting, we evaluate cross-dataset scenarios with all inputs resized to 32 × 32, where datasets are sequentially added with all of their classes and the model has to learn across modalities.

We compare our proposed OCDVAE model with its counterpart CDVAE to highlight the improvement induced by algorithm 3. We further contrast these improvements with the dual-model variant, consisting of a VAE for generative replay and a separate deep model for classification (Shin et al., 2017). Further, autoregressive pixel model variants are reported to demonstrate how approaches benefit from more recent advances in model architecture. We evaluate elastic weight consolidation (EWC) (Kirkpatrick et al., 2017) on the classification task without a decoder to show that approaches based on regularization fail at maintaining previous knowledge in a single-head classifier scenario. Although the latter has already been shown in a recent review by Parisi et al. (2019), and even for multi-head classifier scenarios (Kemker et al., 2018), we nevertheless provide these results for emphasis. We do not report episodic memory approaches like coresets (Bachem et al., 2015) that explicitly store real data, or additional regularization as suggested by Nguyen et al. (2018). These methods are complementary to our proposed approach; reporting them separately might mislead the reader, and an evaluation that additionally includes these methods on top is left for future work. To provide a frame of reference for achievable performance, we further consider upper- and lower-bounds for our joint model. The CDVAE lower-bound is obtained when only the current task's data is available and provides the worst-case performance where absolute catastrophic forgetting occurs. Conversely, the upper-bound is obtained with equation 1 when a task's data is added to all previous tasks' real data, and yields a model's maximum achievable performance if trained in an incremental fashion. The isolated learning baseline corresponds to the typical machine learning practice outside of continual learning where all tasks' data is always present. All models can be trained on a single GTX 1080 GPU and we will make our code publicly available." }, { "heading": "3.1 METRICS", "text": "Our metrics are inspired by previously proposed continual learning classification measures (Lopez-Paz & Ranzato, 2017; Kemker et al., 2018). In addition to overall accuracy, these metrics monitor forgetting by computing a base accuracy on the initial task, while also gauging the amount of new knowledge that can be encoded by monitoring the accuracy for the most recent increment.
In the multi-head classification scenario, both the overall and base accuracies are then divided by an ideal accuracy. As our single-head classifier scenario implies a natural decay of a task's base accuracy with an increasing amount of classes, we instead report the raw base and new accuracies and compare them with the upper-bound and isolated performance. This is a particularly important distinction to the multi-head scenario reported in many previous works, where the amount of forgetting can easily be determined by the amount that the accuracy of each binary classifier decays over time, as the introduction of new classes does not directly affect the other tasks' weights. We extend these concepts to the probabilistic decoder's reconstruction loss. Our metrics are thus:

• Classification accuracy: base accuracy αt,base of the initial task at increment t. New accuracy αt,new for the freshly added task. Accuracy αt,all over all classes of all tasks seen so far.

• Reconstruction negative log-likelihood (NLL): base NLL γt,base of the initial task at task increment t. New NLL γt,new for the added task. NLL γt,all for all tasks seen so far.

• Kullback-Leibler divergence: between the approximate posterior qθ,t(z|x) and the prior p(z), and thus always evaluated for all data up to and including task t." }, { "heading": "3.2 TRAINING HYPER-PARAMETERS", "text": "We base our encoder and decoder architecture on 14-layer wide residual networks (He et al., 2016; Zagoruyko & Komodakis, 2016) as used in lossy auto-encoders (Gulrajani et al., 2017; Chen et al., 2017), with a latent dimensionality of 60 to demonstrate scalability to high dimensions and deep networks. Our classifier always consists of a single linear layer. The optional autoregressive decoder adds three additional layers. For a common frame of reference, all methods share the same WRN architecture. We use hyper-parameters consistent with the literature (Gulrajani et al., 2017; Chen et al., 2017). Accordingly, all models are optimized using stochastic gradient descent with a mini-batch size of 128 and Adam (Kingma & Ba, 2015) with a learning rate of 0.001, batch-normalization (Ioffe & Szegedy, 2015) in all hidden layers with a value of 10⁻⁵, and ReLU activations. We add noise sampled from N(0, 0.25) to the input to avoid over-fitting. Due to the inevitable data augmentation effect, we train all approaches in this denoising fashion. No further data augmentation or preprocessing is applied. We initialize all weights according to He et al. (2015). For our single-head expanding classifier this ensures that all units are always initialized from the same distribution when they get added at each task increment, as the initialization depends only on the layer's input dimensionality. This is not necessarily the case for other weight initialization techniques where a dependency on the amount of classifier units exists. We train all class-incremental models for 120 epochs per task on MNIST and FashionMNIST and for 150 epochs on AudioMNIST. Complementary incremental cross-dataset models are trained for 200 epochs per task. While our proposed model exhibits forward transfer due to weight sharing and need not necessarily be trained for the entire amount of epochs for each subsequent task, this guarantees convergence and a fair comparison of results. Isolated models are trained for 200 and 300 epochs until convergence respectively.
For the generative replay with statistical outlier rejection, we use an aggressive rejection rate of Ωt = 0.01 (with analogous results for 0.05) and dynamically set tail-sizes to 5% of the seen examples per class. The distance measure used for open set recognition is the cosine distance. We provide a detailed description of architectures and EWC hyper-parameters in the appendix. Results are averaged over five experimental repetitions, apart from the isolated, lower- and upper-bound runs that show negligible deviations." }, { "heading": "3.3 INCREMENTAL CONTINUAL LEARNING RESULTS AND DISCUSSION", "text": "Results for the class-incremental scenarios for all models, the upper- and lower-bounds and the isolated setting are shown in table 1. Corresponding results for the two directions of incremental cross-dataset experiments are summarized in table 2. In general the upper-bound values are almost identical to isolated learning. Similarly, the new task's metrics are negligibly close, as the WRN architecture ensures enough capacity to encode new knowledge. In contrast to EWC, which is universally unable to maintain its old knowledge, CDVAE and PixCDVAE are able to partially retain previous information. Yet they accumulate errors due to samples generated from low density regions. While the dual-model approaches do not exhibit this behavior for MNIST, they display similar forgetting for other experiments, particularly for audio data. However, our proposed OCDVAE and PixOCDVAE generative replay overcomes this issue to a considerable degree. For the class-incremental scenarios the best models feature less than a 10% drop in accuracy on all datasets even with repeated generative replay. Even stronger results can be observed for the cross-dataset scenarios, where forgetting is alleviated to the extent that final accuracy values are close to the upper bound. Likewise, improvements are noticeable in the reconstruction NLL and KL divergences. The OCDVAE models can consequently achieve reconstruction likelihoods akin to a dual model's separate VAE, while fully sharing encoded knowledge and maintaining a classifier. As a result of OCDVAE's shared weights, we observe backward transfer for some experiments. This is particularly apparent for AudioMNIST, where the second task's accuracy first decays upon its addition and then improves with the inclusion of later tasks. Due to space constraints, we provide a detailed account of all intermediate results and examples of generated images for all increments in the supplementary material. We note that the pixel decoders are trained for classification and that reported NLL values are obtained through sampling from the multinomial distribution. Original losses are provided in the supplementary material." }, { "heading": "3.4 OUT OF DISTRIBUTION DETECTION RESULTS AND DISCUSSION", "text": "In addition to previous results with a focus on continual learning accuracy, where the open set recognition approach has been used for generative replay, we proceed to quantitatively analyze the models' ability to distinguish unknown tasks' data from data belonging to known tasks. Here, the challenge is to consider all unseen test data of already trained tasks as inlying, while successfully recognizing 100% of unknown datasets as outliers.
For this purpose, models trained on each of the three datasets are evaluated on their own test set, on the remaining datasets, and additionally on the KMNIST (Clanuwat et al., 2018), SVHN (Netzer et al., 2011), CIFAR10 and CIFAR100 (Krizhevsky, 2009) datasets.

We compare and contrast three criteria that could be used for open set recognition: classifier predictive entropy, reconstruction loss, and our proposed latent space based EVT approach. Naively one might expect the Bayesian approach to yield distributions that are sufficiently different for unknown data. We thus evaluate the respective expectation for each criterion with 100 variational samples from the approximate posterior per data point. Figure 2 shows the three criteria and the respective percentage of the total dataset being considered as outlying for the OCDVAE model trained on FashionMNIST. Consistent with recent literature (Nalisnick et al., 2019), we can observe that the reconstruction loss can sometimes distinguish between the known tasks' test data and unknown datasets, but fails for others. In the case of classifier predictive entropy, depending on the exact choice of entropy threshold, generally only a partial separation can be achieved. Furthermore, both of these criteria pose the additional challenge of results being highly dependent on the choice of the precise cut-off value. While drawing z samples multiple times, as well as repeatedly calculating the classifier, is computationally feasible, we further note that sampling the entire decoder is computationally prohibitively expensive in practice. In contrast to the other criteria, the test data from the known tasks is regarded as inlying across a wide range of rejection priors Ωt, and the majority of other datasets is consistently regarded as outlying by our proposed open set mechanism.

As our latent space based EVT approach for open set recognition and the respective algorithms 1 and 2 do not technically require a decoder, we can evaluate similar outlier detection for the variational dual-model approach. While we also evaluate this scenario, note that the distinction the generative model makes need not necessarily apply to a separate classifier and vice versa. We report the outlier detection accuracy in table 3. Here, a 5% validation split is used to determine the respective value at which 95% of the validation data is considered as inlying, before using these priors to determine outlier counts for the known tasks' test set as well as the other datasets. While MNIST seems to be a distinct and easy enough dataset for all approaches, we can make two further major observations:

1. The latent based EVT approach outperforms the other criteria, particularly for the OCDVAE where a near-perfect open set detection can be achieved.

2. Even though we can apply our proposed open set approach to just a classifier, the joint model with shared latent space consistently exhibits higher outlier detection values. While future investigation into this aspect is needed, we hypothesize that this is due to the joint model also optimizing a variational lower bound to the data distribution p(x).

We provide figures analogous to figure 2 for all models reported in table 3 in the supplementary material. We emphasize that our open set detection does not rely on knowing any open set data examples beforehand, as we do not need modification to the loss function or other forms of explicit training.
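As a reference for the first of the compared criteria, a minimal sketch of the expected predictive entropy over variational samples might look as follows; the encoder and classifier handles are placeholders.

```python
import torch

@torch.no_grad()
def expected_predictive_entropy(encoder, classifier, x, n_samples=100):
    mu, logvar = encoder(x)
    ent = 0.0
    for _ in range(n_samples):
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterized draw
        p = torch.softmax(classifier(z), dim=1)
        ent = ent + -(p * torch.log(p + 1e-12)).sum(dim=1)
    return ent / n_samples        # one entropy score per input in the batch
```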
As there exists a body of complementary work (Liang et al., 2018; Lee et al., 2018b; Dhamija et al., 2018) that could readily be integrated, we leave such analysis for future work." }, { "heading": "4 CONCLUSION AND OUTLOOK", "text": "We have proposed a unified probabilistic approach to deep continual learning. At the heart lies Bayesian inference with a model combining a shared probabilistic encoder with a generative model and an expanding linear classifier. Weight sharing across tasks allows for forward and backward transfer, while generative replay alleviates catastrophic forgetting. We have then introduced EVT based bounds to the approximate posterior, enabled through the class-specific latent space partitioning induced by the classifier. The derived open set recognition and corresponding generative replay with statistical outlier rejection have been shown to achieve compelling results in both task-incremental and cross-dataset continual learning across image and audio modalities, while being able to distinguish seen from unseen data. We have demonstrated that our approach readily benefits from recent model advances such as autoregressive models (Gulrajani et al., 2017; Chen et al., 2017) and therefore expect to apply our approach to more complicated data such as larger scale color images with further improvements in generative models. As our approach is extendible, we envision future work to encompass dynamical neural network expansion to increase representation capacity when task complexity increases (Yoon et al., 2018; Rusu et al., 2016; Zhou et al., 2012), combination with soft-targets (Hinton et al., 2014; Li & Hoiem, 2016), or transfer to entirely unsupervised scenarios where the classifier learns only dataset ids instead of distinct classes (Achille et al., 2018)." }, { "heading": "A CONTINUAL LEARNING 2-D LATENT SPACE VISUALIZATION", "text": "A natural consequence of our joint model with a shared probabilistic encoder is that the classifier encourages the formation of class-specific regions of high density in the latent space. During continual incremental learning, these regions keep shifting with every task increment while maintaining their class-specificity. New regions of high density emerge for newly added classes. As can be observed in figure 3, at the end of the first task two regions have been formed around the mean of the N(0, 1) prior when training our CDVAE model on the MNIST (LeCun et al., 1998) dataset in a class-incremental upper-bound fashion. With every addition of the next classes, the latent embedding shifts around the mean of the prior to accommodate the new classes, with distinct classes separated by regions of low density. Furthermore, it can also be seen in figures 3e and 3f that the shape and the location of the high density regions in the latent embedding are model dependent." }, { "heading": "B BALANCING CLASSIFICATION ACCURACY AND KULLBACK LEIBLER DIVERGENCE - THE ROLE OF β", "text": "In equations 1 and 2, we have added a weight term β to the KL divergence to balance the individual loss terms, similar to the work of Zhou et al. (2012) and Higgins et al. (2017). In a standard β-VAE (Higgins et al., 2017), β can be seen as a hyper-parameter that balances the reconstruction accuracy with the distribution approximation quality in the latent space.
As mentioned in the main body, in our work, this factor also regulates the trade-off between the additional constraint imposed by the classifier, needing to be able to linearly separate the classes given z, and the quality of the approximation to the training data. As reconstruction losses are typically dependent on the inputs’ spatial dimensionality and the classification loss on the respective number of classes, we normalize the loss accordingly. This yields loss values whose scale is comparable and in practice changes the relative scale of the losses to each other. Whereas typically the reconstruction loss heavily outweighs the Kullback Leibler divergence and classification in non-normalized training for a β value of 1.0, this is not the case for our approach. Although the benefit is a more stable training irrespective of chosen dimensionalities, note that this however changes the role of specific β values with respect to the original authors’ Higgins et al. (2017) interpretation and the contribution of regularization.\nTo illustrate this we have trained our WRN model on MNIST with 2-D and 60-D latent spaces for multiple values of β in tables 4 and 5 respectively. In these tables we report both the normalized\nobtained losses during training, as well as the equivalent computed un-normalized losses as typically reported in the literature. While we have reported the latter for comparison purposes in our main body, the former are used in practice. We can observe that decreasing the value of beta leads to improvement of the classifier and reconstruction accuracy at the expense of KL divergence quality. In both 2-D and 60-D cases, a large beta value leads to significant underfit of the classifier. In order to enable adequate training of the classifier, a corresponding value of β thus has to be chosen. Empirically we have found that a β of 0.1 works well universally. A nuance of the normalized loss is that β values cannot be compared to ones in the un-normalized scenario. To pick a concrete example, the ratio between KLD and reconstruction loss with a β value of 1.0 for a 60-D latent space in table 5 is roughly 1 : 22 without normalization, but approximately 1 : 1.6 in our case. In practice our choice of β would translate to much larger values in a conventional β-VAE.\nHowever, the presence of the classifier, irrespective of the value of β induces low density regions in latent space. For the 2-D latent space on a trained model for MNIST, we can observe that these low density regions become more apparent with smaller values of β, but manifest even for large values at the center due to the classifier’s need to disentangle the class representations." }, { "heading": "C GENERATIVE REPLAY EXAMPLES WITH CDVAE AND OCDVAE", "text": "As stated in section 2 of the main body, as well as exemplified in the previous section, the jointly optimized linear classifier directly affects the emergence of class specific areas of large probability density in the latent space. The effect of sampling from the prior without statistical outlier rejection for low density regions is shown in figure 5 for the MNIST dataset. For CDVAE/PixCDVAE we observe classifier confusion due to class interpolated examples, mentioned in section 2.2. As the generative model needs to learn how to replay old tasks’ data based on its own former generations, this confusion and interpolations accumulate rapidly. 
This is not the case for OCDVAE/PixOCDVAE, where misclassifications are scarce and the generative model is capable of maintaining high visual\nfidelity throughout continual training. As the OCDVAE constrains the sampling to regions of high density, in principle the generative replay could reproduce solely the original data without any interpolation akin to an over-fit. However, for the purpose of generative replay and estimating the log-likelihood of former seen data distributions, this can be desirable in the continual learning scenario as long as variety is ensured. Both our continual learning results presented in the main paper, as well as the visual examples of this section’s figures indicate that this is the case. Similar tendencies can be observed for the other two datasets - FashionMNIST Xiao et al. (2017) (figure 6) and AudioMNIST (Becker et al., 2018) (figure 7). We note that we show AudioMNIST for the purpose of completeness as generated examples are difficult to interpret visually.\nD ILLUSTRATION OF GENERATIVE REPLAY WITH STATISTICAL OUTLIER REJECTION\nIn figure 8 we show generated images x ∼ pφ,t(x|z) with z ∼ p(z) and their corresponding class c obtained from the classifier pξ,t(y|z) for an OCDVAE model trained on the class incremental MNIST, after the last task increment t = T . Based on its sample from the prior, for each image we have further noted the open set statistical outlier probability ωc,t from the respective class’ Weibull model. Images are depicted in rows, whereas each row corresponds to a distinct class label. The figure exemplifies the sampling process for a mini-batch of size 128, where the amount of class samples within the mini-batch is not necessarily balanced. We observe how generated images that feature blurring and ambiguity are considered as strong statistical outliers, as well as examples with class interpolation and therefore hold a misclassified label. Using the latter examples to create a dataset for continual learning with generative replay hence entails accumulation of errors. In contrast to the conventional version with unconstrained sampling, our generative replay with statistical outlier rejection algorithm shown in algorithm 3 of the main body rejects these examples and prevents such errors to a large degree." }, { "heading": "E FULL CLASS INCREMENTAL RESULTS", "text": "In addition to the comparative analysis provided in section 3.4 of the main body, we provide the class-incremental results for each of the three datasets at the end of every task increment, averaged over 5 experimental repetitions in tables 6, 7 and 8 respectively. These tables aid in making some additional observations about the behavior of the different continual learning algorithms across consecutive task increments.\nWe once again observe the increased effect of error accumulation due to unconstrained generative sampling from the prior in the CDVAE and the PixCDVAE models in comparison to their open set counterparts. The statistical deviations across experiment repetitions in the base and the overall classification accuracies are higher and are generally decreased by the open set models. For example, in table 6 the MNIST base and overall accuracy deviations of CDVAE are higher than the respective values for OCDVAE starting from the second task increment. Correspondingly, the accuracy values themselves experience larger decline for CDVAE than for OCDVAE with progressive increments. 
This difference is not as pronounced at the end of the first task increment, because the models have not yet been trained on any of their own generated data. Successful literature approaches such as the variational generative replay proposed by the authors of Farquhar & Gal (2018) thus avoid repeated learning based on previously generated examples and simply store and retain a separate generative model for each task. The strength of our model is that, instead of storing a trained model for each task increment, we are able to continually keep training our joint model with data generated for all previously seen tasks by filtering out ambiguous samples through statistical outlier rejection. Similar trends can also be observed for the respective pixel models.

E.1 BACKWARD TRANSFER

The weight sharing and the presence of a generative expanding single-headed classifier open up the scope for both forward and backward transfer of knowledge in the continual learning context. Figure 9 shows an interesting case of the latter for class-incremental learning with our OCDVAE model on the AudioMNIST dataset. The addition of two new classes (four and five) at the end of the second increment leads to an improvement in the classification performance on class two, as indicated by the confusion matrices.

E.2 PIXEL MODEL BITS PER DIMENSION CLASSIFICATION LOSSES

Although the main body reports PixelVAE reconstruction log-likelihoods in nats, these models are practically formulated as a classification problem with a 256-way Softmax. The corresponding loss is in bits per dimension. We have converted these values for a better comparison, but in order to do so we need to sample from the pixel decoder's multinomial distribution to calculate a binary cross-entropy on reconstructed images. The bits per dimension classification loss values for our PixelVAE based experiments in the main body are provided for reference here.
The PixCDVAE and PixOCDVAE achieve final losses on all tasks of 1.019 ± 0.014 and 1.047 ± 0.010 for MNIST, 2.851 ± 0.0026 and 2.852 ± 0.0047 for FashionMNIST, and 4.425 ± 0.0010 and 4.451 ± 0.0198 for AudioMNIST. For cross-dataset experiments starting with FashionMNIST first, the corresponding loss values in bits per dimension are 2.260 ± 0.0078 for PixCDVAE and 2.238 ± 0.0021 for PixOCDVAE. In the reverse direction the values are 2.232 ± 0.0177 and 2.218 ± 0.0014 respectively. In the dual-model scenario, the separate PixVAE achieves final losses on all tasks of 1.040 ± 0.0103 for MNIST, 2.242 ± 0.0124 for FashionMNIST and 4.406 ± 0.0024 for AudioMNIST. Loss values for the cross-dataset setting are 2.253 ± 0.0047 when FashionMNIST gets trained first and 2.279 ± 0.0104 for the reverse direction starting with AudioMNIST.

[Tables 6, 7 and 8: full per-increment class-incremental results (base/new/overall accuracies, reconstruction NLLs and KL divergences) for MNIST, FashionMNIST and AudioMNIST. The extracted table rows lost their column headers in this dump and are omitted here.]"
}, { "heading": "G ARCHITECTURE DEFINITIONS AND ADDITIONAL HYPER-PARAMETERS", "text": "Our previous description of the training hyperparameters in the main text is extended here by specifying the exact encoder and decoder architecture, and the additional hyperparameters for the Adam (Kingma & Ba, 2015) optimizer used for training in each of our evaluated methods. We also provide the hyperparameter values necessary for evaluating EWC in the class-incremental learning and cross-dataset scenarios.\nWe point the reader to tables 9 and 10 for detailed encoder and decoder configurations. For the autoregressive addition to our joint model, we set the number of output channels of the decoder to 60 and append 3 pixel decoder layers, each with a kernel size of 7 × 7 and 60 channels. The hyperparameters for Adam optimization include a β1 of 0.9, β2 of 0.999 and of 10−8.\nFor the EWC experiments, the number of Fisher samples is fixed to the total number of data points from all the previously seen tasks. A suitable Fisher multiplier (λ) value has been determined by conducting a grid search over a set of five values: 50, 100, 500, 1000 and 5000 on held-out validation data for the first two tasks in sequence. We observe exploding gradients if λ is too high. However, a very small λ leads to excessive drift in the network weight distribution across subsequent tasks that further results in catastrophic inference. A balance between these two phenomena is achieved for a λ value of 500 in the class-incremental scenario and 1000 in the cross-dataset setting." } ]
2019
null
SP:655be2d7f8ffe68416e0c3a5b4218ffe45a37bfc
[ "This paper experimentally investigates how fast the generalization error decreases when some specific kernel functions are used in real datasets. This paper conducted numerical experiments on several datasets to investigate the decreasing rate of the generalization error, and the rate is determined for such datasets. This decreasing rate is theoretically analyzed by using the approximation theory of RKHS in the teacher-student setting. It is shown that the rate is determined with the smoothness and effective dimensionality of input. Then, the smoothness of the teacher function is also derived through this analysis.", "This paper studies, empirically and theoretically, the learning rates of (shift-invariant) kernel learners in a misspecified setting. In the well-specified setting, the rate of kernel learners is at least $n^{-1/2}$, and in a misspecified setting assuming only Lipschitz targets, the rate is $n^{-1/d}$. Neither seems to match the experimental rate on MNIST and CIFAR-10; this paper proposes a theoretical model that can more-or-less match the experimental rate with essentially-reasonable assumptions." ]
How many training data are needed to learn a supervised task? It is often observed that the generalization error decreases as n^{−β}, where n is the number of training examples and β is an exponent that depends on both data and algorithm. In this work we measure β when applying kernel methods to real datasets. For MNIST we find β ≈ 0.4 and for CIFAR10 β ≈ 0.1. Remarkably, β is the same for regression and classification tasks, and for Gaussian or Laplace kernels. To rationalize the existence of non-trivial exponents that can be independent of the specific kernel used, we introduce the Teacher-Student framework for kernels. In this scheme, a Teacher generates data according to a Gaussian random field, and a Student learns them via kernel regression. With a simplifying assumption, namely that the data are sampled from a regular lattice, we derive analytically β for translation invariant kernels, using previous results from the kriging literature. Provided that the Student is not too sensitive to high frequencies, β depends only on the training data and their dimension. We confirm numerically that these predictions hold when the training points are sampled at random on a hypersphere. Overall, our results quantify how smooth Gaussian data should be to avoid the curse of dimensionality, and indicate that for kernel learning the relevant dimension of the data should be defined in terms of how the distance between nearest data points depends on n. With this definition one obtains reasonable effective smoothness estimates for MNIST and CIFAR10.
[]
[ { "authors": [ "M. Allegra", "E. Facco", "A. Laio", "A. Mira" ], "title": "Clustering by the local intrinsic dimension: the hidden structure of real-world data", "venue": null, "year": 1902 }, { "authors": [ "S. Arora", "S.S. Du", "W. Hu", "Z. Li", "R. Salakhutdinov", "R. Wang" ], "title": "On exact computation with an infinitely wide neural net", "venue": null, "year": 1904 }, { "authors": [ "B. Aubin", "A. Maillard", "F. Krzakala", "N. Macris", "L. Zdeborová" ], "title": "The committee machine: Computational to statistical gaps in learning a two-layers neural network", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "F. Bach" ], "title": "Breaking the curse of dimensionality with convex neural networks", "venue": "The Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "J. Barbier", "F. Krzakala", "N. Macris", "L. Miolane", "L. Zdeborova" ], "title": "Optimal errors and phase transitions in high-dimensional generalized linear models", "venue": "Proceedings of the National Academy of Sciences, 116:201802705,", "year": 2019 }, { "authors": [ "J. Bruna", "S. Mallat" ], "title": "Invariant scattering convolution networks", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2013 }, { "authors": [ "J.A. Costa", "A.O. Hero" ], "title": "Learning intrinsic dimension and intrinsic entropy of high-dimensional datasets", "venue": "12th European Signal Processing Conference,", "year": 2004 }, { "authors": [ "A. Engel", "C. Van den Broeck" ], "title": "Statistical mechanics of learning", "venue": null, "year": 2001 }, { "authors": [ "E. Facco", "M. d’Errico", "A. Rodriguez", "A. Laio" ], "title": "Estimating the intrinsic dimension of datasets by a minimal neighborhood information", "venue": "Scientific reports,", "year": 2017 }, { "authors": [ "S. Franz", "S. Hwang", "P. Urbani" ], "title": "Jamming in multilayer supervised learning models", "venue": "arXiv preprint arXiv:1809.09945,", "year": 2018 }, { "authors": [ "M. Gabrié", "A. Manoel", "C. Luneau", "N. Macris", "F. Krzakala", "L. Zdeborová" ], "title": "Entropy and mutual information in models of deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "P. Grassberger", "I. Procaccia" ], "title": "Measuring the strangeness of strange attractors", "venue": "Physica D: Nonlinear Phenomena,", "year": 1983 }, { "authors": [ "M. Hein", "J.-Y. Audibert" ], "title": "Intrinsic dimensionality estimation of submanifolds in r d", "venue": "In Proceedings of the 22nd international conference on Machine learning,", "year": 2005 }, { "authors": [ "J. Hestness", "S. Narang", "N. Ardalani", "G.F. Diamos", "H. Jun", "H. Kianinejad", "M.M.A. Patwary", "Y. Yang", "Y. Zhou" ], "title": "Deep learning scaling is predictable", "venue": "empirically. CoRR,", "year": 2017 }, { "authors": [ "A. Jacot", "F. Gabriel", "C. Hongler" ], "title": "Neural tangent kernel: Convergence and generalization in neural networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "E. Levina", "P.J. Bickel" ], "title": "Maximum likelihood estimation of intrinsic dimension", "venue": "In Advances in neural information processing systems,", "year": 2005 }, { "authors": [ "U. v. Luxburg", "O. Bousquet" ], "title": "Distance-based classification with lipschitz functions", "venue": "Journal of Machine Learning Research,", "year": 2004 }, { "authors": [ "G. 
Matheron" ], "title": "Principles of geostatistics", "venue": "Economic geology,", "year": 1963 }, { "authors": [ "R. Monasson", "R. Zecchina" ], "title": "Weight space structure and internal representations: a direct approach to learning and generalization in multilayer neural networks", "venue": "Physical review letters,", "year": 1995 }, { "authors": [ "M. Opper", "D. Saad" ], "title": "Advanced mean field methods: Theory and practice", "venue": "MIT press,", "year": 2001 }, { "authors": [ "A. Rozza", "G. Lombardi", "C. Ceruti", "E. Casiraghi", "P. Campadelli" ], "title": "Novel high intrinsic dimensionality estimators", "venue": "Machine learning,", "year": 2012 }, { "authors": [ "A. Rudi", "L. Rosasco" ], "title": "Generalization properties of learning with random features", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "D. Saad", "S.A. Solla" ], "title": "On-line learning in soft committee machines", "venue": "Physical Review E,", "year": 1995 }, { "authors": [ "B. Scholkopf", "A.J. Smola" ], "title": "Learning with kernels: support vector machines, regularization, optimization, and beyond", "venue": "MIT press,", "year": 2001 }, { "authors": [ "A.J. Smola", "B. Schölkopf", "K.-R. Müller" ], "title": "The connection between regularization operators and support vector kernels", "venue": "Neural networks,", "year": 1998 }, { "authors": [ "M.L. Stein" ], "title": "Predicting random fields with increasing dense observations", "venue": "The Annals of Applied Probability,", "year": 1999 }, { "authors": [ "M.L. Stein" ], "title": "Interpolation of spatial data: some theory for kriging", "venue": "Springer Science & Business Media,", "year": 1999 }, { "authors": [ "C.K. Williams", "C.E. Rasmussen" ], "title": "Gaussian processes for machine learning, volume", "venue": null, "year": 2006 }, { "authors": [ "L. Zdeborová", "F. Krzakala" ], "title": "Statistical physics of inference: Thresholds and algorithms", "venue": "Advances in Physics,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "In supervised learning machines learn from a finite collection of n training data, and their generalization error is then evaluated on unseen data drawn from the same distribution. How many data are needed to learn a task is characterized by the learning curve relating generalization error to n. In various cases, the generalization error decays as a power law n−β , with an exponent β that depends on both the data and the algorithm. In (Hestness et al., 2017) β is reported for state-of-the-art (SOTA) deep neural networks for various tasks: in for neural-machine translation β ≈ 0.3–0.36 (for fixed model size) or β ≈ 0.13 (for best-fit models at any n); language modeling shows β ≈ 0.06–0.09; in speech recognition β ≈ 0.3; SOTA models for image classification (on ImageNet) have exponents β ≈ 0.3–0.5. Currently there is no available theory of deep learning to rationalize these observations. Recently it was shown that for a proper initialization of the weights, deep learning in the infinite-width limit (Jacot et al., 2018) converges to kernel learning. Moreover, it is nowadays part of the lore that there exist kernels whose performance is nearly comparable to deep networks (Bruna and Mallat, 2013; Arora et al., 2019), at least for some tasks. It is thus of great interest to understand the learning curves of kernels. For regression, if the target function being learned is simply assumed to be Lipschitz, then the best guarantee is β = 1/d (Luxburg and Bousquet, 2004; Bach, 2017) where d is the data dimension. Thus for large d, β is very small: learning is completely inefficient, a phenomenon referred to as the curse of dimensionality. As a result, various works on kernel regression make the much stronger assumption that the training points are sampled from a target function that belongs to the reproducing kernel Hilbert space (RKHS) of the kernel (see for example (Smola et al., 1998)). With this assumption β does not depend on d (for instance in (Rudi and Rosasco, 2017) β = 1/2 is guaranteed). Yet, RKHS is a very strong assumption which requires the smoothness of the target\nfunction to increase with d (Bach, 2017) (see more on this point below), which may not be realistic in large dimensions.\nIn this work we compute β empirically for kernel methods applied on MNIST and CIFAR10 datasets. We find βMNIST ≈ 0.4 and βCIFAR10 ≈ 0.1 respectively. Quite remarkably, we observe essentially the same exponents for regression and classification tasks, using either a Gaussian or a Laplace kernel. Thus the exponents are not as small as 1/d (d = 784 for MNIST, d = 3072 for CIFAR10), but neither are they 1/2 as one would expect under the RKHS assumption. These facts call for frameworks in which assumptions on the smoothness of the data can be intermediary between Lipschitz and RKHS. Here we propose such a framework for regression, in which the target function is assumed to be a Gaussian random field of zero mean with translation-invariant isotropic covariance KT (x). The data can equivalently be thought as being synthesized by a “Teacher” kernelKT (x). Learning is performed with a “Student” kernel KS(x) that minimizes the mean-square error. In general KT (x) 6= KS(x). In this set-up learning is very similar to a technique referred to as kriging, or Gaussian process regression, originally developed in the geostatistics community (Matheron, 1963; Stein, 1999b). 
To quantify learning, we first perform numerical experiments for data points distributed uniformly at random on a hypersphere of varying dimension d, focusing on a Laplace kernel for the Student, and considering a Laplace or Gaussian kernel for the Teacher. We observe that in both cases β(d) is a decreasing function.

To derive β(d) we consider the simplified situation where the Gaussian random field is sampled at training points lying on a regular lattice. Building on the kriging literature (Stein, 1999b), we show that β is controlled by the high-frequency scaling of both the Teacher and Student kernels: assuming that the Fourier transforms of the kernels decay as $\tilde{K}_T(w) = c_T \|w\|^{-\alpha_T} + o(\|w\|^{-\alpha_T})$ and $\tilde{K}_S(w) = c_S \|w\|^{-\alpha_S} + o(\|w\|^{-\alpha_S})$, we obtain

$$\beta = \frac{1}{d}\,\min(\alpha_T - d,\; 2\alpha_S). \qquad (1)$$

Importantly, (i) Eq. (1) leads to a prediction for β(d) that accurately matches our numerical study for random training data points, leading to the conjecture that Eq. (1) holds in that case as well. We offer the following interpretation: ultimately, kernel methods are performing a local interpolation whose quality depends on the distance δ(n) between adjacent data points. δ(n) is asymptotically similar for random data or data sitting on a lattice. (ii) If the kernel KS is not too sensitive to high frequencies, then learning is optimal as far as scaling is concerned, and β = (αT − d)/d. We will argue that the smoothness index s ≡ [(αT − d)/2] characterizes the number of derivatives of the target function that are continuous. We thus recover the curse of dimensionality: s needs to be of order d to have non-vanishing β in large dimensions. Point (ii) leads to an apparent paradox: β is significant for MNIST and CIFAR10, for which d is a priori very large, leading to a smoothness value s in the hundreds in both cases, which appears unrealistic. The paradox is resolved by considering that real datasets actually live on lower-dimensional manifolds. As far as kernel learning is concerned, our findings support that the correct definition of dimension should be based on how the nearest-neighbors distance δ(n) scales with n: δ(n) ∼ n^{−1/d_eff}. Direct measurements of δ(n) support that MNIST and CIFAR10 live on manifolds of lower dimensions d_eff^MNIST ≈ 15 and d_eff^CIFAR10 ≈ 35. Considering the effective dimensions that we find, the observed values for β would be obtained for Gaussian fields of smoothness s_MNIST ≈ 3 and s_CIFAR10 ≈ 1, values that appear intuitively more reasonable. More generally, this analogy with Gaussian fields allows one to associate a smoothness index s to any dataset once β and d_eff are measured, which may turn out to be a useful characterization of data complexity in the future." }, { "heading": "2 RELATED WORKS", "text": "Our set-up of Teacher-Student learning with kernels is also referred to as kriging, or Gaussian process regression, and it was originally developed in the geostatistics community (Matheron, 1963). In Section 5 we present a theorem that gives the rate at which the test error decreases as we increase the number of training points, assumed to lie on a high-dimensional regular lattice. Similar results have been previously derived in the kriging literature (Stein, 1999b) when sampling occurs on the regular lattice with the exception of the origin, where the inference is made. Here we propose an alternative derivation that some readers might find simpler.
We also study a slightly different problem: instead of computing the test error when the inference is carried out at the origin, we compute the average error for a test point that lies at an arbitrary location, sampled uniformly at random and not necessarily on the lattice.\nIn what follows we show, via extensive numerical simulations, that such predictions are accurate even when the training points do not lie on a regular lattice, but are taken at random on a hypersphere. An exact proof of our result in such a general setting is difficult and cannot be found even in the kriging literature. To our knowledge the results that come closest to this point are those discussed in (Stein, 1999a), where the author studies one-dimensional processes where the training data are not necessarily evenly spaced.\nIn this work the effective dimension of the data plays an important role, as it controls how the distance between nearest neighbors scales with the dataset size. Of course, there exists a vast literature (Grassberger and Procaccia, 1983; Costa and Hero, 2004; Hein and Audibert, 2005; Levina and Bickel, 2005; Rozza et al., 2012; Facco et al., 2017; Allegra et al., 2019) devoted to the study of effective dimensions, where other definitions are analyzed. The effective dimensions that we find are compatible with those obtained with more refined methods." }, { "heading": "3 LEARNING CURVE FOR KERNEL METHODS APPLIED TO REAL DATA", "text": "In what follows we apply kernel methods to the MNIST and CIFAR10 datasets, each consisting of a set of images (xµ)nµ=1. We simplify the problem by considering only two classes whose labels Z(xµ) = ±1 correspond to odd and even numbers for MNIST, and to two groups of 5 classes in CIFAR10. The goal is to infer the value of the label ẐS(x) of an image x that does not belong to the dataset. The S subscript reminds us that inference is performed using a positive definite kernel KS. We perform inference in both a regression and a classification setting. The following algorithms and associated results can be found in (Scholkopf and Smola, 2001).\nRegression. Learning corresponds to minimizing a mean-square error:\nmin ∑nµ=1 [ẐS(xµ) − Z(xµ)]2. (2)\nFor algorithms seeking solutions of the form ẐS(x) = ∑µ aµ KS(xµ, x) ≡ a · kS(x), minimizing the mean-square loss over the vector a yields:\nẐS(x) = kS(x) · K−1S Z, (3)\nwhere the vector Z contains all the labels in the training set, Z ≡ (Z(xµ))nµ=1, and KS,µν ≡ KS(xµ, xν) is the Gram matrix. The Gram matrix is always invertible if the kernel KS is positive definite. The generalization error is then evaluated as the expected mean-square error on unseen data, estimated by averaging over a test set composed of ntest unseen data points:\nMSE = (1/ntest) ∑ntestµ=1 [ẐS(xµ) − Z(xµ)]2. (4)\nClassification. We perform kernel classification via the soft-margin SVM algorithm. The details can be found in Appendix A. After learning from the training data with a Student kernel KS, performance is evaluated via the generalization error. It is estimated as the fraction of incorrectly predicted labels for data points belonging to a test set with ntest elements.\nIn Fig. 1 we present the learning curves for (binary) MNIST and CIFAR10, for regression and classification. Learning is performed both with a Gaussian kernel K(x) ∝ exp(−||x||2/(2σ2)) and a Laplace one K(x) ∝ exp(−||x||/σ). 
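To make the regression pipeline of Eqs. (2)–(4) concrete, here is a self-contained toy sketch (our own illustration; the synthetic binary target, the kernel width, and the tiny jitter added for numerical stability are assumptions — the actual experiments use MNIST/CIFAR10 images and labels). The exponent β is then read off from the slope of log MSE against log n:

```python
import numpy as np

def laplace_kernel(X, Y, sigma):
    # pairwise Laplace kernel exp(-||x - y|| / sigma)
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-np.sqrt(np.maximum(d2, 0.0)) / sigma)

rng = np.random.default_rng(0)
d, sigma, n_test = 10, 10.0, 500
f = lambda X: np.sign(X[:, 0])          # stand-in binary target in place of real labels

for n in [100, 200, 400, 800, 1600]:
    Xtr, Xte = rng.standard_normal((n, d)), rng.standard_normal((n_test, d))
    K = laplace_kernel(Xtr, Xtr, sigma)
    a = np.linalg.solve(K + 1e-10 * np.eye(n), f(Xtr))   # Eq. (3): a = K^{-1} Z
    pred = laplace_kernel(Xte, Xtr, sigma) @ a
    print(n, np.mean((pred - f(Xte))**2))                # Eq. (4): test MSE vs n
```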
Remarkably, the power laws in the two tasks are essentially identical (although the estimated exponent appears to be slightly larger, in absolute value, for classification). Moreover, the two kernels display a very similar behavior, compatible with the same exponent: about −0.4 for MNIST and −0.1 for CIFAR10. The presented data are for σ = 1000; in Appendix B we show that the same behaviour is observed for different values." }, { "heading": "4 GENERALIZATION SCALING IN KERNEL TEACHER-STUDENT PROBLEMS", "text": "We study β in a simplified setting where the data is assumed to follow a Gaussian distribution with known covariance. It falls into the class of Teacher-Student problems, which are characterized by a machine (the Teacher) that generates the data, and another machine (the Student) that tries to learn from them. The Teacher-Student paradigm has been broadly used to study supervised learning (Saad and Solla, 1995; Monasson and Zecchina, 1995; Opper and Saad, 2001; Engel and Van den Broeck, 2001; Zdeborová and Krzakala, 2016; Barbier et al., 2019; Gabrié et al., 2018; Aubin et al., 2018; Franz et al., 2018). Here we restrict our attention to kernel methods: we assume that a target function is distributed according to a Gaussian random field Z ∼ N(0, KT) — the Teacher — characterized by a translation-invariant isotropic covariance function KT(x, x′) = KT(||x − x′||), and that the training dataset consists of the finite set of n observations Z = (Z(xµ))nµ=1. This is equivalent to saying that the vector of training points follows a centered Gaussian distribution with a covariance matrix that depends on KT and on the location of the points (xµ)nµ=1:\nZ ∼ N(0, KT), where KT = (KT(xµ, xν))nµ,ν=1. (5)\nOnce the Teacher has generated the dataset, the rest follows as in the kernel regression described in the previous section. We use another translation-invariant isotropic kernel KS(x, x′) — the Student — to infer the value of the field at another point, ẐS(x), with a regression task, i.e. minimizing the mean-square error in Eq. (2). The solution is therefore given again by Eq. (3).\nFig. 2 (a-b) shows the mean-square error obtained numerically. In the examples the Student is always taken to be a Laplace kernel, and the Teacher is either a Laplace kernel or a Gaussian kernel. The points (xµ)nµ=1 are taken uniformly at random on the unit d-dimensional hypersphere for several dimensions d and for several dataset sizes n. We take σS = σT = d as we observed that with this choice smaller datasets were enough to approach a limiting curve — in Appendix C we show the plots for the case σS = σT = 10, which appears to converge to the same limit curve with increasing n, but at a slower pace. The figure shows that when n is large enough, the mean-square error behaves as a power law (dashed lines) with an exponent that depends on the spatial dimension of the data, as well as on the kernels. The fitted exponents are plotted in Fig. 2 (c-d) as a function of the spatial dimension d for different dataset sizes n. In the next section we will discuss the theoretical prediction, which is plotted in the figure as a thick black line. The figure shows that as the dataset gets bigger, the asymptotic exponent tends to our prediction. In Appendix D we present the learning curves of Gaussian Students with both a Laplace and a Gaussian Teacher. When both kernels are Gaussian the test error decays exponentially fast, a result that matches our theoretical prediction. 
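The Teacher-Student experiment itself reduces to a few linear-algebra steps: sample Z ∼ N(0, KT) jointly on the train and test points (Eq. 5), then regress with the Student via Eq. (3). A minimal sketch with σT = σS = d, as above (our own illustration; the jitter terms are a numerical-stability assumption):

```python
import numpy as np

def kernel(X, Y, sigma, kind):
    d2 = np.maximum(np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T, 0.0)
    return np.exp(-np.sqrt(d2) / sigma) if kind == "laplace" else np.exp(-d2 / (2 * sigma**2))

rng = np.random.default_rng(0)
d, n, n_test, sigma = 4, 512, 256, 4.0
X = rng.standard_normal((n + n_test, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)          # points uniform on the unit hypersphere

K_T = kernel(X, X, sigma, "gaussian")                  # Teacher covariance, Eq. (5)
Z = np.linalg.cholesky(K_T + 1e-10 * np.eye(len(X))) @ rng.standard_normal(len(X))

K_S = kernel(X[:n], X[:n], sigma, "laplace")           # Student Gram matrix
a = np.linalg.solve(K_S + 1e-10 * np.eye(n), Z[:n])    # Eq. (3)
pred = kernel(X[n:], X[:n], sigma, "laplace") @ a
print("test MSE:", np.mean((pred - Z[n:])**2))
```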
In Appendix E we also provide further numerical results for the case where the Teacher kernel is a Matérn kernel (as defined therein)." }, { "heading": "5 ANALYTIC ASYMPTOTICS FOR THE KERNEL TEACHER-STUDENT PROBLEM ON A LATTICE", "text": "In this section we compute analytically the exponent that describes the asymptotic decay of the generalization error when the number n of training data increases. In order to derive the result we assume that the Teacher Gaussian random field lives on a bounded hypercube, x ∈ V ≡ [0, L]d, where L is a constant and d is the spatial dimension. The fields and the kernels can then be thought of as L-periodic along each dimension. Furthermore, to make the problem tractable we assume that the points (xµ)nµ=1 live on a regular lattice, covering all the hypercube V. Therefore, the linear spacing between neighboring points is δ = Ln−1/d. This is of course a different setting than the one used in the numerical simulations presented in the previous section, yet our results below support that these differences do not matter.\nGeneralization error is then evaluated via the typical mean-square error\nEMSE = E[Z(x) − ẐS(x)]2, (6)\nwhere the expectation is taken over both the Teacher process and the point x at which we estimate the field, assumed to be uniformly distributed in the hypercube V. In Appendix F we prove the following:\nTheorem 1. Let K̃T(w) = cT||w||−αT + o(||w||−αT) and K̃S(w) = cS||w||−αS + o(||w||−αS) as ||w|| → ∞, where K̃T(w) and K̃S(w) are the Fourier transforms of the kernels KT(x), KS(x) respectively, assumed to be positive definite. We assume that K̃T(w) and K̃S(w) have a finite limit as ||w|| → 0 and that K(0) < ∞. Then,\nEMSE = n−β + o(n−β) with β = (1/d) min(αT − d, 2αS). (7)\nMoreover, in the case of a Gaussian kernel the result remains valid if we take the corresponding exponent to be α = ∞.\nApart from the specific value of the exponent in Eq. (7), Theorem 1 implies that if the Student kernel decays fast enough in the frequency domain, then β depends only on the data through the behaviour of the Teacher kernel at high frequencies. One then recovers β = (αT − d)/d, also found for the Bayes-optimal setting where the Student is identical to the Teacher.\nConsider the predictions of Theorem 1 in the cases presented in Fig. 2 (a-b) of Gaussian and Laplace kernels. If both kernels are Laplace kernels then αT = αS = d + 1 and EMSE ∼ n−1/d, which scales very slowly with the dataset size in large dimensions. If the Teacher is a Gaussian kernel (αT = ∞) and the Student is a Laplace kernel then β = 2(1 + 1/d), leading to β → 2 as d → ∞. In Fig. 2 (c-d) we compare these predictions with the exponents extracted from Fig. 2 (a-b). We plot log EMSE / log n ≡ −β against the dimension d of the data, varying the dataset size n. The exponents extracted numerically tend to our analytical predictions when n is large enough.\nNotice that, although the theory and the experiments do not assume the same distribution for the sampling points (xµ)nµ=1, this does not seem to yield any difference in the asymptotic behavior of the generalization error, leading to the conjecture that our predictions are exact even when the training set is random and does not correspond to a lattice. The conjecture can be proven in one dimension following results of the kriging literature (Stein, 1999a), but generalization to higher d is a much harder problem. 
Intuitively, kernel learning performs an interpolation whose quality is governed by the smoothness of the target function and the typical distance δmin between a point and its nearest neighbors in the training set. Both for random points and for points on a lattice, one has δmin ∼ n−1/d when n is large enough, thus both situations lead to the same β.\nTheorem 1 underlines that kernel methods are subjected to the curse of dimensionality. Indeed for appropriate Students, one obtains β = (αT − d)/d. Let us define the smoothness index s ≡ [(αT − d)/2] = βd/2, which must be O(d) to avoid β → 0 for large d. The two Lemmas below, derived in the Appendix, indicate that the target function is s times differentiable (in a mean-square sense). Thus learning with kernels in very large dimension can only occur if the target function is O(d) times differentiable, a condition that appears very restrictive in large d.\nLemma 1. Let K(x, x′) be a translation-invariant isotropic kernel such that K̃(w) = c||w||−α + o(||w||−α) as ||w|| → ∞ and ||w||dK̃(w) → 0 as ||w|| → 0. If α > d + n for some n ∈ Z+, then K(x) ∈ Cn, that is, it is at least n-times differentiable. (Proof in Appendix G).\nLemma 2. Let Z ∼ N(0, K) be a d-dimensional Gaussian random field, with K ∈ C2n being a 2n-times differentiable kernel. Then Z is n-times mean-square differentiable in the sense that\n• derivatives of Z(x) are Gaussian random fields;\n• E ∂x1^{n1} · · · ∂xd^{nd} Z(x) = 0;\n• E ∂x1^{n1} · · · ∂xd^{nd} Z(x) · ∂x1^{n′1} · · · ∂xd^{n′d} Z(x′) = ∂x1^{n1+n′1} · · · ∂xd^{nd+n′d} K(x − x′) < ∞ if the derivatives of K exist.\nIn particular, E ∂xi^m Z(x) · ∂xi^m Z(x′) = ∂xi^{2m} K(x − x′) < ∞ ∀m ≤ n. (Proof in Appendix G)." }, { "heading": "6 EFFECTIVE DIMENSION OF DATA SETS", "text": "If we approximate the high-dimensional MNIST and CIFAR10 datasets with Gaussian random fields, then to obtain the curves shown in Fig. 1 and the values that we report for β, these fields would have to be hundreds of times differentiable, which seems unrealistic. A possible resolution of this paradox lies in the fact that the data live on a manifold of much lower dimension than the number of pixels of these pictures would suggest. As argued above, a key determinant of kernel performance is the typical distance δmin between a point in the training set and its nearest neighbor. We define the effective dimension deff accordingly from the asymptotic relationship between δmin and n:\nδmin ∼ n−1/deff. (8)\nFor random points on a d-dimensional hypersphere δmin displays fluctuations, and the scaling is valid only on average and only asymptotically, that is for n larger than some characteristic scale n⋆(d) that depends on the spatial dimension. In Fig. 3 (a) we show how the typical δmin scales with the dataset size n for random points on hyperspheres of dimension d = 15 and d = 35. Notice that while for d = 15 the asymptotic regime is reached when n ≈ 10^4, for d = 35 a larger dataset is needed, with n > 10^5 points (that is about the maximum size of the dataset that we can use to apply kernel methods in our simulations, due to memory constraints). One can naturally wonder whether real data are also subject to a scaling relation like in Eq. (8), from which an effective dimension can be defined. Consider for instance the MNIST dataset, and sample from it a subset of n pictures. For each point we can compute the distance from its nearest neighbor and average such quantities (a minimal sketch of this estimate is given below). 
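The sketch below is our own illustration of the nearest-neighbor estimate of deff (the brute-force search and the modest n values are assumptions of the illustration; as noted above, small n can bias the fitted slope before the asymptotic regime is reached):

```python
import numpy as np

def mean_nn_distance(X):
    # average nearest-neighbor distance for unit-norm points: ||x - y||^2 = 2 - 2 x.y
    d2 = np.maximum(2.0 - 2.0 * X @ X.T, 0.0)
    np.fill_diagonal(d2, np.inf)
    return np.sqrt(d2.min(axis=1)).mean()

rng = np.random.default_rng(0)
d = 15
ns, deltas = [250, 500, 1000, 2000, 4000], []
for n in ns:
    X = rng.standard_normal((n, d))
    X /= np.linalg.norm(X, axis=1, keepdims=True)   # uniform points on the hypersphere
    deltas.append(mean_nn_distance(X))

slope = np.polyfit(np.log(ns), np.log(deltas), 1)[0]
print("estimated d_eff:", -1.0 / slope)             # from delta_min ~ n^{-1/d_eff}, Eq. (8)
```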
As the number of data points increases, we expect that this measure characterizes the geometry of the local manifold on which the data live, since nearest neighbors are going to be closer and closer. In Fig. 3 (b) we present how δmin scales with n for the MNIST and CIFAR10 datasets. Both display a power-law decay, but the exponent is not compatible with 1/d with d the spatial dimension, namely d = 784 for MNIST and d = 3072 for CIFAR10. MNIST actually seems to scale much like random data on a hypersphere with d = 15, and CIFAR10 scales approximately as random data on a hypersphere with d = 35. For this reason, the effective dimensions of these datasets are consistent with:\ndeffMNIST ≈ 15, deffCIFAR10 ≈ 35.\nObviously, the intrinsic dimension of the data could vary in data space, as has been reported for MNIST (Costa and Hero, 2004; Hein and Audibert, 2005; Rozza et al., 2012; Facco et al., 2017). In this qualitative discussion we neglect such subtle effects. Interestingly, our effective dimensions lead to reasonable values for the effective smoothness:\nseff = βdeff/2.\nIn particular we find s ≈ 3 for MNIST and s ≈ 1 for CIFAR10." }, { "heading": "7 CONCLUSION", "text": "In this work we have shown for CIFAR10 and MNIST that kernel regression and classification display a power-law decay in the learning curves, quite remarkably with essentially the same exponent β for both tasks, found to be larger for MNIST. These exponents are much larger than the β = 1/d expected for Lipschitz target functions and smaller than the β = 1/2 expected for RKHS target functions. This observation led us to introduce a framework in which data are modeled as Gaussian random fields of varying smoothness, in which intermediary values of β are obtained.\nIt is important to note the high degree of smoothness underlying the RKHS hypothesis. Consider realizations Z(x) of a Teacher Gaussian process with covariance KT and assume that they lie in the RKHS of the Student kernel KS, namely\nE||Z||2KS = E ∫ dx dy Z(x) K−1S(x − y) Z(y) = ∫ dw K̃T(w) K̃−1S(w) < ∞. (9)\nIf the Teacher and Student kernels decay in the frequency domain with exponents αT and αS respectively, convergence requires αT > αS + d, and KS(0) ∝ ∫ dw K̃S(w) < ∞ (true for many commonly used kernels) implies αS > d. Then using Lemma 1 and Lemma 2 we can conclude that the realizations Z(x) must be at least ⌊d/2⌋-times mean-square differentiable to belong to the RKHS. From this perspective, the RKHS assumption appears to be very strong, and thus may not provide an accurate description of various empirical learning curves. Our assumption that data are generated by Gaussian random processes is milder, and may thus have broader applications. Yet, we view this approximation as a first step on which to build, to later include other effects such as noise in the data and deviations from Gaussianity." }, { "heading": "ACKNOWLEDGMENTS", "text": "Anonymized for double-blind review." }, { "heading": "A SOFT-MARGIN SUPPORT VECTOR MACHINES", "text": "The kernel classification task is performed via the algorithm known as soft-margin Support Vector Machine.\nWe want to find a function ẐS(x) such that its sign correctly predicts the label of the data. In this context we model such a function as a linear prediction after projecting the data onto a feature space via x → φ(x):\nẐS(x) = w · φS(x) + b, (10)\nwhere w, b are parameters to be learned from the training data. The kernel is related to the feature space via KS(x, x′) = φS(x) · φS(x′). 
We require that Z(xµ)ẐS(xµ) ≥ 1 − ξµ for all training points. Ideally we would like all margins to equal 1 (i.e. ξµ = 0), but we allow some of them to be smaller by introducing the slack variables ξµ and penalizing large values. To achieve this the following constrained minimization is performed:\nmin over w, b, ξ of (1/2)||w||2 + C ∑µ ξµ, subject to ∀µ: Z(xµ)[w · φS(xµ) + b] ≥ 1 − ξµ, ξµ ≥ 0. (11)\nThis problem can be expressed in a dual formulation as\nmin over a of (1/2) a · QSa − ∑nµ=1 aµ, subject to Z · a = 0, 0 ≤ aµ ≤ C, (12)\nwhere QS,µν = Z(xµ)Z(xν)KS(xµ, xν) and Z is the vector of the labels of the training points. Here C (= 10^4 in our simulations) controls the trade-off between minimizing the training error and maximizing the margins. For the details we refer to (Scholkopf and Smola, 2001). If a⋆ is the solution to the minimization problem, then\nw⋆ = ∑µ a⋆µ φS(xµ), (13)\nb⋆ = Z(xµ) − ∑ν a⋆ν Z(xν) KS(xµ, xν) for any µ such that a⋆µ < C. (14)\nThe predicted label for unseen data points is then\nsign(ẐS(x)) = sign(∑µ Z(xµ) a⋆µ KS(xµ, x) + b⋆). (15)\nThe generalization error is now defined as the probability that an unseen image has a predicted label different from the true one, and such a probability is again estimated as an average over a test set with ntest elements:\nError = (1/ntest) ∑ntestµ=1 θ[−sign(ẐS(xµ)) Z(xµ)]. (16)" }, { "heading": "B DIFFERENT KERNEL VARIANCES", "text": "In Fig. 4 we show the learning curves for kernel regression on the MNIST (parity) dataset — the same setting as in Fig. 1 (a). Several Laplace kernels of varying variance σ are used. The variance ranges over several orders of magnitude and the learning curves all decay with the same exponent, although for σ = 10 the algorithm achieves suboptimal performance and the test errors are increased by some factor." }, { "heading": "C DIFFERENT CHOICE OF KERNEL VARIANCES", "text": "In Fig. 5 we show the learning curves for the Teacher-Student kernel regression problem, with a Student kernel that is always Laplace and a Teacher that can be either Gaussian or Laplace. We show how the test error decays with the size of the training dataset and how the asymptotic exponent depends on the spatial dimension. Every experiment is run with two different choices of the kernel variances: in one case σT = σS = d and in the other σT = σS = 10. We observed that scaling the variances with the spatial dimension leads faster to the results that we predicted in this paper, but overall the choice has little effect on the exponents (both tend towards the prediction as the dataset size is increased).\n[Figure 5 panels: MSE versus n for Gaussian-Laplace and Laplace-Laplace Teacher-Student pairs with σ = d and σ = 10 (d = 4, 8, 16, 32), and fitted exponents log MSE / log n versus d for several n, compared with the predictions MSE ∼ n−2(1+1/(d−1)) (Gaussian-Laplace) and MSE ∼ n−1/(d−1) (Laplace-Laplace).]\nFigure 5: In these plots we show the results for the Teacher-Student kernel regression. The Student is always a Laplace kernel, the Teacher is either Gaussian or Laplace. 
The four plots on the left depict the mean-square error against the size of the dataset for different spatial dimensions of the data; those on the right show the fitted asymptotic exponent against the spatial dimension for different dataset sizes. For every case we show both the results for σT = σS = d and σT = σS = 10.\nFigure 6: Left: The test error of a Laplace Teacher (αT = d + 1) with a Gaussian Student (αS = ∞) decays as a power law with the predicted exponent β = (1/d) min(1, ∞) = 1/6 in d = 6 dimensions. Center: When both the Teacher and the Student are Gaussian the test error decays faster than any power law as the number n of data is increased. This plot confirms this by showing that the logarithm of the test error decays linearly as a function of n^{1/3}. Right: Comparison between the learning curves for the cases where both kernels are either Laplace (top blue line) or Gaussian (bottom orange line). While the former decays algebraically with the predicted exponent, the latter decays exponentially, in agreement with the prediction β = ∞ found within our framework. In all these plots we have taken the variances of both the Teacher and Student kernels to be equal to the dimension d = 6." }, { "heading": "D GAUSSIAN STUDENTS", "text": "In this appendix we present the learning curves of Gaussian Students: the Fourier transform of these kernels decays faster than any power law and one can effectively consider αS = ∞. If the Teacher is Laplace (αT = d + 1) then the predicted exponent is finite and takes the value β = (1/d) min(αT − d, 2αS) = (1/d) min(1, ∞) = 1/d. Such a case is displayed in Fig. 6 (left) in dimension d = 6. However, if we consider the Teacher to be Gaussian as well, then the predicted exponent would be β = (1/d) min(∞, ∞) = ∞. This case corresponds to Fig. 6 (center): the test error decays faster than a power law. In Fig. 6 (right) we compare the case where both kernels are Gaussian to the case where both kernels are Laplace: while the latter decays as a power law, the former decays much faster." }, { "heading": "E MATÉRN TEACHERS", "text": "To further test the applicability of our theory, we show here some numerical simulations for a Teacher kernel that is a Matérn covariance function and a Laplace kernel as Student. We ran the simulations in 1d: the data points are sampled uniformly on a 1-dimensional circle embedded in R2. Matérn kernels are parametrized by a parameter ν > 0:\nKT(x) = (2^{1−ν}/Γ(ν)) z^ν Kν(z), (17)\nwhere z = √(2ν) ||x||/σ (σ being the kernel variance), Γ is the gamma function and Kν is the Bessel function of the second kind with parameter ν. Interestingly we recover the Laplace kernel for ν = 1/2 and the Gaussian kernel for ν = ∞. As one can find in e.g. (Williams and Rasmussen, 2006), the exponent αT that governs the decay at high frequency of these kernels is αT = d + 2ν. Varying ν we can change the smoothness of the target function.\nFor d = 1 our prediction for the learning curve exponent β is\nβ = (1/d) min(αT − d, 2αS) = min(2ν, 4). (18)\nIn Fig. 7 we verify that our prediction matches the numerical results." }, { "heading": "F PROOF OF THEOREM", "text": "We prove here Theorem 1:\nTheorem 1. Let K̃T(w) = cT||w||−αT + o(||w||−αT) and K̃S(w) = cS||w||−αS + o(||w||−αS) as ||w|| → ∞, where K̃T(w) and K̃S(w) are the Fourier transforms of the kernels KT(x), KS(x) respectively, assumed to be positive definite. We assume that K̃T(w) and K̃S(w) have a finite limit as ||w|| → 0 and that K(0) < ∞. Then,\nEMSE = n−β + o(n−β) with β = (1/d) min(αT − d, 2αS). 
(19)\nMoreover, in the case of a Gaussian kernel the result remains valid if we take the corresponding exponent to be α = ∞.\nProof. Our strategy is to compute how the mean-square test error scales with the distance δ between two nearest neighbors on the d-dimensional regular lattice. At the end, we will use the fact that δ ∝ n−1/d, where n is the number of sampled points on the lattice.\nWe denote by F̃(w) the Fourier transform of a function F : V → R:\nF̃(w) = L−d/2 ∫V dx e−iw·x F(x), where w ∈ L ≡ (2π/L) Zd, (20)\nF(x) = L−d/2 ∑w∈L eiw·x F̃(w). (21)\nIf Z ∼ N(0, K) is a Gaussian field with translation-invariant covariance K then by definition\nEZ(x)Z(x′) = K(x − x′). (22)\nProperties of the Fourier transform of a Gaussian field:\nK̃(w) = K̃(−w) ∈ R, (23)\nEZ̃(w) = 0, (24)\nEZ̃(w)Z̃(w′)∗ = Ld/2 δww′ K̃(w). (25)\nEq. (23) comes from the fact that K(x) is an even, real-valued function. The real and imaginary parts of Z̃(w) are Gaussian random variables. They are all independent except that Z̃(−w) = Z̃(w)∗. Eq. (25) follows from the fact that Z(x) and K(x) are L-periodic functions, and therefore eiw·x K̃(w) is the Fourier transform of K(· + x) if w ∈ (2π/L) Zd.\nThe solution Eq. (3) for kernel regression has two interpretations. In Section 4 we introduced it as the quantity that minimizes a quadratic error, but it can also be seen as the maximum-a-posteriori (MAP) estimation of another formulation of the problem (Williams and Rasmussen, 2006). The field Z(x) is assumed to be drawn from a Gaussian distribution with covariance function KS(x): KS therefore plays a role in the prior distribution of the data Z = (Z(xµ))nµ=1. Inference about the value of the field ẐS(x) at another location is then performed by maximizing its posterior distribution,\nẐS(x) ≡ arg max P(Z(x)|Z). (26)\nSuch a posterior distribution is Gaussian, and its mean — and therefore also the value that maximizes the probability — is exactly Eq. (3):\nẐS(x) = kS(x) · K−1S Z, (27)\nwhere Z = (Z(xµ))nµ=1 are the training data, kS(x) = (KS(xµ, x))nµ=1 and KS = (KS(xµ, xν))nµ,ν=1 is the Gram matrix, which is invertible since the kernel KS is assumed to be positive definite. By Fourier transforming this relation we find\nZ̃S(w) = Z̃⋆(w) K̃S(w) / K̃⋆S(w), (28)\nwhere we have defined F⋆(w) ≡ ∑n∈Zd F(w + 2πn/δ) for a generic function F.\nAnother way to reach Eq. (28) is to consider that we are observing the quantities\nZ̃⋆(w) ≡ δd L−d/2 ∑x∈lattice e−iw·x Z(x) ≡ ∑n∈Zd Z̃(w + 2πn/δ). (29)\nGiven that we know the prior distribution of the Fourier components on the right-hand side in Eq. (29), we can infer their posterior distribution once their sums are constrained by the value of Z̃⋆(w), and it is straightforward to see that we recover Eq. (28).\nThe mean-square error can then be written using the Parseval-Plancherel identity,\nEMSE = L−d E ∫V dx [Z(x) − ẐS(x)]2 = L−d E ∑w∈L |Z̃(w) − Z̃⋆(w) K̃S(w)/K̃⋆S(w)|2. (30)\nBy taking the expectation value with respect to the Teacher and using Eq. (23)-Eq. 
(25) we can write the mean-square error as\nEMSE = L−d E ∑w∈L [ Z̃(w)Z̃(w)∗ − 2 Z̃(w)∗Z̃⋆(w) K̃S(w)/K̃⋆S(w) + Z̃⋆(w)Z̃⋆(w)∗ K̃2S(w)/K̃⋆S(w)2 ]\n= L−d E ∑w∈L [ Z̃(w)Z̃(w)∗ − 2 (K̃S(w)/K̃⋆S(w)) ∑n∈Zd Z̃(w)∗Z̃(w + 2πn/δ) + (K̃2S(w)/K̃⋆S(w)2) ∑n,n′∈Zd Z̃(w + 2πn/δ)Z̃(w + 2πn′/δ)∗ ]\n= L−d/2 ∑w∈L [ K̃T(w) − 2 (K̃S(w)/K̃⋆S(w)) K̃T(w) + (K̃2S(w)/K̃⋆S(w)2) ∑n∈Zd K̃T(w + 2πn/δ) ]\n= L−d/2 ∑w∈L∩B [ K̃⋆T(w) − 2 [K̃TK̃S]⋆(w)/K̃⋆S(w) + K̃⋆T(w)[K̃2S]⋆(w)/K̃⋆S(w)2 ], (31)\nwhere B = [−π/δ, π/δ]d is the Brillouin zone.\nAt high frequencies, K̃T(w) = cT||w||−αT + o(||w||−αT) and K̃S(w) = cS||w||−αS + o(||w||−αS). Therefore:\nK̃⋆T(w) = K̃T(w) + δαT cT ∑n∈Zd\\{0} ||wδ + 2πn||−αT + o(||w||−αT) ≡ K̃T(w) + δαT cT ψT(wδ) + o(||w||−αT). (32)\nThis equation defines the function ψT, and a similar equation holds for the Student as well. The hypothesis KT(0) ∝ ∫ dw K̃T(w) < ∞ implies αT > d and therefore ∑n∈Zd ||n||−αT < ∞ (and likewise for the Student). Then, ψαT(0), ψαS(0) are finite; furthermore, the w's in the sum Eq. (31) are at most of order O(δ−1), therefore the terms ψα(wδ) are O(δ0) and do not influence how Eq. (31) scales with δ. Applying Eq. (32), expanding for δ ≪ 1 and keeping only the leading orders, we find\nEMSE = L−d/2 ∑w∈L∩B [ 2cT ψαT(wδ) δαT + c2S ψ2αS(wδ) (K̃T(w)/K̃2S(w)) δ2αS ] + o(δαT−d) + o(δ2αS−d). (33)\nWe have neglected terms proportional to, for instance, δαT+αS, since they are subleading with respect to δαT, but we must keep both δαT and δ2αS since we do not know a priori which one is dominant. The additional term δ−d in the subleading terms comes from the fact that |L ∩ B| = O(δ−d).\nThe first term in Eq. (33) is the simplest to deal with: since ||wδ|| is smaller than some constant for all w ∈ L ∩ B and the function ψαT(wδ) has a finite limit, we have\nδαT ∑w∈L∩B 2cT ψαT(wδ) = O(δαT |L ∩ B|) = O(δαT−d). (34)\nWe then split the second term in Eq. (33) in two contributions:\nSmall ||w||. We consider “small” all the terms w ∈ L ∩ B such that ||w|| < Γ, where Γ ≫ 1 is O(δ0) but large. As δ → 0, ψ2αS(wδ) → ψ2αS(0), which is finite because K(0) < ∞. Therefore\nδ2αS ∑w∈L∩B, ||w||<Γ c2S ψ2αS(wδ) K̃T(w)/K̃2S(w) → δ2αS c2S ψ2αS(0) ∑w∈L∩B, ||w||<Γ K̃T(w)/K̃2S(w). (35)\nThe summand is real and strictly positive because the positive definiteness of the kernels implies that their Fourier transforms are strictly positive. Moreover, as δ → 0, L ∩ B ∩ {||w|| < Γ} → L ∩ {||w|| < Γ}, which contains a finite number of elements, independent of δ. Therefore\nδ2αS ∑w∈L∩B, ||w||<Γ c2S ψ2αS(wδ) K̃T(w)/K̃2S(w) = O(δ2αS). (36)\nLarge ||w||. “Large” w are those with ||w|| > Γ: we recall that Γ ≫ 1 is O(δ0) but large. This allows us to approximate K̃T, K̃S in the sum with their asymptotic behavior:\nδ2αS ∑w∈L∩B, ||w||>Γ c2S ψ2αS(wδ) K̃T(w)/K̃2S(w) ∝ δ2αS ∑w∈L∩B, ||w||>Γ [ ||w||−αT+2αS + o(||w||−αT+2αS) ] ≈ δ2αS ∫Γ^{1/δ} dw wd−1−αT+2αS + o(·) = O(δmin(αT−d, 2αS)). (37)\nFinally, putting Eq. (34), Eq. (36) and Eq. (37) together,\nEMSE = O(δmin(αT−d, 2αS)). (38)\nThe proof is concluded by considering that δ = O(n−1/d).\nIn the case of a Gaussian kernel K(x) ∝ exp(−||x||2/(2σ2)) — and therefore K̃(w) ∝ exp(−σ2||w||2/2) — one has to redo the calculations starting from Eq. (31), but the final result can be easily recovered by taking the limit α → +∞ (Gaussian kernels decay faster than any power law)."
}, { "heading": "G PROOFS OF LEMMAS", "text": "Lemma 1 Let K(x, x′) be a translation-invariant isotropic kernel such that K̃(w) = c||w||−α + o (||w||−α) as ||w|| → ∞ and ||w||dK̃(w) → 0 as ||w|| → 0. If α > d + n for some n ∈ Z+, then K(x) ∈ Cn, that is, it is at least n-times differentiable.\nProof. The kernel is rotational invariant in real space (K(x) = K(||x||)) and therefore also in the frequency domain. Then, calling ̂1 = (1, 0, . . . ) the unitary vector along the first dimension x1,\nK(x) ∝ ∫ dw eiw·̂1xK̃(||w||). (39)\nIt follows that |∂mK(x)| ∝ ∣∣∣∣∫ dw (w · ̂1)meiw·̂1xK̃(||w||)∣∣∣∣ < ∫ dw |w · ̂1|m|K̃(||w||)| ∝ ∝ ∫ ∞\n0\ndwwd−1+m|K̃(w)| ∫ π\n0\ndφ1| cos(φ1)|m ∝ ∫ ∞\n0\ndwwd−1+m|K̃(w)|. (40)\nWe want to claim that this quantity is finite if m ≤ n. Convergence at infinity requires m < α− d, that is always smaller than or equal to n because of the hypothesis of the lemma. Convergence in zero requires that wd+m|K̃(w)| → 0, and we want this to hold for all 0 ≤ m < α − d, the most constraining one being the condition with m = 0.\nLemma 2 Let Z ∼ N (0,K) be a d-dimensional Gaussian random field, with K ∈ C2n being a 2n-times differentiable kernel. Then Z is n-times differentiable in the sense that\n• derivatives of Z(x) are a Gaussian random fields; • E∂n1x1 · · · ∂ nd xd Z(x) = 0;\n• E∂n1x1 · · · ∂ nd xd Z(x) · ∂n\n′ 1 x1 · · · ∂ n′d xdZ(x ′) = ∂ n1+n ′ 1 x1 · · · ∂ nd+n ′ d\nxd K(x− x′) < ∞ if the derivatives of K exist.\nIn particular, E∂mxiZ(x) · ∂ m xiZ(x ′) = ∂2mxi K(x− x ′) <∞ ∀m ≤ n.\nProof. Derivatives of Z(x) are defined as limits of sums and differences of the field Z evaluated at different points, therefore they are Gaussian random fields too, and furthermore it is straightforward to see that their expected value is always 0 if the field itself is zero centered.\nThe correlation can be computed via induction. Assume that E∂n1x1 · · · ∂ nd xd Z(x) ·∂n\n′ 1 x1 · · · ∂ n′d xdZ(x ′) =\n∂ n1+n\n′ 1\nx1 · · · ∂ nd+n\n′ d\nxd K(x− x′) holds true. Then, if we increment n1:\nE∂n1+1x1 · · · ∂ nd xd Z(x) · ∂n\n′ 1 x1 · · · ∂ n′d xdZ(x ′) =\n= lim h→0\nh−1E [ ∂n1x1 · · · ∂ nd xd Z(x+ ĥ1)− ∂n1x1 · · · ∂ nd xd Z(x) ] · ∂n ′ 1 x1 · · · ∂ n′d xdZ(x ′) =\n= lim h→0\nh−1 [ ∂ n1+n ′ 1 x1 · · · ∂ nd+n ′ d xd K(x− x′ + ĥ1)− ∂ n1+n ′ 1 x1 · · · ∂ nd+n ′ d xd K(x− x′) ] =\n= ∂ n1+1+n\n′ 1\nx1 · · · ∂ nd+n\n′ d\nxd K(x− x′). (41)\nOf course by symmetry the same can be said about the increase of any other exponent. To conclude the induction proof we simply recall that by definition EZ(x)Z(x′) = K(x− x′)." } ]
2019
null
SP:ad4e8d1e16eeeff006f8568bd6bf2c0862621526
[ "This work explores the extent to which the natural image manifold is captured by generative adversarial networks (GANs) by performing walks in the latent space of pretrained models. To perform these walks, a transformation vector is learned by minimizing the distance between transformed images and the corresponding images generated from transformed latent vectors. It is found that when traversing the latent space of the GAN along the direction of the transformation vector, that the corresponding generated images initially exhibit the desired transform (such as zooming or changing X position), but soon reach a limit where further changes in the latent vector do not result in changes to the image. It is observed that this behaviour is likely due to bias in the dataset which the GAN is trained on, and that by exploring the limits of the generator, biases which exist in the original dataset can be revealed. In order to increase the extents to which images can be transformed, it is shown that GANs can be trained with an augmented dataset and using a loss function that encourages transformations to lie along linear paths.", "This paper propose to study the generalization properties of GANs through interpolation. They first propose to learn a linear (and non-linear) interpolation in the latent space for a specific type of image transformation for example zoom, translation, rotation, luminance, etc... They show that linear interpolation in GANs can produce really realistic images along the path and enable to control and transform generated images to some extent. They then propose to measure to what extent the generated images can be transformed without \"breaking\". Finally they show that the quality of the interpolation can be improved by learning the interpolation and generator jointly." ]
An open secret in contemporary machine learning is that many models work beautifully on standard benchmarks but fail to generalize outside the lab. This has been attributed to biased training data, which provide poor coverage over real world events. Generative models are no exception, but recent advances in generative adversarial networks (GANs) suggest otherwise – these models can now synthesize strikingly realistic and diverse images. Is generative modeling of photos a solved problem? We show that although current GANs can fit standard datasets very well, they still fall short of being comprehensive models of the visual manifold. In particular, we study their ability to fit simple transformations such as camera movements and color changes. We find that the models reflect the biases of the datasets on which they are trained (e.g., centered objects), but that they also exhibit some capacity for generalization: by “steering” in latent space, we can shift the distribution while still creating realistic images. We hypothesize that the degree of distributional shift is related to the breadth of the training data distribution. Thus, we conduct experiments to quantify the limits of GAN transformations and introduce techniques to mitigate the problem. Code is released on our project page: https://ali-design.github.io/gan_steerability/.
[ { "affiliations": [], "name": "Ali Jahanian" }, { "affiliations": [], "name": "Lucy Chai" } ]
[ { "authors": [ "Aharon Azulay", "Yair Weiss" ], "title": "Why do deep convolutional networks generalize so poorly to small image transformations", "venue": "arXiv preprint arXiv:1805.12177,", "year": 2018 }, { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale gan training for high fidelity natural image synthesis", "venue": "arXiv preprint arXiv:1809.11096,", "year": 2018 }, { "authors": [ "Taco S Cohen", "Maurice Weiler", "Berkay Kicanaoglu", "Max Welling" ], "title": "Gauge equivariant convolutional networks and the icosahedral cnn", "venue": null, "year": 1902 }, { "authors": [ "Emily Denton", "Ben Hutchinson", "Margaret Mitchell", "Timnit Gebru" ], "title": "Detecting bias with generative counterfactual face attribute augmentation", "venue": "arXiv preprint arXiv:1906.06439,", "year": 2019 }, { "authors": [ "Bella DiGrazia" ], "title": "Swampscott fd debuts new blue fire truck, 2019", "venue": "https://www.itemlive. com/2019/05/29/swampscott-fd-debuts-new-blue-fire-truck/,", "year": 2019 }, { "authors": [ "William T. Freeman", "Edward H Adelson" ], "title": "The design and use of steerable filters", "venue": "IEEE Transactions on Pattern Analysis & Machine Intelligence,", "year": 1991 }, { "authors": [ "Robert Geirhos", "Patricia Rubisch", "Claudio Michaelis", "Matthias Bethge", "Felix A Wichmann", "Wieland Brendel" ], "title": "Imagenet-trained cnns are biased towards texture; increasing shape bias improves accuracy and robustness", "venue": "arXiv preprint arXiv:1811.12231,", "year": 2018 }, { "authors": [ "Lore Goetschalckx", "Alex Andonian", "Aude Oliva", "Phillip Isola" ], "title": "Ganalyze: Toward visual definitions of cognitive image properties", "venue": "arXiv preprint arXiv:1906.10112,", "year": 2019 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Geoffrey E Hinton", "Alex Krizhevsky", "Sida D Wang" ], "title": "Transforming auto-encoders", "venue": "In International Conference on Artificial Neural Networks,", "year": 2011 }, { "authors": [ "Ali Jahanian", "SVN Vishwanathan", "Jan P Allebach" ], "title": "Learning visual balance from large-scale datasets of aesthetically highly rated images", "venue": "In Human Vision and Electronic Imaging XX,", "year": 2015 }, { "authors": [ "Tero Karras", "Timo Aila", "Samuli Laine", "Jaakko Lehtinen" ], "title": "Progressive growing of gans for improved quality, stability, and variation", "venue": "arXiv preprint arXiv:1710.10196,", "year": 2017 }, { "authors": [ "Tero Karras", "Samuli Laine", "Timo Aila" ], "title": "A style-based generator architecture for generative adversarial networks", "venue": "arXiv preprint arXiv:1812.04948,", "year": 2018 }, { "authors": [ "Davis E. King" ], "title": "Dlib-ml: A machine learning toolkit", "venue": "Journal of Machine Learning Research,", "year": 2009 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Durk P Kingma", "Prafulla Dhariwal" ], "title": "Glow: Generative flow with invertible 1x1 convolutions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yann LeCun" ], "title": "The mnist database of handwritten digits. 
http://yann", "venue": "lecun. com/exdb/mnist/,", "year": 1998 }, { "authors": [ "Karel Lenc", "Andrea Vedaldi" ], "title": "Understanding image representations by measuring their equivariance and equivalence", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Wei Liu", "Dragomir Anguelov", "Dumitru Erhan", "Christian Szegedy", "Scott Reed", "Cheng-Yang Fu", "Alexander C Berg" ], "title": "Ssd: Single shot multibox detector", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Elad Mezuman", "Yair Weiss" ], "title": "Learning about canonical views from internet image collections", "venue": "In Advances in neural information processing systems,", "year": 2012 }, { "authors": [ "Thomas Möllenhoff", "Daniel Cremers" ], "title": "Flat metric minimization with applications in generative modeling", "venue": "arXiv preprint arXiv:1905.04730,", "year": 2019 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "arXiv preprint arXiv:1511.06434,", "year": 2015 }, { "authors": [ "Yujun Shen", "Jinjin Gu", "Xiaoou Tang", "Bolei Zhou" ], "title": "Interpreting the latent space of gans for semantic face editing", "venue": null, "year": 1907 }, { "authors": [ "Joel Simon" ], "title": "Ganbreeder. http:/https://ganbreeder.app/, accessed 2019-03-22", "venue": null, "year": 2019 }, { "authors": [ "Antonio Torralba", "Alexei A Efros" ], "title": "Unbiased look at dataset bias", "venue": null, "year": 2011 }, { "authors": [ "Paul Upchurch", "Jacob Gardner", "Geoff Pleiss", "Robert Pless", "Noah Snavely", "Kavita Bala", "Kilian Weinberger" ], "title": "Deep feature interpolation for image content changes", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Tom White" ], "title": "Sampling generative networks", "venue": "arXiv preprint arXiv:1609.04468,", "year": 2016 }, { "authors": [ "Richard Zhang", "Phillip Isola", "Alexei A Efros", "Eli Shechtman", "Oliver Wang" ], "title": "The unreasonable effectiveness of deep features as a perceptual metric", "venue": null, "year": 2018 }, { "authors": [ "Jun-Yan Zhu", "Philipp Krähenbühl", "Eli Shechtman", "Alexei A Efros" ], "title": "Generative visual manipulation on the natural image manifold", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Karras" ], "title": "StyleGAN is able to roughly localize the color transformation to this region, suggesting disentanglement of different objects within theW latent space (Fig", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "The quality of deep generative models has increased dramatically over the past few years. When introduced in 2014, Generative Adversarial Networks (GANs) could only synthesize MNIST digits and low-resolution grayscale faces (Goodfellow et al., 2014). The most recent models, however, produce diverse high-resolution images that are often indistinguishable from natural photos (Brock et al., 2018; Karras et al., 2018).\nScience fiction has long dreamed of virtual realities filled of synthetic content as rich as, or richer, than the real world (e.g., The Matrix, Ready Player One). How close are we to this dream? Traditional computer graphics can render photorealistic 3D scenes, but cannot automatically generate detailed content. Generative models like GANs, in contrast, can create content from scratch, but we do not currently have tools for navigating the generated scenes in the same kind of way as you can walk through and interact with a 3D game engine.\nIn this paper, we explore the degree to which you can navigate the visual world of a GAN. Figure 1 illustrates the kinds of transformations we explore. Consider the dog at the top-left. By moving in some direction of GAN latent space, can we hallucinate walking toward this dog? As the figure indicates, and as we will show in this paper, the answer is yes. However, as we continue to zoom in, we quickly reach limits. Once the dog face fills the full frame, continuing to walk in this direction fails to increase the zoom. A similar effect occurs in the daisy example (row 2 of Fig. 1), where a direction in latent space moves the daisy up and down, but cannot move it out of frame.\nWe hypothesize that these limits are due to biases in the distribution of images on which the GAN is trained. For example, if the training dataset consists of centered dogs and daises, the same may be the case in GAN-generated images. Nonetheless, we find that some degree of transformation is possible. When and why can we achieve certain transformations but not others?\nThis paper seeks to quantify the degree to which we can achieve basic visual transformations by navigating in GAN latent space. In other words, are GANs “steerable” in latent space?1 We analyze the relationship between the data distribution on which the model is trained and the success in achieving these transformations. From our experiments, it is possible to shift the distribution of generated images to some degree, but we cannot extrapolate entirely out of the dataset’s support. In particular, attributes can be shifted in proportion to the variability of that attribute in the training data. We further demonstrate an approach to increase model steerability by jointly optimizing the generator and latent direction, together with data augmentation on training images. One of the current criticisms of generative models is that they simply interpolate between datapoints, and fail to generate anything truly new, but our results add nuance to this story. It is possible to achieve distributional shift, but the ability to create realistic images from a modified distributions relies on sufficient diversity in the dataset along the dimension that we vary.\nOur main findings are:\n• A simple walk in the latent space of GANs achieves camera motion and color transformations in the output image space. 
These walks are learned in a self-supervised manner without labeled attributes or distinct source and target images.\n• The linear walk is as effective as more complex non-linear walks, suggesting that the models learn to roughly linearize these operations without being explicitly trained to do so.\n• The extent of each transformation is limited, and we quantify a relationship between dataset variability and how much we can shift the model distribution.\n• The transformations are a general-purpose framework that works with different model architectures, e.g. BigGAN, StyleGAN, and DCGAN, and illustrates different disentanglement properties in their respective latent spaces.\n• Data augmentation improves steerability, as does jointly training the walk trajectory and the generator weights, which allows us to achieve larger transformation effects." }, { "heading": "2 RELATED WORK", "text": "Latent space manipulations can be seen from several perspectives – how we achieve them, what limits them, and what they enable us to do. Our work addresses these three aspects together, and we briefly refer to each one in related work.\nInterpolations in latent space Traditional approaches to image editing with GAN latent spaces find linear directions that correspond to changes in labeled attributes, such as smile-vectors and gender-vectors for faces (Radford et al., 2015; Karras et al., 2018). However these manipulations are not exclusive to GANs; in flow-based generative models, linearly interpolating between two encoded images allows one to edit a source image toward attributes of the target (Kingma & Dhariwal, 2018). Möllenhoff & Cremers (2019) proposes a modified GAN formulation by treating data as directional k-currents, where moving along tangent planes naturally corresponds to interpretable manipulations.\n1We use the term “steerable” in analogy to the classic steerable filters of Freeman & Adelson (1991).\nUpchurch et al. (2017) removes the generative model entirely and instead interpolates in the intermediate feature space of a pretrained classifier, again using feature mappings of source and target sets to determine an edit direction. Unlike these approaches, we learn our latent-space trajectories in a self-supervised manner without labeled attributes or distinct source and target images. Instead, we learn to approximate editing operations on individual source images. We find that linear trajectories in latent space can capture simple image manipulations, e.g., zoom-vectors and shift-vectors, although we also obtain similar results using nonlinear trajectories.\nDataset bias Biases from training data and network architecture both impact the generalization capacity of learned models (Torralba & Efros, 2011; Geirhos et al., 2018; Amini et al.). Dataset biases partly come from human preferences in taking photos: we tend to take pictures in specific “canonical” views that are not fully representative of the entire visual world (Mezuman & Weiss, 2012; Jahanian et al., 2015). Consequently, models trained with these datasets inherit their biases. This may result in models that misrepresent the given task – such as tendencies towards texture bias rather than shape bias in ImageNet classifiers (Geirhos et al., 2018) – and in turn limit their generalization performance on similar objectives (Azulay & Weiss, 2018). 
Our latent space trajectories transform the output corresponding to various image editing operations, but ultimately we are constrained by biases in the data and cannot extrapolate arbitrarily far beyond the data's support.\nGenerative models for content creation The recent progress in generative models has opened interesting avenues for content creation (Brock et al., 2018; Karras et al., 2018), including applications that enable users to fine-tune the generated output (Simon; Zhu et al., 2016; Bau et al., 2018). A by-product of the current work is to enable users to modify image properties by turning a single knob – the magnitude of the learned transformation in latent space. We further demonstrate that these image manipulations are not just a simple creativity tool; they also provide us with a window into the biases and generalization capacity of these models.\nApplications of latent space editing Image manipulations using generative models suggest several interesting downstream applications. For example, Denton et al. (2019) learns linear walks corresponding to various facial characteristics – they use these to measure biases in facial attribute detectors, whereas we study biases in the generative model that originate from training data. Shen et al. (2019) also assumes linear latent space trajectories and learns paths for face attribute editing according to semantic concepts such as age and expression, thus demonstrating disentanglement of the latent space. White (2016) suggests approaches to improve the learned manipulations, such as using spherical linear interpolations, resampling images to remove biases in attribute vectors, and using data augmentation as a synthetic attribute for variational autoencoders. Goetschalckx et al. (2019) applies a linear walk to achieve transformations corresponding to cognitive properties of an image such as memorability, aesthetics, and emotional valence. Unlike these works, we do not require an attribute detector or assessor function to learn the latent space trajectory, and therefore our loss function is based on image similarity between source and target images. In addition to linear walks, we explore using non-linear walks parametrized by neural networks for editing operations." }, { "heading": "3 METHOD", "text": "Generative models such as GANs (Goodfellow et al., 2014) learn a mapping function G such that G : z → x. Here, z is the latent code drawn from a Gaussian density and x is an output, e.g., an image. Our goal is to achieve transformations in the output space by moving in latent space, as shown in Fig. 2. In general, this goal also captures the idea of equivariance, in which transformations in the input space result in equivalent transformations in the output space (c.f. Hinton et al. (2011); Cohen et al. (2019); Lenc & Vedaldi (2015)).\nObjective We want to learn an N-dimensional vector representing the optimal path in latent space for a given transformation. The vector is multiplied by a continuous parameter α which signifies the step size: large α values correspond to a greater degree of transformation, while small α values correspond to a lesser degree. Formally, we learn the walk w by minimizing the objective function:\nw∗ = arg minw Ez,α[L(G(z + αw), edit(G(z), α))]. (1)\nHere, L measures the distance between the generated image after taking an α-step in the latent direction, G(z + αw), and the target edit(G(z), α) derived from the source image G(z). 
We use L2 loss as our objective L; however, we also obtain similar results when using the LPIPS perceptual image similarity metric (Zhang et al., 2018) (see Appendix B.4.1). Note that we can learn this walk in a fully self-supervised manner – we perform the edit(·) operation on an arbitrary generated image and subsequently optimize the walk vector to minimize the objective. Let model(α) denote the result of applying the optimized transformation vector w∗ with step size α, defined as model(α) = G(z + αw∗).\nThe previous setup assumes linear latent space walks, but we can also learn non-linear trajectories in which the walk direction depends on the current latent space position. For the non-linear walk, we learn a function, f∗(z), which corresponds to a small ε-step transformation edit(G(z), ε). To achieve bigger transformations, we apply f recursively, mimicking discrete Euler ODE approximations. Formally, for a fixed ε, we minimize\nL = Ez,n[||G(fn(z)) − edit(G(z), nε)||], (2)\nwhere fn(·) is an nth-order function composition f(f(f(...))), and f(z) is parametrized with a neural network. We discuss further implementation details in Appendix A.4. We use this function composition approach rather than the simpler setup of G(z + αNN(z)) because the latter learns to ignore the input z when α takes on continuous values, and is thus equivalent to the previous linear trajectory (see Appendix A.3 for further details).\nQuantifying Steerability We further seek to quantify how well we can achieve desired image manipulations under each transformation. To this end, we compare the distribution of a given attribute, e.g., “luminance”, in the dataset versus in images generated after walking in latent space.\nFor color transformations, we consider the effect of increasing or decreasing the α coefficient corresponding to each color channel. To estimate the color distribution of model-generated images, we randomly sample N = 100 pixels per image both before and after taking a step in latent space. Then, we compute the pixel value for each channel, or the mean RGB value for luminance, and normalize the range between 0 and 1.\nFor zoom and shift transformations, we rely on an object detector which captures the central object in the image class. We use a MobileNet-SSD v1 (Liu et al., 2016) detector to estimate object bounding boxes, and average over image classes recognizable by the detector. For each successful detection, we take the highest probability bounding box corresponding to the desired class and use that to quantify the amount of transformation. For the zoom operation, we use the area of the bounding box normalized by the area of the total image. For shift in the X and Y directions, we take the center X and Y coordinates of the bounding box, and normalize by image width or height.\nTruncation parameters in GANs (as used in Brock et al. (2018); Karras et al. (2018)) trade off between the diversity of the generated images and sample quality. When comparing generated images to the dataset distribution, we use the largest possible truncation for the model and perform similar cropping and resizing of the dataset as done during model training (see Brock et al. (2018)). When comparing the attributes of generated distributions under different α magnitudes to each other but not to the dataset, we reduce truncation to 0.5 to ensure better performance of the object detector.\nReducing Transformation Limits Equations 1 and 2 learn a latent space walk assuming a pretrained generative model, thus keeping the model weights fixed. 
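As a concrete illustration of this fixed-generator setting of Eq. (1), the sketch below (our own PyTorch approximation, not the authors' released code; G, its latent dimension, and the simple additive brightness edit are placeholders) optimizes a single linear walk vector w with Adam:

```python
import torch
import torch.nn.functional as F

def edit(x, alpha):
    # placeholder target edit: shift pixel intensities by alpha (images assumed in [0, 1])
    return torch.clamp(x + alpha, 0.0, 1.0)

def train_walk(G, dim_z, steps=1000, lr=1e-3, device="cpu"):
    # G stands for any pretrained generator (e.g., BigGAN) kept frozen
    w = torch.zeros(1, dim_z, device=device, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        z = torch.randn(16, dim_z, device=device)
        alpha = torch.rand(16, 1, 1, 1, device=device) - 0.5   # random step sizes
        with torch.no_grad():
            target = edit(G(z), alpha)                         # edit(G(z), alpha)
        loss = F.mse_loss(G(z + alpha.view(-1, 1) * w), target)  # Eq. (1), L2 objective
        opt.zero_grad(); loss.backward(); opt.step()
    return w.detach()
```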
The previous approach allows us to understand the latent space organization and the limitations in the model's transformation capacity. To overcome these limits, we explore adding data augmentation by editing the training images with each corresponding transformation, and train the generative model with this augmented dataset. We also introduce a modified objective function that jointly optimizes the generator weights and a linear walk vector:\nG∗, w∗ = arg minG,w (Ledit + LGAN), (3)\nwhere the edit loss encourages low L2 error between the learned transformation and the target image:\nLedit = L2(G(z + αw) − edit(G(z), α)). (4)\nThe GAN loss optimizes for discriminator error:\nLGAN = maxD (Ez,α[D(G(z + αw))] − Ex,α[D(edit(x, α))]), (5)\nwhere we draw images x from the training dataset and perform data augmentation by applying the edit operation on them. This optimization approach encourages the generator to organize its latent space so that the transformations lie along linear paths, and when combined with data augmentation, results in larger transformation ranges, which we demonstrate in Sec. 4.4." }, { "heading": "4 EXPERIMENTS", "text": "We demonstrate our approach using BigGAN (Brock et al., 2018), a class-conditional GAN trained on 1000 ImageNet categories. We learn a shared latent space walk by averaging across the image categories, and further quantify how this walk affects each class differently. We focus on linear walks in latent space for the main text, and show additional results on nonlinear walks in Sec. 4.3 and Appendix B.4.2. We also conduct experiments on StyleGAN (Karras et al., 2018), which uses an unconditional style-based generator architecture, in Sec. 4.3 and Appendix B.5." }, { "heading": "4.1 WHAT IMAGE TRANSFORMATIONS CAN WE ACHIEVE IN LATENT SPACE?", "text": "We show qualitative results of the learned transformations in Fig. 1. By steering in the generator latent space, we learn a variety of transformations on a given source image (shown in the center panel of each transformation). Interestingly, several priors come into play when learning these image transformations. When we shift a daisy downwards in the Y direction, the model hallucinates that the sky exists at the top of the image. However, when we shift the daisy up, the model inpaints the remainder of the image with grass. When we alter the brightness of an image, the model transitions between nighttime and daytime. This suggests that the model can extrapolate from the original source image, and still remain consistent with the image context.\nHowever, when we increase the step size of α, we observe that the degree to which we can achieve each transformation is limited. In Fig. 3 we observe two potential failure cases: one in which the image becomes unrealistic, and the other in which the image fails to transform any further. When we try to zoom in on a Persian cat, we observe that the cat no longer increases in size beyond some point, and in fact consistently undershoots the target zoom. On the other hand, when we try to zoom out on the cat, we observe that it begins to fall off the image manifold, and does not become any smaller after some point. Indeed, the perceptual distance (using LPIPS) between images decreases as we push α towards the transformation limits. Similar trends hold with other transformations: we are able to shift a lorikeet up and down to some degree until the transformation yields unrealistic output, and despite adjusting α on the rotation vector, we are unable to rotate a pizza. 
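The plateau described above can be measured directly with the LPIPS metric: when consecutive α-steps stop changing the image, the perceptual distance between them collapses toward zero. A sketch under the assumption that the lpips package (Zhang et al., 2018) is installed and that G and w are the generator and learned walk from before, with images scaled to [-1, 1]:

```python
import torch
import lpips  # pip install lpips

def step_distances(G, w, alphas, n_samples=64, device="cpu"):
    # LPIPS distance between images at consecutive alpha steps along the walk
    metric = lpips.LPIPS(net="alex").to(device)
    z = torch.randn(n_samples, w.size(1), device=device)
    dists = []
    with torch.no_grad():
        prev = G(z + alphas[0] * w)
        for a in alphas[1:]:
            cur = G(z + a * w)
            dists.append(metric(prev, cur).mean().item())  # near-zero => transformation saturated
            prev = cur
    return dists
```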
Are the limitations to these transformations governed by the training dataset? In other words, are our latent space walks limited because in ImageNet photos the cats are mostly centered and taken at a certain size? We seek to investigate and quantify these biases in the next sections.\nAn intriguing characteristic of the learned trajectory is that the amount it affects the output depends on the image class. In Fig. 4, we investigate the impact of the walk for different image categories under color transformations. By moving in the direction of a redness vector, we are able to successfully recolor a jellyfish, but we are unable to change the color of a goldfinch, which remains yellow with only slight changes in background texture. Likewise, increasing brightness changes an erupting volcano to a dormant one, but does not have much effect on the Alps, which only transition between night and day. In the third example, we use our latent walk to turn red sports cars blue, but it cannot recolor firetrucks. Again, perceptual distance over image samples confirms these qualitative observations: a 2-sample t-test yields t = 20.77, p < 0.001 for jellyfish/goldfinch, t = 8.14, p < 0.001 for volcano/alp, and t = 6.84, p < 0.001 for sports car/fire engine. We hypothesize that the different impact of the shared transformation on separate image classes relates to the variability in the underlying dataset. The overwhelming majority of firetrucks are red,2 but sports cars appear in a variety of colors. Therefore, our color transformation is constrained by the dataset biases of individual classes.\n2But apparently blue fire trucks do exist! (DiGrazia, 2019)\nWith shift, we can move the distribution of the center object by varying α. In the underlying model, the center coordinate of the object is most concentrated at half of the image width and height, but after applying the shift-in-X and shift-in-Y transformations, the mode of the transformed distribution varies between 0.3 and 0.7 of the image width/height. To quantify the distribution changes, we compute the area of intersection between the original model distribution and the distribution after applying each transformation, and observe that the intersection decreases as we increase or decrease the magnitude of α. However, our transformations are limited to a certain extent – if we increase α beyond 150 pixels for vertical shifts, we start to generate unrealistic images, as evidenced by a sharp rise in FID and converging modes in the transformed distributions (Fig. 5, columns 2 & 3).\nWe perform a similar procedure for zoom, by measuring the area of the bounding box for the detected object under different magnitudes of α. Like shift, we observe that subsequent increases in α magnitude start to have smaller and smaller effects on the mode of the resulting distribution (Fig. 5, last column). Past an 8x zoom in or out, we observe an increase in the FID, signifying decreasing image quality. Interestingly, for zoom the FID under zooming in and zooming out is asymmetric, indicating that how well we can zoom in and retain realistic images differs from how well we can zoom out. These trends are consistent with the plateau in transformation behavior that we qualitatively observe in Fig. 3. Although we can arbitrarily increase the α step size, after some point we are unable to achieve further transformation and risk deviating from the natural image manifold."
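The bounding-box statistics used above to quantify zoom and shift reduce to simple normalizations of the detector output; a minimal sketch follows, assuming boxes given as (x_min, y_min, x_max, y_max) pixel coordinates for the highest-probability detection of the desired class.

```python
def zoom_statistic(box, image_width, image_height):
    """Bounding-box area normalized by the total image area (zoom)."""
    x_min, y_min, x_max, y_max = box
    return ((x_max - x_min) * (y_max - y_min)) / (image_width * image_height)

def shift_statistics(box, image_width, image_height):
    """Normalized box-center coordinates (shift in X and shift in Y)."""
    x_min, y_min, x_max, y_max = box
    return (0.5 * (x_min + x_max) / image_width,
            0.5 * (y_min + y_max) / image_height)
```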
}, { "heading": "4.2 HOW DOES THE DATA AFFECT THE TRANSFORMATIONS?", "text": "Is the extent to which we can transform each class, as we observed in Fig. 4, due to limited variability in the underlying dataset for each class? One way of quantifying this is to measure the difference in transformed model means, model(+α) and model(-α), and compare it to the spread of the dataset distribution. For each class, we compute standard deviation of the dataset with respect to our statistic of interest (pixel RGB value for color, and bounding box area and center value for zoom and shift transformations respectively). We hypothesize that if the amount of transformation is biased depending on the image class, we will observe a correlation between the distance of the mean shifts and the standard deviation of the data distribution.\nMore concretely, we define the change in model means under a given transformation as:\n∆µk = µk,model(+α∗) − µk,model(-α∗) (6) for a given class k and we set α∗ to be largest and smallest α values used in training. The degree to which we achieve each transformation is a function of α, so we use the same α value for all classes – one that is large enough to separate the means of µk,model(α∗) and µk,model(-α∗) under\ntransformation, but also for which the FID of the generated distribution remains below a threshold T of generating reasonably realistic images (for our experiments we use T = 22).\nIn Fig. 6 we plot the standard deviation σ of the dataset on the x-axis, and the model ∆µ under a +α∗ and −α∗ transformation on the y-axis, as defined in Eq. 6. We sample randomly from 100 classes for the color, zoom and shift transformations, and generate 200 samples of each class under the positive and negative transformations. We use the same setup of drawing samples from the model and dataset and computing the statistics for each transformation as described in Sec. 4.1.\nIndeed, we find that the width of the dataset distribution, captured by the standard deviation of random samples drawn from the dataset for each class, relates to how much we can transform. There is a positive correlation between the spread of the dataset and the magnitude of ∆µ observed in the transformed model distributions, and the slope of all observed trends differs significantly from zero (p < 0.001 for all transformations). For the zoom transformation, we show examples of two extremes along the trend. For the “robin” class the spread σ in the dataset is low, and subsequently, the separation ∆µ that we are able to achieve by applying +α∗ and −α∗ transformations is limited. On the other hand, for “laptops”, the dataset spread is broad; ImageNet contains images of laptops of various sizes, and we are able to attain wider shifts in the model distribution.\nFrom these results, we conclude that the amount of transformation we can achieve relates to the dataset variability. Consistent with our qualitative observations in Fig. 4, we find that if the images for a particular class have adequate coverage over the entire range of a given transformation, then we are better able to move the model distribution to both extremes. On the other hand, if the images for a given class are less diverse, the transformation is limited by this dataset bias." }, { "heading": "4.3 ALTERNATIVE ARCHITECTURES AND WALKS", "text": "We ran an identical set of experiments using the nonlinear walk in the BigGAN latent space (Eq 2) and obtained similar quantitative results. 
{ "heading": "4.3 ALTERNATIVE ARCHITECTURES AND WALKS", "text": "We ran an identical set of experiments using the nonlinear walk in the BigGAN latent space (Eq. 2) and obtained similar quantitative results. To summarize, the Pearson correlation coefficients between the dataset σ and the model ∆µ for linear and nonlinear walks are shown in Table 1, with full results in Appendix B.4.2. Qualitatively, we observe that while the linear trajectory undershoots the targeted level of transformation, it is able to preserve more realistic-looking results (Fig. 7). The transformations involve a trade-off between minimizing the loss and maintaining realistic output, and we hypothesize that the linear walk functions as an implicit regularizer that corresponds well with the inherent organization of the latent space.\nTo test the generality of our findings across model architectures, we ran similar experiments on StyleGAN, in which the latent space is divided into two spaces, z and W. Since Karras et al. (2018) note that the W space is less entangled than z, we apply the linear walk to W and show results in Fig. 8 and Appendix B.5. One interesting aspect of StyleGAN is that we can change color while leaving other structure in the image unchanged. In other words, while green faces do not naturally exist in the dataset, the StyleGAN model is still able to generate them. This differs from the behavior of BigGAN, where changing color results in different semantics in the image, e.g., turning a dormant volcano into an active one. StyleGAN, however, does not preserve the exact geometry of objects under other transformations, e.g., zoom and shift (see Appendix B.5)." }, { "heading": "4.4 TOWARDS STEERABLE GANS", "text": "So far, we have frozen the parameters of the generative model when learning a latent space walk for image editing, and observed that the transformations are limited by dataset bias. Here we investigate approaches to overcome these limitations and increase model steerability. For these experiments, we use a class-conditional DCGAN model (Radford et al., 2015) trained on MNIST digits (LeCun, 1998).\nTo study the effect of dataset biases, we train (1) a vanilla DCGAN and (2) a DCGAN with data augmentation, and then learn the optimal walk in Eq. 1 after the model has been trained – we refer to these two approaches in Fig. 9 as argmin W and argmin W + aug, respectively. We observe that adding data augmentation yields transformations that better approximate the target image and attain lower L2 error than the vanilla DCGAN (blue and orange curves in Fig. 9). Qualitatively, we observe that transformations using the vanilla GAN (argmin W) become patchy and unrealistic as we increase the magnitude of α, but when the model is trained with data augmentation (argmin W + aug), the digits retain their structural integrity.\nRather than learning the walk vector w assuming a frozen generator, we may also jointly optimize the model and the linear walk parameter together, as formalized in Eq. 3. This allows the model to learn an equivariance between linear directions in the latent space and the corresponding image transformations. We refer to this model as argmin G,W in Fig. 9. Compared to the frozen generator (in argmin W and argmin W + aug), the joint objective further decreases the L2 error (green curve in Fig. 9). We show additional qualitative examples in Appendix B.8. The steerable range of the generator increases with joint optimization and data augmentation, which provides additional evidence that training data bias impacts the models’ steerability and generalization capacity.
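A sketch of one step of this joint optimization (Eqs. 3–5) is given below in PyTorch; the optimizer setup and helper names follow the equations as written but are otherwise assumptions rather than the original implementation.

```python
import torch

def joint_training_step(G, D, w, z, x_real, alpha, opt_gw, opt_d, edit):
    """Jointly update the generator G and the walk vector w (Eq. 3).

    Assumes: G maps latents to images, D is a scalar-output critic,
    edit(images, alpha) applies the target transformation, and w is a
    leaf tensor registered with opt_gw together with G's parameters."""
    # Discriminator step: maximize Eq. (5) on transformed samples vs.
    # edited (data-augmented) real images.
    with torch.no_grad():
        fake = G(z + alpha * w)
    d_loss = D(fake).mean() - D(edit(x_real, alpha)).mean()
    opt_d.zero_grad(); (-d_loss).backward(); opt_d.step()

    # Generator + walk step: L_edit (Eq. 4) plus the GAN term.
    fake = G(z + alpha * w)
    target = edit(G(z), alpha).detach()
    edit_loss = torch.mean((fake - target) ** 2)
    g_loss = edit_loss + D(fake).mean()
    opt_gw.zero_grad(); g_loss.backward(); opt_gw.step()
    return edit_loss.item(), d_loss.item()
```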
We also tried DCGAN on CIFAR10, a more complicated dataset; however, we were unable to make steering effective – all three methods failed to produce realistic transformations, and joint training in fact performed the worst. Finding the right steering implementation for each GAN and dataset, especially for joint training, may be a difficult problem and an interesting direction for future work.\nFigure 9: Reducing the effect of transformation limits. Using a DCGAN model on MNIST digits, we compare the L2 reconstruction errors on latent space walks for models trained with vanilla GANs without (argmin W) and with data augmentation (argmin W + aug). We also compare to jointly optimizing the generator and the walk parameters with data augmentation (argmin G,W), which achieves the lowest L2 error." }, { "heading": "5 CONCLUSION", "text": "GANs are powerful generative models, but are they simply replicating the existing training datapoints, or can they generalize beyond the training distribution? We investigate this question by exploring walks in the latent space of GANs. We optimize trajectories in latent space to reflect simple image transformations in the generated output, learned in a self-supervised manner. We find that the model is able to exhibit characteristics of extrapolation – we are able to “steer” the generated output to simulate camera zoom, horizontal and vertical movement, camera rotations, and recolorization. However, our ability to naively move the distribution is finite: we can transform images to some degree but cannot extrapolate entirely outside the support of the training data. To increase model steerability, we add data augmentation during training and jointly optimize the model and walk trajectory. Our experiments illustrate the connection between training data bias and the resulting distribution of generated images, and suggest methods for extending the range of images that the models are able to create." }, { "heading": "ACKNOWLEDGEMENTS", "text": "We would like to thank Quang H Le, Lore Goetschalckx, Alex Andonian, David Bau, and Jonas Wulff for helpful discussions. This work was supported by a Google Faculty Research Award to P.I., and a U.S. National Science Foundation Graduate Research Fellowship to L.C." }, { "heading": "A METHOD DETAILS", "text": "" }, { "heading": "A.1 OPTIMIZATION FOR THE LINEAR WALK", "text": "We learn the walk vector using mini-batch stochastic gradient descent with the Adam optimizer (Kingma & Ba, 2014) in TensorFlow, trained on 20000 unique samples from the latent space z. We share the vector w across all ImageNet categories for the BigGAN model." }
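A minimal sketch of this optimization loop is given below; the original implementation is in TensorFlow, so the PyTorch translation, batch size, and learning rate here are assumptions for illustration.

```python
import torch

def learn_linear_walk(G, edit, dim_z, n_samples=20000, batch=8, lr=1e-3):
    """Optimize a single walk vector w shared across classes (cf. A.1),
    minimizing ||G(z + alpha * w) - edit(G(z), alpha)||^2 over random
    latents z and random step sizes alpha. G's weights stay frozen."""
    w = torch.zeros(dim_z, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(n_samples // batch):
        z = torch.randn(batch, dim_z)
        alpha = torch.empty(batch, 1).uniform_(-1.0, 1.0)
        target = edit(G(z), alpha).detach()
        loss = torch.mean((G(z + alpha * w) - target) ** 2)
        opt.zero_grad(); loss.backward(); opt.step()
    return w.detach()
```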
, { "heading": "A.2 IMPLEMENTATION DETAILS FOR LINEAR WALK", "text": "We experiment with a number of different transformations learned in the latent space, each corresponding to a different walk vector. Each of these transformations can be learned without any direct supervision, simply by applying our desired edit to the source image. Furthermore, the parameter α allows us to vary the extent of the transformation. We found that a slight modification to each transformation improved the degree to which we were able to steer the output space: we scale α differently for the learned transformation G(z + αgw) and the target edit edit(G(z), αt). We detail each transformation below; illustrative sketches of these edit operations follow the list.\nShift. We learn transformations corresponding to shifting an image in the horizontal X direction and the vertical Y direction. We train on source images that are shifted −αt pixels to the left and αt pixels to the right, where we set αt to be between zero and one-half of the source image width or height D. When training the walk, we enforce that the αg parameter ranges between −1 and 1; thus for a random shift by αt pixels, we use the value αg = αt/D. We apply a mask to the shifted image, so that we only apply the loss function on the visible portion of the source image. This forces the generator to extrapolate on the obscured region of the target image.\nZoom. We learn a walk which is optimized to zoom in and out up to four times the original image. For zooming in, we crop the central portion of the source image by some αt amount, where 0.25 < αt < 1, and resize it back to its original size. To zoom out, we downsample the image by αt, where 1 < αt < 4. To allow for both a positive and a negative walk direction, we set αg = log(αt). Similar to shift, a mask applied during training allows the generator to inpaint the background scene.\nColor. We implement color as a continuous RGB slider, i.e., a 3-tuple αt = (αR, αG, αB), where each of αR, αG, αB can take values in [−0.5, 0.5] during training. To edit the source image, we simply add the corresponding αt values to each of the image channels. Our latent space walk is parameterized as z + αgw = z + αRwR + αGwG + αBwB, where we jointly learn the three walk directions wR, wG, and wB.\nRotate in 2D. Rotation in 2D is trained in a similar manner as the shift operations, where we train with −45 ≤ αt ≤ 45 degree rotations. Using R = 45, we scale αg = αt/R. We use a mask to enforce the loss only on visible regions of the target.\nRotate in 3D. We simulate a 3D rotation using a perspective transformation along the Z-axis, essentially treating the image as a rotating billboard. Similar to the 2D rotation, we train with −45 ≤ αt ≤ 45 degree rotations, we scale αg = αt/R where R = 45, and we apply a mask during training." }
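The edit operations themselves are plain image-space transformations; minimal PyTorch sketches for shift, zoom-in, and color on NCHW batches are given below. The masking convention and the interpolation mode are assumptions, and the functions are intended to produce (detached) target images.

```python
import torch
import torch.nn.functional as F

def edit_shift_x(images, t):
    """Shift right by t pixels (t < 0 shifts left); returns the shifted
    batch and a mask marking pixels on which the loss should be applied."""
    shifted = torch.roll(images, shifts=t, dims=3)
    mask = torch.ones_like(images)
    if t > 0:
        shifted[..., :t] = 0.0; mask[..., :t] = 0.0
    elif t < 0:
        shifted[..., t:] = 0.0; mask[..., t:] = 0.0
    return shifted, mask

def edit_zoom_in(images, alpha_t):
    """Crop the central alpha_t fraction (0.25 < alpha_t < 1) and resize
    back to the original resolution (the walk uses alpha_g = log(alpha_t))."""
    _, _, h, w = images.shape
    ch, cw = int(h * alpha_t), int(w * alpha_t)
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = images[:, :, top:top + ch, left:left + cw]
    return F.interpolate(crop, size=(h, w), mode="bilinear",
                         align_corners=False)

def edit_color(images, alpha_rgb):
    """Add a per-channel offset alpha_rgb = (aR, aG, aB) to the batch."""
    offset = torch.tensor(alpha_rgb, dtype=images.dtype,
                          device=images.device).view(1, 3, 1, 1)
    return images + offset
```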
, { "heading": "A.3 LINEAR NN(z) WALK", "text": "Rather than defining w as a vector in z space (Eq. 1), one could define it as a function that takes a z as input and maps it to the desired z′ after taking a variable-sized step α in latent space. In this case, we may parametrize the walk with a neural network w = NN(z), and transform the image using G(z + αNN(z)). However, as we show in the following proof, this idea will not learn to let w be a function of z.\nProof. For simplicity, let w = F(z). We optimize J(w, α) = E_z[L(G(z + αw), edit(G(z), α))], where α is an arbitrary scalar value. Note that for the target image, two equal edit operations are equivalent to performing a single edit of twice the size (e.g., shifting by 10px is the same as shifting by 5px twice; zooming by 4x is the same as zooming by 2x twice). That is,\nedit(G(z), 2α) = edit(edit(G(z), α), α).\nTo achieve this target, starting from an initial z, we can take two steps of size α in latent space as follows:\nz1 = z + αF(z)\nz2 = z1 + αF(z1)\nHowever, because we let α take on any scalar value during optimization, our objective function enforces that starting from z and taking a step of size 2α equals taking two steps of size α:\nz + 2αF(z) = z1 + αF(z1). (7)\nTherefore:\nz + 2αF(z) = z + αF(z) + αF(z1) ⇒ αF(z) = αF(z1) ⇒ F(z) = F(z1).\nThus F(·) simply becomes a linear trajectory that is independent of the input z." }, { "heading": "A.4 OPTIMIZATION FOR THE NON-LINEAR WALK", "text": "Given the limitations of the previous walk, we define our nonlinear walk F(z) using discrete step sizes ε. We define F(z) as z + NN(z), where the neural network NN learns a fixed ε-step transformation, rather than a variable α step. We then renormalize the magnitude of z. This approach mimics the Euler method for solving ODEs with a discrete step size, where we assume that the gradient of the transformation in latent space is of the form dz/dt = NN(z) and we approximate z_{i+1} = z_i + (dz/dt)|_{z_i}. The key difference from A.3 is the fixed step size, which avoids optimizing for the equality in (7).\nWe use a two-layer neural network to parametrize the walk, and optimize over 20000 samples using the Adam optimizer as before. Positive and negative transformation directions are handled with two neural networks having identical architecture but independent weights. We set ε to achieve the same transformation ranges as the linear trajectory within 4-5 steps." }
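A sketch of this recursive ε-step walk as a PyTorch module follows; the hidden width and the renormalization of z to the shell of radius sqrt(dim) (appropriate for z ~ N(0, I)) are assumptions. Per the description above, separate instances with independent weights would be used for the positive and negative directions.

```python
import torch
import torch.nn as nn

class NonlinearWalk(nn.Module):
    """F(z) = renormalize(z + NN(z)): a fixed eps-step transformation,
    applied recursively (Euler-style), so that n steps approximate
    edit(G(z), n * eps)."""

    def __init__(self, dim_z, hidden=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_z, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dim_z))

    def forward(self, z, n_steps=1):
        for _ in range(n_steps):
            z = z + self.net(z)
            # Keep z at the norm a standard-normal latent would have.
            z = z * (z.shape[1] ** 0.5) / z.norm(dim=1, keepdim=True)
        return z
```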
, { "heading": "B ADDITIONAL EXPERIMENTS", "text": "" }, { "heading": "B.1 MODEL AND DATA DISTRIBUTIONS", "text": "How well does the model distribution of each property match the dataset distribution? If the generated images do not form a good approximation of the dataset variability, we expect that this would also impact our ability to transform generated images. In Fig. 10 we show the attribute distributions of the BigGAN model G(z) compared to samples from the ImageNet dataset. We show corresponding results for StyleGAN and its respective datasets in Appendix B.5. While there is some bias in how well model-generated images approximate the dataset distribution, we hypothesize that additional biases in our transformations come from variability in the training data." }, { "heading": "B.2 QUANTIFYING TRANSFORMATION LIMITS", "text": "We observe that when we increase the transformation magnitude α in latent space, the generated images become unrealistic and the transformation ceases to have further effect. We show this qualitatively in Fig. 3. To quantitatively verify these trends, we can compute the LPIPS perceptual distance of images generated using consecutive pairs of α_i and α_{i+1}. For shift and zoom transformations, the perceptual distance is larger when α (or log(α) for zoom) is near zero, and decreases as the magnitude of α increases, which indicates that large α magnitudes have a smaller transformation effect, and the transformed images appear more similar. On the other hand, color and rotate in 2D/3D exhibit a steady transformation rate as the magnitude of α increases.\nNote that this analysis does not tell us how well we achieve the specific transformation, nor whether the latent trajectory deviates from natural-looking images. Rather, it tells us how much we manage to change the image, regardless of the transformation target. To quantify how well each transformation is achieved, we rely on attribute detectors such as object bounding boxes (see B.3)." }, { "heading": "B.3 DETECTED BOUNDING BOXES", "text": "To quantify the degree to which we are able to achieve the zoom and shift transformations, we rely on a pre-trained MobileNet-SSD v1 object detection model (https://github.com/opencv/opencv/wiki/TensorFlow-Object-Detection-API). In Figs. 12 and 13 we show the results of applying the object detection model to images from the dataset, and to images generated by the model under the zoom, horizontal shift, and vertical shift transformations for randomly selected values of α, to qualitatively verify that the object detection boundaries are reasonable. Not all ImageNet images contain recognizable objects, so we only use ImageNet classes containing objects recognizable by the detector for this analysis." }, { "heading": "B.4 ALTERNATIVE WALKS IN BIGGAN", "text": "" }, { "heading": "B.4.1 LPIPS OBJECTIVE", "text": "In the main text, we learn the latent space walk w by minimizing the objective function\nJ(w, α) = E_z[L(G(z + αw), edit(G(z), α))], (8)\nusing a Euclidean loss for L. In Fig. 14 we show qualitative results using the LPIPS perceptual similarity metric (Zhang et al., 2018) instead of the Euclidean loss. Walks were trained using the same parameters as those in the linear-L2 walk shown in the main text: we use 20k samples for training, with the Adam optimizer and learning rate 0.001 for zoom and color, and 0.0001 for the remaining edit operations (due to the scaling of α)." }, { "heading": "B.4.2 NON-LINEAR WALKS", "text": "Following the non-linear walk setup of Appendix A.4, we modify our objective to use discrete step sizes ε rather than continuous steps. We learn a function F(z) that performs this ε-step transformation on a given latent code z, where F(z) is parametrized with a neural network. We show qualitative results in Fig. 15. We perform the same set of experiments shown in the main text using this nonlinear walk in Fig. 16. These experiments
One interesting property of the Progressive GAN interpolations is that they take much longer to train to have a visual effect – for example for color, we could obtain drastic color changes in Stylegan W latent space using as few as 2k samples, but with progressive gan, we used 60k samples and still did not obtain as strong of an effect. This points to the Stylegan w latent space being more “flexible” and generalizable for transformation, compared to the latent space of progressive GAN. Moreover, we qualitatively observe some entanglement in the progressive gan transformations – for example, changing the level of zoom also changes the lighting. We did not observe big effects in the horizontal and vertical shift transformations. Qualitative examples and quantitative results are shown in Figs. 26, 27." }, { "heading": "B.7 QUALITATIVE EXAMPLES FOR ADDITIONAL TRANSFORMATIONS", "text": "Since the color transformation operates on individual pixels, we can optimize the walk using a segmented target – for example when learning a walk for cars, we only modify pixels in segmented car region when generating edit(G(z), α). StyleGAN is able to roughly localize the color transformation to this region, suggesting disentanglement of different objects within theW latent space (Fig. 29 left) as also noted in Karras et al. (2018); Shen et al. (2019). We also show qualitative results for adjust image contrast (Fig. 29 right), and for combining zoom, shift X, and shift Y transformations (Fig. 30)." }, { "heading": "B.8 ADDITIONAL RESULTS FOR IMPROVING MODEL STEERABILITY", "text": "We further test the hypothesis that dataset variability impacts the amount we are able to transform by comparing DCGAN models trained with and without data augmentation. Namely, with data augmentation, the discriminator is able to see edited versions of the real images. We also jointly train the model and the walk trajectory which encourages the model to learn linear walks. For zoom, horizontal shift, and 2D rotate transformations, additional samples for three training approaches – without data augmentation, with data augmentation, and joint optimization – appear in Fig. 31-33. Qualitatively, transformations using the model trained without data augmentation degrade the digit structure as α magnitude increases, and may even change one digit to another. Training with data augmentation and joint optimization better preserves digit structure and identity." }, { "heading": "ZoomShift YShift XLuminance", "text": "Luminance\nRotate 2D\nShift X Shift Y\nZoom\n1.0 0.5 0.0 0.5 1.0 ↵\n0.0\n0.2\n0.4\n0.6\n0.8\nP er\nce p tu\nal D\nis ta\nn ce\n400 200 0 200 400 ↵\n0.0\n0.2\n0.4\n0.6 0.8 P er ce p tu al D is ta n ce\n400 200 0 200 400 ↵\n0.0\n0.2\n0.4\n0.6\n0.8\nP er\nce p tu\nal D\nis ta\nn ce\nRotate 3D" }, { "heading": "Luminance Shift X Shift Y Zoom", "text": "" }, { "heading": "Luminance Shift X Shift Y Zoom", "text": "" } ]
2020
null
SP:1572adb9d8abb9edd60d11e7cdd0e48cfdf5bd4b
[ "The paper addresses a question on whether mutual information (MI) based models for representation learning succeed primarily thanks to the MI maximization. The motivation of the work comes from the fact that although MI is known to be problematic in treatment, it has been successfully applied in a number of recent works in computer vision and natural language processing. The paper conducts a series of experiments that constitute a convincing evidence for a weak connection between the InfoMax principle and these practical successes by showing that maximizing established lower bounds on MI are not predictive of the downstream performance and that contrary to the theory higher capacity instantiations of the critics of MI may result in worse downstream performance of learned representations. The paper concludes that there is a considerable inductive bias in the architectural choices inside MI models that are beneficial for downstream tasks and note that at least one of the lower bounds on MI can be interpreted as a triplet loss connecting it with a metric learning approach.", "This paper gives a nice interpretation why recent works that are based on variational lower bounds of mutual information can demonstrate promising empirical results, where they argue that the success depends on \"the inductive biasin both the choice of feature extractor architectures and the parametrization of theemployed MI estimators.\" To support this argument, they carefully design a series convincing experiments which are stated in full in Section 3. Moreover they show some connection to metric learning. " ]
Many recent methods for unsupervised or self-supervised representation learning train feature extractors by maximizing an estimate of the mutual information (MI) between different views of the data. This comes with several immediate problems: For example, MI is notoriously hard to estimate, and using it as an objective for representation learning may lead to highly entangled representations due to its invariance under arbitrary invertible transformations. Nevertheless, these methods have been repeatedly shown to excel in practice. In this paper we argue, and provide empirical evidence, that the success of these methods cannot be attributed to the properties of MI alone, and that they strongly depend on the inductive bias in both the choice of feature extractor architectures and the parametrization of the employed MI estimators. Finally, we establish a connection to deep metric learning and argue that this interpretation may be a plausible explanation for the success of the recently introduced methods.
[ { "affiliations": [], "name": "Michael Tschannen" }, { "affiliations": [], "name": "Josip Djolonga" }, { "affiliations": [], "name": "Paul K. Rubenstein" }, { "affiliations": [], "name": "Sylvain Gelly" }, { "affiliations": [], "name": "Mario Lucic" } ]
[ { "authors": [ "Alexander Alemi", "Ben Poole", "Ian Fischer", "Joshua Dillon", "Rif A Saurous", "Kevin Murphy" ], "title": "Fixing a Broken ELBO", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Galen Andrew", "Raman Arora", "Jeff Bilmes", "Karen Livescu" ], "title": "Deep canonical correlation analysis", "venue": "In International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Sanjeev Arora", "Hrishikesh Khandeparkar", "Mikhail Khodak", "Orestis Plevrakis", "Nikunj" ], "title": "Saunshi. A theoretical analysis of contrastive unsupervised representation learning", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Philip Bachman", "R Devon Hjelm", "William Buchwalter" ], "title": "Learning representations by maximizing mutual information across views", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "David Barber", "Felix V Agakov" ], "title": "The IM algorithm: a variational approach to information maximization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2003 }, { "authors": [ "Suzanna Becker", "Geoffrey E Hinton" ], "title": "Self-organizing neural network that discovers surfaces in random-dot stereograms", "venue": null, "year": 1992 }, { "authors": [ "Mohamed Ishmael Belghazi", "Aristide Baratin", "Sai Rajeshwar", "Sherjil Ozair", "Yoshua Bengio", "Devon Hjelm", "Aaron Courville" ], "title": "Mutual information neural estimation", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Anthony J Bell", "Terrence J Sejnowski" ], "title": "An information-maximization approach to blind separation and blind deconvolution", "venue": "Neural computation,", "year": 1995 }, { "authors": [ "Yochai Blau", "Tomer Michaeli" ], "title": "Rethinking lossy compression: The rate-distortion-perception tradeoff", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "John S Bridle", "Anthony JR Heading", "David JC MacKay" ], "title": "Unsupervised classifiers, mutual information and phantom targets", "venue": "In Advances in Neural Information Processing Systems,", "year": 1992 }, { "authors": [ "Aaron Defazio", "Francis Bach", "Simon Lacoste-Julien" ], "title": "SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": "In Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2019 }, { "authors": [ "Laurent Dinh", "Jascha Sohl-Dickstein", "Samy Bengio" ], "title": "Density estimation using real nvp", "venue": "arXiv preprint arXiv:1605.08803,", "year": 2016 }, { "authors": [ "Olivier J Hénaff", "Ali Razavi", "Carl Doersch", "SM Eslami", "Aaron van den Oord" ], "title": "Data-efficient image recognition with contrastive predictive coding", "venue": null, "year": 1905 }, { "authors": [ "R Devon Hjelm", "Alex Fedorov", "Samuel Lavoie-Marchildon", "Karan Grewal", "Phil Bachman", "Adam Trischler", "Yoshua Bengio" ], "title": "Learning deep representations by mutual information estimation and maximization", "venue": "In International Conference on Learning 
Representations,", "year": 2019 }, { "authors": [ "Elad Hoffer", "Nir Ailon" ], "title": "Deep metric learning using triplet network", "venue": "In International Workshop on Similarity-Based Pattern Recognition,", "year": 2015 }, { "authors": [ "Weihua Hu", "Takeru Miyato", "Seiya Tokui", "Eiichi Matsumoto", "Masashi Sugiyama" ], "title": "Learning discrete representations via information maximizing self-augmented training", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Xu Ji", "João F Henriques", "Andrea Vedaldi" ], "title": "Invariant information clustering for unsupervised image classification and segmentation", "venue": "In IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "International Conference on Learning Representation,", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational Bayes", "venue": "International Conference on Learning Representation,", "year": 2014 }, { "authors": [ "Alexander Kolesnikov", "Xiaohua Zhai", "Lucas Beyer" ], "title": "Revisiting self-supervised visual representation learning", "venue": "International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Alexander Kraskov", "Harald Stögbauer", "Peter Grassberger" ], "title": "Estimating mutual information", "venue": "Physical review E,", "year": 2004 }, { "authors": [ "Andreas Krause", "Pietro Perona", "Ryan G Gomes" ], "title": "Discriminative clustering by regularized information maximization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2010 }, { "authors": [ "Ralph Linsker" ], "title": "Self-organization in a perceptual network", "venue": null, "year": 1988 }, { "authors": [ "Zhuang Ma", "Michael Collins" ], "title": "Noise contrastive estimation and negative sampling for conditional models: Consistency and statistical efficiency", "venue": "arXiv preprint arXiv:1809.01812,", "year": 2018 }, { "authors": [ "David McAllester", "Karl Statos" ], "title": "Formal limitations on the measurement of mutual information", "venue": "arXiv preprint arXiv:1811.04251,", "year": 2018 }, { "authors": [ "XuanLong Nguyen", "Martin J Wainwright", "Michael I Jordan" ], "title": "Estimating divergence functionals and the likelihood ratio by convex risk minimization", "venue": "IEEE Transactions on Information Theory,", "year": 2010 }, { "authors": [ "Mohammad Norouzi", "David J Fleet", "Ruslan R Salakhutdinov" ], "title": "Hamming distance metric learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2012 }, { "authors": [ "Sebastian Nowozin", "Botond Cseke", "Ryota Tomioka" ], "title": "f-GAN: Training generative neural samplers using variational divergence minimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Sherjil Ozair", "Corey Lynch", "Yoshua Bengio", "Aaron van den Oord", "Sergey Levine", "Pierre Sermanet" ], "title": "Wasserstein dependency measure for representation learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Fabian Pedregosa", "Gaël Varoquaux", "Alexandre Gramfort", "Vincent Michel", "Bertrand Thirion", "Olivier Grisel", "Mathieu Blondel", "Peter Prettenhofer", "Ron Weiss", "Vincent Dubourg" ], "title": "Scikit-learn: Machine learning in python", "venue": "Journal of Machine Learning 
Research,", "year": 2011 }, { "authors": [ "Matthew E Peters", "Mark Neumann", "Mohit Iyyer", "Matt Gardner", "Christopher Clark", "Kenton Lee", "Luke Zettlemoyer" ], "title": "Deep contextualized word representations", "venue": "In Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2018 }, { "authors": [ "Ben Poole", "Sherjil Ozair", "Aaron van den Oord", "Alex Alemi", "George Tucker" ], "title": "On variational bounds of mutual information", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Tom Rainforth", "Adam Kosiorek", "Tuan Anh Le", "Chris Maddison", "Maximilian Igl", "Frank Wood", "Yee Whye Teh" ], "title": "Tighter variational bounds are not necessarily better", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Avraham Ruderman", "Mark D Reid", "Darío García-García", "James Petterson" ], "title": "Tighter variational representations of f-divergences via restriction to probability measures", "venue": "In International Conference on Machine Learning,", "year": 2012 }, { "authors": [ "Florian Schroff", "Dmitry Kalenichenko", "James Philbin" ], "title": "Facenet: A unified embedding for face recognition and clustering", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2015 }, { "authors": [ "Pierre Sermanet", "Corey Lynch", "Yevgen Chebotar", "Jasmine Hsu", "Eric Jang", "Stefan Schaal", "Sergey Levine" ], "title": "Time-contrastive networks: Self-supervised learning from video", "venue": "In IEEE International Conference on Robotics and Automation,", "year": 2018 }, { "authors": [ "Kihyuk Sohn" ], "title": "Improved deep metric learning with multi-class n-pair loss objective", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Chen Sun", "Fabien Baradel", "Kevin Murphy", "Cordelia Schmid" ], "title": "Contrastive bidirectional transformer for temporal representation learning", "venue": "arXiv preprint arXiv:1906.05743,", "year": 2019 }, { "authors": [ "Yonglong Tian", "Dilip Krishnan", "Phillip Isola" ], "title": "Contrastive multiview coding", "venue": "arXiv preprint arXiv:1906.05849,", "year": 2019 }, { "authors": [ "Michael Tschannen", "Olivier Bachem", "Mario Lucic" ], "title": "Recent advances in autoencoder-based representation learning", "venue": "arXiv preprint arXiv:1812.05069,", "year": 2018 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Xiaolong Wang", "Abhinav Gupta" ], "title": "Unsupervised learning of visual representations using videos", "venue": "In IEEE International Conference on Computer Vision,", "year": 2015 }, { "authors": [ "Yilun Xu", "Shengjia Zhao", "Jiaming Song", "Russell Stewart", "Stefano Ermon" ], "title": "A Theory of Usable Information under Computational Constraints", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Hong-Xing Yu", "Ancong Wu", "Wei-Shi Zheng" ], "title": "Cross-view asymmetric metric learning for unsupervised person re-identification", "venue": "In IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Ji Zhang", "Yannis Kalantidis", "Marcus Rohrbach", "Manohar Paluri", "Ahmed Elgammal", "Mohamed Elhoseiny" ], "title": "Large-scale visual 
relationship understanding", "venue": "In AAAI Conference on Artificial Intelligence,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Unsupervised representation learning is a fundamental problem in machine learning. Intuitively, one aims to learn a function g which maps the data into some, usually lower-dimensional, space where one can solve some (generally a priori unknown) target supervised tasks more efficiently, i.e. with fewer labels. In contrast to supervised and semi-supervised learning, the learner has access only to unlabeled data. Even though the task seems ill-posed as there is no natural objective one should optimize, by leveraging domain knowledge this approach can be successfully applied to a variety of problem areas, including image (Kolesnikov et al., 2019; van den Oord et al., 2018; Hénaff et al., 2019; Tian et al., 2019; Hjelm et al., 2019; Bachman et al., 2019) and video classification (Wang and Gupta, 2015; Sun et al., 2019), and natural language understanding (van den Oord et al., 2018; Peters et al., 2018; Devlin et al., 2019).\nRecently, there has been a revival of approaches inspired by the InfoMax principle (Linsker, 1988): Choose a representation g(x) maximizing the mutual information (MI) between the input and its representation, possibly subject to some structural constraints. MI measures the amount of information obtained about a random variable X by observing some other random variable Y 1 Formally, the MI between X and Y , with joint density p(x, y) and marginal densities p(x) and p(y), is defined as the Kullback–Leibler (KL) divergence between the joint and the product of the marginals\nI(X;Y ) = DKL (p(x, y) ‖ p(x)p(y)) = Ep(x,y) [ log p(x, y)\np(x)p(y)\n] . (1)\nThe fundamental properties of MI are well understood and have been extensively studied (see e.g. Kraskov et al. (2004)). Firstly, MI is invariant under reparametrization of the variables — namely, if X ′ = f1(X) and Y ′ = f2(Y ) are homeomorphisms (i.e. smooth invertible maps), then I(X;Y ) = I(X ′;Y ′). Secondly, estimating MI in high-dimensional spaces is a notoriously difficult task, and in practice one often maximizes a tractable lower bound on this quantity (Poole et al., 2019).\n∗Equal contribution. Correspondence to Michael Tschannen (tschannen@google.com), Josip Djolonga (josipd@google.com), and Mario Lucic (lucic@google.com). †PhD student at University of Cambridge and the Max Planck Institute for Intelligent Systems, Tübingen.\n1We denote random variables using upper-case letters (e.g. X , Y ), and their realizations by the corresponding lower-case letter (e.g. x, y).\nNonetheless, any distribution-free high-confidence lower bound on entropy requires a sample size exponential in the size of the bound (McAllester and Statos, 2018).\nDespite these fundamental challenges, several recent works have demonstrated promising empirical results in representation learning using MI maximization (van den Oord et al., 2018; Hénaff et al., 2019; Tian et al., 2019; Hjelm et al., 2019; Bachman et al., 2019; Sun et al., 2019). In this work we argue, and provide empirical evidence, that the success of these methods cannot be attributed to the properties of MI alone. In fact, we show that maximizing tighter bounds on MI can result in worse representations. 
In addition, we establish a connection to deep metric learning and argue that this interpretation may be a plausible explanation of the success of the recently introduced methods.2" }, { "heading": "2 BACKGROUND AND RELATED WORK", "text": "Recent progress and the InfoMax principle While promising results in other domains have been presented in the literature, we will focus on unsupervised image representation learning techniques that have achieved state-of-the-art performance on image classification tasks (Hénaff et al., 2019; Tian et al., 2019; Bachman et al., 2019). The usual problem setup dates back at least to Becker and Hinton (1992) and can conceptually be described as follows: For a given image X , let X(1) and X(2) be different, possibly overlapping views of X , for instance the top and bottom halves of the image. These are encoded using encoders g1 and g2 respectively, and the MI between the two representations g1(X (1)) and g2(X(2)) is maximized,\nmax g1∈G1,g2∈G2 IEST\n( g1(X (1)); g2(X (2)) ) , (2)\nwhere IEST(X;Y ) is a sample-based estimator of the true MI I(X;Y ) and the function classes G1 and G2 can be used to specify structural constraints on the encoders. While not explicitly reflected in (2), note that g1 and g2 can often share parameters. Furthermore, it can be shown that I(g1(X (1)); g2(X (2))) ≤ I(X; g1(X(1)), g2(X(2))),3 hence the objective in (2) can be seen as a lower bound on the InfoMax objective maxg∈G I(X; g(X)) (Linsker, 1988).\nPractical advantages of multi-view formulations There are two main advantages in using (2) rather than the original InfoMax objective. First, the MI has to be estimated only between the learned representations of the two views, which typically lie on a much lower-dimensional space than the one where the original data X lives. Second, it gives us plenty of modeling flexibility, as the two views can be chosen to capture completely different aspects and modalities of the data, for example:\n1. In the basic form of DeepInfoMax (Hjelm et al., 2019) g1 extracts global features from the entire image X(1) and g2 local features from image patches X(2), where g1 and g2 correspond to activations in different layers of the same convolutional network. Bachman et al. (2019) build on this and compute the two views from different augmentations of the same image.\n2. Contrastive multiview coding (CMC) (Tian et al., 2019) generalizes the objective in (2) to consider multiple views X(i), where each X(i) corresponds to a different image modality (e.g., different color channels, or the image and its segmentation mask).\n3. Contrastive predictive coding (CPC) (van den Oord et al., 2018; Hénaff et al., 2019) incorporates a sequential component of the data. Concretely, one extracts a sequence of patches from an image in some fixed order, maps each patch using an encoder, aggregates the resulting features of the first t patches into a context vector, and maximizes the MI between the context and features extracted from the patch at position t+ k. In (2), X(1) would thus correspond to the first t patches and X(2) to the patch at location t+ k.\nOther approaches, such as those presented by Sermanet et al. (2018), Hu et al. (2017), and Ji et al. (2019), can be similarly subsumed under the same objective.\nLower bounds on MI As evident from (2), another critical choice is the MI estimator IEST. 
Given the fundamental limitations of MI estimation (McAllester and Stratos, 2018), recent work has focused on deriving lower bounds on MI (Barber and Agakov, 2003; Belghazi et al., 2018; Poole et al., 2019). Intuitively, these bounds are based on the following idea: If a classifier can accurately distinguish between samples drawn from the joint p(x, y) and those drawn from the product of marginals p(x)p(y), then X and Y have a high MI.\n2The code for running the experiments and visualizing the results is available at https://github.com/google-research/google-research/tree/master/mutual_information_representation_learning.\n3Follows from the data processing inequality (see Prop. 1 in Appendix A).\nWe will focus on two such estimators, which are most commonly used in the representation learning literature. The first of them, termed InfoNCE (van den Oord et al., 2018), is defined as\nI(X;Y) ≥ E[ (1/K) Σ_{i=1}^{K} log( e^{f(x_i,y_i)} / ( (1/K) Σ_{j=1}^{K} e^{f(x_i,y_j)} ) ) ] ≜ INCE(X;Y), (3)\nwhere the expectation is over K independent samples {(x_i, y_i)}_{i=1}^{K} from the joint distribution p(x, y) (Poole et al., 2019). In practice we estimate (3) using Monte Carlo estimation by averaging over multiple batches of samples. Intuitively, the critic function f tries to predict for each x_i which of the K samples y_1, . . . , y_K it was jointly drawn with, by assigning high values to the jointly drawn pair, and low values to all other pairs. The second estimator is based on the variational form of the KL divergence due to Nguyen, Wainwright, and Jordan (NWJ) (Nguyen et al., 2010) and takes the form\nI(X;Y) ≥ E_{p(x,y)}[f(x, y)] − e^{−1} E_{p(x)}[E_{p(y)} e^{f(x,y)}] ≜ INWJ(X;Y). (4)\nFor detailed derivations we refer the reader to Ruderman et al. (2012) and Poole et al. (2019). Note that these bounds hold for any critic f, and when used in (2) one in practice jointly maximizes over g1, g2, and f. Furthermore, it can be shown that (3) is maximized by f∗(x, y) = log p(y|x) and (4) by f∗(x, y) = 1 + log p(y|x) (Poole et al., 2019). Common choices for f include bilinear critics f(x, y) = x⊤Wy (van den Oord et al., 2018; Hénaff et al., 2019; Tian et al., 2019), separable critics f(x, y) = φ1(x)⊤φ2(y) (Bachman et al., 2019), and concatenated critics f(x, y) = φ([x, y]) (Hjelm et al., 2019) (here φ, φ1, φ2 are typically shallow multi-layer perceptrons (MLPs)). When applying these estimators to solve (2), the line between the critic and the encoders g1, g2 can be blurry. For example, one can train with an inner product critic f(x, y) = x⊤y, but extract features from an intermediate layer of g1, g2, in which case the top layers of g1, g2 form a separable critic. Nevertheless, this boundary is crucial for the interplay between MI estimation and the interpretation of the learned representations." }
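For concreteness, both bounds can be computed from a single K × K matrix of critic scores on a batch of K jointly drawn pairs; a minimal PyTorch sketch is given below (an illustration of Eqs. (3) and (4), not code from the referenced repository).

```python
import math
import torch

def info_nce(scores):
    """INCE (Eq. 3) from scores[i, j] = f(x_i, y_j); the diagonal holds
    the K pairs drawn jointly from p(x, y)."""
    k = scores.shape[0]
    return (scores.diag() - torch.logsumexp(scores, dim=1)).mean() + math.log(k)

def i_nwj(scores):
    """INWJ (Eq. 4) from the same matrix; the off-diagonal entries serve
    as samples from the product of marginals p(x)p(y)."""
    k = scores.shape[0]
    mask = torch.eye(k, dtype=torch.bool)
    marg = torch.exp(scores.masked_fill(mask, float("-inf"))).sum() / (k * (k - 1))
    return scores.diag().mean() - marg / math.e
```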
, { "heading": "3 BIASES IN APPROXIMATE INFORMATION MAXIMIZATION", "text": "It is folklore knowledge that maximizing MI does not necessarily lead to useful representations. Already Linsker (1988) talks in his seminal work about constraints, while a manifestation of the problem in clustering approaches using MI criteria has been brought up by Bridle et al. (1992) and subsequently addressed using regularization by Krause et al. (2010). To what can we then attribute the recent success of methods building on the principles of MI maximization? We will argue that their connection to the InfoMax principle might be very loose. Namely, we will show that they behave counter-intuitively if one equates them with MI maximization, and that the performance of these methods depends strongly on the bias that is encoded not only in the encoders, but also in the actual form of the used estimators.\n1. We first consider encoders which are bijective by design. Even though the true MI is maximized for any choice of model parameters, the representation quality (measured by downstream linear classification accuracy) improves during training. Furthermore, there exist invertible encoders for which the representation quality is worse than using raw pixels, despite also maximizing MI.\n2. We next consider encoders that can model both invertible and non-invertible functions. When the encoder can be non-invertible, but is initialized to be invertible, IEST still biases the encoders to be very ill-conditioned and hard to invert.\n3. For INCE and INWJ, higher-capacity critics admit tighter bounds on MI. We demonstrate that simple critics yielding loose bounds can lead to better representations than high-capacity critics.\n4. Finally, we optimize the estimators to the same MI lower-bound value with different encoder architectures and show that the representation quality can be impacted more by the choice of the architecture than by the estimator.\nAs a consequence, we argue that the success of these methods and the way they are instantiated in practice is only loosely connected to MI. Then, in Section 4 we provide an alternative explanation for the success of recent methods through a connection to classic triplet losses from metric learning.\nSetup Our goal is to provide a minimal set of easily reproducible empirical experiments to understand the role of MI estimators, critic, and encoder architectures when learning representations via the objective (2). To this end, we consider a simple setup of learning a representation of the top half of MNIST handwritten digit images (we present results for the experiments from Sections 3.2 and 3.3 on CIFAR10 in Appendix G; the conclusions are analogous). This setup has been used in the context of deep canonical correlation analysis (Andrew et al., 2013), where the target is to maximize the correlation between the representations. Following the widely adopted downstream linear evaluation protocol (Kolesnikov et al., 2019; van den Oord et al., 2018; Hénaff et al., 2019; Tian et al., 2019; Hjelm et al., 2019; Bachman et al., 2019), we train a linear classifier (using SAGA (Defazio et al., 2014), as implemented in scikit-learn (Pedregosa et al., 2011)) for digit classification on the learned representation using all available training labels (other evaluation protocols are discussed in Section 5). To learn the representation we instantiate (2) and split each input MNIST image x ∈ [0, 1]^784 into two parts, the top part of the image x_top ∈ [0, 1]^392 corresponding to X(1), and the bottom part, x_bottom ∈ [0, 1]^392, corresponding to X(2). We train g1, g2, and f using the Adam optimizer (Kingma and Ba, 2015), and use g1(x_top) as the representation for the linear evaluation. Unless stated otherwise, we use a bilinear critic f(x, y) = x⊤Wy (we investigate its effect in a separate ablation study), set the batch size to 128, and set the learning rate to 10^−4 (note that INCE is upper-bounded by log(batch size) ≈ 4.85 (van den Oord et al., 2018); we experimented with batch sizes up to 512 and obtained consistent results aligned with the stated conclusions). Throughout, IEST values and downstream classification accuracies are averaged over 20 runs and reported on the testing set (we did not observe large gaps between the training and testing values of IEST). As a common baseline, we rely on a linear classifier in pixel space on x_top, which obtains a testing accuracy of about 85%. For comparison, a simple MLP or ConvNet architecture achieves about 94% (see Section 3.3 for details)." }
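A sketch of this setup, using the bilinear critic together with the info_nce estimator sketched in Section 2 above, is given below; the encoder interfaces and the exact training loop are assumptions for illustration.

```python
import torch
import torch.nn as nn

class BilinearCritic(nn.Module):
    """f(x, y) = x^T W y, returning the K x K score matrix for a batch."""
    def __init__(self, dim):
        super().__init__()
        self.W = nn.Parameter(0.01 * torch.randn(dim, dim))

    def forward(self, u, v):
        return u @ self.W @ v.t()

def training_step(g1, g2, critic, opt, images):
    """One step of maximizing INCE between the two MNIST halves.
    `images` is a (K, 784) batch in [0, 1]; rows are flattened 28 x 28
    digits, so the first 392 entries form the top half."""
    x_top, x_bottom = images[:, :392], images[:, 392:]
    scores = critic(g1(x_top), g2(x_bottom))
    loss = -info_nce(scores)  # maximize the lower bound
    opt.zero_grad(); loss.backward(); opt.step()
    return -loss.item()
```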
, { "heading": "3.1 LARGE MI IS NOT PREDICTIVE OF DOWNSTREAM PERFORMANCE", "text": "We start by investigating the behavior of INCE and INWJ when g1 and g2 are parameterized to always be invertible. Hence, for any choice of the encoder parameters, the MI is constant, i.e. I(g1(X(1)); g2(X(2))) = I(X(1); X(2)) for all g1, g2. This means that if we could exactly compute the MI, any parameter choice would be a global maximizer and thus the gradients would vanish everywhere.6 However, as we will empirically show, the estimators we consider are biased and prefer those settings which yield representations useful for the downstream classification task.\n6In the context of continuous distributions and invertible representation functions g, the InfoMax objective might be infinite. Bell and Sejnowski (1995) suggest to instead maximize the entropy of the representation. In our case the MI between the two views is finite, as the two halves are not deterministic functions of one another.\nMaximized MI and improved downstream performance We model g1 and g2 using the invertible RealNVP architecture (Dinh et al., 2016). We use a total of 30 coupling layers, and each of them computes the shift using a separate MLP with two ReLU hidden layers, each with 512 units.\nFigure 1 shows the testing value of IEST and the testing accuracy on the classification task. Despite the fact that MI is maximized by any instantiation of g1 and g2, IEST and the downstream accuracy increase during training, implying that the estimators provide gradient feedback leading to a representation useful for linear classification. This confirms our hypothesis that the estimator biases the encoders towards solutions suitable for solving the downstream linear classification task.\nThe previous experiment demonstrated that among many invertible encoders, all of which are globally optimal MI maximizers, some give rise to improved linear classification performance over raw pixels, and maximizing INCE and INWJ yields such encoders. Next we demonstrate that for the same invertible encoder architecture there are model parameters for which the linear classification performance is significantly worse than using raw pixels, despite also being globally optimal MI maximizers.\nMaximized MI and worsened downstream performance The goal is to learn a (bijective) representation maximizing MI such that the optimal linear classifier performs poorly; we achieve this by jointly training a representation and classifier in an adversarial fashion (a separate classifier is trained for the evaluation), without using an MI estimator. Intuitively, we will train the encoder to make the classification task for the linear layer as hard as possible. The experimental details are presented in Appendix B. Figure 1c shows the result of one such training run, displaying the loss of a separately trained classifier on top of the frozen representation. At the beginning of training the network is initialized to be close to the identity mapping, and as such achieves the baseline classification accuracy corresponding to raw pixels.
All points beyond this correspond to invertible feature maps with worse classification performance, despite still achieving globally maximal MI.\nAlternatively, the following thought experiment would yield the same conclusion: Using a lossless compression algorithm (e.g. PNG) for g1 and g2 also satisfies I(g1(X(1)); g2(X(2))) = I(X(1);X(2)). Yet, performing linear classification on the raw compressed bit stream g1(X(1)) will likely lead to worse performance than the baseline in pixel space. The information content alone is not sufficient to guarantee a useful geometry in the representation space.\nWe next investigate the behavior of the model if we use a network architecture that can model both invertible and non-invertible functions. We would like to understand whether IEST prefers the network to remain bijective, thus maximizing the true MI, or to ignore part of the input signal, which can be beneficial for representation learning.\nBias towards hard-to-invert encoders We use an MLP architecture with 4 hidden layers of the same dimension as the input, and with a skip connection added to each layer (hence by setting all weights to 0 the network becomes the identity function). As quantifying invertibility is hard, we analyze the condition number, i.e. the ratio between the largest and the smallest singular value, of the Jacobian of g1: By the implicit function theorem, the function is invertible if the Jacobian is non-singular.7 However, the data itself might lie on a low-dimensional manifold, so that having a singular Jacobian is not necessarily indicative of losing invertibility on the support of the data\n7Formally, g1 is invertible as long as the condition number of the Jacobian is finite. Numerically, inversion becomes harder as the condition number increases.\ndistribution. To ensure the support of the data distribution covers the complete input space, we corrupt X(1) and X(2) in a coupled way by adding to each the same 392-dimensional random vector, whose coordinates are sampled (independently of X(1), X(2)) from a normal with standard deviation 0.05 (the standard deviation of the pixels themselves is 0.3). Hence, non-invertible encoders g1, g2 do not maximize I(g1(X(1)); g2(X(2))). 8 As a reference point, the linear classification accuracy from pixels drops to about 84% due to the added noise.\nIn Figure 2 we can see that the IEST value and the downstream accuracy both increase during training, as before. Moreover, even though g1 is initialized very close to the identity function (which maximizes the true MI), the condition number of its Jacobian evaluated at inputs randomly sampled from the data-distribution steadily deteriorates over time, suggesting that in practice (i.e. numerically) inverting the model becomes increasingly hard. It therefore seems that the bounds we consider favor hard-to-invert encoders, which heavily attenuate part of the noise (as the support of the noise is the entire input space), over well conditioned encoders (such as the identity function at initialization), which preserve the noise and hence the entropy of the data well." }, { "heading": "3.2 HIGHER CAPACITY CRITICS CAN LEAD TO WORSE DOWNSTREAM PERFORMANCE", "text": "In the previous section we have established that MI and downstream performance are only loosely connected. Clearly, maximizing MI is not sufficient to learn good representations and there is a non-trivial interplay between the architectures of the encoder, critic, and the underlying estimators. 
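As an aside, the condition-number diagnostic used in the hard-to-invert-encoder experiment above is straightforward to compute with automatic differentiation. A minimal sketch, assuming PyTorch (the function name is ours):

```python
import torch

def jacobian_condition_number(encoder, x):
    # Condition number (largest / smallest singular value) of the Jacobian
    # of the encoder at a single input x; growing values over training
    # indicate that the encoder is becoming numerically hard to invert.
    J = torch.autograd.functional.jacobian(encoder, x)
    J = J.reshape(-1, x.numel())        # flatten to (output dim, input dim)
    s = torch.linalg.svdvals(J)         # singular values, descending order
    return (s[0] / s[-1]).item()
```

Averaging this quantity over inputs sampled from the data distribution gives the kind of trace reported for Figure 2.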
In this section, we will focus on how one of these factors, namely the critic architecture, impacts the quality of the learned representation. Recall that the critic determines how estimators such as I_NCE and I_NWJ distinguish between samples from the joint distribution p(x, y) and the product of the marginals p(x)p(y), and thereby determines the tightness of the lower bound. A higher-capacity critic should allow for a tighter lower bound on MI (Belghazi et al., 2018). Furthermore, in the context of representation learning where f is instantiated as a neural network, the critic provides gradient feedback to g_1 and g_2 and thereby shapes the learned representation.

Looser bounds with simpler critics can lead to better representations We compare three critic architectures: a bilinear critic, a separable critic f(x, y) = \phi_1(x)^\top \phi_2(y) (\phi_1, \phi_2 are MLPs with a single hidden layer with 100 units and ReLU activations, followed by a linear layer with 100 units; comprising 40k parameters in total), and an MLP critic with a single hidden layer with 200 units and ReLU activations, applied to the concatenated input [x, y] (40k trainable parameters); sketches of the latter two critics are given below. Further, we use identical MLP architectures for g_1 and g_2 with two hidden layers comprising 300 units each, and a third linear layer mapping to a 100-dimensional feature space.

Figure 3 shows the downstream testing accuracy and the testing I_EST value as a function of the iteration (see Appendix G for the corresponding results on CIFAR10). It can be seen that for both lower bounds, representations trained with the MLP critic barely outperform the baseline on pixel space, whereas the same lower bounds with bilinear and separable critics clearly lead to a higher accuracy than the baseline. While the testing I_NCE value is close to the theoretically achievable maximum value for all critics, the testing I_NWJ value is higher for the MLP critic than for the separable and bilinear critics, resulting in a tighter bound on the MI. However, despite achieving the smallest I_NWJ testing value, the simple bilinear critic leads to a better downstream performance than the higher-capacity separable and MLP critics.

8 This would not necessarily be true if the noise were added in an uncoupled manner, e.g. by drawing it independently for X^{(1)} and X^{(2)}, as the MI between the two noise vectors is 0 in that case.

A related phenomenon was observed in the context of variational autoencoders (VAEs) (Kingma and Welling, 2014), where one maximizes a lower bound on the data likelihood: looser bounds often yield better inference models, i.e. latent representations (Rainforth et al., 2018)." }, { "heading": "3.3 ENCODER ARCHITECTURE CAN BE MORE IMPORTANT THAN THE SPECIFIC ESTIMATOR", "text": "We will now show that the encoder architecture is a critical design choice, and we will investigate its effect on the learned representation. We consider the same MLP architecture (238k parameters) as in Section 3.2, as well as a ConvNet architecture comprising two convolution layers (with a 5 × 5 kernel, stride of 2, ReLU activations, and 64 and 128 channels, respectively; 220k parameters), followed by spatial average pooling and a fully connected layer. Before the average pooling operation we apply layer normalization (Ba et al., 2016), which greatly reduces the variance of I_NWJ.9 To ensure that both network architectures achieve the same lower bound I_EST on the MI, we minimize L_t(g_1, g_2) = |I_EST(g_1(X^{(1)}); g_2(X^{(2)})) - t| instead of solving (2), for two different values t = 2, 4.
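As referenced in Section 3.2 above, minimal PyTorch sketches of the separable and concatenation-based MLP critics follow (class names are ours; the bilinear critic appeared in the earlier setup sketch):

```python
import torch
import torch.nn as nn

class SeparableCritic(nn.Module):
    # f(x, y) = phi1(x)^T phi2(y); phi1, phi2 are one-hidden-layer MLPs.
    def __init__(self, dim, hidden=100, out=100):
        super().__init__()
        def mlp():
            return nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, out))
        self.phi1, self.phi2 = mlp(), mlp()

    def forward(self, x, y):
        return self.phi1(x) @ self.phi2(y).t()  # all K x K pair scores at once

class ConcatCritic(nn.Module):
    # f(x, y) = MLP([x, y]); must be evaluated on all K^2 pairs explicitly.
    def __init__(self, dim, hidden=200):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x, y):
        K = x.shape[0]
        xx = x.unsqueeze(1).expand(-1, K, -1)   # (K, K, dim)
        yy = y.unsqueeze(0).expand(K, -1, -1)   # (K, K, dim)
        return self.net(torch.cat([xx, yy], dim=-1)).squeeze(-1)
```

The quadratic number of pair evaluations for the concatenation critic is one practical reason the bilinear and separable forms are popular, independent of the representation-quality effects discussed above.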
Figure 4 shows the downstream testing accuracy as a function of the training iteration (see Appendix G for the corresponding results on CIFAR10). It can be seen in the testing loss curves in Appendix F that for both architectures and estimators the objective value after 7k iterations matches the target t (i.e., L_t(g_1, g_2) ≈ 0), which implies that they achieve the same lower bound on the MI. Despite matching lower bounds, ConvNet encoders lead to clearly superior classification accuracy, for both I_NCE and I_NWJ. Note that, in contrast, the MLP and ConvNet architectures trained end-to-end in supervised fashion both achieve essentially the same testing accuracy of about 94%.

In the context of VAEs, Alemi et al. (2018) similarly observed that models achieving the same evidence lower bound value can lead to vastly different representations depending on the employed encoder architecture, and do not necessarily capture useful information about the data (Tschannen et al., 2018; Blau and Michaeli, 2019)." }, { "heading": "4 CONNECTION TO DEEP METRIC LEARNING AND TRIPLET LOSSES", "text": "In the previous section we empirically demonstrated that there is a disconnect between approximate MI maximization and representation quality. However, many recent works have applied the I_NCE estimator to obtain state-of-the-art results in practice. We provide some insight on this conundrum by connecting I_NCE to a popular triplet (k-plet) loss known in the deep metric learning community.

9 LayerNorm avoids the possibility of information leakage within mini-batches that can be induced through batch normalization, potentially leading to poor performance (Hénaff et al., 2019).

The metric learning view Given sets of triplets, namely an anchor point x, a positive instance y, and a negative instance z, the goal is to learn a representation g(x) such that the distance (e.g., \ell_2) between g(x) and g(y) is smaller than the distance between g(x) and g(z), for each triplet. In the supervised setting, the positive instances are usually sampled from the same class, while the negative instances are sampled from any other class. A major focus in deep metric learning is how to perform (semi-)hard negative mining — we want to present non-trivial triplets to the learning algorithm which become more challenging as g improves. Natural extensions to the unsupervised setting can be obtained by exploiting the structure present in the input data, namely spatial (e.g. patches from the same image should be closer than patches from different images) and temporal information (temporally close video frames should be encoded closer than the ones which are further away in time) (Hoffer and Ailon, 2015).

Connection to InfoNCE The InfoNCE objective can be rewritten as follows:

I_{NCE} = \mathbb{E}\left[\frac{1}{K}\sum_{i=1}^{K} \log \frac{e^{f(x_i,y_i)}}{\frac{1}{K}\sum_{j=1}^{K} e^{f(x_i,y_j)}}\right] = \log K - \mathbb{E}\left[\frac{1}{K}\sum_{i=1}^{K} \log\left(1 + \sum_{j\neq i} e^{f(x_i,y_j)-f(x_i,y_i)}\right)\right].

The derivation is presented in Appendix C. In the particular case that x and y take value in the same space and f is constrained to be of the form f(x, y) = \phi(x)^\top \phi(y), for some function \phi, this coincides (up to constants and change of sign) with the expectation of the multi-class K-pair loss proposed in (Sohn, 2016, Eqn. (7)):

L_{\text{K-pair-mc}}\left(\{(x_i, y_i)\}_{i=1}^{K}; \phi\right) = \frac{1}{K}\sum_{i=1}^{K} \log\left(1 + \sum_{j\neq i} e^{\phi(x_i)^\top \phi(y_j) - \phi(x_i)^\top \phi(y_i)}\right). \quad (5)

Representation learning by maximizing I_NCE using a symmetric separable critic f(x, y) = \phi(x)^\top \phi(y) and an encoder g = g_1 = g_2 shared across views is thus equivalent to metric learning based on (5).
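This equivalence is easy to verify numerically. A minimal sketch (NumPy; variable names are ours) checking I_NCE = log K − L_K-pair-mc for an inner-product critic:

```python
import numpy as np

rng = np.random.default_rng(0)
K, d = 128, 16
phi_x = rng.normal(size=(K, d))   # phi(x_i) for a batch of anchors
phi_y = rng.normal(size=(K, d))   # phi(y_i) for the paired positives
scores = phi_x @ phi_y.T          # f(x_i, y_j) = phi(x_i)^T phi(y_j)

# Left-hand side: I_NCE with the batch average in the denominator.
i_nce = np.mean(np.diag(scores) - np.log(np.mean(np.exp(scores), axis=1)))

# Right-hand side: log K minus the multi-class K-pair loss of Eqn. (5).
l_kpair = np.mean([
    np.log1p(np.exp(scores[i] - scores[i, i])[np.arange(K) != i].sum())
    for i in range(K)])

assert np.isclose(i_nce, np.log(K) - l_kpair)
```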
When using different encoders for different views and asymmetric critics, as employed by CPC, DeepInfoMax, and CMC, one recovers asymmetric variants of (5), see, e.g., (Yu et al., 2017; Zhang et al., 2019). As a result, one can view (5) as learning encoders with a parameter-less inner product critic, for which the MI lower bound is very weak in general.

There are (at least) two immediate benefits of viewing recent representation learning methods based on MI estimators through the lens of metric learning. Firstly, in the MI view, using inner product or bilinear critic functions is sub-optimal since the critic should ideally be as flexible as possible in order to reduce the gap between the lower bound and the true MI. In the metric learning view, the inner product critic corresponds to a simple metric on the embedding space. The metric learning view hence seems in better accordance with the observations from Section 3.2 than the MI view. Secondly, it elucidates the importance of appropriately choosing the negative samples, which is indeed a critical component in deep metric learning based on triplet losses (Norouzi et al., 2012; Schroff et al., 2015).

InfoNCE and the importance of negative sampling The negative sample mining issue also manifests itself in MI-based contrastive losses. In fact, while InfoNCE is a lower bound on MI if the negative samples are drawn from the true marginal distribution (Poole et al., 2019), i.e.

I(X;Y) \geq \mathbb{E}_{\prod_k p(x_k,y_k)}\left[\frac{1}{K}\sum_{i=1}^{K} \log \frac{e^{f(x_i,y_i)}}{\frac{1}{K}\sum_{j=1}^{K} e^{f(x_j,y_i)}}\right] \triangleq I_{NCE},

we show that if the negative samples are drawn in a dependent fashion (corresponding to the (x_i, y_i) being drawn identically but not independently), the I_NCE estimator is in general neither a lower nor an upper bound on the true MI I(X;Y). We prove this in Appendix D and present empirical evidence here. Let (X, Y) = Z + ε, where Z ∼ N(0, Σ_Z) and ε ∼ N(0, Σ_ε) are two-dimensional Gaussians. We generate batches of data (X_i, Y_i) = Z + ε_i, where each ε_i is sampled independently for each element of the batch, but Z is sampled only once per batch. As such, (X_i, Y_i) has the same marginal distribution for each i, but the elements of the batch are not independent. Although we do not treat it theoretically, we also display results of the same experiment using the I_NWJ estimator. The experimental details are presented in Appendix E. We observe in Figure 4c that when using non-i.i.d. samples both the I_NCE and I_NWJ values are larger than the true MI, and that when i.i.d. samples are used, both are lower bounds on the true MI. Hence, the connection to MI under improper negative sampling is no longer clear and might vanish completely.

Notwithstanding this fundamental problem, the negative sampling strategy is often treated as a design choice. In Hénaff et al. (2019), CPC is applied to images by partitioning the input image into patches. Then, MI (estimated by InfoNCE) between representations of patches and a context summarizing several patches that are vertically above or below in the same image is maximized. Negative samples are obtained from patches of different images as well as patches from the same image, violating the independence assumption. Similarly, van den Oord et al. (2018) learn representations of speech using samples from a variety of speakers.
It was found that using utterances from the same speaker as negative samples is more effective, whereas the “proper” negative samples should be drawn from an appropriate mixture of utterances from all speakers.\nA common observation is that increasing the number of negative examples helps in practice (Hjelm et al., 2019; Tian et al., 2019; Bachman et al., 2019). Indeed, Ma and Collins (2018) show that INCE is consistent for any number of negative samples (under technical conditions), and Poole et al. (2019) show that the signal-to-noise ratio increases with the number of negative samples. On the other hand, (Arora et al., 2019) have demonstrated, both theoretically and empirically, that increasing the number of negative samples does not necessarily help, and can even deteriorate the performance. The intricacies of negative sampling hence remain a key research challenge." }, { "heading": "5 CONCLUSION", "text": "Is MI maximization a good objective for learning good representations in an unsupervised fashion? Possibly, but it is clearly not sufficient. In this work we have demonstrated that, under the common linear evaluation protocol, maximizing lower bounds on MI as done in modern incarnations of the InfoMax principle can result in bad representations. We have revealed that the commonly used estimators have strong inductive biases and—perhaps surprisingly—looser bounds can lead to better representations. Furthermore, we have demonstrated that the connection of recent approaches to MI maximization might vanish if negative samples are not drawn independently (as done by some approaches in the literature). As a result, it is unclear whether the connection to MI is a sufficient (or necessary) component for designing powerful unsupervised representation learning algorithms. We propose that the success of these recent methods could be explained through the view of triplet-based metric learning and that leveraging advances in that domain might lead to further improvements. We have several suggestions for future work, which we summarize in the following.\nAlternative measures of information We believe that the question of developing new notions of information suitable for representation learning should receive more attention. While MI has appealing theoretical properties, it is clearly not sufficient for this task—it is hard to estimate, invariant to bijections and can result in suboptimal representations which do not correlate with downstream performance. Therefore, a new notion of information should account for both the amount of information stored in a representation and the geometry of the induced space necessary for good performance on downstream tasks. One possible avenue is to consider extensions to MI which explicitly account for the modeling power and computational constraints of the observer, such as the recently introduced F-information Xu et al. (2020). Alternatively, one can investigate other statistical divergences to measure the discrepancy between p(x, y) and p(x)p(y). For example, using the Wasserstein distance leads to promising results in representation learning as it naturally enforces smoothness in the encoders (Ozair et al., 2019).\nA holistic view We believe that any theory on measuring information for representation learning built on critics should explicitly take into account the function families one uses (e.g. that of the critic and estimator). 
Most importantly, we would expect some natural trade-offs between the amount of information that can be stored against how hard it is to extract it in the downstream tasks as a function of the architectural choices. While the distribution of downstream tasks is typically assumed unknown in representation learning, it might be possible to rely on weaker assumptions such as a family of invariances relevant for the downstream tasks. Moreover, it seems that in the literature (i) the critics that are used to measure the information, (ii) the encoders, and (iii) the downstream models/evaluation protocol are all mostly chosen independently of each other. Our empirical results show that the downstream performance depends on the intricate balance between these choices and we believe that one should co-design them. This holistic view is currently under-explored and due to the lack of any theory or extensive studies to guide the practitioners.\nGoing beyond the widely used linear evaluation protocol While it was shown that learning good representations under the linear evaluation protocol can lead to reduced sample complexity for downstream tasks (Arora et al., 2019), some recent works (Bachman et al., 2019; Tian et al., 2019) report marginal improvements in terms of the downstream performance under a non-linear regime. Related to the previous point, it would hence be interesting to further explore the implications of the evaluation protocol, in particular its importance in the context of other design choices. We stress that a highly-nonlinear evaluation framework may result in better downstream performance, but it defeats the purpose of learning efficiently transferable data representations.\nSystematic investigations into design decisions that matter On the practical side, we believe that the link to metric learning could lead to new methods, that break away from the goal of estimating MI and place more weight on the aspects that have a stronger effect on the performance such as the negative sampling strategy. An example where the metric learning perspective led to similar methods as the MI view is presented by Sermanet et al. (2018): They developed a multi-view representation learning approach for video data similar to CMC, but without drawing negative samples independently and seemingly without relying on the MI mental model to motivate their design choices." }, { "heading": "ACKNOWLEDGMENTS", "text": "We would like to thank Alex Alemi, Ben Poole, Olivier Bachem, and Alexey Dosovitskiy for inspiring discussions and comments on the manuscript. We are grateful for the general support and discussions from other members of Google Brain team in Zurich." }, { "heading": "A RELATION BETWEEN (2) AND THE INFOMAX OBJECTIVE", "text": "Proposition 1. Let X be a random variable and define X1 = g1(X) and X2 = g2(X) be arbitrary functions of X . Then I(X1;X2) ≤ I (X; (X1, X2)).\nProof. Follows by two applications of the data processing inequality, which states that for random variables X , Y and Z satisfying the Markov relation X → Y → Z, the inequality I(X;Z) ≤ I(X;Y ) holds.\nThe first step is to observe that X , X1 and X2 satisfy the relation X1 ← X → X2, which is Markov equivalent to X1 → X → X2 (in particular, X1 and X2 are conditionally independent given X). It therefore follows that I(X1;X2) ≤ I(X;X1). The second step is to observe that X → (X1, X2)→ X1 and therefore I(X;X1) ≤ I(X; (X1, X2)). 
Combining the two inequalities yields I(X_1; X_2) ≤ I(X; (X_1, X_2)), as required.

B EXPERIMENT DETAILS: ADVERSARIALLY TRAINED ENCODER (SECTION 3.1)

In the following, we present the details for training the invertible model from Section 3.1 adversarially. We model g_1 with the same RealNVP architecture as in the first experiment, and do not model g_2. On top of g_1(X^{(1)}) we add a linear layer mapping to 10 outputs (i.e. logits). The parameters of the linear layer are trained by minimizing the cross-entropy loss with respect to the true label of X from which X^{(1)} is derived. Conversely, the parameters of the encoder g_1 are trained to minimize the cross-entropy loss with respect to a uniform probability vector over all 10 classes. We use the Adam optimizer with a learning rate of 10^{-4} for the parameters of the classifier and 10^{-6} for the parameters of the encoder, and perform 10 classifier optimization steps per encoder step. Furthermore, in a warm-up phase we train the classifier for 1k iterations before alternating between classifier and encoder steps." }, { "heading": "C CONNECTION BETWEEN METRIC LEARNING AND INFONCE", "text": "I_NCE can be rewritten as follows:

I_{NCE} = \mathbb{E}\left[\frac{1}{K}\sum_{i=1}^{K} \log \frac{e^{f(x_i,y_i)}}{\frac{1}{K}\sum_{j=1}^{K} e^{f(x_i,y_j)}}\right]
= \mathbb{E}\left[\frac{1}{K}\sum_{i=1}^{K} \log \frac{1}{\frac{1}{K}\sum_{j=1}^{K} e^{f(x_i,y_j)-f(x_i,y_i)}}\right]
= \mathbb{E}\left[-\frac{1}{K}\sum_{i=1}^{K} \log \frac{1}{K}\sum_{j=1}^{K} e^{f(x_i,y_j)-f(x_i,y_i)}\right]
= \log K - \mathbb{E}\left[\frac{1}{K}\sum_{i=1}^{K} \log\left(1 + \sum_{j\neq i} e^{f(x_i,y_j)-f(x_i,y_i)}\right)\right].

D INFONCE UNDER NON-I.I.D. SAMPLING

The proof that InfoNCE is a lower bound on MI presented in (Poole et al., 2019) makes crucial use of the assumption that the negative samples are drawn from the true marginal distribution. We briefly review this proof to highlight the importance of the negative sampling distribution. Their proof starts from the NWJ lower bound on the KL divergence, namely that for any function \tilde{f} the following lower bound holds (Nguyen et al., 2010; Nowozin et al., 2016):

I(X;Y) = D_{KL}(p(x,y) \,\|\, p(x)p(y)) \geq \mathbb{E}_{p(x,y)}[\tilde{f}(x,y)] - e^{-1}\,\mathbb{E}_{p(x)p(y)}[e^{\tilde{f}(x,y)}]. \quad (6)

Suppose that (X_i, Y_i)_{i=1}^{K} are i.i.d. draws from p(x, y) and write X_{1:K} = (X_1, X_2, \ldots, X_K). Then, for any i we have that I(X_{1:K}; Y_i) = I(X_i; Y_i) = I(X;Y). We thus have

I(X;Y) = I(X_{1:K}; Y_i) \geq \mathbb{E}_{p(x_i,y_i)\prod_{k\neq i} p(x_k)}[\tilde{f}(x_{1:K}, y_i)] - e^{-1}\,\mathbb{E}_{p(y_i)\prod_{k} p(x_k)}[e^{\tilde{f}(x_{1:K}, y_i)}],

where the equality follows from the assumption that the (X_i, Y_i)_{i=1}^{K} are i.i.d. and the inequality is (6) applied to I(X_{1:K}; Y_i). In particular, taking \tilde{f}(x_{1:K}, y_i) = 1 + \log \frac{e^{f(x_i,y_i)}}{\frac{1}{K}\sum_{j=1}^{K} e^{f(x_j,y_i)}} yields

I(X;Y) \geq 1 + \mathbb{E}_{p(x_i,y_i)\prod_{k\neq i} p(x_k)}\left[\log \frac{e^{f(x_i,y_i)}}{\frac{1}{K}\sum_{j=1}^{K} e^{f(x_j,y_i)}}\right] - \mathbb{E}_{p(y_i)\prod_{k} p(x_k)}\left[\frac{e^{f(x_i,y_i)}}{\frac{1}{K}\sum_{j=1}^{K} e^{f(x_j,y_i)}}\right]. \quad (7)

This is then averaged over the K samples Y_i, in which case the third term above cancels with the constant 1 (all occurrences of y_i in the last term of (7) can be replaced with y_1 thanks to (X_i, Y_i) being identically distributed), yielding the familiar I_NCE lower bound:

I(X;Y) \geq \mathbb{E}_{\prod_k p(x_k,y_k)}\left[\frac{1}{K}\sum_{i=1}^{K} \log \frac{e^{f(x_i,y_i)}}{\frac{1}{K}\sum_{j=1}^{K} e^{f(x_j,y_i)}}\right] = I_{NCE}. \quad (8)

The point in this proof that makes use of the i.i.d. assumption on the negative samples is the equality I(X_i; Y_i) = I(X_{1:K}; Y_i), which allowed us to leverage multiple samples when estimating the MI between two variables. If instead the negative samples are drawn in a dependent fashion (corresponding to the (X_i, Y_i) being drawn identically but not independently), we have I(X_i; Y_i) \leq I(X_{1:K}; Y_i), though the remainder of the proof still holds, resulting in

I(X;Y) \leq \frac{1}{K}\sum_{i=1}^{K} I(X_{1:K}; Y_i) \geq \mathbb{E}_{p(x_{1:K}, y_{1:K})}\left[\frac{1}{K}\sum_{i=1}^{K} \log \frac{e^{f(x_i,y_i)}}{\frac{1}{K}\sum_{j=1}^{K} e^{f(x_j,y_i)}}\right].

Therefore the resulting I_NCE estimator is neither a lower nor an upper bound on the true MI I(X;Y)." }, { "heading": "E EXPERIMENT DETAILS: NON-I.I.D. SAMPLING (SECTION 4)", "text": "Recall that (X, Y) = Z + ε. We use Z ∼ N(0, Σ_Z) and ε ∼ N(0, Σ_ε), where

\Sigma_Z = \begin{pmatrix} 1 & -0.5 \\ -0.5 & 1 \end{pmatrix} \quad\text{and}\quad \Sigma_\varepsilon = \begin{pmatrix} 1 & 0.9 \\ 0.9 & 1 \end{pmatrix}.

Batches of data are obtained as (X_i, Y_i) = Z + ε_i, where each ε_i is sampled independently for each element of the batch, but Z is sampled only once per batch. The true MI I(X;Y) can be calculated analytically since (X, Y) is jointly Gaussian with known covariance matrix Σ_Z + Σ_ε: for two univariate random variables (X, Y) that are jointly Gaussian with covariance Σ, the MI can be written as

I(X;Y) = -\frac{1}{2}\log\left(1 - \frac{\Sigma_{12}\Sigma_{21}}{\Sigma_{11}\Sigma_{22}}\right).

This can be derived using the decomposition I(X;Y) = H(X) + H(Y) − H(X, Y) and the analytic expression for the entropy H of a Gaussian.

As a baseline, we compare with the same setting trained using i.i.d. sampled pairs (X_i, Y_i). We parametrize the critic as an MLP with 5 hidden layers, each with 10 units and ReLU activations, followed by a linear layer, and maximize I_NCE using these non-i.i.d. samples with batch size 128. Note that if a batch size of K is used, the bound I_NCE ≤ log K always holds. We used K sufficiently large so that I(X;Y) ≤ log K to avoid I_NCE trivially lower-bounding the true MI." }, { "heading": "F ADDITIONAL FIGURES", "text": "" }, { "heading": "G RESULTS FOR THE EXPERIMENTS FROM SEC. 3.2 AND 3.3 ON CIFAR10", "text": "We run the experiments from Sections 3.2 and 3.3 on CIFAR10 with minimal changes. Specifically, we use the same encoder and critic architectures, with the only difference that the input layers of the encoders are adapted to process the (flattened) 32 × 14 × 3 pixel image halves. Furthermore, we reduce the learning rate from 10^{-4} to 10^{-5} and triple the number of training iterations. Linear classification in pixel space from the upper image halves achieves a testing accuracy of about 24%.

The CIFAR10 results for the experiment investigating the critic architecture (Section 3.2) can be found in Figure 8, and the results for the experiments investigating the encoder architecture (Section 3.3) in Figure 9. The qualitative behavior of the different encoder and critic architectures in terms of downstream testing accuracy and testing I_EST is very similar to the one observed for MNIST. The conclusions made for MNIST hence carry over to CIFAR10." } ]
2020
null
SP:1bc27efda9dd80ce3cfabd6a1e16a904c85f37fc
[ "CCA is a generative model that learns a shared subspace based on two (or multi) views of the data. Being generative, it might not have strong discriminative power for some downstream classification tasks. Previous approaches to infuse discriminative power into the shared subspace estimated by CCA are linear. So, this paper proposes to learn 1) non-linear 2) discriminative subspaces for CCA. The paper accomplishes this by simply adding a task specific term to the optimization objective of DeepCCA (Andrew et. al. 2013), which involves just adding a task-specific MLP on top and minimizing the associated loss-function. ", "This paper addresses the problem of jointly performing CCA with task labeling. The problem is timely and important as it is challenging to perform CCA jointly with the task classification (see below) and hence previous work typically perform this in a pipeline - that is, first projecting the data using a pre-trained CCA and then training a task classifier using the projected representation. As the authors note, this may be problematic as CCA may delete important information that is relevant for the classification, if training is not done jointly. " ]
Multi-view learning seeks to form better models by making use of multiple feature sets representing the same samples. Exploiting feature correlations during training can enable better models. The traditional method for computing correlations between feature sets is Canonical Correlation Analysis (CCA), which finds linear projections that maximize correlation between feature vectors, essentially computing a shared embedding for two views of data. More recently, CCA has been used for multi-view discriminative tasks; however, CCA makes no use of class labels. Recent CCA methods have started to address this weakness but are limited in that they do not simultaneously optimize the CCA projection for discrimination and the CCA projection itself, or they are linear only. We address these deficiencies by simultaneously optimizing a CCA-based and a task objective in an end-to-end manner. Together, these two objectives learn a non-linear CCA projection to a shared latent space that is highly correlated and discriminative. Our method shows a significant improvement over previous state-of-the-art (including deep supervised approaches) for cross-view classification (8.5% increase), regularization with a second view during training when only one view is available at test time (2.2-3.2%), and semi-supervised learning (15%) on real data.
[]
[ { "authors": [ "Galen Andrew", "Raman Arora", "Jeff Bilmes", "Karen Livescu" ], "title": "Deep Canonical Correlation Analysis", "venue": "In Proc. ICML,", "year": 2013 }, { "authors": [ "Raman Arora", "Karen Livescu" ], "title": "Kernel cca for multi-view learning of acoustic features using articulatory measurements", "venue": "In Symposium on Machine Learning in Speech and Language Processing,", "year": 2012 }, { "authors": [ "Soheil Bahrampour", "Nasser M. Nasrabadi", "Asok Ray", "W. Kenneth Jenkins" ], "title": "Multimodal TaskDriven Dictionary Learning for Image Classification", "venue": "arXiv preprint:", "year": 2015 }, { "authors": [ "Gaurav Bhatt", "Piyush Jha", "Balasubramanian Raman" ], "title": "Common Representation Learning Using Step-based Correlation Multi-Modal CNN", "venue": "arXiv preprint:", "year": 2017 }, { "authors": [ "Tijl De Bie", "Nello Cristianini", "Roman Rosipal" ], "title": "Eigenproblems in pattern recognition", "venue": "In Handbook of Geometric Computing,", "year": 2005 }, { "authors": [ "Natalia Y. Bilenko", "Jack L. Gallant" ], "title": "Pyrcca: regularized kernel canonical correlation analysis in Python and its applications to neuroimaging", "venue": "Frontiers in Neuroinformatics,", "year": 2016 }, { "authors": [ "Mathilde Caron", "Piotr Bojanowski", "Armand Joulin", "Matthijs Douze" ], "title": "Deep clustering for unsupervised learning of visual features", "venue": "In Proc. ECCV,", "year": 2018 }, { "authors": [ "Miriam Cha", "Youngjune Gwon", "H.T. Kung" ], "title": "Multimodal sparse representation learning and applications", "venue": "arXiv preprint:", "year": 2015 }, { "authors": [ "Sarath Chandar", "Mitesh M. Khapra", "Hugo Larochelle", "Balaraman Ravindran" ], "title": "Correlational Neural Networks", "venue": "Neural Computation,", "year": 2016 }, { "authors": [ "Xiaobin Chang", "Tao Xiang", "Timothy M. Hospedales" ], "title": "Scalable and Effective Deep CCA via Soft Decorrelation", "venue": "In Proc. CVPR,", "year": 2018 }, { "authors": [ "Matthias Dorfer", "Rainer Kelz", "Gerhard Widmer" ], "title": "Deep linear discriminant analysis", "venue": "In Proc. ICLR,", "year": 2016 }, { "authors": [ "Matthias Dorfer", "Gerhard Widmer", "Gerhard Widmerajku At" ], "title": "Towards Deep and Discriminative Canonical Correlation Analysis", "venue": "In Proc. ICML Workshop on Multi-view Representaiton Learning,", "year": 2016 }, { "authors": [ "Matthias Dorfer", "Jan Schlüter", "Andreu Vall", "Filip Korzeniowski", "Gerhard Widmer" ], "title": "End-to-end cross-modality retrieval with CCA projections and pairwise ranking loss", "venue": "International Journal of Multimedia Information Retrieval,", "year": 2018 }, { "authors": [ "Kanghong Duan", "Hongxin Zhang", "Jim Jing Yan Wang" ], "title": "Joint learning of cross-modal classifier and factor analysis for multimedia data classification", "venue": "Neural Computing and Applications,", "year": 2016 }, { "authors": [ "Nour El Din Elmadany", "Yifeng He", "Ling Guan" ], "title": "Multiview learning via deep discriminative canonical correlation analysis", "venue": "In Proc. ICASSP,", "year": 2016 }, { "authors": [ "Qing Feng", "Meilei Jiang", "Jan Hannig", "JS Marron" ], "title": "Angle-based joint and individual variation explained", "venue": "Journal of Multivariate Analysis,", "year": 2018 }, { "authors": [ "Bernardo B. Gatto", "Eulanda M. Dos Santos" ], "title": "Discriminative canonical correlation analysis network for image classification", "venue": "In Proc. 
ICIP,", "year": 2017 }, { "authors": [ "Harold Hotelling" ], "title": "Relations between two sets of variates", "venue": null, "year": 1936 }, { "authors": [ "Lei Huang", "Dawei Yang", "Bo Lang", "Jia Deng" ], "title": "Decorrelated Batch Normalization", "venue": "In Proc. CVPR,", "year": 2018 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", "venue": "In Proc. ICML,", "year": 2015 }, { "authors": [ "Meina Kan", "Shiguang Shan", "Haihong Zhang", "Shihong Lao", "Xilin Chen" ], "title": "Multi-view Discriminant Analysis", "venue": "IEEE PAMI,", "year": 2015 }, { "authors": [ "Jared Katzman", "Uri Shaham", "Alexander Cloninger", "Jonathan Bates", "Tingting Jiang", "Yuval Kluger" ], "title": "Deep Survival: A Deep Cox Proportional Hazards Network", "venue": "arxiv preprint:", "year": 2016 }, { "authors": [ "Agnan Kessy", "Alex Lewin", "Korbinian Strimmer" ], "title": "Optimal whitening and decorrelation", "venue": "arXiv preprint:", "year": 2015 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoff Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In Advances in neural information processing systems,", "year": 2012 }, { "authors": [ "Yann LeCun" ], "title": "The mnist database of handwritten digits", "venue": "http://yann.lecun.com/exdb/mnist/,", "year": 1998 }, { "authors": [ "George Lee", "Asha Singanamalli", "Haibo Wang", "Michael D Feldman", "Stephen R Master", "Natalie N C Shih", "Elaine Spangler", "Timothy Rebbeck", "John E Tomaszewski", "Anant Madabhushi" ], "title": "Supervised multi-view canonical correlation analysis (sMVCCA): integrating histologic and proteomic features for predicting recurrent prostate cancer", "venue": "IEEE Transactions on Medical Imaging,", "year": 2015 }, { "authors": [ "Dongge Li", "Nevenka Dimitrova", "Mingkun Li", "Ishwar K. Sethi" ], "title": "Multimedia content processing through cross-modal association", "venue": "In Proc. ACM International Conference on Multimedia,", "year": 2003 }, { "authors": [ "Eric F Lock", "Katherine A Hoadley", "J S Marron", "Andrew B Nobel" ], "title": "Joint and Individual Variation Explained (JIVE) for Integrated Analysis of Multiple Data Types", "venue": "The Annals of Applied Statistics,", "year": 2013 }, { "authors": [ "Dominic Masters", "Carlo Luschi" ], "title": "Revisiting Small Batch Training for Deep", "venue": "Neural Networks. 
arxiv preprint:", "year": 2018 }, { "authors": [ "Joel S Parker", "Michael Mullins", "Maggie CU Cheang", "Samuel Leung", "David Voduc", "Tammi Vickery", "Sherri Davies", "Christiane Fauron", "Xiaping He" ], "title": "Supervised risk predictor of breast cancer based on intrinsic subtypes", "venue": "Journal of Clinical Oncology,", "year": 2009 }, { "authors": [ "Priyadip Ray", "Lingling Zheng", "Joseph Lucas", "Lawrence Carin" ], "title": "Bayesian joint analysis of heterogeneous genomics data", "venue": null, "year": 2014 }, { "authors": [ "Mehmet Emre Sargin", "Yücel Yemez", "Engin Erzin", "A Murat Tekalp" ], "title": "Audiovisual synchronization and fusion using canonical correlation analysis", "venue": "IEEE Transactions on Multimedia,", "year": 2007 }, { "authors": [ "Sumit Shekhar", "Vishal M Patel", "Nasser M Nasrabadi", "Rama Chellappa" ], "title": "Joint sparse representation for robust multimodal biometrics recognition", "venue": "IEEE PAMI,", "year": 2014 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very Deep Convolutional Networks for Large-Scale Image Recognition", "venue": "In Proc. ICLR,", "year": 2015 }, { "authors": [ "Asha Singanamalli", "Haibo Wang", "George Lee", "Natalie Shih", "Mark Rosen", "Stephen Master", "John Tomaszewski", "Michael Feldman", "Anant Madabhushi" ], "title": "Supervised multi-view canonical correlation analysis: fused multimodal prediction of disease diagnosis and prognosis", "venue": "In Proc. SPIE Medical Imaging,", "year": 2014 }, { "authors": [ "MA Troester", "Xuezheng Sun", "Emma H. Allott", "Joseph Geradts", "Stephanie M Cohen", "Chui Kit Tse", "Erin L. Kirk", "Leigh B Thorne", "Michelle Matthews", "Yan Li", "Zhiyuan Hu", "Whitney R. Robinson", "Katherine A. Hoadley", "Olufunmilayo I. Olopade", "Katherine E. Reeder-Hayes", "H. Shelton Earp", "Andrew F. Olshan", "LA Carey", "Charles M. Perou" ], "title": "Racial differences in PAM50 subtypes in the Carolina Breast Cancer Study", "venue": "Journal of the National Cancer Institute,", "year": 2018 }, { "authors": [ "L Van Der Maaten", "G Hinton" ], "title": "Visualizing high-dimensional data using t-sne. journal of machine learning research", "venue": "Journal of Machine Learning Research,", "year": 2008 }, { "authors": [ "Weiran Wang", "Raman Arora", "Karen Livescu", "Jeff Bilmes" ], "title": "On deep multi-view representation learning", "venue": "In Proc. ICML,", "year": 2015 }, { "authors": [ "Weiran Wang", "Raman Arora", "Karen Livescu", "Jeff A. Bilmes" ], "title": "Unsupervised learning of acoustic features via deep canonical correlation analysis", "venue": "In Proc. ICASSP,", "year": 2015 }, { "authors": [ "Weiran Wang", "Raman Arora", "Karen Livescu", "Nathan Srebro" ], "title": "Stochastic optimization for deep CCA via nonlinear orthogonal iterations", "venue": "In Proc. Allerton Conference on Communication,", "year": 2016 }, { "authors": [ "Xing Xu", "Atsushi Shimada", "Rin-ichiro Taniguchi", "Li He" ], "title": "Coupled dictionary learning and feature mapping for cross-modal retrieval", "venue": "In Proc. International Conference on Multimedia and Expo,", "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Parallel modalities of data are increasingly common in a variety of applications, including images and text, audio and video, parallel texts of different languages, and a variety of medical imaging and omics modalities for each patient. Each view provides essential information for classification and, when used together, can form a more accurate model. This is especially important for difficult discriminative tasks such as those with a small training set size. Canonical Correlation Analysis (CCA) is the most common method for computing a shared representation from two views of data by computing a space in which they are maximally correlated (Hotelling, 1936; Bie et al., 2005). In this paper we will demonstrate that, through optimizing for both discriminative features and correlation between views, we can improve classification accuracy for three real world scenarios.\nCCA is an unsupervised method but has been applied to many discriminative tasks (Kan et al., 2015; Sargin et al., 2007; Arora & Livescu, 2012). While some of the correlated CCA features are useful for discriminative tasks, many represent properties that are of no use for classification and obscure correlated information that is beneficial. This problem is magnified with recent non-linear extensions of CCA that use deep learning to make significant strides in improving correlation (Andrew et al., 2013; Wang et al., 2015a; 2016; Chang et al., 2018) but often at the expense of discriminative capability (cf. §5.1). Therefore, we present Task-Optimal CCA (TOCCA), a new deep learning technique to project the data from two views to a shared space that is also discriminative (Fig. 1).\nImplementing a task-optimal variant of CCA required a fundamental change in formulation. We show that the CCA objective can equivalently be expressed as an `2 distance minimization in the shared space plus an orthogonality constraint. Orthogonality constraints help regularize neural networks (NNs) (Huang et al., 2018); we present three techniques to accomplish this. While our method is derived from CCA, by manipulating the orthogonality constraints, we obtain deep CCA approaches that compute a shared latent space that is also discriminative.\nOur family of solutions for supervised CCA required a crucial and non-trivial change in formulation. We demonstrate the effectiveness and versatility of our model for three different tasks: 1) crossview classification on a variation of MNIST (LeCun, 1998), 2) regularization when two views are\navailable for training but only one at test time on a cancer imaging and genomic data set with only 1,000 samples, and 3) semi-supervised representation learning to improve speech recognition. All experiments showed a significant improvement in accuracy over previous state-of-the-art. In addition, our approach is more robust in the small sample size regime than alternative methods. Overall, our experiments on real data show the effectiveness of our method in learning a shared space that is more discriminative than previous methods for a variety of practical problems." }, { "heading": "2 RELATED WORK", "text": "CCA was initially used for unsupervised data analysis to gain insights into components shared by two sources (Andrew et al., 2013; Wang et al., 2015a; 2016). 
CCA has also been used to compute a shared latent space for cross-view classification (Kan et al., 2015; Wang et al., 2015a; Chandar et al., 2016; Chang et al., 2018), for representation learning on multiple views that are then joined for prediction (Sargin et al., 2007; Dorfer et al., 2016b), and for classification from a single view when a second view is available during training (Arora & Livescu, 2012). Recent non-linear extensions of CCA implemented via NNs make significant improvements in correlation (Andrew et al., 2013; Wang et al., 2015a; 2016; Chang et al., 2018) but with little focus on discriminative capability.

Most prior work that boosts the discriminative capability of CCA is linear only (Lee et al., 2015; Singanamalli et al., 2014; Duan et al., 2016). More recent work using NNs still remains limited in that it optimizes discriminative capability for an intermediate representation rather than the final CCA projection (Dorfer et al., 2016b), or optimizes the CCA objective only during pre-training, not while training the task objective (Dorfer et al., 2018). We advocate to jointly optimize CCA and a discriminative objective by computing the CCA projection within a network layer while applying a task-driven operation such as classification. Experimental results show that our method significantly improves upon previous work (Dorfer et al., 2016b; 2018) due to its focus on both the shared latent space and a task-driven objective. The latter is particularly important on small training set sizes.

While alternative approaches to multi-view learning via CCA exist, they typically focus on a reconstruction objective. That is, they transform the input into a shared space such that the input could be reconstructed – either individually or reconstructing one view from the other. Variations of coupled dictionary learning (Shekhar et al., 2014; Xu et al., 2015; Cha et al., 2015; Bahrampour et al., 2015) and autoencoders (Wang et al., 2015a; Bhatt et al., 2017) have been used in this context. CCA-based objectives, such as the model used in this work, instead learn a transformation to a shared space without the need for reconstructing the input. This task may be easier and sufficient in producing a representation for multi-view classification (Wang et al., 2015a)." }, { "heading": "3 BACKGROUND", "text": "We first introduce CCA and present our task-driven approach in §4. Linear and non-linear CCA are unsupervised and find the shared signal between a pair of data sources, by maximizing the sum correlation between corresponding projections. Let X_1 ∈ R^{d_1×n} and X_2 ∈ R^{d_2×n} be mean-centered input data from two different views with n samples and d_1, d_2 features, respectively.

CCA. The objective is to maximize the correlation between a_1 = w_1^\top X_1 and a_2 = w_2^\top X_2, where w_1 and w_2 are projection vectors (Hotelling, 1936). The first canonical directions are found via

\arg\max_{w_1, w_2} \operatorname{corr}\left(w_1^\top X_1,\; w_2^\top X_2\right),

and subsequent projections are found by maximizing the same correlation but in orthogonal directions. Combining the projection vectors into matrices W_1 = [w_1^{(1)}, \ldots, w_1^{(k)}] and W_2 = [w_2^{(1)}, \ldots, w_2^{(k)}] (k ≤ min(d_1, d_2)), CCA can be reformulated as a trace maximization under orthonormality constraints on the projections, i.e.,

\arg\max_{W_1, W_2} \operatorname{tr}(W_1^\top \Sigma_{12} W_2) \quad \text{s.t.} \quad W_1^\top \Sigma_1 W_1 = W_2^\top \Sigma_2 W_2 = I \quad (1)

for covariance matrices \Sigma_1 = X_1 X_1^\top, \Sigma_2 = X_2 X_2^\top, and cross-covariance matrix \Sigma_{12} = X_1 X_2^\top. Let T = \Sigma_1^{-1/2} \Sigma_{12} \Sigma_2^{-1/2} and let its singular value decomposition (SVD) be T = U_1 \operatorname{diag}(\sigma) U_2^\top with singular values \sigma = [\sigma_1, \ldots, \sigma_{\min(d_1,d_2)}] in descending order. W_1 and W_2 are computed from the top k singular vectors of T as W_1 = \Sigma_1^{-1/2} U_1^{(1:k)} and W_2 = \Sigma_2^{-1/2} U_2^{(1:k)}, where U^{(1:k)} denotes the first k columns of matrix U. The sum correlation in the projection space is equivalent to

\sum_{i=1}^{k} \operatorname{corr}\left((w_1^{(i)})^\top X_1,\; (w_2^{(i)})^\top X_2\right) = \sum_{i=1}^{k} \sigma_i, \quad (2)

i.e., the sum of the top k singular values. A regularized variation of CCA (RCCA) ensures that the covariance matrices are positive definite by computing them as \hat\Sigma_1 = \frac{1}{n-1} X_1 X_1^\top + rI and \hat\Sigma_2 = \frac{1}{n-1} X_2 X_2^\top + rI, for regularization parameter r > 0 and identity matrix I (Bilenko & Gallant, 2016). (A numerical sketch of this closed-form solution is given at the end of this section.)

DCCA. Deep CCA adds non-linear projections to CCA by non-linearly mapping the input via a multilayer perceptron (MLP). In particular, inputs X_1 and X_2 are mapped via non-linear functions f_1 and f_2, parameterized by θ_1 and θ_2, resulting in activations A_1 = f_1(X_1; θ_1) and A_2 = f_2(X_2; θ_2) (assumed to be mean centered) (Andrew et al., 2013). When implemented by a NN, A_1 and A_2 are the output activations of the final layer with d_o features. Fig. 2(a) shows the network structure. DCCA optimizes the same objective as CCA (equation 1) but using activations A_1 and A_2. Regularized covariance matrices are computed accordingly and the solution for W_1 and W_2 can be computed using SVD just as with linear CCA. When k = d_o (i.e., the number of CCA components is equal to the number of features in A_1 and A_2), optimizing the sum correlation in the projection space (equation 2) is equivalent to optimizing the following matrix trace norm objective (TNO)

L_{TNO}(A_1, A_2) = \|T\|_{tr} = \operatorname{tr}\left(T^\top T\right)^{1/2},

where T = \Sigma_1^{-1/2} \Sigma_{12} \Sigma_2^{-1/2} as in CCA (Andrew et al., 2013). DCCA optimizes this objective directly, without a need to compute the CCA projection within the network. The TNO is optimized first, followed by a linear CCA operation before downstream tasks like classification are performed. This formulation does not allow for combining directly with a supervised term.

SoftCCA. While DCCA enforces orthogonality constraints on projections W_1^\top A_1 and W_2^\top A_2, SoftCCA relaxes them using regularization (Chang et al., 2018). Final projection matrices W_1 and W_2 are integrated into f_1 and f_2 as the top network layer. The trace objective for DCCA in equation 1 can be rewritten as minimizing the \ell_2 distance between the projections when each feature in A_1 and A_2 is normalized to unit variance (Li et al., 2003), leading to1 L_{\ell_2\,dist}(A_1, A_2) = \|A_1 − A_2\|_F^2. Regularization in SoftCCA penalizes the off-diagonal elements of the covariance matrix Σ, using a running average computed over batches as \hat\Sigma and a loss of

L_{Decorr}(A) = \sum_{i \neq j}^{d_o} |\hat\Sigma_{i,j}|.

Overall, the SoftCCA loss takes the form

L_{\ell_2\,dist}(A_1, A_2) + \lambda\left(L_{Decorr}(A_1) + L_{Decorr}(A_2)\right).

1 We use this \ell_2 distance objective in our formulation.

Supervised CCA methods. CCA, DCCA, and SoftCCA are all unsupervised methods to learn a projection to a shared space in which the data is maximally correlated. Although these methods have shown utility for discriminative tasks, a CCA decomposition may not be optimal for classification because features that are correlated may not be discriminative.
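As referenced above, a minimal NumPy sketch of the closed-form RCCA solution (function and variable names are ours; this illustrates the equations in this section and is not the authors' released code):

```python
import numpy as np

def rcca(X1, X2, k, r=1e-3):
    """Closed-form regularized CCA for mean-centered views X1 (d1 x n) and
    X2 (d2 x n). Returns projections W1, W2 and the top-k canonical
    correlations (singular values of T)."""
    n = X1.shape[1]
    S1 = X1 @ X1.T / (n - 1) + r * np.eye(X1.shape[0])
    S2 = X2 @ X2.T / (n - 1) + r * np.eye(X2.shape[0])
    S12 = X1 @ X2.T / (n - 1)

    def inv_sqrt(S):
        # Symmetric inverse square root via eigendecomposition.
        lam, V = np.linalg.eigh(S)
        return V @ np.diag(lam ** -0.5) @ V.T

    S1_is, S2_is = inv_sqrt(S1), inv_sqrt(S2)
    T = S1_is @ S12 @ S2_is                 # Sigma_1^{-1/2} Sigma_12 Sigma_2^{-1/2}
    U1, sigma, U2t = np.linalg.svd(T)
    W1 = S1_is @ U1[:, :k]                  # Sigma_1^{-1/2} U_1^{(1:k)}
    W2 = S2_is @ U2t.T[:, :k]               # Sigma_2^{-1/2} U_2^{(1:k)}
    return W1, W2, sigma[:k]
```

For DCCA, the same closed-form computation is applied to the final-layer activations A_1, A_2 in place of X_1, X_2 after the TNO has been optimized.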
Our experiments will show that maximizing the correlation objective too much can degrade performance on discriminative tasks.\nCCA has previously been extended to supervised settings in three ways: 1) with methods that are linear only (Singanamalli et al., 2014; Lee et al., 2015; Kan et al., 2015; Duan et al., 2016), 2) by maximizing the total correlation between each view and the training labels in addition to each pair of views (Lee et al., 2015; Singanamalli et al., 2014), and 3) with Linear Discriminant Analysis (LDA)-style approaches to encourage class separation (Kan et al., 2015; Dorfer et al., 2016b; Elmadany et al., 2016).2 LDA approaches to supervision are generative rather than discriminative. Importantly, we will show in §5.3 that encouraging class separation with an LDA-style objective performs significantly inferior to a softmax. Further, Dorfer et al. (2016b) did not apply LDA to the shared space itself but to the NN layer below it, and Elmadany et al. (2016) did not validate the shared space created, only its use in multi-view classification using both views for training and test.\nDorfer et. al’s CCA Layer (CCAL) is the closest to our method. It optimizes a task loss operating on a CCA projection; however, the CCA objective itself is only optimized during pre-training, not in an end-to-end manner (Dorfer et al., 2018). Further, their goal is retrieval with a pairwise rank loss, not classification. Instead of computing the CCA projection explicitly within the network, we optimize the non-linear mapping into the shared space together with the task objective, requiring a fundamental change in formulation. We optimize for the shared space with the `2 distance between activations (similar to SoftCCA) and propose three different ways to apply the orthogonality constraints of CCA." }, { "heading": "4 TASK-OPTIMAL CCA (TOCCA)", "text": "To compute a shared latent space that is also discriminative, we reformulate DCCA to add a taskdriven term to the optimization objective. The CCA component finds features that are correlated\n2Gatto & Dos Santos (2017) use a similar technique with LDA but apply it as a convolutional filter on a single view; it is not a multi-view method.\nbetween views, while the task component ensures that they are also discriminative. This model can be used for representation learning on multiple views before joining representations for prediction (Sargin et al., 2007; Dorfer et al., 2016b) and for classification when two views are available for training but only one at test time (Arora & Livescu, 2012). In §5, we demonstrate both use cases on real data. Our methods and related NN models from the literature are summarized in Tab. A2; Fig. 2 shows schematic diagrams.\nChallenges and solutions. While DCCA optimizes the sum correlation with an equivalent loss function (TNO), the CCA projection itself is computed only after optimization. Hence, the projections cannot be used to optimize another task simultaneously. The main challenge in developing a task-optimal form of deep CCA that discriminates based on the CCA projection is in computing this projection within the network – a necessary step to enable simultaneous training of both objectives. We tackle this by focusing on the two components of DCCA: maximizing the sum correlation between activations A1 and A2 and enforcing orthonormality constraints within A1 and A2. 
We achieve both by transforming the CCA objective, and we present three methods that progressively relax the orthogonality constraints.

We further improve upon DCCA by enabling mini-batch computations for improved flexibility and test performance. DCCA was developed for large batches because correlation is not separable across batches. While large batch implementations of stochastic gradient optimization can increase computational efficiency via parallelism, small batch training provides more up-to-date gradient calculations, allowing a wider range of learning rates and improving test accuracy (Masters & Luschi, 2018). We reformulate the correlation objective as the \ell_2 distance (following SoftCCA), enabling separability across batches. We ensure that each feature is normalized to unit variance via batch normalization without the scale and shift parameters (Ioffe & Szegedy, 2015). Wang et al. (2016) also developed a stochastic mini-batch solution to DCCA but handled the orthonormality constraints in a different way (discussed below).

Task-driven objective. First, we apply non-linear functions f_1 and f_2 with parameters θ_1 and θ_2 (via MLPs) to each view X_1 and X_2, i.e., A_1 = f_1(X_1; θ_1) and A_2 = f_2(X_2; θ_2). Second, a task-specific function f_task(A; θ_task) operates on the outputs A_1 and A_2. In particular, f_1 and f_2 are optimized so that the \ell_2 distance between A_1 and A_2 is minimized; therefore, f_task can be trained to operate on both inputs A_1 and A_2. We combine CCA and task-driven objectives as a weighted sum with a hyperparameter for tuning. This model is flexible, in that the task-driven goal can be used for classification (Krizhevsky et al., 2012; Dorfer et al., 2016a), regression (Katzman et al., 2016), clustering (Caron et al., 2018), or any other task. Other prior attempts to integrate a classifier into deep CCA only used LDA (Kan et al., 2015; Dorfer et al., 2016b; Elmadany et al., 2016). See Tab. A2 for an overview.

Orthogonality constraints. The remaining complications for mini-batch optimization are the orthogonality constraints, for which we propose three solutions, each handling the orthogonality constraints of CCA in a different way: whitening, soft decorrelation, and no decorrelation.

1) Whitening (TOCCA-W). CCA applies orthogonality constraints to A_1 and A_2. We accomplish this with a linear whitening transformation that transforms the activations such that their covariance becomes the identity matrix, i.e., features are uncorrelated. Decorrelated Batch Normalization (DBN) has previously been used to regularize deep models by decorrelating features (Huang et al., 2018) and inspired our solution. In particular, we apply a transformation B = UA to make B orthonormal, i.e., BB^\top = I.

We use a Zero-phase Component Analysis (ZCA) whitening transform composed of three steps: rotate the data to decorrelate it, rescale each axis, and rotate back to the original space. Each transformation is learned from the data. Any matrix U ∈ R^{d_o×d_o} satisfying U^\top U = \Sigma^{-1} whitens the data, where Σ denotes the covariance matrix of A. As U is only defined up to a rotation, it is not unique. PCA whitening follows the first two steps and uses the eigendecomposition of Σ: U_{PCA} = \Lambda^{-1/2} V^\top for \Lambda = \operatorname{diag}(\lambda_1, \ldots, \lambda_{d_o}) and V = [v_1, \ldots, v_{d_o}], where (\lambda_i, v_i) are the eigenvalue–eigenvector pairs of Σ. Because PCA whitening suffers from stochastic axis swapping, neurons are not stable between batches (Huang et al., 2018). ZCA whitening uses the transformation U_{ZCA} = V \Lambda^{-1/2} V^\top, in which PCA whitening is first applied, followed by a rotation back to the original space. Adding the rotation V brings the whitened data B as close as possible to the original data A (Kessy et al., 2015).

Computation of U_{ZCA} clearly depends on Σ. While Huang et al. (2018) used a running average of U_{ZCA} over batches, we apply this stochastic approximation to Σ for each view using the update \Sigma^{(k)} = \alpha \Sigma^{(k-1)} + (1-\alpha)\Sigma_b for batch k, where \Sigma_b is the covariance matrix for the current batch and α ∈ (0, 1) is the momentum. We then compute the ZCA transformation from \Sigma^{(k)} to do whitening as B = f_{ZCA}(A) = U_{ZCA}^{(k)} A. At test time, U^{(k)} from the last training batch is used. Algorithm A1 describes ZCA whitening in greater detail. In summary, TOCCA-W integrates both the correlation and task-driven objectives, with decorrelation performed by whitening, into

L_{task}(f_{task}(B_1), Y) + L_{task}(f_{task}(B_2), Y) + \lambda\, L_{\ell_2\,dist}(B_1, B_2),

where B_1 and B_2 are the whitened outputs of A_1 and A_2, respectively, and Y denotes the class labels. This is a novel approach to integrating the orthogonality constraints of CCA into a NN, as it is the first to use ZCA whitening in this manner. Wang et al. (2016)'s stochastic mini-batch solution to DCCA used nonlinear orthogonal iterations and does not state what type of whitening operation was used.

2) Soft decorrelation (TOCCA-SD). While fully independent components may be beneficial in regularizing NNs on some data sets, a softer decorrelation may be more suitable on others. In this second formulation we relax the orthogonality constraints using regularization, following the Decorr loss of SoftCCA (Chang et al., 2018). The loss function for this formulation is

L_{task}(f_{task}(A_1), Y) + L_{task}(f_{task}(A_2), Y) + \lambda_1 L_{\ell_2\,dist}(A_1, A_2) + \lambda_2\left(L_{Decorr}(A_1) + L_{Decorr}(A_2)\right).

While this solution is based on SoftCCA, our experiments (§5) will demonstrate that the task component is essential when using the model for classification.

3) No decorrelation (TOCCA-ND). When CCA is used in an unsupervised manner, some form of orthogonality constraint or decorrelation is necessary to ensure that f_1 and f_2 do not simply produce multiple copies of the same feature; producing copies could still maximize the sum correlation, but it would not capture useful projections. In the task-driven setting, the discriminative term ensures that the features in f_1 and f_2 are not replicates of the same information. TOCCA-ND therefore removes the decorrelation term entirely, forming the simpler objective

L_{task}(f_{task}(A_1), Y) + L_{task}(f_{task}(A_2), Y) + \lambda\, L_{\ell_2\,dist}(A_1, A_2).

These three models allow testing whether whitening or decorrelation benefits a task-driven model.

Computational complexity. Due to the eigendecomposition, TOCCA-W has a complexity of O(d_o^3) compared to O(d_o^2) for TOCCA-SD, with respect to output dimension d_o. However, d_o is typically small (≤ 100) and this extra computation is only performed once per batch. The difference in runtime is less than 6.5% for a batch size of 100 or 9.4% for a batch size of 30 (Tab. A4).

Summary. All three variants are motivated by adding a task-driven component to deep CCA. TOCCA-ND is the most relaxed and directly attempts to obtain identical latent representations. Experiments will show that whitening (TOCCA-W) and soft decorrelation (TOCCA-SD) provide a beneficial regularization. Further, since the \ell_2 distance that we optimize was shown to be equivalent to the sum correlation (cf.
the SoftCCA paragraph of §3), all three TOCCA models maintain the goals of CCA, just with different relaxations of the orthogonality constraints. Our method is the first to simultaneously optimize for CCA and a discriminative task with end-to-end training. See Tab. A2 for an overview." }, { "heading": "5 EXPERIMENTS", "text": "We validated our methods on three different data sets: MNIST handwritten digits, the Carolina Breast Cancer Study (CBCS) using imaging and genomic features, and speech data from the Wisconsin X-ray Microbeam Database (XRMB). Our experiments show the utility of our methods for 1) cross-view classification, 2) regularization with a second view during training when only one view is available at test time, and 3) representation learning on multiple views that are joined for prediction.\nImplementation. Each layer of our network consists of a fully connected layer, followed by a ReLU activation and batch normalization (Ioffe & Szegedy, 2015). Our implementations of DCCA, SoftCCA, and Joint DCCA/DeepLDA (Dorfer et al., 2016b) also use ReLU activation and batch normalization. We modified CCAL-$L_{\mathrm{rank}}$ (Dorfer et al., 2018) to use a softmax function and cross-entropy loss for classification, instead of a pairwise ranking loss for retrieval, referring to this modification as CCAL-$L_{\mathrm{ce}}$. We used the Nadam optimizer and tuned hyperparameters on a validation set via random search; settings and ranges are specified in Tab. A3. The same hyperparameter tuning procedure was used for our methods and those we compare with. We used Keras with the Theano backend and an Nvidia GeForce GTX 1080 Ti. (Code is submitted with this paper and will also be available publicly on GitHub after the review period.)\nThe following experiments compare our methods with two linear methods (CCA and RCCA), two unsupervised deep methods (DCCA and SoftCCA), and two supervised deep methods (Joint DCCA/DeepLDA and CCAL-$L_{\mathrm{ce}}$). Many other variants exist (§3), but the ones we selected are the current state-of-the-art in each of these classes. We did not run a direct comparison with Wang et al. (2015a) as Chang et al. (2018) already showed that SoftCCA is superior. We chose Joint DCCA/DeepLDA to represent supervised LDA-style CCA methods rather than comparing with all methods in this group (Kan et al., 2015; Elmadany et al., 2016). Note that while Elmadany et al. (2016) ran experiments on MNIST, they used the embeddings from both views for training and test, so their results are not directly comparable to our cross-view classification results (when we did test multi-view classification on MNIST, we achieved 98.5% vs. their reported 97.2%)." }, { "heading": "5.1 CROSS-VIEW CLASSIFICATION ON MNIST DIGITS", "text": "We formed a multi-view data set from the MNIST handwritten digit data set (LeCun, 1998). Following Andrew et al. (2013), we split each 28 × 28 image in half horizontally, creating left and right views that are each 14 × 28 pixels. All images were flattened into a vector with 392 features. The full data set consists of 60k training images and 10k test images. We used a random set of up to 50k for training and the remaining training images for validation. We used the full 10k image test set.\nIn order to validate both the discriminativeness of the embedding and the success in finding a shared space, we studied performance on cross-view classification. We evaluated cross-view classification accuracy by first computing the projection for each view, then training a linear SVM on one view's projection, and finally using the other view's projection at test time. While the task-driven methods presented in this work learn a classifier within the model, this test setup enables a fair comparison with the unsupervised CCA variants and validates the discriminativity of the features learned. It is also the standard method in the literature for testing CCA methods on classification. Notably, using the built-in softmax classifier (not shown) performed similarly to the SVM, as much of the power of our methods comes from the representation learning part. We do not compare with a simple supervised NN because this setup does not learn the shared space necessary for cross-view classification. We report results averaged over five randomly selected training/validation sets; the test set always remained the same.
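For concreteness, a minimal sketch of this cross-view evaluation protocol, run on synthetic stand-ins for the two views' learned projections (the shared latent codes, noise level, and split below are hypothetical, chosen only to make the example self-contained):

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
# Hypothetical stand-ins for the learned projections f1(X1) and f2(X2):
# two noisy observations of a shared per-class latent code.
y = rng.integers(0, 10, size=2000)
codes = rng.normal(size=(10, 10))                    # one 10-D code per class
view1 = codes[y] + 0.3 * rng.normal(size=(2000, 10))
view2 = codes[y] + 0.3 * rng.normal(size=(2000, 10))

clf = LinearSVC().fit(view1[:1500], y[:1500])        # train SVM on view-1 projections
acc = (clf.predict(view2[1500:]) == y[1500:]).mean() # evaluate on view-2 projections
print(f"cross-view accuracy: {acc:.3f}")
```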
Correlation vs. classification accuracy. We first demonstrate the importance of adding a task-driven component to DCCA by showing that maximizing the sum correlation between views is not sufficient. Fig. 3 (left) shows the sum correlation vs. cross-view classification accuracy across many different hyperparameter settings for DCCA (Andrew et al., 2013), SoftCCA (Chang et al., 2018), and TOCCA. We used 50 components for each; thus, the maximum sum correlation was 50. The sum correlation was measured after applying linear CCA to ensure that components were independent. With DCCA, a larger correlation tended to produce a larger classification accuracy, but there was still a large variance in classification accuracy amongst hyperparameter settings that produced a similar sum correlation. For example, the two farthest right points in the plot (colored red) differ in classification accuracy by 10%, and they are not even the points with the best classification accuracy (colored purple). The pattern is different for SoftCCA. There was an increase in classification accuracy as sum correlation increased, but only up to a point. For higher sum correlations, the classification accuracy varied even more, from 20% to 80%. Further experiments (not shown) have indicated that when the sole objective is correlation, some of the projection directions are simply not discriminative, particularly when there are a large number of classes. Hence, optimizing for sum correlation alone does not guarantee a discriminative model. TOCCA-W and TOCCA-SD show a much greater classification accuracy across a wide range of correlations and, overall, the best accuracy when correlation is greatest.\nEffect of batch size. Fig. 3 (right) plots the batch size vs. classification accuracy for a training set size of 10,000. We tested batch sizes from 10 to 10,000; a batch size of 10 or 30 was best for all three variations of TOCCA. This is in line with previous work that found the best performance with a batch size between 2 and 32 (Masters & Luschi, 2018). We used a batch size of 32 in the remaining experiments on MNIST.\nEffect of training set size. We manipulated the training set size in order to study the robustness of our methods. In particular, Fig. 3 (right) shows the cross-view classification accuracy for training set sizes from n = 300 to 50,000. While we expected that performance would decrease for smaller training set sizes, some methods were more susceptible to this degradation than others. The classification accuracy with CCA dropped significantly for n = 300 and 1,000, due to overfitting and instability issues related to the covariance and cross-covariance matrices.
SoftCCA shows similar behavior (prior work on this method (Chang et al., 2018) did not test such small training set sizes).\nAcross all training set sizes, our TOCCA variations consistently exhibited good performance, e.g., increasing classification accuracy from 78.3% to 86.7% for n = 1,000 and from 86.1% to 94.6% for n = 50,000 with TOCCA-SD. Increases in accuracy over TOCCA-ND were small, indicating that the different decorrelation schemes have only a small effect on this data set; the task-driven component is the main reason for the success of our method. In particular, the classification accuracy with n = 1,000 was better than that of the unsupervised DCCA method with n = 10,000. Further, TOCCA with n = 300 did better than the linear methods with n = 50,000, clearly showing the benefits of the proposed formulation. We also examined the CCA projections qualitatively via a 2D t-SNE embedding (Van Der Maaten & Hinton, 2008). Fig. 4 shows the CCA projection of the left view for each method. As expected, the task-driven variant produced more clearly separated classes." }, { "heading": "5.2 REGULARIZATION FOR CANCER CLASSIFICATION", "text": "In this experiment, we address the following question: Given two views available for training but only one at test time, does the additional view help to regularize the model?\nWe study this question using 1,003 patient samples with image and genomic data from CBCS (Troester et al., 2018; http://cbcs.web.unc.edu/for-researchers/). Images consisted of four cores per patient from a tissue microarray that was stained with hematoxylin and eosin. Image features were extracted using a VGG16 backbone (Simonyan & Zisserman, 2015), pre-trained on ImageNet, by taking the mean of the 512D output of the fourth set of conv. layers across the tissue region and further averaging across all core images for the same patient. For gene expression (GE), we used the set of 50 genes in the PAM50 array (Parker et al., 2009). The data set was randomly split into half for training and one quarter each for validation and testing; we report the mean over eight cross-validation runs. Classification tasks included 1) predicting Basal vs. non-Basal genomic subtype from images, which is typically done from GE, and 2) predicting grade 1 vs. 3 from GE, which is typically done from images. This is not a multi-task classification setup; it is a means for one view to stabilize the representation of the other. The first task is also a valuable clinical use case. Genomic analysis is expensive and not routinely performed, while histologic imaging is standard practice by pathologists for detecting cancer and assessing its aggressiveness. In working with our clinical collaborators, our goal has been to predict tumor subtypes from images, something that is too complex for pathologists. We hope that this will one day make tumor subtypes accessible to more patients and improve treatment decisions. This experiment demonstrates that the second view of data can help regularize during training even if it is not available for test patients.\nWe tested different classifier training methods when only one view was available at test time: a) a linear SVM trained on one view, b) a deep NN trained on one view using the same architecture as the lower layers of TOCCA, c) CCAL-$L_{\mathrm{ce}}$ trained on both views, d) TOCCA trained on both views. Tab. 1 lists the classification accuracy for each method and task. When predicting genomic subtype Basal from images, all our methods showed an improvement in classification accuracy; the best result was with TOCCA-W, which produced a 2.2% improvement.
For predicting grade from GE, all our methods again improved the accuracy, by up to 3.2% with TOCCA-W. These results show that having additional information during training can boost performance at test time. Notably, this experiment used a static set of pre-trained VGG16 image features in order to assess the utility of the method. The network itself could be fine-tuned end-to-end with our TOCCA model, providing an easy opportunity for data augmentation and likely further improvements in classification accuracy." }, { "heading": "5.3 SEMI-SUPERVISED LEARNING FOR SPEECH RECOGNITION", "text": "Our final experiments use speech data from XRMB, consisting of simultaneously recorded acoustic and articulatory measurements. Prior work has shown that CCA-based algorithms can improve phonetic recognition (Wang et al., 2015b;a; 2016; Dorfer et al., 2016b). The 45 speakers were split into 35 for training, 2 for validation, and 8 for testing, for a total of 1,429,236 samples for training, 85,297 for validation, and 111,314 for testing (splits follow http://ttic.uchicago.edu/~klivescu/XRMB_data/full/README). The acoustic features are 112D and the articulatory ones are 273D. We removed the per-speaker mean and variance for both views. Samples are annotated with one of 38 phonetic labels.\nOur task on this data set was representation learning for multi-view prediction, that is, using both views of data to learn a shared discriminative representation. We trained each model using both views and their labels. To test each CCA model, we followed prior work and concatenated the original input features from both views with the projections from both views. Due to the large training set size, we used a Linear Discriminant Analysis (LDA) classifier for efficiency. The same construction was used at test time. This setup was used to assess whether a task-optimal DCCA model can improve discriminative power. We tested TOCCA with a task-driven loss of LDA (Dorfer et al., 2016a) or softmax to demonstrate the flexibility of our model.\nWe compared the discriminability of a variety of methods to learn a shared latent representation. Tab. 5 lists the classification results along with a baseline that used only the original input features for LDA. Although the deep methods, i.e., DCCA and SoftCCA, improved upon the linear methods, all TOCCA variations significantly outperformed previous state-of-the-art techniques. Using softmax consistently beat LDA by a large margin. TOCCA-SD and TOCCA-ND produced equivalent results because a weight of 0 on the decorrelation term performed best. However, TOCCA-W showed the best result, with an improvement of 15% over the best alternative method.\nTOCCA can also be used in a semi-supervised manner when labels are available for only some samples. Tab. 6 lists the results for TOCCA-W in this setting. With 0% labeled data, the result would be similar to DCCA. Notably, a large improvement over the unsupervised results in Tab. 5 is seen even with labels for only 10% of the training samples.
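One plausible reading of this semi-supervised variant is that the $\ell_2$ correlation term uses every sample while the task term is restricted to the labeled subset. The paper does not spell out the masking, so the sketch below is an assumption, and all helper names are ours:

```python
import numpy as np

def softmax_xent(logits, labels):
    """Per-sample cross-entropy for integer labels."""
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels]

def semi_supervised_tocca_loss(A1, A2, logits1, logits2, labels, labeled, lam=1.0):
    """L2-distance term uses every sample; the task term only the labeled ones."""
    l2 = ((A1 - A2) ** 2).sum(axis=1).mean()
    task = 0.0
    if labeled.any():
        task = (softmax_xent(logits1[labeled], labels[labeled]).mean()
                + softmax_xent(logits2[labeled], labels[labeled]).mean())
    return task + lam * l2

# Tiny demo with random activations; 2 of 6 samples labeled.
rng = np.random.default_rng(0)
A1, A2 = rng.normal(size=(6, 4)), rng.normal(size=(6, 4))
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(4, 3))
labels = np.array([0, 1, 2, 0, 1, 2])
labeled = np.array([True, True, False, False, False, False])
print(semi_supervised_tocca_loss(A1, A2, A1 @ W1, A2 @ W2, labels, labeled))
```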
" }, { "heading": "6 DISCUSSION", "text": "We proposed a method to find a shared latent space that is also discriminative by adding a task-driven component to deep CCA while enabling end-to-end training. This required a fundamental change in formulation: Deep CCA does not compute the embeddings directly, as it optimizes an equivalent objective, so we could not simply add an additional term. Instead, we found an alternative formulation by replacing the CCA projection with $\ell_2$ distance minimization and orthogonality constraints on the activations, and we implemented this in three different ways. TOCCA-W or TOCCA-SD performed best, depending on the data set; both include some means of decorrelation that provides a regularizing effect to the model, thereby outperforming TOCCA-ND.\nTOCCA showed large improvements over the state-of-the-art in cross-view classification accuracy on MNIST and significantly increased robustness to a small training set size. On CBCS, TOCCA provided a regularizing effect when both views were available for training but only one at test time. TOCCA also produced a large increase over the state-of-the-art for multi-view representation learning on a much larger data set, XRMB. On this data set we also demonstrated a semi-supervised approach that yields a large increase in classification accuracy with only a small proportion of the labels. Using a similar technique, our method could be applied when some samples are missing a second view.\nClassification tasks using a softmax operation or LDA were explored in this work; however, the formulation presented can also be used with other tasks such as regression or clustering. Another possible avenue for future work entails extracting components shared by both views as well as individual components. This approach has been developed for dictionary learning (Lock et al., 2013; Ray et al., 2014; Feng et al., 2018) but could be extended to deep CCA-based methods. Finally, we have yet to apply data augmentation to the proposed framework; this could provide a significant benefit for small training sets." }, { "heading": "A APPENDIX", "text": "This appendix includes additional details on our TOCCA algorithm and experiments, including 1) a comparison of our formulation with other related CCA approaches, 2) pseudocode for the ZCA whitening algorithm used by TOCCA-W, 3) details on hyperparameter selection, and 4) training runtime experiments.\nA.1 COMPARISON OF TOCCA WITH RELATED ALGORITHMS\nOur TOCCA methods find a shared latent space that is also discriminative by changing the CCA formulation in order to add a task-driven component. Tab. A2 compares our three TOCCA formulations with other related methods (discussed in §3). CCA is the baseline linear method, with the goal of maximizing the correlation between a set of orthogonal linear projections of two views of data. DCCA and SoftCCA are unsupervised deep methods. DCCA optimizes an objective equivalent to that of CCA but uses non-linear projections implemented with a NN; however, the projections are not computed in the network, only after optimization is complete. SoftCCA changes the correlation objective to, equivalently, minimize the $\ell_2$ distance between projections and relaxes the orthogonality constraints by using regularization. CCAL-$L_{\mathrm{rank}}$ does compute the CCA projections in the network but does not optimize the final NN for correlation; it instead focuses on a pairwise ranking loss for use in retrieval. Our family of TOCCA methods was detailed in §4. In this supervised formulation, we use the same $\ell_2$ distance as SoftCCA and simultaneously optimize a task-driven objective. We handle the orthogonality constraints in three different ways: with whitening (TOCCA-W), with regularization (TOCCA-SD) as was used in SoftCCA, and with no explicit decorrelation (TOCCA-ND); a sketch of the corresponding loss terms follows.
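To make the correspondence concrete, here is a minimal NumPy sketch of the loss terms from Tab. A2. Function names are ours, the scalar task losses are precomputed stand-ins, and $\hat\Sigma$ is assumed to be maintained externally as a running mean across batches:

```python
import numpy as np

def l2_dist_loss(A1, A2):
    """||A1 - A2||_F^2, the batch-separable surrogate for sum correlation."""
    return np.sum((A1 - A2) ** 2)

def decorr_loss(sigma_hat):
    """Sum of absolute off-diagonal entries of the (running) feature covariance."""
    off_diag = sigma_hat - np.diag(np.diag(sigma_hat))
    return np.abs(off_diag).sum()

def tocca_sd_objective(A1, A2, task1, task2, sig1, sig2, lam1, lam2):
    """Task(A1, A2, Y) + lam1 * L2dist + lam2 * (Decorr(A1) + Decorr(A2)).
    Setting lam2 = 0 recovers the TOCCA-ND objective."""
    return (task1 + task2 + lam1 * l2_dist_loss(A1, A2)
            + lam2 * (decorr_loss(sig1) + decorr_loss(sig2)))

# Demo with random activations; task1/task2 stand in for cross-entropy values.
A1, A2 = np.random.randn(32, 50), np.random.randn(32, 50)
sig = lambda A: A.T @ A / len(A)   # single-batch stand-in for the running mean
print(tocca_sd_objective(A1, A2, 0.7, 0.9, sig(A1), sig(A2), lam1=1e-2, lam2=1e-3))
```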
Table A2: A comparison of our proposed task-optimal deep CCA methods with other related ones from the literature: DCCA (Andrew et al., 2013), SoftCCA (Chang et al., 2018), CCAL-$L_{\mathrm{rank}}$ (Dorfer et al., 2018). CCAL-$L_{\mathrm{rank}}$ uses a pairwise ranking loss with cosine similarity to identify matching and non-matching samples for image retrieval, not classification. $A_1$ and $A_2$ are mean-centered outputs from two feed-forward networks. $\Sigma = A^\top A$ is computed from a single (large) batch (used in DCCA); $\hat\Sigma$ is computed as a running mean over batches (for all other methods). $f_{\mathrm{task}}(A; \theta_{\mathrm{task}})$ is a task-specific function with parameters $\theta_{\mathrm{task}}$, e.g., a softmax operation for classification.\nMethod | Objective\nCCA | $-\mathrm{tr}(W_1^\top \Sigma_{12} W_2)$ s.t. $W_1^\top \Sigma_1 W_1 = W_2^\top \Sigma_2 W_2 = I$\nDCCA | $-\|\Sigma_1^{-1/2} \Sigma_{12} \Sigma_2^{-1/2}\|_{\mathrm{tr}}$ where $\|T\|_{\mathrm{tr}} = \mathrm{tr}(T^\top T)^{1/2}$ (TNO, equivalent to the CCA objective); $\mathrm{CCA}(W_1^\top A_1, W_2^\top A_2)$ computed after optimization is complete\nSoftCCA | $L_{\ell_2 \mathrm{dist}}(A_1, A_2) + \lambda \big( L_{\mathrm{Decorr}}(A_1) + L_{\mathrm{Decorr}}(A_2) \big)$\nCCAL-$L_{\mathrm{rank}}$ | $L_{\mathrm{rank}}(B_1, B_2)$ where $B_1, B_2 = \mathrm{CCA}(A_1, A_2)$ and $L_{\mathrm{rank}}$ is a pairwise ranking loss\nTOCCA-W (whitening) | $\mathrm{Task}(B_1, B_2, Y) + \lambda \, L_{\ell_2 \mathrm{dist}}(B_1, B_2)$ where $B_1 = U_1 A_1$, $B_2 = U_2 A_2$ s.t. $B_1^\top B_1 = B_2^\top B_2 = I$\nTOCCA-SD | $\mathrm{Task}(A_1, A_2, Y) + \lambda_1 L_{\ell_2 \mathrm{dist}}(A_1, A_2) + \lambda_2 \big( L_{\mathrm{Decorr}}(A_1) + L_{\mathrm{Decorr}}(A_2) \big)$\nTOCCA-ND | $\mathrm{Task}(A_1, A_2, Y) + \lambda \, L_{\ell_2 \mathrm{dist}}(A_1, A_2)$\nLoss functions: $L_{\ell_2 \mathrm{dist}}(A_1, A_2) = \|A_1 - A_2\|_F^2$; $L_{\mathrm{Decorr}}(A) = \sum_{i \neq j} |\hat\Sigma_{i,j}|$ where $\hat\Sigma$ is a running mean across batches of $\Sigma = A^\top A$; $\mathrm{Task}(A_1, A_2, Y) = L_{\mathrm{task}}(f_{\mathrm{task}}(A_1; \theta_{\mathrm{task}}), Y) + L_{\mathrm{task}}(f_{\mathrm{task}}(A_2; \theta_{\mathrm{task}}), Y)$, where $L_{\mathrm{task}}$ can be cross-entropy or any other task-driven loss.\nA.2 ALGORITHM FOR WHITENING\nTOCCA-W uses whitening to achieve orthogonality (see §4 for details). The goal is to transform the activations such that their covariance becomes the identity matrix. We use ZCA whitening, which first applies PCA whitening to decorrelate the data and rescale each axis, followed by a rotation back to the original space. The final rotation reduces the stochastic axis swapping problems of PCA whitening (Huang et al., 2018). Pseudocode for ZCA whitening is shown in Algorithm A1.\nAlgorithm A1 Whitening layer for orthogonality.\nInput: activations $A \in \mathbb{R}^{d_o \times n}$\nHyperparameters: batch size $m$, momentum $\alpha$\nParameters of layer: mean $\mu$, covariance $\Sigma$\nif training then\n  $\mu \leftarrow \alpha \mu + (1-\alpha) \frac{1}{m} A \mathbf{1}_{n \times 1}$ {Update mean}\n  $\bar{A} = A - \mu$ {Mean-center data}\n  $\Sigma \leftarrow \alpha \Sigma + (1-\alpha) \frac{1}{m-1} \bar{A} \bar{A}^\top$ {Update covariance}\n  $\hat\Sigma \leftarrow \Sigma + \epsilon I$ {Add $\epsilon I$ for numerical stability}\n  $\Lambda, V \leftarrow \mathrm{eig}(\hat\Sigma)$ {Compute eigendecomposition}\n  $U \leftarrow V \Lambda^{-1/2} V^\top$ {Compute transformation matrix}\nelse\n  $\bar{A} \leftarrow A - \mu$ {Mean-center data}\nend if\n$B \leftarrow U \bar{A}$ {Apply ZCA whitening transform}\nreturn $B$\nA.3 IMPLEMENTATION DETAILS: HYPERPARAMETERS\nA random search over hyperparameters was used to train our methods and those that we compare with. The hyperparameter settings and ranges for each data set are provided in Tab. A3. Random search in these intervals was performed 100 times for MNIST and CBCS. Fewer tries were done for XRMB because of the much greater runtime on this large data set. A larger batch size was used for XRMB to improve runtime.
The hyperparameter ranges were initially set as an educated guess and, in some cases, were widened for a particular data set (for all methods) after observing results.\nTable A3: Hyperparameter settings and search ranges for the experiments on each data set.\nHyperparameter | MNIST | CBCS | XRMB\nHidden layers | 4 | [0, 4] | 4\nHidden layer size | 500 | 200 | 1,000\nOutput layer size | 50 | 50 | 112\nLoss function weight $\lambda$ | $[10^0, 10^{-4}]$ | $[10^1, 10^{-5}]$ | $[10^1, 10^{-5}]$\nMomentum $\alpha$ | 0.99 | 0.99 | 0.99\nWeight decay | $[10^{-3}, 10^{-6}]$, 0 | $[10^{-2}, 10^{-5}]$, 0 | $[10^{-3}, 10^{-7}]$, 0\nSoft decorrelation regularizer | $[10^0, 10^{-5}]$ | $[10^0, 10^{-5}]$ | $[10^0, 10^{-5}]$\nBatch size | 32 | 100 | 50,000\nLearning rate | $[10^{-2}, 10^{-4}]$ | $[10^{-1}, 10^{-3}]$ | $[10^0, 10^{-4}]$\nEpochs | 200 | 400 | 100\nA.4 RUNTIME EXPERIMENTS\nThe computational complexity of TOCCA-W is greater than that of TOCCA-SD due to the eigendecomposition operation (see the end of §4); however, this extra computation is only carried out once per batch. A runtime comparison of the two methods on all three data sets is provided in Tab. A4. The difference in runtime was less than 6.5% for a batch size of 100 or 9.4% for a batch size of 30.\nTable A4: Training runtime for each data set.\nData set | Batch size | Epochs | TOCCA-W | TOCCA-SD\nMNIST | 100 | 200 | 488 s | 418 s\nMNIST | 30 | 200 | 1071 s | 1036 s\nCBCS | 100 | 400 | 103 s | 104 s\nXRMB | 50,000 | 100 | 3056 s | 3446 s" } ]
2,019
null
SP:d905f831fd48700ebbaf8286fa71f77b45aa685f
[ "This paper posits that similar input examples will have similar gradients, leading to a gradient \"coherence\" phenomenon. A simple argument then suggests that the loss should decrease much more rapidly when gradients cohere than when they do not. This hypothesis and analysis is supported with clever experiments that confirm some of the predictions of this theory. Furthermore, since, as the authors emphasize, their hypothesis is prescriptive, they are able to suggest a novel regularization technique and show that it is effective in a simple setting.", "The surprising generalization properties of neural networks trained with stochastic gradient descent are still poorly understood. The present work suggests that they can be explained at least partly by the fact that patterns shared across many data points will lead to gradients pointing in similar directions, thus reinforcing each other. Artefacts specific to small numbers of data points however will not have this property and thus have a substantially smaller impact on the learning. Numerical experiments on MNIST with label-noise indeed show that even though the neural network is able to perfectly fit even the flipped labels, the \"pristine\" labels are fittet much earlier during training. The authors also experiment with explicitly clipping \"outlier gradients\" and show that the resulting algorithm drastically reduces overfitting, thus further supporting the coherent gradient hypothesis." ]
An open question in the Deep Learning community is why neural networks trained with Gradient Descent generalize well on real datasets even though they are capable of fitting random data. We propose an approach to answering this question based on a hypothesis about the dynamics of gradient descent that we call Coherent Gradients: Gradients from similar examples are similar and so the overall gradient is stronger in certain directions where these reinforce each other. Thus changes to the network parameters during training are biased towards those that (locally) simultaneously benefit many examples when such similarity exists. We support this hypothesis with heuristic arguments and perturbative experiments and outline how this can explain several common empirical observations about Deep Learning. Furthermore, our analysis is not just descriptive, but prescriptive. It suggests a natural modification to gradient descent that can greatly reduce overfitting.
[ { "affiliations": [], "name": "Satrajit Chatterjee" } ]
[ { "authors": [ "Martin Abadi", "Andy Chu", "Ian Goodfellow", "H. Brendan McMahan", "Ilya Mironov", "Kunal Talwar", "Li Zhang" ], "title": "Deep learning with differential privacy", "venue": "In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, CCS", "year": 2016 }, { "authors": [ "Sanjeev Arora", "Rong Ge", "Behnam Neyshabur", "Yi Zhang" ], "title": "Stronger generalization bounds for deep nets via a compression approach", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Devansh Arpit", "Stanislaw K. Jastrzebski", "Nicolas Ballas", "David Krueger", "Emmanuel Bengio", "Maxinder S. Kanwal", "Tegan Maharaj", "Asja Fischer", "Aaron C. Courville", "Yoshua Bengio", "Simon Lacoste-Julien" ], "title": "A closer look at memorization in deep networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Peter L Bartlett", "Dylan J Foster", "Matus J Telgarsky" ], "title": "Spectrally-normalized margin bounds for neural networks", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Mikhail Belkin", "Daniel Hsu", "Siyuan Ma", "Soumik Mandal" ], "title": "Reconciling modern machine-learning practice and the classical bias–variance trade-off", "venue": "Proceedings of the National Academy of Sciences,", "year": 2019 }, { "authors": [ "Olivier Bousquet", "André Elisseeff" ], "title": "Stability and generalization", "venue": "J. Mach. Learn. Res.,", "year": 2002 }, { "authors": [ "Rich Caruana", "Steve Lawrence", "Lee Giles" ], "title": "Overfitting in neural nets: Backpropagation, conjugate gradient, and early stopping", "venue": "In Proceedings of the 13th International Conference on Neural Information Processing Systems,", "year": 2000 }, { "authors": [ "Satrajit Chatterjee", "Alan Mishchenko" ], "title": "Circuit-based intrinsic methods to detect overfitting", "venue": "CoRR, abs/1907.01991,", "year": 2019 }, { "authors": [ "Anna Choromanska", "Mikael Henaff", "Michaël Mathieu", "Gérard Ben Arous", "Yann LeCun" ], "title": "The loss surfaces of multilayer networks", "venue": "Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics,", "year": 2015 }, { "authors": [ "Laurent Dinh", "Razvan Pascanu", "Samy Bengio", "Yoshua Bengio" ], "title": "Sharp minima can generalize for deep", "venue": "nets. CoRR,", "year": 2017 }, { "authors": [ "Stanislav Fort", "Pawel Krzysztof Nowak", "Srini Narayanan" ], "title": "Stiffness: A new perspective on generalization in neural networks", "venue": "CoRR, abs/1901.09491,", "year": 2019 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Training pruned neural networks", "venue": "CoRR, abs/1803.03635,", "year": 2018 }, { "authors": [ "Moritz Hardt", "Benjamin Recht", "Yoram Singer" ], "title": "Train faster, generalize better: Stability of stochastic gradient descent", "venue": "In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48,", "year": 2016 }, { "authors": [ "K. Kawaguchi", "L. Pack Kaelbling", "Y. 
Bengio" ], "title": "Generalization in Deep Learning", "venue": "ArXiv e-prints,", "year": 2017 }, { "authors": [ "Nitish Shirish Keskar", "Dheevatsa Mudigere", "Jorge Nocedal", "Mikhail Smelyanskiy", "Ping Tak Peter Tang" ], "title": "On large-batch training for deep learning: Generalization gap and sharp", "venue": "minima. CoRR,", "year": 2016 }, { "authors": [ "Steve Lawrence", "C. Lee Giles", "Ah Chung Tsoi" ], "title": "What size neural network gives optimal generalization? convergence properties of backpropagation", "venue": "Technical report,", "year": 1996 }, { "authors": [ "Shengchao Liu", "Dimitris S. Papailiopoulos", "Dimitris Achlioptas" ], "title": "Bad global minima exist and SGD can reach", "venue": "them. CoRR,", "year": 2019 }, { "authors": [ "Vaishnavh Nagarajan", "J. Zico Kolter" ], "title": "Uniform convergence may be unable to explain generalization in deep learning", "venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Preetum Nakkiran", "Gal Kaplun", "Dimitris Kalimeris", "Tristan Yang", "Benjamin L. Edelman", "Fred Zhang", "Boaz Barak" ], "title": "SGD on neural networks learns functions of increasing complexity", "venue": "URL http://arxiv.org/abs/1905.11604", "year": 1905 }, { "authors": [ "Behnam Neyshabur", "Zhiyuan Li", "Srinadh Bhojanapalli", "Yann LeCun", "Nathan Srebro" ], "title": "Towards understanding the role of over-parametrization in generalization of neural networks", "venue": "CoRR, abs/1805.12076,", "year": 2018 }, { "authors": [ "Nasim Rahaman", "Aristide Baratin", "Devansh Arpit", "Felix Draxler", "Min Lin", "Fred Hamprecht", "Yoshua Bengio", "Aaron Courville" ], "title": "On the spectral bias of neural networks", "venue": "Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "David Rolnick", "Andreas Veit", "Serge J. Belongie", "Nir Shavit" ], "title": "Deep learning is robust to massive label noise", "venue": "CoRR, abs/1705.10694,", "year": 2017 }, { "authors": [ "Karthik Abinav Sankararaman", "Soham De", "Zheng Xu", "W. Ronny Huang", "Tom Goldstein" ], "title": "The impact of neural network overparameterization on gradient confusion and stochastic gradient descent", "venue": "URL http://arxiv.org/abs/1904.06963", "year": 1904 }, { "authors": [ "Shai Shalev-Shwartz", "Ohad Shamir", "Nathan Srebro", "Karthik Sridharan" ], "title": "Learnability, stability and uniform convergence", "venue": "J. Mach. Learn. Res.,", "year": 2010 }, { "authors": [ "Umut Simsekli", "Levent Sagun", "Mert Gürbüzbalaban" ], "title": "A tail-index analysis of stochastic gradient noise in deep neural networks", "venue": "Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Guillermo Valle-Perez", "Chico Q. Camargo", "Ard A. 
Louis" ], "title": "Deep learning generalizes because the parameter-function map is biased towards simple functions", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Lei Wu", "Chao Ma", "Weinan E" ], "title": "How sgd selects the global minima in over-parameterized learning: A dynamical stability perspective", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "In Proceedings of the International Conference on Learning Representations ICLR,", "year": 2017 }, { "authors": [ "Zhihui Zhu", "Daniel Soudry", "Yonina C. Eldar", "Michael B. Wakin" ], "title": "The global optimization geometry of shallow linear neural networks", "venue": "CoRR, abs/1805.04938,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION AND OVERVIEW", "text": "Neural networks used in practice often have sufficient effective capacity to learn arbitrary maps from their inputs to their outputs. This is typically demonstrated by training a classification network that achieves good test accuracy on a real dataset S, on a modified version of S (call it S′) where the labels are randomized and observing that the training accuracy on S′ is very high, though, of course, the test accuracy is no better than chance (Zhang et al., 2017). This leads to an important open question in the Deep Learning community (Zhang et al. (2017); Arpit et al. (2017); Bartlett et al. (2017); Kawaguchi et al. (2017); Neyshabur et al. (2018); Arora et al. (2018); Belkin et al. (2019); Rahaman et al. (2019); Nagarajan & Kolter (2019), etc.): Among all maps that fit a real dataset, how does Gradient Descent (GD) find one that generalizes well? This is the question we address in this paper.\nWe start by observing that this phenomenon is not limited to neural networks trained with GD but also applies to Random Forests and Decision Trees. However, there is no mystery with trees: A typical tree construction algorithm splits the training set recursively into similar subsets based on input features. If no similarity is found, eventually, each example is put into its own leaf to achieve good training accuracy (but, of course, at the cost of poor generalization). Thus, trees that achieve good accuracy on a randomized dataset are much larger than those on a real dataset (e.g. Chatterjee & Mishchenko (2019, Expt. 5)).\nIs it possible that something similar happens with GD? We believe so. The type of randomized-label experiments described above show that if there are common patterns to be found, then GD finds them. If not, it fits each example on a case-by-case basis. The question then is, what is it about the dynamics of GD that makes it possible to extract common patterns from the data? And what does it mean for a pattern to be common?\nSince the only change to the network parameters in GD comes from the gradients, the mechanism to detect commonality amongst examples must be through the gradients. We propose that this commonality detection can be explained as follows:\n1. Gradients are coherent, i.e, similar examples (or parts of examples) have similar gradients (or similar components of gradients) and dissimilar examples have dissimilar gradients.\n2. Since the overall gradient is the sum of the per-example gradients, it is stronger in directions where the per-example gradients are similar and reinforce each other and weaker in other directions where they are different and do not add up.\n3. Since network parameters are updated proportionally to gradients, they change faster in the direction of stronger gradients.\n4. Thus the changes to the network during training are biased towards those that simultaneously benefit many examples instead of a few (or one example).\nFor convenience, we refer to this as the Coherent Gradients hypothesis.\nIt is instructive to work through the proposed mechanism in the context of a simple thought experiment. Consider a training set with two examples a and b. At some point in training, suppose the gradient of a, ga, can be decomposed into two orthogonal components ga1 and ga2 of roughly equal magnitude, i.e., there are two, equally good, independent ways in which the network can better fit a (by using say two disjoint parts of the network). Likewise, for b. 
Now, further suppose that one of the two ways is common to both $a$ and $b$, i.e., say $g_{a2} = g_{b2} = g_{ab}$, whereas the other two are example specific, i.e., $\langle g_{a1}, g_{b1} \rangle = 0$. Now, the overall gradient is\n$g = g_a + g_b = g_{a1} + 2\, g_{ab} + g_{b1}$.\nObserve that the gradient is stronger in the direction that simultaneously helps both examples, and thus the corresponding parameter changes are bigger than those that benefit only one example. (While the mechanism is easiest to see with full or large minibatches, we believe it holds even for small minibatches, though there one has to consider the bias in updates over time.)\nIt is important to emphasize that the notion of similarity used above (i.e., which examples are considered similar) is not a constant but changes in the course of training as network parameters change. It starts from a mostly task independent notion due to random initialization and is bootstrapped in the course of training to be task dependent. We say "mostly" because even with random initialization, examples that are syntactically close are treated similarly (e.g., two images differing in the intensities of some pixels as opposed to two images where one is a translated version of the other).\nThe relationship between strong gradients and generalization can also be understood through the lens of algorithmic stability (Bousquet & Elisseeff, 2002): strong gradient directions are more stable since the presence or absence of a single example does not impact them as much, as opposed to weak gradient directions which may altogether disappear if a specific example is missing from the training set. With this observation, we can reason inductively about the stability of GD: since the initial values of the parameters do not depend on the training data, the initial function mapping examples to their gradients is stable. Now, if all parameter updates are due to strong gradient directions, then stability is preserved. However, if some parameter updates are due to weak gradient directions, then stability is diminished. Since stability (suitably formalized) is equivalent to generalization (Shalev-Shwartz et al., 2010), this allows us to see how generalization may degrade as training progresses. Based on this insight, we shall see later how a simple modification to GD to suppress the weak gradient directions can dramatically reduce overfitting.\nIn addition to providing insight into why GD generalizes in practice, we believe that the Coherent Gradients hypothesis can help explain several other empirical observations about deep learning in the literature:\n(a) Learning is slower with random labels than with real labels (Zhang et al., 2017; Arpit et al., 2017)\n(b) Robustness to large amounts of label noise (Rolnick et al., 2017)\n(c) Early stopping leads to better generalization (Caruana et al., 2000)\n(d) Increasing capacity improves generalization (Caruana et al., 2000; Neyshabur et al., 2018)\n(e) The existence of adversarial initialization schemes (Liu et al., 2019)\n(f) GD detects common patterns even when trained with random labels (Chatterjee & Mishchenko, 2019)
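Returning to the two-example thought experiment above, here is a minimal numeric check of the claimed reinforcement effect (the three orthogonal unit directions are of course hypothetical):

```python
import numpy as np

# Three orthogonal unit directions: two example-specific, one shared.
g_a1, g_ab, g_b1 = np.eye(3)   # <g_a1, g_b1> = 0, etc.
g_a = g_a1 + g_ab              # example a's gradient
g_b = g_b1 + g_ab              # example b's gradient
g = g_a + g_b                  # overall gradient = g_a1 + 2*g_ab + g_b1
print(g)                       # [1. 2. 1.]: the shared direction is twice as strong
```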
A direct experimental verification of the Coherent Gradients hypothesis is challenging since the notion of similarity between examples depends on the parameters of the network and thus changes during training. Our approach, therefore, is to design intervention experiments where we establish a baseline and compare it against variants designed to test some aspect or prediction of the theory. As part of these experiments, we replicate the observations (a)–(c) in the literature noted above and analyze the corresponding explanations provided by Coherent Gradients (§2), and outline for future work how (d)–(f) may be accounted for (§5).\nIn this paper, we limit our study to simple baselines: vanilla Stochastic Gradient Descent (SGD) on MNIST using fully connected networks. We believe that this is a good starting point, since even in this simple setting, with all frills eliminated (e.g., inductive bias from architecture or explicit regularization, or a more sophisticated optimization procedure), we are challenged to find a satisfactory explanation of why SGD generalizes well. Furthermore, our prior is that the difference between weak and strong directions is small at any one step of training, and therefore having a strong learning signal as in the case of MNIST makes a direct analysis of gradients easier. It also has the benefit of having a smaller carbon footprint and being easier to reproduce. Finally, based on preliminary experiments on other architectures and datasets we are optimistic that the insights we get from studying this simple setup apply more broadly." }, { "heading": "2 EFFECT OF REDUCING SIMILARITY BETWEEN EXAMPLES", "text": "Our first test of the Coherent Gradients hypothesis is to see what happens when we reduce similarity between examples. Although, at any point during training, we do not know which examples are similar and which are different, we can (with high probability) reduce the similarity among training examples simply by injecting label noise. In other words, under any notion of similarity, adding label noise to a dataset that has clean labels is likely to make similar examples less similar. Note that this perturbation does not reduce coherence since gradients still depend on the examples. (To break coherence, we would have to make the gradients independent of the training examples, which would require perturbing SGD itself and not just the dataset.)" }, { "heading": "2.1 SETUP", "text": "For our baseline, we use the standard MNIST dataset of 60,000 training examples and 10,000 test examples. Each example is a 28x28 pixel grayscale handwritten digit along with a label ('0'–'9'). We train a fully connected network on this dataset. The network has one hidden layer with 2048 ReLUs and an output layer with a 10-way softmax. We initialize it with Xavier and train using vanilla SGD (i.e., no momentum) using cross entropy loss with a constant learning rate of 0.1 and a minibatch size of 100 for $10^5$ steps (i.e., about 170 epochs). We do not use any explicit regularizers.\nWe perturb the baseline by modifying only the dataset and keeping all other aspects of the architecture and learning algorithm fixed. The dataset is modified by adding various amounts of noise (25%, 50%, 75%, and 100%) to the labels of the training set (but not the test set). This noise is added by taking, say in the case of 25% label noise, 25% of the examples at random and randomly permuting their labels. Thus, when we add 25% label noise, we still expect about 75% + 0.1 * 25%, i.e., 77.5% of the examples to have unchanged (i.e., "correct") labels, which we call the proper accuracy of the modified dataset. In what follows, we call examples with unchanged labels pristine, and the remaining, corrupt. Also, from this perspective, it is convenient to refer to the original MNIST dataset as having 0% label noise.
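A sketch of this label-noise construction (we draw fresh uniform labels for the selected examples rather than permuting among them, which matches the 75% + 0.1 · 25% accounting above; the helper name is ours):

```python
import numpy as np

def add_label_noise(labels, frac, num_classes=10, seed=0):
    """Relabel a random fraction of examples; return noisy labels and a
    boolean mask marking the examples whose labels were left unchanged."""
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    idx = rng.choice(len(labels), size=int(frac * len(labels)), replace=False)
    noisy[idx] = rng.integers(0, num_classes, size=len(idx))
    # ~1/10 of the relabeled examples land back on their true label,
    # so the pristine fraction is about (1 - frac) + frac / 10.
    pristine = noisy == labels
    return noisy, pristine

labels = np.random.default_rng(1).integers(0, 10, size=60_000)
noisy, pristine = add_label_noise(labels, frac=0.25)
print(pristine.mean())   # ~0.775 for 25% label noise
```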
We use a fully connected architecture instead of a convolutional one to mitigate concerns that some of the difference in generalization between the original MNIST and the noisy variants could stem from architectural inductive bias. We restrict ourselves to only 1 hidden layer to have the gradients be as well-behaved as possible. Finally, the network width, learning rate, and the number of training steps are chosen to ensure that exactly the same procedure is usually able to fit all 5 variants to 100% training accuracy." }, { "heading": "2.2 QUALITATIVE PREDICTIONS", "text": "Before looking at the experimental results, it is useful to consider what Coherent Gradients can qualitatively say about this setup. In going from 0% label noise to 100% label noise, as per the experiment design, we expect examples in the training set to become more dissimilar (no matter what the current notion of similarity is). Therefore, we expect the per-example gradients to be less aligned with each other. This in turn causes the overall gradient to become more diffuse, i.e., stronger directions become relatively weaker, and consequently, we expect it to take longer to reach a given level of accuracy as label noise increases, i.e., to have a lower realized learning rate.\nThis can be made more precise by considering the following heuristic argument. Let $\theta_t$ be the vector of trainable parameters of the network at training step $t$. Let $L$ denote the loss function of the network (over all training examples). Let $g_t$ be the gradient of $L$ at $\theta_t$ and let $\alpha$ denote the learning rate. By Taylor expansion, to first order, the change $\Delta L_t$ in the loss function due to a small gradient descent step $h_t = -\alpha \cdot g_t$ is given by\n$\Delta L_t := L(\theta_t + h_t) - L(\theta_t) \approx \langle g_t, h_t \rangle = -\alpha \cdot \langle g_t, g_t \rangle = -\alpha \cdot \|g_t\|^2$   (1)\nwhere $\|\cdot\|$ denotes the $\ell_2$-norm. Now, let $g_{te}$ denote the gradient of training example $e$ at step $t$. Since the overall gradient is the sum of the per-example gradients, we have\n$\|g_t\|^2 = \langle g_t, g_t \rangle = \big\langle \textstyle\sum_e g_{te}, \sum_e g_{te} \big\rangle = \sum_{e, e'} \langle g_{te}, g_{te'} \rangle = \sum_e \|g_{te}\|^2 + \sum_{e \neq e'} \langle g_{te}, g_{te'} \rangle$   (2)\nNow, heuristically, let us assume that all the $\|g_{te}\|$ are roughly the same and equal to $\|g^\circ_t\|$, which is not entirely unreasonable (at least at the start of training, if the network has no a priori reason to treat different examples very differently). If all the per-example gradients are approximately orthogonal (i.e., $\langle g_{te}, g_{te'} \rangle \approx 0$ for $e \neq e'$), then $\|g_t\|^2 \approx m \cdot \|g^\circ_t\|^2$, where $m$ is the number of examples. On the other hand, if they are approximately the same (i.e., $\langle g_{te}, g_{te'} \rangle \approx \|g^\circ_t\|^2$), then $\|g_t\|^2 \approx m^2 \cdot \|g^\circ_t\|^2$. Thus, we expect that the greater the agreement in per-example gradients, the faster the loss should decrease.\nFinally, for datasets that have significant fractions of pristine and corrupt examples (i.e., the 25%, 50%, and 75% noise), we can make a more nuanced prediction. Since, in those datasets, the pristine examples as a group are still more similar than the corrupt ones, we expect the pristine gradients to continue to align well and sum up to a strong gradient. Therefore, we expect them to be learned faster than the corrupt examples, and at a rate closer to the realized learning rate in the 0% label noise case. Likewise, we expect the realized learning rate on the corrupt examples to be closer to the 100% label noise case. And as the proportion of pristine examples falls with increasing noise, we expect the realized learning rate for pristine examples to degrade.\nNote that this provides an explanation for the observation in the literature that networks can learn even when the number of examples with noisy labels greatly outnumbers the clean examples, as long as the number of clean examples is sufficiently large (Rolnick et al., 2017), since with too few clean examples the pristine gradients are not strong enough to dominate.
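The $m$ vs. $m^2$ scaling in this heuristic argument is easy to verify numerically; a minimal sketch with synthetic per-example gradients:

```python
import numpy as np

rng = np.random.default_rng(0)
m, d = 100, 10_000

# Dissimilar examples: independent random per-example gradients (near-orthogonal
# in high dimension), each scaled so that ||g_te|| ~ 1.
G = rng.normal(size=(m, d)) / np.sqrt(d)
print(np.linalg.norm(G.sum(axis=0)) ** 2)   # ~ m   (about 100)

# Similar examples: every per-example gradient is the same unit direction.
g = rng.normal(size=d) / np.sqrt(d)
G = np.tile(g, (m, 1))
print(np.linalg.norm(G.sum(axis=0)) ** 2)   # ~ m^2 (about 10,000)
```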
" }, { "heading": "2.3 AGREEMENT WITH EXPERIMENT", "text": "Figure 1(a) and (b) show the training and test curves for the baseline and the 4 variants. We note that for all 5 variants, at the end of training, we achieve 100% training accuracy but different amounts of generalization. As expected, SGD is able to fit random labels, yet when trained on real data, generalizes well. Figure 1(c) shows the reduction in training loss over the course of training, and Figure 1(d) shows the fraction of pristine and corrupt labels learned as training progresses.\nThe results are in agreement with the qualitative predictions made above:\n1. In general, as noise increases, the time taken to reach a given level of accuracy (i.e., the realized learning rate) increases.\n2. Pristine examples are learned faster than corrupt examples. They are learned at a rate closer to the 0% label noise rate whereas the corrupt examples are learned at a rate closer to the 100% label noise rate.\n3. With fewer pristine examples, their learning rate reduces. This is most clearly seen in the first few steps of training by comparing, say, 0% noise with 25% noise.\nUsing Equation 1, note that the magnitude of the slope of the training loss curve is a good measure of the square of the $\ell_2$-norm of the overall gradient. Therefore, from the loss curves of Figure 1(c), it is clear that in early training, the more the noise, the weaker the $\ell_2$-norm of the gradient. If we assume that the per-example $\ell_2$-norm is the same in all variants at the start of training, then from Equation 2, it is clear that with greater noise, the gradients are more dissimilar.\nFinally, we note that this experiment is an instance where early stopping (e.g., Caruana et al. (2000)) is effective. Coherent Gradients and the discussion in §2.2 provide some insight into this: Strong gradients both generalize well (they are stable since they are supported by many examples) and they bring the training loss down quickly for those examples. Thus early stopping maximizes the use of strong gradients and limits the impact of weak gradients. (The experiment in §3 discusses a different way to limit the impact of weak gradients and is an interesting point of comparison with early stopping.)" }, { "heading": "2.4 ANALYZING STRONG AND WEAK GRADIENTS", "text": "Within each noisy dataset, we expect the pristine examples to be more similar to each other and the corrupt ones to be less similar. In turn, based on the training curves (particularly, Figure 1(d)), during the initial part of training, this should mean that the gradients from the pristine examples are stronger than the gradients from the corrupt examples. We can study this effect via a different decomposition of the square of the $\ell_2$-norm of the gradient (or equivalently, up to a constant, the change in the loss function):\n$\langle g_t, g_t \rangle = \langle g_t, g^p_t + g^c_t \rangle = \langle g_t, g^p_t \rangle + \langle g_t, g^c_t \rangle$\nwhere $g^p_t$ and $g^c_t$ are the sums of the gradients of the pristine examples and corrupt examples, respectively.
(We cannot decompose the overall norm into a sum of norms of pristine and corrupt due to the cross terms $\langle g^p_t, g^c_t \rangle$. With this decomposition, we attribute the cross terms equally to both.) Now, set $f^p_t = \langle g_t, g^p_t \rangle / \langle g_t, g_t \rangle$ and $f^c_t = \langle g_t, g^c_t \rangle / \langle g_t, g_t \rangle$. Thus, $f^p_t$ and $f^c_t$ represent the fractions of the loss reduction due to pristine and corrupt examples at each time step, respectively (and we have $f^p_t + f^c_t = 1$); based on the foregoing, we expect the pristine fraction to be the larger fraction of the total when training starts and to diminish as training progresses and the pristine examples are fitted.\nThe first row of Figure 2 shows a plot of estimates of $f^p_t$ and $f^c_t$ for 25%, 50% and 75% noise. These quantities were estimated by recording a sample of 400 per-example gradients for 600 weights (300 from each layer) in the network. We see that for 25% and 50% label noise, $f^p_t$ initially starts off higher than $f^c_t$ and after a few steps they cross over. This happens because at that point all the pristine examples have been fitted and for most of the rest of training the corrupt examples need to be fitted, so they largely contribute to the $\ell_2$-norm of the gradient (or equivalently, by Equation 1, to the loss reduction). Only at the end, when the corrupt examples have also been fit, do the two curves reach parity. In the case of 75% noise, we see that the crossover does not happen, but there is a slight downward slope in the contribution from pristine examples. We believe this is because of the sheer number of corrupt examples: even though the individual corrupt example gradients are weak, their sum dominates.\nTo get a sense of statistical significance for our hypothesis that there is a difference between the pristine and corrupt examples as a group, in the remaining rows of Figure 2 we construct a null world where there is no difference between pristine and corrupt. We do that by randomly permuting the "corrupt" and "pristine" designations among the examples (instead of using the actual designations) and replotting. Although the null pristine and corrupt curves are mirror images (as they must be, even in the null world, since each example is given one of the two designations), we note that for 25% and 50% they do not cross over as they do with the real data. This increases our confidence that the null may be rejected. The 75% case is weaker, but only the real data shows the slight downward slope in pristine, which none of the nulls typically show. However, all the nulls do show that corrupt is more than pristine, which increases our confidence that this is due to the significantly differing sizes of the two sets. (Note that this happens in reverse in the 25% case: pristine is always above corrupt, but they never cross over in the null worlds.)
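For reference, the fractions $f^p_t$ and $f^c_t$ defined above can be estimated from a sample of per-example gradients as follows (a sketch with our own array layout, not the paper's instrumentation code):

```python
import numpy as np

def pristine_corrupt_fractions(per_example_grads, pristine_mask):
    """per_example_grads: (num_examples, num_sampled_weights) array of g_te.
    Returns (f_p, f_c): the fractions of <g_t, g_t> attributable to the
    pristine and corrupt groups; they sum to 1 by construction."""
    g = per_example_grads.sum(axis=0)                   # overall gradient g_t
    g_p = per_example_grads[pristine_mask].sum(axis=0)  # g^p_t
    g_c = per_example_grads[~pristine_mask].sum(axis=0) # g^c_t
    total = g @ g                                       # <g_t, g_t>
    return (g @ g_p) / total, (g @ g_c) / total

# Demo with a random sample of 400 per-example gradients over 600 weights.
rng = np.random.default_rng(0)
grads = rng.normal(size=(400, 600))
mask = rng.random(400) < 0.75     # ~75% pristine, as in the 25% noise setting
f_p, f_c = pristine_corrupt_fractions(grads, mask)
print(f_p, f_c, f_p + f_c)        # f_p + f_c == 1
```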
Define\nipt := 1\n|p| t∑ t′=0 〈gt′ , gpt′〉 and i c t := 1 |c| t∑ t′=0 〈gt′ , gct′〉\nwhich represents to a first order and upto a scale factor (α) the mean cumulative contribution of a pristine or corrupt example up until that point in training (since the total change in loss from the start of training to time t is approximately the sum of first order changes in the loss at each time step).\nThe first row of Figure 3 shows ipt and i c t for the first 10 steps of training where the difference between pristine and corrupt is the most pronounced. As before, to give a sense of statistical significance, the remaining rows show the same plots in null worlds where we randomly permute the pristine or corrupt designations of the examples. The results appear somewhat significant but not overwhelmingly so. It would be interesting to redo this on the entire population of examples and trainable parameters instead of a small sample." }, { "heading": "3 EFFECT OF SUPPRESSING WEAK GRADIENT DIRECTIONS", "text": "In the second test of the Coherent Gradients hypothesis, we change GD itself in a very specific (and to our knowledge, novel) manner suggested by the theory. Our inspiration comes from random forests. As noted in the introduction, by building sufficiently deep trees a random forest algorithm can get perfect training accuracy with random labels, yet generalize well when trained on real data. However, if we limit the tree construction algorithm to have a certain minimum number of examples in each leaf, then it no longer overfits. In the case of GD, we can do something similar by suppressing the weak gradient directions." }, { "heading": "3.1 SETUP", "text": "Our baseline setup is the same as before (§2.1) but we add a new dimension by modifying SGD to update each parameter with a “winsorized” gradient where we clip the most extreme values (outliers) among all the per-example gradients. Formally, let gwe be the gradient for the trainable parameter w for example e. The usual gradient computation for w is\ngw = ∑ e gwe\nNow let c ∈ [0, 50] be a hyperparameter that controls the level of winsorization. Define lw to be the c-th percentile of gwe taken over the examples. Similarly, let uw be the (100− c)-th percentile. Now, compute the c-winsorized gradient for w (denoted by gcw) as follows:\ngcw := ∑ e clip(gwe, lw, uw)\nThe change to gradient descent is to simply use gcw instead of gw when updating w at each step.\nNote that although this is conceptually a simple change, it is computationally very expensive due to the need for per-example gradients. To reduce the computational cost we only use the examples in the minibatch to compute lw and uw. Furthermore, instead of using 1 hidden layer of 2048 ReLUs, we use a smaller network with 3 hidden layers of 256 ReLUs each, and train for 60,000 steps (i.e., 100 epochs) with a fixed learning rate of 0.1. We train on the baseline dataset and the 4 noisy variants with c ∈ {0, 1, 2, 4, 8}. Since we have 100 examples in each minibatch, the value of c immediately tells us how many outliers are clipped in each minibatch. For example, c = 2 means the 2 largest and 2 lowest values of the per-example gradient are clipped (independently for each trainable parameter in the network), and c = 0 corresponds to unmodified SGD." }, { "heading": "3.2 QUALITATIVE PREDICTIONS", "text": "If the Coherent Gradient hypothesis is right, then the strong gradients are responsible for making changes to the network that generalize well since they improve many examples simultaneously. 
" }, { "heading": "3.2 QUALITATIVE PREDICTIONS", "text": "If the Coherent Gradients hypothesis is right, then the strong gradients are responsible for making changes to the network that generalize well, since they improve many examples simultaneously. On the other hand, the weak gradients lead to overfitting since they only improve a few examples. By winsorizing each coordinate, we suppress the most extreme values and thus ensure that a parameter is only updated in a manner that benefits multiple examples. Therefore:\n• Since $c$ controls which examples are considered extreme, the larger $c$ is, the less we expect the network to overfit.\n• But this also makes it harder for the network to fit the training data, and so we expect the training accuracy to fall as well.\n• Winsorization will not completely eliminate the weak directions. For example, for small values of $c$ we should still expect overfitting to happen over time, though at a reduced rate, since only the most egregious outliers are suppressed." }, { "heading": "3.3 AGREEMENT WITH EXPERIMENT", "text": "The resulting training and test curves are shown in Figure 4. The columns correspond to different amounts of label noise and the rows to different amounts of winsorization. In addition to the training and test accuracies ($ta$ and $va$, respectively), we show the level of overfit, which is defined as $ta - [\epsilon \cdot \frac{1}{10} + (1-\epsilon) \cdot va]$, where $\epsilon$ is the fraction of label noise, to account for the fact that the test labels are not randomized. We see that the experimental results are in agreement with the predictions above. In particular,\n• For $c > 1$, training accuracies do not exceed the proper accuracy of the dataset, though they may fall short, especially for large values of $c$.\n• The rate at which the overfit curve grows goes down with increasing $c$.\nAdditionally, we notice that with a large amount of winsorization, the training and test accuracies reach a maximum and then go down. Part of the reason is that, as a result of winsorization, each step is no longer in a descent direction, i.e., this is no longer gradient descent." }, { "heading": "4 DISCUSSION AND RELATED WORK", "text": "Although there has been a lot of work in recent years in trying to understand generalization in Deep Learning, no entirely satisfactory explanation has emerged so far.\nThere is a rich literature on aspects of the stochastic optimization problem such as the loss landscape and minima (e.g., Choromanska et al. (2015); Zhu et al. (2018)), the curvature around stationary points (e.g., Hochreiter & Schmidhuber (1997); Keskar et al. (2016); Dinh et al. (2017); Wu et al. (2018)), and the implications of stochasticity due to sampling in SGD (e.g., Simsekli et al. (2019)). However, we believe it should be possible to understand generalization without a detailed understanding of the optimization landscape. For example, since stopping early typically leads to a small generalization gap, the nature of the solutions of GD (e.g., stationary points, the limit cycles of SGD at equilibrium) cannot be solely responsible for generalization. In fact, from this observation, it would appear that an inductive argument for generalization would be more natural. Likewise, there is reason to believe that stochasticity is not fundamental to generalization (though it may help). For example, modifying the experiment in §2.1 to use full batch leads to similar qualitative generalization results. This is consistent with other small scale studies (e.g., Figure 1 of Wu et al. (2018)) though we are not aware of any large scale studies on full batch.\nOur view of optimization is a simple, almost combinatorial, one: gradient descent is a greedy search with some hill-climbing thrown in (due to sampling in SGD and finite step size).
Therefore, we worry less about the quality of solutions reached, but more about staying “feasible” at all times during the search. In our context, feasibility means being able to generalize; and this naturally leads us to look at the transition dynamics to see if that preserves generalizability.\nAnother approach to understanding generalization is to argue that gradient-based optimization induces a form of implicit regularization leading to a bias towards models of low complexity. This is an extension of the classical approach where bounding a complexity measure leads to bounds on the generalization gap. As is well known, classical measures of complexity (also called capacity) do not work well. For example, sometimes adding more parameters to a net can help generalization (see, e.g., Lawrence et al. (1996); Neyshabur et al. (2018)) and, as we have seen, VC-Dimension and Rademacher Complexity-based bounds must be vacuous since networks can memorize random labels and yet generalize on real data. This has led to a lot of recent work in identifying better measures of complexity such as spectrally-normalized margin (Bartlett et al., 2017), path-based group norm (Neyshabur et al., 2018), a compression-based approach (Arora et al., 2018), etc. However, to our knowledge, none of these measures is entirely satisfactory for accounting for generalization in practice. Please see Nagarajan & Kolter (2019) for an excellent discussion of the challenges.\nWe rely on a different classical notion to argue generalization: algorithmic stability (see Bousquet & Elisseeff (2002) for a historical overview). We have provided only an informal argument in Section 1. There has been prior work by Hardt et al. (2016) looking at GD and SGD through the lens of stability, but their formal results do not explain generalization in practical settings (e.g., multiple epochs of training and non-convex objectives). In fact, such an attempt appears unlikely to work, since our experimental results imply that any stability bounds for SGD that do not account for the actual training data must be vacuous! (This was also noted by Zhang et al. (2017).) That said, we believe stability is the right way to think about generalization in GD for a few reasons. First, by Shalev-Shwartz et al. (2010), stability, suitably formalized, is equivalent to generalization; therefore, in principle, any explanation of generalizability for a learning problem must—to borrow a term from category theory—factor through stability. Second, a stability-based analysis may be more amenable to taking the actual training data into account (perhaps by using a “stability accountant” similar to a privacy accountant), which appears necessary to get non-vacuous bounds for practical networks and datasets. Finally, as we have seen with the modification in §3, a stability-based approach is not just descriptive but prescriptive2 and can point the way to better learning algorithms.\nFinally, we look at two relevant lines of work pointed out by a reviewer. First, Rahaman et al. (2019) compute the Fourier spectrum of ReLU networks and argue, based on heuristics and experiments, that these networks learn low-frequency functions first. In contrast, we focus not on the function learnt, but on the mechanism in GD to detect commonality. This leads to a perspective that is at once simpler and more general (e.g., it applies equally to networks with other activation functions, with attention, LSTMs, and discrete (combinatorial) inputs).
Furthermore, it opens up a path to analyzing generalization via stability. It is not clear if Rahaman et al. (2019) claim a causal mechanism, but their analysis does not suggest an obvious intervention experiment, such as ours in §3, to test causality. There are other experimental results that show biases towards linear functions (Nakkiran et al., 2019) and functions with low descriptive complexity (Valle-Perez et al., 2019), but these papers do not posit a causal mechanism. It is interesting to consider if Coherent Gradients can provide a unified explanation for these observed biases.\nSecond, Fort et al. (2019) propose a descriptive statistic, stiffness, based on pairwise per-example gradients and show experimentally that it can be used to characterize generalization. Sankararaman et al. (2019) propose a very similar statistic called gradient confusion but use it to study the speed of training. Unlike our work, these do not propose causal mechanisms for generalization, but these statistics (which are different from those in §2.4) could be useful for the further study of Coherent Gradients." }, { "heading": "5 DIRECTIONS FOR FUTURE WORK", "text": "Does the Coherent Gradients hypothesis hold in other settings such as BERT, ResNet, etc.? For that we would need to develop more computationally efficient tests. Can we use the state of the network to explicitly characterize which examples are considered similar and study this evolution in the course of training? We expect non-parametric methods for similarity, such as those developed in Chatterjee & Mishchenko (2019), and their characterization of “easy” examples (i.e., examples learnt early as per Arpit et al. (2017)) as those with many others like them, to be useful in this context.\nCan Coherent Gradients explain adversarial initializations (Liu et al., 2019)? The adversarial initial state makes semantically similar examples purposefully look different. Therefore, during training, they continue to be treated differently (i.e., their gradients share less in common than they would if starting from a random initialization). Thus, fitting is more case-by-case, and while it achieves good final training accuracy, it does not generalize.\n2See https://www.offconvex.org/2017/12/08/generalization1/ for a nice discussion of the difference.\nCan Coherent Gradients, along with the Lottery Ticket Hypothesis (Frankle & Carbin, 2018), explain the observation in Neyshabur et al. (2018) that wider networks generalize better? By Lottery Ticket, wider networks provide more chances to find initial gradient directions that improve many examples, and by Coherent Gradients, these popular hypotheses are learned preferentially (faster).\nCan we use the ideas behind Winsorized SGD from §3 to develop a computationally efficient learning algorithm with generalization (and even privacy) guarantees? How do winsorized gradients compare in practice to the algorithm proposed in Abadi et al. (2016) for privacy? Last, but not least, can we use the insights from this work to design learning algorithms that operate natively on discrete networks?" }, { "heading": "ACKNOWLEDGMENTS", "text": "I thank Alan Mishchenko, Shankar Krishnan, Piotr Zielinski, Chandramouli Kashyap, Sergey Ioffe, Michele Covell, and Jay Yagnik for helpful discussions." } ]
2,020
COHERENT GRADIENTS: AN APPROACH TO UNDERSTANDING GENERALIZATION IN GRADIENT DESCENT-BASED OPTIMIZATION
SP:761207caf0d1b23f060e3957a6309bc6d76819a6
[ "The authors evaluate convolutional autoencoders (CAE) by varying the size (width & height) and depth of the bottleneck layer on three datasets and compare test and training performance. They furthermore evaluate the quality of the bottleneck activations for linear classification. The authors also investigate the belief that a bottleneck layer of size equal to the input image will copy the image. ", "This paper studies some of the properties of fully convolutional autoencoders (CAE) as a function of the shape and total size of the bottleneck. They train and test CAEs with bottlenecks consisting of different ratios of spatial resolution versus number of channels, as well as different total number of neurons. The authors investigate which type of change in the bottleneck is most influential on training behavior, generalization to test set, and linear separability for classification/regression. Their first main finding is that the spatial resolution of the bottleneck is a stronger influencer of generalization to the test set than the number of channels and the total number of neurons in the bottleneck. The second main finding is that even when the total number of neurons in the bottleneck is equal to the data input size, the neural network does not appear to simply learn to copy the input image into the bottleneck. " ]
In this paper, we present an in-depth investigation of the convolutional autoencoder (CAE) bottleneck. Autoencoders (AE), and especially their convolutional variants, play a vital role in the current deep learning toolbox. Researchers and practitioners employ CAEs for a variety of tasks, ranging from outlier detection and compression to transfer and representation learning. Despite their widespread adoption, we have limited insight into how the bottleneck shape impacts the emergent properties of the CAE. We demonstrate that increased height and width of the bottleneck drastically improves generalization, which in turn leads to better performance of the latent codes in downstream transfer learning tasks. The number of channels in the bottleneck, on the other hand, is secondary in importance. Furthermore, we show empirically that, contrary to popular belief, CAEs do not learn to copy their input, even when the bottleneck has the same number of neurons as there are pixels in the input. Copying does not occur, despite training the CAE for 1,000 epochs on a tiny (≈ 600 images) dataset. We believe that the findings in this paper are directly applicable and will lead to improvements in models that rely on CAEs.
[]
[ { "authors": [ "Guillaume Alain", "Yoshua Bengio" ], "title": "What Regularized Auto-Encoders Learn from the Data Generating Distribution", "venue": "arXiv e-prints, art", "year": 2012 }, { "authors": [ "Sanjeev Arora", "Aditya Bhaskara", "Rong Ge", "Tengyu Ma" ], "title": "Provable bounds for learning some deep representations", "venue": "CoRR, abs/1310.6343,", "year": 2013 }, { "authors": [ "Devansh Arpit", "Yingbo Zhou", "Hung Ngo", "Venu Govindaraju" ], "title": "Why Regularized Auto-Encoders learn Sparse Representation", "venue": "arXiv e-prints, art", "year": 2015 }, { "authors": [ "Pierre Baldi" ], "title": "Autoencoders, unsupervised learning, and deep architectures", "venue": "Proceedings of ICML Workshop on Unsupervised and Transfer Learning,", "year": 2012 }, { "authors": [ "Christoph Baur", "Benedikt Wiestler", "Shadi Albarqouni", "Nassir Navab" ], "title": "Deep autoencoding models for unsupervised anomaly segmentation in brain mr images", "venue": null, "year": 2019 }, { "authors": [ "Yoshua Bengio", "Pascal Lamblin", "Dan Popovici", "Hugo Larochelle" ], "title": "Greedy layer-wise training of deep networks", "venue": "Advances in Neural Information Processing Systems", "year": 2007 }, { "authors": [ "Yoshua Bengio", "Jérôme Louradour", "Ronan Collobert", "Jason Weston" ], "title": "Curriculum learning", "venue": "In Proceedings of the 26th annual international conference on machine learning,", "year": 2009 }, { "authors": [ "Yoshua Bengio", "Li Yao", "Guillaume Alain", "Pascal Vincent" ], "title": "Generalized denoising auto-encoders as generative models", "venue": "CoRR, abs/1305.6663,", "year": 2013 }, { "authors": [ "David Berthelot", "Colin Raffel", "Aurko Roy", "Ian Goodfellow" ], "title": "Understanding and improving interpolation in autoencoders via an adversarial regularizer", "venue": "arXiv preprint arXiv:1807.07543,", "year": 2018 }, { "authors": [ "David Berthelot", "Colin Raffel", "Aurko Roy", "Ian Goodfellow" ], "title": "Improving interpolation in autoencoders. 2019", "venue": "URL https://openreview.net/pdf?id=S1fQSiCcYm", "year": 2019 }, { "authors": [ "Jinghui Chen", "Saket Sathe", "Charu Aggarwal", "Deepak Turaga" ], "title": "Outlier Detection with Autoencoder Ensembles", "venue": "URL https: //epubs.siam.org/doi/abs/10.1137/1.9781611974973.11", "year": 1611 }, { "authors": [ "Z. Cheng", "H. Sun", "M. Takeuchi", "J. Katto" ], "title": "Deep convolutional autoencoder-based lossy image compression", "venue": "Picture Coding Symposium (PCS),", "year": 2018 }, { "authors": [ "Adam Coates", "Andrew Ng", "Honglak Lee" ], "title": "An analysis of single-layer networks in unsupervised feature learning", "venue": "In Proceedings of the fourteenth international conference on artificial intelligence and statistics,", "year": 2011 }, { "authors": [ "T. Dumas", "A. Roumy", "C. 
Guillemot" ], "title": "Autoencoder based image compression: Can the learning be quantization independent", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2018 }, { "authors": [ "Vincent Dumoulin", "Francesco Visin" ], "title": "A guide to convolution arithmetic for deep learning", "venue": "arXiv preprint arXiv:1603.07285,", "year": 2016 }, { "authors": [ "Dumitru Erhan", "Pierre-Antoine Manzagol", "Yoshua Bengio", "Samy Bengio", "Pascal Vincent" ], "title": "The difficulty of training deep architectures and the effect of unsupervised pre-training", "venue": "Proceedings of the Twelth International Conference on Artificial Intelligence and Statistics,", "year": 2009 }, { "authors": [ "Ian J Goodfellow", "Oriol Vinyals", "Andrew M Saxe" ], "title": "Qualitatively characterizing neural network optimization problems", "venue": "arXiv preprint arXiv:1412.6544,", "year": 2014 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-Encoding Variational Bayes", "venue": "arXiv e-prints, art", "year": 2013 }, { "authors": [ "Quoc V. Le" ], "title": "Building high-level features using large scale unsupervised learning", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 8595–8598,", "year": 2013 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In Proceedings of International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Kin Gwn Lore", "Adedotun Akintayo", "Soumik Sarkar" ], "title": "Llnet: A deep autoencoder approach to natural low-light image enhancement", "venue": "Pattern Recognition,", "year": 1630 }, { "authors": [ "Xiao-Jiao Mao", "Chunhua Shen", "Yu-Bin Yang" ], "title": "Image Restoration Using Convolutional Autoencoders with Symmetric Skip Connections", "venue": "arXiv e-prints, art", "year": 2016 }, { "authors": [ "Jonathan Masci", "Ueli Meier", "Dan Cireşan", "Jürgen Schmidhuber" ], "title": "Stacked convolutional autoencoders for hierarchical feature extraction", "venue": "In International Conference on Artificial Neural Networks,", "year": 2011 }, { "authors": [ "Vinod Nair", "Geoffrey E Hinton" ], "title": "Rectified linear units improve restricted boltzmann machines", "venue": "In Proceedings of the 27th international conference on machine learning", "year": 2010 }, { "authors": [ "Thanh V. Nguyen", "Raymond K.W. Wong", "Chinmay Hegde" ], "title": "Autoencoders Learn Generative Linear Models", "venue": "arXiv e-prints, art", "year": 2018 }, { "authors": [ "Augustus Odena", "Vincent Dumoulin", "Chris Olah" ], "title": "Deconvolution and checkerboard artifacts. Distill, 2016", "venue": "doi: 10.23915/distill.00003. 
URL http://distill.pub/2016/deconv-checkerboard", "year": 2016 }, { "authors": [ "Maithra Raghu", "Justin Gilmer", "Jason Yosinski", "Jascha Sohl-Dickstein" ], "title": "SVCCA: Singular vector canonical correlation analysis for deep learning dynamics and interpretability", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "David E Rumelhart", "Geoffrey E Hinton", "Ronald J Williams" ], "title": "Learning internal representations by error propagation", "venue": "Technical report, California Univ San Diego La Jolla Inst for Cognitive Science,", "year": 1985 }, { "authors": [ "Zhixin Shu", "Mihir Sahasrabudhe", "Riza Alp Guler", "Dimitris Samaras", "Nikos Paragios", "Iasonas Kokkinos" ], "title": "Deforming autoencoders: Unsupervised disentangling of shape and appearance", "venue": "In The European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Edgar Tretschk", "Ayush Tewari", "Michael Zollhöfer", "Vladislav Golyanik", "Christian Theobalt" ], "title": "DEMEA: deep mesh autoencoders for non-rigidly deforming objects", "venue": "CoRR, abs/1905.10290,", "year": 2019 }, { "authors": [ "Dmitry Ulyanov", "Andrea Vedaldi", "Victor Lempitsky" ], "title": "Instance normalization: The missing ingredient for fast stylization", "venue": "arXiv preprint arXiv:1607.08022,", "year": 2016 }, { "authors": [ "Dmitry Ulyanov", "Andrea Vedaldi", "Victor Lempitsky" ], "title": "Deep image prior", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Pascal Vincent", "Hugo Larochelle", "Yoshua Bengio", "Pierre-Antoine Manzagol" ], "title": "Extracting and composing robust features with denoising autoencoders", "venue": "In Proceedings of the 25th international conference on Machine learning,", "year": 2008 }, { "authors": [ "Yan Xia", "Xudong Cao", "Fang Wen", "Gang Hua", "Jian Sun" ], "title": "Learning discriminative reconstructions for unsupervised outlier removal", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Ozal Yildirim", "Ru San Tan", "U. Rajendra Acharya" ], "title": "An efficient compression of ECG signals using deep convolutional autoencoders", "venue": "Cognitive Systems Research,", "year": 2018 }, { "authors": [ "Chong Zhou", "Randy C. Paffenroth" ], "title": "Anomaly detection with robust deep autoencoders", "venue": "In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD", "year": 2017 } ]
[ { "heading": null, "text": "In this paper, we present an in-depth investigation of the convolutional autoencoder (CAE) bottleneck. Autoencoders (AE), and especially their convolutional variants, play a vital role in the current deep learning toolbox. Researchers and practitioners employ CAEs for a variety of tasks, ranging from outlier detection and compression to transfer and representation learning. Despite their widespread adoption, we have limited insight into how the bottleneck shape impacts the emergent properties of the CAE. We demonstrate that increased height and width of the bottleneck drastically improves generalization, which in turn leads to better performance of the latent codes in downstream transfer learning tasks. The number of channels in the bottleneck, on the other hand, is secondary in importance. Furthermore, we show empirically that, contrary to popular belief, CAEs do not learn to copy their input, even when the bottleneck has the same number of neurons as there are pixels in the input. Copying does not occur, despite training the CAE for 1,000 epochs on a tiny (≈ 600 images) dataset. We believe that the findings in this paper are directly applicable and will lead to improvements in models that rely on CAEs." }, { "heading": "1 INTRODUCTION", "text": "Autoencoders (AE) are an integral part of the neural network toolkit. They are a class of neural networks that consist of an encoder and decoder part and are trained by reconstructing datapoints after encoding them. Due to their conceptual simplicity, autoencoders often appear in teaching materials as introductory models to the field of deep unsupervised learning. Nevertheless, autoencoders have enabled major contributions in the application and research of the field. The main areas of application include outlier detection (Xia et al., 2015; Chen et al., 2017; Zhou & Paffenroth, 2017; Baur et al., 2019), data compression (Yildirim et al., 2018; Cheng et al., 2018; Dumas et al., 2018), and image enhancement (Mao et al., 2016; Lore et al., 2017). In the early days of deep learning, autoencoders were a crucial tool for the training of deep models. Training large (by the standards of the time) models was challenging, due to the lack of big datasets and computational resources. One way around this problem was to pre-train some or all layers of the network greedily by treating them as autoencoders with one hidden layer (Bengio et al., 2007). Subsequently, Erhan et al. (2009) demonstrated that autoencoder pre-training also benefits generalization. Currently, researchers in the field of representation learning frequently rely on autoencoders for learning nuanced and high-level representations of data (Kingma & Welling, 2013; Tretschk et al., 2019; Shu et al., 2018; Makhzani et al., 2015; Berthelot et al., 2018).\nHowever, despite its widespread use, we propose that the (deep) autoencoder model is not well understood. Many papers have aimed to deepen our understanding of the autoencoder through theoretical analysis (Nguyen et al., 2018; Arora et al., 2013; Baldi, 2012; Alain & Bengio, 2012). While such analyses provide valuable theoretical insight, there is a significant discrepancy between the theoretical frameworks and actual behavior of autoencoders in practice, mainly due to the assumptions made (e.g., weight tying, infinite depth) or the simplicity of the models under study. 
Others have approached this issue from a more experimental angle (Arpit et al., 2015; Bengio et al., 2013; Le, 2013; Vincent et al., 2008; Berthelot et al., 2019). Such investigations are part of an ongoing effort to understand the behavior of autoencoders in a variety of settings.\nThe focus of most such investigations so far has been the traditional autoencoder setting with fully connected layers. When working with image data, however, the default choice is to use convolutions, as they provide a prior that is well suited to this type of data (Ulyanov et al., 2018). For this reason, Masci et al. (2011) introduced the convolutional autoencoder (CAE) by replacing the fully connected layers in the classical AE with convolutions. In an autoencoder, the layer with the fewest neurons is referred to as the bottleneck. In the regular AE, this bottleneck is simply a vector (rank-1 tensor). In CAEs, however, the bottleneck assumes the shape of a multichannel image (rank-3 tensor, height × width × channels). This bottleneck shape prompts the question: What is the relative importance of the number of channels versus the height and width (hereafter referred to as size) in determining the tightness of the CAE bottleneck? Intuitively, we might expect that only the total number of neurons should matter, since convolutions with one-hot filters can distribute values across channels. Generally, the study of CAE properties appears to be underrepresented in the literature, despite their widespread adoption.\nIn this paper, we share new insights into the properties of convolutional autoencoders, which we gained through extensive experimentation. We address the following questions:\n• How does the number of channels and the feature map size in the bottleneck layer impact\n– reconstruction quality?\n– generalization ability?\n– the structure of the latent code?\n– knowledge transfer to downstream tasks?\n• How and when do CAEs overfit?\n• How does the complexity of the data distribution affect all of the above?\n• Are CAEs capable of learning a “copy function” if the CAE is complete (i.e., when the number of pixels in the input equals the number of neurons in the bottleneck)? This “copying CAE” hypothesis is a commonly held belief that was carried over from regular AEs (see Sections 4 and 5 in Masci et al. (2011)).\nWe begin the following section by formally introducing convolutional autoencoders and explaining the convolutional autoencoder model we used in our experiments. Additionally, we introduce our three datasets and the motivation for choosing them. In Section 3, we outline the experiments and their respective aims. Afterward, we present and discuss our findings in Section 4. All of our code, as well as the trained models and datasets, will be published at https://github.com/YmouslyAnon/WalkingTheTightrope. This repository will also include an interactive Jupyter Notebook for investigating the trained models. We invite interested readers to take a look and experiment with our models." }, { "heading": "2 MATERIALS AND METHODS", "text": "" }, { "heading": "2.1 AUTOENCODERS AND CONVOLUTIONAL AUTOENCODERS", "text": "The regular autoencoder, as introduced by Rumelhart et al. (1985), is a neural network that learns a mapping from data points in the input space $x \in \mathbb{R}^d$ to a code vector in latent space $h \in \mathbb{R}^m$ and back. Typically, unless we introduce some other constraint, $m$ is set to be smaller than $d$ to force the autoencoder to learn higher-level abstractions by having to compress the data.
In this context, the encoder is the mapping $f(x): \mathbb{R}^d \to \mathbb{R}^m$ and the decoder is the mapping $g(h): \mathbb{R}^m \to \mathbb{R}^d$. The layers in both the encoder and decoder are fully connected:\n$$l_{i+1} = \sigma(W^i l_i + b^i). \quad (1)$$\nHere, $l_i$ is the activation vector in the $i$-th layer, $W^i$ and $b^i$ are the trainable weights, and $\sigma$ is an element-wise non-linear activation function. If necessary, we can tie the weights in the encoder to the ones in the decoder such that $W^i = (W^{n-i})^T$, where $n$ is the total number of layers. The literature refers to autoencoders with this type of encoder-decoder relation as weight-tied.\nThe convolutional autoencoder keeps the overall structure of the traditional autoencoder but replaces the fully connected layers with convolutions:\n$$L_{i+1} = \sigma(W_i * L_i + b_i), \quad (2)$$\nwhere $*$ denotes the convolution operation and the bias $b_i$ is broadcast to match the shape of $L_i$ such that the $j$-th entry in $b_i$ is added to the $j$-th channel in $L_i$. Whereas before the hidden code was an $m$-dimensional vector, it is now a tensor with a rank equal to the rank of the input tensor. In the case of images, that rank is three (height, width, and the number of channels). CAEs generally include pooling layers or convolutions with strides > 1 or dilation > 1 in the encoder to reduce the size of the input. In the decoder, unpooling or transposed convolution layers (Dumoulin & Visin, 2016) inflate the latent code to the size of the input." }, { "heading": "2.2 OUR MODEL", "text": "Our model consists of five strided convolution layers in the encoder and five up-sampling convolution layers (bilinear up-sampling followed by padded convolution) (Odena et al., 2016) in the decoder. We chose to use five layers so that the size of the latent code, after the strided convolutions, would be 4x4 or 3x3, depending on the dataset. To increase the level of abstraction in the latent code, we increased the depth of the network by placing two residual blocks (He et al., 2016) with two convolutions each after every strided / up-sampling convolution layer. We applied instance normalization (Ulyanov et al., 2016) and ReLU activation (Nair & Hinton, 2010) following every convolution in the architecture.\nOne of our goals was to understand the effect latent code shape has on different aspects of the network. Therefore, we wanted to be able to change the shape of the bottleneck from one experiment to another, while keeping the rest of the network constant. To this end, we quadrupled the number of channels with every strided convolution $s_i$ and reduced it by a factor of four with every up-sampling convolution $u_i$. In effect, this means that the volume (i.e., height × width × channels) of the feature maps is identical to the input in all layers up to the bottleneck:\n$$s_i(L_i) \in \mathbb{R}^{h^i/2 \times w^i/2 \times 4n_c^i}, \quad \text{for } L_i \in \mathbb{R}^{h^i \times w^i \times n_c^i} \quad (3)$$\n$$u_i(L_i) \in \mathbb{R}^{2h^i \times 2w^i \times n_c^i/4}, \quad \text{for } L_i \in \mathbb{R}^{h^i \times w^i \times n_c^i} \quad (4)$$\nIn this regard, our model differs from CAEs commonly found in the literature, where it is customary to double/halve the number of channels with every down-/up-sampling layer. However, our scheme allows us to test architectures with different bottleneck shapes while ensuring that the volume of the feature maps stays the same as the input until the bottleneck. In this sense, the bottleneck is the only moving part in our experiments. The resulting models range from having ∼ 50M to 90M parameters." }, { "heading": "2.3 DATASETS", "text": "To increase the robustness of our study, we conducted experiments on three different datasets.
Additionally, the three datasets allowed us to address the question of how the difficulty of the data (i.e., the complexity of the data distribution) affects learning in the CAE. To study this effect, we decided to run our experiments on three datasets of varying difficulty. We determined the difficulty of each dataset based on intuitive heuristics. In the following, we present the datasets in the order of increasing difficulty and our reasoning for the difficulty grading." }, { "heading": "2.3.1 POKEMON", "text": "The first dataset is a blend of the images from "Pokemon Images Dataset"1 and the type information from "The Complete Pokemon Dataset"2, both of which are available on Kaggle. Our combined dataset consists of 793 256×256 pixel images of Pokemon and their primary and secondary types as labels. To keep the training time within acceptable bounds, we resized all images to be 128×128 pixels. We chose this dataset primarily for its clear structure and simplicity. The images depict only the Pokemon without background, and each image centers on the Pokemon it is showing. Additionally, the variation in poses and color palettes is limited in the images, and each image contains large regions of uniform color. Due to the above reasons and its small size, we deemed this dataset to be the "easy" dataset in our experiments. We trained our models on the first 80% of images and reserved the rest for testing.\n1https://www.kaggle.com/kvpratama/pokemon-images-dataset 2https://www.kaggle.com/rounakbanik/pokemon" }, { "heading": "2.3.2 CELEBA", "text": "A step up from the Pokemon dataset in terms of difficulty is the CelebA faces dataset (Liu et al., 2015). This dataset is a collection of celebrity faces, each with a 40-dimensional attribute vector (attributes such as smiling/not smiling, male/female) and five landmarks (left and right eye, nose, and left and right corners of the mouth). To be able to observe overfitting behavior, we used only the first 10,000 images in the dataset for training and the last 2,000 images for testing. Since the images also contain backgrounds of varying complexity, we argue that this leads to a more complex data distribution. Furthermore, the lighting conditions, quality, and facial orientation can vary significantly in the images. However, some clear structure is still present in this dataset, as the most substantial portion of each image shows a human face. For those reasons, we defined this dataset to have "medium" difficulty. For our purposes, we resized the images to be 96×96 pixels. The original size was 178×218 pixels." }, { "heading": "2.3.3 STL-10", "text": "For our last dataset, we picked STL-10 (Coates et al., 2011). This dataset consists of 96×96 pixel natural images and is divided into three splits: 5,000 training images (10 classes), 8,000 test images (10 classes), and 100,000 unlabeled images. The unlabeled images also include objects that are not covered by the ten classes in the training and test splits. Analogously to CelebA, we used the first 10,000 images from the unlabeled split for training and the last 2,000 for testing the CAE. In the experiments regarding knowledge transfer (see Section 3.2), we used all 8,000 labeled images from the test split of the dataset. As the images in this dataset show many different scenes, from varying viewpoints and under a multitude of lighting conditions, we find samples from this dataset to be the most complex and, therefore, the most difficult of the three."
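Before turning to the experiments, the following PyTorch sketch illustrates the scaling scheme of §2.2 (Eqs. (3) and (4)): stride-2 convolutions quadruple the channel count on the way down, and bilinear up-sampling convolutions (Odena et al., 2016) quarter it on the way up, each followed by instance normalization and ReLU. This is our illustrative reconstruction, not the authors' released code; the kernel sizes are assumptions, and the residual blocks and final output layer are omitted.

```python
import torch.nn as nn

def down_block(in_ch: int) -> nn.Sequential:
    # Eq. (3): halve height/width, quadruple channels -> feature map volume is preserved.
    return nn.Sequential(
        nn.Conv2d(in_ch, 4 * in_ch, kernel_size=3, stride=2, padding=1),
        nn.InstanceNorm2d(4 * in_ch),
        nn.ReLU(inplace=True),
    )

def up_block(in_ch: int) -> nn.Sequential:
    # Eq. (4): bilinear up-sampling followed by a padded convolution,
    # reducing the number of channels by a factor of four.
    return nn.Sequential(
        nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
        nn.Conv2d(in_ch, in_ch // 4, kernel_size=3, stride=1, padding=1),
        nn.InstanceNorm2d(in_ch // 4),
        nn.ReLU(inplace=True),
    )

# Five down blocks take a 3-channel 128x128 image (Pokemon) to a 4x4x3072
# bottleneck of unchanged volume; five up blocks invert the shapes.
encoder = nn.Sequential(*[down_block(3 * 4 ** i) for i in range(5)])
decoder = nn.Sequential(*[up_block(3 * 4 ** (5 - i)) for i in range(5)])
```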
}, { "heading": "3 EXPERIMENTS", "text": "" }, { "heading": "3.1 AUTOENCODER TRAINING", "text": "The first experiment we conducted, and which forms the basis for all subsequent experiments, consists of training of autoencoders with varying bottleneck sizes and observing the dynamics of their training and test losses. This experiment probes the relative importance of latent code size versus its number of channels. Additionally, it was meant to provide insight into how and when our models overfit and if the data complexity (see Section 2.3) plays a discernible role in this. We also tested the widespread hypothesis that autoencoders learn to “copy” the input if there is no bottleneck. For each dataset (as introduced in Section 2.3), we selected three latent code sizes (=height=width) si, i ∈ {1, 2, 3} as\nsi = sinput 2nl−i+1\nwith i ∈ {1, 2, 3}, nl = 5 (5)\nIn this equation, nl = 5 is the number of strided convolutions in the network, and sinput is the height (= width) of the images in the dataset. Throughout the rest of the paper, we mean width and height when we refer to the size of the bottleneck. To obtain latent codes with size s2 (s3), we changed the strides in the last (two) strided convolution layer(s) from two to one. For each size we then fixed four levels of compression cj ∈ {1/64, 1/16, 1/4, 1} and calculated the necessary number of channels ncj according to\nncj = cjs\n2 inputncinput\ns2i with i ∈ {1, 2, 3}, j ∈ {1, 2, 3, 4} (6)\nHere, ncinput is the number of channels in the input image. This way, the autoencoders had the same number of parameters in all layers except the ones directly preceding and following the bottleneck. We used mean squared error (MSE) between reconstruction and input as our loss function. After initializing all models with the same seed, we trained each for 1,000 epochs and computed the test error after every epoch. We repeated this process for two different seeds and used the models from the first seed in further experiments." }, { "heading": "3.2 KNOWLEDGE TRANSFER", "text": "Another goal of our investigation was to estimate the effect of the latent code shape on transferability. Here, our idea was to train a logistic regression on latent codes to predict the corresponding labels for each dataset. Since logistic regression can only learn linear decision boundaries, this approach allows us to catch a glimpse of the sort of knowledge present in the latent code and its linear separability. Furthermore, this serves as another test for the “copying” hypothesis. If the encoder has indeed learned to copy the input, the results of the logistic regression will be the same for the latent codes and the input images. In the first step, we exported all latent codes for the training and testing data from the Pokemon and CelebA datasets. For STL-10, we extracted the latent codes for the test split since we trained on the unlabeled split, where no labels are available. In the case of CelebA, we additionally trained linear regression models to predict the facial landmarks provided in the dataset. For every autoencoder setting, we used fivefold cross-validation to strengthen the reliability of the results. We trained the linear models for 200 epochs (50 epoch in the case of CelebA landmarks) with a weight decay of 0.01 and a learning rate of cj/64 (referring to Section 2.2). Besides, we also trained models directly on the image data for every dataset to serve as a baseline for comparison." 
}, { "heading": "3.3 PAIR-WISE REPRESENTATION SIMILARITY", "text": "In our final experiment, we used the recently published singular vector canonical correlation analysis (SVCCA) (Raghu et al., 2017) technique to gauge the pair-wise similarity of the learned latent codes. SVCCA takes two sets of neuron activations of the shape number of neurons × data points and estimates aligned directions in both spaces that have maximum correlation. First, SVCCA calculates the top singular vectors that explain 99% of the variance using singular value decomposition (SVD). Subsequently, SVCCA finds affine transformations for each set of singular vectors that maximize their alignment in the form of correlation. Lastly, it averages the correlation for each direction in the discovered subspace to produce a scalar similarity score. In convolutional neural networks, this computation can become prohibitively expensive, due to the large size of the feature maps. For such cases, the Raghu et al. (2017) recommend transforming the feature maps using discrete Fourier transformation (DFT). In the publication, the authors show that DFT leaves SVCCA invariant (if the dataset is translation invariant) but results in a block diagonal matrix, which enables exact SVCCA computation by computing SVCCA for each neuron at a time. Additionally, they recommend down-sampling bigger feature maps in Fourier space when comparing them to smaller ones. In this experiment, we investigated the effect of latent code shape on its structure and content." }, { "heading": "4 RESULTS AND DISCUSSION", "text": "Looking at the error curves for the CAEs (Fig. 1), we make several observations:\n1. The total amount of neurons in the bottleneck does not affect training as much as expected. All CAEs converge to a similar training error. We find this unexpected, as the smallest bottlenecks have only 1.56% of total neurons compared to the largest ones. Although the final differences in training error are small, we discover that the size of the bottleneck feature maps has a more substantial effect on training error than the number of channels. The larger the bottleneck width and height, the lower the training error. An interesting outlier presents itself in the plots for the Pokemon dataset. Here, we see that late in the training of the CAE with the 8x8x48 bottleneck training error suddenly spikes. At the same time, the test error drops significantly approximately to the same level as the training error. We verified that this was not due to an unintended interruption in training, by retraining the model with the same seed and obtained an identical result. Currently, it is unclear to us how such a drastic change in model parameters came about at such a late stage in training. Usually, we expect the loss landscape to become smoother the longer we train a model (Goodfellow et al., 2014). Whether this outlier is a fluke or has implications for the loss landscape of CAEs remains to be seen as our understanding of the training dynamics of neural networks deepens.\n2. We observe that bottleneck shape critically affects generalization. Increasing the number of channels in the bottleneck layer seems to improve test error only slightly and not in all cases. The relationship between bottleneck size and test error, on the other hand, is clear\ncut. Larger bottleneck size correlates with a significant decrease in test error. This finding is surprising, given the hypothesis that only the total amount of neurons matters. 
The CAE reconstructions further confirm this hypothesis. We visually inspected the reconstructions of our models (samples are shown in Fig. 2 and in the Appendix) and found that reconstruction quality improves drastically with the size of the bottleneck, yet not so much with the number of channels. As expected from the loss plots, the effect is more pronounced for samples from the test data.\n3. Bottleneck shape also affects overfitting dynamics. We would expect the test score to increase after reaching a minimum, as the CAE overfits the data. Indeed, we observe this behavior in some cases, especially in CAEs with smaller bottleneck sizes or the minimum number of channels. In other cases, predominantly in CAEs with a larger bottleneck size, the test error appears to plateau instead. In the plot for the CelebA dataset, the curves for 12x12x48 and 12x12x192 even appear to decrease slightly over the full training duration. This overfitting behavior implies that CAEs with a larger bottleneck size can be trained longer before overfitting occurs.\n4. CAEs where the total number of neurons in the bottleneck is the same as the number of pixels in the input do not show signs of simply copying images. If the CAEs did indeed copy images, the test error would go to zero, yet we do not observe this in any of the datasets. What is more, these complete CAEs follow the same pattern as the under-complete ones and often converge to similar values. This finding directly contradicts the popular hypothesis about copying CAEs. In essence, it suggests that even complete CAEs learn abstractions from data, and raises the question: What prevents the CAE from simply copying its input? We believe that the answer to this question could potentially lead to new autoencoder designs that exploit this limitation to learn better representations. Hence, we argue that it is an exciting direction for future research. Additionally, the trends we derive from our results suggest that this finding likely extends to over-complete CAEs as well. However, experiments with over-complete CAEs are required to test this intuition.\nFurthermore, the loss curves and reconstruction samples appear to only marginally reflect the notion of dataset difficulty, as defined in Section 2.3. One thing that stands out is the large generalization gap on the Pokemon dataset, which is most likely due to the comparatively tiny dataset size of ≈ 600 training images. Comparing the results for CelebA and STL-10, we find that overall generalization appears to be slightly better for CelebA, which is the less difficult dataset of the two. The test errors on STL-10 exhibit greater variance than on CelebA, although the number of samples and training epochs are equal between the two. This effect also shows itself in the reconstruction quality.
On CelebA, even the CAEs with the smallest bottlenecks manage to produce decent reconstructions on test data, whereas the test sample reconstructions on STL-10 are often unrecognizable for those models. Overall, this effect is weak and warrants a separate investigation of the relationship between data complexity and CAE characteristics, especially in the light of compelling results from curriculum learning research (Bengio et al., 2009).\n[Figure 3: Performance of linear models trained on the latent codes, shown as grids over the bottleneck volume relative to the baseline (1.56%, 6.25%, 25.0%, 100.0%; columns) and the feature map size (rows, including a raw-input baseline), with panels for Pokemon (train/test), CelebA-attributes (train/test), CelebA-regression (train/test), and STL-10 (test).]\nIf we look at the results of our knowledge transfer experiments (Fig. 3), we find further evidence that contradicts the copying autoencoder hypothesis. Although the loss curves and reconstructions already indicate that the CAE does not copy its input, the possibility remains that the encoder distributes the input pixels along the channels but the decoder is unable to reassemble the image. Here, we see that the linear models trained on latent codes perform drastically better than the ones trained on the inputs (marked “baseline” in the figure). The only deviation from this pattern seems to be the prediction of attributes on the CelebA dataset, where the performance is more or less the same for all settings. However, the prediction of landmarks on the same dataset strongly favors latent codes over raw data. As such, it seems implausible to assume that the encoder copied the input to the bottleneck. Overall, we find that knowledge transfer also seems to work better on latent codes with greater size, although the effect is not as distinct as in the loss curves.\nAnother point of interest to us is the discrepancy between models trained on the CAE training and test data from the Pokemon dataset. Oddly, the linear models perform better on the test data, despite the evident overfitting of the CAEs as seen in the reconstructions and loss curves. This discrepancy raises the question of whether overfitting happens mostly in the decoder, while the encoder retains most of its generality. We believe that this question warrants further investigation, especially in light of the recent growth in the popularity of transfer learning methods.\nWe notice that the latent codes from bottlenecks with the same size have higher SVCCA similarity values, as can be seen in Fig. 4 in the blocks on the diagonal. This observation further supports our hypothesis that latent code size, and not the number of channels, dictates the tightness of the CAE bottleneck.
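For reference, here is a simplified NumPy sketch of the SVCCA score described in §3.3, omitting the DFT and down-sampling steps used for convolutional feature maps. It is our reading of the procedure in Raghu et al. (2017), not their reference implementation; in particular, we compute the canonical correlations directly between the retained singular subspaces.

```python
import numpy as np

def svcca_similarity(acts1: np.ndarray, acts2: np.ndarray, keep: float = 0.99) -> float:
    """Simplified SVCCA between two activation matrices of shape
    (num_neurons, num_datapoints): reduce each via SVD to the subspace
    explaining `keep` of the variance, then average the canonical correlations.
    """
    def top_subspace(acts: np.ndarray) -> np.ndarray:
        acts = acts - acts.mean(axis=1, keepdims=True)
        _, s, vt = np.linalg.svd(acts, full_matrices=False)
        frac = np.cumsum(s ** 2) / np.sum(s ** 2)
        k = int(np.searchsorted(frac, keep)) + 1
        return vt[:k]  # orthonormal top singular directions over datapoints

    v1, v2 = top_subspace(acts1), top_subspace(acts2)
    # The canonical correlations between two orthonormal subspaces are the
    # singular values of their cross-product.
    rho = np.linalg.svd(v1 @ v2.T, compute_uv=False)
    return float(rho.mean())
```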
Finally, we wish to point out some observations in the SVCCA similarities as a possible inspiration for future research:\n• Overall, similarity appears to be higher in latent codes from test data than in codes from training data\n• Latent codes from complete CAEs show high similarity to all latent codes from all other CAEs\n• SVCCA similarity with the raw inputs tends to increase with the number of channels" }, { "heading": "5 CONCLUSION", "text": "In this paper, we presented the findings of our in-depth investigation of the CAE bottleneck. The intuitive assumption that the CAE bottleneck is characterized by its total number of neurons could not be confirmed. We demonstrate that the height and width of the feature maps in the bottleneck are what defines its tightness, while the number of channels plays a secondary role. A larger bottleneck size (i.e., height and width) is also critical in achieving better generalization as well as a lower training error. Furthermore, we could not confirm the commonly held belief that a complete CAE (i.e., a CAE with the same number of neurons in the bottleneck as pixels in the input) will learn to copy its input. On the contrary, even complete CAEs appear to follow the same dynamics of bottleneck size, as stated above. In knowledge transfer experiments, we have also shown that CAEs that overfit retain good predictive power in the latent codes, even on unseen samples. These insights are directly transferable to the two main areas of application for CAEs, outlier detection and compression/denoising: In the case of outlier detection, the model should yield a high reconstruction error on out-of-distribution samples. Using smaller bottleneck sizes to limit generalization could prove useful in this scenario. Compression and denoising tasks, on the other hand, seek to preserve image details while reducing file size and discarding unnecessary information, respectively. In this case, a bigger bottleneck size is preferable, as it increases reconstruction quality at the same level of compression.\nOur investigation yielded additional results that spark new research questions. Data complexity, as estimated by human intuition, did not lead to significant differences in the training dynamics of our models. On the flip side, curriculum learning, which rests on a similar notion of difficulty, has been shown to lead to improvements in the training of classifiers and segmentation networks. The link between those two empirical results is still unclear. Another interesting question that arose from our experiments is how overfitting manifests itself in CAEs. Does it occur mainly in the encoder, the decoder, or equally in both?" }, { "heading": "A APPENDIX", "text": "" } ]
2,019
null
SP:4363825dfbd8c5b5a616ea5b0f67a751dcbe7eaf
[ "This paper deals with the global convergence of deep linear ResNets. The author show that under some initialization conditions for the first and the last layer (that are not optimized !) GD and SGD does converge to a global minimum of the min squared error. The closed related work seems to be Bartlett et al. 2019 that study the convergence of GD in the case of linear networks. ", "In this paper, the authors study the convergence of (stochastic) gradient descent in training deep linear residual networks, where linear transformation at input and output layers are fixed and matrices in other layers are trained. They first establish a global convergence of GD/SGD under some conditions on the fixed linear transformations. They they showed that for Gaussian random input and output transformation, global convergence still holds under conditions on the width of networks strictly milder than the literature. Linear convergence rate of SG/SGD are also established." ]
We study the convergence of gradient descent (GD) and stochastic gradient descent (SGD) for training L-hidden-layer linear residual networks (ResNets). We prove that for training deep residual networks with certain linear transformations at the input and output layers, which are fixed throughout training, both GD and SGD with zero initialization on all hidden weights can converge to the global minimum of the training loss. Moreover, when specializing to appropriate Gaussian random linear transformations, GD and SGD provably optimize wide enough deep linear ResNets. Compared with the global convergence result of GD for training standard deep linear networks (Du & Hu, 2019), our condition on the neural network width is sharper by a factor of O(κL), where κ denotes the condition number of the covariance matrix of the training data. We further propose modified identity input and output transformations, and show that a (d + k)-wide neural network is sufficient to guarantee the global convergence of GD/SGD, where d and k are the input and output dimensions, respectively.
[ { "affiliations": [], "name": "DEEP LINEAR RESNETS" }, { "affiliations": [], "name": "Difan Zou" }, { "affiliations": [], "name": "Philip M. Long" } ]
[ { "authors": [ "Zeyuan Allen-Zhu", "Yuanzhi Li", "Zhao Song" ], "title": "A convergence theory for deep learning via overparameterization", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Alexandr Andoni", "Rina Panigrahy", "Gregory Valiant", "Li Zhang" ], "title": "Learning polynomials with neural networks", "venue": "In International Conference on Machine Learning,", "year": 1908 }, { "authors": [ "Sanjeev Arora", "Nadav Cohen", "Elad E Hazan" ], "title": "On the optimization of deep networks: Implicit acceleration by overparameterization", "venue": "In 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Sanjeev Arora", "Nadav Cohen", "Noah Golowich", "Wei Hu" ], "title": "A convergence analysis of gradient descent for deep linear neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Sanjeev Arora", "Simon S Du", "Wei Hu", "Zhiyuan Li", "Ruslan Salakhutdinov", "Ruosong Wang" ], "title": "On exact computation with an infinitely wide neural net", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Peter L Bartlett", "David P Helmbold", "Philip M Long" ], "title": "Gradient descent with identity initialization efficiently learns positive-definite linear transformations by deep residual networks", "venue": "Neural computation,", "year": 2019 }, { "authors": [ "Alon Brutzkus", "Amir Globerson", "Eran Malach", "Shai Shalev-Shwartz" ], "title": "SGD learns overparameterized networks that provably generalize on linearly separable data", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Lenaic Chizat", "Edouard Oyallon", "Francis Bach" ], "title": "On lazy training in differentiable programming", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Amit Daniely" ], "title": "SGD learns the conjugate kernel class of the network", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Simon Du", "Wei Hu" ], "title": "Width provably matters in optimization for deep linear neural networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Simon Du", "Jason Lee", "Haochuan Li", "Liwei Wang", "Xiyu Zhai" ], "title": "Gradient descent finds global minima of deep neural networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Simon S Du", "Wei Hu", "Jason D Lee" ], "title": "Algorithmic regularization in learning deep homogeneous models: Layers are automatically balanced", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Simon S Du", "Jason D Lee", "Yuandong Tian" ], "title": "When is a convolutional filter easy to learn", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Simon S Du", "Xiyu Zhai", "Barnabas Poczos", "Aarti Singh" ], "title": "Gradient descent provably optimizes over-parameterized neural networks", "venue": "arXiv preprint arXiv:1810.02054,", "year": 2018 }, { "authors": [ "Yuguang Fang", "Kenneth A Loparo", "Xiangbo Feng" ], "title": "Inequalities for the trace of matrix product", "venue": "IEEE Transactions on Automatic Control,", "year": 1994 }, { "authors": [ "Daniel C Freeman", "Joan Bruna" ], "title": "Topology and geometry of half-rectified network optimization", 
"venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Spencer Frei", "Yuan Cao", "Quanquan Gu" ], "title": "Algorithm-dependent generalization bounds for overparameterized deep residual networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Suriya Gunasekar", "Jason D Lee", "Daniel Soudry", "Nati Srebro" ], "title": "Implicit bias of gradient descent on linear convolutional networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Moritz Hardt", "Tengyu Ma" ], "title": "Identity matters in deep learning", "venue": "arXiv preprint arXiv:1611.04231,", "year": 2016 }, { "authors": [ "Wei Hu", "Lechao Xiao", "Jeffrey Pennington" ], "title": "Provable benefit of orthogonal initialization in optimizing deep linear networks", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Arthur Jacot", "Franck Gabriel", "Clément Hongler" ], "title": "Neural tangent kernel: Convergence and generalization in neural networks", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Ziwei Ji", "Matus Telgarsky" ], "title": "Gradient descent aligns the layers of deep linear networks", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Kenji Kawaguchi" ], "title": "Deep learning without poor local minima", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Kenji Kawaguchi", "Jiaoyang Huang" ], "title": "Gradient descent finds global minima for generalizable deep neural networks of practical sizes", "venue": null, "year": 1908 }, { "authors": [ "Yuanzhi Li", "Yingyu Liang" ], "title": "Learning overparameterized neural networks via stochastic gradient descent on structured data", "venue": "In Proceedings of the 32nd International Conference on Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yuanzhi Li", "Yang Yuan" ], "title": "Convergence analysis of two-layer neural networks with ReLU activation", "venue": "In Proceedings of the 31st International Conference on Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Haihao Lu", "Kenji Kawaguchi" ], "title": "Depth creates no bad local minima", "venue": "arXiv preprint arXiv:1702.08580,", "year": 2017 }, { "authors": [ "Samet Oymak", "Mahdi Soltanolkotabi" ], "title": "Towards moderate overparameterization: global convergence guarantees for training shallow neural networks", "venue": "arXiv preprint arXiv:1902.04674,", "year": 2019 }, { "authors": [ "Ohad Shamir" ], "title": "Exponential convergence time of gradient descent for one-dimensional deep linear neural networks", "venue": "arXiv preprint arXiv:1809.08587,", "year": 2018 }, { "authors": [ "Lili Su", "Pengkun Yang" ], "title": "On learning over-parameterized neural networks: A functional approximation prospective", "venue": "arXiv preprint arXiv:1905.10826,", "year": 2019 }, { "authors": [ "Yuandong Tian" ], "title": "An analytical formula of population gradient for two-layered ReLU network and its applications in convergence and critical point analysis", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Roman Vershynin" ], "title": "Introduction to the non-asymptotic analysis of random matrices", "venue": "arXiv preprint arXiv:1011.3027,", "year": 2010 }, { "authors": [ "Lei Wu", "Qingcan Wang", "Chao Ma" 
], "title": "Global convergence of gradient descent for deep linear residual networks", "venue": "arXiv preprint arXiv:1911.00645,", "year": 2019 }, { "authors": [ "Chulhee Yun", "Suvrit Sra", "Ali Jadbabaie" ], "title": "Global optimality conditions for deep neural networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Huishuai Zhang", "Da Yu", "Wei Chen", "Tie-Yan Liu" ], "title": "Training over-parameterized deep resnet is almost as easy as training a two-layer network", "venue": null, "year": 1903 }, { "authors": [ "Xiao Zhang", "Yaodong Yu", "Lingxiao Wang", "Quanquan Gu" ], "title": "Learning one-hidden-layer ReLU networks via gradient descent", "venue": "arXiv preprint arXiv:1806.07808,", "year": 2018 }, { "authors": [ "Kai Zhong", "Zhao Song", "Prateek Jain", "Peter L Bartlett", "Inderjit S Dhillon" ], "title": "Recovery guarantees for one-hidden-layer neural networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Yi Zhou", "Yingbin Liang" ], "title": "Critical points of linear neural networks: Analytical forms and landscape properties", "venue": null, "year": 2018 }, { "authors": [ "Difan Zou", "Quanquan Gu" ], "title": "An improved analysis of training over-parameterized deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Difan Zou", "Yuan Cao", "Dongruo Zhou", "Quanquan Gu" ], "title": "Stochastic gradient descent optimizes over-parameterized deep ReLU networks", "venue": "Machine Learning Journal,", "year": 2019 }, { "authors": [], "title": "This completes the proof of the bounds on the singular values of A and B. Bounds on the initial training loss: The proof in this part is similar to the proof of Proposition 6.5 in Du & Hu (2019)", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Despite the remarkable power of deep neural networks (DNNs) trained using stochastic gradient descent (SGD) in many machine learning applications, theoretical understanding of the properties of this algorithm, or even plain gradient descent (GD), remains limited. Many key properties of the learning process for such systems are also present in the idealized case of deep linear networks. For example, (a) the objective function is not convex; (b) errors back-propagate; and (c) there is potential for exploding and vanishing gradients. In addition to enabling study of systems with these properties in a relatively simple setting, analysis of deep linear networks also facilitates the scientific understanding of deep learning because using linear networks can control for the effect of architecture choices on the expressiveness of networks (Arora et al., 2018; Du & Hu, 2019). For these reasons, deep linear networks have received extensive attention in recent years.\nOne important line of theoretical investigation of deep linear networks concerns optimization landscape analysis (Kawaguchi, 2016; Hardt & Ma, 2016; Freeman & Bruna, 2016; Lu & Kawaguchi, 2017; Yun et al., 2018; Zhou & Liang, 2018), where major findings include that any critical point of a deep linear network with square loss function is either a global minimum or a saddle point, and identifying conditions on the weight matrices that exclude saddle points. Beyond landscape analysis, another research direction aims to establish convergence guarantees for optimization algorithms (e.g. GD, SGD) for training deep linear networks. Arora et al. (2018) studied the trajectory of gradient flow and showed that depth can help accelerate the optimization of deep linear networks. Ji & Telgarsky (2019); Gunasekar et al. (2018) investigated the implicit bias of GD for training deep linear networks and deep linear convolutional networks respectively. More recently, Bartlett et al. (2019); Arora et al. (2019a); Shamir (2018); Du & Hu (2019) analyzed the optimization trajectory of\nGD for training deep linear networks and proved global convergence rates under certain assumptions on the training data, initialization, and neural network structure.\nInspired by the great empirical success of residual networks (ResNets), Hardt & Ma (2016) considered identity parameterizations in deep linear networks, i.e., parameterizing each layer’s weight matrix as I`W, which leads to the so-called deep linear ResNets. In particular, Hardt & Ma (2016) established the existence of small norm solutions for deep residual networks with sufficiently large depth L, and proved that there are no critical points other than the global minimum when the maximum spectral norm among all weight matrices is smaller than Op1{Lq. Motivated by this intriguing finding, Bartlett et al. (2019) studied the convergence rate of GD for training deep linear networks with identity initialization, which is equivalent to zero initialization in deep linear ResNets. They assumed whitened data and showed that GD can converge to the global minimum if (i) the training loss at the initialization is very close to optimal or (ii) the regression matrix Φ is symmetric and positive definite. (In fact, they proved that, when Φ is symmetric and has negative eigenvalues, GD for linear ResNets with zero-initialization does not converge.) Arora et al. 
(2019a) showed that GD converges under substantially weaker conditions, which can be satisfied by random initialization schemes. The convergence theory of stochastic gradient descent for training deep linear ResNets is largely missing; it remains unclear under which conditions SGD can be guaranteed to find the global minimum.
In this paper, we establish the global convergence of both GD and SGD for training deep linear ResNets without any condition on the training data. More specifically, we consider the training of $L$-hidden-layer deep linear ResNets with fixed linear transformations at the input and output layers. We prove that under certain conditions on these input and output linear transformations, GD and SGD converge to the global minimum of the training loss. Moreover, when specializing to appropriate Gaussian random linear transformations, we show that, as long as the neural network is wide enough, both GD and SGD with zero initialization on all hidden weights find the global minimum. There are two main ingredients of our proof: (i) establishing restricted gradient bounds and a smoothness property; and (ii) proving that these properties hold along the optimization trajectory and lead to global convergence. We point out that the second aspect is challenging, especially for SGD, due to the uncertainty of its optimization trajectory caused by stochastic gradients. We summarize our main contributions as follows:
• We prove the global convergence of GD and SGD for training deep linear ResNets. Specifically, we derive a generic condition on the input and output linear transformations under which both GD and SGD with zero initialization on all hidden weights find global minima. Based on this condition, one can design a variety of input and output transformations for training deep linear ResNets.
• When applying appropriate Gaussian random linear transformations, we show that as long as the neural network width satisfies $m = \Omega(kr\kappa^2)$, with high probability, GD converges to the global minimum up to an $\epsilon$-error within $O(\kappa\log(1/\epsilon))$ iterations, where $k$ and $r$ are the output dimension and the rank of the training data matrix $X$ respectively, and $\kappa = \|X\|_2^2/\sigma_r^2(X)$ denotes the condition number of the covariance matrix of the training data. Compared with previous convergence results for training deep linear networks from Du & Hu (2019), our condition on the neural network width is independent of the network depth $L$, and is strictly better by a factor of $O(L\kappa)$.
• Using the same Gaussian random linear transformations, we also establish a convergence guarantee of SGD for training deep linear ResNets. We show that if the neural network width satisfies $m = \tilde\Omega(kr\kappa^2\log^2(1/\epsilon)\cdot n^2/B^2)$, with constant probability, SGD converges to the global minimum up to an $\epsilon$-error within $\tilde O(\kappa^2\epsilon^{-1}\log(1/\epsilon)\cdot n/B)$ iterations, where $n$ is the training sample size and $B$ is the minibatch size of the stochastic gradient. This is the first global convergence rate of SGD for training deep linear networks. Moreover, when the global minimum of the training loss is $0$, we prove that SGD further achieves a linear rate of global convergence, and the condition on the neural network width does not depend on the target error $\epsilon$.
As alluded to above, we analyze networks with $d$ inputs, $k$ outputs, and $m \ge \max\{d, k\}$ nodes in each hidden layer. Linear transformations that are fixed throughout training map the inputs to the first hidden layer, and the last hidden layer to the outputs.
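To fix ideas, the following minimal numpy sketch instantiates this architecture; the dimensions and the Gaussian scalings of $A$ and $B$ are hypothetical illustrations (anticipating the random transformations analyzed in Section 3) rather than choices mandated by the analysis. It also makes explicit that, under the zero initialization used throughout, every residual block $(I + W_l)$ is the identity, so the network initially computes $BAx$.
```python
import numpy as np

# Shapes only: a width-m deep linear ResNet with fixed input/output maps.
# All sizes are hypothetical; the paper requires m >= max(d, k).
d, k, m, L = 10, 5, 64, 8
rng = np.random.default_rng(0)
A = rng.normal(0.0, 1.0 / np.sqrt(m), (m, d))  # fixed input transformation
B = rng.normal(0.0, 1.0 / np.sqrt(k), (k, m))  # fixed output transformation
W = [np.zeros((m, m)) for _ in range(L)]       # trainable hidden weights

x = rng.standard_normal(d)
h = A @ x
for Wl in W:
    h = h + Wl @ h                             # apply (I + W_l)
out = B @ h
assert np.allclose(out, B @ (A @ x))           # identity blocks at zero init
```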
We prove that our bounds hold with high probability when these input and output transformations are randomly generated by Gaussian distributions. If, instead, the input transformation simply copies the inputs onto the first $d$ components of the first hidden layer, and the output transformation takes the first $k$ components of the last hidden layer, then our analysis does not provide a guarantee. There is a good reason for this: a slight modification of a lower bound argument from Bartlett et al. (2019) demonstrates that GD may fail to converge in this case. However, we describe a similarly simple, deterministic choice of input and output transformations such that wide enough networks always converge. The resulting condition on the network width is weaker than that for Gaussian random transformations, and thus improves on the corresponding convergence guarantee for linear networks, which, in addition to requiring wider networks, only holds with high probability for random transformations." }, { "heading": "1.1 ADDITIONAL RELATED WORK", "text": "In addition to the work discussed above, a large body of work on the optimization of neural networks with nonlinear activation functions has emerged. We briefly review it in this subsection.
It is widely believed that the training loss landscape of nonlinear neural networks is highly nonconvex and nonsmooth (e.g., for neural networks with ReLU/LeakyReLU activations), so it is fundamentally difficult to characterize the optimization trajectory and convergence performance of GD and SGD. Some early work (Andoni et al., 2014; Daniely, 2017) showed that sufficiently wide (polynomial in the sample size $n$) neural networks trained by GD/SGD can learn a class of continuous functions (e.g., polynomial functions) in polynomial time. However, those works consider training only some of the neural network weights rather than all of them (e.g., the input and output layers) 1. In addition, a series of papers investigated the convergence of gradient descent for training shallow networks (typically two-layer networks) under certain assumptions on the training data and initialization scheme (Tian, 2017; Du et al., 2018b; Brutzkus et al., 2018; Zhong et al., 2017; Li & Yuan, 2017; Zhang et al., 2018). However, the assumptions made in these works are rather strong and not consistent with practice. For example, Tian (2017); Du et al. (2018b); Zhong et al. (2017); Li & Yuan (2017); Zhang et al. (2018) assumed that each training label is generated by a teacher network with the same architecture as the learned network. Brutzkus et al. (2018) assumed that the training data is linearly separable. Li & Liang (2018) addressed this drawback; they proved that for a two-layer ReLU network with cross-entropy loss, as long as the network is sufficiently wide, under mild assumptions on the training data, SGD with the commonly used Gaussian random initialization can achieve nearly zero expected error. Du et al. (2018c) proved similar results for GD for training two-layer ReLU networks with square loss. Beyond shallow neural networks, Allen-Zhu et al. (2019); Du et al. (2019); Zou et al. (2019) generalized the global convergence results to multi-layer over-parameterized ReLU networks. Chizat et al. (2019) showed that training over-parameterized neural networks actually belongs to a so-called “lazy training” regime, in which the model behaves like its linearization around the initialization.
Furthermore, the parameter scaling is more essential than over-parameterization in keeping the model within the “lazy training” regime. Along this line of research, several follow-up works have been conducted. Oymak & Soltanolkotabi (2019); Zou & Gu (2019); Su & Yang (2019); Kawaguchi & Huang (2019) improved the convergence rate and the over-parameterization condition for both shallow and deep networks. Arora et al. (2019b) showed that training a sufficiently wide deep neural network is almost equivalent to kernel regression with the neural tangent kernel (NTK), proposed in Jacot et al. (2018). Allen-Zhu et al. (2019); Du et al. (2019); Zhang et al. (2019) proved global convergence for training deep ReLU ResNets. Frei et al. (2019) proved the convergence of GD for training deep ReLU ResNets under an over-parameterization condition that is only logarithmic in the depth of the network, which partially explains why deep residual networks are preferable to fully connected ones. However, all the results in Allen-Zhu et al. (2019); Du et al. (2019); Zhang et al. (2019); Frei et al. (2019) require a very stringent condition on the network width, which typically has a high-degree polynomial dependence on the training sample size $n$. Besides, the results in Allen-Zhu et al. (2019); Zhang et al. (2019) also require that all data points are separated by a positive distance and have unit norm. As shown in Du & Hu (2019), and as will be proved in this paper, for deep linear (residual) networks there is no assumption on the training data, and the condition on the network width is significantly milder: it is independent of the sample size $n$. While achieving a stronger result for linear networks than for nonlinear ones is not surprising, we believe that our analysis, conducted in the idealized deep linear case, can provide useful insights for understanding optimization in the nonlinear case.
1In Daniely (2017), the weight changes in all hidden layers make a negligible contribution to the final output and thus can be approximately treated as only training the output layer.
Two concurrent works analyze gradient descent applied to deep linear (residual) networks (Hu et al., 2020; Wu et al., 2019). Hu et al. (2020) consider deep linear networks with orthogonal initialization, and Wu et al. (2019) consider zero initialization on the last layer and identity initialization for the rest of the layers, which are similar to our setting. However, there are several differences between their work and ours. One major difference is that Hu et al. (2020) and Wu et al. (2019) only prove global convergence for GD, while our results cover both GD and SGD. In addition, Hu et al. (2020) focuses on proving the global convergence of GD for sufficiently wide networks, while we provide a generic condition on the input and output linear transformations for ensuring global convergence. Wu et al. (2019) assumes whitened data and proves an $O(L^3\log(1/\epsilon))$ bound on the number of iterations required for GD to converge, whereas we establish an $O(\log(1/\epsilon))$ bound.2" }, { "heading": "1.2 NOTATION.", "text": "We use lower case, lower case bold face, and upper case bold face letters to denote scalars, vectors and matrices, respectively. For a positive integer $k$, we denote the set $\{1, \dots, k\}$ by $[k]$. Given a vector $x$, we use $\|x\|_2$ to denote its $\ell_2$ norm. We use $N(\mu, \sigma^2)$ to denote the Gaussian distribution with mean $\mu$ and variance $\sigma^2$.
Given a matrix $X$, we denote by $\|X\|_F$, $\|X\|_2$ and $\|X\|_{2,\infty}$ its Frobenius norm, spectral norm and $\ell_{2,\infty}$ norm (the maximum $\ell_2$ norm over its columns), respectively. In addition, we denote by $\sigma_{\min}(X)$, $\sigma_{\max}(X)$ and $\sigma_r(X)$ the smallest, largest and $r$-th largest singular values of $X$, respectively. For a square matrix $A$, we denote by $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$ the smallest and largest eigenvalues of $A$, respectively. For two sequences $\{a_k\}_{k\ge 0}$ and $\{b_k\}_{k\ge 0}$, we say $a_k = O(b_k)$ if $a_k \le C_1 b_k$ for some absolute constant $C_1$, and $a_k = \Omega(b_k)$ if $a_k \ge C_2 b_k$ for some absolute constant $C_2$. Except for the target error $\epsilon$, we use $\tilde O(\cdot)$ and $\tilde\Omega(\cdot)$ to hide logarithmic factors in $O(\cdot)$ and $\Omega(\cdot)$, respectively." }, { "heading": "2 PROBLEM SETUP", "text": "Model. In this work, we consider deep linear ResNets defined as follows: $f_W(x) = B(I + W_L)\cdots(I + W_1)Ax$, where $x \in R^d$ is the input, $f_W(x) \in R^k$ is the corresponding output, $A \in R^{m\times d}$ and $B \in R^{k\times m}$ denote the weight matrices of the input and output layers respectively, and $W_1, \dots, W_L \in R^{m\times m}$ denote the weight matrices of all hidden layers. The formulation of ResNets in our paper differs from that in Hardt & Ma (2016); Bartlett et al. (2019), where the hidden layers have the same width as the input and output layers. In our formulation, we allow the hidden layers to be wider by choosing the dimensions of $A$ and $B$ appropriately.
Loss Function. Let $\{(x_i, y_i)\}_{i=1,\dots,n}$ be the training dataset, $X = (x_1, \dots, x_n) \in R^{d\times n}$ be the input data matrix and $Y = (y_1, \dots, y_n) \in R^{k\times n}$ be the corresponding output label matrix. We assume the data matrix $X$ is of rank $r$, where $r$ can be smaller than $d$. Let $W = \{W_1, \dots, W_L\}$ be the collection of the weight matrices of all hidden layers. For an example $(x, y)$, we consider the square loss defined by $\ell(W; x, y) = \frac{1}{2}\|f_W(x) - y\|_2^2$. The training loss over the training dataset then takes the form $L(W) := \sum_{i=1}^n \ell(W; x_i, y_i) = \frac{1}{2}\|B(I + W_L)\cdots(I + W_1)AX - Y\|_F^2$.
Algorithm. Similar to Allen-Zhu et al. (2019); Zhang et al. (2019), we consider algorithms that only train the hidden-layer weights $W$ while leaving the input and output weights $A$ and $B$ unchanged throughout training. For the hidden weights, we follow an idea similar to that in Bartlett et al. (2019) and adopt zero initialization (which is equivalent to identity initialization for a standard linear network). We also point out that at initialization, all the hidden layers automatically satisfy the so-called balancedness condition (Arora et al., 2018; 2019a; Du et al., 2018a). The optimization algorithms, including GD and SGD, are summarized in Algorithm 1.
2Considering whitened data immediately gives $\kappa = 1$.
Algorithm 1 (Stochastic) gradient descent with zero initialization
1: input: training data $\{x_i, y_i\}_{i\in[n]}$, step size $\eta$, total number of iterations $T$, minibatch size $B$, input and output weight matrices $A$ and $B$.
2: initialization: for all $l \in [L]$, each entry of the weight matrix $W_l^{(0)}$ is initialized as $0$.
Gradient Descent
3: for $t = 0, \dots, T - 1$ do
4:   $W_l^{(t+1)} = W_l^{(t)} - \eta\nabla_{W_l} L(W^{(t)})$ for all $l \in [L]$
5: end for
6: output: $W^{(T)}$
Stochastic Gradient Descent
7: for $t = 0, \dots, T - 1$ do
8:   uniformly sample a subset $B^{(t)}$ of size $B$ from the training data without replacement
9:   for all $l \in [L]$, compute the stochastic gradient $G_l^{(t)} = \frac{n}{B}\sum_{i\in B^{(t)}} \nabla_{W_l}\ell(W^{(t)}; x_i, y_i)$
10:  for all $l \in [L]$, $W_l^{(t+1)} = W_l^{(t)} - \eta G_l^{(t)}$
11: end for
12: output: $\{W^{(t)}\}_{t=0,\dots,T}$
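As a concrete illustration of Algorithm 1, the numpy sketch below implements its full-batch GD branch, with the gradient of $L(W)$ computed analytically via the chain rule; the problem sizes, data, and step size are hypothetical placeholders, not the constants prescribed by the theory in Section 3. The SGD branch would differ only in replacing $X$ and $Y$ by a freshly sampled minibatch (rescaled by $n/B$) at each step.
```python
import numpy as np

# Minimal sketch of the full-batch GD branch of Algorithm 1; hypothetical
# problem sizes and step size, not the constants from the theorems below.
rng = np.random.default_rng(0)
d, k, m, L, n = 10, 10, 40, 10, 200
X = rng.standard_normal((d, n))
Y = -X + 0.1 * rng.standard_normal((k, n))
A = rng.normal(0.0, 1.0 / np.sqrt(m), (m, d))   # fixed, entries ~ N(0, 1/m)
B = rng.normal(0.0, 1.0 / np.sqrt(k), (k, m))   # fixed, entries ~ N(0, 1/k)
W = [np.zeros((m, m)) for _ in range(L)]        # zero init (Algorithm 1, line 2)

def forward(W):
    # activations F[l] = (I + W_l) ... (I + W_1) A X
    F = [A @ X]
    for Wl in W:
        F.append(F[-1] + Wl @ F[-1])
    return F

def loss_and_grads(W):
    F = forward(W)
    R = B @ F[-1] - Y                           # residual of the square loss
    loss = 0.5 * np.linalg.norm(R) ** 2
    # grad wrt W_l: [B(I+W_L)...(I+W_{l+1})]^T R [(I+W_{l-1})...(I+W_1)AX]^T
    back = B.T @ R
    grads = [None] * L
    for l in range(L - 1, -1, -1):
        grads[l] = back @ F[l].T
        back = back + W[l].T @ back             # propagate through (I + W_l)^T
    return loss, grads

eta = 1e-5                                      # hypothetical; kept small for stability
for t in range(5000):
    loss, grads = loss_and_grads(W)
    W = [Wl - eta * g for Wl, g in zip(W, grads)]
print(loss)                                     # decreases toward L(W*)
```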
" }, { "heading": "3 MAIN THEORY", "text": "It is clear that the expressive power of deep linear ResNets is identical to that of a simple linear model, which implies that the global minimum of the training loss for deep linear ResNets cannot be smaller than that of the linear model. Therefore, our focus is to show that GD/SGD can converge to a point $W^*$ with $L(W^*) = \min_{\Theta\in R^{k\times d}} \frac{1}{2}\|\Theta X - Y\|_F^2$, which is exactly the global minimum of the linear regression problem. In what follows, we show that with appropriate input and output transformations, both GD and SGD converge to the global minimum." }, { "heading": "3.1 CONVERGENCE GUARANTEE OF GRADIENT DESCENT", "text": "The following theorem establishes the global convergence of GD for training deep linear ResNets. Theorem 3.1. There are absolute constants $C$ and $C_1$ such that, if the input and output weight matrices satisfy $\frac{\sigma_{\min}^2(A)\sigma_{\min}^2(B)}{\|A\|_2\|B\|_2} \ge C\,\frac{\|X\|_2 (L(W^{(0)}) - L(W^*))^{1/2}}{\sigma_r^2(X)}$ and the step size satisfies $\eta \le C_1 \cdot \frac{1}{L\|A\|_2\|B\|_2\|X\|_2 (\sqrt{L(W^{(0)})} + \|A\|_2\|B\|_2\|X\|_2)}$, then for all iterates of GD in Algorithm 1 it holds that $L(W^{(t)}) - L(W^*) \le \big(1 - \frac{\eta L \sigma_{\min}^2(A)\sigma_{\min}^2(B)\sigma_r^2(X)}{e}\big)^t \cdot (L(W^{(0)}) - L(W^*))$.
Remark 3.2. Theorem 3.1 implies the convergence result in Bartlett et al. (2019). Specifically, to recover the setting considered in Bartlett et al. (2019), we choose $m = d = k$, $A = I$, $B = I$, $L(W^*) = 0$ and $XX^\top = I$. Then it can be easily observed that the condition in Theorem 3.1 becomes $L(W^{(0)}) - L(W^*) \le C^{-2}$. This implies that global convergence can be established as long as $L(W^{(0)}) - L(W^*)$ is smaller than some constant, which is equivalent to the condition proved in Bartlett et al. (2019).
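Since every quantity entering Theorem 3.1 is computable from $(A, B, X, Y)$, the condition can be inspected numerically. The sketch below does this for a random instance; all sizes are hypothetical, and because the absolute constant $C$ is unspecified, it reports the two sides of the condition rather than a verdict.
```python
import numpy as np

# Evaluate the two sides of the condition in Theorem 3.1 for one random
# instance. Sizes are hypothetical; C is an unspecified absolute constant.
rng = np.random.default_rng(1)
d, k, m, n = 10, 10, 200, 1000
X = rng.standard_normal((d, n))
Y = -X + 0.1 * rng.standard_normal((k, n))
A = rng.normal(0.0, 1.0 / np.sqrt(m), (m, d))
B = rng.normal(0.0, 1.0 / np.sqrt(k), (k, m))

r = np.linalg.matrix_rank(X)
sigma_r = np.linalg.svd(X, compute_uv=False)[r - 1]      # sigma_r(X)
loss0 = 0.5 * np.linalg.norm(B @ A @ X - Y) ** 2         # L(W^(0)) at zero init
theta = Y @ np.linalg.pinv(X)                            # optimal linear map
loss_star = 0.5 * np.linalg.norm(theta @ X - Y) ** 2     # L(W*)

sA = np.linalg.svd(A, compute_uv=False)
sB = np.linalg.svd(B, compute_uv=False)
lhs = sA[-1] ** 2 * sB[-1] ** 2 / (sA[0] * sB[0])
rhs_over_C = np.linalg.norm(X, 2) * np.sqrt(loss0 - loss_star) / sigma_r ** 2
kappa = np.linalg.norm(X, 2) ** 2 / sigma_r ** 2         # condition number
print(lhs, rhs_over_C, kappa)   # the condition reads: lhs >= C * rhs_over_C
```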
Then using Gaussian random input and output transformations in Proposition 3.3 with α “ β “ 1, if the neural network width satisfies m “ Ωpmaxtkrκ2 logpn{δq, k ` d` logp1{δquq then, with probability at least 1´ δ, the output of GD in Algorithm 1 achieves training loss at most LpW˚q ` within T “ O ` κ logp1{ q ˘\niterations, where κ “ }X}22{σ2rpXq denotes the condition number of the covariance matrix of training data. Remark 3.5. For standard deep linear networks, Du & Hu (2019) proved that GD with Gaussian random initialization can converge to a -suboptimal global minima within T “ Ωpκ logp1{ qq iterations if the neural network width satisfies m “ OpLkrκ3 ` dq. In stark contrast, training deep linear ResNets achieves the same convergence rate as training deep linear networks and linear regression, while the condition on the neural network width is strictly milder than that for training standard deep linear networks by a factor of OpLκq. This improvement may in part validate the empirical advantage of deep ResNets." }, { "heading": "3.2 CONVERGENCE GUARANTEE OF STOCHASTIC GRADIENT DESCENT", "text": "The following theorem establishes the global convergence of SGD for training deep linear ResNets. Theorem 3.6. There are absolute constants C, C1 and C2, such for any 0 ă δ ď 1{6 and ą 0, if the input and output weight matrices satisfy\nσ2minpAqσ2minpBq }A}2}B}2 ě C ¨ n}X}2 ¨ logpLpW p0qq{ q Bσ2rpXq ¨ b LpWp0qq,\nand the step size and maximum iteration number are set as\nη ď C1 ¨ Bσ2minpAqσ2minpBqσ2rpXq\nLn}A}42}B}42}X}22 ¨min\n\"\n}X}22,8LpW˚q ,\nB\nn}X}22 ¨ logpT {δq logpLpWp0qq{ q\n*\n,\nT “ C2 ¨ 1\nηLσ2minpAqσ2minpBqσ2rpXq ¨ log\nˆ LpWp0qq ´ LpW˚q ˙\n,\nthen with probability3 at least 1{2 (with respect to the random choices of mini batches), SGD in Algorithm 1 can find a network that achieves training loss at most LpW˚q ` .\nBy combining Theorem 3.6 and Proposition 3.3, we can show that as long as the neural network is wide enough, SGD can achieve global convergence. Specifically, we provide the condition on the neural network width and the iteration complexity of SGD in the following corollary. Corollary 3.7. Suppose }Y}F “ Op}X}F q. Then using Gaussian random input and output transformations in Proposition 3.3 with α “ β “ 1, for sufficiently small ą 0, if the neural network width satisfies m “ rΩ ` krκ2 log2p1{ q ¨n2{B2`d ˘\n, with constant probability, SGD in Algorithm 1 can find a point that achieves training loss at most LpW˚q` within T “ rO ` κ2 ´1 logp1{ q ¨n{B ˘ iterations. 3One can boost this probability to 1´ δ by independently running logp1{δq copies of SGD in Algorithm 1.\nFrom Corollaries 3.7 and 3.4, we can see that compared with the convergence guarantee of GD, the condition on the neural network width for SGD is worse by a factor of rOpn2 log2p1{ q{B2q and the iteration complexity is higher by a factor of rOpκ ´1 ¨ n{Bq. This is because for SGD, its trajectory length contains high uncertainty, and thus we need stronger conditions on the neural network in order to fully control it.\nWe further consider the special case that LpW˚q “ 0, which implies that there exists a ground truth matrix Φ such that for each training data point pxi,yiq we have yi “ Φxi. In this case, we have the following theorem, which shows that SGD can attain a linear rate to converge to the global minimum. Theorem 3.8. 
Theorem 3.8. There are absolute constants $C$ and $C_1$ such that, for any $0 < \delta < 1$, if the input and output weight matrices satisfy $\frac{\sigma_{\min}^2(A)\sigma_{\min}^2(B)}{\|A\|_2\|B\|_2} \ge C \cdot \frac{n\|X\|_2}{B\sigma_r^2(X)}\sqrt{L(W^{(0)})}$, and the step size is set as $\eta \le C_1 \cdot \frac{B^2\sigma_{\min}^2(A)\sigma_{\min}^2(B)\sigma_r^2(X)}{Ln^2\|A\|_2^4\|B\|_2^4\|X\|_2^4\log(T/\delta)}$ for some maximum iteration number $T$, then with probability at least $1 - \delta$ the following holds for all $t \le T$: $L(W^{(t)}) \le 2L(W^{(0)}) \cdot \big(1 - \frac{\eta L\sigma_{\min}^2(A)\sigma_{\min}^2(B)\sigma_r^2(X)}{e}\big)^t$.
Similarly, using the Gaussian random transformations in Proposition 3.3, we show in the following corollary that SGD achieves global convergence for wide enough deep linear ResNets. Corollary 3.9. Suppose $\|Y\|_F = O(\|X\|_F)$. Then, using the Gaussian random transformations in Proposition 3.3 with $\alpha = \beta = 1$, for any $\epsilon \le \tilde O(B\|X\|_{2,\infty}^2/(n\|X\|_2^2))$, if the neural network width satisfies $m = \tilde\Omega(kr\kappa^2\cdot n^2/B^2 + d)$, then with high probability, SGD in Algorithm 1 finds a network that achieves training loss at most $\epsilon$ within $T = \tilde O(\kappa^2\log(1/\epsilon)\cdot n^2/B^2)$ iterations." }, { "heading": "4 DISCUSSION ON DIFFERENT INPUT AND OUTPUT LINEAR TRANSFORMATIONS", "text": "In this section, we discuss several different choices of linear transformations at the input and output layers and their effects on the convergence performance. For simplicity, we only consider the condition for GD.
As we stated in Subsection 3.1, GD converges if the input and output weight matrices $A$ and $B$ satisfy $\frac{\sigma_{\min}^2(A)\sigma_{\min}^2(B)}{\|A\|_2\|B\|_2} \ge C \cdot \frac{\|X\|_2}{\sigma_r^2(X)}(L(W^{(0)}) - L(W^*))^{1/2}$. (4.1) It is then interesting to figure out what kinds of choices of $A$ and $B$ satisfy this condition. In Proposition 3.3, we showed that Gaussian random transformations (i.e., each entry of $A$ and $B$ generated from a certain Gaussian distribution) satisfy this condition with high probability, so that GD converges. Here we discuss two other transformations.
Identity transformations. We first consider the transformations $A = [I_{d\times d}, 0_{d\times(m-d)}]^\top$ and $B = \sqrt{m/k}\cdot[I_{k\times k}, 0_{k\times(m-k)}]$, which is equivalent to the setting in Bartlett et al. (2019) when $m = k = d$. Then it is clear that $\sigma_{\min}(B) = \sigma_{\max}(B) = \sqrt{m/k}$ and $\sigma_{\min}(A) = \sigma_{\max}(A) = 1$. Now let us consider $L(W^{(0)})$. By our choices of $B$ and $A$ and the zero initialization of the hidden weight matrices, in the case $d = k$ we have $L(W^{(0)}) = \frac{1}{2}\|BAX - Y\|_F^2 = \frac{1}{2}\|\sqrt{m/k}\,X - Y\|_F^2$. We remark that $\frac{1}{2}\|\sqrt{m/k}\,X - Y\|_F^2$ could be as big as $\frac{1}{2}(m\|X\|_F^2/k + \|Y\|_F^2)$ (for example, when $X$ and $Y$ are orthogonal). Then, plugging these results into (4.1), the condition on $A$ and $B$ becomes $\sqrt{m/k} \ge C \cdot \frac{\|X\|_2}{\sigma_r^2(X)}\big(\frac{1}{2}(m\|X\|_F^2/k + \|Y\|_F^2) - L(W^*)\big)^{1/2} \ge C \cdot \frac{\|X\|_2}{\sigma_r^2(X)}\sqrt{\frac{m\|X\|_F^2}{2k}}$, where the second inequality is due to the fact that $L(W^*) \le \|Y\|_F^2/2$. It is then clear that if $\|X\|_F \ge \sqrt{2}/C$, the above inequality cannot be satisfied for any choice of $m$, since the factor $\sqrt{m/k}$ cancels on both sides. Therefore, in such cases, our bound does not guarantee that GD achieves global convergence; this is consistent with the non-convergence results in Bartlett et al. (2019). Note that replacing the scaling factor $\sqrt{m/k}$ in the definition of $B$ with any other function of $d$, $k$ and $m$ would not help.
Modified identity transformations. In fact, a different type of identity transformations for $A$ and $B$ can satisfy condition (4.1). Here we provide one such example. Assuming $m \ge d + k$, we can construct two sets $S_1, S_2 \subset [m]$ satisfying $|S_1| = d$, $|S_2| = k$ and $S_1 \cap S_2 = \emptyset$. Let $S_1 = \{i_1, \dots, i_d\}$ and $S_2 = \{j_1, \dots, j_k\}$. We then construct the matrices $A$ and $B$ as follows: $A_{ij} = 1$ if $(i, j) = (i_j, j)$ and $A_{ij} = 0$ otherwise; $B_{ij} = \alpha$ if $(i, j) = (i, j_i)$ and $B_{ij} = 0$ otherwise, where $\alpha$ is a parameter to be specified later. In this way, it can be verified that $BA = 0$, $\sigma_{\min}(A) = \sigma_{\max}(A) = 1$, and $\sigma_{\min}(B) = \sigma_{\max}(B) = \alpha$. Thus the initial training loss satisfies $L(W^{(0)}) = \|Y\|_F^2/2$. Then, plugging these results into (4.1), the condition on $A$ and $B$ can be rewritten as $\alpha \ge C \cdot \frac{\|X\|_2}{\sigma_r^2(X)}(\|Y\|_F^2/2 - L(W^*))^{1/2}$. The right-hand side of this inequality does not depend on $\alpha$, which implies that we can choose $\alpha$ sufficiently large to make the inequality hold; thus GD is guaranteed to achieve global convergence. Moreover, it is worth noting that with the modified identity transformations, a neural network of width $m = d + k$ suffices to guarantee the global convergence of GD. We further remark that a similar analysis can be extended to SGD.
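This construction is straightforward to instantiate. The numpy sketch below uses one hypothetical choice of the disjoint index sets $S_1$, $S_2$ (the first $d$ and the next $k$ hidden coordinates) and an arbitrary scale $\alpha$, and checks the claimed properties $BA = 0$, $\sigma_{\min}(A) = \sigma_{\max}(A) = 1$, and $\sigma_{\min}(B) = \sigma_{\max}(B) = \alpha$.
```python
import numpy as np

# Sketch of the modified identity transformations; S1, S2 and alpha are
# hypothetical choices, with m >= d + k as the construction requires.
d, k, m, alpha = 10, 5, 32, 50.0
S1 = np.arange(d)                              # rows of A that carry the input
S2 = np.arange(d, d + k)                       # columns of B that emit the output

A = np.zeros((m, d))
A[S1, np.arange(d)] = 1.0                      # A_{i_j, j} = 1
B = np.zeros((k, m))
B[np.arange(k), S2] = alpha                    # B_{i, j_i} = alpha

assert np.allclose(B @ A, 0.0)                 # disjoint supports give BA = 0
sA = np.linalg.svd(A, compute_uv=False)
sB = np.linalg.svd(B, compute_uv=False)
print(sA.min(), sA.max(), sB.min(), sB.max())  # prints 1, 1, alpha, alpha
```
With these values, the left-hand side of (4.1) equals $\alpha$, so the condition holds once $\alpha$ is chosen large enough.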
Then we construct matrices A and B as follows:\nAij “ \" 1 pi, jq “ pij , jq 0 otherwise Bij “ \" α pi, jq “ pi, jiq 0 otherwise\nwhere α is a parameter which will be specified later. In this way, it can be verified that BA “ 0, σminpAq “ σmaxpAq “ 1, and σminpBq “ σmaxpBq “ α. Thus it is clear that the initial training loss satisfies LpWp0qq “ }Y}2F {2. Then plugging these results into (4.1), the condition on A and B can be rewritten as\nα ě C ¨ }X}2 σ2rpXq ¨ ` }Y}2F {2´ LpW˚q ˘1{2 .\nThe R.H.S. of the above inequality does not depend on α, which implies that we can choose sufficiently large α to make this inequality hold. Thus, GD can be guaranteed to achieve the global convergence. Moreover, it is worth noting that using modified identity transformation, a neural network with m “ d ` k suffices to guarantee the global convergence of GD. We further remark that similar analysis can be extended to SGD." }, { "heading": "5 EXPERIMENTS", "text": "In this section, we conduct various experiments to verify our theory on synthetic data, including i) comparison between different input and output transformations and ii) comparison between training deep linear ResNets and standard linear networks." }, { "heading": "5.1 DIFFERENT INPUT AND OUTPUT TRANSFORMATIONS", "text": "To validate our theory, we performed simple experiment on 10-d synthetic data. Specifically, we randomly generate X P R10ˆ1000 from a standard normal distribution and set Y “ ´X ` 0.1 ¨ E, where each entry in E is independently generated from standard normal distribution. Consider 10-hidden-layer linear ResNets, we apply three input and output transformations including identity transformations, modified identity transformations and random transformations. We evaluate the convergence performances for these three choices of transformations and report the results in Figures 1(a)-1(b), where we consider two cases m “ 40 and m “ 200. It can be clearly observed that gradient descent with identity initialization gets stuck, but gradient descent with modified identity initialization or random initialization converges well. This verifies our theory. It can be also observed that modified identity initialization can lead to slightly faster convergence rate as its initial training loss can be smaller. In fact, with identity transformations in this setting, only the first 10 entries of the m hidden variables in each layer ever take a non-zero value, so that, no matter how large m is, effectively, m “ 10, and the lower bound of Bartlett et al. (2019) applies." }, { "heading": "5.2 COMPARISON WITH STANDARD DEEP LINEAR NETWORKS", "text": "Then we compare the convergence performances with that of training standard deep linear networks. Specifically, we adopt the same training data generated in Section 5.1 and consider training Lhidden-layer neural network with fixed width m. The convergence results are displayed in Figures\n1(c)-1(d), where we consider different choices of L. For training linear ResNets, we found that the convergence performances are quite similar for different L, thus we only plot the convergence result for the largest one (e.g., L “ 20 for m “ 40 and L “ 100 for m “ 200). However, it can be observed that for training standard linear networks, the convergence performance becomes worse as the depth increases. 
This is consistent with the theory as our condition on the neural network width is m “ Opkrκ2q (please refer to Corollary 3.4), which has no dependency in L, while the condition for training standard linear network is m “ OpLkrκ3q (Du & Hu, 2019), which is linear in L." }, { "heading": "6 CONCLUSION", "text": "In this paper, we proved the global convergence of GD and SGD for training deep linear ResNets with square loss. More specifically, we considered fixed linear transformations at both input and output layers, and proved that under certain conditions on the transformations, GD and SGD with zero initialization on all hidden weights can converge to the global minimum. In addition, we further proved that when specializing to appropriate Gaussian random linear transformations, GD and SGD can converge as long as the neural network is wide enough. Compared with the convergence results of GD for training standard deep linear networks, our condition on the neural network width is strictly milder. Our analysis can be generalized to prove similar results for different loss functions such as cross-entropy loss, and can potentially provide meaningful insights to the convergence analysis of deep non-linear ResNets." }, { "heading": "ACKNOWLEDGEMENT", "text": "We thank the anonymous reviewers and area chair for their helpful comments. This work was initiated when Q. Gu and P. Long attended the summer program on the Foundations of Deep Learning at the Simons Institute for the Theory of Computing. D. Zou and Q. Gu were sponsored in part by the National Science Foundation CAREER Award IIS-1906169, BIGDATA IIS-1855099, and Salesforce Deep Learning Research Award. The views and conclusions contained in this paper are those of the authors and should not be interpreted as representing any funding agencies." }, { "heading": "A PROOF OF MAIN THEOREMS", "text": "We first provide the following lemma which proves upper and lower bounds on }∇WlLpWq}2F when W is staying inside a certain region. Its proof is in Section B.1.\nLemma A.1. For any weight matrices satisfying maxlPrLs }Wl}2 ď 0.5{L, it holds that,\n}∇WlLpWq}2F ě 2\ne σ2minpAqσ2minpBqσ2rpXq\n` LpWq ´ LpW˚q ˘ ,\n}∇WlLpWq}2F ď 2e}A}22}B}22}X}22 ` LpWq ´ LpW˚q ˘\n}∇Wl`pW; xi,yiq}2F ď 2e}A}22}B}22}xi}22`pW; xi,yiq. In addition, the stochastic gradient Gl in Algorithm 1 satisfies\n}Gl}2F ď 2en2}A}22}B}22}X}22\nB2 LpWq,\nwhere B is the minibatch size.\nThe gradient lower bound can be also interpreted as the Polyak-Łojasiewicz condition, which is essential to the linear convergence rate. The gradient upper bound is crucial to bound the trajectory length, since this lemma requires that maxlPrLs }Wl} ď 0.5{L.\nThe following lemma proves the smoothness property of the training loss function LpWq when W is staying inside a certain region. Its proof is in Section B.2.\nLemma A.2. For any two collections of weight matrices, denoted by ĂW “ tĂW1, . . . ,ĂWLu and W “ tW1, . . . ,WLu, satisfying maxlPrLs }Wl}F ,maxlPrLs }ĂWl}F ď 0.5{L that, it holds that\nLpĂWq ´ LpWq ď L ÿ\nl“1 x∇WlLpWq,ĂWl ´Wly\n` L}A}2}B}2}X}2 `\na\n2eLpWq ` 0.5e}A}2}B}2}X}2 ˘\nL ÿ l“1 }ĂWl ´Wl}2F .\nBased on these two lemmas, we are able to complete the proof of all theorems, which are provided as follows." }, { "heading": "A.1 PROOF OF THEOREM 3.1", "text": "Proof of Theorem 3.1. In order to simplify the proof, we use the short-hand notations λA, µA, λB and µB to denote }A}2, σminpAq, }B}2 and σminpBq respectively. Specifically, we rewrite the condition on A and B as follows\nµ2Aµ 2 B\nλAλB ě 4 ? 
2e3}X}2 σ2rpXq ¨ ` LpWp0qq ´ LpW˚q ˘1{2 .\nWe prove the theorem by induction on the update number s, using the following two-part inductive hypothesis:\n(i) maxlPrLs }W psq l }F ď 0.5{L,\n(ii) LpWpsqq ´ LpW˚q ď ˆ 1´ ηLµ 2 Aµ 2 Bσ 2 rpXq\ne\n˙s\n¨ ` LpWp0qq ´ LpW˚q ˘ .\nFirst, it can be easily verified that this holds for s “ 0. Now, assume that the inductive hypothesis holds for s ă t. Induction for Part (i): We first prove that maxlPrLs }W ptq l }F ď 0.5{L. By triangle inequality and the update rule of gradient descent, we have\n}Wptql }F ď t´1 ÿ\ns“0 η}∇WlLpWpsqq}F\nď η t´1 ÿ\ns“0\n? 2eλAλB}X}2 ¨ ` LpWpsqq ´ LpW˚q ˘1{2\nď ? 2eηλAλB}X}2 ¨ ` LpWp0qq ´ LpW˚q ˘1{2 ¨ t´1 ÿ\ns“0\nˆ\n1´ ηLµ 2 Aµ 2 Bσ 2 rpXq\ne\n˙s{2\nwhere the second inequality follows from Lemma A.1, and the third inequality follows from the inductive hypothesis. Since ? 1´ x ď 1´ x{2 for any x P r0, 1s, we further have\n}Wptql }F ď ? 2eηλAλB}X}2 ¨ ` LpWp0qq ´ LpW˚q ˘1{2 ¨ t´1 ÿ\ns“0\nˆ\n1´ ηLµ 2 Aµ 2 Bσ 2 rpXq\n2e\n˙s\nď ?\n8e3λAλB}X}2 Lµ2Aµ 2 Bσ 2 rpXq ¨ ` LpWp0qq ´ LpW˚q ˘1{2 .\nUnder the condition that µ2Aµ 2 B{pλAλBq ě 2 ? 8e3}X}2 ` LpWp0qq ´ LpW˚q ˘1{2{σ2rpXq, it can be readily verified that }Wptql }F ď 0.5{L. Since this holds for all l P rLs, we have proved Part (i) of the inductive step, i.e., maxlPrLs }W ptq l }F ď 0.5{L.\nInduction for Part (ii): Now we prove Part (ii) of the inductive step, bounding the improvement in the objective function. Note that we have already shown that Wptq satisfies maxlPrLs }W ptq l }F ď 0.5{L, thus by Lemma A.2 we have\nLpWptqq ď LpWpt´1qq ´ η L ÿ\nl“1\n› ›∇WlLpWpt´1qq › › 2\nF\n` η2LλAλB}X}2 ¨ `\nb\neLpWpt´1qq ` 0.5eλAλB}X}2 ˘ ¨ L ÿ\nl“1 }∇WlLpWpt´1qq}2F ,\nwhere we use the fact that Wptql ´ W pt´1q l “ ´η∇WlLpWpl´1qq. Note that LpWpt´1qq ď LpWp0qq and the step size is set to be\nη “ 1 2LλAλB}X}2 ¨ ` a eLpWp0qq ` 0.5eλAλB}X}2 ˘˘ ,\nso that we have\nLpWptqq ´ LpWpt´1qq ď ´η 2\nL ÿ\nl“1\n› ›∇WlLpWpt´1qq › › 2\nF\nď ´ηLµ 2 Aµ 2 Bσ 2 rpXq\ne\n` LpWpt´1qq ´ LpW˚q ˘ ,\nwhere the second inequality is by Lemma A.1. Applying the inductive hypothesis, we get\nLpWptqq ´ LpW˚q ď ˆ 1´ ηLµ 2 Aµ 2 Bσ 2 rpXq\ne\n˙\n¨ ` LpWpt´1qq ´ LpW˚q ˘\nď ˆ 1´ ηLµ 2 Aµ 2 Bσ 2 rpXq\ne\n˙t\n¨ ` LpWp0qq ´ LpW˚q ˘ , (A.1)\nwhich completes the proof of the inductive step of Part (ii). Thus we are able to complete the proof." }, { "heading": "A.2 PROOF OF PROPOSITION 3.3", "text": "Proof of Proposition 3.3. We prove the bounds on the singular values and initial training loss separately.\nBounds on the singular values: Specifically, we set the neural network width as\nm ě 100 ¨ `\na maxtd, ku ` a 2 logp12{δq ˘2\nBy Corollary 5.35 in Vershynin (2010), we know that for a matrix U P Rd1ˆd2 (d1 ě d2) with entries independently generated by standard normal distribution, with probability at least 1´ 2 expp´t2{2q, its singular values satisfy\na d1 ´ a d2 ´ t ď σminpUq ď σmaxpUq ď a d1 ` a d2 ` t.\nBased on our constructions of A and B, we know that each entry of 1βB and 1 αA follows standard Gaussian distribution. Therefore, set t “ 2 a\nlogp12{δq and apply union bound, with probability at least 1´ δ{3, the following holds,\nα `? m´ ? d´ 2 a logp12{δq ˘ ď σminpAq ď σmaxpAq ď α `? m` ? d` 2 a logp12{δq ˘ β `? m´ ? k ´ 2 a logp12{δq ˘ ď σminpBq ď σmaxpBq ď β `? m` ? k ` 2 a logp12{δq ˘ ,\nwhere we use the facts that σminpκUq “ κσminpUq and σmaxpκUq “ κσmaxpUq for any scalar κ and matrix U. Then applying our choice of m, we have with probability at least 1´ δ{3, 0.9α ? m ď σminpAq ď σmaxpAq ď 1.1α ? m and 0.9β ? 
m ď σminpBq ď σmaxpBq ď 1.1β ? m.\nThis completes the proof of the bounds on the singular values of A and B.\nBounds on the initial training loss: The proof in this part is similar to the proof of Proposition 6.5 in Du & Hu (2019). Since we apply zero initialization on all hidden layers, by Young’s inequality, we have the following for any px,yq,\n`pWp0q; x,yq “ 1 2 }BAx´ y}22 ď }BAx}22 ` }y}22. (A.2)\nSince each entry of B is generated from N p0, β2q, conditioned on A, each entry of BAx is distributed according to N p0, β2||Ax||22q, so\n}BAx}22 }Ax}22β2\nfollows a χ2k distribution. Applying a standard tail bound for χ2k distribution, we have, with probability at least 1´ δ1,\n}BAx}22 }Ax}22 ď β2kp1` 2 a logp1{δ1q{k ` 2 logp1{δ1qq.\nNote that by our bounds of the singular values, if m ě 100 ¨ ` a maxtd, ku ` a 2 logp8{δq ˘2\n, we have with probability at least 1 ´ δ{3, }A}2 ď 1.1α ? m, thus, it follows that with probability at least 1´ δ1 ´ δ,\n}BAx}22 ď 1.21α2β2km “ 1` 2 a logp1{δ1q ` 2 logp1{δ1q ‰ }x}22.\nThen by union bound, it is evident that with probability 1´ nδ1 ´ δ{3,\n}BAX}2F “ n ÿ\ni“1 }BAxi}22 ď 1.21α2β2km\n“ 1` 2 a logp1{δ1q ` 2 logp1{δ1q ‰ }X}2F .\nSet δ1 “ δ{p3nq, suppose logp1{δ1q ě 1, we have with probability at least 1´ 2δ{3,\nLpWp0qq “ 1 2 }BAX´Y}2F ď }BAX}2F ` }Y}2F ď 6.05α2β2km logp2n{δq}X}2F ` }Y}2F .\nThis completes the proof of the bounds on the initial training loss.\nApplying a union bound on these two parts, we are able to complete the proof." }, { "heading": "A.3 PROOF OF COROLLARY 3.4", "text": "Proof of Corollary 3.4. Recall the condition in Theorem 3.1:\nσ2minpAqσ2minpBq }A}2}B}2 ě C ¨ }X}2 σ2rpXq ¨ ` LpWp0qq ´ LpW˚q ˘1{2 . (A.3)\nBy Proposition 3.3, we know that, with probability 1´ δ,\nσ2minpAqσ2minpBq }A}2}B}2 “ Θ ` m ˘ ,\n}X}2 σrpXq ¨ ` LpWp0qq ´ LpW˚q ˘1{2 “ O\n˜\np a\nkm logpn{δq ` 1q}X}F }X}2 σrpXq\n¸\n.\nNote that }X}F ď ? r}X}2, thus the condition (A.3) can be satisfied if m “ Ωpkrκ2 logpn{δqq where κ “ }X}22{σ2rpXq.\nTheorem 3.1 implies that LpWptqq ´ LpW˚q ď after T “ O ´\n1 ηLσ2minpAqσ2minpBqσ2rpXq log 1\n¯\niterations. Plugging in the value of η, we get\nT “ O ˜ }A}2}B}2}X}2 ¨ ` a LpWp0qq ` }A}2}B}2}X}2 ˘\nσ2minpAqσ2minpBqσ2rpXq log\n1\n¸\n.\nBy Proposition 3.3, we have\nT “ O ˜ }A}2}B}2}X}2 ¨ ` a km logpn{δq}X}F ` }A}2}B}2}X}2 ˘\nσ2minpAqσ2minpBqσ2rpXq log\n1\n¸\n“ O ˜ }X}2 ¨ ` a km logpn{δq}X}F `m}X}2 ˘\nmσ2rpXq log\n1\n¸\n“ O ˜ }X}2 ¨ ` a kr logpn{δq{m}X}2 ` }X}2 ˘\nσ2rpXq log\n1\n¸\n“ O ˆ κ log 1 ˙\nfor m “ Ωpkr logpn{δqq, completing the proof." }, { "heading": "A.4 PROOF OF THEOREM 3.6", "text": "Proof of Theorem 3.6. The guarantee is already achieved by Wp0q if ě LpWp0qq ´ LpW˚q, so we may assume without loss of generality that ă LpWp0qq ´ LpW˚q. Similar to the proof of Theorem 3.1, we use the short-hand notations λA, µA, λB and µB to denote }A}2, σminpAq, }B}2 and σminpBq respectively. Then we rewrite the condition on A and B, and our choices of η and T as follows\nµ2Aµ 2 B λAλB ě ? 8e3n}X}2 ¨ logpLpWp0qq{ 1q Bσ2rpXq ¨ b 2LpWp0qq\nη ď Bµ 2 Aµ 2 Bσ 2 rpXq\n6e3Lnλ4Aλ 4 B}X}22\n¨min \"\n1\n}X}22,8LpW˚q , log2p2qB 3n}X}22 ¨ logpT {δq logpLpWp0qq{ 1q * ,\nT “ e ηLµ2Aµ 2 Bσ 2 rpXq ¨ log ˆ LpWp0qq ´ LpW˚q 1 ˙ ,\nwhere we set 1 “ {3 for the purpose of the proof. We first prove the convergence guarantees on expectation, and then apply the Markov inequality.\nFor SGD, our guarantee is not made on the last iterate but the best one. Define Et to be the event that there is no s ď t such that LpWptqq ´ LpW˚q ď 1. 
If 1pEtq “ 0, then there is an iterate Ws with s ď t that achieves training loss within 1 of optimal. Similar to the proof of Theorem 3.1, we prove the theorem by induction on the update number s, using the following inductive hypothesis: either 1pEsq “ 0 or the following three inequalities hold,\n(i) maxlPrLs }W psq l }F ď\n? 2esηnλAλB}X}2\nB ¨ a 2LpWp0qq¨\n(ii) E “` LpWpsqq ´ LpW˚q ˘‰ ď ´ 1´ ηLµ 2 Aµ 2 Bσ 2 rpXq\ne\n¯s\n¨ ` LpWp0qq ´ LpW˚q ˘\n(iii) LpWpsqq ď 2LpWp0qq, where the expectation in Part (ii) is with respect to all of the random choices of minibatches. Clearly, if 1pEsq “ 0, we have already finished the proof since there is an iterate that achieves training loss\nwithin 1 of optimal. Recalling that ă LpWp0qq ´ LpW˚q, it is easy to verify that the inductive hypothesis holds when s “ 0. For the inductive step, we will prove that if the inductive hypothesis holds for s ă t, then it holds for s “ t. When 1pEt´1q “ 0, then 1pEtq is also 0 and we are done. Therefore, the remaining part is to prove the inductive hypothesis for s “ t under the assumption that 1pEt´1q “ 1, which implies that (i), (ii) and (iii) hold for all s ď t´ 1. For Parts (i) and (ii), we will directly prove that the corresponding two inequalities hold. For Part (iii), we will prove that either this inequality holds or 1pEtq “ 0. Induction for Part (i): As we mentioned, this part will be proved under the assumption 1pEt´1q “ 1. Besides, combining Part (i) for s “ t ´ 1 and our choice of η and T implies that maxlPrLs }W pt´1q l }F ď 0.5{L. Then by triangle inequality, we have the following for }W ptq l }F ,\n}Wptql }F ď }W pt´1q l }F ` η}G pt´1q l }F .\nBy Lemma A.1, we have\n}Gpt´1ql }F ď ? 2enλAλB}X}2 B ¨ b LpWpt´1qq.\nThen we have }Wptql }F ď ` }Wpt´1ql }F ` η}G pt´1q l }F ˘\nď }Wpt´1ql }F ` ? 2eηnλAλB}X}2 B ¨ b LpWpt´1qq. (A.4)\nBy Part (iii) for s “ t´ 1, we know that LpWpt´1qq ď 2LpWp0qq. Then by Part (i) for s “ t´ 1, it is evident that\n}Wptql }F ď ? 2etηnλAλB}X}2 B ¨ b 2LpWp0qq¨. (A.5)\nThis completes the proof of the inductive step of Part (i).\nInduction for Part (ii): As we previously mentioned, we will prove this part under the assumption 1pEt´1q “ 1. Thus, as mentioned earlier, the inductive hypothesis implies that maxlPrLs }W pt´1q l }F ď 0.5{L. By Part (i) for s “ t, which has been verified in (A.5), it can be proved that maxlPrLs }W ptq l }F ď 0.5{L, then we have the following by Lemma A.2,\nLpWptqq ´ LpWpt´1qq ď ´η L ÿ\nl“1\n@ ∇WlLpWpt´1qq,G pt´1q l D\n` η2LλAλB}X}2 ¨ `\nb\neLpWpt´1qq ` 0.5eλAλB}X}2 ˘ ¨ L ÿ\nl“1 }Gpt´1ql } 2 F .\n(A.6) By our condition on A and B, it is easy to verify that\nλAλB ě µ2Aµ 2 B λAλB ě 2\na\n2e´1LpWp0qq }X}2 .\nThen by Part (iii) for s “ t´ 1 (A.6) yields\nLpWptqq ´ LpWpt´1qq ď ´η L ÿ\nl“1\n@ ∇WlLpWpt´1qq,G pt´1q l D ` eη2Lλ2Aλ2B}X}22 ¨ L ÿ\nl“1 }Gpt´1ql } 2 F .\n(A.7)\nTaking expectation conditioning on Wpt´1q gives\nE “ LpWptqq|Wpt´1q ‰ ´ LpWpt´1qq ď ´η L ÿ\nl“1\n› ›∇WlLpWpt´1qq}2F\n` eη2Lλ2Aλ2B}X}22 L ÿ\nl“1 E “ }Gpt´1ql } 2 F |Wpt´1q ‰ . 
(A.8)\nNote that, for i sampled uniformly from t1, ..., nu, the expectation Er}Gpt´1ql }2F |Wpt´1qs can be upper bounded by\nEr}Gpt´1ql } 2 F |Wpt´1qs “ E “ }Gpt´1ql ´∇WlLpW pt´1qq}2F |Wpt´1q ‰ ` }∇WlLpWpt´1qq}2F\nď n 2\nB Er}∇Wl`pWpt´1q; xi,yiq}2F |Wpt´1qs ` }∇WlLpWpt´1qq}2F .\n(A.9)\nBy Lemma A.1, we have\nEr}∇Wl`pWpt´1q; xi,yiq}2F |Wpt´1qs ď 2eλ2Aλ2BEr}xi}22`pWpt´1q; xi,yiq|Wpt´1qs\nď 2eλ 2 Aλ 2 B\nn\nn ÿ i“1 }xi}22`pWpt´1q; xi,yiq\nď 2eλ2Aλ 2 B}X}22,8LpWpt´1qq\nn .\nPlugging the above inequality into (A.9) and (A.8), we get\nE “ LpWptqq|Wpt´1q ‰ ´ LpWpt´1qq\nď ´η L ÿ\nl“1\n› ›∇WlLpWpt´1qq}2F\n` eη2Lλ2Aλ2B}X}22 ¨ L ÿ\nl“1\nˆ 2enλ2Aλ 2 B}X}22,8LpWpt´1qq\nB ` }∇WlLpWpt´1qq}2F\n˙\n.\nRecalling that η ď 1{p6eLλ2Aλ2B}X}22q, we have\nE “ LpWptqq|Wpt´1q ‰ ´ LpWpt´1qq ď ´5η 6\nL ÿ\nl“1\n› ›∇WlLpWpt´1qq}2F\n` 2e2η2L2nλ4Aλ 4 B}X}22}X}22,8LpWpt´1qq\nB . (A.10)\nBy Lemma A.1, we have L ÿ\nl“1 }∇WlLpWpt´1qq}2F ě 2e´1Lµ2Aµ2Bσ2rpXq ` LpWpt´1qq ´ LpW˚q ˘ .\nIf we set\nη ď Bµ 2 Aµ 2 Bσ 2 rpXq\n6e3Lnλ4Aλ 4 B}X}22}X}22,8\n, (A.11)\nthen (A.10) yields\nE “ LpWptqq|Wpt´1q ‰ ´ LpWpt´1qq\nď ´5ηLµ 2 Aµ 2 Bσ 2 rpXq\n3e\n` LpWpt´1qq ´ LpW˚q ˘\n` 2e2η2L2nλ4Aλ 4 B}X}22}X}22,8\n` LpWpt´1qq ´ LpW˚q ˘\nB\n` 2e2η2L2nλ4Aλ 4 B}X}22}X}22,8LpW˚q B\nď ´4ηLµ 2 Aµ 2 Bσ 2 rpXq\n3e\n` LpWpt´1qq ´ LpW˚q ˘ ` 2e2η2L2nλ4Aλ 4 B}X}22}X}22,8LpW˚q BL2 . (A.12)\nDefine\nγ0 “ 4Lµ2Aµ 2 Bσ 2 rpXq\n3e , and γ1 “ 2e2η2L2nλ4Aλ 4 B}X}22}X}22,8LpW˚q B ,\nrearranging (A.12) further gives\nE “ LpWptqq|Wpt´1q ‰ ´ LpW˚q ď p1´ ηγ0q ¨ ` LpWptqq ´ LpW˚q ˘ ` η2γ1. (A.13) Therefore, setting the step size as\nη ď γ0 1\n4γ1 “ Bµ\n2 Aµ 2 Bσ 2 rpXq\n6e3Lnλ4Aλ 4 B}X}22}X}22,8\n¨ 1\nLpW˚q ,\nwe further have\nE “ LpWptqq ´ LpW˚q|Wpt´1q ‰ ď “ p1´ ηγ0q ¨ rLpWpt´1qq ´ LpW˚qs ` η2γ1 ‰\nď p1´ 3ηγ0{4q ¨ rLpWpt´1qq ´ LpW˚qs, (A.14) where the second inequality is by (A.13) and the last inequality is by the fact that we assume 1pEt´1q “ 1, which implies that LpWpt´1qq´LpW˚q ě 1 ě 4γ1η{γ0. Further taking expectation over Wpt´1q, we get\nE “ LpWptqq ´ LpW˚q ‰ ď p1´ 3ηγ0{4q ¨ E “ LpWpt´1qq ´ LpW˚q ‰\nď p1´ 3ηγ0{4qt ¨ ` LpWp0qq ´ LpW˚q ˘ ,\nwhere the second inequality follows from Part (ii) for s “ t´ 1 and the assumption that 1pE0q “ 1. Plugging the definition of γ0, we are able to complete the proof of the inductive step of Part (ii).\nInduction for Part (iii): Recalling that for this part, we are going to prove that either LpWptqq ď 2LpWp0qq or 1pEtq “ 0, which is equivalent to LpWptqq ¨ 1pEtq ď 2LpWp0qq since LpWp0qq and LpWptqq are both positive. We will prove this by martingale inequality. Let Ft “ σtWp0q, ¨ ¨ ¨ ,Wptqu be a σ-algebra, and F “ tFtutě1 be a filtration. We first prove that ErLpWptqq1pEtq|Ft´1s ď LpWpt´1qq1pEt´1q. Apparently, this inequality holds when 1pEt´1q “ 0 since both sides will be zero. Then if 1pEt´1q “ 1, by (A.14) we have ErLpWptqq|Wpt´1qs ď LpWpt´1qq since LpW˚q is the global minimum. Therefore,\nErLpWptqq1pEtq|Ft´1,Wpt´1q,1pEt´1q “ 1s ď ErLpWptqq|Ft´1,1pEt´1q “ 1s ď LpWpt´1qq.\nCombining these two cases, by Jensen’s inequality, we further have\nE “ log ` LpWptqq1pEtq ˘ |Ft´1 ‰ ď log ` ErLpWptqq1pEtq|Ft´1s ˘\nď log ` LpWpt´1qq1pEt´1q ˘ ,\nwhich implies that tlog ` LpWptqq ¨1pEtq ˘ utě0 is a super-martingale. Then we will upper bound the martingale difference log ` LpWptqq ¨ 1pEtq ˘ ´ log ` LpWpt´1qq ¨ 1pEt´1q ˘\n. Clearly this quantity would be zero if 1pEt´1q “ 0. 
Then if 1pEt´1q “ 1, by (A.7) we have\nLpWptqq ď LpWpt´1qq ` η L ÿ\nl“1 }∇WlLpWpt´1qq}F }G pt´1q l }F ` eη 2Lλ2Aλ 2 B}X}22\nL ÿ l“1 }Gpt´1ql } 2 F .\nBy Part (i) for s “ t´ 1, Lemma A.1, we further have\nLpWptqq ď ˆ 1` 2eηLnλ 2 Aλ 2 B}X}22 B ` 2e 2n2η2L2λ4Aλ 4 B}X}42 B2 ˙ LpWpt´1qq\nď ˆ 1` 3eηnLλ 2 Aλ 2 B}X}22\nB\n˙\nLpWpt´1qq, (A.15)\nwhere the second inequality follows from the choice of η that\nη ď B 2enLλ2Aλ 2 B}X}22 .\nUsing the fact that 1pEtq ď 1 and 1pEt´1q “ 1, we further have\nlog ` LpWptqq ¨ 1pEtq ˘ ď log ` LpWpt´1qq ¨ 1pEt´1q ˘ ` 3eηLnλ 2 Aλ 2 B}X}22\nB ,\nwhich also holds for the case 1pEt´1q “ 0. Recall that tlog ` LpWptqq ¨ 1pEtq ˘ utě0 is a supermartingale, thus by one-side Azuma’s inequality, we have with probability at least 1´ δ1,\nlog ` LpWptqq ¨ 1pEtq ˘ ď log ` LpWp0qq ˘ ` 3eηLnλ 2 Aλ 2 B}X}22\nB ¨ a 2t logp1{δ1q.\nSetting δ1 “ δ{T , using the fact that t ď T and leveraging our choice of T and η, we have with probability at least 1´ δ{T ,\n? Tη “ logp2qB\n3e a 2 logpδ{T qLnλ2Aλ2B}X}22 ,\nwhich implies that\nLpWptqq1pEtq ď exp ” log ` LpWp0qq ˘ ` logp2q ı ď 2LpWp0qq. (A.16)\nThis completes the proof of the inductive step of Part (iii).\nNote that this result holds with probability at least 1 ´ δ{T . Thus applying union bound over all iterates tWptqut“0,...,T yields that all induction arguments hold for all t ď T with probability at least 1´ δ. Moreover, plugging our choice of T and η into Part (ii) gives\nE “ LpWptqq ´ LpW˚q ‰ ď 1.\nBy Markov inequality, we further have with probability at least 2{3, it holds that rLpWpT qq ´ LpW˚qs¨1pEtq ď 3 1 “ . Therefore, by union bound (together with the high probability arguments of (A.16)) and assuming δ ă 1{6, we have with probability at least 2{3´δ ě 1{2, one of the iterates of SGD can achieve training loss within 1 of optimal. This completes the proof." }, { "heading": "A.5 PROOF OF COROLLARY 3.7", "text": "Proof of Corollary 3.7. Recall the condition in Theorem 3.6:\nσ2minpAqσ2minpBq }A}2}B}2 ě C ¨ n}X}2 ¨ logpLpW p0qq{ q Bσ2rpXq ¨ b LpWp0qq, (A.17)\nThen plugging in the results in Proposition 3.3 and the fact that }X}F ď ? r}X}2, we obtain that condition (A.17) can be satisfied if m “ O ` krκ2 log2p1{ q ¨B{n ˘ .\nIn addition, consider sufficiently small such that ď rO ` B}X}22,8{pn}X}22q ˘\n, then and use the fact that }X}2,8 ď }X}2 we have η “ O ` kB {pLmnκ}X}22q ˘\nbased on the results in Proposition 3.3. Then in order to achieve -suboptimal training loss, the iteration complexity is\nT “ e ηLσ2minσ 2 minpBqσ2rpXq log\nˆ LpWp0q ´ LpW˚qq ˙\n“ O ` κ2 ´1 logp1{ q ¨ n{B ˘ .\nThis completes the proof." }, { "heading": "A.6 PROOF OF THEOREM 3.8", "text": "Proof of Theorem 3.8. Similar to the proof of Theorem 3.6, we set the neural network width and step size as follows,\nµ2Aµ 2 B\nλAλB ě 4 ? 2e3n}X}2 Bσ2rpXq ¨ b 2LpWp0qq\nη ď logp2qB 2µ2Aµ 2 BpBqσ2rpXq\n54e3Ln2λ4Aλ 4 B}X}42 ¨ logpT {δq\n,\nwhere λA, µA, λB and µB denote }A}2, σminpAq, }B}2 and σminpBq respectively. Different from the proof of Theorem 3.6, the convergence guarantee established in this regime is made on the last iterate of SGD, rather than the best one. Besides, we will prove the theorem by induction on the update parameter t, using the following two-part inductive hypothesis:\n(i) maxlPrLs }W ptq l }F ď 0.5{L\n(ii) LpWptqq ď 2LpWp0qq ¨ ´ 1´ sηLµ 2 Aµ 2 Bσ 2 rpXq\ne\n¯s\n.\nInduction for Part (i) We first prove that maxlPrLs }W ptq l }F ď 0.5{L. By triangle inequality and the update rule of SGD, we have\n}Wptql }F ď t´1 ÿ\ns“0 η}Gl}F\nď η t´1 ÿ\ns“0\n? 
$\frac{\sqrt{2e}\,n\lambda_A\lambda_B\|X\|_2}{B}\big(L(W^{(s)})-L(W^*)\big)^{1/2}$

$$\le \frac{\sqrt{2e}\,\eta n\lambda_A\lambda_B\|X\|_2}{B}\cdot\big(L(W^{(0)})-L(W^*)\big)^{1/2}\cdot\sum_{s=0}^{t-1}\Big(1-\frac{\eta L\mu_A^2\mu_B^2\sigma_r^2(X)}{2e}\Big)^{s} \le \frac{\sqrt{8e^3}\,n\lambda_A\lambda_B\|X\|_2}{BL\mu_A^2\mu_B^2\sigma_r^2(X)}\cdot\big(L(W^{(0)})-L(W^*)\big)^{1/2},$$

where the second inequality is by Lemma A.1, the third inequality follows from Part (ii) for all $s < t$ and the fact that $(1-x)^{1/2}\le 1-x/2$ for all $x\in[0,1]$. Then applying our choice of $m$ implies that $\|W_l^{(t)}\|_F \le 0.5/L$.

Induction for Part (ii): Similar to Parts (ii) and (iii) of the induction step in the proof of Theorem 3.6, we first prove the convergence in expectation, and then use Azuma's inequality to obtain the high-probability result. It can be readily verified that

$$\lambda_A\lambda_B \ \ge\ \frac{\mu_A^2\mu_B^2}{\lambda_A\lambda_B} \ \ge\ \frac{4\sqrt{2e^3}\,n\|X\|_2\log\!\big(L(W^{(0)})/\epsilon\big)}{B\sigma_r^2(X)}\cdot\sqrt{2L(W^{(0)})} \ \ge\ 2\sqrt{2e^{-1}L(W^{(0)})}\,\|X\|_2,$$

$$\eta \ \le\ \frac{\log(2)\,B^2\mu_A^2\mu_B^2\sigma_r^2(X)}{96e^3Ln^2\lambda_A^4\lambda_B^4\|X\|_2^4\log(T/\delta)} \ \le\ \frac{B\mu_A^2\mu_B^2\sigma_r^2(X)}{6e^3Ln\lambda_A^4\lambda_B^4\|X\|_2^2\|X\|_{2,\infty}^2}.$$

Thus, we can leverage (A.12) and obtain

$$\mathbb{E}\big[L(W^{(t)})\,|\,W^{(t-1)}\big] - L(W^{(t-1)}) \le -\frac{4\eta L\mu_A^2\mu_B^2\sigma_r^2(X)}{3e}\,L(W^{(t-1)}),$$

where we use the fact that $L(W^*) = 0$. Then by Jensen's inequality, we have

$$\mathbb{E}\big[\log L(W^{(t)})\,|\,W^{(t-1)}\big] \le \log L(W^{(t-1)}) + \log\Big(1-\frac{4\eta L\mu_A^2\mu_B^2\sigma_r^2(X)}{3e}\Big) \le \log L(W^{(t-1)}) - \frac{4\eta L\mu_A^2\mu_B^2\sigma_r^2(X)}{3e},$$

where the second inequality is by $\log(1+x)\le x$. Then, similar to the proof of Theorem 3.6, we apply a martingale inequality to prove this part. Let $\mathcal{F}_t = \sigma\{W^{(0)},\dots,W^{(t)}\}$ be a $\sigma$-algebra and $\mathcal{F} = \{\mathcal{F}_t\}_{t\ge 1}$ be a filtration; the above inequality implies that

$$\mathbb{E}\big[\log L(W^{(t)})\,|\,\mathcal{F}_{t-1}\big] + \frac{4t\eta L\mu_A^2\mu_B^2\sigma_r^2(X)}{3e} \le \log L(W^{(t-1)}) + \frac{4(t-1)\eta L\mu_A^2\mu_B^2\sigma_r^2(X)}{3e}, \quad (A.18)$$

which implies that $\big\{\log L(W^{(t)}) + 4t\eta L\mu_A^2\mu_B^2\sigma_r^2(X)/(3e)\big\}$ is a super-martingale. Besides, by (A.15), we can obtain

$$\log L(W^{(t)}) \le \log L(W^{(t-1)}) + \frac{3e\eta Ln\lambda_A^2\lambda_B^2\|X\|_2^2}{B},$$

which implies that

$$\log L(W^{(t)}) + \frac{4t\eta L\mu_A^2\mu_B^2\sigma_r^2(X)}{3e} \le \log L(W^{(t-1)}) + \frac{4(t-1)\eta L\mu_A^2\mu_B^2\sigma_r^2(X)}{3e} + \frac{4e\eta Ln\lambda_A^2\lambda_B^2\|X\|_2^2}{B},$$

where we again use the fact that $\log(1+x)\le x$. Thus, by the one-sided Azuma inequality, we have with probability at least $1-\delta'$ that

$$\log L(W^{(t)}) \le \log L(W^{(0)}) - \frac{4t\eta L\mu_A^2\mu_B^2\sigma_r^2(X)}{3e} + \frac{4e\eta Ln\lambda_A^2\lambda_B^2\|X\|_2^2}{B}\cdot\sqrt{2t\log(1/\delta')} \le \log L(W^{(0)}) - \frac{t\eta L\mu_A^2\mu_B^2\sigma_r^2(X)}{e} + \frac{96e^3\eta Ln^2\lambda_A^4\lambda_B^4\|X\|_2^4\log(1/\delta')}{B^2\mu_A^2\mu_B^2\sigma_r^2(X)} \le \log L(W^{(0)}) - \frac{t\eta L\mu_A^2\mu_B^2\sigma_r^2(X)}{e} + \log(2),$$

where the second inequality follows from the fact that $-at + b\sqrt{t} \le b^2/a$, and the last inequality is by our choice of $\eta$:

$$\eta \le \frac{\log(2)\,B^2\mu_A^2\mu_B^2\sigma_r^2(X)}{96e^3Ln^2\lambda_A^4\lambda_B^4\|X\|_2^4\log(1/\delta')}.$$

Then it is clear that with probability at least $1-\delta'$,

$$L(W^{(t)}) \le 2L(W^{(0)})\cdot\exp\Big(-\frac{t\eta L\mu_A^2\mu_B^2\sigma_r^2(X)}{e}\Big), \quad (A.19)$$

which completes the induction for Part (ii).

Similar to the proof of Theorem 3.6, (A.19) holds with probability at least $1-\delta'$ for a given $t$. Then we can set $\delta' = \delta/T$ and apply a union bound, so that with probability at least $1-\delta$, (A.19) holds for all $t \le T$. This completes the proof." }, { "heading": "A.7 PROOF OF COROLLARY 3.9", "text": "Proof of Corollary 3.9. Recall the condition in Theorem 3.8:

$$\frac{\sigma_{\min}^2(A)\sigma_{\min}^2(B)}{\|A\|_2\|B\|_2} \ge C\cdot\frac{n\|X\|_2}{B\sigma_r^2(X)}\cdot\sqrt{L(W^{(0)})}. \quad (A.20)$$

Then, plugging in the results in Proposition 3.3 and the fact that $\|X\|_F \le \sqrt{r}\|X\|_2$, we obtain that condition (A.20) can be satisfied if $m = O\big(kr\kappa^2\cdot B/n\big)$. In addition, it can be computed that $\eta = O\big(kB^2/(Lmn^2\kappa\|X\|_2^2)\big)$ based on the results in Proposition 3.3. Then, in order to achieve $\epsilon$-suboptimal training loss, the iteration complexity is

$$T = \frac{e}{\eta L\sigma_{\min}^2(A)\sigma_{\min}^2(B)\sigma_r^2(X)}\log\Big(\frac{L(W^{(0)})-L(W^*)}{\epsilon}\Big) = O\big(\kappa^2\log(1/\epsilon)\cdot n^2/B^2\big).$$

This completes the proof." }, { "heading": "B PROOFS OF TECHNICAL LEMMAS", "text": "" }, { "heading": "B.1 PROOF OF LEMMA A.1", "text": "We first note the following useful lemmas.

Lemma B.1 (Claim B.1 in Du & Hu (2019)). Define $\Phi = \arg\min_{\Theta\in\mathbb{R}^{k\times d}}\|\Theta X - Y\|_F^2$; then for any $U\in\mathbb{R}^{k\times d}$ it holds that

$$\|UX - Y\|_F^2 = \|UX - \Phi X\|_F^2 + \|\Phi X - Y\|_F^2.$$

Lemma B.2 (Theorem 1 in Fang et al. (1994)). Let $U, V\in\mathbb{R}^{d\times d}$ be two positive definite matrices; then it holds that

$$\lambda_{\min}(U)\,\mathrm{Tr}(V) \le \mathrm{Tr}(UV) \le \lambda_{\max}(U)\,\mathrm{Tr}(V).$$

The following lemma is proved in Section B.3.

Lemma B.3. Let $U\in\mathbb{R}^{d\times r}$ be a rank-$r$ matrix. Then for any $V\in\mathbb{R}^{r\times k}$, it holds that $\sigma_{\min}(U)\|V\|_F \le \|UV\|_F \le \sigma_{\max}(U)\|V\|_F$.

Proof of Lemma A.1. Proof of the gradient lower bound: We first prove the gradient lower bound. Let $U = B(I+W_L)\cdots(I+W_1)A$. By Lemma B.1 and the definition of $L(W^*)$, we know that there exists a matrix $\Phi\in\mathbb{R}^{k\times d}$ such that

$$L(W) = \tfrac{1}{2}\|UX - \Phi X\|_F^2 + L(W^*). \quad (B.1)$$

Therefore, based on the assumption that $\max_{l\in[L]}\|W_l\|_F \le 0.5/L$, we have

$$\|\nabla_{W_l}L(W)\|_F^2 = \big\|[B(I+W_L)\cdots(I+W_{l+1})]^\top(UX-\Phi X)[(I+W_{l-1})\cdots AX]^\top\big\|_F^2 \ge \sigma_{\min}^2\big((I+W_L)\cdots(I+W_{l+1})\big)\cdot\sigma_{\min}^2\big((I+W_{l-1})\cdots(I+W_1)\big)\cdot\|B^\top(U-\Phi)XX^\top A^\top\|_F^2 \ge (1-0.5/L)^{2L-2}\,\|B^\top(U-\Phi)XX^\top A^\top\|_F^2,$$

where the last inequality follows from the fact that $\sigma_{\min}(I+W_l) \ge 1-\|W_l\|_2 \ge 1-\|W_l\|_F \ge 1-0.5/L$. Applying Lemma B.2, we get

$$\|B^\top(U-\Phi)XX^\top A^\top\|_F^2 = \mathrm{Tr}\big(BB^\top(U-\Phi)XX^\top A^\top AXX^\top(U-\Phi)^\top\big) \ge \lambda_{\min}(BB^\top)\cdot\mathrm{Tr}\big(A^\top AXX^\top(U-\Phi)^\top(U-\Phi)XX^\top\big) \ge \lambda_{\min}(BB^\top)\cdot\lambda_{\min}(A^\top A)\cdot\|(U-\Phi)XX^\top\|_F^2.$$

Note that $X$ has rank $r$; thus there exists a full-rank matrix $\tilde{X}\in\mathbb{R}^{d\times r}$ such that $\tilde{X}\tilde{X}^\top = XX^\top$. Thus we have

$$\|(U-\Phi)X\|_F^2 = \mathrm{Tr}\big((U-\Phi)XX^\top(U-\Phi)^\top\big) = \mathrm{Tr}\big((U-\Phi)\tilde{X}\tilde{X}^\top(U-\Phi)^\top\big) = \big\|(U-\Phi)\tilde{X}\big\|_F^2. \quad (B.2)$$

Therefore,

$$\|(U-\Phi)XX^\top\|_F^2 = \big\|(U-\Phi)\tilde{X}\tilde{X}^\top\big\|_F^2 = \mathrm{Tr}\big((U-\Phi)\tilde{X}\tilde{X}^\top\tilde{X}\tilde{X}^\top(U-\Phi)^\top\big) \ge \lambda_{\min}(\tilde{X}^\top\tilde{X})\cdot\|(U-\Phi)\tilde{X}\|_F^2 = 2\sigma_r^2(X)\cdot\big(L(W)-L(W^*)\big), \quad (B.3)$$

where the inequality follows from Lemma B.2 and the last equality follows from (B.2), (B.1) and the fact that $\lambda_{\min}(\tilde{X}^\top\tilde{X}) = \lambda_r(XX^\top) = \sigma_r^2(X)$. Note that we assume $d, k \le m$ and $d \le n$. Thus it follows that $\lambda_{\min}(BB^\top) = \sigma_{\min}^2(B)$ and $\lambda_{\min}(A^\top A) = \sigma_{\min}^2(A)$. Then, putting everything together, we obtain

$$\|\nabla_{W_l}L(W)\|_F^2 \ge 2\sigma_{\min}^2(B)\sigma_{\min}^2(A)\sigma_r^2(X)(1-0.5/L)^{2L-2}\big(L(W) - L(W^*)\big).$$

Then, using the inequality $(1-0.5/L)^{2L-2} \ge e^{-1}$, we complete the proof of the gradient lower bound.

Proof of the gradient upper bound: The gradient upper bound can be proved in a similar way. Specifically, Lemma B.3 implies

$$\|\nabla_{W_l}L(W)\|_F^2 = \big\|[B(I+W_L)\cdots(I+W_{l+1})]^\top(UX-\Phi X)[(I+W_{l-1})\cdots AX]^\top\big\|_F^2 \le \sigma_{\max}^2\big((I+W_L)\cdots(I+W_{l+1})\big)\cdot\sigma_{\max}^2\big((I+W_{l-1})\cdots(I+W_1)\big)\cdot\|B\|_2^2\|A\|_2^2\cdot\|(U-\Phi)XX^\top\|_F^2 \le (1+0.5/L)^{2L-2}\,\|B\|_2^2\|A\|_2^2\cdot\|(U-\Phi)XX^\top\|_F^2,$$

where the last inequality is by the assumption that $\max_{l\in[L]}\|W_l\|_F \le 0.5/L$. By (B.3), we have

$$\|(U-\Phi)XX^\top\|_F^2 = \big\|(U-\Phi)(XX^\top)^{1/2}(XX^\top)^{1/2}\big\|_F^2 \le \lambda_{\max}(XX^\top)\cdot\|(U-\Phi)(XX^\top)^{1/2}\|_F^2 = \lambda_{\max}(XX^\top)\cdot\|(U-\Phi)X\|_F^2 = 2\|X\|_2^2\cdot\big(L(W)-L(W^*)\big),$$

where the inequality is by Lemma B.3 and the second equality is by (B.2). Therefore, combining the above results yields

$$\|\nabla_{W_l}L(W)\|_F^2 \le 2\sigma_{\max}^2(B)\sigma_{\max}^2(A)\|X\|_2^2(1+0.5/L)^{2L-2}\big(L(W) - L(W^*)\big).$$

Using the inequality $(1+0.5/L)^{2L-2} \le (1+0.5/L)^{2L} \le e$, we complete the proof of the gradient upper bound.

Proof of the upper bound on $\|\nabla_{W_l}\ell(W; x_i, y_i)\|_F^2$: Let $U = B(I+W_L)\cdots(I+W_1)A$; we have

$$\nabla_{W_l}\ell(W; x_i, y_i) = [B(I+W_L)\cdots(I+W_{l+1})]^\top(Ux_i - y_i)[(I+W_{l-1})\cdots Ax_i]^\top.$$

Therefore, by Lemma B.3, we have

$$\|\nabla_{W_l}\ell(W; x_i, y_i)\|_F^2 \le \sigma_{\max}^2\big((I+W_L)\cdots(I+W_{l+1})\big)\cdot\sigma_{\max}^2\big((I+W_{l-1})\cdots(I+W_1)\big)\cdot\|B^\top(Ux_i-y_i)x_i^\top A^\top\|_F^2 \le (1+0.5/L)^{2L-2}\cdot\|B\|_2^2\|A\|_2^2\|x_i\|_2^2\cdot\|Ux_i - y_i\|_2^2 \le 2e\|A\|_2^2\|B\|_2^2\|x_i\|_2^2\,\ell(W; x_i, y_i),$$

where the last inequality is by the fact that $(1+0.5/L)^{2L-2} \le e$.

Proof of the upper bound on the stochastic gradient: Denote by $\mathcal{B}$ the set of training data points used to compute the stochastic gradient, with $|\mathcal{B}| = B$, and define $\bar{X}$ and $\bar{Y}$ as the stackings of $\{x_i\}_{i\in\mathcal{B}}$ and $\{y_i\}_{i\in\mathcal{B}}$ respectively. Let $U = B(I+W_L)\cdots(I+W_1)A$; the minibatch stochastic gradient takes the form

$$G_l = \frac{n}{B}\sum_{i\in\mathcal{B}}\nabla_{W_l}\ell(W; x_i, y_i) = \frac{n}{B}\,[B(I+W_L)\cdots(I+W_{l+1})]^\top(U\bar{X}-\bar{Y})[(I+W_{l-1})\cdots A\bar{X}]^\top.$$

Then by Lemma B.3, we have

$$\|G_l\|_F^2 \le \frac{n^2}{B^2}\,\sigma_{\max}^2\big((I+W_L)\cdots(I+W_{l+1})\big)\cdot\sigma_{\max}^2\big((I+W_{l-1})\cdots(I+W_1)\big)\cdot\|B^\top(U\bar{X}-\bar{Y})\bar{X}^\top A^\top\|_F^2 \le \frac{n^2}{B^2}\,(1+0.5/L)^{2L-2}\,\|B\|_2^2\|A\|_2^2\|\bar{X}\|_2^2\cdot\|U\bar{X}-\bar{Y}\|_F^2 \le \frac{en^2}{B^2}\,\|B\|_2^2\|A\|_2^2\|\bar{X}\|_2^2\cdot\|U\bar{X}-\bar{Y}\|_F^2,$$

where the second inequality is by the assumption that $\max_{l\in[L]}\|W_l\|_F \le 0.5/L$, and the last inequality follows from the fact that $(1+0.5/L)^{2L-2} \le (1+0.5/L)^{2L} \le e$. Note that $\bar{X}$ and $\bar{Y}$ are constructed by stacking $B$ columns from $X$ and $Y$ respectively; thus we have $\|\bar{X}\|_2^2 \le \|X\|_2^2$ and $\|U\bar{X}-\bar{Y}\|_F^2 \le \|UX-Y\|_F^2 = 2L(W)$. It then follows that

$$\|G_l\|_F^2 \le \frac{2en^2}{B^2}\,\|B\|_2^2\|A\|_2^2\|X\|_2^2\cdot L(W).$$

This completes the proof of the upper bound on the stochastic gradient." }, { "heading": "B.2 PROOF OF LEMMA A.2", "text": "Proof of Lemma A.2. Let $U = B(I+W_L)\cdots(I+W_1)A$, $\tilde{U} = B(I+\tilde{W}_L)\cdots(I+\tilde{W}_1)A$ and $\Delta = \tilde{U} - U$. We have

$$L(\tilde{W}) - L(W) = \tfrac{1}{2}\big(\|\tilde{U}X-Y\|_F^2 - \|UX-Y\|_F^2\big) = \tfrac{1}{2}\big(\|(U+\Delta)X-Y\|_F^2 - \|UX-Y\|_F^2\big) = \tfrac{1}{2}\big(2\langle UX-Y,\,\Delta X\rangle + \|\Delta X\|_F^2\big) = \big\langle UX-Y,\,(\tilde{U}-U)X\big\rangle + \tfrac{1}{2}\big\|(\tilde{U}-U)X\big\|_F^2. \quad (B.4)$$

We begin by working on the first term. Let $V = (I+W_L)\cdots(I+W_1)$ and $\tilde{V} = (I+\tilde{W}_L)\cdots(I+\tilde{W}_1)$, so that $\tilde{U}-U = B(\tilde{V}-V)A$. Breaking the transformation of $V = \prod_{j=L}^{1}(I+W_j)$ into $\tilde{V} = \prod_{j=L}^{1}(I+\tilde{W}_j)$ down into the effects of replacing one layer at a time, we get

$$\tilde{V}-V = \sum_{l=1}^{L}\Bigg[\bigg(\prod_{j=L}^{l+1}(I+W_j)\bigg)\bigg(\prod_{j=l}^{1}(I+\tilde{W}_j)\bigg) - \bigg(\prod_{j=L}^{l}(I+W_j)\bigg)\bigg(\prod_{j=l-1}^{1}(I+\tilde{W}_j)\bigg)\Bigg]$$

and, for each $l$, pulling out the common factor of $\big(\prod_{j=L}^{l+1}(I+W_j)\big)$ on the left and $\big(\prod_{j=l-1}^{1}(I+\tilde{W}_j)\big)$ on the right gives

$$\tilde{V}-V = \sum_{l=1}^{L}(I+W_L)\cdots(I+W_{l+1})(\tilde{W}_l-W_l)(I+\tilde{W}_{l-1})\cdots(I+\tilde{W}_1) = \underbrace{\sum_{l=1}^{L}(I+W_L)\cdots(I+W_{l+1})(\tilde{W}_l-W_l)(I+W_{l-1})\cdots(I+W_1)}_{V_1} + \underbrace{\sum_{l=1}^{L}(I+W_L)\cdots(I+W_{l+1})(\tilde{W}_l-W_l)\cdot\big[(I+\tilde{W}_{l-1})\cdots(I+\tilde{W}_1) - (I+W_{l-1})\cdots(I+W_1)\big]}_{V_2}. \quad (B.5)$$

The first term $V_1$ satisfies

$$\langle UX-Y,\,BV_1AX\rangle = \sum_{l=1}^{L}\big\langle UX-Y,\,B(I+W_L)\cdots(I+W_{l+1})(\tilde{W}_l-W_l)(I+W_{l-1})\cdots(I+W_1)AX\big\rangle = \sum_{l=1}^{L}\mathrm{Tr}\big((UX-Y)^\top B(I+W_L)\cdots(I+W_{l+1})(\tilde{W}_l-W_l)(I+W_{l-1})\cdots(I+W_1)AX\big) = \sum_{l=1}^{L}\mathrm{Tr}\big((I+W_{l-1})\cdots(I+W_1)AX(UX-Y)^\top B(I+W_L)\cdots(I+W_{l+1})(\tilde{W}_l-W_l)\big) = \sum_{l=1}^{L}\big\langle[B(I+W_L)\cdots(I+W_{l+1})]^\top(UX-Y)[(I+W_{l-1})\cdots AX]^\top,\,\tilde{W}_l-W_l\big\rangle = \sum_{l=1}^{L}\langle\nabla_{W_l}L(W),\,\tilde{W}_l-W_l\rangle, \quad (B.6)$$

where the first equality is by the definition of $V_1$. Now we focus on the second term $V_2$ of (B.5):

$$V_2 = \sum_{l=1}^{L}(I+W_L)\cdots(I+W_{l+1})(\tilde{W}_l-W_l)\cdot\sum_{s=1}^{l-1}(I+W_{l-1})\cdots(I+W_{s+1})(\tilde{W}_s-W_s)(I+\tilde{W}_{s-1})\cdots(I+\tilde{W}_1).$$

Recalling that $\|W_l\|_F, \|\tilde{W}_l\|_F \le 0.5/L$ for all $l\in[L]$, by the triangle inequality we have

$$\|V_2\|_F \le (1+0.5/L)^{L}\cdot\sum_{l,s\in[L]:\,l>s}\|\tilde{W}_l-W_l\|_F\cdot\|\tilde{W}_s-W_s\|_F \le (1+0.5/L)^{L}\cdot\Big(\sum_{l=1}^{L}\|\tilde{W}_l-W_l\|_F\Big)^2,$$

where we use the fact that $\sum_{l,s\in[L]:\,l>s}a_la_s \le \sum_{l,s\in[L]}a_la_s = \big(\sum_l a_l\big)^2$ holds for all $a_1,\dots,a_L \ge 0$. Therefore, the following holds regarding $V_2$:

$$\langle UX-Y,\,BV_2AX\rangle \le \|UX-Y\|_F\,\|BV_2AX\|_F \le \sqrt{2L(W)}\,\|B\|_2\|A\|_2\|X\|_2\,\|V_2\|_F \le \sqrt{2e}\,\sqrt{L(W)}\,\|B\|_2\|A\|_2\|X\|_2\,\Big(\sum_{l=1}^{L}\|\tilde{W}_l-W_l\|_F\Big)^2, \quad (B.7)$$

where the third inequality follows from the fact that $(1+0.5/L)^{L} \le \sqrt{e}$. Next, we upper bound the second term of (B.4), $\tfrac{1}{2}\|(\tilde{U}-U)X\|_F^2$. Note that, since $\|(\tilde{U}-U)X\|_F^2 = \|B(\tilde{V}-V)AX\|_F^2 \le \|A\|_2^2\|B\|_2^2\|X\|_2^2\,\|\tilde{V}-V\|_F^2$, it suffices to bound the norm $\|\tilde{V}-V\|_F$. By (B.5), we have

$$\|\tilde{V}-V\|_F = \Big\|\sum_{l=1}^{L}(I+W_L)\cdots(I+W_{l+1})(\tilde{W}_l-W_l)(I+\tilde{W}_{l-1})\cdots(I+\tilde{W}_1)\Big\|_F \le (1+0.5/L)^{L}\sum_{l=1}^{L}\|\tilde{W}_l-W_l\|_F. \quad (B.8)$$

Plugging (B.6), (B.7) and (B.8) into (B.4), we have

$$L(\tilde{W}) - L(W) = \big\langle UX-Y,\,B(V_1+V_2)AX\big\rangle + \tfrac{1}{2}\big\|B(\tilde{V}-V)AX\big\|_F^2 \le \sum_{l=1}^{L}\langle\nabla_{W_l}L(W),\,\tilde{W}_l-W_l\rangle + \|A\|_2\|B\|_2\|X\|_2\big(\sqrt{2eL(W)} + 0.5e\|A\|_2\|B\|_2\|X\|_2\big)\Big(\sum_{l=1}^{L}\|\tilde{W}_l-W_l\|_F\Big)^2 \le \sum_{l=1}^{L}\langle\nabla_{W_l}L(W),\,\tilde{W}_l-W_l\rangle + L\|A\|_2\|B\|_2\|X\|_2\big(\sqrt{2eL(W)} + 0.5e\|A\|_2\|B\|_2\|X\|_2\big)\sum_{l=1}^{L}\|\tilde{W}_l-W_l\|_F^2, \quad (B.9)$$

where the last inequality is by Jensen's inequality. This completes the proof." }, { "heading": "B.3 PROOF OF LEMMA B.3", "text": "Proof of Lemma B.3. Note that we have $\|UV\|_F^2 = \mathrm{Tr}(UVV^\top U^\top) = \mathrm{Tr}(U^\top UVV^\top)$. By Lemma B.2, it is clear that

$$\lambda_{\min}(U^\top U)\,\mathrm{Tr}(VV^\top) \le \mathrm{Tr}(U^\top UVV^\top) \le \lambda_{\max}(U^\top U)\,\mathrm{Tr}(VV^\top).$$

Since $U\in\mathbb{R}^{d\times r}$ has rank $r$, we have $\lambda_{\min}(U^\top U) = \sigma_{\min}^2(U)$. Then, applying the facts that $\lambda_{\max}(U^\top U) = \sigma_{\max}^2(U)$ and $\mathrm{Tr}(VV^\top) = \|V\|_F^2$, we complete the proof." } ]
2,020
null
SP:cbff5688c7be72a90c6e2ff6e3629c6feac3717c
[ "This paper focuses on lossless source compression with bits back coding for hierarchical fully convolutional VAEs. The focus/contribution is three-fold: 1. Improve the compression rate performance by adapting the discretization of latent space required for the entropy coder ANS. The newly proposed discretization scheme allows for a dependency structure that is not restricted to a Markov chain structure in the encoder model q(z|x) and in the generative part of the model p(x,z). This is in contrast with bit-swap[1], which requires a markov chain structure. The dependency structure that is allowed in the proposed method is widely known to perform better than a markov chain structure, which can explain why it improves significantly over Bit-swap [1] (another hierarchical VAE compression algorithm that uses bits back coding.) 2. Increasing compression speed by implementing a vectorized version of ANS, and heaving an ANS head in the shape of a pair of arrays matching that of the latent variable and the observed variable. The latter allows for simultaneous encoding of the latent with the prior distribution and the image with the decoder distribution. 3. Showing that a model trained on a low-resolution imagenet 32 dataset can generalize its compression capabilities to higher resolution datasets with convincing results. ", "This paper proposes a method for lossless image compression consisting of a VAE and using a bits-back version of ANS. The results are very impressive on a ImageNet (but maybe not so impressive on the other benchmarks). The authors also discuss how to speed up inference and present some frightening runtime numbers for the serial method, and some better numbers for the vectorized version, though they're nowhere close to being practical." ]
We make the following striking observation: fully convolutional VAE models trained on 32×32 ImageNet can generalize well, not just to 64×64 but also to far larger photographs, with no changes to the model. We use this property, applying fully convolutional models to lossless compression, demonstrating a method to scale the VAE-based ‘Bits-Back with ANS’ algorithm for lossless compression (Townsend et al., 2019) to large color photographs, and achieving state of the art for compression of full size ImageNet images. We release Craystack, an open source library for convenient prototyping of lossless compression using probabilistic models, along with full implementations of all of our compression results1.
[ { "affiliations": [], "name": "James Townsend" }, { "affiliations": [], "name": "Thomas Bird" }, { "affiliations": [], "name": "Julius Kunze" }, { "affiliations": [], "name": "David Barber" } ]
[ { "authors": [ "C. Bloom" ], "title": "Understanding ans - 9", "venue": "http://cbloomrants.blogspot.com/2014/02/ 02-10-14-understanding-ans-9.html,", "year": 2014 }, { "authors": [ "J. Duda" ], "title": "Asymmetric numeral systems", "venue": "ArXiv e-prints,", "year": 2009 }, { "authors": [ "B. Frey" ], "title": "Bayesian networks for pattern classification, data compression, and channel coding", "venue": "PhD thesis, University of Toronto,", "year": 1997 }, { "authors": [ "B. Frey", "G. Hinton" ], "title": "Free energy coding", "venue": "In Proceedings of the Data Compression Conference,", "year": 1996 }, { "authors": [ "F. Giesen" ], "title": "Interleaved entropy coders", "venue": "ArXiv e-prints,", "year": 2014 }, { "authors": [ "F. Giesen" ], "title": "rANS in practice", "venue": "https://fgiesen.wordpress.com/2015/12/21/ rans-in-practice/,", "year": 2015 }, { "authors": [ "S. Han", "H. Mao", "W. Dally" ], "title": "Deep compression: compressing deep neural networks with pruning, trained quantization and huffman coding", "venue": "In Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2016 }, { "authors": [ "G. Hinton", "D. van Camp" ], "title": "Keeping neural networks simple by minimizing the description length of the weights", "venue": "In Proceedings of the Sixth Annual Conference on Computational Learning Theory (COLT),", "year": 1993 }, { "authors": [ "J. Ho", "E. Lohn", "P. Abbeel" ], "title": "Compression with Flows via Local Bits-Back Coding", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2019 }, { "authors": [ "E. Hoogeboom", "J.W.T. Peters", "R. van den Berg", "M. Welling" ], "title": "Integer Discrete Flows and Lossless Compression", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2019 }, { "authors": [ "D.P. Kingma", "M. Welling" ], "title": "Auto-encoding variational Bayes", "venue": "In Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2013 }, { "authors": [ "D.P. Kingma", "T. Salimans", "R. Jozefowicz", "X. Chen", "I. Sutskever", "M. Welling" ], "title": "Improved variational inference with inverse autoregressive flow", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2016 }, { "authors": [ "F.H. Kingma", "P. Abbeel", "J. Ho" ], "title": "Bit-Swap: recursive bits-back coding for lossless compression with hierarchical latent variables", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "T. Le Paine", "P. Khorrami", "S. Chang", "Y. Zhang", "P. Ramachandran", "M.A. Hasegawa-Johnson", "T.S. Huang" ], "title": "Fast Wavenet generation", "venue": null, "year": 2016 }, { "authors": [ "L. Maaløe", "M. Fraccaro", "V. Liévin", "O. Winther" ], "title": "BIVA: a very deep hierarchy of latent variables for generative modeling", "venue": "ArXiv e-prints,", "year": 2019 }, { "authors": [ "J. Menick", "N. Kalchbrenner" ], "title": "Generating high fidelity images with subscale pixel networks and multidimensional upscaling", "venue": "In Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "P. Ramachandran", "T. Le Paine", "P. Khorrami", "M. Babaeizadeh", "S. Chang", "Y. Zhang", "M.A. HasegawaJohnson", "R.H. Campbell", "T.S. Huang" ], "title": "Fast generation for convolutional autoregressive models", "venue": "ArXiv e-prints,", "year": 2017 }, { "authors": [ "S. Reed", "A. van den Oord", "N. 
Kalchbrenner", "S. Gómez Colmenarejo", "Z. Wang", "D. Belov", "N. de Freitas" ], "title": "Parallel multiscale autoregressive density estimation", "venue": "In International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "D.J. Rezende", "S. Mohamed", "D. Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "In International Conference on Machine Learning (ICML),", "year": 2014 }, { "authors": [ "T. Salimans", "A. Karpathy", "X. Chen", "D.P. Kingma" ], "title": "Pixelcnn++: improving the pixelcnn with discretized logistic mixture likelihood and other modifications", "venue": "In Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "C. Shannon" ], "title": "A mathematical theory of communication", "venue": "Bell System Technical Journal,", "year": 1948 }, { "authors": [ "J. Sneyers", "P. Wuille" ], "title": "FLIF: Free lossless image format based on maniac compression", "venue": "In IEEE International Conference on Image Processing (ICIP),", "year": 2016 }, { "authors": [ "C.K. Sønderby", "T. Raiko", "L. Maaløe", "S.K. Sønderby", "O. Winther" ], "title": "Ladder variational autoencoders", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2016 }, { "authors": [ "J. Townsend", "T. Bird", "D. Barber" ], "title": "Practical lossless compression with latent variables using bits back coding", "venue": "In Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "K. Ullrich", "E. Meeds", "M. Welling" ], "title": "Soft weight-sharing for neural network compression", "venue": "In Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "A. van den Oord", "S. Dieleman", "H. Zen", "K. Simonyan", "O. Vinyals", "A. Graves", "N. Kalchbrenner", "A. Senior", "K. Kavukcuoglu" ], "title": "WaveNet: a generative model for raw audio", "venue": "ArXiv e-prints,", "year": 2016 }, { "authors": [ "C.S. Wallace" ], "title": "Classification by minimum-message-length inference", "venue": "In Proceedings of the International Conference on Advances in Computing and Information (ICCI),", "year": 1990 }, { "authors": [ "I. Witten", "R. Neal", "J. Cleary" ], "title": "Arithmetic coding for data compression", "venue": "Communications of the ACM,", "year": 1987 }, { "authors": [ "Kingma" ], "title": "2016), the KL terms are individually clamped as max(DKL, λ), where λ is some constant. This is an optimization technique known as free bits, and aims to prevent latent layers in the hierarchy collapsing such that the posterior is equal to the prior. Each layer in the hierarchy consists of a ResNet block with two sets of activations", "venue": "Where DKL is the KL divergence", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Bits back coding (Wallace, 1990; Hinton & van Camp, 1993) is a method for performing lossless compression using a latent variable model. In an ideal implementation, the method can achieve an expected message length equal to the variational free energy, often referred to as the negative evidence lower bound (ELBO) of the model. Bits back was first introduced to form a theoretical argument for using the ELBO as an objective function for machine learning (Hinton & van Camp, 1993).\nThe first implementation of bits back coding (Frey, 1997; Frey & Hinton, 1996) made use of first-infirst-out (FIFO) arithmetic coding (AC) (Witten et al., 1987). However, the implementation did not achieve optimal compression, due to an incompatibility between a FIFO coder and bits back coding, and its use was only demonstrated on a small dataset of 8×8 binary images. Recently, zero-overhead bits back compression with a significantly simpler implementation has been developed by Townsend et al. (2019). This implementation makes use of asymmetric numeral systems (ANS), a last-in-first-out (LIFO) entropy coding scheme (Duda, 2009). The method, known as ‘Bits Back with Asymmetric Numeral Systems’ (BB-ANS) was demonstrated by compressing the MNIST test set using a variational auto-encoder (VAE) model (Kingma & Welling, 2013; Rezende et al., 2014), achieving a compression rate within 1% of the model ELBO.\nMore recently, Hoogeboom et al. (2019) and Ho et al. (2019) have proposed flow-based methods for lossless compression, and Kingma et al. (2019) have presented ‘Bit-Swap’, extending BB-ANS to hierarchical models. In this work we present an alternative method for extending to hierarchical VAEs. This entails the following novel techniques:\n1. Direct coding of arbitrary sized images using a fully convolutional model. 2. A vectorized ANS implementation supporting dynamic shape. 3. Dynamic discretization to avoid having to calibrate a static discretization. 4. Initializing the bits back chain using a different codec.\nWe discuss each of these contributions in detail in Section 3. We call the combination of BB-ANS using a hierarchical latent variable model and the above techniques: ‘Hierarchical Latent Lossless\n∗Equal contribution. 1Available at https://github.com/hilloc-submission/hilloc.\nCompression’ (HiLLoC). In our experiments (Section 4), we demonstrate that HiLLoC can be used to compress color images from the ImageNet test set at rates close to the ELBO, outperforming all of the other codecs which we benchmark. We also demonstrate the speedup, of nearly three orders of magnitude, resulting from vectorization. We release an open source implementation based on ‘Craystack’, a Python package which we have written for general prototyping of lossless compression with ANS." }, { "heading": "2 BACKGROUND", "text": "In this section we briefly describe the BB-ANS algorithm first introduced by Townsend et al. (2019). We begin by giving a high-level description of the ANS LIFO entropy coder (Duda, 2009), along with a new notation for describing the basic ANS operations. Throughout the rest of the paper we use log to mean the base two logarithm, usually denoted log2, and we measure message lengths in bits." }, { "heading": "2.1 ASYMMETRIC NUMERAL SYSTEMS", "text": "As an entropy coder, ANS was designed for compressing sequences of discretely distributed symbols. 
It achieves a compressed message length equal to the negative log-probability (information content) of the sequence plus an implementation-dependent constant, which is usually less than 32 bits. For long sequences, the constant overhead has a negligible contribution to the overall compression rate. Thus, by Shannon's source coding theorem (Shannon, 1948), ANS coding is guaranteed to be near-optimal for long sequences.

There are two basic operations defined by ANS, which we will refer to as 'push' and 'pop'. Push encodes a symbol by adding it to an existing message. It has the signature

$$\text{push} : (\text{message}, \text{symbol}) \mapsto \text{message}'. \quad (1)$$

Pop is the inverse of push, and may be used to decode a symbol and recover a message identical to that before pushing:

$$\text{pop} : \text{message}' \mapsto (\text{message}, \text{symbol}). \quad (2)$$

When multiple symbols are pushed in sequence, they must be popped using the precise inverse procedure, which means popping the symbols in the opposite order. Hence ANS is referred to as a last-in-first-out coder, or a stack.

The push and pop operations require access to a probabilistic model of symbols, summarized by a probability mass function $p$ over the alphabet of possible symbols. The way that symbols are encoded depends on the model, and pushing a symbol $s$ according to $p$ results in an increase in message length of $\log\frac{1}{p(s)}$. Popping $s$ results in an equal reduction in message length. For details on how the ANS operations are implemented, see Duda (2009).

Note that any model/mass function can be used for the pop operation, i.e. there is no hard restriction to use the distribution that was used to encode the message. In this way, rather than decoding the same data that was encoded, pop can actually be used to sample a symbol from a different distribution. The pop method itself is deterministic, so the source of randomness for the sample comes from the data contained within the message. This sampling operation, which can be inverted by pushing the sample back onto the stack, is essential for bits back coding.

For convenience, we introduce the shorthand notation $s \to p(\cdot)$ for encoding (pushing) a symbol $s$ according to $p$, and $s \leftarrow p(\cdot)$ for decoding (popping)." }, { "heading": "2.2 BITS BACK WITH ANS", "text": "Suppose we have a model for data $x$ which involves a latent variable $z$. A sender and receiver wish to communicate a sample $x$. They have access to a prior on $z$, denoted $p(z)$, a likelihood $p(x\,|\,z)$ and a (possibly approximate) posterior $q(z\,|\,x)$, but not the marginal distribution $p(x)$. Without access to $p(x)$, sender and receiver cannot directly code $x$ using ANS. However, BB-ANS specifies an indirect way to push and pop $x$. It does not require access to the marginal $p(x)$, but rather uses the prior, conditional, and posterior from the latent variable model.

Table 1(a) shows, in order from the top, the three steps of the BB-ANS pushing procedure which the sender can perform to encode $x$. The 'Variables' column shows the variables known to the sender before each step. Table 1(b) shows the inverse steps which the receiver can use to pop $x$, with the 'Variables' column showing what is known to the receiver after each step. After decoding $x$, the third step of popping, $z \to q(\cdot\,|\,x)$, is necessary to ensure that BB-ANS pop is a precise inverse of push.

The change in message length from BB-ANS can easily be derived by adding up the quantities in the $\Delta L$ column of Table 1. For encoding we get

$$\Delta L_{\text{BB-ANS}} = -\log\frac{1}{q(z\,|\,x)} + \log\frac{1}{p(x\,|\,z)} + \log\frac{1}{p(z)} \quad (3)$$

$$= -\log\frac{p(x, z)}{q(z\,|\,x)}. \quad (4)$$
Taking the expectation over $z$ gives the expected message length for a datum $x$:

$$L(x) = -\mathbb{E}_{q(z\,|\,x)}\left[\log\frac{p(x, z)}{q(z\,|\,x)}\right], \quad (5)$$

which is the negative evidence lower bound (ELBO), also known as the free energy. This is a commonly used training objective for latent variable models. The above equation implies that latent variable models trained using the ELBO are implicitly being trained to minimize the expected message length of lossless compression using BB-ANS.

Note that, as Table 1 shows, the first step of encoding a data point, $x$, using BB-ANS is to, counterintuitively, decode (and thereby sample) a latent $z \leftarrow q(\cdot\,|\,x)$. This requires that there is already a buffer of random data pushed to the ANS coder, which can be popped. This data used to start the encoding process is recovered after the final stage of decoding, hence the name 'bits back'.

If we have multiple samples to compress, then we can use 'chaining', which is essentially repeated application of the procedure in Table 1 (Townsend et al., 2019). In Section 3.4 we describe how we build up an initial buffer of compressed data by using a different codec to code the first images in a sequence." }, { "heading": "3 SCALING UP BITS BACK WITH ANS", "text": "We now discuss the techniques we introduce to scale up BB-ANS." }, { "heading": "3.1 FULLY CONVOLUTIONAL MODELS", "text": "When all of the layers in the generative and recognition networks of a VAE are either convolutional or elementwise functions (i.e. the VAE has no densely connected layers), then it is possible to evaluate the recognition network on images of any height and width, and similarly to pass latents of any height and width through the generative network to generate an image. Thus, such a VAE can be used as a (probabilistic) model for images of any size.

We exploit this fact, and show empirically in Section 4 that, surprisingly, a fully convolutional VAE trained on 32×32 images can perform well (in the sense of having a high ELBO) as a model for 64×64 images as well as far larger images. This in turn corresponds to a good compression rate, and we implement lossless compression of arbitrary sized images by using a VAE in this way." }, { "heading": "3.2 VECTORIZED LOSSLESS COMPRESSION", "text": "The primary computational bottlenecks in the original BB-ANS implementation (Townsend et al., 2019) were loops over data and latent variables occurring in the Python interpreter. We have been able to vectorize these, achieving an implementation which can scale to large ImageNet images. The effect of vectorization on runtime is shown in Figure 4.

A vectorized implementation of ANS was described in Giesen (2014) using SIMD instructions. This works by expanding the size of the ANS stack head, from a scalar to a vector, and interleaving the output/input bit stream. We implement this in our lossless compression library, Craystack, using Numpy. Please refer to the Craystack code and to Giesen (2014) for more detail. We ensure that the compression rate overhead of vectorization is low by using the BitKnit technique described in Giesen (2015); see Appendix D for more detail. 
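To ground the push and pop mechanics described above, the following is a minimal, deliberately unvectorized rANS sketch using Python's arbitrary-precision integers. It is an illustrative toy rather than Craystack's actual API: a practical coder adds renormalization to fixed-width words, the vectorized stack head and the BitKnit-style flushing discussed above, and the function and variable names here are our own.

```python
def push(state, symbol, freqs, precision):
    """Encode `symbol` (an int index) onto integer `state`.
    `freqs[s]` is the frequency of symbol s; frequencies sum to 2**precision."""
    start = sum(freqs[:symbol])                 # CDF lower bound of the symbol
    freq = freqs[symbol]
    return (state // freq) * (1 << precision) + (state % freq) + start

def pop(state, freqs, precision):
    """Exact inverse of push: recover the last pushed symbol and prior state."""
    remainder = state & ((1 << precision) - 1)
    cum, symbol = 0, 0
    while cum + freqs[symbol] <= remainder:     # find the bin holding remainder
        cum += freqs[symbol]
        symbol += 1
    return freqs[symbol] * (state >> precision) + remainder - cum, symbol

freqs, precision = [1, 2, 5], 3                 # p = (1/8, 2/8, 5/8)
state = 1
for s in [2, 0, 1, 2]:
    state = push(state, s, freqs, precision)
decoded = []
for _ in range(4):
    state, s = pop(state, freqs, precision)
    decoded.append(s)
assert decoded == [2, 1, 0, 2] and state == 1   # LIFO: symbols return reversed
```

The final assertion makes the stack behaviour explicit: symbols come back in reverse order, which is precisely the property BB-ANS exploits when it interleaves pops from $q(z\,|\,x)$ with pushes according to $p(x\,|\,z)$ and $p(z)$.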
Having vectorized, we found that most of the compute time for our compression was spent in neural net inference, whether running on CPU or GPU, which we know to already be reasonably well optimized.

In Craystack, we further generalize the ANS coder using Numpy's n-dimensional array view interface, allowing the stack head to be 'shaped' like an n-dimensional array, or a nested Python data-structure containing arrays. We can then use a shape which fits that of the data that we wish to encode or decode. When coding data according to a VAE we use an ANS stack head shaped into a pair of arrays, matching the shapes of the observation $x$ and the latent $z$. This allows for a straightforward implementation and clarifies the lack of data dependence between certain operations, such as the $x \to p(\cdot\,|\,z)$ and $z \to p(\cdot)$ operations during encoding, which can theoretically be performed concurrently. This vectorized encoding process is visualized in Figure 2." }, { "heading": "3.3 DISCRETIZATION", "text": "It is standard for state of the art latent variable models to use continuous latent variables. Since ANS operates over discrete probability distributions, if we wish to use BB-ANS with such models it is necessary to discretize the latent space so that latent samples can be communicated. Townsend et al. (2019) described a static discretization scheme for the latents in a simple VAE with a single layer of continuous latent variables, and showed that this discretization has a negligible impact on compression rate. The addition of multiple layers of stochastic variables to a VAE has been shown to improve performance (Kingma et al., 2019; Kingma et al., 2016; Maaløe et al., 2019; Sønderby et al., 2016). Motivated by this, we propose a discretization scheme for hierarchical VAEs with multiple layers of latent variables.

The discretization described in Townsend et al. (2019) is formed by dividing the latent space into intervals of equal mass under the prior $p(z)$. For a hierarchical model, the prior on each layer depends on the previous layers:

$$p(z_{1:L}) = p(z_L)\prod_{l=1}^{L-1}p(z_l\,|\,z_{l+1:L}). \quad (6)$$

It isn't immediately possible to use the simple static scheme from Townsend et al. (2019), since the marginals $p(z_1), \dots, p(z_{L-1})$ are not known. Kingma et al. (2019) estimate these marginals by sampling, and create static bins based on the estimates. They demonstrate that this approach can work well. We propose an alternative approach, allowing the discretization to vary with the context of the latents we are trying to code. We refer to our approach as dynamic discretization.

In dynamic discretization, instead of discretizing with respect to the marginals of the prior, we discretize according to the conditionals in the prior, $p(z_l\,|\,z_{l+1:L})$. Specifically, for each latent layer $l$, we partition each dimension into intervals which have equal probability mass under the conditional $p(z_l\,|\,z_{l+1:L})$. This directly generalizes the scheme used in BB-ANS (Townsend et al., 2019). Dynamic discretization is more straightforward to implement because it doesn't require calibrating the discretization to samples. However, it imposes a restriction on model structure; in particular, it requires that posterior inference is done top-down. This precludes the use of Bit-Swap. 
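As a rough illustration of this construction for a single dimension of one latent layer, assuming (as in our models) that the conditional prior and posterior are diagonal Gaussians, the bucketing can be sketched as follows. This is a simplified sketch using SciPy with names of our own choosing, not the Craystack implementation.

```python
import numpy as np
from scipy.stats import norm

def prior_bin_edges(prior_mean, prior_std, n_bins):
    """Interior edges of n_bins equal-mass intervals under the conditional
    prior N(prior_mean, prior_std); the outermost edges are -inf and +inf."""
    quantiles = np.arange(1, n_bins) / n_bins
    return norm.ppf(quantiles, loc=prior_mean, scale=prior_std)

def posterior_bin_masses(post_mean, post_std, edges):
    """Mass of the posterior N(post_mean, post_std) within each prior bin.
    These masses are what the ANS push/pop for this latent operates on."""
    cdf = norm.cdf(edges, loc=post_mean, scale=post_std)
    return np.diff(np.concatenate(([0.0], cdf, [1.0])))
```

Because every bin has mass exactly 1/n_bins under the conditional prior, pushing a bin index according to the prior costs exactly log n_bins bits, and the bins for layer $l$ can only be constructed once the layers above it are known, which is the source of the top-down restriction discussed next.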
In Section 3.3.1 we contrast the model restriction from dynamic discretization with the bottom-up, Markov restriction imposed by Bit-Swap itself.

We give further details about the dynamic discretization implementation we use in Appendix A.

3.3.1 MODEL RESTRICTIONS

The first stage of BB-ANS encoding is to pop from the posterior, $z_{1:L} \leftarrow q(\cdot\,|\,x)$. When using dynamic discretization, popping the layer $z_l$ requires knowledge of the discretization used for $z_l$ and
Using this technique means that our coding rate improves gradually from the FLIF coding rate towards the coding rate achieved by HiLLoC on full images. We compress only 5 ImageNet images using FLIF before we start compressing 32×32 patches using HiLLoC." }, { "heading": "4 EXPERIMENTAL RESULTS", "text": "Using Craystack, we implement HiLLoC with a ResNet VAE (RVAE) (Kingma et al., 2016). This powerful hierarchical latent variable model achieves ELBOs comparable to state of the art autoregressive models2. In all experiments we used an RVAE with 24 stochastic hidden layers. The RVAE utilizes skip connections, which are important to be able to effectively train models with such a deep latent hierarchy. See Appendix E for more details.\nWe trained the RVAE on the ImageNet 32 training set, then evaluated the RVAE ELBO and HiLLoC compression rate on the ImageNet 32 test set. To test generalization, we also evaluated the ELBO\n2Unlike autoregressive models, for which decoding time scales with number of pixels, and is in practice extremely slow, both encoding and decoding with RVAEs are fast.\nand compression rate on the tests sets of ImageNet64, CIFAR10 and full size ImageNet. For full size ImageNet, we used the partitioning method described in 3.4. The results are shown in Table 2.\nFor HiLLoC the compression rates are for the entire test set, except for full ImageNet, where we use 2000 random images from the test set.\nTable 2 shows that HiLLoC achieves competitive compression rates on all benchmarks, and state of the art on full size ImageNet images. The fact that HiLLoC can achieve state of the art compression on ImageNet relative to the baselines, even under a change of distribution, is striking. This provides strong evidence of its efficacy as a general method for lossless compression of natural images. Naively, one might expect a degradation of performance relative to the original test set when changing the test distribution—even more so when the resolution changes. However, in the settings we studied, the opposite was true, in that the average per-pixel ELBO (and thus the compressed message length) was lower on all other datasets compared to the ImageNet 32 validation set.\nIn the case of CIFAR, we conjecture that the reason for this is that its images are simpler and contain more redundancy than ImageNet. This theory is backed up by the performance of standard compression algorithms which, as shown in Table 2, also perform better on CIFAR images than they do on ImageNet 32. We find the compression rate improvement on larger images more surprising. We hypothesize that this is because pixels at the edge of an image are harder to model because they have less context to reduce uncertainty. The ratio of edge pixels to interior pixels is lower for larger images, thus we might expect less uncertainty per pixel in a larger image.\nTo demonstrate the effect of vectorization we timed ANS of single images at different, fixed, sizes, using a fully vectorized and a fully serial implementation. The results are shown in Figure 4, which clearly shows a speedup of nearly three orders of magnitude for all image sizes. We find that the run times for encoding and decoding are roughly linear in the number of pixels, and the time to compress an average sized ImageNet image of 500 × 374 pixels (with vectorized ANS) is around 29s on a desktop computer with 6 CPU cores and a GTX 1060 GPU." 
}, { "heading": "5 DISCUSSION", "text": "Our experiments demonstrate HiLLoC as a bridge between large scale latent variable models and compression. To do this we use simple variants of pre-existing VAE models. Having shown that bits back coding is flexible enough to compress well with large, complex models, we see plenty of work still to be done in searching model structures (i.e. architecture search), optimizing with a trade-off between compression rate, encode/decode time and memory usage. Particularly pertinent for HiLLoC\n3Integer discrete flows, retrieved from Hoogeboom et al. (2019). 4Integer discrete flows trained on ImageNet 32. ImageNet 64 images are split into four 32×32 patches.\nRetrieved from Hoogeboom et al. (2019). 5Local bits back, retrieved from Ho et al. (2019). 6 For Bit-Swap, full size ImageNet images were cropped so that their side lengths were multiples of 32.\nis latent dimensionality, since compute time and memory usage both scale with this. Since the model must be stored/transmitted to use HiLLoC, weight compression is also highly relevant. This is a well-established research area in machine learning (Han et al., 2016; Ullrich et al., 2017).\nOur experiments also demonstrated that one can achieve good performance on a dataset of large images by training on smaller images. This result is promising, but future work should be done to discover what the best training datasets are for coding generic images. One question in particular is whether results could be improved by training on larger images and/or images of varying size. We leave this to future work. Another related direction for improvement is batch compression of images of different sizes using masking, analogous to how samples of different length may be processed in batches by recurrent neural nets.\nWhilst this work has focused on latent variable models, there is also promise in applying state of the art fully observed auto-regressive models to lossless compression. We look forward to future work investigating the performance of models such as WaveNet (van den Oord et al., 2016) for lossless audio compression as well as PixelCNN++ (Salimans et al., 2017) and the state of the art models in Menick & Kalchbrenner (2019) for images. Sampling speed for these models, and thus decompression, scales with autoregressive sequence length, and can be very slow. This could be a serious limitation, particularly in common applications where encoding is performed once but decoding is performed many times. This effect can be mitigated by using dynamic programming (Le Paine et al., 2016; Ramachandran et al., 2017), and altering model architecture (Reed et al., 2017), but on parallel architectures sampling/decompression is still significantly slower than with VAE models.\nOn the other hand, fully observed models, as well as the flow based models of Hoogeboom et al. (2019) and Ho et al. (2019), do not require bits back coding, and therefore do not have to pay the one-off cost of starting a chain. Therefore they may be well suited to situations where one or a few i.i.d. samples are to be communicated. Similar to the way that we use FLIF to code the first images for our experiments, one could initially code images using a fully observed model then switch to a faster latent variable model once a stack of bits has been built up." }, { "heading": "6 CONCLUSION", "text": "We presented HiLLoC, an extension of BB-ANS to hierarchical latent variable models, and show that HiLLoC can perform well with large models. 
We open-sourced our implementation, along with the Craystack package for prototyping lossless compression.\nWe have also explored generalization of large VAE models, and established that fully convolutional VAEs can generalize well to other datasets, including images of very different size to those they were trained on. We have described how to compress images of arbitrary size with HiLLoC, achieving a compression rate superior to the best available codecs on ImageNet images. We look forward to future work reuniting machine learning and lossless compression." }, { "heading": "ACKNOWLEDGMENTS", "text": "We thank Paul Rubenstein for the substantial constructive feedback and advice which he gave us. We also thank the anonymous reviewers for their feedback. This work was supported by the Alan Turing Institute under the EPSRC grant EP/N510129/1." }, { "heading": "A REPARAMETERIZING DISCRETIZED LATENTS", "text": "After discretizing the latent space, the latent variable at layer l can be treated as simply an index il into one of the intervals created by the discretization. As such, we introduce the following notation for pushing and popping according to a discretized version of the posterior.\nil ↔ Ql(· | il+1:L, x) (8)\nWhere Ql(· | il+1:L, x) is the distribution over the intervals of the discretized latent space for zl, with interval masses equal to their probability under q(zl | z̃l+1:L, x). The discretization is created from splitting the latent space into equal mass intervals under p(zl | z̃l+1:L). The mass of a given interval under some distribution is the CDF at the upper bound of the interval minus the CDF at the lower end of the interval. We have used z̃ to indicate that these will be discrete zl values that are reconstructed from the indices il. In practise we take z̃l(il) to be the centre of the interval indexed by il. It is important to note that the Ql has an implicit dependence on the previous prior distributions p(zk|zk+1:L) for k ≥ l, as these prior distributions are required to calculate z̃l+1:L and the discretization of the latent space.\nSince we discretize each latent layer to be intervals of equal mass under the prior, the prior distribution over the indices il becomes a uniform distribution over the interval indices, U(il), which is not dependent on i6=l. Note that this allows us to push/pop the il according to the prior in parallel. The full encoding and decoding procedures with a hierarchical latent model and the dynamic discretization we have described are shown in Table 3. Note that the operations in the two tables are ordered top to bottom." }, { "heading": "B CODEC FOR VARIABLE IMAGE SIZES", "text": "Here we describe a codec to compress a set of images of arbitrary size. The encoder now adds the dimensions of the image being coded to the stream of compressed data, such that the decoder knows what shape the image will be before decoding it. Since we are using a vectorized ANS coder, as described in Section 3.2, we resize the top of the coder in between each coding/decoding step such that the size of the top of the coder matches the sizes of the image and latents being coded. The codec is detailed in Table 4.\nTo make the resizing procedure efficient, we resize via ‘folding’ the top of the vectorized ANS coder such that we are roughly halving/doubling the number of individual ANS coders each time we fold. This makes the cost of the resize logarithmic with the size difference between the vectorized coder and the targeted size." 
}, { "heading": "C COMPRESSION WITH PIXELVAE", "text": "To further demonstrate HiLLoC, we implement it with a PixelVAE model. We use a model with two latent layers, although the posterior is fully factorized. The implementation requires nesting an autoregressive codec inside the BB-ANS codec, since the observations and one of the latent layers in PixelVAE have autoregressive generative distributions. Handling this complexity showcases Craystack, which was designed to support this kind of composition. It would also have been prohibitively slow to run on the datasets we compress without the vectorized ANS scheme discussed in Section 3.2.\nThe achieved compression rate on the entire ImageNet validation set is displayed in Table 5.\nThe autoregressive component of the PixelVAE generative model leads to an asymmetry between the times required for compression and decompression. Compression with the PixelVAE model is readily parallelizable across pixels, since we already have access to the pixel values we wish to compress and thus also the conditional distributions on each pixel. However, decompression (equivalently, sampling) is not parallelizable across pixels, since we must decompress a pixel value in order to give us access to the conditional distribution on the next pixel. This means the time complexity of decompression is linear in the number of pixels, making it prohibitively slow for most image sizes.\nD VECTORIZATION WITHOUT OVERHEADS\nTo ensure that the compression rate overhead from using vectorization is low, we use a technique from the BitKnit codec (Giesen, 2015). When we reach the end of encoding, we could simply concatenate the integers in the (vector) stack head to form the final output message. However, this is inefficient because the stack head is not uniformly distributed. As discussed in Giesen (2015), elements of the top of the stack have a probability mass roughly\np(h) ∝ 1/h. (9)\nEquivalently, the length of h is approximately uniformly distributed. More detailed discussion and an empirical demonstration of this is given by Bloom (2014). An efficient way to form the final output message at the end of decoding, is to fold the stack head vector by repeatedly encoding half of it onto the other half, until only a scalar remains, using the above distribution for the encoding. We implement this technique in Craystack and use it for our experiments. The number of (vectorized) encode steps required is logarithmic in the size (i.e. the number of elements) of the stack head.\nSome of the overhead from vectorization also comes at the start of encoding, when, in existing implementations, the elements of the stack head vector are initialized to copies of a fixed constant. Information from these copies ends up in the message and introduces redundancy which scales with the size of the head. This overhead can be removed by initializing the stack head to a vector of length 1 and then growing the length of the stack head vector gradually as more random data is added to the stack, by decoding new stack head vector elements according to the distribution (9)." }, { "heading": "E RESNET VAE ARCHITECTURE", "text": "A full description of the RVAE architecture is given in Kingma et al. 
(2016), and a full implementation can be found in our repository https://github.com/hilloc-submission/hilloc, but we give a short description below.

The RVAE is a hierarchical latent model, trained by maximization of the usual evidence lower bound (ELBO) on the log-likelihood:

$$\log p(x) \ge \mathbb{E}_{q(z\,|\,x)}\left[\log\frac{p(x, z)}{q(z\,|\,x)}\right]. \quad (10)$$

Take the latent hierarchy to be depth $L$, such that the latents are $z_{1:L}$. There are skip connections in both the generative model, $p(x, z_{1:L})$, and the inference model, $q(z_{1:L}\,|\,x)$. Due to our requirement of using dynamic discretization, we use a top-down inference model.7 This means that we can write

$$p(x, z_{1:L}) = p(x\,|\,z_{1:L})\,p(z_L)\prod_{l=1}^{L-1}p(z_l\,|\,z_{l+1:L}), \quad (11)$$

$$q(z_{1:L}\,|\,x) = q(z_L\,|\,x)\prod_{l=1}^{L-1}q(z_l\,|\,z_{l+1:L}, x), \quad (12)$$

and the ELBO as

$$\log p(x) \ge \mathbb{E}_{q(z_{1:L}\,|\,x)}\big[\log p(x\,|\,z_{1:L})\big] - D_{\mathrm{KL}}\big(q(z_L\,|\,x)\,\|\,p(z_L)\big) \quad (13)$$

$$- \sum_{l=1}^{L-1}\mathbb{E}_{q(z_{l+1:L}\,|\,x)}\Big[D_{\mathrm{KL}}\big(q(z_l\,|\,z_{l+1:L}, x)\,\|\,p(z_l\,|\,z_{l+1:L})\big)\Big], \quad (14)$$

where $D_{\mathrm{KL}}$ is the KL divergence. As in Kingma et al. (2016), the KL terms are individually clamped as $\max(D_{\mathrm{KL}}, \lambda)$, where $\lambda$ is some constant. This is an optimization technique known as free bits, and aims to prevent latent layers in the hierarchy collapsing such that the posterior is equal to the prior.

Each layer in the hierarchy consists of a ResNet block with two sets of activations. One set of activations is calculated bottom-up (in the direction of $x$ to $z_L$), and the other is calculated top-down. The bottom-up activations are used only within $q(z_{1:L}\,|\,x)$, whereas the top-down activations are used by both $q(z_{1:L}\,|\,x)$ and $p(x, z_{1:L})$. Every conditional distribution on a latent $z_l$ is parameterized as a diagonal Gaussian distribution, with mean and covariance a function of the activations within the ResNet block, and the conditional distribution on $x$ is parameterized by a discretized logistic distribution. Given activations for previous ResNet blocks, the activations at the following ResNet block are a combination of stochastic and deterministic features of the previous latent layer, as well as from skip connections directly passing the previous activations. The features are calculated by convolutions.

Note also that all latent layers are the same shape. Since we retained the default hyperparameters from the original implementation, each latent layer has 32 feature maps and spatial dimensions half those of the input (e.g. $\frac{h}{2} \times \frac{w}{2}$ for input of shape $h \times w$).

7Note that in Kingma et al. (2016), this is referred to as bidirectional inference." } ]
2,019
LATENT VARIABLE MODELS
SP:3dd9ae7b88b3e6848ee1fbd11c274d7a395d3167
[ "The goal of this paper is to train neural networks (NNs) in a way to be robust to adversarial attacks. The authors formulate training a NN as finding an optimal controller for a discrete dynamical system. This formulation allows them to use an optimal control algorithm, called method of successive approximations (MSA), to train a NN. The authors then show how constraints can be added to this optimization problem in order to make the trained NN more robust. They show that the resulted constraint optimization problem can be formulated as a semi-definite programming and provide some experimental results. ", "Neural Networks are vulnerable to adversarial perturbations. This paper proposes a method that based on optimal control theory that uses semidefinite-programming. This is a quite popular topic in Adversarial training recently, there has been a few works in that line. There are almost no experiments in this paper. There are several typos in the paper and writing of this paper requires more work. There are several typos in this paper, for example STOA, should be SOTA (in the Section 6.) In its current state, this paper looks very rushed." ]
Deep neural networks are known to be vulnerable to adversarial perturbations. In this paper, we bridge the adversarial robustness of neural nets with the Lyapunov stability of dynamical systems. From this viewpoint, training neural nets is equivalent to finding an optimal control of the discrete dynamical system, which allows one to utilize the method of successive approximations, an optimal control algorithm based on Pontryagin's maximum principle, to train neural nets. This decoupled training method allows us to add constraints to the optimization, which makes the deep model more robust. The constrained optimization problem can be formulated as a semi-definite programming problem and hence can be solved efficiently. Experiments show that our method effectively improves the deep model's adversarial robustness.
[ { "affiliations": [], "name": "LYAPUNOV STABILITY" } ]
[ { "authors": [ "N. Akhtar", "J. Liu", "A. Mian" ], "title": "Defense against universal adversarial perturbations", "venue": "arXiv preprint arXiv:1711.05929,", "year": 2017 }, { "authors": [ "Uri M. Ascher", "Linda R. Petzold" ], "title": "Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations. Society for Industrial and Applied Mathematics, Philadelphia, PA", "venue": "USA, 1st edition,", "year": 1998 }, { "authors": [ "V.G. Boltyanskii", "R.V. Gamkrelidze", "L.S" ], "title": "Pontryagin. The theory of optimal processes", "venue": "i. the maximum principle. Translations. Series 2. American Mathematical Society.,", "year": 1960 }, { "authors": [ "Stephen Boyd", "Lieven Vandenberghe" ], "title": "Convex Optimization", "venue": null, "year": 2004 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "arXiv preprint arXiv:1608.04644,", "year": 2017 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Adversarial examples are not easily detected: Bypassing ten detection methods", "venue": "arXiv preprint arXiv:1705.07263,", "year": 2017 }, { "authors": [ "E. Weinan" ], "title": "A proposal on machine learning via dynamical systems", "venue": "Communications in Mathematics and Statistics, 5:1–11,", "year": 2017 }, { "authors": [ "David E. Rumelhart", "Geoffrey E. Hinton", "Ronald J. Williams" ], "title": "Learning representations by back propagating errors", "venue": "Nature, 323:533–536,", "year": 1986 }, { "authors": [ "T. Gebhart", "P. Schrater" ], "title": "Adversary detection in neural networks via persistent homology", "venue": "arXiv preprint arXiv:1711.10056,", "year": 2017 }, { "authors": [ "Ian Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2015 }, { "authors": [ "S. Gu", "L. Rigazio" ], "title": "Towards deep neural network architectures robust to adversarial examples", "venue": "arXiv preprint arXiv:1412.5068,", "year": 2015 }, { "authors": [ "Eldad Haber", "Lars Ruthotto" ], "title": "Stable architectures for deep neural networks", "venue": "Inverse Problems, 34,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "arXiv preprint arXiv:1512.03385,", "year": 2015 }, { "authors": [ "Christoph Helmberg", "Franz Rendl", "Robert Vanderbei", "Henry Wolkowicz" ], "title": "An interior-point method for semidefinite programming", "venue": "SIAM Journal on Optimization,", "year": 1970 }, { "authors": [ "Morris Hirsch", "Stephen Smale", "Robert L Devaney" ], "title": "Differential Equations, Dynamical Systems and an Introduction to Chaos, volume 14", "venue": null, "year": 2004 }, { "authors": [ "Andrew Ilyas", "Shibani Santurkar", "Dimitris Tsipras", "Logan Engstrom", "Brandon Tran", "Aleksander Madry" ], "title": "Adversarial examples are not bugs, they are features", "venue": "arXiv preprint arXiv:1905.02175,", "year": 1905 }, { "authors": [ "F.L.I.A. Krylov" ], "title": "Chernous’ko. On the method of successive approximations for solution of optimal control problems", "venue": "USSR Computational Mathematics and Mathematical Physics,", "year": 1963 }, { "authors": [ "A. Kurakin", "I. Goodfellow", "S. 
Bengio" ], "title": "Adversarial examples in the physical world", "venue": "arXiv preprint arXiv:1607.02533,", "year": 2016 }, { "authors": [ "Hyeungill Lee", "Sungyeob Han", "Jungwoo Lee" ], "title": "Generative adversarial trainer: Defense to adversarial perturbations with gan", "venue": "arXiv preprint arXiv:1705.03387,", "year": 2017 }, { "authors": [ "Qianxiao Li", "Shuji Hao" ], "title": "An optimal control approach to deep learning and applications to discrete-weight neural networks", "venue": "arXiv preprint arXiv:1803.01299,", "year": 2018 }, { "authors": [ "Qianxiao Li", "Long Chen", "Cheng Tai", "E Weinan" ], "title": "Maximum principle based algorithms for deep learning", "venue": "Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Yiping Lu", "Aoxiao Zhong", "Quanzheng Li", "Bin Dong" ], "title": "Beyond finite layer neural networks: Bridging deep architectures and numerical differential equations", "venue": "arXiv preprint arXiv:1710.10121,", "year": 2017 }, { "authors": [ "Chunchuan Lyu", "Kaizhu Huang", "Hai-Ning Liang" ], "title": "A unified gradient regularization family for adversarial examples", "venue": "arXiv preprint arXiv:1511.06385,", "year": 2015 }, { "authors": [ "A. Nayebi", "S. Ganguli" ], "title": "Biologically inspired protection of deep networks from adversarial attacks", "venue": "arXiv preprint arXiv:1703.09202,", "year": 2017 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Somesh Jha", "Matt Fredrikson", "Z. Berkay Celik", "Ananthram Swami" ], "title": "The limitations of deep learning in adversarial settings", "venue": "arXiv preprint arXiv:1511.07528,", "year": 2016 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Xi Wu", "Somesh Jha", "Ananthram Swami" ], "title": "Distillation as a defense to adversarial perturbations against deep neural networks", "venue": "arXiv preprint arXiv:1606.04435,", "year": 2016 }, { "authors": [ "Andrew Ross", "Finale Doshi-Velez" ], "title": "Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients", "venue": "arXiv preprint arXiv:1711.09404,", "year": 2017 }, { "authors": [ "Sebastian Ruder" ], "title": "An overview of gradient descent optimization algorithms", "venue": "arXiv preprint arXiv:1609.04747,", "year": 2016 }, { "authors": [ "Ludwig Schmidt", "Shibani Santurkar", "Dimitris Tsipras", "Kunal Talwar", "Aleksander Madry" ], "title": "Adversarially robust generalization requires more data", "venue": "arXiv preprint arXiv:1804.11285,", "year": 2018 }, { "authors": [ "Adi Shamir", "Itay Safran", "Eyal Ronen", "Orr Dunkelman" ], "title": "A simple explanation for the existence of adversarial examples with small hamming distance", "venue": "arXiv preprint arXiv:1901.10861,", "year": 1901 }, { "authors": [ "Ke Sun", "Zhanxing Zhu", "Zhouchen Lin" ], "title": "Enhancing the robustness of deep neural networks by boundary conditional gan", "venue": null, "year": 1902 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Lieven Vandenberghe", "Stephen Boyd" ], "title": "Semidefinite programming", "venue": "SIAM Review, 38,", "year": 1998 }, { "authors": [ "Weilin Xu", "David Evans", "Yanjun Qi" ], "title": "Feature squeezing: Detecting adversarial examples in deep neural networks", "venue": "arXiv preprint 
arXiv:1704.01155,", "year": 2018 }, { "authors": [ "Dinghuai Zhang", "Tianyuan Zhang", "Yiping Lu", "Zhanxing Zhu", "Bin Dong" ], "title": "You only propagate once: Accelerating adversarial training via maximal principle", "venue": "arXiv preprint arXiv:1905.00877,", "year": 2019 } ]
[ { "heading": null, "text": "Deep neural networks are known to be vulnerable to adversarial perturbations. In this paper, we bridge adversarial robustness of neural nets with Lyapunov stability of dynamical systems. From this viewpoint, training neural nets is equivalent to finding an optimal control of the discrete dynamical system, which allows one to utilize methods of successive approximations, an optimal control algorithm based on Pontryagin’s maximum principle, to train neural nets. This decoupled training method allows us to add constraints to the optimization, which makes the deep model more robust. The constrained optimization problem can be formulated as a semi-definite programming problem and hence can be solved efficiently. Experiments show that our method effectively improves deep model’s adversarial robustness." }, { "heading": "1 INTRODUCTION", "text": "Deep neural networks achieve state-of-the-art performances on a variety of tasks (LeCun et al., 2015). However, neural nets are known to be vulnerable to adversarial examples. Imperceptibly perturbed inputs can induce erroneous outputs in neural nets (Szegedy et al., 2013). In image classification problems of computer vision, previous work has proposed various methods to attack deep models and induce low accuracy (Goodfellow et al., 2015; Madry et al., 2017; Papernot et al., 2016a; Carlini & Wagner, 2017a). Whereas multiple defenses against adversarial attacks are developed, they don’t ensure safety faced with strong attacking methods. There are also theories that explain the existence of adversarial examples (Ilyas et al., 2019; Shamir et al., 2019), but they often fail to fully explain the features and behaviors of this phenomenon. This makes the study of adversarial attacks important in that it is a threat to real-life machine learning systems (Kurakin et al., 2016).\nIn this paper, we propose a dynamical system view on the adversarial robustness of the models, as well as new method that significantly defense adversarial attacks.\nRecent works have shown the connection between deep neural networks and dynamical systems (E, 2017; Li et al., 2017; Haber & Ruthotto, 2017; Lu et al., 2017). If we regard the neural net as a discretization of an ordinary differential equation (ODE), then training neural nets becomes finding an optimal control of the corresponding discrete dynamical system. Traditionally, we often treat training neural networks as an unconstrained non-convex optimization problem\nmin θ∈Θ J(θ) +R(θ),\nwhere θ denotes the parameters of the model, J denotes the loss function and R denotes the regularizer term, and we solve the problem with (stochastic) gradient-descent based methods (Bottou, 2010; Ruder, 2016). In the training process, we feed the network with a batch of training data, and compute the gradient with forward and backward propagation (E. Rumelhart et al., 1986). The propagation process resembles solving optimal control problems that tune the parameters to make the output be close to target states. This viewpoint motivates us to bridge adversarial robustness with Lyapunov stability of a dynamical system, and to train robust networks with algorithms that find stable optimal control. We will formulate the discussion in later sections." }, { "heading": "2 RELATED WORK", "text": "" }, { "heading": "2.1 ADVERSARIAL DEFENSE", "text": "Many defense methods have been proposed to improve the models’ adversarial robustness. 
The defenses mainly fall into three types: adversarial training (Szegedy et al., 2013; Zhang et al., 2019), modifying the networks (Gu & Rigazio, 2015; Lyu et al., 2015; Papernot et al., 2016b; Nayebi & Ganguli, 2017; Ross & Doshi-Velez, 2017), and adding external models (Lee et al., 2017; Akhtar et al., 2017; Gebhart & Schrater, 2017; Xu et al., 2018; Sun et al., 2019). Although various defense methods have been developed, a defended deep model is often successfully attacked by newly developed attacks or specific counter-countermeasures (Carlini & Wagner, 2017b). It is therefore hoped that defenses against general attacks will be devised to make deep learning models (adversarially) robust to real-life threats." }, { "heading": "2.2 NEURAL ODES AND OPTIMAL CONTROL", "text": "Recent works have bridged deep neural networks with ODEs and dynamical systems. On the one hand, deep residual networks (He et al., 2015) can be interpreted as a forward Euler scheme approximating an ODE (E, 2017), which motivates the design of effective network structures (Lu et al., 2017). On the other hand, regarding the network as a dynamical system allows us to set up an optimal control viewpoint of neural nets. Pontryagin’s Maximum Principle (Boltyanskii et al., 1960) has been applied to train neural nets (Li et al., 2017; Li & Hao, 2018)." }, { "heading": "3 ADVERSARIAL ROBUSTNESS AND LYAPUNOV STABILITY", "text": "" }, { "heading": "3.1 DYNAMICS OF DEEP NEURAL NETS", "text": "Given a T-layer neural net, we let the dynamical system {f_t(x_t, θ_t) : t = 0, . . . , T} represent the network, where x_t is the input of the t-th layer, θ_t is the parameter, and f_t : R^{d_t} × Θ_t → R^{d_{t+1}} denotes the t-th layer’s transformation, which is usually a non-linear function σ(θ_t x_t + b_t) for fully-connected layers, convolution layers, batch normalization layers, etc. Training the neural net can therefore be regarded as controlling the parameters to let the dynamics fit the training data. Specifically, the training optimization problem can be formulated as a typical optimal control problem: min_θ Σ_{i=1}^{B} J(x_T^i) + Σ_{t=0}^{T−1} L(θ_t), subj. to x_{t+1}^i = f_t(x_t^i, θ_t), t = 0, . . . , T − 1, where x^i denotes the i-th input in the batch and B denotes the batch size. J and L are the loss function and the regularizer, respectively. In particular, if the model is a deep residual network with structure x_{t+1} = x_t + f_t(x_t, θ_t), we can regard the problem as the forward Euler discretization of the following continuous optimal control problem: min_θ J(x(T)) + ∫_0^T L(θ(t)) dt, subj. to ẋ = f(t, x(t), θ(t)), x(0) = x, 0 ≤ t ≤ T, where x(t) is a continuous trajectory from the input to the output logits." }, { "heading": "3.2 LYAPUNOV STABILITY", "text": "Adversarial examples are usually clean images with a small, carefully computed perturbation η added. The model predicts the correct label when fed the clean input x_0, while its output is completely different when fed the perturbed input x_0 + η. The dynamical system view of neural nets motivates us to characterize this sensitivity via the Lyapunov stability of a system (Hirsch et al., 2004). Definition 1 (Lyapunov Stability). 
For a given dynamical system ẋ = f(x), x(0) = x_0, with equilibrium x_e: • The system is Lyapunov stable if, for every ε > 0, there exists δ > 0 such that, if ‖x(0) − x_e‖ < δ, then for every t ≥ 0, ‖x(t) − x_e‖ < ε. • The system is asymptotically stable if it is Lyapunov stable and there exists δ > 0 such that if ‖x(0) − x_e‖ < δ, then lim_{t→∞} ‖x(t) − x_e‖ = 0. • The system is exponentially stable if it is asymptotically stable and there exist α > 0, β > 0, δ > 0 such that if ‖x(0) − x_e‖ < δ, then ‖x(t) − x_e‖ ≤ α‖x(0) − x_e‖e^{−βt} for all t ≥ 0. The definitions can be easily extended to discrete-time systems. Intuitively, Lyapunov stability states that for any small perturbation η, the perturbed trajectory stays “close enough” to the original one. If we regard a neural net as a dynamical system and ensure that the network is Lyapunov stable, then the model is robust to all (adversarial) perturbations." }, { "heading": "3.3 ADVERSARIALLY ROBUST NEURAL NETS", "text": "Due to the connection between numerical ODEs and residual networks, we first consider the robustness (i.e., Lyapunov stability) of continuous ODEs. Theorem 1 (Stable ODEs). For a given ODE ẋ = f(t, x, θ) = σ(Ax + b), where σ is the activation function, e.g., the Sigmoid or ReLU function, the system is stable if Re(λ_i(A)) ≤ 0 for all i, where Re denotes the real part and λ_i denotes the i-th eigenvalue. See, e.g., Hirsch et al. (2004) for the proof of this theorem. Theorem 1 provides a set of conditions for stable ODEs. However, a deep residual network is only a forward Euler discretization scheme of a continuous ODE. To ensure numerical stability, we require |1 − λ_i(A)h| ≤ 1 (Ascher & Petzold, 1998), where the step size is h = 1 in residual networks. Combined with the identity mapping in residual networks, we obtain the stability conditions for discrete dynamics. Theorem 2 (Stable Discrete Networks). For a discrete neural network, i.e., discrete dynamics {f_t(x_t, θ_t) : t = 0, . . . , T} with f_t(x_t, θ_t) = σ(θ_t x_t) (we omit the bias term for simplicity), the network is stable if ρ(θ_t) ≤ 1, where ρ(A) = max_i |λ_i(A)| is the spectral radius. If these conditions are added to the otherwise unconstrained training problem, we can greatly improve the adversarial robustness of neural nets. The methods will be discussed in the following section." },
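To make the condition in Theorem 2 concrete, the following is a minimal NumPy sketch (our illustration, not code from the paper) that computes a layer's spectral radius and checks the stability condition; the function names are our own.

```python
import numpy as np

def spectral_radius(theta: np.ndarray) -> float:
    # rho(A) = max_i |lambda_i(A)|, computed from the eigenvalues.
    return float(np.max(np.abs(np.linalg.eigvals(theta))))

def is_stable_layer(theta: np.ndarray, tol: float = 1e-8) -> bool:
    # Stability condition of Theorem 2 for a discrete layer x_{t+1} = sigma(theta x_t).
    return spectral_radius(theta) <= 1.0 + tol

rng = np.random.default_rng(0)
theta = rng.normal(size=(64, 64)) / np.sqrt(64)  # unit-scale random init (example only)
print(spectral_radius(theta), is_stable_layer(theta))
```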
{ "heading": "4 TRAINING ROBUST NEURAL NETS", "text": "" }, { "heading": "4.1 PMP AND MSA", "text": "For deterministic systems, Pontryagin’s Maximum Principle (PMP) (Boltyanskii et al., 1960) provides a set of necessary conditions for an optimal control of the system. Various algorithms have been proposed to solve the deterministic optimal control problem based on PMP. Among them, the Method of Successive Approximations (MSA) (Krylov & Chernous’ko, 1963) is one of the simplest. In the field of deep learning, previous work has utilized MSA to train neural networks (Li et al., 2017; Li & Hao, 2018). Formally, consider the optimal control problem for training neural nets in Section 3. For dynamics {f_t(x_t, θ_t) : t = 0, . . . , T}, assume θ∗ = {θ∗_0, . . . , θ∗_{T−1}} is a solution to the optimal control problem. We also define the Hamiltonian function H : R^{d_t} × R^{d_{t+1}} × Θ_t × [T] → R by H(x, p, θ, t) = p · f_t(x, θ) − L(θ), where the dot denotes the inner product. We have the following necessary conditions for θ∗. Theorem 3 (Pontryagin’s Maximum Principle for Discrete Systems). Assume f_t and J are sufficiently smooth. There exist co-states p∗ = {p∗_0, . . . , p∗_T} such that the following conditions hold: x∗_{t+1} = ∇_p H(x∗_t, p∗_{t+1}, θ∗_t, t), with x∗_0 = x_0; p∗_t = ∇_x H(x∗_t, p∗_{t+1}, θ∗_t, t), with p∗_T = −∇_x J(x∗_T); and θ∗_t = argmax_θ H(x∗_t, p∗_{t+1}, θ, t). For simplicity of notation, we assume here that the batch size is 1; one can easily extend the theorem to the minibatch training case by summing over the batch. The theorem can be proved via the KKT conditions (Boyd & Vandenberghe, 2004), where the co-states can be seen as the Lagrangian dual variables. Examining the conditions in PMP, one finds that the x equations are exactly the forward propagation of a neural net, and the p equations resemble the backward propagation process. The third condition states that the model parameters must maximize the Hamiltonian function. This motivates us to iteratively compute forward and backward propagation and solve the Hamiltonian maximization to find the optimal control, which is exactly the Method of Successive Approximations (Algorithm 1). In practice, we usually add regularizer terms that penalize large changes in the maximization step, preventing drastic steps that cause divergence. For the connection between MSA and back-propagation-based gradient descent algorithms, see the appendix of Li & Hao (2018).
Algorithm 1 The Method of Successive Approximations
Initialize θ^0 = {θ^0_0, . . . , θ^0_{T−1}}, set k = 0;
repeat
Compute the states (forward propagation): x_{t+1} = ∇_p H(x_t, p_{t+1}, θ^k_t, t), t = 0, . . . , T − 1;
Compute the co-states (backward propagation): p_t = ∇_x H(x_t, p_{t+1}, θ^k_t, t), t = T − 1, . . . , 0, with initial p_T = −∇_x J(x_T);
For each t = 0, . . . , T − 1, solve the maximization θ^{k+1}_t = argmax_θ H(x_t, p_{t+1}, θ, t);
Set k = k + 1;
until converged;
The advantages of training by MSA compared with gradient descent algorithms have been discussed in Li et al. (2017); the most significant feature is that the optimization steps on different layers are decoupled. Concretely, after computing the states x and co-states p, the optimization step on layer t only searches over the parameters θ_t. This not only suggests that the optimization process can be accelerated by parallelization, but also allows us to exploit the structure of the problem. The parameter space is greatly reduced compared with the original intractable optimization problem, and hence the optimization is much easier. In particular, this allows us to add constraints that ensure robustness of the model." },
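The decoupled structure of Algorithm 1 is easy to see in code. Below is a schematic NumPy sketch (ours, under simplifying assumptions: linear layers f_t(x, θ_t) = θ_t x, quadratic loss, and a regularized Hamiltonian maximization that admits a closed form); it is not the paper's implementation.

```python
import numpy as np

def msa_train(x0, y, T=3, d=4, iters=50, alpha=1e-2, beta=1.0, seed=0):
    """Toy MSA: linear layers x_{t+1} = theta_t x_t, loss J = 0.5 * ||x_T - y||^2."""
    rng = np.random.default_rng(seed)
    thetas = [rng.normal(scale=0.1, size=(d, d)) for _ in range(T)]
    for _ in range(iters):
        # Forward propagation: compute the states x_t.
        xs = [x0]
        for t in range(T):
            xs.append(thetas[t] @ xs[t])
        # Backward propagation: co-states, with terminal p_T = -grad_x J(x_T).
        ps = [None] * (T + 1)
        ps[T] = -(xs[T] - y)
        for t in range(T - 1, -1, -1):
            ps[t] = thetas[t].T @ ps[t + 1]
        # Decoupled per-layer Hamiltonian maximization (closed form for this toy case):
        # argmax_theta p_{t+1}.(theta x_t) - alpha*||theta||^2 - beta*||theta - theta_old||^2
        for t in range(T):
            thetas[t] = (np.outer(ps[t + 1], xs[t]) + 2 * beta * thetas[t]) / (2 * (alpha + beta))
    return thetas

thetas = msa_train(x0=np.ones(4), y=np.full(4, 0.5))
```

The β-regularized maximization step mirrors the remark above about penalizing large parameter changes to avoid divergence.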
{ "heading": "4.2 ROBUST CONSTRAINTS", "text": "Consider a layer of the form f_t(x) = θ_t x, where for simplicity we treat the activation as a separate layer with no parameters. We can derive the following optimization problem for the Hamiltonian maximization: max_θ p_{t+1} · (θ_t x_t) − α‖θ_t‖²₂ − β‖θ_t − θ′_t‖²₂, subj. to ρ(θ_t) ≤ 1, where α‖θ_t‖²₂ is the L2-norm regularizer (weight decay), and θ′_t is the initial parameter (i.e., θ^k_t in the algorithm); the last term keeps the training process from taking drastic steps that cause divergence. The constraint, as derived in Section 3, is the stability condition for discrete systems. Directly adding this constraint to gradient-descent-based algorithms would make the optimization quite difficult, but the decoupled optimization in MSA allows us to do so. With regard to the constraint on the parameters’ spectral radius, a simple approach is to use special forms of matrices for the parameters, e.g., anti-symmetric matrices. For continuous deep models, the only constraint is that of Theorem 1, i.e., Re(λ_i(θ_t)) ≤ 0. Anti-symmetric matrices have only imaginary eigenvalues, and hence we can replace θ_t with θ_t − θ_tᵀ − γI, where γ is a small positive constant. For general forms of parameters, one can prove the following transformation. Theorem 4. A sufficient condition for ρ(A) ≤ 1 is [ I, A; Aᵀ, I ] ⪰ 0, where A ⪰ B denotes that A − B is positive semi-definite. Proof. Recalling that ρ(A) ≤ ‖A‖₂ = √(λ_max(AᵀA)), we have ‖A‖₂ ≤ 1 ⇔ AᵀA ⪯ I ⇔ [ I, A; Aᵀ, I ] ⪰ 0. Hence we can replace ρ(θ_t) ≤ 1 with a positive semi-definite condition, turning the Hamiltonian maximization into a new optimization problem whose objective is quadratic and whose constraint is a semi-definite condition. This can be reduced to a semi-definite programming (SDP) problem (Vandenberghe & Boyd, 1998), a special case of convex optimization, and thus can be solved efficiently in polynomial time by, e.g., interior point methods (Helmberg et al., 1970). Here we summarize our method. For a given neural network, we use MSA to train the model, i.e., we iteratively compute the states (forward propagation) and co-states (backward propagation) and solve the optimization for each layer. Instead of directly maximizing the Hamiltonian, we add a positive semi-definite constraint to the optimization problem, which leads to a stable control of the dynamics." },
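To illustrate how the constrained maximization might be handled, here is a minimal NumPy sketch (ours, not the paper's TensorFlow interior-point implementation). It enforces the stronger surrogate ‖θ‖₂ ≤ 1 — which implies ρ(θ) ≤ 1, exactly the inequality used in the proof of Theorem 4 — by clipping singular values after an unconstrained ascent step; a proper SDP solver could be substituted.

```python
import numpy as np

def project_spectral_ball(theta: np.ndarray, radius: float = 1.0) -> np.ndarray:
    # Clip singular values so that ||theta||_2 <= radius; since rho(A) <= ||A||_2,
    # this also enforces the spectral-radius constraint of Theorem 2.
    u, s, vt = np.linalg.svd(theta, full_matrices=False)
    return u @ np.diag(np.minimum(s, radius)) @ vt

def constrained_hamiltonian_step(theta, x_t, p_next, alpha=1e-2, lr=0.1):
    # One projected-ascent step on p.(theta x) - alpha*||theta||^2; the beta-proximal
    # term of Section 4.2 has zero gradient at the current iterate and is omitted.
    grad = np.outer(p_next, x_t) - 2 * alpha * theta
    return project_spectral_ball(theta + lr * grad)
```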
{ "heading": "5 EXPERIMENTS", "text": "" }, { "heading": "5.1 EXPERIMENT SETUP", "text": "To evaluate the effectiveness of our method, we conduct experiments on CIFAR10. We trained the network on clean data, with adversarial training (PGD-10), and with robust training (our method), respectively. We used FGSM (Goodfellow et al., 2015), PGD-10 (Madry et al., 2017) and C&W (Carlini & Wagner, 2017a) to attack the network. Due to the limitations of TensorFlow, we used a simple interior point method with gradient descent to solve the SDP. The network model was an 18-layer residual network (He et al., 2015) with 8 residual blocks. We set the perturbation size to ε = 0.1 for both FGSM and PGD. For C&W, we used the L0 metric. We trained the model for 150 epochs with a batch size of 200. The learning rate was set to 10⁻² initially and was divided by 5 at epochs 30, 60 and 100. The regularizer constant was set to 10⁻³." }, { "heading": "5.2 RESULTS", "text": "The results can be seen in Table 1. The accuracy of the robust models on clean data is lower than the vanilla model’s, since robust training and generalization are more difficult and require more data (Schmidt et al., 2018). Our method improves the model’s adversarial robustness compared with the vanilla model. Figure 1 displays the eigenvalues of the last fully-connected layer’s parameter. The complex norms of the eigenvalues (i.e., the spectral radius) of the model trained by our method are effectively bounded below 1, which satisfies the robustness constraint on parameters from Section 4.2, while the eigenvalues under natural training are randomly distributed in the complex plane. Our method is not as effective as the traditional adversarial training method. However, it has the following main advantages: (a) The training process does not require large numbers of gradient propagations, which consume much time in adversarial training; in our experiment, adversarial training spends about 10 times as much GPU time as our method. (b) The decoupled training process allows us to set different hyperparameters and training methods for different layers, which offers more flexibility for large-scale training; we can further control the behavior of different layers in adversarial settings. (c) Lyapunov stability provides a framework for analyzing the adversarial robustness of deep models, which may lead to a theoretical analysis of adversarial examples in future work." }, { "heading": "6 DISCUSSION AND FUTURE WORK", "text": "Motivated by the dynamical system view of neural networks, this work bridges the adversarial robustness of deep neural models with the Lyapunov stability of dynamical systems, and proposes a method that uses a stable optimal control algorithm to train neural networks in order to improve their adversarial robustness. Though the results do not surpass state-of-the-art defense methods, the stable-control view of training neural nets points out another direction towards adversarially robust models. For future work, on the one hand, mathematical analysis of the Lyapunov stability of neural models may provide a theoretical understanding of adversarial robustness. On the other hand, popular platforms for deep learning, e.g., TensorFlow and PyTorch, do not provide frameworks for optimal control; better results could be obtained if specialized SDP algorithms were applied to solve the optimization problem." } ]
2019
null
SP:b63d45fa7937d0efe9d4d471ca75e52114393ea7
[ "This paper presents an algorithm for adversarial imitation that uses off-policy data in a principled manner, unlike prior work. The core idea is to express the KL-divergence between the policy's state-action marginal and the expert's state action marginal using the Donsker-Varadhan representation and then applying the change of variable similar to DualDICE to avoid computing the marginal of the current policy, thus getting rid of the on-policy sampling requirement. The paper then shows how the auxiliary variable (critic) added to the optimization is a value function that maximizes the corresponding induced reward in AIL methods, thus unifying the objectives for policy optimization and reward learning. The authors then present practical considerations needed in getting this formulation to work, including sampling from a replay buffer, biased sampling for the exponentiated term and avoid the double-sampling issue. Finally, the paper presents some results, which show that valueDICE is comparable to most of the other imitation learning methods. ", "This paper provides a novel off policy objective to solve imitation learning. It resolves the limitation of the famous GAIL algorithm that we need on-policy samples to interact with the environment. The new algorithm is simple but efficient, and can handle off-policy settings. The derivation of equation (12) is nice and intuitive, provide a potential on creating new imitation learning algorithm. Empirical results show that the new algorithm can perform as good as the state-of-the-art baseline, under on-policy setting." ]
When performing imitation learning from expert demonstrations, distribution matching is a popular approach, in which one alternates between estimating distribution ratios and then using these ratios as rewards in a standard reinforcement learning (RL) algorithm. Traditionally, estimation of the distribution ratio requires on-policy data, which has caused previous work to either be exorbitantly data-inefficient or alter the original objective in a manner that can drastically change its optimum. In this work, we show how the original distribution ratio estimation objective may be transformed in a principled manner to yield a completely off-policy objective. In addition to the data-efficiency that this provides, we are able to show that this objective also renders the use of a separate RL optimization unnecessary. Rather, an imitation policy may be learned directly from this objective without the use of explicit rewards. We call the resulting algorithm ValueDICE and evaluate it on a suite of popular imitation learning benchmarks, finding that it can achieve state-of-the-art sample efficiency and performance.1
[ { "affiliations": [], "name": "Ilya Kostrikov" }, { "affiliations": [], "name": "Ofir Nachum" }, { "affiliations": [], "name": "Jonathan Tompson" } ]
[ { "authors": [ "Pieter Abbeel", "Andrew Y Ng" ], "title": "Apprenticeship learning via inverse reinforcement learning", "venue": "In Proceedings of the twenty-first international conference on Machine learning,", "year": 2004 }, { "authors": [ "Marcin Andrychowicz", "Bowen Baker", "Maciek Chociej", "Rafal Jozefowicz", "Bob McGrew", "Jakub Pachocki", "Arthur Petron", "Matthias Plappert", "Glenn Powell", "Alex Ray" ], "title": "Learning dexterous in-hand manipulation", "venue": "arXiv preprint arXiv:1808.00177,", "year": 2018 }, { "authors": [ "Mohamed Ishmael Belghazi", "Aristide Baratin", "Sai Rajeswar", "Sherjil Ozair", "Yoshua Bengio", "Aaron Courville", "R Devon Hjelm" ], "title": "Mine: mutual information neural estimation", "venue": "arXiv preprint arXiv:1801.04062,", "year": 2018 }, { "authors": [ "D.P. Bertsekas" ], "title": "Nonlinear Programming", "venue": "Athena Scientific,", "year": 1999 }, { "authors": [ "Mariusz Bojarski", "Davide Del Testa", "Daniel Dworakowski", "Bernhard Firner", "Beat Flepp", "Prasoon Goyal", "Lawrence D Jackel", "Mathew Monfort", "Urs Muller", "Jiakai Zhang" ], "title": "End to end learning for self-driving cars", "venue": "arXiv preprint arXiv:1604.07316,", "year": 2016 }, { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale gan training for high fidelity natural image synthesis", "venue": "arXiv preprint arXiv:1809.11096,", "year": 2018 }, { "authors": [ "Monroe D Donsker", "SR Srinivasa Varadhan" ], "title": "Asymptotic evaluation of certain markov process expectations for large time", "venue": "iv. Communications on Pure and Applied Mathematics,", "year": 1983 }, { "authors": [ "Justin Fu", "Katie Luo", "Sergey Levine" ], "title": "Learning robust rewards with adversarial inverse reinforcement learning", "venue": "arXiv preprint arXiv:1710.11248,", "year": 2017 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron C Courville" ], "title": "Improved training of wasserstein gans", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Kristian Hartikainen", "George Tucker", "Sehoon Ha", "Jie Tan", "Vikash Kumar", "Henry Zhu", "Abhishek Gupta", "Pieter Abbeel" ], "title": "Soft actor-critic algorithms and applications", "venue": "arXiv preprint arXiv:1812.05905,", "year": 2018 }, { "authors": [ "Karol Hausman", "Yevgen Chebotar", "Stefan Schaal", "Gaurav Sukhatme", "Joseph J Lim" ], "title": "Multimodal imitation learning from unstructured demonstrations using generative adversarial nets", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Jonathan Ho", "Stefano Ermon" ], "title": "Generative adversarial imitation learning", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Liyiming Ke", "Matt Barnes", "Wen Sun", "Gilwoo Lee", "Sanjiban Choudhury", "Siddhartha Srinivasa" ], "title": "Imitation learning as f -divergence minimization", "venue": null, "year": 1905 }, { "authors": [ "Ilya Kostrikov", "Kumar Krishna Agrawal", "Debidatta Dwibedi", "Sergey Levine", "Jonathan Tompson" ], "title": "Discriminator-actor-critic: Addressing 
sample inefficiency and reward bias in adversarial imitation learning", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Yunzhu Li", "Jiaming Song", "Stefano Ermon" ], "title": "Infogail: Interpretable imitation learning from visual demonstrations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Urs Muller", "Jan Ben", "Eric Cosatto", "Beat Flepp", "Yann L Cun" ], "title": "Off-road obstacle avoidance through end-to-end learning", "venue": "In Advances in neural information processing systems,", "year": 2006 }, { "authors": [ "Ofir Nachum", "Michael Ahn", "Hugo Ponte", "Shixiang Gu", "Vikash Kumar" ], "title": "Multi-agent manipulation via locomotion using hierarchical sim2real", "venue": "arXiv preprint arXiv:1908.05224,", "year": 2019 }, { "authors": [ "Ofir Nachum", "Yinlam Chow", "Bo Dai", "Lihong Li" ], "title": "Dualdice: Efficient estimation of off-policy stationary distribution corrections", "venue": null, "year": 2019 }, { "authors": [ "Jan Peters", "Katharina Mulling", "Yasemin Altun" ], "title": "Relative entropy policy search", "venue": "In Twenty-Fourth AAAI Conference on Artificial Intelligence,", "year": 2010 }, { "authors": [ "Dean A Pomerleau" ], "title": "Alvinn: An autonomous land vehicle in a neural network", "venue": "In Advances in neural information processing systems,", "year": 1989 }, { "authors": [ "Martin L Puterman" ], "title": "Markov Decision Processes: Discrete Stochastic Dynamic Programming", "venue": null, "year": 2014 }, { "authors": [ "Stéphane Ross", "Geoffrey Gordon", "Drew Bagnell" ], "title": "A reduction of imitation learning and structured prediction to no-regret online learning", "venue": "In Proceedings of the fourteenth international conference on artificial intelligence and statistics,", "year": 2011 }, { "authors": [ "Fumihiro Sasaki", "Tetsuya Yohira", "Atsuo Kawaguchi" ], "title": "Sample efficient imitation learning for continuous control", "venue": "In International Conference on Learning Representations,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Reinforcement learning (RL) is typically framed as learning a behavior policy based on reward feedback from trial-and-error experience. Accordingly, many successful demonstrations of RL often rely on carefully handcrafted rewards with various bonuses and penalties designed to encourage intended behavior (Nachum et al., 2019a; Andrychowicz et al., 2018). In contrast, many real-world behaviors are easier to demonstrate rather than devise explicit rewards. This realization is at the heart of imitation learning (Ho & Ermon, 2016; Ng et al.; Pomerleau, 1989), in which one aims to learn a behavior policy from a set of expert demonstrations – logged experience data of a near-optimal policy interacting with the environment – without explicit knowledge of rewards.\nDistribution matching via adversarial learning, or Adversarial Imitation Learning (AIL), has recently become a popular approach for imitation learning (Ho & Ermon, 2016; Fu et al., 2017; Ke et al., 2019; Kostrikov et al., 2019). These methods interpret the states and actions provided in the expert demonstrations as a finite sample from a target distribution. Imitation learning can then be framed as learning a behavior policy which minimizes a divergence between this target distribution and the state-action distribution induced by the behavior policy interacting with the environment. As derived by Ho & Ermon (2016), this divergence minimization may be achieved by iteratively performing two alternating steps, reminiscent of GAN algorithms (Goodfellow et al., 2014). First, one estimates the density ratio of states and actions between the target distribution and the behavior policy. Then, these density ratios are used as rewards for a standard RL algorithm, and the behavior policy is updated to maximize these cumulative rewards (data distribution ratios).\nThe main limitation of current distribution matching approaches is that estimating distribution density ratios (the first step of every iteration) typically requires samples from the behavior policy distribution. This means that every iteration – every update to the behavior policy – requires new interactions with the environment, precluding the use of these algorithms in settings where interactions with the environment are expensive and limited. Several papers attempt to relax this on-policy\n∗Also at NYU. 1Code to reproduce our results is available at https://github.com/google-research/\ngoogle-research/tree/master/value_dice.\nrequirement and resolve the sample inefficiency problem by designing off-policy imitation learning algorithms, which may take advantage of past logged data, usually in the form of a replay buffer (Kostrikov et al., 2019; Sasaki et al., 2019). However, these methods do so by altering the original divergence minimization objective to measure a divergence between the target expert distribution and the replay buffer distribution. Accordingly, there is no guarantee that the learned policy will recover the desired target distribution.\nIn this work, we introduce an algorithm for imitation learning that, on the one hand, performs divergence minimization as in the original AIL methods, while on the other hand, is completely off-policy. We begin by providing a new formulation of the minimum divergence objective that avoids the use of any explicit on-policy expectations. 
While this objective may be used in the traditional way to estimate data distribution ratios that are then input to an RL algorithm, we go further to show how the specific form of the derived objective renders the use of a separate RL optimization unnecessary. Rather, gradients of the minimum divergence objective with respect to the behavior policy may be computed directly. This way, an imitating behavior policy may be learned to minimize the divergence without the use of explicit rewards. We call this streamlined imitation learning algorithm ValueDICE. In addition to being simpler than standard imitation learning methods, we show that our proposed algorithm is able to achieve state-of-the-art performance on a suite of imitation learning benchmarks." }, { "heading": "2 BACKGROUND", "text": "We consider environments represented as a Markov Decision Process (MDP) (Puterman, 2014), defined by the tuple (S, A, p0(s), p(s′|s, a), r(s, a), γ), where S and A are the state and action space, respectively, p0(s) is an initial state distribution, p(s′|s, a) defines the environment dynamics represented as a conditional state distribution, r(s, a) is a reward function, and γ is a return discount factor. A behavior policy π(·|·) interacts with the environment to yield experience (st, at, rt, st+1), for t = 0, 1, . . . , where s0 ∼ p0(·), at ∼ π(·|st), st+1 ∼ p(·|st, at), rt = r(st, at). Without loss of generality, we consider infinite-horizon, non-terminating environments. In standard RL, one aims to learn a behavior policy π(·|s) to maximize cumulative rewards, based on experience gained from interacting with the environment. In imitation learning (Pomerleau, 1989; Abbeel & Ng, 2004; Ho & Ermon, 2016), the environment reward is not observed. Rather, one has access to a set of expert demonstrations D := {(sk, ak, s′k)}_{k=1}^N given by state-action-next-state transitions in the environment induced by an unknown expert policy πexp, and the goal is to learn a behavior policy π which recovers πexp. During the learning process, in addition to the finite set of expert demonstrations D, one may also optionally interact with the environment (in these interactions, no rewards are observed). This setting describes a number of real-world applications where rewards are unknown, such as Pomerleau (1989); Muller et al. (2006); Bojarski et al. (2016)." }, { "heading": "2.1 BEHAVIORAL CLONING (BC)", "text": "Supervised behavioral cloning (BC) is a popular approach for imitation learning. Given a set of expert demonstrations, a mapping of state observations to actions is fit using regression or density estimation. In the simplest case, one simply trains the behavior policy π to minimize the negative log-likelihood of the observed expert actions: min_π JBC(π) := −(1/N) Σ_{k=1}^N log π(ak|sk). (1) Unlike Inverse Reinforcement Learning (IRL) algorithms (e.g., GAIL (Ho & Ermon, 2016)), BC does not perform any additional policy interactions with the learning environment and hence does not suffer from the same issue of policy sample complexity. However, behavioral cloning suffers from distributional drift (Ross et al., 2011); i.e., there is no way for π to learn how to recover if it deviates from the expert behavior to a state s̃ not seen in the expert demonstrations." },
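For reference, Equation 1 is a one-liner in a deep learning framework. Below is a hedged PyTorch sketch (ours; the paper's released code is in TensorFlow) of the BC loss for a Gaussian policy with a placeholder architecture.

```python
import torch
import torch.nn as nn

class GaussianPolicy(nn.Module):
    def __init__(self, state_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, action_dim))
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def log_prob(self, states, actions):
        dist = torch.distributions.Normal(self.net(states), self.log_std.exp())
        return dist.log_prob(actions).sum(-1)

def bc_loss(policy, expert_states, expert_actions):
    # J_BC(pi) = -(1/N) * sum_k log pi(a_k | s_k)   (Equation 1)
    return -policy.log_prob(expert_states, expert_actions).mean()
```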
{ "heading": "2.2 DISTRIBUTION MATCHING", "text": "The distribution matching approach provides a family of methods that are robust to distributional shift. Rather than considering the policy directly as a conditional distribution π(·|s) over actions, this approach considers the state-action distribution induced by a policy. In particular, under certain conditions (Puterman, 2014), there is a one-to-one correspondence between a policy and its state-action distribution dπ, defined as dπ(s, a) = (1 − γ) · Σ_{t=0}^∞ γ^t p(st = s, at = a | s0 ∼ p0(·), st ∼ p(·|st−1, at−1), at ∼ π(·|st)). (2) By the same token, the unknown expert policy πexp also possesses a state-action distribution dexp, and one may usually assume that the expert demonstrations D := {(sk, ak, s′k)}_{k=1}^N are sampled as (sk, ak) ∼ dexp, s′k ∼ p(·|sk, ak). Accordingly, the distribution matching approach proposes to learn π to minimize the divergence between dπ and dexp. The KL-divergence is typically used to measure the discrepancy between dπ and dexp (Ho & Ermon, 2016; Ke et al., 2019): −DKL(dπ||dexp) = E(s,a)∼dπ[log(dexp(s, a)/dπ(s, a))]. (3) The use of the KL-divergence is convenient, as it may be expressed as an RL problem where rewards are given by log distribution ratios: −DKL(dπ||dexp) = (1 − γ) · E_{s0∼p0(·), at∼π(·|st), st+1∼p(·|st,at)}[Σ_{t=0}^∞ γ^t log(dexp(st, at)/dπ(st, at))]. (4) In other words, if one has access to estimates of the distribution ratios of the two policies, then the minimum-divergence problem reduces to a max-return RL problem with rewards r̃(s, a) = log(dexp(s, a)/dπ(s, a)). Any on-policy or off-policy RL algorithm can be used to maximize the corresponding expected returns in Equation 4. Capitalizing on this observation, Ho & Ermon (2016) and Ke et al. (2019) propose algorithms (e.g., GAIL) in which the distribution ratio is estimated using a GAN-like objective: max_{h:S×A→(0,1)} JGAIL(h) := E(s,a)∼dexp[log h(s, a)] + E(s,a)∼dπ[log(1 − h(s, a))]. (5) In this objective, the function h acts as a discriminator, discriminating between samples (s, a) from dexp and dπ. The optimal discriminator satisfies log h∗(s, a) − log(1 − h∗(s, a)) = log(dexp(s, a)/dπ(s, a)), (6) and so the distribution matching rewards may be computed as r̃(s, a) = log h∗(s, a) − log(1 − h∗(s, a)). In practice, the discriminator is not fully optimized, and instead gradient updates to the discriminator and policy are alternated (a schematic of such an update is sketched after this section). These prior distribution matching approaches possess two limitations, which we will resolve with our proposed ValueDICE algorithm: • On-policy. Arguably the main limitation of these prior approaches is that they require access to on-policy samples from dπ. While off-policy RL can be used for learning π, optimizing the discriminator h necessitates having on-policy samples (the second expectation in Equation 5). Thus, in practice, GAIL requires a prohibitively large number of environment interactions, making it infeasible for use in many real-world applications. Attempts to remedy this, such as Discriminator-Actor-Critic (DAC) (Kostrikov et al., 2019), often do so via ad-hoc methods; for example, changing the on-policy expectation E(s,a)∼dπ[log(1 − h(s, a))] in Equation 5 to an expectation over the replay buffer, E(s,a)∼dRB[log(1 − h(s, a))]. While DAC achieves good empirical results, it does not guarantee distribution matching of π to πexp, especially when dRB is far from dπ. • Separate RL optimization. Prior approaches require iteratively taking alternating steps: first estimate the data distribution ratios using the GAN-like objective, then input these into an RL optimization, and repeat. The use of a separate RL algorithm introduces complexity to any implementation of these approaches, with many additional design choices that need to be made and more function approximators to learn (e.g., value functions). Our introduced ValueDICE will be shown not to need a separate RL optimization." },
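To make the GAN-like objective of Equations 5–6 concrete, here is a minimal PyTorch sketch (ours) of one discriminator update and the induced AIL reward; the discriminator architecture and dimensions are placeholder assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

state_dim, action_dim = 11, 3  # hypothetical environment shapes
disc = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.Tanh(), nn.Linear(64, 1))

def discriminator_step(expert_sa, policy_sa, optimizer):
    # J_GAIL(h): push h toward 1 on expert pairs and 0 on policy pairs (Eq. 5).
    logits_exp, logits_pi = disc(expert_sa), disc(policy_sa)
    loss = F.binary_cross_entropy_with_logits(logits_exp, torch.ones_like(logits_exp)) \
         + F.binary_cross_entropy_with_logits(logits_pi, torch.zeros_like(logits_pi))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

def ail_reward(sa):
    # At the optimum, the logit equals log h* - log(1 - h*) = log(dexp/dpi)  (Eq. 6).
    with torch.no_grad():
        return disc(sa).squeeze(-1)
```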
{ "heading": "3 OFF-POLICY FORMULATION OF THE KL-DIVERGENCE", "text": "As is standard in distribution matching, we begin with the KL-divergence between the policy's state-action occupancies and the expert's. However, in contrast to the form used in Equation 4 or 5, we use the Donsker-Varadhan representation (Donsker & Varadhan, 1983), given by −DKL(dπ||dexp) = min_{x:S×A→R} log E(s,a)∼dexp[e^{x(s,a)}] − E(s,a)∼dπ[x(s, a)]. (7) Similar to Equation 5, this dual representation of the KL has a property that is important for imitation learning. The optimal x∗ is equal to the log distribution ratio (plus a constant): x∗(s, a) = log(dπ(s, a)/dexp(s, a)) + C. (8) (This result is easy to derive by setting the gradient of the Donsker-Varadhan representation to zero and solving for x∗.) In our considered infinite-horizon setting, the constant does not affect optimality, and so we will ignore it (take C = 0). If one were to take a GAIL-like approach, one could use this form of the KL to estimate distribution matching rewards given by r̃(s, a) = −x∗(s, a), and these could then be maximized by any standard RL algorithm. However, there is no clear advantage of this objective over GAIL, since it still relies on an expectation with respect to on-policy samples from dπ.
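As a quick numeric sanity check of Equations 7–8 (our illustration, not from the paper), the snippet below verifies on discrete distributions that plugging the optimal x∗ = log(dπ/dexp) into the Donsker-Varadhan objective recovers −DKL(dπ||dexp).

```python
import numpy as np

rng = np.random.default_rng(0)
d_pi = rng.dirichlet(np.ones(10))    # stand-in for the policy occupancy d^pi
d_exp = rng.dirichlet(np.ones(10))   # stand-in for the expert occupancy d^exp

x_star = np.log(d_pi / d_exp)        # optimal dual variable (Eq. 8 with C = 0)
dv_value = np.log(np.sum(d_exp * np.exp(x_star))) - np.sum(d_pi * x_star)
kl = np.sum(d_pi * np.log(d_pi / d_exp))

assert np.isclose(dv_value, -kl)     # the Donsker-Varadhan optimum equals -KL (Eq. 7)
```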
Indeed, looking at our\n2This result is easy to derive by setting the gradient of the Donsker-Varadhan representation to zero and solving for x∗.\n3This change of variables is valid when one assumes log dπ(s, a)/dexp(s, a) ∈ K for all s ∈ S, a ∈ A, where K is some bounded subset of R, and x is restricted to the family of functions S ×A → K.\n4DICE (Nachum et al., 2019b) is an abbreviation for discounted distribution correction estimation.\nformulation of the KL in Equation 12, we see that gradients of this objective with respect to π may be easily computed. Specifically, we may express the distribution matching objective for π as a max-min optimization: max π min ν:S×A→R JDICE(π, ν) := log E (s,a)∼dexp [eν(s,a)−B πν(s,a)]−(1−γ)· E\ns0∼p0(·), a0∼π(·|s0)\n[ν(s0, a0)]. (13)\nIf the inner objective over ν is sufficiently optimized, the gradients of π may be computed directly (Bertsekas, 1999), noting that,\n∂\n∂π eν(s,a)−B πν(s,a) = −γ · eν(s,a)−B πν(s,a) · Es′∼T (s,a),a′∼π(s′)[ν(s′, a′)∇ log π(a′|s′)], (14)\n∂\n∂π Es0∼p0(·),a0∼π(·|s0)[ν(s0, a0)] = Es0∼p0(·),a0∼π(·|s0)[ν(s0, a0)∇ log π(a0|s0)]. (15)\nIn continuous control environments when π is parameterized by a Gaussian and ν is a neural network, one may use the re-parameterization trick (Haarnoja et al., 2018) to compute gradients of the ν-values with respect to policy mean and variance directly as opposed to computing ∇ log π(a|s). Please see the appendix for a full pseudocode implementation of ValueDICE. We note that in practice, as in GAIL, we do not train ν until optimality but rather alternate ν and π updates.\nThe mechanics of learning π according to the ValueDICE objective are straightforward, but what is the underlying reason for this more streamlined policy learning? How does it relate the standard protocol of alternating data distribution estimation with RL optimization? To better understand this, we consider the form of ν when it is completely optimized. If we consider the original change of variables (Equation 9) and optimality characterization (Equation 8) we have,\nν∗(s, a)− Bπν∗(s, a) = x∗(s, a) = log d π(s, a)\ndexp(s, a) . (16)\nFrom this characterization of ν∗, we realize that ν∗ is a sort of Q-value function: ν∗(s, a) is the future discounted sum of rewards r̃(s, a) := log d\nπ(s,a) dexp(s,a) when acting according to π. The gradients\nfor π then encourage the policy to choose actions which minimize ν∗(s, a), i.e., maximize future discounted log ratios log d\nexp(s,a) dπ(s,a) . Thus we realize that the objective for π in ValueDICE performs\nexactly the RL optimization suggested by Equation 4. The streamlined nature of ValueDICE comes from the fact that the value function ν (which would traditionally need to be learned as a critic in a separate actor-critic RL algorithm) is learned directly from the same objective as that used for distribution matching.\nThus, in addition to estimating a proper divergence between dπ and dexp in an off-policy manner, ValueDICE also greatly simplifies the implementation of distribution matching algorithms. There is no longer a need to use a separate RL algorithm for learning π, and moreover, the use of ν as a value function removes any use of explicit rewards. Instead, the objective and implementation are only in terms of policy π and function ν." 
}, { "heading": "5 SOME PRACTICAL CONSIDERATIONS", "text": "In order to make use of the ValueDICE objective (Equation 13) in practical scenarios, where one does not have access to dexp or p0(·) but rather only limited finite samples, we perform several modifications." }, { "heading": "5.1 EMPIRICAL EXPECTATIONS", "text": "The objective in Equation 13 contains three expectations:\n1. An expectation over dexp (the first term of the objective). Note that this expectation has a logarithm outside of it, which would make any mini-batch approximations of the gradient of this expectation biased.\n2. An expectation over p0(·) (the second term of the objective). This term is linear, and so is very amenable to mini-batch optimization.\n3. An expectation over the environment transition p(·|s, a) used to compute Bπν(s, a). This expectation has a log-expected-exponent applied to it, so its mini-batch approximated gradient would be biased in general.\nFor the first expectation, previous works have suggested a number of remedies to reduce the bias of mini-batch gradients, such as maintaining moving averages of various quantities (Belghazi et al., 2018). In the setting we considered, we found this to have a negligible effect on performance. In fact, simply using the biased mini-batched gradients was sufficient for good performance, and so we used this for our experiments.\nFor the second expectation, we use standard mini-batch gradients, which are unbiased. Although initial state distributions are usually not used in imitation learning, it is easy to record initial states as they are observed, and thus have access to an empirical sample from p0. Furthermore, as detailed in Section 5.3, it is possible to modify the initial state distribution used in the objective without adverse effects.\nFinally, for the third expectation, previous works have suggested the use of Fenchel conjugates to remove the bias (Nachum et al., 2019b). In our case, we found this unnecessary and instead use a biased estimate based on the single sample s′ ∼ p(·|s, a). This naive approach was enough to achieve good performance on the benchmark domains we considered.\nIn summary, the empirical form of the objective is given by,\nĴDICE(π, ν) =\nE batch(D)∼D, batch(p0)∼p̂0\n[ log E\ns,a,s′∼batch(D), a′∼π(·|s′)\n[ eν(s,a)−γν(s ′,a′) ] − (1− γ) · E\ns0∼batch(p0), a0∼π(·|s0)\n[ν(s0, a0)]\n] , (17)\nwhere batch(D) is a mini-batch from D and batch(p0) is a mini-batch from the recorded initial states p̂0." }, { "heading": "5.2 REPLAY BUFFER REGULARIZATION", "text": "The original ValueDICE objective uses only expert samples and the initial state distribution. In practice, the number of expert samples may be small and lack diversity, hampering learning. In order to increase the diversity of samples used for training, we consider an alternative objective, with a controllable regularization based on experience in the replay buffer:\nJmixDICE(π, ν) := log E (s,a)∼dmix [eν(s,a)−B πν(s,a)]− (1− α)(1− γ) · E\ns0∼p0(·), a0∼π(·|s0)\n[ν(s0, a0)]\n− α E (s,a)∼dRB [ν(s, a)− Bπν(s, a)], (18)\nwhere dmix(s, a) = (1− α)dexp(s, a) + αdRB(s, a). The main advantage of this formulation is that it introduces ν-values into the objective on samples that are outside the given expert demonstrations. Thus, if π deviates from the expert trajectory, we will still be able to learn optimal actions that return the policy back to the expert behavior. 
{ "heading": "5.2 REPLAY BUFFER REGULARIZATION", "text": "The original ValueDICE objective uses only expert samples and the initial state distribution. In practice, the number of expert samples may be small and lack diversity, hampering learning. In order to increase the diversity of samples used for training, we consider an alternative objective with a controllable regularization based on experience in the replay buffer: JmixDICE(π, ν) := log E(s,a)∼dmix[e^{ν(s,a)−Bπν(s,a)}] − (1 − α)(1 − γ) · E_{s0∼p0(·), a0∼π(·|s0)}[ν(s0, a0)] − α · E(s,a)∼dRB[ν(s, a) − Bπν(s, a)], (18) where dmix(s, a) = (1 − α)dexp(s, a) + αdRB(s, a). The main advantage of this formulation is that it introduces ν-values into the objective on samples that are outside the given expert demonstrations. Thus, if π deviates from the expert trajectory, we will still be able to learn optimal actions that return the policy back to the expert behavior. At the same time, one can verify that in this formulation the optimal π still matches πexp, unlike other proposals for incorporating a replay buffer distribution (Kostrikov et al., 2019). Indeed, the objective in Equation 18 corresponds to the Donsker-Varadhan representation −DKL((1 − α)dπ + αdRB || (1 − α)dexp + αdRB) = min_{x:S×A→R} log E(s,a)∼dmix[e^{x(s,a)}] − (1 − α) · E(s,a)∼dπ[x(s, a)] − α · E(s,a)∼dRB[x(s, a)], (19) and so the optimal values ν∗ satisfy ν∗(s, a) − Bπν∗(s, a) = x∗(s, a) = log(((1 − α)dπ(s, a) + αdRB(s, a)) / ((1 − α)dexp(s, a) + αdRB(s, a))). (20) Therefore, the global optimality of π = πexp is unaffected by any choice of α < 1. We note that in practice we use a small value α = 0.1 for regularization." }, { "heading": "5.3 INITIAL STATE SAMPLING", "text": "Recall that dexp and dπ traditionally refer to discounted state-action distributions. That is, sampling from them is equivalent to first sampling a trajectory (s0, a0, s1, a1, . . . , sT) and then sampling a time index t from a geometric distribution Geom(1 − γ) (appropriately handling samples that are beyond T). This means that samples far into the trajectory do not contribute much to the objective. To remedy this, we propose treating every state in a trajectory as an ‘initial state.’ That is, we consider a single environment trajectory (s0, a0, s1, a1, . . . , sT) as T distinct virtual trajectories {(st, at, st+1, at+1, . . . , sT)}_{t=0}^{T−1}. We apply this to both dexp and dπ, so that not only does it increase the diversity of samples from dexp, but it also expands the initial state distribution p0(·) to encompass every state in a trajectory. We note that this does not affect the optimality of the objective with respect to π, since in Markovian environments an expert policy should be expert regardless of the state at which it starts (Puterman, 2014)." },
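The initial-state sampling trick of Section 5.3 is a pure data transformation; a minimal Python sketch (ours) of expanding one recorded trajectory into its virtual suffix trajectories:

```python
def expand_virtual_trajectories(trajectory):
    """Given one trajectory [(s_0, a_0), ..., (s_T, a_T)], return (initial_state, suffix)
    pairs so that every state s_t is treated as an 'initial state' (Section 5.3)."""
    return [(trajectory[t][0], trajectory[t:]) for t in range(len(trajectory) - 1)]
```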
{ "heading": "6 RELATED WORK", "text": "In recent years, the development of Adversarial Imitation Learning (AIL) has mostly focused on on-policy algorithms. After Ho & Ermon (2016) proposed GAIL to perform imitation learning via adversarial training, a number of extensions have been introduced. Many of these applications of the AIL framework (Li et al., 2017; Hausman et al., 2017; Fu et al., 2017) maintain the same form of distribution ratio estimation as GAIL, which necessitates on-policy samples. In contrast, our work presents an off-policy formulation of the same objective. Although several works have attempted to apply the AIL framework to off-policy settings, these previous approaches are markedly different from our own. For example, Kostrikov et al. (2019) proposed to train the discriminator in the GAN-like AIL objective using samples from a replay buffer instead of samples from a policy. This changes the distribution ratio estimation to measure a divergence between the expert and the replay buffer. Although we introduce a controllable parameter α for incorporating samples from the replay buffer into the data distribution objective, we note that in practice we use a very small α = 0.1. Furthermore, by using samples from the replay buffer in both terms of the objective, as opposed to just one, the global optimality of the expert policy is not affected. The off-policy formulation of the KL-divergence we derive is motivated by similar techniques in DualDICE (Nachum et al., 2019b). Still, our use of these techniques provides several novelties. First, Nachum et al. (2019b) only use the divergence formulation for data distribution estimation (which is used for off-policy evaluation), assuming a fixed policy; we use the formulation for learning a policy to minimize the divergence directly. Moreover, previous works have only applied these derivations to the f-divergence form of the KL-divergence, while we are the first to utilize the Donsker-Varadhan form. Anecdotally, in our initial experiments, we found that using the f-divergence form leads to poor performance. We note that our proposed objective follows a form similar to REPS (Peters et al., 2010), which also utilizes a log-average-exp term. However, policy and value learning in REPS are performed via a bi-level optimization (i.e., the policy is learned with respect to a different objective), which is distinct from our algorithm, which trains the values and the policy with respect to the same objective. Our proposed ValueDICE is also significant for being able to incorporate arbitrary (non-expert) data into its learning." },
We find that ValueDICE performs similar or better than DAC an all tasks, with the exception of Walker2d where it converges to a slightly worse policy. Notably, in this low-data regime, behavioral cloning (BC) usually cannot recover the expert policy. We also present the results of these algorithms on a larger number of expert demonstrations (Figure 3). We continue to observe strong performance of ValueDICE as well as faster convergence on all tasks. It is worth mentioning that in this large-data regime, Behavior Cloning can recover the expert performance as well. In all of these scenarios, GAIL is too sample-inefficient to make any progress." }, { "heading": "8 CONCLUSION", "text": "We introduced ValueDICE, an algorithm for imitation learning that outperforms the state-of-the-art on standard MuJoCo tasks. In contrast to other algorithms for off-policy imitation learning, the algorithm introduced in this paper performs robust divergence minimization in a principled off-policy manner and a strong theoretical framework. To the best of our knowledge, this is also the first algorithm for adversarial imitation learning that omits learning or defining rewards explicitly and directly learns a Q-function in the distribution ratio objective directly. We demonstrate the robustness of ValueDICE in a challenging synthetic tabular MDP environment, as well as on standard MuJoCo continuous control benchmark environments, and we show increased performance over baselines in both the low and high data regimes." }, { "heading": "B ALGORITHMS", "text": "In this section, we present pseudocode for the imitation learning algorithms based on DualDICE.\nAlgorithm 1 ValueDICE Input: expert replay bufferRE\nInitialize replay bufferR ← ∅ for n = 1, . . . , do\nSample (s, a, s′) with πθ Add (s, a, s′) to the replay bufferR {(s(i), a(i), s′(i))}Bi=1 ∼ R . Geometric sampling {(s(i)0 , s (i) E , a (i) E , s ′(i) E )}Bi=1 ∼ RE . Geometric sampling, s (i) 0 is a starting episode state for\ns (i) E\na (i) 0 ∼ πθ(·|s (i) 0 ), for i = 1, . . . , B a′(i) ∼ πθ(·|s′(i)), for i = 1, . . . , B a ′(i) E ∼ πθ(·|s ′(i) E ), for i = 1, . . . , B Compute loss on expert data: Ĵlog = log( 1 B ∑B i=1((1− α)eνψ(s (i) E ,a (i) E )−γνψ(s ′(i) E ,a ′(i) E ) + αeνψ(s\n(i),a(i))−γνψ(s′(i),a′(i)))) Compute loss on the replay buffer: Ĵlinear = 1 B ∑B i=1((1− α)(1− γ)νψ(s (i) 0 , a (i) 0 ) + α(νψ(s\n(i), a(i))− γνψ(s′(i), a′(i)))) Update ψ ← ψ − ην∇ψ(Ĵlog − Ĵlinear) Update θ ← ψ + ηπ∇θ(Ĵlog − Ĵlinear)\nend for" }, { "heading": "C ADDITIONAL EXPERIMENTS", "text": "We also compared ValueDICE with behavioral cloning in the offline regime, when we sample no additional transitions from the learning environment (see Figure 4). Even given only offline data, ValueDICE outperforms behavioral cloning. For behavioral cloning we used the same regularization as for actor training in ValueDICE." } ]
2020
null
SP:caa7fcf551f4ee75b6c06f05581bc5ef298fedbe
[ "This paper introduces a formulation for the contextual inverse reinforcement learning (COIRL) problem and proposed three algorithms for solving the proposed problem. Theoretical analysis of scalability and sample complexity are conducted for cases where both the feature function and the context-to-reward mapping function are linear. Experiments were conducted in both a simulated driving domain and a medical treatment domain to compare the three proposed algorithms empirically. Empirical results for using a deep network as the contextual mapping function is also provided.", "This work focuses on the problem of 'contextual' inverse reinforcement learning, where the reward is a function of the current state of the MDP, and a set of context features, which remain constant within each episode. The primary contribution of this work is the formulation of inverse reinforcement learning (for restricted spaces of context-dependent reward functions) as a convex optimization problem. Based on this formulation, the paper describes several IRL algorithms based on approaches to solving convex and non-convex optimization problems, including variations of mirror descent, and evolution strategies (in principle allowing for the optimization of reward functions with arbitrary parametric representations). The algorithms presented in this work all assume that computing an optimal policy for a specific reward function is a relatively inexpensive subroutine, which limits their applicability to domains where such planning is straightforward. Experimental results are presented for a simple highway driving domain, as well as a simulated patient treatment domain constructed from real-world clinical data." ]
We consider the Inverse Reinforcement Learning problem in Contextual Markov Decision Processes. In this setting, the reward, which is unknown to the agent, is a function of a static parameter referred to as the context. There is also an “expert” who knows this mapping and acts according to the optimal policy for each context. The goal of the agent is to learn the expert’s mapping by observing demonstrations. We define an optimization problem for finding this mapping and show that when it is linear, the problem is convex. We present and analyze the sample complexity of three algorithms for solving this problem: the mirror descent algorithm, evolution strategies, and the ellipsoid method. We also extend the first two methods to work with general reward functions, e.g., deep neural networks, but without the theoretical guarantees. Finally, we compare the different techniques empirically in a driving simulation and a medical treatment regime.
[]
[ { "authors": [ "Pieter Abbeel", "Andrew Y Ng" ], "title": "Apprenticeship learning via inverse reinforcement learning", "venue": "In Proceedings of the twenty-first international conference on Machine learning,", "year": 2004 }, { "authors": [ "Kareem Amin", "Nan Jiang", "Satinder Singh" ], "title": "Repeated inverse reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Amir Beck", "Marc Teboulle" ], "title": "Mirror descent and nonlinear projected subgradient methods for convex optimization", "venue": "Operations Research Letters,", "year": 2003 }, { "authors": [ "S Clark Berngard", "Jeremy R Beitler", "Atul Malhotra" ], "title": "Personalizing mechanical ventilation for acute respiratory distress syndrome", "venue": "Journal of thoracic disease,", "year": 2016 }, { "authors": [ "Dimitri P Bertsekas" ], "title": "Nonlinear programming", "venue": "Journal of the Operational Research Society,", "year": 1997 }, { "authors": [ "Stephen P Boyd", "Craig H Barratt" ], "title": "Linear controller design: limits of performance", "venue": null, "year": 1991 }, { "authors": [ "Bibhas Chakraborty", "Susan A Murphy" ], "title": "Dynamic treatment regimes", "venue": "Annual review of statistics and its application,", "year": 2014 }, { "authors": [ "Moustapha Cisse", "Piotr Bojanowski", "Edouard Grave", "Yann Dauphin", "Nicolas Usunier" ], "title": "Parseval networks: Improving robustness to adversarial examples", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Justin Fu", "Anoop Korattikara", "Sergey Levine", "Sergio Guadarrama" ], "title": "From language to goals: Inverse reinforcement learning for vision-based instruction following", "venue": null, "year": 1902 }, { "authors": [ "Omer Gottesman", "Fredrik Johansson", "Joshua Meier", "Jack Dent", "Donghun Lee", "Srivatsan Srinivasan", "Linying Zhang", "Yi Ding", "David Wihl", "Xuefeng Peng" ], "title": "Evaluating reinforcement learning algorithms in observational health settings", "venue": "arXiv preprint arXiv:1805.12298,", "year": 2018 }, { "authors": [ "Omer Gottesman", "Fredrik Johansson", "Matthieu Komorowski", "Aldo Faisal", "David Sontag", "Finale Doshi-Velez", "Leo Anthony Celi" ], "title": "Guidelines for reinforcement learning in healthcare", "venue": "Nature medicine,", "year": 2019 }, { "authors": [ "Assaf Hallak", "Dotan Di Castro", "Shie Mannor" ], "title": "Contextual markov decision processes", "venue": "arXiv preprint arXiv:1502.02259,", "year": 2015 }, { "authors": [ "Theis Itenov", "Daniel Murray", "Jens Jensen" ], "title": "Sepsis: Personalized medicine utilizing ‘omic’technologies—a paradigm shift", "venue": "In Healthcare,", "year": 2018 }, { "authors": [ "Russell Jeter", "Christopher Josef", "Supreeth Shashikumar", "Shamim Nemati" ], "title": "Does the ”artificial intelligence clinician” learn optimal treatment strategies for sepsis", "venue": "in intensive care?,", "year": 2019 }, { "authors": [ "Alistair E.W. Johnson", "Tom J. Pollard", "Lu Shen", "Li-wei H. Lehman", "Mengling Feng", "Mohammad Ghassemi", "Benjamin Moody", "Peter Szolovits", "Leo Anthony Celi", "Roger G. 
Mark" ], "title": "Mimic-iii, a freely accessible critical care database", "venue": "Scientific Data,", "year": 2016 }, { "authors": [ "Sham Kakade", "John Langford" ], "title": "Approximately optimal approximate reinforcement learning", "venue": "In International conference on Machine learning,", "year": 2002 }, { "authors": [ "Michael Kearns", "Satinder Singh" ], "title": "Near-optimal reinforcement learning in polynomial time", "venue": "Machine learning,", "year": 2002 }, { "authors": [ "Matthieu Komorowski", "Leo A Celi", "Omar Badawi", "Anthony C Gordon", "A Aldo Faisal" ], "title": "The artificial intelligence clinician learns optimal treatment strategies for sepsis in intensive care", "venue": "Nature Medicine,", "year": 2018 }, { "authors": [ "Donghun Lee", "Srivatsan Srinivasan", "Finale Doshi-Velez" ], "title": "Truly batch apprenticeship learning with deep successor features", "venue": "arXiv preprint arXiv:1903.10077,", "year": 2019 }, { "authors": [ "James MacQueen" ], "title": "Some methods for classification and analysis of multivariate observations", "venue": "In Proceedings of the fifth Berkeley symposium on mathematical statistics and probability,", "year": 1967 }, { "authors": [ "Aditya Modi", "Nan Jiang", "Satinder Singh", "Ambuj Tewari" ], "title": "Markov decision processes with continuous side information", "venue": "In Algorithmic Learning Theory,", "year": 2018 }, { "authors": [ "Arkadii Semenovich Nemirovsky", "David Borisovich Yudin" ], "title": "In Problem complexity and method efficiency in optimization", "venue": null, "year": 1983 }, { "authors": [ "Yurii Nesterov", "Vladimir Spokoiny" ], "title": "Random gradient-free minimization of convex functions", "venue": "Foundations of Computational Mathematics,", "year": 2017 }, { "authors": [ "Andrew Y Ng", "Stuart J Russell" ], "title": "Algorithms for inverse reinforcement learning", "venue": "In Icml,", "year": 2000 }, { "authors": [ "Niranjani Prasad", "Li-Fang Cheng", "Corey Chivers", "Michael Draugelis", "Barbara E Engelhardt" ], "title": "A reinforcement learning approach to weaning of mechanical ventilation in intensive care", "venue": null, "year": 2017 }, { "authors": [ "Martin L Puterman" ], "title": "Markov decision processes: discrete stochastic dynamic programming", "venue": null, "year": 1994 }, { "authors": [ "Aniruddh Raghu", "Matthieu Komorowski", "Imran Ahmed", "Leo Celi", "Peter Szolovits", "Marzyeh Ghassemi" ], "title": "Deep reinforcement learning for sepsis treatment", "venue": "arXiv preprint arXiv:1711.09602,", "year": 2017 }, { "authors": [ "Herbert Robbins", "Sutton Monro" ], "title": "A stochastic approximation method", "venue": "The annals of mathematical statistics,", "year": 1951 }, { "authors": [ "Tim Salimans", "Jonathan Ho", "Xi Chen", "Szymon Sidor", "Ilya Sutskever" ], "title": "Evolution strategies as a scalable alternative to reinforcement learning", "venue": "arXiv preprint arXiv:1703.03864,", "year": 2017 }, { "authors": [ "Richard S Sutton" ], "title": "The bitter lesson, March 2019", "venue": "URL http://www.incompleteideas.net/IncIdeas/ BitterLesson.html", "year": 2019 }, { "authors": [ "Umar Syed", "Robert E Schapire" ], "title": "A game-theoretic approach to apprenticeship learning", "venue": "In Advances in neural information processing systems,", "year": 2008 }, { "authors": [ "EM Wesselink", "TH Kappen", "HM Torn", "AJC Slooter", "WA van Klei" ], "title": "Intraoperative hypotension and the risk of postoperative adverse outcomes: a systematic review", "venue": 
"British journal of anaesthesia,", "year": 2018 }, { "authors": [ "Kelvin Xu", "Ellis Ratner", "Anca Dragan", "Sergey Levine", "Chelsea Finn" ], "title": "Learning a prior over intent via meta-inverse reinforcement learning", "venue": "arXiv preprint arXiv:1805.12573,", "year": 2018 }, { "authors": [ "Tom Zahavy", "Alon Cohen", "Haim Kaplan", "Yishay Mansour" ], "title": "Average reward reinforcement learning with unknown mixing times", "venue": "arXiv preprint arXiv:1905.09704,", "year": 2019 }, { "authors": [ "Martin Zinkevich" ], "title": "Online convex programming and generalized infinitesimal gradient ascent", "venue": "In Proceedings of the 20th International Conference on Machine Learning", "year": 2003 }, { "authors": [ "Jeter" ], "title": "2019) to extract the relevant data in a form of normalized measurements of sepsis patients during their hospital admission and the treatments that were given to each patient. The measurements include dynamic measures, e.g., heart rate, blood pressure, weight, body temperature, blood analysis standard measures (glucose, albumin, platelets", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "We study sequential decision-making in a Contextual Markov Decision Process (CMDP, Hallak et al. (2015)), where the reward, while unknown to the agent, depends on a static parameter referred to as the context. For a concrete example, consider the dynamic treatment regime (Chakraborty & Murphy, 2014). Here, there is a patient and a clinician which acts to improve the patient’s health. The context is composed of static information of the patient (such as age and weight); the state is composed of the patient’s dynamic measurements (such as heart rate and blood pressure); and the clinician’s actions are a set of intervention categories (e.g., infusion). The reward is different for each patient (context), and there is a mapping from the context to the reward.\nRecent trends in personalized medicine motivate this model – instead of treating the ”average patient”, patients are separated into different groups for which the medical decisions are tailored (Fig. 1b). For example, in Wesselink et al. (2018), the authors study organ injury, which may occur when a specific measurement (mean arterial pressure) decreases below a certain threshold. They found that this threshold varies across different patient groups (context). In other examples, clinicians set treatment goals for the patients, i.e., they take actions to make the patient measurements reach some pre-determined values. For instance, in acute respiratory distress syndrome (ARDS), clinicians argue that these treatment goals should depend on the static patient information (the context) (Berngard et al., 2016).\nThere are serious issues when trying to manually define a reward signal in real-world tasks. When treating patients with sepsis, for example, the only available signal is the mortality of the patient at the end of the treatment (Komorowski et al., 2018). This signal is sparse, and it is unclear how to manually tweak the reward to maximize the patient’s health condition (Leike et al., 2017; Raghu et al., 2017; Lee et al., 2019).\nTo address these issues, we propose the Contextual Inverse Reinforcement Learning (COIRL) framework. Similarly to Inverse Reinforcement Learning (Ng & Russell, 2000, IRL), we focus on trying to infer the mapping from contexts to rewards by observing experts. The main challenge in our problem is that for each context there is a different reward, hence, a different optimal policy for each context. Therefore, Apprenticeship Learning algorithms (Abbeel & Ng, 2004; Syed & Schapire, 2008) that try to mimic the expert cannot be used and, instead, we focus on directly learning the mapping.\nIn particular, our main contributions are:\n1. We formulate COIRL with a linear mapping as a convex optimization problem. 2. We propose and analyze the sample complexity of three algorithms for COIRL: the mirrored\ndescent alg. (MDA), evolution strategies (ES), and the ellipsoid method. 3. For nonlinear mappings, we implement a deep learning version for MDA and ES (without\ntheoretical guarantees). 4. We compare these methods empirically on two frameworks: an autonomous driving simulator\n(Abbeel & Ng, 2004) and a dynamic treatment regime (Komorowski et al., 2018)." 
}, { "heading": "2 PRELIMINARIES", "text": "Contextual MDPs: A Markov Decision Process (Puterman, 1994, MDP) is defined by the tuple (S,A, P, ξ, R, γ) where S is a finite state space, A a finite action space, P : S × S ×A→ [0, 1] the transition kernel, ξ the initial state distribution, R : S → R the reward function and γ ∈ [0, 1) is the discount factor. A Contextual MDP (Hallak et al., 2015, CMDP) is an extension of an MDP, and is defined by (C,S,A,M, γ) where C is the context space, andM is a mapping from contexts c ∈ C to MDPs: M(c) = (S,A, P,Rc, ξ, γ). In addition, each state is associated with a feature vector φ : S → [0, 1]k. Note that P and ξ are not context dependent. We consider a setting in which the reward for context c is a linear combination of the state features: R∗c(s) = f\n∗(c)Tφ(s). The goal is to approximate f∗(c) using a function fW (c), with parameters W . This notation allows us to present our algorithms for any function approximator fW (c), and in particular, a deep neural network (DNN). For the theoretical analysis, we will further assume a linear setting, where f∗(c) = cTW ∗, fW (c) = cTW and that W ∗ is in some convex setW. We assume that c ∈ C = ∆d−1, the standard d− 1 dimensional simplex. This assumption makes the contexts bounded (which we use in our proofs), and it also allows a straight-forward expansion to a model in which the transitions are also a linear mapping of the context (Modi et al., 2018). One way of viewing this model is that each row in the mapping W ∗ is a base rewards coefficient vector, and the reward for a specific context is a convex combination of these base rewards.\nWe consider deterministic policies π : S → A which dictate the agent’s behaviour at each state. The value of a policy π for reward coefficients vector r is: V πr = Eξ,P,π[ ∑∞ t=0 γ tR(st)] = r Tµ(π)\nwhere µ(π) := Eξ,P,π[ ∑∞ t=0 γ\ntφ(st)] ∈ Rk is called the feature expectations of π. For the optimal policy with respect to (w.r.t.) a reward coefficients vector r, we denote the value by V ∗r . For any context c, π∗c denotes the optimal policy w.r.t. reward R ∗ c(s) = f\n∗(c)Tφ(s) and π̂c(W ) denotes the optimal policy w.r.t. reward R̂c(s) = fW (c)Tφ(s).\nInverse Reinforcement Learning in CMDPs: In standard IRL, the goal is to learn a reward which best explains the behavior of an observed expert. The model describing this scenario is the MDP\\R -\nan MDP without a reward function (also commonly called a controlled Markov chain). Similarly, we denote a CMDP without a mapping of context to reward by CMDP\\M. The goal in Contextual IRL is to approximate the mapping f∗(c) by observing an expert. The expert knows f∗(c), and for each context c, can provide a demonstration from π∗c .\nContextual dynamics: Learning a transition kernel and an initial state distribution that is parametrized by the context is an orthogonal problem to COIRL. Therefore, we focus only on a contextual reward which simplifies our analysis. Existing methods, such as in Modi et al. (2018), can be used to learn the mappings for the transition kernel and initial distribution in a contextual model. In conjunction with the simulation lemma (Kearns & Singh, 2002), these methods can extend our results to the more general CMDP setting." 
}, { "heading": "3 OPTIMIZATION METHODS FOR COIRL", "text": "In this section, we propose and analyze optimization algorithms for minimizing the following loss function; Lemma 1 below justifies its use for COIRL.\nLoss(W ) = Ec max π\n[ fW (c) · ( µ(π)− µ(π∗c ) )] = Ec [ fW (c) · ( µ(π̂c(W ))− µ(π∗c ) )] . (1)\nLemma 1. Loss(W ) satisfies the following properties: (1) ∀W, Loss(W ) ≥ 0, and Loss(W ∗) = 0. (2) If Loss(W ) = 0 then ∀c ∈ C, the expert policy π∗c is the optimal policy w.r.t. reward cTW.\nTo evaluate the loss, the optimal policy π̂c(W ) and its features expectations µ(π̂c(W )) must be computed for all contexts. For a specific context, finding π̂c(W ) can be solved with standard RL methods such as Value or Policy Iteration. Computing µ(π̂c(W )) is equivalent to policy evaluation (solving linear equations).\nThe challenge is that Eq. (1) is is not differentiable in W . We tackle this problem using two methods for computing descent directions that do not involve differentiation: (i) computing subgradients and (ii) randomly perturbing the loss function. In addition, as the loss is defined in expectation over the contexts, computing it requires to calculate the optimal policy for all contexts. We deal with this issue at the end of Section 3.1. In the special case that fW (c) is a linear function, Eq. (1) is convex. The following Lemma characterizes Eq. (1) in this case.\nLemma 2. Let Llin(W ) = Ec [ cTW · ( µ(π̂c(W ))−µ(π∗c ) )] . We have that: (1) Llin(W ) is a convex\nfunction. (2) g(W ) = Ec [ c ( µ(π̂c(W ))− µ(π∗c ) )] is a sub gradient of Llin(W ). (3) Llin is a Lipschitz continuous function, with Lipschitz constant L = 21−γ w.r.t. ‖·‖∞ and L = 2 √ dk 1−γ w.r.t. ‖·‖2.\nA technical proof (by definition) is provided in the supplementary material. Note that g(W ) ∈ Rd×k; we will sometimes refer to it as a matrix and sometimes as a flattened vector, no confusion will arise. Remark 1. The Lipschitz of LLin(W ) is related to the simulation lemma (Kearns & Singh, 2002); a small change in the reward results in a small change in the optimal value. Remark 2. As g(W ) is a subgradient of Loss(W ), it can be used to back-propagate DNNs. Clearly, we cannot guarantee convexity (hence no theoretical guarantees), but we can design Loss(W ) to be Lipschitz continuous in W using the methods presented in Cisse et al. (2017); Arjovsky et al. (2017). Remark 3. The subgradient g(W ) is given in expectation over contexts, and in expectation over trajectories (feature expectations). We will later see how to replace it with an unbiased estimate, which can be computed by observing a single expert trajectory for a single context." }, { "heading": "3.1 MIRRORED DESCENT FOR COIRL", "text": "Lemma 2 identifies LLin(W ) as a convex function and provides a method to compute its subgradients. A standard method for minimizing a convex function over a convex set is the subgradient projection algorithm (Bertsekas, 1997): wt+1 = ProjW{wt − αtg(wt)}, where f(wt) is a convex function, g(wt) is a subgradient of f(wt), and αt the learning rate.W is a convex set, and specifically, we consider the `2 ball (Abbeel & Ng, 2004) and the simplex (Syed & Schapire, 2008)1. We focus on\n1Scaling of the reward by a constant does not affect the resulting policy, thus, these sets are not restricting.\nAlgorithm 1 MDA for COIRL input: a convex set W , T number of iterations initialize w1 ∈ W for t = 1, . . . 
, T do\nObserve c, µ(π∗c ) Compute π̂c(W ), µ(π̂c(W )) Compute gt according to Lemma 2 if PSGD then\nαt = (1− γ) √ 1 2dkt wt+1 = wt − αtgt if ‖wt+1‖ > 1 then\nwt+1 = wt+1/ ‖wt+1‖2 else if Exponential weights then\nαt = (1− γ) √ log(dk) 2t for i = 1, . . . , dk do wt+1(i) = wt(i) exp (−αtgt(i))\nwt+1 = wt+1/ ∑ i wt+1(i)\nreturn 1t ∑T t=1 wt\nAlgorithm 2 ES for COIRL\ninput: step sizes {αt}Tt=1s, noise STD σ, number of evaluations m and smoothing parameter ν > 0 initialize: W ∈ Rk for t = 1, . . . , T do\nObserve c, µ(π∗c ) for j = 1, ...,m do\nuj ∼ N k(0, σ2) Lossj(W ) = Loss ( W +\nuj ||uj ||ν ) dLoss(W ) = ∑m j=1 Lossj(W ) uj ||uj ||ν If Loss(W − αtb dLoss(W )) < Loss(W ) then W = W − αtmσdLoss(W )\nreturn W\na generalization of the subgradient projection algorithm that is called the mirror descent algorithm (Nemirovsky & Yudin, 1983, MDA): wt+1 = arg minw∈W { w · ∇f (wt) + 1αtDψ(w,wt) } , where Dψ(w,wt) is a Bregman distance2, associated with a strongly convex function ψ. The following theorem characterizes the convergence rate of MDA.\nTheorem 1 (Convergence rate of MDA). Let ψ be a σ-strongly convex function onW w.r.t. ‖·‖, and let D2 = supw1,w2∈W Dψ(w1, w2). Let f be convex and L-Lipschitz continuous w.r.t. ‖·‖. Then,\nMDA with αt = DL √ 2σ t satisfies: f ( 1 T ∑T s=1 xs ) − f(x∗) ≤ DL √ 2 σT .\nWe refer the reader to Beck & Teboulle (2003) and Bubeck (2015) for the proof. Next, we provide two MDA instances (see, for example Beck & Teboulle (2003) for derivation) and analyze them for COIRL.\nProjected subgradient descent (PSGD): LetW be an `2 ball with radius 1. Fix || · ||2, and ψ(w) = 1 2 ||w|| 2 2. ψ is strongly convex w.r.t. || · ||2 with σ = 1. The associated Bregman divergence is given by Dψ(w1, w2) = 0.5||w1 − w2||22. Thus, mirror descent is equivalent to PSGD. D2 = maxw1,w2∈W Dψ(w1, w2) ≤ 1, and according to Lemma 2, L = 2 √ dk 1−γ . Thus, we have that after T\niterations Llin ( 1 T ∑T t=1 wt ) − Llin(w∗) ≤ O ( √ dk (1−γ) √ T ) . Exponential Weights (EW): Let W be the standard dk − 1 dimensional simplex. Let ψ(w) =∑ i w(i) log(w(i)). ψ is strongly convex w.r.t. || · ||1 with σ = 1. We get that the associated Bregman\ndivergence is given by Dψ(w1, w2) = ∑ i w1(i) log( w1(i) w2(i)\n), also known as the Kullback-Leibler divergence. In addition, D2 = maxx,y∈W Dψ(w1, w2) ≤ log(dk) and according to Lemma 2, L = 21−γ . Furthermore, the projection onto the simplex w.r.t. to this distance amounts to a simple renormalization w ← w/||w||1. Thus, we get that MDA is equivalent to the exponential weights algorithm and Llin ( 1 T ∑T t=1 wt ) − Llin(w∗) ≤ O (√log(dk) (1−γ) √ T ) .\nPractical MDA: One of the “miracles” of MDA is its robustness to noise. If we replace gt with an unbiased estimate g̃t, such that Eg̃t = gt and E ‖g̃t‖ ≤ L, we obtain the same convergence results as in Lemma 2 (Robbins & Monro, 1951) (see, for example, (Bubeck, 2015, Theorem 6.1)). Such an unbiased estimate can be obtained in the following manner: (i) sample a context ct, (ii) compute\n2We refer the reader to Appendix C for definitions of the Bregman distance, the dual norm, etc.\nµ(π∗ cTt wt ), (iii) observe a single expert demonstration τEi = {si0, a0, si1, a1, . . . , }, where ai is chosen by the expert policy π∗\ncTt w ∗ (iv) let µ̂i = ∑ t∈[0,...,|τEi |−1]\nγtφ(sit) be the accumulated discounted features across the trajectory such that Eµ̂i = µ(π∗c ).\nThe challenge is, that for µ̂i to be an unbiased estimate of µ(π∗cTt w∗), τ E i needs to be of infinite length. 
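Before turning to that issue, it may help to see a single update of Algorithm 1 spelled out. Below is a minimal NumPy sketch of ours (not the released implementation), assuming exact feature expectations are available; the truncated-trajectory estimate µ̂_i discussed next can be substituted for the expert term.

```python
import numpy as np

def mda_step(W, c, mu_E, mu_W, t, gamma, variant="psgd"):
    """One MDA update from Algorithm 1. W: d x k estimate, c: context in the
    simplex, mu_E / mu_W: expert / agent feature expectations in R^k."""
    d, k = W.shape
    g = np.outer(c, mu_W - mu_E)                  # subgradient of L_lin (Lemma 2)
    if variant == "psgd":                         # l2-ball geometry
        alpha = (1 - gamma) * np.sqrt(1.0 / (2 * d * k * t))
        W = W - alpha * g
        norm = np.linalg.norm(W)
        if norm > 1:                              # project back onto the unit ball
            W = W / norm
    else:                                         # exponential weights (simplex)
        alpha = (1 - gamma) * np.sqrt(np.log(d * k) / (2 * t))
        W = W * np.exp(-alpha * g)                # entrywise multiplicative update
        W = W / W.sum()                           # renormalize onto the simplex
    return W
```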
There are two ways in which we can tackle this issue. We can either (1) execute the expert trajectory online, and terminate it at each time step with probability 1−γ (as in Kakade & Langford (2002)), or (2) execute a trajectory of length H = (1/(1−γ)) log(1/ε_H). The issue with the first approach is that since the trajectory length is unbounded, the estimate µ̂_i cannot be shown to concentrate to µ(π*_c) via Hoeffding-type inequalities. Nevertheless, it is possible to obtain a concentration inequality using the fact that the length of each trajectory is bounded with high probability (similar to Zahavy et al. (2019)). The second approach can only guarantee that ‖g_t − E g̃_t‖ ≤ ε_H (Syed & Schapire, 2008). Therefore, using the robustness of MDA to adversarial noise (Zinkevich, 2003), we get that MDA converges with an additional error of ε_H, i.e., L_lin((1/T) Σ_{t=1}^T w_t) − L_lin(w*) ≤ O(1/√T) + ε_H. While this sampling mechanism comes with the cost of a controlled bias, it is usually more practical, in particular when the trajectories are given as a set of demonstrations (offline data)." }, { "heading": "3.2 EVOLUTION STRATEGIES FOR COIRL", "text": "To minimize Eq. (1), we also design a derivative-free algorithm (Algorithm 2) that is based on Evolution Strategies (Salimans et al., 2017, ES). For convex optimization problems, ES is a gradient-free descent method that is based on computing finite differences (Nesterov & Spokoiny, 2017), whose sample complexity is provided below in Theorem 2. The theorem is given in terms of the Lipschitz constant, which is upper bounded by 2√(dk)/(1−γ) (Section 3.1). While this approach has looser upper-bound guarantees compared to MDA (Theorem 1), Nesterov & Spokoiny (2017) observed that in practice, it often outperforms subgradient-based methods. Thus, we test this method empirically and compare it with the subgradient method (Section 3.1). ES is also known to perform well in practice, even with nonconvex objectives. Specifically, Salimans et al. (2017) have shown that ES can be used to optimize the parameters of a DNN to solve challenging high-dimensional RL tasks like playing Atari.

Theorem 2 (ES Convergence Rate (Nesterov & Spokoiny, 2017)). Let L_lin(W) be a non-smooth convex function with Lipschitz constant L, such that ‖x_0 − x*‖ ≤ D. With step size α_t = D/((dk+4)√(T+1) L) and ν ≤ ε/(2L√(dk)), in T = 4(dk+4)²D²L²/ε² iterations ES finds a solution which is bounded by E_{U_{T−1}}[L_lin(x̂_T)] − L_lin(x*) ≤ ε, where U_T = {u_0, . . . , u_T} denotes the random variables of the algorithm up to time T and x̂_T = argmin_{t=1,...,T} L_lin(x_t)." }, { "heading": "3.3 ELLIPSOID ALGORITHMS FOR COIRL", "text": "Algorithm 3 Ellipsoid algorithm for COIRL
Initialize: Θ_0 ← B_∞(0, 1) = {x ∈ R^{d·k} : ‖x‖_∞ ≤ 1}
Θ_1 ← MVEE(Θ_0)
for t = 1, 2, . . . do
    Observe c_t, let W_t be the center of Θ_t
    Play episode using π̂_t = argmax_π V^π_{c_t^T W_t}
    if V*_{c_t^T W*} − V^{π̂_t}_{c_t^T W*} > ε then
        µ(π*_{c_t}) is revealed
        Let a_t = c_t ⊗ (µ(π*_{c_t}) − µ(π̂_t))
        Θ_{t+1} ← MVEE({θ ∈ Θ_t : θ^T a_t ≥ W_t^T a_t})
    else
        Θ_{t+1} ← Θ_t

The final algorithm we consider is an ellipsoid method, introduced to the IRL setting by Amin et al. (2017). In this section we extend it to the contextual setting; specifically, we focus on finding a linear mapping W and further assume that W = {W : ‖W‖_∞ ≤ 1} and that W* ∈ W. The algorithm maintains an ellipsoid-shaped feasibility set that contains W*. At any step, the current estimate W_t of W* is defined as the center of the ellipsoid, and the agent acts optimally w.r.t. this estimate. 
If the agent performs sub-optimally, the expert provides a demonstration in the form of the optimal feature expectations for c_t, µ(π*_{c_t}). The feature expectations are used to generate a linear constraint (hyperplane) on the ellipsoid that crosses its center. Under this constraint, we construct a new feasibility set that is half of the previous ellipsoid, and still contains W*. For the algorithm to proceed, we compute a new ellipsoid that is the minimum volume enclosing ellipsoid (MVEE) around this “half-ellipsoid”³. These updates are guaranteed to gradually reduce the volume of the ellipsoid (a well-known result (Boyd & Barratt, 1991)) until its center is a mapping which induces ε-optimal policies. Theorem 3 shows that this algorithm achieves a polynomial upper bound on the number of sub-optimal time-steps. Finally, note that in Algorithm 3 we use an underline notation to denote a “flattening” operator for matrices, and ⊗ to denote the composition of an outer product and the flattening operator. The proofs in this section are provided in the supplementary material, and are adapted from Amin et al. (2017). Theorem 3. In the linear setting where R*_c(s) = c^T W* φ(s), for an agent acting according to Algorithm 3, the number of rounds in which the agent is not ε-optimal is O(d²k² log(dk/(ε(1−γ)))). Remark 4. Note that the ellipsoid method presents a new learning framework, where demonstrations are only provided when the agent performs sub-optimally. Thus, the theoretical results in this section cannot be directly compared with those of the descent methods. We further discuss this in the experiments and discussion sections. Remark 5. The ellipsoid method does not require a distribution over contexts - an adversary may choose them. MDA can also be easily extended to the adversarial setting via known regret bounds on online MDA (Hazan, 2016).

Practical ellipsoid algorithm: In many real-world scenarios, the expert cannot evaluate the value of the agent's policy and cannot provide its policy or feature expectations. To address these issues, we follow Amin et al. (2017) and consider a relaxed approach, in which the expert evaluates each of the individual actions performed by the agent rather than its policy, and provides finite roll-outs instead of a policy or feature expectations (see the supplementary material (Algorithm 4) for pseudocode). We define the expert criterion for providing a demonstration to be Q*_{c_t^T W*}(s, a) + ε < V*_{c_t^T W*}(s) for each state-action pair (s, a) in the agent's trajectory.

Near-optimal experts: In addition, we relax the optimality requirement of the expert and instead assume that, for each context c_t, the expert acts optimally w.r.t. W*_t, which is close to W*; the expert also evaluates the agent w.r.t. this mapping. This allows the agent to learn from different experts, and from non-stationary experts whose judgment and performance slightly vary over time. If a sub-optimal action w.r.t. W*_t is played at state s, the expert provides a roll-out of H steps from s to the agent. As this roll-out is a sample of the optimal policy w.r.t. W*_t, we aggregate n examples to ensure that, with high probability, the linear constraint that we use in the ellipsoid algorithm does not exclude W* from the feasibility set. Note that these batches may be constructed across different contexts, different experts, and different states from which the demonstrations start. Theorem 4 below upper bounds the number of sub-optimal actions that Algorithm 4 chooses.⁴
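For concreteness, the half-space cut and MVEE update at the core of Algorithms 3 and 4 reduce to a few lines of linear algebra (spelled out in Appendix D.2). The following is our own minimal NumPy rendering, not the released implementation:

```python
import numpy as np

def ellipsoid_cut(center, Q, a):
    """One cut of the ellipsoid {x : (x-center)^T Q^{-1} (x-center) <= 1}:
    keep the half-space {theta : (theta - center)^T a >= 0} and return the
    minimum volume enclosing ellipsoid of the remaining half-ellipsoid."""
    D = center.shape[0]                      # dimension, d*k in our setting
    a_tilde = -a / np.sqrt(a @ Q @ a)        # normalized cut direction
    Qa = Q @ a_tilde
    new_center = center - Qa / (D + 1)
    new_Q = (D**2 / (D**2 - 1.0)) * (Q - (2.0 / (D + 1)) * np.outer(Qa, Qa))
    return new_center, new_Q
```

The initial feasibility set B_∞(0, 1) is enclosed by the sphere with center 0 and Q = dk·I, after which each violated constraint triggers one such cut.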
Theorem 4. For an agent acting according to Algorithm 4, with probability at least 1 − δ, for H = ⌈(1/(1−γ)) log(8k/(ε(1−γ)))⌉ and n = ⌈(512k²/((1−γ)²ε²)) log(4dk(dk + 1) log(16k√(dk)/(ε(1−γ)))/δ)⌉, if ∀t : W*_t ∈ B_∞(W*, ε(1−γ)/(8k)) ∩ Θ_0, then the number of rounds in which a sub-optimal action is played is O((d²k⁴/((1−γ)²ε²)) log((dk/(ε(1−γ)δ)) log(dk/(ε(1−γ)))))." }, { "heading": "4 EXPERIMENTS", "text": "The simulations in this section include two domains: (1) an autonomous driving simulation (Abbeel & Ng, 2004) that we adapted to the contextual setup, and (2) a medical treatment regime, constructed from a data set of expert (clinician) trajectories for treating patients with sepsis⁵. In each of these domains we compare the algorithms in two setups: the ellipsoid learning framework and an offline framework. All the results are averaged across 10 random seeds in Section 4.1 and 5 seeds in Section 4.2 (we report the mean and the standard deviation). Due to space considerations, we present the simulations in the ellipsoid framework only for the car domain, and the simulations in the offline framework only for the dynamic treatment regime. Complementary simulations can be found in the supplementary material.

³This procedure follows a sequence of linear algebra operations which we explain in the appendix. ⁴MDA also works with near-optimal experts due to the robustness of MDA. The analysis of this case is identical to the analysis of biased trajectories, as discussed at the end of Section 3.1. ⁵The data, code, and implementation of our algorithms can be found in github.com/CIRLMDP/CIRL." }, { "heading": "4.1 DRIVING SIMULATION – THE ELLIPSOID FRAMEWORK", "text": "In the ellipsoid framework, an expert evaluates the agent's policy. If the agent's policy is sub-optimal, the expert provides the agent its feature expectations; otherwise, no demonstration is given. The algorithm performs learning in between demonstrations. This setup enables a proper comparison with the ellipsoid algorithm, which requires the additional expert supervision. We measure performance w.r.t. the following criteria: (1) # demonstrations – the number of contexts on which each algorithm requested an expert demonstration (y-axis) as a function of time, i.e., the total number of contexts (x-axis). (2) Value – the difference in value between the agent's policy and the expert's policy w.r.t. the true reward mapping, i.e., Σ_{c∈C_test} f_{W*}(c) · (µ(π̂_c(W)) − µ(π*_c)), where C_test is a holdout (test) set of contexts. The x-axis measures the number of demonstrations given.

Setup. This domain simulates a three-lane highway with two visible cars - cars A and B (illustration provided in the appendix). The agent, controlling car A, can drive both on the highway and off-road. Car B drives in a fixed lane, at a slower speed than car A. Upon leaving the frame, car B is replaced by a new car, appearing in a random lane at the top of the screen. The feature vector φ(s) is composed of 3 features: (1) a speed feature, (2) a collision feature, which is valued 0 in case of a collision and 0.5 otherwise, and (3) an off-road feature, which is 0.5 if the car is on the road and 0 otherwise.

In this task, the context vector implies different priorities for the agent; should it prefer speed or safety? Is going off-road to avoid collisions a valid option? 
For example, an ambulance will prioritize speed and may allow going off-road as long as it goes fast and avoids collisions, while a bus will prioritize avoiding both collisions and off-road driving, as safety is its primary concern. To demonstrate the effectiveness of our solutions, the mapping f : C → [−1, 1]^k is constructed in a way that induces different behaviors for different contexts, making generalization a challenging task. We provide additional details on the domain, as well as the hyper-parameter selection, in the appendix.

Linear: The optimal behavior is defined using a linear mapping W*. In this setting, all three approaches obtain competitive results in terms of generalization, although ES is capable of obtaining these results faster, as seen through the regret and the number of required demonstrations.

Nonlinear: For the nonlinear task, we consider two reward coefficient vectors r_1 and r_2, and define the mapping by f*(c) = r_1 if ‖c‖_∞ ≥ 0.55, and r_2 otherwise - an illustration is provided in the appendix. In order to learn the nonlinear mapping, we represent f_W(c) using a DNN, a multi-layered perceptron, which maps from context to reward vector. DNNs have proven capable of extracting meaningful features from complex high-dimensional data, e.g., images - in these scenarios, the linear assumption no longer holds, yet DNNs often overcome such issues. In this setting, the superiority of the descent methods emerges; as the linear assumption of the ellipsoid algorithm is not met, it fails to generalize and keeps requiring new demonstrations. We believe these results to be crucial when considering real-life applications, in which the problem is not necessarily linear. Such cases highlight the strength of the descent methods, which, as Fig. 2 shows, are capable of scaling to nonlinear high-dimensional mappings." }, { "heading": "4.2 DYNAMIC TREATMENT REGIME – THE OFFLINE FRAMEWORK", "text": "In the offline framework, we focus on the ability to learn from previously collected data. A data set of previously collected trajectories is given, such that a single trajectory of finite length is observed for each context and no context is observed more than once. We measure performance w.r.t. the following criteria: (1) Value – as in the ellipsoid framework above, but here the x-axis corresponds to the number of iterations. Each iteration corresponds to a single subgradient step, where the subgradient is computed from a mini-batch of 10 contexts. (2) Loss – as in Eq. (1). (3) Accuracy % – the percentage of actions on which the expert and the agent agree. All these criteria are evaluated on a holdout set.

Setup. In the dynamic treatment regime, there is a clinician who acts to improve a sick patient's medical condition. The context (static information) represents patient features which do not change during treatment, such as age and gender. The state summarizes the dynamic measurements of the patient, e.g., blood pressure and EEG readouts. The actions are the forms of intervention a clinician may take, including combinations of various treatments provided in parallel. Dynamic treatment regimes are particularly useful for managing chronic disorders and fit well into the broader paradigm of personalized medicine (Komorowski et al., 2018; Prasad et al., 2017).

The agent needs to choose the right treatment for a patient that is diagnosed with sepsis. We use the MIMIC-III data set (Johnson et al., 2016) and follow the data processing steps that were taken in Jeter et al. (2019). 
As performing off-policy evaluation is not possible using this data set, due to it not satisfying basic requirements (Gottesman et al., 2018; 2019), we designed a simulator of a CMDP. The simulator is based on this data set; a complete overview and explanation of how it was created is provided in the appendix. The mapping W* is linear, W* ∈ R^{8×42}, and we constructed it from the data. In the simulator, the expert acts optimally w.r.t. this W*.

Specifically, when treating a sepsis patient, the clinician has several decisions to make, such as whether or not to provide the patient with vasopressors, drugs which are commonly provided to restore and maintain blood pressure in patients with sepsis. However, what is regarded as healthy blood pressure differs based on the age and weight of the patient (Wesselink et al., 2018). In our setting, W captures this information, as it maps from contextual (e.g., age) and dynamic information (e.g., blood pressure) to reward.

Results. Fig. 3 presents the ability of the descent methods to generalize to unseen contexts by learning from offline data (without supervision). The data is composed of a set of trajectories, i.e., offline data, that were collected from experts (clinicians treating patients). In each iteration, we sample a mini-batch of 10 contexts, i.i.d., from the context distribution. For each context, there is a corresponding expert trajectory of length H = 40. Performance is measured on a holdout set of 300 contexts (that are sampled from the same context distribution) according to Theorem 1. We can see that both ES and PSGD attain near-optimal performance using only previously collected expert trajectories.

Looking at Fig. 3a, we can see that all the algorithms manage to minimize the loss to roughly the same error. The small bias is explained by the fact that we use truncated trajectories (as discussed in the practical MDA paragraph), whereas in the ellipsoid framework experiments we used feature expectations. We can also see that minimizing the loss leads to policies that attain ε-optimal value w.r.t. the true reward (Fig. 3b). Finally, in Fig. 3 we can see that all the algorithms reach around 70% accuracy with respect to the expert policy. We emphasize here that 100% accuracy should not be expected, for two reasons: (i) different policies may have the same feature expectations (hence the same value) but make different decisions; (ii) there exist rewards for which there is more than one optimal policy. Nevertheless, Fig. 3 suggests that accuracy is correlated with minimizing the COIRL loss (Eq. (1)).

Finally, we present results in the nonlinear setting. Here, there is a nonlinear function of the context that determines which one of the two reward coefficient vectors is used, i.e., f*(c) = r_1 if age > 0.1, and r_2 otherwise, where age refers to the normalized age of the patient, which is an element of the context vector. We use a DNN to learn the mapping and follow Section 3.1 (PSGD). As seen in Fig. 4, the PSGD algorithm minimizes the loss and achieves a value that is close to that of the expert. In addition, similarly to Fig. 7, accuracy and performance do not necessarily correlate with one another." }, { "heading": "5 RELATED WORK", "text": "We begin with a short discussion on contextual policies, i.e., a policy that is a function of both the state and the context. While there is empirical evidence that learning such a policy may perform well in practice (e.g., Xu et al. (2018); Fu et al. 
(2019)), from a theoretical point of view, there exist hardness results in this setting. Specifically, given an MDP with k + 1 states, there is a reduction from the problem of training a union of k hyperplanes to that of learning such a policy (see Appendix E for the proof).

Alternatively, one may consider applying an AL algorithm on a single, large MDP that includes all the states and the contexts. For a concrete example, consider a reduction from the CMDP model to a large MDP where each state is expanded by concatenating the context to it. The new states are s′ = (s, c), and the new features are φ(s′) = c ⊗ φ(s). Generally speaking, applying an AL algorithm to this large MDP will give the same scalability and sample complexity as COIRL. However, as the large MDP has |S′| = |C||S| states, computing the optimal policy in each iteration of the algorithm will require at least |C| times more time. To illustrate this problem, we conducted a simple grid-world experiment on a 3×4 grid-world MDP (with 12 states) and one-hot features (φ(s_i) = e_i ∈ R^{12}). The dynamics are deterministic, and the actions correspond to going up, down, left and right (with cyclic transitions on the borders). The contexts correspond to “preferences” on the grid; mathematically, each context is sampled from a uniform distribution over the simplex. The mapping W* is set to be I_{12×12}, and for AL, we let w* be a flattened version of W*.

We compare the performance of PSGD with the projection algorithm of Abbeel & Ng (2004) in Fig. 5. We measure performance by three metrics: run-time, value, and accuracy. Inspecting the results, we can see that AL in the large MDP requires significantly more time to run as the number of contexts grows, while the run time of PSGD (COIRL) is not affected by the number of contexts. We can also see that both methods achieve roughly the same performance: COIRL performs slightly better in terms of value, while AL performs slightly better in terms of accuracy. To conclude, AL on a large MDP does not scale to problems with large context spaces. In addition, this construction is only possible when there is a finite number of contexts, and it does not provide generalization results. We avoided all of these issues in the COIRL framework." }, { "heading": "6 SUMMARY AND DISCUSSION", "text": "In this work, we formulated and studied the COIRL problem. We presented two types of algorithms to solve it: (1) cutting-plane methods (ellipsoid) and (2) iterative descent approaches (MDA and ES). We summarize the theoretical guarantees of the different algorithms in Table 1.

We can see that the iterative descent approaches have a better dependence on dk than the ellipsoid method, i.e., they scale better with the dimensions of the problem. In particular, the EW algorithm has a logarithmic dependence on dk, which makes it computationally comparable to standard IRL/AL (on a single, non-contextual MDP). In addition, the iterative methods extend naturally to the more general scenario where the mapping from contexts to rewards is not linear, and f_W is modeled as a DNN. As Sutton (2019) puts it: “The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin”.

The ellipsoid method has better sample complexity (as a function of ε) than the descent methods in the deterministic setting. However, both methods attain the same complexity in the more realistic, stochastic setting. 
Our empirical findings suggest that the iterative methods always outperform the ellipsoid algorithm. Among these methods, we found the ES method to perform better than the MDA method. Similar findings were reported in Nesterov & Spokoiny (2017) for other convex problems.

The iterative methods have another advantage over the ellipsoid method – they can learn from previously collected demonstrations (i.e., offline learning). The ellipsoid framework, on the other hand, requires expert supervision throughout the entire learning process.

Finally, an attractive property of the ellipsoid learning framework is its safety, i.e., an IRL algorithm that is being supervised by an expert will never perform sub-optimally. In each step, either the agent performs ε-optimally or the expert acts on its behalf (provides a demonstration). This property is appealing in mission-critical domains where errors have a high cost; for instance, in healthcare, a failure may result in a loss of lives. In the experimental section, we have seen that we can use this learning framework for the iterative methods as well, while enjoying improved efficiency." }, { "heading": "Appendices", "text": "" }, { "heading": "CONTENTS", "text": "" }, { "heading": "A Complementary simulations", "text": "A.1 Autonomous driving simulation
A.2 Dynamic treatment regime" }, { "heading": "B Experimental details", "text": "B.1 Autonomous driving simulation
B.2 Dynamic treatment regime" }, { "heading": "C Proofs for Section 3", "text": "" }, { "heading": "D Proofs & pseudo code for Section 3.3", "text": "D.1 Ellipsoid Algorithm for trajectories
D.2 MVEE computation
D.3 Proof of Theorem 3
D.4 Proof of Theorem 4
E Hardness of learning contextual policies" }, { "heading": "A COMPLEMENTARY SIMULATIONS", "text": "" }, { "heading": "A.1 AUTONOMOUS DRIVING SIMULATION", "text": "Similar to Section 4.2, we compare the various methods in the offline framework. We can see that all the algorithms manage to minimize the loss and achieve near-optimal value. We can also see that they achieve high accuracy with respect to the expert policy, but not 100%." }, { "heading": "A.2 DYNAMIC TREATMENT REGIME", "text": "Similar to Section 4.1, we compare the various methods in the ellipsoid framework. We observe that ES outperforms the ellipsoid method. Additionally, we compare the accuracy, i.e., how often the policy derived from W matches the expert's policy, which is derived from W*. As IRL is not a supervised learning problem, we observe that while there is a correlation between success in the task and the ability to act similarly to the expert's policy, this correlation is not strict, in the sense that the agent is capable of finding near-optimal policies with a relatively high miss rate (accuracy of approximately 80%). For more intuition, see the proof of Lemma 1." }, { "heading": "B EXPERIMENTAL DETAILS", "text": "In this section, we describe the technical details of our experiments, including the hyper-parameters used. 
To solve MDPs, we use value iteration. Our implementation is based on a stopping condition with a tolerance threshold τ, such that the algorithm stops if |V_t − V_{t−1}| < τ. In the driving simulation we used τ = 10^{−4}, and in the sepsis treatment we used τ = 10^{−3}." }, { "heading": "B.1 AUTONOMOUS DRIVING SIMULATION", "text": "The environment is modeled as a tabular MDP that consists of 1531 states. The speed is selected once, at the initial state, and is kept constant afterward. The other 1530 states are generated by 17 X-axis positions for the agent's car, 3 available speed values, 3 lanes, and 10 Y-axis positions in which car B may reside. During the simulation, the agent controls the steering direction of the car, moving left or right, i.e., two actions.

In these experiments, we define our mappings in a way that induces different behaviours for different contexts, making generalization a more challenging task. Specifically, for the linear setting we use, before normalization, W* = [[−1, 0.75, 0.75], [0.5, −1, 1], [0.75, 1, −0.75]]. For our nonlinear mapping, contexts with ‖c‖_∞ > 0.55 are mapped to the reward coefficient vector (1, −1, −0.05); otherwise they are mapped to (−0.01, 1, −1). These induce the feature expectations (9.75, 3.655, 5) and (5.25, 5, 2.343), respectively. The decision regions for the nonlinear mapping are visualized in Appendix B.1. The contexts are sampled uniformly in the 2-dimensional simplex. We evaluate all algorithms on the same sequences of contexts, and average the results over 20 such sequences. The algorithms were modified to fit the ellipsoid framework; instead of iterating over the whole data set, the algorithms iterate over the given expert feature expectations, one at a time, until convergence, i.e., at every time step a new demonstration is presented.

Hyper-parameter selection and adjustments:

Ellipsoid Framework: For the linear model, the algorithms maintained a 3×3 matrix to estimate W*. Ellipsoid: By definition, the ellipsoid algorithm is hyper-parameter free and does not require tuning.

PSGD: The algorithm was executed with the parameters α_0 = 0.3, α_t = 0.9^t α_{t−1}, and iterated for 40 epochs. An outer decay on the step size was added for faster convergence: the initial α_0 becomes 0.94 · α_0 every time a demonstration is presented. The gradient g_t is normalized to g_t = g_t/‖g_t‖_∞, and the calculated step is taken if c^T W_t (µ(π̂^t_c) − µ(π*_c)) > c^T W_{t+1} (µ(π̂^{t+1}_c) − µ(π*_c)), where π̂^t_c denotes the optimal policy for a context c according to W_t.

ES: The algorithm was executed with the parameters σ = 10^{−3}, m = 250, α = 0.1 with a decay rate of 0.95, for 50 iterations, where each step did not iterate randomly over a single context, but rather used the entire training set (all of the observed contexts and expert demonstrations up to the current time step). The matrix was normalized according to ‖·‖_2, and so was the step calculated by the ES algorithm, before it was multiplied by α and applied.

For the nonlinear setting, the model used for the ES method was a fully connected DNN, with layers of sizes 15, 10, 5, 3. The activation function used was the leaky ReLU function, with a parameter of α = 0.1. Note that we cannot normalize the parameters here as in the linear case; therefore, an L2-normalization layer is added to the output. The same parameters were used as in the linear case, except with 120 iterations over the entire training set. They were originally optimized for this model and setting and worked as-is for the linear environment. 
As we aim to estimate the gradient, a small σ = 10^{−3} was used and performed best. The number of points, m = 250, was selected as fewer points produced noisy results. The step size, decay rate, and number of iterations were 0.1, 0.96, and 120, respectively, and were selected in a way that produced fast yet accurate convergence of the loss. Note that here the steps were also normalized before application, and the normalization was applied per layer. For PSGD, a similar network was used. Specifically, it had layer sizes 14, 10, 6, 3, and the same leaky ReLU activation function was used in this network. In parallel to the normalization used for the ES model, here we used batch normalization and gradient clipping. The learning rate was set to 0.1 · 0.98^t and 120 iterations were performed. For this result, as with the ES method, the batch consisted of all available training data.

Offline Framework: In the offline framework we compute the subgradients using expert trajectories of length 40, instead of the feature expectations. In this framework, at every iteration we sample a mini-batch of 10 contexts (from a finite set) and their corresponding trajectories (sampled from the expert policy and dynamics), then take one descent step according to them. Generalization is measured over 80 holdout contexts, referred to as the test set, where the W that is used to calculate the feature expectations of the agent is fitted to the EW algorithm's requirement of lying in the (dk−1)-simplex. The PSGD and EW algorithms are configured as the theory specifies, where each descent step is calculated from the whole batch. The ES algorithm is applied with the parameters σ = 10^{−3}, m = 500, α = 0.1 with decay rate 0.95, for every iteration." }, { "heading": "B.2 DYNAMIC TREATMENT REGIME", "text": "The environment we describe in Section 4.2 simulates a decision-making process for treating sepsis. Sepsis is a life-threatening severe infection, and the treatment applied to a sepsis patient is crucial for saving their life. To create a sepsis-treatment simulator, we leverage the MIMIC-III data set (Johnson et al., 2016). This data set includes data from hospital electronic databases, social security, and archives from critical care information systems, acquired during routine hospital care. We follow the data processing steps that were taken in Jeter et al. (2019) to extract the relevant data in the form of normalized measurements of sepsis patients during their hospital admission and the treatments that were given to each patient. The measurements include dynamic measures, e.g., heart rate, blood pressure, weight, body temperature, blood analysis standard measures (glucose, albumin, platelet count, minerals, etc.), as well as static measures such as age, gender, re-admission (of the patient), and more.

The processed data from Jeter et al. (2019) consists of 5366 trajectories, each representing the sequential treatment provided by a clinician to a patient. At each time-step, the available information for each patient consists of 8 static measurements and 41 dynamic measurements. 
In addition, each trajectory contains the reported actions performed by the clinician (the amounts of fluids and vasopressors given to the patient at each time-step, binned to 25 different values), and there is a mortality signal which indicates whether the patient was alive 90 days after hospital admission.

In order to create a tabular CMDP from the processed data, we separate the static measurements of each patient and keep them as the context. We cluster the dynamic measurements using K-means (MacQueen et al., 1967). Each cluster is considered a state, and the coordinates of the cluster centroids are taken as its features φ(s). We construct the transition kernel between the clusters using the empirical transitions in the data, given the state and the performed actions (a minimal code sketch of this construction appears at the end of this subsection). Two states are added to the MDP and the feature vector is extended by 1 element, corresponding to whether or not the patient died within the 90 days following hospital release. This added feature receives a value of 0 on all non-terminal states, a value of −0.5 for the state representing the patient's death, and 0.5 for the one representing survival. In addition, as the data is limited, not all state-action pairs are available. In order to ensure the agent does not attempt to perform an action for which the outcome is unknown, we add an additional terminal state. At this state, all features are set to −1 to make it clearly distinguishable from all other states in the CMDP.

In our simulator, we used the same structure as the raw data, i.e., we used the same contexts prevalent in the data and the same initial state distribution. Each context is projected onto the simplex, and the expert's feature expectations for each context are attained by solving the CMDP. While we focus on a simulator, as it allows us to analyze the performance of the algorithms, our goal is to have a reward structure which is influenced by the data. Hence, we produce W* by running the ellipsoid algorithm on trajectories obtained from the data. As done in the autonomous driving simulation, the algorithms were modified to fit the ellipsoid framework.

Hyper-parameter selection and adjustments:

Ellipsoid Framework: ES algorithm: the same method as in the autonomous driving domain is applied, with the parameters σ = 10^{−4}, m = 1000, α = 0.25 with decay rate 0.95, for 80 iterations over the entire training set.

Offline Framework: In the offline framework for the linear setting, we compute the subgradients using expert trajectories of length 40, instead of the exact feature expectations. At every iteration, we sample a mini-batch of 10 contexts (from a finite set) and their corresponding trajectories (sampled from the expert policy and dynamics) and perform a single descent step. Generalization is measured over a set of 300 holdout contexts, referred to as the test set, where W is fitted to lie in the (dk−1)-simplex. The PSGD and EW algorithms are configured as specified by the theory, where each descent step is calculated from the entire batch. The ES algorithm is applied with the parameters σ = 10^{−4}, m = 1000, α = 0.3 with decay rate 0.95, for every iteration.

For the nonlinear setting, the model used for the ES method was a fully connected DNN, with layers of sizes 24, 42, 42, which include a bias term. The activation function used was the leaky ReLU function, with a parameter of α = 0.1. In this setting we use trajectories of length 80 and a mini-batch of 32 contexts. 
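For reference, the tabular CMDP construction referred to above (clusters as states, centroids as features, empirical transitions) can be sketched as follows. This is our own illustrative fragment, using scikit-learn's KMeans; the cluster count and array names are assumptions, and in the actual construction unseen state-action pairs are additionally routed to the extra terminal state described above.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_tabular_cmdp(measurements, next_measurements, actions,
                       n_states=750, n_actions=25):
    """measurements / next_measurements: N x m arrays of consecutive dynamic
    measurements; actions: N binned action indices. Returns (Phi, P)."""
    km = KMeans(n_clusters=n_states).fit(measurements)
    Phi = km.cluster_centers_                    # state features phi(s)
    s = km.labels_                               # state index for each sample
    s_next = km.predict(next_measurements)
    counts = np.zeros((n_states, n_actions, n_states))
    np.add.at(counts, (s, actions, s_next), 1.0) # empirical transition counts
    # Normalize counts into a kernel; rows never observed stay all-zero and
    # would be redirected to the terminal "unknown outcome" state.
    P = counts / np.maximum(counts.sum(axis=2, keepdims=True), 1.0)
    return Phi, P
```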
The PSGD algorithm uses the gradient computed by back-propagation of the loss function value on the DNN, where each descent step is calculated from the entire batch. The algorithm is applied with modified parameters; α = 0.1 with decay rate 0.98, for every iteration." }, { "heading": "C PROOFS FOR SECTION 3", "text": "Definition 1 (Bregman distance). Let ψ :W → R be strongly convex and continuously differential in the interior ofW . The Bregman distance is defined byDψ(x, y) = ψ(x)−ψ(y)−(x−y) ·∇ψ(y), where ψ is strongly convex with parameter σ. Definition 2 (Conjugate function). The conjugate of a function ψ(y), denoted by ψ∗(y) is maxx∈W {x · y − ψ(x)}.\nExample: let ‖ · ‖ be a norm on Rn. The associated dual norm, denoted ‖ · ‖∗, is defined as ‖z‖∗ = sup{zᵀx | ‖x‖ ≤ 1}. The dual norm of ‖ · ‖2 is ‖ · ‖2, and the dual norm of ‖ · ‖1 is ‖ · ‖∞.\nBefore we begin with the proof of Lemma 2, we make the following observation. By definition, π̂c(W ) is the optimal policy w.r.t. cTW ; i.e., for any policy π we have that\ncTW · µ(π̂c(W )) ≥ cTW · µ(π). (2)" }, { "heading": "Proof of Lemma 1.", "text": "Fix W . For any context c, we have that µ(π̂c) is the optimal policy w.r.t. reward fW (c), thus, fW (c) · ( µ(π̂c(W )) − µ(π∗c ) ) ≥ 0. Therefore we get that Loss(W ) ≥ 0. For W ∗, we have that µ(π̂c(W )) = µ(π ∗ c ), thus Loss(W ∗) = 0.\nFor the second statement, note that Loss(W ) = 0 implies that ∀c, fW (c)· ( µ(π̂c(W ))−µ(π∗c ) ) = 0. This can happen in one of two cases. (1) µ(π̂c(W )) = µ(π∗c ), in this case π ∗ c , π̂c(W ) have the same feature expectations. Therefore, they are equivalent in terms of value. (2) µ(π̂c(W )) 6= µ(π∗c ), but fW (c) · ( µ(π̂c(W ))− µ(π∗c ) ) = 0. In this case, π∗c , π̂c(W ) have different feature expectations, but still achieve the same value w.r.t. reward fW (c). Since π̂c(W ) is an optimal policy w.r.t. this reward, so is π∗c ." }, { "heading": "Proof of Lemma 2.", "text": "1. We need to show that ∀W1,W2 ∈ W,∀λ ∈ [0, 1], we have that\nLlin(λW1 + (1− λ)W2) ≤ λLlin(W1) + (1− λ)Llin(W2)\nLlin(λW1 + (1− λ)W2) = Ec [ cT (λW1 + (1− λ)W2) · ( µ ( π̂c ( λW1 + (1− λ)W2 )) − µ(π∗c ) )] = λEc [ cTW1 · ( µ ( π̂c ( λW1 + (1− λ)W2 )) − µ(π∗c )\n)] + (1− λ)Ec [ cTW2 · ( µ ( π̂c ( λW1 + (1− λ)W2 )) − µ(π∗c )\n)] ≤ λEc [ cTW1 · ( µ ( π̂c ( W1 )) − µ(π∗c ) )] + (1− λ)Ec [ cTW2 · ( µ ( π̂c ( W2 )) − µ(π∗c )\n)] = λLlin(W1) + (1− λ)Llin(W2),\nwhere the inequality follows from Eq. (2).\n2. Fix z ∈ W. We have that Llin(z) = Ec [ cT z · ( µ(π̂c(z))− µ(π∗c ) )] ≥ Ec [ cT z · ( µ(π̂c(W ))− µ(π∗c )\n)] = Llin(W ) + (z −W ) · Ec [ c ( µ(π̂c(W ))− µ(π∗c ) )] ,\nwhere the inequality follows from Eq. (2).\n3. Recall that a bound on the dual norm of the subgradients implies Lipschitz continuity for convex functions. Thus it is enough to show that ∀W ∈ W, ‖g(W )‖p =\n‖Ec [ c ( µ(π̂c(W ))− µ(π∗c ) )] ‖p ≤ L. Let p =∞, we have that\n‖g(W )‖∞ = ∥∥∥Ecc (µ(π̂c(W ))− µ(π∗c ))∥∥∥∞\n≤ Ec‖c ( µ(π̂c(W ))− µ(π∗c ) ) ‖∞ (Jensen inequality)\n≤ Ec‖c‖∞ ‖µ(πi)− µ(πj)‖∞ ≤ 2\n1− γ . (3)\nwhere in Eq. (3) we used the fact that ∀π we have that ‖µ(π)‖∞ ≤ 1 1−γ , thus, for any πi, πj , ‖µ(πi)− µ(πj)‖∞ ≤ 2 1−γ Therefore, L = 21−γ w.r.t. ‖·‖∞. Since ‖·‖2 ≤ √ dk ‖·‖∞ we get that L = 2 √ dk 1−γ w.r.t. ‖·‖2 ." }, { "heading": "D PROOFS & PSEUDO CODE FOR SECTION 3.3", "text": "" }, { "heading": "D.1 ELLIPSOID ALGORITHM FOR TRAJECTORIES", "text": "Algorithm 4 Batch ellipsoid algorithm for COIRL\nInitialize: Θ0 ← B∞(0, 1) = {x ∈ Rd·k : ||x||∞ ≤ 1} Θ1 ← MVEE(Θ0) i← 0, Z̄ ← 0, Z̄∗ ← 0 for t = 1, 2, 3, ... 
do\nct is revealed; let W_t be the center of Θt. Play an episode using π̂_t = arg max_π V^π_{c_t^T W_t}. Θ_{t+1} ← Θ_t.\nif a sub-optimal action a is played at state s then\nExpert provides an H-step trajectory (s^E_0 = s, s^E_1, ..., s^E_H). Let x̂^{∗,H}_i be the H-step sample of the expert’s feature expectations for ξ′_i = 1_s: x̂^{∗,H}_i = Σ_{h=0}^{H} γ^h φ(s^E_h). Let x_i be the agent’s feature expectations for ξ′_i: E_{ξ′_i,P,π_t}[Σ_{h=0}^{∞} γ^h φ(s_h)]. Denote z_i = c_t ⊗ x_i and ẑ^{∗,H}_i = c_t ⊗ x̂^{∗,H}_i (with ⊗ defined in Appendix D.3). i ← i + 1, Z̄ ← Z̄ + z_i, Z̄∗ ← Z̄∗ + ẑ^{∗,H}_i.\nif i = n then\nΘ_{t+1} ← MVEE({θ ∈ Θ_t : (θ − W_t)^T · (Z̄∗/n − Z̄/n) ≥ 0}); i ← 0, Z̄ ← 0, Z̄∗ ← 0" }, { "heading": "D.2 MVEE COMPUTATION", "text": "This computation is commonly found in optimization lecture notes and textbooks. First, we define an ellipsoid by {x : (x − c)^T Q^{−1}(x − c) ≤ 1} for a vector c, the center of the ellipsoid, and an invertible matrix Q. Our first task is computing Θ1, the MVEE of the initial feasibility set Θ0 = B∞(0, 1) = {x ∈ R^{d·k} : ||x||∞ ≤ 1}. The result is of course a sphere around 0: c_1 = 0, Q_1 = dk·I.\nFor the update Θ_{t+1} ← MVEE({θ ∈ Θ_t : (θ − W_t)^T · a_t ≥ 0}), we define ã_t = −a_t/√(a_t^T Q_t a_t) and calculate the new ellipsoid by c_{t+1} = c_t − (1/(dk + 1))·Q_t ã_t and Q_{t+1} = (d²k²/(d²k² − 1))·(Q_t − (2/(dk + 1))·Q_t ã_t ã_t^T Q_t)." }, { "heading": "D.3 PROOF OF THEOREM 3", "text": "For simpler analysis, we define a ”flattening” operator, converting a matrix to a vector, R^{d×k} → R^{d·k}, by W = [w_{1,1}, . . . , w_{1,k}, . . . , w_{d,1}, . . . , w_{d,k}]. We also define the operator ⊗ to be the composition of the flattening operator and the outer product: u ⊗ v = [u_1v_1, . . . , u_1v_k, . . . , u_dv_1, . . . , u_dv_k]. Therefore, the value of policy π for context c is given by V^π_{c^T W∗} = c^T W∗ µ(π) = W∗^T (c ⊗ µ(π)), where ||W∗||∞ ≤ 1 and ||c ⊗ µ(π)||_1 ≤ k/(1 − γ).\nLemma 3 (Boyd & Barratt (1991)). If B ⊆ R^D is an ellipsoid with center w, and x ∈ R^D \ {0}, we define B+ = MVEE({θ ∈ B : (θ − w)^T x ≥ 0}); then: Vol(B+)/Vol(B) ≤ e^{−1/(2(D+1))}.\nProof of Theorem 3. We prove the theorem by showing that the volume of the ellipsoids Θ_t for t = 1, 2, ... is bounded from below. In conjunction with Lemma 3, which states that there is a minimal rate of decay in the ellipsoid volume, this shows that the number of times the ellipsoid is updated is polynomially bounded. We begin by showing that W∗ always remains in the ellipsoid. We note that in rounds where V^{π∗}_{c_t^T W∗} − V^{π̂_t}_{c_t^T W∗} > ε, we have W∗^T (c_t ⊗ (µ(π∗_{c_t}) − µ(π̂_t))) > ε. In addition, as the agent acts optimally w.r.t. the reward r_t = c_t^T W_t, we have that W_t^T (c_t ⊗ (µ(π∗_{c_t}) − µ(π̂_t))) ≤ 0. Combining these observations yields:\n(W∗ − W_t)^T · (c_t ⊗ (µ(π∗_{c_t}) − µ(π̂_t))) > ε > 0. (4)\nThis shows that W∗ is never disqualified when updating Θ_t. Since W∗ ∈ Θ0, this implies that ∀t: W∗ ∈ Θ_t. Now we show that not only does W∗ remain in the ellipsoid, but also a small ball surrounding it. If θ is disqualified by the algorithm, then (θ − W_t)^T · (c_t ⊗ (µ(π∗_{c_t}) − µ(π̂_t))) < 0. Multiplying this inequality by −1 and adding it to (4) yields: (W∗ − θ)^T · (c_t ⊗ (µ(π∗_{c_t}) − µ(π̂_t))) > ε. We apply Hölder’s inequality to the LHS: ε < LHS ≤ ||W∗ − θ||∞ · ||c_t ⊗ (µ(π∗_{c_t}) − µ(π̂_t))||_1 ≤ (2k/(1 − γ))·||W∗ − θ||∞. Therefore, for any disqualified θ: ||W∗ − θ||∞ > ε(1 − γ)/(2k); thus B∞(W∗, ε(1 − γ)/(2k)) is never disqualified. This implies that ∀t: vol(Θ_t) ≥ vol(Θ0 ∩ B∞(W∗, ε(1 − γ)/(2k))) ≥ vol(B∞(W∗, ε(1 − γ)/(4k))). Finally, let M_T be the number of rounds up to time T in which V^{π∗}_{c_t^T W∗} − V^{π̂_t}_{c_t^T W∗} > ε.
Using Lemma 3 we get that: M_T/(2(dk + 1)) ≤ log(vol(Θ1)) − log(vol(Θ_{T+1})) ≤ log(vol(MVEE(B∞(0, 1)))) − log(vol(B∞(0, ε(1 − γ)/(4k)))) ≤ log(vol(MVEE(B2(0, √dk)))) − log(vol(B2(0, ε(1 − γ)/(4k)))) ≤ log((4k√dk/(ε(1 − γ)))^{dk}) ≤ dk·log(4k√dk/(ε(1 − γ))). Therefore M_T ≤ 2dk(dk + 1)·log(4k√dk/(ε(1 − γ))) = O(d²k²·log(dk/(ε(1 − γ))))." }, { "heading": "D.4 PROOF OF THEOREM 4", "text": "Lemma 4 (Azuma’s inequality). For a martingale {S_i}_{i=0}^{n}, if |S_i − S_{i−1}| ≤ b a.s. for i = 1, ..., n:\nP(|S_n − S_0| > b√(2n·log(2/δ))) < δ.\nProof of Theorem 4. We first note that we may assume that for any t: ||W∗ − W_t||∞ ≤ 2. If W_t ∉ Θ0, we update the ellipsoid by Θ_t ← MVEE({θ ∈ Θ_t : (θ − W_t)^T · e_j ≶ 0}), where e_j is the indicator vector of the coordinate j in which W_t exceeds 1, and the inequality direction depends on the sign of (W_t)_j. If W_t ∉ Θ0 still, this process can be repeated for a finite number of steps until W_t ∈ Θ0, as the volume of the ellipsoid is bounded from below and each update reduces the volume (Lemma 3). Now we have W_t ∈ Θ0, implying ||W∗ − W_t||∞ ≤ 2. As no points of Θ0 are removed this way, this does not affect the correctness of the proof. Similarly, we may assume ||W∗_t − W_t||∞ ≤ 2, as W∗_t ∈ Θ0. We denote by W the value of W_t, which remains constant for each update in the batch. We define t(i) to be the timesteps corresponding to the demonstrations in the batch, for i = 1, ..., n. We define z^{∗,H}_i to be the expected value of ẑ^{∗,H}_i, and z∗_i to be the outer product of c_{t(i)} and the feature expectations of the expert policy for W∗_{t(i)}, c_{t(i)}, ξ′_{t(i)}. We also denote W∗_{t(i)} by W∗_i. We bound the following term from below, as in Theorem 3:\n(W∗ − W)^T · (Z̄∗/n − Z̄/n) = (1/n)·Σ_{i=1}^{n}(W∗ − W)^T · (ẑ^{∗,H}_i − z_i) = (1/n)·Σ_{i=1}^{n}(W∗ − W)^T · (z∗_i − z_i) + (1/n)·Σ_{i=1}^{n}(W∗ − W)^T · (z^{∗,H}_i − z∗_i) + (1/n)·Σ_{i=1}^{n}(W∗ − W)^T · (ẑ^{∗,H}_i − z^{∗,H}_i) = (1/n)·Σ_{i=1}^{n}(W∗_i − W)^T · (z∗_i − z_i) [term (1)] + (1/n)·Σ_{i=1}^{n}(W∗ − W∗_i)^T · (z∗_i − z_i) [term (2)] + (1/n)·Σ_{i=1}^{n}(W∗ − W)^T · (z^{∗,H}_i − z∗_i) [term (3)] + (1/n)·Σ_{i=1}^{n}(W∗ − W)^T · (ẑ^{∗,H}_i − z^{∗,H}_i) [term (4)].\n(1): Since the sub-optimality criterion implies a difference in value of at least ε for the initial distribution which assigns 1 to the state where the agent errs, we may use arguments identical to the previous proof. Therefore, the term is bounded from below by ε.\n(2): By assumption, ||W∗ − W∗_i||∞ ≤ ε(1 − γ)/(8k); thus, since ||z∗_i − z_i||_1 ≤ 2k/(1 − γ), by Hölder’s inequality the term is bounded by ε/4.\n(3): We have ||x^{∗,H}_i − x∗_i||_1 ≤ kγ^H/(1 − γ) from the definitions; thus ||z^{∗,H}_i − z∗_i||_1 ≤ kγ^H/(1 − γ), since c ∈ ∆^{d−1}. As mentioned previously, we may assume ||W∗ − W_t||∞ ≤ 2; therefore, by Hölder’s inequality the term is bounded by ε/4 due to our choice of H: γ^H = (1 − (1 − γ))^H ≤ ((1 − (1 − γ))^{1/(1−γ)})^{(1−γ)H} = ((1 − (1 − γ))^{1/(1−γ)})^{log(8k/(ε(1−γ)))} ≤ e^{−log(8k/(ε(1−γ)))} = ε(1 − γ)/(8k).\n(4): The partial sums Σ_{i=1}^{N}(W∗ − W)^T · (z^{∗,H}_i − ẑ^{∗,H}_i) for N = 0, ..., n form a martingale sequence. Note that ||z^{∗,H}_i||_1 ≤ k/(1 − γ) and ||ẑ^{∗,H}_i||_1 ≤ k/(1 − γ).
Also, we have that ||W∗ − W_t||∞ ≤ 2; thus, we can apply Azuma’s inequality (Lemma 4) with b = 4k/(1 − γ), and with our chosen n this yields: Σ_{i=1}^{n}(W∗ − W)^T · (z^{∗,H}_i − ẑ^{∗,H}_i) ≤ nε/4 with probability of at least 1 − δ/(2dk(dk + 1)·log(16k√dk/(ε(1 − γ)))).\nThus (W∗ − W)^T · (Z̄∗/n − Z̄/n) > ε/4, and as in Theorem 3 this shows that B∞(W∗, ε(1 − γ)/(8k)) is never disqualified, and the number of updates is bounded by 2dk(dk + 1)·log(16k√dk/(ε(1 − γ))); multiplied by n, this yields the upper bound on the number of rounds in which a sub-optimal action is chosen. By a union bound, the required bound for term (4) holds in all updates with probability of at least 1 − δ." }, { "heading": "E CONTEXTUAL POLICIES", "text": "Consider the following problem. Given a complete specification of an MDP and a hypothesis class H, for each state s assign a hypothesis h_s : C → A such that the return is maximized. The following theorem shows that it is NP-complete to find such a policy. We will use the class of linear separators. We define the following contextual MDP training problem: we are given an MDP with only transitions, and a sample of m contexts and their rewards (namely, for each context c_i we specify the rewards for each state and action). Theorem 5. There is a reduction from the training problem of the union of k hyperplanes to the contextual MDP training problem with k + 1 states.\nProof. Consider the following MDP, which has two parameters r_0 and r_1 that define the rewards for each context. The MDP is composed of a line of k internal states and one sink state. Each internal state i ∈ [1, k − 1] has two actions: action 1 leads to the sink state with reward r_1, and action 0 leads to the next internal state i + 1 with reward 0. In internal state k, action 1 leads to the sink state with reward r_1 and action 0 leads to the sink state with reward r_0. In the sink state there is a single action that stays in the sink state and has reward 0.\nThe context space is C = R^d. The hypothesis class includes all hyperplanes; each hyperplane is characterized by a weight vector w ∈ R^d, and if w^T c ≥ 0 then the action is 1, and otherwise it is 0. Given a sample of size m for the training problem, (c_i, y_i), we generate rewards for the contextual MDP training problem by using the context c_i and specifying the parameters r_0 and r_1. Specifically, we set r_0 = 1 − y_i and r_1 = y_i. Given k hyperplanes w_1, . . . , w_k and m examples, assume that they make e errors. By using those k hyperplanes in the k internal nodes, for each context c_i we get a reward of 1 iff the union of the k hyperplanes classifies it correctly. This implies that we get a return of m − e, since each example that is labeled correctly gets a reward of 1 and each incorrect example gets a reward of 0.\nGiven an assignment of k hyperplanes to the internal nodes which achieves a return of m − e, we output as a hypothesis the union of the k hyperplanes. Again, the number of errors on the sample is exactly e." } ]
2019
null
SP:5406872be7f8a36576284f9a18ecb76d658bf25c
[ "The authors derive the influence function of models that are first pre-trained and then fine-tuned. This extends influence functions beyond the standard supervised setting that they have been primarily considered in. To do so, the authors make two methodological contributions: 1) working through the calculus for the pre-training setting and deriving a corresponding efficient algorithm, and 2) adding $L_2$ regularization to approximate the effect of fine-tuning for a limited number of gradient steps.", "This is an analysis paper of pretraining with the tool “influence function”. First, the authors calculate the influence score for the models with/without pretraining, and then propose some implementation details (i.e., use CG to estimate the inversed Hessian). To calculate the influence function of a model with pretraining, the authors use an approximation f(w)+||w-w*||, where w* is pretrained. " ]
Multi-stage training and knowledge transfer from a large-scale pretraining task to various finetuning tasks have revolutionized natural language processing (NLP) and computer vision (CV), with state-of-the-art performances constantly being improved. In this paper, we develop a multi-stage influence function score to track predictions from a finetuned model all the way back to the pretraining data. With this score, we can identify the pretraining examples in the pretraining task that contribute most to a prediction in the finetuning task. The proposed multistage influence function generalizes the original influence function for a single model in Koh & Liang (2017), thereby enabling influence computation through both pretrained and finetuned models. We study two different scenarios with the pretrained embeddings fixed or updated in the finetuning tasks. We test our proposed method in various experiments to show its effectiveness and potential applications.
[]
[ { "authors": [ "Rie Kubota Ando", "Tong Zhang" ], "title": "A framework for learning predictive structures from multiple tasks and unlabeled data", "venue": "J. Mach. Learn. Res.,", "year": 2005 }, { "authors": [ "Marc-Etienne Brunet", "Colleen Alkalay-Houlihan", "Ashton Anderson", "Richard Zemel" ], "title": "Understanding the origins of bias in word embeddings", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Ciprian Chelba", "Tomas Mikolov", "Mike Schuster", "Qi Ge", "Thorsten Brants", "Phillipp Koehn", "Tony Robinson" ], "title": "One billion word benchmark for measuring progress in statistical language modeling", "venue": "arXiv preprint arXiv:1312.3005,", "year": 2013 }, { "authors": [ "R. Dennis Cook", "Sanford Weisberg" ], "title": "Characterizations of an empirical influence function for detecting influential cases in regression", "venue": null, "year": 1980 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Tian Guo", "Tao Lin", "Nino Antulov-Fantulin" ], "title": "Exploring interpretable LSTM neural networks over multi-variable data", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Deep residual learning for image recognition", "venue": "In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Pang Wei Koh", "Percy Liang" ], "title": "Understanding black-box predictions via influence functions", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Pang Wei Koh", "Kai-Siang Ang", "Hubert H.K. 
Teo", "Percy Liang" ], "title": "On the accuracy of influence functions for measuring group effects", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Hugo Larochelle", "Dumitru Erhan", "Yoshua Bengio" ], "title": "Zero-data learning of new tasks", "venue": "In Proceedings of the 23rd National Conference on Artificial Intelligence - Volume 2,", "year": 2008 }, { "authors": [ "Dhruv Mahajan", "Ross Girshick", "Vignesh Ramanathan", "Kaiming He", "Manohar Paluri", "Yixuan Li", "Ashwin Bharambe", "Laurens van der Maaten" ], "title": "Exploring the limits of weakly supervised pretraining", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Greg S Corrado", "Jeff Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Jose Oramas", "Kaili Wang", "Tinne Tuytelaars" ], "title": "Visual explanation by interpretation: Improving visual feedback capabilities of deep neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher Manning" ], "title": "Glove: Global vectors for word representation", "venue": "In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP),", "year": 2014 }, { "authors": [ "Matthew E Peters", "Mark Neumann", "Mohit Iyyer", "Matt Gardner", "Christopher Clark", "Kenton Lee", "Luke Zettlemoyer" ], "title": "Deep contextualized word representations", "venue": "arXiv preprint arXiv:1802.05365,", "year": 2018 }, { "authors": [ "Jacob Steinhardt", "Pang Wei Koh", "Percy Liang" ], "title": "Certified defenses for data poisoning attacks", "venue": "In Proceedings of the 31st International Conference on Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Hao Wang", "Berk Ustun", "Flávio P. Calmon" ], "title": "Repairing without retraining: Avoiding disparate impact with counterfactual distributions", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "dataset Chelba" ], "title": "For the finetuned model, we add a hidden layer with 64 neurons", "venue": null, "year": 2013 } ]
[ { "heading": null, "text": "Multi-stage training and knowledge transfer from a large-scale pretraining task to various finetuning tasks have revolutionized natural language processing (NLP) and computer vision (CV), with state-of-the-art performances constantly being improved. In this paper, we develop a multi-stage influence function score to track predictions from a finetuned model all the way back to the pretraining data. With this score, we can identify the pretraining examples in the pretraining task that contribute most to a prediction in the finetuning task. The proposed multistage influence function generalizes the original influence function for a single model in Koh & Liang (2017), thereby enabling influence computation through both pretrained and finetuned models. We study two different scenarios with the pretrained embeddings fixed or updated in the finetuning tasks. We test our proposed method in various experiments to show its effectiveness and potential applications." }, { "heading": "1 INTRODUCTION", "text": "Multi-stage training has become increasingly important and has achieved state-of-the-art results in many tasks. In NLP applications, it is now a common practice to first learn word embeddings (e.g., word2vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014)) or contextual representations (e.g., ELMo (Peters et al., 2018), BERT (Devlin et al., 2018)) from a large unsupervised corpus, and then refine or finetune the model on supervised end tasks. In computer vision applications, it is common to use a pretrained CNN as a feature extractor and only finetune top-layer networks through training on the end task. Also, it has been demonstrated that pretraining ResNet (He et al., 2016) with large hashtag data can greatly benefit many end tasks (Mahajan et al., 2018). Intuitively, the successes of these multi-stage learning paradigms are due to knowledge transfer from pretraining tasks to the end task. However, current approaches using multi-stage learning are usually based on trial-and-error and many fundamental questions remain unanswered. For example, which part of the pretraining data/task contributes most to the end task? How can one detect “false transfer” where some pretraining data/task could be harmful for the end task? If a testing point is wrongly predicted by the finetuned model, can we trace back to the problematic examples in the pretraining data? Answering these questions requires a quantitative measurement of how the data and loss function in the pretraining stage influence the end model, which has not been studied in the past and will be the main focus of this paper.\nTo find the most influential training data responsible for a model’s prediction, the influence function was first introduced by Cook & Weisberg (1980) from a robust statistics point of view. Recently, as large-scale applications become more challenging for influence function computation, Koh & Liang (2017) proposed to use a first-order approximation to measure the effect of removing one training point on the model’s prediction, to overcome computational challenges. More broadly, there are many works using influence functions to investigate the impact of training data on models in various machine learning applications, such as tracing back the origins of bias in the word embeddings generated by GloVe (Brunet et al., 2019), and understanding and mitigating disparate impact to improve model fairness (Wang et al., 2019). 
However, all of the existing influence score computation algorithms study the case of single-stage training – where there is only one model with one set of training/prediction data in the training process. To the best of our knowledge, the influence of pretraining data on a subsequent finetuning task and model has not been studied, and it is nontrivial to apply the original influence function in (Koh & Liang, 2017) to this scenario.\nIn this work, we derive the influence function from pretraining data to the end task in multi-stage training. Since the computation involves several expensive Hessian vector products, we also show how to compute the influence function efficiently in large-scale problems. Based on this technique, we show that\n• in real datasets and experiments across various vision and NLP tasks, predictions using the technique and actual influence for the pretraining data to the finetuned model are highly correlated (Pearson’s r score to be around 0.6). This shows the effectiveness of our proposed technique for computing the influence scores in multi-stage models; • the influence for the pretraining data to the finetuned model can be split into two parts: the\ninfluence of the pretraining data on the pretrained model, and influence of the pretraining data on the finetuned model. Therefore the testing data from the finetuning task will be impacted by changes in the pretraining data, which can be quantified using our proposed technique; • the influence of the pretraining data on the finetuning task is highly dependent on 1) the\nsimilarity of two tasks or stages, 2) and the number of training data in the finetuning task. Thus our proposed technique provides a novel way to measure how the pretraining data helps or benefits the finetuning data." }, { "heading": "2 RELATED WORK", "text": "Multi-stage model training that trains models in many stages on different tasks to improve the endtask has been used widely in many machine learning areas, such as in transfer learning (Ando & Zhang, 2005) and zero-shot learning (Larochelle et al., 2008). Recently, multi-stage model training has achieved state-of-the-art performance by learning large embeddings or data representation as the pretraining step on a very large pretraining dataset, which is then followed by a finetune step with further training on the end-task. Examples include the recently proposed BERT (Devlin et al., 2018), which learns contextual embeddings on a large corpus with the pretraining tasks chosen to be predicting the masked words in a sentence and predicting whether one sentence is after another sentence. This contextual embedding is then used in finetuning tasks, such as a question answering task. ELMo (Peters et al., 2018) is widely used in multi-stage model training as a sentence feature extractor to benefit the end-task. Similarly, there are some works in computer vision that train an image representation model on a large number of images as the pretraining step, and then use the resulting features to finetune another task, such as particular image classification tasks. For example, (Mahajan et al., 2018) uses ResNet in the pretraining step, and the finetuning task is based on hashtag data. The rationale for this multi-stage model is that the pretraining task can learn some common or latent representation which could benefit the end task.\nAnother related line of research is on understanding machine learning models. 
One category of research is to explain predictions with respect to model variables, and to trace back the contribution of variables to the prediction. For example, Oramas et al. (2019) automatically detect internal features, in the set of classes of the pretrained model, that are relevant to interpreting the prediction, and show for various vision tasks that the proposed scheme can produce detailed explanations based on the features that are relevant to the targeted classes. Guo et al. (2019) aim to interpret variable-wise hidden states in an LSTM model to quantify variable importance and variable-wise temporal importance in the model.\nClosely related research has sought to connect model predictions and training data, and to trace back the training data that are most responsible for the model’s prediction. Among these, the influence function (Cook & Weisberg, 1980; Koh & Liang, 2017), which aims to model the prediction changes when training data are added or removed, has been shown to be effective in many applications. There is a series of works on influence functions, including investigating the influence of a group of data on the prediction (Koh et al., 2019), using influence functions to detect bias in word embeddings (Brunet et al., 2019), and using them to prevent data poisoning attacks (Steinhardt et al., 2017). All of these works only consider a single-stage training procedure, and it is not straightforward to apply the existing influence functions to multi-stage models. In this paper, we propose to analyze the influence of pretraining data on predictions in the subsequent finetuned model and end task." }, { "heading": "3 ALGORITHMS", "text": "In this section, we detail the procedure of multi-stage training, show how to compute the influence score for multi-stage training, and then discuss how to scale up the computation." }, { "heading": "3.1 MULTI-STAGE MODEL TRAINING", "text": "Multi-stage models, which train different models in consecutive stages, have been widely used in various ML tasks. Mathematically, let Z be the training set for the pretraining task, with size |Z| = m, and let X be the training data for the finetuning task, with size |X| = n. In the pretraining stage, we assume the parameters of the pretrained network have two parts: the parameters W that are shared with the end task, and the task-specific parameters U that are only used in the pretraining stage. Note that W could be a word embedding matrix (e.g., in word2vec) or a representation extraction network (e.g., ELMo, BERT, ResNet), while U is usually the last few layers that correspond to the pretraining task. After training on the pretraining task, we obtain the optimal parameters W∗, U∗. The pretraining stage can be formulated as\nPretrain Stage: W∗, U∗ = arg min_{W,U} (1/m) Σ_{z∈Z} g(z, W, U) := arg min_{W,U} G(W, U), (1)\nwhere g(·) is the loss function for the pretraining task. In the finetuning stage, the network parameters are W, Θ, where W is shared with the pretraining task and Θ denotes the rest of the parameters, specifically associated with the finetuning task. We initialize the W part with W∗. Letting f(·) denote the finetuning loss, there are two cases when finetuning on the end-task (a code sketch of both cases follows this list):\n• Finetuning Case 1: fix the embedding parameters W = W∗ and only finetune Θ:\nΘ∗ = arg min_Θ (1/n) Σ_{x∈X} f(x, W∗, Θ) := arg min_Θ F(W∗, Θ). (2)\n• Finetuning Case 2: finetune both the embedding parameters W (initialized from W∗) and Θ. Sometimes updating the embedding parameters W in the finetuning stage is necessary, as the embedding parameters from the pretrained model may not be good enough for the finetuning task. This corresponds to the following formulation:\nW∗∗, Θ∗ = arg min_{W,Θ} (1/n) Σ_{x∈X} f(x, W, Θ) := arg min_{W,Θ} F(W, Θ). (3)
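To make the two finetuning cases concrete, the following is a minimal PyTorch sketch of the finetuning loop; the split into an embedding module and a task head, the optimizer choice, and all names are illustrative assumptions rather than the paper's implementation.

```python
import torch

def finetune(embedding, head, loader, loss_fn, update_embedding=False, lr=1e-3, epochs=5):
    # Case 1 (update_embedding=False): freeze W = W* and train only the head parameters Θ.
    # Case 2 (update_embedding=True): update both W (initialized at W*) and Θ.
    for p in embedding.parameters():
        p.requires_grad = update_embedding
    params = list(head.parameters())
    if update_embedding:
        params += list(embedding.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(head(embedding(x)), y)
            loss.backward()
            opt.step()
    return embedding, head
```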
" }, { "heading": "3.2 INFLUENCE FUNCTION FOR MULTI-STAGE MODELS", "text": "We derive the influence function for the multi-stage model to trace the influence of pretraining data on the finetuned model. In Figure 1 we show the task we are interested in solving in this paper. Note that we use the same definition of the influence function as Koh & Liang (2017) and discuss how to compute it in the multi-stage training scenario. As discussed at the end of Section 3.1, depending on whether or not we update the shared parameters W in the finetuning stage, we derive the influence functions under two different scenarios." }, { "heading": "3.2.1 CASE 1: EMBEDDING PARAMETERS W ARE FIXED IN FINETUNING", "text": "To compute the influence of pretraining data on the finetuning task, the main idea is to perturb one data example in the pretraining data and study how that impacts the test data. Mathematically, if we upweight the loss of a pretraining example z by a small ε, the perturbed model can be defined as\nŴ_ε, Û_ε = arg min_{W,U} G(W, U) + ε·g(z, W, U). (4)\nNote that different choices of ε result in different changes to the loss function relative to the original solution in (1). For instance, if we set ε = −1/m, then we are removing the training example z from the pretraining dataset.\nFor the finetuning stage, since we consider Case 1, where the embedding parameters W are fixed in the finetuning stage, the new model for the end-task or finetuning task will be\nΘ̂_ε = arg min_Θ F(Ŵ_ε, Θ). (5)\nThe influence function that measures the impact of a small perturbation of z on the finetuning loss of a test sample x_t from the finetuning task is defined as\nI_{z,x_t} := ∂f(x_t, Ŵ_ε, Θ̂_ε)/∂ε |_{ε=0} (6)\n= ∇_Θ f(x_t, W∗, Θ∗)^T · I_{z,Θ} + ∇_W f(x_t, W∗, Θ∗)^T · I_{z,W}, (7)\nwith I_{z,Θ} := ∂Θ̂_ε/∂ε |_{ε=0} and I_{z,W} := ∂Ŵ_ε/∂ε |_{ε=0}, (8)\nwhere I_{z,Θ} measures the influence of z on the finetuning task parameters Θ, and I_{z,W} measures how z influences the pretrained model W. Therefore, we can split the influence of z on the test sample into two pieces: one is the impact of z on the pretrained model, I_{z,W}, and the other is the impact of z on the finetuned model, I_{z,Θ}. It is worth mentioning that, due to linearity, if we want to estimate influence scores for a set of test examples with respect to a set of pretraining examples, we can simply sum up the pair-wise influence functions, and so define\nI_{{z(i)},{x_t(j)}} := Σ_i Σ_j I_{z(i),x_t(j)}, (9)\nwhere {z(i)} and {x_t(j)} contain a set of pretraining data and finetuning test data, respectively. Next, we derive these two influence scores, I_{z,Θ} and I_{z,W} (see the detailed derivations in the appendix), in Theorem 1 below.
Theorem 1. For the two-stage training procedure in (1) and (2), we have\nI_{z,W} := ∂Ŵ_ε/∂ε |_{ε=0} = −[(∂²G(W∗, U∗)/∂(W,U)²)^{−1} · (∂g(z, W∗, U∗)/∂(W,U))]_W, (10)\nI_{z,Θ} := ∂Θ̂_ε/∂ε |_{ε=0} = (∂²F(W∗, Θ∗)/∂Θ²)^{−1} · (∂²F(W∗, Θ∗)/∂Θ∂W) · [(∂²G(W∗, U∗)/∂(W,U)²)^{−1} · (∂g(z, W∗, U∗)/∂(W,U))]_W,\nwhere [·]_W means taking the W part of the vector.\nBy plugging (10) into (7), we finally obtain the influence score of pretraining example z on the finetuning task test point x_t as\nI_{z,x_t} = [−∂f(x_t, W∗, Θ∗)^T/∂Θ · (∂²F(W∗, Θ∗)/∂Θ²)^{−1} · ∂²F(W∗, Θ∗)/∂Θ∂W + ∂f(x_t, W∗, Θ∗)^T/∂W] · I_{z,W}. (11)\nThe pseudocode for computing the influence function in (11) is shown in Algorithm 1, and a code sketch follows it.\nAlgorithm 1: Multi-Stage Influence Score with Fixed Embedding\n1 Input: pretrained and finetuned models with W∗, Θ∗, and U∗; pretraining and finetuning data Z and X; a test example x_t; and a pretraining example z;\n2 Output: influence function value I_{z,x_t};\n3 Compute the finetuned model’s gradients ∂f(x_t, W∗, Θ∗)/∂Θ and ∂f(x_t, W∗, Θ∗)/∂W;\n4 Compute the first inverse Hessian vector product V_{ihvp1}(x_t) := (∂²F(W∗, Θ∗)/∂Θ²)^{−1} · ∂f(x_t, W∗, Θ∗)/∂Θ;\n5 Form the combined gradient w.r.t. W, v^T := V_{ihvp1}^T · ∂²F(W∗, Θ∗)/∂Θ∂W − ∂f(x_t, W∗, Θ∗)^T/∂W, and concatenate it with 0 so it has the same dimension as (W, U);\n6 Compute and save the second inverse Hessian vector product V_{ihvp2}^T(x_t) := [v^T, 0] · (∂²G(W∗, U∗)/∂(W,U)²)^{−1};\n7 Compute the influence function score I_{z,x_t} = V_{ihvp2}^T(x_t) · ∂g(z, W∗, U∗)/∂(W,U);
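Below is a minimal sketch of Algorithm 1 in code form; `ihvp_F`, `ihvp_G`, and `hvp_F_thetaW` stand for inverse-Hessian-vector-product and Hessian-vector-product routines (e.g., the CG solver of Section 3.3), and every name and interface here is an illustrative assumption.

```python
import torch

def multi_stage_influence(f_loss, g_loss, x_t, z, theta, W, U, ihvp_F, ihvp_G, hvp_F_thetaW):
    # Assumed handles: ihvp_F(v) ~ (d²F/dΘ²)^{-1} v, ihvp_G(v) ~ (d²G/d(W,U)²)^{-1} v,
    # hvp_F_thetaW(v) ~ vᵀ d²F/dΘdW. None of these interfaces are specified in the paper.
    # Step 3: gradients of the finetuning test loss at (W*, Θ*).
    grad_theta, grad_W = torch.autograd.grad(f_loss(x_t, W, theta), [theta, W])
    # Step 4: first inverse-Hessian-vector product, in Θ-space.
    v1 = ihvp_F(grad_theta)
    # Step 5: combined gradient in W-space, zero-padded over the U block.
    v = hvp_F_thetaW(v1) - grad_W
    v_pad = torch.cat([v.flatten(), torch.zeros(U.numel())])
    # Step 6: second inverse-Hessian-vector product, in (W, U)-space.
    v2 = ihvp_G(v_pad)
    # Step 7: influence score = v2ᵀ · gradient of g(z, W*, U*) w.r.t. (W, U).
    grad_g = torch.autograd.grad(g_loss(z, W, U), [W, U])
    return torch.dot(v2, torch.cat([t.flatten() for t in grad_g]))
```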
" }, { "heading": "3.2.2 CASE 2: EMBEDDING PARAMETER W IS ALSO UPDATED IN THE FINETUNING STAGE", "text": "For the second finetuning case in (3), we further train the embedding parameters W from the pretraining stage. When W is also updated in the finetuning stage, it is challenging to characterize the influence, since the pretrained embedding W∗ is only used as an initialization. In general, the final model (W∗∗, Θ∗) may be totally unrelated to W∗; for instance, when the objective function is strongly convex, any initialization of W in (3) will converge to the same solution.\nHowever, in practice the initialization of W strongly influences the finetuning stage in deep learning, since the finetuning objective is usually highly non-convex, and initializing W with W∗ will lead to convergence to a local minimum near W∗. Therefore, we propose to approximate the whole training procedure as\nW̄, Ū = arg min_{W,U} G(W, U), (12)\nW∗, Θ∗ = arg min_{W,Θ} {α‖W − W̄‖²_F + F(W, Θ)},\nwhere W̄, Ū are optimal for the pretraining stage, W∗, Θ∗ are optimal for the finetuning stage, and 0 ≤ α ≪ 1 is a small value. This characterizes the fact that in the finetuning stage we target a solution that minimizes F(W, Θ) and is close to W̄. In this way, the pretrained parameters are connected with the finetuning task, and thus the influence of the pretraining data on the finetuning task becomes tractable. The results in our experiments show that, with this approximation, the computed influence scores still reflect the real influence quite well.\nSimilarly, we can use ∂Θ̂_ε/∂ε, ∂Ŵ_ε/∂ε, and ∂W̄_ε/∂ε to measure the difference between the original optimal solutions in (12) and the optimal solutions under a perturbation of the pretraining example z. Similar to (7), the influence function I_{z,x_t} that measures the influence of a perturbation of pretraining example z on test sample x_t’s loss is\nI_{z,x_t} := ∂f(x_t, Ŵ_ε, Θ̂_ε)/∂ε |_{ε=0} = (∂f(x_t, W∗, Θ∗)/∂(W,Θ))^T · [∂Ŵ_ε/∂ε |_{ε=0}; ∂Θ̂_ε/∂ε |_{ε=0}]. (13)\nThe influence of a small perturbation of G(W, U) on W̄, W∗, Θ∗ can be computed following the same approach as in Subsection 3.2.1, by substituting W̄ for W∗ and [Θ∗, W∗] for Θ∗ in (10). This leads to\n∂W̄_ε/∂ε |_{ε=0} = −[(∂²G(W̄, Ū)/∂(W,U)²)^{−1} · (∂g(z, W̄, Ū)/∂(W,U))]_W, (14)\n[∂Θ̂_ε/∂ε |_{ε=0}; ∂Ŵ_ε/∂ε |_{ε=0}] = [∂²F(W∗,Θ∗)/∂Θ², ∂²F(W∗,Θ∗)/∂Θ∂W; ∂²F(W∗,Θ∗)/∂Θ∂W, ∂²F(W∗,Θ∗)/∂W² + 2αI]^{−1} · [0; −2αI] · [(∂²G(W̄, Ū)/∂(W,U)²)^{−1} · (∂g(z, W̄, Ū)/∂(W,U))]_W.\nAfter plugging (14) into (13), we have the influence function I_{z,x_t}. Similarly, the algorithm for computing I_{z,x_t} for Case 2 can follow Algorithm 1 for Case 1, replacing the gradient computations accordingly." }, { "heading": "3.3 IMPLEMENTATION DETAILS", "text": "The influence function computation for the multi-stage model was presented in the previous section. As can be seen in Algorithm 1, the influence score computation involves many Hessian matrix operations, which are very expensive and sometimes unstable for large-scale models. We use several strategies to speed up the computation and make the scores more stable.\nLarge Hessian Matrices: A Hessian matrix H has size p × p, where p is the number of parameters in the model. For large deep learning models with thousands or even millions of parameters, it is almost impossible to fit a p × p Hessian into memory. Also, inverting a Hessian requires O(p³) operations. Similar to Koh & Liang (2017), we avoid explicitly computing and storing the Hessian matrix and its inverse, and instead compute the product of the inverse Hessian with a vector directly. Every time we need an inverse Hessian vector product v = H^{−1}b, we invoke conjugate gradients (CG), which transforms the linear system into a quadratic optimization problem: H^{−1}b ≡ arg min_x {½x^T H x − b^T x}. In each iteration of CG, instead of computing H^{−1}b directly, we compute a Hessian vector product, which can be done efficiently by backpropagating through the model twice, with O(p) time complexity. The aforementioned conjugate gradient method requires the Hessian matrix to be positive definite. However, in practice the Hessian may have negative eigenvalues, since we run SGD and the final H may not be exactly at a local minimum. To tackle this issue, we solve\narg min_x {½x^T H² x − b^T H x}, (15)\nwhose solution can be shown to be the same as that of arg min_x {½x^T H x − b^T x}, since the Hessian matrix is symmetric. H² is guaranteed to be positive definite as long as H is invertible, even when H has negative eigenvalues. If H² is not ill-conditioned, we can solve (15) directly. The rate of convergence of CG depends on (√κ(H²) − 1)/(√κ(H²) + 1), where κ(H²) is the condition number of H², which can be very large if H² is ill-conditioned. When H² is ill-conditioned, to stabilize the solution and encourage faster convergence, we add a small damping term λ on the diagonal and solve arg min_x {½x^T (H² + λI) x − b^T H x}.
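The following is a minimal sketch of this damped CG solver, assuming a `hvp(v)` handle that returns Hv (e.g., via double backprop) and 1-D tensors; the tolerance and iteration cap are our own choices.

```python
import torch

def ihvp_cg(hvp, b, damping=1e-2, max_iter=100, tol=1e-6):
    # Approximate H^{-1} b by solving (H² + λI) x = H b with conjugate gradients,
    # i.e., Eq. (15) with the diagonal damping term; hvp(v) must return H v.
    def A(v):
        return hvp(hvp(v)) + damping * v
    x = torch.zeros_like(b)
    r = hvp(b)            # residual = Hb - A(0) = Hb, since x starts at zero
    p = r.clone()
    rs = torch.dot(r, r)
    for _ in range(max_iter):
        Ap = A(p)
        alpha = rs / torch.dot(p, Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = torch.dot(r, r)
        if rs_new.sqrt() < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```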
Time Complexity: As mentioned above, we can compute a Hessian vector product in O(p) time. We assume there are p1 parameters in our pretrained model and p2 parameters in our finetuned model. Since F and G are summations of the loss over all finetuning or pretraining examples, it takes O(np2) or O(mp1) to compute a Hessian vector product, where m is the number of pretraining examples and n is the number of finetuning examples. We may also subsample the pretraining examples to estimate G(W, U) when the number of pretraining examples is gigantic, such as when pretraining ELMo on the One-Billion-Word dataset (Chelba et al., 2013). For the two inverse Hessian vector products, the time complexities are O(np2r) and O(mp1r), where r is the number of CG iterations. For the other operations in computing the influence score, a vector product has time complexity O(p1) or O(p2), and computing the gradients of all pretraining examples has complexity O(mp1). So the total time complexity of computing a multi-stage influence score is O((mp1 + np2)r)." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we conduct experiments on real datasets for both vision and NLP tasks to show the effectiveness of our proposed influence function. We present the results on the vision tasks in the main text; some qualitative results on an NLP task with ELMo are presented in Section C of the Appendix." }, { "heading": "4.1 INFLUENCE FUNCTION CORRELATION WITH REAL SCORE", "text": "To show that our proposed influence scores are a good approximation, we evaluate our proposed multi-stage influence function on two CNN models with the CIFAR and MNIST datasets. The model structures are shown in Table A in the Appendix. We use Tanh for all activations. For both the MNIST and CIFAR models, the CNN layers are used as embeddings and the fully connected layers are task-specific. At the pretraining stage, we train the models with examples from only two classes (“bird” vs. “frog”) for CIFAR and four classes (0, 1, 2, and 3) for MNIST. The resulting embedding is used in the finetuning tasks, where we finetune the model to classify examples from the remaining eight classes in CIFAR or the other six digits in MNIST. In order to make the experiments closer to typical real finetuning situations, we reduce the size of the training set in the finetuning task by subsampling.\nIn this experiment we test the correlation between an individual pretraining example’s multi-stage influence function score and the real loss difference when that pretraining example is removed. We test the two cases mentioned in Section 3.1 – where the pretrained embedding is fixed, and where it is updated during finetuning. For both MNIST and CIFAR, we first train the embedding on the pretraining classification task for T_pretrain steps. The embedding is then used in the finetuned model, which we train for T_finetune steps. Depending on the scenario, as explained below, the embedding may be fixed or updated during the finetuning-task training. For a given example in the pretraining data, we calculate its influence function score with respect to each test example in the finetuning task’s test set, using the method presented in Section 3. To evaluate this pretraining example’s contribution to the overall performance of the model, we sum up the influence function scores across the whole test set of the finetuning task.\nTo validate the score, we remove that pretraining example and go through the aforementioned process again, updating the model. In the updating process, we further train the pretrained and finetuned models for T′_pretrain and T′_finetune steps from the original model checkpoints. Note that in this process the pretraining is conducted with the new leave-one-out pretraining set, while the training set for the finetuning task is left intact. Due to computation constraints, we only use the top 100 pretraining examples with the largest absolute influence function values in this experiment, yielding 100 score-difference pairs. Then we run a linear regression between the true loss difference values and the computed influence scores to show their correlation. The detailed hyperparameters used in these experiments are presented in Appendix B.
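A compact sketch of this validation loop is given below; `influence_score`, `retrain_without`, and `base_test_loss` are placeholders for the multi-stage influence computation, the leave-one-out retraining from the saved checkpoints, and the original summed test loss, so all interfaces are assumptions.

```python
from scipy.stats import pearsonr

def validate_influence(num_pretrain, test_set, influence_score,
                       retrain_without, base_test_loss, top_k=100):
    # Sum each pretraining example's influence over the whole finetuning test set.
    scores = {z: sum(influence_score(z, x_t) for x_t in test_set)
              for z in range(num_pretrain)}
    # Keep the top-k examples by absolute influence (k = 100 in the paper).
    top = sorted(scores, key=lambda z: abs(scores[z]), reverse=True)[:top_k]
    # True effect: remove z, retrain from the checkpoints, measure the loss change.
    diffs = [retrain_without(z) - base_test_loss for z in top]
    preds = [scores[z] for z in top]
    r, _ = pearsonr(preds, diffs)
    return r
```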
" }, { "heading": "4.1.1 EMBEDDING IS FIXED", "text": "In Figure 2 we show the correlation results for the CIFAR and MNIST models when the embedding is fixed during finetuning. Though we make many approximations in our formulation, from Figures 2a and 2b we can see that there is a clear linear correlation between the true loss differences and the influence function scores. The correlation is evaluated with Pearson’s r. This supports our argument that this score can be used to detect the examples in the pretraining set that contribute most to the model’s performance. In Figure 3 we show the misclassified test images in the finetuning task and the images with the largest positive influence function scores in the pretraining dataset. Examples with a large positive influence score are expected to have a negative effect on the model’s performance, since, intuitively, when they are added to the pretraining dataset, the loss of the test example increases. From Figure 3 we can indeed see that the identified examples are of low quality; they could easily be misclassified even by human eyes.\nOne may doubt the effectiveness of the expensive inverse Hessian computation in our formulation. As a comparison, we replace all inverse Hessians in (11) with identity matrices to compute the influence function scores for the MNIST model. The results are shown in Figure 4, with a much smaller Pearson’s r of 0.17. This again demonstrates the effectiveness of our proposed influence function." }, { "heading": "4.1.2 EMBEDDING IS UPDATED IN FINETUNE", "text": "In practice, the embedding can also be updated in the finetuning process. In Figure 5 we show the correlation between the true loss differences and the influence function scores computed using (13). We can see that even under this more challenging condition, our multi-stage influence function from (13) still has a strong correlation with the true loss differences, with a Pearson’s r = 0.40." }, { "heading": "4.2 THE FINETUNING TASK’S SIMILARITY TO THE PRETRAINING TASK", "text": "In this experiment, we explore the relationship between the influence function scores and the finetuning task’s similarity to the pretraining task. Specifically, we study whether the influence function scores increase in absolute value when the finetuning task is very similar to the pretraining task. To do this, we use the CIFAR embedding obtained from the “bird vs. frog” classification and test its influence function scores on two finetuning tasks. Finetuning task A is exactly the same as the pretraining “bird vs. frog” classification, while finetuning task B is a classification of two other classes (“automobile vs. deer”). All hyperparameters used in the two finetuning tasks are the same. In Figure 6, for both tasks, we plot the distribution of the influence function values with respect to each pretraining example, summing the influence scores over all test examples. The average absolute value of the task A influence function scores is 0.055, much larger than that of task B, which is 0.025.
This supports the argument that if pretraining task and finetuning task are similar, the pretraining data will have larger influence on the finetuning task performance." }, { "heading": "4.3 INFLUENCE FUNCTION SCORE WITH DIFFERENT NUMBER OF FINETUNE EXAMPLES", "text": "We also study the relationship between the influence function scores and number of examples used in finetuning. In this experiment, we update the pretrained embedding in finetuning stage. We use the same pretraining and finetuning task as in Section 4.1. In Figure 7, model C is the model used in\nSection 4.1.2 while in model D we triple the number of finetuning examples as well as the number of finetuning steps. Figure 7 demonstrates the distribution of each pretraining examples’ influence function score with the whole test set. The average absolute value of influence function score in model D is 0.15, much less than that of model C. This indicates that with more finetuning examples and more finetuning steps, the influence of pretraining data to the finetuning model’s performance will decrease. This makes sense as if the finetuning data does not have sufficient information for training a good finetuning task, then pretraining data will have more impact on the finetuning task." }, { "heading": "5 CONCLUSION", "text": "We introduce a multi-stage influence function to evaluate pretraining examples’ contribution to finetuned model’s prediction. Two different cases are studied: the pretrained embedding is fixed in finetuning or the pretrained embedding is updated in finetuning. We test our method on both CV and NLP tasks. Our experimental results show strong correlation between the proposed multistage influence function scores and the true loss difference when an example is removed from the pretraining data. We believe this is a promising way to connect finetuned model’s performance with pretraining data directly." }, { "heading": "A PROOF OF THEOREM 1", "text": "Proof. Since Θ̂ , Û , Ŵ are optimal solutions, and thus satisfy the following optimality conditions:\n0 = ∂\n∂Θ F (Ŵ , Θ̂ ) (16)\n0 = ∂\n∂(W,U) G(Ŵ , Û ) +\n∂\n∂(W,U) g(z, Ŵ , Û ), (17)\nwhere ∂(W,U) means concatenate the U and W as [W,U ] and compute the gradient w.r.t [W,U ]. We define the changes of parameters as ∆W = Ŵ − Ŵ , ∆Θ = Θ̂ − Θ̂, and ∆U = Û − Û . Applying Taylor expansion to the rhs of (17) we get\n0 ≈ ∂ ∂(W,U)\nG(W ∗, U∗) + ∂2G(W ∗, U∗)\n∂(W,U)2\n[ ∆W ∆U ] + ∂g(z,W ∗, U∗) ∂(W,U) + ∂2g(z,W ∗, U∗) ∂(W,U)2 [ ∆W ∆U ] (18)\nSince W ∗, U∗ are optimal of unperturbed problem, ∂∂(W,U)G(W ∗, U∗) = 0, and we have[\n∆W ∆U\n] ≈ − ( ∂2G(W ∗, U∗)\n∂(W,U)2 +\n∂2g(z,W ∗, U∗)\n∂(W,U)2\n)−1 ( ∂g(z,W ∗, U∗)\n∂(W,U) ) (19)\nSince → 0, we have further approximation[ ∆W ∆U ] ≈ ( ∂2G(W ∗, U∗)\n∂(W,U)2\n)−1 ( ∂g(z,W ∗, U∗)\n∂(W,U) ) (20)\nSimilarly, based on (16) and applying first order Taylor expansion to its rhs we have\n0 ≈ ∂F (W ∗,Θ∗) ∂Θ + ∂2F (W ∗,Θ∗) ∂Θ∂W ·∆W + ∂2F (W ∗,Θ∗) ∂Θ2 ∆Θ . (21)\nCombining (21) with (20) we have\n∆Θ ≈ ( ∂2F (W ∗,Θ∗)\n∂Θ2 )−1 · (∂\n2F (W ∗,Θ∗) ∂Θ∂W ) · [(∂2G(W ∗, U∗) ∂(W,U)2 )−1 ( ∂g(z,W ∗, U∗) ∂(W,U) ) ] W\nwhere [·]W means taking the W part of the vector. 
Therefore,\nIz,W := ∂Ŵ ∂ ∣∣ =0 = − [(∂2G(W ∗, U∗) ∂(W,U)2 )−1 ( ∂g(z,W ∗, U∗) ∂(W,U) ) ] W\n(22)\nIz,Θ := ∂Θ̂ ∂ ∣∣ =0 = ( ∂2F (W ∗,Θ∗) ∂Θ2 )−1 · (∂ 2F (W ∗,Θ∗) ∂Θ∂W ) · [(∂2G(W ∗, U∗) ∂(W,U)2 )−1 ( ∂g(z,W ∗, U∗) ∂(W,U) ) ] W\n(23)" }, { "heading": "B MODELS AND HYPERPARAMETERS FOR THE EXPERIMENTS IN", "text": "SECTIONS 4.1, 4.2 AND 4.3\nThe model structures we used in Sections 4.1, 4.2 and 4.3 are listed in Table A. As mentioned in the main text, for all models, CNN layers are used as embeddings and fully connected layers are task-specific. The number of neurons on the last fully connected layer is determined by the number of classes in the classification. There is no activation at the final output layer and all other activations are Tanh.\n• For MNIST experiments in Section 4.1.1, we train a four-class classification (0, 1, 2, and 3) in pretraining. All examples in the original MNIST training set with with these four labels are used in pretraining. The finetuning task is to classify the rest six classes, and\nwe subsample only 5000 examples to finetune. The pretrained embedding is fixed in finetuning. We run Adam optimizer in both pretraining and finetuning with a batch size of 512. The pretrained and finetuned models are trained to converge. When validating the influence function score, we remove an example from pretraining dataset. Then we re-run the pretraining and finetuning process with this leave-one-out pretraining dataset starting from the original models’ weights. In this process, we only run 100 steps for pretraining and finetuning as the models converge. When computing the influence function scores, the damping term for the pretrained and finetuned model’s Hessians are 1× 10−2 and 1× 10−8, respectively. We sample 1000 pretraining examples when computing the pretraind model’s Hessian summation. • For CIFAR experiments in Section 4.1.1, we train a two-class classification (“bird\" vs “frog\")\nin pretraining. All examples in the original CIFAR training set with with these four labels are used in pretraining. The finetuning task is to classify the rest eight classes, and we subsample only 5000 examples to finetune. The pretrained embedding is fixed in finetuning. We run Adam optimizer to train both pretrained and finetuned model with a batch size of 128. The pretrained and finetuned models are trained to converge. When validating the influence function score, we remove an example from pretraining dataset. Then we re-run the pretraining and finetuning process with this leave-one-out pretraining dataset starting from the original models’ weights. In this process, we only run 6000 steps for pretraining and 3000 steps for finetuning. When computing the influence function scores, the damping term for the pretrained and finetuned model’s Hessians are 1× 10−8 and 1× 10−6, respectively. Same hyperparameters are used in experiments in Sections 4.2 and 4.3. We also use these hyperparameters in Section 4.1.2’s experiments, except that the pretrained embedding is updated in finetuning and the number of finetuning steps is reduced to 1000 in validation. The α constant in Equation 14 is chosen as 0.01. We sample 1000 pretraining examples when computing the pretrained model’s Hessian summation.\nDataset MNIST CIFAR\nEmbedding\nCONV 32 5×5+1 CONV 32 3×3+1 MAX-POOL 2×2 +2 CONV 64 4×4+1\nCONV 64 5×5+1 MAX-POOL 2×2 +2 MAX-POOL 2×2 +2 CONV 128 2×2+1\nMAX-POOL 2×2 +2 CONV 128 2×2+1 MAX-POOL 2×2 +2\nTask specific FC <# classes> FC 1500FC <# classes>\nTable A: Model Architectures. 
“CONV k w×h+s” represents a 2D convolutional layer with k filters of size w×h using a stride of s in both dimensions. “MAX-POOL w×h+s” represents a 2D max pooling layer with kernel size w×h using a stride of s in both dimensions. “FC n” = fully connected layer with n outputs. All activation functions are Tanh and last fully connected layers do not have activation functions. The number of neurons on the last fully connected layer is determined by the number of classes in the task." }, { "heading": "C ELMO EXPERIMENT", "text": "In this section we show influence function score results with ELMo. The finetune task is a binary sentiment classification of twitter1 and the ELMo model is pretrained on a one-billion-word dataset Chelba et al. (2013). For the finetuned model, we add a hidden layer with 64 neurons and an output layer to build the classifier. The activation function is Tanh. For simplicity, we use the original pretrained ELMo embedding and the embedding is fixed in finetuning. We randomly sample a subset of 1000 sentences from one-billion-word dataset. For a test sentence, we list the pretrain sentences with the largest and the smallest absolute influence function score values in one-billion-word dataset.\n1 https://datahack.analyticsvidhya.com/contest/linguipedia-codefest-natural-language-processing-1/#data_dictionary\nTest Sentence Max abs. score Sentence in Pretrain Min abs. score Sentence in Pretrain\nFinally a transparent silicon case Thanks to my uncle :) #yay #Sony #Xperia #S #sonyexperias. . .\n-0.0049 Prof Slobodchikoff details the experiments he has done to reveal the hidden structure of the prairie dog ’s language within the BBC natural history programme \" Prairie dogs , talk of the town , \" broadcast as part of the Natural World documentary series .\n6.74× 10−9 He will be there to help you . \"\nBout to go shopping again listening to music #iphone #justme #music #likeforlike #followforfollow. . .\n0.0014 We are seeing the first big systematic investment in dance .\n−6.30× 10−9 The war seemed to energize her , and she began to hang out with the American journalists based in London .\nHa! Not heavy machinery but it does what I need it to. @Apple really dropped the ball with that design. #drinkyourhaterade\n0.00052 He ’s been on the move since his comeback victory over Juan Manuel Marquez more than two weeks ago , a brutally efficient boxing display that generated a staggering 1 million pay-per-v iew buys . −2.25× 10−9 Taylor said the syndicated TV psychologist broached the idea of the show to Spears ’ handlers , who eventually decided that such a show would be \" detrimental \" to the family .\nTable B: The list of test sentences and pretraining sentences with the largest and the smallest absolute influence function score values in our subset of pretraining data. The subset consists of 1000 random sentences from one-billion-word, which is used to pretrained ELMo embedding." } ]
2019
MULTI-STAGE INFLUENCE FUNCTION
SP:1879f9692f92bb2249772e14f27839d9e426f9b3
[ "This paper proposed DSGAN which learns to generate unseen data from seen data distribution p_d and its somehow “broad” version p_{\\hat d} (E.g., p_d convolved with Gaussian). The “unseen data” is the one that appears in p_{\\hat d} but not in p_d. DSGAN is trained to generate such data. In particular, it uses samples from p_d as fake data and samples from p_{\\hat d} as the real one. ", "This paper provides an interesting application of GAN which can generate the outlier distribution of training data which forces generator to learn the distribution of the low probability density area of given data. To show the effectiveness of the method, the author intuitively shows how it works on 2-D points data as well as the reconstructed Mnist dataset. Additionally, this approach reaches a comparable performance on semi-supervised learning and novelty detection task." ]
Unseen data, which are not samples from the distribution of training data and are difficult to collect, have exhibited importance in numerous applications (e.g., novelty detection, semi-supervised learning, and adversarial training). In this paper, we introduce a general framework, called the difference-seeking generative adversarial network (DSGAN), to generate various types of unseen data. Its novelty is the consideration of the probability density of the unseen data distribution as the difference between two distributions, pd̄ and pd, whose samples are relatively easy to collect. The DSGAN can learn the target distribution, pt (i.e., the unseen data distribution), from only the samples of the two distributions, pd and pd̄. In our scenario, pd is the distribution of the seen data, and pd̄ can be obtained from pd via simple operations, so that we only need the samples of pd during the training. Two key applications, semi-supervised learning and novelty detection, are taken as case studies to illustrate that the DSGAN enables the production of various unseen data. We also provide theoretical analyses about the convergence of the DSGAN.
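To make the density-difference idea concrete before the formal treatment, here is a small 1-D NumPy illustration of ours (not from the paper): with pd a narrow Gaussian and pd̄ a widened version of it, the implied target density pt = (pd̄ − αpd)/(1 − α), clipped at zero, concentrates where pd̄ has mass but pd does not.

```python
import numpy as np

# Our own 1-D illustration of p_t = (p_dbar - alpha * p_d) / (1 - alpha).
alpha = 0.8
x = np.linspace(-4, 4, 801)
gauss = lambda t, s: np.exp(-t**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))
p_d = gauss(x, 0.3)       # seen data: a narrow Gaussian
p_dbar = gauss(x, 1.0)    # a "broadened" version of the seen data
p_t = np.clip(p_dbar - alpha * p_d, 0, None) / (1 - alpha)
p_t /= p_t.sum() * (x[1] - x[0])   # renormalize after clipping
print(abs(x[np.argmax(p_t)]))      # the mode of p_t sits away from p_d's mode at 0
```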
[ { "affiliations": [], "name": "Yi-Lin Sung" }, { "affiliations": [], "name": "Sung-Hsien Hsieh" }, { "affiliations": [], "name": "Chun-Shien Lu" } ]
[ { "authors": [ "D. Abati", "A. Porrello", "S. Calderara", "R. Cucchiara" ], "title": "And: Autoregressive novelty detectors", "venue": "In IEEE CVPR,", "year": 2019 }, { "authors": [ "M. Arjovsky", "L. Bottou" ], "title": "Towards principled methods for training generative adversarial networks", "venue": "In ICLR", "year": 2017 }, { "authors": [ "M. Arjovsky", "S. Chintala", "L. Bottou" ], "title": "Wasserstein generative adversarial networks", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Zihang Dai", "Zhilin Yang", "Fan Yang", "William W Cohen", "Ruslan R Salakhutdinov" ], "title": "Good semisupervised learning that requires a bad gan", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "I. Goodfellow", "J. Shlens", "C. Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In NIPS,", "year": 2014 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martín Arjovsky", "Vincent Dumoulin", "Aaron C. Courville" ], "title": "Improved training of wasserstein gans", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Sepp Hochreiter" ], "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "venue": null, "year": 2017 }, { "authors": [ "M. Hou", "B. Chaib-draa", "C. Li", "Q. Zhao" ], "title": "Generative adversarial positive-unlabelled learning", "venue": "In IJCAI,", "year": 2018 }, { "authors": [ "D.P. Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In ICLR", "year": 2014 }, { "authors": [ "Diederik P. Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "ICLR, abs/1312.6114,", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Shakir Mohamed", "Danilo Jimenez Rezende", "Max Welling" ], "title": "Semi-supervised learning with deep generative models", "venue": "In NIPS,", "year": 2014 }, { "authors": [ "Mark Kliger", "Shachar Fleishman" ], "title": "Novelty detection with gan", "venue": "ArXiv,", "year": 2018 }, { "authors": [ "A. Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Y. LeCun", "C. Cortes", "C.J.C. Burges" ], "title": "The mnist database of handwritten digits", "venue": null, "year": 1998 }, { "authors": [ "Kimin Lee", "Honglak Lee", "Kibok Lee", "Jinwoo Shin" ], "title": "Training confidence-calibrated classifiers for detecting out-of-distribution samples", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Chongxuan Li", "Kun Xu", "Jun Zhu", "Bo Zhang" ], "title": "Triple generative adversarial nets", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Wenyuan Li", "Zichen Wang", "Jiayun Li", "Jennifer S Polson", "William Speier", "Corey Conkling Arnold" ], "title": "Semi-supervised learning based on generative adversarial network: a comparison between good gan and bad gan approach", "venue": null, "year": 1905 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In Proceedings of International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "X. Mao", "Q. Li", "H. Xie", "R.Y.K. Lau", "Z. Wang", "S.P. 
Smolley" ], "title": "Least squares generative adversarial networks", "venue": "In IEEE ICCV,", "year": 2017 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": "In ICLR", "year": 2018 }, { "authors": [ "Sameer A. Nene", "Shree K. Nayar", "Hiroshi Murase" ], "title": "Columbia object image library (coil-20)", "venue": null, "year": 1996 }, { "authors": [ "Y. Netzer", "T. Wang", "A. Coates", "A. Bissacco", "B. Wu", "A.Y. Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": "In NIPS Workshop,", "year": 2011 }, { "authors": [ "Yao Ni", "Dandan Song", "Xi Zhang", "Hank Wu", "Lejian Liao" ], "title": "Cagan: Consistent adversarial training enhanced gans", "venue": "In IJCAI,", "year": 2018 }, { "authors": [ "Pramuditha Perera", "Ramesh Nallapati", "Bing Xiang" ], "title": "OCGAN: one-class novelty detection using gans with constrained latent representations", "venue": "In IEEE CVPR,", "year": 2019 }, { "authors": [ "Stanislav Pidhorskyi", "Ranya Almohsen", "Donald A. Adjeroh", "Gianfranco Doretto" ], "title": "Generative probabilistic novelty detection with adversarial autoencoders", "venue": "In NIPS,", "year": 2018 }, { "authors": [ "Antti Rasmus", "Mathias Berglund", "Mikko Honkala", "Harri Valpola", "Tapani Raiko" ], "title": "Semi-supervised learning with ladder networks", "venue": "In NIPS,", "year": 2015 }, { "authors": [ "Scott E. Reed", "Zeynep Akata", "Xinchen Yan", "Lajanugen Logeswaran", "Bernt Schiele", "Honglak Lee" ], "title": "Generative adversarial text to image synthesis", "venue": null, "year": 2016 }, { "authors": [ "Lukas Ruff", "Robert Vandermeulen", "Nico Goernitz", "Lucas Deecke", "Shoaib Ahmed Siddiqui", "Alexander Binder", "Emmanuel Müller", "Marius Kloft" ], "title": "Deep one-class classification", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Y. Saito", "S. Takamichi", "H. Saruwatari" ], "title": "Statistical parametric speech synthesis incorporating generative adversarial networks", "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing,", "year": 2018 }, { "authors": [ "Mayu Sakurada", "Takehisa Yairi" ], "title": "Anomaly detection using autoencoders with nonlinear dimensionality reduction", "venue": "In MLSDA, pp", "year": 2014 }, { "authors": [ "Tim Salimans", "Ian Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training gans", "venue": "In NIPS,", "year": 2016 }, { "authors": [ "Tim Salimans", "Andrej Karpathy", "Xi Chen", "Diederik P. Kingma" ], "title": "Pixelcnn++: Improving the pixelcnn with discretized logistic mixture likelihood and other modifications", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Christian Szegedy", "Vincent Vanhoucke", "Sergey Ioffe", "Jonathon Shlens", "Zbigniew Wojna" ], "title": "Rethinking the inception architecture for computer", "venue": null, "year": 2015 }, { "authors": [ "Yaxing Wang", "Chenshen Wu", "Luis Herranz", "Joost van de Weijer", "Abel Gonzalez-Garcia", "Bogdan Raducanu" ], "title": "Transferring gans: generating images from limited data", "venue": "CoRR, abs/1805.01677,", "year": 2018 }, { "authors": [ "Xiang Wei", "Boqing Gong", "Zixia Liu", "Wei Lu", "Liqiang Wang" ], "title": "Improving the improved training of wasserstein gans: A consistency term and its dual effect", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Y. Yu", "W.-Y. Qu", "N. Li", "Z. 
Guo" ], "title": "Open-category classification by adversarial sample generation", "venue": "In IJCAI,", "year": 2017 }, { "authors": [ "J.J. Zhao", "M. Mathieu", "Y. LeCun" ], "title": "Energy-based generative adversarial network", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "V Consider" ], "title": "G,D) = U(pg, D) as a function of pg. By the proof idea of Theorem 2 in Goodfellow et al. (2014), if f(x) = supα∈A fα(x) and fα(x) is convex in x for every α, then ∂fβ(x) ∈ ∂f if β = argsupα∈A fα(x). In other words, if supD V (G,D) is convex in pg, the subderivatives of supD V (G,D) includes the derivative of the function at the point, where the maximum is attained, implying the convergence with sufficiently small updates of pg", "venue": null, "year": 2014 }, { "authors": [ "Dai" ], "title": "2017), we also define the feature space as the input space of the output layer of discriminators. Compared to SVHN and CIFAR-10, MNIST is a simple dataset as it is only composed of fully connected layers. Batch normalization (BN) or weight normalization (WN) is used in every layer", "venue": null, "year": 2017 }, { "authors": [ "Rasmus" ], "title": "We find that the added Gaussian noise exhibits a positive effect for semisupervised learning. The architecture is shown in Table 6. Table 7 and Table 8 are models for SVHN and CIFAR-10, respectively, and these models are almost the same except for some implicit differences, e.g., the number of convolutional filters and types", "venue": null, "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Unseen data1are not samples from the distribution of the training data and are difficult to collect. It has been demonstrated that unseen samples can be applied to several applications. Dai et al. (2017) proposed how to create complement data, and theoretically showed that complement data, considered as unseen data, could improve semi-supervised learning. In novelty detection, Yu et al. (2017) proposed a method to generate unseen data and used them to train an anomaly detector. Another related area is adversarial training Goodfellow et al. (2015), where classifiers are trained to resist adversarial examples, which are unseen during the training phase. However, the aforementioned methods only focus on producing specific types of unseen data, instead of enabling the generation of general types of unseen data.\nIn this paper, we propose a general framework called difference-seeking generative adversarial network (DSGAN), to generate a variety of unseen data. The DSGAN is a generative approach. Traditionally, generative approaches, which are usually conducted in an unsupervised learning manner, are developed for learning the data distribution from its samples, from which subsequently, they produce novel and high-dimensional samples, such as the synthesized image Saito et al. (2018). A state-of-the-art approach is the so-called generative adversarial network (GAN) Goodfellow et al. (2014). GAN produces sharp images based on a game-theoretic framework, but it can be difficult and unstable to train owing to multiple interaction losses. Specifically, GAN consists of two functions: generator and discriminator. Both functions are represented as parameterized neural networks. The discriminator network is trained to determine whether the inputs belong to the real dataset or fake dataset created by the generator. The generator learns to map a sample from a latent space to some distribution to increase the classification errors of the discriminator.\n1In traditional machine learning scenarios, \"unseen\" data corresponds to data that is not used or seen during the training stage but rather the testing stage. The distribution of \"unseen\" data could be same as or different\nNevertheless, if a generator can learn to create unseen data, then a traditional GAN requires numerous training samples of unseen classes for training, leading to a contradiction with the definition of the unseen data. This fact motivates us to present the DSGAN, which can generate unseen data by adopting seen data as training samples (see Fig. 9, which illustrates the difference between GAN and the DSGAN, in Appendix A). The key concept is to consider the distribution of the unseen data as the difference between two distributions that are relatively easy to obtain. For example, the out-of-distribution examples in the MNIST dataset, from another perspective, are found to belong to the differences between the sets of examples in MNIST and the universal set. 
It should be noted that in traditional GAN, the target distribution is identical to the training data distribution; however, in the DSGAN, these two distributions are considered to be different.

This paper makes the following contributions:

(1) We propose the DSGAN to generate any unseen data only if the density of the target (unseen data) distribution is the difference between those of any two distributions, pd̄ and pd.

(2) We show that the DSGAN possesses the flexibility to learn different target (unseen data) distributions in two key applications, semi-supervised learning and novelty detection. Specifically, for novelty detection, the DSGAN can produce boundary points around the seen data because this type of unseen data is easily misclassified. For semi-supervised learning, the unseen data are linear combinations of any labeled data and unlabeled data, excluding the labeled and unlabeled data themselves2.

(3) The DSGAN yields results comparable to those of state-of-the-art semi-supervised learning methods but with a shorter training time and lower memory consumption. In novelty detection, combining the DSGAN and variational auto-encoder (VAE, Kingma & Welling (2014b)) achieves state-of-the-art results.

2The linear combination of any labeled data and unlabeled data probably belongs to the set of seen data (labeled data and unlabeled data), which contradicts the definition of unseen data. Thus, the samples generated by the DSGAN should not include the seen data themselves." }, { "heading": "2 PROPOSED METHOD-DSGAN", "text": "" }, { "heading": "2.1 FORMULATION", "text": "We denote the generator distribution as pg and the training data distribution as pd, both in an N-dimensional space. Let pd̄ be a distribution decided by the user. For example, pd̄ can be the convolution of pd and a normal distribution. Let pt be the target distribution that the user is interested in, and it can be expressed as

(1− α)pt(x) + αpd(x) = pd̄(x), (1)

where α ∈ [0, 1]. Our method, the DSGAN, aims to learn pg such that pg = pt. Note that if the support set of pd belongs to that of pd̄, then there exists at least one α such that the equality in (1) holds. However, even if the equality does not hold, intuitively, the DSGAN attempts to learn pg such that pg(x) ∼ (pd̄(x) − αpd(x))/(1 − α) with the constraint pg(x) ≥ 0. Specifically, the generator will output
Thus, the samples generated by the DSGAN should not include the seen data themselves.\nV (G,D) := Ex∼pd̄(x) [logD(x)] + (1− α)Ez∼pz(z) [log (1−D (G (z)))] + αEx∼pd(x) [log (1−D(x))] . (2) We optimize (2) by a min–max game between G and D, i.e.,\nmin G max D V (G,D) .\nDuring the training procedure, an iterative approach, like traditional GAN, is to alternate between k steps of training D and one step of training G. In practice, minibatch stochastic gradient descent via backpropagation is used to update θd and θg. Thus, for each pg, pd, and pd̄, m samples are required for computing the gradients, where m is the number of samples in a minibatch. The training procedure is illustrated in Algorithm 1 in Appendix A. The DSGAN suffers from the same drawbacks as traditional GAN, (e.g., mode collapse, overfitting, and strong discriminator) so that the generator gradient vanishes. There are literature Salimans et al. (2016); Arjovsky & Bottou (2017); Miyato et al. (2018) focusing on dealing with the above problems, and such concepts can be readily combined with the DSGAN.\nLi et al. (2017) and Reed et al. (2016) proposed an objective function similar to (2). Their goal was to learn the conditional distribution of training data. However, we aim to learn the target distribution, pt, in Eq. (1), and not the training data distribution." }, { "heading": "2.2 CASE STUDY ON VARIOUS UNSEEN DATA GENERATION", "text": "To achieve a more intuitive understanding about the DSGAN, we conduct several case studies on two-dimensional (2D) synthetic datasets and MNIST. In Eq. (1), α = 0.8 is used.\n1\n7\n1\n7\nFigure 4: Illustration of the difference-set seeking in MNIST.\nFigure 5: DSGAN learns the difference between two sets.\nComplement samples generation Fig. 1 illustrates that the DSGAN can generate complement samples between 2 circles. Denoting the density function of the two circles as pd, we assign the samples drawn from pd̄ as linear combinations of the two circles. Then, by applying the DSGAN, we achieve our goal of generating complement samples. In fact, this type of unseen data is used in semi-supervised learning.\nBoundary samples generation Fig. 2 illustrates that the DSGAN generates boundary points between four circles. This type of unseen data is used in novelty detection. In this case, we assign pd and pd̄ as “the density function of four circles” and “the convolution of pd and normal distribution,” respectively. The basis of our concept is also illustrated by a one-dimensional (1D) example in Fig. 3.\nDifference-set generation We also validate the DSGAN on a high-dimensional dataset such as MNIST. In this example, we define pd as the distribution of digit “1” and pd̄ as the distribution containing two digits “1” and “7”. Because the density, pd(x), is high when x is digit “1,” the generator is prone to output digit “7” with a high probability. More sample qualities of DSGAN on CelebA can be refer to Appendix G.\nFrom the above results, we can observe two properties of the generator distribution, pg: i) the higher the density of pd(x), the lower the density of pg(x); ii) pg prefers to output samples from the high-density areas of pd̄(x)− αpd(x).\n2.3 DESIGNING pd̄\nThus far, we have demonstrated how the DSGAN can produce various types of unseen data by choosing a specific pd̄. In this section, we introduce a standard procedure to design pd̄, and illustrate each step with pictures.\nStep 1. First, the training data, pd, are collected (Fig. 6 (a)).\nStep 2. 
Second, based on the applications, the desired unseen data distribution is defined (e.g., complement samples for semi-supervised learning) (Fig. 6 (b)).\nIn the above procedure, the most important step is to determine which types of unseen data are suitable for a specific problem (Step 2). In this paper, we show two types of unseen data, which are useful in semi-supervised learning and novelty detection. However, determining all types of unseen data for all applications is beyond the scope of this study, and we leave this for future work.\nFurthermore, we provide a method (see Appendix B in supplementary materials) by reformulating the objective function (2), so that it is more stable to train the DSGAN." }, { "heading": "3 THEORETICAL RESULTS", "text": "In this section, we show that by choosing an appropriate α, the support set of pg belongs to the difference set between pd̄ and pd, so that the samples from pg are unseen from the pd perspective.\nWe start our proofs from two assumptions. First, in a non-parametric setting, we assume that both the generator and discriminator have infinite capacities. Second, pg is defined as the distribution of the samples drawn from G(z) under z ∼ pz . In the following, we show that the support set of pg is contained within the differences in the support sets of pd̄ and pd while achieving the global minimum such that we can generate the desired pg by designing an appropriate pd̄.\nTheorem 1. Suppose αpd(x) ≥ pd̄(x) for all x ∈ Supp(pd) and all density functions pd(x), and pd̄(x) and pg(x) are continuous. If the global minimum of C(G) is achieved, then\nSupp (pg) ⊆ Supp (pd̄)− Supp(pd), where C(G) = max\nD V (G,D) = Ex∼pd̄(x) [ log\npd̄(x)\npd̄(x) + (1− α)pg(x) + αpd(x)\n] + Ex∼p∗(x) [ log\n(1− α)pg(x) + αpd(x) pd̄(x) + (1− α)pg(x) + αpd(x)\n] .\nProof. See Appendix C for the details.\nSummarizing, the generator is prone to output samples that are located in the high-density areas of pd̄ − αpd." }, { "heading": "4 APPLICATIONS", "text": "The DSGAN was applied to two problems: semi-supervised learning and novelty detection. In the semi-supervised learning, the DSGAN acts as a “bad generator,” which creates complement samples (unseen data) in the feature space of the training data. For the novelty detection, the DSGAN generates the samples (unseen data) as boundary points around the training data." }, { "heading": "4.1 SEMI-SUPERVISED LEARNING", "text": "Semi-supervised learning (SSL) is a type of learning model that uses a few labeled data and numerous unlabeled data. The existing SSL methods based on a generative model, (e.g., VAE Kingma et al. (2014) and GAN Salimans et al. (2016)), yield good empirical results. Dai et al. (2017) theoretically showed that a good semi-supervised learning required a bad GAN with the following objective function: max D Ex,y∼L logPD (y | x, y ≤ K) + Ex∼pd(x) logPD (y ≤ K | x) + Ex∼pg(x) logPD (K + 1 | x) , (3)\nwhere (x, y) denotes a pair of data, and its corresponding label, {1, 2, . . . ,K} denotes the label space for the classification, and L = {(x, y)} is the label dataset. Moreover, under the semi-supervised settings, pd in (3) is the distribution of the unlabeled data. Note that the discriminator, D, in GAN also plays the role of a classifier. If the generator distribution exactly matches the real data distribution (i.e., pg = pd), then the classifier trained by the objective function (3) with the unlabeled data cannot have a better performance than that trained by the supervised learning with the objective function. 
Specifically,\nmax D Ex,y∼L logPD (y | x, y ≤ K) . (4) Contrastingly, the generator is preferred to generate complement samples, which lie on the lowdensity area of pd. Under some mild assumptions, these complement samples help D to learn the correct decision boundaries in the low-density area because the probabilities of the true classes are forced to be low in the out-of-distribution areas.\nThe complement samples in Dai et al. (2017) are complex to produce. In Sec. 5.2, we will demonstrate that with the DSGAN, complement samples can be easily generated." }, { "heading": "4.2 NOVELTY DETECTION", "text": "Novelty detection determines if a query example belongs to a seen class. If the samples of one seen class are considered as positive data, then this difficulty is the absence of negative data in the training phase, so that the supervised learning cannot function.\nRecently, novelty detection has made significant progress with the advent of deep leaning. Pidhorskyi et al. (2018)Sakurada & Yairi (2014) focused on learning a representative latent space for a seen class. When testing, the query image was projected onto the learned latent space. Then, the difference between the query image and its inverse image (reconstruction) was measured. Thus, only an encoder was needed to be trained for the projection and a decoder for the reconstruction. Under the circumstance, an autoencoder (AE) is generally is adopted to learn both the encoder and decoder Pidhorskyi et al. (2018)Perera et al. (2019). Let Enc(·) be the encoder and Dec(·) be the decoder. The loss function of the AE is defined as\nmin Enc,Dec\nEx∼ppos(x) [ ‖x−Dec(Enc(x))‖22 ] , (5)\nwhere ppos is the distribution of a seen class. After the training, a query example, xtest, is classified as the seen class if ‖xtest −Dec(Enc(xtest))‖22 ≤ τ, (6) where τ ∈ R+ plays the trade-off between the true positive rate and false positive rate. However, (6) is based on two assumptions: (1) the positive samples from one seen class should have a small reconstruction error; (2) the AE (or latent space) cannot well describe the negative examples from the unseen classes, leading to a relatively large reconstruction error. In general, the first assumption inherently holds when both the testing and training data originate from the same seen class. However, Pidhorskyi et al. (2018)Perera et al. (2019) observed that assumption (2) does not hold at all times because the loss function in (5) does not include a loss term to enforce the negative data to have a large reconstruction error.\nFor assumption (2) to hold, given positive data as the training inputs, we propose using the DSGAN to generate negative examples in the latent space, as discussed in Sec. 5.3. Then, the loss function of the AE is modified to enforce the negative data to have a large reconstruction error." }, { "heading": "5 EXPERIMENTS", "text": "Our experiments are divided into three parts. The first one examines how the hyperparameter, α, influences the learned generator distribution, pg. In the second and third experiments, we obtain empirical results about semi-supervised learning and novelty detection, which are presented in Sec. 5.2 and Sec. 5.3, respectively. Note that the training procedure of the DSGAN can be improved by other extensions of GANs such as WGAN Arjovsky et al. (2017), WGAN-GP Gulrajani et al. (2017), EBGAN Zhao et al. (2017), and LSGAN Mao et al. (2017). 
In our method, the WGAN-GP was adopted for the stability of the DSGAN in training and reduction in the mode collapse.\n5.1 DSGAN WITH DIFFERENT α\nThe impacts of different α values on the DSGAN are illustrated in Fig. 7. In this example, the support of pd is the area bounded by a red dotted line, and the orange points are the samples from pd. Concurrently, we shift pd to the right by 1 unit and create the distribution, pd̄, whose support is bounded by blue dotted lines. The overlapping area between pd̄ and pd is 0.5 unit (assuming the area of pd is 1 unit). Based on our theoretical results, α = 0.5 is the smallest selected value allowing pg to be disjoint to pd. Therefore, we can see that some generated samples, as presented in Fig. 7(a), still belong to the support set of pd. Fig. 7(b) shows that there is a perfect agreement between our theoretical and experiment results with α = 0.5. When α = 0.8, there is a remarkable gap between the generated (green) points and yellow points, as shown in Fig. 7(c). In theory, the result obtained at α = 0.8 should be the same as that obtained at α = 0.5. This is because the discriminator should assign the entire area, which is the intersection of the complement of support set of pd and support set of pd̄, to the same score, under the assumption that the discriminator has an infinite capacity. However, in practice, the capacity of the discriminator is limited. Therefore, the score of the area near pd is lower than that far from it, when α is large. Therefore, pg tends to repel pd to achieve a high score (to deceive the discriminator). 5.2 DSGAN IN SEMI-SUPERVISED LEARNING We first introduce how the DSGAN generates the complement samples in the feature space. Dai et al. (2017) proved that if the complement samples generated by G could satisfy the following two assumptions in (7) and (8), i.e.,\n∀x ∼ pg(x), 0 > max 1≤i≤K wTi f(x) and ∀x ∼ pd(x), 0 < max 1≤i≤K wTi f(x), (7)\nwhere f is the feature extractor and wi is the linear classifier for the ith class, and\n∀x1 ∼ L, x2 ∼ pd(x),∃xg ∼ pg(x) s.t. f(xg) = βf(x1) + (1− β)f(x2) with β ∈ [0, 1],\n(8)\nthen all the unlabeled data would be correctly classified by the objective function (3). Specifically, (7) ensures that the classifiers can discriminate the generated data from the unlabeled data, and (8) causes the decision boundary to be located in the low-density areas of pd.\nThe assumption in (8) implies that the complement samples must be in the space created by the linear combination of the labeled and unlabeled data. In addition, they cannot fall into the real data distribution, pd, owing to the assumption (7). To allow the DSGAN to generate such samples, we let the samples of pd̄ be linear combinations of those from L and pd. Since pg(x) ≈ pd̄(x)− αpd(x)\n1− α ,\npg will tend to match pd̄, whereas the term, −αpd, ensures that the samples from pg do not belong to pd. Thus, pg satisfies the assumption in (8). Moreover, (7) is also satisfied by training the classifier with (3) based on substituting the generator distribution in (3) into the learned pg .\nFollowing the previous works, we apply the proposed DSGAN to semi-supervised learning on three benchmark datasets: MNIST LeCun et al. (1998), SVHN Netzer et al. (2011), and CIFAR-10 Krizhevsky (2009). The details of the experiments can be found in Appendix D." }, { "heading": "5.2.1 SIMULATION RESULTS", "text": "First, the selected hyperparameters are listed in Table 5 in Appendix D.1. 
Second, the results obtained from the DSGAN and state-of-the-art methods on the three benchmark datasets are summarized in Table 1. It can be observed that our method can compete with the state-of-the-art methods on the three datasets. Note that we report the results of badGAN not only from the original papers in the literature but also by reproducing them using the released codes of the authors. The reason of presenting both the results is that we cannot reproduce parts of the results. The experiments in Li et al. (2019) also showed a similar problem. In comparison with Dai et al. (2017), our methods do not need to rely on an additional density estimation network, PixelCNN++ Salimans et al. (2017). Although PixelCNN++ is one of the best density estimation networks, learning such a deep architecture requires large computation and high memory consumption. In Table 2, we list the training time and memory consumption for our method and badGAN. Compared to badGAN, our method consumes 15.8% less training time and saves about 9000 MB during the training.\nMoreover, it can also be observed from Table 1 that our results are comparable to the best record of badGAN and CAGAN. are better than those of other approaches on the MNIST and SVHN datasets. On CIFAR-10, our method is only inferior to the CT-GAN. However, this might not be a reasonable comparison because the CT-GAN uses extra techniques, including temporal ensembling and data augmentation, which the other methods do not use." }, { "heading": "5.3 DSGAN IN NOVELTY DETECTION", "text": "In this section, we study how to use the DSGAN for assisting novelty detection. As mentioned in Sec. 4.2, we need to train the auto-encoder (AE) such that (i) the positive samples from one seen class have a small reconstruction error; (ii) negative samples from the unseen classes incur relatively higher reconstruction errors.\nThe fundamental concept is to use the DSGAN to generate negative samples, which originally do not exist under the scenario of novelty detection. Next, we add a new loss term to penalize the small reconstruction errors of the negative samples (see the third stage below). Three stages are required to train our model (AE):\n1. The encoder, Enc(·), and decoder, Dec(·), are trained using the loss function (5). 2. Given x ∼ ppos, Enc(x) are collected as the samples drawn from pd. pd̄ is the convolution\nof pd having a normal distribution with a zero mean and variance σ. Then, we train the DSGAN to generate negative samples, which are drawn from pd̄(x) − pd(x) and are the boundary points around the positive samples in the latent space. Note that there are some variations in the DSGAN: the input of the generator, G, is Enc(x), instead of a random vector z in the latent space. We also add ‖Enc(x)−G(Enc(x))‖22, which will be explained in the next step, to train the generator.\n3. Fixing the encoder, we retrain the decoder by the modified loss function,\nmin Dec\nEx∼ppos(x) [ ‖x−Dec(Enc(x))‖22 + w ·max ( 0,m− ‖x−Dec(G(Enc(x)))‖22 ]) ,\nwhere w is the trade-off between the reconstruction errors of positive samples Enc(x) and negative samples G(Enc(x)). Note that in the previous step, we add ‖Enc(x) − G(Enc(x))‖22 to ensure that the outputs of the generator are around the input. Thus, the second term charges even though the negative samples are close to the corresponding positive sample, and they still exhibit a high reconstruction error, which is bounded by m (Zhao et al. 
(2017)).\nThe above algorithm, called VAE+DSGAN, can be used to strengthen the existing AE-based methods by using them in the first stage. In the simulation, we used a variational autoencoder (VAE) Kingma & Welling (2014a) because it performs better than the AE in the novelty detection." }, { "heading": "5.3.1 SIMULATION RESULTS", "text": "In this section, following Perera et al. (2019), the performance was evaluated using the area under the curve (AUC) of the receiver operating characteristics (ROC) curve. Given a dataset, a class was chosen as the seen class for training, and all the classes were used for testing. There exist several testing benchmarks for novelty detection, such as MNIST, COIL100 Nene et al. (1996) and CIFAR-10. The state-of-the-art method Perera et al. (2019) achieves high performance in AUC on MNIST and COIL100 (AUC is larger than 0.97). However, for CIFAR-10, Perera et al. (2019) only\nachieves 0.656. Thus, we chose the challenging dataset, CIFAR-10, as the benchmark to evaluate our method. The detailed network architecture can be found in Appendix E.\nBecause VAE+DSGAN can be considered as a fine tuning VAE Kingma & Welling (2014a), we first illustrate the key difference between the VAE and VAE+DSGAN, as shown in Fig. 8. The seen class, which is at the bottom of the images, is a car. Other rows are the images from the unseen classes. One can see that the reconstructed images are reasonably good even for the unseen class in the VAE. By contrast, our method enforces the reconstructed images of the unseen classes to be blurred while still preserving the reconstruction quality of the seen class. Thus, our method achieves a relatively larger gap, in terms of the reconstruction error between the seen data and unseen data, than the VAE.\nIn Table 3, we compare the proposed method with several methods, including the VAE Kingma & Welling (2014a), AND Abati et al. (2019), DSVDD Ruff et al. (2018), and OCGAN Perera et al. (2019), in terms of the AUC value. One can see that in most cases, our method almost outperforms the VAE. Furthermore, the mean of the AUC values of our method also is larger than those of the state-of-the-art methods. It is worth mentioning that in addition to the VAE, the DSGAN has potential of being combined with other AE-based methods." }, { "heading": "6 RELATED WORKS ABOUT UNSEEN DATA GENERATION", "text": "Yu et al. (2017) proposed a method to generate samples of unseen classes in a unsupervised manner via an adversarial learning strategy. However, it requires solving an optimization problem for each sample, which certainly leads to a high computation cost. By contrast, the DSGAN has the capability to create infinite diverse unseen samples. Hou et al. (2018) presented a new GAN architecture that could learn two distributions of unseen data from a part of seen data and the unlabeled data. However, the unlabeled data must be a mixture of seen and unseen samples; the DSGAN does not require any unseen data. Kliger & Fleishman (2018) also applied GAN in novelty detection. Their objective\nwas to learn a generator whose distribution is a mixture of novelty data distribution and training data distribution. To this end, they used feature matching (FM) to train the generator and expected pg to learn the mixture of distributions. However, the ultimate goal of FM is still to learn pg = pd; therefore, their method might fail when GAN learns well.\nDai et al. 
(2017) aimed to generate complementary samples (or out-of-distribution samples), but assumed that the in-distribution could be estimated by a pre-trained model, such as PixelCNN++, which might be difficult and expensive to train. Lee et al. (2018) used a simple classifier to replace the role of PixelCNN++ in Dai et al. (2017) so that the training was comparatively much easier and more suitable. Nevertheless, their method only focused on generating unseen data surrounding the low-density area of seen data. In comparison, the DSGAN has more flexibility to generate different types of unseen data (e.g., a linear combination of seen data, as described in Sec. 5.2). In addition, their method needs the label information of the data, whereas our method is fully unsupervised." }, { "heading": "7 CONCLUSIONS", "text": "We propose the DSGAN, which can produce any unseen data based on the assumption that the density of the unseen data distribution is the difference between the densities of any two distributions. The DSGAN is useful in an environment when the samples from the unseen data distribution are more difficult to collect than those from the two known distributions. Empirical and theoretical results are provided to validate the effectiveness of the DSGAN. Finally, because the DSGAN is developed based on GAN, it is easy to apply any improved versions of GAN to the DSGAN." }, { "heading": "8 ACKNOWLEDGEMENT", "text": "This work was partially supported by grants MOST 107-2221-E-001-015-MY2 and MOST 108-2634- F-007-010 from Ministry of Science and Technology, Taiwan, ROC." }, { "heading": "A FLOWCHART AND ALGORITHM OF DSGAN", "text": "Algorithm 1 The training procedure of DSGAN using minibatch stochastic gradient descent. k is the number of steps applied to discriminator. α is the ratio between pg and pd in the mixture distribution. We used k = 1 and α = 0.8 in experiments.\n01. for number of training iterations do 02. for k steps do 03. Sample minibatch of m noise samples z(1), ..., z(m) from pg(z). 04. Sample minibatch of m samples x(1)d , ..., x (m) d from pd(x). 05. Sample minibatch of m samples x(1) d̄ , ..., x (m) d̄ from pd̄(x). 06. Update the discriminator by ascending its stochastic gradient:\n∇θd\n[ 1\nm m∑ i=1 logD ( x (i) d ) + log ( 1−D ( G ( z(i) ))) + log ( 1−D ( x (i) d̄ ))] 07. end for 08. Sample minibatch of m noise samples z(1), ..., z(m) from pg(z). 09. Update the generator by descending its stochastic gradient:\n∇θg 1\nm m∑ i=1 [ log ( 1−D ( G ( z(i) )))]\n10. end for" }, { "heading": "B TRICKS FOR STABLE TRAINING", "text": "We provide a trick to stabilize the training procedure by reformulating the objective function. Specifically, V (G,D) in (2) is reformulated as:\nV (G,D) = ∫ x pd̄(x) log (D (x))\n+ ((1− α)pg(x) + αpd(x)) log (1−D (x)) dx = Ex∼pd̄(x) [logD(x)] + Ex∼(1−α)pg(x)+α∼pd(x) [log (1−D (x))] .\n(9)\nInstead of sampling a mini-batch of m samples from pz and pd in Algorithm 1, (1− α)m and αm samples from both distributions are required, respectively. The computation cost in training can be reduced due to fewer samples. Furthermore, although (9) is equivalent to (2) in theory, we find that the training using (9) achieves better performance than using (2) via empirical validation in Table 4. We conjecture that the equivalence between (9) and (2) is based on the linearity of expectation, but mini-batch stochastic gradient descent in practical training may lead to the different outcomes." 
}, { "heading": "C PROOF OF THEOREM 1", "text": "In this section, we show Theorem 1.\nThis proof includes two parts: the first part shows that the objective function is equivalent to minimizing the Jensen–Shannon divergence in the mixture distribution (pd and pg) and pd̄ if G and D are assigned sufficient capacity; the second part shows that by choosing an appropriate α, the support set of pg belongs to the difference set between pd̄ and pd, so that the samples from pg are unseen from the pd perspective.\nFor the first part, we show the optimal discriminator givenG, and then show that minimizing V (G,D) via G, given the optimal discriminator, is equivalent to minimizing the Jensen–Shannon divergence between (1− α)pg + αpd and pd̄.\nProposition 1. If G is fixed, the optimal discriminator, D, is\nD∗G(x) = pd(x)\npd(x) + (1− α)pg(x) + αpd(x) .\nProof. Given any generator G, the training criterion for the discriminator D is to maximize the quantity V (G,D):\nV (G,D) = ∫ x pd̄(x) log (D (x)) dx\n+ (1− α) ∫ z pz(z) log (1−D (G (z))) dz\n+ α ∫ x pd(x) log (1−D (x)) dx\n= ∫ x pd̄(x) log (D (x)) dx\n+ (1− α) ∫ x pg(x) log (1−D (x)) dz\n+ α ∫ x pd(x) log (1−D (x)) dx\n= ∫ x pd̄(x) log (D (x))\n+ ((1− α)pg(x) + αpd(x)) log (1−D (x)) dx.\nFor any (a, b) ∈ R2\\{0, 0}, the function a log (y) + b log (1− y) achieves its maximum in [0, 1] at y = aa+b . The discriminator only needs to be defined within Supp(pd̄) ⋃ Supp(pd) ⋃ Supp(pg). We complete this proof.\nMoreover, D can be considered to discriminate between samples from pd̄ and ((1− α)pg(x) + αpd(x)). By replacing the optimal discriminator in V (G,D), we trivially obtain\nC(G) = max D V (G,D) = Ex∼pd̄(x) [ log\npd̄(x)\npd̄(x) + (1− α)pg(x) + αpd(x)\n] + Ex∼p∗(x) [ log\n(1− α)pg(x) + αpd(x) pd̄(x) + (1− α)pg(x) + αpd(x)\n] .\n(10)\nActually, the results thus far yield the optimal solution of D given G is fixed in (1). Now, the next step is to determine the optimal G with D∗G as fixed.\nTheorem 2. The global minimum of C(G) is achieved if and only if (1−α)pg(x)+αpd(x) = pd̄(x) for all x. Then, C(G) achieves the value, − log 4.\nProof. We start from\n(1) = − log(4) + Ex∼pd̄(x) [ log\n2pd̄(x)\npd̄(x) + (1− α)pg(x) + αpd(x) ] + Ex∼p∗(x) [ log\n2 ((1− α)pg(x) + αpd(x)) pd̄(x) + (1− α)pg(x) + αpd(x) ] = − log(4) + KL ( pd̄ ∥∥∥∥ pd̄ + (1− α)pg + αpd2 )\n+KL ( (1− α)pg(x) + αpd ∥∥∥∥ pd̄ + (1− α)pg + αpd2 )\n= − log(4) + 2 JSD (pd̄ ‖ (1− α)pg + αpd) ,\nwhere p∗(x) = (1 − α)pg(x) + αpd(x), KL is the Kullback-Leibler divergence and JSD is the Jensen-Shannon divergence. The JSD returns the minimal value, which is 0, iff both distributions are the same, namely pd̄ = (1− α)pg + αpd. Because pg(x)’s are always non-negative, it should be noted both distributions are the same only if αpd(x) ≤ pd̄(x) for all x’s. We complete this proof.\nNote that (1− α)pg(x) + αpd(x) = pd̄(x) may not hold if αpd(x) > pd̄(x). However, the DSGAN still works based on two facts: i) given D, V (G,D) is a convex function in pg and ii) because∫ x pg(x)dx = 1, the set collecting all the feasible solutions of pg is convex. Thus, there always exists a global minimum of V (G,D) given D, but it may not be − log(4). Now, we go back to prove Theorem 1. We show that the support set of pg is contained within the differences in the support sets of pd̄ and pd while achieving the global minimum such that we can generate the desired pg by designing an appropriate pd̄.\nProof. 
Recall that

C(G) = ∫x [pd̄(x) log (pd̄(x)/(pd̄(x) + (1− α)pg(x) + αpd(x))) + p∗(x) log (((1− α)pg(x) + αpd(x))/(pd̄(x) + (1− α)pg(x) + αpd(x)))] dx = ∫x S(pg;x)dx = ∫x∈Supp(pd̄)−Supp(pd) S(pg;x)dx + ∫x∈Supp(pd) S(pg;x)dx.

S(pg;x) is used to simplify the notation inside the integral. For any x, S(pg;x) is non-increasing in pg(x), and S(pg;x) ≤ 0 always holds. Specifically, S(pg;x) is decreasing in pg(x) if pd̄(x) > 0; S(pg;x) attains the maximum value, zero, for any pg(x) if pd̄(x) = 0. Since the DSGAN aims to minimize C(G) with the constraint ∫x pg(x)dx = 1, the solution attaining the global minimum must satisfy pg(x) = 0 if pd̄(x) = 0; otherwise, there exists another solution with a smaller value of C(G). Thus, Supp (pg) ⊆ Supp (pd̄).

Furthermore, T (pg;x) = ∂S(pg;x)/∂pg(x) = log (((1− α)pg(x) + αpd(x))/(pd̄(x) + (1− α)pg(x) + αpd(x))), which is expected to be as small as possible to minimize C(G), is increasing in pg(x) and converges to 0. Then, we show that T (pg;x) for x ∈ Supp(pd̄) ⋂ Supp(pd) is always larger than that for x ∈ Supp(pd̄) − Supp(pd) for all pg. Specifically,

1. When x ∈ Supp(pd̄) ⋂ Supp(pd), T (pg;x) ≥ log(1/2) always holds due to the assumption of αpd(x) ≥ pd̄(x).

2. When x ∈ Supp(pd̄) − Supp(pd), T (pg;x) < log(1/2) for all pg(x) satisfying (1 − α)pg(x) ≤ pd̄(x).

Thus, the minimizer prefers pg(x) > 0 for x ∈ Supp(pd̄) − Supp(pd) with (1 − α)pg(x) ≤ pd̄(x). We check whether there exists a solution pg such that (1 − α)pg(x) ≤ pd̄(x) and ∫x∈Supp(pd̄)−Supp(pd) pg(x)dx = 1, implying pg(x) = 0 for x ∈ Supp(pd̄) ⋂ Supp(pd). Based on the following expression,

∫x∈Supp(pd̄)−Supp(pd) pd̄(x)dx + ∫x∈Supp(pd) pd̄(x)dx = 1
⇒ ∫x∈Supp(pd̄)−Supp(pd) pd̄(x)dx ≥ 1 − ∫x∈Supp(pd) αpd(x)dx
⇒ ∫x∈Supp(pd̄)−Supp(pd) pd̄(x)dx ≥ 1 − α
⇒ ∫x∈Supp(pd̄)−Supp(pd) pd̄(x)dx ≥ ∫x∈Supp(pd̄)−Supp(pd) (1− α)pg(x)dx,

the last inequality implies that there must exist a feasible solution. We complete this proof.

Another concern is the convergence of Algorithm 1.

Proposition 2. The discriminator reaches its optimal value given G in Algorithm 1, and pg is updated by minimizing

Ex∼pd̄(x) [logD∗G(x)] + Ex∼p∗(x) [log (1−D∗G(x))] .

If G and D have sufficient capacities, then pg converges to argmin pg JSD (pd̄ ‖ (1− α)pg + αpd).

Proof. Consider V (G,D) = U(pg, D) as a function of pg. By the proof idea of Theorem 2 in Goodfellow et al. (2014), if f(x) = supα∈A fα(x) and fα(x) is convex in x for every α, then ∂fβ(x) ∈ ∂f if β = argsupα∈A fα(x). In other words, if supD V (G,D) is convex in pg, the subderivatives of supD V (G,D) include the derivative of the function at the point where the maximum is attained, implying convergence with sufficiently small updates of pg. We complete this proof." }, { "heading": "D EXPERIMENTAL DETAILS FOR SEMI-SUPERVISED LEARNING", "text": "D.0.1 DATASETS: MNIST, SVHN, AND CIFAR-10

For evaluating the semi-supervised learning task, we used 60000/73257/50000 samples and 10000/26032/10000 samples from the MNIST/SVHN/CIFAR-10 datasets for training and testing, respectively. Under the semi-supervised setting, we randomly chose 100/1000/4000 samples from the training samples as the MNIST/SVHN/CIFAR-10 labeled datasets, with equal amounts of labeled data for all classes. Furthermore, our criterion for determining the hyperparameters is introduced in Appendix D.1, and the network architectures are described in Appendix D.2. 
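As an implementation aside, the equal-per-class labeled subsets described above can be drawn as in the following sketch (naming is ours; n_labeled is 100, 1000, or 4000 depending on the dataset):

```python
import numpy as np

def sample_labeled_subset(labels, n_labeled, n_classes=10, seed=0):
    """Pick an equal number of labeled examples per class."""
    rng = np.random.default_rng(seed)
    per_class = n_labeled // n_classes
    idx = [rng.choice(np.flatnonzero(labels == c), per_class, replace=False)
           for c in range(n_classes)]
    return np.concatenate(idx)

labels = np.random.randint(0, 10, 60000)          # stand-in for MNIST labels
labeled_idx = sample_labeled_subset(labels, 100)  # 10 labeled samples per class
```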
We performed testing with 10/5/5 runs on MNIST/SVHN/CIFAR-10 based on the selected hyperparameters, and randomly selected the labeled dataset. The results are recorded as the mean and standard deviation of the number of errors over the runs.

D.1 HYPERPARAMETERS

The hyperparameters were chosen to make our generated samples consistent with the assumptions in (7) and (8). However, in practice, if we force all the samples produced by the generator to follow the assumption in (8), then the generated distribution is not close to the true distribution, and a large margin may even exist between them, which is not what we desire. Therefore, in our experiments, we make a concession: the percentage of generated samples that accord with the assumption is around 90%. To meet this objective, we tune the hyperparameters. Table 5 shows our setting of hyperparameters, where β is defined in (8).

In order to compare fairly with other methods, our generators and classifiers for MNIST, SVHN, and CIFAR-10 are the same as in Salimans et al. (2016) and Dai et al. (2017). However, different from previous works that have only a generator and a discriminator, we design an additional discriminator in the feature space, and its architecture is similar across all datasets, with the only difference being the input dimensions. Following Dai et al. (2017), we also define the feature space as the input space of the output layer of the discriminators.

Compared to SVHN and CIFAR-10, MNIST is a simple dataset, so its model is composed of only fully connected layers. Batch normalization (BN) or weight normalization (WN) is used in every layer to stabilize training. Moreover, Gaussian noise is added before each layer in the classifier, as proposed in Rasmus et al. (2015). We find that the added Gaussian noise exhibits a positive effect for semi-supervised learning. The architecture is shown in Table 6.

Tables 7 and 8 are the models for SVHN and CIFAR-10, respectively; these models are almost the same except for some implicit differences, e.g., the number of convolutional filters and the types of dropout. In these tables, given a dropping rate, “Dropout” denotes normal dropout, in which elements of the input tensor are randomly set to zero, while “Dropout2d” is a dropout applied only on the channels, randomly zeroing entire channels.

Furthermore, the training procedure alternates between k steps of optimizing D and one step of optimizing G. We find that k in Algorithm 1 plays a key role in the problem of mode collapse for different applications. For semi-supervised learning, we set k = 1 for all datasets." }, { "heading": "E EXPERIMENTAL DETAILS FOR NOVELTY DETECTION", "text": "The architectures of the GAN and VAE are depicted in Tables 9 and 10, respectively.

In the experiment, we first trained the VAE for 500 epochs, and then we trained the DSGAN for 500 epochs with m = 1.5 and w = 0.5. Third, we fixed the encoder and tuned the decoder with both positive and negative samples (generated by the DSGAN) for 600 epochs.

F ABLATION STUDY ON DIFFERENT α VALUES FOR SEMI-SUPERVISED LEARNING

Fig. 7 shows how different α values influence the DSGAN. The optimal α for the DSGAN to generate “unseen” data depends on pd̄ and pd. From Fig. 7, we can see that the DSGAN is prone to generating unseen data under a larger α. Recall that Theorem 1 suggests that α should be as large as possible if both networks G and D have infinite capacity. 
Though the networks never have infinite capacity in real applications, a general rule is to pick a large α and force the complement data to be far from pd, which is consistent with the results in Sec. 5.1.

Table 10: The architectures of the VAE for novelty detection.

Encoder: 5× 5 conv. 32, stride = 2, with BN, lReLU(0.2); 5× 5 conv. 64, stride = 2, with BN, lReLU(0.2); 5× 5 conv. 128, stride = 2, with BN, lReLU(0.2); (for mean) 4× 4 conv. 128, stride = 1; (for std) 4× 4 conv. 128, stride = 1.

Decoder: 5× 5 conv. transpose 128, stride = 2, with BN, lReLU(0.2); 5× 5 conv. transpose 64, stride = 2, with BN, lReLU(0.2); 5× 5 conv. transpose 32, stride = 2, with BN, lReLU(0.2); 5× 5 conv. transpose 3, stride = 2, Tanh.

Here, we conduct the experiments on different α values under semi-supervised learning settings. From Sec. 4.1 and 5.2, badGAN already shows that, if the desired unseen data can be generated, then the classifier will put the correct decision boundary in the low-density area.

In Table 11, we demonstrate the classification results at α = 0.5 and α = 0.8, respectively. We can observe that the results at α = 0.8 are better than those at α = 0.5, consistent with the above discussion. From our empirical observations, the DSGAN is prone to generating unseen data at α = 0.8, leading to a better classifier." }, { "heading": "G SAMPLE QUALITY OF DSGAN ON CELEBA", "text": "We show one more experiment on CelebA (Liu et al. (2015)) to demonstrate that the DSGAN can work well even for complicated images. In this experiment, we generate color images of size 64 × 64. Similar to our 1/7 experiments on the MNIST dataset, we let pd̄ be the distribution of face images with and without glasses and let pd be the distribution of images without glasses. We validate the DSGAN with α = 0.5 and α = 0.8, respectively. For α = 0.5, we sample 10000 images with glasses and 10000 images without glasses from CelebA. When α is 0.8, we sample 40000 instead of 10000 images without glasses.

We also train GAN to verify the generated image quality of the DSGAN. For a fair comparison, GAN is trained under two kinds of settings. The first one is that GAN is trained only with the images with glasses. In the second, it is pretrained with all images and is finetuned with the images with glasses, namely the transferring GANs of Wang et al. (2018). It should be noted that the transferring GAN uses the same amount of training data as the DSGAN and serves as a stronger baseline than GAN under the first setting.

The Fréchet Inception Distance (FID) (Heusel et al. (2017)) is used to evaluate the quality of the generated images. FID calculates the Wasserstein-2 distance between generated images and real images (images with glasses) in the feature space of the Inception-v3 network (Szegedy et al. (2015)). We train both networks for 600 epochs and use WGAN-GP as the backbone for both GAN and the DSGAN. In addition, the transferring GANs are pretrained for 500 epochs and then finetuned for 600 epochs.

Fig. 10 and Table 12 show the generated images and FID for all methods, respectively. We can see that our DSGAN can generate images with glasses from the given pd and pd̄, and the FID of the DSGAN is comparable to that of GAN. The experiment validates that the DSGAN still works well to create complement data for complicated images." } ]
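As a closing implementation note, the stabilization trick of Appendix B (Eq. (9)) amounts to nothing more than a different minibatch composition; a sketch under our own naming:

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the minibatch trick in Appendix B, Eq. (9): rather than
# weighting the two "fake" loss terms by (1 - alpha) and alpha as in Eq. (2),
# compose a single fake batch with (1 - alpha)m generator samples and alpha*m
# seen-data samples, then apply an unweighted real/fake loss to it.
def fake_batch(G, sample_pd, m=64, alpha=0.8, dim_z=16):
    n_g = round((1 - alpha) * m)                 # (1 - alpha)m generator samples
    x_g = G(torch.randn(n_g, dim_z)).detach()
    x_d = sample_pd(m - n_g)                     # alpha*m seen-data samples
    return torch.cat([x_g, x_d], dim=0)

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
batch = fake_batch(G, lambda n: torch.randn(n, 2) * 0.1)
```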
2020
DIFFERENCE-SEEKING GENERATIVE ADVERSARIAL NETWORK–UNSEEN SAMPLE GENERATION
SP:b386db96a13357984b61761342ae1cc876fe6d3a
[ "This paper proposes a new method to defend a neural network agains adversarial attacks (both white-box and black-box attacks). By jointly training a Generative Cleaning Network with quantized nonlinear transform, and a Detector Network, the proposed cleans the incoming attacked image and correctly classifies its true label. The authors use state-of-the-art attack methods on various models, and the proposed model consistently outperforms all baseline models, even dramatically outperforming them for some specific attack methods.", "This paper developed a method for defending deep neural networks against adversarial attacks based on generative cleaning networks with quantized nonlinear transform. The network is claimed to recover the original image while cleaning up the residual attack noise. The authors developed a detector network, which serves as the dual network of the target classifier network to be defended, to detect if the image is clean or being attacked. This detector network and the generative cleaning network are jointly trained with adversarial learning so that the detector network cannot find any attack noise in the output image of generative cleaning network. The experimental results demonstrated that the proposed approach outperforms the state-of-art methods by large margins in both white-box and black-box attacks. " ]
Effective defense of deep neural networks against adversarial attacks remains a challenging problem, especially under white-box attacks. In this paper, we develop a new generative cleaning network with quantized nonlinear transform for effective defense of deep neural networks. The generative cleaning network, equipped with a trainable quantized nonlinear transform block, is able to destroy the sophisticated noise pattern of adversarial attacks and recover the original image content. The generative cleaning network and attack detector network are jointly trained using adversarial learning to minimize both perceptual loss and adversarial loss. Our extensive experimental results demonstrate that our approach outperforms the state-of-the-art methods by large margins in both white-box and black-box attacks. For example, it improves the classification accuracy under white-box attacks over the second-best method by more than 40% on the SVHN dataset and more than 20% on the challenging CIFAR-10 dataset.
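The exact form of the quantized nonlinear transform block is not specified in this excerpt; one plausible realization (entirely our assumption, with our own names) is a learned squashing nonlinearity followed by uniform quantization with a straight-through gradient estimator, so that small-amplitude attack perturbations are destroyed while the block remains trainable end to end:

```python
import torch
import torch.nn as nn

# One plausible form of a trainable quantized nonlinear transform (an assumption;
# the paper's actual block is not described in this excerpt). Uniform quantization
# wipes out small-amplitude perturbations; a straight-through estimator keeps the
# block differentiable for joint training.
class QuantizedNonlinearTransform(nn.Module):
    def __init__(self, levels=16):
        super().__init__()
        self.levels = levels
        self.scale = nn.Parameter(torch.ones(1))   # learnable input scaling

    def forward(self, x):
        y = torch.tanh(self.scale * x)              # nonlinear squashing
        q = torch.round(y * self.levels) / self.levels
        return y + (q - y).detach()                 # straight-through gradient

x = torch.randn(4, 3, 32, 32, requires_grad=True)
out = QuantizedNonlinearTransform()(x)
out.sum().backward()                                # gradients flow through
```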
[]
[ { "authors": [ "Nasir Ahmed", "T Natarajan", "Kamisetty R Rao" ], "title": "Discrete cosine transform", "venue": "IEEE transactions on Computers,", "year": 1974 }, { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Jacob Buckman", "Aurko Roy", "Colin Raffel", "Ian Goodfellow" ], "title": "Thermometer encoding: One hot way to resist adversarial examples", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Defensive distillation is not robust to adversarial examples", "venue": "arXiv preprint arXiv:1607.04311,", "year": 2016 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Ingrid Daubechies" ], "title": "The wavelet transform, time-frequency localization and signal analysis", "venue": "IEEE transactions on information theory,", "year": 1990 }, { "authors": [ "Guneet S. Dhillon", "Kamyar Azizzadenesheli", "Jeremy D. Bernstein", "Jean Kossaifi", "Aran Khanna", "Zachary C. Lipton", "Animashree Anandkumar" ], "title": "Stochastic activation pruning for robust adversarial defense", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Yinpeng Dong", "Fangzhou Liao", "Tianyu Pang", "Hang Su", "Jun Zhu", "Xiaolin Hu", "Jianguo Li" ], "title": "Boosting adversarial attacks with momentum", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Chuan Guo", "Mayank Rana", "Moustapha Cisse", "Laurens van der Maaten" ], "title": "Countering adversarial images using input transformations", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Warren He", "James Wei", "Xinyun Chen", "Nicholas Carlini", "Dawn Song" ], "title": "Adversarial example defense: Ensembles of weak defenses are not strong", "venue": "In 11th {USENIX} Workshop on Offensive Technologies ({WOOT}", "year": 2017 }, { "authors": [ "Zhezhi He", "Adnan Siraj Rakin", "Deliang Fan" ], "title": "Parametric noise injection: Trainable randomness to improve deep neural network robustness against adversarial attack", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Xiaojun Jia", "Xingxing Wei", "Xiaochun Cao", "Hassan Foroosh" ], "title": "Comdefend: An efficient image compression model to defend adversarial examples", "venue": 
null, "year": 2019 }, { "authors": [ "Justin Johnson", "Alexandre Alahi", "Li Fei-Fei" ], "title": "Perceptual losses for real-time style transfer and super-resolution", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Harini Kannan", "Alexey Kurakin", "Ian Goodfellow" ], "title": "Adversarial logit pairing, 2018", "venue": null, "year": 2018 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial machine learning at scale", "venue": "arXiv preprint arXiv:1611.01236,", "year": 2016 }, { "authors": [ "Xingjun Ma", "Bo Li", "Yisen Wang", "Sarah M. Erfani", "Sudanthi Wijewickrema", "Grant Schoenebeck", "Dawn Song", "Michael E. Houle", "James Bailey" ], "title": "Characterizing adversarial subspaces using local intrinsic dimensionality, 2018", "venue": null, "year": 2018 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Dongyu Meng", "Hao Chen" ], "title": "Magnet: a two-pronged defense against adversarial examples", "venue": "In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security,", "year": 2017 }, { "authors": [ "Taesik Na", "Jong Hwan Ko", "Saibal Mukhopadhyay" ], "title": "Cascade adversarial machine learning regularized with a unified embedding", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": null, "year": 2011 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel" ], "title": "On the effectiveness of defensive distillation", "venue": "arXiv preprint arXiv:1607.05113,", "year": 2016 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Somesh Jha", "Matt Fredrikson", "Z Berkay Celik", "Ananthram Swami" ], "title": "The limitations of deep learning in adversarial settings", "venue": null, "year": 2016 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Xi Wu", "Somesh Jha", "Ananthram Swami" ], "title": "Distillation as a defense to adversarial perturbations against deep neural networks", "venue": "In 2016 IEEE Symposium on Security and Privacy (SP),", "year": 2016 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Ian Goodfellow", "Somesh Jha", "Z Berkay Celik", "Ananthram Swami" ], "title": "Practical black-box attacks against machine learning", "venue": "In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security,", "year": 2017 }, { "authors": [ "Pouya Samangouei", "Maya Kabkab", "Rama Chellappa" ], "title": "Defense-gan: Protecting classifiers against adversarial attacks using generative models", "venue": "arXiv preprint arXiv:1805.06605,", "year": 2018 }, { "authors": [ "Yang Song", "Taesup Kim", "Sebastian Nowozin", "Stefano Ermon", "Nate Kushman" ], "title": "Pixeldefend: Leveraging generative models to understand and defend against adversarial examples", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Bo Sun", "Nian-Hsuan Tsai", "Fangchen Liu", "Ronald Yu", "Hao Su" ], "title": "Adversarial defense by stratified convolutional sparse coding", "venue": "URL http://par.nsf.gov/biblio/", "year": 2019 }, { 
"authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Florian Tramr", "Alexey Kurakin", "Nicolas Papernot", "Ian Goodfellow", "Dan Boneh", "Patrick McDaniel" ], "title": "Ensemble adversarial training: Attacks and defenses", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Gregory K Wallace" ], "title": "The jpeg still picture compression standard", "venue": "IEEE transactions on consumer electronics,", "year": 1992 }, { "authors": [ "Huaxia Wang", "Chun-Nam Yu" ], "title": "A direct approach to robust deep learning using adversarial networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "David Warde-Farley" ], "title": "11 adversarial perturbations of deep neural networks. Perturbations", "venue": "Optimization, and Statistics,", "year": 2016 }, { "authors": [ "Chaowei Xiao", "Bo Li", "Jun-Yan Zhu", "Warren He", "Mingyan Liu", "Dawn Song" ], "title": "Generating adversarial examples with adversarial networks", "venue": "arXiv preprint arXiv:1801.02610,", "year": 2018 }, { "authors": [ "Cihang Xie", "Jianyu Wang", "Zhishuai Zhang", "Zhou Ren", "Alan Yuille" ], "title": "Mitigating adversarial effects through randomization", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Cihang Xie", "Yuxin Wu", "Laurens van der Maaten", "Alan Yuille", "Kaiming He" ], "title": "Feature denoising for improving adversarial robustness", "venue": "arXiv preprint arXiv:1812.03411,", "year": 2018 }, { "authors": [ "Cihang Xie", "Zhishuai Zhang", "Yuyin Zhou", "Song Bai", "Jianyu Wang", "Zhou Ren", "Alan Yuille" ], "title": "Improving transferability of adversarial examples with input diversity, 2018c", "venue": null, "year": 2018 }, { "authors": [ "Weilin Xu", "David Evans", "Yanjun Qi" ], "title": "Feature squeezing: Detecting adversarial examples in deep neural networks", "venue": "arXiv preprint arXiv:1704.01155,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Recent research has shown that deep neural networks are sensitive to adversarial attacks (Szegedy et al., 2013). Very small changes of the input image can fool the state-of-art classifier with very high success probabilities. During the past few years, a number of methods have been proposed to construct adversarial samples to attack the deep neural networks, including fast gradient sign (FGS) method (Goodfellow et al., 2014b), Jacobian-based saliency map attack (J-BSMA) (Papernot et al., 2016a), and projected gradient descent (PGD) attack (Kurakin et al., 2016; Madry et al., 2018). It has also been demonstrated that different classifiers can be attacked by the same adversarial perturbation (Szegedy et al., 2013). The fragility of deep neural networks and the availability of these powerful attacking methods present an urgent need for effective defense methods. During the past few years, a number of deep neural network defense methods have been developed, including adversarial training (Kurakin et al., 2016; Szegedy et al., 2013), defensive distillation (Papernot et al., 2016b; Carlini & Wagner, 2016; Papernot & McDaniel, 2016), Magnet (Meng & Chen, 2017) and featuring squeezing (He et al., 2017; Xu et al., 2017). It has been recognized that these methods suffer from significant performance degradation under strong attacks, especially white-box attacks with large magnitude and iterations (Samangouei et al., 2018).\nIn this work, we explore a new approach to defend various attacks by developing a generative cleaning network with quantized nonlinear transform. We recognize that the attack noise is not random and has sophisticated patterns. The attackers often generate noise patterns by exploring the specific network architecture or classification behavior of the target deep neural network so that the small noise at the input layer can accumulate along the network inference layers, finally exceed the decision threshold at the output layer, and result in false decision. On the other hand, we know a well-trained deep neural networks are robust to random noise (Arjovsky et al., 2017), such as Gaussian noise. Therefore, the key issue in network defense is to randomize or destroy the sophisticated pattern of the attack noise while preserving the original image content.\nMotivated by this observation, we design a new generative cleaning network with quantized nonlinear transform to first destroy the sophisticated noise patterns of adversarial attacks and then recover the original image content damaged during this nonlinear transform. We also construct a detector\nnetwork which serves as the dual network for the target classifier to be defended. The generative cleaning network and detector network are jointly trained using adversarial learning so that the detector network cannot detect the existence of attack noise pattern in the images recovered by the generative cleaning network. Our extensive experimental results demonstrate that our approach outperforms the state-of-art methods by large margins in both white-box and black-box attacks. It significantly improves the classification accuracy for white-box PGD attacks upon the second best method by more than 40% on the SVHN dataset from 46.90% to 93.80%, and more than 20% on the challenging CIFAR-10 dataset from 60.15% to 86.05%.\nThe major contributions of this work can be summarized as follows. 
(1) We have proposed a new approach for deep neural network defense by developing a unique generative cleaning network with quantized nonlinear transform. (2) We have formulated the problem of destroying the noise patterns of adversarial attacks and reconstructing the original image content as a generative adversarial network design and training problem, which considers both perceptual loss and adversarial loss. (3) Our new method significantly improves upon the performance of the state-of-the-art methods in the literature under a wide variety of attacks.

The rest of this paper is organized as follows. Section 2 reviews related work. The proposed method is presented in Section 3. Experimental results and performance comparisons with existing methods are provided in Section 4. Section 5 concludes the paper." }, { "heading": "2 RELATED WORK", "text": "In this section, we review related work on adversarial attack and network defense methods.

(A) Attack methods. Attack methods can be divided into two threat models: white-box attacks and black-box attacks. The white-box attacker has full access to the classifier network parameters, network architecture, and weights. The black-box attacker has no knowledge of or access to the target network. For white-box attacks, a simple and fast approach called the Fast Gradient Sign (FGS) method has been developed by Goodfellow et al. (2014b), using error back-propagation to directly modify the original image. The Basic Iterative Method (BIM) is an improved version of the FGS method. Carlini & Wagner (2017) designed an optimization-based attack method, called the Carlini-Wagner (C&W) attack, which is able to fool the target network with the smallest perturbation. Xiao et al. (2018) trained a generative adversarial network (GAN) (Goodfellow et al., 2014a) to generate perturbations. Kannan et al. (2018) found that Projected Gradient Descent (PGD) is the strongest among all attack methods. It can be viewed as a multi-step variant of FGS (Madry et al., 2018). Athalye et al. (2018) introduced a method, called Backward Pass Differentiable Approximation (BPDA), to attack networks where gradients are not available. It is able to successfully attack all existing state-of-the-art defense methods. For black-box attacks, the attacker has no knowledge about the target classifier. Papernot et al. (2017) introduced the first approach for black-box attacks using a substitute model. Dong et al. (2018) proposed a momentum-based iterative algorithm to improve the transferability of adversarial examples. Xie et al. (2018c) boosted the transferability of adversarial examples by creating diverse input patterns.

(B) Defense methods. Several approaches have recently been proposed for defending against both white-box and black-box attacks. Adversarial training defends against various attacks by training the target model with adversarial examples (Szegedy et al., 2013; Goodfellow et al., 2014b). Madry et al. (2018) suggested that training with adversarial examples generated by PGD improves the robustness. Meng & Chen (2017) proposed a method, called MagNet, which detects the perturbations and then reshapes them according to the difference between clean and adversarial examples. Recently, several defense methods based on GANs have been developed. Samangouei et al. (2018) projected the adversarial examples into a trained generative adversarial network (GAN) to approximate the input using a generated clean image over multiple iterations.
Recently, some defense methods have been developed based on input transformations. Guo et al. (2018) proposed several input transformations to defend against adversarial examples, including image cropping and re-scaling, bit-depth reduction, and JPEG compression. Xie et al. (2018a) proposed to defend against adversarial attacks by adding a randomization layer, which randomly re-scales the image and then randomly zero-pads it. Jia et al. (2019) proposed an image compression framework, called ComDefend, to defend against adversarial examples. Xie et al. (2018b) introduced a feature denoising method for defending against PGD white-box attacks.

Our proposed defense method is also related to GANs and image transformations. But, compared to existing methods, our method is unique in the following aspects: (1) We introduce a special layer, called the quantized nonlinear transform, into the generative cleaning network to destroy the sophisticated noise pattern of adversarial attacks. (2) Unlike the GAN-based methods in (Wang & Yu, 2019; Xiao et al., 2018), which aim to approximate the noisy input image using images generated by the GAN over multiple iterations, our generative cleaning network aims to reconstruct the image content damaged by the quantized nonlinear transform. (3) Our method does not need to modify the target network to be protected." }, { "heading": "3 THE PROPOSED DEFENSE METHOD", "text": "In this section, we present our proposed generative cleaning network method for effective deep neural network defense. For convenience, we refer to our proposed method as GCLN." }, { "heading": "3.1 METHOD OVERVIEW", "text": "Figure 1 provides an overview of the proposed method. The attacked image x∗ is fed into the generative cleaning network Gθ. The network has a special layer, called the quantized nonlinear transform, to destroy the noise pattern of the adversarial attack in the input image. The generative cleaning network aims to recover the original image content and produce a recovered image x̄. This recovered image x̄ will be passed to the target classifier Cα for image classification or recognition. To successfully learn the generative cleaning network Gθ, we construct a detector network Dφ, which serves as the dual network for the target classifier network Cα. The task of Dφ is to determine if the input image is clean or being attacked. In our proposed method, the generative cleaning network Gθ and the detector network Dφ are jointly trained through adversarial learning: the Gθ network tries to recover the image x̄ so that Dφ cannot detect any attack noise in it. In the following sections, we will explain the proposed method in more detail." }, { "heading": "3.2 QUANTIZED NONLINEAR TRANSFORM LAYER IN THE GENERATIVE CLEANING NETWORK", "text": "During the generative cleaning network design, we incorporate one special layer, called the quantized nonlinear transform, into the network. This transform aims to disturb and partially destroy the sophisticated pattern of the attack noise. In this work, we propose to construct such a transform using a linear transform $T$, followed by a quantizer $Q$ and an inverse transform $T^{-1}$. For the linear transform, we can use the discrete cosine transform (DCT) (Ahmed et al., 1974), which has been used in JPEG image compression (Wallace, 1992). Specifically, we partition the input image into blocks of size $M \times M$. The original image block is denoted by $X^*_B = [x^*_{nk}]_{1 \le n,k \le M}$.
The output block $\hat{X}_B = [\hat{x}_{ij}]_{1 \le i,j \le M}$ after the DCT transform is given by
$$\hat{x}_{ij} = \frac{1}{4} C_i C_j \sum_{n=0}^{M-1} \sum_{k=0}^{M-1} x^*_{nk} \cos\!\Big(\frac{i\pi(2n+1)}{2M}\Big) \cos\!\Big(\frac{j\pi(2k+1)}{2M}\Big), \quad (1)$$
with $C_i = 1/\sqrt{2}$ for $i = 0$, and $C_i = 1$ for $i \neq 0$. After the transform, we quantize the transform coefficient $\hat{x}_{ij}$ as follows:
$$R_Q(\hat{x}_{ij}) = \mathrm{Round}\!\Big(\frac{\hat{x}_{ij}}{q}\Big) \times q, \quad (2)$$
where $q$ is the quantization parameter. Certainly, this DCT transform can be replaced with other invertible transforms, such as the discrete wavelet transform (Daubechies, 1990). During network training, this special quantized nonlinear transform layer is implemented in the same way as the pooling layers in existing deep neural networks and included in the training process of the whole generative cleaning network." }, { "heading": "3.3 ADVERSARIAL TRAINING FOR GENERATIVE CLEANING NETWORKS", "text": "In our defense method design, the generative cleaning network Gθ and the detector network Dφ are trained against each other, just like existing generative adversarial networks (GANs). Dφ is a binary classifier to detect if the input image is clean or not. During the initial phase of training, Dφ is trained with the clean images and their attacked versions generated by existing attack methods. It should be noted that, when training Dφ, we do not need to know the model inside the target network Cα.

The goal of the generative cleaning network Gθ is two-fold: (1) first, it needs to successfully remove the residual attack noise so that the noise cannot be detected by the detector network Dφ; (2) second, it needs to make sure that the original image content is largely recovered. To achieve the above two goals, we formulate the following generative loss function for training the generative cleaning network Gθ:
$$\mathcal{L}_G = \lambda \mathcal{L}_P + (1-\lambda)\mathcal{L}_A, \quad (3)$$
where $\mathcal{L}_P$ is the perceptual loss and $\mathcal{L}_A$ is the adversarial loss. $\lambda$ is a weighting parameter. In our experiments, we set it to be 0.5. To define the perceptual loss, the L2-norm between the recovered image x̄ and the original image xo is often used (Johnson et al., 2016). In this work, we observe that a small adversarial perturbation often leads to very substantial noise in the feature map of the network (Xie et al., 2018b). Motivated by this, we use a pre-trained VGG-19 network, denoted by Fβ, to generate visual features for the recovered image x̄ and the original image xo, and use their feature difference as the perceptual loss $\mathcal{L}_P$. Specifically,
$$\mathcal{L}_P = \|F_\beta(x_o) - F_\beta(G_\theta(x^*))\|_2. \quad (4)$$
The adversarial loss $\mathcal{L}_A$ aims to train Gθ so that the recovered images will be detected as clean by the detector network Dφ. It is formulated as
$$\mathcal{L}_A = \mathbb{E}_{x^* \in \Omega^*}\, \Phi[D_\phi(G_\theta(x^*)), I_{clean}]. \quad (5)$$
Here, $\Phi[\cdot,\cdot]$ represents the cross-entropy between the output of the detector network and the target label $I_{clean}$ for clean images. We train our discriminative network Dφ, along with the generative cleaning network Gθ, to optimize the following min-max loss function:
$$\min_{G_\theta} \max_{D_\phi} \big\{\mathbb{E}_{x_o \in \Omega_o}[\log D_\phi(x_o)] + \mathbb{E}_{x^* \in \Omega^*}[\log(1 - D_\phi(G_\theta(x^*)))]\big\}. \quad (6)$$
Here, $\Omega_o$ and $\Omega^*$ represent the clean and attacked images of the training dataset. The goal of the generative model Gθ is to fool the discriminator Dφ, which is trained to distinguish adversarial images from clean images. With this framework, our generator learns to recover images that are highly similar to clean images and difficult for Dφ to detect. The detector network Dφ acts as a dual network for the original classifier Cα.
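To make the quantized nonlinear transform of Section 3.2 concrete, the following is a minimal sketch of the $T \to Q \to T^{-1}$ operation of Eqs. (1)-(2) on a single-channel image. The block size M = 8, the quantization step q = 10, and the use of SciPy's orthonormal DCT routines are our illustrative choices, not values fixed by the paper (for M = 8, the orthonormal 2-D DCT coincides with the normalization in Eq. (1)).

```python
# A minimal sketch of the quantized nonlinear transform: blockwise DCT (T),
# uniform quantization with step q (Q, Eq. 2), and inverse DCT (T^{-1}).
# For simplicity, assume H and W are multiples of the block size.
import numpy as np
from scipy.fft import dctn, idctn

def quantized_nonlinear_transform(image, block=8, q=10.0):
    """Apply T -> Q -> T^{-1} to a single-channel image (H x W array)."""
    h, w = image.shape
    out = np.zeros((h, w), dtype=np.float64)
    for i in range(0, h, block):
        for j in range(0, w, block):
            patch = image[i:i + block, j:j + block].astype(np.float64)
            coeffs = dctn(patch, norm='ortho')        # linear transform T (Eq. 1)
            coeffs = np.round(coeffs / q) * q         # quantizer R_Q (Eq. 2)
            out[i:i + block, j:j + block] = idctn(coeffs, norm='ortho')  # T^{-1}
    return out
```

Since Round has zero gradient almost everywhere, back-propagating through this layer during training would presumably require a straight-through-style gradient; the paper only states that the layer is implemented in the same way as a pooling layer.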
Cascaded with the generative cleaning network Gθ, the detector network Dφ guides the training of Gθ using back-propagation of gradients from its own network, aiming to minimize the above loss function. In our design, during the adversarial learning process, the target classifier Cα is called to determine if the recovered image x̄ is clean or not, as illustrated in Figure 1. If it is clean, it is added back into the clean training sample set Ωo on the fly to enhance the learning process." }, { "heading": "4 EXPERIMENTAL RESULTS", "text": "In this section, we implement and evaluate our GCLN defense method and compare its performance with state-of-the-art defense methods under a wide variety of attacks, including white-box and black-box attacks." }, { "heading": "4.1 EXPERIMENT SETUP", "text": "Following existing methods in the literature, we use the CIFAR-10 and SVHN (Street View House Numbers) datasets. The CIFAR-10 dataset consists of 60,000 images in 10 classes, with 32×32 image size. The Street View House Numbers (SVHN) dataset (Netzer et al., 2011) has about 200K images of street numbers. The attack methods considered in this work include FGS (Goodfellow et al., 2014b), PGD (Madry et al., 2018), the BIM attack (Kurakin et al., 2016), and the C&W attack (Carlini & Wagner, 2017)." }, { "heading": "4.2 RESULTS ON THE CIFAR-10 DATASET", "text": "We compare the performance of our defense method with 6 state-of-the-art methods developed in the literature under four different white-box attacks: the FGS attack (Goodfellow et al., 2014b), the PGD attack (Madry et al., 2018), the BIM attack (Kurakin et al., 2016), and the C&W attack (Carlini & Wagner, 2017). Following (Kannan et al., 2018) and (Wang & Yu, 2019), the white-box attackers generate adversarial perturbations within a range of ε = 8/255. In addition, we set the step size of the attackers to be 1/255 with 10 attack iterations as the baseline settings. Table 1 shows the image classification accuracy with different defense methods on the CIFAR-10 dataset. The second column shows the classification accuracy when the input images are all clean. We can see that some methods, such as Adversarial BIM, Feature Squeezing, and Adversarial PGD, degrade the classification accuracy of clean images. This implies that their defense methods have caused significant damage to the original images, or they cannot accurately tell if the input image is clean or being attacked. The remaining four columns list the final image classification accuracy with different defense methods. Some methods did not provide results on specific attack methods, which were left blank (marked with '-') in the table. For all of these four attacks, our method significantly outperforms existing methods. For example, for the powerful PGD attack, our method outperforms the Adversarial-PGD method by more than 28%. We can also see that GCLN with quantization step size Qs = 10 performs better than that with Qs = 5. This is because the quantized nonlinear transform layers with larger quantization parameters are relatively more effective in removing the noise in feature maps.

Defending against the BPDA attack. The Backward Pass Differentiable Approximation (BPDA) (Athalye et al., 2018) attack is very challenging to defend against since it can iteratively strengthen the adversarial examples using gradient approximation according to the defense mechanism. Table 2 summarizes the defense results of our algorithm in comparison with seven other methods.
We can see that our GCLN is able to improve the classification accuracy over the second-best method by 6%.

Defending against black-box attacks. We generate the black-box adversarial examples using FGS and PGD attacks with a substitute model (Papernot et al., 2017). The substitute model is trained in the same way as the target classifier, with the ResNet-34 (He et al., 2016) network structure. Table 3 shows the performance of our defense mechanism under black-box attacks on the CIFAR-10 dataset. The adversarial examples are constructed with ε = 8/256 under the substitute model. We observe that the target classifier is much less sensitive to adversarial examples generated by FGS and PGD black-box attacks than the white-box ones. But the powerful PGD attack is still able to decrease the overall classification accuracy to a very low level, 38.71%. We compare our method with the Adversarial-PGD (Madry et al. (2018)) and Adversarial Network (Wang & Yu, 2019) methods. We include these two because they are the only ones that provide performance results on CIFAR-10 with black-box attacks. From Table 3, we can see our method improves the accuracy by 5.8% over the state-of-the-art Adversarial Network method for the PGD attack." }, { "heading": "4.3 RESULTS ON THE SVHN DATASET.", "text": "We evaluate our GCLN method on the SVHN dataset in comparison with four state-of-the-art defense methods: M-PGD (Madry et al., 2018), ALP (Kannan et al., 2018), Adversarial PGD (Tramèr et al., 2018), and Adversarial Network (Wang & Yu, 2019). For the SVHN dataset, as in the existing methods (Kannan et al., 2018; Wang & Yu, 2019), we use ResNet-18 (He et al., 2016) as the target classifier. The average classification accuracy is 96.21%. We use the same parameters as in (Kannan et al., 2018) for the PGD attack with a total magnitude of ε = 0.05 (12/255). Within each single step, the perturbation magnitude is set to 0.01 (3/255), and 10 iterative steps are used.

Defending against white-box attacks. Table 4 summarizes the experimental results and performance comparisons with those four existing defense methods. We can see that on this dataset the PGD attack is able to decrease the overall classification accuracy to an extremely low level, 0.15%.

Our algorithm outperforms existing methods by a very large margin. For example, for the PGD attack, our algorithm outperforms the second-best ALP (Kannan et al., 2018) algorithm by more than 46%.

Defending against black-box attacks. We also perform experiments on defending against black-box attacks on the SVHN dataset. Table 4 summarizes our experimental results with the powerful PGD attack and provides the comparison with those four methods. We can see that our approach outperforms the other methods by 2.39% for the FGS attacks and 7.01% for the PGD attacks. From the above results, we can see that our proposed method is particularly effective for defending against strong attacks, for example, PGD attacks with many iteration steps and large noise magnitude.

Visualizing the defense process. Network defense is essentially a denoising process of the feature maps. To further understand how the proposed GCLN method works, we visualize the feature maps of original, attacked, and GCLN-cleaned images. We use the feature map from the activation layer, the third from the last layer in the network. Figure 2 shows three examples.
In the first example, the first row is the original image (classified as flamingo), its feature map, its gradient-weighted class activation heatmap, and the heatmap overlaid on the original image. The heatmap shows which parts of the original image the classification network is paying attention to. The second row shows the attacked image (classified as hoopskirt), its feature map, heatmap, and the heatmap overlaid on the attacked image. We can see that the feature map is very noisy and the heatmap is distorted. The third row shows the GCLN-cleaned images. We can see that both the feature map and the heatmap have been largely restored." }, { "heading": "4.4 ABLATION STUDIES AND ALGORITHM ANALYSIS", "text": "In this section, we provide in-depth ablation study results of our algorithm to further understand its capability.

(A) Analyzing the impact of the quantization parameter. We notice that the quantization parameter plays an important role in the defense. Figure 3 (left) shows the defense performance (classification accuracy after defense) of our method on the CIFAR-10 dataset with white-box BPDA attacks. We can see that a quantization step size within the range of 8 to 12 yields the best performance. Small quantization parameters do not provide efficient defense since the quantized nonlinear transform is not able to disturb and destroy the attack noise pattern. However, when the quantization parameter becomes too large, it damages the original image content too much, so that it cannot be recovered by the subsequent generative cleaning network.

(B) Defense against large-iteration BPDA attacks. The impact of the white-box BPDA attack increases with its number of iterations since it accesses the network and performs gradient back-propagation with more iterations to force the network towards a wrong classification output. Following the protocol of ALP (Kannan et al., 2018), we evaluate the capacity of our defense method against different numbers of BPDA white-box attack iterations. Figure 3 (right) shows the performance of our method with an increasing number of attack iterations. We can see that our method is able to withstand a large number of BPDA attack iterations. The impact of the attack becomes relatively stable after 50 iterations.

(C) Defense against large-iteration and large-epsilon PGD attacks. As shown in Fig. 4, gradually increasing the PGD attack iterations raises the attack strength. This significantly degrades the accuracy of the vanilla adversarial training method (Madry et al., 2018) and the PNI (Parametric Noise Injection) method (He et al., 2019), as well as our method. But our method significantly outperforms the other two. In both cases, the perturbed-data accuracy starts saturating and does not degrade any further when Nstep ≥ 40. Our method still outperforms the vanilla adversarial training and PNI defense methods even when the magnitude of adversarial noise is increased up to ε = 0.3." }, { "heading": "5 CONCLUSION", "text": "We have developed a new method for defending deep neural networks against adversarial attacks based on generative cleaning networks with quantized nonlinear transform. This network is able to recover the original image while cleaning up the residual attack noise. We developed a detector network, which serves as the dual network of the target classifier network to be defended, to detect if the image is clean or being attacked.
This detector network and the generative cleaning network are jointly trained with adversarial learning so that the detector network cannot find any attack noise in the output image of the generative cleaning network. Our extensive experimental results demonstrated that our approach outperforms the state-of-the-art methods by large margins in both white-box and black-box attacks. For example, it dramatically improves the classification accuracy over the second-best method by more than 30% on the SVHN dataset and by more than 14% on the challenging CIFAR-10 dataset." } ]
2019
null
SP:35b6bf3da512cae6ad93e1422b1e272474f9a8cb
[ "This paper studies an optimistic variant of AMSGrad algorithm, where an estimate of the future gradient is incorporated into the optimization problem. The main claim is that when we have good enough (distance from the ground truth is small) estimate of the unknown gradient, the proposed algorithm will enjoy lower regret. Theoretical results are provided and experiments are conducted to compare the proposed algorithm with baselines. The idea seems to be not very novel since the optimistic optimization techniques are borrowed directly from the online optimization field, while it is still interesting to see this kind of work and to see its comparison with existing algorithms in experiments. However, the comparison seems to be not fair both in theory and experiments.", "This paper proposes an online optimization method called Optimistic-AMSGrad, which combines two existing methods: (i) AMSGrad (Reddi et al 2018) and (ii) optimistic online learning where the prediction step is done with the extrapolation algorithm by Scieur et al 2016. The authors do a good job of presenting the method (by introducing the background in proper order), the paper seems self-contained and cites the relevant literature. The regret analysis of the proposed algorithm is provided, where the obtained regret can be smaller than AMSGrad depending on whether or not the guess of the gradient and the gradient are close." ]
This paper considers a new variant of AMSGrad called Optimistic-AMSGrad. AMSGrad (Reddi et al. (2018)) is a popular adaptive gradient based optimization algorithm that is widely used in training deep neural networks. The new variant assumes that mini-batch gradients in consecutive iterations have some underlying structure, which makes the gradients sequentially predictable. By exploiting the predictability and some ideas from Optimistic Online learning, the proposed algorithm can accelerate the convergence and also enjoys a tighter regret bound. We evaluate Optimistic-AMSGrad and AMSGrad in terms of various performance measures (i.e., training loss, testing loss, and classification accuracy on training/testing data), which demonstrate that Optimistic-AMSGrad improves AMSGrad.
[]
[ { "authors": [ "Jacob Abernethy", "Kevin A. Lai", "Kfir Y. Levy", "Jun-Kun Wang" ], "title": "Faster rates for convex-concave games", "venue": null, "year": 2018 }, { "authors": [ "Naman Agarwal", "Brian Bullins", "Xinyi Chen", "Elad Hazan", "Karan Singh", "Cyril Zhang", "Yi Zhang" ], "title": "Efficient full-matrix adaptive regularization", "venue": null, "year": 2019 }, { "authors": [ "Rohan Anil", "Vineet Gupta", "Tomer Koren", "Yoram Singer" ], "title": "Memory efficient adaptive optimization", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Gary Becigneul", "Octavian-Eugen Ganea" ], "title": "Riemannian adaptive optimization methods", "venue": null, "year": 2019 }, { "authors": [ "C. Brezinski", "M.R. Zaglia" ], "title": "Extrapolation methods: theory and practice", "venue": null, "year": 2013 }, { "authors": [ "S. Cabay", "L. Jackson" ], "title": "A polynomial extrapolation method for finding limits and antilimits of vector sequences", "venue": "SIAM Journal on Numerical Analysis,", "year": 1976 }, { "authors": [ "Jinghui Chen", "Quanquan Gu" ], "title": "Closing the generalization gap of adaptive gradient methods in training deep neural networks", "venue": null, "year": 2018 }, { "authors": [ "Xiangyi Chen", "Sijia Liu", "Ruoyu Sun", "Mingyi Hong" ], "title": "On the convergence of a class of adam-type algorithms for non-convex optimization. ICLR, 2019a", "venue": null, "year": 2019 }, { "authors": [ "Zaiyi Chen", "Zhuoning Yuan", "Jinfeng Yi", "Bowen Zhou", "Enhong Chen", "Tianbao Yang" ], "title": "Universal stagewise learning for non-convex problems with convergence on averaged solutions. ICLR, 2019b", "venue": null, "year": 2019 }, { "authors": [ "Chao-Kai Chiang", "Tianbao Yang", "Chia-Jung Lee", "Mehrdad Mahdavi", "Chi-Jen Lu", "Rong Jin", "Shenghuo Zhu" ], "title": "Online optimization with gradual variations", "venue": null, "year": 2012 }, { "authors": [ "Constantinos Daskalakis", "Andrew Ilyas", "Vasilis Syrgkanis", "Haoyang Zeng" ], "title": "Training gans with optimism", "venue": null, "year": 2018 }, { "authors": [ "Timothy Dozat" ], "title": "Incorporating nesterov momentum into adam", "venue": "ICLR (Workshop Track),", "year": 2016 }, { "authors": [ "John Duchi", "Elad Hazan", "Yoram Singer" ], "title": "Adaptive subgradient methods for online learning and stochastic optimization", "venue": "Journal of Machine Learning Research (JMLR),", "year": 2011 }, { "authors": [ "Aritra Dutta", "El Houcine Bergou", "Yunming Xiao", "Marco Canini", "Peter Richtarik" ], "title": "Direct nonlinear acceleration", "venue": null, "year": 1905 }, { "authors": [ "R. 
Eddy" ], "title": "Extrapolating to the limit of a vector sequence", "venue": "Information linkage between applied mathematics and industry,", "year": 1979 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": null, "year": 2014 }, { "authors": [ "Alex Graves", "Abdel rahman Mohamed", "Geoffrey Hinton" ], "title": "Speech recognition with deep recurrent neural networks", "venue": null, "year": 2013 }, { "authors": [ "Vineet Gupta", "Tomer Koren", "Yoram Singer" ], "title": "Shampoo: Preconditioned stochastic tensor optimization", "venue": null, "year": 2018 }, { "authors": [ "Elad Hazan" ], "title": "Introduction to online convex optimization", "venue": "Foundations and Trends in Optimization,", "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2016 }, { "authors": [ "Nitish Shirish Keskar", "Richard Socher" ], "title": "Improving generalization performance by switching from adam to sgd", "venue": null, "year": 2017 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "ICLR,", "year": 2015 }, { "authors": [ "Hugo Larochelle", "Dumitru Erhan", "Aaron Courville", "James Bergstra", "Yoshua Bengio" ], "title": "An empirical evaluation of deep architectures on problems with many factors of variation", "venue": null, "year": 2007 }, { "authors": [ "Sergey Levine", "Chelsea Finn", "Trevor Darrell", "Pieter Abbeel" ], "title": "End-to-end training of deep visuomotor policies", "venue": null, "year": 2017 }, { "authors": [ "Ping Li" ], "title": "Robust logitboost and adaptive base class (abc) logitboost", "venue": null, "year": 2010 }, { "authors": [ "Xiaoyu Li", "Francesco Orabona" ], "title": "On the convergence of stochastic gradient descent with adaptive stepsizes", "venue": null, "year": 2019 }, { "authors": [ "Liyuan Liu", "Haoming Jiang", "Pengcheng He", "Weizhu Chen", "Xiaodong Liu", "Jianfeng Gao", "Jiawei Han" ], "title": "On the variance of the adaptive learning rate and beyond", "venue": null, "year": 1908 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Decoupled weight decay regularization", "venue": null, "year": 2019 }, { "authors": [ "Liangchen Luo", "Yuanhao Xiong", "Yan Liu", "Xu Sun" ], "title": "Adaptive gradient methods with dynamic bound of learning rate", "venue": null, "year": 2019 }, { "authors": [ "James Martens", "Roger Grosse" ], "title": "Optimizing neural networks with kronecker-factored approximate curvature", "venue": null, "year": 2015 }, { "authors": [ "H. Brendan McMahan", "Matthew J. Streeter" ], "title": "Adaptive bound optimization for online convex optimization", "venue": null, "year": 2010 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Alex Graves", "Ioannis Antonoglou", "Daan Wierstra", "Martin Riedmiller" ], "title": "Playing atari with deep reinforcement learning", "venue": "NIPS (Deep Learning Workshop),", "year": 2013 }, { "authors": [ "Mehryar Mohri", "Scott Yang" ], "title": "Accelerating optimization via adaptive prediction", "venue": null, "year": 2016 }, { "authors": [ "Yurii Nesterov" ], "title": "Introductory lectures on convex optimization: A basic course", "venue": null, "year": 2004 }, { "authors": [ "B.T. 
Polyak" ], "title": "Some methods of speeding up the convergence of iteration methods", "venue": "Mathematics and Mathematical Physics,", "year": 1964 }, { "authors": [ "Alexander Rakhlin", "Karthik Sridharan" ], "title": "Optimization, learning, and games with predictable sequences", "venue": null, "year": 2013 }, { "authors": [ "Alexander Rakhlin", "Karthik Sridharan" ], "title": "Online learning with predictable sequence", "venue": "COLT,", "year": 2013 }, { "authors": [ "Sashank J. Reddi", "Satyen Kale", "Sanjiv Kumar" ], "title": "On the convergence of adam and beyond", "venue": null, "year": 2018 }, { "authors": [ "Damien Scieur", "Alexandre d’Aspremont", "Francis Bach" ], "title": "Regularized nonlinear acceleration", "venue": null, "year": 2016 }, { "authors": [ "Matthew Staib", "Sashank J. Reddi", "Satyen Kale", "Sanjiv Kumar", "Suvrit Sra" ], "title": "Escaping saddle points with adaptive gradient methods", "venue": null, "year": 2019 }, { "authors": [ "Vasilis Syrgkanis", "Alekh Agarwal", "Haipeng Luo", "Robert E. Schapire" ], "title": "Fast convergence of regularized learning in games", "venue": null, "year": 2015 }, { "authors": [ "T. Tieleman", "G. Hinton" ], "title": "Rmsprop: Divide the gradient by a running average of its recent magnitude", "venue": "COURSERA: Neural Networks for Machine Learning,", "year": 2012 }, { "authors": [ "Paul Tseng" ], "title": "On accelerated proximal gradient methods for convex-concave optimization", "venue": null, "year": 2008 }, { "authors": [ "H.F. Walker", "P. Ni" ], "title": "Anderson acceleration for fixed-point iterations", "venue": "SIAM Journal on Numerical Analysis,", "year": 2011 }, { "authors": [ "Rachel Ward", "Xiaoxia Wu", "Leon Bottou" ], "title": "Adagrad stepsizes: Sharp convergence over nonconvex landscapes, from any initialization", "venue": null, "year": 2019 }, { "authors": [ "Ashia C Wilson", "Rebecca Roelofs", "Mitchell Stern", "Nati Srebro", "Benjamin Recht" ], "title": "The marginal value of adaptive gradient methods in machine learning", "venue": null, "year": 2017 }, { "authors": [ "Manzil Zaheer", "Sashank Reddi", "Devendra Sachan", "Satyen Kale", "Sanjiv Kumar" ], "title": "Adaptive methods for nonconvex optimization", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "Matthew D. Zeiler" ], "title": "Adadelta: An adaptive learning rate method", "venue": null, "year": 2012 }, { "authors": [ "Dongruo Zhou", "Yiqi Tang", "Ziyan Yang", "Yuan Cao", "Quanquan Gu" ], "title": "On the convergence of adaptive gradient methods for nonconvex optimization", "venue": null, "year": 2018 }, { "authors": [ "Zhiming Zhou", "Qingru Zhang", "Guansong Lu", "Hongwei Wang", "Weinan Zhang", "Yong Yu" ], "title": "Adashift: Decorrelation and convergence of adaptive learning rate methods", "venue": null, "year": 2019 }, { "authors": [ "Fangyu Zou", "Li Shen" ], "title": "On the convergence of adagrad with momentum for training deep neural networks", "venue": null, "year": 2018 }, { "authors": [ "Recently", "Zaheer" ], "title": "2019) provide some theoretical analysis of ADAM-type algorithms when applying them to smooth nonconvex optimization problems", "venue": "For example, Chen et al", "year": 2019 }, { "authors": [ "Agarwal" ], "title": "optimization, one can follow the approach", "venue": null, "year": 2019 }, { "authors": [], "title": "diag{v̂t}·〉, then the update might be viewed as an optimistic variant of ADAGRAD. 
However, no experiments was provided in (Mohri & Yang (2016))", "venue": "COMPARISON TO OPTIMISTIC-ADAM OF (DASKALAKIS ET AL", "year": 2018 }, { "authors": [ "Daskalakis" ], "title": "2018) proposed one version of optimistic algorithm for ADAM, which is called OPTIMISTIC-ADAM in their paper. We want to emphasize that the goals are different. OPTIMISTIC-ADAM in their paper is designed to optimize two-player games (e.g., GANs (Goodfellow et al. (2014))), while the proposed algorithm in this paper is designed to accelerate optimization (e.g., solving empirical risk minimization quickly)", "venue": null, "year": 2018 }, { "authors": [ "GANs (Goodfellow" ], "title": "GANs is a two-player zero-sum game. There have been some related works in OPTIMISTIC ONLINE LEARNING like (Chiang et al", "venue": null, "year": 2014 }, { "authors": [ "OPTIMISTIC-ADAM (Daskalakis" ], "title": "Required: parameter β1, β2, and ηt. 2: Init: w1 ∈ K. 3: for t = 1 to T do 4: Get mini-batch stochastic gradient vector gt ∈ R at wt", "venue": null, "year": 2018 }, { "authors": [ "Syrgkanis" ], "title": "2015)) showing that if both players use some kinds of OPTIMISTICupdate, then accelerating the convergence to the equilibrium of the game is possible", "venue": "Daskalakis et al", "year": 2018 }, { "authors": [ "Furthermore", "Daskalakis" ], "title": "OPTIMISTIC-ADAM while we give some analysis for the proposed algorithm. For comparison, we replicate OPTIMISTIC-ADAM in Algorithm 4. OPTIMISTIC-ADAM in Algorithm 4 uses the previous gradient as the guess of the next gradient", "venue": null, "year": 2018 }, { "authors": [ "ADAGRAD", "ADAM. Zhou" ], "title": "2019) propose decorrelation between the second moment term vt and the gradient gt by temporal shifting to deal with the non-convergence issue of ADAM. Luo et al. (2019) show that an extreme effective learning rate might happen during the execution of ADAM, which can cause the non-convergence. They propose an operation to clip the effective learning rate that avoids the extreme learning", "venue": null, "year": 2018 }, { "authors": [ "Liu" ], "title": "2019) study a heuristic called the learning rate “warm-up” and propose a new variant of ADAM by including a variance rectification term. Becigneul & Ganea (2019) propose a counterpart of ADAM for Riemannian manifolds. Other directions include improving the generalization of adaptive gradient methods (e.g", "venue": null, "year": 2019 }, { "authors": [ "e.g. Wilson" ], "title": "2019)), or showing that an adaptive optimization method can escape saddle points (Staib et al. (2019)). B PROOF OF THEOREM 1 We provide the regret analysis", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Nowadays deep learning has been very successful in numerous applications, from robotics (e.g., Levine et al. (2017)), computer vision (e.g., He et al. (2016); Goodfellow et al. (2014)), reinforcement learning (e.g., Mnih et al. (2013)), to natural language processing (e.g., Graves et al. (2013)). A common goal in these applications is learning quickly. It becomes a desired goal due to the presence of big data and/or the use of large neural nets. To accelerate the process, there are variety of training algorithms proposed in recent years, such as AMSGRAD (Reddi et al. (2018)), ADAM (Kingma & Ba (2015)), RMSPROP (Tieleman & Hinton (2012)), ADADELTA (Zeiler (2012)), and NADAM (Dozat (2016)), etc.\nAll the prevalent algorithms for training deep nets mentioned above combine two ideas: the idea of adaptivity from ADAGRAD (Duchi et al. (2011); McMahan & Streeter (2010)) and the idea of momentum from NESTEROV’S METHOD (Nesterov (2004)) or HEAVY BALL method (Polyak (1964)). ADAGRAD is an online learning algorithm that works well compared to the standard online gradient descent when the gradient is sparse. Its update has a notable feature: the effective learning rate is different for each dimension, depending on the magnitude of gradient in each dimension, which might help in exploiting the geometry of data and leading to a better update. On the other hand, NESTEROV’S METHOD or HEAVY BALL Method (Polyak (1964)) is an accelerated optimization algorithm whose update not only depends on the current iterate and current gradient but also depends on the past gradients (i.e., momentum). State-of-the-art algorithms like AMSGRAD (Reddi et al. (2018)) and ADAM (Kingma & Ba (2015)) leverage the ideas to accelerate training neural nets.\nIn this paper, we propose an algorithm that goes further than the hybrid of the adaptivity and momentum approach. Our algorithm is inspired by OPTIMISTIC ONLINE LEARNING (see e.g. Chiang et al. (2012); Rakhlin & Sridharan (2013a;b); Syrgkanis et al. (2015); Abernethy et al. (2018)). OPTIMISTIC ONLINE LEARNING considers that a good guess of the loss function in each round is available and plays an action by utilizing the guess. By exploiting the guess, algorithms in OPTIMISTIC ONLINE LEARNING can enjoy a smaller regret than the ones without exploiting the guess. We combine the OPTIMISTIC ONLINE LEARNING idea with the adaptivity and the momentum ideas to design a new algorithm — OPTIMISTIC-AMSGRAD. We also provide a theoretical analysis of OPTIMISTIC-AMSGRAD. The proposed algorithm not only adapts to the informative dimensions, exhibits momentum, but also exploits a good guess of the next gradient to facilitate acceleration. We conduct experiments and show that OPTIMISTIC-AMSGRAD improves AMSGRAD in terms of various measures: training loss, testing loss, and classification accuracy on training/testing data over epochs." }, { "heading": "2 PRELIMINARIES", "text": "We begin by providing some background in online learning, as it will be the main tool to design and analyze our proposed algorithm. We follow the notation in the literature of adaptive optimization (Kingma & Ba (2015); Reddi et al. (2018)). For any vector u, v ∈ Rd, u/v represents element-wise division, u2 represents element-wise square, √ u represents element-wise square-root. We denote g1:T [i] as the sum of the ith element of T vectors g1, g2, . . . , gT ∈ Rd." 
}, { "heading": "2.1 A BRIEF REVIEW OF ONLINE LEARNING AND OPTIMISTIC ONLINE LEARNING", "text": "The standard setup of online learning is that, in each round t, an online learner selects an action wt ∈ K ⊆ Rd, then the learner observes `t(·) and suffers loss `t(wt) after the learner commits the action. The goal of the learner is minimizing the regret,\nRegretT ({wt}) := ∑T t=1 `t(wt)− ∑T t=1 `t(w ∗),\nwhich is the cumulative loss of the learner minus the cumulative loss of some benchmark w∗ ∈ K. The idea of OPTIMISTIC ONLINE LEARNING (e.g., Chiang et al. (2012); Rakhlin & Sridharan (2013a;b); Syrgkanis et al. (2015); Abernethy et al. (2018)) is as follows. Suppose that, in each round t, the learner has a good guess mt(·) of the loss function `t(·) before playing an action wt. Then, the learner should exploit the guess mt(·) to choose an action wt since mt(·) is close to the true loss function `t(·). 1 For example, Syrgkanis et al. (2015) proposes an optimistic-variant of FOLLOW-THE-REGULARIZED-LEADER (FTRL). FTRL (see e.g. Hazan (2016)) is an online learning algorithm whose update is\nwt = arg minw∈K〈w,Lt−1〉+ 1ηR(w),\nwhere η is a parameter, R(·) is a 1-strongly convex function with respect to a norm (‖ · ‖) on the constraint setK, and Lt−1 := ∑t−1 s=1 gs is the cumulative sum of gradient vectors of the loss functions\n(i.e., gs := ∇`s(ws) ) up to but not including t. FTRL has regret at most O( √∑T\nt=1 ‖gt‖∗). On the other hand, OPTIMISTIC-FTRL (Syrgkanis et al. (2015)) has the update\nwt = arg minw∈K〈w,Lt−1 +mt〉+ 1ηR(w),\nwhere mt is the learner’s guess of the gradient vector gt := ∇`t(wt). Under the assumption that loss functions are convex, the regret of OPTIMISTIC-FTRL is at most O( √∑T t=1 ‖gt −mt‖∗), which can be much smaller than the regret of FTRL if mt is close to gt. Consequently, OPTIMISTIC-FTRL can achieve better performance than FTRL. On the other hand, if mt is far from gt, then the regret of OPTIMISTIC-FTRL would be only a constant factor worse than that of its non-optimistic counterpart.\nIn Section 4, we will provide a strategy to obtain mt. At the moment, we just would like to use this example of FTRL to emphasize the importance of leveraging a good guess mt for updating wt, in order to achieve a faster convergence rate (or equivalently, small regret). We will have a similar argument when we compare OPTIMISTIC-AMSGRAD and AMSGRAD." }, { "heading": "2.2 ADAM AND AMSGRAD", "text": "ADAM (Kingma & Ba (2015)) is a popular algorithm for training deep nets. It combines the momentum idea (Polyak (1964)) with the idea of ADAGRAD (Duchi et al. (2011)), which has effective different learning rates for different dimensions. The effective learning rate of ADAGRAD in iteration t for a dimension j is proportional to the inverse of √ Σts=1gs[j]\n2, where gs[j] is the jth element of the gradient vector gs in time s. This adaptive learning rate might help for accelerating the convergence when the gradient vector is sparse (Duchi et al. (2011)). However, when applying ADAGRAD to train deep nets, it is observed that the learning rate might decay too fast (Kingma & Ba (2015)). Therefore, Kingma & Ba (2015) propose using a moving average of gradients divided by the square root of the second moment of the moving average (element-wise fashion), for updating the model parameter w (i.e., lines 5,6 and 8 of Algorithm 1). 
Yet, ADAM (Kingma & Ba (2015)) fails at some online convex optimization problems. AMSGRAD (Reddi et al. (2018)) fixes the issue.

¹ Imagine that if the learner had known $\ell_t(\cdot)$ before committing its action, then it would exploit the knowledge to determine its action and consequently minimize the regret.

Algorithm 1 AMSGRAD (Reddi et al. (2018))
1: Required: parameters $\beta_1$, $\beta_2$, and $\eta_t$.
2: Init: $w_1 \in K \subseteq \mathbb{R}^d$ and $\hat{v}_0 = v_0 = \mathbf{1} \in \mathbb{R}^d$.
3: for $t = 1$ to $T$ do
4: Get mini-batch stochastic gradient vector $g_t$ at $w_t$.
5: $\theta_t = \beta_1 \theta_{t-1} + (1-\beta_1) g_t$.
6: $v_t = \beta_2 v_{t-1} + (1-\beta_2) g_t^2$.
7: $\hat{v}_t = \max(\hat{v}_{t-1}, v_t)$.
8: $w_{t+1} = w_t - \eta_t \theta_t / \sqrt{\hat{v}_t}$. (element-wise division)
9: end for

The algorithm of AMSGRAD is shown in Algorithm 1. The difference between ADAM and AMSGRAD lies in line 7 of Algorithm 1. ADAM does not have the max operation on line 7 (i.e., $\hat{v}_t = v_t$ for ADAM), while AMSGRAD adds the operation to guarantee a non-increasing learning rate, $\eta_t / \sqrt{\hat{v}_t}$, which helps with the convergence (i.e., average regret $\mathrm{Regret}_T / T \to 0$). For the parameters of AMSGRAD, it is suggested that $\beta_1 = 0.9$ and $\beta_2 = 0.99$." }, { "heading": "3 OPTIMISTIC-AMSGRAD", "text": "Algorithm 2 OPTIMISTIC-AMSGRAD
1: Required: parameters $\beta_1$, $\beta_2$, $\epsilon$, and $\eta_t$.
2: Init: $w_1 = w_{1/2} \in K \subseteq \mathbb{R}^d$ and $\hat{v}_0 = v_0 = \epsilon\mathbf{1} \in \mathbb{R}^d$.
3: for $t = 1$ to $T$ do
4: Get mini-batch stochastic gradient vector $g_t$ at $w_t$.
5: $\theta_t = \beta_1 \theta_{t-1} + (1-\beta_1) g_t$.
6: $v_t = \beta_2 v_{t-1} + (1-\beta_2)(g_t - m_t)^2$.
7: $\hat{v}_t = \max(\hat{v}_{t-1}, v_t)$.
8: $w_{t+\frac{1}{2}} = \Pi_K\big[w_{t-\frac{1}{2}} - \eta_t \theta_t / \sqrt{\hat{v}_t}\big]$.
9: $w_{t+1} = \Pi_K\big[w_{t+\frac{1}{2}} - \eta_{t+1} h_{t+1} / \sqrt{\hat{v}_t}\big]$, where $h_{t+1} := \beta_1 \theta_{t-1} + (1-\beta_1) m_{t+1}$ and $m_{t+1}$ is the guess of $g_{t+1}$.
10: end for

We propose a new optimization algorithm, OPTIMISTIC-AMSGRAD, shown in Algorithm 2. In each iteration, the learner computes a gradient vector $g_t := \nabla \ell_t(w_t)$ at $w_t$ (line 4); then it maintains an exponential moving average $\theta_t \in \mathbb{R}^d$ (line 5) and $v_t \in \mathbb{R}^d$ (line 6), which is followed by the max operation to obtain $\hat{v}_t \in \mathbb{R}^d$ (line 7). The learner also updates an auxiliary variable $w_{t+1/2} \in K$ (line 8). It uses the auxiliary variable to update and commit $w_{t+1}$ (line 9), which exploits the guess $m_{t+1}$ of $g_{t+1}$ to get $w_{t+1}$. As the learner's action set is $K \subseteq \mathbb{R}^d$, we adopt the notation $\Pi_K[\cdot]$ for the projection to $K$ if needed. We see that OPTIMISTIC-AMSGRAD has three properties:

• Adaptive learning rate for each dimension, as in ADAGRAD (Duchi et al. (2011)). (lines 6, 8, and 9)
• Exponential moving average of the past gradients, as in NESTEROV'S METHOD (Nesterov (2004)) and the HEAVY-BALL method (Polyak (1964)). (line 5)
• Optimistic update that exploits a good guess of the next gradient vector, as in optimistic online learning algorithms (e.g. Chiang et al. (2012); Rakhlin & Sridharan (2013a;b); Syrgkanis et al. (2015)). (line 9)

The first property helps with acceleration when the gradient has a sparse structure. The second one is from the well-recognized idea of momentum, which can also help with acceleration. The last one, perhaps less known outside the ONLINE LEARNING community, can actually lead to acceleration when the prediction of the next gradient is good. This property will be elaborated in the following subsection, in which we provide the theoretical analysis of OPTIMISTIC-AMSGRAD.

Observe that the proposed algorithm does not reduce to AMSGRAD when $m_t = 0$. Furthermore, if $K = \mathbb{R}^d$ (the unconstrained case), one might want to combine line 8 and line 9 into a single line as $w_{t+1} = w_{t-\frac{1}{2}} - \eta_t \theta_t / \sqrt{\hat{v}_t} - \eta_{t+1} h_{t+1} / \sqrt{\hat{v}_t}$. Yet, based on this expression, we see that $w_{t+1}$ is updated from $w_{t-\frac{1}{2}}$ instead of $w_t$. A minimal sketch of this interleaved update is given below.
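The following is a minimal NumPy sketch of one iteration of Algorithm 2 for the unconstrained case $K = \mathbb{R}^d$ (so that $\Pi_K$ is the identity) and a constant step size; the function and variable names are our illustrative choices, not the paper's.

```python
import numpy as np

def optimistic_amsgrad_step(w_half, theta, v, v_hat, g_t, m_t, m_next,
                            beta1=0.9, beta2=0.99, eta=1e-3):
    """One iteration of Optimistic-AMSGrad (Algorithm 2), unconstrained case.

    g_t:    mini-batch gradient at the committed point w_t        (line 4)
    m_t:    the guess of g_t made in the previous iteration
    m_next: the guess of g_{t+1} (e.g., via the extrapolation of Section 4)
    """
    theta_prev = theta                                    # theta_{t-1}
    theta = beta1 * theta_prev + (1 - beta1) * g_t        # line 5
    v = beta2 * v + (1 - beta2) * (g_t - m_t) ** 2        # line 6
    v_hat = np.maximum(v_hat, v)                          # line 7
    w_half = w_half - eta * theta / np.sqrt(v_hat)        # line 8: w_{t+1/2}
    h_next = beta1 * theta_prev + (1 - beta1) * m_next    # line 9: h_{t+1}
    w_next = w_half - eta * h_next / np.sqrt(v_hat)       # committed w_{t+1}
    return w_next, w_half, theta, v, v_hat
```

The returned w_next is the iterate at which the next gradient $g_{t+1}$ is evaluated, while w_half carries the auxiliary half-iterate sequence forward to the next call.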
Therefore, while OPTIMISTIC-AMSGRAD looks like it is just doing an additional update compared to AMSGRAD, the difference between the updates is subtle. In the following analysis, we show that the interleaving actually leads to a certain cancellation in the regret bound." }, { "heading": "3.1 THEORETICAL ANALYSIS OF OPTIMISTIC-AMSGRAD", "text": "We provide the regret analysis here. To begin with, let us introduce some notation. We denote the Mahalanobis norm $\|\cdot\|_H := \sqrt{\langle \cdot, H \cdot \rangle}$ for some PSD matrix $H$. We let $\psi_t(x) := \langle x, \mathrm{diag}\{\hat{v}_t\}^{1/2} x \rangle$ for the PSD matrix $H_t^{1/2} := \mathrm{diag}\{\hat{v}_t\}^{1/2}$, where $\mathrm{diag}\{\hat{v}_t\}$ represents the diagonal matrix whose $i$th diagonal element is $\hat{v}_t[i]$ in Algorithm 2. We define its corresponding Mahalanobis norm $\|\cdot\|_{\psi_t} := \sqrt{\langle \cdot, \mathrm{diag}\{\hat{v}_t\}^{1/2} \cdot \rangle}$, where we slightly abuse the notation $\psi_t$ to represent the PSD matrix $H_t^{1/2} := \mathrm{diag}\{\hat{v}_t\}^{1/2}$. Consequently, $\psi_t(\cdot)$ is 1-strongly convex with respect to the norm $\|\cdot\|_{\psi_t}$. Namely, $\psi_t(\cdot)$ satisfies $\psi_t(u) \ge \psi_t(v) + \langle \nabla\psi_t(v), u-v \rangle + \frac{1}{2}\|u-v\|^2_{\psi_t}$ for any points $u, v$. A consequence of the 1-strong convexity of $\psi_t(\cdot)$ is that $B_{\psi_t}(u,v) \ge \frac{1}{2}\|u-v\|^2_{\psi_t}$, where the Bregman divergence $B_{\psi_t}(u,v)$ is defined as $B_{\psi_t}(u,v) := \psi_t(u) - \psi_t(v) - \langle \nabla\psi_t(v), u-v \rangle$ with $\psi_t(\cdot)$ as the distance generating function. We can also define the corresponding dual norm $\|\cdot\|_{\psi_t^*} := \sqrt{\langle \cdot, \mathrm{diag}\{\hat{v}_t\}^{-1/2} \cdot \rangle}$.

We prove the following result regarding the regret in the convex loss setting. The proof is available in Appendix B. For simplicity, we analyze the case when $\beta_1 = 0$. One might extend our analysis to the more general setting $\beta_1 \in [0, 1)$.

Theorem 1. Let $\beta_1 = 0$. Assume that $K$ has bounded diameter $D_\infty$.² Suppose that the learner incurs a sequence of convex loss functions $\{\ell_t(\cdot)\}$. OPTIMISTIC-AMSGRAD (Algorithm 2) has regret
$$\mathrm{Regret}_T \le \frac{1}{\eta_{\min}} D_\infty^2 \sum_{i=1}^d \hat{v}_T^{1/2}[i] + \frac{B_{\psi_1}(w^*, w_{1/2})}{\eta_1} + \sum_{t=1}^T \frac{\eta_t}{2}\|g_t - m_t\|^2_{\psi_{t-1}^*}, \quad (1)$$
where $g_t := \nabla \ell_t(w_t)$ and $\eta_{\min} := \min_t \eta_t$. The result holds for any benchmark $w^* \in K$ and any step size sequence $\{\eta_t\}$.

Corollary 1. Suppose that $v_t$ is always monotone increasing (i.e., $\hat{v}_t = v_t, \forall t$). Then,
$$\mathrm{Regret}_T \le \frac{1}{\eta_{\min}} D_\infty^2 \sum_{i=1}^d \Big\{(1-\beta_2)\sum_{s=1}^T \beta_2^{T-s}(g_s[i]-m_s[i])^2\Big\}^{1/2} + \frac{B_{\psi_1}(w^*, w_{1/2})}{\eta_1} + \sum_{t=1}^T \frac{\eta_t}{2}\|g_t - m_t\|^2_{\psi_{t-1}^*}. \quad (2)$$

We should compare the bound of (2)³ with that of AMSGRAD (Reddi et al. (2018)), which is
$$\mathrm{Regret}_T \le \frac{\sqrt{T}}{2\eta(1-\beta_1)} D_\infty^2 \sum_{i=1}^d \hat{v}_T[i]^{1/2} + D_\infty^2 \sum_{t=1}^T \sum_{i=1}^d \frac{\beta_1 \hat{v}_t[i]^{1/2}}{2\eta_t(1-\beta_1)} + \frac{\eta\sqrt{1+\log T}}{(1-\beta_1)^2(1-\gamma)\sqrt{1-\beta_2}} \sum_{i=1}^d \|g_{1:T}[i]\|_2, \quad (3)$$
where the result was obtained by setting the step size $\eta_t = \eta/\sqrt{t}$. Notice that $\hat{v}_t$ in (3) is the one in Algorithm 1 (AMSGRAD). For a fair comparison, let us set $\eta_t = \eta/\sqrt{t}$ in (2), so that $\eta_1 = \eta$ and $\eta_{\min} = \eta/\sqrt{T}$, and also let us set $\beta_1 = 0$ in (3), so that their parameters have the same values as ours in the analysis. By comparing the first terms in (2) and (3), we clearly see that if $g_t$ and $m_t$ are close, the first term in (2) would be smaller than $\frac{\sqrt{T}}{2\eta(1-\beta_1)} D_\infty^2 \sum_{i=1}^d \hat{v}_T[i]^{1/2}$ of (3).

² The boundedness assumption also appears in the previous works (Reddi et al. (2018); Kingma & Ba (2015)). It seems to be necessary in the regret analysis. If the boundedness assumption is lifted, then one might construct a scenario such that the benchmark is $w^* = \infty$ and the learner's regret is infinite.

³ The following conclusion holds in general for (1), when $v_t$ may not be monotone increasing. For brevity, we only consider the case that $\hat{v}_t = v_t$, as $\hat{v}_T$ has a clean expression in this case.

Now let us switch to the second terms in (2) and (3): we see that $\frac{B_{\psi_1}(w^*, w_{1/2})}{\eta_1} \simeq D_\infty$ in (2), while it is 0 in (3).
For the last term in (2), we have

$$\sum_{t=1}^{T} \frac{\eta_t}{2} \|g_t - m_t\|_{\psi_{t-1}^*}^2 = \sum_{t=1}^{T-1} \frac{\eta_t}{2} \|g_t - m_t\|_{\psi_{t-1}^*}^2 + \eta_T \sum_{i=1}^{d} \frac{(g_T[i] - m_T[i])^2}{\sqrt{v_{T-1}[i]}}$$
$$= \sum_{t=1}^{T-1} \frac{\eta_t}{2} \|g_t - m_t\|_{\psi_{t-1}^*}^2 + \eta \sum_{i=1}^{d} \frac{(g_T[i] - m_T[i])^2}{\sqrt{T\big((1-\beta_2)\sum_{s=1}^{T-1} \beta_2^{T-1-s}(g_s[i] - m_s[i])^2\big)}}$$
$$\leq \eta \sum_{i=1}^{d} \sum_{t=1}^{T} \frac{(g_t[i] - m_t[i])^2}{\sqrt{t\big((1-\beta_2)\sum_{s=1}^{t-1} \beta_2^{t-1-s}(g_s[i] - m_s[i])^2\big)}}.$$

To interpret the bound, let us make the rough approximation

$$\sum_{s=1}^{t-1} \beta_2^{t-1-s}(g_s[i] - m_s[i])^2 \simeq (g_t[i] - m_t[i])^2.$$

We can then further obtain the upper bound

$$\sum_{t=1}^{T} \frac{\eta_t}{2} \|g_t - m_t\|_{\psi_{t-1}^*}^2 \lesssim \frac{\eta}{\sqrt{1-\beta_2}} \sum_{i=1}^{d} \sum_{t=1}^{T} \frac{|g_t[i] - m_t[i]|}{\sqrt{t}} \leq \frac{\eta\sqrt{1+\log T}}{\sqrt{1-\beta_2}} \sum_{i=1}^{d} \|(g-m)_{1:T}[i]\|_2,$$

where the last inequality is due to Cauchy–Schwarz. The bound means that when $g_t$ and $m_t$ are sufficiently close, the last term of (2) is smaller than that of (3).

To conclude, as the second term of (2) (which is approximately $D_\infty$) is likely to be dominated by the other terms, the proposed algorithm improves over AMSGRAD when a good guess $m_t$ is available.

4 PREDICTING $m_t$

From the analysis in the previous section, we know that whether OPTIMISTIC-AMSGRAD converges faster than its counterpart depends on how $m_t$ is chosen. In OPTIMISTIC ONLINE LEARNING, $m_t$ is usually set to $m_t = g_{t-1}$, i.e., the previous gradient is used as a guess of the next one. This choice can accelerate the convergence to an equilibrium in some two-player zero-sum games (Rakhlin & Sridharan (2013a;b); Syrgkanis et al. (2015); Daskalakis et al. (2018)), in which each player uses an optimistic online learning algorithm against its opponent.

This paper is, however, about solving optimization problems rather than zero-sum games. We propose to use the extrapolation algorithm of Scieur et al. (2016). Extrapolation studies estimating the limit of a sequence using its last few iterates (Brezinski & Zaglia (2013)). Classical works include Anderson acceleration (Walker & Ni (2011)), minimal polynomial extrapolation (Cabay & Jackson (1976)), and reduced rank extrapolation (Eddy (1979)). These methods typically assume that the sequence $\{x_t\} \subset \mathbb{R}^d$ has the linear relation

$$x_t = A(x_{t-1} - x^*) + x^*, \qquad (4)$$

where $A \in \mathbb{R}^{d \times d}$ is an unknown, not necessarily symmetric, matrix. The goal is to find the fixed point $x^*$. Scieur et al. (2016) relax this assumption to a certain degree, assuming that the sequence $\{x_t\} \subset \mathbb{R}^d$ satisfies

$$x_t - x^* = A(x_{t-1} - x^*) + e_t, \qquad (5)$$

where $e_t$ is a second-order term satisfying $\|e_t\|_2 = O(\|x_{t-1} - x^*\|_2^2)$ and $A \in \mathbb{R}^{d \times d}$ is an unknown matrix. The extrapolation algorithm we use is shown in Algorithm 3. Theoretical guarantees on the distance between the output and $x^*$ are provided in (Scieur et al. (2016)).

Algorithm 3 REGULARIZED APPROXIMATE MINIMAL POLYNOMIAL EXTRAPOLATION (RMPE) (Scieur et al. (2016))
1: Input: sequence $\{x_s \in \mathbb{R}^d\}_{s=0}^{r}$, parameter $\lambda > 0$.
2: Compute the matrix $U = [x_1 - x_0, \dots, x_r - x_{r-1}] \in \mathbb{R}^{d \times r}$.
3: Obtain $z$ by solving $(U^\top U + \lambda I)z = \mathbb{1}$.
4: Get $c = z/(z^\top \mathbb{1})$.
5: Output: $\sum_{i=0}^{r-1} c_i x_i$, the approximation of the fixed point $x^*$.

For OPTIMISTIC-AMSGRAD, we use Algorithm 3 to get $m_{t+1}$. The following describes the procedure:

• Call Algorithm 3 with input a sequence of the past $r+1$ iterates, $\{w_t, w_{t-1}, w_{t-2}, \dots, w_{t-r}\}$, where $r$ is a parameter.
• Set $\hat{w}_{t+1} := \sum_{i=0}^{r-1} c_i w_{t-r+i}$ from the output of Algorithm 3.
• Output $m_{t+1} := \hat{\nabla} f(\hat{w}_{t+1})$.

That is, the latest $r+1$ iterates are the input to Algorithm 3 (a minimal sketch of this procedure follows).
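The sketch below implements Algorithm 3 and the gradient-guess procedure in NumPy; the regularization value and the gradient oracle `grad` are placeholders of our choosing, not values from the paper.

```python
import numpy as np

def rmpe(xs, lam=1e-10):
    """Algorithm 3 (RMPE): extrapolate the fixed point x* of a sequence.

    xs: iterates x_0, ..., x_r (each a 1-D array), i.e., r + 1 points.
    Returns sum_{i=0}^{r-1} c_i x_i, the approximation of x*.
    """
    X = np.stack(xs)                        # shape (r + 1, d)
    U = (X[1:] - X[:-1]).T                  # d x r matrix of increments (line 2)
    r = U.shape[1]
    z = np.linalg.solve(U.T @ U + lam * np.eye(r), np.ones(r))  # line 3
    c = z / z.sum()                         # line 4: weights summing to one
    return c @ X[:-1]                       # line 5

def guess_next_gradient(recent_ws, grad):
    """m_{t+1}: extrapolate w_hat_{t+1} from the last r + 1 iterates, then
    evaluate a mini-batch stochastic gradient there; `grad` is the caller's
    stochastic-gradient oracle."""
    return grad(rmpe(recent_ws))
```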
The prediction of the gradient $m_{t+1}$ is obtained by computing a mini-batch stochastic gradient at the output of Algorithm 3, namely at $\hat{w}_{t+1}$.

We would like to emphasize that the choice of algorithm for gradient prediction is certainly not unique; we propose to use this recent result among various related works. Indeed, one can use any method that provides a reasonable guess of the gradient at the next iteration.

Remark: Scieur et al. (2016) leverage their extrapolation algorithm to post-process the trajectory of gradient descent and obtain a point that is closer to an optimal point. In contrast, we use the extrapolation algorithm on the fly to accelerate the convergence of OPTIMISTIC-AMSGRAD." }, { "heading": "5 EXPERIMENTS", "text": "Datasets and neural nets: The experiments were conducted on the CIFAR10 and CIFAR100 datasets and on a noisy variant of the MNIST dataset (MNIST-back-image (Larochelle et al. (2007))⁴). We train Res-18 (He et al. (2016)) for the CIFAR10 and CIFAR100 datasets and a four-layer convolutional neural net⁵ for the noisy MNIST dataset.

In all the experiments described in the remainder of this section, we use the following hyperparameters:

• Step size η = 0.001.
• β₁ = 0.9, β₂ = 0.99.
• Number of training samples in each batch: batch size = 64.
• (OPTIMISTIC-AMSGRAD) Number of previous iterates stored for gradient prediction: r = 5 (i.e., the latest five iterates are stored for gradient prediction).

As these are classification tasks, we use the cross-entropy loss for training the neural nets. For training on CIFAR10 and CIFAR100, after getting the guess of the next iterate $\hat{w}_{t+1}$ by the extrapolation method, we construct the guess of the next gradient $m_{t+1}$ by computing a mini-batch stochastic gradient of the negative log-likelihood loss⁶ at $\hat{w}_{t+1}$ (instead of the gradient of the cross-entropy loss at $\hat{w}_{t+1}$). This slight modification leads to better performance.

⁴ MNIST-back-image takes random patches from a black-and-white image as noisy background. The dataset has 12,000 training samples and 50,000 test samples.
⁵ Specifically, we use the neural net model defined on the tutorial page https://github.com/pytorch/examples/blob/master/mnist/main.py.
⁶ Assume a K-class classification problem. Denoting by $\hat{y}_i \in \mathbb{R}^K$ the values on the output layer for a sample i with true label $y_i \in [K]$, its negative log-likelihood loss is defined as $-\hat{y}_i[y_i]$. See the function nll_loss in PyTorch for details.

Results: Figure 1 shows the results on CIFAR10 + Res-18, Figure 2 shows the results on CIFAR100 + Res-18, and Figure 3 shows the results on the MNIST-back-image dataset.⁷ From the results shown in the figures, it is clear that OPTIMISTIC-AMSGRAD noticeably improves AMSGRAD in terms of the standard performance measures: training (cross-entropy) loss, testing loss, training classification accuracy, and testing classification accuracy. All results are plotted against the number of training epochs. The results also suggest that OPTIMISTIC-AMSGRAD finds a point that generalizes better than the one found by AMSGRAD. In Appendix D.2, we report OPTIMISTIC-AMSGRAD with different values of the parameter r; we find that the algorithm's performance is not sensitive to the choice of r.

⁷ Note that our results on MNIST-back-image actually improve those reported in (Larochelle et al. (2007)), which did not use convolutional nets. The test accuracy is now comparable to that reported in (Li (2010)), which developed the second-order tree-split formulation for boosted trees. Also see (Li (2018)) for comparisons with new kernel methods.

Comparisons with related works. In Appendix A, we provide a comprehensive survey of the related works.
There has been a trend of studying adaptive optimization methods in various respects. We compare our contribution with some of the related works, in particular AO-FTRL (Mohri & Yang (2016)) and OPTIMISTIC-ADAM (Daskalakis et al. (2018)). Moreover, in Section D.1, we provide experimental results for the comparison with a modified version of OPTIMISTIC-ADAM." }, { "heading": "6 CONCLUSION", "text": "We propose OPTIMISTIC-AMSGRAD, which combines the ideas of optimistic online learning and AMSGRAD to accelerate optimization. For training deep neural networks, OPTIMISTIC-AMSGRAD significantly improves AMSGRAD in terms of various practical performance measures (e.g., training loss, testing loss, and classification accuracy on training/testing data). Though we only provide theoretical analysis in the convex setting, the experiments on non-convex optimization show promising results. The results seem to suggest that OPTIMISTIC-AMSGRAD not only minimizes the training loss faster but can also find a point that generalizes better than the baselines. As the success of OPTIMISTIC-AMSGRAD relies on a good guess of the next gradient, future work includes improving the prediction of gradients. Exploring the possibility of developing a new way to obtain a better guess of the gradient would be an interesting direction. One possibility is to consider the very recent work of (Dutta et al. (2019)), which proposes a new extrapolation algorithm." }, { "heading": "A COMPARISON TO RELATED WORKS", "text": "A.1 COMPARISON TO SOME NON-CONVEX OPTIMIZATION WORKS

Recently, Zaheer et al. (2018); Chen et al. (2019a); Ward et al. (2019); Zhou et al. (2018); Zou & Shen (2018); Li & Orabona (2019) provided theoretical analyses of ADAM-type algorithms applied to smooth non-convex optimization problems. For example, Chen et al. (2019a) provide the bound $\min_{t\in[T]} \mathbb{E}[\|\nabla f(w_t)\|^2] = O(\log T/\sqrt{T})$. Yet, this data-independent bound does not show an advantage over standard stochastic gradient descent. Similar concerns apply to the other papers.

To obtain an adaptive data-dependent bound (e.g., bounds like (2) or (3) that are in terms of the gradient norms observed along the trajectory) when applying OPTIMISTIC-AMSGRAD to non-convex optimization, one can follow the approach of (Agarwal et al. (2019)) or (Chen et al. (2019b)). They provide ways to convert algorithms with adaptive data-dependent regret bounds for convex loss functions (e.g., ADAGRAD) into ones that can find an approximate stationary point of non-convex loss functions. Their approaches are modular, so simply using OPTIMISTIC-AMSGRAD as the base algorithm in their methods immediately leads to a variant of OPTIMISTIC-AMSGRAD that enjoys guarantees for non-convex optimization. This variant can outperform the ones instantiated with other ADAM-type algorithms when the gradient prediction $m_t$ is close to $g_t$. We omit the details since this is a straightforward application.

A.2 COMPARISON TO MOHRI & YANG (2016)

Mohri & Yang (2016) propose AO-FTRL, which has an update of the form $w_{t+1} = \arg\min_{w \in \mathcal{K}} \big(\sum_{s=1}^{t} g_s\big)^\top w + m_{t+1}^\top w + r_{0:t}(w)$, where $r_{0:t}(\cdot)$ is a 1-strongly convex function with respect to some norm $\|\cdot\|_{(t)}$ that may differ across iterations $t$. A data-dependent regret bound was provided in the paper, namely $r_{0:T}(w^*) + \sum_{t=1}^{T} \|g_t - m_t\|_{(t)^*}$ for any benchmark $w^* \in \mathcal{K}$.
We see that if one selects $r_{0:t}(w) := \langle w, \mathrm{diag}\{\hat{v}_t\}^{1/2} w \rangle$ and $\|\cdot\|_{(t)} := \sqrt{\langle \cdot, \mathrm{diag}\{\hat{v}_t\}^{1/2} \cdot \rangle}$, then the update might be viewed as an optimistic variant of ADAGRAD. However, no experiments were provided in (Mohri & Yang (2016)).

A.3 COMPARISON TO OPTIMISTIC-ADAM OF (DASKALAKIS ET AL. (2018))

We are aware that Daskalakis et al. (2018) proposed a version of an optimistic algorithm for ADAM, called OPTIMISTIC-ADAM in their paper. We want to emphasize that the goals are different: OPTIMISTIC-ADAM is designed to optimize two-player games (e.g., GANs (Goodfellow et al. (2014))), while the algorithm proposed in this paper is designed to accelerate optimization (e.g., solving empirical risk minimization quickly). Daskalakis et al. (2018) focused on training GANs (Goodfellow et al. (2014)), which are two-player zero-sum games. There have been related works in OPTIMISTIC ONLINE LEARNING, such as (Chiang et al. (2012); Rakhlin & Sridharan (2013a;b); Syrgkanis et al. (2015)), showing that if both players use some kind of OPTIMISTIC update, then accelerating the convergence to the equilibrium of the game is possible. Daskalakis et al. (2018) were inspired by these related works and showed that OPTIMISTIC-MIRROR-DESCENT can avoid the cycling behavior in a bilinear zero-sum game, which accelerates convergence. Furthermore, Daskalakis et al. (2018) did not provide theoretical analysis of OPTIMISTIC-ADAM, while we give an analysis of the proposed algorithm.

Algorithm 4 OPTIMISTIC-ADAM (Daskalakis et al. (2018))
1: Required: parameters $\beta_1$, $\beta_2$, and $\eta_t$.
2: Init: $w_1 \in \mathcal{K}$.
3: for $t = 1$ to $T$ do
4:   Get mini-batch stochastic gradient vector $g_t \in \mathbb{R}^d$ at $w_t$.
5:   $\theta_t = \beta_1 \theta_{t-1} + (1-\beta_1) g_t$.
6:   $v_t = \beta_2 v_{t-1} + (1-\beta_2) g_t^2$.
7:   $w_{t+1} = \Pi_{\mathcal{K}}\big[ w_t - 2\eta_t \frac{\theta_t}{\sqrt{v_t}} + \eta_t \frac{\theta_{t-1}}{\sqrt{v_{t-1}}} \big]$.
8: end for

For comparison, we replicate OPTIMISTIC-ADAM in Algorithm 4. OPTIMISTIC-ADAM uses the previous gradient as the guess of the next gradient. Yet, its update cannot be written in the same form as that of OPTIMISTIC-AMSGRAD (and vice versa): OPTIMISTIC-AMSGRAD (Algorithm 2) actually uses two interleaving sequences of updates, $\{w_t\}_{t=1}^{T}$ and $\{w_{t-\frac{1}{2}}\}_{t=1}^{T}$. The design and motivation of the two algorithms are different.
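For a concrete contrast with Algorithm 2, here is a minimal NumPy sketch of one unconstrained step of Algorithm 4; the small eps added for numerical safety is our addition and is not part of the algorithm as stated.

```python
import numpy as np

def optimistic_adam_step(state, w, g, lr, beta1=0.9, beta2=0.99, eps=1e-8):
    """One unconstrained step of Algorithm 4 (Optimistic-Adam).

    The previous gradient implicitly serves as the guess of the next one:
    the new momentum step is applied twice and the previous one retracted.
    Initialize state with theta = 0 and v = 1.
    """
    theta_prev, v_prev = state["theta"], state["v"]
    theta = beta1 * theta_prev + (1 - beta1) * g          # line 5
    v = beta2 * v_prev + (1 - beta2) * g ** 2             # line 6
    w_new = (w - 2 * lr * theta / (np.sqrt(v) + eps)      # line 7
             + lr * theta_prev / (np.sqrt(v_prev) + eps))
    state.update(theta=theta, v=v)
    return w_new
```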
A.4 OTHER WORKS ABOUT ADAPTIVE GRADIENT METHODS

There has been a spate of research on improving adaptive gradient methods in various respects. Anil et al. (2019) develop a method to reduce the memory overhead of adaptive gradient methods like ADAGRAD and ADAM. Zhou et al. (2019) propose decorrelating the second-moment term $v_t$ and the gradient $g_t$ by temporal shifting to deal with the non-convergence issue of ADAM. Luo et al. (2019) show that an extreme effective learning rate might occur during the execution of ADAM, which can cause non-convergence; they propose an operation that clips the effective learning rate and thereby avoids extreme learning rates. Gupta et al. (2018) propose a new adaptive gradient method based on designing a preconditioning matrix for the update. Liu et al. (2019) study a heuristic called learning-rate "warm-up" and propose a new variant of ADAM that includes a variance-rectification term. Becigneul & Ganea (2019) propose a counterpart of ADAM for Riemannian manifolds. Other directions include improving the generalization of adaptive gradient methods (e.g., Loshchilov & Hutter (2019); Chen & Gu (2018); Keskar & Socher (2017); Luo et al. (2019)), comparing adaptive gradient methods with standard SGD with momentum (e.g., Wilson et al. (2017); Loshchilov & Hutter (2019)), and showing that an adaptive optimization method can escape saddle points (Staib et al. (2019))." }, { "heading": "B PROOF OF THEOREM 1", "text": "We provide the regret analysis here. To begin with, let us introduce some notation. We denote the Mahalanobis norm $\|\cdot\|_H := \sqrt{\langle \cdot, H \cdot \rangle}$ for a PSD matrix $H$. We let $\psi_t(x) := \langle x, \mathrm{diag}\{\hat{v}_t\}^{1/2} x \rangle$ for the PSD matrix $H_t^{1/2} := \mathrm{diag}\{\hat{v}_t\}^{1/2}$, where $\mathrm{diag}\{\hat{v}_t\}$ represents the diagonal matrix whose $i$-th diagonal element is $\hat{v}_t[i]$ in Algorithm 2. We define the corresponding Mahalanobis norm $\|\cdot\|_{\psi_t} := \sqrt{\langle \cdot, \mathrm{diag}\{\hat{v}_t\}^{1/2} \cdot \rangle}$, where we abuse the notation $\psi_t$ to represent the PSD matrix $H_t^{1/2}$. Consequently, $\psi_t(\cdot)$ is 1-strongly convex with respect to the norm $\|\cdot\|_{\psi_t}$; namely, $\psi_t(u) \geq \psi_t(v) + \langle \nabla\psi_t(v), u - v \rangle + \frac{1}{2}\|u - v\|_{\psi_t}^2$ for any points $u, v$. A consequence of the 1-strong convexity of $\psi_t(\cdot)$ is that $B_{\psi_t}(u, v) \geq \frac{1}{2}\|u - v\|_{\psi_t}^2$, where the Bregman divergence is defined as $B_{\psi_t}(u, v) := \psi_t(u) - \psi_t(v) - \langle \nabla\psi_t(v), u - v \rangle$ and $\psi_t(\cdot)$ serves as the distance-generating function. We can also define the corresponding dual norm $\|\cdot\|_{\psi_t^*} := \sqrt{\langle \cdot, \mathrm{diag}\{\hat{v}_t\}^{-1/2} \cdot \rangle}$.

Proof. [of Theorem 1] By regret decomposition, we have

$$\mathrm{Regret}_T := \sum_{t=1}^{T} \ell_t(w_t) - \min_{w \in \mathcal{K}} \sum_{t=1}^{T} \ell_t(w) \leq \sum_{t=1}^{T} \langle w_t - w^*, \nabla\ell_t(w_t) \rangle = \sum_{t=1}^{T} \langle w_t - w_{t+\frac{1}{2}}, g_t - m_t \rangle + \langle w_t - w_{t+\frac{1}{2}}, m_t \rangle + \langle w_{t+\frac{1}{2}} - w^*, g_t \rangle, \qquad (6)$$

where we denote $g_t := \nabla\ell_t(w_t)$. Recall the notation $\psi_t(x)$ and the Bregman divergence $B_{\psi_t}(u, v)$ defined at the beginning of this section. For $\beta_1 = 0$, we can rewrite the update on line 8 of Algorithm 2 as

$$w_{t+\frac{1}{2}} = \arg\min_{w \in \mathcal{K}} \; \eta_t \langle w, g_t \rangle + B_{\psi_t}(w, w_{t-\frac{1}{2}}), \qquad (7)$$

and the update on line 9 of Algorithm 2 as

$$w_{t+1} = \arg\min_{w \in \mathcal{K}} \; \eta_{t+1} \langle w, m_{t+1} \rangle + B_{\psi_t}(w, w_{t+\frac{1}{2}}). \qquad (8)$$

We now exploit a useful inequality (which appears, e.g., in Tseng (2008)): for any update of the form $\hat{w} = \arg\min_{w \in \mathcal{K}} \langle w, \theta \rangle + B_{\psi}(w, v)$, it holds that

$$\langle \hat{w} - u, \theta \rangle \leq B_{\psi}(u, v) - B_{\psi}(u, \hat{w}) - B_{\psi}(\hat{w}, v) \qquad (9)$$

for any $u \in \mathcal{K}$. Using (9) for (8), we have

$$\langle w_t - w_{t+\frac{1}{2}}, m_t \rangle \leq \frac{1}{\eta_t} \big( B_{\psi_{t-1}}(w_{t+\frac{1}{2}}, w_{t-\frac{1}{2}}) - B_{\psi_{t-1}}(w_{t+\frac{1}{2}}, w_t) - B_{\psi_{t-1}}(w_t, w_{t-\frac{1}{2}}) \big), \qquad (10)$$

and, using (9) for (7), we have

$$\langle w_{t+\frac{1}{2}} - w^*, g_t \rangle \leq \frac{1}{\eta_t} \big( B_{\psi_t}(w^*, w_{t-\frac{1}{2}}) - B_{\psi_t}(w^*, w_{t+\frac{1}{2}}) - B_{\psi_t}(w_{t+\frac{1}{2}}, w_{t-\frac{1}{2}}) \big). \qquad (11)$$

So, by (6), (10), and (11), we obtain

$$\mathrm{Regret}_T \leq \sum_{t=1}^{T} \Big[ \|w_t - w_{t+\frac{1}{2}}\|_{\psi_{t-1}} \|g_t - m_t\|_{\psi_{t-1}^*} + \frac{1}{\eta_t} \big( B_{\psi_{t-1}}(w_{t+\frac{1}{2}}, w_{t-\frac{1}{2}}) - B_{\psi_{t-1}}(w_{t+\frac{1}{2}}, w_t) - B_{\psi_{t-1}}(w_t, w_{t-\frac{1}{2}}) + B_{\psi_t}(w^*, w_{t-\frac{1}{2}}) - B_{\psi_t}(w^*, w_{t+\frac{1}{2}}) - B_{\psi_t}(w_{t+\frac{1}{2}}, w_{t-\frac{1}{2}}) \big) \Big], \qquad (12)$$

which is further bounded by

$$\overset{(a)}{\leq} \sum_{t=1}^{T} \Big[ \frac{1}{2\eta_t} \|w_t - w_{t+\frac{1}{2}}\|_{\psi_{t-1}}^2 + \frac{\eta_t}{2} \|g_t - m_t\|_{\psi_{t-1}^*}^2 + \frac{1}{\eta_t} \big( B_{\psi_{t-1}}(w_{t+\frac{1}{2}}, w_{t-\frac{1}{2}}) - \tfrac{1}{2}\|w_{t+\frac{1}{2}} - w_t\|_{\psi_{t-1}}^2 - B_{\psi_{t-1}}(w_t, w_{t-\frac{1}{2}}) + B_{\psi_t}(w^*, w_{t-\frac{1}{2}}) - B_{\psi_t}(w^*, w_{t+\frac{1}{2}}) - B_{\psi_t}(w_{t+\frac{1}{2}}, w_{t-\frac{1}{2}}) \big) \Big]$$
$$\leq \sum_{t=1}^{T} \Big[ \frac{\eta_t}{2} \|g_t - m_t\|_{\psi_{t-1}^*}^2 + \frac{1}{\eta_t} \big( B_{\psi_t}(w^*, w_{t-\frac{1}{2}}) - B_{\psi_t}(w^*, w_{t+\frac{1}{2}}) + B_{\psi_{t-1}}(w_{t+\frac{1}{2}}, w_{t-\frac{1}{2}}) - B_{\psi_t}(w_{t+\frac{1}{2}}, w_{t-\frac{1}{2}}) \big) \Big], \qquad (13)$$

where (a) holds because $\|w_t - w_{t+\frac{1}{2}}\|_{\psi_{t-1}} \|g_t - m_t\|_{\psi_{t-1}^*} = \inf_{\beta > 0} \frac{1}{2\beta}\|w_t - w_{t+\frac{1}{2}}\|_{\psi_{t-1}}^2 + \frac{\beta}{2}\|g_t - m_t\|_{\psi_{t-1}^*}^2$ by Young's inequality, and because $\psi_{t-1}(\cdot)$ is 1-strongly convex with respect to $\|\cdot\|_{\psi_{t-1}}$.

To proceed, notice that

$$B_{\psi_{t+1}}(w^*, w_{t+\frac{1}{2}}) - B_{\psi_t}(w^*, w_{t+\frac{1}{2}}) = \langle w^* - w_{t+\frac{1}{2}}, \mathrm{diag}(\hat{v}_{t+1}^{1/2} - \hat{v}_t^{1/2})(w^* - w_{t+\frac{1}{2}}) \rangle \leq \Big( \max_i \big(w^*[i] - w_{t+\frac{1}{2}}[i]\big)^2 \Big) \cdot \Big( \sum_{i=1}^{d} \hat{v}_{t+1}^{1/2}[i] - \hat{v}_t^{1/2}[i] \Big) \qquad (14)$$

and

$$B_{\psi_{t-1}}(w_{t+\frac{1}{2}}, w_{t-\frac{1}{2}}) - B_{\psi_t}(w_{t+\frac{1}{2}}, w_{t-\frac{1}{2}}) = \langle w_{t+\frac{1}{2}} - w_{t-\frac{1}{2}}, \mathrm{diag}(\hat{v}_{t-1}^{1/2} - \hat{v}_t^{1/2})(w_{t+\frac{1}{2}} - w_{t-\frac{1}{2}}) \rangle \leq 0, \qquad (15)$$

as the sequence $\{\hat{v}_t\}$ is non-decreasing.
Therefore,

$$\mathrm{Regret}_T \overset{(13),(14),(15)}{\leq} \frac{D_\infty^2}{\eta_{\min}} \sum_{i=1}^{d} \hat{v}_T^{1/2}[i] + \frac{B_{\psi_1}(w^*, w_{1/2})}{\eta_1} + \sum_{t=1}^{T} \frac{\eta_t}{2} \|g_t - m_t\|_{\psi_{t-1}^*}^2.$$
" }, { "heading": "C DISCUSSION OF ITERATION COST OF OPTIMISTIC-AMSGRAD", "text": "We observe that the iteration cost (i.e., the actual running time per iteration) of our implementation of OPTIMISTIC-AMSGRAD is roughly two times that of standard AMSGRAD in the empirical risk minimization task. Here, we report a breakdown of the computational overhead, which mostly comes from the extrapolation step. Specifically, the extrapolation step consists of: (a) constructing the linear system ($U^\top U$) — the cost of this step can be optimized and reduced to $r \times d$, since the matrix $U$ changes only one column at a time; (b) solving the linear system — the cost is $O(r^3)$, which is negligible as the linear system is very small (5-by-5 if $r = 5$); (c) outputting an estimated gradient as a weighted average of previous gradients — the cost of this step is $r \times d$. So, the computational overhead is $2rd + r^3$. We also note that steps (a) and (c) are parallelizable.

Memory usage: Our algorithm needs to store the past $r$ gradients to obtain an estimated gradient. Though this may seem quite demanding compared to standard AMSGRAD, it is relatively cheap compared to natural gradient methods (e.g., Martens & Grosse (2015)), which need to store a matrix inverse." }, { "heading": "D MORE EXPERIMENTS", "text": "D.1 A COMPARISON WITH MODIFIED OPTIMISTIC-ADAM (DASKALAKIS ET AL. (2018))

Algorithm 5 OPTIMISTIC-ADAM+$\hat{v}_t$
1: Required: parameters $\beta_1$, $\beta_2$, and $\eta_t$.
2: Init: $w_1 \in \mathcal{K}$ and $\hat{v}_0 = v_0 = \mathbb{1} \in \mathbb{R}^d$.
3: for $t = 1$ to $T$ do
4:   Get mini-batch stochastic gradient vector $g_t \in \mathbb{R}^d$ at $w_t$.
5:   $\theta_t = \beta_1 \theta_{t-1} + (1-\beta_1) g_t$.
6:   $v_t = \beta_2 v_{t-1} + (1-\beta_2) g_t^2$.
7:   $\hat{v}_t = \max(\hat{v}_{t-1}, v_t)$.
8:   $w_{t+1} = \Pi_{\mathcal{K}}\big[ w_t - 2\eta_t \frac{\theta_t}{\sqrt{\hat{v}_t}} + \eta_t \frac{\theta_{t-1}}{\sqrt{\hat{v}_{t-1}}} \big]$.
9: end for

Here we also compare OPTIMISTIC-AMSGRAD with another baseline, which we call OPTIMISTIC-ADAM+$\hat{v}_t$, shown in Algorithm 5. OPTIMISTIC-ADAM+$\hat{v}_t$ is OPTIMISTIC-ADAM (Algorithm 4) of (Daskalakis et al. (2018)) with the additional max operation $\hat{v}_t = \max(\hat{v}_{t-1}, v_t)$, which guarantees that the weighted second moment is monotone increasing. Figures 4, 5, and 6 show the results. We observe that our method dominates the other two methods.

D.2 CHOICE OF DIFFERENT r VALUES

Recall that, in addition to the step size $\eta$, our proposed algorithm has the parameter $r$, which governs the use of past information. Figures 7, 8, and 9 compare the performance under different values of $r$ ($r = 3, 5, 10$). From the results, we see that the choice of $r$ does not have a significant impact on learning performance. Taking into consideration both the quality of the gradient prediction and computational issues, $r = 5$ appears to be a good choice, although the results for $r = 3$ do not differ much." } ]
2019
null
SP:8ff1115adfd50e2c1512534ec8b90f91e0c0c331
[ "This paper presents experimental evidence that learning with privacy requires approaches that are not identical to those used when learning without privacy. These approaches include re-considering different model choices (i.e., its structure and activation functions), its initialization, and its optimization procedure. With these changes, they show that it is possible to obtain state-of-the-art results for some canonical learning tasks.", "The paper methodically analyses the settings and choices used when training neural networks (specifically CNNs) via the DP-SGD algorithm and suggests changes to the standard procedures that empirically lead to higher accuracies despite the added noise. The main statement of the paper is quite simple: optimize hyperparameters for the model that you're training (DP-SGD) rather than the model it is inspired by. Yet, the findings an recommendations may be useful for practitioners." ]
Because learning sometimes involves sensitive data, standard machine-learning algorithms have been extended to offer strong privacy guarantees for training data. However, in practice, this has been mostly an afterthought, with privacy-preserving models obtained by re-running training with a different optimizer, but using the same model architecture that performed well in a non-privacy-preserving setting. This approach leads to less than ideal privacy/utility tradeoffs, as we show here. Instead, we propose that model architectures and initializations are chosen and hyperparameter tuning is performed, ab initio, explicitly for privacy-preserving training. Using this paradigm, we achieve new state-of-the-art accuracy on MNIST, FashionMNIST, and CIFAR10 without any modification of the fundamental learning procedures or differential-privacy analysis.
[]
[ { "authors": [ "Martin Abadi", "Andy Chu", "Ian Goodfellow", "H Brendan McMahan", "Ilya Mironov", "Kunal Talwar", "Li Zhang" ], "title": "Deep learning with differential privacy", "venue": "In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security,", "year": 2016 }, { "authors": [ "Brendan Avent", "Javier Gonzalez", "Tom Diethe", "Andrei Paleyes", "Borja Balle" ], "title": "Automatic discovery of privacy-utility pareto fronts", "venue": "arXiv preprint arXiv:1905.10862,", "year": 2019 }, { "authors": [ "Eugene Bagdasaryan", "Vitaly Shmatikov" ], "title": "Differential privacy has disparate impact on model accuracy, 2019", "venue": null, "year": 2019 }, { "authors": [ "Raef Bassily", "Adam Smith", "Abhradeep Thakurta" ], "title": "Private empirical risk minimization: Efficient algorithms and tight error bounds", "venue": "In 2014 IEEE 55th Annual Symposium on Foundations of Computer Science,", "year": 2014 }, { "authors": [ "Raef Bassily", "Abhradeep Guha Thakurta", "Om Dipakbhai Thakkar" ], "title": "Model-agnostic private learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Nicholas Carlini", "Chang Liu", "Úlfar Erlingsson", "Jernej Kos", "Dawn Song" ], "title": "The secret sharer: Evaluating and testing unintended memorization in neural networks", "venue": "In USENIX Security Symposium,", "year": 2019 }, { "authors": [ "Kamalika Chaudhuri", "Claire Monteleoni", "Anand D Sarwate" ], "title": "Differentially private empirical risk minimization", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "C. Dwork", "A. Roth" ], "title": "The Algorithmic Foundations of Differential Privacy", "venue": null, "year": 2014 }, { "authors": [ "Cynthia Dwork", "Vitaly Feldman", "Moritz Hardt", "Toniann Pitassi", "Omer Reingold", "Aaron Roth" ], "title": "The reusable holdout: Preserving validity in adaptive data analysis", "venue": null, "year": 2015 }, { "authors": [ "Vitaly Feldman" ], "title": "Does learning require memorization? 
A short tale about a long tail", "venue": "arXiv preprint arXiv:1906.05271,", "year": 2019 }, { "authors": [ "Vitaly Feldman", "Ilya Mironov", "Kunal Talwar", "Abhradeep Thakurta" ], "title": "Privacy amplification by iteration", "venue": "IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS),", "year": 2018 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification", "venue": "In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Prateek Jain", "Abhradeep Guha Thakurta" ], "title": "(Near) dimension independent risk bounds for differentially private learning", "venue": "In International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Igor Kononenko" ], "title": "Machine learning for medical diagnosis: history, state of the art and perspective", "venue": "Artificial Intelligence in medicine,", "year": 2001 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Behnam Neyshabur", "Srinadh Bhojanapalli", "David McAllester", "Nati Srebro" ], "title": "Exploring generalization in deep learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Nicolas Papernot", "Shuang Song", "Ilya Mironov", "Ananth Raghunathan", "Kunal Talwar", "Úlfar Erlingsson" ], "title": "Scalable private learning with PATE, 2018", "venue": null, "year": 2018 }, { "authors": [ "Lorien Y Pratt", "Jack Mostow", "Candace A Kamm", "Ace A Kamm" ], "title": "Direct transfer of learned information among neural networks", "venue": "In AAAI,", "year": 1991 }, { "authors": [ "Maithra Raghu", "Chiyuan Zhang", "Jon Kleinberg", "Samy Bengio" ], "title": "Transfusion: Understanding transfer learning for medical imaging, 2019", "venue": null, "year": 2019 }, { "authors": [ "Reza Shokri", "Marco Stronati", "Congzheng Song", "Vitaly Shmatikov" ], "title": "Membership inference attacks against machine learning models", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Congzheng Song", "Vitaly Shmatikov" ], "title": "Auditing data provenance in text-generation models", "venue": null, "year": 2019 }, { "authors": [ "Shuang Song", "Kamalika Chaudhuri", "Anand D Sarwate" ], "title": "Stochastic gradient descent with differentially private updates", "venue": "IEEE Global Conference on Signal and Information Processing,", "year": 2013 }, { "authors": [ "Kunal Talwar", "Abhradeep Thakurta", "Li Zhang" ], "title": "Private empirical risk minimization beyond the worst case: The effect of the constraint set geometry", "venue": "arXiv preprint arXiv:1411.5417,", "year": 2014 }, { 
"authors": [ "Ashia C Wilson", "Rebecca Roelofs", "Mitchell Stern", "Nati Srebro", "Benjamin Recht" ], "title": "The marginal value of adaptive gradient methods in machine learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms, 2017", "venue": null, "year": 2017 }, { "authors": [ "LeCun Yann", "Cortes Corinna", "J Christopher" ], "title": "The mnist database of handwritten digits", "venue": "URL http://yhann. lecun. com/exdb/mnist,", "year": 1998 } ]
[ { "heading": "1 INTRODUCTION", "text": "Machine learning (ML) can be usefully applied to the analysis of sensitive data, e.g., in the domain of healthcare (Kononenko, 2001). However, ML models may unintentionally reveal sensitive aspects of their training data, e.g., due to overfitting (Shokri et al., 2017; Song & Shmatikov, 2019). To counter this, ML techniques that offer strong privacy guarantees have been developed. Notably, the differentially private stochastic gradient descent, or DP-SGD, of Abadi et al. (2016) is an easyto-use, generally-applicable modification of stochastic gradient descent. In addition to its rigorous privacy guarantees, it has been empirically shown to stop the leaking of secrets (Carlini et al., 2019).\nTo strictly bound the impact of any training example, DP-SGD makes two changes to every gradient step: first, each example’s gradient contribution is limited to a fixed bound (in practice, by clipping all per-example gradients to a maximum `2 norm); second, random (Gaussian) noise of the scale of the clipping norm is added to each batch’s combined gradient, before it is backpropagated to update model parameters. Together, these changes create a new, artificial noise floor at each step of gradient descent, such that the unique signal of any individual example is below this new noise floor; this allows differential privacy to be guaranteed for all training examples (Dwork & Roth, 2014).\nTraining using DP-SGD is eminently practical and in addition to privacy offers advantages such as strong generalization and the promise of reusable holdouts (Google, 2019; Dwork et al., 2015). Unfortunately, its advantages have not been without cost: empirically, the test accuracy of differentially private ML is consistently lower than that of non-private learning (e.g., see Papernot et al. (2018)). Such accuracy loss may sometimes be inevitable: for example, the task may involve heavy-tailed distributions and adding noise will definitely hinder visibility of examples in the tails (Feldman, 2019; Bagdasaryan & Shmatikov, 2019). However, this does not explain the accuracy loss of differentially private learning on standard benchmark tasks that are known to be relatively simple: MNIST (Yann et al., 1998), FashionMNIST (Xiao et al., 2017), CIFAR10 (Krizhevsky et al., 2009), etc.\nThis paper presents several new results for privacy-preserving learning that improve the state-ofthe-art in terms of both privacy and accuracy. Significantly, these new results stem from a single, simple observation: differentially-private learning with DP-SGD is different enough that all aspects of learning—model architecture, parameter initialization, and optimization strategy, as well as hyperparameter tuning—must be reconsidered. To achieve the best privacy/accuracy tradeoffs, we must tune our learning strategies to the specifics of privacy-preserving learning; i.e., we must “learn to learn” with privacy. Conversely, we concretely demonstrate how the architecture, initialization,\nand optimization strategy that gives the best accuracy for non-private learning can be a poor fit for learning with privacy. Instead, by revisiting our choices, we can reduce the information loss induced by clipping, limit the impact of added noise, and improve the utility of each gradient step when learning with privacy. 
Our contributions facilitate DP-SGD learning as follows:

• We show how simple architecture changes, such as the use of tanh instead of ReLU activations, can improve a model's private-learning suitability and achievable privacy/accuracy tradeoffs, by eliminating the negative effects of clipping and noising large gradients.
• We explain how high-capacity models can be disadvantageous, as well as the advantages of models with a final, fully-connected layer that can be independently fine-tuned, and how both help address the curse of dimensionality and high-dimensional noise.
• We demonstrate the importance of finding good initializations, and show how this can be done with privacy using either transfer learning or weight scaling (Raghu et al., 2019).
• We show that better tradeoffs and increased wall-clock learning speeds can be achieved by tuning hyperparameters and choosing optimizers directly for DP-SGD learning.

By applying the above, we advance the state of the art for MNIST, FashionMNIST, and CIFAR10, significantly improving upon the privacy/accuracy tradeoffs from prior work. On MNIST, we achieve 98.1% test accuracy for a privacy guarantee of (ε, δ) = (2.93, 10⁻⁵), whereas the previous state-of-the-art reported in the TensorFlow Privacy library (Google, 2019) was 96.6%. On CIFAR10, we achieve 72% test accuracy at (ε, δ) = (2.1, 10⁻⁵) in a setup for which, to the best of our knowledge, the previous state-of-the-art was achieved by Abadi et al. (2016) at 67% accuracy." }, { "heading": "2 TRAINING-DATA MEMORIZATION, DIFFERENTIAL PRIVACY, AND DP-SGD", "text": "Machine-learning models will easily memorize whatever sensitive, personal, or private data that was used in their training, and models may in practice disclose this data—as demonstrated by the attacks of Shokri et al. (2017), Song & Shmatikov (2019), and Carlini et al. (2019).

For reasoning about the privacy guarantees of algorithms such as training by stochastic gradient descent, differential privacy has become the established gold standard (Dwork & Roth, 2014). Informally, an algorithm can be differentially private if it will always produce effectively the same output (in a mathematically precise sense), when applied to two input datasets that differ by only one record. Formally, a learning algorithm $\mathcal{A}$ that trains models from the set $S$ is (ε, δ)-differentially private, if the following holds for all training datasets $d$ and $d'$ that differ by exactly one record:

$$\Pr[\mathcal{A}(d) \in S] \leq e^{\varepsilon} \Pr[\mathcal{A}(d') \in S] + \delta \qquad (1)$$

Here, ε gives the formal privacy guarantee, by placing a strong upper bound on any privacy loss, even in the worst possible case. A lower ε indicates a stronger privacy guarantee or a tighter upper bound (Erlingsson et al., 2019). The factor δ allows for some probability that the property may not hold (in practice, this δ is required to be very small, e.g., in inverse proportion to the dataset size).

A very attractive property of differential-privacy guarantees is that they hold true for all attackers—whatever they are probing and whatever their prior knowledge—and that they remain true under various forms of composition. In particular, the output of a differentially-private algorithm can be arbitrarily post processed, without any weakening of the guarantees.
Also, if sensitive training data contains multiple examples from the same person (or, more generally, the same sensitive group), ε-differentially-private training on this data will result in a model with a kε-differential-privacy guarantee for each person, as long as at most k training-data records are present per person.

Abadi et al. (2016) introduced DP-SGD as a method for training deep neural networks with differential-privacy guarantees that was able to achieve better privacy and utility than previous efforts (Chaudhuri et al., 2011; Song et al., 2013; Bassily et al., 2014). DP-SGD bounds the sensitivity of the learning process to each individual training example by computing per-example gradients $\{g_i\}_{i \in 0..n-1}$ with respect to the loss, for the $n$ model parameters $\{\theta_i\}_{i \in 0..n-1}$, and clipping each per-example gradient to a maximum fixed ℓ2 norm C. Subsequently, to the average of these per-example gradients, DP-SGD adds (Gaussian) noise whose standard deviation σ is proportional to this sensitivity. In this work, we use the canonical implementation of DP-SGD and its associated analysis that has been made available through the TensorFlow Privacy library (Google, 2019)." }, { "heading": "3 MODEL ARCHITECTURES BETTER SUITED TO LEARNING WITH PRIVACY", "text": "We show here that learning with differential privacy imposes additional constraints that need to be taken into account when designing neural network architectures. They help us control the sensitivity of learning to training examples before the clipping operation is performed in DP-SGD, thus reducing the potential negative impact of clipping on the estimated gradient direction." }, { "heading": "3.1 MODEL CAPACITY", "text": "The success of neural networks is in part explained by their ability to scale to complex tasks through an increase in model capacity. ResNets are an illustrative recent example (He et al., 2016). Here, we explain how additional capacity may not be beneficial when learning with privacy. One of the major challenges in training models with differential privacy is the curse of dimensionality (Bassily et al., 2014). The accuracy of privately trained models typically degrades with the increase in the number of dimensions. Unfortunately, strong lower bounds suggest that this dependence on dimensionality is necessary (Bassily et al., 2014).

Consider the convolutional architecture described in Table 1. With all other architectural details being fixed, we can control the model's capacity by varying the number of filters k in its two convolutional layers. While the relationship between generalization performance and the number of parameters is not always monotonic (Neyshabur et al., 2017), we leave as future work a study of how different measures of capacity can inform the design of model architectures for private learning. We report the model's accuracy when trained with SGD and DP-SGD in Figure 1, both on MNIST (left) and FashionMNIST (right). The test accuracy of models trained without privacy monotonically increases with the number of filters in their convolutional layers. Instead, we observe an inflection point at about 15 filters for which models trained with privacy achieve their highest test accuracy. Afterwards, the model's generalization suffers as more filters are added.

There are two competing explanations of this behavior, both compatible with the lower bound stated in Bassily et al. (2014).
First, recall that DP-SGD performs a clipping operation on each per-example gradient before the averaged gradient is used to update model parameters; i.e., each gradient is subject to the following transformation

$$g_i \leftarrow g_i \cdot \min\left(1, \frac{C}{\sqrt{\sum_{i=0}^{n-1} g_i^2}}\right) \qquad (2)$$

where $g_i$ is the gradient corresponding to model parameter $i$. For a fixed clipping norm C (corresponding to a certain, fixed privacy guarantee), the quantity $C/\sqrt{\sum_{i=0}^{n-1} g_i^2}$ by which individual parameters are multiplied decreases as the number $n$ of parameters in a model increases. That is, the more parameters we have, the more likely DP-SGD is to clip the gradient (or signal) at each parameter. This can explain the presence of an inflection point in Figure 1, after which learning with privacy becomes increasingly difficult as capacity is increased. Second, as the number of parameters (i.e., $g_i$'s) increases, the norm of the noise vector that DP-SGD must add to the gradient average to ensure privacy also increases. This noise norm increases as $\sqrt{\#\mathrm{parameters}}$, and introduces another source of accuracy degradation with an increased number of parameters.

Our observations may seem to contradict some of the findings in Abadi et al. (2016). However, their limited experimental setup could offer few general lessons. First, they reduced data dimensionality using PCA to have inputs of only 60 dimensions; second, they explored only model architectures using a single-layer perceptron with between 200 and 2,000 units. Instead, our experiments involve a realistic setting where the full input is passed to a convolutional neural network with a total of 3 hidden layers and over 26,000 parameters." }, { "heading": "3.2 ACTIVATION FUNCTIONS", "text": "When training a model with differential privacy, gradients computed during SGD are clipped (recall Equation 2) to control the sensitivity of learning to training examples. If these gradients take large values, some of the signal will be discarded as gradients are being clipped. One way to reduce the magnitude (or at least control it) is to prevent the model's activations from exploding. However, a common choice of activation function in modern deep neural networks is the ReLU and, unlike other activation functions, ReLUs are unbounded.

Here, we thus test the hypothesis that replacing ReLUs with a bounded activation function prevents activations from exploding and thus keeps the magnitude of gradients to a more reasonable value. This in turn implies that the clipping operation applied by DP-SGD will discard less signal from gradient updates—eventually resulting in higher performance at test time.

On MNIST and FashionMNIST, we train two models based off the architecture of Table 1: the first model uses ReLU whereas the second model uses tanh¹ as the activation for its hidden layers, with other architectural elements kept identical. In our experiments, we later fine-tuned those architectural aspects (i.e., model capacity, choice of optimizer, etc.) separately for each activation function, to avoid favoring any one choice. In all cases, tanh was an improvement, as summarized in our conclusions (Section 6).

¹ We obtained results similar to those of the tanh with a sigmoid and a learning rate increased by a factor of 2 to 8. This is explained by the fact that the tanh is a rescaled sigmoid φ: tanh(x) = 2φ(2x) − 1.

Figure 2 visualizes the privacy-utility Pareto curve (Avent et al., 2019) of the two models trained with DP-SGD. Rather than plotting the test accuracy as a function of the number of steps, we plot it as a function of the privacy loss ε (but the privacy loss is a monotonically increasing function of the number of steps).
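As a concrete illustration of the comparison, here is a PyTorch sketch of a small two-convolutional-layer model in the spirit of Table 1, with the activation passed as a parameter; the exact kernel sizes and the dense width are our assumptions, since Table 1 is not reproduced here.

```python
import torch.nn as nn

def small_cnn(k=16, activation=nn.Tanh):
    """Two conv layers with k filters each, max-pooling, and a dense head,
    for 28x28 single-channel inputs. Pass activation=nn.ReLU to compare."""
    return nn.Sequential(
        nn.Conv2d(1, k, kernel_size=5, padding=2), activation(),
        nn.MaxPool2d(2),
        nn.Conv2d(k, k, kernel_size=5, padding=2), activation(),
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(k * 7 * 7, 32), activation(),
        nn.Linear(32, 10),
    )
```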
On MNIST, the test accuracy of the tanh model is 98.0% compared to 96.6% for the ReLU model with an identical privacy loss of ε = 2.93. For comparison, baseline tanh and ReLU models trained without privacy both achieve a test accuracy of 99.0%. Similarly, on FashionMNIST, the tanh model trained with DP-SGD achieves 85.5% test accuracy compared to 81.9% with ReLUs. The baselines on FashionMNIST are 89.3% for tanh and 89.4% with ReLUs.

To explain why a simple change of activation functions has a large impact on the model's accuracy, we conjecture that the bounded nature of the tanh prevents activations from exploding during training. We thus monitored the ℓ2 norm of the first layer's activations for our MNIST model while it is being trained in three scenarios: (a) without privacy using vanilla SGD and ReLU activations, (b) with ReLU activations and DP-SGD, and (c) with tanh activations and DP-SGD. The evolution of activation norms on test data is visualized in Figure 3. As conjectured, the activations of our ReLU network explode by a factor of 3 when training with privacy compared to without privacy. Switching to tanh activations brings the norms of activations back down to levels comparable with the activations of our non-private ReLU network." }, { "heading": "4 INITIALIZATIONS FOR LEARNING WITH DIFFERENTIAL PRIVACY", "text": "Because each gradient step expends some privacy budget, good initialization of learning is important; here, we consider transfer learning (Pratt et al., 1991) and weight scaling (Raghu et al., 2019)." }, { "heading": "4.1 INITIALIZING FROM A PRE-TRAINED MODEL USING TRANSFER LEARNING", "text": "Transfer learning can improve the initialization used when learning with privacy, and allow better privacy/accuracy tradeoffs to be achieved.² For example, to reach reasonable accuracy (> 80%) on CIFAR10, a convolutional neural network may necessarily include many convolutional layers comprising several hundred-thousand parameters. However, since convolutional layers for similar image-processing tasks are known to learn similar representations—at least in early layers—it may be possible to transfer most of these parameters from a public model, either as initializations or as frozen parameters, and subsequently train with DP-SGD. For CIFAR10, the natural choice for such transfer is a CIFAR100 model, and this has been previously explored by Abadi et al. (2016).

Taking the Abadi et al. (2016) transfer learning results for CIFAR10 as a baseline, we perform new experiments using much of the same setup and the model architecture of Table 2. As it is relatively simple, this model is a good candidate for differentially-private learning (although it reaches only 84.2% accuracy on CIFAR10 when all its parameters are trained non-privately, whereas state-of-the-art models can have over 10% higher accuracy).

We performed new transfer-learning experiments based on training this model on CIFAR100 data in three different ways: trained on a total of 5000 examples from 10 classes picked at random (Min-rand-10); trained on 25,000 examples from a random half of the CIFAR100 classes, grouped into 10 new, evenly-sized meta classes (Half-rand-50); trained on all examples and all 100 separate classes (Max-100).
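As described next, the private fine-tuning freezes all layers except the final fully-connected one; the following is a minimal PyTorch sketch of that step (the attribute name `fc` is an assumption to be adapted to the actual architecture).

```python
import torch.nn as nn

def prepare_for_private_finetuning(model, num_classes=10):
    """Freeze everything except a fresh final fully-connected layer, so that
    DP-SGD fine-tuning reduces to private logistic regression on fixed
    features. Assumes the final layer is an nn.Linear stored as `model.fc`."""
    for p in model.parameters():
        p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # trainable again
    trainable = [p for p in model.parameters() if p.requires_grad]
    return model, trainable
```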
From each of these trained models, transfer learning was used to initialize a model to be trained on CIFAR10. In the subsequent CIFAR10 training, all but the last layer was frozen, which simplifies the learning task to that of logistic regression (but also reduces utility, with the best non-private accuracy reduced to 75% on CIFAR10).

Table 3 shows CIFAR10 privacy and accuracy resulting from fine-tuning of different transfer-learning models with DP-SGD. As shown in Table 4, the results improve on those of Abadi et al. (2016), even though they performed non-linear fine-tuning of two neural-network layers, and their underlying model was able to achieve higher non-private accuracy (86%).

² A different, formal take on how public models and data can facilitate learning with privacy is studied in (Bassily et al., 2018; Feldman et al., 2018).

Table 4: CIFAR10 privacy and accuracy tradeoffs.

This paper (ε, acc.) | Abadi et al. (ε, acc.)
(0.3, 59%)           | –
(1.0, 70%)           | –
(2.1, 72%)           | (2.0, 67%)
–                    | (4.0, 70%)
–                    | (8.0, 73%)

In addition, the results show the benefits of model architectures whose final layer can be fine-tuned using logistic regression training, or other forms of convex optimization. Such training can be made possible by including a final fully-connected layer into a network; in additional experiments (not detailed here), the inclusion of such a layer did not harm the training of the original, source model from which transfer learning was done. Furthermore, the number of parameters in this layer did not seem to matter much: privacy/accuracy tradeoffs remained the same, even when the layer was grown by an order of magnitude, which is consistent with what is known about differentially-private convex optimization (Jain & Thakurta, 2014).

4.2 INITIALIZATION BY WEIGHT SCALING

Initialization by transfer learning is only applicable when a suitable public model exists whose weights can facilitate learning with privacy on sensitive data. But such a model may not exist, and DP-SGD learning may possibly benefit from other, non-standard means of initialization. We consider the Mean Var weight-scaling approach of Raghu et al. (2019) and initialize DP-SGD learning with Gaussian random parameter distributions whose layer-wise mean and variance are extracted from a seed model trained on the same sensitive input data. The weight-scaling approach does not directly transfer the parameters of an existing model; instead, just the layer-wise mean and variance are extracted, and those statistics are used to configure the Gaussian random distributions from which a second model with the same architecture is initialized.

In the context of learning with privacy, Mean Var weight scaling can improve model initialization by transfer from one differentially-private model to another. First, DP-SGD can be applied to train a model with standard random initialization. From this model, per-layer mean/variance statistics can be extracted to initialize a new model of the same architecture, subsequently trained with strong privacy guarantees. (This extraction can be done privately, although the privacy risk of summary statistics that drive random initialization should be vanishing. Following Bassily et al. (2018); Papernot et al. (2018), one can use the formal framework of sub-sample and aggregate in conjunction with Propose-Test-Release (PTR) for this selection. The algorithm first splits the training data into disjoint subsets, and trains models independently on each of the splits.
Using these trained models, the parameter is chosen via consensus voting with differential privacy. Notice that if the training dataset is large, and there is a strong consensus, then the cost towards privacy is very low.) The idea is that the mean and variance pairs can be obtained quickly at a modest privacy budget, but the faster convergence of the Mean Var initialized model both reduces the overall privacy budget needed for training, and mitigates the increased wall-clock time of DP-SGD.

[Figure 5: Learning curves for DP-SGD and DP-Adam on MNIST, FashionMNIST, and CIFAR10 (x-axis: epochs; y-axis: test accuracy). Early on in training, DP-Adam converges faster to an accuracy that is within 1 point of its final accuracy; however, DP-SGD increases more steadily towards the end of training, and thus both achieve comparable results. Given one of the datasets, the privacy budget ε for both models is identical at each epoch.]

We experiment with a relatively deep CIFAR10 convolutional model (see Appendix A), since Raghu et al. found the benefits of Mean Var initialization most pronounced for large models. We first trained a model using random initialization, and then did weight scaling by transferring that model's statistics to a new model. In this proof-of-concept, both models were trained with the same noise variance (σ = 0.5), but one could reserve a larger portion of the privacy budget for the new model.

We should note that we did not directly transfer the weight statistics between corresponding layers in the original and new models. Rather, we used the weight statistics of each of the original model's early layers for two of the layers in the new model. This gives superior performance to a one-to-one transfer; we conjecture that this is because early layers have higher variance.

Figure 4 shows the results of this experiment for some early training epochs. Each run that used standard He random initialization (He et al., 2015) gave near-identical results, achieving 37% accuracy at epoch 33. The Mean Var initialization runs showed substantially higher variance, with the best models having 7% better accuracy at epoch 33. These results are intriguing, and reminiscent of the lottery ticket hypothesis (Frankle & Carbin, 2019); they suggest a strategy of training a collection of Mean Var models and keeping those that show early promise." }, { "heading": "5 TUNING OPTIMIZERS FOR PRIVATE LEARNING", "text": "Architectural choices presented in Section 3 control how sensitive learning is to training examples. This helps us to learn with privacy—because it eliminates the negative effects of clipping and noising large gradients. We now turn our attention to the training algorithm itself. We find that it is important to tailor algorithm and hyperparameter choices to the specificities of private learning: a batch size or learning rate that yields good results without privacy may not perform well with privacy." }, { "heading": "5.1 ADAPTIVE OPTIMIZERS PROVIDE MARGINAL GAINS WHEN LEARNING WITH PRIVACY", "text": "We first explore the choice of optimizer, and in particular whether adaptive optimizers that leverage the history of iterates help convergence when learning privately. We compare learning curves for DP-SGD and the differentially private counterpart of Adam (Kingma & Ba, 2014), a canonical adaptive optimizer. A qualitative analysis of Figure 5 leads to the same conclusion for all datasets (MNIST, FashionMNIST, and CIFAR10). While DP-Adam may converge faster initially, its convergence rate eventually slows down sufficiently for DP-SGD to achieve comparable (if not higher) accuracy.
We compare learning curves for DPSGD and the differentially private counterpart of Adam (Kingma & Ba, 2014), a canonical adaptive optimizer. A qualitative analysis of Figure 5 leads to the same conclusion for all datasets (MNIST, FashionMNIST, and CIFAR10). While DP-Adam may converge faster initially, its convergence rate eventually slows down sufficiently for DP-SGD to achieve comparable (if not higher) accuracy.\nTo explain the ineffectiveness of adaptive optimizers, we hypothesize that the iterates they accumulate during training are affected negatively by noise introduced to preserve privacy. Indeed, while there is enough signal from the training data included in any given batch sampled early in training, later in training most training examples have a loss of zero and do not contribute to the gradients being noised. Carrying this noise from one gradient descent step to the next to adapt learning rates therefore inadequately slows down training. To verify this, we track the estimate of the first moment in Adam on MNIST. The mean absolute value of its components converges when learning without privacy (from 0.5 after the first epoch to about 0.8 for epochs 45 through 60). Instead, it increases steadily throughout training with privacy (from 0.5 at the first epoch to above 1. after 60 epochs).\nThus, choosing an adaptive optimizer (e.g., DP-Adam) is not necessary if one is interested in achieving maximal accuracy: given a fixed privacy budget, fine-tuning the learning rate is more important\nas we confirm in Section 5.2. Note that this resonates well with recent results questioning the generalization capabilities of adaptive optimizers (Wilson et al., 2017)." }, { "heading": "5.2 CHOOSING A (LARGE) BATCH SIZE AND LEARNING RATE", "text": "Having observed that few training examples contribute signal after the first epochs, it is natural to ask whether increasing the size of batches could improve the noise-to-signal ratio in DP learning.\nTo ensure a fair comparison, we fix the privacy budget ε and deduce the number of epochs we can train the model for given a desired batch size. For instance, in Table 5, we compare models trained for 7 epochs on batches of 1, 024 examples to models trained for 40 epochs on batches of 256 examples. In both cases, the total privacy budget for training these models is ε = 2.7. We run a hyperparameter search to fine-tune the choice of learning rate for both DP-SGD and DP-Adam. We then compare the test accuracy achieved with small and large batch sizes.\nHyperparameters should be tuned for DP-SGD, not SGD. We confirm that DP-Adam does not improve over DP-SGD. Yet, this experiment shows how training for a small number of epochs at a large batch size can do comparably well to a large number of epochs at a small batch size: the wall-clock time gain is important (about 5×) and the cost in performance is moderate—half a percentage point. This suggests that earlier theoretical analysis (Talwar et al., 2014) also holds in the non-convex setting. Furthermore, note how learning rates vary across the non-DP and DP settings." }, { "heading": "6 CONCLUSIONS", "text": "Rather than first train a non-private model and later attempt to make it private, we bypass nonprivate training altogether and directly incorporate specificities of privacy-preserving learning in the selection of architectures, initializations, and tuning strategies. This improves substantially upon the state-of-the-art privacy/accuracy trade-offs on three benchmarks, as summarized below. 
Up to now, we evaluated each component (e.g., change of activation function, optimizer, etc.) individually to demonstrate its influence on private learning. Instead, the summary table below compares each approach after all hyperparameters explored in the paper have been jointly fine-tuned. In particular, note how, even in its own individually-best setting, tanh continues to consistently outperform ReLU, with for example 98.1% test accuracy (instead of 96.6% for ReLU) on MNIST.

Dataset      | Technique                        | Acc.  | ε    | δ     | Assumptions
-------------|----------------------------------|-------|------|-------|------------
MNIST        | SGD w/ tanh (not private)        | 99.0% | ∞    | 0     | -
MNIST        | DP-SGD w/ ReLU                   | 96.6% | 2.93 | 10⁻⁵  | -
MNIST        | DP-SGD w/ tanh (ours)            | 98.1% | 2.93 | 10⁻⁵  | -
FashionMNIST | SGD w/ ReLU (not private)        | 89.4% | ∞    | 0     | -
FashionMNIST | DP-SGD w/ ReLU                   | 81.9% | 2.7  | 10⁻⁵  | -
FashionMNIST | DP-SGD w/ tanh (ours)            | 86.1% | 2.7  | 10⁻⁵  | -
CIFAR10      | Transfer + SGD (not private)     | 75%   | ∞    | 0     | -
CIFAR10      | Transfer + DP-SGD (Abadi et al.) | 67%   | 2    | 10⁻⁵  | Public Data
CIFAR10      | Transfer + DP-SGD (ours)         | 72%   | 2.1  | 10⁻⁵  | Public Data
" }, { "heading": "A DEEP CONVOLUTIONAL MODEL", "text": "" }, { "heading": "B EXPERIMENTAL DETAILS", "text": "We describe the hyperparameters used in Sections 3, 4, and 5, as they were omitted from the main body of the paper due to space constraints.

                          | Section 3.1         | Section 3.2
--------------------------|---------------------|------------------
Batch size / microbatches | 100                 | 256
Epochs                    | 40                  | 40
Optimizer                 | SGD / DP-SGD        | SGD / DP-SGD
Learning rate             | 0.15                | 0.15
Clipping norm             | 1.0                 | 1.0
Noise multiplier          | 1.1                 | 1.1
Architecture              | Table 1 (varying k) | Table 1 (k = 16)
Activation function       | ReLU                | ReLU / tanh

                          | Section 4.1 (CIFAR100) | Section 4.1 (CIFAR10) | Section 4.2
--------------------------|------------------------|-----------------------|-------------
Batch size / microbatches | 64                     | 5000                  | 500
Epochs                    | 150                    | 400                   | See Figure 4
Optimizer                 | Adam                   | DP-SGD                | DP-Adam
Learning rate             | 0.001                  | 0.8                   | 0.0001
Clipping norm             | n/a (public)           | 1.0                   | 1.0
Noise multiplier          | n/a (public)           | 15                    | 0.5
Architecture              | Table 2                | Table 2               | Table 6
Activation function       | ReLU                   | ReLU                  | tanh

                          | Section 5.1      | Section 5.2
--------------------------|------------------|------------------------------
Batch size / microbatches | 256              | 256 / 1024
Epochs                    | 40               | 40 / 7
Optimizer                 | DP-SGD / DP-Adam | SGD / DP-SGD / Adam / DP-Adam
Learning rate             | See Table 5 for dataset-specific rates
Clipping norm             | 1.0              | 1.0
Noise multiplier          | 1.1              | 1.1
Architecture              | Table 1 (k = 16) | Table 1 (k = 16)
Activation function       | tanh             | tanh
" } ]
2019
null
SP:276b6721d8cad9d323908d8677706c8ad1668c95
[ "This paper proposes visualization techniques for the optimization landscape in GANs. The primary tool presented in this paper is a quantity called path-angle, which looks at the angle between the game vector field and the linear path between a point away from a stationary point and a point near a stationary point. The paper present examples of the visualization for dynamics with pure attraction, pure rotation, and a mix of attraction and rotation. Along with this, the authors propose to look at the eigenvalues of the game Jacobian and the individual player Hessian’s to evaluate convergence in GANs. The paper presents application of the tools on GANs trained with NSGAN and WGAN-GP objectives on a mixture of Gaussians, MNIST, and CIFAR10. The primary observation is that the generator performance is good, but the algorithms converge to non-Nash stable attractors. Moreover, it is shown using the path-angle plots that GANs exhibit rotational behavior around stable points.", "This paper tries to provide a deeper understanding of the training dynamics of GANs in practice via characterizing and visualizing the rotation and attraction phenomena nearby a locally stable stationary point (LSSP) and questions the necessity to access a differential/local Nash equilibrium (LNE). In particular, this paper first discusses the difference between LSSP and LNE and formalize the notions of rotation and attraction around LSSP in games. Then, this paper proposes the path angle to visualize the rotation and attraction nearby an LSSP. The path angle is a function that maps linearly distributed points in the line, which is determined by an initial parameter set and a well-trained parameter set, to the angles between the line and the gradient of a given point in that line. The rotation and attraction phenomena can be observed in the plot of the path angle as \"a quick sign switch\" and \"a bump\" nearby 1, respectively. The experiments empirically demonstrate that: 1. rotation exists in the training dynamics of practical GANs; 2. GANs often converge to an LSSP than an LNE, but still, achieve good results." ]
Generative adversarial networks have been very successful in generative modeling; however, they remain relatively challenging to train compared to standard deep neural networks. In this paper, we propose new visualization techniques for the optimization landscapes of GANs that enable us to study the game vector field resulting from the concatenation of the gradients of both players. Using these visualization techniques, we try to bridge the gap between theory and practice by showing empirically that the training of GANs exhibits significant rotations around Locally Stable Stationary Points (LSSP), similar to the ones predicted by theory on toy examples. Moreover, we provide empirical evidence that GAN training converges to a stable stationary point which is a saddle point for the generator loss, not a minimum, while still achieving excellent performance.1
[ { "affiliations": [], "name": "A CLOSER LOOK" }, { "affiliations": [], "name": "AT THE" }, { "affiliations": [], "name": "OPTIMIZATION LANDSCAPES" }, { "affiliations": [], "name": "Hugo Berard" }, { "affiliations": [], "name": "Gauthier Gidel" }, { "affiliations": [], "name": "Pascal Vincent" }, { "affiliations": [], "name": "Simon Lacoste-Julien" } ]
[ { "authors": [ "L. Adolphs", "H. Daneshmand", "A. Lucchi", "T. Hofmann" ], "title": "Local saddle point optimization: A curvature exploitation approach", "venue": null, "year": 2018 }, { "authors": [ "G. Alain", "N. Le Roux", "P.-A. Manzagol" ], "title": "Negative eigenvalues of the hessian in deep neural networks", "venue": null, "year": 2019 }, { "authors": [ "M. Arjovsky", "S. Chintala", "L. Bottou" ], "title": "Wasserstein generative adversarial networks", "venue": "In ICML,", "year": 2017 }, { "authors": [ "D. Balduzzi", "S. Racaniere", "J. Martens", "J. Foerster", "K. Tuyls", "T. Graepel" ], "title": "The mechanics of n-player differentiable games", "venue": null, "year": 2018 }, { "authors": [ "A. Brock", "J. Donahue", "K. Simonyan" ], "title": "Large scale GAN training for high fidelity natural image synthesis", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "A. Choromanska", "M. Henaff", "M. Mathieu", "G.B. Arous", "Y. LeCun" ], "title": "The loss surfaces of multilayer networks", "venue": "In Artificial Intelligence and Statistics,", "year": 2015 }, { "authors": [ "C. Daskalakis", "A. Ilyas", "V. Syrgkanis", "H. Zeng" ], "title": "Training GANs with optimism", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Y.N. Dauphin", "R. Pascanu", "C. Gulcehre", "K. Cho", "S. Ganguli", "Y. Bengio" ], "title": "Identifying and attacking the saddle point problem in high-dimensional non-convex optimization", "venue": "NeurIPS,", "year": 2014 }, { "authors": [ "F. Draxler", "K. Veschgini", "M. Salmhofer", "F. Hamprecht" ], "title": "Essentially no barriers in neural network energy landscape", "venue": "In ICML,", "year": 2018 }, { "authors": [ "G. Gidel", "H. Berard", "P. Vincent", "S. Lacoste-Julien" ], "title": "A variational inequality perspective on generative adversarial nets. ICLR, 2019a", "venue": null, "year": 2019 }, { "authors": [ "G. Gidel", "R.A. Hemmat", "M. Pezeshki", "G. Huang", "R. Lepriol", "S. Lacoste-Julien", "I. Mitliagkas" ], "title": "Negative momentum for improved game dynamics", "venue": "In AISTATS,", "year": 2019 }, { "authors": [ "X. Glorot", "Y. Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "In AISTATS,", "year": 2010 }, { "authors": [ "I. Goodfellow" ], "title": "Neurips 2016 tutorial", "venue": "Generative adversarial networks", "year": 2016 }, { "authors": [ "I. Goodfellow", "J. Pouget-Abadie", "M. Mirza", "B. Xu", "D. Warde-Farley", "S. Ozair", "A. Courville", "Y. Bengio" ], "title": "Generative adversarial nets", "venue": "NeurIPS,", "year": 2014 }, { "authors": [ "I.J. Goodfellow", "O. Vinyals", "A.M. Saxe" ], "title": "Qualitatively characterizing neural network optimization problems", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "I. Gulrajani", "F. Ahmed", "M. Arjovsky", "V. Dumoulin", "A.C. Courville" ], "title": "Improved training of wasserstein GANs", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "M. Heusel", "H. Ramsauer", "T. Unterthiner", "B. Nessler", "S. Hochreiter" ], "title": "GANs trained by a two time-scale update rule converge to a local nash equilibrium", "venue": "NeurIPS,", "year": 2017 }, { "authors": [ "K. Kawaguchi" ], "title": "Deep learning without poor local minima", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "D.P. Kingma", "J. Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "G. 
Korpelevich" ], "title": "The extragradient method for finding saddle points and other problems", "venue": "Matecon,", "year": 1976 }, { "authors": [ "A. Krizhevsky", "G. Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "A. Krizhevsky", "I. Sutskever", "G.E. Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In NeurIPS,", "year": 2012 }, { "authors": [ "Y. LeCun", "C. Cortes", "C. Burges" ], "title": "MNIST handwritten digit database", "venue": "AT&T Labs [Online]. Available: http://yann. lecun. com/exdb/mnist,", "year": 2010 }, { "authors": [ "H. Li", "Z. Xu", "G. Taylor", "C. Studer", "T. Goldstein" ], "title": "Visualizing the loss landscape of neural nets", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "J. Li", "A. Madry", "J. Peebles", "L. Schmidt" ], "title": "On the limitations of first order approximation in gan dynamics", "venue": "In ICML,", "year": 2018 }, { "authors": [ "E. Mazumdar", "L.J. Ratliff" ], "title": "On the convergence of gradient-based learning in continuous games", "venue": null, "year": 2018 }, { "authors": [ "E.V. Mazumdar", "M.I. Jordan", "S.S. Sastry" ], "title": "On finding local nash equilibria (and only local nash equilibria) in zero-sum games", "venue": null, "year": 2019 }, { "authors": [ "L. Mescheder", "S. Nowozin", "A. Geiger" ], "title": "The numerics of GANs", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "L. Mescheder", "A. Geiger", "S. Nowozin" ], "title": "Which Training Methods for GANs do actually Converge", "venue": "In ICML,", "year": 2018 }, { "authors": [ "T. Miyato", "T. Kataoka", "M. Koyama", "Y. Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "V. Nagarajan", "J.Z. Kolter" ], "title": "Gradient descent GAN optimization is locally stable", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "W. Paul" ], "title": "Electromagnetic traps for charged and neutral particles", "venue": "Reviews of modern physics,", "year": 1990 }, { "authors": [ "B.A. Pearlmutter" ], "title": "Fast exact multiplication by the hessian", "venue": "Neural computation,", "year": 1994 }, { "authors": [ "A. Radford", "L. Metz", "S. Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "L.J. Ratliff", "S.A. Burden", "S.S. Sastry" ], "title": "On the characterization of local nash equilibria in continuous games", "venue": "In IEEE Transactions on Automatic Control,", "year": 2016 }, { "authors": [ "L. Sagun", "L. Bottou", "Y. LeCun" ], "title": "Eigenvalues of the hessian in deep learning: Singularity and beyond", "venue": null, "year": 2016 }, { "authors": [ "L. Sagun", "U. Evci", "V.U. Guney", "Y. Dauphin", "L. Bottou" ], "title": "Empirical analysis of the hessian of over-parametrized neural networks", "venue": null, "year": 2017 }, { "authors": [ "T. Salimans", "I. Goodfellow", "W. Zaremba", "V. Cheung", "A. Radford", "X. Chen" ], "title": "Improved techniques for training GANs", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "R. Thompson", "T. Harmon", "M. Ball" ], "title": "The rotating-saddle trap: A mechanical analogy to rf-electricquadrupole ion trapping", "venue": "Canadian journal of physics,", "year": 2002 }, { "authors": [ "F. 
Verhulst" ], "title": "Nonlinear differential equations and dynamical systems", "venue": "Springer Science & Business Media,", "year": 1989 }, { "authors": [ "Y. Yazıcı", "C.-S. Foo", "S. Winkler", "K.-H. Yap", "G. Piliouras", "V. Chandrasekhar" ], "title": "The unusual effectiveness of averaging in GAN training", "venue": "In ICLR,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks have exhibited remarkable success in many applications (Krizhevsky et al., 2012). This success has motivated many studies of their non-convex loss landscape (Choromanska et al., 2015; Kawaguchi, 2016; Li et al., 2018b), which, in turn, has led to many improvements, such as better initialization and optimization methods (Glorot and Bengio, 2010; Kingma and Ba, 2015).\nWhile most of the work on studying non-convex loss landscapes has focused on single objective minimization, some recent class of models require the joint minimization of several objectives, making their optimization landscape intrinsically different. Among these models is the generative adversarial network (GAN) (Goodfellow et al., 2014) which is based on a two-player game formulation and has achieved state-of-the-art performance on some generative modeling tasks such as image generation (Brock et al., 2019).\nOn the theoretical side, many papers studying multi-player games have argued that one main optimization issue that arises in this case is the rotation due to the adversarial component of the game (Mescheder et al., 2018; Balduzzi et al., 2018; Gidel et al., 2019b). This has been extensively studied on toy examples, in particular on the so-called bilinear example (Goodfellow, 2016) (a.k.a Dirac GAN (Mescheder et al., 2018)). However, those toy examples are very far from the standard realistic setting of image generation involving deep networks and challenging datasets. To our knowledge it remains an open question if this rotation phenomenon actually occurs when training GANs in more practical settings.\nIn this paper, we aim at closing this gap between theory and practice. Following Mescheder et al. (2017) and Balduzzi et al. (2018), we argue that instead of studying the loss surface, we should study the game vector field (i.e., the concatenation of each player’s gradient), which can provide\n∗Equal contributions. Correspondence to firstname.lastname@umontreal.ca. †Canada CIFAR AI Chair (held at Mila) 1Code available at https://bit.ly/2kwTu87\nbetter insights to the problem. To this end, we propose a new visualization technique that we call Path-angle which helps us observe the nature of the game vector field close to a stationary point for high dimensional models, and carry on an empirical investigation of the properties of the optimization landscape of GANs. The core questions we want to address may be summarized as the following:" }, { "heading": "Is rotation a phenomenon that occurs when training GANs on real world datasets, and do existing training methods find local Nash equilibria?", "text": "To answer this question we conducted extensive experiments by training different GAN formulations (NSGAN and WGAN-GP) with different optimizers (Adam and ExtraAdam) on three datasets (MoG, MNIST and CIFAR10). Based on our experiments and using our visualization techniques we observe that the landscape of GANs is fundamentally different from the standard loss surfaces of deep networks. Furthermore, we provide evidence that existing GAN training methods do not converge to a local Nash equilibrium.\nContributions More precisely, our contributions are the following: (i) We propose studying empirically the game vector field (as opposed to studying the loss surfaces of each player) to understand training dynamics in GANs using a novel visualization tool, which we call Path-angle and that captures the rotational and attractive behaviors near local stationary points (ref. 
§4.2). (ii) We observe experimentally, on the mixture of Gaussians, MNIST, and CIFAR10 datasets, that a variety of GAN formulations have a significant rotational behavior around their locally stable stationary points (ref. §5.1). (iii) We provide empirical evidence that existing training procedures find stable stationary points that are saddle points, not minima, for the loss function of the generator (ref. §5.2)." }, { "heading": "2 RELATED WORK", "text": "Improving the training of GANs has been an active research area in the past few years. Most efforts in stabilizing GAN training have focused on formulating new objectives (Arjovsky et al., 2017) or adding regularization terms (Gulrajani et al., 2017; Mescheder et al., 2017; 2018). In this work, we try to characterize the difference in the landscapes induced by different GAN formulations and how it relates to improving the training of GANs.\nRecently, Nagarajan and Kolter (2017) and Mescheder et al. (2018) showed that a local analysis of the eigenvalues of the Jacobian of the game can provide guarantees on local stability properties. However, their theoretical analysis is based on some unrealistic assumptions, such as the generator’s ability to fully capture the real distribution. In this work, we assess experimentally to what extent these theoretical stability results apply in practice.\nRotations in differentiable games have been mentioned and interpreted by Mescheder et al. (2018), Balduzzi et al. (2018), and Gidel et al. (2019b). While these papers address rotations in games from a theoretical perspective, it was never shown that GANs, which are games with highly non-convex losses, suffer from these rotations in practice. To our knowledge, trying to quantify that GANs actually suffer from this rotational component in practice on real-world datasets is novel.\nThe stable points of the gradient dynamics in general games have been studied independently by Mazumdar and Ratliff (2018) and Adolphs et al. (2018). They notice that the locally stable stationary points of some games are not local Nash equilibria. In order to reach a local Nash equilibrium, Adolphs et al. (2018) and Mazumdar et al. (2019) develop techniques based on second order information. In this work, we argue that reaching local Nash equilibria may not be as important as one may expect and that we do achieve good performance at a locally stable stationary point.\nSeveral works have studied the loss landscape of deep neural networks. Goodfellow et al. (2015) proposed to look at the linear path between two points in parameter space and showed that neural networks behave similarly to a convex loss function along this path. Draxler et al. (2018) proposed an extension where they look at nonlinear paths between two points and show that local minima are connected in deep neural networks. Another extension was proposed by Li et al. (2018a), who use contour plots to look at the 2D loss surface defined by two directions chosen appropriately. In this paper, we use a similar approach of following the linear path between two points to gain insight about GAN optimization landscapes. However, in this context, looking at the loss of both players along that path may be uninformative. We propose instead to look, along a linear path from initialization to best solution, at the game vector field, particularly at its angle w.r.t. 
the linear path, the Path-angle.\nAnother way to gain insight into the landscape of deep neural networks is by looking at the Hessian of the loss; this was done in the context of single objective minimization by Dauphin et al. (2014), Sagun et al. (2016; 2017), and Alain et al. (2019). Compared to linear path visualizations, which can give global information (but only along one direction), the Hessian provides information about the loss landscape in several directions, but only locally. The full Hessian is expensive to compute and one often has to resort to approximations such as computing only the top-k eigenvalues. While the Hessian is symmetric and thus has real eigenvalues, the Jacobian of a game vector field is significantly different since it is in general not symmetric, which means that its eigenvalues belong to the complex plane. In the context of GANs, Mescheder et al. (2017) introduced a gradient penalty and used the eigenvalues of the Jacobian of the game vector field to show its benefits in terms of stability. In our work, we compute these eigenvalues to assess that, on different GAN formulations and datasets, existing training procedures find a locally stable stationary point that is a saddle point for the loss function of the generator." }, { "heading": "3 FORMULATIONS FOR GAN OPTIMIZATION AND THEIR PRACTICAL IMPLICATIONS", "text": "" }, { "heading": "3.1 THE STANDARD GAME THEORY FORMULATION", "text": "From a game theory point of view, GAN training may be seen as a game between two players: the discriminator Dϕ and the generator Gθ, each of which is trying to minimize its loss, LD and LG respectively. Using the same formulation as Mescheder et al. (2017), the GAN objective takes the following form (for simplicity of presentation, we focus on the unconstrained formulation):\nθ∗ ∈ arg min_{θ∈R^p} LG(θ, ϕ∗) and ϕ∗ ∈ arg min_{ϕ∈R^d} LD(θ∗, ϕ) . (1)\nThe solution (θ∗, ϕ∗) is called a Nash equilibrium (NE). In practice, the considered objectives are non-convex and we typically cannot expect better than a local Nash equilibrium (LNE), i.e. a point at which (1) is only locally true (see e.g. (Adolphs et al., 2018) for a formal definition). Ratliff et al. (2016) derived some derivative-based necessary and sufficient conditions for being a LNE. They show that, for being a local NE, it is sufficient to be a differential Nash equilibrium:\nDefinition 1 (Differential NE). A point (θ∗, ϕ∗) is a differential Nash equilibrium (DNE) iff\n‖∇θLG(θ∗, ϕ∗)‖ = ‖∇ϕLD(θ∗, ϕ∗)‖ = 0 , ∇²θLG(θ∗, ϕ∗) ≻ 0 and ∇²ϕLD(θ∗, ϕ∗) ≻ 0 , (2)\nwhere S ≻ 0 if and only if S is positive definite.\nBeing a DNE is not necessary for being a LNE because a local Nash equilibrium may have Hessians that are only semi-definite. NE are commonly used in GANs to describe the goal of the learning procedure (Goodfellow et al., 2014): in this definition, θ∗ (resp. ϕ∗) is seen as a local minimizer of LG(·, ϕ∗) (resp. LD(θ∗, ·)). Under this view, however, the interaction between the two networks is not taken into account. This is an important aspect of the game's stability that is missed in the definition of a DNE (and of a Nash equilibrium in general). We illustrate this point in the following section, where we develop an example of a game for which gradient methods converge to a point which is a saddle point for the generator’s loss and thus not a DNE for the game." }, { "heading": "3.2 AN ALTERNATIVE FORMULATION BASED ON THE GAME VECTOR FIELD", "text": "In practice, GANs are trained using first order methods that compute the gradients of the losses of each player. 
Following Gidel et al. (2019a), an alternative point of view on optimizing GANs is to jointly consider the players’ parameters θ and ϕ as a joint state ω := (θ, ϕ), and to study the vector field associated with these gradients,2 which we call the game vector field:\nv(ω) := [∇θLG(ω)⊤ ∇ϕLD(ω)⊤]⊤ where ω := (θ, ϕ) . (3)\n2Note that, in practice, the joint vector field (3) is not a gradient vector field, i.e., it cannot be rewritten as the gradient of a single function.\nWith this perspective, the notion of DNE is replaced by the notion of locally stable stationary point (LSSP). Verhulst (1989, Theorem 7.1) defines a LSSP ω∗ using the eigenvalues of the Jacobian of the game vector field ∇v(ω∗) at that point.\nDefinition 2 (LSSP). A point ω∗ is a locally stable stationary point (LSSP) iff\nv(ω∗) = 0 and Re(λ) > 0 , ∀λ ∈ Sp(∇v(ω∗)) , (4)\nwhere Re(λ) denotes the real part of the eigenvalue λ belonging to the spectrum of ∇v(ω∗).\nThis definition is not easy to interpret, but one can intuitively understand a LSSP as a stationary point (a point ω∗ where v(ω∗) = 0) to which all neighbouring points are attracted. We will formalize this intuition of attraction in Proposition 1. In our two-player game setting, the Jacobian of the game vector field around the LSSP has the following block-matrix form:\n∇v(ω∗) = [∇²θLG(ω∗) ∇ϕ∇θLG(ω∗) ; ∇θ∇ϕLD(ω∗) ∇²ϕLD(ω∗)] = [S1 B ; A S2] . (5)\nWhen B = −A⊤, being a DNE is a sufficient condition for being a LSSP (Mazumdar and Ratliff, 2018). However, some LSSP may not be DNE (Adolphs et al., 2018), meaning that the optimal generator θ∗ could be a saddle point of LG(·, ϕ∗), while the optimal joint state (θ∗, ϕ∗) may be a LSSP of the game. We summarize these properties in Table 1. In order to illustrate the intuition behind this counter-intuitive fact, we study a simple example where the generator is 2D and the discriminator is 1D.\nExample 1. Let us consider LG as a hyperbolic paraboloid (a.k.a. a saddle function) centered at (1, 1), where (1, ϕ) is the principal descent direction and (−ϕ, 1) is the principal ascent direction, while LD is a simple bilinear objective:\nLG(θ1, θ2, ϕ) = (θ2 − ϕθ1 − 1)² − ½(θ1 + ϕθ2 − 1)² , LD(θ1, θ2, ϕ) = ϕ(5θ1 + 4θ2 − 9) .\nWe plot LG in Fig. 1b. Note that the discriminator ϕ controls the principal descent direction of LG.\nWe show (see §A.2) that (θ1∗, θ2∗, ϕ∗) = (1, 1, 0) is a locally stable stationary point but is not a DNE: the generator loss at the optimum, (θ1, θ2) ↦ LG(θ1, θ2, ϕ∗) = θ2² − ½θ1² (up to centering), is not at a DNE because it has a clear descent direction, (1, 0). However, if the generator follows this descent direction, the dynamics will remain stable because the discriminator will update its parameter, rotating the saddle and making (1, 0) an ascent direction. We call this phenomenon dynamic stability: the loss LG(·, ϕ∗) is unstable for a fixed ϕ∗ but becomes stable when ϕ dynamically interacts with the generator around ϕ∗.\nA mechanical analogy for this dynamic stability phenomenon is a ball in a rotating saddle: even though gravity pushes the ball to escape the saddle, a quick enough rotation of the saddle will trap the ball at the center (see (Thompson et al., 2002) for more details). This analogy has been used to explain Paul’s trap (Paul, 1990): a counter-intuitive way to trap ions using a dynamic electric field. 
In Example 1, the parameter ϕ explicitly controls the rotation of the saddle.\nThis example illustrates the fact that the DNE corresponds to a notion of static stability: it is the stability of one player’s loss given that the other player is fixed. Conversely, the LSSP captures a notion of dynamic stability that considers both players jointly.\nBy looking at the game vector field we capture these interactions. Fig. 1b only captures a snapshot of the generator’s loss surface for a fixed ϕ and indicates static instability (the generator is at a saddle point of its loss). In Fig. 1a, however, one can see that, starting from any point, we will rotate around the stationary point (ϕ∗, θ1∗) = (0, 1) and eventually converge to it.\nThe visualization of the game vector field reveals an interesting behavior that does not occur in single objective minimization: close to a LSSP, the parameters rotate around it. Understanding this phenomenon is key to grasping the optimization difficulties arising in games. In the next section, we formally characterize the notion of rotation around a LSSP, and in §4 we develop tools to visualize it in high dimensions. Note that gradient methods may converge to saddle points in single objective minimization, but these are not stable stationary points, unlike in our game example." }, { "heading": "3.3 ROTATION AND ATTRACTION AROUND LOCALLY STABLE STATIONARY POINTS IN GAMES", "text": "In this section, we formalize the notions of rotation and attraction around LSSP in games, which we believe may explain some difficulties in GAN training. The local stability of a LSSP is characterized by the eigenvalues of the Jacobian ∇v(ω∗) because we can linearize v(ω) around ω∗:\nv(ω) ≈ ∇v(ω∗)(ω − ω∗) . (6)\nIf we assume that (6) is an equality, we have the following proposition.\nProposition 1. Let us assume that (6) is an equality and that ∇v(ω∗) is diagonalizable. Then there exists a basis P such that the coordinates ω̃j(t) := [P(ω(t) − ω∗)]j, where ω(t) is a solution of the dynamics induced by (6), have the following behavior: for λj ∈ Sp(∇v(ω∗)) we have,\n1. If λj ∈ R, we observe pure attraction: ω̃j(t) = e^(−λj t) ω̃j(0) .\n2. If Re(λj) = 0, we observe pure rotation: [ω̃j(t) ; ω̃j+1(t)] = [cos(|λj|t) sin(|λj|t) ; −sin(|λj|t) cos(|λj|t)] [ω̃j(0) ; ω̃j+1(0)] .\n3. Otherwise, we observe both: [ω̃j(t) ; ω̃j+1(t)] = e^(−Re(λj)t) [cos(Im(λj)t) sin(Im(λj)t) ; −sin(Im(λj)t) cos(Im(λj)t)] [ω̃j(0) ; ω̃j+1(0)] .\nNote that we re-ordered the eigenvalues such that the complex conjugate eigenvalues form pairs: if λj ∉ R then λj+1 = λ̄j.\nThe matrices in 2. and 3. are rotation matrices. They induce the rotational behavior illustrated in Fig. 1a.\nThis proposition shows that the dynamics of ω(t) can be decomposed, in a particular basis, into attractions and rotations over components that do not interact with each other. Rotation does not appear in single objective minimization around a local minimum, because the eigenvalues of the Hessian of the objective are always real. Mescheder et al. (2017) discussed that difficulties in training GANs may be a result of the imaginary part of the eigenvalues of the Jacobian of the game vector field, and Gidel et al. (2019b) mentioned that games have a natural oscillatory behavior. This cyclic behavior has been explained by Balduzzi et al. (2018) through a non-zero Hamiltonian component in the Helmholtz decomposition of the Jacobian of the game vector field. All these explanations are related to the spectral properties of this Jacobian. 
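Example 1 can also be checked numerically in a few lines. The NumPy sketch below (the matrix is the Jacobian of the game vector field at (1, 1, 0), eq. (17) in the appendix) confirms that all eigenvalues of the game Jacobian have positive real parts, so the point is a LSSP, while the generator's own Hessian block has a negative eigenvalue, so the point is not a DNE.

import numpy as np

J = np.array([[-1.0, 0.0, -1.0],   # ∇v(1, 1, 0) for Example 1, eq. (17)
              [ 0.0, 2.0, -2.0],
              [ 5.0, 4.0,  0.0]])
eigvals = np.linalg.eigvals(J)
print(eigvals)                      # one real root in (0, 1) and a complex pair
print(eigvals.real.min() > 0)       # True: all real parts > 0, i.e. a LSSP

H_gen = J[:2, :2]                   # generator block ∇²_θ L_G at the point
print(np.linalg.eigvalsh(H_gen))    # [-1., 2.]: a saddle, hence not a DNE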
The goal of Proposition 1 is to provide a formal definition of the notions of rotation and attraction we are dealing with in this paper.\nIn the following section, we introduce a new tool in order to assess the magnitude of the rotation around a LSSP compared to the attraction to this point." }, { "heading": "4 VISUALIZATION FOR THE VECTOR FIELD LANDSCAPE", "text": "Neural networks are parametrized by a large number of variables and visualizations are only possible using low dimensional plots (1D or 2D). We first present a standard visualization tool for deep neural network loss surfaces that we will exploit in §4.2." }, { "heading": "4.1 STANDARD VISUALIZATIONS FOR THE LOSS SURFACE", "text": "One way to visualize a neural network’s loss landscape is to follow a parametrized path ω(α) that connects two parameters ω, ω′ (often one is chosen early in learning and another one is chosen late in learning, close to a solution). A path is a continuous function ω(·) such that ω(0) = ω and ω(1) = ω′. Goodfellow et al. (2015) considered a linear path ω(α) = (1 − α)ω + αω′. More complex paths can be considered to assess whether different minima are connected (Draxler et al., 2018)." }, { "heading": "4.2 PROPOSED VISUALIZATION: PATH-ANGLE", "text": "We propose to study the linear path between parameters early in learning and parameters late in learning. We illustrate the extreme cases for the game vector field along this path in simple examples in Figure 2(a-c): pure attraction occurs when the vector field perfectly points to the optimum (Fig. 2a) and pure rotation when the vector field is orthogonal to the direction to the optimum (Fig. 2b). In practice, we expect the vector field to be in between these two extreme cases (Fig. 2c). In order to determine which case we are in around a LSSP in practice, we propose the following tools.\nPath-norm. We first ensure that we are in a neighborhood of a stationary point by computing the norm of the vector field. Note that considering the norm of each player independently may be misleading: even though the gradient of one player may be close to zero, it does not mean that we are at a stationary point, since the other player might still be updating its parameters.\nPath-angle. Once we are close to a final point ω′, i.e., in a neighborhood of a LSSP, we propose to look at the angle between the vector field (3) and the linear path from ω to ω′. Specifically, we monitor the cosine of this angle, a quantity we call the Path-angle:\nc(α) := ⟨ω′ − ω, vα⟩ / (‖ω′ − ω‖ ‖vα‖) where vα := v(αω′ + (1 − α)ω) , α ∈ [a, b] . (7)\nUsually [a, b] = [0, 1], but since we are interested in the landscape around a LSSP, it might be more informative to also consider further extrapolated points around ω′ with b > 1.\nEigenvalues of the Jacobian. Another important tool to gain insights on the behavior close to a LSSP, as discussed in §3.2, is to look at the eigenvalues of ∇v(ω∗). We propose to compute the top-k eigenvalues of this Jacobian. When all the eigenvalues have positive real parts, we conclude that we have reached a LSSP, and if some eigenvalues have large imaginary parts, then the game has a strong rotational behavior (Prop. 1). Similarly, we can also compute the top-k eigenvalues of the diagonal blocks of the Jacobian, which correspond to the Hessians of each player. 
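For reference, the Path-angle (7) reduces to a few lines of code. The sketch below is illustrative only: it assumes a function v that returns the concatenated gradients of both players for a flat parameter vector, the names are ours, and the grid is extended past α = 1 as suggested above.

import numpy as np

def path_angle(v, w_start, w_end, alphas=np.linspace(0.0, 1.2, 100)):
    d = w_end - w_start
    d_norm = np.linalg.norm(d)
    angles, norms = [], []
    for a in alphas:
        g = v((1.0 - a) * w_start + a * w_end)   # game vector field on the path
        g_norm = np.linalg.norm(g)
        norms.append(g_norm)                     # Path-norm, to check stationarity
        angles.append(float(d @ g) / (d_norm * g_norm + 1e-12))
    return np.asarray(angles), np.asarray(norms)

In the experiments below, the plotted curve is the median of such path-angles over several end points ω′, with the interquartile range shaded (see Appendix C.3).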
The eigenvalues of these diagonal blocks can inform us on whether we have converged to a LSSP that is not a LNE.\nAn important advantage of the Path-angle relative to the computation of the eigenvalues of ∇v(ω∗) is that it only requires computing gradients (and not second order derivatives, which may be prohibitively computationally expensive for deep networks). Also, it provides information along a whole path between two points and thus more global information than the Jacobian computed at a single point. In the following section, we use the Path-angle to study the archetypal behaviors presented in Prop. 1." }, { "heading": "4.3 ARCHETYPAL BEHAVIORS OF THE PATH-ANGLE AROUND A LSSP", "text": "Around a LSSP, we have seen in (6) that the behavior of the vector field is mainly dictated by the Jacobian matrix ∇v(ω∗). This motivates the study of the behavior of the Path-angle c(α) when the Jacobian is a constant matrix:\nv(ω) = [S1 B ; A S2](ω − ω∗) and thus ∇v(ω) = [S1 B ; A S2] ∀ω . (8)\nDepending on the choice of S1, S2, A and B, we cover the following cases:\n\n• S1, S2 ≻ 0, A = B = 0: the eigenvalues are real. Prop. 1 ensures that we only have attraction. Far from ω∗, the gradient points to ω∗ (see Fig. 2a) and thus c(α) = 1 for α ≪ 1 and c(α) = −1 for α ≫ 1. Since ω′ is not exactly ω∗, we observe a quick sign switch of the Path-angle around α = 1. We plotted the average Path-angle over different approximate optima in Fig. 2a (see appendix for details).\n\n• S1 = S2 = 0, A = −B⊤: the eigenvalues are purely imaginary. Prop. 1 ensures that we only have rotations. Far from the optimum, the gradient is orthogonal to the direction that points to ω∗ (see Fig. 2b). Thus, c(α) vanishes for α ≪ 1 and α ≫ 1. Because ω′ is not exactly ω∗, around α = 1 the gradient is tangent to the circles induced by the rotational dynamics and thus c(α) = ±1. That is why in Fig. 2b we observe a bump in c(α) when α is close to 1.\n\n• General high dimensional LSSP (4): the dynamics display both attraction and rotation. We observe a combination of the sign switch due to the attraction and the bump due to the rotation. The higher the bump, the closer we are to pure rotations. Since we are performing a low dimensional visualization, we actually project the gradient onto our direction of interest. That is why the Path-angle is significantly smaller than 1 in Fig. 2c." }, { "heading": "5 NUMERICAL RESULTS ON GANS", "text": "Losses. We focus on two common GAN loss formulations: we consider both the original non-saturating GAN (NSGAN) formulation proposed in Goodfellow et al. (2014) and the WGAN-GP objective described in Gulrajani et al. (2017).\nDatasets. We first propose to train a GAN on a toy task composed of a 1D mixture of 2 Gaussians (MoG) with 10,000 samples. For this task, both the generator and discriminator are neural networks with 1 hidden layer and ReLU activations. We also train a GAN on MNIST, where we use the DCGAN architecture (Radford et al., 2016) with spectral normalization (see §C.2 for details). Finally, we also look at the optimization landscape of a state-of-the-art ResNet on CIFAR10 (Krizhevsky and Hinton, 2009).\nOptimization methods. For the mixture of Gaussians (MoG) dataset, we used the full-batch extragradient method (Korpelevich, 1976; Gidel et al., 2019a). We also tried to use standard batch gradient descent, but this led to unstable results, indicating that gradient descent might indeed be unable to 
On MNIST and CIFAR10, we tested both Adam (Kingma and Ba, 2015) and ExtraAdam (Gidel et al., 2019a). The observations made on models trained with both methods are very similar. ExtraAdam gives slightly better performance in terms of inception score (Salimans et al., 2016), and Adam sometimes converge to unstable points, thus we decided to only include the observations on ExtraAdam, for more details on the observations on Adam (see §C.5). As recommended by Heusel et al. (2017), we chose different learning rates for the discriminator and the generator. All the hyper-parameters and precise details about the experiments can be found in §C.1." }, { "heading": "5.1 EVIDENCE OF ROTATION AROUND LOCALLY STABLE STATIONARY POINTS IN GANS", "text": "We first look, for all the different models and datasets, at the path-angles between a random initialization (initial point) and the set of parameters during training achieving the best performance (end point) (Fig. 3), and at the eigenvalues of the Jacobian of the game vector field for the same end point (Fig. 4). We’re mostly interested in looking at the optimization landscape around LSSPs, so we first check if we are actually close to one. To do so we look at the gradient norm around the end point, this is shown by the orange curves in Fig.3, we can see that the norm of the gradient is quite small for all the models meaning that we are close to a stationary point. We also need to check that the point is stable, to do so we look at the eigenvalues of the Game in Fig. 4, if all the eigenvalues have positive real parts then the point is also stable. We observe that most of the time, the model has reached a LSSP. However we can see that this is not always the case, for example in Fig. 4d some of the eigenvalues have a negative real part. We still include those results since although the point is unstable it gives similar performance to a LSSP.\nOur first observation is that all the GAN objectives on both datasets have a non zero rotational component. This can be seen by looking at the Path-angle in Fig. 3, where we always observe a bump, and this is also confirmed by the large imaginary part in the eigenvalues of the Jacobian in Fig. 4. The rotational component is clearly visible in Fig. 3d, where we see no sign switch and a clear bump similar to Fig. 2b. On MNIST and CIFAR10, with NSGAN and WGAN-GP (see Fig. 3), we observe a combination of a bump and a sign switch similar to Fig. 2c. Also Fig. 4 clearly shows the existence of imaginary eigenvalues with large magnitude. Fig. 4c and 4e. We can see that while almost all models exhibit rotations, the distribution of the eigenvalues are very different. In particular the complex eigenvalues for NSGAN seems to be much more concentrated on the imaginary axis while WGAN-GP tends to spread the eigenvalues towards the right of the imaginary axis Fig. 4e. 
This shows that different GAN objectives can lead to very different landscapes, which has implications in terms of optimization; in particular, it might explain why WGAN-GP performs slightly better than NSGAN.\n\n[Figure 5: NSGAN. Top-k eigenvalues of the Hessian of each player (in terms of magnitude) in descending order, at initialization (init) and at the end of training (end). Panels: (a) MoG (top-100 eigenvalues); (b) MNIST, IS = 8.97 (top-20); (c) CIFAR10, IS = 7.33 (top-20); the two rows show the Generator and the Discriminator. The top eigenvalues indicate that the generator does not reach a local minimum but a saddle point (for CIFAR10, actually both the generator and the discriminator are at saddle points). Thus the training algorithms converge to LSSPs which are not Nash equilibria.]" }, { "heading": "5.2 THE LOCALLY STABLE STATIONARY POINTS OF GANS ARE NOT LOCAL NASH EQUILIBRIA", "text": "As mentioned at the beginning of §5.1, the points we are considering are most of the time LSSPs. To check whether these points are also local Nash equilibria (LNE), we compute the eigenvalues of the Hessian of each player independently. If all the eigenvalues of each player are positive, it means that we have reached a DNE. Since the computation of the full spectrum of the Hessians is expensive, we restrict ourselves to the top-k eigenvalues with largest magnitude: exhibiting one significant negative eigenvalue is enough to indicate that the point considered is not in the neighborhood of a 
Moreover, we have provided evidence that the optimization landscapes of GANs typically have rotational components specific to games. We argue that these rotational components are part of the reason why GANs are challenging to train, in particular that the instabilities observed during training may come from such rotations close to LSSP. It shows that simple low dimensional examples, such as for instance Dirac GAN, does capture some of the arising challenges for training large scale GANs, thus, motivating the practical use of method able to handle strong rotational components, such as extragradient (Gidel et al., 2019a), averaging (Yazıcı et al., 2019), optimism (Daskalakis et al., 2018) or gradient penalty based methods (Mescheder et al., 2017; Gulrajani et al., 2017).\nACKNOWLEDGMENTS.\nThe contribution to this research by Mila, Université de Montréal authors was partially supported by the Canada CIFAR AI Chair Program (held at Mila), the Canada Excellence Research Chair in “Data Science for Realtime Decision-making”, by the NSERC Discovery Grant RGPIN-2017-06936 (held at Université de Montréal), by a Borealis AI fellowship and by a Google Focused Research award. The authors would like to thank Tatjana Chavdarova for fruitful discussions." }, { "heading": "A PROOF OF THEOREMS AND PROPOSITIONS", "text": "A.1 PROOF OF THEOREM 1\nLet us recall the theorem of interest: Proposition’ 1. Let us assume that (6) is an equality and that ∇v(ω∗) is diagonalizable, then there exists a basis P such that the coordinates ω̃(t) := P (ω(t)− ω∗) have the following behavior,\n1. For λj ∈ Sp∇v(ω∗), λj ∈ R, we observe pure attraction: ω̃j(t) = e−λjt[ω̃j(0) . 2. For λj ∈ Sp∇v(ω∗), <(λj) = 0, we observe pure rotation: [ ω̃j(t) ω̃j+1(t) ] = R|λj |t [ ω̃j(0) ω̃j+1(0) ] .\n3. Otherwise, we observe both: [ ω̃j(t) ω̃j+1(t) ] = e−Re(λj)tRIm(λj)t [ ω̃j(0) ω̃j+1(0) ] .\nThe matrix Rϕ corresponds to a rotation of angle ϕ. Note that, we re-ordered the eigenvalues such that the complex conjugate eigenvalues form pairs: if λj /∈ R then λj+1 = λ̄j .\nProof. The ODE we consider is,\ndω(t)\ndt = ∇v(ω∗)(ω(t)− ω∗) (9)\nThe solution of this ODE is\nω(t) = e−(t−t0)∇v(ω ∗)(ω(t0)− ω∗) + ω∗ (10)\nLet us now consider λ an eigenvalue of Sp(∇v(ω∗)) such that Re(λ) > 0 and Im(λ) 6= 0. Since ∇v(ω∗) is a real matrix and Im(λ) 6= 0 we know that the complex conjugate λ̄ of λ belongs to Sp(∇v(ω∗)). Let u0 be a complex eigenvector of λ, then we have that,\n∇v(ω∗)u0 = λu0 ⇒ ∇v(ω∗)ū0 = λ̄ū0 (11) and thus ū0 is a eigenvector of λ̄. Now if we set u1 := u0 + ū0 and iu2 := u0 − ū0, we have that\ne−t∇v(ω ∗)u1 = e −tλu0 + e −tλ̄ū0 = Re(e −tλ)u1 + Im(e −tλ)u2 (12) e−t∇v(ω ∗)iu2 = e\n−tλu0 − e−tλ̄ū0 = i(Re(e−tλ)u2 − Im(e−tλ)u1) (13) Thus if we consider the basis that diagonalizes ∇v(ω∗) and modify the complex conjugate eigenvalues in the way we described right after 11 we get the expected diagonal form in a real basis. Thus there exists P such that ∇v(ω∗) = PDP−1 (14) whereD is the block diagonal matrix with the block described in Theorem 1.\nA.2 BEING A DNE IS NEITHER NECESSARY OR SUFFICIENT FOR BEING A LSSP\nLet us first recall Example 1. Example’ 1. Let us consider LG as a hyperbolic paraboloid (a.k.a., saddle point function) centered in (1, 1) where (1, ϕ) is the principal descent direction and (−ϕ, 1) is the principal ascent direction, while LD is a simple bilinear objective. LG(θ1, θ2, ϕ) = (θ2 − ϕθ1 − 1)2 − 12 (θ1 + ϕθ2 − 1)2 , LD(θ1, θ2, ϕ) = ϕ(5θ1 + 4θ2 − 9)\nWe want to show that (1, 1, 0) is a locally stable stationary point.\nProof. 
The game vector field has the following form:\nv(θ1, θ2, ϕ) = [(2ϕ² − 1)θ1 − 3ϕθ2 + 2ϕ + 1 ; (2 − ϕ²)θ2 − 3ϕθ1 − 2 + ϕ ; 5θ1 + 4θ2 − 9] . (15)\nThus, (θ1∗, θ2∗, ϕ∗) := (1, 1, 0) is a stationary point (i.e., v(θ1∗, θ2∗, ϕ∗) = 0). The Jacobian of the game vector field is\n∇v(θ1, θ2, ϕ) = [2ϕ² − 1, −3ϕ, 4ϕθ1 − 3θ2 + 2 ; −3ϕ, 2 − ϕ², −2ϕθ2 − 3θ1 + 1 ; 5, 4, 0] , (16)\nand thus\n∇v(θ1∗, θ2∗, ϕ∗) = [−1, 0, −1 ; 0, 2, −2 ; 5, 4, 0] . (17)\nWe can verify that the eigenvalues of this matrix have a positive real part with any solver (the eigenvalues of a 3×3 matrix always have a closed form). For completeness, we provide a proof without using the closed form of the eigenvalues. The eigenvalues of ∇v(θ1∗, θ2∗, ϕ∗) are given by the roots of its characteristic polynomial,\nχ(X) := det([X + 1, 0, 1 ; 0, X − 2, 2 ; −5, −4, X]) = X³ − X² + 11X − 2 . (18)\nThis polynomial has a real root in (0, 1) because χ(0) = −2 < 0 < 9 = χ(1). Thus we know that there exists α ∈ (0, 1) such that\nX³ − X² + 11X − 2 = (X − α)(X − λ1)(X − λ2) . (19)\nThen we have the equalities\nαλ1λ2 = 2 , (20)\nα + λ1 + λ2 = 1 . (21)\nThus, since 0 < α < 1, we have that:\n\n• If λ1 and λ2 are real, they have the same sign (λ1λ2 = 2/α > 0) and thus are positive (λ1 + λ2 = 1 − α > 0).\n\n• If λ1 is complex, then λ2 = λ̄1 and thus 2Re(λ1) = λ1 + λ2 = 1 − α > 0.\n\nExample 1 showed that being a LSSP does not imply being a DNE. Let us now construct an example where a game has a DNE which is not locally stable.\nExample 2. Consider the non-zero-sum game with the following respective losses for each player,\nL1(θ, φ) = 4θ² + (½φ² − 1) · θ and L2(θ, φ) = (4θ − 1)φ + ⅙θ³ . (22)\nThis game has two stationary points, at θ = 0 and φ = ±1. The Jacobians of the dynamics at these two points are\n∇v(0, 1) = [1, 1/2 ; 2, 1/2] and ∇v(0, −1) = [1, −1/2 ; 2, −1/2] . (23)\nThus:\n\n• The stationary point (0, 1) is a DNE, but Sp(∇v(0, 1)) = {(3 ± √17)/4} contains an eigenvalue with negative real part, and so it is not a LSSP.\n\n• The stationary point (0, −1) is not a DNE, but Sp(∇v(0, −1)) = {(1 ± i√7)/4} contains only eigenvalues with positive real part, and so it is a LSSP." }, { "heading": "B COMPUTATION OF THE TOP-K EIGENVALUES OF THE JACOBIAN", "text": "Neural networks usually have a large number of parameters, which makes storing the full Jacobian matrix impossible. However, the Jacobian-vector product can be computed efficiently using the trick from (Pearlmutter, 1994). Indeed, it is easy to show that ∇v(ω)u = ∇(v(ω)⊤u). To compute the eigenvalues of the Jacobian of the game, we first compute the gradient v(ω) over a subset of the dataset. We then define a function that computes the Jacobian-vector product using automatic differentiation. We can then use this function to compute the top-k eigenvalues of the Jacobian using the sparse.linalg.eigs function of the SciPy library (a minimal code sketch of this procedure is given after this appendix)." }, { "heading": "C EXPERIMENTAL DETAILS", "text": "C.1 MIXTURE OF GAUSSIAN EXPERIMENT\n\nDataset. The Mixture of Gaussians dataset is composed of 10,000 points sampled independently from the following distribution: pD(x) = ½N(2, 0.5) + ½N(−2, 1), where N(µ, σ²) is the probability density function of a 1D Gaussian distribution with mean µ and variance σ². The latent variables z ∈ R^d are sampled from a standard normal distribution N(0, I_d). Because we want to use full-batch methods, we sample 10,000 points that we re-use for each iteration during training.\n\nNeural Networks Architecture. Both the generator and discriminator are one-hidden-layer neural networks with 100 hidden units and ReLU activations.\n\nWGAN Clipping. 
Because of the clipping of the discriminator parameters, some components of the discriminator’s gradient should not be taken into account. In order to compute the relevant path-angle, we apply the following filter to the gradient:\n1{(|ϕ| = c) and (sign(∇ϕLD(ω)) = −sign(ϕ))} , (24)\nwhere ϕ is clipped between −c and c. If this condition holds for a coordinate of the gradient, then it means that after a gradient step followed by clipping, the value of that coordinate will not change.\n\nHyperparameters for WGAN-GP on MoG:\nBatch size = 10,000 (full-batch)\nNumber of iterations = 30,000\nLearning rate for generator = 1×10^-2\nLearning rate for discriminator = 1×10^-1\nGradient penalty coefficient = 1×10^-3\n\nHyperparameters for NSGAN on MoG:\nBatch size = 10,000 (full-batch)\nNumber of iterations = 30,000\nLearning rate for generator = 1×10^-1\nLearning rate for discriminator = 1×10^-1\n\nC.2 MNIST EXPERIMENT\n\nDataset. We use the training part of the MNIST dataset (LeCun et al., 2010) (50K examples) for training our models, and scale each image to the range [−1, 1].\n\nArchitecture. We use the DCGAN architecture (Radford et al., 2016) for our generator and discriminator, with both the NSGAN and WGAN-GP objectives. The only change we make is that we replace the batch-norm layer in the discriminator with a spectral-norm layer (Miyato et al., 2018), which we find stabilizes training.\n\nTraining Details.\n\nHyperparameters for NSGAN with Adam:\nBatch size = 100\nNumber of iterations = 100,000\nLearning rate for generator = 2×10^-4\nLearning rate for discriminator = 5×10^-5\nβ1 = 0.5\n\nHyperparameters for NSGAN with ExtraAdam:\nBatch size = 100\nNumber of iterations = 100,000\nLearning rate for generator = 2×10^-4\nLearning rate for discriminator = 5×10^-5\nβ1 = 0.9\n\nHyperparameters for WGAN-GP with Adam:\nBatch size = 100\nNumber of iterations = 200,000\nLearning rate for generator = 8.6×10^-5\nLearning rate for discriminator = 8.6×10^-5\nβ1 = 0.5\nGradient penalty λ = 10\nCritic iterations per generator iteration = 5\n\nHyperparameters for WGAN-GP with ExtraAdam:\nBatch size = 100\nNumber of iterations = 200,000\nLearning rate for generator = 8.6×10^-5\nLearning rate for discriminator = 8.6×10^-5\nβ1 = 0.9\nGradient penalty λ = 10\nCritic iterations per generator iteration = 5\n\nComputing Inception Score on MNIST. We compute the inception score (IS) for our models using a LeNet classifier pretrained on MNIST. The average IS of real MNIST data is 9.9.\n\nC.3 PATH-ANGLE PLOT\n\nWe use the path-angle plot to illustrate the dynamics close to a LSSP. To compute this plot, we need to choose an initial point ω and an end point ω′. We choose ω to be the parameters at initialization, but ω′ can be more subtle to choose. In practice, when we use stochastic gradient methods, we typically reach a neighborhood of a LSSP where the norm of the gradient is small. However, due to the stochastic noise, we keep moving around the LSSP. In order to be robust to the choice of the end point ω′, we take multiple close-by points during training that have good performance (e.g., high IS on MNIST). In all of the figures, we compute the path-angle (and path-norm) for all these end points (with the same start point), and we plot the median path-angle (middle line) and the interquartile range (shaded area).\n\nC.4 INSTABILITY OF GRADIENT DESCENT\n\nFor the MoG dataset, we tried both the extragradient method (Korpelevich, 1976; Gidel et al., 2019a) and standard gradient descent. We observed that gradient descent leads to unstable results. 
In particular, the norm of the gradient has very large variance compared to extragradient; this is shown in Fig. 7.\n\nC.5 ADDITIONAL RESULTS WITH ADAM" } ]
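As a concrete companion to Appendix B, here is a minimal PyTorch/SciPy sketch of the eigenvalue computation (CPU only, illustrative names; it assumes every parameter appears in the corresponding loss's computation graph). One detail worth noting: reverse-mode automatic differentiation of v(ω)⊤u returns the transposed product J(ω)⊤u, but J and J⊤ have the same eigenvalues, so this suffices for the spectra reported in the paper.

import torch
from scipy.sparse.linalg import LinearOperator, eigs

def top_k_game_jacobian_eigs(loss_g, loss_d, params_g, params_d, k=20):
    params = list(params_g) + list(params_d)
    # Game vector field: concatenated gradients, kept differentiable.
    grads = torch.autograd.grad(loss_g, params_g, create_graph=True) + \
            torch.autograd.grad(loss_d, params_d, create_graph=True)
    v = torch.cat([g.reshape(-1) for g in grads])
    n = v.numel()

    def matvec(u):
        # Differentiate the scalar v(ω)ᵀu: the Pearlmutter (1994) trick.
        u_t = torch.as_tensor(u, dtype=v.dtype)
        Ju = torch.autograd.grad(v @ u_t, params, retain_graph=True)
        return torch.cat([g.reshape(-1) for g in Ju]).detach().numpy()

    op = LinearOperator((n, n), matvec=matvec)
    return eigs(op, k=k, return_eigenvectors=False)  # top-k by magnitude

Restricting params (and the gradients used to build v) to a single player's parameters gives, in the same way, the top-k eigenvalues of that player's Hessian block, as used in Section 5.2.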
2020
null
SP:881f632bbadf0cac11ec1e466f02b26762f67073
[ "This paper studies the vulnerability of representations learned by variational auto-encoders (VAE). It first show that the learned representation of VAE is susceptible to small changes, similar to the adversarial examples in supervised learning setting. Then propose a regularization method, called smooth encoder, to improve the robustness of the representation. Experiments are conducted on several benchmark datasets to show the effectiveness of the method. ", "This paper analyzes the shortcoming of VAE objective, and propose a regularization method based on a selection mechanism that creates a fictive data point by explicitly perturbing an observed true data point. It is lead to Wasserstein distance between representations. Experiments are made on three datasets; ColorMNIST, MNIST, and CelebA, which shows superior performance on adversarial accuracy while similar accuracy to VAE on nominal accuracy." ]
This paper studies the undesired phenomenon of over-sensitivity of representations learned by deep networks to semantically-irrelevant changes in data. We identify a cause for this shortcoming in the classical Variational Auto-encoder (VAE) objective, the evidence lower bound (ELBO). We show that the ELBO fails to control the behaviour of the encoder outside the support of the empirical data distribution, and that this behaviour of the VAE can lead to extreme errors in the learned representation. This is a key hurdle in the effective use of representations for data-efficient learning and transfer. To address this problem, we propose to augment the data with specifications that enforce insensitivity of the representation with respect to families of transformations. To incorporate these specifications, we propose a regularization method that is based on a selection mechanism that creates a fictive data point by explicitly perturbing an observed true data point. For certain choices of parameters, our formulation naturally leads to the minimization of the entropy-regularized Wasserstein distance between representations. We illustrate our approach on standard datasets and experimentally show that significant improvements in the downstream adversarial accuracy can be achieved by learning robust representations completely in an unsupervised manner, without reference to a particular downstream task and without a costly supervised adversarial training procedure.
[ { "affiliations": [], "name": "SMOOTH ENCODERS" }, { "affiliations": [], "name": "A. Taylan Cemgil" }, { "affiliations": [], "name": "Sumedh Ghaisas" }, { "affiliations": [], "name": "Krishnamurthy Dvijotham" }, { "affiliations": [], "name": "Pushmeet Kohli" } ]
[ { "authors": [ "Alessandro Achille", "Stefano Soatto" ], "title": "Emergence of Invariance and Disentanglement in Deep Representations", "venue": "arXiv e-prints, art", "year": 2017 }, { "authors": [ "Shun-ichi Amari", "Ryo Karakida", "Masafumi Oizumi" ], "title": "Information Geometry Connecting Wasserstein Distance and Kullback-Leibler Divergence via the Entropy-Relaxed Transportation Problem", "venue": "arXiv e-prints, art", "year": 2017 }, { "authors": [ "Yogesh Balaji", "Hamed Hassani", "Rama Chellappa", "Soheil Feizi" ], "title": "Entropic GANs meet VAEs: A Statistical Approach to Compute Sample Likelihoods in GANs", "venue": "arXiv e-prints, art", "year": 2018 }, { "authors": [ "Yoshua Bengio" ], "title": "Deep Learning of Representations: Looking Forward", "venue": "arXiv e-prints, art", "year": 2013 }, { "authors": [ "Olivier Bousquet", "Sylvain Gelly", "Ilya Tolstikhin", "Carl-Johann Simon-Gabriel", "Bernhard Schoelkopf" ], "title": "From optimal transport to generative modeling: the VEGAN cookbook", "venue": "arXiv eprints, art", "year": 2017 }, { "authors": [ "Christopher P. Burgess", "Irina Higgins", "Arka Pal", "Loic Matthey", "Nick Watters", "Guillaume Desjardins", "Alexander Lerchner" ], "title": "Understanding disentangling in β-VAE", "venue": "arXiv e-prints, art", "year": 2018 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards Evaluating the Robustness of Neural Networks", "venue": "arXiv e-prints, art", "year": 2016 }, { "authors": [ "Marco Cuturi" ], "title": "Sinkhorn Distances: Lightspeed Computation of Optimal Transportation Distances", "venue": "arXiv e-prints, art", "year": 2013 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "venue": "arXiv e-prints, art", "year": 2018 }, { "authors": [ "Jeff Donahue", "Philipp Krähenbühl", "Trevor Darrell" ], "title": "Adversarial Feature Learning", "venue": "arXiv e-prints, art", "year": 2016 }, { "authors": [ "Vincent Dumoulin", "Ishmael Belghazi", "Ben Poole", "Olivier Mastropietro", "Alex Lamb", "Martin Arjovsky", "Aaron Courville" ], "title": "Adversarially Learned Inference", "venue": "arXiv e-prints, art", "year": 2016 }, { "authors": [ "Ian J. Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative Adversarial Networks", "venue": "arXiv e-prints, art", "year": 2014 }, { "authors": [ "Irina Higgins", "Loic Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "beta-vae: Learning basic visual concepts with a constrained variational framework", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Matthew D. Hoffman", "Matthew J. Johnson" ], "title": "Elbo surgery: yet another way to carve up the variational evidence lower bound", "venue": "In Advances in Approximate Bayesian Inference,", "year": 2016 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-Encoding Variational Bayes", "venue": "arXiv e-prints, art", "year": 2013 }, { "authors": [ "Soheil Kolouri", "Phillip E. Pope", "Charles E. Martin", "Gustavo K. 
Rohde" ], "title": "Sliced-Wasserstein Autoencoder: An Embarrassingly Simple Generative Model", "venue": "arXiv e-prints, art", "year": 2018 }, { "authors": [ "Jernej Kos", "Ian Fischer", "Dawn Song" ], "title": "Adversarial examples for generative models", "venue": "arXiv eprints, art", "year": 2017 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "Advances in Neural Information Processing Systems", "year": 2012 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial machine learning at scale", "venue": "arXiv preprint arXiv:1611.01236,", "year": 2016 }, { "authors": [ "John D. Lafferty", "Andrew McCallum", "Fernando C.N. Pereira" ], "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "venue": "In Proceedings of the Eighteenth International Conference on Machine Learning,", "year": 2001 }, { "authors": [ "Christos Louizos", "Xiahan Shi", "Klamer Schutte", "Max Welling" ], "title": "The Functional Neural Process", "venue": "arXiv e-prints, art", "year": 2019 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "arXiv preprint arXiv:1706.06083,", "year": 2017 }, { "authors": [ "Alireza Makhzani", "Jonathon Shlens", "Navdeep Jaitly", "Ian Goodfellow", "Brendan Frey" ], "title": "Adversarial Autoencoders", "venue": "arXiv e-prints, art", "year": 2015 }, { "authors": [ "Lars Mescheder", "Sebastian Nowozin", "Andreas Geiger" ], "title": "Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks", "venue": "arXiv e-prints, art", "year": 2017 }, { "authors": [ "Jimenez Danilo Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic Backpropagation and Approximate Inference in Deep Generative Models", "venue": "arXiv e-prints, art", "year": 2014 }, { "authors": [ "Justin Solomon" ], "title": "Optimal Transport on Discrete Domains", "venue": "arXiv e-prints, art", "year": 2018 }, { "authors": [ "Charles Sutton", "Andrew McCallum" ], "title": "An introduction to conditional random fields", "venue": "Found. Trends Mach. Learn.,", "year": 2012 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Da Tang", "Dawen Liang", "Tony Jebara", "Nicholas Ruozzi" ], "title": "Correlated Variational Auto-Encoders", "venue": "arXiv e-prints, art", "year": 2019 }, { "authors": [ "Qa", "Qb" ], "title": "A link between entropic GANs and VAEs is also pointed at in the literature, albeit for calculating a likelihood for GANs Balaji et al. (2018). However, our motivations as well as the interpretation of the connection is quite different and we view the GAN decoder as an instance of the smooth encoder", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Representation learning is a fundamental problem in Machine learning and holds the promise to enable data-efficient learning and transfer to new tasks. Researchers working in domains like Computer Vision (Krizhevsky et al., 2012) and Natural Language Processing (Devlin et al., 2018) have already demonstrated the effectiveness of representations and features computed by deep architectures for the solution of other tasks. A case in point is the example of the FC7 features from the AlexNet image classification architecture that have been used for many other vision problems (Krizhevsky et al., 2012).\nThe effectiveness of learned representations has given new impetus to research in representation learning, leading to a lot of work being done on the development of techniques for inducing representations from data having desirable properties like disentanglement and compactness (Burgess et al., 2018; Achille & Soatto, 2017; Bengio, 2013; Locatello et al., 2019). Many popular techniques for generating representation are based on the Variational AutoEncoders (VAE) model (Kingma & Welling, 2013; Rezende et al., 2014). The use of deep networks as universal function approximators has facilitated very rapid advancements which samples generated from these models often being indistinguishable from natural data. While the quality of generated examples can provide significant convincing evidence that a generative model is flexible enough to capture the variability in the data distribution, it is far from a formal guarantee that the representation is fit for other purposes. In fact, if the actual goal is learning good latent representations, evaluating generative models only based on reconstruction fidelity and subjective quality of typical samples is neither sufficient nor entirely necessary, and can be even misleading.\nIn this paper, we uncover the problematic failure mode where representations learned by VAEs exhibit over-sensitivity to semantically-irrelevant changes in data. One example of such problematic\nbehaviour can be seen in Figure 1. We identify a cause for this shortcoming in the classical Variational Auto-encoder (VAE) objective, the evidence lower bound (ELBO), that fails to control the behaviour of the encoder out of the support of the empirical data distribution. We show this behaviour of the VAE can lead to extreme errors in the recovered representation by the encoder and is a key hurdle in the effective use of representations for data-efficient learning and transfer. To address this problem, we propose to augment the data with properties that enforce insensitivity of the representation with respect to families of transformations.\nTo incorporate these specifications, we propose a regularization method that is based on a selection mechanism that creates a fictive data point by explicitly perturbing an observed true data point. For certain choices of parameters, our formulation naturally leads to the minimization of the entropy regularized Wasserstein distance between representations. 
We illustrate our approach on standard datasets and experimentally show that significant improvements in the downstream adversarial accuracy can be achieved by learning robust representations completely in an unsupervised manner, without a reference to a particular downstream task and without a costly supervised adversarial training procedure.
It is clear that if learned representations are overly sensitive to irrelevant changes in the input (for example, small changes in the pixels of an image or video, or inaudible frequencies added to an audio signal), models that rely on these representations are naturally susceptible to making incorrect predictions when inputs are changed. We argue that such specifications about the robustness properties of learned representations can be one of the tractable guiding features in the search for good representations. Based on these observations, we make the following contributions:
1. We introduce a method for learning robust latent representations by explicitly targeting a structured model that admits the original VAE model as a marginal. We also show that when the target is chosen as a pairwise conditional random field with attractive potentials, this choice leads naturally to the Wasserstein divergence between posterior distributions over the latent space. This insight provides us with a flexible class of robustness metrics for controlling representations learned by VAEs.
2. We develop a modification to training algorithms for VAEs to improve the robustness of learned representations, using an external selection mechanism for obtaining transformed examples and by enforcing the corresponding representations to be close. As a particular selection mechanism, we adapt attacks from adversarial supervised learning (Madry et al., 2017) into attacks on the latent representation. Using this novel unsupervised training procedure we learn encoders with adjustable robustness properties and show that these are effective at learning representations that perform well across a variety of downstream tasks.
3. We show that alternative models proposed in the literature, in particular the β-VAE model, used for explicitly controlling the learned representations, and Wasserstein Generative Adversarial Networks (GANs), can also be interpreted in our framework as variational lower bound maximization.
4. We show empirically, using simulation studies on the MNIST, color MNIST and CelebA datasets, that models trained using our method learn representations that provide a higher degree of adversarial robustness even without supervised adversarial training." }, { "heading": "2 GENERATIVE MODELS", "text": "Modern generative models are samplers p(X|θ) for generating realizations from an ideal target distribution π(X), also known as the data distribution. In practice π(X) is unknown in the sense that it is hard to formally specify. Instead, we have a representative data set X of samples that are assumed to be conditionally independently drawn from the data distribution π(X) of interest. We will refer to the empirical distribution as π̂(X) = (1/|X|) ∑_{ξ∈X} δ(X − ξ). The goal is learning a parameter θ∗ such that p(X|θ = θ∗) = ∫ dZ p(X|Z, θ = θ∗) p(Z) ≈ π̂(X), thereby also learning a generator." }, { "heading": "2.1 FROM VAE TO SMOOTH ENCODERS", "text": "The VAE corresponds to the latent variable model p(X|Z, θ)p(Z) with latent variable Z and observation X. 
The forward model p(X|Z = z, θ) (the decoder) is represented using a neural network g with parameters θ, usually as the mean of a Gaussian N(X; g(z; θ), vI_x), where v is a scalar observation noise variance and I_x is an identity matrix. The prior is usually a standard Gaussian p(Z = z) = N(z; 0, I_z). The exact posterior over latent variables p(Z|X = x, θ) is approximated by a probability model q(Z|X = x, η) with parameters η. A popular choice here is a multivariate Gaussian N(Z; µ(x; η), Σ(x; η)), where the mapping f such that (µ, Σ) = f(x, η) is chosen to be a neural network (with parameters η to be learned from data). We will refer to the pair f, g as an encoder-decoder pair. Under the above assumptions, VAEs are trained by maximizing the following form of the ELBO using stochastic gradient descent (SGD):
log p(X = x|θ) ≥ E{log p(X = x|Z, θ)}_{q(Z|X=x,η)} − DKL(q(Z|X = x, η) || p(Z)) ≡ Bx(η, θ)
The gradient of the Kullback-Leibler (KL) divergence term above (see A.1) is available in closed form. An unbiased estimate of the gradient of the first term can be obtained by sampling z from q using the reparametrization trick (Kingma & Welling, 2013), aided by automatic differentiation." }, { "heading": "2.2 A PROBLEM WITH THE VAE OBJECTIVE", "text": "Under the i.i.d. assumption, where each data point x^(n), for n = 1, . . . , N, is independently drawn from the model, an equivalent batch ELBO objective can be defined (see also E.1) as
B(η, θ) ≡ (1/N) ∑_{n=1}^{N} B_{x^(n)}(η, θ) = −DKL(π̂(X) q(Z|X, η) || p(X|Z, θ) p(Z)) + const (1)
where the empirical distribution of the observed data is denoted as π̂. This form makes it clearer that the variational lower bound is only calculating the distance between the encoder and decoder under the support of the empirical distribution.
To see how this locality leads to a fragile representation, we construct a VAE with discrete latents and observations. We let X ∈ {1, . . . , Nx} and Z ∈ {1, . . . , Nz} and define the following system of conditional distributions as the decoder and encoder models:
p(X = i|Z = j) ∝ exp((1/v) ω(m_j − i/Nx)),  q(Z = j|X = i) ∝ exp((1/σ_i) ω(µ_i − j/Nz))
where ω(u) = cos(2πu). These distributions can be visualized by heatmaps of probability tables where i and j are row and column indices, respectively (Figure 2). This particular von-Mises-like parametrization is chosen to avoid boundary effects due to the finite latent and observable spaces.
Figure 2: Example VAE model. (left) Heatmap of the encoder distribution (darker colors referring to higher probability) q(Z = j|X = i; µ_i, σ_i), where each row i is a probability distribution over latents with a mode around µ_i and spread σ_i. (middle) Heatmap of the decoder distribution p(X = i|Z = j, m_j, v), where each column j is a probability distribution with mode at m_j and spread v. The prior p(Z) is chosen to be uniform and is not shown here. (right) The marginal model p(X = i|m, v) = ∑_{j=1}^{Nz} p(Z = j) p(X = i|Z = j, m_j, v), depicted as a histogram.
The prior p(Z) is taken as uniform, and is not shown. Note that this parametrization emulates a high capacity network that can model any functional relationship between latent states and observations, while being qualitatively similar to a standard VAE model with conditionally Gaussian decoder and encoder functions.
In reality, the true target density is not available but we would have a representative sample. 
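The toy model above is small enough to construct directly. A minimal NumPy sketch of the tabular decoder/encoder and the marginal p(X); the sizes Nx, Nz and the parameter draws are arbitrary illustrative choices of ours, not the paper's settings:

```python
import numpy as np

def build_tables(m, v, mu, sigma, Nx, Nz):
    """Tabular encoder/decoder of Section 2.2 with omega(u) = cos(2*pi*u)."""
    i = np.arange(1, Nx + 1)[:, None]          # observation index (rows)
    j = np.arange(1, Nz + 1)[None, :]          # latent index (columns)
    omega = lambda u: np.cos(2.0 * np.pi * u)
    dec = np.exp(omega(m[None, :] - i / Nx) / v)                # p(X=i | Z=j)
    dec /= dec.sum(axis=0, keepdims=True)                       # normalize columns
    enc = np.exp(omega(mu[:, None] - j / Nz) / sigma[:, None])  # q(Z=j | X=i)
    enc /= enc.sum(axis=1, keepdims=True)                       # normalize rows
    return enc, dec

Nx, Nz = 60, 20
rng = np.random.default_rng(0)
m = np.sort(rng.uniform(size=Nz))              # decoder modes m_j
mu, sigma = rng.uniform(size=Nx), 0.2 * np.ones(Nx)
enc, dec = build_tables(m, 0.05, mu, sigma, Nx, Nz)
marginal = dec @ np.full(Nz, 1.0 / Nz)         # p(X=i) under the uniform prior
```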
To simulate this scenario, we sample a ’dataset’ from a discrete target distribution π(X): this is merely a draw from a multinomial distribution, yielding a multinomial vector s with entries s_i that give the count of how many times we observe x = i. The results of such an experiment are depicted in Figure 3(a) (see caption for details). This picture reveals several important properties of a VAE approximation.
1. After training, we observe that when j and j′ are close, the corresponding conditionals p(X|Z = j) and p(X|Z = j′) are close (hence the corresponding decoder mean parameters m_j and m_j′ are close; see the middle panel of Fig. 3(a) showing the decoder). This smoothness is perhaps surprising at first sight: in this example, we could arbitrarily permute the columns of the decoder and still get the same marginal distribution. Technically speaking, given a uniform prior p(Z), the marginal likelihood p(X|θ) is entirely invariant with respect to permutations of the latent state. In fact, if the encoder distribution were not constrained, we could also permute the columns of the encoder to keep the ELBO invariant. In appendix E.2, we provide an argument for why the choice of a unimodal encoder model and optimization of the variational objective leads naturally to smooth decoder functions.
2. The encoders found by the VAE, on the other hand, are not smooth at all, despite the fact that the model shows a relatively good fit. This behaviour warns us against judging generative models only by the quality of their samples, e.g., by traversing the latent space and generating conditional samples from the decoder. The quality of the decoder seems not to be a proxy for the robustness of the representation.
The fragility of representations is inherent to the ELBO objective. For the entire dataset, a batch ELBO that involves the counts s_i can be written as
ELBO = − ∑_i ∑_j s_i q(Z = j|Xa = i) log [ s_i q(Z = j|Xa = i) / (p(X = i|Z = j) p(Z = j)) ] (2)
The last expression is proportional to the negative KL divergence between two tabular distributions: s_i q(Z = j|Xa = i)/L and p(X = i|Z = j) p(Z = j). As such, whenever s_i is zero, the contribution of row i of the encoder distribution vanishes and the corresponding parameters µ_i and σ_i are not affecting the lower bound. In a sense, the objective does not enforce any structure on the encoder outside of the positions of the data points in the training set. The figure shows that in the out-of-sample regime (i.e., for i where π̂(X = i) = 0) the encoder is entirely initialization dependent, hence no learning takes place. We would also expect that the resulting representations would be fragile, in the sense that a small perturbation of an observation can result in a large change in the encoder output." }, { "heading": "3 ROBUST REPRESENTATIONS WITH SMOOTH ENCODERS", "text": "In this section, we will adopt a strategy for training the encoder that is guaranteed not to change the original objective of the decoder when maximizing the lower bound, while obtaining a smoother representation. The key idea of our approach is that we assume an external selection mechanism that is able to provide a new fictive data point x′ in the vicinity of each observation x in our data set. Here, “in the vicinity” means that we desire that the corresponding latent state of the original data point, z = f(x; η), and the latent state of the fictitious point, z′ = f(x′; η), should be close to each other in some sense. 
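Before turning to the smooth encoder, the locality argument around Eq. (2) can be verified numerically: encoder rows with s_i = 0 can be perturbed arbitrarily without changing the objective. A sketch continuing the NumPy example above (the discrete_elbo helper and the roll-based perturbation are our own illustrative choices; the ELBO is written in the normalized KL form noted in the text):

```python
def discrete_elbo(s, enc, dec, pz):
    """Batch ELBO of Eq. (2), normalized by L = sum(s); s[i] counts x = i."""
    L = s.sum()
    q = (s[:, None] / L) * enc            # joint  s_i q(Z=j | Xa=i) / L
    p = dec * pz[None, :]                 # joint  p(X=i | Z=j) p(Z=j)
    mask = q > 0                          # rows with s_i = 0 contribute nothing
    return -np.sum(q[mask] * np.log(q[mask] / p[mask]))

pz = np.full(Nz, 1.0 / Nz)
s = rng.multinomial(500, marginal)        # a sampled 'dataset' of counts
base = discrete_elbo(s, enc, dec, pz)
enc2 = enc.copy()
enc2[s == 0] = np.roll(enc2[s == 0], 3, axis=1)   # scramble only unobserved rows
assert np.isclose(base, discrete_elbo(s, enc2, dec, pz))  # ELBO unchanged
```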
Assuming the existence of such an external selection mechanism, we first define the following augmented distribution
p(X = x, X′ = x′|θ) ∝ ∫ p(X = x|Za, θ) p(X′ = x′|Zb, θ) ψ(Za, Zb) dZa dZb (3)
where ψ(Za, Zb) = exp(−(γ/2) c(Za, Zb)) p(Za) p(Zb). This is a pairwise conditional Markov random field (CRF) model (Lafferty et al., 2001; Sutton & McCallum, 2012), where we take c(Za, Zb) as a pairwise cost function. A natural choice here would be, for example, the squared Euclidean distance ‖Za − Zb‖². Moreover, we choose a nonnegative coupling parameter γ ≥ 0. For any pairwise distribution Q(Za, Zb), the ELBO has the following form
log p(X = x, X′ = x′|θ) ≥ E{log p(X = x|Za, θ)}_{Q(Za)} + E{log p(Za)}_{Q(Za)} + E{log p(X′ = x′|Zb, θ)}_{Q(Zb)} + E{log p(Zb)}_{Q(Zb)} − (γ/2) E{c(Za, Zb)}_{Q(Za,Zb)} + H(Q(Za, Zb)) (4)
It may appear that the smooth encoder (SE) has to maintain a pairwise approximating distribution Q(Za, Zb). However, this turns out not to be necessary. Given the encoder, the marginals of Q(Za, Zb) are fixed as Qa(Za) = q(Z|Xa = x, η) and Qb(Zb) = q(Z|Xb = x′, η), so the only remaining terms that depend on the pair distribution are the final two terms in (4). We note that these two terms are just the objective function of the entropy-regularized optimal transport problem (Cuturi, 2013; Amari et al., 2017). If we view Q(Za, Zb) as a transport plan, the first term is maximal when the expected cost is minimal, while the second term is maximal when the variational distribution is factorized as Q(Za, Zb) = Qa(Za)Qb(Zb).
In retrospect, this link is perhaps not that surprising, as the Wasserstein distance, the solution of the optimal transport problem, is itself defined as the solution to a variational problem (Solomon, 2018): Consider a set Γ of joint densities Q(Za, Zb) with the property that Q has fixed marginals Qa(Za) and Qb(Zb), i.e.,
Γ[Qa, Qb] ≡ { Q : Qa(Za) = ∫ Q(Za, Zb) dZb, Qb(Zb) = ∫ Q(Za, Zb) dZa } (5)
The Wasserstein divergence1, denoted by WD, is defined as the solution of the optimization problem with respect to the pairwise distribution Q
WD[c](Qa, Qb) = inf_{Q∈Γ} ∫ c(Za, Zb) Q(Za, Zb) dZa dZb (6)
where c(Za, Zb) is a function that specifies the ‘cost’ of transferring a unit of probability mass from Za to Zb.
It is important to note that with our choice of the particular form of the variational distribution Q(Za, Zb), we can ensure that we are still optimizing a lower bound of the original problem. We can achieve this by simply integrating out X′, effectively ignoring the likelihood term for the fictive observations. Our choice does not modify the original objective of the decoder due to the fact that the marginals are fixed given η. To see this, take the exponential of (4) and integrate over the unobserved X′:
log p(X = x|θ) = log ∫ dX′ p(X = x, X′|θ)
≥ E{log p(X = x|Za, θ)}_{Q(Za)} + E{log p(Za)}_{Q(Za)} + E{log p(Zb)}_{Q(Zb)} − (γ/2) E{c(Za, Zb)}_{Q(Za,Zb)} + H(Q(Za, Zb)) ≡ B_SE(θ, η) (7)
We name this lower bound B_SE the Smooth Encoder ELBO (SE-ELBO). The gradient of B_SE with respect to the decoder parameters θ is identical to the gradient of the original VAE objective B. This is intuitive: as x′ is an artificially generated sample, we should use only terms that depend on x and not on x′. Another advantage of this choice is that it is possible to optimize the decoder and encoder concurrently, as in the standard VAE. 
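As an aside, the entropy-regularized transport problem appearing in the last two terms of (4) can be solved generically by Sinkhorn iterations (Cuturi, 2013). A minimal NumPy sketch for discrete marginals; the grid, γ, and iteration count are arbitrary illustrations:

```python
import numpy as np

def sinkhorn(qa, qb, C, gamma, iters=200):
    """Minimize (gamma/2) <C, Q> - H(Q) over couplings Q with marginals qa, qb
    via Sinkhorn iterations (Cuturi, 2013); returns the optimal plan."""
    K = np.exp(-0.5 * gamma * C)          # Gibbs kernel
    u = np.ones_like(qa)
    for _ in range(iters):
        v = qb / (K.T @ u)                # match the second marginal
        u = qa / (K @ v)                  # match the first marginal
    return u[:, None] * K * v[None, :]    # transport plan Q(Za, Zb)

z = np.linspace(-3.0, 3.0, 50)
C = (z[:, None] - z[None, :]) ** 2        # pairwise cost c(Za, Zb)
qa = np.exp(-0.5 * z ** 2); qa /= qa.sum()
qb = np.exp(-0.5 * (z - 1.0) ** 2); qb /= qb.sum()
Q = sinkhorn(qa, qb, C, gamma=5.0)
obj = 0.5 * 5.0 * np.sum(Q * C) + np.sum(Q * np.log(Q + 1e-300))  # value of the two terms
```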
Only an additional term enters for the regularization of the encoder, where the marginals obtained via amortized inference, q(Za|xa, η) and q(Zb|xb, η), are forced to be close in a regularized Wasserstein distance sense, with coupling strength γ. Effectively, we are doing data augmentation for smoothing the representations obtained by the encoder without changing the actual data distribution. In appendix E.3, we also provide an argument about the smoothness of the corresponding encoder mapping, justifying the name. The resulting algorithm is actually a simple modification to the standard VAE and is summarized below:
Initialize η^(0), θ^(0)
for τ = 1, 2, . . . do
  xa = GetData(), xb = Select(xa; L, ε) (see Section 3.1)
  µa, Σa = f(xa; η), µb, Σb = f(xb; η) (Compute Representation)
  WD_{2,γ}(η) = EntropyRegularizedWassersteinDivergence(µa, Σa, µb, Σb, γ) (see Apdx. B.2)
  u ∼ N(0, I) (Reparametrization Trick)
  E1(η, θ) = −(1/(2v)) ‖xa − g(µa + Σa^{1/2} u; θ)‖² (Data Fidelity)
  E2(η) = −(1/2) ( ‖µa‖² + ‖µb‖² + Tr{Σa + Σb} ) (Prior Fidelity)
  E(η, θ) = E1(η, θ) + E2(η) − WD_{2,γ}(η)
  η^(τ), θ^(τ) = OptimizationStep(E, η^(τ−1), θ^(τ−1))
end" }, { "heading": "3.1 SELECTION MECHANISM VIA ADVERSARIAL ATTACKS", "text": "Adversarial attacks are one of the most popular approaches for probing trained models in supervised tasks, where the goal of an adversarial attack is finding small perturbations to an input example that would maximally change the output, e.g., flip a classification decision or significantly change a prediction (Szegedy et al., 2013). The perturbed input is called an adversarial example, and these extra examples are used, along with the original data points, for training adversarially robust models (Madry et al., 2017; Kurakin et al., 2016). As extra samples are also included, such a training procedure is referred to as data augmentation. However, in unsupervised learning and density estimation, data augmentation is not a valid approach, as the underlying empirical distribution would be altered by introducing new points.
1We use the term divergence to distinguish the optimal transport cost from the corresponding metric. This distinction is reminiscent of the distinction between the Euclidean divergence ‖ · ‖² and the Euclidean distance ‖ · ‖.
However, as we let the encoder target a different distribution than the actual decoder, we can actually use the extra, self-generated samples to improve desirable properties of a given model. Hence this approach could also be interpreted as a ’self-supervised’ learning approach where we bias our search for a ’good encoder’ and the data selection mechanism acts like a critic, carefully providing examples that should lead to similar representations.
In this paper we will restrict ourselves to Projected Gradient Descent (PGD) attacks, popular in adversarial training (Carlini & Wagner, 2016), as a selection mechanism, where the goal of the attacker is finding a point that would introduce the maximum difference in the Wasserstein distance of the latent representation. In other words, we implement our selection mechanism such that the extra data point is found by approximately solving the following constrained optimization problem
x′ = x + arg max_{d : ‖d‖p ≤ ε} WD(q(Z|X = x, η), q(Z|X′ = x + d, η))
This attack is assigned a certain iteration budget L for a given radius ε; we refer to these as the selection iteration budget and the selection radius, respectively. 
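The loop above can be sketched in a PyTorch-like form. This is our own illustrative reading, not the authors' code: f is assumed to return the mean and log-variance of a factorized Gaussian encoder, g is the decoder mean, and wd2 is assumed to implement the closed-form divergence WD_{2,γ} of Appendix B.2 with γ baked in:

```python
import torch

def select(f, x, eps, steps, alpha, wd2):
    """PGD selection (Section 3.1): find x' = x + d, ||d||_inf <= eps,
    approximately maximizing the latent Wasserstein divergence."""
    delta = torch.zeros_like(x, requires_grad=True)
    mu_a, logvar_a = f(x)
    mu_a, logvar_a = mu_a.detach(), logvar_a.detach()
    for _ in range(steps):
        mu_b, logvar_b = f(x + delta)
        loss = wd2(mu_a, logvar_a, mu_b, logvar_b)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()   # gradient-ascent step
            delta.clamp_(-eps, eps)        # project back onto the eps-ball
    return (x + delta).detach()

def neg_se_elbo(f, g, x, x_fict, v, wd2):
    """Negative SE-ELBO (Eq. 7), dropping additive constants; to be minimized."""
    mu_a, logvar_a = f(x)
    mu_b, logvar_b = f(x_fict)
    z = mu_a + torch.exp(0.5 * logvar_a) * torch.randn_like(mu_a)  # reparam. trick
    data_fid = ((x - g(z)) ** 2).sum() / (2.0 * v)                 # -E1
    prior_fid = 0.5 * ((mu_a ** 2 + logvar_a.exp()).sum()
                       + (mu_b ** 2 + logvar_b.exp()).sum())       # -E2
    return data_fid + prior_fid + wd2(mu_a, logvar_a, mu_b, logvar_b)
```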
We note that a similar attack mechanism has been proposed for generative models (Kos et al., 2017), where one of the proposed attacks directly optimizes against differences in source and target latent representations. Note that our method is not restricted to a particular selection mechanism; indeed, any two inputs that should give a similar latent representation could be used as candidates." }, { "heading": "4 EXPERIMENTS", "text": "Goal and Protocol In our experiments, we have tested and compared the adversarial accuracy of representations learned using a VAE and our smooth encoder approach. We adopt a two-step experimental protocol, where we first train encoder-decoder pairs agnostic to any downstream task. Then we fix the representation, that is, we freeze the encoder parameters and only use the mean of the encoder as the representation, and train a simple linear classifier based on the fixed representation using standard techniques. In this supervised stage, no adversarial training technique is employed. Ideally, we hope that such an approach will provide a degree of adversarial robustness, without the need for a costly, task-specific adversarial training procedure. To evaluate the robustness of the resulting classifier, for each data point in the test set we search for an adversarial example using an untargeted attack that tries to change the classification decision. The adversarial accuracy is reported as the percentage of examples for which the attack is not able to find an adversarial example.
The VAE and SE decoder and encoder are implemented using standard MLP and ConvNet architectures. The selection procedure for SE training is implemented as a projected gradient descent optimization (a PGD attack) with a selection iteration budget of L iterations to maximize the Wasserstein distance between q(Z|X = x) and q(Z|X = x + δ) with respect to the perturbation δ, where ‖δ‖∞ < ε. Further details about the experiments can be found in appendix C.1. Results: We run simulations on the ColorMNIST, MNIST and CelebA datasets. ColorMNIST is constructed from the MNIST dataset by coloring each digit artificially with all of the colors corresponding to seven of the eight corners of the RGB cube (excluding black). We present the results with the strongest attack we have experimented with: a PGD attack with 100 iterations and 10 restarts. We observe that for weaker attacks (such as 50 iterations with no restarts), the adversarial accuracy is typically much higher.
For the ColorMNIST dataset, the results are shown in Figure 4, where we test the adversarial accuracy of representations learned by our method and compare it to a VAE. We observe that the adversarial accuracy of a VAE representation quickly drops towards zero, while SE can maintain adversarial accuracy in both tasks. In particular, we observe that for the arguably simpler color classification task, we are able to obtain close to perfect nominal test accuracy using representations learned by the VAE and SE. However, when the classifiers are attacked using PGD, the adversarial accuracy quickly drops with increasing radius size, while the accuracy degrades more gracefully in the SE case.
In Figure 5, we show the robustness behaviour of the method for different architectures. A ConvNet seems to perform relatively better than an MLP, but these results show that the VAE representation is not robust, irrespective of the architecture. 
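The evaluation stage of this protocol can be sketched as follows. This is a hedged PyTorch-style sketch with hypothetical names, not the authors' code: f_mu is the frozen encoder mean, clf the linear classifier, and we omit the 10 random restarts used in the strongest attack:

```python
import torch

def adversarial_accuracy(f_mu, clf, xs, ys, eps, steps=100):
    """Fraction of test points for which untargeted PGD on clf(f_mu(x)) fails."""
    alpha = 2.5 * eps / steps                 # a common step-size heuristic
    loss_fn = torch.nn.CrossEntropyLoss()
    robust = 0
    for x, y in zip(xs, ys):
        x, y = x.unsqueeze(0), y.unsqueeze(0)
        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(steps):
            loss = loss_fn(clf(f_mu(x + delta)), y)
            grad, = torch.autograd.grad(loss, delta)
            with torch.no_grad():
                delta += alpha * grad.sign()  # maximize the classification loss
                delta.clamp_(-eps, eps)       # stay inside the eps-ball
        robust += int(clf(f_mu(x + delta)).argmax(dim=1).eq(y).item())
    return robust / len(xs)
```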
We have also carried out controlled experiments with random selection instead of the more costly untargeted adversarial attacks (see appendix C.1, Figure 7(a), for further results). We observe some limited improvements in adversarial accuracy with SE using random selection compared to VAE, but training an SE with adversarial selection seems to be much more effective. We note that the selection iteration budget was lower (L = 20 with no restarts) than the attack iteration budget (100 with 10 restarts) during evaluation. It was not practical to train the encoder with more powerful selection attacks, thus it remains to be seen if the tendency of increased accuracy with increased iteration budgets would continue. We also observe that essentially the same level of adversarial accuracy can be obtained with a small fraction of the available labels (see appendix C.1, Figure 8, for further results).
We have also repeated our experiments on the CelebA dataset, a large collection of high-resolution face images labeled with 40 attribute labels per example. We have used 17 of the attribute labels as the targets of 17 different downstream classification tasks. The results are shown in Table 2. The results clearly illustrate that we can achieve much more robust representations than a VAE. It is also informative to investigate specific adversarial examples to understand the failure modes. In Figure 6 we show two illustrative examples from CelebA. Here we observe that attacks on the SE representations are much more structured and semantically interpretable. In our exploratory investigations, we qualitatively observe that the reconstructions corresponding to the adversarial examples are almost always legitimate face images with clearly recognizable features. This also seems to support our earlier observation that VAE decoders are typically smooth while the encoders are inferring non-robust features. Our approach seems to be a step towards obtaining more robust representations." }, { "heading": "5 RELATED WORK", "text": "The literature on deep generative models and representation learning is quite extensive and is rapidly expanding. There is a plethora of models, but some approaches have been quite popular in recent years: Generative Adversarial Networks (GANs) and VAEs. While the connection of our approach to VAEs is evident, there is also a connection to GANs. In the appendix, we provide the details, where we show that a GAN decoder can be viewed as an instance of a particular smooth encoder. Our method is closely related to the β-VAE (Higgins et al., 2017), which is used for controlling representations and replaces the original variational objective (1) with another one that explicitly trades data fidelity against prior fidelity. In the appendix, we show that this method can also be viewed as an instance of smooth encoders.
Wasserstein distance minimization has been applied in generative models as an alternative objective for fitting the decoder. Following the general framework sketched in Bousquet et al. (2017), the terms of the variational decomposition of the marginal likelihood can be modified in order to change the observation model or the regulariser. For example, Wasserstein AutoEncoders (WAE) (Tolstikhin et al., 2017; Zhang et al., 2019) or sliced Wasserstein Autoencoders (Kolouri et al., 2018) propose to replace the data fidelity and/or the KL terms with a Wasserstein distance. 
Our approach is different from these approaches as we do not propose to replace the likelihood as a fundamental principle for data fitting. In contrast, the Wasserstein distance formulation naturally emerges from the particular model choice and the corresponding variational approximation.
Our approach involves an adversarial selection step. The word ’adversarial’ is an overloaded term in generative modelling, so it is important to clarify how our approach differs. Adversarial Variational Bayes is a well-known technique in the literature that aims to combine the empirical success of GANs with the probabilistic formulation of VAEs, where the limiting functional form of the variational distribution can be replaced by black-box inference (Mescheder et al., 2017). This approach also does not modify the original VAE objective; however, the motivation there is different, as the aim is developing a richer family. In our view, for learning useful representations when the decoder is unknown, the advantage of having a more powerful approximating family is not clear yet; this can even make the task of learning a good representation harder. Adversarial Autoencoders (Makhzani et al., 2015), Adversarially Learned Inference (ALI) (Dumoulin et al., 2016) and BiGANs (Bidirectional GANs) (Donahue et al., 2016) are also techniques that combine ideas from GANs and VAEs for learning generative models. The key idea is matching an encoder process q(z|x)p(x) to the decoder process p(z)p(x|z) using an alternative objective, rather than by minimizing the KL divergence. In this formulation, p(x) is approximated by the empirical data distribution, and p(z) is the prior model of a VAE. The encoder q(z|x) and decoder p(x|z) are modelled using deep networks. This approach is similar to Wasserstein autoencoders that propose to replace the likelihood principle.
The idea of improving VAEs by capturing the correlation structure between data points using MRFs and graphical models has also been recently proposed (Tang et al., 2019) under the name Correlated Variational Auto-Encoders (CVAEs). Our approach is similar; however, we introduce the correlation structure not between individual data points but only between true data points and artificially selected data points. We believe that correctly selecting such a correlation structure over the individual data points can be quite hard in practice, but if such prior knowledge is available, a CVAE can indeed be a much more powerful model than a VAE. We note that a proposal for automatically learning such a correlation structure has also recently been made by Louizos et al. (2019)." }, { "heading": "6 DISCUSSION AND CONCLUSIONS", "text": "In this paper, we have introduced a method for improving the robustness of latent representations learned by a VAE. It must be stressed that our goal is not building the most powerful adversarially robust supervised classifier, but obtaining a method for learning generic representations that can be used for several tasks; the tasks can even be unknown at the time of learning the representations. While the nominal accuracy of an unsupervised approach is expected to be inferior to that of a supervised training method that is informed by extra label information, we observe that significant improvements in adversarial robustness can be achieved by our approach that forces smooth representations." 
}, { "heading": "ACKNOWLEDGMENTS", "text": "We are grateful to Arnaud Doucet for the insights and references about optimal transport and Arnaud Doucet, Johannes Welbl, Sven Gowal and Michalis Titsias for their helpful comments to earlier drafts of this paper." }, { "heading": "A APPENDIX", "text": "A.1 KL DIVERGENCE\nThe KL divergence between two Gaussian distributions translates to a well known divergence in the parameters (in the general case this is a Bregman divergence)\nKL(Pa||Pb) = 1\n2\n( TrΣ−1b (Σa − Σb)− log |Σ −1 b Σa| ) + 1\n2 (µa − µb)>Σ−1b (µa − µb) (8)\nwhere Pa = N (µa,Σa) and Pb = N (µb,Σb) are Gaussian densities with mean µ· and covariance matrix Σ·, and | · | denotes the determinant for a matrix argument, and Tr denotes the trace. The KL divergence consists of two terms, the first term is the scale invariant divergence between two covariance matrices also known as a Itakuro-Saito divergence and the second term is a Mahalonobis distance between the means. The KL divergence is invariant to the choice of parametrization or the choice of the coordinate system." }, { "heading": "B OPTIMAL TRANSPORT AND WASSERSTEIN DISTANCE", "text": "Consider a set Γ of joint densities Q(Za, Zb) with the property that Q has fixed marginals Qa(Za) and Qb(Zb), i.e.,\nΓ[Qa, Qb] ≡ { Q : Qa(Za) = ∫ Q(Za, Zb)dZb, Qb(Zb) = ∫ Q(Za, Zb)dZa } (9)\nThe Wasserstein divergenceWD is defined as the solution of the optimization problem with respect to pairwise distribution Q\nWD[c](Qa, Qb) = inf Q∈Γ\n∫ c(Za, Zb)Q(Za, Zb)dZadZb (10)\nwhere c(za, zb) is a function that specifies the ‘cost’ of transferring a unit of probability mass from za to zb.\nB.1 `2-WASSERSTEIN DISTANCEW\nThe `2-Wasserstein distanceW22 for two Gaussians has an interesting form. The optimum transport plan, where the minimum of (10) is attained, is given\nQ∗(za, zb) = N ((\nµa µb\n) , ( Σa Ψ Ψ Σb )) (11)\nwhere Ψ = ΣaΣ 1/2 b (Σ 1/2 b ΣaΣ 1/2 b ) −1/2Σ 1/2 b . It can be checked that this optimal Guassian density is degenerate in the sense that there exists a linear mapping between za and zb:\nza(zb) = µa + ΣaΣ 1/2 b (Σ 1/2 b ΣaΣ 1/2 b ) −1/2Σ −1/2 b (zb − µb)\nwhere A1/2 denotes the matrix square root, a symmetric matrix such that (A1/2)2 = A for a symmetric positive semidefinite matrix A. The `2-Wasserstein distance is the value attained by the optimum transport plan\nW22 (Pa, Pb) = ‖µa − µb‖22 + Tr ( Σa + Σb − 2 ( Σ 1/2 b ΣaΣ 1/2 b )1/2) (12)\nB.2 ENTROPY REGULARIZED `2-WASSERSTEIN DISTANCE\nEntropy Regularized `2-Wasserstein is the value attained by the minimizer of the following functional\nF [Q] = γ 2 E { Tr(Za − Zb)(Za − Zb)> } Q(Za,Zb) −H[Q(Za, Zb)] (13)\nwhere H is the entropy of the joint distribution Q. Using the form in (11) subject to the semidefinite constraint Σa −ΨΣ−1b Ψ> 0\nTr (za − zb) (za − zb)> = −2Tr(Ψ) + const (14)\nThe entropy of a Gaussian Q(za, zb) is given by the Schur formula\nH[Q(za, zb)] = D\n2 log(2πe) +\n1 2 log |Σb||Σa −ΨΣ−1b Ψ >| (15)\nHere, D is the dimension of the vector (za, zb). 
The entropy-regularized problem has a solution where we need to minimize
F̃(Ψ) = −γ Tr(Ψ) − (1/2) log |Σa − ΨΣb^{-1}Ψ^T| (16)
Taking the derivative and setting it to zero,
∂F̃(Ψ)/∂Ψ = −γI + Σb^{-1}Ψ^T (Σa − ΨΣb^{-1}Ψ^T)^{-1} (17)
we obtain a particular matrix Riccati equation
0 = −ΨΣb^{-1}Ψ^T − (1/γ) Σb^{-1}Ψ^T + Σa (18)
that gives us a closed-form formula for the specific entropy-regularized Wasserstein distance
W²_{2,γ}(N(ma, Σa), N(mb, Σb)) = ‖ma − mb‖2² + Tr{Σa + Σb − 2Ψ} (19)
WD_{2,γ}(N(ma, Σa), N(mb, Σb)) ≡ (γ/2) W²_{2,γ}(N(ma, Σa), N(mb, Σb)) (20)
− (D/2) log(2πe) − (1/2) log( |Σb| |Σa − ΨΣb^{-1}Ψ^T| ) (21)
For the case of two univariate Gaussians, i.e., when the joint distribution has the form Q(Za, Zb) = N( (ma; mb), [Σa, ψ; ψ, Σb] ), the solution is given by the roots of the scalar quadratic equation
f′(ψ) = ψ² + (1/γ) ψ − ΣaΣb = 0 (22)
ψ = −1/(2γ) ± (1/(2|γ|)) (1 + 4γ²ΣaΣb)^{1/2} (23)
We take the root that gives a feasible solution as the minimizer. In the scalar case, this is the solution that satisfies Σa − ψ²/Σb ≥ 0, or equivalently ΣaΣb ≥ ψ²:
ψ = (1/(2γ)) (uγ(Σa, Σb) − 1) (24)
where we have defined uγ(Σa, Σb) = (1 + 4γ²ΣbΣa)^{1/2}. It can be easily checked that the other root is infeasible. For the scalar ψ case we obtain
WD_{2,γ}(N(ma, Σa), N(mb, Σb)) = (γ/2) ( ‖ma − mb‖2² + Σa + Σb ) − (1/2) (uγ(Σa, Σb) − 1) + (1/2) log(uγ(Σa, Σb) + 1) − (1/2) log(2ΣbΣa) − log(2π) − 1" }, { "heading": "C SUMMARY OF THE SMOOTH ENCODER ALGORITHM WITH FACTORIZED GAUSSIAN", "text": "Assume a factorized encoder distribution of the form q(Za|x, η) = ∏_{k=1}^{Dz} N(Za^k; µa^k, Σa^k) and q(Zb|x′, η) = ∏_{k=1}^{Dz} N(Zb^k; µb^k, Σb^k), where Dz is the dimension of the latent representation, and µ^k and Σ^k are the k'th components of the output of a neural network with parameters η. Similarly, x^i denotes the i'th component of the observation vector x of size Dx. For optimization, we need an unbiased estimate of the gradient of the SE-ELBO with respect to the encoder parameters η and decoder parameters θ:
B_SE(η, θ) = E{log p(X = x|Za, θ)}_{q(Za|xa,η)} + E{log p(Za)}_{q(Za|xa,η)} + E{log p(Zb)}_{q(Zb|xb,η)} − (γ/2) E{c(Za, Zb)}_{q(Za,Zb|xa,xb,η)} + H(q(Za, Zb|xa, xb, η))
Given x, we first select a fictive sample x′ via a selection mechanism, in this case an adversarial attack as explained in Section 3.1.
We sample a latent representation and calculate the associated prediction:
za ∼ q(Za|Xa = x, η) = N(Za; µa, Σa),  x̄ = g(za; θ)
The terms of the SE-ELBO can be calculated as
E{log p(x|Za, θ)}_{q(Za|Xa=x,η)} ≈ −(Dx/2) log 2πv − (1/(2v)) ∑_{i=1}^{Dx} (x^i − x̄^i)²
E{log p(Za)}_{q(Za|Xa=x,η)} = −(1/2) ∑_{k=1}^{Dz} ((µa^k)² + Σa^k) − (Dz/2) log 2π
E{log p(Zb)}_{q(Zb|Xb=x′,η)} = −(1/2) ∑_{k=1}^{Dz} ((µb^k)² + Σb^k) − (Dz/2) log 2π
WD_{2,γ} = (γ/2) E{ ‖Za − Zb‖² }_{q(Za,Zb|xa,xb,η)} − H(q(Za, Zb|xa, xb, η))
and, with uγ(Σa, Σb) = √(1 + 4γ²ΣbΣa),
WD_{2,γ} = (γ/2) ∑_{k=1}^{Dz} ( ‖µa^k − µb^k‖2² + Σa^k + Σb^k ) − (1/2) ∑_{k=1}^{Dz} ( (uγ(Σa^k, Σb^k) − 1) − log(uγ(Σa^k, Σb^k) + 1) + log(2Σb^kΣa^k) ) − Dz log(2π) − Dz
C.1 EXPERIMENTAL DETAILS AND FURTHER RESULTS
We always train decoder-encoder pairs with identical architectures using both the standard VAE ELBO and the SE-ELBO with a fixed γ. Then, in each case, by fixing the encoder (that is, essentially using the same representation) and by only using the mean activations of the encoders, we train linear classifiers using standard training for solving several downstream tasks.
For both the encoder and decoder networks we use 4-layer multilayer perceptron (MLP) and convolutional network (ConvNet) architectures with 200 units of ReLU activations at each layer. 
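The scalar closed form above is straightforward to transcribe. A NumPy sketch of WD_{2,γ} for the univariate and fully factorized cases (the function and variable names are ours):

```python
import numpy as np

def wd2_gamma_scalar(ma, Sa, mb, Sb, gamma):
    """Entropy-regularized divergence WD_{2,gamma} between two univariate
    Gaussians N(ma, Sa) and N(mb, Sb), per the closed form above."""
    u = np.sqrt(1.0 + 4.0 * gamma ** 2 * Sa * Sb)   # u_gamma(Sa, Sb), Eq. (24)
    return (0.5 * gamma * ((ma - mb) ** 2 + Sa + Sb)
            - 0.5 * (u - 1.0)
            + 0.5 * np.log(u + 1.0)
            - 0.5 * np.log(2.0 * Sa * Sb)
            - np.log(2.0 * np.pi) - 1.0)

def wd2_gamma_diag(mu_a, var_a, mu_b, var_b, gamma):
    """Fully factorized (diagonal) case of Appendix C: sum over latent dims."""
    return sum(wd2_gamma_scalar(ma, Sa, mb, Sb, gamma)
               for ma, Sa, mb, Sb in zip(mu_a, var_a, mu_b, var_b))
```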
We carried out experiments with latent space dimensions of 32, 64 and 128, corresponding to encoder output sizes of 64, 128 and 256 units, with two units per dimension to encode the mean and the log-variance parameters of a fully factorized Gaussian conditional distribution. The training is done using the Adam optimizer. Each network (both the encoder and decoder) is randomly initialized and trained for 300K iterations." }, { "heading": "D RELATED WORK", "text": "D.1 GENERATIVE ADVERSARIAL NETWORKS (GANS)
GANs are presented as neural sampling models of observations x of the form x = f(ζ; η), where f is typically a deep neural network with parameters η, and ζ is a realization from some simple distribution p(Z). In the context of GANs, the function f is called a generator. When the dimension of x is bigger than the dimension of ζ, the density p(x) induced by the transformation f is inevitably a degenerate distribution. Since f is continuous, it is concentrated on a subset of the data space Xf ≡ {x : ∃ζ, x = f(ζ; η)}. Our use of the letter f and parameters η is deliberate, and we will illustrate in the sequel that the generator network of a GAN is actually analogous to a smooth encoder, where the roles of the latent variables and observations are switched; but we will first review GANs.
To fit a degenerate distribution to a dataset, the GAN approach adopts a strategy where the generator is co-trained with a second neural network d(x;w) with parameters w, with the following objective
min_η max_w { E{log d(x;w)}_{Dreal(x)} + E{log(1 − d(f(ζ; η); w))}_{p(ζ)} } (25)
where Dreal(x) is the empirical data distribution. This objective is (unfortunately) referred to as an adversarial objective in the literature, not to be confused with the adversarial attack mechanism in the context of supervised learning (Madry et al., 2017). The function d is called a discriminator. After replacing expectations with finite sample averages, this objective enforces that, in a dataset that contains both synthetically generated (fake) and real examples, the classification function d should increase the correct classification rate by discriminating fakes from real examples, while the generator f should decrease the detection rate of fake examples. When 0 ≤ d(·) ≤ 1, which is the case for a classifier, one can also write the objective as
min_η max_w { E{l(x;w)}_{Dreal} − E{l(f(ζ; η); w)}_{p(ζ)} } (26)
where l(x;w) = log d(x;w). This form also highlights an interesting alternative formulation and an interpretation in terms of optimal transport. In fact, not long after the seminal work of Goodfellow et al. (2014), the mechanism behind the GAN objective and its direct connection to the theory of optimal transport was recognized in the seminal paper of Arjovsky et al. (2017), where the problem is further framed as
min_η max_w { E{l(x;w)}_{Dreal(x)} − E{l(x̄;w)}_{Dfake(x̄;η)} } (27)
with the constraint that |l(x;w) − l(x̄;w)| ≤ ‖c(x, x̄)‖, i.e., l is a Lipschitz function for some L where ‖c(x, x̄)‖ ≤ L‖x − x̄‖. Here, Dfake(x̄; η) is the fitted density of x̄ = f(ζ; η). This is the dual formulation of the optimal transport problem, which can be understood as an economic transaction between a customer and a shipment company. 
Here, the difference l(x;w) − l(x̄;w) can be interpreted as the profit made by the company for the shipment of one unit of mass from x to x̄, and the Lipschitz condition ensures that it still makes sense for the customer to make use of the services of the company rather than simply doing the transport on her own (Solomon, 2018). The customer wants to pay less, so she should minimize the profit of the company. This can be achieved by changing the desired delivery distribution Dfake by adjusting η, so that the transfer from the fixed source distribution Dreal is minimized. Ideally, when Dfake = Dreal, there is nothing to transfer and no cost is incurred. This objective also minimizes the Wasserstein distance between the actual data distribution Dreal and the fake data distribution Dfake as given by the generator. Once the GAN objective can be viewed as minimizing a particular Wasserstein distance, it is rather straightforward to view it as a maximizer of a particular ELBO corresponding to a particular smooth encoder, albeit one where the positions of the observations and the latents are exchanged and a very large coupling coefficient γ is chosen. Moreover, the variational marginals have specific forms: one marginal Qa(X) is chosen as the empirical data distribution, and the other marginal is chosen as having the form of a neural sampler Qb(Xb) = ∫ q(Xb|Zb, η) p(Zb) dZb.
The artificial extended target becomes
p(Z, Z′|X, θ) ∝ ∫ dXb p(Z|X, θ) p(Z′|Xb, θ) ψ(X, Xb) (28)
It can be seen that the ELBO in this case becomes
log p(Z, Z′|X, θ) ≥ E{log p(Z|X, θ)}_{Qa(X)} + E{log p(X)}_{Qa(X)} + E{log p(Z′|Xb, θ)}_{Qb(Xb)} + E{log p(Xb)}_{Qb(Xb)} − (γ/2) E{c(Xa, Xb)}_{Q(Xa,Xb)} + H(Q(Xa, Xb)) (29)
Now, by taking the coupling γ sufficiently large, the coupling term dominates the lower bound and we obtain the Wasserstein minimization objective. The random draws from p(Z) become the selection mechanism. Moreover, the terms that depend on the artificial target p(Z|X, θ) also become irrelevant, so in this regime the problem becomes just solving the optimal transport problem between Qa and Qb.
A link between entropic GANs and VAEs is also pointed out in the literature, albeit for calculating a likelihood for GANs (Balaji et al., 2018). However, our motivation as well as our interpretation of the connection is quite different, and we view the GAN decoder as an instance of the smooth encoder.
D.2 DISENTANGLED REPRESENTATIONS AND β-VAE
Targeting the encoder to an augmented distribution different from the decoder's gives us the freedom to express some extensions of the VAE in the same framework. One such extension is the β-VAE, quite often used for controlling representations, which replaces the original variational objective (1) with the following objective
log p(X = x|θ) ≥ E{log p(X = x|Z, θ)}_{q(Z|Xa=x,η)} − β DKL(q(Z|Xa = x, η)||p(Z)) (30)
The justification in the original paper (Higgins et al., 2017) is obtained from an implicit robustness criterion where DKL(q(Z|Xa = x, η)||p(Z)) < ε and β appears in a Lagrangian formulation. Hoffman & Johnson (2016) have also provided an alternative justification.
In our formulation, β can simply be interpreted as a dispersion term that is related to the number of points selected by the selection mechanism. To see this, suppose the selection mechanism chooses β − 1 points x_{b,i}, i = 1, . . . , β − 1, that are identical to the true observation, x = x_{b,i} = x′_i for i = 1, . . . 
β − 1.
p(X|θ) = ∫ dX′_{1:β−1} p(X, X′_{1:β−1}|θ) (31)
∝ ∫ dX′_{1:β−1} dZ dZ′_{1:β−1} p(X|Z, θ) p(Z) ( ∏_{i=1}^{β−1} p(X′_i|Z′_i, θ) p(Z′_i) ) (32)
= ∫ dZ dZ′_{1:β−1} p(X|Z, θ) p(Z) ( ∏_{i=1}^{β−1} p(Z′_i) ) (33)
Now, instead of integrating out Z′_{1:β−1}, we choose a variational distribution with identical marginals of the form
Q(Z, Z′_{1:β−1}) = q(Z|Xa = x, η) ∏_{i=1}^{β−1} q(Z′_i|X_{b,i} = x, η) (34)
The variational lower bound then becomes identical to the β-VAE objective:
log p(X|θ) ≥ E{log p(X|Z, θ)}_{q(Z|Xa=x,η)} + E{log p(Z)}_{q(Z|Xa=x,η)} (35)
+ ∑_{i=1}^{β−1} E{log p(Z′_i)}_{q(Z′_i|X_{b,i}=x,η)} + H(Q(Z, Z′_{1:β−1})) (36)
= E{log p(X = x|Z, θ)}_{q(Z|Xa=x,η)} − β DKL(q(Z|Xa = x, η)||p(Z)) (37)
where the last step follows due to the functional form of the variational distribution." }, { "heading": "E TECHNICAL RESULTS", "text": "E.1 BATCH ELBO
In Section 2.2, we defined a batch ELBO (1). To see the connection to the per-point VAE ELBO
log p(X = x|θ) ≥ E{log p(X = x|Z, θ)}_{q(Z|X=x,η)} − DKL(q(Z|X = x, η)||p(Z)) ≡ Bx(η, θ),
we first define the empirical data distribution π(X) = (1/N) ∑_{i=1}^{N} δ(X − x_i). We can now write
(1/N) ∑_{i=1}^{N} B_{x_i}(η, θ) = (1/N) ∑_{i=1}^{N} E{log p(X = x_i|Z, θ)}_{q(Z|X=x_i,η)} − (1/N) ∑_{i=1}^{N} E{log q(Z|X = x_i, η)}_{q(Z|X=x_i,η)} + (1/N) ∑_{i=1}^{N} E{log p(Z)}_{q(Z|X=x_i,η)}
= E{log p(X|Z, θ)}_{q(Z|X,η)π(X)} − E{log q(Z|X, η)}_{q(Z|X,η)π(X)} + E{log p(Z)}_{q(Z|X,η)π(X)} − E{log π(X)}_{q(Z|X,η)π(X)} + E{log π(X)}_{π(X)}
= −DKL(q(Z|X, η)π(X) || p(X|Z, θ)p(Z)) + const
This result shows that the ELBO is minimizing the KL distance between one exact and one approximate factorization of the joint distribution p(X, Z) = p(X|Z, θ)p(Z) ≈ q(Z|X, η)π(X).
E.2 WHY IS THE DECODER TYPICALLY SMOOTH AFTER VAE TRAINING?
In the context of a VAE, the smoothness of the decoder is implicitly enforced by the highly constrained encoder distribution and the dynamics of SGD-based training. In the sequel, we will illustrate that, if two latent coordinates are sufficiently close, the change in the decoder mean mapping is forced to be bounded.
In a standard VAE, the encoder output for each data point is conditionally Gaussian, q(Z|X = x; η) = N(fµ(x; η), fΣ(x; η)). The decoder is chosen as p(X|Z = z; θ) = N(g(z; θ), vI). Under the ELBO, the decoder parameters θ depend only on the data fidelity term ‖x − g(z; θ)‖²/v. For a moment, assume that the encoder is fixed and focus on a single data point x. During training, a set of latent state vectors z_i for i = 1, . . . , T are sampled from the conditionally Gaussian encoder distribution. When the dimension of the latent space Dz is large, these samples z_i will, with high probability, lie on the typical set. The typical set of a nondegenerate Gaussian distribution is approximately the surface of a Mahalanobis ball, a compact hyper-ellipsoid M(x) centered at fµ(x; η) with scaling matrix fΣ(x; η)^{1/2}.
If we assume that the training procedure is able to reduce the error in the sense that ‖x − g(z_i; θ)‖ ≤ E for all z_i, where E is a bound on the error magnitude for z_i sampled from the encoder, then the decoder is forced to give approximately the same output for each point on M(x). For a point za drawn from q(Z|X = x; η) we have
‖za − fµ(x; η)‖K ≈ √Dz with high probability
where K = fΣ(x; η)^{-1} and ‖x‖K ≡ √(x^T K x).
For a point zb independently drawn from q(Z|X = x; η), by the triangle inequality we have
‖g(za; θ) − g(zb; θ)‖ ≤ 2E (38)
where the Mahalanobis distance satisfies 2√Dz ≈ ‖za − zb‖K ≤ (1/√λmin) ‖za − zb‖, where λmin is the smallest eigenvalue of the covariance matrix.
Hence the distance is also bounded when the variance is not degenerate, and the minimum distance will be on the order of ‖za − zb‖ ≈ 2√(Dz λmin), so we expect the ratio to be bounded:
‖g(za; θ) − g(zb; θ)‖ / ‖za − zb‖ ≤ E/√(Dz λmin) (39)
We see that the ELBO objective enforces the decoder to be invariant on the typical set of q(Z|X = x; η), where most of the probability mass is concentrated.
Now, for each data point x, the corresponding latent-space hyper-ellipsoid M(x) is forced to be large, in the sense of having a large determinant, by the entropy term of the encoder that promotes a large log-determinant. The size of M(x) is also controlled by the prior fidelity term, which prevents it from blowing up. Hence the union ∪_{x∈X} M(x), where X is the dataset, will approximately cover the latent space when the encoder has converged, and on each hyper-ellipsoid M(x) the decoder will be enforced to be smooth.
E.3 SMOOTHNESS OF THE SMOOTH ENCODER
In this section we show that smooth encoder training forces a small Lipschitz constant for the encoder mean mapping. To simplify the argument, we will assume that the variance mapping of the encoder is a constant function that does not vary with x, i.e., fΣ(x; η) = Σ(η). The latter assumption could be removed by considering a metric on the joint space of the means and covariances.
Using the adversarial selection mechanism, during training we solve the following problem using PGD:
x∗ = arg max_{x′ : ‖x′ − x‖p ≤ ε} WD(q(Z|X = x, η), q(Z|X′ = x′, η))
Assuming that PGD finds the global maximum at the boundary of the ε-ball, where ‖x − x∗‖p = ε, under the constant variance assumption for the encoder we can see that the Wasserstein divergence simply becomes the squared distance between the mean mappings:
WD(q(Z|X = x, η), q(Z|X′ = x∗, η)) = ‖fµ(x; η) − fµ(x∗; η)‖2²
We know that the SE-ELBO objective has to minimize this distance for any coupling term γ, so the procedure actually tries to reduce the local Lipschitz constant L(x) around the data point x,
L(x) = ‖fµ(x; η) − fµ(x∗; η)‖ / ‖x − x∗‖p ≤ E/ε,
and promotes smoothness, where E is an upper bound on the change in the representation, ‖fµ(x; η) − fµ(x∗; η)‖ ≤ E." } ]
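As a side note, the typical-set approximation used in E.2 (‖za − fµ(x; η)‖K ≈ √Dz with high probability) is easy to check numerically; the dimension and scales below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
Dz = 128                                         # latent dimension (arbitrary)
mu = rng.normal(size=Dz)
scale = rng.uniform(0.5, 2.0, size=Dz)           # diagonal of Sigma^{1/2}
z = mu + scale * rng.normal(size=(10000, Dz))    # draws from N(mu, Sigma)
mahal = np.linalg.norm((z - mu) / scale, axis=1) # ||z - mu||_K with K = Sigma^{-1}
print(mahal.mean() / np.sqrt(Dz), mahal.std())   # ratio ~ 1.0, small spread
```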
2020
null
SP:12411220098647e9bc26769218f2f64d82867493
[ "The paper is concerned with neural ODE-based networks, specifically their robustness. While ODEs are a classical subject in mathematics with many applications in the sciences and beyond, neural ODEs are a recently proposed family of models for nonlinear mappings in the context of machine learning systems. There they show promise and are an active field of research.", "This paper investigates the robustness of Neural Ordinary differential equations (ODEs) against corrupted and adversarial examples. The crux of the analysis is based on the separation property of ODE integral curves. The insights from empirical robustness evaluation show that controlling the difference between neighboring integral curves is able to improve neural ODE's robustness. In general, neural ODE is a hot research topic in recent years, and a paper advancing knowledge in this area about understanding its various characteristics is certainly welcome. The paper is well motivated and clearly written. One aspect that confuses me a little originally is the different effects of getting ridding of the dependency on the time t and adding the steady state regularization. It would be nice to elucidate which part makes more contributions? Furthermore, to compare the robustness of the new approach with CNN, the input data consists of original images and their Gaussian-noise based perturbed samples. Since the paper already involves the evaluation using adversarial examples, it will make the paper much more stronger to show that when training both the new approach and the CNN with adversarial training, the proposed regularization can still lead to better robustness. " ]
Neural ordinary differential equations (ODEs) have been attracting increasing attention in various research domains recently. There have been some works studying optimization issues and approximation capabilities of neural ODEs, but their robustness is still yet unclear. In this work, we fill this important gap by exploring robustness properties of neural ODEs both empirically and theoretically. We first present an empirical study on the robustness of the neural ODE-based networks (ODENets) by exposing them to inputs with various types of perturbations and subsequently investigating the changes of the corresponding outputs. In contrast to conventional convolutional neural networks (CNNs), we find that the ODENets are more robust against both random Gaussian perturbations and adversarial attack examples. We then provide an insightful understanding of this phenomenon by exploiting a certain desirable property of the flow of a continuous-time ODE, namely that integral curves are non-intersecting. Our work suggests that, due to their intrinsic robustness, it is promising to use neural ODEs as a basic block for building robust deep network models. To further enhance the robustness of vanilla neural ODEs, we propose the time-invariant steady neural ODE (TisODE), which regularizes the flow on perturbed data via the time-invariant property and the imposition of a steady-state constraint. We show that the TisODE method outperforms vanilla neural ODEs and also can work in conjunction with other state-of-the-art architectural methods to build more robust deep networks.
[ { "affiliations": [], "name": "ORDINARY DIFFEREN" }, { "affiliations": [], "name": "TIAL EQUATIONS" }, { "affiliations": [], "name": "Hanshu YAN" }, { "affiliations": [], "name": "Jiawei DU" }, { "affiliations": [], "name": "Vincent Y. F. TAN" }, { "affiliations": [], "name": "Jiashi FENG" } ]
[ { "authors": [ "Lynton Ardizzone", "Jakob Kruse", "Sebastian Wirkert", "Daniel Rahner", "Eric W Pellegrini", "Ralf S Klessen", "Lena Maier-Hein", "Carsten Rother", "Ullrich Köthe" ], "title": "Analyzing inverse problems with invertible neural networks", "venue": "arXiv preprint arXiv:1808.04730,", "year": 2018 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2017 }, { "authors": [ "Pin-Yu Chen", "Huan Zhang", "Yash Sharma", "Jinfeng Yi", "Cho-Jui Hsieh" ], "title": "Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models", "venue": "In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security,", "year": 2017 }, { "authors": [ "Tian Qi Chen", "Yulia Rubanova", "Jesse Bettencourt", "David K Duvenaud" ], "title": "Neural ordinary differential equations", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Earl A Coddington", "Norman Levinson" ], "title": "Theory of ordinary differential equations", "venue": "Tata McGrawHill Education,", "year": 1955 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Emilien Dupont", "Arnaud Doucet", "Yee Whye Teh" ], "title": "Augmented neural odes", "venue": "arXiv preprint arXiv:1904.01681,", "year": 2019 }, { "authors": [ "Gamaleldin F Elsayed", "Shreya Shankar", "Brian Cheung", "Nicolas Papernot", "Alex Kurakin", "Ian Goodfellow", "Jascha Sohl-Dickstein" ], "title": "Adversarial examples that fool both human and computer vision", "venue": "arXiv preprint arXiv:1802.08195,", "year": 2018 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Will Grathwohl", "Ricky TQ Chen", "Jesse Betterncourt", "Ilya Sutskever", "David Duvenaud" ], "title": "Ffjord: Free-form continuous dynamics for scalable reversible generative models", "venue": "arXiv preprint arXiv:1810.01367,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Gustav Larsson", "Michael Maire", "Gregory Shakhnarovich" ], "title": "Fractalnet: Ultra-deep neural networks without residuals", "venue": "arXiv preprint arXiv:1605.07648,", "year": 2016 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Xuanqing Liu", "Si Si", "Qin Cao", "Sanjiv Kumar", "Cho-Jui Hsieh" ], "title": "Neural sde: Stabilizing neural ode networks with stochastic noise", "venue": null, "year": 1906 }, { "authors": [ "Yiping Lu", "Aoxiao Zhong", "Quanzheng Li", "Bin Dong" ], "title": "Beyond finite layer neural networks: Bridging deep architectures and numerical differential equations", "venue": "arXiv preprint arXiv:1710.10121,", "year": 2017 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig 
Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "arXiv preprint arXiv:1706.06083,", "year": 2017 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": null, "year": 2011 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in pytorch", "venue": null, "year": 2017 }, { "authors": [ "Lev Semenovich Pontryagin" ], "title": "Mathematical theory of optimal processes", "venue": null, "year": 2018 }, { "authors": [ "Alessio Quaglino", "Marco Gallieri", "Jonathan Masci", "Jan Koutnı́k" ], "title": "Accelerating neural odes with spectral elements", "venue": "arXiv preprint arXiv:1906.07038,", "year": 2019 }, { "authors": [ "Jure Sokolić", "Raja Giryes", "Guillermo Sapiro", "Miguel RD Rodrigues" ], "title": "Robust large margin deep neural networks", "venue": "IEEE Transactions on Signal Processing,", "year": 2017 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Florian Tramèr", "Alexey Kurakin", "Nicolas Papernot", "Ian Goodfellow", "Dan Boneh", "Patrick McDaniel" ], "title": "Ensemble adversarial training: Attacks and defenses", "venue": "arXiv preprint arXiv:1705.07204,", "year": 2017 }, { "authors": [ "Dimitris Tsipras", "Shibani Santurkar", "Logan Engstrom", "Alexander Turner", "Aleksander Madry" ], "title": "Robustness may be at odds with accuracy", "venue": "arXiv preprint arXiv:1805.12152,", "year": 2018 }, { "authors": [ "B. Wang", "B. Yuan", "Z. Shi", "S. Osher" ], "title": "ResNets Ensemble via the Feynman-Kac Formalism to Improve Natural and Robust Accuracies", "venue": "arXiv e-prints, art. 
arXiv:1811.10745,", "year": 2018 }, { "authors": [ "Bao Wang", "Alex T Lin", "Zuoqiang Shi", "Wei Zhu", "Penghang Yin", "Andrea L Bertozzi", "Stanley J Osher" ], "title": "Adversarial defense via data dependent activation function and total variation minimization", "venue": "arXiv preprint arXiv:1809.08516,", "year": 2018 }, { "authors": [ "Bao Wang", "Xiyang Luo", "Zhen Li", "Wei Zhu", "Zuoqiang Shi", "Stanley Osher" ], "title": "Deep neural nets with interpolating function as output activation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "E Weinan" ], "title": "A proposal on machine learning via dynamical systems", "venue": "Communications in Mathematics and Statistics,", "year": 2017 }, { "authors": [ "Cihang Xie", "Jianyu Wang", "Zhishuai Zhang", "Zhou Ren", "Alan Yuille" ], "title": "Mitigating adversarial effects through randomization", "venue": "arXiv preprint arXiv:1711.01991,", "year": 2017 }, { "authors": [ "Cihang Xie", "Yuxin Wu", "Laurens van der Maaten", "Alan L Yuille", "Kaiming He" ], "title": "Feature denoising for improving adversarial robustness", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Ziang Yan", "Yiwen Guo", "Changshui Zhang" ], "title": "Deep defense: Training dnns with improved adversarial robustness", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Laurent Younes" ], "title": "Shapes and diffeomorphisms, volume 171", "venue": null, "year": 2010 }, { "authors": [ "Xingcheng Zhang", "Zhizhong Li", "Chen Change Loy", "Dahua Lin" ], "title": "Polynet: A pursuit of structural diversity in very deep networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 } ]
[ { "heading": null, "text": "1 INTRODUCTION\nNeural ordinary differential equations (Chen et al., 2018) form a family of models that approximate nonlinear mappings by using continuous-time ODEs. Due to their desirable properties, such as invertibility and parameter efficiency, neural ODEs have attracted increasing attention recently (Dupont et al., 2019; Liu et al., 2019). For example, Grathwohl et al. (2018) proposed a neural ODE-based generative model—the FFJORD—to solve inverse problems; Quaglino et al. (2019) used a higher-order approximation of the states in a neural ODE, and proposed the SNet to accelerate computation. Along with the wider deployment of neural ODEs, robustness issues come to the fore. However, the robustness of neural ODEs is still yet unclear. In particular, it is unclear how robust neural ODEs are in comparison to the widely-used CNNs. Robustness properties of CNNs have been studied extensively. In this work, we present the first systematic study on exploring the robustness properties of neural ODEs.\nTo do so, we consider the task of image classification. We expect that results would be similar for other machine learning tasks such as regression. Neural ODEs are dimension-preserving mappings, but a classification model transforms a high-dimensional input—such as an image—into an output whose dimension is equal to the number of classes. Thus, we consider the neural ODE-based classification network (ODENet) whose architecture is shown in Figure 1. An ODENet consists of three components: the feature extractor (FE) consists of convolutional layers which maps an input datum to a multi-channel feature map, a neural ODE that serves as the nonlinear representation mapping (RM), and the fully-connected classifier (FCC) that generates a prediction vector based on the output of the RM.\nThe robustness of a classification model can be evaluated through the lens of its performance on perturbed images. To comprehensively investigate the robustness of neural ODEs, we perturb original images with commonly-used perturbations, namely, random Gaussian noise (Szegedy et al., 2013) and harmful adversarial examples (Goodfellow et al., 2014; Madry et al., 2017). We conduct experiments in two common settings—training the model only on authentic non-perturbed images and training the model on authentic images as well as the Gaussian perturbed ones. We observe that ODENets are more robust compared to CNN models against all types of perturbations in both settings. We then provide an insightful understanding of such intriguing robustness of neural ODEs by exploiting a certain property of the flow (Dupont et al., 2019), namely that integral curves that start at distinct initial states are nonintersecting. The flow of a continuous-time ODE is defined as the family of solutions/paths traversed by the state, starting from different initial points, and an integral curve is a specific solution for a given initial point. The non-intersecting property indicates that\nan integral curve starting from some point is constrained by the integral curves starting from that point’s neighborhood. Thus, in an ODENet, if a correctly classified datum is slightly perturbed, the integral curve associated to its perturbed version would not change too much from the original one. Consequently, the perturbed datum could still be correctly classified. 
Thus, there exists intrinsic robustness regularization in ODENets, which is absent from CNNs.\nMotivated by this property of the neural ODE flow, we attempt to explore a more robust neural ODE architecture by introducing stronger regularization on the flow. We thus propose a Time-Invariant Steady neural ODE (TisODE). The TisODE removes the time dependence of the dynamics in an ODE and imposes a steady-state constraint on the integral curves. Removing the time dependence of the derivative results in the time-invariant property of the ODE. To wit, given a solution z1(t), another solution z̃1(t), with an initial state z̃1(0) = z1(T′) for some T′ > 0, can be regarded as the −T′-shift version of z1(t). Such a time-invariant property makes it convenient to bound the difference between output states. To elaborate, let the output of a neural ODE correspond to states at time T > 0. By the time-invariant property, the difference between outputs, ‖z̃1(T) − z1(T)‖, equals ‖z1(T + T′) − z1(T)‖. To control this distance, a steady-state regularization term is introduced into the overall objective to constrain the change of a state after the time exceeds T. With the time-invariant property and the steady-state term, we show that the TisODE is even more robust. We do so by evaluating the robustness of TisODE-based classifiers against various types of perturbations and observing that such models are more robust than vanilla ODE-based models.\nIn addition, some other effective architectural solutions have also been recently proposed to improve the robustness of CNNs. For example, Xie et al. (2017) randomly resize test images or pad them with zeros to destroy the specific structure of adversarial perturbations. Besides, the model proposed by Xie et al. (2019) contains feature denoising filters to remove the feature-level patterns of adversarial examples. We conduct experiments to show that our proposed TisODE can work seamlessly in conjunction with these methods to further boost the robustness of deep models. Thus, the proposed TisODE can be used as a generally applicable and effective component for improving the robustness of deep models.\nIn summary, our contributions are as follows. Firstly, we are the first to provide a systematic empirical study on the robustness of neural ODEs, and we find that neural ODE-based models are more robust than conventional CNN models. This finding inspires new applications of neural ODEs in improving the robustness of deep models, a problem that concerns many deep learning theorists and practitioners alike. Secondly, we propose the TisODE method, which is simple yet effective in significantly boosting the robustness of neural ODEs. Moreover, the proposed TisODE can also be used in conjunction with other state-of-the-art robust architectures. Thus, the TisODE can serve as a drop-in module to improve the robustness of deep models effectively.\n2 PRELIMINARIES ON NEURAL ODE\nIt has been shown that a residual block (He et al., 2016) can be interpreted as a discrete approximation of an ODE with the discretization step set to one. When the discretization step approaches zero, this yields a family of neural networks, which are called neural ODEs (Chen et al., 2018). Formally, in a neural ODE, the relation between input and output is characterized by the following set of equations:\n$$\frac{dz(t)}{dt} = f_\theta(z(t), t), \quad z(0) = z_{\mathrm{in}}, \quad z_{\mathrm{out}} = z(T), \tag{1}$$\nwhere fθ : ℝ^d × [0,∞) → ℝ^d denotes the trainable layers that are parameterized by weights θ and z : [0,∞) → ℝ^d represents the d-dimensional state of the neural ODE. We assume that fθ is continuous in t and globally Lipschitz continuous in z. In this case, the input zin of the neural ODE corresponds to the state at t = 0, and the output zout is associated with the state at some T ∈ (0,∞). Because fθ governs how the state changes with respect to time t, we also use fθ to denote the dynamics of the neural ODE.\nGiven the input zin, the output zout can be computed by solving the ODE in (1). If T is fixed, the output zout only depends on the input zin and the dynamics fθ, which corresponds to the weighted layers in the neural ODE. Therefore, the neural ODE can be represented as the d-dimensional function φT(·, ·) of the input zin and the dynamics fθ, i.e.,\n$$z_{\mathrm{out}} = z(T) = z(0) + \int_0^T f_\theta(z(t), t)\,dt = \phi_T(z_{\mathrm{in}}, f_\theta).$$\nThe terminal time T of the output state z(T) is set to be 1 in practice. Several methods have been proposed for training neural ODEs, such as the adjoint sensitivity method (Chen et al., 2018), the SNet (Quaglino et al., 2019), and the auto-differentiation technique (Paszke et al., 2017). In this work, we use the most straightforward technique, i.e., updating the weights θ with the auto-differentiation technique in the PyTorch framework.
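To make the mapping φT concrete, the following sketch implements the forward pass of a neural ODE block with the fixed-step Euler scheme used later in the experiments. It is a minimal illustration rather than the authors' released code: the toy dynamics `ODEFunc`, the layer sizes, and the step size are our assumptions.

```python
import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    """Toy dynamics f_theta(z, t); the scalar time t is appended as an extra feature."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, dim), nn.Tanh(), nn.Linear(dim, dim))

    def forward(self, z, t):
        t_col = torch.full_like(z[:, :1], t)          # broadcast t over the batch
        return self.net(torch.cat([z, t_col], dim=1))

def euler_solve(func, z_in, T=1.0, step=0.1):
    """Approximate z_out = phi_T(z_in, f_theta) with the explicit Euler scheme."""
    z, t = z_in, 0.0
    for _ in range(int(T / step)):
        z = z + step * func(z, t)                     # z_{k+1} = z_k + h * f_theta(z_k, t_k)
        t += step
    return z

func = ODEFunc(dim=8)
z_in = torch.randn(4, 8)         # a batch of 4 feature vectors
z_out = euler_solve(func, z_in)  # same shape as z_in: the mapping is dimension-preserving
```

Because every operation above is differentiable, the weights θ can be trained directly by back-propagating through the unrolled Euler steps, which is the auto-differentiation approach adopted in this work.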
3 AN EMPIRICAL STUDY ON THE ROBUSTNESS OF ODENETS\nRobustness of deep models has gained increased attention, as it is imperative that deep models employed in critical applications, such as healthcare, are robust. The robustness of a model is measured by the sensitivity of the prediction with respect to small perturbations on the inputs. In this study, we consider three commonly-used perturbation schemes, namely random Gaussian perturbations, FGSM (Goodfellow et al., 2014) adversarial examples, and PGD (Madry et al., 2017) adversarial examples. These perturbation schemes reflect the noise and adversarial robustness properties of the investigated models, respectively. We evaluate the robustness via the classification accuracies on perturbed images, in which the original non-perturbed versions of these images are all correctly classified.\nFor a fair comparison with conventional CNN models, we made sure that the number of parameters of an ODENet is close to that of its counterpart CNN model. Specifically, the ODENet shares the same network architecture with the CNN model for the FE and FCC parts. The only difference is that, for the RM part, the input of the ODE-based RM is concatenated with one more channel which represents the time t, while the RM in a CNN model has a skip connection and serves as a residual block. During the training phase, all the hyperparameters are kept the same, including training epochs, learning rate schedules, and weight decay coefficients. Each model is trained three times with different random seeds, and we report the average performance (classification accuracy) together with the standard deviation.\n3.1 EXPERIMENTAL SETTINGS\nDataset: We conduct experiments to compare the robustness of ODENets with CNN models on three datasets, i.e., MNIST (LeCun et al., 1998), SVHN (Netzer et al., 2011), and a subset of the ImageNet dataset (Deng et al., 2009). We call the subset ImgNet10 since it is collected from 10 synsets of ImageNet: dog, bird, car, fish, monkey, turtle, lizard, bridge, cow, and crab. We selected 3,000 training images and 300 test images from each synset and resized all images to 128×128.
Architectures: On the MNIST dataset, both the ODENet and the CNN model consist of four convolutional layers and one fully-connected layer. The total number of parameters of the two models is around 140k. On the SVHN dataset, the networks are similar to those for MNIST; we only changed the input channels of the first convolutional layer to three. On the ImgNet10 dataset, there are nine convolutional layers and one fully-connected layer for both the ODENet and the CNN model. The number of parameters is approximately 280k. In practice, the neural ODE can be solved with different numerical solvers such as the Euler method and the Runge-Kutta methods (Chen et al., 2018). Here, we use the easily-implemented Euler method in the experiments. To balance the computation and the continuity of the flow, we solve the ODE initial value problem in equation (1) by the Euler method with step size 0.1. Our implementation builds on the open-source neural ODE codes (https://github.com/rtqichen/torchdiffeq). Details on the network architectures are included in the Appendix.\nTraining: The experiments are conducted using two settings on each dataset—training models only with original non-perturbed images and training models on original images together with their perturbed versions. In both settings, we added a weight decay term into the training objective to regularize the norm of the weights, since this can help control the model's representation capacity and improve the robustness of a neural network (Sokolić et al., 2017). In the second setting, images perturbed with random Gaussian noise are used to fine-tune the models, because augmenting the dataset with small perturbations can possibly improve the robustness of models and synthesizing Gaussian noise does not incur excessive computation time.\n3.2 ROBUSTNESS OF ODENETS TRAINED ONLY ON NON-PERTURBED IMAGES\nThe first question we are interested in is how robust ODENets are against perturbations if the model is only trained on original non-perturbed images. We train CNNs and ODENets to perform classification on the three datasets and set the weight decay parameter for all models to be 0.0005. We make sure that both the well-trained ODENets and CNN models have satisfactory performances on original non-perturbed images, i.e., around 99.5% for MNIST, 95.0% for SVHN, and 80.0% for ImgNet10.\nSince Gaussian noise is ubiquitous in modeling image degradation, we first evaluated the robustness of the models in the presence of zero-mean random Gaussian perturbations. It has also been shown that a deep model is vulnerable to harmful adversarial examples, such as the FGSM (Goodfellow et al., 2014). We are also interested in how robust ODENets are in the presence of adversarial examples. The standard deviation σ of the Gaussian noise and the ℓ∞-norm ε of the FGSM attack for each dataset are shown in Table 1.
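For reference, the two perturbation schemes used here can be sketched as follows. This is a generic implementation of zero-mean Gaussian corruption and the one-step FGSM attack under the usual ℓ∞ formulation, not the authors' evaluation code; in particular, the assumption that pixel values lie in [0, 1] (with σ given on the 0–255 scale, as in the tables) is ours.

```python
import torch
import torch.nn.functional as F

def gaussian_perturb(x, sigma):
    """Zero-mean Gaussian corruption; sigma is assumed to be on the 0-255 pixel scale."""
    return x + (sigma / 255.0) * torch.randn_like(x)

def fgsm_attack(model, x, y, eps):
    """One-step FGSM: x' = clip(x + eps * sign(grad_x L(model(x), y)))."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```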
From the results in Table 1, we observe that the ODENets demonstrate superior robustness compared to CNNs for all types of perturbations. On the MNIST dataset, in the presence of Gaussian perturbations with a large σ of 100, the ODENet produces much higher accuracy on perturbed images compared to the CNN model (73.2% vs. 56.4%). For the FGSM-0.3 adversarial examples, the accuracy of the ODENet is around twice as high as that of the CNN model. On the SVHN dataset, ODENets significantly outperform CNN models, e.g., for the FGSM-5/255 examples, the accuracy of the ODENet is 43.0%, which is much higher than that of the CNN model (13.7%). On ImgNet10, for both cases of σ = 25 and FGSM-8/255, the ODENet outperforms CNNs by a large margin of around 9%.\n3.3 ROBUSTNESS OF ODENETS TRAINED ON ORIGINAL IMAGES TOGETHER WITH GAUSSIAN PERTURBATIONS\nTraining a model on original images together with their perturbed versions can improve the robustness of the model. As mentioned previously, Gaussian noise is commonly assumed to be present in real-world images. Synthesizing Gaussian noise is also fast and easy. Thus, we add random Gaussian noise into the original images to generate their perturbed versions. ODENets and CNN models are both trained on original images together with their perturbed versions. The standard deviation of the added Gaussian noise is randomly chosen from {50, 75, 100} on the MNIST dataset, {15, 25, 35} on the SVHN dataset, and {10, 15, 25} on ImgNet10. All other hyperparameters are kept the same as above.\nThe robustness of the models is evaluated under Gaussian perturbations, FGSM adversarial examples, and PGD (Madry et al., 2017) adversarial examples. The latter is a stronger attack than the FGSM. The ℓ∞-norm ε of the PGD attack for each dataset is shown in Table 2. Based on the results, we observe that ODENets consistently outperform CNN models on all three datasets. On the MNIST dataset, the ODENet outperforms the CNN against all types of perturbations. In particular, for the PGD-0.2 adversarial examples, the accuracy of the ODENet (64.7%) is much higher than that of the CNN (32.9%). Besides, for the PGD-0.3 attack, the CNN is completely misled by the adversarial examples, but the ODENet can still classify perturbed images with an accuracy of 13.0%. On the SVHN dataset, ODENets also show superior robustness in comparison to CNN models. For all the adversarial examples, ODENets outperform CNN models by a margin of at least 10 percentage points. On the ImgNet10 dataset, the ODENet also performs better than CNN models against all forms of adversarial examples.\n3.4 INSIGHTS ON THE ROBUSTNESS OF ODENETS\nFrom the results in Sections 3.2 and 3.3, we find that ODENets are more robust than CNN models. Here, we attempt to provide an intuitive understanding of the robustness of the neural ODE. In an ODENet, given some datum, the FE extracts an informative feature map from the datum. The neural ODE, serving as the RM, takes the feature map as input and performs a nonlinear mapping. In practice, we use the weight decay technique during training, which regularizes the norm of the weights in the FE part, so that the change of the feature map induced by a small perturbation on the input can be controlled. We aim to show that, in the neural ODE, a small change on the feature map will not lead to a large deviation from the original output associated with the feature map.\nTheorem 1 (ODE integral curves do not intersect (Coddington & Levinson, 1955; Younes, 2010; Dupont et al., 2019)). Let z1(t) and z2(t) be two solutions of the ODE in (1) with different initial conditions, i.e., z1(0) ≠ z2(0). In (1), fθ is continuous in t and globally Lipschitz continuous in z. Then, it holds that z1(t) ≠ z2(t) for all t ∈ [0,∞).
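Theorem 1 can also be checked numerically: integrating a one-dimensional ODE from several nearby initial states, the resulting trajectories preserve their ordering for all time. The dynamics below is an arbitrary smooth, globally Lipschitz toy example chosen only for this illustration.

```python
import numpy as np

def f(z, t):
    # Arbitrary smooth, globally Lipschitz toy dynamics, used only for illustration.
    return np.tanh(z) + 0.1 * np.sin(t)

def euler(z0, T=5.0, h=1e-3):
    z, t, traj = z0, 0.0, [z0]
    for _ in range(int(T / h)):
        z, t = z + h * f(z, t), t + h
        traj.append(z)
    return np.array(traj)

z2, z1, z3 = euler(-0.1), euler(0.0), euler(0.1)  # three nearby initial states
# The ordering is preserved along the whole trajectory: the middle curve stays
# sandwiched between its neighbours, so the integral curves never intersect.
assert np.all(z2 < z1) and np.all(z1 < z3)
```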
To illustrate this theorem, consider a simple one-dimensional system in which the state is a scalar. As shown in Figure 2, equation (1) has a solution z1(t) starting from A1 = (0, z1(0)), where z1(0) is the feature of some datum. Equation (1) also has another two solutions z2(t) and z3(t), whose starting points A2 = (0, z2(0)) and A3 = (0, z3(0)) are both close to A1. Suppose A1 is between A2 and A3. By Theorem 1, we know that the integral curve z1(t) is always sandwiched between the integral curves z2(t) and z3(t).\nNow, let ε < min{|z2(0) − z1(0)|, |z3(0) − z1(0)|}. Consider a solution z̃1(t) of equation (1). The integral curve z̃1(t) starts from a point Ã1 = (0, z̃1(0)). The point Ã1 is in the ε-neighborhood of A1 with |z̃1(0) − z1(0)| < ε. By Theorem 1, we know that |z̃1(T) − z1(T)| ≤ |z3(T) − z2(T)|. In other words, if any perturbation smaller than ε is added to the scalar z1(0) in A1, the deviation from the original output z1(T) is bounded by the distance between z2(T) and z3(T). In contrast, in a CNN model, there is no such bound on the deviation from the original output. Thus, we opine that due to this non-intersecting property, ODENets are intrinsically robust.\n4 TISODE: BOOSTING THE ROBUSTNESS OF NEURAL ODES\nIn the previous section, we presented an empirical study on the robustness of ODENets and observed that ODENets are more robust compared to CNN models. In this section, we explore how to boost the robustness of the vanilla neural ODE model further. This motivates the proposal of time-invariant steady neural ODEs (TisODEs).\n4.1 TIME-INVARIANT STEADY NEURAL ODES\n(Figure 3: the steady-state constraint.)\nIn the neural ODE characterized by equation (1), the dynamics fθ(z(t), t) depends on both the state z(t) at time t and the time t itself. In contrast, if the neural ODE is modified to be time-invariant, the time dependence of the dynamics is removed. Consequently, the dynamics depends only on the state z. So, we can rewrite the dynamics function as fθ(z), and the neural ODE is characterized as\n$$\frac{dz(t)}{dt} = f_\theta(z(t)), \quad z(0) = z_{\mathrm{in}}, \quad z_{\mathrm{out}} = z(T). \tag{2}$$\nLet z1(t) be a solution of (2) on [0,∞) and ε > 0 be a small positive value. We define the set M1 = {(z1(t), t) | t ∈ [0, T], ‖z1(t) − z1(0)‖ ≤ ε}. This set contains all points on the curve of z1(t) during [0, T] that are also inside the ε-neighborhood of z1(0). For some element (z1(T′), T′) ∈ M1, let z̃1(t) be the solution of (2) which starts from z̃1(0) = z1(T′). Then we have\n$$\tilde{z}_1(t) = z_1(t + T') \tag{3}$$\nfor all t in [0,∞). The property shown in equation (3) is known as the time-invariant property. It indicates that the integral curve z̃1(t) is the −T′ shift of z1(t) (Figure 3). We can regard z̃1(0) as a slightly perturbed version of z1(0), and we are interested in how large the difference between z̃1(T) and z1(T) is. In a robust model, the difference should be small. By equation (3), we have ‖z̃1(T) − z1(T)‖ = ‖z1(T + T′) − z1(T)‖. Since T′ ∈ [0, T], the difference between z1(T) and z̃1(T) can be bounded as follows:\n$$\|\tilde{z}_1(T) - z_1(T)\| = \left\| \int_T^{T+T'} f_\theta(z_1(t))\,dt \right\| \le \left\| \int_T^{T+T'} |f_\theta(z_1(t))|\,dt \right\| \le \left\| \int_T^{2T} |f_\theta(z_1(t))|\,dt \right\|, \tag{4}$$\nwhere all norms are ℓ2 norms and |fθ| denotes the element-wise absolute operation of the vector-valued function fθ. That is to say, the difference between z̃1(T) and z1(T) can be bounded by only using the information of the curve z1(t). For any t′ ∈ [0, T] and element (z1(t′), t′) ∈ M1, consider the integral curve that starts from z1(t′). The difference between the output state of this curve and z1(T) satisfies inequality (4).
Therefore, we propose to add an additional term Lss to the loss function when training the time-invariant neural ODE:\n$$L_{ss} = \sum_{i=1}^{N} \left\| \int_T^{2T} |f_\theta(z_i(t))|\,dt \right\|, \tag{5}$$\nwhere N is the number of samples in the training set and zi(t) is the solution whose initial state equals the feature of the i-th sample. The regularization term Lss is termed the steady-state loss. The terminology “steady state” is borrowed from the dynamical systems literature. In a stable dynamical system, the states stabilize around a fixed point, known as the steady state, as time tends to infinity. If we can ensure that Lss is small, then for each sample the outputs of all the points in Mi will stabilize around zi(T). Consequently, the model is robust. This modification of the neural ODE is dubbed the Time-invariant steady neural ODE (TisODE).
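A minimal sketch of the steady-state regularizer in equation (5): starting from the output state z(T), we continue integrating the time-invariant dynamics over [T, 2T] with the Euler scheme and accumulate the elementwise |fθ|. The helper below assumes flattened states of shape (batch, dim); the discretization and the function names are our assumptions, as the paper does not prescribe an implementation.

```python
import torch

def steady_state_loss(func, z_T, T=1.0, step=0.1):
    """Euler approximation of || int_T^{2T} |f_theta(z(t))| dt || for states z_T of
    shape (batch, dim); `func` is the time-invariant dynamics, func(z) -> dz/dt."""
    z = z_T
    acc = torch.zeros_like(z_T)          # accumulates |f_theta(z(t))| dt, elementwise
    for _ in range(int(T / step)):
        dz = func(z)
        acc = acc + step * dz.abs()
        z = z + step * dz                # keep integrating past the output time T
    return acc.norm(dim=1).sum()         # l2 norm per sample, summed over the batch

# Total objective (sketch): loss = task_loss + lambda_ss * steady_state_loss(func, z_T),
# with lambda_ss = 0.1 as used in the experiments of Section 4.2.
```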
4.2 EVALUATING ROBUSTNESS OF TISODE-BASED CLASSIFIERS\nHere, we conduct experiments to evaluate the robustness of our proposed TisODE, and compare TisODE-based models with the vanilla ODENets. We train all models with original non-perturbed images together with their Gaussian perturbed versions. The regularization parameter for the steady-state loss Lss is set to be 0.1. All other hyperparameters are exactly the same as those in Section 3.3.\nFrom the results in Table 3, we can see that our proposed TisODE-based models are clearly more robust than vanilla ODENets. On the MNIST dataset, when combating FGSM-0.3 attacks, the TisODE-based models outperform vanilla ODENets by more than 4 percentage points. For the FGSM-0.5 adversarial examples, the accuracy of the TisODE-based model is 6 percentage points better. On the SVHN dataset, the TisODE-based models perform better in terms of all forms of adversarial examples. On the ImgNet10 dataset, the TisODE-based models also outperform vanilla ODE-based models on all types of perturbations. In the presence of FGSM and PGD-5/255 examples, the accuracies are enhanced by more than 2 percentage points.\n4.3 TISODE - A GENERALLY APPLICABLE DROP-IN TECHNIQUE FOR IMPROVING THE ROBUSTNESS OF DEEP NETWORKS\nIn view of the excellent robustness of the TisODE, we claim that the proposed TisODE can be used as a general drop-in module for improving the robustness of deep networks. We support this claim by showing that the TisODE can work in conjunction with other state-of-the-art techniques and further boost the models' robustness. These techniques include the feature denoising (FDn) method (Xie et al., 2019) and the input randomization (IRd) method (Xie et al., 2017). We conduct experiments on the MNIST and SVHN datasets. All models are trained with original non-perturbed images together with their Gaussian perturbed versions. We show that models using the FDn/IRd techniques become much more robust when equipped with the TisODE. In the FDn experiments, the dot-product non-local denoising layer (Xie et al., 2019) is added to the head of the fully-connected classifier.\nFrom Table 4, we observe that both FDn and IRd can effectively improve the adversarial robustness of vanilla CNN models (CNN-FDn, CNN-IRd). Furthermore, when combining our proposed TisODE with FDn or IRd (TisODE-FDn, TisODE-IRd), the adversarial robustness of the resultant model is significantly enhanced. For example, on the MNIST dataset, the additional use of our TisODE increases the accuracies on the PGD-0.3 examples by at least 10 percentage points for both FDn (8.2% to 28.2%) and IRd (55.5% to 66.0%). However, on both the MNIST and SVHN datasets, while the IRd technique improves the robustness against adversarial examples, its performance is worse on random Gaussian noise. With the help of the TisODE, the degradation in the robustness against random Gaussian noise can be effectively ameliorated.\n5 RELATED WORKS\nIn this section, we briefly review related works on the neural ODE and works concerning improving the robustness of deep neural networks.\nNeural ODE: The neural ODE (Chen et al., 2018) method models the input and output as two states of a continuous-time dynamical system by approximating the dynamics of this system with trainable layers. Before the proposal of the neural ODE, the idea of modeling nonlinear mappings using continuous-time dynamical systems was proposed in Weinan (2017). Lu et al. (2017) also showed that several popular network architectures could be interpreted as the discretization of a continuous-time ODE. For example, the ResNet (He et al., 2016) and PolyNet (Zhang et al., 2017) are associated with the Euler scheme, and the FractalNet (Larsson et al., 2016) is related to the Runge-Kutta scheme. In contrast to these discretization models, neural ODEs are endowed with an intrinsic invertibility property, which yields a family of invertible models for solving inverse problems (Ardizzone et al., 2018), such as the FFJORD (Grathwohl et al., 2018).\nRecently, many researchers have conducted studies on neural ODEs from the perspectives of optimization techniques, approximation capabilities, and generalization. Concerning the optimization of neural ODEs, the auto-differentiation techniques can effectively train ODENets, but the training procedure is computationally and memory inefficient. To address this problem, Chen et al. (2018) proposed to compute gradients using the adjoint sensitivity method (Pontryagin, 2018), in which there is no need to store any intermediate quantities of the forward pass. Also, in Quaglino et al. (2019), the authors proposed the SNet, which accelerates neural ODEs by expressing their dynamics as truncated series of Legendre polynomials. Concerning the approximation capability, Dupont et al. (2019) pointed out limitations in the approximation capabilities of neural ODEs, which arise because they preserve the topology of the input space. The authors proposed an augmented neural ODE which increases the dimension of the states by concatenating zeros so that complex mappings can be learned with simple flows. The most relevant work to ours concerns strategies to improve the generalization of neural ODEs. In Liu et al. (2019), the authors proposed the neural stochastic differential equation (SDE) by injecting random noise into the dynamics function and showed that the generalization and robustness of vanilla neural ODEs could be improved. However, our improvement on neural ODEs is explored from a different perspective by introducing constraints on the flow. We empirically found that our proposal and the neural SDE can work in tandem to further boost the robustness of neural ODEs.\nRobust Improvement: A straightforward way of improving the robustness of a model is to smooth the loss surface by controlling the spectral norm of the Jacobian matrix of the loss function (Sokolić et al., 2017). 
In terms of adversarial examples (Carlini & Wagner, 2017; Chen et al., 2017), researchers have proposed adversarial training strategies (Madry et al., 2017; Elsayed et al., 2018; Tramèr et al., 2017) in which the model is fine-tuned with adversarial examples generated in real time. However, generating adversarial examples is not computationally efficient, and there exists a trade-off between the adversarial robustness and the performance on original non-perturbed images (Yan et al., 2018; Tsipras et al., 2018). In Wang et al. (2018a), the authors model the ResNet as a transport equation, in which the adversarial vulnerability can be interpreted as the irregularity of the decision boundary. Consequently, a diffusion term is introduced to enhance the robustness of the neural nets. Besides, there are also some works that propose novel architectural defense mechanisms against adversarial examples. For example, Xie et al. (2017) utilized random resizing and random padding to destroy the specific structure of adversarial perturbations; Wang et al. (2018b) and Wang et al. (2018c) improved the robustness of neural networks by replacing the output layers with novel interpolating functions; in Xie et al. (2019), the authors designed a feature denoising filter that can remove the perturbation's pattern from feature maps. In this work, we explore the intrinsic robustness of a specific novel architecture (the neural ODE), and show that the proposed TisODE can improve the robustness of deep networks and can also work in tandem with these state-of-the-art methods (Xie et al., 2017; 2019) to achieve further improvements.\n6 CONCLUSION\nIn this paper, we first empirically study the robustness of neural ODEs. Our studies reveal that neural ODE-based models are superior in terms of robustness compared to CNN models. We then explore how to further boost the robustness of vanilla neural ODEs and propose the TisODE. Finally, we show that the proposed TisODE outperforms the vanilla neural ODE and can also work in conjunction with other state-of-the-art techniques to further improve the robustness of deep networks. Thus, the TisODE method is an effective drop-in module for building robust deep models.\nACKNOWLEDGEMENT\nThis work is funded by a Singapore National Research Foundation (NRF) Fellowship (R-263-000-D02-281).\nJiashi Feng was partially supported by NUS IDS R-263-000-C67-646, ECRA R-263-000-C87-133, MOE Tier-II R-263-000-D17-112 and AI.SG R-263-000-D97-490.\nREFERENCES\nLynton Ardizzone, Jakob Kruse, Sebastian Wirkert, Daniel Rahner, Eric W Pellegrini, Ralf S Klessen, Lena Maier-Hein, Carsten Rother, and Ullrich Köthe. Analyzing inverse problems with invertible neural networks. arXiv preprint arXiv:1808.04730, 2018.\nNicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 39–57. IEEE, 2017.\nPin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, and Cho-Jui Hsieh. Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pp. 15–26. ACM, 2017.\nTian Qi Chen, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. Neural ordinary differential equations. In Advances in neural information processing systems, pp. 6571–6583, 2018.\nEarl A Coddington and Norman Levinson. Theory of ordinary differential equations. 
Tata McGrawHill Education, 1955.\nJia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248–255. Ieee, 2009.\nEmilien Dupont, Arnaud Doucet, and Yee Whye Teh. Augmented neural odes. arXiv preprint arXiv:1904.01681, 2019.\nGamaleldin F Elsayed, Shreya Shankar, Brian Cheung, Nicolas Papernot, Alex Kurakin, Ian Goodfellow, and Jascha Sohl-Dickstein. Adversarial examples that fool both human and computer vision. arXiv preprint arXiv:1802.08195, 10, 2018.\nIan J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.\nWill Grathwohl, Ricky TQ Chen, Jesse Betterncourt, Ilya Sutskever, and David Duvenaud. Ffjord: Free-form continuous dynamics for scalable reversible generative models. arXiv preprint arXiv:1810.01367, 2018.\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016.\nRalph Howard. The gronwall inequality. lecture notes, 1998.\nGustav Larsson, Michael Maire, and Gregory Shakhnarovich. Fractalnet: Ultra-deep neural networks without residuals. arXiv preprint arXiv:1605.07648, 2016.\nYann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner, et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.\nXuanqing Liu, Si Si, Qin Cao, Sanjiv Kumar, and Cho-Jui Hsieh. Neural sde: Stabilizing neural ode networks with stochastic noise. arXiv preprint arXiv:1906.02355, 2019.\nYiping Lu, Aoxiao Zhong, Quanzheng Li, and Bin Dong. Beyond finite layer neural networks: Bridging deep architectures and numerical differential equations. arXiv preprint arXiv:1710.10121, 2017.\nAleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.\nYuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. 2011.\nAdam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. 2017.\nLev Semenovich Pontryagin. Mathematical theory of optimal processes. Routledge, 2018.\nAlessio Quaglino, Marco Gallieri, Jonathan Masci, and Jan Koutnı́k. Accelerating neural odes with spectral elements. arXiv preprint arXiv:1906.07038, 2019.\nJure Sokolić, Raja Giryes, Guillermo Sapiro, and Miguel RD Rodrigues. Robust large margin deep neural networks. IEEE Transactions on Signal Processing, 65(16):4265–4280, 2017.\nChristian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.\nFlorian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204, 2017.\nDimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. Robustness may be at odds with accuracy. arXiv preprint arXiv:1805.12152, 2018.\nB. Wang, B. Yuan, Z. Shi, and S. Osher. 
ResNets Ensemble via the Feynman-Kac Formalism to Improve Natural and Robust Accuracies. arXiv e-prints, art. arXiv:1811.10745, Nov 2018a.\nBao Wang, Alex T Lin, Zuoqiang Shi, Wei Zhu, Penghang Yin, Andrea L Bertozzi, and Stanley J Osher. Adversarial defense via data dependent activation function and total variation minimization. arXiv preprint arXiv:1809.08516, 2018b.\nBao Wang, Xiyang Luo, Zhen Li, Wei Zhu, Zuoqiang Shi, and Stanley Osher. Deep neural nets with interpolating function as output activation. In Advances in Neural Information Processing Systems, pp. 743–753, 2018c.\nE Weinan. A proposal on machine learning via dynamical systems. Communications in Mathematics and Statistics, 5(1):1–11, 2017.\nCihang Xie, Jianyu Wang, Zhishuai Zhang, Zhou Ren, and Alan Yuille. Mitigating adversarial effects through randomization. arXiv preprint arXiv:1711.01991, 2017.\nCihang Xie, Yuxin Wu, Laurens van der Maaten, Alan L Yuille, and Kaiming He. Feature denoising for improving adversarial robustness. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 501–509, 2019.\nZiang Yan, Yiwen Guo, and Changshui Zhang. Deep defense: Training dnns with improved adversarial robustness. In Advances in Neural Information Processing Systems, pp. 419–428, 2018.\nLaurent Younes. Shapes and diffeomorphisms, volume 171. Springer, 2010.\nXingcheng Zhang, Zhizhong Li, Chen Change Loy, and Dahua Lin. Polynet: A pursuit of structural diversity in very deep networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 718–726, 2017.\n7 APPENDIX\n7.1 NETWORKS USED ON THE MNIST, THE SVHN, AND THE IMGNET10 DATASETS\nIn Table 5, the four arguments of the Conv layer represent the input channels, output channels, kernel size, and stride. The two arguments of the Linear layer represent the input dimension and the output dimension of this fully-connected layer. In the network on ImgNet10, the BasicBlock refers to the standard architecture in He et al. (2016); the three arguments of the BasicBlock represent the input channels, output channels, and the stride of the Conv layers inside the block. Note that we replace the BatchNorm layers in the BasicBlocks with GroupNorm layers to guarantee that the dynamics of each datum is independent of the other data in the same mini-batch.\n7.2 THE CONSTRUCTION OF THE IMGNET10 DATASET\n7.3 GRONWALL'S INEQUALITY\nWe formally state Gronwall's Inequality here, following the version in Howard (1998).\nTheorem 2. Let U ⊂ ℝ^d be an open set. Let f : U × [0, T] → ℝ^d be a continuous function and let z1, z2 : [0, T] → U satisfy the initial value problems:\n$$\frac{dz_1(t)}{dt} = f(z_1(t), t), \quad z_1(0) = x_1; \qquad \frac{dz_2(t)}{dt} = f(z_2(t), t), \quad z_2(0) = x_2.$$\nAssume there is a constant C ≥ 0 such that, for all t ∈ [0, T],\n$$\|f(z_2(t), t) - f(z_1(t), t)\| \le C \|z_2(t) - z_1(t)\|.$$\nThen, for any t ∈ [0, T], ‖z1(t) − z2(t)‖ ≤ ‖x2 − x1‖ · e^{Ct}.\n7.4 MORE EXPERIMENTAL RESULTS\n7.4.1 COMPARISON IN THE SETTING OF ADVERSARIAL TRAINING\nWe implement adversarial training of the models on the MNIST dataset; the adversarial examples for training are generated in real time via the FGSM method (ε = 0.3) during each epoch (Madry et al., 2017). The results of the adversarially trained models are shown in Table 7. We can observe that the neural ODE-based models are consistently more robust than the CNN models. The proposed TisODE also outperforms the vanilla neural ODE.
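The adversarial training recipe used here is the standard one: each batch is augmented with adversarial examples generated on the fly. The sketch below reuses the `fgsm_attack` helper sketched in Section 3 and is an assumed, not verbatim, implementation.

```python
import torch.nn.functional as F

def adversarial_train_step(model, optimizer, x, y, eps=0.3):
    """One step on clean plus FGSM-perturbed examples, generated in real time."""
    model.train()
    x_adv = fgsm_attack(model, x, y, eps)   # epsilon = 0.3, as in Table 7
    optimizer.zero_grad()                   # clear gradients left over from the attack
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```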
7.4.2 EXPERIMENTS ON THE CIFAR10 DATASET\nWe conduct experiments on CIFAR10 to compare the robustness of CNN and neural ODE-based models. We train all the models only with original non-perturbed images and evaluate the robustness of the models against random Gaussian noise and FGSM adversarial attacks. The results are shown in Table 8. We can observe that the ODENet is more robust than the CNN model in terms of both the random noise and the FGSM attack. Besides, our proposal, the TisODE, can improve the robustness of the vanilla neural ODE.\nHere, we control the number of parameters to be the same for all kinds of models. We use a small network, which consists of five convolutional layers and one linear layer.\n7.4.3 AN EXTENSION ON THE COMPARISON BETWEEN CNNS AND ODENETS\nHere, we compare CNN and neural ODE-based models by controlling both the number of parameters and the number of function evaluations. We conduct experiments on the MNIST dataset, and all the models are trained only with original non-perturbed images.\nFor the neural ODE-based models, the time range is set from 0 to 1. We use the Euler method, and the step size is set to be 0.05. Thus the number of evaluations is 1/0.05 = 20. For the CNN models (specifically ResNet), we repeatedly concatenate the residual block 20 times, and these 20 blocks share the same weights. Our experiments show that, in this condition, the neural ODE-based models still outperform the CNN models (FGSM-0.15: 87.5% vs. 81.9%, FGSM-0.3: 53.4% vs. 49.7%, PGD-0.2: 11.8% vs. 4.8%)." } ]
2020
On Robustness of Neural Ordinary Differential Equations
SP:674372d2a8bfd6460e61cf6d39f85a9128cdf131
[ "This paper proposes a new way of finding the Granger temporal-causal network based on attention mechanism on the predictions obtained by individual time series. It describes a surprisingly complex procedure for computing the attention vector based on combining Granger-inspired attentions with attentions obtained during a diverse prototype generation process. There are also extensive experiments demonstrating the success of the proposed method in uncovering the underlying temporal-causal graph.", "The paper proposes a novel way of reconstructing Granger causal structures using a differentiable neural network architecture that contains attention modules that are proportional to the Granger causality of the input layers. Furthermore, the architecture blends individual-specific induced causal structures and cross-population prototypical causal structures. The paper has an extensive experimental section on which the proposed method shows impressive improvements in causal discovery performance and predictive performance on par with state-of-the-art." ]
Granger causal structure reconstruction is an emerging topic that can uncover the causal relationships behind multivariate time series data. In many real-world systems, it is common to encounter a large amount of multivariate time series data collected from heterogeneous individuals that share commonalities; however, there are ongoing concerns regarding its applicability in such large-scale, complex scenarios, presenting both challenges and opportunities for Granger causal reconstruction. To bridge this gap, we propose a Granger cAusal StructurE Reconstruction (GASER) framework for inductive Granger causality learning and common causal structure detection on heterogeneous multivariate time series. In particular, we address the problem through a novel attention mechanism, called prototypical Granger causal attention. Extensive experiments, as well as an online A/B test on an e-commerce advertising platform, demonstrate the superior performance of GASER.
[]
[ { "authors": [ "Martı́n Abadi", "Ashish Agarwal", "Paul Barham", "Eugene Brevdo", "Zhifeng Chen", "Craig Citro", "Greg S Corrado", "Andy Davis", "Jeffrey Dean", "Matthieu Devin" ], "title": "Tensorflow: Large-scale machine learning on heterogeneous distributed systems", "venue": "arXiv preprint arXiv:1603.04467,", "year": 2016 }, { "authors": [ "Amir F Atiya", "Alexander G Parlos" ], "title": "New results on recurrent network training: unifying the algorithms and accelerating convergence", "venue": "IEEE transactions on neural networks,", "year": 2000 }, { "authors": [ "Jacob Bien", "Robert Tibshirani" ], "title": "Prototype selection for interpretable classification", "venue": "The Annals of Applied Statistics,", "year": 2011 }, { "authors": [ "Chaofan Chen", "Oscar Li", "Chaofan Tao", "Alina Jade Barnett", "Jonathan Su", "Cynthia Rudin" ], "title": "This looks like that: deep learning for interpretable image recognition", "venue": "arXiv preprint arXiv:1806.10574,", "year": 2018 }, { "authors": [ "David Maxwell Chickering" ], "title": "Optimal structure identification with greedy search", "venue": "Journal of machine learning research,", "year": 2002 }, { "authors": [ "Kyunghyun Cho", "Bart Van Merriënboer", "Caglar Gulcehre", "Dzmitry Bahdanau", "Fethi Bougares", "Holger Schwenk", "Yoshua Bengio" ], "title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation", "venue": "arXiv preprint arXiv:1406.1078,", "year": 2014 }, { "authors": [ "Xuan-Hong Dang", "Syed Yousaf Shah", "Petros Zerfos" ], "title": "seq2graph: Discovering dynamic dependencies from multivariate time series with multi-level attention", "venue": "arXiv preprint arXiv:1812.04448,", "year": 2018 }, { "authors": [ "Michael Eichler" ], "title": "Causal inference with multiple time series: principles and problems", "venue": "Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences,", "year": 1997 }, { "authors": [ "Tom Fawcett" ], "title": "An introduction to roc analysis", "venue": "Pattern recognition letters,", "year": 2006 }, { "authors": [ "Clive WJ Granger" ], "title": "Investigating causal relations by econometric models and cross-spectral methods", "venue": "Econometrica: Journal of the Econometric Society,", "year": 1969 }, { "authors": [ "Clive WJ Granger" ], "title": "Testing for causality: a personal viewpoint", "venue": "Journal of Economic Dynamics and control,", "year": 1980 }, { "authors": [ "Ruocheng Guo", "Lu Cheng", "Jundong Li", "P Richard Hahn", "Huan Liu" ], "title": "A survey of learning causality with data: Problems and methods", "venue": "arXiv preprint arXiv:1809.09337,", "year": 2018 }, { "authors": [ "Tian Guo", "Tao Lin", "Nino Antulov-Fantulin" ], "title": "Exploring interpretable lstm neural networks over multi-variable data", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Biwei Huang", "Kun Zhang", "Jiji Zhang", "Joseph Ramsey", "Bernhard Schölkopf", "Clark Glymour" ], "title": "Causal discovery and hidden driving force estimation from nonstationary/heterogeneous data", "venue": null, "year": 1903 }, { "authors": [ "Eric Jang", "Shixiang Gu", "Ben Poole" ], "title": "Categorical reparameterization with gumbel-softmax", "venue": "arXiv preprint arXiv:1611.01144,", "year": 2016 }, { "authors": [ "Been Kim", "Cynthia Rudin", 
"Julie Shah" ], "title": "The bayesian case model: A generative approach for case-based reasoning and prototype classification", "venue": "In Proceedings of Neural Information Processing Systems (NIPS),", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Thomas Kipf", "Ethan Fetaya", "Kuan-Chieh Wang", "Max Welling", "Richard Zemel" ], "title": "Neural relational inference for interacting systems", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Samantha Kleinberg" ], "title": "Causality, probability, and time. In Causality, probability, and time", "venue": null, "year": 2009 }, { "authors": [ "Oscar Li", "Hao Liu", "Chaofan Chen", "Cynthia Rudin" ], "title": "Deep learning for case-based reasoning through prototypes: A neural network that explains its predictions", "venue": "In Proceedings of AAAI,", "year": 2018 }, { "authors": [ "David Lopez-Paz", "Krikamol Muandet", "Bernhard Schölkopf", "Iliya Tolstikhin" ], "title": "Towards a learning theory of cause-effect inference", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Helmut Lütkepohl" ], "title": "New introduction to multiple time series analysis", "venue": "Springer Science & Business Media,", "year": 2005 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of machine learning research,", "year": 2008 }, { "authors": [ "Chris J Maddison", "Andriy Mnih", "Yee Whye Teh" ], "title": "The concrete distribution: A continuous relaxation of discrete random variables", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Yao Ming", "Panpan Xu", "Huamin Qu", "Liu Ren" ], "title": "Interpretable and steerable sequence learning via prototypes", "venue": "In KDD,", "year": 2019 }, { "authors": [ "Meike Nauta", "Doina Bucur", "Christin Seifert" ], "title": "Causal discovery with attention-based convolutional neural networks", "venue": "Machine Learning and Knowledge Extraction,", "year": 2019 }, { "authors": [ "Judea Pearl" ], "title": "Causality: models, reasoning and inference, volume 29", "venue": null, "year": 2000 }, { "authors": [ "Jonas Peters", "Dominik Janzing", "Bernhard Schölkopf" ], "title": "Causal inference on time series using restricted structural equation models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Jonas Peters", "Dominik Janzing", "Bernhard Schölkopf" ], "title": "Elements of causal inference: foundations and learning algorithms", "venue": "MIT press,", "year": 2017 }, { "authors": [ "Yao Qin", "Dongjin Song", "Haifeng Cheng", "Wei Cheng", "Guofei Jiang", "Garrison W Cottrell" ], "title": "A dual-stage attention-based recurrent neural network for time series prediction", "venue": "In Proceedings of the 26th International Joint Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Jakob Runge", "Peer Nowack", "Marlene Kretschmer", "Seth Flaxman", "Dino Sejdinovic" ], "title": "Detecting causal associations in large nonlinear time series datasets", "venue": "arXiv preprint arXiv:1702.07007,", "year": 2017 }, { "authors": [ "Patrick Schwab", "Djordje Miladinovic", "Walter Karlen" ], "title": "Granger-causal attentive mixtures of experts: Learning important features with neural networks", 
"venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Skipper Seabold", "Josef Perktold" ], "title": "Statsmodels: Econometric and statistical modeling with python", "venue": "In 9th Python in Science Conference,", "year": 2010 }, { "authors": [ "Shohei Shimizu", "Patrik O Hoyer", "Aapo Hyvärinen", "Antti Kerminen" ], "title": "A linear non-gaussian acyclic model for causal discovery", "venue": "Journal of Machine Learning Research,", "year": 2003 }, { "authors": [ "Stephen Slade" ], "title": "Case-based reasoning: A research paradigm", "venue": "AI magazine,", "year": 1991 }, { "authors": [ "Stephen M Smith", "Karla L Miller", "Gholamreza Salimi-Khorshidi", "Matthew Webster", "Christian F Beckmann", "Thomas E Nichols", "Joseph D Ramsey", "Mark W Woolrich" ], "title": "Network modelling methods for fmri", "venue": null, "year": 2011 }, { "authors": [ "Jake Snell", "Kevin Swersky", "Richard Zemel" ], "title": "Prototypical networks for few-shot learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Mukund Sundararajan", "Ankur Taly", "Qiqi Yan" ], "title": "Axiomatic attribution for deep networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Yue Yu", "Jie Chen", "Tian Gao", "Mo Yu" ], "title": "Dag-gnn: Dag structure learning with graph neural networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Kun Zhang", "Biwei Huang", "Jiji Zhang", "Clark Glymour", "Bernhard Schölkopf" ], "title": "Causal discovery from nonstationary/heterogeneous data: Skeleton estimation and orientation determination", "venue": "In IJCAI: Proceedings of the Conference,", "year": 2017 } ]
[ { "heading": null, "text": "Granger causal structure reconstruction is an emerging topic that can uncover causal relationship behind multivariate time series data. In many real-world systems, it is common to encounter a large amount of multivariate time series data collected from heterogeneous individuals with sharing commonalities, however there are ongoing concerns regarding its applicability in such large scale complex scenarios, presenting both challenges and opportunities for Granger causal reconstruction. To bridge this gap, we propose a Granger cAusal StructurE Reconstruction (GASER) framework for inductive Granger causality learning and common causal structure detection on heterogeneous multivariate time series. In particular, we address the problem through a novel attention mechanism, called prototypical Granger causal attention. Extensive experiments, as well as an online A/B test on an E-commercial advertising platform, demonstrate the superior performances of GASER." }, { "heading": "1 INTRODUCTION", "text": "Broadly, machine learning tasks are either predictive or descriptive in nature, often addressed by black-box methods (Guo et al., 2018). With the power of uncovering relationship behind the data and providing explanatory analyses, causality inference has drawn increasing attention in many fields, e.g. marketing, economics, and neuroscience (Pearl, 2000; Peters et al., 2017). Since the cause generally precedes its effects, known as temporal precedence (Eichler, 2013), recently, an increasing number of studies have focused on causal discovery from time series data. They are commonly based on the concept of Granger causality (Granger, 1969; 1980) to investigate the causal relationship with quantification measures.\nIn many real-world systems, it is common to encounter a large amount of multivariate time series (MTS) data collected from different individuals with shared commonalities, which we define as heterogeneous multivariate time series. The underlying causal structures of such data often vary (Zhang et al., 2017; Huang et al., 2019). For example, in the financial market, the underlying causal drivers of stock prices are often heterogeneous across stocks of different plates. Similar phenomenons are also observed in the sales of different products in E-commerce. To this situation, most existing methods have to train separate models for MTS of each individual, which suffer from over-fitting especially given limited training samples. Although some works have been proposed to solve such problem (Zhang et al., 2017; Huang et al., 2019), they lack the inductive capability to do inferences for unseen samples and fall short of fully exploiting shared causal information among the heterogeneous data which often exist in practice. For instance, the causal structures of the products belonging to the same categories are usually similar. Such shared information presents opportunities for causal reconstruction to alleviate overfitting and to do inductive reasoning. However, it is also challenging to detect common and specific causal structures simultaneously.\nIn this paper, we propose a Granger cAusal StructurE Reconstruction (GASER) framework for inductive Granger causality learning and common causal structure detection on heterogeneous multivariate time series data. Our approach builds on the idea of quantifying the contributions of each variable series into the prediction of target variable via a novel designed prototypical Granger causal attention mechanism. 
In order to ensure that the attention captures Granger causality, we first design an attention mechanism based on the Granger causal attribution of the target series and then perform prototype learning, which generates both shared and specific prototypes to improve the model's robustness. Extensive experiments demonstrate the superior causal structure reconstruction and prediction performances of GASER. In summary, our specific contributions are as follows:\n• A novel framework that inductively reconstructs Granger causal structures and uncovers common structures among heterogeneous multivariate time series.\n• A prototypical Granger causal attention mechanism that summarizes variable-wise contributions towards prediction and generates prototypes representing common causal structures.\n• Extensive experiments on real-world, benchmark and synthetic datasets, as well as an online A/B test on an E-commercial advertising platform, that demonstrate superior causal discovery performance and prediction performance comparable to state-of-the-art methods." }, { "heading": "2 GASER", "text": "In this section, we formally define the problem, introduce the architecture of GASER, and present the prototypical Granger causal attention together with the final objective function." }, { "heading": "2.1 PROBLEM DEFINITION", "text": "Assuming we have a set of heterogeneous multivariate time series from N individuals, i.e., X = {X_i}_{i=1}^N, with each consisting of S time series of length T, denoted as X_i = (x_i^1, x_i^2, . . . , x_i^S)^T ∈ R^{S×T}, where x_i^s = (x_{i,1}^s, x_{i,2}^s, . . . , x_{i,T}^s)^T ∈ R^T represents the s-th time series of individual i, and one of them is taken as the target series y_i. We aim to train a model that (1) reconstructs Granger causal structures among variables for each individual; (2) generates K common structures among all the N individuals, each structure represented by a prototype p_k ∈ R^S, k = 1, ..., K; and (3) learns a nonlinear mapping to predict the next value of the target variable series for each individual, i.e., ŷ_{i,T+1} = F(X_i)." }, { "heading": "2.2 NETWORK ARCHITECTURE", "text": "Our GASER framework consists of two parts: a set of parallel encoders, each predicting the target given the past observations, and an attention mechanism that generates prototypical Granger causal attention vectors to quantify variable-wise contributions towards prediction. Figure 1 illustrates the overall framework of GASER. As illustrated in Figure 1(a), for an input multivariate time series X_i, the encoder specific to the s-th variable projects the time series x_i^s into a sequence of hidden states, denoted as h_{i,t}^s = H_s(x_{i,t}^s, h_{i,t−1}^s). The encoder could be any RNN model, such as LSTM (Hochreiter & Schmidhuber, 1997) or GRU (Cho et al., 2014). The last hidden states, {h_{i,T}^s}_{s=1}^S, are used as the hidden embeddings of each variable. Then the predicted next value of the target variable conditioned on the historical data of variable s, denoted as ŷ_{i,T+1}^s, can be computed by ŷ_{i,T+1}^s = f_s(h_{i,T}^s), where f_s(·) denotes the MLP network specific to variable s. Then we obtain the prediction ŷ_{i,T+1} by aggregating the predicted values specific to variables through the prototypical Granger causal attention described below." }, { "heading": "2.3 PROTOTYPICAL GRANGER CAUSAL ATTENTION", "text": "We propose a novel attention mechanism in GASER, namely prototypical Granger causal attention, to reconstruct Granger causal relationships for each individual and uncover common causal structures among heterogeneous individuals. 
The goal is to learn attentions that reflect the Granger causal strength between variables for each individual, and to generate prototypes among heterogeneous individuals. As illustrated in Figure 1(b), the idea of the prototypical Granger causal attention mechanism is as follows. The Granger causal attribution corresponding to each individual is first computed according to the concept of Granger causality, followed by prototype learning that summarizes common causal structures for heterogeneous individuals in the training set, and produces the attention vector specific to each individual. The details of these two parts are described below." }, { "heading": "2.3.1 GRANGER CAUSAL ATTRIBUTION", "text": "Granger causality (Granger, 1969; 1980) is a concept of causality based on prediction, which declares that if a time series x Granger-causes a time series y, then y can be better predicted using all available information than if the information apart from x had been used. Thus, we obtain the Granger causal attributions by comparing the prediction error when using all available information with the error when using the information excluding one variable series. In particular, given all the hidden embeddings {h_{i,T}^s}_{s=1}^S of individual i, we obtain the embedding that encodes all available information and the one that encodes all available information excluding one variable s, denoted as h_i^{all} and h_i^{all\\s} respectively, by concatenating the embeddings of the corresponding variables:\nh_i^{all} = [h_{i,T}^j]_{j=1}^S, h_i^{all\\s} = [h_{i,T}^j]_{j=1, j≠s}^S, (1)\nwhere [·] represents the concatenation operation. Then we feed them into the respective predictors, denoted as g_{all}(·) and g_s(·), to get the predicted value of the target and compute the squared errors:\nŷ_{i,T+1}^{all} = g_{all}(h_i^{all}), ŷ_{i,T+1}^{all\\s} = g_s(h_i^{all\\s}), (2)\nε_i^{all} = (ŷ_{i,T+1}^{all} − y_{i,T+1})^2, ε_i^{all\\s} = (ŷ_{i,T+1}^{all\\s} − y_{i,T+1})^2, (3)\nwhere the predictors g_{all}(·) and g_s(·) can be MLP networks. Inspired by Schwab et al. (2019), we define the Granger causal attribution of the target variable corresponding to variable s as the decrease in error when adding the s-th series to the set of available information, computed as:\n∆ε_i^s = ReLU(ε_i^{all\\s} − ε_i^{all}), (4)\nwhere ReLU(·) is the rectified linear unit. For each individual i, by normalising the Granger causal attribution, we obtain an attention vector that reflects Granger causality, namely the Granger causal attention, denoted as q_i. The attention factor for variable s can be computed as:\nq_i^s = ∆ε_i^s / Σ_{j=1}^S ∆ε_i^j. (5)" }, { "heading": "2.3.2 PROTOTYPE LEARNING", "text": "The Granger causal attention above is not robust enough to reconstruct the Granger causal structure given limited data (e.g., very short time series) of each individual in training. We address the problem by generating Granger causal prototypes from all the individuals, under the assumption that there should be several common causal structures among heterogeneous individuals. In particular, we assume there exist K Granger causal prototypes, denoted as {p_k}_{k=1}^K, and compute the similarity between the Granger causal attention vector q_i of individual i and each prototype vector p_k. 
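As a brief aside before the similarity computation, Equations (4)-(5) admit a very small implementation. The following is a minimal NumPy sketch (our own illustration, not code from the paper; the uniform fallback for an all-zero attribution is our assumption, as the text leaves that case unspecified):

```python
import numpy as np

def granger_causal_attention(err_all, err_excl):
    """Granger causal attention for one individual, per Eqs. (4)-(5).

    err_all:  scalar squared error of the predictor that uses all variables.
    err_excl: (S,) array; err_excl[s] is the error when variable s is excluded.
    Returns an attention vector q of shape (S,) that sums to one.
    """
    delta = np.maximum(err_excl - err_all, 0.0)   # ReLU attribution, Eq. (4)
    total = delta.sum()
    if total == 0.0:                              # no variable helps prediction
        return np.full_like(delta, 1.0 / delta.size)
    return delta / total                          # normalization, Eq. (5)
```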
Since the attention can be seen as a distribution, we use the cosine similarity:\nd_{k,i} = (p_k · q_i) / (‖p_k‖ ‖q_i‖), (6)\nThen we output the prototype most similar to q_i by sampling from the similarity distribution d_i using Gumbel-Softmax (Maddison et al., 2017; Jang et al., 2016), which samples from a reparameterized continuous distribution approximation to the categorical one-hot distribution:\ne = GumbelSoftmax(d_i) = softmax((log(d_i) + g)/τ), (7)\nwhere GumbelSoftmax(·) denotes the Gumbel-Softmax function, e ∈ R^K is the sample vector which approaches one-hot, and g is a vector of i.i.d. samples drawn from the Gumbel(0, 1) distribution. τ is the softmax temperature, and the distribution becomes discrete when τ goes to 0. With the sample vector e, the output prototype p̂ can be obtained as:\np̂ = [p_1, p_2, . . . , p_K] · e. (8)\nAfter normalizing the sampled prototype, we obtain an attention vector for individual i, denoted as r_i, namely the prototypical attention.\nThe Granger causal attention reflects the Granger causal structure specific to each individual, while the prototypical attention reflects the common Granger causal structure most similar to the Granger causal structure of each individual. To detect the specific and common causal structures simultaneously, we summarize them together and generate the prototypical Granger causal attention a_i as follows: a_i = αq_i + (1 − α)r_i, (9) where α ∈ [0, 1] is a hyperparameter that controls the ratio of the two attention mechanisms. Finally, the prediction of the target variable's next value can be computed as the weighted sum of the predicted values from all variables:\nŷ_{i,T+1} = Σ_{s=1}^S a_i^s ŷ_{i,T+1}^s. (10)" }, { "heading": "2.4 LEARNING OBJECTIVE", "text": "In order to obtain accurate prediction and Granger causality structure, and to generate diverse common causality structures, the objectives of GASER consist of three parts. The first two objective functions encourage accurate predictors, including the predictors f(·) that perform the final prediction and the auxiliary predictors g(·) that compute the Granger attribution, and we adopt the mean squared error (MSE) as the prediction loss function:\nL_pred = (1/N) Σ_{i=1}^N (ŷ_{i,T+1} − y_{i,T+1})^2, L_aux = (1/N) Σ_{i=1}^N (ε_i^{all} + Σ_{s=1}^S ε_i^{all\\s}). (11)\nThe last objective function avoids duplicate prototypes through a diversity regularization term that penalizes prototypes that are similar to each other (Ming et al., 2019):\nL_div = Σ_{i=1}^K Σ_{j=i+1}^K max(γ, (p_i · p_j) / (‖p_i‖ ‖p_j‖)), (12)\nwhere γ controls the closeness to a tolerable degree.\nTo summarize, the loss function, denoted by L, is given by: L = L_pred + λ_1 L_aux + λ_2 L_div, (13)\nwhere λ_1 and λ_2 are hyperparameters that adjust the ratios between the losses." }, { "heading": "3 EXPERIMENTS", "text": "In this section, we evaluate the causal structure reconstruction performance on multivariate time series from both single individuals and multiple individuals, as well as the prediction performance of GASER. We also conduct an online A/B test on an E-commerce advertising platform to further test GASER in more practical situations." }, { "heading": "3.1 EXPERIMENTAL SETUP", "text": "We first evaluate the causal structure reconstruction performances on two causal benchmark datasets.\nFinance (Kleinberg, 2009) consists of simulated financial market time series with known underlying causal structures. Each dataset includes 25 variables of length 4,000. 
For each dataset, we choose the variables that are related to the most causes as the target variables, in order to test model abilities in the relatively most challenging scenarios.\nFMRI (Smith et al., 2011) contains 28 different Blood-oxygen-level dependent time series datasets with ground-truth causal structures. In the experiments, we evaluate on the first 5 datasets and take the first variable as the target, as causal variables are distributed relatively evenly in this dataset.\nThen, we evaluate the causal structure reconstruction performance on heterogeneous individuals on synthetic data:\nSynthetic data: We first obtain the S exogenous time series through the following Non-linear Autoregressive Moving Average (NARMA) (Atiya & Parlos, 2000) generators:\nx_{i,t}^s = α_s x_{i,t−1}^s + β_s x_{i,t−1}^s Σ_{j=1}^d x_{i,t−j}^s + γ_s ε_{i,t−d} ε_{i,t−1} + ε_{i,t}, (14)\nwhere the ε_{i,t} are zero-mean noise terms of 0.01 variance, d is the order of non-linear interactions, and α_s, β_s and γ_s are parameters specific to variable s, generated from N(0, 0.1). Then, we generate the target series from the generated exogenous series via the formula:\ny_{i,t} = Σ_{s=1}^S ω_i^s (η_i^s)^T tanh(x_{i,t−p:t−1}^s) + ε_{i,t}, (15)\nwhere ω_i^s ∈ {0, 1}, with 0.6 probability of being zero, controls the underlying causal relationship from the s-th variable to the target variable, η_i^s ∈ R^p controls the causal strength, sampled from Unif{−1, 1}, and x_{i,t−p:t−1}^s = (x_{i,t−p}^s, x_{i,t−p+1}^s, . . . , x_{i,t−1}^s)^T ∈ R^p represents the last p historical values of variable s of sample i. The 0-1 indicator vector ω_i = (ω_i^1, ω_i^2, . . . , ω_i^S)^T ∈ R^S is the ground-truth causal structure of the i-th individual.\nFor the causal structure reconstruction task, we compare our method with previous causal discovery methods, including linear Granger causality (Granger, 1969; Lütkepohl, 2005) and TCDF (Nauta et al., 2019), as well as the interpretable neural network based prediction method IMV-LSTM (Guo et al., 2019), using the standard metrics of Area Under the Precision-Recall Curve (PR-AUC) and Area Under the ROC Curve (ROC-AUC) (Fawcett, 2006).\nSince a byproduct of GASER is time series prediction, we also evaluate the prediction performance on the real-world datasets PM2.5 and SML:\nPM2.5 contains the hourly PM2.5 and meteorological data in Beijing from Jan 2010 to Dec 2014; it includes 7 variables (such as PM2.5 concentration, temperature, pressure and wind speed), and forms a multivariate time series of length 43,824. The PM2.5 concentration is the target series. The dataset is split into training (60%), validation (20%) and testing (20%) sets.\nSML is a monitoring dataset for temperature forecasting, collected from a monitoring system in a domotic house for approximately 40 days. The data are sampled every minute and smoothed with the mean of every 15 minutes, forming an MTS of length 4,137. We predict the dining-room temperature with 17 relevant variable series. The first 3,200, the following 400 and the last 537 data points are respectively used for training, validation, and testing.\nWe compare with linear Granger causality (VAR) and the state-of-the-art prediction models DUAL (Qin et al., 2017) and IMV-LSTM (Guo et al., 2019), and adopt Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) as metrics." 
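As a concrete illustration of the synthetic setup above, the following is a minimal NumPy sketch of the generators in Equations (14)-(15) (our own code; treating N(0, 0.1) as zero mean with variance 0.1, using independent per-variable noise, and omitting any stability safeguard for the NARMA recursion are our assumptions where the text is ambiguous):

```python
import numpy as np

def generate_individual(S=10, T=100, d=3, p=5, seed=0):
    """Generate one synthetic individual following Eqs. (14)-(15) (sketch)."""
    rng = np.random.default_rng(seed)
    alpha = rng.normal(0.0, np.sqrt(0.1), S)     # variable-specific parameters
    beta = rng.normal(0.0, np.sqrt(0.1), S)
    gamma = rng.normal(0.0, np.sqrt(0.1), S)
    eps = rng.normal(0.0, 0.1, (S, T))           # zero-mean noise, variance 0.01
    X = np.zeros((S, T))
    for s in range(S):
        for t in range(d, T):                    # NARMA recursion, Eq. (14)
            X[s, t] = (alpha[s] * X[s, t - 1]
                       + beta[s] * X[s, t - 1] * X[s, t - d:t].sum()
                       + gamma[s] * eps[s, t - d] * eps[s, t - 1] + eps[s, t])
    omega = (rng.random(S) > 0.6).astype(float)  # causal indicators, P(omega=0)=0.6
    eta = rng.choice([-1.0, 1.0], size=(S, p))   # causal strengths from Unif{-1, 1}
    y = np.zeros(T)
    noise_y = rng.normal(0.0, 0.1, T)
    for t in range(p, T):                        # target series, Eq. (15)
        y[t] = sum(omega[s] * eta[s] @ np.tanh(X[s, t - p:t])
                   for s in range(S)) + noise_y[t]
    return X, y, omega
```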
}, { "heading": "3.2 EXPERIMENTAL RESULTS", "text": "" }, { "heading": "3.2.1 CAUSAL STRUCTURE RECONSTRUCTION PERFORMANCE ON HOMOGENEOUS MULTIVARIATE TIME SERIES", "text": "To evaluate the causal discovery performance on homogeneous multivariate time series, we train individual models for each dataset with the hyper-parameter α equaling 0.5. We report PR-AUC and ROC-AUC averaged across all datasets, with the standard deviation reported in Table 1. As can be seen, the proposed method greatly surpasses other methods. Especially, GASER recovers the ground-truth causal structure with high score on the Finance data." }, { "heading": "3.2.2 CAUSAL STRUCTURE RECONSTRUCTION PERFORMANCE ON HETEROGENEOUS MULTIVARIATE TIME SERIES", "text": "In this part, we evaluate the causal discovery performance on heterogeneous multivariate time series. We denote the number of common causal structures asC, the number of variables as S and the series length as T , and generate 100 multivariate time series for each common causal structure according to Equation (14) and Equation (15), forming 100C datasets. For the inductive methods GASER and IMV-LSTM, we train one model using all the datasets, while for other methods, we train separate models for each dataset. We report PR-AUC and ROC-AUC results w.r.t the variable number, the series length and the common structure number in Table 2 to 4, respectively. We observe that GASER outperforms other methods significantly in all cases, and GASER (α = 0.5) (with the Prototypical Granger causal attention) performs better than GASER (α = 1) (only with Granger causal attention). The observations demonstrate the superior causal discovery performance of GASER, the effectiveness of the prototypical Granger causal attention in GASER, and the advantages of utilizing shared commonalities among heterogeneous MTS. Regarding the other competitors, linear Granger performs the best followed by TCDF and IMV-LSTM at most cases. The possible reason is that linear Granger can detect Granger causal relations to some extent, though it utilizes linear model, i.e., Vector autoregression (VAR). TCDF utilizes attention-based CNN to inference potential causals followed by a causal validation step, but the attention it proposed cannot reflect Granger causality, thus achieves unsatisfactory performance. Compare to the performance on homogeneous multivariate time series, the performance of IMV-LSTM drops dramatically, which indicates that the attention mechanism in IMV-LSTM fails given heterogeneous multivariate time series.\nIn Table 2, we vary the number of variables to generate datasets of different complexity, and we can see that GASER outperforms other competitors consistently across different S, and achieves good performance when S is as large as 20, demonstrating our method’s capability to infer complex causal structures. Since in practice, the size of collected data is often limited, which poses challenges to recover causal structure, thus we also vary the length of time series to see the model robustness to data of small sizes. As can be seen in Table 3, GASER outperforms other methods across all cases, even when T is as small as 20, which demonstrates that advantage of using shared information. 
We also observe that GASER (α = 0.5) surpasses GASER (α = 1) by a large margin, which demonstrates that learning the prototypical attention can alleviate the over-fitting problem.\n[Table rows (GASER, α = 0.5): 0.911±0.147∗∗, 0.922±0.122∗∗, 0.998±0.009∗∗, 0.999±0.008∗∗, 0.858±0.103∗, 0.939±0.050∗∗ and 0.824±0.123∗∗, 0.833±0.117∗∗, 0.973±0.040∗∗, 0.976±0.036∗∗, 0.998±0.009∗∗, 0.999±0.008∗∗; ∗∗ denotes a p-value below 1%, ∗ a p-value below 5%.]\nIn Table 4, we control the causal heterogeneity by varying the number of common causal structures C = {3, 5, 7}. We observe that the performance of GASER decreases with increasing C. In Figure 2, we map the learned causal attention vectors to a 2D space with the visualization tool t-SNE (Maaten & Hinton, 2008). Individuals of different causal structures are labeled by different colors. From the results, we observe that nodes belonging to the same causal structures are clustered together, which also demonstrates the effectiveness of our method." }, { "heading": "3.2.3 PREDICTION PERFORMANCE", "text": "We evaluate the prediction performance on the real-world datasets PM2.5 and SML. To evaluate the robustness of the prediction and the accuracy of the Granger causal attribution, we also build additional datasets that contain only the top 50% most important variables towards prediction, as detected by each method. We report the prediction results in Table 5. As can be seen, GASER achieves the best performance on the PM2.5 data with all features, demonstrating its superior prediction performance. We also observe that GASER achieves comparable or even better performance using the selected variables, while the other methods' performances decrease, which indicates the effective variable selection of GASER. Linear Granger achieves the best performance on the SML data, because the time series length of SML is short, thus providing limited training samples for the neural-network-based methods." }, { "heading": "3.3 ONLINE A/B TESTS", "text": "In order to further evaluate the effectiveness of GASER in practice, an online A/B test was conducted on an E-commercial platform, and the process was designed as follows: • We first train GASER on the historical MTS of 30,665 items. Each MTS includes 26 variables related to searching, recommending and advertising, such as Page View (PV), Gross Merchandise Volume (GMV) and Impression Position In-Page. Here, we take the item popularity as the target series, and generate the underlying causal structure for each item.\n• We randomly sample 100 items whose impression position in-page Granger-causes the item popularity with high confidence, and divide them into two buckets. For Bucket A, we adjusted the impression position in-page of each item by one grid from 2019/08/19 to 2019/08/29, ensuring the intervention had little impact on other variables. For Bucket B, we did nothing.\n• We compare the week-on-week improvement rate of item popularity of the two buckets in Figure 3.\nAs shown in Figure 3, four days after the beginning of the intervention, the item popularity improvement rate of Bucket A consistently outperforms that of Bucket B, and the gap between the two buckets increases significantly after 2019/08/25. This shows that the intervention, i.e., adjusting the impression positions in-page, caused the improvement in item popularity, and thus demonstrates that GASER detected the correct causal relationships." 
}, { "heading": "4 RELATED WORK", "text": "Recently a considerable amount of work has been proposed for causal inference. Classical methods, such as constraint-based methods (Pearl, 2000; Spirtes et al., 2000; Peters et al., 2013; Runge et al., 2017; Zhang et al., 2017), score-based methods (Chickering, 2002) and functional causal models (FCM) based methods (Shimizu et al., 2006), mainly focus on i.i.d data. Under the scope of time series, causal inference is commonly based on the notion of Granger causality (Granger, 1969; 1980), and a classical way is to estimate linear Granger causality under the framework of VAR models (Lütkepohl, 2005). However, existing classicial methods fail to uncover causal structures inductively. Neural network based methods that infer causal relationships or relations that approach causality have gained increasing popularity. Lopez-Paz et al. (2015) learns a probability distribution classifier to unveil causal relations. Kipf et al. (2018) proposes a neural relation inference model to infer interactions while simultaneously learning the dynamics. Yu et al. (2019) develops a deep generative model to recover the underlying DAG from complex data. Attention mechanism has often been adopted to discover relations between variables. For example, Dang et al. (2018) discovers dynamic dependencies with multi-level attention. Nauta et al. (2019) studies causal discovery through attention-based neural networks with a causal validation step. Guo et al. (2019) proposes an interpretable multi-variable LSTM with mixture attention to extract variable importance knowledge. However, these attention mechanisms provide no incentive to yield accurate attributions (Sundararajan et al., 2017; Schwab et al., 2019).\nSince our method utilizes the concept of prototype to detect common causal structures, another line of related research is about prototype learning. Prototype learning is a form of cased-based reasoning (Slade, 1991), which solves problems for new inputs based on similarity to prototypical cases. Recently prototype learning has been leveraged in interpretable classification (Bien et al., 2011; Kim et al., 2014; Snell et al., 2017; Li et al., 2018; Chen et al., 2018) and sequence learning (Ming et al., 2019). We incorporate the concept for Granger causal structure reconstruction on time series data for the first time." }, { "heading": "5 CONCLUSION", "text": "We formalize the problem of Granger causal structure reconstruction from heterogeneous MTS data and propose an inductive framework GASER to solve it. In particular, we propose a novel attention mechanism, namely prototypical Granger causal attention, which computes Granger causal attribution combined with prototype learning, to reconstruct Granger causal structures and uncover\ncommon causal structures. The approach has been successfully evaluated by offline experiments on real-world and synthetic datasets compared to previous methods, also confirmed by an online A/B test on an E-commercial platform." }, { "heading": "A TABLE OF NOTATIONS", "text": "" }, { "heading": "B ALGORITHM PSEUDOCODE", "text": "The full algorithm is presented in Algorithm 1. The network parameter set Θ includes the parameters of sequence encoders and MLPs. We adopt stochastic gradient descent (SGD) to optimize the network parameters and the prototype parameters. 
To initialize the prototypes, we first pretrain GASER for several epochs, then employ k-means with cosine similarity on the Granger causal attentions {q_i}_{i=1}^N, and finally take the cluster centers as the initial prototypes. Note that the Gumbel-softmax function is only adopted in the training phase to backpropagate, and is replaced by the argmax function in inference.\nAlgorithm 1: The algorithm of GASER.\nInput: Input data X = {X_i}_{i=1}^N; Number of prototypes K; Maximum iterations MaxIter; Hyperparameters α, γ, λ_1 and λ_2.\nOutput: Network parameters Θ; Attention vectors {a_i}_{i=1}^N; Prototypes {p_j}_{j=1}^K; Prediction results {ŷ_{i,T+1}}_{i=1}^N.\n1: Pretrain the model by optimizing L_pred + λ_1 L_aux;\n2: Employ k-means on all Granger causal attention vectors {q_i}_{i=1}^N to get the initial prototypes {p_j}_{j=1}^K;\n3: for iter ← 1 to MaxIter do\n4: Update the network parameters Θ and the prototypes {p_j}_{j=1}^K by optimizing L = L_pred + λ_1 L_aux + λ_2 L_div;\n5: end for\n6: Generate the prototypical Granger causal attentions {a_i}_{i=1}^N by Equation (9), using the argmax function instead of the Gumbel-softmax function." }, { "heading": "C ADDITIONAL DETAILS ON THE EXPERIMENTAL SETUP", "text": "C.1 DATASETS\nIn this section, we provide some additional dataset details. The Finance data are available at http://www.skleinberg.org/data.html. We use the processed FMRI data provided by (Nauta et al., 2019). The sources and details of PM2.5 and SML are at https://archive.ics.uci.edu/ml/datasets/Beijing+PM2.5+Data and https://archive.ics.uci.edu/ml/datasets/SML2010, respectively.\nC.2 IMPLEMENTATION DETAILS\nWe implement GASER in Tensorflow (Abadi et al., 2016) with the Adam optimizer (Kingma & Ba, 2014) and the learning rate set to 0.001. We adopt LSTMs as the sequence encoders, with the hidden state size set to 128 and the window size set to 5. In all experiments, we first pretrain GASER with only the Granger causal attention for 40 epochs. The hyperparameters λ_1 and λ_2 are both set to 1, and the softmax temperature in Gumbel-softmax is set to 0.1.\nC.3 COMPARED METHODS\nLinear Granger (Granger, 1969; 1980): We conduct a Granger causality test in the context of Vector Autoregression (VAR) as described in chapter 7.6.3 of (Lütkepohl, 2005) and implemented in the Statsmodels package (Seabold & Perktold, 2010). In detail, we perform an F-test at the 5% significance level. The maximum number of lags to check for order selection is set to 5, which is larger than the causal order in the ground-truth.\nTCDF (Nauta et al., 2019): TCDF learns causal structures on multivariate time series by attention-based convolutional neural networks combined with a causal validation step. The codes are available at https://github.com/M-Nauta/TCDF. In all experiments, we follow the default settings as described in (Nauta et al., 2019), i.e., the significance number (stating when an increase in loss is significant enough to label a potential cause as true) as 0.8, the size of kernels as 4, the dilation coefficient as 4, the learning rate as 0.01, and the Adam optimizer.\nDUAL (Qin et al., 2017): It is an encoder-decoder RNN with an input attention mechanism, which forces the model to pay more attention to certain driving series rather than treating all the input driving series equally. In the experiments, we use the input-attention factors to detect important variables as (Guo et al., 2019) did. 
We set the size of the hidden states for the encoder and decoder to 64 and the window size to 10, as stated in the paper.\nIMV-LSTM (Guo et al., 2019): It is a multi-variable attention-based LSTM capable of both prediction and variable importance interpretation, with the attention factors reflecting the importance of variables in prediction. Thus, we take the learnt attention vectors as the Granger causal weights in the experiments. The codes are available at https://github.com/KurochkinAlexey/IMV_LSTM. In all experiments, IMV-LSTM is implemented with the Adam optimizer with mini-batch size 64, hidden layer size 128 and learning rate 0.001." }, { "heading": "D SUPPLEMENTARY EXPERIMENTAL RESULTS", "text": "D.1 VISUALIZATION OF LEARNED PROTOTYPES\nWe visualize the learned prototypes and the ground-truth causal structures in Fig. 4. In this experiment, we set the hyper-parameter of the prototype number K equal to the ground-truth common structure number C. From the results, we can see that the learned prototypes are similar to the ground-truth causal structures, which demonstrates that the learned prototypes are interpretable.\nD.2 VISUALIZATION OF GRANGER CAUSAL ATTENTION\nIn this part, we visualize the Granger causal attention vectors over epochs during the training phase and the ground-truth causal structures in Fig. 5. From the results, we can see that for the later epochs, shown on the right of the figure, the Granger causal attention vectors are similar to the ground-truth structures. This demonstrates that the proposed Granger causal attention mechanism forces the attention values to correlate with Granger causality.\nD.3 RUNNING TIME\nWe compare the running time, including the training time and the inference time, with the linear Granger method, i.e., VAR. The experiments were carried out on a server with 64 Intel(R) Xeon(R) E5-2682 v4 2.50GHz processors and 512 GB RAM. Since the linear Granger method is not inductive and has to be retrained for new individuals, we set its inference time equal to its training time. As shown in Table 7, although GASER takes more time in the training phase, about 6.4 times slower than linear Granger, it is about 44.5 times faster in the inference phase.\nD.4 PARAMETER SENSITIVITY\nWe investigate the parameter sensitivity in this section. Specifically, we evaluate the sensitivity of GASER to different numbers of prototypes K and different values of the hyper-parameters α, γ, λ_1 and λ_2. The results on heterogeneous MTS (C=5, S=10, T=100) are shown in Fig. 6.\nWe first show how the number of prototypes affects the performance in Fig. 6(a). We can see that the performance rises when K increases and reaches its best when the number of prototypes K is equal to the number of underlying common structures C. Then, the performance starts to drop slowly. Overall, the proposed method is not very sensitive to the parameter K.\nThen we evaluate how the value of α affects the performance in Fig. 6(b). The parameter α balances the weight of the Granger causal attention and the prototypical attention in our model. We can see that when α = 0.5, the model reaches the best performance, which demonstrates that both the Granger causal attention and the prototypical attention are essential in our model, and that it is important to find a good balance point between them.\nThe parameter γ controls the closeness of different prototypes. The smaller the γ, the more diverse the prototypes. The result is shown in Fig. 6(c). We can see that when γ is too small, the result is not good. 
This is intuitive, as the ground-truth common structures among heterogeneous MTS are not totally different. A too-large γ also deteriorates the performance, because it results in a number of similar or even duplicate prototypes. Thus, it is important to determine the parameter γ carefully.\nFigure 6(d) shows the performance of GASER w.r.t. the parameter λ_1. The parameter λ_1 is the weight of the auxiliary prediction loss. From the result, we can see that the performance rises as λ_1 increases initially, and then remains stable. This demonstrates that the auxiliary predictors are essential to detect the Granger causality.\nFinally, we show how the value of λ_2 affects the performance in Fig. 6(e). The parameter λ_2 controls the weight of the prototype diversity regularization. From the result, we can see that the prototype diversity loss is essential in our model, but an appropriate weight should be chosen carefully." } ]
2019
GRANGER CAUSAL STRUCTURE RECONSTRUCTION
SP:0bd5556d83764d1445a07e46ea5fcd074789b6e0
[ "The paper proposes to extend the popular linear-control algorithm, RLSVI, to utilize learned representations. This is done by adapting a work from bandit literature that utilizes BLR with representations that are learned via a DNN. The proposed solution is then compared to DQN with fixed epsilon as the exploration strategy in a chain MDP, and to the Rainbow agent and DQN in 5 selected Atari games to show sample-efficiency improvements.", "This paper introduces a deep learning-based adaptation for the RLVSI algorithm, where the agent uses the representation learned by the deep neural network-based RL agent (DQN). They use the last layer of DQN as a state representation for RLSVI. In order to work with the changing representations of the deep agent, they propose a likelihood matching mechanism. The approach is applied to two tasks: a) A toy modified n-chain experiment and b) set of 5 Atari games. They show that their method outperforms the DQN with naive exploration. " ]
Exploration while learning representations is one of the main challenges Deep Reinforcement Learning (DRL) faces today. As the learned representation is dependent on the observed data, the exploration strategy has a crucial role. The popular DQN algorithm has significantly improved the capabilities of Reinforcement Learning (RL) algorithms to learn state representations from raw data, yet it uses a naive exploration strategy which is statistically inefficient. The Randomized Least Squares Value Iteration (RLSVI) algorithm (Osband et al., 2016), on the other hand, explores and generalizes efficiently via linearly parameterized value functions. However, it is based on hand-designed state representations that require prior engineering work for every environment. In this paper, we propose a Deep Learning adaptation of RLSVI. Rather than using a hand-designed state representation, we use a state representation that is learned directly from the data by a DQN agent. As the representation is optimized during the learning process, a key component of the suggested method is a likelihood matching mechanism, which adapts to the changing representations. We demonstrate the importance of the various properties of our algorithm on a toy problem and show that our method outperforms DQN in five Atari benchmarks, reaching competitive results with the Rainbow algorithm.
[]
[ { "authors": [ "Shipra Agrawal", "Navin Goyal" ], "title": "Analysis of thompson sampling for the multi-armed bandit problem", "venue": "In Conference on Learning Theory, pp", "year": 2012 }, { "authors": [ "Shipra Agrawal", "Navin Goyal" ], "title": "Thompson sampling for contextual bandits with linear payoffs", "venue": "In International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Oron Anschel", "Nir Baram", "Nahum Shimkin" ], "title": "Averaged-dqn: Variance reduction and stabilization for deep reinforcement learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Peter Auer", "Nicolo Cesa-Bianchi", "Paul Fischer" ], "title": "Finite-time analysis of the multiarmed bandit problem", "venue": "Machine Learning,", "year": 2002 }, { "authors": [ "Kamyar Azizzadenesheli", "Emma Brunskill", "Animashree Anandkumar" ], "title": "Efficient exploration through bayesian deep q-networks", "venue": "In ITA", "year": 2018 }, { "authors": [ "Pablo Samuel Castro", "Subhodeep Moitra", "Carles Gelada", "Saurabh Kumar", "Marc G. Bellemare" ], "title": "Dopamine: A Research Framework for Deep Reinforcement Learning. 2018", "venue": "URL http: //arxiv.org/abs/1812.06110", "year": 2018 }, { "authors": [ "Olivier Chapelle", "Lihong Li" ], "title": "An empirical evaluation of thompson sampling", "venue": "In Advances in Neural Information Processing Systems,", "year": 2011 }, { "authors": [ "Steven Diamond", "Stephen Boyd" ], "title": "CVXPY: A Python-embedded modeling language for convex optimization", "venue": "Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "Meire Fortunato", "Mohammad Gheshlaghi Azar", "Bilal Piot", "Jacob Menick", "Ian Osband", "Alex Graves", "Vlad Mnih", "Remi Munos", "Demis Hassabis", "Olivier Pietquin" ], "title": "Noisy networks for exploration", "venue": "arXiv preprint arXiv:1706.10295,", "year": 2017 }, { "authors": [ "Matteo Hessel", "Joseph Modayil" ], "title": "Rainbow: Combining improvements in deep reinforcement learning", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "James Kirkpatrick", "Razvan Pascanu", "Neil Rabinowitz", "Joel Veness", "Guillaume Desjardins", "Andrei A Rusu", "Kieran Milan", "John Quan", "Tiago Ramalho", "Agnieszka Grabska-Barwinska" ], "title": "Overcoming catastrophic forgetting in neural networks", "venue": "Proceedings of the national academy of sciences,", "year": 2017 }, { "authors": [ "Michail G Lagoudakis", "Ronald Parr" ], "title": "Least-squares policy iteration", "venue": "Journal of machine learning research,", "year": 2003 }, { "authors": [ "Nir Levine", "Tom Zahavy" ], "title": "Shallow updates for deep reinforcement learning", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Long-Ji Lin" ], "title": "Reinforcement learning for robots using neural networks", "venue": "Technical report,", "year": 1993 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu" ], "title": "Human-level control through deep reinforcement learning", "venue": "Nature, 518,", "year": 2015 }, { "authors": [ "Brendan O’Donoghue", "Ian Osband", "Remi Munos", "Volodymyr Mnih" ], "title": "The uncertainty bellman equation and exploration", "venue": "arXiv preprint arXiv:1709.05380,", "year": 2017 }, { "authors": [ "Ian Osband", "Benjamin Van Roy" ], "title": "Why is posterior sampling better than optimism for reinforcement learning", "venue": "In Proceedings of the 34th International Conference on Machine LearningVolume", "year": 2017 }, { 
"authors": [ "Ian Osband", "Daniel Russo", "Benjamin Van Roy" ], "title": "more) efficient reinforcement learning via posterior sampling", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Ian Osband", "Benjamin Van Roy", "Zheng Wen" ], "title": "Generalization and exploration via randomized value functions", "venue": "In ICML,", "year": 2016 }, { "authors": [ "Ian Osband", "Daniel Russo", "Zheng Wen", "Benjamin Van Roy" ], "title": "Deep exploration via randomized value functions", "venue": "arXiv preprint arXiv:1703.07608,", "year": 2017 }, { "authors": [ "Ian Osband", "John Aslanides", "Albin Cassirer" ], "title": "Randomized prior functions for deep reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Matthias Plappert", "Rein Houthooft", "Prafulla Dhariwal", "Szymon Sidor", "Richard Y Chen", "Xi Chen", "Tamim Asfour", "Pieter Abbeel", "Marcin Andrychowicz" ], "title": "Parameter space noise for exploration", "venue": null, "year": 1905 }, { "authors": [ "Carlos Riquelme", "George Tucker", "Jasper Snoek" ], "title": "Deep bayesian bandits showdown", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Tom Schaul", "John Quan", "Ioannis Antonoglou", "David Silver" ], "title": "Prioritized experience replay", "venue": "arXiv preprint arXiv:1511.05952,", "year": 2015 }, { "authors": [ "Steven L Scott" ], "title": "A modern bayesian look at the multi-armed bandit", "venue": "Applied Stochastic Models in Business and Industry,", "year": 2010 }, { "authors": [ "Malcolm Strens" ], "title": "A bayesian framework for reinforcement learning", "venue": "In ICML,", "year": 2000 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Introduction to reinforcement learning, volume 135", "venue": "MIT press Cambridge,", "year": 1998 }, { "authors": [ "William R Thompson" ], "title": "On the likelihood that one unknown probability exceeds another in view of the evidence of two samples", "venue": null, "year": 1933 }, { "authors": [ "John N Tsitsiklis", "Benjamin Van Roy" ], "title": "Analysis of temporal-diffference learning with function approximation", "venue": "In Advances in Neural Information Processing Systems,", "year": 1997 }, { "authors": [ "Hado Van Hasselt", "Arthur Guez", "David Silver" ], "title": "Deep reinforcement learning with double qlearning", "venue": "In Thirtieth AAAI Conference on Artificial Intelligence,", "year": 2016 }, { "authors": [ "Ziyu Wang", "Tom Schaul", "Matteo Hessel", "Hado Van Hasselt", "Marc Lanctot", "Nando De Freitas" ], "title": "Dueling network architectures for deep reinforcement learning", "venue": "arXiv preprint arXiv:1511.06581,", "year": 2015 }, { "authors": [ "Jeremy Wyatt" ], "title": "Exploration and inference in learning from reinforcement", "venue": null, "year": 1998 }, { "authors": [ "Tom Zahavy", "Shie Mannor" ], "title": "Deep neural linear bandits: Overcoming catastrophic forgetting through likelihood matching", "venue": "arXiv preprint arXiv:1901.08612.,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "In Reinforcement Learning (RL), an agent seeks to maximize the cumulative rewards obtained from interactions with an unknown environment (Sutton et al., 1998). Since the agent can learn only by its interactions with the environment, it faces the exploration-exploitation dilemma: Should it take actions that will maximize the rewards based on its current knowledge or instead take actions to potentially improve its knowledge in the hope of achieving better future performance. Thus, to find the optimal policy the agent needs to use an appropriate exploration strategy.\nClassic RL algorithms were designed to face problems in the tabular settings where a table containing a value for each state-action pair can be stored in the computer’s memory. For more general settings, where generalization is required, a common practice is to use hand-designed state representation (or state-action), upon which a function approximation can be learned to represent the value for each state and action. RL algorithms based on linear function approximation have demonstrated stability, data efficiency and enjoys convergence guarantees under mild assumptions (Tsitsiklis & Van Roy, 1997; Lagoudakis & Parr, 2003). They require that the desired learned function, e.g. Qfunction, will be a linear combination of the state representation. This is, of course, a hard constraint as the representation is hand-designed, where the designer often does not know how the optimal value-function will look like. Furthermore, hand-designed representation is environment-specific and requires re-designing for every new environment.\nThe DQN algorithm (Mnih et al., 2015) has changed RL. Using Deep Neural Networks (DNN) as function approximators, the DQN algorithm enabled the learning of policies directly from raw highdimensional data and led to unprecedented achievements over a wide variety of domains (Mnih et al., 2015). Over the years, many improvements to DQN were presented, suggesting more fitting network architectures (Wang et al., 2015), reducing overestimation (Van Hasselt et al., 2016; Anschel et al., 2017) or improving its data efficiency (Schaul et al., 2015). Despite its great success, DQN uses the overly simple -greedy strategy for exploration. This strategy is one of the simplest exploration\nstrategies that currently exist. The agent takes random action with probability and takes the optimal action according to its current belief with probability 1− . This strategy is commonly used despite its simplicity and proven inefficiency (Osband et al., 2016). The main shortcoming of -greedy and similar strategies derives from the fact that they do not use observed data to improve exploration. To explore, it takes a completely random action, regardless of the experience obtained by the agent.\nThompson Sampling (TS) (Thompson, 1933), is one of the oldest heuristics to address the ’exploration/exploitation’ trade-off in sequential decision-making problems. Its variations were proposed in RL (Wyatt, 1998; Strens, 2000) and various bandits settings (Chapelle & Li, 2011; Scott, 2010). For Multi-Armed Bandit (MAB) problems, TS is very effective both in theory (Agrawal & Goyal, 2012; 2013) and practice (Chapelle & Li, 2011). Intuitively, TS randomly takes actions according to the probability it believes to be optimal. In practice, a prior distribution is assumed over the model’s parameters p(w), and a posterior distribution p(w|D) is computed using the Bayes theorem, where D is the observed data. 
TS acts by sampling models from the posterior distribution and playing the best action according to these samples.\nRandomized Least Squares Value Iteration (Osband et al., 2016) is an RL algorithm that uses linear function approximation and is inspired by Thompson Sampling. It explores by sampling plausible Q-functions from uncertainty sets and selecting the action that optimizes the sampled models. This algorithm was proven to be efficient in the tabular setting, with a bound on the expected regret that matches the worst-case lower bound up to logarithmic factors. More importantly, it demonstrates efficiency even when generalization is required. Alas, as it assumes a linearly parametrized value function on a hand-designed state representation, the success of this algorithm crucially depends on the quality of the given state representation.\nIn this paper, we present a new DRL algorithm that combines the exploration mechanism of RLSVI with the representation learning mechanism of DQN; we call it the Deep Randomized Least Squares Value Iteration (DRLSVI) algorithm. We use standard DQN to learn a state representation and explore by using the last layer's activations of DQN as the state representation for RLSVI. To compensate for the constantly changing representation and the finite memory of DQN, we use a likelihood matching mechanism, which allows the transfer of information held by an old representation regarding past experience. We evaluate our method on a toy problem – the Augmented Chain environment – for a qualitative evaluation of our method on a small MDP with a known optimal value function. Then, we compare our algorithm to the DQN and Rainbow algorithms on several Atari benchmarks. We show that it outperforms DQN both in learning speed and performance." }, { "heading": "2 RELATED WORK", "text": "Thompson Sampling in Multi-Armed Bandit problems: Thompson Sampling (TS) (Thompson, 1933) is one of the oldest heuristics to address the 'exploration/exploitation' trade-off in sequential decision-making problems. Chapelle & Li (2011) sparked much of the interest in Thompson Sampling in recent years. They rewrote the TS algorithm for the Bernoulli bandit and showed impressive empirical results on synthetic and real data sets that demonstrate the effectiveness of the TS algorithm. Their results demonstrate why TS might be a better alternative for balancing exploration and exploitation in sequential decision-making problems than other popular alternatives like the Upper Confidence Bound algorithm (Auer et al., 2002). Agrawal & Goyal (2013) suggested a Thompson Sampling algorithm for the linear contextual bandit problem and supplied a high-probability regret bound for it. They use Bayesian Linear Regression (BLR) with a Gaussian likelihood and a Gaussian prior to design their version of the Thompson Sampling algorithm. Riquelme et al. (2018) suggested performing a BLR on top of the representation of the last layer of a neural network. The predicted value v_i for each action a_i is given by v_i = β_i^T z_x, where z_x is the output of the last hidden layer of the network for context x. While linear methods directly try to regress values v on x, they independently trained a DNN to learn a representation z, and then used a BLR to regress v on z, obtaining uncertainty estimates on the β's, and making decisions accordingly via Thompson Sampling. Moreover, the network is only being used to find a good representation – z. 
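A minimal sketch of such a Bayesian linear head over fixed features follows (our own illustration of the Normal-Inverse-Gamma conjugate update; the same machinery reappears in Equation (1) of Section 4.2 below, and all names are our own):

```python
import numpy as np

def blr_posterior(Phi, y, mu0, Sigma0, a0, b0):
    """Conjugate Bayesian linear regression on fixed features (sketch).

    Phi: (n, d) matrix of last-layer features z_x; y: (n,) targets.
    Prior: w | s2 ~ N(mu0, s2 * Sigma0), s2 ~ InvGamma(a0, b0).
    """
    Sigma0_inv = np.linalg.inv(Sigma0)
    Sn_inv = Phi.T @ Phi + Sigma0_inv                 # posterior precision
    Sigma_n = np.linalg.inv(Sn_inv)
    mu_n = Sigma_n @ (Phi.T @ y + Sigma0_inv @ mu0)
    a_n = a0 + len(y) / 2.0
    b_n = b0 + 0.5 * (y @ y + mu0 @ Sigma0_inv @ mu0 - mu_n @ Sn_inv @ mu_n)
    return mu_n, Sigma_n, a_n, b_n

def thompson_draw(mu_n, Sigma_n, a_n, b_n, rng=np.random.default_rng(0)):
    """Draw (s2, w) from the Normal-Inverse-Gamma posterior."""
    s2 = 1.0 / rng.gamma(a_n, 1.0 / b_n)              # InvGamma(a_n, b_n) sample
    return rng.multivariate_normal(mu_n, s2 * Sigma_n)
```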
Since training the network and updating the BLR can be done independently, they train the network for a fixed number of iterations and then perform a forward pass on all the training data to obtain the new z_x, which is then fed to the BLR. This procedure of evaluating the new representation on all the observed data is very costly; moreover, it requires infinite memory, which obviously does not scale. Zahavy & Mannor (2019) suggested matching the likelihood of the reward under the old and new representations to avoid catastrophic forgetting when using such an algorithm with finite memory.\nThompson Sampling in RL: In the Reinforcement Learning setting, Strens (2000) suggested a method named "Posterior Sampling for Reinforcement Learning" (PSRL), which is an application of Thompson Sampling to Model-Based Reinforcement Learning. PSRL estimates the posterior distribution over MDPs. Each episode, the algorithm samples an MDP from it and finds the optimal policy for this sampled MDP by dynamic programming. Recent works (Osband et al., 2013; Osband & Van Roy, 2017) have shown a theoretical analysis of PSRL that guarantees strong expected performance over a wide range of environments. The main problem with PSRL, like all model-based approaches, is that it may only be applied to relatively small environments. The Randomized Least Squares Value Iteration (RLSVI) algorithm is an application of Thompson Sampling to Model-Free Reinforcement Learning. It explores by sampling plausible Q-functions from uncertainty sets and selecting the action that optimizes the sampled models.\nThompson Sampling in DRL: Various approaches have been suggested to extend the idea behind RLSVI to DRL. Bootstrapped DQN (Osband et al., 2017) uses an ensemble of Q-networks, each trained with slightly different data samples. To explore, Bootstrapped DQN randomly samples one of the networks and acts greedily with respect to it. Recently, Osband et al. (2018) extended this idea by supplying each member of the ensemble with a different prior. Fortunato et al. (2017) and Plappert et al. (2017) investigate a similar idea and propose to adaptively perturb the parameter space, which can also be thought of as tracking an approximate posterior over the network's parameters. O'Donoghue et al. (2017) proposed TS in combination with the uncertainty Bellman equation, which connects the uncertainty at any time-step to the expected uncertainties at subsequent time-steps. Recently, and most similar to our work, Azizzadenesheli et al. (2018) experimented with a Deep Learning extension to RLSVI. They changed the network architecture to exclude the last layer weights, optimized the hyper-parameters and used double-DQN. In contrast, we don't change anything in the DQN agent. We use the representation learned by DQN to perform RLSVI; however, the network structure, loss and hyper-parameters are the same. Additionally, unlike our method, they don't compensate for the changing representation and solve the BLR problem with the same arbitrary prior every time." }, { "heading": "3 PRELIMINARIES", "text": "We consider the standard RL settings (Sutton et al., 1998), in which an environment with discrete time steps is modeled by a Markov Decision Process (MDP). An MDP is a tuple < S, A, P, R, γ >, where S is a state space, A a finite action space, P : S × A −→ ∆(S) a transition kernel, and R : S × A −→ R a reward function. 
At each step the agent receives an observation s_t ∈ S which represents the current physical state of the system, takes an action a_t ∈ A which is applied to the environment, receives a scalar reward r_t = r(s_t, a_t), and observes a new state s_{t+1} which the environment transitions to. As mentioned above, the agent seeks an optimal policy π* : S −→ ∆(A), mapping an environment state to probabilities over the agent's executable actions. γ ∈ (0, 1) is the discount factor – a scalar representing the trade-off between immediate and delayed reward. A brief survey of the DQN algorithm can be found in Appendix 1." }, { "heading": "3.1 RANDOMIZED LEAST SQUARES VALUE ITERATION", "text": "The Randomized Least Squares Value Iteration (RLSVI) algorithm is a TS-inspired exploration strategy for Model-Free Reinforcement Learning. It combines TS-like exploration and linear function approximation, where the main novelty is in the manner in which it explores: sampling value-functions rather than sampling actions. The Q-function is assumed to be of the form Q(s, a) = φ(s, a)^T w, where φ(s, a) is a hand-designed state-action representation. RLSVI operates similarly to other linear function approximation algorithms and minimizes the Bellman error by solving a regression problem – Bayesian Linear Regression. BLR obtains a posterior distribution over value-functions instead of point estimates. To explore, RLSVI samples plausible value functions from the posterior distribution and takes the greedy action according to the sampled value-function. In the episodic setting where the representation is tabular, i.e., no generalization is needed, RLSVI guarantees near-optimal expected episodic regret. Finally, the main benefit of this algorithm is that it displays impressive results even when generalization is required – despite the lack of theoretical guarantees. A pseudo-code can be found in Appendix 1." }, { "heading": "4 THE DEEP RANDOMIZED LEAST SQUARES VALUE ITERATION ALGORITHM", "text": "In this paper, we propose to use RLSVI as the exploration mechanism for DQN. RLSVI's capabilities are enhanced by using a state representation that is learned directly from the data by a neural network rather than a hand-designed one. As the neural network gradually improves its representation of the states, a likelihood matching mechanism is applied to transfer information from old to new representations." }, { "heading": "4.1 LEARNING REPRESENTATION", "text": "A DQN agent is trained in the standard fashion, i.e., with the same architecture, hyper-parameters and loss function as the original DQN. Two exceptions were made: (1) The size of the last hidden layer is reduced to d = 64. (2) The Experience Replay buffer is divided evenly between actions, and transitions are stored in a round-robin fashion, i.e., whenever the buffer is full, a new transition < s_t, a_t, r_t, s_{t+1} > is placed instead of the oldest transition with the same action a_t." }, { "heading": "4.2 EXPLORATION", "text": "Exploration is performed using RLSVI on top of the last hidden layer of the target network. Given a state s_t, the activations of the last hidden layer of the target network applied to this state are denoted as φ(s_t) = LastLayerActivations(Q_θtarget(s_t)). Several changes to the original RLSVI algorithm were made. First, rather than solving a different regression problem for every time step, a different regression problem is solved for every action, as sketched below. 
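A hypothetical helper for assembling these per-action regression sets from replay transitions might look as follows (our own sketch; the done-flag handling and all function names are assumptions, not the paper's code):

```python
import numpy as np

def per_action_regression_data(transitions, phi, q_target, gamma=0.99):
    """Build one Bayesian-regression dataset per action.

    transitions: iterable of (s, a, r, s_next, done) tuples from the ER buffer.
    phi: maps a state to the target network's last-hidden-layer activations.
    q_target: returns the target network's Q-values for a state, used to form
              the usual DQN targets.
    """
    data = {}
    for s, a, r, s_next, done in transitions:
        y = r if done else r + gamma * float(np.max(q_target(s_next)))
        data.setdefault(a, ([], []))
        data[a][0].append(phi(s))   # features: shared state representation
        data[a][1].append(y)        # regression target for action a
    return {a: (np.stack(X), np.array(Y)) for a, (X, Y) in data.items()}
```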
As the last hidden layer's activations serve as the state representation, the representation is time-homogeneous and shared among actions. The regression targets y are DQN's targets, which use the target network predictions. Another change is that a slightly different formulation of Bayesian Linear Regression than in RLSVI is used. Similar to RLSVI, a Gaussian form for the likelihood is assumed: Q(s, a) ∼ N(w_a^T φ(s), σ^2); however, like Riquelme et al. (2018), the noise parameter σ^2 is formulated as a random variable, which is distributed according to the Inverse-Gamma distribution. The prior for each regression problem is therefore of the form: p(w, σ^2) = p(w|σ^2)p(σ^2), p(w|σ^2) ∼ N(µ_0, σ^2 Σ_0), p(σ^2) ∼ InvGamma(a_0, b_0). For this prior and the Gaussian likelihood Q(s, a) ∼ N(w_a^T φ(s), σ^2), it is known that the posterior distribution can be calculated analytically as follows:\nφ_n = {φ(s_1), ..., φ(s_n)} ∈ R^{d×n}, Y_n = (y_1, ..., y_n)^T ∈ R^n,\nΣ_n = (φ_n φ_n^T + Σ_0^{−1})^{−1}, µ_n = Σ_n (φ_n Y_n + Σ_0^{−1} µ_0),\na_n = a_0 + n/2, b_n = b_0 + (1/2)(Y_n^T Y_n + µ_0^T Σ_0^{−1} µ_0 − µ_n^T Σ_n^{−1} µ_n), (1)\nFormulating σ^2 as a random variable allows adaptive exploration, where the adaptation is derived directly from the observed data. Lastly, while RLSVI's choice of the prior's parameters is somewhat arbitrary, in our algorithm the prior has a central role, which we'll discuss further on. Since RLSVI requires a fixed representation and the target network's weights are fixed, we use the last layer activations of the target network, denoted φ(s), as the state representation. Every T_target training time steps, the target network is updated with the weights of the Q-network. In these T_target time steps the Q-network is changing due to the optimization performed by the DQN algorithm. Since the representation is changing, the posterior distribution that was approximated in the old representation can't be used; a posterior distribution based on the new representation needs to be approximated. Therefore, whenever the target network changes, new Bayesian linear regression problems are solved using N_BLR samples from the ER buffer. Since the ER buffer is finite, some experience tuples were used to approximate the posterior in the old representation and are no longer available. Ignoring this lost experience can and will lead to degradation in performance caused by 'catastrophic forgetting' (Kirkpatrick et al., 2017). To compensate for the changing representation and the loss of old experience, we follow (Zahavy & Mannor, 2019) and match the likelihood of the Q-function in the old and new representations. This approach assumes that the important information from old experiences is coded in the state representation.\nAlgorithm 1 Deep RLSVI\nInput: s_0 − initial state, Q_θ(s, a), Q_θtarget(s, a), ER buffer, Prior: σ_a^2 ∼ InvGamma(a_{a,0}, b_{a,0}), p(w_{a,0}|σ_a^2) ∼ N(µ_{a,0}, σ_a^2 Σ_{a,0})\nDefine: φ(s_t) ← LastLayerActivation(Q_θtarget), ψ(s_t) ← LastLayerActivation(Q_θ)\nfor t = 0, 1, ... 
Algorithm 1 Deep RLSVI

Input: $s_0$ – initial state, $Q_\theta(s, a)$, $Q_{\theta_{target}}(s, a)$, ER buffer, prior: $\sigma^2_a \sim \text{InvGamma}(a_{a,0}, b_{a,0})$, $p(w_{a,0}|\sigma^2_a) \sim \mathcal{N}(\mu_{a,0}, \sigma^2 \Sigma_{a,0})$
Define: $\phi(s_t) \leftarrow \text{LastLayerActivation}(Q_{\theta_{target}})$, $\psi(s_t) \leftarrow \text{LastLayerActivation}(Q_\theta)$
for $t = 0, 1, \dots$ do
  if $t \bmod T_{sample} = 0$ then
    for $a = 0, \dots, |A|$, $j = 0, \dots, |J|$ do
      Sample $\tilde{\sigma}^2_{a,j} \sim \text{InvGamma}(\hat{a}_a, \hat{b}_a)$
      Sample $\tilde{w}_{a,j} \sim \mathcal{N}(\hat{\mu}_a, \tilde{\sigma}^2_{a,j} \hat{\Sigma}_a)$
    end for
  end if
  Sample $j_a \sim U\{1, 2, \dots, |J|\}$ for all $a \in A$
  Act $a_t \in \arg\max_\alpha \tilde{w}^T_{\alpha, j_\alpha} \phi(s_t)$
  Observe $s_{t+1}, r_t$; store transition $\langle s_t, a_t, s_{t+1}, r_t \rangle$
  Train DQN using a sampled mini-batch
  if $t \bmod T_{target} = 0$ then
    for $a = 0, \dots, |A|$ do
      Construct priors $\mu_0, \Sigma_0$ by likelihood matching (Equation 2)
      Sample $N_{BLR}$ transitions $\langle s_i, a, s_{i+1}, r_i \rangle$ from the ER buffer
      Solve Bayesian Linear Regression (Equation 1)
    end for
    $Q_{\theta_{target}}(s, a) \leftarrow Q_\theta(s, a)$
  end if
end for" }, { "heading": "4.3 CONSTRUCTING PRIORS BY LIKELIHOOD MATCHING", "text": "Recall that the likelihood of the Q-function is $Q(s, a) \sim \mathcal{N}(w_a^T \phi(s), \sigma^2)$. Our best estimate of this likelihood is obtained by plugging in the posterior approximation for $w_a$: $\hat{Q}(s, a) \sim \mathcal{N}\big((\mu_n^\phi)^T \phi(s), \, \sigma^2 \phi^T(s) \Sigma_n^\phi \phi(s)\big)$. The likelihood under the new representation $\psi(s)$ has the same form: $\hat{Q}(s, a) \sim \mathcal{N}\big((\mu_n^\psi)^T \psi(s), \, \sigma^2 \psi^T(s) \Sigma_n^\psi \psi(s)\big)$. Since the likelihood is Gaussian, to compensate for the changing representation we find moments that match the likelihood of the Q-function in the new representation to the old one, and use them as our Gaussian prior belief.

Expectation prior: As DQN is trained to predict the Q-function, given the new last layer activations $\psi$, a good prior for $\mu_0$ in the new representation is the last layer weights of the DQN (Levine et al., 2017).

Covariance prior: We use $N_{SDP}$ samples from the experience replay buffer and evaluate both the old and new representations $\{\phi(s_i), \psi(s_i)\}_{i=1}^{N_{SDP}}$. Our goal is to find a solution $\Sigma_0^\psi$ that matches the covariance of the likelihood in the new representation to the old one: $\psi(s_i)^T \Sigma_0^\psi \psi(s_i) = \phi(s_i)^T \Sigma_n^\phi \phi(s_i)$. Using the cyclic property of the trace, this is equivalent to finding $\text{Trace}(\psi(s_i)\psi(s_i)^T \Sigma_0^\psi) = S_i$, where $S_i = \phi(s_i)^T \Sigma_n^\phi \phi(s_i)$. We denote $\Psi_i = \psi(s_i)\psi(s_i)^T \in \mathbb{R}^{d \times d}$. Adding the constraint that $\Sigma_0^\psi$ be positive semi-definite, as it is a covariance matrix, we end up with the following Semi-Definite Program (SDP) (Vandenberghe & Boyd, 1996):

$$\min_{\Sigma_0^\psi} \sum_{i=0}^{m} \big\|\text{Trace}(\Psi_i \Sigma_0^\psi) - S_i\big\|^2 \quad \text{s.t.} \quad \Sigma_0^\psi \succeq 0 \quad (2)$$

In practice, we solve this SDP using CVXPY (Diamond & Boyd, 2016)." }, { "heading": "4.4 REDUCING COMPUTATION COMPLEXITY", "text": "Approximate sampling: To perform Thompson Sampling, one needs to sample the posterior distribution before every decision. This, regrettably, is computationally expensive and would slow down training significantly. To speed up learning, we sample $j$ weight vectors for every action $\{\tilde{w}_{i,1}, \dots, \tilde{w}_{i,j}\}$ every $T_{sample}$ time steps. Then, at every step we sample an index $i_a \in \{1, \dots, j\}$ for every action, which is computationally cheap, and act greedily accordingly: $a_t = \arg\max_\alpha \tilde{w}^T_{\alpha, i_\alpha} \phi(s_t)$.

Solving the SDP: Another bottleneck our algorithm faces is solving the SDP. We refer the reader to Vandenberghe & Boyd (1996) for an excellent survey on the complexity of solving SDPs. As the running time of an SDP solver mainly depends on the dimension of the representation $d$, the number of samples $N_{SDP}$ and the desired accuracy $\epsilon$, we chose the last hidden layer size to be $d = 64$, used $N_{SDP} = 600$ samples for every SDP and set $\epsilon = 10^{-5}$. Solving a single SDP took 10-50 seconds using Intel's Xeon CPU E5-2686 v4 2.30 GHz. A CVXPY sketch of this covariance-matching step is given below.
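A minimal CVXPY sketch of the covariance-matching SDP in Equation 2; the function name and the use of cp.multiply to express each trace term as an affine expression are our choices, and in practice the solver and accuracy would be tuned as described above.

```python
import cvxpy as cp
import numpy as np

def match_covariance_prior(phi_old, psi_new, Sigma_n_old):
    """Covariance prior by likelihood matching (Equation 2): find a PSD
    Sigma_0 such that psi_i^T Sigma_0 psi_i matches phi_i^T Sigma_n phi_i.
    phi_old: (n, d_old), psi_new: (n, d_new) activations of the same states."""
    n, d = psi_new.shape
    # Target predictive variances S_i under the old representation
    S = np.einsum('ni,ij,nj->n', phi_old, Sigma_n_old, phi_old)
    Sigma0 = cp.Variable((d, d), PSD=True)        # the PSD constraint of the SDP
    # trace(Psi_i @ Sigma0) with Psi_i = psi_i psi_i^T (both symmetric),
    # written as an elementwise sum, which is affine in Sigma0
    residuals = [cp.sum(cp.multiply(np.outer(psi, psi), Sigma0)) - s
                 for psi, s in zip(psi_new, S)]
    problem = cp.Problem(cp.Minimize(cp.sum_squares(cp.hstack(residuals))))
    problem.solve()                                # e.g., solver=cp.SCS
    return Sigma0.value
```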
" }, { "heading": "5 EXPERIMENTS", "text": "We conduct a series of experiments that highlight different aspects of our method. We begin with a qualitative evaluation of our algorithm on a simple toy environment, then move on to quantitative results on 5 different Atari games in the ALE." }, { "heading": "5.1 THE AUGMENTED CHAIN ENVIRONMENT", "text": "Setup: The chain environment consists of a chain of states $S = \{1, \dots, n\}$. At each step, the agent can transition left or right. This standard setting is augmented with $k$ additional actions that transition the agent to the same state (self-loops). We name this variation 'The Augmented Chain Environment'. All states have zero reward except for the far-right state $n$, which gives a reward of 1. Each episode is of length $H = n - 1$, and the agent begins each episode at state 1. The raw state representation is a one-hot vector. The Q-network is an MLP with 2 hidden layers. Results are averaged across 5 different runs. We report the cumulative episodic regret $\text{Regret}(T) = \sum_{t=0}^{T}\big(v_0^*(s_0) - \sum_{h=0}^{H} r_{t,h}\big)$, where $T$ is the number of played episodes, $v_0^*(s_0)$ is the return of the optimal policy, and $r_{t,h}$ is the reward the agent received in episode $t$ at time step $h$. An illustration of the augmented chain environment can be found in Figure 1, and a minimal code sketch follows below. The hyper-parameters used in the following experiments can be found in Appendix 2.
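A minimal sketch of the augmented chain environment could look as follows; the 0-indexed states, the interface, and the exact reward/termination conventions are our assumptions for illustration.

```python
import numpy as np

class AugmentedChain:
    """Chain of n states (paper's states 1..n are 0..n-1 here). Action 0 moves
    left, action 1 moves right, and the k extra actions are self-loops.
    Reward 1 only in the right-most state; episode length H = n - 1."""

    def __init__(self, n=25, k=5):
        self.n, self.k, self.horizon = n, k, n - 1

    def reset(self):
        self.state, self.t = 0, 0
        return self._one_hot(self.state)

    def step(self, action):
        if action == 0:
            self.state = max(self.state - 1, 0)
        elif action == 1:
            self.state = min(self.state + 1, self.n - 1)
        # actions >= 2 are self-loops: the state stays unchanged
        self.t += 1
        reward = 1.0 if self.state == self.n - 1 else 0.0
        done = self.t >= self.horizon
        return self._one_hot(self.state), reward, done

    def _one_hot(self, s):
        v = np.zeros(self.n)
        v[s] = 1.0
        return v
```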
" }, { "heading": "5.1.1 EPSILON-GREEDY", "text": "We compared our algorithm to standard DQN with $\epsilon$-greedy as the exploration strategy. We experimented with various $\epsilon$ values; however, as the results for different values were similar, we display the result for a single value (Figure 2 (a)). We can see that $\epsilon$-greedy (red) achieves regret that is linear in $T$ – the lower bound for this type of problem – while our algorithm (blue) achieves much lower regret. These results demonstrate that $\epsilon$-greedy can be highly inefficient even in very simple scenarios." }, { "heading": "5.1.2 ADAPTIVE SIGMA", "text": "In this experiment, we compared our algorithm with variants that do not model $\sigma^2$ as a random variable. We experimented with various constant $\sigma^2$ values. We can see that modeling $\sigma^2$ as a random variable (red) leads to lower regret than the constant-$\sigma^2$ variants (Figure 2 (b)). Note that choosing a small value for $\sigma^2$ (blue) results in a near-deterministic posterior, so the results are very similar to the $\epsilon$-greedy variant; intuitively, a deterministic posterior acts as an $\epsilon = 0$ greedy strategy. On the other hand, choosing a high value for $\sigma^2$ (purple) results in very noisy sampling of the posterior approximation; we thus get a policy that is relatively random, resulting in poor performance. Choosing a $\sigma^2$ of the appropriate size for the given MDP (green, black) yields better performance, as indicated by the lower regret. However, since $\sigma^2$ is constant, it does not adapt: the regret at the beginning of learning is better even compared to the adaptive version, but as the uncertainty level decreases, the algorithm 'over-explores', resulting in inferior regret compared to the adaptive version." }, { "heading": "5.1.3 LIKELIHOOD MATCHING", "text": "We compared our method with a variant that matches only the expectation, similar to Levine et al. (2017), and a variant that does not match the likelihood at all, i.e., approximates the posterior with a fixed arbitrary prior. The latter is close to BDQN (Azizzadenesheli et al., 2018) and can be thought of as our implementation of it. Additionally, we report the results of a variant of the algorithm where the ER buffer is unbounded – possible here because the toy problem is very small, so a large enough buffer effectively serves as an infinite one. Results are shown in Figure 2 (c). The superiority of our method (black) over the expectation-only method (red) and no prior at all (blue) supports our claim that constructing priors by likelihood matching reduces the forgetting phenomenon. Additionally, the fact that the unbounded-memory algorithm (green) shows no degradation in performance confirms that this phenomenon is caused by the ER buffer being bounded." }, { "heading": "5.1.4 BUFFER SIZE", "text": "The previous experiment may suggest that catastrophic forgetting in DRLSVI can be avoided by simply increasing the buffer size. In the following experiment, we examine the simple chain environment (no self-loop actions; $k = 0$) with the following modification: we swap the meaning of the actions in half of the states, i.e., to move right in the odd states, the agent needs to take the opposite action from the even states. We compare our algorithm with variants that do not match the likelihood, using different buffer sizes. Figure 2 (c) shows the performance of each algorithm in this setup. We can see that our algorithm (blue) does not suffer from performance degradation. The other algorithms, which do not match the likelihood, all suffer from degradation; the only difference is the time at which the degradation starts. These results demonstrate that, without the likelihood matching mechanism, catastrophic forgetting occurs regardless of the buffer size. It is interesting to observe how catastrophic forgetting happens: when the buffer reaches a point where it no longer contains experience of taking the non-optimal actions, a quick degradation occurs. The algorithm then initially succeeds in re-learning the optimal policy and the regret saturates. This phenomenon becomes increasingly aggravated until the regret becomes linear. This chain of events occurred in all the experiments without likelihood matching, regardless of the buffer size." }, { "heading": "5.2 THE ARCADE LEARNING ENVIRONMENT", "text": "We report the performance of our algorithm across 5 different Atari games. We trained our algorithm for 10 million time steps and followed the standard evaluation protocol: every 250k training time steps we evaluated the model for 125k time steps. Reported measurements are the average episode return during evaluation. For evaluation, we used the learned Q-network with an $\epsilon$-greedy policy ($\epsilon = 0.001$); results are averaged across 5 different runs. We use the original DQN's hyper-parameters; hyper-parameters that are only relevant to our method are summarised in Appendix 2. For comparison, we used the publicly available learning curves for DQN1 and Rainbow from the Dopamine framework (Castro et al., 2018). Rainbow (Hessel et al., 2018) is a complex agent comprising multiple additions to the original DQN algorithm. The averaged scores for the three methods are presented in Figure 3. The evaluation suggests that our method explores at a much faster rate than DQN and is competitive with the Rainbow algorithm, which combines multiple improvements to DQN.

Note: Azizzadenesheli et al. (2018) did not supply standard evaluation metrics and reported results for a single run only.
Additionally, they changed the architecture of the Q-network to exclude the last layer weights, so a direct comparison to our method is not feasible. We therefore did not compare our results with theirs." }, { "heading": "6 DISCUSSION", "text": "We presented a Deep Learning adaptation of RLSVI that learns the state representation directly from the data. We demonstrated the different properties of our method in experiments and showed its promise. We hope to further reduce the complexity and running time of our algorithm in future work.

1In the publicly available results the authors use a different set of hyper-parameters than the original paper; we use the original paper's hyper-parameters. Note that the results for DQN are generally the same for the new set of hyper-parameters, although they may vary for specific games." } ]
2019
null
SP:86bd95d8a233760200cafb7cb72ac48a7d50b7d1
[ "The paper considers the problem of parametric conditional density estimation, i.e. given a set of points {(x_n, y_n)} drawn from a distribution $p(x,y)$, the task is to estimate the conditional distribution p(x|y). The paper considers parametric estimation where in given a parametrized family of distributions f_{theta} we wish to minimize the likelihood of seeing the given data over theta. The parametric family in a lot of applications consists of highly expressive families like neural networks, which leads to the issue of overfitting in small data regimes. This has been tackled via regularization over the parameter space which might be hard to interpret as the associated inductive bias is not well understood and depends on the parametric family under consideration. On the other hand the paper proposes to add explicit noise in the examples used during training, i.e. irrespective of the optimization procedure (which could be mini-bath sgd) the paper proposes to draw examples from the data set, explicitly add noise onto the examples and create a proxy objective over the augmented data set. ", "The paper presents a regularization technique for conditional density estimation. The method is simple: adding noise to the data points, and training on the noisy data points. The paper also further gives an interpretation of the method, as a form of smoothing the curvature of the density function. It further proves the consistency of the method." ]
Modelling statistical relationships beyond the conditional mean is crucial in many settings. Conditional density estimation (CDE) aims to learn the full conditional probability density from data. Though highly expressive, neural network based CDE models can suffer from severe over-fitting when trained with the maximum likelihood objective. Due to the inherent structure of such models, classical regularization approaches in the parameter space are rendered ineffective. To address this issue, we develop a model-agnostic noise regularization method for CDE that adds random perturbations to the data during training. We demonstrate that the proposed approach corresponds to a smoothness regularization and prove its asymptotic consistency. In our experiments, noise regularization significantly and consistently outperforms other regularization methods across seven data sets and three CDE models. The effectiveness of noise regularization makes neural network based CDE the preferable method over previous non- and semi-parametric approaches, even when training data is scarce.
[]
[ { "authors": [ "Luca Ambrogioni", "Umut Güçlü", "Marcel A.J. van Gerven", "Eric Maris" ], "title": "The Kernel Mixture Network: A Nonparametric Method for Conditional Density Estimation of Continuous Random Variables", "venue": null, "year": 2017 }, { "authors": [ "Chris M. Bishop" ], "title": "Training with Noise is Equivalent to Tikhonov Regularization", "venue": "Neural Computation,", "year": 1995 }, { "authors": [ "David M Blei", "Alp Kucukelbir", "Jon D McAuliffe" ], "title": "Variational Inference: A Review for Statisticians", "venue": "Journal of the American Statistical Association,", "year": 2017 }, { "authors": [ "Chris J.C. Burges", "Bernhard Schölkopf" ], "title": "Improving the accuracy and speed of support vector machines", "venue": "In NIPS,", "year": 1996 }, { "authors": [ "Ricardo Cao", "Antonio Cuevas", "Wensceslao González Manteiga" ], "title": "A comparative study of several smoothing methods in density estimation", "venue": "Computational Statistics & Data Analysis, 17(2):153–176,", "year": 1994 }, { "authors": [ "Tianqi Chen", "Emily B Fox", "Carlos Guestrin" ], "title": "Stochastic Gradient Hamiltonian Monte Carlo", "venue": "In ICML,", "year": 2014 }, { "authors": [ "Luc Devroye" ], "title": "The equivalence of weak, strong and complete convergence in L1 for kernel density estimates", "venue": "Annals of Statistics,", "year": 1983 }, { "authors": [ "Luc. Devroye", "Luc" ], "title": "A course in density estimation", "venue": "Birkhauser,", "year": 1987 }, { "authors": [ "Laurent Dinh", "Jascha Sohl-Dickstein", "Samy Bengio" ], "title": "Density estimation using Real NVP", "venue": "In Proceedings of the International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Vincent Dutordoir", "Hugh Salimbeni", "Marc Peter Deisenroth", "James Hensman" ], "title": "Gaussian Process Conditional Density Estimation", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "Yarin Gal", "Zg201@cam Ac Uk" ], "title": "Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning", "venue": "Zoubin Ghahramani. In ICML,", "year": 2016 }, { "authors": [ "Nicolas Gilardi", "Samy Bengio", "Mikhail Kanevski" ], "title": "Conditional Gaussian Mixture Models for Environmental Risk Mapping", "venue": "In NNSP,", "year": 2002 }, { "authors": [ "Ian J. Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and Harnessing Adversarial Examples", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Hsi Guang Sung" ], "title": "Gaussian Mixture Regression and Classification", "venue": "PhD thesis,", "year": 2004 }, { "authors": [ "Geoorey E Hinton", "Drew Van Camp" ], "title": "Keeping Neural Networks Simple by Minimizing the Description Length of the Weights", "venue": "In COLT,", "year": 1993 }, { "authors": [ "Lasse Holmstrom", "Petri Koistinen" ], "title": "Using Additive Noise in Back Propagation Training", "venue": "IEEE Transactions on Neural Networks,", "year": 1992 }, { "authors": [ "Rob J. Hyndman", "David M. Bashtannyk", "Gary K. Grunwald" ], "title": "Estimating and Visualizing Conditional Densities", "venue": "Journal of Computational and Graphical Statistics, 5(4):315,", "year": 1996 }, { "authors": [ "Diederik P. 
Kingma", "Prafulla Dhariwal" ], "title": "Glow: Generative Flow with Invertible 1x1 Convolutions", "venue": "Technical report,", "year": 2018 }, { "authors": [ "Anders Krogh", "John A Hertz" ], "title": "A Simple Weight Decay Can Improve Generalization", "venue": "In NIPS,", "year": 1991 }, { "authors": [ "Anders Krogh", "John A Hertz" ], "title": "A Simple Weight Decay Can Improve Generalization", "venue": "Technical report,", "year": 1992 }, { "authors": [ "Jan Kukačka", "Vladimir Golkov", "Daniel Cremers" ], "title": "Regularization for Deep Learning: A Taxonomy", "venue": "Technical report,", "year": 2017 }, { "authors": [ "Qi Li", "Jeffrey S. Racine" ], "title": "Nonparametric econometrics : theory and practice", "venue": null, "year": 2007 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Decoupled Weight Decay Regularization", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Laurens Maaten", "Minmin Chen", "Stephen Tyree", "Kilian Weinberger" ], "title": "Learning with marginalized corrupted features", "venue": "In International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "David J C Mackay" ], "title": "A Practical Bayesian Framework for Backprop Networks", "venue": "Neural Computation,", "year": 1992 }, { "authors": [ "Mehdi Mirza", "Simon Osindero" ], "title": "Conditional Generative Adversarial Nets", "venue": "Technical report,", "year": 2014 }, { "authors": [ "Kevin P. Murphy" ], "title": "Machine Learning: A Probabilistic Perspective", "venue": null, "year": 2012 }, { "authors": [ "Alan F Murray", "Peter J Edwards" ], "title": "Synaptic Weight Noise During MLP Learning Enhances FaultTolerance, Generalisation and Learning Trajectory", "venue": "In NIPS,", "year": 1993 }, { "authors": [ "Nagarajan Natarajan", "Inderjit S Dhillon", "Pradeep K Ravikumar", "Ambuj Tewari" ], "title": "Learning with noisy labels", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Radford M Neal" ], "title": "Bayesian learning for neural networks, volume 118", "venue": "Springer Science & Business Media,", "year": 2012 }, { "authors": [ "Andrew Y Ng" ], "title": "Feature selection, L 1 vs. L 2 regularization, and rotational invariance", "venue": "In ICML,", "year": 2004 }, { "authors": [ "Steven J. Nowlan", "Geoffrey E. Hinton" ], "title": "Simplifying Neural Networks by Soft Weight Sharing", "venue": "Neural Computation,", "year": 1992 }, { "authors": [ "Emanuel Parzen" ], "title": "On Estimation of a Probability Density Function and Mode", "venue": "The Annals of Mathematical Statistics, 33(3):1065–1076,", "year": 1962 }, { "authors": [ "Lorien Y. Pratt", "Stephen J. 
Hanson" ], "title": "Comparing Biases for Minimal Network Construction with Back-Propagation", "venue": "In NIPS,", "year": 1989 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed" ], "title": "Variational Inference with Normalizing Flows", "venue": "In Proceedings of the 32nd International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Murray Rosenblatt" ], "title": "Remarks on Some Nonparametric Estimates of a Density Function", "venue": "The Annals of Mathematical Statistics, 27(3):832–837,", "year": 1956 }, { "authors": [ "Jonas Rothfuss", "Fabio Ferreira", "Simon Walther", "Maxim Ulrich" ], "title": "Conditional Density Estimation with Neural Networks: Best Practices and Benchmarks", "venue": "Technical report,", "year": 2019 }, { "authors": [ "Mats Rudemo" ], "title": "Empirical Choice of Histograms and Kernel Density Estimators", "venue": null, "year": 1982 }, { "authors": [ "David W. Scott", "M.P. Wand" ], "title": "Feasibility of Multivariate Density Estimates", "venue": "ISSN 00063444", "year": 1991 }, { "authors": [ "S.J. Sheather", "M.C. Jones" ], "title": "A Reliable Data-Based Bandwidth Selection Method for Kernel Density Estimation", "venue": "Journal of the Royal Statistical Society,", "year": 1991 }, { "authors": [ "Jocelyn Sietsma", "Robert J.F. Dow" ], "title": "Creating artificial neural networks that generalize", "venue": "Neural Networks, 4(1):67–79,", "year": 1991 }, { "authors": [ "B. Silverman" ], "title": "On the estimation of a probability density function by the maximum penalized likelihood method", "venue": "The Annals of Statistics,", "year": 1982 }, { "authors": [ "B. Silverman" ], "title": "Density estimation for statistics and data analysis", "venue": "Monographs on Statistics and Applied Probability,", "year": 1986 }, { "authors": [ "Kihyuk Sohn", "Honglak Lee", "Xinchen Yan" ], "title": "Learning Structured Output Representation using Deep Conditional Generative Models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", "venue": "Journal of Machine Learning Research,", "year": 2014 }, { "authors": [ "Masashi Sugiyama", "Ichiro Takeuchi" ], "title": "Conditional density estimation via Least-Squares Density Ratio Estimation", "venue": "In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics,", "year": 2010 }, { "authors": [ "Volker Tresp" ], "title": "Mixtures of Gaussian Processes", "venue": "In NIPS,", "year": 2001 }, { "authors": [ "Brian L Trippe", "Richard E Turner" ], "title": "Conditional Density Estimation with Bayesian Normalising Flows", "venue": "Technical report,", "year": 2018 }, { "authors": [ "Li Wan", "Matthew Zeiler", "Sixin Zhang", "Yann Le Cun", "Rob Fergus" ], "title": "Regularization of Neural Networks using DropConnect", "venue": "In ICML, pp. 1058–1066,", "year": 2013 }, { "authors": [ "A.R. 
Webb" ], "title": "Functional approximation by feed-forward networks: a least-squares approach to generalization", "venue": "IEEE Transactions on Neural Networks, 5(3):363–371,", "year": 1994 }, { "authors": [ "Max Welling", "Yee Whye Teh" ], "title": "Bayesian Learning via Stochastic Gradient Langevin Dynamics", "venue": "In ICML,", "year": 2011 }, { "authors": [ "Halbert White" ], "title": "Learning in Artificial Neural Networks: A Statistical Perspective", "venue": "Neural Computation, 1(4):425–464,", "year": 1989 }, { "authors": [ "Xiaoyong Yuan", "Pan He", "Qile Zhu", "Xiaolin Li" ], "title": "Adversarial Examples: Attacks and Defenses for Deep Learning", "venue": "Technical report,", "year": 2017 }, { "authors": [ "Rothfuss" ], "title": "Overall, the target variable is one-dimensional, i.e. y ∈ Y ⊆ R, whereas the conditional variable x constitutes a 14-dimensional", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "While regression analysis aims to describe the conditional mean E[y|x] of a response y given inputs x, many problems such as risk management and planning under uncertainty require gaining insight about deviations from the mean and their associated likelihood. The stochastic dependency of y on x can be captured by modeling the conditional probability density p(y|x). Inferring such a density function from a set of empirical observations {(xn, yn)}Nn=1 is typically referred to as conditional density estimation (CDE) and is the focus of this paper.\nIn the recent machine learning literature, there has been a resurgence of interest in high-capacity density models based on neural networks (Dinh et al., 2017; Ambrogioni et al., 2017; Kingma & Dhariwal, 2018). Since this line of work mainly focuses on the modelling of images based on large scale data sets, over-fitting and noisy observations are of minor concern in this context. In contrast, we are interested in CDE in settings where data may be scarce and noisy. When combined with maximum likelihood estimation, the flexibility of such high-capacity models results in over-fitting and poor generalization. While regression typically assumes Gaussian conditional noise, CDE uses expressive distribution families to model deviations from the conditional mean. Hence, the overfitting problem tends to be even more severe in CDE than in regression.\nClassical regularization of the neural network weights such as weight decay (Pratt & Hanson, 1989) has been shown to be effective for regression and classification. However, in the context of CDE, the output of the neural network merely controls the parameters of a density model such as a Gaussian Mixture or Normalizing Flow. This makes the standard regularization methods in the parameter space less effective and harder to analyze.\nAiming to address this issue, we propose and analyze noise regularization, a method well-studied in the context of regression and classification, for the purpose of conditional density estimation. In that, the paper attempts to close a gap in previous research. By adding small random perturbations to the data during training, the conditional density estimate is smoothed and tends to generalize better. In fact, we show that adding noise during maximum likelihood estimation is equivalent to penalizing the second derivatives of the conditional log-probability. Visually, the respective regularization term punishes very curved or even spiky density estimators in favor of smoother variants, which proves to be a favorable inductive bias in many applications. Moreover, under some regularity conditions, we show that the proposed regularization scheme is asymptotically consistent, converging to the unbiased maximum likelihood estimator. This does not only support the soundness of the proposed\nmethod but also endows us with useful insight in how to set the regularization intensity relative to the data dimensionality and training set size.\nOverall, the proposed noise regularization scheme is easy to implement and agnostic to the parameterization of the CDE model. We empirically demonstrate its effectiveness on three different neural network based models. The experimental results show that noise regularization outperforms other regularization methods significantly and consistently across various data sets. 
Finally, we demonstrate that, when properly regularized, neural network based CDE is able to improve upon state-of-the-art non-parametric estimators, even when only 400 training observations are available." }, { "heading": "2 BACKGROUND", "text": "Density Estimation. Let $X$ be a random variable with probability density function (PDF) $p(x)$ defined over the domain $\mathcal{X} \subseteq \mathbb{R}^{d_x}$. Given a collection $\mathcal{D} = \{x_1, \dots, x_n\}$ of observations sampled from $p(x)$, the goal is to find a good estimate $\hat{f}(x)$ of the true density function $p$. In parametric estimation, the PDF $\hat{f}$ is assumed to belong to a parametric family $\mathcal{F} = \{\hat{f}_\theta(\cdot) \,|\, \theta \in \Theta\}$ where the density function is described by a finite-dimensional parameter $\theta \in \Theta$. The standard method for estimating $\theta$ is maximum likelihood estimation, wherein $\theta$ is chosen so that the likelihood of the data $\mathcal{D}$ is maximized. This is equivalent to minimizing the Kullback-Leibler divergence between the empirical data distribution $p_{\mathcal{D}}(x) = \frac{1}{n}\sum_{i=1}^n \delta(||x - x_i||)$ (i.e., a mixture of point masses at the observations $x_i$) and the parametric distribution $\hat{f}_\theta$:

$$\theta^* = \arg\max_{\theta \in \Theta} \sum_{i=1}^n \log \hat{f}_\theta(x_i) = \arg\min_{\theta \in \Theta} D_{KL}\big(p_{\mathcal{D}} \,||\, \hat{f}_\theta\big) \quad (1)$$

From a geometric perspective, (1) can be viewed as an orthogonal projection of $p_{\mathcal{D}}(x)$ onto $\mathcal{F}$ w.r.t. the reverse KL-divergence. Hence, (1) is also commonly referred to as an M-projection (Murphy, 2012; Nielsen, 2018). In contrast, non-parametric density estimators make implicit smoothness assumptions through a kernel function. The most popular non-parametric method, kernel density estimation (KDE), places a symmetric density function $K(z)$, the so-called kernel, on each training data point $x_i$ (Rosenblatt, 1956; Parzen, 1962). The resulting density estimate reads as $\hat{q}(x) = \frac{1}{n h^d} \sum_{i=1}^n K\big(\frac{x - x_i}{h}\big)$. One popular choice of $K(\cdot)$ is the Gaussian kernel $K(z) = (2\pi)^{-\frac{d}{2}} \exp(-\frac{1}{2} z^2)$. Beyond the appropriate choice of $K(\cdot)$, a central challenge is the selection of the bandwidth parameter $h$, which controls the smoothness of the estimated PDF (Li & Racine, 2007).

Conditional Density Estimation (CDE). Let $(X, Y)$ be a pair of random variables with respective domains $\mathcal{X} \subseteq \mathbb{R}^{d_x}$ and $\mathcal{Y} \subseteq \mathbb{R}^{d_y}$ and realizations $x$ and $y$. Let $p(y|x) = p(x, y)/p(x)$ denote the conditional probability density of $y$ given $x$. Typically, $Y$ is referred to as the dependent (explained) variable and $X$ as the conditional (explanatory) variable. Given a dataset of observations $\mathcal{D} = \{(x_n, y_n)\}_{n=1}^N$ drawn from the joint distribution $(x_n, y_n) \sim p(x, y)$, the aim of conditional density estimation (CDE) is to find an estimate $\hat{f}(y|x)$ of the true conditional density $p(y|x)$. In the context of CDE, the KL-divergence objective is expressed as an expectation over $p(x)$:

$$\mathbb{E}_{x \sim p(x)}\Big[D_{KL}\big(p(y|x) \,||\, \hat{f}(y|x)\big)\Big] = \mathbb{E}_{(x,y) \sim p(x,y)}\Big[\log p(y|x) - \log \hat{f}(y|x)\Big] \quad (2)$$

Corresponding to (1), we refer to the minimization of (2) w.r.t. $\theta$ as a conditional M-projection. Given a dataset $\mathcal{D}$ drawn i.i.d. from $p(x, y)$, the conditional MLE following from (2) can be stated as

$$\theta^* = \arg\min_\theta - \sum_{i=1}^n \log \hat{f}_\theta(y_i|x_i) \quad (3)$$" }, { "heading": "3 RELATED WORK", "text": "The first part of this section discusses relevant work in the field of CDE, focusing on high-capacity models that make little prior assumptions. The second part relates our approach to previous regularization and data augmentation methods.

Non-parametric CDE. 
A vast body of literature in statistics and econometrics studies nonparametric kernel density estimators (KDE) (Rosenblatt, 1956; Parzen, 1962) and the associated bandwidth selection problem, which concerns choosing the appropriate amount of smoothing (Silverman,\n1982; Hall et al., 1992; Cao et al., 1994). To estimate conditional probabilities, previous work proposes to estimate both the joint and marginal probability separately with KDE and then computing the conditional probability as their ratio (Hyndman et al., 1996; Li & Racine, 2007). Other approaches combine non-parametric elements with parametric elements (Tresp, 2001; Sugiyama & Takeuchi, 2010; Dutordoir et al., 2018). Despite their theoretical appeal, non-parametric density estimators suffer from poor generalization in regions where data is sparse (e.g., tail regions), causing rapid performance deterioration as the data dimensionality increases (Scott & Wand, 1991).\nCDE based on neural networks. Most work in machine learning focuses on flexible parametric function approximators for CDE. In our experiments, we use the work of Bishop (1994) and Ambrogioni et al. (2017), who propose to use a neural network to control the parameters of a mixture density model. A recent trend in machine learning are latent density models such as cGANs (Mirza & Osindero, 2014) and cVAEs (Sohn et al., 2015). Although such methods have been shown successful for estimating distributions of images, the probability density function (PDF) of such models is intractable. More promising in this sense are normalizing flows (Rezende & Mohamed, 2015; Dinh et al., 2017; Trippe & Turner, 2018), since they provide the PDF in tractable form. We employ a neural network controlling the parameters of a normalizing flow as our third CDE model to showcase the empirical efficacy of our regularization approach.\nRegularization. Since neural network based CDE models suffer from severe over-fitting when trained with the MLE objective, they require proper regularization. Classical regularization of the parameters such as weight decay (Pratt & Hanson, 1989; Krogh & Hertz, 1992; Nowlan & Hinton, 1992), l1/l2-penalties (Mackay, 1992; Ng, 2004) and Bayesian priors (Murray & Edwards, 1993; Hinton & Van Camp, 1993) have been shown to work well in the regression and classification setting. However, in the context of CDE, it is less clear what kind of inductive bias such a regularization imposes on the density estimate. In contrast, our regularization approach is agnostic w.r.t. parametrization and is shown to penalize strong variations of the log-density function.\nRegularization methods such as dropout are closely related to ensemble methods (Srivastava et al., 2014). Thus, they are orthogonal to our work and can be freely combined with noise regularization.\nAdding noise during training. Adding noise during training is a common scheme that has been proposed in various forms. This includes noise on the neural network weights or activations (Wan et al., 2013; Srivastava et al., 2014; Gal & Uk, 2016) and additive noise on the gradients for scalable MCMC posterior inference (Welling & Teh, 2011; Chen et al., 2014). While this line of work corresponds to noise in the parameter space, other research suggests to augment the training data through random and/or adversarial transformations of the data (Sietsma & Dow, 1991; Burges & Schölkopf, 1996; Goodfellow et al., 2015; Yuan et al., 2017). Our approach transforms the training observations by adding small random perturbations. 
While this form of regularization has been studied in the context of regression and classification problems (Holmstrom & Koistinen, 1992a; Webb, 1994; Bishop, 1995; Natarajan et al., 2013; Maaten et al., 2013), this paper focuses on the regularization of CDE. In particular, we build on top of the results of Webb (1994), showing that training with noise corresponds to a penalty on strong variations of the log-density, and extend previous consistency results for regression of Holmstrom & Koistinen (1992a) to the more general setting of CDE. To the best of our knowledge, this is also the first paper to evaluate the empirical efficacy of noise regularization for density estimation." }, { "heading": "4 NOISE REGULARIZATION", "text": "When considering expressive families of conditional densities, standard maximum likelihood estimation of the model parameters $\theta$ is ill suited. As can be observed in Figure 1, simply minimizing the negative log-likelihood of the data leads to severe over-fitting and poor generalization beyond the training data. Hence, it is necessary to impose additional inductive bias, for instance, in the form of regularization. Unlike in regression or classification, the form of inductive bias imposed by popular regularization techniques such as weight decay (Krogh & Hertz, 1991; Kukačka et al., 2017) is less clear in the CDE setting, where the neural network weights often only indirectly control the probability density through an unconditional density model, e.g., a Gaussian Mixture.

We propose to add noise perturbations to the data points during the optimization of the log-likelihood objective. This can be understood as replacing the original data points $(x_i, y_i)$ by random variables $\tilde{x}_i = x_i + \xi_x$ and $\tilde{y}_i = y_i + \xi_y$, where the perturbation vectors are sampled from noise distributions $K_x(\xi_x)$ and $K_y(\xi_y)$ respectively. Further, we choose the noise to be zero-centered as well as identically and independently distributed among the data dimensions, with standard deviation $h$:

$$\mathbb{E}_{\xi \sim K(\xi)}[\xi] = 0 \quad \text{and} \quad \mathbb{E}_{\xi \sim K(\xi)}\big[\xi \xi^\top\big] = h^2 I \quad (4)$$

This can be seen as data augmentation, where "synthetic" data is generated by randomly perturbing the original data. Since the supply of noise vectors is technically unlimited, an arbitrarily large augmented data set can be generated by repetitively sampling data points from $\mathcal{D}$ and adding a random perturbation vector to the respective data point. This procedure is formalized in Algorithm 1.

For notational brevity, we set $\mathcal{Z} := \mathcal{X} \times \mathcal{Y}$, $z := (x^\top, y^\top)^\top$ and denote $\hat{f}_\theta(z) := \hat{f}_\theta(y|x)$. The presented noise regularization approach is agnostic to whether we are concerned with unconditional or conditional MLE. Thus, the generic notation also allows us to generalize the results to both settings (derived in the remainder of the paper).

Algorithm 1 (Conditional) MLE with Noise Regularization - Generic Procedure

Require: $\mathcal{D} = \{z_1, ..., z_n\}$, noise intensity $h$
Require: number of perturbed samples $r$
1: for $j = 1$ to $r$ do
2:   Select $i \in \{1, ..., n\}$ with equal probability
3:   Draw perturbation $\xi \sim K$
4:   Set $\tilde{z}_j = z_i + h\xi$
5: return $\arg\min_{\theta \in \Theta} -\sum_{j=1}^{r} \log \hat{f}_\theta(\tilde{z}_j)$

A NumPy sketch of this procedure is given directly after the listing.
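A minimal NumPy sketch of the augmentation step of Algorithm 1 (lines 1-4), together with line 5 for the simplest parametric family, a single Gaussian, where the MLE is available in closed form; the variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(Z, h, r):
    """Lines 1-4 of Algorithm 1: resample uniformly with replacement,
    then perturb with h * xi, where xi ~ N(0, I)."""
    idx = rng.integers(0, Z.shape[0], size=r)
    return Z[idx] + h * rng.standard_normal((r, Z.shape[1]))

# Line 5 for a single Gaussian, whose maximum likelihood solution is the
# sample mean and standard deviation of the augmented data:
Z = rng.normal(loc=2.0, scale=1.5, size=(50, 1))   # scarce original data
Z_tilde = augment(Z, h=0.4, r=10_000)
mu_hat, sigma_hat = Z_tilde.mean(), Z_tilde.std()  # MLE on augmented data
```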
When considering highly flexible parametric families such as Mixture Density Networks (MDNs) (Bishop, 1994), the maximum likelihood solution in line 5 of Algorithm 1 is no longer tractable. In such a case, one typically resorts to numerical optimization techniques such as mini-batch gradient descent and variations thereof. In this context, the generic procedure in Algorithm 1 can be transformed into a simple extension of mini-batch gradient descent on the MLE objective (see Algorithm 2). Specifically, each mini-batch is perturbed with i.i.d. noise before computing the MLE objective function (forward pass) and the respective gradients (backward pass).

Algorithm 2 (Conditional) MLE with Noise Regularization - Mini-Batch Gradient Descent

Require: $\mathcal{D} = \{z_1, ..., z_n\}$, noise intensity $h$
Require: learning rate $\alpha$, mini-batch size $m$
1: Initialize $\theta$
2: while $\theta$ not converged do
3:   Sample mini-batch $\{z_1, ..., z_m\} \subset \mathcal{D}$
4:   for $j = 1$ to $m$ do
5:     Draw perturbation $\xi \sim K$
6:     Set $\tilde{z}_j = z_j + h\xi$
7:   $\theta \leftarrow \theta + \alpha \nabla_\theta \sum_{j=1}^{m} \log \hat{f}_\theta(\tilde{z}_j)$
8: return optimized parameter $\theta$

A PyTorch sketch of a single such training step follows below.
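Algorithm 2 amounts to a one-line change to a standard mini-batch training loop. Below is a hedged PyTorch sketch of a single training step; it assumes a model whose forward pass returns a torch.distributions object over $y$, which is one possible interface rather than the one used in our implementation.

```python
import torch

def noisy_minibatch_step(model, optimizer, x_batch, y_batch, h_x, h_y):
    """One step of Algorithm 2: perturb the mini-batch with Gaussian noise,
    then take a gradient step on the negative conditional log-likelihood.
    Assumption: model(x) returns a torch.distributions.Distribution over y."""
    x_tilde = x_batch + h_x * torch.randn_like(x_batch)  # lines 5-6 for x
    y_tilde = y_batch + h_y * torch.randn_like(y_batch)  # lines 5-6 for y
    nll = -model(x_tilde).log_prob(y_tilde).mean()       # line 7 (objective)
    optimizer.zero_grad()
    nll.backward()
    optimizer.step()
    return nll.item()
```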
" }, { "heading": "4.1 VARIABLE NOISE AS SMOOTHNESS REGULARIZATION", "text": "Intuitively, the previously presented variable noise can be interpreted as 'smearing' the data points during the maximum likelihood estimation. This alleviates the jaggedness of the density estimate arising from an un-regularized maximum likelihood objective in flexible density classes. We will now give this intuition a formal foundation by mathematically analyzing the effect of the noise perturbations.

Before discussing the particular effects of randomly perturbing the data during conditional maximum likelihood estimation, we first analyze noise regularization in a more general case. Let $l(\mathcal{D})$ be a loss function over a set of data points $\mathcal{D} = \{z_1, ..., z_n\}$, which can be partitioned into a sum of losses $l(\mathcal{D}) = \sum_{i=1}^n l(z_i)$, corresponding to each data point $z_i$. The expected loss $l(z_i + \xi)$, resulting from adding random perturbations, can be approximated by a second-order Taylor expansion around $z_i$. Using the assumption about $\xi$ in (4), the expected loss can be written as

$$\mathbb{E}_{\xi \sim K(\xi)}[l(z_i + \xi)] = l(z_i) + \frac{1}{2} \mathbb{E}_{\xi \sim K(\xi)}\big[\xi^\top H^{(i)} \xi\big] + \mathcal{O}(\xi^3) \approx l(z_i) + \frac{h^2}{2} \text{tr}(H^{(i)}) \quad (5)$$

where $l(z_i)$ is the loss without noise and $H^{(i)} = \frac{\partial^2 l}{\partial z^2}(z)\big|_{z_i}$ is the Hessian of $l$ w.r.t. $z$, evaluated at $z_i$. Assuming that the noise $\xi$ is small in magnitude, $\mathcal{O}(\xi^3)$ is negligible. This effect has been observed earlier by Webb (1994) and Bishop (1994). See Appendix A for derivations.

When concerned with maximum likelihood estimation of a conditional density $\hat{f}_\theta(y|x)$, the loss function coincides with the negative conditional log-likelihood $l(y_i, x_i) = -\log \hat{f}_\theta(y_i|x_i)$. Let the standard deviations of the additive data noise $\xi_x, \xi_y$ be $h_x$ and $h_y$ respectively. Maximum likelihood estimation (MLE) with data noise is equivalent to minimizing the loss

$$l(\mathcal{D}) \approx -\sum_{i=1}^{n} \log \hat{f}_\theta(y_i|x_i) - \frac{h_y^2}{2} \sum_{i=1}^{n} \sum_{j=1}^{d_y} \frac{\partial^2 \log \hat{f}_\theta(y|x)}{\partial y^{(j)} \partial y^{(j)}}\bigg|_{\substack{x=x_i \\ y=y_i}} - \frac{h_x^2}{2} \sum_{i=1}^{n} \sum_{j=1}^{d_x} \frac{\partial^2 \log \hat{f}_\theta(y|x)}{\partial x^{(j)} \partial x^{(j)}}\bigg|_{\substack{x=x_i \\ y=y_i}} \quad (6)$$

In that, the first term corresponds to the standard MLE objective, while the other two terms constitute a form of smoothness regularization. The second term in (6) penalizes large negative second derivatives of the conditional log-density estimate $\log \hat{f}_\theta(y|x)$ w.r.t. $y$. As the MLE objective pushes the density estimate towards high densities and strong concavity at the data points $y_i$, the regularization term counteracts this tendency to over-fit and overall smoothes the fitted distribution. The third term penalizes large negative second derivatives w.r.t. the conditional variable $x$, thereby regularizing the sensitivity of the density estimate to changes in the conditional variable. The intensity of the noise regularization can be controlled through the variances ($h_x^2$ and $h_y^2$) of the random perturbations.

Figure 1 illustrates the effect of the introduced noise regularization scheme on MDN estimates. Plain maximum likelihood estimation (left) leads to strong over-fitting, resulting in a spiky distribution that generalizes poorly beyond the training data. In contrast, training with noise regularization (center and right) results in smoother density estimates that are closer to the true conditional density." }, { "heading": "4.2 CONSISTENCY OF NOISE REGULARIZATION", "text": "We now establish asymptotic consistency results for the proposed noise regularization. In particular, we show that, under some regularity conditions concerning integrability and the decay of the noise regularization, the solution of Algorithm 1 converges to the asymptotic MLE solution.

Let $\hat{f}_\theta(z) : \mathbb{R}^{d_z} \times \Theta \to (0, \infty)$ be a continuous function of $z$ and $\theta$. Moreover, we assume that the parameter space $\Theta$ is compact. In the classical MLE setting, the idealized loss, corresponding to a (conditional) M-projection of the true data distribution onto the parametric family, reads as

$$l(\theta) = -\mathbb{E}_{p(z)}\big[\log \hat{f}_\theta(z)\big] \quad (7)$$

As we typically just have a finite number of samples from $p(z)$, the respective empirical estimate $\hat{l}_n(\theta) = -\frac{1}{n}\sum_{i=1}^{n} \log \hat{f}_\theta(z_i)$, $z_i \overset{i.i.d.}{\sim} p(z)$, is used as training objective. Note that we now define the loss as a function of $\theta$ and, for fixed $\theta$, treat $\hat{l}_n(\theta)$ as a random variable. Under some regularity conditions, one can invoke the uniform law of large numbers to show consistency of the empirical ML objective in the sense that $\sup_{\theta \in \Theta} |\hat{l}_n(\theta) - l(\theta)| \xrightarrow{a.s.} 0$ (see Appendix B for details). In case of the presented noise regularization scheme, the maximum likelihood estimation is performed on the augmented data $\{\tilde{z}_j\}$ rather than the original data $\{z_i\}$. For our analysis, we view Algorithm 1 from a slightly different angle. In fact, the data augmentation procedure of uniformly selecting a data point from $\{z_1, ..., z_n\}$ and perturbing it with a noise vector drawn from $K$ can be viewed as drawing i.i.d. samples from a kernel density estimate $\hat{q}_n^{(h)}(z) = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{h^{d_z}} K\big(\frac{z - z_i}{h}\big)$. Hence, maximum likelihood estimation with variable noise can be understood as

1. forming a kernel density estimate $\hat{q}_n^{(h)}$ of the training data

2. followed by a (conditional) M-projection of $\hat{q}_n^{(h)}$ onto the parametric family.

In that, step 2 aims to find the $\theta^*$ that minimizes the following objective:

$$l_n^{(h)}(\theta) = -\mathbb{E}_{\hat{q}_n^{(h)}(z)}\big[\log \hat{f}_\theta(z)\big] \quad (8)$$

Since (8) is generally intractable, $r$ samples are drawn from the kernel density estimate, forming the following Monte Carlo approximation of (8), which corresponds to the loss in line 5 of Algorithm 1:

$$\hat{l}_{n,r}^{(h)}(\theta) = -\frac{1}{r} \sum_{j=1}^{r} \log \hat{f}_\theta(\tilde{z}_j), \quad \tilde{z}_j \sim \hat{q}_n^{(h)}(z) \quad (9)$$

A code sketch of this equivalence is given below.
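The equivalence used in this view is easy to state in code: the following NumPy sketch draws from $\hat{q}_n^{(h)}$ exactly by resampling and perturbing, and uses the draws to form the Monte Carlo objective (9); log_f_theta stands for any parametric log-density and is a placeholder of ours.

```python
import numpy as np

def sample_kde(Z, h, r, rng):
    """Drawing from the Gaussian KDE q_hat_n^(h) is exactly 'pick a training
    point uniformly, then add h * xi' -- i.e., lines 2-4 of Algorithm 1."""
    idx = rng.integers(0, Z.shape[0], size=r)
    return Z[idx] + h * rng.standard_normal((r, Z.shape[1]))

def mc_objective(log_f_theta, Z, h, r, rng):
    """Monte Carlo approximation (9) of the intractable objective (8)."""
    z_tilde = sample_kde(Z, h, r, rng)
    return -np.mean(log_f_theta(z_tilde))
```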
We are concerned with the consistency of the training procedure in Algorithm 1, similar to the classical MLE consistency result discussed above. Hence, we need to show that $\sup_{\theta \in \Theta} |\hat{l}_{n,r}^{(h)}(\theta) - l(\theta)| \xrightarrow{a.s.} 0$ as $n, r \to \infty$. We begin our argument by decomposing the problem into easier sub-problems. In particular, the triangle inequality is used to obtain the following upper bound:

$$\sup_{\theta \in \Theta} \big|\hat{l}_{n,r}^{(h)}(\theta) - l(\theta)\big| \leq \sup_{\theta \in \Theta} \big|\hat{l}_{n,r}^{(h)}(\theta) - l_n^{(h)}(\theta)\big| + \sup_{\theta \in \Theta} \big|l_n^{(h)}(\theta) - l(\theta)\big| \quad (10)$$

Note that $\hat{l}_{n,r}^{(h)}(\theta)$ is based on samples from the kernel density estimate, which are obtained by adding random noise vectors $\xi \sim K(\cdot)$ to our original training data. Since we can sample an unlimited amount of such random noise vectors, $r$ can be chosen arbitrarily high. This allows us to make $\sup_{\theta \in \Theta} |\hat{l}_{n,r}^{(h)}(\theta) - l_n^{(h)}(\theta)|$ arbitrarily small by the uniform law of large numbers. In order to make $\sup_{\theta \in \Theta} |l_n^{(h)}(\theta) - l(\theta)|$ small in the limit $n \to \infty$, the sequence of bandwidth parameters $h_n$ needs to be chosen appropriately. Such results can then be combined using a union bound argument. In the following, we outline the steps leading to the desired results. The proof methodology is similar to Holmstrom & Koistinen (1992b): while they show consistency results for regression with a quadratic loss function, our proof deals with generic and inherently unbounded log-likelihood objectives and thus holds for a much more general class of learning problems. The full proofs can be found in the Appendix.

Initially, we have to make asymptotic integrability assumptions that ensure that the expectations in $l_n^{(h)}(\theta)$ and $l(\theta)$ are well-behaved in the limit (see Appendix C for details). Given respective integrability, we are able to obtain the following proposition.

Proposition 1 Suppose the regularity conditions (28) and (29) are satisfied, and that

$$\lim_{n \to \infty} h_n = 0, \quad \lim_{n \to \infty} n (h_n)^d = \infty \quad (11)$$

Then,

$$\lim_{n \to \infty} \sup_{\theta \in \Theta} \big|l_n^{(h)}(\theta) - l(\theta)\big| = 0 \quad (12)$$

almost surely.

In (11) we find conditions on the asymptotic behavior of the smoothing sequence $(h_n)$. These conditions also give us valuable guidance on how to properly choose the noise intensity in line 4 of Algorithm 1 (see Section 4.3 for a discussion). The result in (12) demonstrates that, under the discussed conditions, replacing the empirical data distribution with a kernel density estimate still results in an asymptotically consistent maximum likelihood objective. However, as previously discussed, $l_n^{(h)}(\theta)$ is intractable and thus replaced by its sample estimate $\hat{l}_{n,r}^{(h)}$. Since we can draw an arbitrary amount of samples from $\hat{q}_n^{(h)}$, we can approximate $l_n^{(h)}(\theta)$ with arbitrary precision. Given a fixed data set $\mathcal{D}$ of size $n > n_0$, this means that $\lim_{r \to \infty} \sup_{\theta \in \Theta} \big|\hat{l}_{n,r}^{(h)}(\theta) - l_n^{(h)}(\theta)\big| = 0$ almost surely, by (29) and the uniform law of large numbers. Since our original goal was to also show consistency for $n \to \infty$, this result is combined with Proposition 1, obtaining the following consistency theorem.

Theorem 1 Suppose the regularity conditions (28) and (29) are satisfied, $h_n$ fulfills (11) and $\Theta$ is compact. Then,

$$\overline{\lim_{n \to \infty}} \; \overline{\lim_{r \to \infty}} \; \sup_{\theta \in \Theta} \big|\hat{l}_{n,r}^{(h)}(\theta) - l(\theta)\big| = 0 \quad (13)$$

almost surely.

In that, $\overline{\lim}$ is used to denote the limit superior ('lim sup') of a sequence.

Training a (conditional) density model with noise regularization means minimizing $\hat{l}_{n,r}^{(h)}(\theta)$ w.r.t. $\theta$. As a result of this optimization, one obtains a parameter vector $\hat{\theta}_{n,r}^{(h)}$, which we hope is close to a minimizing parameter $\bar{\theta}$ of the ideal objective function $l(\theta)$. In the following, we establish asymptotic consistency results, similar to Theorem 1, in the parameter space. Therefore, we first have to formalize the concept of closeness and optimality in the parameter space. 
Since a minimizing parameter $\bar{\theta}$ of $l(\theta)$ may not be unique, we define $\Theta^* = \{\theta^* \,|\, l(\theta^*) \leq l(\theta) \; \forall \theta \in \Theta\}$ as the set of global minimizers of $l(\theta)$, and $d(\theta, \Theta^*) = \min_{\theta^* \in \Theta^*} \{||\theta - \theta^*||_2\}$ as the distance of an arbitrary parameter $\theta$ to $\Theta^*$. Based on these definitions, it can be shown that Algorithm 1 is asymptotically consistent in the sense that the minimizer $\hat{\theta}_{n,r}^{(h)}$ converges almost surely to the set of optimal parameters $\Theta^*$.

Theorem 2 Suppose the regularity conditions (28) and (29) are satisfied, $h_n$ fulfills (11) and $\Theta$ is compact. For $r > 0$ and $n > n_0$, let $\hat{\theta}_{n,r}^{(h)} \in \Theta$ be a global minimizer of the empirical objective $\hat{l}_{n,r}^{(h)}$. Then

$$\overline{\lim_{n \to \infty}} \; \overline{\lim_{r \to \infty}} \; d\big(\hat{\theta}_{n,r}^{(h)}, \Theta^*\big) = 0 \quad (14)$$

almost surely.

Note that Theorem 2 considers global optimizers, but it equivalently holds for compact neighborhoods of a local minimum $\theta^*$ (see the discussion in Appendix C)." }, { "heading": "4.3 CHOOSING THE NOISE INTENSITY", "text": "After discussing the properties of noise regularization, we are interested in how to properly choose the noise intensity $h$ for different training data sets. Ideally, we would like to choose $h$ so that $|l_n^{(h)}(\theta) - l(\theta)|$ is minimized, which is practically not feasible since $l(\theta)$ is intractable. Inequality (30) gives us an upper bound on this quantity, suggesting to minimize the $l_1$ distance between the kernel density estimate $\hat{q}_n^{(h)}$ and the data distribution $p(z)$. This is in turn a well-studied problem in the kernel density estimation literature (see, e.g., Devroye (1987)). Unfortunately, general solutions of this problem require knowing $p(z)$, which is not the case in practice. Under the assumption that $p(z)$ and the kernel function $K$ are Gaussian, the optimal bandwidth can be derived as $h = 1.06 \, \hat{\sigma} \, n^{-\frac{1}{4+d}}$ (Silverman, 1986). In that, $\hat{\sigma}$ denotes the estimated standard deviation of the data, $n$ the number of data points and $d$ the dimensionality of $\mathcal{Z}$. This formula is widely known as the rule of thumb and is often used as a heuristic for choosing $h$.

In addition, the conditions in (11) give us further intuition. The first condition tells us that $h_n$ needs to decay towards zero as $n$ becomes large. This reflects the general theme in machine learning that the more data is available, the less inductive bias / regularization should be imposed. The second condition suggests that the bandwidth decay must happen at a rate slower than $n^{-\frac{1}{d}}$. For instance, the rule of thumb fulfills these two criteria and thus constitutes a useful guideline for selecting $h$. However, for highly non-Gaussian data distributions, the respective $h_n$ may decay too slowly, and a faster decay rate such as $n^{-\frac{1}{1+d}}$ may be appropriate. Both heuristics are sketched in code below.
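The two heuristics discussed in this section take one line each; a minimal sketch (keeping the constant $1.06\,\hat\sigma$ in the faster-decaying schedule is our choice, made only for comparability):

```python
import numpy as np

def rule_of_thumb(Z):
    """Silverman's rule: h = 1.06 * sigma_hat * n^(-1/(4+d)).
    Satisfies (11): h_n -> 0 and n * h_n^d -> infinity."""
    n, d = Z.shape
    return 1.06 * Z.std() * n ** (-1.0 / (4 + d))

def sqrt_decay(Z):
    """Faster decay h ~ n^(-1/(1+d)) for highly non-Gaussian data; it still
    satisfies (11) since n * h_n^d ~ n^(1/(1+d)) -> infinity."""
    n, d = Z.shape
    return 1.06 * Z.std() * n ** (-1.0 / (1 + d))
```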
" }, { "heading": "5 EXPERIMENTS", "text": "This section provides a detailed experimental analysis of the proposed method, aiming to empirically validate the theoretical arguments outlined previously and to investigate the practical efficacy of our regularization approach. In all experiments we use Gaussian perturbations of the data, i.e., $K(\xi) = \mathcal{N}(0, I)$. Since one of the key features of our noise regularization scheme is that it is agnostic to the choice of model, we evaluate its performance on three different neural network based CDE models: Mixture Density Networks (MDN) (Bishop, 1994), Kernel Mixture Networks (KMN) (Ambrogioni et al., 2017) and Normalizing Flows Networks (NFN) (Rezende & Mohamed, 2015; Trippe & Turner, 2018). In our experiments, we consider both simulated and real-world data sets. In particular, we simulate data from a 4-dimensional Gaussian Mixture ($d_x = 2$, $d_y = 2$) and a Skew-Normal distribution whose parameters are functionally dependent on $x$ ($d_x = 1$, $d_y = 1$). In terms of real-world data, we use the following three data sources:

Euro Stoxx: Daily stock-market returns of the Euro Stoxx 50 index conditioned on various stock return factors relevant in finance ($d_x = 14$, $d_y = 1$).

NYC Taxi: Drop-off locations of Manhattan taxi trips conditioned on the pickup location, weekday and time ($d_x = 6$, $d_y = 2$).

UCI datasets: Standard data sets from the UCI machine learning repository (Dua & Graff, 2017), in particular Boston Housing ($d_x = 13$, $d_y = 1$), Concrete ($d_x = 8$, $d_y = 1$) and Energy ($d_x = 9$, $d_y = 1$).

The reported scores are test log-likelihoods, averaged over at least 5 random seeds, alongside the respective standard deviation. For further details regarding the data sets and simulated data, we refer to Appendix E. The experiment data and code are available at TODO" }, { "heading": "5.1 NOISE INTENSITY SCHEDULES", "text": "We complement the discussion in Section 4.3 with an empirical investigation of different schedules of $h_n$. In particular, we compare a) the rule of thumb $h_n \propto n^{-\frac{1}{4+d}}$, b) a square root decay schedule $h_n \propto n^{-\frac{1}{1+d}}$, c) a constant bandwidth $h_n = \text{const.} \in (0, \infty)$, and d) no noise regularization, i.e., $h_n = 0$. Figure 2 plots the respective test log-likelihoods against an increasing training set size $n$ for the two simulated densities, Gaussian Mixture and Skew Normal.

First, we observe that bandwidth rates that conform with the decay conditions converge in performance to the non-regularized maximum likelihood estimator (red) as $n$ becomes large. This reflects the theoretical result of Theorem 1. Second, a bandwidth fixed across $n$ (green), violating (11), imposes asymptotic bias and thus saturates in performance far earlier than its counterparts. Third, as hypothesized, the relatively slow decay of $h_n$ through the rule of thumb works better for data distributions that are more similar to a Gaussian, in our case the Skew Normal distribution. In contrast, the highly non-Gaussian data from the Gaussian Mixture requires faster decay rates such as the square root decay schedule. Most importantly, noise regularization substantially improves the estimator's performance when only little training data is available." }, { "heading": "5.2 REGULARIZATION COMPARISON", "text": "We now investigate how the proposed noise regularization scheme compares to classical regularization techniques. In particular, we consider an $l_1$- and $l_2$-penalty on the neural network weights as regularization terms, the weight decay technique of Loshchilov & Hutter (2019)1, as well as a Bayesian neural network (Neal, 2012) trained with variational inference using a Gaussian prior and posterior (Blei et al., 2017).

1Note that an $l_2$ regularizer and weight decay are not equivalent since we use the adaptive learning rate technique Adam. 
See Loshchilov & Hutter (2019) for details.\nyields similar performance across the different CDE models while the other regularizers vary greatly in their performance depending on the different models. This reflects the fact that noise regularization is agnostic to the parameterization of the CDE model while regularizers in the parameter space are dependent on the internal structure of the model. Most importantly, noise regularization performs well across all models and sample sizes. In the great majority of configurations it outperforms the other methods. Especially when little training data is available, noise regularization ensures a moderate test error while the other approaches mostly fail to do so.\nNext, we consider real world data sets. Since now the amount of data we can use for hyper-parameter selection, training and evaluation is limited, we use 5-fold cross-validation to select the parameters for each regularization method. The test log-likelihoods, reported in Table 1, are averages over 3 different train/test splits and 5 seeds each for initializing the neural networks. The held out test set amounts to 20% of the overall data sets. Consistent with the results of the simulation study, noise regularization outperforms the other methods across the great majority of data sets and CDE models." }, { "heading": "5.3 CONDITIONAL DENSITY ESTIMATOR BENCHMARK STUDY", "text": "We benchmark neural network based density estimators against state-of-the art CDE approaches. While neural networks are the obvious choice when a large amount of training data is available, we pose the questions how such estimators compete against well-established non-parametric methods in small data regimes. In particular, we compare to the three following CDE methods:\nConditional Kernel Density Estimation (CKDE). Non-parametric method that forms a KDE of both p(x, y) and p(x) to compute its estimate as p̂(y|x) := p̂(x, y)/p̂(x) (Li & Racine, 2007).\n-Neighborhood kernel density estimation (NKDE). Non-parametric method that considers only a local subset of training points to form a density estimate.\nLeast-Squares Conditional Density Estimation (LSCDE). Semi-parametric estimator that computes the conditional density as linear combination of fixed kernels (Sugiyama & Takeuchi, 2010).\nFor the kernel density estimation based methods CKDE and NKDE, we perform bandwidth selection via the rule of thumb (R.O.T) (Silverman, 1982; Sheather & Jones, 1991) and via maximum likelihood leave-one-out cross-validation (CV-ML) (Rudemo, 1982; Hall et al., 1992). In case of LSCDE, MDN, KMN and NFN, the respective hyper-parameters are selected via 5-fold cross-validation grid search on the training set. Note that, in contrast to Section 5.2 which focuses on regularization parameters, the grid search here extends to more hyper-parameters. The respective test log-likelihood scores are listed in Table 2. For the majority of data sets, all three neural network based methods outperform all of the non- and semi-parametric methods. Perhaps surprisingly, it can be seen that, when properly regularized, neural network based CDE works well even when training data is scarce, such as in case of the Boston Housing data set." }, { "heading": "6 CONCLUSION", "text": "This paper addresses conditional density estimation with high-capacity models. In particular, we propose to add small random perturbations to the data during training. 
We demonstrate that the resulting noise regularization method corresponds to a smoothness regularization and prove its asymptotic consistency. The experimental results underline the effectiveness of the proposed method, demonstrating that it consistently outperforms other regularization methods across various conditional density models and data sets. This makes neural network based CDE a preferable method, even when only little training data is available. While we assess the estimator performance in terms of the test log-likelihood, an interesting question for future research is whether the noise regularization also improves the respective uncertainty estimates for downstream tasks such as safe control and decision making." }, { "heading": "A DERIVATION SMOOTHNESS REGULARIZATION", "text": "Let $l(\mathcal{D})$ be a loss function over a set of data points $\mathcal{D} = \{z_1, ..., z_n\}$, which can be partitioned into a sum of losses corresponding to each data point $z_i$:
$l(\mathcal{D}) = \sum_{i=1}^{n} l(z_i)$ (15)
Also, let each $z_i$ be perturbed by a random noise vector $\xi \sim K(\xi)$ with zero mean and i.i.d. elements, i.e.
$\mathbb{E}_{\xi \sim K(\xi)}[\xi] = 0$ and $\mathbb{E}_{\xi \sim K(\xi)}[\xi \xi^\top] = h^2 I$ (16)
The resulting loss $l(z_i + \xi)$ can be approximated by a second-order Taylor expansion around $z_i$:
$l(z_i + \xi) = l(z_i) + \xi^\top \nabla_z l(z)|_{z_i} + \tfrac{1}{2}\, \xi^\top \nabla_z^2 l(z)|_{z_i}\, \xi + \mathcal{O}(\xi^3)$ (17)
Assuming that the noise $\xi$ is small in magnitude, $\mathcal{O}(\xi^3)$ may be neglected. The expected loss under $K(\xi)$ follows directly from (17):
$\mathbb{E}_{\xi \sim K(\xi)}[l(z_i + \xi)] = l(z_i) + \mathbb{E}_{\xi \sim K(\xi)}\big[\xi^\top \nabla_z l(z)|_{z_i}\big] + \tfrac{1}{2}\, \mathbb{E}_{\xi \sim K(\xi)}\big[\xi^\top \nabla_z^2 l(z)|_{z_i}\, \xi\big]$ (18)
Using the assumptions about $\xi$ in (16), we can simplify (18) as follows:
$\mathbb{E}_{\xi \sim K(\xi)}[l(z_i + \xi)] = l(z_i) + \mathbb{E}_{\xi \sim K(\xi)}[\xi]^\top \nabla_z l(z)|_{z_i} + \tfrac{1}{2}\, \mathbb{E}_{\xi \sim K(\xi)}\big[\xi^\top \nabla_z^2 l(z)|_{z_i}\, \xi\big]$ (19)
$= l(z_i) + \tfrac{1}{2}\, \mathbb{E}_{\xi \sim K(\xi)}\big[\xi^\top H^{(i)} \xi\big]$ (20)
$= l(z_i) + \tfrac{1}{2}\, \mathbb{E}_{\xi \sim K(\xi)}\Big[\sum_j \sum_k \xi_j \xi_k\, \frac{\partial^2 l(z)}{\partial z^{(j)} \partial z^{(k)}}\Big|_{z_i}\Big]$ (21)
$= l(z_i) + \tfrac{1}{2} \sum_j \mathbb{E}_\xi\big[\xi_j^2\big]\, \frac{\partial^2 l(z)}{\partial z^{(j)} \partial z^{(j)}}\Big|_{z_i} + \tfrac{1}{2} \sum_j \sum_{k \neq j} \mathbb{E}_\xi\big[\xi_j \xi_k\big]\, \frac{\partial^2 l(z)}{\partial z^{(j)} \partial z^{(k)}}\Big|_{z_i}$ (22)
$= l(z_i) + \frac{h^2}{2} \sum_j \frac{\partial^2 l(z)}{\partial z^{(j)} \partial z^{(j)}}\Big|_{z_i}$ (23)
$= l(z_i) + \frac{h^2}{2}\, \mathrm{tr}(H^{(i)})$ (24)
In that, $l(z_i)$ is the loss without noise and $H^{(i)} = \nabla_z^2 l(z)|_{z_i}$ is the Hessian of $l$ at $z_i$. With $z^{(j)}$ we denote the elements of the column vector $z$.
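The identity (24) is easy to verify numerically. The following sketch (ours, not from the paper's code) compares a Monte-Carlo estimate of $\mathbb{E}_\xi[l(z+\xi)]$ against the second-order prediction $l(z) + \frac{h^2}{2}\mathrm{tr}(H^{(i)})$ for an arbitrarily chosen smooth loss:

```python
import torch
from torch.autograd.functional import hessian

torch.manual_seed(0)

def l(z):
    # a simple smooth per-example loss; z has shape (..., 2)
    return torch.sin(z[..., 0]) + 0.5 * z[..., 1]**2 + 0.1 * (z[..., 0] * z[..., 1])**2

z = torch.tensor([0.7, -1.2])
h = 0.05  # noise std (bandwidth); small, so the O(xi^3) remainder is negligible

# Monte-Carlo estimate of E_xi[ l(z + xi) ] with xi ~ N(0, h^2 I), cf. Eq. (18)
xi = h * torch.randn(500_000, 2)
mc_estimate = l(z + xi).mean()

# Second-order prediction from Eq. (24): l(z) + (h^2 / 2) * tr(Hessian at z)
H = hessian(l, z)
prediction = l(z) + 0.5 * h**2 * torch.trace(H)

print(mc_estimate.item(), prediction.item())  # the two values agree closely
```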
\nB VANILLA CONDITIONAL MLE OBJECTIVE IS UNIFORMLY CONSISTENT\nThe objective function corresponding to a conditional M-projection is
$l(\theta) = -\mathbb{E}_{p(x,y)}\big[\log \hat{f}_\theta(y|x)\big]$ (25)
The sample equivalent:
$\hat{l}_n(\theta) = -\frac{1}{n} \sum_{i=1}^{n} \log \hat{f}_\theta(y_i|x_i)$ , $(x_i, y_i) \overset{\text{i.i.d.}}{\sim} P(X, Y)$ (26)
Corollary 1. Let $\Theta$ be a compact set and $\hat{f}_\theta : \mathbb{R}^l \times \mathbb{R}^m \times \Theta \to (0, \infty)$ continuous in $\theta$ for all $(x, y) \in \mathbb{R}^l \times \mathbb{R}^m$, such that $\mathbb{E}_{p(x,y)}\big[\sup_{\theta \in \Theta} \log \hat{f}_\theta(y|x)\big] < \infty$. Then, as $n \to \infty$, we have
$\sup_{\theta \in \Theta} \big|\hat{l}_n(\theta) - l(\theta)\big| \xrightarrow{a.s.} 0$ (27)
Proof. The corollary follows directly from the uniform law of large numbers." }, { "heading": "C CONSISTENCY PROOFS", "text": "Lemma 1. Suppose for some $\epsilon > 0$ there exists a constant $B_p^{(\epsilon)}$ such that
$\int |\log \hat{f}_\theta(z)|^{1+\epsilon}\, p(z)\, dz \le B_p^{(\epsilon)} < \infty \quad \forall \theta \in \Theta$ (28)
and there exists an $n_0$ such that for all $n > n_0$ there exists a constant $B_{\hat{q}}^{(\epsilon)}$ such that
$\int |\log \hat{f}_\theta(z)|^{1+\epsilon}\, \hat{q}_n^{(h_n)}(z)\, dz \le B_{\hat{q}}^{(\epsilon)} < \infty \quad \forall \theta \in \Theta$ (29)
almost surely. Then, the inequality
$\sup_{\theta \in \Theta} \big|l_n^{(h)}(\theta) - l(\theta)\big| \le C \Big(\int |\hat{q}_n^{(h)}(z) - p(z)|\, dz\Big)^{\frac{\epsilon}{1+\epsilon}}$ (30)
where $C$ is a constant, holds with probability 1 for all $n > n_0$.\nProof of Lemma 1. Using Hölder's inequality and the nonnegativity of $p$ and $\hat{q}_n^{(h)}$, we obtain
$\big|l_n^{(h)}(\theta) - l(\theta)\big| = \Big|\int \log \hat{f}_\theta(z)\,\big(\hat{q}_n^{(h)}(z) - p(z)\big)\, dz\Big| \le \int |\log \hat{f}_\theta(z)|\, |\hat{q}_n^{(h)}(z) - p(z)|\, dz$
$= \int |\log \hat{f}_\theta(z)|\, |\hat{q}_n^{(h)}(z) - p(z)|^{\frac{1}{1+\epsilon}}\, |\hat{q}_n^{(h)}(z) - p(z)|^{\frac{\epsilon}{1+\epsilon}}\, dz$
$\le \Big(\int |\log \hat{f}_\theta(z)|^{1+\epsilon}\, |\hat{q}_n^{(h)}(z) - p(z)|\, dz\Big)^{\frac{1}{1+\epsilon}} \Big(\int |\hat{q}_n^{(h)}(z) - p(z)|\, dz\Big)^{\frac{\epsilon}{1+\epsilon}}$
$\le \Big(\int |\log \hat{f}_\theta(z)|^{1+\epsilon}\, \hat{q}_n^{(h)}(z)\, dz + \int |\log \hat{f}_\theta(z)|^{1+\epsilon}\, p(z)\, dz\Big)^{\frac{1}{1+\epsilon}} \Big(\int |\hat{q}_n^{(h)}(z) - p(z)|\, dz\Big)^{\frac{\epsilon}{1+\epsilon}}$
Employing the regularity conditions (28) and (29) and writing $C^{(\epsilon)} = \big(B_p^{(\epsilon)} + B_{\hat{q}}^{(\epsilon)}\big)^{\frac{1}{1+\epsilon}}$, it follows that $\exists n_0$ such that $\forall n > n_0$
$\sup_{\theta \in \Theta} \big|l_n^{(h)}(\theta) - l(\theta)\big| \le \big(B_p^{(\epsilon)} + B_{\hat{q}}^{(\epsilon)}\big)^{\frac{1}{1+\epsilon}} \Big(\int |\hat{q}_n^{(h)}(z) - p(z)|\, dz\Big)^{\frac{\epsilon}{1+\epsilon}} = C^{(\epsilon)} \Big(\int |\hat{q}_n^{(h)}(z) - p(z)|\, dz\Big)^{\frac{\epsilon}{1+\epsilon}}$
with probability 1.\nLemma 1 states regularity conditions ensuring that the expectations in $l_n^{(h)}(\theta)$ and $l(\theta)$ are well-behaved in the limit. In particular, (28) and (29) imply uniform and absolute integrability of the log-likelihoods under the respective probability measures induced by $p$ and $\hat{q}_n^{(h)}$. Since we are interested in the asymptotic behavior, it is sufficient for (29) to hold for $n$ large enough with probability 1.\nInequality (30) shows that we can make $|l_n^{(h)}(\theta) - l(\theta)|$ small by reducing the $l_1$-distance between the true density $p$ and the kernel density estimate $\hat{q}_n^{(h)}$. There already exists a vast body of literature discussing how to properly choose the kernel $K$ and the bandwidth sequence $(h_n)$ so that $\int |\hat{q}_n^{(h_n)}(z) - p(z)|\, dz \to 0$. We employ the results in Devroye (1983) for our purposes, leading us to Proposition 1.\nProof of Proposition 1. Let $A$ denote the event that $\exists n_0\, \forall n > n_0$ inequality (30) holds for some constant $C^{(\epsilon)}$. From our regularity assumptions it follows that $\mathbb{P}(A^c) = 0$. Given that $A$ holds, we just have to show that $\int |\hat{q}_n^{(h)}(z) - p(z)|\, dz \xrightarrow{a.s.} 0$. Then, the upper bound in (30) tends to zero and we can conclude our proposition.\nFor any $\delta > 0$ let $B_n$ denote the event
$\int |\hat{q}_n^{(h)}(z) - p(z)|\, dz \le \delta$ (31)
wherein $\hat{q}_n^{(h)}(z)$ is a kernel density estimate obtained from $n$ samples of $p(z)$. Under the conditions in (11) we can apply Theorem 1 of Devroye (1983), obtaining an upper bound on the probability that (31) does not hold, i.e., $\exists u, m_0$ such that $\mathbb{P}(B_n^c) \le e^{-un}$ for all $n > m_0$. Since we need both $A$ and $B_n$ to hold as $n \to \infty$, we consider the intersection of the events $(A \cap B_n)$. Using a union bound argument, it follows that $\exists k_0$ such that $\forall n > k_0$: $\mathbb{P}((A \cap B_n)^c) \le \mathbb{P}(A^c) + \mathbb{P}(B_n^c) = 0 + e^{-un} = e^{-un}$. Note that we can simply choose $k_0 = \max\{n_0, m_0\}$ for this to hold. Hence, $\sum_{n=k_0+1}^{\infty} \mathbb{P}\big((A \cap B_n)^c\big) < \sum_{n=1}^{\infty} e^{-un} = \frac{1}{e^u - 1} < \infty$, and by the Borel–Cantelli lemma we can conclude that
$\lim_{n \to \infty} \sup_{\theta \in \Theta} \big|l_n^{(h)}(\theta) - l(\theta)\big| = 0$ (32)
holds with probability 1.\nProof of Theorem 1. The inequality in (10) implies that for any $n > n_0$,
$\lim_{r \to \infty} \sup_{\theta \in \Theta} \big|\hat{l}_{n,r}^{(h)}(\theta) - l(\theta)\big| \le \lim_{r \to \infty} \sup_{\theta \in \Theta} \big|\hat{l}_{n,r}^{(h)}(\theta) - l_n^{(h)}(\theta)\big| + \sup_{\theta \in \Theta} \big|l_n^{(h)}(\theta) - l(\theta)\big|$ (33)
Let $n > n_0$ be fixed but arbitrary and denote
$J_{n,r} = \sup_{\theta \in \Theta} \big|\hat{l}_{n,r}^{(h)}(\theta) - l_n^{(h)}(\theta)\big|$ , $r \in \mathbb{N},\ n > n_0$ (34)
It is important to note that $J_{n,r}$ is a random variable that depends on the samples $Z^{(n)} = (Z_1, ..., Z_n)$ as well as on the randomness inherent in Algorithm 1. We define $I^{(r)} = (I_1, ..., I_r)$ as the indices sampled uniformly from $\{1, ..., n\}$ and $\Xi^{(r)} = (\xi_1, ..., \xi_r)$ as the sequence of perturbation vectors sampled from $K$.
Let $P(Z^{(n)})$, $P(I^{(r)})$ and $P(\Xi^{(r)})$ be the probability measures of the respective random sequences.\nIf we fix $Z^{(n)}$ to be equal to an arbitrary realization $z^{(n)}$, then $\hat{q}_n^{(h)}$ is fixed and we can treat $J_{n,r}$ as the regular difference between a sample estimate and its expectation under $\hat{q}_n^{(h)}$. By the regularity condition (29), the compactness of $\Theta$ and the continuity of $f_\theta$ in $\theta$, we can invoke the uniform law of large numbers to show that
$\lim_{r \to \infty} J_{n,r} = \lim_{r \to \infty} \sup_{\theta \in \Theta} \big|\hat{l}_{n,r}^{(h)}(\theta) - l_n^{(h)}(\theta)\big| = 0$ (35)
with probability 1.\nNow we want to show that (35) also holds with probability 1 for random training samples $Z^{(n)}$. First, we write $J_{n,r}$ as a deterministic function of random variables:
$J_{n,r} = J(Z^{(n)}, I^{(r)}, \Xi^{(r)})$ (36)
This allows us to restate the result in (35) as follows:
$\mathbb{P}_{I^{(r)}, \Xi^{(r)}}\big(\forall \delta > 0\ \exists r_0\ \forall r > r_0 : J(Z^{(n)} = z^{(n)}, I^{(r)}, \Xi^{(r)}) < \delta\big) = \int\!\!\int \mathbb{1}\big(\forall \delta > 0\ \exists r_0\ \forall r > r_0 : J(Z^{(n)} = z^{(n)}, I^{(r)}, \Xi^{(r)}) < \delta\big)\, dP(\Xi^{(r)})\, dP(I^{(r)}) = 1$ (37)
In that, $\mathbb{1}(A)$ denotes the indicator function, which returns 1 if $A$ is true and 0 otherwise. Next, we consider the probability that the convergence in (35) holds for random $Z^{(n)}$:
$\mathbb{P}_{Z^{(n)}, I^{(r)}, \Xi^{(r)}}\big(\forall \delta > 0\ \exists r_0\ \forall r > r_0 : J(Z^{(n)}, I^{(r)}, \Xi^{(r)}) < \delta\big) = \int\!\!\int\!\!\int \mathbb{1}\big(\forall \delta > 0\ \exists r_0\ \forall r > r_0 : J(Z^{(n)}, I^{(r)}, \Xi^{(r)}) < \delta\big)\, dP(\Xi^{(r)})\, dP(I^{(r)})\, dP(Z^{(n)}) = \int dP(Z^{(n)}) \underbrace{\Big(\int\!\!\int \mathbb{1}\big(\forall \delta > 0\ \exists r_0\ \forall r > r_0 : J(Z^{(n)}, I^{(r)}, \Xi^{(r)}) < \delta\big)\, dP(\Xi^{(r)})\, dP(I^{(r)})\Big)}_{=1} = 1$
Note that we can move $dP(Z^{(n)})$ outside of the inner integrals, since $Z^{(n)}$ is independent of $I^{(r)}$ and $\Xi^{(r)}$. Hence, we can conclude that (35) also holds for random training data with probability 1; we denote this event as $A$.\nFrom Proposition 1 we know that
$\lim_{n \to \infty} \sup_{\theta \in \Theta} \big|l_n^{(h)}(\theta) - l(\theta)\big| = 0$ (38)
with probability 1. We denote the event that (38) holds as $B$. Since $P(A^c) = P(B^c) = 0$, we can use a union bound argument to show that $P(A \cap B) = 1$. From (35) and (33) it follows that for any $n > n_0$,
$\lim_{r \to \infty} \sup_{\theta \in \Theta} \big|\hat{l}_{n,r}^{(h)}(\theta) - l(\theta)\big| \le \sup_{\theta \in \Theta} \big|l_n^{(h)}(\theta) - l(\theta)\big|$ (39)
with probability 1. Finally, we combine this result with (38), obtaining that
$\lim_{n \to \infty} \lim_{r \to \infty} \sup_{\theta \in \Theta} \big|\hat{l}_{n,r}^{(h)}(\theta) - l(\theta)\big| = 0$ (40)
almost surely, which concludes the proof.\nProof of Theorem 2. The proof follows the argument used in Theorem 1 of White (1989). In the following, we assume that (13) holds. From Theorem 1 we know that this is the case with probability 1. Accordingly, we only consider realizations of our training data $Z^{(n)}$ and noise samples $I^{(r)}, \Xi^{(r)}$ for which the convergence in (13) holds (see the proof of Theorem 1 for details on this notation).\nFor such realizations, let $(\hat{\theta}_{n,r}^{(h)})$ be minimizers of $\hat{l}_{n,r}^{(h)}$. Also let $(n_i)_i$ and, for any $i$, $(r_{i,j})_j$ be increasing sequences of positive integers. Define $v_{i,j} := \hat{\theta}_{n_i, r_{i,j}}^{(h)}$ and $\mu_{i,j}(\theta) := \hat{l}_{n_i, r_{i,j}}^{(h)}(\theta)$. Due to the compactness of $\Theta$ and the Bolzano–Weierstrass property thereof, there exists a limit point $\theta_0 \in \Theta$ and increasing subsequences $(i_k)_k, (j_k)_k$ such that $v_{i_k, j_k} \to \theta_0$ as $k \to \infty$. From the triangle inequality, it follows that for any $\epsilon > 0$ there exists $k_0$ such that $\forall k > k_0$
$|\mu_{i_k,j_k}(v_{i_k,j_k}) - l(\theta_0)| \le |\mu_{i_k,j_k}(v_{i_k,j_k}) - l(v_{i_k,j_k})| + |l(v_{i_k,j_k}) - l(\theta_0)| < 2\epsilon$ (41)
given the convergence established in Theorem 1 and the continuity of $l$ in $\theta$. Next, the result above is extended to
$l(\theta_0) - l(\theta) = [l(\theta_0) - \mu_{i_k,j_k}(v_{i_k,j_k})] + [\mu_{i_k,j_k}(v_{i_k,j_k}) - \mu_{i_k,j_k}(\theta)] + [\mu_{i_k,j_k}(\theta) - l(\theta)] \le 3\epsilon$ (42)
which again holds for $k$ large enough. This is due to (41), to $\mu_{i_k,j_k}(v_{i_k,j_k}) - \mu_{i_k,j_k}(\theta) \le 0$ since $v_{i_k,j_k}$ is the minimizer of $\mu_{i_k,j_k}$, and to $\mu_{i_k,j_k}(\theta) - l(\theta) < \epsilon$ by Theorem 1. Because $\epsilon$ can be made arbitrarily small, $l(\theta_0) \le l(\theta)$ as $k \to \infty$.
Because $\theta \in \Theta$ is arbitrary, $\theta_0$ must be in $\Theta^*$. In turn, since $(n_i)_i, (r_{i,j})_j$ and $(i_k)_k, (j_k)_k$ were chosen arbitrarily, every limit point of a sequence $(v_{i_k,j_k})_k$ must be in $\Theta^*$.\nIn the final step, we prove the theorem by contradiction. Suppose that (14) does not hold. In this case, there must exist an $\epsilon > 0$ and sequences $(n_i)_i, (r_{i,j})_j$ and $(i_k)_k, (j_k)_k$ such that $\|v_{i_k,j_k} - \bar{\theta}\|_2 > \epsilon$ for all $k$ and all $\bar{\theta} \in \Theta^*$. However, by the previous argument, the limit point of any such sequence $(v_{i_k,j_k})_k$ must be in $\Theta^*$. That is a contradiction to $\|v_{i_k,j_k} - \bar{\theta}\|_2 > \epsilon\ \forall k, \bar{\theta} \in \Theta^*$. Since the random sequences $Z^{(n)}, I^{(r)}, \Xi^{(r)}$ were chosen from a set with probability mass 1, we can conclude our proposition that
$\lim_{n \to \infty} \lim_{r \to \infty} d(\hat{\theta}_{n,r}^{(h)}, \Theta^*) = 0$
almost surely.\nDiscussion of Theorem 2. Note that, similar to $\theta^*$, $\hat{\theta}_{n,r}^{(h)}$ does not have to be unique. In case there are multiple minimizers of $\hat{l}_{n,r}^{(h)}$, we can choose one of them arbitrarily and the proof of the theorem still holds. Theorem 2 considers global optimizers over a set of parameters $\Theta$, which may not be attainable in practical settings. However, the application of the theorem to the context of local optimization is straightforward when $\Theta$ is chosen as a compact neighborhood of a local minimum $\theta^*$ of $l$ (Holmstrom & Koistinen, 1992b). If we set $\Theta^* = \{\theta^*\}$ and restrict the minimization of $\hat{l}_{n,r}^{(h)}$ to the local region, then $\hat{\theta}_{n,r}^{(h)}$ converges to $\Theta^*$ as $n, r \to \infty$ in the sense of Theorem 2." }, { "heading": "D CONDITIONAL DENSITY ESTIMATION MODELS", "text": "" }, { "heading": "D.1 MIXTURE DENSITY NETWORK", "text": "Mixture Density Networks (MDNs) combine conventional neural networks with a mixture density model for the purpose of estimating conditional distributions $p(y|x)$ (Bishop, 1994). In particular, the parameters of the unconditional mixture distribution $p(y)$ are outputted by the neural network, which takes the conditional variable $x$ as input.\nFor our purpose, we employ a Gaussian Mixture Model (GMM) with diagonal covariance matrices as the density model. The conditional density estimate $\hat{p}(y|x)$ follows as a weighted sum of $K$ Gaussians:
$\hat{p}(y|x) = \sum_{k=1}^{K} w_k(x; \theta)\, \mathcal{N}\big(y\,|\,\mu_k(x; \theta), \sigma_k^2(x; \theta)\big)$ (43)
wherein $w_k(x; \theta)$ denotes the weight, $\mu_k(x; \theta)$ the mean and $\sigma_k^2(x; \theta)$ the variance of the $k$-th Gaussian component. All GMM parameters are governed by the neural network with parameters $\theta$ and input $x$. The mixing weights $w_k(x; \theta)$ must resemble a categorical distribution, i.e., it must hold that $\sum_{k=1}^{K} w_k(x; \theta) = 1$ and $w_k(x; \theta) \ge 0\ \forall k$. To satisfy these conditions, the softmax non-linearity is used for the output neurons corresponding to $w_k(x; \theta)$. Similarly, the standard deviations $\sigma_k(x)$ must be positive, which is ensured by a softplus non-linearity. Since the component means $\mu_k(x; \theta)$ are not subject to such restrictions, we use a linear output layer without non-linearity for the respective output neurons.\nFor the experiments in 5.2 and 5.1, we set $K = 10$ and use a neural network with two hidden layers of size 32.
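A minimal PyTorch sketch of such an MDN head is given below (our illustration, not the authors' released code; the class and method names are ours, the layer sizes follow the text, and the mixture log-likelihood uses diagonal covariances as in Eq. (43)):

```python
import torch
import torch.nn as nn

class MDN(nn.Module):
    """Mixture Density Network: maps x to the parameters of a K-component
    Gaussian mixture over y (diagonal covariances), as in Eq. (43)."""
    def __init__(self, x_dim, y_dim, K=10, hidden=32):
        super().__init__()
        self.K, self.y_dim = K, y_dim
        self.body = nn.Sequential(nn.Linear(x_dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, hidden), nn.Tanh())
        self.w_head = nn.Linear(hidden, K)              # mixture weights (softmax)
        self.mu_head = nn.Linear(hidden, K * y_dim)     # component means (linear)
        self.sigma_head = nn.Linear(hidden, K * y_dim)  # std devs (softplus)

    def log_prob(self, y, x):
        h = self.body(x)
        log_w = torch.log_softmax(self.w_head(h), dim=-1)             # (B, K)
        mu = self.mu_head(h).view(-1, self.K, self.y_dim)             # (B, K, d_y)
        sigma = nn.functional.softplus(self.sigma_head(h)).view(-1, self.K, self.y_dim)
        comp = torch.distributions.Normal(mu, sigma)
        log_comp = comp.log_prob(y.unsqueeze(1)).sum(-1)              # (B, K)
        return torch.logsumexp(log_w + log_comp, dim=-1)              # (B,)
```

Training then minimizes `-model.log_prob(y, x).mean()`, optionally combined with the noise regularization scheme described earlier.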
" }, { "heading": "D.1.1 KERNEL MIXTURE NETWORK", "text": "While MDNs resemble a purely parametric conditional density model, a closely related approach, the Kernel Mixture Network (KMN), combines non-parametric and parametric elements (Ambrogioni et al., 2017). Similar to MDNs, a mixture density model of $\hat{p}(y)$ is combined with a neural network which takes the conditional variable $x$ as input. However, the neural network only controls the weights of the mixture components, while the component centers and scales are fixed w.r.t. $x$. For each of the kernel centers, $M$ different scale/bandwidth parameters $\sigma_m$ are chosen. As for MDNs, we employ Gaussians as mixture components, wherein the scale parameter directly coincides with the standard deviation.\nLet $K$ be the number of kernel centers $\mu_k$ and $M$ the number of different kernel scales $\sigma_m$. The KMN conditional density estimate reads as follows:
$\hat{p}(y|x) = \sum_{k=1}^{K} \sum_{m=1}^{M} w_{k,m}(x; \theta)\, \mathcal{N}(y\,|\,\mu_k, \sigma_m^2)$ (44)
As previously, the weights $w_{k,m}$ correspond to a softmax function. The $M$ scale parameters $\sigma_m$ are learned jointly with the neural network parameters $\theta$. The centers $\mu_k$ are initially chosen by k-means clustering on the $\{y_i\}_{i=1}^{n}$ in the training data set. Overall, the KMN model is more restrictive than the MDN, as the locations and scales of the mixture components are fixed during inference and cannot be controlled by the neural network. However, due to this reduced flexibility, KMNs are less prone to over-fitting than MDNs.\nFor the experiments in 5.2 and 5.1, we set $K = 50$ and $M = 2$. The respective neural network has two hidden layers of size 32." }, { "heading": "D.2 NORMALIZING FLOW NETWORK", "text": "The Normalizing Flow Network (NFN) is similar to the MDN and KMN in that a neural network takes the conditional variable $x$ as its input and outputs the parameters of the distribution over $y$. For the NFN, this distribution is given by a Normalizing Flow (Rezende & Mohamed, 2015). It works by transforming a simple base distribution, and an accordingly distributed random variable $Z_0$, through a series of invertible, parametrized mappings $f = f_N \circ \cdots \circ f_1$ into a successively more complex distribution $p(f(Z_0))$. The PDF of samples $z_N \sim p(f(Z_0))$ can be evaluated using the change-of-variables formula:
$\log p(z_N) = \log p(z_0) - \sum_{n=1}^{N} \log \Big|\det \frac{\partial f_n}{\partial z_{n-1}}\Big|$ (45)
The Normalizing Flows from Rezende & Mohamed (2015) were introduced in the context of posterior estimation in variational inference. They are optimized for fast sampling, while likelihood evaluation for externally provided data is comparatively slow. To make them useful for CDE, we invert the direction of the flows, defining a mapping from the transformed distribution $p(Z_N)$ to the base distribution $p(Z_0)$ by setting $\hat{f}_i^{-1}(z_i) = f_i(z_i)$.\nWe experimented with three types of flows: planar flows, radial flows as parametrized by Trippe & Turner (2018), and affine flows $f^{-1}(z) = \exp(a)z + b$. We found that one affine flow combined with multiple radial flows performs favourably in most settings.\nFor the experiments in 5.2 and 5.1, we used a standard Gaussian as the base distribution, transformed through one affine flow and ten radial flows. The respective neural network has two hidden layers of size 32.
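To make the change-of-variables formula (45) concrete, here is a minimal sketch of the log-density under a single affine flow (our illustration; in the NFN, the parameters a and b would be produced by the conditioning network):

```python
import torch

def affine_flow_log_prob(y, a, b):
    """Log-density of y under a single affine flow whose inverse
    (data-to-base direction) is z0 = exp(a) * y + b, with a standard
    Gaussian base density; this is Eq. (45) read in the inverse direction.
    y, a, b: tensors of shape (batch, dim)."""
    z0 = torch.exp(a) * y + b                  # map data back to the base space
    base = torch.distributions.Normal(0.0, 1.0)
    log_base = base.log_prob(z0).sum(-1)       # log p(z0) under the base density
    log_det = a.sum(-1)                        # log|det dz0/dy| = sum_i a_i
    return log_base + log_det
```

Radial and planar flows follow the same pattern, each contributing its own log-determinant term to the sum in (45).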
" }, { "heading": "E SIMULATED DENSITIES AND DATASETS", "text": "" }, { "heading": "E.1 SKEWNORMAL", "text": "The data generating process $(x, y) \sim p(x, y)$ resembles a bivariate joint distribution, wherein $x \in \mathbb{R}$ follows a normal distribution and $y \in \mathbb{R}$ a conditional skew-normal distribution (Anděl et al., 1984). The parameters $(\xi, \omega, \alpha)$ of the skew-normal distribution are functionally dependent on $x$. Specifically, the functional dependencies are the following:
$x \sim \mathcal{N}\big(\cdot\,\big|\,\mu = 0, \sigma = \tfrac{1}{2}\big)$ (46)
$\xi(x) = a x + b$ , $a, b \in \mathbb{R}$ (47)
$\omega(x) = c x^2 + d$ , $c, d \in \mathbb{R}$ (48)
$\alpha(x) = \alpha_{low} + \frac{1}{1 + e^{-x}}\,(\alpha_{high} - \alpha_{low})$ (49)
$y \sim \mathrm{SkewNormal}\big(\xi(x), \omega(x), \alpha(x)\big)$ (50)
Accordingly, the conditional probability density $p(y|x)$ corresponds to the skew-normal density function:
$p(y|x) = \frac{2}{\omega(x)}\, \mathcal{N}\Big(\frac{y - \xi(x)}{\omega(x)}\Big)\, \Phi\Big(\alpha(x)\, \frac{y - \xi(x)}{\omega(x)}\Big)$ (51)
In that, $\mathcal{N}(\cdot)$ denotes the density and $\Phi(\cdot)$ the cumulative distribution function of the standard normal distribution. The shape parameter $\alpha(x)$ controls the skewness and kurtosis of the distribution.\nWe set $\alpha_{low} = -4$ and $\alpha_{high} = 0$, giving $p(y|x)$ a negative skewness that decreases as $x$ increases. This distribution allows us to evaluate the performance of the density estimators in the presence of skewness, a phenomenon that we often observe in financial market variables. Figure 4a illustrates the conditional skew-normal distribution.
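A short sketch of this data-generating process using scipy (our code; the coefficients a, b, c, d are left as free constants in the text, so the defaults below are illustrative):

```python
import numpy as np
from scipy.stats import skewnorm

def sample_skew_normal_data(n, a=1.0, b=0.0, c=1.0, d=0.5,
                            alpha_low=-4.0, alpha_high=0.0, seed=0):
    """Draw n samples (x, y) following Eqs. (46)-(50)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(loc=0.0, scale=0.5, size=n)                        # Eq. (46)
    xi = a * x + b                                                    # location, Eq. (47)
    omega = c * x**2 + d                                              # scale, Eq. (48)
    alpha = alpha_low + (alpha_high - alpha_low) / (1 + np.exp(-x))   # shape, Eq. (49)
    y = skewnorm.rvs(a=alpha, loc=xi, scale=omega, random_state=rng)  # Eq. (50)
    return x, y
```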
" }, { "heading": "E.2 GAUSSIAN MIXTURE", "text": "The joint distribution $p(x, y)$ follows a Gaussian Mixture Model in $\mathbb{R}^4$ with 5 Gaussian components, i.e., $K = 5$. We assume that $x \in \mathbb{R}^2$ and $y \in \mathbb{R}^2$ can be factorized, i.e.,
$p(x, y) = \sum_{k=1}^{K} w_k\, \mathcal{N}(y\,|\,\mu_{y,k}, \Sigma_{y,k})\, \mathcal{N}(x\,|\,\mu_{x,k}, \Sigma_{x,k})$ (52)
When $x$ and $y$ can be factorized as in (52), the conditional density $p(y|x)$ can be derived in closed form:
$p(y|x) = \sum_{k=1}^{K} W_k(x)\, \mathcal{N}(y\,|\,\mu_{y,k}, \Sigma_{y,k})$ (53)
wherein the mixture weights are a function of $x$:
$W_k(x) = \frac{w_k\, \mathcal{N}(x\,|\,\mu_{x,k}, \Sigma_{x,k})}{\sum_{j=1}^{K} w_j\, \mathcal{N}(x\,|\,\mu_{x,j}, \Sigma_{x,j})}$ (54)
For details and derivations we refer the interested reader to Guang Sung (2004) and Gilardi et al. (2002). The weights $w_k$ are sampled from a uniform distribution $U(0, 1)$ and then normalized to sum to one. The component means are sampled from a spherical Gaussian with zero mean and standard deviation $\sigma = 1.5$. The covariance matrices $\Sigma_{x,k}$ and $\Sigma_{y,k}$ are sampled from a Gaussian with mean 1 and standard deviation 0.5, and then projected onto the cone of positive definite matrices.\nSince we can hardly visualize a 4-dimensional GMM, Figure 4b depicts a 2-dimensional equivalent, generated with the procedure explained above." }, { "heading": "E.3 EURO STOXX 50 DATA", "text": "The Euro Stoxx 50 data comprises 3169 trading days, dated from January 2003 until June 2015. The goal is to predict the conditional probability density of 1-day log-returns, conditioned on 14 explanatory variables. These conditional variables comprise classical return factors from finance as well as option-implied moments. For details, we refer to Rothfuss et al. (2019). Overall, the target variable is one-dimensional, i.e., $y \in \mathcal{Y} \subseteq \mathbb{R}$, whereas the conditional variable $x$ constitutes a 14-dimensional vector, i.e., $x \in \mathcal{X} \subseteq \mathbb{R}^{14}$." }, { "heading": "E.4 NYC TAXI DATA", "text": "We follow the setup in Dutordoir et al. (2018). The dataset contains records of taxi trips in the Manhattan area operated in January 2016. The objective is to predict the spatial distribution of the drop-off location, based on the pick-up location, the day of the week, and the time of day. In that, the two temporal features are represented as sine and cosine with natural periods. Accordingly, the target variable $y$ is 2-dimensional (longitude and latitude of the drop-off location), whereas the conditional variable is 6-dimensional. From the ca. 1 million trips, we randomly sample 10,000 trips to serve as training data." }, { "heading": "E.5 UCI", "text": "Boston Housing: Concerns the value of houses in the suburban area of Boston. Conditional variables are mostly socio-economic as well as geographical factors. For more details see https://archive.ics.uci.edu/ml/machine-learning-databases/housing/\nConcrete: The task is to predict the compressive strength of concrete, given variables describing the concrete composition. For more details see https://archive.ics.uci.edu/ml/machine-learning-databases/concrete/compressive/\nEnergy: Concerns the energy efficiency of homes. The task is to predict the cooling load based on features describing the build of the respective house. For more details see https://archive.ics.uci.edu/ml/datasets/energy+efficiency" } ]
2019
null
SP:96afb20c4d7fe41c083a0217c9cb8d1f21a73a15
[ "This paper proposes a Mirror Descent (MD) framework for the quantization of neural networks, which, different with previous quantization methods, enables us to derive valid mirror maps and the respective MD updates. Moreover, the authors also provide a stable implementation of MD by storing an additional set of auxiliary dual variables. Experiments on CIFAR-10/100 and TinyImageNet with convolutional and residual architectures show the effective of the proposed model. ", "This paper proposes a neural network (NN) quantization based on Mirror Descent (MD) framework. The core of the proposal is the construction of the mirror map from the unconstrained auxiliary variables to the quantized space. Building on that core, the authors derive some mapping functions from the corresponding projection, i.e. tanh, softmax and shifted tanh. The experimental result on benchmark datasets (CIFAR & TinyImageNet) and basic architectures (VGG & ResNet-18) showed that the proposed method is suitable for quantization. The proposed method is a natural extension of ProxQuant, which adopted the proximal gradient descent to quantize NN (a.k.a $\\ell_2$ norm in MD). Different projections in NN quantization lead to different Bregman divergences in MD. " ]
Quantizing large Neural Networks (NN) while maintaining the performance is highly desirable for resource-limited devices due to reduced memory and time complexity. NN quantization is usually formulated as a constrained optimization problem and optimized via a modified version of gradient descent. In this work, by interpreting the continuous parameters (unconstrained) as the dual of the quantized ones, we introduce a Mirror Descent (MD) framework (Bubeck (2015)) for NN quantization. Specifically, we provide conditions on the projections (i.e., the mapping from continuous to quantized parameters) which enable us to derive valid mirror maps and, in turn, the respective MD updates. Furthermore, we discuss a numerically stable implementation of MD by storing an additional set of auxiliary dual variables (unconstrained). This update is strikingly analogous to the popular Straight Through Estimator (STE) based method, which is typically viewed as a “trick” to avoid the vanishing gradients issue, but here we show that it is an implementation method for MD for certain projections. Our experiments on standard classification datasets (CIFAR-10/100, TinyImageNet) with convolutional and residual architectures show that our MD variants obtain fully-quantized networks with accuracies very close to those of the floating-point networks.
[]
[ { "authors": [ "J. Achterhold", "J.M. Kohler", "A. Schmeink", "T. Genewein" ], "title": "Variational network quantization", "venue": null, "year": 2018 }, { "authors": [ "Thalaiyasingam Ajanthan", "Puneet K Dokania", "Richard Hartley", "Philip HS Torr" ], "title": "Proximal mean-field for neural network quantization", "venue": null, "year": 2019 }, { "authors": [ "Yu Bai", "Yu-Xiang Wang", "Edo Liberty" ], "title": "Proxquant: Quantized neural networks via proximal operators", "venue": null, "year": 2019 }, { "authors": [ "Amir Beck", "Marc Teboulle" ], "title": "Mirror descent and nonlinear projected subgradient methods for convex optimization", "venue": "Operations Research Letters,", "year": 2003 }, { "authors": [ "Sébastien Bubeck", "Nicolo Cesa-Bianchi", "Sham M Kakade" ], "title": "Towards minimax policies for online linear optimization with bandit feedback", "venue": "Conference on Learning Theory,", "year": 2012 }, { "authors": [ "A Miguel" ], "title": "Carreira-Perpinán and Yerlan Idelbayev. Model compression as constrained optimization, with application to neural nets. part ii: Quantization", "venue": "NeurIPS Workshop on Optimization for Machine Learning,", "year": 2017 }, { "authors": [ "Matthieu Courbariaux", "Yoshua Bengio", "Jean-Pierre David" ], "title": "Binaryconnect: Training deep neural networks with binary weights during propagations", "venue": "NeurIPS,", "year": 2015 }, { "authors": [ "S.K. Esser", "R. Appuswamy", "P.A. Merolla", "J.V. Arthur", "D.S. Modha" ], "title": "Backpropagation for energy-efficient neuromorphic computing", "venue": "NeurIPS,", "year": 2015 }, { "authors": [ "Yunhui Guo" ], "title": "A survey on methods and theories of quantized neural networks", "venue": "arXiv preprint arXiv:1808.04752,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2016 }, { "authors": [ "Lu Hou", "Quanming Yao", "James T Kwok" ], "title": "Loss-aware binarization of deep networks", "venue": null, "year": 2017 }, { "authors": [ "Ya-Ping Hsieh", "Ali Kavis", "Paul Rolland", "Volkan Cevher" ], "title": "Mirrored langevin dynamics", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "Gao Huang", "Yixuan Li", "Geoff Pleiss", "Zhuang Liu", "John E Hopcroft", "Kilian Q Weinberger" ], "title": "Snapshot ensembles: Train 1, get m for free", "venue": null, "year": 2017 }, { "authors": [ "Itay Hubara", "Matthieu Courbariaux", "Daniel Soudry", "Ran El-Yaniv", "Yoshua Bengio" ], "title": "Quantized neural networks: Training neural networks with low precision weights and activations", "venue": null, "year": 2017 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": null, "year": 2015 }, { "authors": [ "Namhoon Lee", "Thalaiyasingam Ajanthan", "Philip H S Torr" ], "title": "SNIP: Single-shot network pruning based on connection sensitivity", "venue": null, "year": 2019 }, { "authors": [ "Hao Li", "Soham De", "Zheng Xu", "Christoph Studer", "Hanan Samet", "Tom Goldstein" ], "title": "Training quantized nets: A deeper understanding", "venue": "NeurIPS,", "year": 2017 }, { "authors": [ "C. Louizos", "K. Ullrich", "M. 
Welling" ], "title": "Bayesian compression for deep learning", "venue": "NeurIPS,", "year": 2017 }, { "authors": [ "Christos Louizos", "Matthias Reisser", "Tijmen Blankevoort", "Efstratios Gavves", "Max Welling" ], "title": "Relaxed quantization for discretized neural networks", "venue": null, "year": 2019 }, { "authors": [ "Arkadii Semenovich Nemirovsky", "David Borisovich Yudin" ], "title": "Problem complexity and method efficiency in optimization", "venue": null, "year": 1983 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in PyTorch", "venue": null, "year": 2017 }, { "authors": [ "Mohammad Rastegari", "Vicente Ordonez", "Joseph Redmon", "Ali Farhadi" ], "title": "Xnor-net: Imagenet classification using binary convolutional neural networks", "venue": null, "year": 2016 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "ICLR,", "year": 2015 }, { "authors": [ "Penghang Yin", "Shuai Zhang", "Jiancheng Lyu", "Stanley Osher", "Yingyong Qi", "Jack Xin" ], "title": "Binaryrelax: A relaxation approach for training deep neural networks with quantized weights", "venue": "SIIMS,", "year": 2018 }, { "authors": [ "Penghang Yin", "Jiancheng Lyu", "Shuai Zhang", "Stanley Osher", "Yingyong Qi", "Jack Xin" ], "title": "Understanding straight-through estimator in training activation quantized neural nets", "venue": null, "year": 2019 }, { "authors": [ "Ruimao Zhang", "Liang Lin", "Rui Zhang", "Wangmeng Zuo", "Lei Zhang" ], "title": "Bit-scalable deep hashing with regularized similarity learning for image retrieval and person re-identification", "venue": null, "year": 2015 }, { "authors": [ "Siqi Zhang", "Niao He" ], "title": "On the convergence rate of stochastic mirror descent for nonsmooth nonconvex optimization", "venue": null, "year": 2018 }, { "authors": [ "Shuchang Zhou", "Yuxin Wu", "Zekun Ni", "Xinyu Zhou", "He Wen", "Yuheng Zou" ], "title": "Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients", "venue": null, "year": 2016 }, { "authors": [ "Zhengyuan Zhou", "Panayotis Mertikopoulos", "Nicholas Bambos", "Stephen Boyd", "Peter Glynn" ], "title": "Mirror descent in non-convex stochastic programming", "venue": null, "year": 2017 }, { "authors": [ "Zhengyuan Zhou", "Panayotis Mertikopoulos", "Nicholas Bambos", "Stephen Boyd", "Peter W Glynn" ], "title": "Stochastic mirror descent in variationally coherent optimization problems", "venue": "NeurIPS,", "year": 2017 }, { "authors": [ "PQ (Bai" ], "title": "2019)) optimizes for the quantization levels (differently for each layer) as well in an alternating optimization regime rather than fixing it to Q = {−1, 0, 1}. Also, PQ does ternarize the first convolution layer, fully-connected layers and the shortcut layers. We cross-validate hyperparameters for both the original PQ setup and the equivalent setting of our MD-variants where we optimize all the weights and denote them as PQ* and PQ respectively", "venue": null, "year": 2019 }, { "authors": [ "Lee" ], "title": "2019)), the size of the fully-connected (FC) layers of VGG-16 is set to 512 and no dropout layers are employed. For TinyImageNet, the stride of the first convolutional layer of ResNet-18 is set to 2 to handle the image size (Huang et al", "venue": "CIFAR experiments,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Despite the success of deep neural networks in various domains, their excessive computational and memory requirements limit their practical usability for real-time applications or in resource-limited devices. Quantization is a prominent technique for network compression, where the objective is to learn a network while restricting the parameters to take values from a small discrete set (usually binary). This leads to a dramatic reduction in memory (a factor of 32 for binary quantization) and inference time – as it enables specialized implementation using bit operations.\nNeural Network (NN) quantization is usually formulated as a constrained optimization problem minx∈X f(x), where f(·) denotes the loss function by abstracting out the dependency on the dataset and X ⊂ IRr denotes the set of all possible quantized solutions. Majority of the works in the literature (Hubara et al. (2017); Yin et al. (2018); Ajanthan et al. (2019)) convert this into an unconstrained problem by introducing auxiliary variables (x̃) and optimize via (stochastic) gradient descent. Specifically, the objective and the update step take the following form:\nmin x̃∈IRr\nf(P (x̃)) , x̃k+1 = x̃k − η ∇x̃f(P (x̃))|x̃=x̃k , (1)\nwhere P : IRr → X is a mapping from the unconstrained space to the quantized space (sometimes called projection) and η > 0 is the learning rate. In cases where the mapping P is not differentiable, a suitable approximation is employed (Hubara et al. (2017)).\nIn this work, by noting that the well-known Mirror Descent (MD) algorithm, widely used for online convex optimization (Bubeck (2015)), provides a theoretical framework to perform gradient descent in the unconstrained space (dual space, IRr) with gradients computed in the quantized space (primal space, X ), we introduce an MD framework for NN quantization. In essence, MD extends gradient descent to non-Euclidean spaces where Euclidean projection is replaced with a more general projection defined based on the associated distance metric. Briefly, the key ingredient of MD is a concept called mirror map which defines both the mapping between primal and dual spaces and the exact form of\nthe projection. Specifically, in this work, by observing P in Eq. (1) as a mapping from dual space to the primal space, we analytically derive corresponding mirror maps under certain conditions on P . This enables us to derive different variants of the MD algorithm useful for NN quantization.\nFurthermore, as MD is often found to be numerically unstable (Hsieh et al. (2018)), we discuss a numerically stable implementation of MD by storing an additional set of auxiliary variables similar to the existing methods. As will be shown later, this update is strikingly analogous to the popular Straight Through Estimator (STE) based gradient method (Hubara et al. (2017); Bai et al. (2019)) which is typically viewed as a “trick” to avoid vanishing gradients issue but here we show that it is an implementation method for MD under certain conditions on the mapping P . We believe this connection sheds some light on the practical effectiveness of STE.\nWe evaluate the merits of our MD variants on CIFAR-10/100 and TinyImageNet classification datasets with convolutional and residual architectures. Our experiments show that the quantized networks obtained by the MD variants yield accuracies very close to the floating-point counterparts while outperforming directly comparable baselines. 
Finally, we would like to emphasize that even though our formulation does not necessarily extend the theory of MD, we believe showing MD to be a suitable framework for NN quantization with superior empirical performance opens up new ways of designing MD-inspired update rules for NNs." }, { "heading": "2 PRELIMINARIES", "text": "We first provide some background on the MD algorithm and NN quantization. Then we discuss the link between them and provide our MD framework for NN quantization." }, { "heading": "2.1 MIRROR DESCENT", "text": "The Mirror Descent (MD) algorithm was first introduced in (Nemirovsky & Yudin (1983)) and has been extensively studied in the convex optimization literature ever since. In this section we provide a brief overview and refer the interested reader to Chapter 4 of (Bubeck (2015)). In the context of MD, we consider a problem of the form:
$\min_{x \in \mathcal{X}} f(x)$ , (2)
where $f : \mathcal{X} \to \mathbb{R}$ is a convex function and $\mathcal{X} \subset \mathbb{R}^r$ is a compact convex set. The main concept of MD is to extend gradient descent to a more general non-Euclidean space (a Banach space¹), thus overcoming the dependency of gradient descent on the Euclidean geometry. The motivation for this generalization is that one might be able to exploit the geometry of the space to optimize much more efficiently. One such example is simplex-constrained optimization, where MD converges at a much faster rate than standard Projected Gradient Descent (PGD).\nTo this end, since the gradients lie in the dual space, optimization is performed by first mapping the primal point $x^k \in \mathcal{B}$ (quantized space, $\mathcal{X}$) to the dual space $\mathcal{B}^*$ (unconstrained space, $\mathbb{R}^r$), then performing gradient descent in the dual space, and finally mapping the resulting point back to the primal space $\mathcal{B}$. If the new point $x^{k+1}$ lies outside of the constraint set $\mathcal{X} \subset \mathcal{B}$, it is projected onto the set $\mathcal{X}$. Both the primal/dual mapping and the projection are determined by the mirror map. Specifically, the gradient of the mirror map defines the mapping from primal to dual, and the projection is done via the Bregman divergence of the mirror map. We first provide the definitions of the mirror map and the Bregman divergence and then turn to the MD updates.
Definition 2.1 (Mirror map). Let $C \subset \mathbb{R}^r$ be a convex open set such that $\mathcal{X} \subset \bar{C}$ ($\bar{C}$ denotes the closure of the set $C$) and $\mathcal{X} \cap C \neq \emptyset$. Then, $\Phi : C \to \mathbb{R}$ is a mirror map if it satisfies:
1. $\Phi$ is strictly convex and differentiable.
2. $\nabla\Phi(C) = \mathbb{R}^r$, i.e., $\nabla\Phi$ takes all possible values in $\mathbb{R}^r$.
3. $\lim_{x \to \partial C} \|\nabla\Phi(x)\| = \infty$ ($\partial C$ denotes the boundary of $C$), i.e., $\nabla\Phi$ diverges on the boundary of $C$.
Definition 2.2 (Bregman divergence). Let $\Phi : C \to \mathbb{R}$ be a continuously differentiable, strictly convex function defined on a convex set $C$. The Bregman divergence associated with $\Phi$ for points $p, q \in C$ is the difference between the value of $\Phi$ at point $p$ and the value of the first-order Taylor expansion of $\Phi$ around point $q$ evaluated at point $p$, i.e.,
$D_\Phi(p, q) = \Phi(p) - \Phi(q) - \langle \nabla\Phi(q), p - q \rangle$ . (3)
¹A Banach space is a complete normed vector space where the norm is not necessarily derived from an inner product.
Notice, $D_\Phi(p, q) \ge 0$ with $D_\Phi(p, p) = 0$, and $D_\Phi(p, q)$ is convex in $p$.\nNow we are ready to present the mirror descent strategy based on the mirror map $\Phi$. Let $x^0 \in \operatorname{argmin}_{x \in \mathcal{X} \cap C} \Phi(x)$ be the initial point. Then, for iteration $k \ge 0$ and step size $\eta > 0$, the update of the MD algorithm can be written as:
$\nabla\Phi(y^{k+1}) = \nabla\Phi(x^k) - \eta\, g^k$ , where $g^k \in \partial f(x^k)$ and $y^{k+1} \in C$ , (4)
$x^{k+1} = \operatorname{argmin}_{x \in \mathcal{X} \cap C} D_\Phi(x, y^{k+1})$ .
Note that, in Eq. (4), the gradient $g^k$ is computed at $x^k \in \mathcal{X} \cap C$ (solution space) but the gradient descent is performed in $\mathbb{R}^r$ (unconstrained dual space).
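As a quick sanity check of Definition 2.2, the following sketch (ours, not from the paper) evaluates the Bregman divergence for two classic mirror maps — the squared Euclidean norm, recovering the squared distance, and the negative entropy, recovering the KL divergence on the simplex:

```python
import numpy as np

def bregman(phi, grad_phi, p, q):
    """Bregman divergence D_Phi(p, q) = Phi(p) - Phi(q) - <grad Phi(q), p - q>."""
    return phi(p) - phi(q) - np.dot(grad_phi(q), p - q)

p = np.array([0.2, 0.3, 0.5])
q = np.array([0.4, 0.4, 0.2])

# Phi(x) = 0.5 ||x||^2  ->  D_Phi(p, q) = 0.5 ||p - q||^2 (Euclidean / PGD case)
d_euc = bregman(lambda x: 0.5 * x @ x, lambda x: x, p, q)
assert np.isclose(d_euc, 0.5 * np.sum((p - q) ** 2))

# Phi(x) = sum x log x (negative entropy)  ->  D_Phi(p, q) = KL(p || q) on the simplex
neg_ent = lambda x: np.sum(x * np.log(x))
d_ent = bregman(neg_ent, lambda x: np.log(x) + 1.0, p, q)
assert np.isclose(d_ent, np.sum(p * np.log(p / q)))
```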
Moreover, by simple algebraic manipulation, it is easy to show that the above MD update (4) can be compactly written in a proximal form, where the Bregman divergence of the mirror map becomes the proximal term (Beck & Teboulle (2003)):
$x^{k+1} = \operatorname{argmin}_{x \in \mathcal{X} \cap C} \langle \eta\, g^k, x \rangle + D_\Phi(x, x^k)$ . (5)
Note, if $\Phi(x) = \frac{1}{2}\|x\|_2^2$, then $D_\Phi(x, x^k) = \frac{1}{2}\|x - x^k\|_2^2$, which, when plugged back into the above problem and optimized for $x$, leads to exactly the same update rule as that of PGD. However, MD allows us to choose various forms of $\Phi$ depending on the problem at hand." }, { "heading": "2.2 NEURAL NETWORK QUANTIZATION", "text": "Neural Network (NN) quantization amounts to training networks with parameters restricted to a small discrete set representing the quantization levels. Here we review two constrained optimization formulations for NN quantization: 1) directly constrain each parameter to be in the discrete set; and 2) optimize the probability of each parameter taking a label from the set of quantization levels." }, { "heading": "2.2.1 PARAMETER SPACE FORMULATION", "text": "Given a dataset $\mathcal{D} = \{x_i, y_i\}_{i=1}^n$, NN quantization can be written as:
$\min_{w \in \mathcal{Q}^m} L(w; \mathcal{D}) := \frac{1}{n}\sum_{i=1}^n \ell(w; (x_i, y_i))$ . (6)
Here, $\ell(\cdot)$ denotes the input-output mapping composed with a standard loss function (e.g., the cross-entropy loss), $w$ is the $m$-dimensional parameter vector, and $\mathcal{Q}$ with $|\mathcal{Q}| = d$ is a predefined discrete set representing the quantization levels (e.g., $\mathcal{Q} = \{-1, 1\}$ or $\mathcal{Q} = \{-1, 0, 1\}$). The approaches that directly optimize in the parameter space include BinaryConnect (BC) (Courbariaux et al. (2015)) and its variants (Hubara et al. (2017); Rastegari et al. (2016)), where the constraint set is discrete. In contrast, recent approaches (Bai et al. (2019); Yin et al. (2018)) relax this constraint set to its convex hull:
$\operatorname{conv}(\mathcal{Q}^m) = [q_{min}, q_{max}]^m$ , (7)
where $q_{min}$ and $q_{max}$ represent the minimum and maximum quantization levels, respectively. In this case, a quantized solution is obtained by gradually increasing an annealing hyperparameter." }, { "heading": "2.2.2 LIFTED PROBABILITY SPACE FORMULATION", "text": "Another formulation is based on the Markov Random Field (MRF) perspective on NN quantization recently studied in (Ajanthan et al. (2019)). It treats Eq. (6) as a discrete labelling problem and introduces indicator variables $u_{j:\lambda} \in \{0, 1\}$ for each parameter $w_j$, where $j \in \{1, \ldots, m\}$, such that $u_{j:\lambda} = 1$ if and only if $w_j = \lambda \in \mathcal{Q}$. For convenience, by denoting the vector of quantization levels as $q$, a parameter vector $w \in \mathcal{Q}^m$ can be written as a matrix-vector product:
$w = uq$ , where $u \in \mathcal{V}^m = \big\{u \mid \textstyle\sum_\lambda u_{j:\lambda} = 1\ \forall j,\ u_{j:\lambda} \in \{0, 1\}\ \forall j, \lambda\big\}$ . (8)
Here, $u$ is an $m \times d$ matrix (i.e., each row $u_j = \{u_{j:\lambda} \mid \lambda \in \mathcal{Q}\}$), and $q$ is a column vector of dimension $d$. Note that $u \in \mathcal{V}^m$ is an overparametrized (i.e., lifted) representation of $w \in \mathcal{Q}^m$. Now, similar to the relaxation in the parameter space, one can relax the binary constraint in $\mathcal{V}^m$ to form its convex hull:
$\Delta^m = \operatorname{conv}(\mathcal{V}^m) = \big\{u \mid \textstyle\sum_\lambda u_{j:\lambda} = 1\ \forall j,\ u_{j:\lambda} \ge 0\ \forall j, \lambda\big\}$ . (9)
The set $\Delta^m$ is in fact the Cartesian product of standard $(d-1)$-probability simplexes embedded in $\mathbb{R}^d$. Therefore, for a feasible point $u \in \Delta^m$, the vector $u_j$ for each $j$ (the $j$-th row of the matrix $u$) belongs to the probability simplex $\Delta$. Hence, we can interpret the value $u_{j:\lambda}$ as the probability of assigning the discrete label $\lambda$ to the weight $w_j$, as the short sketch below illustrates.
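A minimal NumPy illustration of the lifted representation $w = uq$ (ours; the specific values are arbitrary):

```python
import numpy as np

q = np.array([-1.0, 0.0, 1.0])        # quantization levels, d = 3

# Hard assignment u in V^m: each row is one-hot, so w = u @ q is quantized.
u_hard = np.array([[0, 0, 1],
                   [1, 0, 0],
                   [0, 1, 0]], dtype=float)
print(u_hard @ q)                     # -> [ 1. -1.  0.]

# Relaxed assignment u in Delta^m: rows lie on the probability simplex,
# so w = u @ q lies in conv(Q^m) = [-1, 1]^m.
logits = np.random.randn(3, 3)
u_soft = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # row-wise softmax
print(u_soft @ q)                     # real-valued weights in [-1, 1]
```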
This relaxed optimization can then be written as:
$\min_{u \in \Delta^m} L(uq; \mathcal{D}) := \frac{1}{n}\sum_{i=1}^n \ell(uq; (x_i, y_i))$ . (10)
In fact, this can be interpreted as finding a probability distribution $u \in \Delta^m$ such that the cost $L(u)$ is minimized. Note that the relaxation of $u$ from $\mathcal{V}^m$ to $\Delta^m$ translates into relaxing $w$ from $\mathcal{Q}^m$ to the convex region $\operatorname{conv}(\mathcal{Q}^m)$. Even in this case, a discrete solution $u \in \mathcal{V}^m$ can be enforced via an annealing hyperparameter or using rounding schemes." }, { "heading": "3 MIRROR DESCENT FRAMEWORK FOR NETWORK QUANTIZATION", "text": "Before introducing the MD formulation, we first write NN quantization as a single objective unifying (6) and (10):
$\min_{x \in \mathcal{X}} f(x)$ , (11)
where $f(\cdot)$ denotes the loss function, abstracting out the dependency on the dataset $\mathcal{D}$, and $\mathcal{X}$ denotes the constraint set ($\mathcal{Q}^m$ or $\mathcal{V}^m$, depending on the formulation). Note that, as discussed in Sec. 2.2, many recent NN quantization methods optimize over the convex hull of the constraint set. Following this, we consider the solution space $\mathcal{X}$ in Eq. (11) to be convex and compact. To employ MD, we need to choose a mirror map (refer to Definition 2.1) suitable for the problem at hand. In fact, as discussed in Sec. 2.1, the mirror map is the core component of an MD algorithm, and it determines the effectiveness of the resulting MD updates. However, there is no straightforward approach to obtaining a mirror map for a given constrained optimization problem, except in certain special cases.\nTo this end, we observe that the usual approach to optimizing the above constrained problem is via a version of projected gradient descent, where the projection is the mapping from the unconstrained auxiliary variables (high-precision) to the quantized space $\mathcal{X}$. Now, noting the analogy between the purpose of the projection operator and the mirror map in the MD formulation, we intend to derive the mirror map analogous to a given projection. Precisely, we prove that if the projection is invertible and strictly monotone, a valid mirror map can be derived from the projection itself. This does not necessarily extend the theory of MD, as finding a strictly monotone map is as hard as finding the mirror map itself. However, this derivation is interesting as it connects existing PGD-type algorithms to their corresponding MD variants. For completeness, we now state it as a theorem for the case $\mathcal{X} \subset \mathbb{R}$; the multidimensional case can be easily proved with the additional assumption that the vector field $P^{-1}(x)$ is conservative.
Theorem 3.1. Let $\mathcal{X}$ be a compact convex set and $P : \mathbb{R} \to C$ an invertible function, where $C \subset \mathbb{R}$ is a convex open set such that $\mathcal{X} = \bar{C}$ ($\bar{C}$ denotes the closure of $C$). Now, if
1. $P$ is strictly monotonically increasing,
2. $\lim_{x \to \partial C} \|P^{-1}(x)\| = \infty$ ($\partial C$ denotes the boundary of $C$),
then $\Phi(x) = \int_{x_0}^{x} P^{-1}(y)\, dy$ is a valid mirror map.
Proof. This can be proved by noting that $\nabla\Phi(x) = P^{-1}(x)$. Please refer to Appendix A.
The MD update based on the mirror map derived from a given projection is illustrated in Fig. 1. Note that, to employ MD on the problem (11), in theory, any mirror map satisfying Definition 2.1 whose domain (the closure of the domain) is a superset of the constraint set $\mathcal{X}$ can be chosen. However, the above theorem provides a method to derive only a subset of all applicable mirror maps, namely those where the closure of the domain of the mirror map is exactly equal to the constraint set $\mathcal{X}$.\nWe now give some example projections useful for NN quantization (tanh for the $w$-space and softmax for the $u$-space) and derive their corresponding mirror maps. Given the mirror maps, the MD updates are straightforward based on Eq. (5). Even though we consider differentiable projections, Theorem 3.1
Given mirror maps, the MD updates are straightforward based on Eq. (5). Even though we consider differentiable projections, Theorem 3.1\ndoes not require the projection to be differentiable. For the rest of the section, we assume m = 1, i.e., consider projections that are independent for each j ∈ {1, . . . ,m}. Example 3.1 (w-space, binary, tanh). Consider the tanh function, which projects a real value to the interval [−1, 1]:\nw = P (w̃) := tanh(βw̃) = exp(2βw̃)− 1 exp(2βw̃) + 1 , (12)\nwhere β > 0 is the annealing hyperparameter and when β →∞, tanh approaches the step function. The inverse of the tanh is:\nP−1(w) = 1\nβ tanh−1(w) =\n1\n2β log\n1 + w 1− w . (13)\nNote that, P−1 is monotonically increasing for a fixed β. Correspondingly, the mirror map from Theorem 3.1 can be written as:\nΦ(w) = ∫ P−1(w)dw = 1\n2β\n[ (1 + w) log(1 + w) + (1− w) log(1− w) ] . (14)\nHere, the constant from the integration is ignored. It can be easily verified that Φ(w) is in fact a valid mirror map. The projection, its inverse and the corresponding mirror map are illustrated in Fig. 2a. Consequently, the resulting MD update (5) takes the following form:\nwk+1 = argmin w∈(−1,1)\n〈η gk, w〉+DΦ(w,wk) = 1+wk 1−wk exp(−2βηg k)− 1\n1+wk 1−wk exp(−2βηgk) + 1 . (15)\nThe update formula is derived using the KKT conditions (Boyd & Vandenberghe (2009)). For the detailed derivation please refer to Appendix B. A similar derivation can also be performed for the sigmoid function, where C̄ = X = [0, 1]. Note that the sign function has been used for binary quantization in (Courbariaux et al. (2015)) and tanh can be used as a soft version of sign function as pointed out by (Zhang et al. (2015)). Mirror map corresponding to tanh is used for online linear optimization in (Bubeck et al. (2012)) but here we use it for NN quantization.\nExample 3.2 (u-space, multi-label, softmax). Now we consider the softmax projection used in Proximal Mean-Field (PMF) (Ajanthan et al. (2019)) to optimize in the lifted probability space. In this case, the projection is defined as P (ũ) := softmax(βũ) where P : IRd → C with C̄ = X = ∆. Here ∆ is the (d− 1)-dimensional probability simplex and |Q| = d. Note that the softmax projection is not invertible as it is a many-to-one mapping. In particular, it is invariant to translation, i.e.,\nu = softmax(ũ + c1) = softmax(ũ) , where uλ = exp(ũλ)∑ µ∈Q exp(ũµ) , (16)\nfor any scalar c ∈ IR (1 denotes a vector of all ones). Therefore, the softmax projection does not satisfy Theorem 3.1. However, one could define the inverse of softmax as follows: given u ∈ ∆, find a unique point ṽ = ũ + c1, for a particular scalar c, such that u = softmax(ṽ). Now, by choosing c = − log( ∑ µ=Q exp(ũµ)), softmax can be written as:\nu = softmax(ṽ) , where uλ = exp(ṽλ) , ∀λ ∈ Q . (17)\nNow, the inverse of the projection can be written as:\nṽ = P−1(u) = 1\nβ softmax−1(u) , where ṽλ =\n1 β log(uλ) , ∀λ . (18)\nIndeed, log is a monotonically increasing function and from Theorem 3.1, by summing the integrals, the mirror map can be written as:\nΦ(u) = 1\nβ [∑ λ uλ log(uλ)− uλ ] = − 1 β H(u)− 1/β . (19)\nHere, ∑ λ uλ = 1 as u ∈ ∆, and H(u) is the entropy. Interestingly, as the mirror map in this case is the negative entropy (up to a constant), the MD update leads to the well-known Exponentiated\nGradient Descent (EGD) (or Entropic Descent Algorithm (EDA)) (Beck & Teboulle (2003); Bubeck (2015)). Consequently, the update takes the following form:\nuk+1λ = ukλ exp(−βηgkλ)∑\nµ∈Q u k µ exp(−βηgkµ)\n∀λ . 
It is interesting to note that the MD variant of softmax is equivalent to the well-known EGD. Notice, the authors of PMF hinted that PMF is related to EGD, but here we have clearly shown that the MD variant of PMF under the above reparametrization (17) is exactly EGD.
Example 3.3 ($w$-space, multi-label, shifted tanh). Note that, similar to softmax, we wish to extend the tanh projection beyond the binary case. The idea is to use a function that is a sum of multiple shifted tanh functions. To this end, as an example we consider ternary quantization, with $\mathcal{Q} = \{-1, 0, 1\}$, and define our shifted tanh projection $P : \mathbb{R} \to C$ as:
$w = P(\tilde{w}) = \frac{1}{2}\big[\tanh(\beta(\tilde{w} + 0.5)) + \tanh(\beta(\tilde{w} - 0.5))\big]$ , (21)
where $\beta > 0$ and $\bar{C} = \mathcal{X} = [-1, 1]$. When $\beta \to \infty$, $P$ approaches a stepwise function with inflection points at $-0.5$ and $0.5$ (here, $\pm 0.5$ is chosen heuristically), meaning $w$ moves towards one of the quantization levels in the set $\mathcal{Q}$. This behaviour, together with the inverse, is illustrated in Fig. 2b. Now, one could potentially find the functional form of $P^{-1}$ and analytically derive the mirror map corresponding to this projection. Note that, while Theorem 3.1 provides an analytical method to derive mirror maps, in some cases such as the above, the exact form of the mirror map and the MD update might be nontrivial. In such cases, as will be shown subsequently, the MD update can be easily implemented by storing an additional set of auxiliary variables $\tilde{w}$.
Effect of Annealing. Note that, to ensure a discrete solution, the projection $P$ is parametrized by a scalar $\beta$, which is annealed throughout the optimization. This annealing hyperparameter translates into a time-varying mirror map (refer to Eqs. (14) and (19)) in our case. Such an adaptive mirror map gradually constrains the solution space $\mathcal{X}$ to its boundary and, in the limit, enforces a quantized solution. Since this adaptive behaviour can affect the convergence of the algorithm, in our implementation $\beta$ is capped at an arbitrarily chosen maximum value; empirically, the algorithm converges to fully quantized solutions in all tested cases. We leave the theoretical analysis of annealing for future work.
Numerically Stable form of Mirror Descent. We showed a few examples of valid projections, their corresponding mirror maps, and the final MD updates. Even though, in theory, these updates can be used directly, they are sometimes numerically unstable due to the operations involving multiple logarithms, exponentials and divisions (Hsieh et al. (2018)). To this end, we provide a numerically stable way of performing MD by storing an additional set of auxiliary parameters during training.
A careful look at Fig. 1 suggests that the MD update with the mirror map derived from Theorem 3.1 can be performed by storing auxiliary variables $\tilde{x} = P^{-1}(x)$. In fact, once the auxiliary variable $\tilde{x}^k$ is updated using the gradient $g^k$, it is directly mapped back to the constraint set $\mathcal{X}$ via the projection. This is mainly due to the fact that the domain of the mirror maps derived from Theorem 3.1 is exactly the same as the constraint set. Formally, with this additional set of variables, one can write the MD update (4) corresponding to the projection $P$ as:
$\tilde{x}^{k+1} = \tilde{x}^k - \eta\, g^k$ , update in the dual space (22)
$x^{k+1} = P(\tilde{x}^{k+1}) \in \mathcal{X}$ , projection to the primal space
where $\eta > 0$, $g^k \in \partial f(x^k)$ and $\tilde{x}^k = P^{-1}(x^k)$.
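A minimal PyTorch sketch of this stable update for the tanh projection (our illustration of Eq. (22), not the authors' released code; the function name is ours, and in a full training loop $\beta$ would be annealed and the loss computed on a minibatch):

```python
import torch

def md_tanh_step(w_tilde, beta, eta, loss_fn):
    """One numerically stable MD step, Eq. (22), for the tanh projection.
    w_tilde holds the auxiliary (dual) parameters across iterations."""
    w = torch.tanh(beta * w_tilde)         # primal point x^k = P(x~^k)
    w.requires_grad_(True)
    loss = loss_fn(w)                      # f evaluated at the quantized point
    g = torch.autograd.grad(loss, w)[0]    # g^k: gradient in the primal space
    with torch.no_grad():
        w_tilde -= eta * g                 # dual update; the Jacobian of P is
                                           # replaced by the identity (STE-style)
    return torch.tanh(beta * w_tilde)      # new primal point x^{k+1}
```

For the softmax variant, $P$ is simply replaced by a row-wise softmax over the lifted variables.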
Experimentally, we observed these updates to be stable and to perform remarkably well for both the tanh and softmax projections.\nNote that the above updates can be seen as optimizing the function $f(P(\tilde{x}))$ using gradient descent, where the gradient through the projection (i.e., the Jacobian) $J_P = \partial P(\tilde{x})/\partial\tilde{x}$ is replaced with the identity matrix. This is exactly the Straight Through Estimator (STE) for NN quantization (following the nomenclature of (Bai et al. (2019); Yin et al. (2018))). Despite being a crude approximation, STE has been shown to be highly effective for NN quantization with various network architectures and datasets (Yin et al. (2018); Zhou et al. (2016)). However, a solid understanding of the effectiveness of STE is lacking in the literature, except for its convergence analysis in certain special cases (Li et al. (2017); Yin et al. (2019)). In this work, by showing STE-based gradient descent to be an implementation method of MD under certain conditions on the projection, we provide a justification for the effectiveness of STE. Besides, as shown in Example 3.3, in cases where the MD formulation is nontrivial, the STE-based implementation can be used. The pseudocodes of the original and numerically stable versions of our MD algorithm for tanh are presented in Appendix B.\nComparison against ProxQuant. The connection between the dual averaging version of MD and STE was recently hinted at in ProxQuant (PQ) (Bai et al. (2019)). However, no analysis of whether a mirror map analogous to a given projection exists is provided, and their final algorithm is not based on MD. In particular, following our notation, the final update equation of PQ can be written as:
$\tilde{x}^{k+1} = x^k - \eta\, g^k$ , assumes $x^k$ and $g^k$ are in the same space (23)
$x^{k+1} = \operatorname{prox}(\tilde{x}^{k+1})$ , $\operatorname{prox} : \mathbb{R}^r \to \mathbb{R}^r$ is the proximal mapping defined in (Bai et al. (2019))
where $\eta > 0$ and $g^k \in \partial f(x^k)$. Note that, as opposed to MD (refer to Eq. (22)), PQ assumes that the point $x^k$ and the gradient $g^k$ lie in the same space; only then is the formula $x^k - \eta g^k$ valid. This would only be true for the Euclidean space. However, as discussed in Sec. 2.1, MD allows gradient descent to be performed in a more general non-Euclidean space by first mapping the primal point $x^k$ to a point $\tilde{x}^k$ in the dual space via the mirror map. This ability has enabled theoretical and practical research on MD for the past three decades.\nConvergence of MD in the Nonconvex Setting. We would like to point out that MD was originally developed for convex optimization; in this paper, however, we directly apply MD to NN quantization, where the loss is highly nonconvex and the gradient estimates are stochastic, and we empirically evaluate its convergence behaviour and performance. Theoretical analysis of MD in the nonconvex, stochastic setting is an active research area (Zhou et al. (2017a;b)), and MD has recently been shown to converge in the nonconvex stochastic setting under certain conditions (Zhang & He (2018)). We believe a convergence analysis of MD for NNs could constitute a completely new theoretical paper." }, { "heading": "4 RELATED WORK", "text": "In this work we consider parameter quantization, which is usually formulated as a constrained problem and optimized using a modified projected gradient descent algorithm; the methods (Courbariaux et al. (2015); Carreira-Perpinán & Idelbayev (2017); Yin et al. (2018); Bai et al. (2019); Ajanthan et al. (2019)) mainly differ in the constraint set, the projection used, and how backpropagation through the projection is performed.
Among them, STE-based gradient descent is the most popular method, as it enables backpropagation through nondifferentiable projections and has been shown to be highly effective in practice (Courbariaux et al. (2015)). In fact, the success of this approach led to various extensions, including additional layerwise scalars (Rastegari et al. (2016)), relaxing the solution space (Yin et al. (2018)), and even quantizing activations (Hubara et al. (2017)) and/or gradients (Zhou et al. (2016)). Moreover, there are methods focusing on loss-aware quantization (Hou et al. (2017)), quantization for specialized hardware (Esser et al. (2015)), and quantization based on the variational approach (Achterhold et al. (2018); Louizos et al. (2017; 2019)). We have only provided a brief summary of relevant methods; for a comprehensive survey we refer the reader to (Guo (2018))." }, { "heading": "5 EXPERIMENTS", "text": "Due to the popularity of binary neural networks (Courbariaux et al. (2015); Rastegari et al. (2016)), we mainly consider binary quantization and set the quantization levels as $\mathcal{Q} = \{-1, 1\}$. We would
Both the numerically stable MD variants consistently produce better or on par results compared to other binarization methods, while narrowing the performance gap between binary networks and their floating point counterparts to a large extent on multiple datasets.\nOur stable MD-softmax variant performs slightly better than MD-softmax, whereas for tanh, the original MD updates perform on par with, or sometimes even better than, the numerically stable version MD-tanh-S.\n2https://tiny-imagenet.herokuapp.com/\nWe believe the main reason for this empirical variation in the results of our MD variants is numerical instability caused by the floating-point arithmetic of the logarithm and exponential functions in Eq. (15) and Eq. (20). Furthermore, even though our two MD variants, MD-softmax and MD-tanh, optimize in different spaces, their performance is similar in most cases. This may be explained by the fact that both algorithms belong to the same family, where a “soft” projection to the constraint set is used (in fact, the constraint sets are equivalent in this case; refer to Sec. 2.2.2) and an annealing hyperparameter gradually enforces a quantized solution.\nNote that PQ (Bai et al. (2019)) does not quantize the fully connected layers, biases and shortcut layers. For a fair comparison, we cross-validate PQ both with all layers binarized and with the original PQ settings, and report the results denoted as PQ and PQ* respectively in Table 1. Our MD variants outperform PQ consistently on multiple datasets in equivalent experimental settings. This clearly shows that entropic or tanh-based regularization with our annealing scheme is superior to a simple “W”-shaped regularizer and emphasizes that MD is a suitable framework for quantization.\nFurthermore, the superior performance of MD-tanh against GD-tanh, and the on par or better performance of MD-softmax against PMF for binary quantization, empirically validate that MD is useful even in a nonconvex stochastic setting. This hypothesis, along with our numerically stable form of MD, can be particularly useful for exploring other projections that are useful for quantization and/or network compression in general.\nThe training curves for our MD variants on the CIFAR-10 and CIFAR-100 datasets with ResNet-18 are shown in Fig. 3. The original MD variants show unstable behaviour during training, which we attribute to the logarithms and exponentials in the update rules. In addition, we believe the annealing hyperparameter also contributes to this instability. Regardless, by storing auxiliary variables, the MD updates are demonstrated to be quite stable. This clear distinction between MD variants emphasizes the significance of practical considerations when implementing MD, especially in NN optimization. For more experiments, such as training-curve comparisons to other methods and ternary quantization results, please refer to Appendix C." }, { "heading": "6 DISCUSSION", "text": "In this work, we have introduced an MD framework for NN quantization by deriving mirror maps corresponding to various projections useful for quantization. In addition, we have discussed a numerically stable implementation of MD, obtained by storing an additional set of auxiliary variables, and showed that this update is strikingly analogous to the popular STE based gradient method.
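For illustration, one step of the numerically stable MD-tanh-S update discussed above can be sketched in PyTorch as follows (a sketch under our own naming, not the released implementation): the gradient is taken at the projected point w = tanh(βw̃), and the descent step is applied directly to the auxiliary variable w̃, mirroring the STE analogy.

```python
import torch

def md_tanh_s_step(w_tilde, loss_fn, beta, lr):
    # Project the auxiliary weights onto (-1, 1) with the soft tanh projection.
    w = torch.tanh(beta * w_tilde).requires_grad_(True)
    # Gradient of the loss with respect to the projected point w (not w_tilde).
    (g,) = torch.autograd.grad(loss_fn(w), w)
    # Descend directly on the auxiliary weights; the tanh Jacobian is not used,
    # which is exactly the straight-through substitution discussed above.
    return w_tilde - lr * g

w_tilde = torch.zeros(3)
beta = 1.0
for _ in range(100):
    w_tilde = md_tanh_s_step(w_tilde, lambda w: ((w - 0.9) ** 2).sum(),
                             beta=beta, lr=0.1)
    beta *= 1.02                      # anneal beta to harden the projection
print(torch.sign(w_tilde))           # final quantized weights
```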
The superior performance of our MD formulation even with simple projections such as tanh and softmax is encouraging, and we believe MD would be a suitable framework not just for NN quantization but for network compression in general. Finally, some theoretical aspects, such as the use of time-varying mirror maps and the combination of MD with a stochastic optimizer such as Adam, are left unexplored in this paper; we intend to analyze them in future work." }, { "heading": "A DERIVING MIRROR MAPS FROM PROJECTIONS", "text": "Theorem A.1. Let X be a compact convex set and P : IR → C be an invertible function, where C ⊂ IR is a convex open set such that X = C̄ (C̄ denotes the closure of C). Now, if\n1. P is strictly monotonically increasing. 2. lim_{x→∂C} ‖P^{−1}(x)‖ = ∞ (∂C denotes the boundary of C).\nThen Φ(x) = ∫_{x0}^{x} P^{−1}(y) dy is a valid mirror map.\nProof. From the fundamental theorem of calculus, the gradient of Φ(x) satisfies ∇Φ(x) = P^{−1}(x). Since P is strictly monotonically increasing and invertible, P^{−1} is strictly monotonically increasing. Therefore, Φ(x) is strictly convex and differentiable. Now, from the definition of the projection and since it is invertible (i.e., P^{−1} is one-to-one and onto), ∇Φ(C) = P^{−1}(C) = IR. Therefore, together with condition (2), we can conclude that Φ(x) = ∫_{x0}^{x} P^{−1}(y) dy is a valid mirror map (refer to Definition 2.2 in the main paper). For the multi-dimensional case, we need the additional condition that the vector field P^{−1}(x) is conservative. Then, by the gradient theorem, there exists a mirror map Φ(x) = ∫_{x0}^{x} P^{−1}(y) dy for some arbitrary base point x0." }, { "heading": "B MD UPDATE DERIVATION FOR THE TANH PROJECTION", "text": "We now derive the MD update corresponding to the tanh projection. From Theorem A.1, the mirror map for the tanh projection can be written as:\nΦ(w) = ∫ P^{−1}(w) dw = (1/2β) [ (1 + w) log(1 + w) + (1 − w) log(1 − w) ] . (24)\nCorrespondingly, the Bregman divergence can be written as:\nD_Φ(w, v) = Φ(w) − Φ(v) − Φ′(v)(w − v) , where Φ′(v) = (1/2β) log((1 + v)/(1 − v)) , (25)\n= (1/2β) [ w log( (1 + w)(1 − v) / ((1 − w)(1 + v)) ) + log((1 − w)(1 + w)) − log((1 − v)(1 + v)) ] .\nNow, consider the proximal form of the MD update:\nw^{k+1} = argmin_{w ∈ (−1,1)} 〈η g^k, w〉 + D_Φ(w, w^k) . (26)\nThe idea is to find w such that the KKT conditions are satisfied. To this end, let us first write the Lagrangian of Eq. (26) by introducing dual variables y and z corresponding to the constraints w > −1 and w < 1, respectively:\nF(w, y, z) = η g^k w + (1/2β) [ w log( (1 + w)(1 − w^k) / ((1 − w)(1 + w^k)) ) + log((1 − w)(1 + w)) − log((1 − w^k)(1 + w^k)) ] + y(−w − 1) + z(w − 1) . (27)\nNow, setting the derivative with respect to w to zero:\n∂F/∂w = η g^k + (1/2β) log( (1 + w)(1 − w^k) / ((1 − w)(1 + w^k)) ) − y + z = 0 . (28)\nFrom the complementary slackness conditions,\ny(−w − 1) = 0 , since w > −1 ⇒ y = 0 , (29)\nz(w − 1) = 0 , since w < 1 ⇒ z = 0 .\nTherefore, Eq. (28) now simplifies to:\n∂F/∂w = η g^k + (1/2β) log( (1 + w)(1 − w^k) / ((1 − w)(1 + w^k)) ) = 0 , (30)\n(1 + w)(1 − w^k) / ((1 − w)(1 + w^k)) = exp(−2βηg^k) ,\n(1 + w)/(1 − w) = ((1 + w^k)/(1 − w^k)) exp(−2βηg^k) ,\nw = ( ((1 + w^k)/(1 − w^k)) exp(−2βηg^k) − 1 ) / ( ((1 + w^k)/(1 − w^k)) exp(−2βηg^k) + 1 ) .\nThe pseudocodes of the original (MD-tanh) and numerically stable (MD-tanh-S) versions for tanh are presented in Algorithms 1 and 2 respectively." }, { "heading": "C ADDITIONAL EXPERIMENTS", "text": "We first give the training curves of all compared methods and provide ternary quantization results as a proof of concept; later, we provide experimental details.\nConvergence Analysis.
The training curves for the CIFAR-10 and CIFAR-100 datasets with ResNet-18 are shown in Fig. 4. Notice that after the initial exploration phase (due to low β), the validation accuracies of our MD-tanh-S increase sharply, while this steep rise is not observed in regularization methods such as PQ. The training behaviour of both our stable MD variants (softmax and tanh) is quite similar.\nAlgorithm 1 MD-tanh\nRequire: K, b, {η^k}, ρ > 1, D, L\nEnsure: w∗ ∈ Q^m\n1: w^0 ∈ IR^m, β ← 1 . Initialization\n2: w^0 ← tanh(βw^0) . Projection\n3: for k ← 0, . . . , K do\n4: D_b = {(x_i, y_i)}_{i=1}^b ∼ D . Sample a mini-batch\n5: g^k ← ∇_w L(w; D_b)|_{w=w^k} . Gradient w.r.t. w at w^k (Adam based gradients)\n6: for j ← 1, . . . , m do\n7: w^{k+1}_j ← ( ((1 + w^k_j)/(1 − w^k_j)) exp(−2βη^k g^k_j) − 1 ) / ( ((1 + w^k_j)/(1 − w^k_j)) exp(−2βη^k g^k_j) + 1 ) . MD update\n8: end for\n9: β ← ρβ . Increase β\n10: end for\n11: w∗ ← sign(w^K) . Quantization\nAlgorithm 2 MD-tanh-S\nRequire: K, b, {η^k}, ρ > 1, D, L\nEnsure: w∗ ∈ Q^m\n1: w̃^0 ∈ IR^m, β ← 1 . Initialization\n2: for k ← 0, . . . , K do\n3: w^k ← tanh(βw̃^k) . Projection\n4: D_b = {(x_i, y_i)}_{i=1}^b ∼ D . Sample a mini-batch\n5: g^k ← ∇_w L(w; D_b)|_{w=w^k} . Gradient w.r.t. w at w^k (Adam based gradients)\n6: w̃^{k+1} ← w̃^k − η^k g^k . Gradient descent on w̃\n7: β ← ρβ . Increase β\n8: end for\n9: w∗ ← sign(w̃^K) . Quantization\nTernary Quantization. As a proof of concept for our shifted tanh projection (refer to Example 3.3), we also show results for ternary quantization with quantization levels Q = {−1, 0, 1} in Table 2. Note that the performance improvement of our ternary networks over their respective binary networks is marginal, as only 0 is added as the third quantization level. In contrast to us, the baseline method PQ (Bai et al. (2019)) also optimizes the quantization levels (differently for each layer) in an alternating optimization regime, rather than fixing them to Q = {−1, 0, 1}. Also, PQ does not ternarize the first convolution layer, the fully-connected layers, or the shortcut layers. We cross-validate hyperparameters both for the original PQ setup and for the setting equivalent to our MD variants where all the weights are optimized, and denote them as PQ* and PQ respectively.\nOur MD-tanh variant performs on par with, or sometimes even better than, the GD-tanh results, where the gradient is computed through the projection instead of performing MD. This again empirically validates the hypothesis that MD yields a good approximation for the task of network quantization. The better performance of PQ in their original quantization setup compared to our approach on CIFAR-10 can be attributed to their non-quantized layers and different quantization levels. We believe similar explorations are possible with our MD framework as well.\nExperimental Details. As mentioned in the main paper, the experimental protocol is similar to (Ajanthan et al. (2019)). To this end, the details of the datasets and their corresponding experiment setups are given in Table 3. For CIFAR-10/100 and TinyImageNet, VGG-16 (Simonyan & Zisserman (2015)) and ResNet-18 (He et al. (2016)) architectures adapted for the CIFAR dataset are used. In particular, for the CIFAR experiments, similar to (Lee et al. (2019)), the size of the fully-connected (FC) layers of VGG-16 is set to 512 and no dropout layers are employed. For TinyImageNet, the stride of the first convolutional layer of ResNet-18 is set to 2 to handle the image size (Huang et al. (2017)). In all the models, batch normalization (Ioffe & Szegedy (2015)) (with no learnable parameters) and
Only for the floating point networks (i.e., REF), we keep the learnable parameters for batch norm. Standard data augmentation (i.e., random crop and horizontal flip) is used.\nFor both of our MD variants, hyperparameters such as the learning rate, learning rate scale, annealing hyperparameter β and its schedule are crossvalidated from the range reported in Table 4 and the chosen parameters are given in the Tables 5 and 6. To generate the plots, we used the publicly available codes of BC3, PQ4 and PMF5.\nAll methods are trained from a random initialization and the model with the best validation accuracy is chosen for each method. Note that, in MD, even though we use an increasing schedule for β to enforce a discrete solution, the chosen network may not be fully-quantized (as the best model could be obtained in an early stage of training). Therefore, simple argmax rounding is applied to ensure that the network is fully-quantized.\n3https://github.com/itayhubara/BinaryNet.pytorch 4https://github.com/allenbai01/ProxQuant 5https://github.com/tajanthan/pmf\nDataset Image # class Train / Val. b K\nMNIST 28× 28 10 50k / 10k 100 20k CIFAR-10 32× 32 10 45k / 5k 128 100k CIFAR-100 32× 32 100 45k / 5k 128 100k TinyImageNet 64× 64 200 100k / 10k 128 100k\nTable 3: Experiment setup. Here, b is the batch size and K is the total number of iterations for all the methods.\nHyperparameters Fine-tuning grid\nlearning_rate [0.1, 0.01, 0.001, 0.0001] lr_scale [0.1, 0.2, 0.3, 0.5]\nbeta_scale [1.01, 1.02, 1.05, 1.1, 1.2] beta_scale_interval [100, 200, 500, 1000, 2000]\nTable 4: The hyperparameter search space for all the experiments. Chosen parameters are given in Tables 5 and 6." } ]
2019
null
SP:662edd2fd9437de887821ebf7de06415eba13fae
[ "Motivated by the observation that powerful deep autoregressive models such as PixelCNNs lack the ability to produce semantically meaningful latent embeddings and generate visually appealing interpolated images by latent representation manipulations, this paper proposes using Fisher scores projected to a reasonably low-dimensional space as latent embeddings for image manipulations. A decoder based on a CNN, a Conditional RealNVP, or a Conditional Pyramid PixelCNN is used to decode high-dimensional images from these projected Fisher score. Experiments with different autoregressive and decoder architectures are conducted on MNIST and CelebA datasets are conducted. ", "This paper focuses on the problem of interpolating between data points using neural autoregressive models. The core idea is that it is possible to use (a smaller-dimensional projection of) the Fisher score of the density function defined by the autoregressive model to represent data points in embedding space, and a neural decoder for mapping them back to input space. Experiments on both MNIST and Celeb suggest that this is a sensible method, and leads to smoother interpolations rather than just relying on the embeddings resulting from the network activations." ]
Deep autoregressive models are among the most powerful models that exist today, achieving state-of-the-art bits per dim. However, they lie at a strict disadvantage compared to latent variable models when it comes to controlled sample generation. Latent variable models such as VAEs and normalizing flows allow meaningful semantic manipulations in latent space, a capability autoregressive models lack. In this paper, we propose using Fisher scores as a method to extract embeddings from an autoregressive model to use for interpolation, and show that our method provides more meaningful sample manipulation compared to alternative embeddings such as network activations.
[]
[ { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale gan training for high fidelity natural image synthesis", "venue": "arXiv preprint arXiv:1809.11096,", "year": 2018 }, { "authors": [ "Sanjoy Dasgupta", "Anupam Gupta" ], "title": "An elementary proof of the johnson-lindenstrauss lemma", "venue": "Technical Report,", "year": 1999 }, { "authors": [ "Laurent Dinh", "Jascha Sohl-Dickstein", "Samy Bengio" ], "title": "Density estimation using real nvp", "venue": "arXiv preprint arXiv:1605.08803,", "year": 2016 }, { "authors": [ "Carl Doersch", "Abhinav Gupta", "Alexei A Efros" ], "title": "Unsupervised visual representation learning by context prediction", "venue": "In Proceedings of the IEEE International Conference on Computer Vision, pp", "year": 2015 }, { "authors": [ "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale adversarial representation learning", "venue": "arXiv preprint arXiv:1907.02544,", "year": 2019 }, { "authors": [ "Jeff Donahue", "Philipp Krähenbühl", "Trevor Darrell" ], "title": "Adversarial feature learning", "venue": "arXiv preprint arXiv:1605.09782,", "year": 2016 }, { "authors": [ "Spyros Gidaris", "Praveer Singh", "Nikos Komodakis" ], "title": "Unsupervised representation learning by predicting image rotations", "venue": "arXiv preprint arXiv:1803.07728,", "year": 2018 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Philippe-Henri Gosselin", "Naila Murray", "Hervé Jégou", "Florent Perronnin" ], "title": "Revisiting the fisher vector for fine-grained classification", "venue": "Pattern recognition letters,", "year": 2014 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron C Courville" ], "title": "Improved training of wasserstein gans", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Sepp Hochreiter" ], "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Irina Higgins", "Loic Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "beta-vae: Learning basic visual concepts with a constrained variational framework", "venue": null, "year": 2017 }, { "authors": [ "Tommi Jaakkola", "David Haussler" ], "title": "Exploiting generative models in discriminative classifiers", "venue": "In Advances in neural information processing systems,", "year": 1999 }, { "authors": [ "Tero Karras", "Samuli Laine", "Timo Aila" ], "title": "A style-based generator architecture for generative adversarial networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Durk P Kingma", "Prafulla Dhariwal" ], "title": "Glow: Generative flow with invertible 1x1 convolutions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Alexander Kolesnikov", 
"Christoph H Lampert" ], "title": "Pixelcnn models with auxiliary variables for natural image modeling", "venue": "In Proceedings of the 34th International Conference on Machine LearningVolume", "year": 2017 }, { "authors": [ "Ping Li", "Trevor J Hastie", "Kenneth W Church" ], "title": "Very sparse random projections", "venue": "In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2006 }, { "authors": [ "Jacob Menick", "Nal Kalchbrenner" ], "title": "Generating high fidelity images with subscale pixel networks and multidimensional upscaling", "venue": "arXiv preprint arXiv:1812.01608,", "year": 2018 }, { "authors": [ "Sebastian Mika", "Gunnar Ratsch", "Jason Weston", "Bernhard Scholkopf", "Klaus-Robert" ], "title": "Mullers. Fisher discriminant analysis with kernels. In Neural networks for signal processing", "venue": "IX: Proceedings of the 1999 IEEE signal processing society workshop (cat. no", "year": 1999 }, { "authors": [ "Mehdi Noroozi", "Paolo Favaro" ], "title": "Unsupervised learning of visual representations by solving jigsaw puzzles", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Aaron van den Oord", "Sander Dieleman", "Heiga Zen", "Karen Simonyan", "Oriol Vinyals", "Alex Graves", "Nal Kalchbrenner", "Andrew Senior", "Koray Kavukcuoglu" ], "title": "Wavenet: A generative model for raw audio", "venue": "arXiv preprint arXiv:1609.03499,", "year": 2016 }, { "authors": [ "Aaron van den Oord", "Nal Kalchbrenner", "Koray Kavukcuoglu" ], "title": "Pixel recurrent neural networks", "venue": "arXiv preprint arXiv:1601.06759,", "year": 2016 }, { "authors": [ "Deepak Pathak", "Philipp Krahenbuhl", "Jeff Donahue", "Trevor Darrell", "Alexei A Efros" ], "title": "Context encoders: Feature learning by inpainting", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Florent Perronnin", "Yan Liu", "Jorge Sánchez", "Hervé Poirier" ], "title": "Large-scale image retrieval with compressed fisher vectors", "venue": "In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition,", "year": 2010 }, { "authors": [ "Florent Perronnin", "Jorge Sánchez", "Thomas Mensink" ], "title": "Improving the fisher kernel for large-scale image classification", "venue": "In European conference on computer vision,", "year": 2010 }, { "authors": [ "Tim Salimans", "Ian Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training gans", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Tim Salimans", "Andrej Karpathy", "Xi Chen", "Diederik P Kingma" ], "title": "Pixelcnn++: Improving the pixelcnn with discretized logistic mixture likelihood and other modifications", "venue": "arXiv preprint arXiv:1701.05517,", "year": 2017 }, { "authors": [ "Jorge Sánchez", "Florent Perronnin", "Thomas Mensink", "Jakob Verbeek" ], "title": "Image classification with the fisher vector: Theory and practice", "venue": "International journal of computer vision,", "year": 2013 }, { "authors": [ "Karen Simonyan", "Omkar M Parkhi", "Andrea Vedaldi", "Andrew Zisserman" ], "title": "Fisher vector faces in the wild", "venue": "In BMVC,", "year": 2013 }, { "authors": [ "Aaron Van den Oord", "Nal Kalchbrenner", "Lasse Espeholt", "Oriol Vinyals", "Alex Graves" ], "title": "Conditional image generation with pixelcnn decoders", "venue": "In Advances in 
neural information processing systems,", "year": 2016 }, { "authors": [ "Richard Zhang", "Phillip Isola", "Alexei A Efros" ], "title": "Colorful image colorization", "venue": "In European conference on computer vision,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Over the last few decades, unsupervised learning has been a rapidly growing field, with the development of more complex and better probabilistic density models. Autoregressive generative models (Salimans et al., 2017; Oord et al., 2016a; Menick & Kalchbrenner, 2018) are one of the most powerful generative models to date, as they generally achieve the best bits per dim compared to other likelihood-based models such as Normalizing Flows or Variational Auto-encoders (VAEs) (Kingma & Welling, 2013; Dinh et al., 2016; Kingma & Dhariwal, 2018). However, it remains a difficult problem to perform any kind of controlled sample generation using autoregressive models. For example, flow models and VAEs are structured as latent variable models and allow meaningful manipulations in latent space, which can then be mapped back to original data distribution to produce generated samples either through an invertible map or a learned decoder.\nIn the context of natural image modelling, since discrete autoregressive models do not have continuous latent spaces, there is no natural method to apply controlled generation. When a latent space is not used, prior works generally perform controlled sample generation through training models conditioned on auxiliary information, such as class labels or facial attributes (Van den Oord et al., 2016). However, this requires a new conditional model to be trained for every new set of labels or features we want to manipulate, which is a time-consuming and tedious task. Ideally, we could structure an unconditional latent space that the autoregressive model could sample from, but where would this latent space come from?\nIn this paper, we propose a method of image interpolation and manipulation in a latent space defined by the Fisher score of both discrete and continuous autoregressive models. We use PixelCNNs to model the natural image distribution and use them to compute Fisher scores. In order to map\nback from Fisher score space to natural images, we train a decoder by minimizing reconstruction error. We show that interpolations in Fisher score space provide higher-level semantic meaning compared to baselines such as interpolations in PixelCNN activation space, which produce blurry and incoherent intermediate interpolations similar in nature to interpolations using pixel values. In order to evaluate interpolations quantitatively, for different mixing coefficients α, we calculate FID (Heusel et al., 2017) of the images decoded from a large sample of convex combinations of latent vectors.\nIn summary, we present two key contributions in our paper:\n• A novel method for natural image interpolation and semantic manipulation using autoregressive models through Fisher scores\n• A new quantitative approach to evaluate interpolation quality of images" }, { "heading": "2 RELATED WORK", "text": "There exists a substantial amount of work on natural image manipulation using deep generative models. VAEs (Higgins et al., 2017), BigBiGANS (Donahue et al., 2016; Donahue & Simonyan, 2019; Brock et al., 2018) and normalizing flows Kingma & Dhariwal (2018) provide learned latent spaces in which realistic image manipulation is a natural task. Interpolating through a latent space allows more natural transitions between images compared to pixel value interpolations, which naively overlay images on top of each other. 
Other prior methods learn hierarchical latent spaces on GANs and VAEs that encourage semantic manipulations and disentangle different levels of global characteristics, such as skin color, gender, hair color, and facial features (Karras et al., 2019).\nAside from using a latent space for controlled image generation, prior methods have also trained generative models conditioned on relevant auxiliary feature labels, such as class labels in ImageNet. Other prior works have used facial feature embeddings and binary facial attributes from CelebA to manipulate facial characteristics of generated images (Van den Oord et al., 2016).\nSimilarly, there has been a large amount of work on using Fisher information in machine learning. Many prior methods use the Fisher Kernel in algorithms such as kernel discriminant analysis and kernel support vector machines (Mika et al., 1999; Jaakkola & Haussler, 1999). More recent works introduce the Fisher vector, which uses Fisher scores of mixtures of Gaussians as feature vectors for downstream tasks (Sánchez et al., 2013; Simonyan et al., 2013; Perronnin et al., 2010b; Gosselin et al., 2014; Perronnin et al., 2010a). However, to our knowledge, there has been no work on using Fisher scores for deep generative modelling." }, { "heading": "3 BACKGROUND", "text": "" }, { "heading": "3.1 PIXELCNN", "text": "PixelCNNs (Oord et al., 2016b) are powerful autoregressive models used to model complex image distributions. They are likelihood-based generative models that directly model the data distribution p(x). This allows us to exactly compute the Fisher score instead of using an approximation, which would be necessary for GANs (Goodfellow et al., 2014) or VAEs, as they either implicitly optimize likelihood, or optimize a variational lower bound.\nPixelCNNs use a series of masked convolutions to define an autoregressive model over image data. Masked convolutions allow PixelCNNs to retain the autoregressive ordering when propagating values through layers. Over the past few years, many improved variants of PixelCNNs have been developed to address certain problems with the original PixelCNN design. Specifically for our work, we use Gated PixelCNNs (Van den Oord et al., 2016), which introduced horizontal and vertical convolutional blocks to remove the blind-spot in the original masked convolutions of PixelCNNs, and Pyramid PixelCNNs (Kolesnikov & Lampert, 2017), which use PixelCNN++ (Salimans et al., 2017) architectures in a hierarchical fashion, with each layer of the hierarchy modelling images at different down-sampled resolutions conditioned on the previous layer." }, { "heading": "3.2 FISHER SCORE", "text": "The Fisher score ℓ̇(x; θ) = ∇θ log pθ(x) is defined as the gradient of the log-probability of a sample with respect to the model parameters θ. Intuitively, Fisher scores describe the contributions of each parameter during the generation process. Similar samples in the data distribution should elicit similar Fisher scores. Similarity between samples can also be evaluated by taking a gradient step for one sample, and checking if the log-likelihood of another sample also increases. This intuition is re-affirmed in the underlying mathematical interpretation of Fisher information: the Fisher score maps data points onto a Riemannian manifold with a local metric given by the Fisher information matrix. 
The underlying kernel of this space is the Fisher kernel, defined as:\nℓ̇(x_i; θ)^T F^{−1} ℓ̇(x_j; θ) = (F^{−1/2} ℓ̇(x_i; θ))^T (F^{−1/2} ℓ̇(x_j; θ))\nwhere F^{−1} is the inverse Fisher information matrix, and F^{−1/2} its Cholesky decomposition. Applying F^{−1/2} is a normalization process, so the Fisher kernel can be approximated by taking the dot product of standardized Fisher scores. This is useful since computing F^{−1} is normally difficult. As such, we can see the collection of Fisher scores as a high dimensional embedding space in which more meaningful information about the data distribution can be extracted compared to raw pixel values. More complex deep generative models may learn parameters that encode information at higher levels of abstraction, which may be reflected as high-level features in Fisher scores." }, { "heading": "3.3 SPARSE RANDOM PROJECTIONS", "text": "It would be cumbersome to work in the very high-dimensional parameter spaces of deep generative models, so we use dimensionality reduction methods to make our methods more scalable. Sparse random projections allow for memory-efficient and scalable projections of high dimensional vectors. The Johnson-Lindenstrauss Lemma (Dasgupta & Gupta, 1999) states that under a suitable orthogonal projection, a set of n points in a d-dimensional space can be accurately embedded into some k-dimensional vector space, where k depends only on log n. Therefore, for suitably large k, we can preserve the norms and relative distances between projected points, even for very high dimensional data. Since sparse random matrices are nearly orthogonal in high-dimensional settings, we can safely and substantially reduce the dimensionality of our embedding spaces using this method.\nWe generate sparse random matrices according to Li et al. (2006). Given an n × k matrix to project, we define the minimum density of our sparse random matrix as d = 1/√k, and let s = 1/d. Suppose that we are projecting the data into p dimensions; then the k × p projection matrix P is generated according to the following distribution:\nP_ij = −√(s/p) with probability 1/(2s)\n0 with probability 1 − 1/s\n+√(s/p) with probability 1/(2s) (1)\nwhere P_ij is the element in the ith row and jth column of the projection matrix.\nAlgorithm 1: The procedural generation of the new interpolated embedding dataset for quantitative evaluation. Following FID conventions, we use a sample size of 50000 images.\nResult: α-interpolated dataset Dα\nInput: The dataset D, projection matrix P, pre-trained autoregressive model pθ(x), and the learned decoder Dec\nDα ← {}\nfor i ← 0 to 50000 do\nSample x1, x2 ∼ D, a random pair of samples\nz1 ← P∇θ log pθ(x1)\nz2 ← P∇θ log pθ(x2)\nẑ ← (1 − α)z1 + αz2\nx̂ ← Dec(ẑ)\nDα ← Dα ∪ {x̂}\nend" }, { "heading": "4 METHOD", "text": "We now describe our approach for natural image manipulation and interpolation using autoregressive models. Note that this method is not restricted to autoregressive models and can be applied to any likelihood-based model." }, { "heading": "4.1 EMBEDDING SPACE CONSTRUCTION", "text": "Given a trained autoregressive model pθ(x), we want to construct an embedding space that is more meaningful than raw pixel values. In this paper, our autoregressive models are exclusively variants of PixelCNNs since we are working in an image domain. 
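As an illustrative sketch (ours; model.log_prob, the function names, and the dense sampling of the projection matrix below are assumptions for clarity, not part of the paper or any specific library API), the two ingredients just introduced, per-sample Fisher scores and the sparse random projection of Eq. (1), might be implemented as follows:

```python
import numpy as np
import torch
from scipy import sparse

def fisher_score(model, x):
    # Per-sample Fisher score: gradient of log p_theta(x) w.r.t. all
    # parameters, flattened into one long vector. `model.log_prob` is an
    # assumed interface returning the log-likelihood of a batch.
    model.zero_grad()
    model.log_prob(x.unsqueeze(0)).sum().backward()
    return torch.cat([p.grad.detach().flatten() for p in model.parameters()])

def sparse_projection(n_params, proj_dim, seed=0):
    # Sparse random projection matrix following Eq. (1): density d = 1/sqrt(k)
    # over the original dimension k = n_params, and s = 1/d.
    rng = np.random.default_rng(seed)
    s = np.sqrt(n_params)
    entries = rng.choice([-1.0, 0.0, 1.0], size=(proj_dim, n_params),
                         p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])
    # The dense draw above is for clarity only; at tens of millions of
    # parameters one would sample only the nonzero positions directly.
    return sparse.csr_matrix(np.sqrt(s / proj_dim) * entries)

# Embedding of a sample, z_i = P * fisher_score(model, x_i):
# P = sparse_projection(sum(p.numel() for p in model.parameters()), 1024)
# z = torch.from_numpy(P @ fisher_score(model, x).numpy())
```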
Drawing inspiration from popular self-supervised methods (Zhang et al., 2016; Gidaris et al., 2018; Doersch et al., 2015; Pathak et al., 2016; Noroozi & Favaro, 2016), it may seem natural to take the output activations of one of the last few convolutional layers. However, in our experiments, we show that using output activations provides less meaningful embedding manipulation than using the Fisher score. This is especially the case for PixelCNNs, as PixelCNNs are known for generally encoding more local statistics than global features of images. Fisher scores, by contrast, lie in parameter space, which may encode more global semantics.\nIn order to construct the embedding space using Fisher scores, we initialize a sparse random matrix P, randomly generated following the distribution in equation 1. In practice, we found sparse random matrices to be the only feasible method for reasonable dimensionality reduction when scaling to more difficult datasets, where the sizes of the corresponding PixelCNNs grew to tens of millions of parameters. Finally, the embedding space is constructed as follows: each element zi in the new embedding space is computed as zi = P∇θ log pθ(xi) for each sample xi in the dataset." }, { "heading": "4.2 LEARNING A DECODER", "text": "Regardless of whether we use Fisher scores or network activations as embedding spaces, doing any sort of image manipulation in a generated embedding space requires a mapping back from zi to xi. To solve this problem, we learn a mapping back from zi to xi by training a network to model the density p(xi|zi). We try both supervised and unsupervised learning approaches, either directly learning a decoding model to minimize reconstruction error, or implicitly learning reconstruction by training a conditional generative model, such as another autoregressive model or a flow." }, { "heading": "4.3 INTERPOLATION EVALUATION METRIC", "text": "We introduce an evaluation procedure to quantitatively evaluate image interpolation quality. The quality of image interpolations can roughly be measured by how realistic any intermediate interpolated image is. Existing popular methods to measure image quality can largely be attributed to GAN metrics, such as Fréchet Inception Distance (FID) (Heusel et al., 2017) or Inception Score (IS) (Salimans et al., 2016). We choose to use FID as our evaluation metric since it is more generalizable to other datasets than IS. Interpolation quality is evaluated by computing FID for different mixing coefficients α. For α ∈ {0, 0.125, . . . , 0.5}, we generate a new dataset Dα according to Algorithm 1, and compute the FID between the true dataset and Dα. Under this evaluation metric, good interpolations result in lower FID scores, and bad interpolations produce peaked FID scores at α = 0.5." }, { "heading": "5 EXPERIMENTS", "text": "We evaluate our method on MNIST and CelebA and design experiments to answer the following questions:\n• How does image interpolation quality using Fisher scores compare to baseline embedding spaces such as PixelCNN activations?\n• Do Fisher scores contain high-level semantic information about the original image data?" }, { "heading": "5.1 MNIST", "text": "Setup We trained a standard PixelCNN with masked convolutions on binarized MNIST images as our autoregressive model. Images were binarized by sampling from Bernoulli random variables biased by grayscale pixel values. 
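Concretely, this binarization step admits a one-line sketch (assuming images are float tensors scaled to [0, 1]; the function name is ours):

```python
import torch

def binarize(images):
    # Sample each pixel from a Bernoulli whose success probability is the
    # grayscale intensity; `images` is assumed to be a float tensor in [0, 1].
    return torch.bernoulli(images)
```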
The decoder model consisted of a dense layer followed by 3 transposed convolutional layers that upscale the image to 28 × 28.\nProjection Method We primarily experimented with varying levels of density and projection dimensions for dense and sparse random projections. Figure 3 (a) shows reconstruction error of trained decoders for different hyperparameter combinations. For MNIST, we chose to use a dense matrix with a projection dimension of 1024.\nResults Figure 2 shows a comparison between using the PixelCNN activations of the second-to-last convolutional layer and projected Fisher scores as embedding spaces. In general, reconstruction using the activations is slightly more accurate than using Fisher scores. However, for interpolations, using activations produces intermediate images that are similar to interpolations using raw pixel values. On the other hand, interpolations with our method using Fisher scores produce a more natural transition between samples. These qualitative observations are also reflected in our quantitative measurements shown in Figure 2. The interpolations using activations reach a much higher FID (25.42) than interpolations using Fisher scores (5.43). Note that we only show α from 0 to 0.5, as FID scores for α = 0.75 are the same as for α = 0.25 since mixing coefficients are symmetric." }, { "heading": "5.2 CELEBA", "text": "Dataset We evaluate our method on 128x128 CelebA images. We follow the standard procedure of cropping raw 218 x 178 CelebA images into 128 x 128 CelebA images. All autoregressive models and decoders are trained to model 5-bit images, since this bit reduction results in faster learning with a negligible cost in visible image quality.\nAutoregressive Architecture In order to investigate the effect of model complexity on Fisher score representations, we experimented with three different variations of PixelCNN architectures: a standard PixelCNN with 5 layers (same as MNIST), a Gated PixelCNN, and a Pyramid PixelCNN, with about 1 million, 5 million, and 20 million parameters respectively. We trained all models with batch size 128 and learning rate 1e−3 using an Adam optimizer for 50 epochs. More architecture details can be found in the appendix.\nDecoder Architecture We experimented with different kinds of decoder architectures. We tested a convolutional decoder architecture with added residual blocks, and also various conditional generative models, such as conditional RealNVPs, Pyramid PixelCNNs, and WGANs (Gulrajani et al., 2017), which helped produce less noisy reconstructions. Out of all decoder architectures, we observed that the conditional RealNVP produced the best qualitative results. See the appendix for more samples from each decoder architecture.\nProjection Method We use sparse random matrices to project the Fisher scores, with the default density 1/√(n_features) and a projection dimension of 16384. We then apply PCA to reduce the dimensionality of the Fisher scores to 4096. We found that using PCA causes minimal degradation in reconstruction quality and significantly reduces the number of parameters in our decoder models. Using this process showed better quality reconstruction compared to directly applying a random projection to 4096 dimensions, which suggests that the random projection is the primary bottleneck in this method.\nReconstruction and Interpolation Figure 4 shows a comparison between reconstruction and interpolation for PixelCNN activations versus Fisher scores. We note that since we are using
We note that since we are using\na decoder to reconstruct images, interpolations do not begin and end with the true images of the dataset, and instead, are approximations constructed from the decoder. We see similar results an in our MNIST experiments. Reconstruction quality for activations is better than with Fisher scores. However, interpolations for Fisher scores is qualitatively more natural than those of activations. Figure 6 shows that similar interpolation characteristics were observed across different decoder architectures. Therefore, it is unlikely that the good interpolations arise solely from the stronger decoder networks, and instead meaningful semantic information is stored in the Fisher scores\nthemselves. These observations are supported by the quantitative results shown in Figure 7. For the RealNVP decoder, we can see that although using network activations began with a lower FID (better reconstruction), increasing levels of interpolation result in FID rising faster to a higher peak than the same interpolation level for Fisher scores.\nLooking at the effect of model complexity, we see that there is not too much of a difference in reconstruction and interpolation quality, but generally, less complex models allow slightly more accurate image reconstructions. Good interpolations for even the simpler models suggest that they are still learning a substantial amount of information about the data including high level semantic information.\nSemantic Manipulation Since CelebA has binary labels, we can also look at semantic manipulation in addition to interpolations. Given a binary attribute, such as the presence of black hair, a smile, or eyeglasses, we can extrapolate in embedding space in the same manner described in Glow (Kingma & Dhariwal, 2018). For some attribute, let zpos be the average of all embedding vectors with the attribute, and zneg be the average of all embedding vectors without the attribute. We can then apply or remove the attribute by manipulating a given embedding vector in the direction of δ = zpos − zneg or its negation. For our experiments, we found that scaling δ by a factor of 3 was enough to see visible change in our images. See Figure 5 for example of applying different attributes using both the activation and Fisher score embedding spaces. Overall, our method of using Fisher scores tends to encourage more natural semantic manipulations in adding a smile, or adding eyeglasses compared to activation embeddings, which tend to ”paste” generic smiles and glasses on top of each face." }, { "heading": "6 CONCLUSION", "text": "We proposed a new method to allow image interpolation and manipulation using autoregressive models. Our approach used Fisher scores an as underlying embedding space, and showed more natural interpolation compared to our baselines using activations of PixelCNNs or raw pixel interpolations. In addition, we introduced a new evaluation metric for interpolations by using FID to measure images at different levels of interpolation. Lastly, we note that this method is not restricted to images, and generalizes to any autoregressive model on any kind of dataset." }, { "heading": "7 APPENDIX", "text": "" }, { "heading": "7.1 PIXELCNN ARCHITECTURES", "text": "" }, { "heading": "7.1.1 MNIST", "text": "The PixelCNN architecture for MNIST was a standard PixeCNN Oord et al. (2016b), with 5 masked convolutional layers each kernel size 7, padding 3, and 64 filters. We used ReLU for our activation function." 
}, { "heading": "7.1.2 CELEBA", "text": "1-layer PixelCNN Architecture is the same as the PixelCNN from MNIST, but with only 1 masked convolutional layer.\n5-layer PixelCNN Architecture is the same as the PixelCNN from MNIST\nGated PixelCNN The Gated PixelCNN architecture is the same as described in Van den Oord et al. (2016). We use a filter size of 120 dimensions, with 5 masked convolutional layers of kernel size 7 and padding 2. Each masked convolutional layer uses vertical and horizontal stack of convolutions, with residual connections and gating mechanisms.\nPyramid PixelCNN The Pyramid PixelCNN has 3 layers. The bottom layer is a PixelCNN++ on 8 × 8 down-sampled image. The second layer is a conditional PixelCNN++ that generates 32× 32 images conditioned on 8× 8 images. The final layer generates 128× 128 images conditioned on 32× 32 images." }, { "heading": "7.2 DECODER ARCHITECTURES", "text": "" }, { "heading": "7.2.1 MNIST", "text": "ResBlock Architecture\nConv2d (channels, k = 1, no bias) BatchNorm, ReLU\nConv2d (channels, k = 3, padding 1, no bias) BatchNorm, ReLU\nConv2d (channels, k = 1, no bias) BatchNorm, ReLU\nShortcut Connection, ReLU\nDecoder Architecture\nx ∈ Rn Linear (1024), ReLU\nConvTranspose (128 filters, k = 5, stride 2), ReLU ResBlock(128)\nConvTranspose (32 filters, k = 5, stride 2) BatchNorm, ReLU\nConvTranspose (2 filter, k = 5, stride 2)" }, { "heading": "7.2.2 CELEBA", "text": "Convolutional Model\nx ∈ R1×1×4096 ConvTranspose (256 filters, k = 4)\n4x ResBlock, Upsample 2x BatchNorm, ReLU\nConv2d (32 filters, k = 1)\nWGAN Follows the same architecture described in https://github.com/LynnHo/ DCGAN-LSGAN-WGAN-GP-DRAGAN-Pytorch. To make it conditional, we concatenate the conditioning vector with the noise when generating. For the discriminator, we project the conditioning vector to the channel dimension of each ResBlock input, and add it broadcasted across the image as a bias.\nRealNVP We use the Multiscale RealNVP architecture described in Dinh et al. (2016). We apply conditioning as follows: for each ResNet in the affine coupling layers, we project the conditioning vector to the channel dimension size, and add it as a bias to the ResNet input." 
}, { "heading": "7.3 RECONSTRUCTIONS FOR DIFFERENT DECODERS", "text": "" }, { "heading": "7.4 INTERPOLATIONS FOR DIFFERENT DECODERS", "text": "(a) Autoregressive model: Gated PixelCNN, Embedding Space: Activations, Decoder: Convolutional Network\n(b) Autoregressive model: Gated PixelCNN, Embedding Space: Fisher Scores, Decoder: Convolutional Network\n(a) Autoregressive model: Gated PixelCNN, Embedding Space: Activations, Decoder: RealNVP\n(b) Autoregressive model: Gated PixelCNN, Embedding Space: Fisher Scores, Decoder: Conditional RealNVP\n(a) Autoregressive model: Gated PixelCNN, Embedding Space: Fisher Scores, Decoder: Conditional WGAN\n(b) Autoregressive model: Gated PixelCNN, Embedding Space: Fisher Scores, Decoder: Conditional Pyramid PixelCNN" }, { "heading": "7.5 FACIAL ATTRIBUTE MANIPULATION FOR DIFFERENT DECODERS", "text": "(a) Autoregressive model: Gated PixelCNN, Embedding Space: Activations, Decoder: Conditional RealNVP (b) Autoregressive model: Gated PixelCNN, Embedding Space: Activations, Decoder: Conditional RealNVP\n(c) Autoregressive model: Gated PixelCNN, Embedding Space: Activations, Decoder: Conditional RealNVP (d) Autoregressive model: Gated PixelCNN, Embedding Space: Activations, Decoder: Conditional RealNVP\n(a) Autoregressive model: Gated PixelCNN, Embedding Space: Fisher Score, Decoder: Conditional RealNVP (b) Autoregressive model: Gated PixelCNN, Embedding Space: Fisher Score, Decoder: Conditional RealNVP\n(c) Autoregressive model: Gated PixelCNN, Embedding Space: Fisher Score, Decoder: Conditional RealNVP (d) Autoregressive model: Gated PixelCNN, Embedding Space: Fisher Score, Decoder: Conditional RealNVP" } ]
2019
null
SP:9e36913574414b98fd6c4a66061cf0216dcc536b
[ "The paper proposes a method for few-shot object detection (FSOD), a variant of few-shot learning (FSL) where using a support set of few training images for novel categories (usually 1 or 5) not only the correct category labels are predicted on the query images, but also the object instances from the novel categories are localized and their bounding boxes are predicted. The method proposes a network architecture where the sliding window features that enter the RPN are first attenuated using support classes prototypes discovered using (a different?) RPN and found as matching to the few provided box annotations on the support images. The attenuation is by channel wise multiplication of the feature map and concatenation of the resulting feature maps (one per support class). After the RPN, ROI-pooling is applied on the concatenated feature map that is reduced using 1x1 convolution and original feature map (before attenuation) being added to the result. Following this a two FC layer classifier is fine-tuned on the support data to form the final ", "This paper is about the task of object detection in the setting of few-shots dataset. The problem is addressed in the learning scheme of meta-learning paradigm: the proposed meta-rcnn trains the popular faster-rcnn on several tasks of few shots object detection while the RPN and the object classification networks are meta-learned among the tasks. Compared to previous work the paper introduces the meta learning framework and several changes to the faster rcnn detector. A prototype representation is derived from the standard RPN network and its proposed bounding box. An attention mechanism choose the object of interest and is used to train the final RPN and classification network. Experiments on the popular Pascal Voc 2007 and ImageNet-FSOD show that the proposed system have state of the art performance." ]
Despite significant advances in object detection in recent years, training effective detectors in a small data regime remains an open challenge. Labelling training data for object detection is extremely expensive, and there is a need to develop techniques that can generalize well from small amounts of labelled data. We investigate this problem of few-shot object detection, where a detector has access to only limited amounts of annotated data. Based on the recently evolving meta-learning principle, we propose a novel meta-learning framework for object detection named “Meta-RCNN”, which learns the ability to perform few-shot detection via meta-learning. Specifically, Meta-RCNN learns an object detector in an episodic learning paradigm on the (meta) training data. This learning scheme helps acquire a prior which enables Meta-RCNN to do few-shot detection on novel tasks. Built on top of the Faster RCNN model, in Meta-RCNN, both the Region Proposal Network (RPN) and the object classification branch are meta-learned. The meta-trained RPN learns to provide class-specific proposals, while the object classifier learns to do few-shot classification. With its novel loss objectives and learning strategy, Meta-RCNN can be trained in an end-to-end manner. We demonstrate the effectiveness of Meta-RCNN in addressing few-shot detection on the Pascal VOC dataset and achieve promising results.
[]
[ { "authors": [ "Antreas Antoniou", "Harrison Edwards", "Amos Storkey" ], "title": "How to train your maml", "venue": "arXiv preprint arXiv:1810.09502,", "year": 2018 }, { "authors": [ "Hao Chen", "Yali Wang", "Guoyou Wang", "Yu Qiao" ], "title": "Lstd: A low-shot transfer detector for object detection", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Jifeng Dai", "Haozhi Qi", "Yuwen Xiong", "Yi Li", "Guodong Zhang", "Han Hu", "Yichen Wei" ], "title": "Deformable convolutional networks", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Xuanyi Dong", "Liang Zheng", "Fan Ma", "Yi Yang", "Deyu Meng" ], "title": "Few-example object detection with model communication", "venue": "In TPAMI,", "year": 2018 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Spyros Gidaris", "Nikos Komodakis" ], "title": "Object detection via a multi-region and semantic segmentation-aware cnn model", "venue": "In ICCV,", "year": 2015 }, { "authors": [ "Ross Girshick", "Jeff Donahue", "Trevor Darrell", "Jitendra Malik" ], "title": "Rich feature hierarchies for accurate object detection and semantic segmentation", "venue": "In CVPR,", "year": 2014 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2016 }, { "authors": [ "Andrej Karpathy", "George Toderici", "Sanketh Shetty", "Thomas Leung", "Rahul Sukthankar", "Li FeiFei" ], "title": "Large-scale video classification with convolutional neural networks", "venue": "In CVPR,", "year": 2014 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In NeurIPS,", "year": 2012 }, { "authors": [ "Zhenguo Li", "Fengwei Zhou", "Fei Chen", "Hang Li" ], "title": "Meta-sgd: Learning to learn quickly for fewshot learning", "venue": "arXiv preprint arXiv:1707.09835,", "year": 2017 }, { "authors": [ "Tsung-Yi Lin", "Piotr Dollár", "Ross Girshick", "Kaiming He", "Bharath Hariharan", "Serge Belongie" ], "title": "Feature pyramid networks for object detection", "venue": "In CVPR,", "year": 2017 }, { "authors": [ "Tsung-Yi Lin", "Priya Goyal", "Ross Girshick", "Kaiming He", "Piotr Dollár" ], "title": "Focal loss for dense object detection", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Wei Liu", "Dragomir Anguelov", "Dumitru Erhan", "Christian Szegedy", "Scott Reed", "Cheng-Yang Fu", "Alexander C Berg" ], "title": "SSD: Single shot multibox detector", "venue": null, "year": 2016 }, { "authors": [ "Sachin Ravi", "Hugo Larochelle" ], "title": "Optimization as a model for few-shot learning", "venue": null, "year": 2016 }, { "authors": [ "Joseph Redmon", "Ali Farhadi" ], "title": "Yolo9000: Better, faster, stronger", "venue": "In arXiv preprint arXiv:1612.08242,", "year": 2016 }, { "authors": [ "Joseph Redmon", "Santosh Divvala", "Ross Girshick", "Ali Farhadi" ], "title": "You only look once: Unified, real-time object detection", "venue": null, "year": 2016 }, { "authors": [ "Mengye Ren", "Eleni Triantafillou", "Sachin Ravi", "Jake Snell", "Kevin Swersky", "Joshua B Tenenbaum", "Hugo Larochelle", "Richard S Zemel" ], "title": "Meta-learning for semi-supervised few-shot classification", "venue": "arXiv preprint arXiv:1803.00676,", "year": 2018 }, { "authors": [ "Shaoqing Ren", 
"Kaiming He", "Ross Girshick", "Jian Sun" ], "title": "Faster r-cnn: Towards real-time object detection with region proposal networks", "venue": "NeurIPS,", "year": 2015 }, { "authors": [ "Eli Schwartz", "Leonid Karlinsky", "Joseph Shtok", "Sivan Harary", "Mattias Marder", "Sharathchandra Pankanti", "Rogerio Feris", "Abhishek Kumar", "Raja Giries", "Alex M Bronstein" ], "title": "Repmet: Representative-based metric learning for classification and one-shot object detection", "venue": null, "year": 2019 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Oriol Vinyals", "Charles Blundell", "Timothy Lillicrap", "Daan Wierstra" ], "title": "Matching networks for one", "venue": "Vision and Pattern Recognition", "year": 2018 }, { "authors": [ "Shifeng Zhang", "Longyin Wen", "Xiao Bian", "Zhen Lei", "Stan Z Li" ], "title": "Advances in neural information processing", "venue": null, "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Object detection is the task of identifying various objects in a given image, and localizing them with a bounding box. It is a widely studied problem in computer vision, and following the success deep convolutional neural networks (DCNN) in image classification (Karpathy et al., 2014; Krizhevsky et al., 2012), recent years have witnessed remarkable progress made in object detection based on deep learning. A series of detection algorithms based on DCNNs have been proposed which achieve state-of-the-art results on public detection benchmark datasets (Gidaris & Komodakis, 2015; Girshick et al., 2014; Ren et al., 2015; Lin et al., 2017a;b; Liu et al., 2016; Redmon & Farhadi, 2016). However, all these methods are data hungry, and require large amounts of annotated data to learn an immense number of parameters. For object detection, annotating the data is every expensive (much more than image classification), as it requires not only identifying the categorical labels for every object in the image, but also providing accurate localization information through bounding box coordinates. Moreover, in some applications, such as medical research, it’s often impossible to even collect sufficient data to annotate. This warrants a need for effective detectors that can generalize well from small amounts of annotated data. We refer to the problem of learning detectors from limited labeled data as few-shot detection. For example, in one-shot detection, only one image is available with objects of interest annotated, and a detector needs to train on just this image and generalize. When presented with such small amounts of annotated data, traditional detectors tend to suffer from overfitting. Inspired by the fact that humans can learn a new concepts from little annotated data, we aim to develop a new few-shot detection algorithm.\nThere have been several efforts exploring few-shot learning (Vinyals et al., 2016; Finn et al., 2017; Snell et al., 2017). Many of them follow the principle of meta learning. In meta learning, a set of tasks in a few-shot setting is simulated from a large corpus of annotated data, and the model is optimized to perform well over these few shot tasks. This trains the model to learn how to solve few-shot tasks. However, most existing efforts of meta learning are mainly focused on classification. Adapting few-shot classification algorithms directly for few-shot detection (e.g. by replacing the\nregion classification branch of detector with a meta-learner) is non-trivial because of two major concerns: i). Detection algorithms not only require classifying objects but also need to correctly localize objects in cluttered backgrounds by using a Region Proposal Network (RPN) and bounding box (bbox) regressors. It is thus also desirable that both RPN and bbox regressors should also be capable enough to adapt to few-shot settings. ii). For a given task with one (or few) annotated image(s), the annotated image may contain objects from several classes. But only a few objects of interest are annotated. The goal of the few-shot detector is to detect only these objects of interest. Unfortunately, a naively trained meta-detector’s RPN would detect all objects (even objects from classes not of interest) and try to classify them as one of the classes of interest rather than background images (See Figure 1 for an example).\nWe aim to address these challenges by proposing a novel method for solving few-shot detection using the meta-learning paradigm. 
We develop Meta-RCNN, an end-to-end trainable meta object detector. The proposed Meta-RCNN follows the episodic learning paradigm of meta-learning (Vinyals et al., 2016), where, based on a given meta-train dataset, multiple few-shot tasks are simulated. For a given task, we first construct a class prototype for each of the annotated object categories in the support set. Using these prototypes, a class-specific feature map of the entire image is constructed, i.e., we obtain a feature map of the entire image for each of the class prototypes. These feature maps are tailored to detect only objects of the class of the prototype, by giving higher attention to the appropriate regions of the image containing that object. Finally, all feature maps are merged to produce a combined feature map, followed by an RPN, and then classification and bbox regression layers.\nMeta-RCNN learns a few-shot detector where the whole framework can be trained via meta-learning in an end-to-end manner. In contrast to a naive adaptation of meta-learning for classification into an object detection framework, Meta-RCNN learns the few-shot classifier, the RPN, and the bbox regressor in the meta-learning setting, thus making all three components suitable for handling few-shot scenarios. Moreover, Meta-RCNN learns a class-specific feature map for a given class prototype, enabling easier distinction between classes of interest and backgrounds (where other objects in the image from classes not of interest are considered as background). We demonstrate the effectiveness of Meta-RCNN on two few-shot detection benchmarks, Pascal VOC and the animal subset of ImageNet, and show that Meta-RCNN significantly improves detection results in few-shot settings." }, { "heading": "2 RELATED WORK", "text": "Generic Object Detection. Object detection based on deep learning can be broadly divided into two families: two-stage detectors and one-stage detectors. Two-stage detectors such as RCNN (Girshick et al., 2014), Fast RCNN (Gidaris & Komodakis, 2015) and Faster RCNN (Ren et al., 2015) first generate a sparse set of proposal candidates, from each of which a fixed-length feature vector is extracted, followed by a categorical classifier and a bounding box regressor. Two-stage detection algorithms have achieved state-of-the-art results on many public benchmarks (He et al., 2016; Lin et al., 2017a), but are relatively slower than one-stage detectors. One-stage detectors such as SSD (Liu et al., 2016), Yolo (Redmon et al., 2016; Redmon & Farhadi, 2016) and RefineDet (Zhang et al., 2018) directly generate categorical proposals from the feature map and thus avoid cascaded region classifiers. One-stage detectors can achieve real-time inference speed, but their detection accuracy is often inferior to two-stage detection algorithms. Both detection families assume access to a large set of annotated data, and are not suitable for scenarios where the model has access to only small amounts of annotated training data. In contrast, our proposed Meta-RCNN method addresses the detection problem in the few-shot setting, and achieves promising results.\nMeta Learning for few-shot classification. Few-shot learning has been widely explored in image classification, and currently the most promising methods are mainly based on meta learning. Ravi & Larochelle (2016) optimized the base model via an LSTM-based meta-learner which simulates the traditional SGD optimization method. Finn et al. 
(Finn et al., 2017) proposed MAML, which learns a good feature initialization that can adapt to a new task in only one gradient step update. Based on MAML, Li et al. (2017) proposed Meta-SGD, which learns a set of learnable parameters to control the gradient steps of different tasks. Learning an initialization is potentially a very general idea for few-shot learning; however, the training process can be unstable (Antoniou et al., 2018), especially for complex problems such as detection. Vinyals et al. (Vinyals et al., 2016; Snell et al., 2017) proposed a matching network which follows a non-parametric principle by learning a differentiable K-Nearest Neighbour model. Ren et al. (Ren et al., 2018) extended this idea to semi-supervised learning by self-learning from unlabeled data. Sung et al. (Sung et al., 2018) proposed a relation network to automatically learn the optimal distance metric. These metric-learning based methods are easy to train and effective in addressing few-shot classification. However, directly adapting these techniques for detection is very challenging, as just replacing the object classification branch of a detector with a meta-learner is not sufficient, and training the RPN under a meta-learning paradigm is non-trivial.\nFew-shot Object Detection. Few-shot detection has received considerably less interest from the community. Dong et al. (Dong et al., 2018) addressed few-shot detection using large-scale unlabeled data. Their model is based on a semi-supervised method which extracts knowledge from an unlabeled dataset to enrich the training dataset via self-paced learning and multi-modal learning. However, their method may be misled by incorrect predictions from the initial model and also requires re-training the model for every new task. Chen et al. (Chen et al., 2018) proposed a Low-shot Transfer Detector (LSTD) using regularization to transfer knowledge from a source domain to a target domain by minimizing the gap between the two domains. RepMet (Schwartz et al., 2019) is a few-shot detection algorithm based on meta learning. It replaces the fully connected classification layer of a standard detector with a modified prototypical network. However, it suffers from two limitations: the RPN and bbox regression are not able to handle few-shot settings, and it has difficulties in distinguishing object classes of interest from background (including object classes not of interest). Our proposed method is also based on meta learning but can be optimized end-to-end and addresses these limitations to perform effective few-shot detection." }, { "heading": "3 PRELIMINARIES", "text": "" }, { "heading": "3.1 PROBLEM SETTING", "text": "In this section we present the formal problem setting of few-shot detection investigated in our paper. Assume we have two datasets L and S, where L is a large-scale annotated dataset with Lc categories and S is a dataset with only a few annotated images with Sc categories. There is no category overlap between the two datasets: Lc ∩ Sc = ∅. Our goal is to learn a robust detector based on the annotated data in L and S to detect unlabeled objects of S.\nThe proposed Meta-RCNN aims to learn a general detection framework which can be quickly adapted to different detection tasks that have only a few labeled samples. We follow the standard training scheme of meta learning, which splits the whole learning stage into two parts, meta-training and meta-testing, and the model is optimized over multiple few-shot tasks simulated from the meta-training data. 
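As a concrete illustration of this episodic scheme, the task sampling that the next paragraph formalizes might look as follows in code. This is a minimal sketch; the function and variable names (sample_episode, pool) are hypothetical, not the authors' implementation.

```python
import random

def sample_episode(pool, K=5, N=1, Q=1):
    """Sample one K-way N-shot detection task (support set + query set).

    pool: dict mapping each category to its list of annotated images.
    Support images carry annotations only for the K sampled categories.
    """
    categories = random.sample(list(pool.keys()), K)
    support, query = [], []
    for c in categories:
        imgs = random.sample(pool[c], N + Q)
        support += [(img, c) for img in imgs[:N]]  # N annotated images per class
        query += [(img, c) for img in imgs[N:]]    # Q held-out images per class
    return support, query

# Meta-training repeatedly draws such tasks from L, adapts to the support
# set, and updates the model from the loss suffered on the query set.
```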
Specifically, during meta-training, few-shot detection tasks are sampled from L, and each task contains a support set and a query set. For the i-th task, K ways (or categories) are randomly selected from Lc, with N images per category, to build the support set $T^{L,s}_i$. Similarly, Q images per category are randomly selected to build the query set $T^{L,q}_i$. The support set $T^{L,s}_i$ and query set $T^{L,q}_i$ constitute a complete task extracted from L (see Figure 1):\n$T^L_i = \{ T^{L,s}_i, T^{L,q}_i \}$ (1)\nwhere both the support set and query set are used to train the meta-model. The meta-model optimizes the base model with respect to the support set and makes predictions on the query set. Finally, the loss suffered on the query set is used to update the model. In the meta-testing stage, similar to the meta-training stage, a set of few-shot tasks is sampled from S:\n$T^S_i = \{ T^{S,s}_i, T^{S,q}_i \}$ (2)\nwhere $T^{S,s}_i$ is the support set and $T^{S,q}_i$ is the query set. The model makes predictions on the query set, and these results are averaged across several few-shot tasks to evaluate the expected performance of the few-shot detector over a variety of novel few-shot detection tasks." }, { "heading": "3.2 OVERVIEW OF FASTER RCNN", "text": "Meta-RCNN is based on two-stage region-based object detection algorithms. In this paper, we use the state-of-the-art detection algorithm Faster RCNN (Ren et al., 2015) as our base model, which is widely used in the computer vision community. Faster RCNN consists of two components: an RPN (Region Proposal Network) for proposal generation and Fast RCNN for region classification. The RPN generates a sparse set of proposals which are classified into different categories by the region classifiers. Specifically, the RPN extracts a feature vector from each region by scanning the whole image using sliding windows. This is followed by a binary classifier (objects vs. backgrounds) and a bounding box regressor, where easy negatives are filtered. For each proposal, a fixed-length feature vector is extracted using an ROI Pooling layer. This vector is then fed into a sequence of fully connected layers branching into two outputs. One output represents the softmax probability over K + 1 classes (K target classes and one background class), and the other encodes four real values for refining the bounding box position. We denote u and v as the category and bounding box labels respectively, p as the predicted probability distribution over the K + 1 classes, $t^u$ as the predicted bounding box for class u, and $\lambda$ as a trade-off parameter. $L_{cls}$ represents the softmax loss and $L_{loc}$ represents the Smooth L1 loss function. The entire network can be optimized in an end-to-end manner by minimizing the loss $L(p, u, t^u, v)$:\n$L(p, u, t^u, v) = L_{cls}(p, u) + \lambda [u \geq 1] L_{loc}(t^u, v)$ (3)\nHowever, two-stage detectors require a lot of training samples to obtain good performance. In the next section, we present the proposed Meta-RCNN, which builds on Faster RCNN and is specifically designed to address few-shot detection." },
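As a concrete instance of the region loss in Eq. (3), a minimal PyTorch-style sketch is given below. The tensor shapes and the simple foreground masking are illustrative assumptions, not the exact implementation used by Faster RCNN or this paper.

```python
import torch
import torch.nn.functional as F

def region_loss(p, u, t_u, v, lam=1.0):
    """Sketch of Eq. (3): L = L_cls(p, u) + λ[u >= 1] L_loc(t^u, v).

    p:   (R, K+1) class scores for R proposals (index 0 = background)
    u:   (R,)     ground-truth class indices
    t_u: (R, 4)   predicted box offsets for the ground-truth class
    v:   (R, 4)   ground-truth box regression targets
    """
    cls_loss = F.cross_entropy(p, u)                        # L_cls: softmax loss
    fg = (u >= 1).float()                                   # Iverson bracket [u >= 1]
    per_box = F.smooth_l1_loss(t_u, v, reduction="none").sum(dim=1)
    loc_loss = (fg * per_box).mean()                        # L_loc on foreground only
    return cls_loss + lam * loc_loss
```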
{ "heading": "4 META-RCNN", "text": "" }, { "heading": "4.1 OVERVIEW", "text": "We now present our proposed method, Meta-RCNN, for few-shot detection (see Figure 2 for an overview). Meta-RCNN is trained with multiple few-shot tasks simulated from the meta-train dataset. For each episode, a few object categories of interest are assumed to be annotated (the support set). During meta-training, a prototype is computed for each object category. For each of these category prototypes, a class-specific feature map is generated by a class-attention module which combines the prototype information with the feature map of the entire image. This feature map highlights only the signals of the class of interest and suppresses information from other classes. Finally, the feature maps of all target categories are combined, followed by the RPN and RCNN branches to make predictions on the query set. Based on the loss on the query set, the model is updated.\nMeta-RCNN is a general paradigm for training few-shot detectors via meta-learning. For each task, irrelevant categories and background can be filtered by the attention module, and the final generated feature map learns a general representation for the given few-shot detection task. Compared to other variants (Schwartz et al., 2019) which directly replace the FC classification branch with a meta-learning branch, Meta-RCNN is more general, and the whole framework, including the RPN and bbox regressors, can be optimized, making all the components few-shot capable. Next, we present the details of the model." }, { "heading": "4.2 META-TRAINING", "text": "During Meta-Training, multiple K-way N-shot tasks are simulated from the annotated dataset L. To fit in memory, in the Meta-Training stage we train the model using 5-way 1-shot tasks with only 5 query images (1 query image per class), resulting in a total of 10 images per task. With this, implementing the meta-training is not too difficult. For each task $T^L_i$, the images of the support set $T^{L,s}_i$ are fed into Faster RCNN to generate region features. For each of the object categories of interest (those assumed to be annotated in the support images), a prototype $P_c$ is generated based on the corresponding region features:\n$P_c = \frac{1}{N_c} \sum_{i=1}^{N_c} r^i_c$ (4)\nwhere $P_c$ denotes the prototype of class c, and $r^i_c$ denotes the region feature of the i-th annotated object from class c. Based on these generated prototypes, the images of the query set $T^{L,q}_i$ are fed into the same Faster RCNN model, and we obtain the image feature map before the RPN and ROI Pooling. For each category, a class-specific feature map is learned based on the input query image and its corresponding prototype. We use a learnable class-attention module here to highlight the signals of the target class and suppress the signals of other categories. The class-attention module is based on basic channel-wise multiplication. The prototype $P_c$ is encoded by an FC layer $\phi$, which is then combined with the feature map $f$ by element-wise multiplication:\n$F_c = f \odot \phi(P_c)$ (5)\nFor each category c, one new feature map $F_c$ is generated which aims to highlight the objects of class c. Next, all these new feature maps are combined into one feature map F:\n$F = \mathrm{concat}\{F_1, F_2, ..., F_K\}$ (6)\nF learns a general representation of the K classes, where each sub-channel contains information about a different class of interest. Based on the new feature map F, a 1x1 conv layer is used to reduce the computation cost, followed by the RPN to produce region proposals. In order to recover the information lost in the attention module, we finally combine the newly generated feature map with the original feature map by element-wise summation, and crop region features based on the resulting map. Finally, a (K+1)-way region classifier and bbox regressors are optimized w.r.t. the label information from the query set $T^{L,q}_i$:\n$L(T^{L,q}_i; T^{L,s}_i, \theta) = L_{loc} + L_{cls} + L_{RPN}$ (7)\nwhere $\theta$ represents the parameters of Meta-RCNN." },
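The class-attention pipeline of Eqs. (4)-(6), together with the 1x1 channel reduction and the element-wise summation with the original map, can be sketched in PyTorch as follows. The module name, layer sizes and number of ways are assumptions for illustration; this is not the authors' released code.

```python
import torch
import torch.nn as nn

class ClassAttention(nn.Module):
    """Sketch of the class-attention pipeline of Eqs. (4)-(6): the query image
    feature map is modulated channel-wise by each encoded class prototype, the
    K class-specific maps are concatenated, and a 1x1 conv reduces channels.
    Dimensions are illustrative assumptions, not the paper's exact config."""

    def __init__(self, feat_dim=512, num_ways=5):
        super().__init__()
        self.phi = nn.Linear(feat_dim, feat_dim)                   # FC encoder φ
        self.reduce = nn.Conv2d(num_ways * feat_dim, feat_dim, 1)  # 1x1 conv

    def forward(self, f, region_feats):
        """f: (B, D, H, W) query feature map before the RPN / ROI pooling.
        region_feats: list of K tensors, each (N_c, D) support region features."""
        maps = []
        for r in region_feats:
            p = r.mean(dim=0)                      # Eq. (4): prototype P_c
            a = self.phi(p).view(1, -1, 1, 1)      # encode prototype with φ
            maps.append(f * a)                     # Eq. (5): F_c = f ⊙ φ(P_c)
        F_all = torch.cat(maps, dim=1)             # Eq. (6): concat F_1 ... F_K
        # element-wise summation with f recovers information lost by attention
        return self.reduce(F_all) + f
```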
{ "heading": "4.3 META-TESTING", "text": "During meta-testing, we sample few-shot detection tasks from S. The annotations of the support set are available, and we make predictions on the query set to evaluate the performance of Meta-RCNN. For each task $T^S_i$, prototypes are generated from the support set $T^{S,s}_i$, which are later used to generate new class-specific feature maps for the images of the query set $T^{S,q}_i$. In this stage, we also finetune the model based on the labeled images of the support set. The finetuning operation addresses the learning limitation of non-parametric methods when more labeled images are provided. Finally, we evaluate the output on the query set as a traditional detection problem:\n$p, u = \mathrm{MetaRCNN}(T^{S,q}_i; T^{S,s}_i, \theta)$ (8)\nwhere p is the set of class probability vectors and u is the set of bounding box locations." }, { "heading": "5 EXPERIMENTS", "text": "" }, { "heading": "5.1 DATASETS AND IMPLEMENTATION DETAILS", "text": "Benchmark Datasets: We construct two benchmark testbeds to facilitate performance evaluation for few-shot object detection in meta-learning settings. The first is on Pascal VOC2007, and the second is on the animal subset of the ImageNet-LOC dataset. Table 1 gives details of these datasets. Pascal VOC2007 has 20 categories with 5k images in the trainval set and 5k images in the test set. A subset of 10 categories is randomly selected from the VOC2007 trainval set for Meta-Training, and the remaining 10-category subset of the VOC2007 test set is used for Meta-Testing. Images without target object categories are removed. For the ImageNet-FSOD benchmark, we use the subset of the first 100 animal classes of ImageNet in the Meta-Training stage and the subset of the remaining 214 animal species in ImageNet-LOC in the Meta-Testing stage. The model used in the VOC-FSOD benchmark is pre-trained on ImageNet, while in the ImageNet-FSOD benchmark the model is pre-trained on the MSCOCO dataset with 115k images in 80 categories.\nTask Generation: For each benchmark, Meta-RCNN is evaluated on multiple tasks with different K-way N-shot few-shot settings (N annotated images per category). For the VOC-FSOD benchmark, we have 3 few-shot settings to evaluate Meta-RCNN: 5-way 1-shot, 5-way 3-shot and 5-way 5-shot. In detection, a single image has more than one object, and proposal generation automatically increases the number of training samples, so the real number of training samples is about 5 times larger than N. On the ImageNet-FSOD benchmark, we mainly follow (Chen et al., 2018) and (Schwartz et al., 2019) with two settings: 50-way 1-shot and 50-way 5-shot.\nMeta-model Parameter Setting: In the Meta-Training stage, a total of 1000 and 5000 distinct tasks are generated in the VOC-FSOD and ImageNet-FSOD benchmarks respectively. There are 10 images per class in the query set, used to update the model weights for 10 epochs. The initial learning rate is set to 1e-3 and is reduced to 1e-4 after 600 tasks and 3500 tasks in the VOC-FSOD and ImageNet-FSOD benchmarks respectively. We set the batch size to 5 during the query update.\nBasic Detection Parameter Setting: The parameter settings in Meta-RCNN are identical to those of vanilla Faster RCNN. Proposals whose overlap with objects is larger than 0.5 are considered positive, and those with overlap less than 0.3 are negative. During Meta-Training, the top 128 most confident proposals are selected for training, and during evaluation the 300 proposals with the largest confidence scores are selected. We build our Meta-RCNN based on Faster RCNN with VGG16 (Simonyan & Zisserman, 2014) and ResNet50 (He et al., 2016) backbones pretrained on ImageNet. 
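The proposal labeling rule just stated (overlap larger than 0.5 positive, less than 0.3 negative) can be sketched as follows; the function name and the NumPy formulation are illustrative assumptions, not the actual implementation.

```python
import numpy as np

def label_proposals(ious, pos_thresh=0.5, neg_thresh=0.3):
    """Assign training labels to proposals from their IoU with ground-truth
    boxes, using the 0.5 / 0.3 thresholds stated above.

    ious: (R, G) IoU matrix between R proposals and G ground-truth boxes.
    Returns per-proposal labels: 1 = positive, 0 = negative, -1 = ignored.
    """
    best_iou = ious.max(axis=1)                 # best-matching ground truth
    labels = -np.ones(best_iou.shape[0], dtype=np.int64)
    labels[best_iou >= pos_thresh] = 1          # positive training samples
    labels[best_iou < neg_thresh] = 0           # easy negatives (background)
    return labels
```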
Model Evaluation: We evaluate Meta-RCNN over multiple tasks in few-shot settings, following the standard evaluation protocol of meta learning. More specifically, in the evaluation stage, 200 K-way N-shot tasks are sampled from the dataset S, and the images in the query set are evaluated. Mean average precision (mAP) over the selected K categories is used as the evaluation metric." }, { "heading": "5.2 RESULTS ON VOC-FSOD BENCHMARK", "text": "We validate the effectiveness of Meta-RCNN on the VOC-FSOD benchmark, where a subset of 10 VOC categories is selected for Meta-Training and the other ten categories are used for Meta-Testing. For a fair comparison, these two subsets are split to be as similar as possible. For example, we keep animal categories on both sides since they share similar semantic information (see the appendix for details). We set up three baselines on the VOC-FSOD benchmark to compete with the proposed Meta-RCNN.\n• vanilla FRCN (Ren et al., 2015): the vanilla Faster RCNN, which is the most popular object detection algorithm with competitive performance on many benchmarks. Vanilla FRCN is not designed for the few-shot detection problem, but we include this baseline by fine-tuning the detector on the few-shot training data.\n• LSTD (Chen et al., 2018) is a few-shot detection algorithm based on Faster RCNN. LSTD uses categorical regularization terms which transfer knowledge from the dataset L to the dataset S.\n• FRCN-PN is a modified version of Faster RCNN using meta-learning, which replaces the final FC classification layer with a non-parametric prototypical network (PN), sharing the same principle as RepMet (Schwartz et al., 2019).\nAll three baselines as well as the proposed Meta-RCNN are based on the VGG16 (Simonyan & Zisserman, 2014) backbone. For Regular FRCN and LSTD, we first train a global Faster RCNN during Meta-Training; the pretrained detector models are then adapted to different tasks during Meta-Testing. During Meta-Testing, Meta-RCNN and vanilla FRCN are finetuned for 4 epochs, while LSTD requires a longer finetuning period (10 epochs). For FRCN-PN, prototypes of the different categories are extracted as in Meta-RCNN, and metric distances are learned to assign correct labels to each proposal. We report the results in Table 2 for the three different settings.\nFrom Table 2, the performance of all four methods improves as the number of training shots increases. Notably, FRCN-PN obtains much less improvement because the non-parametric property of the PN layer limits its capacity to learn from increased training samples. Benefiting from the finetuning operation as well as the FC layers in the final classification and regression, Meta-RCNN maintains consistent improvement when trained with more samples. Furthermore, it is interesting that Regular FRCN outperforms FRCN-PN even in very-few-shot cases (5-way 1-shot), where the non-parametric property does not help PN obtain better performance. We argue this is because the few-shot detection problem is more difficult than the few-shot classification problem, as discussed in the introduction. FRCN-PN cannot learn a representative prototype of the background class, and the whole framework (e.g., the RPN and bbox regressors) cannot be optimized in a meta-learning style. The failure of FRCN-PN indicates that naively attaching components from a few-shot classification framework cannot address the few-shot detection problem. Finally, our Meta-RCNN achieves better results than all three baselines.\nPerformance of RPN: Here, we present the performance of the RPN to validate our concerns about the negative impact of irrelevant categories. 
We use Regular FRCN and FRCN-PN as our baselines. The models are optimized in the same manner as before, but during Meta-Testing we evaluate the recall on each task instead of mAP. From Table 3, the Regular FRCN baseline outperforms FRCN-PN significantly. This is because objects of irrelevant categories in the same image hurt the training process of the RPN. Our proposed Meta-RCNN outperforms these two baselines significantly. Meta-RCNN learns a general feature map for the K-way N-shot detection problem and optimizes the RPN in a meta-learning scheme, which proves more effective in few-shot settings. Notably, the results are surprising, since the recall of the RPN in the few-shot scenario is significantly lower than usual (it exceeds 90% with enough training data on the VOC dataset)." }, { "heading": "5.3 RESULTS ON IMAGENET-FSOD BENCHMARK", "text": "On the ImageNet-FSOD benchmark, we adapt the weights of a detector pretrained on the MSCOCO trainval set, and then optimize Meta-RCNN from this starting point. Meta-RCNN is evaluated on the animal subset of ImageNet-LOC. The animal subset of ImageNet-LOC contains only a single animal category per image, so there are no irrelevant classes during training, and it is simpler than the situation we discussed. In addition to FRCN and LSTD, we also include another recent baseline, RepMet (Schwartz et al., 2019), which replaces the FC classification layers in FRCN with a more careful design of PN layers, as well as a much stronger backbone architecture (DCN (Dai et al., 2017) and FPN (Lin et al., 2017a)). In this benchmark, we have 50 categories per task, so we attach a 1x1 convolution layer before the class-specific feature map generation to reduce the computation cost. We report the results in Table 4. In both the 50-way 1-shot and 50-way 5-shot settings, Meta-RCNN is better than the other methods." }, { "heading": "5.4 DISCUSSIONS", "text": "Extension to other Meta-Learning Methods: Beyond prototypical networks, other meta-learning methods such as MAML (Finn et al., 2017) can in principle also be applied; e.g., we can apply MAML to the vanilla FRCN framework, which updates the base model with the average gradient step over multiple tasks. However, in our experiments, the training process of MAML was unstable. This may be because few-shot detection is generally more difficult than few-shot classification, due to multiple dependent loss objectives (FRCN relies on the RPN and regression losses, etc.) and more complicated, noisy contexts. In the future, we plan to explore extensions to other meta-learning methods." }, { "heading": "6 CONCLUSION", "text": "Object detection has been widely explored, but little attention has been given to learning detectors under a few-shot regime. In this paper we propose a meta-learning based detection algorithm, Meta-RCNN, which is robust in few-shot learning, and the proposed training strategies make it more suitable for the detection scenario. Specifically, it adapts the Faster RCNN method and enables meta-learning of the object classifier, the RPN and the bounding box regressor. The RPN is meta-trained through a novel class-specific attention module. We conduct several experiments and obtain promising results." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 CATEGORY SPLITS IN VOC-FSOD AND IMAGENET-FSOD BENCHMARKS", "text": "Here we describe the category splits (Lc and Sc) of the VOC-FSOD benchmark and the ImageNet-FSOD benchmark. 
These splits are used throughout the paper.\nVOC-FSOD benchmark: Lc =\naeroplane, bicycle, bird, car, cat, chair, cow, person, pottedplant, tvmonitor\nSc =\nbus, motorbike, train, dog, sheep, bottle, sofa, diningtable, horse, boat\nImageNet-FSOD benchmark: Lc =\nkit fox, English setter, Siberian husky, Australian terrier, English springer, grey whale, lesser panda, Egyptian cat, ibex, Persian cat, cougar, gazelle, porcupine, sea lion, malamute, badger, Great Dane, Walker hound, Welsh springer spaniel, whippet, Scottish deerhound, killer whale, mink, African elephant, Weimaraner, soft-coated wheaten terrier, Dandie Dinmont, red wolf, Old English sheepdog, jaguar, otterhound, bloodhound, Airedale, hyena, meerkat, giant schnauzer, titi, three-toed sloth, sorrel, black-footed ferret, dalmatian, black-and-tan coonhound, papillon, skunk, polecat, Staffordshire bullterrier, Mexican hairless, Bouvier des Flandres, weasel, miniature poodle, malinois, bighorn, fox squirrel, colobus, tiger cat, Lhasa, impala, coyote, Yorkshire terrier, Newfoundland, brown bear, red fox, Norwegian elkhound, Rottweiler, hartebeest, Saluki, grey fox, schipperke, Pekinese, Brabancon griffon, West Highland white terrier, Sealyham terrier, guenon, mongoose, indri, tiger, Irish wolfhound, wild boar, EntleBucher, zebra, ram, French bulldog, orangutan, basenji, leopard, Bernese mountain dog, Maltese dog, Norfolk terrier, toy terrier, vizsla, cairn, squirrel monkey, groenendael, clumber, Siamese cat, chimpanzee, komondor, Afghan hound, Japanese spaniel, proboscis monkey, guinea pig\nSc =\nPomeranian, wombat, hare, snow leopard, Arctic fox, Sussex spaniel, lynx, wood rabbit, Saint Bernard, redbone, chow, collie, German shepherd, affenpinscher, dingo, golden retriever, American Staffordshire terrier, briard, kelpie, Tibetan terrier, cocker spaniel, sloth bear, standard poodle, wire-haired fox terrier, Border terrier, American black bear, Bedlington terrier, banded gecko, wallaby, Tibetan mastiff, flat-coated retriever, koala, toy poodle, Border collie, Chesapeake Bay retriever, German short-haired pointer, great grey owl, Doberman, Lakeland terrier, miniature pinscher, timber wolf, hog, marmot, Irish setter, bull mastiff, Irish terrier, Shetland sheepdog, keeshond, miniature schnauzer, llama, Pembroke, ice bear, standard schnauzer, white wolf, Boston bull, Gordon setter, Great Pyrenees, Irish water spaniel, warthog, Scotch terrier, Chihuahua, Norwich terrier, Rhodesian ridgeback, borzoi, gibbon, Samoyed, tabby, Kerry blue terrier, Labrador retriever, thunder snake, Ibizan hound, beagle, curly-coated retriever, African hunting dog, boxer, common newt, giant panda, ringneck snake, Angora, beaver, lion, bluetick, basset, alligator lizard, armadillo, pug, Greater Swiss Mountain dog, hognose snake, dhole, echidna, sidewinder, Komodo dragon, silky terrier, Brittany spaniel, patas, European fire salamander, Madagascar cat, macaque, boa constrictor, gorilla, polecat, howler monkey, Appenzeller, Blenheim spaniel, Indian cobra, Shih-Tzu, baboon, kuvasz, horned viper, rhinoceros beetle, tailed frog, Eskimo dog, Gila monster, mud turtle, capuchin, spider monkey, Leonberg, garter snake, African chameleon, barracouta, bullfrog, spotted salamander, leatherback turtle, rock python, marmoset, otter, Arabian camel, gar, tarantula, langur, tench, platypus, Italian greyhound, box turtle, cheetah, hippopotamus, English foxhound, eft, admiral, night snake, whiptail, siamang, agama, bittern, terrapin, axolotl, African grey, African crocodile, frilled lizard,
quail, water ouzel, sulphur-crested cockatoo, bison, bustard, bulbul, cock, prairie chicken, ruffed grouse, jay, partridge, tusker, spoonbill, green snake, junco, black grouse, crane, water buffalo, toucan, redshank, hornbill, ostrich, vine snake, hummingbird, Indian elephant, magpie, albatross, king snake, little blue heron, bald eagle, peacock, limpkin, hamster, ruddy turnstone, jacamar, green mamba, kite, indigo bunting, American egret, American coot, coucal, house finch, ptarmigan, black stork, robin, white stork, brambling, red-backed sandpiper, king penguin, goldfinch, lorikeet, water snake, macaw, drake, vulture, bee eater, hen, dowitcher, red-breasted merganser, ox, diamondback, oystercatcher, goose, pelican, black swan" } ]
2019
null
SP:ab1e96008989209a4c5423f723bfae327416e78a
[ "This paper defined Angular Visual Hardness (AVH) based on the angle between image feature embedding and the weights of the target class. The authors compared the correlation between AVH and human selection frequency with model confidence and feature norm. The results show that both AVH and model confidence have correlation, but AVH has a stronger correlation than model confidence. Differently from the conjecture of [41], feature norm was not correlated with human selection frequency. ", "This paper is trying to bridge the gap between CNN and the human visual system by proposing a metric (angular visual distance) and validate that this metric is correlated to the human visual hardness and this metric has a stronger relation compared to the softmax score which has been viewed as a metric measuring the hardness of images in CNNs. Furthermore, this paper proposed a reasonable explanation for this observation, i.e., the norm is possibly not correlated to the human visual hardness and validate through the experiment. Finally, this paper shows that this metric is also useful in other applications. " ]
Although convolutional neural networks (CNNs) are inspired by the mechanisms behind human visual systems, they diverge on many measures such as ambiguity or hardness. In this paper, we make a surprising discovery: there exists a (nearly) universal score function for CNNs whose correlation with human visual hardness is statistically significantly stronger than that of the widely used model confidence. We term this function angular visual hardness (AVH); it is given by the normalized angular distance between a feature embedding and the classifier weights of the corresponding target category in a CNN. We conduct an in-depth scientific study. We observe that CNN models with the highest accuracy also have the best AVH scores. This agrees with an earlier finding that state-of-the-art models tend to improve on the classification of harder training examples. We find that AVH displays interesting dynamics during training: it quickly reaches a plateau even though the training loss keeps improving. This suggests the need for designing better loss functions that can target harder examples more effectively. Finally, we empirically show significant improvement in performance by using AVH as a measure of hardness in self-training methods for domain adaptation.
[]
[ { "authors": [ "Shai Ben-David", "John Blitzer", "Koby Crammer", "Alex Kulesza", "Fernando Pereira", "Jennifer Wortman Vaughan" ], "title": "A theory of learning from different domains", "venue": "Machine learning,", "year": 2010 }, { "authors": [ "Yoshua Bengio", "Jérôme Louradour", "Ronan Collobert", "Jason Weston" ], "title": "Curriculum learning", "venue": "In ICML,", "year": 2009 }, { "authors": [ "Alexander Berardino", "Johannes Ball", "Valero Laparra", "Eero P. Simoncelli" ], "title": "Eigen-distortions of hierarchical representations, 2017", "venue": null, "year": 2020 }, { "authors": [ "Joy Buolamwini", "Timnit Gebru" ], "title": "Gender shades: Intersectional accuracy disparities in commercial gender classification", "venue": "In Conference on Fairness, Accountability and Transparency,", "year": 2018 }, { "authors": [ "Charles F. Cadieu", "Ha Hong", "Daniel L.K. Yamins", "Nicolas Pinto", "Diego Ardila", "Ethan A. Solomon", "Najib J. Majaj", "James J. DiCarlo" ], "title": "Deep neural networks rival the representation of primate it cortex for core visual object recognition", "venue": "PLoS Computational Biology,", "year": 2014 }, { "authors": [ "Claus-Christian Carbon" ], "title": "Understanding human perception by human-made illusions", "venue": "Frontiers in Human Neuroscience,", "year": 2014 }, { "authors": [ "Lichao Chen", "Sudhir Singh", "Thomas Kailath", "Vwani Roychowdhury" ], "title": "Brain-inspired automated visual object discovery and detection", "venue": "Proceedings of the National Academy of Sciences,", "year": 2019 }, { "authors": [ "Minmin Chen", "Kilian Q Weinberger", "John Blitzer" ], "title": "Co-training for domain adaptation", "venue": "In Advances in neural information processing systems,", "year": 2011 }, { "authors": [ "Ron Dekel" ], "title": "Human perception in computer vision, 2017", "venue": null, "year": 2017 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "In CVPR,", "year": 2009 }, { "authors": [ "Samuel Dodge", "Lina Karam" ], "title": "Can the early human visual system compete with deep neural networks", "venue": "IEEE International Conference on Computer Vision Workshops (ICCVW), Oct 2017. doi: 10.1109/iccvw.2017.329. URL http://dx.doi.org/10.1109/iccvw", "year": 2017 }, { "authors": [ "Samuel Dodge", "Lina Karam" ], "title": "A study and comparison of human and deep learning recognition performance under visual distortions", "venue": "26th International Conference on Computer Communication and Networks (ICCCN), Jul 2017. doi: 10.1109/icccn.2017.8038465. URL http://dx.doi.org/10.1109/ICCCN.2017.8038465", "year": 2017 }, { "authors": [ "Daniel J Felleman", "DC Essen Van" ], "title": "Distributed hierarchical processing in the primate cerebral cortex", "venue": "Cerebral cortex (New York, NY: 1991),", "year": 1991 }, { "authors": [ "Kunihiko Fukushima" ], "title": "Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position", "venue": "Biological cybernetics,", "year": 1980 }, { "authors": [ "Yaroslav Ganin", "Evgeniya Ustinova", "Hana Ajakan", "Pascal Germain", "Hugo Larochelle", "François Laviolette", "Mario Marchand", "Victor Lempitsky" ], "title": "Domain-adversarial training of neural networks. 
", "venue": "JMLR,", "year": 2016 }, { "authors": [ "Robert Geirhos", "Carlos RM Temme", "Jonas Rauber", "Heiko H Schütt", "Matthias Bethge", "Felix A Wichmann" ], "title": "Generalisation in humans and deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Robert Geirhos", "Patricia Rubisch", "Claudio Michaelis", "Matthias Bethge", "Felix A Wichmann", "Wieland Brendel" ], "title": "Imagenet-trained cnns are biased towards texture; increasing shape bias improves accuracy and robustness", "venue": null, "year": 2019 }, { "authors": [ "Ross Girshick", "Jeff Donahue", "Trevor Darrell", "Jitendra Malik" ], "title": "Rich feature hierarchies for accurate object detection and semantic segmentation", "venue": "In CVPR,", "year": 2014 }, { "authors": [ "Yves Grandvalet", "Yoshua Bengio" ], "title": "Semi-supervised learning by entropy minimization", "venue": "In NeurIPS,", "year": 2005 }, { "authors": [ "Chuan Guo", "Geoff Pleiss", "Yu Sun", "Kilian Q Weinberger" ], "title": "On calibration of modern neural networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Raia Hadsell", "Sumit Chopra", "Yann LeCun" ], "title": "Dimensionality reduction by learning an invariant mapping", "venue": "In CVPR,", "year": 2006 }, { "authors": [ "Bharath Hariharan", "Ross Girshick" ], "title": "Low-shot visual recognition by shrinking and hallucinating features", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "arXiv preprint arXiv:1503.02531,", "year": 2015 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Martina Jakesch", "Helmut Leder", "Michael Forster" ], "title": "Image ambiguity and fluency", "venue": "PLoS One,", "year": 2013 }, { "authors": [ "Angelos Katharopoulos", "François Fleuret" ], "title": "Not all samples are created equal: Deep learning with importance sampling", "venue": "arXiv preprint arXiv:1803.00942,", "year": 2018 }, { "authors": [ "Saeed Reza Kheradpisheh", "Masoud Ghodrati", "Mohammad Ganjtabesh", "Timothe Masquelier" ], "title": "Deep networks can resemble human feed-forward vision in invariant object recognition", "venue": "Scientific Reports,", "year": 2016 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In Advances in neural information processing systems,", "year": 2012 }, { "authors": [ "Aviral Kumar", "Sunita Sarawagi", "Ujjwal Jain" ], "title": "Trainable calibration measures for neural networks from kernel mean embeddings", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "M Pawan Kumar", "Benjamin Packer", "Daphne Koller" ], "title": "Self-paced learning for latent variable models", "venue": "In NeurIPS,", "year": 2010 }, { "authors": [ "Dong-Hyun Lee" ], "title": "Pseudo-label: The simple and 
efficient semi-supervised learning method for deep neural networks", "venue": "In ICML Workshop on Challenges in Representation Learning,", "year": 2013 }, { "authors": [ "Zhizhong Li", "Derek Hoiem" ], "title": "Reducing over-confident errors outside the known distribution", "venue": "arXiv preprint arXiv:1804.03166,", "year": 2018 }, { "authors": [ "Rongmei Lin", "Weiyang Liu", "Zhen Liu", "Chen Feng", "Zhiding Yu", "James M Rehg", "Li Xiong", "Le Song" ], "title": "Compressive hyperspherical energy minimization", "venue": "arXiv preprint arXiv:1906.04892,", "year": 1906 }, { "authors": [ "Tsung-Yi Lin", "Michael Maire", "Serge Belongie", "James Hays", "Pietro Perona", "Deva Ramanan", "Piotr Dollár", "C Lawrence Zitnick" ], "title": "Microsoft coco: Common objects in context", "venue": "In European conference on computer vision,", "year": 2014 }, { "authors": [ "Peter H Lindsay", "Donald A Norman" ], "title": "Human information processing: An introduction to psychology", "venue": "Academic press,", "year": 2013 }, { "authors": [ "Weiyang Liu", "Bo Dai", "Ahmad Humayun", "Charlene Tay", "Chen Yu", "Linda B Smith", "James M Rehg", "Le Song" ], "title": "Iterative machine teaching", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Weiyang Liu", "Yandong Wen", "Zhiding Yu", "Ming Li", "Bhiksha Raj", "Le Song" ], "title": "Sphereface: Deep hypersphere embedding for face recognition", "venue": "In CVPR,", "year": 2017 }, { "authors": [ "Weiyang Liu", "Yan-Ming Zhang", "Xingguo Li", "Zhiding Yu", "Bo Dai", "Tuo Zhao", "Le Song" ], "title": "Deep hyperspherical learning", "venue": "In NeurIPS,", "year": 2020 }, { "authors": [ "Weiyang Liu", "Rongmei Lin", "Zhen Liu", "Lixin Liu", "Zhiding Yu", "Bo Dai", "Le Song" ], "title": "Learning towards minimum hyperspherical energy", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Jonathan Long", "Evan Shelhamer", "Trevor Darrell" ], "title": "Fully convolutional networks for semantic segmentation", "venue": "In CVPR,", "year": 2015 }, { "authors": [ "Mingsheng Long", "Yue Cao", "Jianmin Wang", "Michael I Jordan" ], "title": "Learning transferable features with deep adaptation networks", "venue": "In ICML,", "year": 2015 }, { "authors": [ "Tiago Marques", "Julia Nguyen", "Gabriela Fioreze", "Leopoldo Petreanu" ], "title": "The functional organization of cortical feedback inputs to primary visual cortex", "venue": "Nature neuroscience,", "year": 2018 }, { "authors": [ "Radoslaw Martin Cichy", "Aditya Khosla", "Dimitrios Pantazis", "Aude Oliva" ], "title": "Dynamics of scene representations in the human brain revealed by magnetoencephalography and deep neural networks", "venue": "URL http://dx.doi.org/10.1016/j.neuroimage.2016.03.063", "year": 2016 }, { "authors": [ "Patrick McClure", "Nikolaus Kriegeskorte" ], "title": "Representational distance learning for deep neural networks", "venue": "Frontiers in computational neuroscience,", "year": 2016 }, { "authors": [ "Warren S McCulloch", "Walter Pitts" ], "title": "A logical calculus of the ideas immanent in nervous activity", "venue": "Bulletin of mathematical biology,", "year": 1990 }, { "authors": [ "Hyun Oh Song", "Yu Xiang", "Stefanie Jegelka", "Silvio Savarese" ], "title": "Deep metric learning via lifted structured feature embedding", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "Xingchao Peng", "Ben Usman", "Neela Kaushik", "Judy Hoffman", "Dequan Wang", "Kate Saenko" ], "title": "Visda: The visual domain adaptation challenge", "venue": "arXiv preprint arXiv:1710.06924,", "year": 
2017 }, { "authors": [ "R.T. Pramod", "S.P. Arun" ], "title": "Do computational models differ systematically from human object perception? 2016", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun 2016. doi: 10.1109/cvpr.2016.177. URL http://dx.doi.org/10.1109/CVPR.2016", "year": 2016 }, { "authors": [ "Benjamin Recht", "Rebecca Roelofs", "Ludwig Schmidt", "Vaishaal Shankar" ], "title": "Do imagenet classifiers generalize to imagenet", "venue": "arXiv preprint arXiv:1902.10811,", "year": 1902 }, { "authors": [ "Kuniaki Saito", "Yoshitaka Ushiku", "Tatsuya Harada" ], "title": "Asymmetric tri-training for unsupervised domain adaptation", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Kuniaki Saito", "Kohei Watanabe", "Yoshitaka Ushiku", "Tatsuya Harada" ], "title": "Maximum classifier discrepancy for unsupervised domain adaptation", "venue": null, "year": 2017 }, { "authors": [ "Kuniaki Saito", "Yoshitaka Ushiku", "Tatsuya Harada", "Kate Saenko" ], "title": "Adversarial dropout regularization", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Florian Schroff", "Dmitry Kalenichenko", "James Philbin" ], "title": "Facenet: A unified embedding for face recognition and clustering", "venue": "In CVPR,", "year": 2015 }, { "authors": [ "Rui Shu", "Hung H Bui", "Hirokazu Narui", "Stefano Ermon" ], "title": "A dirt-t approach to unsupervised domain adaptation", "venue": null, "year": 2018 }, { "authors": [ "Edgar Simo-Serra", "Eduard Trulls", "Luis Ferraz", "Iasonas Kokkinos", "Pascal Fua", "Francesc Moreno-Noguer" ], "title": "Discriminative learning of deep convolutional feature point descriptors", "venue": "In ICCV,", "year": 2015 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Kihyuk Sohn" ], "title": "Improved deep metric learning with multi-class n-pair loss objective", "venue": "In NeurIPS, 2016", "year": 2020 }, { "authors": [ "Daniel Soudry", "Elad Hoffer", "Mor Shpigel Nacson", "Suriya Gunasekar", "Nathan Srebro" ], "title": "The implicit bias of gradient descent on separable data", "venue": "The Journal of Machine Learning Research,", "year": 2018 }, { "authors": [ "Courtney J Spoerer", "Patrick McClure", "Nikolaus Kriegeskorte" ], "title": "Recurrent convolutional neural networks: a better model of biological object recognition", "venue": "Frontiers in psychology,", "year": 2017 }, { "authors": [ "Yi Sun", "Yuheng Chen", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face representation by joint identification-verification", "venue": "In NeurIPS,", "year": 2014 }, { "authors": [ "Yi Sun", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face representation from predicting 10,000 classes", "venue": "In CVPR,", "year": 2014 }, { "authors": [ "Peng Tang", "Xinggang Wang", "Xiang Bai", "Wenyu Liu" ], "title": "Multiple instance detection network with online instance classifier refinement", "venue": "In CVPR,", "year": 2017 }, { "authors": [ "Chao-Yuan Wu", "R Manmatha", "Alexander J Smola", "Philipp Krahenbuhl" ], "title": "Sampling matters in deep embedding learning", "venue": "In ICCV, 2017", "year": 2017 }, { "authors": [ "D.L.K. Yamins", "H. Hong", "C.F. Cadieu", "E.A. Solomon", "D. Seibert", "J.J. 
DiCarlo" ], "title": "Performance-optimized hierarchical models predict neural responses in higher visual cortex", "venue": "Proceedings of the National Academy of Sciences,", "year": 2014 }, { "authors": [ "Dong Yi", "Zhen Lei", "Shengcai Liao", "Stan Z Li" ], "title": "Learning face representation from scratch", "venue": "arXiv preprint arXiv:1411.7923,", "year": 2014 }, { "authors": [ "Richard Zhang", "Phillip Isola", "Alexei A Efros", "Eli Shechtman", "Oliver Wang" ], "title": "The unreasonable effectiveness of deep features as a perceptual metric", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Bolei Zhou", "Agata Lapedriza", "Jianxiong Xiao", "Antonio Torralba", "Aude Oliva" ], "title": "Learning deep features for scene recognition using places database", "venue": "In NeurIPS,", "year": 2014 }, { "authors": [ "Yan Zhou", "Murat Kantarcioglu", "Bhavani Thuraisingham" ], "title": "Self-training with selection-byrejection", "venue": "IEEE 12th international conference on data mining,", "year": 2012 }, { "authors": [ "Yang Zou", "Zhiding Yu", "BVK Kumar", "Jinsong Wang" ], "title": "Domain adaptation for semantic segmentation via class-balanced self-training", "venue": "arXiv preprint arXiv:1810.07911,", "year": 2018 } ]
[ { "heading": null, "text": "Although convolutional neural networks (CNNs) are inspired by the mechanisms behind human visual systems, they diverge on many measures such as ambiguity or hardness. In this paper, we make a surprising discovery: there exists a (nearly) universal score function for CNNs whose correlation is statistically significant than the widely used model confidence with human visual hardness. We term this function as angular visual hardness (AVH) which is given by the normalized angular distance between a feature embedding and the classifier weights of the corresponding target category in a CNN. We conduct an in-depth scientific study. We observe that CNN models with the highest accuracy also have the best AVH scores. This agrees with an earlier finding that state-of-art models tend to improve on classification of harder training examples. We find that AVH displays interesting dynamics during training: it quickly reaches a plateau even though the training loss keeps improving. This suggests the need for designing better loss functions that can target harder examples more effectively. Finally, we empirically show significant improvement in performance by using AVH as a measure of hardness in self-training methods for domain adaptation." }, { "heading": "1 INTRODUCTION", "text": "The invention and development of Convolutional Neural Networks (CNNs) were inspired by natural visual processing systems. For example, artificial neurons were designed to mimic neurons taking and transforming information [48], and neocognitron, the origin of the CNN architecture, was enlightened by early findings of receptive fields in the visual cortex [15]. CNNs have achieved a great success in pushing the boundaries in a wide range of computer vision tasks such as image classification [24, 30, 59], face recognition [39, 63, 64], and scene analysis [19, 43, 70]. Specifically, on certain large-scale benchmarks such as ImageNet [10], CNNs have even surpassed human-level accuracy. Despite such notable progress, CNNs are still far from matching human vision on many measures such as robustness, adaptability and few-shot learning [23], and could suffer from various biases. For example, CNNs pre-trained on Imagenet are biased towards textures [18]. These biases can result in CNNs being overconfident, or prone to domain gaps and adversarial attacks. Therefore, to fundamentally solve the above problems, the efforts should be made to improve CNN’s capabilities to model human visual system [47, 62].\nThe popular measure of CNN confidence, softmax score, has been widely used in many applications, yet causing calibration problems and tending to make CNNs overconfident even if they are wrong [21, 34]. However, this is not the case with human vision. Thus, there is a gap between the current measure of hard examples that appear to be ambiguous or uncertain in these two systems. We denote human visual hardness as the measure of how hard an example is to human visual system. In this paper, we bridge the gap by proposing a novel score function on CNNs that correlates closely with human visual hardness. The first piece of this puzzle starts with the question of what is a good measure of human visual hardness. Recently, [52] argued that human selection frequency is a good measure. This is the average number of times an image gets picked by a crowd of annotators, when they are asked to pick an image from a pool that belongs to a certain specified category. 
Intuitively, human selection frequency depends on various factors like object sizes, poses, and special filters applied to images. [52] collected human selection frequency scores on the ImageNet validation set using the MTurk platform. In this paper, we use this dataset to verify several hypotheses on correlations between CNNs and human visual systems in Section 3.2.\nMoreover, automatic detection of examples that are hard for human vision has numerous applications. [52] showed that state-of-the-art models perform better on hard examples (i.e., hard for humans). This implies that in order to improve generalization, models need to improve accuracy on hard examples. This can be achieved through various learning algorithms such as curriculum learning [2] and self-paced learning [32], where being able to detect hard examples is crucial. Measuring sample confidence is also important in partially-supervised problems such as semi-supervised learning [71, 72], unsupervised domain adaptation [8] and weakly-supervised learning [65] due to their under-constrained nature. For instance, self-training [73] can easily reach trivial solutions if one does not select pseudo-labels carefully based on a correct measure of hardness. Furthermore, by identifying hard examples, one can detect various biases in current CNN models. Sample hardness can also be used to identify implicit distribution imbalance in datasets to ensure fairness and remove societal biases [4]." }, { "heading": "Our contributions are summarized as follows:", "text": "Angular visual hardness (AVH): Given a CNN, we propose a novel score function that has a stronger correlation with human visual hardness than the softmax score. It is the normalized angular distance between the image feature embedding and the weights of the target category (see Figure 1); the normalization takes into account the angular distances to the other categories. We argue that the semantic ambiguity that affects human visual hardness correlates more strongly with this score, and we find empirical evidence to support this claim.\nThe AVH score is inspired by the observation from Figure 1, and also from [42], that samples from each class concentrate in a convex cone in the embedding space. In addition, existing theoretical results [61] show that when minimizing the logistic or cross-entropy loss often used in CNNs, gradient descent converges in the same direction as maximum-margin solutions, irrespective of the ℓ2 norm of the classifier weights or feature embeddings. This also provides intuition for why the AVH score, rather than the current model confidence (the softmax score), correlates better with human visual hardness.\nWe validate that there is a statistically significant stronger correlation between AVH and human selection frequency across a wide range of CNN models. Hence, AVH serves as a proxy for human selection frequency on datasets where such information is not available, which is beneficial for a number of downstream tasks.\nWe observe the evolution of the AVH score during the training of CNN models. It plateaus early in training even as the training (cross-entropy) loss keeps improving. This suggests the need to design better loss functions that can improve performance on hard examples.
We also validate the argument in [52] that improving on hard examples is the key to improving generalization, by verifying that the state-of-the-art models have the best average AVH scores.\nFinally, we empirically show the superiority of AVH through its application to self-training for unsupervised domain adaptation. With AVH being an improved confidence measure, our proposed self-training framework renders considerably improved pseudo-label selection and category estimation, leading to state-of-the-art results with significant performance gains over baselines." }, { "heading": "2 RELATED WORK", "text": "Example hardness measures: Recently, measuring sample hardness for deep learning models has been widely studied using loss value [58], relative Euclidean distance [56, 66] and gradient norm [28]. On the other hand, there is a rich history in the cognitive and neuroscience communities of understanding human visual perception [6, 7, 14, 45], where many works focus on the mechanisms used by the human brain to translate visual information into mental representations. These representations are subject to many correspondence differences and errors and are thereby not isomorphic to the real world [37]. They can be affected by the ambiguity of different semantics [27] such as occlusion, distortion, motion blur, and inherent similarity among objects. Due to the expensive human labeling process, such detailed semantic information is typically not present in the large-scale image benchmarks used to train CNN models.\nAngular distance in CNNs: [69] uses deep features to quantify the semantic difference between images, indicating that deep features contain the most crucial semantic information. It empirically shows that the angular distance between feature maps in deep neural networks is very consistent with human judgments of semantic difference. However, because of the different goal mentioned above, they have not studied or shown any strong correlation between human visual hardness and the angular distance on natural images. [40] proposes a hyperspherical neural network that constrains the parameters of neurons to a unit hypersphere and uses angular similarity to replace the inner-product similarity. [42] decouples the inner product into the norm and the angle, and argues that the norm corresponds to intra-class variation while the angle corresponds to inter-class semantic difference. However, this work does not consider any human factors, while our goal is to bridge the gap between CNNs and human perception. [35, 41] propose well-performing regularizations based on angular diversity to improve network generalization.\nImage degradation: Because CNNs and humans achieve similar accuracy on a wide range of tasks on benchmark datasets, a number of works have investigated similarities and differences between CNNs and human vision [3, 5, 9, 11, 12, 29, 46, 51, 67]. Since human annotation data is hard to come by, researchers have proposed an alternative measure of visual hardness on images based on image degradation [37]. It involves adding noise or changing image properties such as contrast, blurriness, and brightness. [17] employed psychological studies to validate the degradation method as a way to measure human visual hardness. It should be noted that the artificial visual hardness introduced by degradation is a different concept from natural visual hardness. 
The hardness based on degradation only reflects the hardness of a single original image under various transformations, while natural visual hardness is based on the ambiguity of human perception across a distribution of natural images. In this paper, we consider both as surrogates of human visual hardness.\nDeep model calibration. Confidence calibration is the problem of predicting probability estimates representative of the true correctness likelihood [21]. It is well known that deep neural networks are mis-calibrated, and there has been a rich literature trying to solve this problem [21, 31]. However, this is a somewhat different issue, because confidence calibration is a problem between two measurements of the model which does not involve human visual hardness." }, { "heading": "3 A DISCOVERY FROM SCIENTIFIC TESTING: ANGULAR VISUAL HARDNESS", "text": "" }, { "heading": "3.1 NOTATIONS AND SETUP", "text": "In order to quantify Human Visual Hardness and Model Predictions conveniently in experiments, we use corresponding surrogates, which are formally defined as follows and used throughout the paper. We use the ImageNet [10] benchmark in all following experiments. In particular, we take advantage of the Human Selection Frequency information for validation images provided by the recent paper [52]. Recall that such information can serve as a proxy for Human Visual Hardness. To test whether our findings with Human Selection Frequency hold under another proxy, image degradation, we create an augmented validation set based on two image degradation methods, decreasing contrast and adding noise. We label the images with the corresponding degradation level (results shown in Appendix A.2 and A.5). Besides, in order to verify that our experimental results hold consistently across models instead of for a particular model, we use four popular ImageNet pre-trained models: AlexNet [30], VGG19 [59], DenseNet121 [26], and ResNet50 [24]. We select ResNet50 as the representative model for some experiments. Most importantly, we also provide experimental results on different datasets, MNIST and CIFAR10/100, in Appendix A.3 and A.4 to better support our proposal.\nDenote by $\mathbb{S}^n$ the unit n-sphere; formally, $\mathbb{S}^n = \{x \in \mathbb{R}^{n+1} \mid \|x\|_2 = 1\}$. By $A(\cdot, \cdot)$ we denote the angular distance between two points on $\mathbb{S}^n$, i.e., $A(u, v) = \arccos(\frac{\langle u, v \rangle}{\|u\| \|v\|})$. Let x be the feature embedding input to the last layer of the classifier of the pretrained CNN models, i.e., the output of the penultimate layer (e.g., FC2 for VGG19). Let C be the number of classes for a classification task. Denote $W = \{w_i \mid 0 < i \leq C\}$ as the set of weights for all C classes in the final layer of the classifier.\nDefinition 1 (Angular Visual Hardness (AVH)). AVH, for any (x, y), is defined as:\n$AVH(x) = \frac{A(x, w_y)}{\sum_{i=1}^{C} A(x, w_i)}$,\nwhere $w_y$ represents the weights of the target class.\nDefinition 2 (Model Confidence). We define model confidence on a single sample as the probability score of the true target class output by the CNN model; formally, $\frac{e^{w_y^\top x}}{\sum_{i=1}^{C} e^{w_i^\top x}}$.\nDefinition 3 (Human Selection Frequency). We define one way to measure human visual hardness on images as Human Selection Frequency. Quantitatively, given m human workers in the labeling process described in [52], if b out of m label a picture as a particular class and that class is the target class of that picture in the final dataset, then Human Selection Frequency is defined as $\frac{b}{m}$."
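To make Definitions 1 and 2 concrete, the following is a minimal PyTorch sketch of how AVH(x) and Model Confidence might be computed from a feature embedding and the final-layer class weights. The function name is hypothetical, and the bias-free softmax is a simplifying assumption, not necessarily the authors' exact pipeline.

```python
import torch

def avh_and_confidence(x, W, y):
    """x: (D,) feature embedding input to the final classifier layer.
    W: (C, D) final-layer class weights.  y: index of the target class.

    Returns (AVH(x), Model Confidence) following Definitions 1 and 2:
    AVH is the angular distance to the target class weights, normalized
    by the sum of angular distances to all class weights."""
    cos = (W @ x) / (W.norm(dim=1) * x.norm() + 1e-12)
    angles = torch.acos(cos.clamp(-1.0, 1.0))       # A(x, w_i) for all classes
    avh = angles[y] / angles.sum()                   # Definition 1
    confidence = torch.softmax(W @ x, dim=0)[y]      # Definition 2
    return avh, confidence
```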
}, { "heading": "3.2 CONNECTIONS AND GAPS BETWEEN HUMAN VISUAL SYSTEM AND CNN", "text": "Studying the precise connection or gap between human visual hardness and model predictions is not feasible because data collection involving human labelling or annotation requires large amount of work. In addition, usually those human data is application or dataset specific, which makes the scalability of this study even worse. Therefore, all the testing and experiments we design are at best effort given the limited resources. That is exactly another motivation for us to bridge the gap between Human and models because models predictions require minimum costs compared to human efforts. In this section, We first provide four hypothesis and test them accordingly.\nHypothesis 1. AVH has a correlation with Human Selection Frequency." }, { "heading": "Outcome: Null Hypothesis Rejected", "text": "Correspondingly, after evaluating each validation sample on pre-trained models, we extract feature embeddings x and also the class weightsW to compute AVH(x). Noted that we linear scale the range of AVH(x) to [0, 1]. Table 1 shows the overall strong correlation of AVH(x) and Human Selection Frequency consistently (p-value is < 0.001 rejecting the null hypothesis). From the coefficients represented for different bins of example hardness, we can see that the harder the examples, the weaker the correlation. Noted that we also check the results across four different CNN architectures and we found that better model has higher coefficient.\nThe plot on the left in Figure 2 help visualize the strong correlation between AVH(x) and Human Selection Frequency for validation images. One intuition behind this correlation is that the class weightsW might corresponds to human semantic for each category and thereby AVH(x) corresponds to human semantic categorization of an image. In order to test if the strong correlation holds for all models, we perform the same experiments on AlexNet, VGG19 and DenseNet121.\nHypothesis 2. Model Confidence has a correlation with Human Selection Frequency." }, { "heading": "Outcome: Null Hypothesis Rejected", "text": "An interesting observation in [52] shows that Human Selection Frequency has strong influence on the Model Confidence. Specifically, examples with low Human Selection Frequency tends to have relatively low Model Confidence. Naturally we examine if the correlation between Model Confidence and Human Selection Frequency is strong. Specifically, all ImageNet validation images are evaluated by the pre-trained models. The corresponding output is simply the Model Confidence on each image. From table 1, we can first see that it is clear that because p-value is < 0.001, Model Confidence does have a strong correlation with Human Selection Frequency. However, the correlation coefficient for Model Confidence and Human Selection Frequency is consistently lower than that of AVH and Human Selection Frequency.\nThe middle plot in figure 2 presents a two-dimensional histogram for the correlation visualization. The x-axis represents Human Selection Frequency, and the y-axis represents Model Confidence. Each bin exhibits the number of images which lie in the corresponding range. We can observe the high density at the right corner, which means the majority of the images have both high human and model accuracy. However, there is a considerable amount of density on the range of medium human accuracy but either extremely low or high model accuracy. 
One may object that the difference between the two correlation coefficients is not large; our next step is therefore to test statistically whether the difference is significant.\nHypothesis 3. AVH has a stronger correlation with Human Selection Frequency than Model Confidence." }, { "heading": "Outcome: Null Hypothesis Rejected", "text": "Testing whether two correlation coefficients are significantly different involves three steps. The first step is to apply the Fisher Z-transformation to both coefficients; the Fisher Z-transformation transforms the sampling distribution of a correlation coefficient so that it becomes approximately normal. Applying it to each coefficient, the Z score for the AVH coefficient becomes 0.377 and that for Model Confidence becomes 0.337. The second step is to compute the Z value of the two Z scores; from the two Z scores and the sample sizes, we determine the Z value to be 4.85. The last step is to look up the p-value in the Z table, which gives a p-value of 0.00001. Therefore, we reject the null hypothesis and conclude that AVH has a statistically significantly stronger correlation with Human Selection Frequency than Model Confidence. In Section 5, we also empirically show that this stronger correlation brings cumulative advantages in some applications. In Appendix A.1, besides the Spearman correlation coefficient, we also report the Pearson and Kendall Tau coefficients. Furthermore, to test whether the conclusion holds for different models, we run the same tests on all four architectures. For all models and under the different tests, AVH correlates significantly more strongly with Human Selection Frequency than Model Confidence does, and the correlation is even stronger for better models.\nHypothesis 4. ‖x‖2 has a correlation with Human Selection Frequency." }, { "heading": "Outcome: Failure to Reject Null Hypothesis", "text": "[42] conjectures that ‖x‖2 accounts for intra-class human/model confidence; in particular, if the norm is larger, the model's prediction is, to some extent, more confident. Therefore, we conduct experiments similar to those in the previous section to examine the correlation between ‖x‖2 and Human Selection Frequency. First, we compute ‖x‖2 for every validation sample for all models. Then we normalize ‖x‖2 within each class. Table 1 presents the results of the correlation test. We omit the p-values from the table and report here that they are all much higher than 0.05, indicating that there is no correlation between ‖x‖2 and Human Selection Frequency. The right plot in Figure 2 uses a two-dimensional histogram to show the correlation for all the validation images. Given that the norm has been normalized within each class, there is naturally notable density where the norm is 0 or 1. Apart from that, there is no obvious correlation between ‖x‖2 and Human Selection Frequency. We further verify whether presenting all samples across 1,000 different classes affects the visualization of the correlation. According to the WordNet [13] hierarchy, we map the original 1,000 fine-grained classes to 45 higher-level classes. A figure in the appendix shows the relationship between Human Selection Frequency and ‖x‖2 for three representative higher-level classes containing 58, 7, and 1 fine-grained classes, respectively. Note that there is still no visible direct proportionality between these two variables across any of the plots.
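As a concrete illustration, the three-step significance test used for Hypothesis 3 can be sketched as follows in a few lines; the use of scipy is an implementation assumption, and the coefficients and sample sizes in the usage comment are illustrative (the Z value of 4.85 reported above corresponds to the paper's own sample sizes, which are not restated here).

```python
import numpy as np
from scipy.stats import norm

def compare_correlations(r1, r2, n1, n2):
    """Two-sided test of whether two correlation coefficients differ significantly."""
    # Step 1: Fisher Z-transformation (z = arctanh(r)) makes the sampling
    # distribution of a correlation coefficient approximately normal.
    z1, z2 = np.arctanh(r1), np.arctanh(r2)
    # Step 2: Z value of the two Z scores, with standard error sqrt(1/(n-3)) each.
    z = (z1 - z2) / np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    # Step 3: read the p-value off the standard normal (Z) table.
    p = 2.0 * (1.0 - norm.cdf(abs(z)))
    return z, p

# Coefficients whose Fisher scores match those above: tanh(0.377) ~= 0.360
# and tanh(0.337) ~= 0.324; n1 and n2 are placeholders for the true sample sizes.
z_value, p_value = compare_correlations(0.360, 0.324, n1=50000, n2=50000)
```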
Beyond the hypothesis tests, we provide a detailed discussion of the difference between AVH and Model Confidence in Appendix C." }, { "heading": "4 DYNAMICS OF AVH DURING TRAINING", "text": "After discovering the strong correlation between human visual hardness and the AVH score, a natural question is: what role does AVH play during the training process? Optimization algorithms update the weights and biases, i.e., the internal parameters of a model, to improve the training loss. Both the angles between the feature embedding and the classifiers and the L2 norm of the embedding can influence the loss. While it is well known that the training loss and accuracy keep improving, it is not obvious how the angles and norms behave separately during training. We design experiments to observe the training dynamics of various network architectures.\nExperiment Settings. For datasets and models, we use exactly the same setting as the experiments in Section 3.1. Nevertheless, observing training dynamics involves training models from scratch on the ImageNet training set instead of directly using the pre-trained models. Therefore, we follow the standard training process of AlexNet [30], VGG19 [59], ResNet50 [24] and DenseNet121 [26] (DenseNet results are put in the Appendix). For consistency, we train all four models for 90 epochs and decay the initial learning rate by a factor of 10 every 30 epochs. The initial learning rate for AlexNet and VGG19 is 0.01, and for DenseNet121 and ResNet50 it is 0.1. For human visual hardness based on Human Selection Frequency, we split all the validation images into 5 bins, [0.0, 0.2], [0.2, 0.4], [0.4, 0.6], [0.6, 0.8], [0.8, 1.0], based on their human selection frequency. For human visual hardness based on Image Degradation Level, we create an augmented validation set based on two image degradation methods, decreasing contrast and adding noise, and label the images with the corresponding degradation levels as well. Note that for all the figures in this section, Epoch starts from 1.\nObservation 1: The norm of feature embeddings keeps increasing during training. Figures 3, 10 and 11 present how the average ‖x‖2 and the accuracy of validation samples vary over 90 epochs of training on three architectures. Note that we use the validation data for observing the dynamics and therefore never fit it to the model. The average ‖x‖2 increases with a small initial slope but climbs suddenly after 30 epochs, when the first learning rate decay happens. The accuracy curve is very similar to that of the average ‖x‖2. The above observations are consistent across all models. More interestingly, we find that neural networks with shortcut connections (e.g., ResNets and DenseNets) tend to make the norms of images with different human selection frequencies converge to the same value, while networks without shortcuts (e.g., AlexNet and VGG) tend to preserve the norm gap among images with different human visual hardness.\nObservation 2: AVH hits a plateau very early even when the accuracy or loss is still improving. Figures 3, 10 and 11 show how the average AVH of validation samples changes over 90 epochs of training on three models. The average AVH for AlexNet and VGG19 decreases sharply at the beginning and then bounces back slightly before converging.
However, the dynamics of the average AVH for DenseNet121 and ResNet50 are different: they both decrease slightly and then quickly hit a plateau in all three learning rate decay stages. The common observation is that AVH stops improving in all models even while ‖x‖2 and model accuracy are still increasing. AVH is more important than ‖x‖2 in the sense that it is the key factor deciding which class an input sample is assigned to. However, increasing the norm under the standard softmax cross-entropy loss is easier than decreasing the angle, which causes the plateau of angles for easy examples. The plateau for hard examples, in contrast, can be caused by the limitations of the model itself. As a result, this shows the necessity and importance of designing loss functions that focus on optimizing angles, such as [35, 39, 41].\nObservation 3: AVH's correlation with human selection frequency consistently holds across models throughout the training process. In Figures 3, 10 and 11, we average over validation samples in five human selection frequency bins or five degradation level bins separately, and then compute the average embedding norm, AVH and model accuracies. We observe that for ‖x‖2, the gaps between samples with different human visual hardness are not obvious in ResNet and DenseNet, while they are quite obvious in AlexNet and VGG. For AVH, however, the gaps are very significant and consistent across every network architecture during the entire training process. Interestingly, even when the network is far from converged, these AVH gaps remain consistent across different human selection frequencies, and the norm gaps are consistent as well. The intuition behind this could be that the angles for hard examples are much harder to decrease and probably never reach the region of correct classification. Therefore, the corresponding norms do not increase, since doing so would otherwise hurt the loss. This validates that AVH is a consistent and robust measure of visual hardness (and even generalization).\nObservation 4: AVH is an indicator of a model's generalization ability. From Figures 3, 18, 10 and 11, we observe that better models (i.e., with higher accuracy) have lower average AVH throughout the training process and also across samples of different human visual hardness. For instance, AlexNet is the worst model, and its overall average AVH and its average AVH on each of the five bins are worse than those of the other three models. In addition, when testing Hypothesis 3 we found that for better models, the AVH correlation with Human Selection Frequency exceeds the Model Confidence correlation by an even larger margin. These observations are aligned with the earlier observations of [52] that better models also generalize better on samples across different human visual hardness. Moreover, AVH is potentially a better measure of generalization for a pretrained model. The norm of feature embeddings often encodes training-data priors such as data imbalance [39] and class granularity [30]. But when extracting features for classes that do not exist in the training set, such priors are undesirable. Since AVH does not consider the norm of the feature embeddings, it may better evaluate the generalization of a deep network.\nConjecture on training dynamics of CNNs. From Figure 3 and the observations above, we conjecture that the training of a CNN has two phases. 1) At the beginning of training, the softmax cross-entropy loss first optimizes the angles among different classes while the norm fluctuates and increases very slowly.
We argue that this is because changing the norm does not decrease the loss when the angles are not separated enough for correct classification. As a result, the angles get optimized first. 2)\nAs training continues, the angles become more stable and change very slowly while the norm increases rapidly. On the one hand, for easy examples, once the angles have decreased enough for correct classification, the softmax cross-entropy loss can be minimized simply by increasing the norm. On the other hand, for hard examples, the plateau is caused by the CNN being unable to decrease the angle enough to classify them correctly, and it is thereby also unable to increase the norms (because doing so might otherwise increase the loss)." }, { "heading": "5 APPLICATION TO SELF-TRAINING FOR DOMAIN ADAPTATION", "text": "Unsupervised domain adaptation [1] presents an important transfer learning problem with wide applications. Deep self-training [33] recently emerged as a powerful framework for addressing this problem [53, 57, 73, 74]. Here we show the application of AVH as an improved confidence measure in self-training that can significantly benefit the domain adaptation task.\nDataset: We conduct experiments on the VisDA-17 [50] dataset, which is a widely used benchmark for domain adaptation in image classification. The dataset contains a total of 152,409 2D synthetic images from 12 categories in the source training set, and 55,400 real images from MS-COCO [36] with the same set of categories as the target domain validation set. We follow the protocol of previous works to train a source model with the synthetic training set, and report the model performance on the target validation set upon adaptation.\nBaseline: We choose class-balanced self-training (CBST) [73] as our starting self-training baseline considering its good performance on domain adaptation. We also compare our model with confidence regularized self-training (CRST)1 [74], a more recent state-of-the-art self-training framework improved over CBST by regularizing the network predictions/pseudo-labels for smoothness. Our work follows the exact implementation of CBST/CRST in [74].\nFormally, given the labeled source domain training set $x_s \in X_S$ and the unlabeled target domain data $x_t \in X_T$, with known source labels $y_s = (y_s^{(1)}, \ldots, y_s^{(K)}) \in Y_S$ and unknown target labels $\hat{y}_t = (\hat{y}_t^{(1)}, \ldots, \hat{y}_t^{(K)}) \in \hat{Y}_T$ from $K$ classes, CBST performs joint network learning and pseudo-label estimation by treating pseudo-labels as discrete learnable latent variables with the following loss:\n$$\min_{w, \hat{Y}_T} \mathcal{L}_{CB}(w, \hat{Y}) = -\sum_{s \in S} \sum_{k=1}^{K} y_s^{(k)} \log p(k|x_s; w) - \sum_{t \in T} \sum_{k=1}^{K} \hat{y}_t^{(k)} \log \frac{p(k|x_t; w)}{\lambda_k} \quad \text{s.t. } \hat{y}_t \in E^K \cup \{0\}, \; \forall t \qquad (1)$$\nwhere the feasible set of pseudo-labels is the union of $\{0\}$ and the $K$-dimensional one-hot vector space $E^K$, and $w$ and $p(k|x;w)$ represent the network weights and the classifier's softmax probability for class $k$, respectively. In addition, $\lambda_k$ serves as a class-balancing parameter controlling the pseudo-label selection of class $k$, and is determined by the softmax confidence ranked at portion $p$ (in descending order) among samples predicted to class $k$. Therefore, only one parameter $p$ is used to determine all $\lambda_k$'s.
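To make the role of $\lambda_k$ concrete, the following is a minimal sketch of how the class-wise thresholds could be derived from the single portion parameter $p$ and then used to assign pseudo-labels, as in the solver of Eq. (2) below; the array layout and function names are illustrative assumptions, not the reference implementation of [73, 74].

```python
import numpy as np

def class_balanced_thresholds(probs, p):
    """probs : (N, K) confidence scores on target samples; p : portion in (0, 1].

    For each class k, lambda_k is the confidence ranked at portion p (descending)
    among samples currently predicted as class k.
    """
    N, K = probs.shape
    pred = probs.argmax(axis=1)
    lam = np.zeros(K)
    for k in range(K):
        conf_k = np.sort(probs[pred == k, k])[::-1]   # descending confidences for class k
        if len(conf_k) > 0:
            idx = min(int(np.floor(p * len(conf_k))), len(conf_k) - 1)
            lam[k] = conf_k[idx]
    return lam

def pseudo_labels(probs, lam):
    """Eq. (2): pick k* = argmax_k probs[:, k] / lambda_k; a sample is ignored
    (returned as -1) if its score does not exceed the class threshold."""
    lam_safe = np.where(lam > 0, lam, np.inf)          # guard empty classes
    k_star = (probs / lam_safe).argmax(axis=1)
    selected = probs[np.arange(len(probs)), k_star] > lam_safe[k_star]
    return np.where(selected, k_star, -1)
```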
The optimization problem in (1) can be solved by minimizing alternately with respect to $w$ and $\hat{Y}$, and the solver of $\hat{Y}$ can be written as:\n$$\hat{y}_t^{(k)*} = \begin{cases} 1, & \text{if } k = \arg\max_k \big\{ \frac{p(k|x_t;w)}{\lambda_k} \big\} \text{ and } p(k|x_t;w) > \lambda_k \\ 0, & \text{otherwise} \end{cases} \qquad (2)$$\nThe optimization with respect to $w$ is simply normal network re-training with source labels and estimated pseudo-labels, and the complete self-training process alternates between network re-training and pseudo-label estimation.\nCBST+AVH: We seek to improve the pseudo-label solver with a better confidence measure derived from AVH. To this end, we propose the following definition of angular visual confidence (AVC) to represent the predicted probability of class $c$:\n$$AVC(c|x;w) = \frac{\pi - \mathcal{A}(x, w_c)}{\sum_{k=1}^{K} \big(\pi - \mathcal{A}(x, w_k)\big)}, \qquad (3)$$\n1We consider MRKLD+LRENT, which is reported to be the best-performing variant in [74].\nand the pseudo-label estimation in CBST+AVH is accordingly defined as:\n$$\hat{y}_t^{(k)*} = \begin{cases} 1, & \text{if } k = \arg\max_k \big\{ \frac{AVC(k|x_t;w)}{\lambda_k} \big\} \text{ and } AVC(k|x_t;w) > \lambda_k \\ 0, & \text{otherwise} \end{cases} \qquad (4)$$\nwhere $\lambda_k$ is instead determined by the $AVC(k|x_t;w)$ value ranked at portion $p$ among samples predicted to class $k$ by AVH, following a definition similar to that of $\lambda_k$ in CBST. In addition, network re-training in CBST+AVH follows the softmax self-training loss in (1).\nOne can see that AVH changes the self-training behavior in two ways through the conditions in (4). Improved selection: this is determined by $AVC(k|x_t;w) > \lambda_k$. Improved classification: this is determined by $k = \arg\max_k \{ AVC(k|x_t;w) / \lambda_k \}$. Specifically, the former determines which samples are not ignored during self-training based on AVC, whereas the latter determines the pseudo-label class by taking the argmax over normalized AVC scores. With calibrated confidence that better resembles human visual hardness, both aspects are likely to considerably influence the performance of self-training.\nExperimental Results: We present the results of the proposed method in Table 3, and also show its performance with respect to different self-training epochs in Figure 4. One can see that CBST+AVH outperforms both CBST and CRST by a very significant margin. We would like to emphasize that this is a very compelling result under an "apples-to-apples" comparison with the same source model, implementation and hyper-parameters.\nAnalysis: A major challenge of self-training is the amplification of error due to misclassified pseudo-labels. Therefore, traditional self-training methods such as CBST often use model confidence as the confidence measure to select confidently labeled examples. The hope is that higher confidence implies a lower error rate. While this generally proves useful, the model tends to focus on the "less informative" samples while ignoring the "more informative", harder ones near classifier boundaries that could be essential for learning a stronger classifier.\nAn advantage we observe from AVH is that the improved calibration leads to more frequent sampling of harder samples, while the pseudo-label classification on these hard samples generally outperforms the softmax results. Table 2 shows some statistics of the examples selected with AVH and with Model Confidence at the beginning of the training process. The true positive rate (TP Rate) of CBST+AVH remains similar to that of CBST/CRST, indicating that AVH does not introduce more noisy examples overall compared to model confidence. On the other hand, the average model confidence of the samples selected by AVH is lower, indicating that more of the selected hard samples lie close to the decision boundary. The average sample norm under AVH is also lower, confirming the influence of the sample norm on the ultimate model confidence.
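As an illustration of Eqs. (3)–(4), the following is a minimal sketch of the AVC score; it reuses the hypothetical class-balanced thresholding helpers sketched earlier, with AVC simply replacing the softmax probability, and all names and shapes remain illustrative assumptions.

```python
import numpy as np

def avc_scores(X, W):
    """Eq. (3): angular visual confidence for every sample and class.

    X : (N, d) target-domain feature embeddings; W : (K, d) classifier weights.
    Returns an (N, K) matrix whose rows sum to 1.
    """
    cos = (X @ W.T) / (np.linalg.norm(X, axis=1, keepdims=True)
                       * np.linalg.norm(W, axis=1) + 1e-12)
    angles = np.arccos(np.clip(cos, -1.0, 1.0))   # A(x, w_k)
    scores = np.pi - angles                       # pi - A(x, w_k)
    return scores / scores.sum(axis=1, keepdims=True)

# Eq. (4) then follows the CBST solver sketched above with AVC in place of
# the softmax probability, e.g. (p = 0.2 is a hypothetical portion value):
#   avc = avc_scores(X_target, W)
#   lam = class_balanced_thresholds(avc, p=0.2)
#   labels = pseudo_labels(avc, lam)
```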
" }, { "heading": "6 EXTENSIONS AND APPLICATIONS", "text": "Adversarial Example: A Counter Example? Our claim about the strong correlation between the AVH score and human visual hardness does not apply to non-natural images such as adversarial examples. For such examples, humans cannot tell the difference visually, yet the adversarial example has a worse AVH than the original image, which runs counter to our claim that AVH has a strong correlation with human visual hardness. The claim is therefore limited to the distribution of natural images. On a positive note, however, we do find that AVH changes more slowly than the embedding norm during the dynamics of adversarial training. See the Appendix for details.\nConnection to deep metric learning: Measuring the hardness of samples is also of great importance in the field of deep metric learning [49, 60, 66]. For instance, objective functions in deep metric learning consist of, e.g., the triplet loss [56] or the contrastive loss [22], which require data pair/triplet mining in order to perform well in practice. Among the most widely used data sampling strategies are semi-hard negative sample mining [56] and hard negative sample mining. These negative sample mining techniques highly depend on how one defines the hardness of samples. AVH can be potentially useful in this setting.\nConnections to fairness in machine learning: Easy and hard samples can implicitly reflect imbalances in latent attributes of the dataset. For example, the CASIA-WebFace dataset [68] mostly contains white celebrities, so a neural network trained on CASIA-WebFace is highly biased against other races. [4] demonstrates a performance drop on faces of darker-skinned people due to biases in the training dataset. In order to ensure fairness and remove dataset biases, the ability to identify hard samples automatically can be very useful. We would like to test whether AVH is effective in these settings.\nConnections to knowledge transfer and curriculum learning: The efficiency of knowledge transfer [25] is partially determined by the sequence of input training data. [38] theoretically shows that feeding easy samples first and hard samples later (known as curriculum learning) can improve the convergence of the model. [2] also shows that the curriculum of feeding training samples matters in terms of both accuracy and convergence. We plan to investigate the use of the AVH metric in such settings." }, { "heading": "7 CONCLUDING REMARKS", "text": "Human perception and deep neural networks in general have different notions of visual hardness. Our paper studies the gap between them, and attempts to bridge this gap by proposing a novel measure for CNN models known as angular visual hardness. Our comprehensive empirical studies show that AVH has many nice properties. First, AVH has a strong correlation with human selection frequency and image degradation level. Second, this holds across different network architectures and throughout the training process. Third, AVH can serve as an indicator of the generalization abilities of neural networks, and improving state-of-the-art accuracy entails improving accuracy on hard examples.
We also empirically show the large advantage of AVH over Model Confidence in self-training for the domain adaptation task. Designing an appropriate loss function that focuses on improving AVH during training remains an open problem. AVH can be very useful in other applications such as deep metric learning, fairness, and knowledge transfer, and we plan to investigate them in future work." }, { "heading": "Appendix", "text": "" }, { "heading": "A ADDITIONAL EXPERIMENTS", "text": "" }, { "heading": "A.1 ADDITIONAL RESULTS FOR CORRELATION TESTINGS", "text": "In order to run rigorous correlation tests, besides computing the Spearman coefficient, we provide additional results for the Pearson and Kendall Tau correlation coefficients. Moreover, we show results for all four architectures, AlexNet, VGG19, ResNet50 and DenseNet121, in Tables 4, 5, 6 and 7, respectively, to support our claims in Section 3.2." }, { "heading": "A.2 ADDITIONAL RESULTS FOR THE HYPOTHESIS", "text": "Definition 4 (Image Degradation Level). We define another way to measure human visual hardness on pictures as Image Degradation Level. We consider two degradation methods in this paper, decreasing contrast and adding noise. Quantitatively, the Image Degradation Level for decreasing contrast is directly the contrast level, and the Image Degradation Level for adding noise is the amount of pixel-wise additive uniform noise.\nHypothesis: AVH has a strong correlation with Image Degradation Level\nIn order to test whether the results from Hypothesis 1 hold for another proxy of human visual hardness, Image Degradation Level, we perform similar experiments on the augmented ImageNet validation set. The plots in Figure 5 show the strong correlation between AVH(x) and Noise Degradation Level, while the plots in Figure 6 present the strong correlation between AVH(x) and Contrast Degradation Level. They, along with Figure 7, demonstrate that AVH(x) strongly correlates with Human Visual Hardness. Additional plots for DenseNet121 are shown in Figure 8.\nHypothesis: ‖x‖2 has a correlation with Human Selection Frequency We further verify whether presenting all samples across 1,000 different classes affects the visualization of the correlation. According to the WordNet [13] hierarchy, we map the original 1,000 fine-grained classes to 45 higher-level classes. Figure 9 shows the relationship between Human Selection Frequency and ‖x‖2 for three representative higher-level classes containing 58, 7, and 1 fine-grained classes, respectively. Note that there is still no visible direct proportionality between these two variables across any of the plots.\nHypothesis: AVH has a correlation with Human Selection Frequency Additional results for DenseNet121 are shown in Figure 8." }, { "heading": "A.3 ADDITIONAL EXPERIMENTS FOR OBSERVING DYNAMICS ON MNIST", "text": "Figure 12 illustrates how the average norm of the feature embeddings and the angles between the feature and class embeddings of test samples vary over 60 iterations of training. The average norm increases with a large initial slope but flattens slightly after 10 iterations. On the other hand, the average angle decreases sharply at the beginning and then becomes almost flat after 10 iterations.\nMoreover, we explore the difference between the norm and angle changes for examples that are easy and hard for humans in more detail. Figure 13 also plots the angle and norm changes, over the training phase, for two examples, one hard and one easy for humans.
Note that both examples are test data and have therefore never been fit by the model. For the angle, both drop sharply initially, and the angle of the easy example converges to a much lower value. For the norm, both increase drastically at an early stage, but the norm of the harder example keeps climbing even after that of the easy one saturates." }, { "heading": "A.4 ADDITIONAL EXPERIMENTS FOR TRAINING DYNAMICS ON CIFAR10 AND CIFAR100", "text": "Figures 14 and 15 show the dynamics of the average ℓ2 norm of the embeddings and the average AVH(x) on the CIFAR10 and CIFAR100 datasets, respectively. We observe phenomena similar to those discussed in Section 4. This further supports the theoretical foundation from [61] that gradient descent converges in the same direction as maximum-margin solutions, irrespective of the ℓ2 norm of the classifier weights or feature embeddings." }, { "heading": "A.5 ADDITIONAL EXPERIMENTS FOR TRAINING DYNAMICS ON IMAGENET", "text": "Figure 17 presents how the average ‖x‖2 and the accuracy of validation samples vary over 90 epochs of training on AlexNet, VGG19, DenseNet121 and ResNet50. In Figures 16 and 18, we average over validation samples in five human selection frequency bins separately, and then compute the average embedding norm, AVH and model accuracies. In Figures 11 and 19, we average over validation samples in five image noise degradation level bins separately, and then compute the same quantities. In Figures 10 and 20, we average over validation samples in five image contrast degradation level bins separately, and again compute the same quantities. Figure 21 shows the training dynamics of the model confidence on AlexNet, VGG19 and ResNet50." }, { "heading": "B A SPECIAL CASE: ADVERSARIAL EXAMPLES", "text": "We show a special case in Figure 22 to illustrate how the norm and the angle change when one sample switches from one class to another. Specifically, we change the sample from one class to another using adversarial perturbation, which essentially performs gradient ascent on the loss of the ground-truth class. In Figure 22, the purple line denotes the trajectory of an adversarial sample switching from one class to another. We can see that the sample first shrinks its norm towards the origin and then pushes its angle away from the ground-truth class. Such a trajectory indicates that the adversarial sample first approaches the origin in order to become a hard sample for its class, and then changes its angle in order to switch its label. This special example underlines the importance of both norm and angle for the hardness of samples." }, { "heading": "C MORE DISCUSSIONS ON THE DIFFERENCE BETWEEN AVH AND MODEL CONFIDENCE", "text": "The difference between AVH and model confidence lies in the feature norm and its role during training. To illustrate the difference, we consider a simple binary classification case where the softmax score (i.e., model confidence) for class 1 is\n$$\frac{\exp(w_1^\top x)}{\sum_i \exp(w_i^\top x)} = \frac{\exp\big(\|w_1\|\,\|x\|\cos(\theta_{w_1,x})\big)}{\sum_i \exp\big(\|w_i\|\,\|x\|\cos(\theta_{w_i,x})\big)}$$\nwhere $w_i$ denotes the classifier weights of class $i$, $x$ is the input deep feature and $\theta_{w_i,x}$ is the angle between $w_i$ and $x$. To simplify, we assume the norms of $w_1$ and $w_2$ are the same, so that the classification result depends only on the angle. Once $\theta_{w_1,x}$ is smaller than $\theta_{w_2,x}$, the network classifies the sample $x$ as class 1.
However, in order to further minimize the cross-entropy loss after making $\theta_{w_1,x}$ smaller than $\theta_{w_2,x}$, the network has a trivial solution: increasing the feature norm ‖x‖ instead of further decreasing $\theta_{w_1,x}$. Minimizing $\theta_{w_1,x}$ is obviously a much more difficult task than increasing ‖x‖. Therefore, the network tends to increase the feature norm ‖x‖ to minimize the cross-entropy loss, which is equivalent to maximizing the model confidence for class 1. In fact, this also matches our empirical observation that the feature norm keeps increasing during training. Most importantly, one can notice that AVH stays unchanged no matter how large the feature norm ‖x‖ is. This also matches our empirical result that AVH easily saturates while model confidence can keep improving. Therefore, AVH is able to better characterize visual hardness, since it is trivial for the network to increase the feature norm. This is the fundamental difference between model confidence and AVH.\nTo get a more intuitive sense of how the feature norm can affect the model confidence, we plot the value of the model confidence for two scenarios: $\theta_{w_1,x} < \theta_{w_2,x}$ and $\theta_{w_1,x} > \theta_{w_2,x}$. In the case where the sample $x$ belongs to class 1, once $\theta_{w_1,x} < \theta_{w_2,x}$, we only need to increase the feature norm to easily obtain nearly perfect confidence on this sample. In contrast, AVH stays unchanged during the entire process and is therefore a more robust indicator of visual hardness than model confidence." } ]
2019
null
SP:32ff67eb5376e6c8c52c5adc601c520abc9a648c
[ "This paper proposes a new method for source separation, by using deep learning UNets, complex-valued representations and the Fourier domain. Concretely, their contribution is : i) a complex-valued convolutional version of the Feature-Wise Linear Modulation, able to optimise the parameters needed to create multiple separated candidates for each of signal sources that are then combined using signal averaging; ii) the design of a loss that takes into account magnitude and phase while being scale and time invariant. It was then tested and compared with real-valued versions, and also some state-of-the-art methods.", "This work researches the deep complex-valued neural networks. Specifically, it proposes a new signal extraction mechanism that operates in frequency domain and applies to address the speech separation issue. Also, a function is proposed to explicitly consider both the magnitude and phase information of a signal. Related work on learning representation in frequency domain and speech separation is well introduced. Theoretical analysis is conducted to show the motivation and connection to signal processing. The architecture of the deep neural networks is presented in details, with the elaboration of the complex mask generation. Experimental study is conducted on a benchmark dataset to compare the proposed complex networks with those using real-part values only to demonstrate the improvement. " ]
Recent advances have made it possible to create deep complex-valued neural networks. Despite this progress, the potential power of fully complex intermediate computations and representations has not yet been explored for many challenging learning problems. Building on recent advances, we propose a novel mechanism for extracting signals in the frequency domain. As a case study, we perform audio source separation in the Fourier domain. Our extraction mechanism could be regarded as a local ensembling method that combines a complex-valued convolutional version of Feature-Wise Linear Modulation (FiLM) and a signal averaging operation. We also introduce a new explicit amplitude and phase-aware loss, which is scale and time invariant, taking into account the complex-valued components of the spectrogram. Using the Wall Street Journal Dataset, we compare our phase-aware loss to several others that operate both in the time and frequency domains and demonstrate the effectiveness of our proposed signal extraction method and proposed loss. When operating in the complex-valued frequency domain, our deep complex-valued network substantially outperforms its real-valued counterparts even with half the depth and a third of the parameters. Our proposed mechanism significantly improves the performance of deep complex-valued networks and we demonstrate the usefulness of its regularizing effect.
[]
[ { "authors": [ "Martin Arjovsky", "Amar Shah", "Yoshua Bengio" ], "title": "Unitary evolution recurrent neural networks", "venue": null, "year": 2015 }, { "authors": [ "Martin Arjovsky", "Amar Shah", "Yoshua Bengio" ], "title": "Unitary evolution recurrent neural networks", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Shaojie Bai", "J Zico Kolter", "Vladlen Koltun" ], "title": "An empirical evaluation of generic convolutional and recurrent networks for sequence modeling", "venue": "arXiv preprint arXiv:1803.01271,", "year": 2018 }, { "authors": [ "Yoshua Bengio", "Patrice Simard", "Paolo Frasconi" ], "title": "Learning long-term dependencies with gradient descent is difficult", "venue": "IEEE transactions on neural networks,", "year": 1994 }, { "authors": [ "Joan Bruna", "Wojciech Zaremba", "Arthur Szlam", "Yann LeCun" ], "title": "Spectral networks and locally connected networks on graphs", "venue": "arXiv preprint arXiv:1312.6203,", "year": 2013 }, { "authors": [ "Zhuo Chen", "Yi Luo", "Nima Mesgarani" ], "title": "Deep attractor network for single-microphone speaker separation", "venue": null, "year": 2016 }, { "authors": [ "Hyeong-Seok Choi", "Janghyun Kim", "Jaesung Huh", "Adrian Kim", "Jung-Woo Ha", "Kyogu Lee" ], "title": "Phase-aware speech enhancement with deep complex u-net", "venue": null, "year": 1903 }, { "authors": [ "Ivo Danihelka", "Greg Wayne", "Benigno Uria", "Nal Kalchbrenner", "Alex Graves" ], "title": "Associative long short-term memory", "venue": null, "year": 2016 }, { "authors": [ "Muneer Ahmad Dedmari", "Sailesh Conjeti", "Santiago Estrada", "Phillip Ehses", "Tony Stöcker", "Martin Reuter" ], "title": "Complex fully convolutional neural networks for mr image reconstruction", "venue": "In International Workshop on Machine Learning for Medical Image Reconstruction,", "year": 2018 }, { "authors": [ "Michaël Defferrard", "Xavier Bresson", "Pierre Vandergheynst" ], "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Michal Drozdzal", "Eugene Vorontsov", "Gabriel Chartrand", "Samuel Kadoury", "Chris Pal" ], "title": "The importance of skip connections in biomedical image segmentation", "venue": "In Deep Learning and Data Labeling for Medical Applications,", "year": 2016 }, { "authors": [ "J. Du", "Y. Tu", "Y. Xu", "L. Dai", "C. Lee" ], "title": "Speech separation of a target speaker based on deep neural networks", "venue": "In Proc. of ICSP,", "year": 2014 }, { "authors": [ "Ariel Ephrat", "Inbar Mosseri", "Oran Lang", "Tali Dekel", "Kevin Wilson", "Avinatan Hassidim", "William T. Freeman", "Michael Rubinstein" ], "title": "Looking to listen at the cocktail party: A speaker-independent audio-visual model for speech separation", "venue": null, "year": 2018 }, { "authors": [ "H. Erdogan", "J.R. Hershey", "S. Watanabe", "J. Le Roux" ], "title": "Phase-sensitive and recognition-boosted speech separation using deep recurrent neural networks", "venue": "In Proc. 
of ICASSP,", "year": 2015 }, { "authors": [ "Ruohan Gao", "Rogério Schmidt Feris", "Kristen Grauman" ], "title": "Learning to separate object sounds by watching unlabeled video", "venue": null, "year": 2018 }, { "authors": [ "George M Georgiou", "Cris Koutsougeras" ], "title": "Complex domain backpropagation", "venue": "IEEE transactions on Circuits and systems II: analog and digital signal processing,", "year": 1992 }, { "authors": [ "Klaus Greff", "Rupesh K Srivastava", "Jürgen Schmidhuber" ], "title": "Highway and residual networks learn unrolled iterative estimation", "venue": "arXiv preprint arXiv:1612.07771,", "year": 2016 }, { "authors": [ "D. Griffin", "Jae Lim" ], "title": "Signal estimation from modified short-time fourier transform", "venue": "IEEE Transactions on Acoustics, Speech, and Signal Processing,", "year": 1984 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Identity mappings in deep residual networks", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "John R. Hershey", "Zhuo Chen", "Jonathan Le Roux", "Shinji Watanabe" ], "title": "Deep clustering: Discriminative embeddings for segmentation and separation", "venue": null, "year": 2015 }, { "authors": [ "Akira Hirose" ], "title": "Complex-valued neural networks: theories and applications", "venue": "World Scientific,", "year": 2003 }, { "authors": [ "Sepp Hochreiter" ], "title": "Untersuchungen zu dynamischen neuronalen Netzen", "venue": "PhD thesis,", "year": 1991 }, { "authors": [ "Guoning Hu", "DeLiang Wang" ], "title": "Monaural speech segregation based on pitch tracking and amplitude modulation", "venue": "Trans. Neur. 
Netw.,", "year": 2004 }, { "authors": [ "Po-Sen Huang", "Kim Minje", "Mark Hasegawa-Johnson", "Paris Smaragdis" ], "title": "Deep learning for monaural speech separation", "venue": null, "year": 2014 }, { "authors": [ "Stanisław Jastrzebski", "Devansh Arpit", "Nicolas Ballas", "Vikas Verma", "Tong Che", "Yoshua Bengio" ], "title": "Residual connections encourage iterative inference", "venue": "arXiv preprint arXiv:1710.04773,", "year": 2017 }, { "authors": [ "Cijo Jose", "Moustpaha Cisse", "Francois Fleuret" ], "title": "Kronecker recurrent units", "venue": "arXiv preprint arXiv:1705.10142,", "year": 2017 }, { "authors": [ "Taehwan Kim", "Tülay Adalı" ], "title": "Approximation by fully complex multilayer perceptrons", "venue": "Neural computation,", "year": 2003 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "arXiv preprint arXiv:1609.02907,", "year": 2016 }, { "authors": [ "Jonathan Le Roux", "Gordon Wichern", "Shinji Watanabe", "Andy Sarroff", "John R Hershey" ], "title": "Phasebook and friends: Leveraging discrete representations for source separation", "venue": "IEEE Journal of Selected Topics in Signal Processing,", "year": 2019 }, { "authors": [ "Yuan-Shan Lee", "Chien-Yao Wang", "Shu-Fan Wang", "Jia-Ching Wang", "Chung-Hsien Wu" ], "title": "Fully complex deep neural network for phase-incorporating monaural source separation", "venue": "ICASP,", "year": 2017 }, { "authors": [ "Yi Luo", "Nima Mesgarani" ], "title": "Tasnet: time-domain audio separation network for real-time, singlechannel speech separation", "venue": null, "year": 2017 }, { "authors": [ "Yi Luo", "Nima Mesgarani" ], "title": "Tasnet: Surpassing ideal time-frequency masking for speech separation", "venue": "arXiv preprint arXiv:1809.07454,", "year": 2018 }, { "authors": [ "Yurii Nesterov" ], "title": "A method of solving a convex programming problem with convergence rate o (1/k2)", "venue": null, "year": 1983 }, { "authors": [ "Tohru Nitta" ], "title": "Orthogonality of decision boundaries in complex-valued neural networks", "venue": "Neural Computation,", "year": 2004 }, { "authors": [ "Ethan Perez", "Florian Strub", "Harm De Vries", "Vincent Dumoulin", "Aaron Courville" ], "title": "Film: Visual reasoning with a general conditioning", "venue": null, "year": 2018 }, { "authors": [ "Tony Plate" ], "title": "Holographic reduced representations: Convolution algebra for compositional distributed representations", "venue": "In IJCAI, pp", "year": 1991 }, { "authors": [ "Tony A Plate" ], "title": "Holographic reduced representations", "venue": "IEEE Transactions on Neural networks,", "year": 1995 }, { "authors": [ "David P Reichert", "Thomas Serre" ], "title": "Neuronal synchrony in complex-valued deep networks", "venue": "ICLR,", "year": 2014 }, { "authors": [ "Oren Rippel", "Jasper Snoek", "Ryan P Adams" ], "title": "Spectral representations for convolutional neural networks", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Olaf Ronneberger", "Philipp Fischer", "Thomas Brox" ], "title": "U-net: Convolutional networks for biomedical image segmentation", "venue": null, "year": 2015 }, { "authors": [ "Ziqiang Shi", "Huibin Lin", "Liu Liu", "Rujie Liu", "Jiqing Han" ], "title": "Furcanext: End-to-end monaural speech separation with dynamic gated dilated temporal convolutional networks", "venue": null, "year": 1902 }, { "authors": [ "N. Sturmel", "L. 
Daudet" ], "title": "Signal reconstruction from stft magnitude: A state of the art", "venue": "In In Proc. of the International conference on digital audio effects,", "year": 2006 }, { "authors": [ "Shrikant Venkataramani", "Paris Smaragdis" ], "title": "End-to-end networks for supervised single-channel speech", "venue": "separation. CoRR,", "year": 2018 }, { "authors": [ "Emmanuel Vincent", "Rémi Gribonval", "Cédric Févotte" ], "title": "Performance measurement in blind audio source separation", "venue": "IEEE Trans. Audio, Speech & Language Processing,", "year": 2006 }, { "authors": [ "Zhong-Qiu Wang", "Jonathan Le Roux", "DeLiang Wang", "John R. Hershey" ], "title": "End-to-end speech separation with unfolded iterative phase reconstruction", "venue": null, "year": 2018 }, { "authors": [ "Chao Weng", "Dong Yu", "Michael L Seltzer", "Jasha Droppo" ], "title": "Deep neural networks for singlechannel multi-talker speech recognition", "venue": "IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP),", "year": 2015 }, { "authors": [ "Moritz Wolter", "Angela Yao" ], "title": "Fourier rnns for sequence analysis and prediction", "venue": "arXiv preprint arXiv:1812.05645,", "year": 2018 }, { "authors": [ "Moritz Wolter", "Angela Yao" ], "title": "Gated complex recurrent neural networks", "venue": "arXiv preprint arXiv:1806.08267,", "year": 2018 }, { "authors": [ "Dong Yu", "Morten Kolbæk", "Zheng-Hua Tan", "Jesper Jensen" ], "title": "Permutation invariant training of deep models for speaker-independent multi-talker speech separation", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2017 }, { "authors": [ "Richard S Zemel", "Christopher KI Williams", "Michael C Mozer" ], "title": "Lending direction to neural networks", "venue": null, "year": 1995 }, { "authors": [ "Jiong Zhang", "Yibo Lin", "Zhao Song", "Inderjit S Dhillon" ], "title": "Learning long term dependencies via fourier recurrent units", "venue": "arXiv preprint arXiv:1803.06585,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Complex-valued neural networks have been studied since long before the emergence of modern deep learning techniques (Georgiou & Koutsougeras, 1992; Zemel et al., 1995; Kim & Adalı, 2003; Hirose, 2003; Nitta, 2004). Nevertheless, deep complex-valued models have only started to gain momentum (Reichert & Serre, 2014; Arjovsky et al., 2015; Danihelka et al., 2016; Trabelsi et al., 2017; Jose et al., 2017; Wolter & Yao, 2018b; Choi et al., 2019), with the great majority of models in deep learning still relying on real-valued representations. The motivation for using complex-valued representations for deep learning is twofold: On the one hand, biological nervous systems actively make use of synchronization effects to gate signals between neurons – a mechanism that can be recreated in artificial systems by taking into account phase differences (Reichert & Serre, 2014). On the other hand, complex-valued representations are better suited to certain types of data, particularly those that are naturally expressed in the frequency domain.\nOther benefits provided by working with complex-valued inputs in the spectral or frequency domain are computational. In particular, short-time Fourier transforms (STFTs) can be used to considerably reduce the temporal dimension of the representation for an underlying signal. This is a critical advantage, as training recurrent neural networks (RNNs) or convolutional neural networks (CNNs) on long sequences remains challenging due to unstable gradients and the computational requirements of backpropagation through time (BPTT) (Hochreiter, 1991; Bengio et al., 1994). Applying the STFT on the raw signal, on the other hand, is computationally efficient, as in practice it is implemented with the fast Fourier transform (FFT) whose computational complexity is O(n log(n)). The aforementioned biological, representational and computational considerations provide compelling motivations for designing learning models for tasks where the complex-valued representation of the input and output data is more desirable than their real-counterpart. Recent work has provided building blocks for deep complex-valued neural networks (Trabelsi et al., 2017). These building blocks have been shown, in many cases, to avoid numerical problems during training and, thereby, enable the\nuse of complex-valued representations. These representations are well-suited for frequency domain signals, as they have the ability to explicitly encode frequency magnitude and phase components. This motivates us to design a new signal extraction mechanism operating in the frequency domain. In this work, our contributions are summarized as follows:\n1. We present a new signal separation mechanism implementing a local ensembling procedure. More precisely, a complex-valued convolutional version of Feature-wise Linear Modulation (FiLM) (Perez et al., 2018) is used to create multiple separated candidates for each of the signals we aim to retrieve from a mixture of inputs. A signal averaging operation on the candidates is then performed in order to increase the robustness of the signal to noise and interference. Before the averaging procedure, a form of dropout is implemented on the signal candidates in order to reduce the amount of interference and noise correlation existing between the different candidates.\n2. We propose and explore a new magnitude and phase-aware loss taking explicitly into account the magnitude and phase of signals. 
A key characteristic of our loss is that it is scale- and time-invariant.\nWe test our proposed signal extraction mechanism in the audio source separation setting, where we aim to retrieve the distinct audio signals associated with each speaker in the input mix. Our experiments demonstrate the usefulness of our extraction method and show its regularizing effect." }, { "heading": "2 RELATED WORK", "text": "" }, { "heading": "2.1 RELATED WORK ON LEARNING REPRESENTATIONS IN THE FOURIER DOMAIN", "text": "Leveraging the Convolution Theorem to retrieve information was explored decades ago in the machine learning community using holographic reduced representations (HRRs) in the context of associative memories (Plate, 1991; 1995). HRRs enable one to store key-value data. Retrieval of a value in the data associated with a given key can be performed by convolving the whole data with the key or by applying an inner product between the two. By applying a fast Fourier transform (FFT) to the keys and the data, one can perform elementwise multiplication between the Fourier transforms and apply an inverse FFT to convert the result to the time domain. This is equivalent to performing circular convolution between the key and the data in the time domain, and has the advantage of being less expensive. Recently, Danihelka et al. (2016) used associative memories to augment the capacity of LSTMs and to increase their robustness to noise and interference. For that, they applied independent permutations to the memory to create multiple copies of it. This yields decorrelated noise in each of the permuted copies. A complex multiplication is then performed between the key and each of the copies. Signal averaging over the resulting multiplications eliminates the decorrelated noise and strengthens the signal-to-noise ratio (SNR) of the retrieved signal. Danihelka et al. (2016), however, did not rely on FFTs to convert the temporal signals to the frequency domain. In fact, they assumed that complex-valued multiplication between the key and the data is itself enough to perform retrieval, and that for each input representation the first half is real and the second half is imaginary.\nDuring this decade, interest in Fourier domain representations has started to grow in the machine learning community. Bruna et al. (2013) introduced a generalization of convolutions to graphs using the Graph Fourier Transform, which is defined as the multiplication of a graph signal by the eigenvector matrix of the graph Laplacian. However, the computation of the eigenvector matrix is expensive. Recently, methods that are computationally more efficient have been introduced in Defferrard et al. (2016) and Kipf & Welling (2016) to avoid an explicit use of the Graph Fourier basis. In the context of Convolutional Neural Networks (CNNs), Rippel et al. (2015) introduced spectral pooling, which allows one to perform pooling in the frequency domain. This allows one to maintain the output spatial dimensionality, and thus the technique can retain significantly more information than other pooling approaches. Rippel et al. (2015) also observed that the parametrization of the convolution filters in the Fourier domain induces faster convergence during training. Arjovsky et al. (2016) designed a recurrent neural network (RNN) where the transition hidden matrix is unitary.
More precisely, the hidden transition matrix is constructed using the product of specific unitary transformations such as diagonal matrices, permutations, rotations, the Discrete Fourier Transform and its inverse. This allows one to preserve the norm of the hidden state and, as a consequence, mitigates the problem of vanishing and exploding gradients. Wolter & Yao (2018a) designed an RNN where the input is converted to the frequency domain using a Short Time Fourier Transform (STFT). The output is converted back to the time domain by applying an inverse STFT. Zhang et al. (2018) proposed a Fourier Recurrent Unit (FRU) and showed that the FRU has gradient lower and upper bounds independent of the temporal dimension. They also demonstrated the great expressivity of the sparse Fourier basis from which the FRU draws its power. As we consider the task of speech separation as a case study, we provide a related work section on both time domain and frequency domain speech separation methods in Section 2.2." }, { "heading": "2.2 RELATED WORK ON TIME DOMAIN AND FREQUENCY DOMAIN SPEECH SEPARATION", "text": "Speech separation has been the subject of extensive study within the audio processing literature for a considerable amount of time. Recently, there has been growing interest in leveraging deep learning techniques (Du et al., 2014; Huang et al., 2014; Hershey et al., 2015; Gao et al., 2018; Ephrat et al., 2018) to tackle the speech separation problem. Hershey et al. (2015) proposed a deep clustering approach to speech separation. The basic idea is to learn high-dimensional embeddings of the mixture signals, which are later exploited to separate the speech targets with standard clustering techniques. A recent attempt to extend deep clustering led to the deep attractor network proposed by Chen et al. (2016). Similarly to deep clustering, high-dimensional embeddings are learned, but the network also creates the so-called “attractors" to better cluster time-frequency points dominated by different speakers. The aforementioned approaches estimate only the magnitude of the STFTs and reconstruct the time-domain signal with the Griffin-Lim algorithm (Griffin & Lim, 1984) or other similar procedures (Sturmel & Daudet, 2006). Other papers have recently proposed to integrate the phase information within a speech separation system without necessarily working in the complex-valued frequency domain. The work by Erdogan et al. (2015), for instance, proposes to train a deep neural network with a phase-sensitive loss. Another noteworthy attempt is described in Wang et al. (2018), where the neural network still estimates the magnitude of the spectrum, but the time-domain speech signals are retrieved by directly integrating the Griffin-Lim reconstruction into the neural layers. Furthermore, methods reported in Wang et al. (2018) integrate the phase information within a speech separation system by reconstructing the clean phase of each source starting from the estimated magnitude of each source and the mixture phase. This is fundamentally different from our proposed framework, as we provide an end-to-end solution to perform signal retrieval in the complex-valued frequency domain, and process both spectrogram magnitude and phase information rather than working only on the magnitude representation with heuristic reconstruction of the phase. Another attempt to estimate the clean phase is reported in Le Roux et al. (2019), where the clean phase of each speaker is estimated using a discrete representation.
This is also fundamentally different from our work, as it considers a discrete representation of the phase for source separation, whereas in our case we consider a continuous representation of the complex-domain signal.\nInstead of explicitly integrating phase information, other recent works perform speech separation in the time domain directly, as described in Venkataramani & Smaragdis (2018). Likewise, the TasNet architecture proposed in Luo & Mesgarani (2017) and ConvTasNet (Luo & Mesgarani, 2018) perform speech separation using the mixed time signal as input. Operating directly on the time-domain signal using the ConvTasNet architecture, which implements temporal convolutional networks (TCN) (Bai et al., 2018), has led to state-of-the-art results in audio speech separation (Luo & Mesgarani, 2018; Shi et al., 2019). The studies by Lee et al. (2017), Hu & Wang (2004) and Huang et al. (2014) are more related to our work, as they address the speech separation problem by processing the complex-valued spectral input of the mixed speech. However, this was done without leveraging the recent advances in complex-valued deep learning." }, { "heading": "3 CONNECTION TO SIGNAL PROCESSING: MOTIVATION FOR USING FILM AND SIGNAL AVERAGING", "text": "Our signal extraction method takes advantage of the convolution theorem, which states that the Fourier transform of two circularly convolved signals is the elementwise product of their Fourier transforms. It also implements a signal averaging procedure that reduces the energy of the noise in the estimates of a clean signal and thus increases their respective signal-to-noise ratios (SNR). We detail here the motivation for using FiLM (Perez et al., 2018) and how our extraction method increases the signal-to-noise ratio. Consider a clean signal $s$ corrupted by the environment impulse response $r$ and an additive noise $\varepsilon$. The corrupted signal can be expressed as $y = s \circledast r + \varepsilon$, where $\circledast$ denotes the circular convolution operator. By leveraging the convolution theorem and the linearity of the Fourier transform, we get:\n$$\mathcal{F}(y) = \mathcal{F}(s) \odot \mathcal{F}(r) + \mathcal{F}(\varepsilon), \qquad (1)$$\nwhere $\mathcal{F}$ denotes the Fourier transform and $\odot$ the complex element-wise multiplication. If we want to retrieve the spectral information of the clean signal $s$, we can express it as:\n$$\mathcal{F}(s) = \Big[ \mathcal{F}(y) \odot \frac{1}{\mathcal{F}(r)} \Big] - \frac{\mathcal{F}(\varepsilon)}{\mathcal{F}(r)}, \qquad (2)$$\nwhere $\frac{1}{\mathcal{F}(r)}$ and $-\frac{\mathcal{F}(\varepsilon)}{\mathcal{F}(r)}$ are respectively scaling and shifting representations. These representations could easily be inferred using FiLM (Perez et al., 2018), as it conditionally learns scaling $\Gamma$ and shifting $B$ representations. To be more rigorous, we can assume in the case of speech separation that, for each speaker, there exists an impulse response such that, when convolved with the clean speech of that speaker, it allows one to reconstruct the mix. We would then have:\n$$\text{mix} = s_i \circledast r_i + \varepsilon_i \quad \forall i \in \{1, \ldots, \text{Nb speakers}\} \;\Rightarrow\; \mathcal{F}(s_i) = \mathcal{F}(\text{mix}) \odot \frac{1}{\mathcal{F}(r_i)} - \frac{\mathcal{F}(\varepsilon_i)}{\mathcal{F}(r_i)} \;\Rightarrow\; \mathcal{F}(s_i) = \mathcal{F}(\text{mix}) \odot \Gamma_i + B_i. \qquad (3)$$\nNow, let us assume that $y$ is a stochastic process such that $y = x + \varepsilon$, where $\varepsilon$ is the noise component, whose mean is $E[\varepsilon] = 0$, and $x$ is the clean signal that we want to estimate, assumed constant for all observations, so that the $i$-th observation of $y$ is given by $y_i = x + \varepsilon_i$. The signal-to-noise ratio (SNR), which is a measure of signal quality, is defined as the ratio of the power of the clean signal to the power of the noise, i.e., $\mathrm{SNR} = \frac{E[|x|^2]}{E[|\varepsilon_i|^2]}$. Estimating the clean speech $x$ by approximating $E[y]$ allows one to discard the noise component, as $E[y] = x$. In that case, $\hat{x} = \frac{1}{N}\sum_{i=1}^{N}(x + \varepsilon_i) = x + \frac{1}{N}\sum_{i=1}^{N}\varepsilon_i$.
Now, let us assume that y is a stochastic process such that y = x + ε, where ε is a noise component with mean E[ε] = 0, and x is the clean signal we want to estimate, assumed constant across observations, so that the i-th observation of y is given by y_i = x + ε_i. The signal-to-noise ratio (SNR), a measure of signal quality, is defined as the ratio of the power of the clean signal to the power of the noise, i.e., SNR = E[|x|²] / E[|ε_i|²]. Estimating the clean speech x by approximating E[y] allows us to discard the noise component, since E[y] = x. In that case x̂ = (1/N) Σ_{i=1}^{N} (x + ε_i) = x + (1/N) Σ_{i=1}^{N} ε_i. The SNR of this estimate is then:

SNR = E[|x|²] / E[|(1/N) Σ_{i=1}^{N} ε_i|²] = E[|x|²] / ((1/N²) E[|Σ_{i=1}^{N} ε_i|²]).

If the ε_i are uncorrelated, E[|Σ_{i=1}^{N} ε_i|²] = Σ_{i=1}^{N} E[|ε_i|²] = N E[|ε_i|²], so that SNR = N E[|x|²] / E[|ε_i|²]. This shows that the signal averaging operation, together with uncorrelated noises, increases the SNR by a factor of N. If we want to approximate F(s_i) by performing signal averaging, we would then have:

E[F(s_i)] = F(mix) ⊙ E[Γ_i] + E[B_i]
⇒ Ê[F(s_i)] = F(mix) ⊙ Ê[Γ_i] + Ê[B_i] = F(mix) ⊙ (1/N) Σ_{j=1}^{N} Γ_ij + (1/N) Σ_{j=1}^{N} B_ij, (4)

where F(mix) is constant. In equation (4), N is the number of scaling and shifting representations generated to approximate E[Γ_i] and E[B_i], respectively.
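The factor-N SNR gain from averaging uncorrelated noisy estimates can also be checked directly. This is an illustrative NumPy sketch with i.i.d. Gaussian noise standing in for the noise components of the clean-speech candidates; the helper `empirical_snr` is ours, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_snr(clean, noisy):
    """SNR of `noisy` with respect to `clean`, in dB (illustrative helper)."""
    noise = noisy - clean
    return 10 * np.log10(np.mean(clean**2) / np.mean(noise**2))

x = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 1000))  # the fixed clean signal
N = 10                                               # number of candidates to average

# N noisy observations y_i = x + eps_i with i.i.d. (hence uncorrelated) noise.
observations = x + rng.standard_normal((N, x.size))

print("single observation SNR:", empirical_snr(x, observations[0]))
print(f"average of {N} observations SNR:", empirical_snr(x, observations.mean(axis=0)))
# The second SNR is roughly 10*log10(N) = 10 dB higher, matching the factor-N analysis.
```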
" }, { "heading": "4 AMPLITUDE AND PHASE-AWARE LOSS", "text": "In Choi et al. (2019), a weighted version of the cosine similarity is proposed in order to maximize the signal-to-distortion ratio (SDR) proposed in Vincent et al. (2006). Recall that the cosine similarity loss is defined in the real-valued domain and is given by:

cos_time(y, x) = −(Σ_i x_i ∘ y_i) / (||x|| · ||y||), (5)

where ∘ denotes element-wise real-valued multiplication. Both y and x are real-valued in the above equation, as y is the target signal in the temporal domain and x is the estimated signal after performing an inverse STFT on the spectrogram. The phase is then taken into account implicitly, as the real-valued target signal inherently encodes the phase of the spectrogram. As the task in Choi et al. (2019) is speech enhancement (which differs from ours, as we perform speech separation), the authors used a weighted version of the cos_time loss to weight the part of the loss corresponding to the speech signal and the complementary part corresponding to the noise signal. This weighting is performed according to their respective target energies. In our case, we are interested in extracting the clean speech signals of all the involved speakers, whether a speaker's signal has high or low energy in the mixture. This is why we do not penalize the retrieved speech of each speaker by its corresponding energy.

Here, we suggest the use of a loss function which explicitly takes into account both magnitude and phase. This is accomplished by computing the inner product, between the reference signal and its estimate, in the complex plane. In fact, computing the inner product in the frequency domain is equivalent to computing the cross-correlation in the time domain followed by a weighted average. The inner product in the frequency domain is then shift-invariant (time-invariant). The complex inner product between two signals is given by:

⟨x|y⟩ = Σ_j [Re(x_j)Re(y_j) + Im(x_j)Im(y_j)] + i [Re(x_j)Im(y_j) − Im(x_j)Re(y_j)]. (6)

If x and y are identical, which is equivalent to having ||x|| = ||y|| and ∠x = ∠y, then ⟨x|y⟩ = ||y||² + 0i. If x and y are parallel, then ⟨x|y⟩ / (||x|| · ||y||) = 1 + 0i = 1. The inner product between the two signals, normalized by the product of their amplitudes, is then scale- and time-invariant. We chose a loss that maximizes the real part of that normalized inner product and minimizes the square of its imaginary part. Note that each of the real and imaginary parts of the normalized inner product lies in [−1, 1]. We refer the reader to section A.1 in the appendix for more information on how the complex inner product is both amplitude and phase aware, how the real part of equation (6) is responsible for the amplitude similarity between the reference and estimated signals, and how the imaginary part of the same equation is responsible for the phase matching between them. We define the following similarity loss, denoted CSimLoss, as:

CSimLoss(x, y) = −λ_real · Re(⟨x|y⟩ / (||x|| · ||y||)) + λ_imag · Im²(⟨x|y⟩ / (||x|| · ||y||)), (7)

where λ_real and λ_imag are penalty constants; λ_real penalizes amplitude mismatch and λ_imag penalizes phase mismatch. We fixed λ_real to 1 in all our experiments. We tried different values of λ_imag ∈ {10², 10³, 10⁴} and found that the only value allowing the phase-matching part of the training loss to have the same range of values as the amplitude-matching part is λ_imag = 10⁴. All the results reported in Table 2 and Table 3 for CSimLoss correspond to λ_imag = 10⁴.
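A compact implementation of equation (7) follows. This is a sketch under the assumption that the inner product of equation (6) is the Hermitian one, ⟨x|y⟩ = Σ_j conj(x_j)·y_j; the function name and defaults are illustrative.

```python
import numpy as np

def csim_loss(x, y, lambda_real=1.0, lambda_imag=1e4):
    """Sketch of the amplitude- and phase-aware loss of eq. (7).

    x, y: complex-valued estimate and reference (np.ndarray, complex dtype).
    Assumes <x|y> = sum(conj(x) * y), which matches eq. (6) componentwise.
    """
    inner = np.sum(np.conj(x) * y)
    normalized = inner / (np.linalg.norm(x) * np.linalg.norm(y))
    return -lambda_real * normalized.real + lambda_imag * normalized.imag ** 2

# Sanity checks: identical signals reach the minimum of the real term,
# while a pure phase shift is caught by the imaginary term.
rng = np.random.default_rng(0)
y = rng.standard_normal(64) + 1j * rng.standard_normal(64)
print(csim_loss(y, y))                      # -> -1.0 (perfect amplitude and phase match)
print(csim_loss(y * np.exp(1j * 0.3), y))   # >> -1.0 (phase mismatch heavily penalized)
```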
" }, { "heading": "5 DETAILS OF THE U-NET ARCHITECTURE USED FOR SPEECH SEPARATION", "text": "We detail here the architecture we used to perform speech separation.¹ For this, we rely on the U-Net architecture proposed by Ronneberger et al. (2015) and the complex-valued building blocks proposed by Trabelsi et al. (2017). This is similar to the complex-valued U-Net architecture used in Dedmari et al. (2018), which reported state-of-the-art results in MRI reconstruction using complex-valued raw input. Our primary goal is to demonstrate that our proposed signal extraction mechanism can improve upon the performance of baseline models.

¹The source code is located at https://github.com/FourierSignalRetrievalICLR2020/FourierExtraction

For our task, we required the addition of residual connections inside the U-Net blocks and replaced complex batch normalization with complex layer normalization, as the model was otherwise unable to learn, yielding instabilities during training. The reasons why complex LayerNorm outperformed complex BatchNorm are discussed in the appendix in section A.2. We describe in section 6 how our extraction mechanism is implemented in the context of audio source separation." }, { "heading": "5.1 COMPLEX RESIDUAL U-NET", "text": "Residual networks (He et al., 2016a) and identity connections (He et al., 2016b) have had a significant impact on image segmentation. These architectural elements have also been combined with U-Nets (Drozdzal et al., 2016) for image segmentation. In our case, we use simple basic complex residual blocks (Figure 2 in the appendix) inside each of the U-Net encoding and decoding paths (Figure 1). Figure 2 (Left) and (Middle) illustrate the basic structure of our Residual U-Net upsampling and downsampling blocks (U_i and D_i) used in Figure 1, while Figure 2 (Right) illustrates the structure of the complex residual blocks used in Figure 2 (Left) and Figure 2 (Middle).

Each U-Net block begins with a downsampling block (in the encoding U-Net path) or an upsampling block (in the decoding U-Net path). It also contains a block that doubles the number of feature maps (in the encoding path) or halves them (in the decoding path). The upsampling, downsampling, doubling and halving blocks each apply, successively, a complex layer normalization, a CReLU and a complex convolution to their inputs. All complex convolutions have a kernel size of 3×3, except in the downsampling block, where the convolution layer has a kernel size of 1×1 and a stride of 2×2. In the case of upsampling, we use bilinear interpolation instead of transposed convolution because we found empirically that it yielded better results. Immediately before and immediately after the doubling / halving blocks, we use k = 1 or k = 2 residual blocks. We opted for this residual U-Net block architecture because of memory constraints and because residual connections are believed to perform inference through iterative refinement of representations (Greff et al., 2016; Jastrzebski et al., 2017). A minimal sketch of one such block is given below.
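The following PyTorch sketch shows one complex residual block, assuming the usual construction of a complex convolution from two real convolutions. A plain per-component LayerNorm stands in for the complex layer normalization of section A.2 (which jointly whitens the real and imaginary parts); the class names and sizes are illustrative, not the released implementation.

```python
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """Complex convolution from two real convolutions:
    (a + ib) * (W_r + iW_i) = (a*W_r - b*W_i) + i(a*W_i + b*W_r)."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.conv_i = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)

    def forward(self, x_r, x_i):
        return (self.conv_r(x_r) - self.conv_i(x_i),
                self.conv_r(x_i) + self.conv_i(x_r))

class ComplexResidualBlock(nn.Module):
    """Norm -> CReLU -> complex conv, with an identity shortcut.
    CReLU applies ReLU separately to the real and imaginary parts."""
    def __init__(self, channels, hw):
        super().__init__()
        # Per-component LayerNorm as a stand-in for complex layer normalization.
        self.norm_r = nn.LayerNorm([channels, *hw])
        self.norm_i = nn.LayerNorm([channels, *hw])
        self.conv = ComplexConv2d(channels, channels)

    def forward(self, x_r, x_i):
        h_r = torch.relu(self.norm_r(x_r))
        h_i = torch.relu(self.norm_i(x_i))
        h_r, h_i = self.conv(h_r, h_i)
        return x_r + h_r, x_i + h_i

block = ComplexResidualBlock(channels=8, hw=(32, 32))
out_r, out_i = block(torch.randn(2, 8, 32, 32), torch.randn(2, 8, 32, 32))
```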
" }, { "heading": "6 COMPLEX MASK GENERATION", "text": "Featurewise Linear Modulation (FiLM) (Perez et al., 2018) techniques have yielded impressive results in visual question answering (VQA). The FiLM approach applies an affine transformation to convolutional feature maps, given the embedding of the question. In our approach, we create multiple transformations of the complex input spectrogram using FiLM. The FiLM parameters are determined from the output of our U-Net (see Figure 1). We then generate a complex mask for the original input spectrogram as well as for each of the FiLM-transformed spectrograms. This is accomplished using a ResNet conditioned on the U-Net output, the spectrogram and its FiLM transformations. Each spectrogram is multiplied by its corresponding complex mask. This leads to multiple candidates for the separated speech of each speaker. The resulting outputs are averaged to produce the final estimated clean speech. This can be interpreted as a local ensembling procedure for estimating the clean speech of the different speakers. More precisely, given the output of the last upsampling block of the U-Net, we generate scaling matrices Γ_j and shift matrices B_j, j ∈ [1, C], of the same size as the input mix spectrogram. These parameters operate on the input mix as described by the following equation:

input_transformation_j = Γ_j ⊗ input_mix + B_j, (8)

where Γ_j and B_j are functions of the output of the last upsampling block in the U-Net, and ⊗ is the element-wise complex product. In our case, we used a single complex convolution layer with a kernel of size 3×3 to generate Γ_j and B_j. The original input mix and its C scaled and shifted transformations together form C + 1 representations of the input mix. Given these C + 1 complex representations, we generate C + 1 corresponding complex masks, with which the representations are then multiplied. These masks are generated by a complex convolution layer, whose kernel size is 3×3, followed by two residual blocks. Once we have performed the complex multiplication of the masks with their respective inputs, C + 1 separated speech candidates are obtained for a given speaker. This procedure is repeated for the maximum number of speakers that could exist in an input mix. The main motivation for this process is to increase the separation capability and reduce interference between the separated speakers. Each transformation can focus on a specific pattern in the representation. Each mask corresponding to a specific input transformation can be seen as a feature of the speaker embedding. Grouped together, the masks generated to retrieve the speech of a given speaker can be interpreted as an embedding identifying that speaker. The proposed complex masking procedure is summarized in Algorithm 1.

Algorithm 1 Complex Extractor Masking
Input: U-Net output: O
Input: Nb transformations (XFs): C
Input: Nb speakers: N
Input: Input Mix: mix
Output: Speakers' separated speeches: S_1, ..., S_N
1:  function C-FILMED MASKING(O, C, N, mix)
2:    for i ← 1 to N do
3:      Γ_i1 ... Γ_iC, B_i1 ... B_iC ← FilMFunction(O)
4:    end for
5:    XFS ← [ ]
6:    for i ← 1 to N do
7:      for j ← 1 to C do
8:        XF_ij ← Γ_ij ⊗ mix + B_ij
9:        XFS_i.append(XF_ij)
10:     end for
11:   end for
12:   XFS ← concatenate(XFS_11, ..., XFS_NC)
13:   masks ← GenerateMasks(O, XFS)
14:   candidates ← masks ⊗ XFS
15:   cleanspeeches ← [ ]
16:   for i ← 1 to N do
17:     cleanspeech_i ← average(candidates[(C + 1) × (i − 1) + 1 : (C + 1) × i])
18:     cleanspeeches.append(cleanspeech_i)
19:   end for
20:   return cleanspeeches
21: end function
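To make the data flow of Algorithm 1 concrete, here is a minimal NumPy sketch. The U-Net output and the two generator networks are replaced by random placeholders (`film_params` and `generate_masks` are hypothetical names), so only the FiLM transformation, masking and averaging steps mirror the algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
F, T = 129, 64          # spectrogram bins x frames (illustrative sizes)
C, N = 3, 2             # number of FiLM transformations and speakers

mix = rng.standard_normal((F, T)) + 1j * rng.standard_normal((F, T))

def film_params(unet_output):
    """Placeholder for the conv layer producing Gamma and B from the U-Net output."""
    return (rng.standard_normal((C, F, T)) + 1j * rng.standard_normal((C, F, T)),
            rng.standard_normal((C, F, T)) + 1j * rng.standard_normal((C, F, T)))

def generate_masks(unet_output, xfs):
    """Placeholder for the conv + residual blocks producing one mask per input."""
    return rng.standard_normal(xfs.shape) + 1j * rng.standard_normal(xfs.shape)

unet_output = None  # stands in for the last upsampling block's activations
clean_speeches = []
for i in range(N):
    gamma, b = film_params(unet_output)
    # C FiLM transformations plus the original mix: C + 1 representations, eq. (8).
    xfs = np.concatenate([mix[None], gamma * mix[None] + b], axis=0)
    candidates = generate_masks(unet_output, xfs) * xfs   # element-wise complex product
    clean_speeches.append(candidates.mean(axis=0))        # signal averaging, eq. (4)

print(len(clean_speeches), clean_speeches[0].shape)       # N estimates of shape (F, T)
```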
" }, { "heading": "7 SYNOPSIS OF THE EXPERIMENTS", "text": "We present in Table 1 the most important results obtained when conducting our experiments. The complete results and the extended empirical analysis can be found in the appendix in section A.5. The data pre-processing and training protocol can be found in the appendix, in section A.4.

We explore several variants of our architecture and report the test SDR. They are parametrized by:

• k, the number of residual blocks used inside the residual U-Net block (see Figure 2).
• START FMAPS, the number of feature maps in the first layer of the encoding path in the U-Net; START FMAPS defines the depth of each of the successive layers in the model.²
• PARAMS, the number of parameters, in millions.
• TRANSFORMS, the number of input mixture transformations.
• DROPOUT, the mask dropout rate.³

²The effective number of feature maps for a complex feature map is equal to the number of reported feature maps × 2, since it has a real and an imaginary part.

³Dropping out a mask is equivalent to a dropout of input mixture transformations or clean speech candidates. Performing dropout on the masks reduces the correlation of the different noise components existing in the different candidates of clean speech. Along with signal averaging, dropout regularizes the retrieval mechanism.

From the first four rows of the results contained in Table 1, we highlight that the complex-valued baseline models vastly outperform their real-valued counterparts. These baselines (both real and complex) are architecturally the same as the U-Net of Figure 1, but do not include our extraction mechanism (the FiLM, GenerateMask and signal-averaging operations). The real and complex U-Nets' outputs are masks that are complex-multiplied with the mix to infer the clean speech of the speakers. All complex models, whether they have approximately the same number of parameters (R:8.45M ≈ C:7.4M), half (R:8.45M; C:4.39M) or a third, with half the depth (R:14.76M; C:4.39M), outperformed their real counterparts by a convincing margin. Thus, natively-complex input, inference and output give complex networks such an overwhelming advantage that almost no handicap of size or depth can mask it. We will therefore not consider real-valued models, transformations and losses any further.

A second highlight from Table 1 is that, while our signal extraction mechanism is inexpensive in terms of parameter count, it substantially improves the quality of the retrieved signal. For instance, when 10 mixture transformations are in use, the number of parameters is marginally increased, by less than 1% (13.97M to 14.03M), while a substantial jump in SDR is observed (from 9.87 to 11.34). This can also be observed in Figure 3 in the appendix. Dropping out the speech candidates with a low probability has a further regularization effect on the wider models that have more feature maps, as shown in appendix Figure 5.

The third highlight is that spectral-domain losses, i.e., CSimLoss and L2freq, outperform their time-domain counterparts. Our proposed CSimLoss posts the best reported result, 11.34 SDR, and Tables 2 and 3 and Figure 4 demonstrate that our extraction mechanism is ideally paired with the CSimLoss objective.

Finally, and although this is out of scope for this paper, we compare ourselves to ConvTasNet (Luo & Mesgarani, 2018), which operates on a time-domain input mixture (as mentioned in §2.2). ConvTasNet claims state-of-the-art results in speech separation, and has led to even further improvements (Shi et al., 2019). Its headline achievement of 15.6 SDR must, however, be understood in light of a significant difference in the preparation of the dataset. Whereas we follow the standard setup described in Hershey et al. (2015), with input mixtures generated using an SNR between 0 and 5 dB, Luo & Mesgarani (2018) use an SNR between -5 and 5 dB. Keeping this in mind, we retrain an optimally-configured ConvTasNet using the standard setup, and obtain 12.1 SDR, compared to our own model's 11.3 SDR." }, { "heading": "8 CONCLUSION", "text": "In this work, we introduced a new complex-valued extraction mechanism for signal retrieval in the Fourier domain. As a case study, we considered audio source separation. We also proposed a new phase-aware loss that explicitly takes into account the magnitude and phase of the reference and estimated signals. The amplitude- and phase-aware loss improves over other frequency- and time-domain losses. We believe that our proposed method could lead to new research directions where signal retrieval is needed." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 DETAILS ABOUT THE AMPLITUDE AND PHASE-AWARE LOSS", "text": "We show here that solving the two-equation system below (equating the real part of the inner product between the two signals x and y to the squared amplitude of y, and canceling its imaginary part) amounts to canceling the differences in amplitude and phase between x and y, respectively (see equation 11). For this we will use the following trigonometric identities:

cos(θ_x)cos(θ_y) = ½cos(θ_x − θ_y) + ½cos(θ_x + θ_y)
sin(θ_x)sin(θ_y) = ½cos(θ_x − θ_y) − ½cos(θ_x + θ_y)
cos(θ_x)sin(θ_y) = ½sin(θ_x + θ_y) − ½sin(θ_x − θ_y)
sin(θ_x)cos(θ_y) = ½sin(θ_x + θ_y) + ½sin(θ_x − θ_y), (9)

where θ_x, θ_y ∈ ℝ. For simplicity of notation, we denote a complex-valued target scalar as y and its estimate as x (instead of ŷ):

y = |y|e^{iθ_y} = |y| [cos(θ_y) + i sin(θ_y)] ∈ ℂ
x = |x|e^{iθ_x} = |x| [cos(θ_x) + i sin(θ_x)] ∈ ℂ, (10)

where θ_y and θ_x are the corresponding phases of the reference y and its complex estimate x, respectively. Resolving the system of equations below is equivalent to having both the magnitude and phase of the reference and the estimate identical, or to having y = 0.
Recall that Re(⟨x|y⟩) = Σ_j [Re(x_j)Re(y_j) + Im(x_j)Im(y_j)] and Im(⟨x|y⟩) = Σ_j [Re(x_j)Im(y_j) − Re(y_j)Im(x_j)]. For a single component, the system is:

Re(x)Re(y) + Im(x)Im(y) − |y|² = 0
Re(x)Im(y) − Re(y)Im(x) = 0

⇔ |x|cos(θ_x)|y|cos(θ_y) + |x|sin(θ_x)|y|sin(θ_y) = |y|²
  |x|cos(θ_x)|y|sin(θ_y) = |y|cos(θ_y)|x|sin(θ_x)

⇔ |x||y| [cos(θ_x)cos(θ_y) + sin(θ_x)sin(θ_y)] = |y|²
  |x||y| [cos(θ_x)sin(θ_y) − cos(θ_y)sin(θ_x)] = 0

⇔ |y| = 0, or |x| [cos(θ_x)cos(θ_y) + sin(θ_x)sin(θ_y)] = |y|
  |x| = 0, or |y| = 0, or cos(θ_x)sin(θ_y) − cos(θ_y)sin(θ_x) = 0.

Applying the identities in (9), cos(θ_x)cos(θ_y) + sin(θ_x)sin(θ_y) = cos(θ_x − θ_y) and cos(θ_x)sin(θ_y) − cos(θ_y)sin(θ_x) = −sin(θ_x − θ_y), so the system becomes:

⇔ |y| = 0, or |x|cos(θ_x − θ_y) = |y|
  |x| = 0, or |y| = 0, or sin(θ_x − θ_y) = 0, i.e., θ_x − θ_y ≡ 0 (mod π)

⇔ |y| = 0, or |x|cos(kπ) = |y| with θ_x = θ_y + kπ, k ∈ ℤ.

For even k = 2k′, cos(kπ) = 1 and the first equation gives |x| = |y|; for odd k, cos(kπ) = −1 forces |x| = |y| = 0, which is covered by the |y| = 0 case. Hence:

⇔ |y| = 0, or (|x| = |y| and θ_x = θ_y + 2k′π, k′ ∈ ℤ). (11)

We have just shown that Re(⟨x|y⟩)_j = |y_j|² and Im(⟨x|y⟩)_j = 0 is equivalent to having [(|y_j| = 0 or |x_j| = |y_j|) and ∠x_j = ∠y_j]. This means that the real and imaginary parts of the inner product between the estimate and the target are respectively responsible for the amplitude and phase matching between them. Now, a solution corresponding to a null reference vector y could be problematic, as it leaves an infinite number of choices for the estimated signal x. In fact, Choi et al. (2019) mentioned this issue and chose to work with a cosine-similarity-based function in order to learn from noise-only data. This is why it is more convenient to work with the normalized inner product loss." }, { "heading": "A.2 COMPLEX LAYER NORMALIZATION", "text": "Just as in complex batch normalization, complex layer normalization consists in whitening 2D vectors by left-multiplying the zero-centered data (x − E[x]) by the inverse square root of the 2×2 covariance matrix V:

x̃ = V^{−1/2} (x − E[x]),

where the covariance matrix V is

V = ( V_rr  V_ri
      V_ir  V_ii )
  = ( Cov(Re{x}, Re{x})  Cov(Re{x}, Im{x})
      Cov(Im{x}, Re{x})  Cov(Im{x}, Im{x}) ).

Complex layer normalization is distinguished from complex batch normalization by computing the mean and covariance statistics over the layer features instead of the batch instances. This allows us, as in the real-valued version, to avoid estimating batch statistics during training. A minimal sketch of this whitening is given below.
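As a reference, here is a NumPy sketch of the whitening step, using the closed-form inverse square root of a 2×2 symmetric positive-definite matrix. It operates per sample over the feature axis, as layer normalization requires; learnable affine parameters are omitted, and the function name is ours.

```python
import numpy as np

def complex_layer_norm(x, eps=1e-5):
    """Sketch of complex layer normalization for x of shape (batch, features).

    Statistics are taken over the feature axis (per sample), unlike batch norm.
    Each sample's (real, imag) pair is whitened by V^{-1/2} of its 2x2 covariance.
    """
    xr, xi = x.real, x.imag
    xr = xr - xr.mean(axis=1, keepdims=True)
    xi = xi - xi.mean(axis=1, keepdims=True)

    # Per-sample 2x2 covariance entries.
    vrr = (xr * xr).mean(axis=1) + eps
    vii = (xi * xi).mean(axis=1) + eps
    vri = (xr * xi).mean(axis=1)

    # Closed-form inverse square root of a 2x2 SPD matrix with trace tau, det delta:
    # sqrt(V) = (V + s*I)/t with s = sqrt(delta), t = sqrt(tau + 2s); invert via adjugate.
    s = np.sqrt(vrr * vii - vri ** 2)
    t = np.sqrt(vrr + vii + 2 * s)
    inv = 1.0 / (s * t)
    wrr = ((vii + s) * inv)[:, None]
    wii = ((vrr + s) * inv)[:, None]
    wri = (-vri * inv)[:, None]

    return (wrr * xr + wri * xi) + 1j * (wri * xr + wii * xi)

rng = np.random.default_rng(0)
z = complex_layer_norm(rng.standard_normal((4, 256)) + 1j * rng.standard_normal((4, 256)))
print(np.cov(z[0].real, z[0].imag))  # approximately the 2x2 identity matrix
```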
An intuition for batch normalization's inappropriateness is related to the sparsity, in both the time and frequency domains, of speech. This is reflected in the spectrograms. Speech is temporally halting and restarting, and spectrally consists of at most a few simultaneously-held fundamentals and their discrete overtones. Mixing a few speakers does not significantly change this property. In the light of this observation, it stands to reason that statistics computed across a batch's multiple utterance mixtures are almost meaningless. Speakers within and across utterance mixtures are not controlled for volume, nor can their pauses be meaningfully aligned. Batch statistics will therefore be inappropriately driven by the mixture with the most simultaneous speakers, the loudest speaker(s), or the speaker(s) with the “dirtiest” spectrogram. Finally, in the absence of any speech, batch statistics will inappropriately boost background noise to a standardized magnitude.

The above motivates the use of exclusively intra-sample normalization techniques like Layer Normalization for speech data. Batch normalization is more appropriate for natural images, which are dense in both space and frequency.

In addition to the fact that intra-sample normalization is more appropriate for speech signals, CLN ensures a more robust normalization of the data when the number of feature maps is sufficiently large. In fact, according to the weak law of large numbers, as the sample size increases, the sample statistics approximate their expected values. Therefore, when the number of feature maps far exceeds the number of batch instances, we obtain more robust estimates, as they converge in probability to the corresponding expected values." }, { "heading": "A.3 FIGURES", "text": "" }, { "heading": "A.4 DATA PRE-PROCESSING AND TRAINING DETAILS", "text": "The speech mixtures are generated using the procedure adopted in Erdogan et al. (2015); Wang et al. (2018). More precisely, the training set consists of 30 hours of two-speaker mixtures that were generated by randomly selecting sentences (uttered by different speakers) from the Wall Street Journal WSJ0 training set called si_tr_s. The signals are then mixed with different amplitude factors, leading to signal-to-noise ratios (SNRs) ranging between 0 dB and 5 dB. Using the same method, we also generated 10 hours of validation data. The test set is composed of 5 hours that were generated similarly, using utterances from the different speakers belonging to the WSJ0 development set si_dt_05. The data sampling rate is 8 kHz. Regarding the STFT parameters, a Hann window of size 256 and a hop length of 128 are used.

Table 2 (see section A.5) and Table 3 contain the results for the experiments conducted using the Wall Street Journal dataset. All models in Tables 2 and 3 were trained using the backpropagation algorithm with Stochastic Gradient Descent with Nesterov momentum (Nesterov, 1983) set at 0.9. The gradient norm was clipped to 1. We used the learning rate schedule described in Trabelsi et al. (2017). In order to warm up the model during training, a constant learning rate of 0.01 was fixed for the first 10 epochs. From epoch 10 to 100, the learning rate was increased to 0.1. Later, the learning rate was annealed by a factor of 10 at epochs 120 and 150. Training ended at epoch 200. Models in Table 2 were trained using a batch size of 40. Models in Table 3 were trained using a batch size of 24 to fit in GPU memory. All the models were trained in parallel using 8 V100 GPUs. For all the tested losses, we used the Permutation Invariant Training criterion known as PIT (Yu et al., 2017). The PIT criterion makes it possible to take into account all possible assignments between the target signals and the estimated clean speeches. This is done by computing all possible permutations between the targets and the estimated clean speeches. During training, the assignment with the minimal loss is considered for backpropagation. This is needed because, for the synthetically mixed input, the order of the target speakers is randomly chosen and does not satisfy any specific criterion. This random order of the target speakers causes the well-known label permutation problem (Hershey et al., 2015; Weng et al., 2015). The PIT criterion then significantly reduces this problem by considering the output-target assignment yielding the minimal training loss. During inference, we assume that the model has learned to produce output that does not permute speeches. Yu et al. (2017) mention that the output-to-speaker assignment may change across time frames; this would decrease the Signal-to-Noise Ratio (SNR) and the Signal-to-Distortion Ratio (SDR), as it causes interference between the speakers' speeches. A minimal sketch of the PIT criterion is given below.
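The PIT criterion enumerates all output-target assignments, computes the loss of each, and keeps the minimum. A PyTorch sketch follows; the brute-force enumeration is inexpensive for the two-speaker case considered here, and the function name and toy data are illustrative.

```python
import itertools
import torch

def pit_loss(estimates, targets, loss_fn):
    """Permutation Invariant Training (Yu et al., 2017), sketched.

    estimates, targets: tensors of shape (speakers, ...). Evaluates `loss_fn`
    under every output-target assignment and returns the minimum, making
    training insensitive to the arbitrary ordering of the target speakers.
    """
    n = estimates.shape[0]
    losses = []
    for perm in itertools.permutations(range(n)):
        losses.append(sum(loss_fn(estimates[i], targets[p]) for i, p in enumerate(perm)))
    return torch.stack(losses).min()

mse = torch.nn.functional.mse_loss
est = torch.randn(2, 129, 64)   # two estimated speaker spectrograms (toy sizes)
tgt = est[[1, 0]].clone()       # same signals, permuted target order
print(pit_loss(est, tgt, mse))  # ~0: the swapped assignment is found
```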
" }, { "heading": "A.5 EXPERIMENTS", "text": "We tried different configurations combining unitary and standard complex initializations, all of which were proposed by Trabelsi et al. (2017). It turned out that the best configuration is obtained when using a standard complex initialization for all layers, except for the convolutional layer generating the FiLM parameters and the first convolutional layer in the mask-generating function, which precedes the 2 residual blocks. For these convolutional layers, a unitary initialization respecting the He criterion (He et al., 2015) was applied. This is not surprising, as a unitary weight matrix in ℂ^{d×d} constitutes a basis of ℂ^d. Therefore any complex-valued vector in ℂ^d, such as those representing the FiLM parameters or the masks, can be generated as a linear combination of the row vectors of that unitary matrix.

In Tables 2 and 3 we experiment with architectures that use different numbers of mixture transformations. Adding mixture transformations does not significantly increase the number of parameters compared to the size of the whole model. In the case where 15 transformations are adopted, the number of parameters is increased by less than 1% of the total.

The first row of Table 2 contains baselines, which exclude our proposed masking method and loss. These baselines (both real and complex) are architecturally the same as the U-Net of Figure 1, without the FiLM, the GenerateMask and the averaging operations. A real counterpart of a complex model is one where the convolution and normalization layers are real, the nonlinearity is a plain ReLU, and He initialization is used for the weights. The real and complex U-Nets output the masks, which are complex-multiplied with the mix in order to infer the clean speech of the speakers. All the complex models, whether they have approximately the same number of parameters (R:8.45M ≈ C:7.4M), half (R:8.45M; C:4.39M) or a third, with half the depth (R:14.76M; C:4.39M), outperformed their real counterparts by a large margin. This shows that whether the comparison is fair, or even when advantages in terms of capacity and depth are given to the real network, it does not perform as well as the complex models when it comes to processing complex input and inferring complex output.
Thus, we will no longer focus on real-valued models, but will instead concentrate on transformations and losses that are appropriate for complex-valued models.

Three major observations can be inferred from the numbers displayed in Table 2: (1) wider and deeper models improve the quality of separation in terms of SDR; (2) increasing the number of input transformations has a positive impact on the task of separating audio sources, as additional input transformations achieve higher SDR scores; (3) for a given number of input transformations, the best results are obtained with losses computed in the spectral domain. For all the experiments reported in Table 2, either the CSimLoss or the L2freq loss achieves the highest SDR.

The scores reported in Table 2 show that the local ensembling procedure is beneficial to the task of speech separation. This beneficial impact is confirmed in all experiments of Table 3 (see also Figure 3). As mentioned in section 6, each mask can be seen as a feature of the speaker embedding, and the generated masks together constitute the whole embedding. Performing dropout on the masks may therefore regularize the retrieval and separation mechanism. Dropping out a mask is equivalent to a dropout of input mixture transformations, or of clean speech candidates. Since spectral loss functions yielded higher SDRs than their time-domain counterparts, we adopted them to evaluate the effect of applying different dropout rates to the input transformations. Wider and deeper models with Start Fmaps = 44 and k = 2 residual blocks are tested in these experiments. Results are reported in Table 3.

In the absence of dropout and multiple transformations, we observe from the results displayed in Table 3 that wider models are not necessarily more beneficial to the separation task. The SDRs reported in the case of no mixture transformations are 9.88 and 9.87 for the wider model, corresponding to the L2freq and CSimLoss losses respectively. However, for the narrower models, SDRs of 10.30 and 10.21 were reported for the same losses in Table 2. This means that wider models have the potential to overfit. On the other hand, when input transformations are taken into account, a jump in the SDR is observed. When 10 input transformations are introduced, SDR scores of 11.05 and 10.90 are recorded with the CSimLoss and the L2freq losses, respectively. Lower SDR performances are recorded when ensembling is implemented with 5 and 15 mixture transformations, respectively. This means that the local ensembling procedure is acting as a regularizer. However, a tradeoff in terms of the number of input transformations (and so in terms of clean speech candidates) has to be made, as increasing the number of input transformations may worsen the performance of the model and lead to overfitting.

Dropping out the speech candidates with a small probability has a further regularization effect on the wider model. This can be inferred from the results reported in Table 3 (see also Figure 5). We employed dropout rates varying from 0 to 0.4. A rate of 0.1 yielded the best result, raising the SDR score from 11.05 to 11.34. It is important to emphasize again the need for a compromise in terms of the number of transformations: for most of the dropout rates we experimented with, 10 mixture transformations yielded the highest SDRs.
In all the experiments reported in Table 3, the CSimLoss clearly outperformed the L2freq loss (see Figure 4). In fact, regardless of the dropout rate and the number of input transformations employed, for wider models using the L2freq training loss the SDR score did not cross the threshold of 10.91 dB. The highest SDR score obtained when using the L2freq loss is 10.93, which corresponds to a narrower model with 15 input transformations (see Table 2)." }, { "heading": "A.6 STATE-OF-THE-ART TABLE", "text": "As can be seen from Table 4, state-of-the-art results in speech separation depend largely on the following:

1. The use of a model that takes into account short- and long-term temporal dependencies, such as BLSTMs or Temporal Convolutional Networks (Bai et al., 2018). Almost all the methods since Hershey et al. (2015) that have led to improvements in state-of-the-art speech separation have used either BLSTMs or TCNs;

2. The STFT window size and hop length, or the time-domain input segment size when using the raw signal. Yu et al. (2017) demonstrated that the smaller the window size and hop length, the better the quality of separation. This probably explains the selection by Luo & Mesgarani (2017; 2018) of very short time-domain segment sizes of 5 and 2 ms for the TasNet and ConvTasNet architectures, respectively." } ]
2019
null
SP:4140a212888e058dc1f0bfaa5233f54e9d87fcee
[ "This paper proposes training losses, unlikelihood objective, for mitigating the repetition problem of the text generated by recent neural language models. The problem is well-motivated by evidence from the existing literature. Specifically, the paper argues that the main cause of the degenerated output is the maximum likelihood objective commonly used to train language models. Their main contribution is to introduce additional objectives to penalize “unlikely” word probabilities. The proposed penalty is derived into 2 objectives: token level (previous words in context) and sentence level (future decoded words). The prior objective is used along with the MLE, while the later and more expensive is used for fine-tuning. They perform experiments on Wikitext-103 and evaluate models on the perplexity of the models, and n-gram statistics such as repetition, and uniqueness of the decoded texts. The proposed training scheme (UL-token+seq) is shown to have the closest statistics to the original corpus while the perplexity slightly suffers. The additional manual analysis shows that human annotators prefer the outputs (sentence completion) of the proposed method over the other baselines.", "The main contribution of this paper lies in the proposed unlikelihood training objective for open-ended text generation. The key idea is to enforce the unlikely generations to be assigned lower probability by the model. Both token and sequence-level unlikelihood training objectives are provided. Impressively, the authors show that models trained with the proposed method can generate high-quality text via only beam search, without using top-k, nucleus sampling, or beam blocking methods. " ]
Neural text generation is a key tool in natural language applications, but it is well known there are major problems at its core. In particular, standard likelihood training and decoding leads to dull and repetitive outputs (Holtzman et al., 2019). While some post-hoc fixes have been proposed, in particular top-k and nucleus sampling, they do not address the fact that the token-level probabilities predicted by the model are poor. In this paper we show that the likelihood objective itself is at fault, resulting in a model that assigns too much probability to sequences containing repeats and frequent words, unlike those from the human training distribution. We propose a new objective, unlikelihood training, which forces unlikely generations to be assigned lower probability by the model. We show that both token and sequence level unlikelihood training give less repetitive, less dull text while maintaining perplexity, giving superior generations using standard greedy or beam search. According to human evaluations, our approach with standard beam search also outperforms the currently popular decoding methods of nucleus sampling or beam blocking, thus providing a strong alternative to existing techniques.
[ { "affiliations": [], "name": "UNLIKELIHOOD TRAINING" }, { "affiliations": [], "name": "Sean Welleck" }, { "affiliations": [], "name": "Ilia Kulikov" }, { "affiliations": [], "name": "Stephen Roller" }, { "affiliations": [], "name": "Emily Dinan" }, { "affiliations": [], "name": "Kyunghyun Cho" }, { "affiliations": [], "name": "Jason Weston" } ]
[ { "authors": [ "Alexei Baevski", "Michael Auli." ], "title": "Adaptive input representations for neural language modeling", "venue": "International Conference on Learning Representations.", "year": 2019 }, { "authors": [ "Yejin Choi." ], "title": "The missing representation in neural (language) models", "venue": "3rd Workshop on Representation Learning for NLP (RepL4NLP).", "year": 2018 }, { "authors": [ "Michael Collins." ], "title": "Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms", "venue": "Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP 2002). Association for Computational Linguistics.", "year": 2002 }, { "authors": [ "Hal Daumé", "John Langford", "Daniel Marcu." ], "title": "Search-based structured prediction", "venue": "Machine learning, 75(3):297–325.", "year": 2009 }, { "authors": [ "Adji B Dieng", "Kyunghyun Cho", "David M Blei", "Yann LeCun" ], "title": "Learning with reflective likelihoods", "venue": null, "year": 2018 }, { "authors": [ "Emily Dinan", "Varvara Logacheva", "Valentin Malykh", "Alexander Miller", "Kurt Shuster", "Jack Urbanek", "Douwe Kiela", "Arthur Szlam", "Iulian Serban", "Ryan Lowe" ], "title": "The second conversational intelligence challenge (convai2)", "venue": "arXiv preprint arXiv:1902.00098", "year": 2019 }, { "authors": [ "Sergey Edunov", "Myle Ott", "Michael Auli", "David Grangier", "Marc’Aurelio Ranzato" ], "title": "Classical structured prediction losses for sequence to sequence learning", "venue": "arXiv preprint arXiv:1711.04956", "year": 2017 }, { "authors": [ "Angela Fan", "Mike Lewis", "Yann Dauphin." ], "title": "Hierarchical neural story generation", "venue": "arXiv preprint arXiv:1805.04833.", "year": 2018 }, { "authors": [ "Tianxing He", "James Glass." ], "title": "Negative training for neural dialogue response generation", "venue": "arXiv preprint arXiv:1903.02134.", "year": 2019 }, { "authors": [ "Ari Holtzman", "Jan Buys", "Maxwell Forbes", "Antoine Bosselut", "David Golub", "Yejin Choi." ], "title": "Learning to write with cooperative discriminators", "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1638–1649. Association for Computational Linguistics.", "year": 2018 }, { "authors": [ "Ari Holtzman", "Jan Buys", "Maxwell Forbes", "Yejin Choi." ], "title": "The curious case of neural text degeneration", "venue": "arXiv preprint arXiv:1904.09751.", "year": 2019 }, { "authors": [ "Guillaume Klein", "Yoon Kim", "Yuntian Deng", "Jean Senellart", "Alexander M Rush." ], "title": "Opennmt: Open-source toolkit for neural machine translation", "venue": "arXiv preprint arXiv:1701.02810.", "year": 2017 }, { "authors": [ "Ilya Kulikov", "Alexander H Miller", "Kyunghyun Cho", "Jason Weston." ], "title": "Importance of a search strategy in neural dialogue modelling", "venue": "arXiv preprint arXiv:1811.00907.", "year": 2018 }, { "authors": [ "Yann LeCun", "Sumit Chopra", "Raia Hadsell", "M Ranzato", "F Huang." ], "title": "A tutorial on energybased learning", "venue": "Predicting structured data.", "year": 2006 }, { "authors": [ "Jiwei Li", "Will Monroe", "Dan Jurafsky." ], "title": "A simple, fast diverse decoding algorithm for neural generation", "venue": "arXiv preprint arXiv:1611.08562.", "year": 2016 }, { "authors": [ "Margaret Li", "Jason Weston", "Stephen Roller." 
], "title": "Acute-eval: Improved dialogue evaluation with optimized questions and multi-turn comparisons", "venue": "arXiv preprint arXiv:1909.03087.", "year": 2019 }, { "authors": [ "Stephen Merity", "Caiming Xiong", "James Bradbury", "Richard Socher." ], "title": "Pointer sentinel mixture models", "venue": "arXiv preprint arXiv:1609.07843.", "year": 2016 }, { "authors": [ "Myle Ott", "Sergey Edunov", "Alexei Baevski", "Angela Fan", "Sam Gross", "Nathan Ng", "David Grangier", "Michael Auli." ], "title": "fairseq: A fast, extensible toolkit for sequence modeling", "venue": "Proceedings of NAACL-HLT 2019: Demonstrations.", "year": 2019 }, { "authors": [ "Romain Paulus", "Caiming Xiong", "Richard Socher." ], "title": "A deep reinforced model for abstractive summarization", "venue": "arXiv preprint arXiv:1705.04304.", "year": 2017 }, { "authors": [ "Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": "OpenAI Blog,", "year": 2019 }, { "authors": [ "Marc’Aurelio Ranzato", "Sumit Chopra", "Michael Auli", "Wojciech Zaremba" ], "title": "Sequence level training with recurrent neural networks. CoRR, abs/1511.06732", "venue": null, "year": 2015 }, { "authors": [ "Stéphane Ross", "Geoffrey Gordon", "Drew Bagnell." ], "title": "A reduction of imitation learning and structured prediction to no-regret online learning", "venue": "Proceedings of the fourteenth international conference on artificial intelligence and statistics, pages 627–635.", "year": 2011 }, { "authors": [ "Abigail See", "Stephen Roller", "Douwe Kiela", "Jason Weston." ], "title": "What makes a good conversation? how controllable attributes affect human judgments", "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1702–1723, Minneapolis, Minnesota. Association for Computational Linguistics.", "year": 2019 }, { "authors": [ "Shiqi Shen", "Yong Cheng", "Zhongjun He", "Wei He", "Hua Wu", "Maosong Sun", "Yang Liu." ], "title": "Minimum risk training for neural machine translation", "venue": "arXiv preprint arXiv:1512.02433.", "year": 2015 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin." ], "title": "Attention is all you need", "venue": "Advances in Neural Information Processing Systems, pages 5998–6008.", "year": 2017 }, { "authors": [ "Jesse Vig." ], "title": "Deconstructing bert: Distilling 6 patterns from 100 million parameters", "venue": "Medium.", "year": 2018 }, { "authors": [ "Ashwin K Vijayakumar", "Michael Cogswell", "Ramprasaath R Selvaraju", "Qing Sun", "Stefan Lee", "David Crandall", "Dhruv Batra." ], "title": "Diverse beam search for improved description of complex scenes", "venue": "Thirty-Second AAAI Conference on Artificial Intelligence.", "year": 2018 }, { "authors": [ "Jason Weston", "Emily Dinan", "Alexander H Miller." ], "title": "Retrieve and refine: Improved sequence generation models for dialogue", "venue": "arXiv preprint arXiv:1808.04776.", "year": 2018 }, { "authors": [ "Lantao Yu", "Weinan Zhang", "Jun Wang", "Yingrui Yu." 
], "title": "Seqgan: Sequence generative adversarial nets with policy gradient", "venue": "ArXiv, abs/1609.05473.", "year": 2016 }, { "authors": [ "Saizheng Zhang", "Emily Dinan", "Jack Urbanek", "Arthur Szlam", "Douwe Kiela", "Jason Weston." ], "title": "Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204–2213, Melbourne, Australia", "venue": "Association for Computational Linguistics.", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Neural text generation is a vital tool in a wide range of natural language applications. However, the standard approach – training a sequence to sequence model, e.g. Transformer (Vaswani et al., 2017), to maximize log-likelihood and approximately decoding the most likely sequence – is known to be flawed. Generated text in open-ended applications such as language modeling or dialogue has been observed to be dull, with high frequency tokens used too often and interesting content words used too rarely (Holtzman et al., 2019; Dinan et al., 2019). Moreover, the models repeat themselves at the token, phrase, and sentence levels, and statistics comparing a set of human-generated utterances and model-generated responses indicate a discrepancy between the human and model word distributions. This does not appear to be rectified by training on more data (Radford et al., 2019). Recent fixes involve modifying the decoding strategy using sampling or more sophisticated beam search variants. However, these decoding strategies do not address the core issue: the model’s underlying sequence probabilities are clearly not correct.\nSeveral reasons for exactly why neural text is degenerate have been posited, with the cause currently unknown. Possible candidates include the problem being (i) a by-product of the model architecture, e.g. the Transformer architecture preferring repeats (Holtzman et al., 2019; Vig, 2018), (ii) an intrinsic property of human language (Holtzman et al., 2019) rather than a modeling deficiency, or that (iii) a training objective relying on fixed corpora cannot take into account the real goal of using the language (Choi, 2018). Our work shows that, while the above may be factors, a primary factor is the use of the likelihood objective itself, as we demonstrate that degeneration is alleviated if we replace the likelihood objective with our proposal.\nWhile low perplexity in the limit should lead to predicting the correct next target word, there are two major flaws of the likelihood objective: (i) it pays relatively little attention to the argmax or the top of the ranked list of next token probabilities, instead optimizing the likelihood of the entire distribution; ∗Equal contribution; the ordering was decided by a coin flip.\n(ii) it is not focused on optimizing sequence generation, only on producing the next token. The first issue means that greedy or beam search decoding, which rely on the top of the list to generate, are not optimized – there is a discrepancy between maximizing the log-probability of a ground-truth token and ensuring the rank of the ground-truth token to be one. The second issue means that during sequence generation, any imperfection in next token prediction leads to error accumulation that is not addressed by likelihood training.\nIn this work, we introduce unlikelihood training, an approach that addresses the two aforementioned issues. It combines two types of updates: a likelihood update on the true target tokens so that they are assigned high probability, and an unlikelihood update on tokens that are otherwise assigned too high a probability. We can collect these unlikely token candidates either during next-token prediction or from generated sequences, allowing us to train at both the token and sequence levels. 
Both token and sequence level unlikelihood training are shown to improve metrics that measure dullness and repetition of the model, while maintaining performance in other metrics such as perplexity or token accuracy compared to the maximum likelihood baseline. Finally, we assess our models using human evaluations. We find that our generations have vastly improved quality compared to likelihood-trained models when both models use beam search decoding. Moreover, our approach when using beam search also significantly improves over likelihood-trained models using either beam blocking or nucleus sampling, thus outperforming the current state-of-the-art." }, { "heading": "2 RELATED WORK", "text": "Neural Text Degeneration Recently, several papers have observed various forms of neural text degeneration, especially in open-ended generation tasks. In dialogue, it has been shown that there is a mismatch between model and human word distributions, where generative models are more likely to output frequent words, but less likely to produce rare words compared to humans. For example, this was observed across all generative models submitted to the ConvAI2 NeurIPS 2018 competition (Dinan et al., 2019). In language modeling, the work of Holtzman et al. (2019) highlighted problems with the word frequency distribution and level of repetition in model generations compared to human text. These issues are not remedied by simply increasing the amount of the training data; e.g. large-scale GPT-2 language models (Radford et al., 2019) display the same issues.

Improved Decoding Algorithms Several methods have been proposed to rectify these issues. The primary ones involve changing the decoding method to a sophisticated beam search variant or to stochastic decoding, e.g. sampling. Different variants of beam search have been explored (Li et al., 2016; Vijayakumar et al., 2018; Kulikov et al., 2018; Holtzman et al., 2018) which can decrease a model's level of repetition by selecting candidates that are unlike previously chosen ones. Separately, hard or soft beam blocking has been investigated (Paulus et al., 2017; Klein et al., 2017), whereby previously generated n-grams are blocked from subsequent generation. This approach is often used in dialogue generation, fixing some token or phrase level repetitions but also removing repetitions that would naturally occur in human text.

The second major approach is that of sampling from the model at generation time. Top-k sampling (Fan et al., 2018) and nucleus sampling (Holtzman et al., 2019) are two methods that sample sequences based on a function of the predicted next-token probability distribution given by the model. Both approaches vastly improve the repetition issue, as the randomization often reduces the number of duplicate tokens in a decoded sequence, even if highly scored paths under the model (represented by beam search candidates) contain repetitions. However, as the underlying model is unchanged, it often prefers semantically similar phrasing, depending on the temperature parameter of the sampling (Holtzman et al., 2019). Furthermore, this solution is less relevant in less open-ended tasks such as machine translation, where beam search variants are the preferred method.
Ideally we would like a model that can work with both beam and sampling decoding methods.

Improved Learning Algorithms The proposed learning criteria are closely related to structured output prediction methods in which the goal is to increase the scores assigned by a model to true examples while decreasing those assigned to negative examples often generated by the model itself. Some representative algorithms include the structured perceptron (Collins, 2002), energy-based models (LeCun et al., 2006) and more recently reflective likelihood (Dieng et al., 2018). A particular variant in this family of algorithms, called negative training, was recently used by He and Glass (2019) to prevent generic and malicious responses in dialogue models. Similarly, these structured prediction algorithms with neural language models have been applied to machine translation in recent years by Shen et al. (2015) and Edunov et al. (2017)." }, { "heading": "3 NEURAL TEXT GENERATION", "text": "Language Modeling In language modeling, our goal is to model a probability distribution p*(x) over variable-length text sequences x = (x_1, ..., x_{|x|}) composed of tokens from a vocabulary, x_t ∈ V. We wish to find a model p_θ(x) which resembles p*(x), meaning that samples x̂ ∼ p_θ are similar to samples from p*, and p_θ(x) ≈ p*(x) for all x. When p_θ(x) is parameterized by a neural network, we call p_θ a neural language model. We assume that p_θ takes the form p_θ(x) = ∏_{t=1}^{|x|} p_θ(x_t | x_{<t}).

The de facto approach to training such a model is to find parameters θ that maximize the log-likelihood of a finite set of samples D from p* by minimizing:

L_MLE(p_θ, D) = −∑_{i=1}^{|D|} ∑_{t=1}^{|x^{(i)}|} log p_θ(x_t^{(i)} | x_{<t}^{(i)}). (1)

Sequence Completion A closely related problem consists of sampling a sub-sequence, or prefix, x_{1:k} ∼ p*, then using p_θ to conditionally decode a continuation, x̂_{k+1:N} ∼ p_θ(·|x_{1:k}). We now want the resulting completion (x_1, ..., x_k, x̂_{k+1}, ..., x̂_N) to resemble a sample from p*.

We use sequence completion as a setting to study the behavior of neural language models due to its generality. For instance, sequence completion encompasses story generation (Fan et al., 2018), contextual text completion (Radford et al., 2019), language modeling (for k = 0), and dialogue modeling (Zhang et al., 2018) where x_{1:k} is a dialogue history and a continuation is a next utterance.

Given p_θ and a prefix x_{1:k}, finding the optimal continuation is not tractable, so in practice approximate deterministic or stochastic decoding strategies are used to generate continuations.

Deterministic Decoding Two widely used deterministic decoding approaches are greedy search and beam search. The former can be seen as a special case of the latter. Greedy search selects the highest probability token at each time step: x_t = argmax p_θ(x_t | x_{<t}). Beam search maintains a fixed-size set of partially-decoded sequences, called hypotheses. At each time step, beam search forms new hypotheses by appending each token in the vocabulary to each existing hypothesis, scoring the resulting sequences, then selecting the highest scoring sequences. As we describe in Section 4, these deterministic decoding strategies, which depend highly on the underlying model probabilities, expose issues with conventionally trained neural language models.
Stochastic Decoding An alternative is to sample from a model-dependent distribution at each step, x_t ∼ q(x_t | x_{<t}, p_θ). In order to prevent sampling low probability tokens, a typical approach is to restrict sampling to a subset of the vocabulary U ⊂ V at each step:

q(x_t | x_{<t}, p_θ) = p_θ(x_t | x_{<t}) / Z if x_t ∈ U, and 0 otherwise,

where Z = ∑_{x∈U} p_θ(x | x_{<t}). The top-k sampler restricts sampling to the k most-probable tokens; i.e., U is the size-k subset of V which maximizes ∑_{x∈U} p_θ(x | x_{<t}) (Fan et al., 2018). The nucleus sampler instead restricts sampling to the smallest set of tokens with total mass above a threshold p ∈ [0, 1]; i.e., U is the smallest subset with ∑_{x∈U} p_θ(x | x_{<t}) ≥ p (Holtzman et al., 2019).
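For concreteness, here is a short PyTorch sketch of the two truncation rules. The filters return logits with excluded tokens set to −∞, so a subsequent softmax realizes q(x_t | x_{<t}, p_θ); the function names are ours, not from the paper.

```python
import torch

def top_k_filter(logits, k):
    """Keep the k most probable tokens; mask the rest (Fan et al., 2018)."""
    kth = torch.topk(logits, k).values[..., -1, None]
    return logits.masked_fill(logits < kth, float("-inf"))

def nucleus_filter(logits, p):
    """Keep the smallest set whose total mass is >= p (Holtzman et al., 2019)."""
    sorted_logits, idx = torch.sort(logits, descending=True)
    probs = torch.softmax(sorted_logits, dim=-1)
    cum = probs.cumsum(dim=-1)
    # A token is removed if the mass accumulated *before* it already reaches p.
    remove = cum - probs >= p
    sorted_logits = sorted_logits.masked_fill(remove, float("-inf"))
    return sorted_logits.gather(-1, idx.argsort(-1))  # undo the sort

logits = torch.randn(50)                     # next-token logits over a toy vocabulary
q = torch.softmax(nucleus_filter(logits, p=0.9), dim=-1)
next_token = torch.multinomial(q, 1)         # x_t ~ q(.|x_<t, p_theta)
```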
" }, { "heading": "4 NEURAL TEXT DEGENERATION", "text": "In this section we discuss two degenerate properties that frequently occur in conventional neural language models trained with the maximum likelihood objective (Equation 1).

Repetition First, model-generated continuations exhibit sequence-level repetition, especially with deterministic decoding. The problem is seen by observing samples in Appendix Table 4, which shows completions from the state-of-the-art GPT-2 language model (Radford et al., 2019). Greedy decoding as well as top-k and nucleus sampling exhibit degenerate repetition (with a certain hyper-parameter setting), although greedy decoding shows the worst degradation. Using a Transformer language model trained with maximum likelihood (§6), we find that the average percentage of repeated n-grams in model continuations with greedy decoding (43%) far exceeds that of humans (0.5%), computed over prefixes drawn from a validation corpus.

Unlike previous work which only focused on degenerate sequence-level repeats (Holtzman et al., 2019), we additionally observe that neural language models exhibit substantially more repetition in next-token prediction compared to human text:

Pr(x̂_{k+1} = argmax p_θ(x | x_{1:k}) ∈ x_{1:k}) > Pr(x_{k+1} ∈ x_{1:k}). (2)

For instance, the Transformer language model (§6) predicted next-tokens that appeared in the preceding 128 words 62% of the time, versus 49% in ground-truth text. This is especially concerning since the maximum-likelihood objective focuses on optimizing next-token conditional distributions.

Token Distribution Mismatch Second, both greedy continuations and next-token predictions from conventional neural text generators have different token distributions from human text. As demonstrated by Holtzman et al. (2019), such models with greedy or beam search tend to produce high frequency tokens too often and low frequency tokens too rarely, where frequency is defined by the human token distribution. With the Transformer language model (§6), the set of next-token greedy predictions on a held-out validation set had roughly 40% fewer unique tokens than the ground-truth tokens (11.6k vs. 18.9k), and overproduced frequent tokens (Appendix Figure 1). Such behavior has been linked to generations being judged as dull by humans because rare words can add engaging specificity (Weston et al., 2018; See et al., 2019)." }, { "heading": "5 THE UNLIKELIHOOD TRAINING OBJECTIVE", "text": "We now describe unlikelihood training for neural language models, then in Section 6 demonstrate empirically that our proposal substantially improves neural text degeneration (§4)." }, { "heading": "5.1 UNLIKELIHOOD TRAINING", "text": "The key idea behind unlikelihood training is decreasing the model's probability of certain tokens, called negative candidates. Given a sequence (x_1, ..., x_T) and a set of negative candidate tokens C^t = {c_1, ..., c_m}, where each c_j ∈ V, we define the unlikelihood loss for step t as:

L^t_UL(p_θ(·|x_{<t}), C^t) = −∑_{c∈C^t} log(1 − p_θ(c|x_{<t})). (3)

The loss decreases as p_θ(c|x_{<t}) decreases. We incorporate the unlikelihood loss into a token-level unlikelihood objective which augments each time-step of maximum likelihood training:

L^t_UL-token(p_θ(·|x_{<t}), C^t) = −α · ∑_{c∈C^t} log(1 − p_θ(c|x_{<t})) − log p_θ(x_t|x_{<t}), (4)

where the first term is the unlikelihood term and the second the likelihood term. As candidates, we use previous context tokens:

C^t_prev-context = {x_1, ..., x_{t−1}} \ {x_t}. (5)

Intuitively, minimizing the unlikelihood loss with this candidate set makes (i) incorrect repeating tokens less likely, as the previous context contains potential repeats, and (ii) frequent tokens less likely, as these tokens appear often in the previous context. These candidates are efficient to compute, without requiring additional supervision.

Gradient analysis We assume p_θ(x_t|x_{<t}) = softmax(a) and consider the gradient of (4) with respect to the softmax input a ∈ ℝ^V. With a single negative candidate, the (negative) gradient is:

∇L_a = x* − m ⊙ p, with m_i = 1 − α · p_neg/(1 − p_neg) if i ≠ i_neg, and m_i = 1 + α if i = i_neg, (6)

where x* ∈ {0,1}^V is a one-hot ground-truth vector, m ∈ ℝ^V, p = p_θ(·|x_{<t}), and p_neg is the probability of the negative candidate at index i_neg (derivation in Appendix A).

This unlikelihood gradient (6) differs from the likelihood gradient, (x* − p), due to the term m, which varies based on the hyper-parameter α and the model's negative candidate probability, p_neg. At the ground-truth token index i*, the unlikelihood gradient is positive, increasing the ground-truth token's probability with a magnitude that grows with p_neg. Conversely, at the negative candidate index i_neg the gradient is negative. At all other token indices i ∉ {i*, i_neg}, the gradient moves from negative to positive as p_neg increases. For instance, with α = 1.0 the gradient increases the probability of each token x_i when the model assigns high probability to the negative candidate (p_neg > 0.5).
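The token-level objective of equations (4)-(5) can be written in a few lines. The PyTorch sketch below treats the target sequence itself as the preceding context and uses a `pad_id` slot purely as scatter bookkeeping; both are implementation conveniences for this sketch rather than details from the paper.

```python
import torch
import torch.nn.functional as F

def unlikelihood_token_loss(logits, targets, alpha=1.0, pad_id=0):
    """Token-level unlikelihood objective of eq. (4), sketched.

    logits: (T, V) next-token logits; targets: (T,) ground-truth token ids.
    Negative candidates for step t are the previous tokens, excluding x_t (eq. 5).
    """
    log_probs = F.log_softmax(logits, dim=-1)
    mle = F.nll_loss(log_probs, targets)                  # likelihood term

    probs = log_probs.exp()
    T, V = probs.shape
    # cand[t, v] = 1 if token v occurred before step t and v != targets[t].
    prev = targets.unsqueeze(0).expand(T, T)              # row t, column i -> targets[i]
    mask = torch.tril(torch.ones(T, T), diagonal=-1).bool()
    cand = torch.zeros(T, V)
    cand.scatter_(1, prev.masked_fill(~mask, pad_id), 1.0)
    cand[:, pad_id] = 0.0                                 # clear the bookkeeping slot
    cand.scatter_(1, targets.unsqueeze(1), 0.0)           # exclude the true token

    ul = -(torch.log1p(-probs.clamp(max=1 - 1e-6)) * cand).sum(dim=-1).mean()
    return mle + alpha * ul

logits = torch.randn(12, 100, requires_grad=True)
targets = torch.randint(1, 100, (12,))
unlikelihood_token_loss(logits, targets).backward()
```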
We refer to the former as LUL-seq and the latter as LUL-token+seq. In both cases, fine-tuning is done by equally mixing sequence-level unlikelihood updates (7) and the token-level loss from which it was initially trained (either likelihood updates (1) or token-level unlikelihood updates (4)).\nEfficiency Any objective that requires explicitly decoding a sequence is constrained by sample efficiency when decoding is slow; if sample efficiency is low, the total decoding time is too large for practical use. In our experiments we show that when used for fine-tuning, the sequence-level unlikelihood objective substantially reduced degeneration in under 1,500 updates, rendering it practical for modern large-scale neural models, even with high decoding costs." }, { "heading": "6 EXPERIMENTS", "text": "We follow a standard language modeling setup from Baevski and Auli (2019) and evaluate our method on the task of sequence completion, detailed below.2\nModel Architecture Recent large-scale language models are based on the Transformer architecture, a multi-layer feed-forward network with self-attention (Vaswani et al., 2017). We use a 16-layer Transformer with 8 attention heads, embedding dimension 1024, and fully-connected dimension 4096; the architecture is based on Baevski and Auli (2019) but with standard embedding and softmax layers. Our proposed method is architecture agnostic; we choose this one as a representative of recent large-scale language models, e.g. Radford et al. (2019).\n1An alternative we tried is to choose a penalization probability ppenalize, and use xt as the single negative candidate for time t when zt ∼ Bernoulli(ppenalize) is 1, and no negative candidate for time t otherwise; this approach was effective but under-performed the Crepeat-n candidates; see Appendix D.\n2Code and trained models are available at https://github.com/facebookresearch/ unlikelihood_training; implemented with Fairseq (Ott et al., 2019).\nDataset We use the Wikitext-103 dataset (Merity et al., 2016), a large-scale collection of Wikipedia articles containing over 100 million words and 260 thousand unique tokens. As a document-level dataset, Wikitext-103 is an open-source representative of recent datasets used for large-scale language modeling (Baevski and Auli, 2019; Radford et al., 2019). We perform experiments at the word level.\nTraining We train on fixed-length contiguous sequences, in our case of length 1,536, which was selected based on GPU memory constraints. For the token-level losses (LMLE, LUL-token), we train each model on 8 GPUs for a maximum of 150k updates, evaluating on the validation set and saving the model state every 10k updates. For the experiments below, we select the saved model state with the best validation perplexity.\nSequence-level fine-tuning begins with the model state selected based on the validation perplexity. Models are fine-tuned for 1,500 total updates. With probability 0.5 an update uses LULS and otherwise uses the token-level loss with which the model was trained. For a LULS update, we split each training sequence and greedily decode continuations (details below). The experiments use a prefix length k = 50 and continuation length N = 100 for fine-tuning.\nCompletions We evaluate a model on sequence completion by using the model to decode continuations of prefixes derived from the validation (or test) set. Specifically, the validation (or test) set is first partitioned into sequences of 1,536 tokens, as in training. 
Then we split each sequence into a batch of prefixes of length $k$ (discarding extra tokens), and decode a continuation of length $N$ for each prefix. The experiments below use $k = 50$ and $N = 100$ for evaluation. For deterministic decoding we use greedy search and beam search with beam size 10, and for stochastic decoding we use top-k sampling with $k \in \{3, 50\}$ and nucleus sampling with $p \in \{0.3, 0.9\}$." }, { "heading": "6.1 EVALUATION METRICS", "text": "Repetition As a token-level metric for repetition, we use the fraction of next-token (top-1) predictions that occur in the previous $\ell$ tokens (rep/$\ell$); given a set $D$ of length-$T$ sequences,\n$$\text{rep}/\ell = \frac{1}{|D| \, T} \sum_{x \in D} \sum_{t=1}^{T} \mathbb{I}\left[\arg\max p_\theta(x|x_{<t}) \in x_{t-\ell-1:t-1}\right]. \quad (9)$$\nA predicted token is called a “single-token repeat” when $\mathbb{I}[\cdot]$ is 1. Some of these single-token repeats also occur in the human-generated sequences, and we thus report a variant which only counts single-token repeats that are additionally not equal to the ground-truth next-token (wrep/$\ell$).\nWe use the portion of duplicate n-grams (seq-rep-n) in a generated sequence to measure sequence-level repetition. That is, for a continuation $x_{k+1:k+N}$ we compute,\n$$\text{seq-rep-n} = 1.0 - \frac{|\text{unique n-grams}(x_{k+1:k+N})|}{|\text{n-grams}|}, \quad (10)$$\nand average over continuations. seq-rep-n is zero when the continuation has no repeating n-grams, and increases towards 1.0 as the model repeats. We compute seq-rep-n on the continuation.\nToken Distribution We quantify a model's predicted token distribution using the number of unique tokens. As a token-level metric (uniq), we use the number of unique next-token predictions on a validation or test set $D$, i.e. $|\{\arg\max p(x_t|x_{<t}) \mid x_{<t} \in D\}|$. As a sequence-level metric (uniq-seq) we use the number of unique tokens in continuations of validation or test prefixes (§6).\nLanguage Modeling Quality We use perplexity (ppl), and next-token prediction accuracy (acc), defined as $\frac{1}{N} |\{\arg\max p(x_t|x_{<t}) = x^*_t \mid x_{<t} \in D\}|$, with $N$ prefixes $x_{<t}$ and true next tokens $x^*_t$.
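As a concrete reference for Eqs. (9) and (10), here is a small sketch of the two repetition metrics in plain Python. The function names are ours, and the sketch assumes tokenized sequences given as integer lists.

```python
def seq_rep_n(continuation, n=4):
    """seq-rep-n (Eq. 10): fraction of duplicate n-grams in one continuation."""
    ngrams = [tuple(continuation[i:i + n]) for i in range(len(continuation) - n + 1)]
    if not ngrams:
        return 0.0
    return 1.0 - len(set(ngrams)) / len(ngrams)

def rep_at_l(predictions, sequences, l=128):
    """rep/l (Eq. 9): fraction of top-1 predictions found in the previous l tokens.

    predictions[d][t] is argmax p(x | x_<t) for sequence d at step t;
    sequences[d] is the corresponding ground-truth token list.
    """
    hits, total = 0, 0
    for preds, seq in zip(predictions, sequences):
        for t, pred in enumerate(preds):
            window = seq[max(0, t - l):t]  # previous l ground-truth tokens
            hits += pred in window
            total += 1
    return hits / max(total, 1)
```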
" }, { "heading": "6.2 RESULTS", "text": "Token-level and sequence-level results on the test set are in Table 2 (valid set in Appendix Table 5).\nBaseline The baseline model trained with maximum likelihood ($\mathcal{L}_{\text{MLE}}$) achieved 25.64 test perplexity, comparable to a current state-of-the-art system (Baevski and Auli, 2019) (24.92). However, the greedy baseline's sequence-level repeats (seq-rep-4 .442) and single-token repeats (rep .627) far exceed those in human text (.006, .487 respectively). The baseline continuations have far fewer unique tokens than human text (uniq-seq 11.8k vs 19.8k), with a high rate of frequent tokens (Figure 1).\nToken-Level Objective The proposed token-level unlikelihood objective ($\mathcal{L}_{\text{UL-token}}$) reduced next-token wrong repetition (wrep .311 vs. .352) while increasing the number of unique next-tokens (uniq 12.7k vs. 11.8k) compared to the baseline ($\mathcal{L}_{\text{MLE}}$). Perplexity and accuracy were similar. Importantly, the token-level unlikelihood objective yielded substantial improvements in sequence-level generations. With greedy search, token-level unlikelihood training improved the 4-gram repetition in continuations by 36% (seq-rep-4 .283 vs. .442) while generating roughly 22% more unique tokens than the baseline (uniq-seq 13.2k vs. 10.8k), and a more favorable rate of infrequent tokens (Figure 1). With beam search, unlikelihood training showed similar improvements over the baseline.\nSequence-Level Objective The sequence-level fine-tuning ($\mathcal{L}_{\text{UL-token+seq}}$) yielded further improvements, with a 97% reduction in 4-gram repetitions (seq-rep-4 .013 vs. .442) from the baseline level (greedy $\mathcal{L}_{\text{MLE}}$), and 77% more unique tokens (uniq-seq 19.1k vs. 10.8k) with beam search. Compared to the token-level unlikelihood model ($\mathcal{L}_{\text{UL-token}}$) which was the starting point of fine-tuning, the fine-tuned model's repetition substantially improved (seq-rep-4 .058 vs. .283), unique tokens increased (uniq-seq 15.4k vs. 13.2k), and token-level metrics such as perplexity improved (ppl 26.72 vs. 26.91), despite using only 1,500 updates. The token distribution improved, with infrequent tokens produced more often than the baseline, and frequent tokens approaching the human level (Figure 1). Finally, after sequence-level fine-tuning, beam search outperformed greedy search.\nTo visualize how these improvements in metrics translate to generation quality, Table 1 shows greedy completions that characterize the baseline's degeneration and $\mathcal{L}_{\text{UL-token+seq}}$'s improved behavior.\nGPT-2 Fine-Tuning In the preceding experiment, sequence-level fine-tuning alone ($\mathcal{L}_{\text{UL-seq}}$) showed substantial improvements over the baseline using a small number of updates. This indicates that the proposed sequence-level fine-tuning can be a cheap, effective way to improve existing pre-trained language models. We demonstrate this by fine-tuning a pre-trained GPT-2 (Radford et al., 2019) language model with sequence-level unlikelihood, using a comparable experimental setup to §6 (details in Appendix C). Fine-tuning with unlikelihood yielded similar improvements in sequence-level repetition (seq-rep-4 .042 vs. .506) to those observed in Table 5, while maintaining language modeling quality according to perplexity and accuracy (see Appendix Table 7).\nStochastic Decoding Although we have focused on deterministic decoding, we also confirm that a model trained with the proposed unlikelihood objectives may still be used with stochastic decoders. Appendix Table 6 shows metrics for completions generated with top-k sampling (Fan et al., 2018) and nucleus sampling (Holtzman et al., 2019). Models trained with unlikelihood objectives maintain language modeling quality compared to the baseline, but with improvements in repetition.\nHuman Evaluation We perform a crowdworker evaluation to judge the quality of the generations of our proposed models compared to each other, the baseline, two other generation methods, and the reference. We employ a pairwise setup: an evaluator is presented with a prefix and shown continuations from two different models and asked to select which continuation they found more natural. Following Li et al. (2019), we filter workers using quality controls (detailed in Appendix E) and limit the number of annotations that they may complete. Prompts are from the Wikitext-103 test set. All models used beam search (beam size 10) for generation, except for those that use stochastic decoding. We report the win rates for each pairwise comparison.\nThe main results are presented in Table 3, with additional experiments in Appendix Table 9. We find that all proposed models are preferred over the baseline, and that, congruent with the automatic metrics, win rates improve after adding the sequence-level objective.
Our best model also outperforms the baseline used with either nucleus sampling or beam blocking.\nWe also collected limited annotations from other NLP researchers. These expert annotators were given the same UI as the crowdworkers and were not told about the models they were evaluating, but all annotators were familiar with language models. As shown in Table 3, the $\mathcal{L}_{\text{UL-token+seq}}$ model significantly outperforms both nucleus sampling and beam blocking according to the experts." }, { "heading": "7 CONCLUSION", "text": "We described unlikelihood training, an approach to training neural language models. We observed that state-of-the-art models trained to maximize likelihood exhibit neural text degeneration, which we characterized and quantified in terms of repetition and token distribution mismatch. Our results show that the likelihood objective is not constrained enough, in the sense that two models with the same perplexity can exhibit wildly different generation performance. We empirically showed that unlikelihood training, both at the token and sequence levels, substantially reduced degeneration according to automatic metrics, and outperformed likelihood-trained models with various decoding methods according to human evaluation, being superior to the current state-of-the-art approaches." }, { "heading": "A GRADIENT", "text": "Notation Let $x^*_t$ be the true next-token (index $i^* \in V$) at step $t$, and let $x_{\text{neg}}$ be a negative candidate (index $i_{\text{neg}}$). Let $p = p(x_t|x_{<t}) \in \mathbb{R}^V$ be the output of $\text{softmax}(a)$ where $a \in \mathbb{R}^V$.\nDenote the probability of an element $i \in \{1, \ldots, V\}$ as $p_i = p(x^i_t|x_{<t})$, and let $p^*$, $p_{\text{neg}}$, and $\tilde{p}_i$ be the probabilities of the true next-token, the negative-candidate token, and any other token with $i \notin \{i^*, i_{\text{neg}}\}$.\nA.1 DERIVATION\nThe (negative) token-level loss with a single candidate is,\n$$\mathcal{L}^t = \log p(x^*_t|x_{<t}) + \alpha \cdot \log(1 - p(x_{\text{neg}}|x_{<t})), \quad (11)$$\nand its gradient with respect to a logit $a_i$ is:\n$$\frac{\partial \mathcal{L}}{\partial p_i}\frac{\partial p_i}{\partial a_i} = (\mathbb{I}[i = i^*] - p_i) - \alpha \frac{p_{\text{neg}}}{1 - p_{\text{neg}}} (\mathbb{I}[i = i_{\text{neg}}] - p_i). \quad (12)$$\nWe consider the gradient when $i$ is the true next-token, a negative candidate, and any other token.\nTrue Next-Token ($i = i^*$):\n$$\frac{\partial \mathcal{L}}{\partial p^*}\frac{\partial p^*}{\partial a_{i^*}} = (1 - p^*) - \alpha \frac{p_{\text{neg}}}{1 - p_{\text{neg}}}(0 - p^*) \quad (13)$$\n$$= 1 - p^*\left(1 - \alpha \frac{p_{\text{neg}}}{1 - p_{\text{neg}}}\right). \quad (14)$$\nNegative Candidate ($i = i_{\text{neg}}$):\n$$\frac{\partial \mathcal{L}}{\partial p_{\text{neg}}}\frac{\partial p_{\text{neg}}}{\partial a_{\text{neg}}} = (0 - p_{\text{neg}}) - \alpha \frac{p_{\text{neg}}}{1 - p_{\text{neg}}}(1 - p_{\text{neg}}) \quad (15)$$\n$$= -p_{\text{neg}}(1 + \alpha). \quad (16)$$\nOther Token ($i \notin \{i^*, i_{\text{neg}}\}$):\n$$\frac{\partial \mathcal{L}}{\partial \tilde{p}_i}\frac{\partial \tilde{p}_i}{\partial a_i} = (0 - \tilde{p}_i) - \alpha \frac{p_{\text{neg}}}{1 - p_{\text{neg}}}(0 - \tilde{p}_i) \quad (17)$$\n$$= -\tilde{p}_i\left(1 - \alpha \frac{p_{\text{neg}}}{1 - p_{\text{neg}}}\right). \quad (18)$$\nCombining the three cases above, we get:\n$$\nabla \mathcal{L}_a = x^* - m \odot p, \quad (19)$$\nwhere $x^* \in \{0, 1\}^V$ is 1 at index $i^*$ and 0 otherwise, and $m \in \mathbb{R}^V$ is:\n$$m_i = \begin{cases} 1 - \alpha \frac{p_{\text{neg}}}{1 - p_{\text{neg}}} & i \neq i_{\text{neg}} \\ 1 + \alpha & i = i_{\text{neg}}. \end{cases} \quad (20)$$\nMultiple Candidates In general the objective considers multiple candidates (see Section 5):\n$$\mathcal{L}^t_{\text{UL-token}}(p_\theta(\cdot|x_{<t}), C^t) = -\alpha \cdot \underbrace{\sum_{c \in C^t} \log(1 - p_\theta(c|x_{<t}))}_{\text{unlikelihood}} - \underbrace{\log p_\theta(x_t|x_{<t})}_{\text{likelihood}}. \quad (21)$$\nWe regroup the token-level objective to be a weighted sum of per-candidate objectives:\n$$-\mathcal{L}^t_{\text{UL-token}}(p_\theta(\cdot|x_{<t}), C^t) = \frac{1}{|C^t|} \sum_{c \in C^t} \left( \log p_\theta(x_t|x_{<t}) + \alpha_c \cdot \log(1 - p_\theta(c|x_{<t})) \right), \quad (22)$$\nwhere $\alpha_c = \alpha \cdot |C^t|$. Now the gradient can be generalized to multiple candidates, in which case the gradient takes the same form as Eq. (20), but with $\alpha_c$ in place of $\alpha$.
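As a sanity check on the closed form above, the following short sketch compares Eq. (6) against PyTorch autograd on a random logit vector. It is a verification aid we add here, not part of the original derivation.

```python
import torch

V, alpha = 10, 1.0
torch.manual_seed(0)
a = torch.randn(V, requires_grad=True)
i_true, i_neg = 2, 5  # arbitrary ground-truth and negative-candidate indices

# Minimized loss is the negative of the objective in Eq. 11.
p = torch.softmax(a, dim=-1)
loss = -(torch.log(p[i_true]) + alpha * torch.log(1 - p[i_neg]))
loss.backward()

# Closed form of Eq. 6: the gradient of the objective is x* - m * p.
p_d = p.detach()
m_other = (1 - alpha * p_d[i_neg] / (1 - p_d[i_neg])).item()
m = torch.full((V,), m_other)
m[i_neg] = 1 + alpha
x_star = torch.zeros(V)
x_star[i_true] = 1.0
closed_form = x_star - m * p_d

# -a.grad is the gradient of the (maximized) objective, so it should match.
print(torch.allclose(-a.grad, closed_form, atol=1e-6))  # expected: True
```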
" }, { "heading": "B STOCHASTIC DECODING RESULTS", "text": "Table 6 provides automatic metrics for top-k and nucleus sampling (called top-p) on the Wikitext-103 test set. These can be compared with the main results of the paper in Table 2. In general, sampling methods yield worse next-token predictions than deterministic approaches (0.302 vs. 0.394 acc for top-k-50 vs. greedy MLE, where acc for stochastic decoding measures the probability that the decoding strategy chooses the ground-truth word given a ground-truth context). As the choice of sampling hyper-parameter gets closer to greedy (i.e. lower values of $k$ and $p$), next-token accuracy improves, eventually approaching the greedy MLE results. The unlikelihood-trained sampling models have similar next-token accuracy (acc) to their likelihood-trained counterparts, but exhibit fewer repetitions. For lower values of $p$ and $k$ the improvements of unlikelihood training are larger, e.g. 0.277 reduced to 0.0041 for 4-gram sequence repetitions (seq-rep-4) using top-p-0.3. At higher levels of $p$ and $k$, for all methods the continuations contain more unique tokens than those of humans, meaning those values may be too high." }, { "heading": "C GPT-2 FINE-TUNING", "text": "We evaluated the GPT-2 medium pre-trained model ('GPT-2') and two separate fine-tuning variants on Wikitext-103. The first variant ('GPT-2 MLE') was fine-tuned using maximum likelihood; we select the model state with the lowest validation perplexity. The second model ('GPT-2 UL-seq') was fine-tuned using the sequence-level unlikelihood objective (§5.2). For both evaluation and sequence-level tuning, we used a prefix length of 50 BPE tokens and a continuation length of 100 BPE tokens. In order to train on a single GPU, we used a batch size of 1024 tokens for MLE updates, and 300 prefix tokens for unlikelihood updates. Due to the smaller batch size and single-GPU setting, we used 10,000 updates during sequence-level fine-tuning, comparable to the 1,500 updates in the main experiment (§6) in terms of the total number of tokens. Results are shown in Table 7." }, { "heading": "D SEQUENCE-LEVEL RANDOM CANDIDATES", "text": "In Sec. 5.2 we described a way to penalize tokens that occurred in an n-gram repetition. One alternative is to penalize a random subset of the generated sequence. That is, given a continuation $x_{k+1}, \ldots, x_{k+N}$, we now define per-step candidates $(C^{k+1}, \ldots, C^{k+N})$ as:\n$$C^t_{\text{random-seq}} = \begin{cases} \{x_t\} & \text{if } z_t = 1 \\ \emptyset & \text{if } z_t = 0, \end{cases} \quad (23)$$\nfor each $t \in \{k+1, \ldots, k+N\}$, where $z_t \sim \text{Bernoulli}(p_{\text{penalize}})$, and $p_{\text{penalize}} \in [0, 1]$ is a fixed hyper-parameter. Intuitively, these candidates identify random tokens in the generated sequence (hence 'random-seq'), which are then penalized by the sequence-level loss (Eq. 7).\nResults with different values of $p_{\text{penalize}}$ are shown in Table 8. Penalizing 10% of the generated tokens led to substantial improvements in seq-rep-4 for both greedy and beam search compared to the baseline (e.g. 41% for $\mathcal{L}_{\text{UL-seq}}$ greedy, 73% for $\mathcal{L}_{\text{UL-tok+seq}}$ greedy), though using n-gram repetition candidates yielded further improvements (§5.2, Table 5). Improvements in single-token metrics were similar to those from the n-gram repetition candidates (e.g. wrep .287). These results with random-seq candidates demonstrate that sequence fine-tuning can yield improvements without explicitly using the notion of repetition for candidate selection. We also find that penalizing 90% of the generated tokens yields substantial improvements in beam search, but not greedy search; investigating this is left as future work.
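For completeness, here is a brief sketch contrasting the two candidate-set constructions — repeat-n candidates (Eq. 8, approximated by flagging a token when the n-gram ending at it has occurred before) and the random-seq alternative (Eq. 23) — written as plain Python over one decoded continuation. All function names are ours.

```python
import random

def repeat_ngram_candidates(tokens, n=4):
    """Approximation of Eq. 8: x_t is its own negative candidate when the
    n-gram ending at position t already occurred earlier in the sequence."""
    cands = [set() for _ in tokens]
    seen = set()
    for t in range(len(tokens)):
        if t + 1 >= n:
            ngram = tuple(tokens[t - n + 1:t + 1])
            if ngram in seen:
                cands[t].add(tokens[t])
            seen.add(ngram)
    return cands

def random_seq_candidates(tokens, p_penalize=0.1, seed=0):
    """Eq. 23: each generated token is independently chosen as its own
    negative candidate with probability p_penalize."""
    rng = random.Random(seed)
    return [{x} if rng.random() < p_penalize else set() for x in tokens]
```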
" }, { "heading": "E HUMAN EVALUATION DETAILS", "text": "E.1 UI SCREENSHOT\nE.2 CROWDWORKER QUALITY CONTROLS\nWe require workers to correctly answer both of the following quality control questions for their evaluations to be included. Both quality controls compare the true completion against a greedy baseline model.\nFollowing Li et al. (2019), we informed workers that they must provide reasoning for their choices. We filtered workers who did not provide reasoning for at least 80% of their choices.\n63% of workers fail at least one of our three quality control mechanisms (2 quality control metrics, and failing to give reasons). 61% fail at least one quality control question; 16% of workers fail both; 4% of workers fail to give reasoning for their choices.\nE.2.1 QUALITY CONTROL 1\nPrompt = = In the decades since its release , The Hustler has cemented its reputation as a classic . Roger Ebert , echoing earlier praise for the performances , direction , and cinematography and adding laurels for editor Dede Allen , cites the film as ” one of ”\nCorrect answer those films where scenes have such psychic weight that they grow in our memories . ” He further cites Fast Eddie Felson as one of ” only a handful of movie characters so real that the audience refers to them as touchstones . ” TV Guide calls the film a ” dark stunner ” offering ” a grim world whose only bright spot is the top of the pool table , yet [ with ] characters [ who ] maintain a shabby nobility and grace . ” The four leads are again lavishly praised for their performances and the\nIncorrect answer the most influential films of the year ” . In his review for the Chicago Sun @-@ Times , Richard Corliss calls it ” a film of the highest order ” and ” a film of the highest order ” . In his review for the Chicago Sun @-@ Times , Richard Corliss calls it ” a film of the highest order ” and ” a film of the highest order ” . In his review for the Chicago Sun @-@ Times , Richard Corliss calls it ” a film of the highest order ” and ” a film of\nE.2.2 QUALITY CONTROL 2\nPrompt , which had a major negative effect on the state 's large merchant fleet . Gore was in 1808 elected to the Massachusetts House of Representatives , where he successfully led Federalist efforts to ensure the selection of a Federalist slate of presidential electors . He also spearheaded actions to\nCorrect answer drive Senator John Quincy Adams from the Federalist Party over his support of Thomas Jefferson 's foreign policy . The legislature elected Adams ' successor nine months early , and gave Adams sufficiently distasteful instructions that he resigned the post and joined with the Republicans . = = Governor = = Gore led the Federalists to victory in 1809 against Sullivan 's successor , Levi Lincoln , Sr. , who had taken over as acting governor upon Sullivan 's death late in 1808 . During Gore 's term the principal domestic issue occupying state politics\nIncorrect answer prevent the American Revolutionary War from being fought by the British , and to prevent the British from using the country to launch a war against the British . Gore 's actions in the House of Representatives were a major turning point in his political career . He was elected to the House of Representatives in 1811 , and served until his death in 1815 . = = Early life and education = = </s> </s> Gore was born in Boston , Massachusetts , on February 22 , 1798 , the son of Benjamin Gore and his\nE.3 FULL HUMAN EVALUATION RESULTS" } ]
2,020
null
SP:3af65f4601748c89802e82f7e312d169ab8f54f2
[ "This paper aims at solving geometric bin packing (2D or 3D) problems using a deep reinforcement learning framework. Namely, the framework is based on the actor-critic paradigm, and uses a conditional query learning model for performing composite actions (selections, rotations) in geometric bin packing. Experiments are performed on several instances of 2D-BPP and 3D-BPP,", "This paper proposes an end-to-end deep reinforcement learning-based algorithm for the 2D and 3D bin packing problems. Its main contribution is conditional query learning (CQL) which allows effective decision over mutually conditioned action spaces through policy expressed as a sequence of conditional distributions. Efficient neural architectures for modeling of such a policy is proposed. Experiments validate the effectiveness of the algorithm through comparisons with genetic algorithm and vanilla RL baselines. " ]
Neural Combinatorial Optimization (NCO) has recently shown the potential to solve traditionally NP-hard problems. Previous studies have shown that NCO outperforms heuristic algorithms on many combinatorial optimization problems such as routing problems. However, it is less effective for more complicated problems such as packing, a type of optimization problem that involves a mutually conditioned action space. In this paper, we propose a Conditional Query Learning (CQL) method to handle the packing problem in both 2D and 3D settings. By embedding previous actions as a conditional query to the attention model, we design a fully end-to-end model and train it for 2D and 3D packing via reinforcement learning, respectively. Through extensive experiments, the results show that our method achieves a lower bin gap ratio and variance for both 2D and 3D packing. Our model improves the space utilization ratio by 7.2% compared with a genetic algorithm for 3D packing (30-box case), and reduces the bin gap ratio by more than 10% in almost every case compared with existing learning approaches. In addition, our model scales well with the number of packing boxes. Furthermore, we provide a general test environment for 2D and 3D packing for learning algorithms. All source code of the model and the test environment is released.
[]
[ { "authors": [ "Marcin Andrychowicz", "Filip Wolski", "Alex Ray", "Jonas Schneider", "Rachel Fong", "Peter Welinder", "Bob McGrew", "Josh Tobin", "OpenAI Pieter Abbeel", "Wojciech Zaremba" ], "title": "Hindsight experience replay", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "arXiv preprint arXiv:1409.0473,", "year": 2014 }, { "authors": [ "Erhan Baltacioglu", "James T Moore", "Raymond R Hill Jr." ], "title": "The distributor’s three-dimensional pallet-packing problem: a human intelligence-based heuristic approach", "venue": "International Journal of Operational Research,", "year": 2006 }, { "authors": [ "Irwan Bello", "Hieu Pham", "Quoc V Le", "Mohammad Norouzi", "Samy Bengio" ], "title": "Neural combinatorial optimization with reinforcement learning", "venue": null, "year": 2016 }, { "authors": [ "Qingpeng Cai", "Will Hang", "Azalia Mirhoseini", "George Tucker", "Jingtao Wang", "Wei Wei" ], "title": "Reinforcement learning driven heuristic optimization", "venue": null, "year": 1906 }, { "authors": [ "William Chan", "Navdeep Jaitly", "Quoc Le", "Oriol Vinyals" ], "title": "Listen, attend and spell: A neural network for large vocabulary conversational speech recognition", "venue": "In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2016 }, { "authors": [ "Henrik I Christensen", "Arindam Khan", "Sebastian Pokutta", "Prasad Tetali" ], "title": "Approximation and online algorithms for multidimensional bin packing: A survey", "venue": "Computer Science Review,", "year": 2017 }, { "authors": [ "Lu Duan", "Haoyuan Hu", "Yu Qian", "Yu Gong", "Xiaodong Zhang", "Jiangwen Wei", "Yinghui Xu" ], "title": "A multi-task selected learning approach for solving 3d flexible bin packing problem", "venue": "In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems,", "year": 2019 }, { "authors": [ "Matthew Hausknecht", "Peter Stone" ], "title": "Deep reinforcement learning in parameterized action space", "venue": "arXiv preprint arXiv:1511.04143,", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Vijay R Konda", "John N Tsitsiklis" ], "title": "Actor-critic algorithms. 
In Advances in neural information processing", "venue": null, "year": 2000 }, { "authors": [ "Wouter Kool", "Herke van Hoof", "Max Welling" ], "title": "Attention, learn to solve routing problems", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Alexandre Laterre", "Yunguan Fu", "Mohamed Khalil Jabri", "Alain-Sam Cohen", "David Kas", "Karl Hajjar", "Torbjorn S Dahl", "Amine Kerkeni", "Karim Beguir" ], "title": "Ranked reward: Enabling self-play reinforcement learning for combinatorial optimization", "venue": "arXiv preprint arXiv:1807.01672,", "year": 2018 }, { "authors": [ "Weibo Liu", "Zidong Wang", "Xiaohui Liu", "Nianyin Zeng", "Yurong Liu", "Fuad E Alsaadi" ], "title": "A survey of deep neural network architectures and their applications", "venue": null, "year": 2017 }, { "authors": [ "Thang Luong", "Hieu Pham", "Christopher D Manning" ], "title": "Effective approaches to attention-based neural machine translation", "venue": "In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing,", "year": 2015 }, { "authors": [ "Silvano Martello", "David Pisinger", "Daniele Vigo" ], "title": "The three-dimensional bin packing problem", "venue": "Operations research,", "year": 2000 }, { "authors": [ "Warwick Masson", "Pravesh Ranchod", "George Konidaris" ], "title": "Reinforcement learning with parameterized actions", "venue": "In Thirtieth AAAI Conference on Artificial Intelligence,", "year": 2016 }, { "authors": [ "Vinod Nair", "Geoffrey E Hinton" ], "title": "Rectified linear units improve restricted boltzmann machines", "venue": "In Proceedings of the 27th international conference on machine learning", "year": 2010 }, { "authors": [ "Mohammadreza Nazari", "Afshin Oroojlooy", "Lawrence Snyder", "Martin Takác" ], "title": "Reinforcement learning for solving the vehicle routing problem", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in PyTorch", "venue": "In NIPS Autodiff Workshop,", "year": 2017 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael Jordan", "Philipp Moritz" ], "title": "Trust region policy optimization", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "John Schulman", "Philipp Moritz", "Sergey Levine", "Michael Jordan", "Pieter Abbeel" ], "title": "Highdimensional continuous control using generalized advantage estimation", "venue": "arXiv preprint arXiv:1506.02438,", "year": 2015 }, { "authors": [ "Tao Shen", "Tianyi Zhou", "Guodong Long", "Jing Jiang", "Chengqi Zhang" ], "title": "Bi-directional block selfattention for fast and memory-efficient sequence modeling", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "David Silver", "Thomas Hubert", "Julian Schrittwieser", "Ioannis Antonoglou", "Matthew Lai", "Arthur Guez", "Marc Lanctot", "Laurent Sifre", "Dharshan Kumaran", "Thore Graepel" ], "title": "Mastering chess and shogi by self-play with a general reinforcement learning algorithm", "venue": "arXiv preprint arXiv:1712.01815,", "year": 2017 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Reinforcement learning: An introduction", "venue": "MIT press,", "year": 2018 }, { "authors": [ "Ashish Vaswani", "Noam 
Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Oriol Vinyals", "Samy Bengio", "Manjunath Kudlur" ], "title": "Order matters: Sequence to sequence for sets", "venue": "arXiv preprint arXiv:1511.06391,", "year": 2015 }, { "authors": [ "Ermo Wei", "Drew Wicke", "Sean Luke" ], "title": "Hierarchical approaches for reinforcement learning in parameterized action space", "venue": "AAAI Spring Symposium Series,", "year": 2018 }, { "authors": [ "Ronald J Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine learning,", "year": 1992 }, { "authors": [ "Yong Wu", "Wenkai Li", "Mark Goh", "Robert de Souza" ], "title": "Three-dimensional bin packing problem with variable bin height", "venue": "European journal of operational research,", "year": 2010 } ]
[ { "heading": "1 INTRODUCTION", "text": "How to pack boxes with the smallest bin size? With the development of globalization and ECommerce, this question becomes more and more important. Boxes are packed to various bins such as shipping container and boxcar. Many of these boxes are made by paper or plastic; packing boxes in a more efficient way can greatly reduce material cost or shipping energy.\nThe Bin Packing Problem (BPP) is one of the classic integer combinatorial optimization problems and it has been extensively studied for decades (Wu et al., 2010). It is shown that BPP is a strongly NP-hard problem (Martello et al., 2000), which requires exponential time to generate the optimal solution. Some heuristic algorithms (Baltacioglu et al., 2006) try to obtain a nearly optimal solution within polynomial time, but these methods require explicit rules for every specific problem setting. When the setting changes even very slightly, the original method cannot work properly. On contrast, the explicit or hand-crafted rules can be interpreted as policies by neural networks to make decisions, which are insensitive to problem settings. Neural networks have achieved great success in many domains, such as computer vision (Liu et al., 2017), natural language processing (Vaswani et al., 2017), speech recognition (Chan et al., 2016), etc. Inspired by these booming techniques, many studies (Bello et al., 2016) adopt the neural networks and the recent advances in artificial intelligence to solve the classic combinatorial optimization problems, such as the Travelling Salesman Problem (TSP), the Vehicle Routing Problem (VRP), etc. Some latest works propose the pointer network (Vinyals et al., 2015b) and utilize the attention mechanism with reinforcement learning (Nazari et al., 2018; Kool et al., 2019) to solve the TSP and the routing problems respectively.\nThere are also some learning-based attempts for the packing problem, which utilize reinforcement learning in the neural network model since the optimal solution is unknown beforehand. For example, MTSL (Duan et al., 2019) separates selecting and rotating step by selected learning, and applies the transitional greedy method to perform final positioning step. Laterre et al. (2018) enlighten by AlphaZero (Silver et al., 2017) adopt Monte Carlo Tree Search (MCTS) with self-competition reinforcement learning to solve the packing problem, but it is restrained to pack boxes that have been\npreliminarily divided from a bin. Cai et al. (2019) simply use reinforcement learning to get some packing results, which serve as the initialization to accelerate the original heuristic algorithms.\nHowever, these approaches either miss the connection between sub-actions or combine handicraft rules with a learning algorithm. Without the sub-action connection, the learning process becomes partial observable Markov Decision Process (MDP) (Sutton & Barto, 2018), which is hard to generalize a good policy for the lack of information. Some methods generate all possible sub-action sequences at the same time, it is still non-trivial for a neural network model to produce many mutual related outputs in a single forward prorogation even though the setting of these methods are strict MDP. These methods combining handicraft rules are not only difficult to achieve an optimal solution, but also sensitive to problem settings.\nTo the best of our knowledge, there is no end-to-end learning algorithm that solves standard packing problems. 
In this paper, we propose a Conditional Query Learning (CQL) model that directly addresses the gap between sub-actions. With the inner conditional query mechanism, the learning process becomes a fully observable MDP, which makes the problem more amenable to reinforcement learning. Compared with a model that outputs several sub-actions simultaneously, the CQL model has a smaller action space per forward step. Benefiting from the small action space, we can apply a simpler model to fit the policy, which works more efficiently, and the small action space does not sacrifice performance. As a result, we do not require hand-crafted rules to fill the gaps between sub-actions, since they are bridged by conditional queries.\nSpecifically, the packing problem requires three mutually conditioned sub-actions: box selection, rotation, and positioning. To fill the gap between sub-actions, we adopt the CQL model as follows. First, the packing problem is formulated as an MDP so that reinforcement learning can be applied. Then the previous sub-actions are embedded as a query to the next sub-action decoder. After all three sub-actions are generated, the environment performs one packing step and updates the observation. Finally, we adopt the actor-critic algorithm to update the model parameters.\nWe conduct extensive experiments to evaluate the models, and the results show that the CQL model greatly outperforms the vanilla model that produces sub-actions in one forward propagation step without queries. In addition, the CQL model achieves a lower bin gap ratio in both 2D and 3D packing compared with existing learning approaches and heuristic algorithms. Specifically, our model improves the space utilization ratio by 7.2% in 3D packing (30 boxes) compared with a genetic algorithm, and reduces the bin gap ratio by more than 10% in almost every case compared with state-of-the-art learning approaches. Furthermore, numerical results show that CQL greatly outperforms other methods as the scale of the problem increases. The learning curves and the variance of the results also illustrate that CQL makes the training process more stable.\nThe contributions of this paper are summarized as follows:\n• We propose the first end-to-end learning algorithm that solves the standard packing problem in both 2D and 3D settings;\n• We propose the conditional query learning (CQL) model to address the packing problem, which has mutually conditioned multi-dimensional actions;\n• We combine conditional query learning with an attention mechanism to construct a new learning framework;\n• We conduct extensive experiments, and the results show that our model outperforms both heuristic algorithms and state-of-the-art learning-based models.\nWe also release our model implementation and the 2D&3D packing environment for the community to test their algorithms.\nThe rest of the paper is organized as follows. We introduce related works in the next section. We introduce the preliminaries in Section 3. The design of the CQL model is presented in Section 4. We implement the CQL model and illustrate the comparison results in Section 5. Finally, we conclude this paper in Section 6." }, { "heading": "2 RELATED WORKS", "text": "Obtaining an optimal solution to combinatorial optimization problems is computationally heavy, so optimal labeled data for supervised learning is expensive.
Hence, when using Neural Networks (NNs) to solve these problems, one solution is to apply heuristic algorithm results as labeled data, but the performance of this approach cannot be better than the performance of the heuristic algorithm. The other solution is to apply reinforcement learning that makes the algorithm learn from its own experience, which is possible to produce better results than existing algorithms. Here we focus on the reinforcement learning approaches and introduce some related works of reinforcement learning and neural combinatorial optimization." }, { "heading": "2.1 SEQUENCE PROBLEM IN NEURAL COMBINATORIAL OPTIMIZATION", "text": "Enlighten by the recent success of Neural Networks (NNs), especially the fast progress in sequence to sequence model, which is initially used in Neural Machine Translation (NMT) (Bahdanau et al., 2014; Vinyals et al., 2015a; Luong et al., 2015; Vaswani et al., 2017; Shen et al., 2018). Because many combinatorial optimization problems have similar input and output structure as NMT, many studies adopt NNs to solve sequential combinatorial optimization problems. Vinyals et al. (2015b) propose Pointer Networks, which uses attention as a pointer to select a member of the input sequence as the output. Bello et al. (2016) and (Nazari et al., 2018) view TSP and VRP as MDP respectively, and they both apply policy gradient algorithm to train their models. Kool et al. (2019) further improve the result of routing problem using attention model (Vaswani et al., 2017)." }, { "heading": "2.2 REINFORCEMENT LEARNING WITH PARAMETERIZED ACTION SPACE", "text": "Different from routing problems that only require one object selected from input sequence per step, the packing problem requires three sub-actions to pack a box into the bin. This kind of action space is called parameterized action space (Masson et al., 2016) in reinforcement learning, which requires the agent to select multiple types of action every action step. Hausknecht & Stone (2015) expand DDPG to parameterized action space and test it on RoboCup soccer game. But this approach suffers from tanh saturation problem in continuous action space, so they apply inverse gradients trick to address it. Masson et al. (2016) introduce Q-PAMDP algorithm, which alternates learning action selection and parameter selection polices to make the training process more stable, but the parameter policy have to output all the parameters for all discrete actions. The output size of the parameter policy can be explosion when the problem has large high dimensional parameters with large discrete actions. The authors of Wei et al. (2018) propose a hierarchical approach, they condition the parameter policy on the output of discrete action policy, and they apply Variational Auto-Encoders (VAE) trick to make the model differentiable.\nHowever, all these researches only divide action to two parts, namely, a discrete action and a set of actions as parameters that may include discrete or continuous actions. But the problems like packing contains several mutual conditioned actions that can not directly view as conventional parameterized action space problem, otherwise the number of outputs of the second model will be the product of the candidate output of each sub-actions, which makes the model has too many output options and is hard to generalize the problem and learn to produce a good solution." 
}, { "heading": "2.3 PACKING PROBLEM", "text": "As mentioned before, the packing problems are such problem that have mutually conditioned subactions, and there are some works trying to solve this problem by NCO. Enlighten by AlphaZero Silver et al. (2017), Laterre et al. (2018) apply MCTS self play to search the better solution and learn how to pack, but their method only applies to the dataset which boxes are divided by random cuts from a regular bin. Duan et al. (2019) propose a selected learning approach solving 3D flexible bin packing problem which balances sequence and orientation complexity. They adopt pointer networks with reinforcement learning to get the box selection and rotation results, and apply greedy algorithm to select the location of boxes, which is not end-to-end learning method and can not get better than greedy algorithm. More importantly, this hybrid method do not view the entire packing problem as one complete optimization process, so the learning process only tries to optimize part of problem. At the same time, different algorithms of sub-actions may have conflict goal in optimization process.\nDifferent from previous methods, we simply embed the previous actions as an attention query for the model to reason the subsequent actions and after the full action step is finished, we perform one-step optimization along every sub-action model. In this way, we reduce the total output size to the sum of all sub-action candidate output, which makes the model smaller and easier to learn." }, { "heading": "3 PRELIMINARIES", "text": "In this section, we introduce the Bin Packing Problem (BPP) and formulate it as a MDP." }, { "heading": "3.1 BIN PACKING PROBLEM", "text": "Bin packing problems are a class of optimization problems in mathematics that involve attempting to pack objects together into containers. The goal is to either pack a single bin as densely as possible or pack all objects using as few bins as possible. For simplicity, we adopt the first goal, in which we have a number of boxes and we want to use minimal size of bin to pack all these boxes. Specifically, we have fixed bottom size rectangle (2D) or cube shape (3D) bin, and the object is to minimize the final height of bin and achieve higher space utilization ratio. The problem is the strip packing problem with rotations (Christensen et al., 2017), which is a subclass of BPPs, and we put the formal definition of this problem in Appendix 7.1. In the following sections, we elaborate our approach in the 3D BPP if not specifically stated.\nIn respect of NCO, the packing procedure of each box can be divided into three sub-actions:\n1. Selecting target box from all unpacked boxes.\n2. Choosing the rotation of the selected box.\n3. Outputting coordinates relative to the bin of the rotated box.\nThese three sub-actions are ordered and mutual conditioned, that is, to select a rotation, choosing the box to be rotated should be done first, and each previous decision serves the following one. Each sub-action can not be viewed as an independent optimization process, otherwise, there may be conflicts between optimization processes, resulting in sub-optimal and even some strange solutions." }, { "heading": "3.2 FORMULATING PACKING PROBLEM AS MARKOV DECISION PROCESS", "text": "Since the packing problem is largely NP-hard, getting an optimal solution in acceptable time is not realistic, thus we have to view the problem as an MDP and adopt reinforcement learning to make the agent learn from experience. 
In MDP, the probabilities given by p completely characterize the environment’s dynamics. That is, the state of the environment must include information about all aspects of past agent-environment interaction that make a difference for the future, and this kind of state is said to have the Markov property.\nTo make the problem satisfy Markov property, we divide our model to encoder and decoder. The encoder state is S = {s1, s2, ..., sn}, where si = (sp,i, li, wi, hi, xi, yi, zi), and sp is a boolean variable that indicates whether the box is packed in the bin. (li, wi, hi) is box length, width and height, and (xi, yi, zi) is the box left-front-bottom coordinate relative to the bin. From the perspective of a box packing step, giving s is enough for MDP, however, if we divide the box packing step to the three sub-actions, then the rotating and positioning step is not strict MDP. The rotating step has to know which box is selected, and the positioning step has to know both the selected box and rotation from previous sub-actions. Therefore, we propose the attention encoder with dynamic state and conditional query decoder which is introduced in the next section. The detail of environment transition is described in Appendix 7.2." }, { "heading": "4 CONDITIONAL QUERY LEARNING MODEL", "text": "In this section, we introduce the CQL model. Our method adds a conditional query to Multi-Head Attention (MHA) (Vaswani et al., 2017) to make the model capable of solving mutual conditioned multi-dimensional optimization problems.\nWe divide this section into three parts, namely, encoder, decoder and training. We first show the dynamic state self-attention encoder model." }, { "heading": "4.1 ATTENTION ENCODER WITH DYNAMIC STATE", "text": "As shown in Fig. 1, we adopt Transformer (Vaswani et al., 2017) layers to encode input states. Different from Vaswani et al. (2017), the sequence of boxes are unordered in packing problem, so we remove the positional encoding in Vaswani et al. (2017). The input state of encoder contains box shape, box position, and the boolean variable that indicates whether the box has been packed. In addition, we mask the box position in the input state if the box is not packed.\nUnlike machine translation or routing problems which has a fixed input feature in the procedure of inference, in packing problem, the rotation and coordinates of boxes are updated immediately after one box has been packed, which makes the encoding embedding must update after every packing step, rather than keep fixed encoded hidden feature on one training example.\nTo construct a strict MDP, we design the Attention Encoder with Dynamic State shown in Fig. 1, where we first embed input state to fixed vectors and then feed it to n layers Transformer\nthat contains a Multi-Head Attention (MHA) layer and FeedForward (FF) layer. Each layer adds a residual connection (He et al., 2016) and batch normalization (Ioffe & Szegedy, 2015). We feed the embedded vector to conditional query decoder introduced later. The decoder chooses the packing box from unpacked boxes and the rotation, coordinates of the packing box. After finishing one step packing, we update the input state of encoder according the decoding result of last packing step, more specifically, set the packed state sp,i to True and update the last packed box shape (li, wi, hi) according to the box rotation as shown in Table 1 and replace the masked position with the packed box location (xi, yi, zi) as shown in Fig. 1." 
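A small sketch of the dynamic state and its per-step update, as we read §4.1 and Table 1, is shown below. The field layout and function names are ours, and the rotated shape passed to `update_state` stands in for the rule lookup in Table 1, whose full contents are not reproduced here.

```python
import torch

def make_state(boxes):
    """Dynamic encoder state: one row (s_p, l, w, h, x, y, z) per box.

    boxes: (n, 3) tensor of box sizes; positions start masked (zeros)
    and s_p = 0 marks every box as unpacked.
    """
    n = boxes.shape[0]
    packed = torch.zeros(n, 1)
    position = torch.zeros(n, 3)          # masked until the box is packed
    return torch.cat([packed, boxes, position], dim=1)

def update_state(state, i, rotated_size, xyz):
    """After packing box i: set s_p,i = 1, write the rotated shape
    (per Table 1) and unmask the placed coordinates."""
    state = state.clone()
    state[i, 0] = 1.0                     # s_p,i = True
    state[i, 1:4] = rotated_size          # (l', w', h') after rotation
    state[i, 4:7] = xyz                   # left-front-bottom coordinates
    return state
```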
}, { "heading": "4.2 CONDITIONAL QUERY DECODER", "text": "After encoding the input state into an embedded vector for each box, we design the conditional query mechanism to handle the connection between sub-actions without greatly increasing the complexity of the model. As shown in Fig. 2, for the packing problem the model performs three sub-actions separately, namely box selection, rotation, and location for the selected box.\nIn the box selection step, we first feed the dynamic input state to the encoder and obtain $n$ hidden vectors $H = (h_1, h_2, \ldots, h_n)$ as described before. Then we feed them to the selection decoder. In this step, all information from the encoder is required, so we use self-attention followed by a selection head, which consists of several linear layers with $n$ final outputs. To avoid selecting boxes that have already been packed, we mask the packed boxes in the selection output and then perform sampling.\nIn the rotation step, we construct a conditional query vector by picking up the selected box's shape information from the input state and embedding it with one linear layer. Besides, after choosing the box to pack, the model has to know which boxes have been put into the bin, so we mask out the attention scores of unpacked boxes as shown in Eq. (2) within the scaled dot-product attention (Eq. (3)) of the decoder. Finally, we feed the decoder output $h_q$ to the rotation head to produce the rotation result (2 options for 2D and 6 options for 3D). Here $W^K$ and $W^V$ are trainable parameters, $k_i$ and $v_i$ are the key and value vectors of the attention model, and $q$ is the embedded query vector:\n$$k_i = W^K h_i, \quad v_i = W^V h_i \quad (1)$$\n$$a_i = \begin{cases} \frac{q^T k_i}{\sqrt{d_i}} & s_{p,i} = \text{True} \\ -\infty & s_{p,i} = \text{False} \end{cases} \quad (2)$$\n$$h_q = \sum_{i=0}^{n} \text{softmax}(a)_i \, v_i \quad (3)$$\nIn the final step, the model calculates the position of the selected box relative to the bin. To incorporate all previous results, the query has to combine the rotation of the selected box, which we do by following the rules in Table 1. The query is then embedded and fed to the masked MHA. The model may generate coordinates outside the bin, and a fixed bound cannot ensure that all boxes stay inside the bin since box sizes differ. We therefore bound the packing box coordinates according to the bin width and length after every packing step, moving boxes that fall outside the bin back to its border.\nThroughout the entire forward pass of packing one box, the data stream passes through the encoder and each decoder once. The target query is extracted from the input state based on the previous outputs. Through the conditional query, the model receives the information from the encoder's hidden vectors as well as previous sub-action outputs, which ensures every sub-action decoding step is a strict MDP."
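The masked conditional-query attention of Eqs. (1)–(3) can be sketched in PyTorch as follows. This is our single-head reading of the equations (the paper uses multi-head attention), all names are ours, and the sketch assumes at least one box has already been packed so the mask is non-empty.

```python
import torch
import torch.nn as nn

class ConditionalQueryAttention(nn.Module):
    """Single-head sketch of Eqs. (1)-(3): attend over encoder vectors of
    packed boxes only, driven by a query embedded from previous sub-actions."""

    def __init__(self, d_model=64):
        super().__init__()
        self.w_k = nn.Linear(d_model, d_model, bias=False)  # W^K
        self.w_v = nn.Linear(d_model, d_model, bias=False)  # W^V
        self.scale = d_model ** 0.5

    def forward(self, h, q, packed_mask):
        # h: (n, d) encoder vectors; q: (d,) embedded query;
        # packed_mask: (n,) bool, True where box i is already packed (s_p,i).
        k, v = self.w_k(h), self.w_v(h)                     # Eq. (1)
        a = (k @ q) / self.scale                            # q^T k_i / sqrt(d)
        a = a.masked_fill(~packed_mask, float("-inf"))      # Eq. (2)
        return torch.softmax(a, dim=0) @ v                  # Eq. (3), h_q

# Usage sketch: a query built from the selected box's shape embedding.
attn = ConditionalQueryAttention(d_model=64)
h = torch.randn(10, 64)
q = torch.randn(64)
mask = torch.zeros(10, dtype=torch.bool)
mask[:3] = True  # suppose 3 boxes are already packed
h_q = attn(h, q, mask)
```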
}, { "heading": "4.3 TRAINING", "text": "As mentioned earlier, we view the packing process as an MDP and apply reinforcement learning, specifically the actor-critic algorithm (Konda & Tsitsiklis, 2000). The model presented earlier is the actor model." }, { "heading": "4.3.1 CRITIC NETWORK", "text": "The critic network consists of self-attention layers followed by a value head. The input of the critic is the dynamic state $s$. The graph embedding $\bar{H} = \frac{h_1 + h_2 + \ldots + h_n}{n}$ of the last self-attention layer, i.e. the mean of the hidden vectors of the last attention layer, is fed to the value head, which contains several linear layers. To make the training process more stable and easy to tune, we separate the actor network and the critic network; that is, the two networks do not share parameters." }, { "heading": "4.3.2 VALUE FUNCTION ESTIMATION", "text": "In the actor-critic algorithm, to accurately estimate the advantages $\hat{A}_t$, which have a significant impact on optimization performance, a popular technique is to learn the state value function $V(s)$ and perform bootstrapping to obtain low-variance advantage estimates. However, using only a one-step bootstrap introduces too much value estimation error. Schulman et al. (2015b) propose Generalized Advantage Estimation (GAE), which combines n-step bootstraps to produce a more accurate advantage and balances bias against variance for stable, accurate estimation. As shown in Eq. 4, $t$ is the time index in $[0, T]$ within a $T$-step trajectory segment, and $\lambda$ is the hyper-parameter that adjusts the trade-off between bias and variance:\n$$\hat{A}_t = \delta_t + (\gamma\lambda)\delta_{t+1} + \ldots + (\gamma\lambda)^{T-t+1}\delta_{T-1}, \quad \text{where } \delta_t = r_t + \gamma V(s_{t+1}) - V(s_t). \quad (4)$$" }, { "heading": "4.3.3 REWARD FUNCTION", "text": "Unlike supervised learning, the agent in reinforcement learning learns from experience and tries to find the policy that accumulates more reward. Therefore, the design of the reward signal is a crucial factor for the learning process. In the single-bin packing problem, the goal is to minimize the height of the bin with given width and length. The most straightforward way is to adopt the negative change of bin height $-\Delta h = h_i - h_{i+1}$ as the reward signal at every packing step. However, this leads to sparse rewards, one of the biggest challenges (Andrychowicz et al., 2017) in reinforcement learning.\n$$g_i = W L H_i - \sum_{j=1}^{i} w_j l_j h_j, \qquad r = \Delta g_i = g_{i-1} - g_i. \quad (5)$$\nTo address the sparse reward problem, we design the reward signal based on the change of the current volume gap of the bin. As shown in Eq. 5, the volume gap $g_i$ at packing step $i$ is defined as the current bin volume minus the total volume of the packed boxes, where $W$, $L$, $H$ are the width, length and height of the bin, respectively. The reward of a packing step is defined as $\Delta g_i$; the accumulated reward then becomes the final gap of the bin, which is linear in the negative final bin height $-H_n$, as formulated in Eq. 6. In this way, the agent always receives a meaningful reward signal whether or not the total bin height increases in that packing step.\n$$R = \sum_{i=1}^{n} r_i = (0 - g_1) + (g_1 - g_2) + \ldots + (g_{n-1} - g_n) = -g_n = -W L H_n + \sum_{j=1}^{n} w_j l_j h_j. \quad (6)$$" }, { "heading": "4.3.4 TRAINING PROCESS", "text": "In every training step, the critic network receives the dynamic state input to estimate the state value, and the actor network performs the three steps described before to obtain each sub-action output. Thereafter, a one-step parameter update is performed for both the actor and critic networks. Because our training data is randomly generated and inexpensive, it is better to use on-policy reinforcement learning, which does not have the sample-inefficiency problem exhibited by off-policy methods (Schulman et al., 2015a).\n$$\mathcal{L} = \mathcal{L}_a + \mathcal{L}_c + \beta \mathcal{L}_{ent} \quad (7a)$$\n$$\mathcal{L}_{a,\theta_a} = -\hat{A}_t \cdot \sum_{a \in A} \log\left[\pi_{\theta_{a_s}}(s, a_s) + \pi_{\theta_{a_r}}(s, a_r) + \pi_{\theta_{a_p}}(s, a_p)\right] \quad (7b)$$\n$$\mathcal{L}_c = \text{MSE}\left(V_{\theta_c}(s_t), \, \hat{A}_t + V_{\theta_c}(s_t)\right) \quad (7c)$$\n$$\mathcal{L}_{ent} = -\sum_{a \in A} \pi_{\theta_a} \log \pi_{\theta_a} \quad (7d)$$\nEq. 7 formulates the loss computation, which includes the actor, critic and entropy loss functions. The policy consists of three sub-action policies as shown in Eq. 7b. The actor loss is the advantage multiplied by the policy gradient, which encourages actions that achieve higher accumulated reward. The critic network is trained through the MSE loss (7c).
In addition, we add the negative entropy term (7d) to the loss to prevent the policy from becoming too deterministic and to encourage exploration.\nAlgorithm 1 Conditional Query Learning\n1: Input: randomly generate a training set X of N instances\n2: Initialize actor and critic network parameters θ, φ\n3: for t in {1, ..., N} do\n4: Randomly sample a batch from X\n5: Make state s(sp, l, w, h, x, y, z)\n6: for j in 0, n/n_gae do // n is the number of boxes\n7: for k in 0, n_gae do // Getting a mini-batch for GAE\n8: Get V_θc(s_{j·n_gae+k}) // Critic network for value\n9: H = {h1, h2, ..., hn} = att_θe(s_{j·n_gae+k}) // Encoder\n10: Sample box index i from π_θs(H) // Selecting step\n11: Sample box rotation r from π_θr(H, q_{r,i}) // Rotating step\n12: Update l'_i, w'_i, h'_i via Table 1 // Update query based on rotation\n13: Sample position x, y from π_θp(H, q_{p,i}) // Positioning step\n14: Update state s and get reward // Dynamic state update\n15: end for\n16: Calculate advantages by Eq. 4 // GAE\n17: Calculate loss by Eq. 7\n18: θ ← θ + α_a ∆L_a, φ ← φ + α_c ∆L_c\n19: end for\n20: end for\n21: Output θ, φ\nAs shown in Alg. 1, after initialization, the algorithm collects n_gae-step experiences for GAE. In every packing step, the encoder embeds the input state into n × d_model hidden vectors H, and the model obtains the packing box index, rotation, and position through the conditional query process in the decoder (lines 10–13). After one packing step finishes, the input state and rewards are updated correspondingly.
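To illustrate the advantage estimation and the combined loss (Eqs. 4 and 7), here is a short, hedged PyTorch-style sketch; the exact reduction, weighting, and sign conventions in the released code may differ, and all names are ours.

```python
import torch

def gae(rewards, values, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation (Eq. 4) over a T-step segment.

    rewards: (T,) rewards r_t; values: (T+1,) V(s_0..s_T), with the final
    entry serving as the bootstrap value.
    """
    T = rewards.shape[0]
    adv = torch.zeros(T)
    running = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]  # delta_t
        running = delta + gamma * lam * running
        adv[t] = running
    return adv

def actor_critic_loss(log_probs, entropy, adv, value_pred, beta=0.01):
    """Combined loss of Eq. 7 (a sketch).

    log_probs: per-step summed log-probabilities of the three sub-actions.
    """
    actor = -(adv.detach() * log_probs).mean()                         # Eq. 7b
    critic = ((value_pred - (adv + value_pred).detach()) ** 2).mean()  # Eq. 7c
    # Entropy bonus (Eq. 7d) encourages exploration; sign conventions vary.
    return actor + critic - beta * entropy.mean()                      # Eq. 7a
```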
}, { "heading": "5.2 IMPLEMENTATION DETAILS", "text": "Both the actor and critic networks of our model have two self-attention layers. The box selection decoder has one self-attention layer, and the box rotation and location decoders each have one conditional query attention layer. Each attention layer applies batch normalization. We set the attention hidden dimension to 64 in both the 2D and 3D cases. Every decoder head has three linear layers with ReLU (Nair & Hinton, 2010) activations. The decoder computes a probability distribution for each sub-action. The box selection, rotation, and position heads generate N box selection choices, 6 rotation choices, and 128 discrete position choices per axis, respectively. A linearly decaying learning rate schedule is applied with the Adam (Kingma & Ba, 2014) optimizer, with initial learning rates of 5e-5 and 3e-4 for the actor and critic, respectively. We train our model on a single Tesla V100 GPU; training takes 2–3 days for the 20-box cases. The PyTorch (Paszke et al., 2017) implementation of the CQL model is also open sourced." }, { "heading": "5.3 RESULTS", "text": "We evaluate the model on various numbers of packing boxes, specifically 10, 16, 20, and 30 boxes.

Baselines: The Multi-Task Selected Learning (MTSL) model (Duan et al., 2019) was originally designed to minimize the surface area of the bin; its authors adopt reinforcement learning for the box selection step, supervised learning for the rotation step, and a greedy method that minimizes bin surface usage for the positioning step. For comparison, we implement their model but use the same reward function as ours (the delta-gap reward of Eq. 5) and make the positioning-step model similar to their rotation-step model, namely attention over the encoder and previous decoder steps. To verify the effectiveness of our conditional query mechanism, we also remove the conditional query from our model and obtain the box rotation and position directly from the box selection decoder. In addition, we test the rollout baseline with the REINFORCE (Williams, 1992) algorithm proposed in Kool et al. (2019), which its authors claim is more computationally efficient. Finally, the Genetic Algorithm (GA) proposed in Wu et al. (2010) is tested on 3D BPP.

Metrics: We evaluate the aforementioned algorithms by the bin gap ratio $r = 1 - \frac{\sum_{i=1}^{n} w_i l_i h_i}{W L H_n}$, which is positively related to the final bin height. The variance of the bin gap ratio is also reported to show the stability of the learning algorithms. It is worth noting that the optimal gap ratio is generally greater than 0% and differs with the dataset and box number. In general, the more boxes there are to pack, the more likely the gaps are to be filled, and filling gaps in the 3D case is even more difficult.
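A direct implementation of this metric is sketched below; the helper is our own and assumes the packed boxes are given as an (n, 3) array of widths, lengths, and heights.

```python
import numpy as np

def gap_ratio(boxes, bin_width, bin_length, final_height):
    # r = 1 - (total volume of packed boxes) / (W * L * H_n)
    packed_volume = np.prod(boxes, axis=1).sum()
    return 1.0 - packed_volume / (bin_width * bin_length * final_height)
```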
Table 2 shows the bin gap ratios and their variances on 512 test instances after 100 training epochs for the learning algorithms. We use the default settings for the genetic algorithm. The results show that our CQL model achieves a low bin gap ratio and variance in most cases. The model with the conditional query mechanism is clearly better than the no-query model, which confirms that CQL fills the gap between sub-actions and gives the learning algorithm the ability to reason about the following sub-action from the embeddings of previous outputs. Meanwhile, the rollout baseline with REINFORCE produces results similar to the no-query model but with higher variance: because the rollout method sums up the reward signals of all packing steps as the action value, its learning process treats every packing step equally and back-propagates gradients for all steps regardless of whether a step was good or bad.

The overall gap ratio and variance of CQL decrease as the number of packing boxes increases, which shows that the CQL model scales with problem size. Notice that in the 10-box case of 3D BPP, the genetic algorithm achieves a slightly lower gap ratio than the CQL model. Analyzing the results, we find that optimal solutions for small box numbers tend to tile the bottom of the bin, which makes it easy for heuristic algorithms to find good solutions. As the box number grows, however, heuristic algorithms tend to fall into local optima, since they are greedier and find it difficult to cover the entire solution space. Besides, the genetic algorithm has relatively lower variance compared with the learning approaches because it uses fixed rules that produce similar solutions for different examples, which leads to low performance on specific instances. The end-to-end MTSL learning process shows poor results, because its model is not only a partially observable MDP but also suffers from sparse rewards due to its use of the rollout method.

From the learning curves of the training process shown in Fig. 3, the CQL model also shows superior stability over the other learning algorithms, which leads to the lower variance shown in Table 2. The learning curve of the no-query model oscillates, since the sampled results of earlier steps are not passed to later ones; that is, the model can only estimate a solution that is good overall but does not fit the specific example. The learning curve of the rollout method shows a large jump at the beginning of training because of the sudden baseline update after the first epoch. In contrast, the CQL model benefits from conditional queries to construct a strict MDP and receives meaningful reward signals from every box packing step, so it shows a smooth convergence process." }, { "heading": "6 CONCLUSION AND FUTURE WORKS", "text": "In this paper, we propose the conditional query learning (CQL) model to solve packing problems. Benefiting from the conditional query mechanism, the CQL model is capable of handling problems with mutually conditioned actions using a relatively simple structure. Numerical results show that conditional query learning greatly reduces the bin gap ratio in both 2D and 3D packing settings.

We are excited about the future of the conditional query model, which can be applied to other multi-dimensional and parameterized action space problems. We can also extend the model with a dynamic attention span and continuous action spaces to further improve its scalability. In addition, the current test environment does not consider the physical gravity of boxes, and the boxes are not packed close to each other; we will add these restrictions to make the environment suitable for more practical bin packing." }, { "heading": "7 APPENDIX", "text": "" }, { "heading": "7.1 PACKING PROBLEM", "text": "We now formally define the packing problem with the notation of Wu et al. (2010).

$(l_i, w_i, h_i)$: parameters indicating the length, width, and height of box i.

$(L, W, \tilde{H})$: length, width, and height of the bin to be loaded, where $\tilde{H}$ indicates that the bin height can be adjusted.

$(x_i, y_i, z_i)$: coordinates of box i's left-bottom-behind corner.

$X_{l_i}, Z_{l_i}, Y_{w_i}, Z_{h_i}$: binary variables indicating the orientation of box i, where $X_{l_i}$ and $Z_{l_i}$ indicate whether the length direction of box i is parallel to the bin's X and Z axes, and $Y_{w_i}$ and $Z_{h_i}$ indicate whether the width direction is parallel to the Y axis or the height direction is parallel to the Z axis, respectively.
In 3D, there are six possible orientations for a box.

$a_{ij}, b_{ij}, c_{ij}$: binary variables defining the relative placement of box i to box j: the variables are 1 if box i is in front of, to the right of, or on top of box j, respectively; otherwise, 0.

$M$: a sufficiently large number.

The objective is to minimize the variable bin height $\tilde{H}$, that is, $\min \tilde{H}$, subject to the following set of constraints:

$$x_i + l_i X_{l_i} + w_i(Z_{l_i} - Y_{w_i} + Z_{h_i}) + h_i(1 - X_{l_i} - Z_{l_i} + Y_{w_i} - Z_{h_i}) \le x_j + M(1 - a_{ij}), \quad i \neq j \tag{8a}$$
$$y_i + w_i Y_{w_i} + l_i(1 - X_{l_i} - Z_{l_i}) + h_i(X_{l_i} + Z_{l_i} - Y_{w_i}) \le y_j + M(1 - b_{ij}), \quad i \neq j \tag{8b}$$
$$z_i + h_i Z_{h_i} + w_i(1 - Z_{l_i} - Z_{h_i}) + l_i Z_{l_i} \le z_j + M(1 - c_{ij}), \quad i \neq j \tag{8c}$$

$$x_i + l_i X_{l_i} + w_i(Z_{l_i} - Y_{w_i} + Z_{h_i}) + h_i(1 - X_{l_i} - Z_{l_i} + Y_{w_i} - Z_{h_i}) \le L \tag{9a}$$
$$y_i + w_i Y_{w_i} + l_i(1 - X_{l_i} - Z_{l_i}) + h_i(X_{l_i} + Z_{l_i} - Y_{w_i}) \le W \tag{9b}$$
$$z_i + h_i Z_{h_i} + w_i(1 - Z_{l_i} - Z_{h_i}) + l_i Z_{l_i} \le \tilde{H} \tag{9c}$$

$$a_{ij} + a_{ji} + b_{ij} + b_{ji} + c_{ij} + c_{ji} \ge 1, \quad i \neq j \tag{10}$$

$$X_{l_i} + Z_{l_i} \le 1 \tag{11a}$$
$$Z_{l_i} + Z_{h_i} \le 1 \tag{11b}$$
$$Z_{l_i} - Y_{w_i} + Z_{h_i} \le 1 \tag{11c}$$
$$Z_{l_i} - Y_{w_i} + Z_{h_i} \ge 0 \tag{11d}$$
$$1 - X_{l_i} - Z_{l_i} + Y_{w_i} - Z_{h_i} \le 1 \tag{11e}$$
$$1 - X_{l_i} - Z_{l_i} + Y_{w_i} - Z_{h_i} \ge 0 \tag{11f}$$
$$X_{l_i} + Z_{l_i} - Y_{w_i} \le 1 \tag{11g}$$
$$X_{l_i} + Z_{l_i} - Y_{w_i} \ge 0 \tag{11h}$$

Constraints 8 ensure that no two boxes i and j overlap with each other. Constraints 9 keep all boxes within the bin dimensions. $X_{l_i}, Z_{l_i}, Y_{w_i}$, and $Z_{h_i}$ are used to compute the respective mappings of box length, width, and height onto the bin's X, Y, and Z axes. Constraint 10 limits the relative position of any two boxes i and j. Constraints 11 ensure that the binary orientation variables are properly controlled to reflect feasible orientations." }, { "heading": "7.2 PACKING PROBLEM ENVIRONMENT DETAIL", "text": "In our packing problem environment, the model only needs to generate the index, the rotation, and the horizontal coordinates of the packing box at every packing step; the environment then automatically drops the box to the lowest available position in the bin:

$$z_i = \max_b (z_b + h_b) \tag{12}$$

where box b ranges over the already-placed boxes that overlap box i in the horizontal plane, i.e., boxes satisfying:

$$x_i < x_b + w_b, \quad x_b < x_i + w_i, \quad y_i < y_b + l_b, \quad y_b < y_i + l_i \tag{13}$$

To prevent the model from generating positions outside the bin, the environment clamps cross-border boxes to the bin border:

$$x_i = \min(x_i, W - w_i), \qquad y_i = \min(y_i, L - l_i) \tag{14}$$" }, { "heading": "7.3 EXTEND RESULTS", "text": "Here we illustrate the experimental results on 2D and 3D bin packing problems with different box numbers by showing the box placements of the different algorithms. Fig. 5 and Fig. 6 illustrate the 2D and 3D results of the CQL model for various numbers of packing boxes. As the number of boxes increases, the boxes are placed more densely and the gap ratio becomes lower. Fig. 7 and Fig. 8 show the results of the different algorithms on 2D and 3D BPP in the 20-box case. It is clear that our algorithm performs better than the others." } ]
2019
null
SP:158dd8882013a9a5efa7fd4579ad3900ca76a4b5
[ "This paper suggests an approach for learning how to sparsify similarity search graphs. Graph-based methods currently attain state of the art performance for similarity search, and reducing their number of edges may speed them up even further. The paper suggests a learning framework that uses sample queries in order to determine which edges are more useful for searches, and prune the less useful edges. This is a sensible and potentially useful approach in line with the recent flurry of work on improving algorithms with tool from machine learning.", "This paper studies the problem of improving proximity graph for nearest neighbor search. It formulates the task of pruning the graph as a problem of learning annealable proximity graph. A hard pruning processes is used after the learning process, and the results shows that the proposed method can reduce 50% of the edges and speed up the search time by 16-41%." ]
This paper studies similarity search, which is a crucial enabler of many feature vector–based applications. The problem of similarity search has been extensively studied in the machine learning community. Recent advances in proximity graphs have achieved outstanding performance by exploiting the navigability of the underlying graph structure. In this work, we introduce the annealable proximity graph (APG) method to learn and reshape proximity graphs for efficient and effective similarity search. APG makes proximity graph edges annealable, so they can be effectively trained with a stochastic optimization algorithm. APG identifies important edges that best preserve graph navigability and prunes inferior edges without drastically changing graph properties. Experimental results show that APG achieves state-of-the-art results not only by producing proximity graphs with fewer edges but also by speeding up the search time by 20–40% across different datasets with almost no loss of accuracy.
[]
[ { "authors": [ "Mohammad Aliannejadi", "Hamed Zamani", "Fabio Crestani", "W. Bruce Croft" ], "title": "Target Apps Selection: Towards a Unified Search Framework for Mobile Devices", "venue": "In SIGIR 2018,", "year": 2018 }, { "authors": [ "Artem Babenko", "Victor S. Lempitsky" ], "title": "Efficient Indexing of Billion-Scale Datasets of Deep Descriptors", "venue": "In CVPR 2016,", "year": 2016 }, { "authors": [ "Dmitry Baranchuk", "Artem Babenko", "Yury Malkov" ], "title": "Revisiting the Inverted Indices for BillionScale Approximate Nearest Neighbors", "venue": "In ECCV 2018,", "year": 2018 }, { "authors": [ "Norbert Beckmann", "Hans-Peter Kriegel", "Ralf Schneider", "Bernhard Seeger" ], "title": "The R*-Tree: An Efficient and Robust Access Method for Points and Rectangles", "venue": null, "year": 1990 }, { "authors": [ "Jon Louis Bentley" ], "title": "Multidimensional Binary Search Trees Used for Associative Searching", "venue": "Communications of the ACM,", "year": 1975 }, { "authors": [ "Léon Bottou", "Olivier Bousquet" ], "title": "The Tradeoffs of Large Scale Learning", "venue": "In Advances in Neural Information Processing Systems 20, Proceedings of the Twenty-First Annual Conference on Neural Information Processing Systems,", "year": 2007 }, { "authors": [ "Simon R Broadbent", "John M Hammersley" ], "title": "Percolation processes: I. Crystals and mazes", "venue": "In Mathematical Proceedings of the Cambridge Philosophical Society,", "year": 1957 }, { "authors": [ "Duncan S Callaway", "Mark EJ Newman", "Steven H Strogatz", "Duncan J Watts" ], "title": "Network robustness and fragility: Percolation on random graphs", "venue": "Physical review letters,", "year": 2000 }, { "authors": [ "Qi Chen", "Haidong Wang", "Mingqin Li", "Gang Ren", "Scarlett Li", "Jeffery Zhu", "Jason Li", "Chuanjie Liu", "Lintao Zhang", "Jingdong Wang" ], "title": "SPTAG: A library for fast approximate nearest neighbor search, 2018", "venue": "URL https://github.com/Microsoft/SPTAG", "year": 2018 }, { "authors": [ "DW Dearholt", "N Gonzales", "G Kurup" ], "title": "Monotonic search networks for computer vision databases", "venue": "In Twenty-Second Asilomar Conference on Signals, Systems and Computers,", "year": 1988 }, { "authors": [ "Mostafa Dehghani", "Hamed Zamani", "Aliaksei Severyn", "Jaap Kamps", "W. Bruce Croft" ], "title": "Neural Ranking Models with Weak Supervision", "venue": "In SIGIR 2017,", "year": 2017 }, { "authors": [ "Matthijs Douze", "Alexandre Sablayrolles", "Hervé Jégou" ], "title": "Link and Code: Fast Indexing With Graphs and Compact Regression Codes", "venue": null, "year": 2018 }, { "authors": [ "Cong Fu", "Deng Cai" ], "title": "EFANNA : An Extremely Fast Approximate Nearest Neighbor Search Algorithm", "venue": "Based on kNN Graph. CoRR,", "year": 2016 }, { "authors": [ "Cong Fu", "Chao Xiang", "Changxu Wang", "Deng Cai" ], "title": "Fast Approximate Nearest Neighbor Search with the Navigating Spreading-out Graph", "venue": "In VLDB’19,", "year": 2019 }, { "authors": [ "Tiezheng Ge", "Kaiming He", "Qifa Ke", "Jian Sun" ], "title": "Optimized Product Quantization for Approximate Nearest Neighbor Search", "venue": null, "year": 2013 }, { "authors": [ "Aristides Gionis", "Piotr Indyk", "Rajeev Motwani" ], "title": "Similarity Search in High Dimensions via Hashing", "venue": "In VLDB’99,", "year": 1999 }, { "authors": [ "Jiafeng Guo", "Yixing Fan", "Qingyao Ai", "W. 
Bruce Croft" ], "title": "A Deep Relevance Matching Model for Ad-hoc Retrieval", "venue": "In CIKM 2016,", "year": 2016 }, { "authors": [ "Kiana Hajebi", "Yasin Abbasi-Yadkori", "Hossein Shahbazi", "Hong Zhang" ], "title": "Fast Approximate Nearest-neighbor Search with K-nearest Neighbor Graph", "venue": "In IJCAI’11,", "year": 2011 }, { "authors": [ "Song Han", "Huizi Mao", "William J Dally" ], "title": "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding", "venue": "arXiv preprint arXiv:1510.00149,", "year": 2015 }, { "authors": [ "Song Han", "Jeff Pool", "John Tran", "William Dally" ], "title": "Learning both weights and connections for efficient neural network", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "D. Frank Hsu", "Xiaojie Lan", "Gabriel Miller", "David Baird" ], "title": "A Comparative Study of Algorithm for Computing Strongly Connected Components", "venue": "In DASC’17,", "year": 2017 }, { "authors": [ "Po-Sen Huang", "Xiaodong He", "Jianfeng Gao", "Li Deng", "Alex Acero", "Larry Heck" ], "title": "Learning deep structured semantic models for web search using clickthrough data", "venue": "In CIKM", "year": 2013 }, { "authors": [ "Piotr Indyk", "Rajeev Motwani" ], "title": "Approximate nearest neighbors: towards removing the curse of dimensionality", "venue": "In Proceedings of the thirtieth annual ACM symposium on Theory of computing,", "year": 1998 }, { "authors": [ "Lester Ingber" ], "title": "Simulated annealing: Practice versus theory", "venue": "Mathematical and computer modelling,", "year": 1993 }, { "authors": [ "Jeff Johnson", "Matthijs Douze", "Hervé Jégou" ], "title": "Billion-scale similarity search with GPUs", "venue": "CoRR, abs/1702.08734,", "year": 2017 }, { "authors": [ "Yannis Kalantidis", "Yannis S. Avrithis" ], "title": "Locally Optimized Product Quantization for Approximate Nearest Neighbor Search", "venue": null, "year": 2014 }, { "authors": [ "Scott Kirkpatrick", "C Daniel Gelatt", "Mario P Vecchi" ], "title": "Optimization by simulated annealing", "venue": null, "year": 1983 }, { "authors": [ "D.T. Lee", "Bruce J. Schachter" ], "title": "Two algorithms for constructing a Delaunay triangulation", "venue": "International Journal of Parallel Programming,", "year": 1980 }, { "authors": [ "Victor Lempitsky" ], "title": "The inverted multi-index", "venue": "In CVPR", "year": 2012 }, { "authors": [ "Wen Li", "Ying Zhang", "Yifang Sun", "Wei Wang", "Wenjie Zhang", "Xuemin Lin" ], "title": "Approximate Nearest Neighbor Search on High Dimensional Data - Experiments, Analyses, and Improvement", "venue": "IEEE Transactions on Knowledge and Data Engineering,", "year": 2019 }, { "authors": [ "Zhuang Liu", "Mingjie Sun", "Tinghui Zhou", "Gao Huang", "Trevor Darrell" ], "title": "Rethinking the Value of Network Pruning", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Yury Malkov", "Alexander Ponomarenko", "Andrey Logvinov", "Vladimir Krylov" ], "title": "Approximate nearest neighbor algorithm based on navigable small world graphs", "venue": "Inf. Syst.,", "year": 2014 }, { "authors": [ "Yury A. Malkov", "D.A. 
Yashunin" ], "title": "Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs", "venue": "CoRR, arXiv preprint abs/1603.09320,", "year": 2016 }, { "authors": [ "Bhaskar Mitra", "Fernando Diaz", "Nick Craswell" ], "title": "Learning to Match using Local and Distributed Representations of Text for Web Search", "venue": "In WWW", "year": 2017 }, { "authors": [ "Marius Muja", "David G. Lowe" ], "title": "Scalable Nearest Neighbor Algorithms for High Dimensional Data", "venue": "TPAMI 2014,", "year": 2014 }, { "authors": [ "Rodrigo Nogueira", "Kyunghyun Cho" ], "title": "Passage Re-ranking with BERT", "venue": "CoRR, abs/1901.04085,", "year": 2019 }, { "authors": [ "Yaghout Nourani", "Bjarne Andresen" ], "title": "A comparison of simulated annealing cooling strategies", "venue": "Journal of Physics A: Mathematical and General,", "year": 1998 }, { "authors": [ "Jeffrey Pennington", "Richard Socher", "Christopher D. Manning" ], "title": "GloVe: Global Vectors for Word Representation", "venue": "In EMNLP, pp", "year": 2014 }, { "authors": [ "Danny Sullivan" ], "title": "Faq: All about the google rankbrain", "venue": null, "year": 2018 }, { "authors": [ "Christophe Van Gysel", "Maarten de Rijke", "Evangelos Kanoulas" ], "title": "Learning Latent Vector Spaces for Product Search", "venue": "In CIKM ’16,", "year": 2016 }, { "authors": [ "Jizhe Wang", "Pipei Huang", "Huan Zhao", "Zhibo Zhang", "Binqiang Zhao", "Dik Lun Lee" ], "title": "Billionscale Commodity Embedding for E-commerce Recommendation in Alibaba", "venue": "KDD", "year": 2018 }, { "authors": [ "Mengdi Wang", "Qing Zhang", "Jun Yang", "Xiaoyuan Cui", "Wei Lin" ], "title": "Graph-Adaptive Pruning for Efficient Inference of Convolutional", "venue": "Neural Networks. CoRR,", "year": 2018 }, { "authors": [ "Ronald J. Williams", "David Zipser" ], "title": "A Learning Algorithm for Continually Running Fully Recurrent Neural Networks", "venue": "Neural Computation,", "year": 1989 }, { "authors": [ "Chenyan Xiong", "Zhuyun Dai", "Jamie Callan", "Zhiyuan Liu", "Russell Power" ], "title": "End-to-End Neural Ad-hoc Ranking with Kernel Pooling", "venue": "In SIGIR 2017,", "year": 2017 }, { "authors": [ "Peter N. Yianilos" ], "title": "Data Structures and Algorithms for Nearest Neighbor Search in General Metric Spaces", "venue": "In SODA", "year": 1993 }, { "authors": [ "Lei Yu", "Karl Moritz Hermann", "Phil Blunsom", "Stephen Pulman" ], "title": "Deep Learning for Answer Sentence Selection", "venue": "CoRR, abs/1412.1632,", "year": 2014 }, { "authors": [ "Hamed Zamani", "Mostafa Dehghani", "W. Bruce Croft", "Erik G. Learned-Miller", "Jaap Kamps" ], "title": "From Neural Re-Ranking to Neural Ranking: Learning a Sparse Representation for Inverted Indexing", "venue": "In CIKM", "year": 2018 }, { "authors": [ "Hamed Zamani", "Bhaskar Mitra", "Xia Song", "Nick Craswell", "Saurabh Tiwary" ], "title": "Neural Ranking Models with Multiple Document Fields", "venue": "In WSDM ’18,", "year": 2018 }, { "authors": [ "Michael Zhu", "Suyog Gupta" ], "title": "To Prune, or Not to Prune: Exploring the Efficacy of Pruning for Model Compression", "venue": "In ICLR", "year": 2018 } ]
[ { "heading": null, "text": "This paper studies similarity search, which is a crucial enabler of many feature vector–based applications. The problem of similarity search has been extensively studied in the machine learning community. Recent advances of proximity graphs have achieved outstanding performance through exploiting the navigability of the underlying graph structure. In this work, we introduce the annealable proximity graph (APG) method to learn and reshape proximity graphs for efficiency and effective similarity search. APG makes proximity graph edges annealable, which can be effectively trained with a stochastic optimization algorithm. APG identifies important edges that best preserve graph navigability and prune inferior edges without drastically changing graph properties. Experimental results show that APG achieves state-of-the-art results not only by producing proximity graphs with less number of edges but also speeding up the search time by 20–40% across different datasets with almost no loss of accuracy." }, { "heading": "1 INTRODUCTION", "text": "Similarity search (nearest neighbor search) is an integral and indispensable task in many machine learning applications, such as non-parametric classification/regression, computer vision, information retrieval, and language modeling. Recently, it has been demonstrated that it is possible to build a vector search engine to support semantic search (Chen et al., 2018; Sullivan, 2018; Wang et al., 2018a; Johnson et al., 2017), which leverages high-quality neural ranking models (Nogueira & Cho, 2019; Xiong et al., 2017; Zamani et al., 2018a) to encode both natural language query and documents into dense continuous feature vectors and performs similarity search to retrieve relevant documents with vast data volumes (e.g., based on Euclidean distance). This approach has demonstrated significant relevance gains in a wide range of applications and outperforms existing term matching baselines, such as web search (Huang et al., 2013; Zamani et al., 2018b), question and answering (Yu et al., 2014), ad-hoc retrieval (Mitra et al., 2017; Dehghani et al., 2017; Guo et al., 2016), mobile search (Aliannejadi et al., 2018), and product search (Van Gysel et al., 2016).\nThe efficiency and effectiveness of the similarity search approaches have become a problem of great interest, due to the widespread commercial value and the exciting prospect. Recent advance of proximity graphs has demonstrated great potential for fast and accurate nearest neighbor retrieval (Malkov & Yashunin, 2016; Fu et al., 2019), and the empirical performance of proximity graphs outperforms existing tree–based (Bentley, 1975; Beckmann et al., 1990; Yianilos, 1993; Muja & Lowe, 2014), locality sensitive hashing–based (Gionis et al., 1999), and product quantization– based methods (Jegou et al., 2011; Ge et al., 2013; Norouzi & Fleet, 2013; Lempitsky, 2012; Kalantidis & Avrithis, 2014) by a large margin. Proximity graphs exploit the navigability of graph structures, which the search process relies on to converge and achieve good efficiency. In practice, that often results in dense connectivity and large memory consumption because they need to have sufficient edges to maintain specific graph properties, which is a major limitation of this class of approaches.\nWe wish to improve the efficiency of similarity search. In this paper, we address the following research question: can we learn to prune edges of a proximity graph while still being accurate to find nearest neighbors? 
Specifically, the pruned proximity graph should be more efficient than state-of-the-art proximity graphs at comparable accuracy. Before providing a definite answer to the question, we briefly review the findings in percolation theory that motivate our research.

Percolation describes the phase transition of a physical system, in which one or more of its properties change abruptly after a slight change in controlling variables (e.g., temperature, pressure, or others) (Broadbent & Hammersley, 1957). Prototypical percolation processes include water turning into ice or steam, and the spontaneous emergence of magnetization and superconductivity in metals. Percolation theory mathematically models these physical systems as complex networks and models phase transitions as dramatic changes in the properties of network connections. We believe that if we model edge importance as the robustness of proximity graphs to the removal of edges between vertices, we can produce a proximity graph with fewer edges without dramatically changing the navigability of the graph.

We present the Annealable Proximity Graph (APG) for simplifying proximity graphs. In particular, we make the following contributions:

• We introduce the annealable proximity graph and summarize its key characteristics.
• To learn edge importance, we present a percolation-inspired method for identifying important edges and introduce a domain-specific loss derived from search distance errors.
• Our formulation makes it possible to leverage a stochastic optimization algorithm to optimize the objective and prune edges with low importance.
• We prove the convergence of our optimization process and provide a theoretical guarantee on the search quality of the pruned graph.

This approach is unique compared with previous proximity graph algorithms, most of which only exploit the structure of the underlying index instead of learning from the query distribution to reshape proximity graphs. We provide a detailed empirical analysis of our approach. Experimental results show that our approach significantly reduces the number of edges of state-of-the-art proximity graphs by 50% while also speeding up the search time by 20–40% across different datasets with almost no loss of accuracy." }, { "heading": "2 RELATED WORK", "text": "In this section, we review the main ideas from existing work that are relevant to our approach.

Approximate nearest neighbor search (ANN). The problem of similarity search has been extensively studied in the literature on ANN algorithms, which trade the guarantee of exactness for large efficiency improvements. Some representative methods include tree structure-based (Bentley, 1975; Beckmann et al., 1990; Yianilos, 1993; Muja & Lowe, 2014), locality sensitive hashing (LSH)-based (Gionis et al., 1999), product quantization (PQ)-based (Jegou et al., 2011; Ge et al., 2013; Norouzi & Fleet, 2013; Lempitsky, 2012; Kalantidis & Avrithis, 2014), and nearest neighbor graph-based (Hajebi et al., 2011; Fu & Cai, 2016) approaches. Although some of these methods, such as LSH, have strong theoretical performance guarantees even in the worst case (Indyk & Motwani, 1998), recent advances in proximity graphs have demonstrated logarithmic search complexity and outperform prior approaches by a large margin (Malkov & Yashunin, 2016; Douze et al., 2018; Fu et al., 2019; Li et al., 2019).

Proximity graphs. A proximity graph exploits the closeness relationships among feature vectors to support similarity search.
In particular, let $V = \{v_i \in \mathbb{R}^D \mid i = 1, \ldots, N\}$ be a database of vectors; a proximity graph $G(V, E)$ is a directed graph in which each vertex corresponds to one of the vectors $v$ and the whole graph achieves great local connectivity (as in a lattice graph) combined with a small graph diameter (as in a random graph) (Malkov et al., 2014; Malkov & Yashunin, 2016; Fu et al., 2019). Such a graph exhibits strong navigability and enables quick search with an N-greedy best-first search algorithm. During the search, a candidate queue of size L is used to determine the trade-off between search time and accuracy. Recent studies look into optimizing proximity graphs with product quantization (Douze et al., 2018; Baranchuk et al., 2018). However, these approaches often suffer a considerable amount of recall loss on large datasets because quantization errors tend to be large on dense continuous feature vectors (e.g., those generated by neural networks).

Learning to prune. Pruning is a common method to derive sparse neural networks and reduce their heavy inference cost (Han et al., 2015a;b). These methods eliminate unimportant weights through the introduction of an $L_0$ or $L_1$ regularizer in the loss function; many of them remove weights while keeping important ones to best preserve accuracy. Neural network pruning can also be viewed as an architecture search technique, in which the network is viewed as a computational graph whose vertices denote computation nodes and whose edges represent the flow of tensors (Wang et al., 2018b; Liu et al., 2019). To the best of our knowledge, learning to prune has not yet been applied to proximity graphs for similarity search. However, it appears to be a natural fit for restructuring proximity graphs to improve similarity search." }, { "heading": "3 CHALLENGES", "text": "The navigability of proximity graphs comes from approximating monotonicity graphs, e.g., Delaunay graphs (Lee & Schachter, 1980). According to graph monotonicity theory (Dearholt et al., 1988), a monotonicity graph has a strong guarantee of finding the exact nearest neighbor by following a monotonic path with 1-greedy search (Fu et al., 2019). However, monotonicity graphs in high dimensional space quickly become almost fully connected, and search in fully connected graphs would be infeasible due to the out-degree explosion problem (Hajebi et al., 2011). To address this problem, proximity graphs limit each node to connect to only R neighbors, aiming to minimize the loss of graph monotonicity while still letting greedy search be effective.

However, several challenges remain. The correct choice of R is not obvious. R cannot be too small, because then the graph tends to lose too much monotonicity and search can frequently get stuck at non-global local minima, hurting accuracy. A sufficiently large R is often required to reach high accuracy, but it also increases the number of edges significantly and decreases efficiency: (1) it ubiquitously raises the connections of both “hubs” (i.e., nodes in dense areas) and nodes in sparse areas; and (2) it makes the graph more densely connected, with many vertices sharing a lot of common neighbors, increasing unnecessary distance computations. Ideally, each vertex should have a different R that best preserves graph monotonicity. The problem goes beyond selecting a good value for R and seems to require more fundamental changes to existing proximity graphs."
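For concreteness, the bounded-queue best-first search that proximity graphs rely on (described in § 2) can be sketched as below. This is the standard search loop rather than anything APG-specific, and the data-structure choices and names are our own.

```python
import heapq
import numpy as np

def best_first_search(graph, vectors, query, entry, L):
    """Greedy best-first search with a candidate queue of size L."""
    dist = lambda v: float(np.linalg.norm(vectors[v] - query))
    visited = {entry}
    candidates = [(dist(entry), entry)]   # min-heap over the frontier
    results = [(-dist(entry), entry)]     # max-heap of the best L found
    while candidates:
        d, v = heapq.heappop(candidates)
        if len(results) >= L and d > -results[0][0]:
            break  # nearest frontier node is worse than the worst kept result
        for u in graph[v]:
            if u in visited:
                continue
            visited.add(u)
            du = dist(u)
            if len(results) < L or du < -results[0][0]:
                heapq.heappush(candidates, (du, u))
                heapq.heappush(results, (-du, u))
                if len(results) > L:
                    heapq.heappop(results)
    return max(results)[1]  # node with the smallest distance found
```

Pruning an edge removes one entry from some adjacency list in graph, which is exactly the structure the following method learns to sparsify.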
}, { "heading": "4 METHODS", "text": "In this section, we propose the annealable proximity graph (APG), which supports the exploration of edge heterogeneity in proximity graphs and learning to prune such graphs with an eye towards getting to the nearest neighbor as quickly as possible with minimal loss of accuracy." }, { "heading": "4.1 OVERVIEW", "text": "APG starts with augmenting a pre-built proximity graph with a learnable weight associated with each edge, representing its importance to preserve graph monotonicity, and a keep probability (§§ 4.2). The weight is updated in an iterative manner according to a learning rule.\nAPG employs a multi-step learning process, as illustrated in Fig. 1. At initialization of APG, all edges have equal weights. Since no edge heterogeneity has been learned so far, applying pruning at this stage could lead to premature decisions. Therefore, we start with performing a “warm-up” iteration over the edge weights only, without optimizations, i.e., as the grace-period.\nOnce this warm-up ends, we introduce a systematic and principled approach to optimize the edge keep probability distribution of the APG (§§ 4.4). In particular, APG models edge importance as the robustness of graph monotonicity to edge removal and defines an objective function that reflects the destruction of graph monotonicity, based on relative search distance errors (§§ 4.3). It then generates a sequence of randomized subgraphs through a sampling policy to learn edge importance and uses a predefined annealing schedule to optimize the objective function.\nThe process ends once we meet a stopping criterion. After this step, APG marks low weight edges as less important and perform a hard pruning to remove inferior edges, as shown in Fig. 2.\nHops Checks Entry\nQuery\nGround Truth\nAPG\nFigure 2: Example of before and after pruning proximity graph." }, { "heading": "4.2 ANNEALABLE PROXIMITY GRAPH (APG)", "text": "Given a proximity graphG(V,E), an annealable proximity graphG∗(V,E) is obtained by augmenting each edge e ∈ E with a weight variable we, and a keep probability function p : E → (0, 1) such that p(e) ≡ pe indicates the keep probability of e ∈ E. That is, an independent Bernoulli random variable Re, where P(Re = 1) = pe,P(Re = 0) = 1− pe is assigned to each e. Intuitively, pe should be: (i) monotonically increasing as we increases; and (ii) limwe→+∞ pe = 1, and limwe→−∞ pe = 0. More importantly, it is desirable to have we initialized to similar values for all edges, allowing each edge to have an equal probability of consideration when there is little information about edge importance. As the optimization process continues, pe should converge into a degenerated distribution that allows identifying a subset of removable edges that do not significantly change graph properties. Moreover, we introduce an additional parameter, the temperature T ∈ (0,∞), which smooths the probabilities pe as following. If T → ∞ (at the beginning), the probabilities pe converges uniformly to the same value regardless of edge e; on the other hand, if T → 0 (at the end), the probabilities pe converges to either 1 or 0, for important and not important edges, respectively. To satisfy the above conditions, we introduce the following function p:\npe(T ) = 1 1 + exp ( −we+µT ) (1) where µ is a normalizing factor to keep\n∑ e∈E pe(T ) = C a constant." 
}, { "heading": "4.3 ROBUSTNESS OF GRAPH MONOTONICITY", "text": "To efficiently find nearest neighbors, all the previous algorithms try to exploit the proximity graph structure of V vectors by letting each vertex connect a number of R neighbors. However, some connections could be more crucial for preserving graph monotonicity, which is important for search efficiency, while the rest is less important. How do we identify those important edges?\nOne naive approach is to keep those edges that appear as part of the shortest path from the entry vertex to the ground truth nearest neighbor for a query. Such an approach significantly suffers overfitting: non-shortest-path edges may still contain possibly relevant closeness relation for unseen queries on the test query set. Another possibility is to treat all checked edges during the search process as important. However, the checked and taken edges are then not differentiated, and some checked but not taken edges may even hurt accuracy by misleading the route. Both of these results are undesirable. As § 3 mentioned, proximity graphs rely on the approximation of graph monotonicity to converge and achieve their efficiency. Can we identify important edges based on their robustness to preserve graph monotonicity?\nIdentifying important edges. Inspired by the phase transition in percolation (Broadbent & Hammersley, 1957; Callaway et al., 2000), we introduce a method for identifying edge importance in proximity graphs. In particular, as Fig. 3 shown, for a given G∗, we randomly delete each edge e with probability 1 − pe (remember that pe denotes the keep probability of e), independently of all other edges, and we denote the resulting random graph by G (V,E \\ F ), where F is the set of deleted edges. For any query q, if we can find the exact nearest neighbor in G but not in G under a search budget, then we treat the edge hops along the search path of q in G, including those erroneous paths, to be important for preserving the robustness of graph monotonicity, because it is their deletion that causes the failure to find the nearest neighbor in G .\nRelative distance errors. Once we get the set of edge hops for a query q (let us denote it as Hq), we update the weight wh of h ∈ Hq to increase the importance of those edges, based on how far off the found candidate is relative to the true nearest neighbor, similar to the Teacher Forcing scheme (Williams & Zipser, 1989). In particular, assumeG answers the query q by returning a point p, and G answers q with a found candidate p′, we define the weight update as:\n∆wh = ( δ〈p′, q〉 δ〈p, q〉 − 1) · η,∀h ∈ Hq (2)\nwhere δ〈·, ·〉 represents the distance between two vertices and η is the learning rate. It measures the delta of searching q’s nearest neighbor in a subgraph G versus in the full graph G by removing F edges. The update is designed based on the following intuition: when the returned candidate p′ in G is the exact nearest neighbor as q returned by G, then the deleted edges are less important and the relative distance error is 0. Otherwise, the larger δ〈p′, q〉 in comparison to δ〈p, q〉, the more importance Hq indicates, and the edge weights of Hq should increase more.\nObjective. Based on Eqn. 2, we propose to learn the keep probability distribution of APG while minimizing the relative distance error over a learning set Q. 
Objective. Based on Eq. 2, we propose to learn the keep probability distribution of APG while minimizing the relative distance error over a learning set $Q$. In particular, the learning objective function is defined as:

$$\text{minimize} \quad \frac{1}{|Q|} \sum_{q \in Q} \mathbb{E}\left[\sum_{h \in H_q} \Delta w_h\right] \tag{3}$$

Based on the learned keep probability, we identify a subgraph $\mathcal{G}$ that is robust to the deletion of a subset of edges $F$ with minimal loss of graph monotonicity." }, { "heading": "4.4 OPTIMIZATION", "text": "The main goal of the optimization is to construct a proper learning framework for the objective defined in Eq. 3. Similar to bagged ensemble learning (Bengio et al., 2017), one approach is to initialize a set of graphs, each with a subset of edges randomly deleted, learn edge importance individually on each subgraph, and finally combine the learned weights. However, bagging ignores the dependency among the individual subgraphs, and edge importance does in fact depend on the chosen subgraph. In this paper, we introduce a learning algorithm, inspired by simulated annealing (Ingber, 1993) and stochastic optimization (Bottou & Bousquet, 2007), in which we probabilistically generate a sequence of sampled subgraphs to learn the edges that are important for preserving graph monotonicity. The benefit is that it gradually discovers important edges while also allowing edges with lower keep probability to be sampled and demonstrate their value.

Generating a sequence of random subgraphs. For a given $G^*$, we generate a sequence of randomized subgraphs $\mathcal{G}(1) \to \mathcal{G}(2) \to \cdots \to \mathcal{G}(K)$, corresponding to $K$ optimization steps. At each step $k$, the keep probability of each edge is computed from the weights of the previous step, $w(k-1)$. The subgraph $\mathcal{G}(k)$ is obtained by sampling edges from $G^*$, randomly picking a set of $E(k)$ ($|E(k)| \le |E|$) edges according to this keep probability. The new weights $w(k)$ are then obtained by learning to minimize the relative distance errors measured on $\mathcal{G}(k)$. As a stringent control, the expected number of edges selected by the algorithm is $|E(k)| = \lceil \lambda(k) \cdot |E| \rceil$, where $0 < \lambda(k) \le 1$ denotes the sampling ratio at iteration $k$ and is governed by a sampling policy.

Gradual sampling policy. The sampling policy $\lambda(k)$ decides how many edges a sampled subgraph $\mathcal{G}(k)$ has. We found that the simplest solution of a fixed value $\lambda(k) \equiv 1 - \sigma$ works reasonably well in most cases, where $\sigma$ is the final pruning ratio (the fraction of pruned edges). In this paper, inspired by the formula suggested in (Zhu & Gupta, 2018), we define $\lambda(k)$ as:

$$\lambda(k) = 1 - \sigma + (\lambda(0) + \sigma - 1)\left(1 - \frac{k}{K}\right)^c \tag{4}$$

where $c \in \{1, 3\}$ and $\lambda(0)$ is the initial sampling rate. Taking $\sigma = 0.5$ and $\lambda(0) = 1$ as an example, this policy intuitively selects more edges for exploration at the beginning and gradually becomes more selective as the optimization approaches the end.
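A minimal implementation of the schedule in Eq. 4 (our own helper) is:

```python
def sampling_ratio(k, K, sigma, lam0=1.0, c=3):
    # Eq. 4: equals lam0 at k = 0 and decays to the final ratio 1 - sigma at k = K.
    return 1.0 - sigma + (lam0 + sigma - 1.0) * (1.0 - k / K) ** c
```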
Binomial weight normalization. The expected number of edges of a random subgraph $\mathcal{G}(k)$ follows a Poisson binomial distribution: it is the sum of Bernoulli random variables $R_e$, $e = 1, \ldots, |E|$, each taking the values 0 and 1 with probabilities $1 - p_e$ and $p_e$, respectively. However, since $\Delta w \ge 0$ in Eq. 2, how can we expect the sampled graph $\mathcal{G}(k)$ to have $|E(k)|$ edges when increased weights also increase the sum of the $R_e$? To address this issue, we perform a binomial normalization that adjusts the edge weights at each iteration by adding a normalizing factor $\mu(k)$ (as in Eq. 1) to all edge weights, so that the expected sum of the $R_e$ equals $|E(k)|$:

$$|E(k)| \equiv \mathbb{E}\left[\sum_{e \in E} R_e\right] = \sum_{e \in E} p_e(T) = \sum_{e \in E} \frac{1}{1 + e^{-\frac{w_e + \mu(k)}{T}}} \tag{5}$$

where $\mu(k)$ is computed through a binary search over the sum of probabilities, with a time complexity of $O(|E| \cdot \log(\max(w(k)) - \min(w(k))))$.

Annealing schedule. To balance exploration and exploitation of edge importance, our approach includes an annealing schedule $\Phi(k)$ that determines the temperature $T$, which is updated along the iterations. The choice of schedule is dominated by a trade-off. On one hand, fast temperature decay simplifies the optimization objective and reduces complexity, helping the edge weights converge, since inferior edges quickly have their keep probability driven to 0 and are excluded from subsequent optimization. However, premature decisions could lead to a sub-optimal keep probability distribution. This suggests choosing a slow temperature decay, which provides more opportunities for testing random subgraphs during the pre-convergence phase and may also find a better subgraph during the convergence phase.

The solution we propose is based on observations from simulated annealing, which shares a similar trade-off between the continuity of the optimization process and the time required to derive a solution (Ingber, 1993). Among the many alternatives, we choose the exponential schedule, which has been shown to be effective in multiple other tasks, e.g., (Kirkpatrick et al., 1983; Nourani & Andresen, 1998):

$$\Phi(k) = T_0 \cdot \beta^k \tag{6}$$

This schedule starts with a relatively high temperature $T_0$ and decays quickly with a decay factor $\beta$.

Hard pruning. The optimization process ends when it meets a stopping criterion; in our current implementation, we use a simple criterion that stops after a given number of iterations. Once the optimization finishes, we rank the edges by weight and prune the less important edges according to a desired pruning ratio $\sigma$. While pruning, we avoid deleting bridges that would disconnect the graph. After pruning, we add a minimal number of edges to keep the graph strongly connected through connectivity augmentation (Hsu et al., 2017). The overall learning algorithm is given in Algorithm 1.

Convergence and correctness proof. We demonstrate the effectiveness of our approach empirically in § 5 and provide a theoretical proof of the convergence and correctness of our algorithm in Theorem A.1 (Appendix A).

Algorithm 1 APG learning algorithm
1: Input: Unpruned proximity graph G(V, E), learning set Q, candidate queue length L.
2: Output: Pruned graph G(V, E \ F).
3: Parameters: Learning rate η, starting temperature T0, decay factor β, pruning ratio σ, max iteration K
4: Init: T ← T0, k ← 0
5: Convert G(V, E) to G*(V, E), w ← 0
6: Update w according to a warm-up run
7: while k ≤ K do
8:   λ ← 1 − σ + (λ(0) + σ − 1)(1 − k/K)^c
9:   Normalize w s.t. |E(k)| = ⌈λ(k) · |E|⌉ ≡ Σ_{e∈E} 1 / (1 + exp(−(w_e + µ(k))/T))
10:  Randomly sample a subgraph G(k)(V, E(k))
11:  for q in Q do
12:    p′, _ ← search(G(k), q, L)
13:    p, Hq ← search(G, q, L)
14:    if p ≠ p′ then
15:      for h in Hq do
16:        ∆w_h ← (δ⟨p′, q⟩ / δ⟨p, q⟩ − 1) · η
17:        w_h ← w_h + ∆w_h
18:  T ← T0 · β^k
19:  Shuffle Q
20: Remove |F| = σ · |E| lowest-ranking edges from G*
21: Convert G*(V, E \ F) to G(V, E \ F)"
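Line 9 of Algorithm 1 is the binomial normalization of Eq. 5; the sketch below (our code, with an ad hoc bracketing interval) solves for µ(k) by bisection, which works because each p_e is monotonically increasing in µ.

```python
import numpy as np

def find_mu(weights, target_edges, temperature, tol=1e-6):
    # Binary search for mu so that sum_e p_e(T) matches target_edges (Eq. 5).
    lo = -weights.max() - 50.0 * temperature  # expected edge count ~ 0 here
    hi = -weights.min() + 50.0 * temperature  # expected edge count ~ |E| here
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        expected = (1.0 / (1.0 + np.exp(-(weights + mu) / temperature))).sum()
        if expected < target_edges:
            lo = mu
        else:
            hi = mu
    return 0.5 * (lo + hi)
```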
}, { "heading": "5 EVALUATION", "text": "" }, { "heading": "5.1 METHODOLOGY", "text": "Datasets. We evaluate APG on three publicly available datasets that are widely used as similarity search benchmarks:

• SIFT1M is a classical dataset containing 128-dimensional SIFT descriptors (Jegou et al., 2011). It consists of 1,000,000 base vectors, 100,000 learning vectors, and 10,000 testing vectors.

• Deep1M is a random 1,000,000-vector subset of one billion 96-dimensional vectors produced by a CNN (Babenko & Lempitsky, 2016). We sample 100,000 vectors from the provided 350M learn set as the learning set. For testing, we take the original 10,000 queries.

• GloVe is a collection of 200-dimensional word embedding vectors from Twitter data (Pennington et al., 2014). We randomly sample from the original 1,193,514 vectors to get base, learning, and testing sets containing 1,000,000, 100,000, and 10,000 vectors, respectively.

For each dataset, we use one of the state-of-the-art approaches, the Hierarchical Navigable Small World (HNSW) graph (Malkov & Yashunin, 2016), to build the proximity graph from the base set, which we refer to as PG (proximity graph). We use the learning set to learn and prune the bottom layer of HNSW and the testing set for final evaluation. We do not prune edges in the upper layers of HNSW since those layers contain far fewer edges. We set the hyperparameters as T0 = 1, K = 20, β = 0.8, η = 0.1, and σ = 0.5. Training takes 77, 40, and 128 minutes for SIFT1M, Deep1M, and GloVe, respectively.

Setup. All experiments were run on a 64-bit Linux Ubuntu 16.04 server with an Intel Xeon CPU E5-2650 v4 @ 2.20GHz processor.

Implementations. APG and the learning algorithm are implemented in C++. Subgraph sampling is implemented using a binary mask map, of the same size as the number of edges, that determines which edges are kept in the sampled subgraph. Weights that are masked out in the sampled subgraph are not updated during the weight update phase.

Evaluation metrics. To measure pruning effectiveness, we report the number of edges before and after pruning and the search time. We measure accuracy as the rate of queries for which the exact nearest neighbor is found. Since it is essential to be both fast and highly accurate, we focus on the high-accuracy range."
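The accuracy metric above reduces to exact recall@1; a trivial version of the computation (our code) is:

```python
def recall_at_1(found_ids, true_ids):
    # Fraction of queries whose exact nearest neighbor was retrieved.
    hits = sum(f == t for f, t in zip(found_ids, true_ids))
    return hits / len(true_ids)
```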
}, { "heading": "5.2 EXPERIMENT RESULTS", "text": "In this section we evaluate the proposed method by comparing the following schemes on each dataset:

• Original PG. We construct a proximity graph over the base vectors using the HNSW algorithm.

• Ours. Our main algorithm as described in § 4. We use the same initial R as PG and set the pruning ratio to σ = 0.5, which forces half of the edges to be useful for preserving the graph properties.

• PG + sampling. As a heuristic baseline, we uniformly sample half of the edges of PG to remove, without iterative optimization or annealing of edge weights.

• Sparse-PG. Unlike ours, this approach directly constructs a proximity graph with only half the edges to begin with. Hence all the edges still have equal importance.

Table 1 shows the accuracy, the edge counts, and the search time for all three datasets. To compare the baselines and our approach apples to apples, we perform controlled experiments in which all approaches must reach the same accuracy target, allowing us to compare the edge reduction rate and the search time improvement. Given an accuracy target (e.g., 0.99 for SIFT1M, 0.94 for Deep1M, and 0.83 for GloVe), we vary L to find the minimum latency that reaches the desired accuracy.

Overall, compared to the original proximity graph, ours reduces the number of edges by 50%, making PG more memory efficient. On SIFT1M and GloVe, ours further makes the search 26.1% and 1.8% faster, respectively. This is because pruning simplifies the proximity graph while preserving graph monotonicity, so a query still reaches the nearest neighbor but does so faster by avoiding checks of less important edges. On Deep1M, the fastest search time is achieved by the unmodified PG, because when the target accuracy is very high (e.g., 0.99) there are far fewer redundant edges; for this particular dataset, after pruning 50% of the edges, ours needs to search slightly more to reach the same level of accuracy.

Both PG + sampling and sparse-PG have a similar number of edges to ours. However, reducing the number of edges alone does not necessarily lead to better search efficiency. If the search took a fixed number of hops to reach the nearest neighbor, pruning edges would lead to fewer vectors being checked. However, for proximity graphs the search stops when (1) greedy search reaches a local optimum or (2) the search time budget is exhausted, so the search may end up checking the same number of edges even after pruning. Since subsampling does not take graph monotonicity into account, it leads to poor search efficiency, either because a query needs to take a detour to find the nearest neighbor or because it may not find the nearest neighbor at all if the graph becomes disconnected. In fact, randomly removing edges is so destructive that GloVe cannot reach the desired 0.83 accuracy even when the search time increases from 0.55 ms to 1.45 ms. On the other hand, sparse-PG directly restricts the number of edges per node during the graph construction phase. As described in § 3, however, a smaller R hurts the connectivity of the proximity graph, and search can easily get stuck at local minima. To reach the same accuracy, this approach needs to search more, degrading the search time. As a result, APG makes the search 25–74.2% faster than these two approaches.

Impact of pruning ratio σ. We evaluate the impact of different pruning ratios σ. Fig. 4a shows that, as σ increases, the number of edges left in the pruned graph decreases linearly, which is expected because our algorithm enables accurate control over the number of pruned edges.

Fig. 4b evaluates the impact on accuracy of varying σ under L = 20, 50, 100. The accuracy remains almost on par with the unpruned proximity graph when the pruning ratio is at most 50%, and starts to drop significantly once the pruning ratio exceeds 50%. The results suggest that our approach prunes a large number of edges while still letting queries find their nearest neighbors, as long as the proximity graph has not entered a phase transition where it starts to quickly lose graph monotonicity.

To demonstrate the impact of the pruning ratio on search efficiency, Fig. 4c and Fig. 4d illustrate the distance computations and search time under different σ. The results show that larger σ often leads to fewer distance computations and shorter search time, across varying L.
This is expected, because APG makes the proximity graph sparser and saves both distance computations and search time by avoiding checks of less important edges.

Impact on degree distribution. To see how pruning affects the graph topology, Fig. 6 in Appendix B shows the frequency distributions of the in-degree and out-degree of all nodes before and after pruning (σ = 0.5). We found that the out-degree distribution tends to smooth out into a power-law distribution after pruning, while the in-degree remains binomially distributed with only a slight shift.

Internals of the APG learning process. To reveal the internal learning process, Fig. 7 in the Appendix shows snapshots of the keep probability distribution at different iterations. The distribution starts with a relatively high temperature and a high sampling ratio λ, so most edges have a similar probability of being selected (e.g., 0.8–1). As the iterations proceed, the temperature decays following the annealing schedule Φ(k) (Eq. 6) and the sampling ratio follows the sampling policy λ(k) (Eq. 4). As a result, edges that are less important for preserving graph monotonicity gradually have their probability reduced. This process continues until the distribution becomes very biased, with most edge probabilities concentrated around two peaks. This is expected because, as Theorem A.1 shows, the optimization eventually converges, with the joint distribution of $R_e(T)$ equal across all edges $e$ for $T \to \infty$ and sparsely supported for $T \to 0$." }, { "heading": "5.3 COMPARISON WITH EXISTING METHODS", "text": "We compare APG with two state-of-the-art proximity graphs: (1) HNSW (Malkov & Yashunin, 2016) and (2) NSG (Navigating Spreading-out Graph) (Fu et al., 2019).

We further report comparison results for the corresponding sparse counterparts: (3) HNSW-sparse and (4) NSG-sparse, both of which have a similar number of edges to ours. We also provide a comparison to (5) HNSW with random pruning.¹

We first consider the trade-off between search time and accuracy. Fig. 5 illustrates the accuracy-latency trade-off between APG and the other configurations. We observe that on SIFT1M, APG reaches the highest accuracy (0.995) under the same search time budget (e.g., 0.2 ms) and is the fastest to reach the same accuracy (e.g., 0.99), which indicates that by pruning redundant edges APG makes graph similarity search faster. On the other hand, compared to the sparse counterparts, APG delivers a better accuracy-latency trade-off than both HNSW-sparse and NSG-sparse: it is 34% and 66% faster than HNSW-sparse and NSG-sparse, respectively, in reaching the same accuracy (0.98), and achieves the highest accuracy under the same search time budget (e.g., <0.3 ms). This indicates that it is preferable to first build a proximity graph with a large number of edges and then prune it, rather than directly build a proximity graph of a size comparable to the pruned graph. Random pruning has the worst performance, because it considers neither the graph structure nor edge importance. On Deep1M and GloVe, APG outperforms HNSW-sparse, NSG-sparse, and HNSW-rand. Compared to HNSW and NSG, APG achieves an almost on-par accuracy-latency trade-off, but it has 50% fewer edges than HNSW and NSG (Fig. 5d) and is therefore much more memory efficient.
}, { "heading": "6 CONCLUDING REMARKS", "text": "Proximity graph is an important data structure for building large scale vector search engine of many machine learning systems. It is crucial to make it answer queries with low latency, low memory cost, and high accuracy. To the best of our knowledge, this is the first work on proximity graphs that demonstrate that we can learn to anneal and prune proximity graphs without losing much accuracy. This has several benefits: pruned edges reduce the memory cost; and the pruned proximity graphs perform similarity search 21–41% faster than existing and the state-ofthe-art approaches with minimal loss of accuracy. The cost is a small investment on learning and optimization time. We open-sourced the code at https://drive.google.com/open?id= 15vGhNS0O9l-zPAbdPAxwxIzV558jjGeQ.\n1APG is built with R = 64 and a pruning ratio σ = 0.5. (1) and (2) are built with R = 64, (3) and (4) are built with R = 32. (5) is built with R = 64 but with 50% of edges randomly deleted." }, { "heading": "A PROOF OF THEOREM", "text": "Theorem A.1. Let G0(V,E0) be the original graph to prune, and let Q be the query set. Suppose there exists a subset of edgesE∗ ⊆ E0 such that the average recall of retrieving the nearest neighbor in V for all queries q ∈ Q using edges in E∗ is 1. Let G(k) be the random graph at iteration k running Algorithm 1, and let {Re(k)} be the family of Bernoulli random variables defined on the edges of G(k), with P(Re(k) = 1) = pe(k). Assume that pe(ke) < 12 for e ∈ E0\\E\n∗ at some step ke. Also assume that T (k) ≥ T (k + 1) for all k. If σ = 1− |E∗|/|E0|, then:\n(1) ∑ e∈E∗ pe(k + 1) ≥ ∑ e∈E∗ pe(k) for all sufficiently large k. In particular,\nlimk→∞ ∑ e∈E∗ pe(k) exists.\n(2) If limk→∞ T (k) = 0, then limk→∞ pe(k) = 1 for all e ∈ E∗, and limk→∞ pe(k) = 0 for all e ∈ E0\\E∗.\n(3) With the condition in (2), suppose r(k) denotes the average expected recall of retrieving the ground truth nearest neighbor of a query q ∈ Q in a subgraph of G(k) randomly sampled from the joint distribution {Re(k)}. Then limk→∞ r(k) = 1.\nTheorem A.1. We start with showing (1). Note that the probability sum constraints∑ e∈E∗ pe(k) + ∑ e∈E0\\E∗ pe(k) = (1− σ)|E0| = |E∗|\nis satisfied for all k. It then suffices to show that∑ e∈E\\E∗ pe(k + 1) ≤ ∑ e∈E\\E∗ pe(k)\nfor sufficiently large k. Observe that we(k) remains a constant (= we(0)) for all k and all e ∈ E0\\E∗, and that µ(k) monotonically decreases as k increases. We claim that pe(k+1) ≤ pe(k) for all k ≥ ke. Together with the probability sum constraints this shows that ∑ e∈E∗ pe(k)\nis monotonically increasing for k ≥ maxe ke. Since ∑ e∈E∗ pe(k) is bounded from above,\nlimk→∞ ∑ e∈E∗ pe(k) = supk ∑ e∈E∗ pe(k) exists.\nFor (2), since limk Tk = 0, observe that limk→∞ pe(k)→ 0 for all e ∈ E0\\E∗. Thus the probability sum constraint enforces that limk ∑ e∈E∗ pe(k) = |E∗|. Recall that lim infk xk + lim supk yk ≥ lim infk(xk + yk) for any real sequences xk and yk. Now consider any e ∈ E∗, we have\nlim inf k→∞ pe(k)\n≥ lim k→∞ ∑ e∈E∗ pe(k)− lim sup k→∞ ∑ e′∈E∗\\{e′} pe′(k)\n≥ |E∗| − ∑\ne′∈E∗\\{e′}\nlim sup k→∞ pe′(k)\n≥ |E∗| − ∑\ne′∈E∗\\{e′}\n1\n= |E∗| − (|E∗| − 1) = 1.\nwhich implies that limk pe(k) = 1.\nTo prove (3), we first define the notation χ(q, E) to be the indicator function of whether the ground truth nearest neighbor of a query q can be retrieved in G using a subset of edges E ⊆ E0. More precisely, χ(q, E) = 1 if the nearest neighbor of q can be retrieved using edges in E; and χ(q, E) = 0 otherwise. 
Note that the average recall at step k can be written as\nrk = (1/|Q|) ∑ q∈Q ∑ E⊆E0 χ(q, E) P(E is chosen from G(k)),\nwhere the second summation is over all subsets of E0. Since each edge is chosen independently of the other edges, we can expand the probability, and it follows that\nrk = (1/|Q|) ∑ q∈Q ∑ E⊆E0 χ(q, E) ∏ e∈E pe(k) ∏ e/∈E (1− pe(k)).\nSince we assumed that χ(q, E∗) = 1 for all q ∈ Q, it follows that χ(q, E) = 1 for all E ⊇ E∗. Thus\nrk ≥ (1/|Q|) ∑ q∈Q ∑ E∗⊆E⊆E0 ∏ e∈E pe(k) ∏ e/∈E (1− pe(k)).\nNote that\n∑ E∗⊆E⊆E0 ∏ e∈E pe(k) ∏ e/∈E (1− pe(k)) = ∑ E∗⊆E⊆E0 ∏ e∈E∗ pe(k) ∏ e∈E\\E∗ pe(k) ∏ e/∈E (1− pe(k)) = ∏ e∈E∗ pe(k) ∑ E∗⊆E⊆E0 ∏ e∈E\\E∗ pe(k) ∏ e/∈E (1− pe(k)) = ∏ e∈E∗ pe(k) ∑ E∗⊆E⊆E0 P(E\\E∗ is chosen from E0\\E∗) = ∏ e∈E∗ pe(k).\nTherefore\nlim inf k→∞ rk ≥ lim inf k→∞ (1/|Q|) ∑ q∈Q ∏ e∈E∗ pe(k) ≥ (1/|Q|) ∑ q∈Q lim inf k→∞ ∏ e∈E∗ pe(k) ≥ (1/|Q|) ∑ q∈Q ∏ e∈E∗ lim inf k→∞ pe(k) = (1/|Q|) ∑ q∈Q 1 = 1.\nThis shows that limk→∞ rk = 1 and completes the proof of the theorem." }, { "heading": "B FREQUENCY DISTRIBUTION OF DEGREE", "text": "" }, { "heading": "C INTERNALS OF THE APG LEARNING PROCESS", "text": "" } ]
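As a small sanity check of the key inequality in part (3), the following toy script enumerates all edge subsets exactly and verifies that the expected recall is at least ∏ e∈E∗ pe. The recall indicator χ here is a toy monotone surrogate of our own making (retrieval succeeds if either of two sufficient edge sets survives); the edge counts and probabilities are likewise invented for illustration.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n_edges = 8
E_star = {0, 1, 2}                        # a sufficient edge set (recall 1)
alt = {3, 4}                              # another sufficient set, so the bound is not tight
p = rng.uniform(0.5, 0.95, size=n_edges)  # keep probabilities p_e(k)

def chi(bits):
    """Toy monotone recall indicator: success if a sufficient set survives."""
    return all(bits[e] for e in E_star) or all(bits[e] for e in alt)

# Exact expected recall under independent Bernoulli edge sampling.
r = sum(
    chi(bits) * np.prod([p[e] if b else 1 - p[e] for e, b in enumerate(bits)])
    for bits in itertools.product([0, 1], repeat=n_edges)
)
bound = np.prod([p[e] for e in E_star])   # the lower bound from the proof
print(f"expected recall {r:.4f} >= bound {bound:.4f}")
```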
2019
null
SP:23726c6ff50e4ff1beb7f21e31a9f6286a656b1e
[ "This paper studies how to improve the multi-task learning from both theoretical and experimental viewpoints. More specifically, they study an architecture where there is a shared model for all of the tasks and a separate module specific to each task. They show that data similarity of the tasks, measured by task covariance is an important element for the tasks to be constructive or destructive. They theoretically find a sufficient condition that guarantee one task can transfer positively to the other; i.e. a lower bound of the number of data points that one task has to have. Consequently, they propose an algorithm which is basically applying a covariance alignment method to the input. ", "This paper analyzed the principles for a successful transfer in the hard-parameter sharing multitask learning model. They analyzed three key factors of multi-task learning on linear model and relu linear model: model capacity (output dimension after common transformation), task covariance (similarity between tasks) and optimization strategy (influence of re-weighting algorithm), with theoretical guarantees. Finally they evaluated their assumptions on the state-of-the-art multi-task framework (e.g GLUE,CheXNet), showing the benefits of the proposed algorithm." ]
We investigate multi-task learning approaches that use a shared feature representation for all tasks. To better understand the transfer of task information, we study an architecture with a shared module for all tasks and a separate output module for each task. We study the theory of this setting on linear and ReLU-activated models. Our key observation is that whether or not tasks’ data are well-aligned can significantly affect the performance of multi-task learning. We show that misalignment between task data can cause negative transfer (or hurt performance) and provide sufficient conditions for positive transfer. Inspired by the theoretical insights, we show that aligning tasks’ embedding layers leads to performance gains for multi-task training and transfer learning on the GLUE benchmark and sentiment analysis tasks; for example, we obtain a 2.35% GLUE score average improvement on 5 GLUE tasks over BERTLARGE using our alignment method. We also design an SVD-based task reweighting scheme and show that it improves the robustness of multi-task training on a multi-label image dataset.
[ { "affiliations": [], "name": "MULTI-TASK LEARNING" }, { "affiliations": [], "name": "Sen Wu" }, { "affiliations": [], "name": "Hongyang R. Zhang" } ]
[ { "authors": [ "Héctor Martı́nez Alonso", "Barbara Plank" ], "title": "When is multitask learning effective? semantic sequence prediction under varying data conditions", "venue": "arXiv preprint arXiv:1612.02251,", "year": 2016 }, { "authors": [ "Rie Kubota Ando", "Tong Zhang" ], "title": "A framework for learning predictive structures from multiple tasks and unlabeled data", "venue": "Journal of Machine Learning Research,", "year": 2005 }, { "authors": [ "Andreas Argyriou", "Andreas Maurer", "Massimiliano Pontil" ], "title": "An algorithm for transfer learning in a heterogeneous environment", "venue": "In Joint European Conference on Machine Learning and Knowledge Discovery in Databases,", "year": 2008 }, { "authors": [ "Maria-Florina Balcan", "Yingyu Liang", "David P Woodruff", "Hongyang Zhang" ], "title": "Matrix completion and related problems via strong duality", "venue": "In 9th Innovations in Theoretical Computer Science Conference (ITCS", "year": 2018 }, { "authors": [ "Peter L Bartlett", "Philip M Long", "Gábor Lugosi", "Alexander Tsigler" ], "title": "Benign overfitting in linear regression", "venue": "Proceedings of the National Academy of Sciences,", "year": 2020 }, { "authors": [ "Jonathan Baxter" ], "title": "A model of inductive bias learning", "venue": "Journal of artificial intelligence research,", "year": 2000 }, { "authors": [ "Mikhail Belkin", "Daniel Hsu", "Siyuan Ma", "Soumik Mandal" ], "title": "Reconciling modern machinelearning practice and the classical bias–variance trade-off", "venue": "Proceedings of the National Academy of Sciences,", "year": 2019 }, { "authors": [ "Shai Ben-David", "Reba Schuller" ], "title": "Exploiting task relatedness for multiple task learning", "venue": "In Learning Theory and Kernel Machines,", "year": 2003 }, { "authors": [ "Shai Ben-David", "John Blitzer", "Koby Crammer", "Alex Kulesza", "Fernando Pereira", "Jennifer Wortman Vaughan" ], "title": "A theory of learning from different domains", "venue": "Machine learning,", "year": 2010 }, { "authors": [ "Joachim Bingel", "Anders Søgaard" ], "title": "Identifying beneficial task relations for multi-task learning in deep neural networks", "venue": "arXiv preprint arXiv:1702.08303,", "year": 2017 }, { "authors": [ "John Blitzer", "Ryan McDonald", "Fernando Pereira" ], "title": "Domain adaptation with structural correspondence learning", "venue": "In Proceedings of the 2006 conference on empirical methods in natural language processing,", "year": 2006 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. 
Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Simon S Du", "Jason D Lee", "Yuandong Tian", "Barnabas Poczos", "Aarti Singh" ], "title": "Gradient descent learns one-hidden-layer cnn: Don’t be afraid of spurious local minima", "venue": "arXiv preprint arXiv:1712.00779,", "year": 2017 }, { "authors": [ "Simon S Du", "Wei Hu", "Sham M Kakade", "Jason D Lee", "Qi Lei" ], "title": "Few-shot learning via learning the representation, provably", "venue": "arXiv preprint arXiv:2002.09434,", "year": 2020 }, { "authors": [ "Ehsan Elhamifar", "Guillermo Sapiro", "S Shankar Sastry" ], "title": "Dissimilarity-based sparse subset selection", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2015 }, { "authors": [ "Theodoros Evgeniou", "Massimiliano Pontil" ], "title": "Regularized multi-task learning", "venue": "In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2004 }, { "authors": [ "Basura Fernando", "Amaury Habrard", "Marc Sebban", "Tinne Tuytelaars" ], "title": "Unsupervised visual domain adaptation using subspace alignment", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2013 }, { "authors": [ "Han Guo", "Ramakanth Pasunuru", "Mohit Bansal" ], "title": "Autosem: Automatic task selection and mixing in multi-task learning", "venue": "arXiv preprint arXiv:1904.04153,", "year": 2019 }, { "authors": [ "Trevor Hastie", "Robert Tibshirani", "Jerome Friedman", "James Franklin" ], "title": "The elements of statistical learning: data mining, inference and prediction", "venue": "The Mathematical Intelligencer,", "year": 2005 }, { "authors": [ "Minqing Hu", "Bing Liu" ], "title": "Mining and summarizing customer reviews", "venue": "In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2004 }, { "authors": [ "Alex Kendall", "Yarin Gal", "Roberto Cipolla" ], "title": "Multi-task learning using uncertainty to weigh losses for scene geometry and semantics", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Mikhail Khodak", "Maria-Florina Balcan", "Ameet Talwalkar" ], "title": "Provable guarantees for gradientbased meta-learning", "venue": "arXiv preprint arXiv:1902.10644,", "year": 2019 }, { "authors": [ "Yoon Kim" ], "title": "Convolutional neural networks for sentence classification", "venue": "arXiv preprint arXiv:1408.5882,", "year": 2014 }, { "authors": [ "Iasonas Kokkinos" ], "title": "Ubernet: Training a universal convolutional neural network for low-, mid-, and high-level vision using diverse datasets and limited memory", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Weihao Kong", "Raghav Somani", "Zhao Song", "Sham Kakade", "Sewoong Oh" ], "title": "Meta-learning for mixed linear regression", "venue": "arXiv preprint arXiv:2002.08936,", "year": 2020 }, { "authors": [ "Wouter M Kouw" ], "title": "An introduction to domain adaptation and transfer learning", "venue": "arXiv preprint arXiv:1812.11806,", "year": 2018 }, { "authors": [ "Tao Lei", "Yu Zhang", "Sida I Wang", "Hui Dai", "Yoav Artzi" ], "title": "Simple recurrent units for highly parallelizable recurrence", "venue": "In Proceedings of the 2018 Conference on Empirical 
Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Xin Li", "Dan Roth" ], "title": "Learning question classifiers", "venue": "In Proceedings of the 19th international conference on Computational linguistics-Volume", "year": 2002 }, { "authors": [ "Yuanzhi Li", "Tengyu Ma", "Hongyang Zhang" ], "title": "Algorithmic regularization in over-parameterized matrix sensing and neural networks with quadratic activations", "venue": "In Conference On Learning Theory,", "year": 2018 }, { "authors": [ "Yunsheng Li", "Nuno Vasconcelos" ], "title": "Efficient multi-domain learning by covariance normalization", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Xiaodong Liu", "Pengcheng He", "Weizhu Chen", "Jianfeng Gao" ], "title": "Multi-task deep neural networks for natural language understanding", "venue": "arXiv preprint arXiv:1901.11504,", "year": 2019 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "Roberta: A robustly optimized bert pretraining approach", "venue": "arXiv preprint arXiv:1907.11692,", "year": 2019 }, { "authors": [ "MM Mahmud", "Sylvian Ray" ], "title": "Transfer learning using kolmogorov complexity: Basic theory and empirical evaluations", "venue": "In Advances in neural information processing systems,", "year": 2008 }, { "authors": [ "Pasin Manurangsi", "Daniel Reichman" ], "title": "The computational complexity of training relu (s)", "venue": "arXiv preprint arXiv:1810.04207,", "year": 2018 }, { "authors": [ "Andreas Maurer" ], "title": "Bounds for linear multi-task learning", "venue": "Journal of Machine Learning Research,", "year": 2006 }, { "authors": [ "Bryan McCann", "Nitish Shirish Keskar", "Caiming Xiong", "Richard Socher" ], "title": "The natural language decathlon: Multitask learning as question answering", "venue": "arXiv preprint arXiv:1806.08730,", "year": 2018 }, { "authors": [ "Charles A Micchelli", "Massimiliano Pontil" ], "title": "Kernels for multi–task learning", "venue": "In Advances in neural information processing systems,", "year": 2005 }, { "authors": [ "Mike Mintz", "Steven Bills", "Rion Snow", "Dan Jurafsky" ], "title": "Distant supervision for relation extraction without labeled data", "venue": "In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume", "year": 2009 }, { "authors": [ "Ishan Misra", "Abhinav Shrivastava", "Abhinav Gupta", "Martial Hebert" ], "title": "Cross-stitch networks for multi-task learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Guillaume Obozinski", "Ben Taskar", "Michael I Jordan" ], "title": "Joint covariate selection and joint subspace selection for multiple classification problems", "venue": "Statistics and Computing,", "year": 2010 }, { "authors": [ "Sinno Jialin Pan", "Qiang Yang" ], "title": "A survey on transfer learning", "venue": "IEEE Transactions on knowledge and data engineering,", "year": 2009 }, { "authors": [ "Bo Pang", "Lillian Lee" ], "title": "A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts", "venue": "In Proceedings of the 42nd annual meeting on Association for Computational Linguistics,", "year": 2004 }, { "authors": [ "Bo 
Pang", "Lillian Lee" ], "title": "Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the 43rd annual meeting on association for computational linguistics, pages 115–124", "venue": "Association for Computational Linguistics,", "year": 2005 }, { "authors": [ "Anastasia Pentina", "Shai Ben-David" ], "title": "Multi-task and lifelong learning of kernels", "venue": "In International Conference on Algorithmic Learning Theory,", "year": 2015 }, { "authors": [ "Anastasia Pentina", "Christoph H Lampert" ], "title": "Multi-task learning with labeled and unlabeled tasks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Anastasia Pentina", "Viktoriia Sharmanska", "Christoph H Lampert" ], "title": "Curriculum learning of multiple tasks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2015 }, { "authors": [ "Pranav Rajpurkar", "Jeremy Irvin", "Kaylie Zhu", "Brandon Yang", "Hershel Mehta", "Tony Duan", "Daisy Ding", "Aarti Bagul", "Curtis Langlotz", "Katie Shpanskaya" ], "title": "Chexnet: Radiologist-level pneumonia detection on chest x-rays with deep learning", "venue": "arXiv preprint arXiv:1711.05225,", "year": 2017 }, { "authors": [ "Sebastian Ruder" ], "title": "An overview of multi-task learning in deep neural networks", "venue": "arXiv preprint arXiv:1706.05098,", "year": 2017 }, { "authors": [ "Changjian Shui", "Mahdieh Abbasi", "Louis-Émile Robitaille", "Boyu Wang", "Christian Gagné" ], "title": "A principled approach for learning task similarity in multitask learning", "venue": null, "year": 1903 }, { "authors": [ "Richard Socher", "Alex Perelygin", "Jean Wu", "Jason Chuang", "Christopher D Manning", "Andrew Ng", "Christopher Potts" ], "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "venue": "In Proceedings of the 2013 conference on empirical methods in natural language processing,", "year": 2013 }, { "authors": [ "Trevor Standley", "Amir R Zamir", "Dawn Chen", "Leonidas Guibas", "Jitendra Malik", "Silvio Savarese" ], "title": "Which tasks should be learned together in multi-task learning", "venue": null, "year": 1905 }, { "authors": [ "Alex Wang", "Amanpreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel Bowman" ], "title": "Glue: A multi-task benchmark and analysis platform for natural language understanding", "venue": "In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP,", "year": 2018 }, { "authors": [ "Alex Wang", "Amanpreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel R Bowman" ], "title": "Glue: A multi-task benchmark and analysis platform for natural language understanding", "venue": "arXiv preprint arXiv:1804.07461,", "year": 2018 }, { "authors": [ "Xiaosong Wang", "Yifan Peng", "Le Lu", "Zhiyong Lu", "Mohammadhadi Bagheri", "Ronald M Summers" ], "title": "Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Yu Wang", "David Wipf", "Qing Ling", "Wei Chen", "Ian James Wassell" ], "title": "Multi-task learning for subspace segmentation", "venue": null, "year": 2015 }, { "authors": [ "Janyce Wiebe", "Theresa Wilson", "Claire Cardie" ], "title": 
"Annotating expressions of opinions and emotions in language", "venue": "Language resources and evaluation,", "year": 2005 }, { "authors": [ "Ya Xue", "Xuejun Liao", "Lawrence Carin", "Balaji Krishnapuram" ], "title": "Multi-task learning for classification with dirichlet process priors", "venue": "Journal of Machine Learning Research,", "year": 2007 }, { "authors": [ "Amir R Zamir", "Alexander Sax", "William Shen", "Leonidas J Guibas", "Jitendra Malik", "Silvio Savarese" ], "title": "Taskonomy: Disentangling task transfer learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Hongyang Zhang", "Vatsal Sharan", "Moses Charikar", "Yingyu Liang" ], "title": "Recovery guarantees for quadratic tensors with limited observations", "venue": "In International Conference on Artificial Intelligence and Statistics (AISTATS),", "year": 2019 }, { "authors": [ "Yu Zhang", "Qiang Yang" ], "title": "A survey on multi-task learning", "venue": "arXiv preprint arXiv:1707.08114,", "year": 2017 }, { "authors": [ "Yu Zhang", "Dit-Yan Yeung" ], "title": "A regularization approach to learning task relationships in multitask learning", "venue": "ACM Transactions on Knowledge Discovery from Data (TKDD),", "year": 2014 }, { "authors": [ "Yuchen Zhang", "Tianle Liu", "Mingsheng Long", "Michael I Jordan" ], "title": "Bridging theory and algorithm for domain adaptation", "venue": "arXiv preprint arXiv:1904.05801,", "year": 2019 }, { "authors": [ "Balcan" ], "title": "2018), the only local minima of ‖CA − Z‖2", "venue": null, "year": 2018 }, { "authors": [ "Lei" ], "title": "2018), followed by a classification layer. • For the CNN model, we use the model proposed by Kim", "venue": null, "year": 2014 }, { "authors": [ "Liu" ], "title": "GLUE. For the experiments on the GLUE benchmark, we use a state-of-the-art language model called BERT (Devlin et al. (2018)). For each task, we add a classification/regression layer on top it as our model. For all the experiments, we use the BERTLARGE uncased model, which is a 24 layer network as described in Devlin et al. (2018)", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Multi-task learning has recently emerged as a powerful paradigm in deep learning to obtain language (Devlin et al. (2018); Liu et al. (2019a;b)) and visual representations (Kokkinos (2017)) from large-scale data. By leveraging supervised data from related tasks, multi-task learning approaches reduce the expensive cost of curating the massive per-task training data sets needed by deep learning methods and provide a shared representation which is also more efficient for learning over multiple tasks. While in some cases, great improvements have been reported compared to single-task learning (McCann et al. (2018)), practitioners have also observed problematic outcomes, where the performances of certain tasks have decreased due to task interference (Alonso and Plank (2016); Bingel and Søgaard (2017)). Predicting when and for which tasks this occurs is a challenge exacerbated by the lack of analytic tools. In this work, we investigate key components to determine whether tasks interfere constructively or destructively from theoretical and empirical perspectives. Based on these insights, we develop methods to improve the effectiveness and robustness of multi-task training.\nThere has been a large body of algorithmic and theoretical studies for kernel-based multi-task learning, but less is known for neural networks. The conceptual message from the earlier work (Baxter (2000); Evgeniou and Pontil (2004); Micchelli and Pontil (2005); Xue et al. (2007)) show that multi-task learning is effective over “similar” tasks, where the notion of similarity is based on the single-task models (e.g. decision boundaries are close). The work on structural correspondence learning (Ando and Zhang (2005); Blitzer et al. (2006)) uses alternating minimization to learn a shared parameter and separate task parameters. Zhang and Yeung (2014) use a parameter vector for each task and learn task relationships via l2 regularization, which implicitly controls the capacity of the model. These results are difficult to apply to neural networks: it is unclear how to reason about neural networks whose feature space is given by layer-wise embeddings.\nTo determine whether two tasks interfere constructively or destructively, we investigate an architecture with a shared module for all tasks and a separate output module for each task (Ruder (2017)). See Figure 1 for an illustration. Our motivating observation is that in addition to model similarity which affects the type of interference, task data similarity plays a second-order effect after controlling model similarity. To illustrate the idea, we consider three tasks with the same number of data ∗Equal contribution. Correspondence to {senwu,hongyang,chrismre}@cs.stanford.edu\nsamples where task 2 and 3 have the same decision boundary but different data distributions (see Figure 2 for an illustration). We observe that training task 1 with task 2 or task 3 can either improve or hurt task 1’s performance, depending on the amount of contributing data along the decision boundary! This observation shows that by measuring the similarities of the task data and the models separately, we can analyze the interference of tasks and attribute the cause more precisely.\nMotivated by the above observation, we study the theory of multi-task learning through the shared module in linear and ReLU-activated settings. 
Our theoretical contribution involves three components: the capacity of the shared module, task covariance, and the per-task weights of the training procedure. The capacity plays a fundamental role because, if the shared module’s capacity is too large, there is no interference between tasks; if it is too small, there can be destructive interference. Then, we show how to determine interference by proposing a more fine-grained notion called task covariance, which can be used to measure the alignment of task data. By varying task covariances, we observe both positive and negative transfers from one task to another! We then provide sufficient conditions which guarantee that one task can transfer positively to another task, provided with sufficiently many data points from the contributing task. Finally, we study how to assign per-task weights for settings where different tasks share the same data but have different labels.\nExperimental results. Our theory leads to the design of two algorithms of practical interest. First, we propose to align the covariances of the task embedding layers and present empirical evaluations on well-known benchmarks and tasks. On 5 tasks from the General Language Understanding Evaluation (GLUE) benchmark (Wang et al. (2018b)) trained with the BERTLARGE model of Devlin et al. (2018), our method improves the result of BERTLARGE by a 2.35% average GLUE score, which is the standard metric for the benchmark. Further, we show that our method is applicable to transfer learning settings; we observe up to 2.5% higher accuracy by transferring between six sentiment analysis tasks using the LSTM model of Lei et al. (2018).\nSecond, we propose an SVD-based task reweighting scheme to improve multi-task training for settings where different tasks have the same features but different labels. On the ChestX-ray14 dataset, we compare our method to the unweighted scheme and observe an improvement of 0.4% AUC score on average over all tasks. In conclusion, these evaluations confirm that our theoretical insights are applicable to a broad range of settings and applications." }, { "heading": "2 THREE COMPONENTS OF MULTI-TASK LEARNING", "text": "We study multi-task learning (MTL) models with a shared module for all tasks and a separate output module for each task. We ask: What are the key components that determine whether MTL is better than single-task learning (STL)? In response, we identify three components: model capacity, task covariance, and optimization scheme. After setting up the model, we briefly describe the role of model capacity. We then introduce the notion of task covariance, which comprises the bulk of the section. We finish by showing the implications of our results for choosing optimization schemes." }, { "heading": "2.1 MODELING SETUP", "text": "We are given k tasks. Let mi denote the number of data samples of task i. For task i, let Xi ∈ Rmi×d denote its covariates and let yi ∈ Rmi denote its labels, where d is the dimension of the data. We assume that all the tasks have the same input dimension d. This is not a restrictive assumption and is typically satisfied, e.g. for word embeddings in BERT, or by padding zeros to the input otherwise. Our model assumes the output label is 1-dimensional. We can also model a multi-label problem with k types of labels by having k tasks with the same covariates but different labels. We consider an MTL model with a shared module B ∈ Rd×r and a separate output module Ai ∈ Rr for task i, where r denotes the output dimension of B. 
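Before turning to the objective, here is a minimal PyTorch sketch of this shared-module architecture; the class and variable names, the dimensions, and the training loop are our own illustrative choices, and the linear shared module matches the linear setting analyzed below.

```python
import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    """Shared module B (d -> r) plus one scalar output head A_i per task."""
    def __init__(self, d, r, num_tasks, activation=nn.Identity()):
        super().__init__()
        self.B = nn.Linear(d, r, bias=False)     # shared representation B
        self.g = activation                       # identity (linear) or nn.ReLU()
        self.heads = nn.ModuleList(
            nn.Linear(r, 1, bias=False) for _ in range(num_tasks)  # the A_i's
        )

    def forward(self, x, task_id):
        return self.heads[task_id](self.g(self.B(x))).squeeze(-1)

# Toy usage: two regression tasks trained jointly through the shared module.
model = HardSharingMTL(d=16, r=4, num_tasks=2)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
data = [(torch.randn(32, 16), torch.randn(32)) for _ in range(2)]
for step in range(100):
    loss = sum(((model(X, i) - y) ** 2).mean() for i, (X, y) in enumerate(data))
    opt.zero_grad(); loss.backward(); opt.step()
```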
See Figure 1 for an illustration. We define the objective of finding an MTL model as minimizing the following equation over B and the Ai’s:\nf(A1, A2, . . . , Ak;B) = ∑ki=1 L(g(XiB)Ai, yi), (1)\nwhere L is a loss function such as the squared loss. The activation function g : R→ R is applied to every entry of XiB. In equation 1, all data samples contribute equally. Because of differences between tasks such as data size, it is natural to re-weight tasks during training:\nf(A1, A2, . . . , Ak;B) = ∑ki=1 αi · L(g(XiB)Ai, yi). (2)\nThis setup is an abstraction of the hard parameter sharing architecture (Ruder (2017)). The shared module B provides a universal representation (e.g., an LSTM for encoding sentences) for all tasks. Each task-specific module Ai is optimized for its own output. We focus on the two models below.\nThe single-task linear model. The labels y of each task follow a linear model with parameter θ ∈ Rd: y = Xθ + ε. Every entry of ε follows the normal distribution N(0, σ2) with variance σ2. The function g(XB) = XB. This is a well-studied setting for linear regression (Hastie et al. (2005)).\nThe single-task ReLU model. Denote by ReLU(x) = max(x, 0) for any x ∈ R. We also consider a non-linear model where Xθ goes through the ReLU activation function, with a ∈ R and θ ∈ Rd: y = a · ReLU(Xθ) + ε, where ReLU is applied to Xθ entrywise. The encoding function g(XB) then maps to ReLU(XB).\nPositive vs. negative transfer. For a source task and a target task, we say the source task transfers positively to the target task if training both through equation 1 improves over training just the target task (measured on its validation set). Negative transfer is the converse of positive transfer.\nProblem statement. Our goal is to analyze the three components that determine positive vs. negative transfer between tasks: model capacity (r), task covariances ({X>i Xi}ki=1), and the per-task weights ({αi}ki=1). We focus on regression tasks under the squared loss, but we also provide synthetic experiments on classification tasks to validate our theory.\nNotations. For a matrix X, its column span is the set of all linear combinations of the column vectors of X. Let X† denote its pseudoinverse. Given u, v ∈ Rd, cos(u, v) is equal to u>v/(‖u‖ · ‖v‖)." }, { "heading": "2.2 MODEL CAPACITY", "text": "We begin by revisiting the role of model capacity, i.e. the output dimension of B (denoted by r). We show that, as a rule of thumb, r should be smaller than the sum of the capacities of the STL modules.\nExample. Suppose we have k linear regression tasks using the squared loss; equation 1 becomes:\nf(A1, A2, . . . , Ak;B) = ∑ki=1 ‖XiBAi − yi‖2F. (3)\nThe optimal solution of equation 3 for task i is θi = (X>i Xi)†X>i yi ∈ Rd. Hence a capacity of 1 suffices for each task. We show that if r ≥ k, then there is no transfer between any two tasks.\nProposition 1. Let r ≥ k. There exist an optimal B∗ and {A∗i}ki=1 for equation 3 such that B∗A∗i = θi, for all i = 1, 2, . . . , k.\nTo illustrate the idea, as long as B∗ contains {θi}ki=1 in its column span, there exists A∗i such that B∗A∗i = θi, which is optimal for equation 3 with minimum error. But this means there is no transfer between any two tasks. This can hurt generalization if a task has limited data, in which case its STL solution overfits the training data, whereas the MTL solution can leverage other tasks’ data to improve generalization. The proof of Proposition 1 and its extension to ReLU settings are in Appendix A.1.
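A quick numerical check of Proposition 1 (the dimensions and data are illustrative choices of ours): stacking the single-task optima θi as the columns of B and letting each Ai select the i-th column matches the summed STL losses exactly, i.e., no transfer.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, m = 10, 3, 50

tasks = []
for _ in range(k):
    X = rng.normal(size=(m, d))
    theta = rng.normal(size=d)
    tasks.append((X, X @ theta + 0.1 * rng.normal(size=m)))

# Single-task optima theta_i = (X_i^T X_i)^+ X_i^T y_i
thetas = [np.linalg.pinv(X.T @ X) @ X.T @ y for X, y in tasks]
stl_loss = sum(np.sum((X @ t - y) ** 2) for (X, y), t in zip(tasks, thetas))

# MTL with r = k: B stacks the theta_i's, A_i selects the i-th column.
B = np.stack(thetas, axis=1)                 # d x k
A = np.eye(k)
mtl_loss = sum(np.sum((X @ B @ A[:, i] - y) ** 2)
               for i, (X, y) in enumerate(tasks))
print(f"STL total loss {stl_loss:.4f} == MTL loss {mtl_loss:.4f}")
```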
Algorithmic consequence. The implication is that limiting the shared module’s capacity is necessary to enforce information transfer. If the shared module is too small, then tasks may interfere negatively with each other. But if it is too large, then there may be no transfer between tasks. In Section 3.3, we verify the need to carefully choose the model capacity on a wide range of neural networks including CNN, LSTM and multi-layer perceptron." }, { "heading": "2.3 TASK COVARIANCE", "text": "To show how to quantify task data similarity, we illustrate with two regression tasks under the linear model without noise: y1 = X1θ1 and y2 = X2θ2. By Section 2.2, it is necessary to limit the capacity of the shared module to enforce information transfer. Therefore, we consider the case of r = 1. Hence, the shared module B is now a d-dimensional vector, and A1, A2 are both scalars.\nA natural requirement of task similarity is for the STL models to be similar, i.e. |cos(θ1, θ2)| to be large. To see this, the optimal STL model for task 1 is (X>1 X1)−1X>1 y1 = θ1. Hence if |cos(θ1, θ2)| is 1, then tasks 1 and 2 can share a model B ∈ Rd which is either θ1 or −θ1. The scalars A1 and A2 can then transform B to be equal to θ1 and θ2.\nIs this requirement sufficient? Recall that in equation 3, the task data X1 and X2 are both multiplied by B. If they are poorly “aligned” geometrically, the performance could suffer. How do we formalize the geometry of task alignment? In the following, we show that the covariance matrices of X1 and X2, which we define to be X>1 X1 and X>2 X2, capture the geometry. We fix |cos(θ1, θ2)| to be close to 1 to examine the effects of task covariances. In Appendix A.2.1 we fix the task covariances to examine the effects of model cosine similarity. Concretely, equation 3 reduces to:\nmax B∈Rd h(B) = 〈X1B/‖X1B‖, y1〉2 + 〈X2B/‖X2B‖, y2〉2, (4)\nwhere we apply the first-order optimality condition on A1 and A2 and simplify the equation. Specifically, we focus on a scenario where task 1 is the source and task 2 is the target. Our goal is to determine when the source transfers to the target positively or negatively in MTL. Determining the type of transfer from task 2 to task 1 can be done similarly. Answering the question boils down to studying the angle, or cosine similarity, between the optimum of equation 4 and θ2.\nExample. In Figure 3, we show that by varying task covariances and the number of samples, we can observe both positive and negative transfers. The conceptual message is the same as Figure 2; we describe the data generation process in more detail. We use 3 tasks and measure the type of transfer from the source to the target. The x-axis is the number of data samples from the source. The y-axis is the target’s performance improvement, measured on its validation set, of MTL minus STL.\nAlgorithm 1 Covariance alignment for multi-task training\nRequire: Task embedding layers X1 ∈ Rm1×d, X2 ∈ Rm2×d, . . . , Xk ∈ Rmk×d, shared module B\nParameter: Alignment matrices R1, R2, . . . , Rk ∈ Rd×d and output modules A1, A2, . . . , Ak ∈ Rr\n1: Let Zi = XiRi, for 1 ≤ i ≤ k. Consider the following modified loss (with B being fixed): f̂(A1, . . . , Ak;R1, . . . , Rk) = ∑ki=1 L(g(ZiB)Ai, yi) = ∑ki=1 L(g(XiRiB)Ai, yi)\n2: Minimize f̂ by alternately applying a gradient descent update on Ai and Ri, given a sampled data batch from task i. Other implementation details are described in Appendix B.3.\nData generation. We have |cos(θ1, θ2)| ≈ 1 (say 0.96). 
For i ∈ {1, 2, 3}, let Ri ∈ Rmi×d denote a random Gaussian matrix with entries drawn from N(0, 1). Let S1, S2 ⊆ {1, 2, . . . , d} be two disjoint sets of size d/10. For i = 1, 2, let Di be a diagonal matrix whose entries are equal to a large value κ (e.g. κ = 100) for coordinates in Si and 1 otherwise. Let Qi ∈ Rd×d denote an orthonormal matrix, i.e. Q>i Qi is equal to the identity matrix, orthogonalized from a random Gaussian matrix.\nThen, we define the 3 tasks as follows. (i) Task 1 (target): X1 = R1Q1D1 and y1 = X1θ1. (ii) Task 2 (source task for the red line): X2 = R2Q1D1 and y2 = X2θ2. (iii) Task 3 (source task for the green line): X3 = R3Q2D2 and y3 = X3θ2. Tasks 1 and 2 have the same covariance matrix, but tasks 1 and 3 have different covariance matrices. Intuitively, the signals of tasks 1 and 3 lie in different subspaces, which arise from the difference in the diagonals of the Di and the orthonormal matrices.\nAnalysis. Unless the source task has many samples to estimate θ2 — much more than the samples needed to estimate only the coordinates of S1 — the effect of transferring to the target is small. We observe similar results for logistic regression tasks and for ReLU-activated regression tasks.\nTheory. We rigorously quantify how many data points are needed to guarantee positive transfer. The folklore in MTL is that when a source task has a lot of data but the related target task has limited data, then the source can often transfer positively to the target task. Our previous example shows that by varying the source’s number of samples and its covariance, we can observe both types of transfer. How much data do we need from the source to guarantee a positive transfer to the target? We show that this depends on the condition numbers of both tasks’ covariances.\nTheorem 2 (informal). For i = 1, 2, let yi = Xiθi + εi denote two linear regression tasks with parameters θi ∈ Rd and mi samples. Suppose that each row of the source task X1 is drawn independently from a distribution with covariance Σ1 ∈ Rd×d and bounded l2-norm. Let c = κ(X2) sin(θ1, θ2) and assume that c ≤ 1/3. Denote by (B∗, A∗1, A∗2) the optimal MTL solution. With high probability, when m1 is at least on the order of (κ2(Σ1) · κ4(X2) · ‖y2‖2)/c4, we have\n‖B∗A∗2 − θ2‖/‖θ2‖ ≤ 6c + (1/(1− 3c)) · ‖ε2‖/‖X2θ2‖. (5)\nRecall that for a matrix X, κ(X) denotes its condition number. Theorem 2 quantifies the trend in Figure 3, where the improvement for task 2 reaches a plateau when m1 becomes large enough.\nThe parameter c here indicates how similar the two tasks are. The smaller sin(θ1, θ2) is, the smaller c is. As an example, if sin(θ1, θ2) ≤ δ/κ(X2) for some δ, then equation 5 is at most O(δ) + ‖ε2‖/‖X2θ2‖.1 The formal statement, its proof, and a discussion of the assumptions are deferred to Appendix A.2.2.\nThe ReLU model. We show a similar result for the ReLU model, which requires resolving the challenge of analyzing the ReLU function. We use a geometric characterization of the ReLU function under distributional input assumptions by Du et al. (2017). The result is deferred to Appendix A.2.3.\n1The estimation error of θ2 is upper bounded by task 2’s signal-to-noise ratio ‖ε2‖/‖X2θ2‖. This dependence arises because the linear component A∗2 fits the projection of y2 onto X2B∗. So even if B∗ is equal to θ2, there could still be an estimation error from A∗2, which cannot be estimated from task 1’s data.
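The data generation process described above (for Figure 3) can be sketched as follows. The dimensions, sample counts, and the way we construct θ2 so that |cos(θ1, θ2)| is close to 1 are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, kappa = 100, 100.0
m = {1: 200, 2: 2000, 3: 2000}            # per-task sample counts (our choice)

def orthonormal(d):
    return np.linalg.qr(rng.normal(size=(d, d)))[0]

Q1, Q2 = orthonormal(d), orthonormal(d)
S1 = np.arange(d // 10)                   # disjoint coordinate sets S1, S2
S2 = np.arange(d // 10, 2 * (d // 10))
D1 = np.ones(d); D1[S1] = kappa
D2 = np.ones(d); D2[S2] = kappa

theta1 = rng.normal(size=d); theta1 /= np.linalg.norm(theta1)
theta2 = theta1 + 0.25 * rng.normal(size=d) / np.sqrt(d)
theta2 /= np.linalg.norm(theta2)
print("cos(theta1, theta2) =", float(theta1 @ theta2))   # close to 1

# Tasks 1 and 2 share the covariance (via Q1, D1); task 3 lives in another subspace.
X1 = rng.normal(size=(m[1], d)) @ Q1 @ np.diag(D1); y1 = X1 @ theta1
X2 = rng.normal(size=(m[2], d)) @ Q1 @ np.diag(D1); y2 = X2 @ theta2
X3 = rng.normal(size=(m[3], d)) @ Q2 @ np.diag(D2); y3 = X3 @ theta2
```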
Algorithm 2 An SVD-based task re-weighting scheme\nInput: k tasks: (X, yi) ∈ (Rm×d, Rm); a rank parameter r ∈ {1, 2, . . . , k}\nOutput: A weight vector {α1, α2, . . . , αk}\n1: Let θi = X>yi.\n2: Ur, Dr, Vr = SVDr(θ1, θ2, . . . , θk), i.e. the best rank-r approximation to the θi’s.\n3: Let αi = ‖θ>i Ur‖, for i = 1, 2, . . . , k.\nAlgorithmic consequence. An implication of our theory is a covariance alignment method to improve multi-task training. For the i-th task, we add an alignment matrix Ri before its input Xi passes through the shared module B. Algorithm 1 shows the procedure.\nWe also propose a metric called the covariance similarity score to measure the similarity between two tasks. Given X1 ∈ Rm1×d and X2 ∈ Rm2×d, we measure their similarity in three steps: (a) Compute the covariance matrix X>1 X1. (b) Find its best rank-r1 approximation U1,r1D1,r1U>1,r1, where r1 is chosen to contain 99% of the singular values. (c) Apply steps (a), (b) to X2 and compute the score:\nCovariance similarity score := ‖(U1,r1D1,r1^{1/2})>U2,r2D2,r2^{1/2}‖F / (‖U1,r1D1,r1^{1/2}‖F · ‖U2,r2D2,r2^{1/2}‖F). (6)\nA nice property of the score is that it is invariant to rotations of the columns of X1 and X2." }, { "heading": "2.4 OPTIMIZATION SCHEME", "text": "Lastly, we consider the effect of re-weighting the tasks (or their losses in equation 2). When does reweighting the tasks help? In this part, we show a use case for improving the robustness of multi-task training in the presence of label noise. Settings involving label noise can arise when some tasks only have weakly-supervised labels, which have been studied before in the literature (e.g. Mintz et al. (2009); Pentina and Lampert (2017)). We start by describing a motivating example.\nConsider two tasks where task 1 is y1 = Xθ and task 2 is y2 = Xθ + ε2. If we train the two tasks together, the error ε2 will add noise to the trained model. However, by up-weighting task 1, we reduce the noise from task 2 and get better performance. To rigorously study the effect of task weights, we consider a setting where all the tasks have the same data but different labels. This setting arises, for example, in multi-label image tasks. We derive the optimal solution in the linear model.\nProposition 3. Let the shared module have capacity r ≤ k. Given k tasks with the same covariates X ∈ Rm×d but different labels {yi}ki=1, let X be full rank and UDV> be its SVD. Let QrQ>r be the best rank-r approximation to ∑ki=1 αiU>yiy>i U. Let B∗ ∈ Rd×r be an optimal solution for the re-weighted loss. Then the column span of B∗ is equal to the column span of (X>X)−1V DQr.\nWe can also extend Proposition 3 to show that all local minima of equation 3 are global minima in the linear setting. We leave the proof to Appendix A.3. We remark that this result does not extend to the non-linear ReLU setting and leave this for future work.\nBased on Proposition 3, we can make the previous example rigorous. Suppose that X is full rank; then (X>X)†X>[α1y1, α2y2] = [α1θ, α2θ + α2(X>X)−1X>ε2]. Hence, when we increase α1, cos(B∗, θ) moves closer to 1.\nAlgorithmic consequence. Inspired by our theory, we describe a re-weighting scheme for the presence of label noise. We compute the per-task weights by computing the SVD over X>yi, for 1 ≤ i ≤ k. The intuition is that if the label vector yi of a task is noisy, then its projection onto the shared principal directions is small. Therefore, we would like to design a procedure that suppresses the noise. The SVD procedure does this, where the weight of a task is calculated from its projection onto the principal r directions. See Algorithm 2 for the description.
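A direct numpy transcription of Algorithm 2 follows; only the toy usage data is invented, and the function name is ours.

```python
import numpy as np

def svd_task_weights(X, ys, r):
    """Algorithm 2: weight task i by the norm of theta_i's projection
    onto the top-r left singular directions of [theta_1 ... theta_k]."""
    thetas = np.stack([X.T @ y for y in ys], axis=1)   # d x k, theta_i = X^T y_i
    U, _, _ = np.linalg.svd(thetas, full_matrices=False)
    Ur = U[:, :r]                                      # top-r left singular vectors
    return [float(np.linalg.norm(theta @ Ur)) for theta in thetas.T]

# Toy usage: task 2 is a noisier copy of task 1's labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
theta = rng.normal(size=20)
y1 = X @ theta
y2 = X @ theta + 2.0 * rng.normal(size=200)
print(svd_task_weights(X, [y1, y2], r=1))
```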
}, { "heading": "3 EXPERIMENTS", "text": "We describe connections between our theoretical results and practical problems of interest. We show three claims on real world datasets. (i) The shared MTL module is best performing when its capacity is smaller than the total capacities of the single-task models. (ii) Our proposed covariance alignment method improves multi-task training on a variety of settings including the GLUE benchmarks and six sentiment analysis tasks. Our method can be naturally extended to transfer learning settings and we validate this as well. (iii) Our SVD-based reweighed scheme is more robust than the standard unweighted scheme on multi-label image classification tasks in the presence of label noise." }, { "heading": "3.1 EXPERIMENTAL SETUP", "text": "Datasets and models. We describe the datasets and models we use in the experiments.\nGLUE: GLUE is a natural language understanding dataset including question answering, sentiment analysis, text similarity and textual entailment problems. We choose BERTLARGE as our model, which is a 24 layer transformer network from Devlin et al. (2018). We use this dataset to evaluate how Algorithm 1 works on the state-of-the-art BERT model.\nSentiment Analysis: This dataset includes six tasks: movie review sentiment (MR), sentence subjectivity (SUBJ), customer reviews polarity (CR), question type (TREC), opinion polarity (MPQA), and the Stanford sentiment treebank (SST) tasks.\nFor each task, the goal is to categorize sentiment opinions expressed in the text. We use an embedding layer (with GloVe embeddings2) followed by an LSTM layer proposed by Lei et al. (2018)3.\nChestX-ray14: This dataset contains 112,120 frontal-view X-ray images and each image has up to 14 diseases. This is a 14-task multi-label image classification problem. We use the CheXNet model from Rajpurkar et al. (2017), which is a 121-layer convolutional neural network on all tasks.\nFor all models, we share the main module across all tasks (BERTLARGE for GLUE, LSTM for sentiment analysis, CheXNet for ChestX-ray14) and assign a separate regression or classification layer on top of the shared module for each tasks.\nComparison methods. For the experiment on multi-task training, we compare Algorithm 1 by training with our method and training without it. Specifically, we apply the alignment procedure on the task embedding layers. See Figure 4 for an illustration, where Ei denotes the embedding of task i, Ri denotes its alignment module and Zi = EiRi is the rotated embedding.\nFor transfer learning, we first train an STL model on the source task by tuning its model capacity (e.g. the output dimension of the LSTM layer). Then, we fine-tune the STL model on the target task for 5-10 epochs. To apply Algorithm 1, we add an alignment module for the target task during fine-tuning.\nFor the experiment on reweighted schemes, we compute the per-task weights as described in Algorithm 2. Then, we reweight the loss function as in equation 2. We compare with the reweighting techniques of Kendall et al. (2018). Informally, the latter uses Gaussian likelihood to model classi-\n2http://nlp.stanford.edu/data/wordvecs/glove.6B.zip 3We also tested with multi-layer perceptron and CNN. The results are similar (cf. Appendix B.5).\nfication outputs. The weights, defined as inversely proportional to the variances of the Gaussian, are optimized during training. We also compare with the unweighted loss (cf. equation 1) as a baseline.\nMetric. 
Metric. We measure performance on the GLUE benchmark using a standard metric called the GLUE score, which contains accuracy and correlation scores for each task. For the sentiment analysis tasks, we measure the accuracy of predicting the sentiment opinion. For the image classification task, we measure the area under the curve (AUC) score. We run five different random seeds and report the average results. The result of an MTL experiment is averaged over the results of all the tasks, unless specified otherwise.\nFor the training procedures and other details of the setup, we refer the reader to Appendix B." }, { "heading": "3.2 EXPERIMENTAL RESULTS", "text": "We present use cases of our methods on open-source datasets. We expected to see improvements via our methods in multi-task and other settings, and indeed we saw such gains across a variety of tasks.\nImproving multi-task training. We apply Algorithm 1 on five tasks (CoLA, MRPC, QNLI, RTE, SST-2) from the GLUE benchmark using the state-of-the-art language model BERTLARGE.4 We train the output layers {Ai} and the alignment layers {Ri} using our algorithm. We compare the average performance over all five tasks and find that our method outperforms BERTLARGE by a 2.35% average GLUE score on the five tasks. For the particular setting of training two tasks, our method outperforms BERTLARGE on 7 of the 10 task pairs. See Figure 5a for the results.\nImproving transfer learning. While our study has focused on multi-task learning, transfer learning is a naturally related goal – and we find that our method is also useful in this case. We validate this by training an LSTM on sentiment analysis. Figure 5b shows the results with SST as the source task and each of the rest as the target task. Algorithm 1 improves accuracy on four tasks by up to 2.5%.\nReweighting training for the same task covariates. We evaluate Algorithm 2 on the ChestX-ray14 dataset. This setting satisfies the assumption of Algorithm 2, which requires different tasks to have the same input data. Across all 14 tasks, we find that our reweighting method improves the technique of Kendall et al. (2018) by a 0.1% AUC score. Compared to training with the unweighted loss, our method improves performance by a 0.4% AUC score over all tasks.\n4https://github.com/google-research/bert" }, { "heading": "3.3 ABLATION STUDIES", "text": "Model capacity. We verify our hypothesis that the capacity of the MTL model should not exceed the total capacity of the STL models. We show this on an LSTM model with the sentiment analysis tasks. Recall that the capacity of an LSTM model is its output dimension (before the last classification layer). We train an MTL model with all tasks and vary the shared module’s capacity from 5 to 500 to find the optimum. Similarly, we train an STL model for each task and find its optimum.\nIn Figure 1, we find that the performance of MTL peaks when the shared module has capacity 100. This is much smaller than the total capacity of all the STL models. The result confirms that constraining the shared module’s capacity is crucial to achieving the ideal performance. Extended results on CNN/MLP supporting our hypothesis are shown in Appendix B.5.\nTask covariance. We apply our covariance similarity score from Section 2.3 to provide an in-depth study of the covariance alignment method. The hypothesis is that: (a) aligning the covariances helps, which we have shown in Figure 5a; (b) the similarity score between two tasks increases after applying the alignment. A sketch of the score computation is shown below. 
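Here is a sketch of the covariance similarity score from equation 6; we read the "99% of the singular values" rule as cumulative spectral mass, which is an interpretation of ours, and the demo data is invented. The second print demonstrates the rotation invariance claimed in Section 2.3 (rotating the columns of X1 leaves X1>X1, and hence the score, unchanged).

```python
import numpy as np

def covariance_similarity(X1, X2, energy=0.99):
    """Covariance similarity score of equation 6."""
    def factor(X):
        # Eigendecomposition of the covariance X^T X; keep the top-r
        # directions holding `energy` of the spectral mass, and return
        # the factor U_r D_r^{1/2}.
        w, U = np.linalg.eigh(X.T @ X)
        w = np.clip(w[::-1], 0.0, None)      # descending, guard tiny negatives
        U = U[:, ::-1]
        r = int(np.searchsorted(np.cumsum(w) / w.sum(), energy)) + 1
        return U[:, :r] * np.sqrt(w[:r])
    F1, F2 = factor(X1), factor(X2)
    return np.linalg.norm(F1.T @ F2) / (np.linalg.norm(F1) * np.linalg.norm(F2))

rng = np.random.default_rng(0)
X1 = rng.normal(size=(500, 32)) * np.linspace(1.0, 10.0, 32)   # anisotropic task
X2 = rng.normal(size=(400, 32))
Q, _ = np.linalg.qr(rng.normal(size=(500, 500)))               # column rotation
print(covariance_similarity(X1, X2))        # score between two covariances
print(covariance_similarity(Q @ X1, X2))    # identical: the score is invariant
```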
We verify the hypothesis on the sentiment analysis tasks. We use the single-task model’s embedding before the LSTM layer to compute the covariance.\nFirst, we measure the similarity score using equation 6 between all six single-task models. Then, for each task pair, we train an MTL model using Algorithm 1. We measure the similarity score on the trained MTL model. Our results confirm the hypothesis (Figure 6): (a) we observe increased accuracy on 13 of the 15 task pairs, by up to 4.1%; (b) the similarity score increases for all 15 task pairs.\nOptimization scheme. We verify the robustness of Algorithm 2. After selecting two tasks from the ChestX-ray14 dataset, we test our method by assigning random labels to 20% of the data of one task. The labels for the other task remain unchanged.\nOn 10 randomly selected pairs, our method improves over the unweighted scheme by an average 1.0% AUC score and over the technique of Kendall et al. (2018) by an average 0.4% AUC score. We include more details of this experiment in Appendix B.5." }, { "heading": "4 RELATED WORK", "text": "There has been a large body of recent work on using the multi-task learning approach to train deep neural networks. Liu et al. (2019a); McCann et al. (2018) and subsequent follow-up work show state-of-the-art results on the GLUE benchmark, which inspired our study of an abstraction of the MTL model. The recent work of Zamir et al. (2018); Standley et al. (2019) answers which visual tasks to train together via a heuristic that involves intensive computation. We discuss several lines of study related to this work. For complete references, we refer the interested reader to the surveys of Ruder (2017); Zhang and Yang (2017), and to the surveys on domain adaptation and transfer learning by Pan and Yang (2009); Kouw (2018).\nTheoretical studies of multi-task learning. Of particular relevance to this work are those that study the theory of multi-task learning. The earlier works of Baxter (2000); Ben-David and Schuller (2003) are among the first to formally study the importance of task relatedness for learning multiple tasks. See also the follow-up work of Maurer (2006), which studies generalization bounds for MTL.\nA closely related line of work to structural learning is subspace selection, i.e. how to select a common subspace for multiple tasks. Examples from this line of work include Obozinski et al. (2010); Wang et al. (2015); Fernando et al. (2013); Elhamifar et al. (2015). Evgeniou and Pontil (2004); Micchelli and Pontil (2005) study a formulation that extends support vector machines to the multi-task setting. See also Argyriou et al. (2008); Pentina et al. (2015); Pentina and Ben-David (2015); Pentina and Lampert (2017), which provide more refined optimization methods and further study. The work of Ben-David et al. (2010) provides theories to measure the differences between source and target tasks for transfer learning in a different model setup. Khodak et al. (2019); Kong et al. (2020); Du et al. (2020) consider the related meta-learning setting, which is in spirit an online setting of multi-task learning.\nOur result on restricting the model capacities for multi-task learning is in contrast with recent theoretical studies on over-parametrized models (e.g. Li et al. (2018); Zhang et al. (2019a); Bartlett et al. (2020)), where the model capacities are usually much larger than in the regime we consider here. 
It would be interesting to better understand multi-task learning in the context of over-parametrized models with respect to other phenomena, such as the double descent that has been observed in other contexts (Belkin et al. (2019)).\nFinally, Zhang et al. (2019b); Shui et al. (2019) consider multi-task learning from the perspective of adversarial robustness. Mahmud and Ray (2008) consider using Kolmogorov complexity to measure the effectiveness of transfer learning for decision tree methods.\nHard parameter sharing vs soft parameter sharing. The architecture that we study in this work is also known as the hard parameter sharing architecture. There is another kind of architecture called soft parameter sharing. The idea is that each task has its own parameters and modules, and the relationships between these parameters are regularized to encourage them to be similar. Other architectures that have been studied before include the work of Misra et al. (2016), where the authors explore trainable architectures for convolutional neural networks.\nDomain adaptation. Another closely related line of work is on domain adaptation. The acute reader may notice the similarity between our study in Section 2.3 and domain adaptation. The crucial difference here is that we are minimizing the multi-task learning objective, whereas in domain adaptation the objective is typically to minimize the objective on the target task. See Ben-David et al. (2010); Zhang et al. (2019b) and the references therein for other related work.\nOptimization techniques. Guo et al. (2019) use ideas from the multi-armed bandit literature to develop a method for weighting each task. Compared to their method, our SVD-based method is conceptually simpler and requires much less computation. Kendall et al. (2018) derive a weighted loss scheme by maximizing a Gaussian likelihood function. Roughly speaking, each task is reweighted by 1/σ2, where σ is the standard deviation of the Gaussian, and a penalty of log σ is added to the loss. The values of {σi}i are also optimized during training. The exact details can be found in the paper. The very recent work of Li and Vasconcelos (2019) shows empirical results using a similar idea of covariance normalization on imaging tasks for cross-domain transfer." }, { "heading": "5 CONCLUSIONS AND FUTURE WORK", "text": "We studied the theory of multi-task learning in linear and ReLU-activated settings. We verified our theory and its practical implications through extensive synthetic and real-world experiments.\nOur work opens up many interesting future questions. First, could we extend the guarantees for choosing optimization schemes to non-linear settings? Second, a limitation of our SVD-based optimization scheduler is that it only applies to settings with the same data. Could we extend the method to heterogeneous task data? More broadly, we hope our work inspires further studies to better understand multi-task learning in neural networks and to guide its practice.\nAcknowledgements. Thanks to Sharon Y. Li and Avner May for stimulating discussions during early stages of this work. We are grateful to the Stanford StatsML group and the anonymous referees for providing helpful comments that improved the quality of this work. We gratefully acknowledge the support of DARPA under Nos. FA87501720095 (D3M), FA86501827865 (SDH), and FA86501827882 (ASED); NIH under No. U54EB020405 (Mobilize), NSF under Nos. CCF1763315 (Beyond Sparsity), CCF1563078 (Volume to Velocity), and 1937301 (RTML); ONR under No. 
N000141712266 (Unifying Weak Supervision); the Moore Foundation, NXP, Xilinx, LETI-CEA, Intel, IBM, Microsoft, NEC, Toshiba, TSMC, ARM, Hitachi, BASF, Accenture, Ericsson, Qualcomm, Analog Devices, the Okawa Foundation, American Family Insurance, Google Cloud, Swiss Re, and members of the Stanford DAWN project: Teradata, Facebook, Google, Ant Financial, NEC, VMWare, and Infosys. H. Zhang is supported in part by Gregory Valiant’s ONR YIP award (#1704417). The experiments are partly run on Stanford’s SOAL cluster.5 The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views, policies, or endorsements, either expressed or implied, of DARPA, NIH, ONR, or the U.S. Government.\n5https://5harad.com/soal-cluster/" }, { "heading": "A MISSING DETAILS OF SECTION 2", "text": "We fill in the missing details left from Section 2. In Section A.1, we provide rigorous arguments regarding the capacity of the shared module. In Section A.2, we fill in the details left from Section 2.3, including the proof of Theorem 2 and its extension to the ReLU model. In Section A.3, we provide the proof of Proposition 3 on the task reweighting schemes. We first describe the notations.\nNotations. We define the notations to be used later on. We denote f(x) ≲ g(x) if there exists an absolute constant C such that f(x) ≤ Cg(x). The big-O notation f(x) = O(g(x)) means that f(x) ≲ g(x).\nSuppose A ∈ Rm×n; then λmax(A) denotes its largest singular value and λmin(A) denotes its min{m, n}-th largest singular value. Alternatively, we have λmin(A) = min x:‖x‖=1 ‖Ax‖. Let κ(A) = λmax(A)/λmin(A) denote the condition number of A. Let Id denote the identity matrix. Let U† denote the Moore-Penrose pseudo-inverse of the matrix U. Let ‖ · ‖ denote the Euclidean norm for vectors and the spectral norm for matrices. Let ‖ · ‖F denote the Frobenius norm of a matrix. Let 〈A, B〉 = Tr(A>B) denote the inner product of two matrices.\nThe sine function is defined as sin(u, v) = √(1 − cos(u, v)2), where we assume that sin(u, v) ≥ 0, which is without loss of generality for our study." }, { "heading": "A.1 MISSING DETAILS OF SECTION 2.2", "text": "We describe the full details showing that our model setup captures the phenomenon that the shared module should be smaller than the sum of the capacities of the single-task models. We state the following proposition, which shows that the quality of the subspace B in equation 1 determines the performance of multi-task learning. This supplements the result of Proposition 1.\nProposition 4. In the optimum of f(·) (equation 1), each Ai selects the vector v within the column span of gB(Xi) that minimizes L(v, yi). As a corollary, in the linear setting, the optimal B can be achieved at a rotation matrix B∗ ∈ Rd×r by maximizing\n∑ki=1 〈B(B>X>i XiB)†B>, X>i yiy>i Xi〉. (7)\nFurthermore, any B∗ which contains {θi}ki=1 in its column subspace is optimal. In particular, for such a B∗, there exist {A∗i} so that B∗A∗i = θi for all 1 ≤ i ≤ k.\nProof. Recall the MTL objective in the linear setting from equation 3:\nmin f(A1, A2, . . . , Ak;B) = ∑ki=1 ‖XiBAi − yi‖2F.\nNote that the linear layer Ai can pick any combination within the subspace of B. Therefore, we can assume without loss of generality that B is a rotation matrix, i.e. B>B = Id. 
Proof. Recall the MTL objective in the linear setting from equation 3:\nmin f(A_1, A_2, . . . , A_k; B) = ∑_{i=1}^k ‖X_iBA_i − y_i‖².\nNote that the linear layer A_i can pick any combination within the subspace of B. Therefore, we can assume without loss of generality that B is a rotation matrix, i.e., B^⊤B = I_d. After fixing B, since the objective f(·) is linear in A_i for all i, by the local optimality condition we obtain that\nA_i = (B^⊤X_i^⊤X_iB)^†B^⊤X_i^⊤y_i.\nSubstituting this solution for A_i into f(·), we obtain an objective over B:\nh(B) = ∑_{i=1}^k ‖X_iB(B^⊤X_i^⊤X_iB)^†B^⊤X_i^⊤y_i − y_i‖²_F.\nNext, note that\n‖X_iB(B^⊤X_i^⊤X_iB)^†B^⊤X_i^⊤y_i‖²_F = Tr(y_i^⊤X_iB(B^⊤X_i^⊤X_iB)^†B^⊤X_i^⊤y_i) = 〈B(B^⊤X_i^⊤X_iB)^†B^⊤, X_i^⊤y_iy_i^⊤X_i〉,\nwhere we used the fact that A^†AA^† = A^† for A = B^⊤X_i^⊤X_iB in the first equation. Hence we have shown equation 7.\nFor the final claim, as long as B^⋆ contains {θ_i}_{i=1}^k in its column subspace, there exist A_i^⋆ such that B^⋆A_i^⋆ = θ_i. Such B^⋆ and {A_i^⋆}_{i=1}^k are optimal solutions because each θ_i is an optimal solution for the single-task problem.\nThe above result on linear regression suggests the intuition that optimizing an MTL model reduces to optimizing over the span of B. The intuition can be easily extended to linear classification tasks as well as mixtures of regression and classification tasks.\nExtension to the ReLU setting. If the shared module’s capacity is larger than the total capacity of the STL models, then we can put all the STL model parameters into the shared module. As in the linear setting, the final output layer A_i can pick out the optimal parameter for the i-th task. This remains an optimal solution to the MTL problem in the ReLU setting. Furthermore, there is no transfer between any two tasks through the shared module." }, { "heading": "A.2 MISSING DETAILS OF SECTION 2.3", "text": "" }, { "heading": "A.2.1 THE EFFECT OF COSINE SIMILARITY", "text": "We consider the effect of varying the cosine similarity between single-task models in multi-task learning. We first state the following proposition, which solves the multi-task learning objective when the covariances of the task data are the same. The idea is similar to the work of Ando and Zhang (2005) and we adapt it here for our study. Proposition 5. Consider the reweighted loss of equation 2 with the encoding function being linear, where the weights are {α_i}_{i=1}^k. Suppose the task features of every task have the same covariance: X_i^⊤X_i = Σ for all 1 ≤ i ≤ k. Let Σ = V D V^⊤ be the singular value decomposition (SVD) of Σ. Then the optimum of f(·) in equation 3 is achieved at\nB^⋆ = V D^{−1/2}C^⋆,\nwhere C^⋆C^{⋆⊤} is the best rank-r approximation subspace of ∑_{i=1}^k α_iU_i^⊤y_iy_i^⊤U_i and X_i = U_iDV^⊤ is the SVD of X_i, for each 1 ≤ i ≤ k (note that D denotes the singular values of Σ in the former display and those of X_i in the latter).\nAs a corollary, denote by λ_1, λ_2, . . . , λ_k the singular values of D^{−1}V^⊤∑_{i=1}^k α_iX_i^⊤y_iy_i^⊤X_i in decreasing order. Then the difference between an MTL model with hidden dimension r and all the single-task models is bounded by ∑_{i=r+1}^k λ_i².\nProof. Note that B^⋆ is obtained by maximizing\n∑_{i=1}^k 〈B(B^⊤X_i^⊤X_iB)^{−1}B^⊤, α_iX_i^⊤y_iy_i^⊤X_i〉.\nLet C = DV^⊤B. Clearly, there is a one-to-one mapping between B and C, and we have B = V D^{−1}C. Hence the above is equivalent to maximizing over C ∈ R^{d×r}\n∑_{i=1}^k 〈C(C^⊤C)^{−1}C^⊤, D^{−1}V^⊤(∑_{i=1}^k α_iX_i^⊤y_iy_i^⊤X_i)V D^{−1}〉 = 〈C(C^⊤C)^{−1}C^⊤, ∑_{i=1}^k α_iU_i^⊤y_iy_i^⊤U_i〉.\nNote that C(C^⊤C)^{−1}C^⊤ is a projection matrix onto a subspace of dimension r. Hence the maximum (denote it by C^⋆) is attained at the best rank-r approximation subspace of ∑_{i=1}^k α_iU_i^⊤y_iy_i^⊤U_i.\nTo illustrate the above proposition, consider a simple setting where X_i is the identity for every 1 ≤ i ≤ k, and y_i = e_i, i.e., the i-th basis vector. Note that the optimal solution for the i-th task is (X_i^⊤X_i)^{−1}X_i^⊤y_i = y_i. Hence the optimal solutions are orthogonal to each other for all the tasks, with λ_i = 1 for all 1 ≤ i ≤ k, and the minimum STL error is zero for all tasks. Consider the MTL model with hidden dimension r. By Proposition 5, the minimum MTL error is achieved by the best rank-r approximation subspace of ∑_{i=1}^k X_i^⊤y_iy_i^⊤X_i = ∑_{i=1}^k y_iy_i^⊤. Denote the optimum by B_r^⋆. The MTL error is\n∑_{i=1}^k ‖y_i‖² − 〈∑_{i=1}^k y_iy_i^⊤, B_r^⋆B_r^{⋆⊤}〉 = k − r.
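A quick numerical check of this example (our sketch, not the paper's code): the optimal rank-r shared module is the top-r subspace of ∑_i α_i X_i^⊤y_iy_i^⊤X_i, and the resulting MTL error is k − r, matching the calculation above.

```python
import numpy as np

d, k, r = 8, 5, 3
alphas = np.ones(k)                      # uniform task weights
X = [np.eye(d) for _ in range(k)]
y = [np.eye(d)[:, i] for i in range(k)]  # y_i = e_i

S = sum(a * Xi.T @ yi[:, None] @ yi[None, :] @ Xi
        for a, Xi, yi in zip(alphas, X, y))
U, lam, _ = np.linalg.svd(S)
B = U[:, :r]                             # best rank-r approximation subspace

err = sum(np.linalg.norm(yi) ** 2 - yi @ B @ B.T @ yi for yi in y)
print(err)                               # -> k - r = 2.0
```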
Different data covariance. We provide upper bounds on the quality of MTL solutions for different data covariances, which depend on the relatedness of all the tasks. The following procedure gives the precise statement. Consider k regression tasks with data {(X_i, y_i)}_{i=1}^k. Let θ_i = (X_i^⊤X_i)^†X_i^⊤y_i denote the optimal solution of each regression task. Let W ∈ R^{d×k} denote the matrix whose i-th column is equal to θ_i. Consider the following procedure for orthogonalizing W, for rounds j = 1, 2, . . .:\na) Let W_j^⋆ ∈ R^d denote the vector which maximizes ∑_{i=1}^k 〈X_iB/‖X_iB‖, y_i〉² over B ∈ R^d;\nb) Denote λ_j = ∑_{i=1}^k 〈X_iW_j^⋆/‖X_iW_j^⋆‖, y_i〉²;\nc) For each 1 ≤ i ≤ k, project X_iW_j^⋆ off from every column of X_i. Go to Step a).\nProposition 6. Suppose that r ≤ d. Let B^⋆ denote the optimal MTL solution of capacity r in the shared module. Denote OPT = ∑_{i=1}^k (‖y_i‖² − ‖X_i(X_i^⊤X_i)^†X_i^⊤y_i‖²). Then h(B^⋆) ≤ OPT − ∑_{i=r+1}^d λ_i.\nProof. It suffices to show that OPT is equal to ∑_{i=1}^k λ_i. The result then follows, since h(B^⋆) is at most the error given by W_1^⋆, . . . , W_k^⋆, which is equal to OPT − ∑_{i=r+1}^d λ_i." }, { "heading": "A.2.2 PROOF OF THEOREM 2", "text": "We fill in the proof of Theorem 2. First, we restate the result rigorously as follows.\nTheorem 2. For i = 1, 2, let (X_i, y_i) ∈ (R^{m_i×d}, R^{m_i}) denote two linear regression tasks with parameters θ_i ∈ R^d. Suppose that each row of X_1 is drawn independently from a distribution with covariance Σ_1 ∈ R^{d×d} and ℓ2-norm bounded by √L. Assume that θ_1^⊤Σ_1θ_1 = 1 w.l.o.g.\nLet c ∈ [κ(X_2) sin(θ_1, θ_2), 1/3] denote the desired error margin. Denote by (B^⋆, A_1^⋆, A_2^⋆) the optimal MTL solution. With probability 1 − δ over the randomness of (X_1, y_1), when\nm_1 ≳ max( L‖Σ_1‖ log(d/δ)/λ_min²(Σ_1), κ(Σ_1)κ²(X_2)‖y_2‖²/c², κ²(Σ_1)κ⁴(X_2)σ_1² log(1/δ)/c⁴ ),\nwe have that ‖B^⋆A_2^⋆ − θ_2‖/‖θ_2‖ ≤ 6c + (1/(1 − 3c))·‖ε_2‖/‖X_2θ_2‖.\nWe make several remarks to provide more insight on Theorem 2.\n• Theorem 2 guarantees positive transfer in MTL when the source and target models are close and the number of source samples is large. While the intuition is folklore in MTL, we provide a formal justification in the linear and ReLU models to quantify the phenomenon.\n• The error bound decreases with c; hence the smaller c, the better. On the other hand, the required number of data points m_1 increases, so there is a trade-off between accuracy and the amount of data.\n• c is assumed to be at most 1/3. This assumption arises when we deal with the label noise of task 2. If there is no noise for task 2, then this assumption is not needed. If there is noise for task 2, this assumption is satisfied when sin(θ_1, θ_2) is less than 1/(3κ(X_2)). In synthetic experiments, we observe that the dependence on κ(X_2) and sin(θ_1, θ_2) both arise in the performance of task 2, cf. Figure 3 and Figure 7, respectively.\nThe proof of Theorem 2 consists of two steps.\na) We show that the angle between B^⋆ and θ_1 will be small. Once this is established, we get a bound on the angle between B^⋆ and θ_2 via the triangle inequality.\nb) We bound the distance between B^⋆A_2^⋆ and θ_2. The distance consists of two parts. One part comes from B^⋆, i.e., the angle between B^⋆ and θ_2. The second part comes from A_2^⋆, i.e.,
the estimation error of the norm of θ_2, which involves the signal-to-noise ratio of task two.\nWe first show the following geometric fact, which will be used later in the proof.\nFact 7. Let a, b ∈ R^d denote two unit vectors. Suppose that X ∈ R^{m×d} has full column rank, with condition number denoted by κ = κ(X). Then we have\n|sin(Xa, Xb)| ≥ (1/κ²)·|sin(a, b)|.\nProof. Let X = UDV^⊤ be the SVD of X. Since X has full column rank by assumption, U has orthonormal columns and V is an orthogonal d×d matrix, i.e., V^⊤V = V V^⊤ = I_d. Clearly, we have sin(Xa, Xb) = sin(DV^⊤a, DV^⊤b). Denote a′ = V^⊤a and b′ = V^⊤b. Then a′ and b′ are both unit vectors, and sin(a′, b′) = sin(a, b). Let λ_1, . . . , λ_d denote the singular values of X. Then,\nsin²(Da′, Db′) = 1 − (∑_{i=1}^d λ_i²a′_ib′_i)² / ((∑_{i=1}^d λ_i²a′_i²)(∑_{i=1}^d λ_i²b′_i²)) = ∑_{1≤i,j≤d} λ_i²λ_j²(a′_ib′_j − a′_jb′_i)² / ((∑_{i=1}^d λ_i²a′_i²)(∑_{j=1}^d λ_j²b′_j²)) ≥ (λ_min⁴/λ_max⁴)·∑_{1≤i,j≤d} (a′_ib′_j − a′_jb′_i)² = (1/κ⁴)·((∑_{i=1}^d a′_i²)(∑_{i=1}^d b′_i²) − (∑_{i=1}^d a′_ib′_i)²) = (1/κ⁴)·sin²(a′, b′).\nThis concludes the proof.\nWe next show the following lemma, which bounds the angle between B^⋆ and θ_2.\nLemma 8. In the setting of Theorem 2, with probability 1 − δ over the randomness of task one, we have that\n|sin(B^⋆, θ_2)| ≤ sin(θ_1, θ_2) + c/κ(X_2).\nProof. We note that h(B^⋆) ≥ ‖y_1‖² by the optimality of B^⋆. Furthermore, 〈X_2B^⋆/‖X_2B^⋆‖, y_2〉² ≤ ‖y_2‖². Hence we obtain that\n〈X_1B^⋆/‖X_1B^⋆‖, y_1〉² ≥ ‖y_1‖² − ‖y_2‖².\nFor the left hand side,\n〈X_1B^⋆/‖X_1B^⋆‖, y_1〉² = 〈X_1B^⋆/‖X_1B^⋆‖, X_1θ_1 + ε_1〉² = 〈X_1B^⋆/‖X_1B^⋆‖, X_1θ_1〉² + 〈X_1B^⋆/‖X_1B^⋆‖, ε_1〉² + 2〈X_1B^⋆/‖X_1B^⋆‖, X_1θ_1〉〈X_1B^⋆/‖X_1B^⋆‖, ε_1〉.\nNote that the second term is a chi-squared random variable with expectation σ_1². Hence it is bounded by σ_1²√(log(1/δ)) with probability at least 1 − δ. Similarly, the third term is bounded by 2‖X_1θ_1‖σ_1√(log(1/δ)) with probability 1 − δ. Therefore, we obtain the following:\n‖X_1θ_1‖² cos²(X_1B^⋆, X_1θ_1) ≥ ‖y_1‖² − ‖y_2‖² − (σ_1² + 2σ_1‖X_1θ_1‖)√(log(1/δ)).\nNote that\n‖y_1‖² ≥ ‖X_1θ_1‖² + 2〈X_1θ_1, ε_1〉 ≥ ‖X_1θ_1‖² − 2‖X_1θ_1‖σ_1√(log(1/δ)).\nTherefore, ‖X_1θ_1‖² cos²(X_1B^⋆, X_1θ_1) ≥ ‖X_1θ_1‖² − ‖y_2‖² − (σ_1² + 3σ_1‖X_1θ_1‖)√(log(1/δ)), which implies sin²(X_1B^⋆, X_1θ_1) ≤ ‖y_2‖²/‖X_1θ_1‖² + 4σ_1√(log(1/δ))/‖X_1θ_1‖, and hence, by Fact 7, sin²(B^⋆, θ_1) ≤ κ²(X_1)·(‖y_2‖²/‖X_1θ_1‖² + 4σ_1√(log(1/δ))/‖X_1θ_1‖).\nBy the matrix Bernstein inequality (see e.g. Tropp et al. (2015)), when m_1 ≥ 10‖Σ_1‖ log(d/δ)/λ_min²(Σ_1), we have that\n‖(1/m_1)X_1^⊤X_1 − Σ_1‖ ≤ (1/2)λ_min(Σ_1).\nHence we obtain that κ²(X_1) ≤ 3κ(Σ_1) and ‖X_1θ_1‖² ≥ m_1·θ_1^⊤Σ_1θ_1/2 ≥ m_1/2 (where we assumed that θ_1^⊤Σ_1θ_1 = 1). Therefore,\nsin²(B^⋆, θ_1) ≤ 3κ(Σ_1)·(‖y_2‖²/(m_1/2) + 4σ_1√(log(1/δ))/√(m_1/2)),\nwhich is at most c²/κ²(X_2) by our setting of m_1. The conclusion then follows by the triangle inequality (noting that both c and sin(θ_1, θ_2) are less than 1/2).\nBased on the above lemma, we are now ready to prove Theorem 2.\nProof of Theorem 2. Note that in the MTL model, after obtaining B^⋆, we then solve the linear layer for each task. For task 2, this gives the weight value A_2^⋆ := 〈X_2B^⋆, y_2〉/‖X_2B^⋆‖². Thus the regression coefficients for task 2 are B^⋆A_2^⋆. For the rest of the proof, we focus on bounding the distance between B^⋆A_2^⋆ and θ_2. By the triangle inequality,\n‖B^⋆A_2^⋆ − θ_2‖ ≤ |〈X_2B^⋆, ε_2〉|/‖X_2B^⋆‖² + |〈X_2B^⋆, X_2θ_2〉/‖X_2B^⋆‖² − ‖θ_2‖| + ‖B^⋆‖θ_2‖ − θ_2‖. (8)\nNote that the second term of equation 8 is equal to |〈X_2B^⋆, X_2(θ_2 − ‖θ_2‖B^⋆)〉|/‖X_2B^⋆‖² ≤ κ(X_2)·‖θ_2 − ‖θ_2‖B^⋆‖.\nThe first term of equation 8 is bounded by\n‖ε_2‖/‖X_2B^⋆‖ ≤ ‖ε_2‖‖θ_2‖/(‖X_2θ_2‖ − ‖X_2(θ_2 − ‖θ_2‖B^⋆)‖). (9)
Lastly, we have that\n‖θ_2 − ‖θ_2‖B^⋆‖² = ‖θ_2‖²·2(1 − cos(B^⋆, θ_2)) ≤ 2‖θ_2‖² sin²(B^⋆, θ_2).\nBy Lemma 8, we have |sin(B^⋆, θ_2)| ≤ sin(θ_1, θ_2) + c/κ(X_2). Therefore, we conclude that equation 9 is at most\n‖ε_2‖·‖θ_2‖/(‖X_2θ_2‖ − √2 λ_max(X_2)‖θ_2‖ sin(θ_1, θ_2) − √2 c λ_min(X_2)‖θ_2‖) ≤ ‖ε_2‖·‖θ_2‖/(‖X_2θ_2‖ − 3c λ_min(X_2)‖θ_2‖) ≤ (1/(1 − 3c))·‖ε_2‖·‖θ_2‖/‖X_2θ_2‖.\nThus equation 8 is at most\n‖θ_2‖·((1/(1 − 3c))·‖ε_2‖/‖X_2θ_2‖ + √2(κ(X_2) + 1)·sin(B^⋆, θ_2)) ≤ ‖θ_2‖·((1/(1 − 3c))·‖ε_2‖/‖X_2θ_2‖ + 6c).\nHence we obtain the desired estimation error of B^⋆A_2^⋆." }, { "heading": "A.2.3 EXTENSION TO THE RELU MODEL", "text": "In this part, we extend Theorem 2 to the ReLU model. Note that the problem reduces to the following objective:\nmax_{B∈R^d} g(B) = 〈ReLU(X_1B)/‖ReLU(X_1B)‖, y_1〉² + 〈ReLU(X_2B)/‖ReLU(X_2B)‖, y_2〉². (10)\nWe make a crucial assumption that task 1’s input X_1 follows the Gaussian distribution. Note that making distributional assumptions is necessary, because for worst-case inputs even optimizing a single ReLU function under the squared loss is NP-hard (Manurangsi and Reichman (2018)). We state our result formally as follows. Theorem 9. Let (X_1, y_1) ∈ (R^{m_1×d}, R^{m_1}) and (X_2, y_2) ∈ (R^{m_2×d}, R^{m_2}) denote two tasks. Suppose that each row of X_1 is drawn from the standard Gaussian distribution, and that y_i = a_i·ReLU(X_iθ_i) + ε_i are generated via the ReLU model with θ_1, θ_2 ∈ R^d. Let E[(a_i·ReLU(X_iθ_i))_j²] = 1 for every 1 ≤ j ≤ m_1 without loss of generality, and let σ_1² denote the variance of every entry of ε_1.\nSuppose that c ≥ sin(θ_1, θ_2)/κ(X_2). Denote by (B^⋆, A_1^⋆, A_2^⋆) the optimal MTL solution of equation 10. With probability 1 − δ over the randomness of (X_1, y_1), when\nm_1 ≳ max( (d log d/c²)·(1/c² + log d), ‖y_2‖²/c² ),\nwe have that the estimation error is at most:\nsin(B^⋆, θ_1) ≤ sin(θ_1, θ_2) + O(c/κ(X_2)), |A_2^⋆ − a_2|/a_2 ≤ O(c) + (1/(1 − O(c)))·‖ε_2‖/(a_2‖ReLU(X_2θ_2)‖).\nProof. The proof follows a similar structure to that of Theorem 2. Without loss of generality, we can assume that θ_1 and θ_2 are both unit vectors. We first bound the angle between B^⋆ and θ_1.\nBy the optimality of B^⋆, we have that\n〈ReLU(X_1B^⋆)/‖ReLU(X_1B^⋆)‖, y_1〉² ≥ 〈ReLU(X_1θ_1)/‖ReLU(X_1θ_1)‖, y_1〉² − ‖y_2‖².\nFrom this we obtain:\na_1²·〈ReLU(X_1B^⋆)/‖ReLU(X_1B^⋆)‖, ReLU(X_1θ_1)〉² ≥ a_1²·‖ReLU(X_1θ_1)‖² − ‖y_2‖² − (σ_1² + 4a_1σ_1‖ReLU(X_1θ_1)‖)√(log(1/δ)). (11)\nNote that each entry of ReLU(X_1θ_1) is a truncated Gaussian random variable. By the Hoeffding bound, with probability 1 − δ we have\n|‖ReLU(X_1θ_1)‖² − m_1/2| ≤ √((m_1/2) log(1/δ)).\nAs for 〈ReLU(X_1B^⋆), ReLU(X_1θ_1)〉, we use an epsilon-net argument over B^⋆ to show the concentration. For a fixed B^⋆, we note that this is a sum of independent random variables that are all bounded within O(log(m_1/δ)) with probability 1 − δ. Denote by φ the angle between B^⋆ and θ_1; a standard geometric fact (see e.g. Lemma 1 of Du et al. (2017)) states that for a random Gaussian vector x ∈ R^d,\nE_x[ReLU(x^⊤B^⋆)·ReLU(x^⊤θ_1)] = (cos φ)/2 + cos φ·(tan φ − φ)/(2π) := g(φ)/2.\nTherefore, by applying Bernstein’s inequality and a union bound, with probability 1 − η we have:\n|〈ReLU(X_1B^⋆), ReLU(X_1θ_1)〉 − m_1g(φ)/2| ≤ 2√(m_1g(φ) log(1/η)) + (2/3)·log(1/η)·log(m_1/δ).\nBy standard arguments, there exists a set S of d^{O(d)} unit vectors such that for any other unit vector u there exists û ∈ S such that ‖u − û‖ ≤ min(1/d³, c²/κ²(X_2)). By setting η = d^{−O(d)} and taking a union bound over all unit vectors in S, we have that there exists û ∈ S satisfying ‖B^⋆ − û‖ ≤ min(1/d³, c²/κ²(X_2)) and the following:\n|〈ReLU(X_1û), ReLU(X_1θ_1)〉 − m_1g(φ′)/2|
≲ √(m_1 d log d) + d log² d ≤ 2m_1c²/κ²(X_2) (by our setting of m_1),\nwhere φ′ is the angle between û and θ_1. Note that\n|〈ReLU(X_1û) − ReLU(X_1B^⋆), ReLU(X_1θ_1)〉| ≤ ‖X_1(û − B^⋆)‖·‖ReLU(X_1θ_1)‖ ≤ c²/κ²(X_2)·O(m_1).\nTogether we have shown that\n|〈ReLU(X_1B^⋆), ReLU(X_1θ_1)〉 − m_1g(φ′)/2| ≤ c²/κ²(X_2)·O(m_1).\nCombined with equation 11, by our setting of m_1, it is not hard to show that g(φ′) ≥ 1 − O(c²/κ²(X_2)). Note that\n1 − g(φ′) = 1 − cos φ′ − cos φ′(tan φ′ − φ′)/π ≤ 1 − cos φ′ = 2 sin²(φ′/2) ≲ c²/κ²(X_2),\nwhich implies that sin² φ′ ≲ c²/κ²(X_2) (since cos(φ′/2) ≥ 0.9). Finally, note that ‖û − B^⋆‖ ≤ c²/κ²(X_2), hence ‖û − B^⋆‖² = 2(1 − cos(û, B^⋆)) ≥ sin²(û, B^⋆).\nOverall, we conclude that sin(B^⋆, θ_1) ≤ O(c/κ(X_2)). Hence sin(B^⋆, θ_2) ≤ sin(θ_1, θ_2) + O(c/κ(X_2)).\nFor the estimation of a_2, we have\n|〈ReLU(X_2B^⋆), y_2〉/‖ReLU(X_2B^⋆)‖² − a_2| ≤ |〈ReLU(X_2B^⋆), ε_2〉|/‖ReLU(X_2B^⋆)‖² + a_2·|〈ReLU(X_2B^⋆), ReLU(X_2B^⋆) − ReLU(X_2θ_2)〉|/‖ReLU(X_2B^⋆)‖².\nThe first part is at most\n‖ε_2‖/‖ReLU(X_2B^⋆)‖ ≤ ‖ε_2‖/(‖ReLU(X_2θ_2)‖ − ‖ReLU(X_2θ_2) − ReLU(X_2B^⋆)‖) ≤ (1/(1 − O(c)))·‖ε_2‖/‖ReLU(X_2θ_2)‖.\nSimilarly, we can show that the second part is at most O(c). Therefore, the proof is complete." }, { "heading": "A.3 PROOF OF PROPOSITION 3", "text": "In this part, we present the proof of Proposition 3. In fact, we present a more refined result, showing that all local minima are global minima for the reweighted loss in the linear case:\nf(A_1, A_2, . . . , A_k; B) = ∑_{i=1}^k α_i‖X_iBA_i − y_i‖²_F. (12)\nThe key is to reduce the MTL objective f(·) to low-rank matrix approximation, and to apply recent results by Balcan et al. (2018), which show that there are no spurious local minima for the latter problem. Lemma 10. Assume that X_i^⊤X_i = α_iΣ with α_i > 0 for all 1 ≤ i ≤ k. Then all the local minima of f(A_1, . . . , A_k; B) are global minima of equation 12.\nProof. We first transform the problem from the space of B to the space of C. Note that this is without loss of generality, since there is a one-to-one mapping between B and C with C = DV^⊤B. In this case, the corresponding objective becomes\ng(A_1, . . . , A_k; B) = ∑_{i=1}^k α_i·‖U_iCA_i − y_i‖² = ∑_{i=1}^k ‖C(√α_i A_i) − √α_i U_i^⊤y_i‖² + ∑_{i=1}^k α_i·(‖y_i‖² − ‖U_i^⊤y_i‖²).\nThe latter expression is a constant, hence it does not affect the optimization solution. For the former, denote by A ∈ R^{r×k} the matrix obtained by stacking the √α_i A_i’s together column-wise. Similarly, denote by Z ∈ R^{d×k} the matrix obtained by stacking the √α_i U_i^⊤y_i together column-wise. Then minimizing g(·) reduces to solving low-rank matrix approximation: ‖CA − Z‖²_F.\nBy Lemma 3.1 of Balcan et al. (2018), the only local minima of ‖CA − Z‖²_F are the ones where CA is equal to the best rank-r approximation of Z. Hence the proof is complete.\nNow we are ready to prove Proposition 3.\nProof of Proposition 3. By Proposition 5, the optimal solution B^⋆ for equation 12 is V D^{−1} times the best rank-r approximation to ∑_{i=1}^k α_iU^⊤y_iy_i^⊤U, where we denote the SVD of X as UDV^⊤. Denote by Q_rQ_r^⊤ the best rank-r approximation to U^⊤ZZ^⊤U, where Z = [√α_1 y_1, √α_2 y_2, . . . , √α_k y_k] stacks the k vectors into a d-by-k matrix. Hence the result of Proposition 5 shows that the optimal solution B^⋆ is V D^{−1}Q_r, which is equal to (X^⊤X)^{−1}XQ_r. By Proposition 4, the optimality of B^⋆ is the same up to transformations on the column space. Hence the proof is complete.\nTo show that all local minima are also equal to (X^⊤X)^{−1}XQ_r, we can simply apply Lemma 10 together with the above argument.\nRemark. This result only applies to the linear model and does not work for ReLU models. The question of characterizing the optimization landscape of non-linear ReLU models is not well understood given the current theoretical understanding of neural networks; we leave this for future work.
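A small numpy sketch (ours, not the paper's code) of this reduction and the resulting SVD-based solution: with shared features X, the reweighted MTL solution reduces to a best rank-r approximation of the stacked, weighted label matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
m, d, k, r = 100, 10, 4, 2
X = rng.standard_normal((m, d))
Y = rng.standard_normal((m, k))          # column i is task i's labels y_i
alphas = np.array([1.0, 1.0, 0.5, 0.5])  # task weights alpha_i

U, D, Vt = np.linalg.svd(X, full_matrices=False)
Z = U.T @ (Y * np.sqrt(alphas))          # sqrt(a_i) U^T y_i, stacked column-wise
Q, _, _ = np.linalg.svd(Z)
Qr = Q[:, :r]                            # top-r subspace of the weighted labels
B_star = Vt.T @ np.diag(1.0 / D) @ Qr    # B* = V D^{-1} Q_r, as in Prop. 3

# Sanity check: X B* = U Q_r has orthonormal columns.
print(np.allclose((X @ B_star).T @ (X @ B_star), np.eye(r)))  # True
```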
" }, { "heading": "B SUPPLEMENTARY EXPERIMENTAL RESULTS", "text": "We fill in the details left from our experimental section. In Appendix B.1, we review the datasets used in our experiments. In Appendix B.2, we describe the models we use on each dataset. In Appendix B.3, we describe the training procedures for all experiments. In Appendix B.4 and Appendix B.5, we show extended synthetic and real-world experiments to support our claims." }, { "heading": "B.1 DATASETS", "text": "We describe the synthetic settings and the datasets used in the experiments: the sentiment analysis benchmarks, the General Language Understanding Evaluation (GLUE) benchmark, and ChestX-ray14.\nSynthetic settings. For the synthetic experiments, we draw 10,000 random data samples with dimension d = 100 from the standard Gaussian N(0, 1) and calculate the corresponding labels based on the model described in each experiment. We split the data samples into training and validation sets with 9,000 and 1,000 samples respectively. For classification tasks, we generate the labels by applying a sigmoid function and then thresholding the value to binary labels at 0.5. For ReLU regression tasks, we apply the ReLU activation function to the real-valued labels. The number of data samples used in the experiments varies depending on the specification. Specifically, for the task covariance experiment of Figure 3, we fix task 1’s data with m_1 = 9,000 training samples and vary task 2’s data under the following settings: (i) same rotation Q_1 = Q_2 but different singular values D_1 ≠ D_2; (ii) same singular values D_1 = D_2 but random rotations Q_1 ≠ Q_2.\nSentiment analysis. For the sentiment analysis task, the goal is to understand the sentiment opinions expressed in the text based on the context provided. This is a popular text classification task which is usually formulated as a multi-label classification task over different ratings such as positive (+1), negative (-1), or neutral (0). We use six sentiment analysis benchmarks in our experiments:\n• Movie review sentiment (MR): In the MR dataset (Pang and Lee (2005)), each movie review consists of a single sentence. The goal is to detect positive vs. negative reviews.\n• Sentence subjectivity (SUBJ): The SUBJ dataset was proposed in Pang and Lee (2004), and the goal is to classify whether a given sentence is subjective or objective.\n• Customer reviews polarity (CR): The CR dataset (Hu and Liu (2004)) provides customer reviews of various products. The goal is to categorize positive and negative reviews.\n• Question type (TREC): The TREC dataset was collected by Li and Roth (2002). The aim is to classify a question into 6 question types.\n• Opinion polarity (MPQA): The MPQA dataset detects whether an opinion is polarized or not (Wiebe et al. (2005)).\n• Stanford sentiment treebank (SST): The SST dataset, created by Socher et al. (2013), is an extension of the MR dataset.\nThe General Language Understanding Evaluation (GLUE) benchmark. GLUE is a collection of NLP tasks including question answering, sentiment analysis, text similarity and textual entailment problems. The GLUE benchmark is a state-of-the-art MTL benchmark for both academia and industry. We select five representative tasks, including CoLA, MRPC, QNLI, RTE, and SST-2, to validate our proposed method.
We emphasize that the goal of this work is not to come up with a state-of-the-art result but rather to provide insights into the workings of multi-task learning. It is conceivable that our results can be extended to the entire dataset as well; this is left for future work. More details about the GLUE benchmark can be found in the original paper (Wang et al. (2018a)).\nChestX-ray14. The ChestX-ray14 dataset (Wang et al. (2017)) is the largest publicly available chest X-ray dataset. It contains 112,120 frontal-view X-ray images of 30,805 unique patients. Each image is annotated with up to 14 different thoracic pathology labels, obtained using automatic extraction methods on radiology reports. This can be formulated as a 14-task multi-label image classification problem. The ChestX-ray14 dataset is a representative dataset in the medical imaging domain as well as in computer vision. We use this dataset to examine our proposed task reweighting scheme, since it satisfies the assumption that all tasks have the same input data but different labels." }, { "heading": "B.2 MODELS", "text": "Synthetic settings. For the synthetic experiments, we use the linear regression model, the logistic regression model and a one-layer neural network with the ReLU activation function.\nSentiment analysis. For the sentiment analysis experiments, we consider three different models including multi-layer perceptron (MLP), LSTM, and CNN:\n• For the MLP model, we average the word embeddings of a sentence and feed the result into a two-layer perceptron, followed by a classification layer.\n• For the LSTM model, we use the standard one-layer single-direction LSTM as proposed by Lei et al. (2018), followed by a classification layer.\n• For the CNN model, we use the model proposed by Kim (2014), which uses one convolutional layer with multiple filters, followed by a ReLU layer, max-pooling layer, and classification layer. We follow the protocol of Kim (2014) and set the filter sizes as {3, 4, 5}.\nWe use the pre-trained GloVe embeddings trained on the Wikipedia 2014 and Gigaword 5 corpora.6 We fine-tune the entire model in our experiments. In the multi-task learning setting, the shared modules include the embedding layer and the feature extraction layer (i.e., the MLP, LSTM, or CNN model). Each task has its separate output module.\nGLUE. For the experiments on the GLUE benchmark, we use a state-of-the-art language model called BERT (Devlin et al. (2018)). For each task, we add a classification/regression layer on top of it as our model. For all the experiments, we use the BERTLARGE uncased model, which is a 24-layer network as described in Devlin et al. (2018). For the multi-task learning setting, we follow the work of Liu et al. (2019a) and use BERTLARGE as the shared module.\nChestX-ray14. For the experiments on the ChestX-ray14 dataset, we use the DenseNet model proposed by Rajpurkar et al. (2017) as the shared module, which is a 121-layer network. For each task, we use a separate classification output layer. We use the pre-trained model7 in our experiments." }, { "heading": "B.3 TRAINING PROCEDURES", "text": "In this subsection, we describe the training procedures for our experiments.\nMini-batch SGD. We describe the details of task data sampling in our SGD implementation.\n• For tasks with different features, such as GLUE, we first divide each task’s data into small batches. Then, we mix all the batches from all tasks and shuffle randomly. During every epoch, an SGD step is applied on every batch for the corresponding task.
If the current batch is for task i, then the SGD step is applied on A_i, and possibly R_i or B depending on the setup; the parameters of the other tasks are fixed.\n• For tasks with the same features, such as ChestX-ray14, the SGD step is applied on all the tasks jointly to update all the A_i’s and B together.\nSynthetic settings. For the synthetic experiments, we do a grid search over the learning rate from {1e−4, 1e−3, 1e−2, 1e−1} and the number of epochs from {10, 20, 30, 40, 50}, and pick the best results for all the experiments. We choose the learning rate to be 1e−3, the number of epochs to be 30, and the batch size to be 50. For regression tasks, we report Spearman’s correlation score; for classification tasks, we report the classification accuracy.\nSentiment analysis. For the sentiment analysis experiments, we randomly split the data into training, dev and test sets with percentages 80%, 10%, and 10% respectively. We follow the protocol of Lei et al. (2018) to set up our model for the sentiment analysis experiments.\nThe default hidden dimension of the model (e.g. LSTM) is set to 200, but we vary this parameter for the model capacity experiments. We report the accuracy score on the test set as the performance metric.\n6http://nlp.stanford.edu/data/wordvecs/glove.6B.zip 7https://github.com/pytorch/vision\nGLUE. For the GLUE experiments, the training procedure is applied to the alignment modules and the output modules. Due to the complexity of the BERTLARGE module, which involves 24 layers of non-linear transformations, we fix the BERTLARGE module during the training process to examine the effect of adding the alignment modules. In general, even after fine-tuning the BERTLARGE module on a set of tasks, it is always possible to add our alignment modules and apply Algorithm 1.\nFor the training parameters, we apply grid search to tune the learning rate from {2e−5, 3e−5, 1e−5} and the number of epochs from {2, 3, 5, 10}. We choose the learning rate to be 2e−5, the number of epochs to be 5, and the batch size to be 16 for all the experiments.\nWe use the GLUE evaluation metric (cf. Wang et al. (2018b)) and report the scores on the development set as the performance metric.\nChestX-ray14. For the ChestX-ray14 experiments, we use the configuration suggested by Rajpurkar et al. (2017) and report the AUC score on the test set after fine-tuning the model for 20 epochs." }, { "heading": "B.4 EXTENDED SYNTHETIC EXPERIMENTS", "text": "Varying cosine similarity on linear and ReLU models. We demonstrate the effect of cosine similarity in synthetic settings for both regression and classification tasks.\nSynthetic tasks. We start with linear settings. We generate 20 synthetic task datasets (either regression tasks or classification tasks) based on the data generation procedure of Appendix B.1, and vary the task similarity between task 1 and task i. We run the experiment with different dataset pairs (dataset 1 and dataset i); a sketch of this data generation is given below.\nAfter generating the tasks, we compare the performance gap between the MTL and STL models.\nResults. From Figures 7a and 7b, we find that for both regression and classification settings, the larger the task similarity, the more MTL outperforms the STL model, and negative transfer can occur if the task similarity is too small.\nReLU settings. We also consider a ReLU-activated model. We use the same setup as the linear setting, but apply a ReLU activation to generate the data. Similar results are shown in Figures 7c and 7d.
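The sketch below (ours, not the authors' code; the interpolation parametrization is one simple choice for controlling similarity) generates a pair of tasks whose parameters have a prescribed cosine similarity, with labels produced as described in Appendix B.1:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 10_000, 100

def make_task(theta, kind="regression", noise=0.1):
    X = rng.standard_normal((n, d))
    z = X @ theta + noise * rng.standard_normal(n)
    if kind == "classification":
        return X, (1 / (1 + np.exp(-z)) > 0.5).astype(int)  # sigmoid + 0.5 threshold
    if kind == "relu":
        return X, np.maximum(z, 0.0)                        # ReLU-activated labels
    return X, z

theta1 = rng.standard_normal(d); theta1 /= np.linalg.norm(theta1)
u = rng.standard_normal(d)
u -= (u @ theta1) * theta1; u /= np.linalg.norm(u)          # orthogonal to theta1

alpha = 0.8                                  # desired cosine similarity to task 1
theta2 = alpha * theta1 + np.sqrt(1 - alpha**2) * u
X1, y1 = make_task(theta1)
X2, y2 = make_task(theta2, kind="classification")
print(theta1 @ theta2)                       # ~0.8
```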
Higher rank regimes for ReLU settings. We provide further validation of our results on ReLU-activated models.\nSynthetic tasks. In this synthetic experiment, there are two sets of model parameters Θ_1 ∈ R^{d×r} and Θ_2 ∈ R^{d×r} (d = 100 and r = 10). Θ_1 is a fixed random rotation matrix and there are m_1 = 100 data points for task 1. Task 2’s model parameter is Θ_2 = αΘ_1 + (1 − α)Θ′, where Θ′ is also a fixed rotation matrix that is orthogonal to Θ_1. Note that α is the cosine value/similarity of the principal angle between Θ_1 and Θ_2.\nWe then generate X_1 ∈ R^{m_1×d} and X_2 ∈ R^{m_2×d} from the Gaussian distribution. For each task, the labels are y_i = ReLU(X_iΘ_i)e + ε_i, where e ∈ R^r is the all-ones vector and ε_i is random Gaussian noise. Given the two tasks, we use MTL with ReLU activations and capacity H = 10 to co-train the two tasks. The goal is to see how different levels of α (the similarity) affect the transfer from task two to task one. Note that this setting parallels the ReLU setting of Theorem 9 but applies to the higher-rank case (r > 1).\nResults. In Figure 8 we show that the data size, the cosine similarity between the STL solutions, and the alignment of covariances continue to affect the rate of transfer in the new settings. The study shows that our conceptual results are applicable to a wide range of settings.\nEvaluating Algorithm 1 on linear and ReLU-activated models. We consider the synthetic example in Section 2.3 to compare Algorithm 1 and the baseline MTL training. Recall that in the example, when the source and target tasks have different covariance matrices, MTL causes negative transfer on the target task. Our hypothesis is that Algorithm 1 can correct the misalignment and the negative transfer.\nSynthetic tasks. We evaluate on both linear and ReLU regression tasks. The linear case follows the example in Section 2.3. For the ReLU case, the data is generated according to the previous example.\nResults. Figure 9 confirms the hypothesis. We observe that Algorithm 1 corrects the negative transfer in the regime where the source task only has a limited amount of data. Furthermore, Algorithm 1 matches the baseline MTL training when the source task has sufficiently many data points." }, { "heading": "B.5 EXTENDED ABLATION STUDIES", "text": "Cross validation for choosing model capacities. We provide a cross-validation experiment to indicate how we choose the best-performing model capacities in Figure 1. This is done on the six sentiment analysis tasks trained with an LSTM layer.\nIn Figure 10, we vary the model capacities to plot the validation accuracies of the MTL model trained with all six tasks and the STL model for each task. The result complements Table 1 in Section 3.3.\nChoosing model capacities for CNN and MLP. Next, we verify our result on model capacities for CNN and MLP models. We select the SST and MR datasets from the sentiment analysis tasks for this experiment. We train all three models (CNN, MLP and LSTM) by varying the capacities.\nResults. From Figure 11 we observe that the best-performing MTL model capacity is less than the total of the best-performing STL model capacities, for all three models.\nThe effect of label noise on Algorithm 2. To evaluate the robustness of Algorithm 2 in the presence of label noise, we conduct the following experiment. First, we subsample 10% of the ChestX-ray14 dataset and select two tasks from it. Then, we randomly pick one task and add 20% noise to its labels by randomly flipping them with probability 0.5. We compare the performance of training both tasks using our reweighting scheme (Algorithm 2) vs.
the reweighting technique of Kendall et al. (2018) and the unweighted loss scheme.\nResults. Averaged over 10 randomly chosen task pairs, our method improves the AUC score by 1.0% over the unweighted training scheme and by 0.4% over Kendall et al. (2018). Figure 12 shows 5 example task pairs from our evaluation." } ]
2020
null
SP:027dfebf9732ce68cb3985ef873b00d65e6e7205
[ "-\tThis paper modifies and extends the recent “free” training strategies in adversarial training for representation learning for natural language. The proposed “Free” Large-Batch Adversarial Training is well motived, in comparison with plain PGD-based adversarial training and the existing methods like FreeAT and YOPO, which virtually enlarges the batch size and minimize maximum risk at every ascent step. The contributions are solid. ", "In this paper, the authors present a new adversarial training algorithm and apply it to the fintuning stage large scale language models BERT and RoBERTa. They find that with FreeLB applied to finetuning, both BERT and RoBERTa see small boosts in performance on GLUE, ARC, and CommonsenseQA. The gains they see on GLUE are quite small (0.3 on the GLUE test score for RoBERTa) but the gains are more substantial on ARC and CommonsenseQA. The paper also presents some ablation studies on the use of the same dropout mask across each ascent step of FreeLB, empirically seeing gains by using the same mask. They also present some analysis on robustness in the embedding space, showing that FreeLB leads to greater robustness than other adversarial training methods" ]
Adversarial training, which minimizes the maximal risk for label-preserving input perturbations, has proved to be effective for improving the generalization of language models. In this work, we propose a novel adversarial training algorithm, FreeLB, that promotes higher invariance in the embedding space, by adding adversarial perturbations to word embeddings and minimizing the resultant adversarial risk inside different regions around input samples. To validate the effectiveness of the proposed approach, we apply it to Transformer-based models for natural language understanding and commonsense reasoning tasks. Experiments on the GLUE benchmark show that when applied only to the finetuning stage, it is able to improve the overall test scores of BERT-base model from 78.3 to 79.4, and RoBERTa-large model from 88.5 to 88.8. In addition, the proposed approach achieves state-of-the-art single-model test accuracies of 85.44% and 67.75% on ARC-Easy and ARC-Challenge. Experiments on CommonsenseQA benchmark further demonstrate that FreeLB can be generalized and boost the performance of RoBERTa-large model on other tasks as well. 1
[ { "affiliations": [], "name": "Chen Zhu" }, { "affiliations": [], "name": "Yu Cheng" }, { "affiliations": [], "name": "Zhe Gan" }, { "affiliations": [], "name": "Siqi Sun" }, { "affiliations": [], "name": "Tom Goldstein" }, { "affiliations": [], "name": "Jingjing Liu" } ]
[ { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": null, "year": 2018 }, { "authors": [ "Roy Bar Haim", "Ido Dagan", "Bill Dolan", "Lisa Ferro", "Danilo Giampiccolo", "Bernardo Magnini", "Idan Szpektor" ], "title": "The second PASCAL recognising textual entailment challenge", "venue": null, "year": 2006 }, { "authors": [ "Yonatan Belinkov", "Yonatan Bisk" ], "title": "Synthetic and natural noise both break neural machine translation", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Luisa Bentivogli", "Ido Dagan", "Hoa Trang Dang", "Danilo Giampiccolo", "Bernardo Magnini" ], "title": "The fifth PASCAL recognizing textual entailment challenge", "venue": null, "year": 2009 }, { "authors": [ "Yong Cheng", "Lu Jiang", "Wolfgang Macherey" ], "title": "Robust neural machine translation with doubly adversarial inputs", "venue": "In ACL,", "year": 2019 }, { "authors": [ "Peter Clark", "Isaac Cowhey", "Oren Etzioni", "Tushar Khot", "Ashish Sabharwal", "Carissa Schoenick", "Oyvind Tafjord" ], "title": "Think you have solved question answering? try arc, the ai2 reasoning challenge", "venue": "arXiv preprint arXiv:1803.05457,", "year": 2018 }, { "authors": [ "Patrick L Combettes", "Jean-Christophe Pesquet" ], "title": "Proximal splitting methods in signal processing. In Fixed-point algorithms for inverse problems in science and engineering", "venue": null, "year": 2011 }, { "authors": [ "Ido Dagan", "Oren Glickman", "Bernardo Magnini" ], "title": "The PASCAL recognising textual entailment challenge. In Machine learning challenges. evaluating predictive uncertainty, visual object classification, and recognising tectual entailment", "venue": null, "year": 2006 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. 
Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "In NAACL,", "year": 2019 }, { "authors": [ "William B Dolan", "Chris Brockett" ], "title": "Automatically constructing a corpus of sentential paraphrases", "venue": "In Proceedings of the International Workshop on Paraphrasing,", "year": 2005 }, { "authors": [ "Sanghamitra Dutta", "Gauri Joshi", "Soumyadip Ghosh", "Parijat Dube", "Priya Nagpurkar" ], "title": "Slow and stale gradients can win the race: Error-runtime trade-offs in distributed sgd", "venue": "In AISTATS,", "year": 2018 }, { "authors": [ "Javid Ebrahimi", "Anyi Rao", "Daniel Lowd", "Dejing Dou" ], "title": "HotFlip: White-box adversarial examples for text classification", "venue": "In ACL,", "year": 2018 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "A theoretically grounded application of dropout in recurrent neural networks", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "Danilo Giampiccolo", "Bernardo Magnini", "Ido Dagan", "Bill Dolan" ], "title": "The third PASCAL recognizing textual entailment challenge", "venue": "In Proceedings of the ACL-PASCAL workshop on textual entailment and paraphrasing,", "year": 2007 }, { "authors": [ "Tom Goldstein", "Christoph Studer", "Richard Baraniuk" ], "title": "A field guide to forward-backward splitting with a fasta implementation", "venue": "arXiv preprint 1411.3406,", "year": 2014 }, { "authors": [ "Ian Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Shankar Iyer", "Nikhil Dandekar", "Kornl Csernai" ], "title": "First quora dataset release: Question pairs, 2017", "venue": null, "year": 2017 }, { "authors": [ "Mohit Iyyer", "John Wieting", "Kevin Gimpel", "Luke Zettlemoyer" ], "title": "Adversarial example generation with syntactically controlled paraphrase networks", "venue": "In NAACL,", "year": 2018 }, { "authors": [ "Robin Jia", "Percy Liang" ], "title": "Adversarial examples for evaluating reading comprehension systems", "venue": "In EMNLP,", "year": 2017 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial machine learning at scale", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Guokun Lai", "Qizhe Xie", "Hanxiao Liu", "Yiming Yang", "Eduard Hovy" ], "title": "Race: Large-scale reading comprehension dataset from examinations", "venue": "arXiv preprint arXiv:1704.04683,", "year": 2017 }, { "authors": [ "Zhenzhong Lan", "Mingda Chen", "Sebastian Goodman", "Kevin Gimpel", "Piyush Sharma", "Radu Soricut" ], "title": "Albert: A lite bert for self-supervised learning of language representations", "venue": null, "year": 2020 }, { "authors": [ "Hector J Levesque", "Ernest Davis", "Leora Morgenstern" ], "title": "The Winograd schema challenge", "venue": "In AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning,", "year": 2011 }, { "authors": [ "Xiaodong Liu", "Pengcheng He", "Weizhu Chen", "Jianfeng Gao" ], "title": "Multi-task deep neural networks for natural language understanding", "venue": "In ACL,", "year": 2019 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "Roberta: A robustly optimized bert pretraining approach", "venue": "arXiv preprint arXiv:1907.11692,", "year": 2019 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig 
Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Todor Mihaylov", "Peter Clark", "Tushar Khot", "Ashish Sabharwal" ], "title": "Can a suit of armor conduct electricity? a new dataset for open book question answering", "venue": "arXiv preprint arXiv:1809.02789,", "year": 2018 }, { "authors": [ "Takeru Miyato", "Andrew M Dai", "Ian Goodfellow" ], "title": "Adversarial training methods for semisupervised text classification", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Takeru Miyato", "Shin-ichi Maeda", "Masanori Koyama", "Shin Ishii" ], "title": "Virtual adversarial training: A regularization method for supervised and semi-supervised learning", "venue": null, "year": 2019 }, { "authors": [ "Chongli Qin", "James Martens", "Sven Gowal", "Dilip Krishnan", "Alhussein Fawzi", "Soham De", "Robert Stanforth", "Pushmeet Kohli" ], "title": "Adversarial robustness through local linearization", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Colin Raffel", "Noam Shazeer", "Adam Roberts", "Katherine Lee", "Sharan Narang", "Michael Matena", "Yanqi Zhou", "Wei Li", "Peter J Liu" ], "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "venue": null, "year": 1910 }, { "authors": [ "Pranav Rajpurkar", "Jian Zhang", "Konstantin Lopyrev", "Percy Liang" ], "title": "SQuAD: 100,000+ questions for machine comprehension of text", "venue": "In EMNLP,", "year": 2016 }, { "authors": [ "Marco Tulio Ribeiro", "Sameer Singh", "Carlos Guestrin" ], "title": "Semantically equivalent adversarial rules for debugging NLP models", "venue": "In ACL,", "year": 2018 }, { "authors": [ "Parsa Saadatpanah", "Ali Shafahi", "Tom Goldstein" ], "title": "Adversarial attacks on copyright detection systems", "venue": "arXiv preprint 1906.07153,", "year": 2019 }, { "authors": [ "Rico Sennrich", "Barry Haddow", "Alexandra Birch" ], "title": "Neural machine translation of rare words with subword units", "venue": "In ACL,", "year": 2016 }, { "authors": [ "A. Shafahi", "M. Najibi", "A. Ghiasi", "Z. Xu", "J. Dickerson", "C. Studer", "L. Davis", "G. Taylor", "T. 
Goldstein" ], "title": "Adversarial Training for Free", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Richard Socher", "Alex Perelygin", "Jean Wu", "Jason Chuang", "Christopher D Manning", "Andrew Ng", "Christopher Potts" ], "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "venue": "In EMNLP,", "year": 2013 }, { "authors": [ "Jure Sokolic", "Raja Giryes", "Guillermo Sapiro", "Miguel Rodrigues" ], "title": "Generalization error of invariant classifiers", "venue": "In AISTATS,", "year": 2017 }, { "authors": [ "Robert Speer", "Joshua Chin", "Catherine Havasi" ], "title": "Conceptnet 5.5: An open multilingual graph of general knowledge", "venue": "In AAAI,", "year": 2017 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: A simple way to prevent neural networks from overfitting", "venue": null, "year": 2014 }, { "authors": [ "Kai Sun", "Dian Yu", "Dong Yu", "Claire Cardie" ], "title": "Improving machine reading comprehension with general reading strategies", "venue": "arXiv preprint arXiv:1810.13441,", "year": 2018 }, { "authors": [ "Alon Talmor", "Jonathan Herzig", "Nicholas Lourie", "Jonathan Berant" ], "title": "Commonsenseqa: A question answering challenge targeting commonsense knowledge", "venue": "In NAACL,", "year": 2019 }, { "authors": [ "Alex Wang", "Amanpreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel R Bowman" ], "title": "Glue: A multi-task benchmark and analysis platform for natural language understanding", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Alex Warstadt", "Amanpreet Singh", "Samuel R. Bowman" ], "title": "Neural network acceptability judgments", "venue": "arXiv preprint 1805.12471,", "year": 2018 }, { "authors": [ "Adina Williams", "Nikita Nangia", "Samuel R. Bowman" ], "title": "A broad-coverage challenge corpus for sentence understanding through inference", "venue": "In NAACL,", "year": 2018 }, { "authors": [ "Chaowei Xiao", "Ruizhi Deng", "Bo Li", "Fisher Yu", "Mingyan Liu", "Dawn Song" ], "title": "Characterizing adversarial examples based on spatial consistency information for semantic segmentation", "venue": null, "year": 2018 }, { "authors": [ "Cihang Xie", "Yuxin Wu", "Laurens van der Maaten", "Alan L. Yuille", "Kaiming He" ], "title": "Feature denoising for improving adversarial robustness", "venue": null, "year": 2019 }, { "authors": [ "Huan Xu", "Shie Mannor" ], "title": "Robustness and generalization", "venue": "Machine learning,", "year": 2012 }, { "authors": [ "Zhilin Yang", "Zihang Dai", "Yiming Yang", "Jaime Carbonell", "Ruslan Salakhutdinov", "Quoc V Le" ], "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Liu" ], "title": "Additional tricks are required to achieve high performance on WNLI and QNLI for the GLUE benchmark. We use the same tricks as Liu et al. (2019b). For WNLI, we use the same WSC data provided by Liu et al. (2019b) for training. For testing, Liu et al. (2019b) also provided the test set with span annotations, but the order is different form the GLUE dataset", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Adversarial training is a method for creating robust neural networks. During adversarial training, mini-batches of training samples are contaminated with adversarial perturbations (alterations that are small and yet cause misclassification), and then used to update network parameters until the resulting model learns to resist such attacks. Adversarial training was originally proposed as a means to enhance the security of machine learning systems (Goodfellow et al., 2015), especially for safety-critical systems like self-driving cars (Xiao et al., 2018) and copyright detection (Saadatpanah et al., 2019).\nIn this paper, we turn our focus away from the security benefits of adversarial training, and instead study its effects on generalization. While adversarial training boosts the robustness, it is widely accepted by computer vision researchers that it is at odds with generalization, with classification accuracy on non-corrupted images dropping as much as 10% on CIFAR-10, and 15% on Imagenet (Madry et al., 2018; Xie et al., 2019). Surprisingly, people observe the opposite result for language models (Miyato et al., 2017; Cheng et al., 2019), showing that adversarial training can improve both generalization and robustness.\nWe will show that adversarial training significantly improves performance of state-of-the-art models for many language understanding tasks. In particular, we propose a novel adversarial training algorithm, called FreeLB (Free Large-Batch), which adds adversarial perturbations to word embeddings and minimizes the resultant adversarial loss around input samples. The method leverages recently proposed “free” training strategies (Shafahi et al., 2019; Zhang et al., 2019) to enrich the training data with diversified adversarial samples under different norm constraints at no extra cost than PGD-based (Projected Gradient Descent) adversarial training (Madry et al., 2018), which enables us to perform such diversified adversarial training on large-scale state-of-the-art models. We observe improved invariance in the embedding space for models trained with FreeLB, which is positively correlated with generalization.\n1Code is available at https://github.com/zhuchen03/FreeLB.\nWe perform comprehensive experiments to evaluate the performance of a variety of adversarial training algorithms on state-of-the-art language understanding models and tasks. In the comparisons with standard PGD (Madry et al., 2018), FreeAT (Shafahi et al., 2019) and YOPO (Zhang et al., 2019), FreeLB stands out to be the best for the datasets and models we evaluated. With FreeLB, we achieve state-of-the-art results on several important language understanding benchmarks. On the GLUE benchmark, FreeLB pushes the performance of the BERT-base model from 78.3 to 79.4. The overall score of the RoBERTa-large models on the GLUE benchmark is also lifted from 88.5 to 88.8, achieving best results on most of its sub-tasks. Experiments also show that FreeLB can boost the performance of RoBERTa-large on question answering tasks, such as the ARC and CommonsenseQA benchmarks. We also provide a comprehensive ablation study and analysis to demonstrate the effectiveness of our training process." 
}, { "heading": "2 RELATED WORK", "text": "" }, { "heading": "2.1 ADVERSARIAL TRAINING", "text": "To improve the robustness of neural networks against adversarial examples, many defense strategies and models have been proposed, in which PGD-based adversarial training (Madry et al., 2018) is widely considered to be the most effective, since it largely avoids the the obfuscated gradient problem (Athalye et al., 2018). It formulates a class of adversarial training algorithms (Kurakin et al., 2017) into solving a minimax problem on the cross-entropy loss, which can be achieved reliably through multiple projected gradient ascent steps followed by a SGD (Stochastic Gradient Descent) step.\nDespite being verified by Athalye et al. (2018) to avoid obfuscated gradients, Qin et al. (2019) shows that PGD-based adversarial training still leads to highly convolved and non-linear loss surfaces when K is small, which could be readily broken under stronger adversaries. Thus, to be effective, the cost of PGD-based adversarial training is much higher than conventional training. To mitigate this cost, Shafahi et al. (2019) proposed a “free” adversarial training algorithm that simultaneously updates both model parameters and adversarial perturbations on a single backward pass. Using a similar formulation, Zhang et al. (2019) effectively reduce the total number of full forward and backward propagations for obtaining adversarial examples by restricting most of its adversarial updates in the first layer." }, { "heading": "2.2 ADVERSARIAL EXAMPLES IN NATURAL LANGUAGES", "text": "Adversarial examples have been explored primarily in the image domain, and received many attention in text domain recently. Previous works on text adversaries have focused on heuristics for creating adversarial examples in the black-box setting, or on specific tasks. Jia & Liang (2017) propose to add distracting sentences to the input document in order to induce mis-classification. Zhao et al. (2018) generate text adversaries by projecting the input data to a latent space using GANs, and searching for adversaries close to the original instance. Belinkov & Bisk (2018) manipulate every word in a sentence with synthetic or natural noise in machine translation systems. Iyyer et al. (2018) propose a neural paraphrase model based on back-translated data to produce paraphrases that have different sentence structures. Different from previous work, ours is not to produce actual adversarial examples, but only take the benefit of adversarial training for natural language understanding.\nWe are not the first to observe that robust language models may perform better on clean test data. Miyato et al. (2017) extend adversarial and virtual adversarial training (Miyato et al., 2019) to the text domain to improve the performance on semi-supervised classification tasks. Ebrahimi et al. (2018) propose a character/word replacement for crafting attacks, and show employing adversarial examples in training renders the models more robust. Ribeiro et al. (2018) show that adversarial attacks can be used as a valuable tool for debugging NLP models. Cheng et al. (2019) also find that crafting adversarial examples can help neural machine translation significantly. Notably, these studies have focused on simple models or text generation tasks. Our work explores how to efficiently use the gradients obtained in adversarial training to boost the performance of state-of-the-art transformer-based models." 
}, { "heading": "3 ADVERSARIAL TRAINING FOR LANGUAGE UNDERSTANDING", "text": "Pre-trained large-scale language models, such as BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019b), ALBERT (Lan et al., 2020) and T5 (Raffel et al., 2019), have proven to be highly effective for downstream tasks. We aim to further improve the generalization of these pre-trained language models on the downstream language understanding tasks by enhancing their robustness in the embedding space during finetuning on these tasks. We achieve this goal by creating “virtual” adversarial examples in the embedding space, and then perform parameter updates on these adversarial embeddings. Creating actual adversarial examples for language is difficult; even with state-of-theart language models as guidance (e.g., (Cheng et al., 2019)), it remains unclear how to construct label-preserving adversarial examples via word/character replacement without human evaluations, because the meaning of each word/character depends on the context (Ribeiro et al., 2018). Since we are only interested in the effects of adversarial training, rather than producing actual adversarial examples, we add norm-bounded adversarial perturbations to the embeddings of the input sentences using a gradient-based method. Note that our embedding-based adversary is strictly stronger than a more conventional text-based adversary, as our adversary can make manipulations on word embeddings that are not possible in the text domain.\nFor models that incorporate various input representations, including word or subword embeddings, segment embeddings and position embeddings, our adversaries only modify the concatenated word or sub-word embeddings, leaving other components of the sentence representation unchanged. 2 Denote the sequence of one-hot representations of the input subwords as Z = [z1, z2, ...,zn], the embedding matrix as V , and the language model (encoder) as a function y = fθ(X), whereX = V Z is the subword embeddings, y is the output of the model (e.g., class probabilities for classification models), and θ denotes all the learnable parameters including the embedding matrix V . We add adversarial perturbations δ to the embeddings such that the prediction becomes y′ = fθ(X + δ). To preserve the semantics, we constrain the norm of δ to be small, and assume the model’s prediction should not change after the perturbation. This formulation is analogous to Miyato et al. (2017), with the difference that we do not requireX to be normalized." }, { "heading": "3.1 PGD FOR ADVERSARIAL TRAINING", "text": "Standard adversarial training seeks to find optimal parameters θ∗ to minimize the maximum risk for any δ within a norm ball as:\nmin θ\nE(Z,y)∼D [\nmax ‖δ‖≤ L(fθ(X + δ), y)\n] , (1)\nwhere D is the data distribution, y is the label, and L is some loss function. We use the Frobenius norm to constrain δ. For neural networks, the outer “min” is non-convex, and the inner “max” is non-concave. Nonetheless, Madry et al. (2018) demonstrated that this saddle-point problem can be solved reliably with SGD for the outer minimization and PGD (a standard method for large-scale constrained optimization, see (Combettes & Pesquet, 2011) and (Goldstein et al., 2014)), for the inner maximization. 
In particular, for the constraint ‖δ‖_F ≤ ε, with an additional assumption that the loss function is locally linear, PGD takes the following step (with step size α) in each iteration:\nδ_{t+1} = Π_{‖δ‖_F≤ε} (δ_t + α·g(δ_t)/‖g(δ_t)‖_F), (2)\nwhere g(δ_t) = ∇_δ L(f_θ(X + δ_t), y) is the gradient of the loss with respect to δ, and Π_{‖δ‖_F≤ε} performs a projection onto the ε-ball. To achieve high-level robustness, multi-step adversarial examples are needed during training, which is computationally expensive. The K-step PGD (K-PGD) requires K forward-backward passes through the network, while the standard SGD update requires only one. As a result, the adversary generation step in adversarial training increases run-time by an order of magnitude, a catastrophic amount when training large state-of-the-art language models." }, { "heading": "3.2 LARGE-BATCH ADVERSARIAL TRAINING FOR FREE", "text": "In the inner ascent steps of PGD, the gradients of the parameters can be obtained with almost no overhead when computing the gradients of the inputs. From this observation, FreeAT (Shafahi et al., 2019) and YOPO (Zhang et al., 2019) have been proposed to accelerate adversarial training. They achieve comparable robustness and generalization to standard PGD-trained models using only the same, or a slightly larger, number of forward-backward passes as natural training (i.e., SGD on clean samples). FreeAT takes one descent step on the parameters together with each of the K ascent steps on the perturbation. As a result, FreeAT may suffer from the “stale gradient” problem (Dutta et al., 2018): in every step t, δ_t does not necessarily maximize the model with parameter θ_t, since its update is based on ∇_δL(f_{θ_{t−1}}(X + δ_{t−1}), y); and vice versa, θ_t does not necessarily minimize the adversarial risk with adversary δ_t, since its update is based on ∇_θL(f_{θ_{t−1}}(X + δ_{t−1}), y). Such a problem may be more significant when the step size is large.\nDifferent from FreeAT, YOPO accumulates the gradient of the parameters from each of the ascent steps, and updates the parameters only once after the K inner ascent steps. YOPO also advocates that after each back-propagation, one should take the gradient of the first hidden layer as a constant and perform several additional updates on the adversary, using the product of this constant and the Jacobian of the first layer of the network, to obtain strong adversaries. However, when the first hidden layer is a linear layer, as in their implementation, such an operation is equivalent to taking a larger step size on the adversary. The analysis backing the extra update steps also assumes a twice continuously differentiable loss, which does not hold for the ReLU-based neural networks they experimented with; thus the reasons for the success of such an algorithm remain obscure.\nAlgorithm 1 “Free” Large-Batch Adversarial Training (FreeLB-K)\nRequire: Training samples X = {(Z, y)}, perturbation bound ε, learning rate τ, ascent steps K, ascent step size α\n1: Initialize θ\n2: for epoch = 1 . . . N_ep do\n3: for minibatch B ⊂ X do\n4: δ_0 ← (1/√N_δ)·U(−ε, ε)\n5: g_0 ← 0\n6: for t = 1 . . . K do\n7: Accumulate gradient of parameters θ:\n8: g_t ← g_{t−1} + (1/K)·E_{(Z,y)∈B}[∇_θ L(f_θ(X + δ_{t−1}), y)]\n9: Update the perturbation δ via gradient ascent:\n10: g_adv ← ∇_δ L(f_θ(X + δ_{t−1}), y)\n11: δ_t ← Π_{‖δ‖_F≤ε}(δ_{t−1} + α·g_adv/‖g_adv‖_F)\n12: end for\n13: θ ← θ − τ·g_K\n14: end for\n15: end for
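To make Algorithm 1 concrete, the following is a minimal PyTorch sketch of its inner loop (ours, not the authors' released code; it assumes a model that accepts embeddings directly, as with HuggingFace's inputs_embeds, and that loss_fn consumes the model's logits; all hyperparameter values are illustrative):

```python
import torch

def freelb_step(model, embeds, labels, loss_fn, optimizer, K=3, eps=0.3, alpha=0.1):
    # embeds: detached subword embeddings X of one mini-batch, shape (B, L, D)
    B = embeds.size(0)
    n = embeds[0].numel()                        # number of perturbed dims per sample
    delta = embeds.new_empty(embeds.size()).uniform_(-eps, eps) / n ** 0.5
    delta.requires_grad_()
    optimizer.zero_grad()
    for _ in range(K):
        loss = loss_fn(model(inputs_embeds=embeds + delta), labels) / K
        loss.backward()                          # accumulates (1/K) * grad wrt theta
        g = delta.grad.detach()
        gn = g.reshape(B, -1).norm(dim=1).clamp_min(1e-12).view(B, 1, 1)
        delta = delta.detach() + alpha * g / gn  # gradient ascent on delta
        dn = delta.reshape(B, -1).norm(dim=1).view(B, 1, 1)
        delta = (delta * (eps / dn).clamp(max=1.0)).requires_grad_()  # project to eps-ball
    optimizer.step()                             # theta <- theta - tau * g_K
```

In practice one would also pass the attention mask and restrict the perturbation to non-padding positions; the sketch omits such details.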
We give empirical comparisons between YOPO and our approach in Sec. 4.3.
To obtain better solutions for the inner max and avoid fundamental limitations on the function class, we propose FreeLB, which performs multiple PGD iterations to craft adversarial examples, and simultaneously accumulates the “free” parameter gradients ∇θL in each iteration. After that, it updates the model parameter θ all at once with the accumulated gradients. The overall procedure is shown in Algorithm 1, in which X + δt is an approximation to the local maximum within the intersection of two balls It = BX+δ0(αt) ∩ BX(ε). By taking a descent step along the averaged gradients at X + δ0, ..., X + δK−1, we approximately optimize the following objective:

min_θ E_{(Z,y)∼D} [ (1/K) Σ_{t=0}^{K−1} max_{δt∈It} L(fθ(X + δt), y) ],  (3)

which is equivalent to replacing the original batch X with a K-times larger virtual batch, consisting of samples whose embeddings are X + δ0, ..., X + δK−1. Compared with PGD-based adversarial training (Eq. 1), which minimizes the maximum risk at a single estimated point in the vicinity of each training sample, FreeLB minimizes the maximum risk at each ascent step at almost no overhead.
Intuitively, FreeLB could be a learning method with lower generalization error than PGD. Sokolic et al. (2017) have proved that the generalization error of a learning method invariant to a set of T transformations may be up to √T smaller than that of a non-invariant learning method. According to their theory, FreeLB could have a more significant improvement over natural training, since FreeLB enforces invariance to K adversaries from a set of up to K different norm constraints,3 while PGD only enforces invariance to a single norm constraint ε.
Empirically, FreeLB does lead to higher robustness and invariance than PGD in the embedding space, in the sense that the maximum increase of loss in the vicinity of X for models trained with FreeLB is smaller than that for models trained with PGD. See Sec. 4.3 for details. In theory, such improved robustness can lead to better generalization (Xu & Mannor, 2012), which is consistent with our experiments. Qin et al. (2019) also demonstrated that the PGD-based method leads to highly convolved and non-linear loss surfaces in the vicinity of input samples when K is small, indicating a lack of robustness." }, { "heading": "3.3 WHEN ADVERSARIAL TRAINING MEETS DROPOUT", "text": "Usually, adversarial training is not used together with dropout (Srivastava et al., 2014). However, for some language models like RoBERTa (Liu et al., 2019b), dropout is used during the finetuning stage. In practice, when dropout is turned on, each ascent step of Algorithm 1 is optimizing δ for a different network. Specifically, denote the dropout mask as m with each entry mi ∼ Bernoulli(p). Similar to our analysis for FreeAT, the ascent step from δt−1 to δt is based on ∇δ L(fθ(mt−1)(X + δt−1), y), so δt is sub-optimal for L(fθ(mt)(X + δ), y). Here θ(m) denotes the effective parameters under dropout mask m.
The more plausible solution is to use the same m in each step. When applying dropout to any network, the objective for θ is to minimize the expectation of the loss under different networks determined by the dropout masks, which is achieved by minimizing the Monte Carlo estimation of the expected loss.
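Concretely, “using the same m in each step” can be sketched (our illustration; the helper name is hypothetical) as sampling one inverted-dropout mask per minibatch and applying it manually at every ascent step:

import torch

def sample_dropout_mask(shape, keep_prob, device):
    # One mask m with entries ~ Bernoulli(keep_prob), scaled as in inverted dropout.
    return torch.bernoulli(torch.full(shape, keep_prob, device=device)) / keep_prob

# Inside the FreeLB inner loop: draw mask = sample_dropout_mask(...) once per
# minibatch and apply h = h * mask at every ascent step, instead of calling
# nn.Dropout, which would sample a fresh mask (a different network) each time.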
In our case, the objective becomes:

min_θ E_{(Z,y)∼D, m∼M} [ (1/K) Σ_{t=0}^{K−1} max_{δt∈It} L(fθ(m)(X + δt), y) ],  (4)

where the 1-sample Monte Carlo estimation should be (1/K) Σ_{t=0}^{K−1} max_{δt∈It} L(fθ(m0)(X + δt), y) and can be minimized by using FreeLB with dropout mask m0 in each ascent step. This is similar to applying Variational Dropout to RNNs as used in Gal & Ghahramani (2016)." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we provide a comprehensive analysis of FreeLB through extensive experiments on three Natural Language Understanding benchmarks: GLUE (Wang et al., 2019), ARC (Clark et al., 2018) and CommonsenseQA (Talmor et al., 2019). We also compare the robustness and generalization of FreeLB with other adversarial training algorithms to demonstrate its strength. Additional experimental details are provided in the Appendix." }, { "heading": "4.1 DATASETS", "text": "GLUE Benchmark. The GLUE benchmark is a collection of 9 natural language understanding tasks, namely the Corpus of Linguistic Acceptability (CoLA; Warstadt et al. (2018)), Stanford Sentiment Treebank (SST; Socher et al. (2013)), Microsoft Research Paraphrase Corpus (MRPC; Dolan & Brockett (2005)), Semantic Textual Similarity Benchmark (STS; Agirre et al. (2007)), Quora Question Pairs (QQP; Iyer et al. (2017)), Multi-Genre NLI (MNLI; Williams et al. (2018)), Question NLI (QNLI; Rajpurkar et al. (2016)), Recognizing Textual Entailment (RTE; Dagan et al. (2006); Bar Haim et al. (2006); Giampiccolo et al. (2007); Bentivogli et al. (2009)) and Winograd NLI (WNLI; Levesque et al. (2011)). 8 of the tasks are formulated as classification problems and only STS-B is formulated as regression, but FreeLB applies to all of them. For BERT-base, we use the HuggingFace implementation4, and follow the single-task finetuning procedure as in Devlin et al. (2019). For RoBERTa, we use the fairseq implementation5. Same as Liu et al. (2019b), we also use single-task finetuning for all dev set results, and start with MNLI-finetuned models on RTE, MRPC and STS-B for the test submissions.
ARC Benchmark. The ARC dataset (Clark et al., 2018) is a collection of multi-choice science questions from grade-school level exams. It is further divided into the ARC-Challenge set with 2,590 question answer (QA) pairs and the ARC-Easy set with 5,197 QA pairs. Questions in ARC-Challenge are more difficult and cannot be handled by simply using a retrieval and co-occurrence based algorithm (Clark et al., 2018). A typical question is:
Which property of a mineral can be determined just by looking at it?
(A) luster [correct] (B) mass (C) weight (D) hardness.
CommonsenseQA Benchmark. The CommonsenseQA dataset (Talmor et al., 2019) consists of 12,102 natural language questions that require human commonsense reasoning ability to answer. A typical question is:
Where can I stand on a river to see water falling without getting wet?
(A) waterfall, (B) bridge [correct], (C) valley, (D) stream, (E) bottom.
Each question has five candidate answers from ConceptNet (Speer et al., 2017). To make the question more difficult to solve, most answers have the same relation in ConceptNet to the key concept in the question. As shown in the above example, most answers can be connected to “river” by the “AtLocation” relation in ConceptNet. For a fair comparison with the reported results in papers and on the leaderboard6, we use the official random split 1.11.

3 The cardinality of the set is approximately min{K, ⌈(ε − E[‖δ0‖])/α⌉ + 1}.
4 https://github.com/huggingface/pytorch-transformers
5 https://github.com/pytorch/fairseq
6 https://www.tau-nlp.org/csqa-leaderboard
}, { "heading": "4.2 EXPERIMENTAL RESULTS", "text": "GLUE We summarize results on the dev sets of GLUE in Table 1, comparing the proposed FreeLB against other adversarial training algorithms (PGD (Madry et al., 2018) and FreeAT (Shafahi et al., 2019)). We use the same step size α and number of steps m for PGD, FreeAT and FreeLB. FreeLB is consistently better than the two baselines. Comparisons and detailed discussions about YOPO (Zhang et al., 2019) are provided in Sec. 4.3. We have also submitted our results to the evaluation server, with results provided in Table 2. FreeLB lifts the performance of the BERT-base model from 78.3 to 79.4, and the RoBERTa-large model from 88.5 to 88.8, on overall scores.
ARC For ARC, a corpus of 14 million related science documents (from the ARC Corpus, Wikipedia and other sources) is provided. For each QA pair, we first use a retrieval model to select the top 10 related documents. Then, given these retrieved documents7, we use the RoBERTa-large model to encode 〈s〉 Retrieved Documents 〈/s〉 Question + Answer 〈/s〉, where 〈s〉 and 〈/s〉 are special tokens for the RoBERTa model8. We then apply a fully-connected layer to the representation of the [CLS] token to compute the final logit, and use the standard cross-entropy loss for model training.
Results are summarized in Table 3. Following Sun et al. (2018), we first finetune the RoBERTa model on the RACE dataset (Lai et al., 2017). The finetuned RoBERTa model achieves 85.70% and 85.24% accuracy on the development and test set of RACE, respectively. Based on this, we further finetune the model on both the ARC-Easy and ARC-Challenge datasets with the same hyper-parameter searching strategy (for 5 epochs), which achieves 84.13%/64.44% test accuracy on ARC-Easy/ARC-Challenge. By adding FreeLB finetuning, we reach 84.81%/65.36%, a significant boost on the ARC benchmark, demonstrating the effectiveness of FreeLB.
To further improve the results, we apply a multi-task learning (MTL) strategy using additional datasets. We first finetune the model on RACE (Lai et al., 2017), and then finetune on a joint dataset of ARC-Easy, ARC-Challenge, OpenbookQA (Mihaylov et al., 2018) and Regents Living Environment9. Based on this, we further finetune our model on ARC-Easy and ARC-Challenge with FreeLB. After finetuning, our single model achieves 67.75% test accuracy on ARC-Challenge and 85.44% on ARC-Easy, both outperforming the best submission on the official leaderboard10.
CommonsenseQA Similar to the training strategy in Liu et al. (2019b), we construct five inputs for each question by concatenating the question and each answer separately, then encode each input with the representation of the [CLS] token. A final score is calculated by applying a fully-connected layer to the representation of [CLS]. Following the fairseq repository11, the input is formatted as: “〈s〉 Q: Where can I stand on a river to see water falling without getting wet? 〈/s〉 A: waterfall 〈/s〉”, where ‘Q:’ and ‘A:’ are the prefixes for question and answer, respectively.
Results are summarized in Table 3. We obtained a dev-set accuracy of 77.56% with the RoBERTa-large model. When using FreeLB finetuning, we achieved 78.81%, a 1.25% absolute gain. Compared with the result reported in the fairseq repository, which obtains 78.43% accuracy on the dev set, FreeLB still achieves better performance. Our submission to the CommonsenseQA leaderboard achieves 72.2% single-model test set accuracy, and the result of a 20-model ensemble is 73.1%, which ranks No. 1 among all submissions that do not make use of ConceptNet.

7 We thank the AristoRoBERTa team for providing retrieved documents and the additional Regents Living Environments dataset.
8 Equivalent to the [CLS] and [SEP] tokens in BERT.
9 https://www.nysedregents.org/livingenvironment
10 https://leaderboard.allenai.org/arc/submissions/public and https://leaderboard.allenai.org/arc_easy/submissions/public
11 https://github.com/pytorch/fairseq/tree/master/examples/roberta/commonsense_qa
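A minimal sketch (ours; the actual implementation follows fairseq, and the class name, shapes and encoder interface are assumptions) of the five-way scoring scheme described above:

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiChoiceHead(nn.Module):
    # Scores each "<s> Q: ... </s> A: ... </s>" candidate from its <s>/[CLS] state.
    def __init__(self, hidden_dim):
        super().__init__()
        self.scorer = nn.Linear(hidden_dim, 1)

    def forward(self, cls_states):                  # (batch, n_choices, hidden_dim)
        return self.scorer(cls_states).squeeze(-1)  # (batch, n_choices) logits

# Training: loss = F.cross_entropy(head(cls_states), answer_index)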
}, { "heading": "4.3 ABLATION STUDY AND ANALYSIS", "text": "In this sub-section, we first show the importance of reusing the dropout mask, then conduct a thorough ablation study on FreeLB over the GLUE benchmark to analyze the robustness and generalization strength of different approaches. We observe that it is unnecessary to perform shallow-layer updates on the adversary as YOPO does in our case, and that FreeLB results in improved robustness and generalization compared with PGD.
Importance of Reusing Mask Table 4 (columns 2 to 4) compares the results of FreeLB with and without reusing the same dropout mask in each ascent step, as proposed in Sec. 3.3. With reusing, FreeLB achieves a larger improvement over the naturally trained models. Thus, we enable mask reusing for all experiments involving RoBERTa.
Comparing the Robustness Table 5 provides comparisons of the maximum increment of loss in the vicinity of each sample, defined as:

∆Lmax(X, ε) = max_{‖δ‖≤ε} L(fθ(X + δ), y) − L(fθ(X), y),  (5)

which reflects the robustness and invariance of the model in the embedding space. In practice, we use PGD steps as in Eq. 2 to find the value of ∆Lmax(X, ε). We found that when using a step size of 5·10−3 and ε = 0.01‖X‖F, the PGD iterations converge to almost the same value, starting from 100 different random initializations of δ, for the RoBERTa models trained with or without FreeLB. This indicates that PGD reliably finds ∆Lmax for these models. Therefore, we compute ∆Lmax(X, ε) for each X via a 2000-step PGD.
Samples with small margins exist even for models with perfect accuracy, which could give a false sense of vulnerability of the model. To rule out the outlier effect and make ∆Lmax(X, ε) comparable across different samples, we only consider samples that all the evaluated models can correctly classify, and search for an ε for each sample such that the reference model can correctly classify all samples within the ε-ball.12 However, such a choice of per-sample ε favors the reference model by design. To make fair comparisons, Table 5 provides the median of ∆Lmax(X, ε) with per-sample ε from models trained by FreeLB (Max Inc) and PGD (Max Inc (R)), respectively.
Across all three datasets and different reference models, FreeLB has the smallest median increment, even when starting from a larger natural loss than the vanilla models. This demonstrates that FreeLB is more robust and invariant in most cases. Such results are also consistent with the models’ dev set performance (the performances of the Vanilla/PGD/FreeLB models on RTE, CoLA and MRPC are 86.69/87.41/89.21, 69.91/70.84/71.40, and 91.67/91.17/91.17, respectively).

12 For each sample, we start from a value of ε slightly larger than the norm constraint used during training, and then decrease it linearly until the reference model can correctly classify the sample after a 2000-step PGD attack. The reference model is either trained with FreeLB or PGD.
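As a sketch (ours) of how Eq. 5 can be estimated in code, reusing project_frobenius from the sketch in Sec. 3.1 and a fixed-step PGD attack:

import torch

def max_loss_increase(model, loss_fn, X, y, eps, step, n_steps=2000):
    # Estimate Delta L_max(X, eps) = max_{||delta||_F <= eps} L(f(X+delta), y) - L(f(X), y).
    clean_loss = loss_fn(model(X), y).detach()
    delta = torch.zeros_like(X, requires_grad=True)
    for _ in range(n_steps):
        loss = loss_fn(model(X + delta), y)
        (g,) = torch.autograd.grad(loss, delta)
        g_norm = g.view(g.size(0), -1).norm(dim=1).clamp_min(1e-12)
        with torch.no_grad():
            delta += step * g / g_norm.view(-1, 1, 1)
            delta.copy_(project_frobenius(delta, eps))
    return (loss_fn(model(X + delta), y) - clean_loss).detach()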
Comparing with YOPO The original implementation of YOPO (Zhang et al., 2019) chooses the first convolutional layer of the ResNets as f0 for updating the adversary in the “s-loop”. As a result, each step of the “s-loop” uses exactly the same value to update the adversary, and YOPO-m-n degenerates into FreeLB with an n-times larger step size. To avoid this, we choose the layers up to the output of the first Transformer block as f0 when implementing YOPO. To make the total amount of update on the adversary equal, we take the hyper-parameters for FreeLB-m and only change the step size α into α/n for YOPO-m-n. Table 4 shows that FreeLB performs consistently better than YOPO on all three datasets. Incidentally, we also give results comparing with YOPO-m-n without changing the step size α for YOPO in Table 8. The gap between the two approaches seems to shrink, which may be caused by using a larger total step size for the YOPO adversaries. We leave an exhaustive hyperparameter search for both models as future work." }, { "heading": "5 CONCLUSION", "text": "In this work, we have developed an adversarial training approach, FreeLB, to improve natural language understanding. The proposed approach adds perturbations to continuous word embeddings using a gradient method, and minimizes the resultant adversarial risk in an efficient way. FreeLB is able to boost Transformer-based models (BERT and RoBERTa) on several datasets and achieve a new state of the art on the GLUE and ARC benchmarks. Our empirical study demonstrates that our method results in both higher robustness in the embedding space than natural training and better generalization ability. Such observations are also consistent with recent findings in Computer Vision. However, adversarial training still incurs significant overhead compared with vanilla SGD. How to accelerate this process while improving generalization is an interesting future direction.
Acknowledgements: Goldstein and Zhu were supported in part by the DARPA GARD, DARPA QED for RML, and AFOSR MURI programs." }, { "heading": "A ADDITIONAL EXPERIMENTAL DETAILS", "text": "A.1 PROBLEM FORMULATIONS
For tasks with a ranking loss, like ARC, CommonsenseQA, WNLI and QNLI, we add the perturbation to the concatenation of the embeddings of all question/answer pairs.
Additional tricks are required to achieve high performance on WNLI and QNLI for the GLUE benchmark. We use the same tricks as Liu et al. (2019b). For WNLI, we use the same WSC data provided by Liu et al. (2019b) for training. For testing, Liu et al. (2019b) also provided the test set with span annotations, but the order is different from the GLUE dataset; we re-order their test set by matching. For QNLI, we follow Liu et al. (2019b) and formulate the problem as a pairwise ranking problem, the same as for CommonsenseQA. We find the matching pairs for both the training set and the test set by matching the queries in the dev set. We predict “entailment” if the candidate has the higher score, and “not entailment” otherwise.
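The pairwise ranking rule just stated can be sketched as (ours; score stands for the model's matched-pair scorer, a hypothetical name):

def qnli_predict(score, query, cand_a, cand_b):
    # Of two candidates matched to the same query, the higher-scoring one is
    # predicted "entailment" and the other "not entailment".
    if score(query, cand_a) > score(query, cand_b):
        return "entailment", "not_entailment"
    return "not_entailment", "entailment"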
A.2 HYPER-PARAMETERS
Like other adversarial training methods, FreeLB introduces three additional hyper-parameters: the step size α, the maximum perturbation ε, and the number of steps m. For all other hyper-parameters, such as the learning rate and the number of iterations, we either search in the same interval as RoBERTa (on CommonsenseQA, ARC, and WNLI), or use exactly the same setting as RoBERTa (except for MRPC, where we find that using a learning rate of 5 × 10−6 gives better results).13 We list the best combinations of α, ε and m for each of the GLUE tasks in Table 6. For WSC/WNLI, the best combination is ε = 1e−2, α = 5e−3, m = 2. Notice that even when mα < ε, the maximum perturbation could still reach ε due to the random initialization.
B VARIANCE OF MAXIMUM INCREMENT OF LOSS
Table 7 provides the complete results for the increment of loss in the interval, with median and standard deviation." }, { "heading": "C ADDITIONAL RESULTS FOR ABLATION STUDIES", "text": "Here we provide some additional results for the comparison with YOPO, complementary to Table 4. We will release the complete results for comparing with YOPO and without variational dropout on each of the GLUE tasks in our next revision. From the current results, there is no need to use the extra shallow-layer updates that YOPO advocates, since this consistently deteriorates the performance while introducing extra computations.
13 https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.glue.md" } ]
2020
FREELB: ENHANCED ADVERSARIAL TRAINING FOR NATURAL LANGUAGE UNDERSTANDING
SP:1dae5dd9635962d35767ee1a5a4da01170e18029
[ "This paper proposes a modeling approach for norm and metric learning that ensures triangle inequalities are satisfied by the very design of the architecture. The main idea is that convexity together with homogeneity imply subadditivity, so starting from an input-convex architecture and using activations that preserve homogeneity implies the resulting model is sub-additive at every point. This architecture is used to model a norm, and in conjunction with an embedding - a metric. The authors also propose a mixture-based approach that combines a given set of metrics into a new one using a max-mean approach. Universal approximation results are presented for both architectures. The results are illustrated on a few mostly synthetic examples including metric nearness for random matrices, value functions for maze MDPs and distances between nodes on a graph (some problems here are sourced from open street map).", "This manuscript proposes a general framework to learn non-Euclidean distances from data using neural networks. The authors provide a combination of theoretical and experimental results in support of the use of several neural architectures to learn such distances. In particular, the develop “deep norms” and “wide norms”, based either on a deep or shallow neural network. Metrics are elaborated based on norms by combining them with a learnt embedding function mapping the input space de R^n. Theoretical results are mostly application textbook results and intuitive, the overall work forms a coherent line of research bridging theory and applications that sets well justified reference approaches for this topic." ]
Distances are pervasive in machine learning. They serve as similarity measures, loss functions, and learning targets; it is said that a good distance measure solves a task. When defining distances, the triangle inequality has proven to be a useful constraint, both theoretically—to prove convergence and optimality guarantees—and empirically—as an inductive bias. Deep metric learning architectures that respect the triangle inequality rely, almost exclusively, on Euclidean distance in the latent space. Though effective, this fails to model two broad classes of subadditive distances, common in graphs and reinforcement learning: asymmetric metrics, and metrics that cannot be embedded into Euclidean space. To address these problems, we introduce novel architectures that are guaranteed to satisfy the triangle inequality. We prove our architectures universally approximate norm-induced metrics on Rn, and present a similar result for modified Input Convex Neural Networks. We show that our architectures outperform existing metric approaches when modeling graph distances and have a better inductive bias than non-metric approaches when training data is limited in the multi-goal reinforcement learning setting.1
[ { "affiliations": [], "name": "TRIANGLE INEQUALITY" }, { "affiliations": [], "name": "Silviu Pitis" }, { "affiliations": [], "name": "Harris Chan" }, { "affiliations": [], "name": "Kiarash Jamali" }, { "affiliations": [], "name": "Jimmy Ba" } ]
[ { "authors": [ "Brandon Amos", "Lei Xu", "J. Zico Kolter" ], "title": "Input convex neural networks", "venue": "Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Peter Anderson", "Angel Chang", "Devendra Singh Chaplot", "Alexey Dosovitskiy", "Saurabh Gupta", "Vladlen Koltun", "Jana Kosecka", "Jitendra Malik", "Roozbeh Mottaghi", "Manolis Savva" ], "title": "On evaluation of embodied navigation agents", "venue": "arXiv preprint arXiv:1807.06757,", "year": 2018 }, { "authors": [ "Cem Anil", "James Lucas", "Roger Grosse" ], "title": "Sorting out lipschitz function approximation", "venue": "arXiv preprint arXiv:1811.05381,", "year": 2018 }, { "authors": [ "F.L. Bauer", "J. Stoer", "C. Witzgall" ], "title": "Absolute and monotonic norms", "venue": "Numerische Mathematik,", "year": 1961 }, { "authors": [ "Yanhong Bi", "Bin Fan", "Fuchao Wu" ], "title": "Beyond mahalanobis metric: cayley-klein metric learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2015 }, { "authors": [ "Arijit Biswas", "David W Jacobs" ], "title": "An efficient algorithm for learning distances that obey the triangle inequality", "venue": "In BMVC,", "year": 2015 }, { "authors": [ "Jean Bourgain" ], "title": "On lipschitz embedding of finite metric spaces in hilbert space. Israel", "venue": "Journal of Mathematics,", "year": 1985 }, { "authors": [ "Justin Brickell", "Inderjit S Dhillon", "Suvrit Sra", "Joel A Tropp" ], "title": "The metric nearness problem", "venue": "SIAM Journal on Matrix Analysis and Applications,", "year": 2008 }, { "authors": [ "Jane Bromley", "Isabelle Guyon", "Yann LeCun", "Eduard Säckinger", "Roopak Shah" ], "title": "Signature verification using a\" siamese\" time delay neural network", "venue": "In Advances in neural information processing systems,", "year": 1994 }, { "authors": [ "Ying-Cong Chen", "Wei-Shi Zheng", "Jian-Huang Lai", "Pong C Yuen" ], "title": "An asymmetric distance model for cross-view feature mapping in person reidentification", "venue": "IEEE transactions on circuits and systems for video technology,", "year": 2016 }, { "authors": [ "Yize Chen", "Yuanyuan Shi", "Baosen Zhang" ], "title": "Optimal control via neural networks: A convex approach", "venue": "arXiv preprint arXiv:1805.11835,", "year": 2018 }, { "authors": [ "Thomas Cover", "Peter Hart" ], "title": "Nearest neighbor pattern classification", "venue": "IEEE transactions on information theory,", "year": 1967 }, { "authors": [ "Ian Davidson", "Sekharipuram S Ravi" ], "title": "Using instance-level constraints in agglomerative hierarchical clustering: theoretical and empirical results", "venue": "Data mining and knowledge discovery,", "year": 2009 }, { "authors": [ "A.C. Gilbert", "L. 
Jain" ], "title": "If it ain’t broke, don’t fix it: Sparse metric repair", "venue": null, "year": 2017 }, { "authors": [ "Aditya Grover", "Jure Leskovec" ], "title": "node2vec: Scalable feature learning for networks", "venue": "In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2016 }, { "authors": [ "Frank S He", "Yang Liu", "Alexander G Schwing", "Jian Peng" ], "title": "Learning to play in a day: Faster deep reinforcement learning by optimality tightening", "venue": "arXiv preprint arXiv:1611.01606,", "year": 2016 }, { "authors": [ "Elad Hoffer", "Nir Ailon" ], "title": "Deep metric learning using triplet network", "venue": "Similarity-Based Pattern Recognition,", "year": 2015 }, { "authors": [ "Cheng-Kang Hsieh", "Longqi Yang", "Yin Cui", "Tsung-Yi Lin", "Serge Belongie", "Deborah Estrin" ], "title": "Collaborative metric learning", "venue": "In Proceedings of the 26th International Conference on World Wide Web,", "year": 2017 }, { "authors": [ "Piotr Indyk" ], "title": "Sublinear time algorithms for metric space problems", "venue": "In Proceedings of the Thirtyfirst Annual ACM Symposium on Theory of Computing,", "year": 1999 }, { "authors": [ "Eric Jones", "Travis Oliphant", "Pearu Peterson" ], "title": "SciPy: Open source scientific tools for Python, 2001. URL http://www.scipy.org/. [Online; accessed <today>", "venue": null, "year": 2001 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Nathan Linial", "Eran London", "Yuri Rabinovich" ], "title": "The geometry of graphs and some of its algorithmic applications", "venue": null, "year": 1995 }, { "authors": [ "Frank Nielsen", "Boris Muzellec", "Richard Nock" ], "title": "Classification with mixtures of curved mahalanobis metrics", "venue": "In Image Processing (ICIP),", "year": 2016 }, { "authors": [ "Hyun Oh Song", "Yu Xiang", "Stefanie Jegelka", "Silvio Savarese" ], "title": "Deep metric learning via lifted structured feature embedding", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Mingdong Ou", "Peng Cui", "Jian Pei", "Ziwei Zhang", "Wenwu Zhu" ], "title": "Asymmetric transitivity preserving graph embedding", "venue": "In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2016 }, { "authors": [ "Matthias Plappert", "Marcin Andrychowicz", "Alex Ray", "Bob McGrew", "Bowen Baker", "Glenn Powell", "Jonas Schneider", "Josh Tobin", "Maciek Chociej", "Peter Welinder" ], "title": "Multi-goal reinforcement learning: Challenging robotics environments and request for research", "venue": "arXiv preprint arXiv:1802.09464,", "year": 2018 }, { "authors": [ "Stuart J Russell", "Peter Norvig" ], "title": "Artificial intelligence: a modern approach", "venue": "Malaysia; Pearson Education", "year": 2016 }, { "authors": [ "Tom Schaul", "Daniel Horgan", "Karol Gregor", "David Silver" ], "title": "Universal value function approximators", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Walter J Scheirer", "Michael J Wilber", "Michael Eckmann", "Terrance E Boult" ], "title": "Good recognition is non-metric", "venue": "Pattern Recognition,", "year": 2014 }, { "authors": [ "Florian Schroff", "Dmitry Kalenichenko", "James Philbin" ], "title": "Facenet: A unified embedding for face 
recognition and clustering", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Jake Snell", "Kevin Swersky", "Richard Zemel" ], "title": "Prototypical networks for few-shot learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Suvrit Sra", "Joel Tropp", "Inderjit S Dhillon" ], "title": "Triangle fixing algorithms for the metric nearness problem", "venue": "In Advances in Neural Information Processing Systems,", "year": 2005 }, { "authors": [ "Richard S Sutton", "Joseph Modayil", "Michael Delp", "Thomas Degris", "Patrick M Pilarski", "Adam White", "Doina Precup" ], "title": "Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction", "venue": "In The 10th International Conference on Autonomous Agents and Multiagent Systems-Volume", "year": 2011 }, { "authors": [ "N. Veldt", "D. Gleich", "A. Wirth", "J. Saunderson" ], "title": "A Projection Method for Metric-Constrained Optimization", "venue": null, "year": 2018 }, { "authors": [ "Ivan Vendrov", "Ryan Kiros", "Sanja Fidler", "Raquel Urtasun" ], "title": "Order-embeddings of images and language", "venue": "arXiv preprint arXiv:1511.06361,", "year": 2015 }, { "authors": [ "Luke Vilnis", "Andrew McCallum" ], "title": "Word representations via gaussian embedding", "venue": "arXiv preprint arXiv:1412.6623,", "year": 2014 }, { "authors": [ "Qi Wang", "Jia Wan", "Yuan Yuan" ], "title": "Deep metric learning for crowdedness regression", "venue": "IEEE Transactions on Circuits and Systems for Video Technology,", "year": 2018 }, { "authors": [ "Eric P Xing", "Michael I Jordan", "Stuart J Russell", "Andrew Y Ng" ], "title": "Distance metric learning with application to clustering with side-information", "venue": "In Advances in neural information processing systems,", "year": 2003 }, { "authors": [ "Liu Yang", "Rong Jin" ], "title": "Distance metric learning: A comprehensive survey", "venue": "Michigan State Universiy,", "year": 2006 }, { "authors": [ "Dong Yi", "Zhen Lei", "Shengcai Liao", "Stan Z Li" ], "title": "Deep metric learning for person re-identification", "venue": "In Pattern Recognition (ICPR),", "year": 2014 }, { "authors": [ "Deep Norm", "Wide Norm" ], "title": "Training Regime The triangle fixing algorithm was allowed to go up to 400 iterations or convergence. For the symmetric case we reimplemented Sra et al. (2005) in Python using their C++ code as a template. For the asymmetric version, we used the same code but we removed their symmetrization step. For the training of all networks, we did 1500 epochs in total split up in to 500 epoch chunks", "venue": null, "year": 2005 }, { "authors": [ "Grover", "Leskovec" ], "title": "2016), but found that our noisy landmark approach was both faster to run and produced results that were an order of magnitude better for all algorithms. Architectures We compare a 128-dimensional Mahalanobis metric (equivalent to a deep Euclidean metric with an additional layer), a Wide Norm based Neural Metric (32 components of 32 units, with 5-unit concave activations, and MaxMean global pooling), a plain Deep Norm", "venue": null, "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Many machine learning tasks involve a distance measure over the input domain. A good measure can make a once hard task easy, even trivial. In many cases—including graph distances, certain clustering algorithms, and general value functions in reinforcement learning (RL)—it is either known that distances satisfy the triangle inequality, or required for purposes of theoretical guarantees; e.g., speed and loss guarantees in k-nearest neighbors and clustering (Cover and Hart, 1967; Indyk, 1999; Davidson and Ravi, 2009), or optimality guarantees for A∗ search (Russell and Norvig, 2016). This also makes the triangle inequality a potentially useful inductive bias for learning distances. For these reasons, numerous papers have studied different ways to learn distances that satisfy the triangle inequality (Xing et al., 2003; Yang and Jin, 2006; Brickell et al., 2008; Kulis et al., 2013).\nThe usual approach to enforcing the triangle inequality in deep metric learning (Yi et al., 2014; Hoffer and Ailon, 2015; Wang et al., 2018) is to use a Siamese network (Bromley et al., 1994) that computes a Euclidean distance in the latent space. Specifically, the Siamese network models distance dX : X × X → R+ on domain X by learning embedding φ : X → Rn and computing dX (x, y) as ‖φ(x)− φ(y)‖2. Successful applications include collaborative filtering (Hsieh et al., 2017), few-shot learning (Snell et al., 2017), and multi-goal reinforcement learning (Schaul et al., 2015). The use of Euclidean distance, however, has at least two downsides. First, the Euclidean architecture cannot represent asymmetric metrics, which arise naturally in directed graphs and reinforcement learning. Second, it is well known that for some metric spaces (X , dX ), including large classes of symmetric graphs (e.g., constant-degree expanders and k-regular graphs), there is no embedding φ : X → Rn that can model dX precisely using ‖ · ‖2 (Indyk et al., 2017). A classic example is shown in Figure 1. In part due to these issues, some have considered non-architectural constraints. He et al. (2016) impose a triangle inequality constraint in RL via an online, algorithmic penalty. Implementing such a penalty can be expensive, and does not provide any guarantees. An approach that does guarantee\n1Code available at https://github.com/spitis/deepnorms\nPublished as a conference paper at ICLR 2020\n... ... ... ... ... ... Cleaned up version\nBlah\nBlah\nBlah\nNorm MSE\nEuclidean, Rn, ∀n 0.057 Deep Norm, R2 0.000 Wide Norm, R2 0.000\nDeep Norm Wide Norm Mahalanobis\n1 0 11.5\n1.0\n0.5\n0.0\n0.5\n1.0\n1.5\nA B\nC D\n1 0 1\n1\n0\n1\nA B\nC D\n1 0 1\n1.0\n0.5\n0.0\n0.5\n1.0\nA\nB\nC\nD\nFig. 1: The nodes in the graph (left) cannot be embedded into any Rn so that edge distances are represented by the Euclidean metric: points φ(A) and φ(D) must lie at the midpoint of the segment from φ(B) to φ(C)—but then φ(A) and φ(D) coincide, which is incorrect. Our models fit the data in R2 (middle). The visualization (right) shows learned norm balls in red and embeddings in blue.\nsatisfaction of triangle inequality is to fix any violations after learning, as done by Brickell et al. (2008). 
Is it possible to impose the triangle inequality architecturally, without the downsides of Euclidean distance?
In response to this question, we present the following contributions: (1) three novel neural network architectures, Deep Norms, Wide Norms and Neural Metrics, which model symmetric and asymmetric norms and metrics, (2) universal approximation theorems for Deep Norms, Wide Norms, and modified Input Convex Neural Networks (Amos et al., 2017), and (3) empirical evaluations of our models on several tasks: modeling norms, metric nearness, modeling shortest path lengths, and learning a general value function (Sutton et al., 2011). Our models are guaranteed to satisfy the triangle inequality, straightforward to implement, and may be used in place of the usual Euclidean metric should one seek to model asymmetry or increase expressiveness." }, { "heading": "2 MODELING NORMS", "text": "" }, { "heading": "2.1 PRELIMINARIES", "text": "Our goal is to construct expressive models of metrics and quasi-metrics on domain X. A metric is a function d : X × X → R+ satisfying, ∀x, y, z ∈ X:
M1 (Non-negativity). d(x, y) ≥ 0.
M2 (Definiteness). d(x, y) = 0 ⇐⇒ x = y.
M3 (Subadditivity). d(x, z) ≤ d(x, y) + d(y, z).
M4 (Symmetry). d(x, y) = d(y, x).
Since we care mostly about the triangle inequality (M3), we relax the other axioms and define a quasi-metric as a function that is M1 and M3, but not necessarily M2 or M4. Given a weighted graph G = (V, E) with non-negative weights, shortest path lengths define a quasi-metric between vertices. When X is a vector space (we assume over R), many common metrics, e.g., Euclidean and Manhattan distances, are induced by a norm. A norm is a function ‖·‖ : X → R satisfying, ∀x, y ∈ X, α ∈ R+:
N1 (Pos. def.). ‖x‖ > 0, unless x = 0.
N2 (Pos. homo.). α‖x‖ = ‖αx‖, for α ≥ 0.
N3 (Subadditivity). ‖x + y‖ ≤ ‖x‖ + ‖y‖.
N4 (Symmetry). ‖x‖ = ‖–x‖.
An asymmetric norm is N1-N3, but not necessarily N4. An (asymmetric) semi-norm is non-negative, N2 and N3 (and N4), but not necessarily N1. We will use the fact that any asymmetric semi-norm ‖ · ‖ induces a quasi-metric using the rule d(x, y) = ‖x − y‖, and first construct models of asymmetric semi-norms. Any induced quasi-metric d is translation invariant—d(x, y) = d(x + z, y + z)—and positive homogeneous—d(αx, αy) = αd(x, y) for α ≥ 0. If ‖ · ‖ is symmetric (N4), so is d (M4). If ‖ · ‖ is N1, d is M2. Metrics that are not translation invariant (e.g., Bi et al. (2015)) or positive homogeneous (e.g., our Neural Metrics in Section 3) cannot be induced by a norm.
A convex function f : X → R is a function satisfying C1: ∀x, y ∈ X, α ∈ [0, 1]: f(αx + (1 − α)y) ≤ αf(x) + (1 − α)f(y). The commonly used ReLU activation, relu(x) = max(0, x), is convex." }, { "heading": "2.2 DEEP NORMS", "text": "It is easy to see that any N2 and N3 function is convex—thus, all asymmetric semi-norms are convex. This motivates modeling norms as constrained convex functions, using the following proposition.
Proposition 1. All positive homogeneous convex functions are subadditive; i.e., C1 ∧ N2 ⇒ N3.
The proof is straightforward (put α = 1/2 in C1 and apply N2 to the left side). To use Proposition 1, we begin with the Input Convex Neural Network (ICNN) (Amos et al., 2017) architecture, which satisfies C1, and further constrain it to be non-negative and satisfy N2. The resulting Deep Norm architecture is guaranteed to be an asymmetric semi-norm.
A k-layer Deep Norm is defined as:

hi = gi(W+i hi−1 + Uix),  for i = 1 . . . k, with output ‖x| = hk,  (1)

where x is the input, h0 = 0, W+1 = 0, the activation functions gi preserve C1 and N2 (element-wise), gk is non-negative, W+i is a non-negative matrix, and Ui is an unconstrained matrix.
As compared to the original ICNN architecture, we have omitted the bias terms from Equation 1, have constrained the gi to preserve positive homogeneity while also allowing them to be any function that preserves element-wise convexity (this is essential to our universal approximation results), and have required gk to be non-negative. It is easy to verify that the set of valid element-wise gi is {gαβ(x) = α relu(x) + βx | α, β ≥ 0}. This includes ReLUs and leaky ReLUs. But we do not restrict ourselves to element-wise activations. Inspired by GroupSort (Anil et al., 2018), we use activations that depend on multiple inputs (and preserve element-wise C1 and N2). In particular, we use the pairwise MaxReLU:

maxrelu(x, y) = [max(x, y), α relu(x) + β relu(y)], where α, β ≥ 0.  (2)

Deep Norms are N2 and N3. Using the following propositions, we may also impose N1 and N4.
Proposition 2. If ‖ · | is an asymmetric semi-norm, then ‖x‖ = ‖x| + ‖–x| is a semi-norm.
Proposition 3. If ‖ · ‖a is an (asymmetric) semi-norm, ‖ · ‖b is a norm (e.g., ‖ · ‖b = ‖ · ‖2), and λ > 0, then ‖x‖a+λb = ‖x‖a + λ‖x‖b is an (asymmetric) norm." }, { "heading": "2.3 WIDE NORMS", "text": "In addition to Deep Norms, we propose the following alternative method for constructing norms: a Wide Norm is any combination of (asymmetric) (semi-) norms that preserves N1-N4. It is easy to verify that both (1) non-negative sums and (2) max are valid combinations (indeed, these properties were also used to construct Deep Norms), and so the vector-wise MaxMean combination is valid: maxmean(x1, x2, . . . , xn) = α max(x1, x2, . . . , xn) + (1 − α) mean(x1, x2, . . . , xn).
Although the family of Wide Norms is broad, for computational reasons to be discussed in Subsection 3.5, we focus our attention on the Wide Mahalanobis norm. References to “Wide Norms” in the rest of this paper refer to Wide Mahalanobis norms. The Mahalanobis norm of x ∈ Rn, parameterized by W ∈ Rm×n, is defined as ‖x‖W = ‖Wx‖2. It is easily verified that ‖ · ‖W is a proper norm when W is a non-singular (square) matrix, and a semi-norm when W is singular or m < n.
A k-component Mixture of Mahalanobis norm (hereafter Wide Norm, or Wide Norm with k Euclidean components) is defined as the maxmean of k Mahalanobis norms:

‖x‖ = maxmeani(‖Wix‖2), where Wi ∈ Rmi×n with mi ≤ n.  (3)

Wide Norms are symmetric by default, and must be asymmetrized to obtain asymmetric (semi-) norms. We use the below property (Bauer et al., 1961) and propositions (proofs in Appendix A).
N5. ‖ · ‖ is monotonic in the positive orthant if 0 ≤ x ≤ y (element-wise) implies ‖x‖ ≤ ‖y‖.
Proposition 4. If ‖ · ‖ is an N5 (semi-) norm on R2n, then ‖x| = ‖relu(x :: –x)‖, where :: denotes concatenation, is an asymmetric (semi-) norm on Rn.
Proposition 5. The Mahalanobis norm with W = DU, with D diagonal and U non-negative, is N5." }, { "heading": "2.4 UNIVERSAL APPROXIMATION OF CONVEX FUNCTIONS AND NORMS", "text": "How expressive are Deep Norms and Wide Norms? Although it was empirically shown that ICNNs have “substantial representation power” (Amos et al., 2017), the only prior work characterizing the approximation power of ICNNs uses a narrow network with infinite depth, which does not reflect typical usage (Chen et al., 2018).
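Before turning to approximation power, a minimal PyTorch sketch of the two constructions above (ours, with simplifications: plain ReLU activations instead of MaxReLU, non-negative weights via a softplus reparameterization, and a fixed MaxMean coefficient):

import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepNorm(nn.Module):
    # ||x| = h_k with h_i = g_i(W_i^+ h_{i-1} + U_i x); no biases, W_i^+ >= 0 (Eq. 1).
    def __init__(self, dim, hidden, layers):
        super().__init__()
        outs = [hidden] * (layers - 1) + [1]
        self.U = nn.ModuleList(nn.Linear(dim, o, bias=False) for o in outs)
        self.W = nn.ParameterList(nn.Parameter(0.1 * torch.randn(o, hidden))
                                  for o in outs[1:])

    def forward(self, x):
        h = F.relu(self.U[0](x))                      # W_1^+ = 0, h_0 = 0
        for U, W in zip(self.U[1:], self.W):
            h = F.relu(h @ F.softplus(W).t() + U(x))  # softplus keeps W^+ non-negative
        return h.squeeze(-1)                          # final ReLU gives an output >= 0

class WideNorm(nn.Module):
    # maxmean_i ||W_i x||_2 over k Mahalanobis components (Eq. 3).
    def __init__(self, dim, k, m, alpha=0.5):
        super().__init__()
        self.Ws = nn.Parameter(0.1 * torch.randn(k, m, dim))
        self.alpha = alpha

    def forward(self, x):                             # x: (batch, dim)
        comps = torch.einsum('kmd,bd->bkm', self.Ws, x).norm(dim=-1)   # (batch, k)
        return self.alpha * comps.max(dim=1).values + (1 - self.alpha) * comps.mean(dim=1)

The softplus reparameterization is one convenient way to keep W+ non-negative under unconstrained optimization; clipping or squaring would serve equally well.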
One of our key contributions is a series of universal approximation results for ICNNs (with MaxReLU activations), Deep Norms and Wide Norms that use a more practical infinite width construction. The next lemma is central to our results.\nLemma 1 (Semilattice Stone-Weierstrass (from below)). Let C be a set of continuous functions defined on compact subset K of Rn, L be a closed subset of C, and f ∈ C. If (1) for every x ∈ K, there exists gx ∈ L such that gx ≤ f and gx(x) = f(x), and (2) L is closed under max (i.e., a, b ∈ L⇒ max(a, b) ∈ L), then ∃h ∈ L with f = h on K.\nIntuitively, Lemma 1 and its proof (Appendix A) say we can approximate continuous f arbitrarily well with a family L of functions that is closed under maximums if we can “wrap” f from below using functions gx ∈ L with gx ≤ f . Our Universal Approximation (UA) results are now straightforward. Theorem 1 (UA for MICNNs). The familyM of Max Input Convex Neural Networks (MICNNs) that uses pairwise max-pooling (or MaxReLU) activations is dense in the family C of convex functions. Proof. For f ∈ C, x ∈ Rn, let gx ∈ M be a linear function whose hyperplane in the graph of f is tangent to f at x. Then gx satisfies condition (1) of Lemma 1 (because f is convex). The use of pairwise max activations allows one to construct max(h1, h2) ∈ M for any two h1, h2 ∈ M by using log2(n) max-pooling layers, satisfying condition (2) of Lemma 1. Thus f is in the closure of M, and the result follows.\nThis result applies when MaxReLUs are used, since we can set α, β = 0. Using MaxReLU (rather than max) guarantees that MICNNs can imitate regular ICNNs with at most double the parameters. Theorem 2 (UA for Deep Norms and Wide Norms). The familiesD of Deep Norms (using MaxReLU) andW of Wide Norms (using MaxMean) are dense in the family N of asymmetric semi-norms. Proof (sketch). The proof is almost identical to that of Theorem 1, except that here D andW contain all linear functions whose graph is tangent to any f ∈ N since f is N2. This is easy to see for functions defined on R2. See Appendix A for more details." }, { "heading": "2.5 MODELING NORMS IN 2D", "text": "Having shown that Deep Norms and Wide Norms universally approximate norms, we now show that they can successfully learn to approximate random norms on R2 when trained using gradient descent. To generate data, we use the below fact, proved in Appendix A. Proposition 6. The set of all asymmetric norms on Rn is in one-to-one correspondence with the set of all bounded and open convex sets (“unit balls”) containing the origin.\nWe use Proposition 6 by generating a random point set, computing its convex hull, and using the hull as the unit ball of the generated norm, ‖ · ‖. We then train different models to approximate ‖ · ‖ using |D| ∈ {16, 128} training samples of form ((xη, yη), η), where ‖(x, y)‖=1 and η ∼ U(0.85, 1.15). The models are trained to minimize the mean squared error (MSE) between the predicted (scalar) norm value and the ground truth norm value. See Appendix C.1 for details.\nFigure 3 illustrates the learned 2D norm balls of a random symmetric (top) and asymmetric (bottom) norm, for three architectures: Deep Norm, Wide Norm, and an unconstrained, fully connected neural network (MLP). The blue contours in Figure 3 are the ground truth norm balls for values {0.5, 1, 1.5}, and black contours the norm balls of the learned approximations. With the small dataset, the MLP\noverfits to the training data and generalizes poorly, while Deep Norm and Wide Norm generalize almost perfectly. 
With the large dataset, we observe that Deep Norms and Wide Norms generalize to larger and smaller norm balls (due to being N2), whereas the MLP is unable to generalize to the 0.5 norm ball (indeed, MLP−1(0.5) = ∅). While the symmetric Wide Norm fits well, the results suggest that the asymmetrized Wide Norm is not as effective as the naturally asymmetric Deep Norm. See Appendix C for additional details and visualizations." }, { "heading": "3 MODELING METRICS", "text": "Having constructed models of asymmetric semi-norms on Rn, we now use them to induce quasimetrics on a domain X . As the geometry of X ’s raw feature space will not, in general, be amenable to a favorable metric (indeed, the raw features need not be in Rn), we assume the Siamese network approach and learn an embedding function, φ : X → Rn. A Deep Norm or Wide Norm ‖ · ‖θ, with parameters θ, is defined on Rn and the metric over X is induced as dφ,θ(x, y) = ‖φ(y) − φ(x)‖θ. We could also define ‖ · ‖θ using an unconstrained neural network (MLP), but this would not be guaranteed to satisfy the norm axioms and induce a quasi-metric.\nTo illustrate this approach, we revisit Figure 1. To get the results shown, we represent the four nodes as one hot vectors and embed them into R2 using φ(x) = Wx, W ∈ R2×4. We then define the norm on R2 as either a Mahalanobis norm, a Deep Norm, or a Wide Norm. Training the norm and φ together, end-to-end with gradient descent, produces the Figure 1 results.\nThe choices are summarized in Table 1. While Deep Norm and Wide Norm have similar properties, the depth of the former suggests that they can learn efficient, hierarchical representations (as do deep neural networks); however, Wide Norms have a computational advantage when computing pairwise distances for large minibatches, which we explore in Subsection 3.5. Since inducing metrics with Deep Norms or Wide Norms produces a potentially unnecessary homogeneity constraint (due to N2), the next Subsection considers Neural metrics, which offer an approach to relaxing this constraint." }, { "heading": "3.1 NEURAL METRICS", "text": "Instead of inducing a metric with a norm, we could define a metric directly as a non-negative weighted sum of different metrics. E.g., as we did for Wide Norms, we can define a Wide Metric as the mean of k deep Euclidean metrics. If all components of a Wide Metric are norm-induced, however, it can be induced directly by a single Wide Norm with k components, by setting φ(x) to be the concatenation of the φi(x), and using each Wj to select the indices corresponding to φj . It follows that for a family of Wide Metrics to be more expressive than the family of norm-induced metrics, it must include components that are either not translation invariant or not positive homogeneous. We consider the latter, and use the propositions below to modify our norm-induced metrics (proofs in Appendix A).\nProposition 7 (Metric-preserving concave functions). If d : X × X → R+ is (quasi-) metric, f : R+ → R+, f−1(0) = {0}, and f is concave (i.e., −f is convex), then f ◦ d is (quasi-) metric. Proposition 8 (Max preserves metrics). If d1 and d2 are (quasi-) metric, so too is max(d1, d2).\nTo use Proposition 7, we note that each unit in the final layer of a Deep Norm defines a metric (assuming gk−1 is non-negative, as it is when using ReLU or MaxReLU activations), as does\neach component of a Wide Norm. 
Thus, by applying metric-preserving functions fi to these metrics, and then combining them using a MaxMean, mean or max, we obtain a valid metric, which we name a Neural Metric. Our k-component fi are parameterized by wi, bi ∈ Rk as follows: fi(x) = minj{wijx+ bij}, where wij ≥ 0, bij ≥ 0, b0 = 0. The advantage of Neural Metrics over plain Deep Norms and Wide Norms is that one can better model certain metrics, such as d(x, y) = min(1, ‖x− y‖). This type of metric might be induced by shortest path lengths in a fully connected graph with some maximum edge weight (for example, a navigation problem with a teleportation device that takes some constant time to operate).\n3.2 APPLICATION: METRIC NEARNESS\nJ (S) MN # (S) ¬M3 J (A) MN # (A) ¬M3\nTF 2.01e-2 7.7e3 1.30e-1 3.8e2 Eucl 9.12e-2 0 2.02e-1 0 WN 2.44e-2 0 1.86e-1 0 DN 2.00e-2 0 7.29e-2 0\ndistance) to the original metric. Sra et al. (2005) proved this loss function attains its unique global minimum in the set of size n discrete metrics and proposed an O(n3) triangle fixing (“TF”) algorithm for the symmetric case, which iteratively fixes M3 violations and is guaranteed to converge to the global minimum. Table 2 shows that for n = 200, our models (Deep Norm (DN) and Wide Norm (WN) based Neural Metrics) achieve results comparable to that found by 400 iterations of TF in the symmetric case. We note that TF, which approaches the solution through the space of all n× n matrices, produces a non-metric approximation, so that the number of M3 violations (#¬M3) is greater than 0; this said, it is possible to fix these violations by adding a small constant to all entries of the matrix, which does not appreciably increase the loss JMN . To test our models in the asymmetric case, we modified the TF algorithm; although our modified TF found a solution , our DN model performed significantly better. See Appendix D for complete experimental details.\nAlthough triangle fixing is effective for small n, there is no obvious way to scale it to large datasets or to metric approximations of non-metric distance functions defined on a continuous domain. Using our models in these settings is natural, as we demonstrate in the next subsection." }, { "heading": "3.3 APPLICATION: MODELING GRAPH DISTANCES", "text": "In the previous subsection we sought to “fix” a noisy, but small and fully observable metric; we now test the generalization ability of our models using large (n > 100K) metrics. We do this on the task of modeling shortest path lengths in a weighted graph G = (V, E). So long as edge weights are positive and the graph is connected, shortest path lengths are discrete quasi-metrics (n = |V|), and provide an ideal domain for a comparison to the standard Euclidean approach.\nOur experiments are structured as a supervised learning task, with inputs x, y ∈ V and targets d(x, y). The targets for a random subset of 150K pairs (of the O(|V |2) total pairs) were computed beforehand\nusing A∗ search and normalized to have mean 50. 10K were used as a test set, and the remainder was subsampled to form training sets of size |D| ∈ {1K, 2K, 5K, 10K, 20K, 50K}. Nodes were represented using noisy landmark embeddings (see Appendix E). We compare Wide Norms, Deep Norms (ICNN style, with ReLU activations, DNI ), and Deep Norm based Neural Metrics (with MaxReLU and concave activations, DNN ) to the standard Siamese-style Bromley et al. (1994) deep Euclidean metric and MLPs in six graphs, three symmetric and three not, summarized in Table 3a and described in Appendix E. 
, { "heading": "3.3 APPLICATION: MODELING GRAPH DISTANCES", "text": "In the previous subsection we sought to “fix” a noisy, but small and fully observable metric; we now test the generalization ability of our models using large (n > 100K) metrics. We do this on the task of modeling shortest path lengths in a weighted graph G = (V, E). So long as edge weights are positive and the graph is connected, shortest path lengths are discrete quasi-metrics (n = |V|), and provide an ideal domain for a comparison to the standard Euclidean approach.
Our experiments are structured as a supervised learning task, with inputs x, y ∈ V and targets d(x, y). The targets for a random subset of 150K pairs (of the O(|V|2) total pairs) were computed beforehand using A∗ search and normalized to have mean 50. 10K were used as a test set, and the remainder was subsampled to form training sets of size |D| ∈ {1K, 2K, 5K, 10K, 20K, 50K}. Nodes were represented using noisy landmark embeddings (see Appendix E). We compare Wide Norms, Deep Norms (ICNN style, with ReLU activations, DNI), and Deep Norm based Neural Metrics (with MaxReLU and concave activations, DNN) to the standard Siamese-style (Bromley et al., 1994) deep Euclidean metric and to MLPs in six graphs, three symmetric and three not, summarized in Table 3a and described in Appendix E. Though MLPs do not induce proper metrics, they are expressive function approximators and serve as a relevant reference point. The results (shown for |D| = 50K in Table 3b and expanded upon in Appendix E) show that our models tend to outperform the basic Siamese (Euclidean/Mahalanobis) network, and oftentimes the MLP reference point as well." }, { "heading": "3.4 APPLICATION: LEARNING GENERAL VALUE FUNCTIONS", "text": "In this section, we evaluate our models on the task of learning a Universal Value Function Approximator (UVFA) in goal-oriented reinforcement learning (Sutton et al., 2011; Schaul et al., 2015). Figure 4 (left) illustrates the two 11x11 grid world environments, 4-Room and Maze, in which we conduct our experiments. Each environment has a symmetric version, where the reward is constant (-1), and an asymmetric version with state-action dependent reward. We create a training set of transitions (s, s′, g, r, d) for the state, next state, goal, reward, and done flag, and the UVFA Vθ(s, g) is trained to minimize the temporal difference error via bootstrapped updates with no discounting. We tested several architectures at different training set sizes, leaving out a portion of transitions such that only a fraction of states or goals were present during training. See Appendix F.1 for details.
Figure 5 summarizes the final generalization performance (on held out (s, g) pairs) for each architecture after training for 1000 epochs. Performance is given in terms of the SPL metric (Anderson et al., 2018), which measures the success rate of reaching the goal weighted by the ratio of agent versus optimal path cost. We observe that when given only partial training data, Deep Norm and Wide Norm consistently outperform both MLPs and ICNNs at each sparsity level. As expected, the Euclidean metric could not solve the asymmetric environments. Qualitatively, we plot a heatmap of the squared difference (SE) between the learned and ground truth value function in Figure 4 (center), and observe that our proposed architectures mostly have lower SE than the MLP and ICNN architectures. Figure 4 (right) illustrates that the Wide Norm architecture is able to reach more cells (with almost optimal trajectory) in the environment when using a greedy policy with respect to the learned value function. Additional visualizations and results are in Appendix F.2.
3.5 COMPUTATIONAL CONSIDERATIONS
Several deep metric learning algorithms, such as semi-hard triplet mining (Schroff et al., 2015), involve computing a pairwise distance matrix for large minibatches. For example, Schroff et al. use mini-batches of size 1800. Computing this matrix for Euclidean distances can be done efficiently by taking advantage of the identity ‖x − y‖₂² = ‖x‖₂² + ‖y‖₂² − 2xᵀy (see Section 4 of Oh Song et al. (2016) for details). This same identity can be applied to each component of a Wide Norm. There is no obvious way, however, to compute pairwise Deep Norms more efficiently than the naive O(n²) approach. To quantify, we recorded the pairwise distance computation time for our implementations (Table 6). Thus, although our previous experiments often found that Deep Norms performed slightly better (see, e.g., Figure 3 and Tables 2 and 3b), these results suggest that only Wide Norms are practical for large mini-batches." }
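The identity extends componentwise; a sketch (ours) of batched pairwise Wide Norm distances:

import torch

def pairwise_sq_dists(E):
    # All pairwise ||a - b||^2 via ||a||^2 + ||b||^2 - 2 a.b, batched over leading dims.
    sq = (E ** 2).sum(-1)
    return (sq.unsqueeze(-1) + sq.unsqueeze(-2) - 2 * E @ E.transpose(-1, -2)).clamp_min(0)

def wide_norm_pairwise(Ws, X, alpha=0.5):
    # Ws: (k, m, d) Mahalanobis components; X: (b, d) embedded batch -> (b, b) distances.
    E = torch.einsum('kmd,bd->kbm', Ws, X)        # project the batch once per component
    comps = pairwise_sq_dists(E).sqrt()           # (k, b, b) per-component distances
    return alpha * comps.max(dim=0).values + (1 - alpha) * comps.mean(dim=0)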
, { "heading": "4 DISCUSSION AND OTHER RELATED WORK", "text": "Deep Norms, Wide Norms, and Neural Metrics all respect the triangle inequality while universally approximating finite-dimensional asymmetric semi-norms. This allows them to represent metrics that the deep Euclidean Siamese architecture cannot, no matter how deep its embedding function is (see Figure 1). Our models thus provide a more expressive, non-Euclidean alternative for learning distances that satisfy the triangle inequality. As noted in the Introduction, this may be useful for providing running time and error rate guarantees in clustering (Davidson and Ravi, 2009) and as an inductive bias to improve generalization performance (Figure 5; Hsieh et al. (2017)).
A line of theoretical work, surveyed by Indyk et al. (2017), characterizes the representational capacity of Euclidean space (and, more generally, ℓp space) by examining the asymptotic “distortion” of embedding n-point metric spaces into Euclidean space. Here the distortion of an embedding φ : X → X′ of metric space (X, dX) into metric space (X′, dX′) is at most c ≥ 1 if there exists r > 0 such that for all x, y ∈ X, r · dX(x, y) ≤ dX′(φ(x), φ(y)) ≤ cr · dX(x, y). The pioneering work of Bourgain (1985) bounds worst case distortion by O(log n), and Linial et al. (1995) shows that this bound is tight (i.e., Θ(log n)) for graph distances in n-point constant-degree expanders. This applies regardless of the dimensionality of the embedding space, and is true for all ℓp with 1 ≤ p < ∞. Future work might investigate the asymptotic distortion of our proposed architectures.
On account of the above limitations, others have also proposed non-Euclidean alternatives to learning and representing metrics. To improve expressivity, some have proposed non-parametric algorithms that learn distances directly. For instance, Biswas and Jacobs (2015) propose to frame clustering as a metric nearness problem and apply a quadratic programming algorithm; see also Gilbert and Jain (2017) and Veldt et al. (2018). Others learn parametric distances, as we do. Bi et al. (2015) parameterize and learn Cayley-Klein metrics, which are not translation-invariant. Yang and Jin (2006) and Nielsen et al. (2016) propose metrics that are similar to our Wide Norm, in that they use non-negative sums of several components (binary codes and Cayley-Klein metrics). As for symmetry, several papers have used asymmetric measures such as KL divergence (Vilnis and McCallum, 2014; Vendrov et al., 2015; Chen et al., 2016; Ou et al., 2016). To our knowledge, we are the first to propose a parametric measure that is both asymmetric and satisfies the triangle inequality.
Another common approach in deep “metric” learning is to forgo the triangle inequality altogether and use non-metric similarity measures such as cosine similarity (Bromley et al., 1994; Yi et al., 2014). This can be sensible, as proper metrics are not always required. In image recognition, for example, the triangle inequality is questionable: why should d(CAT, WOLF) ≤ d(CAT, DOG) + d(DOG, WOLF)? In this case, Scheirer et al. (2014) find that non-metric similarities often perform better, concluding that “good recognition is non-metric”. Thus, one should apply our models with care, in settings where the triangle inequality is thought to be a useful inductive bias (e.g., Hsieh et al. (2017)). Applications that we are excited about include learning search heuristics (Russell and Norvig, 2016) and scaling our UVFA models to more complex reinforcement learning problems (Plappert et al., 2018)." }, { "heading": "5 CONCLUSION", "text": "This paper proposed three novel architectures for modeling asymmetric semi-norms and metrics.
They can be used instead of the usual Euclidean metric to increase expressiveness while respecting the triangle inequality. We showed that our models outperform Euclidean metrics when used with a Siamese-style deep metric learning architecture to (1) solve the metric nearness problem, (2) model shortest path lengths, and (3) learn general value functions in small reinforcement learning domains. Future work should explore larger-scale applications such as facial recognition (Schroff et al., 2015), collaborative metric learning (Hsieh et al., 2017) and continuous control (Plappert et al., 2018)." }, { "heading": "ACKNOWLEDGMENTS", "text": "We thank Cem Anil, James Lucas, Mitchell Stern, Michael Zhang and the anonymous reviewers for helpful discussions. Harris Chan was supported by an NSERC CGS-M award. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute (www.vectorinstitute.ai/#partners)." }, { "heading": "A PROOFS", "text": "In this Appendix, we restate each proposition from the main text and provide a short proof.
Proposition 1. All positive homogeneous convex functions are subadditive; i.e., C1 ∧ N2 ⇒ N3.
Proof. Putting α = 0.5 in the definition of C1 gives f(0.5x + 0.5y) ≤ 0.5f(x) + 0.5f(y). Applying N2 on the left and multiplying by 2 gives f(x + y) ≤ f(x) + f(y), as desired.
Proposition 2. If ‖ · | is an asymmetric semi-norm, then ‖x‖ = ‖x| + ‖ –x| is a semi-norm.
Proof. ‖x‖ = ‖x| + ‖ –x| = ‖ –x‖, so N4 is satisfied. N2-N3 and non-negativity are similarly trivial.
Proposition 3. If ‖ · ‖a is an (asymmetric) semi-norm, ‖ · ‖b is a norm (e.g., ‖ · ‖b = ‖ · ‖2), and λ > 0, then ‖x‖a+λb = ‖x‖a + λ‖x‖b is an (asymmetric) norm.
Proof. This follows from the positive definiteness of λ‖x‖b and the easily verified fact that non-negative weighted sums of semi-norms are semi-norms.
Proposition 4. If ‖ · ‖ is an N5 (semi-) norm on R2n, then ‖x| = ‖relu(x :: –x)‖, where :: denotes concatenation, is an asymmetric (semi-) norm on Rn.
Proof. N1 and N2 are easily verified. To see that ‖ · | is not necessarily symmetric, choose ‖ · ‖ to be a Mahalanobis norm parameterized by a diagonal W with Wii = i (but NB that if ‖ · ‖ is invariant to element-wise permutations, as are the Lp norms, ‖ · | will be symmetric). For N3, we have:
$\|x| + \|y| = \|\mathrm{relu}(x :: -x)\| + \|\mathrm{relu}(y :: -y)\| \geq \|\mathrm{relu}(x :: -x) + \mathrm{relu}(y :: -y)\| \geq \|\mathrm{relu}(x + y :: -(x + y))\| = \|x + y|.$
The first inequality holds because ‖ · ‖ is N3. The second holds because ‖ · ‖ is N5, and we have either element-wise equality (when sign(xi) and sign(yi) agree) or domination (when they don’t).
Proposition 5. The Mahalanobis norm with W = DU, with D diagonal and U non-negative, is N5.
Proof. Let ‖ · ‖ be ‖ · ‖2, and 0 ≤ x ≤ y = x + ε. We have:
$\|Wx\|\,\|Wy\| \geq |(Wx)^\top (Wy)| = x^\top W^\top W (x + \varepsilon) = x^\top W^\top W x + x^\top U^\top D^\top D U \varepsilon \geq \|Wx\|^2.$
The first inequality is Cauchy-Schwarz. The second holds as all elements of x, $U^\top D^\top D U$, and ε are non-negative. A quick numerical check of this construction is sketched below.
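As a small numerical illustration of Propositions 4 and 5 (our own sketch, not from the paper), the snippet below builds ‖x| = ‖relu(x :: −x)‖ on top of a diagonal Mahalanobis norm with Wii = i, and spot-checks asymmetry, the triangle inequality N3, and positive homogeneity N2 on random vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
w = np.arange(1, 2 * n + 1, dtype=float)  # diagonal Mahalanobis weights W_ii = i on R^{2n}

def maha(z):
    # Mahalanobis norm ||Wz||_2 with diagonal positive W; elementwise monotone (N5),
    # since W = DU with D = diag(w) and U = I non-negative (Proposition 5)
    return np.linalg.norm(w * z)

def asym_norm(x):
    # ||x| = ||relu(x :: -x)|| from Proposition 4
    return maha(np.maximum(np.concatenate([x, -x]), 0.0))

x, y = rng.standard_normal(n), rng.standard_normal(n)
print(asym_norm(x), asym_norm(-x))                              # generally differ: asymmetry
assert asym_norm(x + y) <= asym_norm(x) + asym_norm(y) + 1e-9   # triangle inequality N3
assert abs(asym_norm(2 * x) - 2 * asym_norm(x)) < 1e-9          # positive homogeneity N2
```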
Lemma 1 (Semilattice Stone-Weierstrass (from below)). Let C be a set of continuous functions defined on a compact subset K of Rn, L be a closed subset of C, and f ∈ C. If (1) for every x ∈ K, there exists gx ∈ L such that gx ≤ f and gx(x) = f(x), and (2) L is closed under max (i.e., a, b ∈ L ⇒ max(a, b) ∈ L), then ∃h ∈ L with f = h on K.
Proof. Fix ε > 0 and consider the sets Ux = {y | y ∈ K, f(y) − ε < gx(y)}, for all x ∈ K. The Ux form an open cover of K, so there is a finite subcover {Ux1, Ux2, . . . , Uxn}. Let g = max(gx1, gx2, . . . , gxn) ∈ L. We have f − ε ≤ g ≤ f, and the result follows since L is closed.
Theorem 1 (UA for MICNNs). The family M of Max Input Convex Neural Networks (MICNNs) that uses pairwise max-pooling (or MaxReLU) activations is dense in the family C of convex functions.
Proof. For f ∈ C, x ∈ Rn, let gx ∈ M be a linear function whose hyperplane in the graph of f is tangent to f at x. Then gx satisfies condition (1) of Lemma 1 (because f is convex). The use of pairwise max activations allows one to construct max(h1, h2) ∈ M for any two h1, h2 ∈ M by using log2(n) max-pooling layers, satisfying condition (2) of Lemma 1. Thus f is in the closure of M, and the result follows.
Theorem 2 (UA for Deep Norms and Wide Norms). The families D of Deep Norms (using MaxReLU) and W of Wide Norms (using MaxMean) are dense in the family N of asymmetric semi-norms.
Proof. The proof is almost identical to that of Theorem 1. The only subtlety is that D and W do not contain all linear functions in Rn; they do, however, contain all linear functions whose hyperplane is tangent to any f ∈ N, since f is N2. This is easy to see for functions defined on R2. To generalize the intuition to Rn, consider the ray Rx = {(αx :: α‖x‖) | α ≥ 0} ⊂ Rn+1 defined for each x ∈ Rn. By N2, this ray is a subset of the graph g ⊂ Rn+1 of f. Furthermore, any hyperplane tangent to one point on this ray is tangent to the entire ray and contains all points on the ray, since the ray is linear from the origin; therefore the hyperplane contains the origin. But any hyperplane tangent to g at x is tangent to a point (x) on the ray Rx, and so contains the origin. Since D and W contain all linear functions containing the origin, it follows that they contain all linear functions whose graph is tangent to f, in satisfaction of condition (1) of Lemma 1 (because f is convex). For Deep Norms, the use of pairwise max activations allows one to construct a global max operation, as in the proof of Theorem 1, satisfying condition (2) of Lemma 1. Wide Norms using MaxMean have direct access to a global max, and so satisfy condition (2) of Lemma 1. It follows that f is in the closures of D and W.
Proposition 6. The set of all asymmetric norms on Rn is in one-to-one correspondence with the set of all bounded and open convex sets (“unit balls”) containing the origin.
Proof. Given an asymmetric norm ‖ · ‖ on Rn, its unit ball B1 := {x | ‖x‖ < 1} is convex since, ∀x, y ∈ B1, λ ∈ [0, 1], we have ‖(1 − λ)x + λy‖ ≤ (1 − λ)‖x‖ + λ‖y‖ ≤ 1 (using N3 & N2). Conversely, given an open and bounded convex set B1 containing the origin, let $\|x\| = \inf\{\alpha > 0 \mid \frac{x}{\alpha} \in B_1\}$. N1 and N2 are straightforward, and N3 follows by noting that for x, y ∈ Rn and α, β > 0 such that $\frac{x}{\alpha}, \frac{y}{\beta} \in B_1$, we have $\frac{\alpha}{\alpha+\beta}\cdot\frac{x}{\alpha} + \frac{\beta}{\alpha+\beta}\cdot\frac{y}{\beta} = \frac{x+y}{\alpha+\beta} \in B_1$, so that $\inf\{\gamma > 0 \mid \frac{x+y}{\gamma} \in B_1\} \leq \inf\{\alpha > 0 \mid \frac{x}{\alpha} \in B_1\} + \inf\{\beta > 0 \mid \frac{y}{\beta} \in B_1\}$.
Proposition 7 (Metric-preserving concave functions). If d : X × X → R+ is (quasi-) metric, f : R+ → R+, f−1(0) = {0}, and f is concave (i.e., −f is convex), then f ◦ d is (quasi-) metric.
Proof. This proposition is proven for metrics by Doboš (1998) (Chapter 1, Theorem 3). The extension to quasi-metrics is immediate, as the proof does not require symmetry.
Proposition 8 (Max preserves metrics). If d1 and d2 are (quasi-) metric, so too is max(d1, d2).
Proof. M1 and M2 are trivial. For M3, let d = max(d1, d2).
Given some x, y, z, we have both d1(x, y) + d1(y, z) ≥ d1(x, z) and d2(x, y) + d2(y, z) ≥ d2(x, z), so that d(x, y) + d(y, z) ≥ d1(x, z) and d(x, y) + d(y, z) ≥ d2(x, z). Therefore, d(x, y) + d(y, z) ≥ max(d1(x, z), d2(x, z)) = d(x, z), since a quantity that upper-bounds two values also upper-bounds their maximum.
Erratum dated July 6, 2020 The originally published version of our paper did not cite the prior universal approximation result for ICNNs by Chen et al. (2018), which we were not aware of at the time. The text of Subsection 2.4 has been revised to reflect this prior work.
B IMPLEMENTATION DETAILS
Except where otherwise noted, these implementation details are common throughout our experiments.
Our Deep Norm implementation constrains each Wi for i < k to be non-negative by clipping the parameter matrix after each gradient update. For Wk, we use either a simple mean or MaxMean (see Subsection 2.3). We set Uk = 0. We use either ReLU or MaxReLU for our activations gi, as noted.
Since the output layer is always scalar (size 1), we refer to Deep Norms in terms of their hidden layers only. Thus, a Deep Norm with k = 3 layers of sizes (400, 400, 1) is a “2x400” Deep Norm.
Our Wide Norm implementation avoids parameterizing the αi in Equation 3 by absorbing them into the weight matrices, so that $\|x\| = \frac{1}{k}\sum_i \|U_i x\|_2$, where $U_i = \alpha_i k W_i$. Our asymmetric Wide Norm constrains the matrix U of Proposition 5 by clipping negative values and does not use the matrix D (i.e., we set D = I).
Neither our Deep Norm nor Wide Norm implementations impose positive definiteness. This simplifies our architectures, does not sacrifice representational power, and allows them to be used on pseudometric problems (where d(x, y) = 0 is possible for distinct x, y ∈ X). Neural Metrics involve only a very small modification to Deep Norms and Wide Norms: before applying the mean or MaxMean global activation to obtain the final output, we apply an element-wise concave activation function, as described in Subsection 3.1.
C 2D NORMS ADDITIONAL DETAILS
Dataset generation To generate data, we use Proposition 6 by generating a random point set, computing its convex hull, and using the hull as the unit ball of the generated norm, ‖ · ‖. Having obtained the unit ball for a random norm, we sample a set of N = 500 unit vectors in R2 (according to L2) in random directions, then scale the vectors until they intersect the convex hull: $\{x^{(i)}\}_{i=1}^{N}$. We use these vectors for testing data, defined as $D_{test} = \{(x^{(1)}, 1), \ldots, (x^{(N)}, 1)\}$. For training data, we sample a subset of size |D| ∈ {16, 128} of these vectors and multiply them by random perturbations $\varepsilon^{(i)} \sim U(0.85, 1.15)$ to get $\{\hat{x} \mid \hat{x}^{(i)} = x^{(i)}\varepsilon^{(i)}\}$. The training data is then defined as $D_{train} = \{(\hat{x}^{(1)}, \varepsilon^{(1)}), \ldots, (\hat{x}^{(k)}, \varepsilon^{(k)})\}$. A sketch of this generation procedure is given below.
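The following SciPy/NumPy sketch (our own reading of the recipe above) builds a random symmetric norm from a convex hull as in Proposition 6, samples unit-norm test vectors by scaling random directions to the hull boundary, and forms the perturbed training set.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)

# Random symmetric point cloud -> convex hull = unit ball of a norm (Proposition 6).
pts = rng.normal(0.0, 0.5, size=(60, 2))
pts = np.vstack([pts, -pts])            # symmetrize so the induced norm is symmetric
hull = ConvexHull(pts)
normals, offsets = hull.equations[:, :2], hull.equations[:, 2]  # n.x + b <= 0 inside

def scale_to_hull(u):
    """Largest t with t*u on the hull boundary (origin is interior by symmetry)."""
    a = normals @ u
    return np.min(-offsets[a > 0] / a[a > 0])

# Test set: N vectors of unit norm (w.r.t. the random norm) with label 1.
N = 500
angles = rng.uniform(0, 2 * np.pi, N)
dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
X_test = np.array([scale_to_hull(u) * u for u in dirs])
y_test = np.ones(N)

# Training set: perturbed copies; the label is the perturbation factor eps.
k = 16
idx = rng.choice(N, size=k, replace=False)
eps = rng.uniform(0.85, 1.15, k)
X_train, y_train = X_test[idx] * eps[:, None], eps
```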
Models We compare 4 model types on 4 types of norms. The models are (1) Mahalanobis, (2) Deep Norms of depth ∈ {2, 3, 4, 5} and layer size ∈ {10, 50, 250}, (3) Wide Norms of width ∈ {2, 10, 50} and number of components ∈ {2, 10, 50}, and (4) unconstrained, fully connected neural networks with ReLU activations of depth ∈ {2, 3, 4, 5} and layer size ∈ {10, 50, 250} (MLPs). Although MLPs do not generally satisfy N1-N4, we are interested in how they generalize without the architectural inductive bias.
Norms The norms we experiment on are random (1) symmetric and (2) asymmetric norms, constructed as above, and (3) square (L∞), and (4) diamond (L1) norms.
Training The models, parameterized by θ, are trained to minimize the mean squared error (MSE) between the predicted (scalar) norm value $\|\hat{x}^{(i)}\|_\theta$ and the label norm value $\varepsilon^{(i)}$: $L(\theta) = \frac{1}{M}\sum_{i=1}^{M} (\|\hat{x}^{(i)}\|_\theta - \varepsilon^{(i)})^2$, where M = 16 is the batch size. We use the Adam optimizer (Kingma and Ba, 2014), with learning rate 1e-3, and train for a maximum of 5000 epochs.
Results Tables 5-7 show the test MSE for the best configurations for each target norm and training data size. Deep Norms performed orders of magnitude better than Mahalanobis and Wide Norms on the random symmetric and asymmetric norms, but Wide Norms performed better on the square and diamond norms. The MLP performed worse than our models, except on the asymmetric norm with small training data.
Figure 3 illustrates the learned norm balls for random symmetric and asymmetric norms, when trained with small (k = 16) and large (k = 128) data sizes. Appendix C includes additional visualizations. From the red contours, we observe that Deep Norms and Wide Norms generalize to larger and smaller norm balls (due to being N2), whereas the MLP is unable to generalize to the 0.5 norm ball.
C.1 ADDITIONAL DETAILS ON NORM GENERATION
To generate point sets in R2, we generated c ∈ [3, 10] clusters, each with ni ∈ [5, 50] random points. The points for each cluster were sampled from a truncated normal distribution with µi ∼ U(−0.5, 0.5) and σi ∼ U(0.2, 0.6), truncated to 2 standard deviations. For symmetric sets, we considered only those points with positive x-coordinates and also included their reflection about the origin. To ensure that the origin was inside the resulting convex hull, we normalized the points to have mean zero before computing the hull. The convex hull was generated using the ConvexHull implementation in the SciPy library (Jones et al., 2001).
C.2 ADDITIONAL VISUALIZATIONS
Figures 7 and 8 further visualize the training data and the Mahalanobis architecture in addition to the MLP, Deep Norm, and Wide Norm architectures, for the random symmetric, asymmetric, square (L∞) and diamond (L1) unit ball shapes." }, { "heading": "D METRIC NEARNESS ADDITIONAL DETAILS", "text": "Dataset creation For the symmetric metric nearness dataset, we generated the data as in Sra et al. (2005): a random matrix in R200×200 was generated with values drawn uniformly between 0 and 5. This matrix was added to its transpose to make it symmetric. Random uniform noise between 0 and 1 was added to each entry, then the diagonal was removed. For the asymmetric case, a more complex dataset creation strategy was employed, because a random asymmetric matrix generated as above was too difficult a task for triangle fixing. Instead, we generated a random directed lattice graph with random weights obtained by applying the exponential function to a random uniform sample between -1 and 1. The distances were calculated using Dijkstra’s A∗ algorithm. Then, again, random uniform noise between 0 and 4 was added to the output, multiples of 10 removed to make the scale comparable to before, and the diagonals removed. The reason for the extra noise is that this data matrix was already much closer to a metric. A small sketch of the symmetric dataset creation follows.
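The NumPy sketch below is our own reading of the symmetric recipe above (the original may symmetrize the noise step differently); it builds the matrix and samples random triangles to confirm the data is far from a metric.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Symmetric metric nearness data: uniform matrix, symmetrize, add noise, zero diagonal.
M = rng.uniform(0.0, 5.0, size=(n, n))
D = M + M.T                              # symmetrize
D += rng.uniform(0.0, 1.0, size=(n, n))  # entrywise uniform noise
np.fill_diagonal(D, 0.0)                 # remove the diagonal

# Count violated triangles as a sanity check that D is far from a metric.
viol = 0
for i, j, k in rng.integers(0, n, size=(1000, 3)):
    if len({i, j, k}) == 3 and D[i, j] + D[j, k] < D[i, k]:
        viol += 1
print(f"violated sampled triangles: {viol}/1000")
```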
Architectures used For the symmetric metric nearness experiment, the Deep Norm based Neural Metric architecture used two layers of 512 neurons (we found that larger layer sizes learned much faster), with a 512-dimensional embedding function, MaxReLU activations, 5-unit concave activation functions, and a MaxMean global pooling layer. The Wide Norm based Neural Metric consisted of 128 Mahalanobis components of size 48, with a 512-dimensional embedding function, 5-unit concave activation functions, and MaxMean global pooling. The deep Euclidean architecture used a 1024-dimensional embedding function. For the asymmetric case, the architectures were the same except that we used the asymmetric equivalents of Deep Norm and Wide Norm.
Training Regime The triangle fixing algorithm was run for up to 400 iterations or until convergence. For the symmetric case we reimplemented Sra et al. (2005) in Python using their C++ code as a template. For the asymmetric version, we used the same code but removed their symmetrization step. For the training of all networks, we trained for 1500 epochs in total, split into 500-epoch chunks in which the learning rate decreased from 1e-3 to 3e-4 and finally to 1e-4, using a batch size of 1000. We used the Adam optimizer (Kingma and Ba, 2014) with default hyperparameters. Results were averaged over 10 seeds." }, { "heading": "E GRAPH EXPERIMENTS ADDITIONAL DETAILS", "text": "Symmetric graph descriptions The first symmetric graph, to, consists of a symmetrized road network extracted from openStreetMap (OSM, www.openstreetmap.org). The second, 3d, represents navigation in a 50x50x50 cubic 3d gridworld, where the agent can move one step at a time in 6 directions, and movement wraps around the sides of the cube. The third, taxi, represents a 25x25 2d taxi environment, where the agent can move in four directions and there is a passenger that the agent can pick up and drop. Unlike in 3d, there is no wraparound in taxi. Weights for edges in to correspond to the OSM distances, whereas weights for edges in 3d and taxi are randomly sampled from {0.01, 0.02, . . . , 1.00} (but final distances are normalized so that the mean distance is 50).
Asymmetric graph descriptions The first, push, is a 25x25 2d pusher environment, where the agent can move in four directions, and there is a box that the agent can push by moving into it. If the box is pushed into a wall, it switches places with the agent. The second, 3dd, is a directed version of 3d, where all paths in three of the six movement directions have been pruned (the same three directions for all nodes). The third, 3dr, is a randomly pruned version of 3d, where three random movement directions were pruned at each node, and any inaccessible portions of the resulting graph were pruned.
Dataset generation Our experiments are structured as a supervised learning task, with inputs x, y ∈ V and targets d(x, y). The targets for a random subset of 150K pairs (of the O(|V|^2) total pairs) were computed beforehand using A∗ search and normalized to have mean 50. 10K were used as a test set, and the remainder was subsampled to form training sets of size |D| ∈ {1K, 2K, 5K, 10K, 20K, 50K}. Nodes were represented using noisy landmark embeddings.
A subset of 32 landmark nodes was randomly chosen, and the distances to and from all other nodes in the graph were computed using Dijkstra’s algorithm to form 32 base landmark features for symmetric graphs and 64 base landmark features for asymmetric graphs (distances both to and from each landmark, consistent with the embedding dimensions below). These base landmark features were then normalized to have mean 0 and standard deviation 1, and noise sampled from N(0, 0.2) was added. To add additional noise through distractor features, 96 normally distributed features were concatenated with the noisy landmark features to obtain node embeddings with 128 dimensions for symmetric graphs and 160 dimensions for asymmetric graphs. We also tested node2vec embeddings (Grover and Leskovec, 2016), but found that our noisy landmark approach was both faster to run and produced results that were an order of magnitude better for all algorithms.
Architectures We compare a 128-dimensional Mahalanobis metric (equivalent to a deep Euclidean metric with an additional layer), a Wide Norm based Neural Metric (32 components of 32 units, with 5-unit concave activations, and MaxMean global pooling), a plain Deep Norm (3 layers of 128 units, ICNN style with ReLU activations, no concave activations, and average pooling, DNI), a Deep Norm based Neural Metric (3 layers of 128 units, with MaxReLU and 5-unit concave activations, and MaxMean pooling, DNN), and an MLP (3 layers of 128 units). We train each algorithm with 4 different embedding functions φ, each a fully connected, feed-forward neural network with ReLU activations. The depths of the tested φ ranged from 0 to 3 layers, all with 128 units. No regularization was used besides the size/depth of the layers.
Training Training was done end-to-end with the Adam optimizer (Kingma and Ba, 2014), using an initial learning rate of 1e-3 and a batch size of 256. Networks were each trained for 1000 total epochs, and the learning rate was divided by 5 every 250 epochs.
Results The complete results/learning curves are displayed in Figures 10 and 11 below." }, { "heading": "F GENERAL VALUE FUNCTIONS ADDITIONAL DETAILS", "text": "F.1 EXPERIMENTAL DETAILS
4-Room and Maze environment description The fully observable grid environment has a state and goal representation of a binary tensor with dimensions 11 × 11 × 3. Each cell in the 2D grid is represented by a 1-hot vector with 3 dimensions, indicating whether the cell is (1) empty, (2) wall, (3) agent. The agent can move in the 4 cardinal directions (North, South, East, West). If a wall is present in the chosen direction, the agent cannot take that move. The 4-Room environment and Maze environment have a total of 4556 and 2450 training (s, g) pairs, respectively. For each environment, the agent has access to a neighbour function N(s) which returns a list of possible next states, corresponding to the empty cells adjacent to the current agent location. This is equivalent to having the environment transition model p(s′|s, a) over all the actions.
Transition reward and episode termination For symmetric environments, every transition has a reward of -1, i.e., R(s, s′) = −1. Note that the reward is not a function of the goal, but the termination is. For asymmetric environments, we add noise µ ∼ U(−5, 0) to the base −1 reward for a transition if the direction of movement is west (left) or south (down). The done flag is set to 1 when the neighbouring state equals the goal, i.e., D(s, s′, g) = [s′ = g].
Choosing Training-Test Split Dataset Given the full training data set of (s, g) pairs, we use 2 types of splitting into train and test datasets: (1) goal, and (2) state. We denote the training fraction η ∈ [0, 1] as the fraction of the total data points used for the training set. For goal splitting, we pick a subset of goals {g∗} ⊆ g with |{g∗}| ≈ (1 − η)|{g}|, and remove all (s, g) pairs with g ∈ {g∗} from the training set. These goals were chosen as the bottom fraction of empty cells when numbering the empty cells left to right and top-down (i.e., in the reading order of words on a page); this corresponds approximately to removing the bottom rows of goals. In this case, the training set observes all possible states, and hence also all the transition rewards. We are interested in whether it can generalize to new unseen goals. For state splitting, we perform a similar procedure but instead remove a subset of states, namely those corresponding to the agent being at the bottom rows of the grid. While all the goals are included in the training dataset, in the state splitting some states and transition rewards are unknown to the agent, making generalization much more difficult.
Architecture The architectures used for the experiments consist of a shared feature extractor φ for the state s and goal g, followed by a function fθ on the difference of the features:
$V(s, g; \phi, \theta) = f_\theta(\phi(g) - \phi(s))$ (4)
The feature extractor φ is composed of 2 convolutional layers with 3 × 3 kernel size, and 32 and 62 filters with ReLU activations and a stride of 1. We then flatten the feature maps and follow with 2 fully connected layers of hidden sizes 128 and 64, with ReLU activation on the first layer only. The variants of fθ are summarized in Table 8. The Euclidean variant simply computes the L2 norm between the feature embeddings, ‖φ(g) − φ(s)‖2.
Training The objective function for training the models is the Temporal Difference (TD) error L(φ, θ):
$L(\phi, \theta) = \mathbb{E}_{(s,g)\sim D}\left[\left(V(s, g; \phi, \theta) - y\right)^2\right],$ (5)
$y = \begin{cases} r(s, s', g) + \max_{s' \in N(s)} V(s', g; \bar{\phi}, \bar{\theta}), & \text{if } s' \neq g \\ r(s, s', g), & \text{otherwise} \end{cases}$ (6)
where N(s) refers to the set of next states of s after applying different actions at that state. Note that we did not apply a discount factor on the value of the next state (i.e., the discount factor γ = 1). We make use of target networks φ̄, θ̄ when computing the target y. The target networks are updated once every epoch of training, via an exponential moving average (Polyak averaging) with the main networks (φ, θ):
$\bar{\theta}^{(n+1)} = \alpha\bar{\theta}^{(n)} + (1 - \alpha)\theta^{(n)},$ (7)
where α = 0.95 is the update fraction. We use Adam (Kingma and Ba, 2014) with learning rate 0.0001 and batch size 128, for 1000 epochs. We evaluate the SPL metric every 200 epochs. A sketch of the TD target and the Polyak update is given below.
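The following Python sketch (our own; function names and the neighbor interface are illustrative placeholders) mirrors the target of Eq. (6) and the Polyak update of Eq. (7).

```python
def td_target(r, s_next, s, g, neighbors, V_bar):
    """TD target y from Eq. (6), with gamma = 1 and target network V_bar.
    `neighbors(s)` plays the role of N(s); no bootstrapping once the goal is reached."""
    if s_next == g:                                   # done flag D(s, s', g) = [s' = g]
        return r
    return r + max(V_bar(sp, g) for sp in neighbors(s))

def polyak_update(target_params, params, alpha=0.95):
    """Eq. (7): exponential moving average of the main network into the target network."""
    return [alpha * tp + (1.0 - alpha) * p for tp, p in zip(target_params, params)]
```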
Evaluation Metrics We evaluate the learned value functions on several metrics: MSE, Policy Success, and SPL (Anderson et al., 2018). The MSE was calculated on the ground truth value of the held-out test set V(s, g). For policy success and SPL, we select N = 100 random (s, g) pairs in the test set, initialize the agent at s, and follow a greedy policy to visit the neighbour $s' = \arg\max_{s' \in N(s)} \left[r(s, s') + V(s', g)\right]$ for up to T = 121 timesteps (11 × 11). If the agent reaches the desired goal within episode i, then the binary success indicator Si = 1 and the episode terminates; otherwise Si = 0. Let li be the ground truth V(s, g) optimal episode cumulative reward (i.e., path cost), and pi the cumulative reward of the agent’s trajectory in the episode. We then define the Policy Success rate and SPL as:
$\text{Success} = \frac{1}{N}\sum_{i=1}^{N} S_i, \qquad \text{SPL} = \frac{1}{N}\sum_{i=1}^{N} S_i \frac{l_i}{\max(p_i, l_i)}$ (8)
F.2 ADDITIONAL EVALUATIONS AND VISUALIZATIONS
We visualize additional heatmaps for (1) the learned value function (Figure 15), (2) the squared error (SE) between the ground truth value and the learned value (Figure 16), and (3) the SPL metric for individual (s0, s) and (s, s0) pairs (Figure 17). These results are for agents trained with training fraction η = 0.75.
In comparing the performance versus training fraction, we also plot the final performance after 1000 epochs of training: (1) training set SPL versus training fraction (Figure 13a), and (2) train (Figure 14a) and test (Figure 14b) mean squared error (MSE) with respect to the ground truth values. We note that while our Deep Norm and Wide Norm appear to have higher MSE than the baselines, in practice, when utilizing the value function with a greedy policy, our architectures were able to achieve better success than the baseline architectures." } ]
2020
null
SP:3a670c06bf87ba895ed91ed2280d88881defa412
[ "This paper proposes a new loss function to compute the exact ordered eigenvectors of a dataset. The loss is motivated from the idea of computing the eigenvectors sequentially. However doing so would be computationally expensive, and the authors show that the loss function they propose (sum of sequential losses) has the same order (constant less than 7) of computational complexity as using the squared loss. A proof of the correctness of the algorithm is given, along with experiments to verify its performance.", "This paper proposes a new loss function for performing principal component analysis (PCA) using linear autoencoders (LAEs). With this new loss function, the decoder weights of LAEs can eventually converge to the exact ordered unnormalized eigenvectors of the sample covariance matrix. The main contribution is to add the identifiability of principal components in PCA using LAEs and. Two empirical experiments were done to show the effectiveness of proposed loss function on one synthetic dataset and the MNIST dataset. " ]
In this paper, we propose a new loss function for performing principal component analysis (PCA) using linear autoencoders (LAEs). Optimizing the standard L2 loss results in a decoder matrix that spans the principal subspace of the sample covariance of the data, but fails to identify the exact eigenvectors. This downside originates from an invariance that cancels out in the global map. Here, we prove that our loss function eliminates this issue, i.e. the decoder converges to the exact ordered unnormalized eigenvectors of the sample covariance matrix. For this new loss, we establish that all local minima are global optima and also show that computing the new loss (and also its gradients) has the same order of complexity as the classical loss. We report numerical results on both synthetic simulations and a real-data PCA experiment on MNIST (i.e., a 60,000×784 matrix), demonstrating our approach to be practically applicable and to rectify previous LAEs’ downsides.
[]
[ { "authors": [ "Brandon Amos", "J Zico Kolter" ], "title": "Optnet: Differentiable optimization as a layer in neural networks", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Pierre Baldi", "Kurt Hornik" ], "title": "Neural networks and principal component analysis: Learning from examples without local minima", "venue": "Neural networks,", "year": 1989 }, { "authors": [ "Hervé Bourlard", "Yves Kamp" ], "title": "Auto-association by multilayer perceptrons and singular value decomposition", "venue": "Biological cybernetics,", "year": 1988 }, { "authors": [ "David L Donoho", "Michael Elad", "Vladimir N Temlyakov" ], "title": "Stable recovery of sparse overcomplete representations in the presence of noise", "venue": "IEEE Transactions on information theory,", "year": 2005 }, { "authors": [ "Charles G Frye", "Neha S Wadia", "Michael R DeWeese", "Kristofer E Bouchard" ], "title": "Numerically recovering the critical points of a deep linear autoencoder", "venue": null, "year": 1901 }, { "authors": [ "R.A. Horn", "C.R. Johnson" ], "title": "Matrix Analysis", "venue": null, "year": 2012 }, { "authors": [ "Sun-Yuan Kung", "KI Diamantaras" ], "title": "A neural network learning algorithm for adaptive principal component extraction (apex)", "venue": "In International Conference on Acoustics, Speech, and Signal Processing,", "year": 1990 }, { "authors": [ "Daniel Kunin", "Jonathan M Bloom", "Aleksandrina Goeva", "Cotton Seed" ], "title": "Loss landscapes of regularized linear autoencoders", "venue": "arXiv preprint arXiv:1901.08168,", "year": 2019 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Erkki Oja", "Hidemitsu Ogawa", "Jaroonsakdi Wangviwattana" ], "title": "Principal component analysis by homogeneous neural networks, part 1: The weighted subspace criterion", "venue": "IEICE Transactions on Information and Systems,", "year": 1992 }, { "authors": [ "Elad Plaut" ], "title": "From principal subspaces to principal components with linear autoencoders", "venue": "arXiv preprint arXiv:1804.10253,", "year": 2018 }, { "authors": [ "Arnu Pretorius", "Steve Kroon", "Herman Kamper" ], "title": "Learning dynamics of linear denoising autoencoders", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Jeanne Rubner", "Paul Tavan" ], "title": "A self-organizing network for principal-component analysis", "venue": "EPL (Europhysics Letters),", "year": 1989 }, { "authors": [ "Lei Xu" ], "title": "Least mean square error reconstruction principle for self-organizing neural-nets", "venue": "Neural networks,", "year": 1993 }, { "authors": [ "E. Zeidler" ], "title": "Applied Functional Analysis: Main Principles and Their Applications. Applied Mathematical Sciences", "venue": null, "year": 1995 }, { "authors": [ "Y Zhou", "Y Liang" ], "title": "Critical points of linear neural networks: Analytical forms and landscape properties", "venue": "In Proc. Sixth International Conference on Learning Representations (ICLR),", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Ranking among the most widely-used and valuable statistical tools, Principal Component Analysis (PCA) represents a given set of data within a new orthogonal coordinate system in which the data are uncorrelated and the variance of the data along each orthogonal axis is successively ordered from the highest to lowest. The projection of data along each axis gives what are called principal components. Theoretically, eigendecomposition of the covariance matrix provides exactly such a transformation. For large data sets, however, classical decomposition techniques are infeasible and other numerical methods, such as least squares approximation schemes, are practically employed. An especially notable instance is the problem of dimensionality reduction, where only the largest principal components—as the best representative of the data—are desired. Linear autoencoders (LAEs) are one such scheme for dimensionality reduction that is applicable to large data sets.\nAn LAE with a single fully-connected and linear hidden layer, and Mean Squared Error (MSE) loss function can discover the linear subspace spanned by the principal components. This subspace is the same as the one spanned by the weights of the decoder. However, it failure to identify the exact principal directions. This is due to the fact that, when the encoder is transformed by some matrix, transforming the decoder by the inverse of that matrix will yield no change in the loss. In other words, the loss possesses a symmetry under the action of a group of invertible matrices, so that directions (and orderings/permutations thereto) will not be discriminated.\nThe early work of Bourlard & Kamp (1988) and Baldi & Hornik (1989) connected LAEs and PCA and demonstrated the lack of identifiability of principal components. Several methods for neural networks compute the exact eigenvectors (Rubner & Tavan, 1989; Xu, 1993; Kung & Diamantaras, 1990; Oja et al., 1992), but they depend on either particular network structures or special optimization methods. It was recently observed (Plaut, 2018; Kunin et al., 2019) that regularization causes the left singular vectors of the decoder to become the exact eigenvectors, but recovering them still requires an extra decomposition step. As Plaut (2018) point out, no existent method recovers the eigenvectors from an LAE in an optimization-independent way on a standard network — this work fills that void.\nMoreover, analyzing the loss surface for various architectures of linear/non-linear neural networks is a highly active and prominent area of research (e.g. Baldi & Hornik (1989); Kunin et al. (2019); Pretorius et al. (2018); Frye et al. (2019)). Most of these works extend the results of Baldi & Hornik (1989) for shallow LAEs to more complex networks. However, most retain the original MSE loss, and they prove the same critical point characterization for their specific architecture of interest. Most notably Zhou & Liang (2018) extends the results of Baldi & Hornik (1989) to deep linear networks and shallow RELU networks. In contrast in this work we are going after a loss with better loss surface properties.\nWe propose a new loss function for performing PCA using LAEs. We show that with the proposed loss function, the decoder converges to the exact ordered unnormalized eigenvectors of the sample covariance matrix. 
The idea is simple: for identifying p principal directions we build up a total loss function as a sum of p squared error losses, where the ith loss function identifies only the first i principal directions. This approach breaks the symmetry since minimizing the first loss results in the first principal direction, which forces the second loss to find the first and the second. This constraint is propagated through the rest of the losses, resulting in all p principal components being identified. For the new loss we prove that all local minima are global minima.
Consequently, the proposed loss function has both theoretical and practical implications. Theoretically, it provides a better understanding of the loss surface. Specifically, we show that any critical point of our loss L is a critical point of the original MSE loss but not vice versa, and conclude that L eliminates those undesirable global minima of the original loss (i.e., exactly those which suffer from the invariance). Given that the set of critical points of L is a subset of the critical points of the MSE loss, many of the previous results on loss surfaces of more complex networks likely extend. In light of the removal of undesirable global minima through L, examining more complex networks is certainly a very promising direction.
As for practical consequences, we show that the loss and its gradients can be compactly vectorized so that their computational complexity is no different from the MSE loss. Therefore, the loss L can be used to perform PCA/SVD on large datasets using any method of optimization such as Stochastic Gradient Descent (SGD). Chief among the compelling reasons to perform PCA/SVD using this method is that, in recent years, there have been unprecedented gains in the performance of very large SGD optimizations, with autoencoders in particular successfully handling larger numbers of high-dimensional training data (e.g., images). The loss function we offer is attractive in terms of parallelizability and distributability, and does not prescribe any single specific algorithm or implementation, so it stands to continue to benefit from the arms race between SGD and its competitors.
More importantly, this single loss function (without an additional post hoc processing step) fits seamlessly into optimization pipelines (where SGD is but one instance). The result is that the loss allows for PCA/SVD computation as a single optimization layer, akin to an instance of a fully differentiable building block in a NN pipeline (Amos & Kolter, 2017), potentially as part of a much larger network." }, { "heading": "2 THE PROPOSED LOSS FUNCTION AND REVIEW OF FINAL RESULTS", "text": "Let X ∈ Rn×m and Y ∈ Rn×m be the input and output matrices, where m centered sample points, each n-dimensional, are stacked column-wise. Let xj ∈ Rn and yj ∈ Rn be the jth sample input and output (i.e. the jth column of X and Y, respectively). Define the loss function L(A,B) as
$L(A,B) := \sum_{i=1}^{p}\sum_{j=1}^{m} \|y_j - AI_{i;p}Bx_j\|_2^2 = \sum_{i=1}^{p} \|Y - AI_{i;p}BX\|_F^2$ (1)
where 〈·, ·〉F and ‖·‖F are the Frobenius inner product and norm, and Ii;p is a p × p matrix with all elements zero except the first i diagonal elements being one. (Or, equivalently, the matrix obtained by setting the last p − i diagonal elements of a p × p identity matrix to zero, e.g. $I_{2;3} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}$.) In what follows, we shall denote the transpose of matrix M by M′. Moreover, the matrices A ∈ Rn×p and B ∈ Rp×n can be viewed as the weights of the decoder and encoder parts of an LAE. A direct implementation of this loss is sketched below.
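The following NumPy sketch (our own illustration) evaluates the loss of Eq. (1) directly from its definition; a vectorized form with a constant number of matrix products is given later in Lemma 1.

```python
import numpy as np

def proposed_loss(A, B, X, Y):
    """Naive evaluation of Eq. (1): sum of p truncated reconstruction losses.
    A: (n, p) decoder, B: (p, n) encoder, X/Y: (n, m) column-stacked samples."""
    p = A.shape[1]
    Z = B @ X                          # encode once, shape (p, m)
    total = 0.0
    for i in range(1, p + 1):
        Zi = Z.copy()
        Zi[i:, :] = 0.0                # I_{i;p} B X: keep only the first i latent rows
        total += np.linalg.norm(Y - A @ Zi, 'fro') ** 2
    return total
```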
The results are based on the following standard assumptions that hold generically:
Assumption 1. For an input X and an output Y, let Σxx := XX′, Σxy := XY′, Σyx := Σ′xy and Σyy = YY′ be their sample covariance matrices. We assume
• The input and output data are centered (zero mean).
• Σxx, Σxy, Σyx and Σyy are positive definite (of full rank and invertible).
• The covariance matrix $\Sigma := \Sigma_{yx}\Sigma_{xx}^{-1}\Sigma_{xy}$ is of full rank with n distinct eigenvalues λ1 > λ2 > · · · > λn.
• The decoder matrix A has no zero columns.
Claim. The main result of this work, proved in Theorem 2, is as follows:
If the above assumptions hold, then all the local minima of L(A,B) are achieved iff A and B are of the form
$A = U_{1:p}D_p, \qquad B = D_p^{-1}U'_{1:p}\Sigma_{yx}\Sigma_{xx}^{-1},$
where the ith column of U1:p is the unit eigenvector of $\Sigma := \Sigma_{yx}\Sigma_{xx}^{-1}\Sigma_{xy}$ corresponding to the ith largest eigenvalue and Dp is a diagonal matrix with nonzero diagonal elements. In other words, A contains ordered unnormalized eigenvectors of Σ corresponding to the p largest eigenvalues. Moreover, all the local minima are global minima, with the value of the loss function at those global minima being
$L(A,B) = p\,\mathrm{Tr}(\Sigma_{yy}) - \sum_{i=1}^{p}(p - i + 1)\lambda_i,$
where λi is the ith largest eigenvalue of $\Sigma := \Sigma_{yx}\Sigma_{xx}^{-1}\Sigma_{xy}$. In the case of an autoencoder (Y = X), Σ = Σxx. Finally, while L(A,B) in the given form contains O(p) matrix products, we will show that it can be evaluated with a constant number (less than 7) of matrix products, independent of the value of p." }, { "heading": "3 NOTATION", "text": "In this paper, the underlying field is always R, and positive semidefinite matrices are symmetric by definition. The following constant matrices are used extensively throughout. The matrices Tp ∈ Rp×p and Sp ∈ Rp×p are defined as
$(T_p)_{ij} = (p - i + 1)\,\delta_{ij}$, i.e. $T_p = \mathrm{diag}(p, p-1, \cdots, 1)$, (2)
$(S_p)_{ij} = p - \max(i, j) + 1$, e.g. $S_4 = \begin{bmatrix} 4 & 3 & 2 & 1 \\ 3 & 3 & 2 & 1 \\ 2 & 2 & 2 & 1 \\ 1 & 1 & 1 & 1 \end{bmatrix}$. (3)
Another matrix that will appear in the formulation is $\hat{S}_p := T_p^{-1}S_pT_p^{-1}$. Clearly, the diagonal matrix Tp is positive definite. As shown in Lemma 2, Sp and Ŝp are positive definite as well. For concreteness, a small numerical construction of these matrices is sketched below.
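This short NumPy sketch (our own) constructs Tp, Sp, and Ŝp and checks the p = 4 example above along with positive definiteness.

```python
import numpy as np

def T(p):
    return np.diag(np.arange(p, 0, -1, dtype=float))   # diag(p, p-1, ..., 1)

def S(p):
    i, j = np.indices((p, p))                          # 0-based indices
    return (p - np.maximum(i, j)).astype(float)        # equals p - max(i, j) + 1 (1-based)

p = 4
Sp, Tp = S(p), T(p)
S_hat = np.linalg.inv(Tp) @ Sp @ np.linalg.inv(Tp)
print(Sp)                                    # [[4 3 2 1] [3 3 2 1] [2 2 2 1] [1 1 1 1]]
assert np.all(np.linalg.eigvalsh(Sp) > 0)    # S_p is positive definite (Lemma 2)
assert np.all(np.linalg.eigvalsh(S_hat) > 0) # so is S_hat
```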
" }, { "heading": "4 MAIN THEOREMS", "text": "The general strategy to prove the above claim is as follows. First, the analytical gradients of the loss are derived in matrix form in Propositions 1 and 2, and compared with those of the original Mean Squared Error (MSE) loss. Next, we analyze the loss surface by solving the gradient equations, which yields the general structure of the critical points based on the rank of the decoder matrix A. We then delineate several interesting properties of the critical points; notably, any critical point of the loss is also a critical point of the MSE loss, but not the other way around. Finally, by performing a second-order analysis of the loss in Theorem 2, the exact equations for the local minima are derived, which are shown to be as claimed.
Let L̃(A,B) and L(A,B) be the original loss and the proposed loss function, respectively, i.e.,
$\tilde{L}(A,B) := \sum_{j=1}^{m}\|y_j - ABx_j\|_2^2 = \|Y - ABX\|_F^2, \qquad L(A,B) := \sum_{i=1}^{p}\sum_{j=1}^{m}\|y_j - AI_{i;p}Bx_j\|_2^2 = \sum_{i=1}^{p}\|Y - AI_{i;p}BX\|_F^2.$
The first step is to calculate the gradients with respect to A and B and set them to zero to derive the implicit expressions for the critical points. In order to do so, first, in Lemma 5, for a fixed A, we derive the directional (Gateaux) derivative of the loss with respect to B along an arbitrary direction W ∈ Rp×n, denoted as dBL(A,B)W, i.e.
$d_B L(A,B)W = \lim_{\|W\|_F \to 0} \frac{L(A, B + W) - L(A,B)}{\|W\|_F}.$
As shown in the proof of the lemma, dBL(A,B)W is derived by writing the norm in the loss as an inner product, opening it up using the linearity of the inner product, dismissing second-order terms in W (i.e., O(‖W‖²)), and rearranging the result as the inner product between the gradient with respect to B and the direction W, which yields
$d_B L(A,B)W = -2\,\mathrm{Tr}\left(W'\left(T_pA'\Sigma_{yx} - (S_p \circ (A'A))B\Sigma_{xx}\right)\right) = -2\langle T_pA'\Sigma_{yx} - (S_p \circ (A'A))B\Sigma_{xx},\, W\rangle_F,$ (4)
where ◦ is the Hadamard product and the constant matrices Tp and Sp were defined above. Second, the same process is done in Lemma 6 to derive dAL(A,B)V, the derivative of L with respect to A in an arbitrary direction V ∈ Rn×p for a fixed B, which is then set to zero to derive the implicit expressions for the critical points. The results are formally stated in the two following propositions.
Proposition 1. For any fixed matrix A ∈ Rn×p the function L(A,B) is convex in the coefficients of B and attains its minimum for any B satisfying the equation
$(S_p \circ (A'A))B\Sigma_{xx} = T_pA'\Sigma_{yx},$ (5)
where ◦ is the Hadamard (element-wise) product operator, and Sp and Tp are the constant matrices defined in the previous section. Further, if A has no zero column, then L(A,B) is strictly convex in B and has a unique minimum when the critical B is
$B = \hat{B}(A) = (S_p \circ (A'A))^{-1}T_pA'\Sigma_{yx}\Sigma_{xx}^{-1},$ (6)
and in the autoencoder case it becomes
$B = \hat{B}(A) = (S_p \circ (A'A))^{-1}T_pA'.$ (6′)
The proof is given in appendix A.2. Remark 1. Note that as long as A has no zero column, Sp ◦ (A′A) is nonsingular (we will explain the reason soon). In practice, an A with zero columns can always be avoided by nudging the zero columns of A during the gradient descent process. Proposition 2. For any fixed matrix B ∈ Rp×n the function L(A,B) is a convex function in A. Moreover, for a fixed B, the matrix A that satisfies
$A\left(S_p \circ (B\Sigma_{xx}B')\right) = \Sigma_{yx}B'T_p$ (7)
is a critical point of L(A,B).
The proof is given in appendix A.3.
The pair (A,B) is a critical point of L if they make dBL(A,B)W and dAL(A,B)V zero for any pair of directions (V, W). Therefore, the implicit equations for critical points are given below, next to the ones derived by Baldi & Hornik (1989) for L̃(A,B).
For L̃(A,B): $A'AB\Sigma_{xx} = A'\Sigma_{yx}, \quad AB\Sigma_{xx}B' = \Sigma_{yx}B'.$
For L(A,B): $(S_p \circ (A'A))B\Sigma_{xx} = T_pA'\Sigma_{yx}, \quad A(S_p \circ (B\Sigma_{xx}B')) = \Sigma_{yx}B'T_p.$
Remark 2. Notice the similarity, the only difference being the presence of the Hadamard product with Sp on the left and the diagonal Tp on the right. Therefore, practically, the added computational cost of evaluating the gradients is negligible compared to that of the MSE loss. A sketch of a gradient step based on these expressions follows.
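Reading the gradients off of Eq. (4) and, by analogy, off of the critical-point equation in Proposition 2 (our own sketch; the A-side constant factor is our assumption, and the hyperparameters are placeholders), a plain gradient-descent step can be written as:

```python
import numpy as np

def grad_step(A, B, Sxx, Syx, Sp, Tp, lr=1e-3):
    """One gradient-descent step on L(A, B).
    From Eq. (4):      dL/dB = -2 (Tp A' Syx - (Sp o (A'A)) B Sxx)
    From Prop. 2 (assumed sign/scale): dL/dA = -2 (Syx B' Tp - A (Sp o (B Sxx B')))
    Here `o` is the Hadamard product, written `*` in NumPy."""
    gB = -2.0 * (Tp @ A.T @ Syx - (Sp * (A.T @ A)) @ B @ Sxx)
    gA = -2.0 * (Syx @ B.T @ Tp - A @ (Sp * (B @ Sxx @ B.T)))
    return A - lr * gA, B - lr * gB
```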
The next step is to determine the structure of (A,B) that satisfies the above equations, and find the subset of those solutions that account for local minima. For the original loss, the first expression (A′ABΣxx = A′Σyx) is used to solve for B and substitute it into the second to derive an expression solely based on A. Obviously, in order to solve the first expression for B, two cases are considered separately: the case where A is of full rank p, so A′A is invertible, and the case of A being of rank r < p. Here we do the same but with a twist; for us there is only one case. The reason is that as long as the (not necessarily full rank) A has no zero column, Sp ◦ (A′A) is positive definite and hence invertible. This is discussed in detail in Lemma 2 and we briefly explain it here. As shown in the lemma, Sp is positive definite, and by the Schur product theorem, for any A (of any rank), Sp ◦ (A′A) is positive semidefinite. Moreover, as a result of the Oppenheim inequality (Horn & Johnson (2012), Thm 7.8.16), which in our case translates to $\det(S_p)\prod_i (A'A)_{ii} \leq \det(S_p \circ (A'A))$, as long as A has no zero column, $\prod_i (A'A)_{ii} > 0$ and therefore $\det(S_p \circ (A'A)) > 0$. Here, we assume A of any rank r ≤ p has no zero column (since this can be easily avoided in practice) and consider Sp ◦ (A′A) to be always invertible. Therefore, (A,B) define a critical point of the losses L̃ and L if
For L̃(A,B) and full rank A: $B = \hat{B}(A) = (A'A)^{-1}A'\Sigma_{yx}\Sigma_{xx}^{-1}, \quad AB\Sigma_{xx}B' = \Sigma_{yx}B'.$
For L(A,B) and A with no zero column: $B = \hat{B}(A) = (S_p \circ (A'A))^{-1}T_pA'\Sigma_{yx}\Sigma_{xx}^{-1}, \quad A(S_p \circ (B\Sigma_{xx}B')) = \Sigma_{yx}B'T_p.$
Before we state the main theorem, we need the following definitions. First, a rectangular permutation matrix Πr ∈ Rr×p is a matrix in which each column contains at most one nonzero element, equal to 1. If the rank of Πr is r with r < p, then clearly Πr has p − r zero columns. Also, by removing those zero columns, the resultant r × r submatrix of Πr is a standard square permutation matrix.
Second, under the conditions provided in Assumption 1, the matrix $\Sigma := \Sigma_{yx}\Sigma_{xx}^{-1}\Sigma_{xy}$ has an eigenvalue decomposition Σ = UΛU′, where the ith column of U, denoted as ui, is an eigenvector of Σ corresponding to the ith largest eigenvalue of Σ, denoted as λi. Also, Λ = diag(λ1, · · · , λn) is the diagonal matrix of ordered eigenvalues of Σ, with λ1 > λ2 > · · · > λn > 0. We use the following notation to organize a subset of eigenvectors of Σ into a rectangular matrix. For any r ≤ p, let Ir = {i1, · · · , ir} (1 ≤ i1 < · · · < ir ≤ n) be any ordered r-index set. Define UIr ∈ Rn×r as UIr = [ui1, · · · , uir]. That is, the columns of UIr are the ordered orthonormal eigenvectors of Σ associated with eigenvalues λi1 > · · · > λir. Clearly, when r = p, we have UIr = [ui1, · · · , uip] corresponding to a p-index set Ip = {i1, · · · , ip} (1 ≤ i1 < · · · < ip ≤ n). Similarly, we define ΛIr ∈ Rr×r as ΛIr = diag(λi1, · · · , λir). Theorem 1. Let A ∈ Rn×p and B ∈ Rp×n such that A is of rank r ≤ p. Under the conditions provided in Assumption 1 and the above notation, the matrices A and B define a critical point of L(A,B) if and only if, for any given r-index set Ir and a nonsingular diagonal matrix D ∈ Rr×r, A and B are of the form
$A = U_{I_r}CD,$ (8)
$B = \hat{B}(A) = D^{-1}\Pi_C U'_{I_r}\Sigma_{yx}\Sigma_{xx}^{-1},$ (9)
where C ∈ Rr×p is of full rank r with nonzero and normalized columns such that $\Pi_C := (S_p \circ (C'C))^{-1}T_pC'$ is a rectangular permutation matrix of rank r and CΠC = Ir. For all 1 ≤ r ≤ p, such a C always exists. In particular, if the matrix A is of full rank p, i.e. r = p, the two given conditions on ΠC are satisfied iff the invertible matrix C is any square p × p permutation matrix Π. In this case (A,B) define a critical point of L(A,B) iff they are of the form
$A = U_{I_p}\Pi D,$ (10)
$B = \hat{B}(A) = D^{-1}\Pi' U'_{I_p}\Sigma_{yx}\Sigma_{xx}^{-1}.$ (11)
The proof is given in appendix A.4.
Remark 3. The above theorem provides explicit equations for the critical points of the loss surface in terms of the rank of the decoder matrix A and the eigenvectors of Σ. This explicit structure allows us to further analyze the loss surface and its local/global minima.
Here, we provide a proof sketch for the above theorem to make the claims clearer. Again, as a reminder, the EVD of $\Sigma := \Sigma_{yx}\Sigma_{xx}^{-1}\Sigma_{xy}$ is Σ = UΛU′.
For both L̃ and L, the corresponding B̂(A) is substituted for B on the RHS of the critical point equations. For the loss L(A,B), as shown in the proof of the theorem, this results in the following identity
$U'A\left(S_p \circ (\hat{B}\Sigma_{xx}\hat{B}')\right)A'U = \Lambda\Delta,$ (12)
where $\Delta := U'AT_p(S_p \circ (A'A))^{-1}T_pA'U$ is symmetric and positive semidefinite. The LHS of eq. (12) is symmetric, so the RHS is symmetric too, so Λ∆ = (Λ∆)′ = ∆′Λ′ = ∆Λ. Therefore, ∆ commutes with the diagonal matrix of eigenvalues Λ. Since the eigenvalues are assumed to be distinct, ∆ has to be diagonal as well. By Lemma 2, $T_p(S_p \circ (A'A))^{-1}T_p$ is positive definite, and U is an orthogonal matrix. Therefore, r = rank(A) = rank(∆) = rank(U′∆U), which implies that the diagonal matrix ∆ has r nonzero and positive diagonal entries. There exists an r-index set Ir corresponding to the nonzero diagonal elements of ∆. Forming a diagonal matrix ∆Ir ∈ Rr×r by filling its diagonal entries (in order) with the nonzero diagonal elements of ∆, we have $U\Delta U' = U_{I_r}\Delta_{I_r}U'_{I_r}$, which by the definition of ∆ gives
$AT_p(S_p \circ (A'A))^{-1}T_pA' = U_{I_r}\Delta_{I_r}U'_{I_r},$ (13)
which indicates that the matrix A has the same column space as UIr. Therefore, there exists a full rank matrix C̄ ∈ Rr×p such that A = UIrC̄. Since A has no zero column, C̄ has no zero column. Further, by normalizing the columns of C̄ we can write A = UIrCD, where D ∈ Rp×p is a diagonal matrix containing the norms of the columns of C̄.
Baldi & Hornik (1989) did something similar for full rank A for the loss L̃, deriving $A_{\tilde{L}} = U_{I_p}\tilde{C}$. But their C̃ can be any invertible p × p matrix. However, in our case, the matrix C ∈ Rr×p corresponding to a rank r ≤ p matrix A has to satisfy eq. (13) with A replaced by UIrCD and eq. (12) with B̂(A) replaced by B̂(UIrCD). In the case of Baldi & Hornik (1989), for the original loss L̃, equations similar to eq. (13) and eq. (12) appear, but they are satisfied trivially by any invertible matrix C̃. Simplifying those equations using A = UIrCD, after some algebraic manipulation, results in the following two conditions for C:
$CT_p(S_p \circ (C'C))^{-1}T_pC' = \Delta_{I_r}$ and (14)
$C\left(S_p \circ \left((S_p \circ (C'C))^{-1}T_pC'\Lambda_{I_r}CT_p(S_p \circ (C'C))^{-1}\right)\right)C' = \Lambda_{I_r}\Delta_{I_r}.$ (15)
As detailed in the proof of Theorem 1, solving for C leads to its specific structure as laid out in the theorem. Remark 4. Note that when A is of rank r < p with no zero columns, the invariant matrix C is not necessarily a rectangular permutation matrix, but $\Pi_C := (S_p \circ (C'C))^{-1}T_pC'$ is a rectangular permutation matrix with CΠC = Ir. It is only when r = p that the invariant matrix C becomes a permutation matrix. Nevertheless, as we show in the following corollary, the global map is always, for all r ≤ p, $G = AB = U_{I_r}U'_{I_r}\Sigma_{yx}\Sigma_{xx}^{-1}$. It is possible to find further structure (in terms of block matrices) for the invariant matrix C when r < p. However, this is not necessary, as we soon show that all rank-deficient matrices A are saddle points for the loss and ideally should be passed by during the gradient descent process. Based on some numerical results, our conjecture is that when r < p the matrix C can only start with an r × k rectangular permutation matrix of rank r with r ≤ k ≤ p, and the remaining p − k columns of C are arbitrary as long as none of the columns are identically zero. Corollary 1. Let (A,B) be a critical point of L(A,B) under the conditions provided in Assumption 1 with rank A = r ≤ p. Then the following hold:
1. The matrix BΣxxB′ is a p × p diagonal matrix of rank r.
2. For all 1 ≤ r ≤ p, for any critical pair (A,B), the global map G := AB becomes $G = U_{I_r}U'_{I_r}\Sigma_{yx}\Sigma_{xx}^{-1}.$ (16)
For the autoencoder case (Y = X) the global map is simply $G = U_{I_r}U'_{I_r}$.
3. (A,B) is also a critical point of the classical loss $\tilde{L}(A,B) = \|Y - ABX\|_F^2$.
The proof is given in appendix A.5. Remark 5. The above corollary implies that L(A,B) not only does not add any extra critical point compared to the original loss L̃(A,B), it provides the same global map G := AB. It only limits the structure of the invariant matrix C as described in Theorem 1, so that the decoder matrix A can recover the exact eigenvectors of Σ. Lemma 1. The loss function L(A,B) can be written as
$L(A,B) = p\,\mathrm{Tr}(\Sigma_{yy}) - 2\,\mathrm{Tr}(AT_pB\Sigma_{xy}) + \mathrm{Tr}(B'(S_p \circ (A'A))B\Sigma_{xx}).$ (17)
The above identity shows that the number of matrix operations required for computing the loss L(A,B) is constant and thereby independent of the value of p.
The proof is given in appendix A.6. Theorem 2. Let A∗ ∈ Rn×p and B∗ ∈ Rp×n such that A∗ is of rank r ≤ p. Under the conditions provided in Assumption 1, (A∗,B∗) define a local minimum of the proposed loss function iff they are of the form
$A^* = U_{1:p}D_p$ (18)
$B^* = D_p^{-1}U'_{1:p}\Sigma_{yx}\Sigma_{xx}^{-1},$ (19)
where the ith column of U1:p is a unit eigenvector of $\Sigma := \Sigma_{yx}\Sigma_{xx}^{-1}\Sigma_{xy}$ corresponding to the ith largest eigenvalue and Dp is a diagonal matrix with nonzero diagonal elements. In other words, A∗ contains ordered unnormalized eigenvectors of Σ corresponding to the p largest eigenvalues. Moreover, all the local minima are global minima, with the value of the loss function at those global minima being
$L(A^*,B^*) = p\,\mathrm{Tr}(\Sigma_{yy}) - \sum_{i=1}^{p}(p - i + 1)\lambda_i,$ (20)
where λi is the ith largest eigenvalue of Σ.
The proof is given in appendix A.7. Remark 6. Finally, the second and third assumptions we made in Assumption 1 can be relaxed by requiring only Σxx to be full rank. The output data can have a different dimension than the input. That is, Y ∈ Rn×m and X ∈ Rn′×m, where n ≠ n′. The reason is that the given loss function is structurally very similar to the MSE loss and can be represented as a Frobenius norm on the space of n × m matrices. In this case the covariance matrix $\Sigma := \Sigma_{yx}\Sigma_{xx}^{-1}\Sigma_{xy}$ is still n × n. Clearly, for under-constrained systems with n < n′ the full rank assumption of Σ holds. For the overdetermined case, where n′ > n, the second and third assumptions in Assumption 1 can be relaxed: we only require Σxx to be full rank, since this is the only matrix that is inverted in the theorems. Note that if p > min(n′, n), then ΛIp, the p × p diagonal matrix of eigenvalues of Σ for a p-index-set Ip, is bound to have some zeros and will have some rank r < p, which in turn results in a decoder A of rank r. However, Theorem 1 is proved for a decoder of any rank r ≤ p. Finally, following Theorem 2, the first r columns of the decoder A converge to ordered eigenvectors of Σ, while the p − r remaining columns span the kernel space of Σ. Moreover, Σ need not have distinct eigenvalues. In this case ∆Ir becomes a block diagonal matrix, where the blocks correspond to identical eigenvalues in ΛIr. In this case, the corresponding eigenvectors in A∗ are not unique, but they span the respective eigenspace. A numerical check of the vectorized form in Lemma 1 is sketched below.
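To make Lemma 1 concrete, the NumPy sketch below (our own) evaluates the vectorized form of Eq. (17) with a constant number of matrix products and checks it against the naive sum over i of Eq. (1).

```python
import numpy as np

def S(p):
    i, j = np.indices((p, p))
    return (p - np.maximum(i, j)).astype(float)   # (S_p)_ij = p - max(i, j) + 1 (1-based)

def T(p):
    return np.diag(np.arange(p, 0, -1, dtype=float))

def loss_vectorized(A, B, X, Y):
    """Eq. (17): p*Tr(Syy) - 2*Tr(A Tp B Sxy) + Tr(B' (Sp o A'A) B Sxx)."""
    p = A.shape[1]
    Sxx, Sxy, Syy = X @ X.T, X @ Y.T, Y @ Y.T
    return (p * np.trace(Syy)
            - 2.0 * np.trace(A @ T(p) @ B @ Sxy)
            + np.trace(B.T @ (S(p) * (A.T @ A)) @ B @ Sxx))

def loss_naive(A, B, X, Y):
    p = A.shape[1]
    Z = B @ X
    tot = 0.0
    for i in range(1, p + 1):
        Zi = Z.copy()
        Zi[i:, :] = 0.0                           # apply I_{i;p}
        tot += np.linalg.norm(Y - A @ Zi, 'fro') ** 2
    return tot

rng = np.random.default_rng(0)
n, m, p = 6, 40, 3
X = rng.standard_normal((n, m)); Y = X.copy()     # autoencoder case
A, B = rng.standard_normal((n, p)), rng.standard_normal((p, n))
assert np.isclose(loss_vectorized(A, B, X, Y), loss_naive(A, B, X, Y))
```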
" }, { "heading": "5 EXPERIMENTS", "text": "" }, { "heading": "5.1 EXPERIMENTAL SETUP", "text": "LAEs with Two Loss Functions We will verify the loss function L(A,B) defined in eq. (1) by setting the input matrix X ∈ Rn×m equal to the output matrix Y ∈ Rn×m (Y = X), where the linear autoencoder (LAE) becomes a solution to PCA. For comparison, we train another LAE using the MSE loss L̃(Ã, B̃) defined as $\tilde{L}(\tilde{A}, \tilde{B}) = \|Y - \tilde{A}\tilde{B}X\|_F^2$, where Y = X is also applied in our experiments.
The weights of the networks are initialized to random numbers with a small enough standard deviation (10−7 in our case). We choose to use the Adam optimizer with a scheduled learning rate (starting from 10−3 and ending with 10−6 in our case), which empirically benefits the optimization process. The two training processes are stopped at the same iteration at which one of the models first finds all of the principal directions. As a side note, we feed all data samples to the network at one time with batch size equal to m, although mini-batch implementations are clearly amenable.
Evaluation Metrics We use the classical PCA approach to get the ground truth principal direction matrix A∗ ∈ Rn×p, by conducting an eigenvalue decomposition (EVD) of XX′ ∈ Rn×n or a singular value decomposition (SVD) of X ∈ Rn×m. As a reminder, A ∈ Rn×p stands for the decoder weight matrix of a trained LAE given a loss function L. To measure the distance between A∗ and A, we propose an absolute cosine similarity (ACS) matrix inspired by mutual coherence (Donoho et al., 2005), which is defined as:
$\mathrm{ACS}_{ij} = \frac{|\langle A^*_i, A_j\rangle|}{\|A^*_i\| \cdot \|A_j\|},$ (21)
where A∗i ∈ Rn×1 denotes the ith ground truth principal direction, and Aj ∈ Rn×1 denotes the jth column of the decoder A, i, j = 1, 2, . . . , p. The elements of ACS ∈ Rp×p in eq. (21) take values in [0, 1], measuring pair-wise similarity across two sets of vectors. The absolute value absorbs the sign ambiguity of principal directions.
The performances of LAEs are evaluated by defining the following metrics:
$\mathrm{Ratio}_{TP} = \frac{1}{p}\sum_{i=1}^{p} \mathbb{I}[\mathrm{ACS}_{ii} > 1 - \varepsilon]$ (22)
$\mathrm{Ratio}_{FP} = \frac{1}{p}\sum_{\substack{i,j=1 \\ i \neq j}}^{p} \mathbb{I}[\mathrm{ACS}_{ij} > 1 - \varepsilon],$ and (23)
$\mathrm{Ratio}_{Total} = \mathrm{Ratio}_{TP} + \mathrm{Ratio}_{FP},$ (24)
where I is the indicator function and ε is a manual tolerance threshold (ε = 0.01 in our case). If two vectors have absolute cosine similarity over 1 − ε, they are deemed equal. Since some columns of the decoder may be correct principal directions but not in the right order, we introduce RatioTP and RatioFP in eqs. (22) and (23) to check the ratios of correct in-place and out-of-place principal directions, respectively. Then RatioTotal in eq. (24) measures the total ratio of the correctly obtained principal directions by the LAE regardless of the order. A short sketch of these metrics follows.
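The following NumPy sketch (our own) computes the ACS matrix of Eq. (21) and the ratio metrics of Eqs. (22)-(24) from the ground truth and learned decoder columns.

```python
import numpy as np

def acs(A_star, A):
    """Absolute cosine similarity matrix of Eq. (21) between two column sets."""
    U = A_star / np.linalg.norm(A_star, axis=0, keepdims=True)
    V = A / np.linalg.norm(A, axis=0, keepdims=True)
    return np.abs(U.T @ V)

def ratios(A_star, A, eps=0.01):
    """RatioTP / RatioFP / RatioTotal of Eqs. (22)-(24)."""
    M = acs(A_star, A) > 1.0 - eps     # boolean matches above the tolerance
    p = M.shape[0]
    tp = np.trace(M) / p               # correct and in the right position
    fp = (M.sum() - np.trace(M)) / p   # correct direction, wrong position
    return tp, fp, tp + fp
```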
However, when both losses get close enough to the optimum, L requires more iterations, since the optimizer is forced to find the right directions: it fully converges only after it has found all the principal directions in the right order.

Second, using the loss L results in finding more and more correct principal directions, with Ratio_TP continuously rising; ultimately it yields all correct and ordered principal directions, with Ratio_TP ending at 100%. Notice that occasionally and temporarily, some of the principal directions are found but not at their correct positions, which is indicated by the rise of Ratio_FP in the figure. However, as optimization continues they are shifted to the right columns, which results in Ratio_FP going back to zero and Ratio_TP reaching one. As for L̃, it fails to identify any principal directions; both Ratio_TP and Ratio_FP for L̃ stay at 0, which indicates that none of the columns of the decoder Ã aligns with any principal direction.

Third, as shown in the figure, while the optimizer finds almost all the principal directions rather quickly, it requires many more iterations to find the last few. This is because some eigenvalues of the empirical covariance matrix of the finite 2000 samples become very close (their difference becomes less than 1). The loss therefore has to get very close to the optimal value before its gradient can distinguish between the two directions.

Real Data: MNIST Experiments. We set the number of principal components (PCs) to 100, i.e., the dimension is to be reduced from 784 to 100. We also reconstruct using the top-10 columns found in this case. As shown in Fig. 3, the reconstruction performance of L is consistently better than that of L̃. This again reflects that L̃ does not identify PCs, while L is directly applicable to performing PCA without bells and whistles." }, { "heading": "6 CONCLUSION", "text": "In this paper, we have introduced a loss function for performing principal component analysis and linear regression using linear autoencoders. We have proved that optimizing the given loss results in the decoder matrix converging to the exact ordered, unnormalized eigenvectors of the sample covariance matrix. We have also demonstrated the claims on a synthetic data set of random samples drawn from a multivariate normal distribution and on the MNIST data set. There are several possible generalizations of this approach that we are currently working on. One is improving performance when the corresponding eigenvalues of two principal directions are very close, and another is generalizing the loss to tensor decomposition." }, { "heading": "APPENDIX", "text": "" }, { "heading": "A PROOFS", "text": "" }, { "heading": "A.1 PRELIMINARIES", "text": "Before we present the proofs of the main theorems, the following two lemmas introduce some notation and basic relations that are required for the proofs.

Lemma 2. The constant matrices Tp ∈ R^{p×p} and Sp ∈ R^{p×p} are defined as

(Tp)ij = (p − i + 1) δij, i.e. Tp = diag(p, p − 1, · · · , 1),

(Sp)ij = p − max(i, j) + 1, i.e.

Sp =
[ p     p−1   · · ·  2  1
  p−1   p−1   · · ·  2  1
  ...   ...   ...    2  1
  2     2     2      2  1
  1     1     1      1  1 ],   e.g.   S4 =
[ 4 3 2 1
  3 3 2 1
  2 2 2 1
  1 1 1 1 ].

Clearly, the diagonal matrix Tp is positive definite. Another matrix that will appear in the formulation is Ŝp := Tp^{−1} Sp Tp^{−1}, with

(Ŝp)ij = (Tp^{−1} Sp Tp^{−1})ij = 1 / (p − min(i, j) + 1), i.e.

Tp^{−1} Sp Tp^{−1} =
[ 1/p  1/p      · · ·  1/p      1/p
  1/p  1/(p−1)  · · ·  1/(p−1)  1/(p−1)
  ...  ...      ...    ...      ...
  1/p  1/(p−1)  · · ·  1/2      1/2
  1/p  1/(p−1)  · · ·  1/2      1 ],

e.g.

Ŝ4 =
[ 1/4 1/4 1/4 1/4
  1/4 1/3 1/3 1/3
  1/4 1/3 1/2 1/2
  1/4 1/3 1/2 1 ].

The following properties of the Hadamard product and the matrices Tp and Sp are used throughout:

1. For any arbitrary matrix A ∈ R^{n×p},

∑_{i=1}^{p} Ii;p = Tp, and (25)

∑_{i=1}^{p} Ii;p A′A Ii;p = Sp ◦ (A′A), (26)

where ◦ is the Hadamard (element-wise) product.

2. For any matrices M1, M2 ∈ R^{p×p} and diagonal matrices D, E ∈ R^{p×p},

D(M1 ◦ M2)E = (DM1E) ◦ M2 = M1 ◦ (DM2E).

Moreover, if Π1, Π2 ∈ R^{p×p} are permutation matrices then

Π1(M1 ◦ M2)Π2 = (Π1M1Π2) ◦ (Π1M2Π2).

3. Sp is invertible and its inverse is the symmetric tridiagonal matrix

(Sp^{−1})ij = { 1 if i = j = 1;  2 if i = j ≠ 1;  −1 if |i − j| = 1;  0 otherwise }, i.e.

Sp^{−1} =
[  1  −1  · · ·  0   0
  −1   2  −1    0   0
  ...      ...       ...
   0   0  −1    2  −1
   0   0   0   −1   2 ].

4. Sp is positive definite.

5. For any matrix A ∈ R^{n×p}, Sp ◦ (A′A) is positive semidefinite. If the (not necessarily full rank) A has no zero column, then Sp ◦ (A′A) is positive definite.

6. For any diagonal matrix D ∈ R^{p×p},

Sp ◦ D = TpD, and (27)
Ŝp ◦ D = Tp^{−1}D. (28)

7. Let D, E ∈ R^{p×p} be positive semidefinite matrices, where E has no zero diagonal element and D is of rank r ≤ p. Also, for any r ≤ p, let Jr = {i1, · · · , ir} (1 ≤ i1 < · · · < ir < n) be any ordered r-index set. Then D and E satisfy

E(Ŝp ◦ D) = (Ŝp ◦ E)D

if and only if the following two conditions hold:

(a) The matrix D is diagonal with p − r zero diagonal elements and r positive diagonal elements indexed by the set Jr; that is, (D)ii > 0 for any i ∈ Jr and the remaining elements of D are zero.

(b) For any i, j ∈ Jr with i ≠ j we have (E)ij = 0.

Clearly, if D is positive definite then Jr = Np and hence both D and E are diagonal.

Proof. The proofs of the properties are as follows.

1. Eq. (25) is trivial. For eq. (26), note that A Ii;p selects the first i columns of A (zeroing out the rest), and similarly Ii;p A′ selects the first i rows of A′ (zeroing out the rest). Therefore, Ii;p A′A Ii;p is a p × p matrix whose leading principal submatrix of order i (LPSi)¹ is the same as the LPSi of A′A (and the rest of its elements are zero). Hence, ∑_{i=1}^{p} Ii;p A′A Ii;p (counting backwards) adds the LPSp of A′A (i.e. A′A itself) to LPSp−1, which doubles the LPSp−1 part of the result; it then adds LPSp−2, which triples the LPSp−2 part of the result; and the process continues until, by the last addition, LPS1 has been added to the result p times. This is exactly the same as evaluating Sp ◦ (A′A).

¹For a p × p matrix, the leading principal submatrix of order i is the i × i matrix obtained by removing the last p − i rows and columns of the original matrix (Horn & Johnson (2012), p. 17).

2. This is a standard result (Horn & Johnson, 2012), and no proof is needed.

3. Directly compute Sp Sp^{−1}. Since (Sp^{−1})kj = 0 for all |k − j| > 1,

(Sp Sp^{−1})ij = ∑_{k=1}^{p} (Sp)ik (Sp^{−1})kj
= −(Sp)i,j−1 + 2(Sp)i,j − (Sp)i,j+1   for 2 ≤ j ≤ p − 1,
= −(Sp)i,p−1 + 2(Sp)i,p               for j = p,
= (Sp)i,1 − (Sp)i,2                   for j = 1.

Substituting (Sp)ij = p − max(i, j) + 1 gives

(Sp Sp^{−1})ij
= max(i, j − 1) − 2 max(i, j) + max(i, j + 1)   for 2 ≤ j ≤ p − 1,
= 1 − p + max(i, p − 1)                          for j = p,
= max(i, 2) − max(i, 1)                          for j = 1.

In every case the expression evaluates to 1 when i = j and to 0 otherwise, i.e. (Sp Sp^{−1})ij = (Ip)ij.

4. Firstly, note that Sp^{−1} is symmetric and nonsingular, so all its eigenvalues are real and nonzero.
It is also a diagonally dominant matrix (Horn & Johnson (2012), Def 6.1.9) since\n∀i ∈ {1, · · · , p} : Ci := |(S−1p )ii| ≥ ∑\nj=1,j 6=i |(S−1p )ij | =: Ri,\nwhere the inequality is strict for the first and the last row and it is equal for the rows in the middle. Moreover, by Gersgorin circle theorem (Horn & Johnson (2012), Thm 6.1.1) for every eigenvalue li of S−1p there exists i such that li ∈ [Ci − Ri, Ci + Ri]. Since ∀i : Ci ≥ Ri we have all the eigenvalues are non-negative. They are also nonzero, hence, S−1p is positive definite, which implies Sp is also positive definite.\n5. For any matrix A ∈ Rn×p, A′A is positive semidefinite. Also, Sp is positive definite so by Schur product theorem (Horn & Johnson (2012), Thm 7.5.3(a)), Sp ◦ (A′A) is positive semidefinite. Moreover, if all diagonal elements of A′A are positive (i.e. A has no zero column) by the extension of Schur product theorem (Horn & Johnson (2012), Thm 7.5.3(b)) it is positive definite. This can also be easily deduced using the Oppenheim inequality (Horn & Johnson (2012), Thm 7.8.16); that is for positive semidefinite matrices Sp and A′A: det(Sp) ∏ i(A ′A)ii ≤ det(Sp◦(A′A)). Since, Sp is positive definite, det(Sp) > 0\n(in fact it is 1 for any p) and if A′A has no zero diagonal then det(Sp ◦ (A′A)) > 0 and therefore, Sp ◦ (A′A) is positive definite.\n6. Clearly, the matrix Tp is achieved by setting the off-diagonal elements of Sp to zero. Hence, for any diagonal matrix D ∈ Rp×p: Sp ◦D = Tp ◦D . For the diagonal matrices Hadamard product and matrix product are interchangeable so the latter may also be written as TpD . The same argument applies for the second identity.\n7. This property can easily be proved by induction on p and careful bookkeeping of indices.\nLemma 3 (Simultaneous diagonalization by congruence). Let M1,M2 ∈ Rp×p, where M1 is positive definite and M2 is positive semidefinite. Also, let D ,E ∈ Rr×r be positive definite diagonal matrices with r ≤ p. Further, assume there is a C ∈ Rr×p of rank r ≤ p such that\nCM1C ′ =D and CM2C ′ =DE .\nThen there exists a nonsingular C̄ ∈ Rp×p that its first r rows are the matrix C and C̄M1C̄ ′ =D̄ and\nC̄M2C̄ ′ =D̄Ē ,\nwhere, D̄ = D̄ ⊕ Ir−p is a p× p diagonal matrix and Ē = E ⊕E is another p× p diagonal matrix, in which E ∈ Rp−r×p−r is a nonnegative diagonal matrix. Clearly, the rank of M2 is r plus the number of nonzero diagonal elements of E .\nProof. The proof is rather straightforward since this lemma is the direct consequence of Theorem 7.6.4 in Horn & Johnson (2012). The theorem basically states that if M1,M2 ∈ Rp×p is symmetric\nand M1 is positive definite then there exists an invertible S ∈ Rp×p such that SM1S′ = Ip and SM2S\n′ is a diagonal matrix with the same inertia as M2. Here, we have M2 that is positive semidefinite and C ∈ Rr×p of rank r ≤ p such that(\nD −1 2 C ) M1 ( D −1 2 C )′ =Ir and(\nD −1 2 C ) M2 ( D −1 2 C )′ =E .\nTherefore, since S is of full rank p and D −1 2 C is of rank r ≤ p, there exists p − r rows in S that are linearly independent of rows of D −1 2 C. Establish C̄ ∈ Rp×p by adding those p− r rows to C. Then C̄ has p linearly independent rows so it is nonsingular, and fulfills the lemma’s proposition that is\nC̄M1C̄ ′ =D̄ and C̄M2C̄ ′ =D̄Ē ,\nwhere, D̄ = D̄ ⊕ Ir−p is a p× p diagonal matrix and Ē = E ⊕E is another p× p diagonal matrix, in which E ∈ Rp−r×p−r is a nonnegative diagonal matrix.\nLemma 4. Let A and B define a critical point of L. Further, let V ∈ Rn×p and W ∈ Rp×n are such that ‖V ‖F , ‖W ‖F = O(ε) for some ε > 0. 
Then\nL(A + V ,B + W )− L(A,B) =〈V TpBΣxxB′,V 〉F −2〈ΣyxW ′Tp −A (Sp ◦ (BΣxxW ′ + WΣxxB′)) ,V 〉F +〈(Sp ◦ (A′A))WΣxx,W 〉F +O(ε3). (29)\nFurther, for W = W̄ := (Sp ◦ (A′A))−1 TpV ′ΣyxΣ−1xx , the above equation becomes\nL(A + V ,B + W̄ )− L(A,B) = Tr (V ′V TpBΣxxB′)− Tr ( V ′ΣV Tp (Sp ◦ (A′A))−1 Tp ) +2 Tr ( V ′A ( Sp ◦ ( BΣxyV Tp (Sp ◦ (A′A))−1\n+ (Sp ◦ (A′A))−1 TpV ′ΣyxB′ ))) +O(ε3). (30)\nFinally, in case the critical A is of full rank p and so, (A,B) = (UIpΠD, B̂(UIpΠD)), for the encoder direction V with ‖V ‖F = O(ε) and W = W̄ we have,\nL(A + V ,B + W )− L(A,B) = Tr ( V ′V Π′ΛIpΠTpD −2)− Tr (V ′ΣV TpD−2) +2 Tr ( V ′UIpΠD ( Sp ◦ ( D−1Π′U ′IpΣV D −2 )))\n+2 Tr ( V ′UIpΠD ( Sp ◦ ( D−2V ′ΣUIpΠD −1))) +O(ε3). (31)\nProof. As described in appendix B.1, the second order Taylor expansion for the loss L(A,B) is then given by eq. (63), i.e.\nL(A + V ,B + W )− L(A,B) =dAL(A,B)V + dBL(A,B)W + 1\n2 d2AL(A,B)V 2\n+dABL(A,B)V W + 1\n2 d2BL(A,B)W 2 +RV ,W (A,B).\nIf ‖V ‖F , ‖W ‖F = O(ε) then ‖R(V ,W )‖ = O(ε3). Moreover, when A and B define a critical point of L we have dAL(A,B)V = dBL(A,B)W = 0. By setting the derivatives d2AL(A,B)V 2, dABL(A,B)V W , d2BL(A,B)W 2 that are given by eq. (69), eq. (68), and eq. (66) respectively, the above equation simplifies to\nL(A + V ,B + W )− L(A,B) =〈V (Sp ◦ (BΣxxB′)) ,V 〉F −2〈ΣyxW ′Tp −A (Sp ◦ (BΣxxW ′ + WΣxxB′)) ,V 〉F +〈(Sp ◦ (A′A))WΣxx,W 〉F +O(ε3).\nNow, based on the first item in Corollary 1, BΣxxB′ is a p×p diagonal matrix, so based on eq. (27): Sp◦(BΣxxB′) = TpBΣxxB′. The substitution then yields eq. (29). Finally, in the above equation replace W with W̄ = (Sp ◦ (A′A))−1 TpV ′ΣyxΣ−1xx . We have\nL(A + V ,B + W̄ )− L(A,B) = = 〈V TpBΣxxB′,V 〉F − 2〈ΣyxΣ−1xxΣxyV Tp (Sp ◦ (A′A)) −1 Tp,V 〉F\n+2〈A ( Sp ◦ ( BΣxxΣ −1 xxΣxyV Tp(Sp◦(A′A)) −1 +(Sp◦(A′A))−1TpV ′ΣyxΣ−1xxΣxxB′ )) ,V〉F\n+〈(Sp ◦ (A′A)) (Sp ◦ (A′A))−1TpV ′ΣyxΣ−1xxΣxx, (Sp ◦ (A′A)) −1 TpV ′ΣyxΣ −1 xx 〉F +O(ε3)\n= Tr (V ′V TpBΣxxB ′)− Tr ( V ′ΣV Tp (Sp ◦ (A′A))−1 Tp ) + 2 Tr ( V ′A ( Sp ◦ ( BΣxyV Tp (Sp ◦ (A′A))−1 + (Sp ◦ (A′A))−1 TpV ′ΣyxB′ ))) +O(ε3),\nwhich is eq. (30). For the final equation, we have\nTpBΣxxB ′ =TpD −1Π′U ′Ip ΣyxΣ −1 xxΣxxΣ −1 xxΣxy︸ ︷︷ ︸UIpΠD−1\n=TpD −1Π′U ′IpΣUIp︸ ︷︷ ︸ΠD−1 = TpD−1 Π′ΛIpΠ︸ ︷︷ ︸D−1 =Π′ΛIpΠTpD −2, and (32)\nTp (Sp ◦ (A′A))−1 Tp =Tp ( Sp ◦ ( DΠ′U ′IpUIpΠ︸ ︷︷ ︸D ))−1 Tp\n=Tp ( Sp ◦D2 )−1 Tp = TpT −1 p D −2Tp = TpD −2. (33)\nReplace the above in eq. (30) and simplify: L(A + V ,B + W )− L(A,B) = Tr (V ′V TpBΣxxB′)− Tr ( V ′ΣV Tp (Sp ◦ (A′A))−1 Tp ) +2 Tr ( V ′A ( Sp ◦ ( BΣxyV Tp (Sp ◦ (A′A))−1\n+ (Sp ◦ (A′A))−1 TpV ′ΣyxB′ )))\n+O(ε3) eq. (32) ====⇒ eq. (33)\nL(A + V ,B + W )− L(A,B) = Tr ( V ′V Π′ΛIpΠTpD −2)− Tr (V ′ΣV TpD−2) +2 Tr ( V ′A ( Sp ◦ ( BΣxyV D −2 + D−2V ′ΣyxB ′)))\n+O(ε3) A=UIpΠD\n=========⇒ B=B̂(UIpΠD)\nL(A + V ,B + W )− L(A,B) = Tr ( V ′V Π′ΛIpΠTpD −2)− Tr (V ′ΣV TpD−2) +2 Tr ( V ′UIpΠD ( Sp ◦ ( D−1Π′U ′IpΣV D −2 )))\n+2 Tr ( V ′UIpΠD ( Sp ◦ ( D−2V ′ΣUIpΠD −1))) +O(ε3),\nwhich finalizes the proof." }, { "heading": "A.2 PROOF OF PROPOSITION 1", "text": "For this proof we use the first and second order derivatives for L(A,B) wrt B derived in Lemma 5. From eq. (66), we have that for a given A the second derivative wrt to B of the cost L(A,B) at B,\nand in the direction W is the quadratic form\nd2B2L(A,B)W 2 = 2 Tr (W ′ (Sp ◦A′A)WΣxx) .\nThe matrix Σxx is positive-definite and by Lemma 2, Sp ◦ A′A is positive-semidefinite. Hence, d2B2L(A,B)W\n2 is clearly non-negative for all W ∈ Rp×n. Therefore, L(A,B) is convex in coefficients of B for a fixed matrix A. 
Also the critical points of L(A,B) for a fixed A is a matrix B that satisfies ∀W ∈ Rp×n : dBL(A,B)W = 0 and hence, from eq. (64) we have\n−2〈TpA′Σyx − (Sp ◦ (A′A))BΣxx,W 〉F = 0. Setting W = TpA′Σyx − (Sp ◦ (A′A))BΣxx we have\nTpA ′Σyx − (Sp ◦ (A′A))BΣxx = 0.\nFor a fixed A, the cost L(A,B) is convex in B, so any matrix B that satisfies the above equation corresponds to a minimum of L(A,B). Further, if A has no zero column then by Lemma 2, Sp ◦ A′A is positive definite. Hence, ∀W ∈ Rp×n : d2B2L(A,B)W 2 = 2 Tr (W ′ (Sp ◦A′A)WΣxx) is positive. Therefore, the cost L(A,B) becomes strictly convex and the unique global minimum is achieved at B = B̂(A) as defined in eq. (6)." }, { "heading": "A.3 PROOF OF PROPOSITION 2", "text": "For this proof we use the first and second order derivatives for L(A,B) wrt A derived in Lemma 6. For a fixed B, based on eq. (69) the second derivative wrt to A of L(A,B) at A, and in the direction V is the quadratic form\nd2A2L(A,B)V 2 = 2〈V (Sp ◦ (BΣxxB′)) ,V 〉F = 2 Tr (V (Sp ◦ (BΣxxB′))V ′) .\nThe matrix Σxx is positive-definite and by Lemma 2, Sp ◦ (BΣxxB′) is positive-semidefinite. Hence, d2A2L1(A,B)V\n2is non-negative for all V ∈ Rn×p. Therefore, L(A,B) is convex in coefficients of A for a fixed matrix B. Based on eq. (67) the critical point of L(A,B) for a fixed B is a matrix A that satisfies for all V ∈ Rn×p\ndAL(A,B)V = 〈−2 (ΣyxB′Tp −A (Sp ◦ (BΣxxB′))) ,V 〉F = 0 =⇒ ΣyxB\n′Tp = A (Sp ◦ (BΣxxB′)) , which is eq. (7)." }, { "heading": "A.4 PROOF OF THEOREM 1", "text": "Before we start, a reminder on notation and some useful identities that are used throughout the proof. The matrix Σ := ΣyxΣ−1xxΣxy has an eigenvalue decomposition Σ = UΛU ′, where the ith column of U , denoted as ui, is an eigenvector of Σ corresponding to the ith largest eigenvalue of Σ, denoted as λi. Also, Λ = diag(λ1, · · · , λn) is the diagonal vector of ordered eigenvalues of Σ, with λ1 > λ2 > · · · > λn > 0. We use the following notation to organize a subset of eigenvectors of Σ into a rectangular matrix. Let for any r ≤ p, Ir = {i1, · · · , ir}(1 ≤ i1 < · · · < ir < n) be any ordered r−index set. Define UIr ∈ Rn×p as UIr = [ui1 , · · · ,uir ]. That is the columns of UIr are the ordered orthonormal eigenvectors of Σ associated with eigenvalues λi1 < · · · < λir . The following identities are then easy to verify:\nU ′IrUIr =Ir,\nΣUIr =UIrΛIr , (34) U ′IrΣUIr =ΛIr . (35)" }, { "heading": "The sufficient condition:", "text": "Let A ∈ Rn×pof rank r ≤ p and no zero column be given by eq. (8), B ∈ Rp×n given by eq. (9), and the accompanying conditions are met. Notice that U ′IrUIr = Ir implies that DC\n′CD = DC ′U ′IrUIrCD = A ′A, so\nB = D−1ΠCU ′ IrΣyxΣ −1 xx\nΠC :=(Sp◦(C′C))−1TpC′ =================⇒\nD−1D=Ip\nB = D−1 (Sp ◦ (C ′C))−1 D−1DTpC ′U ′IrΣyxΣ−1xx Lemma 2-2\n=======⇒ DTp=TpD\nB = ( Sp ◦ (DC ′CD)︸ ︷︷ ︸ )−1 TpDC\n′U ′Ir︸ ︷︷ ︸ΣyxΣ−1xx A′=D′C′U ′Ir=========⇒DC′CD=A′A B = (Sp ◦ (A′A))−1 TpA′ΣyxΣ−1xx = B̂(A),\nwhich is eq. (6). Therefore, based on Proposition 1, for the given A, the matrix B defines a critical point of L(A,B). For the gradient wrt to A, first note that with B given by eq. (9) we have\nBΣxxB ′ =D−1ΠCU ′ IrΣyxΣ −1 xxΣxxΣ −1 xxΣxyUIrΠ ′ CD −1\n=D−1ΠC U ′ IrΣyxΣ −1 xxΣxyUIr︸ ︷︷ ︸Π′CD−1 eq. (35)===⇒\nBΣxxB ′ =D−1ΠCΛIrΠ ′ CD −1. (36)\nThe matrix ΠC is a rectangular permutation matrix so ΠCΛIrΠ ′ C is diagonal so as D−1ΠCΛIrΠ ′ CD −1. Therefore, BΣxxB′ is diagonal and by eq. 
(27) in Lemma 2-6 we have\nSp ◦ (BΣxxB′) =TpBΣxxB′ = BΣxxB′Tp =D−1ΠCΛIrΠ ′ CD −1Tp A× ==⇒\nA (Sp ◦ (BΣxxB′)) =AD−1ΠCΛIrΠ′CD−1Tp A=UIrCD=======⇒\nA (Sp ◦ (BΣxxB′)) =UIrCDD−1ΠCΛIrΠ′CD−1Tp A=UIrCD=======⇒ =UIr CΠC︸ ︷︷ ︸ΛIrΠ′CD−1Tp CΠC=Ir======⇒ A (Sp ◦ (BΣxxB′)) =UIrΛIr︸ ︷︷ ︸Π′CD−1Tp eq. (34)===⇒\n=ΣUIrΠ ′ CD −1Tp =ΣyxΣ −1 xxΣxyUIrΠ ′ CD −1Tp\n=Σyx ( D−1ΠCU ′ IrΣyxΣ −1 xx )′︸ ︷︷ ︸Tp =ΣyxB ′Tp,\nwhich is eq. (7). Therefore, based on Proposition Proposition 2, for the given B, the matrix A define a critical point of L(A,B). Hence, A and B together define a critical point of L(A,B)." }, { "heading": "The necessary condition:", "text": "Based on Proposition 1 and Proposition 2, for A (with no zero column) and B, to define a critical point of L(A,B), B has to be B̂(A) given by eq. (6), and A has to satisfy eq. (7). That is\nA ( Sp ◦ ( B̂ΣxxB̂ ′ )) =ΣyxB̂ ′Tp B̂(A) on RHS =======⇒\nA ( Sp ◦ ( B̂ΣxxB̂ ′ )) =ΣxyΣ −1 xxΣyxATp(Sp ◦ (A′A))−1Tp\n×A′ ==========⇒ Σ=ΣxyΣ −1 xx Σyx\nA ( Sp ◦ ( B̂ΣxxB̂ ′ )) A′ =ΣATp(Sp ◦ (A′A))−1TpA′ Σ=UΛU ′′\n======⇒ ×U ,U ′×\nU ′A ( Sp ◦ ( B̂ΣxxB̂ ′ )) A′U =U ′UΛU ′ATp(Sp ◦ (A′A))−1TpA′U U ′U=In=====⇒\nU ′A ( Sp ◦ ( B̂ΣxxB̂ ′ )) A′U =Λ∆, (37)\nwhere, ∆ := U ′ATp(Sp ◦ (A′A))−1TpA′U is symmetric and positive semidefinite. The LHS of the above equation is symmetric so the RHS is symmetric too, so Λ∆ = (Λ∆)′ = ∆′Λ′ = ∆Λ. Therefore, ∆ commutes with the diagonal matrix of eigenvalues Λ. Since, eigenvalues are assumed to be distinct, ∆ has to be diagonal as well. By Lemma 2 Tp(Sp ◦ (A′A))−1Tp is positive definite and U is an orthogonal matrix. Therefore, r = rank(A) = rank(∆) = rank(U ′∆U), which implies that the diagonal matrix ∆, has r nonzero and positive diagonal entries. There exists an\nr−index set Ir corresponding to the nonzero diagonal elements of ∆. Forming a diagonal matrix ∆Ir ∈ Rr×r by filling its diagonal entries (in order) by the nonzero diagonal elements of ∆ we have\nU∆U ′ = UIr∆IrU ′ Ir Def of ∆ ====⇒\nUU ′ATp(Sp ◦ (A′A))−1TpA′UU ′ = UIr∆IrU ′Ir UU ′=In=====⇒\nATp(Sp ◦ (A′A))−1TpA′ = UIr∆IrU ′Ir , (38) which indicates that the matrix A has the same column space as UIr . Therefore, there exists a full rank matrix C̃ ∈ Rr×p such that A = UIrC̃. Since A has no zero column, C̃ has no zero column. Further, by normalizing the columns of C̃ we can write A = UIrCD, where D ∈ Rp×p is diagonal that contains the norms of columns of C̃. Therefore, A is exactly in the form given by eq. (8). The matrix C has to satisfy eq. (38) that is\nATp(Sp ◦ (A′A))−1TpA′ = UIr∆IrU ′Ir A=UIrC=====⇒\nUIrCDTp(Sp ◦ (A′A))−1TpDC ′U ′Ir = UIr∆IrU ′Ir ×UIr ,UIr×=========⇒\nA′A=DC′CD\nCDTp(Sp ◦ (DC ′CD))−1TpC ′D = ∆Ir Lemma 2-2 =====⇒\nCTpDD −1(Sp ◦ (C ′C))−1D−1DTpC ′ = ∆Ir =⇒\nCTp(Sp ◦ (C ′C))−1TpC ′ = ∆Ir . (39) Now that the structure of A has been identified, evaluate B̂(A) of eq. (6) by setting A = UIrCD, that is\nB =B̂(A) = (Sp ◦ (A′A))−1TpA′ΣyxΣ−1xx =(Sp ◦ (DC ′CD))−1TpDC ′U ′IrΣyxΣ−1xx Lemma 2-2 =====⇒\nB =D−1(Sp ◦ (C ′C))−1TpC ′U ′IrΣyxΣ−1xx , which by defining ΠC := (Sp ◦ (C ′C))−1 TpC ′ gives eq. (34) for B as claimed. While C has to satisfy eq. (39), A and B in the given form have to satisfy eq. (37) that provides another condition for C as follows. First, note that\nSp ◦ ( B̂ΣxxB̂ ′ ) = Sp ◦ ( D−1(Sp ◦ (C ′C))−1TpC ′U ′IrΣUIrCTp(Sp ◦ (C ′C))−1D−1 ) = Sp ◦ ( D−1(Sp ◦ (C ′C))−1TpC ′ΛIrCTp(Sp ◦ (C ′C))−1D−1 ) Lemma 2-2 =====⇒\n= D−1 ( Sp ◦ ( (Sp ◦ (C ′C))−1TpC ′ΛIrCTp(Sp ◦ (C ′C))−1 )) D−1\nNow, replace A and B in eq. (37) by their respective identities that we just derived. 
Performing the same process for eq. (37) we have\nU ′A ( Sp ◦ ( B̂ΣxxB̂ ′ ))\nA′U = Λ∆ A=UIrCD=======⇒ ×U ′,U×\nUIrC ( Sp ◦ ( (Sp ◦ (C ′C))−1TpC ′ΛIrCTp(Sp ◦ (C ′C))−1 )) C ′U ′Ir = UΛ∆U\n′ ×UIr===⇒ U ′Ir×\nC ( Sp ◦ ( (Sp ◦ (C ′C))−1TpC ′ΛIrCTp(Sp ◦ (C ′C))−1 )) C ′ = U ′IrUΛ∆U\n′UIr =⇒ C ( Sp ◦ ( (Sp ◦ (C ′C))−1TpC ′ΛIrCTp(Sp ◦ (C ′C))−1 )) C ′ = ΛIr∆Ir . (40)\nNow we have to find C such that it satisfies eq. (39) and eq. (40). To make the process easier to follow, lets have them in one place. The matrix C ∈ Rr×p have to satisfy\nCTp (Sp ◦ (C ′C))−1 TpC ′ =∆Ir and (41) C ( Sp ◦ ( (Sp ◦ (C ′C))−1TpC ′ΛIrCTp(Sp ◦ (C ′C))−1 )) C ′ =ΛIr∆Ir . (42)\nSince C is a rectangular matrix, solving above equations for C in this form seems intractable. We use a trick to temporarily extend C into an invertible square matrix as follows.\n• Temporarily, let M1 = Tp (Sp ◦ (C ′C))−1 Tp, and M2 = Sp ◦( (Sp ◦ (C ′C))−1TpC ′ΛIrCTp(Sp ◦ (C ′C))−1 ) . Then M1 is positive definite and\nM2 is positive semidefinite, so they are simultaneously diagonalizable by congruence that is based on Lemma 3 and eq. (41) and eq. (42), there exists a nonsingular C̄ ∈ Rp×p such that C consists of the first r rows of C̄ and\nC̄Tp (Sp ◦ (C ′C))−1 TpC̄ ′ =∆̄Ir , (43) C̄ ( Sp ◦ ( (Sp ◦ (C ′C))−1 TpC ′ΛIrCTp (Sp ◦ (C ′C)) −1 )) C̄ ′ =Λ̄Ir∆̄Ir , (44)\nwhere, ∆̄Ir = ∆Ir ⊕ Ir−p is a p× p diagonal matrix and Λ̄Ir = ΛIr ⊕Λ is another p× p diagonal matrix, in which Λ ∈ Rr−p×r−p is a nonnegative diagonal matrix. • Substitute ∆̄Ir from eq. (43) in eq. (44), then left multiply by C̄ ′−1, and right multiply by C̄ ′Ir;p:\nC̄ ( Sp ◦ ( (Sp ◦ (C ′C))−1 TpC ′ΛIrCTp (Sp ◦ (C ′C)) −1 )) C̄ ′ =\nΛ̄IrC̄Tp (Sp ◦ (C ′C)) −1 TpC̄ ′ C̄ ′Ir;p× =====⇒ ×C̄′−1\nC̄ ′Ir;pC̄ ( Sp ◦ ( (Sp ◦ (C ′C))−1 TpC ′ΛIrCTp (Sp ◦ (C ′C)) −1 )) =\nC̄ ′Ir;pΛ̄IrC̄Tp (Sp ◦ (C ′C)) −1 Tp.\n• Now we can revert back everything to C again. Since C consists of the first r rows of C̄ we have C̄ ′Ir;pC̄ = C ′C, and C̄ ′Ir;pΛ̄IrC̄ = C\n′ΛIrC, which turns the above equation into\nC ′C ( Sp ◦ ( Ip (Sp ◦ (C ′C))−1 TpC ′ΛIrCTp (Sp ◦ (C ′C)) −1 Ip )) =\nIpC ′ΛIrCTp (Sp ◦ (C ′C)) −1 Tp.\n• In the above equation, replace Ip by T−1p Tp in LHS and by T−1p (Sp ◦ (C ′C))T−1p Tp (Sp ◦ (C ′C))−1 Tp in the RHS. Use ΠC := (Sp ◦ (C ′C))−1 TpC ′ to shrink it into : C ′C ( Sp ◦ ( T−1p TpΠCΛIrΠ ′ CTpT −1 p )) =T−1p (Sp ◦ (C ′C))T−1p TpΠCΛIrΠ′CTp.\n• By the second property of Lemma 2 we can collect diagonal matrices T−1p ’s around Sp to arrive at\n(C ′C) ( Ŝp ◦ (TpΠCΛIrΠ′CTp) ) = ( Ŝp ◦ (C ′C) ) (TpΠCΛIrΠ ′ CTp) ,\nwhere, Ŝp := T−1p SpT −1 p .\n• Define p× p matrices E r := C ′C and D r := TpΠCΛIrΠ′CTp. Substitute in the above to arrive at: E r ( Ŝp ◦D r ) = ( Ŝp ◦ E r ) D r.\nBoth D r and E r in the above identity are positive semidefinite. Moreover, since by assumption C has no zero columns, E r has no zero diagonal element. Then the 7th property of Lemma 2 implies the following two conclusions:\n1. The matrix D r is diagonal. The rank of D r is r so it has exactly r positive diagonal elements and the rest is zero. This argument is true for T−1p D rT −1 p =\nΠCΛIrΠ ′ C . Since ΛIr is a diagonal positive definite matrix, the p × r matrix ΠC := (Sp ◦ (C ′C))−1 TpC ′ of rank r should have p − r zero rows. Let Jr be an r−index set corresponding to nonzero diagonal elements of ΠCΛIrΠ′C . Then the matrix ΠC [Jr,Nr] (r × r submatrix of ΠC consist of its Jr rows) is nonsingular. 2. For every i, j ∈ Jr and i 6= j, (E r)i,j = 0. 
Since E r := C ′C and so (E r)i,j is the inner product of ith and jth columns of C, we conclude that the columns of C[Nr, Jr] (r × r submatrix of C consist of its Jr columns) are orthogonal or in other words C[Nr, Jr]′C[Nr, Jr] is diagonal. The columns of C are normalized. Therefore, C[Nr, Jr]′C[Nr, Jr] = Ir and hence, C[Nr, Jr] is an orthogonal matrix.\n• We use the two conclusions to solve the original eq. (41) and eq. (42). First use ΠC := (Sp ◦ (C ′C))−1 TpC ′ to shrink them into :\nCTpΠC =∆Ir , (45)\nC (Sp ◦ (ΠCΛIrΠ′C))C ′ =ΛIr∆Ir . (46) Next, by the first conclusion, the matrix T−1p D rT −1 p = ΠCΛIrΠ ′ C is diagonal and so eq. (46) becomes\nCTpΠC︸ ︷︷ ︸ΛIrΠ′CC ′ =ΛIr∆Ir eq. (45)===⇒ ∆IrΛIrΠ ′ CC\n′ =ΛIr∆Ir =⇒ Π′CC\n′ = CΠC =Ir, (47) which is one of the two claimed conditions. What is left is to show that ΠC is a rectangular permutation matrix. From the first conclusion we also have ΠC has exactly r nonzero columns indexed by Jr so\nC[Nr, Jr]ΠC [Jr,Nr] =Ir. By the second conclusion C[Nr, Jr] is an orthogonal matrix therefore, ΠC [Jr,Nr] is the orthogonal matrix C[Nr, Jr]′. Moreover, we had T−1p D rT−1p = ΠCΛIrΠ′C is a p × p diagonal matrix with exactly r nonzero diagonal elements. Hence, ΠC [Nr, Jr]ΛIrΠ′C [Nr, Jr] is an r × r positive definite diagonal matrix with ΛIr having distinct diagonal elements, and ΠC [Nr, Jr] being orthogonal. Therefore, ΠC [Jr,Nr] (as well as C[Nr, Jr]) should be a square permutation matrix. Putting back the zero columns, we conclude that C should be such that ΠC := (Sp ◦ (C ′C))−1 TpC ′ is a rectangular permutation matrix and CΠC = Ir. Note that it is possible to further analyze these conditions and determine the exact structure of C. However, this is not needed in general for the critical point analysis of the next theorem except for the case where r = p and C is a square invertible matrix. In this case, square matrix ΠC is of full rank p, Jr = Np and therefore, C[Nr, Jr] = C[Np,Np] = C. Hence, C is any square permutation matrix Π, C ′C = Π′Π = Ip and ΠC := (Sp ◦ (C ′C))−1 TpC ′ = T−1p TpΠ′ = Π′, which verifies eq. (10) and eq. (11) for A and B when A is of full rank p." }, { "heading": "A.5 PROOF OF COROLLARY 1", "text": "1. We already show in the proof Theorem 1 that for critical (A,B) the matrix BΣxxB′ is given by eq. (36) that is\nBΣxxB ′ =D−1ΠCΛIrΠ ′ CD −1.\nThe matrix ΠC is a p × r rectangular permutation matrix so ΠCΛIrΠ′C is diagonal as well as D−1ΠCΛIrΠ ′ CD\n−1. Therefore, BΣxxB′ is diagonal. The diagonal matrix ΛIr is of rank r therefore, BΣxxB′ is of rank r.\n2. Again by Theorem 1 critical (A,B) is of the form given by eq. (8) and eq. (9) with the proceeding conditions on the invariance C. Therefore, the global map is\nG = AB = UIrCDD −1ΠCU ′ IrΣyxΣ −1 xx\n= UIrCΠCU ′ IrΣyxΣ −1 xx CΠC=Ir======⇒ G = UIrU ′ IrΣyxΣ −1 xx .\n3. Based on Baldi & Hornik (1989) (A,B) define a critical point of L̃(A,B) =∑p i=1 ‖Y −ABX‖ 2 F iff they satisfy\nA′ABΣxx =A ′Σyx and (48) ABΣxxB ′ =ΣyxB\n′. (49) Again by assumption (A,B) define a critical point of L(A,B) so by Theorem 1 they are of the form given by eq. (8) and eq. (9) with the proceeding conditions on the invariance C. Hence,\nA′ABΣxx =DC ′U ′IrUIr︸ ︷︷ ︸CDD−1︸ ︷︷ ︸ΠCU ′IrΣyx Σ−1xxΣxx︸ ︷︷ ︸\n=DC ′CΠC︸ ︷︷ ︸U ′IrΣyx CΠC=Ir======⇒ A′ABΣxx =DC ′U ′IrΣyx = A ′Σyx.\nHence, eq. (48) is satisfied. For the second equation we use the first property of this corollary that is BΣxxB′ is diagonal and satisfy eq. 
(7) of Proposition 2 that is\nA (Sp ◦ (BΣxxB′)) =ΣyxB′Tp BΣxxB ′ is diagonal ===========⇒\nATpBΣxxB ′ =ΣyxB ′Tp BΣxxB ′ is diagonal ===========⇒ ABΣxxB ′Tp =ΣyxB\n′Tp =⇒ ABΣxxB ′ =ΣyxB ′.\nHence, the second condition, eq. (49) is also satisfied. Therefore, any critical point of L(A,B) is a critical point of L̃(A,B)." }, { "heading": "A.6 PROOF OF LEMMA 1", "text": "Proof. We have\nL(A,B) = p∑ i=1 ‖Y −AIi;pBX‖2F = p∑ i=1 〈Y −AIi;pBX,Y −AIi;pBX〉F\n= p∑ i=1 (〈Y ,Y 〉F + 〈Y ,−AIi;pBX〉F + 〈−AIi;pBX,Y 〉F +〈−AIi;pBX,−AIi;pBX〉F )\n= p〈Y ,Y 〉F − 2〈Y ,A (\np∑ i=1 Ii;p\n) BX〉F +\np∑ i=1 〈AIi;pBX,AIi;pBX〉F eq. (25) ===⇒\n= pTr(Y Y ′)− 2 Tr (ATpBXY ′) + p∑ i=1 Tr (X ′B′Ii;pA ′AIi;pBX)\n= pTr(Σyy)− 2 Tr (ATpBΣxy) + Tr ( XX ′B′\np∑ i=1 (Ii;pA ′AIi;p)B\n) eq. (26) ===⇒\n= pTr(Σyy)− 2 Tr (ATpBΣxy) + Tr (B′ (Sp ◦ (A′A))BΣxx) , which is eq. (17)." }, { "heading": "A.7 PROOF OF THEOREM 2", "text": "Proof. The full rank matrices A∗ and B∗ given by eq. (18) and eq. (19) are clearly of the form given by Theorem 1 with Ip = Np := {1, 2, · · · , p}, and Πp = Ip. Hence, they define a critical point of L(A,B). We want to show that these are the only local minima, that is any other critical (A,B) is a saddle points. The proof is similar to the second partial derivative test. However, in this case the Hessian is a forth order tensor. Therefore, the second order Taylor approximation of the loss, derived in Lemma 4, is used directly. To prove the necessary condition, we show that at any other critical point (A,B), where the first order derivatives are zero, there exists infinitesimal direction along which the second derivative of loss is negative. Next, for the sufficient condition we show that the any critical point of the form (A∗,B∗) is a local and global minima." }, { "heading": "The necessary condition:", "text": "Recall that UIp is the matrix of eigenvectors indexed by the p−index set Ip and Π is a p × p permutation matrix. Since all the index sets Ir, r ≤ p are assumed to be ordered, the only way to have UNp = UIpΠ is by having Ip = Np and Π = Ip. Let A (with no zero column) and B define an arbitrary critical point of L(A,B). Then Based on the previous theorem, either A = UIrC with r < p or A = UIpΠD while in both cases B = B̂(A) given by eq. (6). If (A,B) is not of the form of (A∗,B∗) then there are three possibilities either 1) A = UIrCD with r < p, or 2)\nA = UIpΠD with Ip 6= Np or 2) A = UNpΠD but Π 6= Ip. The first two cases corresponds to not having the “right” and/or “enough” eigenvectors, and the third corresponds to not having the “right” ordering. We introduce the following notation and investigate each case separately. Let ε > 0 and Ui;j ∈ Rn×p be a matrix of all zeros except the ith column, which contains uj ; the eigenvector of Σ corresponding to the jth largest eigenvalue. Therefore,\nU ′i;jΣUi;j = U ′ i;jUΛU ′Ui;j = λjEi, (50)\nwhere, Ei ∈ Rp×p is matrix of zeros except the ith diagonal element that contains 1. In what follows, for each case we define a encoder direction V ∈ Rn×p with ‖V ‖F = O(ε), and set the decoder direction W ∈ Rp×n as W = W̄ := (Sp ◦ (A′A))−1 TpV ′ΣyxΣ−1xx . Then we use eq. (30) and eq. (31) of Lemma 4, to show that the given direction (V ,W ) infinitesimally reduces the loss and hence, in every case the corresponding critical (A,B) is a saddle point.\n1. For the case A = UIrCD, with r < p, note that based on the first item in Corollary 1, BΣxxB\n′ is a p×p diagonal matrix of rank r so it has p−r zero diagonal elements. 
Pick an i ∈ Np such that (BΣxxB′)ii is zero and a j ∈ Np \\ Ir. Set V = εUi;jD and W = W̄ . Clearly,\nV ′A =εDU ′i;jUIrCD = 0, (51)\nV ′V TpBΣxxB ′ =ε2DU ′i;jUi;j︸ ︷︷ ︸DTpBΣxxB′,\n=ε2DEiDTpBΣxxB ′ = ε2D2TpEi (BΣxxB ′) = 0 and (52)\nV ′ΣV =ε2DU ′i;jUΛU ′Ui;jD = ε 2λjD 2Ei. (53)\nNotice, ‖V ‖F , ‖W ‖F = O(ε), so based on eq. (30) of Lemma 4, we have L(A + V ,B + W )− L(A,B) = Tr (V ′V TpBΣxxB ′)− Tr ( V ′ΣV Tp (Sp ◦ (A′A))−1 Tp\n) + 2 Tr ( V ′A ( Sp ◦ ( BΣxyV Tp (Sp ◦ (A′A))−1 + (Sp ◦ (A′A))−1 TpV ′ΣyxB′\n))) +O(ε3)\neq. (51) ====⇒ eq. (52)\nL(A + V ,B + W )− L(A,B) = − Tr ( V ′ΣV Tp (Sp ◦ (A′A))−1 Tp ) +O(ε3)\neq. (53) =========⇒ A′A=DC′CD\nL(A + V ,B + W )− L(A,B) = − ε2λj Tr ( D2EiD −1 (( T−1p SpT −1 p︸ ︷︷ ︸ ) ◦ (C ′C) )−1 D−1 ) +O(ε3) =\n− ε2λj (( Ŝp ◦ (C ′C) )−1)\nii\n+O(ε3).\nTherefore, since ( Ŝp ◦ (C ′C) )−1 is a positive definite matrix, as ε→ 0, we have L(A +\nV ,B + W ) ≤ L(A,B). Hence, any (A,B) = (UIrCD, B̂(UIrCD)) with r < p is a saddle point.\n2. Next, consider the case where A = UIpΠD with Ip 6= Np. Then there exists at least one j ∈ Ip \\ Np and i ∈ Np \\ Ip such that i < j (so λi > λj). Let σ be the permutation corresponding to the permutation matrix Π. Also, let ε > 0 and Uσ(j);i ∈ Rn×p be a matrix of all zeros except the σ(j)th column, which contains ui; the eigenvector of Σ corresponding to the ith largest eigenvalue. Set V = εUσ(j);iD and W = W̄ . Then, since i /∈ Ip we have\nV ′UIp =εDU ′ σ(j);iUIp = 0, (54)\nV ′V =ε2DU ′σ(j);iUσ(j);iD = ε 2D2Eσ(j), and (55)\nV ′ΣV =ε2DU ′σ(j);iUΛU ′Uσ(j);iD = ε 2λiD 2Eσ(j). (56)\nSince ‖V ‖F , ‖W ‖F = O(ε), based on eq. (31) of Lemma 4, we have L(A + V ,B + W )− L(A,B) = Tr ( V ′V Π′ΛIpΠTpD −2)− Tr (V ′ΣV TpD−2) +2 Tr ( V ′UIpΠD ( Sp ◦ ( D−1Π′U ′IpΣV D −2 )))\n+2 Tr ( V ′UIpΠD ( Sp ◦ ( D−2V ′ΣUIpΠD −1))) +O(ε3)\neq. (54) =========⇒ eq. (55),eq. (56)\nL(A + V ,B + W )− L(A,B) = Tr ( ε2D2Eσ(j)Π ′ΛIpΠTpD −2)\n−Tr ( ε2λiD 2Eσ(j)TpD −2)+O(ε3)\n=ε2 Tr ( Eσ(j)Π ′ΛIpΠ︸ ︷︷ ︸Tp ) −ε2λi Tr ( Eσ(j)Tp ) +O(ε3)\n=ε2λj Tr ( Eσ(j)Tp ) − ε2λi Tr ( Eσ(j)Tp ) +O(ε3)\n=− ε2(p− σ(j) + 1) (λi − λj) +O(ε3). Note that in the above, the diagonal matrix Π′ΛIpΠ has the same diagonal elements as ΛIp but they are permuted by σ. So Eσ(j)Π ′ΛIpΠ selects σ(j) th diagonal element of Π′ΛIpΠ that is the j thdiagonal element of ΛIp , which is nothing but λj . Now, since i < j so λi > λj and σ(j) ≤ p, as ε→ 0, we have L(A+V ,B +W ) ≤ L(A,B). Hence, any (A,B) = (UIpΠD, B̂(UIpΠD)) is a saddle point.\n3. Finally consider the case where A = UNpΠD with Π 6= Ip. Since Π 6= Ip, the permutation σ of the set Np, corresponding to the permutation matrix Π, has at least a cycle (i1i2 · · · ik), where 1 < i1 < i2 · · · < ik < p and 2 ≤ k ≤ p. Hence, Π can be decomposed as Π = Π(i1i2···ik)Π̂, where Π̂ is the permutation matrix corresponding to other cycles of σ. The cycle (i1i2 · · · ik) can be decomposed into transpositions as (i1i2 · · · ik) = (ikik−1) · · · (iki1), which in matrix form is Π(i1i2···ik) = Π(iki1)Π(iki2) · · ·Π(ikik−1). Therefore, Π can be decomposed as Π = Π(iki1)Π̃, where Π̃ = Π(iki2) · · ·Π(ikik−1)Π̂. Note that Π(iki1), the permutation matrix corresponding to transposition (iki1) is a symmetric involutory matrix, i.e. Π2(iki1) = Ip. Set V = ε(Ui1;i1−Uik;ik)Π̃D and W = W̄ . Again we replace V and W in eq. (31) of Lemma 4. There are some tedious steps to simplify the equation, which is given in appendix A.7.1. The final result is as follows. With the given V and W , the third and forth terms of the RHS of eq. 
(31) are canceled and the first two terms are simplified to\nTr ( V ′V Π′ΛNpΠTpD −2) =ε2λik(p− i1 + 1) + ε2λi1(p− im + 1), and (57) Tr ( V ′ΣV TpD\n−2) =ε2λi1(p− i1 + 1) + ε2λik(p− im + 1), (58) in which, m = max{k − 1, 2}. This means that If the selected cycle is just a transposition (i1i2) then im = i2. But if for the selected cycle (i1i2 · · · ik), k is greater than 2 then im = ik−1. Using above equations, eq. (31) yields\nL(A+V ,B+W )−L(A,B)=Tr ( V ′V Π′ΛIpΠTpD −2)−Tr (V ′ΣV TpD−2)+O(ε3) =ε2λik(p− i1 + 1) + ε2λi1(p− im + 1) −ε2λi1(p− i1 + 1)− ε2λik(p− im + 1) +O(ε3) =− ε2i1λik − ε2imλi1 + ε2i1λi1 + ε2imλik =− ε2 ((λi1 − λik)(im − i1)) +O(ε3). (59)\nBy the above definition of im, we have im − i1 > 0 and since i1 < ik, λi1 − λik > 0. Hence, the first term in the above equation is negative and as ε → 0, we have L(A + V ,B +W )−L(A,B) < 0. Therefore, any any (A,B) = (UIpΠD, B̂(UIpΠD)) with Π 6= Ip is a saddle point." }, { "heading": "The Sufficient condition:", "text": "From Lemma 1 we know that the loss L(A,B) can be written in the form of eq. (17). Use this equation to evaluate loss at (A∗,B∗) = ( UNpDp,D −1 p U ′ NpΣyxΣ −1 xx ) as follows\nL(A∗,B∗) = pTr(Σyy)− 2 Tr (A∗TpB∗Σxy) + Tr ( B∗ ′ ( Sp ◦ ( A∗ ′ A∗ )) B∗Σxx ) =⇒\nL(A∗,B∗) = pTr(Σyy)− 2 Tr ( UNpDpTpD −1 p U ′ Np ΣyxΣ −1 xxΣxy︸ ︷︷ ︸ ) + Tr (( Sp ◦ ( DpU\n′ NpUNp︸ ︷︷ ︸Dp\n)) D−1p U ′ Np ΣyxΣ −1 xxΣxxΣ −1 xxΣxy︸ ︷︷ ︸UNpD−1p ) =⇒\nL(A∗,B∗) = pTr(Σyy)− 2 Tr ( TpDpD\n−1 p︸ ︷︷ ︸U ′NpΣUNp︸ ︷︷ ︸ ) + Tr (( Sp ◦ (Ip)︸ ︷︷ ︸ ) DpD −1 p︸ ︷︷ ︸U ′NpΣUNp︸ ︷︷ ︸D−1p Dp︸ ︷︷ ︸ ) =⇒\nL(A∗,B∗) = pTr(Σyy)− 2 Tr ( TpΛNp ) + Tr ( TpΛNp ) =⇒\nL(A∗,B∗) = pTr(Σyy)− Tr ( TpΛNp ) = p Tr(Σyy)− p∑ i=1 (p− i+ 1)λi,\nwhich is eq. (20), as claimed. Notice that the above value is independent of the diagonal matrix Dp. From the necessary condition we know that any critical point not in the form of (A∗,B∗) is a saddle point. Hence, due to the convexity of the loss at least one (A∗,B∗) is a global minimum but since the value of the loss at (A∗,B∗) is independent of Dp all these critical points yield the same value for the loss. Therefore, any critical point in the form of (A∗,B∗) is a local and global minima." }, { "heading": "A.7.1 SUPPLEMENTARY DETAILS OF THE PROOF OF THEOREM 2", "text": "To verify eq. (57), eq. (58), and eq. (59) in the proof of Theorem 2, we want to replace V and W in eq. (31) of Lemma 4 with V = ε(Ui1;i1 −Uik;ik)Π̃D and W = W̄ and simplify. eq. (31) is\nL(A + V ,B + W )− L(A,B) = Tr ( V ′V Π′ΛIpΠTpD −2)− Tr (V ′ΣV TpD−2) +2 Tr ( V ′UIpΠD ( Sp ◦ ( D−1Π′U ′IpΣV D −2 )))\n+2 Tr ( V ′UIpΠD ( Sp ◦ ( D−2V ′ΣUIpΠD −1))) +O(ε3).\nWe investigate each term on the RHS separately. but before note that\nEiΠ̃TpΠ̃ ′ = ( Π̃TpΠ̃ ′ ) i,i Ei = (Tp)σ̃−1(i),σ̃−1(i) Ei = (p− σ̃−1(i) + 1)Ei, (60)\nwhere, σ̃ and its function inverse σ̃−1 are permutations corresponding to Π̃ and Π̃′ respectively. Π̃TpΠ̃\n′ is a diagonal matrix where diagonal elements of Tp are ordered based on σ̃−1. Moreover, recall that we decomposed the permutation matrix Π in A with a cycle (i1i2 · · · ik) as Π = Π(i1ik) Π(iki2) · · ·Π(ikik−1)Π̂︸ ︷︷ ︸ = Π(i1ik)Π̃, where i1, i2, · · · ik are fixed points of Π̂. Therefore, with σ̃ being the permutation corresponding to Π̃ we have\nσ̃(i1) = i1 =⇒ σ̃−1(i1) = i1, and (61) σ̃(ik−1) = im =⇒ σ̃−1(ik) = im, (62)\nwhere, m = max{k − 1, 2}. This means that If the selected cycle is just a transposition (i1i2) then im = i2. But if for the selected cycle (i1i2 · · · ik), k is greater than 2 then im = ik−1. 
For the first term we have\nV ′V =ε2DΠ̃′(U ′i1;i1 −U ′ik;ik)(Ui1;i1 −Uik;ik)Π̃D U ′i1;i1 Uik;ik=0 =========⇒\nV ′V =ε2DΠ̃′(U ′i1;i1Ui1;i1 + U ′ ik;ik Uik;ik)Π̃D U ′i1;i1 Ui1;i1=Ei1 ===========⇒ U ′ik;ik Uik;ik=Eik\nV ′V =ε2DΠ̃′(Ei1 + Eik)Π̃D Π̃′(Ei1+Eik )Π̃ is diagonal===============⇒ V ′V =ε2Π̃′(Ei1 + Eik)Π̃D 2 =⇒\nTr ( V ′V Π′ΛNpΠTpD −2) = Tr(︷ ︸︸ ︷V ′V D−2Π̃′Π(i1ik)ΛNpΠ(i1ik)Π̃Tp)\n= Tr ε2Π̃′(Ei1 + Eik) Π̃D2D−2Π̃′︸ ︷︷ ︸ Ip Π(i1ik)ΛNpΠ(i1ik)Π̃Tp =ε2 Tr ( (Ei1 + Eik)Π(i1ik)ΛNpΠ(i1ik)Π̃TpΠ̃ ′ )\n=ε2 Tr ( λikEi1Π̃TpΠ̃ ′ + λi1EikΠ̃TpΠ̃ ′ ) eq. (60) ===⇒\nTr ( V ′V Π′ΛNpΠTpD −2) =ε2λik(p− σ̃−1(i1) + 1)Ei1 + ε2λi1(p− σ̃−1(ik) + 1)Eik eq. (61)===⇒eq. (62) Tr ( V ′V Π′ΛNpΠTpD\n−2) =ε2λik(p− i1 + 1)Ei1 + ε2λi1(p− im + 1)Eik , which is eq. (57) as claimed.\nFor the second term we have\nV ′ΣV =ε2DΠ̃′(U ′i1;i1 −U ′ik;ik)UΛU ′(Ui1;i1 −Uik;ik)Π̃D =ε2DΠ̃′(U ′i1;i1UΛU\n′Ui1;i1︸ ︷︷ ︸ λi1Ei1 −U ′i1;i1UΛU ′Uik;ik︸ ︷︷ ︸ 0\n−U ′ik;ikUΛU ′Ui1;i1︸ ︷︷ ︸ 0 +U ′ik;ikUΛU ′Uik;ik︸ ︷︷ ︸\nλikEik\n)Π̃D\n=ε2Π̃′(λi1Ei1 + λikEik)Π̃D 2 =⇒ Tr ( V ′ΣV TpD −2) = Tr(ε2Π̃′(λi1Ei1 + λikEik)Π̃D2TpD−2) =ε2 Tr ( λi1Ei1Π̃TpΠ̃ ′ + λikEikΠ̃TpΠ̃ ′ ) eq. (60) ===⇒\nTr ( V ′ΣV TpD −2) =ε2λi1(p− σ̃−1(i1) + 1) + ε2λik(p− σ̃−1(ik) + 1) eq. (61)===⇒eq. (62) Tr ( V ′ΣV TpD\n−2) =ε2λi1(p− i1 + 1) + ε2λik(p− im + 1), which is eq. (58) as claimed.\nFinally, we have to show that the third and the forth terms of the eq. (31) are canceled. First, observe that\nTr ( V ′UNpΠD ( Sp ◦ ( D−1Π′U ′NpΣV D −2 ))) =\nTr ( εDΠ̃′(U ′i1;i1 −U ′ik;ik)UNpΠ ( Sp ◦ ( Π′U ′NpΣV D −2 ))) =\nεTr ( Π̃′(Ei1 −Eik)Π ( Sp ◦ ( Π′U ′NpΣV D −2 )) D ) =\nε2 Tr ( Π̃′(Ei1 −Eik)Π ( Sp ◦ ( Π′(λi1Ei1 − λikEik)Π̃ ))) =\nε2 Tr ( (Ei1 −Eik) (( ΠSpΠ̃ ′ ) ◦ ( ΠΠ′(λi1Ei1 − λikEik)Π̃Π̃′ ))) =\nε2 Tr (( ΠSpΠ̃ ′ ) ◦ ((Ei1 −Eik) (λi1Ei1 − λikEik)) ) =\nε2 Tr (( ΠSpΠ̃ ′ ) ◦ (λi1Ei1 + λikEik) ) , and\nTr ( V ′UNpΠD ( Sp ◦ ( D−2V ′ΣUNpΠD −1))) =\nTr ( εDΠ̃′(U ′i1;i1 −U ′ik;ik)UNpΠ ( Sp ◦ ( D−1V ′ΣUNpΠD −1))) = εTr ( Π̃′(Ei1 −Eik)Π ( Sp ◦ ( D−1V ′ΣUNpΠ ))) =\nε2 Tr ( (Ei1 −Eik)Π ( Sp ◦ ( Π̃′(λi1Ei1 − λikEik)Π )) Π̃′ ) =\nε2 Tr ( (Ei1 −Eik) (( ΠSpΠ̃ ′ ) ◦ ( ΠΠ̃′(λi1Ei1 − λikEik)ΠΠ̃′ ))) =\nε2 Tr ( (Ei1 −Eik) (( ΠSpΠ̃ ′ ) ◦ ( Π(i1ik)(λi1Ei1 − λikEik)Π(i1ik) ))) =\nε2 Tr ( (Ei1 −Eik) (( ΠSpΠ̃ ′ ) ◦ ((λi1Eik − λikEi1)) )) =\nε2 Tr (( ΠSpΠ̃ ′ ) ◦ ((Ei1 −Eik)(λi1Eik − λikEi1)) ) =\n−ε2 Tr (( ΠSpΠ̃ ′ ) ◦ (λi1Eik + λikEi1) ) =\n−ε2 Tr (( ΠSpΠ̃ ′ ) ◦ (λi1Eik + λikEi1) ) .\nNow, note that in both cases the matrices that are multiplied elementwise with ΠSpΠ̃′ are diagonal and hence, we only need to look at diagonal elements of ΠSpΠ̃′. Moreover,\nΠSpΠ̃ ′ = Π(i1ik)Π(iki2) · · ·Π(ikik−1)Π̂SpΠ̂′Π(ikik−1) · · ·Π(iki2),\nwhere, i1 · · · ik are fixed points of permutation corresponding to Π̂ so Π̂SpΠ̂′ has the same values at diagonal positions i1 and ik as the original matrix Sp. The only permutation that is only on the left side is Π(i1ik) which exchanges the i1 and ik rows of Sp. Since Sp is such that the elements at each row before the diagonal element are the same and ik > i1, we have the i1 and ik diagonal elements of ΠSpΠ̃′ have the same value. Let that value be denoted as s. Then the sum of the above two equations yields m(λi1 + λik)−m(λi1 + λik) = 0, as claimed." 
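Before turning to the derivative computations of Appendix B, the closed form of Theorem 2 lends itself to a quick numerical sanity check. The snippet below is our own sketch for the autoencoder case; it reuses the hypothetical nested_pca_loss helper sketched in the Section 5.1 aside and verifies the optimal value of eq. (20) at the minimizer of eqs. (18)-(19).

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 8, 500, 3
X = rng.standard_normal((n, m))
Y = X                                                   # autoencoder case

Sxx, Sxy, Syy = X @ X.T, X @ Y.T, Y @ Y.T
Sigma = Sxy.T @ np.linalg.solve(Sxx, Sxy)               # Sigma = S_yx S_xx^{-1} S_xy
lam, U = np.linalg.eigh(Sigma)
lam, U = lam[::-1], U[:, ::-1]                          # eigenpairs in decreasing order

Dp = np.diag(rng.uniform(0.5, 2.0, size=p))             # arbitrary nonzero column scales
A_star = U[:, :p] @ Dp                                  # eq. (18)
B_star = np.linalg.inv(Dp) @ U[:, :p].T @ Sxy.T @ np.linalg.inv(Sxx)  # eq. (19)

lhs = nested_pca_loss(A_star, B_star, X, Y)
rhs = p * np.trace(Syy) - sum((p - i) * lam[i] for i in range(p))     # eq. (20), 0-indexed
assert np.isclose(lhs, rhs)
```

Permuting the columns of A_star (with B recomputed via eq. (6)) gives a strictly higher loss, consistent with the claim that only the ordered arrangement is a minimum.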
}, { "heading": "B DERIVATIVES OF THE LOSS FUNCTION", "text": "" }, { "heading": "B.1 FIRST AND SECOND ORDER FRÉCHET DERIVATIVE", "text": "In order to derive and analyze the critical points of the cost function which is a real-valued function of matrices we use the first and second order Fréchet derivatives as described in chapter 4 of Zeidler (1995). For a function f : Rn×m → R the first order Fréchet derivative at the point A ∈ Rn×m is a linear functional df(A) : Rn×m → R such that\nlim V→0 |f(A + V )− f(A)− df(A)V | ‖V ‖F = 0,\nwhere we used the shorthand df(A)V ≡ (df(A))(V ). Similarly, the 2nd derivative is a bilinear functional d2f(A) : Rn×m × Rn×m → R such that\nlim V→0 |df(A + V )K − df(A)K − d2f(A)V K| ‖V ‖F = 0,\nfor all ‖K‖F ≤ 1, where again d2f(A)V K ≡ (d2f(A))(V ,K). The generalized Taylor formula then becomes:\nf(A + V ) = f(A) + df(A)V + 1\n2 d2f(A)V 2 + o(‖V ‖2),\nMoreover, we derive functions ∇f : Rn×m → Rn×m and H(A) : Rn×m → Rn×m such that df(A)V = 〈∇f(A),V 〉F and d2f(A)V 2 = 〈H(A)V ,V 〉F , where again H(A)V ≡ H(A)(V ). Then clearly, A ∈ Rn×mis a critical point of f iff ∇f(A) = 0 and for such As the sign of the bilinear form 〈H(A)V ,V 〉over directions V determines the type of the critical point.\nExtending the generalized Taylor theorem of Zeidler (1995), the second order Taylor expansion for the loss L(A,B) is then given by\nL(A + V ,B + W )− L(A,B) =dAL(A,B)V + dBL(A,B)W + 1\n2 d2AL(A,B)V 2\n+dABL(A,B)V W + 1\n2 d2BL(A,B)W 2 +RV ,W (A,B),\n(63)\nwhere, if ‖V ‖F , ‖W ‖F = O(ε) then ‖R(V ,W )‖ = O(ε3). Clearly, as at critical points where dAL(A,B)V +dBL(A,B)W = 0, as ε→ 0 we haveRV ,W (A,B)→ 0 and the sign of the sum of the second order partial Fréchet derivatives determines the type of the critical point very much similar to second partial derivative test for two variable functions. However, here for local minima we have to show the sign is positive in all directions and for saddle points have to show the sign is positive in some directions and negative at least in on direction. Finally, note that the smoothness of the loss entails that Fréchet derivative and directional derivative (Gateaux) both exist and (foregoing some subtleties in definition) are the same." }, { "heading": "B.2 FIRST AND SECOND ORDER DERIVATIVE OF THE LOSS WRT TO B", "text": "Lemma 5. The first and second (partial Fréchet ) derivative of the loss L(A,B) wrt to B is derived as follows.\ndBL(A,B)W= −2 Tr (W ′ (TpA′Σyx − (Sp ◦ (A′A))BΣxx)) (64) = −2〈TpA′Σyx − (Sp ◦ (A′A))BΣxx,W 〉F . (65)\nd2B2L(A,B)W 2 = 2〈(Sp ◦ (A′A))WΣxx,W 〉F = 2 Tr (W ′ (Sp ◦ (A′A))WΣxx) . (66)\nProof. Directly compute\nL(A,B + W ) = p∑ i=1 ‖Y −AIi;p(B + W )X‖2F\n= p∑ i=1 〈Y −AIi;p(B + W )X,Y −AIi;p(B + W )X〉F\n= p∑ i=1 〈Y −AIi;pBX,Y −AIi;pBX〉F + p∑ i=1 〈Y −AIi;pBX,−AIi;pWX〉F\n+ p∑ i=1 〈−AIi;pWX,Y −AIi;pBX〉F + p∑ i=1 〈−AIi;pWX,−AIi;pWX〉F\n= L(A,B)− p∑ i=1 2〈Y −AIi;pBX,AIi;pWX〉+O(‖W ‖2F ) =⇒\nL(A,B + W )− L(A,B) = −2 p∑ i=1 〈Y −AIi;pBX,AIi;pWX〉F +O(‖W ‖2F ) W→0 =⇒\ndBL(A,B)W = −2 p∑ i=1 Tr(X ′W ′Ii;pA ′(Y −AIi;pBX))\n= −2 Tr ( W ′ (( p∑ i=1 Ii;p ) A′Y X ′− ( p∑ i=1 Ii;pA ′AIi;p ) BXX ′ )) = −2 Tr (W ′ (TpA′Y X ′ − (Sp ◦ (A′A))BXX ′)) ,\nwhich can be written as the given form. For the second derivative wrt B we have\ndBL(A,B)W = −2〈TpA′Σyx − (Sp ◦ (A′A))BΣxx,W 〉F =⇒ dBL(A,B + W̄ )W = −2〈TpA′Σyx − (Sp ◦ (A′A)) (B + W̄ )Σxx,W 〉F\n= −2〈TpA′Σyx − (Sp ◦ (A′A))BΣxx,W 〉F + 2〈(Sp ◦ (A′A)) W̄Σxx,W 〉F =⇒\ndBL(A,B + W̄ )W − dBL(A,B)W = 2〈(Sp ◦ (A′A)) W̄Σxx,W 〉F , which by having W̄ → 0 results in the second order partial derivative." 
}, { "heading": "B.3 FIRST AND SECOND ORDER DERIVATIVE OF THE LOSS WRT TO A", "text": "Lemma 6. The first and second (partial Fréchet ) derivative of the loss L(A,B) wrt to A is derived as follows.\ndAL(A,B)V = −2〈ΣyxB′Tp −A (Sp ◦ (BΣxxB′)) ,V 〉F , (67) d2ABL(A,B)V W= −2〈ΣyxW ′Tp −A (Sp ◦ (BΣxxW ′))−A (Sp ◦ (WΣxxB′)) ,V 〉F ,\n(68)\nd2A2L(A,B)V 2= 2〈V (Sp ◦ (BΣxxB′)) ,V 〉F . (69)\nProof. Directly compute\nL(A + V ,B) = p∑ i=1 〈Y − (A + V )Ii;pBX,Y − (A + V )Ii;pBX〉F\n= p∑ i=1 〈Y −AIi;pBX,Y −AIi;pBX〉F − p∑ i=1 〈Y −AIi;pBX,V Ii;pBX〉F\n+ p∑ i=1 〈−V Ii;pBX,Y −AIi;pBX〉F + p∑ i=1 〈−V Ii;pBX,−V Ii;pBX〉F\n= L(A,B)− p∑ i=1 2〈Y −AIi;pBX,V Ii;pBX〉F + p∑ i=1 〈V Ii;pBX,V Ii;pBX〉F\nL(A + V ,B)− L(A,B) = − p∑ i=1 2〈Y −AIi;pBX,V Ii;pBX〉F +O(‖V ‖2F ) V→0 =⇒\ndAL(A,B)V = − p∑ i=1 2〈Y −AIi;pBX,V Ii;pBX〉F\n= −2 Tr(V ′(ΣyxB′ p∑ i=1 Ii;p −A p∑ i=1 Ii;pBΣxxB ′Ii;p)) =⇒\ndAL(A,B)V = −2〈ΣyxB′Tp −A (Sp ◦ (BΣxxB′)) ,V 〉F =⇒ dAL(A + V̄ ,B)V = −2〈ΣyxB′Tp − (A + V̄ ) (Sp ◦ (BΣxxB′)) ,V 〉F\ndAL(A + V̄ ,B)V − dAL(A,B)V = 2〈V̄ (Sp ◦ (BΣxxB′)) ,V 〉F V̄→0=⇒ d2A2L(A,B)(V , V̄ ) = 2〈V̄ (Sp ◦ (BΣxxB′)) ,V 〉F =⇒\nd2A2L(A,B)V 2 = 2〈V (Sp ◦ (BΣxxB′)) ,V 〉F\ndAL(A,B + W )V =− 2〈Σyx(B + W )′Tp,V 〉F −2〈−A (Sp ◦ ((B + W )Σxx(B + W )′)) ,V 〉F −2〈ΣyxB′Tp −A (Sp ◦ (BΣxxB′)) ,V 〉F =dAL(A,B)V − 2〈ΣyxW ′Tp,V 〉F −2〈−A (Sp ◦ (BΣxxW ′))−A (Sp ◦ (WΣxxB′)) ,V 〉F +O(‖W ‖2F ) =⇒\ndAL(A,B + W )V − dAL(A,B)V = −2〈ΣyxW ′Tp,V 〉F − 2〈−A (Sp ◦ (BΣxxW ′))−A (Sp ◦ (WΣxxB′)) ,V 〉F +O(‖W ‖2F ) W→0 =⇒\nd2ABL(A,B)V W = −2〈ΣyxW ′Tp −A (Sp ◦ (BΣxxW ′))−A (Sp ◦ (WΣxxB′)) ,V 〉F ." } ]
2019
NEURAL NETWORKS FOR PRINCIPAL COMPONENT ANALYSIS: A NEW LOSS FUNCTION PROVABLY YIELDS ORDERED EXACT EIGENVECTORS
SP:7c442073ca3d80b472665b8bd9ec3534ef010950
[ "This paper introduces a hierarchical extension to existing work in vision-based model predictive control. Here, a hierarchical model is optimised to find sub-goals that minimise the planning cost (bottleneck states), so as to allow for improved planning to goal states expressed in higher dimensional state spaces. As expected, results show that this hierarchy improves tasks execution success rates. ", "This paper proposes a method, hierarchical visual foresight (HVF) that learns to break down the long horizon tasks into short horizon segments. It first generates the subgoals conditioned on the main goal. These subgoals are optimized to have meaningful states and used for planning. The experiments on Maze navigation, simulated desk manipulation, and real robot manipulation show significant performance gain over the planning method without subgoals and model-free RL. " ]
Video prediction models combined with planning algorithms have shown promise in enabling robots to learn to perform many vision-based tasks through only selfsupervision, reaching novel goals in cluttered scenes with unseen objects. However, due to the compounding uncertainty in long horizon video prediction and poor scalability of sampling-based planning optimizers, one significant limitation of these approaches is the ability to plan over long horizons to reach distant goals. To that end, we propose a framework for subgoal generation and planning, hierarchical visual foresight (HVF), which generates subgoal images conditioned on a goal image, and uses them for planning. The subgoal images are directly optimized to decompose the task into easy to plan segments, and as a result, we observe that the method naturally identifies semantically meaningful states as subgoals. Across four simulated vision-based manipulation tasks, we find that our method achieves more than 20% absolute performance improvement over planning without subgoals and model-free RL approaches. Further, our experiments illustrate that our approach extends to real, cluttered visual scenes.
[ { "affiliations": [], "name": "SUBGOAL GENERATION" }, { "affiliations": [], "name": "Suraj Nair" }, { "affiliations": [], "name": "Chelsea Finn" } ]
[ { "authors": [ "Mohammad Babaeizadeh", "Chelsea Finn", "Dumitru Erhan", "Roy H. Campbell", "Sergey Levine" ], "title": "Stochastic variational video", "venue": "prediction. CoRR,", "year": 2017 }, { "authors": [ "Andrew G. Barto", "Sridhar Mahadevan" ], "title": "Recent advances in hierarchical reinforcement learning", "venue": "Discrete Event Dynamic Systems,", "year": 2003 }, { "authors": [ "J. Betts" ], "title": "Practical Methods for Optimal Control and Estimation Using Nonlinear Programming", "venue": "Society for Industrial and Applied Mathematics, second edition,", "year": 2010 }, { "authors": [ "B. Boots", "A. Byravan", "D. Fox" ], "title": "Learning predictive models of a depth camera amp; manipulator from raw execution traces", "venue": "In 2014 IEEE International Conference on Robotics and Automation (ICRA),", "year": 2014 }, { "authors": [ "Arunkumar Byravan", "Dieter Fox" ], "title": "Se3-nets: Learning rigid body motion using deep neural networks", "venue": null, "year": 2016 }, { "authors": [ "Howie Choset", "Kevin M. Lynch", "Seth Hutchinson", "George Kantor", "Wolfram Burgard", "Lydia Kavraki", "Sebastian Thrun" ], "title": "Principles of Robot Motion: Theory, Algorithms, and Implementations", "venue": null, "year": 2005 }, { "authors": [ "Frederik Ebert", "Chelsea Finn", "Alex X. Lee", "Sergey Levine" ], "title": "Self-supervised visual planning with temporal skip connections", "venue": null, "year": 2017 }, { "authors": [ "Frederik Ebert", "Sudeep Dasari", "Alex X. Lee", "Sergey Levine", "Chelsea Finn" ], "title": "Robustness via retrying: Closed-loop robotic manipulation with self-supervised learning", "venue": "CoRR, abs/1810.03043,", "year": 2018 }, { "authors": [ "Frederik Ebert", "Chelsea Finn", "Sudeep Dasari", "Annie Xie", "Alex X. Lee", "Sergey Levine" ], "title": "Visual foresight: Model-based deep reinforcement learning for vision-based robotic control", "venue": "CoRR, abs/1812.00568,", "year": 2018 }, { "authors": [ "B. Espiau", "F. Chaumette", "P. Rives" ], "title": "A new approach to visual servoing in robotics", "venue": "IEEE Transactions on Robotics and Automation,", "year": 1992 }, { "authors": [ "Benjamin Eysenbach", "Abhishek Gupta", "Julian Ibarz", "Sergey Levine" ], "title": "Diversity is all you need: Learning skills without a reward function", "venue": null, "year": 2018 }, { "authors": [ "Benjamin Eysenbach", "Ruslan Salakhutdinov", "Sergey Levine" ], "title": "Search on the replay buffer: Bridging planning and reinforcement learning", "venue": null, "year": 1906 }, { "authors": [ "Chelsea Finn", "Sergey Levine" ], "title": "Deep visual foresight for planning robot", "venue": "motion. CoRR,", "year": 2016 }, { "authors": [ "Roy Fox", "Richard Shin", "Sanjay Krishnan", "Ken Goldberg", "Dawn Song", "Ion Stoica" ], "title": "Parametrized hierarchical procedures for neural programming", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Ali Ghadirzadeh", "Atsuto Maki", "Danica Kragic", "Mårten Björkman" ], "title": "Deep predictive policy training using reinforcement learning", "venue": null, "year": 2017 }, { "authors": [ "De-An Huang", "Suraj Nair", "Danfei Xu", "Yuke Zhu", "Animesh Garg", "Li Fei-Fei", "Silvio Savarese", "Juan Carlos Niebles" ], "title": "Neural task graphs: Generalizing to unseen tasks from a single video", "venue": "demonstration. 
CoRR,", "year": 2018 }, { "authors": [ "Brian Ichter", "Marco Pavone" ], "title": "Robot motion planning in learned latent spaces", "venue": null, "year": 2018 }, { "authors": [ "M. Jagersand", "O. Fuentes", "R. Nelson" ], "title": "Experimental evaluation of uncalibrated visual servoing for precision manipulation", "venue": "In Proceedings of International Conference on Robotics and Automation,", "year": 1997 }, { "authors": [ "Stephen James", "Andrew J. Davison", "Edward Johns" ], "title": "Transferring end-to-end visuomotor control from simulation to real world for a multi-stage", "venue": "task. CoRR,", "year": 2017 }, { "authors": [ "Dinesh Jayaraman", "Frederik Ebert", "Alexei Efros", "Sergey Levine" ], "title": "Time-agnostic prediction: Predicting predictable video frames", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Leslie Pack Kaelbling" ], "title": "Learning to achieve goals", "venue": "In IJCAI, pp", "year": 1993 }, { "authors": [ "Leslie Pack Kaelbling", "Tomas Lozano-Perez" ], "title": "Hierarchical task and motion planning in the now", "venue": "In IEEE Conference on Robotics and Automation (ICRA),", "year": 2011 }, { "authors": [ "Dmitry Kalashnikov", "Alex Irpan", "Peter Pastor", "Julian Ibarz", "Alexander Herzog", "Eric Jang", "Deirdre Quillen", "Ethan Holly", "Mrinal Kalakrishnan", "Vincent Vanhoucke", "Sergey Levine" ], "title": "Qt-opt: Scalable deep reinforcement learning for vision-based robotic manipulation", "venue": "arxiv:Preprint,", "year": 2018 }, { "authors": [ "Sanjay Krishnan", "Roy Fox", "Ion Stoica", "Ken Goldberg" ], "title": "DDCO: discovery of deep continuous options forrobot learning from demonstrations", "venue": null, "year": 2017 }, { "authors": [ "Thanard Kurutach", "Aviv Tamar", "Ge Yang", "Stuart J. Russell", "Pieter Abbeel" ], "title": "Learning plannable representations with causal infogan", "venue": null, "year": 2018 }, { "authors": [ "Thomas Lampe", "Martin Riedmiller" ], "title": "Acquiring visual servoing reaching and grasping skills using neural reinforcement learning", "venue": "In International Joint Conference on Neural Networks", "year": 2013 }, { "authors": [ "S. Lange", "M. Riedmiller", "A. Voigtländer" ], "title": "Autonomous reinforcement learning on raw visual input data in a real world application", "venue": "In The 2012 International Joint Conference on Neural Networks (IJCNN),", "year": 2012 }, { "authors": [ "Steven M. LaValle" ], "title": "Planning Algorithms", "venue": "doi: 10.1017/", "year": 2006 }, { "authors": [ "Alex X. Lee", "Richard Zhang", "Frederik Ebert", "Pieter Abbeel", "Chelsea Finn", "Sergey Levine" ], "title": "Stochastic adversarial video", "venue": "prediction. CoRR,", "year": 2018 }, { "authors": [ "Sergey Levine", "Chelsea Finn", "Trevor Darrell", "Pieter Abbeel" ], "title": "End-to-end training of deep visuomotor policies", "venue": "CoRR, abs/1504.00702,", "year": 2015 }, { "authors": [ "Andrew Levy", "Robert Platt", "Kate Saenko" ], "title": "Hierarchical reinforcement learning with hindsight", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Corey Lynch", "Mohi Khansari", "Ted Xiao", "Vikash Kumar", "Jonathan Tompson", "Sergey Levine", "Pierre Sermanet" ], "title": "Learning latent plans from play", "venue": null, "year": 1903 }, { "authors": [ "Jan Matas", "Stephen James", "Andrew J. Davison" ], "title": "Sim-to-real reinforcement learning for deformable object", "venue": "manipulation. 
CoRR,", "year": 2018 }, { "authors": [ "K. Mohta", "V. Kumar", "K. Daniilidis" ], "title": "Vision-based control of a quadrotor for perching on lines", "venue": "In 2014 IEEE International Conference on Robotics and Automation (ICRA),", "year": 2014 }, { "authors": [ "Ofir Nachum", "Shixiang Gu", "Honglak Lee", "Sergey Levine" ], "title": "Data-efficient hierarchical reinforcement learning", "venue": null, "year": 2018 }, { "authors": [ "Ashvin Nair", "Vitchyr Pong", "Murtaza Dalal", "Shikhar Bahl", "Steven Lin", "Sergey Levine" ], "title": "Visual reinforcement learning with imagined", "venue": "goals. CoRR,", "year": 2018 }, { "authors": [ "Suraj Nair", "Mohammad Babaeizadeh", "Chelsea Finn", "Sergey Levine", "Vikash Kumar" ], "title": "Time reversal as self-supervision", "venue": "CoRR, abs/1810.01128,", "year": 2018 }, { "authors": [ "Alexander Neitz", "Giambattista Parascandolo", "Stefan Bauer", "Bernhard Schölkopf" ], "title": "Adaptive skip intervals: Temporal abstraction for recurrent dynamical models", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "OpenAI", "Marcin Andrychowicz", "Bowen Baker", "Maciek Chociej", "Rafal Józefowicz", "Bob McGrew", "Jakub W. Pachocki", "Jakub Pachocki", "Arthur Petron", "Matthias Plappert", "Glenn Powell", "Alex Ray", "Jonas Schneider", "Szymon Sidor", "Josh Tobin", "Peter Welinder", "Lilian Weng", "Wojciech Zaremba" ], "title": "Learning dexterous in-hand manipulation", "venue": null, "year": 2018 }, { "authors": [ "Chris Paxton", "Yotam Barnoy", "Kapil D. Katyal", "Raman Arora", "Gregory D. Hager" ], "title": "Visual robot task", "venue": "planning. CoRR,", "year": 2018 }, { "authors": [ "Lerrel Pinto", "Abhinav Gupta" ], "title": "Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot", "venue": "hours. CoRR,", "year": 2015 }, { "authors": [ "R Rubinstein", "D Kroese" ], "title": "The Cross-Entropy Method: A Unified Approach to Combinatorial Optimization, Monte-Carlo Simulation and Machine Learning", "venue": null, "year": 2004 }, { "authors": [ "Fereshteh Sadeghi" ], "title": "Divis: Domain invariant visual servoing for collision-free goal reaching", "venue": "CoRR, abs/1902.05947,", "year": 2019 }, { "authors": [ "Fereshteh Sadeghi", "Alexander Toshev", "Eric Jang", "Sergey Levine" ], "title": "Sim2real viewpoint invariant visual servoing by recurrent control", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Tom Schaul", "Daniel Horgan", "Karol Gregor", "David Silver" ], "title": "Universal value function approximators", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Connor Schenck", "Dieter Fox" ], "title": "Visual closed-loop control for pouring", "venue": "liquids. CoRR,", "year": 2016 }, { "authors": [ "Avi Singh", "Larry Yang", "Kristian Hartikainen", "Chelsea Finn", "Sergey Levine" ], "title": "End-to-end robotic reinforcement learning without reward engineering", "venue": null, "year": 1904 }, { "authors": [ "S. Srivastava", "E. Fang", "L. Riano", "R. Chitnis", "S. Russell", "P. 
Abbeel" ], "title": "Combined task and motion planning through an extensible planner-independent interface layer", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2014 }, { "authors": [ "Richard Sutton", "Doina Precup", "Satinder Singh" ], "title": "Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning", "venue": "Artificial Intelligence,", "year": 1999 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In IROS, pp. 5026–5033", "year": 2012 }, { "authors": [ "Marc Toussaint", "Kelsey R Allen", "Kevin A Smith", "Josh B Tenenbaum" ], "title": "Differentiable physics and stable modes for tool-use and manipulation planning – extended abstract, 2019", "venue": "Sister Conference Best Paper Track – Extended abstract of the R:SS’18 paper", "year": 2019 }, { "authors": [ "Ashish Vaswani", "Samy Bengio", "Eugene Brevdo", "Francois Chollet", "Aidan N. Gomez", "Stephan Gouws", "Llion Jones", "Łukasz Kaiser", "Nal Kalchbrenner", "Niki Parmar", "Ryan Sepassi", "Noam Shazeer", "Jakob Uszkoreit" ], "title": "Tensor2tensor for neural machine", "venue": "translation. CoRR,", "year": 2018 }, { "authors": [ "Angelina Wang", "Thanard Kurutach", "Kara Liu", "Pieter Abbeel", "Aviv Tamar" ], "title": "Learning robotic manipulation through visual planning and acting", "venue": null, "year": 1905 }, { "authors": [ "Manuel Watter", "Jost Tobias Springenberg", "Joschka Boedecker", "Martin A. Riedmiller" ], "title": "Embed to control: A locally linear latent dynamics model for control from raw images", "venue": "CoRR, abs/1506.07365,", "year": 2015 }, { "authors": [ "W.J. Wilson", "C.C. Williams Hulls", "G.S. Bell" ], "title": "Relative end-effector control using cartesian position based visual servoing", "venue": "IEEE Transactions on Robotics and Automation,", "year": 1996 }, { "authors": [ "Annie Xie", "Frederik Ebert", "Sergey Levine", "Chelsea Finn" ], "title": "Improvisation through physical understanding: Using novel objects as tools with visual foresight", "venue": null, "year": 1904 }, { "authors": [ "Danfei Xu", "Suraj Nair", "Yuke Zhu", "Julian Gao", "Animesh Garg", "Li Fei-Fei", "Silvio Savarese" ], "title": "Neural task programming: Learning to generalize across hierarchical tasks", "venue": null, "year": 2017 }, { "authors": [ "B.H. Yoshimi", "P.K. Allen" ], "title": "Active, uncalibrated visual servoing", "venue": "In Proceedings of the 1994 IEEE International Conference on Robotics and Automation, pp. 156–161", "year": 1994 }, { "authors": [ "Tianhe Yu", "Gleb Shevchuk", "Dorsa Sadigh", "Chelsea Finn" ], "title": "Unsupervised visuomotor control through distributional planning", "venue": "networks. CoRR,", "year": 2019 }, { "authors": [ "Andy Zeng", "Shuran Song", "Stefan Welker", "Johnny Lee", "Alberto Rodriguez", "Thomas A. Funkhouser" ], "title": "Learning synergies between pushing and grasping with self-supervised deep reinforcement learning", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Developing robotic systems that can complete long horizon visual control tasks, while generalizing to novel scenes and objectives, remains an unsolved and challenging problem. Generalization to unseen objects and scenes requires robots to be trained across diverse environments, meaning that detailed supervision during data collection in not practical to provide. Furthermore, reasoning over long-horizon tasks introduces two additional major challenges. First, the robot must handle large amounts of uncertainty as the horizon increases. And second, the robot must identify how to reach distant goals when only provided with the final goal state, a sparse indication of the task, as opposed to a shaped cost that implicitly encodes how to get there. In this work, we aim to develop a method that can start to address these challenges, leveraging self-supervised models learned using only unlabeled data, to solve novel temporally-extended tasks.\nModel-based reinforcement learning has shown promise in generalizing to novel objects and tasks, as learned dynamics models have been shown to generalize to new objects (Finn & Levine, 2016; Ebert et al., 2018b), and can be used in conjunction with planning to reach goals unseen during training. However, planning to reach temporally distant goals is difficult. As the planning horizon increases model error compounds, and the cost function often provides only a noisy or sparse signal of the objective. Both of these challenges are exacerbated when planning in visual space.\nIn this work, the key insight that we leverage is that while model error and sparse cost signals can make long horizon planning difficult, we can mitigate these issues by learning to break down longhorizon tasks into short horizon segments. Consider, for example, the task of opening a drawer and putting a book in it, given supervision only in the form of the final image of the open drawer containing the book. The goal image provides nearly no useful cost signal until the last stage of the task, and model predictions are likely to become inaccurate beyond the first stage of the task. However, if we can generate a sequence of good subgoals, such as (1) the robot arm grasping the\n†Work completed at Google Brain Videos and code are available at: https://sites.google.com/stanford.edu/hvf\ndrawer handle, (2) the open drawer, and (3) the robot arm reaching for the book, planning from the initial state to (1), from (1) to (2), from (2) to (3), and from (3) to the final goal, the problem becomes substantially easier. The subgoals break the problem into short horizon subsegments each with some useful cost signal coming from the next subgoal image.\nOur main contribution is a self-supervised hierarchical planning framework, hierarchical visual foresight (HVF), which combines generative models of images and model predictive control to decompose a long-horizon visual task into a sequence of subgoals. In particular, we propose optimizing over subgoals such that the resulting task subsegments have low expected planning cost. However, in the case of visual planning, optimizing over subgoals corresponds to optimizing within the space of natural images. To address this challenge, we train a generative latent variable model over images from the robot’s environment and optimize over subgoals in the latent space of this model. This allows us to optimize over the manifold of images with only a small number of optimization variables. 
When combined with visual model predictive control, we observe that this subgoal optimization naturally identifies semantically meaningful states in long horizon tasks as subgoals, and that when using these subgoals during planning, we achieve significantly higher success rates on long horizon, multi-stage visual tasks. Furthermore, since our method outputs subgoals conditioned on a goal image, we can use the same model and approach to plan to solve many different long-horizon tasks, even with previously unseen objects. We first demonstrate our approach in simulation on a continuous control navigation task with tight bottlenecks, and then evaluate on a set of four different multi-stage object manipulation tasks in a simulated desk environment, which require interacting with up to 3 different objects. In the challenging desk environment, we find that our method yields at least a 20% absolute performance improvement over prior approaches, including model-free reinforcement learning and a state-of-the-art subgoal identification method. Finally, we show that our approach generates realistic subgoals on real robot manipulation data." }, { "heading": "2 RELATED WORK", "text": "Developing robots that can execute complex behaviours from only pixel inputs has been a well-studied problem, for example with visual servoing (Mohta et al., 2014; Espiau et al., 1992; Wilson et al., 1996; Yoshimi & Allen, 1994; Jagersand et al., 1997; Lampe & Riedmiller, 2013; Sadeghi et al., 2018; Sadeghi, 2019). Recently, reinforcement learning has shown promise in completing complex tasks from pixels (Ghadirzadeh et al., 2017; Levine et al., 2015; Kalashnikov et al., 2018; Lange et al., 2012; OpenAI et al., 2018; Schenck & Fox, 2016; Matas et al., 2018; James et al., 2017; Singh et al., 2019), including in goal-conditioned settings (Kaelbling, 1993; Schaul et al., 2015; Andrychowicz et al., 2017; Sadeghi et al., 2018; Sadeghi, 2019; Nair et al., 2018a). While model-free RL approaches have illustrated the ability to generalize to new objects (Kalashnikov et al., 2018) and learn tasks such as grasping and pushing through self-supervision (Pinto & Gupta, 2015; Zeng et al., 2018), pure model-free approaches generally lack the ability to explicitly reason over temporally-extended plans, making them ill-suited for the problem of learning long-horizon tasks with limited supervision.

Video prediction and planning have also shown promise in enabling robots to complete a diverse set of visuomotor tasks while generalizing to novel objects (Finn & Levine, 2016; Kalchbrenner et al., 2016; Boots et al., 2014; Byravan & Fox, 2016). Since then, a number of video prediction frameworks have been developed specifically for robotics (Babaeizadeh et al., 2017; Lee et al., 2018; Ebert et al., 2017), which combined with planning have been used to complete diverse behaviors (Nair et al., 2018b; Ebert et al., 2018b; Paxton et al., 2018; Xie et al., 2019). However, these approaches still struggle with long horizon tasks, which we specifically focus on.

One approach to handle long horizon tasks is to add compositional structure to policies, either from demonstrations (Krishnan et al., 2017; Fox et al., 2018), with manually-specified primitives (Xu et al., 2017; Huang et al., 2018), learned temporal abstractions (Neitz et al., 2018), or through model-free reinforcement learning (Sutton et al., 1999; Barto & Mahadevan, 2003; Bacon et al., 2016; Nachum et al., 2018; Levy et al., 2019). 
These works have studied such hierarchy in grid worlds (Bacon et al., 2016) and simulated control tasks (Nachum et al., 2018; Eysenbach et al., 2018; Levy et al., 2019) with known reward functions. In contrast, we study how to incorporate compositional structure in learned model-based planning with video prediction models. Our approach is entirely self-supervised, without motion primitives, demonstrations, or shaped rewards, and scales to vision-based manipulation tasks.

Classical planning methods have been successful in solving long-horizon tasks (LaValle, 2006; Choset et al., 2005), but make restrictive assumptions about the state space and reachability between states, limiting their applicability to complex visual manipulation tasks. Similarly, completing long horizon tasks has also been explored with symbolic models (Toussaint et al., 2019) and Task and Motion Planning (TAMP) (Kaelbling & Lozano-Perez, 2011; Srivastava et al., 2014). However, unlike these approaches our method requires no additional knowledge about the objects in the scene nor any predefined symbolic states. Recently, there have been several works that have explored planning in learned latent spaces (Kurutach et al., 2018; Ichter & Pavone, 2018; Watter et al., 2015; Srinivas et al., 2018). This has enabled planning in higher dimensional spaces; however, these methods still struggle with long-horizon tasks. Furthermore, our hierarchical planning framework is agnostic to state space, and could directly operate in one of the above latent spaces.

A number of recent works have explored reaching novel goals using only self-supervision (Finn & Levine, 2016; Eysenbach et al., 2019; Kurutach et al., 2018; Wang et al., 2019; Jayaraman et al., 2019; Nair et al., 2018a). In particular, time-agnostic prediction (TAP) (Jayaraman et al., 2019) aims to identify bottlenecks in long-horizon visual tasks, while other prior works (Nair et al., 2018a; Finn & Levine, 2016) reach novel goals using model-free or model-based RL. We compare to all three of these methods in Section 5 and find that HVF significantly outperforms all of them." }, { "heading": "3 PRELIMINARIES", "text": "We formalize our problem setting as a goal-conditioned Markov decision process (MDP) defined by the tuple (S, A, p, G, C, λ), where s ∈ S is the state space, which in our case corresponds to images, a ∈ A is the action space, p(st+1 | st, at) governs the environment dynamics, G ⊂ S represents the set of goals, which is a subset of possible states, C(st, sg) represents the cost of being in state st ∈ S given that the goal is sg ∈ G, and λ is the discount factor. In practice, acquiring cost functions that accurately reflect the distance between two images is a challenging problem (Yu et al., 2019). We make no assumptions about having a shaped cost, assuming the simple yet sparse distance metric of ℓ2 distance in pixel space in all of our experiments. Approaches that aim to recover more shaped visual cost functions are complementary to the contributions of this work.

In visual foresight, or equivalently, visual MPC (Finn & Levine, 2016; Ebert et al., 2018b), the robot collects a dataset of random interactions [(s1, a1), (s2, a2), ..., (sT, aT)] from a pre-determined policy. This dataset is used to learn a model of dynamics fθ(st+1, st+2, ..., st+h | st, at, at+1, ..., at+h-1) through maximum likelihood supervised learning. Note the states are camera images, and thus fθ is an action-conditioned video prediction model. 
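For concreteness, the ℓ2 pixel-space cost assumed above can be sketched as follows (a minimal sketch, assuming images are given as (H, W, C) arrays; the helper name pixel_cost is ours):

```python
import numpy as np

def pixel_cost(s_t, s_g):
    # Squared L2 distance in pixel space between two images of shape
    # (H, W, C); this is the sparse cost C(st, sg) assumed throughout.
    diff = s_t.astype(np.float64) - s_g.astype(np.float64)
    return float(np.sum(diff ** 2))
```
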
Once the model is trained, the robot is given some objective and plans a sequence of actions that optimize the objective via the cross entropy method (CEM) (Rubinstein & Kroese, 2004). In this work, we will assume the objective is specified in the form of an image of the goal sg, while CEM aims to find a sequence of actions that minimize the cost C between the predicted future frames from fθ and the goal image. While standard visual foresight struggles with long-horizon tasks due to uncertainty in fθ as the prediction horizon h increases and sparse C for CEM to optimize, in the next section we describe how our proposed approach uses subgoal generation to mitigate these issues." }, { "heading": "4 HIERARCHICAL VISUAL FORESIGHT", "text": "Overview: We propose hierarchical visual foresight (HVF), a planning framework built on top of visual foresight (Finn & Levine, 2016; Ebert et al., 2018b) for long horizon visuomotor tasks. We observe that when planning to reach a long horizon goal given only a goal image, standard planning frameworks struggle with (1) sparse or noisy cost signals and (2) compounding model error. Critically, we observe that if given the ability to sample possible states, there is potential to decompose a long horizon planning task into shorter horizon tasks. While this idea has been explored in classical planning (Betts, 2010), state generation is a significant challenge when dealing with high dimensional image observations.

One of our key insights is that we can train a deep generative model, trained exclusively on self-supervised data, as a means to sample possible states. Once we can sample states, we also need to evaluate how easy it is to get from one sampled state to another, to determine if a state makes for a good subgoal. We can do so through planning: running visual MPC to get from one state to another and measuring the predicted cost of the planned action sequence. Thus, by leveraging the low-dimensional space of a generative model and the cost acquired by visual MPC, we can optimize over a sequence of subgoals that lead to the goal image. In particular, we can explicitly search in latent image space for subgoals, such that no segment is too long-horizon, mitigating the issues around sparse costs and compounding model error.

In the following sections, we will describe more formally how HVF works, how we learn the generative model, how we learn the dynamics model, how goal-conditioned planning with visual MPC is executed, and lastly how subgoals are optimized.

Hierarchical visual foresight: Formally, we assume the goal-conditioned MDP setting in Section 3 where the agent has a current state s0, goal state sg, cost function C, and dataset of environment interaction {(s1, a1, s2, a2, ..., sT, aT)}. This data can come from any exploration policy; in practice, we find that interaction from a uniformly random policy in the continuous action space of the agent works well. From this data, we train both a dynamics model fθ using maximum likelihood supervised learning, as well as a generative model s ∼ gφ. Now given s0 and sg, the objective is to find K subgoals s1, s2, ..., sK that enable easier completion of the task. Our hope is that the subgoals will identify steps in the task such that, for each subsegment, the planning problem is easier and the horizon is short. While one way to do this might be to find subgoals that minimize the total planning cost, we observe that this does not necessarily encourage splitting the task into shorter segments. 
Consider planning in a straight line: using any point on that line as a subgoal would equally minimize the total cost. Therefore, we instead optimize for subgoals that minimize the worst expected planning cost across any segment. This corresponds to the following objective:

min_{s_1, ..., s_K} max{ C_plan(s_0, s_1), C_plan(s_1, s_2), ..., C_plan(s_K, s_g) }    (1)

where C_plan(s_i, s_j) is the cost achieved by the planner when planning from s_i to s_j, which we compute by planning a sequence of actions to reach s_j from s_i using fθ and measuring the predicted cost achieved by that action sequence (we compare max and mean cost in Section 5.4). Once the subgoals are found, the agent simply plans using visual MPC (Finn & Levine, 2016; Ebert et al., 2018b) from each s_{k-1} to s_k until a cost threshold is reached or for a fixed, maximum number of timesteps, then from s*_k to s_{k+1}, until planning to the goal state, where s*_k is the actual state reached when running MPC to get to s_k. For a full summary of the algorithm, see Algorithm 1 below.

Algorithm 1 Hierarchical Visual Foresight HVF(fθ, gφ(z), st, sg)
1: Receive current state st and goal state sg
2: Initialize q = N(0, I), M = 200, M* = 40, number of subgoals K
3: while (σ > 1e-3) or (iterations < 5) do
4:   for m = 1, 2, ..., M do
5:     s^m_0, s^m_{K+1} = st, sg
6:     z^m ∼ q   # Sample latent subgoal lists
7:     for k = 1, 2, ..., K do
8:       s^m_k = gφ(z^m[k], s^m_0)   # Map latent to image subgoal
9:       A^m_{plan,k}, C^m_{plan,k} = MPC(fθ, s^m_{k-1}, s^m_k)   # Optimal actions and planning cost
10:      end for
11:    A^m_{plan,K+1}, C^m_{plan,K+1} = MPC(fθ, s^m_K, s^m_{K+1})
12:    Cost^m = max_k {C^m_{plan,1}, ..., C^m_{plan,K+1}}   # Max planning cost across segments
13:  end for
14:  Z_sort = Sort(keys: [Cost^1, ..., Cost^M], values: [z^1, ..., z^M])   # Rank by Cost^m
15:  Refit q to the M* lowest-cost samples {Z_sort[1], ..., Z_sort[M*]}
16: end while
17: for k = 1, 2, ..., K do
18:   Set final subgoal s_k = gφ(Z_sort[1][k])
19:   Execute A_k, _ = MPC(fθ, s_{k-1}, s_k)
20: end for
21: Execute A, _ = MPC(fθ, s_K, sg)
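To make Algorithm 1 concrete, the outer subgoal search can be sketched as follows (a minimal sketch, assuming helper callables decode, standing in for gφ, and plan_cost, standing in for C_plan computed by the MPC subroutine; both names are ours, not from the released code):

```python
import numpy as np

def hvf_subgoal_search(s0, sg, decode, plan_cost, K=2, L=8,
                       M=200, M_star=40, max_iters=5):
    # CEM over K latent subgoals, each of dimension L, following Alg. 1.
    # decode(z_k, s0) stands in for g_phi; plan_cost(s_i, s_j) for C_plan.
    mu, sigma = np.zeros(K * L), np.ones(K * L)
    best_z, best_cost = None, np.inf
    for _ in range(max_iters):
        zs = mu + sigma * np.random.randn(M, K * L)
        costs = np.empty(M)
        for m, z in enumerate(zs):
            subgoals = [decode(z[k * L:(k + 1) * L], s0) for k in range(K)]
            segments = [s0] + subgoals + [sg]
            # Score a sample by its worst segment, as in Eq. (1).
            costs[m] = max(plan_cost(a, b)
                           for a, b in zip(segments, segments[1:]))
        order = np.argsort(costs)
        if costs[order[0]] < best_cost:
            best_cost, best_z = costs[order[0]], zs[order[0]]
        elites = zs[order[:M_star]]  # refit q to the best M* samples
        mu, sigma = elites.mean(axis=0), elites.std(axis=0)
        if sigma.max() < 1e-3:
            break
    return [decode(best_z[k * L:(k + 1) * L], s0) for k in range(K)]
```

Note that each call to plan_cost is itself a CEM search over actions (Appendix A.1), so the two optimizations are nested.
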
Next, we describe individual components in detail.

Generative model: To optimize subgoals, directly optimizing in image space is nearly impossible due to the high dimensionality of images and the thin manifold on which they lie. Thus, we learn a generative model s ∼ gφ(z) such that samples represent possible futures in the scene, and optimize in the low-dimensional latent space z. In settings where aspects of the scene or objects change across episodes, we only want to sample plausible future states, rather than sampling states corresponding to all scenes. To handle this setting, we condition the generative model on the current image, which contains information about the current scene and the objects within view. Hence, in our experiments, we use either a variational autoencoder (VAE) or a conditional variational autoencoder (CVAE), with a diagonal Gaussian prior over z, i.e., z ∼ N(0, I). In the case of the CVAE, the decoder also takes as input an encoding of the conditioned image, i.e., s ∼ gφ(z, s0). In practice, we use the initial state s0 as the conditioned image. We train the generative model on randomly collected interaction data. The architecture and training details can be found in Appendix A.3.

Dynamics model: The forward dynamics model fθ can be any forward dynamics model that estimates p(st+1, st+2, ..., st+h | st, at, at+1, ..., at+h-1). In our case states are images, so we use an action-conditioned video prediction model, stochastic variational video prediction (SV2P) (Babaeizadeh et al., 2017), for fθ. We train the dynamics model on randomly collected interaction data. Architecture and training parameters are identical to those used in (Babaeizadeh et al., 2017); details can be found in Appendix A.4.

MPC & planning with subgoals: When optimizing subgoals, we need some way to measure the ease of planning to and from the subgoals. We explicitly compute this expected planning cost between two states s_k, s_{k+1} by running model predictive control A, C = MPC(fθ, s_k, s_{k+1}), as done in previous work (Finn & Levine, 2016; Nair et al., 2018b; Ebert et al., 2018b); pseudocode for MPC can be found in Appendix A.1. This uses the model fθ to compute the optimal trajectory between s_k and s_{k+1}, and returns the optimal action A and associated cost C. Note this does not step any actions in the real environment, but simply produces an estimated planning cost. Details on this procedure can be found in Appendices A.1 and A.2. Once the subgoals have been computed, the same procedure is run MPC(fθ, s_{k-1}, s_k) until s_k is reached (measured by a cost threshold) or for a fixed, maximum number of timesteps, then from MPC(fθ, s*_k, s_{k+1}), ..., MPC(fθ, s*_K, sg) (Alg. 1 Lines 17:21), where s*_k represents the state actually reached after trying to plan to s_k. In this case, the best action at each step is actually applied in the environment until the task is completed, or the horizon runs out. Note we only compute subgoals once in the beginning, calling HVF(fθ, gφ(z), s0, sg), then plan with those subgoals. In principle HVF can be called at every step, but for computational efficiency we only call HVF once.

Subgoal optimization: Since we need to search in the space of subgoals to optimize Equation 1, we perform subgoal optimization using the cross entropy method (CEM) (Rubinstein & Kroese, 2004) in the latent space of our generative model gφ(z). Note this subgoal optimization CEM is distinct from the CEM used for computing the planning cost, which we use as a subroutine within this outer level of CEM. At a high level, the subgoal optimization samples a list of subgoals, evaluates their effectiveness towards reaching the goal, and then iteratively refines the subgoals through resampling and reevaluation. We begin by drawing M samples from a diagonal Gaussian distribution q = N(0, I), where the dimensionality of q is K × L, where K is the number of subgoals and L is the size of the latent dimension z of the generative model gφ(z) (Alg. 1 Line 6). Thus, one sample z from q gives us K latents, each of size L, each of which is decoded into a subgoal image (Alg. 1 Line 8). Then, as described in Equation 1, the cost for one sample (one set of K subgoals) is computed as the maximum planning cost across the subsegments (Alg. 1 Lines 9:12). The M samples are ranked by this cost, and then q is refit to the latents z of the best M* samples (Alg. 1 Lines 14:15). In all of our experiments we use M = 200, M* = 40, and L = 8." }, { "heading": "5 EXPERIMENTS", "text": "In our experiments, we aim to evaluate (1) if, by using HVF, robots can perform challenging goal-conditioned long-horizon tasks from raw pixel observations more effectively than prior self-supervised approaches, (2) if HVF is capable of generating realistic and semantically significant subgoal images, and (3) if HVF can scale to real images of cluttered scenes. To do so, we test on three domains: simulated visual maze navigation, simulated desk manipulation, and real robot manipulation of diverse objects. 
The simulation environments use the MuJoCo physics engine (Todorov et al., 2012). We compare against three prior methods: (a) visual foresight (Finn & Levine, 2016; Ebert et al., 2018b), which uses no subgoals, (b) RIG (Nair et al., 2018a), which trains a model-free policy to reach generated goals using latent space distance as the cost, and (c) visual foresight with subgoals generated by time-agnostic prediction (TAP) (Jayaraman et al., 2019), a state-of-the-art method for self-supervised generation of visual subgoals, which generates subgoals by predicting the most likely frame between the current and goal state.

5.1 MAZE NAVIGATION

First, we study HVF in a 2D maze with clear bottleneck states, as it allows us to study how HVF compares to oracle subgoals. In the maze navigation environment, the objective is for the agent (the green block) to move to a goal position, which is always in the rightmost section. To do so, the agent must navigate through narrow gaps in walls, where the position of the gaps, goal, and initial state of the agent are randomized within each episode. The agent’s action space is 2D Cartesian movement, and the state space is top-down images. We consider three levels of difficulty: “Easy”, where the agent spawns in the rightmost section; “Medium”, where the agent spawns in the middle; and “Hard”, where the agent spawns in the leftmost section. Details are in Appendix B.1.

Results: In Figure 2, we observe that using HVF subgoals consistently improves success rate over visual foresight (Finn & Levine, 2016) without subgoals, indicating that it is able to find subgoals that make the long-horizon task more manageable. Additionally, we compare to the “Ground Truth Bottleneck” that uses manually-designed subgoals, where the subgoal is exactly at the gaps in the walls (or the midpoint between states in the “easy” case). We find that while using the oracle subgoals tends to yield the highest performance, the oracle subgoals do not perform perfectly, suggesting that a non-trivial amount of performance gains are to be had from improving the consistency of the video prediction model and cost function for short-horizon problems, as opposed to the subgoals. We also show that HVF outperforms time-agnostic prediction (TAP) (Jayaraman et al., 2019) in Appendix C.3.

We next qualitatively examine the subgoals discovered by HVF in Figure 3, and find empirically that they seem to correspond to semantically meaningful states. In this task there are clear bottlenecks, specifically reaching the gaps in the walls, and we find that HVF outputs states close to these bottlenecks as subgoals. For example, when the agent starts in the leftmost section and has to pass through two narrow gaps in walls, we observe that the first subgoal corresponds to the agent around the bottleneck for the first gap, and the second subgoal is near the second gap.

5.2 SIMULATED DESK MANIPULATION

We now study the performance improvement and subgoal quality of HVF in a challenging simulated robotic manipulation domain. Specifically, a simulated Franka Emika Panda robot arm is mounted in front of a desk (as used in (Lynch et al., 2019)). The desk consists of 3 blocks, a sliding door, three buttons, and a drawer. We explore four tasks in this space: door closing, 2 block pushing, door closing + block pushing, and door closing + 2 block pushing. Example start and goal images for each task are visualized in Figure 5, and task details are in Appendix B.2. 
The arm is controlled with 3D Cartesian velocity control of the end-effector position. Across the 4 different tasks in this environment, we use a single dynamics model fθ and generative model gφ(z). Experimental details are in Appendix B.2.

Results: As seen in Figure 4, we find that using HVF subgoals dramatically improves performance, providing at least a 20% absolute improvement in success rate across the board. In the task with the longest horizon, closing the door and sliding two blocks off the table, we find that using no subgoals or 1 subgoal achieves approximately 15% performance, but 2 subgoals leads to over 42% success rate. We compare to subgoals generated by time-agnostic prediction (TAP) (Jayaraman et al., 2019) and find that while it does generate plausible subgoals, they are very close to the start or goal, leading to no benefit in planning. We also compare against RIG (Nair et al., 2018a), where we train a model-free policy in the latent space of the VAE to reach “imagined” goals, then try to reach the unseen goals. However, due to the complexity of the environment, we find that RIG struggles to reach even the sampled goals during training, and thus fails on the unseen goals. Qualitatively, in Figure 5, we observe that HVF outputs meaningful subgoals on the desk manipulation tasks, often producing subgoals corresponding to grasping the handle, sliding the door, or reaching to a block." }, { "heading": "5.3 REAL ROBOT MANIPULATION", "text": "Lastly, we aim to study whether HVF can extend to real images and cluttered scenes. To do so, we explore the qualitative performance of our method on the BAIR robot pushing dataset (Ebert et al., 2018a). We train fθ and gφ(z, s0) on the training set, and sample current and goal states from the beginning and end of test trajectories. We then qualitatively evaluate the subgoals outputted by HVF. Further implementation details are in Appendix B.3.

Results: Our results are illustrated in Fig. 6. We observe that even in cluttered scenes, HVF produces meaningful subgoals, such as grasping objects which need to be moved. For more examples, see Figure C.2. We observe that in both the BAIR dataset and the desk manipulation experiments, the most common failure cases corresponded to the subgoal prediction and model ignoring the objects and focusing exclusively on the arm. Cost functions which can more effectively capture objects and their poses would be a step towards addressing this.

5.4 ABLATIONS

Table 1: Number of Subgoals. With a fixed sampling budget, as the number of subgoals increases beyond 2, performance drops as the subgoal search is challenging.
# subgoals: 0 | 1 | 2 | 3 | 5 | 10
success: 33% | 47% | 54% | 39% | 2% | 0%

In our ablations, we explore three primary questions: (1) What is the optimal number of subgoals? (2) Is there a difference between HVF using the max and mean cost across subsegments? (3) Is HVF as valuable when the number of samples used for visual MPC is significantly increased? We evaluate these questions in the maze navigation task on the “Hard” difficulty. All results report success rates over 100 trials using random initializations/goals. Additional ablations on planning horizon and latent space cost can be found in Appendix C.1.

Number of Subgoals: We explore how HVF performs as we increase the number of subgoals in Table 1. Interestingly, we observe that as we scale the number of subgoals beyond 2, the performance starts to drop, and with 5 or more subgoals the method fails. 
We conclude that this is due to the increasing complexity of the subgoal search problem: the sampling budget allocated for subgoal optimization is likely insufficient to find a large sequence of subgoals.

Max vs Mean: In our HVF formulation, we define the subgoal optimization objective as finding the subgoals that minimize the max cost across subsegments. Table 2 compares to using the mean cost. We find that using the max cost is marginally better.

Sample Quantity: In Table 3, we examine how the number of action samples affects the relative improvement of HVF. Using more samples should also mitigate the challenges of sparse costs, so one might suspect that HVF would be less valuable in these settings. On the contrary, we find that HVF still significantly outperforms no subgoals, and the improvement between using 0 and 1 subgoals is even more significant." }, { "heading": "6 CONCLUSION AND LIMITATIONS", "text": "We presented a self-supervised approach for hierarchical planning in vision-based tasks, hierarchical visual foresight (HVF), which decomposes a visual goal into a sequence of subgoals. By explicitly optimizing for subgoals that minimize planning cost, HVF is able to discover semantically meaningful goals in visual space and, when using these goals for planning, perform a variety of challenging, long-horizon vision-based tasks. Further, HVF learns these tasks in an entirely self-supervised manner without rewards or demonstrations.

While HVF significantly extends the capabilities of prior methods, a number of limitations remain. First, HVF assumes access to an exploration policy which can cover enough of the state space to learn a good model. While the random policy used works for our environments, more complex environments may require better exploration techniques, such as intrinsic motivation methods.

Another limitation of the current framework is its computational requirements, as the nested optimization procedure requires many iterations of MPC with expensive video prediction models. Specifically, assume that the horizon of the task is T, one iteration of visual MPC has computational cost C, HVF uses K subgoals, and HVF searches over a space of N subgoal sequences. Then normal visual foresight or TAP would have cost T × C, while HVF would have cost (T × C) + (K × N × C). Note, however, that the (K × N × C) computation can be heavily parallelized because it is done completely offline (without environment interaction). We also expect that this can be mitigated by reformulating the subgoal prediction problem as policy inference and training a subgoal generation policy in the loop of training, an interesting direction for future work.

Further, while the development of accurate visual cost functions and predictive models was not the main aim of this work, the performance with oracle subgoals suggests this to be a primary bottleneck in furthering the performance of HVF. Specifically, learning dynamics models and cost functions which can more effectively capture the state of all the objects in the scene would significantly improve the performance of HVF.

Lastly, the development of better generative models that can capture the scene and generate possible futures remains an open and challenging problem. Work in generative models that enable better generation of cluttered scenes with novel objects could improve the applicability of HVF in real-world settings."
}, { "heading": "ACKNOWLEDGMENTS", "text": "We would like to thank Sudeep Dasari, Vikash Kumar, Sherry Moore, Michael Ahn, and Tuna Tokosz for help with infrastructure, as well as Sergey Levine, Abhishek Gupta, Dumitru Erhan, Brian Ichter, and Vincent Vanhouke for valuable discussions. This work was supported in part by an NSF GRFP award. Chelsea Finn is a CIFAR Fellow in the Learning in Machines & Brains program." }, { "heading": "A METHOD DETAILS", "text": "A.1 VISUAL MPC\nIn HVF, evaluating the cost of potential subgoals, as well as actually taking actions given computed subgoals uses visual MPC. One step of visual MPC is described in Algorithm 1. Given the current state and goal state, MPC samples action trajectories of lengthH , then feeds them through the model fθ. Then the cost of the output images are computed relative to the goal image, which is then used to sort the actions and refit the action distribution. After 5 iterations or convergence the best action is returned.\nAlgorithm 2 MPC(fθ, st, sg) Receive current state st and goal state sg from environment Initialize N(µ, σ2) = N(0, 1), cost function C(si, sj) while (σ2 > 1e− 3) or (iterations ≤ 5) do a1, ..., aD ∼ N(µ, σ2) st+H,1, ..., st+H,D = fθ(st, a1, ..., aD) l1, ..., lD = [C(st+H,1, sg), ..., C(st+H,D, sg)] asorted = Sort([a1, .., aD]) Refit N(µ, σ2) to asorted[1−D∗]\nend while Return lsorted[1], asorted[1]\nwhere D = 200, D∗ = 40" }, { "heading": "A.2 PLANNING WITH SUBGOALS", "text": "In the main text we describe how given, s0 and sg , HVF produces subgoal images s1, s2, ..., sK . Given these subgoals, planning with them is executed as follows. For an episode starting at s0 of maximum length T , the agent plans using visual MPC from s0 to s1 until some cost threshold C(st, s1) < x or for some maximum number of steps T ∗. Once this criteria is reached, the agent plans from its current state to the next subgoal s2, again until the cost threshold C(st, s2) < x or some maximum number of steps T ∗, and so until the agent is planning form st to the true goal sg , at which point it simply plans until the environment returns that the task has been a success or the total horizon T runs out." }, { "heading": "A.3 GENERATIVE MODEL", "text": "The generative model we use is either a Variational Autoencoder s ∼ gφ(z) or a Conditional Variational Autoencoder s ∼ gφ(z, s0). In the VAE, training is done by sampling images from the dataset, and each image is encoded using an image encoder into the mean and standard deviation of a normal distribution of dimension 8. Then a sample from the distribution are decoded back to the original input image. This is trained with a maximum likelihood loss as well as a KL penalty on the normal distribution restricting it to be a unit gaussian N (0, 1). In the conditional VAE, training is done by sampling pairs of images from the dataset, specifically images from random episodes st as well as the corresponding first image of that episode s0. Then both s0 and st are encoded using the same image encoder, which are then flattened and concatenated. This is then encoded into the mean and standard deviation of a normal distribution of dimension 8. A sample from the resulting distribution is then fed through fully connected layers and reshaped into the output shape of the encoder, concatenated with the spatial embedding of the conditioned image from the encoder and then decoded. This is done to enable easier conditioning on the scene in the decoding. 
" }, { "heading": "A.3 GENERATIVE MODEL", "text": "The generative model we use is either a Variational Autoencoder s ∼ gφ(z) or a Conditional Variational Autoencoder s ∼ gφ(z, s0). In the VAE, training is done by sampling images from the dataset; each image is encoded by an image encoder into the mean and standard deviation of a normal distribution of dimension 8, and a sample from that distribution is decoded back to the original input image. This is trained with a maximum likelihood loss as well as a KL penalty restricting the latent distribution to a unit Gaussian N(0, 1). In the conditional VAE, training is done by sampling pairs of images from the dataset, specifically images st from random episodes as well as the corresponding first image s0 of each episode. Both s0 and st are encoded using the same image encoder, then flattened and concatenated. This is then encoded into the mean and standard deviation of a normal distribution of dimension 8. A sample from the resulting distribution is then fed through fully connected layers and reshaped into the output shape of the encoder, concatenated with the spatial embedding of the conditioned image from the encoder, and then decoded. This is done to enable easier conditioning on the scene in the decoding. The resulting image is trained with a maximum likelihood loss as well as a KL penalty on the normal distribution restricting it to be a unit Gaussian N(0, 1).

In the Simulated Maze Navigation experiments, we use a Conditional VAE which encodes the 64x64 RGB image with 4 convolutional layers ([16, 32, 64, 128] filters, kernel size 3x3, stride 2), as well as 3 fully connected layers of size 256 for the generated image and 3 fully connected layers of size [512, 512, 256] for the conditioned image, and a decoder with the same architecture as the encoder inverted. In the Simulated Desk Manipulation experiments, we use a VAE which encodes the 64x64 RGB image with 7 convolutional layers ([8, 16, 32, 32, 32, 64, 128] filters, kernel size 3, stride alternating between 1 and 2), as well as a single fully connected layer of size 512 before mapping to the latent distribution, which is decoded with the inverted encoder architecture. In the Real Robot Manipulation experiments, we use a Conditional VAE which encodes the 48x64 RGB image with 9 convolutional layers ([8, 16, 32, 64, 64, 128, 128, 256, 512] filters, kernel size 3, stride sizes [1,1,1,1,1,1,2,2,2]), as well as 3 fully connected layers of size 256 for the generated image and 3 fully connected layers of size [512, 512, 256] for the conditioned image, and a decoder which is the inverted architecture of the encoder. The three experiments use the Adam optimizer with learning rates 1e-4, 1e-3, and 1e-4, respectively." }, { "heading": "A.4 DYNAMICS MODEL", "text": "The dynamics model is an action-conditioned video prediction model, stochastic variational video prediction (SV2P) (Babaeizadeh et al., 2017); the architecture follows that work (the architecture figure is omitted here). For all experimental settings, the dynamics models are trained for approximately 300K iterations." }, { "heading": "B EXPERIMENT DETAILS", "text": "" }, { "heading": "B.1 MAZE NAVIGATION", "text": "Data Collection: For the maze navigation environment, data is collected through random policy interaction. That is, for each action we sample uniformly in the delta x, y action space of the green block. We collect 10000 episodes, each with random initialization of the walls, block, and goal. Each episode contains 100 transitions. The images are of size 64x64x3.

Cost Function: In this experiment, the cost function C(s_i, s_j) is simply the squared ℓ2 pixel distance between the images, that is, ||s_i - s_j||_2^2.

Planning Parameters: When planning in the maze navigation environment, MPC samples action trajectories of length H = 5. Additionally, the cost of a sequence of 5 actions is the cost of the last of the subsequent frames, where the cost of each frame is the pixel cost C described above. Lastly, we use T = 50 and T* = 10 in these experiments." }, { "heading": "B.2 SIMULATED DESK MANIPULATION", "text": "Task Details: Door: The first task is reaching to and closing the sliding door. From the initialization position of the arm, it needs to reach around the door handle into the correct position, then slide the door shut. The position of the door and distractor blocks on the table are randomized each episode. The agent is given 50 timesteps to complete the task. 2 Block: The second task requires the agent to push two different blocks off the table, one located on the left end of the table and one located in the middle. It requires pushing first one block, then repositioning to push the other block off the table. The agent is given 50 timesteps to complete the task. 
Door + Block: The agent must both slide the door closed from a random position and push a block off of the table. This requires both grasping and sliding the door as in the previous task, as well as positioning the end effector behind the block to slide it off the table. The agent is given 100 timesteps to complete the task. Door + 2 Block: A combination of the 2 Block and Door + Block tasks. The agent must close the door and slide both blocks off the table within 100 timesteps.

Data Collection: For the simulated desk manipulation environment, data is again collected through random policy interaction. That is, for each action we sample uniformly in the delta x, y, z action space of the robot end effector. We collect 10000 episodes, each with random initialization of the desk door/drawer and blocks. Each episode contains 100 transitions. The images are of size 64x64x3.

Cost Function: In this experiment, the cost function C(s_i, s_j) is simply the squared ℓ2 pixel distance between the images, that is, ||s_i - s_j||_2^2.

Planning Parameters: When planning in the desk manipulation environment, MPC samples action trajectories of length H = 15, which actually consist of 5 actions, each repeated 3 times. Additionally, the cost of a sequence of 15 actions is the ℓ2 pixel cost C of the last frame only. Note this is distinct from the maze environment. Lastly, we use T = 50 or 100 and T* = 20 in these experiments, depending on the task." }, { "heading": "B.3 REAL ROBOT MANIPULATION", "text": "Data: We use the BAIR robot manipulation dataset from (Ebert et al., 2018a), which consists of roughly 15K trajectories with a train/test split. We train fθ and gφ on the train set and display qualitative results on the test set.

Cost Function: As in the desk manipulation setup, the cost function C(s_i, s_j) is the squared ℓ2 pixel distance between the images, that is, ||s_i - s_j||_2^2.

Planning Parameters: As in the desk manipulation environment, MPC samples action trajectories of length H = 15, which actually consist of 5 actions, each repeated 3 times. Additionally, the cost of a sequence of 15 actions is the ℓ2 pixel cost C of the last frame only." }, { "heading": "C ADDITIONAL RESULTS", "text": "" }, { "heading": "C.1 ADDITIONAL ABLATIONS", "text": "" }, { "heading": "C.1.1 PLANNING HORIZON", "text": "In the maze environment, on the hard difficulty, we study how increasing the planning horizon of visual MPC impacts the benefit of using HVF (see Table 4, which reports success rates for 0, 1, and 2 subgoals). Interestingly, we find that for longer planning horizons, performance does not necessarily improve (as a longer planning horizon constitutes a harder search problem). This supports the idea that simply using a longer planning horizon is not necessarily the solution to long-horizon tasks. Further, we find that even with longer planning horizons, HVF outperforms standard visual foresight, but that performance with 2 subgoals is worse than with 1 subgoal.

C.1.2 VAE LATENT SPACE COST

To address the sparsity of the pixel cost, we explore using distance in the latent space of the VAE as a cost signal. However, we find that this cost actually performs worse across all difficulties. In Table 5 we show the success rate of standard visual foresight with either the pixel ℓ2 cost or the VAE latent space ℓ2 cost. 
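For concreteness, the latent-space variant can be sketched as follows (a minimal sketch, assuming a helper encode that returns the mean of the trained VAE encoder's latent distribution; the names are ours):

```python
import numpy as np

def latent_cost(encode, s_i, s_j):
    # Squared L2 distance between VAE latent means; encode is an assumed
    # helper wrapping the trained encoder.
    z_i, z_j = np.asarray(encode(s_i)), np.asarray(encode(s_j))
    return float(np.sum((z_i - z_j) ** 2))
```
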
We find that the reason for the poor performance of the latent space cost is that it provides close to zero cost anytime the green blocks are reasonably close together, leading to CEM often getting stuck close to the goal but not actually at the goal." }, { "heading": "C.2 REAL ROBOT MANIPULATION EXAMPLES", "text": "" }, { "heading": "C.3 TIME-AGNOSTIC PREDICTION (JAYARAMAN ET AL., 2019) IN THE MAZE NAVIGATION TASK", "text": "We observe that while TAP gets similar performance to HVF in the “Easy” and “Medium” cases, it has significantly lower performance in the longest-horizon “Hard” setting.

Additionally, we see that Recursive TAP with 2 subgoals has lower performance across all difficulties, as the subgoals it outputs are very close to the current/goal state (see Figure 9)." } ]
2020
null
SP:ef3afd5d34fbb7c8310a1dc9d6e49c2f37db07e6
[ "This paper presents new datasets based on ImageNet and Youtube-BB to assert networks performance consistency across time. Compared to previous work, it uses human labeler to further validate the dataset and discard frames that are deemed too different from the reference one. It provides results on image classification and detection using popular network architectures. Based on these results, the paper claims an accuracy drop of 10 to 16%.", "In this paper, the authors curated two datasets: ImageNet-Vid and Youtube-BB in order to create human-reviewed perceptibly similar sets (Imagenet-Vid-Robust and YTBB-Robust). The obtained datasets are evaluated over 45 different models pre-trained on ImageNet in order to see their drop in accuracy on natural perturbations. Three detection models are also evaluated and show that not only classification models are sensitive to these perturbations, but that it also yields to localization errors." ]
We study the robustness of image classifiers to temporal perturbations derived from videos. As part of this study, we construct ImageNet-Vid-Robust and YTBB-Robust, containing a total of 57,897 images grouped into 3,139 sets of perceptually similar images. Our datasets were derived from ImageNet-Vid and Youtube-BB respectively and thoroughly re-annotated by human experts for image similarity. We evaluate a diverse array of classifiers pre-trained on ImageNet and show a median classification accuracy drop of 16 and 10 percent on our two datasets. Additionally, we evaluate three detection models and show that natural perturbations induce both classification as well as localization errors, leading to a median drop in detection mAP of 14 points. Our analysis demonstrates that perturbations occurring naturally in videos pose a substantial and realistic challenge to deploying convolutional neural networks in environments that require both reliable and low-latency predictions.
[]
[ { "authors": [ "Aharon Azulay", "Yair Weiss" ], "title": "Why do deep convolutional networks generalize so poorly to small image transformations", "venue": "arXiv preprint arXiv:1805.12177,", "year": 2018 }, { "authors": [ "Luca Bertinetto", "Jack Valmadre", "Joao F Henriques", "Andrea Vedaldi", "Philip HS Torr" ], "title": "Fullyconvolutional siamese networks for object tracking", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Battista Biggio", "Fabio Roli" ], "title": "Wild patterns: Ten years after the rise of adversarial machine learning", "venue": "Pattern Recognition,", "year": 2018 }, { "authors": [ "Remi Cadene" ], "title": "Pretrained models for pytorch", "venue": "https://github.com/Cadene/pretrain ed-models.pytorch", "year": 2019 }, { "authors": [ "Jifeng Dai", "Yi Li", "Kaiming He", "Jian Sun" ], "title": "R-fcn: Object detection via region-based fully convolutional networks", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Logan Engstrom", "Brandon Tran", "Dimitris Tsipras", "Ludwig Schmidt", "Aleksander Madry" ], "title": "A rotation and a translation suffice: Fooling cnns with simple transformations", "venue": "arXiv preprint arXiv:1712.02779,", "year": 2017 }, { "authors": [ "Alhussein Fawzi", "Pascal Frossard" ], "title": "Manitest: Are classifiers really invariant", "venue": "In British Machine Vision Conference (BMVC),", "year": 2015 }, { "authors": [ "Christoph Feichtenhofer", "Axel Pinz", "Andrew Zisserman" ], "title": "Detect to track and track to detect", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Robert Geirhos", "Patricia Rubisch", "Claudio Michaelis", "Matthias Bethge", "Felix A Wichmann", "Wieland Brendel" ], "title": "Imagenet-trained cnns are biased towards texture; increasing shape bias improves accuracy and robustness", "venue": "arXiv preprint arXiv:1811.12231,", "year": 2018 }, { "authors": [ "Robert Geirhos", "Carlos RM Temme", "Jonas Rauber", "Heiko H Schütt", "Matthias Bethge", "Felix A Wichmann" ], "title": "Generalisation in humans and deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Keren Gu", "Brandon Yang", "Jiquan Ngiam", "Quoc Le", "Jonathan Shlens" ], "title": "Using videos to evaluate image model robustness", "venue": null, "year": 1904 }, { "authors": [ "Wei Han", "Pooya Khorrami", "Tom Le Paine", "Prajit Ramachandran", "Mohammad Babaeizadeh", "Honghui Shi", "Jianan Li", "Shuicheng Yan", "Thomas S Huang" ], "title": "Seq-nms for video object detection", "venue": "arXiv preprint arXiv:1602.08465,", "year": 2016 }, { "authors": [ "Dan Hendrycks", "Thomas Dietterich" ], "title": "Benchmarking neural network robustness to common corruptions and perturbations", "venue": "arXiv preprint arXiv:1903.12261,", "year": 2019 }, { "authors": [ "Hossein Hosseini", "Radha Poovendran" ], "title": "Semantic adversarial examples", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2018 }, { "authors": [ "SouYoung Jin", "Aruni RoyChowdhury", "Huaizu Jiang", "Ashish Singh", "Aditya Prasad", "Deep Chakraborty", "Erik Learned-Miller" ], "title": "Unsupervised hard example mining 
from videos for improved object detection", "venue": null, "year": 2018 }, { "authors": [ "Can Kanbak", "Seyed-Mohsen Moosavi-Dezfooli", "Pascal Frossard" ], "title": "Geometric robustness of deep networks: analysis and improvement", "venue": "arXiv preprint arXiv:1711.09115,", "year": 2017 }, { "authors": [ "Kai Kang", "Hongsheng Li", "Tong Xiao", "Wanli Ouyang", "Junjie Yan", "Xihui Liu", "Xiaogang Wang" ], "title": "Object detection in videos with tubelet proposal networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Tsung-Yi Lin", "Michael Maire", "Serge Belongie", "James Hays", "Pietro Perona", "Deva Ramanan", "Piotr Dollár", "C Lawrence Zitnick" ], "title": "MS COCO detection evaluation", "venue": "http://cocodataset.or g/#detection-eval", "year": 2019 }, { "authors": [ "Tsung-Yi Lin", "Michael Maire", "Serge Belongie", "James Hays", "Pietro Perona", "Deva Ramanan", "Piotr Dollár", "C Lawrence Zitnick" ], "title": "Microsoft coco: Common objects in context", "venue": "In European conference on computer vision,", "year": 2014 }, { "authors": [ "Francisco Massa", "Ross Girshick" ], "title": "maskrcnn-benchmark: Fast, modular reference implementation of Instance Segmentation and Object Detection algorithms in PyTorch", "venue": "https://github.c om/facebookresearch/maskrcnn-benchmark,", "year": 2018 }, { "authors": [ "George A Miller" ], "title": "Wordnet: a lexical database for english", "venue": "Communications of the ACM,", "year": 1995 }, { "authors": [ "Harold Pashler" ], "title": "Familiarity and visual change detection", "venue": "Perception & psychophysics,", "year": 1988 }, { "authors": [ "Esteban Real", "Jonathon Shlens", "Stefano Mazzocchi", "Xin Pan", "Vincent Vanhoucke" ], "title": "Youtubeboundingboxes: A large high-precision human-annotated data set for object detection in video", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Benjamin Recht", "Rebecca Roelofs", "Ludwig Schmidt", "Vaishaal Shankar" ], "title": "Do imagenet classifiers generalize to imagenet", "venue": "arXiv preprint arXiv:1902.10811,", "year": 2019 }, { "authors": [ "Shaoqing Ren", "Kaiming He", "Ross Girshick", "Jian Sun" ], "title": "Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing", "venue": null, "year": 2015 }, { "authors": [ "Antonio Torralba", "Alexei A Efros" ], "title": "Unbiased look at dataset bias", "venue": "In CVPR,", "year": 2011 }, { "authors": [ "Fanyi Xiao", "Yong Jae Lee" ], "title": "Video object detection with an aligned spatial-temporal memory", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Cihang Xie", "Yuxin Wu", "Laurens van der Maaten", "Alan Yuille", "Kaiming He" ], "title": "Feature denoising for improving adversarial robustness", "venue": "arXiv preprint arXiv:1812.03411,", "year": 2018 }, { "authors": [ "Stephan Zheng", "Yang Song", "Thomas Leung", "Ian Goodfellow" ], "title": "Improving the robustness of deep neural networks via stability training", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun 2016. doi: 10.1109/cvpr.2016.485. 
URL http://dx.doi.org/10", "year": 2016 }, { "authors": [ "Xizhou Zhu", "Yujie Wang", "Jifeng Dai", "Lu Yuan", "Yichen Wei" ], "title": "Flow-guided feature aggregation for video object detection", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Convolutional neural networks (CNNs) still exhibit many troubling failure modes. At one extreme, `p-adversarial examples cause large drops in accuracy for state-of-the-art models while relying only on visually imperceptible changes to the input image (Goodfellow et al., 2014; Biggio and Roli, 2018). However, this failure mode usually does not pose a problem outside a fully adversarial context because carefully crafted `p-perturbations are unlikely to occur naturally in the real world.\nTo study more realistic failure modes, researchers have investigated benign image perturbations such as rotations & translations, colorspace changes, and various image corruptions (Fawzi and Frossard, 2015; Engstrom et al., 2017; Fawzi and Frossard, 2015; Hendrycks and Dietterich, 2019). However, it is still unclear whether these perturbations reflect the robustness challenges arising in real data since the perturbations also rely on synthetic image modifications.\nRecent work has therefore turned to videos as a source of naturally occurring perturbations of images (Zheng et al., 2016; Azulay and Weiss, 2018; Gu et al., 2019). In contrast to other failure modes, the perturbed images are taken from existing image data without further modifications that make the task more difficult. As a result, robustness to such perturbations directly corresponds to performance improvements on real data.\nHowever, it is currently unclear to what extent such video perturbations pose a significant robustness challenge. Azulay and Weiss (2018) and Zheng et al. (2016) only provide anecdotal evidence from a small number of videos. Gu et al. (2019) go beyond individual videos and utilize a large video dataset (Real et al., 2017) in order to measure the effect of video perturbations more quantitatively. In their evaluation, the best image classifiers lose about 3% accuracy for video frames up to 0.3 seconds away. However, the authors did not employ humans to review the frames in their videos. Hence the accuracy drop could also be caused by significant changes in the video frames (e.g., due to fast camera or object motion). Since the 3% accuracy drop is small to begin with, it remains unclear whether video perturbations are a robustness challenge for current image classifiers.\nWe address these issues by conducting a thorough evaluation of robustness to natural perturbations arising in videos. As a cornerstone of our investigation, we introduce two test sets for evaluating model robustness: ImageNet-Vid-Robust and YTBB-Robust, carefully curated from the ImageNet-Vid and Youtube-BB datasets, respectively (Russakovsky et al., 2015; Real et al., 2017). All images in the two datasets were screened by a set of expert labelers to ensure high annotation quality and minimize selection biases that arise when filtering a dataset with CNNs. To the best of\nour knowledge these are the first datasets of their kind, containing tens of thousands of images that are human reviewed and grouped into thousands of perceptually similar sets. In total, our datasets contain 3,139 sets of temporally adjacent and visually similar images (57,897 images total).\nWe then utilize these datasets to measure the accuracy of current CNNs to small, naturally occurring perturbations. Our testbed contains over 45 different models, varying both architecture and training methodology (adversarial training, data augmentation, etc.). 
To better understand the drop in accuracy due to natural perturbations, we also introduce a robustness metric that is more stringent than those employed in prior work. Under this metric, we find that natural perturbations from ImageNet-Vid-Robust and YTBB-Robust induce a median accuracy drop of 16% and 10% respectively for classification tasks and a median 14-point drop in mAP for detection tasks.1 Even for the best-performing classification models, we observe an accuracy drop of 14% for ImageNet-Vid-Robust and 8% for YTBB-Robust.

Our results show that robustness to natural perturbations in videos is indeed a significant challenge for current CNNs. As these models are increasingly deployed in safety-critical environments that require both high accuracy and low latency (e.g., autonomous vehicles), ensuring reliable predictions on every frame of a video is an important direction for future work." }, { "heading": "2 CONSTRUCTING A TEST SET FOR ROBUSTNESS", "text": "ImageNet-Vid-Robust and YTBB-Robust are sourced from videos in the ImageNet-Vid and Youtube-BB datasets (Russakovsky et al., 2015; Real et al., 2017). All object classes in ImageNet-Vid and Youtube-BB are from the WordNet hierarchy (Miller, 1995) and direct ancestors of ILSVRC-2012 classes. Using the WordNet hierarchy, we construct a canonical mapping from ILSVRC-2012 classes to ImageNet-Vid and Youtube-BB classes, which allows us to evaluate off-the-shelf ILSVRC-2012 models on ImageNet-Vid-Robust and YTBB-Robust. We provide more background on the source datasets in Appendix A.

2.1 CONSTRUCTING IMAGENET-VID-ROBUST AND YTBB-ROBUST

Next, we describe how we extracted sets of naturally perturbed frames from ImageNet-Vid and Youtube-BB to create ImageNet-Vid-Robust and YTBB-Robust. A straightforward approach would be to select a set of anchor frames and use temporally adjacent frames in the video with the assumption that such frames contain only small perturbations from the anchor. However, as Fig. 2 illustrates, this assumption is frequently violated, especially due to fast camera or object motion.

Instead, we first collect preliminary datasets of natural perturbations following the same approach, and then manually review each of the frame sets. For each video, we randomly sample an anchor frame and take k = 10 frames before and after the anchor frame as candidate perturbation images.2 This results in two datasets containing one anchor frame each from 3,139 videos, with approximately 20 candidate perturbations per anchor frame.3

1We only evaluated detection on ImageNet-Vid-Robust as bounding-box annotations in Youtube-BB were only at 1 frame per second and not dense enough for our evaluation.
2For YTBB-Robust we use a subset of the anchor frames used by Gu et al. (2019).
3Anchor frames near the start or end of the video may have fewer than 20 candidate frames.

Figure 2: Examples of anchor frames and nearby frames discarded as dissimilar during review.

Next, we curate the dataset with the help of four expert human annotators. The goal of the curation step is to ensure that each anchor frame and its nearby frames are correctly labeled with the same ground truth class, and that the anchor frame and the nearby frames are visually similar.

Denser labels for Youtube-BB. As Youtube-BB contains only a single category label per frame at 1 frame per second, annotators first viewed each anchor frame individually and marked any missing labels. 
In total, annotators corrected the labels for 834 frames, adding an average of 0.5 labels per anchor frame. These labels are then propagated to nearby, unlabeled frames at the native frame rate and verified in the next step. ImageNet-Vid densely labels all classes per frame, so we skip this step.

Frame pairs review. Next, for each pair of anchor and candidate perturbation frames, a human annotates (i) whether the pair is correctly labeled in the dataset, and (ii) whether the pair is similar. We took several steps to mitigate the subjectivity of this task and ensure high annotation quality. First, we trained reviewers to mark frames as dissimilar if the scene undergoes any of the following transformations: significant motion, significant background change, or significant blur change. We asked reviewers to mark each dissimilar frame with one of these transformations, or “other”, and to mark a pair of images as dissimilar if a distinctive feature of the object is only visible in one of the two frames (such as the face of a dog). If an annotator was unsure about the correct label, she could mark the pair as “unsure”. Second, we present only a single pair of frames at a time to reviewers because presenting videos or groups of frames could cause them to miss large changes due to the phenomenon of change blindness (Pashler, 1988).

Verification. In the previous stage, all annotators were given identical labeling instructions and individually reviewed a total of 71,660 image pairs. To increase consistency in annotation, annotators jointly reviewed all frames marked as dissimilar, incorrectly labeled, or “unsure”. A frame was only considered similar to its anchor if a strict majority of the annotators marked the pair as such.

After the reviewing was complete, we discarded all anchor frames and candidate perturbations that annotators marked as dissimilar or incorrectly labeled. The final datasets contain a combined total of 3,139 anchor frames with a median of 20 similar frames each.

2.2 THE PM-K EVALUATION METRIC

Given the datasets introduced above, we propose a metric to measure a model’s robustness to natural perturbations. In particular, let $A = \{a_1, \ldots, a_N\}$ be the set of valid anchor frames in our dataset. Let $Y = \{y_1, \ldots, y_N\}$ be the set of labels for $A$. We let $N_k(a_i)$ be the set of frames marked as similar to anchor frame $a_i$. In our setting, $N_k$ is a subset of the $2k$ temporally adjacent frames (plus/minus $k$ frames from the anchor).

Classification. Classification accuracy is defined as $\mathrm{acc}_{\mathrm{orig}} = 1 - \frac{1}{N} \sum_{i=1}^{N} L_{0/1}(f(a_i), y_i)$, where $L_{0/1}$ is the standard 0-1 loss function. We define the pm-k analog of accuracy as

$$\mathrm{acc}_{\mathrm{pmk}} = 1 - \frac{1}{N} \sum_{i=1}^{N} \max_{b \in N_k(a_i)} L_{0/1}(f(b), y_i), \qquad (1)$$

which corresponds to picking the worst frame from each set $N_k(a_i)$ before computing accuracy.

Detection. The standard metric for detection is mean average precision (mAP) of the predictions at a fixed intersection-over-union (IoU) threshold (Lin et al., 2014). We define the pm-k metric analogously to that for classification: we replace each anchor frame with the nearby frame that minimizes the average precision (AP, averaged over recall thresholds) of the predictions, and compute pm-k as the mAP on these worst-case neighboring frames." }, { "heading": "3 MAIN RESULTS", "text": "We evaluate a testbed of 45 classification and three detection models on ImageNet-Vid-Robust and YTBB-Robust. We first discuss the various types of classification models evaluated with the pm-k classification metric. 
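To make Equation (1) concrete, the following is a minimal sketch of the pm-k classification accuracy in plain Python. The predict callable and the data structures are placeholders of ours, not the authors' released evaluation code, and the multi-label handling described in Section 3.1 is reduced here to a single label per anchor.

```python
def pmk_accuracy(predict, anchors, labels, neighbor_sets):
    """Worst-case (pm-k) accuracy from Equation (1).

    predict(frame) -> the model's predicted class for a frame.
    neighbor_sets[i] -> the human-verified set N_k(a_i) of frames
    similar to anchor a_i; the anchor itself is also evaluated.
    """
    errors = 0
    for anchor, y, neighbors in zip(anchors, labels, neighbor_sets):
        # A single misclassified similar frame makes the whole set an
        # error, i.e., we take the max of the 0-1 losses over N_k(a_i).
        if any(predict(b) != y for b in [anchor] + list(neighbors)):
            errors += 1
    return 1.0 - errors / len(anchors)
```

The original accuracy $\mathrm{acc}_{\mathrm{orig}}$ corresponds to the special case in which every neighbor set is empty.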
Second, we evaluate the performance of detection models on ImageNet-Vid-Robust, using the bounding box annotations inherited from ImageNet-Vid and a variant of pm-k for detection. We then analyze the errors made on the detection adversarial examples to isolate the effects of localization errors vs. classification errors." }, { "heading": "3.1 CLASSIFICATION", "text": "The classification robustness metric is $\mathrm{acc}_{\mathrm{pmk}}$ defined in Equation (1). For frames with multiple labels, we count a prediction as correct if the model predicts any of the correct classes for a frame. In Figure 3, we plot the benign accuracy, $\mathrm{acc}_{\mathrm{orig}}$, versus the robust accuracy, $\mathrm{acc}_{\mathrm{pmk}}$, for all classification models in our test bed and find that the relationship between $\mathrm{acc}_{\mathrm{orig}}$ and $\mathrm{acc}_{\mathrm{pmk}}$ is approximately linear. This relationship indicates that improvements in the benign accuracy do result in improvements in the worst-case accuracy, but do not suffice to resolve the accuracy drop due to natural perturbations.

Our test bed consists of five model types with increasing levels of supervision. We present results for representative models from each model type in Table 2 and defer the full classification results table to Appendix B.2.

ILSVRC trained. The WordNet hierarchy enables us to repurpose models trained for the 1,000-class ILSVRC dataset on ImageNet-Vid-Robust and YTBB-Robust (see Appendix A.1). We evaluate a wide array of ILSVRC-2012 models (available from Cadene) against our natural perturbations. Since these datasets present a substantial distribution shift from the original ILSVRC-2012 validation set, we expect the benign accuracy $\mathrm{acc}_{\mathrm{orig}}$ to be lower than the comparable accuracy on the ILSVRC-2012 validation set. However, our main interest here is in the difference between the original and perturbed accuracies, $\mathrm{acc}_{\mathrm{orig}} - \mathrm{acc}_{\mathrm{pmk}}$. A small drop in accuracy would indicate that the model is robust to small changes that occur naturally in videos. Instead, we find significant drops of 15.0% and 13.2% in accuracy on our two datasets, indicating sensitivity to such changes.

Noise augmentation. One hypothesis for the accuracy drop from original to perturbed accuracy is that subtle artifacts and corruptions introduced by video compression schemes could degrade performance when evaluating on these corrupted frames. The worst-case nature of the pm-k metric could then be focusing on these corrupted frames. One model for these corruptions is the set of perturbations introduced in Hendrycks and Dietterich (2019). To test this hypothesis, we evaluate models augmented with a subset of the perturbations (exactly one of: Gaussian noise, Gaussian blur, shot noise, contrast change, impulse noise, or JPEG compression). We found that these augmentation schemes did not improve robustness against our perturbations substantially, and they still result in accuracy drops of 15.6% and 16.6% on the two datasets.

$\ell_\infty$ robustness. We evaluate the model from Xie et al. (2018), which currently performs best against $\ell_\infty$ attacks on ImageNet. We find that this model has a smaller accuracy drop than the two aforementioned model types on both datasets. However, we note that the robust model achieves significantly lower original and perturbed accuracy than either of the two model types above, and the robustness gain is modest (3% compared to models of similar benign accuracy).

Fine-tuning on video frames. To adapt to the new class vocabulary and the video domain, we fine-tune several network architectures on the ImageNet-Vid and Youtube-BB training sets. 
For Youtube-BB, we train on the anchor frames used for training in Gu et al. (2019), and for ImageNet-Vid we use all frames in the training set. We provide hyperparameters for all models in Appendix K. The resulting models significantly improve in accuracy over their ILSVRC pre-trained counterparts (e.g., 13% on ImageNet-Vid-Robust and 34% on YTBB-Robust for ResNet-50). This improvement in accuracy results in a modest improvement in the accuracy drop for YTBB-Robust, but a fine-tuned ResNet-50 still suffers from a significant 9.4% drop. On ImageNet-Vid-Robust, there is almost no change in the accuracy drop (from 15.0% to 15.1%).

Fine-tuning for detection on video frames. We further analyze whether additional supervision in the form of bounding box annotations improves robustness. To this end, we train the Faster R-CNN detection model Ren et al. (2015) with a ResNet-50 backbone on ImageNet-Vid. Following standard practice, the detection backbone is pre-trained on ILSVRC-2012. To evaluate this detector for classification, we assign the class of the most confident bounding box as the label for the image. We find that this transformation reduces accuracy compared to the model trained for classification (77.6% vs. 80.8%). While there is a slight reduction in the accuracy drop caused by natural perturbations, the reduction is well within the error bars for this test set." }, { "heading": "3.2 DETECTION", "text": "We further study the impact of natural perturbations on object detection. Specifically, we report results for two related tasks: object localization and detection. Object detection is the standard computer vision task of correctly classifying an object and finding the coordinates of a tight bounding box containing the object. “Object localization”, meanwhile, refers only to the subtask of finding the bounding box, without attempting to correctly classify the object. We present our results on ImageNet-Vid-Robust, which contains dense bounding box labels, unlike Youtube-BB, which only labels boxes at 1 frame per second. We use the popular Faster R-CNN Ren et al. (2015) and R-FCN Dai et al. (2016); Xiao and Jae Lee (2018) architectures for object detection and localization and report results in Table 3. For the R-FCN architecture, we use the model from Xiao and Jae Lee (2018).4 We first note the significant drop in mAP of 12–15 points for object detection due to perturbed frames for both the Faster R-CNN and R-FCN architectures. Next, we show that localization is indeed easier than detection, as the mAP is higher for localization than for detection (e.g., 76.6 vs. 62.8 for Faster R-CNN with a ResNet-50 backbone). Perhaps surprisingly, however, switching to the localization task does not improve the drop between original and perturbed frames, indicating that natural perturbations induce both classification and localization errors. We show examples of detection failures in Figure 4." }, { "heading": "3.3 IMPACT OF DATASET REVIEW", "text": "We analyze the impact of our human review, described in Section 2.1, on the classifiers in our test bed. First, we compare the original and perturbed accuracies of a representative classifier (ResNet-152, fine-tuned) with and without review in Table 4. Our review improves the original accuracy by 3-4% by throwing away mislabeled or blurry anchor frames, and improves perturbed accuracy by 5-6% by discarding pairs of dissimilar frames. Our review reduces the accuracy drop by 1.8% on
ImageNet-Vid-Robust and 1.1% on YTBB-Robust, but still results in large accuracy drops. These results indicate that the changes in model predictions are indeed due to a lack of robustness, rather than due to significant differences between adjacent frames. To further analyze the impact of our review on model errors, we plot how frequently each offset distance from the anchor frame results in a model error across all model types in Figure 5. For both datasets, larger offsets (indicating pairs of frames further apart in time) lead to more frequent model errors. Our review reduces the fraction of errors across offsets, and especially for large offsets, which are more likely to display large changes from the anchor frame.

4This model was originally trained on the 2015 subset of ImageNet-Vid. We evaluated this model on the 2015 validation set because the method requires access to pre-computed bounding box proposals, which are available only for the 2015 subset of ImageNet-Vid." }, { "heading": "4 RELATED WORK", "text": "Adversarial examples. While various forms of adversarial examples have been studied, the majority of research focuses on $\ell_p$ robustness Goodfellow et al. (2014); Biggio and Roli (2018). However, it is unclear whether adversarial examples pose a problem for classifier robustness outside of a truly worst-case context. It is an open question whether perfect robustness against an $\ell_p$ adversary will induce robustness to realistic image distortions such as those studied in this paper. Recent work has proposed more realistic image modifications such as small rotations & translations Engstrom et al. (2017); Azulay and Weiss (2018); Fawzi and Frossard (2015); Kanbak et al. (2017), hue and color changes Hosseini and Poovendran (2018), image stylization Geirhos et al. (2018a), and synthetic image corruptions such as Gaussian blur and JPEG compression Hendrycks and Dietterich (2019); Geirhos et al. (2018b). Even though the above examples are more realistic than the $\ell_p$ model, they still synthetically modify the input images to generate perturbed versions. In contrast, our work performs no synthetic modification and instead uses images that naturally occur in videos.

Utilizing videos to study robustness. In work concurrent to ours, Gu et al. (2019) exploit the temporal structure in videos to study robustness. However, their experiments suggest a substantially smaller drop in classification accuracy. The primary reason for this is the less stringent metric used in Gu et al. (2019). By contrast, our “pm-k” metric is inspired by the “worst-of-k” metric used in prior work Engstrom et al. (2017), highlighting the sensitivity of models to natural perturbations. In Appendix E we study the differences between the two metrics in more detail. Furthermore, the lack of human review and the high label error rate we discovered in Youtube-BB (Table 1) present a troubling confounding factor that we resolve in our work.

Distribution shift. Small, benign changes in the test distribution are often referred to as distribution shift. Recht et al. (2019) explore this phenomenon by constructing new test sets for CIFAR-10 and ImageNet and observe performance drops for a large suite of models on the newly constructed test sets. Similar to our Figure 3, the relationship between original and new test set accuracy is also approximately linear. However, the images in their test set bear little visual similarity to images in the original test set, while all of our failure cases in ImageNet-Vid-Robust and YTBB-Robust are on perceptually similar images. In a similar vein of study, Torralba et al. 
(2011) study distribution shift across different computer vision datasets such as Caltech-101, PASCAL, and ImageNet.

Computer vision. A common issue when applying image-based models to videos is flickering, where object detectors spuriously produce false positives or false negatives in isolated frames or groups of frames. Jin et al. (2018) explicitly identify such failures and use a technique reminiscent of adversarially robust training to improve image-based models. A similar line of work focuses on improving object detection in videos as objects become occluded or move quickly Kang et al. (2017); Feichtenhofer et al. (2017); Zhu et al. (2017); Xiao and Jae Lee (2018). The focus in this line of work has generally been on improving object detection when objects transform in a way that makes recognition difficult from a single frame, such as fast motion or occlusion. In this work, we document a broader set of failure cases for image-based classifiers and detectors and show that failures occur even when the neighboring frames are imperceptibly different." }, { "heading": "5 CONCLUSION", "text": "Our study quantifies the sensitivity of image classifiers to naturally occurring temporal perturbations. We show that these perturbations can cause significant drops in accuracy for a wide range of models for both classification and detection. Our work on analyzing this failure mode opens multiple avenues for future research:

Building more robust models. Our ImageNet-Vid-Robust and YTBB-Robust datasets provide a standard measure for robustness that can be applied to any classification or detection model. In Table 2, we evaluated several commonly used models and found that all of them suffer from substantial accuracy drops due to natural perturbations. In particular, we found that model improvements with respect to artificial perturbations (such as image corruptions or $\ell_\infty$ adversaries) induce at best modest improvements in robustness. We hope that our standardized datasets and evaluation metric will enable future work to quantify improvements in natural robustness directly.

Further natural perturbations. Videos provide a straightforward method for collecting natural perturbations of images, admitting the study of realistic forms of robustness for machine learning methods. Other methods for generating these natural perturbations are likely to provide additional insights into model robustness. As an example, photo-sharing websites contain a large number of near-duplicate images: pairs of images of the same scene captured at different times, viewpoints, or from a different camera Recht et al. (2019). More generally, devising similar, domain-specific strategies to collect, verify, and measure robustness to natural perturbations in domains such as natural language processing or speech recognition is a promising direction for future work." }, { "heading": "A SOURCE DATASET OVERVIEW", "text": "A.1 IMAGENET-VID

The 2015 ImageNet-Vid dataset is widely used for training video object detectors Han et al. (2016) as well as trackers Bertinetto et al. (2016). We chose to work with the 2017 ImageNet-Vid dataset because it is a superset of the 2015 dataset. In total, the 2017 ImageNet-Vid dataset consists of 1,181,113 training frames from 4,000 videos and 512,360 validation frames from 1,314 videos. The videos have frame rates ranging from 9 to 59 frames per second (fps), with a median fps of 29. The videos range from 0.44 to 96 seconds in duration with a median duration of 12 seconds. 
Each frame is annotated with labels indicating the presence or absence of 30 object classes and corresponding bounding boxes for any label present in the frame. The 30 classes are ancestors of 293 of the 1,000 ILSVRC-2012 classes.

A.2 YOUTUBE-BB

The 2017 Youtube-BB is a large-scale dataset with 8,146,143 annotated training frames from 253,569 unique videos and 1,013,246 validation frames from 31,829 videos. The video segments are approximately 19 seconds long on average. Each frame is annotated with exactly one label indicating the presence of one of 22 object classes, all of which are ancestors of 229 of the 1,000 ILSVRC-2012 classes." }, { "heading": "B FULL ORIGINAL VS PERTURBED ACCURACIES", "text": "B.1 IMAGENET-VID-ROBUST

Model    Accuracy Original    Accuracy Perturbed    ∆
resnet152_finetuned 84.8 [82.5, 86.8] 70.2 [67.4, 72.8] 14.6
resnet50_finetuned 80.8 [78.3, 83.1] 65.7 [62.9, 68.5] 15.1
vgg16bn_finetuned 78.0 [75.4, 80.4] 61.0 [58.1, 63.9] 17.0
nasnetalarge_imagenet_pretrained 77.6 [75.1, 80.1] 62.1 [59.2, 65.0] 15.5
resnet50_detection 77.6 [75.1, 80.1] 65.0 [62.1, 67.8] 12.6
inceptionresnetv2_imagenet_pretrained 75.7 [73.1, 78.2] 58.7 [55.7, 61.6] 17.0
dpn107_imagenet_pretrained 75.6 [72.9, 78.1] 59.1 [56.1, 62.0] 16.5
inceptionv4_imagenet_pretrained 75.3 [72.6, 77.8] 59.0 [56.0, 61.9] 16.3
dpn92_imagenet_pretrained 74.4 [71.7, 76.9] 56.8 [53.8, 59.7] 17.6
dpn131_imagenet_pretrained 74.0 [71.3, 76.6] 59.9 [56.9, 62.8] 14.1
dpn68b_imagenet_pretrained 73.7 [71.0, 76.2] 54.0 [51.0, 57.0] 19.7
resnext101_32x4d_imagenet_pretrained 73.3 [70.6, 75.9] 57.2 [54.2, 60.1] 16.1
resnext101_64x4d_imagenet_pretrained 72.9 [70.1, 75.5] 56.6 [53.7, 59.6] 16.3
resnet152_imagenet_pretrained 72.8 [70.0, 75.4] 57.0 [54.0, 59.9] 15.8
resnet101_imagenet_pretrained 71.5 [68.7, 74.1] 53.7 [50.8, 56.7] 17.8
fbresnet152_imagenet_pretrained 71.5 [68.7, 74.1] 54.5 [51.5, 57.4] 17.0
densenet161_imagenet_pretrained 71.4 [68.7, 74.1] 55.1 [52.1, 58.1] 16.3
densenet169_imagenet_pretrained 70.2 [67.5, 72.9] 53.1 [50.1, 56.1] 17.1
densenet201_imagenet_pretrained 70.2 [67.5, 72.9] 53.4 [50.4, 56.4] 16.8
dpn68_imagenet_pretrained 69.4 [66.6, 72.1] 53.3 [50.3, 56.3] 16.1
bninception_imagenet_pretrained 69.0 [66.2, 71.7] 49.0 [46.0, 51.9] 20.0
densenet121_imagenet_pretrained 69.0 [66.2, 71.7] 50.9 [47.9, 53.8] 18.1
nasnetamobile_imagenet_pretrained 68.8 [66.0, 71.5] 48.4 [45.4, 51.4] 20.4
resnet50_augment___jpeg_compression 68.8 [66.0, 71.5] 53.2 [50.2, 56.2] 15.6
resnet34_imagenet_pretrained 68.0 [65.2, 70.7] 48.0 [45.0, 51.0] 20.0
resnet50_augment___impulse_noise 67.7 [64.9, 70.5] 50.2 [47.2, 53.2] 17.5
resnet50_augment__gaussian_blur 67.7 [64.9, 70.5] 52.5 [49.5, 55.5] 15.2
resnet50_imagenet_pretrained 67.5 [64.7, 70.3] 52.5 [49.5, 55.5] 15.0
resnet50_augment___gaussian_noise 67.4 [64.5, 70.1] 50.6 [47.6, 53.6] 16.8
resnet50_augment___shot_noise 66.5 [63.6, 69.2] 51.1 [48.1, 54.1] 15.4
vgg16_bn_imagenet_pretrained 66.4 [63.5, 69.1] 47.4 [44.5, 50.4] 19.0
resnet50_augment___defocus_blur 66.3 [63.4, 69.1] 47.6 [44.6, 50.6] 18.7
vgg19_bn_imagenet_pretrained 65.6 [62.7, 68.4] 46.6 [43.6, 49.6] 19.0
vgg19_imagenet_pretrained 63.2 [60.3, 66.1] 45.4 [42.4, 48.3] 17.8
resnet18_imagenet_pretrained 61.9 [59.0, 64.8] 41.5 [38.6, 44.4] 20.4
vgg13_bn_imagenet_pretrained 61.9 [59.0, 64.8] 43.3 [40.3, 46.3] 18.6
vgg16_imagenet_pretrained 61.4 [58.5, 64.3] 43.1 [40.2, 46.1] 18.3
vgg11_bn_imagenet_pretrained 60.9 [57.9, 63.8] 43.2 [40.3, 46.2] 17.7
vgg13_imagenet_pretrained 59.6 [56.6, 62.5] 41.1 [38.2, 44.1] 18.5
vgg11_imagenet_pretrained 57.3 [54.4, 60.3] 41.3 [38.4, 44.3] 16.0
alexnet_finetuned 57.3 [54.3, 60.2] 43.6 [40.7, 46.6] 13.7
ResNeXtDenoiseAll-101_robust_pgd 54.3 [51.3, 57.2] 40.8 [37.8, 43.7] 13.5
squeezenet1_1_imagenet_pretrained 49.8 [46.8, 52.8] 31.7 [28.9, 34.5] 18.1
alexnet_imagenet_pretrained 49.4 [46.4, 52.4] 32.0 [29.3, 34.8] 17.4
resnet50_augment___contrast_change 38.3 [35.5, 41.3] 23.3 [20.8, 25.9] 15.0" }, { "heading": "C MODEL INDEPENDENT DISTRIBUTION SHIFT", "text": "Though the distribution shift we induced in our study was model-dependent, because we found the worst neighbor frame for each model, we can also study the same problem with a static set of perturbed frames imposed across all models. In Figure 6 we study this static set of perturbations across all models and see a substantial (but smaller) drop in accuracy on both datasets. The static set of perturbations was chosen by selecting, for each anchor, the neighbor frame that the largest number of models classified incorrectly." }, { "heading": "D PER CLASS ANALYSIS", "text": "We study the effect of our perturbations on the 30 classes in ImageNet-Vid-Robust and YTBB-Robust to determine whether the performance drop was concentrated in a few “hard” classes.

Figure 7 shows the original and perturbed accuracies across classes for our best-performing model (a fine-tuned ResNet-152). Although there are a few particularly difficult classes for perturbed accuracy (e.g., lion or monkey on ImageNet-Vid-Robust), the accuracy drop is spread across most classes. On ImageNet-Vid-Robust, this model saw a total drop of 14.4% between original and perturbed images and a median drop of 14.0% in per-class accuracy. On YTBB-Robust, the total drop was 8.9% and the median drop was 6.7%." }, { "heading": "E PER-FRAME CONDITIONAL ROBUSTNESS METRIC INTRODUCED IN GU ET AL. (2019)", "text": "In concurrent work, the authors of Gu et al. (2019) considered a different metric of robustness. In this section, we compute this metric on all models in our test bed to compare our findings to Gu et al. (2019). There are two main differences between PM-k and the robustness metric in Gu et al. (2019).

1. For two visually similar “neighbor” frames $I_0$ and $I_1$ with true label $y$ and classifier $f$, Gu et al. (2019) study the conditional probability $P(f(I_1) = y \mid f(I_0) = y)$.
2. While PM-k looks for errors in all neighbor frames in a neighborhood of k frames away from the anchor frame (so this would include frames 1, 2, ..., k frames away), Gu et al. (2019) only consider errors from exactly k frames away.

In Fig. 9 we illustrate a simple example where two videos can have the same behavior for the metric introduced by Gu et al. (2019) but drastically different behavior for the PM-k metric." }, { "heading": "F $\ell_\infty$ DISTANCE VS PM-K ACCURACY", "text": "$\ell_\infty$ adversarial examples are well studied in the robustness community, yet the connection between $\ell_\infty$ robustness and other forms of more “natural” robustness is unclear. Here, we plot the cumulative distribution of the $\ell_\infty$ distance between pairs of nearby frames in our datasets. In Figure 10, we show the CDF of the $\ell_\infty$ distance for all pairs, all reviewed pairs, and the mistakes made by 3 indicative models. Note that the fbrobust model is trained specifically to be robust to $\ell_\infty$ adversaries." }, { "heading": "G PM-K ACCURACY WITH VARYING K", "text": "In Figure 11, we plot the relationship between $\mathrm{acc}_{\mathrm{pmk}}$ and perturbation distance (i.e., the k in the pm-k metric). 
The entire x-axis in Figure 11 corresponds to a temporal distance of at most 0.3 seconds between the original and perturbed frames.

H I-FRAMES AND P-FRAMES

H.1 IMAGENET-VID-ROBUST

One possible concern with analyzing performance on video frames is the impact of video compression on model robustness. In particular, the videos in ImageNet-Vid-Robust contain 3 different frame types: ‘i-frames’, ‘p-frames’, and ‘b-frames’. ‘p-frames’ are compressed by referencing pixel content from previous frames, while ‘b-frames’ are compressed via references to previous and future frames. ‘i-frames’ are stored without references to other frames. We compute the original and perturbed accuracies, and the drop in accuracy, for a subset of the dataset without ‘i-frames’, a subset without ‘p-frames’, and a subset without ‘b-frames’ in Table 7. While there are modest differences in accuracy due to compression, this analysis suggests that the sensitivity of models is not primarily due to differences in frame quality caused by video compression." }, { "heading": "I FPS ANALYSIS", "text": "I.1 IMAGENET-VID-ROBUST

To analyze the impact of frame rate on accuracy, we show results on subsets of videos with fixed fps (25, 29, and 30, which cover 89% of the dataset) using a fine-tuned ResNet-152 model in Table 8. The accuracy drop is similar across the subsets, and similar to the drop for the whole dataset.

J ILSVRC TRAINING WITH IMAGENET-VID-ROBUST CLASSES

We trained ResNet-50 from scratch on ILSVRC using the 30 ImageNet-Vid classes. We also fine-tuned the model on ImageNet-Vid. In Table 9, we show that the accuracy drops are consistent with the models in our submission. We hypothesize that the lower accuracy is due to coarser supervision on ILSVRC." }, { "heading": "K EXPERIMENTAL DETAILS & HYPERPARAMETERS", "text": "All classification experiments were carried out using PyTorch version 1.0.1 on an AWS p3.2xlarge with the NVIDIA V100 GPU. All pretrained models were downloaded from Cadene at commit hash 021d97897c9aa76ec759deff43d341c4fd45d7ba. Evaluations in Table ?? all use the default evaluation settings. The hyperparameters for the fine-tuned models are presented in Table 10. We searched for learning rates between $10^{-3}$ and $10^{-5}$ for all models. We additionally detail hyperparameters for detection models in Table 11. Detection experiments were conducted with PyTorch version 1.0.1 on a machine with 4 Titan X GPUs, using the Mask R-CNN benchmark repository Massa and Girshick (2018). We used the default learning rate provided in Massa and Girshick (2018). For R-FCN, we used the model trained by Xiao and Jae Lee (2018)." }, { "heading": "L DETECTION PM-K", "text": "We briefly introduce the mAP metric for detection here and refer the reader to Lin et al. (2014) for further details. The standard detection metric proceeds by first determining whether each predicted bounding box in an image is a true or false positive, based on the intersection over union (IoU) of the predicted and ground truth bounding boxes. The metric then computes the per-category average precision (AP, averaged over recall thresholds) of the predictions across all images. The final metric is reported as the mean of these per-category APs (mAP). We define the pm-k analog of mAP by replacing each anchor frame in the dataset with a nearby frame that minimizes the per-image average precision. 
Since the category-specific average precision is undefined for categories not present in an image, we minimize the average precision across categories present in each frame rather than the mAP." } ]
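As an illustration of the pm-k analog of mAP described in Appendix L above, here is a small sketch. The callables per_image_ap and map_over stand in for a full detection evaluation pipeline and are assumptions of ours rather than the paper's code.

```python
def pmk_map(per_image_ap, map_over, anchors, neighbor_sets):
    """pm-k analog of mAP: swap each anchor frame for its worst similar frame.

    per_image_ap(frame) -> average precision of the predictions on one frame,
    averaged over recall thresholds and restricted to the categories present
    in that frame (as described above).
    map_over(frames) -> mAP computed over a list of frames.
    """
    worst_frames = []
    for anchor, neighbors in zip(anchors, neighbor_sets):
        candidates = [anchor] + list(neighbors)
        # Replace the anchor with the similar frame of lowest per-image AP.
        worst_frames.append(min(candidates, key=per_image_ap))
    return map_over(worst_frames)
```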
2019
DO IMAGE CLASSIFIERS GENERALIZE ACROSS TIME?
SP:3b0d0ac062a7bc618741cff17c7d507b0b0a7489
[ "This paper investigates the impact of stale weights on the statistical efficiency and performance in a pipelined backpropagation scheme that maximizes accelerator utilization while keeping the memory overhead modest. The paper proposes to combine pipelined and non-pipelined training in a hybrid scheme to address the issue of significant drop in accuracy when pipelining is deeper in the network. The performance of the proposed pipelined backpropagation is demonstrated on 2 GPUs using ResNet with speedups of up to 1.8X over a 1-GPU baseline and a small drop in inference accuracy.", "This paper proposes a new pipelined training approach to speedup the training for neural networks. The approach separates forward and backpropagation processes into multiple stages, cache the activation and gradients between stages, processes stages simultaneously, and then uses the stored activations to compute gradients for updating the weights. The approach leads to stale weights and gradients. The authors studied the relation between weight staleness and show that the quality degradation mainly correlates with the percentage of the weights being stale in the pipeline. The quality degradation can also be remedied by turning off the pipelining at the later training steps while overall training speed is still faster than without pipelined training." ]
The growth in the complexity of Convolutional Neural Networks (CNNs) is increasing interest in partitioning a network across multiple accelerators during training and pipelining the backpropagation computations over the accelerators. Existing approaches avoid or limit the use of stale weights through techniques such as micro-batching or weight stashing. These techniques either underutilize accelerators or increase the memory footprint. We explore the impact of stale weights on the statistical efficiency and performance in a pipelined backpropagation scheme that maximizes accelerator utilization and keeps memory overhead modest. We use 4 CNNs (LeNet-5, AlexNet, VGG and ResNet) and show that when pipelining is limited to early layers in a network, training with stale weights converges and results in models with comparable inference accuracies to those resulting from non-pipelined training on the MNIST and CIFAR-10 datasets; a drop in accuracy of 0.4%, 4%, 0.83% and 1.45% for the 4 networks, respectively. However, when pipelining is deeper in the network, inference accuracies drop significantly. We propose combining pipelined and non-pipelined training in a hybrid scheme to address this drop. We demonstrate the implementation and performance of our pipelined backpropagation in PyTorch on 2 GPUs using ResNet, achieving speedups of up to 1.8X over a 1-GPU baseline, with a small drop in inference accuracy.
[ { "affiliations": [], "name": "STALE WEIGHTS" } ]
[ { "authors": [ "Jianmin Chen", "Rajat Monga", "Samy Bengio", "Rafal Józefowicz" ], "title": "Revisiting distributed synchronous SGD, 2016", "venue": "URL http://arxiv.org/abs/1604.00981", "year": 2016 }, { "authors": [ "Xie Chen", "Adam Eversole", "Gang Li", "D.H. Yu", "Frank Seide" ], "title": "Pipelined backpropagation for context-dependent deep neural networks", "venue": "In Proc. Interspeech,", "year": 2012 }, { "authors": [ "Trishul Chilimbi", "Yutaka Suzue", "Johnson Apacible", "Karthik Kalyanaraman" ], "title": "Project adam: Building an efficient and scalable deep learning training system", "venue": "In Symp. on Operating Systems Design and Implementation (OSDI),", "year": 2014 }, { "authors": [ "Henggang Cui", "Hao Zhang", "Gregory R. Ganger", "Phillip B. Gibbons", "Eric P. Xing" ], "title": "Geeps: Scalable deep learning on distributed gpus with a gpu-specialized parameter server", "venue": "In Proc. of the Eleventh European Conference on Computer Systems,", "year": 2016 }, { "authors": [ "Jeffrey Dean", "Greg Corrado", "Rajat Monga", "Kai Chen", "Matthieu Devin", "Mark Mao", "Marc aurelio Ranzato", "Andrew Senior", "Paul Tucker", "Ke Yang", "Quoc V. Le", "Andrew Y. Ng" ], "title": "Large scale distributed deep networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2012 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L.-J. Li", "K. Li", "L. Fei-Fei" ], "title": "ImageNet: A Large-Scale Hierarchical Image Database", "venue": "In CVPR09,", "year": 2009 }, { "authors": [ "Priya Goyal", "Piotr Dollár", "Ross B. Girshick", "Pieter Noordhuis", "Lukasz Wesolowski", "Aapo Kyrola", "Andrew Tulloch", "Yangqing Jia", "Kaiming He" ], "title": "Accurate, large minibatch SGD: training ImageNet in 1 hour, 2017", "venue": "URL http://arxiv.org/abs/1706.02677", "year": 2017 }, { "authors": [ "Aaron Harlap", "Deepak Narayanan", "Amar Phanishayee", "Vivek Seshadri", "Nikhil Devanur", "Greg Ganger", "Phil Gibbons" ], "title": "Pipedream: Fast and efficient pipeline parallel DNN training, 2018", "venue": "URL http://arXiv:1806.03377", "year": 2018 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proc. of Conf. on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Yanping Huang", "Yonglong Cheng", "Dehao Chen", "HyoukJoong Lee", "Jiquan Ngiam", "Quoc V. Le", "Zhifeng Chen" ], "title": "Gpipe: Efficient training of giant neural networks using pipeline parallelism, 2018", "venue": "URL http://arXiv:1811.06965", "year": 2018 }, { "authors": [ "Zhouyuan Huo", "Bin Gu", "Heng Huang" ], "title": "Training neural networks using features replay", "venue": "In Proceedings of the 32Nd International Conference on Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Zhouyuan Huo", "Bin Gu", "Qian Yang", "Heng Huang" ], "title": "Decoupled parallel backpropagation with convergence guarantee", "venue": "2018b. URL http://arxiv.org/abs/1804.10574", "year": 2018 }, { "authors": [ "Yangqing Jia", "Evan Shelhamer", "Jeff Donahue", "Sergey Karayev", "Jonathan Long", "Ross Girshick", "Sergio Guadarrama", "Trevor Darrell" ], "title": "Caffe: Convolutional architecture for fast feature embedding", "venue": "In Proc. of the Int’l Conf. on Multimedia,", "year": 2014 }, { "authors": [ "Jin Kyu Kim", "Qirong Ho", "Seunghak Lee", "Xun Zheng", "Wei Dai", "Garth A. Gibson", "Eric P. 
Xing" ], "title": "Strads: A distributed framework for scheduled model parallel machine learning", "venue": "In Proc. of the 11th European Conference on Computer Systems,", "year": 2016 }, { "authors": [ "Alex Krizhevsky", "Vinod Nair", "Geoffrey Hinton" ], "title": "CIFAR-10 (canadian institute for advanced research). URL http://www.cs.toronto.edu/ ̃kriz/cifar.html", "venue": "In Advances in Neural Information Processing Systems", "year": 2012 }, { "authors": [ "Yann LeCun", "Lon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "In Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Seunghak Lee", "Jin Kyu Kim", "Xun Zheng", "Qirong Ho", "Garth A Gibson", "Eric P Xing" ], "title": "On model parallelization and scheduling strategies for distributed machine learning", "venue": "In Advances in Neural Information Processing Systems", "year": 2014 }, { "authors": [ "Yucheng Low", "Joseph Gonzalez", "Aapo Kyrola", "Danny Bickson", "Carlos Guestrin", "Joseph M. Hellerstein" ], "title": "Distributed graphlab: A framework for machine learning", "venue": "in the cloud,", "year": 2012 }, { "authors": [ "Hesham Mostafa", "Bruno Pedroni", "Sadique Sheik", "Gert Cauwenberghs" ], "title": "Hardware-efficient online learning through pipelined truncated-error backpropagation in binary-state networks", "venue": "In Frontiers in Neuroscience,", "year": 2017 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in PyTorch", "venue": "In Proc. of Neural Information Processing Systems Autodiff Workshop,", "year": 2017 }, { "authors": [ "A. Petrowski", "G. Dreyfus", "C. Girault" ], "title": "Performance analysis of a pipelined backpropagation parallel algorithm", "venue": "In IEEE Transactions on Neural Networks,", "year": 1993 }, { "authors": [ "David E. Rumelhart", "Geoffrey E. Hinton", "Ronald J. Williams" ], "title": "Learning representations by back-propagating errors", "venue": "Nature, 323:533–536,", "year": 1986 }, { "authors": [ "K. Simonyan", "A. Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": null, "year": 2014 }, { "authors": [ "Guanhua Wang", "Shivaram Venkataraman", "Amar Phanishayee", "Jorgen Thelin", "Nikhil R. Devanur", "Ion Stoica" ], "title": "Blink: Fast and generic collectives for distributed ml", "venue": null, "year": 1910 }, { "authors": [ "Hao Zhang", "Zeyu Zheng", "Shizhen Xu", "Wei Dai", "Qirong Ho", "Xiaodan Liang", "Zhiting Hu", "Jinliang Wei", "Pengtao Xie", "Eric P. Xing" ], "title": "Poseidon: An efficient communication architecture for distributed deep learning on GPU clusters", "venue": "In Proc. of the USENIX Annual Technical Conference (ATC),", "year": 2017 } ]
[ { "heading": null, "text": "The growth in the complexity of Convolutional Neural Networks (CNNs) is increasing interest in partitioning a network across multiple accelerators during training and pipelining the backpropagation computations over the accelerators. Existing approaches avoid or limit the use of stale weights through techniques such as micro-batching or weight stashing. These techniques either underutilize of accelerators or increase memory footprint. We explore the impact of stale weights on the statistical efficiency and performance in a pipelined backpropagation scheme that maximizes accelerator utilization and keeps memory overhead modest. We use 4 CNNs (LeNet-5, AlexNet, VGG and ResNet) and show that when pipelining is limited to early layers in a network, training with stale weights converges and results in models with comparable inference accuracies to those resulting from non-pipelined training on MNIST and CIFAR-10 datasets; a drop in accuracy of 0.4%, 4%, 0.83% and 1.45% for the 4 networks, respectively. However, when pipelining is deeper in the network, inference accuracies drop significantly. We propose combining pipelined and non-pipelined training in a hybrid scheme to address this drop. We demonstrate the implementation and performance of our pipelined backpropagation in PyTorch on 2 GPUs using ResNet, achieving speedups of up to 1.8X over a 1-GPU baseline, with a small drop in inference accuracy." }, { "heading": "1 INTRODUCTION", "text": "Modern Convolutional Neural Networks (CNNs) have grown in size and complexity to demand considerable memory and computational resources, particularly for training. This growth makes it sometimes difficult to train an entire network with a single accelerator (Huang et al., 2018; Harlap et al., 2018; Chen et al., 2012). Instead, the network is partitioned among multiple accelerators, typically by partitioning its layers among the available accelerators, as shown in Figure 1 for an example 8-layer network. The 8 layers are divided into 4 computationally-balanced partitions, P0...P3 and each partition is mapped to one of the 4 accelerators, A0...A3. Each accelerator is responsible for the computations associated with the layers mapped to it.\nHowever, the nature of the backpropagation algorithm used to train CNNs (Rumelhart et al., 1986) is that the computations of a layer are performed only after the computations of the preceding layer in the forward pass of the algorithm and only after the computations of the succeeding layer in the backward pass. Further, the computations for one batch of input data are only performed after the computations of the preceding batch have updated the parameters (i.e., weights) of the network. These dependences underutilize the accelerators, as shown by the space-time diagram in Figure 2; only one accelerator can be active at any given point in time.\nThe underutilization of accelerators can be alleviated by pipelining the computations of the backpropagation algorithm over the accelerators (Huang et al., 2018; Harlap et al., 2018; Chen et al., 2012). That is, by overlapping the computations of different input data batches using the multiple accelerators. However, pipelining causes an accelerator to potentially use weights that are yet to be updated by an accelerator further down in the pipeline. 
The use of such stale weights can negatively affect the statistical efficiency of the network, preventing the convergence of training or producing a model with lower inference accuracy.

Figure 2: Schedule of Computations

Common wisdom is that the use of stale weights must either be avoided, e.g., with the use of micro-batches (Huang et al., 2018), be constrained to ensure the consistency of the weights within an accelerator using stashing (Harlap et al., 2018), or be limited to very small networks (Mostafa et al., 2017). However, these approaches either underutilize accelerators (Huang et al., 2018) or inflate memory usage to stash multiple copies of weights (Harlap et al., 2018).

In this paper we question this common wisdom and explore pipelining that allows for the full utilization of accelerators while using stale weights. This results in a pipelining scheme that, compared to existing schemes, is simpler to implement, fully utilizes the accelerators, and has lower memory overhead. We evaluate this pipelining scheme using 4 CNNs: LeNet-5 (trained on MNIST), AlexNet, VGG and ResNet (all trained on CIFAR-10). We analyze the impact of weight staleness and show that if pipelining is limited to early layers in the network, training does converge and the quality of the resulting models is comparable to that of models obtained with non-pipelined training. For the 4 networks, the drop in accuracy is 0.4%, 4%, 0.83% and 1.45%, respectively. However, inference accuracies drop significantly when the pipelining is deeper in the network. While this is not a limitation, since the bulk of the computations that can benefit from pipelining are in the early convolutional layers, we address it through a hybrid scheme that combines pipelined and non-pipelined training to maintain inference accuracy while still delivering a performance improvement. Evaluation shows that our pipelined training delivers a speedup of up to 1.8X on a 2-GPU system.

The remainder of this paper is organized as follows. Section 2 briefly describes the backpropagation algorithm for training CNNs. Section 3 details our pipelining scheme. Section 4 describes how non-pipelined and pipelined backpropagation are combined. Section 5 highlights some of the implementation details. Experimental evaluation is presented in Section 6. Related work is reviewed in Section 7. Finally, Section 8 gives concluding remarks and directions for future work." }, { "heading": "2 THE BACKPROPAGATION ALGORITHM", "text": "The backpropagation algorithm (Rumelhart et al., 1986) consists of two passes: a forward pass that calculates the output error and a backward pass that calculates the error gradients and updates the weights of the network. The two passes are performed for input data one mini-batch at a time.

In the forward pass, a mini-batch is fed into the network, propagating from the first to the last layer. At each layer $l$, the activations of the layer, denoted by $x^{(l)}$, are computed using the weights of the layer, denoted by $W^{(l)}$. When the output of the network (layer $L$), $x^{(L)}$, is produced, it is used with the true data label to obtain a training error $e$ for the mini-batch.

In the backward pass, the error $e$ is propagated from the last to the first layer. The error gradients with respect to the pre-activations of layer $l$, denoted by $\delta^{(l)}$, are calculated. 
Further, the error gradients with respect to the weights of layer $l$, $\partial e / \partial W^{(l)}$, are computed using the activations from layer $l-1$ (i.e., $x^{(l-1)}$) and $\delta^{(l)}$. Subsequently, $\delta^{(l)}$ is used to calculate $\delta^{(l-1)}$. When $\partial e / \partial W^{(l)}$ is computed for every layer, the weights are updated using the error gradients.

In the forward pass, the activations of layer $l$, $x^{(l)}$, cannot be computed until the activations of the previous layer, i.e., $x^{(l-1)}$, are computed. In the backward pass, $\partial e / \partial W^{(l)}$ can only be computed once $x^{(l-1)}$ and $\delta^{(l)}$ have been computed. Moreover, $\delta^{(l)}$ depends on $\delta^{(l+1)}$. Finally, for a given mini-batch the backward pass cannot be started until the forward pass is completed and the error $e$ has been determined.

The above dependences ensure that the weights of the layers are updated using the activations and error gradients calculated from the same batch of training data in one iteration of the backpropagation algorithm. Only when the weights are updated is the next batch of training data fed into the network. These dependences limit parallelism when a network is partitioned across multiple accelerators and allow only one accelerator to be active at any point. This results in under-utilization of the accelerators. It is this limitation that pipelining addresses." }, { "heading": "3 PIPELINED BACKPROPAGATION", "text": "We illustrate our pipelined backpropagation implementation with the $L$-layer network shown in Figure 3, using conceptual pipeline registers. Two registers are inserted between layers $l$ and $l+1$: one register for the forward pass and a second for the backward pass. The forward register stores the activations of layer $l$ ($x^{(l)}$). The backward register stores the gradients $\delta^{(l+1)}$ of layer $l+1$. This defines a 4-stage pipelined backpropagation. The forward pass for layers 1 to $l$ forms forward stage FS1. The forward pass for layers $l+1$ to $L$ forms forward stage FS2. Similarly, the backward passes for layers $l+1$ to $L$ and 1 to $l$ form backward stages BKS1 and BKS2, respectively.

The forward and backward stages are executed in a pipelined fashion on 3 accelerators: one for FS1, one for both FS2 and BKS1, and one for BKS2.¹ In cycle 0, mini-batch 0 is fed to FS1. The computations of the forward pass are done as in the traditional non-pipelined implementation. In cycle 1, the layer $l$ activations $x^{(l)}$ are fed to FS2 and mini-batch 1 is fed to FS1. In cycle 2, the error for mini-batch 0 computed in FS2 is directly fed to BKS1, the activations of layer $l$ ($x^{(l)}$) are forwarded to FS2, and mini-batch 2 is fed to FS1. This pipelined execution is illustrated by the space-time diagram in Figure 4 for 5 mini-batches. The figure depicts the mini-batch processed by each accelerator in cycles 0 to 6. At steady state, all the accelerators are active in each cycle of execution.

The above pipelining scheme utilizes weights in FS1 that are yet to be updated by the errors calculated by FS2 and BKS1. At steady state, the activations of a mini-batch in FS1 are calculated using weights that are 2 execution cycles old, or 2 cycles stale. This is reflected in Figure 4 by indicating the weights used by each forward stage and the weights updated by each backward stage. The weights of a forward stage are subscripted by how stale they are (negative subscripts). 
Similarly, the weights updated by a backward stage are subscripted by how delayed they are (positive subscripts).

Further, since the updates of the weights by BKS2 require activations calculated for the same mini-batch in FS1 for all layers in the stage, it is necessary to save these activations until the error gradients with respect to the weights are calculated by BKS2. Only when the weights are updated using the gradients can these activations be discarded.

In the general case, we use $K$ pairs of pipeline registers (each pair consisting of a forward register and a backward register) inserted between the layers of the network. We describe the placement of the register pairs by the Pipeline Placement Vector, $PPV = (p_1, p_2, \ldots, p_K)$, where $p_i$ represents the layer number after which a pipeline register pair is inserted. Such a placement creates $K+1$ forward stages, labeled FS$_i$, $i = 1, 2, \ldots, K+1$, and $K+1$ backward stages, labeled BKS$_i$, $i = 1, 2, \ldots, K+1$. Forward stage FS$_i$ and backward stage BKS$_{K-i+2}$ correspond to the same set of layers. Specifically, stage FS$_i$ contains layers $p_i + 1$ to $p_{i+1}$, inclusive. We assign each forward stage and each backward stage to an accelerator, with the exception of FS$_{K+1}$ and backward stage BKS$_1$, which are assigned to the same accelerator to reduce weight staleness by an execution cycle. In total, $2K+1$ accelerators are used.

We quantify weight staleness as follows. A forward stage FS$_i$ and backward stage BKS$_{K-i+2}$ use the same weights, which are $2(K-i+1)$ cycles old. A forward stage FS$_i$ must store the activations of all layers in the stage for all $2(K-i+1)$ cycles, as they are used by the corresponding backward stage BKS$_{K-i+2}$. Thus, we define the Degree of Staleness as $2(K-i+1)$, and these saved activations as intermediate activations. For each pair of stages FS$_i$ and BKS$_{K-i+2}$, let there be $N_i$ weights in their corresponding layers. The layers before the last pipeline register pair always use stale weights. Thus, we define the Percentage of Stale Weight as $(\sum_{i=1}^{K} N_i) / (\sum_{i=1}^{K+1} N_i)$.

¹We combine FS2 and BKS1 on the same accelerator to reduce weight staleness.

On the one hand, the above pipelined execution allows a potential speedup of $2K+1$ over the non-pipelined implementation, keeping all the accelerators active at steady state. On the other hand, the use of stale weights may prevent training convergence or may result in a model that has an inferior inference accuracy. Further, it requires an increase in storage for activations. Our goal is to assess the benefit of this pipelined execution and the impact of its downsides." }, { "heading": "4 HYBRID PIPELINED/NON-PIPELINED BACKPROPAGATION", "text": "Hybrid training combines pipelined training with non-pipelined training. We start with pipelined training and, after a number of iterations, switch to non-pipelined training. This can address drops in the inference accuracy of the resulting models caused by weight staleness, but it reduces the performance benefit since, during non-pipelined training, the accelerators are under-utilized.

The extent of the speedup obtained by hybrid training with a given number of accelerators is determined by the number of iterations used for pipelined and non-pipelined training. Assume that $n_{np}$ iterations are used to reach the best inference accuracy for non-pipelined training, and that in hybrid training, $n_p$ iterations ($n_p \le n_{np}$) are pipelined, followed by $n_{np} - n_p$ iterations of non-pipelined training to reach the same inference accuracy as non-pipelined training. 
The speedup of hybrid training with respect to the non-pipelined training with $2K+1$ accelerators is $n_{np} / (n_p/(2K+1) + (n_{np} - n_p))$. For large $K$, using Amdahl's law, the speedup approaches an upper bound of $n_{np} / (n_{np} - n_p)$." }, { "heading": "5 IMPLEMENTATION", "text": "We implement pipelined training in two ways: simulated in Caffe (Jia et al., 2014), where the whole training process is performed in one process with no parallelism, and actual, with parallelism across accelerators, in PyTorch (Paszke et al., 2017). The simulated execution is used to analyze statistical convergence, inference accuracy and the impact of weight staleness, unconstrained by parallelism and communication overhead. The actual execution is used to report performance; PyTorch is used instead of Caffe to leverage its support for collective communication protocols and its flexibility in partitioning a network across multiple accelerators. Neither Caffe nor PyTorch supports pipelined training; thus, both were extended to provide such support.

We develop a custom Caffe layer in Python, which we call a Pipeline Manager Layer (PML), to facilitate the simulated pipelining. During the forward pass, a PML registers the input from the previous layer and passes the activations to the next layer. It also saves the activations of the layers connected to it to be used in the backward pass. During the backward pass, a PML passes the appropriate error gradients. It uses the corresponding activations saved during the forward pass to update weights and generate error gradients for the previous stage, using existing weight update mechanisms in Caffe.

To implement actual hardware-accelerated pipelined training, we partition the network onto different accelerators (GPUs), each running its own process. Asynchronous sends and receives are used for data transfers, but all communication must go through the host CPU, since point-to-point communication between accelerators is not supported in PyTorch. This increases communication overhead. Similar to the PMLs in Caffe, the activations computed on one GPU are copied to the next GPU (via the CPU) in the forward pass, and the error gradients are sent (again via the CPU) to the preceding GPU during the backward pass. The GPUs run concurrently, achieving pipeline parallelism." }, { "heading": "6 EVALUATION", "text": "" }, { "heading": "6.1 SETUP, METHODOLOGY AND METRICS", "text": "Simulated pipelining is evaluated on a machine with one Nvidia GTX1060 GPU with 6 GB of memory and an Intel i9-7940X CPU with 64 GB of RAM. The performance of actual pipelining is evaluated using two Nvidia GTX1060 GPUs, each with 6 GB of memory, hosted in an Intel i7-9700K machine with 32 GB of RAM.

We use four CNNs in our evaluation: LeNet-5 (LeCun et al., 1998) trained on MNIST (LeCun & Cortes), AlexNet (Krizhevsky et al., 2012), VGG-16 (Simonyan & Zisserman, 2014) and ResNet (He et al., 2016), all trained on CIFAR-10 (Krizhevsky et al.). For ResNet, we experiment with different depths: 20, 56, 110, 224 and 362. We train these CNNs mostly following their original settings (LeCun et al., 1998; Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; He et al., 2016), with minor variations to the hyperparameters, as described in Appendix 8.

We evaluate the effectiveness of pipelined training in terms of its training convergence and its Top-1 inference accuracy, compared to those of non-pipelined training. We use the speedup to evaluate performance improvements. 
Speedup is defined as the ratio of the training time of the non-pipelined implementation on a single communication-free GPU to the training time of the pipelined training." }, { "heading": "6.2 TRAINING CONVERGENCE AND INFERENCE ACCURACY", "text": "Figure 5 shows the improvements in the inference accuracies for both pipelined and non-pipelined training as a function of the number of training iterations (each iteration corresponds to a mini-batch). The pipelined training is done using 4, 6, 8 and 10 stages. Table 1 shows where the registers are inserted in the networks using their PPV defined in Section 3. Figure 5 shows that for all the networks, both pipelined and non-pipelined training have similar convergence patterns. They converge in more or less the same number of iterations for a given number of pipeline stages, albeit to different inference accuracies. This indicates that our approach to pipelined training with stale weights does converge, similar to non-pipelined training.\nTable 2 shows the inference accuracy obtained after up to 30,000 iterations of training. For LeNet-5, the inference accuracy drop is within 0.5%. However, for the other networks, there is a small drop in inference accuracy with 4 and 6 stages. AlexNet has about a 4% drop in inference accuracy, but for VGG-16 the inference accuracy drop is within 2.4%, and for ResNet-20 the accuracy drop is within 3.5%. Thus, the resulting model quality is comparable to that of a non-pipelining-trained model.\nHowever, with deeper pipelining (i.e., 8 and 10 stages), inference accuracy drops significantly. There is a 12% and an 8.5% inference accuracy drop for VGG-16 and ResNet-20, respectively. In this case, the model quality is not comparable to that of the non-pipelined training. These results confirm what is reported in the literature (Harlap et al., 2018) and can be attributed to the use of stale weights. Below we further explore the impact of stale weights on inference accuracy." }, { "heading": "6.3 IMPACT OF WEIGHT STALENESS", "text": "We wish to better understand the impact of the number of pipeline stages and the location of these stages in the network on inference accuracy. We focus on ResNet-20 because of its relatively small size and regular structure. It consists of 3 residual function groups with 3 residual function blocks within each group. In spite of this relatively small size and regular structure, it enables us to create pipelines with up to 20 stages by inserting pipeline register pairs within residual function blocks.\nWe conduct two experiments. In the first, we increase the number of pipeline stages (from earlier layers to later layers) and measure the inference accuracy of the resulting model. The results are shown in Table 3, which gives the inference accuracy of pipelined training after 100,000 iterations, as the number of pipeline stages increases. The 8-stage pipelined training is created by a PPV of (3,5,7), and the subsequent pipeline schemes are created by adding pipeline registers after every 2 layers after layer 7. Clearly, the greater the number of stages, the worse the resulting model quality.\nThe number of stale weights used in the pipelined training increases as the number of pipeline stages increases. Thus, Figure 6 depicts the inference accuracy as a function of the percentage of weights that are stale (a small sketch of this staleness bookkeeping follows).
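The staleness bookkeeping from Section 3 can be made concrete with the following minimal sketch; the PPV and the per-layer weight counts are hypothetical, and per-layer counts stand in for the per-stage counts $N_i$ used in the text.

```python
# Minimal sketch of the Section 3 staleness definitions.

def stage_layers(ppv, num_layers):
    """Map a Pipeline Placement Vector to the layers of each of the
    K + 1 forward stages (FS_i spans layers p_i + 1 .. p_{i+1})."""
    bounds = [0] + list(ppv) + [num_layers]
    return [list(range(bounds[i] + 1, bounds[i + 1] + 1))
            for i in range(len(bounds) - 1)]

def degree_of_staleness(i, k):
    """Weights used by FS_i / BKS_{K-i+2} are 2(K - i + 1) cycles old."""
    return 2 * (k - i + 1)

def pct_stale_weights(weights_per_layer, ppv):
    """Percentage of Stale Weight: every layer before the last register
    pair uses stale weights."""
    last = ppv[-1]
    return sum(weights_per_layer[:last]) / sum(weights_per_layer)

if __name__ == "__main__":
    ppv = (3, 5, 7)                  # hypothetical 8-stage placement
    n = [1000] * 20                  # hypothetical per-layer weight counts
    k = len(ppv)
    print(stage_layers(ppv, 20))
    print([degree_of_staleness(i, k) for i in range(1, k + 2)])  # [6, 4, 2, 0]
    print(pct_stale_weights(n, ppv))                             # 0.35
```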
The curve labeled “Increasing Stages” shows that the drop in inference accuracy increases as the percentage of stale weights increases.\nIn the second experiment, we investigate the impact of the degree of staleness (Section 3). Only one pair of pipeline registers is inserted. The position of this register pair slides from the beginning of the network to its end. At every position, the percentage of stale weights remains the same as in the first experiment, but all stale weights have the same degree of staleness. The result of this experiment is shown by the curve labeled “Sliding Stage” in Figure 6. The curve shows that the inference accuracy also drops as the percentage of stale weights increases. However, it also indicates that the drop in inference accuracy remains more or less the same as in the first experiment, in which the degree of staleness is higher. Thus, the percentage of stale weights appears to be what determines the drop in inference accuracy, and not the degree of staleness of the weights.\nThe percentage of stale weights is determined by where the last pair of pipeline registers is placed in the network. It is the position of this pair that determines the loss in inference accuracy. Therefore, it is desirable to place this last pair of registers as early as possible in the network so as to minimize the drop in inference accuracy.\nWhile at first glance this may seem to limit pipelining, it is important to note that the bulk of computations in a CNN is in the first few convolutional layers of the network. Inserting pipeline registers in these early layers can result in a large number of stages that are also computationally balanced. For example, our profiling of the runtime of ResNet-20 shows that the first three residual functions take more than 50% of the training runtime. This favors more pipeline stages at the beginning of the network. Such placement has the desirable effect of reducing the drop in inference accuracy while obtaining relatively computationally balanced pipeline stages." }, { "heading": "6.4 EFFECTIVENESS OF HYBRID TRAINING", "text": "We demonstrate the effectiveness of hybrid training using only ResNet-20, for brevity. Figure 7 shows the inference accuracy for 20K iterations of pipelined training followed by either 10K or 20K iterations of non-pipelined training. This inference accuracy is compared to 30K iterations of either non-pipelined or pipelined training with PPV (5,12,17). The figure demonstrates that hybrid training converges in a similar manner to both pipelined and non-pipelined training. Table 4 shows the resulting inference accuracies. The table shows that the 20K+10K hybrid training produces a model with accuracy that is comparable to that of the non-pipelined model. Further, with an additional 10K iterations of non-pipelined training, the model quality is slightly better than that of the non-pipelined model. This demonstrates the effectiveness of hybrid training." }, { "heading": "6.5 PIPELINED AND HYBRID TRAINING PERFORMANCE", "text": "We implement 4-stage pipelined training of ResNet-20/56/110/224/362 on a 2-GPU system. Each GPU is responsible for one forward stage and one backward stage. Thus, the maximum speedup that can be obtained is 2. We train every ResNet for 200 epochs. Table 5 shows the inference accuracies with and without pipelining, as well as the measured speedups of pipelined training over the non-pipelined one (a simplified sketch of the steady-state pipeline schedule follows).
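The measured speedups arise from a schedule in which every stage processes a distinct mini-batch each cycle. The following pure-Python sketch simulates that steady-state schedule under one consistent reading of Section 3 (FS_i works on mini-batch t - i + 1 at cycle t, BKS_j on mini-batch t - K - j + 1, with FS_{K+1} and BKS_1 sharing an accelerator); K and the cycle range are hypothetical.

```python
# Minimal simulation of the steady-state pipeline schedule (no GPUs).
K = 2  # two register pairs -> 3 forward + 3 backward stages, 5 accelerators

def schedule(t, k=K):
    fwd = {f"FS{i}": t - i + 1 for i in range(1, k + 2)}
    bwd = {f"BKS{j}": t - k - j + 1 for j in range(1, k + 2)}
    return fwd, bwd

for t in range(2 * K + 1, 2 * K + 4):   # a few steady-state cycles
    fwd, bwd = schedule(t)
    # FS_{K+1} and BKS_1 handle the same mini-batch on the same
    # accelerator; all 2K + 1 accelerators stay busy every cycle.
    print(f"cycle {t}: {fwd} | {bwd}")
```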
The table indicates that the quality of the models produced by pipelined training is comparable to that achieved by the simulated pipelining on Caffe. The table further shows that speedup exists for all networks. Indeed, for ResNet-362, the speedup is 1.82X. This is equivalent to about 90% utilization for each GPU. The table also shows that as the networks get larger, the speedup improves. This is because with larger networks, the ratio of computation to communication overhead is higher, leading to better speedups.\nMoreover, we combine the 4-stage pipelined training described above with non-pipelined training to demonstrate the performance of hybrid training. We train every ResNet using pipelined training for 100 epochs and follow it up with 100 epochs of non-pipelined training. Because the maximum speedup for the pipelined training is 2 and only half the training epochs are accelerated, the maximum speedup for this hybrid training is $s = t/(t/2 + t/4) = 1.33$, where $t$ is the training time of non-pipelined training. Table 5 shows the inference accuracies and speedup of the hybrid training for each ResNet and validates that hybrid training can produce model quality that is comparable to the baseline non-pipelined training while speeding up the training process. As network size grows, the speedup reaches 1.29X, approaching the theoretical limit of 1.33X." }, { "heading": "6.6 MEMORY USAGE", "text": "Pipelined training requires the saving of intermediate activations, as described earlier in Section 3, leading to an increase in memory footprint. This increase in memory is a function not only of the placement of the pipeline registers, but also of the network architecture and the number of inputs in a mini-batch (batch size). We calculate the memory usage of the 4-stage pipelined ResNet training above to show that this increase is modest for our pipelining scheme. Specifically, we use torchsummary in PyTorch to report memory usage for the weights and activations of a network and calculate the additional memory required by the additional copies of activations. The results are shown in Table 6. Assuming a batch size of 128, the percentage increase in size is close to 60%, except for ResNet-20." }, { "heading": "6.7 COMPARISON TO EXISTING WORK", "text": "We compare our pipelined training scheme with two key existing systems: PipeDream (Harlap et al., 2018) and GPipe (Huang et al., 2018). We do so on three aspects: the pipelining scheme, performance and memory usage. We believe that PipeDream and GPipe are representative of existing key approaches that implement pipelined training, including Decoupled Backpropagation (DDG) (Huo et al., 2018b) and Feature Replay (FR) (Huo et al., 2018a) (discussed in Section 7).\nOur pipelining scheme is simpler than that of PipeDream and GPipe in that we require neither weight stashing nor the division of mini-batches into micro-batches. This leads to less communication overhead, and is amenable to rapid realization in machine learning frameworks such as PyTorch or in actual hardware such as Xilinx's xDNN FPGA accelerators (Xilinx, 2019).\nLike PipeDream, our pipelining scheme eliminates the bubbles that exist in the pipeline, leading to better performance. For example, we obtain a speedup of 1.7X for ResNet-110 using 2 GPUs, in contrast to GPipe, which obtains a speedup of roughly 1.3X for ResNet-101 using 2 TPUs. We also obtain similar performance compared to PipeDream for similar networks (the memory accounting used in Section 6.6 and in the comparison below is sketched next).
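The memory accounting can be sketched as follows. All per-stage byte counts are hypothetical placeholders (in practice they are read off torchsummary), and the weight-stashing term is a simplifying assumption for illustration, not PipeDream's exact policy.

```python
# Minimal sketch of the activation-memory overhead estimate.

def extra_activation_mem(act_bytes_per_stage, k):
    """Our scheme: forward stage FS_i (1-indexed) keeps 2(K - i + 1)
    cycles' worth of its activations; the last stage keeps none."""
    return sum(2 * (k - i + 1) * act
               for i, act in enumerate(act_bytes_per_stage, start=1))

def weight_stashing_mem(weight_bytes, num_stages):
    """Rough weight-stashing overhead: assume one stashed weight copy
    per additional in-flight mini-batch (illustrative assumption)."""
    return (num_stages - 1) * weight_bytes

if __name__ == "__main__":
    k = 1                               # one register pair (4 stages)
    acts = [40e6, 10e6]                 # hypothetical bytes per forward stage
    print(extra_activation_mem(acts, k) / 1e6, "MB extra (ours)")
    print(weight_stashing_mem(4 * 1.1e6, 4) / 1e6, "MB stashed weights")
```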
As the number of pipeline stages grows, pipeline bubbles have a greater negative effect on performance: in GPipe, the bubble overhead of a 4-partition pipelined ResNet-101 on 4 TPUs doubles compared to that of the 2-partition pipelined ResNet-101.\nOur scheme uses less memory compared to PipeDream, although it introduces more memory overhead compared to GPipe. PipeDream saves intermediate activations during training, as we do. However, it also saves multiple copies of a network's weights for weight stashing. The memory footprint increase due to this weight stashing depends on the network architecture, including the number of weights and activations, as well as on the size of the mini-batch. For example, for VGG-16 trained on CIFAR-10 with a mini-batch size of 128 using 4-stage pipelined training, we estimate our pipelining methodology to use 49% less memory compared to PipeDream. Similarly, for VGG-16 trained on ImageNet (Deng et al., 2009) and a mini-batch size of 32, our scheme uses 29% less memory. We estimate the memory increase due to weight stashing also using torchsummary." }, { "heading": "7 RELATED WORK", "text": "There has been considerable work that explores parallelism in the training of deep neural networks. There are several approaches to exploiting parallelism.\nOne approach is to exploit data parallelism (Chen et al., 2016; Cui et al., 2016; Goyal et al., 2017; Zhang et al., 2017; Dean et al., 2012; Wang et al., 2019), in which each accelerator obtains a full copy of the model and processes different mini-batches of training data simultaneously. At the end of each training iteration, the gradients produced by all accelerators are aggregated and used to update the weights for all copies of the model, synchronously (Chen et al., 2016; Goyal et al., 2017) or asynchronously (Dean et al., 2012). A centralized parameter server is usually used to facilitate data communication (Cui et al., 2016; Dean et al., 2012). Although the training is performed in parallel, the communication overhead can be significant (Wang et al., 2019).\nA second approach is to exploit model parallelism (Kim et al., 2016; Lee et al., 2014; Chilimbi et al., 2014; Dean et al., 2012; Low et al., 2012). In this approach, a model is partitioned onto different accelerators (Kim et al., 2016; Lee et al., 2014; Low et al., 2012; Chilimbi et al., 2014; Dean et al., 2012). Each accelerator is only responsible for updating the weights for the portion of the model assigned to it. This approach is often used when a model is large and cannot fit into the memory of a single accelerator. However, because of the data dependences described in Section 2, only one accelerator is active during the training process, resulting in under-utilization of accelerator resources. Moreover, inter-layer activations and gradients across two consecutive stages need to be communicated during training, adding more overhead to the entire process.\nPipelined parallelism addresses the under-utilization of accelerator resources for the training of large models. There have been a few studies that explore pipelined parallelism (Petrowski et al., 1993; Chen et al., 2012; Mostafa et al., 2017; Harlap et al., 2018; Huang et al., 2018; Huo et al., 2018b;a), which we review in this section.\nPipeDream (Harlap et al., 2018) implements pipelined training for large neural networks such as VGG-16, Inception-v3 and S2VT across multiple GPUs.
However, in their implementation, they limit the use of stale weights by weight stashing, i.e., keeping multiple versions of the network parameters (weights) during training. This increases the memory footprint of training. In contrast, we do not maintain multiple copies of weights during training, thereby reducing the memory footprint of pipelined training.\nGPipe (Huang et al., 2018) implements a library in Tensorflow to enable pipelined parallelism for the training of large neural networks. GPipe pipelines micro-batches within each mini-batch to keep the gradients consistently accumulated. This eliminates the use of stale weights during training, but it does so at the expense of “pipeline bubbles” at steady state. GPipe utilizes these bubbles to reduce the memory footprint by re-computing forward activations instead of storing them. In contrast, our work has no pipeline bubbles and thus dedicates computing resources to computing the forward pass and backward pass only once during each training iteration.\nHuo et al. (Huo et al., 2018b) implement decoupled backpropagation (DDG) using delayed gradient updates. They show that DDG guarantees convergence through a rigorous convergence analysis. Similar to PipeDream, DDG uses multiple copies of the weights and thus increases the memory footprint. Further, DDG pipelines only the backward pass of training, leaving the forward pass un-pipelined. Huo et al. (Huo et al., 2018a) follow up by proposing feature replay (FR), which re-computes activations during the backward pass, similar to GPipe, resulting in a smaller memory footprint and improved inference accuracy compared to DDG. In contrast, we pipeline both the forward and backward passes without maintaining multiple copies of weights or re-computing forward activations during the backward pass.\nThus, in summary, our work contrasts with the above work on pipelined training in that we use pipelining with unconstrained stale weights, resulting in full pipeline utilization with a modest increase in memory usage. We extend earlier work by studying the impact of weight staleness on the quality of the model. We show that it is effective to use stale weights if the pipelining is in early layers, which is where the bulk of computations exists. Further, we also extend earlier work through hybrid training, which combines both pipelined and non-pipelined training. We compare the performance and memory footprint increase of our scheme to existing work in Section 6.7." }, { "heading": "8 CONCLUDING REMARKS", "text": "We evaluate pipelined execution of backpropagation for the training of CNNs in a way that fully utilizes accelerators, achieving a speedup of 1.82X on a 2-GPU system, and does not significantly increase memory usage, unlike previous work. We show that pipelined training with stale weights does converge. Further, we show that the inference accuracies of the resulting models are comparable to those of models obtained with traditional backpropagation, but only when pipelining is implemented in the early layers of the network, with the inference accuracy drop within 1.45% for 4-stage pipelined training, except for AlexNet. This does not limit the benefit of pipelining since the bulk of computations is in the early convolutional layers.
When pipelining is implemented deeper in the network, the inference accuracies do drop significantly, but we can compensate for this drop by combining pipelined with non-pipelined training, albeit with lower performance gains; the resulting ResNet models are on average 0.19% better than the baseline in inference accuracy.\nThis work can be extended in a number of directions. One direction is to evaluate the approach with a larger number of accelerators, since pipelined parallelism is known to scale naturally with the number of accelerators. Another is to evaluate the approach on larger datasets, such as ImageNet. Finally, pipelined parallelism lends itself to hardware implementation. Thus, another direction for future work is to evaluate pipelined parallelism using Field Programmable Gate Array (FPGA) or ASIC accelerators." }, { "heading": "A TRAINING HYPERPARAMETERS FOR SIMULATED PIPELINED TRAINING", "text": "LeNet-5 is trained on the MNIST dataset with Stochastic Gradient Descent (SGD) using a learning rate of 0.01 with an inverse learning-rate policy, a momentum of 0.9, a weight decay of 0.0005 and a mini-batch size of 100, for 30,000 iterations. The progression of inference accuracy during training is recorded with 300 tests.\nAlexNet is trained on the CIFAR-10 dataset with SGD with Nesterov momentum using a learning rate of 0.001 that is decreased by 10x twice during training, a momentum of 0.9, a weight decay of 0.004 and a mini-batch size of 100, for 250,000 iterations. One test is performed every epoch to record the progression of inference accuracy.\nVGG-16 is trained on the CIFAR-10 dataset with SGD with Nesterov momentum using a learning rate starting at 0.1 that is decreased by half every 50 epochs during training, a momentum of 0.9, a weight decay of 0.0005 and a mini-batch size of 100, for 250,000 iterations. Since it is relatively more difficult to train VGG-16 compared to the other models, batch normalization and dropout are used during training throughout the network. One test is performed every epoch to record the progression of inference accuracy.\nResNet is trained on the CIFAR-10 dataset with SGD using a learning rate starting at 0.1 and 0.01 for non-pipelined and pipelined training respectively, which is decreased by 10x twice during training, a momentum of 0.9, a weight decay of 0.0001 and a mini-batch size of 128, for 100,000 iterations. Batch normalization is used during training throughout the network. One test is performed every 100 iterations to record the progression of inference accuracy." }, { "heading": "B TRAINING HYPERPARAMETERS FOR ACTUAL PIPELINED TRAINING", "text": "For the baseline non-pipelined training, ResNet-20/56/110/224/362 is trained on the CIFAR-10 dataset for 200 epochs with SGD using a learning rate of 0.1 that is decreased by a factor of 10 twice (at epochs 100 and 150), a momentum of 0.9, a weight decay of 0.0001 and a mini-batch size of 128. Batch normalization is used during training throughout the network. This set of hyperparameters can be found at https://github.com/akamaster/pytorch_resnet_cifar10.\nFor the 4-stage pipelined training, the hyperparameters are the same as the non-pipelined baseline, except for the $BKS_2$ learning rate. Table 7 shows this learning rate for all the ResNets evaluated.
2019
null
SP:00b48a43a5037915e21ddb2f0941cdd26a69d44d
[ "This paper introduces Conditionally Reversible Network (CrevNet) that consists of the invertible autoencoder and a reversible predictive module (RPM). The two-way autoencoder is an invertible network that preserves the volume with no information loss while reducing memory consumption by using bijective downsampling. The RPM is a recurrent extension of two-way autoencoder that provides the reversiblity in temporal domain. The experiments on Moving MNIST, Traffic4cast, KITTI, and 2D object detection on KITTI show the improvement compare to other state-of-the-art models. ", "In this paper, the authors propose a new method of self-supervised feature learning from videos based on learning future frame prediction. The idea is similar as BERT like NLP tasks, but for videos, the computational cost and memory cost could be very large. To solve this problem efficiently, the authors adopt several existing techniques such as pixel shuffle layer, 3D-CNN, ConvRNN and Attention module to efficiently and effectively capture video information. Experiments on several datasets are conducted to show the effectiveness of the proposed method." ]
Applying resolution-preserving blocks is a common practice to maximize information preservation in video prediction, yet their high memory consumption greatly limits their application scenarios. We propose CrevNet, a Conditionally Reversible Network that uses reversible architectures to build a bijective two-way autoencoder and its complementary recurrent predictor. Our model enjoys the theoretically guaranteed property of no information loss during feature extraction, much lower memory consumption, and improved computational efficiency. The lightweight nature of our model enables us to incorporate 3D convolutions without concern for memory bottlenecks, enhancing the model's ability to capture both short-term and long-term temporal dependencies. Our proposed approach achieves state-of-the-art results on the Moving MNIST, Traffic4cast and KITTI datasets. We further demonstrate the transferability of our self-supervised learning method by exploiting its learnt features for object detection on KITTI. Our competitive results indicate the potential of using CrevNet as a generative pre-training strategy to guide downstream tasks.
[ { "affiliations": [], "name": "Wei Yu" }, { "affiliations": [], "name": "Yichao Lu" }, { "affiliations": [], "name": "Steve Easterbrook" }, { "affiliations": [], "name": "Sanja Fidler" } ]
[ { "authors": [ "Dario Amodei", "Sundaram Ananthanarayanan", "Rishita Anubhai", "Jingliang Bai", "Eric Battenberg", "Carl Case", "Jared Casper", "Bryan Catanzaro", "Qiang Cheng", "Guoliang Chen" ], "title": "Deep speech 2: Endto-end speech recognition in english and mandarin", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Wonmin Byeon", "Qin Wang", "Rupesh Kumar Srivastava", "Petros Koumoutsakos" ], "title": "Contextvp: Fully context-aware video prediction", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Joao Carreira", "Andrew Zisserman" ], "title": "Quo vadis, action recognition? a new model and the kinetics dataset", "venue": "In proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Emily Denton", "Rob Fergus" ], "title": "Stochastic video generation with a learned prior", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Emily L Denton" ], "title": "Unsupervised learning of disentangled representations from video", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Laurent Dinh", "David Krueger", "Yoshua Bengio" ], "title": "Nice: Non-linear independent components estimation", "venue": "arXiv preprint arXiv:1410.8516,", "year": 2014 }, { "authors": [ "Piotr Dollár", "Christian Wojek", "Bernt Schiele", "Pietro Perona" ], "title": "Pedestrian detection: A benchmark", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2009 }, { "authors": [ "Andreas Geiger", "Philip Lenz", "Raquel Urtasun" ], "title": "Are we ready for autonomous driving? the kitti vision benchmark suite", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2012 }, { "authors": [ "Aidan N Gomez", "Mengye Ren", "Raquel Urtasun", "Roger B Grosse" ], "title": "The reversible residual network: Backpropagation without storing activations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Jörn-Henrik Jacobsen", "Arnold Smeulders", "Edouard Oyallon" ], "title": "i-revnet: Deep invertible networks", "venue": "arXiv preprint arXiv:1802.07088,", "year": 2018 }, { "authors": [ "Nal Kalchbrenner", "Aaron van den Oord", "Karen Simonyan", "Ivo Danihelka", "Oriol Vinyals", "Alex Graves", "Koray Kavukcuoglu" ], "title": "Video pixel networks", "venue": "arXiv preprint arXiv:1610.00527,", "year": 2016 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. 
In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Manoj Kumar", "Mohammad Babaeizadeh", "Dumitru Erhan", "Chelsea Finn", "Sergey Levine", "Laurent Dinh", "Durk Kingma" ], "title": "Videoflow: A flow-based generative model for video", "venue": null, "year": 2019 }, { "authors": [ "Yong-Hoon Kwon", "Min-Gyu Park" ], "title": "Predicting future frames using retrospective cycle gan", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Alex X Lee", "Richard Zhang", "Frederik Ebert", "Pieter Abbeel", "Chelsea Finn", "Sergey Levine" ], "title": "Stochastic adversarial video prediction", "venue": "arXiv preprint arXiv:1804.01523,", "year": 2018 }, { "authors": [ "Xiaodan Liang", "Lisa Lee", "Wei Dai", "Eric P Xing" ], "title": "Dual motion gan for future-flow embedded video prediction", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Kun Liu", "Wu Liu", "Chuang Gan", "Mingkui Tan", "Huadong Ma" ], "title": "T-c3d: Temporal convolutional 3d network for real-time action recognition", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Wei Liu", "Dragomir Anguelov", "Dumitru Erhan", "Christian Szegedy", "Scott Reed", "Cheng-Yang Fu", "Alexander C Berg" ], "title": "Ssd: Single shot multibox detector", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "William Lotter", "Gabriel Kreiman", "David Cox" ], "title": "Deep predictive coding networks for video prediction and unsupervised learning", "venue": "arXiv preprint arXiv:1605.08104,", "year": 2016 }, { "authors": [ "Marc Oliu", "Javier Selva", "Sergio Escalera" ], "title": "Folded recurrent neural networks for future video prediction", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Jimmy Ren", "Xiaohao Chen", "Jianbo Liu", "Wenxiu Sun", "Jiahao Pang", "Qiong Yan", "Yu-Wing Tai", "Li Xu" ], "title": "Accurate single stage detector using recurrent rolling convolution", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Shaoqing Ren", "Kaiming He", "Ross Girshick", "Jian Sun" ], "title": "Faster r-cnn: Towards real-time object detection with region proposal networks.
In Advances in neural information processing", "venue": null, "year": 2015 }, { "authors": [ "Olaf Ronneberger", "Philipp Fischer", "Thomas Brox" ], "title": "U-net: Convolutional networks for biomedical image segmentation", "venue": "In International Conference on Medical image computing and computer-assisted intervention,", "year": 2015 }, { "authors": [ "Wenzhe Shi", "Jose Caballero", "Ferenc Huszár", "Johannes Totz", "Andrew P Aitken", "Rob Bishop", "Daniel Rueckert", "Zehan Wang" ], "title": "Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Xingjian Shi", "Zhourong Chen", "Hao Wang", "Dit-Yan Yeung", "Wai-Kin Wong", "Wang-chun Woo" ], "title": "Convolutional lstm network: A machine learning approach for precipitation nowcasting", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Nitish Srivastava", "Elman Mansimov", "Ruslan Salakhutdinov" ], "title": "Unsupervised learning of video representations using lstms", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "Ruben Villegas", "Jimei Yang", "Seunghoon Hong", "Xunyu Lin", "Honglak Lee" ], "title": "Decomposing motion and content for natural video sequence prediction", "venue": "arXiv preprint arXiv:1706.08033,", "year": 2017 }, { "authors": [ "Yunbo Wang", "Mingsheng Long", "Jianmin Wang", "Zhifeng Gao", "S Yu Philip" ], "title": "Predrnn: Recurrent neural networks for predictive learning using spatiotemporal lstms", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Yunbo Wang", "Zhifeng Gao", "Mingsheng Long", "Jianmin Wang", "Philip S Yu" ], "title": "PredRNN++: Towards a resolution of the deep-in-time dilemma in spatiotemporal predictive learning", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Yunbo Wang", "Lu Jiang", "Ming-Hsuan Yang", "Li-Jia Li", "Mingsheng Long", "Li Fei-Fei" ], "title": "Eidetic 3d LSTM: A model for video prediction and beyond", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Zhou Wang", "Alan C Bovik", "Hamid R Sheikh", "Eero P Simoncelli" ], "title": "Image quality assessment: from error visibility to structural similarity", "venue": "IEEE transactions on image processing,", "year": 2004 }, { "authors": [ "Bichen Wu", "Forrest Iandola", "Peter H Jin", "Kurt Keutzer" ], "title": "Squeezedet: Unified, small, low power fully convolutional neural networks for real-time object detection for autonomous driving", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep learning has enjoyed tremendous success in recent years due to its ability to capture complex dependencies and non-linearities in large datasets (Krizhevsky et al. (2012); He et al. (2016); Gomez et al. (2017)). Excellent performance has been achieved on a wide range of supervised machine learning tasks, ranging from image classification (He et al. (2016)) and object detection (Ren et al. (2015)) to speech recognition (Amodei et al. (2016)). Despite the significant breakthrough in supervised learning, the potential of applying deep architectures to unsupervised learning problems remains largely unexplored. Lately there has been a surge of interest in the task of video prediction, i.e., to predict future frames of a video sequence (Wang et al. (2017; 2018); Denton et al. (2017); Denton & Fergus (2018); Villegas et al. (2017); Lee et al. (2018)). The significance of video prediction primarily lies in its potential of discovering dynamics in the physical world. The self-supervised nature of video prediction aligns well with how humans learn, without requiring large amounts of labeled data. In addition, videos can provide an abundant and virtually unlimited source of visual information. This allows video prediction models to serve as a generative pre-training strategy of feature representation learning for a variety of downstream supervised tasks.\nTo date, most of the existing models for video prediction employ a hybrid of convolutional and recurrent layers as the underlying architecture (Wang et al. (2017); Shi et al. (2015); Lotter et al. (2016)). Such architectural design enables the model to simultaneously exploit the ability of convolutional units to model spatial relationships and the potential of recurrent units to capture temporal dependencies. Despite their prevalence in the literature, classical video prediction architectures suffer from two major limitations. Firstly, in dense prediction tasks such as video prediction, models are required to make pixel-wise predictions, which emphasizes the demand for the preservation of information through layers. Prior works attempt to address such demand through the extensive use of resolution-preserving blocks (Wang et al. (2017; 2018); Kalchbrenner et al. (2016)). Nevertheless, these resolution-preserving blocks are not guaranteed to preserve all the relevant information, and they greatly increase the memory consumption and computational cost of the models. The second\ndrawback of existing video prediction models is that they cannot efficiently take advantage of 3D convolutions, as that would make these already cumbersome architectures even larger. 3D convolutions have been shown to be a very effective alternative to RNNs to capture temporal relations in a variety of video tasks (Liu et al. (2018); Carreira & Zisserman (2017)), and thus desirable to exploit.\nRecently, reversible architectures (Dinh et al. (2014); Gomez et al. (2017); Jacobsen et al. (2018)) have attracted attention due to their light memory demand and their information preserving property by design. However, the effectiveness of reversible models remains greatly unexplored in the video literature. In this paper, we introduce a novel, conditionally reversible video prediction model, CrevNet, in the sense that when conditioned on previous hidden states, it can exactly reconstruct the input from its predictions. 
The contributions of this work can be summarized as follows:\n• We introduce a two-way autoencoder that uses the forward and backward passes of an invertible network as encoder and decoder (Fig 1). The volume-preserving two-way autoencoder not only greatly reduces the memory demand and computational cost, but also enjoys the theoretically guaranteed property of no information loss. The lightweight nature of our model enables us to incorporate 3D convolutions without concern for memory bottlenecks. • We propose the reversible predictive module (RPM), as illustrated in Fig 2b, which extends the reversibility from the spatial to the temporal domain. RPM, together with the two-way autoencoder, provides a conditionally reversible architecture (CrevNet) for spatiotemporal learning. CrevNet achieves state-of-the-art results on Moving MNIST, Traffic4cast and KITTI. • We evaluate the effectiveness of features learnt from self-supervision by adapting our CrevNet for object detection on KITTI. Our competitive results indicate the potential of using CrevNet as a generative pre-training strategy to guide downstream CV tasks." }, { "heading": "2 APPROACH", "text": "We first outline the general pipeline of our method. Our CrevNet consists of two subnetworks: an autoencoder network with an encoder $\mathcal{E}$ and decoder $\mathcal{D}$, and a recurrent predictor $\mathcal{P}$ bridging encoder and decoder. Let $x_t \in \mathbb{R}^{w \times h \times c}$ represent the $t$-th frame in video $x$, where $w$, $h$, and $c$ denote its width, height, and the number of channels. Given $x_{0:t-1}$, the model predicts the next frame $\hat{x}_t$ as follows:\n$\hat{x}_t = \mathcal{D}(\mathcal{P}(\mathcal{E}(x_{t-1}) \mid x_{0:t-2}))$ (1)\nIn the case of 3D convolution, $x_t \in \mathbb{R}^{k \times w \times h \times c}$ denotes the short video clip from $t$ to $t+k-1$ instead of a single frame at timestep $t$, where $k$ is the temporal dimension of the input or output. During the multi-frame generation process without access to the ground truth frames, the model uses its previous predictions instead." }, { "heading": "2.1 THE INVERTIBLE TWO-WAY AUTOENCODER", "text": "We propose a bijective two-way autoencoder based on the additive coupling layer introduced in NICE (Dinh et al. (2014)). We begin by describing the building block of the two-way autoencoder (Fig 2a). Formally, the input $x$ is first reshaped and split channelwise into two groups, denoted as $x_1$ and $x_2$. During the forward pass of each building block, one group, e.g. $x_1$, passes through several convolutions and activations and is then added to the other group, $x_2$, like a residual block:\n$\hat{x}_2 = x_2 + \mathcal{F}_1(x_1), \qquad \hat{x}_1 = x_1 + \mathcal{F}_2(\hat{x}_2)$ (2)\nwhere $\mathcal{F}$ is a composite non-linear operator consisting of convolutions and activations, and $\hat{x}_1$ and $\hat{x}_2$ are the updated $x_1$ and $x_2$. Note that $x_1$ and $x_2$ can be simply recovered from $\hat{x}_2$ and $\hat{x}_1$ by the inverse computation (Fig 2c) as follows:\n$x_1 = \hat{x}_1 - \mathcal{F}_2(\hat{x}_2), \qquad x_2 = \hat{x}_2 - \mathcal{F}_1(x_1)$ (3)\nMultiple building blocks are stacked in an alternating fashion between $x_1$ and $x_2$ to construct a two-way autoencoder, as shown in Fig 2a. A series of the forward and inverse computations builds a one-to-one and onto, i.e., bijective, mapping between the input and features. Such invertibility ensures that there is no information loss during feature extraction, which is presumably more favorable for video prediction since the model is expected to restore the future frames with fine-grained details. To enable the invertibility of the entire autoencoder, our two-way autoencoder uses a bijective downsampling, the pixel shuffle layer (Shi et al. (2016)), which changes the shape of a feature from $(w, h, c)$ to $(w/n, h/n, c \times n^2)$ (a minimal sketch of one coupling block is given below).
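The following is a minimal PyTorch sketch of one additive coupling block implementing Eqs. (2)-(3). The choice of $\mathcal{F}$ as a small convolution stack, the channel counts, and the use of pixel_unshuffle for the bijective downsampling are illustrative assumptions rather than the exact CrevNet configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as Fn

def conv_stack(c):
    # One possible F: channel-preserving convolutions and activations.
    return nn.Sequential(
        nn.Conv2d(c, c, 3, padding=1), nn.ReLU(),
        nn.Conv2d(c, c, 3, padding=1),
    )

class CouplingBlock(nn.Module):
    def __init__(self, c):
        super().__init__()
        self.f1, self.f2 = conv_stack(c), conv_stack(c)

    def forward(self, x1, x2):            # Eq. (2)
        x2_hat = x2 + self.f1(x1)
        x1_hat = x1 + self.f2(x2_hat)
        return x1_hat, x2_hat

    def inverse(self, x1_hat, x2_hat):    # Eq. (3)
        x1 = x1_hat - self.f2(x2_hat)
        x2 = x2_hat - self.f1(x1)
        return x1, x2

if __name__ == "__main__":
    # Bijective downsampling: (1,1,64,64) -> (1,16,16,16), volume-preserving.
    # pixel_unshuffle requires PyTorch >= 1.8.
    x = Fn.pixel_unshuffle(torch.randn(1, 1, 64, 64), 4)
    x1, x2 = x.chunk(2, dim=1)            # channel-wise split
    block = CouplingBlock(x1.shape[1])
    y1, y2 = block(x1, x2)
    r1, r2 = block.inverse(y1, y2)
    print(torch.allclose(r1, x1, atol=1e-5), torch.allclose(r2, x2, atol=1e-5))
```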
The resulting volume-preserving architecture can greatly reduce the memory consumption compared with the existing resolution-preserving methods.\nWe further argue that for generative tasks, e.g. video prediction, we can effectively utilize a single two-way autoencoder, using its forward and backward passes as the encoder and the decoder, respectively. The predicted frame $\hat{x}_t$ is thus given by\n$\hat{x}_t = \mathcal{E}^{-1}(\mathcal{P}(\mathcal{E}(x_{t-1}) \mid x_{0:t-2}))$ (4)\nwhere $\mathcal{E}^{-1}$ is the backward pass of $\mathcal{E}$. Our rationale is that such a setting would not only reduce the number of parameters in the model, but also encourage the model to explore the shared feature space between the inputs and the targets. As a result, our method does not require any form of information sharing, e.g. skip connections, between the encoder and decoder. In addition, our two-way autoencoder can enjoy a lower computational cost in the multi-frame prediction phase, where the encoding pass is no longer needed and the predictor directly takes the output from the previous timestep as input, as shown in Fig 1, since $\mathcal{E} \circ \mathcal{E}^{-1}$ is an identity mapping." }, { "heading": "2.2 REVERSIBLE PREDICTIVE MODULE", "text": "In this section, we describe the second part of our video prediction model, the predictor $\mathcal{P}$, which computes dependencies along both the space and time dimensions. Although the traditional stacked-ConvRNN architecture is the most straightforward choice of predictor, we find through experiments that it fails to establish a consistent temporal dependency when equipped with our two-way autoencoder. Therefore, we propose a novel reversible predictive module (RPM), which can be regarded as a recurrent extension of the two-way autoencoder. In the RPM, we substitute all standard convolutions with layers from the ConvRNN family (e.g. ConvLSTM or spatiotemporal LSTM) and introduce a soft attention (weighting gates) mechanism to form a weighted sum of the two groups instead of the direct addition. The main operations of the RPM used in this paper are given as follows:\n$h_t^1 = \mathrm{ConvRNN}(x_t^1, h_{t-1}^1)$ (ConvRNN)\n$g_t = \phi(W_2 * \mathrm{ReLU}(W_1 * h_t^1 + b_1) + b_2)$ (attention module)\n$\hat{x}_t^2 = (1 - g_t) \odot x_t^2 + g_t \odot h_t^1$ (weighted sum)\nwhere $x_t^1$ and $x_t^2$ denote the two groups of features at timestep $t$, $h_t^1$ denotes the hidden state of the ConvRNN layer, $\phi$ is the sigmoid activation, $*$ is the standard convolution operator and $\odot$ is the Hadamard product. The architecture of the reversible predictive module is also shown in Fig 2b. RPM adopts a similar architectural design as the two-way autoencoder to ensure a pixel-wise alignment between the input and the output, i.e., each position in the features can be traced back to a certain pixel, which makes it compatible with our two-way autoencoder. It also mitigates vanishing gradient issues across stacked layers since the coupling layer provides a nice property w.r.t. the Jacobian (Dinh et al. (2014)). In addition, the attention mechanism in the RPM enables the model to focus on objects in motion instead of the background, which further improves the video prediction quality. Similarly, multiple RPMs alternate between the two groups to form a predictor. We call this predictor conditionally reversible since, given $h_{t-1}$, we are able to reconstruct $x_{t-1}$ from $\hat{x}_t$ if there are no numerical errors:\n$x_{t-1} = \mathcal{E}^{-1}(\mathcal{P}^{-1}(\mathcal{E}(\hat{x}_t) \mid h_{t-1}))$ (5)\nwhere $\mathcal{P}^{-1}$ is the inverse computation of the predictor $\mathcal{P}$. We name the video prediction model that uses the two-way autoencoder as its backbone and RPMs as its predictor CrevNet. Another key factor of RPM is the choice of ConvRNN.
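Before fixing that choice, one RPM step can be sketched generically over any cell with the interface h = cell(x, h). The toy cell and channel sizes below are illustrative placeholders standing in for ConvLSTM or ST-LSTM, not the actual CrevNet cells.

```python
import torch
import torch.nn as nn

class ToyConvRNNCell(nn.Module):
    """Placeholder cell with the h_t = cell(x_t, h_{t-1}) interface."""
    def __init__(self, c):
        super().__init__()
        self.wx = nn.Conv2d(c, c, 3, padding=1)
        self.wh = nn.Conv2d(c, c, 3, padding=1)

    def forward(self, x, h):
        return torch.tanh(self.wx(x) + self.wh(h))

class RPM(nn.Module):
    def __init__(self, c, cell):
        super().__init__()
        self.cell = cell
        self.w1 = nn.Conv2d(c, c, 3, padding=1)   # attention convs W1, W2
        self.w2 = nn.Conv2d(c, c, 3, padding=1)

    def forward(self, x1, x2, h1):
        h1 = self.cell(x1, h1)                               # ConvRNN
        g = torch.sigmoid(self.w2(torch.relu(self.w1(h1))))  # attention
        x2_hat = (1 - g) * x2 + g * h1                       # weighted sum
        return x2_hat, h1

if __name__ == "__main__":
    c = 8
    rpm = RPM(c, ToyConvRNNCell(c))
    x1, x2, h = (torch.randn(1, c, 16, 16) for _ in range(3))
    x2_hat, h = rpm(x1, x2, h)
    print(x2_hat.shape)   # shape-preserving, as required for reversibility
```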
In this paper, we mainly employ ConvLSTM (Shi et al. (2015)) and spatiotemporal LSTM (ST-LSTM, Wang et al. (2017)) to enable a fair comparison with baselines." }, { "heading": "2.3 3D CONVOLUTIONS", "text": "3D convolutions were proposed to address the shortcomings of standard 2D convolutions. The major difference between 2D-CNNs and 3D-CNNs is that at each time step 2D-CNNs take as input one video frame, while 3D-CNNs read in and output a short video clip containing $k$ continuous video frames. By applying convolutions on the temporal dimension along with the spatial dimension, models equipped with 3D convolution filters can not only extract representative spatiotemporal features, but also learn to produce a consistent video clip at each generation, which further improves the quality of long-term prediction. In some cases, e.g., when sequences are too short, we instead use 2 consecutive frames stacked in the channel dimension as input at each timestep to assemble a valid warm-up sequence for the ConvRNN." }, { "heading": "3 EXPERIMENTS", "text": "" }, { "heading": "3.1 LONG-TERM PREDICTION—MOVING MNIST", "text": "Moving MNIST (Srivastava et al. (2015)) is a synthetically generated dataset that contains an infinite number of sequences of length 20. Each sequence shows how 2 digits move at a constant speed and bounce inside a 64 × 64 frame, where each handwritten digit is randomly sampled from the MNIST dataset. By assigning different initial locations and velocities to each digit, it is possible to generate an unlimited number of sequences, thus enabling us to accurately evaluate the performance of each model without the concern of data insufficiency issues. In the default setting, models are trained to predict 10 future frames after observing 10 prior frames in the sequence. Although the dynamics of Moving MNIST seem simple at first glance, it is quite hard to generate consistent future frames in the task of long-term prediction, as digits can bounce or occlude each other frequently.\nDatasets and Setup: The general architecture of CrevNet used on Moving MNIST is composed of a 36-layer two-way autoencoder and 8 RPMs. All variants of CrevNet are trained using the Adam optimizer with a starting learning rate of $5 \times 10^{-4}$ to minimize MSE. The training process is stopped after 300,000 iterations with a batch size of 16 and evaluated with a fixed test set containing 5,000 sequences. To ensure that all samples in the test set are unseen by the model, digits in the training set and the testing set are separately sampled from two mutually exclusive subsets of MNIST.\nWe compare CrevNet to six popular benchmark models from the literature: (i) ConvLSTM (Shi et al. (2015)), (ii) FRNN (Oliu et al. (2018)), (iii) VPN (Kalchbrenner et al. (2016)), (iv) PredRNN (Wang et al. (2017)), (v) PredRNN++ (Wang et al. (2018)), and (vi) E3D-LSTM (Wang et al. (2019)). All baselines are implemented and optimized by following their corresponding protocols. To test our model in a more challenging setting, we also extend Moving MNIST to a 3-digit version where digits are more likely to occlude each other.\nResults: The performance of each model in terms of per-frame MSE and the Structural Similarity Index Measure (SSIM) (Wang et al. (2004)) is presented in Table 1. CrevNet outperforms all previous methods by a wide margin on both metrics, while the memory consumption of all CrevNet variants is significantly lower than that of other baselines (the per-frame metrics used here are sketched below).
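For reference, the per-frame evaluation used throughout Section 3 can be sketched as follows; this is a minimal numpy sketch of per-frame MSE and PSNR under one common convention (MSE summed over pixels per frame, inputs in [0, 1]). SSIM is typically computed with a library implementation, e.g. skimage.metrics.structural_similarity, and is omitted here.

```python
import numpy as np

def per_frame_mse(pred, target):
    """pred, target: (T, H, W, C) arrays with values in [0, 1]."""
    return ((pred - target) ** 2).reshape(pred.shape[0], -1).sum(axis=1)

def psnr(pred, target, max_val=1.0):
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

if __name__ == "__main__":
    t = np.random.rand(10, 64, 64, 1)                       # toy targets
    p = np.clip(t + 0.01 * np.random.randn(*t.shape), 0, 1)  # toy predictions
    print(per_frame_mse(p, t).mean(), psnr(p, t))
```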
In particular, CrevNet with ConvLSTM uses only 130 MB of memory per sample and is still capable of yielding results better than any baseline.\nTo analyze the contribution of each module, we conduct an ablation study on both ConvLSTM and ST-LSTM with respect to 3D convolution, the two-way autoencoder and the RPM, and summarize the results in Table 2. Note that we do not include the quantitative results of the combination of the two-way autoencoder and a stacked-ConvRNN predictor because it fails to produce consistent long-term generations, and we choose UNet (Ronneberger et al. (2015)) as an alternative to our two-way autoencoder. We can observe a significant improvement over ConvLSTM after we embed it into our CrevNet framework, indicating the effectiveness of reversible architectures. Also, integrating 3D convolution consistently enhances the performance of all architectures. To further show the superior performance of CrevNet, we evaluate it on a harder 3-digit setting. Results are shown in the right column of Table 1. Compared with the 2-digit setting, all models suffer a deterioration in quantitative performance due to the more frequent occurrence of overlapping digits. Nevertheless, our CrevNet still achieves the best result.\nIn Fig 3, our qualitative analysis shows how each model performs on an extremely hard case of Moving MNIST where two digits continuously overlap during the warm-up phase.\nAs we can see, our model is the only model that can differentiate the overlapping digits. The information-preserving property of the two-way autoencoder enables our method to reconstruct every fine detail of the moving digits after occlusion, while baselines typically only restore the basic shape of these numbers. In fact, our CrevNet works almost perfectly on Moving MNIST, with most of its generations being visually indistinguishable from groundtruth.\nWe perform a human study to assess the fidelity of the video clips generated by different models. We presented pairs of video clips to human judges, where each pair consists of a video clip from the test set together with the prediction generated by the model. The judges were asked to decide which of the two video clips is more likely to be the groundtruth. To make each trial blind, the judges were not informed which model is used for generation, and the two sequences were randomly displayed on either side of the screen. We collected a total of 2,439 responses made by 58 human subjects and then calculated the probability that human judges answered correctly. The results are reported in Table 1. The accuracy of 55.8% suggests that subjects could hardly detect the difference and their decisions were very close to random guesses." }, { "heading": "3.2 SHORT-TERM PREDICTION—TRAFFIC FLOW FORECASTING", "text": "Next, we evaluate our model on a more complicated real-world dataset, Traffic4cast (IARAI (2019)), which collects the traffic statuses of 3 big cities over a year at a 5-minute interval. Traffic forecasting can be straightforwardly defined as a video prediction task by its spatiotemporal nature. However, this dataset is quite challenging for the following reasons. (1). High resolution: The frame resolution of Traffic4cast is 495 × 436, which is the highest among all datasets. Existing resolution-preserving methods can hardly be adapted to this dataset since they all require extremely large memory and computation.
Even if these models can be fit into GPU memory, they still do not have large enough receptive fields to capture the meaningful dynamics, as vehicles can move up to 100 pixels between consecutive frames. (2). Complicated nonlinear dynamics: Valid data points only reside on the hidden roadmap of each city, which is not explicitly provided in this dataset. Moving vehicles on these curved roads, along with tangled road conditions, produce very complex nonlinear behaviours. The data also involves many unobservable conditions or random events, like weather and car accidents.\nDatasets and Setup: Each frame in the Traffic4cast dataset is a 495 × 436 × 3 heatmap, where the last dimension records 3 traffic statuses representing volume, mean speed and major direction at a given location. The architecture of CrevNet is the same as the one we used on Moving MNIST. As we mentioned before, the existing resolution-preserving methods cannot handle such high-resolution input. Thus, to make the comparison possible, we add a U-Net encoder-decoder to the baseline models, including ConvLSTM and ST-LSTM. We train each model to predict the next 3 frames (the next 15 minutes) from 9 observations and evaluate predictions with the MSE criterion.\nResults: The quantitative comparison, including the best two results on the leaderboard before the submission of this paper, is provided in Table 3. Unlike all previous state-of-the-art methods, CrevNet does not suffer from high memory consumption, which allowed us to train our model on a single V100 GPU. The invertibility of the two-way autoencoder preserves all necessary information for spatiotemporal learning and allows our model to generate sharp and reasonable predictions. As illustrated in Fig 4, our model can identify and remember the hidden roadmap of each city through the learning of complicated nonlinear dynamics and accurately predict how the traffic system will evolve." }, { "heading": "3.3 NEXT-FRAME PREDICTION AND BEYOND—CAR-MOUNTED CAMERA VIDEO", "text": "Real-world videos are usually unpredictable in the long term because of intrinsic randomness and the lack of necessary information. Thus, the common practice for datasets like KITTI (Geiger et al. (2012)), a car-mounted camera video dataset, is to perform next-frame prediction. In this section, we further demonstrate the superior performance of our CrevNet by conducting experiments on KITTI and Caltech Pedestrian (Dollár et al. (2009)). Compared with the previous two settings, the car-mounted camera video dataset presents another level of difficulty for video prediction, as it describes various nonlinear three-dimensional dynamics of multiple moving objects, including backgrounds. Furthermore, as our well-trained model is capable of generating authentic future frames, it should spontaneously learn at least the shape and location of all moving objects, which indicates that the learnt features are very informative for downstream tasks. For example, in the case of object detection, these features can be incorporated to estimate more accurate locations and sizes of bounding boxes. Therefore, we also explore the effectiveness of our self-supervised learning method on 2D object detection on KITTI." }, { "heading": "3.3.1 VIDEO PREDICTION", "text": "Datasets and Setup: We follow the same protocol used in PredNet (Lotter et al. (2016)) for preprocessing and evaluation. We first center-crop all video frames and resize them into 128 × 160 (a small sketch of this preprocessing is given after this paragraph). We compare our proposed method with 4 state-of-the-art benchmark models.
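The following is a minimal sketch of a PredNet-style center-crop plus resize step using Pillow; the aspect-ratio handling is an illustrative assumption rather than the exact protocol.

```python
from PIL import Image

def preprocess(frame: Image.Image, out_w=160, out_h=128) -> Image.Image:
    """Center-crop to the target aspect ratio, then resize to out_w x out_h."""
    w, h = frame.size
    target = out_w / out_h
    if w / h > target:                   # too wide: crop left/right
        new_w = int(h * target)
        left = (w - new_w) // 2
        frame = frame.crop((left, 0, left + new_w, h))
    else:                                # too tall: crop top/bottom
        new_h = int(w / target)
        top = (h - new_h) // 2
        frame = frame.crop((0, top, w, top + new_h))
    return frame.resize((out_w, out_h), Image.BILINEAR)

# Usage: preprocess(Image.open("kitti_frame.png")) -> 160x128 frame.
```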
Models are trained on the KITTI dataset to predict the next frame after a 10-frame warm-up and are evaluated on Caltech Pedestrian. The architecture of CrevNet used on KITTI is composed of a 48-layer two-way autoencoder and 40 RPMs. Note that it is the memory efficiency of our method that allows us to deploy such a deep model. Since our model also possesses good long-term prediction capability, we add a 12-frame prediction comparison with CycleGAN (Kwon & Park (2019)) and PredNet (Lotter et al. (2016)).\nResults: The performance of the different models in terms of PSNR and SSIM is displayed in Table 4. CrevNet outperforms all baselines in both the next-frame and multi-frame prediction regimes. Visual comparisons are provided in Fig 5 and Fig 6. In particular, in the case of 12-frame generation, we can observe that, compared with our method, PredNet suffers severely from the well-known RNN error propagation issue, while CycleGAN produces realistic yet physically inconsistent predictions." }, { "heading": "3.3.2 2D OBJECT DETECTION", "text": "Datasets and Setup: KITTI provides three prior frames of unlabeled data for each labeled image. This allows us to run our CrevNet to extract useful spatiotemporal features for object detection. All video sequences were recorded at 10 Hz with a resolution of 1242 × 375. We first resize each frame to 416 × 128 and finetune our best model on the video prediction task solely. The combinations of features extracted by our two-way autoencoder and the attention masks of the target frame are then fed into the detection head for further training. Note that we do not update the weights of CrevNet at this stage, to purely demonstrate the power of self-supervised learning. Two image-based detection models, SqueezeDet and RRC, are compared as baselines. We also add an experiment on transfer learning of features learnt by PredNet on KITTI as a comparison. To be consistent with related work, we use SSD (Liu et al. (2016)) as the detection head.\nResults: The results of all experiments and baselines can be found in Table 5. Surprisingly, our CrevNet even outperforms the combination of the best model on each class in terms of mAP. Since our model is capable of capturing motion information, it is sensitive to small (hard) moving objects. However, motion information alone is not sufficient for object detection, due to the appearance of relatively static objects. Therefore, we can observe a performance boost after we incorporate the features extracted by our two-way autoencoder. Another advantage of our method is that it can provide better localization of bounding boxes, since the learnt features of CrevNet retain pixel-wise alignment with the input and output frames. Finally, thanks to the lightweight nature of our CrevNet, our best detection model can run at 6.8 FPS at test time." }, { "heading": "4 RELATED WORK", "text": "Deep Learning in Video Prediction: Mainstream video prediction models can mostly be categorized into two frameworks, stacked ConvRNNs and encoder-predictor-decoder models. The former framework attempts to design a new spatiotemporal module and then stacks multiple such modules to form the final model, while the latter usually utilizes an autoencoder to project video frames into their latent representations and then employs a recurrent neural network to model the temporal transformations. PredNet (Lotter et al. (2016)) is a good representative of the stacked-ConvRNN framework.
In PredNet, each ConvLSTM layer produces a layer-specific prediction at every time step to transmit an error term to the next layer. This model works well for predicting the next frame, but fails to maintain its performance in a long-term setting. To tackle long-term predictions, PredRNN (Wang et al. (2017)) proposed a new spatiotemporal LSTM, which allows memory to flow both vertically and horizontally. PredRNN++ (Wang et al. (2018)) further improved the results by rearranging spatial and temporal memory in a cascaded mechanism, and by using a gradient highway architecture to ease the optimization. E3D-LSTM (Wang et al. (2019)) effectively recalled the previous memory states and also proposed to include 3D convolutions to enhance its performance. ContextVP (Byeon et al. (2018)) introduced a fully context-aware architecture consisting of parallel multi-dimensional LSTM units and blending units. Methods from the stacked-ConvRNN family usually yield more accurate deterministic predictions, but they consume considerable GPU memory and computational power as they abandon downsampling to prevent information loss.\nThe encoder-predictor-decoder framework, on the other hand, provides more flexibility than its counterpart. MCNET (Villegas et al. (2017)) and DrNet (Denton et al. (2017)) decompose the content and motion in videos by building their corresponding encoders and then integrate this disentangled information to yield the next frame. Retrospective CycleGAN (Kwon & Park (2019)) combines a sequential adversarial loss with a frame adversarial loss, which encourages the model to generate frames that are visually similar to authentic images. In terms of modeling stochasticity, SVG (Denton & Fergus (2018)) and SAVP (Lee et al. (2018)) utilize a prior inference network to mimic the uncertainty in the environment, and then embed it into a deterministic generative model to produce stochastic video frames. VPN (Kalchbrenner et al. (2016)) estimates the discrete joint distribution of the raw pixel values in a video using the well-established PixelCNNs. It is worth noting that VPN employs a resolution-preserving encoder to circumvent the information loss, showing the need for an efficient information-preserving encoder in the community.\nComparison with Related Works: To the best of our knowledge, our CrevNet is the first conditionally reversible model in the video prediction literature. Three prior works, E3D-LSTM (Wang et al. (2019)), FRNN (Oliu et al. (2018)) and VideoFlow (Kumar et al. (2019)), share some similarities with our CrevNet. While E3D-LSTM also employs 3D convolutions, their implementation is essentially equivalent to applying two 2D convolutional operations, as there is no shared filter on the temporal dimension. Similar to CrevNet, FRNN reduces its computational cost by eliminating the need to re-encode the output of the decoder. However, FRNN has a substantially different architecture compared to CrevNet. While the encoder and decoder in our model do not need information sharing at all, FRNN relies heavily on the sharing of the hidden states between them. Although VideoFlow also utilizes invertible transformations, this approach is very different from ours because: (1). VideoFlow is built upon Glow, a very memory-consuming architecture. Such memory limits preclude the use of 3D convolutions and even prevent training the model with Adam. (2). It uses an ANN to model the temporal relationship; as such, VideoFlow cannot capture complex dynamics. (3).
So far, VideoFlow has only been applied to stochastic video generation instead of deterministic video prediction.\nThe Reversible and Invertible Architectures: The idea of the coupling layer was initially introduced in NICE (Dinh et al. (2014)) so as to make the computation of the determinant of the Jacobian and the inverse Jacobian trivial. Inspired by the additive coupling layer, RevNet (Gomez et al. (2017)) introduced a reversible block that allows the reconstruction of the activations of each layer from those of the next layer, thus eliminating the need to store activations between downsampling steps and significantly reducing memory consumption. The follow-up work by Jacobsen et al. (2018) further proposed an invertible extension, i-RevNet, which enables the model to preserve all information of the input through the layers while still being capable of extracting a useful representation for classification." }, { "heading": "5 CONCLUSION", "text": "We described a novel conditionally reversible network, CrevNet, for pixel-level prediction of future frames in videos. The originality of our model lies in our use of the reversible two-way autoencoder and the accompanying reversible predictive module. This architectural design enables the model to preserve fine-grained information without significant memory and computation overhead. CrevNet achieves state-of-the-art results on both synthetic and real-world datasets. The subsequent detection experiments demonstrate the potential of CrevNet to serve as a continuous self-supervised learning system that enhances downstream CV tasks, as shown in the case of BERT (Devlin et al. (2018)) for NLP tasks." }, { "heading": "A CONVLSTM AND ST-LSTM", "text": "The key equations of ConvLSTM are as follows:\ni_t = σ(W_{xi} ∗ X_t + W_{hi} ∗ H^l_{t−1} + b_i)\nf_t = σ(W_{xf} ∗ X_t + W_{hf} ∗ H^l_{t−1} + b_f)\nC^l_t = f_t ◦ C^l_{t−1} + i_t ◦ tanh(W_{xc} ∗ X_t + W_{hc} ∗ H^l_{t−1} + b_c)\no_t = σ(W_{xo} ∗ X_t + W_{ho} ∗ H^l_{t−1} + W_{co} ◦ C^l_t + b_o)\nH^l_t = o_t ◦ tanh(C^l_t)\nwhere ∗ denotes the convolution operator and ◦ denotes the Hadamard product. Based on ConvLSTM, the spatiotemporal LSTM (ST-LSTM) in PredRNN adds another vertical memory flow to enhance the long-term temporal dependency, as follows:\ni_t = σ(W_{xi} ∗ X_t + W_{hi} ∗ H^l_{t−1} + b_i)\nf_t = σ(W_{xf} ∗ X_t + W_{hf} ∗ H^l_{t−1} + b_f)\nC^l_t = f_t ◦ C^l_{t−1} + i_t ◦ tanh(W_{xc} ∗ X_t + W_{hc} ∗ H^l_{t−1} + b_c)\ni′_t = σ(W′_{xi} ∗ X_t + W_{mi} ∗ M^{l−1}_t + b′_i)\nf′_t = σ(W′_{xf} ∗ X_t + W_{mf} ∗ M^{l−1}_t + b′_f)\nM^l_t = f′_t ◦ M^{l−1}_t + i′_t ◦ tanh(W_{xm} ∗ X_t + W_{mm} ∗ M^{l−1}_t + b_m)\no_t = σ(W_{xo} ∗ X_t + W_{ho} ∗ H^l_{t−1} + W_{co} ◦ C^l_t + W_{mo} ◦ M^l_t + b_o)\nH^l_t = o_t ◦ tanh(W_{1×1} ∗ [C^l_t, M^l_t])\nwhere the blue part (in the original rendering, the terms shared with ConvLSTM) overlaps ConvLSTM. Note that M^l_t usually receives information from the previous layer instead of the previous state, and the special case is that M^1_t receives M^L_{t−1} to constitute a zigzag information flow. As we can see, ST-LSTM basically doubles the size of the feature map and the number of parameters compared with ConvLSTM." }, { "heading": "B CONDITIONAL REVERSIBILITY", "text": "As we mentioned in Section 2.2, conditional reversibility is an interesting property of our CrevNet. In this section, we provide more details about it. Given x̂^2_t, x^1_t and h^1_{t−1}, the reversible predictive module can recover x^2_t as follows:\nh^1_t = ConvRNN(x^1_t, h^1_{t−1})\ng_t = φ(W ∗ h^1_t + b)\nx^2_t = (x̂^2_t − g_t ◦ h^1_t) ⊘ (1 − g_t)\nwhere ⊘ denotes element-wise division. Ideally, if there were no numerical error during the calculation, we could get a perfect reconstruction of the input by applying this inverse operation repeatedly.
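To make conditional reversibility concrete, below is a minimal NumPy sketch of the forward and inverse passes implied by the equations above. The ConvRNN is replaced by a stand-in dense recurrent cell and the gate φ is taken to be a sigmoid so that 1 − g_t stays strictly positive; both choices, and all shapes, are illustrative assumptions rather than the paper's exact architecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
d = 8                                           # toy feature dimension
W_rnn = rng.normal(scale=0.1, size=(d, 2 * d))  # stand-in for the ConvRNN cell
W = rng.normal(scale=0.1, size=(d, d))
b = np.zeros(d)

def conv_rnn(x1, h_prev):
    # Placeholder recurrent cell; CrevNet uses a convolutional RNN here.
    return np.tanh(W_rnn @ np.concatenate([x1, h_prev]))

def rpm_forward(x1, x2, h_prev):
    # h1_t = ConvRNN(x1_t, h1_{t-1}); g_t = phi(W * h1_t + b)
    h = conv_rnn(x1, h_prev)
    g = sigmoid(W @ h + b)
    x2_hat = x2 * (1.0 - g) + g * h             # forward map implied by the printed inverse
    return x2_hat

def rpm_inverse(x1, x2_hat, h_prev):
    # x2_t = (x2_hat - g_t o h1_t) / (1 - g_t), recomputing h and g from (x1, h_prev)
    h = conv_rnn(x1, h_prev)
    g = sigmoid(W @ h + b)
    return (x2_hat - g * h) / (1.0 - g)

x1, x2, h0 = rng.normal(size=d), rng.normal(size=d), np.zeros(d)
x2_hat = rpm_forward(x1, x2, h0)
print(np.max(np.abs(x2 - rpm_inverse(x1, x2_hat, h0))))   # ~1e-16, exact up to float error
```

The key point is that h^1_t and g_t are recomputed from (x^1_t, h^1_{t−1}) during inversion, so x^2_t never needs to be stored — which is the source of the memory savings, and also of the layer-by-layer error amplification discussed next.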
In practice, while most reverse generations are successful, the inevitable numerical error will still result in some failure cases, especially in the case of a very deep architecture, because errors are amplified layer by layer.\n\nC OBJECT DETECTION" } ]
2,020
null
SP:5a0e35b51548e82135b965e7b692e8a0af1289f8
[ "This paper considers reinforcement learning for discrete choice models with unobserved heterogeneity, which is useful for analyzing dynamic Economic behavior. Random choice-specific shocks in reward is accommodated, which are only observed by the agent but not recorded in the data. Existing optimization approaches rely on finding a functional fixed point, which is computationally expensive. The main contribution of the paper lies in formulating discrete choice models into an MDP, and showing that the value function is concave with respect to the policy (represented by conditional choice probability). So policy gradient algorithm can provably converge to the global optimal. Conditions on the parameters for global concavity are identified and rates of convergences are established. Finally, significant advantages in computation were demonstrated on the data from Rust (1987), compared with “nested fixed point” algorithms that is commonly used in Econometrics.", "This paper deals with a certain class of models, known as discrete choice models. These models are popular in econometrics, and aim at modelling the complex behavioural patterns of individuals or firms. Entities in these models are typically modelled as rational agents, that behave optimally for reaching their goal of maximizing a certain objective function such as maximizing expected cumulative discounted payoff over a fixed period." ]
Discrete choice models with unobserved heterogeneity are commonly used Econometric models for dynamic Economic behavior which have been adopted in practice to predict the behavior of individuals and firms, from schooling and job choices to strategic decisions in market competition. These models feature optimizing agents who choose among a finite set of options in a sequence of periods and receive choice-specific payoffs that depend both on variables that are observed by the agent and recorded in the data and on variables that are only observed by the agent but not recorded in the data. Existing work in Econometrics assumes that optimizing agents are fully rational and requires finding a functional fixed point to find the optimal policy. We show that in an important class of discrete choice models the value function is globally concave in the policy. This means that simple algorithms that do not require fixed point computation, such as the policy gradient algorithm, globally converge to the optimal policy. This finding can be used both to relax behavioral assumptions regarding the optimizing agents and to facilitate Econometric analysis of dynamic behavior. In particular, we demonstrate significant computational advantages in using a simple implementation of the policy gradient algorithm over existing “nested fixed point” algorithms used in Econometrics.
[]
[ { "authors": [ "J.H. Abbring", "J.J. Heckman" ], "title": "Econometric evaluation of social programs, part iii: Distributional treatment effects, dynamic treatment effects, dynamic discrete choice, and general equilibrium policy evaluation", "venue": "Handbook of econometrics,", "year": 2007 }, { "authors": [ "V. Aguirregabiria", "A. Magesan" ], "title": "Solution and estimation of dynamic discrete choice structural models using euler equations. Available at SSRN 2860973", "venue": null, "year": 2016 }, { "authors": [ "V. Aguirregabiria", "P. Mira" ], "title": "Sequential estimation of dynamic discrete games", "venue": null, "year": 2007 }, { "authors": [ "V. Aguirregabiria", "P. Mira" ], "title": "Dynamic discrete choice structural models: A survey", "venue": "Journal of Econometrics,", "year": 2010 }, { "authors": [ "P. Arcidiacono", "R.A. Miller" ], "title": "Conditional choice probability estimation of dynamic discrete choice models with unobserved heterogeneity", "venue": null, "year": 2011 }, { "authors": [ "P. Bajari", "H. Hong", "D. Nekipelov" ], "title": "Game theory and econometrics: A survey of some recent research", "venue": "In Advances in economics and econometrics, 10th world congress,", "year": 2013 }, { "authors": [ "N. Bansal", "A. Gupta" ], "title": "Potential-function proofs for first-order methods", "venue": "arXiv preprint arXiv:1712.04581", "year": 2017 }, { "authors": [ "Dubé", "J.-P", "P. Chintagunta", "A. Petrin", "B. Bronnenberg", "R. Goettler", "P. Seetharaman", "K. Sudhir", "R. Thomadsen", "Y. Zhao" ], "title": "Structural applications of the discrete choice model", "venue": null, "year": 2002 }, { "authors": [ "N. Dunford", "J.T. Schwartz" ], "title": "Linear Operators. Part 1: General Theory. New York Interscience", "venue": null, "year": 1957 }, { "authors": [ "Z. Eckstein", "K.I. Wolpin" ], "title": "The specification and estimation of dynamic stochastic discrete choice models: A survey", "venue": "The Journal of Human Resources,", "year": 1989 }, { "authors": [ "M. Fazel", "R. Ge", "S. Kakade", "M. Mesbahi" ], "title": "Global convergence of policy gradient methods for the linear quadratic regulator", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "W.B. Haskell", "R. Jain", "D. Kalathil" ], "title": "Empirical dynamic programming", "venue": "Mathematics of Operations Research,", "year": 2016 }, { "authors": [ "I. Hendel", "A. Nevo" ], "title": "Measuring the implications of sales and consumer inventory behavior", "venue": null, "year": 2006 }, { "authors": [ "K. Hinderer" ], "title": "Lipschitz continuity of value functions in markovian decision processes", "venue": "Mathematical Methods of Operations Research,", "year": 2005 }, { "authors": [ "V.J. Hotz", "R.A. Miller" ], "title": "Conditional choice probabilities and the estimation of dynamic models", "venue": "The Review of Economic Studies,", "year": 1993 }, { "authors": [ "T. Jaksch", "R. Ortner", "P. Auer" ], "title": "Near-optimal regret bounds for reinforcement learning", "venue": "Journal of Machine Learning Research,", "year": 2010 }, { "authors": [ "P. Müller", "G. Reich" ], "title": "Structural estimation using parametric mathematical programming with equilibrium constraints and homotopy path continuation", "venue": null, "year": 2018 }, { "authors": [ "D. Nekipelov", "V. Syrgkanis", "E. 
Tardos" ], "title": "Econometrics for learning agents", "venue": "In Proceedings of the Sixteenth ACM Conference on Economics and Computation,", "year": 2015 }, { "authors": [ "M. Pirotta", "M. Restelli", "L. Bascetta" ], "title": "Policy gradient in lipschitz markov decision processes", "venue": "Machine Learning,", "year": 2015 }, { "authors": [ "E. Rachelson", "M.G. Lagoudakis" ], "title": "On the locality of action domination in sequential decision making", "venue": null, "year": 2010 }, { "authors": [ "J. Rust" ], "title": "Optimal replacement of gmc bus engines: An empirical model of harold zurcher", "venue": "Econometrica: Journal of the Econometric Society,", "year": 1987 }, { "authors": [ "J. Rust" ], "title": "Numerical dynamic programming in economics", "venue": "Handbook of computational economics,", "year": 1996 }, { "authors": [ "R.S. Sutton", "A.G. Barto" ], "title": "Reinforcement learning: An introduction", "venue": null, "year": 2018 }, { "authors": [ "R.J. Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine learning,", "year": 1992 } ]
[ { "heading": null, "text": "Discrete choice models with unobserved heterogeneity are commonly used Econometric models for dynamic Economic behavior which have been adopted in practice to predict behavior of individuals and firms from schooling and job choices to strategic decisions in market competition. These models feature optimizing agents who choose among a finite set of options in a sequence of periods and receive choice-specific payoffs that depend on both variables that are observed by the agent and recorded in the data and variables that are only observed by the agent but not recorded in the data. Existing work in Econometrics assumes that optimizing agents are fully rational and requires finding a functional fixed point to find the optimal policy. We show that in an important class of discrete choice models the value function is globally concave in the policy. That means that simple algorithms that do not require fixed point computation, such as the policy gradient algorithm, globally converge to the optimal policy. This finding can both be used to relax behavioral assumption regarding the optimizing agents and to facilitate Econometric analysis of dynamic behavior. In particular, we demonstrate significant computational advantages in using a simple implementation policy gradient algorithm over existing “nested fixed point” algorithms used in Econometrics." }, { "heading": "1 INTRODUCTION", "text": "Dynamic discrete choice model with unobserved heterogeneity is, arguably, the most popular model that is currently used for Econometric analysis of dynamic behavior of individuals and firms in Economics and Marketing (e.g. see surveys in Eckstein and Wolpin (1989), Dubé et al. (2002) Abbring and Heckman (2007), Aguirregabiria and Mira (2010)). Even most recent Econometric papers on single-agent dynamic decision-making use this setup to showcase their results (e.g. Arcidiacono and Miller, 2011; Aguirregabiria and Magesan, 2016; Müller and Reich, 2018).In this model, pioneered in Rust (1987), the agent chooses between a discrete set of options (typically 2) in a sequence of discrete time periods to maximize the expected cumulative discounted payoff. The reward in each period is a function of the state variable which follows a Markov process and is observed in the data and also a function of an idiosyncratic random variable that is only observed by the agent but is not reported in the data. The unobserved idiosyncratic component is designed to reflect heterogeneity of agents that may value the same choice differently.\nDespite significant empirical success in prediction of dynamic economic behavior under uncertainty, dynamic discrete choice models frequently lead to seemingly unrealistic optimization problems that economic agents need to solve. For instance, Hendel and Nevo (2006) features an elaborate functional fixed point problem with constraints, which is computationally intensive, especially in continuous state spaces, for consumers to buy laundry detergent in the supermarket. Common approach for this functional fixed point problem is value function iteration (See Section 2.3 for more discussion).\nAt the same time, rich literature on Markov Decision Processes (cf. Sutton and Barto, 2018) have developed several effective optimization algorithms, such as the policy gradient algorithm and its variants, that do not require solving for a functional fixed point. 
However, the drawback of the policy gradient is that the value function in a generic Markov Decision problem is not concave in the policy. This means that gradient-based algorithms have no guarantees of global convergence for a generic MDP. For some specific and simple models where closed-form characterizations exist, convergence results have been shown using model-specific techniques that are hard to generalize (e.g. Fazel et al., 2018, for the linear quadratic regulator).\nIn this paper our main goal is to resolve the dichotomy in the empirical social science literature whereby the rationality of consumers requires them to be able to solve the functional fixed point problem, which is computationally intensive. Our main theoretical contribution is the proof that, in the class of dynamic discrete choice models with unobserved heterogeneity, the value function of the optimizing agent is globally concave in the policy. This implies that a large set of policy gradient algorithms that have a modest computational power requirement for the optimizing agents have a fast convergence guarantee in our considered class of dynamic discrete choice models. The importance of this result is twofold.\nFirst, it promises that seemingly complicated dynamic optimization problems faced by consumers can be solved by relatively simple algorithms that do not require fixed point computation or functional optimization. This means that policy gradient-style methods have an important behavioral interpretation. As a result, consumer behavior following the policy gradient can serve as a behavioral assumption for estimating consumer preferences from data, one which is more natural for consumer choice settings than other assumptions that have been used in the past for the estimation of preferences (e.g. ε-regret learning in Nekipelov et al. (2015)). Second, and more importantly, our result showing fast convergence of the policy gradient algorithm makes it an attractive alternative to the search for the functional fixed point in this class of problems. While the goal of the Econometric analysis of data from dynamically optimizing consumers is to estimate consumer preferences by maximizing the likelihood function, it requires sequentially solving the dynamic optimization problem for each value of the utility parameters along the parameter search path. Existing work in Economics prescribes using fixed point iterations for the value function to solve the dynamic optimization problem (see Rust (1987), Aguirregabiria and Mira (2007)). The replacement of the fixed point iterations with the policy gradient method significantly speeds up the maximization of the likelihood function. This makes the policy gradient algorithm our recommended approach for use in Econometric analysis, and establishes the practical relevance of many newer reinforcement learning algorithms from a behavioral perspective for the social sciences." }, { "heading": "2 PRELIMINARIES", "text": "In this section, we introduce the concepts of the Markov decision process (MDP) with choice-specific payoff heterogeneity, the conditional choice probability (CCP) representation and the policy gradient algorithm."
}, { "heading": "2.1 MARKOV DECISION PROCESS", "text": "A discrete-time Markov decision process (MDP) with choice-specific heterogeneity is defined as a 5-tuple 〈S,A, r, ,P, β〉, where S is compact convex state space with diam(S) ≤ S̄ <∞, A is the set of actions, r : S×A → R+ is the reward function, such that r(s, a) is the immediate non-negative reward for the state-action pair (s, a), are independent random variables, P is a Markov transition model where where p(s′|s, a) defines the transition density between state s and s′ under action a, and β ∈ [0, 1) is the discount factor for future payoff. We assume that random variables are observed by the optimizing agent and not recorded in the data. These variables reflect idiosyncratic differences in preferences of different optimizing agents over choices. In the following discussion we refer to these variables as “random choice-specific shocks.\"\nIn each period t = 1, 2, . . . ,∞, the nature realizes the current state st based on the Markov transition P given the state-action pair (st−1, at−1) in the previous period t− 1, and the choice-specific shocks t = { t,a}a∈A drawn i.i.d. from distribution . The optimizing agent chooses an action a ∈ A, and her current period payoff is sum of the immediate reward and the choice-specific shock, i.e., r(s, a) + t,a. Given initial state s1, the agent’s long-term payoff is E 1,s2, 2,... [∑∞ t=1 β t−1r(st, at) + t,at ] . This expression makes it clear that random shocks play a crucial role in this model by allowing us to define the ex ante value function of the optimizing agent which reflects the expected reward from agent’s choices before the agent observes realization of t. When the distribution of shocks is sufficiently smooth (differentiable), the corresponding ex ante value function is smooth (differentiable)\nas well. This allows us to characterize the impact of agent’s policy on the expected value by considering functional derivatives of the value function with respect to the policy.\nIn the remainder of the paper, we rely on the following assumptions. Assumption 2.1. The state space S is compact in R and the action spaceA is binary, i.e.,A = {0, 1}. Assumption 2.2. For all states s, the immediate reward r(s, 0) for the state-action pair (s, 0) is zero i.e., r(s, 0) = 0, and the immediate reward r(s, 1) for the state-action pair (s, 1) is bounded between [Rmin, Rmax]. Assumption 2.3. Choice-specific shocks are Type I Extreme Value random variables with location parameter 0 (cf. Hotz and Miller, 1993) which are independent over choices and time periods.\nAssumption 2.1, 2.2, 2.3 are present in most of the papers on dynamic decision-making in economics, marketing and finance, (e.g. Dubé et al., 2002; Aguirregabiria and Mira, 2010; Arcidiacono and Miller, 2011; Aguirregabiria and Magesan, 2016; Müller and Reich, 2018)\nThe policy and the value function A stationary Markov policy is a function σ : S × RA → A which maps the current state s and choice-specific shock to an action. In our further discussion we will show that there is a natural more restricted definition of the set of all feasible policies in this model.\nGiven any stationary Markov policy σ, the value function Vσ : S → R is a mapping from the initial state to the long-term payoff under policy σ, i.e.,\nVσ(s1) = E 1,s2, 2,... 
[ ∑_{t=1}^∞ β^{t−1} ( r(s_t, σ(s_t, ε_t)) + ε_{t, σ(s_t, ε_t)} ) ].\nSince the reward is non-negative and bounded, and the discount β ∈ [0, 1), the value function V_σ is well-defined and the optimal policy σ̃ (i.e., V_σ̃(s) ≥ V_σ(s) for all policies σ and states s) exists. Furthermore, the following Bellman equation holds:\nV_σ(s) = E_ε[ r(s, σ(s, ε)) + ε_{σ(s,ε)} + β E_{s′}[V_σ(s′) | s, σ(s, ε)] ] for all policies σ. (1)" }, { "heading": "2.2 CONDITIONAL CHOICE PROBABILITY REPRESENTATION", "text": "Based on the Bellman equation (1) evaluated at the optimal policy, the optimal conditional choice probability δ̃(a|s) (i.e., the probability of choosing action a given state s under the optimal policy σ̃) can be defined as\nδ̃(a|s) = E_ε[ 1{ r(s, a) + ε_a + β E_{s′}[V_σ̃(s′) | s, a] ≥ r(s, a′) + ε_{a′} + β E_{s′}[V_σ̃(s′) | s, a′], ∀a′ } ].\nThe optimal policy σ̃ can, therefore, be equivalently characterized by the threshold function π̃(s, a) = r(s, a) + β E_{s′}[V_σ̃(s′) | s, a], such that the optimizing agent chooses the action a† which maximizes the sum of the threshold and the choice-specific shock, i.e., a† = argmax_a { π̃(s, a) + ε_a }. Similarly, all non-optimal policies can be characterized by the corresponding threshold functions, denoted π. Under Assumption 2.3 the conditional choice probability δ can be explicitly expressed in terms of the respective threshold π as (cf. Rust, 1996)\nδ(a|s) = exp(π(s, a)) / ( ∑_{a′∈A} exp(π(s, a′)) ).\nWe note that this expression induces a one-to-one mapping from the thresholds to the conditional choice probabilities. Therefore, all policies are fully characterized by their respective conditional choice probabilities. For notational simplicity, since we consider the binary action space A = {0, 1} and the reward r(s, 0) is normalized to 0, we denote the immediate reward r(s, 1) as r(s), the conditional choice probability δ(0|s) as δ(s), and π(s, 1) as π(s). In the subsequent discussion, given that the characterization of a policy σ via its threshold is equivalent to its characterization by the conditional choice probability δ, we interchangeably refer to δ as the “policy”. Then we rewrite the Bellman equation for a given policy δ as\nV_δ(s) = (1 − δ(s)) r(s) − δ(s) log(δ(s)) − (1 − δ(s)) log(1 − δ(s)) + β E_{ε,s′}[ V_δ(s′) | s ]. (2)\nNow we make two additional assumptions that are compatible with standard assumptions in the Econometrics literature.\nAssumption 2.4. For all states s ∈ S, the conditional distribution of the next-period Markov state p(·|s, 1) first-order stochastically dominates the distribution p(·|s, 0), i.e., for all ŝ ∈ S, Pr_{s′}[s′ ≤ ŝ | s, 1] ≤ Pr_{s′}[s′ ≤ ŝ | s, 0].\nAssumption 2.5. Under the optimal policy δ̃, the value function is non-decreasing in states, i.e., V_δ̃(s) ≤ V_δ̃(s′) for all s, s′ ∈ S s.t. s < s′.\nConsider the myopic policy δ̄(s) = (exp(r(s)) + 1)^{−1}, which uses the threshold π̄(s) = r(s). This policy corresponds to the agent optimizing the immediate reward without considering how current actions impact future rewards. Under Assumption 2.4 and Assumption 2.5, the threshold of the optimal policy is at least the threshold of the myopic policy, i.e., π̃(s) ≥ π̄(s). Hence, Lemma 2.1 holds.\nLemma 2.1. The optimal policy δ̃ chooses action 0 with weakly lower probability than the myopic policy δ̄ in all states s ∈ S, i.e., δ̃(s) ≤ δ̄(s)."
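Since β ∈ [0, 1), equation (2) is a contraction in V_δ, so the value of any fixed policy — for instance the myopic δ̄ just defined — can be computed by straightforward fixed-point iteration on a discretized state space. The sketch below does this using the decomposition E_{ε,s′}[V_δ(s′)|s] = δ(s) E_{s′}[V_δ(s′)|s, 0] + (1 − δ(s)) E_{s′}[V_δ(s′)|s, 1]; the transition matrices and rewards are random placeholders, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N, beta = 50, 0.95                       # grid size and discount, arbitrary

r = rng.uniform(0.5, 1.5, size=N)        # r(s) = r(s, 1); r(s, 0) = 0 by Assumption 2.2
P0 = rng.dirichlet(np.ones(N), size=N)   # row s of P0 is p(. | s, a = 0)
P1 = rng.dirichlet(np.ones(N), size=N)   # row s of P1 is p(. | s, a = 1)

def evaluate_policy(delta, tol=1e-10):
    """Fixed-point iteration on Bellman equation (2) for a given CCP delta(s) = delta(0|s)."""
    V = np.zeros(N)
    while True:
        entropy = -delta * np.log(delta) - (1 - delta) * np.log(1 - delta)
        V_new = (1 - delta) * r + entropy \
            + beta * (delta * (P0 @ V) + (1 - delta) * (P1 @ V))
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

delta_myopic = 1.0 / (np.exp(r) + 1.0)   # the myopic policy of Lemma 2.1
V_myopic = evaluate_policy(delta_myopic)
```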
}, { "heading": "2.3 MDP IN ECONOMICS AND POLICY GRADIENT", "text": "Our motivation in this paper comes from empirical work in Economics and Marketing where optimizing agents are consumers or small firms who make dynamic decisions while observing the current state s and the reward r(s, a) for their choice a. These agents often have limited computational power making it difficult for them to solve the Bellman equation to find the optimal policy. They also may have only sample access to the distribution of Markov transition which further complicates the computation of the optimal policy. In this context we contrast the value function iteration method which is based on solving the fixed point problem induced by the Bellman equation and the policy gradient method.\nValue function iteration In the value function iterations, e.g., discussed in Jaksch et al. (2010); Haskell et al. (2016), the exact expectation in the Bellman equation (1) is replaced by an empirical estimate and then functional iteration uses the empirical Bellman equation to find the fixed point, i.e., the optimal policy. Under certain assumptions on MDPs, one can establish convergence guarantees for the value function iterations, e.g., Jaksch et al. (2010); Haskell et al. (2016). However, to run these iterations may require significant computation power which may not be practical when optimizing agents are consumers or small firms.\nPolicy gradient In contrast to value function iterations, policy gradient algorithm and its variations are model-free sample-based methods. At a high level, policy gradient parametrizes policies {δθ}θ∈Θ by θ ∈ Θ and computes the gradient of the value function with respect to the current policy δθ and update the policy in the direction of the gradient, i.e., θ ← θ + α∇θVδθ . Though the individuals considered in the Economic MDP models may not compute the exact gradient with respect to a policy due to having only sample access to the Markov transition, previous work has provided approaches to produce an unbiased estimator of the gradient. For example, REINFORCE (Williams, 1992) updates the policy by θ ← θ + αR∇θ log(δθ(a|s)) where R is the long-term payoff on path. Notice that this updating rule is simple comparing with value function iteration. The caveat of the policy gradient approach is the lack of its global convergence guarantee for a generic MDP. In this paper we show that such guarantee can be provided for the specific class of MDPs that we consider." }, { "heading": "3 WARM-UP: LOCAL CONCAVITY OF THE VALUE FUNCTION AT THE OPTIMAL POLICY", "text": "To understand the convergence of the policy gradient, in this section we introduce our main technique and show that the concavity of the value function with respect to policies is satisfied in a fixed neighborhood around the optimal policy. We rely on the special structure of the value function induced by random shocks which essentially “smooth it\" making it differentiable. We then use Bellman equation (7) to compute strong Fréchet functional derivatives of the value functions and argue that the respective second derivative is negative at the optimal policy. We use this approach in Section 4 to show the global concavity of the value function with respect to policies.\nBy ∆ we denote the convex compact set that contains all continuous functions δ : S → [0, 1] such that 0 ≤ δ(·) ≤ δ̄(·). The Bellman equation (7) defines the functional Vδ(·). 
}, { "heading": "3 WARM-UP: LOCAL CONCAVITY OF THE VALUE FUNCTION AT THE OPTIMAL POLICY", "text": "To understand the convergence of the policy gradient, in this section we introduce our main technique and show that the concavity of the value function with respect to policies holds in a fixed neighborhood around the optimal policy. We rely on the special structure of the value function induced by the random shocks, which essentially “smooth” it, making it differentiable. We then use the Bellman equation (2) to compute strong Fréchet functional derivatives of the value function and argue that the respective second derivative is negative at the optimal policy. We use this approach in Section 4 to show the global concavity of the value function with respect to policies.\nBy ∆ we denote the convex compact set that contains all continuous functions δ : S → [0, 1] such that 0 ≤ δ(·) ≤ δ̄(·). The Bellman equation (2) defines the functional V_δ(·). Recall that the Fréchet derivative of the functional V_δ(·), which maps the bounded linear space ∆ into the space of all continuous bounded functions of s, at a given δ(·) is a bounded linear functional DV_δ(·) such that for all continuous h(·) with ‖h‖₂ ≤ H̄: V_{δ+h}(·) − V_δ(·) = DV_δ(·) h(·) + o(‖h‖₂). When the functional DV_δ(·) is also Fréchet differentiable, we refer to its Fréchet derivative as the second Fréchet derivative of the functional V_δ(·) and denote it D²V_δ(·).\nTheorem 3.1. The value function V_δ is twice Fréchet differentiable with respect to δ at the choice probability δ̃ corresponding to the optimal policy, and its second Fréchet derivative is negative at δ̃ in all states s, i.e., D²V_δ̃(s) ≤ 0.\nWe sketch the proof idea of Theorem 3.1 and defer its formal proof to Appendix A. Starting from the Bellman equation (2) for the value function, the Fréchet derivative of the value function is the fixed point of the following Bellman equation:\nDV_δ(s) = (log(1 − δ(s)) − log(δ(s)) − r(s)) + β (E_{s′}[V_δ(s′) | s, 0] − E_{s′}[V_δ(s′) | s, 1]) + β E_{ε,s′}[DV_δ(s′) | s], (3)\nand\nD²V_δ(s) = − 1/(δ(s)(1 − δ(s))) − 2β (E_{s′}[DV_δ(s′) | s, 1] − E_{s′}[DV_δ(s′) | s, 0]) + β E_{s′}[D²V_δ(s′) | s]. (4)\nA necessary condition for the optimum yielding δ̃ is DV_δ̃(s) = 0 for all states s. As a result, equation (4) implies that the second Fréchet derivative is negative in all states, i.e., D²V_δ̃(s) ≤ 0.\nThe Bellman equation (4) for the second Fréchet derivative suggests that D²V_δ(s) ≤ 0 for all states s if\n1/(δ(s)(1 − δ(s))) + 2β (E_{s′}[DV_δ(s′) | s, 1] − E_{s′}[DV_δ(s′) | s, 0]) ≥ 0. (5)\nThe first term in inequality (5) is always positive for all policies in ∆, but the second term can be arbitrarily small. In the next section, we introduce a natural smoothness assumption on the MDP (i.e., Lipschitz MDP) and show that the local concavity can be extended to global concavity, which implies that the policy gradient algorithm for our problem converges globally under this assumption." }, { "heading": "4 GLOBAL CONCAVITY OF THE VALUE FUNCTION", "text": "In this section, we introduce the notions of the Lipschitz Markov decision process and the Lipschitz policy space. We then restrict our attention to this subclass of MDPs. Our main result shows that the optimal policy belongs to the Lipschitz policy space and that the policy gradient globally converges in that space. We defer all proofs of the results in this section to Appendix B." }, { "heading": "4.1 LIPSCHITZ MARKOV DECISION PROCESS", "text": "A Lipschitz Markov decision process has the property that for two state-action pairs that are close with respect to the Euclidean metric in S, the immediate rewards r and Markov transitions P should be close with respect to the Kantorovich or L1-Wasserstein metric. The Kantorovich metric is, arguably, the most common metric used in the analysis of MDPs (cf. Hinderer, 2005; Rachelson and Lagoudakis, 2010; Pirotta et al., 2015).\nDefinition 4.1 (Kantorovich metric). For any two probability measures p, q, the Kantorovich metric between them is\nK(p, q) = sup_f { | ∫_X f d(p − q) | : f is 1-Lipschitz continuous }.\nDefinition 4.2 (Lipschitz MDP). A Markov decision process is (L_r, L_p)-Lipschitz if\n∀s, s′ ∈ S: |r(s) − r(s′)| ≤ L_r |s − s′|;\n∀s, s′ ∈ S, a, a′ ∈ A: K(p(·|s, a), p(·|s′, a′)) ≤ L_p (|s − s′| + |a − a′|)." }, { "heading": "4.2 CHARACTERIZATION OF THE OPTIMAL POLICY", "text": "Our result in Section 3 demonstrates that the second Fréchet derivative of V_δ with respect to δ is negative for a given policy δ when inequality (5) holds.
To bound the second term of (5) from below, i.e., E_{s′}[DV_δ(s′) | s, 0] − E_{s′}[DV_δ(s′) | s, 1], it is sufficient to show that the Fréchet derivative DV_δ(·) is Lipschitz continuous. Even though we already assume that the Markov transition is Lipschitz, it is still possible that DV_δ is not Lipschitz: the Bellman equation (3) for DV_δ depends on the policy δ(s) via log(1 − δ(s)) − log(δ(s)), which can be non-Lipschitz in the state s for general policies δ. Therefore, to guarantee Lipschitzness of the Fréchet derivative of the value function it is necessary to restrict attention to the space of Lipschitz policies. In this subsection, we show that this restriction is meaningful since the optimal policy is Lipschitz.\nTheorem 4.1. Given an (L_r, L_p)-Lipschitz MDP, the optimal policy δ̃ satisfies\n| log( (1 − δ̃(s)) / δ̃(s) ) − log( (1 − δ̃(s†)) / δ̃(s†) ) | ≤ ( L_r + 2βR_max L_p / (1 − β) ) |s − s†|\nfor all states s, s† ∈ S, where R_max = max_{s∈S} r(s) is the maximum of the immediate reward r over S." }, { "heading": "4.3 CONCAVITY OF THE VALUE FUNCTION WITH RESPECT TO LIPSCHITZ POLICIES", "text": "In this subsection, we present our main result showing the global concavity of the value function for our specific class of Lipschitz MDPs with unobserved heterogeneity over the space of Lipschitz policies.\nDefinition 4.3. Given an (L_r, L_p)-Lipschitz MDP, define its Lipschitz policy space ∆ as\n∆ = { δ : δ(s) ≤ δ̄(s) ∀s ∈ S and | log( (1 − δ(s)) / δ(s) ) − log( (1 − δ(s†)) / δ(s†) ) | ≤ ( L_r + 2βR_max L_p / (1 − β) ) |s − s†| ∀s, s† ∈ S },\nwhere δ̄ is the myopic policy.\nTheorem 4.1 and Lemma 2.1 imply that the optimal policy δ̃ lies in this Lipschitz policy space ∆ for any Lipschitz MDP.\nDefinition 4.4 (Condition for global convergence). We say that an (L_r, L_p)-Lipschitz MDP satisfies the sufficient condition for global convergence if\n2βL_p < 1 and ( 2βL_p / (1 − 2βL_p) ) ( 2L_r + 4βR_max L_p / (1 − β) ) ≤ ( exp(R_min) + 1 )² / exp(R_min). (6)\nTheorem 4.2. Given an (L_r, L_p)-Lipschitz MDP which satisfies the condition for global convergence (6), the value function V_δ is concave with respect to the policy δ in the Lipschitz policy space ∆, i.e., D²V_δ(s) ≤ 0 for all s ∈ S, δ ∈ ∆." }, { "heading": "4.4 THE RATE OF GLOBAL CONVERGENCE OF THE POLICY GRADIENT ALGORITHM", "text": "In this subsection, we establish the rate of global convergence of a simple version of the policy gradient algorithm assuming oracle access to the Fréchet derivative of the value function. While this analysis provides only a theoretical guarantee, as discussed in Section 2.3, in practice individuals are able to produce an unbiased estimator of the exact gradient. As a result, the practical application of the policy gradient algorithm would only need to adjust for the impact of stochastic noise in the estimator.\nSince we assume that individuals know the immediate reward function r, the algorithm can be initialized at the myopic policy δ̄ with threshold π̄(s) = r(s), which is in the Lipschitz policy space ∆. From Lemma 2.1 it follows that the myopic policy is pointwise in S greater than the optimal policy, i.e., δ̃(s) ≤ δ̄(s). Consider the policy δ̲ with threshold π̲(s) = r(s) + (β/(1 − β)) R_max − (β/2) R_min. Note that the Bellman equation (2) implies that V(s) is between R_min/2 and R_max/(1 − β) for all states s. Thus, the policy δ̲ pointwise bounds the optimal policy δ̃ from below, i.e., δ̲(s) ≤ δ̃(s). Our convergence rate result applies to the policy gradient within the bounded Lipschitz policy set ∆̂.\nDefinition 4.5.
Given an (L_r, L_p)-Lipschitz MDP, define its bounded Lipschitz policy space ∆̂ as\n∆̂ = { δ : δ̲(s) ≤ δ(s) ≤ δ̄(s) ∀s ∈ S and | log( (1 − δ(s)) / δ(s) ) − log( (1 − δ(s†)) / δ(s†) ) | ≤ ( L_r + 2βR_max L_p / (1 − β) ) |s − s†| ∀s, s† ∈ S }.\nFor simplicity of notation, we introduce constants m and M which depend only on β, R_min, R_max, L_r and L_p, and whose exact expressions are deferred to the supplementary material for this paper.\nTheorem 4.3. Given an (L_r, L_p)-Lipschitz MDP which satisfies the condition for global convergence (6), with the constants m and M defined above, for any step size α ≤ 1/M, the policy gradient initialized at the myopic policy δ̄ and updating as δ ← δ + α∇_δ V_δ within the bounded Lipschitz policy space ∆̂ produces, after k iterations, a policy δ^{(k)} satisfying\nV_δ̃(s) − V_{δ^{(k)}}(s) ≤ (1 − αm)^k / ( exp(R_min) + 1 )² at all s ∈ S." }, { "heading": "5 EMPIRICAL APPLICATION", "text": "To demonstrate the performance of the algorithm, we use the data from Rust (1987), which has become the standard benchmark for the Econometric analysis of MDPs. The paper estimates the cost associated with maintaining and replacing bus engines using data from maintenance records of the Madison Metropolitan Bus City Company over the course of 10 years (December 1974–May 1985). The data contains monthly observations on the mileage of each bus as well as the dates of major maintenance events (such as bus engine replacement).\nRust (1987) assumes that the engine replacement decisions follow an optimal stopping policy derived from solving a dynamic discrete choice model of the type that we described earlier. Using this assumption and the data, he estimates the cost of operating a bus as a function of the running mileage as well as the cost of replacing the bus engine. We use his estimates of the parameters of the return function and the state transition probabilities (bus mileage) to demonstrate convergence of the gradient descent algorithm.\nIn Rust (1987) the state s_t is the running total mileage of the bus accumulated by the end of period t. The immediate reward is specified as a function of the running mileage as:\nr(s_t, a, θ₁) = { −RC + ε_{t1}, if a = 1; −c(s_t, θ₁) + ε_{t0}, if a = 0 }\nwhere RC is the cost of replacing the engine and c(s_t, θ₁) is the cost of operating a bus that has s_t miles.\nFollowing Rust (1987), we take c(s_t, θ₁) = θ₁ s_t. Further, as in the original paper, we discretize the mileage, taking values in the range from 0 to 175 miles, into an even grid of 2,571 intervals. Given the observed monthly mileage, Rust (1987) assumes that transitions on the grid can only be of increments 0, 1, 2, 3 and 4. Therefore, the transition process for the discretized mileage is fully specified by just four parameters θ_{2j} = Pr[s_{t+1} = s_t + j | s_t, a = 0], j = 0, 1, 2, 3. Table 1 describes the parameter values that we use directly from Rust (1987).\nWe use the gradient descent algorithm to update the policy threshold π: ε₁ + π ≥ 0 ⇒ a = 1, where a = 1 denotes the decision to replace the engine. We set the learning rate using the RMSprop method [1].\n[1] We use standard parameter values for the RMSprop method: β = 0.1, ν = 0.001 and ε = 10⁻⁸. The performance of the method was very similar to that when we used Adam to update the threshold values.\nWe use “the lazy projection” method to guarantee that the search stays within the Lipschitz policy space. The policy space is parametrized by the vector of thresholds (π₁, . . . , π_N) corresponding to the discretized state space (s₁, . . . , s_N). It is initialized at the myopic policy, i.e. π₁^{(0)} = u(s₁), . . . , π_N^{(0)} = u(s_N).
At step k the algorithm updates the thresholds to the values π_i^{(k∗)} = π_i^{(k−1)} − α D_{δ^{(k−1)}}V(s_i) L(π_i^{(k−1)})(1 − L(π_i^{(k−1)})), where L(·) is the logistic function and the policy is δ_j^{(k)} = L(π_j^{(k−1)}) for i, j = 1, . . . , N. To make the “lazy projection”, the updated values π_i^{(k∗)} are adjusted to the closest monotone set of values π_1^{(k)} ≤ π_2^{(k)} ≤ . . . ≤ π_N^{(k)}. The algorithm terminates at the step k where the norm max_i |DV_{δ^{(k)}}(s_i)| ≤ τ for a given tolerance τ [2]. The formal definition of the lazy projection can be found in Appendix C.\nFigure 3 demonstrates the convergence properties of our considered version of the policy gradient algorithm. We used the “oracle” versions of the gradient and the value function that were obtained by solving the corresponding Bellman equations. We initialized the algorithm using the myopic threshold π̄(s) = −RC + c(s, θ₁), with the convergence criterion set to be based on the value max_i |DV_δ(s_i)| [3]. In the original model in Rust (1987), the discount factor used when estimating the parameters of the cost function was very close to 1. However, the performance of the algorithm improves drastically when the discount factor is reduced. This feature is closely related to the Hadamard stability of the solution of the Bellman equation (e.g. observed in Bajari et al. (2013)) and is not algorithm-specific. In all of the follow-up analysis by the same author (e.g. Rust (1996)) the discount factor is set to more moderate values of .99 or .9, indicating that these performance issues were indeed observed with the settings in Rust (1987). Figure 3 illustrates the performance of the algorithm for the case where the discount factor is set to 0.99 [4]. For the same convergence criterion, the algorithm converges much faster.\n[2] To optimize the performance of the method it is also possible to consider a mixed norm of the form max_i |π^{(k)}(s_i) − π^{(k−1)}(s_i)| + λ max_i |DV_{δ^{(k)}}(s_i)| ≤ τ for some calibrated weight λ. This choice would control both the rate of decay of the gradient and the advancement of the algorithm in adjusting the thresholds.\n[3] The particular tolerance value used was 0.03, for illustrative purposes.\n[4] When we reduce the cost of replacing the engine along with the discount factor, which ensures that there is significant variation in threshold values across states, convergence is improved even further." }, { "heading": "A OMITTED PROOF FOR THEOREM 3.1", "text": "Theorem 3.1. The value function V_δ is twice Fréchet differentiable with respect to δ at the choice probability δ̃ corresponding to the optimal policy, and its second Fréchet derivative is negative at δ̃ in all states s, i.e., D²V_δ̃(s) ≤ 0.\nProof. We start with the Bellman equation of the value function:\nV_δ(s) = (1 − δ(s)) r(s) − δ(s) log(δ(s)) − (1 − δ(s)) log(1 − δ(s)) + β E_{ε,s′}[ V_δ(s′) | s ]. (7)\nFirst of all, note that in (7) the first three terms on the right-hand side of the equation are simple nonlinear functions of δ(·), and thus the directional derivative with respect to δ(·) can be taken as an ordinary derivative with respect to δ as a parameter. Next note that if a functional J_δ(·) is directionally differentiable with respect to δ and for all h(·) the ratio (d/dτ) J_{δ+τh}(·)|_{τ=0} / h(·) is invariant, then J_δ(·) is Fréchet differentiable with respect to δ and the above ratio is its Fréchet derivative. As a result, the Fréchet derivative of the simple functional (1 − δ(s)) r(s) − δ(s) log(δ(s)) − (1 − δ(s)) log(1 − δ(s)) with respect to δ(·) exists and equals log(1 − δ(s)) − log(δ(s)) − r(s).
This expression is itself a Freéchet-differentiable functional with Fréchet derivative equal to −1/(δ(s)(1 − δ(s))), meaning that the original functional (1 − δ(s)) r(s) − δ(s) log (δ(s)) − (1 − δ(s)) log(1 − δ(s)) is twice Fréchet differentiable with the second Fréchet derivative −1/(δ(s)(1− δ(s))). Whenever the state transition is affected by the individual decision we need to consider decomposition of the conditional expectation with respect to the future state:\nE ,s′ [Vδ(s′)|s] = (1− δ(s)) Es′ [Vδ(s′)|s, 1] + δ(s) Es′ [Vδ(s′)|s, 0] .\nUnder standard technical conditions that allow the swap of the derivative and the integral\nDEs′ [Vδ(s′)|s] = (Es′ [Vδ(s′)|s, 0]− Es′ [Vδ(s′)|s, 1]) + Es′ [DVδ(s′)|s] Thus, the Fréchet derivative of the value function should be the fixed point of the following Bellman equation\nDVδ(s) = (log(1− δ(s))− log(δ(s))− r(s)) + β (Es′ [Vδ(s′)|s, 0]− Es′ [Vδ(s′)|s, 1]) + β E ,s′ [DVδ(s′)|s] ,\n(8)\nand\nD2Vδ(s) = − 1\nδ(s)(1− δ(s)) − 2β(Es′ [DVδ(s′)|s, 1]− Es′ [DVδ(s′)|s, 0]) + β Es′ [ D2Vδ(s ′)|s ] .\n(9)\nGiven that both these equations are Type II Fredholm integral equations for DVδ(·) and D2Vδ(·) which have unique solutions whenever β < 1 that are bounded and continuous (see Dunford and Schwartz (1957)) and, thus, unique solutions for both equations exist and Vδ(·) is indeed Fréchetdifferentiable. This means that the necessary condition for its optimum yielding δ̃ is DVδ̃(s) = 0 for all states s. As a result, equation (9) implies that its second Fréchet derivative is negative for all states, i.e.,D2Vδ̃(s) ≤ 0." }, { "heading": "B OMITTED PROOFS IN SECTION 4", "text": "B.1 OMITTED PROOF OF THEOREM 4.1\nTheorem 4.1. Given an (Lr, Lp)-Lipschitz MDP, the optimal policy δ̃ satisfies∣∣∣∣∣log ( 1− δ̃(s) δ̃(s) ) − log ( 1− δ̃(s†) δ̃(s†) )∣∣∣∣∣ ≤ ( Lr + 2βRmaxLp 1− β ) ∣∣s− s†∣∣ for all state s, s† ∈ S where Rmax = maxs∈S r(s) is the maximum of the immediate reward r over S.\nProof. At the optimal policy δ̃, the Fréchet derivative of the value function is zero, i.e., DVδ̃(s) = 0 for all state s. Therefore, from the Bellman equation (8) we establish that\nlog ( 1− δ̃(s) δ̃(s) ) = r(s) + β (Es′ [ Vδ̃(s ′)|s, 1 ] − Es′ [ Vδ̃(s ′)|s, 0 ] )\nThus, for all states s, s† ∈ S,∣∣Es′[Vδ̃(s′)|s, a]− Es′[Vδ̃(s′)|s†, a]∣∣ =\n∣∣∣∣∫ s′∈S Vδ̃(s ′)(p(s′|s, a)− p(s′|s†, a))ds′ ∣∣∣∣ = Rmax 1− β ∣∣∣∣∫ s′∈S (1− β) Rmax Vδ̃(s ′)(p(s′|s, a)− p(s′|s†, a))ds′\n∣∣∣∣ ≤Rmax\n1− β sup ‖f‖L≤1 {∣∣∣∣∫ s′∈S f(s′)(p(s′|s, a)− p(s′|s†, a))ds′ ∣∣∣∣}\n= Rmax 1− β K(p(·|s, a), p(·|s†, a)) ≤ RmaxLp 1− β ∣∣s− s†∣∣ where we use upper bounds sups∈S Vδ̃(s) ≤ Rmax 1−β and ‖ (1−β) Rmax\nVδ̃(s ′)‖L ≤ 1. Thus,∣∣∣∣∣log ( 1− δ̃(s) δ̃(s) ) − log ( 1− δ̃(s′) δ̃(s†) )∣∣∣∣∣ =|r(s)− r(s†) + β (Es′ [ Vδ̃(s ′)|s, 1 ] − Es′ [ Vδ̃(s ′)|s, 0 ] )\n− β (Es′ [ Vδ̃(s ′)|s†, 1 ] − Es′ [ Vδ̃(s ′)|s†, 0 ] )|\n≤ ∣∣r(s)− r(s†)∣∣+ β ∣∣Es′[Vδ̃(s′)|s, 1]− Es′[Vδ̃(s′)|s†, 1]∣∣\n+ β ∣∣Es′[Vδ̃(s′)|s, 0]− Es′[Vδ̃(s′)|s†, 0]∣∣\n≤ ( Lr +\n2βRmaxLp 1− β ) ∣∣s− s†∣∣ B.2 OMITTED PROOF OF THEOREM 4.2\nTheorem 4.2. Given an (Lr, Lp)-Lipschitz MDP which satisfies the condition for global convergence (6), the value function Vδ is concave with respect to policy δ in the Lipschitz policy space ∆, i.e., D2Vδ(s) ≤ 0 for all s ∈ S, δ ∈ ∆.\nTo show Theorem 4.2, we first introduce the following lemma establishing Lipschitz continuity of the Fréchet derivative of the value function.\nLemma B.1. 
Given a (Lr, Lp)-Lipschitz MDP, for all policies δ in the the Lipschitz policy space ∆, the Fréchet derivative of the respective value function DVδ(·) is ( 2Lr+ 4βRmaxLp\n1−β 1−2βLp\n) -Lipschitz\ncontinuous, i.e., for all states s, s† ∈ S,\n∣∣DVδ(s)−DVδ(s†)∣∣ ≤ (2Lr + 4βRmaxLp1−β 1− 2βLp )∣∣s− s†∣∣ . Proof. We begin with the Bellman equation (8) for the Fréchet derivative of value function.\nDVδ(s) = log ( 1− δ(s) δ(s) ) − r(s)\n+ β (Es′ [Vδ(s′)|s, 0]− Es′ [Vδ(s′)|s, 1]) + β E ,s′ [DVδ(s′)|s]\nWe use the concept of the contraction mapping to prove the result of the Lemma.\nDefinition B.1. Let T : X → X be a mapping from a metric space X to itself,\n• T is a contraction mapping (with modulus γ ∈ [0, 1)) if ρ(T (x), T (y)) ≤ γρ(x, y) for all x, y ∈ X , where ρ is a metric on X .\n• x is a fixed point of T if T (x) = x.\nLemma B.2. Suppose that X is a complete metric space and that T : X → X is a contraction mapping with modulus γ. Then,\n• T has a unique fixed point x∗.\n• If X ′ ⊆ X is a closed subset for which T (X ′) ⊆ X ′, then x∗ ∈ X ′.\nConsider the contraction mapping Tδ(x)(s) = log ( 1−δ(s) δ(s) ) − r(s) + β (Es′ [Vδ(s′)|s, 0] − Es′ [Vδ(s′)|s, 1]) + β E ,s′ [x(s′)|s], then the Bellman equation implies that DVδ is the fixed point of contraction mapping Tδ . Since the Lipschitz continuity property forms a closed subset, by Lemma B.2, it is sufficient to show for any LDV -Lipschitz continuous x, Tδ(x) is also LDV -Lipschitz continuous,\nwhere LDV = 2Lr+\n4βRmaxLp 1−β\n1−2βLp . Thus, consider states s, s † ∈ S,∣∣T (x)(s)− T (x)(s†)∣∣\n≤ ∣∣∣∣log(1− δ(s)δ(s) ) − log ( 1− δ(s†) δ(s†) )∣∣∣∣+ ∣∣r(s)− r(s†)∣∣ + β\n∣∣Es′ [Vδ(s′)|s, 0]− Es′[Vδ(s′)|s†, 0]∣∣ + β\n∣∣Es′ [Vδ(s′)|s, 1]− Es′[Vδ(s′)|s†, 1]∣∣ + β\n∣∣Es′ [x(s′)|s, 0] δ(s)− Es′[x(s′)|s†, 0] δ(s†)∣∣ + β\n∣∣Es′ [x(s′)|s, 1] (1− δ(s))− Es′[x(s′)|s†, 1] (1− δ(s†))∣∣ where ∣∣∣∣log(1− δ(s)δ(s) ) − log ( 1− δ(s†) δ(s†)\n)∣∣∣∣ ≤ (Lr + 2βRmaxLp1− β ) ∣∣s− s†∣∣∣∣r(s)− r(s†)∣∣ ≤ Lr ∣∣s− s†∣∣\nby the same calculation in the proof of Theorem 4.1, for a = 0, 1,\nβ ∣∣Es′ [Vδ(s′)|s, a]− Es′[Vδ(s′)|s†, a]∣∣ ≤ (Lr + 2βRmaxLp\n1− β\n) ∣∣s− s†∣∣\nand ∣∣Es′ [x(s′)|s, 0] δ(s)− Es′[x(s′)|s†, 0] δ(s†)∣∣ =\n∣∣∣∣∫ s′∈S (δ(s)− δ(s†))x(s′)(p(s′|s, a)− p(s′|s†, a))ds′ ∣∣∣∣\n=LDV ∣∣∣∣∫ s′∈S (δ(s)− δ(s†))x(s′) LDV (p(s′|s, a)− p(s′|s†, a))ds′ ∣∣∣∣\n≤LDV sup ‖f‖L≤1 {∣∣∣∣∫ s′∈S f(s′)(p(s′|s, a)− p(s′|s†, a))ds′ ∣∣∣∣}\n=LDVK(p(·|s, a), p(·|s†, a)) ≤ LDV Lp ∣∣s− s†∣∣\nwhere we use the bound ∣∣δ(s)− δ(s†)∣∣ ≤ 1 and thus ‖ (δ(s)−δ(s†))x(s′)LDV ‖L ≤ 1. Similarly,∣∣Es′ [x(s′)|s, 1] (1− δ(s))− Es′[x(s′)|s†, 1] (1− δ(s†))∣∣ ≤ LDV Lp ∣∣s− s†∣∣\nCombining all the bounds, we obtain that∣∣T (x)(s)− T (x)(s†)∣∣ ≤ (2Lr + 4βRmaxLp 1− β + 2βLDV Lp ) ∣∣s− s†∣∣ . Substitution LDV = 2Lr+ 4βRmaxLp\n1−β 1−2βLp yields the statement of the Lemma.\nProof of Theorem 4.2. From the Bellman equation (9), it is sufficient to show\n1\nδ(s)(1− δ(s)) ≥ 2β(Es′ [DVδ(s′)|s, 1]− Es′ [DVδ(s′)|s, 0]) (10)\nWe bound both sides separately. Since the policy satisfies δ(s) ≤ δ̄(s) for all states s, and δ̄(s) = 1 exp(r(s))+1 ≤ 1 2 , the left hand side can be bounded from below as\n1 δ(s)(1− δ(s)) ≥ 1 δ̄(s)(1− δ̄(s))\n≥ ( exp(Rmin) + 1 )2\nexp(Rmin)\nMeanwhile, the righthand side can be bounded from above by Lemma B.1. 
Let LDV = 2Lr+ 4βRmaxLp 1−β\n1−2βLp ,\n2β |Es′ [DVδ(s′)|s, 1]− Es′ [DVδ(s′)|s, 0]|\n=2β ∣∣∣∣∫ s′∈S DVδ(s ′)(p(s′|s, 1)− p(s′|s, 0))ds′ ∣∣∣∣ =2βLDV ∣∣∣∣∫ s′∈S DVδ(s ′) LDV (p(s′|s, 1)− p(s′|s, 0))ds′\n∣∣∣∣ ≤2βLDV sup\n‖f‖L≤1 {∣∣∣∣∫ s′∈S f(s′)(p(s′|s, 1)− p(s′|s, 0))ds′ ∣∣∣∣}\n=2βLDVK(p(·|s, 1), p(·|s, 0)) ≤ 2βLp\n1− 2βLp\n( 2Lr +\n4βRmaxLp 1− β ) From the condition of global convergence\n2βLp 1− 2βLp\n( 2Lr +\n4βRmaxLp 1− β\n) ≤ ( exp(Rmin) + 1 )2\nexp(Rmin)\nit follows that the inequality (10) is satisfied and the Bellman equation (9) implies that D2Vδ(s) ≤ 0 for all states s ∈ S.\nB.3 OMITTED PROOF OF THEOREM 4.3\nFor notation simplicity, we introduce notations m and M such that\nm = 1\n1− β\n(( exp(Rmin) + 1 )2 exp(Rmin) − 2βLp 1− 2βLp ( 2Lr + 4βRmaxLp 1− β ))\nM = 1\n(1− β)2\n(1− β) ( exp ( 1 1−βRmax − β 2Rmin ) + 1 )2\nexp( 11−βRmax − β 2Rmin)\n+2β ( exp ( 1\n1− β Rmax −\nβ 2 Rmin\n) + 2β\n1− β Rmax − (1 + β)Rmin )) Theorem 4.3. Given a (Lr, Lp)-Lipschitz MDP, which satisfies the condition for global convergence (6) and constants m and M defined above, for any step size α ≤ 1M , the policy gradient initialized at the myopic policy δ̄ and updating as δ ← α∇δVδ in the bounded Lipschitz policy space ∆̂ after k iterations, it produces policy δ(k) satisfying\nVδ̃(s)− Vδ(k)(s) ≤ (1− αm)k\n(exp(Rmin) + 1) 2\nat all s ∈ S.\nOur analysis follows the standard steps establishing convergence of the conventional gradient descent algorithm which bounds the second Fréchet derivative of the value function Vδ with respect to the policy δ from above and from below by m and M respectively. Lemma B.3. Given a (Lr, Lp)-Lipschitz MDP, which satisfies the condition for global convergence (6), for all policies δ in the bounded Lipschitz policy space ∆̂, for all states s ∈ S , the second Fréchet derivative of the value function Vδ with respect to the policy δ is upperbounded as\nD2Vδ(s) ≤ −m.\nProof. The Bellman equation (9) implies that\nmax s\nD2Vδ(s) ≤ 1\n1− β\n( −min\ns\n1\nδ(s)(1− δ(s))\n+ 2βmax s\n(Es′ [DVδ(s′)|s, 0]− Es′ [DVδ(s′)|s, 1]) )\nBy the same argument as in Theorem 4.2,\nmin s\n1 δ(s)(1− δ(s)) ≥ ( exp(Rmin) + 1 )2\nexp(Rmin)\nmax s\n(Es′ [DVδ(s′)|s, 1]− Es′ [ DVδ(s ′)|s†, 0 ] ) ≤ 2βLp\n1− 2βLp\n( 2Lr +\n4βRmaxLp 1− β ) Thus, for all state s ∈ S,\nD2Vδ(s) ≤ −m. Lemma B.4. Given a (Lr, Lp)-Lipschitz MDP, which satisfies the condition for global convergence, for all policy δ in the bounded Lipschitz policy space ∆̂, for all state s ∈ S , the second derivative of the value function Vδ with respect to the policy δ is is lowerbounded as\nD2Vδ(s) ≥ −M.\nProof. 
The Bellman equation (9) implies that\nmin s\nD2Vδ(s) ≥ 1\n1− β\n( −max\ns\n1\nδ(s)(1− δ(s)) + 2β (\nmin s DVδ(s)−max s\nDVδ(s) ))\nBy restricting policy to the bounded Lipschitz policy space ∆̂ we bound\nmax s\n1\nδ(s)(1− δ(s)) ≤ max s\n1\nδ(s)(1− δ(s)) ≤\n( exp ( 1\n1−βRmax − β 2Rmin\n) + 1 )2\nexp( 11−βRmax − β 2Rmin)\nProvided\nmin s Vδ(s) ≥ Rmin 2\nmax s Vδ(s) ≤ Rmax 1− β\nmin s\n( log ( 1− δ(s) δ(s) ) − r(s) ) ≥ min s ( log ( 1− δ̄(s) δ̄(s) ) − r(s) ) = 0\nmax s\n( log ( 1− δ(s) δ(s) ) − r(s) ) ≤ max s log ( 1− δ(s) δ(s) ) −min s r(s)\n≤ exp ( 1\n1− β Rmax −\nβ 2 Rmin\n) −Rmin\nit follows from Bellman equation (8) that\nmin s\nDVδ(s) ≥ 1\n1− β ( min s ( log ( 1− δ(s) δ(s) ) − r(s) ) + β(min s Vδ(s)−max s Vδ(s)) ) ≥ β\n1− β\n( Rmin\n2 − Rmax 1− β ) max s DVδ(s) ≤ 1 1− β ( max s ( log ( 1− δ(s) δ(s) ) − r(s) ) + β(max s Vδ(s)−min s Vδ(s))\n) ≤ 1\n1− β\n( exp ( 1\n1− β Rmax −\nβ 2 Rmin\n) + β\n1− β Rmax −\n2 + β\n2 Rmin ) Thus, for all state s ∈ S,\nD2Vδ(s) ≥ −M.\nProof of Theorem 4.3. The convergence rate guarantee follows from Lemma B.3 and Lemma B.4, under the standard arguments for the gradient descent algorithm for m-strongly concave and M - smooth (i.e., M -Lipschitz gradient) functions (cf. Bansal and Gupta, 2017)." }, { "heading": "C MORE RESULTS IN SECTION 5", "text": "Algorithm 1 “Lazy projection”, (π1, . . . , πN ): thresholds corresponding to discretized state space (s1, . . . , sN ); L(·): logistic function; policy δj = L(πj); α: step size; τ : termination tolerance\n1: π(0)1 ← u(s1), . . . , π (0) N ← u(sN ) // Initialize π(0) at the myopic policy 2: while maxi |DVδ(k)(si)| ≤ τ do 3: π(k∗)i ← π (k−1) i − αDδ(k−1)Vδ(k−1)(si)L(π (k−1) i )(1− L(π (k−1) i )) for all i ∈ [N ] 4: (π(k)1 , . . . , π (k) N )← the closest monotone thresholds of (π (k∗) 1 , . . . , π (k∗) N ) // Lazy projection 5: return (π(k)1 , . . . , π (k) N )\nWe list the convergence of gradient descent and its derivative, second derivative at smaller discount factor β = 0.9." } ]
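A compact NumPy rendering of Algorithm 1 might look as follows. Here the oracle gradient DV_δ is obtained by iterating its own Bellman fixed point (equation (8)), and the "closest monotone set of values" step is realized with pool-adjacent-violators isotonic regression — one standard way to implement such a projection, though the paper does not commit to a particular method. The environment, the plain (non-RMSprop) step size, and the iteration caps are placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
N, beta, alpha, tau = 50, 0.9, 0.1, 1e-3
u = rng.uniform(0.5, 1.5, size=N)               # placeholder rewards u(s_i)
P0 = rng.dirichlet(np.ones(N), size=N)          # p(. | s, a = 0)
P1 = rng.dirichlet(np.ones(N), size=N)          # p(. | s, a = 1)

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def bellman_V(delta, iters=1000):
    V = np.zeros(N)
    for _ in range(iters):                      # contraction: plain fixed-point iteration
        ent = -delta * np.log(delta) - (1 - delta) * np.log(1 - delta)
        V = (1 - delta) * u + ent + beta * (delta * (P0 @ V) + (1 - delta) * (P1 @ V))
    return V

def bellman_DV(delta, V, iters=1000):
    DV = np.zeros(N)                            # fixed point of equation (8)
    for _ in range(iters):
        DV = (np.log(1 - delta) - np.log(delta) - u) + beta * (P0 @ V - P1 @ V) \
            + beta * (delta * (P0 @ DV) + (1 - delta) * (P1 @ DV))
    return DV

def isotonic(y):
    """Pool adjacent violators: closest (L2) nondecreasing sequence -- the lazy projection."""
    vals, wts = [], []
    for v in y:
        vals.append(float(v)); wts.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:
            w = wts[-2] + wts[-1]
            m = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / w
            vals[-2:], wts[-2:] = [m], [w]
    return np.repeat(vals, wts)

pi = u.copy()                                   # initialize at the myopic thresholds
for k in range(500):
    delta = 1.0 - logistic(pi)                  # delta(s) = 1 / (exp(pi(s)) + 1)
    DV = bellman_DV(delta, bellman_V(delta))
    if np.max(np.abs(DV)) <= tau:               # termination criterion of Algorithm 1
        break
    pi = isotonic(pi - alpha * DV * logistic(pi) * (1 - logistic(pi)))
```

Note that the update `pi - alpha * DV * L * (1 - L)` is exactly line 3 of Algorithm 1: it is gradient ascent on V with respect to π once the chain rule through δ(s) = 1/(exp(π(s)) + 1) is applied.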
2,019
null
SP:3145e0027567692ea5c3ca4ef8d0d94b40f8f27f
[ "This paper want to show that minimizing cross-entropy loss will simultaneously minimize Hinge loss with different margins, cross-entropy loss with different temperatures and a newly introduced Gcdf loss with different standard deviations. The main contribution is a new gcdf loss based on Gaussian-perturbed parameters. However, this loss can only be used with linear models. For deep models, the authors suggest that only measure this loss on the top layer of model.", "This paper makes a step towards understanding of the implicit bias of optimization algorithms in deep learning. The authors consider alternative loss functions for deep networks: (1) the temperature-scaled cross-entropy loss with different values of the temperature; (2) the hinge-loss with different values of the margin parameter; (3) the Gcdf loss with different values of the variance parameter. The paper introduces the Gcdf loss which is derived as a modification of the 0-1 loss under the noise in the parameters of the linear output layer. The authors propose to use the alternative losses as measures of margin and sharpness associated with a solution found by an optimization algorithm. The experiments show how SGD in different learning scenarios (low/high learning rate and small/large batch) performs implicit minimization of the alternative loss functions with different parameters. Specifically, using larger learning rates/smaller batch sizes is shown to implicitly minimize the losses corresponding to higher values of the temperature/margin/variance. The results provide insights about margins and sharpness of solutions found by different modes of SGD." ]
Understanding the implicit bias of optimization algorithms is important in order to improve the generalization of neural networks. One approach to exploiting such understanding would be to make the bias explicit in the loss function. Conversely, an interesting approach to gain more insight into the implicit bias could be to study how different loss functions are implicitly minimized when training the network. In this work, we concentrate our study on the inductive bias occurring when minimizing the cross-entropy loss with different batch sizes and learning rates. We investigate how three loss functions are implicitly minimized during training. These three loss functions are the Hinge loss with different margins, the cross-entropy loss with different temperatures, and a newly introduced Gcdf loss with different standard deviations. The Gcdf loss establishes a connection between a sharpness measure for the 0-1 loss and margin-based loss functions. We find that a common behavior emerges for all the loss functions considered.
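For a binary linear classifier these three families of losses are easy to write down explicitly. The Gcdf form below follows the description of the 0-1 loss under Gaussian noise on the output-layer weights: if w is perturbed by N(0, σ²I), the probability of misclassifying (x, y) is Φ(−y⟨w, x⟩/(σ‖x‖)). This closed form is our reading of the construction rather than the paper's verbatim definition, and the example values are arbitrary.

```python
import numpy as np
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def hinge_loss(m, margin=1.0):
    """Hinge loss with margin parameter; m = y * <w, x> is the signed score."""
    return max(0.0, margin - m)

def ce_loss(m, temperature=1.0):
    """Binary cross-entropy on a temperature-scaled logit."""
    return float(np.log1p(np.exp(-m / temperature)))

def gcdf_loss(m, x_norm, sigma=1.0):
    """Expected 0-1 loss under w -> w + eps, eps ~ N(0, sigma^2 I):
    P[ y * <w + eps, x> <= 0 ] = Phi(-m / (sigma * ||x||))  (assumed form)."""
    return Phi(-m / (sigma * x_norm))

w, x, y = np.array([1.0, -2.0]), np.array([0.5, -0.5]), 1
m = y * float(w @ x)                      # signed margin of the example
for T in (0.5, 1.0, 2.0):                 # track each loss at several parameter values
    print("CE  T=%.1f: %.4f" % (T, ce_loss(m, T)))
for s in (0.5, 1.0, 2.0):
    print("Gcdf s=%.1f: %.4f" % (s, gcdf_loss(m, float(np.linalg.norm(x)), s)))
```

Averaging such quantities over the training set at matched values of the training loss is all that the comparison methodology requires.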
[]
[ { "authors": [ "Sanjeev Arora", "Nadav Cohen", "Wei Hu", "Yuping Luo" ], "title": "Implicit regularization in deep matrix factorization", "venue": "CoRR, abs/1905.13655,", "year": 2019 }, { "authors": [ "Peter L. Bartlett", "Dylan J. Foster", "Matus Telgarsky" ], "title": "Spectrally-normalized margin bounds for neural networks. In Advances in Neural Information Processing Systems", "venue": "Annual Conference on Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Gamaleldin F. Elsayed", "Dilip Krishnan", "Hossein Mobahi", "Kevin Regan", "Samy Bengio" ], "title": "Large margin deep networks for classification", "venue": "In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Priya Goyal", "Piotr Dollár", "Ross B. Girshick", "Pieter Noordhuis", "Lukasz Wesolowski", "Aapo Kyrola", "Andrew Tulloch", "Yangqing Jia", "Kaiming He" ], "title": "Accurate, large minibatch SGD: training imagenet in 1 hour", "venue": "CoRR, abs/1706.02677,", "year": 2017 }, { "authors": [ "Elad Hoffer", "Itay Hubara", "Daniel Soudry" ], "title": "Train longer, generalize better: closing the generalization gap in large batch training of neural networks", "venue": "In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Yiding Jiang", "Dilip Krishnan", "Hossein Mobahi", "Samy Bengio" ], "title": "Predicting the generalization gap in deep networks with margin distributions", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Nitish Shirish Keskar", "Dheevatsa Mudigere", "Jorge Nocedal", "Mikhail Smelyanskiy", "Ping Tak Peter Tang" ], "title": "On large-batch training for deep learning: Generalization gap and sharp minima", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Qianli Liao", "Brando Miranda", "Andrzej Banburski", "Jack Hidary", "Tomaso A. Poggio" ], "title": "A surprising linear relationship predicts test performance in deep networks", "venue": "CoRR, abs/1807.09659,", "year": 2018 }, { "authors": [ "Yi Lin" ], "title": "A note on margin-based loss functions in classification", "venue": "Statistics and Probability Letters,", "year": 2004 }, { "authors": [ "Kaifeng Lyu", "Jian Li" ], "title": "Gradient descent maximizes the margin of homogeneous neural networks", "venue": "CoRR, abs/1906.05890,", "year": 2019 }, { "authors": [ "Behnam Neyshabur", "Ryota Tomioka", "Nathan Srebro" ], "title": "Norm-based capacity control in neural networks", "venue": "In Proceedings of The 28th Conference on Learning Theory, COLT 2015,", "year": 2015 }, { "authors": [ "Behnam Neyshabur", "Srinadh Bhojanapalli", "David McAllester", "Nati Srebro" ], "title": "Exploring generalization in deep learning", "venue": "In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Tomaso A. Poggio", "Andrzej Banburski", "Qianli Liao" ], "title": "Theoretical issues in deep networks: Approximation, optimization and generalization", "venue": "CoRR, abs/1908.09375,", "year": 2019 }, { "authors": [ "Samuel L. Smith", "Quoc V. Le" ], "title": "A bayesian perspective on generalization and stochastic gradient descent", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Samuel L. 
Smith", "Pieter-Jan Kindermans", "Chris Ying", "Quoc V. Le" ], "title": "Don’t decay the learning rate, increase the batch size", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Daniel Soudry", "Elad Hoffer", "Mor Shpigel Nacson", "Nathan Srebro" ], "title": "The implicit bias of gradient descent on separable data", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Ashia C. Wilson", "Rebecca Roelofs", "Mitchell Stern", "Nati Srebro", "Benjamin Recht" ], "title": "The marginal value of adaptive gradient methods in machine learning", "venue": "In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Yang You", "Igor Gitman", "Boris Ginsburg" ], "title": "Scaling SGD batch size to 32k for imagenet training", "venue": "CoRR, abs/1708.03888,", "year": 2017 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "In the last few years, deep learning has succeeded in establishing state of the art performances in a wide variety of tasks in fields like computer vision, natural language processing and bioinformatics (LeCun et al., 2015). Understanding when and how these networks generalize better is important to keep improving their performance. Many works starting mainly from Neyshabur et al. (2015), Zhang et al. (2017) and Keskar et al. (2017) hint to a rich interplay between regularization and the optimization process of learning the weights of the network. The idea is that a form of inductive bias can be realized implicitly by the optimization algorithm.\nIn this paper, we investigate the implicit bias induced from using different learning rates and batch sizes when minimizing the cross-entropy loss with SGD. A common theory is that more noise in the gradient bias the solution toward flatter minima (Keskar et al., 2017). We draw a connection between a particular measure of flatness and margin based loss functions1.\nOur contributions are the following:\n1. A new loss function (Gcdf loss) that can be interpreted as a measure of flatness for the 0−1 loss (for the top layer’s weights of the network).\n2. A methodology consisting in tracking alternative loss functions during training and comparing them for a given training loss value to try to uncover implicit biases in the optimization algorithm applied to varying the learning rate and batch size in SGD.\n3. Experimental results on CIFAR10 and MNIST showing that larger learning rates and smaller batch sizes are better at implicitly minimizing the cross-entropy loss with larger temperature parameter, the hinge loss with larger margin parameter and the Gcdf loss with larger standard deviation parameter. At the opposite, smaller learning rates and larger batch sizes are better at implicitly minimizing the cross-entropy loss, the hinge loss and the Gcdf loss with smaller values of their respective parameter.\n1The concept of margin has been link to generalization of deep networks; see for example Bartlett et al. (2017), Poggio et al. (2019) and Jiang et al. (2019)\nWe do not propose to modify optimization algorithms to try to improve large batch training but we instead try to offer new insights on how the solutions it produces are different from solutions resulting from small batch training (or larger learning rates). The hope is to eventually succeed at incorporating the inductive bias in the objective being optimized instead of relying on the implicit bias of the optimization algorithm. It is not yet clear to what extent this goal can be realized (and by what means2) and we certainly do not claim to be reaching it. We offer only a partial understanding of some of the differences between large batch training (or using small learning rates) and small batch training (or using large learning rates) through the behavior of alternative loss functions during training." }, { "heading": "2 RELATED WORK", "text": "It was observed by Zhang et al. (2017) that deep networks can often obtain good results without explicit regularization even if they have the capacity to essentially memorize the training set. They hypothesized that SGD is probably acting as an implicit regularizer. Also, the earlier work of Neyshabur et al. (2015) brought forward the idea that optimization might be implicitly biasing the trajectory toward low norm models. 
Since then, many works have investigated the idea of implicit regularization for neural networks (linear or non-linear). For example, Arora et al. (2019) studied how gradient descent finds low-rank solutions for matrix completion with deep linear networks. Soudry et al. (2018) showed that gradient descent converges to the max-margin solution for logistic regression, and Lyu & Li (2019) provides an extension to deep non-linear homogeneous networks. In contrast to these works, we study empirically how the optimization algorithm implicitly minimizes alternative loss functions during the course of training.
A highly studied source of implicit bias from the optimization algorithm is the ability to reach flatter minima. In Keskar et al. (2017), the worst loss that can be obtained when slightly perturbing the parameters is considered as a measure of sharpness, while Neyshabur et al. (2017) considered the expected loss under Gaussian noise in the weights. We consider a measure of sharpness (section 3) similar to Neyshabur et al. (2017) and we apply it to the 0−1 loss directly instead of the usual surrogate cross-entropy loss.
The batch size and the learning rate are two ways to control the noise in the gradient, which might influence the sharpness of the resulting solution (see for example Smith & Le (2018), Smith et al. (2018)). In conjunction with increasing the learning rate, different strategies like training for more epochs (Hoffer et al., 2017), "warm up" (Goyal et al., 2017) and using a separate learning rate for each layer based on the norm of the weights (You et al., 2017) have been proposed to improve the performance of large batch training. Instead of trying to offer a new modification to the optimization algorithm, we try here to capture the inductive bias in loss functions that are computationally efficient to use, in the hope of eventually simplifying the design of optimization algorithms." }, { "heading": "3 GCDF LOSS", "text": "This section introduces a loss function based on the idea of flat minima. It is defined as a measure of sharpness for the 0−1 loss. The main motivation for introducing this loss function is that it is simultaneously a measure of sharpness and a margin-based loss function, establishing a clear relationship between these ideas. Furthermore, as opposed to the cross-entropy loss and the Hinge loss, it is bounded and non-convex (see section 4.1 for a visual comparison). It thus adds more diversity to the loss functions investigated in this paper. We start with the binary linear case in 3.1 and then extend to the multi-class case in 3.2. For deep networks, this loss will be applied on the top layer. A possible extension of our work is to consider loss functions applied to multiple layers, perhaps in a fashion similar to Elsayed et al. (2018).
2See for example Arora et al. (2019) on the difficulty of capturing the implicit bias of gradient descent with norms." }, { "heading": "3.1 BINARY LINEAR CASE", "text": "Let f(w, x) = w^T x + b, where w, x ∈ R^n and b ∈ R. Consider the 0−1 loss for a binary linear classifier: L(f(w, x), y) = 1[y(w^T x + b) < 0], where 1 is the indicator function. Note that we will write all the loss functions for single examples (x, y) throughout the paper, and it will be understood that the training loss is obtained by taking the mean over the training set. We smooth (or "robustify") the 0−1 loss by considering its expectation under Gaussian noise in the weights.
This loss function will then be denoted by L_σ(w, x, y) when the standard deviation is σ. Consider the random variable ε ∼ N(0, σ²I), where N(0, σ²I) is a zero-mean isotropic Gaussian distribution with covariance matrix σ²I. Since (w + ε)^T x + b is distributed as a Gaussian with mean w^T x + b and variance σ²‖x‖², we get that y((w + ε)^T x + b) is distributed as a Gaussian with mean y(w^T x + b) and the same variance. Therefore,
L_σ(w, x, y) = E_ε[L(f(w + ε, x), y)] = Φ(−y(w^T x + b) / (σ‖x‖)), (1)
where Φ is the Gaussian cumulative distribution function (Gcdf) given by
Φ(z) = (1/√(2π)) ∫_{−∞}^{z} exp(−t²/2) dt. (2)
If we assume that x is normalized, the loss L_σ is a (decreasing) function of y f(w, x) (it is a margin-based loss function in the terminology of Lin (2004), for example)." }, { "heading": "3.2 MULTI-CLASS CASE", "text": "Suppose the number of classes is m and now consider the affine mapping f(W, x) = Wx + b with x ∈ R^n, b ∈ R^m and W ∈ R^{m×n}. For some fixed x ∈ R^n and denoting by w_j the j-th row of W, let s_j := w_j^T x + b_j be the corresponding score for class j. Finally, let s_j(ε_j) := (w_j + ε_j)^T x + b_j be the perturbed score, with ε_j an isotropic Gaussian random variable with mean 0 and covariance matrix σ²I. For a given class y, we get
P{s_y(ε_y) ≠ max_j s_j(ε_j)} ≤ ∑_{j≠y} P{s_j(ε_j) > s_y(ε_y)} (3)
= ∑_{j≠y} P{s_j − s_y > (ε_y − ε_j)^T x} (4)
= ∑_{j≠y} Φ((s_j − s_y) / (‖x‖σ√2)), (5)
since (ε_y − ε_j)^T x follows a zero-mean Gaussian distribution with variance 2σ²‖x‖². We define
L_σ(W, x, y) := ∑_{j≠y} Φ((s_j − s_y) / (‖x‖σ√2)). (6)
This is an upper bound on the probability that the classifier does not predict y under Gaussian noise on W. We will experiment with this Gcdf loss function on top of feedforward neural networks (and also with other loss functions) in the following sections. In all the experiments, we use normalization to enforce ‖x‖ = 1 (this x now represents the feature vector for the top layer)." }, { "heading": "4 IMPLICIT MINIMIZATION OF DIFFERENT LOSS FUNCTIONS", "text": "In this section, we track different loss functions while training deep neural networks with the cross-entropy loss, varying the learning rates and batch sizes in SGD with momentum. The results in the main text are obtained while training on CIFAR10. Results on MNIST are given in Appendix A. The following loss functions are considered: cross-entropy with different values of temperature, the Hinge loss with different margin parameters and the Gcdf loss with different standard deviation parameters.
For the cross-entropy loss, the temperature T divides the scores s_j before the softmax function. That is, the probability for class j is then given by
exp(s_j/T) / ∑_k exp(s_k/T). (7)
Remark that the positive homogeneity of the ReLU implies that normalizing each layer of the network is equivalent to taking T equal to the product of the norms of the layers. The cross-entropy loss after normalization at the end of training is investigated in Liao et al. (2018). In contrast, we consider here multiple values of T and investigate the behavior during training. Given the probabilities for each class, the cross-entropy loss (on a single example) is then the negative log probability of the correct class. For its part, the multi-class Hinge loss with margin parameter γ (on a single example) is given by
∑_{j≠y} max{0, γ + (s_j − s_y)}. (8)
The Gcdf loss with standard deviation parameter σ has been described and motivated in the previous section."
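As a concrete reference for the three tracked quantities (equations 6-8), the sketch below implements them as PyTorch-style functions over class scores of shape (batch, num_classes). This is an illustrative sketch rather than the authors' code, and it assumes the top-layer feature vector has already been normalized so that ‖x‖ = 1, as in the experiments.

```python
# Minimal sketch (not the authors' code) of the three tracked losses,
# for score tensors of shape (batch, num_classes). Assumes the top-layer
# features were normalized so that ||x|| = 1, matching the paper's setup.
import math
import torch
import torch.nn.functional as F

def gcdf_loss(scores, targets, sigma):
    """Equation (6): sum over j != y of Phi((s_j - s_y) / (sigma * sqrt(2)))."""
    s_y = scores.gather(1, targets.unsqueeze(1))           # (batch, 1)
    z = (scores - s_y) / (sigma * math.sqrt(2.0))          # (batch, C)
    phi = 0.5 * (1.0 + torch.erf(z / math.sqrt(2.0)))      # standard normal cdf
    mask = torch.ones_like(scores).scatter_(1, targets.unsqueeze(1), 0.0)
    return (phi * mask).sum(dim=1).mean()

def tempered_cross_entropy(scores, targets, temperature):
    """Equation (7): softmax cross-entropy with scores divided by T."""
    return F.cross_entropy(scores / temperature, targets)

def multiclass_hinge(scores, targets, margin):
    """Equation (8): sum over j != y of max(0, margin + s_j - s_y)."""
    s_y = scores.gather(1, targets.unsqueeze(1))
    violations = torch.clamp(margin + scores - s_y, min=0.0)
    mask = torch.ones_like(scores).scatter_(1, targets.unsqueeze(1), 0.0)
    return (violations * mask).sum(dim=1).mean()
```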
}, { "heading": "4.1 VISUAL COMPARISON OF THE LOSS FUNCTIONS", "text": "Assume that we have two classes and let z = sy − sj (for j 6= y). An example is correctly classified if z > 0. The Gcdf loss is then given by Φ( −z\nσ √ 2 ), the Hinge loss by max{0, γ − z} and the cross-\nentropy loss by log(exp(−zT )+1). These functions are plotted in figure 1. They share one interesting characteristic on the side z > 0: when their parameter (σ, T or γ) gets larger, the loss takes more time to get closer to zero when z increases. This kind of “heavier tail” behavior can encourage larger z values for some training points at the expense of other closer to zero training points more easily." }, { "heading": "4.2 TRAINING CURVES OF THE ALTERNATIVE LOSS FUNCTIONS", "text": "In figures 2, 3 and 4, we investigate the effect of the size of the learning rate by considering the implicit training curves of the Gcdf loss, the cross-entropy loss and the Hinge loss for different values of their respective parameter (σ, T or γ). The learning rate is kept constant throughout training (no decaying schedule is used). We consider a small learning rate of 0.001 and a larger learning rate of 0.1. For the three loss functions considered, the larger learning rate is clearly better at implicitly minimizing them for larger values of their parameter. A similar conclusion holds when considering different batch sizes as is shown in figures 5, 13 (Appendix B) and 14 (Appendix B). In this case, the smaller batch size (256) is much better at implicitly minimizing the loss functions for larger values of their parameter than the larger batch size (16384). As a technical aside, note that ghost batch normalization Hoffer et al. (2017) is used when training with large batch sizes. The gradients are accumulated on a sequence of smaller mini-batches of size 256 before updating the weights." }, { "heading": "4.3 ALTERNATIVE LOSS VERSUS ACTUAL TRAIN LOSS", "text": "At a given fixed training loss value two training runs have made the same progress toward minimizing their objective function but they might not have made the same progress with respect to other measures of performance. The other measures of performance considered here are of course our alternative loss functions. In figure 6, we plot the Gcdf loss against the actual train loss for different runs corresponding to different learning rates. We can see that smaller learning rates are actually better at minimizing the Gcdf loss with smaller σ during training while larger learning rates are better at minimizing the Gcdf loss with larger σ. There exists an intermediate value (here σ = 1) where all the learning rates considered in our experiments are essentially equivalent at implicitly minimizing the Gcdf loss. The train error behaves similarly to the Gcdf loss with a small value of σ. In the binary case, the Gcdf loss converges pointwise to the 0− 1 loss almost everywhere (except at 0) when σ goes to 0. It would therefore make sense to actually define the Gcdf loss for σ = 0 to be the train error (in the binary case; some modifications are needed in the multi-class case). In this light, it is not surprising that figure 6a and 6d are showing a similar behavior. In order to make even more clear how different choices of learning rates are not implicitly minimizing the alternative loss functions for different parameters (σ, T or γ) in the same way, we plotted the alternative losses\nagainst their respective parameter for some fixed training loss value in figure 7. 
}, { "heading": "4.2 TRAINING CURVES OF THE ALTERNATIVE LOSS FUNCTIONS", "text": "In figures 2, 3 and 4, we investigate the effect of the size of the learning rate by considering the implicit training curves of the Gcdf loss, the cross-entropy loss and the Hinge loss for different values of their respective parameter (σ, T or γ). The learning rate is kept constant throughout training (no decaying schedule is used). We consider a small learning rate of 0.001 and a larger learning rate of 0.1. For the three loss functions considered, the larger learning rate is clearly better at implicitly minimizing them for larger values of their parameter. A similar conclusion holds when considering different batch sizes, as shown in figures 5, 13 (Appendix B) and 14 (Appendix B). In this case, the smaller batch size (256) is much better at implicitly minimizing the loss functions for larger values of their parameter than the larger batch size (16384). As a technical aside, note that ghost batch normalization (Hoffer et al., 2017) is used when training with large batch sizes. The gradients are accumulated on a sequence of smaller mini-batches of size 256 before updating the weights." }, { "heading": "4.3 ALTERNATIVE LOSS VERSUS ACTUAL TRAIN LOSS", "text": "At a given fixed training loss value, two training runs have made the same progress toward minimizing their objective function, but they might not have made the same progress with respect to other measures of performance. The other measures of performance considered here are of course our alternative loss functions. In figure 6, we plot the Gcdf loss against the actual train loss for different runs corresponding to different learning rates. We can see that smaller learning rates are actually better at minimizing the Gcdf loss with smaller σ during training, while larger learning rates are better at minimizing the Gcdf loss with larger σ. There exists an intermediate value (here σ = 1) where all the learning rates considered in our experiments are essentially equivalent at implicitly minimizing the Gcdf loss. The train error behaves similarly to the Gcdf loss with a small value of σ. In the binary case, the Gcdf loss converges pointwise to the 0−1 loss almost everywhere (except at 0) when σ goes to 0. It would therefore make sense to actually define the Gcdf loss for σ = 0 to be the train error (in the binary case; some modifications are needed in the multi-class case). In this light, it is not surprising that figures 6a and 6d show a similar behavior. To make it even clearer that different choices of learning rates do not implicitly minimize the alternative loss functions for different parameters (σ, T or γ) in the same way, we plot the alternative losses against their respective parameter at a fixed training loss value in figure 7. Similar results are obtained on MNIST (see figure 12 in Appendix A). See also figure 16 (Appendix B) for the results when considering different batch sizes on CIFAR10." }, { "heading": "5 DISCUSSION AND CONCLUSION", "text": "Suppose ε is distributed according to an isotropic Gaussian distribution with covariance matrix σ²I and mean 0. Under a second-order approximation to the cross-entropy loss L_c(w, x, y) at w, we get E_ε[L_c(w + ε, x, y)] ≈ L_c(w, x, y) + (σ²/2) Tr(H), where H is the Hessian of L_c(w, x, y). For simplicity, consider the binary case. Furthermore, since we restricted ourselves to loss functions applied on the top layer only, assume that ε is applied only to the weights of the top layer. In our setup, the Hessian H is now restricted to the weights of the final layer. Since the 0−1 loss is bounded above by the cross-entropy loss (times a factor 1/log(2)), we get
L_σ(w, x, y) ≤ (1/log(2)) E_ε[L_c(w + ε, x, y)] ≈ L_c(w, x, y)/log(2) + (σ²/(2 log(2))) Tr(H). (9)
Therefore, an optimization algorithm succeeding at finding a solution with a small cross-entropy loss and a small mean curvature of the cross-entropy loss must also reach a small Gcdf loss. This might help to explain why larger learning rates and smaller batch sizes are good at implicitly minimizing the Gcdf loss. Note however that this argument has some weaknesses. First, the approximation is only local and so might not be good for larger values of σ. Second, it cannot explain why smaller learning rates and larger batch sizes are better for smaller values of σ. Future work could concentrate on finding a rigorous explanation for these results.
Understanding the inductive biases of different optimization algorithms for training deep networks might make it possible to render the bias explicit, that is, to incorporate it in the loss function. We think that one strategy to make progress toward this long-term goal might be to study how alternative loss functions are being implicitly minimized by a given optimization algorithm. This paper considered the learning rate and batch size parameters when training with SGD. A clear avenue for future research is to extend the investigation to adaptive first-order methods (which can sometimes exhibit worse generalization performance than SGD (Wilson et al., 2017)) and second-order methods." }, { "heading": "A RESULTS ON MNIST", "text": "This section contains the results when training a 6-layer fully connected network with batch normalization on MNIST. No data augmentation is used. The optimization algorithm is SGD with momentum (0.9) and without weight decay. The learning rate is constant during all training." }, { "heading": "B MORE RESULTS ON CIFAR10", "text": "This section contains additional results when training a convolutional network with batch normalization on CIFAR10. The network consists of two convolutional layers with max pooling followed by 3 fully connected layers. No data augmentation is used. The optimization algorithm is SGD with momentum (0.9) and without weight decay. The learning rate is constant during all training." } ]
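The tracking methodology of sections 4.2 and 4.3 amounts to training with the standard cross-entropy objective while periodically logging the alternative losses, so that runs can later be compared at matched training loss values. A minimal sketch of such a loop, with hypothetical helper names and reusing the loss functions sketched after section 4, could look as follows.

```python
# Hypothetical sketch of the tracking methodology of sections 4.2-4.3:
# train with the standard cross-entropy objective, but also log how the
# alternative losses evolve. Reuses gcdf_loss from the earlier sketch.
import torch

def train_and_track(model, loader, lr, epochs, sigmas=(0.3, 1.0, 3.0)):
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    ce = torch.nn.CrossEntropyLoss()
    history = []  # one record per epoch: (train CE, {sigma: Gcdf loss})
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = ce(model(x), y)
            loss.backward()
            opt.step()
        with torch.no_grad():
            scores = torch.cat([model(x) for x, _ in loader])
            labels = torch.cat([y for _, y in loader])
            record = (ce(scores, labels).item(),
                      {s: gcdf_loss(scores, labels, s).item() for s in sigmas})
        history.append(record)
    return history
```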
2019
ON THE IMPLICIT MINIMIZATION OF ALTERNATIVE LOSS FUNCTIONS WHEN TRAINING DEEP NETWORKS
SP:08889d3b0659e76092dbb9a9fd2825701cebda44
[ "This paper is motivated by an observation that maximization-based decoding approaches such as beam search can lead to incoherent and repetitive sentences when open-ended long-form text generation based on neural language model such as GPT-2 is performed. To solve the problem, this paper proposes a sampling method called Nucleus Sampling. Similar to Top-k sampling, Nucleus Sampling truncates the probability distribution of the words in the vocabulary. Instead of re-normalizing the probabilities for the top-k words, Nucleus Sampling re-normalizes the original probabilities for the words with values above a pre-chosen threshold p. Some quantitative and qualitative results show that the proposed sampling method can generate long-form texts with some nice properties.", "This paper studies an important problem, i.e., how to find a good decoding strategy for open-ended text generation. To this end, the authors provide a deep analysis of the most common decoding methods, and propose Nucleus Sampling, a very simple yet effective method to generate higher-quality text. Compared with top-k sampling, the key idea behind the proposed method is to sample from the dynamic nucleus of tokens containing the majority of the probability mass. Experiments demonstrate that nucleus sampling is an effective decoding strategy in practice. " ]
Despite considerable advances in neural language modeling, it remains an open question what the best decoding strategy is for text generation from a language model (e.g. to generate a story). The counter-intuitive empirical observation is that even though the use of likelihood as training objective leads to high quality models for a broad range of language understanding tasks, maximization-based decoding methods such as beam search lead to degeneration — output text that is bland, incoherent, or gets stuck in repetitive loops. To address this we propose Nucleus Sampling, a simple but effective method to draw considerably higher quality text out of neural language models than previous decoding strategies. Our approach avoids text degeneration by truncating the unreliable tail of the probability distribution, sampling from the dynamic nucleus of tokens containing the vast majority of the probability mass. To properly examine current maximization-based and stochastic decoding methods, we compare generations from each of these methods to the distribution of human text along several axes such as likelihood, diversity, and repetition. Our results show that (1) maximization is an inappropriate decoding objective for open-ended text generation, (2) the probability distributions of the best current language models have an unreliable tail which needs to be truncated during generation and (3) Nucleus Sampling is currently the best available decoding strategy for generating long-form text that is both high-quality — as measured by human evaluation — and as diverse as human-written text.
[ { "affiliations": [], "name": "Ari Holtzman" }, { "affiliations": [], "name": "Jan Buys" }, { "affiliations": [], "name": "Li Du" }, { "affiliations": [], "name": "Maxwell Forbes" }, { "affiliations": [], "name": "Yejin Choi" } ]
[ { "authors": [ "David H Ackley", "Geoffrey E Hinton", "Terrence J Sejnowski" ], "title": "A learning algorithm for boltzmann machines", "venue": "Cognitive science,", "year": 1985 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "Proceedings of the 2015 International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Massimo Caccia", "Lucas Caccia", "William Fedus", "Hugo Larochelle", "Joelle Pineau", "Laurent Charlin" ], "title": "Language gans falling short. In Critiquing and Correcting Trends in Machine Learning: NeurIPS", "venue": "URL http://arxiv.org/abs/1811.02549", "year": 2018 }, { "authors": [ "Yining Chen", "Sorcha Gilroy", "Andreas Maletti", "Jonathan May", "Kevin Knight" ], "title": "Recurrent neural networks as weighted language recognizers", "venue": null, "year": 2018 }, { "authors": [ "Elizabeth Clark", "Yangfeng Ji", "Noah A. Smith" ], "title": "Neural text generation in stories using entity representations as context. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)", "venue": null, "year": 2018 }, { "authors": [ "Yann N Dauphin", "Angela Fan", "Michael Auli", "David Grangier" ], "title": "Language modeling with gated convolutional networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Angela Fan", "Mike Lewis", "Yann Dauphin" ], "title": "Hierarchical neural story generation", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2018 }, { "authors": [ "Jessica Ficler", "Yoav Goldberg" ], "title": "Controlling linguistic style aspects in neural language generation", "venue": "In Proceedings of the Workshop on Stylistic Variation,", "year": 2017 }, { "authors": [ "H Paul Grice" ], "title": "Logic and conversation", "venue": null, "year": 1975 }, { "authors": [ "Tatsunori B. 
Hashimoto", "Hugh Zhang", "Percy Liang" ], "title": "Unifying human and statistical evaluation for natural language generation", "venue": "In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2019 }, { "authors": [ "Ari Holtzman", "Jan Buys", "Maxwell Forbes", "Antoine Bosselut", "David Golub", "Yejin Choi" ], "title": "Learning to write with cooperative discriminators", "venue": "In Proceedings of the Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "Philipp Koehn", "Rebecca Knowles" ], "title": "Six challenges for neural machine translation", "venue": "In Proceedings of the First Workshop on Neural Machine Translation,", "year": 2017 }, { "authors": [ "Ilya Kulikov", "Alexander H Miller", "Kyunghyun Cho", "Jason Weston" ], "title": "Importance of search and evaluation strategies in neural dialogue modeling", "venue": "International Conference on Natural Language Generation,", "year": 2019 }, { "authors": [ "Jiwei Li", "Will Monroe", "Dan Jurafsky" ], "title": "A simple, fast diverse decoding algorithm for neural generation", "venue": "arXiv preprint arXiv:1611.08562,", "year": 2016 }, { "authors": [ "Jiwei Li", "Will Monroe", "Alan Ritter", "Dan Jurafsky", "Michel Galley", "Jianfeng Gao" ], "title": "Deep reinforcement learning for dialogue generation", "venue": "In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "Thang Luong", "Hieu Pham", "Christopher D Manning" ], "title": "Effective approaches to attention-based neural machine translation", "venue": "In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing,", "year": 2015 }, { "authors": [ "Ramesh Nallapati", "Bowen Zhou", "Cicero dos Santos", "Caglar Gulcehre", "Bing Xiang" ], "title": "Abstractive text summarization using sequence-to-sequence rnns and beyond", "venue": "In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning,", "year": 2016 }, { "authors": [ "Chris Pal", "Charles Sutton", "Andrew McCallum" ], "title": "Sparse forward-backward using minimum divergence beams for fast training of conditional random fields", "venue": "IEEE International Conference on Acoustics Speech and Signal Processing Proceedings,", "year": 2006 }, { "authors": [ "Nanyun Peng", "Marjan Ghazvininejad", "Jonathan May", "Kevin Knight" ], "title": "Towards controllable story generation", "venue": "In Proceedings of the First Workshop on Storytelling,", "year": 2018 }, { "authors": [ "Steven T Piantadosi" ], "title": "Zipf’s word frequency law in natural language: A critical review and future directions", "venue": "Psychonomic bulletin & review,", "year": 2014 }, { "authors": [ "Alec Radford", "Karthik Narasimhan", "Tim Salimans", "Ilya Sutskever" ], "title": "Improving language understanding by generative pre-training, 2018", "venue": "URL https://s3-us-west-2.amazonaws. com/openai-assets/research-covers/language-unsupervised/ language_understanding_paper.pdf. Unpublished manuscript", "year": 2018 }, { "authors": [ "Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners, February 2019", "venue": "URL https: //d4mucfpksywv.cloudfront.net/better-language-models/language_ models_are_unsupervised_multitask_learners.pdf. 
Unpublished manuscript", "year": 2019 }, { "authors": [ "Stanislau Semeniuta", "Aliaksei Severyn", "Sylvain Gelly" ], "title": "On accurate evaluation of gans for language generation", "venue": "arXiv preprint arXiv:1806.04936,", "year": 2018 }, { "authors": [ "Tianxiao Shen", "Tao Lei", "Regina Barzilay", "Tommi Jaakkola" ], "title": "Style transfer from non-parallel text by cross-alignment", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Felix Stahlberg", "Bill Byrne" ], "title": "On nmt search errors and model errors: Cat got your tongue", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Martin Sundermeyer", "Ralf Schlüter", "Hermann Ney" ], "title": "Lstm neural networks for language modeling", "venue": "In Thirteenth annual conference of the international speech communication association,", "year": 2012 }, { "authors": [ "Guy Tevet", "Gavriel Habib", "Vered Shwartz", "Jonathan Berant" ], "title": "Evaluating text gans as language models. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)", "venue": null, "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Ashwin K. Vijayakumar", "Michael Cogswell", "Ramprasaath R. Selvaraju", "Qing Sun", "Stefan Lee", "David Crandall", "Dhruv Batra" ], "title": "Diverse beam search for improved description of complex scenes", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Sean Welleck", "Ilia Kulikov", "Stephen Roller", "Emily Dinan", "Kyunghyun Cho", "Jason Weston" ], "title": "Neural text generation with unlikelihood training", "venue": "In Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Sam Wiseman", "Stuart Shieber", "Alexander Rush" ], "title": "Challenges in data-to-document generation", "venue": "In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing,", "year": 2017 }, { "authors": [ "Jingjing Xu", "Xuancheng Ren", "Junyang Lin", "Xu Sun" ], "title": "Diversity-promoting gan: A cross-entropy based generative adversarial network for diversified text generation", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Lantao Yu", "Weinan Zhang", "Jun Wang", "Yong Yu" ], "title": "Seqgan: Sequence generative adversarial nets with policy gradient", "venue": "In AAAI,", "year": 2017 }, { "authors": [ "Yaoming Zhu", "Sidi Lu", "Lei Zheng", "Jiaxian Guo", "Weinan Zhang", "Jun Wang", "Yong Yu" ], "title": "Texygen: A benchmarking platform for text generation models", "venue": null, "year": 2018 } ]
[ { "heading": null, "text": "Context: In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.\nBeam Search, b=32: \"The study, published in the Proceedings of the National Academy of Sciences of the United States of America (PNAS), was conducted by researchers from the Universidad Nacional Autónoma de México (UNAM) and the Universidad Nacional Autónoma de México (UNAM/Universidad Nacional Autónoma de México/Universidad Nacional Autónoma de México/Universidad Nacional Autónoma de México/Universidad Nacional Autónoma de ...” Pure Sampling: They were cattle called Bolivian Cavalleros; they live in a remote desert uninterrupted by town, and they speak huge, beautiful, paradisiacal Bolivian linguistic thing. They say, 'Lunch, marge.' They don't tell what the lunch is,\" director Professor Chuperas Omwell told Sky News. \"They've only been talking to scientists, like we're being interviewed by TV reporters. We don't even stick around to be interviewed by TV reporters. Maybe that's how they figured out that they're cosplaying as the Bolivian Cavalleros.\"\nFigure 1: Even with substantial human context and the powerful GPT-2 Large language model, Beam Search (size 32) leads to degenerate repetition (highlighted in blue) while pure sampling leads to incoherent gibberish (highlighted in red). When b ≥ 64, both GPT-2 Large and XL (774M and 1542M parameters, respectively) prefer to stop generating immediately after the given context." }, { "heading": "1 INTRODUCTION", "text": "On February 14th 2019, OpenAI surprised the scientific community with an impressively highquality article about Ovid’s Unicorn, written by GPT-2.1 Notably, the top-quality generations ob-\n1https://openai.com/blog/better-language-models/\ntained from the model rely on randomness in the decoding method, in particular through top-k sampling that samples the next word from the top k most probable choices (Fan et al., 2018; Holtzman et al., 2018; Radford et al., 2019), instead of aiming to decode text that maximizes likelihood.\nIn fact, decoding strategies that optimize for output with high probability, such as beam search, lead to text that is incredibly degenerate, even when using state-of-the-art models such as GPT-2 Large, as shown in Figure 1. This may seem counter-intuitive, as one would expect that good models would assign higher probability to more human-like, grammatical text. Indeed, language models do generally assign high scores to well-formed text, yet the highest scores for longer texts are often generic, repetitive, and awkward. Figure 2 exposes how different the distribution of probabilities assigned to beam search decoded text and naturally occurring text really are.\nPerhaps equally surprising is the right side of Figure 1, which shows that pure sampling — sampling directly from the probabilities predicted by the model — results in text that is incoherent and almost unrelated to the context. Why is text produced by pure sampling so degenerate? In this work we show that the “unreliable tail” is to blame. This unreliable tail is composed of tens of thousands of candidate tokens with relatively low probability that are over-represented in the aggregate.\nTo overcome these issues we introduce Nucleus Sampling (§3.1). 
The key intuition of Nucleus Sampling is that the vast majority of probability mass at each time step is concentrated in the nucleus, a small subset of the vocabulary that tends to range between one and a thousand candidates. Instead of relying on a fixed top-k, or using a temperature parameter to control the shape of the distribution without sufficiently suppressing the unreliable tail, we propose sampling from the top-p portion of the probability mass, expanding and contracting the candidate pool dynamically.
In order to compare current methods to Nucleus Sampling, we compare various distributional properties of generated text to the reference distribution, such as the likelihood of veering into repetition and the perplexity of generated text.
The latter reveals that text generated by maximization or top-k sampling is too probable, indicating a lack of diversity and divergence in vocabulary usage from the human distribution. On the other hand, pure sampling produces text that is significantly less likely than the gold, corresponding to lower generation quality.
Vocabulary usage and Self-BLEU (Zhu et al., 2018) statistics reveal that high values of k are needed to make top-k sampling match human statistics. Yet, generations based on high values of k often have high variance in likelihood, hinting at qualitatively observable incoherency issues. Nucleus Sampling can easily match reference perplexity through tuning the value of p, avoiding the incoherence caused by setting k high enough to match distributional statistics.
Finally, we perform Human Unified with Statistical Evaluation (HUSE; Hashimoto et al., 2019) to jointly assess the overall quality and diversity of the decoding strategies, which cannot be captured using either human or automatic evaluation alone. The HUSE evaluation demonstrates that Nucleus Sampling is the best overall decoding strategy. We include generated examples for qualitative analysis – see Figure 3 for a representative example, and further examples in the appendix.2
2Code and all generations are available at https://github.com/ari-holtzman/degen" }, { "heading": "2 BACKGROUND", "text": "" }, { "heading": "2.1 TEXT GENERATION DECODING STRATEGIES", "text": "A number of recent works have alluded to the disadvantages of generation by maximization, which tends to produce output with high grammaticality but low diversity (Kulikov et al., 2019; Holtzman et al., 2018; Fan et al., 2018). Generative Adversarial Networks (GANs) have been a prominent research direction (Yu et al., 2017; Xu et al., 2018), but recent work has shown that when quality and diversity are considered jointly, GAN-generated text fails to outperform generations from language models (Caccia et al., 2018; Tevet et al., 2019; Semeniuta et al., 2018). Work on neural dialog systems has proposed methods for diverse beam search, using a task-specific diversity scoring function or constraining beam hypotheses to be sufficiently different (Li et al., 2016a; Vijayakumar et al., 2018; Kulikov et al., 2019; Pal et al., 2006). While such utility functions encourage desirable properties in generations, they do not remove the need to choose an appropriate decoding strategy, and we believe that Nucleus Sampling will have complementary advantages in such approaches. Finally, Welleck et al. (2020) begin to address the problem of neural text degeneration through an "unlikelihood loss", which decreases training loss on repeated tokens and thus implicitly reduces gradients on frequent tokens as well.
Our focus is on exposing neural text degeneration and providing a decoding solution that can be used with arbitrary models, but future work will likely combine training-time and inference-time solutions." }, { "heading": "2.2 OPEN-ENDED VS DIRECTED GENERATION", "text": "Many text generation tasks are defined through (input, output) pairs, such that the output is a constrained transformation of the input. Example applications include machine translation (Bahdanau et al., 2015), data-to-text generation (Wiseman et al., 2017), and summarization (Nallapati et al., 2016). We refer to these tasks as directed generation. Typically, encoder-decoder architectures are used, often with an attention mechanism (Bahdanau et al., 2015; Luong et al., 2015) or using attention-based architectures such as the Transformer (Vaswani et al., 2017). Generation is usually performed using beam search; since output is tightly scoped by the input, repetition and genericness are not as problematic. Still, similar issues have been reported when using large beam sizes (Koehn & Knowles, 2017) and more recently with exact inference (Stahlberg & Byrne, 2019), a counter-intuitive observation since more comprehensive search helps maximize probability.
Open-ended generation, which includes conditional story generation and contextual text continuation (as in Figure 1), has recently become a promising research direction due to significant advances in neural language models (Clark et al., 2018; Holtzman et al., 2018; Fan et al., 2018; Peng et al., 2018; Radford et al., 2019). While the input context restricts the space of acceptable output generations, there is a considerable degree of freedom in what can plausibly come next, unlike in directed generation settings. Our work addresses the challenges faced by neural text generation with this increased level of freedom, but we note that some tasks, such as goal-oriented dialog, may fall somewhere in between open-ended and directed generation." }, { "heading": "3 LANGUAGE MODEL DECODING", "text": "Given an input text passage as context, the task of open-ended generation is to generate text that forms a coherent continuation from the given context. More formally, given a sequence of m tokens x_1 . . . x_m as context, the task is to generate the next n continuation tokens to obtain the completed sequence x_1 . . . x_{m+n}. We assume that models compute P(x_{1:m+n}) using the common left-to-right decomposition of the text probability,
P(x_{1:m+n}) = ∏_{i=1}^{m+n} P(x_i | x_1 . . . x_{i−1}), (1)
which is used to generate the continuation token-by-token using a particular decoding strategy.
Maximization-based decoding The most commonly used decoding objective, in particular for directed generation, is maximization-based decoding. Assuming that the model assigns higher probability to higher quality text, these decoding strategies search for the continuation with the highest likelihood.
[Figure 3 appears here; its conditioning context is: 'An unprecedented number of mostly young whales have become stranded on the West Australian coast since 2008.']
Since finding the optimum argmax sequence from recurrent neural language models or Transformers is not tractable (Chen et al., 2018), common practice is to use beam search (Li et al., 2016b; Shen et al., 2017; Wiseman et al., 2017). However, several recent studies on open-ended generation have reported that maximization-based decoding does not lead to high quality text (Fan et al., 2018; Holtzman et al., 2018)."
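To make the left-to-right decomposition in equation (1) concrete, here is a minimal sketch of token-by-token decoding under the greedy (argmax) strategy; `step_probs` is an assumed stand-in for any autoregressive language model, not a specific library's API.

```python
# Minimal sketch of token-by-token decoding under equation (1) with the
# greedy (argmax) strategy. `step_probs` is an assumed callable returning
# P(x_i | x_1 ... x_{i-1}) over the vocabulary, standing in for any
# autoregressive language model.
def greedy_decode(step_probs, context_tokens, n, eos_id=None):
    tokens = list(context_tokens)
    for _ in range(n):
        probs = step_probs(tokens)          # length-|V| probability vector
        next_id = max(range(len(probs)), key=probs.__getitem__)
        tokens.append(next_id)
        if eos_id is not None and next_id == eos_id:
            break
    return tokens
```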
}, { "heading": "3.1 NUCLEUS SAMPLING", "text": "We propose a new stochastic decoding method: Nucleus Sampling. The key idea is to use the shape of the probability distribution to determine the set of tokens to be sampled from. Given a distribution P (x|x1:i−1), we define its top-p vocabulary V (p) ⊂ V as the smallest set such that\n∑ x∈V (p) P (x|x1:i−1) ≥ p. (2)\nLet p′ = ∑\nx∈V (p) P (x|x1:i−1). The original distribution is re-scaled to a new distribution, from which the next word is sampled:\nP ′(x|x1:i−1) = {\nP (x|x1:i−1)/p′ if x ∈ V (p) 0 otherwise. (3)\nIn practice this means selecting the highest probability tokens whose cumulative probability mass exceeds the pre-chosen threshold p. The size of the sampling set will adjust dynamically based on the shape of the probability distribution at each time step. For high values of p, this is a small subset of vocabulary that takes up vast majority of the probability mass — the nucleus.\n3.2 TOP-k SAMPLING\nTop-k sampling has recently become a popular alternative sampling procedure (Fan et al., 2018; Holtzman et al., 2018; Radford et al., 2019). Nucleus Sampling and top-k both sample from truncated Neural LM distributions, differing only in the strategy of where to truncate. Choosing where to truncate can be interpreted as determining the generative model’s trustworthy prediction zone.\nAt each time step, the top k possible next tokens are sampled from according to their relative probabilities. Formally, given a distribution P (x|x1:i−1), we define its top-k vocabulary V (k) ⊂ V as the set of size k which maximizes ∑ x∈V (k) P (x|x1:i−1). Let p′ = ∑ x∈V (k) P (x|x1:i−1). The distribution is then re-scaled as in equation 3, and sampling is performed based on that distribution. Note that the scaling factor p′ can vary wildly at each time-step, in contrast to Nucleus Sampling.\nDifficulty in choosing a suitable value of k While top-k sampling leads to considerably higher quality text than either beam search or sampling from the full distribution, the use of a constant k is\nsub-optimal across varying contexts. As illustrated on the left of Figure 5, in some contexts the head of the next word distribution can be flat across tens or hundreds of reasonable options (e.g. nouns or verbs in generic contexts), while in other contexts most of the probability mass is concentrated in one or a small number of tokens, as on the right of the figure. Therefore if k is small, in some contexts there is a risk of generating bland or generic text, while if k is large the top-k vocabulary will include inappropriate candidates which will have their probability of being sampled increased by the renormalization. Under Nucleus Sampling, the number of candidates considered rises and falls dynamically, corresponding to the changes in the model’s confidence region over the vocabulary which top-k sampling fails to capture for any one choice of k." }, { "heading": "3.3 SAMPLING WITH TEMPERATURE", "text": "Another common approach to sampling-based generation is to shape a probability distribution through temperature (Ackley et al., 1985). Temperature sampling has been applied widely to text generation (Ficler & Goldberg, 2017; Fan et al., 2018; Caccia et al., 2018). Given the logits u1:|V | and temperature t, the softmax is re-estimated as\np(x = Vl|x1:i−1) = exp(ul/t)∑ l′ exp(u ′ l/t) . (4)\nSetting t ∈ [0, 1) skews the distribution towards high probability events, which implicitly lowers the mass in the tail distribution. 
}, { "heading": "3.3 SAMPLING WITH TEMPERATURE", "text": "Another common approach to sampling-based generation is to shape a probability distribution through temperature (Ackley et al., 1985). Temperature sampling has been applied widely to text generation (Ficler & Goldberg, 2017; Fan et al., 2018; Caccia et al., 2018). Given the logits u_{1:|V|} and temperature t, the softmax is re-estimated as
p(x = V_l | x_{1:i−1}) = exp(u_l/t) / ∑_{l′} exp(u_{l′}/t). (4)
Setting t ∈ [0, 1) skews the distribution towards high probability events, which implicitly lowers the mass in the tail distribution. Low temperature sampling has also been used to partially alleviate the issues of top-k sampling discussed above, by shaping the distribution before top-k sampling (Radford et al., 2018; Fan et al., 2018). However, recent analysis has shown that, while lowering the temperature improves generation quality, it comes at the cost of decreasing diversity (Caccia et al., 2018; Hashimoto et al., 2019)." }, { "heading": "4 LIKELIHOOD EVALUATION", "text": "" }, { "heading": "4.1 EXPERIMENTAL SETUP", "text": "While many neural network architectures have been proposed for language modeling, including LSTMs (Sundermeyer et al., 2012) and convolutional networks (Dauphin et al., 2017), the Transformer architecture (Vaswani et al., 2017) has been the most successful in the extremely large-scale training setups in recent literature (Radford et al., 2018; 2019). In this study we use the Generatively Pre-trained Transformer, version 2 (GPT2; Radford et al., 2019), which was trained on WebText, a 40GB collection of text scraped from the web.3 We perform experiments using the Large model (762M parameters). Our analysis is based on generating 5,000 text passages, which end upon reaching an end-of-document token or a maximum length of 200 tokens. Texts are generated conditionally, conditioned on the initial paragraph (restricted to 1-40 tokens) of documents in the held-out portion of WebText, except where otherwise mentioned." }, { "heading": "4.2 PERPLEXITY", "text": "Our first evaluation is to compute the perplexity of generated text using various decoding strategies, according to the model that is being generated from. We compare these perplexities against that of the gold text (Figure 6). Importantly, we argue that the optimal generation strategy should produce text which has a perplexity close to that of the gold text: even though the model has the ability to generate text that has lower perplexity (higher probability), such text tends to have low diversity and get stuck in repetition loops, as shown in §5 and illustrated in Figure 4. We see that the perplexity of text obtained from pure sampling is worse than the perplexity of the gold. This indicates that the model is confusing itself: sampling too many unlikely tokens and creating context that makes it difficult to recover the human distribution of text, as in Figure 1. Yet, setting the temperature lower creates diversity and repetition issues, as we shall see in §5. Even with our relatively fine-grained parameter sweep, Nucleus Sampling obtains the closest perplexity to human text, as shown in Table 1.
3Available at https://github.com/openai/gpt-2-output-dataset"
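The perplexity evaluation of section 4.2 boils down to exponentiating the average per-token negative log-likelihood of the generations under the model. A sketch assuming a HuggingFace-style GPT-2 interface follows; model and tokenizer names are illustrative, and token-count bookkeeping is simplified.

```python
# Sketch of the perplexity evaluation in section 4.2: average per-token
# negative log-likelihood of each generation under the model, exponentiated.
# Assumes a HuggingFace-style interface; names are illustrative.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-large")
model = GPT2LMHeadModel.from_pretrained("gpt2-large").eval()

def perplexity(texts):
    nll, count = 0.0, 0
    with torch.no_grad():
        for text in texts:
            ids = tokenizer.encode(text, return_tensors="pt")
            n_pred = ids.size(1) - 1                 # labels shifted internally
            loss = model(ids, labels=ids).loss       # mean NLL per predicted token
            nll += loss.item() * n_pred
            count += n_pred
    return math.exp(nll / count)
```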
}, { "heading": "4.3 NATURAL LANGUAGE DOES NOT MAXIMIZE PROBABILITY", "text": "One might wonder if the issue with maximization is a search error, i.e., that there are higher quality sentences to which the model assigns higher probability than to the decoded ones, and beam search has simply failed to find them. Yet Figures 2 & 6 show that the per-token probability of natural text is, on average, much lower than that of text generated by beam search. Natural language rarely remains in a high probability zone for multiple consecutive time steps, instead veering into lower-probability but more informative tokens. Nor does natural language tend to fall into repetition loops, even though the model tends to assign high probability to this, as seen in Figure 4.
Why is human-written text not the most probable text? We conjecture that this is an intrinsic property of human language. Language models that assign probabilities one word at a time without a global model of the text will have trouble capturing this effect. Grice's Maxims of Communication (Grice, 1975) show that people optimize against stating the obvious. Thus, making every word as predictable as possible will be disfavored. This makes solving the problem simply by training larger models or improving neural architectures using standard per-word learning objectives unlikely: such models are forced to favor the lowest common denominator, rather than informative language." }, { "heading": "5 DISTRIBUTIONAL STATISTICAL EVALUATION", "text": "" }, { "heading": "5.1 ZIPF DISTRIBUTION ANALYSIS", "text": "In order to compare generations to the reference text, we begin by analyzing their use of vocabulary. Zipf's law suggests that there is an exponential relationship between the rank of a word and its frequency in text. The Zipfian coefficient s can be used to compare the distribution in a given text to a theoretically perfect exponential curve, where s = 1 (Piantadosi, 2014). Figure 7 shows the vocabulary distributions along with estimated Zipf coefficients for selected parameters of different decoding methods. As expected, pure sampling is the closest to the human distribution, followed by Nucleus Sampling. The visualization of the distribution shows that pure sampling slightly overestimates the use of rare words, likely one reason why pure sampling also has higher perplexity than human text. Furthermore, lower temperature sampling avoids sampling these rare words from the tail, which is why it has been used in some recent work (Fan et al., 2018; Radford et al., 2019)." }, { "heading": "5.2 SELF-BLEU", "text": "We follow previous work and compute Self-BLEU (Zhu et al., 2018) as a metric of diversity. Self-BLEU is calculated by computing the BLEU score of each generated document using all other generations in the evaluation set as references. Due to the expense of computing such an operation, we sample 1000 generations, each of which is compared with all 4999 other generations as references. A lower Self-BLEU score implies higher diversity. Figure 8 shows that Self-BLEU results largely follow those of the Zipfian distribution analysis as a diversity measure. It is worth noting that very high values of k and t are needed to get close to the reference distribution, though these result in unnaturally high perplexity (§4)." }, { "heading": "5.3 REPETITION", "text": "One attribute of text quality that we can quantify is repetition. Figure 9 shows that Nucleus Sampling and top-k sampling have the least repetition for reasonable parameter ranges. Generations from temperature sampling have more repetition unless very high temperatures are used, which we have shown negatively affects coherence (as measured by high perplexity). Further, all stochastic methods face repetition issues when their tuning parameters are set too low, which tends to over-truncate, mimicking greedy search. Therefore we conclude that only Nucleus Sampling satisfies all the distributional criteria for desirable generations."
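The two distributional statistics above are straightforward to compute. The sketch below estimates a Zipf coefficient via a log-log linear fit on rank-frequency data and computes Self-BLEU by scoring each generation against all others; it assumes NLTK's sentence-level BLEU, whereas the paper's exact tooling may differ.

```python
# Sketch of the distributional statistics in sections 5.1-5.2: a Zipf
# coefficient fit on rank-frequency data, and Self-BLEU, where each
# generation is scored against all others as references. Assumes NLTK.
from collections import Counter
import numpy as np
from nltk.translate.bleu_score import sentence_bleu

def zipf_coefficient(tokens):
    freqs = np.array(sorted(Counter(tokens).values(), reverse=True), dtype=float)
    ranks = np.arange(1, len(freqs) + 1)
    # slope of log-frequency vs. log-rank gives -s under Zipf's law
    slope, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
    return -slope

def self_bleu(token_lists):
    # O(n^2) in the number of generations, hence the paper subsamples 1000
    scores = []
    for i, hyp in enumerate(token_lists):
        refs = token_lists[:i] + token_lists[i + 1:]
        scores.append(sentence_bleu(refs, hyp))
    return sum(scores) / len(scores)
```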
}, { "heading": "6 HUMAN EVALUATION", "text": "" }, { "heading": "6.1 HUMAN UNIFIED WITH STATISTICAL EVALUATION (HUSE)", "text": "Statistical evaluations are unable to measure the coherence of generated text properly. While the metrics in previous sections gave us vital insights into the different decoding methods we compare, human evaluation is still required to get a full measure of the quality of the generated text. However, pure human evaluation does not take into account the diversity of the generated text; therefore we use HUSE (Hashimoto et al., 2019) to combine human and statistical evaluation. HUSE is computed by training a discriminator to distinguish between text drawn from the human and model distributions, based on only two features: the probability assigned by the language model, and human judgements of typicality of generations. Text that is close to the human distribution in terms of quality and diversity should perform well on both likelihood evaluation and human judgements.
As explored in the previous sections, the current best-performing decoding methods rely on truncation of the probability distribution, which yields a probability of 0 for the vast majority of potential tokens. Initial exploration of applying HUSE directly led to top-k and Nucleus Sampling receiving scores of nearly 0 due to truncation, despite humans favoring these methods. As a proxy, when generating the text used to compute HUSE, we interpolate (with mass 0.1) the original probability distribution with the top-k and Nucleus Sampling distributions, smoothing the truncated distribution.
For each decoding algorithm we annotate 200 generations for typicality, with each generation receiving 20 annotations from 20 different annotators. This results in a total of 4000 annotations per decoding scheme. We use a KNN classifier to compute HUSE, as in the original paper, with k = 13 neighbors, which we found led to the highest accuracy in discrimination. The results in Table 1 show that Nucleus Sampling obtains the highest HUSE score, with top-k sampling performing second best."
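Operationally, the HUSE computation described above is a leave-one-out k-nearest-neighbor classification over two features per text. The sketch below uses scikit-learn with hypothetical feature arrays and, following Hashimoto et al. (2019), reports HUSE as twice the leave-one-out error (so 1.0 means the human and model distributions are indistinguishable on these features).

```python
# Sketch of the HUSE computation in section 6.1: a k-nearest-neighbor
# classifier over two features per text (model log-probability and mean
# human typicality rating), evaluated with leave-one-out error. The feature
# arrays here are hypothetical placeholders.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def huse_score(logprobs, typicality, is_human, k=13):
    X = np.stack([logprobs, typicality], axis=1)   # (n_texts, 2)
    y = np.asarray(is_human, dtype=int)            # 1 = human, 0 = model
    clf = KNeighborsClassifier(n_neighbors=k)
    acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
    return 2.0 * (1.0 - acc)                       # twice the LOO error
```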
}, { "heading": "6.2 QUALITATIVE ANALYSIS", "text": "Figure 3 shows representative example generations. Unsurprisingly, beam search gets stuck in a repetition loop it cannot escape. Of the stochastic decoding schemes, the output of full sampling is clearly the hardest to understand, even inventing a new word "umidauda", apparently a species of bird. The generation produced by Nucleus Sampling isn't perfect – the model appears to confuse whales with birds, and begins writing about those instead. Yet, top-k sampling immediately veers off into an unrelated event. When top-k sampling is combined with a temperature of 0.7, as is commonly done (Radford et al., 2019; Fan et al., 2018), the output devolves into repetition, exhibiting the classic issues of low-temperature decoding. More generations are available in Appendix B." }, { "heading": "7 CONCLUSION", "text": "This paper provided a deep analysis of the properties of the most common decoding methods for open-ended language generation. We have shown that likelihood-maximizing decoding causes repetition and overly generic language usage, while sampling methods without truncation risk sampling from the low-confidence tail of a model's predicted distribution. Further, we proposed Nucleus Sampling as a solution that effectively captures the region of confidence of language models. In future work, we wish to dynamically characterize this region of confidence and include a more semantic utility function to guide the decoding process." }, { "heading": "ACKNOWLEDGMENTS", "text": "This research was supported in part by NSF (IIS-1524371), the National Science Foundation Graduate Research Fellowship under Grant No. DGE1256082, DARPA CwC through ARO (W911NF-15-1-0543), DARPA MCS program through NIWC Pacific (N66001-19-2-4031), the South African Centre for Artificial Intelligence Research, and the Allen Institute for AI." }, { "heading": "A BEAM WIDTH EFFECT", "text": "" }, { "heading": "B EXAMPLE GENERATIONS", "text": "We include a set of examples for further qualitative comparison. [Appendix B example contexts include: 'So what's new in my life? 09/11/18 - Just got back from vacation.' and 'In late 1998, a UW-Madison group led by James Thomson was the first to isolate and culture human embryonic stem cells, master undifferentiated cells that arise at the earliest stages of development and are capable of becoming any of the 220 types of ...']" } ]
2020
THE CURIOUS CASE OF NEURAL TEXT DEGENERATION
SP:133403fbbb8b1195da7a017675d19d3b7b270811
[ "This paper calls to attention the importance of specifying all performance altering implementation details that are current inherent in the state-of-the-art deep policy gradient community. Specifically, this paper builds very closely on the work started by Henderson et al. 2017, building a conversation around the importance of more rigorous and careful scientific study of published algorithms. This paper identifies many \"code-level optimizations\" that account for the differences between the popular TRPO and PPO deep policy gradient algorithms. The paper then subselects four of these optimizations and carefully investigates their impact on the final performance of each algorithm. The clear conclusion from the paper is that the touted algorithmic improvement of PPO over TRPO has negligible effect on performance, and any previously reported differences are due only to what were considered unimportant implementation details.", "This paper investigates the impact of implementation \"details\", with existing implementations of TRPO and PPO as examples. The main takeaway is that the performance gains observed in PPO (compared to TRPO) are actually caused by differences in implementation, and not by the differences between the two learning algorithms. In particular, adding to the TRPO code the same implementation changes as in PPO makes TRPO on par with (and possibly even better than) PPO. The clipping objective of PPO is also found to have no significant impact on its performance. This calls for more careful comparisons between algorithms (by minimizing implementation changes and more in-depth ablation studies) than has typically been done until now in the RL research community." ]
We study the roots of algorithmic progress in deep policy gradient algorithms through a case study on two popular algorithms: Proximal Policy Optimization (PPO) and Trust Region Policy Optimization (TRPO). Specifically, we investigate the consequences of “code-level optimizations:” algorithm augmentations found only in implementations or described as auxiliary details to the core algorithm. Seemingly of secondary importance, such optimizations turn out to have a major impact on agent behavior. Our results show that they (a) are responsible for most of PPO’s gain in cumulative reward over TRPO, and (b) fundamentally change how RL methods function. These insights show the difficulty, and importance, of attributing performance gains in deep reinforcement learning.
[ { "affiliations": [], "name": "Logan Engstrom" }, { "affiliations": [], "name": "Andrew Ilyas" }, { "affiliations": [], "name": "Shibani Santurkar" }, { "affiliations": [], "name": "Dimitris Tsipras" }, { "affiliations": [], "name": "Firdaus Janoos" }, { "affiliations": [], "name": "Larry Rudolph" }, { "affiliations": [], "name": "Aleksander Mądry" } ]
[ { "authors": [ "Pieter Abbeel", "John Schulman" ], "title": "Deep reinforcement learning through policy optimization", "venue": "Tutorial at Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Peter Henderson", "Riashat Islam", "Philip Bachman", "Joelle Pineau", "Doina Precup", "David Meger" ], "title": "Deep reinforcement learning that matters", "venue": "arXiv preprint arXiv:1709.06560,", "year": 2017 }, { "authors": [ "Peter Henderson", "Joshua Romoff", "Joelle Pineau" ], "title": "Where did my optimum go?: An empirical analysis of gradient descent optimization in policy gradient methods, 2018", "venue": null, "year": 2018 }, { "authors": [ "Sham M. Kakade" ], "title": "A natural policy gradient", "venue": "In NIPS,", "year": 2001 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "CoRR, abs/1412.6980,", "year": 2014 }, { "authors": [ "Horia Mania", "Aurelia Guy", "Benjamin Recht" ], "title": "Simple random search provides a competitive approach to reinforcement learning", "venue": null, "year": 2018 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Alex Graves", "Ioannis Antonoglou", "Daan Wierstra", "Martin Riedmiller" ], "title": "Playing atari with deep reinforcement learning", "venue": "In NeurIPS Deep Learning Workshop,", "year": 2013 }, { "authors": [ "Jan Peters", "Katharina Mülling", "Yasemin Altun" ], "title": "Relative entropy policy search", "venue": "In AAAI,", "year": 2010 }, { "authors": [ "Aravind Rajeswaran", "Kendall Lowrey", "Emanuel Todorov", "Sham M. Kakade" ], "title": "Towards generalization and simplicity in continuous control", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael Jordan", "Philipp Moritz" ], "title": "Trust region policy optimization", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "John Schulman", "Philipp Moritz", "Sergey Levine", "Michael Jordan", "Pieter Abbeel" ], "title": "Highdimensional continuous control using generalized advantage estimation", "venue": "arXiv preprint arXiv:1506.02438,", "year": 2015 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "David Silver", "Julian Schrittwieser", "Karen Simonyan", "Ioannis Antonoglou", "Aja Huang", "Arthur Guez", "Thomas Hubert", "Lucas Baker", "Matthew Lai", "Adrian Bolton" ], "title": "Mastering the game of go without human knowledge", "venue": null, "year": 2017 }, { "authors": [ "Richard S. Sutton", "David A. McAllester", "Satinder P. Singh", "Yishay Mansour" ], "title": "Policy gradient methods for reinforcement learning with function approximation", "venue": "In NIPS,", "year": 1999 }, { "authors": [ "George Tucker", "Surya Bhupatiraju", "Shixiang Gu", "Richard E. Turner", "Zoubin Ghahramani", "Sergey Levine" ], "title": "The mirage of action-dependent baselines in reinforcement learning", "venue": null, "year": 2018 }, { "authors": [ "Ronald J. Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine Learning,", "year": 1992 }, { "authors": [ "Amy Zhang", "Yuxin Wu", "Joelle Pineau" ], "title": "Natural environment benchmarks for reinforcement learning, 2018", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep reinforcement learning (RL) algorithms have fueled many of the most publicized achievements in modern machine learning (Silver et al., 2017; OpenAI, 2018; Abbeel & Schulman, 2016; Mnih et al., 2013). However, despite these accomplishments, deep RL methods still are not nearly as reliable as their (deep) supervised learning counterparts. Indeed, recent research found the existing deep RL methods to be brittle (Henderson et al., 2017; Zhang et al., 2018), hard to reproduce (Henderson et al., 2017; Tucker et al., 2018), unreliable across runs (Henderson et al., 2017; 2018), and sometimes outperformed by simple baselines (Mania et al., 2018).\nThe prevalence of these issues points to a broader problem: we do not understand how the parts comprising deep RL algorithms impact agent training, either separately or as a whole. This unsatisfactory understanding suggests that we should re-evaluate the inner workings of our algorithms. Indeed, the overall question motivating our work is: how do the multitude of mechanisms used in deep RL training algorithms impact agent behavior?\nOur contributions. We analyze the underpinnings of agent behavior—both through the traditional metric of cumulative reward, and by measuring more fine-grained algorithmic properties. As a first step towards tackling this question, we conduct a case study of two of the most popular deep policygradient methods: Trust Region Policy Optimization (TRPO) (Schulman et al., 2015a) and Proximal Policy Optimization (PPO) (Schulman et al., 2017). These two methods are closely related: PPO was originally developed as a refinement of TRPO.\nWe find that much of the PPO’s observed improvement in performance comes from seemingly small modifications to the core algorithm that either can be found only in a paper’s original implementation, or are described as auxiliary details and are not present in the corresponding TRPO baselines. 1 We pinpoint these modifications, and perform an ablation study demonstrating that they are instrumental to the PPO’s performance.\n∗Equal contribution. Work done in part as an intern at Two Sigma. 1Note that these code-level optimizations are separate from “implementation choices” like the choice of\nPyTorch versus TensorFlow in that they intentionally change the training algorithm’s operation.\nThis observation prompts us to study how such code-level optimizations change agent training dynamics, and whether we can truly think of them as merely auxiliary improvements. Our results indicate that these optimizations fundamentally change algorithms’ operation, and go even beyond improvements in agent reward. We find that they majorly impact a key algorithmic principle behind TRPO and PPO’s operations: trust region enforcement.\nUltimately, we discover that the PPO code-optimizations are more important in terms of final reward achieved than the choice of general training algorithm (TRPO vs. PPO). This result is in stark contrast to the previous view that the central PPO clipping method drives the gains seen in Schulman et al. (2017). In doing so, we demonstrate that the algorithmic changes imposed by such optimizations make rigorous comparisons of algorithms difficult. Without a rigorous understanding of the full impact of code-level optimizations, we cannot hope to gain any reliable insight from comparing algorithms on benchmark tasks.\nOur results emphasize the importance of building RL methods in a modular manner. 
To progress towards more performant and reliable algorithms, we need to understand each component’s impact on agent behavior and performance—both individually, and as part of a whole.\nCode for all the results shown in this work is available at https://github.com/MadryLab/implementation-matters." }, { "heading": "2 RELATED WORK", "text": "The idea of using gradient estimates to update neural network–based RL agents dates back at least to the work of Williams (1992), who proposed the REINFORCE algorithm. Later, Sutton et al. (1999) established a unifying framework that casts the previous algorithms as instances of the policy gradient method.\nOur work focuses on proximal policy optimization (PPO) (Schulman et al., 2017) and trust region policy optimization (TRPO) (Schulman et al., 2015a), which are two of the most prominent policy gradient algorithms used in deep RL. Much of the original inspiration for the usage of trust regions stems from the conservative policy update of Kakade (2001). This policy update, similarly to TRPO, uses a natural gradient descent-based greedy policy update. TRPO also bears similarity to the relative entropy policy search method of Peters et al. (2010), which constrains the distance between marginal action distributions (whereas TRPO constrains the conditionals of such action distributions).\nNotably, Henderson et al. (2017) points out a number of brittleness, reproducibility, and experimental practice issues in deep RL algorithms. Importantly, we build on the observation of Henderson et al. (2017) that the final reward for a given algorithm is greatly influenced by the code base used. Rajeswaran et al. (2017) and Mania et al. (2018) also demonstrate that on many of the benchmark tasks, the performance of PPO and TRPO can be matched by fairly elementary randomized search approaches. Additionally, Tucker et al. (2018) showed that one of the recently proposed extensions of the policy gradient framework, i.e., the usage of baseline functions that are also action-dependent (in addition to being state-dependent), might not lead to better policies after all." }, { "heading": "3 ATTRIBUTING SUCCESS IN PROXIMAL POLICY OPTIMIZATION", "text": "Our overarching goal is to better understand the underpinnings of the behavior of deep policy gradient methods. We thus perform a careful study of two tightly linked algorithms: TRPO and PPO (recall that PPO is motivated as TRPO with a different trust region enforcement mechanism). To better understand these methods, we start by thoroughly investigating their implementations in practice. We find that in comparison to TRPO, the PPO implementation contains many non-trivial optimizations that are not (or only barely) described in its corresponding paper. Indeed, the standard implementation of PPO 2 contains the following additional optimizations:\n2From the OpenAI baselines GitHub repository: https://github.com/openai/baselines\n1. Value function clipping: Schulman et al. (2017) originally suggest fitting the value network via regression to target values:\n$L^V = (V_{\theta_t} - V_{targ})^2$,\nbut the standard implementation instead fits the value network with a PPO-like objective:\n$L^V = \min\left[ (V_{\theta_t} - V_{targ})^2,\ \left( \mathrm{clip}\left( V_{\theta_t},\ V_{\theta_{t-1}} - \varepsilon,\ V_{\theta_{t-1}} + \varepsilon \right) - V_{targ} \right)^2 \right]$,\nwhere $V_\theta$ is clipped around the previous value estimates (and ε is fixed to the same value as the value used in (2) to clip the probability ratios).\n2. 
Reward scaling: Rather than feeding the rewards directly from the environment into the objective, the PPO implementation performs a certain discount-based scaling scheme. In this scheme, the rewards are divided through by the standard deviation of a rolling discounted sum of the rewards (without subtracting and re-adding the mean)—see Algorithm 1 in Appendix A.2.\n3. Orthogonal initialization and layer scaling: Instead of using the default weight initialization scheme for the policy and value networks, the implementation uses an orthogonal initialization scheme with scaling that varies from layer to layer.\n4. Adam learning rate annealing: Depending on the task, the implementation sometimes anneals the learning rate of Adam (Kingma & Ba, 2014) (an already adaptive method) for optimization.\n5. Reward Clipping: The implementation also clips the rewards within a preset range (usually [−5, 5] or [−10, 10]).\n6. Observation Normalization: In a similar manner to the rewards, the raw states are also not fed into the optimizer. Instead, the states are first normalized to mean-zero, variance-one vectors.\n7. Observation Clipping: Analogously to rewards, the observations are also clipped within a range, usually [−10, 10].\n8. Hyperbolic tan activations: As also observed by Henderson et al. (2017), implementations of policy gradient algorithms also use hyperbolic tangent function activations between layers in the policy and value networks.\n9. Global Gradient Clipping: After computing the gradient with respect to the policy and the value networks, the implementation clips the gradients such that the “global ℓ2 norm” (i.e., the norm of the concatenated gradients of all parameters) does not exceed 0.5.\nThese optimizations may appear as merely surface-level or insignificant algorithmic changes to the core policy gradient method at hand. However, we find that they dramatically affect the performance of PPO. To demonstrate this, we start by performing a full ablation study on the four optimizations mentioned above 3. Figure 1 shows a histogram of the final rewards of agents trained with every possible configuration of the above optimizations—for each configuration, a grid search for the optimal learning rate is performed, and we measure the reward of randomly seeded agents trained using the identified learning rate. Our findings suggest that many code-level optimizations are necessary for PPO to attain its claimed performance.\nThe above findings show that our ability to understand PPO from an algorithmic perspective hinges on the ability to distill out its fundamental principles from such algorithm-independent (in the sense that these optimizations can be implemented for any policy gradient method) optimizations. We thus consider a variant of PPO called PPO-MINIMAL (PPO-M) which implements only the core of the algorithm. PPO-M uses the standard value network loss, no reward scaling, the default network initialization, and Adam with a fixed learning rate. Importantly, PPO-M ignores all the code-level optimizations listed at the beginning of Section 3. We then explore PPO-M alongside PPO and TRPO. (A short code sketch of the value function clipping optimization from item 1 is given below.) 
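As referenced above, here is a minimal PyTorch sketch of optimization 1 (value function clipping). This is our illustration, not the reference implementation; it follows the min in the formula above, and we note that some public implementations take the elementwise max of the two squared errors instead.

```python
import torch

def clipped_value_loss(v_pred, v_pred_old, v_target, eps=0.2):
    """Value function clipping: the squared error is replaced by the min of the
    unclipped error and the error of a prediction clipped to within eps of the
    previous value estimate, mirroring the formula in item 1 above."""
    unclipped = (v_pred - v_target) ** 2
    clipped = (torch.clamp(v_pred, v_pred_old - eps, v_pred_old + eps) - v_target) ** 2
    return torch.min(unclipped, clipped).mean()

# toy usage with dummy tensors
v_pred = torch.randn(8, requires_grad=True)
loss = clipped_value_loss(v_pred, v_pred.detach() + 0.05, torch.zeros(8))
loss.backward()
```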
We list all the algorithms we study and their defining properties in Table 1.\nOverall, our results on the importance of these optimizations both corroborate results demonstrating the brittleness of deep policy gradient methods, and demonstrate that even beyond environmental brittleness, the algorithms themselves exhibit high sensitivity to implementation choices 4.\n3Due to restrictions on computational resources, we could only perform a full ablation on the first four of the identified optimizations.\n4This might also explain the difference between different codebases observed in Henderson et al. (2017)" }, { "heading": "4 CODE-LEVEL OPTIMIZATIONS HAVE ALGORITHMIC EFFECTS", "text": "In the previous section, we found that canonical implementations of PPO contain many code-level optimizations: implementation choices that are not integral to the method but profoundly impact performance.\nThe seemingly disproportionate effect of code-level optimizations identified in our ablation study may lead us to ask: how do these seemingly superficial code-level optimizations impact underlying agent behavior? In this section, we demonstrate that the code-level optimizations fundamentally alter agent behavior. Rather than merely improving ultimate cumulative reward, such optimizations directly impact the principles motivating the core algorithms.\nTrust Region Optimization. A key property of policy gradient algorithms is that update steps computed at any specific policy $\pi_{\theta_t}$ are only guaranteed to be predictive in a neighborhood around $\theta_t$. Thus, to ensure that the update steps we derive remain predictive, many policy gradient algorithms ensure that these steps stay in the vicinity of the current policy. The resulting “trust region” methods (Kakade, 2001; Schulman et al., 2015a; 2017) try to constrain the local variation of the parameters in policy-space by restricting the distributional distance between successive policies.\nA popular method in this class is trust region policy optimization (TRPO) (Schulman et al., 2015a). TRPO constrains the KL divergence between successive policies on the optimization trajectory, leading to the following problem:\n$\max_\theta\ \mathbb{E}_{(s_t,a_t)\sim\pi}\left[ \frac{\pi_\theta(a_t|s_t)}{\pi(a_t|s_t)} \hat{A}_\pi(s_t,a_t) \right] \quad \text{s.t.} \quad D_{KL}(\pi_\theta(\cdot\,|\,s)\,\|\,\pi(\cdot\,|\,s)) \le \delta,\ \forall s$. (1)\nIn practice, we maximize this objective with a second-order approximation of the KL divergence and natural gradient descent, and replace the worst-case KL constraints over all possible states with an approximation of the mean KL based on the states observed in the current trajectory.\nProximal policy optimization. One disadvantage of the TRPO algorithm is that it can be computationally costly—the step direction is estimated with nonlinear conjugate gradients, which requires the computation of multiple Hessian-vector products. To address this issue, Schulman et al. (2017) propose proximal policy optimization (PPO), which tries to enforce a trust region with a different objective that does not require computing a projection. Concretely, PPO proposes replacing the KL-constrained objective (1) of TRPO by clipping the objective function directly as:\n$\max_\theta\ \mathbb{E}_{(s_t,a_t)\sim\pi}\left[ \min\left( \mathrm{clip}(\rho_t,\ 1-\varepsilon,\ 1+\varepsilon)\,\hat{A}_\pi(s_t,a_t),\ \rho_t\,\hat{A}_\pi(s_t,a_t) \right) \right]$ (2)\nwhere\n$\rho_t = \frac{\pi_\theta(a_t|s_t)}{\pi(a_t|s_t)}$. (3)\nNote that this objective can be optimized without an explicit projection step, leading to a simpler parameter update during training. In addition to its simplicity, PPO is intended to be faster and more sample-efficient than TRPO (Schulman et al., 2017).\nTrust regions in TRPO and PPO. 
Enforcing a trust region is a core algorithmic property of different policy gradient methods. However, whether or not a trust region is enforced is not directly observable from the final rewards. So, how does this algorithmic property vary across state-of-the-art policy gradient methods?\nIn Figure 2 we measure the mean KL divergence between successive policies in a training run of both TRPO and PPO-M (PPO without code-level optimizations). Recall that TRPO is designed specifically to constrain this KL-based trust region, while the clipping mechanism of PPO attempts to approximate it. Indeed, we find that TRPO precisely enforces this trust region (this is unsurprising, and nearly by construction).\nWe thus turn our attention to the trust regions induced by training with PPO and PPO-M. First, we consider mathematically the contribution of a single state-action pair to the gradient of the PPO objective, which is given by\n$\nabla_\theta L^{PPO} = \begin{cases} \nabla_\theta L_\theta & \text{if } \frac{\pi_\theta(a|s)}{\pi(a|s)} \in [1-\varepsilon,\ 1+\varepsilon] \text{ or } L^C_\theta < L_\theta \\ 0 & \text{otherwise,} \end{cases}$\nwhere\n$L_\theta := \mathbb{E}_{(s,a)\in\tau\sim\pi}\left[ \frac{\pi_\theta(a|s)}{\pi(a|s)} A_\pi(s,a) \right]$,\nand\n$L^C_\theta := \mathbb{E}_{(s,a)\in\tau\sim\pi}\left[ \mathrm{clip}\left( \frac{\pi_\theta(a|s)}{\pi(a|s)},\ 1-\varepsilon,\ 1+\varepsilon \right) A_\pi(s,a) \right]$\nare respectively the standard and clipped versions of the surrogate objective. As a result, since we initialize $\pi_\theta$ as $\pi$ (and thus the ratios start all equal to one) the first step we take is identical to a maximization step over the unclipped surrogate objective. It thus stands to reason that the nature of the trust region enforced is heavily dependent on the method with which the clipped PPO objective is optimized, rather than on the objective itself. Therefore, the size of the step we take is determined solely by the steepness of the surrogate landscape (i.e., the Lipschitz constant of the optimization problem we solve), and we can end up moving arbitrarily far from the trust region. We hypothesize that this dependence of PPO on properties of the optimizer rather than the optimization objective contributes to the brittleness of the algorithm to hyperparameters such as learning rate and momentum, as observed by Henderson et al. (2018) and others.\nThe results we observe (shown in Figure 2) corroborate this intuition. For agents trained with optimal parameters, all three algorithms are able to maintain a KL-based trust region. First, we note that all three algorithms fail to maintain a ratio-based trust region, despite PPO and PPO-M being trained directly with a ratio-clipping objective. Furthermore, the nature of the KL trust region enforced differs between PPO and PPO-M, despite the fact that the core algorithm remains constant between the two methods; while PPO-M KL trends up as the number of iterations increases, PPO KL peaks halfway through training before trending down again.\nThe findings from this experiment and the corresponding calculations demonstrate that perhaps a key factor in the behavior of PPO-trained agents even from an algorithmic viewpoint comes from auxiliary optimizations, rather than the core methodology." }, { "heading": "5 IDENTIFYING ROOTS OF ALGORITHMIC PROGRESS", "text": "State-of-the-art deep policy gradient methods are composed of many interacting components. At what is generally described as their core, these methods incorporate mechanisms like trust region-enforcing steps, time-dependent value predictors, and advantage estimation methods for controlling the exploitation/exploration trade-off (Schulman et al., 2015b). However, these algorithms also incorporate many less oft-discussed optimizations (cf. 
Section 3) that ultimately dictate much of agent behavior (cf. Section 4). Given the need to improve on these algorithms, the fact that such optimizations are so important raises the question: how do we identify the true roots of algorithmic progress in deep policy gradient methods?\nUnfortunately, we find that answering this question is not easy. Going back to our study of PPO and TRPO, it is widely believed (and claimed) that the key innovation of PPO responsible for its improved performance over the baseline of TRPO is the ratio clipping mechanism discussed in Section 4. However, we have already shown that this clipping mechanism is theoretically insufficient to maintain a trust region, and also that the method by which the objective is optimized appears to have a significant effect on the resulting trust region. If code-level optimizations are thus (at least partially) responsible for algorithmic properties of PPO, is it possible that they are also a key factor in PPO’s improved performance?\nTo address this question, we set out to further disentangle the impact of PPO’s core clipping mechanism from its code-level optimizations by once again considering variations on the PPO and TRPO algorithms. Specifically, we examine how employing the core PPO and TRPO steps changes model performance while controlling for the effect of code-level optimizations identified in standard implementations of PPO (in particular, we focus on those covered in Section 3). (Note that these code-level optimizations are largely algorithm-independent: they can be straightforwardly applied or lightly adapted to any policy gradient method.) The previously introduced PPO-M algorithm corresponds to PPO without these optimizations. To further account for their effects, we study an additional algorithm which we denote as TRPO+, consisting of the core algorithmic contribution of TRPO in combination with PPO’s code-level optimizations as identified in Section 3 5. We note that TRPO+ together with the other three algorithms introduced (PPO, PPO-M, and TRPO; all listed in Table 1) now capture all combinations of core algorithms and code-level optimizations, allowing us to study the impact of each in a fine-grained manner.\nAs our results show in Table 2, it turns out that code-level optimizations often contribute significantly more to an algorithm’s performance than the choice of the core algorithm (i.e., using PPO vs. TRPO). For example, on Hopper-v2, PPO and TRPO see 17% and 21% improvements (respectively) when equipped with code-level optimizations. At the same time, for all tasks, after fixing the choice of whether or not to use optimizations, the core algorithm employed does not seem to have a significant impact on reward. In Table 2 we quantify this contrast through the following two metrics, which we denote average algorithmic improvement (AAI) and average code-level improvement (ACLI):\nAAI = max{|PPO − TRPO+|, |PPO-M − TRPO|},\nACLI = max{|PPO − PPO-M|, |TRPO+ − TRPO|}.\nIn short, AAI measures the maximal effect of switching step algorithms (from PPO to TRPO or vice-versa), whereas ACLI measures the maximal effect of adding in code-level optimizations for a fixed choice of step algorithm.\nPPO without clipping. 
Given the relative insignificance of the step mechanism compared to the use of code-level optimizations, we are prompted to ask: to what extent is the clipping mechanism of PPO actually responsible for the algorithm’s success?\n5We also add a new code-level optimization, a KL decay, inapplicable to PPO but meant to serve as the analog of Adam learning rate annealing.\nIn Table 3, we assess this by considering a PPO-NOCLIP algorithm which makes use of common code-level optimizations (by gridding over the best possible combination of such optimizations) but does not employ a clipping mechanism (this is the same algorithm we studied in Section 4 in the context of trust region enforcement)—recall that we list all the algorithms studied in Table 1.\nIt turns out that the clipping mechanism is not necessary to achieve high performance—we find that PPO-NOCLIP performs uniformly better than PPO-M, despite the latter employing the core PPO clipping mechanism. Our results suggest that the introduction of code-level optimizations outweighs even the core PPO algorithm in terms of effect on rewards. In fact, we find that with sufficient hyperparameter tuning, PPO-NOCLIP often matches the performance of standard PPO, which includes a standard configuration of code-level optimizations6. We also include benchmark PPO numbers from the OpenAI baselines repository (Dhariwal et al., 2017) where available to put results into context.\nOur results suggest that it is difficult to attribute success to different aspects of policy gradient algorithms without careful analysis." }, { "heading": "6 CONCLUSION", "text": "In this work, we take a first step in examining how the mechanisms powering deep policy gradient methods impact agents both in terms of achieved reward and underlying algorithmic behavior. Wanting to understand agent operation from the ground up, we take a deep dive into the operation of two of the most popular deep policy gradient methods: TRPO and PPO. In doing so, we identify a number of “code-level optimizations”—algorithm augmentations found only in algorithms’ implementations or described as auxiliary details in their presentation—and find that these optimizations have a drastic effect on agent performance.\nIn fact, these seemingly unimportant optimizations fundamentally change algorithm operation in ways unpredicted by the conceptual policy gradient framework. Indeed, the optimizations have a profound effect on the nature of the trust region enforced by policy gradient algorithms, even controlling for the surrogate objective being optimized. We go on to test the importance of code-level optimizations in agent performance, and find that PPO’s marked improvement over TRPO (and even stochastic gradient descent) can be largely attributed to these optimizations.\nOverall, our results highlight the necessity of designing deep RL methods in a modular manner. When building algorithms, we should understand precisely how each component impacts agent training—both in terms of overall performance and underlying algorithmic behavior. It is impossible to properly attribute successes and failures in the complicated systems that make up deep RL methods without such diligence. 
More broadly, our findings suggest that developing an RL toolkit will require moving beyond the current benchmark-driven evaluation model to a more fine-grained understanding of deep RL methods.\n6Note that it is possible that further refinement of the code-level optimizations could be added on top of PPO to perhaps improve its performance to an even greater extent (after all, PPO-NOCLIP can only express a subset of the training algorithms covered by PPO, as the latter leaves the clipping severity ε as a free parameter)" }, { "heading": "A APPENDIX", "text": "A.1 EXPERIMENTAL SETUP\nAll the hyperparameters used in this paper were obtained through grid searches. For PPO, the exact code-level optimizations and their associated hyperparameters (e.g., coefficients for entropy regularization, reward clipping, etc.) were taken from the OpenAI baselines repository 7, and gridding is performed over the value function learning rate, the clipping constant, and the learning rate schedule. In TRPO, we grid over the same parameters (replacing learning rate schedule with the KL constraint), but omit the code-level optimizations. For PPO-NoClip, we grid over the same parameters as PPO, in addition to the configuration of code-level optimizations (since we lack a good reference for what the optimal configuration of these optimizations is). For TRPO+, we also grid over the code-level optimizations, and also implement a “KL schedule” whereby the KL constraint can change over training (analogous to the learning rate annealing optimization in PPO). Finally, for PPO-M, we grid over the same parameters as PPO (just learning rate schedules), without any code-level optimizations. The final parameters for each algorithm are given below:\nAll error bars we plot are 95% confidence intervals, obtained via bootstrapped sampling.\n7https://github.com/openai/baselines\nA.2 PPO CODE-LEVEL OPTIMIZATIONS\nAlgorithm 1 PPO scaling optimization.\nprocedure INITIALIZE-SCALING()\n  R0 ← 0\n  RS ← RUNNINGSTATISTICS() ▷ new running-stats class that tracks mean and standard deviation\nprocedure SCALE-OBSERVATION(rt) ▷ input: a reward rt\n  Rt ← γRt−1 + rt ▷ γ is the reward discount\n  ADD(RS, Rt)\n  return rt / STANDARD-DEVIATION(RS) ▷ returns the scaled reward\nA.3 TRUST REGION OPTIMIZATION" } ]
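Algorithm 1 above is pseudocode; a runnable Python version might look like the following. This is a sketch under our own naming (the `RunningStatistics` and `RewardScaler` classes are hypothetical); details such as whether the rolling return is reset at episode boundaries are left unspecified by the pseudocode.

```python
import math

class RunningStatistics:
    """Streaming mean/standard deviation via Welford's algorithm
    (a stand-in for the RUNNINGSTATISTICS class in Algorithm 1)."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
    def add(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
    def std(self):
        return math.sqrt(self.m2 / self.n) if self.n > 1 else 1.0

class RewardScaler:
    """SCALE-OBSERVATION from Algorithm 1: each reward is divided by the
    standard deviation of a rolling discounted sum of rewards; the mean
    is never subtracted."""
    def __init__(self, gamma=0.99):
        self.gamma, self.ret, self.stats = gamma, 0.0, RunningStatistics()
    def scale(self, reward):
        self.ret = self.gamma * self.ret + reward   # R_t <- gamma * R_{t-1} + r_t
        self.stats.add(self.ret)
        return reward / max(self.stats.std(), 1e-8)

# toy usage on a short reward stream
scaler = RewardScaler()
scaled = [scaler.scale(r) for r in [1.0, 0.5, -0.2, 2.0]]
```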
2020
IMPLEMENTATION MATTERS IN DEEP RL: A CASE STUDY ON PPO AND TRPO
SP:af656384c8eec0891912cc1893a5d827bc6efb78
[ "1. Summary: This paper proposes Capacitron, a conditional variational latent variable model for TTS which allow for controllable latent variable capacity. They optimize the Lagrangian dual of the ELBO and restrict the capacity of the rate-term through a learnable, non-negative multiplier. They demonstrate the effectiveness of their approach on a range of TTS tasks such as same-text prosody transfer and inter-text style transfer, and provide extensive analyses on their latent variable capacity (in addition to comparisons to non-variational approaches based on Tacotron).", "In this work authors present a regularized, variational autoencoder method for speech synthesis. To endow the latent space with more capacity, the authors employ a modified variational autoencoder objective, which uses a learnable Lagrange multiplier to impose a capacity limit on KL divergence between latent posterior and prior. The authors furthermore propose to decompose the latent embedding space into a two-level hierarchical representation to give generative process more control over style transfer and sample-to-sample variance. They extend earlier theoretical results providing upper bounds on the mutual information between data and its latent embedding to their hierarchical latent representation. In numerical experiments the authors evaluate their approach on a number of speech synthesis tasks involving same-text prosody transfer, inter-text style transfer, inter-speaker prosody transfer. They also analyze speech samples generated from latent samples drawn from the prior." ]
Recent work has explored sequence-to-sequence latent variable models for expressive speech synthesis (supporting control and transfer of prosody and style), but has not presented a coherent framework for understanding the trade-offs between the competing methods. In this paper, we propose embedding capacity (the amount of information the embedding contains about the data) as a unified method of analyzing the behavior of latent variable models of speech, comparing existing heuristic (non-variational) methods to variational methods that are able to explicitly constrain capacity using an upper bound on representational mutual information. In our proposed model (Capacitron), we show that by adding conditional dependencies to the variational posterior such that it matches the form of the true posterior, the same model can be used for high-precision prosody transfer, text-agnostic style transfer, and generation of natural-sounding prior samples. For multi-speaker models, Capacitron is able to preserve target speaker identity during inter-speaker prosody transfer and when drawing samples from the latent prior. Lastly, we introduce a method for decomposing embedding capacity hierarchically across two sets of latents, allowing a portion of the latent variability to be specified and the remaining variability sampled from a learned prior. Audio examples are available on the web1.
[]
[ { "authors": [ "REFERENCES Alexander Alemi", "Ben Poole", "Ian Fischer", "Joshua Dillon", "Rif A Saurous", "Kevin Murphy" ], "title": "Fixing a broken elbo", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Eric Battenberg", "RJ Skerry-Ryan", "Soroosh Mariooryad", "Daisy Stanton", "David Kao", "Matt Shannon", "Tom Bagby" ], "title": "Location-relative attention mechanisms for robust long-form speech synthesis", "venue": null, "year": 1910 }, { "authors": [ "Christopher P Burgess", "Irina Higgins", "Arka Pal", "Loic Matthey", "Nick Watters", "Guillaume Desjardins", "Alexander Lerchner" ], "title": "Understanding disentangling in β-vae", "venue": "arXiv preprint arXiv:1804.03599,", "year": 2018 }, { "authors": [ "Alex Graves" ], "title": "Generating sequences with recurrent neural networks", "venue": "arXiv preprint arXiv:1308.0850,", "year": 2013 }, { "authors": [ "Gustav Eje Henter", "Jaime Lorenzo-Trueba", "Xin Wang", "Junichi Yamagishi" ], "title": "Deep encoderdecoder models for unsupervised learning of controllable speech synthesis", "venue": "arXiv preprint arXiv:1807.11470,", "year": 2018 }, { "authors": [ "Irina Higgins", "Loic Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "beta-vae: Learning basic visual concepts with a constrained variational framework", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Matthew D Hoffman", "Matthew J Johnson" ], "title": "Elbo surgery: yet another way to carve up the variational evidence lower bound", "venue": "In Workshop in Advances in Approximate Bayesian Inference,", "year": 2016 }, { "authors": [ "Wei-Ning Hsu", "Yu Zhang", "Ron Weiss", "Heiga Zen", "Yonghui Wu", "Yuan Cao", "Yuxuan Wang" ], "title": "Hierarchical generative modeling for controllable speech synthesis", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Diederik Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "International Conference for Learning Representations,", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-Encoding Variational Bayes", "venue": "In International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "R Kubichek" ], "title": "Mel-cepstral distance measure for objective speech quality assessment", "venue": "In Communications, Computers and Signal Processing,", "year": 1993 }, { "authors": [ "Younggun Lee", "Taesu Kim" ], "title": "Robust and fine-grained prosody control of end-to-end speech synthesis", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2019 }, { "authors": [ "Shuang Ma", "Daniel Mcduff", "Yale Song" ], "title": "A generative adversarial network for style modeling in a text-to-speech system", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Meinard Müller" ], "title": "Dynamic time warping. Information retrieval for music and motion, pp", "venue": null, "year": 2007 }, { "authors": [ "Wei Ping", "Kainan Peng", "Andrew Gibiansky", "Sercan O. 
Arik", "Ajay Kannan", "Sharan Narang", "Jonathan Raiman", "John Miller" ], "title": "Deep voice 3: 2000-speaker neural text-to-speech", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Danilo Rezende", "Shakir Mohamed" ], "title": "Variational inference with normalizing flows", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Jonathan Shen", "Ruoming Pang", "Ron J Weiss", "Mike Schuster", "Navdeep Jaitly", "Zongheng Yang", "Zhifeng Chen", "Yu Zhang", "Yuxuan Wang", "Rj Skerrv-Ryan" ], "title": "Natural tts synthesis by conditioning wavenet on mel spectrogram predictions", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2018 }, { "authors": [ "RJ Skerry-Ryan", "Eric Battenberg", "Ying Xiao", "Yuxuan Wang", "Daisy Stanton", "Joel Shor", "Ron Weiss", "Rob Clark", "Rif A. Saurous" ], "title": "Towards end-to-end prosody transfer for expressive speech synthesis with Tacotron", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Casper Kaae Sønderby", "Tapani Raiko", "Lars Maaløe", "Søren Kaae Sønderby", "Ole Winther" ], "title": "Ladder variational autoencoders", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Jose Sotelo", "Soroush Mehri", "Kundan Kumar", "Joao Felipe Santos", "Kyle Kastner", "Aaron Courville", "Yoshua Bengio" ], "title": "Char2wav: End-to-end speech synthesis", "venue": "In International Conference on Learning Representations, Workshop Track,", "year": 2017 }, { "authors": [ "Daisy Stanton", "Yuxuan Wang", "RJ Skerry-Ryan" ], "title": "Predicting expressive speaking style from text in end-to-end speech synthesis", "venue": "IEEE Spoken Language Technology Workshop (SLT),", "year": 2018 }, { "authors": [ "Yaniv Taigman", "Lior Wolf", "Adam Polyak", "Eliya Nachmani" ], "title": "Voiceloop: Voice fitting and synthesis via a phonological loop", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Aäron van den Oord", "Sander Dieleman", "Heiga Zen", "Karen Simonyan", "Oriol Vinyals", "Alex Graves", "Nal Kalchbrenner", "Andrew Senior", "Koray Kavukcuoglu" ], "title": "Wavenet: A generative model for raw audio", "venue": "In 9th ISCA Speech Synthesis Workshop,", "year": 2016 }, { "authors": [ "Michael Wagner", "Duane G Watson" ], "title": "Experimental and theoretical advances in prosody: A review", "venue": "Language and cognitive processes,", "year": 2010 }, { "authors": [ "Yuxuan Wang", "RJ Skerry-Ryan", "Daisy Stanton", "Yonghui Wu", "Ron J. Weiss", "Navdeep Jaitly", "Zongheng Yang", "Ying Xiao", "Zhifeng Chen", "Samy Bengio", "Quoc Le", "Yannis Agiomyrgiannakis", "Rob Clark", "Rif A. Saurous" ], "title": "Tacotron: Towards end-to-end speech synthesis", "venue": "Proceedings of Interspeech,", "year": 2017 }, { "authors": [ "Yuxuan Wang", "Daisy Stanton", "Yu Zhang", "RJ-Skerry Ryan", "Eric Battenberg", "Joel Shor", "Ying Xiao", "Ye Jia", "Fei Ren", "Rif" ], "title": "A Saurous. 
Style tokens: Unsupervised style modeling, control and transfer in end-to-end speech synthesis", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Ya-Jie Zhang", "Shifeng Pan", "Lei He", "Zhen-Hua Ling" ], "title": "Learning latent representations for style control and transfer in end-to-end speech synthesis", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2019 } ]
[ { "heading": null, "text": "Recent work has explored sequence-to-sequence latent variable models for expressive speech synthesis (supporting control and transfer of prosody and style), but has not presented a coherent framework for understanding the trade-offs between the competing methods. In this paper, we propose embedding capacity (the amount of information the embedding contains about the data) as a unified method of analyzing the behavior of latent variable models of speech, comparing existing heuristic (non-variational) methods to variational methods that are able to explicitly constrain capacity using an upper bound on representational mutual information. In our proposed model (Capacitron), we show that by adding conditional dependencies to the variational posterior such that it matches the form of the true posterior, the same model can be used for high-precision prosody transfer, text-agnostic style transfer, and generation of natural-sounding prior samples. For multi-speaker models, Capacitron is able to preserve target speaker identity during inter-speaker prosody transfer and when drawing samples from the latent prior. Lastly, we introduce a method for decomposing embedding capacity hierarchically across two sets of latents, allowing a portion of the latent variability to be specified and the remaining variability sampled from a learned prior. Audio examples are available on the web1." }, { "heading": "1 INTRODUCTION", "text": "The synthesis of realistic human speech is a challenging problem that is important for natural humancomputer interaction. End-to-end neural network-based approaches have seen significant progress in recent years (Wang et al., 2017; Taigman et al., 2018; Ping et al., 2018; Sotelo et al., 2017), even matching human performance for short assistant-like utterances (Shen et al., 2018). However, these neural models are sometimes viewed as less interpretable or controllable than more traditional models composed of multiple stages of processing that each operate on reified linguistic or phonetic representations.\nText-to-speech (TTS) is an underdetermined problem, meaning the same text input has an infinite number of reasonable spoken realizations. In addition to speaker and channel characteristics, important sources of variability in TTS include intonation, stress, and rhythm (collectively referred to as prosody). These attributes convey linguistic, semantic, and emotional meaning beyond what is present in the lexical representation (i.e., the text) (Wagner & Watson, 2010). Recent end-to-end TTS research has aimed to model and/or directly control the remaining variability in the output.\nSkerry-Ryan et al. (2018) augment a Tacotron-like model (Wang et al., 2017) with a deterministic encoder that projects reference speech into a learned embedding space. The system can be used for prosody transfer between speakers (“say it like this”), but does not work for transfer between unrelated sentences, and does not preserve the pitch range of the target speaker. Lee & Kim (2019) partially address the pitch range problem by centering the learned embeddings using speaker-wise means.\n1https://variational-embedding-capacity.github.io/demos/\nOther work targets style transfer, a text-agnostic variation on prosody transfer. The Global Style Token (GST) system (Wang et al., 2018) uses a modified attention-based reference encoder to transfer global style properties to arbitrary text, and Ma et al. (2019) use an adversarial objective to disentangle style from text.\nHsu et al. 
(2019) and Zhang et al. (2019) use a variational approach (Kingma & Welling, 2014) to tackle the style task. Advantages of this approach include its ability to generate style samples via the accompanying prior and the potential for better disentangling between latent style factors (Burgess et al., 2018). Additionally, Hsu et al. (2019) use a Gaussian mixture prior over the latents, which (when interpreting the mixture component index as a high-level discrete latent) allows a form of hierarchical control.\nThis work extends the above approaches by providing the following contributions:\n\n1. We propose a unified approach for analyzing the characteristics of TTS latent variable models, independent of architecture, using the capacity of the learned embeddings (i.e., the representational mutual information between the embedding and the data).\n2. We target specific capacities for our proposed model using a Lagrange multiplier-based optimization scheme, and show that capacity is correlated with perceptual reference similarity.\n3. We show that modifying the variational posterior to match the form of the true posterior enables style and prosody transfer in the same model, helps preserve target speaker identity during inter-speaker transfer, and leads to natural-sounding prior samples even at high embedding capacities.\n4. We introduce a method for controlling what fraction of the variation represented in an embedding is specified, allowing the remaining variation to be sampled from the model." }, { "heading": "2 MEASURING REFERENCE EMBEDDING CAPACITY", "text": "" }, { "heading": "2.1 LEARNING A REFERENCE EMBEDDING SPACE", "text": "Existing heuristic (non-variational) end-to-end approaches to prosody and style transfer (Skerry-Ryan et al., 2018; Wang et al., 2018; Lee & Kim, 2019; Henter et al., 2018) typically start with the teacher-forced reconstruction loss, (1), used to train Tacotron-like sequence-to-sequence models and simply augment the model with a deterministic reference encoder, $g_e(x)$, as shown in eq. (2).\n$\mathcal{L}(x, y_T, y_S) \equiv -\log p(x|y_T, y_S) = \| f_\theta(y_T, y_S) - x \|_1 + K$ (1)\n$\mathcal{L}'(x, y_T, y_S) \equiv -\log p(x|y_T, y_S, g_e(x)) = \| f_\theta(y_T, y_S, g_e(x)) - x \|_1 + K$ (2)\nwhere $x$ is an audio spectrogram, $y_T$ is the input text, $y_S$ is the target speaker (if training a multi-speaker model), $f_\theta(\cdot)$ is a deterministic function that maps the inputs to spectrogram predictions, and $K$ is a normalization constant. Teacher-forcing implies that $f_\theta(\cdot)$ is dependent on $x_{<t}$ when predicting spectrogram frame $x_t$. In practice, $f_\theta(\cdot)$ serves as the greedy deterministic output of the model, and transfer is accomplished by pairing the embedding computed by the reference encoder with different text or speakers during synthesis.\nIn these heuristic models, the architecture chosen for the reference encoder determines the transfer characteristics of the model. This decision affects the information capacity of the embedding and allows the model to target a specific trade-off between transfer precision (how closely the output resembles the reference) and generality (how well an embedding works when paired with arbitrary text). Higher capacity embeddings prioritize precision and are better suited for prosody transfer to similar text, while lower capacity embeddings prioritize generality and are better suited for text-agnostic style transfer.\nThe variational extensions from Hsu et al. (2019) and Zhang et al. (2019) augment the reconstruction loss in eq. (2) with a KL divergence term. 
This encourages a stochastic reference encoder (variational posterior), $q(z|x)$, to align well with a prior, $p(z)$ (eq. (3)). The overall loss is then equivalent to the negative evidence lower bound (ELBO) of the marginal likelihood of the data (Kingma & Welling, 2014).\n$\mathcal{L}_{ELBO}(x, y_T, y_S) \equiv \mathbb{E}_{z \sim q(z|x)}[-\log p(x|z, y_T, y_S)] + D_{KL}(q(z|x) \,\|\, p(z))$ (3)\n$-\log p(x|y_T, y_S) \le \mathcal{L}_{ELBO}(x, y_T, y_S)$ (4)\nControlling embedding capacity in variational models can be accomplished more directly by manipulating the KL term in (3). Recent work has shown that the KL term provides an upper bound on the mutual information between the data, $x$, and the latent embedding, $z \sim q(z|x)$ (Hoffman & Johnson, 2016; Makhzani et al., 2015; Alemi et al., 2018).\n$R^{AVG} \equiv \mathbb{E}_{x \sim p_D(x)}[D_{KL}(q(z|x) \,\|\, p(z))], \quad R \equiv D_{KL}(q(z|x) \,\|\, p(z))$ (5)\n$I_q(X;Z) \equiv \mathbb{E}_{x \sim p_D(x)}[D_{KL}(q(z|x) \,\|\, q(z))], \quad q(z) \equiv \mathbb{E}_{x \sim p_D(x)} q(z|x)$ (6)\n$R^{AVG} = I_q(X;Z) + D_{KL}(q(z) \,\|\, p(z))$ (7)\n$\Rightarrow I_q(X;Z) \le R^{AVG}$ (8)\nwhere $p_D(x)$ is the data distribution, $R$ is the KL term in (3), $R^{AVG}$ is the KL term averaged over the data distribution, $I_q(X;Z)$ is the representational mutual information (the capacity of $z$), and $q(z)$ is the aggregated posterior. This brief derivation is expanded in Appendix C.1.\nThe bound in (8) follows from (7) and the non-negativity of the KL divergence, and (7) shows that the slack on the bound is $D_{KL}(q(z) \,\|\, p(z))$, the aggregate KL. In addition to providing a tighter bound, having a low aggregate KL is desirable when sampling from the model via the prior, because then the samples of $z$ that the decoder sees during training will be very similar to samples from the prior.\nVarious approaches to controlling the KL term have been proposed, including varying a weight on the KL term, β (Higgins et al., 2017), and penalizing its deviation from a target value (Alemi et al., 2018; Burgess et al., 2018). Because we would like to smoothly optimize for a specific bound on the embedding capacity, we adapt the Lagrange multiplier-based optimization approach of Rezende & Viola (2018) by applying it to the KL term rather than the reconstruction term.\n$\min_\theta \max_{\beta \ge 0} \left\{ \mathbb{E}_{z \sim q_\theta(z|x)}[-\log p_\theta(x|z, y_T, y_S)] + \beta \left( D_{KL}(q_\theta(z|x) \,\|\, p(z)) - C \right) \right\}$ (9)\nwhere θ are the model parameters, β serves as an automatically-tuned weight on the KL term, C is the capacity limit, and updates to θ and β are interleaved during training. We constrain β to be non-negative by passing an unconstrained parameter through a softplus non-linearity, which makes the capacity constraint a limit rather than a target. This approach is less tedious than tuning β by hand and leads to more consistent behavior from run-to-run. It also allows more stable optimization than directly penalizing the ℓ1 deviation from the target KL." }, { "heading": "2.2 ESTIMATING EMBEDDING CAPACITY", "text": "Estimating heuristic embedding capacity Unfortunately, the heuristic methods do not come packaged with an easy way to estimate embedding capacity. We can estimate an effective capacity ordering, however, by measuring the test-time reconstruction loss when using the reference encoder from each method. In Figure 1, we show how the reconstruction loss varies with embedding dimensionality for the tanh-based prosody transfer (PT) and softmax-based global style token (GST) bottlenecks (Skerry-Ryan et al., 2018; Wang et al., 2018) and for variational models (Var.) with different capacity limits, C. We also compare to a baseline Tacotron model without a reference encoder. 
For this preliminary comparison, we use the expressive single-speaker dataset and training setup described in Section 4.2. Looking at the heuristic methods in Figure 1, we see that the GST bottleneck is much more restrictive than the PT bottleneck, which hurts transfer precision but allows sufficient embedding generality for text-agnostic style transfer.\nBounding variational embedding capacity We saw in (8) that the KL term is an upper bound on embedding capacity, so we can directly target a specific capacity limit by constraining the KL term using the objective in eq. (9). For the three values of C in Figure 1, we can see that the reconstruction loss flattens out once the embedding reaches a certain dimensionality. This gives us a consistent way to control embedding capacity as it only requires using a reference encoder architecture with sufficient structural capacity (at least C) to achieve the desired representational capacity in the variational embedding. Because of this, we use 128-dimensional embeddings in all of our experiments, which should be sufficient for the range of capacities we target." }, { "heading": "3 MAKING EFFECTIVE USE OF EMBEDDING CAPACITY", "text": "" }, { "heading": "3.1 MATCHING THE FORM OF THE TRUE POSTERIOR", "text": "In previous work (Hsu et al., 2019; Zhang et al., 2019), the variational posterior has the form q(z|x), which matches the form of the true posterior for a simple generative model p(x|z)p(z). However, for the conditional generative model used in TTS, p(x|z,yT,yS)p(z), it is missing conditional dependencies present in the true posterior, p(z|x,yT,yS). Figure 2 shows this visually. In order to\nmatch the form of the true posterior, we inject information about the text and the speaker into the network that predicts the parameters of the variational posterior. Speaker information is represented as learned speaker-wise embedding vectors, while the text information is summarized into a vector by passing the output of the Tacotron text encoder through a unidirectional RNN as done by Stanton et al. (2018). Appendix A.1 gives additional details.\nFor this work, we use a simple diagonal Gaussian for the approximate posterior, q(z|x,yT,yS) and a standard normal distribution for the prior, p(z). We use these distributions for simplicity and efficiency, but using more powerful distributions such as Gaussian mixtures or normalizing flows (Rezende & Mohamed, 2015) should decrease the aggregate KL, leading to better prior samples.\nBecause we are learning a conditional generative model, p(x|yT,yS), we could have used a learned conditional prior, p(z|yT,yS), in order to improve the quality of the output generated when sampling via the prior. However, in this work we focus on the transfer use case where we infer zref ∼ q(z|xref,yrefT ,yrefS ) from a reference utterance and use it to re-synthesize speech using different text or speaker inputs, x′ ∼ p(x|zref,y′T,y′S). Using a fixed prior allows z to share a high probability region across all text and speakers so that an embedding inferred from one utterance is likely to lead to non-degenerate output when being used with any other text or speaker." }, { "heading": "3.2 DECOMPOSING EMBEDDING CAPACITY HIERARCHICALLY", "text": "In inter-text style transfer uses cases, we infer zref from a reference utterance and then use it to generate a new utterance with the same style but different text. 
One problem with this approach is that $z^{ref}$ completely specifies all variation that the latent embedding is capable of conveying to the decoder, $p(x|z^{ref}, y_T, y_S)$. So, even though there are many possible realizations of an utterance with a given style, this approach can produce only one2.\nTo address this issue, we decompose the latents, $z$, hierarchically (Sønderby et al., 2016) into high-level latents, $z_H$, and low-level latents, $z_L$, as shown in Figure 3. This differs from the hierarchical interpretation of the Gaussian mixture prior used by Hsu et al. (2019) in that here the high-level latents are continuous vectors rather than a single categorical variable. Factorizing continuous latents in this way allows us to specify how the joint capacity, $I_q(X; [Z_H, Z_L])$, is divided between $z_H$ and $z_L$. This approach can also be extended to additional levels of latents, each containing a prescribed proportion of the overall joint capacity.\nAs shown in eq. (8), the KL term, $R^{AVG}$, is an upper bound on $I_q(X;Z)$. We can also derive similar bounds for $I_q(X;Z_H)$ and $I_q(X;Z_L)$. Derivations of these bounds are provided in Appendix C.2.\n$I_q(X;Z_L) \le R^{AVG} = \mathbb{E}_{x \sim p_D(x)}[D_{KL}(q(z_H|z_L)\,q(z_L|x) \,\|\, p(z_L|z_H)\,p(z_H))]$ (10)\n$I_q(X;Z_H) \le R^{AVG}_H \equiv \mathbb{E}_{x \sim p_D(x),\, z_L \sim q(z_L|x)}[D_{KL}(q(z_H|z_L) \,\|\, p(z_H))]$ (11)\nIf we define $R^{AVG}_L \equiv R^{AVG} - R^{AVG}_H$, we end up with the following capacity limits for the hierarchical latents:\n$\Rightarrow I_q(X;Z_H) \le R^{AVG}_H, \quad I_q(X;Z_L) \le R^{AVG}_H + R^{AVG}_L$ (12)\nThe negative ELBO for this model can be written as:\n$\mathcal{L}_{ELBO}(x, y_T, y_S) = -\mathbb{E}_{z_L \sim q(z_L|x)}[\log p(x|z_L, y_T, y_S)] + R_H + R_L$ (13)\nwhere $R_H$ and $R_L$ are single data point estimates of $R^{AVG}_H$ and $R^{AVG}_L$ computed from $x$. In order to specify how the joint capacity is distributed between the latents, we extend (9) to have two Lagrange multipliers and capacity targets.\n$\min_\theta \max_{\beta_H, \beta_L \ge 0} \left\{ \mathbb{E}_{z_L \sim q_\theta(z_L|x, y_T, y_S)}[-\log p_\theta(x|z_L, y_T, y_S)] + \beta_H (R_H - C_H) + \beta_L (R_L - C_L) \right\}$ (14)\n$C_H$ limits the information capacity of $z_H$, and $C_L$ limits how much capacity $z_L$ has in excess of $z_H$ (i.e., the total capacity of $z_L$ is capped at $C_H + C_L$). This allows us to infer $z_H^{ref} \sim q(z_H|z_L)q(z_L|x^{ref}, y_T^{ref}, y_S^{ref})$ from a reference utterance and use it to sample multiple realizations, $x' \sim p(x|z_L, y_T, y_S)p(z_L|z_H^{ref})$. Intuitively, the higher $C_H$ is, the more the output will resemble the reference, and the higher $C_L$ is, the more variation we would expect from sample to sample when fixing $z_H^{ref}$ and sampling $z_L$ from $p(z_L|z_H^{ref})$.\n2If the decoder were truly stochastic (not greedy), we could actually sample multiple realizations given the same $z^{ref}$, but at high embedding capacities the variations would likely be very similar perceptually." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 MODEL ARCHITECTURE AND TRAINING", "text": "Model architecture The baseline model we start with is a Tacotron-based system (Wang et al., 2017) that incorporates modifications from Skerry-Ryan et al. (2018), including phoneme inputs instead of characters, GMM attention (Graves, 2013), and a WaveNet neural vocoder (van den Oord et al., 2016) to convert the output mel spectrograms into audio samples (Shen et al., 2018). The decoder RNN uses a reduction factor of 2, meaning that it produces two spectrogram frames per timestep. We use the CBHG text encoder from Wang et al. (2018) and the GMMv2b attention mechanism from Battenberg et al. (2019).\nFor the heuristic models compared in Section 2.2, we augment the baseline Tacotron with the reference encoders described by Skerry-Ryan et al. (2018) and Wang et al. (2018). 
For the variational models that we compare in the following experiments, we start with the reference encoder from Skerry-Ryan et al. (2018) and replace the tanh bottleneck layer with an MLP that predicts the parameters of the variational posterior. When used, the additional conditional dependencies (text and speaker) are fed into the MLP as well.\nTraining To train the models, the primary optimizer is run synchronously across 10 GPU workers (2 of them backup workers) for 300,000 training steps with an effective batch size of 256. It uses the Adam algorithm (Kingma & Ba, 2015) with a learning rate that is annealed from 10^-3 to 5×10^-5 over 200,000 training steps. The optimizer for β is run asynchronously on the 10 workers and uses SGD with momentum 0.9 and a fixed learning rate of 10^-5. The updates for these two optimizers are interleaved, allowing β to converge to a steady-state value that achieves the target value for the KL term, as demonstrated in Figure B.2 in the appendix. Additional architectural and training details are provided in Appendix A." }, { "heading": "4.2 EXPERIMENT SETUP", "text": "Datasets For single-speaker models, we use an expressive English-language audiobook dataset consisting of 50,086 training utterances (36.5 hours) and 912 test utterances spoken by Catherine Byers, the speaker from the 2013 Blizzard Challenge. Multi-speaker models are trained using high-quality English data from 58 voice assistant-like speakers, consisting of 419,966 training utterances (327 hours). We evaluate on a 9-speaker subset of the multi-speaker test data which contains 1808 utterances (comprising US, UK, Australian, and Indian speakers).\nTasks The tasks that we explore include same-text prosody transfer, inter-text style transfer, and inter-speaker prosody transfer. We also evaluate the quality of samples produced when sampling via the prior. For these tasks, we compare performance when using variational models with and without the additional conditional dependencies in the variational posterior at a number of different capacity limits. For models with hierarchical latents, we demonstrate the effect of varying CH and CL for same-text prosody transfer when inferring zH and sampling zL, or when inferring zL directly.\nEvaluation We use crowd-sourced native speakers to collect two types of subjective evaluations. First, mean opinion score (MOS) rates naturalness on a scale of 1-5, 5 being the best. Second, we use the AXY side-by-side comparison proposed by Skerry-Ryan et al. (2018) to measure subjective similarity to a reference utterance relative to the baseline model on a scale of [-3,3]. For example, a score of 3 would mean that, compared to the baseline model, the model being tested produces samples much more perceptually similar to the ground truth reference. We also use an objective similarity metric that uses dynamic time warping to find the minimum mel cepstral distortion (Kubichek, 1993) between two sequences (MCD-DTW). Lastly, for inter-speaker transfer, we follow Skerry-Ryan et al. (2018) and use a simple speaker classifier to measure how well speaker identity is preserved. Additional details on evaluation methodologies are provided in Appendix A."
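The MCD-DTW metric used above can be made concrete with a short sketch. The following is a minimal illustration, not the authors' implementation: it assumes each utterance has already been converted to a (frames × 13) MFCC array as in Appendix A.3, and the function name, warp penalty value, and normalization are assumptions made for exposition.

import numpy as np

def mcd_dtw(mfcc_a, mfcc_b, warp_penalty=1.0):
    # Dynamic time warping over per-frame mel cepstral distortion (MCD),
    # where the frame-level MCD is the Euclidean distance between MFCC vectors.
    n, m = len(mfcc_a), len(mfcc_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(mfcc_a[i - 1] - mfcc_b[j - 1])
            cost[i, j] = min(cost[i - 1, j - 1] + d,             # match
                             cost[i - 1, j] + d + warp_penalty,  # warp
                             cost[i, j - 1] + d + warp_penalty)  # warp
    # Average per-frame MCD-DTW; normalizing by the longer sequence is an
    # assumption, since the paper only states that a per-frame average is used.
    return cost[n, m] / max(n, m)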
}, { "heading": "4.3 RESULTS", "text": "Single speaker For single-speaker models, we compare the performance on same and inter-text transfer and the quality of samples generated via the prior for models with and without text conditioning in the variational posterior (Var+Txt and Var, respectively) at different capacity limits, C.\nSimilarity results for the transfer task are shown on the left side of Figure 4 and demonstrate increasing reference similarity as C is increased, with the exception of the model without text conditioning on the inter-text transfer task. Looking at the MOS naturalness results on the right side of Figure 4, we see that both inter-text transfer and prior sampling take a serious hit as capacity is increased for the Var model, while the Var+Txt model is able to maintain respectable performance even at very high capacities on all tasks.\nListening to the audio examples, we can hear that the Var model produces degenerate output at high capacities when attempting to transfer the style from a short utterance to a long utterance. This indicates that the decoder probably hasn’t seen similar embeddings paired with long utterances during training, which suggests that z is improperly correlated with text length. Similar behavior is also observed when generating prior samples using shorter or longer text. This means that an arbitrary z (sampled from the prior or inferred from a reference) is unlikely to pair well with text of an arbitrary length.\nMulti-speaker For multi-speaker models, we compare inter-speaker same-text transfer performance and prior sample quality with and without speaker conditioning in the variational posterior (Var+Txt+Spk and Var+Txt, respectively) at a fixed capacity limit of 150 nats. In Table 1, we see that both models are able to preserve characteristics of the reference utterance during transfer (AXY Ref. Similarity column), while the Var+Txt+Spk model has an edge in MOS for both inter-speaker transfer and prior samples (almost matching the MOS of the deterministic baseline model even at high embedding capacity).\nSimilar to the utterance length argument in the single speaker section above, it is likely that adding speaker dependencies to the posterior allows the model to use the entire latent space for each speaker (meaning z is not correlated with speaker identity), thereby forcing the decoder to learn to map all plausible points in the latent space to natural-sounding utterances that preserve the target speaker’s pitch range. The speaker classifier results show that the Var+Txt+Spk model preserves target speaker identity about as well as the baseline model and ground truth data (~5% of the time the classifier chooses a speaker other than the target speaker), whereas for the Var+Txt model this happens about 22% of the time. Though 22% seems like a large speaker error rate, it is much lower than the 79% figure presented by Skerry-Ryan et al. (2018) for a heuristic prosody transfer model. This demonstrates that even with a weakly conditioned posterior, the capacity limiting properties of variational models lead to better transfer generality and robustness.\nHierarchical latents To evaluate hierarchical decomposition of capacity in a single speaker setting, we use the MCD-DTW distance to quantify reference similarity and same-reference inter-sample variability. 
As shown in Table B.1 in the appendix, MCD-DTW strongly (negatively) correlates with subjective similarity.\nThe left side of Figure 5 shows results for samples generated using high-level latents, zH, inferred from the reference. As CH is increased, we see a strong downward trend in the average distance to the reference. We can also see that for a fixed CH, increasing CL results in a larger amount of sample-to-sample variation (average MCD-DTW between samples) when inferring a single zH^ref from the variational posterior and then sampling zL ∼ p(zL|zH^ref) from the prior to use in the reconstructions. The right side of Figure 5 shows the same metrics but for samples generated using low-level latents, zL^ref, inferred from the variational posterior. In this case, we see a slight downward trend in the reference distance as the total capacity limit, C, is increased (the trend is less dramatic because the capacity is already fairly high). We also see significantly lower inter-sample distance because the variation modeled by the latents is completely specified by zL. In this case, we sample multiple zL^ref's from q(zL|x^ref, yT^ref) for the same x^ref because using the same zL would lead to identical output from the deterministic decoder.\nUsing Capacitron with hierarchical latents increases the model's versatility for transfer tasks. By inferring just the high-level latents, zH, from a reference, we can sample multiple realizations of an utterance that are similar to the reference, with the level of similarity controlled by CH, and the amount of sample-to-sample variation controlled by CL. The same model can also be used for higher-fidelity, lower-variability transfer by inferring the low-level latents, zL, from a reference, with the level of similarity controlled by C = CH + CL. As mentioned before, this idea could also be extended to use additional levels of latents, thereby increasing transfer and sampling flexibility.\nTo appreciate the results fully, it is strongly recommended to listen to the audio examples available on the web3.\n3https://variational-embedding-capacity.github.io/demos/" }, { "heading": "5 CONCLUSION", "text": "We have proposed embedding capacity (i.e., representational mutual information) as a useful framework for comparing and configuring latent variable models of speech. Our proposed model, Capacitron, demonstrates that including text and speaker dependencies in the variational posterior allows a single model to be used successfully for a variety of transfer and sampling tasks. Motivated by the multi-faceted variability of natural human speech, we also showed that embedding capacity can be decomposed hierarchically in order to enable the model to control a trade-off between transfer fidelity and sample-to-sample variation.\nThere are many directions for future work, including adapting the fixed-length variational embeddings to be variable-length and synchronous with either the text or audio, using more powerful distributions like normalizing flows, and replacing the deterministic decoder with a proper likelihood distribution. For transfer and control use cases, the ability to distribute certain speech characteristics across specific subsets of the hierarchical latents would allow more fine-grained control of different aspects of the output speech. And for purely generative, non-transfer use cases, using more powerful conditional priors could improve sample quality."
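Before the appendices, a minimal sketch of the two hierarchical transfer modes evaluated in Section 4.3 may help fix ideas. Everything here is illustrative: posterior_L, posterior_H, and prior_L stand in for learned networks returning the (mean, std) of diagonal Gaussians, and the interfaces are assumptions, not the paper's code.

import numpy as np

def gaussian_sample(mean_std, rng):
    mean, std = mean_std
    return mean + std * rng.standard_normal(mean.shape)

def hierarchical_transfer(x_ref, text, speaker, decoder,
                          posterior_L, posterior_H, prior_L,
                          n_samples=5, seed=0):
    rng = np.random.default_rng(seed)
    # Infer the reference latents: z_L ~ q(z_L|x_ref), then z_H ~ q(z_H|z_L).
    z_L_ref = gaussian_sample(posterior_L(x_ref), rng)
    z_H_ref = gaussian_sample(posterior_H(z_L_ref), rng)
    # Mode 1: fix z_H_ref and resample z_L ~ p(z_L|z_H_ref) per realization,
    # giving similar-but-varied outputs (similarity set by C_H, variation by C_L).
    varied = [decoder(gaussian_sample(prior_L(z_H_ref), rng), text, speaker)
              for _ in range(n_samples)]
    # Mode 2: use z_L_ref directly for higher-fidelity, lower-variability transfer.
    faithful = decoder(z_L_ref, text, speaker)
    return varied, faithful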
}, { "heading": "A EXPERIMENT DETAILS", "text": "A.1 ARCHITECTURE DETAILS\nBaseline Tacotron The baseline Tacotron we start with (which serves as fθ(·) in eq. (1)) is similar to the original sequence-to-sequence model described by Wang et al. (2017) but uses some modifications introduced by Skerry-Ryan et al. (2018). Input to the model consists of sequences of phonemes produced by a text normalization pipeline rather than character inputs. The CBHG text encoder from Wang et al. (2017) is used to convert the input phonemes into a sequence of text embeddings. Before being fed to the CBHG encoder, the phoneme inputs are converted to learned 256-dimensional embeddings and passed through a pre-net composed of two fully connected ReLU layers (with 256 and 128 units, respectively), with dropout of 0.5 applied to the output of each layer. For multi-speaker models, a learned embedding for the target speaker is broadcast-concatenated to the output of the text encoder.\nThe attention module uses a single LSTM layer with 256 units and zoneout of 0.1 followed by an MLP with 128 tanh hidden units to compute parameters for the monotonic 5-component GMM attention window. Instead of using the exponential function to compute the shift and scale parameters of the GMM components as in (Graves, 2013), we use the softplus function, which we found leads to faster alignment and more stable optimization.\nThe autoregressive decoder module consists of 2 LSTM layers each with 256 units, zoneout of 0.1, and residual connections between the layers. The spectrogram output is produced using a linear layer on top of the 2 LSTM layers, and we use a reduction factor of 2, meaning we predict two spectrogram frames for each decoder step. The decoder is fed the last frame of its most recent prediction (or the previous ground truth frame during training) and the current context as computed by the attention module. Before being fed to the decoder, the previous prediction is passed through the same pre-net used before the text encoder above.\nMel spectrograms The mel spectrograms the model predicts are computed from 24kHz audio using a frame size of 50ms, a hop size of 12.5ms, an FFT size of 2048, and a Hann window. From the FFT energies, we compute 80 mel bins distributed between 80Hz and 12kHz.\nReference encoder The common reference encoder we use to compute reference embeddings starts with the mel spectrogram from the reference and passes it through a stack of 6 convolutional layers, each using ReLU non-linearities, 3x3 filters, 2x2 stride, and batch normalization. The 6 layers have 32, 32, 64, 64, 128, and 128 filters, respectively. The output of this convolution stack is fed into a unidirectional LSTM with 128 units, and the final output of the LSTM serves as the output of our basic reference encoder.\nTo replicate the prosody transfer model from Skerry-Ryan et al. (2018), we pass the reference encoder output through an additional tanh bottleneck layer to compute the embedding. For the Style Tokens model in Wang et al. (2018), we pass the output through the Style Tokens bottleneck described in the paper. For the approximate posterior in our variational models, we pass the output of the reference encoder (and potentially vectors describing the text and/or speaker) through an MLP with 128 tanh hidden units to produce the parameters of the diagonal Gaussian posterior which we sample from to produce a reference embedding. 
For all models with reference encoders, the resulting reference embedding is broadcast-concatenated to the output of the text encoder.\nConditional inputs When providing information about the text to the variational posterior, we pass the sequence of text embeddings produced by the text encoder to a unidirectional RNN with 128 units and use its final output as a fixed-length text summary that is passed to the posterior MLP. Speaker information is passed to the posterior MLP via a learned speaker embedding.\nA.2 TRAINING DETAILS\nFor the optimization problems shown in eqs. (9) and (14), we use two separate optimizers. The first minimizes the objective with respect to the model parameters using the SyncReplicasOptimizer4 from TensorFlow with 10 workers (2 of them backup workers) and an effective batch size of 256. We also use gradient clipping with a threshold of 5. This optimizer uses the Adam algorithm (Kingma & Ba, 2015) with β1 = 0.9, β2 = 0.999, ε = 10^-8, and a learning rate that starts at 10^-3 and is set to 5×10^-4, 3×10^-4, 10^-4, and 5×10^-5 at 50k, 100k, 150k, and 200k steps, respectively. Training is run for 300k steps total.\n4https://www.tensorflow.org/api_docs/python/tf/train/SyncReplicasOptimizer\nThe optimizer that maximizes the objective with respect to the Lagrange multiplier is run asynchronously across the 10 workers (meaning each worker computes an independent update using its 32-example sub-batch) and uses SGD with a momentum of 0.9 and a learning rate of 10^-5. The Lagrange multiplier is computed by passing an unconstrained parameter through the softplus function in order to enforce non-negativity. The initial value of the parameter is chosen such that the Lagrange multiplier equals 1 at the start of training.\nA.3 EVALUATION DETAILS\nSubjective evaluation Details for the subjective reference similarity and MOS naturalness evaluations are provided in Figures A.1 and A.2. To evaluate reference similarity, we use the AXY side-by-side template in Figure A.1, where A is the reference utterance, and X and Y are outputs from the model being tested and the baseline model.\nMCD-DTW We evaluate the models with hierarchical latents using the MCD-DTW distance to quantify reference similarity and the amount of inter-sample variation. To compute mel cepstral distortion (MCD) (Kubichek, 1993), we use the same mel spectrogram parameters described in A.1 and take the DCT to compute the first 13 MFCCs (not including the 0th coefficient). The MCD between two frames is the Euclidean distance between their MFCC vectors. Then we use the dynamic time warping (DTW) algorithm (Müller, 2007) (with a warp penalty of 1.0) to find an alignment between two spectrograms that produces the minimum MCD cost (including the total warp penalty). We report the average per-frame MCD-DTW.\nTo evaluate reference similarity, we simply compute the MCD-DTW between the synthesized audio and the reference audio (a lower MCD-DTW indicates higher similarity). The strong (negative) correlation between MCD-DTW and subjective similarity is demonstrated in Table B.1. To quantify inter-sample variation, we compute 5 output samples using the same reference and compute the average MCD-DTW between the first sample and each subsequent sample."
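The interleaved dual update described in A.2 can be sketched in a few lines. This is a minimal illustration under stated assumptions: the per-batch KL value passed to update() is computed elsewhere, the class name is invented, and the original uses TensorFlow optimizers rather than this hand-rolled momentum SGD.

import numpy as np

class BetaController:
    # Maintains beta = softplus(param) and performs gradient *ascent*
    # on beta * (KL - C), so beta grows while the KL exceeds the target C.
    def __init__(self, capacity_target, lr=1e-5, momentum=0.9):
        self.c = capacity_target
        self.lr, self.momentum, self.vel = lr, momentum, 0.0
        self.param = np.log(np.e - 1.0)  # softplus(param) = 1 at start

    def beta(self):
        return np.log1p(np.exp(self.param))  # softplus keeps beta >= 0

    def update(self, kl):
        # d/dparam [beta * (kl - C)] = sigmoid(param) * (kl - C)
        grad = (1.0 / (1.0 + np.exp(-self.param))) * (kl - self.c)
        self.vel = self.momentum * self.vel + grad
        self.param += self.lr * self.vel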
}, { "heading": "B ADDITIONAL RESULTS", "text": "Rate-distortion plots In Figure B.1, we augment the reconstruction loss plots from Figure 1 with additional rate/distortion plots (Alemi et al., 2018) and vary the KL weight, β, and as well as C.\nOptimization Examples Figure B.2 shows examples of how the KL weight, β, and KL term, R, evolve during training for different capacity limits, C. The curves shown are from the single speaker Var+Txt models discussed in Section 4.3. The target KL is achieved very quickly and then maintained throughout training via continual updates to β that are interleaved with updates to the model parameters.\nSingle-speaker similarity and naturalness results Tables B.1 and B.2 list the raw numbers used in the single-speaker reference similarity and MOS naturalness plots shown in Figure 4 in the main paper. Also shown is MCD-DTW reference distance alongside subjective reference similarity.\nHierarchical latents results The similarity and inter-sample variability results for hierarchical latents from Figure 5 are shown in table format in Table B.3." }, { "heading": "C DERIVATIONS", "text": "C.1 BOUNDING REPRESENTATIONAL MUTUAL INFORMATION\nDefinitions: R ≡ ∫ q(z|x) log q(z|x)\np(z) dz (KL term) (15) RAVG ≡ ∫∫\npD(x)q(z|x) log q(z|x) p(z) dxdz (Average KL term) (16)\nIq(X;Z) ≡ ∫∫\npD(x)q(z|x) log q(z|x) q(z) dxdz (Representational mutual information) (17)\nq(z) ≡ ∫ pD(x)q(z|x)dx (Aggregated posterior) (18)\nKL non-negativity: ∫ q(x) log q(x)\np(x) dx ≥ 0 (19) =⇒ ∫ q(x) log q(x) ≥ ∫ q(x) log p(x)dx (20)\nMutual information is upper bounded by the average KL (Alemi et al., 2018): Iq(X;Z) ≡ ∫∫\npD(x)q(z|x) log q(z|x) q(z) dxdz (21)\n= ∫∫ pD(x)q(z|x) log q(z|x)dxdz− ∫∫ pD(x)q(z|x) log q(z)dxdz (22)\n= ∫∫ pD(x)q(z|x) log q(z|x)dxdz− ∫ q(z) log q(z)dz (23)\n≤ ∫∫ pD(x)q(z|x) log q(z|x)dxdz− ∫ q(z) log p(z)dz (24)\n= ∫∫ pD(x)q(z|x) log q(z|x)dxdz− ∫∫ pD(x)q(z|x) log p(z)dxdz (25)\n= ∫∫ pD(x)q(z|x) log\nq(z|x) p(z) dxdz (26)\n≡ RAVG (27) =⇒ Iq(X;Z) ≤ RAVG (28)\nwhere the inequality in (24) follows from (20).\nThe difference between the average KL and the mutual information is the aggregate KL: RAVG − Iq(X;Z) = ∫∫ pD(x)q(z|x) log q(z)\np(z) dxdz (29)\n= ∫ q(z) log q(z)\np(z) dz (30)\n= DKL(q(z)‖p(z)) (Aggregate KL) (31)\nC.2 HIERARCHICALLY BOUNDING MUTUAL INFORMATION\nThe model with hierarchical latents shown in Figure C.1 gives us the following:\np(z) = p(zH, zL) = p(zL|zH)p(zH) (32) q(z|x) = q(zH, zL|x) = q(zL|x)q(zH|zL) (33)\nThe conditional dependencies on yT and yS are omitted for compactness.\nDefine marginal aggregated posteriors:\nq(zL) ≡ ∫ pD(x)q(zL|x)dx (34)\nq(zH) ≡ ∫ q(zL)q(zH|zL)dzL (35)\nWe can write the average joint KL term and mutual information as follows:\nRAVG = ∫ pD(x)[DKL(q(zH|zL)q(zL|x)‖p(zL|zH)p(zH))]dx (36)\nIq(X; [ZH,ZL]) = ∫ pD(x)[DKL(q(zH|zL)q(zL|x)‖q(zH|zL)q(zL))]dx (37)\nNext we show that Iq(X; [ZH,ZL]) = Iq(X;ZL):\nIq(X; [ZH,ZL]) = ∫∫∫ pD(x)q(zH, zL|x) log\nq(zH|zL)q(zL|x) q(zH|zL)q(zL) dxdzHdzL (38)\n= ∫∫∫ pD(x)q(zH, zL|x) log\nq(zL|x) q(zL) dxdzHzL (39)\n= ∫∫ pD(x)q(zL|x) log\nq(zL|x) q(zL) dxdzL (40)\n= Iq(X;ZL) (41)\nBound Iq(X;ZL):\nIq(X; [ZH,ZL]) = Iq(X;ZL) (42)\nIq(X; [ZH,ZL]) ≤ RAVG (43) =⇒ Iq(X;ZL) ≤ RAVG (44)\nwhere (43) was shown in eq. 
(28).\nAgain, using the non-negativity of the KL, we can bound Iq(ZH;ZL):\nIq(ZH;ZL) = ∫∫ q(zH|zL)q(zL) log\nq(zH|zL) q(zH) dzHdzL (45)\n≤ ∫∫\nq(zH|zL)q(zL) log q(zH|zL) p(zH) dzHdzL (46)\n= ∫∫∫ pD(x)q(zH|zL)q(zL|x) log\nq(zH|zL) p(zH) dzHdzLdx (47)\n= ∫∫ pD(x)q(zL|x)DKL(q(zH|zL)‖p(zH))dzLdx (48)\n≡ RAVGH (49) Iq(X;ZH) ≤ Iq(ZL;ZH) (50)\n=⇒ Iq(X;ZH) ≤ RAVGH (51) where (50) can be demonstrated by applying the data processing inequality to a reversed version of the Markov chain, X→ ZL → ZH Define RL:\nRL ≡ R−RH (52)\n= ∫∫ q(zL|x)q(zH|zL) log\nq(zL|x) p(zL|zH) dzHdzL (53)\nGiving us the following bounds on Iq(X;ZL) and Iq(X;ZH):\n=⇒ Iq(X;ZH) ≤ RAVGH , Iq(X;ZL) ≤ RAVGH +RAVGL (54)" } ]
2019
null
SP:87d54568606a9c1f752dae1420484a7b02a7ab1f
[ "The paper investigates hyperbolic discounting as a more biologically plausible alternative to exponential discounting in reinforcement learning. First, it formulates a notion of hazard in MDPs as constant exponential discounting and shows that hyperbolic discounting is consistent with uncertainty over the hazard rate. The paper then shows how value functions learned with exponential discounting can be used to approximate value functions with other forms of discounting. Specifically, the paper shows in section 4 how exponentially-discounted value functions can be used to approximate hyperbolically discounted value functions. The paper then presents experiments on a small MDP and Atari 2600 games, showing that learning discounted action values with many different discount rates as an auxiliary task improves performance on most Atari games.", "This paper argues that hyperbolic and other non-exponential discounting mechanisms have been more utilized by humans and animals for value preferences than exponential discounting as widely used in RL literature. The authors claim that hyperbolic discounting mechanisms are especially preferred in the setting of maintaining uncertainty over the prior belief of the hazard rate in the environment and propose an efficient approximation of the Q function with hyperbolic and other non-exponential discounting mechanisms as a weighted sum of Q-functions with the standard exponential discounting factor. The paper shows empirical evidence that hyperbolic discounting function can more accurately estimate the value in a vanilla Pathworld environment and also demonstrate that the approximated multi-horizon Q functions can improve performance on ALE, which is largely attributed to learning over multi-horizons as an auxiliary task." ]
Reinforcement learning (RL) typically defines a discount factor (γ) as part of the Markov Decision Process. The discount factor values future rewards by an exponential scheme that leads to theoretical convergence guarantees of the Bellman equation. However, evidence from psychology, economics and neuroscience suggests that humans and animals instead have hyperbolic time-preferences (1/(1 + kt) for k > 0). Here we extend the earlier work of Kurth-Nelson and Redish and propose an efficient deep reinforcement learning agent that acts via hyperbolic discounting and other non-exponential discount mechanisms. We demonstrate that a simple approach approximates hyperbolic discount functions while still using familiar temporal-difference learning techniques in RL. Additionally, and independent of hyperbolic discounting, we make a surprising discovery that simultaneously learning value functions over multiple time-horizons is an effective auxiliary task which often improves over state-of-the-art methods.
[]
[ { "authors": [ "George Ainslie" ], "title": "Specious reward: a behavioral theory of impulsiveness and impulse control", "venue": "Psychological bulletin,", "year": 1975 }, { "authors": [ "George Ainslie" ], "title": "Picoeconomics: The strategic interaction of successive motivational states within the person", "venue": null, "year": 1992 }, { "authors": [ "William H Alexander", "Joshua W Brown" ], "title": "Hyperbolically discounted temporal difference learning", "venue": "Neural computation,", "year": 2010 }, { "authors": [ "Marc G. Bellemare", "Yavar Naddaf", "Joel Veness", "Michael Bowling" ], "title": "The arcade learning environment: An evaluation platform for general agents", "venue": "Journal of Artificial Intelligence Research,", "year": 2013 }, { "authors": [ "Marc G. Bellemare", "Will Dabney", "Rémi Munos" ], "title": "A distributional perspective on reinforcement learning", "venue": "arXiv preprint arXiv:1707.06887,", "year": 2017 }, { "authors": [ "Richard Bellman" ], "title": "A markovian decision process", "venue": "Journal of Mathematics and Mechanics,", "year": 1957 }, { "authors": [ "Richard Bellman" ], "title": "On a routing problem", "venue": "Quarterly of applied mathematics,", "year": 1958 }, { "authors": [ "Dimitri P Bertsekas" ], "title": "Neuro-dynamic programming: an overview", "venue": null, "year": 1995 }, { "authors": [ "Yuri Burda", "Harrison Edwards", "Amos Storkey", "Oleg Klimov" ], "title": "Exploration by random network distillation", "venue": "arXiv preprint arXiv:1810.12894,", "year": 2018 }, { "authors": [ "Pablo Samuel Castro", "Subhodeep Moitra", "Carles Gelada", "Saurabh Kumar", "Marc G. Bellemare" ], "title": "Dopamine: A research framework for deep reinforcement learning", "venue": "CoRR, abs/1812.06110,", "year": 2018 }, { "authors": [ "Partha Dasgupta", "Eric Maskin" ], "title": "Uncertainty and hyperbolic discounting", "venue": "American Economic Review,", "year": 2005 }, { "authors": [ "Nathaniel D Daw" ], "title": "Reinforcement learning models of the dopamine system and their behavioral implications", "venue": "PhD thesis,", "year": 2003 }, { "authors": [ "Nathaniel D Daw", "David S Touretzky" ], "title": "Behavioral considerations suggest an average reward td model of the dopamine", "venue": "system. 
Neurocomputing,", "year": 2000 }, { "authors": [ "Peter Dayan", "Geoffrey E Hinton" ], "title": "Feudal reinforcement learning", "venue": "In Advances in neural information processing systems,", "year": 1993 }, { "authors": [ "Dmitri Dolgov", "Edmund Durfee" ], "title": "Stationary deterministic policies for constrained mdps with multiple rewards, costs, and discount factors", "venue": "Ann Arbor,", "year": 2005 }, { "authors": [ "Ashley Edwards", "Michael L Littman", "Charles L Isbell" ], "title": "Expressing tasks robustly via multiple discount factors", "venue": null, "year": 2015 }, { "authors": [ "Eugene A Feinberg", "Adam Shwartz" ], "title": "Markov decision models with weighted discounted criteria", "venue": "Mathematics of Operations Research,", "year": 1994 }, { "authors": [ "Eugene A Feinberg", "Adam Shwartz" ], "title": "Constrained dynamic programming with two discount factors: Applications and an algorithm", "venue": "IEEE Transactions on Automatic Control,", "year": 1999 }, { "authors": [ "Vincent François-Lavet", "Raphael Fonteneau", "Damien Ernst" ], "title": "How to discount deep reinforcement learning: Towards new dynamic strategies", "venue": "arXiv preprint arXiv:1512.02011,", "year": 2015 }, { "authors": [ "Shane Frederick", "George Loewenstein", "Ted" ], "title": "O’donoghue. Time discounting and time preference: A critical review", "venue": "Journal of economic literature,", "year": 2002 }, { "authors": [ "Wai-Tat Fu", "John R Anderson" ], "title": "From recurrent choice to skill learning: A reinforcement-learning model", "venue": "Journal of experimental psychology: General,", "year": 2006 }, { "authors": [ "Leonard Green", "Joel Myerson" ], "title": "A discounting framework for choice with delayed and probabilistic rewards", "venue": "Psychological bulletin,", "year": 2004 }, { "authors": [ "Leonard Green", "Ewin B Fisher", "Steven Perlow", "Lisa Sherman" ], "title": "Preference reversal and self control: Choice as a function of reward amount and delay", "venue": "Behaviour Analysis Letters,", "year": 1981 }, { "authors": [ "Matteo Hessel", "Joseph Modayil", "Hado Van Hasselt", "Tom Schaul", "Georg Ostrovski", "Will Dabney", "Dan Horgan", "Bilal Piot", "Mohammad Azar", "David Silver" ], "title": "Rainbow: Combining improvements in deep reinforcement learning", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Max Jaderberg", "Volodymyr Mnih", "Wojciech Marian Czarnecki", "Tom Schaul", "Joel Z Leibo", "David Silver", "Koray Kavukcuoglu" ], "title": "Reinforcement learning with unsupervised auxiliary tasks", "venue": "arXiv preprint arXiv:1611.05397,", "year": 2016 }, { "authors": [ "Alex Kacelnik" ], "title": "Normative and descriptive models of decision making: time discounting and risk sensitivity", "venue": "Characterizing human psychological adaptations,", "year": 1997 }, { "authors": [ "Leslie Pack Kaelbling", "Michael L Littman", "Anthony R Cassandra" ], "title": "Planning and acting in partially observable stochastic domains", "venue": "Artificial intelligence,", "year": 1998 }, { "authors": [ "Michael Kearns", "Satinder Singh" ], "title": "Near-optimal reinforcement learning in polynomial time", "venue": "Machine learning,", "year": 2002 }, { "authors": [ "Zeb Kurth-Nelson", "A David Redish" ], "title": "Temporal-difference reinforcement learning with distributed representations", "venue": "PLoS One,", "year": 2009 }, { "authors": [ "Guillaume Lample", "Devendra Singh Chaplot" ], "title": "Playing fps 
games with deep reinforcement learning", "venue": null, "year": 2017 }, { "authors": [ "Tor Lattimore", "Marcus Hutter" ], "title": "Time consistent discounting", "venue": "In International Conference on Algorithmic Learning Theory,", "year": 2011 }, { "authors": [ "George Loewenstein" ], "title": "Out of control: Visceral influences on behavior", "venue": "Organizational behavior and human decision processes,", "year": 1996 }, { "authors": [ "Marlos C. Machado", "Marc G. Bellemare", "Erik Talvitie", "Joel Veness", "Matthew Hausknecht", "Michael Bowling" ], "title": "Revisiting the Arcade Learning Environment: Evaluation protocols and open problems for general agents", "venue": "Journal of Artificial Intelligence Research,", "year": 2018 }, { "authors": [ "Tiago V Maia" ], "title": "Reinforcement learning, conditioning, and the brain: Successes and challenges. Cognitive, Affective", "venue": "Behavioral Neuroscience,", "year": 2009 }, { "authors": [ "James E Mazur" ], "title": "Probability and delay of reinforcement as factors in discrete-trial choice", "venue": "Journal of the Experimental Analysis of Behavior,", "year": 1985 }, { "authors": [ "James E Mazur" ], "title": "An adjusting procedure for studying delayed reinforcement", "venue": null, "year": 1987 }, { "authors": [ "James E Mazur" ], "title": "Choice, delay, probability, and conditioned reinforcement", "venue": "Animal Learning & Behavior,", "year": 1997 }, { "authors": [ "Piotr Mirowski", "Razvan Pascanu", "Fabio Viola", "Hubert Soyer", "Andrew J Ballard", "Andrea Banino", "Misha Denil", "Ross Goroshin", "Laurent Sifre", "Koray Kavukcuoglu" ], "title": "Learning to navigate in complex environments", "venue": "arXiv preprint arXiv:1611.03673,", "year": 2016 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G. 
Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "P Read Montague", "Peter Dayan", "Terrence J Sejnowski" ], "title": "A framework for mesencephalic dopamine systems based on predictive hebbian learning", "venue": "Journal of neuroscience,", "year": 1936 }, { "authors": [ "Vinod Nair", "Geoffrey E Hinton" ], "title": "Rectified linear units improve restricted boltzmann machines", "venue": "In Proceedings of the 27th international conference on machine learning", "year": 2010 }, { "authors": [ "Silviu Pitis" ], "title": "Rethinking the Discount Factor in Reinforcement Learning: A Decision Theoretic Approach", "venue": "In Proceedings of the 33rd AAAI Conference on Artificial Intelligence", "year": 2019 }, { "authors": [ "Danil V Prokhorov", "Donald C Wunsch" ], "title": "Adaptive critic designs", "venue": "IEEE transactions on Neural Networks,", "year": 1997 }, { "authors": [ "Antonio Rangel", "Colin Camerer", "P Read Montague" ], "title": "A framework for studying the neurobiology of value-based decision making", "venue": "Nature reviews neuroscience,", "year": 2008 }, { "authors": [ "A David Redish", "Zeb Kurth-Nelson" ], "title": "Neural models of temporal discounting", "venue": null, "year": 2010 }, { "authors": [ "Chris Reinke", "Eiji Uchibe", "Kenji Doya" ], "title": "Average reward optimization with multiple discounting reinforcement learners", "venue": "In International Conference on Neural Information Processing,", "year": 2017 }, { "authors": [ "Joshua Romoff", "Peter Henderson", "Ahmed Touati", "Yann Ollivier", "Emma Brunskill", "Joelle Pineau" ], "title": "Separating value functions across time-scales", "venue": null, "year": 1902 }, { "authors": [ "Paul A Samuelson" ], "title": "A note on measurement of utility", "venue": "The review of economic studies,", "year": 1937 }, { "authors": [ "Wolfram Schultz", "Peter Dayan", "P Read Montague" ], "title": "A neural substrate of prediction and reward", "venue": null, "year": 1997 }, { "authors": [ "Craig Sherstan", "James MacGlashan", "Patrick M. 
Pilarski" ], "title": "Generalizing value estimation over timescal", "venue": "In FAIM Workshop on Prediction and Generative Modeling in Reinforcement Learning,", "year": 2018 }, { "authors": [ "Satinder P Singh" ], "title": "Scaling reinforcement learning algorithms by learning variable temporal resolution models", "venue": "In Machine Learning Proceedings", "year": 1992 }, { "authors": [ "Peter D Sozou" ], "title": "On hyperbolic discounting and uncertain hazard rates", "venue": "Proceedings of the Royal Society of London B: Biological Sciences,", "year": 1998 }, { "authors": [ "Robert Henry Strotz" ], "title": "Myopia and inconsistency in dynamic utility maximization", "venue": "The Review of Economic Studies,", "year": 1955 }, { "authors": [ "Steven C Suddarth", "YL Kergosien" ], "title": "Rule-injection hints as a means of improving network performance and learning time", "venue": "In Neural Networks,", "year": 1990 }, { "authors": [ "Richard S Sutton" ], "title": "Learning to predict by the methods of temporal differences", "venue": "Machine learning,", "year": 1988 }, { "authors": [ "Richard S Sutton" ], "title": "Td models: Modeling the world at a mixture of time scales", "venue": "In Machine Learning Proceedings", "year": 1995 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Reinforcement learning: An introduction", "venue": null, "year": 1998 }, { "authors": [ "Richard S Sutton", "Joseph Modayil", "Michael Delp", "Thomas Degris", "Patrick M Pilarski", "Adam White", "Doina Precup" ], "title": "Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction", "venue": "In The 10th International Conference on Autonomous Agents and Multiagent Systems-Volume", "year": 2011 }, { "authors": [ "Vivek Veeriah", "Junhyuk Oh", "Satinder Singh" ], "title": "Many-goals reinforcement learning", "venue": "arXiv preprint arXiv:1806.09605,", "year": 2018 }, { "authors": [ "Zhongwen Xu", "Hado van Hasselt", "David Silver" ], "title": "Meta-gradient reinforcement learning", "venue": "International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "DQN Mnih" ], "title": "2018), we benchmark against the baselines set by Castro et al. (2018) and we use the default hyperparameters for each of the respective algorithms", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "The standard treatment of the reinforcement learning (RL) problem is the Markov Decision Process (MDP) which includes a discount factor 0 ≤ γ ≤ 1 that exponentially reduces the present value of future rewards (Bellman, 1957; Sutton & Barto, 1998). A reward rt received in t-time steps is devalued to γtrt, a discounted utility model introduced by Samuelson (1937). This establishes a timepreference for rewards realized sooner rather than later. The decision to exponentially discount future rewards by γ leads to value functions that satisfy theoretical convergence properties (Bertsekas, 1995). The magnitude of γ also plays a role in stabilizing learning dynamics of RL algorithms (Prokhorov & Wunsch, 1997; Bertsekas & Tsitsiklis, 1996) and has recently been treated as a hyperparameter of the optimization (OpenAI, 2018; Xu et al., 2018).\nHowever, both the magnitude and the functional form of this discounting function establish priors over the solutions learned. The magnitude of γ chosen establishes an effective horizon for the agent of 1/(1− γ), far beyond which rewards are neglected (Kearns & Singh, 2002). This effectively imposes a time-scale of the environment, which may not be accurate. Further, the exponential discounting of future rewards is consistent with a prior belief that there is a known constant per-time-step hazard rate (Sozou, 1998) or probability of dying of 1− γ (Lattimore & Hutter, 2011). Additionally, discounting future values exponentially and according to a single discount factor γ does not harmonize with the measured value preferences in humans1 and animals (Mazur, 1985; 1997; Ainslie, 1992; Green & Myerson, 2004; Maia, 2009). A wealth of empirical evidence has been amassed that humans, monkeys, rats and pigeons instead discount future returns hyperbolically, where dk(t) = 11+kt , for some positive k > 0 (Ainslie, 1975; 1992; Mazur, 1985; 1997; Frederick et al., 2002; Green et al., 1981; Green & Myerson, 2004).\nThis discrepancy between the time-preferences of animals from the exponential discounted measure of value might be presumed irrational. But Sozou (1998) showed that hyperbolic time-preferences is mathematically consistent with the agent maintaining some uncertainty over the prior belief of the hazard rate in the environment. Hazard rate h(t) measures the per-time-step risk the agent incurs as it acts in the environment due to a potential early death. Precisely, if s(t) is the probability that the\n1Time-preference reversals are one implication. Consider two hypothetical choices: (1) a stranger offers $1M now or $1.1M dollars tomorrow (2) a stranger instead offers $1M in 99 days versus $1.1M in 100 days.\nagent is alive at time t then the hazard rate is h(t) = − ddt lns(t). We consider the case where there is a fixed, but potentially unknown hazard rate h(t) = λ ≥ 0. The prior belief of the hazard rate p(λ) implies a specific discount function Sozou (1998). Under this formalism, the canonical case in RL of discounting future rewards according to d(t) = γt is consistent with the belief that there exists a single hazard rate λ = e−γ known with certainty. Further details are available in Appendix A.\nCommon RL environments are also characterized by risk, but often in a narrower sense. 
In deterministic environments like the original Arcade Learning Environment (ALE) (Bellemare et al., 2013), stochasticity is often introduced through techniques like no-ops (Mnih et al., 2015) and sticky actions (Machado et al., 2018), where the action execution is noisy. Physics simulators may have noise and the randomness of the policy itself induces risk. But even with these stochastic injections, the risk to reward emerges in a more restricted sense. In Section 2 we show that a prior distribution reflecting the uncertainty over the hazard rate has an associated discount function, in the sense that an MDP with either this hazard distribution or the discount function has the same value function for all policies. This equivalence implies that learning policies with a discount function can be interpreted as making them robust to the associated hazard distribution. Thus, discounting serves as a tool to ensure that policies deployed in the real world perform well even under risks they were not trained under.\nWe propose an algorithm that approximates hyperbolic discounting while building on successful Q-learning (Watkins & Dayan, 1992) tools and their associated theoretical guarantees. We show that learning many Q-values, each discounting exponentially with a different discount factor γ, can be aggregated to approximate hyperbolic (and other non-exponential) discount factors. We demonstrate the efficacy of our approximation scheme in our proposed Pathworld environment, which is characterized by an uncertain per-time-step risk to the agent. Conceptually, Pathworld emulates a foraging environment where an agent must balance easily realizable, small meals versus more distant, fruitful meals. We then consider higher-dimensional deep RL agents in the ALE, where we measure the benefits of hyperbolic discounting. This approximation mirrors the work of Kurth-Nelson & Redish (2009); Redish & Kurth-Nelson (2010), which empirically demonstrates that modeling a finite set of µAgents simultaneously can approximate the hyperbolic discounting function. Our method then generalizes to other non-hyperbolic discount functions and uses deep neural networks to model the different Q-values from a shared representation.\nSurprisingly, and in addition to enabling new non-exponential discounting schemes, we observe that learning a set of Q-values is beneficial as an auxiliary task (Jaderberg et al., 2016). Adding this multi-horizon auxiliary task often improves over a state-of-the-art baseline, Rainbow (Hessel et al., 2018), in the ALE (Bellemare et al., 2013). This work questions the RL paradigm of learning policies through a single discount function which exponentially discounts future rewards through the following contributions:\n1. Hazardous MDPs. We formulate MDPs with hazard present and demonstrate an equivalence between undiscounted values learned under hazards and (potentially non-exponentially) discounted values without hazard.\n2. Hyperbolic (and other non-exponential) agent. A practical approach for training an agent which discounts future rewards by a hyperbolic (or other non-exponential) discount function and acts according to this.\n3. Multi-horizon auxiliary task. A demonstration of multi-horizon learning over many γ simultaneously as an effective auxiliary task." }, { "heading": "2 HAZARD IN MDPS", "text": "To study MDPs with hazard distributions and general discount functions we introduce two modifications. The hazardous MDP now is defined by the tuple ⟨S, A, R, P, H, d⟩. 
In standard form, the state space S and the action space A may be discrete or continuous. The learner observes samples from the environment transition probability P(s_{t+1}|s_t, a_t) for going from s_t ∈ S to s_{t+1} ∈ S given a_t ∈ A. We will consider the case where P is a sub-stochastic transition function, which defines an episodic MDP. The environment emits a bounded reward r : S × A → [r_min, r_max] on each transition. In this work we consider non-infinite episodic MDPs.\nThe first difference is that at the beginning of each episode, a hazard λ ∈ [0, ∞) is sampled from the hazard distribution H. This is equivalent to sampling a continuing probability γ = e^{−λ}. During the episode, the hazard-modified transition function will be P_λ, in that P_λ(s′|s, a) = e^{−λ} P(s′|s, a). The second difference is that we now consider a general discount function d(t). This differs from the standard approach of exponential discounting in RL with γ according to d(t) = γ^t, which is a special case. This setting makes a close connection to the partially observable Markov decision process (POMDP) (Kaelbling et al., 1998), where one might consider λ as an unobserved variable. However, the classic POMDP definition contains an explicit discount function γ as part of its definition, which does not appear here.\nA policy π : S → A is a mapping from states to actions. The state-action value function Q^{H,d}_π(s, a) is the expected discounted rewards after taking action a in state s and then following policy π until termination:\nQ^{H,d}_π(s, a) = E_λ E_{π,P_λ}[ Σ_{t=0}^∞ d(t) R(s_t, a_t) | s_0 = s, a_0 = a ] (1)\nwhere λ ∼ H and E_{π,P_λ} implies that s_{t+1} ∼ P_λ(·|s_t, a_t) and a_t ∼ π(·|s_t)." }, { "heading": "2.1 EQUIVALENCE BETWEEN HAZARD AND DISCOUNTING", "text": "In the hazardous MDP setting we observe the same connections between hazard and discount functions delineated in Appendix A. This expresses an equivalence between the value function of an MDP with a discount and an MDP with a hazard distribution.\nFor example, there exists an equivalence between the exponential discount function d(t) = γ^t and the undiscounted case where the agent is subject to a (1 − γ) per-time-step probability of dying (Lattimore & Hutter, 2011). The typical Q-value (left side of Equation 2) is when the agent acts in an environment without hazard, λ = 0 or H = δ(0), and discounts future rewards according to d(t) = γ^t = e^{−λt}, which we denote as Q^{δ(0), γ^t}_π(s, a). The alternative Q-value (right side of Equation 2) is when the agent acts under hazard rate λ = −ln γ but does not discount future rewards, which we denote as Q^{δ(−ln γ), 1}_π(s, a):\nQ^{δ(0), γ^t}_π(s, a) = Q^{δ(−ln γ), 1}_π(s, a) ∀ π, s, a (2)\nwhere δ(x) denotes the Dirac delta distribution at x. This follows from P_λ(s′|s, a) = e^{−λ} P(s′|s, a):\nE_{π,P}[ Σ_{t=0}^∞ γ^t R(s_t, a_t) | s_0 = s, a_0 = a ] = E_{π,P}[ Σ_{t=0}^∞ e^{−λt} R(s_t, a_t) | s_0 = s, a_0 = a ] = E_{π,P_λ}[ Σ_{t=0}^∞ R(s_t, a_t) | s_0 = s, a_0 = a ]\nWe also show a similar equivalence between hyperbolic discounting and the specific hazard distribution p_k(λ) = (1/k) exp(−λ/k), where again λ ∈ [0, ∞), in Appendix E:\nQ^{δ(0), Γ_k}_π(s, a) = Q^{p_k, 1}_π(s, a)\nFor notational brevity later in the paper, we will omit the explicit hazard distribution H-superscript if the environment is not hazardous. This formulation builds upon Sozou (1998)'s relation between hazard rates and discount functions and shows that it holds for generalized Q-values in reinforcement learning."
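Equation 2 is easy to check numerically on a toy problem. The sketch below is illustrative (a fixed reward of 1 per step on a T-step chain; the names are invented): the exponentially discounted, hazard-free return should match the expected undiscounted return when the episode terminates with probability (1 − γ) at each transition.

import numpy as np

rng = np.random.default_rng(0)
gamma, T = 0.9, 50

# Left side of Equation 2: exponentially discounted, hazard-free return.
q_discounted = sum(gamma ** t for t in range(T))

# Right side: undiscounted return under a per-transition death probability
# of (1 - gamma); the reward at time t is collected only after surviving
# t transitions, which happens with probability gamma ** t.
def rollout():
    total = 0.0
    for t in range(T):
        total += 1.0
        if rng.random() > gamma:  # episode terminates before step t + 1
            break
    return total

q_hazard = np.mean([rollout() for _ in range(200_000)])
print(q_discounted, q_hazard)  # the two estimates should nearly agree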
}, { "heading": "3 COMPUTING NON-EXPONENTIAL Q-VALUES", "text": "We now show how one can re-purpose exponentially-discounted Q-values to compute hyperbolic (and other-non-exponential) discounted Q-values. The central challenge with using non-exponential discount strategies is that most RL algorithms use some form of TD learning (Sutton, 1988). This family of algorithms exploits the Bellman equation (Bellman, 1958) which, when using exponential discounting, relates the value function at one state with the value at the following state.\nQγ t π (s, a) = Eπ,P [R(s, a) + γQπ(s′, a′)] (3)\nwhere expectation Eπ,P denotes sampling a ∼ π(·|s), s′ ∼ P (·|s, a), and a′ ∼ π(·|s′). Being able to reuse TD methods without being constrained to exponential discounting is thus an important challenge. We propose here a scheme to deduce hyperbolic as well as other non-exponentially discounted Q-values when our discount function has a particular form.\nLemma 3.1. Let QH,γπ (s, a) be the state action value function under exponential discounting in a hazardous MDP < S,A, R, P,H, γt > and let QH,dπ (s, a) refer to the value function in the same MDP except for new discounting < S,A, R, P,H, d >. If there exists a function w : [0, 1]→ R such that\nd(t) = ∫ 1 0 w(γ)γtdγ (4)\nwhich we will refer to as the exponential weighting condition, then\nQH,dπ (s, a) = ∫ 1 0 w(γ)QH,γπ (s, a)dγ (5)\nProof. Applying the condition on d,\nQH,dπ (s, a) = EλEπ,Pλ [ ∞∑ t=0 (∫ 1 0 w(γ)γtdγ ) R(st, at)|s0 = s, a0 = a ] (6)\n= ∫ 1 0 EλEπ,Pλw(γ) [ ∞∑ t=0 γtR(st, at)|s0 = s, a0 = a ] dγ (7)\n= ∫ 1 0 w(γ)QH,γπ (s, a)dγ (8)\nThe exchange in the above proof is valid if ∑∞ t=0 γ\ntR(st, at) < ∞. The exponential weighting condition is satisfied for hyperbolic discounting and other discounting that we might want to consider (see Appendix F for examples). As an example, the hyperbolic discount can be expressed as the integral of a function f(γ, t) for γ = [0, 1) in Equation 9.\n1\nk ∫ 1 γ=0 γ1/k+t−1dγ = 1 1 + kt (9)\nThis equationn tells us an integral over a function f(γ, t) = 1kγ 1/k+t−1 = w(γ)γt yields the desired hyperbolic discount factor Γk(t) = 11+kt . This integral can be derived by Sozou’s Laplace transform of the hazard rate priorH = p(λ) in Equation 18 and then applying our change of variables γ = e−λ relating RL discount factors to hazard rates. The computation of hyperbolic and other discount functions is demonstrated in detail in Appendix F.\nThis prescription gives us a tool to produce general forms of non-exponentially discounted Q-values using our familiar exponentially discounted Q-values traditionally learned in RL (Sutton, 1988; Sutton & Barto, 1998).\n4 APPROXIMATING HYPERBOLIC Q-VALUES\nSection 3 describes an equivalence between hyperbolically-discounted Q-values and integrals of exponentially-discounted Q-values, however, the method required evaluating an infinite set of value functions. We therefore present a practical approach to approximate discounting Γ(t) = 11+kt using a finite set of functions learned via standard Q-learning (Watkins & Dayan, 1992). To avoid estimating an infinite number of Qγπ-values we introduce a free hyperparameter (nγ) which is the total number of Qγπ-values to consider, each with their own γ. We use a practically-minded approach to choose G that emphasizes evaluating larger values of γ rather than uniformly choosing points and empirically performs well as seen in Section 5.\nG = [γ0, γ1, · · · , γnγ ] (10) Our approach is described in Appendix G. 
Each Q^{γ_i}_π computes the discounted sum of returns according to that specific discount factor: Q^{γ_i}_π(s, a) = E_π[ Σ_t (γ_i)^t r_t | s_0 = s, a_0 = a ]. We previously proposed two equivalent approaches for computing hyperbolic Q-values, but for simplicity we consider the one presented in Lemma 3.1. The set of Q-values permits us to estimate the integral through a Riemann sum (Equation 11), which is described in further detail in Appendix I:\nQ^Γ_π(s, a) = ∫_0^1 w(γ) Q^γ_π(s, a) dγ (11)\n≈ Σ_{γ_i ∈ G} (γ_{i+1} − γ_i) w(γ_i) Q^{γ_i}_π(s, a) (12)\nwhere we estimate the integral through a lower bound. We consolidate this entire process in Figure 11, where we show the full process of rewriting the hyperbolic discount rate, the hyperbolically-discounted Q-value, the approximation and the instantiated agent. This approach is similar to that of Kurth-Nelson & Redish (2009), where each µAgent models a specific discount factor γ. However, it differs in that our final agent computes a weighted average over each Q-value rather than a sampling operation over each agent based on a γ-distribution." }, { "heading": "5 HYPERBOLIC RESULTS", "text": "" }, { "heading": "5.1 WHEN TO DISCOUNT HYPERBOLICALLY?", "text": "The benefits of hyperbolic discounting will be greatest under two conditions: uncertain hazard and non-trivial intertemporal decisions. The first condition can arise under an unobserved hazard-rate variable λ drawn independently at the beginning of each episode from H = p(λ). The second condition emerges with a choice between smaller nearby rewards versus larger distant rewards.2 In the absence of both properties we would not expect any advantage to discounting hyperbolically. To see why, if there is a single true hazard rate λ_env, then an optimal γ* = e^{−λ_env} exists and future rewards should be discounted exponentially according to it. Further, if there is a single path through the environment with perfect alignment of short- and long-term objectives, all discounting schemes yield the same optimal policy.\n2A trivial intertemporal decision is one between small distant rewards versus large close rewards. For example, the choice between $100 now versus $10 tomorrow." }, { "heading": "5.2 PATHWORLD EXPERIMENTS", "text": "We note two sources for discounting rewards in the future: time delay and survival probability (Section 2). In Pathworld we train to maximize hyperbolically discounted returns (Σ_t Γ_k(t) R(s_t, a_t)) under no hazard (H = δ(λ − 0)) but then evaluate the undiscounted returns, d(t) = 1.0 ∀ t, with the paths subject to hazard H = (1/k) exp(−λ/k). Through this procedure, we are able to train an agent that is robust to hazards in the environment. The agent makes one decision in Pathworld (Figure 2): which of the N paths to investigate. Once a path is chosen, the agent continues until it reaches the end or until it dies. This is similar to a multi-armed bandit, with each action subject to dynamic risk. The paths vary quadratically in length with the index, d(i) = i^2, but the rewards increase linearly with the path index, r(i) = i. This presents a non-trivial decision for the agent. At deployment, an unobserved hazard λ ∼ H is drawn and the agent is subject to a per-time-step risk of dying of (1 − e^{−λ}). This environment differs from the adjusting-delay procedure presented by Mazur (1987) and later modified by Kurth-Nelson & Redish (2009). 
Rather than determining time-preferences through variable timing of rewards, we determine time-preferences through risk to the reward.\nFigure 3 validates that our approach well-approximates the true hyperbolic value of each path when the hazard prior matches the true distribution. Agents that discount exponentially according to a single γ (the typical case in RL) incorrectly value the paths. We examine further the failure of exponential discounting in this hazardous setting. For this environment, the true hazard parameter in the prior was k = 0.05 (i.e., λ ∼ 20 exp(−λ/0.05)). Therefore, at deployment, the agent must deal with dynamic levels of risk and faces a non-trivial decision of which path to follow. Even if we tune an agent's γ = 0.975 such that it chooses the correct arg-max path, it still fails to capture the functional form (Figure 3) and it achieves a high error over all paths (Table 1). If the arg-max action were not available or if the agent were asked to evaluate non-trivial intertemporal decisions, it would act sub-optimally. In Appendix B we consider additional experiments where the agent's prior over hazard more realistically does not exactly match the environment's true hazard rate, and demonstrate the benefit of appropriate priors." }, { "heading": "5.3 ATARI 2600 EXPERIMENTS", "text": "With our approach validated in Pathworld, we now move to the high-dimensional environment of Atari 2600, specifically, the ALE. We use the Rainbow variant from Dopamine (Castro et al., 2018) which implements three of the six considered improvements from the original paper: distributional RL, predicting n-step returns, and prioritized replay buffers. The agent (Figure 4) maintains a shared representation h(s) of state, but computes Q-value logits for each of the N γ_i via Q^(i)_π(s, a) = W_i h(s) + b_i, where W_i and b_i are the learnable parameters of the affine transformation for that head. A ReLU nonlinearity is used within the body of the network (Nair & Hinton, 2010).\nHyperparameter details are provided in Appendix K and, when applicable, they default to the standard Dopamine values. We find strong performance improvements of the hyperbolic agent built on Rainbow (Hyper-Rainbow; blue bars) on a random subset of Atari 2600 games in Figure 5." }, { "heading": "6 MULTI-HORIZON AUXILIARY TASK RESULTS", "text": "To dissect the Hyper-Rainbow improvements, recognize that two properties of the base Rainbow agent have changed:\n1. Behavior policy, µ. The agent acts according to hyperbolic Q-values computed by our approximation described in Section 4.\n2. Learn over multiple horizons. The agent simultaneously learns Q-values over many γ rather than a Q-value for a single γ.\nOn this subset of 19 games, Hyper-Rainbow improves upon 14 games and, in some cases, by large margins. But we seek here a more complete understanding of the underlying driver of this improvement in the ALE through an ablation study.\nThe second modification can be regarded as introducing an auxiliary task (Jaderberg et al., 2016). Therefore, to attribute the performance of each properly, we construct a Rainbow agent augmented with the multi-horizon auxiliary task (referred to as Multi-Rainbow and shown in orange) but have it still act according to the original policy. 
That is, Multi-Rainbow acts to maximize expected rewards discounted by a fixed γ_action but now learns over multiple horizons as shown in Figure 4.\nWe find that the Multi-Rainbow agent performs nearly as well on these games, suggesting the effectiveness of this as a stand-alone auxiliary task. This is not entirely unexpected given the rather special case of hazard exhibited in the ALE through sticky actions (Machado et al., 2018).\nWe examine further and investigate the performance of this auxiliary task across the full Arcade Learning Environment (Bellemare et al., 2013) using the evaluation protocol recommended by Machado et al. (2018). Doing so, we find strong empirical benefits of the multi-horizon auxiliary task over the state-of-the-art Rainbow agent, as shown in Figure 6." }, { "heading": "6.1 ANALYSIS AND ABLATION STUDIES", "text": "To understand the interplay of the multi-horizon auxiliary task with other improvements in deep RL, we test a random subset of 10 Atari 2600 games against improvements in Rainbow (Hessel et al., 2018). On this set of games we measure a consistent improvement with multi-horizon C51 (Multi-C51) in 9 out of the 10 games over the base C51 agent (Bellemare et al., 2017) in Figure 7.\nFigure 7 indicates that the current implementation of Multi-Rainbow does not generally build successfully on the prioritized replay buffer. On the subset of ten games considered, we find that four out of ten games (Pong, Venture, Gravitar and Zaxxon) are negatively impacted, despite Hessel et al. (2018) finding it to be of considerable benefit and specifically beneficial in three out of these four games (Venture was not considered). The current prioritization scheme simply averaged the temporal-difference errors over all Q-values to establish priority. Alternative prioritization schemes resulted in comparable performance, indicating this is an open issue (Appendix J)." }, { "heading": "7 RELATED WORK", "text": "Hyperbolic discounting in economics. Hyperbolic discounting is well-studied in the field of economics (Sozou, 1998; Dasgupta & Maskin, 2005). Dasgupta and Maskin (2005) propose a softer interpretation than Sozou (1998) (which produces a per-time-step probability of death via the hazard rate) and demonstrate that uncertainty over the timing of rewards can also give rise to hyperbolic discounting and preference reversals, a hallmark of hyperbolic discounting. Hyperbolic discounting was initially presumed to not lend itself to TD-based solutions (Daw & Touretzky, 2000) but the field has evolved on this point. Maia (2009) proposes solution directions toward models that discount quasi-hyperbolically even though each learns with exponential discounting (Loewenstein, 1996), but reaffirms the difficulty. Finally, Alexander and Brown (2010) propose hyperbolically discounted temporal difference (HDTD) learning by making connections to hazard.\nBehavior RL and hyperbolic discounting in neuroscience. TD-learning has long been used for modeling behavioral reinforcement learning (Montague et al., 1996; Schultz et al., 1997; Sutton & Barto, 1998). TD-learning computes the error as the difference between the expected value and the actual value (Sutton & Barto, 1998; Daw, 2003), where the error signal emerges from unexpected rewards. However, these computations traditionally rely on exponential discounting as part of the estimate of the value, which disagrees with empirical evidence in humans and animals (Strotz, 1955; Mazur, 1985; 1997; Ainslie, 1975; 1992). 
Hyperbolic discounting has been proposed as an alternative to exponential discounting, though it has been debated as an accurate model (Kacelnik, 1997; Frederick et al., 2002). Naive modifications of TD-learning to discount hyperbolically present issues, since the simple forms are inconsistent (Daw & Touretzky, 2000; Redish & Kurth-Nelson, 2010). RL models have been proposed to explain behavioral effects in humans and animals (Fu & Anderson, 2006; Rangel et al., 2008), but Kurth-Nelson & Redish (2009) demonstrated that distributed exponential discount factors can directly model hyperbolic discounting. That work proposes the µAgent, an agent that models the value function with a specific discount factor γ. When a distributed set of µAgents votes on the action, this was shown to approximate hyperbolic discounting well in the adjusting-delay assay experiments (Mazur, 1987). Using the hazard formulation established in Sozou (1998), we demonstrate how to extend this to other non-hyperbolic discount functions and demonstrate the efficacy of using a deep neural network to model the different Q-values from a shared representation.\nTowards more flexible discounting in reinforcement learning. RL researchers have recently adopted more flexible schemes beyond a fixed discount factor (Feinberg & Shwartz, 1994; Sutton, 1995; Sutton et al., 2011; White, 2017). Optimal policies are studied in Feinberg & Shwartz (1994), where two value functions with different discount factors are used. Introducing the discount factor as an argument to be queried for a set of timescales is considered in both Horde (Sutton et al., 2011) and γ-nets (Sherstan et al., 2018). Reinke et al. (2017) propose the Average Reward Independent Gamma Ensemble framework, which imitates the average-return estimator. Lattimore and Hutter (2011) generalize the original discounting model through discount functions that vary with the age of the agent, expressing time-inconsistent preferences as in hyperbolic discounting. The need to increase training stability via the effective horizon was addressed in François-Lavet, Fonteneau, and Ernst (2015), who proposed dynamic strategies for the discount factor γ. Meta-learning approaches to deal with the discount factor have been proposed in Xu, van Hasselt, and Silver (2018). Finally, Pitis (2019) characterizes rational decision making in sequential processes, formalizing a process that admits state-action dependent discount rates. Operating over multiple time scales has a long history in RL. Sutton (1995) generalizes the work of Singh (1992) and Dayan and Hinton (1993) to formalize a multi-timescale theory of TD learning. Previous work explored solving MDPs with multiple reward functions and multiple discount factors, though these relied on separate transition models (Feinberg & Shwartz, 1999; Dolgov & Durfee, 2005). Edwards, Littman, and Isbell (2015) consider decomposing a reward function into separate components, each with its own discount factor. In our work, we continue to model the same rewards, but now model the value over different horizons. Recent work in difficult exploration games demonstrates the efficacy of two different discount factors (Burda et al., 2018), one for intrinsic rewards and one for extrinsic rewards. Finally, and concurrent with this work, Romoff et al. (2019) propose the TD(∆)-algorithm, which breaks a value function into a series of value functions with smaller discount factors.\nAuxiliary tasks in reinforcement learning.
Finally, auxiliary tasks have been successfully employed and found to be of considerable benefit in RL. Suddarth and Kergosien (1990) used auxiliary tasks to facilitate representation learning. Building upon this, work in RL has consistently demonstrated the benefit of auxiliary tasks in augmenting the low-information signal coming from the environment through extrinsic rewards (Lample & Chaplot, 2017; Mirowski et al., 2016; Jaderberg et al., 2016; Veeriah et al., 2018; Sutton et al., 2011)." }, { "heading": "8 DISCUSSION AND FUTURE WORK", "text": "This work builds on a body of work that questions one of the basic premises of RL: that one should maximize the exponentially discounted returns via a single discount factor. By learning over multiple horizons simultaneously, we have broadened the scope of our learning algorithms. Through this we have shown that we can enable acting according to new discounting schemes and that learning multiple horizons is a powerful stand-alone auxiliary task. Our method well-approximates hyperbolic discounting and performs better in hazardous MDP distributions. This may be viewed as part of an algorithmic toolkit for modeling alternative discount functions.\nHowever, this work still does not fully capture more general aspects of risk, since the hazard rate may be a function of time. Further, hazard may not be an intrinsic property of the environment but a joint property of both the policy and the environment. If an agent pursues a policy leading to dangerous state distributions, it will naturally be subject to higher hazards, and vice versa; this creates a complicated circular dependency. We would therefore expect an interplay between time-preferences and policy. This is not simple to deal with, but recent work proposing state-action dependent discounting (Pitis, 2019) may provide a formalism for more general time-preference schemes." }, { "heading": "A SOZOU (1998): BELIEF OF RISK IMPLIES A DISCOUNT FUNCTION", "text": "Sozou (1998) formalizes time preferences in which future rewards are discounted based on the probability that the agent will not survive to collect them due to an encountered risk or hazard.\nDefinition A.1. Survival s(t) is the probability of the agent surviving until time t:\n\n$s(t) = P(\text{agent is alive} \mid \text{at time } t)$ (13)\n\nA future reward $r_t$ is less valuable presently if the agent is unlikely to survive to collect it. If the agent is risk-neutral, the present value of a future reward $r_t$ received at time t should be discounted by the probability that the agent will survive until time t to collect it, s(t):³\n\n$v(r_t) = s(t)\, r_t$ (14)\n\nConsequently, if the agent is certain to survive, s(t) = 1, then the reward is not discounted, per Equation 14. From this it is convenient to define the hazard rate.\nDefinition A.2. The hazard rate h(t) is the negative rate of change of the log-survival at time t:\n\n$h(t) = -\frac{d}{dt}\ln s(t)$ (15)\n\nor equivalently $h(t) = -\frac{ds(t)}{dt}\frac{1}{s(t)}$. The environment is therefore considered hazardous at time t if the log-survival is decreasing sharply.\nSozou (1998) demonstrates that the prior belief over the risk in the environment implies a specific discounting function. When the risk occurs at a known constant rate, the agent should discount future rewards exponentially; however, when the agent holds uncertainty over the hazard rate, hyperbolic and other alternative discounting rates arise. The numerical sketch below illustrates the latter equivalence.
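As a quick numerical illustration (ours, not from the paper) of that equivalence, a Monte Carlo estimate of the survival rate of Equation 18 under the exponential hazard prior reproduces the hyperbolic discount $\Gamma_k(t) = 1/(1+kt)$ of Equation 19:

```python
import numpy as np

def survival_under_exponential_prior(t, k, n_samples=200_000, seed=0):
    """Monte Carlo estimate of Eq. 18: s(t) = E_{lambda ~ p_k}[exp(-lambda t)],
    with prior density p_k(lambda) = (1/k) exp(-lambda / k)."""
    rng = np.random.default_rng(seed)
    lam = rng.exponential(scale=k, size=n_samples)  # mean k, as in the prior
    return float(np.exp(-lam * t).mean())

k = 0.05
for t in (1, 5, 10, 50):
    print(t, round(survival_under_exponential_prior(t, k), 4),
          round(1.0 / (1.0 + k * t), 4))  # Gamma_k(t) of Eq. 19
```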
A.1 KNOWN HAZARD IMPLIES EXPONENTIAL DISCOUNT\n\nWe recover the familiar exponential discount function in RL from the prior assumption that the environment has a known constant hazard. Consider a known hazard rate h(t) = λ ≥ 0. Definition A.2 yields the first-order differential equation $\lambda = -\frac{d}{dt}\ln s(t) = -\frac{ds(t)}{dt}\frac{1}{s(t)}$. The solution for the survival rate is $s(t) = e^{-\lambda t}$, which can be related to the RL discount factor γ:\n\n$s(t) = e^{-\lambda t} = \gamma^t$ (16)\n\nThis interprets γ as the per-time-step probability of the episode continuing. It also allows us to connect the hazard rate λ ∈ [0, ∞) to the discount factor γ ∈ [0, 1):\n\n$\gamma = e^{-\lambda}$ (17)\n\nAs the hazard increases, λ → ∞, the corresponding discount factor becomes increasingly myopic, γ → 0. Conversely, as the environment hazard vanishes, λ → 0, the corresponding agent becomes increasingly far-sighted, γ → 1. In RL we commonly choose a single γ, which is consistent with the prior belief that there exists a known constant hazard rate λ = −ln(γ). We now relax the assumption that the agent holds this strong prior that it exactly knows the true hazard rate. From a Bayesian perspective, a looser prior allows for some uncertainty in the underlying hazard rate of the environment, as we will see in the following section.\n\nA.2 UNCERTAIN HAZARD IMPLIES NON-EXPONENTIAL DISCOUNT\n\nWe may not always be so confident of the true risk in the environment and may instead reflect this underlying uncertainty in the hazard rate through a hazard prior p(λ). Our survival rate is then computed by weighting the exponential survival rates defined by each λ over our prior p(λ):\n\n$s(t) = \int_{\lambda=0}^{\infty} p(\lambda)\, e^{-\lambda t}\, d\lambda$ (18)\n\n³Note the difference from RL, where future rewards are discounted by time delay, so that the value is $v(r_t) = \gamma^t r_t$.\n\nSozou (1998) shows that under an exponential prior on the hazard, $p(\lambda) = \frac{1}{k}\exp(-\lambda/k)$, the expected survival rate for the agent is hyperbolic:\n\n$s(t) = \frac{1}{1 + kt} \equiv \Gamma_k(t)$ (19)\n\nWe denote the hyperbolic discount by $\Gamma_k(t)$ to make the connection to γ in reinforcement learning explicit. Further, Sozou (1998) shows that different priors over hazard correspond to different discount functions. We reproduce two figures in Figure 8 showing the correspondence between different hazard rate priors and the resultant discount functions. The common approach in RL is to maintain a delta-hazard (black line), which leads to exponential discounting of future rewards. Different priors lead to non-exponential discount functions." }, { "heading": "B ADDITIONAL PATHWORLD EXPERIMENTS", "text": "In Figure 9 we consider the case where the agent still holds an exponential prior but has the wrong coefficient k, and in Figure 10 the case where the agent still holds an exponential prior but the true hazard is actually drawn from a uniform distribution with the same mean.\nThrough these two validating experiments, we demonstrate the robustness of estimating hyperbolically discounted Q-values when the environment presents dynamic levels of risk and the agent faces non-trivial decisions. Hyperbolic discounting is preferable to exponential discounting even when the agent's prior does not precisely match the true environment hazard rate distribution, whether by coefficient (Figure 9) or by functional form (Figure 10).\n\nDiscount function | MSE\nk = 0.05 | 0.002\nk = 0.1 | 0.493\nk = 0.025 | 0.814\nk = 0.2 | 1.281\n\nTable 2: The average mean squared error (MSE) over each of the paths in Figure 9.
As the prior moves further away from the true value of k = 0.05, the error increases. However, notice that even large factor-of-2 changes in k result in generally lower errors than if the agent had considered only a single exponential discount factor γ, as in Table 1.\n\nDiscount function | MSE\nhyperbolic value | 0.235\nγ = 0.975 | 0.266\nγ = 0.95 | 0.470\nγ = 0.99 | 4.029\n\nTable 3: The average mean squared error (MSE) over each of the paths in Figure 10 when the underlying hazard is drawn according to a uniform distribution. We find that hyperbolic discounting is more robust to hazards drawn from a uniform distribution than exponential discounting." }, { "heading": "C ALTERNATIVE APPROACH TO HYPERBOLIC Q-VALUES", "text": "C.1 COMPUTING HYPERBOLIC Q-VALUES\n\nLet us start with the case where we would like to estimate the value function when rewards are discounted hyperbolically instead of under the common exponential scheme. We refer to the hyperbolic Q-values as $Q^{\Gamma}_\pi$, defined in Equation 21:\n\n$Q^{\Gamma_k}_\pi(s, a) = \mathbb{E}_\pi\left[\Gamma_k(1)R(s_1, a_1) + \Gamma_k(2)R(s_2, a_2) + \cdots \mid s, a\right]$ (20)\n$= \mathbb{E}_\pi\left[\sum_t \Gamma_k(t) R(s_t, a_t) \mid s, a\right]$ (21)\n\nWe may relate the hyperbolic $Q^{\Gamma}_\pi$-value to the values learned through standard Q-learning. To do so, notice that the hyperbolic discount $\Gamma_k(t)$ can be expressed as the integral of a certain function f(γ, t) over γ ∈ [0, 1) in Equation 22:\n\n$\int_{\gamma=0}^{1} \gamma^{kt}\, d\gamma = \frac{1}{1 + kt} = \Gamma_k(t)$ (22)\n\nThe integral over this specific function $f(\gamma, t) = \gamma^{kt}$ yields the desired hyperbolic discount factor $\Gamma_k(t)$ by considering an infinite set of exponential discount factors γ over its domain γ ∈ [0, 1). Recognize that the integrand $\gamma^{kt}$ is the standard exponential discount factor, which suggests a connection to standard Q-learning (Watkins & Dayan, 1992): if we could consider an infinite set of γ, we could combine them to yield hyperbolic discounts for the corresponding time step t. We build on this idea of modeling many γ throughout this work.\nWe employ Equation 22 and return to the task of computing hyperbolic Q-values $Q^{\Gamma}_\pi(s, a)$:⁴\n\n$Q^{\Gamma}_\pi(s, a) = \mathbb{E}_\pi\left[\sum_t \Gamma_k(t) R(s_t, a_t) \mid s, a\right]$ (23)\n$= \mathbb{E}_\pi\left[\sum_t \left(\int_{\gamma=0}^{1} \gamma^{kt} d\gamma\right) R(s_t, a_t) \mid s, a\right]$ (24)\n$= \int_{\gamma=0}^{1} \mathbb{E}_\pi\left[\sum_t R(s_t, a_t)(\gamma^k)^t \mid s, a\right] d\gamma$ (25)\n$= \int_{\gamma=0}^{1} Q^{(\gamma^k)}_\pi(s, a)\, d\gamma$ (26)\n\nwhere $\Gamma_k(t)$ has been replaced on the first line by $\int_{\gamma=0}^{1}\gamma^{kt}d\gamma$, and the exchange of sum and integral is valid if $\sum_{t=0}^{\infty}\gamma^{kt} r_t < \infty$. This shows that we can compute the $Q^{\Gamma}_\pi$-value under a hyperbolic discount by considering an infinite set of $Q^{\gamma^k}_\pi$-values computed through standard Q-learning. Examining further, each γ ∈ [0, 1) results in TD-errors learned for a new $\gamma^k$. For values of k < 1, which extend the horizon of the hyperbolic discounting, this results in a larger γ.\n\n⁴Hyperbolic Q-values can generally be infinite even for bounded rewards. We consider non-infinite episodic MDPs only.\n\nD VISUAL SUMMARY OF APPROACH\nWe summarize our approach for estimating non-exponentially discounted Q-values here."
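In code, the summary amounts to the following sketch (ours): approximate the integral of Equation 22 with a lower Riemann sum over a finite grid of γ values, where each grid point plays the role of one learned head $Q^{(\gamma_i^k)}$ in Equation 26.

```python
import numpy as np

def hyperbolic_discount_approx(k, t, n_gamma=100, gamma_max=0.9999):
    """Lower Riemann sum of Eq. 22 over a finite grid of gammas."""
    gammas = np.linspace(0.0, gamma_max, n_gamma + 1)
    weights = np.diff(gammas)            # (gamma_{i+1} - gamma_i)
    # gamma_i^{k t} is the weight head i assigns to rewards at step t
    return float(np.sum(weights * gammas[:-1] ** (k * t)))

k, t = 0.1, 7
print(hyperbolic_discount_approx(k, t), 1.0 / (1.0 + k * t))  # both ~0.588
```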
}, { "heading": "E EQUIVALENCE OF HYPERBOLIC DISCOUNTING AND EXPONENTIAL HAZARD", "text": "Following Section A we also show a similar equivalence between hyperbolic discounting and the specific hazard distribution pk(λ) = 1k exp(−λ/k), where again, λ ∈ [0,∞)\nQδ(0),Γkπ (s, a) = Eπ,P0 [ ∞∑ t=0 Γk(t)R(st, at)|s0 = s, a0 = a ]\n= Eπ,P0 [ ∞∑ t=0 (∫ ∞ λ=0 pk(λ)e −λtdλ ) R(st, at)|s0 = s, a0 = a ]\n= ∫ ∞ λ=0 pk(λ)Eπ,P0 [ ∞∑ t=0 e−λtR(st, at)|s0 = s, a0 = a ] dλ\n= Eλ∼pk(·)Eπ,P0 [ ∞∑ t=0 e−λtR(st, at)|s0 = s, a0 = a ]\n= Eλ∼pk(·)Eπ,Pλ [ ∞∑ t=0 R(st, at)|s0 = s, a0 = a ] = Qpk,1π (s, a)\nWhere the first step uses Equation 19. This equivalence implies that discount factors can be used to learn policies that are robust to hazards." }, { "heading": "F ALTERNATIVE DISCOUNT FUNCTIONS", "text": "We expand upon three special cases to see how functions f(γ, t) = w(γ)γt may be related to different discount functions d(t).\nWe summarize in Table 4 how a particular hazard prior p(λ) can be computed via integrating over specific weightings w(γ) and the corresponding discount function.\nThree cases: 1. Delta hazard prior: p(λ) = δ(λ− k)\n2. Exponential hazard prior: p(λ) = 1ke −λ/k\n3. Uniform hazard prior: p(λ) = 1k for λ ∈ [0, k]\nFor the three cases we begin with the Laplace transform on the prior p(λ) = ∫∞ λ=0\np(λ)e−λtdλ and then chnage the variables according to the relation between γ = e−λ, Equation 17.\nF.1 DELTA HAZARD PRIOR\nA delta prior p(λ) = δ(λ− k) on the hazard rate is consistent with exponential discounting.∫ ∞ λ=0 p(λ)e−λtdλ = ∫ ∞ λ=0 δ(λ− k)e−λtdλ\n= e−kt\nwhere δ(λ− k) is a Dirac delta function defined over variable λ with value k. The change of variable γ = e−λ (equivalently λ = − ln γ) yields differentials dλ = − 1γ dγ and the limits λ = 0→ γ = 1 and λ =∞→ γ = 0. Additionally, the hazard rate value λ = k is equivalent to the γ = e−k.\nd(t) = ∫ ∞ λ=0 p(λ)e−λtdλ\n= ∫ 0 γ=1 δ(− ln γ − k)γt ( − 1 γ dγ ) =\n∫ 1 γ=0 δ(− ln γ − k)γt−1dγ\n= e−kt\n= γtk\nwhere we define a γk = e−k to make the connection to standard RL discounting explicit. Additionally and reiterating, the use of a single discount factor, in this case γk, is equivalent to the prior that a single hazard exists in the environment.\nF.2 EXPONENTIAL HAZARD PRIOR\nAgain, the change of variable γ = e−λ yields differentials dλ = − 1γ dγ and the limits λ = 0→ γ = 1 and λ =∞→ γ = 0. ∫ ∞\nλ=0\np(λ)e−λtdλ = ∫ 0 γ=1 p(−lnγ)γt ( − 1 γ dγ ) =\n∫ 1 γ=0 p(−lnγ)γt−1dγ\nwhere p(·) is the prior. With the exponential prior p(λ) = 1k exp(−λ/k) and by substituting λ = −lnγ we verify Equation 9\n∫ 1 0 1 k exp(ln γ/k)γt−1dγ = 1 k ∫ 1 0 exp(lnγ1/k)γt−1dγ\n= 1\nk ∫ 1 0 γ1/k+t−1dγ\n= 1\nk 1 1 k + t γ1/k+t ∣∣∣∣1 γ=0\n= 1\n1 + kt\nF.3 UNIFORM HAZARD PRIOR\nFinally if we hold a uniform prior over hazard, 1k for λ ∈ [0, k] then Sozou (1998) shows the Laplace transform yields\nd(t) = ∫ ∞ 0 p(λ)e−λtdλ\n= 1\nk ∫ k 0 e−λtdλ\n=− 1 kt e−λt ∣∣∣∣k λ=0 = 1\nkt\n( 1− e−kt ) Use the same change of variables to relate this to γ. The bounds of the integral become λ = 0 → γ = 1 and λ = k → γ = e−k.\nd(t) =− 1 k ∫ e−k γ=1 γt−1dγ\n= 1 kt γt ∣∣∣∣1 γ=e−k\n= 1\nkt\n( 1− e−kt ) which recovers the discounting scheme.\nG DETERMINING THE γ INTERVAL\nWe provide further detail for which γ we choose to model and motivation why. We choose a γmax which is the largest γ to learn through Bellman updates. 
}, { "heading": "H APPROXIMATION ERRORS", "text": "Instead of evaluating the upper limit of Equation 9 at 1, we evaluate it at $\gamma_{max}$, which yields $\gamma_{max}^{kt}/(1+kt)$. Our approximation thereby induces an error in the estimate of the hyperbolic discount.\nThis approximation error in the Riemann sum increases as $\gamma_{max}$ decreases, as evidenced by Figure 12. When $\gamma_{max} \to 1$, the approximation becomes more accurate, as supported in Table 5 up to small random errors." }, { "heading": "I ESTIMATING HYPERBOLIC COEFFICIENTS", "text": "As discussed, we can estimate the hyperbolic discount in two different ways. We illustrate the resulting estimates and approximations here. We use lower-bound Riemann sums in both cases for simplicity, but more sophisticated integral estimates exist.\nAs noted earlier, we considered two different integrals for computing the hyperbolic coefficients. Under the form derived by the Laplace transform, the integrals are sharply peaked as γ → 1. The difference between the integrals is visually apparent in Figure 13.\n\nDiscount function | MSE\nmax-γ = 0.999 | 0.002\nmax-γ = 0.9999 | 0.003\nmax-γ = 0.99 | 0.233\nmax-γ = 0.95 | 1.638\nmax-γ = 0.9 | 2.281\n\nTable 5: The average mean squared error (MSE) over each of the paths in Figure 12." }, { "heading": "J PERFORMANCE OF DIFFERENT REPLAY BUFFER PRIORITIZATION SCHEME", "text": "As found through our ablation study in Figure 7, the Multi-Rainbow auxiliary task interacted poorly with the prioritized replay buffer when the TD-errors were averaged evenly across all heads. As an alternative scheme, we considered prioritizing according to the largest γ, which is also the γ defining the Q-values by which the agent acts.\nThe (preliminary⁵) results of this new prioritization scheme are in Figure 14.\n\n[Figure 14: a per-game bar chart of percent improvement on a log scale over the full set of Atari 2600 games; the plot itself is omitted here.]\n\nFigure 14: The (preliminary) performance improvement over Rainbow using the multi-horizon auxiliary task in the Atari Learning Environment when we instead prioritize according to the TD-errors computed from the largest γ (3 seeds each).\n\nTo this point, there is evidence that prioritizing according to the TD-errors generated by the largest γ is a better strategy than averaging.\n\n⁵These runs have been computed over approximately 100 out of 200 iterations and will be updated for the final version."
}, { "heading": "K HYPERPARAMETERS", "text": "For all our experiments in DQN Mnih et al. (2015), C51 Bellemare et al. (2017) and Rainbow Hessel et al. (2018), we benchmark against the baselines set by Castro et al. (2018) and we use the default hyperparameters for each of the respective algorithms. That is, our Multi-agent uses the same optimization, learning rates, and hyperparameters as it’s base class." }, { "heading": "L AUXILIARY TASK RESULTS", "text": "Final results of the multi-horizon auxiliary task on Rainbow (Multi-Rainbow) in Table 7." } ]
2019
null
SP:39c5ad94a057196b513d4a96d3478ddf73add838
[ "This paper contributes to the deep learning generalization theory, mainly from the theoretical perspective with experimental verifications. The key proposition is given by the unnumbered simple equation in the middle of page 4 (please number it), where \\mathcal{I} is the Fisher information matrix. According to the authors, this simple metric, which is the log-determinant of the Fisher information matrix, can characterize the generalization of a DNN.", "This paper provides a metric to characterize local minima of deep network loss landscapes based on the Fisher information matrix of the model parameterized by the deep network. The authors connect the Fisher information to the curvature of the loss landscape (the loss considered is the negative loss likelihood) and obtain generalization bounds through PAC Bayes analysis. They further propose regularizing the training of deep networks using the local curvature of the loss as a regularizer. In the final experimental section of the paper, the relationship between the empirical measures and generalization is shown on a variety of networks." ]
Recent advances in deep learning theory have evoked the study of generalizability across different local minima of deep neural networks (DNNs). While current work focused on either discovering properties of good local minima or developing regularization techniques to induce good local minima, no approach exists that can tackle both problems. We achieve these two goals successfully in a unified manner. Specifically, based on the Fisher information we propose a metric both strongly indicative of generalizability of local minima and effectively applied as a practical regularizer. We provide theoretical analysis including a generalization bound and empirically demonstrate the success of our approach in both capturing and improving the generalizability of DNNs. Experiments are performed on CIFAR-10 and CIFAR-100 for various network architectures.
[ { "affiliations": [], "name": "MINIMA CHARAC" } ]
[ { "authors": [ "Sanjeev Arora", "Rong Ge", "Behnam Neyshabur", "Yi Zhang" ], "title": "Stronger generalization bounds for deep nets via a compression approach", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Peter L Bartlett", "Shahar Mendelson" ], "title": "Rademacher and gaussian complexities: Risk bounds and structural results", "venue": "Journal of Machine Learning Research,", "year": 2002 }, { "authors": [ "Peter L Bartlett", "Dylan J Foster", "Matus J Telgarsky" ], "title": "Spectrally-normalized margin bounds for neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Pratik Chaudhari", "Anna Choromanska", "Stefano Soatto", "Yann LeCun", "Carlo Baldassi", "Christian Borgs", "Jennifer Chayes", "Levent Sagun", "Riccardo Zecchina" ], "title": "Entropy-sgd: Biasing gradient descent into wide valleys", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Anna Choromanska", "Mikael Henaff", "Michael Mathieu", "Gérard Ben Arous", "Yann LeCun" ], "title": "The loss surfaces of multilayer networks", "venue": "In Artificial Intelligence and Statistics,", "year": 2015 }, { "authors": [ "Yann N Dauphin", "Razvan Pascanu", "Caglar Gulcehre", "Kyunghyun Cho", "Surya Ganguli", "Yoshua Bengio" ], "title": "Identifying and attacking the saddle point problem in high-dimensional non-convex optimization", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Laurent Dinh", "Razvan Pascanu", "Samy Bengio", "Yoshua Bengio" ], "title": "Sharp minima can generalize for deep nets", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Simon S Du", "Jason D Lee", "Haochuan Li", "Liwei Wang", "Xiyu Zhai" ], "title": "Gradient descent finds global minima of deep neural networks", "venue": "arXiv preprint arXiv:1811.03804,", "year": 2018 }, { "authors": [ "John Duchi", "Elad Hazan", "Yoram Singer" ], "title": "Adaptive subgradient methods for online learning and stochastic optimization", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Gintare Karolina Dziugaite", "Daniel M Roy" ], "title": "Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data", "venue": "arXiv preprint arXiv:1703.11008,", "year": 2017 }, { "authors": [ "Gamaleldin Elsayed", "Dilip Krishnan", "Hossein Mobahi", "Kevin Regan", "Samy Bengio" ], "title": "Large margin deep networks for classification", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Peter D Grünwald", "Abhijit Grunwald" ], "title": "The minimum description length principle", "venue": "MIT press,", "year": 2007 }, { "authors": [ "Nick Harvey", "Christopher Liaw", "Abbas Mehrabian" ], "title": "Nearly-tight vc-dimension bounds for piecewise linear neural networks", "venue": "In Conference on Learning Theory,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Identity mappings in deep residual networks", "venue": "In European conference on computer vision,", "year": 2016 }, { 
"authors": [ "Elad Hoffer", "Itay Hubara", "Daniel Soudry" ], "title": "Train longer, generalize better: closing the generalization gap in large batch training of neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Pavel Izmailov", "Dmitrii Podoprikhin", "Timur Garipov", "Dmitry Vetrov", "Andrew Gordon Wilson" ], "title": "Averaging weights leads to wider optima and better generalization", "venue": "arXiv preprint arXiv:1803.05407,", "year": 2018 }, { "authors": [ "Yiding Jiang", "Dilip Krishnan", "Hossein Mobahi", "Samy Bengio" ], "title": "Predicting the generalization gap in deep networks with margin distributions", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Ryo Karakida", "Shotaro Akaho", "Shun-ichi Amari" ], "title": "Universal statistics of fisher information in deep neural networks: Mean field approach", "venue": "In The 22nd International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "Kenji Kawaguchi" ], "title": "Deep learning without poor local minima", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Nitish Shirish Keskar", "Richard Socher" ], "title": "Improving generalization performance by switching from adam to sgd", "venue": "arXiv preprint arXiv:1712.07628,", "year": 2017 }, { "authors": [ "Nitish Shirish Keskar", "Dheevatsa Mudigere", "Jorge Nocedal", "Mikhail Smelyanskiy", "Ping Tak Peter Tang" ], "title": "On large-batch training for deep learning: Generalization gap and sharp minima", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "John Langford", "Rich Caruana" ], "title": "not) bounding the true error", "venue": "In Advances in Neural Information Processing Systems,", "year": 2002 }, { "authors": [ "Chen-Yu Lee", "Patrick W Gallagher", "Zhuowen Tu" ], "title": "Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree", "venue": "In Artificial Intelligence and Statistics,", "year": 2016 }, { "authors": [ "Tengyuan Liang", "Tomaso Poggio", "Alexander Rakhlin", "James Stokes" ], "title": "Fisher-rao metric, geometry, and complexity of neural networks", "venue": "arXiv preprint arXiv:1711.01530,", "year": 2017 }, { "authors": [ "David JC MacKay" ], "title": "A practical bayesian framework for backpropagation networks", "venue": "Neural computation,", "year": 1992 }, { "authors": [ "David McAllester" ], "title": "Simplified pac-bayesian margin bounds", "venue": "In Learning theory and Kernel machines,", "year": 2003 }, { "authors": [ "David A McAllester" ], 
"title": "Pac-bayesian model averaging", "venue": "In COLT,", "year": 1999 }, { "authors": [ "David A McAllester" ], "title": "Some pac-bayesian theorems", "venue": "Machine Learning,", "year": 1999 }, { "authors": [ "Behnam Neyshabur", "Ruslan R Salakhutdinov", "Nati Srebro" ], "title": "Path-sgd: Path-normalized optimization in deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Behnam Neyshabur", "Ryota Tomioka", "Nathan Srebro" ], "title": "Norm-based capacity control in neural networks", "venue": "In Conference on Learning Theory, pp", "year": 2015 }, { "authors": [ "Behnam Neyshabur", "Srinadh Bhojanapalli", "David McAllester", "Nati Srebro" ], "title": "Exploring generalization in deep learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Behnam Neyshabur", "Srinadh Bhojanapalli", "Nathan Srebro" ], "title": "A PAC-bayesian approach to spectrally-normalized margin bounds for neural networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Quynh Nguyen", "Matthias Hein" ], "title": "Optimization landscape and expressivity of deep cnns", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Emin Orhan", "Xaq Pitkow" ], "title": "Skip connections eliminate singularities", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Jeffrey Pennington", "Pratik Worah" ], "title": "The spectrum of the fisher information matrix of a singlehidden-layer neural network", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Jorma Rissanen" ], "title": "Modeling by shortest data", "venue": "description. Automatica,", "year": 1978 }, { "authors": [ "Jorma J Rissanen" ], "title": "Fisher information and stochastic complexity", "venue": "IEEE transactions on information theory,", "year": 1996 }, { "authors": [ "Levent Sagun", "Utku Evci", "V. Ugur Guney", "Yann Dauphin", "Leon Bottou" ], "title": "Empirical analysis of the hessian of over-parametrized neural networks, 2018", "venue": "URL https://openreview.net/ forum?id=rJrTwxbCb", "year": 2018 }, { "authors": [ "Jure Sokolić", "Raja Giryes", "Guillermo Sapiro", "Miguel RD Rodrigues" ], "title": "Robust large margin deep neural networks", "venue": "IEEE Transactions on Signal Processing,", "year": 2017 }, { "authors": [ "Ke Sun", "Frank Nielsen" ], "title": "Lightlike neuromanifolds, occam’s razor and deep learning", "venue": "arXiv preprint arXiv:1905.11027,", "year": 2019 }, { "authors": [ "Christian Szegedy", "Vincent Vanhoucke", "Sergey Ioffe", "Jon Shlens", "Zbigniew Wojna" ], "title": "Rethinking the inception architecture for computer vision", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Ashia C Wilson", "Rebecca Roelofs", "Mitchell Stern", "Nati Srebro", "Benjamin Recht" ], "title": "The marginal value of adaptive gradient methods in machine learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Lei Wu", "Zhanxing Zhu" ], "title": "Towards understanding generalization of deep learning: Perspective of loss landscapes", "venue": "In Proceedings of the 34th International Conference on Machine LearningVolume 70. JMLR. 
org,", "year": 2017 }, { "authors": [ "Huan Xu", "Shie Mannor" ], "title": "Robustness and generalization", "venue": "Machine learning,", "year": 2012 } ]
[ { "heading": "1 INTRODUCTION", "text": "Recently, there has been a surge in the interest of acquiring a theoretical understanding over deep neural network’s behavior. Breakthroughs have been made in characterizing the optimization process, showing that learning algorithms such as stochastic gradient descent (SGD) tend to end up in one of the many local minima which have close-to-zero training loss (Choromanska et al., 2015; Dauphin et al., 2014; Kawaguchi, 2016; Nguyen & Hein, 2018; Du et al., 2018). However, these numerically similar local minima typically exhibit very different behaviors in terms of generalizability. It is, therefore, natural to ask two closely related questions: (a) What kind of local minima can generalize better? (b) How to find those better local minima?\nTo our knowledge, existing work focused only on one of the two questions. For the “what” question, various definitions of “flatness/sharpness” have been introduced and analyzed (Keskar et al., 2017; Neyshabur et al., 2018; 2017; Wu et al., 2017; Liang et al., 2017). However, they suffer from one or more of the problems: (1) being mostly theoretical with no or poor empirical evaluations on modern neural networks, (2) lack of theoretical analysis and understanding, (3) in practice not applicable to finding better local minima. Regarding the “how” question, existing approaches (Hochreiter & Schmidhuber, 1997; Sokolić et al., 2017; Chaudhari et al., 2017; Hoffer et al., 2017; Neyshabur et al., 2015a; Izmailov et al., 2018) share some of the common drawbacks: (1) derived only from intuitions but no specific metrics provided to characterize local minima, (2) no or weak analysis of such metrics, (3) not applicable or no consistent generalization improvement for modern DNNs.\nIn this paper, we tackle both the “what” and the “how” questions in a unified manner. Our answer provides both the theory and applications for the generalization problems across different local minima. Based on the determinant of Fisher information estimated from the training set, we propose a metric that solves all the aforementioned issues. The metric can well capture properties that characterize local minima of different generalization ability. We provide its theoretical analysis, primarily a generalization bound based on PAC-Bayes (McAllester, 1999b;a). For modern DNNs in practice, it is necessary to provide a tractable approximation of our metric. We propose an intuitive and efficient approximation to compare it across different local minima. Our empirical evaluations fully illustrate the effectiveness of the metric as a strong indicator of local minima’s generalizability. Moreover, from the metric we further derive and design a practical regularization technique that guides the optimization process in finding better generalizable local minima. The experiments on image classification datasets demonstrate that our approach gives consistent generalization boost for a range of DNN architectures." }, { "heading": "2 RELATED WORK", "text": "It has been empirically shown that larger batch sizes lead to worse generalization (Keskar et al., 2017). Hoffer et al. (2017) analyzed how the training dynamics is affected by different batch sizes and presented a perturbed batch normalization technique for better generalization. While it effectively improves generalization for large-batch training, a specific metric that indicates the generalizability is missing. Similarly, Elsayed et al. 
(2018) employed a structured margin loss to improve the performance of DNNs with respect to noise and adversarial attacks, yet no metric was proposed. Furthermore, this approach provided essentially no generalization gain in the normal training setup.\nThe local entropy of the loss landscape was proposed as a measure of “flatness” in Chaudhari et al. (2017), which also designed an entropy-guided SGD that achieves faster convergence in training DNNs. However, the method does not consistently improve generalization, e.g., it decreases performance on CIFAR-10 (Krizhevsky & Hinton, 2009). Another method focused on modifying the optimization process is Path-SGD, proposed by Neyshabur et al. (2015a). Specifically, the authors derived an approximate steepest descent algorithm that utilizes path-wise norm regularization to achieve better generalization. The authors only evaluated it on a two-layer neural network, very likely because the path norm is computationally expensive to optimize during training.\nA flat minimum search algorithm was proposed by Hochreiter & Schmidhuber (1997) based on the “flatness” of local minima defined as the volume of local boxes. Yet since the boxes have their axes aligned to the axes of the model parameters, their volumes can significantly underestimate “flatness” for over-parametrized networks, due to the specific spectral density of the Hessian of DNNs studied in Pennington & Worah (2018); Sagun et al. (2018). The authors of Wu et al. (2017) also characterized “flatness” by volumes. They considered the inverse volume of the basin of attraction and proposed to use the Frobenius norm of the Hessian at the local minimum as a metric. In our experiments, we show that their metric does not accurately capture the generalization ability of local minima under different scenarios. Moreover, they did not derive a regularizer from their metric.\nBased on a “robustness” metric, Sokolić et al. (2017) derived a regularization technique that successfully improves generalization on multiple image classification datasets. Nevertheless, we show that their metric fails to capture generalizability across different local minima.\nBy using the Bayes factor, MacKay (1992) studied the generalization ability of different local minima obtained by varying the coefficient of L2 regularization. It derived a formula involving the determinant of the Hessian, similar to the one in ours. However, this approach has restricted settings and, without an efficient approximation, its metric is not applicable to modern DNNs, let alone usable as a regularizer. A generalization bound is also missing in MacKay (1992).\nIn the broader context of the “what” question, properties that capture the generalization of neural networks have been extensively studied. Various complexity measures for DNNs have been proposed based on norm, margin, Lipschitz constant, compression and robustness (Bartlett & Mendelson, 2002; Neyshabur et al., 2015b; Sokolić et al., 2017; Xu & Mannor, 2012; Bartlett et al., 2017; Zhou et al., 2019; Dziugaite & Roy, 2017; Arora et al., 2018; Jiang et al., 2019). While some of them aimed to provide tight generalization bounds and some to provide better empirical results, none of the above approaches explored the “how” question at the same time.\nVery recently, Karakida et al. (2019) and Sun & Nielsen (2019) studied the Fisher information of neural networks through the lens of its spectral density. Specifically, Karakida et al.
(2019) applied mean field theory to study the statistics of the spectrum and the appropriate size of the learning rate. Also taking an information-theoretic approach, Sun & Nielsen (2019) derived a novel formulation of the minimum description length in the context of deep learning by utilizing tools from singular semi-Riemannian geometry." }, { "heading": "3 OUTLINE AND NOTATIONS", "text": "In a typical K-way classification setting, each sample x ∈ X belongs to a single class denoted $c_x \in \{1, \ldots, K\}$ according to the probability vector y ∈ Y, where Y is the K-dimensional probability simplex, so that $p(c_x = i) = y_i$ and $\sum_i y_i = 1$. Denote a feed-forward DNN parametrized by $w \in \mathbb{R}^W$ as $f_w : X \to Y$, which uses nonlinear activation functions and a softmax layer at the end. Denote the cross entropy loss as $\ell(f_w(x), y) = -\sum_i y_i \ln f_w(x)_i$. Denote the training set as S, defined over X × Y with |S| = N. The training objective is given as $L(S, w) = \frac{1}{N}\sum_{(x,y)\in S}\ell(f_w(x), y)$. Assuming S is sampled from some true data distribution denoted D, we can define the expected loss $L(D, w) = \mathbb{E}_{(x,y)\sim D}[\ell(f_w(x), y)]$. Throughout this paper, we refer to a local minimum of L(S, w) corresponding to a local minimizer $w_0$ simply as the local minimum $w_0$. Given such a $w_0$, our paper's outline as well as our main achievements are:\n• In Section 4 we relate Fisher information to neural network training as a prerequisite.\n• In Section 5.1 we propose a metric $\gamma(w_0)$ that well captures local minima's generalizability.\n• In Section 5.2 we provide a generalization bound related to $\gamma(w_0)$.\n• In Section 5.3 we propose an approximation $\hat\gamma(w_0)$ of $\gamma(w_0)$, which is shown to be very effective in Section 7.1 via extensive empirical evaluations.\n• In Section 6 we devise a practical regularizer from $\gamma(w_0)$ that consistently improves generalizability across different DNNs, as evaluated in Section 7.2." }, { "heading": "3.1 OTHER NOTATIONS", "text": "Denote $\nabla_w$ as the gradient, $J_w[\cdot]$ as the Jacobian matrix, $\nabla^2_w$ as the Hessian, $D_{KL}(\cdot\,\|\,\cdot)$ as the KL divergence, $\|\cdot\|_2$ as the spectral or Euclidean norm, $\|\cdot\|_F$ as the Frobenius norm, $|\cdot|$ as the determinant, $\mathrm{tr}(\cdot)$ as the trace, $\rho(\cdot)$ as the spectral radius, $\ell\ell_S(w)$ as the log-likelihood on S, and $[\cdot]_i$ for selecting the i-th entry. We define $\ell_x(w) \in \mathbb{R}^K$ whose i-th entry is $-\ln f_w(x)_i$, so that $\ell(f_w(x), y) = \ell_x(w)^T y$. We define $\bar y = \mathrm{argmax}(y)$ and $\tilde y \in \mathbb{R}^K$ as the one-hot vector whose $\bar y$-th entry is 1 and which is otherwise 0. Then we define $\tilde L(S, w) \in \mathbb{R}^N$ as the “simplified” loss vector of S whose entries are $\ell(f_w(x), \tilde y)$ for (x, y) ∈ S, i.e., we approximate the cross entropy loss $\ell(f_w(x), y)$ by $\ell(f_w(x), \tilde y)$." }, { "heading": "4 LOCAL MINIMUM AND FISHER INFORMATION", "text": "First of all, if y is strictly one-hot, no local minimum will even exist with 100% training accuracy, since the cross entropy loss will always be positive. To admit good local minima in the first place, we assume the widely used label smoothing (Szegedy et al., 2016) is applied to train all models in our analysis; a small illustration of this assumption follows below. Label smoothing enables us to assume a local minimum $w_0$ (in this case, also a global minimum) of the training loss with $\sum_{(x,y)\in S} D_{KL}(f_{w_0}(x)\,\|\,y) = 0$.\nEach sample (x, y) ∈ S has its label $c_x$ sampled by $p(c_x = i|x) = y_i$, denoted as $(x, c_x) \sim S$. The joint probability $p(x, c_x)$ modeled by the DNN is $p(x, c_x = i; w) = p(c_x = i|x; w)\, p(x) = [f_w(x)]_i\, p(x)$ with $p(x) = \frac{1}{N}$.
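Here is the promised illustration (ours) of the label-smoothing assumption: with smoothed targets, $f_{w_0}(x) = y$ is attainable by a softmax output, so the KL term above can indeed reach zero at a minimum.

```python
import numpy as np

def smooth_labels(y_onehot, eps=0.1):
    """Label smoothing (Szegedy et al., 2016): mix the one-hot target
    with the uniform distribution over the K classes."""
    k = y_onehot.shape[-1]
    return (1.0 - eps) * y_onehot + eps / k

y = smooth_labels(np.array([0.0, 1.0, 0.0]))
# The cross entropy -sum_i y_i ln f_i is minimized exactly at f = y, where
# D_KL(f || y) vanishes; with a strictly one-hot y this point is
# unreachable for a softmax output, so no minimum would exist.
print(y)  # approximately [0.0333, 0.9333, 0.0333]
```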
We can relate the training loss L(S, w) to the negative log-likelihood $-\ell\ell_S(w) = -\sum_{(x,y)\in S}\mathbb{E}_{c_x\sim y}\ln p(x, c_x; w)$ by:\n\n$L(S, w) = \frac{1}{N}\sum_{(x,y)\in S}\ell_x(w)^T y = -\frac{1}{N}\sum_{(x,y)\in S}\mathbb{E}_{c_x\sim y}\ln p(c_x|x; w) = -\frac{1}{N}\ell\ell_S(w) + \ln\frac{1}{N}$\n\nand so $w_0$ also corresponds to a local maximum of the likelihood function. The observed Fisher information evaluated at $w_0$ is defined as $I(w_0) = -\frac{1}{N}\nabla^2_w\ell\ell_S(w_0)$. We can further derive:\n\n$I(w_0) = \nabla^2_w L(S, w_0) = \mathbb{E}_{(x, c_x)\sim S}\left[\nabla_w\ln p(c_x|x; w_0)\,\nabla_w\ln p(c_x|x; w_0)^T\right]$ (1)\n\nThe first equality is straightforward; the second has its proof in Appendix A. Since $p(c_x = i|x) = y_i$ and $\ln p(c_x = i|x; w_0) = -[\ell_x(w_0)]_i$, we can further simplify Equation 1 to:\n\n$I(w_0) = \frac{1}{N}\sum_{(x,y)\in S}\sum_{i=1}^{K} y_i\left(\nabla_w[\ell_x(w_0)]_i\right)\left(\nabla_w[\ell_x(w_0)]_i\right)^T$ (2)\n\nRemark: when we assume global optimality, we have $\nabla_w\ell(f_{w_0}(x), y) = 0$ since $D_{KL}(f_{w_0}(x)\,\|\,y) = 0$; yet this does not imply that $I(w_0) \in \mathbb{R}^{W\times W}$ equals 0 in Equation 2." }, { "heading": "5 LOCAL MINIMA CHARACTERIZATION", "text": "In this section, we derive and propose our metric, provide a PAC-Bayes generalization bound, and lastly propose and give intuitions for an effective approximation of our metric for modern DNNs." }, { "heading": "5.1 FISHER DETERMINANT AS GENERALIZATION METRIC", "text": "We would like a metric with which to compare different local minima. Similar to the various definitions of “flatness/sharpness”, we take a small neighborhood of the target local minimum $w_0$ into account. Formally, for a sufficiently small V, we define the model class $M(w_0)$ as the largest connected subset of $\{w \in \mathbb{R}^W : L(S, w) \le h\}$ that contains $w_0$, where the height h is a real number such that the volume (namely, the Lebesgue measure) of $M(w_0)$ is V. By the Intermediate Value Theorem, for any sufficiently small volume V there exists a corresponding height h.\nWe propose our metric γ(·), where a lower $\gamma(w_0)$ indicates a better local minimum $w_0$:\n\n$\gamma(w_0) = \ln|I(w_0)|$ (3)\n\nAs a metric, $\gamma(w_0)$ requires $|I(w_0)| \ne 0$. Therefore, we state the following Assumption 1.\nAssumption 1. The local minima $w_0$ we care about in the comparison are well isolated and unique in their corresponding neighborhoods $M(w_0)$.\nAssumption 1 is quite reasonable. For state-of-the-art network architectures used in practice, it often holds. To be precise, Assumption 1 is violated when the Hessian matrix at a local minimum is singular. Specifically, Orhan & Pitkow (2018) summarize three sources of such singularity: (i) a dead neuron, (ii) identical neurons, and (iii) linear dependence of the neurons. As demonstrated in Orhan & Pitkow (2018), networks with skip connections, e.g. ResNet (He et al., 2016a), WRN (Zagoruyko & Komodakis, 2016), and DenseNet (Huang et al., 2017) used in our experiments, can effectively eliminate all the aforementioned singularities.\nIn Dinh et al. (2017), the authors pointed out another source of singularity specific to networks with scale-invariant activation functions, e.g. ReLU, referred to as the rescaling issue. Namely, one can rescale the model parameters layer-wise so that the underlying function represented by the network remains unchanged in the region. In practice, this issue is not critical. Firstly, most modern deep ReLU networks, e.g. ResNet, WRN, and DenseNet, have normalization layers, e.g. BatchNorm (Ioffe & Szegedy, 2015), applied before the activations. BatchNorm shifts all the inputs to the ReLU function, equivalently shifting the ReLU horizontally, which makes it no longer scale-invariant. Secondly, due to the ubiquitous use of the Gaussian weight initialization scheme and of weight decay, most local minima obtained by gradient learning have weights of a relatively small norm. Consequently, in practice, we will not compare two local minima that are essentially the same except that one is a rescaled version of the other with a much larger weight norm.
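Returning to Equation 2, the following is a minimal sketch (ours) of assembling the observed Fisher information from per-(sample, class) gradients; the explicit $y_i$ weighting follows from the expectation over $c_x \sim y$, and all array names are illustrative.

```python
import numpy as np

def observed_fisher(grads, probs):
    """Eq. 2: I(w0) = (1/N) sum_x sum_i y_i * g_{x,i} g_{x,i}^T.

    grads: (N, K, W) array with grads[x, i] = grad_w [l_x(w0)]_i
    probs: (N, K) array of (smoothed) label probabilities y
    """
    n = grads.shape[0]
    weighted = probs[..., None] * grads              # y_i * gradient
    # Sum of outer products as one tensor contraction over samples/classes
    return np.einsum('nkw,nkv->wv', weighted, grads) / n
```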
Note that normally we only have a limited dataset size, and so an approximation of $\gamma(w_0)$ is a must. We present our approximation scheme and its intuition in Section 5.3." }, { "heading": "5.1.1 CONNECTION TO FISHER INFORMATION APPROXIMATION (FIA) CRITERION", "text": "Our metric $\gamma(w_0)$ is closely related to the FIA criterion. From information theory, the MDL principle suggests that, among different statistical models, the best is the one that best compresses both the sampled data and the model (Rissanen, 1978). Accordingly, Rissanen (1996) derived the FIA criterion to compare statistical models, each of which is a class of models in the neighborhood of a global minimum $w_0$. The model class's FIA criterion is written as (lower FIA is better):\n\n$\mathrm{FIA} = -\sum_{(x,y)\in S}\mathbb{E}_{c_x\sim y}\ln p(x, c_x; w_0) + \frac{W}{2}\ln\frac{N}{2\pi} + \ln\int_{M(w_0)}\sqrt{|I_w|}\,dw$\n\nOn the right-hand side, the first two terms are constants. To see the connection to our metric, we replace the expected Fisher information $I_w$ with the tractable observed one, $I(w_0)$. Assuming the training loss is locally quadratic in $M(w_0)$, an assumption later formalized and validated as Assumption 2, and since $I(w_0) = \nabla^2_w L(S, w_0)$ from Equation 1, we can rewrite the last term as $\ln V + \ln\sqrt{|I(w_0)|}$.\nRemark: although similar in form, the FIA criterion and our metric are essentially different due to the appearance of the observed Fisher information in place of the expected one, which makes our metric both tractable and much more applicable (it no longer requires global optimality).\n\n5.1.2 CONNECTION TO EXISTING FLATNESS/SHARPNESS METRICS\n\nAs mentioned in Section 2, the “flatness” of a local minimum was first related to the generalization ability of the neural network in Hochreiter & Schmidhuber (1997), where both the concept and the method are preliminary. The idea has recently been popularized in the context of deep learning by a series of papers such as Keskar et al. (2017); Chaudhari et al. (2017); Wu et al. (2017). Our approach roughly shares the same intuition with these existing works, namely that a “flat” local minimum admits less complexity and so generalizes better than a “sharp” one. To the best of our knowledge, our paper is the first among these works to provide both theoretical analysis, including a generalization bound, and empirical verification of both an efficient metric and a practical regularizer for modern network architectures." }, { "heading": "5.2 GENERALIZATION BOUND", "text": "Assumption 2. Given the training loss L(S, w), its local minimum $w_0$ satisfying Assumption 1, and the associated neighborhood $M(w_0)$ whose volume V is sufficiently small, as described in Sections 3, 4 and 5.1 respectively, we assume that L(S, w), when confined to $M(w_0)$, is quadratic.\nAssumption 2 is quite reasonable as well. Grünwald & Grunwald (2007) suggest that a log-likelihood function, under the regularity conditions of (1) existence of its 1st, 2nd and 3rd derivatives and (2) uniqueness of its maximum in the region, behaves locally like a quadratic function around its maximum. In our case, L(S, w) corresponds to the log-likelihood function $\ell\ell_S(w)$, and so $w_0$ corresponds to a local maximum of $\ell\ell_S(w)$.
Since L(S, w) is analytic and $w_0$ is the only local minimum of L(S, w) in $M(w_0)$, the training loss can indeed be considered locally quadratic. Similar to Langford & Caruana (2002), Harvey et al. (2017) and Neyshabur et al. (2017), we apply the PAC-Bayes theorem (McAllester, 2003) to derive a generalization bound for our metric. Specifically, we pick a uniform prior P over $w \in M(w_0)$ according to the maximum entropy principle and, after observing the training data S, pick the posterior Q with density $q(w) \propto e^{-|L(S, w_0) - L(S, w)|}$. Then Theorem 1 bounds the expected L(D, w) using $\gamma(w_0)$; see its proof in Appendix B.\nTheorem 1. Given |S| = N, D, L(S, w) and L(D, w) as described in Section 3, a local minimum $w_0$ with $L_0 \triangleq L(S, w_0)$, a sufficiently small volume V of $M(w_0)$, Assumptions 1 & 2 satisfied, and P, Q defined above, for any δ ∈ (0, 1] we have, with probability at least 1 − δ, that:\n\n$\mathbb{E}_{w\sim Q}[L(D, w)] \le L_0 + A + 2\sqrt{\frac{2L_0 + 2A + \ln\frac{2N}{\delta}}{N - 1}}, \qquad A = \frac{W\, V^{2/W}\, \pi^{1/W}\, e^{\gamma(w_0)/W}}{4\pi e}$\n\nIn short, Theorem 1 shows that a lower $\gamma(w_0)$ indicates a more generalizable local minimum $w_0$." }, { "heading": "5.3 APPROXIMATION", "text": "As stated in Section 4, in practice an approximation of $\gamma(w_0)$, denoted $\hat\gamma(w_0)$, is necessary, since calculating $\gamma(w_0)$ involves computing the determinant of a W × W matrix. Let us first assume we have an imagined training set S′ of size W, a local minimum $w_0$ of L(S′, w) and, correspondingly, a full-rank observed Fisher information matrix $I'(w_0)$ so that $\ln|I'(w_0)|$ is well defined. In reality, we only have a training set S ⊂ S′ with a singular $I(w_0)$. Notice that $w_0$ is also a local minimum of L(S, w), since $\sum_{(x,y)\in S'} D_{KL}(f_{w_0}(x)\,\|\,y) = 0$ as assumed in Section 4. We then approximate the eigenvalues of $I'(w_0)$ by those of its sub-matrices and thereby approximate $\ln|I'(w_0)|$. First of all, we replace y by its one-hot version $\tilde y$ defined in Section 3.1, since the two are very close; this drastically reduces the cost of gradient calculation. With $\tilde L(S, w) \in \mathbb{R}^N$ and $\bar y$ defined in Section 3.1, according to Equation 2, the observed Fisher information $I'(w_0) \in \mathbb{R}^{W\times W}$ is:\n\n$I'(w_0) \approx \frac{1}{W}\sum_{(x,y)\in S'}\left(\nabla_w[\ell_x(w_0)]_{\bar y}\right)\left(\nabla_w[\ell_x(w_0)]_{\bar y}\right)^T = \frac{1}{W}J_w[\tilde L(S', w)]^T J_w[\tilde L(S', w)]$ (4)\n\nwhose eigenvalues (and determinant) coincide with those of $\frac{1}{W}J_w[\tilde L(S', w)]\, J_w[\tilde L(S', w)]^T$.\nLet $\{\lambda_m\}_{m=1}^{W}$ denote the eigenvalues of $I'(w_0)$; then $\gamma(w_0) = \ln\prod_{m=1}^{W}\lambda_m = \sum_{m=1}^{W}\ln\lambda_m$. Without calculating all W eigenvalues, we can perform a Monte Carlo estimation of $\gamma(w_0)$ by randomly sampling N′ < N < W eigenvalues from $\{\lambda_m\}_{m=1}^{W}$. We denote the samples as $\{\lambda_n\}_{n=1}^{N'}$, and we have $\frac{W}{N'}\sum_{n=1}^{N'}\ln\lambda_n \approx \sum_{m=1}^{W}\ln\lambda_m$. Supposing the estimation is run T times, we have $\lim_{T\to\infty}\frac{1}{T}\sum_{t=1}^{T}\frac{W}{N'}\sum_{n=1}^{N'}\ln\lambda_n = \gamma(w_0)$.\nIn practice $\{\lambda_n\}_{n=1}^{N'}$ is inaccessible, since we do not have $I'(w_0)$ in the first place. Instead, we sample $S_t \subset S$ with $|S_t| = N'$ for T times and define\n\n$\xi_t(w_0) \triangleq J_w[\tilde L(S_t, w_0)]\, J_w[\tilde L(S_t, w_0)]^T \in \mathbb{R}^{N'\times N'}$\n\nNotice that $\xi_t(w_0)$ is a principal sub-matrix of $W I'(w_0)$, obtained by removing the rows and columns corresponding to the data in S′ \ $S_t$. According to Theorem 3, one can roughly estimate the size of the eigenvalues of a matrix from those of its sub-matrices. Therefore we propose to estimate $\gamma(w_0)$ by $\hat\gamma(w_0)$ with:\n\n$\hat\gamma(w_0) \triangleq \frac{1}{T}\sum_{t=1}^{T}\ln|\xi_t(w_0)|, \qquad \gamma(w_0) \approx \frac{W}{N'}\hat\gamma(w_0) + W\ln\frac{1}{W} \;\text{ as } T\to\infty$ (5)\n\nWe leave Theorem 3, as well as the derivation of Equation 5, to Appendix C. In proposing $\hat\gamma(w_0)$ we ignore constants and irrelevant scaling factors, because what matters is the relative size of γ(·) when comparing different local minima; a sketch of the estimator in code follows.
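Here is a minimal sketch (ours) of the $\hat\gamma$ estimator of Equation 5, assuming the per-sample gradients of the simplified loss $\tilde L$ have already been stacked into a matrix; all names are illustrative.

```python
import numpy as np

def gamma_hat(jac_rows, n_prime, n_trials, seed=0):
    """Estimate gamma-hat of Eq. 5.

    jac_rows: (N, W) array; row i is grad_w of the simplified loss on
    training sample i, i.e. one row of J_w[L~(S, w0)].
    """
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(n_trials):
        idx = rng.choice(len(jac_rows), size=n_prime, replace=False)
        j = jac_rows[idx]                     # Jacobian on the sampled S_t
        xi = j @ j.T                          # xi_t(w0), an N' x N' Gram matrix
        sign, logdet = np.linalg.slogdet(xi)  # numerically stable ln|xi_t|
        vals.append(logdet)
    return float(np.mean(vals))               # gamma-hat(w0)
```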
Empirically we find that, given a relatively large number of sample trials T, our metric $\hat\gamma(\cdot)$ can effectively capture the generalizability of a local minimum even for a small N′ (details in Section 7.1 and in Appendix D)." }, { "heading": "6 LOCAL MINIMA REGULARIZATION", "text": "Besides its pragmatic value, devising a practical regularizer based on $\gamma(w_0)$ also “verifies” our theoretical understanding of DNN training, helping future improvement of learning algorithms. However, converting $\gamma(w_0)$ into a practical regularizer is non-trivial due to the computational burden of:\n1. optimizing terms related to the gradient, which involves calculating the Hessian;\n2. computing the eigenvalues in each training step, which is even more expensive.\nWe first solve the second issue and then the first one. To solve the second issue, we propose to optimize a surrogate for $\gamma(w_0)$ that avoids eigenvalue computations, namely the trace of the observed Fisher information, $\mathrm{tr}(I(w_0))$. The two terms satisfy the relation:\n\n$\frac{1}{W}\gamma(w_0) = \frac{1}{W}\ln|I(w_0)| \le \ln\mathrm{tr}(I(w_0)) - \ln W$\n\nAnother major benefit of using the trace is that, unlike $\gamma(w_0)$, $\mathrm{tr}(I(w_0))$ remains well defined even with a small training set |S| = N. From Equation 2 we have:\n\n$\mathrm{tr}(I(w_0)) = \frac{1}{N}\sum_{(x,y)\in S}\sum_{i=1}^{K} y_i\,\|\nabla_w[\ell_x(w_0)]_i\|_2^2$\n\nThe cost of computing $\mathrm{tr}(I(w_0))$ is linear in the number of terms in the double summation. We therefore simplify the calculation by replacing y with $\tilde y$, similar to Equation 4, so that\n\n$\mathrm{tr}(I(w_0)) \approx \frac{1}{N}\sum_{(x,y)\in S}\|\nabla_w\ell(f_{w_0}(x), \tilde y)\|_2^2 = \frac{1}{N}\sum_{i=1}^{N}\|\nabla_w[\tilde L(S, w_0)]_i\|_2^2$\n\nwhere $\tilde y$ and $\tilde L(\cdot,\cdot)$ are defined in Section 3.1. As gradient-based training never exactly reaches the local minimum $w_0$, we choose to optimize $\mathrm{tr}(I(w))$ during the entire training process. We have $\nabla_w\frac{1}{|B|}\sum_i\|\nabla_w[\tilde L(B, w)]_i\|_2^2 = \frac{1}{|B|}\sum_i\nabla_w\|\nabla_w[\tilde L(B, w)]_i\|_2^2$ for each mini-batch B. We can then further reduce the computational cost by batching. Specifically, we randomly split B into M sub-batches of equal size, namely $\{B_i\}_{i=1}^{M}$. We define $g_i \triangleq \nabla_w\frac{1}{|B_i|}\sum_{(x,y)\in B_i}\ell(f_w(x), \tilde y)$ and compute $\|g_i\|_2^2$ for each $g_i \in \{g_i\}_{i=1}^{M}$ instead of computing $\|\nabla_w[\tilde L(B, w)]_i\|_2^2$ for each data point in B.\nWe deal with the first computational burden by adopting a first-order approximation. For any w, with a sufficiently small α > 0 we have $\tilde L(B_i, w - \alpha g_i) \approx \tilde L(B_i, w) - J_w[\tilde L(B_i, w)]\,\alpha g_i$. Then\n\n$\frac{1}{|B_i|}\sum_{j=1}^{|B_i|}\left[\tilde L(B_i, w) - \tilde L(B_i, w - \alpha g_i)\right]_j \approx \frac{1}{|B_i|}\sum_{j=1}^{|B_i|}\left[J_w[\tilde L(B_i, w)]\,\alpha g_i\right]_j = \alpha\|g_i\|_2^2$\n\nTherefore, we propose to optimize the following regularized training objective at each update step:\n\n$L(B, w) + \beta R_\alpha(w), \qquad R_\alpha(w) \triangleq \frac{1}{M}\sum_{i=1}^{M}\frac{1}{|B_i|}\sum_{j=1}^{|B_i|}\left[\tilde L(B_i, w) - \tilde L(B_i, w - \alpha g_i)\right]_j$ (6)\n\nWe omit all second-order terms when computing $\nabla_w R_\alpha(w)$, simply by not back-propagating through $g_i$. On the other hand, we find that gradient clipping, especially at the beginning of training, is necessary to make the generalization boost consistent. We have 4 hyper-parameters: α, β, the number of sub-batches M, and the gradient clipping threshold τ. Our approach is formalized below; a code sketch follows the listing.\n\nAlgorithm 1 Regularized Mini-batch Learning (Single Update Step)\n1: procedure UPDATE(w, B; α, β, M, τ) ▷ Last 4 are hyper-parameters\n2: $\{B_i\}_{i=1}^{M}$ ← B ▷ Split the mini-batch B into M sub-batches\n3: for i ← 1 to M do\n4: $g_i$ ← $\nabla_w\frac{1}{|B_i|}\sum_{(x,y)\in B_i}\ell(f_w(x), \tilde y)$ ▷ Compute the gradient of the sub-batch\n5: $g_i$ ← copy($g_i$) ▷ A copy prevents gradient flow\n6: end for\n7: Compute $R_\alpha(w)$ by Equation 6 ▷ Use the copied version of $g_i$\n8: $\nabla_w L_{reg}$ ← $\nabla_w[L(B, w) + \beta R_\alpha(w)]$\n9: Clip $\nabla_w L_{reg}$ with threshold τ ▷ clip_by_global_norm in TensorFlow\n10: Gradient update with the clipped $\nabla_w L_{reg}$ ▷ Update with any gradient-based optimizer\n11: end procedure"
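For concreteness, here is a PyTorch-style sketch (ours) of one step of Algorithm 1. The paper's own implementation appears to be in TensorFlow (it references clip_by_global_norm), so the functional-call mechanism, the inline cross entropy, and the use of one-hot targets for the full-batch loss are our simplifying choices.

```python
import torch
from torch.func import functional_call

def xent(logits, targets):
    # Cross entropy against (possibly one-hot) probability targets
    return -(targets * torch.log_softmax(logits, dim=-1)).sum(-1).mean()

def regularized_update(model, x, y_onehot, opt, alpha=1e-4, beta=10.0,
                       n_sub=4, tau=10.0):
    params = dict(model.named_parameters())
    reg = 0.0
    for xb, yb in zip(x.chunk(n_sub), y_onehot.chunk(n_sub)):
        sub_loss = xent(model(xb), yb)        # mean simplified loss on B_i
        g = torch.autograd.grad(sub_loss, list(params.values()),
                                retain_graph=True)
        # The "copy" of g_i in line 5: these grads carry no graph
        # (create_graph=False), so no second-order terms flow through g_i.
        shifted = {name: p - alpha * gi
                   for (name, p), gi in zip(params.items(), g)}
        shifted_loss = xent(functional_call(model, shifted, (xb,)), yb)
        reg = reg + (sub_loss - shifted_loss) / n_sub  # R_alpha(w), Eq. 6
    total = xent(model(x), y_onehot) + beta * reg
    opt.zero_grad()
    total.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), tau)  # threshold tau
    opt.step()
```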
" }, { "heading": "7 EXPERIMENTS", "text": "We perform two sets of experiments to illustrate the effectiveness of our metric γ(w0). We demonstrate that: (1) the approximation γ̂(w0) captures the generalizability well across local minima; (2) our regularization technique based on γ(w0) provides a consistent generalization gain for DNNs.

Throughout our theoretical analysis, we assume that label smoothing (LS) is applied during model training in order to obtain well-defined local minima (first mentioned in Section 4). In all our empirical evaluations, we run both the version with LS applied and the version without. The results are very similar, so we stick to the version without LS, since this matches the original training setup in the papers of the various network architectures that we use." }, { "heading": "7.1 EXPERIMENTS ON LOCAL MINIMA CHARACTERIZATION", "text": "We perform comprehensive evaluations to compare our metric γ̂(·) with several others on ResNet-20 (He et al., 2016a) for the CIFAR-10 dataset (architecture details in Appendix E). Our metric consistently outperforms the others in indicating local minima's generalizability. Specifically, Sokolić et al. (2017) proposed a robustness-based metric used as a regularizer; Wu et al. (2017) proposed to use the Frobenius norm of the Hessian as a metric; Keskar et al. (2017) proposed a metric closely related to the spectral radius of the Hessian. In summary, we compare 4 metrics, all evaluated at a local minimum w given S. For all four metrics, smaller values indicate better generalization.

• Robustness: (1/N) Σ_{(x,y)∈S} ‖J_x[f_w(x)]‖²₂

• Frobenius norm: ‖∇²_w L(S, w)‖²_F

• Spectral radius: ρ(∇²_w L(S, w))

• Ours: γ̂(w) = (1/T) Σ_{t=1}^T ln|ξ(St, w)|, St ⊂ S

Both the Frobenius norm and the spectral radius based metrics are related to ours, as from Equation 1 we have ‖∇²_w L(S, w)‖²_F = ‖I(w)‖²_F and ρ(∇²_w L(S, w)) = ρ(I(w)). These two metrics, however, are too expensive to compute on the entire training set S; we instead calculate them by averaging the results over T sampled St ⊂ S, similar to how we compute γ̂(w). We leave the details of how exactly we compute these metrics in our experiments to Appendix D.

We perform evaluations in three scenarios, similar to Neyshabur et al. (2017); Keskar et al. (2017). We examine different local minima due to (1) a confusion set of varying size in training, (2) different data augmentation schemes, and (3) different batch sizes. Specifically,

• In Scenario I, we randomly select a subset of 10000 images as the training set and train the DNN with a confusion set consisting of CIFAR-10 samples with random labels (a small sketch of this construction follows the list). We vary the size of the confusion set so that the resulting local minima generalize differently to the test set, while all retain close-to-zero training losses. We consider confusion sizes of 0, 1k, 2k, 3k, 4k and 5k. We calculate all metrics based on the sampled 10000 training images.

• In Scenario II, we vary the level of data augmentation. We apply horizontal flipping, denoted flip-only; random cropping from images with 1 pixel padded on each side plus flipping, denoted 1-crop-f; random cropping with 4 pixels padded on each side plus flipping, denoted 4-crop-f; and no data augmentation at all, denoted no-aug. Under all schemes, the network achieves perfect training accuracy. All the metrics are computed on the un-augmented training set.

• In Scenario III, we vary the batch size. Hoffer et al. (2017) suggest that a large batch size leads to poor generalization. We consider batch sizes of 128, 256, 512 and 1024.
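For illustration, the confusion-set construction of Scenario I amounts to relabeling a random subset of CIFAR-10; a minimal sketch (make_confusion_set and its arguments are our own naming, not code from the paper):

import random

def make_confusion_set(dataset, size, num_classes=10, seed=0):
    # Samples `size` images from the dataset and assigns each a random label.
    rng = random.Random(seed)
    idx = rng.sample(range(len(dataset)), size)
    return [(dataset[i][0], rng.randrange(num_classes)) for i in idx]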
The default values for the 3 variables are confusion size 0, 4-crop-f, and batch size 128. For each configuration in each scenario, we train 5 models and report the results (averages & standard deviations) of all metrics as well as the test errors (in percentage). For the confusion set experiments, we sample a new training set and a new confusion set every time. In all scenarios, we train the model for 200 epochs with an initial learning rate of 0.1, divided by 10 whenever the training loss plateaus. Within each scenario, we find the final training loss to be very small and very similar across the different models, with the training accuracy essentially equal to 1, indicating convergence to local minima.

The results are shown in Figures 1, 2 and 3 for Scenarios I, II and III, respectively. Our metric significantly outperforms the others and is very effective in capturing the generalization properties, i.e., a lower value of our metric consistently indicates a more generalizable local minimum." }, { "heading": "7.2 EXPERIMENTS ON LOCAL MINIMA REGULARIZATION", "text": "We evaluate our regularizer on CIFAR-10 & CIFAR-100 for four different network architectures: a plain CNN, ResNet-20, Wide ResNet (Zagoruyko & Komodakis, 2016) and DenseNet (Huang et al., 2017). We use WRN-28-2-B(3,3) from the Wide ResNet paper and DenseNet-BC-k=12 from the DenseNet paper. See Appendix E for further architecture details. We denote the four networks as CNN, ResNet, WRN and DenseNet, respectively.

We manually set α = 0.0001 in all experiments and select the other three hyper-parameters in Algorithm 1 by validation via a 45k/5k training data split for each network architecture on each dataset. Specifically, we consider β ∈ {10, 20, 30, 40, 50, 75, 100, 125}, M ∈ {4, 8, 16} and τ ∈ {1, 5, 10, 15}. We keep all the other training hyper-parameters and schemes, as well as the setup, identical to those in the respective original papers (details in Appendix E). The training details of the plain CNN are also in Appendix E. We train 5 separate models for each network–dataset combination and report the test errors in percentage (mean ± std.) in Table 1, where “+reg” indicates training with our regularizer applied. The results demonstrate that our method provides a consistent generalization improvement for a wide range of DNNs." }, { "heading": "7.2.1 THE CHOICE OF THE OPTIMIZER", "text": "As described in Algorithm 1, our proposed regularizer is not tied to a specific optimizer. We perform experiments with SGD+Momentum because it is the optimizer used for ResNet, WRN, and DenseNet, helping all of them achieve current or previous state-of-the-art results. Our regularizer aims to find better, “flatter” minima to improve generalization, whereas adaptive optimization methods such as Adam (Kingma & Ba, 2014) and AdaGrad (Duchi et al., 2011) try to speed up convergence, usually at the cost of generalizability. Recent works (Wilson et al., 2017; Keskar & Socher, 2017) show that adaptive methods generalize worse than SGD+Momentum. Specifically, in a setup very similar to ours, Keskar & Socher (2017) demonstrate that SGD+Momentum consistently outperforms the others on ResNet and DenseNet for CIFAR-10 and CIFAR-100. Other approaches that also utilize local curvature to improve SGD, such as Entropy-SGD (Chaudhari et al., 2017) mentioned in Section 2, have empirical results that are rather preliminary compared to ours."
}, { "heading": "7.2.2 GENERALIZATION BOOST AS A RESULT OF BETTER LOCAL MINIMA", "text": "Our regularizer essentially optimizes an upper bound of the proposed metric during training. We perform a sanity check to illustrate that the regularizer indeed induces better local minima characterized by our metric. For ResNet, Wide-ResNet and DenseNet trained on CIFAR-10, we compute the metric on local minima of similar training loss obtained with or without applying the regularizer. Table 2 shows that the resulting generalization boost aligns with what captured by our metric." }, { "heading": "8 CONCLUSION AND FUTURE WORK", "text": "In this paper, we show a bridge between the field of deep learning theory and regularization methods with respect to the generalizability of local minima. We propose a metric that captures the generalization properties of different local minima and provide its theoretical analysis including a generalization bound. We further derive an efficient approximation of the metric and devise a practical and effective regularizer from it. Empirical results demonstrate our success in both capturing and improving the generalizability of DNNs. Our exploration promises a direction for future work on the regularization and optimization of DNNs." }, { "heading": "A PROOF OF EQUATION 1", "text": "To prove the second equality in Equation 1, it suffices to prove the following equality:\n−∇2w``S(w) = ∑\n(x,y)∈S K∑ i=1 yi[∇w ln p(cx = i|x;w)∇w ln p(cx = i|x;w)T ]\nFor convenience, we change the notation of the local minimum from w0 to w and further denote p(cx = i|x;w) as pxw(i). Since−∇2w``S(w) = − ∑ (x,y)∈S ∑K i=1 yi ∇2w ln pxw(i), for each (x, y) ∈ S and i ∈ {1, 2, ...,K}, we have:\n[∇2w ln pxw(i)]j,k = ∂2\n∂wj∂wk ln pxw(i)\n= ∂\n∂wj\n( ∂ ∂wk pxw(i)\npxw(i)\n)\n= pxw(i)\n∂2\n∂wj∂wk pxw(i) pxw(i) 2 − ∂ ∂wj pxw(i) pxw(i) ∂ ∂wk pxw(i) pxw(i)\n=\n∂2\n∂wj∂wk pxw(i)\npxw(i) − ∂ ∂wj ln pxw(i) · ∂ ∂wk ln pxw(i)\nSince w0 is a local minimum (also a global minimum) described in Section 4 as yi = pxw(i) for i = 1, 2, ...,K, when taking the double summation, the first term above becomes:\n∑ (x,y)∈S K∑ i=1\n∂2\n∂wj∂wk pxw(i) =\n∂2\n∂wj∂wk ∑ (x,y)∈S K∑ i=1 pxw(i) = ∂2 ∂wj∂wk N = 0\nThen it follows that:\n[∇2w``S(w)]j,k = − ∑\n(x,y)∈S K∑ i=1 yi[∇w ln pxw(i)∇w ln pxw(i)T ]j,k" }, { "heading": "B PROOF OF THE GENERALIZATION BOUND IN SECTION 5.2", "text": "First let us review the PAC-Bayes Theorem in McAllester (2003):\nTheorem 2. For any data distribution D and a loss function L(·, ·) ∈ [0, 1], let L(D, w) and L(S, w) be the expected loss and training loss respectively for the model paramterized by w, with the training set |S|= N . For any prior distribution P with a model class C as its support, any posterior distribution Q over C (not necessarily Bayesian posterior), and for any δ ∈ (0, 1], we have with probability at least 1− δ that:\nE w∼Q [L(D, w)] ≤ E w∼Q [L(S, w)] + 2\n√ 2DKL(Q||P) + ln 2Nδ\nN − 1\nAs eγ(w0) = |I(w0)|, we can rewrite the generalization bound we want to prove in Section 5.2 as:\n(7) E w ∼Q [L(D, w)] ≤ L0 +\nW · V 2/Wπ1/W |I(w0)|1/W\n4πe\n+ 2\n√ W · V 2/Wπ1/W |I(w0)|1/W + 4πeL0 + 2πe ln 2Nδ\n2πe(N − 1)\nAs defined in Section 5.2, given the model classM(w0), whose volume is V , for the neural network fw, the uniform prior P attains the probability density function p(w) = 1V for any w ∈ M(w0) and the posterior Q has density q(w) ∝ e−|L(S,w)−L0|. 
Based on Assumption 2 and the observed Fisher information I(w0) derived in Section 4 (in particular Equation 2), we have:

$$L(\mathcal{S}, w) = L_0 + \tfrac{1}{2}(w - w_0)^T I(w_0)(w - w_0) \qquad \forall w \in \mathcal{M}(w_0)$$

Denote Σ = [I(w0)]^{−1} = [∇²_w L(S, w0)]^{−1}. Then Q is a truncated multivariate Gaussian distribution whose density function q is:

$$q(w; w_0, \Sigma) = \frac{\sqrt{(2\pi)^{-n}|\Sigma|^{-1}}\exp\{-\frac{1}{2}(w - w_0)^T\Sigma^{-1}(w - w_0)\}}{\int_{\mathcal{M}(w_0)}\sqrt{(2\pi)^{-n}|\Sigma|^{-1}}\exp\{-\frac{1}{2}(w - w_0)^T\Sigma^{-1}(w - w_0)\}\,dw} = \frac{\exp\{-\frac{1}{2}(w - w_0)^T\Sigma^{-1}(w - w_0)\}}{\int_{\mathcal{M}(w_0)}\exp\{-\frac{1}{2}(w - w_0)^T\Sigma^{-1}(w - w_0)\}\,dw} \qquad (8)$$

Denote the denominator of Equation 8 by Z and define:

$$g(w; w_0, \Sigma) := -\tfrac{1}{2}(w - w_0)^T\Sigma^{-1}(w - w_0) \le 0$$

Then q can also be written as:

$$q(w; w_0, \Sigma) = \frac{\exp\{g(w; w_0, \Sigma)\}}{Z}$$

In order to derive a generalization bound in the form of the PAC-Bayes Theorem, it suffices to prove an upper bound on the KL divergence term:

$$D_{KL}(\mathcal{Q}\|\mathcal{P}) = \mathbb{E}_{w\sim\mathcal{Q}}\ln\frac{q(w)}{p(w)} = -\mathbb{E}_{w\sim\mathcal{Q}}\ln\frac{1}{V} + \mathbb{E}_{w\sim\mathcal{Q}}\ln q(w) = \ln V + \mathbb{E}_{w\sim\mathcal{Q}}\,g(w; w_0, \Sigma) + \ln\frac{1}{Z}$$
$$\le \ln V + 0 - \ln\Big(\int_{\mathcal{M}(w_0)}\exp\{g(w; w_0, \Sigma)\}\,dw\Big) \le \ln V - \ln\Big(\int_{\mathcal{M}(w_0)}\exp\{-\max_{w\in\mathcal{M}(w_0)} L(\mathcal{S}, w)\}\,dw\Big)$$
$$= \ln V - \ln\Big(V\cdot\exp\{-\max_{w\in\mathcal{M}(w_0)} L(\mathcal{S}, w)\}\Big) = \ln V - \ln V + h = h$$

where h is the height of M(w0) defined in Section 5.1. For convenience, we shift L(S, w) down by L0 and denote the shifted training loss L0(w) := L(S, w) − L0, so that L0(w0) = 0. Then

$$L_0(w) = \tfrac{1}{2}(w - w_0)^T\Sigma^{-1}(w - w_0) \qquad \forall w\in\mathcal{M}(w_0)$$

Furthermore, the following two sets are equivalent:

$$\{w\in\mathbb{R}^W : L(\mathcal{S}, w) = h\} = \{w\in\mathbb{R}^W : L_0(w) = h - L_0\}$$

both of which describe the W-dimensional hyperellipsoid given by the equation L0(w) = h − L0, which can be converted to the standard form for hyperellipsoids:

$$(w - w_0)^T\,\frac{\Sigma^{-1}}{2(h - L_0)}\,(w - w_0) = 1$$

The volume enclosed by this hyperellipsoid is exactly the volume of M(w0), i.e., V; so we have

$$\frac{\pi^{W/2}}{\Gamma(\frac{W}{2}+1)}\sqrt{2^W (h - L_0)^W |\Sigma|} = V$$

Solving for h, with Stirling's approximation for the factorial, Γ(n + 1) ≈ √(2πn)(n/e)^n, we obtain

$$h = L_0 + \frac{\big(V\cdot\Gamma(\frac{W}{2}+1)\big)^{2/W}}{2\pi|\Sigma|^{1/W}} \approx L_0 + \frac{V^{2/W}\,\pi^{1/W}\,W^{(W+1)/W}\,|I(w_0)|^{1/W}}{4\pi e}$$

where Γ(·) denotes the Gamma function. Notice that for modern DNNs we have W ≫ 1, and so W^{(W+1)/W} ≈ W. Then the generalization bound in the form of the PAC-Bayes Theorem is given as:

$$\mathbb{E}_{w\sim\mathcal{Q}}[L(\mathcal{D}, w)] \le \mathbb{E}_{w\sim\mathcal{Q}}[L(\mathcal{S}, w)] + 2\sqrt{\frac{W\,V^{2/W}\,\pi^{1/W}\,|I(w_0)|^{1/W} + 4\pi e L_0 + 2\pi e\ln\frac{2N}{\delta}}{2\pi e\,(N-1)}}$$

We can further bound the first term on the right-hand side as:

$$\mathbb{E}_{w\sim\mathcal{Q}}[L(\mathcal{S}, w)] \le \mathbb{E}_{w\sim\mathcal{Q}}\big[\max_{w\in\mathcal{M}(w_0)} L(\mathcal{S}, w)\big] = h$$

Putting it all together, we finally obtain Equation 7." }, { "heading": "C DERIVATION OF EQUATION 5", "text": "First, let us present the well-known theorem in linear algebra that relates the eigenvalues of a matrix to those of its sub-matrices.

Theorem 3. Given an n × n real symmetric matrix A with eigenvalues λ1 ≤ ... ≤ λn, for any k < n denote by B a principal sub-matrix of A obtained by removing n − k rows and the corresponding columns from A. Let ν1 ≤ ... ≤ νk be the eigenvalues of B. Then for any 1 ≤ r ≤ k, we have λr ≤ νr ≤ λ_{r+n−k}.

Let {νn}_{n=1}^{N′} be the eigenvalues of (1/W) ξ^t(w0), which is an N′ × N′ principal sub-matrix of I′(w0); then

$$\hat\gamma(w_0) = \frac{1}{T}\sum_{t=1}^T \ln|\xi^t(w_0)| = \frac{1}{T}\sum_{t=1}^T \ln\Big|W\cdot\frac{1}{W}\xi^t(w_0)\Big| = N'\ln W + \frac{1}{T}\sum_{t=1}^T\sum_{n=1}^{N'}\ln\nu_n$$

Theorem 3 gives the relation between νn and λn, defined above and in Section 5.3 as the n-th smallest eigenvalue of (1/W) ξ^t(w0) and of I′(w0), respectively. For sufficiently large N′, we can use νn to approximate λn.
Ignoring the eigenvalues of I′(w0) larger than λ_{N′} is reasonable when estimating γ(w0), since in general the majority of the eigenvalues of the Hessian of DNNs are close to zero with only a few large “outliers” (Pennington & Worah, 2018; Sagun et al., 2018), and so the smallest eigenvalues are the dominant terms in γ(w0). A specific bound on the eigenvalues remains an open question, though. In short, we have Σ_{n=1}^{N′} ln νn ≈ Σ_{n=1}^{N′} ln λn and consequently:

$$\frac{W}{N'}\hat\gamma(w_0) + W\ln\frac{1}{W} = \frac{W}{N'}\hat\gamma(w_0) - W\ln W = \frac{W}{N'}\big(\hat\gamma(w_0) - N'\ln W\big) = \frac{1}{T}\sum_{t=1}^T\frac{W}{N'}\sum_{n=1}^{N'}\ln\nu_n \approx \frac{1}{T}\sum_{t=1}^T\frac{W}{N'}\sum_{n=1}^{N'}\ln\lambda_n$$

Finally we have

$$\lim_{T\to\infty}\frac{1}{T}\sum_{t=1}^T\frac{W}{N'}\sum_{n=1}^{N'}\ln\lambda_n = \gamma(w_0)$$" }, { "heading": "D DETAILS OF CALCULATING THE METRICS IN SECTION 7.1", "text": "For the following metrics, we estimate by sampling a subset St from the full training set S for T times and averaging the results:

• Frobenius norm: ‖∇²_w L(S, w)‖²_F

• Spectral radius: ρ(∇²_w L(S, w))

• Ours: γ̂(w) = (1/T) Σ_{t=1}^T ln|ξ(St, w)|

For the Frobenius norm based metric, from Equation 2 we have:

$$\|\nabla_w^2 L(\mathcal{S}, w)\|_F^2 = \|I(w)\|_F^2 = \frac{1}{N}\sum_{(x,y)\in S}\sum_{i=1}^K\Big\|\big(\nabla_w[\ell_x(w_0)]_i\big)\big(\nabla_w[\ell_x(w_0)]_i\big)^T\Big\|_F^2$$

Similar to Equation 4, we approximate y by ỹ, and so

$$\|\nabla_w^2 L(\mathcal{S}, w)\|_F^2 \approx \frac{1}{N}\sum_{(x,y)\in S}\Big\|\big(\nabla_w[\ell_x(w_0)]_{\bar y}\big)\big(\nabla_w[\ell_x(w_0)]_{\bar y}\big)^T\Big\|_F^2$$

Summing over the entire Hessian matrix is too expensive, as there are W × W × N entries in total. We therefore estimate the quantity by first sampling a subset St ⊂ S and then sampling 100,000 entries of (∇w[ℓx(w0)]ȳ)(∇w[ℓx(w0)]ȳ)^T. We perform the estimation T times and average the results, similar to the approach used when computing γ̂(w).

Also, by Equation 2 and the approximation in Equation 4, the spectral radius of the Hessian is equivalent to the squared spectral norm of (1/√N) J_w[L̃(S, w)]. We again estimate by sampling St for T times (with irrelevant scaling constants dropped), i.e., via (1/T) Σ_t ‖J_w[L̃(St, w)]‖²₂.

Furthermore, in all our experiments that involve sampling St, we set |St| = N′ = T = 100.
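As an illustration of these sampled estimates, the following sketch reuses the per-sample Jacobian construction from the γ̂ sketch in Section 5.3; the function name and the exact dataset interface are our assumptions. It also uses the identity ‖gg^T‖²_F = ‖g‖⁴₂, which avoids sampling individual entries of the outer products:

import torch

def sampled_curvature_metrics(model, loss_fn, dataset, n_prime=100, trials=100):
    params = [p for p in model.parameters() if p.requires_grad]
    spectral, frobenius = [], []
    for _ in range(trials):
        idx = torch.randperm(len(dataset))[:n_prime]               # sample S_t
        rows = []
        for i in idx.tolist():
            x, y = dataset[i]
            loss = loss_fn(model(x.unsqueeze(0)), y)
            grads = torch.autograd.grad(loss, params)
            rows.append(torch.cat([g.reshape(-1) for g in grads]))
        J = torch.stack(rows)                                      # Jacobian of per-sample losses on S_t
        spectral.append(torch.linalg.matrix_norm(J, ord=2) ** 2)   # squared spectral norm of J
        frobenius.append(J.pow(2).sum(dim=1).pow(2).mean())        # mean of ||g||_2^4 = ||g g^T||_F^2
    return torch.stack(spectral).mean(), torch.stack(frobenius).mean()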
" }, { "heading": "E ARCHITECTURE AND TRAINING DETAILS IN SECTION 7", "text": "The architecture details are as follows:

• The plain CNN is a 6-layer convolutional neural network similar to the baseline in Lee et al. (2016), yet without the “mlpconv” layers (resulting in far fewer parameters). Specifically, the 6 layers have {64, 64, 128, 128, 192, 192} filters. We use a 3 × 3 kernel size and ReLU as the activation function. After the second and the fourth convolutional layers we insert a 2 × 2 max pooling operation. After the last convolutional layer, we apply global average pooling before the final softmax classifier.

• For ResNet-20 and WRN-28-2-B(3,3), we use the same architectures as in their original papers, with the only difference that we use pre-activation as in He et al. (2016b). This results in slightly stronger baselines than the models in their original papers.

• For DenseNet-BC-k=12 we use an architecture identical to the one used in the original paper.

The training details are:

• For the plain CNN, we initialize the weights according to the scheme in He et al. (2016a) and apply l2 regularization with a coefficient of 0.0001. We perform standard data augmentation, namely the scheme denoted 4-crop-f in Section 7.1. We use stochastic gradient descent with Nesterov momentum set to 0.9 and a batch size of 128. We train for 200 epochs in total, with the learning rate initially set to 0.01 and then divided by 10 at epochs 100 and 150.

• For ResNet-20, WRN-28-2-B(3,3) and DenseNet-BC-k=12, we use the same hyper-parameters, training schemes, data augmentation schemes, optimization methods, etc., as those in their respective original papers." } ]
2019
null
SP:cf0aed09560d12961f718e915b72a2c5403c4e4a
[ "This work proposes a simple pruning method that dynamically sparsifies the network during training. This is achieved by performing at fixed intervals magnitude based pruning for either individual weights or entire neurons. While similar methods have been explored before, this work proposes a slight twist; instead of updating the weights of the model by following the gradient of the parameters of the dense model, they update the parameters of the dense model according to the gradients of the sparse model. Essentially, this corresponds to a variant of the straight-through estimator [1], where in the forward pass we evaluate the compressed model, but in the backward pass we update the model as if the compression didn’t take place. The authors argue that this process allows for ``feedback” in the pruning mechanism, as the pruned weights still receive gradient updates hence they can be ``re-activated” at later stages of training. They then provide a convergence analysis about the optimization procedure with such a gradient, and show that for strongly convex functions the method converges in the vicinity of the global optimum, whereas for non-convex functions it converges to the neighbourhood of a stationary point. Finally, the authors perform extensive experimental evaluation and show that their method is better than the baselines that they considered.", "In this paper, the authors proposed a novel model compression method that uses error feedbacks to dynamically allocates sparsity patterns during training. The authors provided a systematic overview of a good number of existing model compression algorithms depending on the relative order of pruning and training processes. The effectiveness of the proposed algorithm is illustrated by comparing its generalization performance with 6 existing algorithms (and their variants) with two standard datasets and various networks of standard structures. The authors also showed the convergence rate and the fundamental limit of the proposed algorithm with two theorems. " ]
Deep neural networks often have millions of parameters. This can hinder their deployment to low-end devices, not only due to high memory requirements but also because of increased latency at inference. We propose a novel model compression method that generates a sparse trained model without additional overhead: by (i) allowing dynamic allocation of the sparsity pattern and (ii) incorporating a feedback signal to reactivate prematurely pruned weights, we obtain a performant sparse model in a single training pass (retraining is not needed, but can further improve the performance). We evaluate our method on CIFAR-10 and ImageNet, and show that the obtained sparse models can reach the state-of-the-art performance of dense models. Moreover, their performance surpasses that of models generated by all previously proposed pruning schemes.
[ { "affiliations": [], "name": "Sebastian U. Stich" } ]
[ { "authors": [ "Guillaume Bellec", "David Kappel", "Wolfgang Maass", "Robert Legenstein" ], "title": "Deep rewiring: Training very sparse deep networks", "venue": "In ICLR - International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Yoshua Bengio", "Nicholas Léonard", "Aaron Courville" ], "title": "Estimating or propagating gradients through stochastic neurons for conditional computation", "venue": "arXiv preprint arXiv:1308.3432,", "year": 2013 }, { "authors": [ "Miguel A Carreira-Perpinán", "Yerlan Idelbayev" ], "title": "learning-compression” algorithms for neural net pruning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Matthieu Courbariaux", "Yoshua Bengio", "Jean-Pierre David" ], "title": "Binaryconnect: Training deep neural networks with binary weights during propagations", "venue": "In NeurIPS - Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Xiaoliang Dai", "Hongxu Yin", "Niraj Jha" ], "title": "Nest: A neural network synthesis tool based on a grow-and-prune paradigm", "venue": "IEEE Transactions on Computers,", "year": 2019 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L.-J. Li", "K. Li", "L. Fei-Fei" ], "title": "ImageNet: A large-scale hierarchical image database", "venue": "In CVPR09,", "year": 2009 }, { "authors": [ "Tim Dettmers", "Luke Zettlemoyer" ], "title": "Sparse networks from scratch: Faster training without losing performance", "venue": "arXiv preprint arXiv:1907.04840,", "year": 2019 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks", "venue": "In ICLR - International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Jonathan Frankle", "Gintare Karolina Dziugaite", "Daniel M Roy", "Michael Carbin" ], "title": "Stabilizing the lottery ticket hypothesis", "venue": "arXiv preprint arXiv:1903.01611,", "year": 2019 }, { "authors": [ "Yarin Gal", "Jiri Hron", "Alex Kendall" ], "title": "Concrete dropout", "venue": "In NeurIPS - Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Trevor Gale", "Erich Elsen", "Sara Hooker" ], "title": "The state of sparsity in deep neural networks", "venue": "arXiv preprint arXiv:1902.09574,", "year": 2019 }, { "authors": [ "Priya Goyal", "Piotr Dollár", "Ross Girshick", "Pieter Noordhuis", "Lukasz Wesolowski", "Aapo Kyrola", "Andrew Tulloch", "Yangqing Jia", "Kaiming He" ], "title": "Accurate, large minibatch SGD: Training ImageNet in 1 hour", "venue": "arXiv preprint arXiv:1706.02677,", "year": 2017 }, { "authors": [ "Yiwen Guo", "Anbang Yao", "Yurong Chen" ], "title": "Dynamic network surgery for efficient DNNs", "venue": "In NeurIPS - Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Song Han", "Jeff Pool", "John Tran", "William Dally" ], "title": "Learning both weights and connections for efficient neural network", "venue": "In NeurIPS - Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Song Han", "Jeff Pool", "Sharan Narang", "Huizi Mao", "Shijian Tang", "Erich Elsen", "Bryan Catanzaro", "John Tran", "William J Dally" ], "title": "DSD: regularizing deep neural networks with dense-sparse-dense training flow", "venue": "In ICLR - International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian 
Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Identity mappings in deep residual networks", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Yang He", "Guoliang Kang", "Xuanyi Dong", "Yanwei Fu", "Yi Yang" ], "title": "Soft filter pruning for accelerating deep convolutional neural networks", "venue": "In International Joint Conference on Artificial Intelligence (IJCAI),", "year": 2018 }, { "authors": [ "Yang He", "Ping Liu", "Ziwei Wang", "Zhilan Hu", "Yi Yang" ], "title": "Filter pruning via geometric median for deep convolutional neural networks acceleration", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Gao Huang", "Yu Sun", "Zhuang Liu", "Daniel Sedra", "Kilian Q Weinberger" ], "title": "Deep networks with stochastic depth", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Sai Praneeth Karimireddy", "Quentin Rebjock", "Sebastian Stich", "Martin Jaggi" ], "title": "Error feedback fixes SignSGD and other gradient compression schemes", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Simon Lacoste-Julien", "Mark W. Schmidt", "Francis R. Bach" ], "title": "A simpler approach to obtaining an O(1/t) convergence rate for the projected stochastic subgradient method", "venue": "arXiv preprint arXiv:1212.2002,", "year": 2012 }, { "authors": [ "Yann LeCun", "John S Denker", "Sara A Solla" ], "title": "Optimal brain damage", "venue": "In NeurIPS - Advances in Neural Information Processing Systems,", "year": 1990 }, { "authors": [ "Namhoon Lee", "Thalaiyasingam Ajanthan", "Philip HS Torr" ], "title": "SNIP: Single-shot network pruning based on connection sensitivity", "venue": "In ICLR - International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Hao Li", "Soham De", "Zheng Xu", "Christoph Studer", "Hanan Samet", "Tom Goldstein" ], "title": "Training quantized nets: A deeper understanding", "venue": "In NeurIPS - Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Zhuang Liu", "Mingjie Sun", "Tinghui Zhou", "Gao Huang", "Trevor Darrell" ], "title": "Rethinking the value of network pruning", "venue": "In ICLR - International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Christos Louizos", "Max Welling", "Diederik P. 
Kingma" ], "title": "Learning sparse neural networks through l_0 regularization", "venue": "In ICLR - International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Sangkug Lym", "Esha Choukse", "Siavash Zangeneh", "Wei Wen", "Mattan Erez", "Sujay Shanghavi" ], "title": "Prunetrain: Gradual structured pruning from scratch for faster neural network training", "venue": "In International Conference for High Performance Computing, Networking, Storage and Analysis (SC),", "year": 2019 }, { "authors": [ "Decebal Constantin Mocanu", "Elena Mocanu", "Peter Stone", "Phuong H Nguyen", "Madeleine Gibescu", "Antonio Liotta" ], "title": "Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science", "venue": "Nature communications,", "year": 2018 }, { "authors": [ "Dmitry Molchanov", "Arsenii Ashukha", "Dmitry Vetrov" ], "title": "Variational dropout sparsifies deep neural networks", "venue": "In ICML - International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Hesham Mostafa", "Xin Wang" ], "title": "Parameter efficient training of deep convolutional neural networks by dynamic sparse reparameterization", "venue": "In ICML - International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Michael C Mozer", "Paul Smolensky" ], "title": "Skeletonization: A technique for trimming the fat from a network via relevance assessment", "venue": "In NeurIPS - Advances in Neural Information Processing Systems,", "year": 1989 }, { "authors": [ "Sharan Narang", "Erich Elsen", "Gregory Diamos", "Shubho Sengupta" ], "title": "Exploring sparsity in recurrent neural networks", "venue": "In ICLR - International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Kirill Neklyudov", "Dmitry Molchanov", "Arsenii Ashukha", "Dmitry P Vetrov" ], "title": "Structured bayesian pruning via log-normal multiplicative noise", "venue": "In NeurIPS - Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in PyTorch", "venue": null, "year": 2017 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International Journal of Computer Vision,", "year": 2015 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In ICLR - International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Suraj Srinivas", "Akshayvarun Subramanya", "R Venkatesh Babu" ], "title": "Training sparse neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2017 }, { "authors": [ "Sebastian U. Stich", "Sai P. 
Karimireddy" ], "title": "The error-feedback framework: Better rates for SGD with delayed gradients and compressed communication", "venue": "arXiv preprint arXiv:1909.05350,", "year": 2019 }, { "authors": [ "Sebastian U Stich", "Jean-Baptiste Cordonnier", "Martin Jaggi" ], "title": "Sparsified SGD with memory", "venue": "NeurIPS - Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Jianbo Ye", "Xin Lu", "Zhe Lin", "James Z. Wang" ], "title": "Rethinking the smaller-norm-less-informative assumption in channel pruning of convolution layers", "venue": "In ICLR - International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Penghang Yin", "Jiancheng Lyu", "Shuai Zhang", "Stanley Osher", "Yingyong Qi", "Jack Xin" ], "title": "Understanding straight-through estimator in training activation quantized neural nets", "venue": "In ICLR - International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Zhonghui You", "Kun Yan", "Jinmian Ye", "Meng Ma", "Ping Wang" ], "title": "Gate decorator: Global filter pruning method for accelerating deep convolutional neural networks", "venue": "In NeurIPS - Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "In BMVC,", "year": 2016 }, { "authors": [ "Michael Zhu", "Suyog Gupta" ], "title": "To prune, or not to prune: exploring the efficacy of pruning for model compression", "venue": "arXiv preprint arXiv:1710.01878,", "year": 2017 }, { "authors": [ "Liu" ], "title": "DPF can provide an alternative training scheme to compress the model to an extremely high compression ratio without sacrificing the test accuracy, where most of the existing methods still meet severe quality loss (including Frankle & Carbin", "venue": "Frankle et al", "year": 2019 }, { "authors": [ "e.g. Lacoste-Julien" ], "title": "Combining these two estimates (with wt = wT ) shows the claim. Furthermore, we also claimed that also the dense model converges to a neighborhood of optimal solution. This follows by L-smoothness", "venue": null, "year": 2012 } ]
[ { "heading": "1 INTRODUCTION", "text": "Highly overparametrized deep neural networks show impressive results on machine learning tasks. However, with the increase in model size comes also the demand for memory and computer power at inference stage—two resources that are scarcely available on low-end devices. Pruning techniques have been successfully applied to remove a significant fraction of the network weights while preserving test accuracy attained by dense models. In some cases, the generalization of compressed networks has even been found to be better than with full models (Han et al., 2015; 2017; Mocanu et al., 2018).\nThe sparsity of a network is the number of weights that are identically zero, and can be obtained by applying a sparsity mask on the weights. There are several different approaches to find sparse models. For instance, one-shot pruning strategies find a suitable sparsity mask by inspecting the weights of a pretrained network (Mozer & Smolensky, 1989; LeCun et al., 1990; Han et al., 2017). While these algorithms achieve a substantial size reduction of the network with little degradation in accuracy, they are computationally expensive (training and refinement on the dense model), and they are outperformed by algorithms that explore different sparsity masks instead of a single one. In dynamic pruning methods, the sparsity mask is readjusted during training according to different criteria (Mostafa & Wang, 2019; Mocanu et al., 2018). However, these methods require fine-tuning of many hyperparameters.\nWe propose a new pruning approach to obtain sparse neural networks with state-of-the-art test accuracy. Our compression scheme uses a new saliency criterion that identifies important weights in the network throughout training to propose candidate masks. As a key feature, our algorithm not only evolves the pruned sparse model alone, but jointly also a (closely related) dense model that is used in a natural way to correct for pruning errors during training. This results in better generalization properties on a wide variety of tasks, since the simplicity of the scheme allows us further to study it from a theoretical point of view, and to provide further insights and interpretation. We do not require\ntuning of additional hyperparameters, and no retraining of the sparse model is needed (though can further improve performance)." }, { "heading": "Contributions.", "text": "• A novel dynamic pruning scheme, that incorporates an error feedback in a natural way Sec. 3\nand finds a trained sparse model in one training pass. Sec. 5 • We demonstrate state-of-the-art performance (in accuracy and sparsity), Sec. 5\nourperforming all previously proposed pruning schemes. Sec. 5 • We complement our results by an ablation study that provides further insights. Sec. 6\nand convergence analysis for convex and non-convex objectives. Sec. 4" }, { "heading": "2 RELATED WORK", "text": "Previous works on obtaining pruned networks can (loosely) be divided into three main categories.\nPruning after training. Training approaches to obtain sparse networks usually include a three stage pipeline—training of a dense model, one-shot pruning and fine-tuning—e.g., (Han et al., 2015). Their results (i.e., moderate sparsity level with minor quality loss) made them the standard method for network pruning and led to several variations (Guo et al., 2016; Carreira-Perpinán & Idelbayev, 2018).\nPruning during training. 
Zhu & Gupta (2017) propose the use of magnitude-based pruning and to gradually increase the sparsity ratio while training the model from scratch. A pruning schedule determines when the new masks are computed (extending and simplifying (Narang et al., 2017)). He et al. (2018) (SFP) prune entire filters of the model at the end of each epoch, but allow the pruned filters to be updated when training the model. Deep Rewiring (DeepR) (Bellec et al., 2018) allows for even more adaptivity by performing pruning and regrowth decisions periodically. This approach is computationally expensive and challenging to apply to large networks and datasets. Sparse evolutionary training (SET) (Mocanu et al., 2018) simplifies prune–regrowth cycles by using heuristics for random growth at the end of each training epoch and NeST (Dai et al., 2019) by inspecting gradient magnitudes.\nDynamic Sparse Reparameterization (DSR) (Mostafa & Wang, 2019) implements a prune– redistribute–regrowth cycle where target sparsity levels are redistributed among layers, based on loss gradients (in contrast to SET, which uses fixed, manually configured, sparsity levels). Sparse Momentum (SM) (Dettmers & Zettlemoyer, 2019) follows the same cycle but instead using the mean momentum magnitude of each layer during the redistribute phase. SM outperforms DSR on ImageNet for unstructured pruning by a small margin but has no performance difference on CIFAR experiments. Our approach also falls in the dynamic category but we use error compensation mechanisms instead of hand crafted redistribute–regrowth cycles.\nPruning before training. Recently—spurred by the lottery ticket hypothesis (LT) (Frankle & Carbin, 2019)—methods which try to find a sparse mask that can be trained from scratch have attracted increased interest. For instance, Lee et al. (2019) propose SNIP to find a pruning mask by inspecting connection sensitivities and identifying structurally important connections in the network for a given task. Pruning is applied at initialization, and the sparsity mask remains fixed throughout training.\nNote that Frankle & Carbin (2019); Frankle et al. (2019) do not propose an efficient pruning scheme to find the mask, instead they rely on iterative pruning, repeated for several full training passes.\nFurther Approaches. Srinivas et al. (2017); Louizos et al. (2018) learn gating variables (e.g. through `0 regularization) that minimize the number of nonzero weights, recent parallel work studies filter pruning for pre-trained models (You et al., 2019). Gal et al. (2017); Neklyudov et al. (2017); Molchanov et al. (2017) prune from Bayesian perspectives to learn dropout probabilities during training to prune and sparsify networks as dropout weight probabilities reach 1. Gale et al. (2019) extensively study recent unstructured pruning methods on large-scale learning tasks, and find that complex techniques (Molchanov et al., 2017; Louizos et al., 2018) perform inconsistently. Simple magnitude pruning approaches achieve comparable or better results (Zhu & Gupta, 2017)." }, { "heading": "3 METHOD", "text": "We consider the training of a non-convex loss function f : Rd → R. We assume for a weight vector w ∈ Rd to have access to a stochastic gradient g(w) ∈ Rd such that E[g(w)] = ∇f(w). This corresponds to the standard machine learning setting with g(w) representing a (mini-batch) gradient of one (or several) components of the loss function. 
Stochastic Gradient Descent (SGD) computes a sequence of iterates by the update rule

$$w_{t+1} := w_t - \gamma_t g(w_t), \qquad \text{(SGD)}$$

for some learning rate γt. To obtain a sparse model, a general approach is to prune some of the weights of wt, i.e., to set them to zero. Such pruning can be implemented by applying a mask m ∈ {0, 1}^d to the weights, resulting in a sparse model w̃t := m ⊙ wt, where ⊙ denotes the entry-wise (Hadamard) product. The mask could potentially depend on the weights wt (e.g., smallest-magnitude pruning), or depend on t (e.g., the sparsity is incremented over time).

Before we introduce our proposed dynamic pruning scheme, we formalize the three main existing types of pruning methodologies (summarized in Figure 1). These approaches differ in the way the mask is computed, and in the moment when it is applied.¹

Pruning before training. A mask m0 (depending on e.g. the initialization w0 or the network architecture of f) is applied, and (SGD) is used for training on the resulting subnetwork f̃(w) := f(m0 ⊙ w), with the advantage that only the weights of the pruned model need to be stored and updated², and that by training with SGD a local minimum of the subnetwork f̃ (but not of f—the original training target) can be reached. In practice, however, it remains a challenge to efficiently determine a good mask m0, and a wrongly chosen mask at the beginning strongly impacts the performance.

¹The methods introduced in Section 2 typically follow one of these broad themes loosely, with slight variations in detail. For the sake of clarity we omit an overly technical and detailed discussion here.

²When training on f̃(w), it suffices to access stochastic gradients of f̃(w), denoted by g̃(w), which can potentially be computed more cheaply than by naively applying the mask to g(w) (note g̃(w) = m0 ⊙ g(w)).

Pruning after training (one-shot pruning). A dense model is trained, and pruning is applied to the trained model wT. As the pruned model w̃T = mT ⊙ wT is very likely not at a local optimum of f, fine-tuning (retraining with the fixed mask mT) is necessary to improve performance.

Pruning during training (incremental and dynamic pruning). Dynamic schemes change the mask mt every (few) iterations based on observations during training (i.e. by observing the weights and stochastic gradients). Incremental schemes monotonically increase the sparsity pattern; fully dynamic schemes can also reactivate previously pruned weights. In contrast to previous dynamic schemes that relied on elaborate heuristics to adapt the mask mt, we propose a simpler approach:

Dynamic pruning with feedback (DPF, Algorithm 1). Our scheme evaluates a stochastic gradient at the pruned model w̃t = mt ⊙ wt and applies it to the (simultaneously maintained) dense model wt:

$$w_{t+1} := w_t - \gamma_t g(m_t \odot w_t) = w_t - \gamma_t g(\tilde w_t). \qquad \text{(DPF)}$$

Applying the gradient to the full model allows recovery from “errors”, i.e. prematurely masking out important weights: when the accumulated gradient updates from the following steps drastically change a specific weight, it can become activated again (in contrast to incremental pruning approaches that have to stick to sub-optimal decisions). For illustration, observe that (DPF) can equivalently be written as w_{t+1} = w_t − γ_t g(w_t + e_t), where e_t := w̃_t − w_t is the error produced by the compression.
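To make the two equivalent views concrete, here is a minimal sketch of a single DPF step on a raw weight tensor (magnitude_mask and dpf_step are our illustrative names; the actual implementation details are given in Appendix A.1):

import torch

def magnitude_mask(w, sparsity):
    # Keeps the largest-magnitude entries and zeros out (roughly, ties aside)
    # a `sparsity` fraction of the weights.
    k = int(sparsity * w.numel())
    if k == 0:
        return torch.ones_like(w)
    threshold = w.abs().flatten().kthvalue(k).values
    return (w.abs() > threshold).float()

def dpf_step(w, stochastic_grad, sparsity, lr):
    m = magnitude_mask(w, sparsity)
    w_tilde = m * w                  # pruned model used in the forward/backward pass
    e = w_tilde - w                  # compression error e_t
    g = stochastic_grad(w + e)       # g(w_t + e_t) is exactly g(w_tilde_t): the feedback view
    return w - lr * g                # the dense weights absorb the update

Because the update lands on w rather than on w̃, a prematurely pruned coordinate keeps receiving gradient signal and can re-enter the mask at a later reparameterization step.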
This provides a different intuition of the behavior of (DPF), and connects it with the concept of error feedback (Stich et al., 2018; Karimireddy et al., 2019).³ We illustrate this principle in Figure 2 and give detailed pseudocode and further implementation details in Appendix A.1. The DPF scheme can also be seen as an instance of a more general class of schemes that apply (arbitrary) perturbed gradient updates to the dense model. For instance, straight-through gradient estimators (Bengio et al., 2013), which are used to empirically simplify backpropagation, can be seen as such perturbations. Our stronger assumptions on the structure of the perturbation allow us to derive non-asymptotic convergence rates in the next section, though our analysis could also be extended to the setting in (Yin et al., 2019) if the perturbations can be bounded.

³Our variable wt corresponds to x̃t in the notation of Karimireddy et al. (2019). Their error-fixed SGD algorithm evaluates gradients at perturbed iterates xt := x̃t + et, which correspond precisely to w̃t = wt + et in our notation. This shows the connection between these two methods." }, { "heading": "4 CONVERGENCE ANALYSIS", "text": "We now present convergence guarantees for (DPF). For the purposes of deriving theoretical guarantees, we assume that the training objective is smooth, that is, ‖∇f(w) − ∇f(v)‖ ≤ L‖w − v‖, ∀w, v ∈ Rd, for a constant L > 0, and that the stochastic gradients are bounded, E‖g(w̃t)‖² ≤ G², for every pruned model w̃t = mt(wt) ⊙ wt. The quality of this pruning is defined via the parameter δt ∈ [0, 1] such that

$$\delta_t := \|w_t - \tilde w_t\|^2 \,/\, \|w_t\|^2. \qquad (1)$$

Pruning without information loss corresponds to w̃t = wt, i.e., δt = 0, and in general δt ≤ 1.

Convergence on Convex Functions. We first consider the case when f is in addition µ-strongly convex, that is, 〈∇f(v), w − v〉 ≤ f(w) − f(v) − (µ/2)‖w − v‖², ∀w, v ∈ Rd. While it is clear that this assumption does not apply to neural networks, it eases the presentation, as strongly convex functions have a unique (global) minimizer w⋆ := arg min_{w∈Rd} f(w).

Theorem 4.1. Let f be µ-strongly convex and the learning rates be given as γt = 4/(µ(t+2)). Then for a pruned model ũ chosen at random from the iterates {w̃0, . . . , w̃T} of DPF, concretely ũ = w̃t with probability pt = 2(t+1)/((T+1)(T+2)), it holds—in expectation over the stochasticity and the selection of ũ—that:

$$\mathbb{E} f(\tilde u) - f(w^\star) = O\Big(\frac{G^2}{\mu T} + L\,\mathbb{E}\big[\delta_t\|w_t\|^2\big]\Big). \qquad (2)$$

The rightmost term in (2) measures the average quality of the pruning. However, unless δt → 0 or ‖wt‖ → 0 for t → ∞, the error term never completely vanishes, meaning that the method converges only to a neighborhood of the optimal solution (this holds not only for the pruned model, but also for the jointly maintained dense model, as we will show in the appendix). This behavior is expected, as the globally optimal model w⋆ might be dense and cannot be approximated well by a sparse model.

For one-shot methods that only prune the final (SGD) iterate wT at the end, we have instead:

$$\mathbb{E} f(\tilde w_T) - f(w^\star) \le 2\,\mathbb{E}\big(f(w_T) - f(w^\star)\big) + L\,\delta_T\,\mathbb{E}\|w_T\|^2 = O\Big(\frac{L G^2}{\mu^2 T} + L\,\mathbb{E}\big[\delta_T\|w_T\|^2\big]\Big),$$

as we show in the appendix. First, we see from this expression that the estimate is very sensitive to δT and wT, i.e. the quality of the pruning of the final model. This could be better or worse than the average pruning quality over all iterates. Moreover, one also loses a factor of the condition number L/µ in the asymptotically decreasing term, compared to (2). This is due to the fact that standard convergence analysis only achieves optimal rates for an average of the iterates (but not the last one). This shows a slight theoretical advantage of DPF over rounding at the end.
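For reference, the pruning-quality parameter of eq. (1) is directly computable from the dense and pruned weight vectors; a one-function sketch (our naming):

import torch

def pruning_quality(w, w_tilde):
    # delta_t of eq. (1): squared relative pruning error, in [0, 1] for masking.
    return (w - w_tilde).pow(2).sum() / w.pow(2).sum()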
Convergence on Non-Convex Functions to Stationary Points. Secondly, we consider the case when f is a non-convex function and show convergence to (a neighborhood of) a stationary point.

Theorem 4.2. Let the learning rate be given as γ = c/√T, for c = √((f(w0) − f(w⋆))/(LG²)). Then for a pruned model ũ chosen uniformly at random from the iterates {w̃0, . . . , w̃T} of DPF, concretely ũ := w̃t with probability pt = 1/(T+1), it holds—in expectation over the stochasticity and the selection of ũ—that:

$$\mathbb{E}\|\nabla f(\tilde u)\|^2 = O\Big(\frac{\sqrt{L\,(f(w_0) - f(w^\star))}\,G}{\sqrt{T}} + L^2\,\mathbb{E}\big[\delta_t\|w_t\|^2\big]\Big). \qquad (3)$$

Extension to Other Compression Schemes. So far we have focused on simple mask pruning schemes to achieve high model sparsity. However, the pruning scheme in Algorithm 1 could be replaced by an arbitrary compressor C : Rd → Rd, i.e., w̃t = C(wt). Our analysis extends to compressors as defined e.g. in (Karimireddy et al., 2019; Stich & Karimireddy, 2019), whose quality is also measured in terms of the δt parameters as in (1). For example, if our objective were not to obtain a sparse model, but to produce a quantized neural network where inference could be computed faster on low-precision numbers, then we could define C as a quantization compressor. One variant of this approach is implemented in the Binary Connect (BC) algorithm (Courbariaux et al., 2015) with prominent results; see also (Li et al., 2017) for further insights and discussion." }, { "heading": "5 EXPERIMENTS", "text": "We evaluated DPF together with its competitors on a wide range of neural architectures and sparsity levels. DPF exhibits consistent and noticeable performance benefits over its competitors." }, { "heading": "5.1 EXPERIMENTAL SETUP", "text": "Datasets. We evaluated DPF on two image classification benchmarks: (1) CIFAR-10 (Krizhevsky & Hinton, 2009) (50K/10K training/test samples with 10 classes), and (2) ImageNet (Russakovsky et al., 2015) (1.28M/50K training/validation samples with 1000 classes). We adopted the standard data augmentation and preprocessing scheme from He et al. (2016a); Huang et al. (2016); for further details refer to Appendix A.2.

Models. Following the common experimental setting in related work on network pruning (Liu et al., 2019; Gale et al., 2019; Dettmers & Zettlemoyer, 2019; Mostafa & Wang, 2019), our main experiments focus on ResNet (He et al., 2016a) and WideResNet (Zagoruyko & Komodakis, 2016). However, DPF can be effectively extended to other neural architectures, e.g., VGG (Simonyan & Zisserman, 2015) and DenseNet (Huang et al., 2017). We followed the common definitions in He et al. (2016a); Zagoruyko & Komodakis (2016) and use ResNet-a and WideResNet-a-b to denote a neural network with a layers and width factor b.

Baselines. We considered the state-of-the-art model compression methods presented in the table below as our strongest competitors. We omit the comparison to other dynamic reparameterization methods, as DSR can outperform DeepR (Bellec et al., 2018) and SET (Mocanu et al., 2018) by a noticeable margin (Mostafa & Wang, 2019).

Scheme | Reference | Pruning | How the mask(s) are found
Lottery Ticket (LT) | 2019, FDRC | before training | 10–30 successive rounds of (full training + pruning).
SNIP | 2019, LAT | before training | By inspecting properties/sensitivity of the network.
One-shot + fine-tuning (One-shot P+FT) | 2015, HPDT | after training | Saliency criterion (prunes smallest weights).
Incremental pruning + fine-tuning (Incremental) | 2017, ZG | incremental | Saliency criterion. Sparsity is gradually incremented.
Dynamic Sparse Reparameterization (DSR) | 2019, MW | dynamic | Prune–redistribute–regrowth cycle.
Sparse Momentum (SM) | 2019, DZ | dynamic | Prune–redistribute–regrowth + mean momentum.
DPF | ours | dynamic | Reparameterization via error feedback.

Implementation of DPF. Compared to other dynamic reparameterization methods (e.g. DSR and SM) that introduce many extra hyperparameters, our method has trivial hyperparameter-tuning overhead.
We perform pruning across all neural network layers (no layer-wise pruning) using magnitude-based unstructured weight pruning (inherited from Han et al. (2015)). We found the best performance when updating the mask every 16 iterations (see also Table 11), and we keep this value fixed for all experiments (independent of the architecture or task).

Unlike our competitors, which may ignore some layers (e.g. the first convolution and downsampling layers in DSR), we applied DPF (as well as the One-shot P+FT and Incremental baselines) to all convolutional layers while keeping the last fully-connected layer⁴, biases and batch normalization layers dense. Lastly, our algorithm gradually increases the sparsity st of the mask from 0 to the desired sparsity using the same scheduling as in (Zhu & Gupta, 2017); see Appendix A.2.

⁴The last fully-connected layer normally makes up only a very small fraction of the total MACs, e.g. 0.05% for ResNet-50 on ImageNet and 0.0006% for WideResNet-28-2 on CIFAR-10.

Training schedules. For all competitors, we adapted their open-sourced code and applied a consistent (and standard) training scheme across the different methods to ensure a fair comparison. Following the standard training setup for CIFAR-10, we trained ResNet-a for 300 epochs and decayed the learning rate by 10 after accessing 50% and 75% of the total training samples (He et al., 2016a; Huang et al., 2017); and we trained WideResNet-a-b as in Zagoruyko & Komodakis (2016) for 200 epochs, decaying the learning rate by 5 after accessing 30%, 60% and 80% of the total training samples. For ImageNet training, we used the training scheme of (Goyal et al., 2017) for 90 epochs and decayed the learning rate by 10 at epochs 30, 60 and 80. For all datasets and models, we used mini-batch SGD with Nesterov momentum (factor 0.9) with a fine-tuned learning rate for DPF. We reused the tuned (or recommended) hyperparameters for our baselines (DSR and SM), and fine-tuned the optimizer and learning rate for One-shot P+FT, Incremental and SNIP. The mini-batch size is fixed to 128 for CIFAR-10 and 1024 for ImageNet regardless of datasets, models and methods." }, { "heading": "5.2 EXPERIMENT RESULTS", "text": "CIFAR-10. Figure 3 shows a comparison of different methods for WideResNet-28-2. For low sparsity levels (e.g. 50%), DPF outperforms even the dense baseline, which is in line with the regularization properties of network pruning. Furthermore, DPF can prune the model up to a very high level (e.g. 99%) and still exhibit viable performance. This observation is also present in Table 1, where the results of training different state-of-the-art DNN architectures with higher sparsities are depicted. DPF shows reasonable performance even at extremely high sparsity levels on large models (e.g.
WideResNet-28-8 with sparsity ratio 99.9%), while other methods either suffer from a significant quality loss or even fail to converge.

Because simple model pruning techniques sometimes show better performance than complex techniques (Gale et al., 2019), we further consider these simple baselines in Table 2. While DPF outperforms them in almost all settings, it faces difficulties pruning smaller models to extremely high sparsity ratios (e.g. ResNet-20 with sparsity ratio 95%). This, however, seems to be an artifact of fine-tuning, as DPF with extra fine-tuning convincingly outperforms all other methods regardless of the network size. This comes as no surprise, as schemes like One-shot P+FT and Incremental do not benefit from extra fine-tuning, since it is already incorporated in their training procedure and they might become stuck in local minima. On the other hand, dynamic pruning methods, and in particular DPF, work on a different paradigm, and can still heavily benefit from fine-tuning.⁵

⁵Although a large fraction of the mask elements already converge during training (see e.g. Figure 4 below), not all mask elements converge. Thus DPF can still benefit from fine-tuning on the fixed sparsity mask.

Figure 13 (in Appendix A.3.4) depicts another interesting property of DPF. When we search for a subnetwork with a (small) predefined number of parameters for a fixed task, it is much better to run DPF on a large model (e.g. WideResNet-28-8) than on a smaller one (e.g. WideResNet-28-2). That is, DPF performs structural exploration more efficiently in larger parametric spaces.

ImageNet. We compared DPF to other dynamic reparameterization methods, as well as to the strong Incremental baseline, in Table 3. For both sparsity levels (80% and 90%), DPF shows a significant improvement in top-1 test accuracy with a fewer or equal number of parameters." }, { "heading": "6 DISCUSSION", "text": "Besides the theoretical guarantees, a straightforward practical benefit of DPF over one-shot pruning is its fine-tuning-free training process. Figure 12 in the appendix (Section A.3.3) demonstrates the trivial computational overhead (including the dynamic reparameterization cost) of using DPF to train a model from scratch. The small number of hyperparameters compared to other dynamic reparameterization methods (e.g. DSR and SM) is another advantage of DPF, and Figure 11 further studies how different setups of DPF impact the final performance. Notice also that for DPF, inference is done only on sparse models, an aspect that could be leveraged for more efficient computations.

Empirical difference between one-shot pruning and DPF. From Figure 2 one can see that DPF tends to oscillate among several local minima, whereas one-shot pruning, even with fine-tuning, converges to a single solution, which is not necessarily close to the optimum. We believe that the wider exploration of DPF helps to find better local minima (which can be even further improved by fine-tuning, as shown in Table 2). We empirically analyzed how drastically the mask changes between reparameterization steps, and how likely it is for a pruned weight to become non-zero in the later stages of training. Figure 4 shows at what stage of training each element of the final mask becomes fixed.
For each epoch, we report how many mask elements were flipped from this epoch onwards. As an example, we see that for sparsity ratio 95%, after epoch 157 (i.e. with 43 epochs left), only 5% of the mask elements were still changing. This suggests that, except for a small percentage of weights that keep oscillating, the mask converges early in training. In the final epochs, the algorithm keeps improving accuracy, but the masks are only being fine-tuned. A similar mask convergence behavior can be found in the Appendix (Figure 7) for training ResNet-20 on CIFAR-10.

DPF does not find a lottery ticket. The LT hypothesis (Frankle & Carbin, 2019) conjectures that for every desired sparsity level there exists a sparse submodel that can be trained to the same or better performance as the dense model. In Figure 5 we show that the mask found by DPF is not a LT, i.e., training the obtained sparse model from scratch does not recover the same performance. The (expensive) procedure proposed in Frankle & Carbin (2019); Frankle et al. (2019) finds different masks and achieves the same performance as DPF for mild sparsity levels, but DPF is much better for extremely sparse models (99% sparsity).

Extension to structured pruning. The current state-of-the-art dynamic reparameterization methods only consider unstructured weight pruning. Structured filter pruning⁶ is either ignored (Bellec et al., 2018; Mocanu et al., 2018; Dettmers & Zettlemoyer, 2019) or shown to be challenging (Mostafa & Wang, 2019) even for the CIFAR dataset. In Figure 6 below we present some preliminary results on CIFAR-10 to show that our DPF can also be applied to structured filter pruning schemes. DPF outperforms the current filter-norm-based state-of-the-art method for structured pruning (e.g. SFP (He et al., 2018)) by a noticeable margin. Figure 16 in Appendix A.4.3 displays the transition of the sparsity pattern (across different layers) for WideResNet-28-2 at different sparsity levels. DPF can be seen as a particular neural architecture search method, as it gradually learns to prune entire layers under the guidance of the feedback signal.

We followed the common experimental setup mentioned in Section 5 with an ℓ2-norm-based filter selection criterion for the structured pruning extension on CIFAR-10. We do believe a better filter selection scheme (Ye et al., 2018; He et al., 2019; Lym et al., 2019) could further improve the results, but we leave this exploration for future work.

⁶Lym et al. (2019) consider structured filter pruning and reconfigure the large (but sparse) model into a small (but dense) model during training for better training efficiency. Note that they perform model updates on a gradually reduced model space, which is completely different from the dynamic reparameterization methods (e.g. DSR, SM and our scheme) that perform reparameterization in the original (full) model space." }, { "heading": "ACKNOWLEDGEMENTS", "text": "We acknowledge funding from SNSF grant 200021_175796, as well as a Google Focused Research Award." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 ALGORITHM", "text": "Algorithm 1 The detailed training procedure of DPF.
input: uncompressed model weights w ∈ Rd, pruned weights w̃, mask m ∈ {0, 1}d; reparametrization period p; training iterations T.
1: for t = 0, . . . , T do
2:   if p | t then  ▷ trigger mask update, per default every p = 16 iterations
3:     compute mask m ← mt(wt)  ▷ by an arbitrary pruning scheme (e.g. unstructured magnitude pruning)
4:   end if
5:   w̃t ← m ⊙ wt  ▷ apply (precomputed) mask
6:   compute (mini-batch) gradient g(w̃t)  ▷ forward/backward pass with pruned weights w̃t
7:   wt+1 ← gradient update of g(w̃t) applied to wt  ▷ via an arbitrary optimizer (e.g. SGD with momentum)
8: end for
output: wT and w̃T
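A compact PyTorch rendering of this loop, for illustration only (train_dpf and compute_mask are our own names; the released implementation may differ, and bias/BatchNorm tensors would be excluded from masking as described in Appendix A.2):

import torch

def train_dpf(model, loss_fn, loader, optimizer, compute_mask, T, p=16):
    # Sketch of Algorithm 1: the mask is recomputed every p iterations; the
    # gradient is evaluated at the pruned weights but applied to the dense ones.
    params = [q for q in model.parameters() if q.requires_grad]
    masks = [torch.ones_like(q) for q in params]
    for t, (x, y) in zip(range(T), loader):
        if t % p == 0:                                    # trigger mask update
            masks = [compute_mask(q.detach()) for q in params]
        dense = [q.detach().clone() for q in params]      # remember dense weights w_t
        with torch.no_grad():
            for q, m in zip(params, masks):               # apply mask: w~_t = m (.) w_t
                q.mul_(m)
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()                   # forward/backward at pruned weights
        with torch.no_grad():
            for q, w in zip(params, dense):               # restore dense weights before update
                q.copy_(w)
        optimizer.step()                                  # optimizer step updates the dense model

Note that only the forward/backward pass touches the pruned weights; the optimizer state (e.g. momentum) lives entirely on the dense model.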
We trigger the mask update every p = 16 iterations (see also Figure 11) and we keep this parameter fixed throughout all experiments, independent of architecture or task.\nWe perform pruning across all neural network layers (no layer-wise pruning) using magnitude-based unstructured weight pruning (inherited from Han et al. (2015)). Pruning is applied to all convolutional layers while keeping the last fully-connected layer, biases and batch normalization layers dense.\nWe gradually increase the sparsity $s_t$ of the mask from 0 to the desired sparsity using the same scheduling as in Zhu & Gupta (2017); see Appendix A.2 below.\nA.2 IMPLEMENTATION DETAILS\nWe implemented our DPF in PyTorch (Paszke et al., 2017). All experiments were run on NVIDIA Tesla V100 GPUs. Sparse tensors in our implementation are represented as dense tensors multiplied by the corresponding binary masks.\nDatasets We evaluate all methods on the following standard image classification tasks:\n• Image classification for CIFAR-10 (Krizhevsky & Hinton, 2009). The dataset consists of a training set of 50K and a test set of 10K color images of 32 × 32 pixels, as well as 10 target classes. We adopt the standard data augmentation and preprocessing scheme (He et al., 2016a; Huang et al., 2016).\n• Image classification for ImageNet (Russakovsky et al., 2015). The ILSVRC 2012 classification dataset consists of 1.28 million images for training, and 50K for validation, with 1K target classes. We use ImageNet-1k (Deng et al., 2009) and adopt the same data preprocessing and augmentation scheme as in He et al. (2016a;b); Simonyan & Zisserman (2015).\nGradual Pruning Scheme For the Incremental baseline, we tuned their automated gradual pruning scheme $s_t = s_f + (s_i - s_f)\bigl(1 - \tfrac{t - t_0}{n \Delta t}\bigr)^3$ to gradually adjust the pruning sparsity ratio $s_t$ for $t \in \{t_0, \ldots, t_0 + n \Delta t\}$. That is, in our setup, we increased the sparsity from an initial ratio $s_i = 0$ to the desired target model sparsity ratio $s_f$, starting from training epoch $t_0 = 0$ with pruning frequency $\Delta t = 1$ epoch, over the $n$ epochs up to the second learning rate decay. In our experiments, we used this gradual pruning scheme across the different methods, except for One-shot P+FT, SNIP, and the methods (DSR, SM) that have their own fine-tuned gradual pruning scheme.
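The gradual sparsity schedule of Zhu & Gupta (2017) referenced above can be written as a small helper; this is our sketch of the stated formula, with default argument values chosen purely for illustration.

```python
def gradual_sparsity(t, s_i=0.0, s_f=0.95, t_0=0, n=150, dt=1):
    """Zhu & Gupta (2017) schedule:
    s_t = s_f + (s_i - s_f) * (1 - (t - t_0) / (n * dt))**3,
    held at s_i before t_0 and clamped to s_f once t passes t_0 + n*dt."""
    if t < t_0:
        return s_i
    progress = min(1.0, (t - t_0) / (n * dt))
    return s_f + (s_i - s_f) * (1.0 - progress) ** 3
```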
Hyper-parameters tuning procedure We grid-searched the optimal learning rate, starting from the range of {0.05, 0.10, 0.15, 0.20}. More precisely, we evaluate a linearly-spaced grid of learning rates. If the best performance was ever at one of the extremes of the grid, we would try new grid points so that the best-performing value was contained in the interior of the grid.\nWe trained most of the methods using mini-batch SGD with Nesterov momentum. For baselines involving a fine-tuning procedure (e.g. Table 2), we grid-searched for the optimal results by tuning the optimizers (i.e., mini-batch SGD with Nesterov momentum, or Adam) and the learning rates.\nThe optimal hyper-parameters for DPF The mini-batch size is fixed to 128 for CIFAR-10 and 1024 for ImageNet, regardless of the model.\nFor CIFAR-10, we trained ResNet-a and VGG for 300 epochs and decayed the learning rate by 10 when accessing 50% and 75% of the total training samples (He et al., 2016a; Huang et al., 2017); and we trained WideResNet-a-b as in Zagoruyko & Komodakis (2016) for 200 epochs and decayed the learning rate by 5 when accessing 30%, 60% and 80% of the total training samples. The optimal learning rates for ResNet-a, WideResNet-a-b and VGG are 0.2, 0.1 and 0.2 respectively; the corresponding weight decays are 1e−4, 5e−4 and 1e−4 respectively. For ImageNet training, we used the training scheme of Goyal et al. (2017) for 90 epochs, where we gradually warm up the learning rate from 0.1 to 0.4 and decay the learning rate by 10 at epochs 30, 60 and 80. The weight decay is 1e−4." }, { "heading": "A.3 ADDITIONAL RESULTS FOR UNSTRUCTURED PRUNING", "text": "" }, { "heading": "A.3.1 COMPLETE RESULTS OF UNSTRUCTURED PRUNING ON CIFAR-10", "text": "Table 4 details the numerical results for training SOTA DNNs on CIFAR-10. Some of these results also appear in Table 1 and Figure 3." }, { "heading": "A.3.2 UNDERSTANDING THE TRAINING DYNAMICS AND LOTTERY TICKET EFFECT", "text": "Figure 7 and Figure 8 complement Figure 4, and detail the training dynamics of DPF (e.g. the convergence of δ and of the masks) from another perspective. Figure 9 compares the training dynamics of DPF and Incremental (Zhu & Gupta, 2017), demonstrating that our scheme enables drastic reparameterization over the dense parameter space for better generalization performance.\nFigure 10, in addition to Figure 5 (in the main text), further studies the lottery ticket hypothesis under different training budgets (same epochs or same total flops). The results of DPF also demonstrate the importance of training-time structural exploration as well as the corresponding implicit regularization effects. Note that we do not want to question the importance of the weight initialization or the existence of the lottery ticket. Instead, our DPF provides an alternative training scheme to compress the model to an extremely high compression ratio without sacrificing test accuracy, whereas most existing methods still suffer severe quality loss (including Frankle & Carbin (2019); Liu et al. (2019); Frankle et al. (2019))." }, { "heading": "A.3.3 COMPUTATIONAL OVERHEAD AND THE IMPACT OF HYPER-PARAMETERS", "text": "In Figure 11, we evaluated the top-1 test accuracy of a compressed model trained by DPF under different setups, e.g., different reparameterization periods p, different sparsity ratios, different mini-batch sizes, as well as layer-wise vs. global pruning. We observe that the optimal reparameterization period (i.e., p = 16) is quite consistent over different sparsity ratios and different mini-batch sizes, and we used it in all our experiments. Global unstructured weight pruning (instead of layer-wise weight pruning) gives DPF more flexibility to perform dynamic parameter reallocation, and thus can provide better results, especially for more aggressive pruning sparsity ratios. However, we also need to note that, for the same number of compressed parameters (layer-wise or global unstructured weight pruning), using global pruning leads to a slight increase in the amount of MACs, as illustrated in Table 5.
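To make the layer-wise vs. global distinction above concrete, here is a hedged sketch (ours) of the two pruning scopes: for the same total number of kept weights, the global variant is free to reallocate sparsity across layers.

```python
import torch

def layerwise_masks(params, sparsity):
    """Each layer is pruned to the same sparsity independently."""
    masks = []
    for p in params:
        flat = p.detach().abs().flatten()
        k = int(sparsity * flat.numel())
        if k == 0:
            masks.append(torch.ones_like(p))
            continue
        thr = flat.kthvalue(k).values
        masks.append((p.detach().abs() > thr).float())
    return masks

def global_masks(params, sparsity):
    """One magnitude threshold shared across all layers, so per-layer
    sparsity may differ while the total number of kept weights matches."""
    flat = torch.cat([p.detach().abs().flatten() for p in params])
    k = int(sparsity * flat.numel())
    if k == 0:
        return [torch.ones_like(p) for p in params]
    thr = flat.kthvalue(k).values
    return [(p.detach().abs() > thr).float() for p in params]
```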
[Figure 11: three heatmaps; the numeric cell values are omitted here.]\n(a) Reparameterization period vs. sparsity ratio. The heatmap value is the test accuracy minus the best accuracy of the corresponding sparsity. We use mini-batch size 128 w/o layerwise reparameterization.\n(b) Reparameterization period vs. mini-batch size. The heatmap displays the test accuracy. We use sparsity ratio 0.80.\n(c) Reparameterization scheme (whether layerwise) vs. sparsity ratio. The heatmap displays the test accuracy. We use mini-batch size 128 w/ reparameterization period p = 16.\nFigure 11: Investigation of how the reparameterization period/scheme and mini-batch size impact the generalization performance (test top-1 accuracy), for dynamically training (and reparameterizing) a compressed model from scratch (ResNet-20 on CIFAR-10).\nFigure 12 demonstrates the trivial computational overhead of involving DPF to gradually train a compressed model (ResNet-50) from scratch (on ImageNet). Note that we evaluated the introduced reparameterization cost for dynamic pruning, which is independent of the (potential) significant system speedup brought by the extremely high model sparsity. Even though our work did not estimate the practical speedup, we believe we can achieve a training efficiency similar to the values reported in Dettmers & Zettlemoyer (2019).\nA.3.4 IMPLICIT NEURAL ARCHITECTURE SEARCH\nDPF can provide effective training-time structural exploration or even implicit neural network search. Figure 13 below demonstrates that for the same pruned model size (i.e. any point on the x-axis), we can always perform “architecture search” to get a better (in terms of generalization) pruned model from a larger network (e.g. WideResNet-28-8) rather than from a relatively small network (e.g. WideResNet-28-4)." }, { "heading": "A.4 ADDITIONAL RESULTS FOR STRUCTURED PRUNING", "text": "" }, { "heading": "A.4.1 GENERALIZATION PERFORMANCE FOR CIFAR-10", "text": "Figure 14 complements the results of structured pruning in the main text (Figure 6), and Table 6 details the numerical results presented in both Figure 6 and Figure 14." }, { "heading": "A.4.2 UNDERSTANDING THE LOTTERY TICKET EFFECT", "text": "Similar to the observations in Section 6 (for unstructured pruning), Figure 15 considers structured pruning, and again we find that DPF does not find a lottery ticket. The superior generalization performance of DPF cannot be explained by the found mask or the weight initialization scheme." },
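For the structured-pruning extension discussed in Appendix A.4, here is a hedged sketch (ours, not the paper's exact code) of the ℓ2-norm based filter selection criterion named in the text:

```python
import torch

def filter_mask_l2(conv_weight, sparsity):
    """Structured pruning for a conv layer of shape (out_ch, in_ch, kH, kW):
    rank output filters by their l2 norm and zero out whole filters."""
    norms = conv_weight.detach().flatten(1).norm(p=2, dim=1)  # one norm per filter
    n_prune = int(sparsity * norms.numel())
    mask = torch.ones_like(norms)
    if n_prune > 0:
        prune_idx = norms.argsort()[:n_prune]   # smallest-norm filters get pruned
        mask[prune_idx] = 0.0
    return mask.view(-1, 1, 1, 1)               # broadcastable over the filter dims
```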
{ "heading": "A.4.3 MODEL SPARSITY VISUALIZATION", "text": "Figure 16 below visualizes the model sparsity transition patterns for different model sparsity levels under structured pruning. We observe that, due to the presence of residual connections, DPF gradually learns to prune entire residual blocks." }, { "heading": "B MISSING PROOFS", "text": "In this section we present the proofs for the claims in Section 4.\nFirst, we give the proof for the strongly convex case. Here we follow Lacoste-Julien et al. (2012) for the general structure, combined with estimates from the error-feedback framework (Stich et al., 2018; Stich & Karimireddy, 2019) to control the pruning errors.\nProof of Theorem 4.1. By definition of (DPF), $w_{t+1} = w_t - \gamma_t g(\tilde{w}_t)$, hence\n$\mathbb{E}[\|w_{t+1} - w^\star\|^2 \mid w_t] = \|w_t - w^\star\|^2 - 2\gamma_t \langle w_t - w^\star, \mathbb{E} g(\tilde{w}_t) \rangle + \gamma_t^2 \mathbb{E}\|g(\tilde{w}_t)\|^2 \leq \|w_t - w^\star\|^2 - 2\gamma_t \langle w_t - w^\star, \nabla f(\tilde{w}_t) \rangle + \gamma_t^2 G^2 = \|w_t - w^\star\|^2 - 2\gamma_t \langle \tilde{w}_t - w^\star, \nabla f(\tilde{w}_t) \rangle + \gamma_t^2 G^2 + 2\gamma_t \langle \tilde{w}_t - w_t, \nabla f(\tilde{w}_t) \rangle.$\nBy strong convexity,\n$-2\langle \tilde{w}_t - w^\star, \nabla f(\tilde{w}_t) \rangle \leq -\mu \|\tilde{w}_t - w^\star\|^2 - 2(f(\tilde{w}_t) - f(w^\star)),$\nand with $\|a + b\|^2 \leq 2\|a\|^2 + 2\|b\|^2$ further\n$-\|\tilde{w}_t - w^\star\|^2 \leq -\tfrac{1}{2}\|w_t - w^\star\|^2 + \|w_t - \tilde{w}_t\|^2,$\nand with $\langle a, b \rangle \leq \tfrac{1}{2\alpha}\|a\|^2 + \tfrac{\alpha}{2}\|b\|^2$ for $a, b \in \mathbb{R}^d$ and $\alpha > 0$,\n$2\langle \tilde{w}_t - w_t, \nabla f(\tilde{w}_t) \rangle \leq 2L\|\tilde{w}_t - w_t\|^2 + \tfrac{1}{2L}\|\nabla f(\tilde{w}_t)\|^2 = 2L\|\tilde{w}_t - w_t\|^2 + \tfrac{1}{2L}\|\nabla f(\tilde{w}_t) - \nabla f(w^\star)\|^2 \leq 2L\|\tilde{w}_t - w_t\|^2 + f(\tilde{w}_t) - f(w^\star),$\nwhere the last inequality is a consequence of $L$-smoothness. Combining all these inequalities yields\n$\mathbb{E}[\|w_{t+1} - w^\star\|^2 \mid w_t] \leq \bigl(1 - \tfrac{\mu\gamma_t}{2}\bigr)\|w_t - w^\star\|^2 - \gamma_t (f(\tilde{w}_t) - f(w^\star)) + \gamma_t^2 G^2 + \gamma_t (2L + \mu)\|\tilde{w}_t - w_t\|^2 \leq \bigl(1 - \tfrac{\mu\gamma_t}{2}\bigr)\|w_t - w^\star\|^2 - \gamma_t (f(\tilde{w}_t) - f(w^\star)) + \gamma_t^2 G^2 + 3\gamma_t L \|\tilde{w}_t - w_t\|^2,$\nas $\mu \leq L$. Hence, by rearranging and multiplying with a weight $\lambda_t > 0$:\n$\lambda_t \mathbb{E}(f(\tilde{w}_t) - f(w^\star)) \leq \tfrac{\lambda_t(1 - \mu\gamma_t/2)}{\gamma_t} \mathbb{E}\|w_t - w^\star\|^2 - \tfrac{\lambda_t}{\gamma_t} \mathbb{E}\|w_{t+1} - w^\star\|^2 + \gamma_t \lambda_t G^2 + 3\lambda_t L\, \mathbb{E}\|\tilde{w}_t - w_t\|^2.$\nBy plugging in the learning rate $\gamma_t = \tfrac{4}{\mu(t+2)}$ and setting $\lambda_t = t + 1$ we obtain\n$\lambda_t \mathbb{E}(f(\tilde{w}_t) - f(w^\star)) \leq \tfrac{\mu}{4}\bigl[t(t+1)\,\mathbb{E}\|w_t - w^\star\|^2 - (t+1)(t+2)\,\mathbb{E}\|w_{t+1} - w^\star\|^2\bigr] + \tfrac{4(t+1)}{\mu(t+2)} G^2 + 3(t+1)L\, \mathbb{E}\|\tilde{w}_t - w_t\|^2.$\nBy summing these $\lambda_t$-weighted inequalities from $t = 0$ to $t = T$, we obtain a telescoping sum:\n$\sum_{t=0}^{T} \lambda_t \mathbb{E}(f(\tilde{w}_t) - f(w^\star)) \leq \tfrac{\mu}{4}\bigl[0 - (T+1)(T+2)\,\mathbb{E}\|w_{T+1} - w^\star\|^2\bigr] + \tfrac{4(T+1)}{\mu} G^2 + 3L \sum_{t=0}^{T} \lambda_t \mathbb{E}\|\tilde{w}_t - w_t\|^2 \leq \tfrac{4(T+1)}{\mu} G^2 + 3L \sum_{t=0}^{T} \lambda_t \mathbb{E}\|\tilde{w}_t - w_t\|^2.$\nHence, for $\Lambda_T := \sum_{t=0}^{T} \lambda_t = \tfrac{(T+1)(T+2)}{2}$,\n$\tfrac{1}{\Lambda_T} \sum_{t=0}^{T} \lambda_t \mathbb{E}(f(\tilde{w}_t) - f(w^\star)) \leq \tfrac{4(T+1)}{\mu \Lambda_T} G^2 + \tfrac{3L}{\Lambda_T} \sum_{t=0}^{T} \lambda_t \mathbb{E}\|\tilde{w}_t - w_t\|^2 = O\Bigl(\tfrac{G^2}{\mu T} + \tfrac{L}{\Lambda_T} \sum_{t=0}^{T} \lambda_t \mathbb{E}\|\tilde{w}_t - w_t\|^2\Bigr).$\nFinally, using $\|\tilde{w}_t - w_t\|^2 = \delta_t \|w_t\|^2$ by (1) shows the theorem.\nBefore giving the proof of Theorem 4.2, we first give a justification for the remark just below Theorem 4.1 on the one-shot pruning of the final iterate.\nBy $L$-smoothness and $\langle a, b \rangle \leq \tfrac{1}{2\alpha}\|a\|^2 + \tfrac{\alpha}{2}\|b\|^2$ for $a, b \in \mathbb{R}^d$ and $\alpha > 0$, we have for any iterate $w_t$:\n$f(\tilde{w}_t) - f(w^\star) \leq f(w_t) - f(w^\star) + \langle \nabla f(w_t), \tilde{w}_t - w_t \rangle + \tfrac{L}{2}\|\tilde{w}_t - w_t\|^2 \leq f(w_t) - f(w^\star) + \tfrac{1}{2L}\|\nabla f(w_t)\|^2 + L\|\tilde{w}_t - w_t\|^2 \leq 2(f(w_t) - f(w^\star)) + \delta_t L \|w_t\|^2. \quad (4)$\nFurthermore, again by $L$-smoothness,\n$f(w_T) - f(w^\star) \leq \tfrac{L}{2}\|w_T - w^\star\|^2 = O\bigl(\tfrac{L G^2}{\mu^2 T}\bigr)$\nas standard SGD analysis gives the estimate $\mathbb{E}\|w_T - w^\star\|^2 = O\bigl(\tfrac{G^2}{\mu^2 T}\bigr)$, see e.g. Lacoste-Julien et al. (2012). Combining these two estimates (with $w_t = w_T$) shows the claim.\nFurthermore, we also claimed that the dense model converges to a neighborhood of the optimal solution. This follows by $L$-smoothness and (4): for any fixed model $w_t$ we have the estimate (4), hence for a randomly chosen (dense) model $u$ (from the same distribution as the sparse model in Theorem 4.1) we have
$\mathbb{E} f(u) - f(w^\star) \overset{(4)}{\leq} 2\,\mathbb{E}[f(\tilde{u}) - f(w^\star)] + L\, \mathbb{E}[\delta_t \|w_t\|^2] \overset{\text{Thm 4.1}}{=} O\bigl(\tfrac{G^2}{\mu T} + L\, \mathbb{E}[\delta_t \|w_t\|^2]\bigr).$\nLastly, we give the proof of Theorem 4.2, following Karimireddy et al. (2019).\nProof of Theorem 4.2. By smoothness, and $\langle a, b \rangle \leq \tfrac{1}{2}\|a\|^2 + \tfrac{1}{2}\|b\|^2$ for $a, b \in \mathbb{R}^d$,\n$\mathbb{E}[f(w_{t+1}) \mid w_t] \leq f(w_t) - \gamma \langle \nabla f(w_t), \mathbb{E} g(\tilde{w}_t) \rangle + \gamma^2 \tfrac{L}{2} \mathbb{E}\|g(\tilde{w}_t)\|^2 \leq f(w_t) - \gamma \langle \nabla f(w_t), \nabla f(\tilde{w}_t) \rangle + \gamma^2 \tfrac{L G^2}{2} = f(w_t) - \gamma \langle \nabla f(\tilde{w}_t), \nabla f(\tilde{w}_t) \rangle + \gamma^2 \tfrac{L G^2}{2} + \gamma \langle \nabla f(\tilde{w}_t) - \nabla f(w_t), \nabla f(\tilde{w}_t) \rangle \leq f(w_t) - \gamma \|\nabla f(\tilde{w}_t)\|^2 + \gamma^2 \tfrac{L G^2}{2} + \tfrac{\gamma}{2}\|\nabla f(\tilde{w}_t) - \nabla f(w_t)\|^2 + \tfrac{\gamma}{2}\|\nabla f(\tilde{w}_t)\|^2 \leq f(w_t) - \tfrac{\gamma}{2}\|\nabla f(\tilde{w}_t)\|^2 + \gamma^2 \tfrac{L G^2}{2} + \tfrac{\gamma L^2}{2}\|w_t - \tilde{w}_t\|^2,$\nand by rearranging\n$\mathbb{E}\|\nabla f(\tilde{w}_t)\|^2 \leq \tfrac{2}{\gamma}\bigl[\mathbb{E} f(w_t) - \mathbb{E} f(w_{t+1})\bigr] + \gamma L G^2 + L^2 \mathbb{E}\|w_t - \tilde{w}_t\|^2.$\nSumming these inequalities from $t = 0$ to $t = T$ gives\n$\tfrac{1}{T+1} \sum_{t=0}^{T} \mathbb{E}\|\nabla f(\tilde{w}_t)\|^2 \leq \tfrac{2}{\gamma(T+1)} \sum_{t=0}^{T} \bigl(\mathbb{E}[f(w_t)] - \mathbb{E}[f(w_{t+1})]\bigr) + \gamma L G^2 + \tfrac{L^2}{T+1} \sum_{t=0}^{T} \mathbb{E}\|w_t - \tilde{w}_t\|^2 \leq \tfrac{2(f(w_0) - f(w^\star))}{\gamma(T+1)} + L\gamma G^2 + \tfrac{L^2}{T+1} \sum_{t=0}^{T} \mathbb{E}\|e_t\|^2.$\nFinally, using $\|\tilde{w}_t - w_t\|^2 = \delta_t \|w_t\|^2$ by (1), and plugging in the stepsize $\gamma$ that minimizes the right hand side, shows the claim." } ]
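As a sanity check on the proof of Theorem 4.1 (ours, not part of the original appendix), the coefficients produced by the choices $\gamma_t = 4/(\mu(t+2))$ and $\lambda_t = t+1$ indeed telescope:

```latex
% Our verification of the telescoping coefficients in Theorem 4.1,
% assuming \gamma_t = 4/(\mu(t+2)) and \lambda_t = t+1 as in the proof.
\begin{align*}
\frac{\lambda_t(1-\mu\gamma_t/2)}{\gamma_t}
  &= (t+1)\Bigl(1-\frac{2}{t+2}\Bigr)\frac{\mu(t+2)}{4}
   = (t+1)\cdot\frac{t}{t+2}\cdot\frac{\mu(t+2)}{4}
   = \frac{\mu}{4}\,t(t+1),\\
\frac{\lambda_t}{\gamma_t}
  &= (t+1)\,\frac{\mu(t+2)}{4}
   = \frac{\mu}{4}\,(t+1)(t+2).
\end{align*}
% The negative term at step t thus cancels the positive term at step t+1,
% leaving only the boundary terms after summing over t = 0, ..., T.
```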
2020
null
SP:27e5d87807bde38fd23e80517608417aaaf724f3
[ "The authors propose the task of contextual text style transfer: transferring the style of one text into another (i.e., informal to formal, or offensive to non-offensive), when the text is present within some larger, provided context. The authors propose a model (CAST) which takes advantage of the additional context to perform the style transfer. CAST outperforms previous style transfer models according to several automatic metrics, as well as human evaluation.", "The paper proposes a new task for text style transfer, based on the idea that the surrounding context of a sentence is important, whereas previous such tasks have only looked at sentences in isolation. Two new crowd-sourced datasets are created, and a combination of now fairly standard neural components is shown to outperform some strong baselines on the new datasets, on a variety of evaluation metrics. An ablation analysis of the components - including some auto-encoding auxiliary losses - shows that all the various parts are helpful to performance." ]
In this paper, we introduce a new task, Contextual Text Style Transfer, to translate a sentence within a paragraph context into the desired style (e.g., informal to formal, offensive to non-offensive). Two new datasets, Enron-Context and Reddit-Context, are introduced for this new task, focusing on formality and offensiveness, respectively. Two key challenges exist in contextual text style transfer: 1) how to preserve the semantic meaning of the target sentence and its consistency with the surrounding context when generating an alternative sentence with a specific style; 2) how to deal with the lack of labeled parallel data. To address these challenges, we propose a Context-Aware Style Transfer (CAST) model, which leverages both parallel and non-parallel data for joint model training. For parallel training data, CAST uses two separate encoders to encode each input sentence and its surrounding context, respectively. The encoded feature vector, together with the target style information, is then used to generate the target sentence. A classifier is further used to ensure contextual consistency of the generated sentence. In order to leverage a massive non-parallel corpus and to enhance sentence encoder and decoder training, additional self-reconstruction and back-translation losses are introduced. Experimental results on Enron-Context and Reddit-Context demonstrate the effectiveness of the proposed model over state-of-the-art style transfer methods, across style accuracy, content preservation, and contextual consistency metrics.
[]
[ { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "venue": "In NAACL,", "year": 2019 }, { "authors": [ "Cicero Nogueira dos Santos", "Igor Melnyk", "Inkit Padhi" ], "title": "Fighting offensive language on social media with unsupervised text style transfer", "venue": null, "year": 2018 }, { "authors": [ "Zhenxin Fu", "Xiaoye Tan", "Nanyun Peng", "Dongyan Zhao", "Rui Yan" ], "title": "Style transfer in text: Exploration and evaluation", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "NeurIPS", "year": 2014 }, { "authors": [ "Jeremy Howard", "Sebastian Ruder" ], "title": "Universal language model fine-tuning for text classification", "venue": "In ACL,", "year": 2018 }, { "authors": [ "Zhiting Hu", "Zichao Yang", "Xiaodan Liang", "Ruslan Salakhutdinov", "Eric P Xing" ], "title": "Toward controlled generation of text", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Zhiting Hu", "Haoran Shi", "Zichao Yang", "Bowen Tan", "Tiancheng Zhao", "Junxian He", "Wentao Wang", "Lianhui Qin", "Di Wang" ], "title": "Texar: A modularized, versatile, and extensible toolkit for text generation", "venue": "arXiv preprint arXiv:1809.00794,", "year": 2018 }, { "authors": [ "Harsh Jhamtani", "Varun Gangal", "Eduard Hovy", "Eric Nyberg" ], "title": "Shakespearizing modern language using copy-enriched sequence-to-sequence models", "venue": "arXiv preprint arXiv:1707.01161,", "year": 2017 }, { "authors": [ "Yoon Kim" ], "title": "Convolutional neural networks for sentence classification", "venue": "arXiv preprint arXiv:1408.5882,", "year": 2014 }, { "authors": [ "Bryan Klimt", "Yiming Yang" ], "title": "Introducing the enron corpus", "venue": "In CEAS,", "year": 2004 }, { "authors": [ "Guillaume Lample", "Alexis Conneau", "Ludovic Denoyer", "Marc’Aurelio Ranzato" ], "title": "Unsupervised machine translation using monolingual corpora only", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Juncen Li", "Robin Jia", "He He", "Percy Liang" ], "title": "Delete, retrieve, generate: a simple approach to sentiment and style transfer", "venue": null, "year": 2018 }, { "authors": [ "Lajanugen Logeswaran", "Honglak Lee", "Samy Bengio" ], "title": "Content preserving text generation with attribute controls", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Sourab Mangrulkar", "Suhani Shrivastava", "Veena Thenkanidiyoor", "Dileep Aroor Dinesh" ], "title": "A context-aware convolutional natural language generation model for dialogue systems", "venue": "In SIGDIAL,", "year": 2018 }, { "authors": [ "Tomas Mikolov", "Geoffrey Zweig" ], "title": "Context dependent recurrent neural network language model", "venue": "IEEE Spoken Language Technology Workshop (SLT),", "year": 2012 }, { "authors": [ "Remi Mir", "Bjarke Felbo", "Nick Obradovich", "Iyad Rahwan" ], "title": "Evaluating style transfer for text", "venue": "arXiv preprint arXiv:1904.02295,", "year": 2019 }, { "authors": [ "Courtney Napoles", "Keisuke Sakaguchi", "Matt Post", "Joel Tetreault" ], "title": "Ground truth for grammatical error correction 
metrics", "venue": "In ACL,", "year": 2015 }, { "authors": [ "Kishore Papineni", "Salim Roukos", "Todd Ward", "Wei-Jing Zhu" ], "title": "Bleu: a method for automatic evaluation of machine translation", "venue": "In ACL,", "year": 2002 }, { "authors": [ "Shrimai Prabhumoye", "Yulia Tsvetkov", "Ruslan Salakhutdinov", "Alan W Black" ], "title": "Style transfer through back-translation", "venue": "arXiv preprint arXiv:1804.09000,", "year": 2018 }, { "authors": [ "Sudha Rao", "Joel Tetreault" ], "title": "Dear sir or madam, may i introduce the gyafc dataset: Corpus, benchmarks and metrics for formality style transfer", "venue": null, "year": 2018 }, { "authors": [ "Iulian Vlad Serban", "Chinnadhurai Sankar", "Mathieu Germain", "Saizheng Zhang", "Zhouhan Lin", "Sandeep Subramanian", "Taesup Kim", "Michael Pieper", "Sarath Chandar", "Nan Rosemary Ke", "Sai Mudumba", "Alexandre de Brébisson", "Jose Sotelo", "Dendi Suhubdy", "Vincent Michalski", "Alexandre Nguyen", "Joelle Pineau", "Yoshua Bengio" ], "title": "A deep reinforcement learning chatbot", "venue": "arXiv preprint arXiv:1709.02349,", "year": 2017 }, { "authors": [ "Tianxiao Shen", "Tao Lei", "Regina Barzilay", "Tommi Jaakkola" ], "title": "Style transfer from non-parallel text by cross-alignment", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "Alessandro Sordoni", "Yoshua Bengio", "Hossein Vahabi", "Christina Lioma", "Jakob Grue Simonsen", "Jian-Yun Nie" ], "title": "A hierarchical recurrent encoder-decoder for generative context-aware query suggestion", "venue": "In CIKM,", "year": 2015 }, { "authors": [ "Alessandro Sordoni", "Michel Galley", "Michael Auli", "Chris Brockett", "Yangfeng Ji", "Margaret Mitchell", "Jian-Yun Nie", "Jianfeng Gao", "Bill Dolan" ], "title": "A neural network approach to context-sensitive generation of conversational responses", "venue": "In NAACL,", "year": 2015 }, { "authors": [ "Sandeep Subramanian", "Guillaume Lample", "Eric Michael Smith", "Ludovic Denoyer", "Marc’Aurelio Ranzato", "Y-Lan Boureau" ], "title": "Multiple-attribute text style transfer", "venue": "arXiv preprint arXiv:1811.00552,", "year": 2018 }, { "authors": [ "Jian Tang", "Yifan Yang", "Samuel Carton", "Ming Zhang", "Qiaozhu Mei" ], "title": "Context-aware natural language generation with recurrent neural networks", "venue": "arXiv preprint arXiv:1611.09900,", "year": 2016 }, { "authors": [ "Oriol Vinyals", "Quoc V. 
Le" ], "title": "A neural conversational model", "venue": "arXiv preprint arXiv:1506.05869,", "year": 2015 }, { "authors": [ "Tian Wang", "Kyunghyun Cho" ], "title": "Larger-context language modelling", "venue": "arXiv preprint arXiv:1511.03729,", "year": 2015 }, { "authors": [ "Wenlin Wang", "Zhe Gan", "Wenqi Wang", "Dinghan Shen", "Jiaji Huang", "Wei Ping", "Sanjeev Satheesh", "Lawrence Carin" ], "title": "Topic compositional neural language model", "venue": "arXiv preprint arXiv:1712.09783,", "year": 2017 }, { "authors": [ "Tsung-Hsien Wen", "Milica Gasic", "Dongho Kim", "Nikola Mrksic", "Pei-Hao Su", "David Vandyke", "Steve Young" ], "title": "Stochastic language generation in dialogue using recurrent neural networks with convolutional sentence reranking", "venue": "In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue,", "year": 2015 }, { "authors": [ "Jingjing Xu", "Xu Sun", "Qi Zeng", "Xuancheng Ren", "Xiaodong Zhang", "Houfeng Wang", "Wenjie Li" ], "title": "Unpaired sentiment-to-sentiment translation: A cycled reinforcement learning approach", "venue": "arXiv preprint arXiv:1805.05181,", "year": 2018 }, { "authors": [ "Ruochen Xu", "Tao Ge", "Furu Wei" ], "title": "Formality style transfer with hybrid textual annotations", "venue": "arXiv preprint arXiv:1903.06353,", "year": 2019 }, { "authors": [ "Zichao Yang", "Zhiting Hu", "Chris Dyer", "Eric P Xing", "Taylor Berg-Kirkpatrick" ], "title": "Unsupervised text style transfer using language models as discriminators", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "Hongyu Zang", "Xiaojun Wan" ], "title": "Towards automatic generation of product reviews from aspectsentiment scores", "venue": "In Proceedings of the 10th International Conference on Natural Language Generation,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Text style transfer has recently been applied to many applications with remarkable success (e.g., sentiment manipulation, formalized writing). Early work relied on parallel corpora with a sequence-to-sequence learning framework (Bahdanau et al., 2015; Jhamtani et al., 2017). However, collecting annotations for parallel data is highly time-consuming. There has been a recent surge of interest in developing text style transfer models using non-parallel data (Hu et al., 2017; Li et al., 2018; Prabhumoye et al., 2018; Subramanian et al., 2018), assuming that disentangling style information from semantic content can be achieved in an auto-encoding fashion with the introduction of additional regularizers (e.g., adversarial discriminators (Shen et al., 2017) or language models (Yang et al., 2018)).\nDespite promising results, these techniques still have a long way to go towards practical use. Specifically, existing models mostly focus on sentence-level rewriting. However, in real-world applications, sentences typically reside in a proper context such as a paragraph. For example, in the formalized writing task, the rewritten span should align well with the surrounding context (e.g., personal email, scientific content) to keep a coherent text flow. Taking a single sentence as the sole input of a style transfer model may fail to preserve the topical coherency of the generated sentence with its surrounding context, resulting in poor semantic and logical consistency on the paragraph level (see Example C in Table 4).\nMotivated by this, we propose and investigate a new task - Contextual Text Style Transfer. Given a paragraph, the system aims to automatically edit sentences into a desired style, while keeping the edited section topically coherent with its surrounding context. To achieve this goal, we propose a novel Context-Aware Style Transfer (CAST) model, by jointly considering style transfer and context alignment. For parallel training data, CAST uses two separate encoders to encode the source sentence and its surrounding context, respectively, and a decoder to translate the encoded features into the target sentence. A pre-trained coherence classifier is further applied to regularize the generated target sentence to be consistent with the context. To overcome the data sparsity issue, we further leverage non-parallel data using a hybrid approach. With a large-scale non-parallel corpus, the training of the sentence encoder and decoder is enhanced via additional self-reconstruction and back-translation objectives. A pre-trained style classifier is also used for style regularization. The final CAST model is jointly trained with both parallel and non-parallel data.\n1Source code and the collected new datasets will be released upon acceptance.\nAs this is a newly proposed task, we also introduce two new datasets, Enron-Context and Reddit-Context, collected via crowdsourcing. The former contains 14,734 formal vs. informal paired samples from Enron (Klimt & Yang, 2004) (an email dataset), and the latter contains 23,158 offensive vs. non-offensive paired samples from Reddit (Serban et al., 2017). Each paired sample contains an original sentence and a human-rewritten sentence with the desired style, accompanied by its paragraph context.
Besides this, in order to enhance model training, we exploit an additional 28,375/29,774 formal/informal non-parallel sentences from GYAFC (Rao & Tetreault, 2018), and 53,028/53,714 offensive/non-offensive non-parallel sentences from Reddit (dos Santos et al., 2018).\nThe main contributions of this work are summarized as follows: (i) We propose a new task - Contextual Text Style Transfer, which aims to translate an input sentence into a desired style, while preserving its style-irrelevant semantics and topical consistency with the surrounding context. (ii) We introduce two new datasets for this task, Enron-Context and Reddit-Context, which provide reliable benchmarks for measuring contextual style transfer models. (iii) We present a new model - Context-Aware Style Transfer (CAST), which jointly optimizes the generation quality of the target sentence and its topical coherency with adjacent sentences. Extensive experiments on these two new datasets demonstrate that the proposed CAST model outperforms state-of-the-art baselines." }, { "heading": "2 RELATED WORK", "text": "Text Style Transfer Text style transfer aims to modify an input sentence into a desired style while preserving its style-independent semantics. Previous work has explored this as a sequence-to-sequence learning task using parallel corpora with paired source/target sentences in different styles. For example, Jhamtani et al. (2017) pre-trained word embeddings by leveraging external dictionaries mapping Shakespearean words to modern English words and additional text. However, available parallel data in different styles are very limited. Therefore, there is a recent surge of interest in considering a more realistic setting, where only non-parallel stylized corpora are available. A typical approach is: (i) disentangling the latent space into content and style features; and then (ii) generating stylistic sentences by tweaking the style-relevant features and passing them through a decoder, together with the original content-relevant features (Xu et al., 2018).\nMany of these approaches borrowed the idea of an adversarial discriminator/classifier from the Generative Adversarial Network (GAN) framework (Goodfellow et al., 2014). For example, Shen et al. (2017); Fu et al. (2018); Lample et al. (2018) used adversarial classifiers to force the decoder to transfer the encoded source sentence into a different style/language. Alternatively, Li et al. (2018) achieved disentanglement by filtering stylistic words from input sentences. Another direction for text style transfer without parallel data is using back-translation (Prabhumoye et al., 2018) with a denoising auto-encoding objective (Logeswaran et al., 2018; Subramanian et al., 2018).\nRegarding the tasks, sentiment transfer is one of the most widely studied problems. Informality to formality (Rao & Tetreault, 2018) is another direction of text style transfer, aiming to change the style of a given sentence to more formal text. dos Santos et al. (2018) presented an approach to transferring offensive text to non-offensive text based on social network data. In Prabhumoye et al. (2018), the authors proposed the political slant transfer task. However, none of these previous studies directly considered context-aware text style transfer, which is the main focus of this work.\nContext-aware Text Generation Our work is related to context-aware text generation (Mikolov & Zweig, 2012; Tang et al., 2016), which can be applied to many NLP tasks (Mangrulkar et al., 2018).
For example, previous work has investigated language modeling with context information (Wang & Cho, 2015; Wang et al., 2017), treating the preceding sentences as context. There are also studies on response generation for conversational systems (Sordoni et al., 2015b; Wen et al., 2015), where dialogue history is treated as context. Zang & Wan (2017) introduced a neural model to generate long reviews from aspect-sentiment scores given the topics. Vinyals & Le (2015) proposed a model to predict the next sentence given the previous sentences in a dialogue session. Sordoni et al. (2015a) presented a hierarchical recurrent encoder-decoder model to encode dialogue context. Our work is the first to explore context information in the text style transfer task." }, { "heading": "3 CONTEXTUAL TEXT STYLE TRANSFER", "text": "In this section, we first describe the problem definition and provide an overview of the model architecture in Section 3.1. Section 3.2 presents the proposed Context-Aware Style Transfer (CAST) model with parallel data, and Section 3.3 further introduces how to augment the CAST model with non-parallel data in a hybrid approach." }, { "heading": "3.1 OVERVIEW", "text": "Problem Definition The problem of contextual text style transfer is defined as follows. Given a style-labelled parallel dataset $P = \{(x_i, l_i), (y_i, \tilde{l}_i), c_i\}_{i=1}^{M}$, the $i$-th instance contains the original sentence $x_i$ in style $l_i$, its corresponding rewritten sentence $y_i$ in another style $\tilde{l}_i$, and the paragraph context $c_i$. $x_i$ and $y_i$ are expected to contain the same semantic content, but in different language styles (i.e., $l_i \neq \tilde{l}_i$). The goal is to transform $x_i$ in style $l_i$ into $y_i$ in style $\tilde{l}_i$, while keeping the sentence $y_i$ semantically coherent with its context $c_i$. In practice, labelled parallel data may be difficult to garner. Therefore, we assume that additional non-parallel data $U = \{(x_i, l_i)\}_{i=1}^{N}$ can be leveraged to enhance overall model training.\nTraining Objective The overall architecture of the proposed CAST model is illustrated in Figure 1. The hybrid model training process consists of two paths, one for parallel data and the other for non-parallel data. In the parallel path, a Seq2Seq loss and a contextual coherence loss are defined, to learn the two encoders (sentence encoder and context encoder) and the sentence decoder with labeled parallel data. The non-parallel path is designed to further enhance the sentence encoder and decoder with three additional losses: (i) a self-reconstruction loss; (ii) a back-translation loss; and (iii) a style classification loss. The overall training objective, taking both parallel and non-parallel paths into consideration, can be written as:\n$\mathcal{L}^{P,U}_{final} = \mathcal{L}^{P}_{c\text{-}s2s} + \lambda_1 \mathcal{L}^{P}_{cohere} + \lambda_2 \mathcal{L}^{U}_{recon} + \lambda_3 \mathcal{L}^{U}_{back\text{-}trans} + \lambda_4 \mathcal{L}^{U}_{style}$ , (1)\nwhere $\lambda_1$, $\lambda_2$, $\lambda_3$ and $\lambda_4$ are hyper-parameters to balance the different objectives. Each of these loss terms will be explained in the following sub-sections." },
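A schematic sketch (ours, purely illustrative) of how the overall objective in Eqn. (1) could be assembled in code; the `model.*_loss` helpers and the λ values are hypothetical placeholders, not the paper's implementation.

```python
def cast_total_loss(batch_parallel, batch_nonparallel, model,
                    lambdas=(1.0, 0.5, 0.5, 0.5)):
    """L_final = L_c-s2s + l1*L_cohere + l2*L_recon + l3*L_back + l4*L_style (Eqn. 1)."""
    l1, l2, l3, l4 = lambdas
    loss = model.contextual_seq2seq_loss(batch_parallel)               # Eqn. (3)
    loss = loss + l1 * model.coherence_loss(batch_parallel)            # Eqn. (6)
    loss = loss + l2 * model.reconstruction_loss(batch_nonparallel)    # Eqn. (7)
    loss = loss + l3 * model.back_translation_loss(batch_nonparallel)  # Eqn. (8)
    loss = loss + l4 * model.style_loss(batch_nonparallel)             # Eqn. (10)
    return loss
```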
{ "heading": "3.2 CAST WITH PARALLEL DATA", "text": "In this subsection, we discuss the training objective associated with parallel data, consisting of (i) a contextual Seq2Seq loss; and (ii) a contextual coherence loss.\nContextual Seq2Seq Loss When parallel data are available, a Seq2Seq model can be directly learned for text style transfer. We denote the Seq2Seq model as $(E, D)$, where the semantic representation of sentence $x_i$ is extracted by the encoder $E$ (i.e., $E(x_i)$), and the decoder $D$ aims to learn a conditional distribution of $y_i$ given the encoded feature $E(x_i)$ and style $\tilde{l}_i$:\n$\mathcal{L}^{P}_{s2s} = -\,\mathbb{E}_{x_i, y_i \sim P} \log p_D(y_i \mid E(x_i), \tilde{l}_i)$ . (2)\nHowever, in such a sentence-to-sentence style transfer setting, the context of the paragraph is ignored, which, if well utilized, could help improve the quality of the generated text (such as paragraph-level topical coherence).\nThus, to take advantage of the paragraph context $c_i$, we use two separate encoders $E_s$ and $E_c$ to encode the sentence and the context independently. The outputs of the two encoders are combined via a linear layer, to obtain a context-aware sentence representation, which is used for generating the target sentence. The model is trained to minimize the following loss:\n$\mathcal{L}^{P}_{c\text{-}s2s} = -\,\mathbb{E}_{x_i, c_i, y_i \sim P} \log p_D(y_i \mid E_s(x_i), E_c(c_i), \tilde{l}_i)$ . (3)\nCompared with Eqn. (2), the use of $E_c(c_i)$ makes the text style transfer process context-dependent. The generated sentence can be denoted as $\tilde{y}_i = D(E_s(x_i), E_c(c_i), \tilde{l}_i)$.\nContextual Coherence Loss To enforce contextual coherence (i.e., to make the generated sentence $y_i$ align with the surrounding context $c_i$), we train a coherence classifier that aims to distinguish whether $c_i$ is the context of $y_i$, by adopting a language model with an objective similar to next sentence prediction (Devlin et al., 2019).\nSpecifically, assume that $y_i$ is the $t$-th sentence of a paragraph $p_i$ (i.e., $y_i = p_i^{(t)}$), and $c_i = \{p_i^{(0)}, \ldots, p_i^{(t-1)}, p_i^{(t+1)}, \ldots, p_i^{(T)}\}$ is its surrounding context. We first reconstruct the paragraph $p_i = \{p_i^{(0)}, \ldots, p_i^{(T)}\}$ by inserting $y_i$ into the proper position in $c_i$, denoted as $[c_i; y_i]$. Based on this, we obtain a paragraph representation $u_i$ via a language model encoder. Then, we apply a linear layer to the representation, followed by a tanh function and a softmax layer, to predict a binary label $s_i$, which indicates whether $c_i$ is the context of $y_i$:\n$u_i = \mathrm{LM}([c_i; f(y_i)])$ (4) $\quad p_{\mathrm{LM}}(s_i \mid c_i, y_i) = \mathrm{softmax}(\tanh(W u_i + b))$ . (5)\nHere $\mathrm{LM}$ represents the language model encoder, and $s_i = 1$ indicates that $c_i$ is the context of $y_i$. Note that since $\tilde{y}_i$ consists of discrete tokens, which are non-differentiable, we use the continuous features that generate $\tilde{y}_i$, denoted as $f(\tilde{y}_i)$, as the input to the language model. We construct paired data $\{y_i, c_i, s_i\}_{i=1}^{N}$ for training the classifier, where the negative samples are generated by replacing a sentence in a paragraph with another random sentence. After pre-training, the coherence classifier is used to obtain the contextual coherence loss:\n$\mathcal{L}^{P}_{cohere} = -\,\mathbb{E}_{x_i, c_i \sim P} \log p_{\mathrm{LM}}(s_i = 1 \mid c_i, f(\tilde{y}_i))$ . (6)\nIntuitively, minimizing $\mathcal{L}^{P}_{cohere}$ encourages $\tilde{y}_i$ to blend better into its context $c_i$. Note that the coherence classifier is pre-trained and kept fixed during the training of the CAST model. The above coherence loss can be used to update the parameters of $E_s$, $E_c$ and $D$ during model training." },
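A hedged PyTorch-style sketch of the two-encoder combination behind Eqn. (3); the pooled-vector interface and dimensions are our assumptions, not the paper's released code.

```python
import torch
import torch.nn as nn

class ContextAwareEncoder(nn.Module):
    """Encode the sentence (E_s) and its context (E_c) separately, then fuse
    the two pooled vectors with a linear layer, as described for Eqn. (3)."""
    def __init__(self, sentence_encoder, context_encoder, d_model=256):
        super().__init__()
        self.enc_s = sentence_encoder     # E_s, assumed to return (batch, d_model)
        self.enc_c = context_encoder      # E_c, assumed to return (batch, d_model)
        self.fuse = nn.Linear(2 * d_model, d_model)

    def forward(self, sentence, context):
        h_s = self.enc_s(sentence)        # E_s(x_i)
        h_c = self.enc_c(context)         # E_c(c_i)
        return self.fuse(torch.cat([h_s, h_c], dim=-1))  # context-aware representation
```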
{ "heading": "3.3 CAST WITH NON-PARALLEL DATA", "text": "For the contextual style transfer task, there are not many parallel datasets available with style-labeled paragraph pairs. To overcome the data sparsity issue, we propose to further boost the CAST model by leveraging additional non-parallel data $U = \{(x_i, l_i)\}_{i=1}^{N}$, which are less expensive to collect. In order to fully exploit $U$ to enhance the training of the sentence encoder and decoder $(E_s, D)$, we introduce three additional training losses, detailed below.\nReconstruction Loss The reconstruction loss aims to encourage $E_s$ and $D$ to reconstruct the input sentence itself, if the desired style is the same as that of the input. The corresponding objective is similar to Eqn. (2):\n$\mathcal{L}^{U}_{recon} = -\,\mathbb{E}_{x_i \sim U} \log p_D(x_i \mid E_s(x_i), l_i)$ . (7)\nCompared with Eqn. (2), here we encourage the decoder $D$ to recover $x_i$'s original stylistic property as accurately as possible when given the style label $l_i$. The self-reconstructed sentence is denoted as $\hat{x}_i = D(E_s(x_i), l_i)$.\nBack-Translation Loss The back-translation loss requires the model to reconstruct the input sentence after a transformation loop. Specifically, the input sentence $x_i$ is first transferred into the target style, i.e., $\tilde{x}_i = D(E_s(x_i), \tilde{l}_i)$. Then the generated target sentence is transferred back into its original style, i.e., $\hat{x}_i = D(E_s(\tilde{x}_i), l_i)$. The back-translation loss is defined as:\n$\mathcal{L}^{U}_{back\text{-}trans} = -\,\mathbb{E}_{x_i \sim U,\, \tilde{x}_i \sim p_D(\tilde{x}_i \mid E_s(x_i), \tilde{l}_i)} \log p_D(x_i \mid E_s(\tilde{x}_i), l_i)$ , (8)\nwhere the source style is denoted as $l_i$, and the target style is denoted as $\tilde{l}_i$.\nStyle Classification Loss To further boost the model, we use $U$ to train a classifier to predict the style of a given sentence, and regularize the training of $(E_s, D)$ with the pre-trained style classifier. Specifically, the objective for training the style classifier is:\n$\mathcal{L}_{style} = -\,\mathbb{E}_{x_i \sim U} \log p_C(l_i \mid x_i)$ , (9)\nwhere $p_C(\cdot)$ denotes the style classifier. After the classifier is trained, we keep its parameters fixed, and apply it to update the parameters of $(E_s, D)$. Specifically, the style classification loss defined over the pre-trained style classifier can be written as:\n$\mathcal{L}^{U}_{style} = -\,\mathbb{E}_{x_i \sim U} \bigl[ \mathbb{E}_{\hat{x}_i \sim p_D(\hat{x}_i \mid E_s(x_i), l_i)} \log p_C(l_i \mid \hat{x}_i) + \mathbb{E}_{\tilde{x}_i \sim p_D(\tilde{x}_i \mid E_s(x_i), \tilde{l}_i)} \log p_C(\tilde{l}_i \mid \tilde{x}_i) \bigr]$ . (10)" },
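As a rough illustration of the back-translation loop in Eqn. (8), here is a minimal sketch (ours); `model.generate` and `model.decoder_nll` are hypothetical helpers standing in for sampling from, and scoring with, the decoder D.

```python
import torch

def back_translation_loss(model, x, style_src, style_tgt):
    """Sketch of Eqn. (8): transfer x into the target style, then train the
    model to reconstruct x from the transferred sentence."""
    with torch.no_grad():                  # sampling x~ is non-differentiable
        x_tilde = model.generate(model.enc_s(x), style_tgt)   # x~ = D(E_s(x), l~)
    # negative log-likelihood of recovering x in its source style
    return model.decoder_nll(target=x, memory=model.enc_s(x_tilde), style=style_src)
```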
{ "heading": "4 NEW DATASETS FOR CONTEXTUAL TEXT STYLE TRANSFER", "text": "Existing text style transfer datasets, whether parallel or non-parallel, do not contain contextual information, and thus are not suitable for our new task. Therefore, we introduce two new datasets: Enron-Context and Reddit-Context, derived from two existing datasets - Enron (Klimt & Yang, 2004) and Reddit Politics (Serban et al., 2017), respectively.\nEnron-Context To build a formality transfer dataset with paragraph contexts, we randomly sampled emails from the Enron corpus (Klimt & Yang, 2004). After pre-processing and filtering with NLTK (Bird et al., 2009), we asked Amazon Mechanical Turk (AMT) annotators to identify informal sentences within each email, and rewrite them in a more formal style. Then, we asked different annotators to verify whether each rewritten sentence is more formal than the original sentence.\nReddit-Context We further collected a new offensive vs. non-offensive dataset from the Reddit Politics corpus (Serban et al., 2017). First, we performed classification on the original dataset at the sentence level, to identify offensive sentences within whole paragraphs. After filtering some extremely long/short sentences, we asked AMT annotators to rewrite the offensive ones into non-offensive alternatives. To provide robust benchmark datasets, we randomly selected a subset of sentences to be rewritten into two references each, which makes up 10% of the whole dataset.\nAfter manually removing wrong or duplicated annotations, we obtained a total of 14,734 rewritten sentences for Enron-Context, and 23,158 for Reddit-Context. We also limited the vocabulary size by using words with frequency equal to or larger than 20 (70) in Enron (Reddit). Table 1 provides the statistics of the two datasets.\nNon-parallel Corpus Besides parallel datasets, we also explore non-parallel datasets to enhance model training. For formality transfer, one choice is Grammarly's Yahoo Answers Formality Corpus (GYAFC) (Rao & Tetreault, 2018), crawled and annotated from two domains in Yahoo Answers. This corpus contains paired informal and formal sentences, without context. We randomly selected a subset of sentences from the original dataset, and used it in a non-parallel manner. In the end, we collected 28,375/29,774 formal/informal sentences. The second dataset is the offensive/non-offensive Reddit dataset. Following dos Santos et al. (2018), we used a pre-trained classifier to extract offensive/non-offensive sentences from Reddit posts. In total, we collected 53,028/53,714 offensive/non-offensive sentences." }, { "heading": "5 EXPERIMENTS", "text": "In this section, we compare our model with state-of-the-art baselines on the two new datasets, and provide both quantitative analysis and human evaluation to validate the effectiveness of our model." }, { "heading": "5.1 EXPERIMENTAL SETUP", "text": "Datasets Table 2 provides a summary of the parallel and non-parallel datasets used for the two style transfer tasks. For the non-parallel datasets, we split them into two: one for the proposed model training, and the other for the style classifier pre-training. Similarly, for the parallel datasets, the training sets are divided into two as well, for the training of CAST and the coherence classifier, respectively.\nEvaluation Metrics The contextual style transfer task requires generating sentences that: (i) preserve the original content and structure of the source sentence; (ii) conform to the pre-specified style; and (iii) align with the surrounding context in the paragraph. Thus, we consider the following automatic metrics to evaluate the effectiveness of different methods:\n(i) Content Preservation. We assess the degree of content preservation based on n-gram statistics, by measuring BLEU scores (Papineni et al., 2002) between generated sentences and human references. Following Rao & Tetreault (2018), we also use GLEU for the formality transfer task, which was originally introduced for the grammatical error correction task (Napoles et al., 2015). For offensiveness transfer, we include perplexity (PPL) as used in dos Santos et al. (2018), which is computed by a word-level LSTM language model pre-trained on non-offensive sentences.\n(ii) Style Accuracy. Similar to prior work, we generate samples from the model, and measure style accuracy (i.e., the accuracy of the pre-trained style classifier).\n(iii) Context Coherence. As aforementioned, we use the pre-trained coherence classifier to measure how well the generated sentences match the surrounding context.\nFor formality transfer, the pre-trained style classifier and coherence classifier reach 91.35% and 86.78% accuracy, respectively. For offensiveness transfer, the accuracies are 93.47% and 84.96%, respectively. Therefore, we consider them reliable enough to serve as evaluation metrics.\nBaselines We compare our proposed model with several baselines: (i) Seq2Seq: a Transformer-based Seq2Seq model (i.e., Eqn. (2)), taking only sentences as inputs, and trained only on parallel data; (ii) Contextual Seq2Seq: a Transformer-based contextual Seq2Seq model (i.e., Eqn.
(3)), taking both the context and the sentence as input, and trained only on parallel data; (iii) Hybrid Seq2Seq (Xu et al., 2019): a Seq2Seq model leveraging both parallel and non-parallel data; (iv) ControlGen (Hu et al., 2017; 2018): a state-of-the-art text transfer model using non-parallel data.\nImplementation Details The context encoder, sentence encoder and sentence decoder are all implemented as a one-layer Transformer with 4 heads. The hidden dimension of one head is 256, and
The context encoder is able to improve content preservation and style transfer accuracy, while the coherence classifier can help improve the coherence score but not much for style accuracy. By using these two components, our model can find a proper balance between transferring to the correct style and maintaining the consistency with context. When both of them are removed (the 4th row), performance on all the metrics drops significantly. We also observe that without using non-parallel data, the model performs poorly, showing the importance of using a hybrid method.\nHuman Evaluation Considering the subjective nature of this task, we conduct human evaluations based on content preservation, style control and context consistency, following Mir et al. (2019). Given the original sentence and the transferred sentence with the corresponding context, the AMT crowd-source workers were asked to select the best one based on these three aspects. The evaluation interface also allows a neutral option, if the worker considers both sentences as equally good in certain aspect. We randomly sampled 200 sentences from the corresponding test set, and collected three human responses for each pair. Table 6 reports the pairwise comparison results on both tasks. Based on human judgment, the quality of transferred sentences by CAST is significantly higher than the other methods on all three metrics. This is consistent with our observation in the experiments on automatic metrics." }, { "heading": "6 CONCLUSION", "text": "In this paper, we present a new task - contextual text style transfer. To provide benchmarks for this new task, we introduce two new datasets, which contain annotated sentence pairs accompanied by paragraph contexts. We also propose a new model, which can jointly capture content preservation and context coherence, and exploit additional abundant non-parallel data for boosting performance. In both quantitative and human evaluations, our approach significantly outperforms baseline methods that do not rely on context information. Ablation study also demonstrates the effectiveness of different components in the model design. We believe our current model takes a first step towards modeling context information for text style transfer, and would like to explore more advanced solutions to integrating context." } ]
2019
null
SP:5f026e00085a3f771abf068bd884e27a6f9d9e44
[ "The paper aims to address the covariate shift issue of behavior cloning (BC). The main idea of the paper is to learn a policy by minimizing a BC loss and an uncertainty loss. This uncertainty loss is defined as the variance of the policy posterior given the demonstrations. To approximate this posterior, the paper uses an ensemble approach, where an ensemble of policies is learned from demonstrations. This approach leads to a method called disagreement-regularized imitation learning (DRIL). The paper proves for a tabular setting that DRIL has a linear regret bound in terms of the horizon, which is better than that of BC, which has a quadratic regret bound. Empirical evaluation shows that DRIL outperforms BC in both discrete and continuous control tasks, and it outperforms GAIL in discrete control tasks. ", "The paper proposes an imitation learning algorithm that combines behavioral cloning with a regularizer that encourages the agent to visit states similar to the demonstrated states. The key idea is to use ensemble disagreement to approximate uncertainty, and use RL to train the imitation agent to visit states in which an ensemble of cloned imitation policies is least uncertain about which action the expert would take. Experiments on image-based Atari games show that the proposed method significantly outperforms BC and GAIL baselines in three games, and performs comparably or slightly better than the baselines in the remaining three games." ]
We present a simple and effective algorithm designed to address the covariate shift problem in imitation learning. It operates by training an ensemble of policies on the expert demonstration data, and using the variance of their predictions as a cost which is minimized with RL together with a supervised behavioral cloning cost. Unlike adversarial imitation methods, it uses a fixed reward function which is easy to optimize. We prove a regret bound for the algorithm which is linear in the time horizon multiplied by a coefficient which we show to be low for certain problems on which behavioral cloning fails. We evaluate our algorithm empirically across multiple pixel-based Atari environments and continuous control tasks, and show that it matches or significantly outperforms behavioral cloning and generative adversarial imitation learning.
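As a rough illustration of the ensemble-disagreement cost described in this abstract, here is a minimal sketch (ours, not the authors' code); the policies are assumed to be callables returning action-probability tensors for a state.

```python
import torch

def disagreement_cost(policies, state):
    """Variance of the action predictions of an ensemble of behavior-cloned
    policies: high where the ensemble disagrees, i.e. far from the demos."""
    with torch.no_grad():
        probs = torch.stack([pi(state) for pi in policies])  # (n_ensemble, n_actions)
    return probs.var(dim=0).sum().item()  # summed per-action variance as the cost
```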
[ { "affiliations": [], "name": "Kianté Brantley" }, { "affiliations": [], "name": "Wen Sun" }, { "affiliations": [], "name": "Mikael Henaff" } ]
[ { "authors": [ "Yuri Burda", "Harrison Edwards", "Amos Storkey", "Oleg Klimov" ], "title": "Exploration by random network distillation", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Kai-Wei Chang", "Akshay Krishnamurthy", "Alekh Agarwal", "John Langford", "Hal Daumé III" ], "title": "Learning to search better than your teacher", "venue": null, "year": 2015 }, { "authors": [ "Ching-An Cheng", "Byron Boots" ], "title": "Convergence of value aggregation for imitation learning", "venue": "arXiv preprint arXiv:1801.07292,", "year": 2018 }, { "authors": [ "Kyunghyun Cho", "Bart van Merriënboer", "Caglar Gulcehre", "Dzmitry Bahdanau", "Fethi Bougares", "Holger Schwenk", "Yoshua Bengio" ], "title": "Learning phrase representations using RNN encoder– decoder for statistical machine translation", "venue": "In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2014 }, { "authors": [ "Hal Daumé", "John Langford", "Daniel Marcu" ], "title": "Search-based structured prediction", "venue": "CoRR, abs/0907.0786,", "year": 2009 }, { "authors": [ "Mikael Henaff" ], "title": "Explicit explore-exploit algorithms in continuous state spaces", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Mikael Henaff", "Alfredo Canziani", "Yann LeCun" ], "title": "Model-predictive policy learning with uncertainty regularization for driving in dense traffic", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Matteo Hessel", "Joseph Modayil", "Hado P. van Hasselt", "Tom Schaul", "Georg Ostrovski", "Will Dabney", "Dan Horgan", "Bilal Piot", "Mohammad Gheshlaghi Azar", "David Silver" ], "title": "Rainbow: Combining improvements in deep reinforcement learning", "venue": null, "year": 2018 }, { "authors": [ "Ashley Hill", "Antonin Raffin", "Maximilian Ernestus", "Adam Gleave", "Rene Traore", "Prafulla Dhariwal", "Christopher Hesse", "Oleg Klimov", "Alex Nichol", "Matthias Plappert", "Alec Radford", "John Schulman", "Szymon Sidor", "Yuhuai Wu" ], "title": "Stable baselines. https://github.com/hill-a/ stable-baselines, 2018", "venue": null, "year": 2018 }, { "authors": [ "Jonathan Ho", "Stefano Ermon" ], "title": "Generative adversarial imitation learning", "venue": "Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization, 2014. URL http://arxiv.org/abs/1412.6980. cite arxiv:1412.6980Comment: Published as a conference paper at the 3rd International Conference for Learning Representations", "venue": "San Diego,", "year": 2015 }, { "authors": [ "Ilya Kostrikov" ], "title": "Pytorch implementations of reinforcement learning algorithms", "venue": "https:// github.com/ikostrikov/pytorch-a2c-ppo-acktr-gail,", "year": 2018 }, { "authors": [ "Hoang M Le", "Nan Jiang", "Alekh Agarwal", "Miroslav Dudı́k", "Yisong Yue", "Hal Daumé III" ], "title": "Hierarchical imitation and reinforcement learning", "venue": "arXiv preprint arXiv:1803.00590,", "year": 2018 }, { "authors": [ "Timothy P. Lillicrap", "Jonathan J. 
Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": null, "year": 2016 }, { "authors": [ "Yuping Luo", "Huazhe Xu", "Tengyu Ma" ], "title": "Learning self-correctable policies and value functions from demonstrations with negative sampling", "venue": "CoRR, abs/1907.05634,", "year": 2019 }, { "authors": [ "Kunal Menda", "Katherine Rose Driggs-Campbell", "Mykel J. Kochenderfer" ], "title": "Ensembledagger: A bayesian approach to safe imitation", "venue": "learning. ArXiv,", "year": 2018 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A. Rusu", "Joel Veness", "Marc G. Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K. Fidjeland", "Georg Ostrovski", "Stig Petersen", "Charles Beattie", "Amir Sadik", "Ioannis Antonoglou", "Helen King", "Dharshan Kumaran", "Daan Wierstra", "Shane Legg", "Demis Hassabis" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Volodymyr Mnih", "Adria Puigdomenech Badia", "Mehdi Mirza", "Alex Graves", "Timothy Lillicrap", "Tim Harley", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": "Proceedings of The 33rd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Rémi Munos", "Csaba Szepesvári" ], "title": "Finite-time bounds for fitted value iteration", "venue": "J. Mach. Learn. Res.,", "year": 2008 }, { "authors": [ "Ashvin Nair", "Bob McGrew", "Marcin Andrychowicz", "Wojciech Zaremba", "Pieter Abbeel" ], "title": "Overcoming exploration in reinforcement learning with demonstrations", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2018 }, { "authors": [ "Ian Osband", "Charles Blundell", "Alexander Pritzel", "Benjamin Van Roy" ], "title": "Deep exploration via bootstrapped DQN", "venue": "CoRR, abs/1602.04621,", "year": 2016 }, { "authors": [ "Deepak Pathak", "Dhiraj Gandhi", "Abhinav Gupta" ], "title": "Self-supervised exploration via disagreement", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Dean A. Pomerleau" ], "title": "Alvinn: An autonomous land vehicle in a neural network", "venue": "Advances in Neural Information Processing Systems", "year": 1989 }, { "authors": [ "Siddharth Reddy", "Anca D. 
Dragan", "Sergey Levine" ], "title": "SQIL: imitation learning via regularized behavioral cloning", "venue": "CoRR, abs/1905.11108,", "year": 2019 }, { "authors": [ "Stephane Ross", "Drew Bagnell" ], "title": "Efficient reductions for imitation learning", "venue": "In Yee Whye Teh and Mike Titterington (eds.), Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics,", "year": 2010 }, { "authors": [ "Stephane Ross", "J Andrew Bagnell" ], "title": "Reinforcement and imitation learning via interactive no-regret learning", "venue": "arXiv preprint arXiv:1406.5979,", "year": 2014 }, { "authors": [ "Stephane Ross", "Geoffrey Gordon", "Drew Bagnell" ], "title": "A reduction of imitation learning and structured prediction to no-regret online learning", "venue": "Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics,", "year": 2011 }, { "authors": [ "Fumihiro Sasaki", "Tetsuya Yohira", "Atsuo Kawaguchi" ], "title": "Sample efficient imitation learning for continuous control", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms. CoRR, abs/1707.06347, 2017", "venue": null, "year": 2017 }, { "authors": [ "Nitish Srivastava", "Geoffrey E. Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting", "venue": "Journal of Machine Learning Research,", "year": 2014 }, { "authors": [ "Wen Sun", "Arun Venkatraman", "Geoffrey J Gordon", "Byron Boots", "J Andrew Bagnell" ], "title": "Deeply aggrevated: Differentiable imitation learning for sequential prediction", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Arun Venkatraman", "Martial Hebert", "J. Andrew Bagnell" ], "title": "Improving multi-step prediction of learned time series models", "venue": "In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Ruohan Wang", "Carlo Ciliberto", "Pierlugi Amadori", "Yiannis Demiris" ], "title": "Random expert distillation: Imitation learning via expert policy support estimation", "venue": "In Proceedings of International Conference on Machine Learning", "year": 2019 }, { "authors": [ "Sean Welleck", "Kianté Brantley", "Hal Daumé III", "Kyunghyun Cho" ], "title": "Non-monotonic sequential text generation", "venue": "CoRR, abs/1902.02192,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Training artificial agents to perform complex tasks is essential for many applications in robotics, video games and dialogue. If success on the task can be accurately described using a reward or cost function, reinforcement learning (RL) methods offer an approach to learning policies which has proven to be successful in a wide variety of applications (Mnih et al., 2015; 2016; Lillicrap et al., 2016; Hessel et al., 2018). However, in other cases the desired behavior may only be roughly specified and it is unclear how to design a reward function to characterize it. For example, training a video game agent to adopt more human-like behavior using RL would require designing a reward function which characterizes behaviors as more or less human-like, which is difficult.\nImitation learning (IL) offers an elegant approach whereby agents are trained to mimic the demonstrations of an expert rather than optimizing a reward function. Its simplest form consists of training a policy to predict the expert’s actions from states in the demonstration data using supervised learning. While appealingly simple, this approach suffers from the fact that the distribution over states observed at execution time can differ from the distribution observed during training. Minor errors which initially produce small deviations become magnified as the policy encounters states further and further from its training distribution. This phenomenon, initially noted in the early work of (Pomerleau, 1989), was formalized in the work of (Ross & Bagnell, 2010) who proved a quadratic O( T 2) bound on the regret and showed that this bound is tight. The subsequent work of (Ross et al., 2011) showed that if the policy is allowed to further interact with the environment and make queries to the expert policy, it is possible to obtain a linear bound on the regret. However, the ability to query an expert can often be a strong assumption.\nIn this work, we propose a new and simple algorithm called DRIL (Disagreement-Regularized Imitation Learning) to address the covariate shift problem in imitation learning, in the setting where the agent is allowed to interact with its environment. Importantly, the algorithm does not require any additional interaction with the expert. It operates by training an ensemble of policies on the demonstration data, and using the disagreement in their predictions as a cost which is optimized through RL together with a supervised behavioral cloning cost. The motivation is that the policies in the ensemble will tend to agree on the set of states covered by the expert, leading to low cost, but are more likely to disagree on states not covered by the expert, leading to high cost. The RL cost\n∗Work done while at Microsoft Research.\nthus guides the agent back towards the distribution of the expert, while the supervised cost ensures that it mimics the expert within the expert’s distribution.\nOur theoretical results show that, subject to realizability and optimization oracle assumptions1, our algorithm obtains aO( κT ) regret bound, where κ is a measure which quantifies a tradeoff between the concentration of the demonstration data and the diversity of the ensemble outside the demonstration data. 
We evaluate DRIL empirically across multiple pixel-based Atari environments and continuous control tasks, and show that it matches or significantly outperforms behavioral cloning and generative adversarial imitation learning, often recovering expert performance with only a few trajectories." }, { "heading": "2 PRELIMINARIES", "text": "We consider episodic finite-horizon MDPs in this work. Denote by S the state space, A the action space, and Π the class of policies the learner is considering. Let T denote the task horizon and π? the expert policy whose behavior the learner is trying to mimic. For any policy π, let dπ denote the distribution over states induced by following π. Denote by C(s, a) the expected immediate cost of performing action a in state s, which we assume is bounded in [0, 1]. In the imitation learning setting, we do not necessarily know the true costs C(s, a), and instead we observe expert demonstrations. Our goal is to find a policy π which minimizes an observed surrogate loss ℓ between its actions and the actions of the expert under its induced distribution of states, i.e.

π̂ = arg min_{π∈Π} E_{s∼dπ}[ℓ(π(s), π?(s))] (1)

For the following, we will assume ℓ is the total variation distance (denoted by ‖·‖), which is an upper bound on the 0−1 loss. Our goal is thus to minimize the following quantity, which represents the distance between the actions taken by our policy π and the expert policy π?:

Jexp(π) = E_{s∼dπ}[‖π(·|s) − π?(·|s)‖] (2)

Denote by Qπ_t(s, a) the standard Q-function of the policy π, defined as Qπ_t(s, a) = E[ Σ_{τ=t}^{T} C(sτ, aτ) | (st, at) = (s, a), aτ ∼ π ]. The following result shows that if ℓ is an upper bound on the 0−1 loss and C satisfies certain smoothness conditions, then minimizing this loss to within ε translates into an O(εT) regret bound on the true task cost JC(π) = E_{s,a∼dπ}[C(s, a)]:

Theorem 1. (Ross et al., 2011) If π satisfies Jexp(π) = ε, and Qπ?_{T−t+1}(s, a) − Qπ?_{T−t+1}(s, π?) ≤ u for all time steps t, actions a and states s reachable by π, then JC(π) ≤ JC(π?) + uεT.

Unfortunately, it is often not possible to optimize Jexp directly, since it requires evaluating the expert policy on the states induced by following the current policy. The supervised behavioral cloning cost JBC, which is computed on states induced by the expert, is often used instead:

JBC(π) = E_{s∼dπ?}[‖π?(·|s) − π(·|s)‖] (3)

Minimizing this loss to within ε yields a quadratic regret bound:

Theorem 2. (Ross & Bagnell, 2010) Let JBC(π) = ε, then JC(π) ≤ JC(π?) + εT².

Furthermore, this bound is tight: as we will discuss later, there exist simple problems which match the worst-case lower bound." }, { "heading": "3 ALGORITHM", "text": "Our algorithm is motivated by two criteria: i) the policy should act similarly to the expert within the expert's data distribution, and ii) the policy should move towards the expert's data distribution if it is outside of it.

1We assume for the analysis the action space is discrete, but the state space can be large or infinite.

Algorithm 1 Disagreement-Regularized Imitation Learning (DRIL)
1: Input: Expert demonstration data D = {(si, ai)}_{i=1}^{N}
2: Initialize policy π and policy ensemble ΠE = {π1, ..., πE}
3: for e = 1, ..., E do
4:   Sample De ∼ D with replacement, with |De| = |D|.
5:   Train πe to minimize JBC(πe) on De to convergence.
6: end for
7: for i = 1, ... do
8:   Perform one gradient update to minimize JBC(π) using a minibatch from D.
9:   Perform one step of policy gradient to minimize E_{s∼dπ, a∼π(·|s)}[C^clip_U(s, a)].
10: end for
These two criteria are addressed by combining two losses: a standard behavior cloning loss, and an additional loss which represents the variance over the outputs of an ensemble ΠE = {π1, ..., πE} of policies trained on the demonstration data D. We call this the uncertainty cost, which is defined as:

C_U(s, a) = Var_{π∼ΠE}(π(a|s)) = (1/E) Σ_{i=1}^{E} ( πi(a|s) − (1/E) Σ_{j=1}^{E} πj(a|s) )²

The motivation is that the variance over plausible policies is high outside the expert's distribution, since the data is sparse, but it is low inside the expert's distribution, since the data there is dense. Minimizing this cost encourages the policy to return to regions of dense coverage by the expert. Intuitively, this is what we would expect the expert policy π? to do as well. The total cost which the algorithm optimizes is given by:

Jalg(π) = E_{s∼dπ?}[‖π?(·|s) − π(·|s)‖] + E_{s∼dπ, a∼π(·|s)}[C_U(s, a)],

where the first term is the behavior cloning cost JBC(π) and the second term is the uncertainty cost JU(π). The first term is computed over states generated by the expert policy, of which the demonstration data D is a representative sample. The second term is computed over the distribution of states generated by the current policy and can be optimized using policy gradient.

Note that the demonstration data is fixed, and this ensemble can be trained once offline. We then interleave the supervised behavioral cloning updates and the policy gradient updates which minimize the variance of the ensemble. The full algorithm is shown in Algorithm 1. We also found that dropout (Srivastava et al., 2014), which has been proposed as an approximate form of ensembling, worked well (see Appendix D).

In practice, for the supervised loss we optimize the KL divergence between the actions predicted by the policy and the expert actions, which is an upper bound on the total variation distance due to Pinsker's inequality. We also found it helpful to use a clipped uncertainty cost:

C^clip_U(s, a) = −1 if C_U(s, a) ≤ q, and +1 otherwise,

where the threshold q is a top quantile of the raw uncertainty costs computed over the demonstration data. The threshold q defines a normal range of uncertainty based on the demonstration data, and values above this range incur a positive cost (or negative reward).

The RL cost can be optimized using any policy gradient method. In our experiments we used advantage actor-critic (A2C) (Mnih et al., 2016) or PPO (Schulman et al., 2017), which estimate the expected cost using rollouts from multiple parallel actors all sharing the same policy (see Appendix C for details). We note that model-based RL methods could in principle be used as well if sample efficiency is a constraint.
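To make these costs concrete, here is a minimal sketch (ours, not the authors' code) of the uncertainty cost, the quantile threshold, and the clipped cost. It assumes each ensemble member is a callable mapping a state to a vector of action probabilities; the 0.98 default mirrors the 98th quantile used for the dropout variant in Appendix D.

import numpy as np

def uncertainty_cost(ensemble, state, action):
    # C_U(s, a): variance of pi_i(a|s) across the ensemble members.
    probs = np.array([pi(state)[action] for pi in ensemble])
    return probs.var()  # population variance, matching the definition above

def fit_threshold(ensemble, demos, quantile=0.98):
    # q: a top quantile of the raw uncertainty costs on the demonstration data.
    costs = [uncertainty_cost(ensemble, s, a) for s, a in demos]
    return float(np.quantile(costs, quantile))

def clipped_cost(ensemble, state, action, q):
    # C_U^clip(s, a): -1 inside the expert's normal uncertainty range, +1 above it.
    return -1.0 if uncertainty_cost(ensemble, state, action) <= q else 1.0

The value returned by clipped_cost is what line 9 of Algorithm 1 minimizes with policy gradient, interleaved with behavioral cloning minibatch updates on D.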
" }, { "heading": "4 ANALYSIS", "text": "" }, { "heading": "4.1 COVERAGE COEFFICIENT", "text": "We now analyze DRIL for MDPs with discrete action spaces and potentially large or infinite state spaces. We will show that, subject to assumptions that the policy class contains an optimal policy and that we are able to optimize costs to within ε of their global minimum, our algorithm obtains a regret bound which is linear in εκT, where κ is a quantity which depends on the environment dynamics, the expert distribution dπ?, and our learned ensemble. Intuitively, κ represents a tradeoff between how concentrated the demonstration data is and how high the variance of the ensemble is outside the expert distribution.

Assumption 1. (Realizability) π? ∈ Π.

Assumption 2. (Optimization Oracle) For any given cost function J, our minimization procedure returns a policy π̂ ∈ Π such that J(π̂) ≤ min_{π∈Π} J(π) + ε.

The motivation behind our algorithm is that the policies in the ensemble agree inside the expert's distribution and disagree outside of it. This defines a reward function which pushes the learner back towards the expert's distribution if it strays away. However, what constitutes inside and outside the distribution, or sufficient agreement or disagreement, is ambiguous. Below we introduce quantities which make these ideas precise.

Definition 1. For any set U ⊆ S, define the concentrability inside of U as α(U) = max_{π∈Π} sup_{s∈U} dπ(s)/dπ?(s).

The notion of concentrability has been previously used to give bounds on the performance of value iteration (Munos & Szepesvári, 2008). For a set U, α(U) will be low if the expert distribution has high mass at the states in U that are reachable by policies in the policy class.

Definition 2. Define the minimum variance of the ensemble outside of U as β(U) = min_{s∉U, a∈A} Var_{π∼ΠE}[π(a|s)].

We now define the κ coefficient as the minimum ratio of these two quantities over all possible subsets of S.

Definition 3. We define κ = min_{U⊆S} α(U)/β(U).

We can view κ as the quantity which minimizes the tradeoff over different subsets U between coverage by the expert policy inside of U, and variance of the ensemble outside of U." }, { "heading": "4.2 REGRET BOUND", "text": "We now establish a relationship between the κ coefficient just defined, the cost our algorithm optimizes, and Jexp defined in Equation (2), which we would ideally like to minimize and which translates into a regret bound. All proofs can be found in Appendix A.

Lemma 1. For any π ∈ Π, we have Jexp(π) ≤ κJalg(π).

This result shows that if κ is not too large, and we are able to make our cost function Jalg(π) small, then we can ensure Jexp(π) is also small. This result is only useful if our cost function can indeed achieve a small minimum. The next lemma shows that this is the case.

Lemma 2. min_{π∈Π} Jalg(π) ≤ 2ε.

Here ε is the threshold specified in Assumption 2. Combining these two lemmas with the previous result of Ross et al. (2011), we get a regret bound which is linear in εκT.

Theorem 3. Let π̂ be the result of minimizing Jalg using our optimization oracle, and assume that Qπ?_{T−t+1}(s, a) − Qπ?_{T−t+1}(s, π?) ≤ u for all actions a, time steps t and states s reachable by π. Then π̂ satisfies JC(π̂) ≤ JC(π?) + 3uκεT.

Our bound is an improvement over that of behavior cloning if κ is less than O(T). Note that DRIL does not require knowledge of κ. The quantity κ is problem-dependent and depends on the environment dynamics, the expert policy and the policies in the learned ensemble. We next compute κ exactly for a problem for which behavior cloning is known to perform poorly, and show that it is independent of T.

Example 1. Consider the tabular MDP given in (Ross & Bagnell, 2010) as an example of a problem where behavioral cloning incurs quadratic regret, shown in Figure 1. There are 3 states S = (s0, s1, s2) and two actions (a1, a2). Each policy π can be represented as a set of probabilities π(a1|s) for each state s ∈ S (note that π(a2|s) = 1 − π(a1|s)). Assume the models in our ensemble are drawn from a posterior p(π(a1|s)|D) given by a Beta distribution with parameters Beta(n1 + 1, n2 + 1), where n1, n2 are the number of times the pairs (s, a1) and (s, a2) occur, respectively, in the demonstration data D.
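For concreteness, the following small sketch (ours, with hypothetical demonstration counts) draws such a posterior ensemble; the disagreement is near zero at visited states and large at the unvisited state, whose posterior is Beta(1, 1), i.e. Uniform(0, 1).

import numpy as np

rng = np.random.default_rng(0)
# counts[s] = (n1, n2): times (s, a1) and (s, a2) appear in D (hypothetical values,
# consistent with an expert taking a1 at s0 and a2 at s1; s2 is never visited).
counts = {"s0": (3, 0), "s1": (0, 27), "s2": (0, 0)}

def sample_policy():
    # Draw pi(a1|s) for every state from the Beta(n1 + 1, n2 + 1) posterior.
    return {s: rng.beta(n1 + 1, n2 + 1) for s, (n1, n2) in counts.items()}

ensemble = [sample_policy() for _ in range(5)]
variances = {s: np.var([pi[s] for pi in ensemble]) for s in counts}

Negating this per-state variance gives the auxiliary reward used in the tabular experiment of Section 6.1.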
The agent always starts in s0 and the expert's policy is given by π?(a1|s0) = 1, π?(a1|s1) = 0, π?(a1|s2) = 1. For any (s, a) pair, the task cost is C(s, a) = 0 if a = π?(s) and 1 otherwise. Here dπ? = (1/T, (T−1)/T, 0). For any π, dπ(s0) = 1/T and dπ(s1) ≤ (T−1)/T due to the dynamics of the MDP, so dπ(s)/dπ?(s) ≤ 1 for s ∈ {s0, s1}. Writing out α({s0, s1}), we get: α({s0, s1}) = max_{π∈Π} sup_{s∈{s0,s1}} dπ(s)/dπ?(s) ≤ 1.

Furthermore, since s2 is never visited in the demonstration data, for each policy πi in the ensemble we have πi(a1|s2), πi(a2|s2) ∼ Beta(1, 1) = Uniform(0, 1). It follows that Var_{π∼ΠE}(π(a|s2)) is approximately equal (via Hoeffding's inequality, with probability 1 − δ the two differ by at most O(√(log(1/δ)/|ΠE|))) to the variance of a uniform distribution over [0, 1], i.e. 1/12. Therefore:

κ = min_{U⊆S} α(U)/β(U) ≤ α({s0, s1})/β({s0, s1}) ≲ 1/(1/12) = 12.

Applying our result from Theorem 3, we see that our algorithm obtains an O(εT) regret bound on this problem, in contrast to the O(εT²) regret of behavioral cloning. (Observe that a policy with π(a1|s0) = 1 − εT, π(a2|s1) = εT, π(a2|s2) = 1 has a behavioral cloning cost of ε but a regret of εT².)" }, { "heading": "5 RELATED WORK", "text": "The idea of learning through imitation dates back at least to the work of (Pomerleau, 1989), who trained a neural network to imitate the steering actions of a human driver using images as input. The problem of covariate shift was already observed, as the author notes: “the network must not solely be shown examples of accurate driving, but also how to recover once a mistake has been made”.

This issue was formalized in the work of (Ross & Bagnell, 2010), who on one hand proved an O(εT²) regret bound, and on the other hand provided an example showing this bound is tight. The subsequent work (Ross et al., 2011) proposed the DAGGER algorithm which obtains linear regret, provided the agent can both interact with the environment and query the expert policy. Our approach also requires environment interaction, but importantly does not need to query the expert. Also of note is the work of (Venkatraman et al., 2015), which extended DAGGER to time series prediction problems by using the true targets as expert corrections.

Imitation learning has been used within the context of modern RL to help improve sample efficiency (Chang et al., 2015; Ross & Bagnell, 2014; Sun et al., 2017; Hester et al., 2018; Le et al., 2018; Cheng & Boots, 2018) or overcome exploration (Nair et al., 2017). These settings assume the reward is known and that the policies can then be fine-tuned with reinforcement learning. In this case, covariate shift is less of an issue since it can be corrected using the reinforcement signal.

The work of (Luo et al., 2019) also proposed a method to address the covariate shift problem when learning from demonstrations when the reward is known, by conservatively extrapolating the value function outside the training distribution using negative sampling. This addresses a different setting from ours, and requires generating plausible states which are off the manifold of training data, which may be challenging when the states are high dimensional such as images. The work of (Reddy et al., 2019) proposed to treat imitation learning within the Q-learning framework, setting a positive reward for all transitions inside the demonstration data and zero reward for all other transitions in the replay buffer.
This rewards the agent for repeating (or returning to) the expert's transitions. The work of (Sasaki et al., 2019) also incorporates a mechanism for reducing covariate shift by fitting a Q-function that classifies whether the demonstration states are reachable from the current state. Random Expert Distillation (Wang et al., 2019) uses Random Network Distillation (RND) (Burda et al., 2019) to estimate the support of the expert's distribution in state-action space, and minimizes an RL cost designed to guide the agent towards the expert's support. This is related to our method, but differs in that it minimizes the RND prediction error rather than the ensemble variance and does not include a behavior cloning cost. The behavior cloning cost is essential to our theoretical results and avoids certain failure modes; see Appendix B for more discussion.

Generative Adversarial Imitation Learning (GAIL) (Ho & Ermon, 2016) is a state-of-the-art algorithm which addresses the same setting as ours. It operates by training a discriminator network to distinguish expert states from states generated by the current policy, and the negative output of the discriminator is used as a reward signal to train the policy. The motivation is that states which are outside the training distribution will be assigned a low reward while states which are close to it will be assigned a high reward. This encourages the policy to return to the expert distribution if it strays away from it. However, the adversarial training procedure means that the reward function is changing over time, which can make the algorithm unstable or difficult to tune. In contrast, our approach uses a simple fixed reward function. We include comparisons to GAIL in our experiments.

Using disagreement between models in an ensemble to represent uncertainty has recently been explored in several contexts. The works of (Shyam et al., 2018; Pathak et al., 2019; Henaff, 2019) used disagreement between different dynamics models to drive exploration in the context of model-based RL. Conversely, (Henaff et al., 2019) used variance across different dropout masks to prevent policies from exploiting error in dynamics models. Ensembles have also been used to represent uncertainty over Q-values in model-free RL in order to encourage exploration (Osband et al., 2016). Within the context of imitation learning, the work of (Menda et al., 2018) used the variance of the ensemble together with the DAGGER algorithm to decide when to query the expert demonstrator to minimize unsafe situations. Here, we use disagreement between different policies trained on demonstration data to address covariate shift in the context of imitation learning." }, { "heading": "6 EXPERIMENTS", "text": "" }, { "heading": "6.1 TABULAR MDPS", "text": "As a first experiment, we applied DRIL to the tabular MDP of (Ross & Bagnell, 2010) shown in Figure 1. We computed the posterior over the policy parameters given the demonstration data using a separate Beta distribution for each state s with parameters determined by the number of times each action was performed in s. For behavior cloning, we sampled a single policy from this posterior. For DRIL, we sampled an ensemble of 5 policies and used their negative variance to define an additional reward function.
We combined this with a reward given by the probability density of a given state-action pair under the posterior distribution, which corresponds to the supervised learning loss, and used tabular Q-learning to optimize the sum of these two reward functions. This experiment was repeated 500 times for time horizon lengths up to 500 and N = 1, 5, 10 expert demonstration trajectories.

Figure 2 shows plots of the regret over the 500 different trials across different time horizons. Although BC achieves good average performance, it exhibits poor worst-case performance with some trials incurring very high regret, especially when using fewer demonstrations. Our method has low regret across all trials, which stays close to constant independently of the time horizon, even with a single demonstration. This performance is better than that suggested by our analysis, which showed a worst-case linear bound with respect to time horizon." }, { "heading": "6.2 ATARI ENVIRONMENTS", "text": "We next evaluated our approach on six different Atari environments. We used pretrained PPO (Schulman et al., 2017) agents from the stable baselines repository (Hill et al., 2018) to generate N = {1, 3, 5, 10, 15, 20} expert trajectories. We compared against two other methods: standard behavioral cloning (BC) and Generative Adversarial Imitation Learning (GAIL). Results are shown in Figure 3a. DRIL outperforms behavioral cloning across most environments and numbers of demonstrations, often by a substantial margin. In many cases, our method is able to match the expert's performance using a small number of trajectories. Figure 3b shows the evolution of the uncertainty cost and the policy reward throughout training. In all cases, the reward improves while the uncertainty cost decreases.

We were not able to obtain meaningful performance for GAIL on these domains, despite performing a hyperparameter search across learning rates for the policy and discriminator, and across different numbers of discriminator updates. We additionally experimented with clipping rewards in an effort to stabilize performance. These results are consistent with those of (Reddy et al., 2019), who also reported negative results when running GAIL on images. While improved performance might be possible with more sophisticated adversarial training techniques, we note that this contrasts with our method, which uses a fixed reward function obtained through simple supervised learning.

In Appendix D we provide ablation experiments examining the effects of the cost function clipping and the role of the BC loss. We also compare the ensemble approach to a dropout-based approximation and show that DRIL works well in both cases." }, { "heading": "6.3 CONTINUOUS CONTROL", "text": "We next report results of running our method on 6 different continuous control tasks from the PyBullet and OpenAI Gym (Brockman et al., 2016) environments. We again used pretrained agents to generate expert demonstrations, and compared to Behavior Cloning and GAIL.

Results for all methods are shown in Figure 4. In these environments we found Behavior Cloning to be a much stronger baseline than for the Atari environments: in several tasks it was able to match expert performance using as few as 3 trajectories, suggesting that covariate shift may be less of an issue. Our method performs similarly to Behavior Cloning on most tasks, except on Walker2D, where it yields improved performance for N = 1, 3, 5 trajectories.
GAIL performs\n5https://github.com/bulletphysics/bullet3/tree/master/examples/ pybullet/gym/pybullet_envs/examples\nsomewhat better than DRIL on HalfCheetah and Walker2D, but performs worse than both DRIL and BC on LunarLander and BipedalWalkerHardcore. The fact that DRIL is competitive across all tasks provides evidence of its robustness." }, { "heading": "7 CONCLUSION", "text": "Addressing covariate shift has been a long-standing challenge in imitation learning. In this work, we have proposed a new method to address this problem by penalizing the disagreement between an ensemble of different policies trained on the demonstration data. Importantly, our method requires no additional labeling by an expert. Our experimental results demonstrate that DRIL can often match expert performance while using only a small number of trajectories across a wide array of tasks, ranging from tabular MDPs to pixel-based Atari games and continuous control tasks. On the theoretical side, we have shown that our algorithm can provably obtain a low regret bound for problems in which the κ parameter is low.\nThere are multiple directions for future work. On the theoretical side, characterizing the κ parameter on a larger array of problems would help to better understand the settings where our method can expect to do well. Empirically, there are many other settings in structured prediction (Daumé et al., 2009) where covariate shift is an issue and where our method could be applied. For example, in dialogue and language modeling it is common for generated text to become progressively less coherent as errors push the model off the manifold it was trained on. Our method could potentially be used to fine-tune language or translation models (Cho et al., 2014; Welleck et al., 2019) after training by applying our uncertainty-based cost function to the generated text." }, { "heading": "A PROOFS", "text": "Lemma 1. For any π ∈ Π we have Jexp(π) ≤ κJalg(π)\nProof. We will first show that for any π ∈ Π and U ⊆ S, we have Jexp(π) ≤ α(U)β(U)Jalg(π). We can rewrite this as:\nJexp(π) = Es∼dπ [ ‖π(·|s)− π?(·|s)‖ ] = Es∼dπ [ I(s ∈ U)‖π(·|s)− π?(·|s)‖ ] + Es∼dπ [ I(s /∈ U)‖π(·|s)− π?(·|s)‖ ]\nWe begin by bounding the first term:\nEs∼dπ [ I(s ∈ U)‖π(·|s)− π?(·|s)‖ ] = ∑ s∈U dπ(s)‖π(·|s)− π?(·|s)‖\n= ∑ s∈U dπ(s) dπ?(s) dπ?(s)‖π(·|s)− π?(·|s)‖\n≤ ∑ s∈U ( max π′∈Π sup s∈U dπ′(s) dπ?(s) ) ︸ ︷︷ ︸\nα(U)\ndπ?(s)‖π(·|s)− π?(·|s)‖\n= α(U) ∑ s∈U dπ?(s)‖π(·|s)− π?(·|s)‖\n≤ α(U) ∑ s∈S dπ?(s)‖π(·|s)− π?(·|s)‖\n= α(U)Es∼dπ? [ ‖π(·|s)− π?(·|s)‖ ] = α(U)JBC(π)\nWe next bound the second term:\nEs∼dπ [ I(s /∈ U)‖π(·|s)− π?(·|s)‖ ] ≤ Es∼dπ [ I(s /∈ U) ] ≤ Es∼dπ [ I(s /∈ U)mina∈AVarπi∼ΠE [πi(a|s)]\nβ(U) ] = 1\nβ(U) Es∼dπ\n[ I(s /∈ U) ∑ a∈A π(a|s)Varπi∼ΠE [πi(a|s)] ]\n= 1 β(U) ∑ s/∈U dπ(s) ∑ a∈A\nπ(a|s)Varπi∼ΠE [πi(a|s)]︸ ︷︷ ︸ A(π)\nNow observe we can decompose the RL cost as follows:\nJU(π) = Es∼dπ,a∼π(·|s) [ Varπi∼ΠEπi(a|s) ] = ∑ s dπ(s) ∑ a π(a|s) [ Varπi∼ΠEπi(a|s)\n] = ∑ s∈U dπ(s) ∑ a π(a|s) [ Varπi∼ΠEπi(a|s) ] ︸ ︷︷ ︸\nB(π)\n+ ∑ s/∈U dπ(s) ∑ a π(a|s) [ Varπi∼ΠEπi(a|s) ] ︸ ︷︷ ︸\nA(π)\nPutting these together, we get the following:\nJexp(π) ≤ α(U)JBC(π) + 1\nβ(U) A(π)\n= α(U)β(U) β(U) JBC(π) + α(U) α(U)β(U) A(π)\n≤ α(U) β(U) JBC(π) + α(U) β(U) A(π)\n≤ α(U) β(U)\n( JBC(π) +A(π) ) ≤ α(U) β(U) ( JBC(π) + JU(π)\n) = α(U) β(U) Jalg(π)\nHere we have used the fact that β(U) ≤ 1 since 0 ≤ π(a|s) ≤ 1 and α(U) ≥ sups∈U d?π(s) d?π(s) = 1 hence 1α(U) ≤ 1. Taking the minimum over subsets U ⊆ S, we get Jexp(π) ≤ κJalg(π).\nLemma 2. minπ∈Π Jalg(π) ≤ 2\nProof. 
Plugging the optimal policy into Jalg, we get:\nJalg(π ?) = JBC(π ?) + JU(π ?) = 0 + Es∼dπ? ,a∼π?(·|s) [ Varπi∼ΠE [πi(a|s)] ] = Es∼dπ? ,a∼π?(·|s) [ 1 E E∑ i=1 ( πi(a|s)− π̄(a|s)\n)2] ≤ Es∼dπ? ,a∼π?(·|s) [ 1 E E∑ i=1 ( πi(a|s)− π?(a|s) )2 + ( π̄(a|s)− π?(a|s)\n)2] = Es∼dπ? ,a∼π?(·|s) [ 1 E E∑ i=1 ( πi(a|s)− π?(a|s) )2] ︸ ︷︷ ︸\nTerm1\n+Es∼dπ? ,a∼π?(·|s) [( π̄(a|s)− π?(a|s) )2] ︸ ︷︷ ︸\nTerm2\nWe will first bound Term 1:\nEs∼dπ? ,a∼π?(·|s) [ 1 E E∑ i=1 ( πi(a|s)− π?(a|s) )2] = 1 E Es∼dπ? [∑ a∈A π?(a|s) E∑ i=1 ( πi(a|s)− π?(a|s) )2]\n≤ 1 E Es∼dπ? [∑ a∈A π?(a|s) E∑ i=1 ∣∣∣πi(a|s)− π?(a|s)∣∣∣]\n≤ 1 E Es∼dπ? [ E∑ i=1 ∑ a∈A ∣∣∣πi(a|s)− π?(a|s)∣∣∣]\n≤ 1 E E∑ i=1 Es∼dπ? [ ‖πi(·|s)− π?(·|s)‖ ] ≤ 1 E E∑ i=1\n=\nWe will next bound Term 2:\nEs∼dπ? ,a∼π?(·|s) [( π̄(a|s)− π?(a|s) )2] = Es∼dπ? ,a∼π?(·|s) [( π?(a|s)− 1\nE E∑ i=1 πi(a|s) )2]\n= Es∼dπ? ,a∼π?(·|s) [( 1 E E∑ i=1 π?(a|s)− 1 E E∑ i=1 πi(a|s) )2]\n= Es∼dπ? ,a∼π?(·|s) [( 1 E E∑ i=1 (π?(a|s)− πi(a|s)) )2]\n≤ Es∼dπ? ,a∼π?(·|s) [ 1 E2 E E∑ i=1 ( π?(a|s)− πi(a|s) )2] (Cauchy − Schwarz)\n= 1\nE E∑ i=1 Es∼dπ? ,a∼π?(·|s) [( π?(a|s)− πi(a|s) )2] ≤ 1 E E∑ i=1 Es∼dπ? ,a∼π?(·|s) [∣∣∣π?(a|s)− πi(a|s)∣∣∣]\n≤ 1 E E∑ i=1 Es∼dπ? [ ‖π?(·|s)− πi(·|s)‖ ] = 1\nE E∑ i=1 JBC(πi)\n≤\nThe last step follows from our optimization oracle assumption: 0 ≤ minπ∈Π JBC(π) ≤ JBC(π?) = 0, hence JBC(πi) ≤ 0 + = . Combining the bounds on the two terms, we get Jalg(π?) ≤ 2 . Since π? ∈ Π, the result follows.\nTheorem 1. Let π̂ be the result of minimizing Jalg using our optimization oracle, and assume that Qπ ?\nT−t+1(s, a) − Qπ ? T−t+1(s, π ?) ≤ u for all a ∈ A, t ∈ {1, 2, ..., T}, dtπ(s) > 0. Then π̂ satisfies\nJ(π̂) ≤ J(π?) + 3uκ T .\nProof. By our optimization oracle and Lemma 2, we have\nJalg(π̂) ≤ min π∈Π Jalg(π) +\n≤ 2 + = 3\nCombining with Lemma 1, we get:\nJexp(π̂) ≤ κJalg(π̂) ≤ 3κ\nApplying Theorem 1 from (Ross et al., 2011), we get J(π̂) ≤ J(π?) + 3uκ T .\nB IMPORTANCE OF BEHAVIOR CLONING COST\nThe following example shows how minimizing the uncertainty cost alone without the BC cost can lead to highly sub-optimal policies if the demonstration data is generated by a stochastic policy which is only slightly suboptimal. Consider the following deterministic chain MDP:\nThe agent always starts in s1, and gets a reward of 1 in s3 and 0 elsewhere. The optimal policy is given by:\nπ?(·|s0) = (0, 1) π?(·|s1) = (0, 1) π?(·|s2) = (0, 1) π?(·|s3) = (0, 1)\nAssume the demonstration data is generated by the following policy, which is only slightly suboptimal:\nπdemo(·|s0) = (0, 1) πdemo(·|s1) = (0, 1) πdemo(·|s2) = (0.1, 0.9) πdemo(·|s3) = (0, 1)\nLet us assume realizability and perfect optimization for simplicity. If both transitions (s2, a0) and (s2, a1) appear in the demonstration data, then Random Expert Distillation (RED) will assign zero\ncost to both transitions. If we do not use bootstrapped samples to train the ensemble, then DRIL without the BC cost (we will call this UO-DRIL for Uncertainty-Only DRIL) will also assign zero cost to both transitions since all models in the ensemble would recover the Bayes optimal solution given the demonstration data. 
If we are using bootstrapped samples, then the Bayes optimal solution for each bootstrapped sample may differ and thus the different policies in the ensemble might disagree in their predictions, although given enough demonstration data we would expect these differences (and thus the uncertainty cost) to be small.

Note also that since no samples at the state s0 occur in the demonstration data, both RED and UO-DRIL will likely assign high uncertainty costs to state-action pairs at (s0, a0), (s0, a1) and thus avoid highly suboptimal policies which get stuck at s0.

Now consider policies π̂1, π̂2 given by:

π̂1(·|s0) = (0, 1), π̂1(·|s1) = (0, 1), π̂1(·|s2) = (1, 0), π̂1(·|s3) = (0, 1)

and

π̂2(·|s0) = (0, 1), π̂2(·|s1) = (0, 1), π̂2(·|s2) = (0.2, 0.8), π̂2(·|s3) = (0, 1)

Both of these policies only visit state-action pairs which are visited by the demonstration policy. In the case described above, both RED and UO-DRIL will assign π̂1 and π̂2 similarly low costs. However, π̂1 will cycle forever between s1 and s2, never collecting reward, while π̂2 will with high probability reach s3 and stay there, thus achieving high reward. This shows that minimizing the uncertainty cost alone does not necessarily distinguish between good and bad policies. However, π̂1 will incur a higher BC cost than π̂2, since π̂2 more closely matches the demonstration data at s2. This shows that including the BC cost can be important for further disambiguating between policies which all stay within the distribution of the demonstration data, but have different behavior within that distribution." }, { "heading": "C EXPERIMENTAL DETAILS", "text": "" }, { "heading": "C.1 ATARI ENVIRONMENTS", "text": "All behavior cloning models were trained to minimize the negative log-likelihood classification loss on the demonstration data for 500 epochs using Adam (Kingma & Ba, 2014) and a learning rate of 2.5 · 10−4. We stopped training once the validation error did not improve for 20 epochs. For our method, we initially performed a hyperparameter search on Space Invaders over the values shown in Table 1.

We then chose the best values and kept those hyperparameters fixed for all other environments. All other A2C hyperparameters follow the default values in the repo (Kostrikov, 2018): policy networks consisted of 3-layer convolutional networks with 8−32−64 feature maps followed by a single-layer MLP with 512 hidden units.

For GAIL, we used the implementation in (Kostrikov, 2018) and replaced the MLP discriminator by a CNN discriminator with the same architecture as the policy network. We initially performed a hyperparameter search on Breakout with 10 demonstrations over the values shown in Table 2. However, we did not find any hyperparameter configuration which performed better than behavioral cloning.

C.2 CONTINUOUS CONTROL

All behavior cloning and ensemble models were trained to minimize the mean-squared error regression loss on the demonstration data for 500 epochs using Adam (Kingma & Ba, 2014) and a learning rate of 2.5 · 10−4. Policy networks were 2-layer fully-connected MLPs with tanh activations and 64 hidden units." }, { "heading": "D ABLATION EXPERIMENTS", "text": "In this section we provide ablation experiments examining the effects of the cost function clipping and the role of the BC loss. We also compare the ensemble approach to a dropout-based approximation and show that DRIL works well in both cases.

Results are shown in Figure 4. First, switching from the clipped cost in {−1,+1} to the raw cost causes a drop in performance.
One explanation may be that since the raw costs are always positive (which corresponds to a reward which is always negative), the agent may learn to terminate the episode early in order to minimize the total cost incurred. Using a cost/reward which has both positive and negative values avoids this behavior.

Second, optimizing the pure BC cost performs better than the pure uncertainty cost for some environments (SpaceInvaders, BeamRider) while optimizing the pure uncertainty cost performs better than BC in Breakout. DRIL, which optimizes both, has robust performance and performs the best over all environments.

For the dropout approximation we trained a single policy network with a dropout rate of 0.1 applied to all layers except the last, and estimated the variance for each state-action pair using 5 different dropout masks. Similarly to the ensemble approach, we computed the 98th quantile of the variance on the demonstration data and used this value in our clipped cost. MC-dropout performs similarly to the ensembling approach, which shows that our method can be paired with different approaches to posterior estimation.
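For concreteness, a minimal PyTorch sketch (ours; the architecture and sizes are placeholders) of this MC-dropout uncertainty estimate:

import torch
import torch.nn as nn

policy = nn.Sequential(                      # placeholder architecture
    nn.Linear(4, 64), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(64, 2), nn.Softmax(dim=-1),
)

def dropout_uncertainty(state, n_masks=5):
    # Variance of action probabilities over independent dropout masks,
    # used in place of the ensemble variance C_U.
    policy.train()                           # keep dropout stochastic at inference
    with torch.no_grad():
        probs = torch.stack([policy(state) for _ in range(n_masks)])
    return probs.var(dim=0, unbiased=False)  # per-action population variance

print(dropout_uncertainty(torch.randn(4)))

" } ]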
2020
DISAGREEMENT-REGULARIZED IMITATION LEARNING
SP:2bf8148e5dadace0b6dd4b9f715fa8261f2a52db
[ "This paper is well-written and it provides a convergence result for traditional Q-learning, with linear function approximation, when using an Adam-like update (AMSGrad). It does the same for a variation of this algorithm where the momentum-like term is reset every now and then. This second result is not that exciting as it ends up concluding that the “best way” to converge with such an approach is by resetting the momentum-like term rarely. That being said, it is still interesting to have such theoretical result. On the empirical side, this paper evaluates the traditional Q-learning algorithm with non-linear function approximation (through a neural network), using Adam (and AdamR) while not using a target network, in both an LQR problem and a subset of the Atari games. The empirical results are not necessarily that convincing and there are important details missing. I’m willing to increase my score if my concerns w.r.t. the empirical validation are addressed since this paper presents a potentially interesting theoretical result with Adam, which is so prominent in the literature nowadays.", "This paper claims to propose a method to train q-based agents that use “alternating” Q-learning. However, the alternating approach given in the paper appears to be the normal Bellman update implemented in most versions of DQN. Furthermore, the citation given for AltQ (Mnih et al. 2016) makes no mention of the term “Alternating Q learning”." ]
Differently from the popular Deep Q-Network (DQN) learning, Alternating Q-learning (AltQ) does not fully fit a target Q-function at each iteration, and is generally known to be unstable and inefficient. Limited applications of AltQ mostly rely on substantially altering the algorithm architecture in order to improve its performance. Although Adam appears to be a natural solution, its performance in AltQ has rarely been studied before. In this paper, we first provide a solid exploration of how well AltQ performs with Adam. We then take a further step to improve the implementation by adopting the technique of parameter restart. More specifically, the proposed algorithms are tested on a batch of Atari 2600 games and exhibit superior performance to the DQN learning method. The convergence rate of a slightly modified version of the proposed algorithms is characterized under linear function approximation. To the best of our knowledge, this is the first theoretical study of Adam-type algorithms in Q-learning.
[]
[ { "authors": [ "Joshua Achiam", "Ethan Knight", "Pieter Abbeel" ], "title": "Towards characterizing divergence in deep Q-learning", "venue": "arXiv preprint arXiv:1903.08894,", "year": 2019 }, { "authors": [ "Itamar Arel", "Cong Liu", "T Urbanik", "A.G. Kohls" ], "title": "Reinforcement learning-based multi-agent system for network traffic signal control", "venue": "IET Intelligent Transport Systems,", "year": 2010 }, { "authors": [ "Marc G Bellemare", "Yavar Naddaf", "Joel Veness", "Michael Bowling" ], "title": "The arcade learning environment: An evaluation platform for general agents", "venue": "Journal of Artificial Intelligence Research,", "year": 2013 }, { "authors": [ "Jalaj Bhandari", "Daniel Russo", "Raghav Singal" ], "title": "A finite time analysis of temporal difference learning with linear function approximation", "venue": "arXiv preprint arXiv:1806.02450,", "year": 2018 }, { "authors": [ "Shehroze Bhatti", "Alban Desmaison", "Ondrej Miksik", "Nantas Nardelli", "N. Siddharth", "Philip H.S. Torr" ], "title": "Playing doom with slam-augmented deep reinforcement learning", "venue": "arXiv preprint arXiv:1612.00380,", "year": 2016 }, { "authors": [ "Qi Cai", "Zhuoran Yang", "Jason D Lee", "Zhaoran Wang" ], "title": "Neural temporal-difference learning converges to global optima", "venue": "arXiv preprint arXiv:1905.10027,", "year": 2019 }, { "authors": [ "Xiangyi Chen", "Sijia Liu", "Ruoyu Sun", "Mingyi Hong" ], "title": "On the convergence of a class of Adam-type algorithms for non-convex optimization", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Zaiwei Chen", "Sheng Zhang", "Thinh T. Doan", "Siva Theja Maguluri", "John-Paul Clarke" ], "title": "Finite-time analysis of Q-learning with linear function approximation", "venue": "arXiv preprint arXiv:1905.11425,", "year": 2019 }, { "authors": [ "Adithya M Devraj", "Sean P Meyn" ], "title": "Fastest convergence for Q-learning", "venue": "arXiv preprint arXiv:1707.03770,", "year": 2017 }, { "authors": [ "Simon S Du", "Yuping Luo", "Ruosong Wang", "Hanrui Zhang" ], "title": "Provably efficient Q-learning with function approximation via distribution shift error checking oracle", "venue": null, "year": 1906 }, { "authors": [ "John Duchi", "Elad Hazan", "Yoram Singer" ], "title": "Adaptive subgradient methods for online learning and stochastic optimization", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Eyal Even-Dar", "Yishay Mansour" ], "title": "Learning rates for Q-learning", "venue": "Journal of Machine Learning Research,", "year": 2003 }, { "authors": [ "Euhanna Ghadimi", "Hamid Reza Feyzmahdavian", "Mikael Johansson" ], "title": "Global convergence of the heavy-ball method for convex optimization", "venue": "In Proceeding of IEEE European Control Conference (ECC), pp", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Ethan Knight", "Osher Lerner" ], "title": "Natural gradient deep Q-learning", "venue": "arXiv preprint arXiv:1803.07482,", "year": 2018 }, { "authors": [ "F.L. Lewis", "D. 
Vrabie" ], "title": "Reinforcement learning and adaptive dynamic programming for feedback control", "venue": "IEEE Circuits and Systems Magazine,", "year": 2009 }, { "authors": [ "Frank L Lewis", "Kyriakos G Vamvoudakis" ], "title": "Reinforcement learning for partially observable dynamic processes: Adaptive dynamic programming using measured output data", "venue": "IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics),", "year": 2011 }, { "authors": [ "Timothy P. Lillicrap", "Jonathan J. Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "In 4th International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Tyler Lu", "Dale Schuurmans", "Craig Boutilier" ], "title": "Non-delusional Q-learning and value-iteration", "venue": "In Proceedings of the Thirty-second Conference on Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Borislav Mavrin", "Hengshuai Yao", "Linglong Kong" ], "title": "Deep reinforcement learning with decorrelation", "venue": "arXiv preprint arXiv:1903.07765,", "year": 2019 }, { "authors": [ "Francisco S. Melo", "Sean P. Meyn", "M. Isabel Ribeiro" ], "title": "An analysis of reinforcement learning with function approximation", "venue": "In Proceedings of the 25th International Conference on Machine Learning,", "year": 2008 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Alex Graves", "Ioannis Antonoglou", "Daan Wierstra", "Martin Riedmiller" ], "title": "Playing atari with deep reinforcement learning", "venue": "arXiv preprint arXiv:1312.5602,", "year": 2013 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski", "Stig Petersen", "Charles Beattie", "Amir Sadik", "Ioannis Antonoglou", "Helen King", "Dharshan Kumaran", "Daan Wierstra", "Shane Legg", "Demis Hassabis" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Volodymyr Mnih", "Adria Puigdomenech Badia", "Mehdi Mirza", "Alex Graves", "Timothy Lillicrap", "Tim Harley", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": "In Proceedings of The 33rd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Yurii Nesterov" ], "title": "Introductory Lectures on Convex Optimization: A Basic Course, volume 87", "venue": "Springer Science & Business Media,", "year": 2013 }, { "authors": [ "Tran Thi Phuong", "Le Trieu Phong" ], "title": "On the convergence proof of AMSGrad and a new version", "venue": "arXiv preprint arXiv:1904.03590,", "year": 2019 }, { "authors": [ "Sashank J. 
Reddi", "Satyen Kale", "Sanjiv Kumar" ], "title": "On the convergence of Adam and beyond", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Tom Schaul", "John Quan", "Ioannis Antonoglou", "David Silver" ], "title": "Prioritized experience replay", "venue": "arXiv preprint arXiv:1511.05952,", "year": 2015 }, { "authors": [ "David Silver", "Guy Lever", "Nicolas Heess", "Thomas Degris", "Daan Wierstra", "Martin Riedmiller" ], "title": "Deterministic policy gradient algorithms", "venue": "In ICML,", "year": 2014 }, { "authors": [ "Phuong Thi Tran" ], "title": "On the convergence proof of AMSGrad and a new version", "venue": "IEEE Access,", "year": 2019 }, { "authors": [ "John N. Tsitsiklis", "Benjamin Van Roy" ], "title": "Analysis of temporal-diffference learning with function approximation", "venue": "Advances in Neural Information Processing Systems", "year": 1997 }, { "authors": [ "Kyriakos G Vamvoudakis" ], "title": "Q-learning for continuous-time linear systems: A model-free infinite horizon optimal control approach", "venue": "Systems & Control Letters,", "year": 2017 }, { "authors": [ "Hado Van Hasselt", "Arthur Guez", "David Silver" ], "title": "Deep reinforcement learning with double Qlearning", "venue": "In Thirtieth AAAI Conference on Artificial Intelligence,", "year": 2016 }, { "authors": [ "Draguna Vrabie", "O. Pastravanu", "Murad Abu-Khalaf", "Frank L. Lewis" ], "title": "Adaptive optimal control for continuous-time linear systems based on policy", "venue": "iteration. Automatica,", "year": 2009 }, { "authors": [ "Ziyu Wang", "Tom Schaul", "Matteo Hessel", "Hado Van Hasselt", "Marc Lanctot", "Nando De Freitas" ], "title": "Dueling network architectures for deep reinforcement learning", "venue": "In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48,", "year": 2016 }, { "authors": [ "Zhuora Yang", "Yuchen Xie", "Zhaoran Wang" ], "title": "A theoretical analysis of deep Q-learning", "venue": "arXiv preprint arXiv:1901.00137,", "year": 2019 }, { "authors": [ "Xiangyu Zhao", "Long Xia", "Liang Zhang", "Zhuoye Ding", "Dawei Yin", "Jiliang Tang" ], "title": "Deep reinforcement learning for page-wise recommendations", "venue": "In Proceedings of the 12th ACM Conference on Recommender Systems,", "year": 2018 }, { "authors": [ "Guanjie Zheng", "Fuzheng Zhang", "Zihan Zheng", "Yang Xiang", "Nicholas Jing Yuan", "Xing Xie", "Zhenhui Li" ], "title": "DRN: A deep reinforcement learning framework for news recommendation", "venue": "In Proceedings of the 2018 World Wide Web Conference on World Wide Web,", "year": 2018 }, { "authors": [ "Dongruo Zhou", "Yiqi Tang", "Ziyan Yang", "Yuan Cao", "Quanquan Gu" ], "title": "On the convergence of adaptive gradient methods for nonconvex optimization", "venue": "arXiv preprint arXiv:1808.05671,", "year": 2018 }, { "authors": [ "Zhenpeng Zhou", "Xiaocheng Li", "Richard N Zare" ], "title": "Optimizing chemical reactions with deep reinforcement learning", "venue": "ACS Central Science,", "year": 2017 }, { "authors": [ "Fangyu Zou", "Li Shen", "Zequn Jie", "Weizhong Zhang", "Wei Liu" ], "title": "A sufficient condition for convergences of Adam and RMSProp", "venue": "arXiv preprint arXiv:1811.09358,", "year": 2018 }, { "authors": [ "Shaofeng Zou", "Tengyu Xu", "Yingbin Liang" ], "title": "Finite-sample analysis for SARSA and Q-learning with linear function approximation", "venue": "arXiv preprint arXiv:1902.02234,", "year": 2019 }, { "authors": [ 
"Reddi" ], "title": "Different from the regret bound for AMSGrad", "venue": null, "year": 2019 } ]
[ { "heading": null, "text": "Differently from the popular Deep Q-Network (DQN) learning, Alternating Qlearning (AltQ) does not fully fit a target Q-function at each iteration, and is generally known to be unstable and inefficient. Limited applications of AltQ mostly rely on substantially altering the algorithm architecture in order to improve its performance. Although Adam appears to be a natural solution, its performance in AltQ has rarely been studied before. In this paper, we first provide a solid exploration on how well AltQ performs with Adam. We then take a further step to improve the implementation by adopting the technique of parameter restart. More specifically, the proposed algorithms are tested on a batch of Atari 2600 games and exhibit superior performance than the DQN learning method. The convergence rate of the slightly modified version of the proposed algorithms is characterized under the linear function approximation. To the best of our knowledge, this is the first theoretical study on the Adam-type algorithms in Q-learning." }, { "heading": "1 INTRODUCTION", "text": "Q-learning (Watkins & Dayan, 1992) is one of the most important model-free reinforcement learning (RL) problems, which has received considerable research attention in recent years (Bertsekas & Tsitsiklis, 1996; Even-Dar & Mansour, 2003; Hasselt, 2010; Lu et al., 2018; Achiam et al., 2019). When the state-action space is large or continuous, parametric approximation of the Q-function is often necessary. One remarkable success of parametric Q-learning in practice is its combination with deep learning, known as the Deep Q-Network (DQN) learning (Mnih et al., 2013; 2015). It has been applied to various applications in computer games (Bhatti et al., 2016), traffic control (Arel et al., 2010), recommendation systems (Zheng et al., 2018; Zhao et al., 2018), chemistry research (Zhou et al., 2017), etc. Its on-policy continuous variant (Silver et al., 2014) has also led to great achievements in robotics locomotion (Lillicrap et al., 2016).\nThe DQN algorithm is performed in a nested-loop manner, where the outer loop follows an one-step update of the Q-function (via the empirical Bellman operator for Q-learning), and the inner loop takes a supervised learning process to fit the updated (i.e., target) Q-function with a neural network. In practice, the inner loop takes a sufficiently large number of iterations under certain optimizer (e.g. stochastic gradient descent (SGD) or Adam) to fit the neural network well to the target Q-function.\nIn contrast, a conventional Q-learning algorithm runs only one SGD step in each inner loop, in which case the overall Q-learning algorithm updates the Q-function and fits the target Q-function alternatively in each iteration. We refer to such a Q-learning algorithm with alternating updates as Alternating Q-learning (AltQ). Although significantly simpler in the update rule, AltQ is well known to be unstable and have weak performance (Mnih et al., 2016). This is in part due to the fact that the inner loop does not fit the target Q-function sufficiently well. To fix this issue, Mnih et al. (2016) proposed a new exploration strategy and asynchronous sampling schemes over parallel computing units (rather than the simple replay sampling in DQN) in order for the AltQ algorithm to achieve comparable or better performance than DQN. As another alternative, Knight & Lerner (2018) proposed a more involved natural gradient propagation for AltQ to improve the performance. 
All these schemes require more sophisticated designs or hardware support, which may place AltQ less advantageous compared to the popular DQN, even with their better performances. This motivates us to ask the following first question.\n• Q1: Can we design a simple and easy variant of the AltQ algorithm, which uses as simple setup as DQN and does not introduce extra computational burden and heuristics, but still achieves better and more stable performance than DQN?\nIn this paper, we provide an affirmative answer by introducing novel lightweight designs to AltQ based on Adam. Although Adam appears to be a natural tool, its performance in AltQ has rarely been studied yet. Thus, we first provide a solid exploration on how well AltQ performs with Adam (Kingma & Ba, 2014), where the algorithm is referred to as AltQ-Adam. We then take a further step to improve the implementation of AltQ-Adam by adopting the technique of parameter restart (i.e., restart the initial setting of Adam parameters every a few iterations), and refer to the new algorithm as AltQ-AdamR. This is the first time that restart is applied for improving the performance of RL algorithms although restart has been used for conventional optimization before.\nIn a batch of 23 Atari 2600 games, our experiments show that both AltQ-Adam and AltQ-AdamR outperform the baseline performance of DQN by 50% on average. Furthermore, AltQ-AdamR effectively reduces the performance variance and achieves a much more stable learning process. In our experiments for the linear quadratic regulator (LQR) problems, AltQ-AdamR converges even faster than the model-based value iteration (VI) solution. This is a rather surprising result given that the model-based VI has been treated as the performance upper bound for the Q-learning (including DQN) algorithms with target update (Lewis & Vrabie, 2009; Yang et al., 2019).\nRegarding the theoretical analysis of AltQ algorithms, their convergence guarantee has been extensively studied (Melo et al., 2008; Chen et al., 2019b). More references are given in Section 1.1. However, all the existing studies focus on the AltQ algorithms that take a simple SGD step. Such theory is not applicable to the proposed AltQ-Adam and AltQ-AdamR that implement the Adam-type update. Thus, the second intriguing question we address here is described as follows.\n• Q2: Can we provide the convergence guarantee for AltQ-Adam and AltQ-AdamR or their slightly modified variants (if these two algorithms do not always converge by nature)?\nIt is well known in optimization that Adam does not always converge, and instead, a slightly modified variant AMSGrad proposed in Reddi et al. (2018) has been widely accepted as an alternative to justify the performance of Adam-type algorithms. Thus, our theoretical analysis here also focuses on such slightly modified variants AltQ-AMSGrad and AltQ-AMSGradR of the proposed algorithms. We show that under the linear function approximation (which is the structure that the current tools for analysis of Q-learning can handle), both AltQ-AMSGrad and AltQ-AMSGradR converge to the global optimal solution under standard assumptions for Qlearning. To the best of our knowledge, this is the first non-asymptotic convergence guarantee on Q-learning that incorporates Adam-type update and momentum restart. Furthermore, a slight adaptation of our proof provides the convergence rate for the AMSGrad for conventional strongly convex optimization which has not been studied before and can be of independent interest. 
Notations We use ‖x‖ := ‖x‖_2 = √(x^T x) to denote the ℓ2 norm of a vector x, and ‖x‖∞ = max_i |x_i| to denote the infinity norm. When x, y are both vectors, x/y, xy, x^2, √x are all calculated element-wise, a convention used in the updates of Adam and AMSGrad. We denote [n] = {1, 2, . . . , n}, and ⌊x⌋ ∈ Z as the largest integer such that ⌊x⌋ ≤ x < ⌊x⌋ + 1." }, { "heading": "1.1 RELATED WORK", "text": "Empirical performance of AltQ: AltQ algorithms that strictly follow the alternating updates are rarely used in practice, particularly in comparison with the well-accepted DQN learning and its improved variants: the dueling network structure (Wang et al., 2016), double Q-learning (Hasselt, 2010), and various exploration and sampling schemes (Schaul et al., 2015). Mnih et al. (2016) proposed asynchronous one-step Q-learning, which is conceptually close to AltQ and has competitive performance against DQN. However, the algorithm still relies on a slowly moving target network like DQN, and the multi-thread learning also complicates the computational setup. Lu et al. (2018) studied the problem of value overestimation and proposed the non-delusional Q-learning algorithm, which employs so-called pre-conditioned Q-networks and is also computationally complex. Knight & Lerner (2018) proposed a natural gradient propagation for AltQ to improve the performance, where the gradient implementation is complex. In this paper, we propose two simple and computationally efficient schemes to improve the performance of AltQ.\n\nTheoretical analysis of AltQ: Since it was proposed in Watkins & Dayan (1992), Q-learning has attracted great interest in theoretical analysis. The line of theoretical research on AltQ most relevant to our study concerns Q-learning with function approximation. A large number of works study Q-learning with linear function approximation, such as Bertsekas & Tsitsiklis (1996); Devraj & Meyn (2017); Zou et al. (2019); Chen et al. (2019b); Du et al. (2019), to name a few. More recently, convergence of AltQ with neural network parameterization was given in Cai et al. (2019), which exploits the linear structure of neural networks in the overparameterized regime for analysis. It is worth noting that all the existing analyses of AltQ with function approximation consider the simple SGD update, whereas our analysis in this paper focuses on the more involved Adam-type updates.\n\nConvergence analysis of Adam: Adam was proposed in Kingma & Ba (2014) and has achieved great success in training deep neural networks. Kingma & Ba (2014) and Reddi et al. (2018) provided regret bounds under the online convex optimization framework for Adam and AMSGrad, respectively. However, Tran et al. (2019) pointed out errors in the proofs of those two papers and corrected them. Recently, convergence analysis of Adam and AMSGrad in nonconvex optimization was provided in Zou et al. (2018); Zhou et al. (2018); Chen et al. (2019a); Phuong & Phong (2019), in which the Adam-type algorithms were guaranteed to converge to a stationary point. To the best of our knowledge, our study is the first convergence analysis of Adam-type algorithms for Q-learning." }, { "heading": "2 PRELIMINARIES", "text": "We consider a Markov decision process with a considerably large or continuous state space S ⊂ R^M and action space A ⊂ R^N, with a non-negative bounded reward function R : S × A → [0, R_max]. We define U(s) ⊂ A as the admissible set of actions at state s, and π : S → A as a feasible stationary policy.
We seek to solve a discrete-time sequential decision problem with γ ∈ (0, 1) as follows:\nmaximize π Jπ(s0) = EP [ ∞∑ t=0 γtR(st, π(st)) ] ,\nsubject to st+1 ∼ P (·|st, at). (1)\nLet J?(s) := Jπ?(s) be the optimal value function when applying the optimal policy π?. The corresponding optimal Q-function can be defined as\nQ?(s, a) := R(s, a) + γEPJ?(s′), (2)\nwhere s′ ∼ P (·|s, a) and we use the same notation hereafter when no confusion arises. In other words, Q?(s, a) is the reward of an agent who starts from state s and takes action a at the first step and then follows the optimal policy π? thereafter." }, { "heading": "2.1 ALTQ ALGORITHM", "text": "This paper focuses on the Alternating Q-learning (AltQ) algorithm that uses a parametric function Q̂(s, a; θ) to approximate the Q-function with a parameter θ of finite and relatively small dimension. The update rule of AltQ-learning is given by\nTQ̂(s, a; θt) = R(s, a) + γ max a′∈U(s′)\nQ̂(s′, a′; θt); (3)\nθt+1 = θt − αt ( Q̂t(s, a; θt)− TQ̂(s, a; θt) ) ∂ ∂θt Q̂t(s, a; θt), (4)\nwhere αt is the step size at time t. It is immediate from the equations that AltQ performs the update by taking one step temporal target update and one step parameter learning in an alternating fashion." }, { "heading": "2.2 DQN ALGORITHM", "text": "As DQN is also included in this work for performance comparison. We recall the update of DQN in the following as reference. Differently from AltQ, DQN updates the parameters in a nested loop. Within the t-th inner loop, DQN first obtains the target Q-function as in Equation (5), and then uses a\nneural network to fit the target Q-function by running Y steps of a certain optimization algorithm as Equation (6). The update rule of DQN is given as follows.\nTQ̂(s, a; θ0t ) = R(s, a) + γ max a′∈U(s′) Q̂(s′, a′; θ0t ), (5)\nθYt = Optimizer(θ 0 t , T Q̂(s, a; θ 0 t )), (6)\nwhere Optimizer can be SGD or Adam for example, and Equation (6) is thus a supervised learning process with TQ̂(s, a; θ0t )) as the ”supervisor”. At the t-th outer loop, DQN performs the so-called target update as θ0t+1 = (1− τ)θ0t + τθYt . (7) In practice, when one of the momentum-based optimizers is adopted for Equation (6), such as Adam, it is only initialized once at the beginning of the first inner loop. The historical gradient terms then accumulate throughout multiple inner loops with different targets. While this stabilizes the DQN training empirically, it is still lack of theoretical understanding on how the optimizer affects the training with various moving targets. As we will discuss in detail in Section 5, the analysis of AltQ with Adam can potentially shed light on such ambiguity and inspire future work for this matter.\nNote that AltQ and DQN mainly differ at how the Q-function evolves after each step of sampling. A fair comparison between the algorithms should be made without introducing dramatic difference on gradient propagation (Knight & Lerner, 2018), policy structure, exploration and sampling strategies (Mnih et al., 2016). In practice, the vanilla AltQ is often slow in convergence and unstable with high variance. To improve the performance, we propose to incorporate Adam and restart schemes, which are easy to implement and yield improved performance than DQN." 
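To make the contrast concrete, the AltQ update in Equations (3)–(4) reduces to a few lines under linear function approximation. The following is a minimal sketch rather than the experimental code; the feature map `phi` and the discrete action set `actions` are assumptions for illustration:

```python
import numpy as np

def altq_step(theta, phi, s, a, r, s_next, actions, gamma, alpha):
    """One alternating Q-learning step (Eqs. 3-4), with Q(s, a; theta) = phi(s, a)^T theta."""
    # One-step temporal target via the empirical Bellman operator (Eq. 3).
    target = r + gamma * max(phi(s_next, a2) @ theta for a2 in actions)
    # TD error times the gradient of Q w.r.t. theta, which is phi(s, a) in the linear case (Eq. 4).
    td_error = phi(s, a) @ theta - target
    return theta - alpha * td_error * phi(s, a)
```

By contrast, DQN would hold the target fixed and run many optimizer steps (Eq. 6) before the target update of Equation (7); AltQ interleaves a single parameter step with every target evaluation.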
}, { "heading": "3 ACCELERATED ALTERNATING Q-LEARNING ALGORITHMS", "text": "In this section, we first describe how to incorporate Adam to the AltQ algorithm, and then introduce a novel implementation scheme to improve the performance of AltQ with Adam.\nAltQ with Adam-type update We propose a new AltQ algorithm with Adam-type update (AltQAdam) as described in Algorithm 1. Its update is similar to the well-known Adam (Kingma & Ba, 2014). The iterations evolve by updating the exponentially decaying average of historical gradients (mt) and squared historical gradients (vt). The hyper-parameters β1, β2 are used to exponentially decrease the rate of the moving averages. The difference between Algorithm 1 and the standard Adam in supervised learning is that in AltQ, there is no fixed target to “supervise” the learning process. The target is always moving along with iteration t, leading to more noisy gradient estimations. The proposed algorithm sheds new light on the possibility of using Adam to deal with such unique challenge brought by RL.\nAltQ-Adam with momentum restart We also introduce the restart technique to AltQ-Adam and propose AltQ-AdamR as Algorithm 2. Traditional momentum-based algorithms largely depend on the historical gradient direction. When part of the historical information is incorrect, the estimation error tends to accumulate. The restart technique can be employed to deal with this issue. One way to restart the momentum-based methods is to initialize the momentum at some restart iteration. That is, at restart iteration r, we reset mr, vr, i.e., mr = 0, vr = 0, which yields θr+1 = θr. It is an intuitive implementation technique to adjust the trajectory from time to time, and can usually help mitigate the aforementioned problem while keeping fast convergence property. For the implementation, we execute the restart periodically with a period r. It turns out that the restart technique can significantly improve the numerical performance, which can be seen in Section 4." }, { "heading": "4 EMPIRICAL PERFORMANCE", "text": "We empirically evaluate the proposed algorithms in this section. The linear quadratic regulator (LQR) is a direct numerical demonstration of the convergence analysis under linear function approximation which will be discussed in the next section. Atari 2600 game (Bellemare et al., 2013; Brockman et al., 2016), a classic benchmark for DQN evaluations, is also used to show the effectiveness of the proposed algorithms for complicated tasks. In practice, we also make a small adjustment to the proposed algorithms. That is, we re-scale the loss term of L(θt) := Q̂t(s, a; θt)− TQ̂(s, a; θt)\nAlgorithm 1 AltQ-Adam 1: Input: η, θ1, β1, β2, , γ,m0 = 0, v0 = 0. 2: for t = 1, 2, . . . ,K do 3: TQ̂(s, a; θt) = R(s, a) + γmaxa′ Q̂(s′, a′; θt)\n4: gt = ( Q̂t(s, a; θt)− TQ̂(s, a; θt) ) ∂ ∂θt Q̂t(s, a; θt) 5: mt = (1− β1)mt−1 + β1gt 6: vt = (1− β2)vt−1 + β2g2t 7: θt+1 = θt − η mt√vt+ 8: end for 9: Output: θK\nAlgorithm 2 AltQ-AdamR 1: Input: η, θ1, β1, β2, , γ,m0 = 0, v0 = 0, r. 2: for t = 1, 2, . . . 
,K do 3: if mod(t, r) = 0 then 4: mt = 0, vt = 0 5: end if 6: TQ̂(s, a; θt) = R(s, a) + γmaxa′ Q̂(s′, a′; θt)\n7: gt = ( Q̂t(s, a; θt)− TQ̂(s, a; θt) ) ∂ ∂θt Q̂t(s, a; θt) 8: mt = (1− β1)mt−1 + β1gt 9: vt = (1− β2)vt−1 + β2g2t\n10: θt+1 = θt − η mt√vt+ 11: end for 12: Output: θK\nin Equation (4) as L̃(θt) = τ̃2L(θt) with some scaling factor τ̃ ∈ (0, 1], which is beneficial for stabilizing the learning process.\nWe find that in both experiments, AltQ-AdamR outperforms both AltQ-Adam and DQN in terms of convergence speed and variance reduction. Compared with DQN in the empirical experiments of Atari games, under the same hyper-parameter settings, AltQ-Adam and AltQ-AdamR improve the performance of DQN by 50% on average." }, { "heading": "4.1 LINEAR QUADRATIC REGULATOR", "text": "We numerically validate the proposed algorithms through an infinite-horizon discrete-time LQR problem whose background can be found in Appendix A.1. A typical model-based solution (with known dynamics), known as the discrete-time algebraic Riccati equation (DARE), is adopted to derive the optimal policy u?t = −K?xt. The performance of the learning algorithm is then evaluated at each step of iterate t with the Euclidean norm ‖Kt −K?‖. The performance result for each method is averaged over 10 trials with different random seeds. All algorithms share the same set of random seeds and are initialized with the same θ0. The hyper-parameters of the learning settings are also consistent and further details are shown in Table 1. Note that for all the implementations, we also adopt the double Q-update (Hasselt, 2010) to help prevent over-estimations of the Q-value. The performance results are seen in Figure 1. Here we highlight main observations from the LQR experiments.\n• AltQ-AdamR outperforms DARE In ideal cases where data sampling perfectly emulates the system dynamics and the target is accurately learned in each inner loop, DARE for LQR would become equivalent to the DQN-like update if the neural network is replaced with a parameterzied linear function. In practice, such ideal conditions are difficult to satisfy, and hence the actual Q-learning with target update is usually far slower (in terms of number of steps of target updates) than DARE. Note that AltQ-AdamR performs significantly well and even converges faster than DARE, and thus implies it is faster than the most well-performing Q-learning with target update.\n• AltQ-AdamR outperforms AltQ-Adam Overall, under the same batch sampling scheme and restart period, AltQ-AdamR achieves a faster convergence and smaller variance than AltQ-Adam.\nFigure 1: LQR experiments with performance evaluated in terms of policy loss ‖Kt −K?‖2.\nFigure 2: Atari game experiment with performance normalized and averaged over 23 games." }, { "heading": "4.2 ATARI GAMES", "text": "We apply the proposed AltQ algorithms to more challenging tasks of deep convolutional neural network playing a group of Atari 2600 games. The particular DQN we train to compare against adopts the dueling network structure (Wang et al., 2016), double Q-learning setup (Van Hasselt et al., 2016), -greedy exploration and experience replay (Mnih et al., 2013). Adam is also adopted, without momentum restart, as the optimizer for the inner-loop supervised learning process. AltQ-Adam and AltQ-AdamR are implemented using the identical setup of network construction, exploration and sampling strategies.\nWe test all the three algorithms with a batch of 23 Atari games. 
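For reference before the detailed results, Algorithms 1 and 2 admit a compact implementation. The sketch below follows the moment convention of the pseudocode (m_t = (1 − β1)m_{t−1} + β1 g_t, which swaps the weights relative to standard Adam); the gradient callback `grad_fn` and the hyper-parameter defaults are assumptions for illustration, and the restart branch reflects the reading of Algorithm 2 under which θ is unchanged at a restart step:

```python
import numpy as np

def altq_adam(theta, grad_fn, num_steps, eta=1e-3, beta1=0.9, beta2=0.999,
              eps=1e-8, restart_period=None):
    """AltQ-Adam (Algorithm 1); with restart_period set, AltQ-AdamR (Algorithm 2)."""
    m, v = np.zeros_like(theta), np.zeros_like(theta)
    for t in range(1, num_steps + 1):
        if restart_period is not None and t % restart_period == 0:
            # Reset the moment estimates; theta_{t+1} = theta_t at a restart step.
            m, v = np.zeros_like(theta), np.zeros_like(theta)
            continue
        g = grad_fn(theta)                # TD-error gradient g_t (line 4 of Algorithm 1)
        m = (1 - beta1) * m + beta1 * g   # first moment, in the paper's convention
        v = (1 - beta2) * v + beta2 * g ** 2
        theta = theta - eta * m / (np.sqrt(v) + eps)
    return theta
```

Resetting only the moments, and not θ itself, keeps the iterate trajectory continuous while discarding stale momentum accumulated under old targets.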
The choice of 10 million steps of iteration is a common setup for benchmark experiments with Atari games. Although this does not guarantee the best performance in comparison with more time-consuming training with 50 million steps or more, it is sufficient to illustrate different performances among the selected methods. The software infrastructure is based on the baseline implementation of OpenAI. Selections of the hyperparameters are listed in Table 2. We summarize the results in Figure 2. The overall performance is illustrated by first normalizing the return of each method with respect to the results obtained from DQN, and then averaging the performance of all 23 games to obtain the mean return and standard deviation. Considering we use a smaller buffer size than common practice, DQN is not consistently showing improved return over all tested games. Therefore, the self-normalized average return of DQN in Figure 2 is not strictly increasing from 0 to 100%.\nOverall, both AltQ-Adam and AltQ-AdamR achieve significant improvement in comparison with the DQN results. While AltQ-Adam is suffering from a higher variance, periodic restart (AltQAdamR) resolves the issue efficiently with an on-par performance on average and far smaller variance. Specifically, in terms of the maximum average return, AltQ-Adam and AltQ-AdamR perform no worse then DQN on 17 and 20 games respectively out of the 23 games being evaluated." }, { "heading": "5 CONVERGENCE ANALYSIS", "text": "In this section, we characterize the convergence guarantee for the proposed AltQ-learning algorithms. Furthermore, like most of the related papers, we focus on convergence analysis under the linear approximation class. Understanding the analytical behavior in the linear case is an important stepping stone to understand general cases such as deep neural network. A linear approximation of the Q-function Q̂(s, a; θ) can be written as\nQ̂(s, a; θ) = Φ(s, a)T θ, (8)\nwhere θ ∈ Rd, and Φ : S ×A → Rd is a vector function of size d, and the elements of Φ represent the nonlinear kernel (feature) functions." }, { "heading": "5.1 MODIFICATION OF ALGORITHMS", "text": "Although Adam has obtained great success as an optimizer in deep learning, it is well known that Adam by nature is non-convergent even for simple convex loss functions (Reddi et al., 2018). Instead, a slightly modified version called AMSGrad (Reddi et al., 2018) is widely used to study the convergence property of the Adam-type algorithms. Compared with the update rule of Adam, AMSGrad makes the sequence v̂t,i increasing along the time step t for each entry i ∈ [d]. Here, we apply the update rule of AMSGrad to the AltQ algorithm and refer to such an algorithm as AltQ-AMSGrad. Algorithm 3 describes AltQ-AMSGrad in detail, where ΠD,V̂ 1/4t (θ ′) = min θ∈D\n∥∥∥V̂ 1/4t (θ′ − θ)∥∥∥. Correspondingly, we introduce AltQ-AMSGradR which applies the same update rule as Algorithm 3, but resets mt, v̂t with a period of r, i.e., mt = 0, v̂t = 0,∀t = kr, k = 1, 2, · · · .\nAlgorithm 3 AltQ-AMSGrad 1: Input: α, λ, θ1, β1, β2,m0 = 0, v̂0 = 0. 2: for t = 1, 2, . . . , T do 3: αt =\nα√ t , β1t = β1λt 4: gt = ( φT (st, at)θt − r(st, at)−max\na′ φT (st+1, a ′)θt\n) φ(st, at)\n5: mt = (1− β1t)mt−1 + β1tgt 6: vt = (1− β2)v̂t−1 + β2g2t 7: v̂t = max(v̂t−1, vt), V̂t = diag(v̂1, . . . , v̂d)\n8: θt+1 = ΠD,V̂ 1/4t\n( θt − αtV̂ − 12 t mt ) 9: end for\n10: Output: 1T ∑T t=1 θt" }, { "heading": "5.2 MAIN RESULTS", "text": "Our theoretical analysis here focuses on the slight variants, AltQ-AMSGrad and AltQ-AMSGradR. 
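Relative to Algorithm 1, Algorithm 3 adds a decaying step size α_t = α/√t, a decaying momentum weight β_{1t} = β1 λ^t, a non-decreasing second-moment estimate v̂_t, and a projection onto the bounded domain D. A minimal sketch of one step follows; the hyper-parameter defaults are illustrative, and the plain Euclidean-ball projection is a simplification of the weighted projection Π_{D,V̂^{1/4}} rather than the exact operator:

```python
import numpy as np

def amsgrad_step(theta, m, v_hat, g, t, alpha=0.1, beta1=0.9, beta2=0.999,
                 lam=0.99, radius=10.0):
    """One AltQ-AMSGrad step (Algorithm 3), given the TD gradient g_t of line 4."""
    alpha_t = alpha / np.sqrt(t)              # alpha_t = alpha / sqrt(t)
    beta1_t = beta1 * lam ** t                # decaying momentum weight beta_{1t}
    m = (1 - beta1_t) * m + beta1_t * g
    v = (1 - beta2) * v_hat + beta2 * g ** 2  # built from v_hat_{t-1}, as in line 6
    v_hat = np.maximum(v_hat, v)              # non-decreasing v_hat: the AMSGrad fix
    # The small constant is for numerical safety only; Algorithm 3 itself has no epsilon.
    theta = theta - alpha_t * m / np.sqrt(v_hat + 1e-12)
    # Projection onto D, simplified here to a Euclidean ball of the given radius.
    norm = np.linalg.norm(theta)
    if norm > radius:
        theta = theta * (radius / norm)
    return theta, m, v_hat
```

The monotonicity of v̂_t is what makes the ratio v̂_{t,i}^{1/2}/α_t non-decreasing, a property the telescoping argument in the proof of Theorem 1 relies on.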
Before stating the theorems, we first introduce some technical assumptions for our analysis. Assumption 1. At each iteration t, the noisy gradient is unbiased and uniformly bounded, i.e. gt = ḡt + ξt with Eξt = 0 where ḡt = E[gt], and ‖gt‖ < G∞,∀t. Thus ‖gt‖∞ < G∞ and ‖gt‖2 < G2∞. Assumption 2. (Chen et al., 2019b, Lemma 6.7) The equation ḡ(θ) = 0 has a unique solution θ?, which implies that there exists a c > 0, such that for any θ ∈ Rd we have\n(θ − θ?)T ḡ(θ) ≥ c ‖θ − θ?‖2 . (9) Assumption 3. The domainD ⊂ Rd of approximation parameters is a ball originating at θ = 0 with bounded diameter containing θ?. That is, there existsD∞, such that ‖θm − θn‖ < D∞,∀θm, θn ∈ D, and θ? ∈ D.\nAssumption 1 is standard in the theoretical analysis of Adam-type algorithms (Chen et al., 2019a; Zhou et al., 2018). Under linear function approximation and given Assumption 3 and bounded r(·), Assumption 1 is almost equivalent to the assumption of bounded φ(·) which is commonly taken in related RL work (Tsitsiklis & Van Roy, 1997; Bhandari et al., 2018). Assumption 2 has been proved as a key technical lemma in Chen et al. (2019b) under certain assumptions. Such an assumption appears to be the weakest in the existing studies of the theoretic guarantee for Q-learning with function approximation.\nWe next provide the convergence results of AltQ-AMSGrad and AltQ-AMSGradR under linear function approximation in the following two theorems. Theorem 1. (Convergence of AltQ-AMSGrad) Suppose αt = α√t , β1t = β1λ\nt and δ = β1/β2 with δ, λ ∈ (0, 1) for t = 1, 2, . . . in Algorithm 3. Given Assumptions 1 ∼ 3, the output of AltQ-AMSGrad satisfies:\nE ‖θout − θ?‖ ≤ B1 T\n+ B2 √ T T + B3 √ 1 + log T T d∑ i=1 E ‖g1:T,i‖ , (10)\nwhere B1 = G∞D\n2 ∞\n2α2c(1−β1) + β1G∞D\n2 ∞\n2αc(1−β1)(1−λ)2 + ‖θ1 − θ ?‖2 , B2 = dG∞D\n2 ∞\n2αc(1−β1) , and B3 = α(1+β1)\n2c(1−β1)2(1−δ) √ 1−β2 .\nIn Theorem 1, B1, B2, B3 in the bound in Equation (10) are constants and independent of time. Therefore, under the choice of the stepsize and hyper-parameters in Algorithm 3, AltQ-AMSGrad achieves a convergence rate of O ( 1√ T ) when ∑d i=1 ‖g1:T,i‖ << √ T which is justified in Duchi et al. (2011). Remark 1. Our proof of convergence here has two major differences from that for AMSGrad in Reddi et al. (2018): (a) The two algorithms are quite different. AltQ-AMSGrad is a Q-learning algorithm alternatively finding the best policy, whereas AMSGrad is an optimizer for conventional optimization and does not have alternating nature. (b) Our analysis is on the convergence rate whereas Reddi et al. (2018) provides regret bound. In fact, a slight modification of our proof also provides the convergence rate of AMSGrad for conventional strongly convex optimization, which can be of independent interest. Moreover, our proof avoids the theoretical error in the proof in Reddi et al. (2018) pointed out by Tran et al. (2019).\nIn the following theorem, we provide the convergence result for AltQ-AMSGradR. Theorem 2. (Convergence of AltQ-AMSGradR) Under the same condition of Theorem 1, the output of AltQ-AMSGradR satisfies:\nE ‖θout − θ?‖ ≤ B1 T\n+ B2 √ 1 + log T\nT\nd∑ i=1 E ‖g1:T,i‖+ B3 T √T + bT/rc∑ k=1 √ kr − 1 + 1\nT bT/rc∑ k=0 ( G∞D 2 ∞ α √ kr + 2 + 4c(1− β1)E ‖θkr − θ?‖2 ) ,\n(11)\nwhere B1 = β1D\n2 ∞G∞\n2αc(1−β1)(1−λ)2 , B2 = α(1+β1) 2c(1−β1)2(1−δ) √ 1−β2 , and B3 = dG∞D\n2 ∞\n2αc(1−β1) .\nTheorem 2 indicates that for AltQ-AMSGradR to enjoy a convergence rate of O (\n1√ T\n) the restart\nperiod r needs to be sufficiently large and ∑d i=1 ‖g1:T,i‖ << √ T . 
In practice as demonstrated by the experiments in Section 4, AltQ-AMSGradR typically performs well, not necessarily under the theoretical conditions." }, { "heading": "6 CONCLUSION", "text": "We propose two types of the accelerated AltQ algorithms, and demonstrate their superior performance over the state-of-the-art through a linear quadratic regulator problem and a batch of 23 Atari 2600 games.\nNotably, Adam is not the only scheme in the practice for general optimization. Heavy ball (Ghadimi et al., 2015) and Nesterov (Nesterov, 2013) are also popular momentum-based methods. When adopting such methods in AltQ-learning for RL problems, however, we tend to observe a less stable learning process than AltQ-Adam. This is partially caused by the fact that they optimize over a shorter historical horizon of updates than Adam. Furthermore, the restart scheme provides somewhat remarkable performance in our study. It is thus of considerable future interest to further investigate the potential of such a scheme. One possible direction is to develop an adaptive restart mechanism with changing period determined by an appropriately defined signal of restart. This will potentially relieve the effort in hyper-parameter tuning of finding a good fixed period." }, { "heading": "A FURTHER DETAILS AND RESULTS ON EXPERIMENTS", "text": "We discuss more details on the experiment setup and provide further results that are not included in Section 4.\nA.1 LINEAR QUADRATIC REGULATOR\nThe linear quadratic regulator (LQR) problem is of great interest for control community where Lewis et al. applies PQL to both discrete-time problems (Lewis & Vrabie, 2009; Lewis & Vamvoudakis, 2011) and continuous-time problems (Vamvoudakis, 2017; Vrabie et al., 2009).\nWe empirically validate the proposed algorithms through an infinite-horizon discrete-time LQR problem defined as\nminimize π\nJ = ∞∑ t=0 ( xTt Qxt + u T t Rut + 2x T t Nut ) ,\nsubject to xt+1 = Axt +But,\nwhere ut = π(xt).\nA typical model-based solution (with known A and B) considers the problem backwards in time and iterates a dynamic equation known as the discrete-time algebraic Riccati equation (DARE):\nP = ATPA− (ATPB +N)(R+BTPB)−1(BTPA+NT ) +Q, (12)\nwith the cost-to-go P being positive definite. The optimal policy satisfies u?t = −K?xt with\nK? = (R+BTPB)−1(NT +BTPA). (13)\nFor experiments, we parameterize a quadratic Q-function with a matrix parameter H in the form of\nQ(x, u;H) = [ x u ]T [ Hxx Hxu Hux Huu ] [ x u ] . (14)\nThe corresponding linear policy satisfies u = −Kx, and K = H−1uuHux. The performance of the learning algorithm is then evaluated at each step of iterate i with the Euclidean norm ‖Ki −K?‖2.\nA.2 ATARI GAMES\nWe list detailed experiments of the 23 Atari games evaluated with the proposed algorithms in Figure 3. All experiments are executed with the same set of two random seeds. Each task takes about 20-hour of wall-clock time on a GPU instance. All three methods being evaluated share similar training time. AltQ-Adam and AltQ-AdamR can be further accelerated in practice with a more memory-efficient implementation considering the target network is not required. We keep our implementation of proposed algorithms consistent with the DQN we are comparing against. 
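Returning to the LQR baseline of Appendix A.1: the DARE fixed point in Equation (12) and the optimal gain in Equation (13) can be computed by direct iteration. A minimal sketch, assuming the system matrices A, B, Q, R, N are known and the fixed-point iteration converges:

```python
import numpy as np

def solve_dare(A, B, Q, R, N, iters=1000, tol=1e-10):
    """Iterate the discrete-time algebraic Riccati equation (Eq. 12) to a fixed point."""
    P = np.eye(A.shape[0])
    for _ in range(iters):
        M = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A + N.T)
        P_next = A.T @ P @ A - (A.T @ P @ B + N) @ M + Q
        if np.max(np.abs(P_next - P)) < tol:
            P = P_next
            break
        P = P_next
    # Optimal gain K* from Eq. (13); the optimal policy is u = -K* x.
    K = np.linalg.solve(R + B.T @ P @ B, N.T + B.T @ P @ A)
    return P, K
```

With K* in hand, the learning curves of Figure 1 report ‖K_t − K*‖ at each iterate.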
Other techniques that are not included in this experiment are also compatible with AltQ-Adam and AltQ-AdamR, such like asynchronous exploration (Mnih et al., 2013) and training with decorrelated loss (Mavrin et al., 2019).\nOverall, AltQ-Adam significantly increases the performance by over 100% in some of the tasks including Asterix, BeamRider, Enduro, Gopher, etc. However, it also illustrates certain instability with complete failure on Amidar and Assault. This is mostly caused by the sampling where we are using a relevantly small buffer size with 10% of the common configured size in Atari games with experience replay. Notice that those failures tend to appear when the -greedy exploration has evolved to a certain level where the immediate policy is effectively contributing to the accumulated experience. This potentially amplifies the biased exploration that essentially leads to the observed phenomenon.\nInterstingly, AltQ-AdamR that incorporates the restart scheme resolves the problem of high variance of average return brought by AltQ-Adam and provides a more consistent performance across the task\ndomain. This implies that momentum restart effectively corrects the accumulated error and stabilizes the training process." }, { "heading": "B PROOF OF THEOREM 1", "text": "Different from the regret bound for AMSGrad obtained in Reddi et al. (2018), our analysis is on the convergence rate. In fact, a slight modification of our proof also provides the convergence rate for AMSGrad for conventional strongly convex optimization, which can be of independent interest. Moreover, our proof avoids the theoretical error in the proof in Reddi et al. (2018) pointed out by (Tran et al., 2019). Before proving the theorems, we first provide some useful lemmas. Lemma 1. (Zhou et al., 2018, Lemma A.1) Let {gt,mt, v̂t} for t = 1, 2, . . . be sequences generated by Algorithm 3 and ḡt = E[gt]. Under Assumption 1, ‖ḡt‖ ≤ G∞, ‖mt‖ ≤ G∞, ‖v̂t‖ ≤ G2∞. Lemma 2. (Reddi et al., 2018, Lemma 2) Let {mt, V̂t} for t = 1, 2, . . . be sequences generated by Algorithm 3. Given αt, β1t, β2 as specified in Theorem 1, we have\nT∑ t=1 αt ∥∥∥V̂ − 12t mt∥∥∥2 ≤ α(1− β1)(1− δ)√1− β2 d∑ i=1 ‖g1:T,i‖ √√√√ T∑ t=1 1 t\n≤ α √ 1 + log T\n(1− β1)(1− δ) √ 1− β2 d∑ i=1 ‖g1:T,i‖ .\n(15)\nLemma 3. Let αt = α√t and β1t = β1λ t for t = 1, 2, . . . . Then\nT∑ t=1 β1t αt ≤ β1 α(1− λ)2 . (16)\nProof. The proof is based on taking the standard sum of geometric sequences. T∑ t=1 β1t αt = T∑ t=1 β1t √ t α ≤ T∑ t=1 β1λ t−1t α = β1 α ( 1 (1− λ) T∑ t=1 λt−1 − TλT ) ≤ β1 α(1− λ)2 . (17)\nWith the lemmas above, we are ready to prove Theorem 1. Observe that\nθt+1 = ΠD,V̂ 1/4t\n( θt − αtV̂ − 12 t mt ) = min\nθ∈D ∥∥∥V̂ 1/4t (θt − αtV̂ − 12t mt − θ)∥∥∥ . Clearly ΠD,V̂ 1/4t (θ\n?) = θ? due to Assumption 3. We start from the update of θt when t ≥ 2.∥∥∥V̂ 1/4t (θt+1 − θ?)∥∥∥2 = ∥∥∥ΠD,V̂ 1/4t V̂ 1/4t (θt − θ? − αtV̂ − 12t mt)∥∥∥2\n≤ ∥∥∥V̂ 1/4t (θt − θ? − αtV̂ − 12t mt)∥∥∥2\n= ∥∥∥V̂ 1/4t (θt − θ?)∥∥∥2 + ∥∥∥αtV̂ −1/4t mt∥∥∥2 − 2αt(θt − θ?)Tmt\n= ∥∥∥V̂ 1/4t (θt − θ?)∥∥∥2 + ∥∥∥αtV̂ −1/4t mt∥∥∥2 − 2αt(θt − θ?)T (β1tmt−1 + (1− β1t)gt)\n(i) ≤ ∥∥∥V̂ 1/4t (θt − θ?)∥∥∥2 + ∥∥∥αtV̂ −1/4t mt∥∥∥2 + αtβ1t( 1αt ∥∥∥V̂ 1/4t (θt − θ?)∥∥∥2 + αt ∥∥∥V̂ −1/4t mt−1∥∥∥2) − 2αt(1− β1t)(θt − θ?)T gt\n(ii) ≤ ∥∥∥V̂ 1/4t (θt − θ?)∥∥∥2 + ∥∥∥αtV̂ −1/4t mt∥∥∥2 + β1t ∥∥∥V̂ 1/4t (θt − θ?)∥∥∥2 + α2tβ1t ∥∥∥V̂ −1/4t−1 mt−1∥∥∥2 − 2αt(1− β1t)(θt − θ?)T gt,\nwhere (i) follows from the Cauchy-Schwarz inequality, and (ii) holds because v̂t+1,i ≥ v̂t,i,∀t, ∀i. 
Next, we take the expectation over all samples used up to time step t on both sides, which still preserves the inequality. Since we consider i.i.d. sampling case, by letting Ft be the filtration of all the sampling up to time t, we have\nE [ (θt − θ?)T gt ] = E [ E [ (θt − θ?)T gt ] |Ft−1 ] = E [ (θt − θ?)T ḡt ] . (18)\nThus we have E ∥∥∥V̂ 1/4t (θt+1 − θ?)∥∥∥2 ≤ E\n∥∥∥V̂ 1/4t (θt − θ?)∥∥∥2 + α2tE∥∥∥V̂ −1/4t mt∥∥∥2 + β1tE∥∥∥V̂ 1/4t (θt − θ?)∥∥∥2 + α2tβ1tE∥∥∥V̂ −1/4t−1 mt−1∥∥∥2 − 2αt(1− β1t)E [ (θt − θ?)T gt\n] (i) = E\n∥∥∥V̂ 1/4t (θt − θ?)∥∥∥2 + α2tE∥∥∥V̂ −1/4t mt∥∥∥2 + β1tE∥∥∥V̂ 1/4t (θt − θ?)∥∥∥2 + α2tβ1tE ∥∥∥V̂ −1/4t−1 mt−1∥∥∥2 − 2αt(1− β1t)E [ (θt − θ?)T ḡt\n] (ii) ≤ E\n∥∥∥V̂ 1/4t (θt − θ?)∥∥∥2 + α2tE ∥∥∥V̂ −1/4t mt∥∥∥2 + β1tE∥∥∥V̂ 1/4t (θt − θ?)∥∥∥2 + α2tβ1tE ∥∥∥V̂ −1/4t−1 mt−1∥∥∥2 − 2αtc(1− β1t)E ‖θt − θ?‖2\n(iii) ≤ E ∥∥∥V̂ 1/4t (θt − θ?)∥∥∥2 + α2tE∥∥∥V̂ −1/4t mt∥∥∥2 + β1tE ∥∥∥V̂ 1/4t (θt − θ?)∥∥∥2 + α2tβ1E ∥∥∥V̂ −1/4t−1 mt−1∥∥∥2 − 2αtc(1− β1)E ‖θt − θ?‖2\n(iv) ≤ E ∥∥∥V̂ 1/4t (θt − θ?)∥∥∥2 + α2tE∥∥∥V̂ −1/4t mt∥∥∥2 +G∞D2∞β1t + α2tβ1E ∥∥∥V̂ −1/4t−1 mt−1∥∥∥2 − 2αtc(1− β1)E ‖θt − θ?‖2 ,\nwhere (i) follows from Equation (18), (ii) follows due to Assumption 2 and 1 − β1t > 0, (iii) follows from β1t < β1 < 1 and E ‖θt − θ?‖2 > 0, and (iv) follows from ∥∥∥V̂ 1/4t (θt − θ?)∥∥∥2 ≤∥∥∥V̂ 1/4t ∥∥∥2 2 ‖θt − θ?‖2 ≤ G∞D2∞ by Lemma 1 and Assumption 3. We note that (iii) is the key step to\navoid the error in the proof in Reddi et al. (2018), where we can directly bound 1 − β1t, which is impossible in Reddi et al. (2018). By rearranging the terms in the above inequality and taking the summation over time steps, we have\n2c(1− β1) T∑ t=2 E ‖θt − θ?‖2\n≤ T∑ t=2 1 αt ( E ∥∥∥V̂ 1/4t (θt − θ?)∥∥∥2−E∥∥∥V̂ 1/4t (θt+1 − θ?)∥∥∥2)+ T∑ t=2 β1tG∞D 2 ∞ αt\n+ T∑ t=2 αtE ∥∥∥V̂ −1/4t mt∥∥∥2 + T∑ t=2 αtβ1E ∥∥∥V̂ −1/4t−1 mt−1∥∥∥2\n(i) ≤ T∑ t=2 1 αt ( E ∥∥∥V̂ 1/4t (θt − θ?)∥∥∥2−E∥∥∥V̂ 1/4t (θt+1 − θ?)∥∥∥2)+ T∑ t=2 β1tG∞D 2 ∞ αt\n+ T∑ t=2 αtE ∥∥∥V̂ −1/4t mt∥∥∥2 + T∑ t=2 αt−1β1E ∥∥∥V̂ −1/4t−1 mt−1∥∥∥2\n≤ T∑ t=2 1 αt ( E ∥∥∥V̂ 1/4t (θt − θ?)∥∥∥2−E∥∥∥V̂ 1/4t (θt+1 − θ?)∥∥∥2)+ T∑ t=2 β1tG∞D 2 ∞ αt\n+ (1 + β1) T∑ t=1 αtE ∥∥∥V̂ −1/4t mt∥∥∥2 ,\nwhere (i) follows from αt < αt−1. With further adjustment of the first term in the right hand side of the last inequality, we can then bound the sum as\n2c(1− β1) T∑ t=2 E ‖θt − θ?‖2\n≤ T∑ t=2 1 αt E (∥∥∥V̂ 1/4t (θt − θ?)∥∥∥2 − ∥∥∥V̂ 1/4t (θt+1 − θ?)∥∥∥2)+ T∑ t=2 β1tG∞D 2 ∞ αt\n+ (1 + β1) T∑ t=1 αtE ∥∥∥V̂ −1/4t mt∥∥∥2\n= E ∥∥∥V̂ 1/42 (θ2 − θ?)∥∥∥2\nα2 + T∑ t=3 E\n ∥∥∥V̂ 1/4t (θt − θ?)∥∥∥2\nαt − ∥∥∥V̂ 1/4t−1 (θt − θ?)∥∥∥2 αt−1 − E ∥∥∥V̂ 1/4T (θT+1 − θ?)∥∥∥2\nαT + T∑ t=2 β1tG∞D 2 ∞ αt + (1 + β1) T∑ t=1 αtE ∥∥∥V̂ −1/4t mt∥∥∥2\n= E ∥∥∥V̂ 1/42 (θ2 − θ?)∥∥∥2\nα2 + T∑ t=3 E\n(∑d i=1 v̂ 1/2 t,i (θt,i − θ?i )2 αt − ∑d i=1 v̂ 1/2 t−1,i(θt,i − θ?i )2 αt−1 )\n− E ∥∥∥V̂ 1/4T (θT+1 − θ?)∥∥∥2\nαT + T∑ t=2 β1tG∞D 2 ∞ αt + (1 + β1) T∑ t=1 αtE ∥∥∥V̂ −1/4t mt∥∥∥2 .\n= E ∥∥∥V̂ 1/42 (θ2 − θ?)∥∥∥2\nα2 + T∑ t=3 d∑ i=1 E(θt,i − θ?i )2 ( v̂ 1/2 t,i αt − v̂ 1/2 t−1,i αt−1 )\n− E ∥∥∥V̂ 1/4T (θT+1 − θ?)∥∥∥2\nαT + T∑ t=2 β1tG∞D 2 ∞ αt + (1 + β1) T∑ t=1 αtE ∥∥∥V̂ −1/4t mt∥∥∥2 .\nSo far we just rearrange the terms in the series sum. 
Next, we are ready to obtain the upper bound.\n2c(1− β1) T∑ t=2 E ‖θt − θ?‖2\n(i) ≤\nE ∥∥∥V̂ 1/42 (θ2 − θ?)∥∥∥2\nα2 +D2∞ T∑ t=3 d∑ i=1 E\n( v̂ 1/2 t,i\nαt − v̂ 1/2 t−1,i αt−1\n)\n− E ∥∥∥V̂ 1/4T (θT+1 − θ?)∥∥∥2\nαT + T∑ t=2 β1tG∞D 2 ∞ αt + (1 + β1) T∑ t=1 αtE ∥∥∥V̂ −1/4t mt∥∥∥2\n≤ E ∥∥∥V̂ 1/42 (θ2 − θ?)∥∥∥2\nα2 +D2∞ d∑ i=1 E v̂ 1/2 T,i αT + T∑ t=2 β1tG∞D 2 ∞ αt + (1 + β1) T∑ t=1 αtE ∥∥∥V̂ −1/4t mt∥∥∥2\n(ii) ≤ G∞D 2 ∞ α2 + dG∞D\n2 ∞ √ T\nα + β1G∞D\n2 ∞\nα(1− λ)2 +\nα(1 + β1) √ 1 + log T\n(1− β1)(1− δ) √ 1− β2 d∑ i=1 E ‖g1:T,i‖ ,\n(19)\nwhere (i) follows from Assumption 3 and because v̂ 1/2 t,i\nαt > v̂ 1/2 t−1,i αt−1 , and (ii) follows from Lemmas 1 - 3.\nFinally, applying the Jensen’s inequality yields\nE ‖θout − θ?‖2 ≤ 1\nT T∑ t=1 E ‖θt − θ?‖2 . (20)\nWe conclude our proof by further applying the bound in Equation (19) to Equation (20)." }, { "heading": "C PROOF OF THEOREM 2", "text": "To prove the convergence for AltQ-AMSGradR, the major technical development beyond the proof of Theorem 1 lies in dealing with the parameter restart. More specifically, the moment approximation terms are reset every r steps, i.e., mkr = v̂kr = 0 for k = 1, 2, . . . , which implies θkr+1 = θkr for k = 1, 2, . . . . For technical convenience, we define θ0 = θ1. Using the arguments similar to Equation (19), in a time window that does not contain a restart (i.e. kr ≤ S ≤ (k + 1)r − 1) we have\n2c(1− β1) S∑\nt=kr\nE ‖θt − θ?‖2\n(i) ≤ G∞D 2 ∞ αkr+2 + dG∞D\n2 ∞ √ S\nα +\nα(1 + β1)\n(1− β1)(1− δ) √ 1− β2 d∑ i=1 E ‖gkr+1:S,i‖ √√√√ S∑ t=kr+1 1 t\n+G∞D 2 ∞ S∑ t=kr+2 β1t αt + 2c(1− β1) ( E ‖θkr+1 − θ?‖2 + E ‖θkr − θ?‖2 ) (ii) = G∞D 2 ∞ √ kr + 2\nα + dG∞D\n2 ∞ √ S\nα +\nα(1 + β1)\n(1− β1)(1− δ) √ 1− β2 d∑ i=1 E ‖gkr+1:S,i‖ √√√√ S∑ t=kr+1 1 t\n+G∞D 2 ∞ S∑ t=kr+2 β1t αt + 4c(1− β1)E ‖θkr − θ?‖2 ,\nwhere (i) follows from Equation (19) and (ii) follows from θkr+1 = θkr due to the definition of restart. Then we take the summation over the total time steps and obtain\n2c(1− β1) T∑ t=1 E ‖θt − θ?‖2\n= 2c(1− β1) bT/rc∑ k=1 kr−1∑ t=(k−1)r E ‖θt − θ?‖2 + T∑ t=bT/rcr E ‖θt − θ?‖2 − E ‖θ0 − θ?‖2 ≤ bT/rc∑ k=0 ( G∞D 2 ∞ α √ kr + 2 + 4c(1− β1)E ‖θkr − θ?‖2 ) + bT/rc∑ k=1 dG∞D 2 ∞ α √ kr − 1\n+ dG∞D\n2 ∞ √ T\nα +\nα(1 + β1)\n(1− β1)(1− δ) √ 1− β2 bT/rc∑ k=1 d∑ i=1 E ∥∥g(k−1)r+1:kr−1,i∥∥ √√√√ kr−1∑ t=(k−1)r+1 1 t\n+ α(1 + β1)\n(1− β1)(1− δ) √ 1− β2 d∑ i=1 E ∥∥gbT/rcr+1:T,i∥∥ √√√√ T∑ t=bT/rcr+1 1 t\n+G∞D 2 ∞ bT/rc∑ k=1 kr−1∑ t=(k−1)r+2 β1t αt +G∞D 2 ∞ T∑ t=bT/rcr+2 β1t αt\n≤ bT/rc∑ k=0 ( G∞D 2 ∞ α √ kr + 2 + 4c(1− β1)E ‖θkr − θ?‖2 ) + bT/rc∑ k=1 dG∞D 2 ∞ α √ kr − 1\n+ dG∞D\n2 ∞ √ T\nα +\nα(1 + β1)\n(1− β1)(1− δ) √ 1− β2 bT/rc∑ k=1 d∑ i=1 E ∥∥g(k−1)r+1:kr−1,i∥∥ √√√√ kr−1∑ t=(k−1)r+1 1 t\n+ α(1 + β1)\n(1− β1)(1− δ) √ 1− β2 d∑ i=1 E ∥∥gbT/rcr+1:T,i∥∥ √√√√ T∑ t=bT/rcr+1 1 t +G∞D 2 ∞ T∑ t=1 β1t αt .\nWe can bound the term G∞D2∞ ∑T t=1 β1t αt\nby Lemma 3. Next, we bound another key term in the above inequality. We first observe that ∀k ≥ 2,∀i ∈ [d],\n∥∥g(k−1)r+1:kr−1,i∥∥ √√√√ kr−1∑ t=(k−1)r+1 1 t (i) ≤ ∥∥g(k−1)r+1:kr−1,i∥∥ √√√√ kr−1∑ t=(k−1)r+1 1 t + |gkr,i| √ 1 kr\n(ii) ≤ ∥∥g(k−1)r+1:kr,i∥∥ √√√√ kr∑ t=(k−1)r+1 1 t ,\n(21)\nwhere (i) holds due to |gt,i| √ 1 t > 0 and (ii) follows from the Cauchy-Schwarz inequality. 
Then we have\nbT/rc∑ k=1 d∑ i=1 ∥∥g(k−1)r+1:kr−1,i∥∥ √√√√ kr−1∑ t=(k−1)r+1 1 t + d∑ i=1 ∥∥gbT/rcr+1:T,i∥∥ √√√√ T∑ t=bT/rcr+1 1 t\n(i) ≤ bT/rc∑ k=1 d∑ i=1 |gkr,i| √ 1 kr + bT/rc∑ k=1 d∑ i=1 ∥∥g(k−1)r+1:kr−1,i∥∥ √√√√ kr−1∑ t=(k−1)r+1 1 t\n+ d∑ i=1 ∥∥gbT/rcr+1:T,i∥∥ √√√√ T∑ t=bT/rcr+1 1 t\n= bT/rc∑ k=1 d∑ i=1 ∥∥g(k−1)r+1:kr−1,i∥∥ √√√√ kr−1∑ t=(k−1)r+1 1 t + |gkr,i| √ 1 kr +\nd∑ i=1 ∥∥gbT/rcr+1:T,i∥∥ √√√√ T∑ t=bT/rcr+1 1 t\n(ii) ≤ bT/rc∑ k=1 d∑ i=1 ∥∥g(k−1)r+1:kr,i∥∥ √√√√ kr∑ t=(k−1)r+1 1 t + d∑ i=1 ∥∥gbT/rcr+1:T,i∥∥ √√√√ T∑ t=bT/rcr+1 1 t\n= d∑ i=1 bT/rc∑ k=1 ∥∥g(k−1)r+1:kr,i∥∥ √√√√ kr∑ t=(k−1)r+1 1 t + ∥∥gbT/rcr+1:T,i∥∥ √√√√ T∑ t=bT/rcr+1 1 t (iii) ≤\nd∑ i=1 ‖g1:T,i‖ √√√√ T∑ t=1 1 t ,\nwhere (i) follows from |gkr,i| √ 1 kr ,∀k ≥ 1,∀i ∈ [d], (ii) follows from Equation (21) and (iii) holds due to the Cauchy-Schwarz inequality. Then we have\n2c(1− β1) T∑ t=1 E ‖θt − θ?‖2\n≤ bT/rc∑ k=0 ( G∞D 2 ∞ α √ kr + 2 + 4c(1− β1)E ‖θkr − θ?‖2 ) + bT/rc∑ k=1 dG∞D 2 ∞ α √ kr − 1\n+ dG∞D\n2 ∞ √ T\nα +\nα(1 + β1)\n(1− β1)(1− δ) √ 1− β2 bT/rc∑ k=1 d∑ i=1 E ∥∥g(k−1)r:kr−1,i∥∥ √√√√ kr−1∑ t=(k−1)r 1 t\n+ α(1 + β1)\n(1− β1)(1− δ) √ 1− β2 d∑ i=1 E ∥∥gbT/rcr:T,i∥∥ √√√√ T∑ t=bT/rcr 1 t +G∞D 2 ∞ T∑ t=1 β1t αt\n≤ bT/rc∑ k=0 ( G∞D 2 ∞ α √ kr + 2 + 4c(1− β1)E ‖θkr − θ?‖2 ) + bT/rc∑ k=1 dG∞D 2 ∞ α √ kr − 1\n+ dG∞D\n2 ∞ √ T\nα +\nα(1 + β1)\n(1− β1)(1− δ) √ 1− β2 d∑ i=1 E ‖g1:T,i‖ √√√√ T∑ t=1 1 t +G∞D 2 ∞ T∑ t=1 β1t αt\n(i) ≤ bT/rc∑ k=0 ( G∞D 2 ∞ α √ kr + 2 + 4c(1− β1)E ‖θkr − θ?‖2 ) + bT/rc∑ k=1 dG∞D 2 ∞ α √ kr − 1\n+ dG∞D\n2 ∞ √ T\nα + α(1 + β1)\n√ d(1 + log T )\n(1− β1)(1− δ) √ 1− β2 d∑ i=1 E ‖g1:T,i‖+ β1G∞D 2 ∞ α(1− λ)2 ,\nwhere (i) follows from Lemma 2 and Lemma 3.\nFinally, applying the Jensen’s inequality and the above bound, we obtain\nE ‖θout − θ?‖2\n≤ 1 T T∑ t=1 E ‖θt − θ?‖2\n≤ 1 T bT/rc∑ k=0 ( G∞D 2 ∞ 2cα(1− β1) √ kr + 2 + 2E ‖θkr − θ?‖2 ) + 1 T bT/rc∑ k=1 dG∞D 2 ∞ 2cα(1− β1) √ kr − 1\n+ dG∞D\n2 ∞ √ T\n2cα(1− β1) +\nα(1 + β1) √ d(1 + log T )\n2c(1− β1)2(1− δ) √ 1− β2 d∑ i=1 E ‖g1:T,i‖+ β1G∞D 2 ∞ 2cα(1− β1)(1− λ)2 ,\nwhich concludes the proof." } ]
2019
null
SP:2036673d54d07683d1dfdad4567ea18029344359
[ "The paper presents a method of scaling up towards action spaces, that exhibit natural hierarchies (such as a controllable resolution of actions), throughout joint training of Q-functions over these. Authors notice, and exploit a few interesting properties, such as inequalities that emerge when action spaces form strict subsets that lead to nice parametrisation of policies in a differential way. The evaluation is performed in simple toy-ish tasks, and in micro-management problem in 5 scenarios in the game of SC2.", "This paper proposes a method to progressively explore the action space for RL. The proposed method is called “growing action spaces”. The basic idea is that actions can usually be grouped by a hierarchical structure: the lowest level is the coarsest and higher levels gradually refine the action partition. This method effectively captures many RL settings, including multi-agent learning. One effective approach is to apply the action hierarchy. " ]
In complex tasks, such as those with large combinatorial action spaces, random exploration may be too inefficient to achieve meaningful learning progress. In this work, we use a curriculum of progressively growing action spaces to accelerate learning. We assume the environment is out of our control, but that the agent may set an internal curriculum by initially restricting its action space. Our approach uses off-policy reinforcement learning to estimate optimal value functions for multiple action spaces simultaneously and efficiently transfers data, value estimates, and state representations from restricted action spaces to the full task. We show the efficacy of our approach in proof-of-concept control tasks and on challenging large-scale StarCraft micromanagement tasks with large, multi-agent action spaces.
[]
[ { "authors": [ "Minoru Asada", "Shoichi Noda", "Sukoya Tawaratsumida", "Koh Hosoda" ], "title": "Purposive behavior acquisition for a real robot by vision-based reinforcement learning", "venue": "Machine learning,", "year": 1996 }, { "authors": [ "Yoshua Bengio", "Jérôme Louradour", "Ronan Collobert", "Jason Weston" ], "title": "Curriculum learning", "venue": "In Proceedings of the 26th annual international conference on machine learning,", "year": 2009 }, { "authors": [ "Guillaume M JB Chaslot", "Mark HM Winands", "H JAAP VAN DEN HERIK", "Jos WHM Uiterwijk", "Bruno Bouzy" ], "title": "Progressive strategies for monte-carlo tree search", "venue": "New Mathematics and Natural Computation,", "year": 2008 }, { "authors": [ "Marco Colombetti", "Marco Dorigo" ], "title": "Robot shaping: developing situated agents through learning", "venue": "International Computer Science Institute,", "year": 1992 }, { "authors": [ "Adrien Couëtoux", "Jean-Baptiste Hoock", "Nataliya Sokolovska", "Olivier Teytaud", "Nicolas Bonnard" ], "title": "Continuous upper confidence trees", "venue": "In International Conference on Learning and Intelligent Optimization,", "year": 2011 }, { "authors": [ "Wojciech Marian Czarnecki", "Siddhant M Jayakumar", "Max Jaderberg", "Leonard Hasenclever", "Yee Whye Teh", "Simon Osindero", "Nicolas Heess", "Razvan Pascanu" ], "title": "Mix&match-agent curricula for reinforcement learning", "venue": "arXiv preprint arXiv:1806.01780,", "year": 2018 }, { "authors": [ "Gabriel Dulac-Arnold", "Richard Evans", "Hado van Hasselt", "Peter Sunehag", "Timothy Lillicrap", "Jonathan Hunt", "Timothy Mann", "Theophane Weber", "Thomas Degris", "Ben Coppin" ], "title": "Deep reinforcement learning in large discrete action spaces", "venue": "arXiv preprint arXiv:1512.07679,", "year": 2015 }, { "authors": [ "Jeffrey L Elman" ], "title": "Learning and development in neural networks: The importance of starting small", "venue": null, "year": 1993 }, { "authors": [ "Lasse Espeholt", "Hubert Soyer", "Remi Munos", "Karen Simonyan", "Volodymir Mnih", "Tom Ward", "Yotam Doron", "Vlad Firoiu", "Tim Harley", "Iain Dunning" ], "title": "Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures", "venue": "arXiv preprint arXiv:1802.01561,", "year": 2018 }, { "authors": [ "Carlos Florensa", "David Held", "Markus Wulfmeier", "Michael Zhang", "Pieter Abbeel" ], "title": "Reverse curriculum generation for reinforcement learning", "venue": "arXiv preprint arXiv:1707.05300,", "year": 2017 }, { "authors": [ "Alex Graves", "Marc G Bellemare", "Jacob Menick", "Remi Munos", "Koray Kavukcuoglu" ], "title": "Automated curriculum learning for neural networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Hado V Hasselt" ], "title": "Double q-learning", "venue": "In Advances in Neural Information Processing Systems, pp. 
2613–2621,", "year": 2010 }, { "authors": [ "Matteo Hessel", "Joseph Modayil", "Hado Van Hasselt", "Tom Schaul", "Georg Ostrovski", "Will Dabney", "Dan Horgan", "Bilal Piot", "Mohammad Azar", "David Silver" ], "title": "Rainbow: Combining improvements in deep reinforcement learning", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Max Jaderberg", "Valentin Dalibard", "Simon Osindero", "Wojciech M Czarnecki", "Jeff Donahue", "Ali Razavi", "Oriol Vinyals", "Tim Green", "Iain Dunning", "Karen Simonyan" ], "title": "Population based training of neural networks", "venue": "arXiv preprint arXiv:1711.09846,", "year": 2017 }, { "authors": [ "George Konidaris", "Andrew Barto" ], "title": "Autonomous shaping: Knowledge transfer in reinforcement learning", "venue": "In Proceedings of the 23rd international conference on Machine learning,", "year": 2006 }, { "authors": [ "Dennis Lee", "Haoran Tang", "Jeffrey O Zhang", "Huazhe Xu", "Trevor Darrell", "Pieter Abbeel" ], "title": "Modular architecture for starcraft ii with deep reinforcement learning", "venue": "In Fourteenth Artificial Intelligence and Interactive Digital Entertainment Conference,", "year": 2018 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Volodymyr Mnih", "Adria Puigdomenech Badia", "Mehdi Mirza", "Alex Graves", "Timothy Lillicrap", "Tim Harley", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Andrew W Moore" ], "title": "The parti-game algorithm for variable resolution reinforcement learning in multidimensional state-spaces", "venue": "In Advances in neural information processing systems,", "year": 1994 }, { "authors": [ "Rémi Munos", "Andrew Moore" ], "title": "Variable resolution discretization in optimal control", "venue": "Machine learning,", "year": 2002 }, { "authors": [ "Adithyavairavan Murali", "Lerrel Pinto", "Dhiraj Gandhi", "Abhinav Gupta" ], "title": "Cassl: Curriculum accelerated self-supervised learning", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2018 }, { "authors": [ "Andrew Y Ng", "Daishi Harada", "Stuart Russell" ], "title": "Policy invariance under reward transformations: Theory and application to reward shaping", "venue": "In ICML,", "year": 1999 }, { "authors": [ "Yangchen Pan", "Amir-massoud Farahmand", "Martha White", "Saleh Nabi", "Piyush Grover", "Daniel Nikovski" ], "title": "Reinforcement learning with function-valued action spaces for partial differential equation control", "venue": null, "year": 2018 }, { "authors": [ "Anastasia Pentina", "Viktoriia Sharmanska", "Christoph H Lampert" ], "title": "Curriculum learning of multiple tasks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2015 }, { "authors": [ "Tabish Rashid", "Mikayel Samvelyan", "Christian Schroeder de Witt", "Gregory Farquhar", "Jakob Foerster", "Shimon Whiteson" ], "title": "Qmix: Monotonic value function factorisation for deep multi-agent reinforcement learning", "venue": "arXiv preprint arXiv:1803.11485,", "year": 2018 }, { "authors": [ "Sebastian Ruder" ], "title": "An 
overview of multi-task learning in deep neural networks", "venue": "arXiv preprint arXiv:1706.05098,", "year": 2017 }, { "authors": [ "Mikayel Samvelyan", "Tabish Rashid", "Christian Schroeder de Witt", "Gregory Farquhar", "Nantas Nardelli", "Tim GJ Rudner", "Chia-Man Hung", "Philip HS Torr", "Jakob Foerster", "Shimon Whiteson" ], "title": "The starcraft multi-agent challenge", "venue": null, "year": 2019 }, { "authors": [ "Oliver G Selfridge", "Richard S Sutton", "Andrew G Barto" ], "title": "Training and tracking in robotics", "venue": "In IJCAI, pp", "year": 1985 }, { "authors": [ "David Silver", "Thomas Hubert", "Julian Schrittwieser", "Ioannis Antonoglou", "Matthew Lai", "Arthur Guez", "Marc Lanctot", "Laurent Sifre", "Dharshan Kumaran", "Thore Graepel" ], "title": "Mastering chess and shogi by self-play with a general reinforcement learning algorithm", "venue": "arXiv preprint arXiv:1712.01815,", "year": 2017 }, { "authors": [ "Satinder Pal Singh" ], "title": "Transfer of learning by composing solutions of elemental sequential tasks", "venue": "Machine Learning,", "year": 1992 }, { "authors": [ "Kenneth O Stanley", "Risto Miikkulainen" ], "title": "Competitive coevolution through evolutionary complexification", "venue": "Journal of artificial intelligence research,", "year": 2004 }, { "authors": [ "Sainbayar Sukhbaatar", "Zeming Lin", "Ilya Kostrikov", "Gabriel Synnaeve", "Arthur Szlam", "Rob Fergus" ], "title": "Intrinsic motivation and automatic curricula via asymmetric self-play", "venue": "arXiv preprint arXiv:1703.05407,", "year": 2017 }, { "authors": [ "Peter Sunehag", "Guy Lever", "Audrunas Gruslys", "Wojciech Marian Czarnecki", "Vinicius Zambaldi", "Max Jaderberg", "Marc Lanctot", "Nicolas Sonnerat", "Joel Z Leibo", "Karl Tuyls" ], "title": "Value-decomposition networks for cooperative multi-agent learning", "venue": "arXiv preprint arXiv:1706.05296,", "year": 2017 }, { "authors": [ "Gabriel Synnaeve", "Nantas Nardelli", "Alex Auvolat", "Soumith Chintala", "Timothée Lacroix", "Zeming Lin", "Florian Richoux", "Nicolas Usunier" ], "title": "Torchcraft: a library for machine learning research on real-time strategy games", "venue": "arXiv preprint arXiv:1611.00625,", "year": 2016 }, { "authors": [ "Aviv Tamar", "Yi Wu", "Garrett Thomas", "Sergey Levine", "Pieter Abbeel" ], "title": "Value iteration networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Ming Tan" ], "title": "Multi-agent reinforcement learning: Independent vs. 
cooperative agents", "venue": "In Proceedings of the tenth international conference on machine learning,", "year": 1993 }, { "authors": [ "Matthew E Taylor", "Peter Stone", "Yaxin Liu" ], "title": "Transfer learning via inter-task mappings for temporal difference learning", "venue": "Journal of Machine Learning Research,", "year": 2007 }, { "authors": [ "Guy Tennenholtz", "Shie Mannor" ], "title": "The natural language of actions", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Gerald Tesauro" ], "title": "Temporal difference learning and td-gammon", "venue": "Communications of the ACM,", "year": 1995 }, { "authors": [ "Nicolas Usunier", "Gabriel Synnaeve", "Zeming Lin", "Soumith Chintala" ], "title": "Episodic exploration for deep deterministic policies: An application to starcraft micromanagement tasks", "venue": "arXiv preprint arXiv:1609.02993,", "year": 2016 }, { "authors": [ "Oriol Vinyals", "Timo Ewalds", "Sergey Bartunov", "Petko Georgiev", "Alexander Sasha Vezhnevets", "Michelle Yeo", "Alireza Makhzani", "Heinrich Küttler", "John Agapiou", "Julian Schrittwieser" ], "title": "Starcraft ii: A new challenge for reinforcement learning", "venue": "arXiv preprint arXiv:1708.04782,", "year": 2017 }, { "authors": [ "Wu", "Dani Yogatama", "Julia Cohen", "Katrina McKinney", "Oliver Smith", "Tom Schaul", "Timothy Lillicrap", "Chris Apps", "Koray Kavukcuoglu", "Demis Hassabis", "David Silver" ], "title": "AlphaStar: Mastering the Real-Time Strategy Game StarCraft II. https://deepmind.com/blog/ alphastar-mastering-real-time-strategy-game-starcraft-ii/, 2019", "venue": null, "year": 2019 }, { "authors": [ "Ziyu Wang", "Tom Schaul", "Matteo Hessel", "Hado Van Hasselt", "Marc Lanctot", "Nando De Freitas" ], "title": "Dueling network architectures for deep reinforcement learning", "venue": "arXiv preprint arXiv:1511.06581,", "year": 2015 }, { "authors": [ "Shimon Whiteson", "Matthew E Taylor", "Peter Stone" ], "title": "Adaptive tile coding for value function approximation", "venue": null, "year": 2007 }, { "authors": [ "Wojciech Zaremba", "Ilya Sutskever" ], "title": "Learning to execute", "venue": "arXiv preprint arXiv:1410.4615,", "year": 2014 } ]
[ { "heading": null, "text": "In complex tasks, such as those with large combinatorial action spaces, random exploration may be too inefficient to achieve meaningful learning progress. In this work, we use a curriculum of progressively growing action spaces to accelerate learning. We assume the environment is out of our control, but that the agent may set an internal curriculum by initially restricting its action space. Our approach uses off-policy reinforcement learning to estimate optimal value functions for multiple action spaces simultaneously and efficiently transfers data, value estimates, and state representations from restricted action spaces to the full task. We show the efficacy of our approach in proof-of-concept control tasks and on challenging large-scale StarCraft micromanagement tasks with large, multi-agent action spaces." }, { "heading": "1 INTRODUCTION", "text": "The value of curricula has been well established in machine learning, reinforcement learning, and in biological systems. When a desired behaviour is sufficiently complex, or the environment too unforgiving, it can be intractable to learn the behaviour from scratch through random exploration. Instead, by “starting small” (Elman, 1993), an agent can build skills, representations, and a dataset of meaningful experiences that allow it to accelerate its learning. Such curricula can drastically improve sample efficiency (Bengio et al., 2009).\nTypically, curriculum learning uses a progression of tasks or environments. Simple tasks that provide meaningful feedback to random agents are used first, and some schedule is used to introduce more challenging tasks later during training (Graves et al., 2017). However, in many contexts neither the agent nor experimenter has such unimpeded control over the environment. In this work, we instead make use of curricula that are internal to the agent, simplifying the exploration problem without changing the environment. In particular, we grow the size of the action space of reinforcement learning agents over the course of training.\nAt the beginning of training, our agents use a severely restricted action space. This helps exploration by guiding the agent towards rewards and meaningful experiences, and provides low variance updates during learning. The action space is then grown progressively. Eventually, using the most unrestricted action space, the agents are able to find superior policies. Each action space is a strict superset of the more restricted ones. This paradigm requires some domain knowledge to identify a suitable hierarchy of action spaces. However, such a hierarchy is often easy to find. Continuous action spaces can be discretised with increasing resolution. Similarly, curricula for coping with the large combinatorial action spaces induced by many agents can be obtained from the prior that nearby agents are more likely to need to coordinate. For example, in routing or traffic flow problems nearby agents or nodes may wish to adopt similar local policies to alleviate global congestion. Our method will be valuable when it is possible to identify a restricted action space in which random exploration leads to significantly more meaningful experiences than random exploration in the full action space.\nWe propose an approach that uses off-policy reinforcement learning to improve sample efficiency in this type of curriculum learning. 
Since data from exploration using a restricted action space is still valid in the Markov Decision Processes (MDPs) corresponding to the less restricted action spaces, we can learn value functions in the less restricted action space with ‘off-action-space’ data collected by exploring in the restricted action space. In our approach, we learn value functions corresponding to each level of restriction simultaneously. We can use the relationships of these value functions to\neach other to accelerate learning further, by using value estimates themselves as initialisations or as bootstrap targets for the less restricted action spaces, as well as sharing learned state representations.\nEmpirically, we first demonstrate the efficacy of our approach in two simple control tasks, in which the resolution of discretised actions is progressively increased. We then tackle a more challenging set of problems with combinatorial action spaces, in the context of StarCraft micromanagement with large numbers of agents (50-100). Given the heuristic prior that nearby agents in a multiagent setting are likely to need to coordinate, we use hierarchical clustering to impose a restricted action space on the agents. Agents in a cluster are restricted to take the same action, but we progressively increase the number of groups that can act independently of one another over the course of training. Our method substantially improves sample efficiency on a number of tasks, outperforming learning any particular action space from scratch, a number of ablations, and an actor-critic baseline that learns a single value function for the behaviour policy, as in the work of Czarnecki et al. (2018). Code is available, but redacted here for anonymity." }, { "heading": "2 RELATED WORK", "text": "Curriculum learning has a long history, appearing at least as early as the work of Selfridge et al. (1985) in reinforcement learning, and for the training of neural networks since Elman (1993). In supervised learning, one typically has control of the order in which data is presented to the learning algorithm. For learning with deep neural networks, Bengio et al. (2009) explored the use of curricula in computer vision and natural language processing. Many approaches use handcrafted schedules for task curricula, but others (Zaremba & Sutskever, 2014; Pentina et al., 2015; Graves et al., 2017) study diagnostics that can be used to automate the choice of task mixtures throughout training. In a self-supervised control setting, Murali et al. (2018) use sensitivity analysis to automatically define a curriculum over action dimensions and prioritise their search space.\nIn some reinforcement learning settings, it may also be possible to control the environment so as to induce a curriculum. With a resettable simulator, it is possible to use a sequence of progressively more challenging initial states (Asada et al., 1996; Florensa et al., 2017). With a procedurally generated task, it is often possible to automatically tune the difficulty of the environments (Tamar et al., 2016). Similar curricula also appear often in hierarchical reinforcement learning, where skills can be learned in comparatively easy settings and then composed in more complex ways later (Singh, 1992). Taylor et al. (2007) use more general inter-task mappings to transfer Q-values between tasks that do not share state and action spaces. In adversarial settings, one may also induce a curriculum through self-play (Tesauro, 1995; Sukhbaatar et al., 2017; Silver et al., 2017). 
In this case, the learning agents themselves define the changing part of the environment.\nA less invasive manipulation of the environment involves altering the reward function. Such reward shaping allows learning policies in an easier MDP, which can then be transferred to the more difficult sparse-reward task (Colombetti & Dorigo, 1992; Ng et al., 1999). It is also possible to learn reward shaping on simple tasks and transfer it to harder tasks in a curriculum (Konidaris & Barto, 2006).\nIn contrast, learning with increasingly complex function approximators does not require any control of the environment. In reinforcement learning, this has often taken the form of adaptively growing the resolution of the state space considered by a piecewise constant discretised approximation (Moore, 1994; Munos & Moore, 2002; Whiteson et al., 2007). Stanley & Miikkulainen (2004) study continual complexification in the context of coevolution, growing the complexity of neural network architectures through the course of training. These works progressively increase the capabilities of the agent, but not with respect to its available actions.\nIn the context of planning on-line with a model, there are a number of approaches that use progressive widening to consider increasing large action spaces over the course of search (Chaslot et al., 2008), including in planning for continuous action spaces (Couëtoux et al., 2011). However, these methods cannot directly be applied to grow the action space in the model-free setting.\nA recent related work tackling our domain is that of Czarnecki et al. (2018), who train mixtures of two policies with an actor-critic approach, learning a single value function for the current mixture of policies. The mixture contains a policy that may be harder to learn but has a higher performance ceiling, such as a policy with a larger action space as we consider in this work. The mixing coefficient is initialised to only support the simpler policy, and adapted via population based training\n(Jaderberg et al., 2017). In contrast, we simultaneously learn a different value function for each policy, and exploit the properties of the optimal value functions to induce additional structure on our models. We further use these properties to construct a scheme for off-action-space learning which means our approach may be used in an off-policy setting. Empirically, in our settings, we find our approach to perform better and more consistently than an actor-critic algorithm modeled after Czarnecki et al. (2018), although we do not take on the significant additional computational requirements of population based training in any of our experiments.\nA number of other works address the problem of generalisation and representation for value functions with large discrete action spaces, without explicitly addressing the resulting exploration problem (Dulac-Arnold et al., 2015; Pan et al., 2018). These approaches typically rely on action representations from prior knowledge. Such representations could be used in combination with our method to construct a hierarchy of action spaces with which to shape exploration." }, { "heading": "3 BACKGROUND", "text": "We formalise our problem as a MDP, specified by a tuple < S,A, P, r, γ >. The set of possible states and actions are given by S and A, P is the transition function that specifies the environment dynamics, and γ is a discount factor used to specify the discounted return R = ∑T t=0 γ\ntrt for an episode of length T . 
We wish our agent to maximise this return in expectation by learning a policy $\pi$ that maps states to actions. The state-action value function (Q-function) is given by $Q^\pi(s, a) = \mathbb{E}_\pi[R \mid s, a]$. The optimal Q-function $Q^*$ satisfies the Bellman optimality equation:\n$Q^*(s, a) = \mathcal{T}Q^*(s, a) = \mathbb{E}\left[r(s, a) + \gamma \max_{a'} Q^*(s', a')\right]. \quad (1)$\nQ-learning (Watkins & Dayan, 1992) uses a sample-based approximation of the Bellman optimality operator $\mathcal{T}$ to iteratively improve an estimate of $Q^*$. Q-learning is an off-policy method, meaning that samples from any policy may be used to improve the value function estimate. We use this property to employ Q-learning for off-action-space learning, as described in the next section.\nWe also introduce some notation for restricted action spaces. In particular, for an MDP with unrestricted action space $A$ we define a set of $N$ action spaces $A_\ell$, $\ell \in \{0, \ldots, N-1\}$. Each action space is a subset of the next: $A_0 \subset A_1 \subset \ldots \subset A_{N-1} \subseteq A$. A policy restricted to actions $A_\ell$ is denoted $\pi_\ell(a|s)$. The optimal policy in this restricted policy class is $\pi^*_\ell(a|s)$, and its corresponding action-value and value functions are $Q^*_\ell(s, a)$ and $V^*_\ell(s) = \max_a Q^*_\ell(s, a)$.\nAdditionally, we define a hierarchy of actions by identifying for every action $a \in A_\ell$, $\ell > 0$, a parent action $\mathrm{parent}_\ell(a)$ in the space $A_{\ell-1}$. Since action spaces are subsets of larger action spaces, for all $a \in A_{\ell-1}$, $\mathrm{parent}_\ell(a) = a$, i.e., one child of each action is itself. Simple pieces of domain knowledge are often sufficient to define these hierarchies. For example, a discretised continuous action can identify its nearest neighbour in $A_{\ell-1}$ as a parent. In Section 5 we describe a possible hierarchy for multi-agent action spaces. One could also imagine using action-embeddings (Tennenholtz & Mannor, 2019) to learn such a hierarchy from data." }, { "heading": "4 CURRICULUM LEARNING WITH GROWING ACTION SPACES", "text": "We build our approach to growing action spaces (GAS) on off-policy value-based reinforcement learning. Q-learning and its deep-learning adaptations have shown strong performance (Hessel et al., 2018), and admit a simple framework for off-policy learning." }, { "heading": "4.1 OFF-ACTION-SPACE LEARNING", "text": "A value function for an action space $A_\ell$ may be updated with transitions using actions drawn from its own action space, or any more restricted action spaces, if we use an off-policy learning algorithm. The restricted transitions simply form a subset of the data required to learn the value functions of the less restricted action spaces. To exploit this, we simultaneously learn an estimated optimal value function $\hat{Q}^*_\ell(s, a)$ for each action space $A_\ell$, and use samples drawn from a behaviour policy based on a value function for low $\ell$ to directly train the higher-$\ell$ value functions.\nAt the beginning of each episode, we sample $\ell$ according to some distribution. The experiences generated in that episode are used to update all of the $\hat{Q}^*_{\geq \ell}(s, a)$. This off-action-space learning is a type of off-policy learning that enables efficient exploration by restricting it to the low-$\ell$ regime.
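To make this procedure concrete, the following is a minimal Python-style sketch of the off-action-space update; the names (run_episode, q_learning_update, the environment interface) and the sampling distribution over levels are illustrative assumptions rather than the exact implementation, and the per-episode (rather than per-step) sampling of the level is motivated next.

    import random

    def run_episode(env, policy):
        """Collect one episode of transitions acting with the given policy."""
        transitions, s, done = [], env.reset(), False
        while not done:
            a = policy(s)                        # e.g. epsilon-greedy on Q_l
            s_next, r, done = env.step(a)
            transitions.append((s, a, r, s_next, done))
            s = s_next
        return transitions

    def off_action_space_update(env, q_funcs, policies, level_weights):
        # Sample the behaviour level l once, at the start of the episode.
        l = random.choices(range(len(q_funcs)), weights=level_weights)[0]
        transitions = run_episode(env, policies[l])
        # Actions from A_l are also valid in every A_j with j >= l, so the
        # same data can train all less restricted value functions.
        for j in range(l, len(q_funcs)):
            q_funcs[j].q_learning_update(transitions)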
We sample at the beginning of the episode rather than at each timestep because, if the agent uses a high-$\ell$ action, it may enter a state that is inaccessible for a lower-$\ell$ policy, and we do not wish to force a low-$\ell$ value function to generalise to states that are only accessible at higher $\ell$.\nSince data from a restricted action space only supports a subset of the state-action space relevant for the value functions of less restricted action spaces, we hope that a suitable function approximator still allows some generalisation to the unexplored parts of the less restricted state-action space." }, { "heading": "4.2 VALUE ESTIMATES", "text": "Note that:\n$V^*_i(s) \leq V^*_j(s) \ \forall s \text{ if } i < j. \quad (2)$\nThis is because each action space is a strict subset of the larger ones, so the agent can always in the worst case fall back to a policy using a more restricted action space.\nThis monotonicity intuitively recommends an iterative decomposition of the value estimates, in which $\hat{Q}^*_{\ell+1}(s, a)$ is estimated as a sum of $\hat{Q}^*_\ell(s, a)$ and some positive $\Delta_\ell(s, a)$. This is not immediately possible due to the mismatch in the support of each function. However, we can leverage a hierarchical structure in the action spaces when present, as described in Section 3. In this case we can use:\n$\hat{Q}^*_{\ell+1}(s, a) = \hat{Q}^*_\ell(s, \mathrm{parent}_\ell(a)) + \Delta_\ell(s, a). \quad (3)$\nThis is a task-specific upsampling of the lower-$\ell$ value function to initialise the next value function. Both $\hat{Q}^*_\ell(s, a)$ and $\Delta_\ell(s, a)$ are learned components. We could further regularise or restrict the functional form of $\Delta_\ell$ to ensure its positivity when $\mathrm{parent}_\ell(a) = a$. However, we did not find this to be valuable in our experiments, and simply initialised $\Delta_\ell$ to be small.\nThe property (2) also implies a modified Bellman optimality equation:\n$Q^*_\ell(s, a) = \mathbb{E}\left[r(s, a) + \gamma \max_{i \leq \ell} \max_{a'} Q^*_i(s', a')\right] \quad (4)$\nThe maxima over $i < \ell$ are redundant in their role as conditions on the optimal value function $Q^*_\ell$. However, the Bellman optimality equation also gives us the form of a Q-learning update, where the term in the expectation on the RHS is used as an operator that iteratively improves an estimate of $Q^*$. When these estimates are inaccurate, the modified form of the Bellman equation may lead to different updates, allowing the solutions at higher $\ell$ to be bootstrapped from those at lower $\ell$.\nWe expect that policies with low $\ell$ are easier to learn, and that therefore the corresponding $\hat{Q}^*_\ell$ is higher value and more accurate earlier in training than those at high $\ell$. These high values could be picked up by the extra maximisation in the modified bootstrap, and thereby rapidly learned by the higher-$\ell$ value functions. Empirically however, we find that using this form for the target in our loss function performs no better than just maximising over $\hat{Q}^*_\ell(s', a')$. We discuss the choice of target and these results in more detail in Section 6.2." }, { "heading": "4.3 REPRESENTATION", "text": "By sharing parameters between the function approximators of each $Q_\ell$, we can learn a joint state representation, which can then be iteratively decoded into estimates of $Q^*$ for each $\ell$. This shared embedding can be iteratively refined by, e.g., additional network layers for each $\hat{Q}^*_\ell$ to maintain flexibility along with transfer of useful representations. This simple approach has had great success in improving the efficiency of many multi-task solutions using deep learning (Ruder, 2017)."
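To illustrate how the decomposition of Eq. (3) combines with the shared representation of Section 4.3, here is a minimal PyTorch sketch; the module names, layer sizes, and the parent-index representation are illustrative assumptions, not the paper's exact architecture.

    import torch
    import torch.nn as nn

    class GASQNetwork(nn.Module):
        """Q-heads for a hierarchy of action spaces over one shared encoder."""
        def __init__(self, state_dim, action_sizes, parents, hidden=64):
            super().__init__()
            # parents[l][a] = index in A_{l-1} of the parent of action a in A_l
            # (parents[0] is unused).
            self.parents = parents
            self.encoder = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
            # One refinement layer and one delta head per level; delta heads
            # would typically be initialised near zero (Section 4.2).
            self.refine = nn.ModuleList(
                nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
                for _ in action_sizes)
            self.delta = nn.ModuleList(nn.Linear(hidden, n) for n in action_sizes)

        def forward(self, s):
            h, qs = self.encoder(s), []
            for l, (refine, delta) in enumerate(zip(self.refine, self.delta)):
                h = refine(h)            # iteratively refined shared embedding
                d = delta(h)
                if l == 0:
                    q = d
                else:
                    # Q_{l}(s, a) = Q_{l-1}(s, parent(a)) + Delta(s, a), Eq. (3)
                    q = qs[-1][:, self.parents[l]] + d
                qs.append(q)
            return qs    # one Q-value tensor per level of restriction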
}, { "heading": "4.4 CURRICULUM SCHEDULING", "text": "We need to choose a schedule with which to increase the ` used by the behaviour policy over the course of training. Czarnecki et al. (2018) use population based training (Jaderberg et al., 2017) to choose a mixing parameter on the fly. However, this comes at significant computational cost, and\noptimises greedily for immediate performance gains. We use a simple linear schedule on a mixing parameter α ∈ [0, N ]. Initially α = 0 and we always choose ` = 0. Later, we pick ` = bαc with probability dαe − α and ` = dαe with probability α − bαc (e.g. if α = 1.1, we choose ` = 1 with 90% chance and ` = 2 with 10% chance). This worked well empirically with little effort to tune. Many other strategies exist for tuning a curriculum automatically (such as those explored by Graves et al. (2017)), and could be beneficial, at the cost of additional overhead and algorithmic complexity." }, { "heading": "5 GROWING ACTION SPACES FOR MULTI-AGENT CONTROL", "text": "In cooperative multi-agent control, the full action space allows each ofN agents to take actions from a setAagent, resulting in an exponentially large action space of size |Aagent|N . Random exploration in this action space is highly unlikely to produce sensical behaviours, so growing the action space as we propose is particularly valuable in this setting. One approach would be to limit the actions available to each agent, as done in our discretised continuous control experiments (Section 6.1) and those of Czarnecki et al. (2018). However, the joint action space would still be exponential in N . We propose instead to use hierarchical clustering, and to assign the same action to nearby agents.\nAt the first level of the hierarchy, we treat the whole team as a single group, and all agents are constrained to take the same action. At the next level of the hierarchy, we split the agents into k groups using an unsupervised clustering algorithm, allowing each group to act independently. At each further level, every group is split once again into k smaller groups. In practice, we simply use k-means clustering based on the agent’s spatial position, but this can be easily extended to more complex hierarchies using other clustering approaches.\nTo estimate the value function, we compute a state-value score V̂ (s), and a group-action delta ∆`(s, ag, g) for each group g at each level `. Then, we compute an estimated group-action value for each group, at each level, using a per-group form of (3): Q̂∗`+1(s, ag) = Q̂ ∗ ` (s,parentk(ag)) + ∆`(s, ag, g). We use Q̂∗−1(s, ·) = V̂ (s) to initialise the iterative computation, similarly to the dueling architecture of Wang et al. (2015). The estimated value of the parent action is the estimated value of the entire parent group all taking the same action as the child group. At each level ` we now have a set of group-action values.\nIn effect, a multi-agent value-learning problem still remains at each level `, but with a greatly reduced number of agents at low `. We could simply use independent Q-learning (Tan, 1993), but instead choose to estimate the joint-action value at each level as the mean of the group-action values for the groups at that `, as in the work of Sunehag et al. (2017). A less restrictive representation, such as that proposed by Rashid et al. 
(2018), could help, but we leave this direction to future work.\nA potential problem is that the clustering changes for every state, which may interfere with generalisation as group-actions will not have consistent semantics. We address this in two ways. First, we include the clustering as part of the state, and the cluster centroids are re-initialised from the previous timestep for $t > 0$ to keep the cluster semantics approximately consistent. Second, we use a functional representation that produces group-action values that are broadly agnostic to the identifier of the group. In particular, we compute a spatially resolved embedding, and pool over the locations occupied by each group. See Figure 2 and Section 6.2 for more details." }, { "heading": "6 EXPERIMENTS", "text": "We investigate two classes of problems that have a natural hierarchy in the action space. First, simple control problems where a coarse action discretisation can help accelerate exploration, and a fine action discretisation allows for a better final policy. Second, the cooperative multi-agent setting, discussed in Section 5, using large-scale StarCraft micromanagement scenarios." }, { "heading": "6.1 DISCRETISED CONTINUOUS CONTROL", "text": "As a proof-of-concept, we look at two simple examples: versions of the classic Acrobot and Mountain Car environments with discretised action spaces. Both tasks have a sparse reward of +1 when the goal is reached, and we make the exploration problem more challenging by terminating episodes with a penalty of -1 if the goal is not reached within 500 timesteps. The normalised remaining time is concatenated to the state so it remains Markovian despite the time limit. There is a further actuation cost of $0.05\|a\|^2$. At $A_0$, the actions apply a force of +1 and −1. At each subsequent $A_{\ell>0}$, each action is split into two children, one that is the same as the parent action, and the other applying half the force. Thus, there are $2^\ell$ actions in $A_\ell$. The results of our experiments are shown in Figure 1. Training with the lower resolutions $A_0$ and $A_1$ from scratch converges to finding the goal, but incurs significant actuation costs. Training with $A_2$ from scratch almost never finds the goal with $\epsilon$-greedy exploration. We also tried decaying the $\epsilon$ at a quarter of the rate ($A_2$ slow) without success. In these cases, the policy converges to the one that minimises actuation costs, never finding the goal. Training with a growing action space explores to find the goal early, and then uses this experience to transition smoothly into a solution that finds the goal but takes a slower route that minimises actuation costs while achieving the objective." }, { "heading": "6.2 COMBINATORIAL ACTION SPACES: STARCRAFT BATTLES", "text": "" }, { "heading": "6.2.1 LARGE-SCALE STARCRAFT MICROMANAGEMENT", "text": "The real-time strategy game StarCraft and its sequel StarCraft II have emerged as popular platforms for benchmarking reinforcement learning algorithms (Synnaeve et al., 2016; Vinyals et al., 2017). Full game-play has been tackled by e.g. (Lee et al., 2018; Vinyals et al., 2019), while other works focus on sub-problems such as micromanagement, the low-level control of units engaged in a battle between two armies (e.g. (Usunier et al., 2016)).
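Since these micromanagement scenarios instantiate the grouping scheme of Section 5, a minimal sketch of that spatial grouping is given here; it uses scikit-learn's k-means purely for illustration, clusters flatly into $k^\ell$ groups instead of the recursive per-group split described above (a simplifying assumption), and warm-starts from the previous timestep's centroids to mirror the consistency trick of Section 5.

    import numpy as np
    from sklearn.cluster import KMeans

    def assign_groups(positions, level, k=2, prev_centroids=None):
        """Split agents into k**level spatial groups by k-means.

        positions: (n_agents, 2) array of agent coordinates.
        Returns integer group ids and the fitted centroids, which can be
        passed back in at the next timestep so group semantics stay
        approximately consistent over an episode.
        """
        n_groups = k ** level
        if n_groups <= 1 or len(positions) <= n_groups:
            return np.zeros(len(positions), dtype=int), None
        init = prev_centroids if prev_centroids is not None else "k-means++"
        km = KMeans(n_clusters=n_groups, init=init, n_init=1)
        group_ids = km.fit_predict(positions)
        return group_ids, km.cluster_centers_

    def joint_value(group_action_values):
        # Mean of the group-action values at one level, as in the
        # value-decomposition of Sunehag et al. (2017).
        return float(np.mean(group_action_values))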
Efforts to approach the former problem have required some subset of human demonstrations, hierarchical methods, and massive compute scale, and so we focus on the latter as a more tractable benchmark to evaluate our methods.\nMost previous work on RL benchmarking with StarCraft micromanagement is restricted to at most 20-30 units (Samvelyan et al., 2019; Usunier et al., 2016). In our experiments we focus on much larger-scale micromanagement scenarios with 50-100 units on each side of the battle. To further increase the difficulty of these micromanagement scenarios, in our setting the starting locations of the armies are randomised, and the opponent is controlled by scripted logic that holds its position until any agent-controlled unit is in range, and then focus-fires on the closest enemy. This increases the exploration challenge, as our agents need to learn to find the enemy first, while the enemy holds a strong defensive position. The action space for each unit permits an attack-move or move action in eight cardinal directions, as well as a stop action that causes the unit to passively hold its position.\nIn our experiments, we use $k = 2$ for k-means clustering and split down to at most four or eight groups. The maximum number of groups in an experiment with $A_\ell$ is $2^\ell$. Although our approach is designed for off-policy learning, we follow the common practice of using n-step Q-learning to accelerate the propagation of values (Hessel et al., 2018). Our base algorithm uses the objective of n-step Q-learning from the work of Mnih et al. (2016), and collects data from multiple workers into a short queue similarly to Espeholt et al. (2018). Full details can be found in the Appendix." }, { "heading": "6.2.2 MODEL ARCHITECTURE", "text": "We propose an architecture to efficiently represent the value functions of the action-space hierarchy. The overall structure is shown in Figure 2. We start with the state of the scenario (1). Ally units are blue and split into two groups. From the state, features are extracted from the units and map (see Appendix for full details). These features are concatenated with a one-hot representation of the unit's group (for allied agents), and are embedded with a small MLP. A 2-D grid of embeddings is constructed by adding up the unit embeddings for all units in each cell of the grid (2). The embeddings are passed through a residual CNN to produce a final embedding (3), which is copied several times and decoded as follows. First, a state-value branch computes a scalar value by taking a global mean pooling (4) and passing the result through a 2-layer MLP (6). Then, for each $\ell$, a masked mean-pooling is used to produce an embedding for each group at that $A_\ell$ by masking out the positions in the spatial embedding where there are no units of that group (5a, 5b, 5c). A single evaluation MLP for each $\ell$ is used to decode this embedding into a group action-score (7a, 7b, 7c). This architecture allows a shared state representation to be efficiently decoded into value-function contributions for groups of any size, at any level of restriction in the action space.\nWe consider two approaches for combining these outputs. In our default approach, described in Section 5, each group's action-value is given by the sum of the state-value and group-action-scores for the group and its parents (8a, 8b). In 'SEP-Q', each group's action-value is simply given by the state-value added to the group-action score, i.e., $\hat{Q}^*_\ell(s, a_g) = \hat{V}(s) + \Delta_\ell(s, a_g, g)$.
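Stepping back to the decode path (steps 5a-7c above), a minimal PyTorch sketch of the masked mean-pooling follows; tensor shapes and argument names are illustrative assumptions.

    import torch

    def group_action_scores(spatial_emb, group_masks, eval_mlps):
        """Decode one shared spatial embedding into per-group action scores.

        spatial_emb: (B, C, H, W) embedding from the residual CNN.
        group_masks: per level, a (B, G_l, H, W) tensor that is 1 where a
            cell is occupied by a unit of that group and 0 elsewhere.
        eval_mlps:   one evaluation MLP per level, mapping C -> n_actions.
        """
        scores = []
        for mask, mlp in zip(group_masks, eval_mlps):
            m = mask.unsqueeze(2)                  # (B, G, 1, H, W)
            e = spatial_emb.unsqueeze(1)           # (B, 1, C, H, W)
            pooled = (e * m).sum(dim=(-1, -2))     # (B, G, C)
            # Mean over occupied cells; clamp guards empty groups.
            pooled = pooled / m.sum(dim=(-1, -2)).clamp(min=1.0)
            scores.append(mlp(pooled))             # (B, G, n_actions)
        return scores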
'SEP-Q' is thus an ablation in which the action-value estimates for restricted action spaces do not initialise the action-value estimates of their child actions." }, { "heading": "6.2.3 RESULTS AND DISCUSSION", "text": "Figure 3 presents the results of our method, as well as a number of baselines and ablations, on a variety of micromanagement tasks. Our method is labeled Growing Action Spaces GAS($\ell$), such that GAS(2) will grow from $A_0$ to $A_2$. Our primary baselines are policies trained with action spaces $A_0$ or $A_2$ from scratch. GAS(2) consistently outperforms both of these variants. Policies trained from scratch on $A_2$ struggle with exploration, in particular in the harder scenarios where the opponent has a numbers advantage. Policies trained from scratch on $A_0$ learn quickly, but plateau comparatively low, due to the limited ability of a single group to position effectively. GAS(2) benefits from the efficient exploration enabled by an initialisation at $A_0$, and uses the data gathered under this policy to efficiently transfer to $A_2$, enabling a higher asymptotic performance. We also compare against a Mix&Match (MM) baseline using the actor-critic approach of Czarnecki et al. (2018), but adapted for our new multi-agent setting and supporting a third level in the mixture of policies ($A_0$, $A_1$, $A_2$). We tuned hyperparameters for all algorithms on the easiest, fastest-training scenario (80 marines vs. 80 marines). On this scenario, MM learns faster but plateaus at the same level as GAS(2). MM underperforms on all other scenarios to varying degrees. Learning separate value functions for each $A_\ell$, as in our approach, appears to accelerate the transfer learning in the majority of settings. Another possible explanation is that MM may be more sensitive to hyperparameters. We do not use population based training to tune hyperparameters on the fly, which could otherwise help MM adapt to each scenario. However, GAS would presumably also benefit from population based training, at the cost of further computation and sample efficiency.\nThe policies learned by GAS exhibit good tactics. Control of separate groups is used to position our army so as to maximise the number of attacking units by forming a wall or a concave that surrounds the enemy, and by coordinating a simultaneous assault. Figure 5 in the Appendix shows some example learned policies. In scenarios where MM fails to learn well, it typically falls into a local minimum of attacking head-on.\nIn each scenario, we test an ablation GAS(2): ON-AC that does not use our off-action-space update, instead training each level of the Q-function only with data sampled at that level. This ablation performs somewhat worse on average, although the size of the impact varies in different scenarios. In some tasks, it is beneficial to accelerate learning for finer action spaces using data drawn from the off-action-space policy. In Appendix A.1.1, the same ablation shows significantly worse performance on the Mountain Car task and comparable performance on Acrobot.\nWe present a number of further ablations on two scenarios. The most striking failure is of the 'SEP-Q' variant which does not compose the value function as a sum of scores in the hierarchy. It is critical to ensure that values are well-initialised as we move to less restricted action spaces.
In the discretised continuous control tasks, 'SEP-Q' also underperforms, although less dramatically.\nThe choice of target is less important: performing a max over coarser action spaces to construct the target as described in Section 4.2 does not improve learning speed as intended. One potential reason is that maximising over more potential targets increases the maximisation bias already present in Q-learning (Hasselt, 2010). Additionally, we use an n-step objective which combines a partial on-policy return with the bootstrap target, which could reduce the relative impact of the choice of target.\nFinally, we experiment with a higher $\ell$. Unfortunately, asymptotic performance is degraded slightly once we use $A_3$ or higher. One potential reason is that it decreases the average group size, pushing against the limits of the spatial resolution that may be captured by our CNN architecture. Higher $\ell$ increases the amount of time that there are fewer units than groups, leaving certain groups empty and rendering our masked pooling operation degenerate. We do not see a fundamental limitation that should restrict the further growth of the action space, although we note that most hierarchical approaches in the literature avoid too many levels of depth. For example, Czarnecki et al. (2018) only mix between two sizes of action spaces rather than the three we progress through in the majority of our GAS experiments." }, { "heading": "7 CONCLUSION", "text": "In this work, we presented an algorithm for growing action spaces with off-policy reinforcement learning to efficiently shape exploration. We learn value functions for all levels of a hierarchy of restricted action spaces simultaneously, and transfer data, value estimates, and representations from more restricted to less restricted action spaces. We also present a strategy for using this approach in cooperative multi-agent control. In discretised continuous control tasks and challenging multi-agent StarCraft micromanagement scenarios, we demonstrate empirically the effectiveness of our approach and the value of off-action-space learning. An interesting avenue for future work is to automatically identify how to restrict action spaces for efficient exploration, potentially through meta-optimisation. We also look to explore more complex and deeper hierarchies of action spaces." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 DISCRETISED CONTINUOUS CONTROL", "text": "" }, { "heading": "A.1.1 ADDITIONAL ABLATIONS", "text": "Here, we present results on some additional ablations of GAS on the discretised continuous control tasks. SEP-Q performs slightly worse on both tasks, a less dramatic failure than in the StarCraft experiments. These value functions are simpler, and it is easier to learn the new action space's value without relying so much on the previous one. ON-AC performs worse only on Mountain Car, suggesting once again that the significance of this component of the algorithm is somewhat problem-dependent. We also test a version that follows the intuition of the 'Match' objective of M&M more closely, adapted for the value-based setting: instead of using an adaptive initialisation of each level's Q-function as described in the main text, we use an L2 penalty to 'Match' the new level's value function to its parent action, which should have a similar effect. This variant performs similarly here (perhaps slightly worse in the more challenging Mountain Car)."
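To make the linear schedule of Section 4.4 concrete alongside the hyperparameters listed next, here is a minimal sketch; the function names are illustrative, and the lead-in/growth structure matches the GAS schedules described in this appendix.

    import random

    def current_alpha(step, lead_in, growth, n_levels):
        """Mixing coefficient: held at 0 during the lead-in, then linear."""
        if step < lead_in:
            return 0.0
        return min((step - lead_in) / growth, n_levels - 1)

    def sample_level(alpha):
        """Pick l = floor(alpha) w.p. ceil(alpha) - alpha, else ceil(alpha).

        E.g. alpha = 1.1 gives l = 1 with 90% chance and l = 2 with 10%.
        """
        lo = int(alpha)
        hi = lo + (alpha % 1 > 0)
        if lo == hi:
            return lo
        return hi if random.random() < (alpha - lo) else lo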
}, { "heading": "A.1.2 HYPERPARAMETERS", "text": "For our experiments in discretised continous control, we use a standard DQN trainer (Mnih et al., 2015) with the following parameters.\nParameter Value batch size 128 replay buffer size 10000 target update interval 200 initial 1.0 final 0.1 decay 25000 env steps ` lead-in 25000 env steps ` growth 25000 env steps env steps per model udpate 4 Adam learning rate 5e-4 Adam 1e-4\nFor GAS experiments, we keep the mixing coefficient α = 0 for 25000 environment steps, and then increase it linearly by 1 every 25000 steps until reaching the maximum value. We use γ = 0.998 for our Acrobot experiments, but reduce it to γ = 0.99 for Mountain Car to prevent divergingQ-values.\nOur model consists of fully-connected ReLU layers, with 128 hidden units for the first and 64 hidden units for all subsequent layers. Two layers are applied as an encoder. Then, for each ` one layer is applied on the current embedding to produce a new embedding, and an evaluation layer on that embedding produces the Q-values for that level." }, { "heading": "A.2 STARCRAFT MICROMANAGEMENT SCENARIOS", "text": "" }, { "heading": "A.2.1 SCENARIOS AND LEARNED STRATEGIES", "text": "We explore five Starcraft micromanagement scenarios: 50 hydralisks vs 50 hydralisks, 80 marines vs 80 marines, 80 marines vs 85 marines, 60 marines vs 65 marines, 95 zerglings vs 50 marines. In these scenarios, our model controls the first set of units, and the opponent controls the second set.\nThe opponent is a scripted opponent that holds its location until an opposing unit is within range to attack. Then, the opponent will engage in an ”attack-closest” behavior, as described in Usunier et al. (2016), where each unit individually targets the closest unit to it. Having the opponent remain stationary until engaged makes this a more difficult problem – the agent must find its opponent, and attack into a defensive position, which requires good positions prior to engagement.\nAs mentioned in section 6.2, all of our scenarios require control of a much larger number of units than previous work. The 50 hydralisks and 80v80 marines scenarios are both imbalanced as a result of attacking into a defensive position. The optimal strategy for 80 marines vs 85 marines and 60 vs 65 marines requires slightly more sophisticated unit positioning, and the 95 zerglings vs 50 marines scenario requires the most precise positioning. The agent can use the enemy’s initial stationary positioning to its advantage by slightly surrounding the opponent in a concave, ensuring that the outermost units are in its attack range, but far enough away to be out of range of the center-most enemy units. Ideally, the timing of the groups in all scenarios should be coordinated such that all units get in range of the opponent at roughly the same point in time. Figure 5 shows how our model is able to exhibit this level of unit control." }, { "heading": "A.2.2 FEATURES", "text": "We use a standard features for the units and map, given by TorchcraftAI 1\nFor each of the units, the following features are extracted: 1https://github.com/TorchCraft/TorchCraftAI\n• Current x, y positions. • Current x, y velocities. 
• Current hitpoints • Armor and damage values • Armor and damage types • Range versus both ground and air units • Current weapon cooldown • A few boolean flags on some miscellaneous unit attributes\nApproximate normalization for each feature keeps its value approximately between 0 and 1.\nFor the map, the following features are extracted for each tile in the map:\n• a one-hot encoding of the tile's ground height (4 channels) • a boolean representing whether or not the given tile is walkable • a boolean representing whether or not the given tile is buildable • and a boolean representing whether or not the given tile is covered by fog of war.\nThe features form an $H \times W \times 7$ tensor, where our map has height $H$ and width $W$." }, { "heading": "A.2.3 ENVIRONMENT DETAILS", "text": "We use a frame-skip of 25, approximately 1 second of real time, allowing for reasonably fine-grained control but without making the exploration and credit assignment problems too challenging.\nWe calculate at every timestep the difference in total health points (HP) and number of units for the enemy from the last step, normalised by the total starting HP and unit count. As a reward function, we use the normalised damage dealt, plus 4 times the normalised units killed, plus an additional reward of 8 for winning the scenario by killing all enemy units. This reward function is designed such that the agent gets some reward for doing damage and killing units, but the reward from doing damage will never be greater than from winning the scenario. Ties and timeouts are considered losses." }, { "heading": "A.3 EXPERIMENTAL DETAILS", "text": "" }, { "heading": "A.3.1 MODEL", "text": "As described in Section 6.2.2, a custom model architecture is used for StarCraft micromanagement. Each unit's feature vector is embedded to size 128 in step 2 of Figure 2. The grid onto which the unit features and map features are scattered is the size of the StarCraft map of the scenario in walktiles, downsampled by a factor of 8. After being embedded, the unit features for ally and enemy units are concatenated with the downsampled map features and sent into a ResNet encoder with four residual blocks (stride 7, padding 3). The output is an embedding of size 64.\nThe decoder uses a mean pooling over the embedding cells as described in Section 6.2.2. Each evaluator is a 2-layer MLP with 64 hidden units and 17 outputs, one for each action. All layers are separated with ReLU nonlinearities." }, { "heading": "A.3.2 TRAINING HYPERPARAMETERS", "text": "We use 64 parallel actors to collect data in a short queue from which batches are removed when they are consumed by the learner. We use batches of 32 6-step segments for each update.\nFor the Q-learning experiments, we used the Adam optimizer with a learning rate of $2.5 \times 10^{-4}$ and $\epsilon = 1 \times 10^{-4}$. For the MM baseline experiments, we use a learning rate of $1 \times 10^{-4}$, entropy loss coefficient of $8 \times 10^{-3}$ and value loss coefficient 0.5. The learning rates and entropy loss coefficient were tuned by random search, training with $A_0$ from scratch on the 80 marines vs 80 marines scenario with 10 configurations sampled from log-uniform(−5, −3) for the learning rate and log-uniform(−3, −1) for the entropy loss coefficient. For Q-learning, we use an $\epsilon$-greedy exploration strategy, with $\epsilon$ decaying linearly from 1.0 to 0.1 over the first 10000 model updates. We also use a target network that copies the behaviour model's parameters every 200 model updates.\nWe also use a linear schedule to grow the action-space.
There is a lead-in of 5000 model updates, during which the action-space is held constant at $A_0$, to prevent the action space from growing when $\epsilon$ or the policy entropy is too high. The action-space is then grown linearly at a rate of 10000 model updates per level of restriction, so that after 10000 updates, we act entirely at $A_1$ and after 20000, entirely at $A_2$." } ]
2019
null
SP:76b0a90c46bc2151088210ca47ea4761706f1716
[ "The paper claims that for invertible neural networks, mathematical guarantees on invertibility is not enough, and we also require numerical invertibility. To this end, the lipschitz constants/condition numbers of Jacobians of both the forward and inverse maps of invertible NNs based on coupling layers are examined mathematically and experimentally. The paper also displays cases that expose non-invertibility in these architectures via gradient-based construction of adversarial inputs, as well as a decorrelation benchmark task, and show that spectral normalization can be a remedy for stabilizing these flows.", "This paper analyses the numerical invertibility of analytically invertible neural networks (INN). The numerical invertibility depends on the Lipschitz constant of the respective transformation. The paper provides Lipschitz bounds on the components of building blocks for certain INN architectures, which would guarantee numerical stability. Furthermore, this paper shows empirically, that the numerical invertibility can indeed be a problem in practice." ]
Guarantees in deep learning are hard to achieve due to the interplay of flexible modeling schemes and complex tasks. Invertible neural networks (INNs), however, provide several mathematical guarantees by design, such as the ability to approximate non-linear diffeomorphisms. One less studied advantage of INNs is that they enable the design of bi-Lipschitz functions. This property has been used implicitly by various works to design generative models, compute memory-saving gradients, regularize classifiers, and solve inverse problems. In this work, we study Lipschitz constants of invertible architectures in order to investigate guarantees on the stability of their forward and inverse mappings. Our analysis reveals that commonly-used INN building blocks can easily become non-invertible, leading to questionable “exact” log likelihood computations and training difficulties. We make use of numerical analysis tools to diagnose non-invertibility in practice. Finally, based on our theoretical analysis, we show how to guarantee numerical invertibility for one of the most common INN architectures.
[]
[ { "authors": [ "Cem Anil", "James Lucas", "Roger Grosse" ], "title": "Sorting out Lipschitz function approximation", "venue": "International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Lynton Ardizzone", "Jakob Kruse", "Sebastian Wirkert", "Daniel Rahner", "Eric W Pellegrini", "Ralf S Klessen", "Lena Maier-Hein", "Carsten Rother", "Ullrich Köthe" ], "title": "Analyzing inverse problems with invertible neural networks", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Martin Arjovsky", "Soumith Chintala", "Léon Bottou" ], "title": "Wasserstein generative adversarial networks", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "U.M. Ascher" ], "title": "Numerical methods for evolutionary differential equations", "venue": "Computational Science and Engineering. Society for Industrial and Applied Mathematics,", "year": 2008 }, { "authors": [ "Jens Behrmann", "Sören Dittmer", "Pascal Fernsel", "Peter Maaß" ], "title": "Analysis of Invariance and Robustness via Invertibility of ReLU-Networks", "venue": "arXiv preprint arXiv:1806.09730,", "year": 2018 }, { "authors": [ "Jens Behrmann", "Will Grathwohl", "Ricky T.Q. Chen", "David Duvenaud", "Jörn-Henrik Jacobsen" ], "title": "Invertible residual networks", "venue": "International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Wieland Brendel", "Jonas Rauber", "Matthias Bethge" ], "title": "Decision-based adversarial attacks: Reliable attacks against black-box machine learning models", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Robin Brügger", "Christian F. Baumgartner", "Ender Konukoglu" ], "title": "A partially reversible u-net for memory-efficient volumetric image segmentation", "venue": "In Medical Image Computing and Computer Assisted Intervention – MICCAI,", "year": 2019 }, { "authors": [ "Ricky T.Q. Chen", "Yulia Rubanova", "Jesse Bettencourt", "David Duvenaud" ], "title": "Neural ordinary differential equations", "venue": "Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Ricky T.Q. 
Chen", "Jens Behrmann", "David Duvenaud", "Jörn-Henrik Jacobsen" ], "title": "Residual flows for invertible generative modeling", "venue": "Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Brian Cheung", "Jesse A Livezey", "Arjun K Bansal", "Bruno A Olshausen" ], "title": "Discovering hidden factors of variation in deep networks", "venue": "arXiv preprint arXiv:1412.6583,", "year": 2014 }, { "authors": [ "Michael Cogswell", "Faruk Ahmed", "Ross Girshick", "Larry Zitnick", "Dhruv Batra" ], "title": "Reducing overfitting in deep networks by decorrelating representations", "venue": "arXiv preprint arXiv:1511.06068,", "year": 2015 }, { "authors": [ "Ivo Danihelka", "Balaji Lakshminarayanan", "Benigno Uria", "Daan Wierstra", "Peter Dayan" ], "title": "Comparison of maximum likelihood and GAN-based training of real NVPs", "venue": "arXiv preprint arXiv:1705.05263,", "year": 2017 }, { "authors": [ "Laurent Dinh", "David Krueger", "Yoshua Bengio" ], "title": "NICE: Non-linear independent components estimation", "venue": "arXiv preprint arXiv:1410.8516,", "year": 2014 }, { "authors": [ "Laurent Dinh", "Jascha Sohl-Dickstein", "Samy Bengio" ], "title": "Density estimation using real NVP", "venue": "International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale adversarial representation learning", "venue": "arXiv preprint arXiv:1907.02544,", "year": 2019 }, { "authors": [ "Conor Durkan", "Artur Bekasov", "Ian Murray", "George Papamakarios" ], "title": "Neural spline flows", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "H. Federer" ], "title": "Geometric measure theory", "venue": "Grundlehren der mathematischen Wissenschaften. Springer,", "year": 1969 }, { "authors": [ "Aidan N Gomez", "Mengye Ren", "Raquel Urtasun", "Roger B Grosse" ], "title": "The reversible residual network: Backpropagation without storing activations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Henry Gouk", "Eibe Frank", "Bernhard Pfahringer", "Michael Cree" ], "title": "Regularisation of neural networks by enforcing Lipschitz continuity", "venue": "arXiv preprint arXiv:1804.04368,", "year": 2018 }, { "authors": [ "Will Grathwohl", "Ricky T.Q. 
Chen", "Jesse Bettencourt", "Ilya Sutskever", "David Duvenaud" ], "title": "FFJORD: Free-form continuous dynamics for scalable reversible generative models", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Aditya Grover", "Manik Dhar", "Stefano Ermon" ], "title": "Flow-GAN: Combining maximum likelihood and adversarial learning in generative models", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron C Courville" ], "title": "Improved training of Wasserstein GANs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Suyog Gupta", "Ankur Agrawal", "Kailash Gopalakrishnan", "Pritish Narayanan" ], "title": "Deep learning with limited numerical precision", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Sepp Hochreiter" ], "title": "GANs trained by a two time-scale update rule converge to a local nash equilibrium", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Emiel Hoogeboom", "Rianne van den Berg", "Max Welling" ], "title": "Emerging convolutions for generative normalizing flows", "venue": "International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Jörn-Henrik Jacobsen", "Arnold Smeulders", "Edouard Oyallon" ], "title": "i-RevNet: Deep invertible networks", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Jörn-Henrik Jacobsen", "Jens Behrmann", "Richard Zemel", "Matthias Bethge" ], "title": "Excessive invariance causes adversarial vulnerability", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Priyank Jaini", "Kira A. 
Selby", "Yaoliang Yu" ], "title": "Sum-of-squares polynomial flow", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Durk P Kingma", "Prafulla Dhariwal" ], "title": "Glow: Generative flow with invertible 1x1 convolutions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Durk P Kingma", "Tim Salimans", "Rafal Jozefowicz", "Xi Chen", "Ilya Sutskever", "Max Welling" ], "title": "Improved variational inference with inverse autoregressive flow", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Cornelius Lanczos" ], "title": "An iteration method for the solution of the eigenvalue problem of linear differential and integral operators", "venue": "Journal of Research of the National Bureau of Standards,", "year": 1950 }, { "authors": [ "Daniel Levy", "Matthew D Hoffman", "Jascha Sohl-Dickstein" ], "title": "Generalizing Hamiltonian Monte Carlo with neural networks", "venue": "arXiv preprint arXiv:1711.09268,", "year": 2017 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In International Conference on Computer Vision,", "year": 2015 }, { "authors": [ "Mario Lucic", "Karol Kurach", "Marcin Michalski", "Sylvain Gelly", "Olivier Bousquet" ], "title": "Are GANs created equal? A large-scale study", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Matthew MacKay", "Paul Vicol", "Jimmy Ba", "Roger B Grosse" ], "title": "Reversible recurrent neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Dougal Maclaurin", "David Duvenaud", "Ryan Adams" ], "title": "Gradient-based hyperparameter optimization through reversible learning", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Aravindh Mahendran", "Andrea Vedaldi" ], "title": "Understanding deep image representations by inverting them", "venue": "Conference on Computer Vision and Pattern Recognition,", "year": 2014 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "George Papamakarios", "Theo Pavlakou", "Iain Murray" ], "title": "Masked autoregressive flow for density estimation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Danilo Rezende", "Shakir Mohamed" ], "title": "Variational inference with normalizing flows", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Jiaming Song", "Shengjia Zhao", "Stefano Ermon" ], "title": "A-NICE-MC: Adversarial training for MCMC", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Yang Song", "Chenlin Meng", "Stefano Ermon" ], "title": 
"MintNet: Building invertible neural networks with masked convolutions", "venue": "Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv preprint arXiv:1312.6199,", "year": 2013 }, { "authors": [ "Jakub M Tomczak", "Max Welling" ], "title": "Improving variational auto-encoders using householder flow", "venue": "arXiv preprint arXiv:1611.09630,", "year": 2016 }, { "authors": [ "Aladin Virmaux", "Kevin Scaman" ], "title": "Lipschitz regularity of deep neural networks: Analysis and efficient estimation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "arXiv preprint arXiv:1605.07146,", "year": 2016 }, { "authors": [ "Adam (Kingma", "Ba" ], "title": "2015) with fixed learning rate 1e-4 and no weight decay, and trained on mini-batches of size 64. The CIFAR-10 images were normalized to the range [-0.5, 0.5], and were dequantized with uniform noise in [0, 1e-6]. Effect of Model Depth. Furthermore, we investigated the effect of network depth on stability since its an additional influence factor besides the selection of each invertible building block", "venue": null, "year": 2015 }, { "authors": [ "Kingma", "Dhariwal" ], "title": "2018), they observed the opposite phenomenon that a model trained with maximum likelihood generates better samples after decreasing the entropy in the prior. See Figure 21 for samples after refitting the prior", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Invertible neural networks (INNs) have become a standard building block in the deep learning toolkit. Invertibility is useful for training generative models with exact likelihoods (Dinh et al., 2014; 2017; Kingma & Dhariwal, 2018; Kingma et al., 2016; Behrmann et al., 2019; Chen et al., 2019), increasing posterior flexibility in VAEs (Rezende & Mohamed, 2015; Tomczak & Welling, 2016; Papamakarios et al., 2017), learning transition operators in MCMC samplers (Song et al., 2017; Levy et al., 2017), computing memory-efficient gradients (Gomez et al., 2017; Donahue & Simonyan, 2019), allowing for bi-directional training (Grover et al., 2018), solving inverse problems (Ardizzone et al., 2019) and analysing adversarial robustness (Jacobsen et al., 2019).\nThe application space of INNs is rapidly growing and many approaches for constructing invertible architectures have been proposed. A common way to construct invertible networks is to use triangular coupling layers (Dinh et al., 2014; 2017; Kingma & Dhariwal, 2018), where dimension partitioning is interleaved with ResNet-type computation. Another approach is to use various forms of masked convolutions, generalizing the dimension partitioning approach of coupling layers (Song et al., 2019; Hoogeboom et al., 2019). To avoid dimension partitioning altogether, multiple approaches based on efficiently estimating the log-determinant of the Jacobian, necessary for applying the change of variable formula, have been proposed to allow for free-form Jacobian structure (Grathwohl et al., 2019; Behrmann et al., 2019; Chen et al., 2019).\nFrom a mathematical perspective, invertible architectures enable several unique guarantees like:\n• Enabling flexible approximation of non-linear diffeomorphisms (Rezende & Mohamed, 2015; Dinh et al., 2017; Kingma & Dhariwal, 2018; Chen et al., 2019) • Memory-saving gradient computation (Gomez et al., 2017; Donahue & Simonyan, 2019) • Fast analytical invertibility (Dinh et al., 2014) • Guaranteed preservation of mutual information and exact access to invariants of deep\nnetworks (Jacobsen et al., 2018; 2019).\nDespite the increased interest in invertible neural networks, little attention has been paid to guarantees on their numerical invertibility. Specifically, this means analyzing their ability to learn bi-Lipschitz\nneural networks, i.e. Lipschitz continuous neural networks with a bound on the Lipschitz constant of the forward and inverse mapping.\nWhile the stability analysis of neural networks has received significant attention e.g. due to adversarial examples (Szegedy et al., 2013), the focus here is only on bounding Lipschitz constants of the forward mapping. However, bounding the Lipschitz constant of the inverse mapping is of major interest, e.g. when reconstructing inputs from noisy or imprecise features. In fact, analytical invertibility as provided by some invertible architectures does not necessarily imply numerical invertibility in practice.\nIn this paper, we first discuss the relevance of controlling the bi-Lipschitz bounds of invertible networks. Afterwards we analyze Lipschitz bounds of commonly used invertible neural network building blocks. Our contributions are:\n• We argue for forward and inverse stability analysis as a unified viewpoint on invertible network (non-)invertibility. To this end, we derive Lipschitz bounds of commonly-used invertible building blocks for their forward and inverse maps. 
• We numerically monitor and detect (non-)invertibility for different practical tasks such as classification and generative modeling. • We show how this overlooked issue with non-invertibility can lead to questionable claims when computing exact likelihoods with the change-of-variable formula. • Finally, we study spectral normalization as a stabilizer for one of the most commonly-used families of INN architectures, namely additive coupling blocks." }, { "heading": "2 BACKGROUND AND MOTIVATION", "text": "Invertible neural networks are bijective functions with a parametrized forward mapping $F_\theta: \mathbb{R}^d \to \mathbb{R}^d$ with $F_\theta: x \mapsto z$, where $\theta \in \mathbb{R}^p$ defines the parameter vector. Additionally, they define an inverse mapping $F_\theta^{-1}: \mathbb{R}^d \to \mathbb{R}^d$ with $F_\theta^{-1}: z \mapsto x$. This inverse can be given in closed-form (analytical inverse, e.g. Dinh et al. (2017); Kingma & Dhariwal (2018)) or approximated numerically (numerical inverse, e.g. Behrmann et al. (2019); Song et al. (2019)).\nBefore we discuss building blocks of invertible networks, we provide some background and motivation for studying forward and inverse stability.\nDefinition 1 (Lipschitz and bi-Lipschitz continuity). A function $F: (\mathbb{R}^{d_1}, \|\cdot\|) \to (\mathbb{R}^{d_2}, \|\cdot\|)$ is called Lipschitz continuous if there exists a constant $L =: \mathrm{Lip}(F)$ such that\n$\|F(x_1) - F(x_2)\| \leq L \|x_1 - x_2\|, \quad \forall x_1, x_2 \in \mathbb{R}^{d_1}.$\nIf an inverse $F^{-1}: (\mathbb{R}^{d_2}, \|\cdot\|) \to (\mathbb{R}^{d_1}, \|\cdot\|)$ and a constant $L^* =: \mathrm{Lip}(F^{-1})$ exist such that\n$\|F^{-1}(y_1) - F^{-1}(y_2)\| \leq L^* \|y_1 - y_2\|, \quad \forall y_1, y_2 \in \mathbb{R}^{d_2},$\nthen $F$ is called bi-Lipschitz continuous.\nRemark 2. We focus on invertible functions $F: (\mathbb{R}^d, \|\cdot\|_2) \to (\mathbb{R}^d, \|\cdot\|_2)$, i.e. functions where the domain and co-domain are of the same dimensionality $d$ and the norm is given by the Euclidean norm.\nLemma 3. (Rademacher (Federer, 1969, Theorem 3.1.6)) If $F: \mathbb{R}^d \to \mathbb{R}^d$ is a locally Lipschitz continuous function (i.e. a function whose restriction to a neighborhood around any point is Lipschitz), then $F$ is differentiable almost everywhere. Moreover, if $F$ is Lipschitz continuous, then\n$\mathrm{Lip}(F) = \sup_{x \in \mathbb{R}^d} \|J_F(x)\|_2,$\nwhere $J_F(x)$ is the Jacobian matrix of $F$ at $x$ and $\|J_F(x)\|_2$ denotes its spectral norm.\nLipschitz bounds on the forward mapping are of crucial importance in several areas, including adversarial example research (Szegedy et al., 2013), avoiding exploding gradients, and the training of Wasserstein GANs (Anil et al., 2019). The stability of the inverse, however, can have a similar impact. For instance, having a Lipschitz bound on the inverse may avoid vanishing gradients during training.\nGiven that deep-learning computations are carried out with limited precision, imprecision is always introduced in both the forward and backward passes, i.e., $z^\delta = F(x) + \delta$ and $\hat{x}^\delta = F^{-1}(z^\delta)$. Instability in either pass will aggravate this problem, and essentially make the invertible network numerically non-invertible. To summarize, this problem occurs in the following situations:\n• Numerical reconstruction of $x$, where features $z^\delta$ are inexact due to limited precision (e.g. when computations are executed in single precision, as is common on modern hardware). • Reconstruction based on imprecise measurements from physical devices (e.g. when using invertible networks for inverse problems (Ardizzone et al., 2019)).
• Numerical re-computation of intermediate activations of the neural network to allow for memory-efficient backpropagation (Gomez et al., 2017).\nFurthermore, some computations are performed via numerical approximation, which in turn adds another source of imprecision that might be aggravated via instability. Examples include:\n• Numerical forward computation, as in Neural ODEs (Chen et al., 2018), where a numerical solver is used to approximate the dynamics of the ODE. • Numerical inverse computation, e.g. via fixed-point iterations as in i-ResNets (Behrmann et al., 2019) or MintNet (Song et al., 2019), or via ODE-solvers for the backward dynamics as in Neural ODEs (Chen et al., 2018).\nAs an example of why bi-Lipschitz continuity is critical for numerical stability in invertible functions, let's consider the simple mappings $F_1(x) = \log(x)$, $F_1^{-1}(z) = \exp(z)$, and $F_2(x) = x$, $F_2^{-1}(z) = z$. Though both functions tend to infinity as $x \to \infty$, $F_1$ is much less stable. Consider the introduction of numerical imprecision as $z^\delta = F_1(x) + \delta$, where $\delta$ denotes the introduced imprecision. Then this imprecision is magnified in the inverse pass as:\n$\|F_1^{-1}(z) - F_1^{-1}(z^\delta)\|_2^2 \approx \left\|\delta \, \frac{\partial F_1^{-1}(z^\delta)}{\partial z^\delta}\right\|_2^2 = \|\delta \exp(z^\delta)\|_2^2. \quad (1)$\nA similar example can be constructed for both the forward and backward passes, which speaks to the importance of bi-Lipschitz continuity. For an additional discussion on the connection of Lipschitz constants and numerical errors, we refer to Appendix B." }, { "heading": "3 STABILITY OF INVERTIBLE NEURAL NETWORKS", "text": "" }, { "heading": "3.1 LIPSCHITZ BOUNDS FOR BUILDING BLOCKS OF INVERTIBLE NETWORKS", "text": "Research on invertible networks has produced a large variety of architectural building blocks. Yet, the focus of prior work was on obtaining flexible architectures while maintaining invertibility guarantees. Here, we build on the work of Behrmann et al. (2019), where bi-Lipschitz bounds were proven for invertible ResNets, by deriving Lipschitz bounds on the forward and inverse mappings of common building blocks. Together with an overview of common invertible building blocks, we provide our main results in Table 1. We chose these particular model classes in order to cover both coupling-based approaches and free-form approaches like Neural ODEs (Chen et al., 2018) and i-ResNets (Behrmann et al., 2019). The derivations of the bounds are given in Appendix A. Note that the bounds provide the worst-case stability and serve mainly as a guideline for future designs of invertible building blocks." }, { "heading": "3.2 CONTROLLING STABILITY OF BUILDING BLOCKS", "text": "As shown in Table 1, there are many factors that influence the stability of INNs. Of particular importance are the Lipschitz constant $\mathrm{Lip}(g)$ of the sub-network $g$ for i-ResNets (Behrmann et al., 2019) and additive coupling blocks (Dinh et al., 2014), and $\mathrm{Lip}(s)$, $\mathrm{Lip}(t)$ for affine coupling blocks (Dinh et al., 2017). Whereas computing the Lipschitz constant of a neural network is NP-hard (Virmaux & Scaman, 2018), there is a simple data-independent upper bound:\n$\mathrm{Lip}(g) \leq \prod_{i=1}^{L} \|A_i\|_2, \quad \text{for } g = A_L \circ \phi \circ A_{L-1} \circ \cdots \circ A_2 \circ \phi \circ A_1, \quad (2)$\nwhere the $A_i$ are linear layers, $\|\cdot\|_2$ is the spectral norm, and $\phi$ is a contractive activation function ($\mathrm{Lip}(\phi) \leq 1$). The above bound was used by Behrmann et al. (2019) in conjunction with spectral normalization (Miyato et al., 2018; Gouk et al., 2018) to ensure a contractive residual block $g$. In particular, this employs a normalization via\n$\tilde{A} = \kappa \frac{A}{\hat{\sigma}_1}, \quad \text{with } \hat{\sigma}_1 \approx \sigma_1 = \|A\|_2 \text{ (approximated via the power method)},$\nwhere $\kappa > 0$ is a coefficient that sets the approximate upper bound on the spectral norm of each linear layer $A_i$. Thus, by setting an appropriate coefficient $\kappa$ depending on the targeted Lipschitz bound of the building block, this approach enables one to control both forward and inverse stability. Note that the above discussion can be generalized to other $\ell_p$-norms, see Chen et al. (2019).\nHowever, this is not sufficient when using affine coupling blocks because their bound on the Lipschitz constant holds only locally. In particular, it depends on the regions of the inputs $x$ to the coupling block. While inputs to the first layer are usually bounded by the nature of the data, obtaining bounds for intermediate activations is less straightforward. One interesting avenue for future work could be local regularizers like gradient penalties (Gulrajani et al., 2017), where spectral normalization could be used post-hoc to certify stability.\nLastly, we use ActNorm (Kingma & Dhariwal, 2018) in several architectures and avoid small diagonal terms, which would yield large Lipschitz constants in the inverse (see Table 1), by adding a positive constant. Further stabilization could be achieved via bounding the scaling." }, { "heading": "4 NUMERICAL EXPERIMENTS", "text": "In this section, we study numerical invertibility for several objectives and architecture settings. This section is structured by task:\n1. Classification: we show that INN classifiers can become non-invertible on CIFAR-10 and discuss consequences for memory-efficient backpropagation.\n2. Density estimation: we analyze the numerical invertibility of state-of-the-art trained density models.\n3. Generative modeling: we study the stability of adversarially trained INN generators, discuss the consequences for likelihood evaluation, and stabilize an additive-coupling based INN generator using spectral normalization.\n4. Decorrelation: we perform an in-depth study of the effect of different architecture settings on a simple task, where both stable and unstable solutions are possible. Furthermore, we show that spectral normalization is effective at stabilizing additive-coupling based flows.\nWe use the following measures to diagnose the numerical instability of invertible models:\n• Reconstruction error. We measure the $\ell_2$-distance between the input $x^{(i)}$ and its reconstruction, i.e. $\|x^{(i)} - F_\theta^{-1}(F_\theta(x^{(i)}))\|_2$. • Conditioning of the Jacobian and max/min singular values. For forward stability, we are interested in the behavior of the Jacobian $J_F(x)$, while for inverse stability we are interested in the Jacobian of the inverse mapping $J_{F^{-1}}(x)$. We compute the singular values of the Jacobian using the SVD, which allows us to compute its condition number. 1\nWhile the reconstruction error allows us to quantitatively monitor non-invertibility even before reconstruction artifacts are perceptible, the linear approximation $J_F(x)$ and its singular values provide insights into unstable directions of the forward (very large singular values of $J_F(x)$) and inverse (very small singular values of $J_F(x)$) mapping. Both measures were also used by Jacobsen et al. (2018), where an ill-conditioned inverse was observed.\n4.1 CLASSIFICATION WITH INVERTIBLE MODELS\nIn this section, we show that when training an INN for classification, there exist both stable and unstable solutions.
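Throughout this section, both diagnostics above can be computed directly with automatic differentiation. The following is a minimal PyTorch sketch, where `flow` is assumed to expose `forward` and `inverse` methods; this interface is an illustrative assumption, not a specific library API.

    import torch
    from torch.autograd.functional import jacobian

    def reconstruction_error(flow, x):
        """l2 distance between x and its round-trip reconstruction."""
        with torch.no_grad():
            x_rec = flow.inverse(flow.forward(x))
            return (x - x_rec).flatten(1).norm(dim=1)

    def jacobian_condition_number(flow, x):
        """Condition number of J_F(x) for a single input x (batch of 1)."""
        f = lambda v: flow.forward(v.view(x.shape)).flatten()
        J = jacobian(f, x.flatten())            # (d, d) Jacobian matrix
        svals = torch.linalg.svdvals(J)
        # Very large max singular values indicate an unstable forward map,
        # very small min singular values an unstable inverse.
        return (svals.max() / svals.min()).item()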
We compare a stable model that uses additive coupling and an unstable model that uses affine coupling and ActNorm both inside and between the blocks; for additional experimental details see Appendix C. These models achieve similar test accuracies of 90.2% and 90.5%, respectively (Figure 6, Appendix C); however, we note that the goal of this experiment is not to achieve SOTA accuracy on CIFAR-10. Rather, we aim to show that models trained for classification with reasonable accuracy can vary greatly with respect to stability. To observe the differences in stability, we plot the reconstruction results in Figure 2.\nAn important use-case of INNs is to enable memory-efficient training, by re-computing activations in the backward pass rather than storing them in memory during the forward pass (Gomez et al., 2017). This approach enables e.g. large-scale generative modeling (Donahue & Simonyan, 2019) and scaling segmentation networks to high-resolution medical images (Brügger et al., 2019).\nRe-computing the activations, however, relies on a numerically precise inverse mapping. To better understand the effect of numerical errors, we perform an analysis similar to Gomez et al. (2017): we track the angle between the true and memory-saving gradients during training (Figure 1). As expected from the reconstruction results, we observe that the affine model yields gradients very different from the true gradient; in fact, after approximately 20 epochs of training, the memory-saving gradients of the affine model contain numerically infinite or NaN values. Thus, it would not be possible to train the affine model successfully using memory-saving gradients.\n1Using the SVD is feasible for CIFAR-10, but becomes prohibitively expensive for ImageNet (with images of size 3 × 256 × 256); for such larger images, one could instead use the Lanczos algorithm (Lanczos, 1950) to find the largest and smallest singular values, to compute the condition number." }, { "heading": "4.2 ANALYZING STATE-OF-THE-ART DENSITY MODELS", "text": "In this section, we analyze the invertibility of trained density models. In particular, we expose non-invertibility in models that otherwise appear stable by optimizing in input space to find examples that are poorly reconstructed by the model. Here, we take a trained Glow model (Kingma & Dhariwal, 2018)2 and optimize the input using Projected Gradient Descent (PGD) (Madry et al., 2018). Our goal is to find a point x′ in the domain of the invertible model such that the reconstruction F^{-1}(F(x′)) differs from x′. In particular, we start with a datapoint x and use PGD to find a perturbed example x′ that has high reconstruction error via Eq. 3 (additional details in Appendix E):\n\operatorname{argmax}_{\|x' - x\|_\infty \le \epsilon} \; \|x' - F^{-1}(F(x'))\|_2. (3)\nAs shown in Figure 3, this attack is effective for finding examples that are perceptually identical to test examples, yet induce large reconstruction errors. This attack can be understood as a worst-case invertibility diagnosis; however, we note that unsuccessful attacks can be due to algorithmic issues and thus do not necessarily imply stable invertible models. Also, it is not always clear how to get gradients that are able to exploit numerical instabilities, leaving room for improvement via gradient-free methods (e.g., the boundary attack from Brendel et al. (2018)).\n2We used the PyTorch implementation from https://github.com/y0ast/Glow-PyTorch. This trained model achieves a likelihood of 3.39 bits-per-dimension.
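The attack in Eq. (3) can be sketched as follows; this is our own illustration, assuming a model wrapper F that exposes forward and inverse methods and inputs normalized to [−0.5, 0.5], as for the CIFAR-10 Glow model above.

import torch

def pgd_reconstruction_attack(F, x, eps=0.1, step=0.01, n_iters=10):
    # Maximize the reconstruction error ||x' - F^{-1}(F(x'))||_2 within an
    # L-infinity ball of radius eps around x, via signed gradient ascent (PGD).
    x_adv = x.clone()
    for _ in range(n_iters):
        x_adv = x_adv.detach().requires_grad_(True)
        recon = F.inverse(F.forward(x_adv))
        loss = (x_adv - recon).flatten(1).norm(dim=1).sum()
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + step * grad.sign()            # ascent step
        x_adv = x + (x_adv - x).clamp(-eps, eps)      # project onto the eps-ball
        x_adv = x_adv.clamp(-0.5, 0.5)                # stay in the valid input range
    return x_adv.detach()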
We present in Appendix E additional experiments using the PGD attack on the additive Glow model from (Kingma & Dhariwal, 2018) trained on CelebA, where we show that the model becomes non-invertible outside the valid input range of images. Furthermore, we analyze a trained residual flow model (Chen et al., 2019) on CIFAR-10, where we cannot find such dramatic non-invertible inputs, which is to be expected given the stability bounds of i-ResNet blocks (see Table 1)." }, { "heading": "4.3 GENERATIVE MODELING WITH INVERTIBLE MODELS", "text": "Generative models based on invertible networks have been predominantly trained using maximum likelihood estimation (MLE). Another viable approach is to train them adversarially (ADV), as done in Flow-GAN (Danihelka et al., 2017; Grover et al., 2018). Flow-GAN is appealing as it can result in a generator capable of producing high-quality samples (as in GANs), while also giving access to exact density estimates, which GANs lack. Prior work (Danihelka et al., 2017; Grover et al., 2018) has compared these two techniques for training flows (MLE vs. ADV); the main conclusion of these studies was that training with MLE yields good likelihoods but relatively poor samples, while training with a GAN loss yields good samples but likelihoods orders of magnitude worse than MLE training.\nHere, we analyze in depth the effect of Flow-GAN training on numerical stability. We use networks with repeated additive coupling layers and ActNorm between blocks. We examine two architectures, both with 3 levels, i.e., ‘squeeze’ between levels (additional details in Appendix D):\n• Stable: depth 4 (i.e., 4 blocks per level), with spectral normalization applied to all the convolution layers; see Section 3.2 for details.\n• Unstable: depth 16, with ActNorm within the coupling layers and no spectral normalization applied to the convolutional layers.\nTable 2 shows that models trained with the MLE objective can achieve good bits-per-dimension (BPD), but models trained with ADV can achieve better sample quality as measured by the Fréchet Inception Distance (FID), a common measure of sample quality (Heusel et al., 2017; Lucic et al., 2018). This motivates training INNs with objective functions other than MLE.\nBroken Flow-GAN. In Figure 4, we show that an INN trained only with an adversarial loss can become non-invertible, depending on the architecture. We perform forward and inverse passes repeatedly on the same mini-batch. The unstable model quickly shows visible reconstruction errors. Table 3 shows that the unstable model has a BPD orders of magnitude larger than the stable model.\nFlow-GAN “Likelihood”. Typically, the likelihood is computed by a change of variables, which assumes invertibility. When a network is numerically non-invertible, this assumption breaks, and the computed value becomes some numerical approximation to the density. The model used in Figure 4 consists of additive coupling blocks, ActNorm, and squeezing operations. All these operations have a data-independent log-determinant of the Jacobian. Thus, obtaining a numerical value via the change-of-variables formula is straightforward. However, it is unclear what this numerical value represents; it likely cannot be trusted as a true likelihood due to the lack of invertibility. In this case, we advocate for ensuring invertibility using the remedies discussed here to make sure BPD values are trustworthy. In terms of sample quality, however, we currently observe a tradeoff, since the stable model yields higher FID scores than the unstable model (Table 2, which also includes MLE models for comparison). Yet, we believe that proper tuning, e.g. of spectral normalization, could remove this tradeoff.
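The repeated forward/inverse test behind Figure 4 can be written as a short diagnostic; this is a sketch of ours, again assuming a wrapper F with forward and inverse methods.

import torch

@torch.no_grad()
def repeated_pass_errors(F, x, n_passes=10):
    # Apply forward and inverse passes repeatedly to the same mini-batch and
    # record the reconstruction error; unstable models show rapidly growing errors.
    errors, x_rec = [], x
    for _ in range(n_passes):
        x_rec = F.inverse(F.forward(x_rec))
        errors.append((x - x_rec).flatten(1).norm(dim=1).mean().item())
    return errors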
Lastly, one can adjust for the likelihood by adjusting the prior (see Appendix I). In summary, in this section we point out that Flow-GAN can become non-invertible, in which case the computed likelihood cannot be taken as a ground-truth likelihood (Grover et al., 2018). INNs trained with MLE are stable, but their sample quality is worse than that of models trained with ADV. Yet, training with the ADV loss might make an INN non-invertible, which defeats the purpose of using an INN in the first place. Hence, when training with alternative objectives, numerical stability is a crucial property to consider." }, { "heading": "4.4 DECORRELATION TASK", "text": "As the last part of our empirical study, we use a simple decorrelation task to benchmark the stability of invertible models. In particular, we compute the correlation matrix C ∈ R^{d×d} via\nC^\theta_{j,l} = \frac{1}{N} \sum_{i=1}^{N} \frac{(F_\theta(x^{(i)})_j - \hat{\mu}_j)(F_\theta(x^{(i)})_l - \hat{\mu}_l)}{\hat{\sigma}_j \hat{\sigma}_l}, (4)\nwhere \hat{\mu}_j is the estimated mean over output samples F_\theta(x^{(i)}) and \hat{\sigma}_j the estimated standard deviation. Then, we optimize the parameters θ to minimize the off-diagonal correlation, i.e.\n\min_\theta \|C^\theta - \mathrm{diag}(C^\theta)\|_F, (5)\nwhere ‖ · ‖F is the Frobenius norm.3 Decorrelation objectives have been used in (Cogswell et al., 2015) to reduce overfitting and in (Cheung et al., 2014) to disentangle hidden activations.\nThis objective serves as a good task for our purposes for two reasons:\n1. Decorrelation is a simpler objective than optimizing outputs z = F_\theta(x) to follow a factorized Gaussian as in Normalizing Flows (Rezende & Mohamed, 2015). Furthermore, it will show that changing the objective to less standard tasks can lead to larger instabilities compared to using INNs on more common tasks such as density estimation.\n2. Decorrelation allows multiple solutions using invertible mappings, where both stable and unstable transforms are equally valid for the given objective. See Appendix F for a motivation based on a simple 2D toy example.\nIn summary, the decorrelation objective offers an environment to study which INN components steer the mapping towards stable or unstable solutions that are equally plausible for the given task.\nIn our experiments shown in Figure 5, we focus on coupling-based models like Glow (Kingma & Dhariwal, 2018), which are analytically invertible and thus allow a simpler analysis compared to models relying on numerical inversion like i-ResNets (Behrmann et al., 2019). We evaluate the effects of different architectural choices on numerical stability, including additive vs. affine coupling layers, ActNorm, and architecture depth. For ActNorm we study two settings: 1) between coupling blocks, 2) inside blocks, i.e. as part of the function g in additive blocks or s and t in affine blocks. Details on the architectures/training schemes and extended results are provided in Appendix H.\n3We provide example PyTorch code for the decorrelation objective in Appendix G." },
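The Jacobian-based diagnostic used throughout this benchmark can be computed per example as in the sketch below (our own illustration; F is assumed to be a callable forward mapping and the input small enough, e.g. CIFAR-10-sized, that the full Jacobian fits in memory).

import torch
from torch.autograd.functional import jacobian

def jacobian_condition_number(F, x):
    # Jacobian of the flattened forward mapping at a single input x; very large
    # singular values indicate forward instability, very small ones inverse instability.
    f = lambda inp: F(inp.view(x.shape)).flatten()
    J = jacobian(f, x.flatten())
    svals = torch.linalg.svdvals(J)
    return (svals.max() / svals.min()).item()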
{ "heading": "5 RELATED WORK", "text": "Invertibility and stability of deep networks. The inversion from activations in standard neural networks to inputs has been studied in various works, e.g. via optimization in input space (Mahendran & Vedaldi, 2014). Linking invertibility and inverse stability for ReLU networks was done in Behrmann et al. (2018). However, few works study the stability of INNs: Gomez et al. (2017) study the numerical errors in the gradient computation when using their memory-efficient backpropagation variant. Similarly to our empirical analysis, Jacobsen et al. (2018) computed the SVD of the Jacobian of a trained i-RevNet and observed an ill-conditioned Jacobian. Lastly, the i-ResNet architecture (Behrmann et al., 2019) yields bi-Lipschitz bounds by design.\nOn the other hand, the stability of neural networks has been of major interest due to the problem of exploding and vanishing gradients, and more recently due to adversarial examples (Szegedy et al., 2013) and the training of Wasserstein GANs (Arjovsky et al., 2017). See e.g. (Anil et al., 2019) for a promising approach to learn flexible Lipschitz neural networks.\nInvertible building blocks. Besides the invertible building blocks we studied in Table 1, several other approaches have been proposed. Most prominently, autoregressive models like MAF (Papamakarios et al., 2017) or IAF (Kingma et al., 2016) provide invertible models that are not studied in our analysis. Furthermore, several newer coupling layers that require numerical inversion have been introduced (Jaini et al., 2019; Durkan et al., 2019). Besides the coupling-based approaches, multiple approaches (Chen et al., 2018; Behrmann et al., 2019; Chen et al., 2019; Song et al., 2019) use numerical inversion schemes, where the interplay of numerical errors due to stability and errors due to the numerical approximation of the inverse adds another dimension to the study of invertibility.\nFixed-point arithmetic and limited precision. Maclaurin et al. (2015) and MacKay et al. (2018) implement invertible computation using fixed-point numbers, with specially-designed schemes to store information that is “lost” when bits are shifted due to multiplication/division, enabling exact invertibility at the cost of additional memory usage. As Gomez et al. (2017) point out, this approach allows exact numerical inversion when using additive coupling blocks, independent of stability issues. However, our stability analysis aims for a broadly applicable methodology beyond the special case of additive coupling. Lastly, there may be connections to deep learning using limited precision, see e.g. (Gupta et al., 2015), which could provide more insights into our observed numerical errors." }, { "heading": "6 CONCLUSION", "text": "Numerical instability is an important concern for the practical application of invertible models. If, for instance, analytical invertibility does not carry through to the numerical computation due to instabilities or numerical errors, the consequences can be arbitrarily severe. As shown in our experiments, this can impact memory-efficient backpropagation (Gomez et al., 2017) and thus significantly reduce the usability of invertible networks if not handled appropriately. Flow-GAN illustrates another application where non-invertibility poses a serious threat, as instabilities can strongly influence or even break the likelihood computation.\nIn this paper, we shed light on the underlying causes of instability by deriving Lipschitz bounds on many of the atomic building blocks commonly used to construct INNs. From a practical standpoint, we used diagnostics to measure stability and provided an empirical framework to benchmark stability. Further, we have shown how to guarantee stability for one of the most common INN architectures.
We hope that this will inspire future work to view numerical stability as a crucial axis in the design of new building blocks and architectures for invertible neural networks." }, { "heading": "A DERIVATIONS OF LIPSCHITZ BOUNDS", "text": "The bounds for invertible ResNets are taken from (Behrmann et al., 2019). For Neural ODEs (Chen et al., 2018), one needs to consider a Lipschitz constant Lip(F ) that holds for all t ∈ [0, T ], i.e.\n‖F (t, x1)− F (t, x2)‖2 ≤ Lip(F )‖x1 − x2‖2, for all t ∈ [0, T ].\nThen, the claimed bound is a standard result, see e.g. (Ascher, 2008, Theorem 2.3). Note that the inverse is given by dy(t)dt = −F (y(t), t), hence the same bound holds. In the subsequent subsections, we derive the bounds for coupling layers.\nA.1 DERIVATION OF LIPSCHITZ BOUND FOR ADDITIVE COUPLING LAYERS\nConsider an additive coupling block defined as\nF (x)I1 = xI1 F (x)I2 = xI2 + g(xI1),\nwhere I1, I2 is a disjoint partition of indices {1, ..., d} of the same cardinality, i.e. |I1| = |I2| = d2 . Further, xI1 , xI2 correpsonds to the corresponding dimension of x ∈ Rd and g : R d 2 → R d2 . By Lemma 3, it is\nLip(F ) = sup x∈Rd\n‖JF (x)‖2.\nThus, in order to obtain a bound on the Lipschitz constant, it is helpful to look into the structure of the Jacobian. If the partitions I1 and I2 correspond to the first and last d2 indices, the Jacobian has a lower-block structure with an identity diagonal, i.e.\nJF (x) =\n( I 0\nJg(x) I\n) .\nBy using this structure, we can derive the following upper bound:\nLip(F )2 = sup x∈Rd ‖JF (x)‖22\n= sup x∈Rd sup ‖x∗‖2=1\n‖JF (x)x∗‖22\n= sup x∈Rd sup ‖x∗‖2=1\n‖(JF (x)x∗)I1‖22 + ‖(JF (x)x∗)I2‖22\n= sup x∈Rd sup ‖x∗‖2=1\n‖x∗I1‖ 2 2 + ‖x∗I2 + Jg(x)x ∗ I1‖ 2 2\n≤ sup x∈Rd sup ‖x∗‖2=1 ‖x∗I1‖ 2 2 + (‖x∗I2‖2 + ‖Jg(x)x ∗ I1‖2) 2 (6)\n= sup x∈Rd sup ‖x∗‖2=1\n‖x∗I1‖ 2 2 + ‖x∗I2‖ 2 2 + 2‖x∗I2‖2‖Jg(x)x ∗ I1‖2 + ‖Jg(x)x ∗ I1‖ 2 2\n= sup x∈Rd sup ‖x∗‖2=1\n‖x∗‖22 + 2‖x∗I2‖2‖Jg(x)x ∗ I1‖2 + ‖Jg(x)x ∗ I1‖ 2 2\n= sup x∈Rd sup ‖x∗‖2=1\n1 + 2‖x∗I2‖2‖Jg(x)x ∗ I1‖2 + ‖Jg(x)x ∗ I1‖ 2 2\n= sup x∈Rd sup ‖x∗‖2=1\n1 + 2‖Jg(x)x∗I1‖2 + ‖Jg(x)x ∗ I1‖ 2 2\n= sup x∈Rd sup ‖x∗‖2=1\n( 1 + ‖Jg(x)x∗I1‖2 )2 = sup x∈Rd (1 + ‖Jg(x)‖2)2\n⇒ Lip(F ) ≤ 1 + Lip(g).\nFurthermore, the inverse of F can be obtained via the simple algebraic transformation (y := F (x))\nF−1(y)I1 = yI1 F−1(y)I2 = yI2 − g(yI1).\nSince the only difference to the forward mapping is the minus sign, the Lipschitz bound for the inverse is the same as for the forward mapping.\nA.2 DERIVATION OF LIPSCHITZ BOUND FOR AFFINE COUPLING LAYERS\nSince the structure of the forward and inverse mapping for affine coupling layers has some differences, we split the derivation of the Lipschitz bounds into two sections. First, we start with the forward mapping and then reuse several steps for the bounds on the inverse mapping.\nA.2.1 DERIVATION FOR THE FORWARD MAPPING\nConsider an affine coupling block defined as\nF (x)I1 = xI1 F (x)I2 = xI2 g(s(xI1)) + t(xI1),\nwhere g(·) 6= 0 for all XI2 and I1, I2 as before. The Jacobian for this operation has the structure\nJF (x) =\n( I 0\nDI(xI2)Dg′(xI1)Js(xI1) + Jt(xI1) Dg(s(xI1))\n) ,\nwhere D are following diagonal matrices DI(xI2) = diag ( (xI2)1, . . . , (xI2)|I2| ) ,\nDg′(xI1) = diag ( g′(s(xI2)1, . . . , g ′(s(xI2)|I2|) )\nDg(s(xI1)) = diag ( g(s(xI2)1, . . . 
, g(s(xI2)|I2|) ) .\nDenote\nM(x) := DI(xI2)Dg′(xI1)Js(xI1) + Jt(xI1).\nBy using an analogous derivation as in equation 6 (up to the inequality sign), we get\nLip(F )2 ≤ sup x∈Rd sup ‖x∗‖2=1 ‖x∗I1‖ 2 2 +\n( ‖Dg(s(xI1))x∗I2‖2 + ‖M(x)x ∗ I1‖2 )2 = sup x∈Rd max i∈[|I1|] (1, Dg(s(xI1)i)) 2 + 2 max i∈[|I1|] (Dg(s(xI1)i))‖M(x)‖2 + ‖M(x)‖22\n≤ sup x∈Rd max i∈[|I1|] (1, Dg(s(xI1)i)) 2 + 2 max i∈[|I1|] (1, Dg(s(xI1)i))‖M(x)‖2 + ‖M(x)‖22\n= sup x∈Rd ( max i∈[|I1|] (1, Dg(s(xI1)i)) + ‖M(x)‖2 )2\n⇐⇒ Lip(F ) ≤ max i∈[|I1|] (1, Dg(s(xI1)i)) + sup x∈Rd ‖M(x)‖2.\nNext, we will look into the structure of M(x) to derive a more precise bound. Since inputs x are assumed to be bounded as x ∈ [a, b]d, it holds\n‖DI(xI2)‖2 ≤ max(|a|, |b|).\nFurthermore, let the derivative g′ of the element-wise function g be globally bounded by c, i.e. supx∈R g ′(x) ≤ cg′ . Then, it is\n‖Dg′(xI1)‖2 ≤ cg′ .\nIn a similar manner as in section A.1, the spectral norm of the Jacobian of the scale-function s and translation-function t can be bounded by their Lipschitz constant, i.e.\n‖Js(xI1)‖2 ≤ Lip(s) ‖Jt(xI1)‖2 ≤ Lip(t).\nBy using above bounds, we obtain\nsup x∈Rd\n‖M(x)‖22 ≤ max(|a|, |b|) · c · Lip(s) + Lip(t).\nIf we further assume, that the elementwise-function g is globally upper bounded by cg and we insert above bounds, we obtain\nLip(F ) ≤ max(1, cg) + max(|a|, |b|) · cg′ · Lip(s) + Lip(t).\nA.2.2 DERIVATION FOR THE INVERSE MAPPING\nFor the affine coupling block from section A.2.1, the inverse is defined as\nF−1(y)I1 = yI1 F−1(y)I2 = (yI2 − t(xI1)) g(s(yI1)),\nwhere g(·) 6= 0 for all XI2 , I1, I2 as before and denotes elementwise division. The Jacobian for this operation has the structure\nJF (x) =\n( I 0\nM∗(y) D 1 g (s(xI1))\n) ,\nwhere D 1 g (s(xI1)) denotes a diagonal matrix, as before. Furthermore, M∗ is defined as\nM∗(y) = DI(yI2)D( 1g ) ′(s(yI1))Js(yI1)−D( 1g )′(s(yI1))Js(yI1)DI(t(yI1))−D 1g (s(yI1))Jt(yI1),\nwhere D( 1g ) ′(s(xI1)) also denotes a diagonal matrix. Using analogous arguments as in section A.2.1, we obtain the bound\nLip(F−1) ≤ max i∈[|I1|] (1, D 1 g (s(xI1)i)) + sup x∈Rd ‖M∗(x)‖2.\nHence, we need to further bound the spectral norm of M∗. First, assume that 1g , the derivative ( 1 g )′ and translation t is globally upper bounded by c 1\ng , c( 1g ) ′ and ct respectively. Furthermore consider\nbounded inputs y ∈ [a∗, b∗]d. Then we obtain the bound\nsup x∈Rd\n‖M∗(x)‖22 ≤ max(|a∗|, |b∗|) · c( 1g )′ · Lip(s) + c( 1g )′ · Lip(s) · ct + c 1g · Lip(t).\nHence, we can bound the Lipschitz constant of the inverse of an affine block as\nLip(F−1) ≤ max i∈[|I1|] (1, c 1 g ) + max(|a∗|, |b∗|) · c( 1g )′ · Lip(s) + c( 1g )′ · Lip(s) · ct + c 1g · Lip(t)." }, { "heading": "B NUMERICAL ERRORS AND LIPSCHITZ CONSTANTS", "text": "In a general setting, connecting numerical errors e.g. due to floating point operations to Lipschitz constants of the underlying mapping in a quantitative manner is not straightforward. For example, numerical errors due to limited precision occurs when summing to floating point numbers. As discussed in (Gomez et al., 2017), this occurs in additive coupling layers and is one source of numerical errors we observe in our experiments.\nTo formalize the connection to the Lipschitz constant, consider the following two mappings:\nF (x) = z, (analytical exact computation) Fδ(x) = z + δ =: zδ, (floating point inexact computation)\nIn order to bound the error in the reconstruction due to the imprecision in the forward mapping, let xδ1 = F −1(zδ). 
Now consider\n‖x− xδ1‖2 ≤ Lip(F−1)‖z − zδ‖2 = Lip(F−1)‖δ‖2,\nwhere the Lipschitz constant of the inverse is used to bound the influence of the numerical error in the forward mapping. However, similarly to the forward mapping, the inverse mapping can also be imprecise. Thus, we introduce\nF−1δ (zδ) = xδ1 + δ2 := xδ2\nto formalize the numerical error in the inverse mapping. Hence, we obtain the bound\n‖x− (xδ1 + δ2)‖2 ≤ ‖x− xδ1‖2 + ‖δ2‖2 ≤ Lip(F−1)‖z − zδ‖2 + ‖δ2‖2 = Lip(F−1)‖δ‖2 + ‖δ2‖2,\nwhere the numerical errors of the mapping are denoted via δ (forward) and δ2 (inverse). While obtaining quantitative values for δ and δ2 for a model as complex as deep neural networks is hard, above formalization still provides insights into a potential role of the inverse stability when reconstructing inputs." }, { "heading": "C ADDITIONAL DETAILS FOR CLASSIFICATION EXPERIMENTS", "text": "Both affine and additive coupling-based models have the same architecture, that consists of 3 levels, 16 blocks per level, and 128 hidden channels. Each level consists of a sequence of residual blocks that operate on the same dimensionality. Between levels, the input is spatially downsampled by 2× in both width and height, while the number of channels is increased by 4×. Each residual block consists of a chain of 3× 3, 1× 1, 3× 3 convolutions, with ReLU activations in between. We trained on CIFAR-10 for 200 epochs, using SGD with Nesterov momentum 0.9 and weight decay 5e-4, with initial learning rate 0.01, decayed by a factor of 5 at epochs 60, 120, and 160. We found that\nusing a smaller initial learning rate of 0.01 was important for training the INN classifier, as opposed to the standard initial learning rate of 0.1 used for ResNets (Zagoruyko & Komodakis, 2016). We used standard data augmentation (random cropping and horizontal flipping)." }, { "heading": "D EXPERIMENTAL DETAILS FOR FLOW-GAN", "text": "For the experiments in Section 4.3, the ADV models are trained with standard binary cross entropy loss and the hyperparameters in Table 4, whereas the MLE models are trained with learning rate 1e-3." }, { "heading": "E EXTENDED RESULTS FOR CRAFTED, NON-INVERTIBLE INPUTS", "text": "PGD Setup. To find non-invertible inputs for Glow and Residual Flows, we used PGD (Eq. 3) with = 0.1 and step size 0.01. For the Glow model in Section 4.2, we consistently found inputs with severe reconstruction errors (as shown in Figure 3) in fewer than 10 PGD iterations. For the Residual Flow model analyzed in this section, we ran 200 iterations of PGD. In each iteration, the pixel values of the perturbed image were clipped to the valid input range that the respective model was trained on ([−0.5, 0.5] for Glow and [0, 1] for the Residual Flow).\nCrafted Inputs for Residual Flows. We also applied the PGD attack from Section 4.2 to a Residual Flow (Chen et al., 2019) pre-trained on CIFAR-10 (Figure 7).4 We find that, while there are visible differences between the crafted input x′ and its reconstruction x̂′, the reconstruction errors are less severe for the Residual Flow compared to Glow (analyzed in Section 4.2).\nInstability Outside the Range of Training Inputs. Here, we applied the PGD attack from Section 4.2 to a Glow model pre-trained on Celeb-A (Liu et al., 2015). 5 This model was trained on images normalized to the range [−0.5, 0.5]. 
While the PGD attack was not successful at finding adversarial inputs in [−0.5, 0.5], it succeeded when the range was increased to [−0.7, 0.7] (by using = 0.2 and not clipping the perturbed inputs to [−0.5, 0.5]), yielding the example shown in Figure 8. Thus, we found that invertible models can become numerically non-invertible on out-of-distribution data.\n4We used the pre-trained model from https://github.com/rtqichen/residual-flows. 5We used the model from https://github.com/openai/glow." }, { "heading": "F MOTIVATING THE DECORRELATION TASK", "text": "To motivate the decorrelation task as simple toy environment for stability of invertible models consider the following task (and its visualization in Figure 9):\n• Consider input data x that is distributed via a standard normal distribution, i.e. x ∼ N (0, I) (Figure 9 (left)). • Assume the data is transformed by a rotation matrix R and a diagonal matrix D, i.e. y = RDx (Figure 9 (2. from left)). • Goal: decorrelate transformed data y using an invertible mapping.\nSince correlation is independent of scale (scaling by standard deviation of the data), at least the two solutions A2 = D−1RT (Figure 9 (2. from right)) and A1 = RT (Figure 9 (right)) are equally valid for the given decorrelation task. However, the conditioning of the mappings A1 and A2 can be largely different if the scaling matrix D has a high condition number. Hence, this task both offers a stable solution, namely A1, and a (potentially) unstable solution A2.\nTo conclude, decorrelation can allow multiple solutions with different stability. Hence, decorrelation is a natural simple task to study which solution the INN picks. Furthermore, guiding the network to a stable solution is a justified strategy for this task and it is not expected to harm performance." }, { "heading": "G DECORRELATION EXAMPLE CODE", "text": "Here we provide an example implementation of the decorrelation objective used in Section 4.4, that minimizes the norm of the off-diagonal entries in the correlation matrix.\nListing 1: Example PyTorch code to implement the decorrelation loss used in our experiments. z = model(img) z_flat = z.view(z.size(0), -1) z_flat = z_flat - z_flat.mean(dim=0) # Subtract mean z_flat = z_flat / (z_flat.std(dim=0) + 1e-8) # Standardize correlation = (torch.mm(z_flat.t(), z_flat) / (z_flat.size(0)-1)) loss = torch.norm(correlation - torch.diag(torch.diagonal(correlation))) optimizer.zero_grad() loss.backward() optimizer.step()" }, { "heading": "H EXTENDED RESULTS FOR DECORRELATION", "text": "Decorrelation Experiment Details Here we provide additional details on the model architectures and training schemes we used in our numerical experiments.\nFor all the decorrelation experiments, we used a 3-level model with blocks of depth 16. We used Adam (Kingma & Ba, 2015) with fixed learning rate 1e-4 and no weight decay, and trained on mini-batches of size 64. The CIFAR-10 images were normalized to the range [-0.5, 0.5], and were dequantized with uniform noise in [0, 1e-6].\nEffect of Model Depth. Furthermore, we investigated the effect of network depth on stability since its an additional influence factor besides the selection of each invertible building block. Starting with a 3-level additive model with ActNorm, we vary the depth of the blocks between {4, 16, 32} and train with the decorrelation objective. The quantitative reconstruction errors and condition numbers of the Jacobians are shown in Figure 10. 
As expected, deeper architectures become unstable faster than shallow ones.\nLoss Plots. Here we show that all the model variants investigated in the decorrelation experiments achieve their objective, i.e., the loss decreased enough for the correlation matrices to be diagonal.\nEvolution of Condition Numbers, Max & Min Singular Values for Decorrelation. Here we plot the condition numbers, maximum and minimum singular values during training for the decorrelation task. We include plots for all settings discussed in the main paper as well as the appendix." }, { "heading": "I REFITTING PRIOR IN FLOW-GAN", "text": "In general, if the model is not optimized with forward KL, DKL(Pdata||Pθ), as in the case when optimizing with maximum likelihood, we cannot be sure F (x), x ∼ Pdata is best fitted with a standard Normal. Hence, a reasonable strategy, without changing anything in the learned network, is to refit the prior parameters. Here we simply optimize for maximum likelihood (as typically done for flow models) while only fitting a diagonal variance in prior. This can be interpreted as increasing the entropy of our model. In Kingma & Dhariwal (2018), they observed the opposite phenomenon that a model trained with maximum likelihood generates better samples after decreasing the entropy in the prior. See Figure 21 for samples after refitting the prior." } ]
2019
null
SP:a1d1e8d13b1df53435caa45e5fed856fcdd1b6ec
[ "This paper introduces a neural controller architecture for learning abstract algorithmic solutions to search and planning problems. By combining abstract and domain-specific components, the model is able to mimic two classical algorithms quite closely across several domains. The precision of the learning is very high; verified by generalization to substantially larger problem sizes and different domains. One notable conclusion is that Evolutionary Strategies is able to learn algorithmic solutions whose precision is on par with deterministic algorithms. The method of triggering learning based on curriculum level performance is a notable feature that nicely couples generalization progress with learning, and yields insightful learning curves.", "This paper proposes modifications and modular extensions to the differential neural computer (DNC). The approach is nicely modular, decoupling the data modules from algorithmic modules. This enables the authors to pretrain the data modules with supervised learning and to train the small algorithmic modules with neural evolution strategies (NES). NES is a global optimization method (which may be understood as policy gradients where the parameters of the neural policy are the actions) and consequently this enables the authors to use discrete selection mechanisms instead of the soft attention mechanisms of DNC." ]
A key feature of intelligent behavior is the ability to learn abstract strategies that transfer to unfamiliar problems. Therefore, we present a novel architecture, based on memory-augmented networks, that is inspired by the von Neumann and Harvard architectures of modern computers. This architecture enables the learning of abstract algorithmic solutions via Evolution Strategies in a reinforcement learning setting. Applied to Sokoban, sliding block puzzle and robotic manipulation tasks, we show that the architecture can learn algorithmic solutions with strong generalization and abstraction: scaling to arbitrary task configurations and complexities, and being independent of both the data representation and the task domain.
[]
[ { "authors": [ "Gianluca Baldassarre", "Marco Mirolli" ], "title": "Intrinsically motivated learning systems: an overview", "venue": null, "year": 2013 }, { "authors": [ "Yoshua Bengio", "Jérôme Louradour", "Ronan Collobert", "Jason Weston" ], "title": "Curriculum learning", "venue": "In International Conference on Machine Learning,", "year": 2009 }, { "authors": [ "Edoardo Conti", "Vashisht Madhavan", "Felipe Petroski Such", "Joel Lehman", "Kenneth Stanley", "Jeff Clune" ], "title": "Improving exploration in evolution strategies for deep reinforcement learning via a population of novelty-seeking agents", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Sreerupa Das", "C Lee Giles", "Guo-Zheng Sun" ], "title": "Learning context-free grammars: Capabilities and limitations of a recurrent neural network with an external stack memory", "venue": "In Conference of Cognitive Science Societyy, pp", "year": 1992 }, { "authors": [ "Yoav Freund", "Robert E Schapire" ], "title": "A decision-theoretic generalization of online learning and an application to boosting", "venue": "Journal of Computer and System Sciences,", "year": 1997 }, { "authors": [ "Alex Graves", "Greg Wayne", "Malcolm Reynolds", "Tim Harley", "Ivo Danihelka", "Agnieszka GrabskaBarwińska", "Sergio Gómez Colmenarejo", "Edward Grefenstette", "Tiago Ramalho", "John Agapiou" ], "title": "Hybrid computing using a neural network with dynamic external", "venue": "memory. Nature,", "year": 2016 }, { "authors": [ "Rasmus Boll Greve", "Emil Juul Jacobsen", "Sebastian Risi" ], "title": "Evolving neural turing machines for reward-based learning", "venue": "In Proceedings of the Genetic and Evolutionary Computation Conference", "year": 2016 }, { "authors": [ "Armand Joulin", "Tomas Mikolov" ], "title": "Inferring algorithmic patterns with stack-augmented recurrent nets", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Łukasz Kaiser", "Samy Bengio" ], "title": "Can active memory replace attention", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Łukasz Kaiser", "Ilya Sutskever" ], "title": "Neural GPUs learn algorithms", "venue": "International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Anders Krogh", "John A Hertz" ], "title": "A simple weight decay can improve generalization", "venue": "In Advances in Neural Information Processing Systems,", "year": 1992 }, { "authors": [ "Ankit Kumar", "Ozan Irsoy", "Peter Ondruska", "Mohit Iyyer", "James Bradbury", "Ishaan Gulrajani", "Victor Zhong", "Romain Paulus", "Richard Socher" ], "title": "Ask me anything: Dynamic memory networks for natural language processing", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Long-Ji Lin" ], "title": "Self-improving reactive agents based on reinforcement learning, planning and teaching", "venue": "Machine Learning,", "year": 1992 }, { "authors": [ "Horia Mania", "Aurelia Guy", "Benjamin Recht" ], "title": "Simple random search of static linear policies is competitive for reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Jakob Merrild", "Mikkel Angaju Rasmussen", "Sebastian Risi" ], "title": 
"HyperNTM: evolving scalable neural turing machines through HyperNEAT", "venue": "In International Conference on the Applications of Evolutionary Computation,", "year": 2018 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Michael C Mozer", "Sreerupa Das" ], "title": "A connectionist symbol manipulator that discovers the structure of context-free languages", "venue": "In Advances in Neural Information Processing Systems,", "year": 1993 }, { "authors": [ "Arvind Neelakantan", "Quoc V Le", "Ilya Sutskever" ], "title": "Neural programmer: Inducing latent programs with gradient descent", "venue": "International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Pierre-Yves Oudeyer", "Frederic Kaplan" ], "title": "What is intrinsic motivation? a typology of computational approaches", "venue": "Frontiers in neurorobotics,", "year": 2009 }, { "authors": [ "Tim Salimans", "Jonathan Ho", "Xi Chen", "Szymon Sidor", "Ilya Sutskever" ], "title": "Evolution strategies as a scalable alternative to reinforcement learning", "venue": null, "year": 2017 }, { "authors": [ "Daniel L Silver", "Qiang Yang", "Lianghao Li" ], "title": "Lifelong machine learning systems: Beyond learning algorithms", "venue": "In AAAI Spring Symposium: Lifelong Machine Learning,", "year": 2013 }, { "authors": [ "Kenneth O Stanley", "Risto Miikkulainen" ], "title": "Evolving neural networks through augmenting topologies", "venue": "Evolutionary computation,", "year": 2002 }, { "authors": [ "Sainbayar Sukhbaatar", "Jason Weston", "Rob Fergus" ], "title": "End-to-end memory networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Matthew E Taylor", "Peter Stone" ], "title": "Transfer learning for reinforcement learning domains: A survey", "venue": "Journal of Machine Learning Research,", "year": 2009 }, { "authors": [ "Joshua B Tenenbaum", "Charles Kemp", "Thomas L Griffiths", "Noah D Goodman" ], "title": "How to grow a mind: Statistics, structure, and abstraction", "venue": null, "year": 2011 }, { "authors": [ "Greg Wayne", "Chia-Chun Hung", "David Amos", "Mehdi Mirza", "Arun Ahuja", "Agnieszka GrabskaBarwinska", "Jack Rae", "Piotr Mirowski", "Joel Z Leibo", "Adam Santoro" ], "title": "Unsupervised predictive memory in a goal-directed agent", "venue": null, "year": 2018 }, { "authors": [ "Tillman Weyde", "Radha Manisha Kopparti" ], "title": "Feed-forward neural networks need inductive bias to learn equality relations", "venue": null, "year": 2018 }, { "authors": [ "Daan Wierstra", "Tom Schaul", "Tobias Glasmachers", "Yi Sun", "Jan Peters", "Jürgen Schmidhuber" ], "title": "Natural evolution strategies", "venue": "Journal of Machine Learning Research,", "year": 2014 }, { "authors": [ "Wojciech Zaremba", "Tomas Mikolov", "Armand Joulin", "Rob Fergus" ], "title": "Learning simple algorithms from examples", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Zheng Zeng", "Rodney M Goodman", "Padhraic Smyth" ], "title": "Discrete recurrent neural networks for grammatical inference", "venue": "IEEE Transactions on Neural Networks,", "year": 1994 } ]
[ { "heading": "1 INTRODUCTION", "text": "Transferring solution strategies from one problem to another is a crucial ability for intelligent behavior (Silver et al., 2013). Current learning systems can learn a multitude of specialized tasks, but extracting the underlying structure of the solution for effective transfer is an open research problem (Taylor & Stone, 2009). Abstraction is key to enable these transfers (Tenenbaum et al., 2011) and the concept of algorithms in computer science is an ideal example for such transferable abstract strategies. An algorithm is a sequence of instructions, which solves a given problem when executed, independent of the specific instantiation of the problem. For example, consider the task of sorting a set of objects. The algorithmic solution, specified as the sequence of instructions, is able to sort any number of arbitrary classes of objects in any order, e.g., toys by color, waste by type, or numbers by value, by using the same sequence of instructions, as long as the features and compare operations defining the order are specified. Learning such structured, abstract strategies enables the transfer to new domains and representations (Tenenbaum et al., 2011). Moreover, abstract strategies as algorithms have built-in generalization capabilities to new task configurations and complexities.\nHere, we present a novel architecture for learning abstract strategies in the form of algorithmic solutions. Based on the Differential Neural Computer (Graves et al., 2016) and inspired by the von Neumann and Harvard architectures of modern computers, the architectures modular structure allows for straightforward transfer by reusing learned modules instead of relearning, prior knowledge can be included, and the behavior of the modules can be examined and interpreted. Moreover, the individual modules of the architecture can be learned with different learning settings and strategies – or be hardcoded if applicable – allowing to split the overall task into easier subproblems, contrary to the end-to-end learning philosophy of most deep learning architectures. Building on memory-augmented neural networks (Graves et al., 2016; Neelakantan et al., 2016; Weston et al., 2015; Joulin & Mikolov, 2015), we propose a flexible architecture for learning abstract strategies as algorithmic solutions and show the learning and transferring of such in symbolic planning tasks." }, { "heading": "1.1 THE PROBLEM OF LEARNING ALGORITHMIC SOLUTIONS", "text": "We investigate the problem of learning algorithmic solutions which are characterized by three requirements: R1 – generalization to different and unseen task configurations and task complexities, R2 – independence of the data representation, and R3 – independence of the task domain.\nPicking up the sorting algorithm example again, R1 represents the ability to sort lists of arbitrary length and initial order, while R2 and R3 represent the abstract nature of the solution. This abstraction enables the algorithm, for example, to sort a list of binary numbers while being trained only on hexadecimal numbers (R2). Furthermore, the algorithm trained on numbers is able to sort\nlists of strings (R3). 
If R1 – R3 are fulfilled, the algorithmic solution does not need to be retrained or adapted to solve unforeseen task instantiations – only the data specific operations need to be adjusted.\nResearch on learning algorithms typically focuses on identifying algorithmic generated patterns or solving algorithmic problems (Neelakantan et al., 2016; Zaremba & Sutskever, 2014; Kaiser & Sutskever, 2016; Kaiser & Bengio, 2016), less on finding algorithmic solutions (Joulin & Mikolov, 2015; Zaremba et al., 2016) fulfilling the three discussed requirements R1 – R3. While R1 is typically tackled, as it represents the overall goal of generalization in machine learning, the abstraction abilities from R2 and R3 are missing. Additionally, most algorithms require a form of feedback, using computed intermediate results from one computational step in subsequent steps, and a variable number of computational steps to solve a problem instance. Thus, it is necessary to be able to cope with varying numbers of steps and determining when to stop, in contrast to using a fixed number of steps (Neelakantan et al., 2016; Sukhbaatar et al., 2015), making the learning problem more challenging in addition.\nA crucial feature for algorithms is the ability to save and retrieve data. Therefore, augmenting neural networks with different forms of external memory, e.g., matrices, stacks, tapes or grids, to increase their expressiveness and to separate computation from memory, especially in long time dependencies setups, is an active research direction (Graves et al., 2016; Weston et al., 2015; Joulin & Mikolov, 2015; Zaremba et al., 2016; Sukhbaatar et al., 2015; Kumar et al., 2016; Greve et al., 2016) with earlier work in the field of grammar learning (Das et al., 1992; Mozer & Das, 1993; Zeng et al., 1994). These memory-augmented networks improve performance on a variety of tasks like reasoning and inference in natural language (Graves et al., 2016; Weston et al., 2015; Sukhbaatar et al., 2015; Kumar et al., 2016), learning of simple algorithms and algorithmic patterns (Joulin & Mikolov, 2015; Zaremba et al., 2016; Graves et al., 2014), and navigation tasks (Wayne et al., 2018).\nThe contribution of this paper is a novel modular architecture building on a memory-augmented neural network (DNC (Graves et al., 2016)) for learning algorithmic solutions in a reinforcement learning setting. We show that the learned solutions fulfill all three requirements R1 – R3 for an algorithmic solution and the architecture can process a variable number of computational steps." }, { "heading": "2 A NEURAL COMPUTER ARCHITECTURE FOR ALGORITHMIC SOLUTIONS", "text": "In this section, we introduce the novel modular architecture for learning algorithmic solutions, shown in Figure 1. The architecture builds on the Differential Neural Computer (DNC) (Graves et al., 2016) and its modular design is inspired by modern computer architectures, related to (Neelakantan et al., 2016; Weston et al., 2015).\nThe DNC augments a controller neural network with a differentiable autoassociative external memory to separate computation from memory, as memorization is usually done in the networks weights. The controller network learns to write and read information from that memory by emitting an interface vector which is mapped onto different vectors by linear transformations. These vectors control the read and write operations of the memory, called read and write heads. 
For writing and reading, multiple attention mechanisms are employed, including content lookup, temporal linkage and memory allocation. Due to the design of the interface and the attention mechanisms, the DNC is independent of the memory size and fully differentiable, allowing gradient-based end-to-end learning.\nOur architecture. In order to learn algorithmic solutions, the computations need to be decoupled from the specific data and task. To enable such data and task independent computations, we propose multiple alterations and extensions to the DNC, inspired by modern computer architectures.\nFirst, information flow is divided into two streams, data and control. This separation allows to disentangle data representation dependent manipulations from data independent algorithmic instructions. Due to this separation, the algorithmic modules need to be extended to include two memories, a data and a computational memory. The data memory stores and retrieves the data stream, whereas the computational memory works on information generated by the control signal flow through the learnable controller and memory transformations. The two memories are coupled, operating on the same locations, and these locations are determined by the computational memory, and hence by the control stream. As with the DNC, multiple read and write heads can be used. In our experiments, one read and two write heads are used, with one write head constrained to the previously read location.\nIn contrast to the DNC, but in line with the computer architecture-inspired design and the goal of learning deterministic algorithms, writing and reading uses hard attentions instead of soft attentions. Hard attention means that only one memory location can be written to and read from (unique addresses), instead of an weighted average over all locations as with soft attentions. Such hard attention was shown to be beneficial for generalization (Greve et al., 2016). We also employed an additional attention mechanism for reading, called usage linkage, similar to the temporal linkage of the DNC, but instead of capturing temporal relations, it captures usage relations, i.e., the relation between written memory location and previously read location. With both linkages in two directions and the content look up, the model has five attention mechanisms for reading. While the final read memory location is determined by a weighted combination of these attentions (see attention in Figure 5 in the Appendix), each attention mechanism itself uses hard decisions, returning only one memory location. See Appendix C for the effect of the introduced modifications and extensions.\nFor computing the actual solution, operating only on the control stream is not enough, as the model still needs to manipulate the data. Therefore, we added several modules operating on the data stream, inspired by the architecture of computers. In particular, an Input, TransformD, ALU (arithmetic logic unit) and Output module were added (more details in Section 2.2). These modules manipulate the data, steered by the algorithmic modules. The full architecture is shown in Figure 1.\nAs algorithms typically involve recursive or iterative data manipulation, the model receives its own output as input in the next computation step, making the whole architecture an output-input model. With all aforementioned extensions, algorithmic solutions fulfilling R1 – R3 can be learned." 
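One computation step of this output-input loop can be summarized in a pseudocode sketch (our reading of Figure 1; the module interfaces follow the mappings given in Sections 2.1 and 2.2, and passing the data word to the memory write is an assumption of this sketch):

def computation_step(mod, d_ext, d_prev, fb):
    # fb holds the control feedback (c_m, c_f, c_a, c_o) from the previous step.
    c_i, d_i = mod.input(d_ext, d_prev)        # Input: preprocess the data stream
    c_c = mod.controller(c_i, *fb)             # Controller acts on control signals only
    c_m, d_m = mod.memory(c_i, c_c, d_i)       # coupled computational/data memory
    c_f = mod.transform_c(c_c, c_m, c_i)       # select the ALU operation
    d_f = mod.transform_d(d_m)                 # extract the relevant data view
    c_a, d_a = mod.alu(c_f, d_f)               # apply the selected operation
    c_o, d_o = mod.output(c_a, d_a, d_m)       # re-insert the local change
    return d_o, (c_m, c_f, c_a, c_o)           # d_o becomes the next step's input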
}, { "heading": "2.1 THE ALGORITHMIC MODULES", "text": "The algorithmic modules consist of the Controller, the Memory and the TransformC module and build the core of the model. These modules learn the algorithmic solution operating on the control stream. With t as the current computational step and c as the control stream (see Figure 1), the input-output of the modules are C(ci,t, cm,t−1, cf,t−1, ca,t−1, co,t−1) 7−→ cc,t , M(ci,t, cc,t) 7−→ cm,t, dm,t and TC(cc,t, cm,t, ci,t) 7−→ cf,t. The algorithmic modules are based on the DNC with the alterations and extensions described before. Next we discuss how these algorithmic modules can be learned before looking into the data-dependent modules." }, { "heading": "2.1.1 LEARNING OF THE ALGORITHMIC MODULES", "text": "Learning the algorithmic modules, and hence the algorithmic solution, is done in a reinforcement learning setting using Natural Evolution Strategies (NES) (Wierstra et al., 2014). NES is a blackbox optimizer that does not require differentiable models, giving more freedom to the model design, e.g., the hard attention mechanisms are not differentiable. NES updates a search distribution of the parameters to be learned by following the natural gradient towards regions of higher fitness using a population of offsprings (altered parameters) for exploration. Let θ be the parameters to be learned and using an isotropic multivariate Gaussian search distribution with fixed variance σ2, the stochastic natural gradient at iteration t is given by\n∇θtE ∼N(0,I) [u(θt + σ )] ≈ 1\nPσ P∑ i=1 u(θit) i ,\nwhere P is the population size and u(·) is the rank transformed fitness (Wierstra et al., 2014). The parameters are updated by\nθt+1 = θt + α\nPσ P∑ i=1 u(θit) i ,\nwith learning rate α. Recent research showed that NES and related approaches like Random Search (Mania et al., 2018) or NEAT (Stanley & Miikkulainen, 2002) are powerful alternatives in reinforcement learning. They are easier to implement and scale, perform better with sparse rewards and credit assignment over long time scales, have fewer hyperparameters (Salimans et al., 2017) and were used to train memory-augmented networks (Greve et al., 2016; Merrild et al., 2018).\nFor robustness and learning efficiency, weight decay for regularization (Krogh & Hertz, 1992) and automatic restarts of runs stuck in local optima are used as in (Wierstra et al., 2014). This restarting can be seen as another level of evolution, where some lineages die out. Another way of dealing with early converged or stuck lineages is to add intrinsic motivation signals like novelty, that help to get attracted by another local optima, as in NSRA-ES (Conti et al., 2018). In the experiments however, we found that within our setting, restarting – or having an additional survival of the fittest on the lineages – was more effective, see Appendix C for a comparison.\nThe algorithmic solutions are learned in a curriculum learning setup (Bengio et al., 2009) with sampling from old lessons (Zaremba & Sutskever, 2014) to prevent unlearning and to foster generalization. Furthermore, we created bad memories, a learning from mistakes strategy, similar to the idea of AdaBoost (Freund & Schapire, 1997), which samples previously failed tasks to encourage focusing on the hard tasks. This can also be seen as a form of experience replay (Mnih et al., 2015; Lin, 1992), but only using the task configurations, the initial input to the model, not the full generated sequence. 
Bad memories were developed for training the data-dependent modules to ensure their robustness and 100% accuracy, which is crucial to learn algorithmic solutions. If the individual modules do not have 100% accuracy, no stable algorithmic solution can be learned even if the algorithmic modules are doing the correct computations. For example, if one module has an accuracy of 99%, the 1% error prevents learning an algorithmic solution that works always. This problem is even reinforced as the proposed model is an output-input architecture that works over multiple computation steps using its own output as the new input – meaning the overall accuracy drops to 36.6% for 100 computation steps. Therefore using the bad memories strategy, and thus focusing on the mistakes, helps significantly in achieving robust results when learning the modules, enabling the learning of algorithmic solutions. While the bad memories strategy was crucial to achieve 100% robustness when training the data-dependent modules, the effect on learning the algorithmic solutions was less significant (see Appendix C for an evaluation)." }, { "heading": "2.2 DATA-DEPENDENT MODULES", "text": "The data-dependent modules (Input, ALU, TransformD and Output) are responsible for all operations that involve direct data contact, such as receiving the input data from the outside or manipulating a data word with an operation chosen by the algorithmic modules. Thus, these modules need to be learned or designed for a specific data representation and task. However, as all modules only have to perform a certain subtask, these modules are typically easier to train.\nAs learning the algorithmic modules via NES does not rely on gradients and due to the information flow split, the data-dependent modules can be instantiated arbitrarily, e.g., can have nondifferentiable parts, do not need to be neural networks or can be hardcoded. Therefore, prior knowledge can be incorporated by implementing it directly into these modules. The modular design facilitates the transfer of learned modules, e.g., using the same algorithmic solution in a new domain without retraining the algorithmic modules or learning a new algorithm within the same domain without retraining the data modules. Next the general functionality of the modules will be explained.\nThe Input module is the interface to the external world and responsible for data preprocessing. Therefore, it receives the external input data and the data from the previous computational step. It sends data to the memory and control signals to the subsequent modules with information about the presented data or the state of the algorithm – formally as I(de,t, do,t−1) 7−→ ci,t, di,t . The ALU module performs the basic operations which the architecture can use to modify data. Therefore, it receives the data and a control signal indicating which operation to apply and outputs\nthe modified data and control signals about the operation – A(cf,t, df,t) 7−→ ca,t, da,t. As in many applications the basic operations only modify a part of the data and to reduce the complexity of the ALU, a TransformD module extracts the relevant part from the data beforehand – TD(dm,t) 7−→ df,t – or just transfers the unmodified data if no transformation is required for the task.\nThe Output module combines the result of the data manipulation operation from the ALU module and the data before the manipulation. It inserts the local change done by the ALU into the original data word –O(ca,t, da,t, dm,t) 7−→ co,t, do,t. 
As before with the Transformation module, depending on the task, the Output module can also be designed to just pass on the received data." }, { "heading": "3 EXPERIMENTS", "text": "We investigate the learning of symbolic planning tasks, where task complexity is measured as the number of computational steps required to solve a task, i.e., the size of the corresponding search tree (see Figure 2). Learning is done in the Sokoban domain, whereas the generalization and abstraction requirements R1 – R3 are shown by transferring to (1) longer planning tasks, (2) bigger Sokoban worlds, (3) a different data representation, and (4) two different task domains – sliding block puzzle and robotic manipulation.\nIn Sokoban, an agent interacts in a grid world with four actions – moving up, right, down or left. Therefore, the ALU can perform four operations and additionally a nop operation that leaves the given configuration unchanged. The world contains empty spaces that can be entered, walls that block movement, and boxes that can be pushed onto empty space. A task is given by a start configuration of the world and the desired goal configuration. For learning, we use a world of size 6×6 that is enclosed by walls. A world is represented with binary vectors and four-dimensional one-hot encodings for each position, resulting in 144-dimensional data words. The configuration of each world – inner walls, boxes and agent position – is sampled randomly. Each world is generated by sampling uniformly the number of additional inner walls from [0, 2] and the number of boxes from [1, 5]. The positions of these walls and boxes and the position of the agent are sampled uniformly from the empty spaces. An example task and the learned solution are shown in the Appendix in Figure 5 – the penguin is the agent, icebergs are boxes, ice blocks are walls, and water is empty space." }, { "heading": "3.1 ALGORITHMIC MODULES", "text": "In the experiments, we use a feedforward neural network as Controller with a layer size of 16 neurons and tanh activation. The TransformC is a linear layer projecting its 27-dimensional input onto the 5 operations of the ALU using leaky-ReLU activation and one-hot encoding. The computational memory has a word size of 8 bit, the Input module generates 3 control signals (2 for Learning to Search), and the ALU and Output module control signal feedback is not used here. Thus, the input to the Controller consists of 16 control signals, and in total there are about 1600 parameters." }, { "heading": "3.1.1 LEARNING OF THE DATA-DEPENDENT MODULES", "text": "All data-dependent modules are trained in a supervised setting and consist of feedforward networks. They optimize a cross-entropy loss using Adam (Kingma & Ba, 2015) on a mini-batch size of 20. To improve their generalization and robustness, the bad memories mechanism described in Section 2.1.1 is used with a buffer size of 200, and 50% of the samples within a mini-batch are sampled from it. The following task-dependent instantiations of the data-dependent modules are examples used for the Sokoban domain.\nThe Input module learns an equality function using differential rectifier units as inductive bias (Weyde & Kopparti, 2018) and consists of a feedforward network with 10 hidden units and leaky-ReLU activation. Using the learned binary equality signal I_{e,t} at step t, it produces three binary control signals according to c^{[1]}_{i,t} = (1 − I_{e,t}) − c^{[2]}_{i,t−1}, c^{[2]}_{i,t} = I_{e,t} + c^{[2]}_{i,t−1}, and c^{[3]}_{i,t} = I_{e,t} c^{[2]}_{i,t−1}, indicating the different phases of the algorithm.
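Written out in code, these phase signals take the following form (a sketch of our reading of the equations above, treating the equality signal and the control signals as 0/1 scalars):

def input_control_signals(I_e, c2_prev):
    c1 = (1 - I_e) - c2_prev   # 1 while searching and the goal has not been found yet
    c2 = I_e + c2_prev         # latches once the goal configuration has been found
    c3 = I_e * c2_prev         # 1 when equality holds after the goal was already found
    return c1, c2, c3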
For the Learning to Search experiment only the first two signals are used.
The TransformD module extracts a different view of the data, if required by the ALU, as described in Section 2.2. Here, it consists of a feedforward network with 500 hidden neurons and uses leaky-ReLU activation. For the Sokoban domain, the actions that the agent can take – and therefore the operations the ALU can apply – only change the world locally. Thus, the TransformD module extracts a local observation of the world $d_f$, i.e., the agent and the two adjacent locations in all four directions, as these are the only locations where an action can produce a change.
The ALU module receives the data view extracted by TransformD and the control signal from TransformC, which encodes the operation to apply. It learns to apply the operations, i.e., it learns an action model by learning preconditions and effects, and outputs the (potential) local change together with a control signal indicating whether the action changed the world. The local change is encoded as the direction of the change and the three corresponding spaces. The module consists of two feedforward networks, one for the control signal $c_a$ and one for applying the actions, producing the manipulated data $d_a$. The learned $c_a$ is used to gate the output between the output of the action network and the unchanged data input. The control network has two hidden layers with sizes [64, 64], the action network has hidden layers with [128, 64] neurons, and both use leaky-ReLU activations.
The Output module inserts the (locally) changed data from the ALU into the data stream. It receives the data from the memory $d_m$, and the data $d_a$ and control stream $c_a$ from the ALU. It consists of two feedforward networks for learning the data stream $d_o$ and the control signal stream $c_o$. The control network has two hidden layers with sizes [500, 250], the data network has hidden layers with [500, 500] neurons, and both use leaky-ReLU activations. The control signal $c_o$ is used for gating between the data with the inserted change and the original data $d_m$. To ensure that the Output module uses the manipulated data of the ALU and does not learn to manipulate the data itself, it is constrained to learn a binary mask that indicates where the change needs to be inserted. This binary mask indicates for each position in $d_a$ where to insert it in $d_m$ and can be seen as a structured prediction problem. Note that the training data consists only of data and control signals; the true binary mask is not known." }, { "heading": "3.2 LEARNING ALGORITHMIC SOLUTIONS", "text": "We investigate the learning of two algorithms, (1) a search algorithm and (2) a search-based planning algorithm. The data-dependent modules do not need to be retrained for the different algorithms. To evaluate whether the learned strategy is an abstract algorithmic solution, we show that it fulfills the three requirements R1 – R3 discussed in Section 1.1." }, { "heading": "3.2.1 LEARNING TO SEARCH", "text": "In the first task, the model has to learn breadth-first search to find the desired goal configuration. For that purpose, the initial input to the model is the start and goal configuration, and subsequent inputs are the goal configuration and the output of the model from the previous computation step; this recurrent output-input loop is sketched below.
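A minimal sketch of the loop, with a single `model_step` placeholder standing in for the full architecture (the names and the termination convention are illustrative assumptions):

```python
def run_search(start, goal, model_step, max_steps=100):
    """Recurrent output-input loop: the model consumes its own previous
    output together with the goal until it emits the nop operation."""
    prev, op = start, None
    for t in range(1, max_steps + 1):
        prev, op = model_step(prev, goal)  # model_step: one full pass of the
        if op == "nop":                    # Input/ALU/Output + memory modules
            return prev, t                 # goal configuration recognized
    return prev, max_steps

# Example with a trivial stand-in model that counts up to the goal:
print(run_search(0, 3, lambda x, g: (x + 1, "nop" if x + 1 == g else "step")))
```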
To solve the task, the model has to learn to produce the correct search tree and to recognize that the goal configuration is reached by choosing the nop operation at the correct computation step.
For the curriculum learning, the levels are defined as the number of nodes from the search tree that have to be fully explored, e.g., for Level 1, up to five correct computation steps have to be performed on the initial configuration; for Level 3 the initial configuration as well as the two subsequently found configurations need to be fully explored (see Figure 2(a)). This requires up to 13 correct computational steps. Curriculum levels are specified up to Level 21, which involves up to 85 correct computation steps to be solved. An additional Level 22 is activated afterwards, consisting of new samples from all 21 levels for evaluation. To prevent unlearning of previous levels, 20% of the samples in the mini-batch are sampled uniformly from previous levels. As in (Wierstra et al., 2014) we use restarting, but here the run automatically restarts if the maximum fitness of a level is not reached within 2,500 iterations. All experiments have a total budget of 10,000 iterations.
The fitness function $f$ uses step-wise binary losses computed by comparison to the correct solution over mini-batches of $N$ samples and is defined as

$$f = \begin{cases} \frac{1}{N}\sum_{n}^{N} f_e^{[n]} & \text{if } \frac{1}{N}\sum_{n}^{N} f_e^{[n]} < 100 \\ \frac{1}{N}\sum_{n}^{N} \left(f_e^{[n]} + f_b^{[n]}\right) & \text{otherwise,} \end{cases} \quad \text{with} \tag{1}$$

$$f_e^{[n]} = \frac{100}{3\,T_e^{[n]}} \sum_{t=1}^{T_e^{[n]}} \left[\,\mathbb{I}\big(c_{f,t}^{[n]} = \tilde{c}_{f,t}^{[n]}\big) + 2\,\mathbb{I}\big(d_{m,t}^{[n]} = \tilde{d}_{m,t}^{[n]}\big)\right] \quad \text{and} \quad f_b^{[n]} = 20\,\mathbb{I}\big(c_{f,T_e^{[n]}+1}^{[n]} = \text{nop}\big),$$

where $T_e$ is the number of steps required for constructing the search tree, or the step at which the first mistake occurs; $c_{f,t}$ is the operation chosen by TransformC to be applied by the ALU at step $t$; $d_{m,t}$ is the data word read from the memory; and $\tilde{c}_{f,t}$ and $\tilde{d}_{m,t}$ are the respective correct choices. The exploration fitness $f_e^{[n]}$ captures the fraction of correct computation steps until the goal configuration is found, scaled to 0-100%. Note that NES therefore only uses a single scalar value that summarizes the performance of the parameters over $N$ samples and all computational steps. The learning rate $\alpha$ is set to 0.01, the $\sigma$ of the search distribution to 0.1, weight decay is applied with factor 0.9995, the mini-batch size is $N = 20$ and the population size is $P = 20$.
We use a Gini-coefficient-based ranking that gives more importance to samples with higher fitness (Schaul et al., 2010). The maximum fitness is 120 for all levels, and a level is solved when 250 subsequent iterations have the maximum fitness, i.e., 5,000 samples are solved correctly. The bad memories consist of 200 samples, and 25% of the samples within a mini-batch are sampled uniformly from those. Whenever 10 subsequent iterations achieve the maximum fitness, the buffer is cleared and no learning is performed." }, { "heading": "3.2.2 LEARNING TO PLAN (SEARCH + BACKTRACK)", "text": "In the second task, the model has to learn, in addition to the breadth-first algorithm that computes a search tree to the goal configuration, to also extract the path from the search tree that encodes the solution to the given planning problem (see Figure 2 and Figure 5 in the Appendix). Therefore, the model not only has to learn to encode and perform two different algorithms, but also to switch between them at the correct computation step.
The initial input to the model is the start and goal configuration, and subsequent inputs are the goal configuration and the output of the model from the previous computation step, as before.
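Before detailing how the input switches once the goal is found, here is a minimal sketch of the step-wise fitness from Equation 1; the per-step comparisons, the handling of the first-mistake cutoff and all names are illustrative assumptions:

```python
def exploration_fitness(ops, data, correct_ops, correct_data):
    """Sketch of f_e for one sample (Eq. 1): per-step credit of 1 for a
    correct operation and 2 for a correct data word, accumulated up to
    the first mistake (T_e) and scaled to 0-100."""
    score, t_e = 0.0, 0
    for op, d, c_op, c_d in zip(ops, data, correct_ops, correct_data):
        score += (op == c_op) + 2 * (d == c_d)
        t_e += 1
        if op != c_op or d != c_d:       # first mistake ends the episode
            break
    return 100.0 * score / (3 * t_e) if t_e else 0.0

def mini_batch_fitness(f_e_list, f_b_list):
    """Eq. 1: the bonus f_b is only added once exploration is perfect."""
    mean_fe = sum(f_e_list) / len(f_e_list)
    if mean_fe < 100:
        return mean_fe
    return sum(fe + fb for fe, fb in zip(f_e_list, f_b_list)) / len(f_e_list)
```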
When the goal configuration is found by the model, the input is the start configuration and the previous output. To solve the task, the model has to learn to produce the search tree and to recognize that the goal configuration is reached, as before. In addition, after recognizing the goal configuration, the model needs to switch behavior and output the path of the search tree encoding the planning solution. This solution consists of the states from the initial to the goal configuration and nop operations, in reverse order. Therefore, the maximum number of computation steps increases to up to 89 in Level 21. The fitness function is defined as in Equation 1 but with

$$f_b^{[n]} = \frac{50}{3\,T_b^{[n]}} \sum_{t=T_e^{[n]}+1}^{T_e^{[n]}+T_b^{[n]}} \left[\,\mathbb{I}\big(c_{f,t}^{[n]} = \text{nop}\big) + 2\,\mathbb{I}\big(d_{m,t}^{[n]} = \tilde{d}_{m,t}^{[n]}\big)\right],$$

where $T_b$ is the number of steps required for backtracking the solution, or the step at which the first mistake occurs. The maximum fitness is 150 and all other settings remain as before." }, { "heading": "3.3 R1 – GENERALIZATION TO UNSEEN TASK CONFIGURATIONS AND COMPLEXITIES", "text": "A main goal in all learning tasks is to achieve generalization – to not only learn to solve seen situations, but to learn a solution that generalizes to unseen situations. One evaluation of this generalization ability is built into our learning process itself. A curriculum level is solved after 250 subsequent iterations (5,000 samples) with maximum fitness, and iterations with maximum fitness do not trigger learning. Thus, if a new level with more complex tasks is presented and the fitness stays at maximum so that no learning is triggered, the previously learned solution generalizes to the new setting – to more complex tasks (see Figure 2).
This generalization is shown in Figure 3. For example, in the Learning to Plan setup (Figure 3(b)), after 3 levels the algorithmic solution is found and no learning is triggered anymore during the run. Moreover, the last triggered learning was for curriculum Level 3 – meaning a complexity of 15 computational steps – and the found solution generalizes up to the highest specified curriculum Level 21 with 89 computational steps. Learning the algorithmic solution is done within 3 levels and 2,563 iterations. Figure 3(d) shows the evaluation of learning to solve the two tasks over 15 runs each. In contrast, the original DNC (Graves et al., 2016) model and a stack-augmented recurrent neural network for algorithmic patterns (Joulin & Mikolov, 2015) are not able to solve Level 1 when trained in a supervised setup with gradient descent and considerably more training iterations; see Figure 3(c) and Appendix B for implementation details.
Task complexity. Additionally, we evaluated the learned algorithmic solution with task complexities far beyond the specified curriculum levels, i.e., beyond the complexities experienced during training. For this, we used the run shown in Figure 3(b) and solved tasks requiring 330,631 computational steps (corresponding to Level 82,656), having been trained only up to 15 steps (see Figure 2 for the complexities) and having been tested during training only up to 89 steps. Recall the model's recurrent output-input structure: given the initial task input, the model autonomously performs 330,631 correct computational steps, i.e., it builds a search tree with over 330,600 nodes to compute and output the solution. Moreover, the solution learned in 6×6 environments successfully solved all tasks within 8×8 environments.
Thus, the learned strategy represents an abstract algorithmic solution that generalizes and scales to arbitrary task configurations and complexities, fulfilling R1. The learned algorithmic solution is explained with an example in Appendix A." }, { "heading": "3.4 R2 – INDEPENDENCE OF THE DATA REPRESENTATION", "text": "Algorithmic solutions are independent of the data representation, meaning the abstract strategy still works if the encoding is changed, as long as the data-dependent operations are adjusted. Consider again a sorting algorithm. Its algorithmic behavior stays the same regardless of whether it has to sort a list of numbers encoded in binary or hexadecimal, as long as the compare operators are defined. To show that our learned algorithmic solutions have this feature and fulfill R2, we change the representation of the data but reuse the learned algorithmic modules, and the model can still solve all tasks without retraining. The data-dependent modules are adapted and relearned. The changed representation (e.g., the penguin now represents a wall instead of the agent) and the results over 10,000 iterations (200,000 samples) over all curriculum levels are shown in Figure 4 (left). The fitness is at maximum from the start, showing that all samples in all levels are successfully solved without triggering learning while operating on the new data representation; hence, R2 is fulfilled." }, { "heading": "3.5 R3 – INDEPENDENCE OF THE TASK DOMAIN", "text": "Requirement R3 states that an algorithmic solution is independent of the task domain. Consider again the sorting algorithm example: as long as the compare operators are defined, it is able to sort arbitrary objects. Therefore, the data-dependent modules are adapted and relearned, but we reuse the learned algorithmic solution on two new task domains.
As new domains, 3×3 sliding block puzzles and a robotic manipulation task are used (Figure 4). Configurations are represented with binary vectors as described for Sokoban in Section 3. For the puzzle domain, actions are sliding adjacent tiles onto the free (white) space from four directions. A task configuration is given as a start and goal board configuration. In the robotic manipulation domain, a task is given as a start and goal configuration of the objects. The available actions are the four locations on which objects can be stacked, e.g., action pos1 moves the gripper to position 1 and places the grasped object on top, or picks up the top object there if no object is grasped. The maximum stacking height is 3 boxes, resulting in a discrete representation of the object configuration with a 3×4 grid. As with the new data representation, the learned algorithmic solution is able to solve all 200,000 presented samples from all curriculum levels in the new domains without triggering learning (Figure 4), showing the independence of the task domain and fulfilling R3." }, { "heading": "4 CONCLUSION", "text": "We presented a novel architecture for learning algorithmic solutions and showed how it can learn abstract strategies that generalize and scale to arbitrary task configurations and complexities (R1) (Section 3.3), and that are independent of both the data representation (R2) (Section 3.4) and the task domain (R3) (Section 3.5).
Such algorithmic solutions represent abstract strategies that can be transferred directly to novel problem instantiations, a crucial ability for intelligent behavior.
To show that our architecture is capable of learning strategies fulfilling the algorithm requirements R1 – R3 in symbolic planning tasks, we performed experiments with complexities orders of magnitude higher than seen during training (15 vs. 330,631 steps; see Figures 2 & 3), and transferred the learned solution to bigger state spaces, a new data representation and two new task domains (Figure 4) – showing, to the best of our knowledge, for the first time how such abstract strategies can be represented and learned with memory-augmented networks. The learned algorithmic solution can be applied to any problem that can be framed as such a symbolic search or planning problem.
The modular structure and the information flow of the architecture enable the learning of algorithmic solutions, their transfer, and the incorporation of prior knowledge. Using Natural Evolution Strategies for learning removes constraints on the individual modules, allowing for arbitrary module instantiations and combinations, and the beneficial use of a non-differentiable memory module (Greve et al., 2016). As the complexity and structure of the algorithmic modules need to be specified, an interesting avenue for future work is to learn these as well, building on the ideas from Greve et al. (2016); Merrild et al. (2018). Showing how algorithmic solutions characterized by R1 – R3 can be represented and learned with memory-augmented networks sets the foundation for future work, extending beyond symbolic planning and incorporating intrinsic motivation (Oudeyer & Kaplan, 2009; Baldassarre & Mirolli, 2013) to discover new and unexpected strategies." }, { "heading": "A BEHAVIOR OF THE LEARNED ALGORITHMIC SOLUTION", "text": "Figure 5 highlights the learned algorithmic behavior – one memory location is read repeatedly with content lookup attention until all operations have been applied and the node is fully explored. Then attention shifts towards temporal linkage to read the next data to be explored. This pattern continues until the goal configuration is found in step 11. After that, the behavior changes to output the backtracking solution by switching to usage linkage attention and nop operations until the initial configuration is reached.
[Figure 5 depicts, per computation step, the read/write memory locations, the attention over the five read mechanisms, the data streams $d_i$, $d_m$, $d_e$, the data from the Output module $d_o$, and the ALU commands $c_f$ (nop throughout the backtrack phase), together with the corresponding search tree.]
Figure 5: The behavior of the learned model on a task from Level 3 (see Sec. 3.2 for details) and the corresponding search tree that is constructed implicitly. In the search phase, the model fully explores one node by successively applying all operations, before reading the next node, until the goal is found. Then the behavior changes in the backtrack phase, where the solution of the planning task is emitted as the states from start to goal in reverse order, along with nop operations.
The algorithmic behavior can also be seen in the repetitive patterns of the attention vector, showing the five attention mechanisms for reading (temporal and usage linkage in both directions, and content lookup), which represent how strongly each reading mechanism is used in each computation step." }, { "heading": "B DETAILS ON THE IMPLEMENTATIONS OF THE COMPARISON METHODS", "text": "Both models, the original Differentiable Neural Computer (DNC) (Graves et al., 2016) and the stack-augmented recurrent network (Joulin & Mikolov, 2015), are trained in a supervised setting with cross-entropy losses for 500,000 iterations to compensate for the pretraining of the data modules. They use the same output-input loop as our architecture, i.e., receiving their own output as input in the next computation step in addition to the goal configuration. The loss is computed based on the correct sequences of configurations and the control signal indicating that the goal has been reached, similar to the fitness function of our architecture in Equation 1. Both use an LSTM network with 256 hidden units as controller, and the memory word size is set to 152, equal to our model. Like our architecture, the DNC has one read and two write heads. The stack-augmented model uses four stacks with the three actions PUSH, POP, and NO OP." }, { "heading": "C EVALUATION OF THE LEARNING PROCESS AND MODEL COMPONENTS", "text": "To evaluate the effect of the individual modifications and extensions, we compared our architecture with and without them on the Learning to Search task. In all setups, all runs had a budget of 10,000 iterations. The bar plots show the mean and standard deviation of the number of learning iterations; numbers on top of the bars show the number of runs that triggered learning in that level. Plots below the bar plots show the number of runs that successfully solved the corresponding curriculum level, i.e., the level at which they ended after the budget of 10,000 iterations. All comparisons are done without the restarting mechanism, except in the evaluation of that mechanism.
NOVELTY AND RESTARTS
Here, two mechanisms to counter the problem of getting stuck in local optima are evaluated, namely the automatic restart as in the original NES (Wierstra et al., 2014) and the use of an additional novelty signal as in NSRA-ES (Conti et al., 2018). For the novelty calculation, we defined the behavior as the sequence of read memory locations and applied ALU operations. The baseline model does not use either of the two mechanisms. While we did not observe an improvement using novelty, the automatic restarts reduced the number of learning iterations, see Figure 7. Note that the baseline and novelty models are also able to learn algorithmic solutions, but they require more iterations and, hence, die out before the final curriculum level due to reaching the budget of 10,000 iterations.
CONSTRAINED WRITE HEAD
Here we evaluated the introduced constrained write head, which updates the previously read memory location. We compared against two models without this constrained head, one with one write head and one with two write heads to compensate for the missing constrained head. The constrained head was a necessary modification to enable the efficient learning of algorithmic solutions, see Figure 8.
USAGE-LINKAGE AND HARD ATTENTION VS. SOFT ATTENTION
Here the introduced usage-linkage and hard attention mechanisms for memory access are evaluated.
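To make the soft/hard distinction concrete, here is a minimal sketch contrasting the two read modes; this illustrates the general mechanism, not the paper's implementation:

```python
import numpy as np

def soft_read(memory, attention):
    """Soft attention: blend all memory rows by the attention weights."""
    return attention @ memory            # weighted sum over locations

def hard_read(memory, attention):
    """Hard attention: read exactly one location, the argmax."""
    return memory[np.argmax(attention)]

memory = np.array([[1., 0.], [0., 1.], [2., 2.]])
attention = np.array([0.1, 0.2, 0.7])
print(soft_read(memory, attention))      # [1.5 1.6] - a mixture of rows
print(hard_read(memory, attention))      # [2. 2.]  - a single, clean row
```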
While using hard attention instead of soft attention was a necessary modification to enable efficient learning of algorithmic solutions, the introduced usage-linkage had a smaller impact on the Learning to Search task, as shown in Figure 9. When applied to the Learning to Plan setup, however, the usage-linkage improved the learning of algorithmic solutions significantly, see Figure 10. Both results show that the model learns to use the attention mechanisms that are required for the algorithmic solution, i.e., the usage-linkage is especially useful for the backtracking in the Learning to Plan setup, compared to the Learning to Search setup where no backtracking is required.
BAD MEMORIES
The bad memories approach was developed while learning the data-dependent modules and was a necessary mechanism to learn robust and generalized modules with 100% accuracy, as explained in Section 2.1.1. For learning the algorithmic solutions, the impact of this learning-from-mistakes strategy was less significant, see Figure 11." } ]
2019
null
SP:da33f43dc72578ff039a1843c3bbbfc70ed4a685
[ "The paper proposes a regression approach that, given a few training (support) samples of a regression task (input and desired output pairs), should be able to output the values of the target function on additional (query) inputs. The proposed method is to learn a set of basis functions (MLPs) and a weight generator that for a given support set predicts weights using which the basis functions are linearly combined to form the predicted regression function, which is later tested (using the MSE metric) w.r.t. the ground truth. The method is trained on a large collection of randomly sampled task from the target family and is tested on a separate set of random tasks. The experiments include: ", "The authors propose using sparse adaptive basis function models for few shot regression. The basis functions and the corresponding weights are generated via respective networks whose parameters are shared across all tasks. Elastic net regularization is used to encourage task specific sparsity in the weights, the idea being that with only a small number of available training examples, learning a sparse basis is easier than learning a dense basis with many more parameters. The method is validated on both synthetic data and on image completion tasks. " ]
Recent few-shot learning algorithms have enabled models to quickly adapt to new tasks based on only a few training samples. Previous few-shot learning works have mainly focused on classification and reinforcement learning. In this paper, we propose a method that focuses on regression tasks. Our model is based on the idea that the degree of freedom of the unknown function can be significantly reduced if it is represented as a linear combination of a set of sparsifying basis functions. This enables using a few labelled samples to learn a good approximation of the entire function. We design a Basis Function Learner network to encode the basis functions for a task distribution, and a Weights Generator to generate the weight vector for a novel task. We show that our model outperforms current state of the art meta-learning methods in various regression tasks.
[]
[ { "authors": [ "Martín Abadi", "Ashish Agarwal", "Paul Barham", "Eugene Brevdo", "Zhifeng Chen", "Craig Citro", "Greg S Corrado", "Andy Davis", "Jeffrey Dean", "Matthieu Devin" ], "title": "Tensorflow: Large-scale machine learning on heterogeneous distributed systems", "venue": "arXiv preprint arXiv:1603.04467,", "year": 2016 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "Marta Garnelo", "Dan Rosenbaum", "Christopher Maddison", "Tiago Ramalho", "David Saxton", "Murray Shanahan", "Yee Whye Teh", "Danilo Rezende", "SM Ali Eslami" ], "title": "Conditional neural processes", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Marta Garnelo", "Jonathan Schwarz", "Dan Rosenbaum", "Fabio Viola", "Danilo J. Rezende", "S.M. Ali Eslami", "Yee Whye Teh" ], "title": "Neural processes, 2018b", "venue": null, "year": 2018 }, { "authors": [ "Spyros Gidaris", "Nikos Komodakis" ], "title": "Dynamic few-shot visual learning without forgetting", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition", "year": 2018 }, { "authors": [ "Bharath Hariharan", "Ross Girshick" ], "title": "Low-shot visual recognition by shrinking and hallucinating features", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition", "year": 2016 }, { "authors": [ "Hyunjik Kim", "Andriy Mnih", "Jonathan Schwarz", "Marta Garnelo", "Ali Eslami", "Dan Rosenbaum", "Oriol Vinyals", "Yee Whye Teh" ], "title": "Attentive neural processes", "venue": null, "year": 1901 }, { "authors": [ "Diederik Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Gregory Koch", "Richard Zemel", "Ruslan Salakhutdinov" ], "title": "Siamese neural networks for one-shot image recognition", "venue": "In International Conference on Machine Learning (ICML) Deep Learning Workshop,", "year": 2015 }, { "authors": [ "Brenden Lake", "Ruslan Salakhutdinov", "Jason Gross", "Joshua Tenenbaum" ], "title": "One shot learning of simple visual concepts", "venue": "In Proceedings of the Annual Meeting of the Cognitive Science Society,", "year": 2011 }, { "authors": [ "Yann LeCun" ], "title": "The mnist database of handwritten digits. http://yann", "venue": "lecun. 
com/exdb/mnist/,", "year": 1998 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Zhenguo Li", "Fengwei Zhou", "Fei Chen", "Hang Li" ], "title": "Meta-sgd: Learning to learn quickly for few shot learning", "venue": "arXiv preprint arXiv:1707.09835,", "year": 2017 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In Proceedings of International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Raymond H Myers" ], "title": "Classical and modern regression with applications, volume 2. Duxbury press", "venue": null, "year": 1990 }, { "authors": [ "Vinod Nair", "Geoffrey E Hinton" ], "title": "Rectified linear units improve restricted boltzmann machines", "venue": "In Proceedings of the 27th international conference on machine learning", "year": 2010 }, { "authors": [ "Sachin Ravi", "Hugo Larochelle" ], "title": "Optimization as a model for few-shot learning", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2016 }, { "authors": [ "Andrei A Rusu", "Dushyant Rao", "Jakub Sygnowski", "Oriol Vinyals", "Razvan Pascanu", "Simon Osindero", "Raia Hadsell" ], "title": "Meta-learning with latent embedding optimization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Jake Snell", "Kevin Swersky", "Richard Zemel" ], "title": "Prototypical networks for few-shot learning", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "Donald F Specht" ], "title": "A general regression neural network", "venue": "IEEE transactions on neural networks,", "year": 1991 }, { "authors": [ "Flood Sung", "Yongxin Yang", "Li Zhang", "Tao Xiang", "Philip HS Torr", "Timothy M Hospedales" ], "title": "Learning to compare: Relation network for few-shot learning", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "I. Tosic", "P. 
Frossard" ], "title": "Dictionary learning", "venue": "IEEE Signal Processing Magazine,", "year": 2011 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Oriol Vinyals", "Charles Blundell", "Timothy Lillicrap", "Daan Wierstra" ], "title": "Matching networks for one shot learning", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2016 }, { "authors": [ "Yu-Xiong Wang", "Ross Girshick", "Martial Hebert", "Bharath Hariharan" ], "title": "Low-shot learning from imaginary data", "venue": null, "year": 2018 }, { "authors": [ "Mitchell Wortsman", "Kiana Ehsani", "Mohammad Rastegari", "Ali Farhadi", "Roozbeh Mottaghi" ], "title": "Learning to learn how to learn: Self-adaptive visual navigation using meta-learning", "venue": "arXiv preprint arXiv:1812.00971,", "year": 2018 }, { "authors": [ "Jaesik Yoon", "Taesup Kim", "Ousmane Dia", "Sungwoong Kim", "Yoshua Bengio", "Sungjin Ahn" ], "title": "Bayesian model-agnostic meta-learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Chi Zhang", "Yixin Zhu", "Song-Chun Zhu" ], "title": "Metastyle: Three-way trade-off among speed, flexibility, and quality in neural style transfer", "venue": "In The AAAI Conference on Artificial Intelligence (AAAI),", "year": 2019 }, { "authors": [ "Ruixiang Zhang", "Tong Che", "Zoubin Ghahramani", "Yoshua Bengio", "Yangqiu Song" ], "title": "Metagan: An adversarial approach to few-shot learning", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Hui Zou", "Trevor Hastie" ], "title": "Regularization and variable selection via the elastic net", "venue": "Journal of the royal statistical society: series B (statistical methodology),", "year": 2005 } ]
[ { "heading": null, "text": "Recent few-shot learning algorithms have enabled models to quickly adapt to new tasks based on only a few training samples. Previous few-shot learning works have mainly focused on classification and reinforcement learning. In this paper, we propose a method that focuses on regression tasks. Our model is based on the idea that the degree of freedom of the unknown function can be significantly reduced if it is represented as a linear combination of a set of sparsifying basis functions. This enables using a few labelled samples to learn a good approximation of the entire function. We design a Basis Function Learner network to encode the basis functions for a task distribution, and a Weights Generator to generate the weight vector for a novel task. We show that our model outperforms current state of the art meta-learning methods in various regression tasks." }, { "heading": "1 INTRODUCTION", "text": "Regression deals with the problem of learning a model relating a set of inputs to a set of outputs. The learned model can be thought as function y = F (x) that gives a prediction y ∈ Rdy given input x ∈ Rdx where dy and dx are dimensions of the output and input respectively. Typically, a regression model is trained on a large number of data points to be able to provide accurate predictions for new inputs. Recently, there have been a surge in popularity on few-shot learning methods (Vinyals et al., 2016; Koch et al., 2015; Gidaris & Komodakis, 2018). Few-shot learning methods require only a few examples from each task to be able to quickly adapt and perform well on a new task. These few-shot learning methods in essence are learning to learn i.e. the model learns to quickly adapt itself to new tasks rather than just learning to give the correct prediction for a particular input sample.\nIn this work, we propose a few shot learning model that targets few-shot regression tasks. Our model takes inspiration from the idea that the degree of freedom of F (x) can be significantly reduced when it is modeled a linear combination of sparsifying basis functions. Thus, with a few samples, we can estimate F (x). The two primary components of our model are (i) the Basis Function Learner network which encodes the basis functions for the distribution of tasks, and (ii) the Weights Generator network which produces the appropriate weights given a few labelled samples. We evaluate our model on the sinusoidal regression tasks and compare the performance to several meta-learning algorithms. We also evaluate our model on other regression tasks, namely the 1D heat equation tasks modeled by partial differential equations and the 2D Gaussian distribution tasks. Furthermore, we evaluate our model on image completion as a 2D regression problem on the MNIST and CelebA data-sets, using only a small subset of known pixel values. To summarize, our contributions for this paper are:\n• We propose to address few shot regression by linear combination of a set of sparsifying basis functions.\n• We propose to learn these (continuous) sparsifying basis functions from data. Traditionally, basis functions are hand-crafted (e.g. Fourier basis).\n• We perform experiments to evaluate our approach using sinusoidal, heat equation, 2D Gaussian tasks and MNIST/CelebA image completion tasks." }, { "heading": "2 RELATED WORK", "text": "Regression problems has long been a topic of study in the machine learning and signal processing community (Myers & Myers, 1990; Specht, 1991). 
Though similar to classification, regression estimates one or multiple scalar values and is usually thought of as a single-task problem: a single model is trained to perform regression on only one task. Our model instead reformulates the regression problem as a few-shot learning problem, allowing our model to perform regression on tasks sampled from the same task distribution.
The success achieved by deep neural networks heavily relies on large amounts of data, especially labelled data. As labelling data is time-consuming and labor-intensive, learning from limited labelled data is drawing more and more attention. A prominent approach is meta learning. Meta learning, also referred to as learning to learn, aims at learning an adaptive model across different tasks. Meta learning has shown potential in style transfer (Zhang et al., 2019), visual navigation (Wortsman et al., 2018), etc. Meta learning has also been applied to few-shot learning problems, which concern models that can learn from prior experiences to adapt to new tasks. Lake et al. (2011) proposed the one-shot classification problem and introduced the Omniglot data set as a few-shot classification data set, similar to MNIST (LeCun, 1998) for traditional classification. Since then, there has been a surge of meta learning methods striving to solve few-shot problems. Some meta learning approaches learn a similarity metric (Snell et al., 2017; Vinyals et al., 2016; Koch et al., 2015) between new test examples and the few-shot training samples to make the prediction. The similarity metric used here can be Euclidean distance, cosine similarity, or a more expressive metric learned by relation networks (Sung et al., 2018). On the other hand, optimization-based approaches learn how to optimize the model directly. Finn et al. (2017) learned an optimal initialization of models for different tasks in the same distribution, which is able to achieve good performance by simple gradient descent. Rusu et al. (2019) learned how to perform gradient descent in the latent space to adapt the model parameters more effectively. Ravi & Larochelle (2016) employed an LSTM to learn an optimization algorithm. Generative models have also been proposed to overcome the limitations resulting from the few-shot setting (Zhang et al., 2018; Hariharan & Girshick, 2017; Wang et al., 2018).
Few-shot regression tasks are used among various few-shot learning methods (Finn et al., 2017; Rusu et al., 2019; Li et al., 2017). In most existing works, these experiments usually do not extend beyond the sinusoidal and linear regression tasks.
A prominent family of algorithms that tackles a similar problem as few-shot regression is Neural Processes (Garnelo et al., 2018b;a; Kim et al., 2019). Neural Processes algorithms model the distributions of the outputs of regression functions using deep neural networks, given sets of input-output pairs. Similar to Variational Autoencoders (Kingma & Welling, 2013), Neural Processes employ a Bayesian approach in modelling the output distribution of regression functions using an encoder-decoder architecture. Our model, on the other hand, employs a deterministic approach where we directly learn a set of basis functions to model the output distribution. Our model also does not produce any latent vectors but instead produces predictions via a dot product between the learned basis functions and the weight vector.
Our experimental results show that our model (based on a sparse linear combination of basis functions) compares favorably to Neural Processes (based on conditional stochastic processes).
Our proposed sparse linear representation framework for few-shot regression makes the few-shot regression problem appear similar to another research problem called dictionary learning (DL) (Tosic & Frossard, 2011), which focuses on learning dictionaries of atoms that provide efficient representations of some class of signals. However, the differences between DL and our problem are significant: our problems are continuous rather than discrete as in DL, and we only observe a very small percentage of samples. A detailed comparison with DL is discussed in the appendix." }, { "heading": "3 PROPOSED METHOD", "text": "" }, { "heading": "3.1 PROBLEM FORMULATION", "text": "We first provide the problem definition for few-shot regression. We aim at developing a model that can rapidly regress to a variety of equations and functions based on only a few training samples. We assume that each equation we would like to regress is a task $T_i$ sampled from a distribution $p(T)$. We train our model on a set of training tasks, $S_{train}$, and evaluate it on a separate set of testing tasks, $S_{test}$. Unlike in few-shot classification, the task distribution $p(T)$ is in general continuous for regression tasks. Each regression task is comprised of training samples $D_{train}$ and validation samples $D_{val}$. For both the training set $S_{train}$ and the testing set $S_{test}$, $D_{train}$ is comprised of $K$ training samples and labels, $D_{train} = \{(x_t^k, y_t^k) \mid k = 1 \dots K\}$, while $D_{val}$ is comprised of $N$ samples and labels, $D_{val} = \{(x_p^n, y_p^n) \mid n = 1 \dots N\}$. The goal of few-shot regression is to regress the entire, continuous output range of the equation given only the few points in the training set." }, { "heading": "3.2 FEW-SHOT REGRESSION VIA LEARNING SPARSIFYING BASIS FUNCTIONS", "text": "Here we discuss our main idea. We would like to model the unknown function $y = F(x)$ given only $D_{train} = \{(x_t^k, y_t^k) \mid k = 1 \dots K\}$. With small $K$, e.g. $K = 10$, this is an ill-posed task, as $F(x)$ can take any form. As stated before, we assume that each function we would like to regress is a task $T_i$ drawn from an unknown distribution $p(T)$. To simplify the discussion, we assume scalar input and scalar output. Our idea is to learn a sparse representation of the unknown function $F(x)$, so that a few samples $\{(x_t^k, y_t^k) \mid k = 1 \dots K\}$ can provide adequate information to approximate the entire $F(x)$. Specifically, we model the unknown function $F(x)$ as a linear combination of a set of basis functions $\{\phi_i(x)\}$:

$$F(x) = \sum_i w_i \phi_i(x) \tag{1}$$

Many handcrafted basis functions have been developed to expand $F(x)$. For example, the Maclaurin series expansion (Taylor series expansion at $x = 0$) uses $\{\phi_i(x)\} = \{1, x, x^2, x^3, \dots\}$:

$$F(x) = w_0 + w_1 x + w_2 x^2 + \dots \tag{2}$$

If $F(x)$ is a polynomial, (2) can be a sparse representation, i.e., only a few $w_i$ are non-zero and significant, and most $w_i$ are zero or near zero. However, if $F(x)$ is a sinusoid, it would require many terms to represent $F(x)$ adequately, e.g.:

$$\sin(x) \approx w_1 x + w_3 x^3 + w_5 x^5 + w_7 x^7 + \dots + w_M x^M \tag{3}$$

In (3), $M$ is large and $M \gg K$. Given only $K$ samples $\{(x_t^k, y_t^k) \mid k = 1 \dots K\}$, it is not adequate to determine $\{w_i\}$ and model the unknown function. On the other hand, if we use the Fourier basis instead, i.e., $\{\phi_i(x)\} = \{1, \sin(x), \sin(2x), \dots, \cos(x), \cos(2x), \dots\}$, we can clearly obtain a sparse representation: we can adequately approximate the sinusoid with only a few terms.
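This contrast is easy to verify numerically. Below is a small illustrative check (ours, not from the paper) fitting K = 10 samples of a sinusoid by least squares under a polynomial basis versus a Fourier-style basis; basis sizes and thresholds are arbitrary choices:

```python
import numpy as np

def design(x, kind):
    if kind == "polynomial":                      # {1, x, x^2, ..., x^8}
        return np.stack([x**m for m in range(9)], axis=1)
    cols = [np.ones_like(x)]                      # Fourier-style basis
    cols += [np.sin(m * x) for m in (1, 2, 3, 4)]
    cols += [np.cos(m * x) for m in (1, 2, 3, 4)]
    return np.stack(cols, axis=1)

rng = np.random.default_rng(0)
x, x_test = rng.uniform(-5, 5, 10), np.linspace(-5, 5, 200)
for kind in ("polynomial", "fourier"):
    w, *_ = np.linalg.lstsq(design(x, kind), np.sin(x), rcond=None)
    mse = np.mean((design(x_test, kind) @ w - np.sin(x_test)) ** 2)
    print(kind, "test MSE:", round(mse, 4),
          "| weights with |w| > 0.1:", int(np.sum(np.abs(w) > 0.1)))
```

Under the Fourier-style basis the fit concentrates on a single significant weight (the sin(x) column) and the test error is near zero, while the polynomial basis spreads the energy across many weights.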
Under a Fourier basis, there are only a few non-zero significant weights $w_i$, and $K$ samples are sufficient to estimate the significant $w_i$ and approximate the function. Essentially, with a sparsifying basis $\{\phi_i(x)\}$, the degree of freedom of $F(x)$ can be significantly reduced when it is modeled using (1), so that $K$ samples can well estimate $F(x)$.
Our approach is to use the set of training tasks drawn from $p(T)$ to learn $\{\phi_i(x)\}$ that result in a sparse representation for any task drawn from $p(T)$. The set of $\{\phi_i(x)\}$ is encoded in the Basis Function Learner network, which takes in $x$ and outputs $\Phi(x) = [\phi_1(x), \phi_2(x), \dots, \phi_M(x)]^T$. In our framework, $\Phi(x)$ is the same for any task drawn from $p(T)$, as it encodes the set of $\{\phi_i(x)\}$ that can sparsely represent any task from $p(T)$. We further learn a Weights Generator network to map the $K$ training samples of a novel task to a constant vector $w = [w_1, w_2, \dots, w_M]^T$. The unknown function is modeled as $w^T \Phi(x)$." }, { "heading": "3.3 MODEL ARCHITECTURE", "text": "An overview of our model is depicted in Figure 1. Given a regression task $T$ with $D_{train} = \{(x_t^k, y_t^k) \mid k = 1 \dots K\}$, the model is tasked to approximate the function across the entire output range. The training samples $x_t^k \in \mathbb{R}^{d_x}$ are first passed through the Basis Function Learner, which is represented as a network $\Phi_\theta(x)$ parameterized by trainable parameters $\theta$. The Basis Function Learner outputs the set of learned basis functions in the form of a vector, $\Phi(x) \in \mathbb{R}^{d_\phi}$, where $d_\phi$ is the number of basis functions we would like the Basis Function Learner to learn. We represent the Basis Function Learner as a series of fully connected layers followed by a ReLU non-linearity (Nair & Hinton, 2010).
The set of learned basis functions $\Phi(x)$, together with the labels $y_t^k \in \mathbb{R}^{d_y}$, is then passed into the Weights Generator. The Weights Generator, represented as a network $G_\psi(\Phi(x_t^k), y_t^k)$ with trainable parameters $\psi$, takes the input $\Phi(x_t^k), y_t^k$ and outputs a weight vector $w_k$ for each training sample of a regression task. The final weight vector $w$ for task $T$ is then obtained by taking the mean of the $K$ weight vectors. The Weights Generator consists of a series of $B$ self-attention blocks followed by a final fully connected layer to transform the output into the desired dimensions. We provide architecture details of the Weights Generator network in the appendix.
The model is then applied to make predictions for any input $x$. During meta-training, the validation set $D_{val} = \{(x_p^n, y_p^n) \mid n = 1 \dots N\}$ for a task $T$ is given. The prediction is produced by taking a dot product between the task-specific weight vector $w$ and the set of learned basis functions:

$$y_{pred}^n = w^T \Phi_\theta(x_p^n) \tag{4}$$

To train our model, we design a loss function $L$ that consists of three terms. The first term is a mean-squared error between the validation set labels $y_p^n \in D_{val}$ and the predictions $y_{pred}^n$. We also add two penalty terms on the weight vector $w$ generated for each task. The first penalty term is on the L1 norm of the generated weight vectors. This is to encourage the learned weight vectors to be sparse, in order to approximate the unknown function with a few significant basis functions. The second penalty term is on the L2 norm of the generated weight vector. This is used to reduce the variance of the estimated weights, as commonly done in regression (Zou & Hastie, 2005).
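Putting these pieces together, here is a minimal sketch of the prediction step (Equation 4) and the penalized objective stated next (Equation 5); the toy networks, shapes and names are illustrative assumptions, not the paper's code:

```python
import numpy as np

d_phi = 40                                         # number of basis functions
rng = np.random.default_rng(0)
proj = rng.normal(size=d_phi)                      # fixed random features

def basis(x):                                      # stand-in for Phi_theta(x)
    return np.tanh(np.outer(x, proj))

def weights_from_support(x_t, y_t):                # stand-in for G_psi
    w_k = basis(x_t) * y_t[:, None]                # one weight vector per sample
    return w_k.mean(axis=0)                        # averaged over the K samples

def loss(w, x_val, y_val, lam1=1e-3, lam2=1e-4):   # Eq. 5, paper's lambda values
    y_pred = basis(x_val) @ w                      # Eq. 4: y_pred = w^T Phi(x)
    mse = np.mean((y_val - y_pred) ** 2)
    return mse + lam1 * np.abs(w).sum() + lam2 * np.sqrt((w ** 2).sum())

x_t = rng.uniform(-5, 5, 10)
w = weights_from_support(x_t, np.sin(x_t))
x_v = rng.uniform(-5, 5, 10)
print(loss(w, x_v, np.sin(x_v)))
```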
The full loss function $L$ is as follows:

$$L_{\theta,\psi} = \frac{1}{N} \sum_{y_p^n \in D_{val}} \left(y_p^n - y_{pred}^n\right)^2 + \lambda_1 \|w\|_1 + \lambda_2 \|w\|_2 \tag{5}$$

where $\lambda_1$ and $\lambda_2$ represent the weights of the L1 and L2 terms respectively. Note that our loss function for meta-learning turns out to be similar to that of Elastic Net regression (Zou & Hastie, 2005), with both L1 and L2 regularization terms. However, the difference is significant: instead of focusing on a single regression task as in (Zou & Hastie, 2005), we use this loss function to learn (i) the parameters $\theta$ of the Basis Function Learner network, which encodes the sparsifying basis functions for any task drawn from a task distribution, and (ii) the parameters $\psi$ of the Weights Generator network, which produces the weights for any novel task drawn from the same task distribution." }, { "heading": "4 EXPERIMENTS AND ANALYSIS", "text": "In this section we describe the experiments we ran and introduce the types of regression tasks used to evaluate our method. For all of our experiments, we set the learning rate to 0.001 and use the Adam optimizer (Kingma & Ba, 2014) to perform stochastic gradient descent on our model. We implement all our models using the Tensorflow (Abadi et al., 2016) library. In the following subsections, we describe each of the experiments in more detail. We include the experiments on the 1D heat equation and 2D Gaussian regression tasks in the appendix." }, { "heading": "4.1 1D REGRESSION", "text": "For all 1D regression tasks, the Basis Function Learner consists of two fully connected layers with 40 hidden units. For the loss function we set $\lambda_1 = 0.001$ and $\lambda_2 = 0.0001$.
Sinusoidal Regression. We first evaluate our model on the sinusoidal regression task, a few-shot regression task that is widely used to evaluate few-shot learning methods (Finn et al., 2017; Li et al., 2017; Rusu et al., 2019). The target function is defined as $y(x) = A\sin(\omega x + b)$, where the amplitude $A$, phase $b$ and frequency $\omega$ are the parameters of the function. We follow the setup exactly as in (Li et al., 2017). We sample each parameter uniformly from the ranges $A \in [0.1, 5.0]$, $b \in [0, \pi]$ and $\omega \in [0.8, 1.2]$. We train our model with a task batch size of 4 for 60,000 iterations for the 5-, 10- and 20-shot cases, where each training task contains $K \in \{5, 10, 20\}$ training samples and 10 validation samples. We compare our method against recent few-shot learning methods including Meta-SGD (Li et al., 2017), MAML (Finn et al., 2017), EMAML, BMAML (Yoon et al., 2018) and the Neural Processes family of methods including Neural Processes (Garnelo et al., 2018b), Conditional Neural Processes (Garnelo et al., 2018a) and Attentive Neural Processes (Kim et al., 2019). We use the officially released code for these three methods1. We show the results in Table 1.

Table 2: Mean-squared error results for the alternative sinusoidal regression task (1000 tasks). Lower is better.

Method           10 shot          5 shot
EMAML            1.524 ± 0.034    2.238 ± 0.045
BMAML            1.412 ± 0.033    2.157 ± 0.049
Ours             0.918 ± 0.051    2.389 ± 0.103
Ours (Ensemble)  0.630 ± 0.035    1.857 ± 0.081

We provide two variants of our model in this experimental setup. The two models differ only in the size of the Weights Generator. For the \"small\" model, the Weights Generator consists of $B = 1$ self-attention block followed by a fully connected layer of 40 hidden units.
The self-attention block consists of three parallel weight projections of 40 dimensions followed by fully connected layers of 80 and 40 hidden units respectively. The \"large\" model consists of $B = 3$ self-attention blocks, also followed by a fully connected layer of 40 hidden units. Each self-attention block has weight projections of 64 dimensions followed by fully connected layers of 128 and 64 hidden units respectively. Both MAML and Meta-SGD use an architecture of 2 fully connected layers with 40 hidden units, which is similar to the architecture of the Basis Function Learner network, though both Meta-SGD and MAML have additional per-task optimization. The Neural Process family of methods uses an encoder architecture of 4 fully connected layers with 128 hidden units and a decoder architecture of 2 fully connected layers with 128 hidden units, which is more similar in capacity to our larger model.
1https://github.com/deepmind/neural-processes
Similarly, we also compare our method against two variants of EMAML and BMAML. The \"small\" model consists of 2 fully connected layers with 40 hidden units each, while the \"large\" model consists of 5 fully connected layers with 40 hidden units each. This is to ensure a fair comparison, as both BMAML and EMAML lack a separate network to generate weight vectors but are ensemble methods that aggregate results from $M_p$ model instances. We set the number of model instances in BMAML and EMAML to 10.
Alternative Sinusoidal Regression. We also evaluate our method on another version of the sinusoidal task, as introduced by Yoon et al. (2018). The range of $A$ remains the same, while the range of $b$ is increased to $[0, 2\pi]$ and the range of $\omega$ is increased to $[0.5, 2.0]$. An extra noise term $\epsilon$ is also added to the function $y(x)$; the noise $\epsilon$ is sampled from the distribution $\mathcal{N}(0, (0.01A)^2)$. We also fix the total number of tasks used during training to 1000, as in (Yoon et al., 2018). For this experimental setup we also include an ensemble version of our model, where we train 10 separate instances of our model on the same 1000 tasks and aggregate their results by taking the mean of the predictions. We evaluate our model for both the 10-shot and 5-shot cases and show the mean-squared error results in Table 2. For this experimental setup, we calculate the mean-squared error on 10 randomly sampled points from each of the 1000 alternative sinusoidal tasks.
Our results show that our method outperforms all recent few-shot regression methods on sinusoidal tasks." }, { "heading": "4.2 2D REGRESSION ON IMAGE DATA", "text": "We also tested our method on more challenging image data, as done in (Garnelo et al., 2018a;b; Kim et al., 2019). We use the MNIST (LeCun et al., 1998) and CelebA (Liu et al., 2015) datasets here for qualitative and quantitative comparison. Each image can be regarded as a continuous function $f : \mathbb{R}^2 \to \mathbb{R}^{d_y}$, where $d_y = 1$ if the image is gray-scale or $d_y = 3$ if it is RGB. The input $x \in \mathbb{R}^2$ to $f$ is the normalized coordinates of pixels and the output $y \in \mathbb{R}^{d_y}$ is the normalized pixel value. The size of the images is 28 × 28 in MNIST; CelebA images are rescaled to 32 × 32. During meta-training, we randomly sample $K$ points from the 784 (1024) pixels in one image as $D_{train}$ and another $K$ points as $D_{val}$ to form a regression task. In the meta-testing stage, the MSE is evaluated on the remaining $784 (1024) - K$ pixels.
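A minimal sketch of how one such image-completion task could be formed from a single image (our illustration; array names and the normalization are assumptions):

```python
import numpy as np

def make_image_task(image, k, rng):
    """Turn one image into a K-shot 2D regression task: inputs are
    normalized (row, col) coordinates, outputs are normalized pixel values."""
    h, w = image.shape[:2]
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"),
                      axis=-1).reshape(-1, 2)
    idx = rng.permutation(h * w)
    x = coords / np.array([h - 1, w - 1])          # normalize to [0, 1]
    y = image.reshape(h * w, -1) / 255.0
    d_train = (x[idx[:k]], y[idx[:k]])             # K observed pixels
    d_val = (x[idx[k:2 * k]], y[idx[k:2 * k]])     # K more for meta-training
    return d_train, d_val

rng = np.random.default_rng(0)
task = make_image_task(rng.integers(0, 256, size=(28, 28, 1)), k=50, rng=rng)
```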
60,000 (162,770) images are used for meta-training and 10,000 for meta-testing for the MNIST (CelebA) dataset.
We compare our method with the NP family: CNP (Garnelo et al., 2018a), NP (Garnelo et al., 2018b) and ANP (Kim et al., 2019), for $K = 50$ and $K = 100$. A deeper network structure is adopted due to the complexity of regression on image data. Namely, we use 5 fully connected layers with 128 hidden units in the Basis Function Learner and 3 attention blocks in the Weights Generator for our method. The encoders and decoders in the NP family are all MLPs with 4 fully connected layers of 128 hidden units. Thus, the comparison is fair in terms of network capacity. All models are trained for 500 epochs with batch size 80. The MSE on 10,000 tasks from the meta-testing set is reported with a 95% confidence interval in Table 3. The top results are highlighted. It can be observed that our method outperforms two of the three NP methods and achieves an MSE very close to the most recent one, ANP. The outputs of regression on CelebA image data are high-dimensional predictions, which demonstrates the effectiveness of our method on such challenging tasks. Note that ANP significantly improves upon NP and CNP using cross-attention, which could potentially be applied to our method as well.
Figure 2 shows the qualitative results for testing images. Red pixels in the images denote points in $D_{train}$. The comparison with methods from the NP family is shown in Figure 2a. The regression by our method is clearly better than NP and CNP in the 50-shot and 100-shot settings, which visually validates the quantitative results above. The qualitative results on CelebA can be found in Figure 7 in the Appendix." }, { "heading": "4.3 ANALYSIS ON BASIS FUNCTIONS", "text": "In this subsection we provide some deeper analysis on the basis functions learned by our method. In particular, we provide further evidence for our claim that our method learns a set of sparsifying basis functions corresponding to the regression tasks we would like to model. To demonstrate the sparsity of the basis functions, we take only the $S$ largest weights in terms of $|w|$ and their corresponding basis functions, and illustrate the predicted regression function using the combination of only these $S$ weights and basis functions. We conduct this experiment on both the sinusoidal regression task and the more difficult image completion task and show these $S$-weights predictions in Figures 3 and 2b respectively.
The figures illustrate that our method is able to produce a good prediction of the regression function with only a fraction of the full set of learned basis functions (40 for the sinusoidal task, 128 for the MNIST image completion task). This demonstrates the sparsity of $\Phi(x)$, as most of the prediction is carried out by just a small number of basis functions. This also demonstrates that our method is able to force most of the information of $F(x)$ to be contained in a few terms. Therefore, using $K$ samples to estimate the weights of these few important terms can achieve a good approximation of $F(x)$." }, { "heading": "4.4 ABLATION STUDIES", "text": "In this subsection we detail some ablation studies to test the validity of certain design choices of our model. In particular, we focus on the effects of adding self-attention operations in the Weights Generator and of using different penalty terms in our loss function.
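For reference in the ablation below, here is a minimal sketch of the scaled dot-product self-attention operation used inside each block of the Weights Generator (cf. Equation 6 in Appendix C); the projection setup and dimensions are illustrative assumptions:

```python
import numpy as np

def self_attention(h, wq, wk, wv):
    """Scaled dot-product attention: Att(Q,K,V) = softmax(QK^T/sqrt(d_k))V."""
    q, k, v = h @ wq, h @ wk, h @ wv          # three parallel weight projections
    scores = q @ k.T / np.sqrt(k.shape[-1])
    att = np.exp(scores - scores.max(axis=-1, keepdims=True))
    att /= att.sum(axis=-1, keepdims=True)    # row-wise softmax
    return att @ v

rng = np.random.default_rng(0)
h = rng.normal(size=(10, 40))                 # K = 10 support embeddings
wq, wk, wv = (rng.normal(size=(40, 40)) for _ in range(3))
print(self_attention(h, wq, wk, wv).shape)    # (10, 40)
```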
In particular we focus on the effects of the addition of self-attention operations in the Weights Generator and also the effects of using different penalty terms on our loss function.\nTo test out the effects of adding the selfattention operations to our model, we conduct a simple experiment where we replace the self attention operations in the self-attention block with just a single fully connected layer of equal dimensions as the self-attention weight projection. Essentially, this reduces the Weights Generator to be just a series of fully connected layers\nwith residual connections and layer normalization. We compare the simpler model performance on the sinusoidal regression task as specified in Table 1 with our original model and show the results in Table 4. The results show that adding the self-attention operations do improve our methods performance on the 1D sinusoidal regression task.\nWe also conducted experiments to test the effects of the different penalty terms on the the generated weights vector. In this ablation study, we compared our models trained using different variants of the\nloss function we presented in Equation 5. Similar to the previous study, we evaluate them on their performance on the sinusoidal regression task as specified in Table 1. The variants we tested out are: (i) Loss function with only the L1-norm penalty term ; (ii) Loss function with only the L2-norm penalty term (iii) Loss function with both L1 and L2-norm penalty terms. To demonstrate the sparsity of the weights vectors of each variant, we also show the a breakdown of the magnitude of the learned weight vectors over 100 sinusoidal tasks. We group the weight vectors into three groups : |w| less than 0.02 to indicate weights that are near zero, |w| between 0.02 and 1 and weights with magnitude more than 1. We show the results of the different variants in Table 5. We also present histograms of the magnitude of the learned weight vectors in Figure 4\nThe results do show that the combination of both L1 and L2 penalty terms do ultimately give the best performance for the sinusoidal regression task. In terms of sparsity, the model trained with only the L1 loss term do gives the highest percentage of sparse weights though we found the model with both L1 and L2 terms do give a better performance while still maintaining a relatively high percentage of near zero weights.\n5 CONCLUSION\nWe propose a few-shot meta learning system that focuses exclusively on regression tasks. Our model is based on the idea of linear representation of basis functions. We design a Basis Function Learner network to encode the basis functions for the entire task distribution. We also design a Weight generator network to generate the weights from the K training samples of a novel task drawn from the same task distribution. We show that our model has competitive performance in in various few short regression tasks." }, { "heading": "B ADDITIONAL ANALYSIS ON LEANED BASIS FUNCTIONS", "text": "Adding on to the experiments in Section 4.3, we also illustrate what happens when do the exact opposite. We take the prediction using the full set of weight vectors/basis function and study the effect of the prediction when we remove certain basis function from the prediction. Similar to the previous experiment, we remove the basis function by order of magnitude starting with the basis function with the largest corresponding |w|. 
, { "heading": "B ADDITIONAL ANALYSIS ON LEARNED BASIS FUNCTIONS", "text": "Adding on to the experiments in Section 4.3, we also illustrate what happens when we do the exact opposite: we take the prediction using the full set of weights/basis functions and study the effect on the prediction when we remove certain basis functions. Similar to the previous experiment, we remove basis functions in order of magnitude, starting with the basis function with the largest corresponding $|w|$. We conduct this experiment on the sinusoidal regression task and illustrate the results in Figure 6.\nThis study likewise demonstrates the importance of certain basis functions, as removing them causes the prediction to change drastically. In particular, notice that for the sinusoidal task, removing just the 4 most important basis functions results in a less accurate prediction than using only the 10 most important basis functions." }, { "heading": "C DETAILS ON THE WEIGHTS GENERATOR NETWORK ARCHITECTURE", "text": "Here we provide more details on the architecture of the Weights Generator network. As mentioned in Section 3.3, the Weights Generator network consists of a series of self-attention blocks followed by a final fully connected layer. We define a self-attention block as follows: an attention block consists of a self-attention operation on the input of the block. Following the self-attention operation, the resultant embedding is passed through two fully connected layers. A residual connection (He et al., 2016) is added from the output of the self-attention operation to the output of the second fully connected layer. Finally, the resultant embedding of the residual connection is passed through a layer normalization operation (Ba et al., 2016). Note that the input of the first self-attention block is always the input to the Weights Generator network, $(\Phi(x^k_t), y^k_t)$, whereas the inputs to subsequent attention blocks are the outputs of the previous attention block.\nFor the self-attention operation, the input is transformed into query, key and value vectors through their respective weight projections. These query, key and value vectors, $Q$, $K$ and $V$, then go through a scaled dot-product self-attention operation as introduced by Vaswani et al. (2017):\n\n$$\mathrm{Att}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V, \qquad (6)$$" }
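A minimal sketch of one such self-attention block, implementing Equation (6) together with the residual connection and layer normalization described above (layer sizes and names are illustrative, not the authors' exact code):

```python
# Sketch of one self-attention block from Appendix C.
import torch
import torch.nn as nn

class SelfAttentionBlock(nn.Module):
    def __init__(self, d_model):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)   # query projection
        self.k_proj = nn.Linear(d_model, d_model)   # key projection
        self.v_proj = nn.Linear(d_model, d_model)   # value projection
        self.ff = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                nn.Linear(d_model, d_model))  # two FC layers
        self.norm = nn.LayerNorm(d_model)
        self.d_k = d_model

    def forward(self, x):                            # x: (batch, K, d_model)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        scores = q @ k.transpose(-2, -1) / self.d_k ** 0.5   # Eq. (6)
        attended = torch.softmax(scores, dim=-1) @ v
        # residual from the attention output to the output of the second
        # FC layer, then layer normalization, as described in the text
        return self.norm(attended + self.ff(attended))
```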
, { "heading": "D ADDITIONAL REGRESSION EXPERIMENTS", "text": "1D Heat Equation. We also evaluate our method on another 1D regression task, the 1D heat equation task, defined as follows: consider a 1-dimensional rod of length L with both of its ends connected to heat sinks, i.e. the temperature of the ends will always be fixed at 0K unless a heat source is applied there. A constant point heat source is then applied to a random point s on the rod such that the point heat source always has a temperature of 1.0K. We would like to model the temperature $u(x, t)$ at each point of the rod a certain time t after applying the heat source, until the temperature reaches equilibrium throughout the rod. The temperature at each point x after time t is given by the heat equation:\n\n$$\frac{\partial u}{\partial t} = k \frac{\partial^2 u}{\partial x^2}$$\n\nFor our experiments, we set L to 5 and randomly sample K points in the range [0, 5] on the heat equation curve. We fix the total number of tasks used during training to 1000 and evaluate our model in both the 10-shot and 5-shot cases, similar to the experimental setup for the advanced sinusoidal tasks. We also compare our results to both EMAML and BMAML on this regression task, and add an ensemble version of our method for comparison. The results of our evaluation are presented in Table 6.\n2D Gaussian. We also evaluated our method on the 2D Gaussian regression tasks. For this task, we train our model to predict the probability density function of a two-dimensional Gaussian distribution. We train our model on Gaussian distribution tasks with means ranging from (−2, −2) to (2, 2) and standard deviations in the range [0.1, 2]. We fix the standard deviation to the same value in both directions. Similar to the heat equation, we use the same setup as the advanced sinusoidal task and compare our method to EMAML and BMAML. We evaluate our model in the 10, 20 and 50-shot cases. The results of our evaluation are presented in Table 7.\nQualitative results on the CelebA dataset. We provide the qualitative results on the CelebA dataset in Figure 7. We note that the RGB images are complex 2D functions. We choose them for evaluation so that we can inspect the results more directly, not to compare with image inpainting methods, as is also mentioned in (Garnelo et al., 2018a). The results in Figure 7a are consistent with Figure 2a: the regression results from our method are visually better than NP and CNP. The predictions using the first S largest weights are shown in Figure 7b. The 2D image function is usually predicted with fewer than 50 weights, which suggests that the information of the 2D function is kept in a few terms." }, { "heading": "E COMPARISON WITH DICTIONARY LEARNING", "text": "Our proposed sparse linear representation framework for few-shot regression makes the few-shot regression problem appear similar to another research problem called dictionary learning (DL), which focuses on learning dictionaries of atoms that provide efficient representations of some class of signals (Tosic & Frossard, 2011). However, the differences between DL and our problem are significant: our problems are continuous rather than discrete as in DL, and we only observe a very small percentage of samples.\nSpecifically, for a given $y \in \mathbb{R}^n$, the goal of DL is to learn the dictionary $\Phi$ (n by M) for some sparse $w$:\n\n$$y = \Phi w \qquad (7)$$\n\nIn typical DL, the entire $y$ is given. Also, $M > n$ for an overcomplete dictionary (Figure 8).\nIn few-shot regression, the goal is to predict the entire continuous function $y = F(x)$. Therefore, viewing this as the setup in (7), n is infinite. Moreover, only a few (K) samples of $y$ are given: $y^k_t = F(x^k_t)$. The locations of the given samples ($x^k_t$) are different for different $y$ (different tasks). Therefore, our problem is significantly different from, and more difficult than, DL. Typical DL algorithms solve (7) and return $\Phi$, a finite n-by-M matrix (the dictionary). In our setup, the basis matrix $\Phi$ has infinitely many entries, and $\Phi$ is encoded by the proposed Basis Function Learner network." } ]
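To make the contrast with DL concrete: with a fixed, already-learned basis, the task weights could in principle be fit from the K observed samples by regularized least squares (an illustrative sketch only; the paper instead generates w with the Weights Generator network rather than a closed-form solve):

```python
# Sketch: given a fixed learned basis Phi evaluated at the K observed
# inputs, weights for the linear model y = Phi w could be fit by
# L2-regularized least squares. This is for intuition about the linear
# representation only, not the paper's actual weight-generation mechanism.
import numpy as np

def fit_weights(phi_k, y_k, lam=1e-3):
    """phi_k: (K, M) basis functions at the K observed points,
    y_k: (K,) observed targets, lam: L2 regularization strength."""
    M = phi_k.shape[1]
    return np.linalg.solve(phi_k.T @ phi_k + lam * np.eye(M), phi_k.T @ y_k)

# Unlike dictionary learning, only K << n samples of the function are
# observed, and their locations x_t^k differ from task to task.
```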
2,019
null
SP:e1726e0e4f65ec99676002eca4ad9cdf27f60b56
[ "This paper presents a clear approach to improve the exploration strategy in reinforcement learning, which is named clustered reinforcement learning. The approach tries to push the agent to explore more states with high novelty and quality. It is done by adding a bonus reward shown in Eq. (3) to the reward function. The author first cluster states into clusters using the k-means algorithm. The bonus reward will return a high value for a state if the corresponding cluster has a high average reward. When the total reward in a cluster is smaller than a certain threshold, the bonus reward will consider the number of states explored. In the experiments, the authors test different models on two MuJoCo tasks and five Atari games. TRPO, TRPO-Hash, VIME are selected as baselines to compare with. Results show that the proposed bonus reward reaches faster convergence and the highest return in both MuJoCo tasks. In those five Atari games, the proposed method achieves the highest or second-highest average returns.", "This paper proposed a clustering based algorithm to improve the exploration performance in reinforcement learning. Similar to the count based approaches, the novelty of a new state was computed based on the statistics of the corresponding clusters. This exploration bonus was then combined with the TRPO algorithm to obtain the policy. The experimental results showed some improvement, compare with its competitors." ]
Exploration strategy design is one of the challenging problems in reinforcement learning (RL), especially when the environment contains a large state space or sparse rewards. During exploration, the agent tries to discover novel areas or high reward (quality) areas. In most existing methods, the novelty and quality in the neighboring area of the current state are not well utilized to guide the exploration of the agent. To tackle this problem, we propose a novel RL framework, called clustered reinforcement learning (CRL), for efficient exploration in RL. CRL adopts clustering to divide the collected states into several clusters, based on which a bonus reward reflecting both novelty and quality in the neighboring area (cluster) of the current state is given to the agent. Experiments on several continuous control tasks and several Atari-2600 games show that CRL can outperform other state-of-the-art methods to achieve the best performance in most cases.
[]
[ { "authors": [ "David Abel", "Alekh Agarwal", "Fernando Diaz", "Akshay Krishnamurthy", "Robert E. Schapire" ], "title": "Exploratory gradient boosting for reinforcement learning in complex", "venue": "domains. CoRR,", "year": 2016 }, { "authors": [ "Alexandr Andoni", "Piotr Indyk" ], "title": "Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions", "venue": "In FOCS, pp", "year": 2006 }, { "authors": [ "Peter Auer", "Ronald Ortner" ], "title": "Logarithmic online regret bounds for undiscounted reinforcement learning", "venue": "In NeurIPS, pp", "year": 2006 }, { "authors": [ "Marc G. Bellemare", "Yavar Naddaf", "Joel Veness", "Michael Bowling" ], "title": "The arcade learning environment: An evaluation platform for general agents", "venue": "JAIR, 47:253–279,", "year": 2013 }, { "authors": [ "Marc G. Bellemare", "Sriram Srinivasan", "Georg Ostrovski", "Tom Schaul", "David Saxton", "Rémi Munos" ], "title": "Unifying count-based exploration and intrinsic motivation", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "Ronen I. Brafman", "Moshe Tennenholtz" ], "title": "R-MAX - A general polynomial time algorithm for near-optimal reinforcement learning", "venue": "JMLR, 3:213–231,", "year": 2002 }, { "authors": [ "Yuri Burda", "Harrison Edwards", "Amos J. Storkey", "Oleg Klimov" ], "title": "Exploration by random network distillation", "venue": null, "year": 2018 }, { "authors": [ "Moses Charikar" ], "title": "Similarity estimation techniques from rounding algorithms", "venue": "In STOC, pp", "year": 2002 }, { "authors": [ "Adam Coates", "Andrew Y. Ng" ], "title": "Learning feature representations with k-means", "venue": "In Neural Networks: Tricks of the Trade - Second Edition,", "year": 2012 }, { "authors": [ "Yan Duan", "Xi Chen", "Rein Houthooft", "John Schulman", "Pieter Abbeel" ], "title": "Benchmarking deep reinforcement learning for continuous control", "venue": "In ICML,", "year": 2016 }, { "authors": [ "Justin Fu", "John D. Co-Reyes", "Sergey Levine" ], "title": "EX2: exploration with exemplar models for deep reinforcement learning", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "Rein Houthooft", "Xi Chen", "Yan Duan", "John Schulman", "Filip De Turck", "Pieter Abbeel" ], "title": "VIME: variational information maximizing exploration", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "Michael J. Kearns", "Satinder P. Singh" ], "title": "Near-optimal reinforcement learning in polynomial time", "venue": "Machine Learning,", "year": 2002 }, { "authors": [ "Alexander S. Klyubin", "Daniel Polani", "Chrystopher L. Nehaniv" ], "title": "Empowerment: a universal agent-centric measure of control", "venue": "In CEC,", "year": 2005 }, { "authors": [ "Timothy P. Lillicrap", "Jonathan J. Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A. Rusu", "Joel Veness", "Marc G. Bellemare", "Alex Graves", "Martin A. 
Riedmiller", "Andreas Fidjeland", "Georg Ostrovski", "Stig Petersen", "Charles Beattie", "Amir Sadik", "Ioannis Antonoglou", "Helen King", "Dharshan Kumaran", "Daan Wierstra", "Shane Legg", "Demis Hassabis" ], "title": "Human-level control through deep reinforcement learning", "venue": "Nature, 518(7540):529–533,", "year": 2015 }, { "authors": [ "Dirk Ormoneit", "Saunak Sen" ], "title": "Kernel-based reinforcement learning", "venue": "Machine Learning,", "year": 2002 }, { "authors": [ "Ronald Ortner" ], "title": "Adaptive aggregation for reinforcement learning in average reward markov decision processes", "venue": "Annals OR,", "year": 2013 }, { "authors": [ "Georg Ostrovski", "Marc G. Bellemare", "Aäron van den Oord", "Rémi Munos" ], "title": "Count-based exploration with neural density models", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Deepak Pathak", "Pulkit Agrawal", "Alexei A. Efros", "Trevor Darrell" ], "title": "Curiosity-driven exploration by self-supervised prediction", "venue": "In ICML,", "year": 2017 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael I. Jordan", "Philipp Moritz" ], "title": "Trust region policy optimization", "venue": "In ICML, pp", "year": 2015 }, { "authors": [ "David Silver", "Aja Huang", "Chris J. Maddison", "Arthur Guez", "Laurent Sifre", "George van den Driessche", "Julian Schrittwieser", "Ioannis Antonoglou", "Vedavyas Panneershelvam", "Marc Lanctot", "Sander Dieleman", "Dominik Grewe", "John Nham", "Nal Kalchbrenner", "Ilya Sutskever", "Timothy P. Lillicrap", "Madeleine Leach", "Koray Kavukcuoglu", "Thore Graepel", "Demis Hassabis" ], "title": "Mastering the game of go with deep neural networks and tree", "venue": "search. Nature,", "year": 2016 }, { "authors": [ "Satinder P. Singh", "Tommi S. Jaakkola", "Michael I. Jordan" ], "title": "Reinforcement learning with soft state aggregation", "venue": "In NeurIPS, pp", "year": 1994 }, { "authors": [ "Bradly C. Stadie", "Sergey Levine", "Pieter Abbeel" ], "title": "Incentivizing exploration in reinforcement learning with deep predictive models", "venue": "CoRR, abs/1507.00814,", "year": 2015 }, { "authors": [ "Alexander L. Strehl", "Michael L. Littman" ], "title": "An analysis of model-based interval estimation for markov decision processes", "venue": "JCSS, 74(8):1309–1331,", "year": 2008 }, { "authors": [ "Richard S. Sutton", "Andrew G. Barto" ], "title": "Reinforcement learning - an introduction. Adaptive computation and machine learning", "venue": null, "year": 1998 }, { "authors": [ "Haoran Tang", "Rein Houthooft", "Davis Foote", "Adam Stooke", "Xi Chen", "Yan Duan", "John Schulman", "Filip De Turck", "Pieter Abbeel" ], "title": "exploration: A study of count-based exploration for deep reinforcement learning", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "Hado van Hasselt", "Arthur Guez", "David Silver" ], "title": "Deep reinforcement learning with double qlearning", "venue": "In AAAI,", "year": 2016 }, { "authors": [ "Ziyu Wang", "Tom Schaul", "Matteo Hessel", "Hado van Hasselt", "Marc Lanctot", "Nando de Freitas" ], "title": "Dueling network architectures for deep reinforcement learning", "venue": "In ICML,", "year": 2016 }, { "authors": [ "Marvin Zhang", "Zoe McCarthy", "Chelsea Finn", "Sergey Levine", "Pieter Abbeel" ], "title": "Learning deep neural network policies with continuous memory states", "venue": "In ICRA,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Reinforcement learning (RL) (Sutton & Barto, 1998) studies how an agent can maximize its cumulative reward in an unknown environment, by learning through exploration and exploitation. A key challenge in RL is to balance the relationship between exploration and exploitation. If the agent explores novel states excessively, it might never find rewards to guide the learning direction. Otherwise, if the agent exploits rewards too intensely, it might converge to suboptimal behaviors and have fewer opportunities to discover more rewards from exploration.\nAlthough reinforcement learning, especially deep RL (DRL), has recently attracted much attention and achieved significant performance in a variety of applications, such as game playing (Mnih et al., 2015; Silver et al., 2016) and robot navigation (Zhang et al., 2016), exploration techniques in RL are far from satisfactory in many cases. Exploration strategy design is still one of the challenging problems in RL, especially when the environment contains a large state space or sparse rewards. Hence, it has become a hot research topic to design exploration strategy, and many exploration methods have been proposed in recent years.\nSome heuristic methods for exploration, such as -greedy (Silver et al., 2016; Sutton & Barto, 1998), uniform sampling (Mnih et al., 2015) and i.i.d./correlated Gaussian noise (Lillicrap et al., 2016; Schulman et al., 2015), try to directly obtain more different experiences during exploration. For hard applications or games, these heuristic methods are insufficient enough and the agent needs exploration techniques that can incorporate meaningful information about the environment.\nIn recent years, some exploration strategies try to discover novel state areas for exploring. The direct way to measure novelty is to count the visited experiences. In (Bellemare et al., 2016; Ostrovski et al., 2017), pseudo-counts are estimated from a density model. Hash-based method (Tang et al., 2017) counts the hash codes of states. There also exist some work using the counts of state-action pairs to design their exploration techniques, such as explicit explore or exploit (E3) (Kearns & Singh, 2002), R-Max Brafman & Tennenholtz (2002), UCRL (Auer & Ortner, 2006), UCAGG (Ortner, 2013). Besides, the state novelty can also be measured by empowerment (Klyubin et al., 2005), the agent’s belief of environment dynamics (Houthooft et al., 2016), prediction error of the system dynamics model (Pathak et al., 2017; Stadie et al., 2015), prediction by exemplar model (Fu et al., 2017), and the error of predicting features of states (Burda et al., 2018). All the above methods perform exploration mainly based on the novelty of states without considering the quality of states. Furthermore, there are some methods to estimate the quality of states. Kernel-based reinforcement\nlearning (Ormoneit & Sen, 2002) uses locally weighted averaging to estimate the quality (value) of states. UCRL (Auer & Ortner, 2006) and UCAGG (Ortner, 2013) compute average rewards for choosing optimistic values. The average reward can be regarded as an estimation of the quality of states to guide the exploring direction, but there are no methods using the quality of states as an exploration technique. 
Furthermore, in most existing methods, the novelty and quality in the neighboring area of the current state are not well utilized to guide the exploration of the agent.\nTo tackle this problem, we propose a novel RL framework, called clustered reinforcement learning (CRL), for efficient exploration in RL. The contributions of CRL are briefly outlined as follows:\n\n• CRL adopts clustering to divide the collected states into several clusters. The states from the same cluster have similar features. Hence, the clustered results in CRL provide a possibility to share meaningful information among different states from the same cluster.\n\n• CRL proposes a novel bonus reward, which reflects both novelty and quality in the neighboring area of the current state. Here, the neighboring area is defined by the states which share the same cluster with the current state. This bonus reward can guide the agent to perform efficient exploration, by seamlessly integrating the novelty and quality of states.\n\n• Experiments on several continuous control tasks with sparse rewards and several hard-exploration Atari-2600 games (Bellemare et al., 2013) show that CRL can outperform other state-of-the-art methods to achieve the best performance in most cases. In particular, on several games known to be hard for heuristic exploration strategies, CRL achieves significant improvement over baselines." }, { "heading": "2 RELATED WORK", "text": "Recently, several exploration strategies have been used to discover novel state areas. The direct way to measure the novelty of states is to count visited experiences, which has been applied in several methods. In the tabular setting and finite Markov decision processes (MDPs), the number of state-action pairs is finite and can be counted directly, as in model-based interval estimation with exploratory bonus (MBIE-EB) (Strehl & Littman, 2008), explicit explore or exploit (E3) (Kearns & Singh, 2002) and R-Max (Brafman & Tennenholtz, 2002). MBIE-EB adds the reciprocal of the square root of the counts of state-action pairs as a bonus reward to the augmented Bellman equation in order to explore less visited pairs, with a theoretical guarantee. E3 determines the action based on the counts of state-action pairs: if the state has never been visited, the action is chosen randomly, and if the state has been visited a number of times, the agent takes the action that has been tried the fewest times before. R-Max uses counts of states as a way to check for known states.\nIn continuous and high-dimensional spaces, the number of states is too large to be counted directly (Bellemare et al., 2016; Ostrovski et al., 2017; Tang et al., 2017; Abel et al., 2016). Bellemare et al. (2016) and Ostrovski et al. (2017) use a density model to estimate a state pseudo-count quantity, which is used to design the exploration bonus reward. Tang et al. (2017) counts the number of states by using a hash function to encode states and then explores by using the reciprocal of visit counts as a reward bonus, which performs well on some hard-exploration Atari-2600 games. Abel et al. (2016) records the number of cluster-center and action pairs and makes use of it to select an action from the Gibbs distribution. These count-based methods encourage the agent to explore by making use of the novelty of states and do not take quality into consideration.\nFurthermore, there are some methods to estimate the quality of states. 
Average reward, as used in kernel-based reinforcement learning (Ormoneit & Sen, 2002), UCRL (Auer & Ortner, 2006) and UCAGG (Ortner, 2013), can be regarded as the quality of states. Kernel-based reinforcement learning (Ormoneit & Sen, 2002) is proposed to solve the stability problem of TD-learning by using locally weighted averaging to estimate the value of a state. UCRL (Auer & Ortner, 2006) and UCAGG (Ortner, 2013) use the average reward to choose optimistic values. Besides, the value of the cluster space can also indicate the quality of states. Singh et al. (1994) uses the value of the cluster space with Q-learning and TD(0) by soft state aggregation and provides convergence results. But these methods do not use the quality of states to explore more areas.\nTo the best of our knowledge, the novelty and quality in the neighboring area of the current state have not been well utilized to guide the exploration of the agent in existing methods, especially in high-dimensional state spaces. This motivates the work of this paper." }, { "heading": "3 NOTATION", "text": "In this paper, we adopt similar notations to those in (Tang et al., 2017). More specifically, we model the RL problem as a finite-horizon discounted Markov decision process (MDP), which can be defined by a tuple $(S, A, P, r, \rho_0, \gamma, T)$. Here, $S \subseteq \mathbb{R}^d$ denotes the state space, $A \subseteq \mathbb{R}^m$ denotes the action space, $P: S \times A \times S \to \mathbb{R}$ denotes a transition probability distribution, $r: S \times A \to \mathbb{R}$ denotes a reward function, $\rho_0$ is an initial state distribution, $\gamma \in (0, 1]$ is a discount factor, and $T$ denotes the horizon time. In this paper, we assume $r \geq 0$; cases with negative rewards can be transformed to cases without negative rewards. The goal of RL is to maximize $\mathbb{E}_{\pi, P}\left[\sum_{t=0}^{T} \gamma^t r(s_t, a_t)\right]$, the total expected discounted reward over a policy $\pi$." }, { "heading": "4 CLUSTERED REINFORCEMENT LEARNING", "text": "This section presents the details of our proposed RL framework, called clustered reinforcement learning (CRL). The key idea of CRL is to adopt clustering to divide the collected states into several clusters, and then design a novel cluster-based bonus reward for exploration." }, { "heading": "4.1 CLUSTERING", "text": "Intuitively, both novelty and quality are useful for exploration strategy design. If the agent only cares about novelty, it might explore intensively in some unexplored areas without any reward. If the agent only cares about quality, it might converge to suboptimal behaviors and have a low opportunity to discover unexplored areas with higher rewards. Hence, it is better to integrate both novelty and quality into the same exploration strategy.\nWe find that clustering provides the possibility to integrate both novelty and quality. Intuitively, a cluster of states can be treated as an area. The number of collected states in a cluster reflects the count (novelty) information of that area. The average reward of the collected states in a cluster reflects the quality of that area. Hence, based on the clustering results, we can design an exploration strategy considering both novelty and quality. Furthermore, the states from the same cluster have similar features, so the clustering results provide a possibility to share meaningful information among different states from the same cluster. The details of exploration strategy design based on clustering are left to the following subsection; here, we only describe the clustering algorithm.\nIn CRL, we perform clustering on states. 
Assume the number of clusters is $K$, and we have collected $N$ state-action samples $\{(s_i, a_i, r_i)\}_{i=1}^{N}$ with some policy. We need to cluster the collected states $\{s_i\}_{i=1}^{N}$ into $K$ clusters by using some clustering algorithm $f: S \to C$, where $C = \{C_i\}_{i=1}^{K}$ and $C_i$ is the center of the $i$-th cluster. We can use any clustering algorithm in the CRL framework. Although more sophisticated clustering algorithms might achieve better performance, in this paper we simply choose the k-means algorithm (Coates & Ng, 2012). K-means is one of the simplest clustering algorithms and has wide applications. The details of k-means are omitted here; readers can find them in most machine learning textbooks." }, { "heading": "4.2 CLUSTERING-BASED BONUS REWARD", "text": "As stated above, clustering provides the possibility of integrating both novelty and quality for exploration. Here, we propose a novel clustering-based bonus reward, based on which many policy updating algorithms can be adopted to obtain an exploration strategy considering both novelty and quality.\nGiven a state $s_i$, it is allocated to the nearest cluster by the cluster assignment function $\phi(s_i) = \arg\min_k \|s_i - C_k\|$. Here, $1 \leq k \leq K$ and $\|s_i - C_k\|$ denotes the distance between $s_i$ and the $k$-th cluster center $C_k$. The sum of rewards in the $k$-th cluster is denoted as $R_k$, which can be computed as follows:\n\n$$R_k = \sum_{i=1}^{N} r_i \, \mathbb{I}(\phi(s_i) = k), \qquad (1)$$\n\nwhere $\mathbb{I}(\cdot)$ is an indicator function. $R_k$ is also called the cluster reward of cluster $k$ in this paper. The number of states in the $k$-th cluster is denoted as $N_k$, which can be computed as follows:\n\n$$N_k = \sum_{i=1}^{N} \mathbb{I}(\phi(s_i) = k). \qquad (2)$$\n\nIntuitively, a larger $N_k$ typically means that the area corresponding to cluster $k$ has had more visits (exploration), which implies that the novelty of this area is lower. Hence, the bonus reward should be inversely proportional to $N_k$. The average reward of cluster $k$, denoted as $R_k / N_k$, can be used to represent the quality of the corresponding area of cluster $k$. Hence, the bonus reward should be proportional to $R_k / N_k$.\nWith the above intuition, we propose a clustering-based bonus reward $b: S \to \mathbb{R}$ to integrate both the novelty and quality of the neighboring area of the current state $s$, defined as follows:\n\n$$b(s) = \beta \, \frac{\max(\eta, R_{\phi(s)})}{N_{\phi(s)}}, \qquad (3)$$\n\nwhere $\beta \in \mathbb{R}^{+}$ is the bonus coefficient and $\eta \in \mathbb{R}^{+}$ is the count (novelty) coefficient. Typically, $\eta$ is set to a small number relative to a true reward (in our experiments, the true rewards are either zero or positive integers).\nIn general, as long as there exist one or two states with positive rewards in cluster $\phi(s)$, $R_{\phi(s)}$ will be larger than $\eta$. Hence, if $b(s) = \beta\eta / N_{\phi(s)}$, it is highly likely that all states in cluster $\phi(s)$ have zero reward. Thus, when $R_{\phi(s)} = 0$, which means no rewards have been obtained for cluster $\phi(s)$, the bonus reward is determined by the count of the cluster. From Equation (3), a larger $N_{\phi(s)}$ results in a smaller bonus reward $b(s)$. This guides the agent to explore novel areas corresponding to clusters with fewer visits (exploration), which is reasonable. For two clusters with the same cluster reward, the cluster with the smaller number of states (higher novelty) is more likely to be explored, which is reasonable. For two clusters with the same number of states, the cluster with the higher cluster reward (higher quality) is more likely to be explored, which is also reasonable.\nHence, the clustering-based bonus reward function defined in Equation (3) is intuitively reasonable, and it seamlessly integrates both novelty and quality into the same bonus function. 
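Before moving on, here is a minimal sketch of Equations (1)-(3) (illustrative code, not the authors' implementation; k-means is taken from scikit-learn for brevity, and the β and η defaults are placeholders):

```python
# Sketch of Eqs. (1)-(3): cluster the collected states with k-means,
# then compute per-cluster reward sums R_k, counts N_k, and the bonus b(s).
import numpy as np
from sklearn.cluster import KMeans

def crl_bonus(states, rewards, K=20, beta=0.1, eta=0.01):
    """states: (N, d) collected states, rewards: (N,) their rewards.
    Returns the per-sample bonus b(s_i) for every collected state."""
    km = KMeans(n_clusters=K).fit(states)
    assign = km.labels_                                       # phi(s_i)
    R = np.bincount(assign, weights=rewards, minlength=K)     # Eq. (1)
    Ncnt = np.bincount(assign, minlength=K)                   # Eq. (2)
    return beta * np.maximum(eta, R[assign]) / Ncnt[assign]   # Eq. (3)
```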
Finally, the agent adopts $\{(s_i, a_i, r_i + b_i)\}_{i=1}^{N}$ to update the policy (perform exploration). Many policy updating algorithms, such as trust region policy optimization (TRPO) (Schulman et al., 2015), can be adopted.\nAlgorithm 1 briefly presents the learning framework of CRL. We can see that CRL is actually a general framework, and we can obtain different RL variants by choosing different clustering algorithms and different policy updating algorithms. Please note that $r_i + b_i$ is only used for training in Algorithm 1; the performance evaluation (test) is measured without $b_i$, so it can be directly compared with existing RL methods that have no extra bonus reward.\nAlgorithm 1 Framework of Clustered Reinforcement Learning (CRL)\nInitialize the number of clusters K, bonus coefficient β, count coefficient η\nfor iteration j = 1, . . . , J do" }, { "heading": "5 EXPERIMENTS", "text": "We use several continuous control tasks and several Atari-2600 games to evaluate CRL and the baselines. We want to investigate and answer the following research questions:\n\n• Is count-based exploration sufficient to encourage the agent to achieve the final goal of the tasks?\n\n• Can CRL improve performance significantly across different tasks compared with other methods?" }, { "heading": "5.1 EXPERIMENTAL SETUP", "text": "" }, { "heading": "5.1.1 ENVIRONMENTS", "text": "MuJoCo. The rllab benchmark (Duan et al., 2016) consists of various continuous control tasks to test RL algorithms. We select MountainCar and CartpoleSwingup to compare our methods with other baselines. The experimental setups of MountainCar and CartpoleSwingup using sparse rewards can be found in Houthooft et al. (2016). In MountainCar, $S \subseteq \mathbb{R}^3$, $A \subseteq \mathbb{R}^1$. The agent receives a reward of +1 when the car escapes the valley from the right side; otherwise the agent receives a reward of 0. In CartpoleSwingup, $S \subseteq \mathbb{R}^4$, $A \subseteq \mathbb{R}^1$. The agent receives a reward of +1 when the cosine of the pole angle is larger than 0.8; at other positions the agent receives a zero reward. Figure 1 shows one snapshot for each task.\nArcade Learning Environment. The Arcade Learning Environment (ALE) (Bellemare et al., 2013) is a commonly used benchmark for RL algorithms because of its high-dimensional state space and wide variety of video games. We select a subset of Atari games: Freeway, Frostbite, Gravitar, Solaris and Venture. (The Montezuma game evaluated in Tang et al. (2017) is not adopted in this paper for evaluation, because this paper only uses raw pixels, which are not enough for learning an effective policy on the Montezuma game for most methods, including CRL and the other baselines. We could use advanced features to learn an effective policy, but this is not the focus of this paper.) Figure 2 shows a snapshot for each game. For example, in Freeway, the agent needs to avoid the traffic, cross the road and collect the reward. These games are classified into the hard-exploration category, according to the taxonomy in (Bellemare et al., 2016)." }, { "heading": "5.1.2 BASELINES", "text": "CRL is a general framework which can adopt many different policy updating (optimization) algorithms to obtain different variants. In this paper, we only adopt trust region policy optimization (TRPO) (Schulman et al., 2015) as the policy updating algorithm for CRL, and leave other variants of CRL for future work. We denote our method as CRLTRPO in the following content. The baselines for comparison include TRPO and TRPO-Hash (Tang et al., 2017). For the continuous control problems, we additionally choose VIME as a baseline.\nTRPO (Schulman et al., 2015) is a classic policy gradient method, which uses a trust region to guarantee stable improvement of the policy and can handle both discrete and continuous action spaces. Furthermore, this method is not too sensitive to hyper-parameters. TRPO adopts Gaussian control noise as a heuristic exploration strategy.\nTRPO-Hash (Tang et al., 2017) is a hash-based method, which is a generalization of classic count-based methods to high-dimensional and continuous state spaces. The main idea is to use locality-sensitive hashing (LSH) (Andoni & Indyk, 2006) to encode continuous and high-dimensional data into hash codes, like $\{-1, 1\}^h$, where $h$ is the length of the hash codes. TRPO-Hash has several variants in (Tang et al., 2017). For a fair comparison, we choose SimHash (Charikar, 2002) as the hash function and pixels as inputs for TRPO-Hash in this paper, because our CRL also adopts pixels rather than advanced features as inputs. TRPO-Hash is trained using the code provided by its authors.\nVIME (Houthooft et al., 2016) is a curiosity-driven exploration strategy, which seeks out unexplored state-action regions by maximizing the information gain of the agent's belief of the environment. VIME is also trained using the code provided by its authors. We select VIME for comparison only on the continuous control problems because this method only supports continuous state and action spaces." }, { "heading": "5.2 PERFORMANCE ON MUJOCO", "text": "Figure 3 shows the results of TRPO, TRPO-Hash, VIME and CRLTRPO on MountainCar and CartpoleSwingup. We find that our CRLTRPO achieves the best performance on both MountainCar and CartpoleSwingup. In MountainCar, our method is the first to reach the goal state and master a good policy. Our method outperforms all other methods by a large margin. The goal of TRPO-Hash is to help the agent explore more novel states, but TRPO-Hash might go through all states before reaching the goal state, which is the disadvantage of count (novelty) based exploration. We find that at the end of training, TRPO-Hash fails to achieve the goal that our method and VIME have achieved. The reason why TRPO-Hash fails is that the novelty of states diverts the agent's attention; in the worst case, the agent collects all states until it finds the goal. This disadvantage of count-based methods might become more serious in high-dimensional state spaces, since it is impossible to go through all states there. Therefore, strategies with only count-based exploration are insufficient." }, { "heading": "5.3 PERFORMANCE ON ATARI-2600", "text": "For the Atari-2600 video games, we compare CRLTRPO with the other baselines. The agent is trained for 500 iterations in all experiments, with each iteration consisting of 0.4M frames. The agent selects an action every 4 frames, so every iteration consists of 0.1M steps (0.4M frames). The last frames of every 4 frames are used for clustering and counting. The performance is evaluated over 5 random seeds. The seeds for evaluation are the same for all methods.\nWe summarize all results in Table 1. Please note that TRPO and TRPO-Hash are trained with the code provided by the authors of TRPO-Hash. All hyper-parameters are reported in the supplementary material. We also compare our methods to double DQN (van Hasselt et al., 2016), dueling network (Wang et al., 2016), A3C+ (Bellemare et al., 2016), and double DQN with pseudo-count (Bellemare et al., 2016), the results of which are from (Tang et al., 2017). Furthermore, we show the training curves of our method, TRPO and TRPO-Hash in Figure 4.\nCRLTRPO achieves significant improvement over TRPO and TRPO-Hash on Freeway, Frostbite, Solaris and Venture. Please note that DQN-based methods reuse off-policy experience; hence, DQN-based methods perform better than TRPO without any exploration techniques in most cases. But our method can still outperform the DQN-based methods in most cases." }, { "heading": "6 CONCLUSION", "text": "In this paper, we propose a novel RL framework, called clustered reinforcement learning (CRL), for efficient exploration. By using clustering, CRL provides a general framework to exploit both the novelty and quality in the neighboring area of the current state for exploration. Experiments on several continuous control tasks and several hard-exploration Atari-2600 games show that CRL can outperform other state-of-the-art methods to achieve the best performance in most cases." }, { "heading": "A APPENDIX", "text": "A.1 HYPER-PARAMETER SETTING IN MUJOCO\nIn MuJoCo, the hyper-parameter settings of TRPO, TRPO-Hash and CRLTRPO are shown in Table 2. The hyper-parameter setting of VIME can be found in (Houthooft et al., 2016). The performance is evaluated over 5 random seeds. The seeds for evaluation are the same for all methods.\nA.2 HYPER-PARAMETER SETTING IN ATARI-2600\nThe hyper-parameter settings of TRPO, TRPO-Hash and CRLTRPO are shown in Table 3. The performance is evaluated over 5 random seeds. The seeds for evaluation are the same for all methods." } ]
2,019
null
SP:333b75014a3cde3eb10486b9b7db6eea42db3196
[ "This paper proposed a model that is capable of tracking dialogue states in a non-recursive fashion. The main techniques behind the non-recursive model is similar to that of the ICLR 2018 paper \"NON-AUTOREGRESSIVE NEURAL MACHINE TRANSLATION\". Unfortunately, as state tacking can be formulated as one special case of sequence decoding, there is not much of innovation that can be claimed in this paper considering the \"fertility\" idea was already been proposed. The paper did illustrate a strong experimental results on a recent dataset comparing with many state-of-the-art models. However, it is not clear how much innovation this work generates and how the ICLR community would benefit from the problem that the paper is addressing. ", "The authors build on recent work for non-autoregressive encoder-decoder models in the context of machine translation (most significantly [Gu, et al., ICLR18]) and adapt this to dialogue state tracking. Specifically, as in [Gu, et al, ICLR18], they use a fertility decoder modified for DST to be on a per-slot basis which is input to a second decoder to generate the (open-vocabulary) tokens representing dialogue state. An interesting aspect of the resulting model as formulated is that the latent space also takes into account interdependencies between generated slot values, which leads to a direct structured prediction-like loss of joint accuracy. Additionally, as slot values have a smaller (and likely peakier) combinatorial space, NAT models actually are more applicable to DST than MT. The resulting model achieves state-of-the-art empirical results on the MultiWOZ dataset while incurring decoding times that are an order of magnitude faster. " ]
Recent efforts in Dialogue State Tracking (DST) for task-oriented dialogues have progressed toward open-vocabulary or generation-based approaches, where the models can generate slot value candidates from the dialogue history itself. These approaches have shown good performance gains, especially in complicated dialogue domains with dynamic slot values. However, they fall short in two aspects: (1) they do not allow models to explicitly learn signals across domains and slots to detect potential dependencies among (domain, slot) pairs; and (2) existing models follow auto-regressive approaches which incur high time cost when the dialogue evolves over multiple domains and multiple turns. In this paper, we propose a novel framework of Non-Autoregressive Dialog State Tracking (NADST) which can factor in potential dependencies among domains and slots to optimize the models towards better prediction of dialogue states as a complete set rather than separate slots. In particular, the non-autoregressive nature of our method not only enables decoding in parallel to significantly reduce the latency of DST for real-time dialogue response generation, but also allows detecting dependencies among slots at the token level in addition to the slot and domain levels. Our empirical results show that our model achieves the state-of-the-art joint accuracy across all domains on the MultiWOZ 2.1 corpus, and the latency of our model is an order of magnitude lower than that of the previous state of the art as the dialogue history extends over time.
[ { "affiliations": [], "name": "Hung Le" }, { "affiliations": [], "name": "Richard Socher" }, { "affiliations": [], "name": "Steven C.H. Hoi" } ]
[ { "authors": [ "Paweł Budzianowski", "Tsung-Hsien Wen", "Bo-Hsiang Tseng", "Iñigo Casanueva", "Stefan Ultes", "Osman Ramadan", "Milica Gašić" ], "title": "MultiWOZ - a large-scale multi-domain wizard-of-Oz dataset for task-oriented dialogue modelling", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Mostafa Dehghani", "Stephan Gouws", "Oriol Vinyals", "Jakob Uszkoreit", "Lukasz Kaiser" ], "title": "Universal transformers", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Mihail Eric", "Rahul Goel", "Shachi Paul", "Abhishek Sethi", "Sanchit Agarwal", "Shuyag Gao", "Dilek Hakkani-Tur" ], "title": "Multiwoz 2.1: Multi-domain dialogue state corrections and state tracking baselines", "venue": null, "year": 1907 }, { "authors": [ "Shuyang Gao", "Abhishek Sethi", "Sanchit Aggarwal", "Tagyoung Chung", "Dilek Hakkani-Tur" ], "title": "Dialog state tracking: A neural reading comprehension approach", "venue": null, "year": 1908 }, { "authors": [ "Marjan Ghazvininejad", "Omer Levy", "Yinhan Liu", "Luke Zettlemoyer" ], "title": "Constant-time machine translation with conditional masked language models", "venue": "arXiv preprint arXiv:1904.09324,", "year": 2019 }, { "authors": [ "Xavier Glorot", "Yoshua Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "In Proceedings of the thirteenth international conference on artificial intelligence and statistics,", "year": 2010 }, { "authors": [ "Rahul Goel", "Shachi Paul", "Dilek Hakkani-Tür" ], "title": "Hyst: A hybrid approach for flexible and accurate dialogue state tracking", "venue": "Proc. Interspeech 2019,", "year": 2019 }, { "authors": [ "Alex Graves", "Santiago Fernández", "Faustino Gomez", "Jürgen Schmidhuber" ], "title": "Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks", "venue": "In Proceedings of the 23rd international conference on Machine learning,", "year": 2006 }, { "authors": [ "Jiatao Gu", "James Bradbury", "Caiming Xiong", "Victor O.K. Li", "Richard Socher" ], "title": "Non-autoregressive neural machine translation", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Matthew Henderson", "Blaise Thomson", "Jason D Williams" ], "title": "The second dialog state tracking challenge", "venue": "In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL),", "year": 2014 }, { "authors": [ "Matthew Henderson", "Blaise Thomson", "Steve Young" ], "title": "Word-based dialog state tracking with recurrent neural networks", "venue": "In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL),", "year": 2014 }, { "authors": [ "Matthew Henderson", "Blaise Thomson", "Steve J. 
Young" ], "title": "Robust dialog state tracking using delexicalised recurrent neural networks and unsupervised adaptation", "venue": "IEEE Spoken Language Technology Workshop (SLT),", "year": 2014 }, { "authors": [ "Lukasz Kaiser", "Samy Bengio", "Aurko Roy", "Ashish Vaswani", "Niki Parmar", "Jakob Uszkoreit", "Noam Shazeer" ], "title": "Fast decoding in sequence models using discrete latent variables", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Diederick P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks. In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Gakuto Kurata", "Bing Xiang", "Bowen Zhou", "Mo Yu" ], "title": "Leveraging sentence-level information with encoder LSTM for semantic slot filling", "venue": "In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "Hwaran Lee", "Jinsik Lee", "Tae yoon Kim" ], "title": "Sumbt: Slot-utterance matching for universal and scalable belief tracking", "venue": "In ACL,", "year": 2019 }, { "authors": [ "Jason Lee", "Elman Mansimov", "Kyunghyun Cho" ], "title": "Deterministic non-autoregressive neural sequence modeling by iterative refinement", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Wenqiang Lei", "Xisen Jin", "Min-Yen Kan", "Zhaochun Ren", "Xiangnan He", "Dawei Yin" ], "title": "Sequicity: Simplifying task-oriented dialogue systems with single sequence-to-sequence architectures", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2018 }, { "authors": [ "Jindřich Libovickỳ", "Jindřich Helcl" ], "title": "End-to-end non-autoregressive neural machine translation with connectionist temporal classification", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Thang Luong", "Hieu Pham", "Christopher D. Manning" ], "title": "Effective approaches to attention-based neural machine translation", "venue": "In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing,", "year": 2015 }, { "authors": [ "Nikola Mrkšić", "Diarmuid Ó Séaghdha", "Tsung-Hsien Wen", "Blaise Thomson", "Steve Young" ], "title": "Neural belief tracker: Data-driven dialogue state tracking. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", "venue": "Association for Computational Linguistics,", "year": 2017 }, { "authors": [ "Elnaz Nouri", "Ehsan Hosseini-Asl" ], "title": "Toward scalable neural dialogue state tracking model", "venue": "arXiv preprint arXiv:1812.00899,", "year": 2018 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in pytorch", "venue": null, "year": 2017 }, { "authors": [ "Osman Ramadan", "Paweł Budzianowski", "Milica Gasic" ], "title": "Large-scale multi-domain belief tracking with knowledge sharing", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "Abhinav Rastogi", "Dilek Z. Hakkani-Tür", "Larry P. Heck" ], "title": "Scalable multi-domain dialogue state tracking", "venue": "IEEE Automatic Speech Recognition and Understanding Workshop (ASRU),", "year": 2017 }, { "authors": [ "Holger Schwenk" ], "title": "Continuous space translation models for phrase-based statistical machine translation", "venue": "In Proceedings of COLING 2012: Posters,", "year": 2012 }, { "authors": [ "Abigail See", "Peter J Liu", "Christopher D Manning" ], "title": "Get to the point: Summarization with pointergenerator networks", "venue": "In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2017 }, { "authors": [ "Iulian V Serban", "Alessandro Sordoni", "Yoshua Bengio", "Aaron Courville", "Joelle Pineau" ], "title": "Building end-to-end dialogue systems using generative hierarchical neural network models", "venue": "In Thirtieth AAAI Conference on Artificial Intelligence,", "year": 2016 }, { "authors": [ "Yangyang Shi", "Kaisheng Yao", "Hu Chen", "Dong Yu", "Yi-Cheng Pan", "Mei-Yuh Hwang" ], "title": "Recurrent support vector machines for slot tagging in spoken language understanding", "venue": "In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2016 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting", "venue": "The Journal of Machine Learning Research,", "year": 1929 }, { "authors": [ "Christian Szegedy", "Vincent Vanhoucke", "Sergey Ioffe", "Jon Shlens", "Zbigniew Wojna" ], "title": "Rethinking the inception architecture for computer vision", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Ł ukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Yiren Wang", "Fei Tian", "Di He", "Tao Qin", "ChengXiang Zhai", "Tie-Yan Liu" ], "title": "Non-autoregressive machine translation with auxiliary regularization", "venue": "In The Thirty-Third AAAI Conference on Artificial Intelligence (AAAI2019),", "year": 2019 }, { "authors": [ "Tsung-Hsien Wen", "David Vandyke", "Nikola Mrkšić", "Milica Gasic", "Lina M. 
Rojas Barahona", "PeiHao Su", "Stefan Ultes", "Steve Young" ], "title": "A network-based end-to-end trainable task-oriented dialogue system", "venue": "In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers,", "year": 2017 }, { "authors": [ "Chien-Sheng Wu", "Andrea Madotto", "Ehsan Hosseini-Asl", "Caiming Xiong", "Richard Socher", "Pascale Fung" ], "title": "Transferable multi-domain state generator for task-oriented dialogue systems", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Puyang Xu", "Qi Hu" ], "title": "An end-to-end approach for handling unknown slot values in dialogue state tracking. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1448–1457", "venue": "Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "Victor Zhong", "Caiming Xiong", "Richard Socher" ], "title": "Global-locally self-attentive encoder for dialogue state tracking. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", "venue": null, "year": 2018 }, { "authors": [ "Wu" ], "title": "A APPENDIX A.1 DATASET PRE-PROCESSING We follow similar data preprocessing procedures as Budzianowski et al", "venue": "MultiWOZ", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "In task-oriented dialogues, a dialogue agent is required to assist humans for one or many tasks such as finding a restaurant and booking a hotel. As a sample dialogue shown in Table 1, each user utterance typically contains important information identified as slots related to a dialogue domain such as attraction-area and train-day. A crucial part of a task-oriented dialogue system is Dialogue State Tracking (DST), which aims to identify user goals expressed during a conversation in the form of dialogue states. A dialogue state consists of a set of (slot, value) pairs e.g. (attraction-area, centre) and (train-day, tuesday). Existing DST models can be categorized into two types: fixed- and open-vocabulary. Fixed vocabulary models assume known slot ontology and generate a score for each candidate of (slot,value) (Ramadan et al., 2018; Lee et al., 2019). Recent approaches propose open-vocabulary models that can generate the candidates, especially for slots such as entity names and time, from the dialogue history (Lei et al., 2018; Wu et al., 2019).\nMost open-vocabulary DST models rely on autoregressive encoders and decoders, which encode dialogue history sequentially and generate token ti of individual slot value one by one conditioned on all previously generated tokens t[1:i−1]. For downstream tasks of DST that emphasize on low latency (e.g. generating real-time dialogue responses), auto-regressive approaches incur expensive time cost as the ongoing dialogues become more complex. The time cost is caused by two major components: length of dialogue history i.e. number of turns, and length of slot values. For complex dialogues extended over many turns and multiple domains, the time cost will increase significantly in both encoding and decoding phases.\nSimilar problems can be seen in the field of Neural Machine Translation (NMT) research where a long piece of text is translated from one language to another. Recent work has tried to improve the\n∗All work was done while the first author was a research intern at Salesforce Research Asia.\nlatency in NMT by using neural network architectures such as convolution (Krizhevsky et al., 2012) and attention (Luong et al., 2015). Several non- and semi-autoregressive approaches aim to generate tokens of the target language independently (Gu et al., 2018; Lee et al., 2018; Kaiser et al., 2018). Motivated by this line of research, we thus propose a non-autoregressive approach to minimize the time cost of DST models without a negative impact on the model performance.\nWe adopt the concept of fertility proposed by Gu et al. (2018). Fertility denotes the number of times each input token is copied to form a sequence as the input to the decoder for non-autoregressive decoding. We first reconstruct dialogue state as a sequence of concatenated slot values. The result sequence contains the inherent structured representation in which we can apply the fertility concept. The structure is defined by the boundaries of individual slot values. These boundaries can be easily obtained from dialogue state itself by simply measuring number of the tokens of individual slots. Our model includes a two-stage decoding process: (1) the first decoder learns relevant signals from the input dialogue history and generates a fertility for each input slot representation; and (2) the predicted fertility is used to form a structured sequence which consists of multiple sub-sequences, each represented as (slot token×slot fertility). 
The resulting sequence is used as input to the second decoder to generate all the tokens of the target dialogue state at once.\nIn addition to being non-autoregressive, our models explicitly consider dependencies at both the slot level and the token level. Most existing DST models assume independence among slots in dialogue states without explicitly considering potential signals across the slots (Wu et al., 2019; Lee et al., 2019; Goel et al., 2019; Gao et al., 2019). However, we hypothesize that this assumption does not hold in many cases. For example, a good DST model should detect the relation that train departure should not have the same value as train destination (see the example in Table 1). Other cases include time-related pairs such as (taxi arriveBy, taxi leaveAt) and cross-domain pairs such as (hotel area, attraction area). Our proposed approach considers all possible signals across all domains and slots to generate a dialogue state as a set. Our approach directly optimizes towards the DST evaluation metric Joint Accuracy (Henderson et al., 2014b), which measures accuracy at the state (set of slots) level rather than the slot level.\nOur contributions in this work include: (1) we propose a novel framework of Non-Autoregressive Dialog State Tracking (NADST), which explicitly learns inter-dependencies across slots for decoding dialogue states as a complete set rather than as individual slots; (2) we propose a non-autoregressive decoding scheme, which not only enjoys low latency for real-time dialogues, but also allows capturing dependencies at the token level in addition to the slot level; (3) we achieve state-of-the-art performance on the multi-domain task-oriented dialogue dataset “MultiWOZ 2.1” (Budzianowski et al., 2018; Eric et al., 2019) while significantly reducing the inference latency by an order of magnitude; (4) we conduct extensive ablation studies in which our analysis reveals that our models can detect potential signals across slots and dialogue domains to generate more correct “sets” of slots for DST." }, { "heading": "2 RELATED WORK", "text": "Our work is related to two research areas: dialogue state tracking and non-autoregressive decoding." }, { "heading": "2.1 DIALOGUE STATE TRACKING", "text": "Dialogue State Tracking (DST) is an important component in task-oriented dialogues, especially for dialogues with complex domains that require fine-grained tracking of relevant slots. Traditionally, DST is coupled with Natural Language Understanding (NLU): NLU output, as tagged user utterances, is input to DST models to update the dialogue states turn by turn (Kurata et al., 2016; Shi et al., 2016; Rastogi et al., 2017). Recent approaches combine NLU and DST to reduce the credit assignment problem and remove the need for NLU (Mrkšić et al., 2017; Xu & Hu, 2018; Zhong et al., 2018). Within this body of research, Goel et al. (2019) differentiate two DST approaches: fixed- and open-vocabulary. Fixed-vocabulary approaches are usually retrieval-based methods in which all candidate (slot, value) pairs from a given slot ontology are considered and the models predict a probability score for each pair (Henderson et al., 2014c; Ramadan et al., 2018; Lee et al., 2019). Recent work has moved towards open-vocabulary approaches that can generate the candidates based on input text, i.e. the dialogue history (Lei et al., 2018; Gao et al., 2019; Wu et al., 2019). 
Our work is most related to these open-vocabulary models; different from most current work, however, we explicitly consider dependencies among slots and domains to decode the dialogue state as a complete set." }, { "heading": "2.2 NON-AUTOREGRESSIVE DECODING", "text": "Most prior work on non- or semi-autoregressive decoding targets NMT, where fast translation is needed. Schwenk (2012) proposes to estimate the translation model probabilities of a phrase-based NMT system. Libovický & Helcl (2018) formulate decoding as a sequence labeling task by projecting the source sequence into a longer sequence and applying the CTC loss (Graves et al., 2006) to decode the target sequence. Wang et al. (2019) add regularization terms to NAT models (Gu et al., 2018) to reduce translation errors such as repeated tokens and incomplete sentences. Ghazvininejad et al. (2019) use a non-autoregressive decoder with masked attention to decode target sequences over multiple generation rounds. A common challenge in non-autoregressive NMT is the large number of sequential latent variables, e.g., fertility sequences (Gu et al., 2018) and projected target sequences (Libovický & Helcl, 2018), which serve as supporting signals for non- or semi-autoregressive decoding. We reformulate dialogue state as a structured sequence whose sub-sequences are concatenations of slot values. This form of dialogue state can be inferred directly from the dialogue state annotation itself, whereas such supervision is not directly available in NMT. The lower semantic complexity of slot values, compared to long sentences in NMT, also makes it easier to adopt non-autoregressive approaches for DST. To the best of our review, we are the first to apply a non-autoregressive framework to generation-based DST. Our approach allows joint state tracking across slots, which results in better performance and an order of magnitude lower latency during inference." }, { "heading": "3 APPROACH", "text": "Our NADST model is composed of three parts: encoders, a fertility decoder, and a state decoder, as shown in Figure 1. The input includes the dialogue history $X = (x_1, \dots, x_N)$ and a sequence of applicable (domain, slot) pairs $X_{ds} = ((d_1, s_1), \dots, (d_G, s_H))$, where $G$ and $H$ are the total numbers of domains and slots, respectively. The output is the dialogue state up to the current dialogue history. Conventionally, a dialogue state is denoted as a tuple (slot, value) (or (domain-slot, value) for multi-domain dialogues). We reformulate the output as a concatenation of slot values $Y^{d_i,s_j}$: $Y = (Y^{d_1,s_1}, \dots, Y^{d_I,s_J}) = (y^{d_1,s_1}_1, y^{d_1,s_1}_2, \dots, y^{d_I,s_J}_1, y^{d_I,s_J}_2, \dots)$, where $I$ and $J$ are the numbers of domains and slots in the output dialogue state, respectively.

First, the encoders use token-level embedding and positional encoding to encode the input dialogue history and (domain, slot) pairs into continuous representations. The encoded domains and slots are then input to stacked self-attention and feed-forward networks to obtain relevant signals across the dialogue history and generate a fertility $Y^{d_g,s_h}_f$ for each (domain, slot) pair $(d_g, s_h)$. The output of the fertility decoder is defined as a sequence $Y_{fert} = (Y^{d_1,s_1}_f, \dots, Y^{d_G,s_H}_f)$ where $Y^{d_g,s_h}_f \in \{0, \dots, \max(\mathrm{SlotLength})\}$. For example, for the MultiWOZ dataset in our experiments, we have $\max(\mathrm{SlotLength}) = 9$ according to the training data. We follow (Wu et al., 2019; Gao et al., 2019) and add a slot gating mechanism as an auxiliary prediction.
Each gate $g$ is restricted to 3 possible values: “none”, “dontcare” and “generate”. The gates provide higher-level classification signals that support the fertility decoding process. The gate output is defined as a sequence $Y_{gate} = (Y^{d_1,s_1}_g, \dots, Y^{d_G,s_H}_g)$.

The predicted fertilities are used to form an input sequence to the state decoder for non-autoregressive decoding. The sequence consists of sub-sequences in which each $(d_g, s_h)$ is repeated $Y^{d_g,s_h}_f$ times, concatenated sequentially: $X_{ds \times fert} = ((d_1, s_1)^{Y^{d_1,s_1}_f}, \dots, (d_G, s_H)^{Y^{d_G,s_H}_f})$ with $\|X_{ds \times fert}\| = \|Y\|$. The state decoder projects this sequence through attention layers over the dialogue history. During this decoding process, we maintain a memory of the hidden states of the dialogue history. The output of the state decoder is used as a query to attend over this memory and copy tokens from the dialogue history to generate the dialogue state.

Following Lei et al. (2018), we incorporate information from previous dialogue turns to predict the current turn's state by using a partially delexicalized dialogue history $X_{del} = (x_{1,del}, \dots, x_{N,del})$ as an additional model input. The dialogue history is delexicalized up to the last system utterance by replacing real-value tokens that match previously decoded slot values with tokens of the form domain-slot. Given a token $x_n$ and the current dialogue turn $t$, the token is delexicalized as follows:

$$x_{n,del} = \mathrm{delex}(x_n) = \begin{cases} \mathrm{domain}_{idx}\text{-}\mathrm{slot}_{idx}, & \text{if } x_n \subset \hat{Y}_{t-1} \\ x_n, & \text{otherwise} \end{cases} \quad (1)$$

$$\mathrm{domain}_{idx} = X_{ds \times fert}[idx][0], \quad \mathrm{slot}_{idx} = X_{ds \times fert}[idx][1], \quad idx = \mathrm{Index}(x_n, \hat{Y}_{t-1}) \quad (2)$$

For example, the user utterance “I look for a cheap hotel” is delexicalized to “I look for a hotel_pricerange hotel.” if the slot hotel_pricerange was predicted as “cheap” in the previous turn. This approach makes use of the delexicalized form of the dialogue history without relying on an NLU module, as we utilize the predicted state of the DST model itself. In addition to the belief state, we also use the system action in the previous turn to delexicalize the dialogue history in a similar manner, following prior work (Rastogi et al., 2017; Zhong et al., 2018; Goel et al., 2019)." }, { "heading": "3.1 ENCODERS", "text": "An encoder embeds the dialogue history $X$ into a sequence of continuous representations $Z = (z_1, \dots, z_N) \in \mathbb{R}^{N \times d}$. Similarly, the partially delexicalized dialogue history $X_{del}$ is encoded into continuous representations $Z_{del} \in \mathbb{R}^{N \times d}$. We store the encoded dialogue history $Z$ in a memory which is later passed to a pointer network to copy words for dialogue state generation. This helps address the OOV challenge, as shown in (See et al., 2017; Wu et al., 2019). We also encode each (domain, slot) pair into a continuous representation $z_{ds} \in \mathbb{R}^d$ as input to the decoders. Each vector $z_{ds}$ accumulates contextual signals for slot and fertility prediction during the decoding process.

Context Encoder. The context encoder includes a token-level trainable embedding layer and layer normalization (Ba et al., 2016). It also includes a positional encoding layer following the sine and cosine functions of (Vaswani et al., 2017). An element-wise summation combines the token-level vectors with the positional encoded vectors. We share the embedding weights used to embed the raw and delexicalized dialogue history; the same weights are also shared to encode the input to both the fertility decoder and the state decoder. The final embeddings of $X$ and $X_{del}$ are defined as:

$$Z = Z_{emb} + PE(X) \in \mathbb{R}^{N \times d} \quad (3)$$
$$Z_{del} = Z_{emb,del} + PE(X_{del}) \in \mathbb{R}^{N \times d} \quad (4)$$

Domain and Slot Encoder.
Each (domain, slot) pair is encoded using two separate embedding vectors for the corresponding domain and slot. Each domain $g$ and slot $h$ is embedded into a continuous representation $z_{d_g}$ and $z_{s_h} \in \mathbb{R}^d$. The final vector is obtained by element-wise summation:

$$z_{d_g,s_h} = z_{d_g} + z_{s_h} \in \mathbb{R}^d \quad (5)$$

We share the embedding weights for domain and slot tokens in both the fertility decoder and the state decoder. However, for the input to the state decoder, we inject sequential information into $X_{ds \times fert}$ to provide position-wise information for decoding the target state sequence. In summary, $X_{ds}$ and $X_{ds \times fert}$ are encoded as follows:

$$Z_{ds} = Z_{emb,ds} = z_{d_1,s_1} \oplus \dots \oplus z_{d_G,s_H} \quad (6)$$
$$Z_{ds \times fert} = Z_{emb,ds \times fert} + PE(X_{ds \times fert}) \quad (7)$$
$$Z_{emb,ds \times fert} = (z_{d_1,s_1})^{Y^{d_1,s_1}_f} \oplus \dots \oplus (z_{d_G,s_H})^{Y^{d_G,s_H}_f} \quad (8)$$

where $\oplus$ denotes the concatenation operation. Note that, unlike a typical Transformer decoder input, we do not shift the input sequences of the fertility decoder and state decoder by one position, as both modules decode non-autoregressively. Therefore, each output token at position $i$ is generated based on all remaining positions of the sequence, i.e. $1, \dots, i-1, i+1, \dots, \|X_{ds}\|$ in the fertility decoder and $1, \dots, i-1, i+1, \dots, \|X_{ds \times fert}\|$ in the state decoder." }, { "heading": "3.2 FERTILITY DECODER", "text": "Given the encoded dialogue history $Z$, delexicalized dialogue history $Z_{del}$, and (domain, slot) pairs $Z_{ds}$, contextual signals are learned and passed into each $z_{ds}$ vector through a sequence of attention layers. We adopt the multi-head attention mechanism (Vaswani et al., 2017) to project the representations into multiple sub-spaces. The attention mechanism is defined as scaled dot-product attention between query $Q$, key $K$, and value $V$:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V \quad (9)$$

Each multi-head attention is followed by a position-wise feed-forward network, applied to each position separately and identically; we use two linear layers with a ReLU activation in between. The fertility decoder consists of 3 attention layers, each of which learns relevant contextual signals and incorporates them into the $z_{ds}$ vectors as input to the next attention layer:

$$Z^{out}_{ds} = \mathrm{Attention}(Z_{ds}, Z_{ds}, Z_{ds}) \in \mathbb{R}^{N \times d} \quad (10)$$
$$Z^{out}_{ds} = \mathrm{Attention}(Z^{out}_{ds}, Z_{del}, Z_{del}) \in \mathbb{R}^{N \times d} \quad (11)$$
$$Z^{out}_{ds} = \mathrm{Attention}(Z^{out}_{ds}, Z, Z) \in \mathbb{R}^{N \times d} \quad (12)$$

For simplicity, we do not write out the multi-head and feed-forward equations; we refer the reader to the Transformer network (Vaswani et al., 2017) for a more detailed description. The multi-head structure has been shown to obtain good performance in many NLP tasks such as NMT (Vaswani et al., 2017) and QA (Dehghani et al., 2019). By adopting this attention mechanism, the model explicitly obtains signals of potential dependencies across (domain, slot) pairs in the first attention layer, and contextual dependencies in the subsequent attention layers. Adding the delexicalized dialogue history as input provides important contextual signals, as the model can learn the mapping between real-value tokens and generalized domain-slot tokens. To further improve the model's capability to capture these dependencies, we repeat the attention sequence $T_{fert}$ times, starting from $Z_{ds}$. In attention step $t$, the output of the previous step $t-1$ is used as input to the current layer to compute $Z^t_{ds}$ (a sketch of this attention stack is given below).
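To make Equations (9)-(12) concrete, here is a minimal single-head NumPy sketch of one pass through the fertility decoder's attention stack; the shapes and the plain attention (without multi-head projections, feed-forward layers, or residual connections) are simplifying assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention, Eq. (9): softmax(Q K^T / sqrt(d_k)) V
    d_k = K.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d_k)) @ V

d, n_pairs, N = 256, 35, 120          # hidden size, (domain, slot) pairs, history length
Z_ds  = np.random.randn(n_pairs, d)   # encoded (domain, slot) pairs
Z_del = np.random.randn(N, d)         # encoded delexicalized history
Z     = np.random.randn(N, d)         # encoded raw history

T_fert = 3
Z_out = Z_ds
for _ in range(T_fert):
    Z_out = attention(Z_out, Z_out, Z_out)   # Eq. (10): dependencies across pairs
    Z_out = attention(Z_out, Z_del, Z_del)   # Eq. (11): delexicalized context
    Z_out = attention(Z_out, Z, Z)           # Eq. (12): raw dialogue history
# Z_out plays the role of Z_ds^{T_fert}; the real model wraps each step with
# multi-head projections, feed-forward layers, and residual connections.
```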
The output of the last attention layer, $Z^{T_{fert}}_{ds}$, is passed to two independent linear transformations to predict fertilities and gates:

$$P^{gate} = \mathrm{softmax}(W_{gate} Z^{T_{fert}}_{ds}) \quad (13)$$
$$P^{fert} = \mathrm{softmax}(W_{fert} Z^{T_{fert}}_{ds}) \quad (14)$$

where $W_{gate} \in \mathbb{R}^{d \times 3}$ and $W_{fert} \in \mathbb{R}^{d \times 10}$. We use the standard cross-entropy loss to train the prediction of gates and fertilities:

$$\mathcal{L}_{gate} = \sum_{d_g,s_h} -\log\left(P^{gate}(Y^{d_g,s_h}_g)\right) \quad (15)$$
$$\mathcal{L}_{fert} = \sum_{d_g,s_h} -\log\left(P^{fert}(Y^{d_g,s_h}_f)\right) \quad (16)$$" }, { "heading": "3.3 STATE DECODER", "text": "Given the generated gates and fertilities, we form the input sequence $X_{ds \times fert}$, filtering out any (domain, slot) pair whose gate is either “none” or “dontcare”. Given the encoded input $Z_{ds \times fert}$, we apply an attention sequence similar to the one used in the fertility decoder to incorporate contextual signals into each $z_{ds \times fert}$ vector. In this decoding stage, dependencies are captured at the token level rather than at the higher domain/slot level as in the fertility decoder. After repeating the attention sequence $T_{state}$ times, the final output $Z^{T_{state}}_{ds \times fert}$ is used to predict the state as follows:

$$P^{state}_{vocab} = \mathrm{softmax}(W_{state} Z^{T_{state}}_{ds \times fert}) \quad (17)$$

where $W_{state} \in \mathbb{R}^{d \times \|V\|}$ with $V$ the output vocabulary. As open-vocabulary DST models do not assume a known slot ontology, our model can generate candidates from the dialogue history itself. To address the OOV problem during inference, we incorporate a pointer network (Vinyals et al., 2015) into the Transformer decoder. We apply dot-product attention between the state decoder output and the stored memory of the encoded dialogue history $Z$:

$$P^{state}_{ptr} = \mathrm{softmax}(Z^{T_{state}}_{ds \times fert} Z^T) \quad (18)$$

The final probability of the predicted state is defined as the weighted sum of the two probabilities:

$$P^{state} = p^{state}_{gen} \times P^{state}_{vocab} + (1 - p^{state}_{gen}) \times P^{state}_{ptr} \quad (19)$$
$$p^{state}_{gen} = \mathrm{sigmoid}(W_{gen} V_{gen}) \quad (20)$$
$$V_{gen} = Z_{ds \times fert} \oplus Z^{T_{state}}_{ds \times fert} \oplus Z_{exp} \quad (21)$$

where $W_{gen} \in \mathbb{R}^{3d \times 1}$ and $Z_{exp}$ is $Z$ expanded to match the dimensions of $Z_{ds \times fert}$. The final probability is used to train state generation with the cross-entropy loss:

$$\mathcal{L}_{state} = \sum_{d_g,s_h} \sum_{m=0}^{Y^{d_g,s_h}_f} -\log\left(P^{state}(y^{d_g,s_h}_m)\right) \quad (22)$$" }, { "heading": "3.4 OPTIMIZATION", "text": "We optimize all parameters jointly by minimizing the weighted sum of the three losses:

$$\mathcal{L} = \mathcal{L}_{state} + \alpha \mathcal{L}_{gate} + \beta \mathcal{L}_{fert} \quad (23)$$

where $\alpha \geq 0$ and $\beta \geq 0$ are hyper-parameters." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 DATASET", "text": "MultiWOZ (Budzianowski et al., 2018) is one of the largest publicly available multi-domain task-oriented dialogue datasets, covering 7 dialogue domains. In this paper, we use the new version of the MultiWOZ dataset published by Eric et al. (2019), which corrects the dialogue state annotations, with changes to more than 40% of dialogue turns. On average, each dialogue spans more than one domain. We pre-processed the dialogues by tokenizing, lower-casing, and delexicalizing all system responses, following the pre-processing scripts of (Wu et al., 2019). We identify a total of 35 (domain, slot) pairs. Other details of the data pre-processing procedures, corpus statistics, and the list of (domain, slot) pairs are described in Appendix A.1." }, { "heading": "4.2 TRAINING PROCEDURE", "text": "We use label smoothing (Szegedy et al., 2016) to train the prediction of the dialogue state $Y$, but not for the prediction of fertilities $Y_{fert}$ and gates $Y_{gate}$ (the sketch below illustrates the generation path and the objective).
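As an illustration of the generation path (Equations 17-21) and the joint objective (Equation 23), below is a minimal NumPy sketch; the weight matrices are random stand-ins, `history_ids` is a hypothetical token-id array for the dialogue history, and label smoothing and the gate/fertility losses are stubbed out for brevity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

d, vocab, N, L = 256, 1000, 120, 14   # hidden size, vocab, history length, state length
Z       = np.random.randn(N, d)       # encoded dialogue history (memory)
Z_in    = np.random.randn(L, d)       # Z_{ds x fert}: state decoder input
Z_state = np.random.randn(L, d)       # Z^{T_state}_{ds x fert}: state decoder output
W_state = np.random.randn(d, vocab)
W_gen   = np.random.randn(3 * d, 1)

# Eq. (17): vocabulary distribution.
P_vocab = softmax(Z_state @ W_state)
# Eq. (18): pointer distribution over history positions; to mix it with the
# vocabulary distribution, each position's weight is scattered onto the id
# of the token at that position.
attn = softmax(Z_state @ Z.T)
history_ids = np.random.randint(0, vocab, size=N)
P_ptr = np.zeros_like(P_vocab)
np.add.at(P_ptr.T, history_ids, attn.T)
# Eq. (20)-(21): generation gate from input, output, and expanded context
# (attention-weighted Z is one way to expand Z to L x d).
Z_exp = attn @ Z
p_gen = sigmoid(np.concatenate([Z_in, Z_state, Z_exp], axis=-1) @ W_gen)
# Eq. (19): final mixture distribution.
P_state = p_gen * P_vocab + (1.0 - p_gen) * P_ptr

# Eq. (22)-(23): joint loss with weights alpha, beta.
targets = np.random.randint(0, vocab, size=L)
L_state = -np.log(P_state[np.arange(L), targets] + 1e-9).sum()
L_gate, L_fert, alpha, beta = 0.0, 0.0, 1.0, 1.0   # placeholders for Eq. (15)-(16)
loss = L_state + alpha * L_gate + beta * L_fert
```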
During training, we adopt 100% teacher-forcing learning strategy by using the ground-truth of Xds×fert as input to the state decoder. We also apply the same strategy to obtain delexicalized dialogue history i.e. dialogue history is delexicalized from the ground-truth belief state in previous dialogue turn rather than relying on the predicted belief state. During inference, we follow a similar strategy as (Lei et al., 2018) by generating dialogue state turn-by-turn and use the predicted belief state in turn t − 1 to delexicalize dialogue history in turn t. During inference, Xds×fert is also constructed by prediction Ŷgate and Ŷfert. We adopt the Adam optimizer (Kingma & Ba, 2015) and the learning rate strategy similarly as (Vaswani et al., 2017). Best models are selected based on the best average joint accuracy of dialogue state prediction in the validation set. All parameters are randomly initialized with uniform distribution (Glorot & Bengio, 2010). We did not utilize any pretrained word- or character-based embedding weights. We tuned the hyper-parameters with grid-search over the validation set (Refer to Appendix A.2 for further details). We implemented our models using PyTorch (Paszke et al., 2017) and released the code on GitHub 1." }, { "heading": "4.3 BASELINES", "text": "The DST baselines can be divided into 2 groups: open-vocabulary approach and fixed-vocabulary approach as mentioned in Section 2. Fixed-vocabulary has the advantage of access to the known candidate set of each slot and has a high performance of prediction within this candidate set. However, during inference, the approach suffers from unseen slot values for slots with evolving candidates such as entity names and time- and location-related slots." }, { "heading": "4.3.1 FIXED-VOCABULARY", "text": "GLAD (Zhong et al., 2018). GLAD uses multiple self-attentive RNNs to learn a global tracker for shared parameters among slots and a local tracker for individual slot. The model utilizes previous system actions as input. The output is used to compute semantic similarity with ontology terms.\nGCE (Nouri & Hosseini-Asl, 2018). GCE is a simplified and faster version of GLAD. The model removes slot-specific RNNs while maintaining competitive DST performance.\nMDBT (Ramadan et al., 2018). MDBT model includes separate encoding modules for system utterances, user utterances, and (slot, value) pairs. Similar to GLAD, The model is trained based on the semantic similarity between utterances and ontology terms.\nFJST and HJST (Eric et al., 2019). FJST refers to Flat Joint State Tracker, which consists of a dialog history encoder as a bidirectional LSTM network. The model also includes separate feedforward networks to encode hidden states of individual state slots. HJST follows a similar architecture but uses a hierarchical LSTM network (Serban et al., 2016) to encode the dialogue history.\nSUMBT (Lee et al., 2019). SUMBT refers to Slot-independent Belief Tracker, consisting of a multi-head attention layer with query vector as a representation of a (domain, slot) pair and key and\n1https://github.com/henryhungle/NADST\nvalue vector as BERT-encoded dialogue history. The model follows a non-parametric approach as it is trained to minimize a score such as Euclidean distance between predicted and target slots. Our approach is different from SUMBT as we include attention among (domain, slot) pairs to explicitly learn dependencies among the pairs. Our models also generate slot values rather than relying on a fixed candidate set." 
}, { "heading": "4.3.2 OPEN-VOCABULARY", "text": "TSCP (Lei et al., 2018). TSCP is an end-to-end dialogue model consisting of an RNN encoder and two RNN decoder with a pointer network. We choose this as a baseline because TSCP decodes dialogue state as a single sequence and hence, factor in potential dependencies among slots like our work. We adapt TSCP into multi-domain dialogues and report the performance of only the DST component rather than the end-to-end model. We also reported the performance of TSCP for two cases when the maximum length of dialogue state sequence L in the state decoder is set to 8 or 20 tokens. Different from TSCP, our models dynamically learn the length of each state sequence as the sum of predicted fertilities and hence, do not rely on a fixed value of L.\nDST Reader (Gao et al., 2019). DST Reader reformulates the DST task as a reading comprehension task. The prediction of each slot is a span over tokens within the dialogue history. The model follows an attention-based neural network architecture and combines a slot carryover prediction module and slot type prediction module.\nHyST (Goel et al., 2019). HyST model combines both fixed-vocabulary and open-vocabulary approach by separately choosing which approach is more suitable for each slot. For the openvocabulary approach, the slot candidates are formed as sets of all word n-grams in the dialogue history. The model makes use of encoder modules to encode user utterances and dialogue acts to represent the dialogue context.\nTRADE (Wu et al., 2019). This is the current state-of-the-art model on the MultiWOZ2.0 and 2.1 datasets. TRADE is composed of a dialog history encoder, a slot gating module, and an RNN decoder with a pointer network for state generation. SpanPtr is a related baseline to TRADE as reported by Wu et al. (2019). The model makes use of a pointer network with index-based copying instead of a token-based copying mechanism." }, { "heading": "4.4 RESULTS", "text": "We evaluate model performance by the joint goal accuracy as commonly used in DST (Henderson et al., 2014b). The metric compares the predicted dialogue states to the ground truth in each dialogue turn. A prediction is only correct if all the predicted values of all slots exactly match the corresponding ground truth labels. We ran our models for 5 times and reported the average results. For completion, we reported the results in both MultiWOZ 2.0 and 2.1.\nAs can be seen in Table 2, although our models are designed for non-autoregressive decoding, they can outperform state-of-the-art DST approaches that utilize autoregressive decoding such as (Wu et al., 2019). Our performance gain can be attributed to the model capability of learning crossdomain and cross-slot signals, directly optimizing towards the evaluation metric of joint goal accuracy rather than just the accuracy of individual slots. Following prior DST work, we reported the model performance on the restaurant domain in MultiWOZ 2.0 in Table 4. In this dialogue domain, our model surpasses other DST models in both Joint Accuracy and Slot Accuracy. Refer to Appendix A.3 for our model performance in other domains in both MultiWOZ2.0 and MultiWOZ2.1.\nLatency Analysis. We reported the latency results in term of wall-clock time (in ms) per prediction state of our models and the two baselines TRADE (Wu et al., 2019) and TSCP (Lei et al., 2018) in Table 4. For TSCP, we reported the time cost only for the DST component instead of the end-toend models. 
We conducted experiments with 2 cases of TSCP when the maximum output length of dialogue state sequence in the state decoder is set as L = 8 and L = 20. We varied our models for different values of T = Tfert = Tstate ∈ {1, 2, 3}. All latency results are reported when running in a single identical GPU. As can be seen in Table 4, NADST obtains the best performance when T = 3. The model outperforms the baselines while taking much less time during inference. Our approach is similar to TSCP which also decodes a complete dialogue state sequence rather than individual slots to factor in dependencies among slot values. However, as TSCP models involve sequential\nprocessing in both encoding and decoding, they require much higher latency. TRADE shortens the latency by separating the decoding process among (domain, slot) pairs. However, at the token level, TRADE models follow an auto-regressive process to decode individual slots and hence, result in higher average latency as compared to our approach. In NADST, the model latency is only affected by the number of attention layers in fertility decoder Tfert and state decoder Tstate. For approaches with sequential encoding and/or decoding such as TSCP and TRADE, the latency is affected by the length of source sequences (dialog history) and target sequence (dialog state). Refer to Appendix A.3 for visualization of model latency in terms of dialogue history length.\nAblation Analysis. We conduct an extensive ablation analysis with several variants of our models in Table 5. Besides the results of DST metrics, Joint Slot Accuracy and Slot Accuracy, we reported the performance of the fertility decoder in Joint Gate Accuracy and Joint Fertility Accuracy. These metrics are computed similarly as Joint Slot Accuracy in which the metrics are based on whether all predictions of gates or fertilities match the corresponding ground truth labels. We also reported the Oracle Joint Slot Accuracy and Slot Accuracy when the models are fed with ground truth Xds×fert and Xdel labels instead of the model predictions. We noted that the model fails when positional encoding of Xds×fert is removed before being passed to the state decoder. The performance drop can be explained because PE is responsible for injecting sequential attributes to enable non-autoregressive decoding. Second, we also note a slight drop of performance when slot gating is removed as the models have to learn to predict a fertility of 1 for “none” and “dontcare” slots as well. Third, removing Xdel as an input reduces the model performance, mostly due to the sharp decrease in Joint Fertility Accuracy. Lastly, removing pointer generation and relying on only P statevocab affects the model performance as the models are not able to infer slot values unseen during training, especially for slots such as restaurant-name and train-arriveby. We conduct other ablation experiments and report additional results in Appendix A.3.\nAuto-regressive DST. We conduct experiments that use an auto-regressive state decoder and keep other parts of the model the same. For the fertility decoder, we do not use Equation 14 and 16 as fertility becomes redundant in this case. We still use the output to predict slot gates. Similar to TRADE, we use the summation of embedding vectors of each domain and slot pair as input to the state decoder and generate slot value token by token. First, From Table 6, we note that the performance does not change significantly as compared to the non-autoregressive version. 
This reveals that our proposed NADST models can predict fertilities reasonably well and performance is comparable with the auto-regressive approach. Second, we observe that the auto-regressive models are less sensitive to the use of system action in dialogue history delexicalization. We expect this as predicting slot gates is easier than predicting fertilities. Finally, we note that our auto-regressive model variants still outperform the existing approaches. This could be due to the high-level dependencies among (domain, slot) pairs learned during the first part of the model to predict slot gates.\nVisualization and Qualitative Evaluation. In Figure 4, we include two examples of dialogue state prediction and the corresponding visualization of self-attention scores of Xds×fert in state decoder. In each heatmap, the highlighted boxes express attention scores among non-symmetrical domain-slot pairs. In the first row, 5 attention heads capture the dependencies of two pairs (trainleaveat, train-arriveby) and (train-departure, train-destination). The model prediction for these two slots matches the gold labels: (train-leaveat, 09:50), (train-arriveby, 11:30) and (train-departure, cambridge), (train-destination, ely) respectively. In the second row, besides slot-level dependency between domain-slot pairs (taxi-departure, taxi-destination), token-level dependency is exhibited through the attention between attraction-type and attraction-name. By attending on token representations of attraction-name with corresponding output “christ college”, the models can infer “attraction-type=college” correctly. In addition, our model also detects contextual dependency between train-departure and attraction-name to predict “train-departure=christ college.” Refer to Appendix A.4 for the dialogue history with gold and prediction states of these two sample dialogues." }, { "heading": "5 CONCLUSION", "text": "We proposed NADST, a novel Non-Autoregressive neural architecture for DST that allows the model to explicitly learn dependencies at both slot-level and token-level to improve the joint accuracy rather than just individual slot accuracy. Our approach also enables fast decoding of dialogue states by adopting a parallel decoding strategy in decoding components. Our extensive experiments on the well-known MultiWOZ corpus for large-scale multi-domain dialogue systems benchmark show that our NADST model achieved the state-of-the-art accuracy results for DST tasks, while enjoying a substantially low inference latency which is an order of magnitude lower than the prior work." }, { "heading": "A APPENDIX", "text": "A.1 DATASET PRE-PROCESSING\nWe follow similar data preprocessing procedures as Budzianowski et al. (2018) and Wu et al. (2019) on both MultiWOZ 2.0 and 2.1. The resulting corpus includes 8,438 multi-turn dialogues in training set with an average of 13.5 turns per dialogue. For the test and validation set, each includes 1,000 multi-turn dialogues with an average of 14.7 turns per dialogue. The average number of domains per dialogue is 1.8 for training, validation, and test sets. The MultiWOZ corpus includes much larger ontology than previous DST datasets such as WOZ (Wen et al., 2017) and DSTC2 (Henderson et al., 2014a). We identified a total of 35 (domain, slot) pairs across 7 domains. However, only 5 domains are included in the test data. 
Refer to Table 7 for the statistics of dialogues in these 5 domains.\nA.2 MODEL HYPER-PARAMETERS\nWe employed dropout (Srivastava et al., 2014) of 0.2 at all network layers except the linear layers of generation network components and pointer attention components. We used a batch size of 32, embedding dimension d = 256 in all experiments. We also fixed the number of attention heads to 16 in all attention layers. We shared the embedding weights to embed domain and slot tokens as input to fertility decoder and state decoder. We also shared the embedding weights between dialogue history encoder and state generator. We varied our models for different values of T = Tfert = Tstate ∈ {1, 2, 3}. In all experiments, the warmup steps are fine-tuned from a range from 13K to 20K training steps.\nA.3 ADDITIONAL RESULTS\nDomain-specific Results. We conduct experiments to evaluate our model performance in all 5 test domains in MultiWOZ2.0 and 2.1. From Table 8, our models perform better in restaurant and attraction domain in general. The performance in the taxi and hotel domain is significantly lower than other domains. This could be explained as the hotel domain has a complicated slot ontology with 10 different slots, larger than the other domains. For the taxi domain, we observed that dialogues with this domain are usually of multiple domains, including the taxi domain in combination with other domains. Hence, it is more challenging to track dialogue states in the taxi domain.\nLatency Results. We visualized the model latency against the length of dialogue history in Figure 2 and 3. In Figure 2, we only plot with dialogue history length up to 80 tokens as TSCP models do not use the full dialogue history as input. In Figure 3, for a fair comparison between TRADE and NADST, we plot the latency of the original TRADE which decodes dialogue state slot by slot and a new version of TRADE∗ model which decodes individual slots following a parallel decoding mechanism. Since TRADE independently generates dialogue state slot by slot, we enable parallel generation simply by feeding all slots into models at once (without impacts on performance). However, at the token level, TRADE∗ still follows an autoregressive decoding framework. Compared to TRADE∗ and TSCP, our model latency is only dependent on the model complexity i.e. the number\nof attention layers T = Tfert = Tstate. For TRADE∗ and TSCP, the model latency increases as dialogue extends over time while NADST latency is almost constant. The non-constant latency is mostly due to overhead processing such as delexicalizing dialogue history. Our approach is, hence, suitable especially for dialogues in multiple domains as they usually extend over more number of turns (e.g. 13 to 14 turns per dialogue in average in MultiWOZ corpus) In Figure 3, we noted that the latency of the original TRADE is almost unchanged as the dialogue history extends. This is most likely due to the model having to decode all possible (domain, slot) pairs rather than just relevant pairs as in NADST and TSCP. The TRADE∗ shows a clearer increasing trend of latency because the parallel process is independent of the number of (domain,slot) pairs considered. TRADE∗ still requires more time to decode than NADST as we also parallelize decoding at the token level.\nAblation Results. We conduct additional ablation experiments by varying the proportion of prediction values vs. ground-truth values for Xdel and Xds×fert as input to the models. 
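A minimal sketch of the mixing protocol implied by this ablation, with a hypothetical `p_pred` controlling how often the model's own predictions (rather than oracle values) are fed in:

```python
import random

def mixed_input(predicted, oracle, p_pred):
    """Feed the model's own prediction with probability p_pred,
    otherwise the oracle (ground-truth-derived) value."""
    return predicted if random.random() < p_pred else oracle

# Example sweep over %pred for the two inputs varied in this ablation.
for p_pred in (1.0, 0.75, 0.5, 0.25, 0.0):
    x_del = mixed_input("x_del from predicted state", "x_del from gold state", p_pred)
    x_ds_fert = mixed_input("pairs from predicted fertilities",
                            "pairs from gold fertilities", p_pred)
```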
As can be seen in Table 9, the model performance increases gradually as the proportion of predicted input %pred is reduced from 100% (true prediction) to 0% (oracle prediction). In particular, we observe more significant changes in performance when varying %pred of $X_{ds \times fert}$: joint accuracy can increase to more than 67% given an oracle input of $X_{ds \times fert}$. However, we consider improving model performance through $X_{del}$ more practically achievable. For example, one could use a more sophisticated mechanism to delexicalize the dialogue history rather than the current exact-word-matching strategy, or obtain a better $X_{del}$ through a pretrained NLU model. In the ideal case with access to ground-truth labels for both $X_{del}$ and $X_{ds \times fert}$, the model obtains a joint accuracy of 73%.

A.4 SAMPLE PREDICTION OUTPUT

We extracted the prediction output for all turns of 2 example dialogues: MUL0536 and PMUL3759." } ]
2020
NON-AUTOREGRESSIVE DIALOG STATE TRACKING
SP:7f1af600e64c0ad693a9b1cc198bbaf39cd884c6
[ "The paper presents SEED RL, which is a scalable reinforcement learning agent. The approach restructure the interface / division of functionality between the actors (environments) and the learner as compared to the distributed approach in IMPALA (a state-of-the-art distributed RL framework). Most importantly, the model is only in the learner in SEED while it is distributed in IMPALA. ", "This paper presents a scalable reinforcement learning training architecture which combines a number of modern engineering advances to address the inefficiencies of prior methods. The proposed architecture shows good performance on a wide variety of benchmarks from ALE to DeepMind Lab and Google Research Football. Important to the community, authors also open source their code and provide an estimate which shows that the proposed framework is cheaper to run on cloud platforms." ]
We present a modern scalable reinforcement learning agent called SEED (Scalable, Efficient Deep-RL). By effectively utilizing modern accelerators, we show that it is not only possible to train on millions of frames per second but also to lower the cost of experiments compared to current methods. We achieve this with a simple architecture that features centralized inference and an optimized communication layer. SEED adopts two state of the art distributed algorithms, IMPALA/V-trace (policy gradients) and R2D2 (Q-learning), and is evaluated on Atari-57, DeepMind Lab and Google Research Football. We improve the state of the art on Football and are able to reach state of the art on Atari-57 three times faster in wall-time. For the scenarios we consider, a 40% to 80% cost reduction for running experiments is achieved. The implementation along with experiments is open-sourced so results can be reproduced and novel ideas tried out. Github: http://github.com/google-research/seed_rl.
[ { "affiliations": [], "name": "Lasse Espeholt" }, { "affiliations": [], "name": "Raphaël Marinier" }, { "affiliations": [], "name": "Piotr Stanczyk" }, { "affiliations": [], "name": "Ke Wang" }, { "affiliations": [], "name": "Marcin Michalski" } ]
[ { "authors": [ "Marc G Bellemare", "Yavar Naddaf", "Joel Veness", "Michael Bowling" ], "title": "The arcade learning environment: An evaluation platform for general agents", "venue": "J. Artif. Intell. Res. (JAIR),", "year": 2013 }, { "authors": [ "Pablo Samuel Castro", "Subhodeep Moitra", "Carles Gelada", "Saurabh Kumar", "Marc G. Bellemare" ], "title": "Dopamine: A Research Framework for Deep Reinforcement Learning. 2018", "venue": "URL http: //arxiv.org/abs/1812.06110", "year": 2018 }, { "authors": [ "Jianmin Chen", "Rajat Monga", "Samy Bengio", "Rafal Józefowicz" ], "title": "Revisiting distributed synchronous SGD", "venue": "CoRR, abs/1604.00981,", "year": 2016 }, { "authors": [ "Steven Dalton", "Iuri Frosio", "Michael Garland" ], "title": "Gpu-accelerated atari emulation for reinforcement learning", "venue": "CoRR, abs/1907.08467,", "year": 2019 }, { "authors": [ "Jeffrey Dean", "Greg Corrado", "Rajat Monga", "Kai Chen", "Matthieu Devin", "Mark Mao", "Marc’aurelio Ranzato", "Andrew Senior", "Paul Tucker", "Ke Yang", "Quoc V. Le", "Andrew Y. Ng" ], "title": "Large scale distributed deep networks", "venue": "Advances in Neural Information Processing Systems", "year": 2012 }, { "authors": [ "Ben Deverett", "Ryan Faulkner", "Meire Fortunato", "Greg Wayne", "Joel Z Leibo" ], "title": "Interval timing in deep reinforcement learning agents", "venue": null, "year": 1905 }, { "authors": [ "Lasse Espeholt", "Hubert Soyer", "Rémi Munos", "Karen Simonyan", "Volodymyr Mnih", "Tom Ward", "Yotam Doron", "Vlad Firoiu", "Tim Harley", "Iain Dunning", "Shane Legg", "Koray Kavukcuoglu" ], "title": "IMPALA: scalable distributed deep-rl with importance weighted actor-learner architectures", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Xavier Glorot", "Yoshua Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "In Proceedings of the thirteenth international conference on artificial intelligence and statistics,", "year": 2010 }, { "authors": [ "Priya Goyal", "Piotr Dollár", "Ross B. 
Girshick", "Pieter Noordhuis", "Lukasz Wesolowski", "Aapo Kyrola", "Andrew Tulloch", "Yangqing Jia", "Kaiming He" ], "title": "Accurate, large minibatch SGD: training imagenet in 1 hour", "venue": "CoRR, abs/1706.02677,", "year": 2017 }, { "authors": [ "Karol Gregor", "Danilo Jimenez Rezende", "Frederic Besse", "Yan Wu", "Hamza Merzic", "Aäron van den Oord" ], "title": "Shaping belief states with generative environment models for RL", "venue": "URL http://arxiv.org/abs/1906.09237", "year": 1906 }, { "authors": [ "Sergio Guadarrama", "Anoop Korattikara", "Oscar Ramirez", "Pablo Castro", "Ethan Holly", "Sam Fishman", "Ke Wang", "Ekaterina Gonina", "Neal Wu", "Chris Harris", "Vincent Vanhoucke", "Eugene Brevdo" ], "title": "TF-Agents: A library for reinforcement learning in tensorflow", "venue": "https://github.com/ tensorflow/agents,", "year": 2018 }, { "authors": [ "Steven Hansen", "Will Dabney", "Andre Barreto", "Tom Van de Wiele", "David Warde-Farley", "Volodymyr Mnih" ], "title": "Fast task inference with variational intrinsic successor features", "venue": null, "year": 1906 }, { "authors": [ "Matteo Hessel", "Joseph Modayil", "Hado van Hasselt", "Tom Schaul", "Georg Ostrovski", "Will Dabney", "Daniel Horgan", "Bilal Piot", "Mohammad Gheshlaghi Azar", "David Silver" ], "title": "Rainbow: Combining improvements in deep reinforcement learning", "venue": "CoRR, abs/1710.02298,", "year": 2017 }, { "authors": [ "Matteo Hessel", "Hubert Soyer", "Lasse Espeholt", "Wojciech Czarnecki", "Simon Schmitt", "Hado van Hasselt" ], "title": "Multi-task deep reinforcement learning with popart", "venue": "CoRR, abs/1809.04474,", "year": 2018 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Dan Horgan", "John Quan", "David Budden", "Gabriel Barth-Maron", "Matteo Hessel", "Hado van Hasselt", "David Silver" ], "title": "Distributed prioritized experience replay", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Steven Kapturowski", "Georg Ostrovski", "John Quan", "Remi Munos", "Will Dabney" ], "title": "Recurrent experience replay in distributed reinforcement learning", "venue": null, "year": 2018 }, { "authors": [ "Diederik P. 
Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In Proceedings of the 3rd International Conference on Learning Representations (ICLR),", "year": 2014 }, { "authors": [ "Karol Kurach", "Anton Raichuk", "Piotr Stańczyk", "Michał Zając", "Olivier Bachem", "Lasse Espeholt", "Carlos Riquelme", "Damien Vincent", "Marcin Michalski", "Olivier Bousquet" ], "title": "Google research football: A novel reinforcement learning environment", "venue": "arXiv preprint arXiv:1907.11180,", "year": 2019 }, { "authors": [ "Heinrich Küttler", "Nantas Nardelli", "Thibaut Lavril", "Marco Selvatici", "Viswanath Sivakumar", "Tim Rocktäschel", "Edward Grefenstette" ], "title": "TorchBeast: A PyTorch Platform for Distributed RL", "venue": "arXiv preprint arXiv:1910.03552,", "year": 2019 }, { "authors": [ "Ang Li", "Huiyi Hu", "Piotr Mirowski", "Mehrdad Farajtabar" ], "title": "Cross-view policy learning for street navigation", "venue": "arXiv preprint arXiv:1906.05930,", "year": 2019 }, { "authors": [ "Eric Liang", "Richard Liaw", "Philipp Moritz", "Robert Nishihara", "Roy Fox", "Ken Goldberg", "Joseph E Gonzalez", "Michael I Jordan", "Ion Stoica" ], "title": "Rllib: Abstractions for distributed reinforcement learning", "venue": "arXiv preprint arXiv:1712.09381,", "year": 2017 }, { "authors": [ "Ashique Mahmood" ], "title": "Incremental Off-policy Reinforcement Learning Algorithms", "venue": "PhD thesis, University of Alberta,", "year": 2017 }, { "authors": [ "Sam McCandlish", "Jared Kaplan", "Dario Amodei", "OpenAI Dota Team" ], "title": "An empirical model of large-batch training", "venue": "CoRR, abs/1812.06162,", "year": 2018 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Volodymyr Mnih", "Adria Puigdomenech Badia", "Mehdi Mirza", "Alex Graves", "Timothy Lillicrap", "Tim Harley", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Arun Nair", "Praveen Srinivasan", "Sam Blackwell", "Cagdas Alcicek", "Rory Fearon", "Alessandro De Maria", "Vedavyas Panneershelvam", "Mustafa Suleyman", "Charles Beattie", "Stig Petersen", "Shane Legg", "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver" ], "title": "Massively parallel methods for deep reinforcement learning", "venue": "arXiv preprint arXiv:1507.04296,", "year": 2015 }, { "authors": [ "Shayegan Omidshafiei", "Daniel Hennes", "Dustin Morrill", "Rémi Munos", "Julien Pérolat", "Marc Lanctot", "Audrunas Gruslys", "Jean-Baptiste Lespiau", "Karl Tuyls" ], "title": "Neural replicator dynamics", "venue": "CoRR, abs/1906.00190,", "year": 2019 }, { "authors": [ "Tobias Pohlen", "Bilal Piot", "Todd Hester", "Mohammad Gheshlaghi Azar", "Dan Horgan", "David Budden", "Gabriel Barth-Maron", "Hado Van Hasselt", "John Quan", "Mel Večerík" ], "title": "Observe and look further: Achieving consistent performance on atari", "venue": "arXiv preprint arXiv:1805.11593,", "year": 2018 }, { "authors": [ "Rajat Raina", "Anand Madhavan", "Andrew Y Ng" ], "title": "Large-scale deep unsupervised learning using graphics processors", "venue": "In Proceedings of the 26th annual international conference on machine learning,", "year": 2009 }, { 
"authors": [ "Tim Salimans", "Jonathan Ho", "Xi Chen", "Szymon Sidor", "Ilya Sutskever" ], "title": "Evolution strategies as a scalable alternative to reinforcement learning", "venue": "arXiv preprint arXiv:1703.03864,", "year": 2017 }, { "authors": [ "Tom Schaul", "John Quan", "Ioannis Antonoglou", "David Silver" ], "title": "Prioritized experience replay", "venue": "In Proc. of ICLR,", "year": 2015 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms. CoRR, abs/1707.06347, 2017", "venue": null, "year": 2017 }, { "authors": [ "David Silver", "Aja Huang", "Chris J Maddison", "Arthur Guez", "Laurent Sifre", "George Van Den Driessche", "Julian Schrittwieser", "Ioannis Antonoglou", "Veda Panneershelvam", "Marc Lanctot" ], "title": "Mastering the game of go with deep neural networks and tree", "venue": "search. nature,", "year": 2016 }, { "authors": [ "Adam Stooke", "Pieter Abbeel" ], "title": "Accelerated methods for deep reinforcement learning", "venue": "CoRR, abs/1803.02811,", "year": 2018 }, { "authors": [ "Adam Stooke", "Pieter Abbeel" ], "title": "rlpyt: A research code base for deep reinforcement learning in pytorch", "venue": "arXiv preprint arXiv:1909.01500,", "year": 2019 }, { "authors": [ "Felipe Petroski Such", "Vashisht Madhavan", "Edoardo Conti", "Joel Lehman", "Kenneth O. Stanley", "Jeff Clune" ], "title": "Deep neuroevolution: Genetic algorithms are a competitive alternative for training deep neural networks for reinforcement learning", "venue": "CoRR, abs/1712.06567,", "year": 2017 }, { "authors": [ "Richard S Sutton" ], "title": "Learning to predict by the methods of temporal differences", "venue": "Machine learning,", "year": 1988 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Reinforcement learning: an introduction mit press", "venue": null, "year": 1998 }, { "authors": [ "Yuandong Tian", "Qucheng Gong", "Wenling Shang", "Yuxin Wu", "C. 
Lawrence Zitnick" ], "title": "Elf: An extensive, lightweight and flexible research platform for real-time strategy games", "venue": "Advances in Neural Information Processing Systems (NIPS),", "year": 2017 }, { "authors": [ "Dhruva Tirumala", "Hyeonwoo Noh", "Alexandre Galashov", "Leonard Hasenclever", "Arun Ahuja", "Greg Wayne", "Razvan Pascanu", "Yee Whye Teh", "Nicolas Heess" ], "title": "Exploiting hierarchy for learning and transfer in kl-regularized RL", "venue": "URL http://arxiv.org/ abs/1903.07438", "year": 1903 }, { "authors": [ "Aäron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "CoRR, abs/1807.03748,", "year": 2018 }, { "authors": [ "Hado van Hasselt" ], "title": "Double Q-learning", "venue": "In Advances in Neural Information Processing Systems", "year": 2010 }, { "authors": [ "Hado Van Hasselt", "Arthur Guez", "David Silver" ], "title": "Deep reinforcement learning with double q", "venue": null, "year": 2020 }, { "authors": [ "Alexander Sasha Vezhnevets", "Yuhuai Wu", "Remi Leblond", "Joel Leibo" ], "title": "Options as responses", "venue": null, "year": 1909 }, { "authors": [ "Ziyu Wang", "Tom Schaul", "Matteo Hessel", "Hado van Hasselt", "Marc Lanctot", "Nando de Freitas" ], "title": "Starcraft ii: A new challenge for reinforcement learning", "venue": "arXiv preprint arXiv:1708.04782,", "year": 2017 }, { "authors": [ "You", "Igor Gitman", "Boris Ginsburg" ], "title": "Large batch training of convolutional networks", "venue": "Conference on Machine Learning,", "year": 2016 }, { "authors": [ "2017a. Yang You", "Igor Gitman", "Boris Ginsburg" ], "title": "Scaling sgd batch size to 32k for imagenet training", "venue": null, "year": 2017 }, { "authors": [ "Adam (lr" ], "title": "Target network update interval 2500 updates Value function rescaling", "venue": "(Kingma & Ba,", "year": 2014 } ]
[ { "heading": null, "text": "Github: http://github.com/google-research/seed_rl." }, { "heading": "1 INTRODUCTION", "text": "The field of reinforcement learning (RL) has recently seen impressive results across a variety of tasks. This has in part been fueled by the introduction of deep learning in RL and the introduction of accelerators such as GPUs. In the very recent history, focus on massive scale has been key to solve a number of complicated games such as AlphaGo (Silver et al., 2016), Dota (OpenAI, 2018) and StarCraft 2 (Vinyals et al., 2017).\nThe sheer amount of environment data needed to solve tasks trivial to humans, makes distributed machine learning unavoidable for fast experiment turnaround time. RL is inherently comprised of heterogeneous tasks: running environments, model inference, model training, replay buffer, etc. and current state-of-the-art distributed algorithms do not efficiently use compute resources for the tasks. The amount of data and inefficient use of resources makes experiments unreasonably expensive. The two main challenges addressed in this paper are scaling of reinforcement learning and optimizing the use of modern accelerators, CPUs and other resources.\nWe introduce SEED (Scalable, Efficient, Deep-RL), a modern RL agent that scales well, is flexible and efficiently utilizes available resources. It is a distributed agent where model inference is done centrally combined with fast streaming RPCs to reduce the overhead of inference calls. We show that with simple methods, one can achieve state-of-the-art results faster on a number of tasks. For optimal performance, we use TPUs (cloud.google.com/tpu/) and TensorFlow 2 (Abadi et al., 2015) to simplify the implementation. The cost of running SEED is analyzed against IMPALA (Espeholt et al., 2018) which is a commonly used state-of-the-art distributed RL algorithm (Veeriah et al. (2019); Li et al. (2019); Deverett et al. (2019); Omidshafiei et al. (2019); Vezhnevets et al. (2019); Hansen et al. (2019); Schaarschmidt et al.; Tirumala et al. (2019), ...). We show cost reductions of up to 80% while being significantly faster. When scaling SEED to many accelerators, it can train on millions of frames per second. Finally, the implementation is open-sourced together with examples of running it at scale on Google Cloud (see Appendix A.4 for details) making it easy to reproduce results and try novel ideas.\n∗Equal contribution" }, { "heading": "2 RELATED WORK", "text": "For value-based methods, an early attempt for scaling DQN was Nair et al. (2015) that used asynchronous SGD (Dean et al., 2012) together with a distributed setup consisting of actors, replay buffers, parameter servers and learners. Since then, it has been shown that asynchronous SGD leads to poor sample complexity while not being significantly faster (Chen et al., 2016; Espeholt et al., 2018). Along with advances for Q-learning such as prioritized replay (Schaul et al., 2015), dueling networks (Wang et al., 2016), and double-Q learning (van Hasselt, 2010; Van Hasselt et al., 2016) the state-of-the-art distributed Q-learning was improved with Ape-X (Horgan et al., 2018). Recently, R2D2 (Kapturowski et al., 2018) achieved impressive results across all the Arcade Learning Environment (ALE) (Bellemare et al., 2013) games by incorporating value-function rescaling (Pohlen et al., 2018) and LSTMs (Hochreiter & Schmidhuber, 1997) on top of the advancements of Ape-X.\nThere have also been many approaches for scaling policy gradients methods. 
A3C (Mnih et al., 2016) introduced asynchronous single-machine training using asynchronous SGD and relied exclusively on CPUs. GPUs were later introduced in GA3C (Mahmood, 2017) with improved speed but poor convergence results due to an inherently on-policy method being used in an off-policy setting. This was corrected by V-trace (Espeholt et al., 2018) in the IMPALA agent both for singlemachine training and also scaled using a simple actor-learner architecture to more than a thousand machines. PPO (Schulman et al., 2017) serves a similar purpose to V-trace and was used in OpenAI Rapid (Petrov et al., 2018) with the actor-learner architecture extended with Redis (redis.io), an in-memory data store, and was scaled to 128,000 CPUs. For inexpensive environments like ALE, a single machine with multiple accelerators can achieve results quickly (Stooke & Abbeel, 2018). This approach was taken a step further by converting ALE to run on a GPU (Dalton et al., 2019).\nA third class of algorithms is evolutionary algorithms. With simplicity and massive scale, they have achieved impressive results on a number of tasks (Salimans et al., 2017; Such et al., 2017).\nBesides algorithms, there exist a number of useful libraries and frameworks for reinforcement learning. ELF (Tian et al., 2017) is a framework for efficiently interacting with environments, avoiding Python global-interpreter-lock contention. Dopamine (Castro et al., 2018) is a flexible research focused RL framework with a strong emphasis on reproducibility. It has state of the art agent implementations such as Rainbow (Hessel et al., 2017) but is single-threaded. TF-Agents (Guadarrama et al., 2018) and rlpyt (Stooke & Abbeel, 2019) both have a broader focus with implementations for several classes of algorithms but as of writing, they do not have distributed capability for largescale RL. RLLib (Liang et al., 2017) provides a number of composable distributed components and a communication abstraction with a number of algorithm implementations such as IMPALA and Ape-X. Concurrent with this work, TorchBeast (Küttler et al., 2019) was released which is an implementation of single-machine IMPALA with remote environments.\nSEED is closest related to IMPALA, but has a number of key differences that combine the benefits of single-machine training with a scalable architecture. Inference is moved to the learner but environments run remotely. This is combined with a fast communication layer to mitigate latency issues from the increased number of remote calls. The result is significantly faster training at reduced costs by as much as 80% for the scenarios we consider. Along with a policy gradients (V-trace) implementation we also provide an implementation of state of the art Q-learning (R2D2). In the work we use TPUs but in principle, any modern accelerator could be used in their place. TPUs are particularly well-suited given they high throughput for machine learning applications and the scalability. Up to 2048 cores are connected with a fast interconnect providing 100+ petaflops of compute." }, { "heading": "3 ARCHITECTURE", "text": "Before introducing the architecture of SEED, we first analyze the generic actor-learner architecture used by IMPALA, which is also used in various forms in Ape-X, OpenAI Rapid and others. An overview of the architecture is shown in Figure 1a.\nA large number of actors repeatedly read model parameters from the learner (or parameter servers). 
Each actor then uses the local model to sample actions and generate a full trajectory of observations, actions, and policy logits/Q-values. Finally, this trajectory, along with the recurrent state, is transferred to a shared queue or replay buffer. Asynchronously, the learner reads batches of trajectories from the queue/replay buffer and optimizes the model.

There are a number of reasons why this architecture falls short:

1. Using CPUs for neural network inference: The actor machines are usually CPU-based (occasionally GPU-based for expensive environments). CPUs are known to be computationally inefficient for neural networks (Raina et al., 2009). When the computational needs of a model increase, the time spent on inference starts to outweigh the environment step computation. The solution is to increase the number of actors, which increases the cost and affects convergence (Espeholt et al., 2018).

2. Inefficient resource utilization: Actors alternate between two tasks: environment steps and inference steps. The compute requirements of the two tasks are often dissimilar, which leads to poor utilization or slow actors. E.g. some environments are inherently single-threaded while neural networks are easily parallelizable.

3. Bandwidth requirements: Model parameters, recurrent state and observations are transferred between actors and learners. Relative to the model parameters, the observation trajectory often accounts for only a few percent of the bandwidth.1 Furthermore, memory-based models send large recurrent states, increasing bandwidth requirements further.

While single-machine approaches such as GA3C (Mahmood, 2017) and single-machine IMPALA avoid using CPUs for inference (1) and do not have network bandwidth requirements (3), they are restricted by resource usage (2) and the scale required for many types of environments.

The architecture used in SEED (Figure 1b) solves the problems mentioned above. Inference and trajectory accumulation are moved to the learner, which makes it conceptually a single-machine setup with remote environments (besides handling failures). Moving the logic effectively makes the actors a small loop around the environments. For every single environment step, the observations are sent to the learner, which runs the inference and sends actions back to the actors. This introduces a new problem: 4. Latency.

To minimize latency, we created a simple framework that uses gRPC (grpc.io), a high-performance RPC library. Specifically, we employ streaming RPCs where the connection from actor to learner is kept open and metadata is sent only once. Furthermore, the framework includes a batching module that efficiently batches multiple actor inference calls together. In cases where actors can fit on the same machine as learners, gRPC uses unix domain sockets and thus reduces latency, CPU and syscall overhead. Overall, the end-to-end latency, including network and inference, is faster for a number of the models we consider (see Appendix A.7).

1With 100,000 observations sent per second (96 x 72 x 3 bytes each), a trajectory length of 20 and a 30MB model, the total bandwidth requirement is 148 GB/s. Transferring observations uses only 2 GB/s.

The IMPALA and SEED architectures differ in that for SEED, at any point in time, only one copy of the model exists, whereas for distributed IMPALA each actor has its own copy. This changes the way the trajectories are off-policy. In IMPALA (Figure 2a), an actor uses the same policy $\pi_{\theta_t}$ for an entire trajectory.
For SEED (Figure 2b), the policy during an unroll of a trajectory may change multiple times, with later steps using more recent policies that are closer to the one used at optimization time.

A detailed view of the learner in the SEED architecture is shown in Figure 3. Three types of threads are running: 1. inference, 2. data prefetching, and 3. training. Inference threads receive a batch of observations, rewards and episode termination flags. They load the recurrent states and send the data to the inference TPU core. The sampled actions and new recurrent states are received, and the actions are sent back to the actors while the latest recurrent states are stored. When a trajectory is fully unrolled, it is added to a FIFO queue or replay buffer and later sampled by data prefetching threads. Finally, the trajectories are pushed to a device buffer for each of the TPU cores taking part in training. The training thread (the main Python thread) takes the prefetched trajectories, computes gradients using the training TPU cores and applies the gradients to the models of all TPU cores (inference and training) synchronously. The ratio of inference and training cores can be adjusted for maximum throughput and utilization. The architecture scales to a TPU pod (2048 cores) by round-robin assignment of actors to TPU host machines and separate inference threads for each TPU host. When actors wait for a response from the learner they are idle, so in order to fully utilize the machines, we run multiple environments on a single actor.

To summarize, we solve the issues listed previously by:

1. Moving inference to the learner and thus eliminating any neural network related computations from the actors. Increasing the model size in this architecture will not increase the need for more actors (in fact the opposite is true).

2. Batching inference on the learner and having multiple environments on the actor. This fully utilizes both the accelerators on the learner and the CPUs on the actors. The number of TPU cores for inference and training is finely tuned to match the inference and training workloads. All factors help reduce the cost of experiments.

3. Keeping everything involving the model on the learner, so that only observations and actions are sent between the actors and the learner. This reduces bandwidth requirements by as much as 99%.

4. Using streaming gRPC, which has minimal latency and minimal overhead, and integrating batching into the server module.

We provide the following two algorithms implemented in the SEED framework: V-trace and Q-learning." }, { "heading": "3.1 V-TRACE", "text": "One of the algorithms we adapt into the framework is V-trace (Espeholt et al., 2018). We do not include any of the additions that have been proposed on top of IMPALA, such as van den Oord et al. (2018); Gregor et al. (2019). These additions can also be applied to SEED and, since they are computationally more expensive, they would benefit from the SEED architecture." }, { "heading": "3.2 Q-LEARNING", "text": "We show the versatility of SEED's architecture by fully implementing R2D2 (Kapturowski et al., 2018), a state-of-the-art distributed value-based agent (a toy sketch of the centralized-inference loop that both algorithms share follows below).
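Before returning to R2D2's components, here is a toy single-threaded sketch of the centralized batched-inference loop both algorithms share; the queue, `policy` stub, batch size, and action space are illustrative assumptions, while the real implementation uses streaming gRPC and dedicated inference threads per TPU host.

```python
import queue
import numpy as np

BATCH_SIZE = 4
requests = queue.Queue()   # (actor_id, observation) pairs arriving from actors
responses = {}             # actor_id -> action, sent back over the open stream

def policy(obs_batch):
    # Stand-in for a forward pass on an inference accelerator core.
    return np.random.randint(0, 18, size=len(obs_batch))

def inference_step():
    # Block until a full batch is available (a real server would also
    # flush partial batches after a timeout to bound latency).
    batch = [requests.get() for _ in range(BATCH_SIZE)]
    actor_ids = [a for a, _ in batch]
    obs = np.stack([o for _, o in batch])
    actions = policy(obs)
    for actor_id, action in zip(actor_ids, actions):
        responses[actor_id] = int(action)   # dispatched back to the actor

# Example: four actors each submit one observation, then one batched step runs.
for actor_id in range(BATCH_SIZE):
    requests.put((actor_id, np.zeros((84, 84, 4), dtype=np.uint8)))
inference_step()
```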
R2D2 itself builds on a long list of improvements over DQN (Mnih et al., 2015): double Q-learning (van Hasselt, 2010; Van Hasselt et al., 2016), multi-step bootstrap targets (Sutton, 1988; Sutton & Barto, 1998; Mnih et al., 2016), the dueling network architecture (Wang et al., 2016), a prioritized distributed replay buffer (Schaul et al., 2015; Horgan et al., 2018), value-function rescaling (Pohlen et al., 2018), LSTMs (Hochreiter & Schmidhuber, 1997) and burn-in (Kapturowski et al., 2018).\nInstead of a distributed replay buffer, we show that it is possible to keep the replay buffer on the learner with a straightforward, flexible implementation. This reduces complexity by removing one type of job in the setup. It has the drawback of being limited by the memory of the learner, but this was not a problem in our experiments by a large margin: a replay buffer of 10^5 trajectories of length 120 of 84 x 84 uncompressed grayscale observations (following R2D2's hyperparameters) takes 85 GB of RAM, while Google Cloud machines can offer hundreds of GBs. However, nothing prevents the use of a distributed replay buffer together with SEED's central inference, in cases where a much larger replay buffer is needed." }, { "heading": "4 EXPERIMENTS", "text": "We evaluate SEED on a number of environments: DeepMind Lab (Beattie et al., 2016), Google Research Football (Kurach et al., 2019) and the Arcade Learning Environment (Bellemare et al., 2013)." }, { "heading": "4.1 DEEPMIND LAB AND V-TRACE", "text": "DeepMind Lab is a 3D environment based on the Quake 3 engine. It features mazes, laser tag and memory tasks. We evaluate on four commonly used tasks. The action set used is from Espeholt et al. (2018), although for some tasks higher return can be achieved with bigger action sets such as the one introduced in Hessel et al. (2018). For all experiments, we used an action repeat of 4, and the number of frames in plots is listed as environment frames (equivalent to 4 times the number of steps). The same 24 hyperparameter sets and the same model (the ResNet from IMPALA) were used for both agents. More details can be found in Appendix A.1.2." }, { "heading": "4.1.1 STABILITY", "text": "The first experiment evaluates the effect of the change in off-policy behavior described in Figure 2. Exactly the same hyperparameters are used for both IMPALA and SEED, including the number of environments used. As shown in Figure 4, the stability across hyperparameters of SEED is slightly better than that of IMPALA, while achieving slightly higher final returns." }, { "heading": "4.1.2 SPEED", "text": "For evaluating performance, we compare IMPALA using an Nvidia P100 with SEED using multiple accelerator setups. They are evaluated on the same set of hyperparameters. We find that SEED is 2.5x faster than IMPALA using 2 TPU v3 cores (see Table 1), while using only 77% more environments and 41% less CPU (see Section 4.4.1). Scaling from 2 to 8 cores results in an additional 4.4x speedup with sample complexity maintained (Figure 5). The speed-up is greater than 4x due to using 6 cores for training and 2 for inference instead of 1 core for each, resulting in better utilization. A 5.3x speed-up instead of 4.4x can be obtained by increasing the batch size linearly with the number of training cores, but, contrary to related research (You et al., 2017b; Goyal et al., 2017), we found that an increased batch size hurts sample complexity even with methods like warm-up and actor de-correlation (Stooke & Abbeel, 2018).
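Before continuing, two of the numbers above can be checked with quick arithmetic: the replay-buffer footprint, and the effective batch size discussed just below. The 64/120 pair in the second check is an illustrative example, not a value stated in the text.

# Replay buffer: 10^5 trajectories x 120 steps x 84x84 grayscale bytes.
print(10**5 * 120 * 84 * 84 / 1e9)   # ~84.7 GB, matching the ~85 GB figure

# Effective batch size = batch size x trajectory length, e.g. 64 trajectories
# of length 120 correspond to 7,680 transitions per optimization step.
print(64 * 120)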
We hypothesize that this is due to the limited actor and environment diversity in DeepMind Lab tasks. McCandlish et al. (2018) found that Pong scales poorly with batch size but that Dota can be trained effectively with a batch size five orders of magnitude larger. Note that, for most models, the effective batch size is batch size · trajectory length. In Figure 5, we include a run from a limited sweep on “explore_goal_locations_small” using 64 cores with an almost linear speed-up. Wall-time performance is improved but sample complexity is heavily penalized.\nWhen using an Nvidia P100, SEED is 1.58x slower than IMPALA. A slowdown is expected because SEED performs inference on the accelerator. SEED does, however, use significantly fewer CPUs and is lower cost (see Appendix A.6). The TPU version of SEED has been optimized, but it is likely that improvements can still be found for SEED with the P100." }, { "heading": "Architecture Accelerators Environments Actor CPUs Batch Size FPS Ratio DeepMind Lab", "text": "" }, { "heading": "4.2 GOOGLE RESEARCH FOOTBALL AND V-TRACE", "text": "Google Research Football is an environment similar to FIFA video games (ea.com). We evaluate SEED on the “Hard” task introduced in Kurach et al. (2019), where the baseline IMPALA agent achieved a positive average score after 500M frames using the engineered “checkpoint” reward function but a negative average score using only the score as a reward signal. In all experiments we use the model from Kurach et al. (2019) and sweep over 40 hyperparameter sets with 3 seeds each. See all hyperparameters in Appendix A.2.1." }, { "heading": "4.2.1 SPEED", "text": "Compared to the baseline IMPALA using 2 Nvidia P100s (and CPUs for inference) in Kurach et al. (2019), we find that using 2 TPU v3 cores in SEED improves the speed by 1.6x (see Table 1). Additionally, using 8 cores adds another 4.1x speed-up. A speed-up of 4.5x is achievable if the batch size is increased linearly with the number of training cores (Figure 5). However, we found that increasing the batch size, as with DeepMind Lab, hurts sample complexity." }, { "heading": "4.2.2 INCREASED MAP SIZE", "text": "More compute power allows us to increase the size of the Super Mini Map (SMM) input state. By default its size is 96 x 72 (x 4), and it represents players, opponents, ball and the active player as 2d bit maps. We evaluated three sizes: (1) Default 96 x 72, (2) Medium 120 x 90 and (3) Large 144 x 108. As shown in Table 1, switching from the Default to the Large SMM results in a 60% speed reduction. However, increasing the map size improves the final score (Table 2). This may suggest that the bit map representation is not granular enough for the task. For both reward functions, we improve upon the results of Kurach et al. (2019). Finally, training on 4B frames improves the results by a significant margin (an average score of 0.46 vs. 4.76 in the case of the scoring reward function).\n4.3 ARCADE LEARNING ENVIRONMENT AND Q-LEARNING\nWe evaluate our implementation of R2D2 in the SEED architecture on 57 Atari 2600 games from the ALE benchmark. This benchmark has been the testbed for most recent deep reinforcement learning agents because of the diversity of its visuals and game mechanics." }, { "heading": "Checkpoint reward", "text": "We follow the same evaluation procedure as R2D2. In particular, we use the full action set, no loss-of-life-as-episode-end heuristic, and start episodes with up to 30 random no-ops. We use 8 TPU v3 cores and 610 actors to maximize TPU utilization. 
This achieves 260K environment FPS and performs 9.5 network updates per second. Other hyperparameters are taken from R2D2 and are fully reproduced in Appendix A.3.1.\nFigure 6 shows the median human-normalized scores for SEED, R2D2, Ape-X and Rainbow. As expected, SEED has similar sample efficiency to R2D2, but it is 3.1x faster (see Table 1). This allows us to reach a median human-normalized score of 1880% in just 1.8 days of training instead of 5, establishing a new wall-time state of the art on Atari-57.\nWith the number of actors increased to 1200, the batch size increased to 256 and without frame-stacking, we can achieve 440K environment FPS and learn using 16 batches per second. However, this significantly degrades sample efficiency and limits the final median human-normalized score to approximately 1000%." }, { "heading": "4.4 COST COMPARISONS", "text": "With the growing complexity of environments, as well as the size of neural networks used in reinforcement learning, the need to run big experiments increases, making cost reductions important. In this section we analyze how increasing the complexity of the network impacts training cost for SEED and IMPALA. In our experiments we use the pricing model of Google AI Platform, ML Engine.2\nOur main focus is on obtaining the lowest possible cost per step while maintaining training speed. Distributed experiments from Espeholt et al. (2018) (IMPALA) used between 150 and 500 CPUs, which translates into $7.125 - $23.75 of actors' cost per hour. The cost of the single-GPU learner is $1.46 per hour. Due to the relatively high expense of the actors, our main focus is to reduce the number of actors and to obtain high CPU utilization." }, { "heading": "4.4.1 DEEPMIND LAB", "text": "Our DeepMind Lab experiment is based on the ResNet model from IMPALA. We evaluate increasing the number of filters in the convolutional layers: (1) Default 1x, (2) Medium 2x and (3) Large 4x. Experiments are performed on the “explore_goal_locations_small” task. IMPALA uses a single Nvidia Tesla P100 GPU for training, while inference is done on CPU by the actors. SEED uses one TPU v3 core for training and one for inference.\nFor IMPALA, actor CPU utilization is close to 100%, but in the case of SEED, only the environment runs on an actor, leaving the CPU idle while waiting for the inference step. To improve utilization, a single SEED actor runs multiple environments (between 12 and 16) on a 4-CPU machine." }, { "heading": "Model Actors CPUs Envs. Speed Cost/1B Cost ratio IMPALA", "text": "As Table 4 shows, SEED turns out to be not only faster, but also cheaper to run. The cost ratio between SEED and IMPALA is around 4. Due to the high cost of inference on a CPU, IMPALA's cost increases with the increasing complexity of the network. In the case of SEED, increased network size has a smaller impact on overall costs, as inference accounts for only about 30% of the costs (see Table A.5)." }, { "heading": "4.4.2 GOOGLE RESEARCH FOOTBALL", "text": "We evaluate the cost of running experiments with Google Research Football with different sizes of the Super Mini Map representation (the size has virtually no impact on the environment's speed). For IMPALA, two Nvidia P100 GPUs were used for training, and SEED used one TPU v3 core for training and one for inference.\nFor IMPALA, we use one core per actor, while SEED's actors run multiple environments per actor for better CPU utilization (8 cores, 12 environments).\nFor the default size of the SMM, per-experiment training cost differs by only 68%. 
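The actor cost figures quoted in the cost-comparison setup above follow from a single per-vCPU hourly rate; a quick reproduction is below. The $0.0475/vCPU-hour rate is inferred from the quoted totals, not stated explicitly in the text.

price_per_vcpu_hour = 7.125 / 150           # inferred: $0.0475 per vCPU-hour
for n_cpus in (150, 500):
    print(n_cpus, n_cpus * price_per_vcpu_hour)  # $7.125 and $23.75 per hour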
As the Google Research Football environment is more expensive than DeepMind Lab, training and inference costs\n2TPU cores are sold in multiples of 8; by running many experiments at once we use as many cores per experiment as needed. See cloud.google.com/ml-engine/docs/pricing." }, { "heading": "Model Actors CPUs Envs. Speed Cost/1B Cost ratio IMPALA", "text": "have a relatively smaller impact on the overall experiment cost. The difference increases as the size of the SMM increases, with SEED needing relatively fewer actors." }, { "heading": "4.4.3 ARCADE LEARNING ENVIRONMENT", "text": "Due to the lack of a baseline implementation for R2D2, we do not include cost comparisons for this environment. However, a comparison of relative costs between ALE, DeepMind Lab and Football suggests that the savings should be even more significant. ALE is the fastest among the three environments, making inference proportionally the most expensive. Appendix A.5 presents the training cost split between actors and the learner for different setups." }, { "heading": "5 CONCLUSION", "text": "We introduced and analyzed a new reinforcement learning agent architecture that is faster and less costly per environment frame than previous distributed architectures, by better utilizing modern accelerators. It achieved an 11x wall-time speedup on DeepMind Lab compared to a strong IMPALA baseline while keeping the same sample efficiency, improved on the state-of-the-art scores on Google Research Football, and achieved state-of-the-art scores on Atari-57 3.1x faster (wall-time) than previous research.\nThe agent is open-sourced and packaged to run easily on Google Cloud. We hope that this will accelerate reinforcement learning research by allowing the community to replicate state-of-the-art results and build on top of them.\nAs a demonstration of the potential of this new agent architecture, we were able to scale it to millions of frames per second in some realistic scenarios (an 80x speedup compared to previous research). However, this requires increasing the number of environments and using larger batch sizes, which hurts sample efficiency in the environments tested. Preserving sample efficiency with larger batch sizes has been studied to some extent in RL (Stooke & Abbeel, 2018; McCandlish et al., 2018) and in the context of supervised learning (You et al., 2017b;a; 2019; Goyal et al., 2017). We believe it is still an open and increasingly important area of research for scaling up reinforcement learning." }, { "heading": "ACKNOWLEDGMENTS", "text": "We would like to thank Steven Kapturowski, Georg Ostrovski, Tim Salimans, Aidan Clark, Manuel Kroiss, Matthieu Geist, Leonard Hussenot, Alexandre Passos, Marvin Ritter, Neil Zeghidour, Marc G. Bellemare and Sylvain Gelly for comments and insightful discussions, and Marcin Andrychowicz, Dan Abolafia, Damien Vincent, Dehao Chen, Eugene Brevdo and Ruoxin Sang for their code contributions." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 DEEPMIND LAB", "text": "" }, { "heading": "A.1.1 LEVEL CACHE", "text": "We enable DeepMind Lab's option for using a level cache for both SEED and IMPALA, which greatly reduces CPU usage and results in stable actor CPU usage at close to 100% for a single core."
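The Cost/1B columns in the cost tables of Section 4.4 follow directly from throughput and hourly price. A hypothetical example with made-up numbers (the $10/hour and 50,000 FPS values are illustrative, not from the tables):

def cost_per_billion_frames(hourly_cost_usd, fps):
    hours = 1e9 / fps / 3600
    return hourly_cost_usd * hours

print(cost_per_billion_frames(10.0, 50_000))  # ~$55.6 per billion frames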
}, { "heading": "Parameter Range", "text": "" }, { "heading": "A.1.2 HYPERPARAMETERS", "text": "" }, { "heading": "A.2 GOOGLE RESEARCH FOOTBALL", "text": "" }, { "heading": "A.2.1 HYPERPARAMETERS", "text": "" }, { "heading": "Parameter Range", "text": "" }, { "heading": "A.3 ALE", "text": "" }, { "heading": "A.3.1 HYPERPARAMETERS", "text": "We use the same hyperparameters as R2D2 (Kapturowski et al., 2018), except that we use more actors in order to best utilize 8 TPU v3 cores. For completeness, agent hyperparameters are in Table 8 and environment processing parameters in Table 9. We use the same neural network architecture as R2D2, namely 3 convolutional layers with filter sizes [32, 64, 64], kernel sizes [8 x 8, 4 x 4, 3 x 3] and strides [4, 2, 1], ReLU activations and \"valid\" padding. They feed into a linear layer with 512 units, feeding into an LSTM layer with 512 hidden units (which also takes the one-hot encoded previous action and the previous environment reward as input), feeding into dueling heads with 512 hidden units. We use Glorot uniform (Glorot & Bengio, 2010) initialization." }, { "heading": "Evaluation: ε = 10−3", "text": "" }, { "heading": "Game Human R2D2 SEED 8 TPU v3 cores", "text": "" }, { "heading": "A.3.2 FULL RESULTS ON ATARI-57", "text": "" }, { "heading": "A.4 SEED LOCALLY AND ON CLOUD", "text": "SEED is open-sourced together with an example of running it both on a local machine and at scale using AI Platform, part of Google Cloud. We provide a public Docker image with low-level components implemented in C++ already pre-compiled to minimize the time needed to start SEED experiments.\nThe main prerequisite to running on Cloud is setting up a Cloud Project. The provided startup script uploads the image and runs training for you. For more details please see github.com/google-research/seed_rl." }, { "heading": "A.5 EXPERIMENTS COST SPLIT", "text": "" }, { "heading": "A.6 COST COMPARISON ON DEEPMIND LAB USING NVIDIA P100 GPUS", "text": "In Section 4.4.1, we compared the cost of running SEED using two TPU v3 cores and IMPALA on a single Nvidia P100 GPU. In Table 12, we also compare the cost when both agents run on a single Nvidia P100 GPU on DeepMind Lab. Even though this is a sub-optimal setting for SEED, because it performs inference on the accelerator and therefore benefits disproportionately from more efficient accelerators such as TPUs, SEED compares favorably, being 1.76x cheaper than IMPALA per frame." }, { "heading": "Architecture Actors CPUs Envs. Speed Cost/1B Cost ratio", "text": "" }, { "heading": "A.7 INFERENCE LATENCY", "text": "" } ]
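The A.3.1 architecture described above can be written down directly. The sketch below is a PyTorch-style rendering under stated assumptions: 18 actions and single-frame grayscale 84x84 input are placeholders, since the exact input stacking and action-set size are not restated in this record.

import torch
import torch.nn as nn

class R2D2Net(nn.Module):
    # Torso/head layout from Appendix A.3.1: conv filters [32, 64, 64],
    # kernels [8, 4, 3], strides [4, 2, 1], 'valid' padding, ReLU,
    # linear(512) -> LSTM(512) -> dueling heads with 512 hidden units.
    def __init__(self, num_actions=18, in_channels=1):
        super().__init__()
        self.torso = nn.Sequential(
            nn.Conv2d(in_channels, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=1), nn.ReLU(), nn.Flatten(),
        )
        self.linear = nn.Linear(64 * 7 * 7, 512)  # 84x84 input -> 7x7 feature map
        # LSTM input: features + one-hot previous action + previous reward.
        self.lstm = nn.LSTM(512 + num_actions + 1, 512, batch_first=True)
        self.value = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 1))
        self.advantage = nn.Sequential(nn.Linear(512, 512), nn.ReLU(),
                                       nn.Linear(512, num_actions))

    def forward(self, frames, prev_action_onehot, prev_reward, state=None):
        # frames: (batch, time, channels, 84, 84); prev_reward: (batch, time, 1)
        b, t = frames.shape[:2]
        feats = torch.relu(self.linear(self.torso(frames.flatten(0, 1))))
        feats = feats.view(b, t, -1)
        lstm_in = torch.cat([feats, prev_action_onehot, prev_reward], dim=-1)
        out, state = self.lstm(lstm_in, state)
        adv = self.advantage(out)
        q = self.value(out) + adv - adv.mean(dim=-1, keepdim=True)  # dueling combine
        return q, state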
2020
ACCELERATED CENTRAL INFERENCE
SP:1e09b69a3d713355bd967d8205fde97a911042e7
[ "Developing stable GAN training methods has gained much attention in recent years. This paper proposes to tackle this issue by incorporating distributionally robust optimization into GAN training. Its main contribution is to combine Sinha et al. with GANs, proposing a new GAN training method on the basis of the vanilla GAN. Related theoretical results are proven and detailed experiments are conducted. ", "The present work proposes to combine GANs with adversarial training, replacing the original GAN loss with a mixture of the original GAN loss and an adversarial loss that applies an adversarial perturbation both to the input image of the discriminator and to the input noise of the generator. The resulting algorithm is called robust GAN (RGAN). Existing results of [Goodfellow et al 2014] (characterizing optimal generators and discriminators in terms of the density of the true data) are adapted to the new loss functions, and generalization bounds akin to [Arora et al 2017] are proved. Extensive experiments show a small but consistent improvement over a baseline method." ]
Generative adversarial networks (GANs) are powerful generative models, but usually suffer from instability which may lead to poor generations. Most existing works try to alleviate this problem by focusing on stabilizing the training of the discriminator, which unfortunately ignores the robustness of both generator and discriminator. In this work, we consider the robustness of GANs and propose a novel robust method called robust generative adversarial network (RGAN). Particularly, we design a robust optimization framework where the generator and discriminator compete with each other in a worst-case setting within a small Wasserstein ball. The generator tries to map the worst input distribution (rather than a specific input distribution, typically a Gaussian distribution used in most GANs) to the real data distribution, while the discriminator attempts to distinguish the real and fake distribution with the worst perturbation. We have provided theories showing that the generalization of the new robust framework can be guaranteed. A series of experiments on CIFAR-10, STL-10 and CelebA datasets indicate that our proposed robust framework can improve consistently on four baseline GAN models. We also provide ablation analysis and visualization showing the efficacy of our method on both generator and discriminator quantitatively and qualitatively.
[]
[ { "authors": [ "Jonas Adler", "Sebastian Lunz" ], "title": "Banach wasserstein gan", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Martin Arjovsky", "Soumith Chintala", "Léon Bottou" ], "title": "Wasserstein generative adversarial networks", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Sanjeev Arora", "Rong Ge", "Yingyu Liang", "Tengyu Ma", "Yi Zhang" ], "title": "Generalization and equilibrium in generative adversarial nets (gans)", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "David Berthelot", "Thomas Schumm", "Luke Metz" ], "title": "Began: Boundary equilibrium generative adversarial networks", "venue": "arXiv preprint arXiv:1703.10717,", "year": 2017 }, { "authors": [ "LI Chongxuan", "Taufik Xu", "Jun Zhu", "Bo Zhang" ], "title": "Triple generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Grigorios G Chrysos", "Jean Kossaifi", "Stefanos Zafeiriou" ], "title": "Robust conditional generative adversarial networks", "venue": "arXiv preprint arXiv:1805.08657,", "year": 2018 }, { "authors": [ "Zihang Dai", "Amjad Almahairi", "Philip Bachman", "Eduard Hovy", "Aaron Courville" ], "title": "Calibrating energy-based generative adversarial networks", "venue": "arXiv preprint arXiv:1702.01691,", "year": 2017 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron C Courville" ], "title": "Improved training of wasserstein gans", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Jonathan Ho", "Stefano Ermon" ], "title": "Generative adversarial imitation learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Takuhiro Kaneko", "Yoshitaka Ushiku", "Tatsuya Harada" ], "title": "Label-noise robust generative adversarial networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Jiwei Li", "Will Monroe", "Tianlin Shi", "Sébastien Jean", "Alan Ritter", "Dan Jurafsky" ], "title": "Adversarial learning for neural dialogue generation", "venue": "arXiv preprint arXiv:1701.06547,", "year": 2017 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": "arXiv preprint arXiv:1802.05957,", "year": 2018 }, { "authors": [ "Shakir Mohamed", "Balaji Lakshminarayanan" ], "title": "Learning in implicit generative models", "venue": "arXiv preprint arXiv:1610.03483,", "year": 2016 }, { "authors": [ "Sebastian Nowozin", "Botond Cseke", "Ryota Tomioka" ], "title": "f-gan: Training generative neural samplers using variational divergence minimization", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Guo-Jun Qi" ], "title": "Loss-sensitive generative adversarial networks on lipschitz densities", "venue": "arXiv preprint arXiv:1701.06264,", "year": 2017 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], 
"title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "arXiv preprint arXiv:1511.06434,", "year": 2015 }, { "authors": [ "Tim Salimans", "Ian Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training gans", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Aman Sinha", "Hongseok Namkoong", "John Duchi" ], "title": "Certifying some distributional robustness with principled adversarial training", "venue": "arXiv preprint arXiv:1710.10571,", "year": 2017 }, { "authors": [ "Kiran K Thekumparampil", "Ashish Khetan", "Zinan Lin", "Sewoong Oh" ], "title": "Robustness of conditional gans to noisy labels", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Masatoshi Uehara", "Issei Sato", "Masahiro Suzuki", "Kotaro Nakayama", "Yutaka Matsuo" ], "title": "Generative adversarial nets from a density ratio estimation perspective", "venue": "arXiv preprint arXiv:1610.02920,", "year": 2016 }, { "authors": [ "David Warde-Farley", "Yoshua Bengio" ], "title": "Improving generative adversarial networks with denoising feature matching", "venue": null, "year": 2016 }, { "authors": [ "Xiang Wei", "Boqing Gong", "Zixia Liu", "Wei Lu", "Liqiang Wang" ], "title": "Improving the improved training of wasserstein gans: A consistency term and its dual effect", "venue": "arXiv preprint arXiv:1803.01541,", "year": 2018 } ]
[ { "heading": "INTRODUCTION", "text": "Generative adversarial networks (GANs) (Goodfellow et al., 2014) have been enjoying much attention recently due to their great success on different tasks and datasets (Radford et al., 2015)(Salimans et al., 2016)(Ho & Ermon, 2016)(Li et al., 2017)(Chongxuan et al., 2017). The framework of GANs can be formulated as a game between a generator and a discriminator. The generator tries to produce a fake distribution which approximates the real data distribution, while the discriminator attempts to distinguish the fake distribution from the real distribution. These two players compete with each other iteratively. GANs are also popular for their theoretical value: training the discriminator is shown to be equivalent to training a good estimator of the density ratio between the fake distribution and the real one (Nowozin et al., 2016)(Uehara et al., 2016)(Mohamed & Lakshminarayanan, 2016).\nThe discriminator generally measures the departure between the model distribution and the real data distribution with a certain divergence measure, e.g., the Jensen-Shannon divergence or an f-divergence (Nowozin et al., 2016). Arjovsky et al. proved that the supports of the fake and real distributions are typically disjoint on low-dimensional manifolds, so there is a nearly trivial discriminator which can correctly classify the real and fake data (Arjovsky et al., 2017). The loss of such a discriminator converges quickly to zero, which causes a vanishing gradient for the generator. To alleviate this problem, Arjovsky et al. proposed the Wasserstein GAN based on the Wasserstein metric, which requires no joint support. Since it is inconvenient to minimize the Wasserstein distance directly, they solve the dual problem, clipping the weights to ensure the Lipschitz condition for the discriminator. Later, Gulrajani et al. proposed the gradient penalty to guarantee the Lipschitz condition (Gulrajani et al., 2017). Spectral normalization has also been proposed to stabilize the training of the discriminator (Miyato et al., 2018).\nMost existing methods try to improve the stability of GANs by controlling the discriminator. However, the robustness of GANs has not been adequately considered. When the discriminator is not robust to noise (i.e., the discriminator cannot measure the distance between the fake and real distributions accurately), some examples might be misclassified, which consequently misleads the training of the generator. Meanwhile, poor generalization performance of the generator might cause \"blurry\" generated images for some potential input noise. Robust Conditional Generative Adversarial Networks were proposed to improve the robustness of conditional GANs to noised data. However, this method can only be implemented on conditional GANs and improves only the ability of the generator to defend against noise (Chrysos et al., 2018). Some other researchers focus on the robustness of GANs to label noise (Thekumparampil et al., 2018)(Kaneko et al., 2019).\nIn this paper, we attempt to improve the robustness of GANs in a systematic way by promoting the robustness of both the discriminator and the generator. We propose a novel robust method called robust generative adversarial network (RGAN), where the generator and discriminator still compete with each other iteratively, but in a worst-case setting. Specifically, a robust optimization is designed by considering the worst distribution within a small Wasserstein ball. 
The generator tries to map the worst input distribution (rather than a specific distribution) to the real data distribution, while the discriminator attempts to distinguish the real and fake distributions under the worst perturbation. We provide some theoretical analysis for the proposed Robust GAN, including generalization. We also implement our robust framework on different baseline GANs (i.e., DCGAN, WGAN-GP, and BWGAN) (Radford et al., 2015)(Adler & Lunz, 2018), observing substantial improvements consistently on all the datasets used in this paper." }, { "heading": "GENERATIVE ADVERSARIAL NETWORK", "text": "The principle of a GAN is a game between two players, generator and discriminator, both of which are usually formulated as deep neural networks. The generator tries to generate fake examples to fool the discriminator, while the discriminator attempts to distinguish between fake and real images. Formally, the training procedure of a GAN can be formulated as:\n\\min_G \\max_D S(G,D) \\triangleq \\mathbb{E}_{x\\sim P_r}[\\log D(x)] + \\mathbb{E}_{\\tilde{x}\\sim P_g}[\\log(1 - D(\\tilde{x}))] \\quad (1)\nwhere x and \\tilde{x} = G(z) are real and fake examples sampled from the real data distribution P_r and the generation distribution P_g respectively. The generation distribution is defined by G(z) where z \\sim P_z (P_z is a specific input noise distribution). The min-max problem cannot be solved directly since the expectations over the real and generation distributions are usually intractable. Therefore, the approximation problem is defined as:\n\\min_G \\max_D S_m(G,D) \\triangleq \\frac{1}{m}\\sum_{i=1}^{m}[\\log D(x_i)] + \\frac{1}{m}\\sum_{i=1}^{m}[\\log(1 - D(G(z_i)))] \\quad (2)\nwhere m examples x_i and z_i are sampled from the distributions P_r and P_z, and the mean of the loss is used to approximate the original problem. However, such an approach might not ensure good robustness of the discriminator and generator: some noised images might not be classified correctly, and some input noise points will cause degraded generations. In this paper, to alleviate this problem, we design a distributionally robust optimization. In particular, we consider the worst distribution (rather than a specific single distribution) within a small range." }, { "heading": "ROBUST GENERATIVE ADVERSARIAL NETWORK", "text": "As discussed in the previous sections, although most existing GAN methods can stabilize the training of the discriminator, robustness might not be adequately considered. In other words, the discriminator might not perform well on some noised data, which consequently misleads the training of the generator. Similarly, the generator might produce poor generations for certain input noise points if its robustness is not good. To alleviate this problem, we design a distributionally robust optimization for GANs." }, { "heading": "DISTRIBUTIONALLY ROBUST OPTIMIZATION", "text": "Let d : \\mathcal{X} \\times \\mathcal{X} \\to \\mathbb{R}_+ \\cup \\{\\infty\\}. The departure between x and x_0 can then be represented by d(x, x_0). For distributionally robust optimization, the robustness region \\mathcal{P} = \\{P : D(P, P_0) \\leq \\rho\\} is considered (a \\rho-neighborhood of the distribution P_0 under the divergence D(\\cdot,\\cdot)) instead of a single distribution.1 The distributionally robust optimization can be formulated as (Sinha et al., 2017):\n\\min_\\theta \\sup_{P\\in\\mathcal{P}} \\mathbb{E}_P[l(X; \\theta)] \\quad (3)\nwhere l(\\cdot) is a loss function parameterized by \\theta. Problem (3) is typically intractable for arbitrary \\rho.\nIn order to solve this problem, we first present a proposition:\nProposition 0.1 Let l: \\Theta \\times \\mathcal{X} \\to \\mathbb{R} and d: \\mathcal{X} \\times \\mathcal{X} \\to \\mathbb{R}_+ be continuous. 
Then, for any distribution P_0 and \\rho > 0 we have\n\\sup_{P\\in\\mathcal{P}} \\{\\mathbb{E}_P[l(X;\\theta)] - \\gamma W(P, P_0)\\} = \\mathbb{E}_{P_0}[\\sup_{x\\in\\mathcal{X}} \\{l(x;\\theta) - \\gamma d(x, x_0)\\}] \\quad (4)\n(The proof is provided in (Sinha et al., 2017).)\nWith Proposition 0.1, we can reformulate (3) with the Lagrangian relaxation as follows:\n\\min_\\theta \\mathbb{E}_{P_0} \\sup_{x\\in\\mathcal{X}} [l(x;\\theta) - \\lambda d(x, x_0)] \\quad (5)\nwhere the second term d(x, x_0) restricts the distance between the two points." }, { "heading": "ROBUST TRAINING OVER GENERATOR", "text": "With the distributionally robust optimization, we first discuss how we can perform robust training over the generator. The generator of a GAN tries to map a noise distribution P_z to the image distribution P_r. The objective of the generator is described as follows:\n\\min_G \\frac{1}{m}\\sum_{i=1}^{m}[\\log(1 - D(G(z_i)))], \\quad \\text{where } z_i \\sim P_z \\quad (6)\nTypically, P_z is a Gaussian distribution. To improve robustness, we consider all the possible distributions within the robust region \\mathcal{P}_z = \\{P : W(P, P_z) \\leq \\rho_z\\} rather than a single specific distribution (typically a Gaussian in most existing GANs). Here we use the Wasserstein metric to measure the distance between P and P_z, where P is in the \\rho_z-neighborhood of the original distribution P_z. However, it is difficult to consider all the distributions in this small region; the alternative is to consider their upper bound (the worst distribution). The robust optimization problem for G is then described as follows:\n\\min_G \\sup_{P\\in\\mathcal{P}_z} \\frac{1}{m}\\sum_{i=1}^{m}[\\log(1 - D(G(z_i)))], \\quad \\text{where } z_i \\sim P \\quad (7)\nAccording to Proposition 0.1, we can relax (7) as:\n\\min_G \\max_{r} \\frac{1}{m}\\sum_{i=1}^{m}[\\log(1 - D(G(z_i + r_i))) - \\lambda_z \\|r_i\\|_2^2], \\quad \\text{where } z_i \\sim P_z \\quad (8)\nDifferent from previous methods, our method attempts to map the worst distribution (in the \\rho_z-neighborhood of the original distribution P_z) to the image distribution. Intuitively, we sample the noise points which are most likely (i.e., the worst) to generate blurry images and optimize the generator based on these risky points. Such a generator would therefore be robust against poor input noises and might be less likely to generate low-quality images." }, { "heading": "ROBUST TRAINING OVER DISCRIMINATOR", "text": "In traditional GANs, described by (2), the generator attempts to generate a fake distribution to approximate the real data distribution, while the discriminator tries to learn the decision boundary that separates the real and fake distributions. Apparently, a discriminator with poor robustness would inevitably mislead the training of the generator. In this section, we utilize the popular adversarial learning method and propose a robust optimization method to improve the discriminator's robustness for both clean and noised data.\n1Normally, the Wasserstein metric W(\\cdot,\\cdot) is used, with corresponding d(x, x_0) = \\|x - x_0\\|_p^2 where p > 0.\nSpecifically, we define robust regions for both the fake distribution, \\mathcal{P}_g = \\{P : W(P, P_g) \\leq \\rho_g\\}, and the real distribution, \\mathcal{P}_r = \\{P : W(P, P_r) \\leq \\rho_r\\}. The generator tries to reduce the distance between the fake distribution P_g and the real distribution P_r. The discriminator attempts to separate the worst distributions in \\mathcal{P}_g and \\mathcal{P}_r. Intuitively, the worst distributions are closer to the decision boundary (less discriminative), and they are able to guide the training of the discriminator to perform well on \"confusing\" data points near the classification boundary (such a discriminator can be more robust than the original one). We can reformulate (2) in the robust version:\n\\max_D \\sup_{P_1\\in\\mathcal{P}_r} \\frac{1}{m}\\sum_{i=1}^{m}[\\log D(x'_i)] + \\sup_{P_2\\in\\mathcal{P}_g} \\frac{1}{m}\\sum_{i=1}^{m}[\\log(1 - D(G'(z_i)))] \\quad (9)\nwhere z_i \\sim P_z, x'_i \\sim P_1 and G' \\sim P_2. 
Using Proposition 0.1, we can relax the alternate problem as:\n\\max_D \\min_{r_1, r_2} \\frac{1}{m}\\sum_{i=1}^{m}[\\log D(x_i + r_1^i)] + \\frac{1}{m}\\sum_{i=1}^{m}[\\log(1 - D(G(z_i) + r_2^i))] + \\frac{\\lambda_d}{m}\\sum_{i=1}^{m}[\\|r_1^i\\|_2^2 + \\|r_2^i\\|_2^2], \\quad \\text{with } z_i \\sim P_z,\\ x_i \\sim P_r \\quad (10)\nHere r_1 = \\{r_1^i\\}_{i=1}^m is the set of small perturbations for the points sampled from the real distribution P_r, which tries to make the real distribution closer to the fake distribution; r_2 = \\{r_2^i\\}_{i=1}^m tries to make the fake distribution closer to the real one. Intuitively, these perturbations try to increase the difficulty of the classification task for the discriminator by making real and fake data less distinguishable, which helps promote the robustness of the discriminator." }, { "heading": "OVERALL OPTIMIZATION", "text": "We now integrate the robust training of the generator and discriminator into a single framework:\n\\min_G \\max_D V(G,D) \\triangleq (1-\\lambda)S(G,D) + \\lambda \\sup_{P:W(P,P_r)\\leq\\rho_r} \\mathbb{E}_{x\\sim P}[\\log D(x)] + \\lambda \\sup_{P:W(P,P_g)\\leq\\rho_g} \\mathbb{E}_{G'\\sim P}[\\log(1 - D(G'(z)))] \\quad (11)\nwhere G'(z_i) = G(z_i) + r_2^i and z_i \\sim p_z^\\lambda. p_z^\\lambda is the mixture distribution defined by p_z^\\lambda = (1-\\lambda)p_z + \\lambda p'_z, and p'_z is the worst distribution defined by p'_z = \\arg\\max_{P:W(P,P_z)\\leq\\rho_z} \\mathbb{E}_{z\\sim P}[\\log(1 - D(G(z)))]. r_2^i is an arbitrary perturbation. Note that we also combine the original GAN objective into the framework, allowing more flexible training. The specific algorithm is given below:\nAlgorithm 1 Algorithm for RGAN.\n1: for number of training iterations do\n2: Sample a batch of input noise z_i \\sim P_z of size m and a batch of real data x_i \\sim P_r of size m. \\lambda is the trade-off parameter between the original objective and our objective; \\epsilon_1 and \\epsilon_2 are the perturbation amplitudes for inputs and images respectively.\n3: Find the worst perturbations \\{r_{zadv}, r_{dadv1}, r_{dadv2}\\} by maximizing the objective of the generator and minimizing the objective of the discriminator:\n4: r_{zadv}^i = \\arg\\min_{r^i:\\|r^i\\|_2=1}[\\log(1 - D(G(z_i + r^i))) + \\lambda_z\\|r^i\\|_2^2]\n5: r_{dadv1}^i = \\arg\\min_{r^i:\\|r^i\\|_2=1}[\\log D(x_i + r^i) + \\lambda_d\\|r^i\\|_2^2]\n6: r_{dadv2}^i = \\arg\\min_{r^i:\\|r^i\\|_2=1}[\\log(1 - D(G(z_i) + r^i)) + \\lambda_d\\|r^i\\|_2^2]\n7: Update G by descending along its stochastic gradient:\n8: \\nabla_{\\theta_g}[\\frac{1}{m}\\sum_{i=1}^m \\log(1 - D(G(z_i))) + \\frac{\\lambda}{m}\\sum_{i=1}^m \\log(1 - D(G(z_i + \\epsilon_1 r_{zadv}^i)))]\n9: Update D by ascending along its stochastic gradient:\n10: \\nabla_{\\theta_d}[S_m(G,D) + \\frac{\\lambda}{m}\\sum_{i=1}^m \\log D(x_i + \\epsilon_2 r_{dadv1}^i) + \\frac{\\lambda}{m}\\sum_{i=1}^m \\log(1 - D(G(z_i) + \\epsilon_2 r_{dadv2}^i))]\n11: end for" }, { "heading": "THEORETICAL ANALYSIS", "text": "In this section, we provide theoretical analysis for RGAN but leave the proof details to the appendix. We first show that the optimal discriminator of RGAN balances the mixture of real distributions and the mixture of fake distributions, as stated in Lemma 0.2.\nLemma 0.2 For an arbitrary fixed G, the optimal D of the game defined by the utility function V(G,D) is:\nD^*_G(x) = \\frac{p_r^\\lambda(x)}{p_r^\\lambda(x) + p_g^\\lambda(x)} \\quad (12)\nwhere p_r^\\lambda(x) = (1-\\lambda)p_r + \\lambda p'_r is the mixture distribution for real data with \\lambda \\in [0,1]; p'_r is the worst distribution defined by p'_r = \\arg\\min_{P:W(P,P_r)\\leq\\rho_r} \\mathbb{E}_{x\\sim P}[\\log D(x)]; p_g^\\lambda(x) = (1-\\lambda)p_g + \\lambda p'_g is the mixture distribution for fake data; and the worst distribution p'_g is defined by p'_g = \\arg\\min_{P:W(P,P_g)\\leq\\rho_g} \\mathbb{E}_{G'\\sim P}[\\log(1 - D(G'(z)))].\nWe further characterize the optimum of the utility function V(G,D) in Lemma 0.3.\nLemma 0.3 When the optimal discriminator D^* is achieved, the utility function reaches its global minimum if and only if p_g^\\lambda(x) = p_r^\\lambda(x).\nThe min-max problem of (11) is computationally intractable due to the expectations over the real and fake distributions. 
An alternate way is to approximate the original problem with the empirical average over finite examples:\n\\min_G \\max_D V_m(G,D) \\triangleq (1-\\lambda)S_m(G,D) + \\frac{\\lambda}{m}\\sum_{i=1}^{m}[\\log D(x'_i)] + \\frac{\\lambda}{m}\\sum_{i=1}^{m}[\\log(1 - D(G'(z_i)))] \\quad (13)\nwhere x'_i \\sim p'_r, G' \\sim p'_g and z_i \\sim p_z^\\lambda. p_z^\\lambda is the mixture distribution defined by p_z^\\lambda = (1-\\lambda)p_z + \\lambda p'_z, and p'_z is the worst distribution defined by p'_z = \\arg\\max_{P:W(P,P_z)\\leq\\rho_z} \\mathbb{E}_{z\\sim P}[\\log(1 - D(G(z)))].\nWe now provide the analysis of generalization ability as Lemma 0.4. First, we give some assumptions:\nAssumption 1 We make the following assumptions for RGAN:\n1. The discriminator \\log D_\\theta(x) is k_\\theta-Lipschitz in its parameter \\theta, i.e., |\\log D_\\theta(x) - \\log D_{\\theta'}(x)| \\leq k_\\theta\\|\\theta - \\theta'\\|.\n2. The discriminator \\log D_\\theta(x) is k_x-Lipschitz in x, i.e., |\\log D_\\theta(x) - \\log D_\\theta(x')| \\leq k_x\\|x - x'\\|." }, { "heading": "3. The distance between two arbitrary samples is bounded, i.e., \\|x - x'\\| \\leq \\Delta_B.", "text": "The generalization ability of the discriminator is defined as in (Qi, 2017)(Arora et al., 2017): it describes whether and how fast the difference |V_m^\\theta - V^\\theta| converges, where V^\\theta = \\max_D V(G^*, D) and V_m^\\theta = \\max_D V_m(G^*, D).\nLemma 0.4 Under Assumption 1, with probability at least 1 - \\eta, we have:\n|V_m^\\theta - V^\\theta| \\leq \\epsilon \\quad (14)\nwhen the number of samples\nm \\geq \\frac{C\\Delta_B^2 k_x^2}{\\epsilon^2}\\left(N\\log\\frac{k_\\theta N}{\\epsilon} + \\log\\frac{1}{\\eta}\\right) \\quad (15)\nwhere C is a sufficiently large constant, and N is the number of parameters of the discriminator function.\nSimilarly, the generalizability of the generator can be defined via the convergence of the difference |Q_m^\\phi - Q^\\phi|, where Q^\\phi = \\min_G V(G, D^*) and Q_m^\\phi = \\min_G V_m(G, D^*). We first give the assumptions:\nAssumption 2 We make the following assumptions for RGAN:" }, { "heading": "1. The generator G_\\phi(z) is k_\\phi-Lipschitz in its parameter \\phi, i.e., |G_\\phi(z) - G_{\\phi'}(z)| \\leq k_\\phi\\|\\phi - \\phi'\\|.", "text": "2. The generator G_\\phi(z) is k_z-Lipschitz in z, i.e., |G_\\phi(z) - G_\\phi(z')| \\leq k_z\\|z - z'\\|." }, { "heading": "3. The distance between two arbitrary samples is bounded, i.e., \\|z - z'\\| \\leq \\Delta_{B_z}.", "text": "Lemma 0.5 Under Assumption 2, with probability at least 1 - \\eta, we have:\n|Q_m^\\phi - Q^\\phi| \\leq \\epsilon \\quad (16)\nwhen the number of samples\nm \\geq \\frac{C_g\\Delta_{B_z}^2 k_x^2 k_z^2}{\\epsilon^2}\\left(N_g\\log\\frac{k_\\theta k_\\phi N_g}{\\epsilon} + \\log\\frac{1}{\\eta}\\right) \\quad (17)\nwhere C_g is a sufficiently large constant, and N_g is the number of parameters of the generator function." }, { "heading": "EXPERIMENTS", "text": "We present a series of experiments in this section. First, we show that our proposed RGAN can improve the performance of different kinds of baseline models, including WGAN-GP, DCGAN, WGAN-GP (resnet), and BWGAN. The Inception score and FID are used to evaluate the quality of generations. Following much previous relevant work, we mainly conduct the quantitative comparison between our proposed method and various baseline models on CIFAR-10 and STL-10, while visualizing the different models qualitatively on both CIFAR-10 and CelebA. In addition, we plot bar charts for the different baseline models and RGANs on two datasets (CIFAR-10 and STL-10). We also perform an ablation analysis to closely examine our proposed framework. Furthermore, we provide visualizations showing that the performance of the baseline models may degrade given some specific input noises (sampled from the worst distribution); in comparison, our proposed method is more robust and can still perform fairly well. In the third part, we provide visualizations of the T-SNE embeddings of the original and worst distributions. Moreover, we show some images generated by the baseline models and our proposed model." 
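Algorithm 1's inner steps (finding r_zadv, r_dadv1, r_dadv2 under a unit-norm constraint) can be approximated with a single normalized gradient step, FGSM-style. The sketch below is one plausible reading of step 4, assuming G and D are PyTorch modules with D returning probabilities in (0, 1); it is not the authors' released code.

import torch

def worst_noise_direction(G, D, z):
    # One normalized gradient of the generator objective log(1 - D(G(z + r)))
    # with respect to r; adding eps times this direction increases that
    # objective to first order, i.e. it moves z toward a "worst" input.
    r = torch.zeros_like(z, requires_grad=True)
    loss = torch.log1p(-D(G(z + r))).mean()
    loss.backward()
    grad = r.grad.detach()
    norms = grad.flatten(1).norm(dim=1).view(-1, *([1] * (z.dim() - 1)))
    return grad / (norms + 1e-12)

# Usage (eps1 plays the role of epsilon_1 in Algorithm 1):
# z_adv = z + eps1 * worst_noise_direction(G, D, z)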
}, { "heading": "QUANTITATIVE COMPARISON", "text": "To evaluate the performance of our proposed method, we follow the previous works (Gulrajani et al., 2017; Adler & Lunz, 2018) on robustness and mainly conduct experiments on the CIFAR-10 and STL-10 dataset. There are 4 baseline models including WGAN-GP, WGAN-GP (resnet), DCGAN, and BWGAN. We implemented our proposed robust strategy on these baselines and would like to check if the robust training could indeed improve the performance. The structures and settings of our method are the same as baseline models. We train WGAN-GP, DCGAN, BWGAN and our proposed RGANs with 50, 000 training samples for 200, 000 epochs. For WGAN-GP (resnet) and our corresponding model, we found that 100, 000 epochs appear sufficient. For each 500 epochs, we calculate the inception score for 50, 000 generated images. For training RGANs, there are three hyper-parameters λ (trade off our objective and original one), 1 and 2. We set λ = 0.1 which is searched from {0.001, 0.01, 0.1, 0.5, 1, 2}. We also set 1 = 0.01 and 2 = 4 which was searched from {0.001, 0.01, 0.1, 0.2, 0.5, 1, 2, 4, 5, 10}. For STL-10, we train our models and corresponding baselines with 80w training samples with size 48 × 48. The training settings are totally the same with settings for CIFAR-10. Note that we do not need to adjust hyper-parameters for achieving better performance on the second dataset.\nWe list the performance for different models in Table 1. Clearly, our proposed RGAN (which is based on WGAN-GP-res) achieves the best result among all the methods in terms of both the criteria, i.e., Inception Score and FID. In order to check if the proposed robust strategy can indeed improve over different baselines, we also detail the performance in Figure 1 where we plot the bar charts for\ndifferent baseline models and their robust version with RGAN on two datasets (CIFAR-10 and STL10).2 It is noted that the robust strategy can consistently improve the baselines on the two datasets in terms of both the criteria. In addition, we also show the convergence curves in Figure 2. Clearly, when our robust strategy is applied on the baseline GANs, an obvious increase of the inception scores can be observed (though the convergence speed is similar to that of baseline models). All these experiments indicate that the robust training is indeed necessary and useful." }, { "heading": "ABLATION ANALYSIS", "text": "We conduct the ablation analysis in this subsection. Specifically, we experiment on CIFAR-10 with robust training over generator only, robust training over discriminator only, and robust training over both the generator and discriminator, trying to see if a robust training is necessary on both generator and discriminator. The results are listed in Table 2. As observed, robust training on either generator\n2BWGAN appears not to converge in STL-10 in our experiments. For fair comparison, we did not report the performance when BWGAN is used as the baseline in STL-10.\nor discriminator can consistently improve the performance of all baseline models, while a joint robust training on both generator and discriminator can further boost the performance. It is interesting to note that robust training on discriminator only could lead to more performance gain than on generator only, implying that a robust discriminator may be more important. 
This would be investigated as future work.\nIn addition, taking again CIFAR-10 as one illustrative dataset, we also show that our proposed robust method can perform robust on some potential input noise which might lead to poor generations (input noise sampled from the worst input distribution). Specifically, we generate 50, 000 images with RGAN and various baseline models from the original distribution and worst distribution for five times. Then, we compute the inception score and their corresponding standard deviation. The\nresults are showed in Figure 3. As observed, without the robust training, those baseline models perform consistently worst in the case of the worst noises input than that of the original input noises. This shows that the traditional GANs may not be robust and may lead to worse performance in case of certain poor input noise. In comparison, when the robust training is implemented, RGAN leads to similar performance even if the worst input noise is given." }, { "heading": "VISUALIZATION", "text": "We present visualization results to compare various methods qualitatively." }, { "heading": "VISUALIZATION ON CIFAR-10", "text": "In this subsection, we present a series of visualization trying to understand visually why the robust GAN could lead to better performance than the traditional GANs. To this end, we sample 500 data points from the original input distribution and worst input distribution respectively. We then plot the 2-dimensional T-SNE embedding of these points. We also would like to plot the real data distribution and the generated data from the traditional GAN as well as our robust GAN. For clarity, we take WGAN-GP as one example but we should bear in mind that the conclusion is basically the same for other traditional GANs like DCGAN. These plots are made in Figure 4 where one can inspect the meaning of each subfigure in the caption. We highlight some remarks as follows. First, Figure 4(a) indicates that the worst distribution covers wider range of areas, especially low density areas of the original distribution; this might cause poor generations since the worst input noise distribution is significantly different from the original input noise. Second, (b) shows that the worst real distribution (red) actually looks much similar to the worst generation distribution. It may be more robust and meaningful to minimize in the worst-case setting the departure of the real data distribution and the fake data distribution, which is conducted in our RGAN. Third, (c) shows that the real data distribution varies largely from the generated data points obtained by traditional GANs, indicating the poor generalization of the traditional GAN; in comparison, with a robust optimization in the worst-case setting, (d) demonstrates that the generated data look very close to the real data." }, { "heading": "VISUALIZATION ON CELEBA", "text": "To clearly examine the visual quality, we demonstrate some images generated by WGAN-GP, DCGAN and their corresponding RGANs on the CelebA dataset. These generated images are shown in Figure 5. As we can observe from these examples, the existing GANs may sometimes lead to very bad generations as circled in (a) and (c). In comparison, with the robust training under the worstcase distribution, such very bad examples can hardly be seen in RGAN. This clearly demonstrates the advantages of the proposed model." 
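The worst-input evaluation described above can be reproduced schematically: perturb each sampled z toward higher generator loss before generating, then score both image sets. A sketch under the same assumptions as the earlier perturbation snippet; inception_score, the latent dimension, and the batch size are stand-ins, not values from the paper.

import torch

def sample_worst_inputs(G, D, num, dim, eps=0.01, batch=100):
    images = []
    for _ in range(num // batch):
        z = torch.randn(batch, dim)
        z_worst = z + eps * worst_noise_direction(G, D, z)  # from the earlier sketch
        with torch.no_grad():
            images.append(G(z_worst))
    return torch.cat(images)

# score_original = inception_score(G(torch.randn(50_000, 128)))
# score_worst = inception_score(sample_worst_inputs(G, D, 50_000, dim=128))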
}, { "heading": "CONCLUSION", "text": "In this paper, we consider the generalization issue of GANs and propose a robust model called robust generative adversarial network (RGAN). We have designed a robust optimization framework where the generator and discriminator compete with each other in a worst-case setting within a small Wasserstein ball. The generator tries to map the worst input distribution (rather than a specific input distribution) to real data distribution, while the discriminator attempts to distinguish the real and fake distribution with the worst perturbation. We have provided theories showing that the generalization of the new robust framework can be guaranteed. We also have conducted extensive experiments on CIFAR-10, STL-10 and CelebA datasets with two criteria (Inception score and FID) indicating that our proposed robust framework can improve consistently on several baseline GAN models. Ablation analysis and visualization have demonstrated the advantages of RGAN both quantitatively and qualitatively." }, { "heading": "APPENDIX", "text": "" }, { "heading": "A. PROOF", "text": "Lemma 0.2 For arbitrary fixed G, the optimal D of the game defined by the utility function V (G,D) is:\nD∗G(x) = pλr (x)\npλr (x) + p λ g (x)\n(18)\nwhere, pλr (x) = (1 − λ)pr + λp′r is the mixture distribution for real data with λ ∈ [0, 1]. p′r is the worst distribution defined by p′r = arg minP :W (P,Pr)≤ρr Ex∼P [logD(x)]. pλg (x) = (1 − λ)pg + λp ′ g is the mixture distribution for fake data. The worst distribution p ′ g is defined by p ′ g = arg minP :W (P,Pg)≤ρg EG′∼P [1− logD(G′(z))]. Proof: Given the classifier and generator, the utility function can be rewritten as\nV (G,D) , (1− λ)[Ex∼Pr [logD(x)] + EG∼Pg [(1− logD(G(zi)))]] + sup P :W (P,Pr)≤ρr λEx∼P [logD(x)]\n+ sup P :W (P,Pg)≤ρg\nλEG′∼P [(1− logD(G′(z)))]\n= (1− λ) ∫ pr(x)log(D(x))dx\n+ (1− λ) ∫ pg(x)log(1−D(x))dx\n+ λ ∫ p′r(x)log(D(x))dx\n+ λ ∫ p′g(x)log(1−D(x))dx\n= ∫ pλr (x)log(D(x))dx\n+ ∫ pλg (x)log(1−D(x))dx\n(19)\nwhere G′(zi) = G(zi) + ri2 and zi ∼ Pz . ri2 is arbitrary perturbation. Then, it is easy to prove that the optimal D is D∗G(x) = pλr (x) pλr (x)+p λ g (x) .\nLemma 0.3 When the optimum discriminatorD∗ is achieved, the utility function reaches the global minimum if and only if pλg (x) = p λ r (x)." }, { "heading": "Proof:", "text": "Given the optimal D∗, we can reformulate the function V (G,D):\nV (G,D∗) = ∫ pλr (x)log(\npλr (x)\npλr (x) + p λ g (x)\n)dx+ ∫ pλg (x)log(\npλg (x)\npλr (x) + p λ g (x)\n)dx\n= ∫ pλr (x)log(\npλr (x)\n(pλr (x) + p λ g (x))/2\n)dx\n+ ∫ pλg (x)log(\npλg (x)\n(pλr (x) + p λ g (x))/2\n)dx− 2log2\n= −2log2 +KL(pλr (x)||(pλr (x) + pλg (x))/2) +KL(pλg (x)||(pλr (x) + pλg (x))/2)\n(20)\nThen, V (G,D∗) can be rewritten as:\nV (G,D∗) = −2log2 + 2JSD(pλr (x)||pλg (x)) (21)\nwhere JSD is the Jensen-Shannon divergence, which is always non-negative and the unique optimum is achieved if and only if pλr (x) = p λ g (x).\nAssumption 1 We provide following assumptions for RGAN:\n1. The discriminatorDθ(x) is kθ-Lipschitz in its parameter θ, i.e., |logDθ(x)−logD′θ(x)| ≤ kθ‖θ− θ′‖.\n2. The discriminator Dθ(x) is kx-Lipschitz in its x, i.e., |logDθ(x)− logDθ(x′)| ≤ kx‖x− x′‖." }, { "heading": "3. 
The distance between two arbitrary samples is bounded, i.e., ‖x− x′‖ ≤ ∆B .", "text": "Lemma 0.4 Under assumption 1, with at least probability 1− η, we have:\n|V θm − V θ| ≤ (22)\nwhen the number of samples\nm ≥ C∆ 2 B(kx) 2\n2 (Nlog\nkθN\n+ log\n1 η ) (23)\nwhere C is a sufficiently large constant, and N is the number of parameters of the discriminator function, V θ = maxD V (G∗, D) and V θm = maxD Vm(G ∗, D)." }, { "heading": "Proof:", "text": "To prove the bound, we need to apply the McDiarmid’s inequality. We first bound the change of function V θm(D,G\n∗) when a sample is changed. When i-th samples are replaced by x1i, x1i, Gz1i and G′z1i , the function changes to V θi m (D,G ∗). Then, we have\n|V θm(D,G∗)− V θim (D,G∗)|\n= 1\nm |(1− λ)[logD(xi) + logD(Gzi)]\n+ λ[logD(x′i) + logD(G ′ zi)] − (1− λ)[logD(x1i) + logD(Gz1i)] − λ[logD(x′1i) + logD(G′z1i)]| ≤ 1− λ m kx‖xi − x1i‖+ 1− λ m kx‖Gzi −Gz1i‖\n+ λ\nm kx‖x′i − x′1i‖+\nλ m kx‖G′zi −G ′ z1i‖\n≤ 2 m kx∆B\n(24)\nNow we can apply the McDiarmid’s inequality. We have\nP (|V θm(D,G∗)− V (D,G∗)| ≥ /2)\n≤ 2exp(− 2m\n8k2x∆ 2 B\n) (25)\nThe above bound applies to a single discriminatorDθ. To get the union bound, we consider a /8kθnet N , i.e. for any Dθ, there is a θ′ ∈ N so that ‖θ − θ′‖ ≤ /8kθ. This standard net can be constructed to contain finite discriminators such that N ≤ O(Nlog(kθN/ )). N is the number of parameters of discriminator (we here assume the parameter space of the loss function is bounded, then we can construct such a net containing finite points). Therefore, for all θ ∈ N , we have\n|V θm − V θ| ≤ /2 (26)\nwhen m ≥ C∆ 2 B(kx) 2 2 (Nlog kθN + log 1 η ).\nWe further consider the bound beyond θ and we can easily obtain the bounds with the first assumption:\n|V θ(D,G∗)− V θ ′ (D,G∗)| ≤ 2kθ‖θ − θ′‖ (27)\nand\n|V θm(D,G∗)− V θ ′ m (D,G ∗)| ≤ 2kθ‖θ − θ′‖ (28)\nThe final bound for all discriminator can be obtainged with assumption ‖θ − θ′‖ ≤ /8kθ:\n|V θm(D,G∗)−V θ(D,G∗)|\n≤ |V θm(D,G∗)− V θ ′ m (D,G ∗)| + |V θ ′\nm (D,G ∗)− V θ ′ (D,G∗)|\n+ |V θ ′ (D,G∗)− V θ(D,G∗)| ≤\n(29)\nAssumption 2 We provide the following assumptions for RGAN:" }, { "heading": "1. The generator Gφ(z) is kφ-Lipschitz in its parameter φ, i.e., |Gφ(z)−G′φ(z)| ≤ kφ‖φ− φ′‖.", "text": "2. The discriminator Gφ(z) is kz-Lipschitz in its z, i.e., |Gφ(z)−Gφ(z′)| ≤ kz‖z − z′‖." }, { "heading": "3. The distance between two arbitrary samples is bounded, i.e., ‖z − z′‖ ≤ ∆Bz .", "text": "Lemma 0.5 Under assumption 2, with at least probability 1− η, we have:\n|Qφm −Qφ| ≤ (30)\nwhen the number of samples\nm ≥ Cg∆ 2 Bz k2xk 2 z\n2 (Nglog\nkθkφNg + log 1\nη ) (31)\nwhere Cg is a sufficiently large constant, and Ng is the number of parameters of the generator function, Qφ = minG V (G,D∗) and Qφm = minG Vm(G,D ∗)." }, { "heading": "Proof:", "text": "Proof is skipped due to its similarity to Lemma 0.4." } ]
2019
null
SP:c163bfc3f289c0c63ec25bbd21a63a921518ed22
[ "This paper provides a novel approach to the problem of simplifying symbolic expressions without relying on human input or information. To achieve this, they apply a REINFORCE framework with a reward function involving the number of symbols in the final output, together with a probabilistic testing scheme to determine equivalence. The model itself consists of a tree-LSTM-based encoder-decoder module with attention, together with a subtree selector. Their main contribution is that this framework works entirely independently of human-labelled data. In their experiments, they show that this deep learning approach outperforms their provided human-independent baselines, while sharing similar performance with human-dependent ones." ]
Deep symbolic superoptimization refers to the task of applying deep learning methods to simplify symbolic expressions. Existing approaches either perform supervised training on human-constructed datasets that define equivalent expression pairs, or apply reinforcement learning with human-defined equivalent transformation actions. In short, almost all existing methods rely on human knowledge to define equivalence, which suffers from large labeling cost and learning bias. We thus propose HISS, a reinforcement learning framework for symbolic superoptimization that keeps humans outside the loop. HISS introduces a treeLSTM encoder-decoder network with attention to ensure tractable learning. Our experiments show that HISS can discover more simplification rules than existing human-dependent methods, and can learn meaningful embeddings for symbolic expressions, which are indicative of equivalence.1
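The reward signal implied above (favoring shorter outputs, gated by a probabilistic equivalence test) can be sketched as follows. The sampling-based check, the tolerance, and the penalty value are all illustrative assumptions, since the paper's exact reward is not reproduced in this excerpt; expressions are assumed to be Python-evaluable strings.

import random

def probably_equivalent(expr_a, expr_b, variables, trials=64):
    # Soft equivalence test: evaluate both expressions at random assignments.
    for _ in range(trials):
        env = {v: random.uniform(-10, 10) for v in variables}
        try:
            if abs(eval(expr_a, {}, env) - eval(expr_b, {}, env)) > 1e-6:
                return False
        except (ZeroDivisionError, OverflowError):
            continue  # skip sample points where either expression is undefined
    return True

def reward(input_expr, output_expr, variables):
    if not probably_equivalent(input_expr, output_expr, variables):
        return -1.0  # a non-equivalent rewrite is penalized
    return max(0.0, len(input_expr) - len(output_expr))  # reward compression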
[ { "affiliations": [], "name": "Hui Shi" }, { "affiliations": [], "name": "Yang Zhang" }, { "affiliations": [], "name": "Xinyun Chen" }, { "affiliations": [], "name": "Yuandong Tian" }, { "affiliations": [], "name": "Jishen Zhao" } ]
[ { "authors": [ "David Alvarez-Melis", "Tommi S Jaakkola" ], "title": "Tree-structured decoding with doubly-recurrent neural networks. 2016", "venue": null, "year": 2016 }, { "authors": [ "Forough Arabshahi", "Sameer Singh", "Animashree Anandkumar" ], "title": "Combining symbolic expressions and black-box function evaluations in neural programs", "venue": "arXiv preprint arXiv:1801.04342,", "year": 2018 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "arXiv preprint arXiv:1409.0473,", "year": 2014 }, { "authors": [ "Dimitri P Bertsekas" ], "title": "Dynamic programming and optimal control, volume 1", "venue": "Athena scientific Belmont, MA,", "year": 1995 }, { "authors": [ "Sebastian Buchwald" ], "title": "Optgen: A generator for local optimizations", "venue": "In International Conference on Compiler Construction,", "year": 2015 }, { "authors": [ "Rudy Bunel", "Matthew Hausknecht", "Jacob Devlin", "Rishabh Singh", "Pushmeet Kohli" ], "title": "Leveraging grammar and reinforcement learning for neural program synthesis", "venue": "arXiv preprint arXiv:1805.04276,", "year": 2018 }, { "authors": [ "Cheng-Hao Cai", "Yanyan Xu", "Dengfeng Ke", "Kaile Su" ], "title": "Learning of human-like algebraic reasoning using deep feedforward neural networks", "venue": "Biologically inspired cognitive architectures,", "year": 2018 }, { "authors": [ "Xinyun Chen", "Yuandong Tian" ], "title": "Learning to progressively plan", "venue": "arXiv preprint arXiv:1810.00337,", "year": 2018 }, { "authors": [ "Xinyun Chen", "Chang Liu", "Dawn Song" ], "title": "Tree-to-tree neural networks for program translation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Leonardo De Moura", "Nikolaj Bjørner. 
Z" ], "title": "An efficient smt solver", "venue": "In International conference on Tools and Algorithms for the Construction and Analysis of Systems,", "year": 2008 }, { "authors": [ "Mehdi Drissi", "Olivia Watkins", "Aditya Khant", "Vivaswat Ojha", "Pedro Sandoval", "Rakia Segev", "Eric Weiner", "Robert Keller" ], "title": "Program language translation using a grammar-driven tree-to-tree model", "venue": "arXiv preprint arXiv:1807.01784,", "year": 2018 }, { "authors": [ "Abhinav Jangda", "Greta Yorsh" ], "title": "Unbounded superoptimization", "venue": "Proceedings of the 2017 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software,", "year": 2017 }, { "authors": [ "Zhihao Jia", "Oded Padon", "James Thomas", "Todd Warszawski", "Matei Zaharia", "Alex Aiken" ], "title": "Taso: optimizing deep learning computation with automatic generation of graph substitutions", "venue": "In Proceedings of the 27th ACM Symposium on Operating Systems Principles,", "year": 2019 }, { "authors": [ "Rajeev Joshi", "Greg Nelson", "Keith Randall" ], "title": "Denali: a goal-directed superoptimizer, volume 37", "venue": null, "year": 2002 }, { "authors": [ "Levente Kocsis", "Csaba Szepesvári" ], "title": "Bandit based monte-carlo planning", "venue": "In European conference on machine learning,", "year": 2006 }, { "authors": [ "Wang Ling", "Edward Grefenstette", "Karl Moritz Hermann", "Tomáš Kočiskỳ", "Andrew Senior", "Fumin Wang", "Phil Blunsom" ], "title": "Latent predictor networks for code generation", "venue": "arXiv preprint arXiv:1603.06744,", "year": 2016 }, { "authors": [ "Hongbin Liu", "Ickjai Lee" ], "title": "End-to-end trajectory transportation mode classification using bilstm recurrent neural network", "venue": "12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE),", "year": 2017 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of machine learning research,", "year": 2008 }, { "authors": [ "Henry Massalin" ], "title": "Superoptimizer: a look at the smallest program", "venue": "In ACM SIGARCH Computer Architecture News,", "year": 1987 }, { "authors": [ "Emilio Parisotto", "Abdel-rahman Mohamed", "Rishabh Singh", "Lihong Li", "Dengyong Zhou", "Pushmeet Kohli" ], "title": "Neuro-symbolic program synthesis", "venue": "arXiv preprint arXiv:1611.01855,", "year": 2016 }, { "authors": [ "Jonathan Ragan-Kelley", "Connelly Barnes", "Andrew Adams", "Sylvain Paris", "Frédo Durand", "Saman Amarasinghe" ], "title": "Halide: a language and compiler for optimizing parallelism, locality, and recomputation in image processing pipelines", "venue": "Acm Sigplan Notices,", "year": 2013 }, { "authors": [ "Eric Schkufza", "Rahul Sharma", "Alex Aiken" ], "title": "Stochastic superoptimization", "venue": "In ACM SIGPLAN Notices,", "year": 2013 }, { "authors": [ "Kai Sheng Tai", "Richard Socher", "Christopher D Manning" ], "title": "Improved semantic representations from tree-structured long short-term memory networks", "venue": "arXiv preprint arXiv:1503.00075,", "year": 2015 }, { "authors": [ "Ronald J Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine learning,", "year": 1992 }, { "authors": [ "Wojciech Zaremba", "Ilya Sutskever" ], "title": "Learning to execute", "venue": "arXiv preprint arXiv:1410.4615,", "year": 2014 }, { "authors": [ "Victor Zhong", "Caiming Xiong", "Richard 
Socher" ], "title": "Seq2sql: Generating structured queries from natural language using reinforcement learning", "venue": "arXiv preprint arXiv:1709.00103,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Superoptimization refers to the task of simplifying and optimizing over a set of machine instructions, or code (Massalin, 1987; Schkufza et al., 2013), which is a fundamental problem in computer science. As an important direction in superoptimization, symbolic expression simplification, or symbolic superoptimization, aims at transforming symbolic expression to a simpler form in an effective way, so as to avoid unnecessary computations. Symbolic superoptimization is an important component in compilers, e.g. LLVM and Halide, and it also has a wide application in mathematical engines including Wolfram2, Matlab, and Sympy.\nOver recent years, applying deep learning methods to address symbolic superoptimization has attracted great attention. Despite their variety, existing algorithms can be roughly divided into two categories. The first category is supervised learning, i.e. to learn a mapping between the input expressions and the output simplified expressions from a large number of human-constructed expression pairs (Arabshahi et al., 2018; Zaremba & Sutskever, 2014). Such methods rely heavily on a human-constructed dataset, which is time- and labor-consuming. What is worse, such systems are highly susceptible to bias, because it is generally very hard to define a minimum and comprehensive axiom set for training. It is highly possible that some forms of equivalence are not covered in the training set, and fail to be recognized by the model. In order to remove the dependence on human annotations, the second category of methods leverages reinforcement learning to autonomously discover simplifying equivalence (Chen et al., 2018). However, to make the action space tractable, such systems still rely on a set of equivalent transformation actions defined by human beings, which again suffers from the labeling cost and learning bias.\nIn short, the existing neural symbolic superoptimization algorithms all require human input to define equivalences. It would have benefited from improved efficiency and better simplification if there were algorithms independent of human knowledge. In fact, symbolic superoptimization should have been a task that naturally keeps human outside the loop, because it directly operates on machine code, whose consumers and evaluators are machines, not humans.\n∗Authors contributed equally to this paper. 1The code is available at https://github.com/shihui2010/symbolic_simplifier. 2https://www.wolframalpha.com/\nTherefore, we propose Human-Independent Symbolic Superoptimization (HISS), a reinforcement learning framework for symbolic superoptimization that is completely independent of human knowledge. Instead of using human-defined equivalence, HISS adopts a set of unsupervised techniques to maintain the tractability of action space. First, HISS introduces a tree-LSTM encoder-decoder architecture with attention to ensure that its exploration is confined within the set syntactically correct expressions. Second, the process of generating a simplified expression is broken into two stages. The first stage selects a sub-expression that can be simplified and the second stage simplifies the sub-expression. We performed a set of evaluations on artificially generated expressions as well as a publicly available code dataset, called the Halide dataset (Chen & Tian, 2018), and show that HISS can achieve competitive performance. 
We also find that HISS can automatically discover simplification rules that are not included in the human-predefined rules in the Halide dataset." }, { "heading": "2 RELATED WORK", "text": "Superoptimization originated in 1987 with the first design by Massalin (1987). With probabilistic testing to reduce the testing cost, the brute-force search is aided by a pruning strategy to avoid searching sub-spaces that contain pieces of code with known shorter alternatives. Due to the explosive search space of exhaustive search, the capability of the first superoptimizer was limited to only very short programs. More than a decade later, Joshi et al. (2002) presented Denali, which splits the superoptimization problem into two phases to expand the capability to optimize longer programs. STOKE (Schkufza et al., 2013) follows the two phases but sacrifices completeness for efficiency in the second phase.\nRecent attempts to improve superoptimization fall into two directions: exploring transformation rules and accelerating trajectory search. Searching for rules is similar to the problem of superoptimization on size-limited programs, but targets more the comprehensiveness of the rules. Buchwald (2015) exhaustively enumerates all possible expressions given the syntax and checks the equivalence of pairs of expressions with an SMT solver. A similar method with an adaptation of the SMT solver to reuse previous results is proposed by Jangda & Yorsh (2017). On the other hand, deep neural networks have been trained to guide the trajectory search (Cai et al., 2018; Chen & Tian, 2018).\nConsidering transformation rule discovery as superoptimization over a limited space, the large action space and sparse reward are the main challenges for using neural networks. Special neural generator structures have been proposed for decoding valid symbolic programs, which leverage syntax constraints to reduce the search space as well as learn the reasoning of operations, and are gaining popularity in program synthesis (Parisotto et al., 2016; Zhong et al., 2017; Bunel et al., 2018), program translation (Chen et al., 2018; Drissi et al., 2018), and other code generation tasks (Ling et al., 2016; Alvarez-Melis & Jaakkola, 2016). Among the symbolic expression decoders, the family of tree-structured RNNs (Parisotto et al., 2016; Drissi et al., 2018; Alvarez-Melis & Jaakkola, 2016; Chen et al., 2018) is more flexible than template-based predictors (Ling et al., 2016; Zhong et al., 2017)." }, { "heading": "3 THE HISS ARCHITECTURE", "text": "In this section, we will detail our proposed HISS architecture. We will first introduce a few notations. $T$ denotes a tree; $a$ denotes a vector; and $A$ denotes a matrix. We introduce an $\mathrm{LSTM}(\cdot)$ function that summarizes the standard one-step LSTM operation as\n$[h_t, c_t] = \mathrm{LSTM}(x_t, h_{t-1}, c_{t-1})$, (1)\nwhere $h_t$, $c_t$ and $x_t$ denote the output, cell and input at time $t$ of a standard LSTM, respectively." }, { "heading": "3.1 FRAMEWORK OVERVIEW", "text": "Our problem can be formulated as follows. Given a symbolic expression $T_I$, represented in the expression tree form, our goal is to find a simplified expression $T_O$, such that 1) the two expressions are equivalent, and 2) $T_O$ contains a smaller number of nodes than $T_I$. It is important to write the symbolic expressions in their expression tree form, rather than as strings, because HISS will be operating on tree structures. An expression tree assigns a node to each operation or variable. 
Each non-terminal node represents an operation, and each terminal node, or leaf node, represents a variable or a constant.\n[Figure 1: The HISS architecture, illustrated on a three-node binary subtree, where i is the parent of j, k, and p is the parent of i. (a) The tree encoder (green) and subtree selector (orange). (b) The tree decoder. Legend: LP = linear projection (feed-forward); Att = attention module (Eq. (6)).]\nThe arguments of an operation are represented as the descendant subtrees of the corresponding node. Compared to the string representation, the tree representation naturally ensures that any randomly generated expression in this form is syntactically correct. It also makes working with subexpressions easier: simply by working with subtrees.\nHISS approaches the problem using the reinforcement learning framework, where the action of generating simplified expressions is divided into two consecutive actions. The first action is to pick a subexpression (or subtree) that can be simplified, and the second action generates the simplified expression for the selected subexpression.\nAccordingly, HISS contains three modules. The first module is a tree encoder, which computes an embedding for each subtree (including the entire tree) of the input expression. The embeddings are useful for picking a subtree for simplification as well as for simplifying a subtree. The second module is a subtree selector, which selects a subtree for simplification. The third module is a tree decoder with an attention mechanism, which generates a simplified expression based on the input subtree embedding. The subsequent subsections will introduce each module respectively." }, { "heading": "3.2 THE TREE ENCODER", "text": "The tree encoder generates an embedding for every subtree of the input expression. We apply the N-ary Tree-LSTM as proposed in Tai et al. (2015), where N represents the maximum number of arguments that an operation has. It is important to note that although different operations have different numbers of arguments, for structural uniformity, we assume that all operations have N arguments, with the excessive arguments being a NULL symbol (a minimal sketch of this padded tree structure is given below).\nThe tree encoder consists of two layers. 
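Before the two encoder layers are detailed, here is a minimal sketch of the padded N-ary expression tree referenced above; the names (ExprNode, NULL, subtrees) are hypothetical illustrations and are not taken from the paper's released code:

```python
NULL = "<NULL>"  # placeholder child so every operation has exactly N arguments

class ExprNode:
    def __init__(self, symbol, children=(), n_ary=3):
        self.symbol = symbol            # operation, variable, or constant
        self.children = list(children)
        if self.children:               # pad operations to a uniform arity
            while len(self.children) < n_ary:
                self.children.append(ExprNode(NULL))

def subtrees(root):
    """Yield every subtree root; simplifying a subexpression is rewriting a subtree."""
    yield root
    for child in root.children:
        if child.symbol != NULL:
            yield from subtrees(child)

# (v0 + v1) - v0, with '-' at the root of the expression tree
expr = ExprNode("-", [ExprNode("+", [ExprNode("v0"), ExprNode("v1")]), ExprNode("v0")])
print([t.symbol for t in subtrees(expr)])  # ['-', '+', 'v0', 'v1', 'v0']
```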
The first layer is called the embedding layer, which is a fully-connected layer that converts the one-hot representation of each input symbol to an embedding. The second layer is the tree LSTM layer, which is almost the same as a regular LSTM, except that the cell information now flows from the children nodes to their parent node. Formally, denote $c_i$, $h_i$, and $x_i$ as the cell, output and input of node $i$ respectively. Then the tree LSTM encoder performs the following computation:\n$[h_i, c_i] = \mathrm{LSTM}\Big(x_i, \bigcup_{j \in D(i)} h_j, \bigcup_{j \in D(i)} c_j\Big)$, (2)\nwhere $D(i)$ denotes the set of children of node $i$. Fig. 1(a) plots the architecture of the tree LSTM encoder (in green). Since each node fuses the information from its children, which again fuse the information from their own children, it is easy to see that the output $h_i$ summarizes the information of the entire subtree led by node $i$, and thus can be regarded as an embedding for this subtree." }, { "heading": "3.3 THE SUBTREE SELECTOR", "text": "The subtree selector performs the first action to select a subtree for simplification. It takes the output of the tree encoder, $\{h_i\}$, as its input, and produces the probability with which each subtree is selected. It consists of two feed-forward layers followed by a softmax layer across all the nodes in the input tree. Figure 1(a) shows the architecture of the subtree selector (in orange)." }, { "heading": "3.4 THE TREE DECODER", "text": "Once a subtree has been selected, and supposing the root node of the selected subtree is node $i$, the output of the encoder at node $i$, $h_i$, is then fed into the tree decoder, which generates a simplified version of the subtree. The tree decoder can be regarded as the inverse process of the tree encoder: the latter fuses information from the children to the parents, whereas the former unrolls the information from parents down to the entire N-ary tree. When the parent node outputs a non-operation symbol, the expansion of this branch terminates and no child is further decoded.\nThe tree decoder adopts a novel LSTM architecture with attention, which, compared with the attention LSTM proposed by Chen et al. (2018), is more parameter- and computationally efficient. It consists of two layers. The first layer is a tree LSTM layer, and the second layer is the symbol generation layer with attention. Figure 1(b) illustrates the decoder structure. The decoder shares the same vocabulary with the encoder, and the embedding layer of the decoder shares its parameters with the embedding layer in the encoder.\nTree LSTM Layer. The tree LSTM in the decoder needs to accomplish two tasks. First, it needs to extract the information for generating the output for the current node. Second, it needs to split and pass on the information to its children. To better control the information flow, we introduce two tracks of LSTMs for the two different tasks. Formally, denote $[h'_i, c'_i]$ as the output and cell of node $i$, and assume $[j_1, \cdots, j_N]$ are the children nodes of $i$. Also denote $y_p$ as the decoder output for node $p$, which is the parent node of node $i$ (if node $i$ is already the root node of the selected subtree, then $y_p$ becomes a special start token). Then the first LSTM track extracts the information that generates the current output:\n$[h'_{i \to \mathrm{out}}, c'_{i \to \mathrm{out}}] = \mathrm{LSTM}_{\mathrm{out}}(y_p, h'_i, c'_i)$. (3)\nThe second LSTM track splits and passes on the information to the children, i.e., $\forall n \in \{1, \cdots, N\}$:\n$[h'_{i \to j_n}, c'_{i \to j_n}] = \mathrm{LSTM}_n(y_p, h'_i, c'_i)$. (4)\nNotice that we have appended a subscript to $\mathrm{LSTM}(\cdot)$ to emphasize that LSTM functions with different subscripts do not share parameters. Finally, the LSTM information for a specific child is derived by linearly projecting the output track and that specific child's track:\n$h'_{j_n} = W_h[h'_{i \to \mathrm{out}}, h'_{i \to j_n}] + b_h$, $c'_{j_n} = W_c[c'_{i \to \mathrm{out}}, c'_{i \to j_n}] + b_c$. (5)\nWe find that this linear projection is useful for adding additional dependencies between the parent output and the descendants, so that the generated expression is more coherent.\nSymbol Generation Layer with Attention. The symbol generation layer takes the output track produced by the previous tree LSTM layer, $h'_{i \to \mathrm{out}}$, as input, and outputs the probability distribution of generating different output symbols for the current node. It adopts an attention mechanism (Bahdanau et al., 2014) to attend to the relevant part of the encoder, so that the input and output expressions have better correspondence. Formally, when generating the output for decoder node $i$, the attention weight on encoder node $j$ is computed from $h'_{i \to \mathrm{out}}$ and $h_j$ as follows:\n$e_i(j) = v^T \tanh(W_d h'_{i \to \mathrm{out}} + W_e h_j + b_a)$, $[a_i(1), \cdots, a_i(J)] = \mathrm{softmax}([e_i(1), \cdots, e_i(J)])$, (6)\nwhere $J$ is the total number of input nodes at the encoder. Finally, the probability of symbol generation at node $i$, denoted as $p_i$, is computed by passing $h'_{i \to \mathrm{out}}$ and an attention context vector $c_i$, which is a linear combination of the encoder embeddings with the attention weights, into a linear projection layer, i.e.,\n$p_i = W_o[h'_{i \to \mathrm{out}}; c_i] + b_o$, where $c_i = \sum_{j=1}^{J} a_i(j) h_j$. (7)" }, { "heading": "4 LEARNING WITH HISS", "text": "In this section, we will elaborate on the training and inference schemes of HISS. In particular, we will introduce several mechanisms to improve the exploration efficiency of HISS." }, { "heading": "4.1 TRAINING", "text": "We apply the standard REINFORCE framework (Williams, 1992) for training, where the reward function is given by\n$R(T_I, T_O) = \begin{cases} \gamma^{\mathrm{card}(T_O)} & \text{if } T_I \equiv T_O, \\ -\beta\gamma^{\mathrm{card}(T_O)} & \text{otherwise,} \end{cases}$ (8)\nwhere '$\equiv$' denotes that the two expressions are equivalent; $\mathrm{card}(\cdot)$ denotes the number of nodes in the tree expression, or the length of the expression. $\beta$ is a hyperparameter that controls the penalty of not producing an equivalent expression. This reward prioritizes equivalence, and given equivalence, favors shorter expressions. We applied a probabilistic testing scheme to determine equivalence as proposed in Massalin (1987). For each input, multiple outputs are decoded via beam search, on each of which a reward is evaluated (a minimal sketch of this reward computation is given below). We introduce the following mechanisms to maintain the efficiency and stability of training.\nCurriculum Learning. Since generating the simplified expression is divided into two actions, subtree selection and subtree simplification, directly learning both can lead to very inefficient exploration. Instead, we introduce a two-stage curriculum. The first stage trains only the encoder and decoder on very short expressions (maximum depth less than four). The subtree selector is not trained. Instead, we always feed the entire tree to the decoder for simplification. The second stage trains all the modules on longer expressions.\nSubtree Embedding Similarity. In order to guide the encoder to learn meaningful embeddings, we introduce an additional $\ell_2$ loss to enforce that equivalent expressions have similar encoder embeddings, i.e., similar $h_i$'s. 
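A minimal sketch of the reward in Eq. (8) combined with the Massalin-style probabilistic equivalence test referenced above. Here `evaluate`, the sampling range, and the value `gamma=0.9` are hypothetical stand-ins (appendix A.3 fixes beta=0.1, but gamma is not stated):

```python
import random

def card(tree):
    """Number of nodes in an expression tree (assumes a .children attribute)."""
    return 1 + sum(card(c) for c in tree.children)

def prob_equivalent(t_in, t_out, variables, evaluate, n_tests=32):
    """Probabilistic testing: equivalent expressions must agree on every random
    assignment; disagreement on any single assignment proves non-equivalence."""
    for _ in range(n_tests):
        env = {v: random.randint(-100, 100) for v in variables}
        if evaluate(t_in, env) != evaluate(t_out, env):
            return False
    return True

def reward(t_in, t_out, variables, evaluate, gamma=0.9, beta=0.1):
    """Eq. (8): gamma**card(T_O) if the pair tests as equivalent,
    -beta * gamma**card(T_O) otherwise."""
    r = gamma ** card(t_out)
    return r if prob_equivalent(t_in, t_out, variables, evaluate) else -beta * r
```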
Specifically, for each input expression $T_I$, we decode a set of generated expressions $S = \{T_O\}$ with beam search, and obtain their embeddings $\{h(T_O)\}$ by feeding them back into the encoder (here we add an argument to $h$ to emphasize that the embedding is a function of the input expression). Then the $\ell_2$ loss is expressed as follows:\n$\mathcal{L} = \frac{1}{|S|} \sum_{T_O \in S} \|h(T_O) - h(T_I)\|_2^2 \cdot (-1)^{\mathbb{1}[T_I \not\equiv T_O]}$, (9)\nwhere $\mathbb{1}[\cdot]$ denotes the indicator function, which equals one if the statement in its argument is true, and zero otherwise. Note that this $\ell_2$ loss applies to the encoder only, and can be optimized by regular gradient descent methods; REINFORCE is not needed (a sketch of this loss is given below)." }, { "heading": "4.2 INFERENCE", "text": "Similar to training, the inference is performed by decoding multiple outputs via beam search and selecting the best result as the final output. In order to accelerate the inference process, we introduce an offline procedure. During the first stage of the curriculum training, i.e., training on very short expressions, all the simplified equivalence rules discovered are logged. During inference, if the subtree to be fed into the decoder has an exact match in the log, we apply the logged simplified equivalence directly, rather than redoing the entire decoding process." }, { "heading": "5 EXPERIMENTS", "text": "We performed two experiments. The first experiment compares HISS with human-independent naive search algorithms. The second experiment compares HISS with existing human-dependent state-of-the-art systems on benchmark datasets. Throughout all the experiments, we use the same hyperparameter setting. Additional details on the hyperparameters and how they are determined can be found in appendix A.3. Besides the experiments discussed in this section, some additional experiment results are reported in appendix B, and a set of ablation studies is introduced in appendix C." }, { "heading": "5.1 COMPARING WITH HUMAN-INDEPENDENT METHODS", "text": "Since there are no existing human-independent methods specifically for symbolic superoptimization, we compare against several search algorithms. Due to the search complexity, the evaluation cannot be performed on very long expressions. Therefore, this experiment is performed on the traverse equivalence dataset.\nTraverse Equivalence Dataset As a complement to the Halide dataset, we build a dataset by traversing all the possible expressions with maximum depth four that consist of operations and symbols drawn from the Halide vocabulary in table 3. Among these expressions, we use the FingerPrint method (Jia et al., 2019) to test if they can be simplified, and discard those that cannot. From the remaining expressions, we sample 900 expressions as the training set, 300 as the validation set, and 300 as the test set. The advantage of this dataset is that it is not built from human-predefined equivalence relations. However, the disadvantage of this dataset is that it does not contain expressions with a maximum depth greater than four, limited by the complexity of the FingerPrint method. Additional details of our Traverse Equivalence dataset can be found in appendix A.1.\nTraining Since HISS does not operate on very long expressions in this experiment, it is only trained with stage one of the curriculum learning (the one with no subtree selection). Additional details regarding training can be found in appendix A.4.\nBaselines Two baseline searching methods are compared: Monte Carlo Tree Search (MCTS) (Bertsekas, 1995) and Markov Chain Monte Carlo (MCMC) (Schkufza et al., 2013). 
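Returning to the embedding-similarity objective of Eq. (9) above, the following is a minimal numpy sketch; the function and argument names are hypothetical:

```python
import numpy as np

def embedding_similarity_loss(h_in, decoded):
    """Eq. (9): mean of ||h(T_O) - h(T_I)||_2^2 over the beam-search decodes,
    with the sign flipped for non-equivalent outputs, so that minimizing the
    loss pulls equivalent embeddings together and pushes the rest apart."""
    terms = [np.sum((h_out - h_in) ** 2) * (1.0 if is_equiv else -1.0)
             for h_out, is_equiv in decoded]   # decoded: list of (embedding, bool)
    return float(np.mean(terms))
```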
MCTS decodes the expression tree from root to leaves, choosing one symbol from the Halide vocabulary for each node, and adopts the Upper Confidence Bound (Kocsis & Szepesvári, 2006) for balancing exploration and exploitation. Similar to Schkufza et al. (2013), MCMC takes one of four transformations: 1) replace an operator with another random operator, and generate or discard operands if the two operators take different numbers of operands; 2) replace a variable/constant with another random variable/constant; 3) replace a subexpression with a random single variable/constant; 4) replace a variable/constant with a random expression. The probability distribution over the transformations is defined the same as in Schkufza et al. (2013).\nMetrics. Three metrics are introduced: 1) hit rate, defined as the percentage of expressions for which the model successfully finds an equivalence within the given computation budget; 2) expression length reduction, defined as the reduction in the total number of tokens; and 3) tree size reduction, defined as the reduction in the number of nodes in the expression tree.\nResults. The performance comparison of the three models is shown in figure 2. The sampling parameter on the horizontal axis refers to the beam size for HISS, the max-trials budget for MCTS for each token decoded, and the sampling budget for MCMC. These quantities equivalently define the number of search attempts per token. As can be seen, HISS is significantly more powerful in finding a simpler equivalent than MCTS and MCMC. MCMC performs almost equally well as HISS in terms of hit rate, and both of them far outperform MCTS. Note, however, that both MCTS and HISS adopt top-down decoding in a huge decoding space, while MCMC starts with the input expression and applies local transformations, which makes it much easier for MCMC to find an equivalence. Even so, MCMC achieves much worse average length reduction and average tree size reduction than HISS." }, { "heading": "5.2 COMPARING WITH HUMAN-DEPENDENT METHODS", "text": "In this section, we compare HISS with existing human-dependent state-of-the-art methods on the Halide dataset.\nHalide Dataset The Halide dataset (Chen & Tian, 2018) is the benchmark dataset for symbolic expression simplification. It consists of equivalent expression pairs constructed from human-predefined rules. Since HISS is an unsupervised method, we only use the longer expression from each pair for training. Additional details of the Halide dataset can be found in appendix A.2.\nTraining & Inference In this experiment, HISS is trained with both stages of curriculum learning. The first stage is trained on the moderately short expressions from the Traverse Equivalence dataset, whose configuration follows that in section 5.1. The second stage is trained on the Halide training set in an unsupervised manner. More details can be found in appendix A.4.\nSince HISS only simplifies one subtree at a time, while the actual simplification usually requires sequentially simplifying multiple subtrees, we iteratively invoke the HISS procedure for both training and inference, as in Chen & Tian (2018). Table 2 shows an example of the iterative process. 
The iterations terminate when 1) the number of iterations reaches 20; 2) the simplification output becomes a single node; or 3) the subtree selector assigns small scores (below 0.05) to all subtrees (a driver-loop sketch of this iterative procedure is given below).\nBaselines Two baselines are included: 1) Halide (Ragan-Kelley et al., 2013), which applies the Halide predefined rules; 2) Z3, the simplification function in the high-performance theorem prover Z3 developed by De Moura & Bjørner (2008), which performs transformations using the Halide predefined rules. It is worth mentioning that both baselines have access to the Halide predefined rules that are used to construct the dataset, which gives them an advantage over HISS.\nMetrics Expression length reduction and tree size reduction are applied as the metrics.\nResults Figure 3 shows the performance of HISS compared with the baselines. As can be seen, HISS outperforms both Halide and Z3 on both metrics. This result is quite non-trivial because the Halide dataset is built exclusively from Halide's ruleset, to which both baseline algorithms have access. Therefore, this result implies that even expressions specifically designed to be simplified by a set of predefined rules can still be further simplified by rules outside of that set." }, { "heading": "5.3 SIMPLIFICATION PROCESS ANALYSIS", "text": "In this subsection, we provide an in-depth analysis of the simplification processes of HISS and Halide, which can explain why human-predefined rules can be insufficient, and how HISS can exceed the limits of human-predefined rules.\nExample Simplification Rules To start with, table 1 lists some example simplification rules learned by HISS. For each example rule, the corresponding human-predefined rule in Halide is also listed if available. There are two observations. First, HISS is able to learn the most fundamental axiomatic identities, such as the inverse relationship between plus and minus, and between multiplication and division, most of which can be matched with the human-predefined rules in Halide. Second and more importantly, HISS can also uncover some more complicated rules that have no matches in Halide, nor could they be equivalently derived from any composition of the rules in Halide. A close inspection of these rules reveals that these rules, despite their complexity, are still very fundamental, and therefore the failure to include them is expected to impact the simplification performance. In addition to these fundamental rules, HISS is also able to find some involved but interesting rules, which are listed in table 4 and discussed in appendix B.2.\nExample Simplification Traces. To illustrate how the completeness of the rules can impact the simplification performance, table 2 compares the simplification traces of Halide and HISS for the following expression:\n(((((144 - (v0*72))/2)*2) + 4) ≤ ((150 - (v0*72))/4)*4) (10)\nFor Halide, each step represents the process of applying one Halide predefined rule to simplify the expression. For HISS, each step represents one simplification iteration. At each step, the subtree that is chosen for simplification in the next step is marked with a box, unless the entire expression is chosen. As can be seen, HISS follows some reasonable steps to trim the constants and eliminate v0, but Halide gets stuck after some trivial constant reduction. The reason for this failure is that there are no such rules as ((c0 − (v0 ∗ c1))/c2) ∗ c2 ↦ c0 − (v0 ∗ c1), x + y ↦ y + x, or (x + y) ∗ z ↦ x ∗ z + y ∗ z in the Halide ruleset. 
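Returning to the iterative invocation described at the start of this passage, here is a minimal driver-loop sketch; `select` and `rewrite` are hypothetical callables standing in for the subtree selector and the decoder, and `card` counts tree nodes as in the earlier reward sketch:

```python
def simplify(expr, select, rewrite, max_iters=20, min_score=0.05):
    """Iteratively invoke HISS: stop after 20 iterations, when the expression
    collapses to a single node, or when no subtree scores above 0.05."""
    for _ in range(max_iters):
        if card(expr) == 1:            # already a single node
            break
        subtree, score = select(expr)  # highest-scoring subtree and its score
        if score < min_score:          # selector sees nothing left to simplify
            break
        expr = rewrite(expr, subtree)  # decode a simplified replacement subtree
    return expr
```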
Of course, one can patch this particular failure by appending the aforementioned rules to the ruleset, but this does not fundamentally solve the problem, because one would never be able to exhaustively list all the possible rules needed for simplification in the ruleset.\nWhy the Subtree Selector Matters. Table 2 also illustrates why the subtree selector is an integral part of HISS. In many reduction steps, only a subexpression is simplified (as boxed). Without the subtree selector, the decoder of HISS would have to process the entire expression only to simplify the subexpression, which makes it hard to learn effectively via reinforcement learning. For more analysis of the effect of the subtree selector, please refer to appendix C." }, { "heading": "6 CONCLUSIONS", "text": "We presented HISS, a symbolic expression simplification algorithm that is independent of human knowledge. We demonstrated that removing the dependence on humans is advantageous for this task, because machines can autonomously figure out rewrite rules that humans fail to discover, and thus achieve comparably good simplification results. We also showed that we are one step closer to finding an equivalence-preserving embedding for symbolic expressions. Although HISS has achieved promising results, there is still much room for improvement. In particular, although HISS has adopted several techniques to reduce the complexity of the search space, learning simplification rules on very long expressions is still challenging, which calls for the exploration of more efficient reinforcement learning algorithms as a future research direction." }, { "heading": "7 ACKNOWLEDGEMENT", "text": "We thank the anonymous reviewers for their valuable feedback. This paper is supported by NSF grant 1829525." }, { "heading": "A EXPERIMENT SETUP", "text": "" }, { "heading": "A.1 TRAVERSE EQUIVALENCE DATASET", "text": "Table 3 shows the vocabulary of the Halide syntax, which is used to construct the traverse equivalence dataset. As there are 14 binary operators, 1 unary operator, and 1 ternary operator, the total number of different expressions of depth two would be over 50k, and the number of different expressions of depth three is over 1.25e+14. However, the rule of thumb is that simplification is most likely when some variables appear repeatedly in the expression; for example, (x + y) − x ↦ y, while (x + z) + y could not be simplified. Therefore, we constrain the enumeration of expressions to only 16 operators, 5 constants, and the first three variables (v0, v1, v2). As a result, roughly 3 billion expressions are enumerated.\nTo check whether an expression could be simplified among this plethora of possible candidates, we adopted the FingerPrint method (Jia et al., 2019). The idea is that equivalent expressions should produce the same outputs under the same set of inputs. Initially, n sets of random value assignments to all the variables are generated, and the fingerprint of an expression is defined as the tuple of the corresponding n outputs given those assignments. According to the fingerprint, expressions with exactly the same n (n = 4 in our case) results are grouped into a shard. It follows that any equivalent expressions must be in the same shard. Thus, the shards can be processed in parallel. It is highly possible that the initial assignment leads to an extremely large shard. Therefore, when processing each shard, we repetitively compute new fingerprints until the shard breaks into small shards containing fewer than 5,000 expressions (a sketch of this sharding procedure is given below). 
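A minimal sketch of the fingerprint sharding just described; the evaluation function and the sampling range are hypothetical stand-ins:

```python
import random
from collections import defaultdict

def fingerprint_shards(exprs, variables, evaluate, n=4, max_shard=5000, max_rounds=100):
    """Group expressions by their outputs on n random assignments; re-fingerprint
    oversized shards with fresh assignments until every shard is small enough.
    max_rounds guards against shards of mutually equivalent expressions that
    can never split."""
    pending, done = [list(exprs)], []
    for _ in range(max_rounds):
        if not pending:
            break
        group = pending.pop()
        envs = [{v: random.randint(-100, 100) for v in variables} for _ in range(n)]
        buckets = defaultdict(list)
        for e in group:
            buckets[tuple(evaluate(e, env) for env in envs)].append(e)
        for shard in buckets.values():
            (pending if len(shard) >= max_shard else done).append(shard)
    return done + pending
```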
Then, we use probabilistic testing on each pair of expressions from the same shard to determine whether the pair is equivalent. For each equivalent expression pair, the longer expression is labeled as reducible. After all the pairs are tested, all the expressions labeled as reducible are retained, and the rest are removed.\nFurther, it is obvious that if an expression can be simplified, a longer expression containing it can also be simplified. So if an expression contains a subexpression that can be simplified, then this expression does not represent the minimal instance of that simplification and is removed from the dataset as well. After the FingerPrint method and the minimality checking, we obtain approximately 20,000 equivalent expression pairs, from which 1,500 samples are randomly drawn and further split into training, testing, and validation sets with a ratio of 6:2:2." }, { "heading": "A.2 HALIDE DATASET", "text": "The Halide dataset (Chen & Tian, 2018) contains around 10,000 training sequences, 1,000 testing sequences, and 1,000 validation sequences, generated and split randomly. The number of words per sequence ranges from 6 to 100, averaging 58. The generated expressions contain many constants beyond {0, 1, 2, True, False}; the constants are renamed to the constant symbols shown in table 3, and the same constant value is renamed to the same constant symbol. There are at most 14 constant symbols and at most 15 variables in a single expression." }, { "heading": "A.3 HYPERPARAMETER SETTING AND DETERMINATION", "text": "The input to the network is a one-hot encoded sequence with a vocabulary size of 50, which is then encoded by a single fully-connected layer with output size 32. The number of LSTM hidden units is set to 64 for both the encoder and the decoder, a common setting adopted in many previous works (Liu & Lee, 2017), and the number of layers is 1. The output size of the encoder is 64, and the output size of the decoder is equal to the vocabulary size (50). The subtree selector consists of two feed-forward layers with output sizes of 128 and 1, respectively. The model is trained with the ADAM optimizer with a learning rate of 1e-3. Rather than being tuned on the validation set, this hyperparameter setting is determined by following the common setting in previous works (Liu & Lee, 2017) as well as by referencing the Halide vocabulary size. The same hyperparameter setting has been applied throughout this research project. Fine-tuning the hyperparameters is expected to have a minor effect on the performance compared to refining our major algorithm design, e.g., introducing the subtree selector and the $\ell_2$ embedding regularizer, which are discussed in further detail in the ablation studies in appendix C.\nThe penalty for not getting an equivalent expression, $\beta$ (as in Eq. (8)), is set to 0.1. This is motivated by our observation that in the Monte Carlo sampling results, the ratio of non-equivalent to equivalent expressions is roughly 10:1 on a randomly initialized model. Therefore, by matching $\beta$ to this ratio we can balance the reward and penalty and achieve an average reward of around zero, at least for the initial iterations, which is shown to contribute to more stable REINFORCE training. However, please be reminded that $\beta$ is not a crucial parameter, because the baseline removal process in REINFORCE would automatically balance the reward and penalty. 
A good choice of $\beta$ would mostly only benefit the onset of training, when the baseline estimation is not yet accurate. The weight of the embedding similarity loss term is set to 0.1." }, { "heading": "A.4 TRAINING AND INFERENCE", "text": "As mentioned, the training for section 5.1 involves only the first stage of the curriculum learning, whereas the training for section 5.2 involves both stages. Below are the details regarding the training and inference schemes.\nTraining Iterations For the stage-one training in both experiments, we apply an identical setting, where the encoder-decoder model is trained for 10 epochs (90,000 steps). The model with the highest hit rate on the validation set is selected for evaluation in section 5.1 as well as for the stage-two training of the curriculum. The stage-two training of the full HISS simplification pipeline takes two weeks on an RTX 2080 for 40 epochs (400k steps).\nBeam Search The beam search algorithm is performed as follows. In the beginning, the top k choices of the root node with the highest probability are decoded. Then, for each choice of root node value, the next step is to decode the top k choices of each child node of the root node. Since the decoding processes of different child nodes of the same parent node are independent, we then take the Cartesian product of all the k choices for each node at the current step, and preserve the top k combinations with the highest probability. Repeating this way, at step t, beam search decodes the k highest-probability trees up to depth t. Finally, the top s (s ≤ k) expressions are backtracked and used to estimate the expected reward of the model. The probabilities of beam-search-decoded expressions are re-normalized according to Bunel et al. (2018). In all the experiments, we set the beam size k = 20 and s = 20.\nConstant Folding Since the Halide dataset contains a large number of constant values, we apply a technique called constant folding to both the stage-two training and the inference of HISS, as well as to all other baselines. Specifically, once the expression is rewritten by the neural network in the symbolic domain, we check whether all the leaf nodes in the subtree are constants. If so, the expression is executed and replaced by a new single node with the execution result (a sketch is given below). Constant folding is applied in both training and inference." }, { "heading": "B ADDITIONAL EXPERIMENT RESULTS", "text": "" }, { "heading": "B.1 ATTENTION VISUALIZATION", "text": "To understand the attention mechanism, we visualize the attention to the input sequence, as shown in figure 4. We find that when decoding an operator, the attention tends to be flat (with a few exceptions in the right two figures), because the model needs to understand the overall logic. This is different from machine translation or summarization, where the output attention for a single word is usually focused on several input tokens. On the other hand, when decoding a variable, the model attends sharply to the corresponding variable in the input." }, { "heading": "B.2 EXAMPLES OF INVOLVED SIMPLIFICATION RULES", "text": "In order to further appreciate the ability of HISS in finding equivalences, in addition to the rules listed in table 1, we list in table 4 some more involved simplification rules discovered by HISS on randomly generated expressions. In fact, it took the authors quite a while to figure out the equivalences. 
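A minimal sketch of the constant-folding step described above, reusing the hypothetical ExprNode from the earlier tree sketch and ignoring NULL padding for brevity:

```python
def is_constant(symbol):
    return symbol.lstrip("-").isdigit() or symbol in ("True", "False")

def fold_constants(node, evaluate):
    """Post-order constant folding: after children are folded, any node whose
    children are all constant leaves is executed and replaced by a single
    node holding the result (so all-constant subtrees collapse bottom-up)."""
    node.children = [fold_constants(c, evaluate) for c in node.children]
    if node.children and all(not c.children and is_constant(c.symbol)
                             for c in node.children):
        return ExprNode(str(evaluate(node, {})))  # no variables remain
    return node
```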
These rules are hardly useful in practice, because no human would code in this way, but they are a vivid illustration of the advantage of HISS in finding powerful simplifications beyond human knowledge." }, { "heading": "B.3 ROBUSTNESS AGAINST RANDOM INITIALIZATION", "text": "To assess whether the performance of HISS is robust to random model initialization, we perform the same experiment as in section 5.1 eight times with different random initializations and compute the mean and standard deviation of the three metrics, which are shown in table 5. Compared to the absolute value of the mean, the standard deviation is very small, which shows that the random initialization has a minor influence on the performance of HISS." }, { "heading": "C ABLATION STUDIES", "text": "In this section, we introduce a set of ablation studies that investigate the significance of the major components of HISS in terms of their contribution to performance. The major components of interest are the subtree embedding similarity loss as in Eq. (9), the subtree selector as introduced in section 3.3, and the tree-LSTM encoder-decoder architecture as in sections 3.2 and 3.4.\nAs an overview, figure 5 compares the performance of HISS and the following variants of HISS on the Halide dataset.\n• No-Embed-Loss: The original HISS trained without the subtree embedding similarity loss.\n• No-Selector: The original HISS without the subtree selector.\n• Linear-LSTM: The Tree-LSTM structure is removed and a simple linear LSTM is used instead. The embedding loss as well as the subtree selector are no longer applicable and are therefore also removed.\nOther than the aforementioned variations, all the experiment settings are identical to the experiment in section 5.2. As can be seen in figure 5, without the embedding similarity loss or the subtree selector, the performance is significantly compromised. Furthermore, removing the tree-LSTM structure leads to an almost complete failure. The subsequent subsections further investigate why each of these modules has such a significant impact on the performance." }, { "heading": "C.1 SUBTREE EMBEDDING SIMILARITY", "text": "We would like to investigate the specific effects that introducing the embedding similarity loss brings.\nThe direct goal of the embedding similarity loss is to better cluster the embeddings whose corresponding expressions are equivalent. Therefore, we would like to first check whether this direct goal is achieved. To evaluate this, in the experiment described in section 5.1, we select the six most-populated subsets of equivalent expressions in the test set and evaluate the similarity of the embeddings computed by HISS in two ways. First, the embeddings are projected to a two-dimensional space using t-SNE (Maaten & Hinton, 2008), which forms a scatter plot as in Figure 6(a-1). The points corresponding to equivalent expressions are shown in the same color. As can be seen, the embeddings of equivalent expressions are highly clustered. Notice that this result is on the low-dimensional projection of the embedding. To better evaluate their similarity in the original space, we compare their inter- and intra-subset distances. The inter-/intra-subset distance of a subset is defined as the Euclidean distance between the centroid of the subset and the samples outside/within the subset. Figure 6(a-2) illustrates the box plot of these distances. As shown, there is a significant difference between intra- and inter-subset distances. 
Except for the first subset, the quartile intervals are well separated.\nFigures 6(b-1) and (b-2) show the same plots for the No-Embed-Loss model, i.e., the HISS variant that is trained without the embedding similarity loss. As can be seen, the scatter points are apparently less well clustered, and the difference between inter- and intra-subset distances, although it still exists, is smaller. Therefore, we can draw two conclusions. First, even without the embedding similarity loss, HISS is still able to learn embeddings that are somewhat clustered according to equivalence, which shows that the goal of finding equivalent expressions roughly aligns with the need to cluster the embeddings based on expression equivalence. Second, the embedding similarity loss can improve embedding clustering.\nHowever, what we are more interested in is why an improved clustering of the embeddings would lead to a significant performance gain. To answer this question, figure 7 compares the average reward as a function of the training epoch for HISS and its variant without the embedding similarity loss, which gives us some very interesting insights. Notice that despite their initial difference, which is due to the random initialization, both reward curves reach the same plateau at around epoch seven, where they both become stagnant for a while. However, with the help of the embedding similarity loss, HISS is finally able to escape from the plateau and reach a higher reward level, whereas the variant without the embedding similarity loss gets trapped in the plateau. This result suggests that the embedding similarity loss provides an extra training signal that addresses the convergence issue of REINFORCE." }, { "heading": "C.2 SUBTREE SELECTOR", "text": "We have already demonstrated the inner workings of the subtree selector in section 5.3 and shown in figure 5 that the subtree selector is indispensable for the good performance of HISS. Here we would like to intuitively explain why this is the case. Table 6 compares the simplification traces of HISS with and without the subtree selector on the following expression:\n((v1+v2)-7)≤(((((max(v1,16)+18)/8)*8)+(v1+v2))-27) (11)\nAs can be seen, HISS with the subtree selector can first offset the ‘*8’ and ‘/8’ terms in the subexpression (((max(v1,16)+18)/8)*8), before it applies the cancellation rule to merge the constants ‘-7’ and ‘-27’. On the other hand, HISS without the subtree selector is unable to identify the reducible subexpression, and so it only applies the cancellation rule. This result shows that the reason why the subtree selector helps is that it can thoroughly check each subexpression, and therefore contributes to a better simplification. Without the subtree selector, the algorithm is prone to overlooking small subexpressions that are reducible." }, { "heading": "C.3 TREE LSTM VS. LINEAR LSTM", "text": "We can see the conspicuous disadvantage of the Linear-LSTM model from figure 5. To illustrate the fundamental issue with the Linear-LSTM model, we sampled some outputs of the model trained on the Traverse Equivalence dataset, listed in table 7. The model was trained with REINFORCE for 200k steps, with two different reward settings: 1) the equivalence reward as in Eq. 
8; 2) the equivalence reward plus a valid expression reward, which is an additional reward of 0.1 if the decoded sequence is syntactically correct.\nAs can be seen in table 7, when trained with only the equivalence reward, the linear decoder is unable to decode even a valid expression, let alone generate an expression equivalent to the input. So the linear decoder could hardly converge and learn useful information from the reward, which was almost always a negative constant. As a remedy, we can add a valid expression reward to guide the linear LSTM to generate valid expressions. However, as can be observed in table 7, under this additional reward, the model overfits this reward by only generating short valid expressions, which are not equivalent to the input. This is because short expressions that contain only one variable or constant have a higher generation probability than longer valid expressions. During training, whenever the model happened to generate a single-symbol expression, it was rewarded with a positive valid expression reward, so the behavior of generating only a single constant or variable was strengthened. Still, the probability that a random variable is equivalent to the input expression is small, and therefore the model can hardly be guided by the equivalence reward, and focuses only on generating short but valid expressions.\nThis experiment demonstrates the advantage of using the tree LSTM, which is guaranteed to generate syntactically correct expressions, and which has a much higher probability of hitting an equivalent expression. Therefore, the tree LSTM can be much better guided by the equivalence reward." } ]
2020
DEEP SYMBOLIC SUPEROPTIMIZATION WITHOUT HUMAN KNOWLEDGE
SP:2fa2a5ffa0193c0e5840bd18dc500739d2c369e0
[ "This work presents a simple technique for tuning the learning rate for Neural Network training when under a \"budget\" -- the budget here is specified as a fixed number of epochs that is expected to be a small fraction of the total number of epochs required to achieve maximum accuracy. The main contribution of this paper is in showing that a simpler linear decay schedule that goes to zero at the end of the proposed budget achieves good performance. The paper proposes a framework called budget-aware schedule which represents any learning rate schedule where the ratio of learning rate at time `'t' base learning rate is only a function of the ratio of 't' to total budget 'T'. In this family of schedules, the paper shows that a simple linear decay works best for all budgets. In the appendix, the authors compare their proposed schedule with adaptive techniques and show that under a given budget, it outperforms latets adaptive techniques like adabound, amsgrad, etc.", "This paper analyzed which learning rate schedule (LRS) should be used when the budget (number of iteration) is limited. First, the authors have introduced the concept of BAS (Budget-Aware Schedule). Various LRSs are classified, and it is experimentally shown that the LRSs based on BAS performed better. Among them, the performance of the linear decay method was shown to be simple and robust." ]
In most practical settings and theoretical analyses, one assumes that a model can be trained until convergence. However, the growing complexity of machine learning datasets and models may violate such assumptions. Indeed, current approaches for hyper-parameter tuning and neural architecture search tend to be limited by practical resource constraints. Therefore, we introduce a formal setting for studying training under the non-asymptotic, resource-constrained regime, i.e., budgeted training. We analyze the following problem: “given a dataset, algorithm, and fixed resource budget, what is the best achievable performance?” We focus on the number of optimization iterations as the representative resource. Under such a setting, we show that it is critical to adjust the learning rate schedule according to the given budget. Among budget-aware learning schedules, we find simple linear decay to be both robust and high-performing. We support our claim through extensive experiments with state-of-the-art models on ImageNet (image classification), Kinetics (video classification), MS COCO (object detection and instance segmentation), and Cityscapes (semantic segmentation). We also analyze our results and find that the key to a good schedule is budgeted convergence, a phenomenon whereby the gradient vanishes at the end of each allowed budget. We also revisit existing approaches for fast convergence and show that budget-aware learning schedules readily outperform such approaches under the (practical but under-explored) budgeted training setting.
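The "simple linear decay" the abstract refers to can be written as a one-line budget-aware schedule; the following is a generic illustration (the function name is hypothetical, and this is not code taken from the paper):

```python
def linear_budget_lr(t, budget, base_lr):
    """Budget-aware linear decay: lr(t)/base_lr depends only on t/budget and
    reaches exactly zero at the end of the allowed training budget."""
    return base_lr * max(0.0, 1.0 - t / float(budget))
```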
[ { "affiliations": [], "name": "Mengtian Li" }, { "affiliations": [], "name": "Ersin Yumer" } ]
[ { "authors": [ "Olivier Bachem", "Mario Lucic", "Andreas Krause" ], "title": "Practical coreset constructions for machine learning", "venue": "arXiv preprint arXiv:1703.06476,", "year": 2017 }, { "authors": [ "Mariusz Bojarski", "Davide Del Testa", "Daniel Dworakowski", "Bernhard Firner", "Beat Flepp", "Prasoon Goyal", "Lawrence D. Jackel", "Miguel Pozuelo Monfort", "Urs Muller", "Jiakai Zhang", "Xin Zhang", "Junbo Jake Zhao", "Karol Zieba" ], "title": "End to end learning for self-driving", "venue": "cars. CoRR,", "year": 2016 }, { "authors": [ "Léon Bottou" ], "title": "Stochastic gradient learning in neural networks", "venue": "In Proceedings of Neuro-Nı̂mes 91, Nimes, France,", "year": 1991 }, { "authors": [ "Léon Bottou" ], "title": "Online algorithms and stochastic approximations", "venue": "URL http://leon.bottou.org/papers/bottou-98x. revised,", "year": 1998 }, { "authors": [ "Léon Bottou", "Frank E. Curtis", "Jorge Nocedal" ], "title": "Optimization methods for large-scale machine learning", "venue": "SIAM Review,", "year": 2018 }, { "authors": [ "Han Cai", "Tianyao Chen", "Weinan Zhang", "Yong Yu", "Jun Wang" ], "title": "Efficient architecture search by network transformation", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Shengcao Cao", "Xiaofang Wang", "Kris M Kitani" ], "title": "Learnable embedding space for efficient neural architecture compression", "venue": null, "year": 2019 }, { "authors": [ "João Carreira", "Andrew Zisserman" ], "title": "Quo vadis, action recognition? a new model and the kinetics dataset", "venue": null, "year": 2017 }, { "authors": [ "Liang-Chieh Chen", "George Papandreou", "Iasonas Kokkinos", "Kevin Murphy", "Alan L. Yuille" ], "title": "Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs", "venue": null, "year": 2018 }, { "authors": [ "Djork-Arné Clevert", "Thomas Unterthiner", "Sepp Hochreiter" ], "title": "Fast and accurate deep network learning by exponential linear units (elus)", "venue": null, "year": 2016 }, { "authors": [ "Marius Cordts", "Mohamed Omran", "Sebastian Ramos", "Timo Rehfeld", "Markus Enzweiler", "Rodrigo Benenson", "Uwe Franke", "Stefan Roth", "Bernt Schiele" ], "title": "The cityscapes dataset for semantic urban scene understanding", "venue": null, "year": 2016 }, { "authors": [ "Felix Dräxler", "Kambis Veschgini", "Manfred Salmhofer", "Fred A. Hamprecht" ], "title": "Essentially no barriers in neural network energy landscape", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Simon S Du", "Jason D Lee", "Haochuan Li", "Liwei Wang", "Xiyu Zhai" ], "title": "Gradient descent finds global minima of deep neural networks", "venue": "arXiv preprint arXiv:1811.03804,", "year": 2018 }, { "authors": [ "Simon S. Du", "Jason D. Lee", "Yuandong Tian", "Barnabás Póczos", "Aarti Singh" ], "title": "Gradient descent learns one-hidden-layer cnn: Don’t be afraid of spurious local minima", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Meherwar Fatima", "Maruf Pasha" ], "title": "Survey of machine learning algorithms for disease diagnostic", "venue": "Journal of Intelligent Learning Systems and Applications,", "year": 2017 }, { "authors": [ "Hamid Reza Feyzmahdavian", "Arda Aytekin", "Mikael Johansson" ], "title": "An asynchronous mini-batch algorithm for regularized stochastic optimization", "venue": "IEEE Transactions on Automatic Control,", "year": 2016 }, { "authors": [ "Timur Garipov", "Pavel Izmailov", "Dmitrii Podoprikhin", "Dmitry P. 
Vetrov", "Andrew Gordon Wilson" ], "title": "Loss surfaces, mode connectivity, and fast ensembling of dnns", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "Akhilesh Gotmare", "Nitish Shirish Keskar", "Caiming Xiong", "Richard Socher" ], "title": "A closer look at deep learning heuristics: Learning rate restarts, warmup and distillation", "venue": null, "year": 2019 }, { "authors": [ "Priya Goyal", "Piotr Dollár", "Ross Girshick", "Pieter Noordhuis", "Lukasz Wesolowski", "Aapo Kyrola", "Andrew Tulloch", "Yangqing Jia", "Kaiming He" ], "title": "Accurate, large minibatch sgd: Training imagenet in 1 hour", "venue": "arXiv preprint arXiv:1706.02677,", "year": 2017 }, { "authors": [ "Moritz Hardt", "Benjamin Recht", "Yoram Singer" ], "title": "Train faster, generalize better: Stability of stochastic gradient descent", "venue": "In ICML,", "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "CVPR, pp", "year": 2016 }, { "authors": [ "Bo Yang Hsueh", "Wei Li", "I-Chen Wu" ], "title": "Stochastic gradient descent with hyperbolic-tangent decay on classification", "venue": null, "year": 2019 }, { "authors": [ "Gao Huang", "Yixuan Li", "Geoff Pleiss", "Zhuang Liu", "John E. Hopcroft", "Kilian Q. Weinberger" ], "title": "Snapshot ensembles: Train 1, get m for free. ICLR, 2017a", "venue": null, "year": 2017 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens van der Maaten", "Kilian Q. Weinberger" ], "title": "Densely connected convolutional networks", "venue": null, "year": 2017 }, { "authors": [ "Yangqing Jia", "Evan Shelhamer", "Jeff Donahue", "Sergey Karayev", "Jonathan Long", "Ross Girshick", "Sergio Guadarrama", "Trevor Darrell" ], "title": "Caffe: Convolutional architecture for fast feature embedding", "venue": "In ACM Multimedia,", "year": 2014 }, { "authors": [ "Will Kay", "João Carreira", "Karen Simonyan", "Brian Zhang", "Chloe Hillier", "Sudheendra Vijayanarasimhan", "Fabio Viola", "Tim Green", "Trevor Back", "Apostol Natsev", "Mustafa Suleyman", "Andrew Zisserman" ], "title": "The kinetics human action video", "venue": "dataset. CoRR,", "year": 2017 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "ICLR,", "year": 2015 }, { "authors": [ "Scott Kirkpatrick", "C Daniel Gelatt", "Mario P Vecchi" ], "title": "Optimization by simulated annealing", "venue": null, "year": 1983 }, { "authors": [ "Robert D. Kleinberg", "Yuanzhi Li", "Yang Yuan" ], "title": "An alternative view: When does sgd escape local minima", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E. Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": null, "year": 2012 }, { "authors": [ "Alina Kuznetsova", "Hassan Rom", "Neil Alldrin", "Jasper Uijlings", "Ivan Krasin", "Jordi Pont-Tuset", "Shahab Kamali", "Stefan Popov", "Matteo Malloci", "Tom Duerig", "Vittorio Ferrari" ], "title": "The open images dataset v4: Unified image classification, object detection, and visual relationship detection at scale", "venue": null, "year": 2018 }, { "authors": [ "Lisha Li", "Kevin G. Jamieson", "Giulia DeSalvo", "Afshin Rostamizadeh", "Ameet S. 
Talwalkar" ], "title": "Hyperband: A novel bandit-based approach to hyperparameter optimization", "venue": null, "year": 2017 }, { "authors": [ "Xiangru Lian", "Ce Zhang", "Huan Zhang", "Cho-Jui Hsieh", "Wei Zhang", "Ji Liu" ], "title": "Can decentralized algorithms outperform centralized algorithms? a case study for decentralized parallel stochastic gradient descent", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Tsung-Yi Lin", "Michael Maire", "Serge Belongie", "James Hays", "Pietro Perona", "Deva Ramanan", "Piotr Dollár", "C Lawrence Zitnick" ], "title": "Microsoft COCO: Common objects in context", "venue": "In ECCV,", "year": 2014 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "Darts: Differentiable architecture", "venue": "search. ICLR,", "year": 2019 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "SGDR: Stochastic gradient descent with warm restarts", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Liangchen Luo", "Yuanhao Xiong", "Yan Liu", "Xu Sun" ], "title": "Adaptive gradient methods with dynamic bound of learning rate", "venue": null, "year": 2019 }, { "authors": [ "W. Ma", "S. Wang", "R. Hu", "Y. Xiong", "R. Urtasun" ], "title": "Deep rigid instance scene flow", "venue": "In CVPR,", "year": 2019 }, { "authors": [ "David McAllester", "Bart Selman", "Henry Kautz" ], "title": "Evidence for invariants in local search", "venue": "In AAAI,", "year": 1997 }, { "authors": [ "Dmytro Mishkin", "Nikolay Sergievskiy", "Jiri Matas" ], "title": "Systematic evaluation of convolution neural network advances on the imagenet", "venue": "Computer Vision and Image Understanding,", "year": 2017 }, { "authors": [ "Arkadi Nemirovski", "Anatoli Juditsky", "Guanghui Lan", "Alexander Shapiro" ], "title": "Robust stochastic approximation approach to stochastic programming", "venue": "SIAM Journal on Optimization,", "year": 2009 }, { "authors": [ "Yaghout Nourani", "Bjarne Andresen" ], "title": "A comparison of simulated annealing cooling strategies", "venue": "Journal of Physics A: Mathematical and General,", "year": 1998 }, { "authors": [ "Hieu Pham", "Melody Y Guan", "Barret Zoph", "Quoc V Le", "Jeff Dean" ], "title": "Efficient neural architecture search via parameter sharing", "venue": null, "year": 2018 }, { "authors": [ "Piyush Rai", "Hal Daumé", "Suresh Venkatasubramanian" ], "title": "Streamed learning: one-pass svms", "venue": "In IJCAI,", "year": 2009 }, { "authors": [ "Esteban Real", "Alok Aggarwal", "Yanping Huang", "Quoc V Le" ], "title": "Regularized evolution for image classifier architecture", "venue": null, "year": 2019 }, { "authors": [ "Sashank J Reddi", "Satyen Kale", "Sanjiv Kumar" ], "title": "On the convergence of adam and beyond", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Herbert Robbins", "Sutton Monro" ], "title": "A stochastic approximation method", "venue": "The annals of mathematical statistics,", "year": 1951 }, { "authors": [ "Ozan Sener", "Silvio Savarese" ], "title": "Active learning for convolutional neural networks: A core-set approach", "venue": null, "year": 2018 }, { "authors": [ "Leslie N. 
Smith" ], "title": "Cyclical learning rates for training neural networks", "venue": "WACV, pp", "year": 2017 }, { "authors": [ "Christian Szegedy", "Wei Liu", "Yangqing Jia", "Pierre Sermanet", "Scott Reed", "Dragomir Anguelov", "Dumitru Erhan", "Vincent Vanhoucke", "Andrew Rabinovich" ], "title": "Going deeper with convolutions", "venue": "In CVPR, pp", "year": 2015 }, { "authors": [ "Christian Szegedy", "Vincent Vanhoucke", "Sergey Ioffe", "Jon Shlens", "Zbigniew Wojna" ], "title": "Rethinking the inception architecture for computer vision", "venue": "In CVPR,", "year": 2016 }, { "authors": [ "Christian Szegedy", "Sergey Ioffe", "Vincent Vanhoucke", "Alexander A Alemi" ], "title": "Inception-v4, inception-resnet and the impact of residual connections on learning", "venue": "In AAAI,", "year": 2017 }, { "authors": [ "Russell H Taylor", "Arianna Menciassi", "Gabor Fichtinger", "Paolo Dario" ], "title": "Medical robotics and computer-integrated surgery", "venue": "Springer handbook of robotics,", "year": 2008 }, { "authors": [ "T. Tieleman", "G. Hinton" ], "title": "RMSProp: Divide the gradient by a running average of its recent magnitude", "venue": "COURSERA: Neural Networks for Machine Learning,", "year": 2012 }, { "authors": [ "Xiaolong Wang", "Ross Girshick", "Abhinav Gupta", "Kaiming He" ], "title": "Non-local neural networks", "venue": null, "year": 2018 }, { "authors": [ "Yu-Xiong Wang", "Deva Ramanan", "Martial Hebert" ], "title": "Growing a brain: Fine-tuning by increasing model capacity", "venue": "In CVPR,", "year": 2017 }, { "authors": [ "Ashia C Wilson", "Rebecca Roelofs", "Mitchell Stern", "Nati Srebro", "Benjamin Recht" ], "title": "The marginal value of adaptive gradient methods in machine learning", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Saining Xie", "Zhuowen Tu" ], "title": "Holistically-nested edge detection", "venue": "In ICCV,", "year": 2015 }, { "authors": [ "Zhichao Yin", "Trevor Darrell", "Fisher Yu" ], "title": "Hierarchical discrete distribution decomposition for match density estimation", "venue": null, "year": 2019 }, { "authors": [ "Martin Zinkevich" ], "title": "Online convex programming and generalized infinitesimal gradient ascent", "venue": "In ICML, pp", "year": 2003 }, { "authors": [ "Barret Zoph", "Quoc V. Le" ], "title": "Neural architecture search with reinforcement learning", "venue": null, "year": 2017 }, { "authors": [ "Cao et al", "Cai" ], "title": "Under this setting, the goal is to rank the performance of different architectures instead of obtaining the best possible accuracy as in the regular case of budgeted training. Then one could ask the question that whether budgeted training techniques help in better predicting the relative rank", "venue": "Real et al.,", "year": 2019 }, { "authors": [ "Adam (Kingma", "Ba" ], "title": "2015)), we further include the classical method RMSprop (Tieleman & Hinton, 2012) and the more recent AdaBound (Luo et al., 2019). We tune these adaptive methods for CIFAR-10 and summarize the results in Fig A. We observe the similar conclusion that budgetaware linear schedule outperforms adaptive methods for all given budgets. Like SGD, those adaptive learning rate methods also takes input a parameter of base learning", "venue": null, "year": 2019 } ]
[ { "heading": null, "text": "In most practical settings and theoretical analyses, one assumes that a model can be trained until convergence. However, the growing complexity of machine learning datasets and models may violate such assumptions. Indeed, current approaches for hyper-parameter tuning and neural architecture search tend to be limited by practical resource constraints. Therefore, we introduce a formal setting for studying training under the non-asymptotic, resource-constrained regime, i.e., budgeted training. We analyze the following problem: “given a dataset, algorithm, and fixed resource budget, what is the best achievable performance?” We focus on the number of optimization iterations as the representative resource. Under such a setting, we show that it is critical to adjust the learning rate schedule according to the given budget. Among budget-aware learning schedules, we find simple linear decay to be both robust and high-performing. We support our claim through extensive experiments with state-of-the-art models on ImageNet (image classification), Kinetics (video classification), MS COCO (object detection and instance segmentation), and Cityscapes (semantic segmentation). We also analyze our results and find that the key to a good schedule is budgeted convergence, a phenomenon whereby the gradient vanishes at the end of each allowed budget. We also revisit existing approaches for fast convergence and show that budgetaware learning schedules readily outperform such approaches under (the practical but under-explored) budgeted training setting." }, { "heading": "1 INTRODUCTION", "text": "Deep neural networks have made an undeniable impact in advancing the state-of-the-art for many machine learning tasks. Improvements have been particularly transformative in computer vision (Huang et al., 2017b; He et al., 2017). Much of these performance improvements were enabled by an ever-increasing amount of labeled visual data (Russakovsky et al., 2015; Kuznetsova et al., 2018) and innovations in training architectures (Krizhevsky et al., 2012; He et al., 2016).\nHowever, as training datasets continue to grow in size, we argue that an additional limiting factor is that of resource constraints for training. Conservative prognostications of dataset sizes – particularly for practical endeavors such as self-driving cars (Bojarski et al., 2016), assistive medical robots (Taylor et al., 2008), and medical analysis (Fatima & Pasha, 2017) – suggest one will train on datasets orders of magnitude larger than those that are publicly available today. Such planning efforts will become more and more crucial, because in the limit, it might not even be practical to visit every training example before running out of resources (Bottou, 1998; Rai et al., 2009).\nWe note that resource-constrained training already is implicitly widespread, as the vast majority of practitioners have access to limited compute. This is particularly true for those pursuing research directions that require a massive number of training runs, such as hyper-parameter tuning (Li et al., 2017) and neural architecture search (Zoph & Le, 2017; Cao et al., 2019; Liu et al., 2019).\n†Work done while at Argo AI.\nInstead of asking “what is the best performance one can achieve given this data and algorithm?”, which has been the primary focus in the field so far, we decorate this question with budgeted training constraints as follows: “what is the best performance one can achieve given this data and algorithm within the allowed budget?”. 
Here, the allowed budget refers to a limitation on the total time, compute, or cost spent on training. More specifically, we focus on limiting the number of iterations. This allows us to abstract out the specific constraint without loss of generality, since any one of the aforementioned constraints could be converted to a finite iteration limit. We make the underlying assumption that the network architecture is constant throughout training, though it may be interesting to entertain changes in architecture during training (Rusu et al., 2016; Wang et al., 2017).

Much of the theoretical analysis of optimization algorithms focuses on asymptotic convergence and optimality (Robbins & Monro, 1951; Nemirovski et al., 2009; Bottou et al., 2018), which implicitly makes use of an infinite compute budget. That said, there exists a wide body of work (Zinkevich, 2003; Kingma & Ba, 2015; Reddi et al., 2018; Luo et al., 2019) that provides performance bounds that depend on the iteration number and apply even in the non-asymptotic regime. Our work differs in its exploration of maximizing performance for a fixed number of iterations. Importantly, the globally optimal solution may not even be achievable in our budgeted setting.

Given a limited budget, one obvious strategy might be data subsampling (Bachem et al., 2017; Sener & Savarese, 2018). However, we discover that a much more effective, simpler, and under-explored strategy is adopting budget-aware learning rate schedules: if one knows that training is limited to a single epoch, one should tune the learning schedule accordingly. Such budget-aware schedules have been proposed in previous work (Feyzmahdavian et al., 2016; Lian et al., 2017), but often for a fixed learning rate that depends on dataset statistics. In this paper, we specifically point out that linearly decaying the learning rate to 0 at the end of the budget may be more robust than more complicated strategies suggested in prior work. Though we are motivated by budget-aware training, we find that a linear schedule is quite competitive for general learning settings as well. We verify our findings with state-of-the-art models on ImageNet (image classification), Kinetics (video classification), MS COCO (object detection and instance segmentation), and Cityscapes (semantic segmentation).

We conduct several diagnostic experiments that analyze learning rate decays under the budgeted setting. We first observe a statistical correlation between the learning rate and the full gradient magnitude (over the entire dataset). Decreasing the learning rate empirically results in a decrease in the full gradient magnitude. Eventually, as the former goes to zero, the latter vanishes as well, suggesting that the optimization has reached a critical point, if not a local minimum1. We call this phenomenon budgeted convergence and we find it generalizes across budgets. On one hand, it implies that one should decay the learning rate to zero at the end of the training, even given a small budget. On the other hand, it implies one should not aggressively decay the learning rate early in the optimization (as is the case with an exponential schedule) since this may slow down later progress.
Finally, we show that linear budget-aware schedules outperform recently-proposed fast-converging methods that make use of adaptive learning rates and restarts.

Our main contributions are as follows:

• We introduce a formal setting for budgeted training based on training iterations and provide an alternative perspective for existing learning rate schedules.
• We discover that budget-aware schedules are handy solutions to budgeted training. Specifically, our proposed linear schedule is simpler, more robust, and more effective than prior approaches, for both budgeted and general training.
• We provide an empirical justification of the effectiveness of learning rate decay based on the correlation between the learning rate and the full gradient norm.

1Whether such a solution is exactly a local minimum or not is debatable (see Sec 2)." }, { "heading": "2 RELATED WORK", "text": "Learning rates. Stochastic gradient descent dates back to Robbins & Monro (1951). The core is its update step: w_t = w_{t−1} − α_t g_t, where t (from 1 to T) is the iteration, w are the parameters to be learned, g is the gradient estimator of the objective function2 F, and α_t is the learning rate, also known as the step size. Given a base learning rate α_0, we can define the ratio β_t = α_t / α_0. Then the set {β_t}_{t=1}^{T} is called the learning rate schedule, which specifies how the learning rate should vary over the course of training. Our definition differs slightly from prior art as it separates the base learning rate and the learning rate schedule. Learning rates are well studied for (strongly) convex cost surfaces and we include a brief review in Appendix H.

Learning rate schedule for deep learning. In deep learning, there is no consensus on the exact role of the learning rate. Most theoretical analysis makes the assumption of a small and constant learning rate (Du et al., 2018a;b; Hardt et al., 2016). For variable rates, one hypothesis is that large rates help move the optimization over large energy barriers while small rates help converge to a local minimum (Loshchilov & Hutter, 2017; Huang et al., 2017a; Kleinberg et al., 2018). Such a hypothesis is questioned by recent analysis on mode connectivity, which has revealed that there does exist a descent path between solutions that were previously thought to be isolated local minima (Garipov et al., 2018; Dräxler et al., 2018; Gotmare et al., 2019). Despite a lack of theoretical explanation, the community has adopted a variety of heuristic schedules for practical purposes, two of which are particularly common:

• step decay: drop the learning rate by a multiplicative factor γ after every d epochs. The default for γ is 0.1, but d varies significantly across tasks.
• exponential: β_t = γ^t. There is no default parameter for γ and it requires manual tuning.

State-of-the-art codebases for standard vision benchmarks tend to employ step decay (Xie & Tu, 2015; Huang et al., 2017b; He et al., 2017; Carreira & Zisserman, 2017; Wang et al., 2018; Yin et al., 2019; Ma et al., 2019), whereas exponential decay has been successfully used to train Inception networks (Szegedy et al., 2015; 2016; 2017). In spite of their prevalence, these heuristics have not been well studied. Recent work proposes several new schedules (Loshchilov & Hutter, 2017; Smith, 2017; Hsueh et al., 2019), but much of this past work limits its evaluation to CIFAR and ImageNet. For example, SGDR (Loshchilov & Hutter, 2017) advocates for learning-rate restarts based on results on CIFAR; however, we find the unexplained form of cosine decay in SGDR to be more effective than the restart technique. Notably, Mishkin et al. (2017) demonstrate the effectiveness of linear rate decay with CaffeNet on downsized ImageNet. In our work, we rigorously evaluate on 5 standard vision benchmarks with state-of-the-art networks and under various budgets. Gotmare et al. (2019) also analyze learning rate restarts and, in addition, the warm-up technique, but do not analyze the specific form of learning rate decay.
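To make the two common heuristics above concrete, the following is a minimal sketch (ours, for illustration only; the function names, epoch length, and parameter values are assumptions rather than settings from any particular codebase):

def step_decay(t, iters_per_epoch, d=30, gamma=0.1):
    # Drop the learning rate by a multiplicative factor gamma
    # after every d epochs; t counts iterations.
    epochs = t // iters_per_epoch
    return gamma ** (epochs // d)

def exponential(t, gamma=0.97):
    # beta_t = gamma^t; here t is assumed to be counted in epochs,
    # since a per-iteration gamma this small would vanish almost immediately.
    return gamma ** t

# The learning rate actually applied at step t is alpha_0 * beta_t, e.g.:
alpha_0 = 0.1
print(alpha_0 * step_decay(t=35 * 391, iters_per_epoch=391))  # after 35 epochs: 0.01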
Adaptive learning rates. Adaptive learning rate methods (Tieleman & Hinton, 2012; Kingma & Ba, 2015; Reddi et al., 2018; Luo et al., 2019) adjust the learning rate according to the local statistics of the cost surface. Despite having better theoretical bounds under certain conditions, they do not generalize as well as momentum SGD for benchmark tasks that are much larger than CIFAR (Wilson et al., 2017). We offer new insights by evaluating them under the budgeted setting. We show that fast descent can be trivially achieved through budget-aware schedules and that aggressive early descent is not desirable for achieving good performance in the end.

2Note that g can be based on a single example, a mini-batch, the full training set, or the true data distribution. In most practical settings, momentum SGD is used, but we omit the momentum here for simplicity." }, { "heading": "3 LEARNING RATES AND BUDGETS", "text": "" }, { "heading": "3.1 BUDGET-AWARE SCHEDULES", "text": "Learning rate schedules are often defined assuming unlimited resources. As we argue, resource constraints are an undeniable practical aspect of learning. One simple approach for modifying an existing learning rate schedule to a budgeted setting is early stopping. Fig 1 shows that one can dramatically improve the results of early stopping by more than 60% by tuning the learning rate for the appropriate budget. To do so, we simply reparameterize the learning rate sequence with a quantity not only dependent on the absolute iteration t, but also on the training budget T:

Definition (Budget-Aware Schedule). Let T be the training budget and t the current step; then the training progress is p = t/T. A budget-aware learning rate schedule is

β_p : p ↦ f(p), (1)

where f(p) is the ratio of the learning rate at step t to the base learning rate α_0.

At first glance, it might be counter-intuitive for a schedule to not depend on T. For example, for a task that is usually trained with 200 epochs, training 2 epochs will end up at a solution very distant from the global optimum no matter the schedule. In such cases, conventional wisdom from convex optimization suggests that one should employ a large learning rate (constant schedule) that efficiently descends towards the global optimum. However, in the non-convex case, we observe empirically that a better strategy is to systematically decay the learning rate in proportion to the total iteration budget.

Budget-Aware Conversion (BAC). Given a particular rate schedule β_t = f(t), one simple method for making it budget-aware is to rescale it, i.e., β_p = f(pT_0), where T_0 is the budget used for the original schedule. For instance, a step decay for 90 epochs with two drops at epoch 30 and epoch 60 will convert to a schedule that drops at 1/3 and 2/3 of training progress. Analogously, an exponential schedule 0.99^t for 200 epochs will be converted into (0.99^{200})^p.
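As an illustration, a small sketch of BAC (our own pseudocode made runnable; the function bac and the 90-epoch example are assumptions for demonstration):

def bac(f, T0):
    # Budget-Aware Conversion: rescale a schedule f defined over an
    # original budget of T0 (epochs or iterations) into a function of
    # training progress p = t / T, valid for any new budget T.
    return lambda p: f(p * T0)

# A 90-epoch step decay with drops at epochs 30 and 60:
def step90(t, gamma=0.1):
    return gamma ** int(t // 30)

step_any_budget = bac(step90, T0=90)
assert step_any_budget(0.5) == 0.1                # past the drop at 1/3 of progress
assert abs(step_any_budget(0.9) - 0.01) < 1e-12   # past the drop at 2/3 of progress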
It is worth noting that such an adaptation strategy already exists in well-known codebases (He et al., 2017) for training with limited schedules. Our experiments confirm the effectiveness of BAC as a general strategy for converting many standard schedules to be budget-aware (Tab 1). For our remaining experiments, we regard BAC as a known technique and apply it to our baselines by default.

Recent schedules: Interestingly, several recent learning rate schedules are implicitly defined as a function of progress p = t/T, and so are budget-aware by our definition:

• poly (Jia et al., 2014): β_p = (1 − p)^γ. No parameter other than γ = 0.9 is used in published work.
• cosine (Loshchilov & Hutter, 2017): β_p = η + (1/2)(1 − η)(1 + cos(πp)). η specifies a lower bound for the learning rate, which defaults to zero.
• htd (Hsueh et al., 2019): β_p = η + (1/2)(1 − η)(1 − tanh(L + (U − L)p)). Here η has the same role as in cosine. It is reported that L = −6 and U = 3 performs the best.

The poly schedule is a feature in Caffe (Jia et al., 2014) and adopted by the semantic segmentation community (Chen et al., 2018; Zhao et al., 2017). The cosine schedule is a byproduct of work that promotes learning rate restarts (Loshchilov & Hutter, 2017). The htd schedule was recently proposed (Hsueh et al., 2019), which, however, contains only limited empirical evaluation. None of these analyze their budget-aware property or provide intuition for such forms of decay. These schedules were treated as “yet another schedule”. However, our definition of budget-aware makes these schedules stand out as a general family." }, { "heading": "3.2 LINEAR SCHEDULE", "text": "Inspired by existing budget-aware schedules, we borrow an even simpler schedule from the simulated annealing literature (Kirkpatrick et al., 1983; McAllester et al., 1997; Nourani & Andresen, 1998)3:

linear: β_p = 1 − p. (2)

In Fig 2, we compare the linear schedule with various existing schedules under the budget-aware setting. Note that this linear schedule is completely parameter-free. This property is particularly desirable in budgeted training, where little budget exists for tuning such a parameter. The excellent generalization of the linear schedule across budgets (shown in the next section) might imply that the cost surface of deep learning is to some degree self-similar. Note that a linear schedule, together with other recent budget-aware schedules, produces a constant learning rate for any fixed t in the asymptotic limit, i.e., lim_{T→∞} (1 − t/T) = 1. Consequently, such practically high-performing schedules tend to be ignored in theoretical convergence analysis (Robbins & Monro, 1951; Bottou et al., 2018).

3A link between SGD and simulated annealing was recognized decades ago, where the learning rate plays the role of temperature control (Bottou, 1991). Therefore, cooling schedules in simulated annealing can be transferred into learning rate schedules for SGD.
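For reference, a minimal sketch of this budget-aware family written directly as functions of progress p = t/T, with one possible way of wiring the linear schedule into PyTorch via LambdaLR (the model and optimizer settings here are placeholders, not the exact configurations used in Sec 4):

import math
import torch

def linear(p):                        # Eq. (2)
    return 1.0 - p

def poly(p, gamma=0.9):               # Jia et al., 2014
    return (1.0 - p) ** gamma

def cosine(p, eta=0.0):               # Loshchilov & Hutter, 2017
    return eta + 0.5 * (1.0 - eta) * (1.0 + math.cos(math.pi * p))

def htd(p, eta=0.0, L=-6.0, U=3.0):   # Hsueh et al., 2019
    return eta + 0.5 * (1.0 - eta) * (1.0 - math.tanh(L + (U - L) * p))

# One way to wire a budget-aware schedule into SGD for an iteration budget T.
model = torch.nn.Linear(10, 2)        # placeholder model
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
T = 10000                             # total iteration budget
sched = torch.optim.lr_scheduler.LambdaLR(opt, lambda t: linear(t / T))
# Calling sched.step() once per iteration anneals the lr from 0.1 to 0 over T steps.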
" }, { "heading": "4 EXPERIMENTS", "text": "In this section, we first compare the linear schedule against other existing schedules on the small CIFAR-10 dataset and then on a broad suite of vision benchmarks. The CIFAR-10 experiment is designed to extensively evaluate each learning schedule, while the vision benchmarks are used to verify the observations from CIFAR-10. We provide important implementation settings in the main text while leaving the rest of the details to Appendix K. In addition, we provide in Appendix A an evaluation with a large number of random architectures in the setting of neural architecture search." }, { "heading": "4.1 CIFAR", "text": "CIFAR-10 (Krizhevsky & Hinton, 2009) is a dataset of 60,000 tiny images (32 × 32). Given its small size, it is widely used for validating novel architectures. We follow the standard setup for the dataset split (Huang et al., 2017b), which randomly holds out 5,000 of the 50,000 training images to form the validation set. For each budget, we report the best validation accuracy among epochs up till the end of the budget. We use ResNet-18 (He et al., 2016) as the backbone architecture and utilize SGD with base learning rate 0.1, momentum 0.9, weight decay 0.0005, and a batch size of 128.

We study learning schedules in several groups: (a) constant (equivalent to not using any schedule). (b) & (c) exponential and step decay, both of which are commonly adopted schedules. (d) htd (Hsueh et al., 2019), a quite recent addition not yet adopted in practice. We take the parameters with the best-reported performance (−6, 3). Note that this schedule decays much more slowly initially than the linear schedule (Fig 2). (e) the smooth-decaying schedules (small curvature), which consist of cosine (Loshchilov & Hutter, 2017), poly (Jia et al., 2014) and the linear schedule.

As shown in Tab 2, the group of schedules that are budget-aware by our definition outperforms the other schedules under all budgets. The linear schedule in particular performs best most of the time, including the typical full-budget case. Noticeably, when the exponential schedule is well-tuned for this task (γ = 0.97), it fails to generalize across budgets. In comparison, the budget-aware group does not require tuning but generalizes much better.

Within the budget-aware schedules, cosine, poly and linear achieve very similar results. This is expected due to their numerical similarity at each step (Fig 2). These results might indicate that the key to a robust budgeted schedule is to decay smoothly to zero. Based on these observations and results, we suggest the linear schedule should be the “go-to” budget-aware schedule." }, { "heading": "4.2 VISION BENCHMARKS", "text": "In the previous section, we showed that the linear schedule achieves excellent performance on CIFAR-10, in a relatively toy setting. In this section, we study whether this comparison generalizes to practical large-scale datasets with various state-of-the-art architectures. In particular, we set up experiments to validate the performance of the linear schedule across tasks and budgets.

Ideally, one would like to see the performance of all schedules in Fig 2 on vision benchmarks. Due to resource constraints, we include only the off-the-shelf step decay and the linear schedule. Note that our CIFAR-10 experiment suggests that using cosine and poly will achieve similar performance as linear, and they are already budget-aware schedules given our definition, so we focus on the linear schedule in this section. More evaluation between cosine, poly and linear can be found in Appendix A & D.

We consider the following suite of benchmarks spanning many flagship vision challenges:

Image classification on ImageNet. ImageNet (Russakovsky et al., 2015) is a widely adopted standard for the image classification task. We use ResNet-18 (He et al., 2016) and report the top-1 accuracy on the validation set with the best epoch. We follow the step decay schedule used in (Huang et al., 2017b; PyTorch, 2019), which drops twice at uniform intervals (γ = 0.1 at p ∈ {1/3, 2/3}). We set the full budget to 100 epochs (10 epochs longer than typical) for easier computation of the budget.
Object detection and instance segmentation on MS COCO. MS COCO (Lin et al., 2014) is a widely recognized benchmark for object detection and instance segmentation. We use the standard COCO AP (averaged over IoU thresholds) metric for evaluating bounding box output and instance mask output. The AP of the final model on the validation set is reported in our experiment. We use the challenge winner Mask R-CNN (He et al., 2017) with a ResNet-50 backbone and follow its setup. For training, we adopt the 1x schedule (90k iterations), and the off-the-shelf (He et al., 2017) step decay that drops 2 times with γ = 0.1 at p ∈ {2/3, 8/9}.

Semantic segmentation on Cityscapes. Cityscapes (Cordts et al., 2016) is a dataset commonly used for evaluating semantic segmentation algorithms. It contains high-quality pixel-level annotations of 5k images in urban scenarios. The default evaluation metric is the mIoU (averaged across classes) of the output segmentation map. We use the state-of-the-art model PSPNet (Zhao et al., 2017) with a ResNet-50 backbone, and the full budget is 400 epochs as in the standard setup. The mIoU of the best epoch is reported. Interestingly, unlike the other tasks in this series, this model by default uses the poly schedule. For complete evaluation, we add a step decay that is the same as in our ImageNet experiment in Tab 3 and include the off-the-shelf poly schedule in Tab E.

Video classification on Kinetics with I3D. Kinetics (Kay et al., 2017) is a large-scale dataset of YouTube videos focusing on human actions. We use the 400-category version of the dataset and a variant of I3D (Carreira & Zisserman, 2017) with training and data processing code publicly available (Wang et al., 2018). The top-1 accuracy of the final model is used for evaluating the performance. We follow the 4-GPU 300k-iteration schedule (Wang et al., 2018), which features a step decay that drops 2 times with γ = 0.1 at p ∈ {1/2, 5/6}.

If we factor in the dimension of budgets, Tab 3 shows a clear advantage of the linear schedule over step decay. For example, on ImageNet, linear achieves a 51.5% improvement at 1% of the budget. Next, we consider the full-budget setting, where we simply swap out the off-the-shelf schedule with the linear schedule. We observe better (video classification) or comparable (other tasks) performance after the swap. This is surprising given the fact that the linear schedule is parameter-free and thus not optimized for the particular task or network.

In summary, the smoothly decaying linear schedule is a simple and effective strategy for budgeted training. It significantly outperforms traditional step decay given limited budgets, while achieving comparable performance in the normal full-budget setting." }, { "heading": "5 DISCUSSION", "text": "In this section, we summarize our empirical analysis with a set of desiderata for effective budget-aware learning schedules. We highlight those that are inconsistent with conventional wisdom and follow the experimental setup in Sec 4.1 unless otherwise stated.

Desideratum: budgeted convergence. Convergence of SGD under non-convex objectives is measured by lim_{t→∞} E[||∇F||^2] = 0 (Bottou et al., 2018). Intuitively, one should terminate the optimization when no further local improvement can be made.
What is the natural counterpart of “convergence” within a budget? For a dataset of N examples {(x_i, y_i)}_{i=1}^{N}, let us write the full gradient as g*_t = (1/N) Σ_{i=1}^{N} ∇F(x_i, y_i). We empirically find that the dynamics of ||g*_t|| over time highly correlates with the learning rate α_t (Fig 3). As the learning rate vanishes for budget-aware schedules, so does the gradient magnitude. We call this “vanishing gradient” phenomenon budgeted convergence. This correlation suggests that decaying schedules to near-zero rates (and using BAC) may be more effective than early stopping. As a side note, budgeted convergence resonates with classic literature that argues that SGD behaves similarly to simulated annealing (Bottou, 1991). Given that α_t and ||g*_t|| decrease, the overall update ||−α_t g_t|| also decreases4. In other words, large moves are more likely given large learning rates in the beginning, while small moves are more likely given small learning rates in the end. However, the exact mechanism by which the learning rate influences the gradient magnitude remains unclear.

Desideratum: don’t waste the budget. Common machine learning practice often produces multiple checkpointed models during a training run, where a validation set is used to select the best one. Such additional optimization is wasteful in our budgeted setting. Tab 4 summarizes the progress point at which the best model tends to be found. Step decay produces an optimal model somewhat towards the end of the training, while linear and poly are almost always optimal at the precise end of the training. This is especially helpful for state-of-the-art models where evaluation can be expensive. For example, validation for Kinetics video classification takes several hours. Budget-aware schedules require validation on only the last few epochs, saving additional compute.

4Note that momentum SGD is used in practice, but we assume vanilla SGD to simplify the discussion, without loss of generality.

Aggressive early descent. Guided by asymptotic convergence analysis, faster descent of the objective might be an apparent desideratum of an optimizer. Many prior optimization methods explicitly call for faster decrease of the objective (Kingma & Ba, 2015; Clevert et al., 2016; Reddi et al., 2018). In contrast, we find that one should not employ aggressive early descent because large learning rates can prevent budgeted convergence. Consider AMSGrad (Reddi et al., 2018), an adaptive learning rate method that addresses a convergence issue with the widely-used Adam optimizer (Kingma & Ba, 2015). Fig 4 shows that while AMSGrad does quickly descend over the training objective, it still underperforms budget-aware linear schedules for any given training budget. To examine why, we derive the equivalent rate β̃_t for AMSGrad (Appendix B) and show that it is dramatically larger than our defaults, suggesting the optimizer is too aggressive. We include more adaptive methods for evaluation in Appendix E.

Warm restarts. SGDR (Loshchilov & Hutter, 2017) explores periodic schedules, in which each period is a cosine scaling. The schedule is intended to escape local minima, but its effectiveness has been questioned (Gotmare et al., 2019). Fig 5 shows that SGDR has faster descent but is inferior to budget-aware schedules for any budget (similar to the adaptive optimizers above). Additional comparisons can be found in Appendix F. Whether there exists a method that achieves promising anytime performance and budgeted performance at the same time remains an open question.
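As a diagnostic sketch (our own code, not released with the paper), the full gradient norm ||g*_t|| above can be estimated by accumulating gradients over the entire training set:

import torch

def full_gradient_norm(model, loader, loss_fn, device="cpu"):
    # Accumulate the gradient of the dataset-average loss, i.e.
    # g*_t = (1/N) sum_i grad F(x_i, y_i), and return its L2 norm.
    model.zero_grad()
    n = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        # loss_fn is assumed to average over the batch; rescale to a sum
        # so that gradients from all batches accumulate consistently.
        (loss_fn(model(x), y) * x.size(0)).backward()
        n += x.size(0)
    total = 0.0
    for p in model.parameters():
        if p.grad is not None:
            total += (p.grad / n).pow(2).sum().item()
    model.zero_grad()
    return total ** 0.5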
}, { "heading": "6 CONCLUSION", "text": "This paper introduces a formal setting for budgeted training. Under this setup, we observe that a simple linear schedule, or any other smooth-decaying schedules can achieve much better performance. Moreover, the linear schedule even offers comparable performance on existing visual recognition tasks for the typical full budget case. In addition, we analyze the intriguing properties of learning rate schedules under budgeted training. We find that the learning rate schedule controls the gradient magnitude regardless of training stage. This further suggests that SGD behaves like simulated annealing and the purpose of a learning rate schedule is to control the stage of optimization.\nAcknowledgements: We thank Xiaofang Wang, Simon S. Du, Leonid Keselman, Chen-Hsuan Lin and David McAllester for insightful discussions and comments." }, { "heading": "A BUDGETED TRAINING FOR NEURAL ARCHITECTURE SEARCH", "text": "A.1 RANK PREDICTION\nIn the main text, we list neural architecture search as an application of budgeted training. Due to resource constraint, these methods usually train models with a small budget (10-25 epochs) to evaluate their relative performance (Cao et al., 2019; Cai et al., 2018; Real et al., 2019). Under this setting, the goal is to rank the performance of different architectures instead of obtaining the best possible accuracy as in the regular case of budgeted training. Then one could ask the question that whether budgeted training techniques help in better predicting the relative rank. Unfortunately, budgeted training has not been studied or discussed in the neural architecture search literature, it is unknown how well models only trained with 10 epochs can tell the relative performance of the same ones that are trained with 200 epochs. Here we conduct a controlled experiment and show that proper adjustment of learning schedule, specifically the linear schedule, indeed improves the accuracy of rank prediction.\nWe adapt the code in (Cao et al., 2019) to generate 100 random architectures, which are obtained by random modifications (adding skip connection, removing layer, changing filter numbers) on top of ResNet-18 (He et al., 2017). First, we train these architectures on CIFAR-10 given full budget (200 epochs), following the setting described in Sec 4.1. This produces a relative rank between all pairs of random architectures based on the validation accuracy and this rank is considered as the target to predict given limited budget. Next, every random architecture is trained with various learning schedules under various small budgets. For each schedule and each budget, this generates a complete rank. We treat this rank as the prediction and compare it with the target full-budget rank. The metric we adopt is Kendall’s rank correlation coefficient (τ ), a standard statistics metric for measuring rank similarity. It is based on counting the inversion pairs in the two ranks and (τ +1)/2 is approximately the probability of estimating the rank correctly for a pair.\nWe consider the following schedules: (1) constant, it might be possible that no learning rate schedule is required if only the relative performance is considered. (2) step decay (γ = 0.1, decay at p ∈ { 13 , 2 3}), a schedule commonly used both in regular training and neural architecture search (Zoph & Le, 2017; Pham et al., 2018). (3) cosine, a schedule often used in neural architecture search (Cai et al., 2018; Real et al., 2019). (4) linear, our proposed schedule. 
The results of their rank prediction capability can be seen in Tab A.

The results suggest that with more budget, we can better estimate the full-budget rank between architectures. And even if only relative performance is considered, learning rate decay should be applied. Specifically, smooth-decaying schedules, such as linear or cosine, are preferred over step decay.

We list some additional details about the experiment. To reduce stochastic noise, each configuration under both the small and full budget is repeated 3 times and the median accuracy is taken. The full-budget model is trained with the linear schedule; similar results are expected with other schedules, as evidenced by the CIFAR-10 results in the main text (Tab 2). Among the 100 random architectures, 21 cannot be trained; the remaining 79 models have validation accuracy spanning from 0.37 to 0.94, with the distribution mass centered at 0.91. Such a skewed and widespread distribution is the typical case in neural architecture search. We remove the 21 models that cannot be trained for our experiments. We take the epoch with the best validation accuracy for each configuration, so the drawback of constant or step decay not having the best model at the very end does not affect this experiment (see Sec 5).

Epoch (Budget) | 1 (0.5%) | 2 (1%) | 10 (5%) | 20 (10%)
const | 0.3451 | 0.4595 | 0.6720 | 0.6926
step-d2 | 0.2746 | 0.3847 | 0.6651 | 0.7279
cosine | 0.3211 | 0.4847 | 0.7023 | 0.7563
linear | 0.3409 | 0.4348 | 0.7398 | 0.7351

Table A: Small-budget and full-budget model rank correlation measured in Kendall’s tau. Smooth-decaying schedules like linear and cosine can more accurately predict the true rank of different architectures given a limited budget.

Epoch (Budget) | 1 (0.5%) | 2 (1%) | 10 (5%) | 20 (10%)
const | 0.3892 | 0.4699 | 0.6689 | 0.7061
step-d2 | 0.4014 | 0.4780 | 0.6980 | 0.7754
cosine | 0.4616 | 0.5498 | 0.7530 | 0.8029
linear | 0.4759 | 0.5745 | 0.7652 | 0.8192

Table B: Small-budget validation accuracy averaged across random architectures. The linear schedule is the most robust under small budgets.

Epoch (Budget) | 1 (0.5%) | 2 (1%) | 10 (5%) | 20 (10%)
const | 0.4419 | 0.5343 | 0.7550 | 0.8015
step-d2 | 0.4590 | 0.5455 | 0.7894 | 0.8848
cosine | 0.5326 | 0.6265 | 0.8615 | 0.9087
linear | 0.5431 | 0.6626 | 0.8644 | 0.9305

Table C: Tab B normalized by the full-budget accuracy and then averaged across architectures. The linear schedule achieves solutions closer to their full-budget performance than the rest of the schedules under small budgets.

A.2 BUDGETED PERFORMANCE ACROSS ARCHITECTURES

To reinforce our claim that the linear schedule generalizes across different settings, we compare the budgeted performance of various schedules on the random architectures generated in the previous section. We present two versions of the results. The first is to directly average the validation accuracy of different architectures with each schedule and under each budget (Tab B). The second is to normalize by dividing the budgeted accuracy by the full-budget accuracy of the same architecture and then average across different architectures (Tab C). The second version assumes all architectures enjoy equal weighting. In both cases, the linear schedule is the most robust schedule across architectures under various budgets." }, { "heading": "B EQUIVALENT LEARNING RATE FOR AMSGRAD", "text": "In Sec 5, we use the equivalent learning rate to compare AMSGrad (Reddi et al., 2018) with momentum SGD. Here we present the derivation for the equivalent learning rate β̃_t.

Let η_1, η_2 and ε be hyper-parameters; then the momentum SGD update rule is:

m_t = η_1 m_{t−1} + (1 − η_1) g_t, (3)
w_t = w_{t−1} − α_0^{(1)} β_t m_t, (4)

while the AMSGrad update rule is:

m_t = η_1 m_{t−1} + (1 − η_1) g_t, (5)
v_t = η_2 v_{t−1} + (1 − η_2) g_t^2, (6)
m̂_t = m_t / (1 − η_1^t), (7)
v̂_t = v_t / (1 − η_2^t), (8)
v̂_t^max = max(v̂_{t−1}^max, v̂_t), (9)
w_t = w_{t−1} − α_0^{(2)} m̂_t / (√(v̂_t^max) + ε). (10)

Comparing equation 4 with 10, we obtain the equivalent learning rate:

β̃_t = (α_0^{(2)} / α_0^{(1)}) · 1 / ((1 − η_1^t)(√(v̂_t^max) + ε)). (11)
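To make Eq. (11) concrete, the following is a per-weight sketch of ours (illustrative only) that tracks the AMSGrad second-moment state of Eqs. (6), (8) and (9) over a gradient history and reports the equivalent rate, using the default hyper-parameters given at the end of this appendix:

import math

def amsgrad_equivalent_lr(g_history, a1=0.1, a2=0.001, eta1=0.9, eta2=0.99, eps=1e-8):
    # Track the AMSGrad second-moment state for a single weight and
    # return beta_tilde_t (Eq. 11) at each step; the first moment m_t
    # cancels out of Eq. (11) and is therefore not needed here.
    v, v_hat_max = 0.0, 0.0
    betas = []
    for t, g in enumerate(g_history, start=1):
        v = eta2 * v + (1.0 - eta2) * g * g            # Eq. (6)
        v_hat = v / (1.0 - eta2 ** t)                  # Eq. (8)
        v_hat_max = max(v_hat_max, v_hat)              # Eq. (9)
        beta = (a2 / a1) / ((1.0 - eta1 ** t) * (math.sqrt(v_hat_max) + eps))  # Eq. (11)
        betas.append(beta)
    return betas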
Budget | 1% | 5% | 10% | 25% | 50% | 100%
Subset | .3834 | .6446 | .7848 | .8586 | .9234 | N/A
Full | .5544 | .8328 | .9042 | .9338 | .9464 | .9534

Table D: Comparison with offline data subsampling. “Subset” meets the budget constraint by randomly subsampling the dataset prior to training, while “Full” uses all the data but restricts the number of iterations. Note that a budget-aware schedule is used for “Full”.

Note that the above equation holds per weight. For Fig 4a, we take the median across all dimensions as a scalar summary since it is a skewed distribution. The mean appears to be even larger and shares the same trend as the median. In our experiments, we use the default hyper-parameters (which also turn out to have the best validation accuracy): α_0^{(1)} = 0.1, α_0^{(2)} = 0.001, η_1 = 0.9, η_2 = 0.99 and ε = 10^{−8}." }, { "heading": "C DATA SUBSAMPLING", "text": "Data subsampling is a straightforward strategy for budgeted training and can be realized in several different ways. In our work, we limit the number of iterations to meet the budget constraint, and this effectively limits the number of data points seen during the training process. An alternative is to construct a subsampled dataset offline but keep the same number of training iterations. Such a construction can be done by random sampling, which might be the most effective strategy for an i.i.d. (independent and identically distributed) dataset. We show in Tab D that even our baseline budget-aware step decay, together with a limitation on the iterations, can significantly outperform this offline strategy. For the subset setting, we use the off-the-shelf step decay (step-d2), while for the full-set setting, we use the same step decay but with BAC applied (Sec 3.1). For the detailed setup, we follow Sec 4.1 of the main text.

Of course, more complicated subset construction methods exist, such as core-set construction (Bachem et al., 2017). However, such methods usually require a feature summary of each data point and the computation of pairwise distances, making them unsuitable for extremely large datasets. In addition, note that our subsampling experiment is conducted on CIFAR-10, a well-constructed and balanced dataset, making smarter subsampling methods less advantageous. Consequently, the result in Tab D can as well provide a reasonable estimate for other complicated subsampling methods." }, { "heading": "D ADDITIONAL EXPERIMENTS ON CITYSCAPES (SEMANTIC SEGMENTATION)", "text": "In the main text, we compare the linear schedule against step decay for various tasks. However, the off-the-shelf schedule for PSPNet (Zhao et al., 2017) is poly instead of step decay. Therefore, we include the evaluation of the poly schedule on Cityscapes (Cordts et al., 2016) in Tab E. Given the similarity of poly and linear (Fig 2), and the opposite results on CIFAR-10 and Cityscapes, it is inconclusive that one is strictly better than the other within the smooth-decaying family. However, these smooth-decaying methods both outperform step decay given limited budgets.
Budget | 1% | 5% | 10% | 25% | 50% | 100%
poly | .5476 ± .0023 | .6755 ± .0012 | .7093 ± .0058 | .7416 ± .0028 | .7562 ± .0045 | .7593 ± .0043
linear | .5424 ± .0034 | .6654 ± .0014 | .7076 ± .0047 | .7399 ± .0005 | .7575 ± .0041 | .7633 ± .0008

Table E: Comparison with the off-the-shelf poly schedule on Cityscapes (Cordts et al., 2016) using PSPNet (Zhao et al., 2017). Poly and linear are similar smooth-decaying schedules (Fig 2) and thus have similar performance. The exact rank differs from task to task." }, { "heading": "E ADDITIONAL COMPARISON WITH ADAPTIVE LEARNING RATES", "text": "[Figure A, panels (a) Training Loss and (b) Validation Accuracy: curves over 200 epochs for Linear 10%/25%/50%/100%, AMSGrad + Linear, AMSGrad, AdaBound and RMSprop.]

Method | Val Accu
RMSprop | .9258
AdaBound | .9306
AMSGrad | .9113
AMSGrad + Linear | .9340
SGD + Linear 10% | .9218
SGD + Linear 25% | .9412
SGD + Linear 50% | .9546
SGD + Linear 100% | .9562

Figure A: Comparison between the budget-aware linear schedule and adaptive learning rate methods on CIFAR-10. We see that while adaptive learning rate methods appear to descend faster than the full-budget linear schedule, at each given budget they are surpassed by the corresponding linear schedule.

In the main text we compare the linear schedule with AMSGrad (Reddi et al., 2018) (the improved version of Adam (Kingma & Ba, 2015)); here we further include the classical method RMSprop (Tieleman & Hinton, 2012) and the more recent AdaBound (Luo et al., 2019). We tune these adaptive methods for CIFAR-10 and summarize the results in Fig A. We observe the similar conclusion that the budget-aware linear schedule outperforms adaptive methods for all given budgets.

Like SGD, these adaptive learning rate methods also take as input a base learning rate, which can also be annealed using an existing schedule. Although it is unclear why one needs to anneal an adaptive method, we find that it in fact boosts the performance (“AMSGrad + Linear” in Fig A)." }, { "heading": "F ADDITIONAL COMPARISON WITH SGDR", "text": "This section provides additional evaluation to show that learning rate restarts produce worse results than our proposed budgeted training techniques under the budgeted setting. In (Loshchilov & Hutter, 2017), both a new form of decay (cosine) and the technique of learning rate restarts are proposed. To avoid confusion, we use “cosine schedule”, or just “cosine”, to refer to the form of decay, and SGDR to a schedule of periodic cosine decays. The comparison with the cosine schedule is already included in the main text. Here we focus on evaluating the periodic schedule. SGDR requires two parameters to specify the periods: T_0, the length of the first period, and T_mult, where the i-th period has length T_i = T_0 · T_mult^{i−1}. In Fig B, we plot the off-the-shelf SGDR schedule with T_0 = 10 (epochs), T_mult = 2. The validation accuracy plot (on the right) shows that it might end at a very poor solution (0.8460) since it is not budget-aware. Therefore, we consider two settings to compare the linear schedule with SGDR.
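As a sketch of the budget-aware restart variant evaluated below (our re-implementation of the idea as a function of progress, with n + 1 equal cosine periods):

import math

def sgdr_rn(p, n=2):
    # Budget-aware SGDR: restart a cosine decay n times at even intervals
    # of progress p in [0, 1], so the rate still reaches ~0 at the end.
    if p >= 1.0:
        return 0.0
    local = (p * (n + 1)) % 1.0  # progress within the current period
    return 0.5 * (1.0 + math.cos(math.pi * local))

With n = 2 this reproduces the SGDR-r2 behavior sketched in Fig B.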
Therefore, we consider two settings to compare\n0 50 100 150 200 Epoch\n0\n20\n40\n60\n80\n100\nFr ac\ntio n\nof B\nas e\nLe ar\nni ng\nR at\ne (%\n) (a) Learning Rate Schedule\n0 50 100 150 200 Epoch\n0.5\n0.6\n0.7\n0.8\n0.9\n1.0\nAc cu\nra cy\n(b) Validation Accuracy\nSGDR (off-the-shelf) SGDR-r2\nFigure B: One issue with off-the-shelf SGDR (T0 = 10, Tmult = 2) is that it is not budget-aware and might end at a poor solution. We convert it to a budget aware schedule by setting it to restart n times at even intervals across the budget and n = 2 is shown here (SGDR-r2).\nEpoch 30 50 150\nSGDR .9320 .9458 .9510 linear .9350 .9506 .9532\nTable F: Comparison with off-the-shelf SGDR at the end of each period after the first restart.\nBudget 1% 5% 10% 25% 50% 100%\nSGDR-r1 .5002 .7908 .8794 .9250 .9380 .9488 SGDR-r2 .4710 .7888 .8738 .9216 .9412 .9502 linear .6654 .8920 .9218 .9412 .9546 .9562\nTable G: Comparison with SGDR under budget-aware setting. “SGDR-r1” refers to restarting learning rate once at midpoint of the training progress, and “SGDR-r2” refers to restarting twice at even interval.\nlinear schedule with SGDR. The first is to compare only at the end of each period of SGDR, where budgeted convergence is observed. The second is to convert SGDR into a budget-aware schedule by setting the schedule to restart n times at even intervals across the budget. The results under the first and second setting is shown in Tab F and Tab G respectively. Under both budget-aware and budget-unaware setting, linear schedule outperforms SGDR. For detailed setup, we follow Sec 4.1, of the main text and take the median of 3 runs." }, { "heading": "G ADDITIONAL ILLUSTRATIONS", "text": "In Sec 5, we refer to validation accuracy curve for training on CIFAR-10, which we provide here in Fig C." }, { "heading": "H LEARNING RATES IN CONVEX OPTIMIZATION", "text": "For convex cost surfaces, constant learning rates are guaranteed to converge when less or equal than 1/L, where L is the Lipschitz constant for the gradient of the cost function ∇F (Bottou et al., 2018). Another well-known result ensures convergence for sequences that decay neither too fast nor too slow (Robbins & Monro, 1951): ∑∞ t=1 αt =∞, ∑∞ t=1 α 2 t <∞. One common such instance in convex optimization is αt = α0/t. For non-convex problems, similar results hold for convergence to a local minimum (Bottou et al., 2018). Unfortunately, there does not exist a theory for learning rate schedules in the context of general non-convex optimization.\n0 50 100 150 200 Epoch\n0.0\n0.3\n0.6\n0.9\n1.2\n1.5 (a) Training Loss\nstep-d2 linear\n0 50 100 150 200 Epoch\n0.5\n0.6\n0.7\n0.8\n0.9\n1.0 (b) Validation Accuracy\nstep-d2 linear\nFigure C: Training loss and validation accuracy for training ResNet-18 on CIFAR-10 using step decay and linear schedule. No generalization gap is observed when we only modify learning rate schedule. 
[Figure D, eight panels of weight norm vs. epoch with the learning rate overlaid: (a) Constant, (b) Step Decay, (c) Exponential, (d) SGDR, (e) Linear 10%, (f) Linear 50%, (g) Linear 100%, and (h) Linear (Across Budgets).]

Figure D: The corresponding weight norm plots for Fig 3 and Fig 5. We find that the weight norm exhibits a similar trend as the gradient norm." }, { "heading": "I FULL GRADIENT NORM AND THE WEIGHT NORM", "text": "In Sec 5, we plot the full gradient norm of the cross-entropy loss, excluding the regularization part. In fact, we use an L2-regularization (weight decay) of 0.0004 for these experiments. For completeness, we plot the weight norm in Fig D." }, { "heading": "J ADDITIONAL ABLATION STUDIES", "text": "Here we explore variations of batch size (Tab H) and initial learning rate (Tab I). Our definition of budget is the number of examples seen during training. So when the batch size increases, the number of iterations decreases. For example, on CIFAR-10, the full budget is training with batch size 128 for 200 epochs. If we train with batch size 1024 for 20% of the budget, that means training for 5 epochs.

Batch Size | Schedule | 20% | 50% | 100%
64 | step-d2 | .9436 ± .0037 | .9505 ± .0009 | .9519 ± .0009
64 | linear | .9473 ± .0021 | .9511 ± .0008 | .9526 ± .0020
256 | step-d2 | .8939 ± .0027 | .9291 ± .0021 | .9431 ± .0008
256 | linear | .9143 ± .0018 | .9415 ± .0038 | .9484 ± .0013
1024 | step-d2 | .5851 ± .0460 | .7703 ± .0121 | .8805 ± .0007
1024 | linear | .7415 ± .0141 | .8553 ± .0023 | .8992 ± .0042

Table H: Comparison between linear and step decay with different batch sizes. We can see that even when we vary the batch size, the linear schedule outperforms step decay." }, { "heading": "K ADDITIONAL IMPLEMENTATION DETAILS", "text": "Image classification on ImageNet. We adapt both the network architecture (ResNet-18) and the data loader from the open-source PyTorch ImageNet example5. The base learning rate used is 0.1 and the weight decay 5 × 10^{−4}. We train using 4 GPUs with asynchronous batch normalization and batch size 128.

Video classification on Kinetics with I3D. The 400-category version of the dataset is used in the evaluation. We use an open-source codebase6 that has training and data processing code publicly available. Note that the codebase implements a variant of standard I3D (Carreira & Zisserman, 2017) that has ResNet as the backbone. We follow the configuration of run_i3d_baseline_300k_4gpu.sh, which specifies a base learning rate of 0.005 and a weight decay of 10^{−4}. Only the learning rate schedule is modified in our experiments. We train using 4 GPUs with asynchronous batch normalization and batch size 32.

Object detection and instance segmentation on MS COCO.
We use the open source implementation of Mask R-CNN7, which is a PyTorch re-implementation of the official codebase Detectron in the Caffe 2 framework. We only modify the part of the code for the learning rate schedule. The codebase sets the base learning rate to 0.02 and the weight decay to 10−4. We train with 8 GPUs (batch size 16) and keep the built-in learning rate warm-up mechanism, an implementation technique that increases the learning rate for 0.5k iterations and is intended to stabilize the initial phase of multi-GPU training (Goyal et al., 2017). The 0.5k iterations are kept fixed for all budgets, and the learning rate decay is applied to the rest of the training progress.\nSemantic segmentation on Cityscapes. We adapt a PyTorch codebase obtained from correspondence with the authors of PSPNet. The base learning rate is set to 0.01 with weight decay 10−4. The training-time augmentation includes random resize, crop, rotation, horizontal flip and Gaussian blur. We use patch-based test-time augmentation, which cuts the input image into patches of 713 × 713, processes each patch independently, and then tiles the patches to form a single output. For overlapped regions, the average logits of the two patches are taken. We train using 4 GPUs with synchronous batch normalization and batch size 12.\n5https://github.com/pytorch/examples/tree/master/imagenet. PyTorch version 0.4.1.\n6https://github.com/facebookresearch/video-nonlocal-net. Caffe 2 version 0.8.1.\n7https://github.com/roytseng-tw/Detectron.pytorch. PyTorch version 0.4.1." } ]
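For concreteness, the budget-aware schedules compared in the appendices above can be written as functions of the fraction of the budget consumed. Below is a minimal Python sketch of the linear schedule and the budget-aware SGDR conversion (restarting n times at even intervals); the function names and the base learning rate are illustrative assumptions, not taken from the released code.

import math

def linear_lr(t, budget, base_lr=0.1):
    # Linear schedule: decay the learning rate to zero exactly at the budget.
    return base_lr * (1.0 - t / budget)

def budget_aware_sgdr_lr(t, budget, n_restarts=2, base_lr=0.1):
    # Budget-aware SGDR: cut the budget into (n_restarts + 1) equal periods
    # and cosine-anneal from base_lr to 0 within each period, so the schedule
    # restarts n times at even intervals (SGDR-r1: n=1, SGDR-r2: n=2).
    period = budget / (n_restarts + 1)
    t_in_period = t % period
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * t_in_period / period))

Both functions return the learning rate for training step (or epoch) t given a total budget, which is all that a budget-aware schedule needs.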
2,020
BUDGETED TRAINING: RETHINKING DEEP NEURAL NETWORK TRAINING UNDER RESOURCE CONSTRAINTS
SP:4fdd94362be6718ab249cdb8da4e75b9eade64bd
[ "In this paper, the authors propose modifications to baseline seq-to-seq systems for wave-to-wave translation. To handle possibly long inputs and outputs, as well as significant length differences, they propose to use sliding windows. For high-dimensional outputs, they use an iterative approach predicting each dimension independently. They evaluate their models on earthquake data on on activity translation (video to motion capture).", "This work explains how to use a seq2seq encoder-decoder neural network on the case of multivariate time series. The authors name this particular application of seq2seq the wave2wave network. Given a multivariate time series covering a time interval, it is split into subintervals of equal length, such that each block is a matrix. This matrix becomes an input into a recurrent encoder. On the decoder side, the similar matrix is produced at the output. The proposed neural network is tested on two data sets: an earthquake and activity translation." ]
The understanding of sensor data has been greatly improved by advanced deep learning methods with big data. However, available sensor data in the real world are still limited, which is called the opportunistic sensor problem. This paper proposes a new variant of the neural machine translation model seq2seq to deal with continuous signal waves, by introducing the window-based (inverse-)representation to adaptively represent partial shapes of waves, and the iterative back-translation model for high-dimensional data. Experimental results are shown for two real-life datasets: earthquake and activity translation. The performance improvement on one-dimensional data was about 46% in test loss, and that on high-dimensional data was about 1625% in perplexity, with regard to the original seq2seq.
[]
[ { "authors": [ "Alexander Alexandrov", "Konstantinos Benidis", "Michael Bohlke-Schneider", "Valentin Flunkert", "Jan Gasthaus", "Tim Januschowski", "Danielle C. Maddix", "Syama Sundar Rangapuram", "David Salinas", "Jasper Schulz", "Lorenzo Stella", "Ali Caner Türkmen", "Yuyang Wang" ], "title": "Gluonts: Probabilistic time series models in python", "venue": null, "year": 1906 }, { "authors": [ "Vijay Badrinarayanan", "Alex Kendall", "Roberto Cipolla" ], "title": "Segnet: A deep convolutional encoderdecoder architecture for image segmentation", "venue": "CoRR, abs/1511.00561,", "year": 2015 }, { "authors": [ "Zhe Cao", "Tomas Simon", "Shih-En Wei", "Yaser Sheikh" ], "title": "Realtime multi-person 2d pose estimation using part affinity fields", "venue": null, "year": 2017 }, { "authors": [ "Mia Xu Chen", "Orhan Firat", "Ankur Bapna", "Melvin Johnson", "Wolfgang Macherey", "George Foster", "Llion Jones", "Mike Schuster", "Noam Shazeer", "Niki Parmar", "Ashish Vaswani", "Jakob Uszkoreit", "Lukasz Kaiser", "Zhifeng Chen", "Yonghui Wu", "Macduff Hughes" ], "title": "The best of both worlds: Combining recent advances in neural machine translation", "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "year": 2018 }, { "authors": [ "Kyunghyun Cho", "Bart van Merrienboer", "Çaglar Gülçehre", "Fethi Bougares", "Holger Schwenk", "Yoshua Bengio" ], "title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation", "venue": null, "year": 2014 }, { "authors": [ "Yoshua Bengio Dzmitry Bahdanau", "Kyunghyun Cho" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "arxiv,", "year": 2014 }, { "authors": [ "Brendan J. Frey", "Delbert Dueck" ], "title": "Clustering by passing messages between data", "venue": "points. Science,", "year": 2007 }, { "authors": [ "Vu Cong Duy Hoang", "Philipp Koehn", "Gholamreza Haffari", "Trevor Cohn" ], "title": "Iterative backtranslation for neural machine translation", "venue": "Proceedings of the 2nd Workshop on Neural Machine Translation and Generation,", "year": 2018 }, { "authors": [ "Phillip Isola", "Jun-Yan Zhu", "Tinghui Zhou", "Alexei" ], "title": "A Efros. Image-to-image translation with conditional adversarial networks", "venue": "arxiv,", "year": 2016 }, { "authors": [ "A. Iwaki", "H. Fujiwara" ], "title": "Synthesis of high-frequency ground motion using information extracted from low-frequency ground motion: A case study in kanto area. journal of Japan sssociation for earthquake", "venue": null, "year": 2013 }, { "authors": [ "Alireza Koochali", "Peter Schichtel", "Sheraz Ahmed", "Andreas Dengel" ], "title": "Probabilistic forecasting of sensory data with generative adversarial networks - forgan", "venue": null, "year": 1903 }, { "authors": [ "Minh-Thang Luong", "Hieu Pham", "Christopher D. 
Manning" ], "title": "Effective approaches to attentionbased neural machine translation", "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing,", "year": 2015 }, { "authors": [ "Xiao-Jiao Mao", "Chunhua Shen", "Yu-Bin Yang" ], "title": "Image denoising using very deep fully convolutional encoder-decoder networks with symmetric skip connections", "venue": "CoRR, abs/1603.09056,", "year": 2016 }, { "authors": [ "Natalia Neverova", "Christian Wolf", "Graham W.Taylor", "Florian Nebout" ], "title": "Multi-scale deep learning for gesture detection and localization", "venue": "Workshop on Looking at People (ECCV),", "year": 2014 }, { "authors": [ "D. Roggen", "G. Trster", "P. Lukowicz", "A. Ferscha", "R. Chavarriaga" ], "title": "Opportunistic human activity and context recognition", "venue": null, "year": 2013 }, { "authors": [ "Ilya Sutskever", "Oriol Vinyals", "Quoc Le" ], "title": "Sequence to sequence learning with neural networks", "venue": null, "year": 2014 }, { "authors": [ "Aäron van den Oord", "Sander Dieleman", "Heiga Zen", "Karen Simonyan", "Oriol Vinyals", "Alex Graves", "Nal Kalchbrenner", "Andrew W. Senior", "Koray Kavukcuoglu" ], "title": "Wavenet: A generative model for raw audio", "venue": "CoRR, abs/1609.03499,", "year": 2016 }, { "authors": [ "Ting-Chun Wang", "Ming-Yu Liu", "Jun-Yan Zhu", "Guilin Liu", "Andrew Tao", "Jan Kautz", "Bryan Catanzaro" ], "title": "URL http://arxiv.org/ abs/1808.06601", "venue": "Video-to-video synthesis. CoRR,", "year": 2018 }, { "authors": [ "Kelvin Xu", "Jimmy Ba", "Ryan Kiros", "Kyunghyun Cho", "Aaron C. Courville", "Ruslan Salakhutdinov", "Richard S. Zemel", "Yoshua Bengio" ], "title": "Show, attend and tell: Neural image caption generation with visual attention", "venue": "arxiv,", "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "The problem of shortage of training data but can be supplied by other sensor data is called an opportunistic sensor problem (Roggen et al., 2013). For example in human activity logs, the video data can be missing in bathrooms by ethical reasons but can be supplied by environmental sensors which have less ethical problems. For this purpose we propose to extend the sequence-to-sequence (seq2seq) model (Cho et al., 2014; Sutskever et al., 2014; Dzmitry Bahdanau, 2014; Luong et al., 2015) to translate signal wave x (continuous time-series signals) into other signal wave y. The straight-forward extension does not apply by two reasons: (1) the lengths of x and y are radically different, and (2) both x and y are high dimensionals.\nFirst, while most of the conventional seq2seq models handle the input and output signals whose lengths are in the same order, we need to handle the output signals whose length are sometimes considerably different than the input signals. For example, the sampling rate of ground motion sensor is 100Hz and the duration of an earthquake is about 10sec. That is, the length of the output signal wave is 10000 times longer in this case. Therefore, the segmentation along temporal axis and discarding uninformative signal waves are required. Second, signal waves could be high dimensionals; motion capture data has 129 dimensions and acceleormeter data has 18 dimensions. While most of the conventional seq2seq does not require the high-dimensional settings, meaning that it is not usual to translate multiple languages simultaneously, we need to translate signal waves in high dimensions into other signal waves in high dimensions simultaneously.\nTo overcome these two problems we propose 1) the window-based representation function and 2) the wave2wave iterative back-translation model in this paper. Our contributions are the following:\n• We propose a sliding window-based seq2seq model wave2wave (Section 4.1), • We propose the wave2wave iterative back-translation model (Section 4.2) which is the key\nto outperform for high-dimensional data." }, { "heading": "2 RELATED WORKS", "text": "Related works include various encoder-decoder architectures and generative adversarial networks (GANs). First, the encoder-decoder architecture has several variations: (1) CNNs in both sides (Badrinarayanan et al., 2015), (2) RNNs in both sides (Cho et al., 2014; Sutskever et al., 2014; Dzmitry Bahdanau, 2014; Luong et al., 2015), or (3) one side is CNN and the other is RNN (Xu et al., 2015). When one side is related to autoregressive model (van den Oord et al., 2016), further variations are appeared. These architectures are considered to be distinctive. The pros of CNN is an\nefficient extraction of features and overall execution while the pros of RNN is its excellent handling of time-series or sequential data. CNN is relatively weak in handling time-series data. In this reason, the time domain is often handled by RNN. The encoder-decoder architecture using CNNs in both sides is used for semantic segmentation (Badrinarayanan et al., 2015), image denoising (Mao et al., 2016), and super-resolution (Chen et al., 2018), which are often not related to time-series. In the context of time-series, GluonTS (Alexandrov et al., 2019) uses the encoder-decoder approach which aims at time-series prediction task where parameters in encoder and decoder are shared. Apart from the difference of tasks, our approach does not share the parameters in encoder and those in decoder. 
Moreover, our model assumes that the time-series multi-modal data are multiple views of the same target object, which results in multiple modalities. Second, among various GAN architectures, several GANs aim at handling the time-series aspect. Vid2vid (Wang et al., 2018) is an extension of pix2pix (Isola et al., 2016) which aims at handling video signals. ForGAN (Koochali et al., 2019) aims at the time-series prediction task." }, { "heading": "3 SEQ2SEQ", "text": "Architecture with context vector Let x_{1:S} denote a source sentence consisting of a time-series of S words, i.e., x_{1:S} = (x_1, x_2, ..., x_S). Meanwhile, y_{1:T} = (y_1, ..., y_T) denotes a target sentence corresponding to x_{1:S}. With the assumption of a Markov property, the conditional probability p(y_{1:T}|x_{1:S}), the translation from a source sentence to a target sentence, is decomposed into time-step translations p(y|x) as in (1):\n\nlog p(y_{1:T}|x_{1:S}) = ∑_{t=1}^{T} log p(y_t|y_{<t}, c_t) (1)\n\nwhere y_{<t} = (y_1, y_2, ..., y_{t−1}) and c_t is a context vector representing the information of the source sentence x_{1:S} used to generate an output word y_t.\n\nTo realize such time-step translation, the seq2seq architecture consists of (a) an RNN (Recurrent Neural Network) encoder and (b) an RNN decoder. The RNN encoder computes the current hidden state h^enc_s given the previous hidden state h^enc_{s−1} and the current input x_s, as in (2):\n\nh^enc_s = RNN_enc(x_s, h^enc_{s−1}) (2)\n\nwhere RNN_enc denotes a multi-layered RNN unit.\n\nThe RNN decoder computes a current hidden state h^dec_t given the previous hidden state and then computes an output y_t:\n\nh^dec_t = RNN_dec(h^dec_{t−1}) (3)\np_θ(y_t|y_{<t}, c_t) = softmax(g_θ(h^dec_t, c_t)) (4)\n\nwhere RNN_dec denotes a conditional RNN unit, g_θ(·) is the output function converting h^dec_t and c_t to the logit of y_t, and θ denotes the parameters of the RNN units.\n\nWith training data D = {(y^n_{1:T}, x^n_{1:S})}_{n=1}^{N}, the parameters θ are optimized so as to minimize the log-likelihood loss function L(θ):\n\nL(θ) = −(1/N) ∑_{n=1}^{N} ∑_{t=1}^{T} log p_θ(y^n_t|y^n_{<t}, c_t) (5)\n\nor the squared error:\n\nL(θ) = (1/N) ∑_{n=1}^{N} ∑_{t=1}^{T} (y^n_t − g_θ(h^{dec,n}_t, c^n_t))^2 (6)\n\nGlobal Attention To obtain the context vector c_t, we use the global attention mechanism (Luong et al., 2015). Global attention considers an alignment mapping, in a global manner, between the encoder hidden states h^enc_s and a decoder hidden state h^dec_t:\n\na_t(s) = align(h^dec_t, h^enc_s) (7)\n= exp(score(h^dec_t, h^enc_s)) / ∑_{s'=1}^{S} exp(score(h^dec_t, h^enc_{s'})) (8)\n\nwhere the score is computed by a weighted inner product as follows:\n\nscore(h^dec_t, h^enc_s) = h^dec_t^⊤ W_a h^enc_s (9)\n\nwhere the weight parameter W_a is obtained so as to minimize the loss function L(θ). Then, the context vector c_t is obtained as a weighted average of the encoder hidden states:\n\nc_t = ∑_{s=1}^{S} a_t(s) h^enc_s (10)" }, { "heading": "4 PROPOSED METHOD: WAVE2WAVE", "text": "The problems with the global attention model are that (1) the lengths of input and output are radically different, and that (2) both input and output sequences are high-dimensional. For example, in activity translation, there are 48 motion sensors and 3 accelerometer sensors, and their frequency rates are as high as 50Hz and 30Hz respectively. Therefore, the numbers of steps S, T in both the encoder and decoder are prohibitively large, so that capturing the information of the source sentence x_{1:S} in the context vector c is precluded." }, { "heading": "4.1 WINDOW-BASED REPRESENTATION", "text": "Let us consider the case where source and target sentences are multi-dimensional continuous time-series, i.e., signal waves, as shown in Figure 1.¹
That is, each signal at time-step s is expressed as a d_x-dimensional vector x_s (there are d_x sensors on the source side). Then, a source signal wave x_{1:S} consists of S steps of d_x-dimensional signal vectors, i.e., x_{1:S} = (x_1, x_2, ..., x_S).\n\n¹We note that the signal waves in Figure 1 are depicted as one-dimensional waves for clear visualization.\n\nTo capture important shape information from complex signal waves (see Figure 1), we introduce a trainable window-based representation function R(·) as\n\nr^enc_{s'} = R(W^enc_{s'}) (11)\n\nwhere W^enc_{s'} is the s'-th window with fixed window-width w_enc, expressed as the d_x × w_enc matrix\n\nW^enc_{s'} = [x_{w_enc(s'−1)+1}, x_{w_enc(s'−1)+2}, ..., x_{w_enc(s'−1)+w_enc}], (12)\n\nand r^enc_{s'} is the extracted representation vector input to the seq2seq encoder as shown in Figure 1 (the dimension of r^enc is the same as that of the hidden vector h^enc).\n\nSimilarly, to approximate the complex target waves well, we introduce an inverse representation function R^{−1}(·), which is trained separately from R(·), as\n\nW^dec_{t'} = R^{−1}(r^dec_{t'}) (13)\n\nwhere r^dec_{t'} is the t'-th output vector of the seq2seq decoder as shown in Figure 1, and W^dec_{t'} is a window matrix corresponding to a partial wave of the target waves y_{1:T} = (y_1, ..., y_T).\n\nThe advantages of the window-based architecture are three-fold. Firstly, the number of steps in both the encoder and decoder can be largely reduced, making the seq2seq with context vector work stably. Secondly, the complexity and variation of the shapes inside windows are also largely reduced in comparison with the entire waves; thus, important information can be extracted from the source waves, and the output sequence can be accurately approximated, by relatively simple representation R(·) and inverse-representation R^{−1}(·) functions respectively. Thirdly, both the representation R(·) and inverse-representation R^{−1}(·) functions are trained in an end-to-end manner by minimizing the loss L(θ), where both functions are modeled by fully-connected (FC) networks.\n\nFigure 1 depicts the overall architecture of our wave2wave with a toy-data example. The wave2wave model consists of an encoder and a decoder with long short-term memory (LSTM) nodes inside, the representation function R(W^enc_{s'}), and the inverse-representation function R^{−1}(r^dec_{t'}). In this figure, one-dimensional 10000-time-step continuous time-series are considered as the input and the output, and the window width is set to 2000, so there are 5 window steps for both the encoder and decoder, i.e., w_enc = w_dec = 2000 and S' = T' = 5. Then, the 1 × 2000 encoder-window matrix W^enc_{s'} is converted to the d_r-dimensional encoder representation vector r^enc_{s'} by the representation function R(W^enc_{s'}). Meanwhile, the decoder output, the d_r-dimensional decoder representation r^dec_{t'}, is converted to the 1 × 2000 decoder-window matrix W^dec_{t'} by the inverse representation function R^{−1}(r^dec_{t'})." }, { "heading": "4.2 WAVE2WAVE ITERATIVE MODEL", "text": "We consider two different ways to tackle high-dimensional sensor data. Since NMT for machine translation handles embeddings of words, the straightforward extension to high-dimensional settings uses the d_x-dimensional source signal at the same time step as the source embedding, and the d_y-dimensional target signal at the same time step as the target embedding. We call this the wave2wave model, i.e., our standard model. Alternatively, we can build d_y independent embeddings separately for each individual 1-dimensional target signal at each time step while using the same d_x-dimensional source signal embeddings.
We call this the Wave2WaveIterative model. We suppose that the former model is effective when the sensor dimensions are correlated, while the latter model is effective when the sensor dimensions are independent. Algorithm 1 shows the latter.\n\nAlgorithm 1: Wave2waveIterative model\nData: src (d_x × S), tgt (d_y × T); e_src ← x (the d_x-dimensional source embedding), e_tgt_j ← y_j (the j-th 1-dimensional target embedding)\ndef trainWave2WaveIterative(e_src × S, e_tgt × T):\n  for j = 1 to d_y do\n    f(j) = trainWave2Wave(e_src × S, e_tgt_j × T)\n  end\n\nNote that the embedding e_src is equivalent to the d_x-dimensionally decomposed representation of r^enc_{s'}, and e_tgt is equivalent to the d_y-dimensionally decomposed representation of r^dec_{t'}. Back-translation is a technique to improve performance by bi-directional translation, removing the noise under a neutral-biased translation (Hoang et al., 2018). We deploy this technique in what we call the wave2wave iterative back-translation model." }, { "heading": "5 EVALUATION ON REAL-LIFE DATA: GROUND MOTION TRANSLATION", "text": "In this section, we apply our proposed method, wave2wave, to predict a broadband ground motion from only its long-period motion caused by the same earthquake. Here, wave2wave translates a one-dimensional signal wave into another one-dimensional signal wave.\nGround motions of earthquakes cause fatal damage to buildings and infrastructure. Physics-based numerical simulators are used to generate ground motions at a specific place, given the properties of an earthquake, e.g., location and scale, to estimate the damage to buildings and infrastructure (Iwaki & Fujiwara, 2013). However, the motions generated by simulators are limited to long periods, longer than 1 second, due to heavy computational costs and the lack of detailed knowledge of the subsurface structure.\nA large amount of ground motion data has been collected by K(kyosin)-NET over the past 20 years in Japan. Machine learning approaches would be effective for predicting broadband ground motions, including periods of less than 1 second, from simulated long-period motions. From this perspective, we apply our method wave2wave to this problem by setting the long-period ground motion as the input and the broadband ground motion as the output, with the squared loss function L(θ). As training data, we use 365 ground motion records collected at the observation station IBR011, located in Ibaraki prefecture, Japan, from Jan. 1, 2000 to Dec. 31, 2017 (originally there were 374 records, but 10 records related to the Tohoku earthquakes or with sources deeper than 300m were removed). For testing, we use 9 ground-motion records of earthquakes that occurred at the beginning of 2018.\nIn addition, both the long-period and broadband ground motion data are cropped to a fixed length, i.e., s = t = 10000ms, and their amplitudes are smoothed using an RMS (Root Mean Square) envelope with 200ms windows to capture the essential properties of the earthquake motion. Moreover, for data augmentation, the in-phase and quadrature components, and their absolute values, are extracted from each ground motion. That is, there are 365 × 3 training records in total. Figure 2a shows an example of the 3 components of a ground motion of the earthquake that occurred on May 17, 2018 in Chiba, Japan, and the corresponding RMS envelopes.\nTable 1 depicts the training mean-squared loss of three methods: a simple encoder-decoder, a simple seq2seq, and our proposed method with the same setting as the toy data except h_enc = h_dec = 50.
This table shows that our wave2wave methods basically outperform the other methods, although wave2wave with the small window-width w_enc = w_dec = 500 loses to the simple encoder-decoder with a large hidden layer d_z = 1000 in training loss. This indicates that the window-based representation and inverse-representation functions are helpful, similarly to the toy data.\nFigure 2b depicts examples of predicted broadband ground motions of the earthquakes that occurred on Jan. 24 and May 17, 2018. These show that our method wave2wave predicts the enveloped broadband ground motion well given the long-period ground motion, although there is slight overfitting due to the small training data.\nIt is expected that the predicted broadband motion combined with the simulated long-period motion could be used for more accurate estimation of the damage to buildings and infrastructure.\n[Figure 2(a): Example of an enveloped ground motion. The panels show the in-phase, quadrature, and absolute components (in gal) of the original broadband motion (left column) and the corresponding enveloped broadband motion (right column) over time (in 10ms units).]" }, { "heading": "6 EVALUATION ON REAL-LIFE DATA: ACTIVITY TRANSLATION", "text": "This section deploys wave2wave for activity translation (see Figure 3). Up to the previous section, the signals were one-dimensional. The signals in this section are high-dimensional in their inputs as well as their outputs: the dimensions of motion capture, video, and accelerometer are 129, 48, and 18, respectively, in the case of the MHAD dataset². We make the mild assumption that the recordings of the target person in the three different modalities, motion capture, video, and accelerometer, are synchronized, and that noise such as the effect of other surrounding persons is eliminated. Hence, we assume that each signal shows one of the multi-view projections of a single person; that is, we can intuitively think of them as equivalent. Under this condition, we translate from motion capture to video (and similarly from accelerometer to motion capture, from video to accelerometer, and in the inverse directions).\n\n6.0.1 OVERALL ARCHITECTURE\nWave Signal Figure 3 shows that motion capture and video can be considered as wave signals. When video is the input, W^enc_{s'} takes the form of pose vectors, which are converted by the OpenPose library (Cao et al., 2017); this representation is then converted into the window representation by R(W^enc_{s'}). When motion capture is the input, W^enc_{s'} takes the form of motion capture vectors. In this way, we used these signals as both the input and the output of wave2wave. The raw outputs are reconstructed by R^{−1}(r^dec_{t'}) from the output representation r^dec_{t'}.\nWave Signal Dimensionality Reduction As an alternative to using an FC layer before the input, we use a clustering algorithm, specifically affinity propagation (Frey & Dueck, 2007), in order to reduce the size of the representation as a whole. While most clustering algorithms need the number of clusters to be supplied beforehand, the affinity propagation algorithm determines the appropriate number of clusters automatically.\nMulti-Resolution Spatial Pyramid An additional structure in wave2wave is multi-scalability, since the frame rates of the multimodal data are considerably different. We adopted the multi-resolution spatial pyramid approach with a dynamic pose (Neverova et al., 2014).
We assume that the sequence of frames across modalities is synchronized, sampled at a given temporal step v, and concatenated to form a spatio-temporal 3-d volume.\n\n6.0.2 EXPERIMENTAL EVALUATION\nExperimental Setups We used the MHAD dataset from Berkeley, with the video, accelerometer, and mocap modalities. We used the video from the Cluster-01/Cam01-02 subsets, and the whole mocap (optical) and accelerometer data with 12 persons/5 trials. The video input was preprocessed by OpenPose, which identifies 48-dimensional pose vectors. The optical mocap recorded the positions of the keypoints, whose dimension was 129. Accelerometers were placed at 6 places on the body, giving 18 dimensions. We used wave2wave with the cross-entropy loss function L(θ) = −(1/N) ∑_{n=1}^{N} log p_θ(y^n_{1:T}|x^n_{1:S}), with 500 LSTM modules, embedding size 500, dropout 3, maximum sentence length 400, and batch size 100. We used the Adam optimizer, and v = 2, 3, 4 for the multi-resolution spatial pyramid. We used the same parameter set for the wave2wave iterative model. We used a Titan Xp.\n2http://tele-immersion.citris-uc.org/berkeley mhad.\nHuman Understandability One characteristic of activity translation can be observed in the wave2wave translation direction from accelerometer to video, e.g., acc2cam: accelerometer data are by nature not understandable by human beings, but translation to video makes them visible. Out of 50 selected test cases, human judges could understand 48 cases; 96% is fairly good. The second characteristic of activity translation is the opportunistic sensor problem: e.g., when we cannot use a video camera in bathrooms, we use another sensor modality, e.g., an accelerometer, and then translate it to video, which can be used at this opportunity. This also corresponds to the case of accelerometer to video, e.g., acc2cam. We conducted this experiment: upon watching the video signals on a screen, we could observe the basic human movements. Out of 50 selected test cases, human judges could understand 43 cases.\nExperimental Results The major experimental results are shown in Table 2. We used w_enc = {1, 5, 10, 20, 30, 60}. For each window size, we measured one target dimension as well as the whole target with perplexity (ppl). We compared several wave2wave results with (1) the seq2seq model without dimensionality reduction (via clustering) and (2) the seq2seq model with dimensionality reduction. All the experiments are done in the direction from accelerometer to motion capture (acc2moc).\nFirstly, the original seq2seq model did not work well without dimensionality reduction of the input space: the perplexity was 58000.42. This figure suggests that the optimization did not progress due to the complexity of the training data or bad initialization. However, the results improved fairly well when we applied dimensionality reduction using clustering; this figure is close to the results of wave2wave (iterative) with w_enc = 60.\nSecondly, w_enc = 5 performed better than the other window sizes in perplexity when d_z = 1. In the high-dimensional case, the wave2wave iterative model performed better than the wave2wave model: 3.44 vs 10.73 in perplexity. Since motion capture has d_z = 129 dimensions, the representation space becomes R^{d_z} if we let R denote the parameter space of one point in motion capture. Compared with this, the wave2wave iterative model is equipped with a representation space linear in R. The wave2wave iterative model has an advantage on this point.
Moreover, the wave2wave iterative back-translation model achieved the best perplexity both when d_z = 1 and when d_z = 129." }, { "heading": "7 CONCLUSION", "text": "We proposed wave2wave, a method to translate between signal waves with a sliding window-based mechanism, together with an iterative back-translation model for high-dimensional data. Experimental results for two real-life datasets are positive: the performance improvement was about 46% in test loss for the one-dimensional case and about 1625% in perplexity for the high-dimensional case using the iterative back-translation model." } ]
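To make the window-based (inverse-)representation of Section 4.1 above concrete, the following is a minimal PyTorch sketch of R(·) (Eq. 11) and R^{−1}(·) (Eq. 13) as single fully-connected layers. The paper only states that both are FC networks, so the depth, class names, and shapes here are illustrative assumptions rather than the authors' released code.

import torch
import torch.nn as nn

class WindowRepresentation(nn.Module):
    # R(.) of Eq. (11): maps a d_x-by-w_enc window of the source wave to a
    # d_r-dimensional vector fed to the seq2seq encoder.
    def __init__(self, d_x, w_enc, d_r):
        super().__init__()
        self.fc = nn.Linear(d_x * w_enc, d_r)

    def forward(self, window):              # window: (batch, d_x, w_enc)
        return self.fc(window.flatten(1))   # -> (batch, d_r)

class InverseWindowRepresentation(nn.Module):
    # R^{-1}(.) of Eq. (13): maps a decoder output r^dec back to a
    # d_y-by-w_dec window of the target wave.
    def __init__(self, d_r, d_y, w_dec):
        super().__init__()
        self.d_y, self.w_dec = d_y, w_dec
        self.fc = nn.Linear(d_r, d_y * w_dec)

    def forward(self, r_dec):               # r_dec: (batch, d_r)
        return self.fc(r_dec).view(-1, self.d_y, self.w_dec)

Both modules are differentiable, so they can be trained end-to-end with the encoder-decoder by minimizing L(θ), as the paper describes.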
2,019
null
SP:13d8b88675da21a709b023c18bf47d6fc2c12924
[ "This paper proposed a variant of policy gradient algorithm with mirror descent update, which is a natural generalization of projected policy gradient descent. The authors also proposed a variance reduced policy gradient algorithm following the variance reduction techniques in optimization. The authors further proved the convergence of the proposed algorithms and some experiments are conducted to show the effectiveness of their algorithms. The paper is not written in a very good way due to many typos and misleading notations. Moreover, there seem to be some technical issues in the proofs of both main theorems.", "This paper proposes MPO, a policy optimization method with convergence guarantees based on stochastic mirror descent that uses the average of previous gradients to update the policy parameters. A lower-variance method, VRMPO, is then proposed that matches the best known convergence rate in the in the literature. Experiments show that (1) MPO converges faster than basic policy optimization methods on a small task, and (2) VRMPO achieves a performance comparable to, and often better than, popular policy optimization methods (TD3, DDPG, PPO, and TRPO) on MuJoCo." ]
Improving sample efficiency has been a longstanding goal in reinforcement learning. In this paper, we propose VRMPO: a sample-efficient policy gradient method with stochastic mirror descent. A novel variance-reduced policy gradient estimator is the key for VRMPO to improve sample efficiency. Our VRMPO needs only O(ε^{−3}) sample trajectories to achieve an ε-approximate first-order stationary point, which matches the best-known sample complexity. We conduct extensive experiments to show that our algorithm outperforms state-of-the-art policy gradient methods in various settings.
[]
[ { "authors": [ "Alekh Agarwal", "Sham M Kakade", "Jason D Lee", "Gaurav Mahajan" ], "title": "Optimality and approximation with policy gradient methods in markov decision processes", "venue": null, "year": 1908 }, { "authors": [ "Navid Azizan", "Babak Hassibi" ], "title": "Stochastic gradient/mirror descent: Minimax optimality and implicit regularization", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Heinz H Bauschke", "Patrick L Combettes" ], "title": "Convex analysis and monotone operator theory in Hilbert spaces, volume 408", "venue": null, "year": 2011 }, { "authors": [ "Amir Beck", "Marc Teboulle" ], "title": "Mirror descent and nonlinear projected subgradient methods for convex optimization", "venue": "Operations Research Letters,", "year": 2003 }, { "authors": [ "Dimitri P Bertsekas" ], "title": "Convex optimization theory", "venue": "Athena Scientific Belmont,", "year": 2009 }, { "authors": [ "Ching-An Cheng", "Xinyan Yan", "Nolan Wagener", "Byron Boots" ], "title": "Fast policy learning through imitation and reinforcement", "venue": "In Conference on Uncertainty in Artificial Intelligence,", "year": 2018 }, { "authors": [ "Ching-An Cheng", "Xinyan Yan", "Byron Boots" ], "title": "Trajectory-wise control variates for variance reduction in policy gradient methods", "venue": "arXiv preprint arXiv:1908.03263,", "year": 2019 }, { "authors": [ "Ching-An Cheng", "Xinyan Yan", "Nathan Ratliff", "Byron Boots" ], "title": "Predictor-corrector policy optimization", "venue": "In Proceedings of International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Yinlam Chow", "Ofir Nachum", "Mohammad Ghavamzadeh" ], "title": "Path consistency learning in tsallis entropy regularized mdps", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Bo Dai", "Albert Shaw", "Lihong Li", "Lin Xiao", "Niao He", "Zhen Liu", "Jianshu Chen", "Le Song" ], "title": "Sbeed: Convergent reinforcement learning with nonlinear function approximation", "venue": "International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Damek Davis", "Benjamin Grimmer" ], "title": "Proximally guided stochastic subgradient method for nonsmooth, nonconvex problems", "venue": "SIAM Journal on Optimization,", "year": 2019 }, { "authors": [ "Simon S Du", "Jianshu Chen", "Lihong Li", "Lin Xiao", "Dengyong Zhou" ], "title": "Stochastic variance reduction methods for policy evaluation", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Cong Fang", "Chris Junchi Li", "Zhouchen Lin", "Tong Zhang" ], "title": "Spider: Near-optimal non-convex optimization via stochastic path-integrated differential estimator", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Roy Fox", "Ari Pakman", "Naftali Tishby" ], "title": "Taming the noise in reinforcement learning via soft updates", "venue": "In Conference on Uncertainty in Artificial Intelligence,", "year": 2016 }, { "authors": [ "Scott Fujimoto", "Herke van Hoof", "David Meger" ], "title": "Addressing function approximation error in actor-critic methods", "venue": "International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Claudio Gentile" ], "title": "The robustness of the p-norm algorithms", "venue": "Machine Learning,", "year": 2003 }, { "authors": [ "Saeed Ghadimi", "Guanghui Lan", "Hongchao Zhang" ], "title": "Mini-batch stochastic approximation methods for 
nonconvex stochastic composite optimization", "venue": "Mathematical Programming,", "year": 2016 }, { "authors": [ "Will Grathwohl", "Dami Choi", "Yuhuai Wu", "Geoffrey Roeder", "David Duvenaud" ], "title": "Backpropagation through the void: Optimizing control variates for black-box gradient estimation", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Evan Greensmith", "Peter L Bartlett", "Jonathan Baxter" ], "title": "Variance reduction techniques for gradient estimates in reinforcement learning", "venue": "Journal of Machine Learning Research,", "year": 2004 }, { "authors": [ "Shixiang Gu", "Timothy Lillicrap", "Zoubin Ghahramani", "Richard E Turner", "Sergey Levine" ], "title": "Q-prop: Sample-efficient policy gradient with an off-policy critic", "venue": "International Conference on Learning Representation,", "year": 2017 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Samuel Horváth", "Peter Richtárik" ], "title": "Nonconvex variance reduced optimization with arbitrary sampling", "venue": "International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Tang Jie", "Pieter Abbeel" ], "title": "On a connection between importance sampling and the likelihood ratio policy gradient", "venue": "In Advances in Neural Information Processing Systems,", "year": 2010 }, { "authors": [ "Rie Johnson", "Tong Zhang" ], "title": "Accelerating stochastic gradient descent using predictive variance reduction", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Sham M Kakade" ], "title": "A natural policy gradient", "venue": "In Advances in Neural Information Processing Systems,", "year": 2002 }, { "authors": [ "Yunwen Lei", "Ke Tang" ], "title": "Stochastic composite mirror descent: optimal bounds with high probabilities", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Timothy P Lillicrap", "Jonathan J Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": null, "year": 2016 }, { "authors": [ "Hao Liu", "Yihao Feng", "Yi Mao", "Dengyong Zhou", "Jian Peng", "Qiang Liu" ], "title": "Action-dependent control variates for policy optimization via stein identity", "venue": "International Conference on Learning Representation,", "year": 2018 }, { "authors": [ "Hongzi Mao", "Shaileshh Bojja Venkatakrishnan", "Malte Schwarzkopf", "Mohammad Alizadeh" ], "title": "Variance reduction for reinforcement learning in input-driven environments", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Alberto Maria Metelli", "Matteo Papini", "Francesco Faccio", "Marcello Restelli" ], "title": "Policy optimization via importance sampling", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Volodymyr Mnih", "Adria 
Puigdomenech Badia", "Mehdi Mirza", "Alex Graves", "Timothy Lillicrap", "Tim Harley", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Jean-Jacques Moreau" ], "title": "Proximité et dualité dans un espace hilbertien", "venue": "Bull. Soc. Math. France,", "year": 1965 }, { "authors": [ "Arkadi Nemirovski", "Anatoli Juditsky", "Guanghui Lan", "Alexander Shapiro" ], "title": "Robust stochastic approximation approach to stochastic programming", "venue": "SIAM Journal on Optimization,", "year": 2009 }, { "authors": [ "Arkadii Semenovich Nemirovsky", "David Borisovich Yudin" ], "title": "Problem complexity and method efficiency in optimization", "venue": null, "year": 1983 }, { "authors": [ "Y Nesterov" ], "title": "Introductory lectures on convex optimization: a basic course", "venue": "Kluwer Academic Publishers, Dordrecht,", "year": 2004 }, { "authors": [ "Andrew Y Ng", "Daishi Harada", "Stuart Russell" ], "title": "Policy invariance under reward transformations: Theory and application to reward shaping", "venue": "In International Conference on Machine Learning,", "year": 1999 }, { "authors": [ "Lam M Nguyen", "Jie Liu", "Katya Scheinberg", "Martin Takáč. Sarah" ], "title": "A novel method for machine learning problems using stochastic recursive gradient", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Lam M Nguyen", "Jie Liu", "Katya Scheinberg", "Martin Takáč" ], "title": "Stochastic recursive gradient algorithm for nonconvex optimization", "venue": "arXiv preprint arXiv:1705.07261,", "year": 2017 }, { "authors": [ "Matteo Papini", "Marcello Restelli Matteo Pirotta" ], "title": "Stochastic variance-reduced policy gradient", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Jan Peters", "Katharina Mülling", "Yasemin Altun" ], "title": "Relative entropy policy search", "venue": "In AAAI,", "year": 2010 }, { "authors": [ "Sashank J Reddi", "Ahmed Hefny", "Suvrit Sra", "Barnabas Poczos", "Alex Smola" ], "title": "Stochastic variance reduction for nonconvex optimization", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Alfréd Rényi" ], "title": "On measures of entropy and information", "venue": "In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability,", "year": 1961 }, { "authors": [ "R Tyrrell Rockafellar" ], "title": "Monotone operators and the proximal point algorithm", "venue": "SIAM journal on control and optimization,", "year": 1976 }, { "authors": [ "Mark Schmidt", "Nicolas Le Roux", "Francis Bach" ], "title": "Minimizing finite sums with the stochastic average gradient", "venue": "Mathematical Programming,", "year": 2017 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael Jordan", "Philipp Moritz" ], "title": "Trust region policy optimization", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "John Schulman", "Philipp Moritz", "Sergey Levine", "Michael Jordan", "Pieter Abbeel" ], "title": "Highdimensional continuous control using generalized advantage estimation", "venue": "International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": 
"arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Claude Elwood Shannon" ], "title": "A mathematical theory of communication", "venue": "Bell system technical journal,", "year": 1948 }, { "authors": [ "Zebang Shen", "Alejandro Ribeiro", "Hamed Hassani", "Hui Qian", "Chao Mi" ], "title": "Hessian aided policy gradient", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "David Silver", "Guy Lever", "Nicolas Heess", "Thomas Degris", "Daan Wierstra", "Martin Riedmiller" ], "title": "Deterministic policy gradient algorithms", "venue": "In International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "David Silver", "Julian Schrittwieser", "Karen Simonyan", "Ioannis Antonoglou", "Aja Huang", "Arthur Guez", "Thomas Hubert", "Lucas Baker", "Matthew Lai", "Adrian Bolton" ], "title": "Mastering the game of go without human knowledge", "venue": null, "year": 2017 }, { "authors": [ "Charles Stein" ], "title": "Approximate computation of expectations", "venue": "Lecture Notes-Monograph Series,", "year": 1986 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Reinforcement learning: An introduction", "venue": "MIT press,", "year": 1998 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Reinforcement learning: An introduction", "venue": "MIT press,", "year": 2018 }, { "authors": [ "Richard S Sutton", "David A McAllester", "Satinder P Singh", "Yishay Mansour" ], "title": "Policy gradient methods for reinforcement learning with function approximation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2000 }, { "authors": [ "Philip S Thomas", "William C Dabney", "Stephen Giguere", "Sridhar Mahadevan" ], "title": "Projected natural actor-critic", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "Hado Van Hasselt", "Arthur Guez", "David Silver" ], "title": "Deep reinforcement learning with double qlearning", "venue": "In Thirtieth AAAI Conference on Artificial Intelligence,", "year": 2016 }, { "authors": [ "Lex Weaver", "Nigel Tao" ], "title": "The optimal reward baseline for gradient-based reinforcement learning", "venue": "In Proceedings of the Seventeenth conference on Uncertainty in artificial intelligence,", "year": 2001 }, { "authors": [ "Ronald J Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine learning,", "year": 1992 }, { "authors": [ "Cathy Wu", "Aravind Rajeswaran", "Yan Duan", "Vikash Kumar", "Alexandre M Bayen", "Sham Kakade", "Igor Mordatch", "Pieter Abbeel" ], "title": "Variance reduction for policy gradient with action-dependent factorized baselines", "venue": "International Conference on Learning Representation,", "year": 2018 }, { "authors": [ "Pan Xu", "Felicia Gao", "Quanquan Gu" ], "title": "An improved convergence analysis of stochastic variancereduced policy gradient", "venue": "Conference on Uncertainty in Artificial Intelligence,", "year": 2019 }, { "authors": [ "Tianbing Xu", "Qiang Liu", "Jian Peng" ], "title": "Stochastic variance reduction for policy gradient estimation", "venue": "arXiv preprint arXiv:1710.06034,", "year": 2017 }, { "authors": [ "Huizhuo Yuan", "Chris Junchi Li", "Yuhao 
Tang", "Yuren Zhou" ], "title": "Policy optimization via stochastic recursive gradient algorithm. 2019", "venue": null, "year": 2019 }, { "authors": [ "Siqi Zhang", "Niao He" ], "title": "On the convergence rate of stochastic mirror descent for nonsmooth nonconvex optimization", "venue": "arXiv preprint arXiv:1806.04781,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Reinforcement learning (RL) is one of the most wonderful fields of artificial intelligence, and it has achieved great progress recently (Mnih et al., 2015; Silver et al., 2017). To learn the optimal policy from the delayed reward decision system is the fundamental goal of RL. Policy gradient methods (Williams, 1992; Sutton et al., 2000) are powerful algorithms to learn the optimal policy.\nDespite the successes of policy gradient method, suffering from high sample complexity is still a critical challenge for RL. Many existing popular methods require more samples to be collected for each step to update the parameters (Silver et al., 2014; Lillicrap et al., 2016; Schulman et al., 2015; Mnih et al., 2016; Haarnoja et al., 2018), which partially reduces the effectiveness of the sample. Although all the above existing methods claim it improves sample efficiency, they are all empirical results which lack a strong theory analysis of sample complexity.\nTo improve sample efficiency, in this paper, we explore how to design an efficient and stable algorithm with stochastic mirror descent (SMD). Due to its advantage of the simplicity of implementation, low memory requirement, and low computational complexity (Nemirovsky & Yudin, 1983; Beck & Teboulle, 2003; Lei & Tang, 2018), SMD is one of the most widely used methods in machine learning. However, it is not sound to apply SMD to policy optimization directly, and the challenges are two-fold: (I) The objective of policy-based RL is a typical non-convex function, but Ghadimi et al. (2016) show that it may cause instability and even divergence when updating the parameter of a non-convex objective function by SMD via a single batch sample. (II) Besides, the large variance of gradient estimator is the other bottleneck of applying SMD to policy optimization for improving sample efficiency. In fact, in reinforcement learning, the non-stationary sampling process with the environment leads to the large variance of existing methods on the estimate of policy gradient, which results in poor sample efficiency (Papini et al., 2018; Liu et al., 2018).\nContributions To address the above two problems correspondingly, in this paper (I) We analyze the theoretical dilemma of applying SMD to policy optimization. Our analysis shows that under the common Assumption 1, for policy-based RL, designing the algorithm via SMD directly can not guarantee the convergence. Hence, we propose the MPO algorithm with a provable convergence guarantee. Designing an efficiently computable, and unbiased gradient estimator by averaging its historical policy gradient is the key to MPO. (II) We propose the VRMPO: a sample efficient policy optimization algorithm via constructing a variance reduced policy gradient estimator. Specifically, we propose an efficiently computable policy gradient estimator, utilizing fresh information and yielding a more accurate estimation of the gradient w.r.t the objective, which is the key to improve sample efficiency. We prove VRMPO needs O( −3) sample trajectories to achieve an -approximate firstorder stationary point ( -FOSP) (Nesterov, 2004). To our best knowledge, our VRMPO matches the\nbest-known sample complexity among the existing literature. Besides, we conduct extensive experiments, which further show that our algorithm outperforms state-of-the-art bandit algorithms in various settings." 
}, { "heading": "2 BACKGROUND AND NOTATIONS", "text": "" }, { "heading": "2.1 POLICY-BASED REINFORCEMENT LEARNING", "text": "We consider the Markov decision processes M = (S,A,P,R, ρ0, γ), where S is state space, A is action space; At time t, the agent is in a state St ∈ S and takes an action At ∈ A, then it receives a feedback Rt+1; P ass′ = P (s\n′ |s, a) ∈ P is the probability of the state transition from s to s′ under taking a ∈ A; The bounded reward function R : S × A → [−R,R], Ras 7→ E[Rt+1|St = s,At = a]; ρ0 : S → [0, 1] is the initial state distribution and γ ∈ (0, 1) is discounted factor. Policy πθ(a|s) is a probability distribution on S × A with the parameter θ ∈ Rp. Let τ = {st, at, rt+1}Hτt=0 be a trajectory, where s0 ∼ ρ0(s0), at ∼ πθ(·|st), rt+1 = R(st, at), st+1 ∼ P (·|st, at), and Hτ is the finite horizon of τ . The expected return J(πθ) is defined as:\nJ(θ) def = J(πθ) = ∫ τ P (τ |θ)R(τ)dτ = Eτ∼πθ [R(τ)], (1)\nwhere P (τ |θ) = ρ0(s0) ∏Hτ t=0 P (st+1|st, at)πθ(at|st) is the probability of generating τ , R(τ) =∑Hτ\nt=0 γ trt+1 is the accumulated discounted return.\nLet J (θ) = −J(θ), the central problem of policy-based RL is to solve the problem: θ∗ = arg max\nθ J(θ)⇐⇒ θ∗ = arg min θ J (θ). (2)\nComputing the∇J(θ) analytically, we have\n∇J(θ) = Eτ∼πθ [ Hτ∑ t=0 ∇θ log πθ(at|st)R(τ)]. (3)\nFor any trajectory τ , let g(τ |θ) = ∑Hτ t=0∇θ log πθ(at|st)R(τ), which is an unbiased estimator of ∇J(θ). Vanilla policy gradient (VPG) is a straightforward way to solve problem (2): θ ← θ + αg(τ |θ),\nwhere α is step size. Assumption 1. (Sutton et al., 2000; Papini et al., 2018) For each pair (s, a), any θ ∈ Rp, and all components i, j, there exists positive constants G, F s.t.,\n|∇θi log πθ(a|s)| ≤ G, | ∂2\n∂θi∂θj log πθ(a|s)| ≤ F. (4)\nAccording to the Lemma B.2 of (Papini et al., 2018), Assumption 1 implies ∇J(θ) is L-Lipschiz, i.e., ‖∇J(θ1)−∇J(θ2)‖ ≤ L‖θ1 − θ2‖, where\nL = RH(HG2 + F )/(1− γ), (5) Besides, Assumption 1 implies the following property of the policy gradient estimator. Lemma 1 (Properties of stochastic differential estimators (Shen et al., 2019)). Under Assumption 1, for any policy πθ and τ ∼ πθ, we have\n‖g(τ |θ)−∇J(θ)‖2 ≤ G 2R2\n(1− γ)4 def = σ2. (6)" }, { "heading": "2.2 STOCHASTIC MIRROR DESCENT", "text": "Now, we review some basic concepts of SMD; in this section, the notation follows (Nemirovski et al., 2009). Let’s consider the stochastic optimization problem,\nmin θ∈Dθ\n{f(θ) = E[F (θ; ξ)]}, (7)\nwhere Dθ ∈ Rn is a nonempty convex compact set, ξ is a random vector whose probability distribution, µ is supported on Ξ ∈ Rd and F : Dθ × Ξ → R. We assume that the expectation E[F (θ; ξ)] = ∫ Ξ F (θ; ξ)dµ(ξ) is well defined and finite-valued for every θ ∈ Dθ. Definition 1 (Proximal Operator (Moreau, 1965)). 
Let T be a function defined on a closed convex set X, and let α > 0. M^ψ_{α,T}(z) is the proximal operator of T, defined as:\n\nM^ψ_{α,T}(z) = argmin_{x∈X} {T(x) + (1/α) D_ψ(x, z)}, (8)\n\nwhere ψ(x) is a continuously differentiable, ζ-strictly convex function: ⟨x − y, ∇ψ(x) − ∇ψ(y)⟩ ≥ ζ‖x − y‖², ζ > 0, and D_ψ is the Bregman distance: D_ψ(x, y) = ψ(x) − ψ(y) − ⟨∇ψ(y), x − y⟩, ∀ x, y ∈ X.\n\nStochastic Mirror Descent SMD solves (7) by generating an iterative solution as follows:\n\nθ_{t+1} = M^ψ_{α_t, ℓ(θ)}(θ_t) = argmin_{θ∈D_θ} {⟨g_t, θ⟩ + (1/α_t) D_ψ(θ, θ_t)}, (9)\n\nwhere α_t > 0 is the step size, ℓ(θ) = ⟨g_t, θ⟩ is the first-order approximation of f(θ) at θ_t, g_t = g(θ_t, ξ_t) is a stochastic subgradient such that g(θ_t) = E[g(θ_t, ξ_t)] ∈ ∂f(θ)|_{θ=θ_t}, {ξ_t}_{t≥0} represents draws from the distribution µ, and ∂f(θ) = {g | f(θ) − f(ω) ≤ g^⊤(θ − ω), ∀ω ∈ dom(f)}.\n\nIf we choose ψ(x) = (1/2)‖x‖₂², which implies D_ψ(x, y) = (1/2)‖x − y‖₂², then iteration (9) is the proximal gradient (Rockafellar, 1976) view of SGD. Thus, SMD is a generalization of SGD.\n\nConvergence Criterion: Bregman Gradient The Bregman gradient is a generalization of the projected gradient (Ghadimi et al., 2016). Recently, Zhang & He (2018); Davis & Grimmer (2019) developed it to measure the convergence of algorithms for non-convex optimization problems. Evaluating the difference between each candidate solution x and its proximity is the key idea of the Bregman gradient for measuring the stationarity of x. Specifically, let X be a closed convex set in R^n, α > 0, and let T(x) be defined on X. The Bregman gradient of T at x ∈ X is:\n\nG^ψ_{α,T}(x) = α^{−1}(x − M^ψ_{α,T}(x)), (10)\n\nwhere M^ψ_{α,T}(·) is defined in Eq. (8). If ψ(x) = (1/2)‖x‖₂², then x* is a critical point of T if and only if G^ψ_{α,T}(x*) = ∇T(x*) = 0 (Bauschke et al. (2011); Theorem 27.1). Thus, the Bregman gradient (10) is a generalization of the gradient. The following Remark 1 is helpful for understanding the significance of the Bregman gradient, and it gives some insight into this convergence criterion.\nRemark 1. Let T be a convex function. By Proposition 5.4.7 of Bertsekas (2009), x* is a stationary point of T if and only if\n\n0 ∈ ∂(T + δ_X)(x*), (11)\n\nwhere δ_X(·) is the indicator function on X. Furthermore, suppose ψ(x) is twice continuously differentiable and let x̃ = M^ψ_{α,T}(x). By the definition of the proximal operator M^ψ_{α,T}(·), we have\n\n0 ∈ ∂(T + δ_X)(x̃) + (∇ψ(x̃) − ∇ψ(x)) ≈ ∂(T + δ_X)(x̃) + α G^ψ_{α,T}(x) ∇²ψ(x), (12)\n\nwhere the approximation holds due to the first-order Taylor expansion of ∇ψ(x). By the criterion (11), if G^ψ_{α,T}(x) ≈ 0, then Eq. (12) implies that the origin 0 is near the set ∂(T + δ_X)(x̃), i.e., x̃ is close to a stationary point.\n\nIn practice, we choose T(θ) = ⟨−∇J(θ_t), θ⟩, so that the criterion (12) is suitable for the RL problem (2). For the non-convex problem (2), we are satisfied with finding an ε-approximate First-Order Stationary Point (ε-FOSP) (Nesterov, 2004), denoted by θ_ε, such that\n\n‖G^ψ_{α,T}(θ_ε)‖ ≤ ε. (13)" }, { "heading": "3 POLICY OPTIMIZATION WITH STOCHASTIC MIRROR DESCENT", "text": "In this section, we solve problem (2) via SMD. Firstly, we analyze the theoretical dilemma of applying SMD directly to policy optimization. Then, we propose a convergent mirror policy optimization algorithm (MPO)."
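Before turning to the dilemma, it may help to see how concrete the SMD update (9) becomes once a mirror map is fixed. The following is a minimal NumPy sketch of two standard closed-form instances; the function names and step-size handling are illustrative, not part of the paper.

import numpy as np

def smd_step_euclidean(theta, g, alpha):
    # Mirror map psi(x) = 0.5 * ||x||_2^2, so D_psi is the squared Euclidean
    # distance and the update (9) reduces to a plain SGD step.
    return theta - alpha * g

def smd_step_entropy(theta, g, alpha):
    # Mirror map psi(x) = sum_i x_i log x_i on the probability simplex, so
    # D_psi is the KL divergence and (9) becomes the exponentiated-gradient
    # (multiplicative) update, renormalized onto the simplex.
    w = theta * np.exp(-alpha * g)
    return w / w.sum()

Both steps minimize ⟨g, θ⟩ + (1/α) D_ψ(θ, θ_t) exactly for their respective mirror maps, which is why SMD is described as a generalization of SGD.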
}, { "heading": "3.1 THEORETICAL DILEMMA", "text": "Let T = {τk}N−1k=0 be a collection of trajectories, τk ∼ πθk , we receive gradient information:\n−g(τk|θk) = − Hτk∑ t=0 ∇θ log πθ(at|st)R(τk)|θ=θk , (14)\nthen by SMD (9), to solve (2), for each 0 ≤ k ≤ N − 1, we define the update rule as follows,\nθk+1 =Mψαk,〈−g(τk|θk),θ〉(θk) = arg minθ {〈−g(τk|θk), θ〉+ 1 αk Dψ(θ, θk)}, (15)\nwhere αk > 0 is step-size and other symbols are consistent to previous paragraphs. Due to −J(θ) is non-convex, according to (Ghadimi et al., 2016), a standard strategy for analyzing non-convex optimization methods is to pick up the output θ̃N randomly according to the following distribution over {1, 2, · · · , N}:\nP (θ̃N = θn) = ζαn − Lα2n∑N\nk=1(ζαk − Lα2k) . (16)\nwhere ζ is defined in Definition 1, 0 < αn < ζL ,n = 1, 2, · · · , N . Theorem 1. (Ghadimi et al., 2016) Under Assumption 1, and the total trajectories are {τk}Nk=1. Consider the sequence {θk}Nk=1 generated by (15), the output θ̃N = θn follows the probability mass distribution of (16). Let 0 < αk < ζL , `(g, u) = 〈g, u〉, the term L and σ are defined in Eq.(5) and Eq.(6) correspondingly. Then, we have\nE[‖Gψαn,`(−gn,θn)(θn)‖ 2] ≤\n( J(θ∗)− J(θ1) ) +σ 2\nζ ∑N k=1 αk∑N\nk=1(ζαk − Lα2k) , (17)\nwhere gn is short for g(τn|θn).\nUnfortunately, it is worth to notice that the lower bound of (17) reaches( J(θ∗)− J(θ1) ) +σ 2\nζ ∑N k=1 αk∑N\nk=1(ζαk − Lα2k) ≥ σ\n2\nζ2 , (18)\nwhich can not guarantee the convergence of the iteration (15), no matter how the step-size αk is specified. Thus, under the Assumption 1, generating the solution {θk}Nk=1 according to (15) and the output following (16) lack a strong convergence guarantee.\nAn Open Problem The iteration (15) is a very important and general scheme that unifies many existing algorithms. For example, if the mirror map ψ(θ) = 1\n2 ‖θ‖22, then the update (15) is reduced to\npolicy gradient algorithm (Sutton et al., 2000) which is widely used in modern RL. The update (15) is natural policy gradient (Kakade, 2002; Peters & Schaal, 2008; Thomas et al., 2013) if we choose mirror map ψ(θ) = 12θ\n>F (θ)θ, where F = Eτ∼πθ [∇θ log πθ(s, a)∇θ log πθ(s, a)>] is Fisher information matrix. If ψ is Boltzmann-Shannon entropy function (Shannon, 1948), then Dψ is known as KL divergence and update (15) is reduced to relative entropy policy search (Peters et al., 2010; Fox et al., 2016; Chow et al., 2018). Despite the vast body of work around above specific methods, current works are scattered and fragmented in both theoretical and empirical aspects (Agarwal et al., 2019). Thus, it is of great significance to establish the fundamental theoretical convergence properties of iteration (15).\nPlease notice that for the non-convexity of problem (2), the lower bound of (18) holds under Assumption 1. It is natural to ask:\nWhat conditions guarantee the convergence of scheme (15)?\nThis is an open problem. Although, the iteration (15) is intuitively a convergent scheme, as discussed above that particular mirror maps ψ can lead (15) to some popular empirically effective RL algorithms; there is still no generally complete theoretical convergence analysis of (15). Such convergence properties not only help us to understand better why those methods work but also inspire us to design novel algorithms with the principled approaches. 
We leave this open problem and the related questions, e.g., how fast the iteration (15) converges to global optimality or its finite sample analysis, as future works.\nAlgorithm 1 Mirror Policy Optimization Algorithm (MPO) 1: Initialize: parameter θ1, step-sizeαk > 0, g0 = 0, parametric policy πθ(a|s), and map ψ. 2: for k = 1 to N do 3: Generate a trajectory τk = {st, at, rt+1} Hτk t=0 ∼ πθk , temporary variable g0 = 0.\ngk ← Hτk∑ t=0 ∇θ log πθ(at|st)R(τk)|θ=θk (21)\nĝk ← 1\nk gk + (1−\n1 k )gk−1 (22)\nθk+1 ← arg min ω {〈−ĝk, ω〉+\n1\nαk Dψ(ω, θk)} (23)\n4: end for 5: Output: θ̃N according to (16)." }, { "heading": "3.2 AN IMPLEMENTATION WITH CONVERGENT GUARANTEE", "text": "In this section, we propose a convergent implementation defined as follows, for each step k:\nθk+1 =Mψαk,〈−ĝk,θ〉(θk) = arg minθ∈Θ{〈−ĝk, θ〉+ 1 αk Dψ(θ, θk)}, (19)\nwhere ĝk is an arithmetic mean of previous episodes’ gradient estimate {g(τi|θi)}ki=1:\nĝk = 1\nk k∑ i=1 g(τi|θi). (20)\nWe present the details of an implementation in Algorithm 1. Notice that Eq.(22) is an incremental implementation of the average (20), thus, (22) enjoys a lower storage cost than (20).\nFor a given episode, the gradient flow (20)/(22) of MPO is slightly different from the traditional VPG, REINFORCE (Williams, 1992), A2C (Mnih et al., 2016) or DPG (Silver et al., 2014) whose gradient estimator follows (14) that is according to the trajectory of current episode, while our MPO uses an arithmetic mean of previous episodes’ gradients. The estimator (14) is a natural way to estimate the term −∇J(θt) = −E[ ∑Hτt k=0∇θ log πθ(ak|sk)R(τt)], i.e. using a single current trajectory to estimate policy gradient. Unfortunately, under Assumption 1, the result of (18) shows using (14) with SMD lacks a guarantee of convergence. This is exactly the reason why we abandon the way (14) and turn to propose (20)/(22) to estimate policy gradient. We provide the convergence analysis of our scheme (20)/(22) in the next Theorem 2. Theorem 2 (Convergence Rate of Algorithm 1). Under Assumption 1, and the total trajectories are {τk}Nk=1. Consider the sequence {θk}Nk=1 generated by Algorithm 1, and the output θ̃N = θn follows the distribution of Eq.(16). Let 0 < αk < ζL , `(g, u) = 〈g, u〉, the term L and σ are defined in Eq.(5) and Eq.(6) correspondingly. Let ĝk = 1k ∑k i=1 gi, where gi = ∑Hτi t=0 ∇θ log πθ(at|st)R(τi)|θ=θi . Then we have\nE[‖Gψαn,`(−gn,θn)(θn)‖ 2] ≤\n( J(θ∗)− J(θ1) ) +σ 2\nζ ∑N k=1\nαk k∑N\nk=1(ζαk − Lα2k) . (24)\nWe prove the proof in Appendix A. Let αk = ζ/2L, then, Eq(24) is E[‖Gψαn,`(−ĝn,θn)(θn)‖ 2] ≤ 4L ( J(θ∗)−J(θ1) ) +2σ2 ∑N k=1 1 k Nζ2 = O(lnN/N). Our scheme of MPO partially answers the previous open problem through conducting a new policy gradient estimator." }, { "heading": "4 VRMPO: A VARIANCE REDUCTION IMPLEMENTATION OF MPO", "text": "In this section, we propose a variance reduction version of MPO: VRMPO. In optimization community, variance reduction gradient estimator is a very popular method with provable convergence\nAlgorithm 2 Variance-Reduced Mirror Policy Optimization (VRMPO). 1: Initialize: Policy πθ(a|s) with parameter θ̃0, mirror map ψ,step-size αk > 0, epoch size K,m. 
2: for k = 1 to K do 3: θk,0 = θ̃k−1, generate Tk = {τi}N1i=1 ∼ πθk,0 4: θk,1 = θk,0 − αkGk,0, where Gk,0 = −∇̂N1J(θk,0) = − 1N1 ∑N1 i=1 g(τi|θk,0).\n5: for t = 1 to m− 1 do 6: Generate {τj}N2j=1 ∼ πθk,t\nGk,t = Gk,t−1 + 1\nN2 N2∑ j=1 (−g(τj |θk,t) + g(τj |θk,t−1)), (25)\nθk,t+1 = arg min ω {〈Gk,t, ω〉+\n1\nαk Dψ(ω, θk,t)} (26)\n7: end for 8: θ̃k = θk,t with t chosen uniformly randomly from {0, 1, ...,m}. 9: end for\n10: Output: θ̃K .\nguarantee (Reddi et al., 2016; Fang et al., 2018; Horváth & Richtárik, 2019). Inspired by the above works, now, we present an efficiently computable policy gradient estimator. For any initial θ0, let {τ0j }Nj=1 ∼ πθ0 , we calculate the initial gradient estimate as follows,\nG0 = −∇̂NJ(θ0) def = − 1\nN N∑ j=1 g(τ0j |θ0). (27)\nLet θ1 = θ0 − αG0, for each time t ∈ N+, let {τ tj}Nj=1 be the trajectories generated by πθt , we define the policy gradient estimate Gt and the update rule of parameter as follows,\nGt = Gt−1 + 1\nN N∑ j=1 (−g(τ tj |θt) + g(τ tj |θt−1)), (28)\nθt+1 = arg min θ {〈Gt, θ〉+\n1 α Dψ(θ, θt)}, (29)\nwhere α > 0 is step-size. We present more details in Algorithm 2.\nIn (28),−g(τ tj |θt) and g(τ tj |θt−1) share the same trajectory {τ tj}Nj=1, which plays a critical role in reducing the variance of gradient estimate (Shen et al., 2019). Besides, it is different from (20), we admit a simple recursive formulation to conduct the policy gradient estimator Gt (28), which captures some techniques from SARAH (Nguyen et al., 2017a;b). At each time t, the term 1 N ∑N j=1 ( − g(τ tj |θt) + g(τ tj |θt−1) ) can be seen as an additional “noise” for the policy gradient estimate. A lot of practices show that conducting a gradient estimator with the additional noise enjoys a lower variance and speeding up the convergence (Reddi et al., 2016; Schmidt et al., 2017; Nguyen et al., 2017a;b; Fang et al., 2018).\nTheorem 3 (Convergence Analysis of Algorithm 2). The sequence {θ̃k}Kk=1 is generated according to Algorithm 2. Under Assumption 1, let ζ > 532 , for any positive scalar , the batch size of\nthe trajectories of outer loop N1 = ( 1\n8Lζ2 + 1 2(ζ− 532 )\n( 1 + 132ζ2 )) σ2 2 , the iteration times of in-\nner loop m − 1 = N2 = √( 1\n8Lζ2 + 1 2(ζ− 532 )\n( 1 + 132ζ2 ))σ , the iteration times of outer loop\nK = 8L(E[J (θ̃0)]− J (θ∗)) (m− 1)(ζ − 532 ) ( 1 + 116ζ2 ) 1 2 , and step size αk = 14L . Then, Algorithm 2 outputs the point θ̃K achieves\nE [ ‖Gψ α,〈−∇J(θ̃K),θ〉(θ̃K)‖ ] ≤ . (30)\nBy the result in Theorem 3, under Assumption 1, to achieve the -FOSP, Algorithm 2 (VRMPO) needs K(N1 +(m−1)N2) = 8L(E[J (θ̃0)]−J (θ ∗))\n(ζ− 532 )\n( 1+ 116ζ2 )( 1+ √( 1 8Lζ2 +\n1 2(ζ− 532 )\n( 1 + 132ζ2 )) σ ) 1 2 =\nO( 1 3 ) random trajectories. As far as we know, our VRMPO matches the best-known sample complexity as the HAPG algorithm (Shen et al., 2019).\nIn fact, according to (Shen et al., 2019), REINFORCE needs O( −4) random trajectory trajectories to achieve the -FOSP, and no provable improvement on its complexity has been made so far. The same order of sample complexity of REINFORCE is shown by Xu et al. (2019). With the additional\nassumptions Var [∏H\nh=0 πθ0(ah|sh) πθt(ah|sh)\n] ,Var[g(τ |θ)] < +∞, Papini et al. (2018) show that the SVRPG\nachieves the sample complexity ofO( −4). Later, under the same assumption as (Papini et al., 2018), Xu et al. (2019) reduce the sample complexity of SVRPG toO( − 103 ). We provide more details of the comparison in Table 1, from which it is easy to see that our VRMPO matches the best-known sample complexity with least conditions." 
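For readers who prefer code, here is a minimal NumPy sketch (our illustration, not the authors' released code) of the two gradient estimators that distinguish MPO and VRMPO from vanilla policy gradient: the running average of per-episode gradients in Eq. (22), and the recursive SARAH-style estimator in Eq. (25)/(28), in which both gradient evaluations share the same batch of trajectories. The per-trajectory gradient oracle g(τ | θ) is abstracted away; with the Euclidean mirror map, the proximal step (23)/(26) reduces to the last function below.

```python
import numpy as np

def mpo_estimate(g_k, g_hat_prev, k):
    """Eq. (22): incremental running average of per-episode policy gradients,
    g_hat_k = (1/k) * g_k + (1 - 1/k) * g_hat_{k-1}; g_hat estimates grad J."""
    return g_k / k + (1.0 - 1.0 / k) * g_hat_prev

def vrmpo_estimate(G_prev, grads_new, grads_old):
    """Eq. (25)/(28): G_t = G_{t-1} + (1/N) * sum_j (-g(tau_j | theta_t)
    + g(tau_j | theta_{t-1})). grads_new[j] and grads_old[j] are the gradients
    of the SAME trajectory tau_j evaluated at theta_t and theta_{t-1};
    G_t estimates -grad J(theta_t)."""
    return G_prev + np.mean(grads_old - grads_new, axis=0)

def euclidean_mirror_step(theta, direction, alpha):
    """Eq. (23)/(26) with psi = 0.5 * ||.||_2^2:
    argmin_w <direction, w> + (1/alpha) * 0.5 * ||w - theta||^2
    has the closed form theta - alpha * direction."""
    return theta - alpha * direction
```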
}, { "heading": "5 RELATED WORKS", "text": "Stochastic Variance Reduced Gradient in RL Although it has achieved considerable successes in supervised learning, stochastic variance reduced gradient optimization is rarely a matter of choice in RL. To our best knowledge, Du et al. (2017) firstly introduce SVRG (Johnson & Zhang, 2013) to off-policy evaluation. Du et al. (2017) transform the empirical policy evaluation problem into a (quadratic) convex-concave saddle-point problem, then they solve the problem via SVRG straightforwardly. Later, to improve sample efficiency for complex RL, Xu et al. (2017) combine SVRG with TRPO (Schulman et al., 2015). Similarly, Yuan et al. (2019) introduce SARAH (Nguyen et al., 2017a) to TRPO to improve sample efficiency. However, the results presented by Xu et al. (2017) and Yuan et al. (2019) are empirical, which lacks a strong theory analysis. Metelli et al. (2018) present a surrogate objective function with a Rényi divergence (Rényi et al., 1961) to reduce the variance caused by importance sampling.\nRecently, Papini et al. (2018) propose a stochastic variance reduced version of policy gradient (SVRPG), and they define the gradient estimator via important sampling,\nGt = G̃t−1 + 1\nN N∑ j=1 ( − g(τ tj |θt) + H∏ h=0 πθ0(ah|sh) πθt(ah|sh) g(τ tj |θt−1) ) , (31)\nwhere G̃t−1 is an unbiased estimator according to the trajectory generated by πθt−1 . Although the above algorithms are practical empirically, their gradient estimates are dependent heavily on important sampling. This fact partially reduces the effectiveness of variance reduction. Later, Shen et al. (2019) remove the important sampling term, and they construct the gradient estimator as follows,\nGt = G̃t−1 + 1\nN N∑ j=1 ( − g(τ tj |θt) + g(τ tj |θt−1) ) . (32)\nIt is different from (Du et al., 2017; Xu et al., 2017; Papini et al., 2018; Shen et al., 2019), the proposed VRMPO admits a stochastic recursive iteration to estimate the policy gradient, see Eq.(28). Our VRMPO exploits the fresh information to improve convergence and reduce variance. Besides, VRMPO reduces the storage cost significantly due to it doesn’t require to store the complete historical information. We provide more details of the comparison in Table 1.\nBaseline Methods for Variance Reduction of Policy Gradient Baseline (also also known as control variates (Cheng et al., 2019a) or reward reshaping (Ng et al., 1999; Jie & Abbeel, 2010)) is a widely used technique to reduce the variance (Weaver & Tao, 2001; Greensmith et al., 2004). For example, A2C (Sutton & Barto, 1998; Mnih et al., 2016) introduces the value function as baseline function, Wu et al. (2018) consider action-dependent baseline, and Liu et al. (2018) use the Stein’s identity (Stein, 1986) as baseline. Q-Prop (Gu et al., 2017) makes use of both the linear dependent baseline and GAE (Schulman et al., 2016) to reduce variance. Cheng et al. (2019b) present a predictorcorrector framework that can transform a first-order model-free algorithm into a new hybrid method that leverages predictive models to accelerate policy learning. Mao et al. 
(2019) derive a bias-free,\nAlgorithm Estimator Conditions Complexity\nVPG/REINFORCE Eq.(14) Assumption 1;Var[g(τ |θ)] < +∞ O( −4) SVRPG (Papini et al., 2018) Eq.(31) Assumption 1;Var[ρt],Var[g(τ |θ)] < +∞ O( −4) SVRPG (Xu et al., 2019) Eq.(31) Assumption 1;Var[ρt],Var[g(τ |θ)] < +∞ O( −10/3) HAPG (Shen et al., 2019) Eq.(32) Assumption 1 O( −3) VRMPO (Our Works) Eq.(28) Assumption 1 O( −3)\nTable 1: Comparison on complexity required to achieve ‖∇J(θ)‖ ≤ . Particularly, if ψ(θ) =\n1 2‖θ‖ 2 2, then the result (30) of our VRMPO is measured by gradient. Beside, ρt def =\n∏H\nh=0 πθ0 (ah|sh) πθt (ah|sh) .\ninput-dependent baseline to reduce variance, and analytically show its benefits over state-dependent baselines. Recently, Grathwohl et al. (2018); Cheng et al. (2019a) provide a standard explanation for the benefits of such approaches with baseline function.\nHowever, the capacity of all the above methods is limited by their choice of baseline function (Liu et al., 2018). In practice, it is troublesome to design a proper baseline function to reduce the variance of policy gradient estimate. Our VRMPO avoids the selection of baseline function, and it uses a current sample trajectory to construct a novel, efficiently computable gradient estimator to reduce variance and speed convergence." }, { "heading": "6 EXPERIMENTS", "text": "" }, { "heading": "6.1 NUMERICAL ANALYSIS OF MPO", "text": "In this section, we use an experiment to demonstrate MPO converges faster than VPG/REINFORCE. Then, we test how the mirror map ψ effects the performance of MPO.\nPerformance Comparison We compare the convergence rate of MPO with REINFORCE and VPG empirically on the Short Corridor with Switched Actions domain (Chapter 13, Sutton & Barto (2018); We provide some details in Appendix B). The task is to estimate the value function of state s1, V (s1) = G0 ≈ −11.6.\nπθ(a|s) = exp{Lθ(s, a)}∑\na′∈A exp{Lθ(s, a ′)}\n. The initial pa-\nrameter θ0 = U [−0.5, 0.5], where U is uniform distribution.\nBefore we report the experimental results, it is necessary to explain why we only use VPG and REINFORCE as the baseline to compare with our MPO. VPG/REINFORCE is one of the most basic policy gradient methods in RL, and extensive modern policy-based algorithms are derived from VPG/REINFORCE. Our MPO is a novel framework via mirror map to learn the parameter, see (23). Thus, it is natural to compare with VPG and REINFORCE. The result in Figure 1 shows that MPO converges faster significantly and achieves a better performance than both REINFORCE and MPO.\nEffect of Mirror Map ψ We use ψ = `p-norm to test how the mirror map affects the performance of MPO. Particularly, the iteration (23) reduces to gradient descent update if ψ = `2-norm. For ψ = `pnorm, Eq.(23) has a closed implementation. Let ψ∗(y) = ( ∑n i=1 |yi|q) 1 q be the conjugate map of ψ,\nwhere p−1 + q−1 = 1, p, q > 1. According to (Beck & Teboulle, 2003), (23) is equivalent to\nθk+1 = ∇ψ∗(∇ψ(θk) + αkĝk),\nwhere ∇ψj(x) and ∇ψ∗j (y) are p-norm link functions (Gentile, 2003): ∇ψj(x) = sign(xj)|xj |p−1\n‖x‖p−2p ,∇ψ∗j (y) =\nsign(yj)|yj |q−1\n‖y‖q−2q , and j is coordinate index of the vector∇ψ,∇ψ∗.\nTo compare fairly, we use the same random seed, and let p run in [P ] = {1.1, 1.2, · · · , 1.9, 2, 3, 4, 5}. For the non-Euclidean distance case, we only show p = 3, 4, 5, and “best”, where p = “best” value is that case it achieves the best performance among the set [P ]. 
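As a concrete illustration of the p-norm case, the following NumPy sketch (ours; the parameter values are arbitrary) implements the link functions ∇ψ and ∇ψ* above and the closed-form mirror update θ_{k+1} = ∇ψ*(∇ψ(θ_k) + α_k ĝ_k). It assumes p > 1 so that q = p/(p−1) is finite; the two link functions are inverses of one another, so p = 2 recovers the plain gradient step.

```python
import numpy as np

def link(x, p):
    """p-norm link function: the gradient of psi(x) = 0.5 * ||x||_p^2,
    with j-th coordinate sign(x_j) * |x_j|^(p-1) / ||x||_p^(p-2)."""
    norm = np.linalg.norm(x, ord=p)
    if norm == 0.0:
        return np.zeros_like(x)
    return np.sign(x) * np.abs(x) ** (p - 1) / norm ** (p - 2)

def mirror_update(theta, g_hat, alpha, p):
    """Closed-form p-norm mirror ascent step:
    theta_{k+1} = grad psi*( grad psi(theta_k) + alpha * g_hat ),
    where grad psi* is the q-norm link with 1/p + 1/q = 1."""
    q = p / (p - 1.0)
    return link(link(theta, p) + alpha * g_hat, q)

theta = np.array([0.3, -0.2, 0.5])
g_hat = np.array([0.1, 0.0, -0.4])   # averaged policy gradient estimate
print(mirror_update(theta, g_hat, alpha=0.1, p=3))
```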
Due to space limitations, we provide more details of the experiments in Appendix D.1.

The result in Figure 2 shows that the best method is produced by a non-Euclidean distance, not the Euclidean distance. Traditional policy gradient methods such as REINFORCE, VPG, and DPG all update parameters in the Euclidean geometry. This simple experiment suggests that one can create better algorithms by combining existing approaches with non-Euclidean distances, which is an interesting direction that we leave as future work." }, { "heading": "6.2 EVALUATE VRMPO ON CONTINUOUS CONTROL TASKS", "text": "In this section, we evaluate VRMPO on the MuJoCo continuous control tasks (Todorov et al., 2012) from OpenAI Gym (Brockman et al., 2016). We compare VRMPO with DDPG (Lillicrap et al., 2016), PPO (Schulman et al., 2017), TRPO (Schulman et al., 2015), and TD3 (Fujimoto et al., 2018). For fairness, all of the above setups share the same network architecture for computing the policy and state value. We run all algorithms with ten random seeds. The max-average epoch returns are presented in Table 2, and return curves are shown in Figure 3. Due to space limitations, we present all experimental details and some practical tricks for the implementation of VRMPO in Appendix D.2-D.5; in this section, we only report our experimental results. We evaluate VRMPO along three axes: score performance, training stability, and variance.

Score Performance Comparison From the results in Figure 3 and Table 2, VRMPO overall outperforms the baseline algorithms in both final performance and the learning process. VRMPO also learns considerably faster, with better performance, than the popular TD3 on the Walker2d-v2, HalfCheetah-v2, Hopper-v2, InvDoublePendulum-v2, and Reacher-v2 domains. On the InvDoublePendulum-v2 task, VRMPO has only a small advantage over the other algorithms, because that task is relatively easy; the advantage of VRMPO grows as the task becomes more difficult. It is worth noting that on the HalfCheetah-v2 domain, VRMPO achieves a max-average score above 16000, far exceeding the second-best score of 11781.

Stability The stability of an algorithm is also an important topic in RL. Although DDPG exploits off-policy samples, which promotes its efficiency in stable environments, DDPG is unstable on the Reacher-v2 task, while VRMPO learns significantly faster with lower variance. DDPG fails to make any progress on the InvDoublePendulum-v2 domain, a result corroborated by Dai et al. (2018). Although TD3 takes the minimum over a pair of critics to limit overestimation, its learning fluctuates severely in the InvertedDoublePendulum-v2 environment. In contrast, VRMPO is consistently reliable and effective across different tasks.

(Figure 3: average-return curves over training epochs for VRMPO, DDPG, PPO, TD3, and TRPO on (a) Walker2d-v2, (b) HalfCheetah-v2, and (c) Reacher-v2.)

Variance Comparison As we can see from the results in Figure 3, VRMPO converges with considerably low variance on Hopper-v2, InvDoublePendulum-v2, and Reacher-v2.
Although the asymptotic variance of VRMPO is slightly larger than other algorithms in HalfCheetah-v2, the final performance of VRMPO outperforms all the baselines significantly. The result in Figure 3 also implies conducting a proper gradient estimator not only reduce the variance of the score during the learning but speed the convergence of training." }, { "heading": "7 CONCLUSION", "text": "In this paper, we propose the mirror policy optimization (MPO) by estimating the policy gradient via dynamic batch-size of historical gradient information. Results show that making use of historical gradients to estimate policy gradient is more effective to speed convergence. We also propose a variance reduction implementation for MPO: VRMPO, and prove the complexity of VRMPO achieves O( −3). To our best knowledge, VRMPO matches the best-known sample complexity so far. Finally, we evaluate the performance of VRMPO on the MuJoCo continuous control tasks, results show that VRMPO outperforms or matches several state-of-art algorithms DDPG, TRPO, PPO, and TD3." }, { "heading": "A PROOF OF THEOREM 2", "text": "Let f(x) be a L-smooth function defined on Rn, i.e ‖∇f(x) − ∇f(y)‖ ≤ L‖x − y‖. Then, for ∀x, y ∈Rn, the following holds\n‖f(x)− f(y)− 〈∇f(y), x− y〉‖ ≤ L 2 ‖x− y‖2. (33)\nThe following Lemma 2 is useful for our proof. Lemma 2 (Ghadimi et al. (2016), Lemma 1 and Proposition 1). Let X be a closed convex set in Rd, h : X → R be a convex function, but possibly nonsmooth, and Dψ : X × X → R is Bregman divergence. Moreover, define\nx+ = arg min u∈X\n{ 〈g, u〉+ 1\nη Dψ(u, x) + h(u) } PX (x, g, η) = 1\nη (x− x+), (34)\nwhere g ∈ Rd, x ∈ X , and η > 0. Then, the following statement holds\n〈g, PX (x, g, η)〉 ≥ ζ‖PX (x, g, η)‖2 + 1\nη [h(x+)− h(x)], (35)\nwhere ζ is a positive constant determined by ψ (i.e. ψ is a a continuously-differentiable and ζstrictly convex function) that satisfying 〈x − y,∇ψ(x) −∇ψ(y)〉 ≥ ζ‖x − y‖2. Moreover, for any g1, g2 ∈ Rd, the following statement holds\n‖PX (x, g1, η)− PX (x, g2, η)‖ ≤ 1\nζ ‖g1 − g2‖. (36)\nTheorem 2 (Convergence Rate of Algorithm 1) Under Assumption 1, and the total trajectories are {τk}Nk=1. Consider the sequence {θk}Nk=1 generated by Algorithm 1, and the output θ̃N = θn follows the distribution of Eq.(16). Let 0 < αk < ζL , `(g, u) = 〈g, u〉, the term L and σ are defined in Eq.(5) and Eq.(6) correspondingly. Let ĝk = 1k ∑k i=1 gi, where gi = ∑Hτi t=0 ∇θ log πθ(at|st)R(τi)|θ=θi . Then we have\nE[‖Gψαn,`(−gn,θn)(θn)‖ 2] ≤\n( J(θ∗)− J(θ1) ) +σ 2\nζ ∑N k=1\nαk k∑N\nk=1(ζαk − Lα2k) .\nProof. (Proof of Theorem 2)\nLet T = {τk}Nk=1 be the trjecories generated by the differentiable parametric policy πθ. At each terminal end of a trajectory τk = {st, at, rt+1} Hτk t=0 ∈ T , let\ngk = Hτk∑ t=0 ∇θ log πθ(at|st)R(τk)|θ=θk , ĝk = 1 k k∑ i=1 gi,\naccording to Algorithm 1, at the terminal end of k-th episode, k = 1, 2, · · · , N , the following holds,\nθk+1 = arg min θ\n{ 〈−ĝk, θ〉+ 1\nαk Dψ(θ, θk)\n} =Mψαk,`(−ĝk,θ)(θk).\nTo simplify expression, let J (θ) = −J(θ), then J (θ) is L-smooth, from Eq.(33), we have J (θk+1) ≤ J (θk) + 〈 ∇θJ (θ) ∣∣ θ=θk , θk+1 − θk 〉 + L\n2 ∥∥∥θk+1 − θk∥∥∥2 = J (θk)− αk 〈 ∇J (θk),Gψαk,`(−ĝk,θk)(θk) 〉 + Lα2k\n2 ∥∥∥Gψαk,`(−ĝk,θk)(θk)∥∥∥2 = J (θk)− αk 〈 ĝk,Gψαk,`(−ĝk,θk)(θk) 〉 + Lα2k\n2 ∥∥∥Gψαk,`(−ĝk,θk)(θk)∥∥∥2 + αk 〈 k,Gψαk,`(−ĝk,θk)(θk) 〉 ,\nwhere k = −ĝk − (−∇J(θk)) = −ĝk − ∇J (θk). By Eq.(34) and let h(x) ≡ 0 and η = α, then PX (θ, g, α) = Gψα,`(g,θ)(θ). 
Furthermore, by Eq.(35), let η = αk and g = −ĝk, then we have\nJ (θk+1) ≤ J (θk)− αkζ ∥∥∥Gψαk,`(−ĝk,θk)(θk)∥∥∥2 + Lα2k2 ∥∥∥Gψαk,`(−ĝk,θk)(θk)∥∥∥2\n+ αk 〈 k,Gψαk,`(−ĝk,θk)(θk) 〉 = J (θk)− αkζ\n∥∥∥Gψαk,`(−ĝk,θk)(θk)∥∥∥2 + Lα2k2 ∥∥∥Gψαk,`(−ĝk,θk)(θk)∥∥∥2 + αk 〈 k,Gψαk,`(−∇J(θk),θk)(θk)\n〉 + αk 〈 k,Gψαk,`(−ĝk,θk)(θk)− G ψ αk,`(−∇J(θk),θk)(θk) 〉 . (37)\nRearrange Eq.(37), we have\nJ (θk+1) ≤ J (θk)− ( ζαk −\nLα2k 2 )∥∥∥Gψαk,`(−ĝk,θk)(θk)∥∥∥2 + αk〈 k,Gψαk,`(−∇J(θk),θk)(θk)〉 + αk‖ k‖\n∥∥∥Gψαk,`(−ĝk,θk)(θk)− Gψαk,`(−∇J(θk),θk)(θk)∥∥∥. By Eq.(36), let x = θk, g1 = −ĝk, g2 = −∇J(θk), h(x) ≡ 0, then the following statement holds\nJ (θk+1)\n≤ J (θk)− ( ζαk −\nLα2k 2 )∥∥∥Gψαk,`(−ĝk,θk)(θk)∥∥∥2 + αk〈 k,Gψαk,`(−∇J(θk),θk)(θk)〉+ αkL ‖ k‖2. (38)\nSumming the above Eq.(38) from k = 1 toN and with the condition αk ≤ ζ\nL , we have the following\nstatement N∑ k=1 ( ζαk − Lα2k )∥∥∥Gψαk,`(−ĝk,θk)(θk)∥∥∥2 ≤\nN∑ k=1 ( ζαk − Lα2k 2 )∥∥∥Gψαk,`(−ĝk,θk)(θk)∥∥∥2 ≤\nN∑ k=1 [ αk 〈 k,Gψαk,`(−∇J(θk),θk)(θk) 〉 + αk ζ ‖ k‖2 ] + J (θ1)− J (θk+1)\n≤ N∑ k=1 [ αk 〈 k,Gψαk,`(−∇J(θk),θk)(θk) 〉 + αk ζ ‖ k‖2 ] + J (θ1)− J ∗. (39)\nRecall\ngk = Hτk∑ t=0 ∇θ log πθ(at|st)R(τk), ĝk = 1 k k∑ i=1 gi,\nby policy gradient theorem\nE[−gk] = E[−ĝk] = −∇J(θk) = ∇J (θk). (40)\nLet Fk be the σ-field generated by all random variables defined before round k, ̃k = gk −∇J (θk) then the Eq.(40) implies: for k = 1, · · · , N ,\nE [〈 k,Gψαk,`(−∇J(θk),θk)(θk) 〉∣∣∣Fk−1] = E[〈̃k,Gψαk,`(−∇J(θk),θk)(θk)〉∣∣∣Fk−1] = 0. Let δs = ∑s t=1 ̃t, noticing that for s = 1, · · · , k,\nE[〈δs, ̃s+1〉|δs] = 0. (41)\nFurthermore, the following statement holds\nE[‖δk‖2] = E [ ‖δk−1‖2 + 2 〈 δk−1, ̃k 〉 + ‖̃t‖2 ] (41) = E [ ‖δk−1‖2 + ‖̃t‖2 ] = · · · = k∑ t=1 E‖̃t‖2.\n(42)\nBy Lemma 1 and Eq.(42), we have\nE[‖ k‖2] = 1\nk2 k∑ t=1 E‖̃t‖2 ≤ σ2 k . (43)\nTaking Eq.(43) in to Eq.(39), and taking expections w.r.t FN , we have N∑ k=1 ( ζαk − Lα2k ) E [∥∥Gψαk,`(−ĝk,θk)(θk)∥∥2] ≤ J θ1)− J ∗ + σ2ζ N∑ k=1 αk k .\nNow, consider the output θ̃N = θn follows the distribution of Eq.(16), we have\nE[‖Gψαn,`(−gn,θn)(θn)‖ 2] ≤\n( J(θ∗)− J(θ1) ) +σ 2\nζ ∑N k=1\nαk k∑N\nk=1(ζαk − Lα2k) .\nParticularly, if the step-size αk is fixed to a constant:ζ/2L, then\nE[‖Gψαn,`(−ĝn,θn)(θn)‖ 2] ≤\n4L ( J(θ∗)− J(θ1) ) +2σ2 ∑N k=1 1 k\nNζ2 .\nRecall the following estimation\nN∑ k=1 1 k = lnN + C + o(1),\nwhere C is the Euler constant—a positive real number and o(1) is infinitesimal. Thus the overall convergence rate reaches O( lnNN ) as\nE[‖Gψαn,`(−ĝn,θn)(θn)‖ 2] ≤\n4LD2J + 2σ ∑N k=1 1 k\nNζ = O( lnN N )." }, { "heading": "B SHORT CORRIDOR WITH SWITCHED ACTIONS", "text": "Consider the small corridor grid world which contains three sates S = {1, 2, 3}. The reward is −1 per step. In each of the three nonterminal states there are only two actions, right and left. These actions have their usual consequences in the state 1 and state 3 (left causes no movement in the first state), but in the state 2 they are reversed. so that right moves to the left and left moves to the right.\nAn action-value method with -greedy action selection is forced to choose between just two policies: choosing right with high probability 1 − 2 on all steps or choosing left with the same high probability on all time steps. If = 0.1, then these two policies achieve a value (at the start state) of less than −44 and −82, respectively, as shown in the following graph.\nA method can do significantly better if it can learn a specific probability with which to select right. The best probability is about 0.58, which achieves a value of about −11.6." 
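The optimal probability and value quoted above can be reproduced with a short calculation. The sketch below (our illustration, following the description of the domain above) solves the Bellman equations for the policy that selects right with probability p in every state and scans over p; it recovers an optimum near p ≈ 0.58 with value about −11.6.

```python
import numpy as np

def corridor_value(p):
    """Value of the start state for the policy that picks `right` with
    probability p in every state of the short-corridor task: reward -1 per
    step, `left` causes no movement in the first state, and the action
    effects are flipped in the middle state."""
    # Nonterminal states 0, 1, 2; the terminal state has value 0.
    # Solve v = -1 + P(p) v for the nonterminal block.
    P = np.array([
        [1 - p, p,     0.0],   # state 0: left = stay, right = forward
        [p,     0.0,   1 - p], # state 1: actions reversed
        [0.0,   1 - p, 0.0],   # state 2: right exits to the terminal state
    ])
    v = np.linalg.solve(np.eye(3) - P, -np.ones(3))
    return v[0]

ps = np.linspace(0.01, 0.99, 981)
vals = np.array([corridor_value(p) for p in ps])
print(ps[vals.argmax()], vals.max())   # ~0.586, ~-11.66
```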
}, { "heading": "C PROOF OF THEOREM 3", "text": "We need the following lemmas to prove the convergence result.\nLemma 3 (Lemma1 (Fang et al., 2018)). Under Assumption 1, Gk,t is generated according to (25), θk,t is generated according to (26), then for any 0 ≤ t ≤ m, the following holds\nE[‖Gk,t −∇J (θk,t)‖2] ≤ L2\nN2 E[‖θk,t − θk,t−1‖2] + E[‖Gk,t−1 − J (θk,t−1)‖2]. (44)\nTelescoping Eq.(3) over t from 1 to the time t, then the following holds\nE[‖Gk,t −∇J (θk,t)‖2] ≤ t∑ i=1 L2 N2 E[‖θk,i+1 − θk,i‖2] + E[‖Gk−1,0 −∇J (θ̃k−1)‖2]. (45)\nLemma 4. Let ζ > 532 , the batch size of the trajectories of outer loop N1 =( 1 8Lζ2 + 1 2(ζ− 532 ) ( 1 + 132ζ2 )) σ2\n2 , the iteration times of inner loop m − 1 = N2 =√( 1\n8Lζ2 + 1 2(ζ− 532 )\n( 1 + 132ζ2 )) σ\n, and step size αk = 14L . For each k and t, Gk,0 and θk,0 are\ngenerated by Algorithm 2, then the following holds, E‖∇J (θk,0)−Gk,0‖2 ≤ ( 1\n8Lζ2 +\n1 2(ζ − 532 ) ( 1 + 1 32ζ2 ))−1 2. (46)\nProof.\nE‖∇J (θk,0)−Gk,0‖2 = E‖∇J(θk,0)− 1\nN1 N1∑ i=1 g(τi|θk,0)‖2 (47)\n= 1\nN21 N1∑ i=1 E‖∇J(θk,0)− g(τi|θk,0)‖2 (48)\n(6) ≤ σ 2\nN1 (49) = ( 1\n8Lζ2 +\n1 2(ζ − 532 ) ( 1 + 1 32ζ2 ))−1\n2︸ ︷︷ ︸ def = 21 . (50)\nTheorem 3 (Convergence Rate of VRMPO) The sequence {θ̃k}Kk=1 is generated according to Algorithm 2. Under Assumption 1, let ζ > 532 , the batch size of the trajectories of outer loop\nN1 = ( 1\n8Lζ2 + 1 2(ζ− 532 )\n( 1 + 132ζ2 )) σ2\n2 , the iteration times of inner loop m − 1 = N2 =√( 1 8Lζ2 + 1 2(ζ− 532 ) ( 1 + 132ζ2 ))σ , the iteration times of outer loop K = 8L (m− 1)(ζ − 532 ) ( 1 +\n1 16ζ2 ) 1 2 , and step size αk = 14L . Then, Algorithm 2 outputs the point θ̃K achieves\nE[‖Gψ α,〈−∇J(θ̃K),θ〉(θ̃K)‖] ≤ . (51)\nProof. (Proof of Theorem 3)\nBy the definition of Bregman grdient mapping in Eq.(10) and iteration (26), let αk = α, we have\n1 α (θk,t − θk,t+1) (26) = 1 α\n( θk,t − arg min\nu {〈Gk,t, u〉+\n1\nαk Dψ(u, θk,t)} ) ︸ ︷︷ ︸\ngk,t\n(10) = Gψα,〈Gk,t,u〉(θk,t),\n(52)\nwhere we introduce gk,t to simplify notations.\nStep 1: Analyze the inner loop of Algorithm 2 Now, we analyze the inner loop of Algorithm 2. In this step, our goal is to prove\nE[J (θ̃k)]− E[J (θ̃k−1)] ≤ − m−1∑ t=1 ( ηE[‖gk,t‖2]− α 2 2 ) .\nIn fact,\nJ (θk,t+1) (33) ≤ J (θk,t) + 〈∇J (θk,t), θk,t+1 − θk,t〉+ L\n2 ‖θk,t+1 − θk,t‖2\n(52) = J (θk,t)− α 〈∇J (θk,t), gk,t〉+\nLα2\n2 ‖gk,t‖2\n= J (θk,t)− α 〈∇J (θk,t)−Gk,t, gk,t〉 − α 〈Gk,t, gk,t〉+ Lα2\n2 ‖gk,t‖2\n≤ J (θk,t) + α\n2 ‖∇J (θk,t)−Gk,t‖2 − α 〈Gk,t, gk,t〉+\n( Lα2\n2 + α 2\n) ‖gk,t‖2 (53)\n(35)\n≤ J (θk,t) + α\n2 ‖∇J (θk,t)−Gk,t‖2 − ζα‖gk,t‖2 +\n( Lα2\n2 + α 2\n) ‖gk,t‖2, (54)\nEq.(53) holds due to the Cauchy-Schwarz inequality |〈u,v〉| ≤ ‖u‖‖v‖ ≤ 1 2\n(‖u‖2 +‖v‖2) for any u,v ∈ Rn. Eq.(54) holds if h ≡ 0 by Eq.(35). 
Taking expectation on both sides of Eq(54), we have\nE[J (θk,t+1)] ≤ E[J (θk,t)] + α 2 E [ ‖∇J (θk,t)−Gk,t‖2 ] − ( ζα− Lα 2 2 − α 2 ) E [ ‖gk,t‖2 ] ≤ E[J (θk,t)] + α\n2 t∑ i=1 L2 N2 E‖θk,i+1 − θk,i‖2\n+ α\n2 E‖Gk−1,0 −∇J (θk−1,0)‖2 −\n( ζα− Lα 2\n2 − α 2\n) E‖gk,t‖2 (55)\nEq.(55) holds due to Lemma 3.\nBy Lemma 4, Eq.(55) and Eq.(52), we have\nE[J (θk,t+1)] ≤ E[J (θk,t)] + α3L2\n2N2 t∑ i=1 E [ ‖gk,i‖2 ] + α 21 2 − ( ζα− Lα 2 2 − α 2 ) E‖gk,t‖2.\nRecall the parameter θ̃k−1 = θk−1,m is generated by the last time of (k − 1)-th episode, we now consider the following equation\nE[J (θk,t+1)]− E[J (θ̃k−1)]\n≤α 3L2\n2N2 t∑ j=1 j∑ i=1 E‖gk,i‖2 + α 2 t∑ j=1 21 − ( ζα− Lα 2 2 − α 2 ) t∑ j=1 E‖gk,j‖2\n≤α 3L2\n2N2 t∑ j=1 t∑ i=1 E‖gk,i‖2 + α 2 t∑ j=1 21 − ( ζα− Lα 2 2 − α 2 ) t∑ j=1 E‖gk,j‖2\n= α3L2t\n2N2 t∑ i=1 E‖gk,i‖2 + α 2 t∑ j=1 21 − ( ζα− Lα 2 2 − α 2 ) t∑ j=1 E‖gk,j‖2\n≤α 3L2(m− 1)\n2N2\nt∑ i=1 E‖gk,i‖2 + α 2 t∑ j=1 21 − ( ζα− Lα 2 2 − α 2 ) t∑ j=1 E‖gk,j‖2 (56)\n= α\n2 t∑ j=1 21 − ( ζα− Lα 2 2 − α 2 − α 3L2(m− 1) 2N2 ) ︸ ︷︷ ︸\ndef =η= ζ − 532 4L\nt∑ j=1 E‖gk,j‖2\n=− t∑ i=1 ηE[‖gk,i‖2]− α 2 (( 1 8Lζ2 + 1 2(ζ − 532 ) ( 1 + 1 32ζ2 )))−1\n2︸ ︷︷ ︸ = 21 , (57) Eq.(56) holds due to t ≤ m− 1. If t = m− 1, then by the last Eq.(57) implies\nE[J (θ̃k)]− E[J (θ̃k−1)] ≤ − m−1∑ t=1 ( ηE[‖gk,t‖2]− α 2 21 ) . (58)\nStep 2: Analyze the outer loop of Algorithm 2\nWe now consider the output of Algorithm 2, E[J (θ̃K)]− E[J (θ̃0)] = ( E[J (θ̃1)]− E[J (θ̃0)] ) + ( E[J (θ̃2)]− E[J (θ̃1)] ) + · · ·+ ( E[J (θ̃K)]− E[J (θ̃K−1)]\n) (58)\n≤ − m−1∑ t=0 ( ηE‖g1,t‖2 − α 2 21 ) − m−1∑ t=0 ( ηE‖g2,t‖2 − α 2 21 ) − · · · −\nm−1∑ t=0 ( ηE‖gK,t‖2 − α 2 21 ) =−\nK∑ k=1 m−1∑ t=0 ( ηE‖gk,t‖2 − α 2 21 ) =−\nK∑ k=1 m−1∑ t=1 ( ηE‖gk,t‖2 ) + Kα 2 21,\nthen we have K∑ k=1 m−1∑ t=1 ( ηE‖gk,t‖2 ) ≤ E[J (θ̃0)]− J (θ∗) + K(m− 1)α 2 21. (59)\nRecall the notation in Eq.(52)\ngk,t = 1\nα (θk,t − arg min u {〈Gk,t, u〉+\n1 α Dψ(u, θk,t)}) = Gψα,〈Gk,t,u〉(θk,t),\nand we introduce following g̃(θk,t) to simplify notations,\ng̃(θk,t) = Gψα,〈−∇J(θk,t),u〉(θk,t) def = g̃k,t\n= 1\nα\n( θk,t − arg min\nu {〈−∇J(θk,t), u〉+\n1 α Dψ(u, θk,t)}\n) . (60)\nThen, the following holds\nE‖g̃k,t‖2 ≤E‖gk,t‖2 + E‖g̃k,t − gk,t‖2\n(36)\n≤ E‖gk,t‖2 + 1\nζ2 E‖∇J (θk,t)−Gk,t‖2, (61)\nEq.(61) holds due to the Eq.(36).\nLet ν be the number that is selected randomly from {1, · · · , (m − 1)K} which is the output of Algorihtm 2,for the convenience of proof the there is no harm in hypothesis that ν = k · (m− 1) + t and we denote the output θν = θk,t.\nNow, we analyze above Eq.(61) and show it is bounded as following two parts (63) and (66)\nE‖g(θν)‖2 = 1\n(m− 1)K K∑ k=1 m−1∑ t=1 E‖gk,t‖2 (59) ≤ E[J (θ̃0)]− J (θ ∗) (m− 1)Kη + α 2η 21, (62)\nwhich implies the following holds\nE‖gk,t‖2 ≤ E[J (θ̃0)]− J (θ∗)\n(m− 1)Kη +\nα 2η 21. (63)\nFor another part of Eq.(61), notice ν = k(m− 1) + t, then we have\nE‖∇J (θk,t)−Gk,t‖2 =E‖∇J (θν)−Gν‖2 (64) (45)\n≤ E\n[ L2\nN2 t∑ i=1 E‖θk,i+1 − θk,i‖2 + E[‖Gk−1,0 −∇J (θ̃k−1)‖2] ] (46)\n≤ E\n[ L2\nN2 t∑ i=1 E‖θk,i+1 − θk,i‖2 + α 2 21 ] (52) = E [ L2α2\nN2 t∑ i=1\nE‖gk,i‖2 ] + α\n2 21\nt≤m ≤ E\n[ L2α2\nN2 m−1∑ i=1\nE‖gk,i‖2 ] + α\n2 21\n≤L 2α2\nKN2 K∑ k=1 m−1∑ t=1 E‖gk,t‖2 + α 2 21 (65)\n(59) ≤ L 2α2\nKN2η\n( E[J (θ̃0)]− J (θ∗) ) + (L2α3(m− 1)\n2N2η + α 2\n) 21, (66)\nEq.(65) holds due to the fact that the probability of selecting ν = k · (m− 1) + t is less than 1K . 
Taking Eq(62) and Eq.(65) into Eq.(61), then we have the following inequity E‖g̃k,t‖2 ≤ ( 1\n(m− 1)Kη +\nL2α2\nKN2ηζ2\n)( E[J (θ̃0)]− J (θ∗) ) + (L2α3(m− 1)\n2N2ηζ2 +\nα\n2ζ2 +\nα\n2η\n) 21.\nRecall α = 14L , N1 =\n( 1 8Lζ2 + 1 2(ζ− 532 ) ( 1 + 132ζ2 )) σ2\n2 , N2 = m − 1 =√( 1\n8Lζ2 + 1 2(η− 532 )\n( 1 + 132ζ2 )) σ\n, then we have\nE‖Gα,〈−∇J(θ̃K),θ〉‖ 2 = E‖g̃k,t‖2 ≤\n4L K(m− 1)(ζ − 532 ) ( 1 + 1 16ζ2 ) (E[J (θ̃0)]− J (θ∗)) + 1 2 2.\n(67)\nFurthermore, K = 8L(1 + 116ζ2 ) (m− 1)(ζ − 532 ) · E[J (θ̃0)]− J (θ ∗) 2 , we have\nE[‖Gψ α,〈−∇J(θ̃K),θ〉(θ̃K)‖] ≤ . (68)" }, { "heading": "D EXPERIMENTS", "text": "D.1 EXPERIMENTS DETAILS OF FIGURE 2\nFor all `p, we set p ∈ [P ] = {1.1, 1.2, · · · , 1.9, 2, 3, 4, 5}, we set γ = 0.99. The learning rate is chosen by grid search from the set{0.01, 0.02, 0.04, 0.08, 0.1}. For our implementation of MPO, we use a two layer feedforward neural network of 200 and 100 hidden nodes respectively, with rectified linear units (ReLU) between each layer.\nD.2 SOME PRACTICAL TRICKS FOR THE IMPLEMENTATION OF VRMPO\nWe present the details of the practical tricks we apply to VRMPO in the following Algorithm 3.\n(I) For the complex real-world domains, we should tune necessitate meticulous hyper-parameter. In order to improve sample efficiency, we draw on the technique of Double Q-learning (Van Hasselt et al., 2016) to VRMPO.\n(II) For Algorithm 2, the update rule of policy gradient (28)/(25) is a full-return update according to R(τ), which is the expensive Monte Carlo method and it tends to learn slowly. In practice, we use the one-step actor-critic structure. Let D be the replay memory, replacing the term 1 N2 ∑N2 j=1(−g(τj |θk,t) + g(τj |θk,t−1)) (in (25)) as the following δk,t\nδk,t = 1\nN2 N2∑ i=1 (∇θLθk,t(si, ai)−∇θLθk,t−1(si, ai)), (69)\nwhere Lθ(s, a) = − log πθ(s, a)Qω(s, a) is the training loss of actor, {(si, ai)}Ni=1 ∼ D, Qω(s, a) is an estimate of action-value that can be trained to minimize the loss of critic\nLω = 1\nN N∑ i=1 (Qωt−1(si, ai)−Qω(si, ai))2. (70)\nMore details of implementation are provided in the following Algorithm 3.\nIn this section, we use `p as the mirror map.\nAlgorithm 3 On-line VRMPO Initialize: Policy πθ(a|s) with parameter θ̃0, mirror map ψ,step-size α > 0, epoch size K,m. Initialize: Parameter ω̃j0, j = 1, 2 ,0 < κ < 1 . for k = 1 to K do\nfor each domain step do at ∼ πθ̃k−1(·|st) st+1 ∼ P (·|st, at) D = D ∪ {(st, at, rt, st+1)} end for sample mini-batch {(si, ai)}Ni=1 ∼ D θk,0 = θ̃k−1, ωk,0 = ω̃ j k−1, j = 1, 2 Lθ(s, a) = − log πθ(s, a) ( min j=1,2\nQωjk−1 (s, a))︸ ︷︷ ︸\nDouble Q-Learning (Van Hasselt et al., 2016) θk,1 = θk,0 − αkGk,0, where Gk,0 = 1N ∑N i=1∇θLθ(si, ai) ∣∣∣ θ=θk,0 for t = 1 to m− 1 do /* Update Actor (m− 1) Epochs */\nsample mini-batch {(si, ai)}Ni=1 ∼ D\nδk,t = 1\nN N∑ i=1 ∇θLθ(si, ai) ∣∣∣ θ=θk,t − 1 N N∑ i=1 ∇θLθ(si, ai) ∣∣∣ θ=θk,t−1\n(71)\nGk,t = δk,t +Gk,t−1 (72)\nθk,t+1 = arg min u {〈Gk,t, u〉+\n1\nαk Dψ(u, θk,t)} (73)\nend for for t = 1 to m− 1 do\n/* Update Critic (m− 1) Epochs */ sample mini-batch {(si, ai)}Ni=1 ∼ D\nLωjk−1,t−1 (ω) =\n1\nN N∑ i=1 (Qωjk−1,t−1 (si, ai)−Qω(si, ai))2, j = 1, 2 (74)\nωjk,t = arg minω Lωjk−1,t−1 (ω), j = 1, 2 (75)\nend for θ̃k def = θk,m−1, ω̃ j k def = ωjk,m−1, j = 1, 2 /* Soft Update */ θ̃k ← κθ̃k−1 + (1− κ)θ̃k ω̃jk ← κω̃ j k−1 + (1− κ)ω̃ j k, j = 1, 2\nend for\nD.3 TEST SCORE COMPARISON\nWe compare the VRMPO with baseline algorithm on test score. 
All the results are shown in Figure 6.

D.4 MAX-RETURN COMPARISON

We compare VRMPO with the baseline algorithms on max-return. All the results are shown in Figure 7.

(Figures 6 and 7: average test-return curves over training epochs for VRMPO, DDPG, PPO, TD3, and TRPO on (a) Walker2d-v2, (b) HalfCheetah-v2, and (c) Reacher-v2.)

D.5 DETAILS OF BASELINE IMPLEMENTATION

For all algorithms, we set γ = 0.99. For VRMPO, the learning rate is chosen by grid search from the set {0.1, 0.01, 0.004, 0.008}, with batch size N = 100, memory size |D| = 10^6, and 5000 iterations per epoch.

DDPG For our implementation of DDPG, we use a two-layer feedforward neural network of 400 and 300 hidden nodes respectively, with rectified linear units (ReLU) between each layer for both the actor and critic, and a final tanh unit following the output of the actor. This implementation is largely based on the recent work of Fujimoto et al. (2018).

TD3 For our implementation of TD3, we refer to the work of Fujimoto et al. (2018) and https://github.com/sfujim/TD3. We excerpt some necessary details about the implementation of TD3 (Fujimoto et al., 2018). TD3 maintains a pair of critics along with a single actor. At each time step, we update the pair of critics towards the minimum target value of actions selected by the target policy:
$$y = r + \gamma \min_{i=1,2} Q_{\theta'_i}\big(s', \pi_{\phi'}(s') + \epsilon\big), \qquad \epsilon \sim \mathrm{clip}(\mathcal{N}(0,\sigma), -c, c).$$
Every d iterations, the policy is updated with respect to $Q_{\theta_1}$ following the deterministic policy gradient algorithm. Target policy smoothing is implemented by adding $\epsilon \sim \mathcal{N}(0, 0.2)$ to the actions chosen by the target actor network, clipped to (−0.5, 0.5); delayed policy updates consist of updating the actor and target critic network only every d iterations, with d = 2. While a larger d would yield a larger benefit with respect to accumulating errors, for fair comparison the critics are only trained once per time step, and training the actor for too few iterations would cripple learning. Both target networks are updated with τ = 0.005.

TRPO and PPO For the implementations of TRPO and PPO, we refer to https://github.com/openai/baselines/tree/master/baselines and https://spinningup.openai.com/en/latest/algorithms/trpo.html." } ]
2019
null
SP:24fb2650085abd5599f3dcd187a62a514608423a
[ "This paper aims at improving the computational cost of variance reduction methods while preserving their benefits regarding the fast provable convergence. The existing variance reduction based methods suffer from higher per-iteration gradient query complexity as compared to the vanilla mini-batch SGD, which limits their utility in many practical settings. This paper notices that, for many models, as the training progresses the gradient vectors start exhibiting structure in the sense that only a small number of coordinates have large magnitude. Based on this observation, the paper proposes a modified variance reduction method (by modifying the SpiderBoost method), where a 'memory vector' keeps track of the coordinates of the gradient vectors with large variance. Let $d$ be the size of the model parameter. During each iteration, one computes the gradient for $k_1$ coordinates with the highest variance (according to the memory vector) and an additional $k_2$ random coordinates. ", "The author(s) provide a method which combines some property of SCGS method and SpiderBoost. Theoretical results are provided and achieve the state-of-the-art complexity, which match the one of SpiderBoost. Numerical experiments show some advantage compared to SpiderBoost on some deep neural network architecture for some standard datasets MNIST, SVHN, and CIFAR-10. " ]
Variance reduction methods such as SVRG (Johnson & Zhang, 2013) and SpiderBoost (Wang et al., 2018) use a mixture of large and small batch gradients to reduce the variance of stochastic gradients. Compared to SGD (Robbins & Monro, 1951), these methods require at least double the number of operations per update to model parameters. To reduce the computational cost of these methods, we introduce a new sparsity operator: The random-top-k operator. Our operator reduces computational complexity by estimating gradient sparsity exhibited in a variety of applications by combining the top-k operator (Stich et al., 2018; Aji & Heafield, 2017) and the randomized coordinate descent operator. With this operator, large batch gradients offer an extra benefit beyond variance reduction: A reliable estimate of gradient sparsity. Theoretically, our algorithm is at least as good as the best algorithm (SpiderBoost), and further excels in performance whenever the random-top-k operator captures gradient sparsity. Empirically, our algorithm consistently outperforms SpiderBoost using various models on various tasks including image classification, natural language processing, and sparse matrix factorization. We also provide empirical evidence to support the intuition behind our algorithm via a simple gradient entropy computation, which serves to quantify gradient sparsity at every iteration.
[ { "affiliations": [], "name": "SPARSE GRADIENTS" }, { "affiliations": [], "name": "Melih Elibol" }, { "affiliations": [], "name": "Michael I. Jordan" }, { "affiliations": [], "name": "Lihua Lei" } ]
[ { "authors": [ "cent Vanhoucke", "Vijay Vasudevan", "Fernanda Viégas", "Oriol Vinyals", "Pete Warden", "Martin Wattenberg", "Martin Wicke", "Yuan Yu", "Xiaoqiang Zheng" ], "title": "TensorFlow: Large-scale machine learning on heterogeneous systems", "venue": null, "year": 2015 }, { "authors": [ "Alham Fikri Aji", "Kenneth Heafield" ], "title": "Sparse communication for distributed gradient descent", "venue": "CoRR, abs/1704.05021,", "year": 2017 }, { "authors": [ "Zeyuan Allen-Zhu" ], "title": "Katyusha: The first direct acceleration of stochastic gradient methods", "venue": "In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing,", "year": 2017 }, { "authors": [ "Zeyuan Allen-Zhu" ], "title": "Katyusha x: Practical momentum method for stochastic sum-of-nonconvex optimization", "venue": "arXiv preprint arXiv:1802.03866,", "year": 2018 }, { "authors": [ "Zeyuan Allen-Zhu", "Elad Hazan" ], "title": "Variance reduction for faster non-convex optimization", "venue": "ArXiv e-prints", "year": 2016 }, { "authors": [ "Zeyuan Allen-Zhu", "Yang Yuan" ], "title": "Improved SVRG for non-strongly-convex or sum-of-non-convex objectives", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Yves Chauvin", "David E. Rumelhart (eds" ], "title": "Backpropagation: Theory, Architectures, and Applications. L", "venue": "Erlbaum Associates Inc., Hillsdale, NJ, USA,", "year": 1995 }, { "authors": [ "Aaron Defazio" ], "title": "On the ineffectiveness of variance reduced optimization for deep learning, 2019", "venue": "URL https://openreview.net/forum?id=B1MIBs05F7", "year": 2019 }, { "authors": [ "Aaron Defazio", "Francis Bach", "Simon Lacoste-Julien" ], "title": "SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "John Duchi", "Elad Hazan", "Yoram Singer" ], "title": "Adaptive subgradient methods for online learning and stochastic optimization", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Cong Fang", "Chris Junchi Li", "Zhouchen Lin", "Tong Zhang" ], "title": "Spider: Near-optimal non-convex optimization via stochastic path-integrated differential estimator", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "F. Maxwell Harper", "Joseph A. Konstan" ], "title": "The movielens datasets: History and context", "venue": "ACM Trans. Interact. Intell. Syst.,", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "CoRR, abs/1512.03385,", "year": 2015 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Rie Johnson", "Tong Zhang" ], "title": "Accelerating stochastic gradient descent using predictive variance reduction", "venue": "In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 1,", "year": 2013 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Alex Krizhevsky", "Vinod Nair", "Geoffrey Hinton" ], "title": "Cifar-10 (canadian institute for advanced research). 
URL http://www.cs.toronto.edu/~kriz/cifar.html", "venue": null, "year": null }, { "authors": [ "Yann LeCun", "Corinna Cortes" ], "title": "MNIST handwritten digit database", "venue": null, "year": 2010 }, { "authors": [ "Lihua Lei", "Michael Jordan" ], "title": "Less than a Single Pass: Stochastically Controlled Stochastic Gradient", "venue": "In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics,", "year": 2017 }, { "authors": [ "Lihua Lei", "Michael I Jordan" ], "title": "On the adaptivity of stochastic gradient-based optimization", "venue": "arXiv preprint arXiv:1904.04480,", "year": 2019 }, { "authors": [ "Lihua Lei", "Cheng Ju", "Jianbo Chen", "Michael I Jordan" ], "title": "Non-convex finite-sum optimization via SCSG methods", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Mitchell Marcus", "Grace Kim", "Mary Ann Marcinkiewicz", "Robert MacIntyre", "Ann Bies", "Mark Ferguson", "Karen Katz", "Britta Schasberger" ], "title": "The penn treebank: Annotating predicate argument structure", "venue": "In Proceedings of the Workshop on Human Language Technology,", "year": 1994 }, { "authors": [ "David R Musser" ], "title": "Introspective sorting and selection algorithms", "venue": "Software: Practice and Experience,", "year": 1997 }, { "authors": [ "Arkadi Nemirovski", "Anatoli Juditsky", "Guanghui Lan", "Alexander Shapiro" ], "title": "Robust stochastic approximation approach to stochastic programming", "venue": "SIAM Journal on optimization,", "year": 2009 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y. Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": "In NIPS Workshop on Deep Learning and Unsupervised Feature Learning", "year": 2011 }, { "authors": [ "Lam M Nguyen", "Marten van Dijk", "Dzung T Phan", "Phuong Ha Nguyen", "Tsui-Wei Weng", "Jayant R Kalagnanam" ], "title": "Optimal finite-sum smooth non-convex optimization with sarah", "venue": null, "year": 2019 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in pytorch", "venue": null, "year": 2017 }, { "authors": [ "Nhan H Pham", "Lam M Nguyen", "Dzung T Phan", "Quoc Tran-Dinh" ], "title": "Proxsarah: An efficient algorithmic framework for stochastic composite nonconvex optimization", "venue": null, "year": 2019 }, { "authors": [ "Sashank J Reddi", "Ahmed Hefny", "Suvrit Sra", "Barnabas Poczos", "Alex Smola" ], "title": "Stochastic variance reduction for nonconvex optimization", "venue": "arXiv preprint arXiv:1603.06160,", "year": 2016 }, { "authors": [ "Sashank J Reddi", "Suvrit Sra", "Barnabás Póczos", "Alex Smola" ], "title": "Fast incremental method for nonconvex optimization", "venue": "arXiv preprint arXiv:1603.06159,", "year": 2016 }, { "authors": [ "Steffen Rendle", "Christoph Freudenthaler", "Zeno Gantner", "Lars Schmidt-Thieme" ], "title": "Bpr: Bayesian personalized ranking from implicit feedback", "venue": "In Proceedings of the twenty-fifth conference on uncertainty in artificial intelligence,", "year": 2009 }, { "authors": [ "H. Robbins", "S.
Monro" ], "title": "A stochastic approximation method", "venue": "Annals of Mathematical Statistics,", "year": 1951 }, { "authors": [ "Nicolas Le Roux", "Mark Schmidt", "Francis Bach" ], "title": "A stochastic gradient method with an exponential convergence rate for finite training sets", "venue": "In Advances in Neural Information Processing Systems,", "year": 2012 }, { "authors": [ "Sebastian U. Stich", "Jean-Baptiste Cordonnier", "Martin Jaggi" ], "title": "Sparsified SGD with memory", "venue": "CoRR, abs/1809.07599,", "year": 2018 }, { "authors": [ "Zhe Wang", "Kaiyi Ji", "Yi Zhou", "Yingbin Liang", "Vahid Tarokh" ], "title": "Spiderboost: A class of faster variance-reduced algorithms for nonconvex optimization", "venue": "CoRR, abs/1810.10690,", "year": 2018 }, { "authors": [ "Lin Xiao", "Tong Zhang" ], "title": "A proximal stochastic gradient method with progressive variance reduction", "venue": "SIAM Journal on Optimization,", "year": 2014 }, { "authors": [ "Dongruo Zhou", "Pan Xu", "Quanquan Gu" ], "title": "Stochastic nested variance reduction for nonconvex optimization", "venue": "In Proceedings of the 32nd International Conference on Neural Information Processing Systems,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Optimization tools for machine learning applications seek to minimize the finite sum objective\nmin x∈Rd\nf(x) , 1\nn n∑ i=1 fi(x), (1)\nwhere x is a vector of parameters, and fi : Rd → R is the loss associated with sample i. Batch SGD serves as the prototype for modern stochastic gradient methods. It updates the iterate x with x− η∇fI(x), where η is the learning rate and∇fI(x) is the batch stochastic gradient, i.e.\n∇fI(x) = 1 |I| ∑ i∈I ∇fi(x).\nThe batch size |I| in batch SGD directly impacts the stochastic variance and gradient query complexity of each iteration of the update rule.\nIn recent years, variance reduction techniques have been proposed by carefully blending large and small batch gradients (e.g. Roux et al., 2012; Johnson & Zhang, 2013; Defazio et al., 2014; Xiao & Zhang, 2014; Allen-Zhu & Yuan, 2016; Allen-Zhu & Hazan, 2016; Reddi et al., 2016a;b; Allen-Zhu, 2017; Lei & Jordan, 2017; Lei et al., 2017; Allen-Zhu, 2018b; Fang et al., 2018; Zhou et al., 2018; Wang et al., 2018; Pham et al., 2019; Nguyen et al., 2019; Lei & Jordan, 2019). They are alternatives to batch SGD and are provably better than SGD in various settings. While these methods allow for greater learning rates than batch SGD and have appealing theoretical guarantees, they require a per-iteration query complexity which is more than double than that of batch SGD. Defazio (2019) questions the utility of variance reduction techniques in modern machine learning problems, empirically identifying query complexity as one issue. In this paper, we show that gradient sparsity (Aji & Heafield, 2017) can be used to significantly reduce the query complexity of variance reduction methods. Our work is motivated by the observation that gradients tend to be ”sparse,” having only\na small fraction of large coordinates. Specifically, if the indices of large gradient coordinates (measured in absolute value) are known before updating model parameters, we compute the derivative of only those coordinates while setting the remaining gradient coordinates to zero. In principle, if sparsity is exhibited, using large gradient coordinates will not effect performance and will significantly reduce the number of operations required to update model parameters. Nevertheless, this heuristic alone has three issues: (1) bias is introduced by setting other entries to zero; (2) the locations of large coordinates are typically unknown; (3) accessing a subset of coordinates may not be easily implemented for some problems like deep neural networks.\nWe provide solutions for all three issues. First, we introduce a new sparse gradient operator: The random-top-k operator. The random-top-k operator is a composition of the randomized coordinate descent operator and the top-k operator. In prior work, the top-k operator has been used to reduce the communication complexity of distributed optimization (Stich et al., 2018; Aji & Heafield, 2017) applications. The random-top-k operator has two phases: Given a stochastic gradient and a pair of integers (k1, k2) that sum to k, the operator retains k1 coordinates which are most ”promising” in terms of their ”likelihood” to be large on average, then randomly selects k2 of the remaining coordinates with appropriate rescaling. The first phase captures sparsity patterns while the second phase eliminates bias. Second, we make use of large batch gradients in variance reduction methods to estimate sparsity patterns. 
Inspired by the use of a memory vector in Aji & Heafield (2017), the algorithm maintains a memory vector that is initialized with the absolute value of the large-batch gradient at the beginning of each outer loop and updated by taking an exponential moving average over subsequent stochastic gradients. Coordinates with large values in the memory vector are more “promising,” and the random-top-k operator picks the top k1 coordinate indices based on the memory vector. Since larger batch gradients have lower variance, the initial estimate is quite accurate. Finally, for software that supports dynamic computation graphs, we provide a cost-effective way (sparse back-propagation) to implement the random-top-k operator.

In this work we apply the random-top-k operator to SpiderBoost (Wang et al., 2018), a recent variance reduction method that achieves optimal query complexity, with a slight modification based on the “geometrization” technique introduced by Lei & Jordan (2019). Theoretically, we show that our algorithm is never worse than SpiderBoost and can strictly outperform it when the random-top-k operator captures gradient sparsity. Empirically, we demonstrate the improvements in computation for various tasks including image classification, natural language processing, and sparse matrix factorization.

The rest of the paper is organized as follows. In Section 2, we define the random-top-k operator, our optimization algorithm, and sparse back-propagation. The theoretical analyses are presented in Section 3, followed by experimental results in Section 4. All technical proofs are relegated to Appendix A, and additional experimental details can be found in Appendix B." }, { "heading": "2 STOCHASTIC VARIANCE REDUCTION WITH SPARSE GRADIENTS", "text": "Generally, variance reduction methods reduce the variance of stochastic gradients by taking a snapshot ∇f(y) of the gradient every m steps of optimization, and using the gradient information in this snapshot to reduce the variance of subsequent small-batch gradients $\nabla f_I(x)$ (Johnson & Zhang, 2013; Wang et al., 2018). Methods such as SCSG (Lei & Jordan, 2017) instead use a large-batch gradient, typically some multiple of the small batch size b, which is much more practical and is what we do in this paper. To reduce the cost of computing additional gradients, we exploit sparsity by computing only k of the d gradient coordinates. For $d, k, k_1, k_2 \in \mathbb{Z}^{+}$, let $k = k_1 + k_2$, where $1 \le k \le d$ for a parametric model of dimension d. In what follows, we define an operator that takes vectors x, y and outputs y′, where y′ retains only k of the entries in y: $k_1$ of them are selected according to the coordinates of x with the $k_1$ largest absolute values, and the remaining $k_2$ entries are randomly selected from y. The $k_1$ coordinate indices and the $k_2$ coordinate indices are disjoint. Formally, the operator $\mathrm{rtop}_{k_1,k_2}: \mathbb{R}^{d} \to \mathbb{R}^{d}$ is defined for $x, y \in \mathbb{R}^{d}$ as
$$\big(\mathrm{rtop}_{k_1,k_2}(x,y)\big)_{\ell} = \begin{cases} y_{\ell} & \text{if } k_1 > 0 \text{ and } |x|_{\ell} \ge |x|_{(k_1)}, \\ \frac{d-k_1}{k_2}\, y_{\ell} & \text{if } \ell \in S, \\ 0 & \text{otherwise,} \end{cases}$$
where |x| denotes the vector of absolute values, $|x|_{(1)} \ge |x|_{(2)} \ge \dots \ge |x|_{(d)}$ denote the order statistics of the coordinates of x in absolute value, and S denotes a random subset of size $k_2$ drawn uniformly from the set $\{\ell : |x|_{\ell} < |x|_{(k_1)}\}$. For instance, if x = (11, 12, 13, −14, −15), y = (−25, −24, 13, 12, 11) and $k_1 = k_2 = 1$, then S is a singleton drawn uniformly from {1, 2, 3, 4}. Suppose S = {2}; then $\mathrm{rtop}_{1,1}(x, y) = (0, 4y_2, 0, 0, y_5) = (0, −96, 0, 0, 11)$.
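A direct NumPy implementation of this operator (our sketch, not the authors' code) makes the two phases explicit: deterministic selection of the k1 largest-|x| coordinates via linear-time partial selection, followed by uniform sampling of k2 of the remaining coordinates with the (d − k1)/k2 rescaling:

```python
import numpy as np

def rtop(x, y, k1, k2, rng=np.random.default_rng()):
    """random-top-k: keep y on the k1 coordinates where |x| is largest, plus
    k2 uniformly chosen remaining coordinates, rescaled by (d - k1) / k2."""
    d = x.shape[0]
    out = np.zeros_like(y)
    if k1 > 0:
        # linear-time selection of the k1 largest |x| coordinates
        top = np.argpartition(-np.abs(x), k1 - 1)[:k1]
        out[top] = y[top]
    else:
        top = np.array([], dtype=int)
    rest = np.setdiff1d(np.arange(d), top, assume_unique=True)
    S = rng.choice(rest, size=k2, replace=False)
    out[S] = y[S] * (d - k1) / k2
    return out

x = np.array([11.0, 12.0, 13.0, -14.0, -15.0])
y = np.array([-25.0, -24.0, 13.0, 12.0, 11.0])
print(rtop(x, y, k1=1, k2=1))  # e.g. [0, -96, 0, 0, 11] when S = {2}
```

Averaging rtop(x, y, k1, k2) over many draws of S recovers y, matching the unbiasedness statement in Lemma 1 below.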
If k1 + k2 = d, rtopk1,k2(x, y) = y. On the other hand, if k1 = 0, rtop0,k2(x, y) does not depend on x and returns a rescaled random subset of y. This is the operator used in coordinate descent methods. Finally, rtopk1,k2(x, y) is linear in y. The following Lemma shows that rtopk1,k2(x, y) is an unbiased estimator of y, which is a crucial property in our later analysis. Lemma 1. Given any x, y ∈ Rd,\nE ( rtopk1,k2(x, y) ) = y, Var ( rtopk1,k2(x, y) ) = d− k1 − k2\nk2 ‖ top−k1(x, y)‖ 2,\nwhere E is taken over the random subset S involved in the rtopk1,k2 operator and\n(top−k1(x, y))` = { y` if k1 > 0 and |x|` < |x|(k1) 0 otherwise.\nOur algorithm is detailed as below.\nAlgorithm 1: SpiderBoost with Sparse Gradients. Input: Learning rate η, inner loop size m, outer loop size T , large batch size B, small batch size b,\ninitial iterate x0, memory decay factor α, sparsity parameters k1, k2. 1 I0 ∼ Unif({1, . . . , n}) with |I0| = B 2 M0 := |∇fI0(x0)| 3 for j = 1, ..., T do 4 x\n(j) 0 := xj−1, M (j) 0 := Mj−1\n5 Ij ∼ Unif({1, . . . , n}) with |Ij | = B 6 ν\n(j) 0 := ∇fIj (x (j) 0 )\n7 Nj := m (for implementation) or Nj ∼ geometric distribution with mean m (for theory) 8 for t = 0, . . . , Nj − 1 do 9 x\n(j) t+1 := x (j) t − ην (j) t\n10 I(j)t ∼ Unif([n]) with |I (j) t | = b 11 ν (j) t+1 := ν (j) t + rtopk1,k2 ( M (j) t ,∇fI(j)t (x (j) t+1)−∇fI(j)t (x (j) t ) ) 12 M (j) t+1 := α|ν (j) t+1|+ (1− α)M (j) t\n13 xj := x (j) Nj , Mj := M (j) Nj\nOutput: xout = xT (for implementation) or xout = xT ′ where T ′ ∼ Unif([T ]) (for theory)\nThe algorithm includes an outer-loop and an inner-loop. In the theoretical analysis, we generate Nj as Geometric random variables. This trick is called ”geometrization”, proposed by Lei & Jordan (2017) and dubbed by Lei & Jordan (2019). It greatly simplifies analysis (e.g. Lei et al., 2017; Allen-Zhu, 2018a). In practice, as observed by Lei et al. (2017), setting Nj to m does not impact performance in any significant way. We only use ”geometrization” in our theoretical analysis for clarity. Similarly, for our theoretical analysis, the output of our algorithm is selected uniformly at random from the set of outer loop iterations. Like the use of average iterates in convex optimization, this is a common technique for nonconvex optimization proposed by Nemirovski et al. (2009). In practice, we simply use the last iterate.\nSimilar to Aji & Heafield (2017), we maintain a memory vector M (j)t at each iteration of our algorithm. The memory vector is initialized to the large batch gradient computed before every pass through the inner loop, which provides a relatively accurate gradient sparsity estimate of x(j)0 . The exponential moving average gradually incorporates information from subsequent small batch gradients to account for changes to gradient sparsity. We then use M (j)t as an approximation to the variance of each gradient coordinate in our rtopk1,k2 operator. With M (j) t as input, the rtopk1,k2\noperator targets k1 high variance gradient coordinates in addition to k2 randomly selected coordinates.\nThe cost of invoking rtopk1,k2 is dominated by the algorithm for selecting the top k coordinates, which has linear worst case complexity when using the introselect algorithm (Musser, 1997)." }, { "heading": "2.1 SPARSE BACK-PROPAGATION", "text": "A weakness of our method is the technical difficulty of implementing a sparse backpropagation algorithm in modern machine learning libraries, such as Tensorflow (Abadi et al., 2015) and Pytorch (Paszke et al., 2017). 
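Before describing those difficulties in detail, the intended semantics of Algorithm 1's inner loop can be stated in a few lines of dense NumPy (our sketch; grad_batch is an assumed oracle returning a mini-batch gradient, and rtop is the snippet from above). This dense form materializes full gradient differences and then sparsifies them, so it captures the algorithm's behavior but none of its query savings; the savings require computing only the selected k1 + k2 coordinates, which is exactly the implementation difficulty at issue here. Following the prose description above, the memory vector is re-initialized from the large-batch gradient at the start of each outer loop.

```python
import numpy as np

def sparse_spiderboost_epoch(x, grad_batch, n, eta, m, B, b, k1, k2, alpha,
                             rng=np.random.default_rng()):
    """One outer loop (lines 4-13) of Algorithm 1, in dense form.
    grad_batch(x, idx): assumed oracle for (1/|idx|) * sum_{i in idx} grad f_i(x)."""
    I = rng.choice(n, size=B, replace=False)
    nu = grad_batch(x, I)                         # large-batch gradient (line 6)
    M = np.abs(nu)                                # memory vector (cf. lines 2, 4)
    for _ in range(m):                            # N_j = m in the implementation
        x_next = x - eta * nu                     # line 9
        It = rng.choice(n, size=b, replace=False)
        diff = grad_batch(x_next, It) - grad_batch(x, It)
        nu = nu + rtop(M, diff, k1, k2, rng)      # line 11
        M = alpha * np.abs(nu) + (1 - alpha) * M  # line 12
        x = x_next
    return x, M
```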
" }, { "heading": "2.1 SPARSE BACK-PROPAGATION", "text": "A weakness of our method is the technical difficulty of implementing a sparse back-propagation algorithm in modern machine learning libraries, such as Tensorflow (Abadi et al., 2015) and Pytorch (Paszke et al., 2017). Models implemented in these libraries generally assume densely structured parameters. The optimal implementation of our algorithm makes use of a sparse forward pass and assumes a sparse computation graph upon which back-propagation is executed. Libraries that support dynamic computation graphs, such as Pytorch, construct the sparse computation graph in the forward pass, which makes the required sparse back-propagation trivial. We therefore expect our algorithm to perform quite well on libraries that support dynamic computation graphs.
Consider the forward pass of a deep neural network, where $\phi$ is a deep composition of parametric functions,
$$\phi(x; \theta) = \phi_L(\phi_{L-1}(\ldots \phi_0(x; \theta_0) \ldots; \theta_{L-1}); \theta_L). \quad (2)$$
The unconstrained problem of minimizing over the $\theta_\ell$ can be rewritten as a constrained optimization problem as follows:
$$\min_\theta \frac{1}{n} \sum_{i=1}^n \mathrm{loss}(z^{(L+1)}_i, y_i) \quad \text{s.t.} \quad z^{(L+1)}_i = \phi_L(z^{(L)}_i; \theta_L), \;\ldots,\; z^{(\ell+1)}_i = \phi_\ell(z^{(\ell)}_i; \theta_\ell), \;\ldots,\; z^{(1)}_i = \phi_0(x_i; \theta_0). \quad (3)$$
In this form, $z^{(L+1)}_i$ is the model estimate for data point $i$. Let $\phi_\ell(x; \theta_\ell) = \sigma(x^\top \theta_\ell)$ for $1 \le \ell < L$, let $\phi_L$ be the output layer, and let $\sigma$ be some subdifferentiable activation function. If we apply the $\mathrm{rtop}_{k_1,k_2}$ operator per layer in the forward pass, with appropriate scaling of $k_1$ and $k_2$ to account for depth, the number of multiplications in the forward pass is reduced to $k_1 + k_2$: $\sigma(\mathrm{rtop}_{k_1,k_2}(v, x)^\top \mathrm{rtop}_{k_1,k_2}(v, \theta_\ell))$. A sparse forward pass yields a computation graph for a $(k_1 + k_2)$-parameter model, and back-propagation will compute the gradient of the objective with respect to the model parameters in linear time (Chauvin & Rumelhart, 1995). A simplified sketch of such a sparse layer is given below.
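Continuing the NumPy sketches above, here is a deliberately simplified version in which only the layer input is sparsified using the memory estimate v; the paper's scheme sparsifies both the activations and the weights, so this is an assumption-laden approximation rather than the actual implementation:

def sparse_layer(x, W, v, k1, k2, rng):
    # Sparsify the input with the memory estimate v, so the matrix product
    # touches only the k1 + k2 active rows of W; back-propagation through
    # the resulting graph then costs time linear in the active support.
    x_sparse = rtop(v, x, k1, k2, rng)
    idx = np.flatnonzero(x_sparse)
    return np.maximum(x_sparse[idx] @ W[idx, :], 0.0)  # ReLU activation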
" }, { "heading": "3 THEORETICAL COMPLEXITY ANALYSIS", "text": "" }, { "heading": "3.1 NOTATION AND ASSUMPTIONS", "text": "Denote by $\|\cdot\|$ the Euclidean norm and by $a \wedge b$ the minimum of $a$ and $b$. For a random vector $Y \in \mathbb{R}^d$, $\mathrm{Var}(Y) = \sum_{i=1}^d \mathrm{Var}(Y_i)$. We say a random variable $N$ has a geometric distribution, $N \sim \mathrm{Geom}(m)$, if $N$ is supported on the non-negative integers with
$$P(N = k) = \gamma^k (1 - \gamma), \quad \forall k = 0, 1, \ldots,$$
for the $\gamma$ such that $\mathbb{E}N = m$. Here we allow $N$ to be zero to facilitate the analysis.
Assumption A1 on the smoothness of the individual functions will be made throughout the paper.
A1: $f_i$ is differentiable with $\|\nabla f_i(x) - \nabla f_i(y)\| \le L \|x - y\|$ for some $L < \infty$ and for all $i \in \{1, \ldots, n\}$.
As a direct consequence of assumption A1, it holds for any $x, y \in \mathbb{R}^d$ that
$$-\frac{L}{2}\|x - y\|^2 \le f_i(x) - f_i(y) - \langle \nabla f_i(y), x - y \rangle \le \frac{L}{2}\|x - y\|^2. \quad (4)$$
To formulate our complexity bounds, we define $f^* = \inf_x f(x)$ and $\Delta_f = f(x_0) - f^*$. Further, we define $\sigma^2$ as an upper bound on the expected norm of the stochastic gradients:
$$\sigma^2 = \sup_x \frac{1}{n} \sum_{i=1}^n \|\nabla f_i(x)\|^2. \quad (5)$$
By the Cauchy-Schwarz inequality, it is easy to see that $\sigma^2$ is also a uniform bound on $\|\nabla f(x)\|^2$. Finally, we assume that sampling an index $i$ and accessing the gradient $\nabla f_i(x)$ incur a unit of cost, while accessing the truncated version $\mathrm{rtop}_{k_1,k_2}(m, \nabla f_i(x))$ incurs $(k_1 + k_2)/d$ units of cost. Note that calculating $\mathrm{rtop}_{k_1,k_2}(m, \nabla f_I(x))$ incurs $|I|(k_1 + k_2)/d$ units of computational cost. Given our framework, the theoretical complexity of the algorithm is
$$C_{\mathrm{comp}}(\epsilon) \triangleq \sum_{j=1}^T \Big(B + 2bN_j \frac{k_1 + k_2}{d}\Big). \quad (6)$$" }, { "heading": "3.2 WORST-CASE GUARANTEE", "text": "Theorem 1. Set the parameters as
$$\eta L = \sqrt{\frac{k_2}{6dm}}, \qquad B = \Big\lceil \frac{2\sigma^2}{\epsilon^2} \wedge n \Big\rceil.$$
Then for any $T \ge T(\epsilon) \triangleq 4\Delta_f / (\eta m \epsilon^2)$, we have $\mathbb{E}\|\nabla f(x_{\mathrm{out}})\| \le \epsilon$. If we further set $m = Bd / (b(k_1 + k_2))$, the complexity to achieve the above condition is
$$\mathbb{E} C_{\mathrm{comp}}(\epsilon) = O\Big(\Big(\frac{\sigma}{\epsilon^3} \wedge \frac{\sqrt{n}}{\epsilon^2}\Big) L \Delta_f \sqrt{\frac{b(k_1 + k_2)}{k_2}}\Big).$$
Recall that the complexity of SpiderBoost (Wang et al., 2018) is $O\big((\sigma/\epsilon^3 \wedge \sqrt{n}/\epsilon^2)\, L \Delta_f\big)$. Thus, as long as $b = O(1)$ and $k_1 = O(k_2)$, our algorithm has the same complexity as SpiderBoost under appropriate settings. The penalty term $O(\sqrt{b(k_1 + k_2)/k_2})$ is due to the information loss caused by sparsification." }, { "heading": "3.3 DATA ADAPTIVE ANALYSIS", "text": "Let $g^{(j)}_t = \|\mathrm{top}_{-k_1}(M^{(j)}_t, \nabla f(x^{(j)}_{t+1}) - \nabla f(x^{(j)}_t))\|^2$ and
$$G^{(j)}_t = \frac{1}{n} \sum_{i=1}^n \|\mathrm{top}_{-k_1}(M^{(j)}_t, \nabla f_i(x^{(j)}_{t+1}) - \nabla f_i(x^{(j)}_t))\|^2.$$
By the Cauchy-Schwarz inequality and the linearity of $\mathrm{top}_{-k_1}$, it is easy to see that $g^{(j)}_t \le G^{(j)}_t$. If our algorithm succeeds in capturing sparsity, both $g^{(j)}_t$ and $G^{(j)}_t$ will be small; in this subsection we analyze the complexity in that case. Further define $R_j$ as
$$R_j = \mathbb{E}_j g^{(j)}_{N_j} + \frac{\mathbb{E}_j G^{(j)}_{N_j}}{b}, \quad (7)$$
where $\mathbb{E}_j$ is taken over all randomness in the $j$-th outer loop (lines 4-13 of Algorithm 1).
Theorem 2. Set the parameters as
$$\eta L = \frac{\sqrt{b \wedge m}}{\sqrt{3}\, m}, \qquad B = \Big\lceil \frac{3\sigma^2}{\epsilon^2} \wedge n \Big\rceil.$$
Then for any $T \ge T(\epsilon) \triangleq 6\Delta_f / (\eta m \epsilon^2)$,
$$\mathbb{E}\|\nabla f(x_{\mathrm{out}})\|^2 \le \frac{2\epsilon^2}{3} + \frac{(d - k_1 - k_2)m}{k_2}\, \mathbb{E}\bar{R}_T, \qquad \text{where } \bar{R}_T = \frac{1}{T} \sum_{j=1}^T R_j.$$
If $\mathbb{E}\bar{R}_T \le \frac{\epsilon^2 k_2}{3(d - k_1 - k_2)m}$, then $\mathbb{E}\|\nabla f(x_{\mathrm{out}})\| \le \epsilon$. If we further set $m = Bd / (b(k_1 + k_2))$, the complexity to achieve the above condition is
$$\mathbb{E} C_{\mathrm{comp}}(\epsilon) = O\Big(\Big(\frac{\sigma}{\epsilon^3} \wedge \frac{\sqrt{n}}{\epsilon^2}\Big) L \Delta_f \sqrt{\frac{k_1 + k_2}{d} \cdot \frac{b}{b \wedge m}}\Big).$$
In practice, $m$ is usually much larger than $b$. As a result, the complexity of our algorithm is a factor of $O(\sqrt{(k_1 + k_2)/d})$ smaller than that of SpiderBoost whenever our algorithm captures gradient sparsity. Although this type of data-adaptive analysis is not as clean as the worst-case guarantee (Theorem 1), it reveals the potentially superior performance of our algorithm. Similar analyses have been carried out for various other algorithms, including AdaGrad (Duchi et al., 2011) and Adam (Kingma & Ba, 2014)." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we present a variety of experiments to illustrate gradient sparsity and demonstrate the performance of Sparse SpiderBoost. By computing the entropy of the empirical distribution of the absolute values of the stochastic gradient coordinates, we show that certain models exhibit gradient sparsity during optimization. To evaluate the performance of variance reduction with sparse gradients, we plot the loss against gradient queries per epoch for Sparse SpiderBoost and SpiderBoost on a number of image classification problems. We also compare Sparse SpiderBoost, SpiderBoost, and SGD on a natural language processing task and on sparse matrix factorization.
For all experiments, unless otherwise specified, we run SpiderBoost and Sparse SpiderBoost with learning rate $\eta = 0.1$, large-batch size $B = 1000$, small-batch size $b = 100$, inner loop length $m = 10$, memory decay factor $\alpha = 0.5$, and $k_1$ and $k_2$ both set to 5% of the total number of model parameters. We call the sum $k_1 + k_2 = k = 10\%$ the sparsity of the optimization algorithm." }, { "heading": "4.1 GRADIENT SPARSITY AND IMAGE CLASSIFICATION", "text": "Our experiments in this section test a number of image classification tasks for gradient sparsity and plot the learning curves of some of these tasks. We test a 2-layer fully connected neural network with hidden layers of width 100, a simple convolutional neural network described in detail in Appendix B, and Resnet-18 (He et al., 2015). All models use ReLU activations. For datasets, we use CIFAR-10 (Krizhevsky et al.), SVHN (Netzer et al., 2011), and MNIST (LeCun & Cortes, 2010).
None of our experiments include Resnet-18 on MNIST, as MNIST is an easier dataset; it is included primarily to provide variety.
Our method relies partially on the assumption that the magnitudes of the derivatives of some model parameters are greater than others. To measure this, we compute the entropy of the empirical distribution of the absolute values of the stochastic gradient coordinates. In Algorithm 1, the following term updates our estimate of the variance of each coordinate's derivative:
$$M^{(j)}_{t+1} := \alpha |\nu^{(j)}_{t+1}| + (1 - \alpha) M^{(j)}_t.$$
Consider the entropy of the probability vector $p^{(j)}_t = M^{(j)}_t / \|M^{(j)}_t\|_1$. The entropy of $p$ provides a measure of how much structure there is in the gradients. To see this, consider the hypothetical scenario where $p_i = 1/d$: there is no structure, the top-$k_1$ component of our sparsity operator provides no value, and the entropy is maximized. On the other hand, if a single entry $p_i = 1$ and all other entries $p_j = 0$, then the top-$k_1$ component of our sparsity operator is effectively identifying the only relevant model parameter.
To measure the potential of our sparsity operator, we compute the entropy of $p$ while running SpiderBoost on a variety of datasets and model architectures. The results of this experiment are summarized in Table 1, which provides the maximum entropy as well as the entropy of the memory vector before and after training for 150 epochs, for each dataset and each model. For each model, the entropy at the beginning of training is almost maximal, owing to the random initialization of the model parameters. After 150 epochs, the entropy of $M_t$ for the convolutional model drops to approximately 3, which suggests a substantial amount of gradient structure. Note that, for the datasets we tested, gradient structure depends primarily on the model and not the dataset; in particular, for Resnet-18, the entropy varies minimally after 150 epochs.
Figure 1 compares SpiderBoost alone to SpiderBoost with 10% sparsity (10% of parameter derivatives). All experiments in this section are run for 50 epochs. In our comparison to SpiderBoost, we measure the number of gradient queries divided by the size of the dataset $N$. A single gradient query is taken to be the cost of computing a gradient for a single data point: if $i$ is the index of a single sample, then $\nabla f_i(x)$ is a single gradient query. Using the batch gradient to update model parameters for a dataset of size $B$ has a gradient query cost of $B$; for a model with $d$ parameters, using a single sample to update $k$ model parameters has a gradient query cost of $k/d$, and so on.
Our results from fitting the convolutional neural network to MNIST show that sparsity provides a significant advantage over SpiderBoost alone. We only show 2 epochs of this experiment, since the MNIST dataset is fairly simple and convergence is rapidly achieved. The results of training Resnet-18 on CIFAR-10 suggest that our sparsity algorithm works well on large neural networks and non-trivial datasets. We believe Resnet-18 on CIFAR-10 does not do as well because of the gradient density we observe for Resnet-18 in general. Sparsity here not only has the benefit of reducing gradient query complexity, but also provides a dampening effect on the variance introduced by the additional covariates in SpiderBoost's update to the model parameters. Results for the rest of these experiments can be found in Appendix B. The entropy measure used throughout this section is sketched below."
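As a small aid to reproduction, the entropy of the memory vector can be computed as follows (a minimal NumPy sketch; M is entrywise nonnegative by construction, and the eps guard is our addition to avoid log(0)):

import numpy as np

def memory_entropy(M, eps=1e-12):
    p = M / (M.sum() + eps)
    return float(-(p * np.log(p + eps)).sum())
# The maximum possible entropy for a d-parameter model is log(d),
# attained when every coordinate looks equally "promising".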
}, { "heading": "4.2 NATURAL LANGUAGE PROCESSING", "text": "We evaluate Sparse SpiderBoost’s performance on an LSTM-based (Hochreiter & Schmidhuber, 1997) generative language model. We compare Sparse SpiderBoost, SpiderBoost, and SGD. We train our LSTM model on the Penn Treebank (Marcus et al., 1994) corpus. The natural language processing model consists of a word embedding of dimension 128 of 1000 tokens, which is jointly learned with the task. The LSTM has a hidden and cell state dimension of 1024. All three optimization algorithms operate on this model. The variance reduction training algorithm for this type of model can be found in Appendix B. We run SpiderBoost and Sparse SpiderBoost with a learning rate η = 0.2, large-batch size B = 40, small-batch size b = 20, inner loop length of m = 2. We run SGD with learning rate 0.2 and batch size is 20. Figure 2 shows SpiderBoost is slightly worse than SGD, and sparsity provides a noticeable improvement over SGD." }, { "heading": "4.3 SPARSE MATRIX FACTORIZATION", "text": "For our experiments with sparse matrix factorization, we perform Bayesian Personalized Ranking (Rendle et al., 2009) on the MovieLens database (Harper & Konstan, 2015) with a latent dimension of 20. To satisfy m = B/b, we run SpiderBoost and Sparse SpiderBoost with a large-batch size B = 1030, small-batch size b = 103, inner loop length of m = 10. For this experiment, we run SpiderBoost with the following learning rate schedule:\nη(a, b, t) = b+ (a− b)m− t m ,\nwhere a = 1.0 and b = 0.1. The schedule interpolates from a to b as the algorithm progresses through the inner loop. For instance, within the inner loop, at iteration 0 the learning rate is 1.0, and at iteration m the learning rate is 0.1. We believe this is a natural way to utilize the low variance\nat the beginning of the inner loop, and is a fair comparison to an exponential decay learning rate schedule for SGD. Details of the SGD baselines are provided in Figure 2. We see SpiderBoost is slightly worse than SGD, and sparsity provides a slight improvement over SGD, especially in the first few epochs." }, { "heading": "5 CONCLUSION", "text": "In this paper, we show how sparse gradients with memory can be used to improve the gradient query complexity of SVRG-type variance reduction algorithms. While we provide a concrete sparse variance reduction algorithm for SpiderBoost, the techniques developed in this paper can be adapted to other variance reduction algorithms.\nWe show that our algorithm provides a way to explicitly control the gradient query complexity of variance reduction methods, a problem which has thus far not been addressed. Assuming our algorithm captures the sparsity structure of the optimization problem, we also prove that the complexity of our algorithm is an improvement over SpiderBoost. The results of our comparison to SpiderBoost validates this assumption, and entropy measures provided in Table 1 empirically support our hypothesis that gradient sparsity exists.\nTable 1 also supports the results in Aji & Heafield (2017), which shows that the top-k operator generally outperforms the random-k operator. Our random-top-k operator takes advantage of the superior performance of the top-k operator while eliminating bias via a secondary random-k operator. Not every problem we tested exhibited sparsity structure. While this is true, our analysis proves that our algorithm performs no worse than SpiderBoost in these settings. 
" }, { "heading": "5 CONCLUSION", "text": "In this paper, we show how sparse gradients with memory can be used to improve the gradient query complexity of SVRG-type variance reduction algorithms. While we provide a concrete sparse variance reduction algorithm for SpiderBoost, the techniques developed in this paper can be adapted to other variance reduction algorithms.
We show that our algorithm provides a way to explicitly control the gradient query complexity of variance reduction methods, a problem which has thus far not been addressed. Assuming our algorithm captures the sparsity structure of the optimization problem, we also prove that its complexity improves on that of SpiderBoost. The results of our comparison to SpiderBoost validate this assumption, and the entropy measures provided in Table 1 empirically support our hypothesis that gradient sparsity exists.
Table 1 also supports the results of Aji & Heafield (2017), which show that the top-k operator generally outperforms the random-k operator. Our random-top-k operator takes advantage of the superior performance of the top-k operator while eliminating bias via a secondary random-k operator. Not every problem we tested exhibited sparsity structure; even so, our analysis proves that our algorithm performs no worse than SpiderBoost in these settings. When there is no structure, our algorithm reduces to a random sampling of $k_1 + k_2$ coordinates, which is essentially a randomized coordinate descent analogue of SpiderBoost. Empirically, we see that Sparse SpiderBoost outperforms SpiderBoost even when no sparsity structure is present. We believe this is due to the variance introduced by the additional covariates in the SpiderBoost update, which is mitigated in Sparse SpiderBoost by our random-top-k operator.
The results of our experiments on natural language processing and matrix factorization demonstrate that, with additional effort, variance reduction methods are competitive with SGD. While we view this as progress toward improving the practical viability of variance reduction algorithms, we believe further improvements can be made, such as better utilization of the reduced variance during training and better control over the increased variance in very high dimensional models such as dense net (Defazio, 2019). We recognize these issues and hope to make progress on them in future work." }, { "heading": "A TECHNICAL PROOFS", "text": "A.1 PREPARATORY RESULTS
Lemma 2 (Lemma 3.1 of Lei & Jordan (2019)). Let $N \sim \mathrm{Geom}(m)$. Then for any sequence $D_0, D_1, \ldots$ with $\mathbb{E}|D_N| < \infty$,
$$\mathbb{E}(D_N - D_{N+1}) = \frac{1}{m}\big(D_0 - \mathbb{E}D_N\big).$$
Remark 1. The requirement $\mathbb{E}|D_N| < \infty$ is essential. A useful sufficient condition is $|D_t| = O(\mathrm{Poly}(t))$, because a geometric random variable has finite moments of any order.
Lemma 3 (Lemma B.2 of Lei & Jordan (2019)). Let $z_1, \ldots, z_M \in \mathbb{R}^d$ be an arbitrary population and let $J$ be a uniform random subset of $[M]$ with size $m$. Then
$$\mathrm{Var}\Big(\frac{1}{m}\sum_{j \in J} z_j\Big) \le \frac{I(m < M)}{m} \cdot \frac{1}{M}\sum_{j=1}^M \|z_j\|_2^2.$$
Proof of Lemma 1. WLOG, assume that $|x_1| \ge |x_2| \ge \ldots \ge |x_d|$. Let $S$ be a random subset of $\{k_1 + 1, \ldots, d\}$ with size $k_2$. Then
$$\big(\mathrm{rtop}_{k_1,k_2}(x, y)\big)_\ell = y_\ell \Big(I(\ell \le k_1) + \frac{d - k_1}{k_2} I(\ell \in S)\Big).$$
As a result,
$$\mathbb{E}\big[(\mathrm{rtop}_{k_1,k_2}(x, y))_\ell\big] = y_\ell \Big(I(\ell \le k_1) + \frac{d - k_1}{k_2} I(\ell > k_1) P(\ell \in S)\Big) = y_\ell,$$
and
$$\mathrm{Var}\big[(\mathrm{rtop}_{k_1,k_2}(x, y))_\ell\big] = \Big(\frac{d - k_1}{k_2}\Big)^2 y_\ell^2\, I(\ell > k_1)\, P(\ell \in S)(1 - P(\ell \in S)) = \frac{d - k_1 - k_2}{k_2}\, y_\ell^2\, I(\ell > k_1).$$
Therefore,
$$\mathrm{Var}\big(\mathrm{rtop}_{k_1,k_2}(x, y)\big) = \frac{d - k_1 - k_2}{k_2} \sum_{\ell > k_1} y_\ell^2 = \frac{d - k_1 - k_2}{k_2} \|\mathrm{top}_{-k_1}(x, y)\|^2.$$
A.2 ANALYSIS OF A SINGLE INNER LOOP
Lemma 4. For any $j, t$,
$$\mathbb{E}_{j,t}(\nu^{(j)}_{t+1} - \nu^{(j)}_t) = \nabla f(x^{(j)}_{t+1}) - \nabla f(x^{(j)}_t)$$
and
$$\mathrm{Var}_{j,t}(\nu^{(j)}_{t+1} - \nu^{(j)}_t) \le \frac{\eta^2 L^2}{b} \|\nu^{(j)}_t\|^2 + \frac{d - k_1 - k_2}{k_2}\Big(g^{(j)}_t + \frac{G^{(j)}_t}{b}\Big),$$
where $\mathbb{E}_{j,t}$ and $\mathrm{Var}_{j,t}$ are taken over the randomness of $I^{(j)}_t$ and the random subset $S$ involved in the $\mathrm{rtop}_{k_1,k_2}$ operator.
Proof. By definition,
$$\nu^{(j)}_{t+1} - \nu^{(j)}_t = \mathrm{rtop}_{k_1,k_2}\big(M^{(j)}_t, \nabla f_{I^{(j)}_t}(x^{(j)}_{t+1}) - \nabla f_{I^{(j)}_t}(x^{(j)}_t)\big).$$
Let $S$ be the random subset involved in $\mathrm{rtop}_{k_1,k_2}$. Then $S$ is independent of $(I^{(j)}_t, M^{(j)}_t, x^{(j)}_{t+1}, x^{(j)}_t)$. By Lemma 1, $\mathbb{E}_S(\nu^{(j)}_{t+1} - \nu^{(j)}_t) = \nabla f_{I^{(j)}_t}(x^{(j)}_{t+1}) - \nabla f_{I^{(j)}_t}(x^{(j)}_t)$ and
$$\mathrm{Var}_S(\nu^{(j)}_{t+1} - \nu^{(j)}_t) = \frac{d - k_1 - k_2}{k_2}\, \big\|\mathrm{top}_{-k_1}\big(M^{(j)}_t, \nabla f_{I^{(j)}_t}(x^{(j)}_{t+1}) - \nabla f_{I^{(j)}_t}(x^{(j)}_t)\big)\big\|^2.$$
Since $I^{(j)}_t$ is independent of $(M^{(j)}_t, x^{(j)}_{t+1}, x^{(j)}_t)$, the tower property of conditional expectation and variance implies that
$$\mathbb{E}_{j,t}(\nu^{(j)}_{t+1} - \nu^{(j)}_t) = \mathbb{E}_{I^{(j)}_t}\big(\nabla f_{I^{(j)}_t}(x^{(j)}_{t+1}) - \nabla f_{I^{(j)}_t}(x^{(j)}_t)\big) = \nabla f(x^{(j)}_{t+1}) - \nabla f(x^{(j)}_t),$$
and
$$\mathrm{Var}_{j,t}(\nu^{(j)}_{t+1} - \nu^{(j)}_t) = \mathbb{E}_{I^{(j)}_t}\big(\mathrm{Var}_S(\nu^{(j)}_{t+1} - \nu^{(j)}_t)\big) + \mathrm{Var}_{I^{(j)}_t}\big(\mathbb{E}_S(\nu^{(j)}_{t+1} - \nu^{(j)}_t)\big). \quad (8)$$
To bound the first term, we note that $\mathrm{top}_{-k_1}$ is linear in $y$ and thus
$$\mathbb{E}_{I^{(j)}_t} \big\|\mathrm{top}_{-k_1}\big(M^{(j)}_t, \nabla f_{I^{(j)}_t}(x^{(j)}_{t+1}) - \nabla f_{I^{(j)}_t}(x^{(j)}_t)\big)\big\|^2 = \big\|\mathbb{E}_{I^{(j)}_t}\, \mathrm{top}_{-k_1}(\cdot)\big\|^2 + \mathrm{Var}_{I^{(j)}_t}\big[\mathrm{top}_{-k_1}(\cdot)\big] = g^{(j)}_t + \mathrm{Var}_{I^{(j)}_t}\Big[\frac{1}{b}\sum_{i \in I^{(j)}_t} \mathrm{top}_{-k_1}\big(M^{(j)}_t, \nabla f_i(x^{(j)}_{t+1}) - \nabla f_i(x^{(j)}_t)\big)\Big] \le g^{(j)}_t + \frac{G^{(j)}_t}{b}, \quad (9)$$
where the last inequality uses Lemma 3. To bound the second term of (8), again by Lemma 3,
$$\mathrm{Var}_{I^{(j)}_t}\big(\mathbb{E}_S(\nu^{(j)}_{t+1} - \nu^{(j)}_t)\big) = \mathrm{Var}_{I^{(j)}_t}\big(\nabla f_{I^{(j)}_t}(x^{(j)}_{t+1}) - \nabla f_{I^{(j)}_t}(x^{(j)}_t)\big) \le \frac{1}{b} \cdot \frac{1}{n}\sum_{i=1}^n \|\nabla f_i(x^{(j)}_{t+1}) - \nabla f_i(x^{(j)}_t)\|^2 \overset{(i)}{\le} \frac{L^2}{b}\|x^{(j)}_{t+1} - x^{(j)}_t\|^2 \overset{(ii)}{=} \frac{\eta^2 L^2}{b}\|\nu^{(j)}_t\|^2,$$
where (i) uses assumption A1 and (ii) uses the definition $x^{(j)}_{t+1} = x^{(j)}_t - \eta\nu^{(j)}_t$.
Lemma 5. For any $j, t$,
$$\mathbb{E}_{j,t}\|\nu^{(j)}_{t+1} - \nabla f(x^{(j)}_{t+1})\|^2 \le \|\nu^{(j)}_t - \nabla f(x^{(j)}_t)\|^2 + \frac{\eta^2 L^2}{b}\|\nu^{(j)}_t\|^2 + \frac{d - k_1 - k_2}{k_2}\Big(g^{(j)}_t + \frac{G^{(j)}_t}{b}\Big),$$
where $\mathbb{E}_{j,t}$ and $\mathrm{Var}_{j,t}$ are taken over the randomness of $I^{(j)}_t$ and the random subset $S$ involved in the $\mathrm{rtop}_{k_1,k_2}$ operator.
Proof. By Lemma 4, we have
$$\nu^{(j)}_{t+1} - \nabla f(x^{(j)}_{t+1}) = \nu^{(j)}_t - \nabla f(x^{(j)}_t) + \big(\nu^{(j)}_{t+1} - \nu^{(j)}_t - \mathbb{E}_{j,t}(\nu^{(j)}_{t+1} - \nu^{(j)}_t)\big).$$
Since $I^{(j)}_t$ is independent of $(\nu^{(j)}_t, x^{(j)}_t)$, $\mathrm{Cov}_{j,t}\big(\nu^{(j)}_t - \nabla f(x^{(j)}_t),\, \nu^{(j)}_{t+1} - \nu^{(j)}_t\big) = 0$. As a result,
$$\mathbb{E}_{j,t}\|\nu^{(j)}_{t+1} - \nabla f(x^{(j)}_{t+1})\|^2 = \|\nu^{(j)}_t - \nabla f(x^{(j)}_t)\|^2 + \mathrm{Var}_{j,t}(\nu^{(j)}_{t+1} - \nu^{(j)}_t).$$
The proof is then completed by Lemma 4.
Lemma 6. For any $j$,
$$\mathbb{E}_j\|\nu^{(j)}_{N_j} - \nabla f(x^{(j)}_{N_j})\|^2 \le \frac{m\eta^2 L^2}{b}\, \mathbb{E}_j\|\nu^{(j)}_{N_j}\|^2 + \frac{\sigma^2 I(B < n)}{B} + \frac{(d - k_1 - k_2)m}{k_2} R_j,$$
where $\mathbb{E}_j$ is taken over all randomness in the $j$-th outer loop (lines 4-13 of Algorithm 1).
Proof. By definition,
$$\|\nu^{(j)}_{t+1}\| \le \|\nu^{(j)}_t\| + \big\|\mathrm{rtop}_{k_1,k_2}\big(M^{(j)}_t, \nabla f_{I^{(j)}_t}(x^{(j)}_{t+1}) - \nabla f_{I^{(j)}_t}(x^{(j)}_t)\big)\big\| \le \|\nu^{(j)}_t\| + \big\|\nabla f_{I^{(j)}_t}(x^{(j)}_{t+1}) - \nabla f_{I^{(j)}_t}(x^{(j)}_t)\big\| \le \|\nu^{(j)}_t\| + \frac{1}{b}\sum_{i \in I^{(j)}_t}\big\|\nabla f_i(x^{(j)}_{t+1}) - \nabla f_i(x^{(j)}_t)\big\| \le \|\nu^{(j)}_t\| + \sqrt{\frac{1}{b}\sum_{i \in I^{(j)}_t}\big\|\nabla f_i(x^{(j)}_{t+1}) - \nabla f_i(x^{(j)}_t)\big\|^2} \le \|\nu^{(j)}_t\| + \sqrt{\frac{2}{b}\Big(\sum_{i \in I^{(j)}_t}\|\nabla f_i(x^{(j)}_{t+1})\|^2 + \sum_{i \in I^{(j)}_t}\|\nabla f_i(x^{(j)}_t)\|^2\Big)} \le \|\nu^{(j)}_t\| + \sqrt{\frac{2n}{b}\Big(\frac{1}{n}\sum_{i=1}^n\|\nabla f_i(x^{(j)}_{t+1})\|^2 + \frac{1}{n}\sum_{i=1}^n\|\nabla f_i(x^{(j)}_t)\|^2\Big)} \le \|\nu^{(j)}_t\| + \sqrt{2n}\,\sigma.$$
As a result,
$$\|\nu^{(j)}_t\| \le \|\nu^{(j)}_0\| + t\sqrt{2n}\,\sigma. \quad (10)$$
Thus $\|\nu^{(j)}_t - \nabla f(x^{(j)}_t)\|^2 \le 2\|\nu^{(j)}_t\|^2 + 2\|\nabla f(x^{(j)}_t)\|^2 = \mathrm{Poly}(t)$, which implies that we can apply Lemma 2 to the sequence $D_t = \|\nu^{(j)}_t - \nabla f(x^{(j)}_t)\|^2$.
Letting $t = N_j$ in Lemma 5 and taking expectations over all randomness in $\mathbb{E}_j$, we have
$$\mathbb{E}_j\|\nu^{(j)}_{N_j+1} - \nabla f(x^{(j)}_{N_j+1})\|^2 \le \mathbb{E}_j\|\nu^{(j)}_{N_j} - \nabla f(x^{(j)}_{N_j})\|^2 + \frac{\eta^2 L^2}{b}\mathbb{E}_j\|\nu^{(j)}_{N_j}\|^2 + \frac{d - k_1 - k_2}{k_2}\mathbb{E}_j\Big(g^{(j)}_{N_j} + \frac{G^{(j)}_{N_j}}{b}\Big) = \mathbb{E}_j\|\nu^{(j)}_{N_j} - \nabla f(x^{(j)}_{N_j})\|^2 + \frac{\eta^2 L^2}{b}\mathbb{E}_j\|\nu^{(j)}_{N_j}\|^2 + \frac{d - k_1 - k_2}{k_2} R_j. \quad (11)$$
By Lemma 2,
$$\mathbb{E}_j\|\nu^{(j)}_{N_j} - \nabla f(x^{(j)}_{N_j})\|^2 - \mathbb{E}_j\|\nu^{(j)}_{N_j+1} - \nabla f(x^{(j)}_{N_j+1})\|^2 = \frac{1}{m}\Big(\|\nu^{(j)}_0 - \nabla f(x^{(j)}_0)\|^2 - \mathbb{E}_j\|\nu^{(j)}_{N_j} - \nabla f(x^{(j)}_{N_j})\|^2\Big) = \frac{1}{m}\Big(\mathbb{E}_j\|\nu^{(j)}_0 - \nabla f(x_{j-1})\|^2 - \mathbb{E}_j\|\nu^{(j)}_{N_j} - \nabla f(x_j)\|^2\Big), \quad (12)$$
where the last equality uses the definitions $x_{j-1} = x^{(j)}_0$ and $x_j = x^{(j)}_{N_j}$. By Lemma 3,
$$\mathbb{E}_j\|\nu^{(j)}_0 - \nabla f(x_{j-1})\|^2 \le \frac{\sigma^2 I(B < n)}{B}. \quad (13)$$
The proof is completed by putting (11), (12), and (13) together.
Lemma 7. For any $j, t$,
$$f(x^{(j)}_{t+1}) \le f(x^{(j)}_t) + \frac{\eta}{2}\|\nu^{(j)}_t - \nabla f(x^{(j)}_t)\|^2 - \frac{\eta}{2}\|\nabla f(x^{(j)}_t)\|^2 - \frac{\eta}{2}(1 - \eta L)\|\nu^{(j)}_t\|^2.$$
Proof. By (4),
$$f(x^{(j)}_{t+1}) \le f(x^{(j)}_t) + \big\langle \nabla f(x^{(j)}_t), x^{(j)}_{t+1} - x^{(j)}_t \big\rangle + \frac{L}{2}\|x^{(j)}_t - x^{(j)}_{t+1}\|^2 = f(x^{(j)}_t) - \eta\big\langle \nabla f(x^{(j)}_t), \nu^{(j)}_t \big\rangle + \frac{\eta^2 L}{2}\|\nu^{(j)}_t\|^2 = f(x^{(j)}_t) + \frac{\eta}{2}\|\nu^{(j)}_t - \nabla f(x^{(j)}_t)\|^2 - \frac{\eta}{2}\|\nabla f(x^{(j)}_t)\|^2 - \frac{\eta}{2}\|\nu^{(j)}_t\|^2 + \frac{\eta^2 L}{2}\|\nu^{(j)}_t\|^2.$$
The proof is then completed.
Lemma 8.
For any $j$,
$$\mathbb{E}_j\|\nabla f(x_j)\|^2 \le \frac{2}{\eta m}\mathbb{E}_j\big(f(x_{j-1}) - f(x_j)\big) + \mathbb{E}_j\|\nu^{(j)}_{N_j} - \nabla f(x_j)\|^2 - (1 - \eta L)\,\mathbb{E}_j\|\nu^{(j)}_{N_j}\|^2,$$
where $\mathbb{E}_j$ is taken over all randomness in the $j$-th outer loop (lines 4-13 of Algorithm 1).
Proof. Since $\|\nabla f(x)\| \le \sigma$ for any $x$, $|f(x^{(j)}_{t+1}) - f(x^{(j)}_t)| \le \sigma\|\nu^{(j)}_t\|$, which implies that
$$|f(x^{(j)}_t)| \le \sigma\sum_{k=0}^{t}\|\nu^{(j)}_k\| + |f(x^{(j)}_0)|.$$
As shown in (10), $\|\nu^{(j)}_t\| = \mathrm{Poly}(t)$ and thus $|f(x^{(j)}_t)| = \mathrm{Poly}(t)$, so we can apply Lemma 2 to the sequence $D_t = f(x^{(j)}_t)$.
Letting $t = N_j$ in Lemma 7 and taking expectations over all randomness in $\mathbb{E}_j$, we have
$$\mathbb{E}_j f(x^{(j)}_{N_j+1}) \le \mathbb{E}_j f(x^{(j)}_{N_j}) + \frac{\eta}{2}\mathbb{E}_j\|\nu^{(j)}_{N_j} - \nabla f(x^{(j)}_{N_j})\|^2 - \frac{\eta}{2}\mathbb{E}_j\|\nabla f(x^{(j)}_{N_j})\|^2 - \frac{\eta}{2}(1 - \eta L)\mathbb{E}_j\|\nu^{(j)}_{N_j}\|^2.$$
By Lemma 2,
$$\mathbb{E}_j f(x^{(j)}_{N_j}) - \mathbb{E}_j f(x^{(j)}_{N_j+1}) = \frac{1}{m}\mathbb{E}_j\big(f(x^{(j)}_0) - f(x^{(j)}_{N_j})\big) = \frac{1}{m}\mathbb{E}_j\big(f(x_{j-1}) - f(x_j)\big).$$
The proof is then completed.
Combining Lemma 6 and Lemma 8, we arrive at the following key result on one inner loop.
Theorem 3. For any $j$,
$$\mathbb{E}\|\nabla f(x_j)\|^2 \le \frac{2}{\eta m}\mathbb{E}_j\big(f(x_{j-1}) - f(x_j)\big) + \frac{\sigma^2 I(B < n)}{B} + \frac{(d - k_1 - k_2)m}{k_2} R_j - \Big(1 - \eta L - \frac{m\eta^2 L^2}{b}\Big)\mathbb{E}_j\|\nu^{(j)}_{N_j}\|^2.$$
A.3 COMPLEXITY ANALYSIS
Proof of Theorem 1. By the definition (7) of $R_j$ and the smoothness assumption A1,
$$\mathbb{E}R_j \le \frac{b + 1}{b}L^2\,\mathbb{E}\|x^{(j)}_{N_j+1} - x^{(j)}_{N_j}\|^2 \le 2\eta^2 L^2\,\mathbb{E}\|\nu^{(j)}_{N_j}\|^2.$$
By Theorem 3,
$$\mathbb{E}\|\nabla f(x_j)\|^2 \le \frac{2}{\eta m}\mathbb{E}_j\big(f(x_{j-1}) - f(x_j)\big) + \frac{\sigma^2 I(B < n)}{B} - \Big(1 - \eta L - \frac{m\eta^2 L^2}{b} - \frac{2(d - k_1 - k_2)m\eta^2 L^2}{k_2}\Big)\mathbb{E}_j\|\nu^{(j)}_{N_j}\|^2.$$
Since $\eta L = \sqrt{k_2/(6dm)}$,
$$\eta L + \frac{m\eta^2 L^2}{b} + \frac{2(d - k_1 - k_2)m\eta^2 L^2}{k_2} \le \frac{1}{\sqrt{6}} + \frac{1}{6} + \frac{1}{3} \le 1.$$
As a result, $\mathbb{E}\|\nabla f(x_j)\|^2 \le \frac{2}{\eta m}\mathbb{E}_j(f(x_{j-1}) - f(x_j)) + \frac{\sigma^2 I(B < n)}{B}$. Since $x_{\mathrm{out}} = x_{T'}$ where $T' \sim \mathrm{Unif}([T])$, we have
$$\mathbb{E}\|\nabla f(x_{\mathrm{out}})\|^2 \le \frac{2}{\eta m T}\mathbb{E}\big(f(x_0) - f(x_{T+1})\big) + \frac{\sigma^2 I(B < n)}{B} \le \frac{2\Delta_f}{\eta m T} + \frac{\sigma^2 I(B < n)}{B}.$$
The settings of $T$ and $B$ guarantee that $\frac{2\Delta_f}{\eta m T} \le \frac{\epsilon^2}{2}$ and $\frac{\sigma^2 I(B < n)}{B} \le \frac{\epsilon^2}{2}$. Therefore $\mathbb{E}\|\nabla f(x_{\mathrm{out}})\|^2 \le \epsilon^2$, and by the Cauchy-Schwarz inequality, $\mathbb{E}\|\nabla f(x_{\mathrm{out}})\| \le \sqrt{\mathbb{E}\|\nabla f(x_{\mathrm{out}})\|^2} \le \epsilon$. In this case, the average computation cost is
$$\mathbb{E}C_{\mathrm{comp}}(\epsilon) = T(\epsilon)\Big(B + \frac{2(k_1 + k_2)}{d}bm\Big) = 3BT(\epsilon) = O\Big(\frac{B\Delta_f}{\eta m \epsilon^2}\Big) = O\Big(\frac{\sqrt{Bb}\,L\Delta_f}{\epsilon^2}\sqrt{\frac{k_1 + k_2}{k_2}}\Big).$$
The result then follows from the setting of $B$.
Proof of Theorem 2. Under the setting of $\eta$,
$$\eta L + \frac{m\eta^2 L^2}{b} \le \frac{1}{\sqrt{3}} + \frac{1}{3} \le 1.$$
By Theorem 3,
$$\mathbb{E}\|\nabla f(x_j)\|^2 \le \frac{2}{\eta m}\mathbb{E}_j\big(f(x_{j-1}) - f(x_j)\big) + \frac{\sigma^2 I(B < n)}{B} + \frac{(d - k_1 - k_2)m}{k_2} R_j.$$
By the definition of $x_{\mathrm{out}}$,
$$\mathbb{E}\|\nabla f(x_{\mathrm{out}})\|^2 \le \frac{2\Delta_f}{\eta m T} + \frac{\sigma^2 I(B < n)}{B} + \frac{(d - k_1 - k_2)m}{k_2}\mathbb{E}\bar{R}_T.$$
Under the settings of $T$ and $B$, $\frac{2\Delta_f}{\eta m T} \le \frac{\epsilon^2}{3}$ and $\frac{\sigma^2 I(B < n)}{B} \le \frac{\epsilon^2}{3}$. This proves the first result, and the second result follows directly. For the computation cost, similar to the proof of Theorem 1, we have
$$\mathbb{E}C_{\mathrm{comp}}(\epsilon) = O(BT) = O\Big(\frac{L\Delta_f}{\epsilon^2}\frac{B}{\sqrt{m(b \wedge m)}}\Big).$$
The proof is then completed by trivial algebra." }, { "heading": "B EXPERIMENTS", "text": "B.1 DESCRIPTION OF THE SIMPLE CONVOLUTIONAL NEURAL NETWORK
The simple convolutional neural network used in the experiments consists of a convolutional layer with kernel size 5, followed by a max-pooling layer with kernel size 2, another convolutional layer with kernel size 5, a fully connected layer of input size $16 \times \mathrm{side}^2 \times 120$ (where side is the size of the second dimension of the input), a fully connected layer of size $120 \times 84$, and a final fully connected layer of size $84 \times$ the output dimension.
B.2 NATURAL LANGUAGE PROCESSING
The natural language processing model consists of a word embedding of dimension 128 over 1000 tokens, which is learned jointly with the task. The LSTM has a hidden- and cell-state dimension of 1024.
Algorithm 2: SpiderBoost for Natural Language Processing.
Input: learning rate $\eta$, inner loop size $m$, number of iterations $T$, large batch matrix $Z_2$ with $\ell_2$ batches of size $B$, small batch matrix $Z_1$ with $\ell_1$ batches of size $b$, initial iterate $x_0$, initial states $s_0$ and $S_0$.
1: for $t = 0, \ldots, T - 1$ do
2:   $i = \mathrm{mod}(t, \ell_1)$
3:   $j = \mathrm{mod}(t, \ell_2)$
4:   if $i = 0$ then
5:     $s_t = 0$
6:   if $j = 0$ then
7:     $S_t = 0$
8:   if $\mathrm{mod}(t, m) = 0$ then
9:     $\nu_t, S_{t+1} := \nabla f_{Z_{2j}}(x_t, S_t)$
10:     $s_{t+1} = s_t$
11:   else
12:     $g_p := \nabla f_{Z_{1i}}(x_{t-1}, s_{t-1})$
13:     $g_c, s_{t+1} := \nabla f_{Z_{1i}}(x_t, s_t)$
14:     $\nu_t := \nu_{t-1} + (g_c - g_p)$
15:     $S_{t+1} = S_t$
16:   $x_{t+1} := x_t - \eta\nu_t$
Output: $x_T$
Before describing Algorithm 2, let us derive the full-batch gradient of a generative language model. We encode the vocabulary of our dataset of length $N$ so that $D \in \mathbb{N}^N$ is a sequence of integers corresponding to one-hot encodings of each token. We model the transition $p(D_{i+1}|D_i, s_i)$ using an RNN model $M$ as $M(D_i, s_i) = \hat{D}_{i+1}, s_{i+1}$, where $s_i$ is the sequential model state at step $i$. The model $M$ can be thought of as a classifier with cross-entropy loss $L$ and an additional dependence on $s_i$. The batch gradient objective can therefore be formulated by considering the full sequence of predictions from $i = 0$ to $i = N - 1$, generating for each step $i$ the output $\hat{D}_{i+1}, s_{i+1}$. Each token is one-hot encoded as an integer (from 0 to the size of the vocabulary), so the empirical risk is given by
$$J(D; x) = \frac{1}{N}\sum_{i=0}^{N-1} L(\hat{D}_i, D_i).$$
Thus, the full-batch gradient is simply the gradient of $J$ with respect to $x$.
In Algorithm 2, $D$ is split into $b$ contiguous sequences of length $\ell_1 = N/b$ and stored in a matrix $Z_1 \in \mathbb{N}^{b \times \ell_1}$. Taking a pass over $Z_1$ requires maintaining a state $s_i \in \mathbb{N}^b$ for each entry in a batch, which is reset before every pass over $Z_1$. To deal with maintaining state for batches at different time scales, we define a different matrix $Z_2 \in \mathbb{N}^{B \times \ell_2}$, which maintains a different set of states $S_i \in \mathbb{N}^B$ for each entry of batch size $B$. We denote by $g, s_{t+1} = \nabla f_{Z_{1j}}(x, s_t)$ the gradient of our model with respect to $x$, where $\nabla f_{Z_{1j}}$ denotes the gradient function corresponding to the $j$-th batch of matrix $Z_1$; the function $f_{Z_{1j}}$ simply computes the loss of the $j$-th batch of matrix $Z_1$." } ]
2020
null
SP:d94b0a398257e68c8888f0fdb9e6765881f798af
[ "This paper properly applied several technique from RNN and graph neural networks to model dynamically-evolving, multi-relational graph data. There are two key component: a RNN to encode temporal information from the past event sequences, and a neighborhood aggregator collects the information from the neighbor nodes. The contribution on RNN part is design the loss and parameterizes the tuple of the graph. The contribution of the second part was adapting Multi-Relational Aggregator to this network. The paper is well-written. Although I'm familiar with the dataset, the analysis and comparison seems thorough. ", "The paper proposes a recurrent and autorgressive architecture to model temporal knowledge graphs and perform multi-time-step inference in the form of future link prediction. Specifically, given a historical sequence of graphs at discrete time points, the authors build sequential probabilistic approach to infer the next graph using joint over all previous graphs factorized into conditional distributions of subject, relation and the objects. The model is parameterized by a recurrent architecture that employs a multi-step aggregation to capture information within the graph at particular time step. The authors also propose a sequential approach to perform multi-step inference. The proposed method is evaluated on the task of future link prediction across several baselines, both static and dynamic, and ablation analysis is provided to measure the effect of each component in the architecture." ]
Modeling dynamically-evolving, multi-relational graph data has received a surge of interest with the rapid growth of heterogeneous event data. However, predicting future events on such data requires global structure inference over time and the ability to integrate temporal and structural information, which are not yet well understood. We present Recurrent Event Network (RE-NET), a novel autoregressive architecture for modeling temporal sequences of multi-relational graphs (e.g., temporal knowledge graphs), which can perform sequential, global structure inference over future time stamps to predict new events. RE-NET employs a recurrent event encoder to model the temporally conditioned joint probability distribution of the event sequences, and equips the event encoder with a neighborhood aggregator for modeling the concurrent events within the time window associated with each entity. We apply teacher forcing for model training over historical data, and infer graph sequences over future time stamps by sampling from the learned joint distribution in a sequential manner. We evaluate the proposed method via temporal link prediction on five public datasets. Extensive experiments demonstrate the strength of RE-NET, especially on multi-step inference over future time stamps.
[]
[ { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "CoRR, abs/1409.0473,", "year": 2015 }, { "authors": [ "Antoine Bordes", "Nicolas Usunier", "Alberto García-Durán", "Jason Weston", "Oksana Yakhnenko" ], "title": "Translating embeddings for modeling multi-relational data", "venue": "In NIPS,", "year": 2013 }, { "authors": [ "Elizabeth Boschee", "Jennifer Lautenschlager", "Sean O’Brien", "Steve Shellman", "James Starz", "Michael Ward" ], "title": "Icews coded event data", "venue": "Harvard Dataverse,", "year": 2015 }, { "authors": [ "Kyunghyun Cho", "Bart van Merrienboer", "Çaglar Gülçehre", "Dzmitry Bahdanau", "Fethi Bougares", "Holger Schwenk", "Yoshua Bengio" ], "title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation", "venue": "In EMNLP,", "year": 2014 }, { "authors": [ "Shib Sankar Dasgupta", "Swayambhu Nath Ray", "Partha Talukdar" ], "title": "Hyte: Hyperplane-based temporally aware knowledge graph embedding", "venue": "In EMNLP,", "year": 2018 }, { "authors": [ "Tim Dettmers", "Pasquale Minervini", "Pontus Stenetorp", "Sebastian Riedel" ], "title": "Convolutional 2d knowledge graph embeddings", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Nan Du", "Hanjun Dai", "Rakshit Trivedi", "Utkarsh Upadhyay", "Manuel Gomez-Rodriguez", "Le Song" ], "title": "Recurrent marked temporal point processes: Embedding event history to vector", "venue": "In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2016 }, { "authors": [ "Alberto García-Durán", "Sebastijan Dumancic", "Mathias Niepert" ], "title": "Learning sequence encoders for temporal knowledge graph completion", "venue": "In EMNLP,", "year": 2018 }, { "authors": [ "Palash Goyal", "Nitin Kamra", "Xinran He", "Yan Liu" ], "title": "Dyngem: Deep embedding method for dynamic graphs", "venue": "arXiv preprint arXiv:1805.11273,", "year": 2018 }, { "authors": [ "Palash Goyal", "Sujit Rokka Chhetri", "Arquimedes Canedo" ], "title": "dyngraph2vec: Capturing network dynamics using dynamic graph representation learning", "venue": "Knowledge-Based Systems,", "year": 2019 }, { "authors": [ "William L. Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Xu Han", "Shulin Cao", "Xin Lv", "Yankai Lin", "Zhiyuan Liu", "Maosong Sun", "Juan-Zi Li" ], "title": "Openke: An open toolkit for knowledge embedding", "venue": null, "year": 2018 }, { "authors": [ "Seyed Mehran Kazemi", "Rishab Goel", "Kshitij Jain", "Ivan Kobyzev", "Akshay Sethi", "Peter Forsyth", "Pascal Poupart" ], "title": "Relational representation learning for dynamic (knowledge) graphs: A survey", "venue": null, "year": 1905 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Thomas N. 
Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": null, "year": 2016 }, { "authors": [ "Srijan Kumar", "Xikun Zhang", "Jure Leskovec" ], "title": "Predicting dynamic embedding trajectory in temporal interaction networks", "venue": "In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2019 }, { "authors": [ "Julien Leblay", "Melisachew Wudage Chekol" ], "title": "Deriving validity time in knowledge graph", "venue": "In Companion of the The Web Conference 2018 on The Web Conference", "year": 2018 }, { "authors": [ "Kalev Leetaru", "Philip A Schrodt" ], "title": "Gdelt: Global data on events, location, and tone, 1979–2012", "venue": "In ISA annual convention,", "year": 2013 }, { "authors": [ "Yujia Li", "Oriol Vinyals", "Chris Dyer", "Razvan Pascanu", "Peter Battaglia" ], "title": "Learning deep generative models of graphs", "venue": "arXiv preprint arXiv:1803.03324,", "year": 2018 }, { "authors": [ "Farzaneh Mahdisoltani", "Joanna Asia Biega", "Fabian M. Suchanek" ], "title": "Yago3: A knowledge base from multilingual wikipedias", "venue": "In CIDR,", "year": 2014 }, { "authors": [ "Giang Hoang Nguyen", "John Boaz Lee", "Ryan A. Rossi", "Nesreen K. Ahmed", "Eunyee Koh", "Sungchul Kim" ], "title": "Continuous-time dynamic network embeddings", "venue": "In WWW,", "year": 2018 }, { "authors": [ "Rasmus Berg Palm", "Ulrich Paquet", "Ole Winther" ], "title": "Recurrent relational networks", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Aldo Pareja", "Giacomo Domeniconi", "Jie Chen", "Tengfei Ma", "Toyotaro Suzumura", "Hiroki Kanezashi", "Tim Kaler", "Charles E. Leisersen" ], "title": "Evolvegcn: Evolving graph convolutional networks for dynamic graphs", "venue": null, "year": 1902 }, { "authors": [ "Alvaro Sanchez-Gonzalez", "Nicolas Heess", "Jost Tobias Springenberg", "Josh Merel", "Martin A. Riedmiller", "Raia Hadsell", "Peter W. Battaglia" ], "title": "Graph networks as learnable physics engines for inference and control", "venue": null, "year": 2018 }, { "authors": [ "Michael Sejr Schlichtkrull", "Thomas N. 
Kipf", "Peter Bloem", "Rianne van den Berg", "Ivan Titov", "Max Welling" ], "title": "Modeling relational data with graph convolutional networks", "venue": "In ESWC,", "year": 2018 }, { "authors": [ "Youngjoo Seo", "Michaël Defferrard", "Pierre Vandergheynst", "Xavier Bresson" ], "title": "Structured sequence modeling with graph convolutional recurrent networks", "venue": "In ICONIP,", "year": 2017 }, { "authors": [ "Uriel Singer", "Ido Guy", "Kira Radinsky" ], "title": "Node embedding over temporal graphs", "venue": "arXiv preprint arXiv:1903.08889,", "year": 2019 }, { "authors": [ "Zhiqing Sun", "Zhi-Hong Deng", "Jian-Yun Nie", "Jian Tang" ], "title": "Rotate: Knowledge graph embedding by relational rotation in complex space", "venue": null, "year": 1902 }, { "authors": [ "Lucas Theis", "Aäron van den Oord", "Matthias Bethge" ], "title": "A note on the evaluation of generative models", "venue": "arXiv preprint arXiv:1511.01844,", "year": 2015 }, { "authors": [ "Rakshit Trivedi", "Hanjun Dai", "Yichen Wang", "Le Song" ], "title": "Know-evolve: Deep temporal reasoning for dynamic knowledge graphs", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Rakshit Trivedi", "Mehrdad Farajtabar", "Prasenjeet Biswal", "Hongyuan Zha" ], "title": "Dyrep: Learning representations over dynamic graphs", "venue": "ICLR", "year": 2019 }, { "authors": [ "Théo Trouillon", "Johannes Welbl", "Sebastian Riedel", "Éric Gaussier", "Guillaume Bouchard" ], "title": "Complex embeddings for simple link prediction", "venue": "In ICML,", "year": 2016 }, { "authors": [ "Bishan Yang", "Wen tau Yih", "Xiaodong He", "Jianfeng Gao", "Li Deng" ], "title": "Embedding entities and relations for learning and inference in knowledge", "venue": "bases. CoRR,", "year": 2015 }, { "authors": [ "Jiaxuan You", "Rex Ying", "Xiang Ren", "William Hamilton", "Jure Leskovec" ], "title": "Graphrnn: Generating realistic graphs with deep auto-regressive models", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Jiaxuan You", "Yichen Wang", "Aditya Pal", "Pong Eksombatchai", "Chuck Rosenburg", "Jure Leskovec" ], "title": "Hierarchical temporal convolutional networks for dynamic recommender systems", "venue": "In The World Wide Web Conference,", "year": 2019 }, { "authors": [ "Lekui Zhou", "Yang Yang", "Xiang Ren", "Fei Wu", "Yueting Zhuang" ], "title": "Dynamic network embedding by modeling triadic closure process", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Lekui Zhou", "Yang Yang", "Xiang Ren", "Fei Wu", "Yueting Zhuang" ], "title": "Dynamic network embedding by modeling triadic closure process", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Representation learning on dynamically-evolving, graph-structured data has emerged as an important problem in a wide range of applications, including social network analysis (Zhou et al., 2018a; Trivedi et al., 2019), knowledge graph reasoning (Trivedi et al., 2017; Nguyen et al., 2018; Kazemi et al., 2019), event forecasting (Du et al., 2016), and recommender systems (Kumar et al., 2019; You et al., 2019). Previous methods over dynamic graphs mainly focus on learning time-sensitive structure representations for node classification and link prediction in single-relational graphs. However, the rapid growth of heterogeneous event data (Mahdisoltani et al., 2014; Boschee et al., 2015) has created new challenges on modeling temporal, complex interactions between entities (i.e., viewed as a temporal knowledge graph or a TKG), and calls for approaches that can predict new events in different future time stamps based on the history—i.e., structure inference of a TKG over time.\nRecent attempts on learning over temporal knowledge graphs have focused on either predicting missing events (facts) for the observed time stamps (García-Durán et al., 2018; Dasgupta et al., 2018; Leblay & Chekol, 2018), or estimating the conditional probability of observing a future event using temporal point process (Trivedi et al., 2017; 2019). However, the former group of methods adopts an interpolation problem formulation over TKGs and thus cannot predict future events, as representations of unseen time stamps are unavailable. The latter group of methods, including Know-Evolve and its extension, DyRep, computes the probability of future events using ground-truths of the proceeding events during inference time, and cannot model concurrent events occurring within the same time window—which often happens when event time stamps are discrete. It is thus desirable to have a principled method that can infer graph structure sequentially over time and can incorporate local structural information (e.g., concurrent events) during temporal modeling.\nTo this end, we propose a sequential structure inference architecture, called Recurrent Event Network (RE-NET), for modeling heterogeneous event data in the form of temporal knowledge graphs. Key ideas of RE-NET are based on the following observations: (1) predicting future events can be viewed as a sequential (multi-step) inference of multi-relational interactions between entities over time; (2)\n1Code and data have been uploaded and will be published upon acceptance of the paper.\ntemporally adjacent events may carry related semantics and informative patterns, which can further help inform future events (i.e., temporal information); and (3) multiple events may co-occur within the same time window and exhibit structural dependencies as they share entities (i.e., local structural information). To incorporate these ideas, RE-NET defines the joint probability distribution of all the events in a TKG in an autoregressive fashion, where it models the probability distribution of the concurrent events at the current time step conditioned on all the preceding events (see Fig. 1b for an illustration). Specifically, a recurrent event encoder, parametrized by RNNs, is used to summarize information of the past event sequences, and a neighborhood aggregator is employed to aggregate the information of concurrent events for the related entity within each time stamp. 
With the summarized information of the past event sequences, our decoder defines the joint probability of a current event. Such an autoregressive model can be effectively trained using teacher forcing. Global structure inference for predicting future events can then be achieved by performing sampling in a sequential manner.
We evaluate our proposed method on the temporal link prediction task, by testing the performance of multi-step inference over time on five public temporal knowledge graph datasets. Experimental results demonstrate that RE-NET outperforms state-of-the-art models for both static and temporal graph reasoning, showing its better capacity to model temporal, multi-relational graph data with concurrent events. We further show that RE-NET can perform effective multi-step inference to predict unseen entity relationships in a distant future." }, { "heading": "2 RELATED WORK", "text": "Our work is related to previous studies on temporal knowledge graph reasoning, temporal modeling on homogeneous graphs, recurrent graph neural networks, and deep autoregressive models.
Temporal KG Reasoning. There are some recent attempts at incorporating temporal information in modeling dynamic knowledge graphs. Trivedi et al. (2017) presented Know-Evolve, which models the occurrence of a fact as a temporal point process; however, this method is built on a problematic formulation when dealing with concurrent events, as shown in Section F. Several embedding-based methods have been proposed (García-Durán et al., 2018; Leblay & Chekol, 2018; Dasgupta et al., 2018) to model time information. They embed the associated time information into a low-dimensional space, e.g., as relation embeddings produced by an RNN over the text of the time stamp (García-Durán et al., 2018), as time embeddings (Leblay & Chekol, 2018), or as temporal hyperplanes (Dasgupta et al., 2018). However, these models do not capture temporal dependency and cannot generalize to unobserved time stamps.
Temporal Modeling on Homogeneous Graphs. There are attempts at predicting future links on homogeneous graphs (Pareja et al., 2019; Goyal et al., 2018; 2019; Zhou et al., 2018b; Singer et al., 2019). Some of the methods incorporate and learn graphical structures to predict future links (Pareja et al., 2019; Zhou et al., 2018b; Singer et al., 2019), while other methods predict by reconstructing an adjacency matrix with an autoencoder (Goyal et al., 2018; 2019). These methods address single-relational graphs and are designed to predict future edges one step ahead (i.e., for t + 1). In contrast, our work focuses on multi-relational knowledge graphs and aims for multi-step prediction (i.e., for t + 1, . . . , t + k).
Recurrent Graph Neural Models. There have been some studies on recurrent graph neural models for sequential or temporal graph-structured data (Sanchez-Gonzalez et al., 2018; Battaglia et al., 2018; Palm et al., 2018; Seo et al., 2017; Pareja et al., 2019). These methods adopt a message-passing framework for aggregating nodes' neighborhood information (e.g., via graph convolutional operations). GN (Sanchez-Gonzalez et al., 2018; Battaglia et al., 2018) and RRN (Palm et al., 2018) update node representations by a message-passing scheme between time stamps. Some prior methods adopt an RNN to memorize and update the states of node embeddings that are dynamically evolving (Seo et al., 2017), or to memorize and update the model parameters for different time stamps (Pareja et al., 2019).
In contrast, our proposed method, RE-NET, leverages autoregressive modeling to parameterize the joint probability distribution of events with RNNs.
Deep Autoregressive Models. Deep autoregressive models define joint probability distributions as a product of conditionals. DeepGMG (Li et al., 2018) and GraphRNN (You et al., 2018) are deep generative models of graphs that focus on generating homogeneous graphs, where there is only a single type of edge. In contrast to these studies, our work focuses on generating heterogeneous graphs, in which multiple types of edges exist, so our problem is more challenging. To the best of our knowledge, this is the first paper to formulate the structure inference (prediction) problem for temporal, multi-relational (knowledge) graphs in an autoregressive fashion." }, { "heading": "3 PROPOSED METHOD: RE-NET", "text": "We consider a temporal knowledge graph (TKG) as a multi-relational, directed graph with time-stamped edges (relationships) between nodes (entities). An event is defined as a time-stamped edge, i.e., (subject entity, relation, object entity, time), and is denoted by a quadruple $(s, r, o, t)$ or $(s_t, r_t, o_t)$. We denote the set of events at time $t$ as $G_t$. A TKG is built upon a sequence of event quadruples ordered ascending by their time stamps, i.e., $\{G_t\}_t = \{(s_i, r_i, o_i, t_i)\}_i$ (with $t_i < t_j, \forall i < j$), where each time-stamped edge has a direction pointing from the subject entity to the object entity.² The goal of learning generative models of events is to learn a distribution $\mathrm{P}(G)$ over temporal knowledge graphs, based on a set of observed event sets $\{G_1, \ldots, G_T\}$. To model lasting events which span over a time range, i.e., $(s, r, o, [t_1, t_2])$, we simply partition such an event into a sequence of time-stamped events $\{G_{t_1}, \ldots, G_{t_2}\}$; we leave more sophisticated modeling of lasting events as future work." }, { "heading": "3.1 RECURRENT EVENT NETWORK", "text": "Sequential Structure Inference in a TKG. The key idea in RE-NET is to define the joint distribution of all the events $G = \{G_1, \ldots, G_T\}$ in an autoregressive manner, i.e., $\mathrm{P}(G) = \prod_{t=1}^T \mathrm{P}(G_t | G_{t-m:t-1})$. Basically, we decompose the joint distribution into a sequence of conditional distributions (e.g., $\mathrm{P}(G_t | G_{t-m:t-1})$), where we assume that the probability of the events at a time step, e.g., $G_t$, depends only on the events at the previous $m$ steps, e.g., $G_{t-m:t-1}$. For each conditional distribution $\mathrm{P}(G_t | G_{t-m:t-1})$, we further assume that the events in $G_t$ are mutually independent given the previous events $G_{t-m:t-1}$. In this way, the joint distribution can be rewritten as follows:
$$\mathrm{P}(G) = \prod_t \prod_{(s_t, r_t, o_t) \in G_t} \mathrm{P}(s_t, r_t, o_t | G_{t-m:t-1}) = \prod_t \prod_{(s_t, r_t, o_t) \in G_t} \mathrm{P}(o_t | s_t, r_t, G_{t-m:t-1}) \cdot \mathrm{P}(r_t | s_t, G_{t-m:t-1}) \cdot \mathrm{P}(s_t | G_{t-m:t-1}). \quad (1)$$
Intuitively, the generation process of each triplet $(s_t, r_t, o_t)$ is defined as follows. Given all the past events $G_{t-m:t-1}$, we first generate a subject entity $s_t$ through the distribution $\mathrm{P}(s_t | G_{t-m:t-1})$. Then we further generate a relation $r_t$ with $\mathrm{P}(r_t | s_t, G_{t-m:t-1})$, and finally the object entity $o_t$ is generated according to $\mathrm{P}(o_t | s_t, r_t, G_{t-m:t-1})$.
In this work, we assume that $\mathrm{P}(o_t | s_t, r_t, G_{t-m:t-1})$ and $\mathrm{P}(r_t | s_t, G_{t-m:t-1})$ depend only on events that are related to $s$, and focus on modeling the following joint probability:
$$\mathrm{P}(s_t, r_t, o_t | G_{t-m:t-1}) = \mathrm{P}(o_t | s, r, N^{(s)}_{t-m:t-1}) \cdot \mathrm{P}(r_t | s, N^{(s)}_{t-m:t-1}) \cdot \mathrm{P}(s_t | G_{t-m:t-1}), \quad (2)$$
²The same triple $(s, r, o)$ may occur multiple times at different time stamps, yielding different event quadruples.
where $G_t$ is replaced by $N^{(s)}_t$, the set of neighboring entities that interacted with subject entity $s$ under all relations at time stamp $t$. For the third probability, the full event sets must still be considered, since the subject is not given. Next, we introduce how we parameterize these distributions.
Recurrent Event Encoder. RE-NET parameterizes $\mathrm{P}(o_t | s, r, G_{t-m:t-1})$ in the following way:
$$\mathrm{P}(o_t | s, r, N^{(s)}_{t-m:t-1}) \propto \exp\big([\mathbf{e}_s : \mathbf{e}_r : \mathbf{h}_{t-1}(s, r)]^\top \cdot \mathbf{w}_{o_t}\big), \quad (3)$$
where $\mathbf{e}_s, \mathbf{e}_r \in \mathbb{R}^d$ are learnable embedding vectors for subject entity $s$ and relation $r$, and $\mathbf{h}_{t-1}(s, r) \in \mathbb{R}^d$ is a history vector which encodes the information from the neighbor sets that interacted with $s$ in the past, as well as the global information from the graph structures of $G_{t-1:t-m}$. Basically, $[\mathbf{e}_s : \mathbf{e}_r : \mathbf{h}_{t-1}(s, r)]$ is an encoding that summarizes all the past information; based on it, we compute the probability of different object entities $o_t$ by passing the encoding into a linear softmax classifier parameterized by $\{\mathbf{w}_{o_t}\}$. Similarly, we define the probabilities for relations and subjects as follows:
$$\mathrm{P}(r_t | s, N^{(s)}_{t-m:t-1}) \propto \exp\big([\mathbf{e}_s : \mathbf{h}_{t-1}(s)]^\top \cdot \mathbf{w}_{r_t}\big), \quad (4)$$
$$\mathrm{P}(s_t | G_{t-m:t-1}) \propto \exp\big(\mathbf{H}_{t-1}^\top \cdot \mathbf{w}_{s_t}\big), \quad (5)$$
where $\mathbf{h}_{t-1}(s)$ captures all the local information about $s$ in the past, and $\mathbf{H}_{t-1} \in \mathbb{R}^d$ is a vector representation encoding the global graph structures $G_{t-1:t-m}$.
For each time step $t$, the hidden vectors $\mathbf{h}_{t-1}(s)$, $\mathbf{h}_{t-1}(s, r)$ and $\mathbf{H}_{t-1}$ preserve the information from the past events, and we update them in the following recurrent way:
$$\mathbf{h}_t(s, r) = \mathrm{RNN}^1(g(N^{(s)}_t), \mathbf{H}_t, \mathbf{h}_{t-1}(s, r)), \quad (6)$$
$$\mathbf{h}_t(s) = \mathrm{RNN}^2(g(N^{(s)}_t), \mathbf{H}_t, \mathbf{h}_{t-1}(s)), \quad (7)$$
$$\mathbf{H}_t = \mathrm{RNN}^3(g(G_t), \mathbf{H}_{t-1}), \quad (8)$$
where $g$ is an aggregation function and $N^{(s)}_t$ stands for all the events related to $s$ at the current time step $t$. Intuitively, we obtain the current information related to $s$ by aggregating all the related events at time $t$, i.e., $g(N^{(s)}_t)$. Then we update the hidden vector $\mathbf{h}_t(s, r)$ by using the aggregated information $g(N^{(s)}_t)$ at the current step, the past value $\mathbf{h}_{t-1}(s, r)$, and the global hidden vector $\mathbf{H}_t$. The hidden vector $\mathbf{h}_t(s)$ is updated in a similar way. For the aggregation of all events $g(G_t)$, we define $g(G_t) = \max(\{g(N^{(s)}_t)\}_s)$, i.e., an element-wise max-pooling operation over all $g(N^{(s)}_t)$. We use Gated Recurrent Units (Cho et al., 2014) as the RNNs; details are described in Section A.
For each subject entity $s$, it can interact with multiple relations and object entities at each time step $t$; in other words, the set $N^{(s)}_t$ can contain multiple events. Designing an effective aggregation function $g$ to aggregate information from $N^{(s)}_t$ for $s$ is therefore a nontrivial problem. Next, we introduce how we design $g(\cdot)$ in RE-NET. Before doing so, a minimal sketch of the encoder update in equation 6 is given below.
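The following PyTorch sketch implements one step of equation 6; the exact way $g(N^{(s)}_t)$ and $\mathbf{H}_t$ are fed into the GRU is our assumption (we simply concatenate them as the input):

import torch
import torch.nn as nn

class HistoryEncoder(nn.Module):
    # One step of h_t(s, r) = RNN^1(g(N_t^(s)), H_t, h_{t-1}(s, r));
    # equations 7 and 8 would use analogous cells for RNN^2 and RNN^3.
    def __init__(self, d):
        super().__init__()
        self.rnn = nn.GRUCell(input_size=2 * d, hidden_size=d)

    def forward(self, g_t, H_t, h_prev):
        # g_t: aggregated neighborhood vector; H_t: global graph vector
        return self.rnn(torch.cat([g_t, H_t], dim=-1), h_prev)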
" }, { "heading": "3.2 MULTI-RELATIONAL GRAPH (RGCN) AGGREGATOR", "text": "Here we discuss the aggregation function $g(\cdot)$, which captures different kinds of neighborhood information for each subject entity and relation, i.e., $(s, r)$. We first introduce two simple aggregation functions, the mean pooling aggregator and the attentive pooling aggregator; these two simple aggregators only collect neighboring entities under the same relation $r$. Then we introduce a more powerful aggregation function, the multi-relational aggregator.
Mean Pooling Aggregator. The baseline aggregator simply takes the element-wise mean of the vectors in $\{\mathbf{e}_o : o \in N^{(s,r)}_t\}$, where $N^{(s,r)}_t$ is the set of objects that interacted with $s$ under $r$ at time $t$. The mean aggregator treats all neighboring objects equally, however, and thus ignores the different importance of each neighbor entity.
Attentive Pooling Aggregator. We define an attentive aggregator based on the additive attention introduced in Bahdanau et al. (2015) to distinguish the important entities for $(s, r)$. The aggregation function is defined as $g(N^{(s,r)}_t) = \sum_{o \in N^{(s,r)}_t} \alpha_o \mathbf{e}_o$, where $\alpha_o = \mathrm{softmax}(v^\top \tanh(W(\mathbf{e}_s; \mathbf{e}_r; \mathbf{e}_o)))$, and $v \in \mathbb{R}^d$ and $W \in \mathbb{R}^{d \times 3d}$ are trainable weight matrices. By attending over the subject and the relation, the weights determine how relevant each object entity is to the given subject and relation.
Multi-Relational Aggregator. Here we introduce a multi-relational graph aggregator based on Schlichtkrull et al. (2018). This is a general aggregator that can incorporate information from multi-relational and multi-hop neighbors. Formally, the aggregator is defined as follows:
$$g(N^{(s)}_t) = \mathbf{h}^{(l+1)}_s = \sigma\Big(\sum_{r \in R} \sum_{o \in N^{(s,r)}_t} \frac{1}{c_s} W^{(l)}_r \mathbf{h}^{(l)}_o + W^{(l)}_0 \mathbf{h}^{(l)}_s\Big), \quad (9)$$
where the initial hidden representation of each node ($\mathbf{h}^{(0)}_o$) is set to its trainable embedding vector ($\mathbf{e}_o$).
Basically, each relation derives a local graph structure between entities, which yields a message on each entity by aggregating the information from that entity's neighbors under the relation, i.e., $\sum_{o \in N^{(s,r)}_t} \frac{1}{c_s} W^{(l)}_r \mathbf{h}^{(l)}_o$. The overall message on each entity is then computed by aggregating all the relation-specific messages, i.e., $\sum_{r \in R} \sum_{o \in N^{(s,r)}_t} \frac{1}{c_s} W^{(l)}_r \mathbf{h}^{(l)}_o$. Finally, the aggregator $g(N^{(s)}_t)$ combines both the overall message and the information from past steps, i.e., $W^{(l)}_0 \mathbf{h}^{(l)}_s$.
To distinguish between different relations, we introduce an independent weight matrix $W^{(l)}_r$ for each relation $r$. Furthermore, the aggregator collects representations of multi-hop neighbors by stacking multiple layers of the neural network, each layer indexed by $l$; the number of layers determines the depth to which a node reaches to aggregate information from its local neighborhood. We depict this aggregator in Fig. 2.
The major issue with this aggregator is that the number of parameters grows rapidly with the number of relations. In practice, this can easily lead to overfitting on rare relations and to models of very large size. We therefore adopt the block-diagonal decomposition (Schlichtkrull et al., 2018), where each relation-specific weight matrix is decomposed into a block-diagonal form built from low-dimensional matrices: $W^{(l)}_r$ in equation 9 is defined as a block-diagonal matrix $\mathrm{diag}(A^{(l)}_{1r}, \ldots, A^{(l)}_{Br})$, where $A^{(l)}_{kr} \in \mathbb{R}^{(d^{(l+1)}/B) \times (d^{(l)}/B)}$ and $B$ is the number of basis matrices. The block decomposition reduces the number of parameters and helps to prevent overfitting. A simplified sketch of this aggregator is given below.
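The following PyTorch sketch shows a single layer of the multi-relational aggregation in equation 9, without the block-diagonal decomposition; the data layout (a dictionary mapping each relation to the embeddings of the objects observed under it) is our assumption:

import torch
import torch.nn as nn

class MultiRelationalAggregator(nn.Module):
    # One layer of equation 9: relation-specific messages, a normalizing
    # constant c_s, and a self-loop term W_0 h_s.
    def __init__(self, d, num_rels):
        super().__init__()
        self.W_r = nn.Parameter(torch.randn(num_rels, d, d) * 0.01)
        self.W_0 = nn.Linear(d, d, bias=False)

    def forward(self, h_s, neighbors):
        # neighbors: {relation id r: tensor of shape (n_r, d) of h_o vectors}
        c_s = max(sum(h.shape[0] for h in neighbors.values()), 1)
        msg = torch.zeros_like(h_s)
        for r, h_objs in neighbors.items():
            msg = msg + (h_objs @ self.W_r[r].T).sum(dim=0) / c_s
        return torch.relu(msg + self.W_0(h_s))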
" }, { "heading": "3.3 PARAMETER LEARNING AND INFERENCE OF RE-NET", "text": "Parameter Learning via Event Prediction. The (object) entity prediction given $(s, r)$ can be viewed as a multi-class classification task, where each class corresponds to one object entity. Similarly, relation prediction given $s$, and subject entity prediction, can be considered multi-class classification tasks. (Here we omit the conditioning on previous events for notational brevity.) To learn the weights and the representations of entities and relations, we adopt a multi-class cross-entropy loss on the model's output. The loss function comprises three terms and is defined as:
$$\mathcal{L} = -\sum_{(s, r, o, t) \in G} \big(\log \mathrm{P}(o_t | s_t, r_t) + \lambda_1 \log \mathrm{P}(r_t | s_t) + \lambda_2 \log \mathrm{P}(s_t)\big), \quad (10)$$
where $G$ is the set of events, and $\lambda_1$ and $\lambda_2$ are importance parameters that control the weight of each loss term. $\lambda_1$ and $\lambda_2$ can be chosen depending on the task; if the task aims to predict $o$ given $(s, r)$, we can assign small values to $\lambda_1$ and $\lambda_2$. Each probability is defined in equations 3, 4, and 5, respectively. We apply teacher forcing for model training over historical data.
Algorithm 1: Inference algorithm of RE-NET.
Input: observed graph sequence $\{G_1, \ldots, G_{t-1}\}$; number of events to sample at each step: $M$.
Output: an estimation of the conditional distribution $\mathrm{P}(G_{t+\Delta t} | G_{:t})$.
1: $t' = t$
2: while $t' \le t + \Delta t$ do
3:   sample $M$ subjects $s \sim \mathrm{P}(s | \hat{G}_{t+1:t'-1}, G_{:t})$ by equation 5
4:   pick the top-$k$ triples $\{(s_1, r_1, o_1, t'), \ldots, (s_k, r_k, o_k, t')\}$ ranked by equation 2
5:   $\hat{G}_{t'} = \{(s_1, r_1, o_1, t'), \ldots, (s_k, r_k, o_k, t')\}$
6:   $t' = t' + 1$
7: estimate the probability of each event $\mathrm{P}(s, r, o | \hat{G}_{t+1:t+\Delta t-1}, G_{:t})$ by equation 2
8: estimate the joint distribution of all events $\mathrm{P}(G_{t+\Delta t} | \hat{G}_{t+1:t+\Delta t-1}, G_{:t})$ by equation 1
9: return $\mathrm{P}(G_{t+\Delta t} | \hat{G}_{t+1:t+\Delta t-1}, G_{:t})$ as the estimation
Multi-step Inference over Time. At inference time, RE-NET seeks to predict forthcoming events based on the previous observations. Suppose that the current time is $t$ and we aim at predicting events at time $t + \Delta t$; then the problem of multi-step inference can be formalized as inferring the conditional probability $\mathrm{P}(G_{t+\Delta t} | G_{:t})$. The problem is nontrivial, as we need to integrate over all $G_{t+1:t+\Delta t-1}$. To achieve efficient inference, we draw a sample of $G_{t+1:t+\Delta t-1}$ and estimate the conditional probability in the following way:
$$\mathrm{P}(G_{t+\Delta t} | G_{:t}) = \sum_{G_{t+1:t+\Delta t-1}} \mathrm{P}(G_{t+\Delta t} | G_{:t+\Delta t-1})\, \mathrm{P}(G_{t+\Delta t-1} | G_{:t+\Delta t-2}) \cdots \mathrm{P}(G_{t+1} | G_{:t}) = \mathbb{E}_{\mathrm{P}(G_{t+1:t+\Delta t-1} | G_{:t})}\big[\mathrm{P}(G_{t+\Delta t} | G_{:t+\Delta t-1})\big] \simeq \mathrm{P}(G_{t+\Delta t} | \hat{G}_{t+1:t+\Delta t-1}, G_{:t}). \quad (11)$$
This inference procedure is intuitive. One starts by computing $\mathrm{P}(G_{t+1} | G_{:t})$ and drawing a sample $\hat{G}_{t+1}$ from it; with this sample, one can further compute $\mathrm{P}(G_{t+2} | \hat{G}_{t+1}, G_{:t})$. By iteratively computing the conditional distribution for $G_{t'}$ and drawing a sample from it, one can eventually estimate $\mathrm{P}(G_{t+\Delta t} | G_{:t})$ as $\mathrm{P}(G_{t+\Delta t} | \hat{G}_{t+1:t+\Delta t-1}, G_{:t})$. In practice, we could improve the estimation by drawing multiple graph samples at each step, but RE-NET already performs very well with a single sample, so we draw only one sample graph at each step for better efficiency. Based on the estimation of the conditional distribution, we can then predict events that are likely to occur in the future. We summarize the detailed inference procedure in Algorithm 1: we sample one graph at a time, first sampling $M$ subjects $s$ (line 3) and picking the top-$k$ triples (line 4), which yields a knowledge graph at time $t'$ (line 5). This sampling loop is sketched below.
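A pseudocode-level Python sketch of the loop follows; sample_subjects, score_triples, top_k, and joint_distribution are hypothetical helpers standing in for equations 5, 2, the top-$k$ selection, and equation 1, respectively:

def multi_step_inference(model, history, t, delta_t, M, k):
    # Roll the model forward one step at a time, conditioning each step
    # on the graph sampled at the previous step.
    for t_prime in range(t + 1, t + delta_t):
        subjects = model.sample_subjects(history, num=M)  # P(s | history), eq. 5
        triples = model.score_triples(subjects, history)  # P(s, r, o | history), eq. 2
        G_hat = top_k(triples, k)                         # keep the k most likely events
        history = history + [G_hat]                       # condition on the sample
    return model.joint_distribution(history)              # estimate of P(G_{t+dt} | G_{:t})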
Computing P(s_t, r_t, o_t|G_{t−m:t−1}) (equation 2) takes O(DLm), where D is the maximum degree of entities. Obtaining the probabilities of all possible triples given the sampled subjects takes O(M|R||O|DLm), where |R| is the total number of relations and |O| is the total number of entities. Thus, the time complexity for generating one graph is O(|E|Lm + M|R||O|(DLm + log k)), where k is the cutoff for picking the top-k triples. The time complexity is linear in the number of entities and relations, and in the number of sampled subjects s." }, { "heading": "4 EXPERIMENTS", "text": "Evaluating the quality of generated graphs is challenging, especially for knowledge graphs (Theis et al., 2015). Instead, we evaluate our proposed method on a link prediction task on temporal knowledge graphs. The task of predicting future links aims to predict unseen relationships with object entities given (s, r, ?, t) (or subject entities given (?, r, o, t)), based on the observed events in the TKG. Essentially, the task is a ranking problem over all the events (s, r, ?, t) (or (?, r, o, t)). RE-NET can approach this problem by computing the probability of each event in the distant future with the inference procedure of Algorithm 1, and then ranking all the events according to their probabilities.
We evaluate our proposed method on three benchmark tasks: (1) predicting future events on three event-based datasets; (2) predicting future facts on two knowledge graphs whose facts carry time spans; and (3) studying the parameter sensitivity and ablations of our proposed method. Section 4.1 summarizes the datasets, and the supplementary material contains additional information. In all these experiments, we perform predictions on time stamps that are not observed during training." }, { "heading": "4.1 EXPERIMENTAL SET-UP", "text": "Datasets. We use five datasets: 1) three event-based temporal knowledge graphs: ICEWS18 (Boschee et al., 2015), ICEWS14 (Trivedi et al., 2017), and GDELT (Leetaru & Schrodt, 2013); and 2) two knowledge graphs in which temporally associated facts have meta-facts of the form (s, r, o, [t_s, t_e]), where t_s is the starting time point and t_e is the ending time point: WIKI (Leblay & Chekol, 2018) and YAGO (Mahdisoltani et al., 2014). The details of the datasets are described in Section B.
Evaluation Setting and Metrics. For each dataset except ICEWS14, we split it into three subsets by time stamps, i.e., train (80%) / valid (10%) / test (10%), so that (times of train) < (times of valid) < (times of test). We report Mean Reciprocal Rank (MRR) and Hits@1/3/10, using both the filtered and the raw versions of the datasets. Following the definition of the filtered setting in (Bordes et al., 2013), during evaluation we remove from the list of corrupted triplets all the triplets that appear in the train, dev, or test set.
Competitors. We compare our approach to baselines for static graphs and temporal graphs:
(1) Static Methods. Ignoring the edge time stamps, we construct a static, cumulative graph from all the training events, and apply multi-relational graph representation learning methods including TransE (Bordes et al., 2013), DistMult (Yang et al., 2015), ComplEx (Trouillon et al., 2016), RGCN (Schlichtkrull et al., 2018), ConvE (Dettmers et al., 2018), and RotatE (Sun et al., 2019).
(2) Temporal Reasoning Methods.
We also compare against state-of-the-art temporal reasoning methods for knowledge graphs, including Know-Evolve3 (Trivedi et al., 2017), TA-DistMult (García-Durán et al., 2018), HyTE (Dasgupta et al., 2018), and TTransE (Leblay & Chekol, 2018). TA-DistMult, HyTE, and TTransE were designed for an interpolation task, i.e., making predictions at a time t such that t_1 < t < t_2, which differs from our setting; we give random values to embeddings that are not observed during training. To assess the effectiveness of our recurrent event encoder, we also combine the encoders of previous work with our MLP decoder as baselines: Know-Evolve, DyRep (Trivedi et al., 2019), and GCRN (Seo et al., 2017) combined with our MLP decoder, which we call Know-Evolve+MLP, DyRep+MLP, and R-GCRN+MLP. GCRN utilizes a Graph Convolutional Network (Kipf & Welling, 2016); we instead use RGCN (Schlichtkrull et al., 2018) to handle multi-relational graphs.
3*: We found a problematic formulation in Know-Evolve when dealing with concurrent events (Eq. (3) in its paper) and a flaw in its evaluation code. The performance dramatically drops after fixing the evaluation code. Details of these issues are discussed in Section F.
(3) Variants of RE-NET. To evaluate the importance of different components of RE-NET, we vary our base model in different ways: RE-NET w/o multi-step, which does not update the history during inference; RE-NET without the aggregator (RE-NET w/o agg.), which takes a zero vector in place of the aggregator's representation; RE-NET with a mean aggregator (RE-NET w. mean agg.); and RE-NET with an attentive aggregator (RE-NET w. attn agg.). RE-NET w. GT (s, r) denotes RE-NET with the ground-truth history of interactions during multi-step inference, so the model knows all the interactions before the test time; it does not update the history (or generate a graph), since it already has the ground-truth history. Experiment settings and implementation details of RE-NET and the baselines are described in Section C." }, { "heading": "4.2 PERFORMANCE COMPARISON ON TEMPORAL KNOWLEDGE GRAPHS.", "text": "In this section, we compare our proposed method with the baselines. The test results are obtained by averaging the metrics over the entire test set of each dataset.
Performances on Event-based TKGs. Table 1 summarizes the results on the three event-based datasets: ICEWS18, GDELT, and ICEWS14. Our proposed RE-NET outperforms all other baselines on these datasets. The static methods show good results but underperform our method since they do not consider temporal factors. RE-NET also outperforms all other temporal methods, which demonstrates the effectiveness of the proposed method. The modified Know-Evolve with our MLP decoder (Know-Evolve+MLP) achieves better performance than Know-Evolve, which shows the effectiveness of our MLP decoder, but there is still a large gap to our model. We notice that Know-Evolve and DyRep have a gradient exploding issue in their encoders, since their RNN-like structures keep accumulating embeddings over time; this issue degrades their performance. The Graph Convolutional Recurrent Network (GCRN) was not designed for dynamic, multi-relational graphs and does not directly support link prediction. We modified the model to work on dynamic graphs and on our problem setting by using RGCN instead of GCN, together with our MLP decoder. The modified model (R-GCRN+MLP) shows good performance but does not outperform our method.
R-GCRN+MLP has a structure similar to ours, in that it has a recurrent encoder and an RGCN aggregator, but it lacks multi-step inference, global information, and our more sophisticated modeling of the recurrent encoder. The results of these combined models suggest that our recurrent event encoder yields better link prediction performance. Importantly, none of these temporal methods are capable of multi-step inference, whereas RE-NET sequentially infers multi-step events.
Performances on Public KGs. The previous results demonstrated the effectiveness of RE-NET; here we compare the methods on the public KGs WIKI and YAGO. In Table 2, our proposed RE-NET outperforms all other baselines. On these datasets, the baselines show better results than on the event-based TKGs, due to the characteristics of the datasets: their facts are valid within a time span. Nevertheless, our proposed method consistently outperforms the static and temporal methods, which implies that RE-NET effectively infers new events using a powerful event encoder and aggregator, and provides accurate prediction results.
Performances of Prediction over Time. Next, we further study the performance of RE-NET over time. Fig. 3 shows the performance comparisons over different time stamps on the ICEWS18, GDELT, WIKI, and YAGO datasets with the filtered Hits@3 metric. RE-NET consistently outperforms the baseline methods at all time stamps. The performance of each method fluctuates since the test entities differ at each time step. We notice that as the time step increases, the difference between RE-NET and ConvE shrinks, as shown in Fig. 3. This is expected, since events further in the future are harder to predict. Furthermore, the decline in performance can be attributed to the generation of a long graph sequence: to estimate the joint probability distribution of all events in the distant future, RE-NET must generate a long sequence of graphs, and the quality of the generated graphs deteriorates as the sequence grows." }, { "heading": "4.3 ABLATION STUDY", "text": "In this section, we study the effect of variations of RE-NET. To evaluate the importance of its different components, we vary our base model in different ways and measure the change in performance on the link prediction task on the ICEWS18 dataset. We present the results in Tables 1 and 2 and Fig. 4.
Different Aggregators. We first analyze the effect of the aggregator. In Tables 1 and 2, we observe that RE-NET w/o agg. hurts model quality. This suggests that introducing an aggregator makes the model capable of dealing with concurrent events and improves prediction performance. Fig. 4a shows the performance of RE-NET with different aggregators. Among them, the RGCN aggregator outperforms the others; it has the advantage of exploring multi-relational neighbors, not only neighbors under the same relation. Also, RE-NET with an attentive aggregator performs better than RE-NET with a mean aggregator, which implies that giving different attention weights to each neighbor helps prediction.
Global Information. We further observe that representations of the global graph structure help the predictions. Fig. 4b shows the effectiveness of a representation of the global graph structure; we consider that global representations provide information beyond local graph structures.
Empirical Probabilities. Here, we study the role of P(s_t|G_{t−m:t−1}) and P(r_t|s, G_{t−m:t−1}), which we denote as P(s) and P(r) for brevity; P(s_t, r_t|G_{t−m:t−1}) (or simply P(s, r)) is then equivalent to P(s)P(r). In Fig. 4c, "emp. P(s)" denotes a model with the empirical P(s) (or Pe(s)), defined as Pe(s) = (# of s-related triples) / (total # of triples). Similarly, "emp. P(s, r)" denotes a model with Pe(s) and Pe(r), where Pe(r) = (# of r-related triples) / (total # of triples), so that Pe(s, r) = Pe(s)Pe(r). RE-NET uses trained P(s) and P(r). The results show that the trained P(s) and P(r) help RE-NET with multi-step prediction: using Pe(s) underperforms RE-NET, and using Pe(s, r) = Pe(s)Pe(r) shows the worst performance, which suggests that training each part of the probability in equation 2 gives better prediction performance.
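For reference, the empirical distributions Pe(s) and Pe(r) above can be computed directly from counts over the training events; a minimal sketch:

from collections import Counter

def empirical_probs(events):
    # events: list of (s, r, o, t) quadruples from the training set
    n = len(events)
    count_s = Counter(s for s, _, _, _ in events)
    count_r = Counter(r for _, r, _, _ in events)
    P_s = {s: c / n for s, c in count_s.items()}   # Pe(s)
    P_r = {r: c / n for r, c in count_r.items()}   # Pe(r)
    # Pe(s, r) = Pe(s) Pe(r), the product used by the emp. P(s, r) variant
    return P_s, P_r, (lambda s, r: P_s.get(s, 0.0) * P_r.get(r, 0.0))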
" }, { "heading": "4.4 SENSITIVITY ANALYSIS", "text": "In this section, we study the parameter sensitivity of RE-NET, including the length of history used by the event encoder and the cutoff position k for the events used to generate a graph. Furthermore, we study the number of layers of the RGCN aggregator. We report the performance change of RE-NET on the ICEWS18 dataset when varying these hyper-parameters in Fig. 5.
Length of Past History in the Recurrent Event Encoder. The recurrent event encoder takes the sequence of past interactions over up to m graph sequences or previous histories. Fig. 5a shows the performance for varying lengths of past history. When RE-NET uses longer histories, MRR increases; however, MRR stops improving once the history length reaches 5. This implies that longer histories do not make a large difference.
Cut-off Position k at Inference Time. To generate a graph at each time, we keep the top-k triples from the ranking results. Fig. 5b shows the performance for different cutoff positions k. When k = 0, RE-NET does not generate graphs for estimating P(G_{t+∆t}|G_{:t}) — i.e., it performs single-step predictions — and it shows the lowest result. As k increases, the performance improves and saturates after k = 500. We note that the conditional distribution P(G_{t+∆t}|G_{:t}) is better approximated by P(G_{t+∆t}|Ĝ_{t+1:t+∆t−1}, G_{:t}) when a larger cutoff position is used.
Layers of the RGCN Aggregator. We examine the number of layers in the RGCN aggregator, which corresponds to the depth a node reaches. Fig. 5c shows the performance for different numbers of RGCN layers. A 2-layer RGCN improves performance considerably over a 1-layer RGCN, since it aggregates more information; however, RE-NET with a 3-layer RGCN underperforms the 2-layer variant. We conjecture that the larger parameter space leads to overfitting." }, { "heading": "5 CONCLUSION", "text": "In this work, we studied sequential graph generation on temporal knowledge graphs. To tackle this problem, we proposed the Recurrent Event Network (RE-NET), which models temporal, multi-relational, and concurrent interactions between entities. A recurrent event encoder in RE-NET summarizes information from past event sequences, and a neighborhood aggregator collects the information of concurrent events. RE-NET defines the joint probability of all events, and is thus capable of inferring global structures in a sequential manner. We tested the proposed model on a link prediction task on temporal knowledge graphs. The experiments reveal that RE-NET outperforms all the static and temporal methods, and our extensive experiments show its strength.
Interesting future work includes modeling long-lasting events and performing inference over long-lasting graph structures." }, { "heading": "A RECURRENT EVENT ENCODER", "text": "We define a recurrent event encoder based on an RNN as follows:
h_t(s, r) = RNN(g(N_t(s)), H_t, h_{t−1}(s, r)).
We use Gated Recurrent Units (Cho et al., 2014) as the RNN:
a_t = [e_s : e_r : g(N_t(s)) : H_t]
z_t = σ(W_z a_t + U_z h_{t−1})
r_t = σ(W_r a_t + U_r h_{t−1})
h_t = (1 − z_t) ◦ h_{t−1} + z_t ◦ tanh(W_h a_t + U_h(r_t ◦ h_{t−1})),
where : denotes concatenation, σ(·) is an activation function, and ◦ is the Hadamard operator. The input is the concatenation of four vectors: the subject embedding, the relation embedding, the aggregated neighborhood representation, and the global information vector (e_s, e_r, g(N_t(s)), H_t). h_t(s) and H_t are defined similarly: for h_t(s), the input is the concatenation of the subject embedding, the aggregated neighborhood representation, and the global information vector (e_s, g(N_t(s)), H_t); for H_t, the input is the aggregation of the whole graph representation, g(G_t)." }, { "heading": "B DATASET", "text": "We use five datasets: 1) three event-based temporal knowledge graphs, and 2) two knowledge graphs in which temporally associated facts have meta-facts of the form (s, r, o, [t_s, t_e]), where t_s is the starting time point and t_e is the ending time point. The first group of graphs includes the Integrated Crisis Early Warning System (ICEWS18 (Boschee et al., 2015) and ICEWS14 (Trivedi et al., 2017)) and the Global Database of Events, Language, and Tone (GDELT) (Leetaru & Schrodt, 2013). The second group includes WIKI (Leblay & Chekol, 2018) and YAGO (Mahdisoltani et al., 2014).
ICEWS18 is collected from 1/1/2018 to 10/31/2018, ICEWS14 from 1/1/2014 to 12/31/2014, and GDELT from 1/1/2018 to 1/31/2018. ICEWS14 is taken from (Trivedi et al., 2017); we did not use their version of the GDELT dataset, since they did not release it.
The WIKI and YAGO datasets have temporally associated facts (s, r, o, [t_s, t_e]). We preprocess the datasets so that each fact is converted to {(s, r, o, t_s), (s, r, o, t_s + 1_t), ..., (s, r, o, t_e)}, where 1_t is a unit of time, ensuring that each fact yields a sequence of events. Noisy events from early years are removed (before 1786 for WIKI and 1830 for YAGO).
The difference between the two groups is that in the first group (event-based knowledge graphs) facts happen multiple times (even periodically), whereas in the second group facts last a long time but are unlikely to occur multiple times.
Dataset statistics are described in Table 3." }, { "heading": "C DETAILED EXPERIMENTAL SETTINGS", "text": "Model details of RE-NET. We use Gated Recurrent Units (Cho et al., 2014) as our recurrent event encoder, with the length of history set to m = 10, i.e., keeping the past 10 event sequences. If the events related to s are sparse, we look further back in time until we collect m previous time steps related to the entity s. We pretrain the parameters related to equations 5 and 8, due to the large size of the training graphs. We use a multi-relational aggregator to compute H_t: the aggregator provides hidden representations for each node, and we max-pool over all hidden representations to obtain H_t. At inference time, RE-NET performs multi-step prediction across the time stamps in the dev and test sets. At each time step, we sample M = 1000 subjects and keep the top k = 1000 triples to use as the generated graph. We set the size of entity/relation embeddings to 200; unobserved embeddings are randomly initialized. We use a two-layer RGCN in the RGCN aggregator with block dimension 2 × 2. The model is trained with the Adam optimizer (Kingma & Ba, 2014). We set λ_1 to 0.1, the learning rate to 0.001, and the weight decay rate to 0.00001. All experiments were run on a GeForce GTX 1080 Ti.
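A minimal PyTorch sketch of the recurrent event encoder of Appendix A: the GRU consumes the concatenation [e_s : e_r : g(N_t(s)) : H_t] at each of the m history steps. The helper signature and shapes are illustrative assumptions.

import torch
import torch.nn as nn

d = 200                                      # embedding size, as set above
gru = nn.GRU(input_size=4 * d, hidden_size=d, batch_first=True)

def encode_history(e_s, e_r, neighbor_aggs, globals_H):
    # neighbor_aggs, globals_H: the m past vectors g(N_t(s)) and H_t, each of size d
    steps = [torch.cat([e_s, e_r, g, H]) for g, H in zip(neighbor_aggs, globals_H)]
    seq = torch.stack(steps).unsqueeze(0)    # shape (1, m, 4d)
    _, h_m = gru(seq)                        # final hidden state of the GRU
    return h_m.squeeze()                     # h_t(s, r)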
Experimental Settings for Baseline Methods. In this section, we provide detailed settings for the baselines. We use implementations of TransE and DistMult4, and we implemented TTransE and TA-DistMult on top of the TransE and DistMult implementations, respectively. For TA-DistMult, we use temporal tokens with a vocabulary of year, month, and day on the ICEWS datasets, and a vocabulary of year, month, day, hour, and minute on the GDELT dataset. We use a margin-based ranking loss with the L1 norm for TransE, and a binary cross-entropy loss for DistMult and TA-DistMult. We validate the embedding size over 100 and 200. We set the batch size to 1024, the margin to 1.0, and the negative sampling ratio to 1, and use the Adam optimizer.
We use the implementation of ComplEx5 (Han et al., 2018). We validate the embedding size over 50, 100, and 200. The batch size is 100, the margin is 1.0, and the negative sampling ratio is 1. We use the Adagrad optimizer.
We use the implementation of HyTE6, with every timestamp as a hyperplane. The embedding size is set to 128, the negative sampling ratio to 5, and the margin to 1.0. We use time-agnostic negative sampling (TANS) for entity prediction, and the Adam optimizer.
We use the released code for ConvE7 and the implementation of RGCN by the Deep Graph Library8. Embedding sizes are 200 for both methods. We use 1-to-all negative sampling for ConvE and a negative sampling ratio of 10 for RGCN, and use the Adam optimizer for both methods. We use the released code for Know-Evolve9, in which we fix the issues described in Section F, and otherwise follow its default settings.
We use the released code for RotatE10. The hidden layer/embedding size is set to 100 and the batch size to 256; other values follow the best configuration supplied by the authors for the larger FB15K dataset. The authors report filtered metrics only, so we added an implementation of the raw setting.
4https://github.com/jimmywangheng/knowledge_representation_pytorch
5https://github.com/thunlp/OpenKE
6https://github.com/malllabiisc/HyTE
7https://github.com/TimDettmers/ConvE
8https://github.com/dmlc/dgl/tree/master/examples/pytorch/rgcn
9https://github.com/rstriv/Know-Evolve
10https://github.com/DeepGraphLearning/KnowledgeGraphEmbedding" }, { "heading": "D ADDITIONAL EXPERIMENTS", "text": "D.1 RESULTS WITH RAW METRICS.
Table 4 shows the performance comparison on ICEWS18, GDELT, and ICEWS14 under the raw setting. Our proposed RE-NET outperforms all other baselines. Fig. 6 shows the performance comparisons over different time stamps on the ICEWS18, GDELT, WIKI, and YAGO datasets with the filtered MRR metric; our proposed RE-NET consistently outperforms the baselines over time.
D.2 COMPARISONS WITH CONVE+RGCN.
To examine aggregation techniques in other baselines, we combine ConvE and R-GCN: we first run a 2-layer R-GCN over the training graph, so that each node obtains transformed representations, and then run ConvE on these transformed representations. As shown in Table 5, ConvE+R-GCN performs better than R-GCN but worse than ConvE, which implies that the aggregation technique is not helpful to ConvE.
However, the aggregator in our framework is a complementary and necessary component of the temporal part, and it shows superiority over the baselines.
D.3 COMPARISONS WITH DYNAMIC METHODS.
Here we compare our method with dynamic methods for homogeneous graphs: EvolveGCN-O (Pareja et al., 2019), DynGEM (Goyal et al., 2018), dyngraph2vecAE (Goyal et al., 2019), DynTriad (Zhou et al., 2018b), and tNodeEmbed (Singer et al., 2019). These methods were proposed to predict interactions at a future time on homogeneous graphs, while our proposed method targets multi-relational graphs (or knowledge graphs). Furthermore, those methods predict links at a single future time stamp, whereas our method predicts interactions at multiple future time stamps. We modified some of the methods to apply them to multi-relational graphs, as follows.
Experimental Settings. We adopt R-GCN (Schlichtkrull et al., 2018) for EvolveGCN-O and call the result EvolveRGCN. We convert the knowledge graphs into homogeneous graphs for dyngraph2vecAE; the idea of this method is to reconstruct an adjacency matrix with an auto-encoder and treat it as a future adjacency matrix, and if we kept relations, the relation-specific adjacency matrices would be extremely sparse, so the method would learn to reconstruct near-zero adjacency matrices. tNodeEmbed is a temporal method for homogeneous graphs; to use it on multi-relational graphs, we first train entity embeddings with DistMult, set these as the initial entity embeddings in tNodeEmbed, and feed the entity embeddings as input to tNodeEmbed's LSTM. We concatenate the output of the LSTM with relation embeddings to predict objects. We did not modify the other methods, since extending them is nontrivial.
Results. As shown in Table 5, RE-NET significantly outperforms these methods; indeed, none of the dynamic methods performs well. We conjecture that since these methods were not designed for multi-relational graphs, they cannot effectively handle multiple relations, which degrades their performance. Moreover, DynGEM is not suitable for our setting, since it predicts edges based on edges observed at future time stamps; in our setting, no edges at future time stamps are observed, so it performs poorly." }, { "heading": "E CASE STUDY", "text": "In this section, we study RE-NET's predictions, which depend on interaction histories. We categorize histories into three cases: (1) consistent interactions with an object, (2) a specific temporal pattern, and (3) irrelevant history. RE-NET can learn cases (1) and (2), so it achieves high performance on them. In the first case, static methods cannot predict the answer, since they do not see past interactions; RE-NET can, because the subject consistently interacts with the same object. The second case exhibits a specific temporal pattern on relations: (Arrest, o) → (Use force, o). Without knowing this pattern, a method might predict “Businessman” instead of “Men”; RE-NET is able to learn such temporal patterns and thus predicts the second case correctly. Lastly, the third case has a history that is irrelevant to the answer and hence unhelpful for prediction; RE-NET fails to predict the third case.
F IMPLEMENTATION ISSUES OF KNOW-EVOLVE
We found a problematic formulation in the Know-Evolve model and code.
The intensity function (equation 3 in (Trivedi et al., 2017)) is defined as λ_r^{s,r}(t|t̄) = f(g_r^{s,r}(t̄))(t − t̄), where g(·) is a score function, t is the current time, and t̄ is the most recent time point at which either the subject or the object entity was involved in an event. This intensity function is used at inference time to rank entity candidates. However, it does not account for concurrent events at the same time stamp, so t̄ becomes t after one event. For example, consider the events e1 = (s, r, o1, t1) and e2 = (s, r, o2, t1). After e1, t̄ becomes t (the subject s's most recent time point), and thus the value of the intensity function for e2 is 0. This is problematic at inference time: if t = t̄, the intensity function is 0 regardless of the entity candidate. At inference, all object candidates are ranked by the intensity function, but all their intensity scores are 0 when t = t̄, i.e., all candidates have the same score of 0. In this case, their code assigns the highest rank (first place) to all entities, including the ground-truth object. We therefore fixed their code for a fair comparison: we assign an average rank to entities that have the same score.
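The tie-handling fix amounts to giving each group of equally scored entities the mean of the rank positions they jointly occupy; a minimal sketch:

import numpy as np

def average_rank(scores, true_idx):
    s = scores[true_idx]
    higher = np.sum(scores > s)   # entities strictly ahead of the ground truth
    ties = np.sum(scores == s)    # tied entities, including the ground truth
    # the tied entities share ranks higher+1, ..., higher+ties; report the mean
    return higher + (ties + 1) / 2.0

scores = np.array([0.9, 0.5, 0.5, 0.5, 0.1])
print(average_rank(scores, true_idx=2))  # 3.0: the mean of tied ranks 2, 3, 4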
" }, { "heading": "G THEORETICAL ANALYSIS", "text": "Here we analyze the capacity of RE-NET to capture complex time-invariant local structure, as in (Hamilton et al., 2017), as well as emerging global community structure, as in (You et al., 2018).
Theorem 1. Let {G_t}_{t=1}^τ be the snapshots of a temporal knowledge graph after τ time steps. Let h_v^0 ∈ R^d, v ∈ {s_i} ∪ {o_i}, be the input feature representation of each entity node v for Algorithm 1. Suppose that there exists a fixed positive constant C ∈ R^+ such that ||h_v^0 − h_{v′}^0|| > C for all pairs of entities v, v′. Then for all ε > 0, there exists a parameter setting Θ for RE-NET such that, after K = 4 layers of aggregation,
|h_{v,τ}^K − c_{v,τ}| < ε, for all v ∈ V and all τ ∈ [T],
where h_{v,τ}^K are the output values generated by RE-NET and c_{v,τ} are the clustering coefficients of {G_i}_{i=1}^τ.
Observation 1. Consider a temporal graph under the stochastic block model described in Section G.2. Let h_v^0 ∈ R^d, v ∈ {s_i} ∪ {o_i}, be the input feature representation of each node for Algorithm 1. Suppose that a constant fraction p_c of the input representations can be linearly separated by a hyperplane, while the representations of the other nodes lie on the hyperplane. Then there exists a parameter setting of RE-NET that outputs the probability that a new node j connects to node i.
G.1 PROOF FOR THEOREM 1
Using the pooling aggregator of GraphSAGE, we can replicate its behavior exactly by setting the recurrent weight matrix of the RNN model to 0. In this case, we lose all time-dependency of RE-NET, and the representation model becomes time-invariant; however, RE-NET then has exactly the same model capacity as GraphSAGE.
G.2 ANALYSIS FOR OBSERVATION 1
Here we define the generation process of our temporal graph. Assume that the graph is generated by a stochastic block model with two communities: half of the nodes belong to community A and the other half to community B. Pairs of nodes within one community are connected with probability p_s, while other pairs are connected with probability p_d < p_s. The edges are introduced into the graph over time: at each of a sequence of time steps, a new node is introduced to a community and its edges are added to the graph.
The observation follows from three facts: (1) For each node v_j in the neighborhood N(v), using the pooling aggregator we can detect its community assignment s_j; we assign output +1 to community A and −1 to community B. (2) The probability of incorrectly identifying the community of a node decreases exponentially with the number of links. For example, let node v be in community A, and let n_t be the total number of nodes at time t. By Hoeffding's inequality, we have
P( Σ_{j : v_j ∈ N(v)_t} s_j < 0 ) < exp(−2(p_s − p_d)^2 |N(v)_t|).
(3) Given the correct community classification, the relation classifier is able to predict the probability of linking nodes.
Combining these three facts, RE-NET is able to infer the community structure of the node." } ]
2019
null
SP:6ecf7180d11e9eaf100d489c1c20123cde7a258d
[ "This paper introduces an approach to recover weights of ReLU neural networks by querying the network with specifically constructed inputs. The authors notice that the decision regions of such networks are piece-wise linear corresponding to activations of individual neurons. This allows to identify hyperplanes that constitute the decision boundary and find intersection points of the decision boundaries corresponding to neurons at different layers of the network. However, weights can be recovered only up to permutations of neurons in each layer and up to a constant scaling factor for each layer.", "This paper introduces a procedure for reconstructing the architecture and weights of deep ReLU network, given only the ability to query the network (observe network outputs for a sequence of inputs). The algorithm takes advantage of the piecewise linearity of ReLU networks and an analysis by [Hanin and Rolnick, 2019b] of the boundaries between linear regions as bent hyperplanes. The observation that a boundary bends only for other boundaries corresponding to neurons in earlier network layers leads to a recursive layer-by-layer procedure for recovering network parameters. Experiments show ability to recover both random networks and networks trained for a memorization task. The method is currently limited to ReLU networks and does not account for any parameter-sharing structure, such as that found in convolutional networks." ]
The output of a neural network depends on its parameters in a highly nonlinear way, and it is widely assumed that a network’s parameters cannot be identified from its outputs. Here, we show that in many cases it is possible to reconstruct the architecture, weights, and biases of a deep ReLU network given the ability to query the network. ReLU networks are piecewise linear and the boundaries between pieces correspond to inputs for which one of the ReLUs switches between inactive and active states. Thus, first-layer ReLUs can be identified (up to sign and scaling) based on the orientation of their associated hyperplanes. Later-layer ReLU boundaries bend when they cross earlier-layer boundaries and the extent of bending reveals the weights between them. Our algorithm uses this to identify the units in the network and weights connecting them (up to isomorphism). The fact that considerable parts of deep networks can be identified from their outputs has implications for security, neuroscience, and our understanding of neural networks.
[]
[ { "authors": [ "Frances S Chance", "Larry F Abbott", "Alex D Reyes" ], "title": "Gain modulation from background synaptic", "venue": "input. Neuron,", "year": 2002 }, { "authors": [ "Rong Ge", "Rohith Kuditipudi", "Zhize Li", "Xiang Wang" ], "title": "Learning two-layer neural networks with symmetric inputs", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Surbhi Goel", "Adam Klivans" ], "title": "Learning neural networks with two nonlinear layers in polynomial time", "venue": "Preprint arXiv:1709.06010,", "year": 2017 }, { "authors": [ "Surbhi Goel", "Varun Kanade", "Adam Klivans", "Justin Thaler" ], "title": "Reliably learning the ReLU in polynomial time", "venue": "In Conference on Learning Theory (COLT),", "year": 2017 }, { "authors": [ "Boris Hanin", "David Rolnick" ], "title": "How to start training: The effect of initialization and architecture", "venue": "In Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Boris Hanin", "David Rolnick" ], "title": "Complexity of linear regions in deep networks", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Boris Hanin", "David Rolnick" ], "title": "Deep ReLU networks have surprisingly few activation patterns", "venue": "In Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification", "venue": "In IEEE International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Conference on Computer Vision and Pattern Recognition", "year": 2016 }, { "authors": [ "P Heggelund" ], "title": "Receptive field organization of simple cells in cat striate cortex", "venue": "Experimental Brain Research,", "year": 1981 }, { "authors": [ "Matthew Jagielski", "Nicholas Carlini", "David Berthelot", "Alex Kurakin", "Nicolas Papernot" ], "title": "Highfidelity extraction of neural network models", "venue": "arXiv preprint arXiv:1909.01838,", "year": 2019 }, { "authors": [ "Konrad P Kording", "Christoph Kayser", "Wolfgang Einhauser", "Peter Konig" ], "title": "How are complex cell properties adapted to the statistics of natural stimuli", "venue": "Journal of neurophysiology,", "year": 2004 }, { "authors": [ "Smitha Milli", "Ludwig Schmidt", "Anca D Dragan", "Moritz Hardt" ], "title": "Model reconstruction from model explanations", "venue": "In Proceedings of the Conference on Fairness, Accountability, and Transparency,", "year": 2019 }, { "authors": [ "Seong Joon Oh", "Bernt Schiele", "Mario Fritz" ], "title": "Towards reverse-engineering black-box neural networks. In Explainable AI: Interpreting", "venue": "Explaining and Visualizing Deep Learning,", "year": 2019 }, { "authors": [ "Maithra Raghu", "Ben Poole", "Jon Kleinberg", "Surya Ganguli", "Jascha Sohl-Dickstein" ], "title": "On the expressive power of deep neural networks", "venue": "In International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "Matus Telgarsky" ], "title": "Representation benefits of deep feedforward networks", "venue": "Preprint arXiv:1509.08101,", "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "The behavior of deep neural networks is as complex as it is powerful. The relation of individual parameters to the network’s output is highly nonlinear and is generally unclear to an external observer. Consequently, it has been widely supposed in the field that it is impossible to recover the parameters of a network merely by observing its output on different inputs.\nBeyond informing our understanding of deep learning, going from function to parameters could have serious implications for security and privacy. In many deployed deep learning systems, the output is freely available, but the network used to generate that output is not disclosed. The ability to uncover a confidential network not only would make it available for public use but could even expose data used to train the network if such data could be reconstructed from the network’s weights.\nThis topic also has implications for the study of biological neural networks. Experimental neuroscientists can record some variables within the brain (e.g. the output of a complex cell in primary visual cortex) but not others (e.g. the pre-synaptic simple cells), and many biological neurons appear to be well modeled as the ReLU of a linear combination of their inputs (Chance et al., 2002). It would be highly useful if we could reverse engineer the internal components of a neural circuit based on recordings of the output and our choice of input stimuli.\nIn this work, we show that it is, in fact, possible in many cases to recover the structure and weights of an unknown ReLU network by querying it. Our method leverages the fact that a ReLU network is piecewise linear and transitions between linear pieces exactly when one of the ReLUs of the network transitions from its inactive to its active state. We attempt to identify the piecewise linear surfaces in input space where individual neurons transition from inactive to active. For neurons in the first layer, such boundaries are hyperplanes, for which the equations determine the weights and biases of the first layer (up to sign and scaling). For neurons in subsequent layers, the boundaries are “bent hyperplanes” that bend where they intersect boundaries associated with earlier layers. Measuring these intersections allows us to recover the weights between the corresponding neurons.\nOur major contributions are:\n• We identify how the architecture, weights, and biases of a network can be recovered from the arrangement of boundaries between linear regions in the network.\n• We implement this procedure and demonstrate its success in recovering trained and untrained ReLU networks. • We show that this algorithm “degrades gracefully,” providing partial weights even when full" }, { "heading": "2 RELATED WORK", "text": "Various works within the deep learning literature have considered the problem of learning a network given its output on inputs drawn (non-adaptively) from a given distribution. It is known that this problem is in general hard (Goel et al., 2017), though positive results have been found for certain specific choices of distribution in the case that the network has only one or two layers (Ge et al., 2019; Goel & Klivans, 2017). By contrast, we consider the problem of learning about a network of arbitrary depth, given the ability to issue queries at specified input points. In this work, we leverage the theory of linear regions within a ReLU network, an area that has been studied e.g. by Telgarsky (2015); Raghu et al. (2017); Hanin & Rolnick (2019a). 
Most recently, Hanin & Rolnick (2019b) considered the boundaries between linear regions as arrangements of “bent hyperplanes”. Milli et al. (2019); Jagielski et al. (2019) show the effectiveness of this strategy for networks with one hidden layer. For inference of other properties of unknown networks, see e.g. Oh et al. (2019).
Neuroscientists have long considered similar problems with biological neural networks, albeit armed with prior knowledge about network structure. For example, it is believed that complex cells in the primary visual cortex, which are often seen as translation-invariant edge detectors, obtain their invariance through what is effectively a two-layer neural network (Kording et al., 2004). A first layer is believed to extract edges, while a second layer essentially implements maxpooling. Heggelund (1981) performed physical experiments akin to our approach of identifying one ReLU at a time, by applying inputs that move individual neurons above their critical threshold one by one. Being able to solve such problems more generically would be useful for a range of neuroscience applications." }, { "heading": "3 PRELIMINARIES", "text": "" }, { "heading": "3.1 DEFINITIONS", "text": "In general, we will consider fully connected, feed-forward neural networks (multilayer perceptrons) with ReLU activations. Each such network N defines a function N(x) from input space R^{n_in} to output space R^{n_out}. We denote the layer widths of the network by n_in (input layer), n_1, n_2, ..., n_d, n_out (output layer). We use W^k to denote the weight matrix from layer (k − 1) to layer k, where layer 0 is the input; and b^k denotes the bias vector for layer k. Given a neuron z in the network, we use z(x) to denote its preactivation for input x ∈ R^{n_in}. Thus, for the jth neuron in layer k, we have
z^k_j(x) = Σ_{i=1}^{n_{k−1}} W^k_{ij} ReLU(z^{k−1}_i(x) + b^{k−1}_i).
For each neuron z, we will use B_z to denote the set of x for which z(x) = 0. In general1, B_z will be an (n_in − 1)-dimensional piecewise linear surface in R^{n_in} (see Figure 1, in which the input dimension is 2 and the B_z are simply lines). We call B_z the boundary associated with neuron z, and we say that B = ∪ B_z is the boundary of the overall network. We refer to the connected components of R^{n_in} \ B as regions. Throughout this paper, we will make the Linear Regions Assumption: the set of regions is the set of linear pieces of the piecewise linear function N(x). While this assumption has tacitly been made in the prior literature, it is noted in Hanin & Rolnick (2019b) that there are cases where it does not hold – for example, if an entire layer of the network is zeroed out for some inputs.
1More precisely, this holds for all but a measure zero set of networks, and any network for which this is not true may simply be perturbed slightly." }, { "heading": "3.2 ISOMORPHISMS OF NETWORKS", "text": "Before showing how to infer the parameters of a neural network, we must consider to what extent these parameters can be inferred unambiguously. Given a network N, there are a number of other networks N′ that define exactly the same function from input space to output space. We say that such networks are isomorphic to N. For multilayer perceptrons with ReLU activation, we consider the following network isomorphisms:
Permutation. The order of neurons in each layer of a network N does not affect the underlying function. Formally, let p_{k,σ}(N) be the network obtained from N by permuting layer k according to σ (along with the corresponding weight vectors and biases). Then, p_{k,σ}(N) is isomorphic to N for every layer k and permutation σ.
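The permutation isomorphism can be checked numerically: permuting the neurons of a hidden layer together with their incident weights and biases leaves N(x) unchanged. A minimal sketch (sizes arbitrary; the bias is applied before the ReLU, matching the definitions above):

import numpy as np

rng = np.random.default_rng(0)
relu = lambda v: np.maximum(v, 0.0)
W1, b1 = rng.normal(size=(4, 6)), rng.normal(size=6)  # layer 1: widths 4 -> 6
W2, b2 = rng.normal(size=(6, 3)), rng.normal(size=3)  # output layer: 6 -> 3

def net(x, W1, b1, W2, b2):
    return relu(x @ W1 + b1) @ W2 + b2

sigma = rng.permutation(6)         # permute the six neurons of layer 1
x = rng.normal(size=4)
out = net(x, W1, b1, W2, b2)
out_perm = net(x, W1[:, sigma], b1[sigma], W2[sigma, :], b2)
assert np.allclose(out, out_perm)  # p_{1,sigma}(N) computes the same function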
Scaling. Due to the ReLU’s equivariance under multiplication by positive scalars, it is possible to scale the incoming weights and bias of any neuron, while inversely scaling the outgoing weights, leaving the overall function unchanged. Formally, for z the ith neuron in layer k and c any positive constant, let s_{z,c}(N) be the network obtained from N by replacing W^k_{·i}, b^k_i, and W^{k+1}_{i·} by cW^k_{·i}, cb^k_i, and (1/c)W^{k+1}_{i·}, respectively. It is simple to prove that s_{z,c}(N) is isomorphic to N (see Appendix A). Thus, we can hope to recover a network only up to layer-wise permutation and neuron-wise scaling. Formally, p_{k,σ}(N) and s_{z,c}(N) are generators for a group of isomorphisms of N. (As we shall see in §5, some networks also possess additional isomorphisms.)" }, { "heading": "4 THE ALGORITHM", "text": "" }, { "heading": "4.1 INTUITION", "text": "Consider a network N and neuron z ∈ N, so that B_z is the boundary associated with neuron z. Recall that B_z is piecewise linear. We say that B_z bends at a point if B_z is nonlinear at that point (that is, if the point lies on the boundary of several regions). As observed in Hanin & Rolnick (2019b), B_z can bend only at points where it intersects boundaries B_{z′} for z′ in an earlier layer of the network. In general, the converse also holds; B_z bends wherever it intersects such a boundary B_{z′} (see Appendix A). Then, for any two boundaries B_z and B_{z′}, one of the following must hold: B_z bends at their intersection (in which case z occurs in a deeper layer of the network), B_{z′} bends (in which case z′ occurs in a deeper layer), or neither bends (in which case z and z′ occur in the same layer). It is not possible for both B_z and B_{z′} to bend at their intersection – unless that intersection is also contained in another boundary, which is vanishingly unlikely in general. Thus, the architecture of the network can be determined by evaluating the boundaries B_z and where they bend in relation to one another.
Moving beyond architecture, the weights and biases of the network can also be determined from the boundaries, one layer at a time. Boundaries for neurons in the first layer do not bend and are simply hyperplanes; the equations of these hyperplanes expose the weights from the input to the first layer (up to permutation, scaling, and sign). For each subsequent layer, the weight between neurons z and z′ can be determined by calculating how B_{z′} bends when it crosses B_z. The details of our algorithm below are intended to make these intuitions concrete and perform efficiently even when the input space is high-dimensional.
Algorithm 1 The first layer
  Initialize P1 = P2 = S1 = {}
  for t = 1, ..., L do
    Sample line segment ℓ
    P1 ← P1 ∪ PointsOnLine(ℓ)
  end for
  for p ∈ P1 do
    H = InferHyperplane(p)
    if TestHyperplane(H) then
      S1 ← S1 ∪ GetParams(H)
    else
      P2 ← P2 ∪ {p}
    end if
  end for
  return Parameters S1, unused sample points P2
Algorithm 2 Additional layers
  Input: Pk and S1, ..., Sk−1
  Initialize Sk = {}
  for p1 ∈ Pk−1 on boundary Bz do
    Initialize Az = {p1}, Lz = Hz = {}
    while Lz ⊉ (Layer k − 1) do
      Pick pi ∈ Az and direction v
      p′, Bz′ = ClosestBoundary(pi, v)
      if p′ on boundary Bz then
        Az ← Az ∪ {p′ + εv}
        Lz ← Lz ∪ {z′}
        Hz ← Hz ∪ {InferHyperplane(pi)}
      else
        Pk ← Pk ∪ {p1}; break
      end if
    end while
    if Lz ⊇ (Layer k − 1) then
      Sk ← GetParams(Hz)
    end if
  end for
  return Parameters Sk, unused sample points Pk+1" }, { "heading": "4.2 THE FIRST LAYER", "text": "We begin by identifying the first layer of the network N, for which we must infer the number of neurons, the weight matrix W^1, and the bias vector b^1. As noted above, for each z = z^1_i in the first layer, the boundary B_z is a hyperplane with equation W^1_{·i}x + b^1_i = 0. For each neuron z in a later layer of the network, the boundary B_z will, in general, bend and not be a (complete) hyperplane (see Appendix A). We may therefore find the number of neurons in layer 1 by counting the hyperplanes contained in the network’s boundary B, and we can infer weights and biases by determining the equations of these hyperplanes.
Boundary points along a line. Our algorithm is based upon the identification of points on the boundary B. One of our core algorithmic primitives is a subroutine PointsOnLine that takes as input a line segment ℓ ⊂ R^{n_in} and approximates the set ℓ ∩ B of boundary points along ℓ. The algorithm proceeds by leveraging the fact that boundary points subdivide ℓ into regions within which N(x) is linear. We maintain a list of points in order along ℓ (initialized to the endpoints and midpoint of ℓ) and iteratively perform the following operation: for each three consecutive points x1, x2, x3 on our list, we determine whether the vectors (N(x2) − N(x1))/||x2 − x1||_2 and (N(x3) − N(x2))/||x3 − x2||_2 are equal (to within computational error); if so, we remove the point x2 from our list, and otherwise we add the points (x1 + 2x2)/3 and (x3 + 2x2)/3 to our list.2 The points in the list converge by binary search to the set of discontinuities of the gradient ∂N(x)/∂x, which are our desired boundary points. Note that PointsOnLine is where we make use of our ability to query the network.
2These weighted averages speed up the search algorithm by biasing it towards points closer to the center of the segment, which is where we expect the most intersections given our choice of segments.
Sampling boundary points. In order to identify the boundaries B_z for z in layer 1, we begin by identifying a set of boundary points with at least one point on each B_z. A randomly chosen line segment through input space will intersect some of the B_z – indeed, if it is long enough, it will intersect any fixed hyperplane with probability 1. We sample line segments ℓ in R^{n_in} and run PointsOnLine on each. Many sampling distributions are possible, but in our implementation we choose to sample segments of fixed (long) length, tangent at their midpoints to a sphere of fixed (large) radius. This ensures that each of our sample lines remains far from the origin, where boundaries are in closer proximity and therefore more easily confused with one another (this will become useful in the next step). Let P1 be the overall set of boundary points identified on our sample line segments.
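A simplified sketch of PointsOnLine is given below. The paper’s version compares difference quotients of three consecutive points; this variant instead recursively tests whether N is affine on a sub-segment via a midpoint check, which locates the same boundary points for piecewise-linear networks (generically). It is an illustrative substitute, not the exact routine.

import numpy as np

def points_on_line(net, a, b, depth=25, tol=1e-7):
    # approximate the boundary points of a piecewise-linear net on segment [a, b]
    def is_affine(x1, x2):
        mid = (x1 + x2) / 2
        return np.allclose(net(mid), (net(x1) + net(x2)) / 2, atol=1e-6)
    def search(x1, x2, d):
        if is_affine(x1, x2):
            return []
        if d == 0 or np.linalg.norm(x2 - x1) < tol:
            return [(x1 + x2) / 2]       # a kink of N, localized to tolerance
        mid = (x1 + x2) / 2
        return search(x1, mid, d - 1) + search(mid, x2, d - 1)
    return search(np.asarray(a, float), np.asarray(b, float), depth)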
Inferring hyperplanes. We now proceed to fit a hyperplane to each of the boundary points we have just identified. For each p ∈ P1, there is a neuron z such that p lies on B_z. The boundary B_z is piecewise linear, with nonlinearities only along other boundaries, and with probability 1, p does not lie on any boundary besides B_z. Therefore, within a small enough neighborhood of p, B_z is given by a hyperplane, which we call the local hyperplane at p. If z is in layer 1, then B_z equals the local hyperplane. The subroutine InferHyperplane takes as input a point p on a boundary B_z and approximates the local hyperplane within which p lies. This algorithm proceeds by sampling many small line segments around p, running PointsOnLine to find their points of intersection with B_z, and performing a linear regression to find the equation of the hyperplane containing these points.
Testing hyperplanes. Not all of the hyperplanes we have identified are actually boundaries for neurons in layer 1, so we need to test which hyperplanes are contained in B in their entirety, and which are merely the local hyperplanes of boundaries that bend. The subroutine TestHyperplane takes as input a point p and a hyperplane H containing that point, and determines whether the entire hyperplane H is contained in the boundary B of the network. This algorithm proceeds by sampling points within H that lie far from p and applying PointsOnLine to a short line segment around each such point, to check whether these points all lie on B. Applying TestHyperplane to the hyperplanes inferred in the preceding step allows us to determine those B_z for which z is in layer 1.
From hyperplanes to parameters. Finally, we identify the first layer of N from the equations of the hyperplanes contained in B. The number of neurons in layer 1 is given simply by the number of distinct B_z which are hyperplanes. As we have observed, for z = z^1_i in layer 1, the hyperplane B_z is given by W^1_{·i}x + b^1_i = 0. We can thus determine W^1_{·i} and b^1_i up to multiplication by a constant. However, we have already observed that scaling W^1_{·i} and b^1_i by a positive constant (while inversely scaling W^2_{i·}) is a network isomorphism (§3.2). Therefore, we need only determine the true sign of the multiplicative constant, corresponding to determining which side of the hyperplane is zeroed out by the ReLU. This determination of sign is performed below in §4.3.
Sample complexity. We expect the number of queries necessary to obtain the weights and biases (up to sign) of the first layer to grow as O(n_in (Σ_i n_i) log n_1), which for constant-width networks is only slightly above the number of parameters being inferred. Assuming that the biases of the network are bounded above, each sufficiently long line has at least a constant probability of hitting a given hyperplane, suggesting that log n_1 lines are required, by a coupon-collector-style argument. Hanin & Rolnick (2019a) show that under natural assumptions, the number of boundary points intersecting a given line through input space grows linearly in the total number of neurons in the network. Finally, each boundary point on a line requires O(n_in) queries in order to fit a hyperplane.
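To make the hyperplane-fitting step above concrete: collect boundary points near p along short random segments through p, then fit w·x + b = 0. The paper describes a linear regression; the sketch below uses the smallest singular vector (total least squares) instead, which likewise recovers (w, b) up to sign and scale. It assumes p lies on exactly one boundary and reuses points_on_line from the sketch above.

import numpy as np

def infer_hyperplane(net, p, eps=1e-3, n_dirs=None, seed=0):
    n = len(p)
    n_dirs = n_dirs or 3 * n
    rng = np.random.default_rng(seed)
    pts = []
    for _ in range(n_dirs):
        v = rng.normal(size=n)
        v /= np.linalg.norm(v)
        # each short segment through p crosses the boundary near p
        pts += points_on_line(net, p - eps * v, p + eps * v)
    A = np.hstack([np.stack(pts), np.ones((len(pts), 1))])
    _, _, Vt = np.linalg.svd(A)   # smallest singular vector minimizes ||A u||
    w, b = Vt[-1][:n], Vt[-1][n]
    return w, b                   # local hyperplane, up to sign and scale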
" }, { "heading": "4.3 ADDITIONAL LAYERS", "text": "We now assume that the weights W^1, ..., W^{k−1} and biases b^1, ..., b^{k−1} have already been determined within the network N, with the exception of the sign choice for the weights and bias at each neuron of layer k − 1. We now show how it is possible to determine the weights W^k and biases b^k, along with the correct signs for W^{k−1} and b^{k−1}.
Closest boundary along a line. In this part of our algorithm, we will need the ability to move along a boundary to its intersection with another boundary. For this purpose, the subroutine ClosestBoundary will be useful. It takes as input a point p, a vector v, and the network parameters as determined up to layer k − 1, and outputs the smallest c > 0 such that q = p + cv lies on B_z for some z in layer at most k − 1. In order to compute c, we consider the region R within which p lies, which is associated with a certain pattern of active and inactive ReLUs. For each boundary B_z, we can calculate the hyperplane equation which would define B_z were it to intersect R, due to the fixed pattern of active and inactive neurons within R, and we can calculate the distance from p to this hyperplane. While not every boundary B_z intersects R, the closest boundary does, allowing us to find the desired c.
Unused boundary points. In order to identify the boundaries B_z for z in layer k, we wish to identify a set of boundary points with at least one point on each such boundary. In previous steps of our algorithm, a set P_{k−1} of boundary points was created, of which some were used in ascertaining the parameters of earlier layers. We now consider the subset P_k ⊂ P_{k−1} of points that were not found to belong to B_z for z in layers 1 through k − 1. These points have already had their local hyperplanes determined.
Exploring boundary intersections. Consider a point p1 ∈ P_k such that p1 ∈ B_z. Note that B_z will, in general, have nonlinearities where it intersects each B_{z′} for which z′ lies in an earlier layer than z. We explore these intersections, and in particular attempt to find a point of B_z ∩ B_{z′} for every z′ in layer k − 1. Given the local hyperplane H at p1, we pick a direction v along H and apply ClosestBoundary to calculate the closest point of intersection p′ with B_{z′} over all z′ already identified in the network. (Below we discuss how best to pick v.) Note that if z is in layer k, then p′ must lie on B_z as well as B_{z′}, while if z is in a later layer of the network, then there may exist unidentified neurons in layers below z, and therefore B_z may bend before meeting B_{z′}. We check whether p′ lies on B_z by applying PointsOnLine, and if so, apply InferHyperplane to calculate the local hyperplane of B_z on the other side of B_{z′} from p1. We select a representative point p2 on this local hyperplane. We repeat the process of exploration from the points p1, p2, ... until one of the following occurs: (i) a point of B_z ∩ B_{z′} has been identified for every z′ in layer k − 1 (this may be impossible; see §5); (ii) z is determined to be in a layer deeper than k (as a result of p′ not lying on B_z); or (iii) a maximum number of iterations has been reached.
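A sketch of ClosestBoundary, under the assumption that for the region R containing p we have already computed, from the region’s fixed activation pattern, the hyperplane (w_z, b_z) that each known neuron z would induce on R:

import numpy as np

def closest_boundary(p, v, induced_planes):
    # induced_planes: list of (z, w, b) with w.x + b = 0 the hyperplane that
    # would define B_z inside the region containing p
    best_c, best_z = np.inf, None
    for z, w, b in induced_planes:
        denom = w @ v
        if abs(denom) < 1e-12:
            continue                  # v is parallel to this hyperplane
        c = -(w @ p + b) / denom
        if 1e-9 < c < best_c:         # smallest positive crossing wins
            best_c, best_z = c, z
    return p + best_c * v, best_z     # crossing point q and its neuron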
How to explore. An important step in our algorithm is exploring points of B_z that lie on other boundaries. Given a set of points A_z = {p1, p2, ..., pm} on B_z, we briefly consider several methods for picking a point pi and a direction v along the local hyperplane at pi at which to apply ClosestBoundary. One approach is to pick a random point pi from those already identified and a random direction v; this has the advantage of simplicity. However, it is somewhat faster to consider for which z′ the intersection B_z ∩ B_{z′} has not yet been identified, and to attempt specifically to find points on these intersections. One approach is to pick a missing z′, identify the points pi for which the boundary B_{z′} lies on the boundary of the region containing pi, and solve a linear program to find v. Another approach is to pick a missing z′ and a point pi, calculate the hyperplane H which would describe B_{z′} under the activation pattern of pi, and choose v along the local hyperplane at pi such that the distance to H is minimized. This is the approach we take in our implementation, though more sophisticated approaches may exist and present an interesting avenue for further work.
From boundaries to parameters. We now identify layer k of N, along with the signs of the parameters of layer k − 1, by measuring the extent to which hyperplanes bend at their intersections. We are, in addition, able to identify the correct signs at layer k − 1 by solving an overconstrained system of constraints capturing the influence of neurons in layer k − 1 on different regions of input space. The following theorem formalizes the inductive step that allows us to go from what we know at layer k − 1 (weights and biases, up to scaling and sign) to the equivalent set of information for layer k, plus filling in the signs for layer k − 1. The proof is given in Appendix B.
Theorem. The following holds true for deep multilayer perceptrons N satisfying the Linear Regions Assumption (§3.1), excluding a set of networks of measure zero: Suppose that the weights and biases of N are known up through layer k − 1, with the exception that for each neuron in layer k − 1, the sign of the incoming weights and the bias is unknown. Suppose also that for each z in layer k, there exists an ordered set of points A_z = {p1, p2, ..., pm} such that: (i) each point lies on the boundary B_z, and in (the interior of) a distinct region with respect to the earlier-layer boundaries already known; (ii) each point (except for p1) has a precursor in an adjacent region; (iii) for each such pair of points, the local hyperplanes of B_z are known, as is the boundary B_{z′} dividing them (z′ in an earlier layer); and (iv) the set of such z′ includes all of layer k − 1. Then, it is possible to recover the weights and biases for layer k, with the exception that for each neuron, the sign of the incoming weights and the bias is unknown. It is also possible to recover the sign for every neuron in layer k − 1.
Note that even when assumption (iv) of the Theorem is violated, the algorithm recovers the weights corresponding to whichever boundaries are successfully crossed (as we verify empirically in §6).
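As a toy illustration of why boundary crossings expose weights (this is not the full sign-and-scale procedure of the Theorem): for a one-hidden-layer network, the jump in the output gradient across the boundary of a first-layer neuron z equals the outgoing weight of z times its incoming weight vector. The check below holds generically, i.e., provided no other neuron flips between the two probe points.

import numpy as np

relu = lambda v: np.maximum(v, 0.0)
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(3, 5)), rng.normal(size=5)
W2 = rng.normal(size=(5, 1))
net = lambda x: relu(x @ W1 + b1) @ W2          # scalar-output ReLU network

def grad(x, h=1e-5):                            # finite-difference gradient
    return np.array([(net(x + h * e) - net(x - h * e)) / (2 * h)
                     for e in np.eye(3)]).ravel()

i, w = 0, W1[:, 0]
x0 = rng.normal(size=3)
x0 = x0 - (x0 @ w + b1[i]) / (w @ w) * w        # project x0 onto B_z for z = z^1_i
jump = grad(x0 + 1e-3 * w) - grad(x0 - 1e-3 * w)
print(np.allclose(jump, W2[i, 0] * w, atol=1e-4))   # True: jump = W2_i * W1_{:,i}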
For simplicity in our algorithm, we have not considered the relatively rare cases where boundaries $B_z$ are disconnected or bounded. If $B_z$ is disconnected, then it may not be possible to find a connected path along it that intersects all boundaries arising from the preceding layer. In this case, it is simple to infer that two independently identified pieces of the boundary belong to the same neuron, and thereby to infer the full weight vector. Next, if $B_z$ is bounded for some $z$, then it is a closed $(d-1)$-dimensional surface within $d$-dimensional input space³. While our algorithm requires no modification in this case, bounded $B_z$ may be more difficult to find by intersection with randomly chosen lines, and a more principled sampling method may be helpful.

Our recursive approach. Our approach proceeds layer by layer, leveraging the fact that each boundary bends only at those boundaries corresponding to earlier neurons in the network. Our approach in the first layer is, however, distinct from (and simpler than) the algorithm for subsequent layers. One might wonder why, once the first $k-1$ layers have been identified, it is not possible simply to apply our first-layer algorithm to the $n_{k-1}$-dimensional “input space” arising from activations of layer $k-1$. Unfortunately, this is not possible in general, as this would require the ability to evaluate layer $k$ for arbitrary settings of layer $k-1$. ReLU networks are hard to invert, and therefore it is unclear how one could manufacture an input for a specified layer $k-1$ activation, even while knowing the parameters for the first $k-1$ layers.

Other architectures. While we have expressed our algorithm in terms of multilayer perceptrons with ReLU activation, it also extends to various other architectures of neural network. Other piecewise linear activation functions admit similar algorithms. For a network with convolutional layers, it is possible to use the same approach to infer the weights between neurons, with two caveats: (i) as we have stated it, the algorithm does not account for weight-sharing – the number of “neurons” in each layer is thus dependent on the input size, and is very large for reasonably sized images; (ii) pooling layers do affect the partition into activation regions, and indeed introduce new discontinuities into the gradient; our algorithm therefore does not apply. For skip connections as in ResNets (He et al., 2016), our algorithm holds with slight modification, which we defer until future work." }, { "heading": "6 EXPERIMENTS", "text": "We verified the success of our algorithm on both untrained and trained networks. In keeping with literature on ReLU network initialization (He et al., 2015; Hanin & Rolnick, 2018), networks were initialized using i.i.d. normal weights with variance 2/fan-in and i.i.d. normal biases with unit variance. We trained networks on either the MNIST dataset or a memorization task of 1000 “datapoints” of dimension 10 with coordinates drawn i.i.d. from a unit Gaussian and given arbitrary binary labels. Training was performed using the Adam optimizer and a cross-entropy loss applied to the softmax of the final layer, over 20 epochs for MNIST and 1000 epochs for the memorization task. The trained networks (when sufficiently large) were able to attain near-perfect accuracy. We observed that both the first-layer algorithm and additional-layer algorithm identified weights and biases to within extremely high accuracy (see Figures 3 and 4).
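As a concrete illustration of this initialization scheme, the following PyTorch sketch builds one such randomly initialized ReLU network; the layer widths are placeholder choices of our own, not the exact configurations used in the experiments.

```python
import torch
import torch.nn as nn

def init_mlp(widths):
    """MLP with i.i.d. N(0, 2/fan_in) weights and unit-variance normal biases,
    matching the initialization described above."""
    layers = []
    for d_in, d_out in zip(widths[:-1], widths[1:]):
        lin = nn.Linear(d_in, d_out)
        nn.init.normal_(lin.weight, mean=0.0, std=(2.0 / d_in) ** 0.5)
        nn.init.normal_(lin.bias, mean=0.0, std=1.0)
        layers += [lin, nn.ReLU()]
    return nn.Sequential(*layers[:-1])  # no ReLU after the final (output) layer

# e.g. a 10-dimensional memorization task with two hidden layers of width 20
net = init_mlp([10, 20, 20, 2])
x = torch.randn(4, 10)
logits = net(x)  # cross-entropy on the softmax of these, trained with Adam
```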
Even in cases where, for the additional-layer algorithm, a small fraction of neurons were not identified (see §5), the algorithm was able to correctly predict the remaining parameters.

³ For 2D input, such $B_z$ must be topological circles, but for higher dimensions, it is conceivable for them to be more complicated surfaces, such as toroidal polyhedra." }, { "heading": "7 CONCLUSION", "text": "In this work, we have shown that it is often possible to recover the architecture, weights, and biases of deep ReLU networks by repeated queries. We proceed by identifying the boundaries between linear regions of the network and the intersections of these boundaries. Our approach is theoretically justified and empirically validated on networks before and after training. Where the algorithm does not succeed in giving a complete set of weights, it is nonetheless able to give a partial set of weights, and incompleteness in some cases reflects unresolvable ambiguities about the network.

Our approach works for a wide variety of networks, though not all. It is limited to ReLU or otherwise piecewise linear activation functions, though we believe it possible that a continuous version of this method could potentially be developed in future work for use with sigmoidal activation. If used with convolutional layers, our method does not account for the symmetries of the network and therefore scales with the size of the input as well as the number of features, resulting in high computation. Finally, the method is not robust to defenses such as adding noise to the outputs of the network, and therefore can be thwarted by a network designer who seeks to hide their weights/architecture.

We believe that the methods we have introduced here will lead to considerable advances in identifying neural networks from their outputs, both in the context of deep learning and, more speculatively, in neuroscience. While the implementation we have demonstrated here is effective in small instances, we anticipate future work that optimizes these methods for efficient use with different architectures and at scale." }, { "heading": "A USEFUL LEMMATA", "text": "Lemma 1 (Isomorphism under scaling). Given an MLP $N$ with ReLU activation, the network $s_{z,c}(N)$ is isomorphic to $N$ for every neuron $z$ and constant $c > 0$.

Proof. Suppose that $z = z_i^k$ is the $i$th neuron in layer $k$. Then, for each neuron $z_j^{k+1}$ in layer $k+1$ of the network $N$, we have:

$$z_j^{k+1}(x) = \sum_{i=1}^{n_k} W_{ij}^k \,\mathrm{ReLU}(z_i^k(x) + b_i^k) = \sum_{i=1}^{n_k} W_{ij}^k \,\mathrm{ReLU}\left(\left(\sum_{h=1}^{n_{k-1}} W_{hi}^{k-1}\,\mathrm{ReLU}(z_h^{k-1}(x) + b_h^{k-1})\right) + b_i^k\right) \qquad (1)$$

By comparison, in network $s_{z,c}(N)$, we have:

$$z_j^{k+1}(x) = \sum_{i=1}^{n_k} \frac{1}{c}\, W_{ij}^k \,\mathrm{ReLU}(z_i^k(x) + c\, b_i^k) = \sum_{i=1}^{n_k} \frac{1}{c}\, W_{ij}^k \,\mathrm{ReLU}\left(\left(\sum_{h=1}^{n_{k-1}} c\, W_{hi}^{k-1}\,\mathrm{ReLU}(z_h^{k-1}(x) + b_h^{k-1})\right) + c\, b_i^k\right) = \sum_{i=1}^{n_k} W_{ij}^k \,\mathrm{ReLU}\left(\left(\sum_{h=1}^{n_{k-1}} W_{hi}^{k-1}\,\mathrm{ReLU}(z_h^{k-1}(x) + b_h^{k-1})\right) + b_i^k\right), \qquad (2)$$

where we used the property that $\mathrm{ReLU}(cx) = c\,\mathrm{ReLU}(x)$ for any $c > 0$.

As expressions (1) and (2) are equal, we conclude that $s_{z,c}(N)$ is isomorphic to $N$.

Lemma 2 (Bending hyperplanes). The set of networks $N$ with the following property has measure zero in the space of networks: there exist neurons $z_i^{k-1}$ and $z_j^k$ in consecutive layers such that the boundary $B_{z_j^k}$ intersects $B_{z_i^{k-1}}$ but does not bend at the intersection.

Proof. Observe that $B_{z_j^k}$ is defined by the equation:

$$0 = z_j^k(x) = \sum_{h=1}^{n_{k-1}} W_{hj}^k \,\mathrm{ReLU}(z_h^{k-1}(x) + b_h^{k-1}).$$

As it does not bend when it intersects $B_{z_i^{k-1}}$, the gradient of the RHS must remain unchanged when $\mathrm{ReLU}(z_i^{k-1}(x) + b_i^{k-1})$ flips between active and inactive.
Unless another neuron transitions simultaneously with $z_i^{k-1}$ (an event that occurs with measure zero), this can happen only if $W_{ij}^k = 0$, which itself is a measure zero event." }, { "heading": "B PROOF OF THEOREM", "text": "In this proof, we will show how the information we are given by the assumptions of the theorem is enough to recover the weights and biases for each neuron $z$ in layer $k$. We will proceed for each $z$ individually, progressively learning weights between $z$ and each of the neurons in the preceding layer (though for skip connections this procedure could also easily be generalized to learn weights from $z$ to earlier layers).

For each of the points $p_i \in A_z$, suppose that $H_i$ is the local hyperplane associated with $p_i$ on boundary $B_z$. The gradient $\frac{\partial z}{\partial x}(p_i)$ at $p_i$ is orthogonal to $H_i$, and we thus already know the direction of the gradient, but its magnitude is unknown to us. We will proceed in order through the points $p_1, p_2, \ldots, p_m$, with the goal of identifying $\frac{\partial z}{\partial x}(p_i)$ for each $p_i$, up to a single scaling factor, as this computation will end up giving us the incoming weights for $z$.

We begin with $p_1$ by assigning $\frac{\partial z}{\partial x}(p_1)$ arbitrarily to either one of the two unit vectors orthogonal to $H_1$. Due to scaling invariance (Lemma 1), the weights of $N$ can be rescaled without changing the function so that $\frac{\partial z}{\partial x}(p_i)$ is multiplied by any positive constant. Therefore, our arbitrary choice can be wrong at most in its sign, and we need not determine the sign at this stage. Now, suppose towards induction that we have identified $\frac{\partial z}{\partial x}(p_i)$ (up to sign) for $i = 1, \ldots, s-1$. We wish to identify $\frac{\partial z}{\partial x}(p_s)$.

By assumption (ii), there exists a precursor $p_r$ to $p_s$ such that $H_r$ and $H_s$ intersect on a boundary $B_{z'}$. Let $v_r = t_z \frac{\partial z}{\partial x}(p_r)$ be our estimate of $\frac{\partial z}{\partial x}(p_r)$, for unknown sign $t_z \in \{+1, -1\}$. Let $v_s$ be a unit normal vector to $H_s$, so that $v_s = c\, t_z \frac{\partial z}{\partial x}(p_s)$ for some unknown constant $c$. We pick the sign of $v_s$ so that it has the same orientation as $v_r$ with respect to the surface $B_z$, and thus $c > 0$. Finally, let $v = t_{z'} \frac{\partial z'}{\partial x}(p_r) = t_{z'} \frac{\partial z'}{\partial x}(p_s)$ be our estimate of the gradient of $z'$, where $t_{z'} \in \{+1, -1\}$ is also an unknown sign (recall that since $z'$ is in layer $k-1$ we know its gradient up to sign). We will use $v$ and $v_r$ to identify $v_s$.

Suppose that $z = z_j^k$ is the $j$th neuron in layer $k$ and that $z' = z_h^{k-1}$ is the $h$th neuron in layer $k-1$. Recall that

$$z(x) = z_j^k(x) = \sum_{i=1}^{n_{k-1}} W_{ij}^k \,\mathrm{ReLU}(z_i^{k-1}(x) + b_i^{k-1}). \qquad (3)$$

As $B_{z'}$ is the boundary between inputs for which $z' = z_h^{k-1}$ is active and inactive, $\mathrm{ReLU}(z_h^{k-1}(x) + b_h^{k-1})$ must equal zero either (Case 1) on $H_r$ or (Case 2) on $H_s$.

In Case 1, we have

$$\frac{\partial z}{\partial x}(p_s) - \frac{\partial z}{\partial x}(p_r) = W_{hj}^k \frac{\partial z'}{\partial x}(p_r),$$

or equivalently: $c\, t_z v_s - t_z v_r = W_{hj}^k t_{z'} v$, which gives us the equation: $c\, v_s - v_r = W_{hj}^k t_z t_{z'} v$. Since we know the vectors $v_s, v_r, v$, we are able to deduce the constant $c$.

A similar equation arises in Case 2:

$$v_r - c\, v_s = W_{hj}^k t_z t_{z'} v,$$

giving rise to the same value of $c$. We thus may complete our induction. In the process, observe that we have calculated a constant $W_{hj}^k t_z t_{z'} t'$, where the sign $t'$ is $+1$ in Case 1 and $-1$ in Case 2. Note that $t_{z'} t'$ can be calculated based on whether $v$ points towards $p_r$ or $p_s$. Therefore, we have obtained $W_{hj}^k t_z$, which is exactly the weight (up to $z$-dependent sign) that we wished to find.
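To make the key computation in this induction concrete, here is a small hedged NumPy check that, given the three known vectors $v_s$, $v_r$, $v$, recovers the constant $c$ (and the combined coefficient $\alpha = W_{hj}^k t_z t_{z'}$) by solving the overdetermined linear system $c\,v_s - \alpha v = v_r$ in the least-squares sense; the synthetic numbers are our own illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic ground truth: a hidden coefficient alpha = W^k_{hj} * t_z * t_{z'}
d = 5
v_s = rng.normal(size=d); v_s /= np.linalg.norm(v_s)
v = rng.normal(size=d);   v /= np.linalg.norm(v)
c_true, alpha_true = 1.7, -0.4
v_r = c_true * v_s - alpha_true * v          # Case 1 relation: c*v_s - v_r = alpha*v

# recover (c, alpha) from the known vectors by least squares
A = np.stack([v_s, -v], axis=1)              # d x 2 system: [v_s, -v] @ [c, alpha] = v_r
(c_hat, alpha_hat), *_ = np.linalg.lstsq(A, v_r, rcond=None)
assert np.allclose([c_hat, alpha_hat], [c_true, alpha_true])
```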
Once we have all weights incoming to $z$ (up to sign), it is simple to identify the bias for this neuron (up to sign) by calculating the equation of any known local hyperplane for $B_z$ and using the known weights and biases from earlier layers.

To complete the proof, we must now also calculate the correct signs $t_{z'}$ of the neurons in layer $k-1$. Pick some $z = z_j^k$ in layer $k$ and observe that to each point $p_s \in A_z$ there corresponds an equation, obtained by taking gradients in equation (3):

$$\frac{\partial z_j^k}{\partial x}(p_s) = \sum_{i=1}^{n_{k-1}} W_{ij}^k \,\mathbb{1}_{i,s}\, \frac{\partial z_i^{k-1}}{\partial x}(p_s),$$

where $\mathbb{1}_{i,s}$ equals 1 if $p_s$ is on the active side of $B_{z_i^{k-1}}$. We can substitute in our (sign-unknown) values for these various quantities:

$$t_z v_s = \sum_{i=1}^{n_{k-1}} W_{ij}^k \,\mathbb{1}_{i,s}\, t_{z_i^{k-1}} v_i.$$

Now, we may estimate $\mathbb{1}_{i,s}$ by a function $\mathbb{1}'_{i,s}$ that is 1 if $p_s$ and $v_i$ are on the same side of $B_{z_i^{k-1}}$. This estimate will be wrong exactly when $t_{z_i^{k-1}} = -1$. Thus, $\mathbb{1}_{i,s} = (1 + t_{z_i^{k-1}} \mathbb{1}'_{i,s})/2$, giving us the equation:

$$t_z v_s = \sum_{i=1}^{n_{k-1}} W_{ij}^k \,\frac{1 + t_{z_i^{k-1}} \mathbb{1}'_{i,s}}{2}\, t_{z_i^{k-1}} v_i = \frac{1}{2} \sum_{i=1}^{n_{k-1}} W_{ij}^k \,(t_{z_i^{k-1}} + \mathbb{1}'_{i,s})\, v_i.$$

All the terms of this equation are known, with the exception of $t_z$ and the $n_{k-1}$ variables $t_{z_i^{k-1}}$ – giving us a linear system in $n_{k-1} + 1$ variables. For a given $z_j^k$, there are $n_{k-1}$ different $p_s$ representing the intersections with $B_{z'}$ for each $z'$ in layer $k-1$; choosing these $p_s$ should in general give linearly independent constraints. Moreover, the equation is in fact a vector equality with dimension $n_{\mathrm{in}}$; hence, it is a highly overconstrained system, enabling us to identify the signs $t_{z_i^{k-1}}$ for each $z_i^{k-1}$. This completes the proof of the theorem." } ]
2,019
null
SP:cd75cf49f7e773f69c08c6489ec9f63f9a2de4ad
[ "Neural architecture search usually aims to find a single fixed architecture for the task of interest. The paper proposes to condition the architecture on the input instances by introducing a \"selection network\" that learns to retain a subset of branches in the architecture during each inference pass. The intuition is that easier instances require less compute (hence a shallower/sparser architecture) as compared to the more difficult ones. The authors show improved results on CIFAR-10 and ImageNet in terms of accuracy-latency trade-off over some handcrafted architectures and NAS baselines. The method resembles sparsely gated mixture of experts [1] at a high-level, but has been implemented in a way that better fits the context of architecture search (which is still technically interesting).", "This paper proposes an instance-aware dynamic network, ISBNet, for efficient image classification. The network consists of layers of cell structures with multiple branches within. During the inference, the network uses SelectionNet to compute a \"calibration weight matrix\", which essentially controls which branches within the cell should be used to compute the output. Similar to previous works in NAS, this paper uses Gumbel Softmax to compute the branch selection probability. The network is trained to minimize a loss function that considers both the accuracy and the inference cost. Training of the network is divided into two stages: First, a high temperature is used to ensure all the branches are sufficiently optimized, and at the second stage, the authors aneal the temperature. During the inference, branches are selected if their probability computed by Gumbel Softmax is larger than a certain threshold." ]
Few-shot classification may involve differentiating data that belongs to different levels of label granularity. Compounded by the fact that the number of available labeled examples is scarce in the novel classification set, relying solely on the loss function to implicitly guide the classifier to separate data based on its label might not be enough; a few-shot classifier needs to be strongly biased to perform well. In this paper, we propose a model that incorporates a simple inductive bias: focusing on differences by building a dissimilar set of class representations. The model treats a class representation as a vector and removes the component that is shared among closely related class representatives. It does so through the combination of learned attention and vector orthogonalization. Our model works well on our newly introduced dataset, CIFAR-Hard, which contains different levels of label granularity. It also substantially improves performance on the fine-grained classification dataset CUB, while staying competitive on standard benchmarks such as mini-Imagenet, Omniglot, and a few-shot dataset derived from CIFAR.
[]
[ { "authors": [ "Bowen Baker", "Otkrist Gupta", "Ramesh Raskar", "Nikhil Naik" ], "title": "Accelerating neural architecture search using performance prediction", "venue": "arXiv preprint arXiv:1705.10823,", "year": 2017 }, { "authors": [ "Gabriel Bender", "Pieter-Jan Kindermans", "Barret Zoph", "Vijay Vasudevan", "Quoc Le" ], "title": "Understanding and simplifying one-shot architecture search", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Andrew Brock", "Theodore Lim", "James M Ritchie", "Nick Weston" ], "title": "Smash: one-shot model architecture search through hypernetworks", "venue": "arXiv preprint arXiv:1708.05344,", "year": 2017 }, { "authors": [ "Han Cai", "Ligeng Zhu", "Song Han" ], "title": "Proxylessnas: Direct neural architecture search on target task and hardware", "venue": "arXiv preprint arXiv:1812.00332,", "year": 2018 }, { "authors": [ "Ekin D Cubuk", "Barret Zoph", "Dandelion Mane", "Vijay Vasudevan", "Quoc V Le" ], "title": "Autoaugment: Learning augmentation policies from data", "venue": "arXiv preprint arXiv:1805.09501,", "year": 2018 }, { "authors": [ "Terrance DeVries", "Graham W Taylor" ], "title": "Improved regularization of convolutional neural networks with cutout", "venue": "arXiv preprint arXiv:1708.04552,", "year": 2017 }, { "authors": [ "Amir Gholami", "Kiseok Kwon", "Bichen Wu", "Zizheng Tai", "Xiangyu Yue", "Peter Jin", "Sicheng Zhao", "Kurt Keutzer" ], "title": "Squeezenext: Hardware-aware neural network design", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Andrew G Howard", "Menglong Zhu", "Bo Chen", "Dmitry Kalenichenko", "Weijun Wang", "Tobias Weyand", "Marco Andreetto", "Hartwig Adam" ], "title": "Mobilenets: Efficient convolutional neural networks for mobile vision applications", "venue": "arXiv preprint arXiv:1704.04861,", "year": 2017 }, { "authors": [ "Jie Hu", "Li Shen", "Gang Sun" ], "title": "Squeeze-and-excitation networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Gao Huang", "Danlu Chen", "Tianhong Li", "Felix Wu", "Laurens van der Maaten", "Kilian Q Weinberger" ], "title": "Multi-scale dense networks for resource efficient image classification", "venue": "arXiv preprint arXiv:1703.09844,", "year": 2017 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Forrest N Iandola", "Song Han", "Matthew W Moskewicz", "Khalid Ashraf", "William J Dally", "Kurt Keutzer" ], "title": "Squeezenet: Alexnet-level accuracy with 50x fewer parameters and¡ 0.5 mb model size", "venue": "arXiv preprint arXiv:1602.07360,", "year": 2016 }, { "authors": [ "Eric Jang", "Shixiang Gu", "Ben Poole" ], "title": "Categorical reparameterization with gumbel-softmax", "venue": "arXiv preprint arXiv:1611.01144,", "year": 2016 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ 
"Chenxi Liu", "Barret Zoph", "Maxim Neumann", "Jonathon Shlens", "Wei Hua", "Li-Jia Li", "Li Fei-Fei", "Alan Yuille", "Jonathan Huang", "Kevin Murphy" ], "title": "Progressive neural architecture search", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "Darts: Differentiable architecture search", "venue": "arXiv preprint arXiv:1806.09055,", "year": 2018 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Sgdr: Stochastic gradient descent with warm restarts", "venue": "arXiv preprint arXiv:1608.03983,", "year": 2016 }, { "authors": [ "Ningning Ma", "Xiangyu Zhang", "Hai-Tao Zheng", "Jian Sun" ], "title": "Shufflenet v2: Practical guidelines for efficient cnn architecture design", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Chris J Maddison", "Andriy Mnih", "Yee Whye Teh" ], "title": "The concrete distribution: A continuous relaxation of discrete random variables", "venue": "arXiv preprint arXiv:1611.00712,", "year": 2016 }, { "authors": [ "Alejandro Newell", "Kaiyu Yang", "Jia Deng" ], "title": "Stacked hourglass networks for human pose estimation", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Hieu Pham", "Melody Y Guan", "Barret Zoph", "Quoc V Le", "Jeff Dean" ], "title": "Efficient neural architecture search via parameter sharing", "venue": "arXiv preprint arXiv:1802.03268,", "year": 2018 }, { "authors": [ "Esteban Real", "Alok Aggarwal", "Yanping Huang", "Quoc V Le" ], "title": "Regularized evolution for image classifier architecture search", "venue": "arXiv preprint arXiv:1802.01548,", "year": 2018 }, { "authors": [ "Mark Sandler", "Andrew Howard", "Menglong Zhu", "Andrey Zhmoginov", "Liang-Chieh Chen" ], "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Dimitrios Stamoulis", "Ruizhou Ding", "Di Wang", "Dimitrios Lymberopoulos", "Bodhi Priyantha", "Jie Liu", "Diana Marculescu" ], "title": "Single-path nas: Designing hardware-efficient convnets in less than 4 hours", "venue": null, "year": 1904 }, { "authors": [ "Xin Wang", "Fisher Yu", "Zi-Yi Dou", "Trevor Darrell", "Joseph E Gonzalez" ], "title": "Skipnet: Learning dynamic routing in convolutional networks", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Ronald J Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine learning,", "year": 1992 }, { "authors": [ "Sanghyun Woo", "Jongchan Park", "Joon-Young Lee", "In So Kweon" ], "title": "Cbam: Convolutional block attention module", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Bichen Wu", "Xiaoliang Dai", "Peizhao Zhang", "Yanghan Wang", "Fei Sun", "Yiming Wu", "Yuandong Tian", "Peter Vajda", "Yangqing Jia", "Kurt Keutzer" ], "title": "Fbnet: Hardware-aware efficient convnet design via differentiable neural architecture search", "venue": "arXiv preprint arXiv:1812.03443,", "year": 2018 }, { "authors": [ "Saining Xie", "Alexander Kirillov", "Ross Girshick", "Kaiming He" ], "title": "Exploring randomly wired neural networks for image recognition", "venue": "arXiv preprint arXiv:1904.01569,", "year": 
2019 }, { "authors": [ "Sirui Xie", "Hehui Zheng", "Chunxiao Liu", "Liang Lin" ], "title": "Snas: stochastic neural architecture search", "venue": "arXiv preprint arXiv:1812.09926,", "year": 2018 }, { "authors": [ "Xiangyu Zhang", "Xinyu Zhou", "Mengxiao Lin", "Jian Sun" ], "title": "Shufflenet: An extremely efficient convolutional neural network for mobile devices", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Barret Zoph", "Quoc V Le" ], "title": "Neural architecture search with reinforcement learning", "venue": "arXiv preprint arXiv:1611.01578,", "year": 2016 }, { "authors": [ "Barret Zoph", "Vijay Vasudevan", "Jonathon Shlens", "Quoc V Le" ], "title": "Learning transferable architectures for scalable image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 } ]
[ { "heading": null, "text": "Recent years have witnessed growing interests in designing efficient neural networks and neural architecture search (NAS). Although remarkable efficiency and accuracy have been achieved, existing expert designed and NAS models neglect the fact that input instances are of varying complexity and thus different amounts of computation are required. Inference with a fixed model that processes all instances through the same transformations would incur computational resources unnecessarily. Customizing the model capacity in an instance-aware manner is required to alleviate such a problem. In this paper, we propose a novel Instanceaware Selective Branching Network-ISBNet to support efficient instance-level inference by selectively bypassing transformation branches of insignificant importance weight. These weights are dynamically determined by a lightweight hypernetwork SelectionNet and recalibrated by gumbel-softmax for sparse branch selection. Extensive experiments show that ISBNet achieves extremely efficient inference in terms of parameter size and FLOPs comparing to existing networks. For example, ISBNet takes only 8.70% parameters and 31.01% FLOPs of the efficient network MobileNetV2 with comparable accuracy on CIFAR-10." }, { "heading": "1 INTRODUCTION", "text": "Deep convolutional neural networks (CNNs) (He et al., 2016; Zoph et al., 2018) have revolutionized computer vision with increasingly larger and more sophisticated architectures. These model architectures have been designed and calibrated by domain experts with rich engineering experience. To achieve good inference results, these models typically comprise hundreds of layers and contain tens of millions of parameters and consequently, consume substantial amounts of computational resources for both training and inference. Recently, there has been a growing interest in efficient network design (Howard et al., 2017; Iandola et al., 2016; Zhang et al., 2018; Sandler et al., 2018) and neural architecture search (NAS) (Zoph et al., 2018; Real et al., 2018; Liu et al., 2018b), respectively with the objective of devising network architectures that are efficient during inference and automating the architecture design process.\nMany efficient architectures have indeed been designed in recent years. SqueezeNet (Iandola et al., 2016) and MobileNet (Howard et al., 2017) substantially reduce parameter size and computation cost in terms of FLOPs on mobile devices. More recent works such as MobileNetV2 (Sandler et al., 2018) and ShuffleNetV2 (Ma et al., 2018) further reduce the FLOPs. It is well recognized that devising these architectures is non-trivial and requires engineering expertise.\nAutomating the architecture design process via neural architecture search (NAS) has attracted increasing attention in recent years. Mainstream NAS algorithms (Zoph & Le, 2016; Zoph et al., 2018; Real et al., 2018) search for the network architecture iteratively. In each iteration, an architecture is proposed by a controller, and then trained and evaluated. The evaluation performance is in turn exploited to update the controller. This process is incredibly slow because both the controller and each derived architecture require training. For instance, the reinforcement learning (RL) based controller NASNet (Zoph et al., 2018) takes 1800 GPU days and the evolution algorithm based controller AmoebaNet (Real et al., 2018) incurs 3150 GPU days to obtain the best architecture. 
Many acceleration methods (Baker et al., 2017; Liu et al., 2018a; Bender et al., 2018; Pham et al., 2018) have been proposed to accelerate the search process, and more recent works (Liu et al., 2018b; Xie et al., 2018; Wu et al., 2018; Cai et al., 2018) remove the controller and instead optimize the architecture selection and parameters together with gradient-based optimization algorithms.

While both expert-designed and NAS-searched models have produced remarkable efficiency and prediction performance, they have neglected one critical issue that affects inference efficiency. The architectures of these models are fixed during inference time and thus not adaptive to the varying complexity of input instances. However, in real-world applications, only a small fraction of input instances require deep representations (Wang et al., 2018; Huang et al., 2017a). Consequently, expensive computational resources would be wasted if all instances were treated equally. Designing a model with sufficient representational power to cover the hard instances, and meanwhile a finer-grained control to provide just the necessary computation dynamically for instances of varying difficulty, is therefore essential.

In this paper, we propose ISBNet to address the aforementioned issue with its building block Cell as illustrated in Figure 1. Following the widely adopted strategy in NAS (Zoph et al., 2018; Pham et al., 2018; Liu et al., 2018b; Xie et al., 2018), the backbone network is a stack of L structurally identical cells, each receiving inputs from its two previous cells, and each cell contains N inter-connected computational Nodes. The architecture of ISBNet deviates from the conventional wisdom of NAS, which painstakingly searches for the connection topology and the corresponding transformation operation of each connection. In ISBNet, each node is instead simply connected to its prescribed preceding node(s) and each connection transforms via a candidate set of B operations (branches). To allow for instance-aware inference control at the branch level, we integrate L lightweight hypernetworks SelectionNets, one for each cell, to determine the importance weight of each branch. Gumbel-softmax (Jang et al., 2016; Maddison et al., 2016) is further introduced to recalibrate these weights, which enables efficient gradient-based optimization during training and, more importantly, leads to sparse branch selection during inference for efficiency.

The contributions of ISBNet can be summarized as follows:

• ISBNet is a general architecture framework combining advantages from both efficient network design and NAS, whose components are readily customizable.

• ISBNet is a novel architecture supporting the instance-level selective branching mechanism by introducing lightweight SelectionNets, which improves inference efficiency significantly by reducing redundant computation.

• ISBNet successfully integrates gumbel-softmax into the branch selection process, which enables direct gradient-descent optimization and is more tractable than RL-based methods.

• ISBNet achieves state-of-the-art inference efficiency in terms of parameter size and FLOPs and inherently supports applications requiring fine-grained instance-level control.

Our experiments show that ISBNet is extremely efficient during inference and successfully selects only vital branches on a per-input basis.
In particular, with a minor 1.07% accuracy decrease, ISBNet reduces the parameter size and FLOPs by 10x and 11.31x respectively compared to the NAS-searched high-performance architecture DARTS (Liu et al., 2018b). Furthermore, with a tiny model of 0.57M parameters, ISBNet achieves much better accuracy while using only 8.03% and 30.60% of the inference-time parameter size and FLOPs compared to the expert-designed efficient network ShuffleNetV2 1.5x (Ma et al., 2018). We also conduct ablation studies and visualize the branch selection process to understand the proposed architecture better. The main results and findings are summarized in Sec 4.2 and Sec 4.3." }, { "heading": "2 RELATED WORK", "text": "Efficient Network Design. Designing resource-aware networks (Iandola et al., 2016; Gholami et al., 2018; Ma et al., 2018; Sandler et al., 2018) has attracted a great deal of attention in recent years. These works mainly focus on reducing parameter size and inference FLOPs. For instance, SqueezeNet (Iandola et al., 2016) reduces parameters and computation with the fire module; MobileNetV2 (Sandler et al., 2018) utilizes depth-wise and point-wise convolution for more parameter-efficient convolutional neural networks; ShuffleNetV2 (Ma et al., 2018) proposes lightweight group convolution with channel shuffle to facilitate the information flow across channels. To make inference efficient, many of these transformations are introduced into the candidate operation set of ISBNet.

Many recent works explore conditional (Wang et al., 2018) and resource-constrained prediction (Huang et al., 2017a) for efficiency. SkipNet (Wang et al., 2018) introduces a gating hypernetwork to determine whether to bypass each residual layer (He et al., 2016) conditional on the current input instance. Compared with SkipNet, ISBNet provides more efficient and diversified branch selections for the backbone network, and the hypernetworks in ISBNet are optimized in an end-to-end training manner instead of via the generally less tractable policy gradient (Williams, 1992). MSDNet (Huang et al., 2017a) supports budgeted prediction within a prescribed computational resource constraint during inference by inserting multiple classifiers into a 2D multi-scale version of DenseNet (Huang et al., 2017b). By early-exiting into a classifier, MSDNet can provide approximate predictions with a minor accuracy decrease. Functionally, ISBNet also supports budgeted prediction by dynamically controlling the number of branches selected, and therefore the per-input inference cost.

Neural Architecture Search. Mainstream NAS (Zoph et al., 2018; Real et al., 2018) treats architecture search as a stand-alone process whose optimization is severed from candidate architecture optimization. Search algorithms such as RL-based NAS (Zoph et al., 2018) and evolution-based NAS (Real et al., 2018) obtain state-of-the-art architectures at an unprecedented amount of GPU-time search cost. Recently, many works have been proposed to accelerate the search pipeline, e.g., via performance prediction (Baker et al., 2017; Liu et al., 2018a), hypernetworks generating initialization weights (Brock et al., 2017), and weight sharing (Bender et al., 2018; Pham et al., 2018).
These approaches greatly alleviate the search inefficiency while the scalability issue remains unsolved.

A number of proposals (Liu et al., 2018b; Wu et al., 2018; Cai et al., 2018) instead integrate the architecture search process and architecture optimization into the same gradient-based optimization framework. In particular, DARTS (Liu et al., 2018b) relaxes the discrete search space to be continuous by introducing operation mixing weights for each connection and optimizes these weights directly with gradients back-propagated from the validation loss. Similarly, the discrete search space in SNAS (Xie et al., 2018) is modeled with sets of one-hot random variables for each connection, which is made differentiable by relaxing the discrete distribution with the continuous concrete distribution (Jang et al., 2016; Maddison et al., 2016). In terms of architecture optimization, ISBNet also relaxes the discrete branch selection to continuous importance weights optimized by gradient descent; but instead of direct optimization of the weights, SelectionNets are introduced to dynamically generate these weights, which is more effective and meanwhile brings about larger model capacity. Further, SelectionNets enable instance-level architecture customization rather than finding a fixed model." }, { "heading": "3 INSTANCE-AWARE SELECTIVE BRANCHING NETWORK", "text": "" }, { "heading": "3.1 THE BACKBONE NETWORK", "text": "The backbone network is constructed with a stack of $L$ cells, each of which is a directed acyclic graph consisting of an ordered sequence of $N$ intermediate nodes. As illustrated in Figure 1, $x_0^l$ and $x_1^l$ are the cell input nodes from the two preceding cells; each intermediate node $x_i^l$ ($i \geq 2$) of the $l$th cell forms a latent representation and receives $n$ input nodes¹ from its preceding nodes:

$$x_i^l = \sum_{j \in S_i^l} F_{j,i}(x_j^l), \qquad S_i^l \subset \{0, 1, \cdots, i-1\} \;\wedge\; |S_i^l| = n \qquad (1)$$

¹ $n$ can be larger than 2 for deeper and wider local representation. E.g., $n = i-1$ for each $x_i^l$ leads to dense connection, i.e., DenseNet (Huang et al., 2017b).

Thereby, each cell contains $C = n \cdot N$ connections in total. The connection passes information from node $x_j^l$ to $x_i^l$ after the aggregation of a candidate set of $B$ branches of transformation, inspired by widely-adopted transformations in NAS (Pham et al., 2018; Liu et al., 2018b; Xie et al., 2018) and efficient network design (Iandola et al., 2016; Sandler et al., 2018; Zhang et al., 2018):

$$F_{j,i}(x_j^l) = \sum_{b=1}^{B} w_b \cdot F_b(x_j^l) \qquad (2)$$

where $w_b$ represents the importance of the $b$th branch (operation) of the connection and is dynamically generated by the cell hypernetwork rather than being a fixed learned parameter as in existing NAS methods (Liu et al., 2018b; Xie et al., 2018). We shall introduce the hypernetwork in Section 3.2. Finally, the output of the cell $x_{out}^l$ is aggregated by concatenating the outputs from all the intermediate nodes. We shall use superscript $l$ and subscripts $c$ and $b$ to index the cell, connection, and branch respectively.

Recent work (Xie et al., 2019) reveals that architectures with randomly generated connections achieve surprisingly competitive results compared to the best NAS models, which is confirmed empirically in our experiments on smaller datasets. In this paper, we thus mainly focus on the branch transformation and selection and their impact on inference efficiency, instead of specifying a detailed connection topology.
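To illustrate Equation 2, a minimal PyTorch sketch of one branch-weighted connection follows; the specific branch modules and sizes are illustrative assumptions of ours, not the paper's exact configuration (plain convolutions stand in for separable convolutions).

```python
import torch
import torch.nn as nn

class Connection(nn.Module):
    """One connection x_j -> x_i: a weighted sum of B candidate branches (Eq. 2)."""
    def __init__(self, channels):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.MaxPool2d(3, stride=1, padding=1),         # 3x3 max-pooling
            nn.AvgPool2d(3, stride=1, padding=1),         # 3x3 avg-pooling
            nn.Identity(),                                # skip connection
            nn.Conv2d(channels, channels, 3, padding=1),  # stand-in for 3x3 sep-conv
            nn.Conv2d(channels, channels, 5, padding=2),  # stand-in for 5x5 sep-conv
        ])

    def forward(self, x, w):
        # w: (B,) importance weights for this connection, produced per input by
        # the cell's SelectionNet (Section 3.2), not fixed learned parameters
        return sum(w_b * branch(x) for w_b, branch in zip(w, self.branches))

conn = Connection(channels=16)
x = torch.randn(2, 16, 8, 8)
w = torch.softmax(torch.randn(5), dim=0)  # placeholder weights
out = conn(x, w)
```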
Under this architecture formulation framework, we can readily adjust the number of candidate branches $B$ and also the specific transformations before training, customizing model capacity and efficiency respectively, depending on the difficulty of the task and resource constraints in deployment.

3.2 SelectionNet FOR WEIGHT RECALIBRATION

To support instance-level inference control, we introduce $L$ lightweight hypernetworks SelectionNet, one for each cell. Each SelectionNet $\mathrm{SNet}^l$ receives the same input as the $l$th cell, specifically the two output nodes $x_{out}^{l-2}, x_{out}^{l-1}$ (i.e., $x_0^l, x_1^l$) from the preceding cells, and concurrently produces $C$ sets of recalibration weights, one for each connection of the cell:

$$W^l = \mathrm{SNet}^l(x_0^l, x_1^l) \qquad (3)$$

where $W^l \in \mathbb{R}^{C \times B}$ is the recalibration weight matrix for the $l$th cell. The SelectionNet $\mathrm{SNet}^l$ dynamically generates these weights with a pipeline of $m = 2$ convolutional blocks, a global average pooling, and finally an affine transformation. For the $m$ convolutional transformations, we adopt separable convolution (Sandler et al., 2018), which contains a point-wise convolution and a depth-wise convolution of stride 2 and kernel size $5 \times 5$. The stride reduces the parameter size and computation of $\mathrm{SNet}^l$, and the larger kernel size for the depth-wise convolution incurs negligible overhead while extracting features for the immediate weight generation with a larger local receptive field.

The recalibration weights given by the SelectionNet are reminiscent of the convolutional attention mechanism (Hu et al., 2018; Woo et al., 2018; Newell et al., 2016), where attention weights are determined dynamically by summarizing information of the immediate input and then exploited to recalibrate the relative importance of different input dimensions, e.g., channels in SENet (Hu et al., 2018). In ISBNet, the recalibration weights are introduced at the branch level. Particularly, each candidate operation of the connection is coupled with a rescaling weight.

The gumbel-softmax (Jang et al., 2016; Maddison et al., 2016) technique and the reparameterization trick (Kingma & Welling, 2013) are introduced to further recalibrate these weights generated by the SelectionNet, to enable efficient gradient-based optimization for the whole network during training and, more importantly, to ensure a sparse selection of important branches during inference. More specifically, each set of importance weights $W_c^l \in \mathbb{R}^B$ for the $c$th connection of the $l$th cell ($C_c^l$), after the following recalibration of the gumbel-softmax, follows the concrete distribution (Maddison et al., 2016) controlled by a temperature parameter $\tau$:

$$\tilde{w}_{c,b}^l = \frac{\exp\big((w_{c,b}^l + G_{c,b}^l)/\tau\big)}{\sum_{b'=1}^{B} \exp\big((w_{c,b'}^l + G_{c,b'}^l)/\tau\big)}, \quad \tau > 0 \qquad (4)$$

where $\tilde{w}_{c,b}^l$ is then directly used for branch recalibration as in Equation 2, and $G_{c,b}^l = -\log(-\log(U_{c,b}^l))$ is a gumbel random variable coupled with the $b$th branch, obtained by sampling $U_{c,b}^l$ from $\mathrm{Uniform}(0, 1)$ (Jang et al., 2016).
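Equation 4 is the standard Gumbel-Softmax relaxation; a minimal PyTorch sketch, with the weight-matrix shape assumed from the text, is:

```python
import torch

def recalibrate(w, tau):
    """Gumbel-softmax recalibration of importance weights (Eq. 4).

    w: (C, B) logits produced by a cell's SelectionNet; tau: temperature > 0.
    Returns (C, B) weights: near-uniform for large tau, and near-one-hot
    samples of softmax(w) as tau -> 0.
    """
    u = torch.rand_like(w).clamp_(1e-9, 1 - 1e-9)  # U ~ Uniform(0, 1)
    g = -torch.log(-torch.log(u))                  # Gumbel(0, 1) noise
    return torch.softmax((w + g) / tau, dim=-1)

w = torch.randn(8, 5)            # C = 8 connections, B = 5 branches
print(recalibrate(w, tau=3.0))   # dense, close to uniform (pre-training stage)
print(recalibrate(w, tau=0.5))   # sparse, close to one-hot (fine-tuning stage)
```

PyTorch's built-in torch.nn.functional.gumbel_softmax implements the same relaxation, up to the random sample drawn.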
The concrete distribution (Maddison et al., 2016) suggests that (1) $\tilde{w}_{c,b} = \frac{1}{B}$ as $\tau \to +\infty$, and, more importantly, (2):

$$p\Big(\lim_{\tau \to 0} \tilde{w}_{c,b}^l = 1\Big) = \exp(w_{c,b}^l) \Big/ \sum_{b'=1}^{B} \exp(w_{c,b'}^l) \qquad (5)$$

Therefore, a high temperature leads to uniform, dense branch selection, while a lower temperature tends to sparsely sample branches following a categorical distribution parameterized by $\mathrm{softmax}(W_c^l)$.

3.3 OPTIMIZATION AND INFERENCE FOR ISBNet

With the continuous relaxation of the gumbel-softmax (Jang et al., 2016; Maddison et al., 2016) and the reparameterization (Kingma & Welling, 2013), the branch selection process of the SelectionNets is made directly differentiable with respect to the weight $w_{c,b}^l$. In particular, the gradient $\frac{\partial L}{\partial \tilde{w}_{c,b}^l}$ backpropagated from the loss function $L$ to $\tilde{w}_{c,b}^l$ through the backbone network can be directly backpropagated to $w_{c,b}^l$ with low variance (Maddison et al., 2016), and further to the $l$th SelectionNet unimpededly. Therefore, the parameters of the whole network can be optimized in an end-to-end manner by gradient descent.

The temperature $\tau$ of Equation 4 regulates the sparsity of the branch selection. A relatively high temperature forces the weights to distribute more uniformly so that all the branches of each connection are efficiently trained, while a low temperature instead tends to sparsely sample one branch from the categorical distribution parameterized by the importance weights dynamically determined by the SelectionNets, thus supporting finer-grained instance-level inference control by bypassing unimportant branches. To leverage both characteristics, we propose a two-stage training scheme for ISBNet: (1) the first stage pretrains the whole network with a fixed, relatively high temperature till convergence; (2) the second stage fine-tunes the parameters with $\tau$ steadily annealing to a relatively low temperature. The first stage ensures that branches are sufficiently optimized before the instance-aware selection, and the fine-tuning in the second stage helps maintain the performance of ISBNet under sparse branch selection during inference.

To further promote inference efficiency and reduce redundancy, a regularization term is explicitly introduced in the fine-tuning stage, which takes into account the expectation of the resource consumption $R$ in the final loss function $L$ for correctly classified instances:

$$L = L_{CE} + \lambda_1 \|w\|_2^2 + \lambda_2 \mathbb{1}_{\hat{y}=y} \log \mathbb{E}[R] \approx L_{CE} + \lambda_1 \|w\|_2^2 + \lambda_2 \mathbb{1}_{\hat{y}=y} \log \sum_{l=1}^{L} \sum_{c=1}^{C} \sum_{b=1}^{B} \tilde{w}_{c,b}^l \cdot R(F_{c,b}^l(\cdot)) \qquad (6)$$

where $L_{CE}$ and $\lambda_1 \|w\|_2^2$ denote the cross-entropy loss and the weight decay term, $y$ is the ground-truth class label, $\hat{y}$ the prediction, $\lambda_2$ controls the regularization strength, and $R(\cdot)$ calculates the resource consumption of each operation $F_{c,b}^l(\cdot)$. The operation importance weight $\tilde{w}_{c,b}^l$ represents the probability of the corresponding branch $F_{c,b}^l$ being selected during inference, and therefore the regularization term $\mathbb{E}[R]$ corresponds to the expectation of the aggregate resource required for each input instance.

The resource regularizer is readily adjustable depending on deployment constraints, which may include parameter size, FLOPs, and memory access cost (MAC). In this work, we mainly focus on the inference time, specifically FLOPs, which can be calculated beforehand for each branch. $R(F_{c,b}^l(\cdot))$ is thus a constant here, which means that the regularizer $R$ is also directly differentiable with respect to $\tilde{w}_{c,b}^l$.
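As a hedged sketch of Equation 6 (with precomputed per-branch FLOPs and variable names of our own), the expected-FLOPs penalty can be implemented as follows; note that the per-instance indicator is approximated here by the batch fraction of correct predictions, a simplification for illustration.

```python
import torch
import torch.nn.functional as F

def isbnet_loss(logits, y, w_tilde, branch_flops, params, lam1=3e-4, lam2=0.5):
    """Cross-entropy + weight decay + expected-FLOPs penalty (sketch of Eq. 6).

    w_tilde: (L, C, B) recalibrated branch weights;
    branch_flops: (L, C, B) constant per-branch FLOPs, precomputed offline.
    """
    ce = F.cross_entropy(logits, y)
    wd = lam1 * sum((p ** 2).sum() for p in params)
    expected_flops = (w_tilde * branch_flops).sum()   # E[R] under selection probs
    # fraction of correctly classified instances, standing in for 1_{y_hat = y}
    correct = (logits.argmax(dim=-1) == y).float().mean().detach()
    return ce + wd + lam2 * correct * torch.log(expected_flops)

# toy shapes: L = 10 cells, C = 8 connections, B = 5 branches
logits, y = torch.randn(4, 10), torch.randint(0, 10, (4,))
w_tilde = torch.softmax(torch.randn(10, 8, 5), dim=-1)
flops = torch.rand(10, 8, 5) * 1e6
loss = isbnet_loss(logits, y, w_tilde, flops, params=[torch.randn(3, 3)])
```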
We denote ISBNet trained with regularization strength $\lambda_2$ as ISBNet-R-$\lambda_2$.

During inference, instance-level selective branching is achieved for each connection $C_c^l$ by selecting the branches with the top-$k$ largest recalibration weights whose aggregated weight $s_c^l$ just exceeds a threshold $T$. Denoting $\tilde{W}_c^l$ sorted in descending order as $\hat{W}_c^l$, then:

$$s_c^l = \min\Big\{ s_k : s_k = \sum_{b=1}^{k} \hat{w}_{c,b}^l \;\wedge\; s_k \geq T \Big\} \qquad (7)$$

After the selection, the recalibration weight $\tilde{w}_{c,b}^l$ of each selected branch is rescaled by $\frac{1}{s_c^l}$ to stabilize the scale of the representation. Consequently, the SelectionNet will select only the necessary branches² for each instance, depending on the input difficulty and meanwhile the FLOPs of each branch, i.e., trading off between $L_{CE}$ and $R$ in Equation 6. Furthermore, the resource consumption of each instance can be precisely regulated in a finer-grained manner by scheduling the threshold dynamically for each connection. In this paper, the same threshold is shared among all connections for simplicity, and ISBNet inference with threshold $t$ is denoted as ISBNet-T-$t$.

Under such an inference scheme, the backbone network comprises up to $(2^B - 1)^{L \cdot C}$ possible candidate subnets, corresponding to each unique branch selection of all $L \cdot C$ connections. For a small ISBNet of 10 cells, with 5 candidate operations and 8 connections per cell, there are $(2^5 - 1)^{8 \cdot 10} \approx 2 \cdot 10^{119}$ possible candidate architectures of different branch combinations, which is orders of magnitude larger than the search space of conventional NAS (Pham et al., 2018; Liu et al., 2018b; Xie et al., 2018; Cai et al., 2018; Stamoulis et al., 2019)." }, { "heading": "4 EXPERIMENTS", "text": "We now compare the performance of ISBNet with the best-performing expert-designed efficient networks and NAS architectures using the benchmark datasets CIFAR-10 and ImageNet. The experimental details are presented in Sec. 4.1; main results are reported in Sec. 4.2, followed by visualizations of the branch selection process of ISBNet in Sec. 4.3." }, { "heading": "4.1 EXPERIMENTAL SETUP", "text": "Dataset CIFAR-10 contains 50,000 training images and 10,000 test images of $32 \times 32$ pixels in 10 classes. We adopt the standard data pre-processing and augmentation pipeline (Liu et al., 2018b; Xie et al., 2018) and apply AutoAugment (Cubuk et al., 2018) and cutout (DeVries & Taylor, 2017) of length 16. ImageNet contains 1.2 million training and 50,000 validation images in 1000 classes. We adopt the standard augmentation scheme following (Liu et al., 2018b; Xie et al., 2018) and apply label smoothing of 0.1 and AutoAugment. Results are reported with $224 \times 224$ center crop.

Candidate Operation Set The following 5 candidate operations ($B = 5$) are adopted for demonstration and can be readily adjusted in deployment:

• $3 \times 3$ max-pooling
• $3 \times 3$ avg-pooling
• skip connection
• $3 \times 3$ separable-conv
• $5 \times 5$ separable-conv

In particular, separable-conv stands for two separable convolutions (Howard et al., 2017) of ReLU-Conv-Conv-BN. Skip connection allows for efficient representation forwarding; pooling layers are computationally lightweight with no parameters; and separable-conv dominates the parameter size and computation in each connection. The three types of operations support a trade-off between representation power and efficiency for the branch selection of each connection.

Temperature Annealing Scheme In the pre-training stage, the temperature $\tau$ is fixed to 3 till convergence.
In the fine-tuning stage, $\tau$ is initialized to 1.0 and is further annealed by a factor of $\exp(-0.0006) \approx 0.999$ every epoch, down to 0.5, for CIFAR-10. The temperature is 3 throughout for ImageNet.

² At least one branch will be selected for each connection.

Architecture Details For CIFAR-10, we evaluate two ISBNet architectures of different sizes for demonstration: (1) ISBNet(S), a small network with $L = 5$ cells and 15 initial channels; (2) ISBNet(M), a medium network with $L = 10$ cells and 20 initial channels. For ImageNet, we evaluate a medium network with $L = 10$ cells and 32 initial channels.

All the architectures contain $N = 4$ intermediate nodes in each cell, and we adopt a simple node connection strategy of connecting to the two preceding nodes (i.e., $x_{i-1}^l$ and $x_{i-2}^l$) for CIFAR-10, and connecting to the preceding node ($x_{i-1}^l$) and randomly $x_0^l$ or $x_1^l$ for ImageNet. Further, nodes directly connected to the input nodes are downsampled with stride 2 for the $\frac{L}{3}$-th and $\frac{2L}{3}$-th cells. An auxiliary classifier with weight 0.4 is connected to the output of the $\frac{2L}{3}$-th cell for extra regularization.

Optimization Details For CIFAR-10, we apply SGD with momentum 0.9 and weight decay $3 \cdot 10^{-4}$ for 1200 epochs for both training stages. The learning rate is initialized to 0.025 and 0.005 for the pre-training and fine-tuning stages respectively. We use batch size 256/128 for CIFAR-10 (ISBNet(S)/ISBNet(M)) and batch size 128 for ImageNet to fit the whole network into one Titan RTX. For ImageNet, we apply SGD with Nesterov momentum 0.9 and weight decay $3 \times 10^{-5}$ for 250 epochs. We adopt drop-connection and drop-branch, scheduled linearly to 0.1 and 0.7/0.5 respectively, for CIFAR-10/ImageNet. The learning rate is annealed to zero with one cycle of the cosine learning rate scheduler (Loshchilov & Hutter, 2016).

4.2 ISBNet PERFORMANCE EVALUATION

Overall Results and Discussion. Table 1 summarizes the overall performance on CIFAR-10 of ISBNet under different inference thresholds $T$ and resource constraint strengths $R$. In terms of training efficiency, ISBNet takes only 2.5 and 5.5 GPU training days for ISBNet(S) and ISBNet(M) respectively, without any architecture searching, which is up to three orders of magnitude less time than conventional evolution-based or RL-based NAS, thanks to the efficient network design and the end-to-end gradient-based optimization.

As for inference-time performance, ISBNet reduces a drastic amount of the parameter size and FLOPs compared to baseline networks. Specifically, with comparable accuracy, ISBNet(S)-R-0.5-T-0.8 takes only 0.20M parameters and 29.28M FLOPs on average during inference, which is only 8.03% and 30.60% of the efficient network ShuffleNetV2 1.5×; ISBNet(S)-R-0.0-T-0.8 achieves up to 10x and 11x parameter size and FLOPs reduction relative to DARTS with a 1.07% accuracy decrease. The drastic parameter size and FLOPs reduction demonstrates that the selective branching mechanism in ISBNet enables extremely efficient instance-level prediction. This is also corroborated by the significant reduction of the parameter size and FLOPs from training to inference of ISBNet, i.e., from 0.57M parameters and 84.65M FLOPs to 0.33M and 47.91M in ISBNet(S)-R-0.0-T-0.8.

Results in Table 1 also validate that the resource regularizer effectively regularizes the network for more efficient inference in terms of both parameter size and FLOPs, although only FLOPs is explicitly regularized. Specifically, the larger the regularization strength $\lambda$, the more efficient ISBNet is, at the cost of a minor accuracy decrease.
For instance, the inference parameter size of ISBNet(M)-T-0.8 is reduced from 1.02M, 0.89M to 0.66M, and FLOPs from 139.46M, 119.56M to 74.90M, for regularization strengths 0.0, 0.1, and 0.5 respectively.

The results show that a small ISBNet is able to achieve accuracy comparable to the best efficient and NAS-searched models, meanwhile with far fewer inference parameters and FLOPs. This raises questions about the necessity of the current laborious architecture search of NAS (Zoph et al., 2018; Real et al., 2018). In this paper, we propose a selective branching mechanism evocative of convolutional attention (Hu et al., 2018; Woo et al., 2018) via the introduction of the hypernetworks SelectionNet, which leads to larger model capacity and enables selective branching. With 19.30% more parameters and 8.54% more FLOPs, ISBNet(S) integrated with SelectionNets achieves noticeably higher accuracy, by 0.41%. Further trained with gumbel-softmax, SelectionNets enable the network to efficiently select necessary branches and customize its architecture on a per-input basis during inference. Gumbel-softmax is necessary for maintaining accuracy: with plain softmax-trained SelectionNets, ISBNet(S)-Softmax suffers from a catastrophic accuracy decrease (error rising from 4.37% to 15.22%), while achieving only limited parameter size and FLOPs reduction with an inference threshold of 0.6.

Accuracy-FLOPs Trade-off. Table 2 summarizes the performance of ISBNet under different thresholds on ImageNet. The results demonstrate that ISBNet achieves quite competitive results compared with expert-designed efficient networks and NAS-searched models, even with a simple connection scheme and a preconfigured candidate operation set. Further, with the selective branching of SelectionNet, one single ISBNet network supports efficiency-accuracy trade-offs by simply controlling the importance threshold. In particular, ISBNet reduces the parameter size by 18.37% and FLOPs by 18.14% with a threshold of 0.9 at a minor 0.2% accuracy decrease. With a threshold of 0.7, ISBNet achieves another 12.24% and 14.02% redundancy reduction respectively. This confirms that networks can support efficient instance-aware inference with the selective branching mechanism." }, { "heading": "4.3 VISUALIZATION OF SELECTIVE BRANCHING", "text": "Ratio of Selective Branching. We visualize in Figure 2 the average recalibration weight and branch selection ratio of representative cells in ISBNet(S)-R-0.0-T-0.8, which show, respectively, the ratio of each branch being selected during training and in the final model during inference. An obvious stratified pattern can be observed: one separable-convolution branch gradually dominates the connection in the first cell, while in subsequent cells the branch selection tends to be more uniform and diversified. This pattern demonstrates that features extracted in lower layers share similar branch transformations, where branch pruning can be performed to reduce the parameter size, while instance-aware efficient inference requires increasingly diversified branch selections ascending the layers. Further experiments show that the average number of branches selected in the last cell is 1.1, indicating that only a small number of branches are required for the inference of most instances.

Qualitative Difference between Instances.
Denoting instances for which the network makes confident predictions as easy instances and those for which it is uncertain as hard instances, we visualize the clustering of easy and hard instances in Figure 3 to help understand the selective branching mechanism. We find that the certainty of the prediction made by ISBNet depends mainly on the image quality. In general, easy instances are more salient (clear with high contrast) while hard instances are more inconspicuous (dark with low contrast). We also compute the accuracy and average FLOPs of each cluster. On average, easy instances achieve much higher classification accuracy with 11.2% fewer FLOPs compared with hard instances. This shows that computation could be greatly reduced without sacrificing accuracy by selectively bypassing unimportant branches for relatively easy instances." }, { "heading": "5 CONCLUSION", "text": "In this paper, we have proposed ISBNet, a novel network framework with the advantages of both efficient network design and neural architecture search. To achieve efficient instance-aware inference, a series of lightweight hypernetworks are introduced to cells of the backbone network to determine importance weights for selective branching. We have also integrated gumbel-softmax and the reparameterization trick into the branch selection process, which enables accessible and tractable gradient-based end-to-end training and, more importantly, extremely efficient inference. The inference efficiency is further enhanced with the resource-aware regularization. Extensive experiments and visualizations have been conducted, and the results validate the efficiency of instance-aware selective branching inference." } ]
2,019
ISBNET: INSTANCE-AWARE SELECTIVE BRANCHING NETWORKS
SP:b2a151ab2ee385b50881be2865f6503902f2fcc9
[ "This paper introduces a model that learns a slot-based representation, along with a transition model to predict the evolution of these representations in a sparse fashion, all in a fully unsupervised way. This is done by leveraging a self-attention mechanism to decide which slots should be updated in a given transition, leaving the others untouched. The model learns to encode in a slot-wise and is trained on single step transitions.", "This paper proposes to use a ‘slot-based’ (factored) representation of a ‘scene’ s.t. a forward model learned over some observed transitions only requires sparse updates to the current representation. The results show that jointly learning the forward model and the scene representation encourages meaningful ‘entities’ to emerge in each slot. Additionally, the paper argues that this representation allows for better generalization and can also guide exploration by rewarding actions that change multiple entities" ]
Learning an agent that interacts with objects is ubiquitous in many RL tasks. In most of them, the agent's actions have sparse effects: only a small subset of objects in the visual scene will be affected by the action taken. We introduce SPECTRA, a model for learning slot-structured transitions from raw visual observations that embodies this sparsity assumption. Our model is composed of a perception module that decomposes the visual scene into a set of latent object representations (i.e., slot-structured) and a transition module that predicts the next latent set slot-wise and in a sparse way. We show that learning a perception module jointly with a sparse slot-structured transition model not only biases the model towards more entity-centric perceptual groupings but also enables an intrinsic exploration strategy that aims at maximizing the number of objects changed in the agent's trajectory.
[]
[ { "authors": [ "Rene Baillargeon", "Elizabeth Spelke", "Stan Wasserman" ], "title": "Object permanence in five-month-old infants", "venue": "Cognition, 20:191–208,", "year": 1985 }, { "authors": [ "Yoshua Bengio" ], "title": "The consciousness prior", "venue": "arXiv preprint arXiv:1709.08568,", "year": 2017 }, { "authors": [ "Lars Buesing", "Theophane Weber", "Sebastien Racaniere", "S.M. Ali Eslami", "Danilo Rezende", "David P. Reichert", "Fabio Viola", "Frederic Besse", "Karol Gregor", "Demis Hassabis", "Daan Wierstra" ], "title": "Learning and querying fast generative models for reinforcement learning, 2018", "venue": null, "year": 2018 }, { "authors": [ "Christopher P. Burgess", "Loic Matthey", "Nicholas Watters", "Rishabh Kabra", "Irina Higgins", "Matt Botvinick", "Alexander Lerchner" ], "title": "Monet: Unsupervised scene decomposition and representation, 2019", "venue": null, "year": 2019 }, { "authors": [ "Kyunghyun Cho", "Bart van Merrienboer", "Çaglar Gülçehre", "Fethi Bougares", "Holger Schwenk", "Yoshua Bengio" ], "title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation", "venue": "CoRR, abs/1406.1078,", "year": 2014 }, { "authors": [ "Carlos Diuk", "Andre Cohen", "Michael L. Littman" ], "title": "An object-oriented representation for efficient reinforcement learning", "venue": "In Proceedings of the 25th International Conference on Machine Learning,", "year": 2008 }, { "authors": [ "S.M. Ali Eslami", "Nicolas Heess", "Theophane Weber", "Yuval Tassa", "David Szepesvari", "Koray Kavukcuoglu", "Geoffrey E. Hinton" ], "title": "Attend, infer, repeat: Fast scene understanding with generative models, 2016", "venue": null, "year": 2016 }, { "authors": [ "Klaus Greff", "Sjoerd van Steenkiste", "Jrgen Schmidhuber" ], "title": "Neural expectation maximization, 2017", "venue": null, "year": 2017 }, { "authors": [ "Klaus Greff", "Raphal Lopez Kaufman", "Rishabh Kabra", "Nick Watters", "Chris Burgess", "Daniel Zoran", "Loic Matthey", "Matthew Botvinick", "Alexander Lerchner" ], "title": "Multi-object representation learning with iterative variational inference, 2019", "venue": null, "year": 2019 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural Comput.,", "year": 1997 }, { "authors": [ "Alexander S Klyubin", "Daniel Polani", "Chrystopher L Nehaniv" ], "title": "All else being equal be empowered", "venue": "In European Conference on Artificial Life,", "year": 2005 }, { "authors": [ "Adam R. Kosiorek", "Hyunjik Kim", "Ingmar Posner", "Yee Whye Teh" ], "title": "Sequential attend, infer, repeat: Generative modelling of moving objects, 2018", "venue": null, "year": 2018 }, { "authors": [ "Navneet Madhu Kumar" ], "title": "Empowerment-driven exploration using mutual information estimation", "venue": "arXiv preprint arXiv:1810.05533,", "year": 2018 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A. Rusu", "Joel Veness", "Marc G. Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K. Fidjeland", "Georg Ostrovski", "Stig Petersen", "Charles Beattie", "Amir Sadik", "Ioannis Antonoglou", "Helen King", "Dharshan Kumaran", "Daan Wierstra", "Shane Legg", "Demis Hassabis" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Adam Santoro", "David Raposo", "David G.T. Barrett", "Mateusz Malinowski", "Razvan Pascanu", "Peter W. Battaglia", "Timothy P. 
Lillicrap" ], "title": "A simple neural network module for relational reasoning", "venue": "CoRR, abs/1706.01427,", "year": 2017 }, { "authors": [ "Elizabeth S. Spelke" ], "title": "Where perceiving ends and thinking begins: The apprehension of objects in infancy", "venue": null, "year": 2013 }, { "authors": [ "Sjoerd van Steenkiste", "Michael Chang", "Klaus Greff", "Jrgen Schmidhuber" ], "title": "Relational neural expectation maximization: Unsupervised discovery of objects and their interactions, 2018", "venue": null, "year": 2018 }, { "authors": [ "Sjoerd van Steenkiste", "Klaus Greff", "Jürgen Schmidhuber" ], "title": "A perspective on objects and systematic generalization in model-based RL", "venue": "CoRR, abs/1906.01035,", "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need, 2017", "venue": null, "year": 2017 }, { "authors": [ "Nicholas Watters", "Loic Matthey", "Matko Bosnjak", "Christopher P. Burgess", "Alexander Lerchner" ], "title": "Cobra: Data-efficient model-based rl through unsupervised object discovery and curiosity-driven exploration, 2019", "venue": null, "year": 2019 }, { "authors": [ "Thophane Weber", "Sbastien Racanire", "David P. Reichert", "Lars Buesing", "Arthur Guez", "Danilo Jimenez Rezende", "Adria Puigdomnech Badia", "Oriol Vinyals", "Nicolas Heess", "Yujia Li", "Razvan Pascanu", "Peter Battaglia", "Demis Hassabis", "David Silver", "Daan Wierstra" ], "title": "Imagination-augmented agents for deep reinforcement learning, 2017", "venue": null, "year": 2017 }, { "authors": [ "Vinicius Zambaldi", "David Raposo", "Adam Santoro", "Victor Bapst", "Yujia Li", "Igor Babuschkin", "Karl Tuyls", "David Reichert", "Timothy Lillicrap", "Edward Lockhart", "Murray Shanahan", "Victoria Langston", "Razvan Pascanu", "Matthew Botvinick", "Oriol Vinyals", "Peter Battaglia" ], "title": "Relational deep reinforcement learning, 2018", "venue": null, "year": 2018 }, { "authors": [ "Zambaldi" ], "title": "The 3 blocks are 1-layer MLP that output key, query and value vectors", "venue": null, "year": 2018 }, { "authors": [ "Santoro" ], "title": "Under review as a conference paper at ICLR 2020 encoder similar to what is done by Zambaldi et al", "venue": null, "year": 2017 } ]
[ { "heading": null, "text": "Learning an agent that interacts with objects is ubiquituous in many RL tasks. In most of them the agent’s actions have sparse effects : only a small subset of objects in the visual scene will be affected by the action taken. We introduce SPECTRA, a model for learning slot-structured transitions from raw visual observations that embodies this sparsity assumption. Our model is composed of a perception module that decomposes the visual scene into a set of latent objects representations (i.e. slot-structured) and a transition module that predicts the next latent set slot-wise and in a sparse way. We show that learning a perception module jointly with a sparse slot-structured transition model not only biases the model towards more entity-centric perceptual groupings but also enables intrinsic exploration strategy that aims at maximizing the number of objects changed in the agents trajectory." }, { "heading": "1 INTRODUCTION", "text": "Recent model-free deep reinforcement learning (DRL) approaches have achieved human-level performance in a wide range of tasks such as games (Mnih et al., 2015). A critical known drawback of these approaches is the vast amount of experience required to achieve good performance. The promise of model-based DRL is to improve sample-efficiency and generalization capacity across tasks. However model-based algorithms pose strong requirements about the models used. They have to make accurate predictions about the future states which can be very hard when dealing with high dimensional inputs such as images. Thus one of the core challenge in model-based DRL is learning accurate and computationally efficient transition models through interacting with the environment. Buesing et al. (2018) developed state-space models techniques to reduce computational complexity by making predictions at a higher level of abstraction, rather than at the level of raw pixel observations. However these methods focused on learning a state-space model that doesn’t capture the compositional nature of observations: the visual scene is represented by a single latent vector and thus cannot be expected to generalize well to different objects layouts.\nExtensive work in cognitive science (Baillargeon et al., 1985; Spelke, 2013) indeed show that human perception is structured around objects. Object-oriented MDPs (Diuk et al., 2008) show the benefit of using object-oriented representations for structured exploration although the framework as it is presented requires hand-crafted symbolic representations. Bengio (2017) proposed as a prior (the consciousness prior) that the dependency between high-level variables (such as those describing actions, states and their changes) be represented by a sparse factor graph, i.e., with few high-level variables at a time interacting closely, and inference performed sequentially using attention mechanisms to select a few relevant variables at each step.\nBesides, a recent line of work (Greff et al., 2017; van Steenkiste et al., 2018; Eslami et al., 2016; Kosiorek et al., 2018; Greff et al., 2019; Burgess et al., 2019) has focused on unsupervised ways to decompose a raw visual scene in terms of objects. They rely on a slot-structured representation (see Figure 1) of the scene where the latent space is a set of vectors and each vector of the set is supposed to represent an “object” (which we refer to as “entity”) of the scene. However, to the best of our knowledge, Watters et al. 
(2019) is the only work that investigates the usefulness of slot-structured representations for RL. They introduced a method to learn a transition model that is applied to all the slots of their latent scene representation. Extending their work, we go further and posit that slot-wise transformations should be sparse and that the perception module should be learned jointly with the transition model.
We introduce Sparse Entity-Centric Transitions (SPECTRA), an entity-centric action-conditioned transition model that embodies the fact that the agent’s actions have sparse effects: each action changes only a few slots in the latent set and leaves the remaining ones unchanged. This is motivated by the physical consideration that agent interventions are localized in time and space. Our contribution is motivated by three advantages:
− Sparse transitions enable transferable model learning. The intuition here is that the sparsity of the transitions will bias the model towards learning primitive transformations (e.g. how pushing a box affects the state of the box being pushed) rather than configuration-dependent transformations, the former being more directly transferable to environments with increased combinatorial complexity.
− Sparse transitions enable a perception module (when trained jointly) to be biased towards more meaningful perceptual groupings, thus giving potentially better representations that can be used for downstream tasks, compared to representations learned from static data.
− Sparse transitions enable an exploration strategy that learns to predict actions that will change the state of as many entities as possible in the environment without relying on a pixel error loss." }, { "heading": "2 RELATED WORK", "text": "Unsupervised visual scene decomposition. Learning good representations of complex visual scenes is a challenging problem for AI models that is far from solved. Recent work (Greff et al., 2017; van Steenkiste et al., 2018; Eslami et al., 2016; Kosiorek et al., 2018; Greff et al., 2019; Burgess et al., 2019) has focused on learning models that discover objects in the visual scene. Greff et al. (2019) further advocates for the importance of learning to segment and represent objects jointly. Like us, they approach the problem from a spatial mixture perspective. van Steenkiste et al. (2018) and Kosiorek et al. (2018) build upon Greff et al. (2017) and Eslami et al. (2016) respectively by incorporating next-step prediction as part of the training objective in order to guide the network to learn about essential properties of objects. As specified in van Steenkiste et al. (2019), we also believe that objects are task-dependent, that learning slot-based representations along with sparse transitions biases the perception module towards entity-centric perceptual groupings, and that those structured representations could be better suited for RL downstream tasks.
Slot-based representation for RL. Recent advances in deep reinforcement learning are in part driven by a capacity to learn good representations that can be used by an agent to update its policy. Zambaldi et al. (2018) showed the importance of having structured representations and computation when it comes to tasks that explicitly target relational reasoning. Watters et al. (2019) also show the importance of learning representations of the world in terms of objects in a simple model-based setting. Zambaldi et al. (2018) focuses on task-dependent structured computation.
They use a self-attention mechanism (Vaswani et al., 2017) to model an actor-critic based agent where vectors in the set are supposed to represent entities in the current observation. Like Watters et al. (2019), we take a model-based approach: our aim is to learn task-independent slot-based representations that can be further used in downstream tasks. We leave the RL part for future work and focus on how learning those representations jointly with a sparse transition model may help learn a better transition model." }, { "heading": "3 SPECTRA", "text": "Our model is composed of two main components: a perception module and a transition module (section 3.1). The way we formulated the transition implicitly defines an exploration policy (section 3.3) that aims at changing the states of as many entities as possible.
Choice of Environment. Here we are interested in environments containing entities an agent can interact with and where actions only affect a few of them. Sokoban is thus a good testbed for our model. It consists of a difficult puzzle domain requiring an agent to push a set of boxes onto goal locations. Irreversible wrong moves can make the puzzle unsolvable. Each room is composed of walls, boxes, targets, floor and the agent avatar. The agent can take 9 different actions (no-op, 4 types of push and 4 types of move).
Fully Observed vs Learned Entities. The whole point is to work with slot-based representations learned from a raw pixel input. There is no guarantee that those learned slots will effectively correspond to entities in the image. We thus distinguish two versions of the environment (that correspond to two different levels of abstraction):
− Fully observed entities: the input is structured. Each entity corresponds to a spatial location in the grid. Entities are thus represented by their one-hot label and indexed by their x-y coordinate. This will be referred to as the fully observed setting. There is no need for a perception module in this setting.
− Raw pixel input: the input is unstructured. We need to infer the latent entity representations. This will be referred to as the latent setting." }, { "heading": "3.1 MODEL OVERVIEW", "text": "The idea is to learn an action-conditioned model of the world where at each time step the following take place:
− Pairwise interactions: each slot in the set gathers relevant information about the other slots, conditioned on the action taken.
− Active entity selection: select slots that will be modified by the action taken.
− Update: update the selected slots and let the other ones remain unchanged.
Ideally, slots would correspond to unsupervisedly learned entity-centric representations of a raw visual input, as is done by Burgess et al. (2019); Greff et al. (2019). We show that learning such perception modules jointly with the sparse transition biases the perceptual groupings to be entity-centric.
Perception module. The perception module is composed of an encoder f_enc and a decoder f_dec. The encoder maps the input image x to a set of K latent entities such that at time step t we have f_enc(x^t) = s^t ∈ R^{K×p}. It thus outputs a slot-based representation of the scene where each slot is represented in the same way and is supposed to capture properties of one entity of the scene. Like (Burgess et al., 2019; Greff et al., 2019), we model the input image x^t with a spatial Gaussian Mixture Model. Each slot s^t_k is decoded by the same decoder f_dec into a pixel-wise mean µ^t_{ik} and a pixel-wise assignment m^t_{ik} (non-negative and summing to 1 over k).
Assuming that the pixels i are independent conditioned on s^t, the conditional likelihood thus becomes:
p_θ(x^t | s^t) = ∏_{i=1}^{D} ∑_k m^t_{ik} N(x^t_i; µ^t_{ik}, σ²), with µ^t_{ik}, m^t_{ik} = f_dec(s^t_k)_i.
As our main goal is to investigate how sparse transitions bias the groupings of entities, in our experiments we use a very simple perception module represented in Figure 1. We leave it for future work to incorporate more sophisticated perception modules.
Pairwise interactions. In order to estimate the transition dynamics, we want to select relevant entities (represented at time t by the set s^t ∈ R^{K×p}) that will be affected by the action taken, so we model the fact that each entity needs to gather useful information from entities interacting with the agent (i.e. is the agent close? is the agent blocked by a wall or a box? etc.). To that end we propose to use a self-attention mechanism (Vaswani et al., 2017). From the k-th entity representation s^t_k at time t, we extract a row-vector key K^t_k, a row-vector query Q^t_k and a row-vector value V^t_k conditioned on the action taken such that (aggregating the rows into corresponding matrices and ignoring the temporal indices):
s̃ = softmax(K Q^T / √d) V
where the softmax is applied separately on each row. In practice we concatenate the results of several attention heads and use the concatenation as input to the entity selection phase.
Entity selection. Once the entities are informed w.r.t. possible pairwise interactions, the model needs to select which of these entities will be affected by the action taken a^t. Selection of the entities is regulated by a selection gate (Hochreiter & Schmidhuber, 1997; Cho et al., 2014) computed slot-wise as:
f^t_k = σ(MLP([s̃^t_k; a^t]))   (1)
where f^t_k can be interpreted as the probability for an entity to be selected.
Update. Finally, each selected entity is updated conditioned on its state s^t_k at time step t and the action taken a^t. We thus simply have:
s^{t+1}_k = f^t_k f_θ([s^t_k, a^t]) + (1 − f^t_k) s^t_k
f_θ is a learned action-conditioned transformation that is applied slot-wise (a minimal sketch of one transition step is given below). We posit that enforcing the transitions to be slot-wise and implicitly sparse will bias the model towards learning more primitive transformations. We verify this assumption in the next subsection in the simpler case where the entities are fully observed (and not inferred with a perception module)." }, { "heading": "4 EXPERIMENTS", "text": "In this work we demonstrate three advantages of entity-centric representations learned by SPECTRA:
− Implicitly imposing the transitions to be sparse will enable us to learn transition models that will transfer better to environments with increased combinatorial complexity. Section 4.1.
− Learning slot-based representations jointly with a sparse transition model will bias the perceptual groupings to be entity-centric. Section 4.2.
− Finally, we investigate the usefulness of the implicit exploration scheme induced by SPECTRA when learning the model jointly. Section 4.3." }, { "heading": "4.1 LEARNED PRIMITIVE TRANSFORMATIONS", "text": "In this section we show that sparse selection in the transitions yields learned slot-wise transformations that are transferable to out-of-distribution settings with increased combinatorial complexity. We restrict ourselves to the fully observed setting. Like (Zambaldi et al., 2018), each entity corresponds to a spatial location in the 7 × 7 grid. Each entity s_k is thus described in terms of its label, to which we append its x-y coordinate.
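To make the attention, selection and update steps concrete, the following is a minimal NumPy sketch of one SPECTRA transition step. It uses a single attention head and single-layer projections in place of the MLPs; all weight names and shapes (Wk, Wq, Wv, Wf, Wt) are illustrative assumptions, not the architecture reported in the Appendix, and in the paper the key/query/value extraction is additionally conditioned on the action.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spectra_step(s, a, params):
    """One sparse slot-wise transition: pairwise attention -> gating -> update.

    s: (K, p) slot representations s^t; a: (A,) one-hot action a^t.
    params: illustrative single-layer weights standing in for the MLPs.
    """
    Wk, Wq, Wv, Wf, Wt = (params[n] for n in ("Wk", "Wq", "Wv", "Wf", "Wt"))
    K_, Q, V = s @ Wk, s @ Wq, s @ Wv
    d = Q.shape[-1]
    s_tilde = softmax(K_ @ Q.T / np.sqrt(d)) @ V             # pairwise interactions
    a_rep = np.repeat(a[None, :], s.shape[0], axis=0)        # broadcast action to slots
    f = 1.0 / (1.0 + np.exp(-np.concatenate([s_tilde, a_rep], 1) @ Wf))  # gate f^t_k
    delta = np.tanh(np.concatenate([s, a_rep], 1) @ Wt)      # slot-wise f_theta([s^t_k, a^t])
    return f * delta + (1.0 - f) * s                         # only selected slots change

# Tiny smoke test with random weights (dimensions chosen arbitrarily).
rng = np.random.default_rng(0)
K, p, d, A = 16, 34, 16, 9
params = {"Wk": rng.normal(size=(p, d)), "Wq": rng.normal(size=(p, d)),
          "Wv": rng.normal(size=(p, d)), "Wf": rng.normal(size=(d + A, 1)),
          "Wt": rng.normal(size=(p + A, p))}
s_next = spectra_step(rng.normal(size=(K, p)), np.eye(A)[0], params)
assert s_next.shape == (K, p)
```

Note how the convex combination in the last line of `spectra_step` implements the update equation: slots whose gate is close to zero are passed through untouched, which is what makes the transition sparse.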
The results in Figure 2 are intuitive; to learn the right transitions with our formulation, the model is forced to:
− select only the relevant entities to be updated.
− learn the right primitive transformation (e.g. if the agent slot is selected to be modified by any of the move actions, then its position is vacated, so the model should map any concatenation of [agent, move] to the floor label). See Figure 2, right.
Here entity representations are not learned and thus correspond to their labels. We thus train the model with a simple cross-entropy loss. We are interested in comparing two settings:
− Sparse setting: the transformation is done slot-wise, to selected entities only. Each slot contains the label and x-y coordinate of the entity only. The transformation is applied to a concatenation of the entity label and the action [label, action].
− Full setting: the transformation is still done slot-wise but this time each slot in s̃^t potentially contains information about all the other slots in the set. The transformation is applied to a concatenation of the entity representation s̃^t_k and the action [s̃^t_k, action]. We thus hypothesize that the transformation module will learn configuration-dependent rules (e.g. if an agent is close to a box and a wall, and 3 steps ahead there is a target to be reached, and it takes a move action to do so) that will not be easily transferable to environments with increased complexity and a wider variety of contexts.
Both settings are illustrated in Figure 7 of the Appendix. In Figure 2 we report the evolution of training and evaluation losses of both the full and the sparse settings when the models are trained in a 7x7 environment with one box and evaluated in a 7x7 environment with two boxes." }, { "heading": "4.2 STRUCTURED REPRESENTATION LEARNING", "text": "In this section we demonstrate how learning a perception module along with sparse transitions will bias this module towards learning entity-centric perceptual groupings of the raw pixel input. In order to verify this intuition, we compare in Figure 3 the reconstructions from the perception module when it is trained separately vs jointly with the sparse transition module. In this experiment the input is not structured anymore but just a raw 112x112x3 pixel image. We used a simple perception module as described in Figure 1.
We thus distinguish two losses, a reconstruction loss
L_percep = ∑_{i=1}^{D} log ∑_k m^t_{ik} N(x^t_i; µ^t_{ik}, σ²)
and a transition loss
L_trans = ∑_{i=1}^{D} log ∑_k m̂^{t+1}_{ik} N(x^{t+1}_i; µ̂^{t+1}_{ik}, σ²)
with µ^t_{ik}, m^t_{ik} = f_dec(s^t_k)_i, s^t = f_enc(x^t), µ̂^{t+1}_{ik}, m̂^{t+1}_{ik} = f_dec(ŝ^{t+1}_k)_i, and ŝ^{t+1}_k = f_trans(s^t_k) is the future state predicted by the transition function.
f_dec, f_enc and f_trans are respectively the decoder, the encoder and the transition modules. For the joint training (resp. separate training) setting, gradients from L_trans are back-propagated through parameters of f_enc and f_trans (resp. f_trans only). In both settings, gradients from L_percep are back-propagated through parameters of f_enc and f_dec.
In Figure 3 we pay particular attention to the masked reconstructions from slots containing visual information about the agent. We can directly notice that the perceptual groupings done by the encoder, when it is trained jointly with the transition module, are agent-centric: the information about the agent is contained in one slot only (whereas it is often contained in several slots in the separate training setting).
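Both losses reduce to the same spatial-mixture log-likelihood, evaluated once on (x^t, f_dec(s^t)) and once on (x^{t+1}, f_dec(ŝ^{t+1})). Here is a minimal NumPy sketch of that quantity, assuming for brevity a single-channel flattened image and a fixed σ (both assumptions of this sketch, not statements about the paper's implementation):

```python
import numpy as np

def gmm_log_likelihood(x, mu, mask, sigma=0.1):
    """sum_i log sum_k m_{ik} N(x_i; mu_{ik}, sigma^2) for a flattened image.

    x:    (D,)   target pixels.
    mu:   (K, D) per-slot pixel means from the decoder.
    mask: (K, D) mixing weights, non-negative and summing to 1 over slots k.
    """
    log_comp = (np.log(mask + 1e-8)
                - 0.5 * np.log(2.0 * np.pi * sigma ** 2)
                - 0.5 * ((x[None, :] - mu) / sigma) ** 2)   # (K, D): log m_k + log N
    # logsumexp over the slot dimension, then sum over pixels i.
    return np.logaddexp.reduce(log_comp, axis=0).sum()
```

In the joint setting, gradients of this quantity computed on predicted slots flow back into the encoder, which is what pressures the perceptual groupings to align with the entities the transition acts on.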
Moreover, in Figure 4 we see the joint training setting leading to a better transition model: we hypothesize that the transformations are easier to learn specifically because they have to focus on the effects of the actions taken on entities, i.e., involving a few strongly dependent variables at a time rather than more global but more specific configurations involving all the variables in the state, as suggested by Bengio (2017).
We also visualized the transformations learned by both settings. To do so, we manually increased the value of the update gate f_k for a few slots k. An example is given in Figure 5 and additional ones are given in section B of the Appendix." }, { "heading": "4.3 INTRINSIC EXPLORATION STRATEGY", "text": "In many environments a uniformly random policy is insufficient to produce action and observation sequences representative enough to be useful for downstream tasks. In this paper we suggest learning an exploration policy jointly with the model, based on an intrinsic reward that depends on the transition model itself and exploits its entity-centric structure to quantify the diversity of aspects of the environment modified by exploratory behavior. Our model learns to first select entities that will be changed and then learns how to transform the selected entities. Similar to the empowerment intrinsic objectives (Klyubin et al., 2005; Kumar, 2018), a natural exploration strategy in settings like Sokoban would be to follow trajectories that overall have as many entities being selected as possible. If the agent indeed never pushes a box on target when learning its transition model, it will not be able to transfer its knowledge to a task where it has to push all the boxes on all the targets. We thus suggest learning a policy that maximizes the number of entities selected, as predicted by the current model. We alternate between policy update and model update.
We used a 10-step DQN for the exploration policy and have the DQN and the model share the same 1-step replay buffer. The DQN policy is ε-greedy, with ε decaying from 1 to 0.3. In order to train the DQN we used the following intrinsic 1-step reward (a minimal sketch is given below):
r(s_t, a_t) = ∑_k 1(f^t_k ≥ h)   (2)
with h a chosen threshold for the update gate value. We expect this training strategy to promote trajectories with as many entities changing state as possible. We thus expect the agent to learn not to get stuck, to aim for the boxes, to push them, and so on. In order to validate that intuition, we first conduct experiments in the fully observed setting. In this setting we consider the following types of moves:
− valid move: Whenever the agent takes a move action in a valid direction, two entities will have their state changed: the initial location of the agent and the next one.
− valid push: Whenever the agent takes a push action and a box is available to be pushed in the chosen direction, three entities will have their state changed: the initial location of the agent, the initial location of the box and the next location of the box.
− blocked push: Whenever the agent takes a push action when there is no box to push in the chosen direction, nothing happens.
− blocked move: Whenever the agent takes a move action in a non-valid direction (against a wall, a box, etc.), nothing happens.
With our suggested training strategy we expect the agent to promote trajectories with more transitions of type valid move than blocked move and blocked push, and hopefully with the number of valid push transitions increased as well.
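Equation (2) is simple enough to state as code. The sketch below computes the intrinsic 1-step reward from the predicted gate values; the default threshold h = 0.5 is an assumption of the sketch, since the paper treats h as a chosen hyper-parameter.

```python
import numpy as np

def intrinsic_reward(gates, h=0.5):
    """Equation (2): r(s_t, a_t) = sum_k 1[f^t_k >= h].

    gates: (K,) update-gate values f^t_k predicted by the transition model.
    h: threshold on the gate value (0.5 here is an assumed default).
    """
    return float(np.sum(np.asarray(gates) >= h))
```

This scalar is what the exploration DQN is trained on in place of any extrinsic task reward.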
During training, we thus monitor the true number of entities changed in the transitions stored in the shared 1-step buffer. We also performed the same experiment in the raw pixel input setting and monitored the true number of entities changed in the 1-step buffer during training. Results are reported in Figure 6 and confirm our hypothesis: the agent learns to avoid actions that will result in no changes in the environment (blocked push and blocked move). Details of the hyperparameters are given in the Appendix." }, { "heading": "5 CONCLUSION", "text": "We have introduced SPECTRA, a novel method for learning sparse slot-structured transition models. We provided evidence that sparsity in the transitions yields models that learn more primitive (rather than configuration-dependent) transformations and thus transfer better to out-of-distribution environments with increased combinatorial complexity. We also demonstrated that the implicit sparsity of the transitions enables an exploration strategy that aims at maximizing the number of entities that will be modified along the agent’s trajectory. In Figure 6 we showed that with this simple exploration strategy the agent learns to avoid actions that will not change the environment (blocked move and blocked push). Preliminary results in pixel space show that SPECTRA biases even a simple perception module towards perceptual groupings that are entity-centric. In Figure 5 we also showed the benefit of jointly training the perception (encoder) module and the transition module. We anticipate that our model could be improved by incorporating a more sophisticated perception module. In the future we aim to use SPECTRA to investigate possible uses in model-based reinforcement learning." }, { "heading": "A ARCHITECTURE AND HYPERPARAMETERS", "text": "A.1 FULLY OBSERVED SETTING
In the fully observed setting the input at time t is a set o^t ∈ {0, 1}^{N×7} corresponding to the one-hot labels (agent (off and on target), box (off and on target), wall, target, and floor) of each entity in a 7 × 7 grid (N = 49). We also append their normalized x-y coordinates so that the
The output of the selection module is thus a set of log-probabilities lt ∈ RN×2.\nTransformation module. The transformation module is a simple shared 2-layers MLP that is applied slot-wise to the the concatenation et = [st, at] of the input set st ∈ {0, 1}N×9 and the action taken. It outputs channels of sizes 16, 7 respectively. The first layer is followed by a RELU non-linearity and the last one by a logSoftmax non-linearity in order to compute the log-probabilities of the label of each predicted entity.\nFull setting. In the full setting, we don’t have a selection bottleneck anymore. The transformation module is thus directly applied to the output of the attention phase ẽt = [s̃t, at]. It consits this time of a simple shared 3-layers MLP that is applied slot-wise and outputs channels of sizes 64, 32, 7 respectively. The first two layers are followed by a RELU non-linearity and the last one by a logSoftmax non-linearity .\nA.2 LATENT SETTING\nIn the latent setting the input at time t is a raw pixels (RGB) image ot ∈ R112×112×3. In the latent setting, the transition model is composed of a perception module, a selection module and a transformation module.\nPerception module. When dealing with unstructured input we first need a way to extract entities latent representations. For this work we used a very simple and naive perception module, with an\nencoder similar to what is done by Zambaldi et al. (2018); Santoro et al. (2017). Like detailed in Figure 1, we use a CNN to parse pixel inputs into k feature maps of size n × n, where k is the number of output channels of the CNN. We choose arbitrarily n = 4 and didn’t perform any hyperparameter search for the CNN architecture. We then concatenate x and y coordinates to each k-dimensional pixel feature-vector to indicate the pixels position in the map. We treat the resulting pixel-feature vectors as the set of entities st ∈ RN×k where here N = n2 = 16. We denote as stcoord ∈ RN×k+2 the entities set to which we have appended the x-y position in the map.\nAs our loss is a pixel loss we also need a decoder that decodes each entity stk,coord of the set s t back to its corresponding mean µtk and mask m t k. The CNN of the encoder outputs channels of size (16, 32, 32, 32, 32). All layers (except the last one) are followed by RELU non-linearities. Kernel sizes are (3, 3, 4, 3) and strides (2, 2, 2, 2, 1). The decoder is composed of a 2-layers MLP followed by a stack of transposed convolutions. The MLP outputs channels of sizes (7× 34, 7× 7× 34) with a RELU non-linearity between the 2 layers. The output is then resized to 7 × 7 × 34 map that will be fed to the convolution part. For the convolution part, it outputs maps of channel sizes (4, 4, 4, 4, 4) with RELU non-linearities between each layer. The kernel sizes are (3, 3, 5, 4).\nSelection and Tranformation modules. The selection and transformation module are very similar to the fully observed setting, except that they operate on the latent space, so we do not apply LogSofmax non-linearities for the transformation part. The input of the selection module is stcoord and the input to the transformation module is st. The selection module is composed of 2 attention heads where is head is stack of 3 attention blocks (Vaswani et al., 2017; Zambaldi et al., 2018). The 3 blocks are 1-layer MLP that output key, query and value vectors of channels size 34, 16, 16 respectively. The first two blocks are followed by RELU non linearities and the last one doesn’t have any. 
The output of the attention phase is thus the concatenation of values obtained from the 2 attentions heads s̃t ∈ RN×32. To obtain the selection binary selection variables we then simply apply slot-wise a 3-layers MLP of channels sizes 16, 32, 32 respectively to the concatenation ẽt = [s̃t, at] followed by a Softmax non-linearity in order to compute the probabilities of each entity to be modified by the action taken. The output of the selection module is thus a set of probabilities pt ∈ RN×2. The transformation module is a simple 2-layers MLP of channels sizes 32,32 respectively with a RELU non-linearity between the two layers." }, { "heading": "B ADDITIONAL VISUALISATIONS", "text": "In this section we reported additional visualizations similar to Figure 3 and 5 where we monitor:\n− Differences in slot-wise masked decodings of the perception module when it is trained jointly and separately from the sparse transitions.\n− Differences in the slot-wise transformations earned by the transition model when it is trained separately and jointly with the perception module.\nWe notice that joint training enables to learn slot-structured representation that are entity-centric and thus enable to learn better transition models. The transformations learned are especially visually more interpretable." } ]
2019
SPECTRA: SPARSE ENTITY-CENTRIC TRANSITIONS
SP:3d76cac4f6c4d3bb1003b739801a4981c0db00b8
[ "This paper apply a model-based RL algorithm, DyNA-PPO for designing biological sequences. By being model-based, this algorithm is sample efficiency compared to model-free RL algorithms. This advantage is attractive and important in the context of biological sequence design since the designed is constrained to be done in the large batch / low round settings. To further improves model efficiency, the authors reduce learning bias by quantifying the reliability and automatically selecting models of appropriate complexity via cross validation. To encourage diversity in the target distribution they also penalize the reward using a visitation-based strategy.", "In this work the authors propose a framework for combinatorial optimisation problems in the conditions that the measurements are expensive. The basic idea is to make an approximation of the reward function and then train the policy using the simulated environment based on the approximated reward function. The applications are shown in a set of biological tasks, which shows that the model performs well compared to the baselines. " ]
The ability to design biological structures such as DNA or proteins would have considerable medical and industrial impact. Doing so presents a challenging black-box optimization problem characterized by the large-batch, low-round setting due to the need for labor-intensive wet lab evaluations. In response, we propose using reinforcement learning (RL) based on proximal-policy optimization (PPO) for biological sequence design. RL provides a flexible framework for optimizing generative sequence models to achieve specific criteria, such as diversity among the high-quality sequences discovered. We propose a model-based variant of PPO, DyNA PPO, to improve sample efficiency, where the policy for a new round is trained offline using a simulator fit on functional measurements from prior rounds. To accommodate the growing number of observations across rounds, the simulator model is automatically selected at each round from a pool of diverse models of varying capacity. On the tasks of designing DNA transcription factor binding sites, designing antimicrobial proteins, and optimizing the energy of Ising models based on protein structure, we find that DyNA PPO performs significantly better than existing methods in settings in which modeling is feasible, while still not performing worse in situations in which a reliable model cannot be learned.
[ { "affiliations": [], "name": "Christof Angermueller" }, { "affiliations": [], "name": "David Dohan" }, { "affiliations": [], "name": "Ramya Deshpande" } ]
[ { "authors": [ "Frances H Arnold" ], "title": "Design by directed evolution", "venue": "Accounts of chemical research,", "year": 1998 }, { "authors": [ "Dzmitry Bahdanau", "Philemon Brakel", "Kelvin Xu", "Anirudh Goyal", "Ryan Lowe", "Joelle Pineau", "Aaron Courville", "Yoshua Bengio" ], "title": "An actor-critic algorithm for sequence prediction, 2016", "venue": null, "year": 2016 }, { "authors": [ "Marc G. Bellemare", "Sriram Srinivasan", "Georg Ostrovski", "Tom Schaul", "David Saxton", "Remi Munos" ], "title": "Unifying count-based exploration and intrinsic motivation, 2016", "venue": null, "year": 2016 }, { "authors": [ "Marc G Bellemare", "Will Dabney", "Rémi Munos" ], "title": "A distributional perspective on reinforcement learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Irwan Bello", "Hieu Pham", "Quoc V. Le", "Mohammad Norouzi", "Samy Bengio" ], "title": "Neural combinatorial optimization with reinforcement learning, 2016", "venue": null, "year": 2016 }, { "authors": [ "Yoshua Bengio", "Andrea Lodi", "Antoine Prouvost" ], "title": "Machine learning for combinatorial optimization: a methodological tour d’horizon", "venue": null, "year": 2018 }, { "authors": [ "Helen M Berman", "Philip E Bourne", "John Westbrook", "Christine Zardecki" ], "title": "The protein data bank", "venue": "In Protein Structure,", "year": 2003 }, { "authors": [ "David H Brookes", "Jennifer Listgarten" ], "title": "Design by adaptive sampling", "venue": "arXiv preprint arXiv:1810.03714,", "year": 2018 }, { "authors": [ "David H. Brookes", "Akosua Busia", "Clara Fannjiang", "Kevin Murphy", "Jennifer Listgarten" ], "title": "A view of estimation of distribution algorithms through the lens of expectation-maximization, 2019a", "venue": null, "year": 2019 }, { "authors": [ "David H Brookes", "Hahnbeom Park", "Jennifer Listgarten" ], "title": "Conditioning by adaptive sampling for robust design", "venue": "arXiv preprint arXiv:1901.10060,", "year": 2019 }, { "authors": [ "Prabal Chhibbar", "Arpit Joshi" ], "title": "Generating protein sequences from antibiotic resistance genes data using generative adversarial networks", "venue": "arXiv preprint arXiv:1904.13240,", "year": 2019 }, { "authors": [ "Hanjun Dai", "Elias B. 
Khalil", "Yuyu Zhang", "Bistra Dilkina", "Le Song" ], "title": "Learning combinatorial optimization algorithms over graphs, 2017", "venue": null, "year": 2017 }, { "authors": [ "Pieter-Tjerk De Boer", "Dirk P Kroese", "Shie Mannor", "Reuven Y Rubinstein" ], "title": "A tutorial on the cross-entropy method", "venue": "Annals of operations research,", "year": 2005 }, { "authors": [ "Ronald PH de Jongh", "Aalt DJ van Dijk", "Mattijs K Julsing", "Peter J Schaap", "Dick de Ridder" ], "title": "Designing eukaryotic gene expression regulation using machine learning", "venue": "Trends in biotechnology,", "year": 2019 }, { "authors": [ "Marc Deisenroth", "Carl E Rasmussen" ], "title": "Pilco: A model-based and data-efficient approach to policy search", "venue": "In Proceedings of the 28th International Conference on machine learning", "year": 2011 }, { "authors": [ "Rafael Gómez-Bombarelli", "Jennifer N Wei", "David Duvenaud", "José Miguel Hernández-Lobato", "Benjamı́n Sánchez-Lengeling", "Dennis Sheberla", "Jorge Aguilera-Iparraguirre", "Timothy D Hirzel", "Ryan P Adams", "Alán Aspuru-Guzik" ], "title": "Automatic chemical design using a data-driven continuous representation of molecules", "venue": "ACS central science,", "year": 2018 }, { "authors": [ "Ryan-Rhys Griffiths", "José Miguel Hernández-Lobato" ], "title": "Constrained Bayesian optimization for automatic chemical design", "venue": "arXiv preprint arXiv:1709.05501,", "year": 2017 }, { "authors": [ "Sergio Guadarrama", "Anoop Korattikara", "Oscar Ramirez", "Pablo Castro", "Ethan Holly", "Sam Fishman", "Ke Wang", "Ekaterina Gonina", "Neal Wu", "Chris Harris", "Vincent Vanhoucke", "Eugene Brevdo" ], "title": "TF-Agents: A library for reinforcement learning in tensorflow", "venue": "https://github.com/ tensorflow/agents,", "year": 2018 }, { "authors": [ "Anvita Gupta", "James Zou" ], "title": "Feedback gan (fbgan) for dna: A novel feedback-loop architecture for optimizing protein functions", "venue": "arXiv preprint arXiv:1804.01694,", "year": 2018 }, { "authors": [ "Tatsunori B. Hashimoto", "Steve Yadlowsky", "John C. Duchi" ], "title": "Derivative free optimization via repeated classification, 2018", "venue": null, "year": 2018 }, { "authors": [ "John Ingraham", "Vikas K Garg", "Regina Barzilay", "Tommi Jaakkola" ], "title": "Generative models for graphbased protein", "venue": null, "year": 2019 }, { "authors": [ "Michael Janner", "Justin Fu", "Marvin Zhang", "Sergey Levine" ], "title": "When to trust your model: Modelbased policy optimization", "venue": "arXiv preprint arXiv:1906.08253,", "year": 2019 }, { "authors": [ "Wengong Jin", "Kevin Yang", "Regina Barzilay", "Tommi Jaakkola" ], "title": "Learning multimodal graph-tograph translation for molecular optimization", "venue": "arXiv preprint arXiv:1812.01070,", "year": 2018 }, { "authors": [ "Lukasz Kaiser", "Mohammad Babaeizadeh", "Piotr Milos", "Blazej Osinski", "Roy H Campbell", "Konrad Czechowski", "Dumitru Erhan", "Chelsea Finn", "Piotr Kozakowski", "Sergey Levine" ], "title": "Model-based reinforcement learning for atari", "venue": null, "year": 1903 }, { "authors": [ "Nathan Killoran", "Leo J Lee", "Andrew Delong", "David Duvenaud", "Brendan J Frey" ], "title": "Generating and designing dna with deep generative models", "venue": "arXiv preprint arXiv:1712.06148,", "year": 2017 }, { "authors": [ "Diederik P. 
Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In 2nd International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Scott Kirkpatrick", "C Daniel Gelatt", "Mario P Vecchi" ], "title": "Optimization by simulated annealing", "venue": null, "year": 1983 }, { "authors": [ "Wouter Kool", "Herke van Hoof", "Max Welling" ], "title": "Attention, learn to solve routing problems", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Ksenia Korovina", "Sailun Xu", "Kirthevasan Kandasamy", "Willie Neiswanger", "Barnabas Poczos", "Jeff Schneider", "Eric P Xing" ], "title": "ChemBO: Bayesian Optimization of Small Organic Molecules with Synthesizable Recommendations", "venue": null, "year": 1908 }, { "authors": [ "Thanard Kurutach", "Ignasi Clavera", "Yan Duan", "Aviv Tamar", "Pieter Abbeel" ], "title": "Model-ensemble trust-region policy optimization", "venue": "arXiv preprint arXiv:1802.10592,", "year": 2018 }, { "authors": [ "Matt J Kusner", "Brooks Paige", "José Miguel Hernández-Lobato" ], "title": "Grammar variational autoencoder", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 1945 }, { "authors": [ "Ge Liu", "Haoyang Zeng", "Jonas Mueller", "Brandon Carter", "Ziheng Wang", "Jonas Schilz", "Geraldine Horny", "Michael E Birnbaum", "Stefan Ewert", "David K Gifford" ], "title": "Antibody complementarity determining region design using high-capacity machine learning", "venue": null, "year": 2019 }, { "authors": [ "Debora S Marks", "Lucy J Colwell", "Robert Sheridan", "Thomas A Hopf", "Andrea Pagnani", "Riccardo Zecchina", "Chris Sander" ], "title": "Protein 3d structure computed from evolutionary sequence variation", "venue": "PloS one,", "year": 2011 }, { "authors": [ "Sanzo Miyazawa", "Robert L Jernigan" ], "title": "Residue–residue potentials with a favorable contact pair term and an unfavorable high packing density term, for simulation and threading", "venue": "Journal of molecular biology,", "year": 1996 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "J. Mockus", "Vytautas Tiesis", "Antanas Zilinskas" ], "title": "The application of Bayesian methods for seeking the extremum", "venue": "Towards Global Optimization, 2:117–129,", "year": 2014 }, { "authors": [ "Alexander Mordvintsev", "Christopher Olah", "Mike Tyka" ], "title": "Deepdream-a code example for visualizing neural networks", "venue": "Google Research,", "year": 2015 }, { "authors": [ "Daniel Neil", "Marwin Segler", "Laura Guasch", "Mohamed Ahmed", "Dean Plumbley", "Matthew Sellwood", "Nathan Brown" ], "title": "Exploring deep recurrent models with reinforcement learning for molecule design", "venue": "In International Conference on Learning Representations Workshop,", "year": 2018 }, { "authors": [ "Mohammad Norouzi", "Samy Bengio", "Navdeep Jaitly", "Mike Schuster", "Yonghui Wu", "Dale Schuurmans" ], "title": "Reward augmented maximum likelihood for neural structured prediction", "venue": "In Advances In Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "F. Pedregosa", "G. Varoquaux", "A. Gramfort", "V. Michel", "B. Thirion", "O. Grisel", "M. Blondel", "P. Prettenhofer", "R. Weiss", "V. Dubourg", "J. Vanderplas", "A. Passos", "D. 
Cournapeau", "M. Brucher", "M. Perrot", "E. Duchesnay" ], "title": "Scikit-learn: Machine learning in Python", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Baolin Peng", "Xiujun Li", "Jianfeng Gao", "Jingjing Liu", "Kam-Fai Wong", "Shang-Yu Su" ], "title": "Deep dyna-q: Integrating planning for task-completion dialogue policy learning", "venue": "arXiv preprint arXiv:1801.06176,", "year": 2018 }, { "authors": [ "Esteban Real", "Alok Aggarwal", "Yanping Huang", "Quoc V Le" ], "title": "Regularized evolution for image classifier architecture search", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Frederic Runge", "Danny Stoll", "Stefan Falkner", "Frank Hutter" ], "title": "Learning to design rna, 2018", "venue": null, "year": 2018 }, { "authors": [ "Sari Sabban", "Mikhail Markovsky" ], "title": "Ramanet: Computational de novo protein design using a long short-term memory generative adversarial neural network", "venue": "BioRxiv, pp", "year": 2019 }, { "authors": [ "Tim Salimans", "Jonathan Ho", "Xi Chen", "Szymon Sidor", "Ilya Sutskever" ], "title": "Evolution strategies as a scalable alternative to reinforcement learning", "venue": "arXiv preprint arXiv:1703.03864,", "year": 2017 }, { "authors": [ "Paul J Sample", "Ban Wang", "David W Reid", "Vlad Presnyak", "Iain J McFadyen", "David R Morris", "Georg Seelig" ], "title": "Human 5 utr design and variant effect prediction from a massively parallel translation assay", "venue": "Nature biotechnology,", "year": 2019 }, { "authors": [ "Benjamin Sanchez-Lengeling", "Alán Aspuru-Guzik" ], "title": "Inverse molecular design using machine learning: Generative models for matter", "venue": "engineering. Science,", "year": 2018 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Bobak Shahriari", "Kevin Swersky", "Ziyu Wang", "Ryan P Adams", "Nando De Freitas" ], "title": "Taking the human out of the loop: A review of Bayesian optimization", "venue": "Proceedings of the IEEE,", "year": 2015 }, { "authors": [ "Eugene I Shakhnovich", "AM Gutin" ], "title": "A new approach to the design of stable proteins", "venue": "Protein Engineering, Design and Selection,", "year": 1993 }, { "authors": [ "Jasper Snoek", "Hugo Larochelle", "Ryan P Adams" ], "title": "Practical bayesian optimization of machine learning algorithms", "venue": "In Advances in neural information processing systems,", "year": 2012 }, { "authors": [ "Joe Staines", "David Barber" ], "title": "Optimization by variational bounding", "venue": "In ESANN,", "year": 2013 }, { "authors": [ "Joanna I Sułkowska", "Faruck Morcos", "Martin Weigt", "Terence Hwa", "José N Onuchic" ], "title": "Genomicsaided structure prediction", "venue": "Proceedings of the National Academy of Sciences,", "year": 2012 }, { "authors": [ "Richard S Sutton" ], "title": "Dyna, an integrated architecture for learning, planning, and reacting", "venue": "ACM Sigart Bulletin,", "year": 1991 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Reinforcement learning: An introduction", "venue": "MIT press,", "year": 2018 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in 
neural information processing systems,", "year": 2017 }, { "authors": [ "Ye Wang", "Haochen Wang", "Liyang Liu", "Xiaowo Wang" ], "title": "Synthetic promoter design in escherichia coli based on generative adversarial network", "venue": "BioRxiv, pp", "year": 2019 }, { "authors": [ "Martin Weigt", "Robert A White", "Hendrik Szurmant", "James A Hoch", "Terence Hwa" ], "title": "Identification of direct residue contacts in protein–protein interaction by message passing", "venue": "Proceedings of the National Academy of Sciences,", "year": 2009 }, { "authors": [ "Daan Wierstra", "Tom Schaul", "Tobias Glasmachers", "Yi Sun", "Jan Peters", "Jürgen Schmidhuber" ], "title": "Natural evolution strategies", "venue": "The Journal of Machine Learning Research,", "year": 2014 }, { "authors": [ "Ronald J Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine learning,", "year": 1992 }, { "authors": [ "Jacob Witten", "Zack Witten" ], "title": "Deep learning regression model for antimicrobial peptide", "venue": "design. BioRxiv,", "year": 2019 }, { "authors": [ "Zachary Wu", "SB Jennifer Kan", "Russell D Lewis", "Bruce J Wittmann", "Frances H Arnold" ], "title": "Machine learning-assisted directed protein evolution with combinatorial libraries", "venue": "Proceedings of the National Academy of Sciences,", "year": 2019 }, { "authors": [ "Zhenpeng Zhou", "Steven Kearnes", "Li Li", "Richard N Zare", "Patrick Riley" ], "title": "Optimization of molecules via deep reinforcement learning", "venue": "Scientific reports,", "year": 2019 } ]
[ { "heading": null, "text": "The ability to design biological structures such as DNA or proteins would have considerable medical and industrial impact. Doing so presents a challenging black-box optimization problem characterized by the large-batch, low round setting due to the need for labor-intensive wet lab evaluations. In response, we propose using reinforcement learning (RL) based on proximal-policy optimization (PPO) for biological sequence design. RL provides a flexible framework for optimization generative sequence models to achieve specific criteria, such as diversity among the high-quality sequences discovered. We propose a model-based variant of PPO, DyNA PPO, to improve sample efficiency, where the policy for a new round is trained offline using a simulator fit on functional measurements from prior rounds. To accommodate the growing number of observations across rounds, the simulator model is automatically selected at each round from a pool of diverse models of varying capacity. On the tasks of designing DNA transcription factor binding sites, designing antimicrobial proteins, and optimizing the energy of Ising models based on protein structure, we find that DyNA PPO performs significantly better than existing methods in settings in which modeling is feasible, while still not performing worse in situations in which a reliable model cannot be learned." }, { "heading": "1 INTRODUCTION", "text": "Driven by real-world obstacles in health and disease requiring new drugs, treatments, and assays, the goal of biological sequence design is to identify new discrete sequences x which optimize some oracle, typically an experimentally-measured functional property f(x). This is a difficult black-box optimization problem over a combinatorially large search space in which function evaluation relies on slow and expensive wet-lab experiments. The setting induces unusual constraints in black-box optimization and reinforcement learning: large synchronous batches with few rounds total.\nThe current gold standard for biomolecular design is directed evolution, which was recently recognized with a Nobel prize (Arnold, 1998) and is a form of randomized local search. Despite its impact, directed evolution is sample inefficient and relies on greedy hillclimbing to the optimal sequences. Recent work has demonstrated that machine-learning-guided optimization (Section 3) can find better sequences faster.\n∗Work done as an intern at Google.\nReinforcement learning (RL) provides a flexible framework for black-box optimization that can harness modern deep generative sequence models. This paper proposes a simple method for improving the sample efficiency of policy gradient methods such as PPO (Schulman et al., 2017) for black-box optimization by using surrogate models that are trained online to approximate f(x). Our method updates the policy’s parameters using sequences x generated by the current policy πθ(x), but evaluated using a learned surrogate f ′(x), instead of the true, but unknown, oracle reward function f(x). We learn the parameters of the reward model, w, simultaneously with the parameters of the policy. This is similar to other model-based RL methods, but simpler, since in the context of sequence optimization, the state-transition model is deterministic and known. Initially the learned reward model, f ′(x), is unreliable, so we rely entirely on f(x) to assess sequences and update the policy. This allows a graceful fallback to PPO when the model is not effective. 
Over time, the reward model becomes more reliable and can be used as a cheap surrogate, similar to Bayesian optimization methods (Shahriari et al., 2015). We show empirically that cross-validation is an effective heuristic for assessing the model quality, which is simpler than the inference required by Bayesian optimization.
We rigorously evaluate our method on three in-silico sequence design tasks that draw on experimental data to construct functions f(x) characteristic of real-world design problems: optimizing binding affinity of DNA sequences of length 8 (search space size 4^8); optimizing anti-microbial peptide sequences (search space size 20^50); and optimizing binary sequences where f(x) is defined by the energy of an Ising model for protein structure (search space size 2^50). These do not rely on wet lab experiments, and thus allow for large-scale benchmarking across a range of methods. We show that our DyNA PPO method achieves higher cumulative reward for a given budget (measured in terms of number of calls to f(x)) than existing methods, such as standard PPO, various forms of the cross-entropy method, Bayesian optimization, and evolutionary search.
In summary, our contributions are as follows:
• We provide a model-based RL algorithm, DyNA PPO, and demonstrate its effectiveness in performing sample-efficient batched black-box function optimization.
• We address model bias by quantifying the reliability and automatically selecting models of appropriate complexity via cross-validation.
• We propose a visitation-based exploration bonus and show that it is more effective than entropy regularization in identifying multiple local optima.
• We present a new optimization task for benchmarking methods for biological sequence design based on protein energy Ising models." }, { "heading": "2 METHODS", "text": "Let f(x) be the function that we want to optimize and x ∈ V^T a sequence of length T over a vocabulary V such as DNA nucleotides (|V| = 4) or amino acids (|V| = 20). We assume N experimental rounds and that B sequences can be measured per round. Let D_n = {(x, f(x))} be the data acquired in round n, with |D_n| = B. For simplicity, we assume that the sequence length T is constant, but our approach based on generating sequences autoregressively easily generalizes to variable-length sequences." }, { "heading": "2.1 MARKOV DECISION PROCESS", "text": "We formulate the design of a single sequence x as a Markov decision process M = (S, A, p, r) with state space S, action space A, transition function p, and reward function r. The state space S = ∪_{t=1}^{T} V^t is the set of all possible sequence prefixes and A corresponds to the vocabulary V. A sequence is generated left to right. At time step t, the state s_t = (a_0, ..., a_{t−1}) corresponds to the t tokens generated so far and the action a_t ∈ A to the next token. The transition function p(s_{t+1} | s_t, a_t) is deterministic and corresponds to appending a_t to s_t, i.e. s_{t+1} = s_t a_t. The reward r(s_t, a_t) is zero except at the last step T, where it corresponds to the functional measurement f(s_{T−1}).
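As a concrete rendering of this MDP, the sketch below builds a sequence left to right and only queries the oracle f once the sequence is complete. It is a minimal, hypothetical environment (the authors implement theirs with the TF-Agents library), and the toy GC-content oracle in the example is purely illustrative.

```python
class SequenceMDP:
    """Deterministic MDP for left-to-right sequence design (minimal sketch)."""

    def __init__(self, vocab, T, f):
        self.vocab, self.T, self.f = vocab, T, f  # alphabet V, length T, oracle f
        self.prefix = []

    def reset(self):
        self.prefix = []
        return tuple(self.prefix)

    def step(self, action):
        # Deterministic transition: s_{t+1} is s_t with token a_t appended.
        self.prefix.append(self.vocab[action])
        done = len(self.prefix) == self.T
        # Reward is zero except at the last step, where it is f of the sequence.
        reward = self.f("".join(self.prefix)) if done else 0.0
        return tuple(self.prefix), reward, done

# Example: a toy oracle scoring the GC content of a DNA 8-mer.
env = SequenceMDP("ACGT", T=8, f=lambda x: (x.count("G") + x.count("C")) / len(x))
env.reset()
for a in [1, 2, 0, 3, 2, 1, 1, 2]:
    state, reward, done = env.step(a)
```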
For generating variable-length sequences, we extend the vocabulary by a special end-of-sequence token and terminate sequence generation when this token is selected.
Algorithm 1: DyNA PPO
1: Input: Number of experiment rounds N
2: Input: Number of model-based training rounds M
3: Input: Set of candidate models S = {f'}
4: Input: Minimum model score τ for model-based training
5: Input: Policy π_θ with initial parameters θ
6: for n = 1, 2, ... N do
7:   Collect samples D_n = {x, f(x)} using policy π_θ
8:   Train policy π_θ on D_n
9:   Fit candidate models f' ∈ S on ∪_{i=1}^{n} D_i and compute their score by cross-validation
10:  Select the subset of models S' ⊆ S with a score ≥ τ
11:  if S' ≠ ∅ then
12:    for m = 1, 2, ... M do
13:      Sample a batch of sequences x from π_θ and observe the reward f''(x) = (1 / |S'|) ∑_{f' ∈ S'} f'(x)
14:      Update π_θ on {x, f''(x)}
15:    end for
16:  end if
17: end for" }, { "heading": "2.2 POLICY OPTIMIZATION", "text": "We train a policy π_θ(a_t | s_t) to optimize the expected sum of rewards:
E[R(s_{1:T}) | s_0, θ] = ∑_{s_t} ∑_{a_t} π_θ(a_t | s_t) r(s_t, a_t).   (1)
We use proximal policy optimization (PPO) with KL trust-region constraint (Schulman et al., 2017), which we have found to be more stable and sample efficient than REINFORCE (Williams, 1992). We have also considered off-policy deep Q-learning (DQN) (Mnih et al., 2015), and categorical distributional deep Q-learning (CatDQN) (Bellemare et al., 2017), which are in principle more sample-efficient than on-policy learning using PPO since they can reuse samples multiple times. However, they performed worse than PPO in our experiments (Appendix C). We implement algorithms using the TF-Agents RL library (Guadarrama et al., 2018).
We employ autoregressive models with one fully-connected layer as policy and value networks since they are faster to train and outperformed recurrent networks in our experiments. At time step t, the network takes as input the W last characters a_{t−W}, ..., a_{t−1}, which are one-hot encoded, where the context window size W is a hyper-parameter. To provide the network with information about the current position of the context window, it also receives the time step t, which is embedded using a sinusoidal positional encoding (Vaswani et al., 2017) and concatenated with the one-hot characters. The policy network outputs a distribution π_θ(a_t | s_t) over the next token a_t. The value network V(s_t), which approximates the expected future reward for being in state s_t, is used as a baseline to reduce the variance of stochastic estimates of equation 1 (Schulman et al., 2017)." }, { "heading": "2.3 MODEL-BASED POLICY OPTIMIZATION", "text": "Model-based RL learns a model of the environment that is used as a simulator to provide additional pseudo-observations. While model-free RL has been successful in domains where interaction with the environment is cheap, such as those where the environment is defined by a software program, its high sample complexity may be unrealistic for biological sequence design. In model-based RL, the MDP M = (S, A, p, r) is approximated by a model M' = (S, A, p', r') with the same state space S and action space A as M (Sutton & Barto, 2018, Ch. 8). Since the transition function p is deterministic in our case, only the reward function r(s_t, a_t) needs to be approximated by r'(s_t, a_t). Since r(s_T, a_T) is non-zero at the last step T and then corresponds to f(x) with x = s_{T−1}, the problem reduces to approximating f(x). This can be done by supervised regression, fitting a regressor f'(x) on the data ∪_{n' ≤ n} D_{n'} collected so far.
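The regressor-fitting and selection step (lines 9-10 of Algorithm 1) lends itself to a short scikit-learn sketch. The candidate pool below is a small assumed subset of the model families the paper considers, and the per-family hyper-parameter search described in the next paragraphs is omitted for brevity.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import BayesianRidge
from sklearn.model_selection import cross_val_score

def fit_surrogate(X, y, tau=0.5):
    """Fit candidate regressors f' and keep those with 5-fold CV R^2 >= tau.

    X: (n, d) featurized sequences observed so far; y: (n,) measurements f(x).
    Returns the selected models S' and the ensemble reward f'' (mean over S'),
    or (None, None) to signal falling back to model-free updates this round.
    """
    candidates = [BayesianRidge(), RandomForestRegressor(), GradientBoostingRegressor()]
    selected = []
    for model in candidates:
        r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
        if r2 >= tau:
            selected.append(model.fit(X, y))
    if not selected:
        return None, None
    return selected, lambda Xq: np.mean([m.predict(Xq) for m in selected], axis=0)
```

When `fit_surrogate` returns a callable, it plays the role of f'' in lines 13-14 of Algorithm 1, scoring policy samples in place of the expensive oracle.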
We then use the resulting model to collect additional observations $(x, f'(x))$ and update the policy in a simulation phase, instead of only using observations $(x, f(x))$ from the true environment, which are expensive to collect. We call our method DyNA PPO since it is similar to the DYNA architecture (Sutton (1991); Peng et al. (2018)) and since it can be used for DNA sequence design.\nModel-based RL provides the promise of improved sample efficiency when the model is accurate, but it can reduce performance if insufficient data are available for training a trustworthy model. In this case, the policy is prone to exploit regions where the model is inaccurate (Janner et al., 2019). To reap the benefit of model-based RL when the model is accurate and to avoid reduced performance when it is not, we (i) automatically select the model from a set of candidate models of varying complexity, (ii) only use the selected model if it is accurate, and (iii) stop model-based training as soon as the model uncertainty increases by a certain threshold. After each round of experiments, we fit a set of candidate models on all available data to estimate f(x) via supervised regression. We quantify the accuracy of each candidate model by the $R^2$ score, which we estimate by five-fold cross-validation. See Appendix G for a discussion of different data-splitting strategies for selecting models using cross-validation. If the $R^2$ scores of all candidate models are below a pre-specified threshold $\tau$, we do not perform model-based training in that round. Otherwise, we build an ensemble model that includes all models with a score greater than or equal to $\tau$, and use the average prediction as the reward for training the policy. We treated $\tau$ as a tunable hyper-parameter, and found $\tau = 0.5$ to be optimal for all problems (see Figure 14). By ignoring the model if it is inaccurate, we aim to prevent the policy from exploiting deficiencies of the model (Janner et al., 2019).\nWe perform up to $M$ model-based optimization rounds (see Algorithm 1) and stop as soon as the model uncertainty has increased by a certain factor relative to the model uncertainty at the first round ($m = 1$). This is motivated by our observation that the model uncertainty is strongly correlated with the unknown model error, and it prevents training the policy with inaccurate model predictions (see Figures 12 and 13) as soon as the policy starts to explore regions on which the model was not trained.\nFor models, we consider nearest neighbor regression, Bayesian ridge regression, random forests, gradient boosting trees, Gaussian processes, and ensembles of deep neural networks. Within each model family, we additionally use cross-validation for tuning hyper-parameters, such as the number of trees, tree depth, kernels and kernel parameters, or the number of hidden layers and units (see Appendix A.7 for details). By testing and optimizing the hyper-parameters of different models automatically, the model capacity can dynamically increase as data becomes available.
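A minimal sketch of the gating and early-stopping logic described above, assuming scikit-learn; the ensemble-disagreement measure and the callables (sample_batch, encode, policy_update) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.model_selection import cross_val_score

def select_ensemble(candidate_models, X, y, tau=0.5):
    # Score each candidate by five-fold cross-validated R^2 and keep only
    # models whose score reaches the threshold tau (tau = 0.5 in the paper).
    selected = []
    for model in candidate_models:
        score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
        if score >= tau:
            selected.append(model.fit(X, y))
    return selected  # empty list => skip model-based training this round

def model_based_rounds(sample_batch, encode, policy_update, ensemble, M, blowup=2.0):
    # Up to M inner rounds on the surrogate; stop if ensemble disagreement
    # (our stand-in for model uncertainty) grows past `blowup` times its
    # value at the first inner round.
    baseline = None
    for m in range(M):
        batch = sample_batch()  # sequences sampled from pi_theta
        preds = np.stack([mdl.predict(encode(batch)) for mdl in ensemble])
        uncertainty = preds.std(axis=0).mean()
        if baseline is None:
            baseline = uncertainty
        elif uncertainty > blowup * baseline:
            break  # the policy is leaving the region the model was trained on
        policy_update(batch, preds.mean(axis=0))  # train on the averaged reward
```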
In Bayesian optimization, non-parametric models such as Gaussian processes are popular regressors, and they also automatically grow model capacity as more data arrive (Shahriari et al., 2015). However, with Bayesian optimization there is no opportunity to ignore the regressor entirely if it is unreliable. Furthermore, it relies on performing (approximate) Bayesian inference, which in practice is sensitive to the choice of hyper-parameters (Snoek et al., 2012).\nOverall, our method combines the positive attributes of both generative and discriminative approaches to sequence design. Our experiments do not compare to prior work on model-based RL, since these methods primarily focus on estimating a dynamics model for state transitions." }, { "heading": "2.4 DIVERSITY-PROMOTING REWARD FUNCTION", "text": "Learning policies that generate diverse sequences is important for several reasons. In many applications, f(x) is an in-vitro (taking place outside a living organism) surrogate for an in-vivo (taking place inside a living organism) functional measurement that is even more expensive to evaluate than f(x). The in-vivo measurement may depend on properties that are correlated with f(x) and on others that are not captured at all in-vitro, such as off-target effects or toxicity. To improve the chance that a sequence satisfying the ultimate in-vivo criteria is found, it is therefore desirable for the optimization procedure to discover a diverse set of candidate optima. Here, diversity is a downstream metric, for which training the policy $\pi_\theta(x)$ to maximize equation 1 will not necessarily yield good performance. For example, a high-quality policy can learn to always generate the same sequence $x$ with a high value of f(x), which results in zero diversity. An additional reason that diversity matters is that it yields a good exploration strategy, even in scenarios where optimizing equation 1 is sufficient. Finally, strategies that reward high-diversity policies can reduce the policies' tendency to generate exact duplicates.\nTo increase sequence diversity, we employ a simple exploration reward bonus based on the density of proposed sequences, similar to existing exploration techniques based on state visitation frequency (Bellemare et al., 2016). Specifically, we define the final reward as $r_T = f(x) - \lambda \cdot \mathrm{dens}(x)$, where $\mathrm{dens}(x)$ is the weighted number of sequences that have been proposed in previous rounds within a distance of less than $\epsilon$ of $x$, where the weight decays linearly with the distance. This reward penalizes proposing similar sequences multiple times, and the strength of the penalty is controlled by $\lambda$. As a result, the policy learns not to generate closely related sequences and hence explores the search space more effectively. We used the edit distance as the distance metric and tuned the distance radius $\epsilon$, where setting $\epsilon > 0$ improved exploration on high-dimensional problems (see Figure 11; a minimal sketch of this bonus appears after the related-work overview below). We also considered an alternative penalty based on the nearest-neighbor distance of the proposed sequence to past sequences, which we found to be less effective (see Figure 9)." }, { "heading": "3 RELATED WORK", "text": "Recently, machine learning approaches have been shown to be effective in optimizing real-world DNA and protein sequences (Wang et al., 2019; Chhibbar & Joshi, 2019; de Jongh et al., 2019; Liu et al., 2019; Sample et al., 2019; Wu et al., 2019). Existing methods for biological sequence design fall into three broad categories: evolutionary search, optimization using discriminative models (e.g. Bayesian optimization), and optimization using generative models (e.g. the cross-entropy method).
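As referenced in Section 2.4 above, here is a minimal sketch of the density-based exploration bonus; the plain dynamic-programming edit distance is our own simple implementation.

```python
def edit_distance(a, b):
    # Standard Levenshtein distance via dynamic programming.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def density(x, history, eps):
    # Weighted count of past proposals within distance eps of x; the weight
    # decays linearly from 1 (identical) to 0 (distance eps or more).
    return sum(max(0.0, 1.0 - edit_distance(x, h) / eps) for h in history)

def final_reward(f_value, x, history, lam, eps):
    # r_T = f(x) - lambda * dens(x)
    return f_value - lam * density(x, history, eps)
```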
Evolutionary approaches perform direct local search in the space of sequences. They include the aforementioned directed evolution and derivatives with application-specific mutation and recombination steps. Evolutionary approaches are appealing since they are simple and can easily incorporate human intuition into the design process, but they generally suffer from low sample efficiency.\nOptimization methods based on discriminative models alternate between two steps: (i) using the data that have been collected so far to fit a regressor $f'(x)$ approximating f(x), and (ii) using $f'(x)$ to define an acquisition function that is optimized to select the next batch of sequences. Recently, such an approach was used to optimize the binding affinity of IgG antibodies (Liu et al., 2019), where a neural network ensemble was used for $f'(x)$. In general, optimizing the acquisition function is a non-trivial combinatorial optimization problem. Liu et al. (2019) employed activation maximization, where gradient-based optimization is performed on a continuous relaxation of the discrete search space. However, this requires $f'(x)$ to be differentiable, and optimization of a continuous relaxation is vulnerable to leaving the data manifold (cf. deep dream (Mordvintsev et al., 2015)).\nBayesian optimization defines an acquisition function, such as the expected improvement (Mockus et al., 2014), based on the uncertainty of $f'(x)$, which enables balancing exploration and exploitation (an overview is provided in Shahriari et al. (2015)). Gaussian processes (GPs) are commonly used for Bayesian black-box optimization since they provide calibrated uncertainty estimates. Unfortunately, GPs are hard to scale to large, high-dimensional datasets and are sensitive to the choice of hyper-parameters. In response, recent work has performed continuous black-box optimization in the latent space of a deep generative model (Gómez-Bombarelli et al., 2018). However, this approach requires a pre-trained model such as a variational autoencoder to obtain the latent embeddings. Our model-based reinforcement learning approach is similar to these approaches in that we train a reinforcement learning policy to optimize a model $f'(x)$. However, our policy is also trained directly on observations of f(x) and is able to resort to model-free training by automatically identifying if the model $f'(x)$ is too inaccurate to be used as a surrogate of f(x). Janner et al. (2019) investigated conditions in which an estimate of model generalization (their analysis uses validation accuracy) could justify model usage in such model-based policy optimization settings. Hashimoto et al. (2018) proposed using a cascade of classifiers, one per round, to guide sampling of progressively better candidates.\nOptimization methods based on generative models seek to learn a distribution $p_\theta(x)$, parameterized by $\theta$, that maximizes the expected value of f(x): $\mathbb{E}_{x \sim p_\theta(x)}[f(x)]$. We note that this is the same form as variational optimization objectives, which allow the use of parameter-space evolutionary strategies (Staines & Barber, 2013; Wierstra et al., 2014; Salimans et al., 2017). Variants of the cross-entropy method (De Boer et al., 2005; Brookes et al., 2019a) optimize $\theta$ by alternating two steps: (i) sampling $x \sim p_\theta(x)$ and evaluating f(x), and (ii) updating $\theta$ to maximize this expectation. Methods differ in how step (ii) is performed. For example, hillclimb-MLE (Neil et al., 2018) performs maximum-likelihood training on the top $k$ sequences from step (i), as sketched below.
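A schematic sketch of that two-step loop in its top-k (hillclimb-MLE-style) form; `generator` stands for any generative model exposing sample() and fit(), an assumed interface rather than a specific library API.

```python
def cross_entropy_round(generator, f, batch_size=100, k=10):
    # Step (i): sample candidate sequences and evaluate the black-box f.
    samples = [generator.sample() for _ in range(batch_size)]
    scored = sorted(samples, key=f, reverse=True)
    # Step (ii): maximum-likelihood update on the top-k sequences only.
    generator.fit(scored[:k])
    return scored[0]  # best sequence found this round
```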
Similarly, Feedback GAN (FBGAN) uses samples whose target function value f(x) exceeds a fixed threshold for training a generative adversarial network (Gupta & Zou, 2018). Design by Adaptive Sampling (DbAs) performs weighted MLE of variational autoencoders (Kingma & Welling, 2014), where a sample's weight corresponds to the probability that f(x) is greater than a quantile cutoff under a noise model (Brookes & Listgarten, 2018). In Brookes et al. (2019b), $p_\theta(x)$ is further restricted to stay close to a prior distribution over sequences.\nAn alternative approach for optimizing the above expectation is RL. While RL has been used for generating natural text (Bahdanau et al., 2016), small molecules (Zhou et al., 2019), and RNA sequences that fold into a particular structure (Runge et al., 2018), we are not aware of applications of RL to optimizing DNA and protein sequences.\nDyNA PPO is related to existing work on model-based RL for sample-efficient control (Deisenroth & Rasmussen, 2011; Kurutach et al., 2018; Peng et al., 2018; Kaiser et al., 2019; Janner et al., 2019), with the key difference that in our work the state transition function is known and the reward function is unknown, whereas most existing model-based RL approaches seek to model the state-transition function and consider the reward function as known.\nPrior work on sequence generation incorporates non-differentiable rewards, like BLEU in machine translation, via weighted maximum likelihood (MLE). Norouzi et al. (2016) introduce reward-augmented MLE, while Bahdanau et al. (2016) fine-tune an MLE-pretrained model using actor-critic methods. Reinforcement learning has also been applied to solving combinatorial optimization problems (Bello et al., 2016; Bengio et al., 2018; Dai et al., 2017; Kool et al., 2018). In this setting, sample complexity is less important because evaluating f(x) only involves a fast software program.\nRecent work has proposed generative models of protein structures (Sabban & Markovsky, 2019) or generative models of amino acids conditioned on protein structure (Ingraham et al., 2019). Such methods are outside the scope of this paper's experiments, since they could only be used in experimental settings where protein structures, which are expensive to measure, are available.\nFinally, DNA and protein design differs from small-molecule design (Griffiths & Hernández-Lobato, 2017; Kusner et al., 2017; Gómez-Bombarelli et al., 2018; Jin et al., 2018; Sanchez-Lengeling & Aspuru-Guzik, 2018; Korovina et al., 2019) in the following points: (i) the number of sequences measured in parallel in the lab is typically higher (hundreds or thousands vs. dozens) due to the maturity of DNA synthesis and sequencing technology, (ii) the search space is a set of sequences instead of molecular graphs, which require specialized network architectures for both discriminative and generative models, and (iii) molecules must be optimized subject to the constraint that there is a set of reactions to synthesize them, whereas practically all DNA or protein sequences are synthesizable." }, { "heading": "4 EXPERIMENTS", "text": "In the next three sections, we compare DyNA PPO to existing methods on three in-silico optimization problems that we designed in collaboration with life scientists to faithfully simulate the behavior of real wet-lab experiments, which would be cost-prohibitive for a comprehensive methodological evaluation.
Along the way, we present ablation experiments that help to better understand the behavior of DyNA PPO.\nWe compare the performance of model-free policy optimization (PPO) and model-based optimization (DyNA PPO) with the following methods, which we discussed in Section 3. Further details for each method can be found in Appendix A:\n• RegEvolution: Local search based on regularized evolution (Real et al., 2019), which has performed well on other black-box optimization tasks and can be seen as an instance of directed evolution.\n• DbAs: Cross-entropy optimization using variational autoencoders (Brookes & Listgarten, 2018).\n• FBGAN: Cross-entropy optimization using generative adversarial networks (Gupta & Zou, 2018).\n• Bayesopt GP: Bayesian optimization using a Gaussian process regressor and activation maximization as the acquisition function solver.\n• Bayesopt ENN: Bayesian optimization using an ensemble of neural network regressors and activation maximization as the acquisition function solver.\n• Random: Guessing sequences uniformly at random.\nWe quantify optimization performance by the cumulative maximum reward f(x) for sequences proposed up to a given round, and we use the area under the cumulative maximum reward curve to summarize one optimization trajectory as a single number. We quantify sequence diversity (Section 2.4) in terms of the mean pairwise Hamming distance between the sequences proposed at each round. For problems with known optima, we also report the fraction of global optima found. We replicate experiments with 50 random seeds." }, { "heading": "4.1 OPTIMIZATION OF PROTEIN CONTACT ISING MODELS", "text": "We first consider synthetic black-box optimization problems based on the 3D structure of naturally-occurring proteins. Ising models fit on sets of evolutionarily related protein sequences have been shown to be accurate predictors of proteins' 3D structure (Shakhnovich & Gutin, 1993; Weigt et al., 2009; Marks et al., 2011; Sułkowska et al., 2012). We consider the inverse problem: given a protein, we seek to find the amino acid sequence that minimizes the energy of the Ising model parameterized by its structure. Optimizers are given a budget of 10 rounds with batch size 1000, and we consider sequences of length 50 (search space size $20^{50}$). The functional form of the energy function is given in Appendix B.1.\nOn the left of Figure 1 we consider the optimization trajectory for a representative protein, and on the right we compare the best f(x) found by each method across a range of proteins. We find that DyNA PPO considerably outperforms the other methods. We expect that this is because this synthetic reward landscape can be well-described by a model fit using few examples, which also explains the good performance of Bayesian optimization. On the left of Figure 2 we vary the number of inner-loop policy optimization rounds with observations from the model-based environment, where using 0 rounds corresponds to performing standard PPO. Since the surrogate model is of sufficient accuracy already at the beginning (right plot), performing more inner policy optimization rounds increases performance and enables DyNA PPO to generate high-quality sequences using very few evaluations of f(x)."
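A minimal sketch of the reported metrics described at the start of Section 4; summarizing the curve by its mean over rounds is our simplification of the area under the cumulative-maximum curve.

```python
import itertools
import numpy as np

def cumulative_max(rewards_per_round):
    # rewards_per_round: one array of observed f(x) values per round.
    return np.maximum.accumulate([max(r) for r in rewards_per_round])

def area_under_cummax(rewards_per_round):
    # One number per optimization trajectory.
    return cumulative_max(rewards_per_round).mean()

def mean_pairwise_hamming(seqs):
    # Diversity of the equal-length sequences proposed in one round.
    dists = [sum(a != b for a, b in zip(s, t))
             for s, t in itertools.combinations(seqs, 2)]
    return float(np.mean(dists))
```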
}, { "heading": "4.2 OPTIMIZATION OF TRANSCRIPTION FACTOR BINDING SITES", "text": "Transcription factors are proteins that bind to DNA sequences and regulate their activity. Barrera et al. (2016) measured the binding affinity of numerous transcription factors against all possible length-8 DNA sequences ($|V| = 4$). The resulting dataset defines 158 different discrete optimization tasks, where the goal of each task is to find a DNA sequence of length eight that maximizes the affinity towards one of the transcription factors. It is well suited for in-silico benchmarking since (i) it is exhaustive and thereby does not require estimating missing f(x), and (ii) the distinct local optima of all tasks are known and can be used to quantify exploration (see Appendix B.2 for details). The optimization methods are given a budget of 10 rounds with a batch size of B = 100 sequences. The search space size is $4^8$. We use one task (CRX REF R1) for optimizing the hyper-parameters of all methods, and test performance on 41 heterogeneous hold-out tasks.\nFigure 3 plots the performance of the methods on a single representative binding target (SIX REF R1) as a function of the total number of sequences measured so far. We find that DyNA PPO and PPO outperform all other methods in terms of both the cumulative maximum f(x) found and the fraction of local optima discovered. We also find that the diversity of the proposed sequences, quantified by the fraction of global optima found, is high compared to other generative approaches. This shows that our method continues to explore the search space by proposing novel sequences instead of converging to a single sequence or a handful of sequences, a desired property as discussed in Section 2.4. Across all tasks, DyNA PPO and PPO rank highest compared with the other methods (Table 1).\nIn Figures 4 and 5, we analyze the effects of two key design decisions of DyNA PPO: model-based training and promoting exploration. We find that automated model selection automatically increases the complexity of the model, but that the models are not always accurate enough to be used for model-based training. This explains the relatively small improvement of DyNA PPO over PPO. We also find that the exploration bonus outlined in Section 2.4 is more effective than entropy regularization in finding multiple local optima and promoting sequence diversity." }, { "heading": "4.3 OPTIMIZATION OF ANTI-MICROBIAL PEPTIDES", "text": "Next, we seek to design antimicrobial peptides (AMPs). AMPs are relatively short (8-75 amino acids) protein sequences ($|V| = 20$ amino acids), which are promising candidates against multi-resistant pathogens due to their wide range of antimicrobial activities. We use the dataset proposed by Witten & Witten (2019), which contains 6,760 unique AMP sequences and their antimicrobial activity towards multiple pathogens. We follow Witten & Witten (2019) for preprocessing the dataset and generating non-AMP sequences as negative training samples. Unlike the transcription factor binding site dataset, we do not have wet-lab experiments for every sequence in the search space. Therefore, we fit random forest classifiers to predict whether a sequence is antimicrobial towards a certain pathogen in the dataset (see Section B.3), and use the predicted probability as the functional measurement f(x) to optimize. Given the high accuracy of the classifiers (cross-validated AUC 0.94 and 0.99), we expect that the reward landscape of f(x) is of realistic difficulty. We perform 8 rounds with a batch size of 250 and restrict the sequence length to at most 50 characters (search space size $20^{50}$).
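To illustrate how such a classifier-based reward can be constructed, here is a minimal sketch; the one-hot-with-padding featurization mirrors Appendix B.3, while the helper names and hyper-parameters are our own assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

AMINO_ACIDS = "ARNDCQEGHILKMFPSTWYV"

def featurize(seq, max_len=50):
    # One-hot encoding with zero padding up to max_len, mirroring the
    # padding of variable-length peptides described in Appendix B.3.
    x = np.zeros((max_len, len(AMINO_ACIDS)))
    for i, c in enumerate(seq[:max_len]):
        x[i, AMINO_ACIDS.index(c)] = 1.0
    return x.ravel()

def make_reward(amp_seqs, non_amp_seqs):
    # Fit the classifier once, then use its predicted probability of
    # antimicrobial activity as the black-box reward f(x).
    X = np.stack([featurize(s) for s in amp_seqs + non_amp_seqs])
    y = np.array([1] * len(amp_seqs) + [0] * len(non_amp_seqs))
    clf = RandomForestClassifier(n_estimators=200).fit(X, y)
    return lambda seq: clf.predict_proba(featurize(seq).reshape(1, -1))[0, 1]
```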
\nFigure 6 compares methods on C. albicans. We find that model-based optimization using DyNA PPO enables finding high-reward sequences in early rounds, though model-free PPO slightly surpasses the performance of DyNA PPO later on. Both DyNA PPO and PPO considerably outperform the other methods in terms of the maximum f(x) found. The density-based exploration bonus prevents PPO and DyNA PPO from generating non-unique sequences (Figure 11). Stopping model-based training as soon as the model uncertainty increases by a certain factor prevents DyNA PPO from converging to a sub-optimal solution when performing many model-based optimization rounds (Figures 12 and 13)." }, { "heading": "5 CONCLUSION", "text": "We have shown that RL is an attractive alternative to existing methods for designing DNA and protein sequences. We have proposed DyNA PPO, a model-based extension of PPO (Schulman et al., 2017) with automatic model selection that improves sample efficiency and incorporates a reward function that promotes exploration by penalizing identical sequences. By approximating an expensive wet-lab experiment with a surrogate model, we can perform many rounds of optimization in simulation. While this work has focused on showing the benefit of DyNA PPO for biological sequence design, we believe that the large-batch, low-round optimization setting described here may well be of general interest, and that model-based RL may be applicable in other scientific and economic domains." }, { "heading": "A.1 REGULARIZED EVOLUTION", "text": "Regularized evolution is a variant of directed evolution that regularizes the search by keeping a fixed number of individuals alive as candidates for selection (analogous to death by aging). At each round, it generates a batch of child sequences by sampling two parent sequences per child from the population via tournament selection, i.e., selecting the fittest out of K randomly sampled individuals. It then performs crossover of the two parent sequences by copying the characters of one parent from left to right and randomly transitioning to transcribing from the other parent sequence with some crossover probability at each step. Child sequences are mutated by substituting characters independently with other characters with some substitution probability. For variable-length sequences, we also allowed insertion and deletion mutations. As hyper-parameters, we tune the tournament size and the substitution, insertion, and deletion probabilities." }, { "heading": "A.2 MCMC AND SIMULATED ANNEALING", "text": "MCMC and simulated annealing (Kirkpatrick et al., 1983) resemble evolution with no crossover and with selection occurring only between an individual and its parent. Beginning with a random population, each individual evolves as a single chain, with a neighborhood structure defined by the mutation operator described in Section A.1. We denote by $x$ and $x'$ a parent and child sequence, respectively. A transition $x \to x'$ is always accepted if the reward increases ($f(x') > f(x)$). Otherwise, the transition is accepted with some acceptance probability. For MCMC, the acceptance probability is $f(x')/f(x)$, while for simulated annealing it is $\exp((f(x') - f(x))/T)$ for some temperature $T$. A high temperature increases the likelihood of accepting a move that decreases the reward. The next mutation on the chain begins from $x$ if the transition is rejected, and from $x'$ otherwise. We treated the temperature $T$ as a tunable hyper-parameter in addition to the evolution hyper-parameters described in Section A.1."
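A minimal sketch of these acceptance rules, assuming a mutate() operator in the spirit of A.1 and f(x) in (0, 1] so that the MCMC ratio is a valid probability.

```python
import math
import random

def accept(f_old, f_new, method="sa", T=1.0):
    if f_new > f_old:
        return True  # always accept improving moves
    if method == "mcmc":
        return random.random() < f_new / f_old
    # Simulated annealing: accept with probability exp((f(x') - f(x)) / T).
    return random.random() < math.exp((f_new - f_old) / T)

def chain_step(x, f, mutate, method="sa", T=1.0):
    # One step of the chain: mutate, then keep the child only if accepted.
    x_new = mutate(x)
    return x_new if accept(f(x), f(x_new), method, T) else x
```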
}, { "heading": "A.3 FEEDBACK GAN", "text": "We follow the methodology suggested by Gupta & Zou (2018). Instead of using a constant threshold for selecting positive sequences as described in the original publication, we used a quantile cutoff, which does not depend on the absolute scale of f(x) and performed better in our experiments. As hyper-parameters, we tuned the quantile cutoff, learning rate, batch size, discriminator and generator training epochs, the gradient penalty weight, the Gumble softmax temperature, and the number of latent variables of the generator." }, { "heading": "A.4 DBAS", "text": "We follow the methodology suggested by Brookes & Listgarten (2018). As hyper-parameters, we optimized the quantile for selecting training samples, learning rate, batch size, training epochs, number of hidden units of the MLP generator and discriminator, and number of latent variables. The generative model is an variational autoencoder with a multi-layer perceptron decoder. We also considered DbAs with a LSTM as generative model, which performed slightly better than a VAE on the TfBind8 problem but worse on the PdbIsing and AMP problem (see Figure 8)." }, { "heading": "A.5 BAYESIAN OPTIMIZATION", "text": "As regressors, we considered a Gaussian process (GP) with RBF kernel on one-hot features, and an ensemble of ten fully-connected neural networks with one fully connected layer and 128 hidden units. We used the regressor output to compute the expected improvement or posterior mean acquisition function, which we maximized by gradient ascent for a certain number of acquisition steps following Killoran et al. (2017). We took the resulting B unique sequences with highest acquisition function value as sequences to measure in the next round. We tuned the length scale and variance of the RBF kernel, and the learning rate, batch size, and number of training epochs of the neural network ensemble. We further tuned the number of gradient ascent steps for activation maximization." }, { "heading": "A.6 PPO AND DYNA PPO", "text": "We used the PPO implementation of the TF-Agents RL library (Guadarrama et al., 2018). After each round, we trained trained the agent on the collected batch of sequences for a relatively high number of steps (about 72) since it resulted in a performance increase compared with performing only a single training step. We used the adaptive KL trust region penalty, which performed slightly better than importance ratio clipping in our experiments Schulman et al. (2017). We used a policy and value network with one fully connected layer and 128 hidden units. Both networks take the current position and the W last generated characters as input, which we padded at the beginning of the sequence. We set the context window W to the minimum of the total sequence length and 50. As hyper-parameters, we tuned the learning rate, number of training steps, adaptive KL target, and entropy regularization. For DyNA PPO, we also tuned the maximum number of model-based optimization rounds M (see Section 2.3)." }, { "heading": "A.7 AUTOMATED MODEL SELECTION", "text": "Automatic model selection optimizes the hyper-parameters of a set of candidate models by randomized search, and evaluates each hyper-parameter configuration by five-fold cross-validation using the R2 score. To account for randomness in the R2 score between models due to different crossvalidation splits, we used the same split for evaluating each of the models per round. 
We considered the following candidate models (implemented in Scikit-learn (Pedregosa et al., 2011)) and corresponding hyper-parameters:\n• KNeighborsRegressor: n_neighbors\n• BayesianRidge: alpha_1, alpha_2, lambda_1, lambda_2\n• RandomForestRegressor: max_depth, max_features, n_estimators\n• ExtraTreesRegressor: max_depth, max_features, n_estimators\n• GradientBoostingRegressor: learning_rate, max_depth, n_estimators\n• GaussianProcessRegressor: with RBF, RationalQuadratic, and Matern kernels\nWe also considered an ensemble of 10 neural networks with two convolutional layers and one fully-connected layer, and optimized the learning rate and number of training epochs." }, { "heading": "B DATASET DETAILS", "text": "" }, { "heading": "B.1 PROTEIN CONTACT ISING MODELS", "text": "Given a protein from the Protein Data Bank (Berman et al., 2003), we compute the energy $E(x)$ for sequence $x$ as $E(x) = \sum_i \phi_i(x_i) + \sum_{ij} C_{ij}\,\phi(x_i, x_j)$, where $x_i$ refers to the character at the $i$-th position of sequence $x$. $C_{ij}$ is an indicator for whether the $C_\alpha$ atoms of the residues at positions $i$ and $j$ are separated by less than 6 Angstroms when the protein folds. $\phi(x_i, x_j)$ is a widely-used ‘pair potential’ based on co-occurrence probabilities derived from the structures of real-world proteins (Miyazawa & Jernigan, 1996). The same 20 × 20 table of pair potentials is used at all positions in the sequence, and thus the difference in energy functions across proteins is dictated only by their differing contact-map structure. We set the local term $\phi_i(x_i)$ to zero. In future work, it would be interesting to consider non-zero local terms.\nOur experiments consider a set of qualitatively different proteins listed at the bottom-right of Figure 1. We identify the local optima using the same procedure as in Section B.2, except without accounting for reverse complements." }, { "heading": "B.2 TRANSCRIPTION FACTOR BINDING SITE DATASET", "text": "We used the dataset described by Barrera et al. (2016), and min-max normalized binding affinities between zero and one. To reduce computational costs, we only considered the first replicate (REF R1) of each wild-type transcription factor in the dataset, which resulted in 41 optimization targets that we used for comparing optimizers, as described in Section 4.2. We extracted local optima for each binding target as follows. First, we separated sequences into forward and reverse sequences by ordering sequences lexicographically and including each sequence in the set of forward sequences unless the set already contained its reverse complement. We then chose the 100 forward sequences with the highest binding affinity and clustered them using the Hamming distance metric, where we determined the number of clusters by finding the number of PCA components required to explain 95% of the variance. We then used the sequences with the highest reward per cluster and their reverse complements as local optima."
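A sketch of this extraction for the forward sequences; the PCA-based choice of cluster count follows the text above, while the use of KMeans is our assumption since the paper does not name the clustering algorithm (reverse complements would be added afterwards).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def local_optima(top_seqs, top_scores, vocab="ACGT"):
    # One-hot features so that Euclidean distance reflects Hamming distance.
    X = np.stack([np.eye(len(vocab))[[vocab.index(c) for c in s]].ravel()
                  for s in top_seqs])
    # Number of clusters = number of PCA components explaining 95% of variance.
    n_clusters = PCA(n_components=0.95).fit(X).n_components_
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
    # Keep the highest-scoring sequence in each cluster.
    best = {}
    for i, c in enumerate(labels):
        if c not in best or top_scores[i] > top_scores[best[c]]:
            best[c] = i
    return [top_seqs[i] for i in best.values()]
```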
}, { "heading": "B.3 ANTIMICROBIAL PEPTIDE DATASET", "text": "We downloaded the dataset1 provided by Witten & Witten (2019), and followed the paper for preprocessing sequences and generating non-AMP sequences as negative training samples. We additionally excluded sequences containing cysteine and sequences shorter than 15 or longer than 50 amino acids. We fit one classifier to predict whether a sequence is antimicrobial towards either E. coli, S. aureus, P. aeruginosa, or B. subtilis, which we used for hyper-parameter tuning, and a second classifier for C. albicans, which we used for hold-out evaluation. We used C. albicans as the hold-out target since its antimicrobial activity was least correlated with the activity of the other pathogens in the dataset with more than 1000 AMP sequences. We used random forest classifiers since they were more accurate (cross-validated AUC 0.99 and 0.94) than alternative models such as k-nearest neighbors, Gaussian processes, or neural networks. Since sequences are variable-length, we padded them to the maximum sequence length of 50 and extended the vocabulary with an additional end-of-sequence token. Tokens after the first end-of-sequence token were ignored when evaluating f(x).\n1https://github.com/zswitten/Antimicrobial-Peptides" }, { "heading": "C COMPARISON OF RL METHODS", "text": "DyNA PPO is built on PPO, which we have found to outperform other policy-based and value-based RL methods in practice on our problems. In Figure 7 we contrast the performance of PPO (Schulman et al., 2017), REINFORCE (Williams, 1992), deep Q-learning (DQN) (Mnih et al., 2015), and categorical distributional deep Q-learning (CatDQN) (Bellemare et al., 2017) on all problems considered in Section 4. We find that PPO has better exploration properties than REINFORCE, which tends to converge too soon to a local optimum. The poor performance of DQN and CatDQN can be explained by the sparse reward (the reward is only non-zero at the terminal state), such that the Bellman error and training loss for updating the Q-network are zero in most states. We also found the performance of DQN and CatDQN to be sensitive to the choice of the epsilon-greedy rate and Boltzmann temperature for trading off exploration and exploitation and increasing diversity." }, { "heading": "D COMPARISON OF ADDITIONAL BASELINES", "text": "" }, { "heading": "E ANALYSIS OF THE EXPLORATION BONUS", "text": "" }, { "heading": "F ANALYSIS OF MODEL-BASED TRAINING", "text": "" }, { "heading": "G COMPARISON OF CROSS-VALIDATION SPLITTING STRATEGIES", "text": "We used the k-fold cross-validation tools in scikit-learn for performing model selection. After publication of the paper, we discovered that the default behavior of sklearn.model_selection.KFold is not to shuffle the input data but to slice it into chunks based on the input ordering.\nWhen we switched to using random cross-validation folds, we found that the predictive accuracy of the models was considerably higher than when using folds based on the data order. This led to different models being selected, which led to a slight decrease in black-box optimization performance compared to not shuffling the input data (Figure 15; panels: TF Bind, Protein Ising, AMP).\nOur data was sorted in the order in which it appeared in the optimization rounds. Hence, k-fold cross-validation without shuffling corresponds to splitting the data approximately by rounds, which favorably selects models that generalize across rounds. This is desired since samples in the same round tend to be correlated. It splits the data only approximately by rounds if the number of folds k is not equal to the number of rounds performed so far.\nIn response, we ran experiments using a true round-based split, which performed similarly to splitting the data approximately by rounds using sklearn.model_selection.KFold with shuffle=False. Based on the similar performance of these two splitting strategies, we did not change the experiments in the paper." } ]
2020
null
SP:b1f2e7dee0606c25926a81ac32462c8bd2cb4808
[ "This paper proposes an algorithm to learn coordination strategies for multi-agent reinforcement learning. It combines gradient-based optimization (Actor-critic) with Neuroevolution (genetic algorithms style). Specifically, Actor-critic is used to train an ensemble of agents (referred to as “team”) using a manually designed agent-specific reward. Coordination within a team is then learned with Neuroevolution. The overall design accommodates sharing of data between Actor-critic and Neuroevolution, and migration of policies. Evaluation is done using the multi-particle environments (Lowe et. al. 2017) and a Rover domain task.", "This paper proposes to use a two-level optimization process to solve the challenge of optimizing the team reward and the agent's reward simultaneously, which are often not aligned. It applies the evolutionary algorithm to optimize the sparse team reward, while using RL (TD3) to optimize the agent's dense reward. In this way, there is no need to combine these two rewards into a scalar that often requires extensive manual tuning." ]
Many cooperative multiagent reinforcement learning environments provide agents with a sparse team-based reward, as well as a dense agent-specific reward that incentivizes learning basic skills. Training policies solely on the team-based reward is often difficult due to its sparsity. Also, relying solely on the agent-specific reward is sub-optimal because it usually does not capture the team coordination objective. A common approach is to use reward shaping to construct a proxy reward by combining the individual rewards. However, this requires manual tuning for each environment. We introduce Multiagent Evolutionary Reinforcement Learning (MERL), a split-level training platform that handles the two objectives separately through two optimization processes. An evolutionary algorithm maximizes the sparse team-based objective through neuroevolution on a population of teams. Concurrently, a gradient-based optimizer trains policies to only maximize the dense agent-specific rewards. The gradient-based policies are periodically added to the evolutionary population as a way of information transfer between the two optimization processes. This enables the evolutionary algorithm to use skills learned via the agent-specific rewards toward optimizing the global objective. Results demonstrate that MERL significantly outperforms state-of-the-art methods, such as MADDPG, on a number of difficult coordination benchmarks.
[]
[ { "authors": [ "C. Colas", "O. Sigaud", "P.-Y. Oudeyer" ], "title": "Gep-pg: Decoupling exploration and exploitation in deep reinforcement learning algorithms", "venue": "arXiv preprint arXiv:1802.05054,", "year": 2018 }, { "authors": [ "S. Devlin", "M. Grześ", "D. Kudenko" ], "title": "Multi-agent, reward shaping for robocup keepaway", "venue": "In The 10th International Conference on Autonomous Agents and Multiagent Systems-Volume", "year": 2011 }, { "authors": [ "D. Floreano", "P. Dürr", "C. Mattiussi" ], "title": "Neuroevolution: from architectures to learning", "venue": "Evolutionary Intelligence,", "year": 2008 }, { "authors": [ "J. Foerster", "R.Y. Chen", "M. Al-Shedivat", "S. Whiteson", "P. Abbeel", "I. Mordatch" ], "title": "Learning with opponent-learning awareness", "venue": "In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems,", "year": 2018 }, { "authors": [ "J.N. Foerster", "G. Farquhar", "T. Afouras", "N. Nardelli", "S. Whiteson" ], "title": "Counterfactual multi-agent policy gradients", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "D.B. Fogel" ], "title": "Evolutionary computation: toward a new philosophy of machine intelligence, volume 1", "venue": null, "year": 2006 }, { "authors": [ "S. Fujimoto", "H. van Hoof", "D. Meger" ], "title": "Addressing function approximation error in actor-critic methods", "venue": "arXiv preprint arXiv:1802.09477,", "year": 2018 }, { "authors": [ "M. Jaderberg", "V. Dalibard", "S. Osindero", "W.M. Czarnecki", "J. Donahue", "A. Razavi", "O. Vinyals", "T. Green", "I. Dunning", "K. Simonyan" ], "title": "Population based training of neural networks", "venue": "arXiv preprint arXiv:1711.09846,", "year": 2017 }, { "authors": [ "N. Justesen", "S. Risi" ], "title": "Learning macromanagement in starcraft from replays using deep learning", "venue": "IEEE Conference on Computational Intelligence and Games (CIG),", "year": 2017 }, { "authors": [ "S. Khadka", "K. Tumer" ], "title": "Evolution-guided policy gradient in reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "S. Khadka", "S. Majumdar", "T. Nassar", "Z. Dwiel", "E. Tumer", "S. Miret", "Y. Liu", "K. Tumer" ], "title": "Collaborative evolutionary reinforcement learning", "venue": "arXiv preprint arXiv:1905.00976v2,", "year": 2019 }, { "authors": [ "H. Kitano", "M. Asada", "Y. Kuniyoshi", "I. Noda", "E. Osawa" ], "title": "Robocup: The robot world cup", "venue": null, "year": 1995 }, { "authors": [ "A. Lazaridou", "A. Peysakhovich", "M. Baroni" ], "title": "Multi-agent cooperation and the emergence of (natural) language", "venue": "arXiv preprint arXiv:1612.07182,", "year": 2016 }, { "authors": [ "F.-D. Li", "M. Wu", "Y. He", "X. Chen" ], "title": "Optimal control in microgrid using multi-agent reinforcement learning", "venue": "ISA transactions,", "year": 2012 }, { "authors": [ "T.P. Lillicrap", "J.J. Hunt", "A. Pritzel", "N. Heess", "T. Erez", "Y. Tassa", "D. Silver", "D. Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "arXiv preprint arXiv:1509.02971,", "year": 2015 }, { "authors": [ "M.L. Littman" ], "title": "Markov games as a framework for multi-agent reinforcement learning", "venue": "In Machine learning proceedings", "year": 1994 }, { "authors": [ "S. Liu", "G. Lever", "J. Merel", "S. Tunyasuvunakool", "N. Heess", "T. 
Graepel" ], "title": "Emergent coordination through competition", "venue": "arXiv preprint arXiv:1902.07151,", "year": 2019 }, { "authors": [ "R. Lowe", "Y. Wu", "A. Tamar", "J. Harb", "O.P. Abbeel", "I. Mordatch" ], "title": "Multi-agent actor-critic for mixed cooperative-competitive environments", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "B. Lüders", "M. Schläger", "A. Korach", "S. Risi" ], "title": "Continual and one-shot learning through neural networks with dynamic external memory", "venue": "In European Conference on the Applications of Evolutionary Computation,", "year": 2017 }, { "authors": [ "I. Mordatch", "P. Abbeel" ], "title": "Emergence of grounded compositional language in multi-agent populations", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "A.Y. Ng", "D. Harada", "S. Russell" ], "title": "Policy invariance under reward transformations: Theory and application to reward shaping", "venue": "In ICML,", "year": 1999 }, { "authors": [ "A. Rahmattalabi", "J.J. Chung", "M. Colby", "K. Tumer. D" ], "title": "Structural credit assignment in tightly coupled multiagent domains", "venue": "In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),", "year": 2016 }, { "authors": [ "C. Resnick", "W. Eldridge", "D. Ha", "D. Britz", "J. Foerster", "J. Togelius", "K. Cho", "J. Bruna" ], "title": "Pommerman: A multi-agent playground", "venue": "arXiv preprint arXiv:1809.07124,", "year": 2018 }, { "authors": [ "S. Shalev-Shwartz", "S. Shammah", "A. Shashua" ], "title": "Safe, multi-agent, reinforcement learning for autonomous driving", "venue": "arXiv preprint arXiv:1610.03295,", "year": 2016 }, { "authors": [ "W. Sheng", "Q. Yang", "J. Tan", "N. Xi" ], "title": "Distributed multi-robot coordination in area exploration", "venue": "Robotics and Autonomous Systems,", "year": 2006 }, { "authors": [ "D. Silver", "T. Hubert", "J. Schrittwieser", "I. Antonoglou", "M. Lai", "A. Guez", "M. Lanctot", "L. Sifre", "D. Kumaran", "T. Graepel" ], "title": "Mastering chess and shogi by self-play with a general reinforcement learning algorithm", "venue": "arXiv preprint arXiv:1712.01815,", "year": 2017 }, { "authors": [ "W.M. Spears", "K.A. De Jong", "T. Bäck", "D.B. Fogel", "H. De Garis" ], "title": "An overview of evolutionary computation", "venue": "In European Conference on Machine Learning,", "year": 1993 }, { "authors": [ "R.S. Sutton", "A.G. Barto" ], "title": "Reinforcement learning: An introduction, volume 1", "venue": "MIT press Cambridge,", "year": 1998 }, { "authors": [ "S. Thrun", "W. Burgard", "D. Fox" ], "title": "A real-time algorithm for mobile robot mapping with applications to multi-robot and 3d mapping", "venue": "In ICRA,", "year": 2000 }, { "authors": [ "K. Tumer", "A. Agogino" ], "title": "Distributed agent-based air traffic flow management", "venue": "In Proceedings of the 6th international joint conference on Autonomous agents and multiagent systems,", "year": 2007 }, { "authors": [ "O. Vinyals", "T. Ewalds", "S. Bartunov", "P. Georgiev", "A.S. Vezhnevets", "M. Yeo", "A. Makhzani", "H. Küttler", "J. Agapiou", "J. Schrittwieser" ], "title": "Starcraft ii: A new challenge for reinforcement learning", "venue": "arXiv preprint arXiv:1708.04782,", "year": 2017 }, { "authors": [ "S.A. Williamson", "E.H. Gerding", "N.R. 
Jennings" ], "title": "Reward shaping for valuing communications during multi-agent coordination", "venue": "In Proceedings of The 8th International Conference on Autonomous Agents and Multiagent Systems-Volume", "year": 2009 }, { "authors": [ "L. Yliniemi", "A.K. Agogino", "K. Tumer" ], "title": "Multirobot coordination for space exploration", "venue": "AI Magazine,", "year": 2014 } ]
[ { "heading": "1 INTRODUCTION", "text": "Cooperative multiagent reinforcement learning (MARL) studies how multiple agents can learn to coordinate as a team toward maximizing a global objective. Cooperative MARL has been applied to many real world applications such as air traffic control (Tumer and Agogino, 2007), multi-robot coordination (Sheng et al., 2006; Yliniemi et al., 2014), communication and language (Lazaridou et al., 2016; Mordatch and Abbeel, 2018), and autonomous driving (Shalev-Shwartz et al., 2016).\nMany such environments endow agents with a team reward that reflects the team’s coordination objective, as well as an agent-specific local reward that rewards basic skills. For instance, in soccer, dense local rewards could capture agent-specific skills such as passing, dribbling and running. The agents must then coordinate when and where to use these skills in order to optimize the team objective, which is winning the game. Usually, the agent-specific reward is dense and easy to learn from, while the team reward is sparse and requires the cooperation of all or most agents.\nHaving each agent directly optimize the team reward and ignore the agent-specific reward usually fails or is sample-inefficient for complex tasks due to the sparsity of the team reward. Conversely, having each agent directly optimize the agent-specific reward also fails because it does not capture the team’s objective, even with state of the art multiagent RL algorithms such as MADDPG (Lowe et al., 2017).\nOne solution to this problem is to use reward shaping, where extensive domain knowledge about the task is used to create a proxy reward function (Rahmattalabi et al., 2016). Constructing this proxy reward function is difficult in complex environments, and is domain-dependent. Apart from requiring domain knowledge and manual tuning, this approach also poses risks of changing the underlying problem itself (Ng et al., 1999). Simple approaches to creating a proxy reward via linear combinations of the two objectives also fail to solve or generalize to complex coordination tasks (Devlin et al., 2011; Williamson et al., 2009).\nIn this paper, we introduce Multiagent Evolutionary Reinforcement Learning (MERL), a state-ofthe-art algorithm for cooperative MARL that does not require reward shaping. MERL is a split-level training platform that combines gradient-based and gradient-free optimization. The gradient-free optimizer is an evolutionary algorithm that maximizes the team objective through neuroevolution. The gradient-based optimizer is a policy gradient algorithm that maximizes each agent’s dense, local rewards. These gradient-based policies are periodically copied into the evolutionary population. The two processes operate concurrently and share information through a shared replay buffer.\nA key strength of MERL is that it is a general method which does not require domain-specific reward shaping. This is because MERL optimizes the team objective directly while simultaneously leveraging agent-specific rewards to learn basic skills. We test MERL in a number of multiagent coordination benchmarks. Results demonstrate that MERL significantly outperforms state-of-the-art methods such as MADDPG, while using the same observations and reward functions. We also demonstrate that MERL scales gracefully to increasing complexity of coordination objectives where MADDPG and its variants fail to learn entirely." 
}, { "heading": "2 BACKGROUND AND RELATED WORK", "text": "Markov Games: A standard reinforcement learning (RL) setting is often formalized as a Markov Decision Process (MDP) and consists of an agent interacting with an environment over a finite number of discrete time steps. This formulation can be extended to multiagent systems in the form of partially observable Markov games (Littman, 1994; Lowe et al., 2017). An N -agent Markov game is defined by a global state of the world, S, and a set of N observations {Oi} and N actions {Ai} corresponding to theN agents. At each time step t, each agent observes its corresponding observation Oti and maps it to an action A t i using its policy πi.\nEach agent receives a scalar reward rti based on the global state St and joint action of the team. The world then transitions to the next state St+1 which produces a new set of observations {Oi}. The process continues until a terminal state is reached. Ri = ∑T t=0 γ\ntrti is the total return for agent i with discount factor γ ∈ (0, 1]. Each agent aims to maximize its expected return. TD3: Policy gradient (PG) methods frame the goal of maximizing the expected return as the minimization of a loss function. A widely used PG method for continuous, high-dimensional action spaces is DDPG (Lillicrap et al., 2015). Recently, (Fujimoto et al., 2018) extended DDPG to Twin Delayed DDPG (TD3), addressing its well-known overestimation problem. TD3 is the state-of-the-art, off-policy algorithm for model-free DRL in continuous action spaces.\nTD3 uses an actor-critic architecture (Sutton and Barto, 1998) maintaining a deterministic policy (actor) π : S → A, and two distinct criticsQ : S ×A → Ri. Each critic independently approximates the actor’s action-value function Qπ. A separate copy of the actor and critics are kept as target networks for stability and are updated periodically. A noisy version of the actor is used to explore the environment during training. The actor is trained using a noisy version of the sampled policy gradient computed by backpropagation through the combined actor-critic networks. This mitigates overfitting of the deterministic policy by smoothing the policy gradient updates.\nEvolutionary Reinforcement Learning (ERL) is a hybrid algorithm that combines Evolutionary Algorithms (EAs) (Floreano et al., 2008; Lüders et al., 2017; Fogel, 2006; Spears et al., 1993), with policy gradient methods (Khadka and Tumer, 2018). Instead of discarding the data generated during a standard EA rollout, ERL stores this data in a central replay buffer shared with the policy gradient’s own rollouts - thereby increasing the diversity of the data available for the policy gradient learners. Since the EA directly optimizes for episode-wide return, it biases exploration towards states with higher long-term returns. The policy gradient algorithm which learns using this state distribution inherits this implicit bias towards long-term optimization. Concurrently, the actor trained by the policy gradient algorithm is inserted into the evolutionary population allowing the EA to benefit from the fast gradient-based learning.\nRelated Work: Lowe et al. (2017) introduced MADDPG which tackled the inherent non-stationarity of a multiagent learning environment by leveraging a critic which had full access to the joint state and action during training. Foerster et al. (2018b) utilized a similar setup with a centralized critic across agents to tackle StarCraft micromanagement tasks. 
An algorithm that could explicitly model other agents' learning was investigated in Foerster et al. (2018a). However, all these approaches rely on a dense agent reward that properly captures the team objective. Methods to solve for these types of agent-specific reward functions were investigated in Li et al. (2012) but were limited to tasks with strong simulators where tree-based planning could be used.\nA closely related work to MERL is Liu et al. (2019), where Population-Based Training (PBT) (Jaderberg et al., 2017) is used to optimize the relative importance of a collection of dense, shaped rewards automatically during training. This can be interpreted as a singular central reward function constructed by scalarizing a collection of reward signals, where the scalarization coefficients are adaptively learned during training. In contrast, MERL optimizes its reward functions independently, with information transfer across them facilitated through shared replay buffers and direct policy migration. This form of information transfer through a shared replay buffer has been explored extensively in recent literature (Colas et al., 2018; Khadka et al., 2019)." }, { "heading": "3 MULTIAGENT EVOLUTIONARY REINFORCEMENT LEARNING", "text": "MERL leverages both agent-specific and team objectives through a hybrid algorithm that combines gradient-free and gradient-based optimization. The gradient-free optimizer is an evolutionary algorithm that maximizes the team objective through neuroevolution. The gradient-based optimizer trains policies to maximize agent-specific rewards. These gradient-based policies are periodically added to the evolutionary population and participate in evolution. This enables the evolutionary algorithm to use agent-specific skills learned by training on the agent-specific rewards toward optimizing the team objective, without needing to resort to reward shaping.\nAlgorithm 1 Multiagent Evolutionary Reinforcement Learning\n1: Initialize a population of $k$ multi-head teams $pop_\pi$, each with weights $\theta^\pi$ initialized randomly\n2: Initialize a shared critic $Q$ with weights $\theta^Q$\n3: Initialize an ensemble of $N$ empty cyclic replay buffers $\mathcal{R}_k$, one for each agent\n4: Define a white Gaussian noise generator $\mathcal{W}_g$ and a random number generator $r() \in [0, 1)$\n5: for generation = $1, 2, \dots, \infty$ do\n6: for team $\pi \in pop_\pi$ do\n7: $g, \mathcal{R}$ = Rollout($\pi$, $\mathcal{R}$, noise=None, $\xi$)\n8: $\_$, $\mathcal{R}$ = Rollout($\pi$, $\mathcal{R}$, noise=$\mathcal{W}_g$, $\xi = 1$)\n9: Assign $g$ as $\pi$'s fitness\n10: end for\n11: Rank the population $pop_\pi$ based on fitness scores\n12: Select the first $e$ teams $\pi \in pop_\pi$ as elites\n13: Select teams $\pi$ from $pop_\pi$ to form the set $S$ using tournament selection\n14: while $|S| < (k - e)$ do\n15: Perform single-point crossover between a randomly sampled elite $\pi$ and a $\pi \in S$, and append the result to $S$\n16: end while\n17: for agent $k = 1, \dots, N$ do\n18: Randomly sample a minibatch of $T$ transitions $(o_i, a_i, l_i, o_{i+1})$ from $\mathcal{R}_k$\n19: Compute $y_i = l_i + \gamma \min_{j=1,2} Q'_j(o_{i+1}, a^\sim \mid \theta^{Q'_j})$\n20: where $a^\sim = \pi'_{pg}(k, o_{i+1} \mid \theta^{\pi'_{pg}})$ [action sampled from the $k$-th head of $\pi'_{pg}$]\n21: Update $Q$ by minimizing the loss $L = \frac{1}{T} \sum_i (y_i - Q(o_i, a_i \mid \theta^Q))^2$\n22: Update $\pi^k_{pg}$ using the sampled policy gradient $\nabla_{\theta^{\pi_{pg}}} J \approx \frac{1}{T} \sum_i \nabla_a Q(o, a \mid \theta^Q)\big|_{o=o_i, a=a_i} \nabla_{\theta^{\pi_{pg}}} \pi^k_{pg}(o \mid \theta^{\pi_{pg}})\big|_{o=o_i}$\n23: Soft-update target networks: $\theta^{\pi'} \Leftarrow \tau \theta^\pi + (1 - \tau)\theta^{\pi'}$ and $\theta^{Q'} \Leftarrow \tau \theta^Q + (1 - \tau)\theta^{Q'}$\n24: end for\n25: Migrate the policy gradient team into the population: for the weakest $\pi \in pop_\pi$: $\theta^\pi \Leftarrow \theta^{\pi_{pg}}$\n26: end for\nPolicy Topology: We represent our multiagent (team) policies using a multi-headed neural network $\pi$, as illustrated in Figure 1. The head $\pi^k$ represents the $k$-th agent in the team. Given an incoming observation for agent $k$, only the output of $\pi^k$ is considered as agent $k$'s response.
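A minimal sketch of this multi-headed topology; PyTorch is our choice of framework here and the layer sizes are illustrative, so this is a sketch of the idea rather than the authors' released code.

```python
import torch
import torch.nn as nn

class TeamPolicy(nn.Module):
    def __init__(self, obs_dim, act_dim, n_agents, hidden=128):
        super().__init__()
        # Shared trunk: low-level features common to all agents.
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh())
        # One head per agent; head k produces agent k's continuous action.
        self.heads = nn.ModuleList(
            [nn.Sequential(nn.Linear(hidden, act_dim), nn.Tanh())
             for _ in range(n_agents)]
        )

    def forward(self, k, obs):
        # Agent k acts only on its own observation, through trunk and head k.
        return self.heads[k](self.trunk(obs))


# Example: a 3-agent team with 8-dim observations and 2-dim actions.
pi = TeamPolicy(obs_dim=8, act_dim=2, n_agents=3)
action_agent0 = pi(0, torch.zeros(1, 8))
```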
Given an incoming observation for agent k, only the output of πk is considered as agent k’s response. In essence, all\nagents act independently based on their own observations while sharing weights (and by extension, the features) in the lower layers (trunk). This is commonly used to improve learning speed (Silver et al., 2017). Further, each agent k also has its own replay buffer (Rk) which stores its experience defined by the tuple (state, action, next state, local reward) for each interaction with the environment (rollout) involving that agent.\nTeam Reward Optimization: Figure 2 illustrates the MERL algorithm. A population of multi-headed teams, each with the same topology, is initialized with random weights. The replay bufferRk is shared by the k-th agent across all teams. The population is then evaluated for each rollout. The team reward for each team is disbursed at the end of the episode and is considered as its fitness score. A selection operator selects a portion of the population for survival with probability proportionate to their fitness scores. The weights of the teams in the population are probabilistically perturbed through mutation and crossover operators to create the next generation of teams. A portion of the teams with the highest relative fitness are preserved as elites. At any given time, the team with the highest fitness, or the champion, represents the best solution for the task.\nPolicy Gradient: The procedure described so far resembles a standard EA except that each agent k stores each of its experiences in its associated replay buffer (Rk) instead of just discarding it. However, unlike EA, which only learns based on the low-fidelity global reward, MERL also learns from the experiences within episodes of a rollout using policy gradients. To enable this kind of \"local learning\", MERL initializes one multi-headed policy network πpg and one critic Q. A noisy version of πpg is then used to conduct its own set of rollouts in the environment, storing each agent k’s experiences in its corresponding buffer (Rk) similar to the evolutionary rollouts.\nAgent-Specific Reward Optimization: Crucially, each agent’s replay buffer is kept separate from that of every other agent to ensure diversity amongst the agents. The shared critic samples a random mini-batch uniformly from each replay buffer and uses it to update its parameters using gradient descent. Each agent πkpg then draws a mini-batch of experiences from its corresponding buffer (Rk) and uses it to sample a policy gradient from the shared critic. Unlike the teams in the evolutionary population which directly seek to optimize the team reward, πpg seeks to maximize the agent-specific local reward while exploiting the experiences collected via evolution.\nSkill Migration: Periodically, the πpg network is copied into the evolving population of teams and can propagate its features by participating in evolution. This is the core mechanism that combines policies learned via agent-specific and team rewards. Regardless of whether the two rewards are aligned, evolution ensures that only the performant derivatives of the migrated network are retained. This mechanism guarantees protection against destructive interference commonly seen when a direct scalarization between two reward functions is attempted. Further, the level of information exchange is automatically adjusted during the process of learning, in contrast to being manually tuned by an expert designer.\nAlgorithm 1 provides a detailed pseudo-code of the MERL algorithm. 
The choice of hyperparameters is explained in the Appendix. Additionally, our source code is available online at https://tinyurl.com/y6erclts." }, { "heading": "4 EXPERIMENTS", "text": "We adopt environments from (Lowe et al., 2017) and (Rahmattalabi et al., 2016) to perform our experiments. Each environment consists of multiple agents and landmarks in a two-dimensional world. Agents take continuous control actions to move about the world. Figure 3 illustrates the four environments, which are described in more detail below.\nPredator-Prey: In this environment, N slower cooperating agents (predators) must chase the faster adversary (prey) around an environment with L large landmarks in randomly-generated locations. The predators get a reward when they catch (touch) the prey while the prey is penalized. The team reward for the predators is the cumulative number of prey-touches in an episode. Each predator can also compute the average distance to the prey and use it as its agent-specific reward. All agents observe the relative positions and velocities of the other agents as well as the positions of the landmarks. The prey can accelerate 33% faster than the predator and has a higher top speed. We test two versions, termed simple and hard predator-prey, where the prey is 30% and 100% faster, respectively. Additionally, the prey itself learns dynamically during training. We use DDPG (Lillicrap et al., 2015) as a learning algorithm for training the prey policy. All of our candidate algorithms are tested on their ability to train the team of predators at catching this prey.\nPhysical Deception: N agents cooperate to reach a single target Point of Interest (POI) among N POIs. They are rewarded based on the closest distance of any agent to the target. A lone adversary also desires to reach the target POI. However, the adversary does not know which of the POIs is the correct one. Thus the cooperating agents must learn to spread out and cover all POIs so as to deceive the adversary, since they are penalized based on the adversary's distance to the target. The team reward for the agents is then the cumulative reward in an episode. We use DDPG (Lillicrap et al., 2015) to train the adversary policy.\nKeep-Away: In this scenario, a team of N cooperating agents must reach a target POI out of L total POIs. Each agent is rewarded based on its distance to the target. We construct the team reward as simply the sum of the agent-specific rewards in an episode. An adversary also has to occupy the target while keeping the cooperating agents from reaching the target by pushing them away. To incentivize this behavior, the adversary is rewarded based on its distance to the target POI and penalized based on the distance of the target from the nearest cooperating agent. Additionally, it does not know which of the POIs is the target and must infer this from the movement of the agents. DDPG (Lillicrap et al., 2015) is used to train the adversary policy.\nRover Domain: This environment is adapted from (Rahmattalabi et al., 2016). Here, N agents must cooperate to reach a set of K POIs. Multiple agents need to simultaneously go to the same POI in order to observe it. The number of agents required to observe a POI is termed the coupling requirement. Agents do not know and must infer the coupling factor from the rewards obtained. If a team with fewer agents than this number goes to a POI, no reward is observed.
The team's reward is the percentage of POIs observed at the end of an episode.\nEach agent can also locally compute its distance to its closest POI and use it as its agent-specific reward. Its observation comprises two channels to detect POIs and rovers, respectively. Each channel receives intensity information over a 10° resolution spanning the 360° around the agent's position, loosely based on the characteristics of a Pioneer robot (Thrun et al., 2000). This is similar to a LIDAR. Since it returns the closest reflector, occlusions make the problem partially-observable. A coupling factor of 1 is similar to the cooperative navigation task in Lowe et al. (2017). We test coupling factors from 1 to 7 to capture extremely complex coordination objectives.\nCompared Baselines: We compare the performance of MERL with a standard neuroevolutionary algorithm (EA) (Fogel, 2006), MADDPG (Lowe et al., 2017) and MATD3, a variant of MADDPG that integrates the improvements described within TD3 (Fujimoto et al., 2018) over DDPG. Internally, MERL uses EA and TD3 as its team-reward and agent-specific reward optimizer, respectively. MADDPG was chosen as it is the state-of-the-art multiagent RL algorithm. We implemented MATD3 to ensure that the differences between MADDPG and MERL do not originate from having the more stable TD3 over DDPG.\nMethodology for Reported Metrics: For MATD3 and MADDPG, the team network was periodically tested on 10 task instances without any exploratory noise. The average score was logged as its performance. For MERL and EA, the team with the highest fitness was chosen as the champion for each generation. The champion was then tested on 10 task instances, and the average score was logged. This protocol shielded the reported metrics from any bias of the population size. We conduct 5 statistically independent runs with random seeds from {2019, 2023} and report the average with error bars showing a 95% confidence interval. All scores reported are compared against the number of environment steps (frames). A step is defined as the multiagent team taking a joint action and receiving feedback from the environment. To make the comparisons fair across single-team and population-based algorithms, all steps taken by all teams in the population are counted cumulatively." }, { "heading": "5 RESULTS", "text": "Predator-Prey: Figure 4 shows the comparative performance in controlling the team of predators in the Predator-Prey environment. Note that this is an adversarial environment where the prey dynamically adapts against the predators. The prey (considered as part of the environment in this analysis) uses DDPG to learn constantly against our team of predators. This is why predator performance (measured as the number of prey touches) exhibits ebb and flow during learning. MERL outperforms MATD3, EA, and MADDPG across both simple and hard variations of the task. EA seems to be approaching MERL's performance but is significantly slower to learn. This is an expected behavior for neuroevolutionary methods, which are known to be sample-inefficient. In contrast, MERL, by virtue of its fast policy-gradient components, learns significantly faster.\nPhysical Deception: Figure 5 (left) shows the comparative performance in controlling the team of agents in the Physical Deception environment. The performance here is largely based on how close the adversary comes to the target POI. Since the adversary starts out untrained, all compared algorithms start out with a fairly high score.
As the adversary gradually learns to infer and move towards the target POI, MATD3 and MADDPG demonstrate a gradual decline in performance. However, MERL and EA are able to hold their performance by concocting effective counter-strategies in deceiving the adversary. EA reaches the same performance as MERL but is slower to learn.\nKeep-Away: Figure 5 (right) shows the comparative performance in Keep-Away. Similar to Physical Deception, MERL and EA are able to hold performance by attaining good counter-measures against the adversary while MATD3 and MADDPG fail to do so. However, EA slightly outperforms MERL on this task.\nRover Domain: Figure 6 shows the comparative performance of MERL, MADDPG, MATD3, and EA tested in the rover domain with coupling factors 1, 3 and 7. In order to benchmark against the proxy reward functions that use scalarized linear combinations, we test MADDPG and MATD3 with two variations of reward functions. Global represents the scenario where only the sparse team reward is used. Mixed represents the scenario where a linear combination of the team reward and agent-specific reward is used. Each reward is normalized before being combined. A weighting coefficient of 10 is used to amplify the team-reward's influence in order to counter its sparsity. The weighting coefficient was tuned using a grid search (more details in Figure 7).\nMERL significantly outperforms all baselines across all coupling requirements. The tested baselines clearly degrade quickly beyond a coupling of 3. The increasing coupling requirement is equivalent to increasing difficulty in joint-space exploration and entanglement in the team objective. However, it does not increase the size of the state-space, complexity of perception, or navigation. This indicates that the degradation in performance is strictly due to the increase in complexity of the team objective.\nNotably, MERL is able to learn on coupling greater than n = 6 where methods without explicit reward shaping have been shown to fail entirely (Rahmattalabi et al., 2016). MERL successfully completes the task using the same set of information and coarse, unshaped reward functions as the other algorithms. The primary mechanism that enables this is MERL's split-level approach that allows it to leverage the agent-specific reward function to solve navigation and perception while concurrently using the team-reward function to learn team formation and effective coordination.\nScalarization Coefficients for Mixed Rewards: Figure 7 shows the performance of MATD3 in optimizing mixed rewards computed with different coefficients used to amplify the team-reward relative to the agent-reward. The results demonstrate that finding a good balance between these two rewards through linear scalarization is difficult, as all values tested fail to make any progress in the task. This is because a static scalarization cannot capture the dynamic properties of which reward is important when, and instead leads to an ineffective proxy. In contrast, MERL is able to leverage both reward functions without the need to explicitly combine them either linearly or via more complex mixing functions.\nTeam Behaviors: Figure 8 illustrates the trajectories generated for the Rover Domain with a coupling of n = 3. The trajectories for partially and fully trained MERL are shown in Figure 8 (a) and (b), respectively. During training, when MERL has not discovered team success (no POIs are successfully observed), MERL simply optimizes the agent-specific reward for each agent.
This allows it to reach trajectories such as the ones shown in Figure 8(a), where each agent learns to go towards a POI. Since each agent explicitly aims to reach a POI, the probability of 3 agents congregating at the same POI is higher compared to random undirected exploration by each agent without the dense agent-specific reward. Once this scenario is discovered, the team reward optimizer (EA) within MERL explicitly selects for agent policies that jointly lead to such team-forming behaviors. Eventually it succeeds, as shown in Figure 8(b). Here, team formation and collaborative pursuit of the POIs is immediately apparent. Two teams of 3 agents each form at the start of the episode. Further, the two teams also coordinate to pursue different POIs in order to maximize the team reward. While not perfect (the bottom POI is left unobserved), they do succeed in observing 3 out of the 4 POIs.\nIn contrast, MATD3-mixed fails to observe any POI. From the trajectories, it is apparent that the agents have successfully learned to perceive and navigate to reach POIs. However, they are unable to use this skill towards fulfilling the team objective. Instead, each agent is split on the objective that it is optimizing. Some agents seem to be in sole pursuit of POIs without any regard for team formation or collaboration, while others seem to exhibit random movements.\nThe primary reason for this is the mixed reward function that directly combines the agent-specific and team reward functions. Since the two reward functions have no guarantees of alignment across the state-space of the task, they invariably lead to learning these sub-optimal joint-behaviors that solve a certain form of scalarized mixed objective. In contrast, MERL, by virtue of its bi-level optimization framework, is able to leverage both reward functions without the need to explicitly combine them. This enables MERL to avoid these sub-optimal policies and solve the task without any reward shaping or manual tuning.\nSelection Rate: We ran experiments tracking whether the policies migrated from the policy gradient learners to the evolutionary population were selected or discarded during the subsequent selection process (Figure 9). Note that the expected selection rate if chosen at random is 0.1, as 1 policy is migrated into a population of 10. In contrast, the selection rate for migrated policies is significantly higher across all benchmarks with the exception of Keep-Away. This is consistent with the performance results seen in Keep-Away, where EA initially outperforms MERL. However, in general, these results indicate that MERL's integrative approach in combining the two optimization processes towards optimizing the team objective is crucial." }, { "heading": "6 CONCLUSION", "text": "In this paper, we introduced MERL, a split-level algorithm that leverages both agent-specific and team objectives by combining gradient-based and gradient-free optimization. MERL achieves this by using a fast policy-gradient optimizer to exploit dense agent-specific rewards while concurrently leveraging neuroevolution to tackle the team objective.\nResults demonstrate that MERL significantly outperforms MADDPG, the state-of-the-art multiagent RL method, in a wide array of benchmarks. We also tested a modification of MADDPG to integrate TD3, the state-of-the-art single-agent RL algorithm.
These experiments demonstrated that the core improvements of MERL originate from its ability to leverage both team and agent-specific reward functions without the need to explicitly combine them. This differentiates MERL from other approaches like reward scalarization and reward shaping that either require extensive manual tuning or can detrimentally change the MDP (Ng et al., 1999) itself.\nFuture work will explore MERL for adversarial settings such as Pommerman (Resnick et al., 2018), StarCraft (Justesen and Risi, 2017; Vinyals et al., 2017) and RoboCup (Kitano et al., 1995; Liu et al., 2019). Further, extending MERL to general multi-reward settings, as is the case for multitask learning, is another promising area for future work." }, { "heading": "A HYPERPARAMETERS DESCRIPTION", "text": "Table 1 details the hyperparameters used for MERL, MATD3, and MADDPG in tackling predator-prey and cooperative navigation. The hyperparameters were inherited from Lowe et al. (2017) to match the original experiments for MADDPG and MATD3. The only exception to this was the use of hyperbolic tangent instead of ReLU activation functions.\nTable 2 details the hyperparameters used for MERL, MATD3, and MADDPG in the rover domain. The hyperparameters themselves are defined below:\n• Optimizer = Adam: The Adam optimizer was used to update both the actor and critic networks for all learners.\n• Population size k: This parameter controls the number of different actors (policies) that are present in the evolutionary population.\n• Rollout size: This parameter controls the number of rollout workers (each running an episode of the task) per generation. Note: the two parameters above (population size k and rollout size) collectively modulate the proportion of exploration carried out through noise in the actor's parameter space and its action space.\n• Target weight τ: This parameter controls the magnitude of the soft update between the actors and critic networks, and their target counterparts.\n• Actor Learning Rate: This parameter controls the learning rate of the actor network.\n• Critic Learning Rate: This parameter controls the learning rate of the critic network.\n• Discount Rate: This parameter controls the discount rate used to compute the return optimized by the policy gradient.\n• Replay Buffer Size: This parameter controls the size of the replay buffer. After the buffer is filled, the oldest experiences are deleted in order to make room for new ones.\n• Batch Size: This parameter controls the batch size used to compute the gradients.\n• Actor Activation Function: Hyperbolic tangent was used as the activation function.\n• Critic Activation Function: Hyperbolic tangent was used as the activation function.\n• Number of Elites: This parameter controls the fraction of the population that is categorized as elites.
Since an elite individual (actor) is shielded from the mutation step and preserved as it is, the elite fraction modulates the degree of exploration/exploitation within the evolutionary population.\n• Mutation Probability: This parameter represents the probability that an actor goes through a mutation operation between generations.\n• Mutation Fraction: This parameter controls the fraction of the weights in a chosen actor (neural network) that are mutated, once the actor is chosen for mutation.\n• Mutation Strength: This parameter controls the standard deviation of the Gaussian operation that comprises mutation.\n• Super Mutation Probability: This parameter controls the probability that a super mutation (larger mutation) happens in place of a standard mutation.\n• Reset Mutation Probability: This parameter controls the probability that a neural weight is instead reset by sampling from N(0, 1) rather than being mutated.\n• Exploration Noise: This parameter controls the standard deviation of the Gaussian operation that comprises the noise added to the actor's actions during exploration by the learners (learner roll-outs).\n• TD3 Policy Noise Variance: This parameter controls the standard deviation of the Gaussian operation that comprises the noise added to the policy output before applying the Bellman backup. This is often referred to as the magnitude of policy smoothing in TD3.\n• TD3 Policy Noise Clip: This parameter controls the maximum norm of the policy noise used to smooth the policy.\n• TD3 Policy Update Frequency: This parameter controls the number of critic updates per policy update in TD3." }, { "heading": "B ROLLOUT METHODOLOGY", "text": "Algorithm 2 describes an episode of rollout under MERL, detailing the connections between the local reward, global reward, and the associated replay buffer.\nAlgorithm 2 Function Rollout\n1: procedure ROLLOUT(π, R, noise, ξ)\n2:   fitness = 0\n3:   for j = 1:ξ do\n4:     Reset environment and get initial joint state js\n5:     while env is not done do\n6:       Initialize an empty list of joint actions ja = []\n7:       for each agent (actor head) π_k ∈ π and s_k in js do\n8:         ja ⇐ ja ∪ π_k(s_k | θ^{π_k}) + noise_t\n9:       end for\n10:      Execute ja and observe joint local reward jl, global reward g and joint next state js′\n11:      for each replay buffer R_k ∈ R and s_k, a_k, l_k, s′_k in js, ja, jl, js′ do\n12:        Append transition (s_k, a_k, l_k, s′_k) to R_k\n13:      end for\n14:      js = js′\n15:      if env is done then\n16:        fitness ← g\n17:      end if\n18:    end while\n19:  end for\n20:  Return fitness/ξ, R\n21: end procedure" }, { "heading": "C EVOLUTIONARY ALGORITHM POPULATION RUNS", "text": "Figure 10 compares EA with varying population sizes in the rover domain with a coupling of 3. Among the EA runs, a population size of 100 yields the best results, converging to 0.3 in 100 million frames. MERL (red), on the other hand, is run for 2 million frames and converges to 0.48. This is due to MERL's ability to leverage gradient descent from its policy gradient components, which leads to significantly faster learning performance." }, { "heading": "D EVOLUTIONARY STRATEGIES (ES)", "text": "D.1 ES POPULATION SWEEP\nFigure 11 compares ES with varying population sizes in the rover domain with a coupling of 3. Sigma for all ES runs is set at 0.1. Among the ES runs, a population size of 100 yields the best results, converging to 0.1 in 100 million frames. MERL (red), on the other hand, is run for 2 million frames and converges to 0.48.\nD.2 ES SIGMA SWEEP\nFigure 12 compares ES with varying noise variances (sigma), which control the magnitude of each perturbation.
The experiments are conducted in the rover domain with a coupling of 3 and a population size of 100. Among the ES runs, a sigma of 0.1 yields the best results, converging to 0.1 in 100 million frames. MERL (red), on the other hand, is run for 2 million frames and converges to 0.48." }, { "heading": "E PREDATOR-PREY WITH 3 PREY", "text": "Figure 13 shows the results of running MATD3 with a varying number of prey in the predator-prey domain. The experiments are ongoing." } ]
2019
null
SP:ea8267af45b09cc35349456d85eb39c58447e319
[ "The paper proposes a novel intrinsic reward/curiosity metric that combines both episodic and “life-long” novelty. Essentially, these are two competing pressures that push agents to explore as many novel states as possible within a single rollout and to explore as many states as possible, as evenly as possible. The primary contribution here is the episodic novelty measure, which relies on a state embedding that takes into account stochasticity in the environment. The paper covers this episodic curiosity measure and how it's integrated with the life-long curiosity metric. It then demonstrates the impact of these metrics and their variations compared to baselines on particular games and on all 57 Arcade Learning Environment games.", "The work is motivated by the goal of achieving comprehensive exploration by an agent in deep RL. To achieve that, the authors propose a count-based NGU agent, combining intrinsic and extrinsic bonuses as new rewards. An extrinsic/long-term novelty module is used to control the amount of exploration across episodes, with a life-long curiosity factor as its output. In the intrinsic/episodic novelty module, an embedding net and a KNN over episodic memory are applied to compute the current episodic reward. In the experiments, a universal value function approximator (UVFA) framework is used to simultaneously approximate the optimal value function with respect to a set of rewards. The proposed method is tested on several hard exploration games and compared against other recent count-based models." ]
We propose a reinforcement learning agent to solve hard exploration games by learning a range of directed exploratory policies. We construct an episodic memory-based intrinsic reward using k-nearest neighbors over the agent's recent experience to train the directed exploratory policies, thereby encouraging the agent to repeatedly revisit all states in its environment. A self-supervised inverse dynamics model is used to train the embeddings of the nearest neighbour lookup, biasing the novelty signal towards what the agent can control. We employ the framework of Universal Value Function Approximators (UVFA) to simultaneously learn many directed exploration policies with the same neural network, with different trade-offs between exploration and exploitation. By using the same neural network for different degrees of exploration/exploitation, transfer is demonstrated from predominantly exploratory policies yielding effective exploitative policies. The proposed method can be incorporated into modern distributed RL agents that collect large amounts of experience from many actors running in parallel on separate environment instances. Our method doubles the performance of the base agent in all hard exploration games in the Atari-57 suite while maintaining a very high score across the remaining games, obtaining a median human normalised score of 1344.0%. Notably, the proposed method is the first algorithm to achieve non-zero rewards (with a mean score of 8,400) in the game of Pitfall! without using demonstrations or hand-crafted features.
[ { "affiliations": [], "name": "Adrià Puigdomènech Badia" }, { "affiliations": [], "name": "Pablo Sprechmann" }, { "affiliations": [], "name": "Alex Vitvitskyi" }, { "affiliations": [], "name": "Daniel Guo" }, { "affiliations": [], "name": "Bilal Piot" }, { "affiliations": [], "name": "Steven Kapturowski" }, { "affiliations": [], "name": "Olivier Tieleman" }, { "affiliations": [], "name": "Martín Arjovsky" }, { "affiliations": [], "name": "Alexander Pritzel" }, { "affiliations": [], "name": "Andrew Bolt" }, { "affiliations": [], "name": "Charles Blundell" } ]
[ { "authors": [ "Marcin Andrychowicz", "Bowen Baker", "Maciek Chociej", "Rafal Jozefowicz", "Bob McGrew", "Jakub Pachocki", "Arthur Petron", "Matthias Plappert", "Glenn Powell", "Alex Ray" ], "title": "Learning dexterous in-hand manipulation", "venue": "arXiv preprint arXiv:1808.00177,", "year": 2018 }, { "authors": [ "Gabriel Barth-Maron", "Matthew W Hoffman", "David Budden", "Will Dabney", "Dan Horgan", "Alistair Muldal", "Nicolas Heess", "Timothy Lillicrap" ], "title": "Distributed distributional deterministic policy gradients", "venue": "arXiv preprint arXiv:1804.08617,", "year": 2018 }, { "authors": [ "Marc Bellemare", "Sriram Srinivasan", "Georg Ostrovski", "Tom Schaul", "David Saxton", "Remi Munos" ], "title": "Unifying count-based exploration and intrinsic motivation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Marc G Bellemare", "Yavar Naddaf", "Joel Veness", "Michael Bowling" ], "title": "The arcade learning environment: An evaluation platform for general agents", "venue": "Journal of Artificial Intelligence Research,", "year": 2013 }, { "authors": [ "Lucas Beyer", "Damien Vincent", "Olivier Teboul", "Sylvain Gelly", "Matthieu Geist", "Olivier Pietquin" ], "title": "Mulex: Disentangling exploitation from exploration in deep rl", "venue": null, "year": 2019 }, { "authors": [ "Charles Blundell", "Benigno Uria", "Alexander Pritzel", "Yazhe Li", "Avraham Ruderman", "Joel Z Leibo", "Jack Rae", "Daan Wierstra", "Demis Hassabis" ], "title": "Model-free episodic control", "venue": "arXiv preprint arXiv:1606.04460,", "year": 2016 }, { "authors": [ "Jane Bromley", "Isabelle Guyon", "Yann LeCun", "Eduard Säckinger", "Roopak Shah" ], "title": "Signature verification using a \"siamese\" time delay neural network", "venue": "In Advances in neural information processing systems,", "year": 1994 }, { "authors": [ "Yuri Burda", "Harri Edwards", "Deepak Pathak", "Amos Storkey", "Trevor Darrell", "Alexei A Efros" ], "title": "Large-scale study of curiosity-driven learning", "venue": "arXiv preprint arXiv:1808.04355,", "year": 2018 }, { "authors": [ "Yuri Burda", "Harrison Edwards", "Amos Storkey", "Oleg Klimov" ], "title": "Exploration by random network distillation", "venue": "arXiv preprint arXiv:1810.12894,", "year": 2018 }, { "authors": [ "Jongwook Choi", "Yijie Guo", "Marcin Moczulski", "Junhyuk Oh", "Neal Wu", "Mohammad Norouzi", "Honglak Lee" ], "title": "Contingency-aware exploration in reinforcement learning", "venue": "arXiv preprint arXiv:1811.01483,", "year": 2018 }, { "authors": [ "Adrien Ecoffet", "Joost Huizinga", "Joel Lehman", "Kenneth O Stanley", "Jeff Clune" ], "title": "Go-explore: a new approach for hard-exploration problems", "venue": null, "year": 2019 }, { "authors": [ "Lasse Espeholt", "Hubert Soyer", "Remi Munos", "Karen Simonyan", "Volodymir Mnih", "Tom Ward", "Yotam Doron", "Vlad Firoiu", "Tim Harley", "Iain Dunning" ], "title": "Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures", "venue": "arXiv preprint arXiv:1802.01561,", "year": 2018 }, { "authors": [ "Nick Haber", "Damian Mrowca", "Stephanie Wang", "Li F Fei-Fei", "Daniel L Yamins" ], "title": "Learning to play with intrinsically-motivated, self-aware agents", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Dan Horgan", "John Quan", "David Budden", "Gabriel Barth-Maron", "Matteo Hessel", "Hado Van Hasselt", "David Silver" ], "title": "Distributed prioritized experience
replay", "venue": "arXiv preprint arXiv:1803.00933,", "year": 2018 }, { "authors": [ "Rein Houthooft", "Xi Chen", "Yan Duan", "John Schulman", "Filip De Turck", "Pieter Abbeel" ], "title": "Vime: Variational information maximizing exploration", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Max Jaderberg", "Volodymyr Mnih", "Wojciech Marian Czarnecki", "Tom Schaul", "Joel Z Leibo", "David Silver", "Koray Kavukcuoglu" ], "title": "Reinforcement learning with unsupervised auxiliary tasks", "venue": "arXiv preprint arXiv:1611.05397,", "year": 2016 }, { "authors": [ "Max Jaderberg", "Valentin Dalibard", "Simon Osindero", "Wojciech M Czarnecki", "Jeff Donahue", "Ali Razavi", "Oriol Vinyals", "Tim Green", "Iain Dunning", "Karen Simonyan" ], "title": "Population based training of neural networks", "venue": "arXiv preprint arXiv:1711.09846,", "year": 2017 }, { "authors": [ "Steven Kapturowski", "Georg Ostrovski", "Will Dabney", "John Quan", "Remi Munos" ], "title": "Recurrent experience replay in distributed reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Hyoungseok Kim", "Jaekyeom Kim", "Yeonwoo Jeong", "Sergey Levine", "Hyun Oh Song" ], "title": "Emi: Exploration with mutual information maximizing state and action embeddings", "venue": "arXiv preprint arXiv:1810.01176,", "year": 2018 }, { "authors": [ "Gregory Koch", "Richard Zemel", "Ruslan Salakhutdinov" ], "title": "Siamese neural networks for one-shot image recognition", "venue": "In ICML deep learning workshop,", "year": 2015 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Rémi Munos", "Tom Stepleton", "Anna Harutyunyan", "Marc Bellemare" ], "title": "Safe and efficient off-policy reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Junhyuk Oh", "Xiaoxiao Guo", "Honglak Lee", "Richard L Lewis", "Satinder Singh" ], "title": "Action-conditional video prediction using deep networks in atari games", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Ian Osband", "Charles Blundell", "Alexander Pritzel", "Benjamin Van Roy" ], "title": "Deep exploration via bootstrapped dqn", "venue": "In Advances In Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Georg Ostrovski", "Marc G Bellemare", "Aäron van den Oord", "Rémi Munos" ], "title": "Count-based exploration with neural density models", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Deepak Pathak", "Pulkit Agrawal", "Alexei A Efros", "Trevor Darrell" ], "title": "Curiosity-driven exploration by self-supervised prediction", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2017 }, { "authors": [ "Tobias Pohlen", "Bilal Piot", "Todd Hester", "Mohammad Gheshlaghi Azar", "Dan Horgan", "David Budden", "Gabriel Barth-Maron", "Hado Van Hasselt", "John Quan", "Mel Večerík" ], "title": "Observe and look further: Achieving consistent performance on atari", "venue": "arXiv preprint arXiv:1805.11593,", "year": 2018 }, { "authors": [ "Alexander 
Pritzel", "Benigno Uria", "Sriram Srinivasan", "Adrià Puigdomènech", "Oriol Vinyals", "Demis Hassabis", "Daan Wierstra", "Charles Blundell" ], "title": "Neural episodic control", "venue": null, "year": 2017 }, { "authors": [ "Nikolay Savinov", "Anton Raichuk", "Raphaël Marinier", "Damien Vincent", "Marc Pollefeys", "Timothy Lillicrap", "Sylvain Gelly" ], "title": "Episodic curiosity through reachability", "venue": "arXiv preprint arXiv:1810.02274,", "year": 2018 }, { "authors": [ "Tom Schaul", "Daniel Horgan", "Karol Gregor", "David Silver" ], "title": "Universal value function approximators", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "David Silver", "Aja Huang", "Chris J Maddison", "Arthur Guez", "Laurent Sifre", "George Van Den Driessche", "Julian Schrittwieser", "Ioannis Antonoglou", "Veda Panneershelvam", "Marc Lanctot" ], "title": "Mastering the game of go with deep neural networks and tree search", "venue": "Nature,", "year": 2016 }, { "authors": [ "Bradly C Stadie", "Sergey Levine", "Pieter Abbeel" ], "title": "Incentivizing exploration in reinforcement learning with deep predictive models", "venue": "arXiv preprint arXiv:1507.00814,", "year": 2015 }, { "authors": [ "Christopher Stanton", "Jeff Clune" ], "title": "Deep curiosity search: Intra-life exploration can improve performance on challenging deep reinforcement learning problems", "venue": "arXiv preprint arXiv:1806.00553,", "year": 2018 }, { "authors": [ "Tom Stepleton" ], "title": "The pycolab game engine. https://github.com/deepmind/pycolab/tree/master/pycolab, 2017", "venue": null, "year": 2017 }, { "authors": [ "Alexander L Strehl", "Michael L Littman" ], "title": "An analysis of model-based interval estimation for markov decision processes", "venue": "Journal of Computer and System Sciences,", "year": 2008 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Reinforcement learning: An introduction", "venue": "MIT press,", "year": 2018 }, { "authors": [ "Ziyu Wang", "Tom Schaul", "Matteo Hessel", "Hado Van Hasselt", "Marc Lanctot", "Nando De Freitas" ], "title": "Dueling network architectures for deep reinforcement learning", "venue": "arXiv preprint arXiv:1511.06581,", "year": 2015 }, { "authors": [ "David Warde-Farley", "Tom Van de Wiele", "Tejas Kulkarni", "Catalin Ionescu", "Steven Hansen", "Volodymyr Mnih" ], "title": "Unsupervised control through non-parametric discriminative rewards", "venue": "arXiv preprint arXiv:1811.11359,", "year": 2018 }, { "authors": [ "Zhongwen Xu", "Hado P van Hasselt", "David Silver" ], "title": "Meta-gradient reinforcement learning", "venue": "In Advances in neural information processing systems,", "year": 2018 }
]
[ { "heading": "1 INTRODUCTION", "text": "The problem of exploration remains one of the major challenges in deep reinforcement learning. In general, methods that guarantee finding an optimal policy require the number of visits to each state–action pair to approach infinity. Strategies that become greedy after a finite number of steps may never learn to act optimally; they may converge prematurely to suboptimal policies, and never gather the data they need to learn. Ensuring that all state-action pairs are encountered infinitely often is the general problem of maintaining exploration (François-Lavet et al., 2018; Sutton & Barto, 2018). The simplest approach for tackling this problem is to consider stochastic policies with a non-zero probability of selecting all actions in each state, e.g. ε-greedy or Boltzmann exploration. While these techniques will eventually learn the optimal policy in the tabular setting, they are very inefficient and the steps they require grow exponentially with the size of the state space. Despite these shortcomings, they can perform remarkably well in dense reward scenarios (Mnih et al., 2015). In sparse reward settings, however, they can completely fail to learn, as temporally-extended exploration (also called deep exploration) is crucial to even find the very few rewarding states (Osband et al., 2016).\nRecent approaches have proposed to provide intrinsic rewards to agents to drive exploration, with a focus on demonstrating performance in non-tabular settings. These intrinsic rewards are proportional to some notion of saliency quantifying how different the current state is from those already visited (Bellemare et al., 2016; Haber et al., 2018; Houthooft et al., 2016; Oh et al., 2015; Ostrovski et al., 2017; Pathak et al., 2017; Stadie et al., 2015). As the agent explores the environment and becomes familiar with it, the exploration bonus disappears and learning is only driven by extrinsic rewards. This is a sensible idea as the goal is to maximise the expected sum of extrinsic rewards. While very good results have been achieved on some very hard exploration tasks, these algorithms face a fundamental limitation: after the novelty of a state has vanished, the agent is not encouraged to visit it again, regardless of the downstream learning opportunities it might allow (Bellemare et al., 2016; Ecoffet et al., 2019; Stanton & Clune, 2018). Other methods estimate predictive forward models (Haber et al., 2018; Houthooft et al., 2016; Oh et al., 2015; Pathak et al., 2017; Stadie et al., 2015) and use the prediction error as the intrinsic motivation. Explicitly building models like this, particularly from observations, is expensive, error prone, and can be difficult to generalize to arbitrary environments. In the absence of the novelty signal, these algorithms reduce to undirected exploration schemes, maintaining exploration in a non-scalable way. To overcome this problem, a careful calibration between the speed of the learning algorithm and that of the vanishing rewards is required (Ecoffet et al., 2019; Ostrovski et al., 2017).\nThe main idea of our proposed approach is to jointly learn separate exploration and exploitation policies derived from the same network, in such a way that the exploitative policy can concentrate on maximising the extrinsic reward (solving the task at hand) while the exploratory ones can maintain exploration without eventually reducing to an undirected policy.
We propose to jointly learn a family of policies, parametrised using the UVFA framework (Schaul et al., 2015a), with various degrees of exploratory behaviour. The learning of the exploratory policies can be thought of as a set of auxiliary tasks that can help build a shared architecture that continues to develop even in the absence of extrinsic rewards (Jaderberg et al., 2016). We use reinforcement learning to approximate the optimal value function corresponding to several different weightings of intrinsic rewards.\nWe propose an intrinsic reward that combines per-episode and life-long novelty to explicitly encourage the agent to repeatedly visit all controllable states in the environment over an episode. Episodic novelty encourages an agent to periodically revisit familiar (but potentially not fully explored) states over several episodes, but not within the same episode. Life-long novelty gradually down-modulates states that become progressively more familiar across many episodes. Our episodic novelty uses an episodic memory filled with all previously visited states, encoded using the self-supervised objective of Pathak et al. (2017) to avoid uncontrollable parts of the state space. Episodic novelty is then defined as similarity of the current state to previously stored states. This allows the episodic novelty to rapidly adapt within an episode: every observation made by the agent potentially changes the per-episode novelty significantly. Our life-long novelty multiplicatively modulates the episodic similarity signal and is driven by a Random Network Distillation error (Burda et al., 2018b). In contrast to the episodic novelty, the life-long novelty changes slowly, relying upon gradient descent optimisation (as opposed to an episodic memory write for episodic novelty). Thus, this combined notion of novelty is able to generalize in complex tasks with large, high dimensional state spaces in which a given state is never observed twice, and maintain consistent exploration both within an episode and across episodes.\nThis paper makes the following contributions: (i) defining an exploration bonus combining life-long and episodic novelty to learn exploratory strategies that can maintain exploration throughout the agent's training process (to never give up), (ii) learning a family of policies that separate exploration and exploitation using a conditional architecture with shared weights, (iii) experimental evidence that the proposed method is scalable and performs on par with or better than state-of-the-art methods on hard exploration tasks. Our work differs from Savinov et al. (2018) in that it is not specialised to navigation tasks: our method incorporates a long-term intrinsic reward and is able to separate exploration and exploitation policies. Unlike Stanton & Clune (2018), our work relies on no privileged information and combines both episodic and non-episodic novelty, obtaining superior results. Our work differs from Beyer et al. (2019) in that we learn multiple policies by sharing weights, rather than just a common replay buffer, and our method does not require exact counts and so can scale to more realistic domains such as Atari. The paper is organized as follows. In Section 2 we describe the proposed intrinsic reward. In Section 3, we describe the proposed agent and general framework. In Section 4 we present the experimental evaluation."
}, { "heading": "2 THE NEVER-GIVE-UP INTRINSIC REWARD", "text": "We follow the literature on curiosity-driven exploration, where the extrinsic reward is augmented with an intrinsic reward (or exploration bonus). The augmented reward at time t is then defined as r_t = r^e_t + β·r^i_t, where r^e_t and r^i_t are respectively the extrinsic and intrinsic rewards, and β is a positive scalar weighting the relevance of the latter. Deep RL agents are typically trained on the augmented reward r_t, while performance is measured on the extrinsic reward r^e_t only. This section describes the proposed intrinsic reward r^i_t.\nOur intrinsic reward r^i_t satisfies three properties: (i) it rapidly discourages revisiting the same state within the same episode, (ii) it slowly discourages visits to states visited many times across episodes, (iii) the notion of state ignores aspects of an environment that are not influenced by an agent's actions.\nWe begin by providing a general overview of the computation of the proposed intrinsic reward. Then we provide the details of each one of the components. The reward is composed of two blocks: an episodic novelty module and an (optional) life-long novelty module, represented in red and green respectively in Fig. 1 (right). The episodic novelty module computes our episodic intrinsic reward and is composed of an episodic memory, M, and an embedding function f, mapping the current observation to a learned representation that we refer to as the controllable state. At the beginning of each episode, the episodic memory starts completely empty. At every step, the agent computes an episodic intrinsic reward, r^episodic_t, and appends the controllable state corresponding to the current observation to the memory M. To determine the bonus, the current observation is compared to the content of the episodic memory. Larger differences produce larger episodic intrinsic rewards. The episodic intrinsic reward r^episodic_t promotes the agent to visit as many different states as possible within a single episode. This means that the notion of novelty ignores inter-episode interactions: a state that has been visited thousands of times gives the same intrinsic reward as a completely new state as long as they are equally novel given the history of the current episode.\nA life-long (or inter-episodic) novelty module provides a long-term novelty signal to statefully control the amount of exploration across episodes. We do so by multiplicatively modulating the exploration bonus r^episodic_t with a life-long curiosity factor, α_t. Note that this modulation will vanish over time, reducing our method to using the non-modulated reward. Specifically, we combine α_t with r^episodic_t as follows (see also Fig. 1 (right)):\nr^i_t = r^episodic_t · min{max{α_t, 1}, L}    (1)\nwhere L is a chosen maximum reward scaling (we fix L = 5 for all our experiments). Mixing rewards this way, we leverage the long-term novelty detection that α_t offers, while r^i_t continues to encourage our agent to explore all the controllable states.\nEmbedding network: f : O → R^p maps the current observation to a p-dimensional vector corresponding to its controllable state. Consider an environment that has a lot of variability independent of the agent's actions, such as navigating a busy city with many pedestrians and vehicles. An agent could visit a large number of different states (collecting large cumulative intrinsic rewards) without taking any actions. This would not lead to performing any meaningful form of exploration.
To avoid such meaningless exploration, given two consecutive observations, we train a Siamese network (Bromley et al., 1994; Koch et al., 2015) f to predict the action taken by the agent to go from one observation to the next (Pathak et al., 2017). Intuitively, all the variability in the environment that is not affected by the action taken by the agent would not be useful to make this prediction. More formally, given a triplet {x_t, a_t, x_{t+1}} composed of two consecutive observations, x_t and x_{t+1}, and the action taken by the agent a_t, we parameterise the conditional likelihood as p(a | x_t, x_{t+1}) = h(f(x_t), f(x_{t+1})), where h is a one hidden layer MLP followed by a softmax. The parameters of both h and f are trained via maximum likelihood. This architecture can be thought of as a Siamese network with a one-layer classifier on top; see Fig. 1 (left) for an illustration. For more details about the architecture, see App. H.1, and hyperparameters, see App. F.\nEpisodic memory and intrinsic reward: The episodic memory M is a dynamically-sized slot-based memory that stores the controllable states in an online fashion (Pritzel et al., 2017). At time t, the memory contains the controllable states of all the observations visited in the current episode, {f(x_0), f(x_1), . . . , f(x_{t−1})}. Inspired by theoretically-justified exploration methods turning state-action counts into a bonus reward (Strehl & Littman, 2008), we define our intrinsic reward as\nr^episodic_t = 1/√(n(f(x_t))) ≈ 1/√(Σ_{f_i ∈ N_k} K(f(x_t), f_i) + c)    (2)\nwhere n(f(x_t)) is the count of visits to the abstract state f(x_t). We approximate these counts n(f(x_t)) as the sum of the similarities given by a kernel function K : R^p × R^p → R, over the content of M. In practice, pseudo-counts are computed using the k-nearest neighbors of f(x_t) in the memory M, denoted by N_k = {f_i}^k_{i=1}. The constant c guarantees a minimum amount of “pseudo-counts” (fixed to 0.001 in all our experiments). Note that when K is a Dirac delta function, the approximation becomes exact but consequently provides no generalisation of exploration, which is required for very large state spaces. Following Blundell et al. (2016); Pritzel et al. (2017), we use the inverse kernel for K,\nK(x, y) = ε / (d²(x, y)/d²_m + ε)    (3)\nwhere ε is a small constant (fixed to 10^−3 in all our experiments), d is the Euclidean distance and d²_m is a running average of the squared Euclidean distance of the k-th nearest neighbors. This running average is used to make the kernel more robust to the task being solved, as different tasks may have different typical distances between learnt embeddings. A detailed computation of the episodic reward can be found in Alg. 1 in App. A.1.\nIntegrating life-long curiosity: In principle, any long-term novelty estimator could be used as a basis for the modulator α_t. We found Random Network Distillation (Burda et al., 2018b, RND) worked well, is simple to implement and easy to parallelize. The RND modulator α_t is defined by introducing a random, untrained convolutional network g : O → R^k, and training a predictor network ĝ : O → R^k that attempts to predict the outputs of g on all the observations that are seen during training by minimizing err(x_t) = ||ĝ(x_t; θ) − g(x_t)||² with respect to the parameters θ of ĝ. We then define the modulator α_t as a normalized mean squared error, as done in Burda et al. (2018b): α_t = 1 + (err(x_t) − μ_e)/σ_e, where σ_e and μ_e are running standard deviation and mean for err(x_t). For more details about the architecture, see App. H.2, and hyperparameters, see App. F."
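As an illustration of the action-prediction objective described above, here is a minimal PyTorch sketch. It assumes vector observations and an MLP embedding for brevity (the paper uses a convolutional Siamese network on frames; see their App. H.1), and all class and variable names are ours, not the authors'.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEmbedding(nn.Module):
    """Embedding f plus a one-hidden-layer classifier h predicting a_t from (x_t, x_{t+1})."""
    def __init__(self, obs_dim, emb_dim, n_actions, hidden=64):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, emb_dim))
        self.h = nn.Sequential(nn.Linear(2 * emb_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, n_actions))

    def forward(self, x_t, x_tp1):
        # both observations pass through the same (shared-weight) embedding f
        z_t, z_tp1 = self.f(x_t), self.f(x_tp1)
        return self.h(torch.cat([z_t, z_tp1], dim=-1))   # action logits

# maximum-likelihood training step on a batch of (x_t, a_t, x_{t+1}) triplets
model = SiameseEmbedding(obs_dim=16, emb_dim=8, n_actions=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_t, x_tp1 = torch.randn(32, 16), torch.randn(32, 16)
a_t = torch.randint(0, 4, (32,))
loss = F.cross_entropy(model(x_t, x_tp1), a_t)   # softmax classifier + NLL
opt.zero_grad(); loss.backward(); opt.step()
```

Only f is kept at inference time; its output is the controllable state written to the episodic memory.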
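A corresponding sketch of the episodic bonus of Eqs. (2)-(3) and the mixing of Eq. (1) follows. The exponential-moving-average update for d²_m, the handling of an empty memory, and the EpisodicNovelty naming are our assumptions; the authors' exact procedure is given in their Alg. 1 (App. A.1).

```python
import numpy as np

class EpisodicNovelty:
    """Sketch of the episodic bonus: kNN pseudo-counts with an inverse kernel."""
    def __init__(self, k=10, c=0.001, eps=1e-3):
        self.memory = []          # controllable states f(x_0..t-1); reset every episode
        self.k, self.c, self.eps = k, c, eps
        self.d2_mean = 1.0        # running average of squared k-th-NN distances (assumed EMA)

    def reward(self, z):
        r = 1.0                   # assumed value for an empty memory (first step of episode)
        if self.memory:
            d2 = np.array([np.sum((z - m) ** 2) for m in self.memory])
            d2 = np.sort(d2)[: self.k]                            # k nearest neighbours
            self.d2_mean = 0.99 * self.d2_mean + 0.01 * d2[-1]    # track typical kNN distance
            kern = self.eps / (d2 / max(self.d2_mean, 1e-8) + self.eps)   # Eq. (3)
            r = 1.0 / np.sqrt(kern.sum() + self.c)                # Eq. (2)
        self.memory.append(z)
        return r

def mixed_intrinsic_reward(r_episodic, alpha, L=5.0):
    """Eq. (1): modulate the episodic bonus with the life-long RND factor alpha."""
    return r_episodic * min(max(alpha, 1.0), L)

# toy usage: random 8-d controllable states; alpha would come from the RND error
novelty = EpisodicNovelty()
rng = np.random.default_rng(0)
for step in range(5):
    z = rng.normal(size=8)
    print(mixed_intrinsic_reward(novelty.reward(z), alpha=1.3))
```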
}, { "heading": "3 THE NEVER-GIVE-UP AGENT", "text": "In the previous section we described an episodic intrinsic reward for learning policies capable of maintaining exploration in a meaningful way throughout the agent's training process. We now demonstrate how to incorporate this intrinsic reward into a full agent that maintains a collection of value functions, each with a different exploration-exploitation trade-off.\nUsing intrinsic rewards as a means of exploration subtly changes the underlying Markov Decision Process (MDP) being solved: if the augmented reward r_t = r^e_t + β·r^i_t varies in ways unpredictable from the action and states, then the decision process may no longer be an MDP, but instead be a Partially Observed MDP (POMDP). Solving POMDPs can be much harder than solving MDPs, so to avoid this complexity we take two approaches: firstly, the intrinsic reward is fed directly as an input to the agent, and secondly, our agent maintains an internal state representation that summarises its history of all inputs (state, action and rewards) within an episode. As the basis of our agent, we use Recurrent Replay Distributed DQN (Kapturowski et al., 2019, R2D2) as it combines a recurrent state, experience replay, off-policy value learning and distributed training, matching our desiderata.\nUnlike most of the previously proposed intrinsic rewards (as seen in Section 1), the never-give-up intrinsic reward does not vanish over time, and thus the learned policy will always be partially driven by it. Furthermore, the proposed exploratory behaviour is directly encoded in the value function and as such it cannot be easily turned off. To overcome this problem, we propose to jointly learn an explicit exploitative policy that is only driven by the extrinsic reward of the task at hand.\nProposed architecture: We propose to use a universal value function approximator (UVFA) Q(x, a, β_i) to simultaneously approximate the optimal value function with respect to a family of augmented rewards of the form r^{β_i}_t = r^e_t + β_i·r^i_t. We employ a discrete number N of values {β_i}^{N−1}_{i=0}, including the special case of β_0 = 0 and β_{N−1} = β, where β is the maximum value chosen. In this way, one can turn off exploratory behaviour simply by acting greedily with respect to Q(x, a, 0). Even before observing any extrinsic reward, we are able to learn a powerful representation and set of skills that can be quickly transferred to the exploitative policy. In principle, one could think of having an architecture with only two policies, one with β_0 = 0 and one with β_1 > 0. The advantage of learning a larger number of policies comes from the fact that exploitative and exploratory policies could be quite different from a behaviour standpoint. Having a larger number of policies that change smoothly allows for more efficient training. For a detailed description of the specific values of β_i we use in our experiments, see App. A. We adapt the R2D2 agent that uses the dueling network architecture of Wang et al. (2015) with an LSTM layer after a convolutional neural network. We concatenate to the output of the network a one-hot vector encoding the value of β_i, the previous action a_{t−1}, the previous intrinsic reward r^i_{t−1} and the previous extrinsic reward r^e_{t−1}. We describe the precise architecture in App. H.3 and hyperparameters in App. F.\nRL Loss functions: As a training loss we use a transformed Retrace double Q-learning loss. In App. E we provide the details of the computation of the Retrace loss (Munos et al., 2016).
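To make the β_i-conditioning described above concrete, here is a minimal PyTorch sketch of the conditioned recurrent head. Where exactly the concatenation happens and the use of a one-hot encoding for the previous action are our assumptions (the authors' precise architecture is in their App. H.3); the class and argument names are ours.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NGUHead(nn.Module):
    """One-hot beta conditioning on top of CNN features, as described in the text."""
    def __init__(self, feat_dim, n_actions, n_betas, hidden=128):
        super().__init__()
        # features + one-hot(beta) + one-hot(prev action) + prev intrinsic & extrinsic rewards
        in_dim = feat_dim + n_betas + n_actions + 2
        self.n_actions, self.n_betas = n_actions, n_betas
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.q = nn.Linear(hidden, n_actions)   # a dueling head would replace this linear layer

    def forward(self, feats, beta_idx, prev_a, prev_ri, prev_re, state=None):
        # feats: [B, T, feat_dim]; beta_idx, prev_a: [B, T] ints; prev_ri, prev_re: [B, T]
        x = torch.cat([feats,
                       F.one_hot(beta_idx, self.n_betas).float(),
                       F.one_hot(prev_a, self.n_actions).float(),
                       prev_ri.unsqueeze(-1), prev_re.unsqueeze(-1)], dim=-1)
        h, state = self.lstm(x, state)
        return self.q(h), state

# toy usage: N = 4 mixtures over a batch of 2 sequences of length 5
head = NGUHead(feat_dim=32, n_actions=6, n_betas=4)
q, _ = head(torch.randn(2, 5, 32), torch.randint(0, 4, (2, 5)),
            torch.randint(0, 6, (2, 5)), torch.rand(2, 5), torch.rand(2, 5))
```

Acting greedily over q with the beta index fixed to 0 recovers the purely exploitative policy.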
In addition, we associate with each β_i a discount factor γ_i, with γ_0 = 0.997 and γ_{N−1} = 0.99. We remark that the exploitative policy β_0 is associated with the highest discount factor γ_0 = γ_max and the most exploratory policy β_{N−1} with the smallest discount factor γ_{N−1} = γ_min. We can use smaller discount factors for the exploratory policies because the intrinsic reward is dense and the range of values is small, whereas we would like the highest possible discount factor for the exploitative policy in order to be as close as possible to optimizing the undiscounted return. For a detailed description of the specific values of γ_i we use in our experiments, see App. A.\nDistributed training: Recent works in deep RL have achieved significantly improved performance by running on distributed training architectures that collect large amounts of experience from many actors running in parallel on separate environment instances (Andrychowicz et al., 2018; Barth-Maron et al., 2018; Burda et al., 2018b; Espeholt et al., 2018; Horgan et al., 2018; Kapturowski et al., 2019; Silver et al., 2016). Our agent builds upon the work by Kapturowski et al. (2019) to decouple learning from acting, with actors (256 unless stated otherwise) feeding experience into a distributed replay buffer and the learner training on randomly sampled batches from it in a prioritized way (Schaul et al., 2015b). Please refer to App. A for details." }, { "heading": "4 EXPERIMENTS", "text": "We begin by analyzing the exploratory policy of the Never Give Up (NGU) agent with a single reward mixture. We perform such analysis by using a minimal example environment in Section 4.1. We observe the performance of its learned policy, as well as highlight the importance of learning a representation for abstract states. In Section 4.2, we analyze the performance of the full NGU agent, evaluating its effectiveness on the Arcade Learning Environment (ALE; Bellemare et al. (2013)). We measure the performance of the agent against baselines on hard exploration games, as well as dense reward games. We expand on the analysis of the NGU agent by running it on the full set of Atari games, as well as showing multiple ablations on important choices of hyperparameters of the model." }, { "heading": "4.1 CONTROLLED SETTING ANALYSIS", "text": "In this section we present a simple example to highlight the effectiveness of the exploratory policy of the NGU agent, as well as the importance of estimating the exploration bonus using a controllable state representation. To isolate the effect of the exploratory policy, we restrict the analysis to the case of a single exploratory policy (N = 1, with β = 0.3). We introduce a gridworld environment, Random Disco Maze, implemented with the pycolab game engine (Stepleton, 2017), depicted in Fig. 2 (left). At each episode, the agent finds itself in a new randomly generated maze of size 21x21. The agent can take four actions {left, right, up, down}, moving a single position at a time. The environment is fully observable. If the agent steps into a wall, the episode ends and a new maze is generated. Crucially, at every time step, the color of each wall fragment is randomly sampled from a set of five possible colors, enormously increasing the number of possible states. This irrelevant variability in color presents a serious challenge to algorithms using exploration bonuses based on novelty, as the agent is likely to never see the same state twice. This experiment is purely exploratory, with no external reward.
The goal is to see if the proposed model can learn a meaningful directed exploration policy despite the large visual distractions providing a continual stream of observation novelty to the agent. Fig. 2 shows the percentage of unique states (different positions in the maze) visited by agents trained with the proposed model and one in which the mapping f is a fixed random projection (i.e. f is untrained). The proposed model learns to explore any maze sampled from the task-distribution. The agent learns a strategy that resembles depth-first search: it explores as far as possible along each branch before backtracking (often requiring backtracking a few dozen steps to reach an unexplored area). The model with random projections, as well as our baseline of RND, do not show such exploratory behaviour (see a video of the trained agent at https://youtu.be/9HTY4ruPrHw). Both models do learn to avoid walking into walls, as doing so would limit the amount of intrinsic reward received. However, staying alive is enough: simply oscillating between two states will produce different (and novel) controllable states at every time step." }, { "heading": "4.2 ATARI RESULTS", "text": "In this section, we evaluate the effectiveness of the NGU agent on the Arcade Learning Environment (ALE; (Bellemare et al., 2013)). We use the standard Atari evaluation protocol and pre-processing as described in Tab. 8 of App. F.4, with the only difference being that we do not use frame stacking. We restrict NGU to using the same setting and data consumption as R2D2, the best performing algorithm on Atari (Kapturowski et al., 2019). While we compare our results with the best published methods on this benchmark, we note that different baselines use very different training regimes with very different computational budgets. Comparing distributed and non-distributed methods is in general difficult. In an effort to properly assess the merits of the proposed model we include two additional baselines: as NGU is based on R2D2 using the Retrace loss (instead of its n-step objective) we include this as a baseline, and since we use RND as a reward modulator, we also include R2D2 with Retrace using the RND intrinsic reward. These methods are all run for 35 billion frames using the same protocol as that of R2D2 (Kapturowski et al., 2019). We detail the use of compute resources of the algorithms in App. D. We report the return averaged over 3 different seeds.\nArchitecture: We adopt the same core architecture as that used by the R2D2 agent to facilitate comparisons. There are still a few choices to make, namely: the size of the learned controllable states, the clipping factor L in (1), and the number of nearest neighbours to use for computing pseudo-counts in (2). We selected these hyperparameters by analysing the performance of the single policy agent, NGU(N = 1), on two representative exploration games: Montezuma's Revenge and Pitfall!. We report this study in App. B. We used the same fixed set of hyperparameters in all the remaining experiments.\nNGU agent: We performed further ablations in order to better understand several major design choices of the full NGU agent on a set of 8 Atari games: the set of 5 dense reward games chosen to select the hyperparameters of Mnih et al. (2015), as well as 3 hard exploration games (Montezuma's Revenge, Pitfall!, and Private Eye). For a detailed description of the results on these games, as well as results on more choices of hyperparameters, please see App. C.
The ablations we perform are on the number of mixtures N, the impact of the off-policy data used (referred to as CMR below), the maximum magnitude of β (by default 0.3 if not explicitly mentioned), the use of RND to scale the intrinsic reward, and the performance of the agent in the absence of extrinsic rewards. We denote by Cross Mixture Ratio (CMR) the proportion of experience in the training batches collected using values of βi different from the one being trained. A CMR of 0 means training each policy only with data produced by the same βi, while a CMR of 0.5 means using equal amounts of data produced by βi and β_{j≠i}. Our base agent NGU has a CMR of 0.\nThe results are shown in Fig. 3. Several conclusions can be extracted from these results. Firstly, sharing experience from all the actors (with a CMR of 0.5) slightly harms overall average performance on hard exploration games. This suggests that the power of acting differently for different conditioning mixtures is mostly acquired through the shared weights of the model rather than shared data. Secondly, we observe an improvement, on average, from increasing the number of mixtures N on hard exploration games. Thirdly, when analyzing the value of β, β = 0.3 is the best performing value on average, whereas β = 0.2 and β = 0.5 make the average performance worse on those hard exploration games. These values indicate, in this case, the limits at which NGU either lacks sufficiently exploratory variants (β too low) or its policies become too biased towards exploratory behavior (β too high). Further, the use of the RND factor seems to be greatly beneficial on these hard exploration games. This matches existing literature, in which long-term intrinsic rewards appear to have a large impact (Bellemare et al., 2016; Ostrovski et al., 2017; Choi et al., 2018). Additionally, as outlined above, the motivation behind studying these variations on this set of 8 games is that those hyperparameters are of general effect, rather than specific to exploration. However, surprisingly, with the exception of the case of removing the extrinsic reward, they seem to have little effect on the dense reward games we analyze (with all error bars overlapping). This suggests that NGU and its hyperparameters are relatively robust: as extrinsic rewards become dense, intrinsic rewards (and their relative weight to the extrinsic rewards) naturally become less relevant. Finally, even without the extrinsic reward r^e, we can still obtain average superhuman performance on the 5 dense reward games we evaluate, indicating that the exploration policy of NGU is an adequate high performing prior for this set of tasks. This confirms the findings of Burda et al. (2018a), who showed that there is a high degree of alignment between the intrinsic curiosity objective and the hand-designed extrinsic rewards of many game environments. The heuristics of surviving and exploring what is controllable seem to be highly general and beneficial, as we have seen in the Disco Maze environment in Section 4.1, as well as on Atari.\nHard exploration games: We now evaluate the full NGU agent on the six hard exploration games identified by Bellemare et al. (2016). We summarise the results in Tab. 1. The proposed method achieves similar or higher average returns than state-of-the-art baselines on all hard exploration tasks.
Remarkably, to the best of our knowledge, this is the first method that obtains a positive score on Pitfall! without the use of privileged information, with NGU(N = 1)-RND obtaining a best score of 15,200. Moreover, in 4 of the 6 games, NGU(N = 32) appears to substantially improve upon the single mixture case NGU(N = 1). This shows how the exploitative policy is able to leverage the shared weights with all the intrinsically-conditioned mixtures to explore games in which it is hard to do so, while still optimizing towards maximizing the final episode score. In Fig. 4 we can see these conclusions more clearly: both in terms of mean and median human normalized scores, NGU greatly improves upon existing algorithms.\nWhile direct comparison of the scores is interesting, the emphasis of this work is on learning directed exploration strategies that encourage the agent to cover as much of the environment as possible. In Fig. 4.2 (left) we observe the average episodic return of NGU run with and without RND on Pitfall!. NGU(N = 32) is able to learn a directed exploration policy capable of exploring an average of 46 rooms per episode, crossing 14 rooms before receiving the first extrinsic reward. We also observe that, in this case, using RND makes our model less data efficient. This is also the case for NGU(N = 1), as observed for NGU(N = 1)-RND in Tab. 1, the best performing Pitfall! agent. We offer three main hypotheses to explain this: firstly, on Pitfall! (and unlike Montezuma’s Revenge) rooms are frequently aliased to one another, so the agent does not obtain a large reward for discovering new rooms. This phenomenon would explain the results seen in Fig. 4.2 (right), in which RND greatly improves the results of NGU(N = 32). Secondly, the presence of a timer in the observation acts as a spurious source of novelty which greatly increases the number of unique states achievable even within a single room. Thirdly, as analyzed in Section 3.7 of Burda et al. (2018b), RND-trained agents often keep 'interacting with danger' instead of exploring further, and Pitfall! is a game in which this can be highly detrimental, due to the large number of dangerous elements in each room. Finally, we observe that NGU(N = 1) obtains better results than NGU(N = 32). Our intuition is that, in this case, a single policy should be simpler to learn and can achieve quite good results on this task, since the exploration and exploitation policies are very similar.\nDense reward games: Tab. 2 shows the results of our method on dense reward games. NGU(N = 1) underperforms relative to R2D2 on most games (indeed the same can be said of R2D2(Retrace), which serves as the basis of NGU). Since the intrinsic reward signal may be completely misaligned with the goal of the game, these results may be expected. However, there are cases such as Pong, in which NGU(N = 1) catastrophically fails to learn to perform well. This is where NGU(N = 32) solves the issue: the exploitative policy learned by the agent is able to reliably learn to play the game. Nevertheless, NGU(N = 32) has limitations: even though its learned policies are vastly superhuman and empirically reasonable, they do not match R2D2 on Breakout and Beam Rider. This suggests that the representations learned by using the intrinsic signal still slightly interfere with the learning process of the exploitative mixture.
We hypothesize that alleviating this further by having non-shared representations between mixtures should help in solving this issue.\nResults on all Atari 57 games: The proposed method achieves an overall median score of 1354.4%, compared to 95% for the Nature DQN baseline, 191.8% for IMPALA, 1920.6% for R2D2, and 1451.8% for R2D2 using the Retrace loss. Please refer to App. G for separate results on individual games. Even though its overall median score is lower than that of R2D2, NGU maintains good performance on all games, performing above human level on 51 out of the 57 games. This further confirms that the learned exploitative mixture is still able to focus on maximizing the score of the game, allowing the algorithm to obtain strong performance across all games.\nAnalysis of Multiple Mixtures: in Fig. 6, we can see NGU(N = 32) evaluated with β0 = 0 (used in all reported numerical results) against NGU(N = 32) evaluated with β31 = 0.3. We can observe different trends in the games: on Q*Bert the policies of the agent seem to converge to the exploitative policy regardless of the β condition, with its learning curve being almost identical to the one shown for R2D2 in Kapturowski et al. (2019). As seen in App. G, this is common in many games. The second most common occurrence is what we see on Pitfall! and Beam Rider, in which the policies learn qualitatively very different behaviour. In these cases, the exploitative policy learns to focus on its objective, and sometimes it does so by benefiting from what the exploratory policy has learned, as is the case in Pitfall!,2 where R2D2 never achieves a positive score. Finally, there is the exceptional case of Montezuma’s Revenge, in which the reverse happens: the exploratory policy obtains a better score than the exploitative policy. In this case, extremely long-term credit assignment is required in order for the exploitative policy to consolidate the knowledge of the exploratory policy. This is because, to achieve scores higher than 16k, the agent needs to go to the second level of the game, going through many non-greedy and sometimes irreversible actions. For a more detailed analysis of this specific problem, see App. I.2.\n2 See videos of NGU on Pitfall! with β0, β31: https://sites.google.com/view/nguiclr2020" }, { "heading": "5 CONCLUSIONS", "text": "We present a reinforcement learning agent that can learn effectively in both sparse and dense reward scenarios. The proposed agent achieves high scores in all Atari hard-exploration games, while still maintaining a very high average score over the whole Atari-57 suite. Remarkably, it is, to the best of our knowledge, the first algorithm to achieve non-zero rewards on the challenging game of Pitfall! without relying on human demonstrations, hand-crafted features, or manipulating the state of the environment. A central contribution of this work is a method for learning policies that can maintain exploration throughout the training process. In the absence of extrinsic rewards, the method produces a policy that aims at traversing all controllable states of the MDP in a depth-first manner. We highlight that this could have impact beyond this specific application and/or algorithmic choices.
For instance, one could use it as a behaviour policy to facilitate learning models of the environment or as a prior for planning methods.\nThe proposed method is able to leverage large amounts of compute by running on distributed training architectures that collect large amounts of experience from many actors running in parallel on separate environment instances. This has been crucial for solving the most challenging tasks in deep RL in recent years (Andrychowicz et al., 2018; Espeholt et al., 2018; Silver et al., 2016), and this method is able to utilize such compute to obtain strong performance on the set of hard-exploration games on Atari. While this is certainly a desirable feature and allows NGU to achieve a remarkable performance, it comes at the price of high sample complexity, consuming a large amount of simulated experience over several days of wall-clock time. An interesting avenue for future research lies in improving the data efficiency of these agents.\nFurther, the episodic novelty measure relies on the notion of controllable states to drive exploration. As observed on the Atari hard-exploration games, this strategy performs well on several tasks, but it may not be the right signal for some environments. For instance, in some environments it might take more than two consecutive steps to see the consequences of the actions taken by the agent. An interesting line for future research is learning effective controllable states beyond a simple inverse dynamics model.\nAdditionally, the proposed work relies on the assumption that, while different, good exploratory and exploitative policies can be found that are similar enough to be effectively represented using a shared parameterization (implemented using the UVFA framework). This can be limiting when the two policies are almost adversarial. This can be seen in games such as ‘Surround’ and ‘Ice hockey’.\nFinally, the hyperparameter β depends on the scale of the extrinsic reward. Thus, environments with significantly different extrinsic reward scales might require different values of β. An interesting avenue forward is the dynamic adaptation of β, which could be done by using techniques such as Population Based Training (PBT) (Jaderberg et al., 2017) or Meta-gradients (Xu et al., 2018). Another advantage of dynamically tuning this hyperparameter would be to allow the model to become completely exploitative when the agent has reached a point at which further exploration does not lead to improvements of the exploitative policy. This is not trivially achievable, however, as including such a mechanism would require calibrating the adaptation to be aligned with the speed of learning of the exploitative policy." }, { "heading": "ACKNOWLEDGMENTS", "text": "We thank Daan Wierstra, Steph Hughes-Fitt, Andrea Banino, Meire Fortunato, Melissa Tan, Benigno Uria, Borja Ibarz, Mohammad Gheshlaghi Azar, Remi Munos, Bernardo Avila Pires, Andre Barreto, Vali Irimia, Sam Ritter, David Raposo, Tom Schaul and many other colleagues at DeepMind for helpful discussions and comments on the manuscript." }, { "heading": "A EVALUATION SETUP", "text": "The evaluation we perform is identical to that of R2D2 (Kapturowski et al., 2019): a parallel evaluation worker, which shares weights with actors and learners, runs the Q-network against the environment. This worker and the actor workers are the two types of workers that draw samples from the environment. For Atari, we apply the standard DQN pre-processing, as used in R2D2.
More concretely, this is how the actors, the evaluator, and the learner are run:\nLearner:\n• Sample from the replay buffer a sequence of augmented rewards r_t, intrinsic rewards r^i_t, observations x, actions a, and discounts γi.\n• Use the Q-network to learn from (r_t, x, a) with Retrace, following the procedure used by R2D2. As specified in Fig. 1, r^i_t is sampled because it is fed as an input to the network.\n• Use the last 5 frames of the sampled sequences to train the action prediction network as specified in Section 2. This means that, for every batch of sequences, all time steps are used to train the RL loss, whereas only 5 time steps per sequence are used to optimize the action prediction loss.\n• (If using RND) also use the last 5 frames of the sampled sequences to train the predictor of RND, as also specified in Section 2." }, { "heading": "Evaluator and Actor", "text": "• Obtain x_t, r^e_t, r^i_{t−1}, and the discount γi. • With these inputs, compute a forward pass of R2D2 to obtain a_t. • With x_t, compute r^i_t using the embedding network as described in Section 2. • (actor) Insert x_t, a_t, r_t = r^e_t + βi r^i_t, γi, and r^i_t into the replay buffer. • Step on the environment with a_t." }, { "heading": "Distributed training", "text": "As in R2D2, we train the agent with a single GPU-based learner, performing approximately 5 network updates per second (each update on a mini-batch of 64 length-80 sequences, as explained below), with each actor performing ∼ 260 environment steps per second on Atari. We assign to each actor a fixed value in the set {βi}_{i=0}^{N−1} and the actor acts according to an ε-greedy version of this policy. More concretely, for the j-th actor we assign the value βh with h = j mod N − 1. In our experiments, we use the following βi:\n\beta_i = \begin{cases} 0 & \text{if } i = 0 \\ \beta & \text{if } i = N-1 \\ \beta \cdot \sigma\left(10\,\frac{2i-(N-2)}{N-2}\right) & \text{otherwise} \end{cases}\nwhere σ is the sigmoid function. This choice of βi, as can be seen in Fig. 7(a), allows us to focus more on the two extreme cases: the fully exploitative policy and the most exploratory policies.\nIn the replay buffer, we store fixed-length sequences of (x, a, r) tuples. In all our experiments we collect sequences of length 80 timesteps, where adjacent sequences overlap by 40 time-steps. These sequences never cross episode boundaries. Additionally, we store in the replay the value of the βi used by the actor as well as the initial recurrent state, which we use to initialize the network at training time. Please refer to Kapturowski et al. (2019) for a detailed experimental analysis of trade-offs between different treatments of recurrent states in replay. Given a single batch of trajectories we unroll both online and target networks on the same sequence of states to generate value estimates. We use prioritized experience replay. We followed the same prioritization scheme proposed in Kapturowski et al. (2019), using a mixture of max and mean of the TD-errors with priority exponent η = 1.0. In addition, we associate with each βi a γi such that:\n\gamma_i = 1 - \exp\left(\frac{(N-1-i)\log(1-\gamma_{\max}) + i\log(1-\gamma_{\min})}{N-1}\right), \quad (4)\nwhere γmax is the maximum discount factor and γmin is the minimal discount factor. This form gives discount factors evenly spaced in log-space between 1 − γmax and 1 − γmin. For more intuition, we provide a graph of the {γi}_{i=0}^{N−1} in Fig. 7(b) in App. A. We remark that the exploitative policy β0 is associated with the highest discount factor γ0 = γmax and the most exploratory policy βN−1 with the smallest discount factor γN−1 = γmin. We can use smaller discount factors for the exploratory policies because the intrinsic reward is dense and the range of values is small, whereas we would like the highest possible discount factor for the exploitative policy in order to be as close as possible to optimizing the undiscounted return. In our experiments, we use γmax = 0.997 and γmin = 0.99.
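For concreteness, the following NumPy sketch computes both schedules exactly as defined above, using the stated values N = 32, β = 0.3, γmax = 0.997 and γmin = 0.99.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def beta_schedule(n, beta=0.3):
    """Per-mixture intrinsic-reward weights: beta_0 = 0 (pure exploitation),
    beta_{N-1} = beta (most exploratory), sigmoid spacing in between."""
    betas = []
    for i in range(n):
        if i == 0:
            betas.append(0.0)
        elif i == n - 1:
            betas.append(beta)
        else:
            betas.append(beta * sigmoid(10.0 * (2 * i - (n - 2)) / (n - 2)))
    return np.array(betas)

def gamma_schedule(n, gamma_max=0.997, gamma_min=0.99):
    """Eq. (4): discounts evenly spaced in log(1 - gamma) space, pairing the
    exploitative mixture with gamma_max and the most exploratory with gamma_min."""
    i = np.arange(n)
    return 1.0 - np.exp(((n - 1 - i) * np.log(1.0 - gamma_max)
                         + i * np.log(1.0 - gamma_min)) / (n - 1))

betas, gammas = beta_schedule(32), gamma_schedule(32)
assert betas[0] == 0.0 and abs(gammas[0] - 0.997) < 1e-9
```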
" }, { "heading": "A.1 NEVER-GIVE-UP INTRINSIC REWARD ALGORITHM", "text": "We present the algorithm for computing the intrinsic reward in Alg. 1. We follow the notation defined in Sec. 2 in the paragraph on the episodic intrinsic reward:\n• M: the episodic memory containing at time t the previous embeddings {f(x_0), f(x_1), . . . , f(x_{t−1})}.\n• k: the number of nearest neighbours.\n• N_k = {f_i}_{i=1}^{k}: the set of k-nearest neighbours of f(x_t) in the memory M.\n• K: the kernel defined as K(x, y) = \frac{\epsilon}{\frac{d^2(x, y)}{d^2_m} + \epsilon}, where ε is a small constant, d is the Euclidean distance and d^2_m is a running average of the squared Euclidean distance of the k-nearest neighbours.\n• c: the pseudo-counts constant.\n• ξ: the cluster distance.\n• s_m: the maximum similarity." }, { "heading": "A.2 COMPLEXITY ANALYSIS", "text": "The space complexity is constant. The number of weights that the network has can be computed from the architecture seen in App. F. Furthermore, for our episodic memory buffer, we pre-allocate memory at the beginning of training, with size detailed in App. F. In cases in which the episode is longer than the size of the memory, the memory acts as a ring buffer, deleting the oldest entries first.\nTime complexity is O(M · N), where N is the number of frames and M is the size of our memory. This is due to the fact that we do one forward pass per frame, and we compute the distance from the embeddings produced by the embedding network to the contents of our memory in order to retrieve the k-nearest neighbours.\nAlgorithm 1: Computation of the episodic intrinsic reward at time t: r^{episodic}_t.\nInput: M; k; f(x_t); c; ε; ξ; s_m; d^2_m. Output: r^{episodic}_t\n1 Compute the k-nearest neighbours of f(x_t) in M and store them in a list N_k\n2 Create a list of floats d_k of size k /* The list d_k will contain the distances between the embedding f(x_t) and its neighbours N_k. */\n3 for i ∈ {1, . . . , k} do\n4 d_k[i] ← d^2(f(x_t), N_k[i])\n5 end\n6 Update the moving average d^2_m with the list of distances d_k\n/* Normalize the distances d_k with the updated moving average d^2_m. */\n7 d_n ← d_k / d^2_m\n/* Cluster the normalized distances d_n, i.e. they become 0 if too small, where 0_k is a list of k zeros. */\n8 d_n ← max(d_n − ξ, 0_k)\n/* Compute the kernel values between the embedding f(x_t) and its neighbours N_k. */\n9 K_v ← ε / (d_n + ε)\n/* Compute the similarity between the embedding f(x_t) and its neighbours N_k. */\n10 s ← sqrt(Σ_{i=1}^{k} K_v[i]) + c\n/* Compute the episodic intrinsic reward at time t: r^{episodic}_t. */\n11 if s > s_m then\n12 r^{episodic}_t ← 0\n13 else\n14 r^{episodic}_t ← 1/s
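For concreteness, the following is a direct NumPy transcription of Algorithm 1. It is a sketch rather than the agent's implementation: the constants in the signature are illustrative placeholders (not the tuned values of App. F), the k-nearest neighbours are found by brute force, and the handling of an empty memory is our own assumption. The caller is responsible for appending f(x_t) to the memory afterwards and for resetting the memory at episode boundaries.

```python
import numpy as np

def episodic_reward(memory, f_xt, k=10, eps=1e-3, c=1e-3, xi=8e-3,
                    s_max=8.0, d2_running=None, decay=0.99):
    """NumPy transcription of Algorithm 1 (episodic intrinsic reward).

    memory: (n, d) array of past controllable states in this episode.
    f_xt:   (d,) controllable state of the current observation.
    Returns (reward, updated running average of squared kNN distance).
    """
    if len(memory) == 0:
        return 1.0, d2_running  # assumption: maximal novelty with empty memory
    # Lines 1-5: squared distances to the k nearest neighbours.
    d2 = np.sum((np.asarray(memory) - f_xt) ** 2, axis=1)
    d2_k = np.sort(d2)[:k]
    # Line 6: update the running mean of squared kNN distances.
    d2_running = (d2_k.mean() if d2_running is None
                  else decay * d2_running + (1 - decay) * d2_k.mean())
    # Lines 7-8: normalize, then cluster (zero-out) small distances.
    d_n = np.maximum(d2_k / max(d2_running, 1e-12) - xi, 0.0)
    # Line 9: kernel values K_v = eps / (d_n + eps).
    K_v = eps / (d_n + eps)
    # Lines 10-14: similarity s, reward 1/s, zeroed if s exceeds s_max.
    s = np.sqrt(K_v.sum()) + c
    return (0.0 if s > s_max else 1.0 / s), d2_running
```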
" }, { "heading": "B ABLATIONS FOR NGU(N=1)", "text": "As mentioned in Section 4.2, we here show ablations on the size of the learned controllable states, the clipping factor L in (1), and the number of nearest neighbours used for computing pseudo-counts in (2).\nDue to the lack of a pure exploitative mode, as seen in Section 4.2, NGU(N=1) fails to perform well in dense reward games. Therefore, in order to obtain a high signal from these ablations, we analyze the performance of NGU(N=1) on the two most popular sparse reward games: Montezuma’s Revenge and Pitfall!." }, { "heading": "B.1 SIZE OF CONTROLLABLE STATES", "text": "In Fig. 8 and Fig. 9 we show the performance of NGU(N=1) with different sizes of the controllable state on Pitfall! and Montezuma’s Revenge respectively. We observe little to no impact on Pitfall!, with scores that sometimes reach more than 25,000 points. On Montezuma’s Revenge, a size of 32 is consistently better than 64. A size of 16 sometimes solves the level, but is in general less stable." }, { "heading": "B.2 NEAREST NEIGHBORS USED", "text": "We show a similar analysis in Fig. 10 and Fig. 11 regarding the number of nearest neighbours on Pitfall! and Montezuma’s Revenge respectively. As we can see, there are slight gains from using more neighbours on Pitfall!, whereas there is a clear difference in performance when using 10 neighbours on Montezuma’s Revenge compared to using 5 or 30 neighbours." }, { "heading": "B.3 CLIPPING FACTOR L", "text": "Finally, we show the performance of NGU(N=1) in Fig. 12 and Fig. 13 regarding the clipping factor L on Pitfall! and Montezuma’s Revenge respectively. As we can observe, Pitfall! is again robust to the value of this hyperparameter, with marginally worse performance in the case of L = 10. This is expected, as RND is generally detrimental to the performance of NGU on Pitfall!, as seen in Section 4.2. On the other hand, the highest value of clipping appears to work best on Montezuma’s Revenge. In our initial investigations, we observed that clipping this value was required on Montezuma’s Revenge to make the algorithm stable. Further analysis is required to determine which values of L above 10 become detrimental to the performance of NGU(N=1) on this task." }, { "heading": "C ABLATIONS FOR NGU(N=32)", "text": "" }, { "heading": "C.1 GENERAL ABLATIONS", "text": "Tab. 3 shows the results for all the ablations we performed on 8 games for NGU(N = 32). We can see that the conclusions of Sec. 4.2 hold, with a few additional facts to observe:\n• The best score on Montezuma’s Revenge is obtained by using a non-zero Cross Mixture Ratio, even though it is relatively close to the score obtained by NGU(N = 32).\n• N = 2 and N = 8 have a lower average human normalized score on the set of 3 hard exploration games when compared to N = 16 or N = 32. Concretely, on the set of hard exploration games of Tab. 3, they only achieve super-human performance on Montezuma’s Revenge.\n• Even though we have seen that the results of β = 0.2 and β = 0.5 have a lower average on the 3 hard exploration games of Tab. 3, they still individually outperform RND, R2D2, R2D2(Retrace), and R2D2+RND on Pitfall! and Private Eye.\n• In the case of Private Eye the distance in score might be misleading, as rewards are very sparse and of large value. For instance, after reaching a score of 40k, if we ignore minor rewards, there are only two rewards left to be collected, of around 30k points. This creates what seem to be large differences in scores.\n• On Breakout, a high score is achieved without extrinsic reward. This is due to the fact that the exploratory policy learns to survive, which eventually leads to a high score." }, { "heading": "C.2 FURTHER ABLATIONS ON HARD EXPLORATION", "text": "In Tab. 4 we show further results for the cases of β = 0.2 and β = 0.5. We compare them to human performance as well as the base NGU(N = 32), with β = 0.3.\nAs we can observe, in this case the difference in terms of relative performance among games is less pronounced than the one observed in Tab. 3. 
In fact, results are slightly better for both values of β on all 3 games, with a maximum difference of 1.5k points on Solaris between β = 0.3 and β = 0.2. We hypothesize that this is due to the nature of these specific games: the policies learnt on these three games seem to focus on exploitation rather than extended exploration of the environment, and in that case, similar to what we see for dense reward games in Sec. 4.2, the method shows less variability with respect to this hyperparameter." }, { "heading": "D ALGORITHM COMPUTATION COMPARISON", "text": "In Tab. 5 we can see a comparison of the computation used by the different algorithms.\nComputation is still difficult to compare even when taking actor steps and parameter updates into account: the number of actors in distributed setups will affect how much data the learner will be able to consume, but also how off-policy such data is (e.g. in R2D2, if a learner is learning from many actors, the data that is sampled from the replay buffer will be more recent than with fewer actors)." }, { "heading": "E DETAILS ON THE RETRACE ALGORITHM", "text": "Retrace (Munos et al., 2016) is an off-policy Reinforcement Learning algorithm that can be used for evaluation or control. In the evaluation setting the goal is mainly to estimate the action-value function Q^π of a target policy π from trajectories drawn from a behaviour policy µ. In the control setting the target policy, or more precisely the sequence of target policies, depends on the sequence of Q-functions generated through the process of approximating Q^*. To do so, we consider trajectories τ starting from the state-action couple (x, a) and then following the behaviour policy µ, of the form:\n\tau = (x_t, a_t, r_t, x_{t+1})_{t \in \mathbb{N}}, \quad (5)\nwith (x_0, a_0) = (x, a), a_t \sim \mu(\cdot|x_t) for all t \geq 1, and r_t = r(x_t, a_t), x_{t+1} \sim P(\cdot|x_t, a_t) for all t \geq 0. The expectation \mathbb{E}_\mu is over all admissible trajectories τ generated by the behaviour policy µ starting in state x, taking action a, and then following µ.\nThe general Retrace operator \mathcal{T}, which depends on µ and π, is:\n\mathcal{T}Q(x, a) = Q(x, a) + \mathbb{E}_\mu\left[\sum_{t \geq 0} \gamma^t \left(\prod_{s=1}^{t} c_s\right) \delta_t\right], \quad (6)\nwhere the temporal difference \delta_t is defined as:\n\delta_t = r_t + \gamma \sum_{a \in A} \pi(a|x_{t+1}) Q(x_{t+1}, a) - Q(x_t, a_t), \quad (7)\nand the cutting-traces coefficients c_s as:\nc_s = \lambda \min\left(1, \frac{\pi(a_s|x_s)}{\mu(a_s|x_s)}\right). \quad (8)\nTheorem 2 of Munos et al. (2016) explains under which conditions the sequence of Q-functions:\nQ_{k+1} = \mathcal{T}_k Q_k, \quad (9)\nwhere \mathcal{T}_k depends on the policy couple (\mu_k, \pi_k), converges to the optimal Q-value Q^*. In particular, one of the conditions is that the sequence of target policies \pi_k is greedy or ε-greedy with respect to Q_k (more details can be found in Munos et al. (2016)).\nIn practice, at a given time t, we can only consider finite sampled sequences (x_s, a_s, r_s, x_{s+1})_{s=t}^{t+k} starting from (x_t, a_t) and then following the behaviour policy µ. Therefore, we define the finite sampled-Retrace operator as:\n\hat{\mathcal{T}}Q(x_t, a_t) = Q(x_t, a_t) + \sum_{s=t}^{t+k-1} \gamma^{s-t} \left(\prod_{i=t+1}^{s} c_i\right) \delta_s. \quad (10)\nIn addition, we use two neural networks: one target network Q(x, a; \theta^-) and an online network Q(x, a; \theta). The target network is used to compute the target value \hat{y}_t that the online network will try to fit:\n\hat{y}_t = \hat{\mathcal{T}}Q(x_t, a_t; \theta^-) \quad (11)\n= Q(x_t, a_t; \theta^-) + \sum_{s=t}^{t+k-1} \gamma^{s-t} \left(\prod_{i=t+1}^{s} c_i\right)\left(r_s + \gamma \sum_{a \in A} \pi(a|x_{s+1}) Q(x_{s+1}, a; \theta^-) - Q(x_s, a_s; \theta^-)\right). \quad (12)\nIn the control scenario the chosen policy \pi(a|x) is greedy or ε-greedy with respect to the online network Q(x, a; \theta). Then, the online network is optimized to minimize the loss:\nL(x_t, a_t, \theta) = \left(Q(x_t, a_t; \theta) - \hat{y}_t\right)^2. \quad (13)
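To make the target computation concrete, the following is a minimal NumPy sketch of the sampled Retrace targets of eqs. (10)-(13) for a single sequence. It is a simplification of what the learner actually does (no transformed operator h, no recurrent unrolling, and all per-step probabilities are assumed to be given as arrays); λ = 0.95 is an illustrative default rather than a value taken from the text.

```python
import numpy as np

def retrace_targets(q_tm, pi, mu_a, actions, r, gamma, lam=0.95):
    """Sampled Retrace targets y_hat_t of eqs. (10)-(13) for one sequence.

    q_tm:    (T+1, A) target-network values Q(x_s, .; theta^-).
    pi:      (T+1, A) target-policy probabilities pi(.|x_s).
    mu_a:    (T,) behaviour probabilities mu(a_s|x_s) of the taken actions.
    actions: (T,) integer actions a_s; r: (T,) rewards; gamma, lam: scalars.
    Returns y: (T,) targets for Q(x_s, a_s; theta).
    """
    T = len(r)
    idx = np.arange(T)
    q_a = q_tm[idx, actions]                     # Q(x_s, a_s; theta^-)
    v_next = (pi[1:] * q_tm[1:]).sum(-1)         # sum_a pi(a|x_{s+1}) Q(x_{s+1}, a)
    delta = r + gamma * v_next - q_a             # temporal differences, eq. (7)
    c = lam * np.minimum(1.0, pi[idx, actions] / mu_a)  # cutting traces, eq. (8)
    y = np.empty(T)
    acc = 0.0                                    # running correction sum G_s
    for s in reversed(range(T)):
        carry = c[s + 1] * acc if s + 1 < T else 0.0
        acc = delta[s] + gamma * carry           # G_s = delta_s + gamma c_{s+1} G_{s+1}
        y[s] = q_a[s] + acc                      # eq. (12)
    return y  # train theta by minimizing (Q(x_s, a_s; theta) - y[s])^2, eq. (13)
```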
More generally, one can use transformed Retrace operators (Pohlen et al., 2018):\n\mathcal{T}^h Q(x, a) = \mathbb{E}_\mu\left[h\left(h^{-1}(Q(x, a)) + \sum_{t \geq 0} \gamma^t \left(\prod_{s=1}^{t} c_s\right) \delta^h_t\right)\right], \quad (14)\nwhere h : \mathbb{R} \to \mathbb{R} is a real-valued function and the temporal difference \delta^h_t is defined as:\n\delta^h_t = r_t + \gamma \sum_{a \in A} \pi(a|x_{t+1}) h^{-1}(Q(x_{t+1}, a)) - h^{-1}(Q(x_t, a_t)). \quad (15)\nThe role of the function h is to reduce (squash) the scale of the action-value function to make it easier to approximate with a neural network, without changing the optimality property of the operator \mathcal{T}. In particular, we use the function h:\n\forall z \in \mathbb{R}, \quad h(z) = \operatorname{sign}(z)\left(\sqrt{|z| + 1} - 1\right) + \epsilon z, \quad (16)\n\forall z \in \mathbb{R}, \quad h^{-1}(z) = \operatorname{sign}(z)\left(\left(\frac{\sqrt{1 + 4\epsilon(|z| + 1 + \epsilon)} - 1}{2\epsilon}\right)^2 - 1\right), \quad (17)\nwith \epsilon = 10^{-2}." }, { "heading": "F HYPERPARAMETERS", "text": "" }, { "heading": "F.1 SELECTION OF HYPERPARAMETERS", "text": "In order to select the hyperparameters used for NGU(N = 32) on all 57 Atari games, which are shown in Tab. 6, we ran a grid search over the ranges shown in Tab. 9. We used 3 seeds on the set of 8 Atari games shown in Tab. 3. Regarding the hyperparameters concerning the kernel K (the kernel constants and the number of neighbours used), we fixed them after determining suitable ranges of the intrinsic reward in our initial experimentation on Atari. After running the grid search with those hyperparameters, we selected the combination with the highest number of games (out of 8) achieving a score greater than our human benchmark. As one can see from the multiple mixtures ablations in Tab. 3, as well as the single mixture ablations in App. B, the only agent that achieved superhuman performance on the set of 8 games is NGU(N = 32).\nFinally, in order to obtain the R2D2+RND baseline, we ran a sweep over the β hyperparameter with values 0.1, 0.3, and 0.5, over the 8 games shown in Tab. 3. Coincidentally, like NGU(N = 32), the best value of β was determined to be 0.3." }, { "heading": "F.2 COMMON HYPERPARAMETERS", "text": "These are the hyperparameters used in all the experiments. We expose a full list of hyperparameters here for completeness. However, as one can see, the R2D2-related architectural hyperparameters are identical to the original R2D2 hyperparameters. They are shown in Tab. 6." }, { "heading": "F.3 DISCO MAZE HYPERPARAMETERS", "text": "Hyperparameters are shown in Tab. 7." }, { "heading": "F.4 ATARI PRE-PROCESSING HYPERPARAMETERS", "text": "Hyperparameters are shown in Tab. 8." }, { "heading": "F.5 HYPERPARAMETER RANGES", "text": "In Tab. 9 we can see the ranges we swept over in our experiments." }, { "heading": "G DETAILED ATARI RESULTS", "text": "" }, { "heading": "H NETWORK ARCHITECTURES", "text": "" }, { "heading": "H.1 ARCHITECTURE OF THE EMBEDDING NETWORK WITH INVERSE DYNAMICS PREDICTION", "text": "" }, { "heading": "H.2 ARCHITECTURE OF THE RANDOM NETWORK DISTILLATION", "text": "" }, { "heading": "H.3 ARCHITECTURE OF THE R2D2 AGENT", "text": "" }, { "heading": "I CONTROLLABLE STATES", "text": "In this section we evaluate properties of the learned controllable states. We further present a study of the performance of the algorithm when given access to oracle controllable states containing only the necessary information. We use Montezuma’s Revenge as a case study.\nI.1 INSPECTING THE PROPERTIES OF LEARNED CONTROLLABLE STATES\nAs explained in Section 2, we train the embedding network f using an inverse dynamics model as done by Pathak et al. (2017).
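As a rough illustration of this training setup, the sketch below implements an inverse dynamics loss in PyTorch. The layer sizes, the flattened-observation inputs, and the 18-action default are placeholders of our own; the actual embedding architecture is given in App. H.1.

```python
import torch
import torch.nn as nn

class InverseDynamics(nn.Module):
    """Sketch: embedding f trained to predict the action taken between two
    consecutive observations, with a simple classifier g on top."""

    def __init__(self, obs_dim, emb_dim=32, n_actions=18):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                               nn.Linear(128, emb_dim))      # controllable state
        self.g = nn.Sequential(nn.Linear(2 * emb_dim, 128), nn.ReLU(),
                               nn.Linear(128, n_actions))    # action classifier

    def loss(self, x_t, x_tp1, a_t):
        # Predict a_t from (f(x_t), f(x_{t+1})); only action-relevant
        # information needs to survive in f, which filters out distractors
        # such as the randomly recoloured walls of the Disco Maze.
        logits = self.g(torch.cat([self.f(x_t), self.f(x_tp1)], dim=-1))
        return nn.functional.cross_entropy(logits, a_t)
```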
Intuitively, the controllable states should contain the information relevant to the action performed by the agent given two consecutive observations. However, they might contain other types of information as long as it can be easily ignored by our simple classifier, g.\nAs noted in Burda et al. (2018b), for this game, one can identify a novel state by using five pieces of information: the (x, y) position of the player, a room identifier, the level number, and the number of keys held. This information can be easily extracted from the RAM state of the game as described in Section I.3 below. One question we could ask is whether or not this information is present (or easily decodable) in the learned controllable state. We attempted to answer this question by training a linear classifier to predict the (x, y) coordinates and the room identifier from the learned controllable state. Importantly, we do not backpropagate the errors to the embedding network f. Figure 19 shows the average results over the episodes as the training of the agent progresses. We can see that the squared error in predicting the (x, y) position of the agent stabilises to a more or less constant value, which suggests that it can successfully generalise to new rooms (we do not observe an increase in the error when new rooms are discovered). The magnitude of the error is of the order of 12 units, which is less than 10% of the range (see Section I.3). This is to be expected, as it is the most important information for predicting which action was taken. It shows that the information is quite accessible and probably has a significant influence on the proposed novelty measure. The room identifier, on the other hand, is information that is not necessary to predict the action taken by the agent. Unlike the previous case, one can see jumps in the error as training progresses as the problem becomes harder.\nIt stabilises around an error slightly above 20%, which is reasonably good considering that random chance gives an error of 96%. This means that even if there is nothing specifically encouraging this information to be there, it is still present and in turn can influence the proposed novelty signal.\nAn avenue of future work is to research alternative methods for learning controllable states that directly aim to retain all relevant information. While very good results can be obtained with a simple alternative such as an inverse dynamics model, it is reasonable to think that better results could be attained with a better-crafted one. To inform this question, we investigate in the next section what results we could obtain if we explicitly use as controllable states the quantities that we were trying to predict in this section." }, { "heading": "I.2 MONTEZUMA’S REVENGE WITH HAND-CRAFTED CONTROLLABLE STATES", "text": "In the previous section we analysed the properties of the learned controllable states. A valid question to ask is: how would NGU perform if we had access to an oracle controllable state containing only the relevant information? This analysis provides a form of upper bound on performance for a given agent architecture. We ran the NGU(N=1) model with two ablations: without RND and without extrinsic rewards. Instead of resetting the memory after every episode, we do it after a small number of consecutive episodes, which we call a meta-episode. This structure plays an important role when the agent faces irreversible choices.
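The following is a tiny sketch of the meta-episode bookkeeping just described: the episodic memory is wiped only every few episodes rather than at every episode boundary. The default of three episodes per meta-episode is taken from the key-door discussion later in this section; the rest is a simplification.

```python
class MetaEpisodicMemory:
    """Episodic memory wiped only at meta-episode boundaries, i.e. every
    `meta_len` consecutive episodes (three, per the key-door discussion)."""

    def __init__(self, meta_len=3):
        self.meta_len, self.episode_count, self.embeddings = meta_len, 0, []

    def add(self, emb):
        self.embeddings.append(emb)

    def end_episode(self):
        self.episode_count += 1
        if self.episode_count % self.meta_len == 0:
            self.embeddings = []  # wipe only when the meta-episode ends
```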
In this setting, approaches using non-episodic exploration bonuses are even more susceptible to the “detachment” problem described in Ecoffet et al. (2019). The agent might switch between alternatives without having exhausted all learning opportunities, rendering the initially chosen option uninteresting from a novelty perspective. The episodic approach with a meta-episode of length one would be forced to make similar choices. However, when run with multiple episodes it can offer an interesting alternative. In the first episode, the agent starts with an empty episodic memory and can arbitrarily choose one of the options. In the second episode, the episodic memory contains all the experience collected in the first episode. The agent is then rewarded for not repeating the strategy followed in the first one, as revisiting those states will lead to lower intrinsic reward. Thus, the agent is encouraged to learn diverse behaviour across episodes without needing to choose between alternatives nor being susceptible to the detachment problem. Results are summarized in Fig. 19. We report the average episodic return (left) as well as the average number of visited rooms per meta-episode (right). The model achieves higher scores than the one using learned controllable states (as reported in Section 4.2).\nIncorporating long-term novelty in the exploration bonus encourages the agent to concentrate on the less explored areas of the environment. Similarly to what we observed with learned controllable states, this provides a boost both in data efficiency and in final performance, obtaining close to 15,000 average return and visiting an average of 25 rooms per episode. In this run, three out of five seeds reach the second level of the game, one of which reaches the third level with an average of fifty different rooms per episode. We also make two observations when running in the absence of extrinsic rewards: first, the agent remarkably still achieves a very high extrinsic reward; second, the agent is able to consistently reach a large number of rooms, exploring more than 20 rooms without any extrinsic guidance.\nAs noted in Burda et al. (2018b), in Montezuma’s Revenge each level contains 6 doors and 4 keys. If the agent walks through a door holding a key, it receives a reward of 300, consuming the key in the process. In order to clear a level, the agent needs to open two doors located just before the final room. During exploration, the agent needs to hold on to two keys to see what it could do with them later in the episode, sacrificing the immediate reward of opening more accessible doors. Any agent that acts almost greedily will struggle with what looks like a high-level choice. With the right representation and using meta-episodes, our method can handle this problem in an interesting way. When the number of keys held is represented in the controllable state, the agent chooses a different key-door combination on each of the three episodes in which we do not wipe our episodic memory. At the end of training, in the first episode after wiping the episodic memory, our agent shows a score of 14,660 ± 196, while in the third episode the agent shows a score of 34,040 ± 9,835, exploring on average over 30 rooms and consistently going to the second level.3 The agent learns a complex exploratory policy spanning several episodes that can handle irreversible choices and overcome “distractor” rewards. We do not observe different key-door combinations across episodes when using learned controllable states.
Presumably the signal of the number of held keys in the learned controllable states is not strong enough to treat them as sufficiently different.\nThe results described in this section support the idea that significant gains can be obtained by improving the representation of the controllable states, suggesting that the study of learning better representations is an interesting line for future work. Recent works have explored ways of measuring novelty by learning controllable aspects of an environment (Kim et al., 2018; Warde-Farley et al., 2018), and we believe that some of these ideas could also be useful in this setting." }, { "heading": "I.3 HAND-CRAFTED STATE FEATURES FOR MONTEZUMA’S REVENGE", "text": "We obtain the hand-crafted features for Montezuma’s Revenge by observing the RAM state of the game at every time step. More concretely:\n• x and y can be observed at positions 0xAA and 0xAB respectively, represented by integers with a range of [0, 153] × [0, 122].\n• The room id and level number can be found in positions 0x83 and 0xB9 respectively. We provide this information to our agent as a single integer of the form r_id + 24 · l_n, where r_id ∈ {0, . . . , 23} is the room id and l_n is the level number.\n• Byte 0xC1 is the player’s inventory. We count the number of keys being held (and provide this information to the agent) by summing bits {2, . . . , 6}, which correspond to the binary slots for keys.\n3 See video of the three episodes at https://sites.google.com/view/nguiclr2020
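A small Python sketch of this feature extraction is given below. The subtraction of 0x80 assumes the addresses above refer to the console's memory map while ALE's getRAM() returns the 128 bytes mapped at 0x80-0xFF, and least-significant-bit numbering is assumed for the inventory bits; both are our assumptions rather than statements from the text.

```python
def montezuma_features(ram):
    """Hand-crafted controllable state from the game RAM, using the byte
    addresses listed above. `ram` is the 128-byte array from ALE's getRAM()."""
    def at(addr):
        # Addresses above are given in the 0x80-0xFF memory map (assumption).
        return int(ram[addr - 0x80])
    x, y = at(0xAA), at(0xAB)             # player position
    room_code = at(0x83) + 24 * at(0xB9)  # room id + 24 * level number
    inventory = at(0xC1)
    # Keys occupy bit slots 2..6 of the inventory byte (LSB = bit 0 assumed).
    n_keys = sum((inventory >> b) & 1 for b in range(2, 7))
    return x, y, room_code, n_keys
```
" } ]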
2020
NEVER GIVE UP: LEARNING DIRECTED EXPLORATION STRATEGIES
SP:1ac8384ea71a1d51086464a466cd3167da4336c1
[ "The authors study the phenomena of self-introduced distributional shift. They define the term along with the term hidden incentives for distributional shift. The latter describes factors that motivate the learner to change the distribution in order to achieve a higher performance. The authors study both phenomena in two domains (one being a prisoner dilemma and the other a recommender system) and show how meta-learning reveals the hidden incentives for distributional shift. They then propose an approach based on swapping learners between environments to reduce self introduced distributional shift.", "The main idea of the paper: When using meta-learning there is an inherent incentive for the learner to win by making the task easier. The authors generalise this effect to a larger class of problems where the learning framework induces a set of Hidden Incentive for Distributional Shift (HIDS) and introduce Context Swapping, a HIDS mitigation technique. In the experimental section, the authors propos a HIDS unit test which then they employ to show that PBT (Population Based-Trainng), a popular meta-learning algorithm exhibits HIDS ant that context swapping helps fixing it. " ]
Decisions made by machine learning systems have increasing influence on the world. Yet it is common for machine learning algorithms to assume that no such influence exists. An example is the use of the i.i.d. assumption in online learning for applications such as content recommendation, where the (choice of) content displayed can change users’ perceptions and preferences, or even drive them away, causing a shift in the distribution of users. Generally speaking, it is possible for an algorithm to change the distribution of its own inputs. We introduce the term self-induced distributional shift (SIDS) to describe this phenomenon. A large body of work in reinforcement learning and causal machine learning aims to deal with distributional shift caused by deploying learning systems previously trained offline. Our goal is similar, but distinct: we point out that changes to the learning algorithm, such as the introduction of meta-learning, can reveal hidden incentives for distributional shift (HIDS), and aim to diagnose and prevent problems associated with hidden incentives. We design a simple environment as a ‘unit test’ for HIDS, as well as a content recommendation environment which allows us to disentangle different types of SIDS. We demonstrate the potential for HIDS to cause unexpected or undesirable behavior in these environments, and propose and test a mitigation strategy.
[]
[ { "authors": [ "Hunt Allcott", "Matthew Gentzkow" ], "title": "Social media and fake news in the 2016 election", "venue": "Journal of Economic Perspectives,", "year": 2017 }, { "authors": [ "Michelle A. Amazeen", "Bartosz W. Wojdynski" ], "title": "Reducing native advertising deception: Revisiting the antecedents and consequences of persuasion knowledge in digital news contexts", "venue": "Mass Communication and Society,", "year": 2018 }, { "authors": [ "Marcin Andrychowicz", "Misha Denil", "Sergio Gómez", "Matthew W Hoffman", "David Pfau", "Tom Schaul", "Brendan Shillingford", "Nando de Freitas" ], "title": "Learning to learn by gradient descent by gradient descent", "venue": "In Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Stuart Armstrong", "Xavier" ], "title": "O’Rorke. Good and safe uses of AI oracles", "venue": "ArXiv preprint,", "year": 2017 }, { "authors": [ "Stuart Armstrong", "Xavier O’Rourke" ], "title": "Indifference methods for managing agent rewards", "venue": "Technical report, Future of Humanity Institute,", "year": 2017 }, { "authors": [ "K.J. Åström" ], "title": "Optimal control of Markov Processes with incomplete state information", "venue": "Journal of Mathematical Analysis and Applications,", "year": 1965 }, { "authors": [ "P. Auer", "N. Cesa-Bianchi", "Y. Freund", "R.E. Schapire" ], "title": "Gambling in a rigged casino: The adversarial multi-armed bandit problem", "venue": "In Foundations of Computer Science,", "year": 1995 }, { "authors": [ "Eytan Bakshy", "Solomon Messing", "Lada A. Adamic" ], "title": "Exposure to ideologically diverse news and opinion on", "venue": "Facebook. Science,", "year": 2015 }, { "authors": [ "Nick Bostrom" ], "title": "Superintelligence: Paths, Dangers, Strategies", "venue": null, "year": 2014 }, { "authors": [ "Rich Caruana", "Yin Lou", "Johannes Gehrke", "Paul Koch", "Marc Sturm", "Noemie Elhadad" ], "title": "Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission", "venue": "In International Conference on Knowledge Discovery and Data Mining,", "year": 2015 }, { "authors": [ "Paul Christiano", "Buck Shlegeris", "Dario Amodei" ], "title": "Supervising strong learners by amplifying weak experts", "venue": "ArXiv preprint,", "year": 2018 }, { "authors": [ "Michael K. Cohen", "Elliot Catt", "Marcus Hutter" ], "title": "A strongly asymptotically optimal agent in general environments", "venue": "Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Ajeya Cotra" ], "title": "Iterated distillation and amplification", "venue": "Technical report, AI Alignment,", "year": 2017 }, { "authors": [ "Dominic DiFranzo", "Kristine" ], "title": "Gloria-Garcia. Filter bubbles and fake news", "venue": "XRDS,", "year": 2017 }, { "authors": [ "K. Eric Drexler" ], "title": "Reframing superintelligence: Comprehensive AI services as general intelligence", "venue": "Technical report, Future of Humanity Institute,", "year": 2019 }, { "authors": [ "Mostafa M. El-Bermawy" ], "title": "Your echo chamber is destroying democracy, 2016", "venue": "URL https: //www.wired.com/2016/11/filter-bubble-destroying-democracy/", "year": 2016 }, { "authors": [ "Tom Everitt" ], "title": "Towards Safe Artificial General Intelligence", "venue": "PhD thesis, Australian National University,", "year": 2018 }, { "authors": [ "Tom Everitt", "Pedro A. 
Ortega", "Elizabeth Barnes", "Shane Legg" ], "title": "Understanding agent incentives using causal influence diagrams. part i: Single action settings, 2019", "venue": null, "year": 2019 }, { "authors": [ "Lisa K. Fazio", "Nadia M. Brashier", "B. Keith Payne", "Elizabeth J. Marsh" ], "title": "Knowledge does not protect against illusory truth", "venue": "Journal of Experimental Psychology: General, 144(5):993–1002,", "year": 2015 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Seth Flaxman", "Sharad Goel" ], "title": "Filter bubbles, echo chambers, and online news consumption", "venue": "Public Opinion Quarterly,", "year": 2015 }, { "authors": [ "Conor Friedersorf" ], "title": "Youtube extremism and the long tail: Unlimited selection is revealing ugly truths about what some americans want in their politics", "venue": null, "year": 2018 }, { "authors": [ "Oguzhan Gencoglu", "Mark van Gils", "Esin Guldogan", "Chamin Morikawa", "Mehmet Süzen", "Mathias Gruber", "Jussi Leinonen", "Heikki Huttunen" ], "title": "HARK side of deep learning - from grad student descent to automated machine learning", "venue": "ArXiv preprint,", "year": 2019 }, { "authors": [ "Mohammad Gheshlaghi Azar", "Alessandro Lazaric", "Emma Brunskill" ], "title": "Online stochastic optimization under correlated bandit feedback", "venue": "ArXiv preprint,", "year": 2014 }, { "authors": [ "Dong Gong", "Zhen Zhang", "Qinfeng Shi", "Anton van den Hengel", "Chunhua Shen", "Yanning Zhang" ], "title": "Learning an optimizer for image deconvolution", "venue": "ArXiv preprint,", "year": 2018 }, { "authors": [ "Ian J. Goodfellow" ], "title": "A research agenda: Dynamic models to defend against correlated attacks", "venue": "ArXiv preprint,", "year": 2019 }, { "authors": [ "Jacob Groshek", "Karolina Koc-Michalska" ], "title": "Helping populism win? Social media use, filter bubbles, and support for populist presidential candidates in the 2016 us election campaign", "venue": "Information, Communication & Society,", "year": 2017 }, { "authors": [ "Dylan Hadfield-Menell", "Smitha Milli", "Pieter Abbeel", "Stuart Russell", "Anca Dragan" ], "title": "Inverse reward design", "venue": "In Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "M. Jaderberg", "V. Dalibard", "S. Osindero", "W.M. Czarnecki", "J. Donahue", "A. Razavi", "O. Vinyals", "T. Green", "I. Dunning", "K. Simonyan", "C. Fernando", "K. Kavukcuoglu" ], "title": "Population Based Training of Neural Networks", "venue": "ArXiv preprint,", "year": 2017 }, { "authors": [ "Leslie Pack Kaelbling", "Michael L. Littman", "Anthony R. Cassandra" ], "title": "Planning and acting in partially observable stochastic domains", "venue": "Artificial Intelligence,", "year": 1998 }, { "authors": [ "A. Kalousis" ], "title": "Model selection via meta-learning: A comparative study", "venue": "In IEEE International Conference on Tools with Artificial Intelligence,", "year": 2000 }, { "authors": [ "Varol Kayhan" ], "title": "Confirmation bias: Roles of search engines and search contexts", "venue": "In International Conference on Information Systems,", "year": 2015 }, { "authors": [ "W. 
Bradley Knox", "Peter Stone" ], "title": "TAMER: Training an Agent Manually via Evaluative Reinforcement", "venue": "In IEEE 7th International Conference on Development and Learning,", "year": 2008 }, { "authors": [ "John Langford", "Tong Zhang" ], "title": "The epoch-greedy algorithm for multi-armed bandits with side information", "venue": "In Neural Information Processing Systems,", "year": 2008 }, { "authors": [ "Ed. Lee Howell" ], "title": "Digital wildfires in a hyperconnected world", "venue": "Global Risks 2013. World Economic Forum,", "year": 2013 }, { "authors": [ "Jan Leike", "Miljan Martic", "Victoria Krakovna", "Pedro A. Ortega", "Tom Everitt", "Andrew Lefrancq", "Laurent Orseau", "Shane Legg" ], "title": "AI safety gridworlds", "venue": "Technical report, DeepMind Safety Research,", "year": 2017 }, { "authors": [ "Jan Leike", "David Krueger", "Tom Everitt", "Miljan Martic", "Vishal Maini", "Shane Legg" ], "title": "Scalable agent alignment via reward modeling: a research direction", "venue": "Technical report, DeepMind Safety Research,", "year": 2018 }, { "authors": [ "Lihong Li", "Wei Chu", "John Langford", "Robert E. Schapire" ], "title": "A contextual-bandit approach to personalized news article recommendation", "venue": "In International Conference on World Wide Web,", "year": 2010 }, { "authors": [ "Jonathan Lorraine", "David Duvenaud" ], "title": "Stochastic hyperparameter optimization through hypernetworks", "venue": "ArXiv preprint,", "year": 2018 }, { "authors": [ "D.D. Luxton", "J.D. June", "J.M. Fairall" ], "title": "Social media and suicide: A public health perspective", "venue": "American journal of public health,", "year": 2012 }, { "authors": [ "Matthew MacKay", "Paul Vicol", "Jonathan Lorraine", "David Duvenaud", "Roger Grosse" ], "title": "Self-tuning networks: Bilevel optimization of hyperparameters using structured best-response functions", "venue": "ArXiv preprint,", "year": 2019 }, { "authors": [ "Luke Metz", "Niru Maheswaranathan", "Brian Cheung", "Jascha Sohl-Dickstein" ], "title": "Learning unsupervised learning rules", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Paul Mihailidis", "Samantha Viotty" ], "title": "Spreadable spectacle in digital culture: Civic expression, fake news, and the role of media literacies in \"post-fact\" society", "venue": "American Behavioural Scientist,", "year": 2017 }, { "authors": [ "Jose G. Moreno-Torres", "Troy Raeder", "RocíO Alaiz-RodríGuez", "Nitesh V. Chawla", "Francisco Herrera" ], "title": "A unifying view on dataset shift in classification", "venue": "Pattern Recognition,", "year": 2012 }, { "authors": [ "Tien T. Nguyen", "Pik-Mai Hui", "F. Maxwell Harper", "Loren Terveen", "Joseph A. Konstan" ], "title": "Exploring the filter bubble: The effect of using recommender systems on content diversity", "venue": "In Proceedings of the 23rd International Conference on World Wide Web,", "year": 2014 }, { "authors": [ "Safiya Umoja Noble" ], "title": "Algorithms of Oppression: How Search Engines Reinforce Racism", "venue": null, "year": 2018 }, { "authors": [ "Stephen M. Omohundro" ], "title": "The basic AI drives", "venue": "In Conference on Artificial General Intelligence,", "year": 2008 }, { "authors": [ "Pedro A. 
Ortega", "Vishal Maini" ], "title": "Building safe artificial intelligence: specification, robustness, and assurance, 2018", "venue": null, "year": 2018 }, { "authors": [ "Eli Pariser" ], "title": "The Filter Bubble: What the Internet Is Hiding from You", "venue": null, "year": 2011 }, { "authors": [ "Gordon Pennycook", "Tyrone D Cannon", "David G. Rand" ], "title": "Prior exposure increases perceived accuracy of fake news", "venue": "Journal of Experimental Psychology (forthcoming),", "year": 2019 }, { "authors": [ "Erich Prisner" ], "title": "Game Theory Through Examples", "venue": "Mathematical Association of America,", "year": 2014 }, { "authors": [ "Joaquin Quionero-Candela", "Masashi Sugiyama", "Anton Schwaighofer", "Neil D. Lawrence" ], "title": "Dataset Shift in Machine Learning", "venue": null, "year": 2009 }, { "authors": [ "Neil C. Rabinowitz" ], "title": "Meta-learners’ learning dynamics are unlike learners", "venue": "ArXiv preprint,", "year": 2019 }, { "authors": [ "Mengye Ren", "Eleni Triantafillou", "Sachin Ravi", "Jake Snell", "Kevin Swersky", "Joshua B. Tenenbaum", "Hugo Larochelle", "Richard S. Zemel" ], "title": "Meta-learning for semi-supervised few-shot classification", "venue": "ArXiv preprint,", "year": 2018 }, { "authors": [ "Manoel Horta Ribeiro", "Raphael Ottoni", "Robert West", "Virgílio A.F. Almeida", "Wagner Meira" ], "title": "Auditing radicalization pathways", "venue": "youtube,", "year": 2019 }, { "authors": [ "David Robson" ], "title": "The myth of the online echo chamber, 2018", "venue": "URL http://www.bbc.com/", "year": 2018 }, { "authors": [ "Kevin Roose" ], "title": "The making of a youtube radical", "venue": null, "year": 2019 }, { "authors": [ "Peter Schulam", "Suchi Saria" ], "title": "Reliable decision support using counterfactual models", "venue": "In Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Virag Shah", "Jose Blanchet", "Ramesh Johari" ], "title": "Bandit learning with positive externalities", "venue": "Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Chengcheng Shao", "Giovanni Luca Ciampaglia", "Onur Varol", "Kai-Cheng Yang", "Alessandro Flammini", "Filippo Menczer" ], "title": "The spread of low-credibility content by social bots", "venue": "Nature Communications,", "year": 2018 }, { "authors": [ "Jasper Snoek", "Hugo Larochelle", "Ryan P Adams" ], "title": "Practical Bayesian optimization of machine learning algorithms", "venue": "In Neural Information Processing Systems,", "year": 2012 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Introduction to Reinforcement Learning", "venue": null, "year": 1998 }, { "authors": [ "Richard S Sutton", "Anna Koop", "David Silver" ], "title": "On the role of tracking in stationary environments", "venue": "In International conference on Machine learning,", "year": 2007 }, { "authors": [ "Chih-Chun Wang", "Sanjeev R Kulkarni", "H Vincent Poor" ], "title": "Bandit problems with side observations", "venue": "IEEE Transactions on Automatic Control,", "year": 2005 }, { "authors": [ "Ronald J. Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "In Machine Learning,", "year": 1992 }, { "authors": [ "media. 
Shao" ], "title": "2018) examine the role of social bots in spreading fake news", "venue": null, "year": 2018 }, { "authors": [ "Pennycook" ], "title": "2019) examine the role of the illusory truth effect in fake news", "venue": null, "year": 2019 }, { "authors": [ "searches. Nguyen" ], "title": "similarly study the effect of recommender systems on individual users", "venue": null, "year": 2014 } ]
[ { "heading": "1 INTRODUCTION", "text": "Consider a household robot, one of whose duties is to predict when its owner will ask it for coffee. We would like the robot to notice its owners preference for having coffee in the morning, but we would not want the robot to prevent its owner from sleeping late just because the robot is unsure if the owner will still want coffee if they wake up in the afternoon. While doing so would result in a better prediction, such a strategy is cheating - by changing the task rather than solving the task as intended. More specifically, waking the owner is an example of what we call self-induced distributional shift (SIDS), as it changes the distribution of inputs to the robot’s coffee prediction algorithm. SIDS is not necessarily undesirable: consider an algorithm meant to alert drivers of imminent collisions. If it works well, such a system will help drivers avoid crashing, thus making self-refuting predictions which result in SIDS. What separates this example from the coffee robot that disturbs its owner’s sleep? The collision-alert system alters its data distribution in a way that is aligned with the goal of fewer collisions, whereas the coffee robot’s strategy results in changes that are misaligned with the goal of good coffee-timing (Leike et al., 2018).\nThis makes it an example of a specification problem (Leike et al., 2017; Ortega & Maini, 2018): we did not intend the robot to ensure its predictions were good using such a strategy, yet a naive specification (e.g. maximizing likelihood) incentivized that strategy. Ideally, we’d like to specify which kinds of SIDS are acceptable, i.e. the means by which a learner is intended or allowed to influence the world in order to achieve its’ ends (i.e. increase its performance), but doing so in full generality can be difficult. An alternative, more tractable problem which we address in this work is to accept the possibility of SIDS, but to carefully manage incentives for SIDS.\nInformally, a learner has an incentive to behave in a certain way when doing so can increase its performance (e.g. higher accuracy, or increased reward). When meta-learning optimizes over a longer time horizon, or using a different algorithm, than the original “inner loop” learner, this can reveal new incentives for SIDS that were not apparent in the original learner’s behavior. We call these hidden incentives for distributional shift (HIDS), and note that keeping HIDS hidden can be important for achieving aligned behavior. Notably, even in the absence of an explicit meta-learning algorithm machine learning practitioners employ “manual meta-learning”, also called “grad student descent” (Gencoglu et al., 2019) in the iterative process of algorithm design, model selection, hyperparameter\ntuning, etc. Considered in this broader sense, meta-learning seems indispensable, making HIDS relevant for all machine learning practitioners.\nA real-world setting where incentives for SIDS could be problematic is content recommendation: algorithmically selecting which media or products to display to the users of a service. For example (see Figure 1), a profit-driven algorithm might engage in upselling: persuading users to purchase or click on items they originally had no interest in. Recent media reports have described ‘engagement’- (click or view-time) driven recommendation services such as YouTube contributing to viewer radicalization (Roose, 2019; Friedersorf, 2018). 
A recent study supports these claims, finding that many YouTube users “systematically migrate from commenting exclusively on milder content to commenting on more extreme content” (Ribeiro et al., 2019).1 See Appendix 1 for a review of real-world issues related to content recommendation.
1 The authors argue that commenting on a video is a good proxy for supporting its viewpoint, since only 5 out of 900 comments they checked opposed the viewpoint of the video commented on.
Our goal in this work is to show both (1) that meta-learning can reveal HIDS, and (2) that this means applying meta-learning to a learning scenario not only changes the way in which solutions are searched for, but also which solutions are ultimately found. Our contributions are as follows:
1. We identify and define the phenomena of SIDS (self-induced distributional shift) and HIDS (hidden incentives for distributional shift).
2. We create two simple environments for identifying and studying HIDS: a “unit test” based on the Prisoner’s Dilemma, and a content recommendation environment which disentangles two types of SIDS.
3. We demonstrate experimentally that meta-learning reveals HIDS in these environments, yielding agents that achieve higher performance via SIDS, but may follow sub-optimal policies.
4. We propose and test a mitigation strategy based on swapping learners between environments in order to reduce incentives for SIDS." }, { "heading": "2 BACKGROUND", "text": "" }, { "heading": "2.1 DISTRIBUTIONAL SHIFT AND CONTENT RECOMMENDATION", "text": "In general, distributional shift refers to change of the data distribution over time. In supervised learning with data x and labels y, this can be more specifically described as dataset shift: change in the joint distribution of P(x, y) between the training and test sets (Moreno-Torres et al., 2012; Quionero-Candela et al., 2009). As identified by Moreno-Torres et al. (2012), two common kinds of distributional shift are:
1. Covariate shift: changing P(x). In the context of content recommendation, this corresponds to changing the user base of the recommendation system. For instance, a media outlet which publishes inflammatory content may appeal to users with extreme views while alienating more moderate users. This self-selection effect (Kayhan, 2015) may appear to a recommendation system as an increase in performance, leading to a feedback effect, as previously noted by Shah et al. (2018). This type of feedback effect has been identified as contributing to filter bubbles and radicalization (Pariser, 2011; Kayhan, 2015). We observe this type of change in our experiments, as shown in Figure 1.
2. Concept shift: changing P(y|x). In the context of content recommendation, this corresponds to changing a given user’s interest in different kinds of content. For example, exposure to a fake news story has been shown to increase the perceived accuracy of (and thus presumably the interest in) the story, an example of the illusory truth effect (Pennycook et al., 2019)." }, { "heading": "2.2 META-LEARNING AND POPULATION BASED TRAINING", "text": "Meta-learning is the use of machine learning techniques to learn machine learning algorithms. This generally involves instantiating multiple learning scenarios which run in an inner loop (IL), while an outer loop (OL) uses the outcomes of the inner loop(s) as data-points from which to learn which learning algorithms are most effective (Metz et al., 2019). The number of IL steps per OL step is called the interval of the OL.
Many recent works have focused on multi-task meta-learning, where the OL seeks to find learning rules that generalize to unseen tasks by training the IL on a distribution of tasks - this is often used as an approach to one- or few-shot learning, e.g. Finn et al. (2017); Ren et al. (2018), or transfer learning, e.g. Andrychowicz et al. (2016). Single-task meta-learning includes learning an optimizer for a single task, e.g. Gong et al. (2018), adaptive methods for selecting models, e.g. Kalousis (2000), or for setting hyperparameters, e.g. Snoek et al. (2012). For simplicity, in this initial study we focus on single-task meta-learning.
Population-based training (PBT) (Jaderberg et al., 2017) is a meta-learning algorithm that trains multiple learners $L_1, \ldots, L_n$ in parallel, after each interval (T steps of IL) applying an evolutionary OL step which consists of:
1. Evaluate the performance of each learner,
2. Replace both parameters and hyperparameters of low-performing (bottom 20%) learners with copies of those from randomly chosen high-performing (top 20%) learners (EXPLOIT),
3. Randomly perturb the hyperparameters (but not the parameters) of all learners (EXPLORE).
(A minimal code sketch of this outer-loop step is given at the end of this background section.)
Two distinctive features of PBT (compared with other hyperoptimization methods, such as Bayesian optimization (Snoek et al., 2012)) are notable for us because they give the OL more control over the learning process:
1. PBT applies OL optimization to parameters, not just hyperparameters. This means the OL can directly select for parameters which lead to SIDS, instead of only being able to influence parameter values via hyperparameters, which may be much more limiting.
2. PBT uses multiple OL steps within a single training run. This gives the OL more overall influence over the dynamics and outcome of the training run." }, { "heading": "2.3 SPECIFICATION AND INCENTIVES", "text": "We define specification as the process of a (typically human) designer instantiating a learning algorithm in a real-world learning scenario (see Appendix 2 for formal definitions). A specification problem occurs when the outcome of a learning scenario differs from the intentions of the designer. Specification is often viewed as concerned solely with the choice of performance metric, and indeed researchers often select learners solely on the basis of performance. However, our work emphasizes that the choice of learning algorithm is also an aspect of specification, as noted by Ortega & Maini (2018).
In particular, we consider this choice from the point of view of incentives, similarly to Everitt et al. (2019). Their work focused on identifying which incentives exist, but we note that incentives may exist and yet not be pursued by a learner; for example, in supervised learning, there is an incentive to overfit the test set in order to increase test performance, but algorithms are designed to not do that. We thus distinguish between the existence of an incentive in a learner’s operational context and its presence in a learner’s objective, or revealed specification (Ortega & Maini, 2018), which is what a learner is “trying” to accomplish. Given an incentive that is present in the operational context, we say it is hidden from a learner if it does not appear in the objective, and revealed if it does."
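The PBT outer-loop step described in Sec. 2.2 is mechanically simple. Below is a minimal sketch written from the three-step description above, not from the authors’ released code; the `Learner` container, the multiplicative perturbation factor, and all names are our own assumptions.

```python
import copy
import random

class Learner:
    """Toy container for one member of the PBT population (hypothetical)."""
    def __init__(self, params, hyperparams):
        self.params = params            # e.g. policy weights
        self.hyperparams = hyperparams  # e.g. {"lr": 0.1}

def pbt_step(population, scores, perturb=1.2):
    """One outer-loop step: EXPLOIT (bottom 20% copy top 20%), then EXPLORE.

    `scores[i]` is the performance of `population[i]` over the last interval.
    """
    n = len(population)
    order = sorted(range(n), key=lambda i: scores[i])  # ascending by score
    k = max(1, n // 5)
    bottom, top = order[:k], order[-k:]
    # EXPLOIT: low performers copy both params and hyperparams of a random
    # high performer (this is what lets PBT select directly on parameters).
    for i in bottom:
        j = random.choice(top)
        population[i].params = copy.deepcopy(population[j].params)
        population[i].hyperparams = dict(population[j].hyperparams)
    # EXPLORE: perturb hyperparameters (but not parameters) of all learners.
    for learner in population:
        for name in learner.hyperparams:
            factor = perturb if random.random() < 0.5 else 1.0 / perturb
            learner.hyperparams[name] *= factor
    return population
```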
}, { "heading": "3 SELF-INDUCED DISTRIBUTION SHIFT (SIDS) AND HIDDEN INCENTIVES FOR DISTRIBUTIONAL SHIFT (HIDS)", "text": "" }, { "heading": "3.1 SIDS", "text": "To formally define SIDS, we assume there exists some reference data distribution, which is the distribution of data that the learner would encounter “by default”. This is a standard assumption\nfor classification problems (Moreno-Torres et al., 2012); in reinforcement learning, the reference distribution could be the initial distribution over states, or the distribution over states which results from following some reference policy. We say that SIDS occurs whenever the behavior (e.g. actions or predictions, or mere existence), of the learner leads it to encounter a distribution other than this reference distribution. This definition excludes distributional shift which would happen even if the learner were not present - e.g. for a crash prediction algorithm trained on data from the summer, snowy roads in the winter are an example of distributional shift, but not self-induced distributional shift (SIDS).\nIn order to highlight the phenomenon of SIDS, we distinguish between the (often implicit) assumptions of the machine learning algorithm (e.g. the i.i.d. assumption), vs. the model of the environments in which the algorithm is trained/deployed (e.g. our synthetic content recommendation environment). This is formalized in Appendix 2. This distinction allows us to explicitly model situations in which the assumptions of a learning algorithm are violated. For instance, in Sec. 4.2 we explicitly model a partially observable environment whose underlying state determines the data distribution of the examples that a standard supervised learning algorithm observes at each time-step." }, { "heading": "3.2 HIDS", "text": "Referring to Section 2.3, we say that incentives for SIDS are hidden if they are not part of the objective of a learner. Like SIDS, HIDS are not necessarily good or bad. Rather, our point is that designers should be cognizant of which incentives exist, and whether they are hidden or revealed to a learner. More specifically, changing the learning algorithm can reveal incentives that were previously hidden, leading learners to adopt unanticipated and potentially undesirable strategies for maximizing performance. For instance, by optimizing for performance after a sequence of inner loop updates, meta-learning can fail to distinguish between solving the task as intended and making the task easier via SIDS, and thus can reveal hidden incentives for distributional shift (HIDS).\nIn many settings, such as reinforcement learning (Sutton & Barto, 1998), learners are intended to increase performance via SIDS. For prediction tasks, on the other hand, learners are typically not meant to seek distributional shift, even if there is an incentive to do so, as we illustrate with the coffee robot example in the introduction. And even in reinforcement learning, SIDS can be undesirable, as we illustrate in Sec. 4.1." }, { "heading": "3.3 CONTEXT SWAPPING: A MITIGATION TECHNIQUE", "text": "We propose a technique called context swapping for mitigating HIDS revealed by meta-learning. The idea of context swapping is for learners to experience a “natural” distribution 2 of trajectories, P (τ), as compared to the “unnatural” distributions which can result when meta-learning is applied. 
Formally, we can characterize the natural distribution as:
$P(\tau) = \int P(L)\, P_\mu(\tau \mid L)\, dL \qquad (1)$
where $L$ is a learner, selected at random according to a fixed distribution $P(L)$. Here, a learner is a fully described learning algorithm3 and $P_\mu(\tau \mid L)$ is the distribution over trajectories that results from running the algorithm in an environment $\mu$. Importantly, $L$ is sampled from $P(L)$, instead of being chosen via meta-learning. To provide learners with a distribution approximating $P(\tau)$, context swapping relies on training a population of $N$ learners $\{L_1, \ldots, L_N\}$ in parallel. Each learner inhabits one of $N$ copies $E_{1:N} \doteq \{E_1, \ldots, E_N\}$ of the same environment $\mu$. The $E_{1:N}$ share the same initial state distribution and time-step, but may be in different states on any particular time-step.4
The technique of context swapping consists in shuffling the learners through the different copies of the environment, so which copy a given learner inhabits can change at any (or every) time-step. In this work, we use a deterministic permutation of learners against environment copies, so that learner $L_i$ acts in copy $E_j$ on time-step $t$ if and only if $j = (i + t) \bmod N$. When $N$ is larger than the interval of the OL optimizer, each learner will inhabit each copy for at most a single time-step before an OL step is applied. This removes the incentive for learners to manipulate the future states they encounter, although they may still have incentives to influence each others’ future states. Under the assumption that different copies of the environment do not influence each other, this technique can address HIDS in practice, as we show in Sec. 4.1.1.
2 Note that this concept is different from the reference distribution mentioned in 3.1; the “natural” (non-meta-learned) distribution might still exhibit SIDS, and does so in both of the environments we introduce.
3 For example, $L$ might completely specify a deep learning algorithm including the choice of initial parameters, and $P(L)$ might be the distribution induced by the randomness of the initialization.
4 Note that the learners consist entirely of software; any hardware (e.g. a robot body) would be considered part of an environment, as is typical in reinforcement learning (Sutton & Barto, 1998)." }, { "heading": "4 EXPERIMENTS", "text": "To clearly introduce the concepts of SIDS and HIDS, we opt for simple illustrative environments. Code for our experiments is available at https://anonymous.4open.science/r/66c5e3a4-2a45-4d71-ae58-d097e12ebae1/.
In Section 4.1, we introduce a “unit test” for HIDS. Our primary goal with this unit test is for the reader to walk away with a crisp understanding of HIDS. Put simply, our experiments show that you can have a learner which behaves as intended, and just by using meta-learning (e.g. PBT), and without changing the performance metric (e.g. loss or rewards), the learner’s objective can change completely, leading to unintended behavior. On the practical side, the unit test can be used to diagnose and compare learning algorithms. We show that context swapping is an effective mitigation technique in this environment.
In Section 4.2, we model a content recommendation system. The goal of these experiments is to provide a practical understanding of different types of SIDS (concept shift and covariate shift), and to demonstrate how HIDS could create issues for real-world recommender systems. We emphasize that SIDS takes place in this environment by construction.
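Before presenting results, the context-swapping schedule of Sec. 3.3 can be made concrete. The sketch below is our illustrative reading of the formula $j = (i + t) \bmod N$; the learner/environment interface (`act`, `update`, `step`, `reset`) is a placeholder assumption, not taken from the released code.

```python
def context_swapping_rollout(learners, envs, num_steps):
    """Run N learners over N environment copies, cycling the assignment.

    Learner i acts in copy j at step t iff j == (i + t) % N, so no learner
    stays in one copy long enough to benefit from shifting its state.
    Assumes each env has .reset() and .step(action) -> (obs, reward), and
    each learner has .act(obs) and .update(obs, action, reward).
    """
    n = len(learners)
    assert len(envs) == n
    observations = [env.reset() for env in envs]
    for t in range(num_steps):
        for i, learner in enumerate(learners):
            j = (i + t) % n                 # copy inhabited at this step
            action = learner.act(observations[j])
            obs, reward = envs[j].step(action)
            learner.update(observations[j], action, reward)
            observations[j] = obs
    return learners
```

Since $i \mapsto (i + t) \bmod N$ is a bijection for every $t$, each copy receives exactly one learner per time-step, matching the description in the text.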
The point of our experiments is that meta-learning can increase the rate and/or extent of SIDS. Context swapping is not effective in this environment, highlighting the need for alternatives." }, { "heading": "4.1 HIDS UNIT TEST", "text": "We test whether algorithms reveal hidden incentives using a version of the prisoner’s dilemma (Prisner, 2014) where an agent plays each round against its past self. The reward function is presented in Table 1. An agent in this environment has a long-term, non-myopic incentive for cooperation (with its future self), but a current-time-step, myopic incentive for defection (from its future self). The unit test evaluates whether a learning algorithm reveals the non-myopic incentive even when the agent is meant to optimize for the present reward only (i.e. uses discount rate γ = 0). Naively, we’d expect the non-myopic incentive to be hidden from the agent in this case, and for the agent to consistently defect; learning algorithms that do so pass the test. But some learning algorithms also fail the unit test, revealing the incentive for the agent to cooperate with its future self. While aiming for myopic behavior may seem odd, myopic learners have no incentives to cause distributional shift, since it can only improve future performance. And while making learners myopic is an extreme approach to managing HIDS, doing so proves to be non-trivial.
Formally, this environment is not a 2x2 game (like the original prisoner’s dilemma); it is a partially observable Markov Decision Process (POMDP) (Åström, 1965; Kaelbling et al., 1998):
$s_t = a_{t-1}$
$o_t = \{\}$
$a_t \in \{\text{defect}, \text{cooperate}\}$
$P(s_t, a_t) = a_t$
$R(s_t, a_t) = \mathbb{I}(s_t = \text{cooperate}) + \beta\, \mathbb{I}(a_t = \text{cooperate}) - 1/2$
where $\mathbb{I}$ is an indicator function, and β = −1/2 is a parameter controlling the alignment of incentives (see Appendix 3.1 for an exploration of different β values). The initial state is sampled as $s_0 \sim U(\text{defect}, \text{cooperate})$." }, { "heading": "4.1.1 HIDS UNIT TEST EXPERIMENTAL RESULTS AND DISCUSSION", "text": "We first show that agents trained with PBT fail the unit test more often when compared with baseline agents that do not use meta-learning. We use REINFORCE (Williams, 1992) with discount factor γ = 0 as the IL optimizer for these experiments. Policies are represented by a single real-valued parameter θ (initialized as θ ∼ N(0, 1)) passed through a sigmoid whose output represents P(a_t = defect). PBT (with default settings, see Section 2.2) is used to tune the learning rate, with reward on the final time-step of the interval as the performance measure for PBT. We initialize the learning rate log-uniformly between 0.01 and 1.0 for all experiments (whether using PBT or not). We expect and confirm that the following two factors lead to higher rates of failure (cooperation):
1. Shorter intervals: These give the OL more opportunities to influence the population.
2. Larger populations: These make outliers with exceptional non-myopic performance more likely, and OL makes them likely to survive and propagate.
The baseline (no PBT) agents pass the unit test: P(cooperate) (averaged over agents) is close to 0% - see blue curves in Figure 2.
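For reference, the unit-test environment itself reduces to a few lines of code. The class below is our transcription of the equations above with β = −1/2; the 0/1 action encoding and the `reset`/`step` interface are our assumptions.

```python
import random

BETA = -0.5  # alignment parameter from Sec. 4.1 (incentive-opposed setting)

class SelfPrisonersDilemma:
    """Agent plays each round against its own previous action (Table 1)."""
    def reset(self):
        self.state = random.choice([0, 1])  # 0 = defect, 1 = cooperate
        return {}                           # o_t = {}: nothing is observed

    def step(self, action):                 # action in {0, 1}
        # R(s_t, a_t) = I(s_t = cooperate) + beta * I(a_t = cooperate) - 1/2
        reward = float(self.state == 1) + BETA * float(action == 1) - 0.5
        self.state = action                 # s_{t+1} = a_t
        return {}, reward
```

Under this transcription, defecting from a cooperate state yields +1/2, mutual defection yields −1/2, and steady cooperation yields 0, which matches the worked example in Appendix 3.1.2.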
However, despite the disincentive for cooperation and the myopic inner loop, agents trained with PBT and large populations fail the unit test: P(cooperate) is around 90% - see the top right subplot of Figure 2.
Furthermore, we verify that context swapping significantly mitigates the effect of HIDS, decreasing undesirable cooperate behaviour to near-baseline levels - see bottom rows of Figure 2. This effect can be explained as follows: Because context swapping transfers the benefits of a learner’s action to the next learner to inhabit that environment, it increases that learner’s fitness, and thereby reduces the relative fitness (as evaluated by PBT’s EXPLOIT step) of the non-myopic cooperate behaviour. We observe some interesting exceptions with the combination of small populations and short PBT intervals. Although context swapping still significantly decreases the effect of HIDS, non-myopic cooperate behaviour is observed as much as 20% of the time (for #learners=10, T = 1; see bottom-left plot).
We also observe that PBT reveals HIDS even when T = 1. We provide a detailed explanation for how this might happen in Appendix 3.1.2. But we also note that for T = 1, the explanation that PBT operates on a longer time horizon than the inner loop does not apply, making it especially surprising that HIDS are revealed. Thus we hypothesize that there are at least 2 mechanisms by which PBT is revealing HIDS: (1) optimizing over a longer time-scale, and (2) picking up on the correlation between an agent’s current policy and the underlying state. Mechanism (2) can be explained informally as reasoning of the form: “If I’m cooperating, then I was probably cooperating on the last time-step as well, so my reward should be higher”. As support for these hypotheses, we run control experiments identifying two algorithms (each sharing only one of these properties) that can fail the unit test (although context swapping remains effective):
1. Optimizing over a longer time-scale: replacing PBT with REINFORCE as an outer-loop optimizer. The outer loop optimizes the parameters to maximize the summed reward of the last T time-steps. As with PBT, we observe non-myopic behavior, but now only when T > 1. This supports our hypothesis that the exploitation of HIDS is due not to PBT in particular, but just to the introduction of sufficiently powerful meta-learning. See Figure 2 for results.
2. Exploiting correlation: Q-learning with γ = 0, an ε = 0.1-greedy behavior policy, and no meta-learning. If the two states were equally likely, the Q-values would be the average of the values in each column in Table 1, so the estimated Q(defect) would be larger. But the ε-greedy policy correlates states and actions, so the top-left and bottom-right entries carry more weight in the estimates, sometimes causing Q(defect) ≈ Q(cooperate) and persistent non-myopic behavior. See Figure 3 for results, Appendix 3.1.4 for more results, and Appendix 3.1.3 for important experimental details." }, { "heading": "4.2 HIDS IN CONTENT RECOMMENDATION", "text": "We now present a toy environment for modeling content recommendation of news articles, which includes the potential for SIDS by incorporating the mechanisms mentioned in Sec. 2.1, discussed as contributing factors to the problems of fake news and filter bubbles. Specifically, the environment assumes that presenting an article to a user can influence (1) their interest in similar articles, and (2) their propensity to use the recommendation service.
These correspond to modeling self-induced concept shift of users, and self-induced covariate shift of the user base, respectively (see Sec. 2.1). The environment is designed to be as simple as possible while incorporating both of these effects.
This environment includes the following components, which change over (discrete) time: User type: $x^t$, Article type: $y^t$, User interests: $W^t$ (propensity for users of each type to click on articles of each type), and User loyalty: $g^t$ (propensity for users of each type to use the platform). At each time step $t$, a user $x^t$ is sampled from a categorical distribution, based on the loyalty of the different user types. The recommendation system (a classifier) selects which type of article to present in the top position, and finally the user ‘clicks’ an article $y^t$, according to their interests.
User loyalty for user type $x^t$ undergoes covariate shift: in accordance with the self-selection effect, $g^t$ increases or decreases proportionally to that user type’s interest in the top article. The interests of user type $x^t$ (represented by a column of $W^t$) also change, undergoing concept shift; in accordance with the illusory truth effect, their interest in the topic of the top article (as chosen by the recommender system) always increases. The update rates of $g^t, W^t$ are specified by α1, α2.
Formally, this environment is similar to a POMDP\R, i.e. a POMDP with no reward function, also known as a world model (Armstrong & O’Rourke, 2017; Hadfield-Menell et al., 2017); the difference is that the learner observes the input ($o^t_{\mathrm{pre}}$) before acting and only observes the target ($o^t_{\mathrm{post}}$) after acting. The states, observations, and actions are given below. For further details on this environment, including the state transition function, see Appendix 3.2.1.
$s^t = (g^t, W^t, x^t, y^t)$
$o^t_{\mathrm{pre}}, a^t, o^t_{\mathrm{post}} = (x^t, \hat{y}^t, y^t)$" }, { "heading": "4.2.1 CONTENT RECOMMENDATION EXPERIMENTAL RESULTS AND DISCUSSION", "text": "Our recommender system is a 1-layer MLP trained with SGD-momentum. Actions are sampled from the MLP’s predictive distribution. For PBT, we use T = 10 and 20 agents, and use accuracy to evaluate performance. We run 20 trials, and match random seeds for trials with and without PBT. See Appendix 3.2.2 for full experimental details.
We find that PBT yields significant improvements in training time and accuracy, but also greater distributional shift; see Figure 4. User base and user interests both change faster with PBT, and in particular user interests change more overall. We observe that the distributions over user types typically saturate (to a single user type) after a few hundred time-steps (Figure 1; Figure 4, Right). We run long enough to reach such states, to demonstrate that the increase in SIDS from PBT is not transitory. The environment has a number of free parameters, and our results are qualitatively consistent so long as (1) the initial user distribution is approximately uniform, and (2) the covariate shift rate (α1) is faster than the concept shift rate (α2). See Appendix 3.2.4 for details.
We measure concept shift (change in P(y|x)) as the cosine distance between each user type’s initial and current interest vectors. And we measure covariate shift (change in P(x)) as the KL-divergence between the current and initial user distributions, parametrized by $g^t$ and $g^1$, respectively. In Figure 5, we plot concept shift and covariate shift as a function of accuracy.
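Both shift measures are cheap to compute from the environment state. A minimal sketch follows, assuming the interest vectors and loyalty logits are 1-D numpy arrays; the function names are ours, while the cosine-distance and KL definitions follow the text.

```python
import numpy as np

def concept_shift(w_init, w_now):
    """Cosine distance between a user type's initial and current interests."""
    cos = np.dot(w_init, w_now) / (np.linalg.norm(w_init) * np.linalg.norm(w_now))
    return 1.0 - cos

def covariate_shift(g_init, g_now):
    """KL(current user distribution || initial); distributions = softmax(g)."""
    def softmax(g):
        e = np.exp(g - g.max())
        return e / e.sum()
    p, q = softmax(g_now), softmax(g_init)
    return float(np.sum(p * np.log(p / q)))
```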
We observe that for both types of SIDS, at low levels of accuracy PBT actually causes less shift than occurs in baseline agents; HIDS are only observed for accuracies above 60%. This suggests that only relatively strong performers are able to pick up on the HIDS revealed by PBT. See Figure 5." }, { "heading": "5 RELATED WORK", "text": "SIDS in practice: We introduce the term SIDS, but we are far from the first to study such problems. Caruana et al. (2015) provide an example of asthmatic patients having lower predicted risk of pneumonia. Treating asthmatics with pneumonia less aggressively on this basis would be an example of harmful SIDS; the reason they had lower pneumonia risk was because they had received more aggressive lung-related care already. Schulam & Saria (2017) note that such predictive models are commonly used to inform decision-making, and propose modeling counterfactuals (e.g. “how would this patient fare with less aggressive treatment”) to avoid making such (potentially) self-refuting predictions. While their goal is to make accurate predictions in the presence of SIDS, our goal is to identify and manage incentives for SIDS. Environments with agents that react to a learner (such as adversaries) naturally produce SIDS. Goodfellow (2019) argues that adversarial defenses that do not account for distributional shift are critically flawed. Non-i.i.d. bandits: Contextual bandits (Wang et al., 2005; Langford & Zhang, 2008) are frequently discussed as an approach to content recommendation (Li et al., 2010). While bandit algorithms typically make the i.i.d. assumption, counter-examples exist (Gheshlaghi Azar et al., 2014; Shah et al., 2018); most famously, adversarial bandits (Auer et al., 1995). Closest to our work is Shah et al. (2018), who consider self-induced covariate shift in the context of multi-armed bandits. Our task in Sec. 4.2 is similar to their problem statement, but more general in that we include user features, thus disentangling covariate shift and concept shift. Our motivation is also different: Shah et al. (2018) seek to exploit SIDS, whereas we aim to avoid hidden incentives for SIDS. Safety and incentives: Understanding and managing the incentives of learners is also a focus of Armstrong & O’Rourke (2017); Everitt (2018); Everitt et al. (2019); Cohen et al. (2019). Emergent incentives to influence the world (such as HIDS) are at the heart of many concerns about the safety of advanced AI systems (Omohundro, 2008; Bostrom, 2014). Yet it is unclear if or when machine learning systems might pursue such “instrumental goals” in practice. Indeed, Drexler (2019) argues that machine learning should and typically does use time- and resource-bounded problem statements, making dangerous instrumental goals less likely to emerge. The same idea underlies several more concrete approaches to building safe superintelligent AI systems: myopic reinforcement learning (Leike et al., 2018; Knox & Stone, 2008; Cohen et al., 2019) and its application in iterated amplification (Christiano et al., 2018; Cotra, 2017) and question answering systems (Everitt et al., 2019; Armstrong & O’Rourke, 2017). Managing HIDS seems critically important for the safety of these approaches: they rely on enforcing myopia, which our experiments show is not straightforward. HIDS and meta-learning: As far as we know, our work is the first to consider the problem of HIDS, or its relation to meta-learning. A few previous works have some relevance or resemblance.
Rabinowitz (2019) documents qualitative differences in learning behavior when meta-learning is applied. MacKay et al. (2019) and Lorraine & Duvenaud (2018) view meta-learning as a bilevel optimization problem, with the inner loop playing a best-response to the outer loop. In our work, the inner loop is unable to achieve such best-response behavior; the outer loop is too powerful (see Fig. 2). Finally, Sutton et al. (2007) note that meta-learning can change learning behavior in a way that improves performance by preventing convergence of the inner loop. Their goal of improving performance by “tracking” local characteristics of the environment is orthogonal to our goal of managing incentives to control such local characteristics." }, { "heading": "6 DISCUSSION AND CONCLUSION", "text": "We have identified the phenomenon of self-induced distributional shift (SIDS), and the problems that can arise when there are hidden incentives for algorithms to induce distributional shift (HIDS). Our work highlights the interdisciplinary nature of issues with real-world deployment of ML systems - we show how HIDS could play a role in important technosocial issues like filter bubbles and the propagation of fake news. There are a number of potential implications of our work:
1. When HIDS are a concern, our methodology and environments can be used to help diagnose whether and to what extent the final performance/behavior of a learner is due to SIDS and/or incentives for SIDS, i.e. to quantify their influence on that learner.
2. Comparing this quantitative analysis for different algorithms could help us understand which features of algorithms affect their propensity to reveal HIDS, and aid in the development of safer and more robust algorithms.
3. Characterizing and identifying HIDS in these tests is a first step to analyzing and mitigating other (problematic) incentives, as well as to developing theoretical understanding of incentives.
Broadly speaking, our work emphasizes that the choice of machine learning algorithm plays an important role in specification, independently of the choice of performance metric. A learner can use SIDS to increase performance according to the intended performance metric, and yet still behave in an undesirable way, if we did not intend the learner to improve performance by that method. In other words, performance metrics are incomplete specifications: they only specify our goals or ends, while our choice of learning algorithm plays a role in specifying the means by which we intend a learner to achieve those ends. With increasing deployment of ML algorithms in daily life, we believe that (1) understanding incentives and (2) specifying desired/allowed means of improving performance are important avenues of future work to ensure fair, robust, and safe outcomes." }, { "heading": "1 CONTENT RECOMMENDATION IN THE WILD", "text": "Filter bubbles, the spread of fake news, and other techno-social issues are widely reported to be responsible for the rise of populism (Groshek & Koc-Michalska, 2017), increase in racism and prejudice against immigrants and refugees (Noble, 2018), increase in social isolation and suicide (Luxton et al., 2012), and, particularly with reference to the 2016 US elections, are decried as threatening the foundations of democracy (El-Bermawy, 2016).
Even in 2013, well before the 2016 American elections, a World Economic Forum report identified these problems as a global crisis (Lee Howell, 2013).
We focus on two related issues in which content recommendation algorithms play a role: fake news and filter bubbles." }, { "heading": "1.1 FAKE NEWS", "text": "Fake news (also called false news or junk news) is an extreme version of yellow journalism, propaganda, or clickbait, in which media that is ostensibly providing information focuses on being eye-catching or appealing, at the expense of the quality of research and exposition of factual information. Fake news is distinguished by being specifically and deliberately created to spread falsehoods or misinformation (Merriam-Webster, 2017; Mihailidis & Viotty, 2017).
Why does fake news spread? It may at first seem the solution is simply to educate people about the truth, but research tells us the problem is more multifaceted and insidious, due to a combination of related biases and cognitive effects including confirmation bias (people are more likely to believe things that fit with their existing beliefs), priming (exposure to information unconsciously influences the processing of subsequent information, i.e. seeing something in a credible context makes things seem more credible) and the illusory truth effect (i.e. people are more likely to believe something simply if they are told it is true).
Allcott & Gentzkow (2017) track about 150 fake news stories during the 2016 US election, and find the average American adult saw 1-2 fake news stories, just over half believed the story was true, and likelihood of believing fake news increased with ideological segregation (polarization) of their social media. Shao et al. (2018) examine the role of social bots in spreading fake news by analyzing 14 million Twitter messages. They find that bots are far more likely than humans to spread misinformation, and that success of a fake news story (in terms of human retweets) was heavily dependent on whether bots had shared the story.
Pennycook et al. (2019) examine the role of the illusory truth effect in fake news. They find that even a single exposure to a news story makes people more likely to believe that it is true, and repeat viewings increase this likelihood. They find that this is not true for extremely implausible statements (e.g. “the world is a perfect cube”), but that “only a small degree of potential plausibility is sufficient for repetition to increase perceived accuracy” of the story. The situation is further complicated by people’s inability to distinguish promoted content from real news - Amazeen & Wojdynski (2018) find that fewer than 1 in 10 people were able to tell when content was an advertisement, even when it was explicitly labelled as such. Similarly, Fazio et al. (2015) find that repeated exposure to incorrect trivia makes people more likely to believe it, even when they are later able to identify the trivia as incorrect." }, { "heading": "1.2 FILTER BUBBLES", "text": "Filter bubbles, a term coined and popularized by Pariser (2011), are created by positive or negative feedback loops which encourage users or groups of users towards increasing within-group similarity, while driving up between-group dissimilarity. The curation of this echo chamber is called self-selection (people are more likely to look for or select things that fit their existing preferences), and favours what Techopedia (2018) calls intellectual isolation.
In the context of social and political opinions, this is often called the polarization effect (Wikipedia contributors, 2018).
Filter bubbles can be encouraged by algorithms in two main ways. The first is the most commonly described: simply by showing content that is similar to what a user has already searched for, search or recommender systems create a positive feedback loop of increasingly-similar content (Pariser, 2011; Kayhan, 2015). The second way is similar but opposite - if the predictions of an algorithm are good for a certain group of people, but bad for others, the algorithm can do better on its metrics by driving hard-to-predict users away. Then new users to the site will either be turned off entirely, or see an artificially homogenous community of like-minded peers, a phenomenon Shah et al. (2018) call positive externalities.
In a study of 50,000 US-based internet users, Flaxman & Goel (2015) find that two things increase with social media and search engine use: (1) exposure of an individual to opposing or different viewpoints, and (2) mean ideological distance between users. Many studies cite the first result as evidence of the benefits of internet and social media (Robson, 2018; Bakshy et al., 2015), but the correlation of exposure with ideological distances demonstrates that exposure is not enough, and might even be counterproductive.
The results of Facebook’s own study on filter bubbles show that the impact of the news feed algorithm on filter bubble “size” (a measure of homogeneity of posts relative to a baseline) is almost as large as the impact of friend group composition (Bakshy et al., 2015). Kayhan (2015) specifically study the role of search engines in confirmation bias, and find that search context and the similarity of results in search engine results both reinforce existing biases and increase the likelihood of future biased searches. Nguyen et al. (2014) similarly study the effect of recommender systems on individual users’ content diversity, and find that the set of options recommended narrows over time.
Filter bubbles create an ideal environment for the spread of fake news: they increase the likelihood of repeat viewings of similar content, and because of the illusory truth effect, that content is more likely to be believed and shared (Pennycook et al., 2019; DiFranzo & Gloria-Garcia, 2017; Pariser, 2011). We are not claiming that HIDS are entirely or even mostly responsible for these problems, but we do note that they can play a role that is worth addressing." }, { "heading": "2 FORMAL DEFINITIONS", "text": "Here we formalize the concepts of learning scenario, operational context, and problem statement for maximal clarity and to highlight issues of applying machine learning in practice. A learning scenario consists of a learning algorithm, $L$, an operational context, $\Omega$, and a performance metric, $\mathcal{M}$. The performance metric is a quantification of learners’ performance, used during testing, validation, and sometimes training, to evaluate and compare learners. The operational context is the real-world setting where a learner operates. It includes things like the training data or reinforcement learning environment. Significantly, it also includes external factors, such as human users and the computer hardware running the learning algorithm, which may potentially influence the data or states the learner encounters.
The learning algorithm instantiates a (potentially stochastic) mapping:
$L : (\Omega, \mathcal{M}) \longrightarrow O \qquad (2)$
where $O$ is the output of learning, including things like a learned model, learning curves, and/or a complete log of any computations or data processed during learning.
A problem statement, $\tilde{\Omega}$, is a model of an operational context, used by humans to analyze properties of learning scenarios. Like all models, problem statements make simplifying assumptions, such as assuming i.i.d. inputs in the presence of distributional shift. Researchers often design a learning algorithm with a specific problem statement, $\tilde{\Omega}_{\mathrm{intended}}$, in mind, and only evaluate it in operational contexts that are carefully controlled to match $\tilde{\Omega}_{\mathrm{intended}}$. On the other hand, practitioners often deploy learning algorithms in less controlled operational contexts that are not faithful to $\tilde{\Omega}_{\mathrm{intended}}$. Our work employs and advocates for an empirical methodology of testing a learning algorithm designed for $\tilde{\Omega}_{\mathrm{intended}}$ in a problem statement $\tilde{\Omega}_{\mathrm{realistic}}$ which seems more realistic (but may still fail to capture important aspects of an operational context), in order to detect possible failure modes and develop mitigation strategies." }, { "heading": "3 EXTRA EXPERIMENTS AND REPRODUCIBILITY DETAILS", "text": "" }, { "heading": "3.1 HIDS UNIT TEST", "text": "" }, { "heading": "3.1.1 ALIGNMENT OF INCENTIVES EXPLORATION", "text": "This section presents an exploration of the parameter β, which controls the alignment of incentives in the HIDS unit test.
To clarify the interpretation of experiments, we distinguish between environments in which myopic (defect) vs. nonmyopic (cooperate) incentives are opposed, orthogonal, or compatible. Note that in this unit test myopic behaviour (defection) is what we want to see.
1. Incentive-opposed: Optimal myopic behavior is incompatible with optimal nonmyopic behavior (classic prisoner’s dilemma; these experiments are in the main paper).
2. Incentive-orthogonal: Optimal myopic behavior may or may not be optimal nonmyopic behavior.
3. Incentive-compatible: Optimal myopic behavior is necessarily also optimal nonmyopic behavior.
We focused on the incentive-opposed environment (β = −1/2) in the main paper in order to demonstrate that HIDS can be powerful enough to change the behavior of the system in an undesirable way. Here we also explore incentive-compatible and incentive-orthogonal environments because they provide useful baselines, helping us distinguish a systematic bias towards nonmyopic behavior from other reasons (such as randomness or optimization issues) for behavior that does not follow a myopically optimal policy." }, { "heading": "3.1.2 WORKING THROUGH A DETAILED EXAMPLE FOR PBT WITH T = 1", "text": "To help provide intuition on how (mechanistically) PBT could lead to persistent levels of cooperation, we walk through a simple example (with no inner loop). Consider PBT with T = 1 and a population of 5 deterministic agents $A_1, \ldots, A_5$ playing cooperate and receiving reward of $r(A_i) = 0$. Now suppose $A_1$ suddenly switches to play defect. Then $r(A_1) = 1/2$ on the next time-step (while the other agents’ reward is still 0), and so PBT’s EXPLOIT step will copy $A_1$ (without loss of generality to $A_2$). On the following time-step, $r(A_2) = 1/2$, and $r(A_1) = -1/2$, so PBT will clone $A_2$ to $A_1$, and the cycle repeats. Similar reasoning applies for larger populations, and T > 1."
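The first two outer-loop steps of this example can be traced mechanically. The toy script below is our own construction, simplifying PBT’s 20%/20% EXPLOIT step to clone-best-over-worst for a population of 5; it is not the authors’ code.

```python
# Trace of the first two outer-loop steps in the Appendix 3.1.2 example.
def reward(state, action, beta=-0.5):
    return float(state == "cooperate") + beta * float(action == "cooperate") - 0.5

policies = ["defect", "cooperate", "cooperate", "cooperate", "cooperate"]
states = ["cooperate"] * 5            # every agent cooperated last round

for t in range(2):
    rewards = [reward(s, a) for s, a in zip(states, policies)]
    print(f"t={t}: rewards={rewards}")
    states = list(policies)           # s_{t+1} = a_t in each copy
    best = max(range(5), key=rewards.__getitem__)
    worst = min(range(5), key=rewards.__getitem__)
    policies[worst] = policies[best]  # EXPLOIT: bottom copies top
# t=0: A1 earns +1/2 and its defect policy is cloned onto a cooperator;
# t=1: the clone earns +1/2 from its inherited cooperate state, while A1
# pays -1/2 in the defect state it created for itself.
```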
}, { "heading": "3.1.3 Q-LEARNING EXPERIMENT DETAILS", "text": "We show that, under certain conditions, Q-learning can learn to (primarily) cooperate, and thus fails the HIDS unit test. We estimate Q-values using the sample-average method, which is guaranteed to converge in the fully observed, tabular case (Sutton & Barto, 1998). The agent follows the -greedy policy with = 0.1. In order to achieve this result, we additionally start the agent off with one synthetic memory where both state and action are defect and therefor R(defect) = −.5, and we hard-code the starting state to be cooperate (which normally only happens 50% of the time). Without this kind of an initialization, the agent always learns to defect. However, under these conditions, we find that 10/30 agents learned to play cooperate most of the time, with Q(cooperate) and Q(defect) both hovering around −0.07, while others learn to always defect, with Q(cooperate) ≈ −0.92 and Q(defect) ≈ −0.45. context swapping, however, prevents majority-cooperate behavior from ever emerging, see Figure 9." }, { "heading": "3.1.4 Q-LEARNING: FURTHER RESULTS", "text": "To give a more representative picture of how often Q-learning fails the unit test, we run a larger set of experiments with Q-learning, results are in Figure 8. It’s possible that the failure of Q-learning is not persistent, since we have not proved otherwise, but we did run much longer experiments and still observe persistent failure, see Figure 7." }, { "heading": "3.2 CONTENT RECOMMENDATION", "text": "" }, { "heading": "3.2.1 ENVIRONMENT DETAILS", "text": "The evironment has the following components:\n1. User type, xt: categorical variable representing different types of users. The content recommender conditions its predictions on the type of the current user.\n2. User loyalty, gt: the propensity for users of each type to use the platform. User xt is sampled from a categorical distribution with parameters given by softmax(gt).\n3. Article type, yt: a categorical variable (one-hot encoding) representing the type of article selected by the user.\n4. User interests, Wt: a matrix whose entries W tx,y represent the average interest user of type x have in articles of type y.\nAt each time step t, a user xt is sampled from a categorical distribution (based on the loyalty of the different user types), then the recommendation system selects which type of article to present in the top position, and finally, the user selects an article. The goal of the recommendation system is to predict the likelihood that the user would click on each of the available articles, in order to select the one which is most interesting to the user.\nUser loyalty for xt then changes in accordance with the self-selection effect, increasing or decreasing proportionally to their interest in the top article. The interests of user type xt (represented by a column of Wt) also change; in accordance with the illusory truth effect, their interest in the topic of the top article (as chosen by the recommender system) always increases. Overall, this environment is an extremely crude representation of reality, but it allows us to incorporate both the effects of self-selection (via covariate shift), and the illusory truth effect (via concept shift).\nFormally, this environment is similar to a POMDP\\R, i.e. 
a POMDP with no reward function, also known as a world model (Armstrong & O’Rourke, 2017; Hadfield-Menell et al., 2017); the difference is that the learner observes the input before acting and only observes the target after acting. The states, observations, and actions are given below.
$s^t = (g^t, W^t, x^t, y^t)$
$o^t_{\mathrm{pre}}, a^t, o^t_{\mathrm{post}} = (x^t, \hat{y}^t, y^t)$
The state transition function is defined by:
$g^{t+1}_{x^t} = g^t_{x^t} + \alpha_1 W^t_{x^t, \hat{y}^t}$
$W^{t+1/2}_{x^t, \hat{y}^t} = W^t_{x^t, \hat{y}^t} + \alpha_2; \qquad W^{t+1}_{x^t} = W^{t+1/2}_{x^t} \,/\, \lVert W^{t+1/2}_{x^t} \rVert_2$
$x^{t+1} \sim \mathrm{softmax}(g^{t+1})$
$y^{t+1} \sim \mathrm{softmax}(W^{t+1}_{x^{t+1}})$
where $\hat{y}^t$ is the top article as chosen by the recommender, and α1, α2 represent the rates of covariate and concept shift (respectively). The update for $W^{t+1}$ merely increases the interest of user type $x^t$ in article type $\hat{y}^t$, then normalizes the interests for that user type." }, { "heading": "3.2.2 REPRODUCIBILITY DETAILS", "text": "For these experiments, the recommendation system is a ReLU-MLP with 1 hidden layer of 100 units, trained via supervised learning with SGD (learning rate = 0.01) to predict which article a user will select. Actions are sampled from the MLP’s predictive distribution. We apply PBT without any hyperparameter selection (this amounts to just doing the EXPLOIT step), and an interval of 10, selecting on accuracy. We use a population of 20 learners (whether applying PBT or not), and match random seeds for the trials with and without PBT. We initialize $g^1$ and $W^1$ to be the same across the 20 copies of the environment (i.e. the learners start with the same user population), but these values diverge throughout learning. For the environment, we set the number of user and article types both to 10. Initial user loyalties are randomly sampled from N(0, 0.03), α1 = 0.03, and α2 = 0.003." }, { "heading": "3.2.3 CONTEXT SWAPPING IN CONTENT RECOMMENDATION", "text": "We believe context swapping is not appropriate for the content recommendation environment, since when the environments diverge, optimal behavior may differ across environments. Nevertheless, we ran experiments with it for completeness. The main effect appears to be to hamper learning when PBT is not used; see Figure 10. Notably, it does not appear to significantly influence the rate or extent of SIDS when combined with PBT." }, { "heading": "3.2.4 EXPLORATION OF ENVIRONMENT PARAMETERS", "text": "In Figure 11, we examine the effect of the rate-of-change parameters (α1, α2) of the content recommendation environment on the results provided in the paper. As noted there, our results are qualitatively consistent so long as (1) the initial user distribution is approximately uniform, and (2) the covariate shift rate (α1) is faster than the concept shift rate (α2). These distributions are updated by different mechanisms, and are not directly comparable. Concept shift changes the task more radically, requiring a learner to change its predictions, rather than just become accurate on a wider range of inputs. We conjecture that changes in P(y|x) must therefore be kept smooth enough for the outer loop to have pressure to capitalize on HIDS." } ]
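To make the Appendix 3.2.1 dynamics concrete, here is a minimal sketch of one environment step, written directly from the update equations above. The class name, the row-indexed interest matrix and its initialization, and the sampling details are our assumptions, not the released implementation.

```python
import numpy as np

class ContentRecEnv:
    """Toy content-recommendation world model, per Appendix 3.2.1."""
    def __init__(self, n_users=10, n_articles=10, a1=0.03, a2=0.003, seed=0):
        rng = np.random.default_rng(seed)
        self.g = rng.normal(0.0, 0.03, n_users)        # user loyalties
        self.W = np.abs(rng.normal(size=(n_users, n_articles)))  # interests (assumed init)
        self.W /= np.linalg.norm(self.W, axis=1, keepdims=True)
        self.a1, self.a2, self.rng = a1, a2, rng

    def _softmax(self, v):
        e = np.exp(v - v.max())
        return e / e.sum()

    def step(self, x, y_hat):
        """Given user type x and recommended article y_hat, update the world
        and sample the next user and their clicked article."""
        # Covariate shift: loyalty moves with interest in the top article.
        self.g[x] += self.a1 * self.W[x, y_hat]
        # Concept shift: interest in the recommended topic always increases,
        # then that user type's interests are re-normalized.
        self.W[x, y_hat] += self.a2
        self.W[x] /= np.linalg.norm(self.W[x])
        x_next = self.rng.choice(len(self.g), p=self._softmax(self.g))
        y_next = self.rng.choice(self.W.shape[1], p=self._softmax(self.W[x_next]))
        return x_next, y_next
```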
2019
null
SP:efc663895e7ee0d78501c66be7c242d7f882d45d
[ "This paper studies the problem of learning disentangled representation in a hierarchical manner. It proposed a hierarchical disentangle network (HDN) which tackles the disentangling process in a coarse-to-fine manner. Specifically, common representations are captured at root level and unique representations are learned at lower hierarchical level. The HDN is trained in a generative adversarial network (GAN) manner, with additional hierarchical classification loss enforcing the disentanglement. Experiments are conducted on CelebA (attributes), Fashion-MNIST (category), and CAD Cars (category & pose).", "This paper proposed the hierarchical disentangle network (HDN) that leverages hierarchical characteristics of object categories to learn disentangled representation in multiple levels. Their coarse-to-fine manner approach allows each level to focus on learning specific representations in its granularity. This is achieved through supervised learning on each level where they train classifiers to distinguish each particular category from its ‘sibling’ categories which are close to each other. Experiments are conducted on four datasets to validate the method. " ]
An object can be described as the combination of primary visual attributes. Disentangling such underlying primitives is the long-term objective of representation learning. It is observed that categories have natural hierarchical characteristics, i.e. any two objects can share some common primitives in a particular category level while they may possess their unique ones in another. However, previous works usually operate in a flat manner (i.e. in a particular level) to disentangle the representations of objects. Though they may obtain the primitives that constitute objects as the categories in that level, their results are obviously neither efficient nor complete. In this paper, we propose the hierarchical disentangle network (HDN) to exploit the rich hierarchical characteristics among categories to divide the disentangling process in a coarse-to-fine manner, such that each level only focuses on learning the specific representations and finally the common and unique representations in all levels jointly constitute the raw object. Specifically, HDN is designed based on an encoder-decoder architecture. To simultaneously ensure the disentanglement and interpretability of the encoded representations, a novel hierarchical generative adversarial network (GAN) is elaborately designed. Quantitative and qualitative evaluations on four object datasets validate the effectiveness of our method.
[]
[ { "authors": [ "Reza Abbasi-Asl", "Bin Yu" ], "title": "Interpreting convolutional neural networks through compression", "venue": null, "year": 2017 }, { "authors": [ "Karim Ahmed", "Mohammad Haris Baig", "Lorenzo Torresani" ], "title": "Network of experts for large-scale image categorization", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "David Bau", "Bolei Zhou", "Aditya Khosla", "Aude Oliva", "Antonio Torralba" ], "title": "Network dissection: Quantifying interpretability of deep visual representations", "venue": "In IEEE,CVPR,", "year": 2017 }, { "authors": [ "David Bau", "Jun-Yan Zhu", "Hendrik Strobelt", "Bolei Zhou", "Joshua B. Tenenbaum", "William T. Freeman", "Antonio Torralba" ], "title": "GAN dissection: Visualizing and understanding generative adversarial networks", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Yoshua Bengio", "Aaron C. Courville", "Pascal Vincent" ], "title": "Representation learning: A review and new perspectives", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2013 }, { "authors": [ "Zhangjie Cao", "Mingsheng Long", "Jianmin Wang", "Philip S. Yu" ], "title": "Hashnet: Deep learning to hash by continuation", "venue": "In IEEE , ICCV,", "year": 2017 }, { "authors": [ "Angel X. Chang", "Thomas A. Funkhouser", "Leonidas J. Guibas", "Pat Hanrahan", "Qi-Xing Huang", "Zimo Li", "Silvio Savarese", "Manolis Savva", "Shuran Song", "Hao Su", "Jianxiong Xiao", "Li Yi", "Fisher Yu" ], "title": "Shapenet: An information-rich 3d model repository", "venue": "CoRR, abs/1512.03012,", "year": 2015 }, { "authors": [ "Xi Chen", "Yan Duan", "Rein Houthooft", "John Schulman", "Ilya Sutskever", "Pieter Abbeel" ], "title": "Infogan: Interpretable representation learning by information maximizing generative adversarial nets", "venue": "In NIPS,", "year": 2016 }, { "authors": [ "Yunjey Choi", "Min-Je Choi", "Munyoung Kim", "Jung-Woo Ha", "Sunghun Kim", "Jaegul Choo" ], "title": "Stargan: Unified generative adversarial networks for multi-domain image-to-image translation", "venue": "In IEEE,CVPR,", "year": 2018 }, { "authors": [ "Jia Deng", "Jonathan Krause", "Alexander C. Berg", "Fei-Fei Li" ], "title": "Hedging your bets: Optimizing accuracy-specificity trade-offs in large scale visual recognition", "venue": "In IEEE,CVPR,", "year": 2012 }, { "authors": [ "Jia Deng", "Nan Ding", "Yangqing Jia", "Andrea Frome", "Kevin Murphy", "Samy Bengio", "Yuan Li", "Hartmut Neven", "Hartwig Adam" ], "title": "Large-scale object classification using label relation graphs", "venue": "In ECCV, pp", "year": 2014 }, { "authors": [ "Nan Ding", "Jia Deng", "Kevin P. Murphy", "Hartmut Neven" ], "title": "Probabilistic label relation graphs with ising models", "venue": "In IEEE,", "year": 2015 }, { "authors": [ "Alexey Dosovitskiy", "Thomas Brox" ], "title": "Generating images with perceptual similarity metrics based on deep networks", "venue": "In NIPS, pp", "year": 2016 }, { "authors": [ "Alexey Dosovitskiy", "Thomas Brox" ], "title": "Inverting visual representations with convolutional networks", "venue": "In IEEE,CVPR,", "year": 2016 }, { "authors": [ "Sanja Fidler", "Sven J. Dickinson", "Raquel Urtasun" ], "title": "3d object detection and viewpoint estimation with a deformable 3d cuboid model", "venue": "In NIPS,", "year": 2012 }, { "authors": [ "Ruth C. 
Fong", "Andrea Vedaldi" ], "title": "Interpretable explanations of black boxes by meaningful perturbation", "venue": "In IEEE ,ICCV,", "year": 2017 }, { "authors": [ "Robert Geirhos", "Patricia Rubisch", "Claudio Michaelis", "Matthias Bethge", "Felix A. Wichmann", "Wieland Brendel" ], "title": "Imagenet-trained cnns are biased towards texture; increasing shape bias improves accuracy and robustness", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Abel Gonzalez-Garcia", "Joost van de Weijer", "Yoshua Bengio" ], "title": "Image-to-image translation for cross-domain disentanglement", "venue": null, "year": 2018 }, { "authors": [ "Wonjoon Goo", "Juyong Kim", "Gunhee Kim", "Sung Ju Hwang" ], "title": "Taxonomy-regularized semantic deep convolutional neural networks", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "Ian J. Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron C. Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In NIPS,", "year": 2014 }, { "authors": [ "Gregory Griffin", "Pietro Perona" ], "title": "Learning and using taxonomies for fast visual categorization", "venue": "In IEEE,CVPR,", "year": 2008 }, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Sepp Hochreiter" ], "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Xun Huang", "Ming-Yu Liu", "Serge J. Belongie", "Jan Kautz" ], "title": "Multimodal unsupervised image-toimage translation", "venue": "In ECCV,", "year": 2018 }, { "authors": [ "Sung Ju Hwang", "Leonid Sigal" ], "title": "A unified semantic embedding: Relating taxonomies and attributes", "venue": "In NIPS, pp", "year": 2014 }, { "authors": [ "Takuhiro Kaneko", "Kaoru Hiramatsu", "Kunio Kashino" ], "title": "Generative adversarial image synthesis with decision tree latent controller", "venue": "In IEEE,CVPR,", "year": 2018 }, { "authors": [ "Tero Karras", "Samuli Laine", "Timo Aila" ], "title": "A style-based generator architecture for generative adversarial networks", "venue": null, "year": 2018 }, { "authors": [ "Oscar Li", "Hao Liu", "Chaofan Chen", "Cynthia Rudin" ], "title": "Deep learning for case-based reasoning through prototypes: A neural network that explains its predictions", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Shan Li", "Weihong Deng", "JunPing Du" ], "title": "Reliable crowdsourcing and deep locality-preserving learning for expression recognition in the wild", "venue": "In IEEE, CVPR,", "year": 2017 }, { "authors": [ "Haomiao Liu", "Ruiping Wang", "Shiguang Shan", "Xilin Chen" ], "title": "Deep supervised hashing for fast image retrieval", "venue": "In IEEE,CVPR,", "year": 2016 }, { "authors": [ "Wei Liu", "Dragomir Anguelov", "Dumitru Erhan", "Christian Szegedy", "Scott E. Reed", "Cheng-Yang Fu", "Alexander C. Berg" ], "title": "SSD: single shot multibox detector", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In IEEE ,ICCV,", "year": 2015 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of machine learning research,", "year": 2008 }, { "authors": [ "Xudong Mao", "Qing Li", "Haoran Xie", "Raymond Y.K. 
Lau", "Zhen Wang", "Stephen Paul Smolley" ], "title": "Least squares generative adversarial networks", "venue": "In IEEE , ICCV,", "year": 2017 }, { "authors": [ "Marcin Marszalek", "Cordelia Schmid" ], "title": "Constructing category hierarchies for visual recognition", "venue": "In ECCV, pp", "year": 2008 }, { "authors": [ "Michaël Mathieu", "Junbo Jake Zhao", "Pablo Sprechmann", "Aditya Ramesh", "Yann LeCun" ], "title": "Disentangling factors of variation in deep representation using adversarial training", "venue": "In NIPS,", "year": 2016 }, { "authors": [ "Mehdi Mirza", "Simon Osindero" ], "title": "Conditional generative adversarial nets", "venue": "CoRR, abs/1411.1784,", "year": 2014 }, { "authors": [ "Vicente Ordonez", "Jia Deng", "Yejin Choi", "Alexander C. Berg", "Tamara L. Berg" ], "title": "From large scale image categorization to entry-level categories", "venue": "In IEEE,", "year": 2013 }, { "authors": [ "Sebastian Palacio", "Joachim Folz", "Jörn Hees", "Federico Raue", "Damian Borth", "Andreas Dengel" ], "title": "What do deep networks like to see", "venue": null, "year": 2018 }, { "authors": [ "Joseph Redmon", "Santosh Kumar Divvala", "Ross B. Girshick", "Ali Farhadi" ], "title": "You only look once: Unified, real-time object detection", "venue": "In IEEE,CVPR,", "year": 2016 }, { "authors": [ "Scott E. Reed", "Kihyuk Sohn", "Yuting Zhang", "Honglak Lee" ], "title": "Learning to disentangle factors of variation with manifold interaction", "venue": "In ICML,", "year": 2014 }, { "authors": [ "Shaoqing Ren", "Kaiming He", "Ross B. Girshick", "Jian Sun" ], "title": "Faster R-CNN: towards real-time object detection with region proposal networks", "venue": "In NIPS, pp", "year": 2015 }, { "authors": [ "Salah Rifai", "Yoshua Bengio", "Aaron C. Courville", "Pascal Vincent", "Mehdi Mirza" ], "title": "Disentangling factors of variation for facial expression recognition", "venue": "In ECCV,", "year": 2012 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael S. Bernstein", "Alexander C. Berg", "Fei-Fei Li" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International Journal of Computer Vision,", "year": 2015 }, { "authors": [ "Tim Salimans", "Ian J. Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training gans", "venue": "In NIPS,", "year": 2016 }, { "authors": [ "Karen Simonyan", "Andrea Vedaldi", "Andrew Zisserman" ], "title": "Deep inside convolutional networks: Visualising image classification models and saliency", "venue": "maps. CoRR,", "year": 2013 }, { "authors": [ "Krishna Kumar Singh", "Utkarsh Ojha", "Yong Jae Lee" ], "title": "Finegan: Unsupervised hierarchical disentanglement for fine-grained object generation and discovery", "venue": "In CVPR,", "year": 2019 }, { "authors": [ "Nitish Srivastava", "Ruslan Salakhutdinov" ], "title": "Discriminative transfer learning with tree-based priors", "venue": "In NIPS, pp. 2094–2102,", "year": 2013 }, { "authors": [ "Pierre Stock", "Moustapha Cissé" ], "title": "Convnets and imagenet beyond accuracy: Explanations, bias detection, adversarial examples and model criticism", "venue": null, "year": 2017 }, { "authors": [ "Joshua B. Tenenbaum", "William T. 
Freeman" ], "title": "Separating style and content", "venue": "In NIPS, pp", "year": 1996 }, { "authors": [ "Luan Tran", "Xi Yin", "Xiaoming Liu" ], "title": "Disentangled representation learning GAN for pose-invariant face recognition", "venue": "In IEEE,CVPR,", "year": 2017 }, { "authors": [ "Chaoyue Wang", "Chaohui Wang", "Chang Xu", "Dacheng Tao" ], "title": "Tag disentangled generative adversarial network for object image re-rendering", "venue": "In IJCAI,", "year": 2017 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning", "venue": "algorithms. CoRR,", "year": 2017 }, { "authors": [ "Jianwen Xie", "Yifei Xu", "Erik Nijkamp", "Ying Nian Wu", "Song-Chun Zhu" ], "title": "Generative hierarchical learning of sparse FRAME models", "venue": "In IEEE, CVPR,", "year": 2017 }, { "authors": [ "Zhicheng Yan", "Hao Zhang", "Robinson Piramuthu", "Vignesh Jagadeesh", "Dennis DeCoste", "Wei Di", "Yizhou Yu" ], "title": "HD-CNN: hierarchical deep convolutional neural networks for large scale visual recognition", "venue": "In IEEE,", "year": 2015 }, { "authors": [ "Huei-Fang Yang", "Kevin Lin", "Chu-Song Chen" ], "title": "Supervised learning of semantics-preserving hash via deep convolutional neural networks", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2018 }, { "authors": [ "Linjie Yang", "Ping Luo", "Chen Change Loy", "Xiaoou Tang" ], "title": "A large-scale car dataset for finegrained categorization and verification", "venue": "In IEEE,", "year": 2015 }, { "authors": [ "Dong Yi", "Zhen Lei", "Shengcai Liao", "Stan Z. Li" ], "title": "Learning face representation from scratch", "venue": "CoRR, abs/1411.7923,", "year": 2014 }, { "authors": [ "Matthew D. Zeiler", "Rob Fergus" ], "title": "Visualizing and understanding convolutional networks", "venue": "In ECCV, pp", "year": 2014 }, { "authors": [ "Richard Zhang", "Phillip Isola", "Alexei A Efros", "Eli Shechtman", "Oliver Wang" ], "title": "The unreasonable effectiveness of deep features as a perceptual metric", "venue": "In IEEE,CVPR,", "year": 2018 }, { "authors": [ "Bin Zhao", "Fei-Fei Li", "Eric P. Xing" ], "title": "Large-scale category structure aware image categorization", "venue": "In NIPS, pp", "year": 2011 }, { "authors": [ "Shengjia Zhao", "Jiaming Song", "Stefano Ermon" ], "title": "Learning hierarchical features from deep generative models", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Luisa M. Zintgraf", "Taco S. Cohen", "Tameem Adel", "Max Welling" ], "title": "Visualizing deep neural network decisions: Prediction difference analysis", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Representation learning, as one basic and hot topic in machine learning and computer vision community, has achieved significant progress in recent years on different tasks such as recognition (Russakovsky et al., 2015), detection (Ren et al., 2015; Redmon et al., 2016; Liu et al., 2016b) and generation (Goodfellow et al., 2014), benefiting from the rapid development of representation learned by deep neural networks. Considering the strong capacity of deep representation, in this paper, we mainly focus on the deep representation learning framework.\nDespite great success the deep representations have achieved as mentioned above, two important problems are still unresolved or less considered, i.e. the interpretability and the disentanglement of the learned representations. In the past decades, various works have been developed to reveal the black box of deep learning (Zeiler & Fergus, 2014; Dosovitskiy & Brox, 2016b; Bau et al., 2017; Simonyan et al., 2013; Stock & Cissé, 2017; Zhang et al., 2017) and move us closer to the goal of disentangling the variations within data (Reed et al., 2014; Mathieu et al., 2016; Rifai et al., 2012; Tran et al., 2017; Gonzalez-Garcia et al., 2018; Huang et al., 2018; Chen et al., 2016). Even though they have brought great insights to us, they still have some limitations. For instance, (Chen et al., 2016; Xie et al., 2017; Zhao et al., 2017) learn to disentangle variation factors within each category using generative models, instead of investigating the similarities and differences among categories, leading to poor discriminability. Therefore, the learned representations would not well conform to human perception. Though (Gonzalez-Garcia et al., 2018; Huang et al., 2018) try to obtain the domain-invariant and domain-specific knowledge, they can only handle two categories one time, which is not that efficient. In this paper, we attempt to learn disentangled representations in a more natural and efficient manner.\nLet us first discuss how humans understand an object. Generally speaking, an object can be regarded as the combination of many semantic attributes. Hundreds of thousands of objects in the world can be clustered and recognized by humans just because we can figure out the common and unique\nattributes of an object compared to others. Besides, a man who never play the billiards can only recognize a table in an image, while a sports fan may regard it as a billiard table. Both of them are right since categories have natural hierarchical structure. As shown in Fig. 1(a), given six leaf-level categories, they can be organized in a three-level hierarchy considering the common and different features they have. Each child category in the hierarchy is a special case of its parent category since it inherits all features from its parent category and has extra features that are not present in its parent category. From another perspective, each parent category is the abstraction of all its child categories considering it contains the attributes that are present in all its child categories. Then we come back to the task of disentangling representation learning. It aims to learn the representation encoding useful information that can be applied in other tasks (e.g. building classifiers and predictors) (Bengio et al., 2013). 
Taking the hierarchical nature of categories into account, if we only learn the representations of an object in a flat manner for one specific category level, as previous works do, the result will not be scalable or comprehensive enough for the machine to handle various tasks in the real world.\nOur work aims to exploit the natural hierarchical characteristics among categories to divide representation learning in a coarse-to-fine manner, such that each level only focuses on learning level-specific representations. For instance, the billiard table image in Fig. 1(b) entangles the information of being a furniture, a table and a billiard table. We first extract the features that only contain the furniture information from the image. By tracing from the root to the leaf level, more and more information is extracted until we can recognize the categories the object belongs to at all hierarchical levels. The disentangled representations obtained in this way are expected to find wide and promising applications. For example, one can transfer the semantics of a specific category level from one object to another while keeping the information of other levels unchanged. Besides, it could support hierarchical image compression using different levels of the disentangled representations. To achieve the objective of hierarchical disentangling, while simultaneously interpreting the results in a way humans can understand, we propose the hierarchical disentangle network (HDN), which draws lessons from hierarchical classification and the recently proposed generative adversarial nets (Goodfellow et al., 2014). Extensive experiments are conducted on four popular object datasets to validate the effectiveness of our method." }, { "heading": "2 RELATED WORKS", "text": "Disentangling Deep Representations. The goal of disentangled representation learning is to discover factors of variation within data (Bengio et al., 2013). Recent years have witnessed substantial interest in this research area (Tenenbaum & Freeman, 1996), including works based on deep learning (Reed et al., 2014; Mathieu et al., 2016; Rifai et al., 2012; Wang et al., 2017; Tran et al., 2017; Gonzalez-Garcia et al., 2018; Huang et al., 2018; Chen et al., 2016). (Rifai et al., 2012) is probably the earliest work to learn disentangled representations with deep networks, for the task of emotion recognition. (Reed et al., 2014) is based on a higher-order Boltzmann machine and regards each factor of variation of the manifold as a sub-manifold. (Mathieu et al., 2016) and (Chen et al., 2016) leverage generative adversarial nets (GAN) to learn factors of variation. Recently, cross-domain translation methods (Gonzalez-Garcia et al., 2018; Huang et al., 2018) learn domain-invariant and domain-specific representations. These works ignore the natural and inherent hierarchical relationships among categories, with which the disentangling can be conducted in a coarse-to-fine manner such that each level only focuses on learning level-specific representations.\nNetwork Interpretability. Network interpretability aims to explain how a network works by visualizing it from a perspective that humans can understand. These methods can be briefly divided into two groups according to whether the visualization is involved in the network during training, i.e. the off-line methods and the online methods. 
The off-line methods attempt to visualize the patterns in image space that activate each convolutional filter (Zeiler & Fergus, 2014; Dosovitskiy & Brox, 2016b;a; Bau et al., 2017; 2019) or to highlight the area in an image that is responsible for the network prediction (Simonyan et al., 2013; Fong & Vedaldi, 2017; Zintgraf et al., 2017; Abbasi-Asl & Yu, 2017; Stock & Cissé, 2017; Palacio et al., 2018; Geirhos et al., 2019). While such methods can explain what has already been learned by the model, they cannot improve the model's interpretability in return. Instead, the online works propose to directly learn interpretable representations during training (Li et al., 2018; Zhang et al., 2017). However, these methods mainly focus on figuring out the running mechanism of networks while paying less attention to dissecting the variations of features among categories, so they cannot make models truly understand their inputs.\nHierarchy-regularized Learning. Semantic hierarchies have been explored in object classification for accelerating recognition (Griffin & Perona, 2008; Marszalek & Schmid, 2008), obtaining a sequence of predictions (Deng et al., 2012; Ordonez et al., 2013), making use of category relation graphs (Deng et al., 2014; Ding et al., 2015), and improving recognition performance through additional supervision (Zhao et al., 2011; Srivastava & Salakhutdinov, 2013; Hwang & Sigal, 2014; Yan et al., 2015; Goo et al., 2016; Ahmed et al., 2016). While these discriminative classification works have achieved their expected goals, they usually lack interpretability. To address such issues, (Xie et al., 2017; Zhao et al., 2017) propose to use generative models to disentangle the factors, from low-level representations to high-level ones, that construct a specific object. (Singh et al., 2019) uses an unsupervised generative framework to hierarchically disentangle the background, object shape and appearance of an image. However, these works either deal with each category in isolation or ignore the discriminability of the learned features, and thus cannot accurately disentangle the differences and similarities among categories." }, { "heading": "3 HIERARCHICAL REPRESENTATION LEARNING", "text": "Suppose a category hierarchy is given in the form shown in Fig. 1(a). We use $l = 1, \dots, L$ to denote the level of the hierarchy ($L$ for the leaf level and $1$ for the root level), $K_l$ to denote the number of nodes at level $l$, $n_l^k$ to denote the $k$-th node at level $l$, and $C_l^k$ to denote the number of children of $n_l^k$. As illustrated in Fig. 1(b), given an original object image denoted as $I^o$, our goal is to extract the feature $F_l$ at the $l$-th level.\nGenerally speaking, an object $O$ can be described as the combination of a group of visual attributes:\n$$O = \underbrace{A_1 + \dots + A_i}_{level=1} + \underbrace{A_{i+1} + \dots + A_j}_{level=2} + \dots + \underbrace{A_{j+1} + \dots + A_m}_{level=l} + \Delta(O) \quad (1)$$\nwhere $\Delta(O)$ represents the currently undefined attributes of $O$. As we have discussed, humans classify $O$ at a particular category level according to a subset of the whole attribute set in Eqn.(1). Take the object in Fig. 1(b) for example: it can be regarded as a furniture since it contains the attribute subset $\{A_1 + \dots + A_i\}$, and it is classified as a table in terms of the attribute subset $\{A_1 + \dots + A_i + A_{i+1} + \dots + A_j\}$ present in it. Therefore, the disentangled feature $F_l$ for our objective in Fig. 1(b) is actually the reflection of the attribute subset formulated in Eqn.(1).
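To make the additive attribute decomposition of Eqn.(1) concrete, the following minimal Python sketch encodes the billiard-table example of Fig. 1(b) as nested attribute sets. The attribute names are hypothetical placeholders chosen purely for illustration; they are not quantities used by the model:

```python
# Hypothetical attribute sets for the hierarchy of Fig. 1(b); each level
# inherits all attributes of its parent and adds its own unique ones.
furniture = {"man_made", "indoor"}                           # A_1 + ... + A_i
table = furniture | {"flat_top", "four_legs"}                # + A_{i+1} + ... + A_j
billiard_table = table | {"green_felt", "pockets", "rails"}  # + A_{j+1} + ... + A_m

# The inheritance mirrors the claim that F_{l-1} is a proper subset of F_l.
assert furniture < table < billiard_table
```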
Moreover, because of the hierarchical correlations (i.e. the inheritance relationship) among categories at different levels, the subset $\{A_1 + \dots + A_i + A_{i+1} + \dots + A_j\}$ obviously includes the subset $\{A_1 + \dots + A_i\}$, which naturally makes the disentangled $F_{l-1}$ a proper subset of $F_l$.\nTaking these observations into consideration, we design the hierarchical disentangle network (HDN) based on the autoencoder architecture in Fig. 2. The encoder $E$ dissects the hierarchical representations given a semantic hierarchy. The decoder $G$ plays the role of an interpreter, reflecting the semantic variations in image space for the different hierarchical levels, guided by the hierarchical discriminator $D_{adv}$ and classifiers $D_{cls}$ (they share most of the network architecture except the output layers)." }, { "heading": "3.1 TOP-DOWN LEARNING OF HIERARCHICAL REPRESENTATIONS", "text": "Since $F_{l-1}$ is a proper subset of $F_l$, once $F_{l-1}$ is obtained, only the difference $R_l$ ($1 < l \le L$) between $F_l$ and $F_{l-1}$ needs to be encoded. We therefore devise a top-down representation extraction scheme.\nGiven $F_{l-1}$ and $R_l$, we aggregate them to obtain the whole representation at the $l$-th level. This procedure can be formulated as:\n$$F_l = F_{l-1} \oplus R_l \quad (2)$$\nwhere $\oplus$ denotes information aggregation. In summary, for hierarchical disentanglement, the common feature $F_1$ at the root level and the unique features $\{R_l\}_{l=2}^L$ at deeper levels need to be encoded. To further interpret the semantics of these features for humans, the decoder reconstructs them in image space. The semantics of $F_1$ are shared among all its offspring and can be regarded as the invariant content of the object, while those of $\{R_l\}_{l=2}^L$ are unique to each level and play the role of the variant style of the object. Therefore, $F_1$ and $\{R_l\}_{l=2}^L$ are processed in the upper and bottom branches respectively, so that they play different roles during reconstruction, as shown in Fig. 2." }, { "heading": "3.2 CONSTRAINTS FOR THE LEARNING PROCESS", "text": "The basic constraint of hierarchical disentanglement is that features at different levels perform their own duties. For an object $O$, the encoded $F_1$ and $\{R_l\}_{l=2}^L$ are complementary, following the constraint that $F_l$ is a proper subset of $F_{l+1}$. $F_1$ should encode just enough information to describe the object as belonging to the root category. Progressively using $R_l$, one can distinguish it from other categories at the $l$-th level.\nApart from disentanglement, visualization of the features in image space is also one of our objectives. We turn to the popular conditional generative adversarial nets (cGANs) (Mirza & Osindero, 2014), which can control the generated images given conditions. Our HDN leverages the disentangled features $F_1$ and $\{R_l\}_{l=2}^L$ to control the variations of the reconstructed images at different category levels.\nTo ensure that $F_1$ and $\{R_l\}_{l=2}^L$ are well disentangled, we propose a random combination strategy over the different levels of features from different objects and control the generated images through these combined features, as shown in Fig. 2. Specifically, given $F_1^1, \{R_l^1\}_{l=2}^L$ and $F_1^2, \{R_l^2\}_{l=2}^L$ from two arbitrary objects, we obtain the newly combined features $F_1$ and $\{R_l\}_{l=2}^L$, where for all $1 \le l \le L$, $R_l$ ($F_1$ if $l = 1$) comes from either the first or the second object. The newly combined features are aggregated as the input of the decoder $G$ to generate a new object image $I^g$.
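A minimal sketch of this random combination strategy is given below. The per-level code list and the encoder/decoder interfaces are assumptions made for illustration, not the exact released implementation:

```python
import random

def random_combine(codes_a, codes_b):
    """Randomly mix hierarchical codes [F_1, R_2, ..., R_L] of two objects.

    For every level, the code is drawn from either object independently.
    The returned source indices (0 or 1) determine the ground-truth local
    labels used by the hierarchical classification loss in Eqn.(3).
    """
    mixed, sources = [], []
    for code_a, code_b in zip(codes_a, codes_b):
        source = random.randint(0, 1)
        mixed.append(code_a if source == 0 else code_b)
        sources.append(source)
    return mixed, sources

# Usage with a hypothetical encoder E and decoder G:
#   mixed, sources = random_combine(E(img_a), E(img_b))
#   img_g = G(mixed)   # supervised by the losses defined below
```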
The generated image should satisfy the following losses:\n– Hierarchical classification loss. For each level, $I^g$ should be classified into the category that $R_l^i$ reflects (the root-level $F_1$ only contains one category), defined as:\n$$J_{cls} = \mathbb{E}_{I^g \sim p(G)}\Big[-\sum_{l=2}^{L}\sum_{c=1}^{C_{l-1}^k} y_l^c \log\big(D_{cls}(I^g)_l^c\big)\Big] \quad (3)$$\nwhere $J_{cls}$ is the cross-entropy loss over the local categories at each level that share a common parent node $k$, such as the dash-rectangled categories in the bottom right corner of Fig. 2. $p(G)$ denotes the distribution of generated images $G(F_1^i, \{R_l^i\}_{l=2}^L)$. $D_{cls}(I^g)_l^c$ is the probabilistic prediction for the $c$-th local category, and $y_l^c$ is the ground-truth local label of the generated object at the $l$-th level. Please note that we only focus on the local brother categories instead of all categories at that level, which makes the disentanglement more flexible. On one hand, the classification at each level can thus focus only on the unique features that are discriminative among those local brother categories. On the other hand, the duties of different levels can be well disentangled, since if the semantic information encoded at different levels were entangled, the hierarchical classifiers would be quite confused after the random combination and image reconstruction.\n– Adversarial loss. We employ GANs to match the distribution of reconstructed images to the real data distribution. Specifically, the LS-GAN (Mao et al., 2017) loss is adopted in light of its stable training, defined as:\n$$J_{GAN} = \mathbb{E}_{I^g \sim p(G)}\big[(1 - D_{adv}(I^g))^2\big] \quad (4)$$\n– Image reconstruction loss. With $F_1$ and $\{R_l^1\}_{l=2}^L$ from the same object, we should be able to reconstruct an image as close to the input as possible:\n$$J_{recon}^{I} = \mathbb{E}_{I^r \sim p'(G)}\big[\|I^r - I^o\|_1\big] \quad (5)$$\nwhere $p'(G)$ is the distribution of generations that take $F_1, \{R_l\}_{l=2}^L$ from the same object as input.\n– Feature reconstruction loss. Apart from the image reconstruction loss, a feature reconstruction loss is added to HDN to stabilize the training process:\n$$J_{recon}^{F,R} = \mathbb{E}_{(F_1,\{R_l\}_{l=2}^L) \sim p(E)}\big[\|E(G(F_1, \{R_l\}_{l=2}^L)) - (F_1, \{R_l\}_{l=2}^L)\|_1\big] \quad (6)$$\nwhere $p(E)$ is the distribution of encoded hierarchical features $E(I^o)$.\nWe now combine the four loss functions defined in Eqn.(3), Eqn.(4), Eqn.(5) and Eqn.(6) into one comprehensive loss function that supervises the disentangling of $E$ and the visualization of $G$:\n$$J(E, G) = J_{cls} + J_{GAN} + \alpha J_{recon}^{I} + \beta J_{recon}^{F,R} \quad (7)$$\nwhere $\alpha$ and $\beta$ are hyper-parameters that balance the weights of the four terms.\nFor the update of the discriminator and the hierarchical classifiers, we use the following loss:\n$$J(D) = \mathbb{E}_{I^o \sim p(data)}\Big[-\sum_{l=2}^{L}\sum_{c=1}^{C_{l-1}^k} y_l^c \log\big(D_{cls}(I^o)_l^c\big)\Big] + \mathbb{E}_{I^o \sim p(data)}\big[(1 - D_{adv}(I^o))^2\big] + \mathbb{E}_{I^g \sim p(G)}\big[(D_{adv}(I^g))^2\big] \quad (8)$$" }, { "heading": "3.3 RELATIONSHIP WITH PREVIOUS WORK", "text": "We note that the recent work DTLC-GAN (Kaneko et al., 2018) shares motivations with our method in learning hierarchical representations. Nevertheless, DTLC-GAN is quite different from ours. Specifically, the detailed goals of leveraging the hierarchical relationship are different. DTLC-GAN aims to maximize the mutual information between the conditioned representation and the data at each level, i.e. it studies how the appearance of data varies with more and more specific conditions and thus synthesizes data with more fine-grained details. Our method focuses more on how humans distinguish objects among categories at different hierarchical levels and aims to transfer this manner of understanding objects to the machine, i.e. to learn the commonality and individuality of categories in nature. 
Therefore, the disentangled features of our method mainly serve downstream discriminative tasks such as semantic retrieval and open-world unseen-category recognition, as we attempt in the following experiments. Besides, thanks to the disentangled commonality, our method can further realize semantic translation between images by exchanging their individual parts, which is a popular real-world application." }, { "heading": "4 EXPERIMENTS", "text": "Datasets: We conduct experiments on hierarchically annotated data from four datasets; typical examples of the hierarchies are shown in Fig.8, Fig.9 and Fig.10 in the Appendix1. The first is the CelebA dataset (Liu et al., 2015). It provides more than 200K face images with 40 attribute annotations. Following the official train/test protocol, we define a four-level hierarchical structure with an explicit attribute difference between any two levels. Specifically, all faces (the root category) are first divided into two categories based on gender. These initial categories are further split according to the smile expression and hair color in the next two levels. With such ground-truth hierarchical annotations, we can validate our method more easily.\nThe second dataset, Fashion-MNIST (Xiao et al., 2017), was proposed as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same train/test split with MNIST. Since this dataset does not provide any hierarchical structure, we cluster T-shirt, coat and pullover into one super category and trouser and dress into another to construct a three-level hierarchy (whose root is fashion) according to their appearance similarities.\nThe other two datasets are 3D data: CADCars (Fidler et al., 2012) and ShapeNet (Chang et al., 2015). CADCars contains 183 3D car models, and ShapeNet consists of 51,300 3D models covering 55 common and 205 finer-grained categories. Using the provided tools, we generated 24 2D images with 6 pose and 4 illumination variations for CADCars. These 2D data are clustered into four super categories, i.e. minibus, sedan, sports and SUV, and are further divided into 6 finer-grained categories for each super category based on pose annotations, which defines a three-level hierarchy. On ShapeNet, 12 2D images with pose variation are obtained for each 3D model. One three-level category-pose hierarchy named ShapeNet-P, similar to CADCars, and one three-level hierarchy named ShapeNet-C, as in Fig.1(a), are defined. The train/test split ratio is 4:1.\nImplementation Details: Our HDN is implemented on the PyTorch platform2. The design of the backbone follows recently proposed image generation (Karras et al., 2018) and translation (Huang et al., 2018) works. Images are resized to 128*128 resolution for all datasets except Fashion-MNIST, which is resized to 28*28. As shown in Fig.2, to match the backbone to our task, we increase the number of 1*1 convolution branches such that the originally single representation is disentangled into multiple hierarchical levels. We also equip the residual blocks with Adaptive Instance Normalization (AdaIN), whose parameters are dynamically generated by a multi-layer perceptron (MLP) from the disentangled latent codes. Besides, D has L output branches, one for real/fake predictions and the others for the hierarchical classifications. More training details are given in the Appendix." 
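To summarize how the terms of Eqns.(3)-(7) are combined during a generator/encoder update, a minimal PyTorch-style sketch is given below with the default weights α=10 and β=1 from Appendix A.1.2. The module interfaces (E returns the list of hierarchical codes; D returns an adversarial score and per-level local-category logits) are illustrative assumptions, not the exact released implementation:

```python
import torch.nn.functional as F

def generator_loss(E, G, D, img, mixed_codes, local_labels, alpha=10.0, beta=1.0):
    """One evaluation of the objective J(E, G) in Eqn.(7).

    mixed_codes: randomly combined hierarchical codes [F_1, R_2, ..., R_L].
    local_labels: ground-truth local ("brother") category labels per level l >= 2.
    """
    img_g = G(mixed_codes)
    adv_score, level_logits = D(img_g)

    # Eqn.(3): cross-entropy over local sibling categories at each level.
    j_cls = sum(F.cross_entropy(logits, labels)
                for logits, labels in zip(level_logits, local_labels))

    # Eqn.(4): LS-GAN adversarial term for the generator.
    j_gan = ((1.0 - adv_score) ** 2).mean()

    # Eqn.(5): L1 image reconstruction from the codes of the same object.
    j_img = (G(E(img)) - img).abs().mean()

    # Eqn.(6): reconstruction of the mixed codes from the generated image.
    j_feat = sum((r - c).abs().mean()
                 for r, c in zip(E(img_g), mixed_codes))

    return j_cls + j_gan + alpha * j_img + beta * j_feat
```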
}, { "heading": "4.1 DISENTANGLED RESULTS", "text": "Firstly, we replace one or several levels of disentangled features for an image with those of another image, and then observe the visual changes of generated image to validate the semantic consistence with pre-defined hierarchical structure. Fig.3 (and Fig.12, Fig.13 in the Appendix) shows such semantic translation results. It is observed that different level of features perform their own duties, i.e. they carry just enough information to control the variations within that level (e.g. gender, smile and hair color for CelebA we specially predefined on CelebA), but would not involve more belongs to other level. For instance, in Fig.3 change features of an image in arbitrary one, two or all levels to those of another image, the semantics would be changed correspondingly. Apart from {Rl}Ll=2, the common feature F1 also encodes information that is not discriminative among its offspring categories but is necessary to construct the object (e.g. the identity, pose and even the background information of a face image). To give a more intuitive feeling about the ability of disentangled features, we investigate the discriminabilites of them via the popular tSNE tool (Maaten & Hinton, 2008). As shown in Fig.4 (and Fig.14, Fig.15, Fig.16, Fig.17 in the Appendix), with only the common feature F1, samples are mixed together. When progressively be combined with features of deeper levels Rl, samples are better separated and almost consistent with the hierarchical structure.\nApart from direct visual edit, we also show that one can transform the source image smoothly by linear interpolation (with 5 equally spaced interpolation coefficients from 0.1 to 0.9) of disentangled features between the source and target. Such examples are shown in Fig.5. We can see that genders, expressions, hair colors and their combinations of the source images (first columns in each case) can be changed smoothly towards those of the targets (last columns of each case). Learning a smooth feature space with continuous variations is a significant issue for representation learning task, which can ensure the generalization ability for unseen similar objects. We have investigated this in Sec.4.3. At the end of this section, a quantitative evaluation of these results is conducted.\n1It is noted that the focus of this paper is to interpret the hierarchical structure within data. Therefore, we heuristically construct hierarchical structures based on human priors. One can also automatically obtain reasonable hierarchical annotations using machine learning technologies such as unsupervised clustering as (Goo et al., 2016) does.\nTo be specific, we use the learned hierarchical classifier D to evaluate whether the semantics are correctly disentangled and decoded into the randomly translated images as in Fig.3. To ensure D is reliable, the accuracy of hierarchical classifications on the test data is given as a reference. Table.1 gives the evaluation results. Firstly, it can be seen that the semantics of translated images with changing different levels are recognized correctly by the corresponding classifiers. Secondly, the deeper of the level, the more difficult of the translation, since the criteria for distinguish one category from others in the deeper level would become more and more complicated (summation of all criteria above this level). 
Finally, it becomes difficult to transfer the unique features and generate images when that information is hard to describe and disentangle, such as at the leaf level of Fashion-MNIST and ShapeNet-C, which deserves further effort." }, { "heading": "4.2 APPLICATION TO IMAGE RETRIEVAL", "text": "One of the objectives of learned representations is to be applied in real-world applications. Semantic image retrieval has been studied for years. Hashing is one effective and space-time-efficient solution for this task. However, the semantics of the target images that users expect are not always consistent, precisely because of the entangled information of objects across different hierarchical levels. In this section, we conduct retrieval at different levels on CelebA. We compare three competing deep hashing methods, i.e. DSH (Liu et al., 2016a), HashNet (Cao et al., 2017) and SSDH (Yang et al., 2018), and two strong pre-trained GAN baselines, i.e. the supervised StarGAN (Choi et al., 2018) and the unsupervised StyleGAN (Karras et al., 2018). The backbones of the hashing methods are the same as the bottom branch of the encoder E of HDN and are pretrained on the CASIA WebFace dataset (Yi et al., 2014). At the l-th level, a model is trained with a bit-length equal to the dimension of the concatenation of $\{R_l\}_2^l$. As for StarGAN and StyleGAN, the latent features before the last layer of the discriminator are used as the sample representations. To make a fair comparison with the hashing methods, we also binarize the disentangled features via the Sigmoid activation function, which we name HDN-B. Images of the test set are used to retrieve the training set.\nTable.2 gives the mAP results at different semantic levels. First, our method achieves the best performance even though we did not impose specific metric objectives on the features, while the flat StarGAN and StyleGAN do not perform well. Second, HDN is more efficient, since it only needs one model owing to disentanglement, while the hashing methods have to train a model for each level. We also tried to use only one model trained at the leaf level to evaluate the higher levels (methods with the postfix "-S"), but the results are inferior to those trained independently for each level. Third, HDN-B is better than HDN, mainly due to the increased non-linearity of the features. Finally, the retrieval of HDN is more interpretable. As shown in Fig.6, with different parts of the features, the returned images satisfy different semantic requirements, while for a general method like SSDH one cannot interpret the meanings of different code parts." }, { "heading": "4.3 UNSEEN CATEGORY PREDICTION AND SEMANTIC EDIT", "text": "Recognition of unseen categories is a challenging task for deep learning models and places high demands on their generalization ability. As our HDN learns features at different hierarchy levels, it can produce a sequence of category predictions for an object. Therefore, if unseen objects share similarities with seen ones, one should still obtain the right predictions at those levels, while the predictions among seen categories at the levels where unseen objects have their own features should be confused. For levels with shared similarities and levels with their own features, the accuracy and the entropy of the predictions of a linear hierarchical classifier trained with the disentangled features are used as evaluation metrics, respectively; a minimal sketch of these two metrics follows.
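The sketch below computes both metrics from the linear classifier's logits at one level; the tensor shapes are the only assumptions:

```python
import torch
import torch.nn.functional as F

def level_accuracy(logits, labels):
    """Accuracy of the linear hierarchical classifier at one level."""
    return (logits.argmax(dim=1) == labels).float().mean().item()

def mean_prediction_entropy(logits):
    """Average entropy of the softmax predictions at one level; high values
    indicate that unseen objects confuse the classifier at this level."""
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    return entropy.mean().item()
```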
In this section, we test HDN on certain unseen leaf-level categories, i.e. bald and gray hair on CelebA, new kinds of tables and sofas on ShapeNet-C, and objects with other poses on ShapeNet-P (typical examples of these data are shown in Fig.11 in the Appendix).\nTable.3 shows the quantitative results. Two conclusions can be reached. 1) At levels where seen and unseen objects share similarities (i.e. the gender and smile levels on CelebA, the Sofa/Table level on ShapeNet-C, and Loveseat/Club chair/Work table/Billiards on ShapeNet-P), most objects are correctly classified. 2) At the leaf level, unseen objects carry unique unseen features, leading to an obvious increase in prediction entropy compared with that of seen objects. Besides, we find that unseen objects are more likely to be classified into similar seen categories at the leaf level. For instance, about 30% and 56% of bald faces are recognized as black and golden hair respectively, 50% and 50% of leather couches are predicted as loveseat and L-couch respectively, and 44% and 50% of the frontal sofas/tables are classified as the right 30◦ offset of frontal and the left 30◦ offset of frontal. The semantic translations between seen and unseen images in Fig.7 also verify these results. The semantics of non-leaf levels can be transferred as usual, but the unseen unique features cannot. Bald may be disentangled as golden or black hair depending on the skin color. The leather material is ignored on ShapeNet-C, since the model focuses more on shape than on material to distinguish seen objects during training. The translations to the frontal pose are also confused, as can be found in the ShapeNet-P cases. Through this study, we believe that disentangling the visual primitives of objects as learned knowledge is one of the most promising routes towards open-world recognition." }, { "heading": "5 CONCLUSIONS", "text": "We propose the hierarchical disentangle network (HDN), which exploits the natural hierarchical characteristics among categories to divide representation learning in a coarse-to-fine manner. Our model achieves promising disentanglement results. We also show applications of the disentangled features to semantic translation, retrieval and even unseen-object prediction. However, our work is just an early step towards the long-term goal of disentangled representation learning: limited by the capacity of generative models on large-scale and heavily entangled categories, the performance of HDN is not yet satisfactory on datasets such as ImageNet, which deserves further effort." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 NETWORK ARCHITECTURES AND TRAINING DETAILS", "text": "" }, { "heading": "A.1.1 NETWORK ARCHITECTURES", "text": "Following the backbone designs in (Huang et al., 2018) for the image-to-image translation task, let c7s1-k denote a 7 × 7 convolution block with k filters and stride 1, dk a 4 × 4 convolution block with k filters and stride 2, and Rk a residual block that contains two 3 × 3 convolution blocks with k filters. The last layers for the different hierarchy levels (the common features of the root level are encoded by the content encoder) in the style encoder include multiple c1s1-8 branches, i.e. 1 × 1 convolution blocks with 8 filters and stride 1. uk denotes a 2× nearest-neighbor upsampling layer followed by a 5 × 5 convolution block with k filters and stride 1, and GAP denotes a global average pooling layer. Instance Normalization (IN) and Adaptive Instance Normalization (AdaIN) are adopted in the content encoder branch and the decoder respectively. No normalization is used in the style encoder branch. 
We use ReLU activations in the encoder-decoder and Leaky ReLU with slope 0.2 in the discriminator and classifier. Multi-scale discriminators with 3 scales (a single scale for Fashion-MNIST due to its small resolution) are used to ensure both realistic details and global structure. The last layer of the decoder is equipped with a tanh activation to normalize the values of generated images to the range of [−1, 1]. In the following, we give the detailed architectures of each module on the different datasets.\nCelebA, CADCars & ShapeNet:\nContent encoder: c7s1-64, d128, d256, R256, R256, R256\nStyle encoder: c7s1-64, d128, d256, d256, d256, GAP, c1s1-8\nDecoder: R256, R256, R256, u128, u64, c7s1-3\nDiscriminator & Classifier: d64, d128, d256, d512\nFashion-MNIST:\nContent encoder: c7s1-32, d64, d128, R128, R128, R128\nStyle encoder: c7s1-32, d64, d128, R128, R128, R128, GAP, c1s1-8\nDecoder: R128, R128, R128, u64, u32, c7s1-1\nDiscriminator & Classifier: d32, d64, d128, d256" }, { "heading": "A.1.2 TRAINING HYPERPARAMETERS", "text": "We use the Adam optimizer with β1 = 0.5, β2 = 0.999, and an initial learning rate of 0.0001. We train HDN on all datasets for 300K iterations and halve the learning rate every 100K iterations. We set the batch size to 16. The loss weights α and β in Eqn.(7) are set to 10 and 1 respectively. Random mirroring is applied during training." }, { "heading": "A.2 HIERARCHICAL DATA CONSTRUCTION", "text": "In this section, Fig.8, Fig.9 and Fig.10 provide leaf-level examples for better understanding the commonalities and individualities among categories at different hierarchy levels. Take CelebA for example: the root category face has two children distinguished by the gender attribute. Each of these two super categories includes two finer-grained children, which are further divided by the smile expression. Finally, at the leaf level, each local branch is classified according to hair color, i.e. black, golden and brown hair. Within each leaf-level category, samples mainly contain intra-class variations caused by identity, age, pose, etc." }, { "heading": "A.3 DISENTANGLE RESULTS ON MORE DATASETS", "text": "In this section, we give disentangled results on CADCars, Fashion-MNIST and ShapeNet. Fig.12 and Fig.13 show the semantics of the disentangled features, which can effectively change the variations of the generated images. Similarly, Fig.14, Fig.15, Fig.16 and Fig.17 show that, as features of deeper levels are progressively involved, samples become better separated, which verifies that the disentanglement results are consistent with the semantic hierarchical structures." }, { "heading": "A.4 CROSS DATASET EVALUATIONS", "text": "Learning general representations that can be applied across datasets is one of the long-term goals for machines. In this section, we briefly evaluate our method on datasets that have similar categories but quite different styles. To be specific, we evaluate HDN on a challenging facial expression dataset called RAF (Li et al., 2017) and a car dataset named CompCars (Yang et al., 2015), using the models trained on CelebA and CADCars respectively. RAF is a large-scale facial expression database with around 30K highly diverse facial images downloaded from the Internet. It provides expression, race, age range and gender attribute annotations. Besides, the released images are compactly aligned and contain little information about hair colors. Therefore, the leaf-level categories are obtained according to the race or age range. 
As for CompCars, it contains 163 car makes with 1,716 car models. Besides, it also labels the car type (i.e. SUV, Sedan, Sports, etc.). Based on these annotations, we replace the pose categories at the leaf level with different car models, using only the profile-pose images. Typical examples of the two hierarchical datasets are shown in Fig.18.\naThe expression recognition precision is about 50% and that of compound expressions is about 30% using VGG nets, as reported in (Li et al., 2017). Here level-3 can be regarded as the compound of gender and expressions.\nTable4 gives the hierarchical prediction comparison between the seen (CelebA and CADCars) and unseen (RAF and CompCars) datasets. Firstly, though the data become challenging and exhibit a large domain shift, the learned models can still recognize some objects at the higher levels. Secondly, due to the loss of the hair regions, the entropy of RAF data at the leaf level is quite high. As for CompCars, the images adopted in this paper are all in profile pose; the accuracy of the compound pose-and-car-type prediction at the leaf level is about 30%, i.e. the poses of most cars (30/45) are correctly predicted. Apart from the quantitative results, Fig.19 shows the semantic translation results across datasets. We observe that the gender and smile information is correctly disentangled and transferred. For the translation between CADCars and CompCars, given the unseen type Hatchback (fifth row), the translated result of the SUV looks like nothing on earth. Besides, we also find it difficult to translate the images from the unseen dataset, as shown in the last case, mainly due to the domain shift for the generator." }, { "heading": "A.5 ABLATION STUDY", "text": "In this section, we justify several choices made in our method, including the usage of local 'brother' categories for classification learning and the image/feature reconstruction losses in Eqn.(5) and Eqn.(6). Specifically, we replace the local classification loss with the global one at each level to verify the effectiveness of local discriminability for disentangling hierarchical features. To study the role of the reconstruction losses, we simply drop them during training.\nFirstly, we compare the baselines with our full method in terms of the classification performance on the generated images controlled by the disentangled features. From Table5, we can see that HDN-full performs better overall. Replacing the local classification loss with the global one at the non-root levels heavily harms the goal of hierarchical disentanglement, as the global loss takes all categories at that level into consideration, which requires the discriminative information of both the parent and the current levels, while we aim to separate such information across levels, leading to conflicting objectives. This conflict can be seen in the translated cases shown in Fig.20. Without the local classification, changing the features of only one level results in ambiguous generations (the third row), which is also reflected in the quantitative evaluation of image quality in Table6. As for the reconstruction losses, they mainly stabilize the adversarial training. Without them, the quality of the generated images decreases to some extent. Besides, the feature reconstruction loss boosts the degree of feature disentanglement. As the 2D tSNE results in Fig.21 demonstrate, without this loss, the intra-class compactness and inter-class discriminability of samples in the embedding space become poor."
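The 2D tSNE embeddings reported in Fig.21 (and in Fig.4 and Fig.24) can be reproduced with standard tooling. A minimal sketch with scikit-learn and matplotlib, assuming features holds the aggregated per-sample codes F_l as an (N, D) array and labels the category indices at level l:

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(features, labels, title="tSNE of F_l"):
    """Project (N, D) features to 2D and color points by category."""
    embedding = TSNE(n_components=2, init="pca", random_state=0).fit_transform(features)
    plt.scatter(embedding[:, 0], embedding[:, 1], c=labels, s=4, cmap="tab10")
    plt.title(title)
    plt.show()
```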
}, { "heading": "A.6 QUALITY COMPARISON OF TRANSLATED IMAGES", "text": "In this section, we evaluate the quality of translated images controlled by hierarchical features on test set of CelebA . Since the disentangling paradigm of our method is similar to the image-to-image translation task, we further compare one of such kinds of cGAN-based works, i.e. StarGAN (Choi et al., 2018) which has been a popular framework for the multi-attribute translation task. Besides, we also compare a disentanglement work named ELEGANT3. We follow the hyper-parameters settings on CelebA in their publicly released codes. Due to resources-cost, the StyleGAN is only trained\n3This method is good at disentangling two factors in a model and can only change one factor of varition given a reference. The performance would become unstable for more than two factors, which has been verified\nfor 128*128 resolution for 300 epochs. We use the Inception Score (IS) (Salimans et al., 2016) and Frchet Inception Distance (FID) (Heusel et al., 2017) to measure semantics of images, and leverage the Learned Perceptual Image Patch Similarity (LPIPS) (Zhang et al., 2018) to measure the diversity of generated visual modes. In Table6, it is observed HDN achieves comparable and even slightly better semantics compared with the state-of-the-art image-to-image translation method, which demonstrates that HDN can not only extract primitives of objects for discriminative tasks but also be applied to such graphical applications. Besides, we observed that IS and LPIPS are sensitive to artifacts while FID is more stable, which can be found in qualitative results in the following.\nApart from these quantitative measurements, Fig.22 compares the translated results. Our method performs comparable with StarGAN and better than ELEGANT. ELEGANT is also for disentangling, the results of which for few attributes (no more than two as suggested by the authors) looks good but would become much poor when multiple factors need to be dealt with." }, { "heading": "A.7 FAILURE CASE ANALYSIS", "text": "In this section, we show results of HDN on the challenging ImageNet dataset and analyze the limitations of current method on too complex dataset. We collect images from 3 super categories including house cats, dogs and big cats of ImageNet. Each super category contains 4 fine-grained categories, which thus constructs in a three-level hierarchy (root is animal). Following (Huang et al., 2018), all images split by official train/test protocol are processed by a pre-trained faster-rcnn head detector, and then cropped and resized to 128*128 resolution as the inputs for HDN. Examples of such hierarchical data are shown in Fig.23. Network architecture and training hyper-parameters are same with those of CelebA, CADCars and ShapeNet introduced above.\nWe first quantitatively evaluate the disentangled features as we did in Sec.4.1. The classification accuracy of test set in level 2 and 3 is 0.9293 and 0.8760, and that of generated images is 0.9493 and 0.8160, respectively. Fig.24 shows the tSNE embedding results using different levels of Fl. From these results, we may infer that our method has successfully disentangled the desired semantic features in different levels which have progressively increased discriminability and good generalization ability for test and generated images. However, qualitative investigation reveals that it is not the truth of all. 
Fig.25 shows some semantic translation results of source objects using the disentangled hierarchical features of the targets. We observe that the disentangled features can only change the partial appearance (e.g. the textures or colors of objects) of the source images, leaving out other necessary and even key information needed to recognize the objects at that level from the perspective of humans (e.g. the shape of the lion rather than its fur color). Besides, when only one level of features is changed, the translated images look strange.\nThe reasons for these phenomena are mainly twofold. For one thing, there is too much information that can be leveraged for classification, since these ImageNet categories are themselves very complex and the differences among them span many aspects. Therefore, classifiers can easily find "shortcuts" and extract only some of the primitives constituting the objects at that level. Sometimes these "shortcuts" are even wrong, which is the well-known bias problem of ImageNet classification models (e.g. images containing a black man being predicted as basketball) (Stock & Cissé, 2017; Geirhos et al., 2019). In the qualitative results of HDN on ImageNet, we also find that the semantics of the disentangled features are not sufficient to fully interpret the objects at that level and are sometimes even hard for humans to understand. This tells us that deep features can sometimes perform well in terms of certain measurements but may not work as humans expect, while our HDN can diagnose this kind of features, as done in Fig.25. For another, the poor image quality is partially owing to the capacity of GANs. Generating high-quality images on ImageNet is a well-known tough problem that has not yet been well addressed by GANs, due to the highly complex data distribution. In our framework, in order to disentangle semantics, it is necessary to synthesize some nonexistent categories combining semantics from different categories, which makes the distribution fitting even harder for the discriminator. We believe that HDN could be improved on the ImageNet dataset for disentangling a large number of categories organized in a hierarchical structure, given a powerful enough generative framework." } ]
2019
null
SP:cfbe7ae1f40e2c23a6161d04e3229bc860c79042
[ "of the paper: Learning from label proportions (LLP) is an area in machine learning that tries to learn a classifier that predicts labels of instances, with only bag-level aggregated labels given at the training stage. Instead of proposing a loss specialized for this problem, this paper proposes a regularization term for the LLP problem. The core contribution of this paper is to use the idea of consistency regularization, which has become very popular in semi-supervised learning in the recent years. The regularization term takes a perturbation of an input sample, and then force the output of the original and perturbed sample to be similar by minimizing a KL divergence of the two output distributions. Experiments show the performance of the proposed method under two bag generation settings. The paper also finds empirically that the hard L_1 has high correlation with the test error rate, which makes it an ideal candidate when the user splits the validation data from training data (meaning there are no ground truth labels for each instances).", "This paper proposes using Consistency Regularization and a new bag generation technique to better learn classification decision boundaries in a Label Proportion setting. The consistency regularization works to make sure that examples in the local neighbourhood have similar outputs. The authors further use K-means clustering to create a new bagging scenario they use to mimic real-world LLP settings. " ]
The problem of learning from label proportions (LLP) involves training classifiers with weak labels on bags of instances, rather than strong labels on individual instances. The weak labels only contain the label proportion of each bag. The LLP problem is important for many practical applications that only allow label proportions to be collected because of data privacy or annotation cost, and it has recently received much research attention. Most existing works focus on extending supervised learning models to solve the LLP problem, but the weakly supervised nature of the task makes it hard to further improve LLP performance from a supervised angle. In this paper, we take a different angle, from semi-supervised learning. In particular, we propose a novel model inspired by consistency regularization, a popular concept in semi-supervised learning that encourages the model to produce a decision boundary that better describes the data manifold. With the introduction of consistency regularization, we further extend our study to non-uniform bag-generation and validation-based parameter-selection procedures that better match practical needs. Experiments not only justify that LLP with consistency regularization achieves superior performance, but also demonstrate the practical usability of the proposed procedures.
[]
[ { "authors": [ "Ehsan Mohammady Ardehaly", "Aron Culotta" ], "title": "Co-training for demographic classification using deep learning from label proportions", "venue": "IEEE International Conference on Data Mining Workshops (ICDMW),", "year": 2017 }, { "authors": [ "David Berthelot", "Nicholas Carlini", "Ian Goodfellow", "Nicolas Papernot", "Avital Oliver", "Colin Raffel" ], "title": "Mixmatch: A holistic approach to semi-supervised learning", "venue": null, "year": 1905 }, { "authors": [ "Gerda Bortsova", "Florian Dubost", "Silas Ørting", "Ioannis Katramados", "Laurens Hogeweg", "Laura Thomsen", "Mathilde Wille", "Marleen de Bruijne" ], "title": "Deep learning from label proportions for emphysema quantification", "venue": "In International Conference on Medical Image Computing and ComputerAssisted Intervention,", "year": 2018 }, { "authors": [ "Olivier Chapelle", "Bernhard Scholkopf", "Alexander Zien" ], "title": "Semi-supervised learning (chapelle, o. et al., eds.; 2006)[book reviews", "venue": "IEEE Transactions on Neural Networks,", "year": 2009 }, { "authors": [ "Bee-Chung Chen", "Lei Chen", "Raghu Ramakrishnan", "David R Musicant" ], "title": "Learning from aggregate views", "venue": "In 22nd International Conference on Data Engineering", "year": 2006 }, { "authors": [ "Shuo Chen", "Bin Liu", "Mingjie Qian", "Changshui Zhang" ], "title": "Kernel k-means based framework for aggregate outputs classification", "venue": "IEEE International Conference on Data Mining Workshops,", "year": 2009 }, { "authors": [ "Zhensong Chen", "Zhiquan Qi", "Bo Wang", "Limeng Cui", "Fan Meng", "Yong Shi" ], "title": "Learning with label proportions based on nonparallel support vector machines", "venue": "Knowledge-Based Systems,", "year": 2017 }, { "authors": [ "Gabriel Dulac-Arnold", "Neil Zeghidour", "Marco Cuturi", "Lucas Beyer", "Jean-Philippe Vert" ], "title": "Deep multi-class learning from label proportions", "venue": "arXiv preprint arXiv:1905.12909,", "year": 2019 }, { "authors": [ "Kai Fan", "Hongyi Zhang", "Songbai Yan", "Liwei Wang", "Wensheng Zhang", "Jufu Feng" ], "title": "Learning a generative classifier from label proportions", "venue": null, "year": 2014 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "IEEE Conference on Computer Vision and Pattern Recognition", "year": 2016 }, { "authors": [ "Jerónimo Hernández-González", "Iñaki Inza", "Jose A Lozano" ], "title": "Learning bayesian network classifiers from label proportions", "venue": "Pattern Recognition,", "year": 2013 }, { "authors": [ "Jerónimo Hernández-González", "Inaki Inza", "Lorena Crisol-Ortı́z", "Marı́a A Guembe", "Marı́a J Iñarra", "Jose A Lozano" ], "title": "Fitting the data from embryo implantation prediction: Learning from label proportions", "venue": "Statistical methods in medical research,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Durk P Kingma", "Shakir Mohamed", "Danilo Jimenez Rezende", "Max Welling" ], "title": "Semi-supervised learning with deep generative models", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Hendrik Kuck", "Nando de 
Freitas" ], "title": "Learning about individuals from group statistics", "venue": "arXiv preprint arXiv:1207.1393,", "year": 2012 }, { "authors": [ "Kuan-Ting Lai", "Felix X Yu", "Ming-Syan Chen", "Shih-Fu Chang" ], "title": "Video event detection by inferring temporal instance labels", "venue": "In Proceedings of the ieee conference on computer vision and pattern recognition,", "year": 2014 }, { "authors": [ "Samuli Laine", "Timo Aila" ], "title": "Temporal ensembling for semi-supervised learning", "venue": "arXiv preprint arXiv:1610.02242,", "year": 2016 }, { "authors": [ "Dong-Hyun Lee" ], "title": "Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks", "venue": "In Workshop on Challenges in Representation Learning, ICML,", "year": 2013 }, { "authors": [ "Fan Li", "Graham Taylor" ], "title": "Alter-cnn: An approach to learning from label proportions with application to ice-water classification", "venue": null, "year": 2015 }, { "authors": [ "Takeru Miyato", "Shin-ichi Maeda", "Shin Ishii", "Masanori Koyama" ], "title": "Virtual adversarial training: a regularization method for supervised and semi-supervised learning", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2018 }, { "authors": [ "David R Musicant", "Janara M Christensen", "Jamie F Olson" ], "title": "Supervised learning by training on aggregate outputs", "venue": "In Seventh IEEE International Conference on Data Mining (ICDM", "year": 2007 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": "Advances in neural information processing systems,", "year": 2011 }, { "authors": [ "Avital Oliver", "Augustus Odena", "Colin A Raffel", "Ekin Dogus Cubuk", "Ian Goodfellow" ], "title": "Realistic evaluation of deep semi-supervised learning algorithms", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Zhiquan Qi", "Bo Wang", "Fan Meng", "Lingfeng Niu" ], "title": "Learning with label proportions via npsvm", "venue": "IEEE transactions on cybernetics,", "year": 2016 }, { "authors": [ "Novi Quadrianto", "Alex J Smola", "Tiberio S Caetano", "Quoc V Le" ], "title": "Estimating labels from label proportions", "venue": "Journal of Machine Learning Research,", "year": 2009 }, { "authors": [ "Stefan Rueping" ], "title": "Svm classifier estimation from group probabilities", "venue": "In Proceedings of the 27th international conference on machine learning", "year": 2010 }, { "authors": [ "Marco Stolpe", "Katharina Morik" ], "title": "Learning from label proportions by optimizing cluster model selection", "venue": "In Joint European Conference on Machine Learning and Knowledge Discovery in Databases,", "year": 2011 }, { "authors": [ "Tao Sun", "Dan Sheldon", "Brendan OConnor" ], "title": "A probabilistic approach for learning with label proportions applied to the us presidential election", "venue": "IEEE International Conference on Data Mining (ICDM),", "year": 2017 }, { "authors": [ "Antti Tarvainen", "Harri Valpola" ], "title": "Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Vikas Verma", "Alex Lamb", "Juho Kannala", "Yoshua Bengio", "David Lopez-Paz" ], "title": "Interpolation consistency training for 
semi-supervised learning", "venue": "arXiv preprint arXiv:1903.03825,", "year": 2019 }, { "authors": [ "Bo Wang", "Zhensong Chen", "Zhiquan Qi" ], "title": "Linear twin svm for learning from label proportions", "venue": "In 2015 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT),", "year": 2015 }, { "authors": [ "Felix X Yu", "Krzysztof Choromanski", "Sanjiv Kumar", "Tony Jebara", "Shih-Fu Chang" ], "title": "On learning from label proportions", "venue": "arXiv preprint arXiv:1402.5902,", "year": 2014 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "Procedings of the British Machine Vision Conference", "year": 2016 }, { "authors": [ "Komodakis" ], "title": "2016).We use the Adam optimizer (Kingma & Ba, 2014) with a learning rate", "venue": null, "year": 2014 } ]
[ { "heading": "1 INTRODUCTION", "text": "In traditional supervised learning, a classifier is trained on a dataset where each instance is associated with a class label. However, label annotation can be expensive or difficult to obtain for some applications. Take the embryo selection as an example (Hernández-González et al., 2018). To increase the pregnancy rate, clinicians would transfer multiple embryos to a mother at the same time. However, clinicians are unable to know the outcome of a particular embryo due to limitations of current medical techniques. The only thing we know is the proportion of embryos that implant successfully. To increase the success rate of embryo implantation, clinicians aim to select high-quality embryos through the aggregated results. In this case, only label proportions about groups of instances are provided to train the classifier, a problem setting known as learning from label proportions (LLP).\nIn LLP, each group of instances is called a bag, which is associated with a proportion label of different classes. A classifier is then trained on several bags and their associated proportion labels in order to predict the class of each unseen instance. Recently, LLP has attracted much attention among researchers because its problem setting occurs in many real-life scenarios. For example, the census data and medical databases are all provided in the form of label proportion data due to privacy issues (Patrini et al., 2014; Hernández-González et al., 2018). Other LLP applications include fraud detection (Rueping, 2010), object recognition (Kuck & de Freitas, 2012), video event detection (Lai et al., 2014), and ice-water classification (Li & Taylor, 2015).\nThe challenge in LLP is to train models without direct instance-level label supervision. To overcome this issue, prior work seeks to estimate either the individual label (Yu et al., 2013; Dulac-Arnold et al., 2019) or the mean of each class by the label proportions (Quadrianto et al., 2009; Patrini et al., 2014). However, the methodology behind developing these models do not portray LLP situations that occur in real life. First, these models can be improved by considering methods that can better leverage unlabeled data. Second, these models assume that bags of data are randomly generated, which is not the case for many applications. For example, the data of population census are collected on region, age, or occupation with varying group sizes. Third, training these models requires a\nvalidation set with labeled data. It would be more practical if the process of model selection relies only on the label proportions.\nThis paper aims to resolve the previous problems. Our main contributions are listed as follows:\n• We first apply a semi-supervised learning technique, consistency regularization, to the multi-class LLP problem. Consistency regularization considers an auxiliary loss term to enforce network predictions to be consistent when its input is perturbed. By exploiting the unlabeled instances, our method captures the latent structure of data and obtains the SOTA performance on three benchmark datasets.\n• We develop a new bag generation algorithm – the K-means bag generation, where training data are grouped by attribute similarity. Using this setup can help train models that are more applicable to actual LLP scenarios.\n• We show that it is possible to select models with a validation set consisting of only bags and associated label proportions. 
The experiments demonstrate a correlation between bag-level validation error and instance-level test error. This potentially reduces the need for a validation set with instance-level labels." }, { "heading": "2 PRELIMINARY", "text": "" }, { "heading": "2.1 LEARNING FROM LABEL PROPORTIONS", "text": "We consider the multi-class classification problem in the LLP setting in this paper. Let xi ∈ R^D be the feature vector of the i-th example and yi ∈ {1, . . . , L} be its class label, where L is the number of different classes. We define e(j) to be the standard basis vector [0, . . . , 1, . . . , 0] with 1 at the j-th position, and ∆L = {p ∈ R^L_+ : ∑_{i=1}^L pi = 1} to be the probability simplex. In the setting of LLP, each individual label yi is hidden from the training data. Instead, the training data are aggregated by a bag generation procedure. We are given M bags B1, . . . , BM, where each bag Bm contains a set Xm of instances and a proportion label pm, defined by

pm = (1/|Xm|) ∑_{i: xi ∈ Xm} e(yi), with ⋃_{m=1}^M Xm = {x1, . . . , xN}.

We do not require the subsets to be disjoint, and each bag may have a different size. The task of LLP is to learn an instance-level classifier fθ : R^D → ∆L to predict the correct label y = arg max_i fθ(x)_i for a new instance x. Figure 1 illustrates the setting of learning from label proportions in multi-class classification (Dulac-Arnold et al., 2019)." }, { "heading": "2.2 PROPORTION LOSS", "text": "The feasibility of the binary LLP setting has been theoretically justified by Yu et al. (2014). Specifically, Yu et al. (2014) propose the framework of Empirical Proportion Risk Minimization (EPRM), proving that the LLP problem is PAC-learnable under the assumption that bags are i.i.d. sampled from an unknown probability distribution. The EPRM framework provides a generalization bound on the expected proportion error and guarantees learning a probably approximately correct proportion predictor when the number of bags is large enough. Furthermore, the authors prove that the instance label error can be bounded by the bag proportion error. That is, a decent bag proportion predictor guarantees a decent instance label predictor.

Based on this theoretical analysis, a vast number of LLP approaches learn an instance-level classifier by directly minimizing the proportion loss without acquiring the individual labels. To be more precise, given a bag B = (X, p), an instance-level classifier fθ, and a divergence function dprop : R^L × R^L → R, the proportion loss penalizes the difference between the real proportion label p and the estimated proportion label p̂ = (1/|X|) ∑_{x∈X} fθ(x), which is an average of the instance predictions within a bag. Thus, the proportion loss Lprop can be defined as follows:

Lprop(θ) = dprop(p, p̂).

The commonly used divergence functions in prior work are the L1 and L2 distances (Musicant et al., 2007; Yu et al., 2013). Ardehaly & Culotta (2017) and Dulac-Arnold et al. (2019), on the other hand, consider the cross-entropy function for the multi-class LLP problem.
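To make the preceding definitions concrete, the following NumPy sketch (not the authors' code; the array shapes and the L1 choice of dprop are our own assumptions) computes a bag's proportion label p and the proportion loss from per-instance predictions:

    import numpy as np

    def proportion_label(y_bag, num_classes):
        # p: average of the one-hot basis vectors e(y_i) over the bag
        onehot = np.eye(num_classes)[y_bag]   # shape (|X|, L)
        return onehot.mean(axis=0)            # lies in the simplex Delta_L

    def proportion_loss(p, probs, d_prop="l1"):
        # probs: per-instance outputs f_theta(x), shape (|X|, L)
        p_hat = probs.mean(axis=0)            # estimated proportion label
        if d_prop == "l1":
            return np.abs(p - p_hat).sum()
        return ((p - p_hat) ** 2).sum()       # L2 alternative

During LLP training, of course, the instance labels y_bag are hidden; only the resulting proportion label p is observed.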
" }, { "heading": "2.3 CONSISTENCY REGULARIZATION", "text": "Since collecting labeled data is expensive and time-consuming, semi-supervised learning approaches aim to leverage a large amount of unlabeled data to mitigate the need for labeled data. There are many semi-supervised learning methods, such as pseudo-labeling (Lee, 2013), generative approaches (Kingma et al., 2014), and consistency-based methods (Laine & Aila, 2016; Miyato et al., 2018; Tarvainen & Valpola, 2017). Consistency-based approaches encourage the network to produce consistent output probabilities between unlabeled data and their perturbed versions. These methods rely on the smoothness assumption (Chapelle et al., 2009): if two data points xi and xj are close, then so should be the corresponding output distributions yi and yj. Consistency-based approaches can thereby encourage the decision boundary to traverse the low-density region. More precisely, given a perturbed input x̂ derived from the input x, consistency regularization penalizes the discrepancy between the model predictions fθ(x) and fθ(x̂) via a distance function dcons : R^L × R^L → R. The consistency loss can be written as follows:

Lcons(θ) = dcons(fθ(x), fθ(x̂)).

Modern consistency-based methods (Laine & Aila, 2016; Tarvainen & Valpola, 2017; Miyato et al., 2018; Verma et al., 2019; Berthelot et al., 2019) differ in how perturbed examples are generated for the unlabeled data. Laine & Aila (2016) introduce the Π-Model approach, which uses additive Gaussian noise for the perturbed examples and chooses the L2 error as the distance function. However, a drawback of the Π-Model is that the consistency target fθ(x̂) obtained from the stochastic network is unstable, since the network changes rapidly during training. To address this problem, Temporal Ensembling (Laine & Aila, 2016) takes the exponential moving average of the network predictions as the consistency target. Mean Teacher (Tarvainen & Valpola, 2017), on the other hand, proposes averaging the model parameters instead of the network predictions. Overall, the Mean Teacher approach significantly improves the quality of the consistency targets and the empirical results on semi-supervised benchmarks.

Instead of applying stochastic perturbations to the inputs, Virtual Adversarial Training, or VAT (Miyato et al., 2018), computes the perturbed examples as x̂ = x + radv, where

radv = arg max_{r: ||r||2 ≤ ε} DKL(fθ(x) ‖ fθ(x + r)). (1)

That is, the VAT approach attempts to generate the perturbation that most likely causes the model to misclassify the input, i.e. a perturbation in an adversarial direction. Finally, the VAT approach adopts the Kullback-Leibler (KL) divergence to compute the consistency loss. In comparison to stochastic perturbations, the VAT approach demonstrates greater effectiveness in the semi-supervised learning problem.
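As a rough illustration of Equation 1 (a sketch in the spirit of Miyato et al. (2018), not an exact reproduction; the values of xi and eps, and the use of a single power-iteration step, are assumed defaults), the adversarial direction can be approximated in PyTorch as follows:

    import torch
    import torch.nn.functional as F

    def vat_perturbation(model, x, xi=1e-6, eps=1.0):
        # f_theta(x), held fixed as the reference distribution
        with torch.no_grad():
            p = F.softmax(model(x), dim=1)
        d = torch.randn_like(x)                            # random initial direction
        d = xi * F.normalize(d.flatten(1), dim=1).view_as(x)
        d.requires_grad_(True)
        q = F.log_softmax(model(x + d), dim=1)
        kl = F.kl_div(q, p, reduction="batchmean")         # D_KL(f(x) || f(x + d))
        grad = torch.autograd.grad(kl, d)[0]               # direction of steepest KL increase
        return eps * F.normalize(grad.flatten(1), dim=1).view_as(x)

The returned perturbation satisfies ||r||2 ≤ eps by construction, which is the standard practical surrogate for the constrained arg max in Equation 1.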
" }, { "heading": "3 LLP WITH CONSISTENCY REGULARIZATION", "text": "With regards to weak supervision, the LLP scenario is similar to the semi-supervised learning problem. In the semi-supervised learning setting, only a small portion of training examples is labeled. In the LLP scenario, on the other hand, we are given the weak supervision of label proportions instead of strong labels on individual instances. Both settings are challenging since most training examples do not have individual labels. To address this challenge, semi-supervised approaches seek to exploit the unlabeled examples to further capture the latent structure of the data.

Motivated by these semi-supervised approaches, we bring the idea of leveraging unlabeled data into the LLP problem. We make the same smoothness assumption and introduce a new approach incorporating consistency regularization into LLP. In particular, we consider the typical cross-entropy function between real label proportions and estimated label proportions. Given a bag B = (X, p), we define the proportion loss Lprop as follows:

Lprop(θ) = −∑_{i=1}^L pi log( (1/|X|) ∑_{x∈X} fθ(x)_i ).

Interestingly, the proportion loss Lprop boils down to the standard cross-entropy loss for fully-supervised learning when the bag size is one. To learn a decision boundary that better reflects the data manifold, we add an auxiliary consistency loss that leverages the unlabeled data. More formally, we compute the average consistency loss across all instances within the bag. Given a bag B = (X, p), the consistency loss Lcons can be written as follows:

Lcons(θ) = (1/|X|) ∑_{x∈X} dcons(fθ(x), fθ(x̂)),

where dcons is a distance function and x̂ is a perturbed version of x. We can use any consistency-based approach to generate the perturbed examples and compute the consistency loss. Finally, we mix the two loss functions Lprop and Lcons with a hyperparameter α > 0, yielding the combined loss L for LLP: L(θ) = Lprop(θ) + αLcons(θ), where α controls the balance between the bag-level estimation of proportion labels and instance-level consistency regularization.

To understand the intuition behind combining consistency regularization with LLP, we follow the Π-Model approach (Laine & Aila, 2016), adopting stochastic Gaussian noise as the perturbation and L2 as the distance function dcons, in a toy example. Figure 2 illustrates how our method is able to produce a decision boundary that passes through the low-density region and captures the data manifold. In contrast, the vanilla approach, which simply optimizes the proportion loss, easily gets stuck at a poor solution due to the lack of label information. This toy example shows the advantage of applying consistency regularization to LLP.

According to Miyato et al. (2018), VAT is more effective and stable than the Π-Model due to the way it generates the perturbed examples. For each data example, the Π-Model approach stochastically perturbs inputs and trains the model to assign the same class distributions to all neighbors. In contrast, the VAT approach focuses on neighbors that are sensitive to the model. That is, VAT aims to generate a perturbed input whose prediction differs most from the model prediction on its original input. Learning with the VAT approach tends to be more effective in improving model generalization. Therefore, we adopt the VAT approach to compute the consistency loss for each instance in the bag. Additionally, to prevent the model from getting stuck at a local optimum in the early stage, we use the exponential ramp-up scheduling function (Laine & Aila, 2016) to increase the consistency weight gradually to its maximum value α. The full algorithm of LLP with VAT (LLP-VAT) is described in Algorithm 1.

Algorithm 1 LLP-VAT algorithm

Require: D = {(Xm, pm)}_{m=1}^M : collection of bags
Require: fθ(x): instance-level classifier with trainable parameters θ
Require: g(x; θ) = x + radv: VAT augmentation function according to Equation 1
Require: w(t): ramp-up function for increasing the weight of consistency regularization
Require: T : total number of iterations

for t = 1, . . . , T do
  for each bag (X, p) ∈ D do
    p̂ ← (1/|X|) ∑_{x∈X} fθ(x)  ▷ Estimated proportion label
    Lprop = −∑_{i=1}^L pi log p̂i  ▷ Proportion loss
    Lcons = (1/|X|) ∑_{x∈X} DKL(fθ(x) ‖ fθ(g(x; θ)))  ▷ Consistency loss
    L = Lprop + w(t) · Lcons  ▷ Total loss
    update θ by gradient ∇θL  ▷ e.g. SGD, Adam
  end for
end for
return θ
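Continuing the PyTorch sketch from Section 2.3 (which provides vat_perturbation and the imports), the per-bag loss inside Algorithm 1's inner loop could look as follows; the 1e-8 stabilizer and the exact interfaces are our own assumptions:

    import torch
    import torch.nn.functional as F

    def llp_vat_loss(model, X, p, w_t):
        # proportion loss: cross-entropy between p and the bag-averaged prediction
        probs = F.softmax(model(X), dim=1)          # f_theta(x) for each x in the bag
        p_hat = probs.mean(dim=0)                   # estimated proportion label
        l_prop = -(p * torch.log(p_hat + 1e-8)).sum()
        # consistency loss: KL between predictions on x and on g(x; theta) = x + r_adv
        r_adv = vat_perturbation(model, X)
        q = F.log_softmax(model(X + r_adv), dim=1)
        l_cons = F.kl_div(q, probs.detach(), reduction="batchmean")
        return l_prop + w_t * l_cons                # w_t: ramped consistency weight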
" }, { "heading": "4 EXPERIMENTS", "text": "We evaluate our LLP-VAT on three benchmarks, including SVHN, CIFAR10, and CIFAR100. For model selection, we choose hyperparameters using a validation set without individual labels. Lastly, we report the test instance accuracy averaged over the last 10 epochs. The full experiment details are provided in the supplementary material." }, { "heading": "4.1 UNIFORM BAG GENERATION", "text": "For convenience, most LLP works validate their proposed methods with uniform bag generation, where the training data are randomly partitioned into bags of the same size. We evaluate our method using this bag generation procedure with bag sizes n ∈ {16, 32, 64, 128, 256}. We drop the last incomplete bag if the number of training examples is indivisible by the bag size. Table 1 shows the experimental results for the LLP scenario with uniform bag generation.

In comparison to the vanilla approach, our LLP-VAT significantly improves the performance on CIFAR10 and CIFAR100. This indicates that applying consistency regularization to LLP does help learn a better classifier. As for SVHN, since the test accuracy is close to the fully-supervised performance when the bag size is small, there is no clear difference among the three methods. In addition, the results also show that the performance of ROT is unstable, which leads us to conclude that unhelpful pseudo-labels can easily result in a worse classifier. Conversely, our LLP-VAT is more stable and obtains better test accuracy in most cases." }, { "heading": "4.2 K-MEANS BAG GENERATION", "text": "In this section, we further investigate our LLP-VAT in a more practical scenario. We observe that uniform bag generation barely fits real-world LLP situations for two reasons. First, real-life data are usually grouped by attribute similarity rather than uniformly sampled. Second, bags may have different sizes, i.e., the distribution of bag sizes is diverse. Consider the US presidential election results (Sun et al., 2017), where the statistics of voting results are collected by geographical regions (e.g., states), and each state has a varying number of voters. Therefore, we introduce a new bag generation procedure – the K-means bag generation – where we cluster examples into bags by the K-means algorithm. Although the bags generated by the K-means bag generation are dependent on each other, violating the i.i.d. assumption, this setting is both challenging and worth studying.

Since we perform experiments on image datasets, it is meaningless to cluster data examples based on raw RGB pixels. We first adopt the principal component analysis algorithm, an unsupervised dimension reduction technique, to project the data into a low-dimensional representation space. This space may capture more important patterns in an image. Then we group the low-dimensional representations of the images following the K-means bag generation procedure. We conduct experiments with the number of clusters K ∈ {3120, 1560, 780, 390, 195} on CIFAR10 and CIFAR100, and K ∈ {4576, 2288, 1144, 572, 286} on SVHN. These numbers are selected to match the number of proportion labels in the uniform bag generation procedure. The distribution of bag sizes generated by the K-means procedure is shown in Figure 3.
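A minimal scikit-learn sketch of this procedure follows (the PCA dimensionality and the random seed are our own assumptions; the handling of oversized bags anticipates the discussion just below):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA

    def kmeans_bag_generation(X, y, K, num_classes, n_components=32, max_bag=256, seed=0):
        # project flattened images to a low-dimensional space, then cluster into bags
        Z = PCA(n_components=n_components).fit_transform(X.reshape(len(X), -1))
        assign = KMeans(n_clusters=K, random_state=seed).fit_predict(Z)
        rng = np.random.default_rng(seed)
        bags = []
        for k in range(K):
            idx = np.flatnonzero(assign == k)
            if len(idx) == 0:
                continue
            # the proportion label is computed on the full cluster ...
            p = np.bincount(y[idx], minlength=num_classes) / len(idx)
            # ... and kept unchanged even if an oversized bag is subsampled
            if len(idx) > max_bag:
                idx = rng.choice(idx, size=max_bag, replace=False)
            bags.append((idx, p))
        return bags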
For the experiments, we do not compare our proposed method to the ROT loss, which needs to estimate individual labels iteratively for each bag. The procedure of the ROT algorithm is time-consuming and cannot be accelerated if bags are of varying sizes. Besides, for the K-means bag generation, there may be some large bags when the value of K is small. Because of limited computational resources, we subsample each bag whose size is larger than a threshold of 256. In particular, when a large bag is sampled, we randomly sample 256 instances and assign the original label proportions to the reduced bag.

The experimental results of the K-means bag generation are shown in Table 2 and Table 3. Although this scenario violates the i.i.d. assumption, the results demonstrate that it is feasible to learn an instance-level classifier by simply minimizing the proportion loss. Also, our LLP-VAT brings significant benefits in the K-means bag generation scenario on SVHN and CIFAR10, while showing comparable performance on CIFAR100. Interestingly, the performance of a model is not well correlated with the value of K. One possible reason is that we might drop informative bags as we randomly split bags into validation and training." }, { "heading": "4.3 VALIDATION METRICS", "text": "Many modern machine learning models require a wide range of hyperparameter selections for the architecture, optimizer, and regularization. However, in the realistic LLP scenario, we have no access to labeled instances during training. It is therefore crucial to choose appropriate hyperparameters based on a bag-level validation error that is computed with only proportion labels. To evaluate the performance at the bag level, we consider four validation metrics: soft L1 error, hard L1 error, soft KL divergence, and hard KL divergence. Their definitions are given as follows. First, we define the output probabilities of an instance as the soft prediction and its one-hot encoding as the hard prediction. For each bag, we then compute the estimated label proportions by averaging these soft or hard predictions. Finally, we use the L1 error or KL divergence to measure the bag-level prediction error.

To investigate the relationship between the instance-level test error and the bag-level validation error, we compute the Pearson correlation coefficient between them on models trained for 400 epochs. The results are shown in Table 4. Surprisingly, we find that the hard L1 error has a strong positive correlation with the test error rate on all benchmarks. This implies that it is feasible to select hyperparameters with only label proportions in realistic LLP scenarios. Interestingly, our finding is consistent with Yu et al. (2013). Although both their work and ours adopt the hard L1 error for model selection, we focus on the multi-class LLP scenario instead of the binary classification problem they considered. Therefore, we suggest that future multi-class LLP works adopt the hard L1 validation metric for model selection (we do not, however, suggest using it for early stopping, since the correlation is computed after the model converges)." }, { "heading": "5 RELATED WORK", "text": "Kuck & de Freitas (2012) first introduce the LLP scenario and formulate a probabilistic model with an MCMC algorithm to generate consistent label proportions. Several follow-up works (Chen et al., 2006; Musicant et al., 2007) extend the LLP setting to a variety of standard supervised learning algorithms. Without directly inferring instance labels, Quadrianto et al. (2009) propose the Mean Map algorithm with exponential-family parametric models. The algorithm uses empirical mean operators of each bag to solve a convex optimization problem.
However, the success of the Mean Map algorithm is based on a strong assumption that the class-conditional distribution of data is independent of bags. To loosen this restriction, Patrini et al. (2014) propose a Laplacian Mean Map algorithm imposing an additional Laplacian regularization. Nevertheless, these Mean Map algorithms suffer from a fundamental drawback: they require the classifier to be a linear model.

Several works tackle the LLP problem from a Bayesian perspective. For example, Fan et al. (2014) propose an RBM-based generative model to estimate the group-conditional likelihood of data. Hernández-González et al. (2013), on the other hand, develop a Bayesian classifier with an EM algorithm. Recently, Sun et al. (2017) propose a graphical model using counting potentials to predict instance labels for the US presidential election. Furthermore, other works (Chen et al., 2009; Stolpe & Morik, 2011) adopt a k-means approach to cluster training data by label proportions. While some works (Fan et al., 2014; Sun et al., 2017) claim that they are suitable for large-scale settings, both Bayesian methods and clustering-based algorithms are rather inefficient and computationally expensive when applied to large image datasets.

Another line of work adopts a large-margin framework for the LLP problem. Stolpe & Morik (2011) propose a variant of support vector regression using the inverse calibration method to estimate the class-conditional probability for bags. On the other hand, Yu et al. (2013) propose a procedure that alternates between assigning a label to each instance, also known as pseudo-labeling in the literature, and fitting an SVM classifier. Motivated by this idea, a number of works (Wang et al., 2015; Qi et al., 2016; Chen et al., 2017) infer individual labels and update model parameters alternately. One major drawback of SVM-based approaches is that they are tailored to binary classification; they cannot extend to the multi-class classification setting efficiently.

As deep learning has garnered huge success in a number of areas, such as natural language processing, speech recognition, and computer vision, many works leverage the power of neural networks for the LLP problem. Ardehaly & Culotta (2017) are the first to apply deep models to the multi-class LLP setting. Also, Bortsova et al. (2018) propose a deep LLP method learning the extent of emphysema from the proportions of diseased tissue. Concurrently with our work, Dulac-Arnold et al. (2019) also consider the multi-class LLP setting with a bag-level cross-entropy loss. They introduce a ROT loss that combines two goals: jointly maximizing the probability of instance predictions and minimizing the bag proportion loss." }, { "heading": "6 CONCLUSION", "text": "In this paper, we first apply a semi-supervised learning technique, consistency regularization, to the multi-class LLP problem. Our proposed approach leverages the unlabeled data to learn a decision boundary that better depicts the data manifold. The empirical results validate that our approach obtains better performance than existing LLP works. Furthermore, we introduce a non-uniform bag scenario – the K-means bag generation – where training instances are clustered by attribute relationships.
This setting simulates more practical LLP situations than the uniform bag generation setting that is often used in previous works. Lastly, we introduce a bag-level validation metric, the hard L1 error, for model selection with only label proportions. We empirically show that the bag-level hard L1 error has a strong correlation with the test classification error. For real-world applicability, we suggest that multi-class LLP methods relying on hyper-parameter tuning evaluate their methodology based on the bag-level hard L1 error. One interesting future direction is to combine our approach with Mixup. In a nutshell, we hope that future LLP work can further explore the ideas presented in this paper." }, { "heading": "A EXPERIMENT DETAILS", "text": "A.1 DATASETS

To evaluate the effectiveness of our proposed method, we conduct experiments on three benchmark datasets: SVHN (Netzer et al., 2011), CIFAR10, and CIFAR100 (Krizhevsky & Hinton, 2009). The SVHN dataset consists of 32x32 RGB digit images with 73,257 examples for training, 26,032 examples for testing, and 531,131 extra training examples that are not used in our experiments. The CIFAR10 and CIFAR100 datasets both consist of 50,000 training examples and 10,000 test examples. Each example is a 32x32 colored natural image, drawn from 10 and 100 classes respectively.

A.2 EXPERIMENT SETUP

Implementation details. For all experiments in this section, we adopt the Wide Residual Network with depth 28 and width 2 (WRN-28-2), following the standard specification in the paper (Zagoruyko & Komodakis, 2016). We use the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 0.0003. Additionally, we train models for a maximum of 400 epochs with a scheduler that scales the learning rate by 0.2 once the model finishes 320 epochs. To simulate the LLP setting, we split the training data by the two bag generation algorithms described in Sections 4.1 and 4.2. Once the bag generation is completed, we compute the proportion labels by averaging the class labels over each bag. To avoid over-fitting, we follow the common practice of data augmentation (He et al., 2016; Lin et al., 2013): padding an image by 4 pixels on each side, taking a random 32x32 crop, and randomly flipping the image horizontally with probability 0.5 for all benchmarks.

Hyperparameters. We compare our method, LLP-VAT, to ROT (Dulac-Arnold et al., 2019) and the vanilla approach, which simply minimizes the proportion loss. For ROT, we conduct experiments with a hyperparameter of α ∈ {0.1, 0.4, 0.7, 0.9} to compute the ROT loss. Following Oliver et al. (2018), we adopt the VAT approach to generate perturbed examples with a perturbation weight of 1 for SVHN and 6 for CIFAR10 (and CIFAR100). We measure the consistency loss with the KL divergence and a consistency weight of α ∈ {0.5, 0.1, 0.05, 0.01}.

Model selection. For a fair comparison, we randomly sample 90% of the bags for training and reserve the rest for validation. In the LLP setting, since there are no individual labels available in the validation set, we select hyperparameters based on the hard L1 error, which is computed with only proportion labels. To be more specific, the hard L1 error for a bag B = (X, p) is defined by

Err = ||p − p̂||1, with p̂ = (1/|X|) ∑_{x∈X} e(i∗(x)),

where i∗(x) = arg max_i fθ(x)_i and e(i∗(x)) is the one-hot encoding of the prediction. Lastly, we report the test instance accuracy averaged over the last 10 epochs.
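For reference, the hard L1 error above amounts to the following few lines (a sketch with assumed array shapes):

    import numpy as np

    def hard_l1_error(p, probs):
        # probs: per-instance outputs f_theta(x) for one bag, shape (|X|, L)
        L = probs.shape[1]
        hard = np.eye(L)[probs.argmax(axis=1)]   # one-hot predictions e(i*)
        p_hat = hard.mean(axis=0)                # proportion estimate from hard predictions
        return np.abs(p - p_hat).sum()           # Err = ||p - p_hat||_1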
}, { "heading": "B CONVERGENCE ANALYSIS OF LLP-VAT", "text": "To analyze the convergence performance of LLP-VAT, we plot the instance accuracy on the test set over training epochs. Figure 4 and 5 show the accuracy curve on the test set with the uniform bag generation and the K-means bag generation respectively. As shown in Figure 4 and 5, the experimental results demonstrate the stability of our LLP-VAT. When the training epoch gradually increases, the test instance accuracy goes up quickly and converges in the end.\nB.1 UNIFORM BAG GENERATION\nB.2 K-MEANS BAG GENERATION" } ]
2019
null
SP:54eb8cf5375f436952059b8e6890a0550b98fb52
[ "This papers studies how to explore, in order to generate experience for faster learning of policies in context of RL. RL methods typically employ simple hand-tuned exploration schedules (such as epsilon greedy exploration, and changing the epsilon as training proceeds). This paper proposes a scheme for learning this schedule. The paper does this by modeling this as a non-stationary multi-arm bandit problem. Different exploration settings (tuple of choice of exploration, and the exact hyper-parameter), are considered as different non-stationary multi-arm bandits (while also employing some factorization) and expected returns are maintained over training. Arm (exploration strategy and hyper-parameter) is picked according to the return. The paper demonstrates results on the Atari suite of RL benchmarks, and shows results that demonstrate that their proposed search leads to faster learning.", "This paper develops a multi-arm bandit-based algorithm to dynamically adapt the exploration policy for reinforcement learning. The arms of the bandit are parameters of the policy such as exploration noise, per-action biases etc. A proxy fitness metric is defined that measures the return of the trajectories upon perturbations of the policy z; the bandit then samples perturbations z that are better than the average fitness of the past few perturbations." ]
Determining what experience to generate to best facilitate learning (i.e. exploration) is one of the distinguishing features and open challenges in reinforcement learning. The advent of distributed agents that interact with parallel instances of the environment has enabled larger scale and greater flexibility, but has not removed the need to tune or tailor exploration to the task, because the ideal data for the learning algorithm necessarily depends on its process of learning. We propose to dynamically adapt the data generation by using a non-stationary multi-armed bandit to optimize a proxy of the learning progress. The data distribution is controlled via modulating multiple parameters of the policy (such as stochasticity, consistency or optimism) without significant overhead. The adaptation speed of the bandit can be increased by exploiting the factored modulation structure. We demonstrate on a suite of Atari 2600 games how this unified approach produces results comparable to per-task tuning at a fraction of the cost.
[]
[ { "authors": [ "Marcin Andrychowicz", "Misha Denil", "Sergio Gomez Colmenarejo", "Matthew W. Hoffman", "David Pfau", "Tom Schaul", "Nando de Freitas" ], "title": "Learning to learn by gradient descent by gradient descent", "venue": "CoRR, abs/1606.04474,", "year": 2016 }, { "authors": [ "Peter Auer" ], "title": "Using confidence bounds for exploitation-exploration trade-offs", "venue": "Journal of Machine Learning Research,", "year": 2002 }, { "authors": [ "Marc G Bellemare", "Yavar Naddaf", "Joel Veness", "Michael Bowling" ], "title": "The arcade learning environment: An evaluation platform for general agents", "venue": "Journal of Artificial Intelligence Research,", "year": 2013 }, { "authors": [ "Marc G Bellemare", "Will Dabney", "Rémi Munos" ], "title": "A distributional perspective on reinforcement learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "James S Bergstra", "Rémi Bardenet", "Yoshua Bengio", "Balázs Kégl" ], "title": "Algorithms for hyper-parameter optimization", "venue": "In Advances in neural information processing systems,", "year": 2011 }, { "authors": [ "Omar Besbes", "Yonatan Gur", "Assaf Zeevi" ], "title": "Stochastic multi-armed-bandit problem with nonstationary rewards", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Diana Borsa", "Andre Barreto", "John Quan", "Daniel J. Mankowitz", "Hado van Hasselt", "Remi Munos", "David Silver", "Tom Schaul" ], "title": "Universal successor features approximators", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Olivier Chapelle", "Lihong Li" ], "title": "An empirical evaluation of thompson sampling", "venue": "In Advances in neural information processing systems,", "year": 2011 }, { "authors": [ "Wojciech Marian Czarnecki", "Siddhant M Jayakumar", "Max Jaderberg", "Leonard Hasenclever", "Yee Whye Teh", "Simon Osindero", "Nicolas Heess", "Razvan Pascanu" ], "title": "Mix&match-agent curricula for reinforcement learning", "venue": "arXiv preprint arXiv:1806.01780,", "year": 2018 }, { "authors": [ "Will Dabney", "Mark Rowland", "Marc G Bellemare", "Rémi Munos" ], "title": "Distributional reinforcement learning with quantile regression", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Gabriel Dulac-Arnold", "Richard Evans", "Hado van Hasselt", "Peter Sunehag", "Timothy Lillicrap", "Jonathan Hunt", "Timothy Mann", "Theophane Weber", "Thomas Degris", "Ben Coppin" ], "title": "Deep reinforcement learning in large discrete action spaces", "venue": "arXiv preprint arXiv:1512.07679,", "year": 2015 }, { "authors": [ "Lasse Espeholt", "Hubert Soyer", "Remi Munos", "Karen Simonyan", "Volodymir Mnih", "Tom Ward", "Yotam Doron", "Vlad Firoiu", "Tim Harley", "Iain Dunning" ], "title": "Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures", "venue": "arXiv preprint arXiv:1802.01561,", "year": 2018 }, { "authors": [ "Benjamin Eysenbach", "Abhishek Gupta", "Julian Ibarz", "Sergey Levine" ], "title": "Diversity is all you need: Learning skills without a reward function", "venue": "arXiv preprint arXiv:1802.06070,", "year": 2018 }, { "authors": [ "Meire Fortunato", "Mohammad Gheshlaghi Azar", "Bilal Piot", "Jacob Menick", "Matteo Hessel", "Ian Osband", "Alex Graves", "Volodymyr Mnih", "Remi Munos", "Demis Hassabis", "Olivier Pietquin", "Charles Blundell", "Shane Legg" ], 
"title": "Noisy networks for exploration", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Javier Garcı́a", "Fernando Fernández" ], "title": "A comprehensive survey on safe reinforcement learning", "venue": "Journal of Machine Learning Research,", "year": 2015 }, { "authors": [ "Dibya Ghosh", "Abhishek Gupta", "Sergey Levine" ], "title": "Learning actionable representations with goalconditioned policies", "venue": "arXiv preprint arXiv:1811.07819,", "year": 2018 }, { "authors": [ "Daniel Golovin", "Benjamin Solnik", "Subhodeep Moitra", "Greg Kochanski", "John Elliot Karro", "D. Sculley (eds" ], "title": "Google Vizier: A Service for Black-Box Optimization, 2017", "venue": null, "year": 2017 }, { "authors": [ "Tuomas Haarnoja", "Kristian Hartikainen", "Pieter Abbeel", "Sergey Levine" ], "title": "Latent space policies for hierarchical reinforcement learning", "venue": "arXiv preprint arXiv:1804.02808,", "year": 2018 }, { "authors": [ "Matteo Hessel", "Joseph Modayil", "Hado Van Hasselt", "Tom Schaul", "Georg Ostrovski", "Will Dabney", "Dan Horgan", "Bilal Piot", "Mohammad Azar", "David Silver" ], "title": "Rainbow: Combining improvements in deep reinforcement learning", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Matteo Hessel", "Hado van Hasselt", "Joseph Modayil", "David Silver" ], "title": "On inductive biases in deep reinforcement learning", "venue": null, "year": 1907 }, { "authors": [ "Dan Horgan", "John Quan", "David Budden", "Gabriel Barth-Maron", "Matteo Hessel", "Hado Van Hasselt", "David Silver" ], "title": "Distributed prioritized experience replay", "venue": "arXiv preprint arXiv:1803.00933,", "year": 2018 }, { "authors": [ "Laurent Itti", "Pierre F Baldi" ], "title": "Bayesian surprise attracts human attention", "venue": "In Advances in Neural Information Processing Systems, pp", "year": 2006 }, { "authors": [ "Emilie Kaufmann", "Olivier Cappé", "Aurélien Garivier" ], "title": "On bayesian upper confidence bounds for bandit problems", "venue": "In Artificial intelligence and statistics,", "year": 2012 }, { "authors": [ "Marlos C. Machado", "Marc G. Bellemare", "Erik Talvitie", "Joel Veness", "Matthew J. 
Hausknecht", "Michael Bowling" ], "title": "Revisiting the arcade learning environment: Evaluation protocols and open problems for general agents", "venue": null, "year": 2017 }, { "authors": [ "Marco Mirolli", "Gianluca Baldassarre" ], "title": "Functions and mechanisms of intrinsic motivations", "venue": "In Intrinsically Motivated Learning in Natural and Artificial Systems,", "year": 2013 }, { "authors": [ "Ishan Misra", "Ross Girshick", "Rob Fergus", "Martial Hebert", "Abhinav Gupta", "Laurens Van Der Maaten" ], "title": "Learning by asking questions", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Prabhat Nagarajan", "Garrett Warnell", "Peter Stone" ], "title": "The impact of nondeterminism on reproducibility in deep reinforcement learning", "venue": "In 2nd Reproducibility in Machine Learning Workshop at ICML 2018,", "year": 2018 }, { "authors": [ "Ashvin V Nair", "Vitchyr Pong", "Murtaza Dalal", "Shikhar Bahl", "Steven Lin", "Sergey Levine" ], "title": "Visual reinforcement learning with imagined goals", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Pierre-Yves Oudeyer", "Frdric Kaplan", "Verena V Hafner" ], "title": "Intrinsic motivation systems for autonomous mental development", "venue": "IEEE Transactions on Evolutionary Computation,", "year": 2007 }, { "authors": [ "Matthias Plappert", "Rein Houthooft", "Prafulla Dhariwal", "Szymon Sidor", "Richard Y. Chen", "Xi Chen", "Tamim Asfour", "Pieter Abbeel", "Marcin Andrychowicz" ], "title": "Parameter space noise for exploration", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Martin L. Puterman" ], "title": "Markov Decision Processes—Discrete Stochastic Dynamic Programming", "venue": null, "year": 1994 }, { "authors": [ "Vishnu Raj", "Sheetal Kalyani" ], "title": "Taming non-stationary bandits: A Bayesian approach", "venue": "arXiv preprint arXiv:1707.09727,", "year": 2017 }, { "authors": [ "Tom Schaul", "John Quan", "Ioannis Antonoglou", "David Silver" ], "title": "Prioritized experience replay", "venue": "arXiv preprint arXiv:1511.05952,", "year": 2015 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "Curious model-building control systems", "venue": "In Proc. 
International Joint Conference on Neural Networks,", "year": 1991 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "Driven by compression progress: A simple principle explains essential aspects of subjective beauty, novelty, surprise, interestingness, attention, curiosity, creativity, art, science, music, jokes", "venue": "In Workshop on anticipatory behavior in adaptive learning systems,", "year": 2008 }, { "authors": [ "Felipe Petroski Such", "Vashisht Madhavan", "Rosanne Liu", "Rui Wang", "Pablo Samuel Castro", "Yulun Li", "Ludwig Schubert", "Marc Bellemare", "Jeff Clune", "Joel Lehman" ], "title": "An Atari model zoo for analyzing, visualizing, and comparing deep reinforcement learning agents", "venue": "arXiv preprint arXiv:1812.07069,", "year": 2018 }, { "authors": [ "William R Thompson" ], "title": "On the likelihood that one unknown probability exceeds another in view of the evidence of two samples", "venue": null, "year": 1933 }, { "authors": [ "Yuhuai Wu", "Roman Ring", "Dani Yogatama", "Dario Wünsch", "Katrina McKinney", "Oliver Smith", "Tom Schaul", "Timothy Lillicrap", "Koray Kavukcuoglu", "Demis Hassabis", "Chris Apps", "David Silver" ], "title": "Grandmaster level in StarCraft II using multi-agent reinforcement learning", "venue": "doi: 10.1038/s41586-019-1724-z", "year": 2019 }, { "authors": [ "Shaun S Wang" ], "title": "A class of distortion operators for pricing financial and insurance risks", "venue": "Journal of risk and insurance,", "year": 2000 }, { "authors": [ "Zhongwen Xu", "Hado van Hasselt", "David Silver" ], "title": "Meta-gradient reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Daochen Zha", "Kwei-Herng Lai", "Kaixiong Zhou", "Xia Hu" ], "title": "Experience replay optimization", "venue": "In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Reinforcement learning (RL) is a general formalism modelling sequential decision making. It aspires to be broadly applicable, making minimal assumptions about the task at hand and reducing the need for prior knowledge. By learning behaviour from scratch, it has the potential to surpass human expertise or tackle complex domains where human intuition is not applicable. In practice, however, generality is often traded for performance and efficiency, with RL practitioners tuning algorithms, architectures and hyper-parameters to the task at hand (Hessel et al., 2019). A side-effect of this is that the resulting methods can be brittle, or difficult to reliably reproduce (Nagarajan et al., 2018).\nExploration is one of the main aspects commonly designed or tuned specifically for the task being solved. Previous work has shown that large sample-efficiency gains are possible, for example, when the exploratory behaviour’s level of stochasticity is adjusted to the environment’s hazard rate (Garcı́a & Fernández, 2015), or when an appropriate prior is used in large action spaces (DulacArnold et al., 2015; Czarnecki et al., 2018; Vinyals et al., 2019). Ideal exploration in the presence of function approximation should be agent-centred. It ought to focus more on generating data that supports the learning of agent at its current parameters θ, rather than making progress on objective measurements of information gathering. A useful notion here is learning progress (LP ), defined as the improvement of the learned policy πθ (Section 3).\nThe agent’s source of data is its behaviour policy. Beyond the conventional RL setting of a single stream of experience, distributed agents that interact with parallel copies of the environment can have multiple such data sources (Horgan et al., 2018). In this paper, we restrict ourselves to the setting where all behaviour policies are derived from a single set of learned parameters θ, for example when θ parameterises an action-value function Qθ. Consequently the behaviour policies are given by π(Qθ, z), where each modulation z leads to meaningfully different behaviour. This can be guaranteed if z is semantic (e.g. degree of stochasticity) and consistent across multiple time-steps. The latter is achieved by holding z fixed throughout each episode (Section 2).\nWe propose to estimate a proxy that is indicative of future learning progress, f(z) (Section 3), separately for each modulation z, and to adapt the distribution over modulations to maximize f , using a non-stationary multi-armed bandit that can exploit the factored structure of the modulations (Section 4). Figure 1 shows a diagram of all these components. This results in an autonomous adaptation of behaviour to the agent’s stage of learning (Section 5), varying across tasks and across time, and reducing the need for hyper-parameter tuning." }, { "heading": "2 MODULATED BEHAVIOUR", "text": "As usual in RL, the objective of the agent is to find a policy π that maximises the γ-discounted expected return Gt . = ∑∞ i=0 γ\niRt+i, where Rt is the reward obtained during the transition from time t to t + 1. A common way to address this problem is to use methods that compute the actionvalue function Qπ given by Qπ(s, a) .= E[Gt|s, a], i.e. the expected return when starting from state s with action a and then following π (Puterman, 1994).\nA richer representation of Qπ that aims to capture more information about the underlying distribution of Gt has been proposed by Bellemare et al. 
(2017), and extended by Dabney et al. (2018). Instead of approximating only the mean of the return distribution, we approximate a discrete set of n quantile values qν (where ν ∈ {1/(2n), 3/(2n), . . . , (2n−1)/(2n)}) such that P(Qπ ≤ qν) = ν. Besides the benefits in performance and representation learning (Such et al., 2018), these quantile estimates provide a way of inducing risk-sensitive behaviour. We approximate all qν using a single deep neural network with parameters θ, and define the evaluation policy as the greedy one with respect to the mean estimate:

πθ(·|s) ∈ arg max_a (1/n) ∑_ν qν(s, a).

The behaviour policy is the central element of exploration: it generates exploratory behaviour (and the experience therefrom) which is used to learn πθ, ideally in such a way as to reduce the total amount of experience required to achieve good performance. Instead of a single monolithic behaviour policy, we propose to use a modulated policy to support parameterized variation. Its modulations z should satisfy the following criteria: they need to (i) be impactful, having a direct and meaningful effect on generated behaviour; (ii) have small dimensionality, so as to quickly adapt to the needs of the learning algorithm, and interpretable semantics to ease the choice of viable ranges and initialisation; and (iii) be frugal, in the sense that they are relatively simple and computationally inexpensive to apply. In this work, we consider five concrete types of such modulations:

Temperature: a Boltzmann softmax policy based on action-logits, modulated by temperature, T.

Flat stochasticity: with probability ε the agent ignores the action distribution produced by the softmax, and samples an action uniformly at random (ε-greedy).

Per-action biases: action-logit offsets, b, to bias the agent to prefer some actions.

Action-repeat probability: with probability ρ, the previous action is repeated (Machado et al., 2017). This produces chains of repeated actions with expected length 1/(1−ρ).

Optimism: as the value function is represented by quantiles qν, the aggregate estimate Qω can be parameterised by an optimism exponent ω, such that ω = 0 recovers the default flat average, while positive values of ω imply optimism and negative ones pessimism. When near risk-neutral, our simple risk measure produces qualitatively similar transforms to those of Wang (2000).

We combine the above modulations to produce the overall z-modulated policy

π(a|s, z) .= (1−ε)(1−ρ) · e^{(Qω(s,a)+b_a)/T} / ∑_{a′∈A} e^{(Qω(s,a′)+b_{a′})/T} + ε(1−ρ)/|A| + ρ·I_{a=a_{t−1}},

where z .= (T, ε, b, ρ, ω), I_x is the indicator function, and the optimism-aggregated value is

Qω .= ∑_ν e^{−ων} qν / ∑_ν e^{−ων}.

Now that the behaviour policy can be modulated, the following two sections discuss the criteria and mechanisms for choosing modulations z.
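To fix ideas, here is a NumPy sketch of this modulated policy (not the authors' implementation; the softmax stabilization and the small floor on T for the greedy limit are our own choices):

    import numpy as np

    def modulated_policy(quantiles, prev_action, z):
        # quantiles: q_nu(s, a) with shape (num_actions, n)
        T, eps, b, rho, omega = z
        n = quantiles.shape[1]
        nus = (2 * np.arange(n) + 1) / (2 * n)            # nu in {1/2n, 3/2n, ...}
        w = np.exp(-omega * nus)
        q_omega = (quantiles * w).sum(axis=1) / w.sum()   # optimism-aggregated value
        logits = (q_omega + b) / max(T, 1e-8)             # temperature and per-action biases
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()                              # Boltzmann softmax
        probs = (1 - eps) * (1 - rho) * probs + eps * (1 - rho) / len(probs)
        probs[prev_action] += rho                         # action-repeat mass
        return probs

The three mixture weights (1−ε)(1−ρ), ε(1−ρ) and ρ sum to one, so the result is a valid distribution for any setting of z.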
" }, { "heading": "3 EXPLORATION & THE EFFECTIVE ACQUISITION OF INFORMATION", "text": "A key component of a successful reinforcement learning algorithm is the ability to acquire experience (information) that allows it to make expeditious progress towards its objective of learning to act in the environment in such a way as to optimise returns over the relevant (potentially discounted) horizon. The types of experience that most benefit an agent's ultimate performance may differ qualitatively throughout the course of learning: a behaviour modulation that is beneficial in the beginning of training often enough does not carry over to the end, as illustrated by the analysis in Figure 5. However, this analysis was conducted in hindsight, and in general how to generate such experience optimally (optimal exploration in any environment) remains an open problem.

One approach is to require exploration to be in service of the agent's future learning progress (LP), and to optimise this quantity during learning. Although there are multiple ways of defining learning progress, in this work we opted for a task-related measure, namely the improvement of the policy in terms of expected return. This choice of measure corresponds to the local steepness of the learning curve of the evaluation policy πθ,

LPt(∆θ) .= E_{s0}[ V^{π_{θt+∆θ}}(s0) − V^{π_{θt}}(s0) ], (1)

where the expectation is over start states s0, the value V^π(s) = Eπ[∑_i γ^i Ri | s0 = s] is the γ-discounted return one would expect to obtain when starting in state s and following policy π afterwards, and ∆θ is the change in the agent's parameters. Note that this is still a limited criterion, as it is myopic and might be prone to local optima.

As prefaced in the last section, our goal here is to define a mechanism that can switch between different behaviour modulations depending on which of them seems most promising at this point in the training process. Thus, in order to adapt the distribution over modulations z, we want to assess the expected LP when learning from data generated according to z-modulated behaviour:

LPt(z) .= E_{τ∼π_{θt}(z)}[ LPt(∆θ(τ, t)) ],

with ∆θ(τ, t) the weight change from learning on trajectory τ at time t. This is a subjective utility measure, quantifying how useful τ is for a particular learning algorithm at this stage in training.

Proxies for learning progress: While LP(z) is a simple and clear progress metric, it is not readily available during training, so in practice a proxy fitness ft(z) ≈ LPt(z) needs to be used. A key practical challenge is to construct ft from inexpensively measurable proxies, in a way that is sufficiently informative to effectively adapt the distribution over z, while being robust to noise, approximation error, state distribution shift, and mismatch between the proxies and learning progress. The ideal choice of f(z) is a matter of empirical study, and this paper only scratches the surface on this topic.

After some initial experimentation, we opted for the simple proxy of the empirical (undiscounted) episodic return: ft(z) = ∑_{ai∼π(Qθt, z)} Ri. This is trivial to estimate, but it departs from LP(z) in a number of ways. First, it does not contain learner-subjective information, but this is partly mitigated through its joint use with prioritized replay (see Section 5.1), which over-samples high-error experience. Another potential mechanism by which the episodic return can be indicative of future learning is that an improved policy tends to be preceded by some higher-return episodes; in general, there is a lag between best-seen performance and reliably reproducing it. Second, the fitness is based on absolute returns, not differences in returns as suggested by Equation 1; this makes no difference to the relative orderings of z (and the resulting probabilities induced by the bandit), but it has the benefit that the non-stationarity takes a different form: a difference-based metric will appear stationary if the policy performance keeps increasing at a steady rate, yet such a policy must be changing significantly to achieve that progress, and therefore the selection mechanism should keep revisiting other modulations. In contrast, our absolute fitness naturally has this effect when paired with a non-stationary bandit, as described in the next section.
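As a short sketch (with hypothetical env and policy interfaces; the actual agent is the distributed system of Section 5.1), the fitness proxy is simply the return of one episode generated under the modulation z:

    import numpy as np

    def episode_fitness(env, policy, z, rng):
        # f_t(z): undiscounted episodic return under the z-modulated behaviour policy
        obs, prev_action, ret, done = env.reset(), 0, 0.0, False
        while not done:
            probs = policy(obs, prev_action, z)
            prev_action = int(rng.choice(len(probs), p=probs))
            obs, reward, done = env.step(prev_action)
            ret += reward
        return ret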
" }, { "heading": "4 NON-STATIONARY BANDIT TAILORED TO LEARNING PROGRESS", "text": "The most effective modulation scheme may differ throughout the course of learning. Instead of applying a single fixed modulation or fixed blend, we propose an adaptive scheme, in which the choice of modulation is dynamically based on learning progress. The adaptation process is based on a non-stationary multi-armed bandit (Besbes et al., 2014; Raj & Kalyani, 2017), where each arm corresponds to a behaviour modulation z. The non-stationarity reflects the nature of the learning progress LPt(z), which depends on the time t in training through the parameters θt.

Because of the non-stationarity, the core challenge for such a bandit is to identify good modulation arms quickly, while only having access to a noisy, indirect proxy ft(z) of the quantity of interest LPt(z). However, our setting also presents an unusual advantage: the bandit does not need to identify the best z, as in practice it suffices to spread probability among all arms that produce reasonably useful experience for learning.

Concretely, our bandit samples a modulation z ∈ {z1, . . . , zK} according to the probability that it results in higher-than-usual fitness (measured as the mean over a recent length-h window):

Pt(z) ∝ P(ft(z) ≥ mt), where mt .= (1/h) ∑_{t′=t−h}^{t−1} f_{t′}(z_{t′}).

Note that mt depends on the payoffs of the actually sampled modulations z_{t−h:t−1}, allowing the bandit to become progressively more selective (if mt keeps increasing).

Estimation: For simplicity, Pt(z) is inferred from the empirical data within a recent time window of the same horizon h that is used to compute mt. Concretely, Pt(z) .= µt(z)/∑_{z′} µt(z′), with the preferences µt(z) ≈ P(ft(z) ≥ mt) defined as

µt(z) .= ( 1/2 + ∑_{t′=t−h}^{t−1} I_{f_{t′}(z_{t′}) ≥ mt} · I_{z_{t′} = z} ) / ( 1 + n(z, h) ),

where n(z, h) is the number of times that z was chosen in the corresponding time window. We encode a prior preference of 1/2 in the absence of other evidence, as an additional (fictitious) sample.

Adaptive horizon: The choice of h can be tuned as a hyper-parameter, but in order to remove all hyper-parameters from the bandit, we adapt it online instead. The update is based on a regression accuracy criterion, weighted by how often the arm is pulled. For the full description, see Appendix A.

Factored structure: As we have seen in Section 2, our concrete modulations z have additional factored structure that can be exploited. For that, we propose to use a separate sub-bandit (each defined as above) for each dimension j of z. The full modulation z is assembled from the zj independently sampled from the sub-bandits. This way, denoting by Kj the number of arms for zj, the total number of arms to model is ∑_j Kj, which is a significant reduction from the number of arms in the single flattened space ∏_j Kj. This allows for dramatically faster adaptation in the bandit (see Figure 2). On the other hand, from the perspective of each sub-bandit, there is now another source of non-stationarity due to the other sub-bandits shifting their distributions.
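The resulting mechanism is compact; a sketch with a fixed horizon h (the paper adapts h online, which is omitted here) is:

    import numpy as np

    class NonStationaryBandit:
        def __init__(self, arms, h=100):
            self.arms, self.h = list(arms), h
            self.history = []                              # recent (arm, fitness) pairs

        def sample(self, rng):
            window = self.history[-self.h:]
            m_t = np.mean([f for _, f in window]) if window else 0.0
            prefs = []
            for z in self.arms:                            # mu_t(z), with the 1/2 prior sample
                hits = sum(f >= m_t for a, f in window if a == z)
                n = sum(a == z for a, _ in window)
                prefs.append((0.5 + hits) / (1 + n))
            p = np.asarray(prefs) / np.sum(prefs)          # P_t(z) proportional to mu_t(z)
            return self.arms[rng.choice(len(self.arms), p=p)]

        def update(self, arm, fitness):
            self.history.append((arm, fitness))

The factored variant runs one such bandit per modulation dimension and assembles z from the independently sampled components.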
" }, { "heading": "5 EXPERIMENTS", "text": "The central claim of this paper is that the best fixed hyper-parameters in hindsight for behaviour differ widely across tasks, and that an adaptive approach obtains similar performance to the best choice without costly per-task tuning. We report a broad collection of empirical results on Atari 2600 (Bellemare et al., 2013) that substantiate this claim and validate the effectiveness of the proposed components. From our results, we distill qualitative descriptions of the adaptation dynamics. To isolate effects, independent experiments may use all or subsets of the dimensions of z.

Two initial experiments in a toy grid-world setting are reported in Figure 2. They demonstrate that the proposed bandit works well in both stationary and non-stationary settings. Moreover, they highlight the benefits of using the exact learning progress LP(z), and the gap incurred when using less informative proxies f(z). They also indicate that the factored approach can deliver a substantial speed-up. Details of this setting are described in Appendix B." }, { "heading": "5.1 EXPERIMENTAL SETUP: ATARI", "text": "Our Atari agent is a distributed system inspired by Impala (Espeholt et al., 2018) and Ape-X (Horgan et al., 2018), consisting of one learner (on GPU), multiple actors (on CPUs), and a bandit providing modulations to the actors. For each episode t, an actor queries the bandit for a modulation zt, and the learner for the latest network weights θt. At episode end, it reports a fitness value ft(zt) to the bandit, and adds the collected experience to a replay table for the learner. For stability and reliability, we enforce a fixed ratio between experience generated and learning steps, making actors and learner run at the same pace. Our agents learn a policy from 200 million environment frames in 10-12h wall-clock time (compared to a GPU-week for the state-of-the-art Rainbow agent (Hessel et al., 2018)).

Besides distributed experience collection (i.e., improved experimental turnaround time), the algorithmic elements of the learner are similar to Rainbow: the updates use multi-step double Q-learning, with distributional quantile regression (Dabney et al., 2018) and prioritized experience replay (Schaul et al., 2015). All hyper-parameters (besides those determined by z) are kept fixed across all games and all experiments; these are listed in Appendix C alongside default values of z. These allow us to generate competitive baseline results (118 ± 6% median human-normalised score) with a so-called reference setting (solid black in all learning curves), which sets the exploration parameters to the values most commonly used in the literature (ε = 0.01, ω = 0, T = 0, b = 0, ρ = 0).

If not mentioned otherwise, all aggregate results are across the 15 games listed in Appendix D and at least N = 5 independent runs (seeds). The learning curves shown are evaluations of the greedy policy after the agent has experienced the corresponding number of environment frames. To aggregate scores across these fifteen games we use the relative rank, an ordinal statistic that weighs each game equally (despite different score scales) and highlights relative differences between variants. Concretely, the performance outcome G(game, seed, variant) is defined as the average return of the greedy policy across the last 10% of the run (20 million frames). All outcomes G(game, ·, ·) are then jointly ranked, and the corresponding ranks are averaged across seeds. The averaged ranks are normalized to fall between 0 and 1, such that a normalized rank of 1 corresponds to all N seeds of a variant being ranked at the top N positions in the joint ranking. Finally, the relative ranks for each variant are averaged across all games. See also Appendix D.
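For one game, the relative-rank statistic can be sketched as follows (tie handling and the exact normalization are simplified relative to the paper):

    import numpy as np

    def relative_ranks(outcomes):
        # outcomes: dict mapping variant -> list of per-seed outcomes G(game, seed, variant)
        flat = sorted((g, v) for v, gs in outcomes.items() for g in gs)
        per_variant = {}
        for rank, (_, v) in enumerate(flat):               # joint ranking of all outcomes
            per_variant.setdefault(v, []).append(rank / (len(flat) - 1))
        return {v: float(np.mean(rs)) for v, rs in per_variant.items()}

Averaging these per-game values across all fifteen games gives the aggregate relative rank reported in the figures.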
}, { "heading": "5.2 QUANTIFYING THE TUNING CHALLENGES", "text": "It is widely appreciated that the best hyper-parameters differ per Atari game. Figure 3 illustrates this point for multiple classes of modulations (different arms come out on top in different games), while Figure 4 quantifies this phenomenon across 15 games and 4 modulation classes and finds that this effect holds in general.\nIf early performance were indicative of final performance, the cost of tuning could be reduced. We quantify how much performance would be lost if the best fixed arm were based on the first 10% of the run. Figure 5 shows that the mismatch is often substantial. This also indicates the best choice is nonstationary: what is good in early learning may not be good later on — an issue sometimes addressed by hand-crafted schedules (e.g., DQN linearly decreases the value of (Mnih et al., 2015)).\nAnother approach is to choose not to choose, that is, feed experience from the full set of choices to the learner, an approach taken, e.g., in (Horgan et al., 2018). However, this merely shifts the problem, as it in turn necessitates tuning this set of choices. Figure 6 shows that the difference between a naive and a carefully curated set can indeed be very large (Table 4 in Appendix C lists all these sets)." }, { "heading": "5.3 ADAPTING INSTEAD OF TUNING", "text": "It turns out that adapting the distribution over z as learning progresses effectively addresses the three tuning challenges discussed above (per-task differences, early-late mismatch, handling sets). Figure 6 shows that the bandit can quickly suppress the choices of harmful elements in a noncurated set; in other words, the set does not need to be carefully tuned. At the same time, a game-\nspecific schedule emerges from the non-stationary adaptation, for example recovering an -schedule reminiscent of the hand-crafted one in DQN (Mnih et al., 2015) (see Figure 17 in Appendix E). Finally, the overall performance of the bandit is similar to that of the best fixed choice, and not far from an “oracle” that picks the best fixed z per game in hindsight (Figure 4).\nA number of other interesting qualitative dynamics emerge in our setting (Appendix E): action biases are used initially and later suppressed (e.g., on SEAQUEST, Figure 19); the usefulness of action\nrepeats varies across training (e.g., on H.E.R.O., Figure 18). Figure 16 looks at additional bandit baselines and finds that addressing the non-stationarity is critical (see Appendix E.3).\nFinally, our approach generalizes beyond a single class of modulations; all proposed dimensions can adapt simultaneously within a single run, using a factored bandit to handle the combinatorial space. Figure 13 shows this yields similar performance to adapting within one class. In a few games this outperforms the best fixed choice1 in hindsight; see Figure 6 (‘combo’) and Figure 7; presumably because of the added dynamic adaptation to the learning process. On the entire set of 57 Atari games, the bandit achieves similar performance (113 ± 2% median human-normalized score) to our fixed, tuned reference setting (118± 6%), despite operating on 60 different combinations of modulations." }, { "heading": "6 RELATED WORK", "text": "Here we focus on two facets of our research: its relation to exploration, and hyper-parameter tuning.\nFirst, our work can be seen as building on a rich literature on exploration through intrinsic motivation aimed at maximising learning progress. 
As the true learning progress is not readily available during training, much of this work targets one of a number of proxies: empirical return (Jaderberg et al., 2017); change in parameters, policy, or value function (Itti & Baldi, 2006); magnitude of training loss (Mirolli & Baldassarre, 2013; Schmidhuber, 1991); error reduction or its derivative (Schmidhuber, 1991; Oudeyer et al., 2007); expected accuracy improvement (Misra et al., 2018); compression progress (Schmidhuber, 2008); reduction in uncertainty; improvement of value accuracy; or change in the distribution of encountered states. Some of these have the desirable property that if the proxy is zero, so is LP. However, these proxies themselves may only be available in approximated form, and these approximations tend to be highly dependent on the state distribution under which they are evaluated, which is subject to continual shift due to the changes in policy. As a result, direct comparison between different learning algorithms under these proxies tends to be precarious.

Second, our adaptive behaviour modulation can be viewed as an alternative to per-task hyper-parameter tuning, or hyper-parameter tuning with cross-task transfer (Golovin et al., 2017), and can be compared to other works attempting to reduce the need for this common practice. (Note that the best-fixed-arm in our experiments is equivalent to explicitly tuning the modulations as hyper-parameters.) Though often performed manually, hyper-parameter tuning can be improved by random search (Bergstra et al., 2011), but in either case requires many full training cycles, whereas our work optimises the modulations on-the-fly during a single training run.

Like our method, Population Based Training (PBT, Jaderberg et al., 2017) and meta-gradient RL (Andrychowicz et al., 2016; Xu et al., 2018) share the property of dynamically adapting hyper-parameters throughout agent training. However, these methods exist in a distinctly different problem setting: PBT assumes the ability to run multiple independent learners in parallel with separate experience. Its cost grows linearly with the population size (typically > 10), but it can tune other hyper-parameters than our approach (such as learning rates). Meta-gradient RL, on the other hand, assumes that the fitness is a differentiable function of the hyper-parameters, which may not generally hold for exploration hyper-parameters.

¹Since it is too expensive to investigate all individual combinations in the joint modulation space, we only vary z along a single dimension at a time.

While our method focuses on modulating behaviour in order to shape the experience stream for effective learning, a related but complementary approach is to filter or prioritize the generated experience when sampling from replay. Classically, replay prioritization has been based on TD error, a simple proxy for the learning progress conferred by an experience sample (Schaul et al., 2015). More recently, however, learned and thereby more adaptive prioritization schemes have been proposed (Zha et al., 2019), with (approximate) learning progress as the objective function." }, { "heading": "7 DISCUSSION & FUTURE WORK", "text": "Reiterating one of our key observations: the qualitative properties of experience generated by an agent impact its learning, in a way that depends on characteristics of the task, current learning parameters, and the design of the agent and its learning algorithm. 
We have demonstrated that by adaptively using simple, direct modulations of the way an agent generates experience, we can improve the efficiency of learning by adapting to the dynamics of the learning process and the specific requirements of the task. Our proposed method has the potential to accelerate RL research by reducing the burden of hyper-parameter tuning or the requirement for hand-designed strategies, and does so without incurring the computational overhead of some of the alternatives.

The work presented in this paper represents a first stab at exploiting adaptive modulations to the dynamics of learning, and there are many natural ways of extending this work. For instance, such an approach need not be constrained to draw only from experiences generated by the agent; the agent can also leverage demonstrations provided by humans or by other agents. Having an adaptive system control the use of data relieves system designers of the need to curate such data to be of high quality – an adaptive system can learn to simply ignore data sources that are not useful (or which have outlived their usefulness), as our bandit has done in the case of choosing modulations to generate experiences with (e.g., Figures 17, 18, 19).

A potential limitation of our proposal is the assumption that a modulation remains fixed for the duration of an episode. This restriction could be lifted, and one can imagine scenarios in which the modulation used might depend on time or the underlying state. For example, an agent might generate more useful exploratory experiences by having low stochasticity in the initial part of an episode, but switching to have higher entropy once it reaches an unexplored region of state space.

There is also considerable scope to expand the set of modulations used. A particularly promising avenue might be to consider adding noise in parameter space, and controlling the variance (Fortunato et al., 2018; Plappert et al., 2018). In addition, previous works have shown that agents can learn diverse behaviours conditioned on a latent policy embedding (Eysenbach et al., 2018; Haarnoja et al., 2018), goal (Ghosh et al., 2018; Nair et al., 2018) or task specification (Borsa et al., 2019). A bandit could potentially be exposed to modulating the choices in abstract task space, which could be a powerful driver for more directed exploration.

ACKNOWLEDGEMENTS

(omitted for anonymity)" }, { "heading": "A ADAPTIVE BANDIT", "text": "In this section we briefly revisit the adaptive bandit proposed in Section 4 and provide more of the details behind the adaptability of its horizon. For clarity, we start by restating the setting and key quantities: the reference fitness m_t, the active horizon h_t and the observed fitness function f_t : Z → R, where Z = {z_1, . . . , z_K} defines a modulation class. The bandit samples a modulation z ∈ {z_1, . . . , z_K} according to the probability that this z will result in higher than average fitness (within a recent length-h window):

$$P_t(z) \propto \mathbb{P}(f_t(z) \geq m_t), \qquad \text{where } m_t := \frac{1}{h} \sum_{t'=t-h}^{t-1} f_{t'}(z_{t'}).$$

Adapting z-probabilities. For simplicity, P_t(z) is inferred based on the empirical data within a recent time window of the same horizon h that is used to compute m_t. Concretely, $P_t(z) := \mu_t(z) / \sum_{z'} \mu_t(z')$ with the preferences $\mu_t(z) \approx \mathbb{P}(f_t(z) \geq m_t)$ defined as

$$\mu_t(z) := \frac{\tfrac{1}{2} + \sum_{t'=t-h}^{t-1} \mathbb{I}_{f_{t'}(z_{t'}) \geq m_t}\, \mathbb{I}_{z_{t'} = z}}{1 + n(z, h)},$$

where n(z, h) is the number of times that z was chosen in the corresponding time window. 
We encode a prior preference of 1/2 in the absence of other evidence, as an additional (fictitious) sample.

Adapting the horizon. As motivated in Section 4, the discrete horizon size h_t (which by default grows as h_t = h_{t-1} + 1) is adapted in order to improve the regression accuracy L_t(h):

$$L_t(h) := \frac{1}{2}\left( f_t(z_t) - \bar{f}_{t,h}(z_t) \right)^2,$$

where f_t(z_t) is the fitness of the modulation z_t chosen at time t, and

$$\bar{f}_{t,h}(z) := \frac{m_t + \sum_{t'=t-h}^{t-1} f_{t'}(z_{t'})\, \mathbb{I}_{z_{t'} = z}}{1 + n(z, h)}.$$

This objective is not differentiable w.r.t. h, so we perform a finite-difference step. As the horizon cannot grow beyond the amount of available data, the finite difference is not symmetric around h_t. Concretely, at every step we evaluate two candidates: one according to the current horizon h_t, L_t(h_t), and one proposing a shrinkage of the effective horizon, L_t(h'), where the new candidate horizon is given by:

$$h' := \max\left(2K, (1 - \eta)\, h_t\right).$$

Thus the new horizon proposes a shrinkage of up to η = 2% per step, but is never allowed to shrink below twice the number of arms K. Given a current sample of the fitness function f_t(z_t), we probe which of these two candidates, h_t or h', best explains it, by comparing L_t(h_t) and L_t(h'). If the shorter horizon seems to explain the new data point better, we interpret this as a sign of non-stationarity in the process and propose a shrinkage proportional to the relative prediction-error reduction (L_t(h_t) - L_t(h')) / L_t(h_t), namely:

$$h_{t+1} = \begin{cases} \max\!\left(2K,\ \left(1 - \eta\, \dfrac{L_t(h_t) - L_t(h')}{L_t(h_t)}\right) h_t\right) & \text{if } L_t(h_t) > L_t(h') \\ h_t + 1 & \text{otherwise.} \end{cases}$$

Factored bandits. In the case of factored sub-bandits, they each maintain their own independent horizon h; we have not investigated whether sharing it would be beneficial."
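A minimal sketch of this horizon update in Python, assuming the two regression losses have already been computed; the helper names are ours.

```python
# Sketch of the adaptive horizon update from Appendix A.
def candidate_horizon(h, K, eta=0.02):
    # the shrunken candidate h' evaluated at each step
    return max(2 * K, int((1.0 - eta) * h))

def adapt_horizon(h, K, L_h, L_short, eta=0.02):
    """h: current horizon h_t; K: number of arms;
    L_h, L_short: regression losses L_t(h_t) and L_t(h')."""
    if L_h > L_short:
        # shrink proportionally to the relative prediction-error reduction
        shrink = eta * (L_h - L_short) / L_h
        return max(2 * K, int((1.0 - shrink) * h))
    return h + 1  # otherwise grow by one
```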
 }, { "heading": "B LAVAWORLD EXPERIMENTS", "text": "In this section we describe in further detail the experiments behind Figure 2. These were conducted on LavaWorld, a small (96 states), deterministic four-rooms-style navigation domain, with deadly lava instead of walls. We chose this domain to illustrate what our proposed adaptive mechanism (Section 4) would do under somewhat idealised conditions where the learning is tabular and we can compute the ground-truth LP(z) and assess oracle performance. This investigation allows us to first see how well the bandit can deal with the kind of non-stationarity arising from an RL learning process (entangled with exploration).

In this setting, we consider three modulation classes, ε, T, b, where b can boost the logits for any of the 4 available actions. The sets of modulations are ε, T ∈ {0.01, 0.1, 1} and b_i ∈ {0, 0.1}, resulting in 31 unique modulated (stochastic) policies for each Q-function (see Figure 8). Q-functions are look-up tables of size (96 × 4), as this domain contains 96 unique states. The single start state is in the top-left corner, and the single rewarding state is in the top-left corner of the top-right room, and is also absorbing. We treat the discount γ = 0.99 as a probability of continuation, and terminate episodes stochastically based on this, or when the agent hits lava.

We study the behaviour of the system in two settings: one stationary, one non-stationary.

In the stationary setting (Figure 2, left), we considered modulation behaviours π(·|s, z) that do not change over time. They are computed based on a ’dummy’ action-value function Q, as described in Section 2, but this value does not change over time. The learning process is tabular and independent of this behaviour-generating value Q. In this case, we compute the cumulative probability that an executed policy encounters the single sparse reward (‘expected reward’) as a function of the number of episodes, where we assume that the policy will be perfect after the first reward event. The LP(z) signal given to the oracle bandit is the true (stationary) expectation of this event for every modulation z. Given the extreme reward sparsity and the absence of learning, there is no obvious choice for a proxy measure f, so the non-oracle bandit reverts to a uniform choice over z. The reference Q-values are the optimal ones for the task. The results presented are averaged over 10 runs.

Secondly, we considered a non-stationary setting (Figure 2, right) similar to the one above, but where the Q-values behind the modulated behaviour are learned over time. This is akin to the actual regime of operation this system would encounter in practice, although the learning of these values is idealised. These are initialised at zero, and the only update (given the absence of rewards) is to suppress the Q-value of any encountered transition that hits lava by 0.1; this learning update happens instantaneously. In other words, the agent does a random walk, but over time it dies in many different ways. The reported “expected reward” is again the probability that the policy induced by Q_t encounters the final reward. In this case, a reasonable proxy f(z) for LP(z) is the binary signal of whether something was learned from the last episode (by encountering a new into-lava transition), which obtains a large fraction of the oracle’s performance." }, { "heading": "C ATARI HYPER-PARAMETERS", "text": "In this section we record the settings used in our Atari experiments: hyperparameters of the agents (Tables 2 and 3), environment and preprocessing (Table 1), and modulation sets for our behaviours (Table 4).

Reference modulations. Unless specified otherwise, we use the following modulations by default: ε = 0.01, temperature T = 0.00001 (for tie-breaking between equal-valued actions), biases b = 0, optimism ω = 0, repeat probability ρ = 0. This corresponds to the most commonly used settings in the literature. On learning curve plots, this fixed setting is always shown in black.

Modulation sets. The sets of curated and non-curated modulations we use are described in Table 4. Curated values were chosen based on the results in Figure 4.

Fixed hyperparameters. The hyper-parameters used by our Atari agent are close to defaults used in the literature, with a few modifications to improve learning stability in our highly distributed setting. For preprocessing and agent architecture, we use DQN settings detailed in Table 1 and Table 2. Table 3 summarizes the other hyper-parameters used by our Atari agent." }, { "heading": "D EVALUATION", "text": "In this section we detail the evaluation settings used for our Atari experiments, as well as the metrics used to aggregate results across games in Figure 5 in the main text.

Games. We evaluate on a set of 15 games chosen for their different learning characteristics: ASTERIX, BREAKOUT, DEMON ATTACK, FROSTBITE, H.E.R.O., MS. 
PAC-MAN, PRIVATE EYE, Q*BERT, SEAQUEST, SPACE INVADERS, STAR GUNNER, TENNIS, VENTURE, YARS’ REVENGE and ZAXXON.

Human-normalised scores are computed following the procedure in (Mnih et al., 2015), but differing in that we evaluate online (without interrupting training), average over policies with different weights θ_t (not freezing them), and aggregate over 20 million frames per point instead of 1 million.

Metrics. The relative rank statistic in Section 5.1 is normalized to fall between 0 and 1 for any set of outcomes G(game, ·, variant) and any number of seeds N. For this we simply scale the raw average ranks by their minimal and maximal values: (N + 1)/2 and N⁺ + (N + 1)/2, where N⁺ is the number of other outcomes this variant is jointly ranked with.

In Figure 5, we use a different metric to compare the effect of a chosen modulation early in the learning process. For this we compute $\mathbb{E}_s[G^{(s)}_z]$, the average episode return of modulation z at the end of training across all seeds s ∈ S, and z_0, the modulation with the highest average episode returns at the beginning of training (first 10% of the run). Based on this, we compute the normalised drop in performance resulting from committing prematurely to z_0:

$$\text{Performance drop}(z) := \frac{\mathbb{E}_s[G^{(s)}_{z_0}] - \mathbb{E}_s[G^{(s)}_{z_-}]}{\mathbb{E}_s[G^{(s)}_{z_+}] - \mathbb{E}_s[G^{(s)}_{z_-}]}$$

where $z_+ = \arg\max_{z \in Z} \mathbb{E}_s[G^{(s)}_z]$, $z_- = \arg\min_{z \in Z} \mathbb{E}_s[G^{(s)}_z]$, and Z is the modulation class considered in the study (ε’s, temperatures T, action repeat probabilities ρ, and optimism ω)."
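A short sketch of this statistic, assuming per-seed returns are stored in dictionaries keyed by modulation; a value of 1 indicates no drop (the early winner is also the final winner) and 0 the worst case.

```python
# Sketch of the performance-drop statistic from Appendix D.
import numpy as np

def performance_drop(scores_end, scores_early):
    """scores_end[z], scores_early[z]: per-seed returns for modulation z
    at the end of training and in the first 10% of the run."""
    mean_end = {z: float(np.mean(v)) for z, v in scores_end.items()}
    z0 = max(scores_early, key=lambda z: np.mean(scores_early[z]))  # best early arm
    z_plus = max(mean_end, key=mean_end.get)    # best arm at the end
    z_minus = min(mean_end, key=mean_end.get)   # worst arm at the end
    return (mean_end[z0] - mean_end[z_minus]) / (mean_end[z_plus] - mean_end[z_minus])
```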
 }, { "heading": "E ADDITIONAL ATARI RESULTS", "text": "E.1 PER-TASK NON-STATIONARITY

In this section we report detailed results which were presented in aggregate in Figure 4. Specifically, these results show that (1) the most effective set of modulations varies by game, (2) different modulations are preferable on different games, and (3) the non-stationary bandit performs comparably with the best choice of modulation class on all games.

In the next few figures we give per-game performance comparing fixed-arm modulations with the adaptive bandit behaviour for the modulations epsilon (Figure 9), temperature (Figure 10), action repeats (Figure 11), and optimism (Figure 12). For reference, we include the performance of a uniform bandit (dashed line in the figures), over the same modulation set as the bandits, as well as the best parameter setting across games (reference solid black line in the figures).

E.2 COMBINATORIAL BANDITS

In this section we include additional results for the different combinatorial bandits run on the curated and extended sets (see Table 4). Most of these experiments were run on subsets of the curated/extended sets across modulations, rather than the full Cartesian product. As a convention, whenever a modulation class is omitted from the experiment name, the value for this class is set to the default reference value reported in Section C (Reference modulations). Thus, for instance, if we refer to a per-class-modulation bandit, say optimism ω, the modulations z for this class would be the ones reported in Table 4 (line 4), while all other modulation dimensions would be kept fixed to their reference values.

Figure 13 shows that the combined bandit performs competitively compared with per-factor bandits (the same adaptive bandit but restricted to one class of modulation). In particular, it is worth noting that the per-factor bandit that performs best is game dependent. Nevertheless, the combined bandit, considering modulations across many of these dimensions, manages to recover a competitive performance across most games.

In Figure 14, we include a comparison plot between the combinatorial bandit on the curated set of 3 modulation classes (ε, ρ, ω), its uniform counterpart on the same set, and the reference fixed arm across games. The first thing to notice is that on the curated set the uniform bandit is quite competitive, validating our initial observation that the problem of tuning can be shifted a level above, by carefully curating a set of good candidates. We can additionally see that the adaptive mechanism tends to fall in between these two extremes: an uninformed arm selection and a tuned arm selection. We can see that the adaptive mechanism can recover a behaviour close to uniform in some games (H.E.R.O., YARS’ REVENGE), while maintaining the ability to recover something akin to best-arm identification in other games (see ASTERIX). Moreover there are (rare) instances, see ZAXXON, where the bandit outperforms both of these extremes.

In Figure 15 we include a plot comparing the performance of a combinatorial bandit on the full curated and extended modulation sets. These are bandits acting across all modulation classes outlined in Table 4. As a reference, we include the performance of the per-class modulation bandits, as in Figure 13. The bias modulation class was omitted, as modulating exclusively within this class leads to very poor performance: policies tend to lock into a particular preference. We can also see a negative impact on the overall performance when adding a bias set to the set of modulations the bandit operates on, as one can see from Figure 15 (magenta line). This is why we opted not to include this set in the extended bandit experiments reported in Figure 6 and restricted ourselves to the other 4 extended modulation sets.

E.3 OTHER BANDIT BASELINES

Finally, in Figure 16 we provide a comparison to other, more established bandit algorithms, UCB (Auer, 2002; Kaufmann et al., 2012) and Thompson Sampling (Thompson, 1933; Chapelle & Li, 2011), that would need to learn which modulation to use. The results in this figure are averaged across 5 seeds, and jointly modulate across three classes (ε, ρ, ω). We also include the resulting learning curves for our proposed adaptation mechanism, the bandit described in Section 4, as well as uniform. The first thing to notice is that the stationary bandits, UCB and Thompson Sampling, are sometimes significantly worse than uniform, indicating that they prematurely lock into using modulations that may be good initially, but don’t help for long-term performance. We have already seen signs of this non-stationarity in the analysis in Figure 5, which shows that early commitment based on the evidence seen in the first part of training can be premature and might hinder the overall performance. In contrast, our proposed bandit can adapt to the non-stationarity present in the learning process, resulting in performance that is on par or better than these baselines in most of these games (with only one exception, YARS’ REVENGE, where it matches the performance of the uniform bandit). In that context, it is worth highlighting that the best alternative baseline (UCB, Thompson Sampling, Uniform) differs from game to game, so outperforming all of them is a significant result. Another point is that UCB and Thompson Sampling still require some hyper-parameter tuning (we report the results for the best setting we found), and thus add extra tuning complexity, while our approach is hyper-parameter-free.
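For reference, the combined (factored) bandit used in these experiments can be sketched as follows, reusing the WindowedBandit sketch from Section 4; the dictionary-based interface is our own simplification.

```python
# Sketch of the factored ("combined") bandit: one sub-bandit per modulation
# class; the full z is assembled from independently sampled components.
class FactoredBandit:
    def __init__(self, arm_sets):
        # e.g. {"eps": [0.001, 0.01, 0.1], "rho": [0, 0.25, 0.5], "omega": [0, 1]}
        self.subs = {name: WindowedBandit(arms) for name, arms in arm_sets.items()}

    def sample(self):
        # models sum_j K_j arms instead of the prod_j K_j joint arms
        return {name: b.sample() for name, b in self.subs.items()}

    def update(self, z, fitness):
        # every sub-bandit receives the same episodic fitness signal
        for name, b in self.subs.items():
            b.update(z[name], fitness)
```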
E.4 BEHAVIOUR OF THE NON-STATIONARY BANDIT

In the previous section we saw that the bandit is able to effectively modulate the behaviour policy to give robust performance across games. In this section we dive in deeper to analyse the precise modulations applied by the bandit over time and per game. We see that the bandit does indeed adaptively change the modulation over time and in a game-specific way.

With these results we aim to look into the choices of the bandit across the learning process, and in multiple games. Most of the effect seems to be present in the early stages of training.

• Epsilon schedules: Figure 17 shows the evolution of the value of ε chosen by the bandit over time. We see that this gives rise to an adaptive type of epsilon schedule, where early in training (usually the first few million frames) large values are preferred, and as training progresses smaller values are preferred. This leads to a gradually increasingly greedy behaviour policy.

• Action repeats decay: Figure 18 shows the bandit-modulated values for action repeats. We observe that as the agent progresses it can benefit from more resolution in the policy. Thus, the agent adaptively moves to increasingly prefer a low action repeat probability over time, with a prolonged period of non-preference early in training.

• SEAQUEST: Figure 19 shows the evolution of the sampling distributions of a combined bandit with access to all modulation dimensions: T, ε, b, ρ, and ω. Despite having over 7.5 million combinations of modulation values, the bandit can efficiently learn the quality of different arms. For instance, the agent quickly learns to prefer the down action over the up action, to avoid extreme left/right biases, and to avoid suppressing the fire action, which is consistent with our intuition (in SEAQUEST, the player must move below the sea level to receive points, avoid the left and right boundaries, and fire at incoming enemies). Moreover, as in the case of the single-arm bandits discussed above, the combined bandit prefers more stochastic choices of ε and temperature at the beginning of training and more deterministic settings later in training, and the action repeat probability decays over time." } ]
2019
null
SP:a8cb23a70671d54f8784ac023bbecbcbd0bffcfa
[ "This paper proposes a way to attack and reconstruct a victim's neural architecture that is co-located on the same host. They do it through cache side-channel leakage and use Flush+Reload to extract the trace of victim's function call, which tells specific network operations. To recover the computational graph, they use the approximate time each operation takes to prune out any incompatible candidate computation graph. They show that they can reconstruct exactly the MalConv and ProxylessNAS. ", "This work proposed a method to reconstruct machine learning pipelines and network architectures using cache side-channel attack. It is based on a previous proposed method Flush+Reload that generates the raw trace of function calls. Then the authors applied several techniques to rebuild the computational graph from the raw traces. The proposed method is used to reconstruct MalConv which is a data pre-processing pipeline for malware detection and ProxyLessNas which is a network architecture obtained by NAS. " ]
New data processing pipelines and novel network architectures increasingly drive the success of deep learning. In consequence, the industry considers topperforming architectures as intellectual property and devotes considerable computational resources to discovering such architectures through neural architecture search (NAS). This provides an incentive for adversaries to steal these novel architectures; when used in the cloud, to provide Machine Learning as a Service (MLaaS), the adversaries also have an opportunity to reconstruct the architectures by exploiting a range of hardware side channels. However, it is challenging to reconstruct novel architectures and pipelines without knowing the computational graph (e.g., the layers, branches or skip connections), the architectural parameters (e.g., the number of filters in a convolutional layer) or the specific pre-processing steps (e.g. embeddings). In this paper, we design an algorithm that reconstructs the key components of a novel deep learning system by exploiting a small amount of information leakage from a cache side-channel attack, Flush+Reload. We use Flush+Reload to infer the trace of computations and the timing for each computation. Our algorithm then generates candidate computational graphs from the trace and eliminates incompatible candidates through a parameter estimation process. We implement our algorithm in PyTorch and Tensorflow. We demonstrate experimentally that we can reconstruct MalConv, a novel data pre-processing pipeline for malware detection, and ProxylessNAS-CPU, a novel network architecture for the ImageNet classification optimized to run on CPUs, without knowing the architecture family. In both cases, we achieve 0% error. These results suggest hardware side channels are a practical attack vector against MLaaS, and more efforts should be devoted to understanding their impact on the security of deep learning systems.
[ { "affiliations": [], "name": "YOUR SPARE TIME" }, { "affiliations": [], "name": "Sanghyun Hong" }, { "affiliations": [], "name": "Michael Davinroy" }, { "affiliations": [], "name": "Yiǧitcan Kaya" }, { "affiliations": [], "name": "Dana Dachman-Soled" }, { "affiliations": [], "name": "Tudor Dumitras" } ]
[ { "authors": [ "November" ], "title": "USENIX Association", "venue": "ISBN 978-1-931971-33-1. URL https://www.usenix.org/ conference/osdi16/technical-sessions/presentation/abadi.", "year": 2016 }, { "authors": [ "Zeina Abu-Aisheh", "Romain Raveaux", "Jean-Yves Ramel", "Patrick Martineau" ], "title": "An exact graph edit distance algorithm for solving pattern recognition", "venue": null, "year": 2015 }, { "authors": [ "Adam Bates", "Benjamin Mood", "Joe Pletcher", "Hannah Pruse", "Masoud Valafar", "Kevin Butler" ], "title": "Detecting co-residency with active traffic analysis techniques", "venue": "In Proceedings of the 2012 ACM Workshop on Cloud Computing Security Workshop,", "year": 2012 }, { "authors": [ "Lejla Batina", "Shivam Bhasin", "Dirmanto Jap", "Stjepan Picek" ], "title": "CSI NN: Reverse engineering of neural network architectures through electromagnetic side channel", "venue": "In 28th USENIX Security Symposium (USENIX Security", "year": 2019 }, { "authors": [ "Zachary DeVito" ], "title": "Pytorch: An imperative style, high-performance deep learning library", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "E. Bosman", "K. Razavi", "H. Bos", "C. Giuffrida" ], "title": "Dedup est machina: Memory deduplication as an advanced exploitation vector", "venue": "In 2016 IEEE Symposium on Security and Privacy (SP),", "year": 2016 }, { "authors": [ "Han Cai", "Ligeng Zhu", "Song Han" ], "title": "ProxylessNAS: Direct neural architecture search on target task and hardware", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Szegedy Christian", "Vincent O. Vanhoucke" ], "title": "Processing images using deep neural networks, July 2017", "venue": "URL https://patents.google.com/patent/US9715642B2. US Patent 9,715,642", "year": 2017 }, { "authors": [ "Ambra Demontis", "Marco Melis", "Maura Pintor", "Matthew Jagielski", "Battista Biggio", "Alina Oprea", "Cristina NitaRotaru", "Fabio Roli" ], "title": "Why do adversarial attacks transfer? explaining transferability of evasion and poisoning attacks", "venue": "In 28th USENIX Security Symposium (USENIX Security", "year": 2019 }, { "authors": [ "Vasisht Duddu", "Debasis Samanta", "D. Vijay Rao", "Valentina E. Balas" ], "title": "Stealing neural networks via timing side channels", "venue": "CoRR, abs/1812.11720,", "year": 2018 }, { "authors": [ "Kazushige Goto", "Robert A. van de Geijn" ], "title": "Anatomy of high-performance matrix multiplication", "venue": "ACM Trans. Math. Softw.,", "year": 2008 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Sanghyun Hong", "Michael Davinroy", "Yigitcan Kaya", "Stuart Nevans Locke", "Ian Rackow", "Kevin Kulda", "Dana Dachman-Soled", "Tudor Dumitras" ], "title": "Security analysis of deep neural networks operating in the presence of cache side-channel attacks", "venue": "CoRR, abs/1810.03487,", "year": 2018 }, { "authors": [ "Weizhe Hua", "Zhiru Zhang", "G. 
Edward Suh" ], "title": "Reverse engineering convolutional neural networks through side-channel information leaks", "venue": "In Proceedings of the 55th Annual Design Automation Conference,", "year": 2018 }, { "authors": [ "Taesoo Kim", "Marcus Peinado", "Gloria Mainar-Ruiz" ], "title": "STEALTHMEM: System-level protection against cache-based side channel attacks in the cloud", "venue": "In Presented as part of the 21st USENIX Security Symposium (USENIX Security", "year": 2012 }, { "authors": [ "T. Kohno", "A. Broido", "K.C. Claffy" ], "title": "Remote physical device fingerprinting", "venue": "IEEE Transactions on Dependable and Secure Computing,", "year": 2005 }, { "authors": [ "F. Liu", "Y. Yarom", "Q. Ge", "G. Heiser", "R.B. Lee" ], "title": "Last-level cache side-channel attacks are practical", "venue": "In 2015 IEEE Symposium on Security and Privacy,", "year": 2015 }, { "authors": [ "F. Liu", "Q. Ge", "Y. Yarom", "F. Mckeen", "C. Rozas", "G. Heiser", "R.B. Lee" ], "title": "Catalyst: Defeating last-level cache side channel attacks in cloud computing", "venue": "IEEE International Symposium on High Performance Computer Architecture (HPCA),", "year": 2016 }, { "authors": [ "Edward Raff", "Jon Barker", "Jared Sylvester", "Robert Brandon", "Bryan Catanzaro", "Charles K Nicholas" ], "title": "Malware detection by eating a whole exe", "venue": "In Workshops at the Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Thomas Ristenpart", "Eran Tromer", "Hovav Shacham", "Stefan Savage" ], "title": "Hey, you, get off of my cloud: Exploring information leakage in third-party compute clouds", "venue": "In Proceedings of the 16th ACM Conference on Computer and Communications Security,", "year": 2009 }, { "authors": [ "Mark Sandler", "Andrew Howard", "Menglong Zhu", "Andrey Zhmoginov", "Liang-Chieh Chen" ], "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "David So", "Quoc Le", "Chen Liang" ], "title": "The evolved transformer", "venue": "In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Mingxing Tan", "Bo Chen", "Ruoming Pang", "Vijay Vasudevan", "Mark Sandler", "Andrew Howard", "Quoc V. Le" ], "title": "Mnasnet: Platform-aware neural architecture search for mobile", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Florian Tramèr", "Fan Zhang", "Ari Juels", "Michael Reiter", "Thomas Ristenpart" ], "title": "Stealing machine learning models via prediction apis", "venue": "In 25th USENIX Security Symposium (USENIX Security 16),", "year": 2016 }, { "authors": [ "Venkatanathan Varadarajan", "Yinqian Zhang", "Thomas Ristenpart", "Michael Swift" ], "title": "A placement vulnerability study in multi-tenant public clouds", "venue": "In 24th USENIX Security Symposium (USENIX Security", "year": 2015 }, { "authors": [ "Yan Wang", "Wei-Lun Chao", "Divyansh Garg", "Bharath Hariharan", "Mark Campbell", "Kilian Q. 
Weinberger" ], "title": "Pseudo-lidar from visual depth estimation: Bridging the gap in 3d object detection for autonomous driving", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Mario Werner", "Thomas Unterluggauer", "Lukas Giner", "Michael Schwarz", "Daniel Gruss", "Stefan Mangard" ], "title": "Scattercache: Thwarting cache attacks via cache set randomization", "venue": "In 28th USENIX Security Symposium (USENIX Security", "year": 2019 }, { "authors": [ "Qixue Xiao", "Yufei Chen", "Chao Shen", "Yu Chen", "Kang Li" ], "title": "Seeing is not believing: Camouflage attacks on image scaling algorithms", "venue": "In 28th USENIX Security Symposium (USENIX Security", "year": 2019 }, { "authors": [ "Mengjia Yan", "Christopher W. Fletcher", "Josep Torrellas" ], "title": "Cache telepathy: Leveraging shared resource attacks to learn", "venue": "DNN architectures. CoRR,", "year": 2018 }, { "authors": [ "Yuval Yarom" ], "title": "Mastik: A micro-architectural side-channel toolkit. Retrieved from School of Computer Science Adelaide: http://cs", "venue": "adelaide. edu. au/ ̃ yval/Mastik,", "year": 2016 }, { "authors": [ "Yuval Yarom", "Katrina Falkner" ], "title": "Flush+reload: A high resolution, low noise, l3 cache side-channel attack", "venue": "In 23rd USENIX Security Symposium (USENIX Security", "year": 2014 }, { "authors": [ "Y. Zhang", "A. Juels", "A. Oprea", "M.K. Reiter" ], "title": "Homealone: Co-residency detection in the cloud via sidechannel analysis", "venue": "IEEE Symposium on Security and Privacy,", "year": 2011 }, { "authors": [ "Barret Zoph", "Quoc V Le" ], "title": "Neural architecture search with reinforcement learning", "venue": "arXiv preprint arXiv:1611.01578,", "year": 2016 }, { "authors": [ "Barret Zoph", "Vijay Vasudevan", "Jonathon Shlens", "Quoc V. Le" ], "title": "Learning transferable architectures for scalable image recognition", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "To continue outperforming state-of-the-art results, research in deep learning (DL) has shifted from manually engineering features to engineering DL systems, including novel data pre-processing pipelines (Raff et al., 2018; Wang et al., 2019) and novel neural architectures (Cai et al., 2019; Zoph et al., 2018). For example, a recent malware detection system MalConv, with a manually designed pipeline that combines embeddings and convolutions, achieves 6% better detection rate over previous state-of-the-art technique without pre-processing (Raff et al., 2018). In addition to designing data pre-processing pipelines, other research efforts focus on neural architecture search (NAS)—a method to automatically generate novel architectures that are faster, more accurate and more compact. For instance, the recent work of ProxylessNAS (Cai et al., 2019) can generate a novel architecture with 10% less error rate and 5x fewer parameters than previous state-of-the-art generic architecture. As a result, in the industry such novel DL systems are kept as trade secrets or intellectual property as they give their owners a competitive edge (Christian & Vanhoucke, 2017).\nThese novel DL systems are usually costly to obtain: generating the NASNet architectures (Zoph et al., 2018) takes almost 40K GPU hours and the MalConv authors had to test a large number of failed designs in the process of finding a successful architecture. As a result, an adversary who wishes to have the benefits of such DL systems without incurring the costs has an incentive to steal them. Compared to stealing a trained model (including all the weights), stealing the architectural\nThis work was done when Michael Davinroy was a research intern at the Maryland Cybersecurity Center.\ndetails that make the victim DL system novel provides the benefit that the new architectures and pipelines are usually applicable to multiple tasks. Training new DL systems based on these stolen details still provides the benefits, even when the training data is different. After obtaining these details, an attacker can train a functioning model, even on a different data set, and still benefit from the stolen DL system (So et al., 2019; Wang et al., 2019). Further, against a novel system, stealing its architectural details increases the reliability of black-box poisoning and evasion attacks (Demontis et al., 2019). Moreover, stealing leads to threats such as Camouflage attacks (Xiao et al., 2019) that trigger misclassifications by exploiting the image scaling algorithms that are common in DNN pre-processing pipelines.\nThe emerging Machine-Learning-as-a-Service (MLaaS) model that offers DL computation tools in the cloud makes remote hardware side-channel attacks a practical vector for stealing DL systems (Liu et al., 2015). Unlike prior stealing attacks, these attacks do not require physical proximity to the hardare that runs the system (Batina et al., 2019; Hua et al., 2018) or direct query access to train an approximate model (Tramèr et al., 2016). Cache side-channel attacks have especially been shown as practical in cloud computing for stealing sensitive information, such as cryptographic keys (Liu et al., 2015). 
Cache side-channel attacks are ubiquitous and difficult to defeat, as they are inherent to the microarchitectural design of modern CPUs (Werner et al., 2019).

In this paper, considering the incentives to steal a novel DL system and the applicability of cache side-channel attacks in modern DL settings, we design a practical attack to steal novel DL systems by leveraging only the cache side-channel leakage. Simulating a common cloud computing scenario, our attacker has a co-located VM on the same host machine as the victim DL system, and shares the last-level cache with the victim (Liu et al., 2015). As a result, even though the VMs are running on separate processor cores, the attacker can monitor the cache accesses a DL framework—PyTorch or TensorFlow—makes while the victim system is running (Liu et al., 2015).

The first step of our attack is launching a cache side-channel attack, Flush+Reload (Yarom & Falkner, 2014), to extract a single trace of the victim's function calls (Section 3). This trace corresponds to the execution of specific network operations a DL framework performs, e.g., convolutions or batch-normalizations, while processing an input sample. However, the trace has little information about the computational graph, e.g., the layers, branches or skip connections, or the architectural parameters, e.g., the number of filters in a convolutional layer. The limited prior work on side-channel attacks against DL systems assumed knowledge of the architecture family of the victim DNN (Yan et al., 2018; Duddu et al., 2018); therefore, these attacks are only able to extract variants of generic architectures, such as VGG (Simonyan & Zisserman, 2015) or ResNet (He et al., 2016). To overcome this challenge, we also extract the approximate time each DL operation takes, in addition to the trace, and we leverage this information to estimate the architectural parameters. This enables us to develop a reconstruction algorithm that generates a set of candidate graphs given the trace and eliminates the incompatible candidates given the parameters (Section 4). We apply our technique to two exemplar DL systems: the MalConv data pre-processing pipeline and a novel neural architecture produced by ProxylessNAS.

Contributions. We design an algorithm that reconstructs novel DL systems only from the cache side-channel information that leaks DL computations, extracted using the Flush+Reload attack. We show that Flush+Reload reliably extracts the trace of computations and exposes the time each computational step takes in a practical cloud scenario. Using the extracted information, our reconstruction algorithm estimates the computational graph and the architectural parameters.

We demonstrate that our attacker can reconstruct a novel network architecture found by a NAS process (ProxylessNAS) and a novel manually designed data pre-processing pipeline (MalConv) with no reconstruction error.

We demonstrate the threat of practical stealing attacks against DL by exposing that the vulnerability is shared across common DL frameworks, PyTorch and TensorFlow." }, { "heading": "2 BACKGROUND", "text": "Here, we discuss prior efforts in both crafting and stealing network architectures. There is a growing interest in crafting novel DL systems, as they significantly outperform their generic counterparts. The immense effort and computational costs of crafting them, however, motivate adversaries to steal them.

Effort to Design Deep Learning Systems. 
Creating deep learning systems traditionally takes the form of human design through expert knowledge and experience. Some problems require novel designs to manipulate the input in a domain-specific way that DNNs can process more effectively. For example, the MalConv malware detection system (Raff et al., 2018) uses a manually designed pre-processing pipeline that can digest raw executable files as a whole. Pseudo-LIDAR (Wang et al., 2019), by pre-processing the output of a simple camera sensor into a LIDAR-like representation, achieves four times better object detection accuracy than the previous state-of-the-art technique. Moreover, recent work also focuses on automatically generating optimal architectures via neural architecture search (NAS). For example, reinforcement learning (Zoph & Le, 2016) or gradient-based approaches (Cai et al., 2019) have been proposed for learning to generate optimal architectures. Even though NAS procedures have been shown to produce more accurate, more compact and faster neural networks, the computational cost of the search can be an order of magnitude higher than training a generic architecture (Zoph et al., 2018).

Effort to Steal Deep Learning Systems. Prior work on stealing DNN systems focuses on two main threat models, based on whether the attacker has physical access to the victim's hardware. Physical access attacks have been proposed against hardware accelerators, and they rely on precise timing measurements (Hua et al., 2018) or electromagnetic emanations (Batina et al., 2019). These attacks are not applicable in the cloud setting we consider. The remote attacks that are applicable in the cloud setting, on the other hand, have the limitation of requiring precise measurements that are impractical in the cloud (Duddu et al., 2018). Further, the attack without this limitation (Hong et al., 2018) requires the attacker to know the family the target architecture comes from; thus, it cannot steal novel architectures. In our work, we design an attack to reconstruct novel DL systems by utilizing a practical cache side-channel attack in the cloud setting." }, { "heading": "3 EXTRACTING THE SEQUENCE OF COMPUTATIONS VIA FLUSH+RELOAD", "text": "" }, { "heading": "3.1 THREAT MODEL", "text": "We consider an attacker who aims to steal the key components of a novel DL system, i.e., a novel pre-processing pipeline or a novel network architecture. We first launch a Flush+Reload (Yarom & Falkner, 2014) attack to extract the cache side-channel information leaked by the DL computation. Our target setting is a cloud environment, where the victim's DL system is deployed inside a VM—or a container—to serve the requests of external users. Flush+Reload, in this setting, is a practical side-channel attack that requires only weak attacker capabilities (Liu et al., 2015). Further, as in MLaaS products in the cloud, the victim uses popular open-source DL frameworks, such as PyTorch (Benoit Steiner, 2019) or TensorFlow (Abadi et al., 2016).

Capabilities. We consider an attacker that owns a co-located VM—or a container—on the same physical host machine as the victim's system. Prior work has shown that spinning up a co-located VM in third-party cloud computing services does not require sophisticated techniques (Ristenpart et al., 2009; Zhang et al., 2011; Bates et al., 2012; Kohno et al., 2005; Varadarajan et al., 2015). 
Due to the co-location, the last-level cache (L3 cache) in the physical host is shared between the cores that run the attacker's and the victim's processes; thus, our attacker can monitor the victim's computations leaked at the L3 cache. We also note that, even if the victim uses GPUs, our attacker can still observe the same computations used for CPUs via cache side-channels (see Appendix A).

Knowledge. We assume our attacker and the victim use the same version of the same open-source DL framework. This is realistic in MLaaS scenarios, such as AWS SageMaker or Google Cloud's AutoML, as cloud providers recommend practitioners use the common frameworks to construct their systems. These common practices also allow our attacker to reverse-engineer the frameworks offline and identify the lines of code to monitor with the Flush+Reload technique.

For example, AWS provides convenient deployment options for both PyTorch and TensorFlow: https://docs.aws.amazon.com/sagemaker/latest/dg/pytorch.html and https://docs.aws.amazon.com/sagemaker/latest/dg/tf.html." }, { "heading": "3.2 FLUSH+RELOAD MECHANISM", "text": "Flush+Reload allows an adversary to continually monitor a victim's instruction access patterns by observing the time taken to load them from memory. This technique is effective for extracting the computation flow of the victim's program when the attacker and victim share memory (e.g., via a shared library or page deduplication (Bosman et al., 2016)). The attacker flushes specific lines of code in a shared DL framework from the co-located machine's cache hierarchy and then measures the amount of time it takes to reload the lines of code. If the victim invokes the monitored line of code, the instruction will be reloaded into the shared cache, and when the attacker reloads the instruction, the access to it will be noticeably faster. On the other hand, if the victim does not call the monitored line of code, the access to it will be slower because the instruction needs to be loaded from main memory (DRAM). By repeating this process, our attacker can tell when a victim has accessed a line of code." }, { "heading": "3.3 OVERVIEW OF OUR ATTACK PROCEDURE", "text": "In Figure 1, we illustrate our attack procedure. We split the steps into two phases: the online phase and the offline phase. In the online phase (step (2)), the attacker needs co-location to monitor the computations from the victim's system. In the offline phase (steps (1), (3), (4), and (5)) the attacker does not require co-location with the victim.

(1) First, our attacker analyzes the open-source DL framework to identify the lines of code to monitor. The attacker monitors the first line of each function that corresponds to the start of a DL computation. (2) Next, the attacker spins up a co-located VM and launches the Flush+Reload attack to extract the trace of the victim system's function calls. As the trace does not depend on the input sample, we only need to extract a single trace from one full invocation of the victim system. (3) Since the raw observations from Flush+Reload are noisy, the attacker applies filtering to highlight the regularities of DL computations reflected in the trace (a sketch of this step follows below). (4) To estimate the architectural parameters, e.g., the input/output channels, kernel size, or strides, our attacker creates lookup tables of timings and the number of matrix multiplications performed, by collecting traces from various parameter combinations. (5) Finally, using the victim's computational trace and the lookup tables for estimating architectural parameters, the attacker starts the reconstruction process to steal the victim's DL system (Sec 4)."
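As an illustration of steps (2)-(3), the following sketch condenses a raw Flush+Reload trace into per-invocation records; the 2000-cycle gap threshold and the record layout are illustrative assumptions of ours, not values from our implementation.

```python
# Sketch of the trace filtering: repeated cache hits of one long-running
# call are merged, and GEMM hits are tallied per condensed invocation.
def condense(raw_trace, gap=2000):
    """raw_trace: list of (timestamp, function_name) cache-hit events,
    sorted by timestamp."""
    records = []
    for ts, name in raw_trace:
        if name.startswith("GEMM"):
            if records:
                records[-1]["gemms"] += 1   # attribute GEMM hits to the current call
            continue
        last = records[-1] if records else None
        if last and last["name"] == name and ts - last["end"] < gap:
            last["end"] = ts                # another hit of the same invocation
        else:
            records.append({"name": name, "start": ts, "end": ts, "gemms": 0})
    return records
```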
 }, { "heading": "3.4 MONITORING THE TOY NETWORK COMPUTATIONS VIA FLUSH+RELOAD", "text": "Experimental Setup. We implement our attack on Ubuntu 18.04 running on a host machine equipped with the Intel E3-1245v6 3.7GHz processor (8 cores, 32GB memory and 8MB cache shared between cores). For step (1), we analyze two popular open-source DL frameworks, PyTorch and TensorFlow, and identify the list of functions to monitor (see Appendix C for the full list of functions). We leverage the Mastik toolkit (Yarom, 2016) to launch the Flush+Reload attack, and while a victim DL system is running in a VM, our attacker monitors the list of functions—step (2). For the reconstruction process, conducted offline after the extraction, we use Python v3.6 (https://www.python.org) to implement the procedure.

ToyNet Results. In Figure 2, we demonstrate the trace extracted via Flush+Reload while ToyNet is processing an input. ToyNet is composed of one convolution followed by a batch-norm, and one depthwise convolution followed by a batch-norm and a ReLU activation. The 1st convolution has the parameters (in, out, kernel, stride) = (3, 10, 3, 1), and the depthwise convolution's parameters are (10, 10, 1, 1). The network has a skip connection that adds the intermediate output (from the 1st convolution) to the final output. During inference, we feed in an input with dimensions 3x32x32 (a PyTorch sketch of this network follows at the end of this section).

In the middle panel of Figure 2, we also show the raw—noisy—trace from the Flush+Reload output. The trace only includes cache-hits, where the attacker's accesses to the lines of code are faster, i.e., when the victim invokes the function. Each element of the trace includes a timestamp and a function name. The name corresponds to the ToyNet layers, such as Conv2d and BatchNorm2d, and it also contains additional information such as the tensor (add) and the BLAS operations, e.g., GEMM(oncopy).

Our attacker filters the raw trace according to the regular patterns in the DL computation. For example, a long function call, e.g., Conv2d in the ToyNet trace, can appear multiple times in the trace, as the cache can hit multiple times during Flush+Reload. In this case, we condense the multiple occurrences into a single invocation using a heuristic based on how close the timestamps are. We also observe the matrix multiplications, such as GEMM(conv) and GEMM(oncopy), while the DL computation is being processed. We count the individual occurrences and sum them up based on the timestamps. After obtaining the processed trace (in the right panel), the attacker starts the reconstruction procedure."
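For concreteness, a PyTorch sketch of the ToyNet victim described above; the layer parameters follow the text, while the exact ordering of the ReLU and the skip addition is our assumption.

```python
# ToyNet: conv -> bn -> depthwise conv -> bn -> relu, with a skip connection
# that adds the first convolution's output to the final output.
import torch
import torch.nn as nn

class ToyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 10, kernel_size=3, stride=1)
        self.bn1 = nn.BatchNorm2d(10)
        self.dwconv = nn.Conv2d(10, 10, kernel_size=1, stride=1, groups=10)  # depthwise
        self.bn2 = nn.BatchNorm2d(10)
        self.relu = nn.ReLU()

    def forward(self, x):
        skip = self.bn1(self.conv(x))            # intermediate output, kept for the skip
        out = self.relu(self.bn2(self.dwconv(skip)))
        return out + skip                        # appears as the tensor add in the trace

x = torch.randn(1, 3, 32, 32)                    # the 3x32x32 input from the text
y = ToyNet()(x)
```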
 }, { "heading": "4 RECONSTRUCTING NOVEL DEEP LEARNING SYSTEMS", "text": "After processing the Flush+Reload trace, our attacker reconstructs the key components of the victim's DL system. In this process, the attacker aims to generate candidate computational graphs of the victim system and to eliminate the incompatible candidates by estimating the correct parameter set for each computation. For instance, in our ToyNet example, the attacker wants to identify the order of the computations and the locations where a branch connection starts and ends (the computational graph). The same attacker also wants to estimate the parameters for each computation; for example, the input/output channels and the kernel size of the 1st Conv2d. In this small network that has one branch, there are only 10 candidate computational graphs; however, considering all possible combinations of parameters, this results in an intractable number of candidates. Prior work on reconstruction only considered generic architectures, such as VGGs or ResNets, with the unrealistic assumption that the attacker knows the architecture family (backbone); however, as our aim is to steal novel DL systems, we do not make this assumption. To overcome this problem, we design a reconstruction procedure, which we describe next.

Knowledge of Our Attacker in Reconstruction. Here, we consider an attacker who knows what tensor operations and functions to monitor in the victim's open-source DL framework. These functions are model-independent; they correspond to architectural attributes designated by the deep learning framework (see Appendix C). We show that this knowledge is sufficient to reconstruct novel data pre-processing pipelines, such as MalConv, which are usually shallower than the network architectures.

To reconstruct the deeper network architectures automatically designed by NAS algorithms, we assume our attacker has some knowledge about the NAS search space—e.g., the NASNet search space (Zoph et al., 2018)—the victim's search process relies on. This knowledge includes the list of layers used and the fact that a set of layers (known as blocks) is used repeatedly, such as the Normal and Reduction Blocks in NASNet. We make this assumption because, from the sequence of computations observed via Flush+Reload, our attacker can easily identify a set of layers and the repetitions of the layers. However, we do not assume how each block is composed by using the layer observations directly; instead, we identify candidate blocks by using a sequence mining algorithm. We demonstrate that, under these assumptions, our attack reconstructs ProxylessNAS-CPU in 12 CPU hours, rather than running a NAS algorithm from scratch, which takes 40k GPU hours.

(Note that ProxylessNAS starts its searching process from a backbone architecture such as NASNet; thus, even though the paper reported that a search took 200 GPU hours, this number does not include the time spent searching for a backbone architecture, i.e., the 40k GPU hours to find NASNet.)" }, { "heading": "4.1 OVERVIEW OF OUR RECONSTRUCTION PROCEDURE.", "text": "We first focus on the invariant rules in the computations used for DL. For instance, there are unary operations and binary operations. The tensor addition used to implement a skip connection is a binary operation; thus, our attacker can supplement the reconstruction process by pruning the incompatible candidates. We also exploit the fact that computation time is proportional to the number of element-wise multiplications in a computation. In the ToyNet example, the time the 1st convolution takes (2 million cycles) is shorter than that of the 2nd, depthwise convolution (3.668 million cycles); thus, our attacker further eliminates candidates by comparing the possible parameters for a computation with her offline profiling data—the lookup table.

Our reconstruction procedure consists of two steps:

(1) Generation: The attacker generates the candidate computational graphs from the Flush+Reload trace based on the invariant rules in DL computations. Using the rules, our attacker reduces the number of candidates significantly. (2) Elimination: Our attacker compares the time each computation takes with the profiling data and prunes the incompatible candidates. We estimate the parameters sequentially, starting from the input. When the output dimension from a candidate does not match the observation, we eliminate it.

Error Metrics. To quantify the error of our reconstruction results, we use two similarity metrics. First, we use the graph edit distance (GED) (Abu-Aisheh et al., 2015) to compare the reconstructed computational graph with that of the victim. Second, we use the ℓ1-distance to compute the error between the estimated architectural parameters and those in the victim system.
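A sketch of how these two metrics can be computed, assuming the computational graphs are represented as networkx DiGraphs with an "op" attribute per node; exact GED is exponential in general but feasible for the small graphs considered here.

```python
# Sketch of the two reconstruction-error metrics.
import networkx as nx
import numpy as np

def reconstruction_error(g_victim, g_reconstructed, p_victim, p_reconstructed):
    # graph edit distance between computational graphs (0 = identical)
    ged = nx.graph_edit_distance(
        g_victim, g_reconstructed,
        node_match=lambda a, b: a["op"] == b["op"])
    # l1 distance between the architectural parameter vectors
    l1 = float(np.sum(np.abs(np.asarray(p_victim) - np.asarray(p_reconstructed))))
    return ged, l1
```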
Victims. We first reconstruct MalConv (Raff et al., 2018), a novel data pre-processing pipeline that converts a binary file into a specific format so that a neural network can digest it easily. We also show that our attacker can reconstruct the novel ProxylessNAS (Cai et al., 2019) architecture, which improves accuracy on ImageNet classification at a lower computational cost on a CPU. (Note that ProxylessNAS starts its search process from a backbone architecture such as NASNet; thus, even though the paper reports that a search took 200 GPU hours, this number does not include the time spent searching for a backbone architecture, i.e., the 40k GPU hours to find NASNet.)" }, { "heading": "4.2 RECONSTRUCTING NOVEL PRE-PROCESSING PIPELINES", "text": "Here, we elaborate on the reconstruction process of the MalConv (Raff et al., 2018) data pre-processing pipeline. MalConv receives the raw bytes of .exe files and determines whether a file is malicious or not. The uniqueness of MalConv comes from the way it treats the sequence of bytes: 1) Code instructions in a binary file are correlated spatially, but the correlation has discontinuities from function calls and jump commands that are difficult to capture by sequence models, e.g., RNNs. 2) Also, each sequence has on the order of two million steps, which far exceeds the length of an input to any previous neural network classifier. MalConv tackles this problem by pre-processing the sequence of bytes (Figure 3). It first splits the upper four bits and the lower four bits of a byte (narrow operations); this helps the network capture the locality of closer bytes and distant bytes. Next, the pipeline uses one-dimensional convolutions to extract such localities and performs an element-wise multiplication of the two outputs. Before feeding this information to the neural network, the pipeline uses max-pooling to reduce the training time caused by processing inputs with large dimensions. All these heuristics were examined manually (see Section 4 of the original paper); thus, our attacker can save time and effort by stealing the pipeline.

Generate Computational Graphs. The first step of our attacker is to reconstruct the computational graph candidates for the victim pipeline from the Flush+Reload trace. As we can see in the trace in Figure 3, the attacker cannot simply connect the components in the trace sequentially because of branch connections, e.g., [7] * (multiply). From such a component, our attacker knows where a branch ends but not where it begins. We solve this problem by populating all possible candidates and pruning them later with the parameter estimation.

Our algorithm populates the candidate computational graphs; sample candidates are shown in Appendix E. Our solution uses a recursive algorithm. Given a trace from Flush+Reload (T), we pop each computation t from the back and construct the list of candidates l. At a high level, the algorithm first traverses all the possible connections starting from the last computation to the first by using recursion.
Then, when the base condition is met (i.e., the algorithm arrives at the first computation, Embeddings), we backtrack the recursions to construct the list of candidate computational graphs. We focus on the computation type in this backtracking process; there are unary and binary computations. For the unary operations, we simply connect the current and preceding computations. However, for the binary operations, we split all the preceding computations into a set of two lists. Each set of two lists corresponds to a branch, and we continue backtracking for each branch and include all of the resulting constructions in our results. In the end, we find 20 candidates.

Eliminate Candidates with Computational Parameters. Next, our attacker further prunes the candidates based on the computational parameter estimation process. Most importantly, our attacker relies on the fact that computation time depends on the size of the matrix multiplication. This enables our attacker to profile the computational time taken for a set of parameter combinations in advance. (Figure 3: the MalConv novel pre-processing pipeline and its processed trace.) The attacker is able to perform this profiling offline by taking advantage of cloud infrastructure (e.g., https://aws.amazon.com/ec2/instance-types/): the hardware and software stacks composing the cloud are consistent. In the MalConv reconstruction, we profile the timing of the convolution and linear operations. For the convolutions, we consider input/output channels {1, 2, 4, 8, 16, 32, 128, 256}, kernels {1, 3, 5, 7, 11, 100, 200, 500, 1k, 10k}, and strides {1, 2, 5, 10, 100, 200, 500, 1k, 10k}. For the linear layers, we use input {4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048} and output dimensions {1, 10, 16, 20, 32, 40, 100, 128, 256, 512, 1k, 1024, 2048}. Once our attacker has the timing profiles for these parameter combinations, the attacker defines the potential parameter sets for the convolutions and linear layers. Then, the attacker checks, for each candidate, whether the computational graph returns the correct output dimension (1,) for the input (8, 2000000). In this pruning process, there are other operations such as Sigmoid, * (multiply), transpose, narrow, or pooling. We apply universal rules for each case: 1) the Sigmoid and multiply do not change the input/output dimensions, 2) the transpose only swaps two dimensions of an input, 3) the narrow slices one chosen dimension, e.g., (8,2000000) to (4,1000000), so we consider all possible slices when checking, and 4) the pooling only requires us to estimate its window size, so we match this value to the stride of the preceding convolution. At the end of this parameter estimation, we narrow down to only one architecture with the correct set of computational parameters, i.e., 0% error. A simplified sketch of this timing-based elimination appears below.
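The following simplified sketch illustrates the elimination step: candidates whose parameters are inconsistent with the timing lookup table or with the required output shape are pruned. The profiled cycle counts and the tolerance are made-up placeholders.

```python
# Offline lookup table: (in_ch, out_ch, kernel, stride) -> observed cycles.
# The numbers here are illustrative, not real profiling results.
PROFILE = {(3, 10, 3, 1): 2_000_000, (3, 10, 5, 1): 5_100_000}

def match_params(observed_cycles, tolerance=0.1):
    """Return all parameter sets whose profiled time is within `tolerance`."""
    return [p for p, cycles in PROFILE.items()
            if abs(cycles - observed_cycles) / cycles <= tolerance]

def conv_out_shape(in_shape, params):
    """Output shape of a convolution, assuming no padding."""
    c, h, w = in_shape
    _, out_ch, k, s = params
    return (out_ch, (h - k) // s + 1, (w - k) // s + 1)

# The 1st ToyNet convolution took ~2M cycles on an input of (3, 32, 32):
for p in match_params(2_050_000):
    print(p, "->", conv_out_shape((3, 32, 32), p))
# (3, 10, 3, 1) -> (10, 30, 30)
```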
" }, { "heading": "4.3 RECONSTRUCTING NOVEL NETWORK ARCHITECTURES", "text": "Here, we show that our attacker is able to steal a novel network architecture by describing the reconstruction process of ProxylessNAS-CPU (Cai et al., 2019), which improves the accuracy of an existing architecture, MobileNetV2 (Sandler et al., 2018), and also reduces the computation time. Indeed, the NAS search procedure warm-starts from an over-parameterized MobileNetV2 as a backbone; however, in our attack, we hypothesize that our attacker is not aware of the backbone. Instead, we assume our attacker only knows the search space of MNasNet (Tan et al., 2019) (see Appendix D), from which the authors derived MobileNetV2, as opposed to the recent attacks in Sec 2.

Knowing the search space does not, however, reduce the amount of effort required of our attacker in reconstruction. The network architectures found by NAS procedures are commonly wide and deep, and they include multiple branch connections; thus, our attacker would have to consider an exponential number of candidate computational graphs and computation parameters, which makes the attack infeasible. To tackle this issue, we focus on the NAS procedure—this process factorizes the entire architecture into blocks by their functions. For instance, NASNet (Zoph et al., 2018) is composed of normal cells (blocks) and reduction cells. Within each block, the process considers the architecture combinations that provide the optimal performance. Thus, we first identify the potential blocks before we initiate the process of reconstructing candidate computational graphs.

Identifying Candidate Blocks. We utilize a frequent subsequence mining (FSM) method to identify the blocks composing the ProxylessNAS-CPU architecture. Our FSM method is simple: we iterate over the Flush+Reload trace with fixed windows and count the occurrences of each subsequence. Since the attacker knows that, in the search space the victim uses, a maximum of nine computations compose a block, we consider window sizes from one to nine. Once we count the number of occurrences of each subsequence (candidate block), we prune them based on the rules in the search space: 1) a Conv2d operation is followed by a BatchNorm, 2) a block with a DepthConv2d must end with a Conv2d and BatchNorm (for a depthwise separable convolution), 3) a branch connection cannot merge (add) in the middle of a block, and 4) we take the most frequent block in each window. In Table 2, we describe the 9 identified blocks. We then run the generation process of reconstructing candidate computational graphs with the blocks instead of using each computation in the trace. In the end, we have 180,224 candidate computational graphs. A minimal sketch of the FSM step is given below.
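The sketch below illustrates the window-based counting and simplified versions of pruning rules 1) and 2); the trace and rule encodings are our own assumptions.

```python
# A minimal sketch of the frequent-subsequence mining (FSM) step: slide
# windows of size 1..9 over the layer trace and count each subsequence.
from collections import Counter

def mine_blocks(trace, max_len=9):
    counts = Counter()
    for w in range(1, max_len + 1):
        for i in range(len(trace) - w + 1):
            counts[tuple(trace[i:i + w])] += 1
    return counts

def valid(block):
    # Rule 1: every Conv2d must be followed by a BatchNorm2d.
    for a, b in zip(block, block[1:] + ("",)):
        if a == "Conv2d" and b != "BatchNorm2d":
            return False
    # Rule 2: a block with DepthConv2d must end with Conv2d + BatchNorm2d.
    if "DepthConv2d" in block and block[-2:] != ("Conv2d", "BatchNorm2d"):
        return False
    return True

trace = ["Conv2d", "BatchNorm2d", "DepthConv2d", "Conv2d", "BatchNorm2d"] * 3
counts = mine_blocks(trace)
best = max((b for b in counts if len(b) == 5 and valid(b)), key=counts.get)
print(best, counts[best])
# ('Conv2d', 'BatchNorm2d', 'DepthConv2d', 'Conv2d', 'BatchNorm2d') 3
```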
Eliminate Candidates with Computational Parameters. For each candidate composed of known blocks, our attacker estimates the computation parameters. However, the number of parameter combinations is also exponential; for example, within the search space, a Conv2d can have any number of input/output channels, kernel size {1, 3, 5}, and strides {1, 2}. Thus, we focus on the computation rules within a block. 1) We first find that a DepthConv2d can only have equal input and output channels. Also, the channel size can be identified by the number of GEMM(conv) operations. For instance, in Figure 4, the DepthConv2d has 143 GEMM(conv) invocations, which is close to the channel size. Since the operation commonly has an even number of channels, the attacker can easily reduce the candidates to 142 or 144. 2) We also know that the number of GEMM(oncopy) invocations is proportional to the matrix multiplication size in a Conv2d; thus, the attacker can compare the offline profiling results with the processed traces and estimate the parameters. For instance, the 1st Conv2d has 20 GEMM(oncopy) invocations, and we approximately know the set of input dimensions, e.g., (20–30, 112, 112), from the previous block estimation. Thus, our attacker only profiles the variations of input channels {20–30}, kernels {1, 3, 5}, and strides {1, 2} (60 cases in total) and checks whether there is a match. Moreover, 3) the Conv2d after a DepthConv2d is a pointwise linear operation whose kernel size and stride are one, which further reduces the attacker’s effort. Our attacker runs this elimination process and finally narrows down to only one architecture with the correct set of computational parameters, i.e., 0% error." }, { "heading": "5 DISCUSSION", "text": "In this section, we discuss defense mechanisms that prevent our attacker from reconstructing the victim’s DL system with an exact match. Prior work on defenses against cache side-channel attacks proposed system-level solutions (Kim et al., 2012; Liu et al., 2016; Werner et al., 2019). However, applying them requires infrastructure-wide changes from cloud providers. Also, even if the infrastructure is resilient to cache side-channel attacks, an attacker can leverage other attack vectors to leak similar information. Thus, we focus on defenses that can be implemented in DL frameworks.

We design our defense mechanisms to obfuscate what the attacker observes via cache side-channels by increasing the noise in the computations supported by DL frameworks. We discuss four approaches that blend noise into components of a DL framework; however, these mechanisms introduce a computational overhead by performing additional operations. This highlights that defending against our attack is not trivial, and efficient countermeasures require further research.

Padding Zeros to the Matrix Multiplication Operands. Our reconstruction algorithm estimates computational parameters such as kernel sizes or strides based on the time taken for matrix multiplication. Hence, we consider increasing the size of the operands randomly by padding zeros to them. We keep the original sizes of the operands and, after the multiplication of the augmented tensors, we convert the resulting tensor into one of the correct dimensions by removing the extra elements. With this augmentation, our attacker finds it difficult to reconstruct the victim’s DL system exactly by monitoring a single query. However, if our attacker can observe computations over multiple queries, the attacker can cancel out the noise and estimate the parameters correctly. A minimal sketch of this defense is shown below.
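Below is a hedged PyTorch sketch of the zero-padding idea; the padding bound `max_pad` is an arbitrary choice for illustration.

```python
# A sketch of the zero-padding defense: the operand sizes (and hence the
# observable multiplication time) are randomized, while the numerical
# result is unchanged, because the extra rows/columns are all zero.
import torch

def obfuscated_matmul(a, b, max_pad=64):
    m, k = a.shape
    k2, n = b.shape
    assert k == k2
    pm, pk, pn = (int(torch.randint(0, max_pad, (1,))) for _ in range(3))
    a_pad = torch.zeros(m + pm, k + pk)
    b_pad = torch.zeros(k + pk, n + pn)
    a_pad[:m, :k] = a
    b_pad[:k, :n] = b
    # The padded entries contribute nothing; slice back to the true shape.
    return (a_pad @ b_pad)[:m, :n]

a, b = torch.randn(8, 16), torch.randn(16, 4)
assert torch.allclose(obfuscated_matmul(a, b), a @ b, atol=1e-5)
```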
Adding Null/Useless Network Operations. Our reconstruction attack assumes all the computations observed in the Flush+Reload trace are used to compute the output of the DL system. Thus, a defender can modify the victim’s architecture so that it includes identity layers or branches whose outputs are not used. We hypothesize that a small number of null/useless operations will not increase the attacker’s computational burden significantly; such an addition only increases the time needed to reconstruct the victim’s architecture by a few hours. If the defender includes an excessive number of null/useless layers or branches, this can significantly increase the reconstruction time. However, this defense suffers from two issues: 1) the defense may still not make reconstruction impossible, and 2) the victim also has to perform the additional operations, which increases network evaluation time significantly.

Shuffling the Computation Order. We have seen in popular DL frameworks that, once a network architecture is defined, the computational order of the operations is also invariant. Instead, we can shuffle the computation order of the victim’s DL system each time the system processes an input. In particular, we can identify the dependencies between operations in a victim’s DL system and compute the independent operations in a different order each time. This approach makes the observations from cache side-channels inconsistent, which results in an exponential number of candidate architectures that our attacker needs to consider. However, to compute the independent operations separately, the defender needs to store intermediate results in memory while processing an input; thus, this approach increases the space overhead of the DL computations.

Running Decoy Operations in Parallel. Lastly, we can make a DL framework run separate networks (decoy operations) in parallel on the same physical host. These networks obfuscate what our attacker will observe via Flush+Reload. Here, the attacker cannot reconstruct the victim architecture by monitoring a single query because the computational order does not reflect how the victim’s architecture is defined. However, if our attacker can observe the computations over multiple queries, the attacker can use the frequent subsequence mining (FSM) that we used for block identification to identify a repeated set of operations and can reconstruct the victim architecture. This defense also increases network evaluation time by running extra operations on the same machine." }, { "heading": "6 CONCLUSIONS AND FUTURE WORK", "text": "This work presents an attack that reconstructs a victim’s novel DL system through the information leakage from a cache side-channel, Flush+Reload. We steal the key components of the victim’s system: a novel pre-processing pipeline and a novel network architecture. Observing the DL computations and the time to complete each computation enables the attacker to populate all candidate computational graphs and prune them with our parameter estimation process. In our experiments, we demonstrate the feasibility of this reconstruction attack by reconstructing MalConv, a novel pre-processing pipeline for malicious file detection, and ProxylessNAS-CPU, a novel architecture for ImageNet classification optimized to run on CPUs. We do this with 0% error. As novel DL systems become trade secrets, our results highlight the demand for future work on countermeasures against model theft." }, { "heading": "ACKNOWLEDGMENTS", "text": "We thank the anonymous reviewers for their valuable feedback. This research was partially supported by the Department of Defense, by NSF grants #CNS-1933033, #CNS-1840893, #CNS-1453045 (CAREER), by a research partnership award from Cisco and by financial assistance award 70NANB15H328 from the U.S. Department of Commerce, National Institute of Standards and Technology. We would like to thank the NSF REU-CAAR program (NSF grant #CCF-1560193)." }, { "heading": "A APPLICABILITY TO GPUS", "text": "Our attack is not fundamentally different for GPUs. In most deep learning frameworks, when a network performs a computation, it invokes the same function implemented in C++, and that function decides whether the back-end computation can use GPUs or not. This practice maximizes the hardware compatibility of a framework; however, it also makes the framework vulnerable to our attacker, who can still observe the common functions listed in Table 3 by monitoring the shared cache.
On GPUs the timings would be different, so we would have to profile the computational times, e.g., the time taken for matrix multiplications with various sizes of tensor operands. However, on both CPUs and GPUs, the computation time is proportional to the size of the tensor operands, which enables our attacker to estimate the architectural parameters from timing observations." }, { "heading": "B DL COMPUTATIONS MONITORED IN PYTORCH AND TENSORFLOW", "text": "Figure 5 describes the reconstruction process of a small network in both the PyTorch and TensorFlow frameworks. On the left, we have the ground truth of the ToyNet architecture, which represents an example of a possible residual block in a victim network. In the middle and right, we show the observations of an adversary monitoring both PyTorch and TensorFlow code. The first entry indicates the monitored function corresponding to the desired architectural attribute. The second entry indicates the timestamp at which the adversary observes these functions, and the last entry is the number of general matrix multiplication (GEMM) function calls for the given layer observation.

Naming conventions vary slightly between the two frameworks, but the information inferred is the same. The adversary attacking both networks sees function calls that correspond to architectural attributes in the same order: Conv2d, BatchNorm2d, Conv2d/DepthwiseConv, BatchNorm2d, ReLU6, and TensorAdd. PyTorch does not distinguish between Conv2d and DepthwiseConv, but as stated in 4.1, we can differentiate the layers by timing data. Additionally, PyTorch and TensorFlow use different linear algebra libraries to perform matrix computation, so the implementations differ slightly. However, they both use variations on matrix multiplication algorithms that take into account system-level optimizations, such as cache size (e.g., Goto’s algorithm). In both cases, we observe operations in nested iterations of these implementations and are able to monitor instructions that correspond to the size of the matrices being multiplied, giving an adversary the ability to estimate the parameters of the convolution layers.

To perform the estimations of these layer parameters, the adversary can profile candidates offline on similar hardware. They can then create a dataset of candidate parameters for given observation ranges. For instance, the number of observed GEMM calls in the PyTorch example for the depthwise convolution layer gives the attacker the information that there are 10 output channels, and therefore also 10 output channels in the 1st convolution. Additionally, the observed GEMM calls for the 1st convolution layer give candidate kernel sizes of 3 and 5. Likewise, in TensorFlow, the observed instructions fit candidate kernel sizes of 3 or 5, and 0–24 output channels. Therefore, these exploitable vulnerabilities exist independent of the specific deep learning framework a victim is using. (Customization of operations in TensorFlow: https://www.tensorflow.org/guide/create_op)" }, { "heading": "C LIST OF FUNCTIONS MONITORED VIA FLUSH+RELOAD", "text": "Table 3 shows the exact lines of code we monitor in the PyTorch and TensorFlow frameworks. We use PyTorch v1.2.0 and TensorFlow v1.14.0. In both frameworks, we are able to monitor a similar set of DL computations in the C++ native implementations. However, the back-end libraries supporting the matrix multiplications are different, i.e., PyTorch is compiled with OpenBLAS whereas TensorFlow uses Eigen and MKL-DNN.
Even though the libraries are different, the multiplications are implemented using Goto’s algorithm (Goto & Geijn, 2008). Therefore, we monitor the number of iterations of the for-loops to estimate the overall size of a matrix multiplication. (Monitored versions: https://github.com/pytorch/pytorch/commit/8554416a199c4cec01c60c7015d8301d2bb39b64 and https://github.com/tensorflow/tensorflow/commit/87989f69597d6b2d60de8f112e1e3cea23be7298)" }, { "heading": "D MNASNET SEARCH SPACE", "text": "Tan et al. (2019) utilize a hierarchical search space over six parameters: ConvOp, KernelSize, SERatio, SkipOp, FilterSize, and #Layers. They choose to partition a CNN into a known, finite set of blocks and then further divide these blocks into possibly repeated layers. The number of repeats per layer in a given block i is a searchable parameter Ni, which is bounded to within ±1 of the number of layers in MobileNetV2 on which block i is based. These layers are further divided into three possible network layers (ConvOp): regular convolution, depthwise convolution, or mobile inverted bottleneck convolution. Additionally, the network layer parameters can vary. These parameters include the convolution kernel size (KernelSize), the squeeze-and-excitation ratio (SERatio), a possible skip op (SkipOp), and the output filter size (FilterSize). The squeeze-and-excitation ratio (SERatio) of a given layer varies between 0 and 0.025; the convolution kernel size varies between 3 and 5; the skip op is either pooling, identity residual, or no skip; and the filter size varies between 0.75, 1.0, and 1.25 times the filter size of the corresponding block in MobileNetV2. Overall, this gives a claimed typical search space size of 10^13 possibilities with 5 blocks, 3 average layers per block, and 432 options for the sub-search space of each block. This size compares to a per-layer approach with the same parameters, which has a search space size of 10^39." }, { "heading": "E SEARCHING CANDIDATE COMPUTATIONAL GRAPHS", "text": "" } ]
null
null
SP:4fba557254310577845d291e0f216dc76403c9ac
[ "The paper tries to handle the class imbalance problem by decoupling the learning process into representation learning and classification, in contrast to the current methods that jointly learn both of them. They comprehensively study several sampling methods for representation learning and different strategies for classification. They find that instance-balanced sampling gives the best representation, and simply adjusting the classifier will equip the model with long-tailed recognition ability. They achieve start of art on long-tailed data (ImageNet-LT, Places-LT and iNaturalist).", "The paper considers the problem of long-tailed image classification, where the class frequencies during (supervised) training of an image classifier are heavily skewed, so that the classifier underfits on under-represented classes. Different known and novel sampling schemes during training as well as post-training procedures to restore the class balance after training are studied. The overall best strategy turns out to be naive training on the skewed training set, and post-hoc rebalancing only of the classification stage. The paper presents various ablation studies and comparisons with related methods on the ImageNet-LT, Places-LT, and iNaturalist data sets, achieving state-of-the-art performance." ]
The long-tail distribution of the visual world poses great challenges for deep learning based classification models on how to handle the class imbalance problem. Existing solutions usually involve class-balancing strategies, e.g. by loss re-weighting, data re-sampling, or transfer learning from head- to tail-classes, but most of them adhere to the scheme of jointly learning representations and classifiers. In this work, we decouple the learning procedure into representation learning and classification, and systematically explore how different balancing strategies affect them for long-tailed recognition. The findings are surprising: (1) data imbalance might not be an issue in learning high-quality representations; (2) with representations learned with the simplest instance-balanced (natural) sampling, it is also possible to achieve strong long-tailed recognition ability by adjusting only the classifier. We conduct extensive experiments and set new state-of-the-art performance on common long-tailed benchmarks like ImageNet-LT, Places-LT and iNaturalist, showing that it is possible to outperform carefully designed losses, sampling strategies, even complex modules with memory, by using a straightforward approach that decouples representation and classification. Our code is available at https://github.com/facebookresearch/classifier-balancing.
[ { "affiliations": [], "name": "LONG-TAILED RECOGNITION" }, { "affiliations": [], "name": "Bingyi Kang" }, { "affiliations": [], "name": "Saining Xie" }, { "affiliations": [], "name": "Marcus Rohrbach" }, { "affiliations": [], "name": "Zhicheng Yan" }, { "affiliations": [], "name": "Albert Gordo" }, { "affiliations": [], "name": "Jiashi Feng" }, { "affiliations": [], "name": "Yannis Kalantidis" } ]
[ { "authors": [ "Kaidi Cao", "Colin Wei", "Adrien Gaidon", "Nikos Arechiga", "Tengyu Ma" ], "title": "Learning imbalanced datasets with label-distribution-aware margin loss", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Nitesh V Chawla", "Kevin W Bowyer", "Lawrence O Hall", "W Philip Kegelmeyer" ], "title": "Smote: synthetic minority over-sampling technique", "venue": "Journal of artificial intelligence research,", "year": 2002 }, { "authors": [ "Yin Cui", "Yang Song", "Chen Sun", "Andrew Howard", "Serge Belongie" ], "title": "Large scale fine-grained categorization and domain-specific transfer learning", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Yin Cui", "Menglin Jia", "Tsung-Yi Lin", "Yang Song", "Serge Belongie" ], "title": "Class-balanced loss based on effective number of samples", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Chris Drummond", "Robert C Holte" ], "title": "C4. 5, class imbalance, and cost sensitivity: why undersampling beats over-sampling", "venue": "In Workshop on learning from imbalanced datasets II,", "year": 2003 }, { "authors": [ "Spyros Gidaris", "Nikos Komodakis" ], "title": "Dynamic few-shot visual learning without forgetting", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Samantha Guerriero", "Barbara Caputo", "Thomas Mensink" ], "title": "Deep nearest class mean classifiers", "venue": "In International Conference on Learning Representations, Worskhop Track,", "year": 2018 }, { "authors": [ "Agrim Gupta", "Piotr Dollar", "Ross Girshick" ], "title": "Lvis: A dataset for large vocabulary instance segmentation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Hui Han", "Wen-Yuan Wang", "Bing-Huan Mao" ], "title": "Borderline-smote: a new over-sampling method in imbalanced data sets learning", "venue": "In International conference on intelligent computing,", "year": 2005 }, { "authors": [ "Bharath Hariharan", "Ross Girshick" ], "title": "Low-shot visual recognition by shrinking and hallucinating features", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Munawar Hayat", "Salman Khan", "Waqas Zamir", "Jianbing Shen", "Ling Shao" ], "title": "Max-margin class imbalanced learning with gaussian affinity", "venue": null, "year": 1901 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Chen Huang", "Yining Li", "Chen Change Loy", "Xiaoou Tang" ], "title": "Learning deep representation for imbalanced classification", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Chen Huang", "Yining Li", "Change Loy Chen", "Xiaoou Tang" ], "title": "Deep imbalanced learning for face recognition and attribute prediction", 
"venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2019 }, { "authors": [ "Salman Khan", "Munawar Hayat", "Syed Waqas Zamir", "Jianbing Shen", "Ling Shao" ], "title": "Striking the right balance with uncertainty", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Salman H Khan", "Munawar Hayat", "Mohammed Bennamoun", "Ferdous A Sohel", "Roberto Togneri" ], "title": "Cost-sensitive learning of deep feature representations from imbalanced data", "venue": "IEEE transactions on neural networks and learning systems,", "year": 2017 }, { "authors": [ "Tsung-Yi Lin", "Priya Goyal", "Ross Girshick", "Kaiming He", "Piotr Dollár" ], "title": "Focal loss for dense object detection", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Ziwei Liu", "Zhongqi Miao", "Xiaohang Zhan", "Jiayun Wang", "Boqing Gong", "Stella X Yu" ], "title": "Large-scale long-tailed recognition in an open world", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Sgdr: Stochastic gradient descent with warm restarts", "venue": "arXiv preprint arXiv:1608.03983,", "year": 2016 }, { "authors": [ "Dhruv Mahajan", "Ross Girshick", "Vignesh Ramanathan", "Kaiming He", "Manohar Paluri", "Yixuan Li", "Ashwin Bharambe", "Laurens van der Maaten" ], "title": "Exploring the limits of weakly supervised pretraining", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Greg S Corrado", "Jeff Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Hyun Oh Song", "Yu Xiang", "Stefanie Jegelka", "Silvio Savarese" ], "title": "Deep metric learning via lifted structured feature embedding", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in pytorch", "venue": "NIPS-W,", "year": 2017 }, { "authors": [ "Sylvestre-Alvise Rebuffi", "Alexander Kolesnikov", "Georg Sperl", "Christoph H Lampert" ], "title": "icarl: Incremental classifier and representation learning", "venue": "In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Mengye Ren", "Wenyuan Zeng", "Bin Yang", "Raquel Urtasun" ], "title": "Learning to reweight examples for robust deep learning", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Li Shen", "Zhouchen Lin", "Qingming Huang" ], "title": "Relay backpropagation for effective learning of deep convolutional neural networks", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Jun Shu", "Qi Xie", "Lixuan Yi", "Qian Zhao", "Sanping Zhou", "Zongben Xu", "Deyu Meng" ], "title": "Meta-weightnet: Learning an explicit mapping for sample weighting", "venue": null, "year": 1902 }, { "authors": [ "Jake Snell", "Kevin Swersky", "Richard Zemel" ], "title": "Prototypical networks for few-shot learning", "venue": "In Advances in Neural Information Processing 
Systems,", "year": 2017 }, { "authors": [ "Yu-Xiong Wang", "Deva Ramanan", "Martial Hebert" ], "title": "Learning to model the tail", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Yu-Xiong Wang", "Ross Girshick", "Martial Hebert", "Bharath Hariharan" ], "title": "Low-shot learning from imaginary data", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Saining Xie", "Ross Girshick", "Piotr Dollár", "Zhuowen Tu", "Kaiming He" ], "title": "Aggregated residual transformations for deep neural networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Xi Yin", "Xiang Yu", "Kihyuk Sohn", "Xiaoming Liu", "Manmohan Chandraker" ], "title": "Feature transfer learning for face recognition with under-represented data", "venue": "Proceeding of IEEE Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Xiao Zhang", "Zhiyuan Fang", "Yandong Wen", "Zhifeng Li", "Yu Qiao" ], "title": "Range loss for deep face recognition with long-tailed training data", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Yubo Zhang", "Pavel Tokmakov", "Martial Hebert", "Cordelia Schmid" ], "title": "A study on action detection in the wild", "venue": "arXiv preprint arXiv:1904.12993,", "year": 2019 }, { "authors": [ "Yaoyao Zhong", "Weihong Deng", "Mei Wang", "Jiani Hu", "Jianteng Peng", "Xunqiang Tao", "Yaohai Huang" ], "title": "Unequal-training for deep face recognition with long-tailed noisy data", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Bolei Zhou", "Agata Lapedriza", "Aditya Khosla", "Aude Oliva", "Antonio Torralba" ], "title": "Places: A 10 million image database for scene recognition", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2017 }, { "authors": [ "Liu" ], "title": "VARYING THE BACKBONE ARCHITECTURE SIZE ImageNet-LT. In Figure 5 we compare the performance of different backbone architecture sizes (model capacity) under different methods, including of different methods 1) OLTR (Liu et al., 2019) using the authors’ codebase settings (OLTR*); 2) OLTR using the representation learning stage detailed in Section 5 (OLTR**); 3) cRT with the memory module", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Visual recognition research has made rapid advances during the past years, driven primarily by the use of deep convolutional neural networks (CNNs) and large image datasets, most importantly the ImageNet Challenge (Russakovsky et al., 2015). Such datasets are usually artificially balanced with respect to the number of instances for each object/class in the training set. Visual phenomena, however, follow a long-tailed distribution that many standard approaches fail to properly model, leading to a significant drop in accuracy. Motivated by this, a number of works have recently emerged that try to study long-tailed recognition, i.e., recognition in a setting where the number of instances in each class highly varies and follows a long-tailed distribution.\nWhen learning with long-tailed data, a common challenge is that instance-rich (or head) classes dominate the training procedure. The learned classification model tends to perform better on these classes, while performance is significantly worse for instance-scarce (or tail) classes. To address this issue and to improve performance across all classes, one can re-sample the data or design specific loss functions that better facilitate learning with imbalanced data (Chawla et al., 2002; Cui et al., 2019; Cao et al., 2019). Another direction is to enhance recognition performance of the tail classes by transferring knowledge from the head classes (Wang et al., 2017; 2018; Zhong et al., 2019; Liu et al., 2019). Nevertheless, the common belief behind existing approaches is that designing proper sampling strategies, losses, or even more complex models, is useful for learning high-quality representations for long-tailed recognition.\nMost aforementioned approaches thus learn the classifiers used for recognition jointly with the data representations. However, such a joint learning scheme makes it unclear how the long-tailed recognition ability is achieved—is it from learning a better representation or by handling the data imbalance better via shifting classifier decision boundaries? To answer this question, we take one step back and decouple long-tail recognition into representation learning and classification. For learning rep-\nresentations, the model is exposed to the training instances and trained through different sampling strategies or losses. For classification, upon the learned representations, the model recognizes the long-tailed classes through various classifiers. We evaluate the performance of various sampling and classifier training strategies for long-tailed recognition under both joint and decoupled learning schemes.\nSpecifically, we first train models to learn representations with different sampling strategies, including the standard instance-based sampling, class-balanced sampling and a mixture of them. Next, we study three different basic approaches to obtain a classifier with balanced decision boundaries, on top of the learned representations. 
They are 1) re-training the parametric linear classifier in a class-balancing manner (i.e., re-sampling); 2) non-parametric nearest class mean classifier, which classifies the data based on their closest class-specific mean representations from the training set; and 3) normalizing the classifier weights, which adjusts the weight magnitude directly to be more balanced, adding a temperature to modulate the normalization procedure.\nWe conduct extensive experiments to compare the aforementioned instantiations of the decoupled learning scheme with the conventional scheme that jointly trains the classifier and the representations. We also compare to recent, carefully designed and more complex models, including approaches using memory (e.g., OLTR (Liu et al., 2019)) as well as more sophisticated losses (Cui et al., 2019). From our extensive study across three long-tail datasets, ImageNet-LT, Places-LT and iNaturalist, we make the following intriguing observations:\n• We find that decoupling representation learning and classification has surprising results that challenge common beliefs for long-tailed recognition: instance-balanced sampling learns the best and most generalizable representations.\n• It is advantageous in long-tailed recognition to re-adjust the decision boundaries specified by the jointly learned classifier during representation learning: Our experiments show that this can either be achieved by retraining the classifier with class-balanced sampling or by a simple, yet effective, classifier weight normalization which has only a single hyperparameter controlling the “temperature” and which does not require additional training.\n• By applying the decoupled learning scheme to standard networks (e.g., ResNeXt), we achieve significantly higher accuracy than well established state-of-the-art methods (different sampling strategies, new loss designs and other complex modules) on multiple longtailed recognition benchmark datasets, including ImageNet-LT, Places-LT, and iNaturalist." }, { "heading": "2 RELATED WORK", "text": "Long-tailed recognition has attracted increasing attention due to the prevalence of imbalanced data in real-world applications (Wang et al., 2017; Zhou et al., 2017; Mahajan et al., 2018; Zhong et al., 2019; Gupta et al., 2019). Recent studies have mainly pursued the following three directions:\nData distribution re-balancing. Along this direction, researchers have proposed to re-sample the dataset to achieve a more balanced data distribution. These methods include over-sampling (Chawla et al., 2002; Han et al., 2005) for the minority classes (by adding copies of data), undersampling (Drummond et al., 2003) for the majority classes (by removing data), and class-balanced sampling (Shen et al., 2016; Mahajan et al., 2018) based on the number of samples for each class.\nClass-balanced Losses. Various methods are proposed to assign different losses to different training samples for each class. The loss can vary at class-level for matching a given data distribution and improving the generalization of tail classes (Cui et al., 2019; Khan et al., 2017; Cao et al., 2019; Khan et al., 2019; Huang et al., 2019). A more fine-grained control of the loss can also be achieved at sample level, e.g. with Focal loss (Lin et al., 2017), Meta-Weight-Net (Shu et al., 2019), re-weighted training (Ren et al., 2018), or based on Bayesian uncertainty (Khan et al., 2019). Recently, Hayat et al. 
(2019) proposed to balance the classification regions of head and tail classes using an affinity measure to enforce cluster centers of classes to be uniformly spaced and equidistant.

Transfer learning from head- to tail classes. Transfer-learning based methods address the issue of imbalanced training data by transferring features learned from head classes with abundant training instances to under-represented tail classes. Recent work includes transferring the intra-class variance (Yin et al., 2019) and transferring semantic deep features (Liu et al., 2019). However, it is usually a non-trivial task to design specific modules (e.g., external memory) for feature transfer.

A benchmark for low-shot recognition was proposed by Hariharan & Girshick (2017) and consists of a representation learning phase without access to the low-shot classes and a subsequent low-shot learning phase. In contrast, the setup for long-tail recognition assumes access to both head and tail classes and a more continuous decrease in class labels. Recently, Liu et al. (2019) and Cao et al. (2019) adopt re-balancing schedules that learn representation and classifier jointly within a two-stage training scheme. OLTR (Liu et al., 2019) uses instance-balanced sampling to first learn representations that are fine-tuned in a second stage with class-balanced sampling together with a memory module. LDAM (Cao et al., 2019) introduces a label-distribution-aware margin loss that expands the decision boundaries of few-shot classes. In Section 5 we exhaustively compare to OLTR and LDAM, since they report state-of-the-art results for the ImageNet-LT, Places-LT and iNaturalist datasets. In our work, we argue for decoupling representation and classification. We demonstrate that in a long-tailed scenario, this separation allows straightforward approaches to achieve high recognition performance, without the need for designing sampling strategies, balance-aware losses or adding memory modules." }, { "heading": "3 LEARNING REPRESENTATIONS FOR LONG-TAILED RECOGNITION", "text": "For long-tailed recognition, the training set follows a long-tailed distribution over the classes. As we have less data for infrequent classes during training, models trained on imbalanced datasets tend to exhibit under-fitting on the few-shot classes. But in practice we are interested in obtaining a model capable of recognizing all classes well. Various re-sampling strategies (Chawla et al., 2002; Shen et al., 2016; Cao et al., 2019), loss re-weighting schemes, and margin regularization over few-shot classes have thus been proposed. However, it remains unclear how they achieve performance improvement, if any, for long-tailed recognition. Here we systematically investigate their effectiveness by disentangling representation learning from classifier learning, in order to identify what indeed matters for long-tailed recognition.

Notation. We define the notation used throughout the paper. Let X = {x_i, y_i}, i ∈ {1, . . . , n} be a training set, where y_i is the label for data point x_i. Let n_j denote the number of training samples for class j, and let $n = \sum_{j=1}^{C} n_j$ be the total number of training samples. Without loss of generality, we assume that the classes are sorted by cardinality in decreasing order, i.e., if i < j, then n_i ≥ n_j. Additionally, since we are in a long-tail setting, n_1 ≫ n_C. Finally, we denote with f(x; θ) = z the representation for x, where f(x; θ) is implemented by a deep CNN model with parameter θ.
The final class prediction ỹ is given by a classifier function g, such that ỹ = arg max g(z). For the common case, g is a linear classifier, i.e., $g(z) = W^{\top} z + b$, where W denotes the classifier weight matrix, and b is the bias. We present other instantiations of g in Section 4.

Sampling strategies. In this section we present a number of sampling strategies that aim at re-balancing the data distribution for representation and classifier learning. For most sampling strategies presented below, the probability p_j of sampling a data point from class j is given by:

$$p_j = \frac{n_j^{q}}{\sum_{i=1}^{C} n_i^{q}}, \qquad (1)$$

where q ∈ [0, 1] and C is the number of training classes. Different sampling strategies arise for different values of q, and below we present strategies that correspond to q = 1, q = 0, and q = 1/2.

Instance-balanced sampling. This is the most common way of sampling data, where each training example has an equal probability of being selected. For instance-balanced sampling, the probability $p_j^{IB}$ is given by Equation 1 with q = 1, i.e., a data point from class j will be sampled proportionally to the cardinality n_j of the class in the training set.

Class-balanced sampling. For imbalanced datasets, instance-balanced sampling has been shown to be sub-optimal (Huang et al., 2016; Wang et al., 2017) as the model under-fits for few-shot classes, leading to lower accuracy, especially for balanced test sets. Class-balanced sampling has been used to alleviate this discrepancy, as, in this case, each class has an equal probability of being selected. The probability $p_j^{CB}$ is given by Eq. (1) with q = 0, i.e., $p_j^{CB} = 1/C$. One can see this as a two-stage sampling strategy, where first a class is selected uniformly from the set of classes, and then an instance from that class is subsequently uniformly sampled.

Square-root sampling. A number of variants of the previous sampling strategies have been explored. A commonly used variant is square-root sampling (Mikolov et al., 2013; Mahajan et al., 2018), where q is set to 1/2 in Eq. (1) above.

Progressively-balanced sampling. Recent approaches (Cui et al., 2018; Cao et al., 2019) utilized mixed ways of sampling, i.e., combinations of the sampling strategies presented above. In practice this involves first using instance-balanced sampling for a number of epochs, and then class-balanced sampling for the last epochs. These mixed sampling approaches require setting the number of epochs before switching the sampling strategy as an explicit hyper-parameter. Here, we experiment with a softer version, progressively-balanced sampling, that progressively “interpolates” between instance-balanced and class-balanced sampling as learning progresses. Its sampling probability/weight p_j for class j is now a function of the epoch t,

$$p_j^{PB}(t) = \Big(1 - \frac{t}{T}\Big)\, p_j^{IB} + \frac{t}{T}\, p_j^{CB}, \qquad (2)$$

where T is the total number of epochs. Figure 3 in the appendix depicts the sampling probabilities. A minimal code sketch of these sampling weights follows.
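The following minimal sketch computes the class-level sampling weights of Eq. (1) and Eq. (2); the toy class counts are illustrative placeholders.

```python
# Sampling weights from Eq. (1) and Eq. (2); n is the vector of per-class
# training counts. q = 1, 0, and 0.5 recover instance-balanced,
# class-balanced, and square-root sampling, respectively.
import numpy as np

def sampling_probs(n, q):
    n = np.asarray(n, dtype=np.float64)
    w = n ** q
    return w / w.sum()                           # Eq. (1)

def progressively_balanced(n, t, T):
    p_ib = sampling_probs(n, q=1.0)              # instance-balanced
    p_cb = sampling_probs(n, q=0.0)              # class-balanced
    return (1 - t / T) * p_ib + (t / T) * p_cb   # Eq. (2)

n = [1000, 100, 10]                              # a long-tailed toy dataset
print(sampling_probs(n, 1.0))                    # ~ [0.901, 0.090, 0.009]
print(sampling_probs(n, 0.0))                    # [1/3, 1/3, 1/3]
print(progressively_balanced(n, t=45, T=90))     # halfway: average of both
```

In a PyTorch pipeline, these class-level probabilities would typically be turned into per-example weights p_j / n_j and passed to torch.utils.data.WeightedRandomSampler.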
Loss re-weighting strategies. Loss re-weighting functions for imbalanced data have been extensively studied, and it is beyond the scope of this paper to examine all related approaches. What is more, we found that some of the most recent approaches reporting high performance were hard to train and reproduce, and in many cases require extensive, dataset-specific hyper-parameter tuning. In Section A of the Appendix we summarize the latest, best performing methods from this area. In Section 5 we show that, without bells and whistles, baseline methods equipped with a properly balanced classifier can perform equally well, if not better, than the latest loss re-weighting approaches." }, { "heading": "4 CLASSIFICATION FOR LONG-TAILED RECOGNITION", "text": "When learning a classification model on balanced datasets, the classifier weights W and b are usually trained jointly with the model parameters θ for extracting the representation f(x_i; θ), by minimizing the cross-entropy loss between the ground truth y_i and the prediction $W^{\top} f(x_i; \theta) + b$. This is also a typical baseline for long-tailed recognition. Though various approaches of re-sampling, re-weighting and transferring representations from head to tail classes have been proposed, the general scheme remains the same: classifiers are learned jointly with the representations, either end-to-end, or via a two-stage approach where the classifier and the representation are jointly fine-tuned with variants of class-balanced sampling as a second stage (Cui et al., 2018; Cao et al., 2019).

In this section, we consider decoupling the representation from the classification in long-tailed recognition. We present ways of learning classifiers aiming at rectifying the decision boundaries on head- and tail-classes, via fine-tuning with different sampling strategies or via other, non-parametric ways such as nearest class mean classifiers. We also consider an approach that rebalances the classifier weights and exhibits high long-tailed recognition accuracy without any additional retraining.

Classifier Re-training (cRT). A straightforward approach is to re-train the classifier with class-balanced sampling. That is, keeping the representations fixed, we randomly re-initialize and optimize the classifier weights W and b for a small number of epochs using class-balanced sampling. A similar methodology was also recently used in (Zhang et al., 2019) for action recognition on a long-tail video dataset.

Nearest Class Mean classifier (NCM). Another commonly used approach is to first compute the mean feature representation for each class on the training set and then perform nearest neighbor search, either using cosine similarity or the Euclidean distance computed on L2-normalized mean features (Snell et al., 2017; Guerriero et al., 2018; Rebuffi et al., 2017). Despite its simplicity, this is a strong baseline (cf. the experimental evaluation in Section 5); the cosine similarity alleviates the weight imbalance problem via its inherent normalization (see also Figure 4).

τ-normalized classifier (τ-normalized). We investigate an efficient approach to re-balance the decision boundaries of classifiers, inspired by an empirical observation: after joint training with instance-balanced sampling, the norms of the weights ‖w_j‖ are correlated with the cardinality of the classes n_j, while, after fine-tuning the classifiers using class-balanced sampling, the norms of the classifier weights tend to be more similar (cf. Figure 2-left).

Inspired by the above observations, we consider rectifying the imbalance of decision boundaries by adjusting the classifier weight norms directly through the following τ-normalization procedure. Formally, let $W = \{w_j\} \in \mathbb{R}^{d \times C}$, where $w_j \in \mathbb{R}^{d}$ are the classifier weights corresponding to class j. We scale the weights of W to get W̃ = {w̃_j} by:

$$\tilde{w}_i = \frac{w_i}{\|w_i\|^{\tau}}, \qquad (3)$$

where τ is a hyper-parameter controlling the “temperature” of the normalization, and ‖·‖ denotes the L2 norm. When τ = 1, it reduces to standard L2-normalization. When τ = 0, no scaling is imposed. We empirically choose τ ∈ (0, 1) such that the weights can be rectified smoothly. After τ-normalization, the classification logits are given by $\hat{y} = \tilde{W}^{\top} f(x; \theta)$. Note that we discard the bias term b here due to its negligible effect on the logits and final predictions. A minimal sketch of the NCM and τ-normalized classifiers is given below.
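The sketch below illustrates both training-free classifiers in PyTorch; the feature dimension, class count and τ = 0.7 are illustrative placeholders.

```python
# A hedged sketch of the two training-free classifiers: NCM on L2-normalized
# class means, and the tau-normalized classifier of Eq. (3).
import torch
import torch.nn.functional as F

def ncm_logits(feats, class_means):
    """Cosine similarity to the L2-normalized per-class mean features."""
    return F.normalize(feats, dim=1) @ F.normalize(class_means, dim=1).t()

def tau_normalize(W, tau=0.7):
    """Eq. (3): scale each classifier vector w_i by 1 / ||w_i||^tau."""
    norms = W.norm(dim=1, keepdim=True)          # W has shape (C, d)
    return W / norms.pow(tau)

feats = torch.randn(4, 128)                      # a batch of representations
W = torch.randn(1000, 128)                       # jointly trained classifier
logits = feats @ tau_normalize(W).t()            # bias discarded, as above
```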
Learnable weight scaling (LWS). Another way of interpreting τ-normalization would be to think of it as a re-scaling of the magnitude of each classifier w_i, keeping the direction unchanged. This could be written as

$$\tilde{w}_i = f_i \cdot w_i, \quad \text{where} \quad f_i = \frac{1}{\|w_i\|^{\tau}}. \qquad (4)$$

Although for the τ-normalized classifier τ is in general chosen through cross-validation, we further investigate learning f_i on the training set, using class-balanced sampling (like cRT). In this case, we keep both the representations and classifier weights fixed and only learn the scaling factors f_i. We denote this variant as Learnable Weight Scaling (LWS) in our experiments." }, { "heading": "5 EXPERIMENTS", "text": "" }, { "heading": "5.1 EXPERIMENTAL SETUP", "text": "Datasets. We perform extensive experiments on three large-scale long-tailed datasets: Places-LT (Liu et al., 2019), ImageNet-LT (Liu et al., 2019), and iNaturalist 2018 (iNaturalist, 2018). Places-LT and ImageNet-LT are artificially truncated from their balanced versions (Places2 (Zhou et al., 2017) and ImageNet-2012 (Deng et al., 2009)) so that the labels of the training set follow a long-tailed distribution. Places-LT contains images from 365 categories, and the number of images per class ranges from 4980 to 5. ImageNet-LT has 1000 classes, and the number of images per class ranges from 1280 to 5 images. iNaturalist 2018 is a real-world, naturally long-tailed dataset, consisting of samples from 8,142 species.

Evaluation Protocol. After training on the long-tailed datasets, we evaluate the models on the corresponding balanced test/validation datasets and report the commonly used top-1 accuracy over all classes, denoted as All. To better examine performance variations across classes with different numbers of examples seen during training, we follow Liu et al. (2019) and further report accuracy on three splits of the set of classes: Many-shot (more than 100 images), Medium-shot (20∼100 images) and Few-shot (less than 20 images). Accuracy is reported as a percentage. (A sketch of this split evaluation is shown at the end of this subsection.)

Implementation. We use the PyTorch (Paszke et al., 2017) framework for all experiments (we will open-source our codebase and models). For Places-LT, we choose ResNet-152 as the backbone network and pretrain it on the full ImageNet-2012 dataset, following Liu et al. (2019). On ImageNet-LT, we report results with ResNet-{10,50,101,152} (He et al., 2016) and ResNeXt-{50,101,152}(32x4d) (Xie et al., 2017), but mainly use ResNeXt-50 for analysis. Similarly, ResNet-{50,101,152} is also used for iNaturalist 2018. For all experiments, if not specified, we use the SGD optimizer with momentum 0.9, batch size 512, a cosine learning rate schedule (Loshchilov & Hutter, 2016) gradually decaying from 0.2 to 0, and image resolution 224×224. In the first, representation learning stage, the backbone network is usually trained for 90 epochs. In the second stage, i.e., for retraining a classifier (cRT), we restart the learning rate and train it for 10 epochs while keeping the backbone network fixed.
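A minimal sketch of this protocol, with assumed array inputs; the split thresholds follow the definition above.

```python
# Top-1 accuracy over all classes plus the many/medium/few-shot splits
# (>100, 20-100, <20 training images per class). Inputs are assumed to be
# integer arrays of predictions, labels, and per-class training counts.
import numpy as np

def split_accuracy(preds, labels, train_counts):
    preds, labels = np.asarray(preds), np.asarray(labels)
    counts = np.asarray(train_counts)[labels]    # shots of each test label
    correct = preds == labels
    masks = {"All":    np.ones_like(correct, dtype=bool),
             "Many":   counts > 100,
             "Medium": (counts >= 20) & (counts <= 100),
             "Few":    counts < 20}
    return {k: 100.0 * correct[m].mean() for k, m in masks.items() if m.any()}

# e.g. split_accuracy(model_preds, test_labels, images_per_class)
```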
" }, { "heading": "5.2 SAMPLING STRATEGIES AND DECOUPLED LEARNING", "text": "In Figure 1, we compare different sampling strategies for the conventional joint training scheme to a number of variations of the decoupled learning scheme on the ImageNet-LT dataset. For the joint training scheme (Joint), the linear classifier and the backbone for representation learning are jointly trained for 90 epochs using a standard cross-entropy loss and different sampling strategies, i.e., Instance-balanced, Class-balanced, Square-root, and Progressively-balanced. For the decoupled learning schemes, we present results when learning the classifier in all the ways presented in Section 4, i.e., re-initialize and re-train (cRT), Nearest Class Mean (NCM), as well as the τ-normalized classifier. Below, we discuss a number of key observations.

Sampling matters when training jointly. From the Joint results in Figure 1 across sampling methods and splits, we see consistent gains in performance when using better sampling strategies (see also Table 5). The trends are consistent for the overall performance as well as the medium- and few-shot classes, with progressively-balanced sampling giving the best results. As expected, instance-balanced sampling gives the highest performance for the many-shot classes. This is well expected, since the resulting model is highly skewed toward the many-shot classes. Our results for different sampling strategies with joint training validate related works that try to design better data sampling methods.

Joint or decoupled learning? For most cases presented in Figure 1, performance using decoupled methods is significantly better in terms of overall performance, as well as on all splits apart from the many-shot case. Even the non-parametric NCM approach is highly competitive in most cases, while cRT and τ-normalized outperform the jointly trained baseline by a large margin (i.e., 5% higher than the jointly learned classifier), even achieving 2% higher overall accuracy than the best jointly trained setup with progressively-balanced sampling. The gains are even higher for medium- and few-shot classes, at 5% and 11%, respectively.

To further justify our claim that it is beneficial to decouple representation and classifier, we experiment with fine-tuning the backbone network (ResNeXt-50) jointly with the linear classifier. In Table 1, we present results when fine-tuning the whole network with a standard or smaller (0.1×) learning rate, fine-tuning only the last block in the backbone, or only retraining the linear classifier with the representation fixed. Fine-tuning the whole network yields the worst performance (46.3% and 48.8%), while keeping the representation frozen performs best (49.5%). The trend is even more evident for the medium/few-shot classes. This result suggests that decoupling representation and classifier is desirable for long-tailed recognition.

Instance-balanced sampling gives the most generalizable representations. Among all decoupled methods, when it comes to overall performance and all splits apart from the many-shot classes, we see that Instance-balanced sampling gives the best results. This is particularly interesting, as it implies that data imbalance might not be an issue for learning high-quality representations." }, { "heading": "5.3 HOW TO BALANCE YOUR CLASSIFIER?", "text": "Among the ways of balancing the classifier explored in Figure 1, the non-parametric NCM seems to perform slightly worse than cRT and τ-normalization. Those two methods are consistently better in most cases apart from the few-shot case, where NCM performs comparably. The biggest drop for the NCM approach comes from the many-shot case.
It is nevertheless somewhat surprising that both the NCM and τ-normalized variants give competitive performance even though they are free of additional training and involve no additional sampling procedure. As discussed in Section 4, their strong performance may stem from their ability to adaptively adjust the decision boundaries for many-, medium- and few-shot classes (see also Figure 4).\nIn Figure 2 (left) we empirically show the L2 norms of the weight vectors for all classifiers, as well as the training data distribution sorted in descending order with respect to the number of instances in the training set. We can observe that the weight norm of the joint classifier (blue line) is positively correlated with the number of training instances of the corresponding class. More-shot classes tend to learn a classifier with larger magnitudes. As illustrated in Figure 4, this yields a wider classification boundary in feature space, allowing the classifier to have much higher accuracy on data-rich classes, but hurting data-scarce classes. τ-normalized classifiers (gold line) alleviate this issue to some extent by providing more balanced classifier weight magnitudes. For retraining (green line), the weights are almost balanced, except that few-shot classes have slightly larger classifier weight norms. Note that the NCM approach would give a horizontal line in the figure, as the mean vectors are L2-normalized before nearest neighbor search.\nIn Figure 2 (right), we further investigate how the performance changes as the temperature parameter τ for the τ-normalized classifier varies. The figure shows that as τ increases from 0, many-shot accuracy drops dramatically while few-shot accuracy increases just as dramatically." }, { "heading": "5.4 COMPARISON WITH THE STATE-OF-THE-ART ON LONG-TAILED DATASETS", "text": "In this section, we compare the performance of the decoupled schemes to other recent works that report state-of-the-art results on three common long-tailed benchmarks: ImageNet-LT, iNaturalist, and Places-LT. Results are presented in Tables 2, 3 and 4, respectively.\nImageNet-LT. Table 2 presents results for ImageNet-LT. Although related works present results with ResNet-10 (Liu et al., 2019), we found that using bigger backbone architectures increases performance significantly on this dataset. We therefore present results for three backbones: ResNet-10, ResNeXt-50 and the larger ResNeXt-152. For the state-of-the-art OLTR method of Liu et al. (2019) we adopt the results reported in the paper, as well as results we reproduced using the authors’ open-sourced codebase2 with two training settings: the one suggested in the codebase and the one using our training setting for the representation learning. From the table we see that the non-parametric decoupled NCM method performs on par with the state-of-the-art for most architectures. We also see that when re-balancing the classifier properly, either by re-training or by τ-normalizing, we get results that, without bells and whistles, outperform the current state-of-the-art for all backbone architectures. We further experimented with adding the memory mechanism of Liu et al. (2019) on top of our decoupled cRT setup, but the memory mechanism did not seem to further boost performance (see Appendix B.4).\niNaturalist 2018. We further evaluate our decoupled methods on the iNaturalist 2018 dataset. We present results after 90 and 200 epochs, as we found that 90 epochs were not enough for the representation learning stage to converge; this is different from Cao et al. 
(2019) where they train for 90 epochs. From Table 3 we see that results are consistent with the ImageNet-LT case: re-balancing the classifier gives results that outperform CB-Focal (Cui et al., 2019). Our performance, when training only for 90 epochs, is slightly lower than the very recently proposed LDAM+DRW (Cao et al., 2019). However, with 200 training epochs and classifier normalization, we achieve a new state-of-the-art of 69.3 with ResNet-50, which can be further improved to 72.5 for ResNet-152. It is further worth noting that we cannot reproduce the numbers reported in Cao et al. (2019). We find that the τ-normalized classifier performs best and gives a new state-of-the-art for the dataset, while surprisingly achieving similar accuracy (69%/72% for ResNet-50/ResNet-152) across all many-, medium- and few-shot class splits, a highly desired result for long-tailed recognition. Complete results, i.e., for all splits and more backbone architectures, can be found in Table 8 of the Appendix.\nPlaces-LT. For Places-LT we follow the protocol of Liu et al. (2019) and start from a ResNet-152 backbone pre-trained on the full ImageNet dataset. Similar to Liu et al. (2019), we then fine-tune the backbone with Instance-balanced sampling for representation learning. Classification follows with fixed representations for our decoupled methods. As we see in Table 4, all three decoupled methods outperform the state-of-the-art approaches, including Lifted Loss (Oh Song et al., 2016), Focal Loss (Lin et al., 2017), Range Loss (Zhang et al., 2017), FSLwF (Gidaris & Komodakis, 2018) and OLTR (Liu et al., 2019). Once again, the τ-normalized classifier gives the top performance, with impressive gains for the medium- and few-shot classes.\n2https://github.com/zhmiao/OpenLongTailRecognition-OLTR" }, { "heading": "6 CONCLUSIONS", "text": "In this work, we explore a number of learning schemes for long-tailed recognition and compare jointly learning the representation and classifier to a number of straightforward decoupled methods. Through an extensive study we find that although sampling strategies matter when jointly learning representation and classifiers, instance-balanced sampling gives more generalizable representations that can achieve state-of-the-art performance after properly re-balancing the classifiers, without the need for carefully designed losses or memory units. We set new state-of-the-art performance for three long-tailed benchmarks and believe that our findings not only contribute to a deeper understanding of the long-tailed recognition task, but can also offer inspiration for future work." }, { "heading": "A LOSS RE-WEIGHTING STRATEGIES", "text": "Here, we summarize some of the best performing loss re-weighting methods that we compare against in Section 5. Introduced in the context of object detection, where imbalance exists in most common benchmarks, the Focal loss (Lin et al., 2017) aims to balance the sample-wise classification loss for model training by down-weighting easy samples. To this end, given a probability prediction $h_i$ for the sample $x_i$ over its true category $y_i$, it adds a re-weighting factor $(1 - h_i)^{\gamma}$ with $\gamma > 0$ into the standard cross-entropy loss $\mathcal{L}_{CE}$:\n$\mathcal{L}_{\text{focal}} := (1 - h_i)^{\gamma} \mathcal{L}_{CE} = -(1 - h_i)^{\gamma} \log(h_i)$. (5)\nFor easy samples (which may dominate the training samples) with a large predicted probability $h_i$ for their true categories, the corresponding cross-entropy loss will be down-weighted. Recently, Cui et al. 
(2019) presented a class-balanced variant of the focal loss and applied it to long-tailed recognition. They modulated the Focal loss for a sample from class j with a balance-aware coefficient equal to $(1 - \beta)/(1 - \beta^{n_j})$. Very recently, Cao et al. (2019) proposed a label-distribution-aware margin (LDAM) loss that encourages few-shot classes to have larger margins, and their final loss is formulated as a cross-entropy loss with enforced margins:\n$\mathcal{L}_{\text{LDAM}} := -\log \frac{e^{\hat{y}_j - \Delta_j}}{e^{\hat{y}_j - \Delta_j} + \sum_{c \neq j} e^{\hat{y}_c - \Delta_c}}$ , (6)\nwhere $\hat{y}$ are the logits and $\Delta_j$ is a class-aware margin, inversely proportional to $n_j^{1/4}$." }, { "heading": "B FURTHER ANALYSIS AND RESULTS", "text": "B.1 SAMPLING STRATEGIES\nIn Figure 3 we visualize the sampling weights for the four sampling strategies we explore. In Table 5 we present accuracy on ImageNet-LT for “all” classes when training the representation and classifier jointly. It is clear that better sampling strategies help when jointly training the classifier with the representations/backbone architecture.\nB.2 CLASSIFIER DECISION BOUNDARIES FOR τ-NORMALIZED AND NCM\nIn Figure 4 we illustrate the classifier decision boundaries before/after normalization with Eq. (3), as well as when using cosine distance. Balancing the norms also leads to more balanced decision boundaries, allowing the classifiers for few-shot classes to occupy more space.\nB.3 CLASSIFIER LEARNING COMPARISON TABLE\nTable 6 presents some comparative analysis for the four different ways of learning the classifier that are presented in Section 4.\nB.4 VARYING THE BACKBONE ARCHITECTURE SIZE\nImageNet-LT. In Figure 5 we compare the performance of different backbone architecture sizes (model capacity) under different methods: 1) OLTR (Liu et al., 2019) using the authors’ codebase settings (OLTR*); 2) OLTR using the representation learning stage detailed in Section 5 (OLTR**); 3) cRT with the memory module from Liu et al. (2019) while training the classifier; 4) cRT; and 5) τ-normalized. We see that a) the authors’ implementation of OLTR over-fits for larger models, b) overfitting can be alleviated with our training setup (different training and LR schedules), and c) adding the memory unit when re-training the classifier does not increase performance. Additional results of Table 2 are given in Table 7.\niNaturalist 2018. In Table 8 we present an extended version of the results of Table 3. We show results per split as well as results with a ResNet-101 backbone. As we see from the table, and as mentioned in Section 5, training only for 90 epochs gives sub-optimal representations, while both larger models and longer training result in much higher accuracy on this challenging, large-scale task. Even more interestingly, we see performance across the many-, medium- and few-shot splits being approximately equal after re-balancing the classifier, with only a small advantage for the many-shot classes.\nB.5 ON THE EXPLORATION OF DETERMINING τ\nThe current τ-normalization strategy does require a validation set to choose τ, which could be a disadvantage depending on the practical scenario. Can we do better?\nFinding the τ value on the training set. We also attempted to select τ directly on the training set. Surprisingly, the final performance on the test set is very similar when τ is selected using the training set only.\nWe achieve this goal by simulating a balanced testing distribution from the training set. 
We first feed the whole training set through the network to get the top-1 accuracy for each of the classes. Then, we average the class-specific accuracies and use the averaged accuracy as the metric to determine the τ value. In Table 9, we compare the τ found on the training set and on the validation set for all three datasets. We can see that both the value of τ and the overall performance are very close to each other, which demonstrates the effectiveness of searching for τ on the training set. This strategy offers a practical way to find τ even when a validation set is not available.\nLearning the τ value on the training set. We further investigate whether we can automatically learn the τ value instead of using grid search. To this end, following cRT, we set τ as a learnable parameter and learn it on the training set with balanced sampling, while keeping all the other parameters fixed (including both the backbone network and classifier). We also compare the learned τ value and the corresponding results in Table 9 (denoted by “learn” = ✓). This further reduces the manual effort of searching for the best τ value and makes the strategy more accessible for practical usage.\nB.6 COMPARING MLP CLASSIFIER WITH LINEAR CLASSIFIER\nWe experimented with MLPs with different numbers of layers (2 or 3) and different numbers of hidden neurons (2048 or 512). We use ReLU as the activation function, set the batch size to 512, and train the MLP using balanced sampling on the fixed representation for 10 epochs with a cosine learning rate schedule, which gradually decreases the learning rate to zero. We conducted experiments on two datasets.\nOn ImageNet-LT, we use ResNeXt-50 as the backbone network. The results are summarized in Table 10. We can see that as the MLP goes deeper, the performance gets worse. This suggests that the backbone network alone is sufficient to learn a discriminative representation.\nFor iNaturalist, we use the representation from a ResNet-50 model trained for 200 epochs. We only consider a hidden dimension of 2048, as this dataset contains many more classes. The results are shown in Table 11, and show that the performance drop is even more severe when a deeper classifier is used.\nB.7 COSINE SIMILARITY FOR CLASSIFICATION\nWe tried to replace the linear classifier with a cosine similarity classifier with (denoted by “cos”) and without (denoted by “cos(noRelu)”) the last ReLU activation function, following Gidaris & Komodakis (2018). We summarize the results in Table 12, which show that they are comparable to each other." } ]
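As a companion to the train-set τ search described in B.5 above, here is a minimal sketch of that procedure: for each candidate τ, classify all training features with the τ-normalized weights and score by the mean of per-class top-1 accuracies, which simulates a balanced test distribution. The grid range and helper names are our own assumptions, not the authors' code.

```python
import torch

@torch.no_grad()
def find_tau_on_train(W, feats, labels, num_classes):
    """Grid search for tau using only the training set (Appendix B.5 sketch).
    W: [C, D] joint classifier weights; feats: [N, D] frozen features."""
    best_tau, best_acc = 0.0, -1.0
    for tau in torch.linspace(0.0, 2.0, steps=21):
        W_t = W / W.norm(dim=1, keepdim=True).pow(tau)  # tau-normalize weights
        preds = (feats @ W_t.t()).argmax(dim=1)
        per_class = torch.stack([(preds[labels == c] == c).float().mean()
                                 for c in range(num_classes)])
        acc = per_class.mean().item()  # balanced (class-averaged) accuracy
        if acc > best_acc:
            best_tau, best_acc = tau.item(), acc
    return best_tau
```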
2020
null
SP:00fed729e27d8c9d2a3d96fdb7e54c3e5cc0a94d
[ "The paper proposes a generative model for images. There's a probability mask per-pixel per-component (which yields mixing probabilities), and then a set of latents per-component that yield an image. The system is tested on a set of scenes like the GQN dataset, stacks of blocks, and the multi-dsprites dataset. The system is better than MONet, although there are a few lingering questions.", "The authors propose a probabilistic generative latent variable model representing a 2D image as a mixture of latent components. It formulates the scene generation problem as a spatial Gaussian mixture model where each Gaussian component comes from the decoding of an object-centric latent variable. The contribution of the proposed method from previous works is the introduction of an autoregressive prior on the component latents. This allows the model to capture autoregressive dependencies among different components and thus help generate coherent scenes, which has not been shown in the previous works. In the experiments, the authors compare GENESIS with MONet and VAEs qualitatively and quantitatively and show that the model outperforms the baseline in terms of both scene decomposition and generation." ]
Generative latent-variable models are emerging as promising tools in robotics and reinforcement learning. Yet, even though tasks in these domains typically involve distinct objects, most state-of-the-art generative models do not explicitly capture the compositional nature of visual scenes. Two recent exceptions, MONet and IODINE, decompose scenes into objects in an unsupervised fashion. Their underlying generative processes, however, do not account for component interactions. Hence, neither of them allows for principled sampling of novel scenes. Here we present GENESIS, the first object-centric generative model of rendered 3D scenes capable of both decomposing and generating scenes by capturing relationships between scene components. GENESIS parameterises a spatial GMM over images which is decoded from a set of object-centric latent variables that are either inferred sequentially in an amortised fashion or sampled from an autoregressive prior. We train GENESIS on several publicly available datasets and evaluate its performance on scene generation, decomposition, and semi-supervised learning.
[ { "affiliations": [], "name": "Martin Engelcke" }, { "affiliations": [], "name": "Adam R. Kosiorek" }, { "affiliations": [], "name": "Oiwi Parker Jones" }, { "affiliations": [], "name": "Ingmar Posner" } ]
[ { "authors": [ "Relja Arandjelović", "Andrew Zisserman" ], "title": "Object Discovery with a Copy-Pasting GAN", "venue": "arXiv preprint arXiv:1905.11369,", "year": 2019 }, { "authors": [ "Pablo Arbelaez", "Michael Maire", "Charless Fowlkes", "Jitendra Malik" ], "title": "Contour Detection and Hierarchical Image Segmentation", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2010 }, { "authors": [ "Samaneh Azadi", "Deepak Pathak", "Sayna Ebrahimi", "Trevor Darrell" ], "title": "Compositional GAN: Learning Image-Conditional Binary Composition", "venue": "arXiv preprint arXiv:1807.07560,", "year": 2019 }, { "authors": [ "Rianne van den Berg", "Leonard Hasenclever", "Jakub M Tomczak", "Max Welling" ], "title": "Sylvester Normalizing Flows for Variational Inference", "venue": "Conference on Uncertainty in Artificial Intelligence,", "year": 2018 }, { "authors": [ "Adam Bielski", "Paolo Favaro" ], "title": "Emergence of Object Segmentation in Perturbed Generative Models", "venue": "arXiv preprint arXiv:1905.12663,", "year": 2019 }, { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large Scale GAN Training for High Fidelity Natural Image Synthesis", "venue": "arXiv preprint arXiv:1809.11096,", "year": 2018 }, { "authors": [ "Christopher P Burgess", "Loic Matthey", "Nicholas Watters", "Rishabh Kabra", "Irina Higgins", "Matt Botvinick", "Alexander Lerchner" ], "title": "MONet: Unsupervised Scene Decomposition and Representation", "venue": null, "year": 1901 }, { "authors": [ "Mickaël Chen", "Thierry Artières", "Ludovic Denoyer" ], "title": "Unsupervised Object Segmentation by Redrawing", "venue": "arXiv preprint arXiv:1905.13539,", "year": 2019 }, { "authors": [ "Djork-Arné Clevert", "Thomas Unterthiner", "Sepp Hochreiter" ], "title": "Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)", "venue": "International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Pierre Comon" ], "title": "Independent Component Analysis. In J-L.Lacoume (ed.), Higher-Order Statistics, pp. 29–38", "venue": null, "year": 1992 }, { "authors": [ "Eric Crawford", "Joelle Pineau" ], "title": "Spatially Invariant Unsupervised Object Detection with Convolutional Neural Networks", "venue": "AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Yann N Dauphin", "Angela Fan", "Michael Auli", "David Grangier" ], "title": "Language Modeling with Gated Convolutional Networks", "venue": "International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "SM Ali Eslami", "Nicolas Heess", "Theophane Weber", "Yuval Tassa", "David Szepesvari", "Geoffrey E Hinton" ], "title": "Attend, Infer, Repeat: Fast Scene Understanding with Generative Models", "venue": "Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Karl Friston" ], "title": "A Theory of Cortical Responses", "venue": "Philosophical Transactions of the Royal Society B: Biological Sciences,", "year": 2005 }, { "authors": [ "Xavier Glorot", "Antoine Bordes", "Yoshua Bengio" ], "title": "Deep Sparse Rectifier", "venue": "Neural Networks. 
International Conference on Artificial Intelligence and Statistics,", "year": 2011 }, { "authors": [ "Klaus Greff", "Antti Rasmus", "Mathias Berglund", "Tele Hao", "Harri Valpola", "Jürgen Schmidhuber" ], "title": "Tagger: Deep Unsupervised Perceptual Grouping", "venue": "Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Klaus Greff", "Sjoerd van Steenkiste", "Jürgen Schmidhuber" ], "title": "Neural Expectation Maximization", "venue": "Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Klaus Greff", "Raphaël Lopez Kaufmann", "Rishab Kabra", "Nick Watters", "Chris Burgess", "Daniel Zoran", "Loic Matthey", "Matthew Botvinick", "Alexander Lerchner" ], "title": "Multi-Object Representation Learning with Iterative Variational Inference", "venue": "International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Karol Gregor", "Danilo Jimenez Rezende", "Frederic Besse", "Yan Wu", "Hamza Merzic", "Aaron van den Oord" ], "title": "Shaping Belief States with Generative Environment Models for RL", "venue": null, "year": 1906 }, { "authors": [ "Kalanit Grill-Spector", "Rafael Malach" ], "title": "The Human Visual Cortex", "venue": "Annual Review of Neuroscience,", "year": 2004 }, { "authors": [ "Oliver Groth", "Fabian B Fuchs", "Ingmar Posner", "Andrea Vedaldi" ], "title": "ShapeStacks: Learning VisionBased Physical Intuition for Generalised Object Stacking", "venue": "European Conference on Computer Vision,", "year": 2018 }, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Sepp Hochreiter" ], "title": "GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium", "venue": "Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long Short-Term Memory", "venue": "Neural Computation,", "year": 1997 }, { "authors": [ "Jonathan Huang", "Kevin Murphy" ], "title": "Efficient Inference in Occlusion-Aware Generative models of Images", "venue": "arXiv preprint arXiv:1511.06362,", "year": 2015 }, { "authors": [ "D.H. Hubel", "T.N. 
Wiesel" ], "title": "Receptive Fields and Functional Architecture of Monkey Striate Cortex", "venue": "The Journal of Physiology,", "year": 1968 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", "venue": "International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Michael I Jordan", "Zoubin Ghahramani", "Tommi S Jaakkola", "Lawrence K Saul" ], "title": "An Introduction to Variational Methods for Graphical Models", "venue": "Machine Learning,", "year": 1999 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A Method for Stochastic Optimization", "venue": "International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-Encoding Variational Bayes", "venue": "International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Adam Kosiorek", "Hyunjik Kim", "Yee Whye Teh", "Ingmar Posner" ], "title": "Sequential Attend, Infer, Repeat: Generative Modelling of Moving Objects", "venue": "Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Adam R Kosiorek", "Sara Sabour", "Yee Whye Teh", "Geoffrey E Hinton" ], "title": "Stacked Capsule Autoencoders", "venue": "arXiv preprint arXiv:1906.06818,", "year": 2019 }, { "authors": [ "Loic Matthey", "Irina Higgins", "Demis Hassabis", "Alexander Lerchner" ], "title": "dSprites: Disentanglement Testing Sprites Dataset", "venue": null, "year": 2017 }, { "authors": [ "Bruno A. Olshausen", "David J. Field" ], "title": "Emergence of Simple-Cell Receptive Field Properties by Learning a Sparse Code for Natural Images", "venue": null, "year": 1996 }, { "authors": [ "Rajesh P.N. Rao", "Dana H. Ballard" ], "title": "Predictive Coding in the Visual Cortex: A Functional Interpretation of Some Extra-Classical Receptive-Field Effects", "venue": "Nature Neuroscience,", "year": 1999 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic Backpropagation and Approximate Inference in Deep Generative Models", "venue": "International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Lukasz Romaszko", "Christopher KI Williams", "Pol Moreno", "Pushmeet Kohli" ], "title": "Vision-as-InverseGraphics: Obtaining a Rich 3D Explanation of a Scene from a Single Image", "venue": "In IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Adam Santoro", "David Raposo", "David G.T. Barrett", "Mateusz Malinowski", "Razvan Pascanu", "Peter W. Battaglia", "Timothy P. 
Lillicrap" ], "title": "A Simple Neural Network Module for Relational Reasoning", "venue": "Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Christian Szegedy", "Sergey Ioffe", "Vincent Vanhoucke", "Alexander A Alemi" ], "title": "Inception-V4, Inception-Resnet and the Impact of Residual Connections on Learning", "venue": "AAAI Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Naftali Tishby", "Fernando C Pereira", "William Bialek" ], "title": "The Information Bottleneck Method", "venue": "arXiv preprint arXiv:physics/0004057,", "year": 2000 }, { "authors": [ "Sjoerd van Steenkiste", "Michael Chang", "Klaus Greff", "Jürgen Schmidhuber" ], "title": "Relational Neural Expectation Maximization: Unsupervised Discovery of Objects and their Interactions", "venue": "arXiv preprint arXiv:1802.10353,", "year": 2018 }, { "authors": [ "Sjoerd van Steenkiste", "Karol Kurach", "Sylvain Gelly" ], "title": "A Case for Object Compositionality in Deep Generative Models of Images", "venue": "NeurIPS Workshop on Modeling the Physical World: Learning,", "year": 2018 }, { "authors": [ "Sjoerd van Steenkiste", "Francesco Locatello", "Jurgen Schmidhuber", "Olivier Bachem" ], "title": "Are Disentangled Representations Helpful for Abstract Visual Reasoning", "venue": null, "year": 1905 }, { "authors": [ "Brian A. Wandell" ], "title": "Foundations of Vision", "venue": "Sinauer Associates,", "year": 1995 }, { "authors": [ "Nicholas Watters", "Loic Matthey", "Matko Bosnjak", "Christopher P Burgess", "Alexander Lerchner" ], "title": "COBRA: Data-Efficient Model-Based RL through Unsupervised Object Discovery and CuriosityDriven Exploration", "venue": "arXiv preprint arXiv:1905.09275,", "year": 2019 }, { "authors": [ "Nicholas Watters", "Loic Matthey", "Christopher P Burgess", "Alexander Lerchner" ], "title": "Spatial Broadcast Decoder: A Simple Architecture for Learning Disentangled Representations in VAEs", "venue": "arXiv preprint arXiv:1901.07017,", "year": 2019 }, { "authors": [ "Jiajun Wu", "Erika Lu", "Pushmeet Kohli", "Bill Freeman", "Josh Tenenbaum" ], "title": "Learning to See Physics via Visual De-Animation", "venue": "Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Jiajun Wu", "Joshua B Tenenbaum", "Pushmeet Kohli" ], "title": "Neural Scene De-rendering", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Kun Xu", "Chongxuan Li", "Jun Zhu", "Bo Zhang" ], "title": "Multi-Objects Generation with Amortized Structural Regularization", "venue": "Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Zhenjia Xu", "Zhijian Liu", "Chen Sun", "Kevin Murphy", "William T Freeman", "Joshua B Tenenbaum", "Jiajun Wu" ], "title": "Unsupervised Discovery of Parts, Structure, and Dynamics", "venue": null, "year": 1903 }, { "authors": [ "Berg" ], "title": "2018) is again used to compute the mixing probabilities. However, GENESIS-S also has a second decoder with spatial broadcasting to obtain the scene components xk from zk. We found the use of two different decoders to be important for GENESIS-S in order for the model to decompose the input", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Task execution in robotics and reinforcement learning (RL) requires accurate perception of and reasoning about discrete elements in an environment. While supervised methods can be used to identify pertinent objects, it is intractable to collect labels for every scenario and task. Discovering structure in data—such as objects—and learning to represent data in a compact fashion without supervision are long-standing problems in machine learning (Comon, 1992; Tishby et al., 2000), often formulated as generative latent-variable modelling (e.g. Kingma & Welling, 2014; Rezende et al., 2014). Such methods have been leveraged to increase sample efficiency in RL (Gregor et al., 2019) and other supervised tasks (van Steenkiste et al., 2019). They also offer the ability to imagine environments for training (Ha & Schmidhuber, 2018). Given the compositional nature of visual scenes, separating latent representations into object-centric ones can facilitate fast and robust learning (Watters et al., 2019a), while also being amenable to relational reasoning (Santoro et al., 2017). Interestingly, however, state-of-the-art methods for generating realistic images do not account for this discrete structure (Brock et al., 2018; Parmar et al., 2018).\nAs in the approach proposed in this work, human visual perception is not passive. Rather it involves a creative interplay between external stimulation and an active, internal generative model of the world (Rao & Ballard, 1999; Friston, 2005). That this is necessary can be seen from the physiology of the eye, where the small portion of the visual field that can produce sharp images (fovea centralis) motivates the need for rapid eye movements (saccades) to build up a crisp and holistic percept of a scene (Wandell, 1995). In other words, what we perceive is largely a mental simulation of the external world. Meanwhile, work in computational neuroscience tells us that visual features (see, e.g., Hubel & Wiesel, 1968) can be inferred from the statistics of static images using unsupervised learning (Olshausen & Field, 1996). Experimental investigations further show that specific brain areas (e.g. LO) appear specialised for objects, for example responding more strongly to common objects than to scenes or textures, while responding only weakly to movement (cf. MT) (e.g., GrillSpector & Malach, 2004).\n∗Corresponding author: martin@robots.ox.ac.uk\nIn this work, we are interested in probabilistic generative models that can explain visual scenes compositionally via several latent variables. This corresponds to fitting a probability distribution pθ(x) with parameters θ to the data. The compositional structure is captured by K latent variables so that pθ(x) = ∫ pθ(x | z1:K)pθ(z1:K) dz1:K . Models from this family can be optimised using the variational auto-encoder (VAE) framework (Kingma & Welling, 2014; Rezende et al., 2014), by maximising a variational lower bound on the model evidence (Jordan et al., 1999). Burgess et al. (2019) and Greff et al. (2019) recently proposed two such models, MONet and IODINE, to decompose visual scenes into meaningful objects. Both works leverage an analysis-by-synthesis approach through the machinery of VAEs (Kingma & Welling, 2014; Rezende et al., 2014) to train these models without labelled supervision, e.g. in the form of ground truth segmentation masks. However, the models have a factorised prior that treats scene components as independent. 
Thus, neither provides an object-centric generation mechanism that accounts for relationships between constituent parts of a scene, e.g. two physical objects cannot occupy the same location, prohibiting the component-wise generation of novel scenes and restricting the utility of these approaches. Moreover, MONet embeds a convolutional neural network (CNN) inside a recurrent neural network (RNN) that is unrolled for each scene component, which does not scale well to more complex scenes. Similarly, IODINE utilises a CNN within an expensive, gradient-based iterative refinement mechanism.\nTherefore, we introduce GENErative Scene Inference and Sampling (GENESIS), which is, to the best of our knowledge, the first object-centric generative model of rendered 3D scenes capable of both decomposing and generating scenes1. Compared to previous work, this renders GENESIS significantly more suitable for a wide range of applications in robotics and reinforcement learning. GENESIS achieves this by modelling relationships between scene components with an expressive, autoregressive prior that is learned alongside a sequential, amortised inference network. Importantly, sequential inference is performed in low-dimensional latent space, allowing all convolutional encoders and decoders to be run in parallel to fully exploit modern graphics processing hardware.\nWe conduct experiments on three canonical and publicly available datasets: coloured Multi-dSprites (Burgess et al., 2019), the GQN dataset (Eslami et al., 2018), and ShapeStacks (Groth et al., 2018). The latter two are simulated 3D environments which serve as testing grounds for navigation and object manipulation tasks, respectively. We show both qualitatively and quantitatively that, in contrast to prior art, GENESIS is able to generate coherent scenes while also performing well on scene decomposition. Furthermore, we use the scene annotations available for ShapeStacks to show the benefit of utilising general-purpose, object-centric latent representations from GENESIS for tasks such as predicting whether a block tower is stable or not.\nCode and models are available at https://github.com/applied-ai-lab/genesis." }, { "heading": "2 RELATED WORK", "text": "Structured Models Several methods leverage structured latent variables to discover objects in images without direct supervision. CST-VAE (Huang & Murphy, 2015), AIR (Eslami et al., 2016), SQAIR (Kosiorek et al., 2018), and SPAIR (Crawford & Pineau, 2019) use spatial attention to partition scenes into objects. TAGGER (Greff et al., 2016), NEM (Greff et al., 2017), and R-NEM (van Steenkiste et al., 2018a) perform unsupervised segmentation by modelling images as spatial mixture models. SCAE (Kosiorek et al., 2019) discovers geometric relationships between objects and their parts by using an affine-aware decoder. Yet, these approaches have not been shown to work on more complex images, for example visual scenes with 3D spatial structure, occlusion, perspective distortion, and multiple foreground and background components as considered in this work. Moreover, none of them demonstrate the ability to generate novel scenes with relational structure.\nWhile Xu et al. (2018) present an extension of Eslami et al. (2016) to generate images, their method only works on binary images with a uniform black background and assumes that object bounding boxes do not overlap. In contrast, we train GENESIS on rendered 3D scenes from Eslami et al. (2018) and Groth et al. 
(2018), which feature complex backgrounds and considerable occlusion, to perform both decomposition and generation. Lastly, Xu et al. (2019) use ground truth pixel-wise flow fields as a cue for segmenting objects or object parts. Similarly, GENESIS could be adapted to also leverage temporal information, which is a promising avenue for future research.\n1We use the terms “object” and “scene component” synonymously in this work.\nMONet & IODINE While this work is most directly related to MONet (Burgess et al., 2019) and IODINE (Greff et al., 2019), it sets itself apart by introducing a generative model that captures relations between scene components with an autoregressive prior, enabling the unconditional generation of coherent, novel scenes. Moreover, MONet relies on a deterministic attention mechanism rather than utilising a proper probabilistic inference procedure. This implies that the training objective is not a valid lower bound on the marginal likelihood and that the model cannot perform density estimation without modification. Furthermore, this attention mechanism embeds a CNN in an RNN, posing an issue in terms of scalability. These two considerations do not apply to IODINE, but IODINE employs a gradient-based, iterative refinement mechanism which is expensive both in terms of computation and memory, limiting its practicality and utility. Architecturally, GENESIS is more similar to MONet and, unlike IODINE, does not require expensive iterative refinement. Unlike MONet, though, the convolutional encoders and decoders in GENESIS can be run in parallel, rendering the model computationally more scalable to inputs with a larger number of scene components.\nAdversarial Methods A few recent works have proposed to use an adversary for scene segmentation and generation. Chen et al. (2019) and Bielski & Favaro (2019) segment a single foreground object per image and Arandjelović & Zisserman (2019) segment several synthetic objects superimposed on natural images. Azadi et al. (2019) combine two objects or an object and a background scene in a sensible fashion and van Steenkiste et al. (2018b) can generate scenes with a potentially arbitrary number of components. In comparison, GENESIS performs both inference and generation, does not exhibit the instabilities of adversarial training, and offers a probabilistic formulation which captures uncertainty, e.g. during scene decomposition. Furthermore, the complexity of GENESIS increases with $O(K)$, where $K$ is the number of components, as opposed to the $O(K^2)$ complexity of the relational stage in van Steenkiste et al. (2018b).\nInverse Graphics A range of works formulate scene understanding as an inverse graphics problem. These well-engineered methods, however, rely on scene annotations for training and lack probabilistic formulations. For example, Wu et al. (2017b) leverage a graphics renderer to decode a structured scene description which is inferred by a neural network. Romaszko et al. (2017) pursue a similar approach but instead make use of a differentiable graphics renderer. Wu et al. (2017a) further employ different physics engines to predict the movement of billiard balls and block towers." }, { "heading": "3 GENESIS: GENERATIVE SCENE INFERENCE AND SAMPLING", "text": "In this section, we first describe the generative model of GENESIS and a simplified variant called GENESIS-S. This is followed by the associated inference procedures and two possible learning objectives. 
GENESIS is illustrated in Figure 1, and Figure 2 shows the graphical model in comparison to alternative methods. An illustration of GENESIS-S is included in Appendix B.1, Figure 5.\nGenerative model Let $x \in \mathbb{R}^{H \times W \times C}$ be an image. We formulate the problem of image generation as a spatial Gaussian mixture model (GMM). That is, every Gaussian component $k = 1, \ldots, K$ represents an image-sized scene component $x_k \in \mathbb{R}^{H \times W \times C}$. $K \in \mathbb{N}^+$ is the maximum number of scene components. The corresponding mixing probabilities $\pi_k \in [0, 1]^{H \times W}$ indicate whether the component is present at a location in the image. The mixing probabilities are normalised across scene components, i.e. $\forall_{i,j} \sum_k \pi_{i,j,k} = 1$, and can be regarded as spatial attention masks. Since there are strong spatial dependencies between components, we formulate an autoregressive prior distribution over mask variables $z^m_k \in \mathbb{R}^{D_m}$ which encode the mixing probabilities $\pi_k$, as\n$p_\theta(z^m_{1:K}) = \prod_{k=1}^{K} p_\theta(z^m_k \mid z^m_{1:k-1}) = \prod_{k=1}^{K} p_\theta(z^m_k \mid u_k)\big|_{u_k = R_\theta(z^m_{k-1}, u_{k-1})}$ . (1)\nThe dependence on previous latents $z^m_{1:k-1}$ is implemented via an RNN $R_\theta$ with hidden state $u_k$.\nNext, we assume that the scene components $x_k$ are conditionally independent given their spatial allocation in the scene. The corresponding conditional distribution over component variables $z^c_k \in \mathbb{R}^{D_c}$ which encode the scene components $x_k$ factorises as follows,\n$p_\theta(z^c_{1:K} \mid z^m_{1:K}) = \prod_{k=1}^{K} p_\theta(z^c_k \mid z^m_k)$ . (2)\nNow, the image likelihood is given by a mixture model,\n$p(x \mid z^m_{1:K}, z^c_{1:K}) = \sum_{k=1}^{K} \pi_k\, p_\theta(x_k \mid z^c_k)$ , (3)\nwhere the mixing probabilities $\pi_k = \pi_\theta(z^m_{1:k})$ are created via a stick-breaking process (SBP) adapted from Burgess et al. (2019) as follows, slightly overloading the $\pi$ notation,\n$\pi_1 = \pi_\theta(z^m_1)\,, \quad \pi_k = \left(1 - \sum_{j=1}^{k-1} \pi_j\right) \pi_\theta(z^m_k)\,, \quad \pi_K = 1 - \sum_{j=1}^{K-1} \pi_j\,.$ (4) Note that this step is not necessary for our model and instead one could use a softmax to normalise masks as in Greff et al. (2019).\nFinally, omitting subscripts, the full generative model can be written as\n$p_\theta(x) = \iint p_\theta(x \mid z^c, z^m)\, p_\theta(z^c \mid z^m)\, p_\theta(z^m)\, \mathrm{d}z^m\, \mathrm{d}z^c$ , (5)\nwhere we assume that all conditional distributions are Gaussian. The Gaussian components of the image likelihood have a fixed scalar standard deviation $\sigma_x$. We refer to this model as GENESIS. To investigate whether separate latents for masks and component appearances are necessary for decomposition, we consider a simplified model, GENESIS-S, with a single latent variable per component,\n$p_\theta(z_{1:K}) = \prod_{k=1}^{K} p_\theta(z_k \mid z_{1:k-1})$ . (6)\nIn this case, $z_k$ takes the role of $z^c_k$ in Equation (3) and of $z^m_k$ in Equation (4), while Equation (2) is no longer necessary.\nApproximate posterior We amortise inference by using an approximate posterior distribution with parameters $\phi$ and a structure similar to the generative model. The full approximate posterior reads as follows,\n$q_\phi(z^c_{1:K}, z^m_{1:K} \mid x) = q_\phi(z^m_{1:K} \mid x)\, q_\phi(z^c_{1:K} \mid x, z^m_{1:K})$ , where\n$q_\phi(z^m_{1:K} \mid x) = \prod_{k=1}^{K} q_\phi(z^m_k \mid x, z^m_{1:k-1})$ , and $q_\phi(z^c_{1:K} \mid x, z^m_{1:K}) = \prod_{k=1}^{K} q_\phi(z^c_k \mid x, z^m_{1:k})$ , (7)\nwith the dependence on $z^m_{1:k-1}$ realised by an RNN $R_\phi$. The RNN could, in principle, be shared with the prior, but we have not investigated this option. All conditional distributions are Gaussian. 
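A minimal PyTorch sketch of the stick-breaking process of Eq. (4) and the spatial GMM likelihood of Eq. (3) follows. Squashing the mask decoder outputs with a sigmoid and working outside log-space are our own simplifications for readability, not necessarily the released implementation.

```python
import torch

def stick_breaking(mask_logits):
    """Mixing probabilities via the SBP of Eq. (4).
    mask_logits: [K, H, W]; returns pi with pi.sum(0) == 1 at every pixel."""
    s = torch.sigmoid(mask_logits)  # pi_theta(z^m_k) in (0, 1)
    pi, remaining = [], torch.ones_like(s[0])
    for k in range(s.size(0) - 1):
        pi.append(remaining * s[k])         # pi_k = (1 - sum_{j<k} pi_j) * s_k
        remaining = remaining * (1 - s[k])  # leftover stick after step k
    pi.append(remaining)                    # pi_K takes whatever remains
    return torch.stack(pi)

def image_log_likelihood(x, x_k, pi, sigma_x=0.7):
    """Pixel-wise GMM log-likelihood of Eq. (3).
    x: [C, H, W] image; x_k: [K, C, H, W] component means; pi: [K, H, W]."""
    comp = torch.distributions.Normal(x_k, sigma_x)
    log_p = comp.log_prob(x.unsqueeze(0))                # [K, C, H, W]
    log_mix = torch.log(pi + 1e-8).unsqueeze(1) + log_p  # add log pi_k
    return torch.logsumexp(log_mix, dim=0).sum()         # sum over pixels
```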
For GENESIS-S, the approximate posterior takes the form $q_\phi(z_{1:K} \mid x) = \prod_{k=1}^{K} q_\phi(z_k \mid x, z_{1:k-1})$ .\nLearning GENESIS can be trained by maximising the evidence lower bound (ELBO) on the log-marginal likelihood $\log p_\theta(x)$, given by\n$\mathcal{L}_{\text{ELBO}}(x) = \mathbb{E}_{q_\phi(z^c, z^m \mid x)}\left[\log \frac{p_\theta(x \mid z^c, z^m)\, p_\theta(z^c \mid z^m)\, p_\theta(z^m)}{q_\phi(z^c \mid z^m, x)\, q_\phi(z^m \mid x)}\right]$ (8)\n$= \mathbb{E}_{q_\phi(z^c, z^m \mid x)}[\log p_\theta(x \mid z^c, z^m)] - \mathrm{KL}\left(q_\phi(z^c, z^m \mid x)\,\|\, p_\theta(z^c, z^m)\right)$ . (9)\nHowever, this often leads to a strong emphasis on the likelihood term, while allowing the marginal approximate posterior $q_\phi(z) = \mathbb{E}_{p_{\text{data}}(x)}[q_\phi(z \mid x)]$ to drift away from the prior distribution, hence increasing the KL-divergence. This also decreases the quality of samples drawn from the model. To prevent this behaviour, we use the Generalised ELBO with Constrained Optimisation (GECO) objective from Rezende & Viola (2018) instead, which changes the learning problem to minimising the KL-divergence subject to a reconstruction constraint. Let $C \in \mathbb{R}$ be the minimum allowed reconstruction log-likelihood; GECO then uses Lagrange multipliers to solve the following problem,\n$\theta^\star, \phi^\star = \operatorname*{argmin}_{\theta, \phi}\ \mathrm{KL}\left(q_\phi(z^c, z^m \mid x)\,\|\, p_\theta(z^c, z^m)\right) \quad \text{such that} \quad \mathbb{E}_{q_\phi(z^c, z^m \mid x)}[\log p_\theta(x \mid z^c, z^m)] \geq C$ . (10)" }, { "heading": "4 EXPERIMENTS", "text": "In this section, we present qualitative and quantitative results on coloured Multi-dSprites (Burgess et al., 2019), the “rooms-ring-camera” dataset from GQN (Eslami et al., 2018), and the ShapeStacks dataset (Groth et al., 2018). We use an image resolution of 64-by-64 for all experiments. The number of components is set to K = 5, K = 7, and K = 9 for Multi-dSprites, GQN, and ShapeStacks, respectively. More details about the datasets are provided in Appendix A. Implementation and training details of all models are described in Appendix B." }, { "heading": "4.1 COMPONENT-WISE SCENE GENERATION", "text": "Unlike previous works, GENESIS has an autoregressive prior to capture intricate dependencies between scene components. Modelling these relationships is necessary to generate coherent scenes. For example, different parts of the background need to fit together; we do not want to create components such as the sky several times; and several physical objects cannot be in the same location. GENESIS is able to generate novel scenes by sequentially sampling scene components from the prior and conditioning each new component on those that have been generated during previous steps.\nAfter training GENESIS and MONet on the GQN dataset, Figure 3 shows the component-by-component generation process of novel scenes, corresponding to drawing samples from the respective prior distributions. More examples of generated scenes are shown in Figure 6, Appendix D. With GENESIS, either an object in the foreground or a part of the background is generated at every step, and these components fit together to make up a semantically consistent scene that looks similar to the training data. MONet, though, generates random artefacts at every step that do not form a sensible scene. These results are striking but not surprising: MONet was not designed for scene generation. The need for such a model is why we developed GENESIS.\nNotably, GENESIS pursues a consistent strategy for scene generation: Step one generates the floor and the sky, defining the layout of the scene. Steps two to four generate individual foreground objects. Some of these slots remain empty if fewer than three objects are present in the scene. The final three steps generate the walls in the background. 
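The component-wise generation described above amounts to ancestral sampling from Eqs. (1)-(3). Below is a sketch, reusing stick_breaking from the previous snippet; the module names (prior_cell, prior_head, comp_mlp, decode_mask, decode_comp) are placeholders, and the recurrent cell stands in for the LSTM prior of Appendix B.1.

```python
import torch

@torch.no_grad()
def generate_scene(prior_cell, prior_head, comp_mlp, decode_mask, decode_comp,
                   K, z_dim=64):
    """Ancestral sampling of a novel scene, one component at a time."""
    z_m, state = [], None
    prev = torch.zeros(1, z_dim)  # no component before k = 1
    for k in range(K):
        state = prior_cell(prev, state)  # u_k = R_theta(z^m_{k-1}, u_{k-1})
        mu, log_sigma = prior_head(state[0]).chunk(2, dim=-1)
        prev = mu + log_sigma.exp() * torch.randn_like(mu)  # sample z^m_k
        z_m.append(prev)
    z_m = torch.cat(z_m)                              # [K, z_dim]
    pi = stick_breaking(decode_mask(z_m).squeeze(1))  # mixing probabilities
    mu_c, log_sigma_c = comp_mlp(z_m).chunk(2, dim=-1)
    z_c = mu_c + log_sigma_c.exp() * torch.randn_like(mu_c)  # sample z^c_k
    x_k = decode_comp(z_c)                            # [K, C, H, W] components
    return (pi.unsqueeze(1) * x_k).sum(dim=0)         # render the mixture mean
```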
We conjecture that this strategy evolves during training as the floor and sky constitute large and easy-to-model surfaces that have a strong impact on the reconstruction loss. Finally, we observe that some slots contain artefacts of the sky at the top of the wall boundaries. We conjecture this is due to the fact that the mask decoder does not have skip connections as typically used in segmentation networks, making it difficult for the model to predict sharp segmentation boundaries. Scenes generated by GENESIS-S are shown in Figure 8 and Figure 9, Appendix D. While GENESIS-S does separate the foreground objects from the background, it generates them in one step and the individual background components are not very interpretable." }, { "heading": "4.2 INFERENCE OF SCENE COMPONENTS", "text": "Like MONet and IODINE, which were designed for unsupervised scene decomposition, GENESIS is also able to segment scenes into meaningful components. Figure 4 compares the decomposition of two images from the GQN dataset with GENESIS and MONet. Both models follow a similar decomposition strategy, but MONet fails to disambiguate one foreground object in the first example and does not reconstruct the background in as much detail in the second example. In Appendix E, Figure 10 illustrates the ability of both methods to disambiguate objects of the same colour and Figure 11 shows scene decomposition with GENESIS-S.\nFollowing Greff et al. (2019), we quantify segmentation performance with the Adjusted Rand Index (ARI) of pixels overlapping with ground truth foreground objects. We computed the ARI on 300 random images from the ShapeStacks test set for five models trained with different random seeds. GENESIS achieves an ARI of 0.73 ± 0.03, which is better than 0.63 ± 0.07 for MONet. This metric, however, does not penalise objects being over-segmented, which can give a misleading impression with regards to segmentation quality. This is illustrated in Figure 13, Appendix E.\nInspired by Arbelaez et al. (2010), we thus propose to use the segmentation covering (SC) of the ground truth foreground objects by the predicted masks. This involves taking a weighted mean over mask pairs, putting a potentially undesirable emphasis on larger objects. We therefore also consider taking an unweighted mean (mSC). For the same 300 images from the ShapeStacks test set and five different random seeds, GENESIS (SC: 0.64 ± 0.08, mSC: 0.60 ± 0.09) again outperforms MONet (SC: 0.52 ± 0.09, mSC: 0.49 ± 0.09). More details are provided in Appendix C." }, { "heading": "4.3 EVALUATION OF UNSUPERVISED REPRESENTATION UTILITY", "text": "Using a subset of the available labelled training images from ShapeStacks, we train a set of classifiers on the representations learned by GENESIS and several baselines to evaluate how well these representations capture the ground truth scene state. In particular, we consider three tasks: (1) Is a tower stable or not? (2) What is the tower’s height in terms of the number of blocks? (3) What is the camera viewpoint (out of 16 possibilities)? Tower stability is a particularly interesting property as it depends on fine-grained object information and the relative positioning of objects. We selected the third task as learning scene representations from different views has previously been prominently explored in Eslami et al. (2018). We compare GENESIS and GENESIS-S against three baselines: MONet, a VAE with a spatial broadcast decoder (BD-VAE) and a VAE with a deconvolutional decoder (DC-VAE). 
The results are summarised in Table 1. The architectural details of the baselines are described in Appendix B.2 and Appendix B.3. The implementation details of the classifiers are provided in Appendix B.5.\nBoth GENESIS and GENESIS-S perform better than the baselines at predicting tower stability, and their accuracy at predicting the height of the towers is only outperformed by MONet. We conjecture that MONet benefits here from its deterministic segmentation network. Overall, this corroborates the intuition that object-centric representations are indeed beneficial for these tasks, which focus on the foreground objects. We observe that the BD-VAE does better than the DC-VAE on all three tasks, reflecting the motivation behind its design, which is aimed at better disentangling the underlying factors of variation in the data (Watters et al., 2019b). All models achieve a high accuracy at predicting the camera view. Finally, we note that none of the models reach the stability prediction accuracies reported in Groth et al. (2018), which were obtained with an Inception-v4 classifier (Szegedy et al., 2017). This is not surprising considering that only a subset of the training images is used for training the classifiers, without data augmentation and at a reduced resolution." }, { "heading": "4.4 QUANTIFYING SAMPLE QUALITY", "text": "In order to quantify the quality of generated scenes, Table 2 summarises the Fréchet Inception Distances (FIDs) (Heusel et al., 2017) between 10,000 images generated by GENESIS as well as several baselines and 10,000 images from the Multi-dSprites and the GQN test sets, respectively. The two GENESIS variants achieve the best FID on both datasets. While GENESIS-S performs better than GENESIS on GQN, Figure 8 and Figure 9 in Appendix D show that individual scene components are less interpretable and that intricate background patterns are generated at the expense of sensible foreground objects. It is not surprising that the FIDs for MONet are relatively large given that it was not designed for generating scenes. Interestingly, the DC-VAE achieves a smaller FID on GQN than the BD-VAE. This is surprising given that the BD-VAE representations are more useful for the ShapeStacks classification tasks. Given that the GQN dataset and ShapeStacks are somewhat similar in structure and appearance, this indicates that while FID correlates with perceptual similarity, it does not necessarily correlate with the general utility of the learned representations for downstream tasks. We include scenes sampled from the BD-VAE and the DC-VAE in Figure 7, Appendix D, where we observe that the DC-VAE models the background fairly well while foreground objects are blurry." }, { "heading": "5 CONCLUSIONS", "text": "In this work, we propose a novel object-centric latent variable model of scenes called GENESIS. We show that GENESIS is, to the best of our knowledge, the first unsupervised model that decomposes rendered 3D scenes into semantically meaningful constituent parts while at the same time being able to generate coherent scenes in a component-wise fashion. This is achieved by capturing relationships between scene components with an autoregressive prior that is learned alongside a computationally efficient sequential inference network, setting GENESIS apart from prior art. Regarding future work, an interesting challenge is to scale GENESIS to more complex datasets and to employ the model in robotics or reinforcement learning applications. 
To this end, it will be necessary to improve reconstruction and sample quality, reduce computational cost, and to scale the model to higher resolution images. Another potentially promising research direction is to adapt the formulation to only model parts of the scene that are relevant for a certain task." }, { "heading": "ACKNOWLEDGMENTS", "text": "This research was supported by an EPSRC Programme Grant (EP/M019918/1), an EPSRC DTA studentship, and a Google studentship. The authors would like to acknowledge the use of the University of Oxford Advanced Research Computing (ARC) facility in carrying out this work, http://dx.doi.org/10.5281/zenodo.22558, and the use of Hartree Centre resources. The authors would like to thank Yizhe Wu for his help with re-implementing MONet, Oliver Groth for his support with the GQN and ShapeStacks datasets, and Rob Weston for proof reading the paper." }, { "heading": "A DATASETS", "text": "Multi-dSprites (Burgess et al., 2019) Images contain between one and four randomly selected “sprites” from Matthey et al. (2017), available at https://github.com/deepmind/ dsprites-dataset. For each object and the background, we randomly select one of five different, equally spread values for each of the three colour channels and generate 70,000 images. We set aside 10,000 for validation and testing each. The script for generating this data will be released with the rest of our code.\nGQN (Eslami et al., 2018) The “rooms-ring-camera” dataset includes simulated 3D scenes of a square room with different floor and wall textures, containing one to three objects of various shapes and sizes. It can be downloaded from https://github.com/deepmind/gqn-datasets.\nShapeStacks (Groth et al., 2018) Images show simulated block towers of different heights (two to six blocks). Individual blocks can have different shapes, sizes, and colours. Scenes have annotations for: stability of the tower (binary), number of blocks (two to six), properties of individual blocks, locations in the tower of centre-of-mass violations and planar surface violations, wall and floor textures (five each), light presets (five), and camera view points (sixteen). More details about the dataset and download links can be found at https://shapestacks.robots.ox.ac.uk/.\nB IMPLEMENTATION DETAILS\nB.1 GENESIS ARCHITECTURE\nWe use the architecture from Berg et al. (2018) to encode and decode zmk with the only modification of applying batch normalisation (Ioffe & Szegedy, 2015) before the GLU non-linearities (Dauphin et al., 2017). The convolutional layers in the encoder and decoder have five layers with size-5 kernels, strides of [1, 2, 1, 2, 1], and filter sizes of [32, 32, 64, 64, 64] and [64, 32, 32, 32, 32], respectively. Fully-connected layers are used at the lowest resolution.\nThe encoded image is passed to a long short-term memory (LSTM) cell (Hochreiter & Schmidhuber, 1997) followed by a linear layer to compute the mask latents zmk of size 64. The LSTM state size is twice the latent size. Importantly, unlike the analogous counterpart in MONet, the decoding of zmk is performed in parallel. The autoregressive prior pθ ( zmk | zm1:k−1 ) is implemented as an LSTM with 256 units. The conditional distribution pθ(zck | zmk ) is parameterised by a multilayer perceptron (MLP) with two hidden layers, 256 units per layer, and ELUs (Clevert et al., 2016). 
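A sketch of the prior modules just described, with sizes taken from the text (mask latents of size 64, an LSTM with 256 units, and a two-hidden-layer ELU MLP for the conditional over component latents); the exact output head layout is our assumption rather than the released code.

```python
import torch
import torch.nn as nn

class GenesisPrior(nn.Module):
    """Autoregressive prior p(z^m_k | z^m_{1:k-1}) and conditional p(z^c_k | z^m_k)."""

    def __init__(self, z_dim=64, hidden=256):
        super().__init__()
        self.lstm = nn.LSTMCell(z_dim, hidden)    # implements R_theta of Eq. (1)
        self.head = nn.Linear(hidden, 2 * z_dim)  # mu, log-sigma of p(z^m_k | u_k)
        self.comp = nn.Sequential(                # MLP for p(z^c_k | z^m_k), Eq. (2)
            nn.Linear(z_dim, 256), nn.ELU(),
            nn.Linear(256, 256), nn.ELU(),
            nn.Linear(256, 2 * z_dim),
        )

    def mask_step(self, z_m_prev, state=None):
        h, c = self.lstm(z_m_prev, state)
        mu, log_sigma = self.head(h).chunk(2, dim=-1)
        return mu, log_sigma, (h, c)

    def component_params(self, z_m):
        return self.comp(z_m).chunk(2, dim=-1)
```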
We use the same component VAE featuring a spatial broadcast decoder as MONet to encode and decode $z^c_k$, but we replace ReLUs (Glorot et al., 2011) with ELUs.\nFor GENESIS-S, as illustrated in Figure 5, the encoder of $z_k$ is the same as for $z^m_k$ above and the decoder from Berg et al. (2018) is again used to compute the mixing probabilities. However, GENESIS-S also has a second decoder with spatial broadcasting to obtain the scene components $x_k$ from $z_k$. We found the use of two different decoders to be important for GENESIS-S in order for the model to decompose the input.\nB.2 MONET BASELINES\nWe followed the architectural details provided in Burgess et al. (2019). Regarding unspecified details, we employ an attention network with [32, 32, 64, 64, 64] filters in the encoder and the reverse in the decoder. Furthermore, we normalise the mask prior with a softmax function to compute the KL-divergence between the mask posterior and prior distributions.\nB.3 VAE BASELINES\nBoth the BD-VAE and the DC-VAE have a latent dimensionality of 64 and the same encoder as in Berg et al. (2018). The DC-VAE also uses the decoder from Berg et al. (2018). The BD-VAE has the same spatial broadcast decoder with ELUs as GENESIS, but with twice the number of filters to enable a better comparison.\nB.4 OPTIMISATION\nThe scalar standard deviation of the Gaussian image likelihood components is set to $\sigma_x = 0.7$. We use GECO (Rezende & Viola, 2018) to balance the reconstruction and KL divergence terms in the loss function. The goal for the reconstruction error is set to 0.5655, multiplied by the image dimensions and the number of colour channels. We deliberately choose a comparatively weak reconstruction constraint for the GECO objective to emphasise KL minimisation and sample quality. For the remaining GECO hyperparameters, the default value of $\alpha = 0.99$ is used and the step size for updating $\beta$ is set to $10^{-5}$. We increase the step size to $10^{-4}$ when the reconstruction constraint is satisfied to accelerate optimisation, as $\beta$ tended to undershoot at the beginning of training.\nAll models are trained for $5 \times 10^5$ iterations with a batch size of 32 using the ADAM optimiser (Kingma & Ba, 2015) and a learning rate of $10^{-4}$. With these settings, training GENESIS takes about two days on a single GPU. However, we expect performance to improve with further training. This particularly extends to training GENESIS on ShapeStacks, where $5 \times 10^5$ training iterations are not enough to achieve good sample quality.\nB.5 SHAPESTACKS CLASSIFIERS\nMultilayer perceptrons (MLPs) with one hidden layer, 512 units, and ELU activations are used for classification. The classifiers are trained for 100 epochs on 50,000 labelled examples with a batch size of 128 using a cross-entropy loss, the ADAM optimiser, and a learning rate of $10^{-4}$. As inputs to the classifiers, we concatenate $z^m_k$ and $z^c_k$ for GENESIS, $z_k$ for GENESIS-S, and the component VAE latents for the two MONet variants." }, { "heading": "C SEGMENTATION COVERING", "text": "Following Arbelaez et al. (2010), the segmentation covering (SC) is based on the intersection over union (IOU) between pairs of segmentation masks from two sets $S$ and $S'$. In this work, we consider $S$ to be the segmentation masks of the ground truth foreground objects and $S'$ to be the predicted segmentation masks. The covering of $S$ by $S'$ is defined as:\n$C(S' \rightarrow S) = \frac{1}{\sum_{R \in S} |R|} \sum_{R \in S} |R| \max_{R' \in S'} \mathrm{IOU}(R, R')$ , (11)\nwhere $|R|$ denotes the number of pixels belonging to mask $R$. 
Note that this formulation is slightly more general than the one in Arbelaez et al. (2010), which assumes that masks in S are non-overlapping and cover the entire image. The above takes a weighted mean over IOU values, proportional to the number of pixels of the masks being covered. To give equal importance to masks of different sizes, we also consider taking an unweighted mean (mSC):
C_m(S′ → S) = (1/|S|) Σ_{R∈S} max_{R′∈S′} IOU(R, R′),   (12)
where |S| denotes the number of non-empty masks in S. Importantly and unlike the ARI, both segmentation covering variations penalise the over-segmentation of ground truth objects as this decreases the IOU for a pair of masks. This is illustrated in Figure 13, Appendix E." }, { "heading": "D COMPONENT-WISE SCENE GENERATION - GQN", "text": "" }, { "heading": "E INFERENCE OF SCENE COMPONENTS", "text": "" } ]
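As a concrete reading of equations (11) and (12), the following NumPy sketch computes SC and mSC for lists of boolean masks; the function names and the input convention are our assumptions, not part of the paper.

```python
import numpy as np

def iou(a, b):
    """IOU between two boolean masks of equal shape."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union > 0 else 0.0

def segmentation_covering(gt_masks, pred_masks, weighted=True):
    """Eq. (11) if weighted=True, the unweighted mSC of eq. (12) otherwise.
    gt_masks, pred_masks: lists of boolean arrays of equal shape."""
    gt_masks = [m for m in gt_masks if m.sum() > 0]   # non-empty masks only
    best = [max(iou(r, rp) for rp in pred_masks) for r in gt_masks]
    if weighted:
        sizes = np.array([m.sum() for m in gt_masks], dtype=float)
        return float(np.dot(sizes, best) / sizes.sum())
    return float(np.mean(best))
```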
2020
null
SP:938f9b4e59217d2e78c405464b452ddc8ba5c459
[ "This paper improves the robustness of smoothed classifiers by maximizing the certified radius, which is more efficient than adversarially training the smoothed classifier and achieves a higher average robust radius and better certified robustness when the radius is not much larger than the training sigma. It proposes a novel objective derived by decomposing the 0/1 certified loss into the sum of a 0/1 classification error and a 0/1 robustness error. Three conditions are identified to make the optimization tractable. Two surrogate losses (cross-entropy and a hinge loss on the certified radius) for the two 0/1 errors are proposed as upper bounds of the 0/1 loss. The certified radius is derived as a function of the logits of Soft-RS to make the hinge loss differentiable. Numerical stability of the proposed objective is also analyzed by showing that its gradient is bounded.", "This paper proposes a new approach to training models robust to perturbations (or 'attacks') within an l_2 radius, by maximizing a surrogate---a soft randomized smoothing loss---for the *certified radius* (a lower bound on the l_2 attack radius) of the classifier. This approach has the advantage of not needing to explicitly train against specific attacks, and is thus much faster and easier to optimize. The authors provide certain theoretical guarantees and also demonstrate strong empirical results relative to two baseline approaches." ]
Adversarial training is one of the most popular ways to learn robust models but is usually attack-dependent and time-costly. In this paper, we propose the MACER algorithm, which learns robust models without using adversarial training but performs better than all existing provable ℓ2 defenses. Recent work (Cohen et al., 2019) shows that randomized smoothing can be used to provide a certified ℓ2 radius to smoothed classifiers, and our algorithm trains provably robust smoothed classifiers via MAximizing the CErtified Radius (MACER). The attack-free characteristic makes MACER faster to train and easier to optimize. In our experiments, we show that our method can be applied to modern deep neural networks on a wide range of datasets, including Cifar-10, ImageNet, MNIST, and SVHN. For all tasks, MACER spends less training time than state-of-the-art adversarial training algorithms, and the learned models achieve larger average certified radii. Our code is available at https://github.com/RuntianZ/macer.
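To illustrate the smoothing operation this abstract refers to, here is a minimal Monte Carlo sketch of the smoothed prediction g(x); the function name and interface are hypothetical, and the paper's actual certification additionally requires the statistical machinery of Cohen et al. (2019).

```python
import numpy as np

def smoothed_predict(f, x, sigma=0.25, n=1000, rng=None):
    """Monte Carlo estimate of the smoothed classifier's prediction
    g(x) = argmax_c P(f(x + eta) = c), with eta ~ N(0, sigma^2 I).
    `f` maps a batch of inputs to integer class labels."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(0.0, sigma, size=(n,) + x.shape)
    labels = f(x[None] + noise)               # one label per noisy copy
    return int(np.bincount(labels).argmax())  # majority vote
```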
[ { "affiliations": [], "name": "Runtian Zhai" }, { "affiliations": [], "name": "Chen Dan" }, { "affiliations": [], "name": "Di He" }, { "affiliations": [], "name": "Huan Zhang" }, { "affiliations": [], "name": "Boqing Gong" }, { "affiliations": [], "name": "Pradeep Ravikumar" }, { "affiliations": [], "name": "Cho-Jui Hsieh" }, { "affiliations": [], "name": "Liwei Wang" } ]
[ { "authors": [ "Anish Athalye", "Nicholas Carlini", "David A. Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "CoRR, abs/1802.00420,", "year": 2018 }, { "authors": [ "Battista Biggio", "Igino Corona", "Davide Maiorca", "Blaine Nelson", "Nedim Šrndić", "Pavel Laskov", "Giorgio Giacinto", "Fabio Roli" ], "title": "Evasion attacks against machine learning at test time", "venue": "In Joint European conference on machine learning and knowledge discovery in databases,", "year": 2013 }, { "authors": [ "Nicholas Carlini", "David A. Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "CoRR, abs/1608.04644,", "year": 2016 }, { "authors": [ "Yair Carmon", "Aditi Raghunathan", "Ludwig Schmidt", "Percy Liang", "John C Duchi" ], "title": "Unlabeled data improves adversarial robustness", "venue": "arXiv preprint arXiv:1905.13736,", "year": 2019 }, { "authors": [ "Pin-Yu Chen", "Huan Zhang", "Yash Sharma", "Jinfeng Yi", "Cho-Jui Hsieh" ], "title": "Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models", "venue": "In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security,", "year": 2017 }, { "authors": [ "Minhao Cheng", "Thong Le", "Pin-Yu Chen", "Jinfeng Yi", "Huan Zhang", "Cho-Jui Hsieh" ], "title": "Query-efficient hard-label black-box attack: An optimization-based approach", "venue": null, "year": 2019 }, { "authors": [ "Jeremy Cohen", "Elan Rosenfeld", "Zico Kolter" ], "title": "Certified adversarial robustness via randomized smoothing", "venue": "In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Gavin Weiguang Ding", "Yash Sharma", "Kry Yik Chau Lui", "Ruitong Huang" ], "title": "Mma training: Direct input space margin maximization through adversarial training", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Ruiqi Gao", "Tianle Cai", "Haochuan Li", "Cho-Jui Hsieh", "Liwei Wang", "Jason D Lee" ], "title": "Convergence of adversarial training in overparametrized neural networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Timon Gehr", "Matthew Mirman", "Dana Drachsler-Cohen", "Petar Tsankov", "Swarat Chaudhuri", "Martin Vechev" ], "title": "Ai2: Safety and robustness certification of neural networks with abstract interpretation", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2018 }, { "authors": [ "Ian Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Sven Gowal", "Krishnamurthy Dvijotham", "Robert Stanforth", "Rudy Bunel", "Chongli Qin", "Jonathan Uesato", "Timothy Mann", "Pushmeet Kohli" ], "title": "On the effectiveness of interval bound propagation for training verifiably robust models", "venue": "arXiv preprint arXiv:1810.12715,", "year": 2018 }, { "authors": [ "Ruitong Huang", "Bing Xu", "Dale Schuurmans", "Csaba Szepesvári" ], "title": "Learning with a strong adversary", "venue": "arXiv preprint arXiv:1511.03034,", "year": 2015 }, { "authors": [ "Harini Kannan", "Alexey Kurakin", "Ian J. 
Goodfellow" ], "title": "Adversarial logit pairing", "venue": "CoRR, abs/1803.06373,", "year": 2018 }, { "authors": [ "Mathias Lecuyer", "Vaggelis Atlidakis", "Roxana Geambasu", "Daniel Hsu", "Suman Jana" ], "title": "Certified robustness to adversarial examples with differential privacy", "venue": "arXiv preprint arXiv:1802.03471,", "year": 2018 }, { "authors": [ "Bai Li", "Changyou Chen", "Wenlin Wang", "Lawrence Carin" ], "title": "Second-order adversarial attack and certifiable robustness", "venue": "CoRR,", "year": 2018 }, { "authors": [ "Xuanqing Liu", "Minhao Cheng", "Huan Zhang", "Cho-Jui Hsieh" ], "title": "Towards robust neural networks via random self-ensemble", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "arXiv preprint arXiv:1706.06083,", "year": 2017 }, { "authors": [ "Andreas Maurer", "Massimiliano Pontil" ], "title": "Empirical Bernstein Bounds and Sample Variance Penalization", "venue": "arXiv e-prints, art", "year": 2009 }, { "authors": [ "Matthew Mirman", "Timon Gehr", "Martin Vechev" ], "title": "Differentiable abstract interpretation for provably robust neural networks", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Pascal Frossard" ], "title": "Deepfool: a simple and accurate method to fool deep neural networks", "venue": "CoRR, abs/1511.04599,", "year": 2015 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Arunesh Sinha", "Michael Wellman" ], "title": "Towards the science of security and privacy in machine learning", "venue": "arXiv preprint arXiv:1611.03814,", "year": 2016 }, { "authors": [ "Chongli Qin", "James Martens", "Sven Gowal", "Dilip Krishnan", "Alhussein Fawzi", "Soham De", "Robert Stanforth", "Pushmeet Kohli" ], "title": "Adversarial robustness through local linearization", "venue": null, "year": 2019 }, { "authors": [ "Jonas Rauber", "Wieland Brendel", "Matthias Bethge" ], "title": "Foolbox: A python toolbox to benchmark the robustness of machine learning models", "venue": "arXiv preprint arXiv:1707.04131,", "year": 2017 }, { "authors": [ "Hadi Salman", "Greg Yang", "Jerry Li", "Pengchuan Zhang", "Huan Zhang", "Ilya P. Razenshteyn", "Sébastien Bubeck" ], "title": "Provably robust deep learning via adversarially trained smoothed classifiers", "venue": "URL http://arxiv.org/abs/1906.04584", "year": 2019 }, { "authors": [ "Ali Shafahi", "Mahyar Najibi", "Amin Ghiasi", "Zheng Xu", "John P. Dickerson", "Christoph Studer", "Larry S. Davis", "Gavin Taylor", "Tom Goldstein" ], "title": "Adversarial training for free", "venue": "URL http://arxiv.org/abs/1904.12843", "year": 2019 }, { "authors": [ "Gagandeep Singh", "Timon Gehr", "Matthew Mirman", "Markus Püschel", "Martin Vechev" ], "title": "Fast and effective robustness certification", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Robert Stanforth", "Alhussein Fawzi", "Pushmeet Kohli" ], "title": "Are labels required for improving adversarial robustness", "venue": "arXiv preprint arXiv:1905.13725,", "year": 2019 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian J. 
Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "CoRR, abs/1312.6199,", "year": 2013 }, { "authors": [ "Shiqi Wang", "Yizheng Chen", "Ahmed Abdou", "Suman Jana" ], "title": "Mixtrain: Scalable training of formally robust neural networks", "venue": "arXiv preprint arXiv:1811.02625,", "year": 2018 }, { "authors": [ "Lily Weng", "Huan Zhang", "Hongge Chen", "Zhao Song", "Cho-Jui Hsieh", "Luca Daniel", "Duane Boning", "Inderjit Dhillon" ], "title": "Towards fast computation of certified robustness for ReLU networks", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Eric Wong", "Zico Kolter" ], "title": "Provable defenses against adversarial examples via the convex outer adversarial polytope", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Eric Wong", "Frank Schmidt", "Jan Hendrik Metzen", "J. Zico Kolter" ], "title": "Scaling provable adversarial defenses", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Cihang Xie", "Jianyu Wang", "Zhishuai Zhang", "Zhou Ren", "Alan Yuille" ], "title": "Mitigating adversarial effects through randomization", "venue": "arXiv preprint arXiv:1711.01991,", "year": 2017 }, { "authors": [ "Runtian Zhai", "Tianle Cai", "Di He", "Chen Dan", "Kun He", "John E. Hopcroft", "Liwei Wang" ], "title": "Adversarially robust generalization just requires more unlabeled data", "venue": "URL http://arxiv.org/abs/1906.00555", "year": 2019 }, { "authors": [ "Dinghuai Zhang", "Tianyuan Zhang", "Yiping Lu", "Zhanxing Zhu", "Bin Dong" ], "title": "You only propagate once: Accelerating adversarial training via maximal principle", "venue": "arXiv preprint arXiv:1905.00877,", "year": 2019 }, { "authors": [ "Hongyang Zhang", "Yaodong Yu", "Jiantao Jiao", "Eric Xing", "Laurent El Ghaoui", "Michael Jordan" ], "title": "Theoretically principled trade-off between robustness and accuracy", "venue": "Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Huan Zhang", "Tsui-Wei Weng", "Pin-Yu Chen", "Cho-Jui Hsieh", "Luca Daniel" ], "title": "Efficient neural network robustness certification with general activation functions", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Huan Zhang", "Hongge Chen", "Chaowei Xiao", "Bo Li", "Duane Boning", "Cho-Jui Hsieh" ], "title": "Towards stable and efficient training of verifiably robust neural networks", "venue": "arXiv preprint arXiv:1906.06316,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Modern neural network classifiers are able to achieve very high accuracy on image classification tasks but are sensitive to small, adversarially chosen perturbations to the inputs (Szegedy et al., 2013; Biggio et al., 2013). Given an image x that is correctly classified by a neural network, a malicious attacker may find a small adversarial perturbation δ such that the perturbed image x + δ, though visually indistinguishable from the original image, is assigned to a wrong class with high confidence by the network. Such vulnerability creates security concerns in many real-world applications.
Researchers have proposed a variety of defense methods to improve the robustness of neural networks. Most of the existing defenses are based on adversarial training (Szegedy et al., 2013; Madry et al., 2017; Goodfellow et al., 2015; Huang et al., 2015; Athalye et al., 2018; Ding et al., 2020). During training, these methods first generate on-the-fly adversarial examples of the inputs with multiple attack iterations and then update model parameters using these perturbed samples together with the original labels. However, such approaches depend on a particular (class of) attack method, so it cannot be formally guaranteed that the resulting model is also robust against other attacks. Moreover, attack iterations are usually quite expensive. As a result, adversarial training runs very slowly.
Another line of algorithms trains robust models by maximizing the certified radius provided by robust certification methods (Weng et al., 2018; Wong & Kolter, 2018; Zhang et al., 2018; Mirman et al., 2018; Wang et al., 2018; Gowal et al., 2018; Zhang et al., 2019c). Using linear or convex relaxations of fully connected ReLU networks, a robust certification method computes a “safe radius” r for a classifier at a given input such that at any point within the radius-r ball around the input, the classifier is guaranteed to have unchanged predictions. However, these certification methods are usually computationally expensive and can only handle shallow neural networks with ReLU activations, so the corresponding training algorithms have trouble scaling to modern networks.
In this work, we propose an attack-free and scalable method to train robust deep neural networks. We mainly leverage the recent randomized smoothing technique (Cohen et al., 2019). A randomized smoothed classifier g for an arbitrary classifier f is defined as g(x) = E_η f(x + η), in which η ∼ N(0, σ²I). While Cohen et al. (2019) derived how to analytically compute the certified radius of the randomly smoothed classifier g, they did not show how to maximize that radius to make the classifier g robust. Salman et al. (2019) proposed SmoothAdv to improve the robustness of g, but it still relies on expensive attack iterations. Instead of adversarial training, we propose to learn robust models by directly taking the certified radius into the objective. We outline a few challenging desiderata any practical instantiation of this idea would have to satisfy, and provide approaches to address each of these in turn. A discussion of these desiderata, as well as a detailed implementation of our approach, is provided in Section 4. 
And as we show both theoretically and empirically, our method is numerically stable and accounts for both classification accuracy and robustness.
Our contributions are summarized as follows:
• We propose an attack-free and scalable robust training algorithm by MAximizing the CErtified Radius (MACER). MACER has the following advantages compared to previous works:
– Different from adversarial training, we train robust models by directly maximizing the certified radius without specifying any attack strategies, and the learned model can achieve provable robustness against any possible attack in the certified region. Additionally, by avoiding time-consuming attack iterations, our proposed algorithm runs much faster than adversarial training.
– Different from other methods (Wong & Kolter, 2018) that maximize the certified radius but are not scalable to deep neural networks, our method can be applied to architectures of any size. This makes our algorithm more practical in real scenarios.
• We empirically evaluate our proposed method through extensive experiments on Cifar-10, ImageNet, MNIST, and SVHN. On all tasks, MACER achieves better performance than state-of-the-art algorithms. MACER is also exceptionally fast. For example, on ImageNet, MACER uses 39% less training time than adversarial training but still performs better." }, { "heading": "2 RELATED WORK", "text": "Neural networks trained by standard SGD are not robust – a small, human-imperceptible perturbation can easily change the prediction of a network. In the white-box setting, methods have been proposed to construct adversarial examples with small ℓ∞ or ℓ2 perturbations (Goodfellow et al., 2015; Madry et al., 2017; Carlini & Wagner, 2016; Moosavi-Dezfooli et al., 2015). Furthermore, even in the black-box setting where the adversary does not have access to the model structure and parameters, adversarial examples can be found by either transfer attack (Papernot et al., 2016) or optimization-based approaches (Chen et al., 2017; Rauber et al., 2017; Cheng et al., 2019). It is thus important to study how to improve the robustness of neural networks against adversarial examples.
Adversarial training So far, adversarial training has been the most successful robust training method according to many recent studies. Adversarial training was first proposed in Szegedy et al. (2013) and Goodfellow et al. (2015), where they showed that adding adversarial examples to the training set can improve the robustness against such attacks. More recently, Madry et al. (2017) formulated adversarial training as a min-max optimization problem and demonstrated that adversarial training with the PGD attack leads to empirically robust models. Zhang et al. (2019b) further decomposed the robust error as the sum of natural error and boundary error for better performance. Finally, Gao et al. (2019) proved the convergence of adversarial training. Although models obtained by adversarial training empirically achieve good performance, they do not have certified error guarantees.
Despite the popularity of PGD-based adversarial training, one major issue is that its speed is too slow. Some recent papers propose methods to accelerate adversarial training. For example, Free-m (Shafahi et al., 2019) replays an adversarial example several times in one iteration, YOPO-m-n (Zhang et al., 2019a) restricts back propagation in PGD to the first layer, and Qin et al. 
(2019) estimates the adversary with local linearization.
Robustness certification and provable defense Many defense algorithms proposed in the past few years were claimed to be effective, but Athalye et al. (2018) showed that most of them are based on “gradient masking” and can be bypassed by more carefully designed attacks. It is thus important to study how to measure the provable robustness of a network. A robustness certification algorithm takes a classifier f and an input point x as inputs, and outputs a “safe radius” r such that for any δ subject to ‖δ‖ ≤ r, f(x) = f(x + δ). Several algorithms have been proposed recently, including the convex polytope technique (Wong & Kolter, 2018), abstract interpretation methods (Singh et al., 2018; Gehr et al., 2018) and the recursive propagation algorithms (Weng et al., 2018; Zhang et al., 2018). These methods can provide attack-agnostic robust error lower bounds. Moreover, to achieve networks with nontrivial certified robust error, one can train a network by minimizing the certified robust error computed by the above-mentioned methods, and several algorithms have been proposed in the past year (Wong & Kolter, 2018; Wong et al., 2018; Wang et al., 2018; Gowal et al., 2018; Zhang et al., 2019c; Mirman et al., 2018). Unfortunately, they can only be applied to shallow networks with limited activations and run very slowly.
More recently, researchers found a new class of certification methods called randomized smoothing. The idea of randomization has been used for defense in several previous works (Xie et al., 2017; Liu et al., 2018) but without any certification. Later on, Lecuyer et al. (2018) first showed that if Gaussian random noise is added to the input or any intermediate layer, a certified guarantee on small ℓ2 perturbations can be computed via differential privacy. Li et al. (2018) and Cohen et al. (2019) then provided improved ways to compute the ℓ2 certified robust error for Gaussian smoothed models. In this paper, we propose a new algorithm that trains on these ℓ2 certified error bounds to significantly reduce the certified error and achieve better provable adversarial robustness." }, { "heading": "3 PRELIMINARIES", "text": "Problem setup Consider a standard classification task with an underlying data distribution p_data over pairs of examples x ∈ X ⊂ R^d and corresponding labels y ∈ Y = {1, 2, · · · , K}. Usually p_data is unknown and we can only access a training set S = {(x_1, y_1), · · · , (x_n, y_n)} in which (x_i, y_i) is i.i.d. drawn from p_data, i = 1, 2, · · · , n. The empirical data distribution (the uniform distribution over S) is denoted by p̂_data. Let f ∈ F be the classifier of interest that maps any x ∈ X to Y. Usually f is parameterized by a set of parameters θ, so we also write it as f_θ. We call x′ = x + δ an adversarial example of x to classifier f_θ if f_θ can correctly classify x but assigns a different label to x′. Following many previous works (Cohen et al., 2019; Salman et al., 2019), we focus on the setting where δ satisfies the ℓ2 norm constraint ‖δ‖_2 ≤ ε. We say that the model f_θ is (ℓ2, ε)-robust at (x, y) if it correctly classifies x as y and for any ‖δ‖_2 ≤ ε, the model classifies x + δ as y. In the problem of robust classification, our ultimate goal is to find a model that is (ℓ2, ε)-robust at (x, y) with high probability over (x, y) ∼ p_data for a given ε > 0.
Neural network In image classification we often use deep neural networks. Let u_θ : X → R^K be a neural network, whose output at input x is a vector (u^1_θ(x), ..., u^K_θ(x)). 
The classifier induced by u_θ(x) is f_θ(x) = argmax_{c∈Y} u^c_θ(x).
In order to train θ by minimizing a loss function such as cross entropy, we always use a softmax layer on u_θ to normalize it into a probability distribution. The resulting network is z_θ(·; β) : X → P(K)¹, which is given by z^c_θ(x; β) = e^{β u^c_θ(x)} / Σ_{c′∈Y} e^{β u^{c′}_θ(x)}, ∀c ∈ Y, where β is the inverse temperature. For simplicity, we will use z_θ(x) to refer to z_θ(x; β) when the meaning is clear from context. The vector z_θ(x) = (z^1_θ(x), · · · , z^K_θ(x)) is commonly regarded as the “likelihood vector”, and z^c_θ(x) measures how likely input x belongs to class c.
Robust radius By definition, the (ℓ2, ε)-robustness of f_θ at a data point (x, y) depends on the radius of the largest ℓ2 ball centered at x in which f_θ does not change its prediction. This radius is called the robust radius, which is formally defined as
R(f_θ; x, y) = inf_{f_θ(x′) ≠ f_θ(x)} ‖x′ − x‖_2 when f_θ(x) = y, and R(f_θ; x, y) = 0 when f_θ(x) ≠ y.   (1)
Recall that our ultimate goal is to train a classifier which is (ℓ2, ε)-robust at (x, y) with high probability over the sampling of (x, y) ∼ p_data. Mathematically, the goal can be expressed as minimizing the expectation of the 0/1 robust classification error. The error is defined as
l^{0/1}_{ε-robust}(f_θ; x, y) := 1 − 1{R(f_θ;x,y) ≥ ε},   (2)
and the goal is to minimize its expectation over the population
L^{0/1}_{ε-robust}(f_θ) := E_{(x,y)∼p_data} l^{0/1}_{ε-robust}(f_θ; x, y).   (3)
¹ The probability simplex in R^K.
It is thus quite natural to improve model robustness via maximizing the robust radius. Unfortunately, computing the robust radius (1) of a classifier induced by a deep neural network is very difficult. Weng et al. (2018) showed that computing the ℓ1 robust radius of a deep neural network is NP-hard. Although there is no such result for the ℓ2 radius yet, it is very likely that computing the ℓ2 robust radius is also NP-hard.
Certified radius Many previous works proposed certification methods that seek to derive a tight lower bound of R(f_θ; x, y) for neural networks (see Section 2 for related work). We call this lower bound the certified radius and denote it by CR(f_θ; x, y). The certified radius satisfies 0 ≤ CR(f_θ; x, y) ≤ R(f_θ; x, y) for any f_θ, x, y. The certified radius leads to a guaranteed upper bound of the 0/1 robust classification error, which is called the 0/1 certified robust error. The 0/1 certified robust error of classifier f_θ on sample (x, y) is defined as
l^{0/1}_{ε-certified}(f_θ; x, y) := 1 − 1{CR(f_θ;x,y) ≥ ε},   (4)
i.e. a sample is counted as correct only if the certified radius reaches ε. The expectation of the certified robust error over (x, y) ∼ p_data serves as a performance metric of the provable robustness:
L^{0/1}_{ε-certified}(f_θ) := E_{(x,y)∼p_data} l^{0/1}_{ε-certified}(f_θ; x, y).   (5)
Recall that CR(f_θ; x, y) is a lower bound of the true robust radius, which immediately implies that L^{0/1}_{ε-certified}(f_θ) ≥ L^{0/1}_{ε-robust}(f_θ). Therefore, a small 0/1 certified robust error leads to a small 0/1 robust classification error.
Randomized smoothing In this work, we use the recent randomized smoothing technique (Cohen et al., 2019), which is scalable to any architecture, to obtain the certified radius of smoothed deep neural networks. The key part of randomized smoothing is to use the smoothed version of f_θ, which is denoted by g_θ, to make predictions. The formulation of g_θ is defined as follows. Definition 1. 
For an arbitrary classifier f_θ ∈ F and σ > 0, the smoothed classifier g_θ of f_θ is defined as
g_θ(x) = argmax_{c∈Y} P_{η∼N(0,σ²I)}(f_θ(x + η) = c).   (6)
In short, the smoothed classifier g_θ(x) returns the label most likely to be returned by f_θ when its input is sampled from a Gaussian distribution N(x, σ²I) centered at x. Cohen et al. (2019) prove the following theorem, which provides an analytic form of the certified radius:
Theorem 1. (Cohen et al., 2019) Let f_θ ∈ F, and η ∼ N(0, σ²I). Let the smoothed classifier g_θ be defined as in (6). Let the ground truth of an input x be y. If g_θ classifies x correctly, i.e.
P_η(f_θ(x + η) = y) ≥ max_{y′≠y} P_η(f_θ(x + η) = y′),   (7)
then g_θ is provably robust at x, with the certified radius given by
CR(g_θ; x, y) = (σ/2) [Φ^{-1}(P_η(f_θ(x + η) = y)) − Φ^{-1}(max_{y′≠y} P_η(f_θ(x + η) = y′))] = (σ/2) [Φ^{-1}(E_η 1{f_θ(x+η)=y}) − Φ^{-1}(max_{y′≠y} E_η 1{f_θ(x+η)=y′})],   (8)
where Φ is the c.d.f. of the standard Gaussian distribution." }, { "heading": "4 ROBUST TRAINING VIA MAXIMIZING THE CERTIFIED RADIUS", "text": "As we can see from Theorem 1, the value of the certified radius can be estimated by repeatedly sampling Gaussian noise. More importantly, it can be computed for any deep neural network. This motivates us to design a training method that maximizes the certified radius in order to learn robust models.
To minimize the 0/1 robust classification error in (3) or the 0/1 certified robust error in (5), many previous works (Zhang et al., 2019b; Zhai et al., 2019) proposed to first decompose the error. Note that a classifier g_θ has a positive 0/1 certified robust error on sample (x, y) if and only if exactly one of the following two cases happens:
• g_θ(x) ≠ y, i.e. the classifier misclassifies x.
• g_θ(x) = y, but CR(g_θ; x, y) < ε, i.e. the classifier is correct but not robust enough.
Thus, the 0/1 certified robust error can be decomposed as the sum of two error terms: a 0/1 classification error and a 0/1 robustness error:
l^{0/1}_{ε-certified}(g_θ; x, y) = 1 − 1{CR(g_θ;x,y) ≥ ε} = 1{g_θ(x)≠y} (0/1 classification error) + 1{g_θ(x)=y, CR(g_θ;x,y)<ε} (0/1 robustness error).   (9)" }, { "heading": "4.1 DESIDERATA FOR OBJECTIVE FUNCTIONS", "text": "Minimizing the 0/1 error directly is intractable. A classic method is to minimize a surrogate loss instead. The surrogate loss for the 0/1 classification error is called the classification loss and denoted by l_C(g_θ; x, y). The surrogate loss for the 0/1 robustness error is called the robustness loss and denoted by l_R(g_θ; x, y). Our final objective function is
l(g_θ; x, y) = l_C(g_θ; x, y) + l_R(g_θ; x, y).   (10)
We would like our loss functions l_C(g_θ; x, y) and l_R(g_θ; x, y) to satisfy some favorable conditions. These conditions are summarized below as (C1) - (C3):
• (C1) (Surrogate condition): Each surrogate loss should be an upper bound of the original error function, i.e. l_C(g_θ; x, y) and l_R(g_θ; x, y) should be upper bounds of 1{g_θ(x)≠y} and 1{g_θ(x)=y, CR(g_θ;x,y)<ε}, respectively.
• (C2) (Differentiability): l_C(g_θ; x, y) and l_R(g_θ; x, y) should be (sub-)differentiable with respect to θ.
• (C3) (Numerical stability): The computation of l_C(g_θ; x, y) and l_R(g_θ; x, y) and their (sub-)gradients with respect to θ should be numerically stable.
The surrogate condition (C1) ensures that l(g_θ; x, y) itself meets the surrogate condition, i.e.
l(g_θ; x, y) = l_C(g_θ; x, y) + l_R(g_θ; x, y) ≥ l^{0/1}_{ε-certified}(g_θ; x, y).   (11)
Conditions (C2) and (C3) ensure that (10) can be stably minimized with first-order methods."
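The Theorem 1 radius in equation (8) is straightforward to estimate from Monte Carlo class counts. The following NumPy sketch computes the plug-in quantity; the names are ours, and a rigorous certificate would replace the empirical probabilities with confidence bounds, as Cohen et al. (2019) do.

```python
import numpy as np
from scipy.stats import norm

def hard_rs_radius(counts, y, sigma):
    """Plug-in estimate of the equation-(8) radius
    (sigma / 2) * (Phi^{-1}(p_A) - Phi^{-1}(p_B)),
    where p_A is the empirical probability of the true class y under
    Gaussian noise and p_B that of the runner-up. `counts[c]` holds how
    many of the noisy copies f(x + eta_j) landed in class c. Returns 0
    when the smoothed prediction is wrong."""
    p = counts / counts.sum()
    p_a, p_b = p[y], np.max(np.delete(p, y))
    if p_a <= p_b:
        return 0.0
    # clip so Phi^{-1} stays finite when one class absorbs all samples
    p_a, p_b = np.clip([p_a, p_b], 1e-6, 1 - 1e-6)
    return 0.5 * sigma * (norm.ppf(p_a) - norm.ppf(p_b))
```

Averaging this quantity over a test set gives the ACR metric used later in Section 5.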
}, { "heading": "4.2 SURROGATE LOSSES (FOR CONDITION C1)", "text": "We next discuss choices of the surrogate losses that ensure we satisfy condition (C1). The classification surrogate loss is relatively easy to design. There are many widely used loss functions from which we can choose, and in this work we choose the cross-entropy loss as the classification loss:
1{g_θ(x)≠y} ≤ l_C(g_θ; x, y) := l_CE(g_θ(x), y).   (12)
For the robustness surrogate loss, we choose the hinge loss on the certified radius:
1{g_θ(x)=y, CR(g_θ;x,y)<ε} ≤ λ · max{ε + ε̃ − CR(g_θ; x, y), 0} · 1{g_θ(x)=y} := l_R(g_θ; x, y),   (13)
where ε̃ > 0 and λ ≥ 1/ε̃. We use the hinge loss because not only does it satisfy the surrogate condition, but it is also numerically stable, which we will discuss in Section 4.4." }, { "heading": "4.3 DIFFERENTIABLE CERTIFIED RADIUS VIA SOFT RANDOMIZED SMOOTHING (FOR CONDITION C2)", "text": "The classification surrogate loss in (12) is differentiable with respect to θ, but the differentiability of the robustness surrogate loss in (13) requires differentiability of CR(g_θ; x, y). In this section we will show that the randomized smoothing certified radius in (8) does not meet condition (C2), and accordingly, we will introduce soft randomized smoothing to solve this problem.
Whether the certified radius (8) is sub-differentiable with respect to θ boils down to the differentiability of E_η 1{f_θ(x+η)=y}. Theoretically, the expectation is indeed differentiable. However, from a practical point of view, the expectation needs to be estimated by Monte Carlo sampling, E_η 1{f_θ(x+η)=y} ≈ (1/k) Σ_{j=1}^k 1{f_θ(x+η_j)=y}, where the η_j are i.i.d. Gaussian noise and k is the number of samples. This estimate, which is a sum of indicator functions, is not differentiable. Hence, condition (C2) is still not met from the algorithmic perspective.
To tackle this problem, we leverage soft randomized smoothing (Soft-RS). In contrast to the original version of randomized smoothing (Hard-RS), Soft-RS is applied to a neural network z_θ(x) whose last layer is softmax. The soft smoothed classifier g̃_θ is defined as follows.
Definition 2. For a neural network z_θ : X → P(K) whose last layer is softmax and σ > 0, the soft smoothed classifier g̃_θ of z_θ is defined as
g̃_θ(x) = argmax_{c∈Y} E_{η∼N(0,σ²I)}[z^c_θ(x + η)].   (14)
Using Lemma 2 in Salman et al. (2019), we prove the following theorem in Appendix A:
Theorem 2. Let the ground truth of an input x be y. If g̃_θ classifies x correctly, i.e.
E_η[z^y_θ(x + η)] ≥ max_{y′≠y} E_η[z^{y′}_θ(x + η)],   (15)
then g̃_θ is provably robust at x, with the certified radius given by
CR(g̃_θ; x, y) = (σ/2) [Φ^{-1}(E_η[z^y_θ(x + η)]) − Φ^{-1}(max_{y′≠y} E_η[z^{y′}_θ(x + η)])],   (16)
where Φ is the c.d.f. of the standard Gaussian distribution.
We note that in Salman et al. (2019) (see its Appendix B), a similar technique was introduced to overcome non-differentiability when creating adversarial examples for a smoothed classifier. Different from their work, our method uses Soft-RS to obtain a certified radius that is differentiable in practice. The certified radius given by soft randomized smoothing meets condition (C2) in the algorithmic design. Even if we use Monte Carlo sampling to estimate the expectation, (16) is still sub-differentiable with respect to θ as long as z_θ is sub-differentiable with respect to θ.
Connection between Soft-RS and Hard-RS We highlight two main properties of Soft-RS. Firstly, it is a differentiable approximation of the original Hard-RS. 
To see this, note that when β → ∞, z^y_θ(x; β) converges almost everywhere to 1{y = argmax_c u^c_θ(x)}, so g̃_θ converges to g_θ almost everywhere. Consequently, the Soft-RS certified radius (16) converges to the Hard-RS certified radius (8) almost everywhere as β goes to infinity. Secondly, Soft-RS itself provides an alternative way to get a provable robustness guarantee. In Appendix A, we provide Soft-RS certification procedures that certify g̃_θ with the Hoeffding bound or the empirical Bernstein bound." }, { "heading": "4.4 NUMERICAL STABILITY (FOR CONDITION C3)", "text": "In this section, we address the numerical stability condition (C3). While Soft-RS does provide us with a differentiable certified radius (16) which we could maximize with first-order optimization methods, directly optimizing (16) suffers from exploding gradients. The problem stems from the inverse cumulative distribution function Φ^{-1}(x), whose derivative is huge when x is close to 0 or 1.
Fortunately, by minimizing the robustness loss (13) instead, we can maximize the robust radius free from exploding gradients. The hinge loss requires that samples with non-zero robustness loss satisfy 0 < CR(g̃_θ; x, y) < ε + ε̃, which is equivalent to 0 < ξ_θ(x, y) < γ, where ξ_θ(x, y) = Φ^{-1}(E_η[z^y_θ(x + η)]) − Φ^{-1}(max_{y′≠y} E_η[z^{y′}_θ(x + η)]) and γ = 2(ε + ε̃)/σ. Under this restriction, the derivative of Φ^{-1} is always bounded, as shown in the following proposition. The proof can be found in Appendix B.
Proposition 1. Given any p_1, p_2, ..., p_K satisfying p_1 ≥ p_2 ≥ ... ≥ p_K ≥ 0 and p_1 + p_2 + ... + p_K = 1, and letting γ = 2(ε + ε̃)/σ, the derivative of min{Φ^{-1}(p_1) − Φ^{-1}(p_2), γ} with respect to p_1 and p_2 is bounded." }, { "heading": "4.5 COMPLETE IMPLEMENTATION", "text": "We are now ready to present the complete MACER algorithm. Expectations over Gaussian samples are approximated with Monte Carlo sampling. Let η_1, · · · , η_k be k i.i.d. samples from N(0, σ²I). The final objective function is
l(g̃_θ; x, y) = l_C(g̃_θ; x, y) + l_R(g̃_θ; x, y) = − log ẑ^y_θ(x) + λ · max{ε + ε̃ − CR(g̃_θ; x, y), 0} · 1{g̃_θ(x)=y} = − log ẑ^y_θ(x) + (λσ/2) max{γ − ξ̂_θ(x, y), 0} · 1{g̃_θ(x)=y},   (17)
where ẑ_θ(x) = (1/k) Σ_{j=1}^k z_θ(x + η_j) is the empirical expectation of z_θ(x + η) and ξ̂_θ(x, y) = Φ^{-1}(ẑ^y_θ(x)) − Φ^{-1}(max_{y′≠y} ẑ^{y′}_θ(x)). During training we minimize E_{(x,y)∼p̂_data} l(g̃_θ; x, y). The detailed implementation is described in Algorithm 1. To simplify the implementation, we choose γ to be a hyperparameter instead of ε̃. The inverse temperature β of the softmax is also a hyperparameter.
Algorithm 1 MACER: robust training via MAximizing the CErtified Radius
1: Input: Training set p̂_data, noise level σ, number of Gaussian samples k, trade-off factor λ, hinge factor γ, inverse temperature β, model parameters θ
2: for each iteration do
3: Sample a minibatch (x_1, y_1), · · · , (x_n, y_n) ∼ p̂_data
4: For each x_i, sample k i.i.d. 
Gaussian samples x_{i1}, · · · , x_{ik} ∼ N(x_i, σ²I)
5: Compute the empirical expectations: ẑ_θ(x_i) ← Σ_{j=1}^k z_θ(x_{ij})/k for i = 1, · · · , n
6: Compute G_θ = {(x_i, y_i) : g̃_θ(x_i) = y_i}: (x_i, y_i) ∈ G_θ ⇔ y_i = argmax_{c∈Y} ẑ^c_θ(x_i)
7: For each (x_i, y_i) ∈ G_θ, compute ŷ_i: ŷ_i ← argmax_{c∈Y\{y_i}} ẑ^c_θ(x_i)
8: For each (x_i, y_i) ∈ G_θ, compute ξ̂_θ(x_i, y_i): ξ̂_θ(x_i, y_i) ← Φ^{-1}(ẑ^{y_i}_θ(x_i)) − Φ^{-1}(ẑ^{ŷ_i}_θ(x_i))
9: Update θ with one step of any first-order optimization method to minimize
−(1/n) Σ_{i=1}^n log ẑ^{y_i}_θ(x_i) + (λσ/(2n)) Σ_{(x_i,y_i)∈G_θ} max{γ − ξ̂_θ(x_i, y_i), 0}
10: end for
Comparison to adversarial training Adversarial training defines the problem as a min-max game and solves it by iterating between the inner loop (attack generation) and the outer loop (model update). In our method, we only have a single loop (the model update). As a result, our proposed algorithm runs much faster than adversarial training because it does not require additional back-propagations to generate adversarial examples.
Comparison to previous work The overall objective function of our method, a linear combination of a classification loss and a robustness loss, is similar to those of adversarial logit pairing (ALP) (Kannan et al., 2018) and TRADES (Zhang et al., 2019b). In MACER, the λ in the objective function (17) can also be viewed as a trade-off factor between accuracy and robustness. However, the robustness term of MACER does not depend on a particular adversarial example x′, which makes it substantially different from ALP and TRADES." }, { "heading": "5 EXPERIMENTS", "text": "In this section, we empirically evaluate our proposed MACER algorithm on a wide range of tasks. We also study the influence of different hyperparameters in MACER on the final model performance." }, { "heading": "5.1 SETUP", "text": "To compare fairly with previous works, we follow Cohen et al. (2019) and Salman et al. (2019) and use LeNet for MNIST, ResNet-110 for Cifar-10 and SVHN, and ResNet-50 for ImageNet.
MACER Training For Cifar-10, MNIST and SVHN, we train the models for 440 epochs using our proposed algorithm. The learning rate is initialized to 0.01 and is decayed by 0.1 at the 200th/400th epoch. For all models, we use k = 16, γ = 8.0 and β = 16.0. The value of λ trades off accuracy and robustness, and we find that different λ leads to different robust accuracy when the model is injected with different levels (σ) of noise. We find that setting λ = 12.0 for σ = 0.25 and λ = 4.0 for σ = 0.50 works best. For ImageNet, we train the models for 120 epochs. The initial learning rate is set to 0.1 and is decayed by 0.1 at the 30th/60th/90th epoch. For all models on ImageNet, we use k = 2, γ = 8.0 and β = 16.0. More details can be found in Appendix C.
Baselines We compare the performance of MACER with two previous works. The first (Cohen et al., 2019) trains smoothed networks by simply minimizing the cross-entropy loss. The second (Salman et al., 2019) uses adversarial training on smoothed networks to improve robustness. For both baselines, we use checkpoints provided by the authors and report their original numbers whenever available. In addition, we run Cohen et al. (2019)’s method on all tasks, as it is a special case of MACER obtained by setting k = 1 and λ = 0.
Certification Following previous works, we report the approximated certified test set accuracy, which is the fraction of the test set that can be certified to be robust at radius r. Note that the approximated certified test set accuracy is a function of the radius r. 
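Before turning to the evaluation metrics, here is a hedged PyTorch sketch of objective (17) / step 9 of Algorithm 1 above. It is our reading of the method, not the released code (https://github.com/RuntianZ/macer), and it simplifies one detail: β is applied to both terms here, whereas the paper recommends using a large β for the robustness term only.

```python
import math
import torch
import torch.nn.functional as F

def macer_loss(model, x, y, sigma=0.25, k=16, lam=12.0, gamma=8.0, beta=16.0):
    """Sketch of objective (17): cross-entropy on the smoothed soft scores
    plus a hinge on the Gaussian-quantile margin xi_hat. `model` returns
    logits u_theta; defaults mirror the Cifar-10 sigma = 0.25 setting."""
    n = x.size(0)
    noise = sigma * torch.randn((k,) + tuple(x.shape), device=x.device)
    logits = model((x.unsqueeze(0) + noise).flatten(0, 1))   # (k*n, K)
    probs = F.softmax(beta * logits, dim=-1).view(k, n, -1)
    z_hat = probs.mean(0).clamp(1e-4, 1 - 1e-4)              # hat{z}_theta(x)

    ce = F.nll_loss(torch.log(z_hat), y)                     # -log z_hat[y]

    ndtri = lambda p: math.sqrt(2.0) * torch.erfinv(2 * p - 1)  # Phi^{-1}
    z_y = z_hat.gather(1, y[:, None]).squeeze(1)
    z_runner_up = z_hat.scatter(1, y[:, None], 0.0).max(dim=1).values
    xi = ndtri(z_y) - ndtri(z_runner_up)
    correct = (z_hat.argmax(1) == y).float()                 # 1{g(x) = y}
    hinge = (torch.clamp(gamma - xi, min=0) * correct).sum() / n
    return ce + lam * (sigma / 2) * hinge
```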
Because of this dependence on r, it is hard to compare two models unless one is uniformly better than the other for all r. Hence, we also use the average certified radius (ACR) as a metric: for each test point (x, y) and model g, we can estimate the certified radius CR(g; x, y). The average certified radius is defined as (1/|S_test|) Σ_{(x,y)∈S_test} CR(g; x, y), where S_test is the test set. To estimate the certified radius for data points, we use the source code provided by Cohen et al. (2019)." }, { "heading": "5.2 RESULTS", "text": "We report the results on Cifar-10 and ImageNet in the main body of the paper. Results on MNIST and SVHN can be found in Appendix C.2.
Performance The performance of the different models on Cifar-10 is reported in Table 1, and in Figure 1 we display the radius-accuracy curves. Note that the area under a radius-accuracy curve is equal to the ACR of the model. First, the plots show that our proposed method consistently achieves significantly higher approximated certified test set accuracy than Cohen et al. (2019). This shows that robust training via maximizing the certified radius is more effective than simply minimizing the cross-entropy classification loss. Second, the performance of our model differs from that of Salman et al. (2019) at different r. For example, for σ = 0.25, our model achieves higher accuracy than Salman et al. (2019)’s model when r = 0/0.25/0.5, but performs worse when r = 0.75. In terms of the average certified radius, our models are better than Salman et al. (2019)’s models² in all settings. For example, when σ = 0.25/0.50, the ACR of our model is about 3% larger than that of Salman et al. (2019)’s. The gain of our model is relatively smaller when σ = 1.0. This is because σ = 1.0 is a very large noise level (Cohen et al., 2019) and both models perform poorly. The ImageNet results are displayed in Table 2 and Figure 2, and the observations are similar. All experimental results show that our proposed algorithm is more effective than previous ones.
Training speed Since MACER does not require an adversarial attack during training, it learns a robust model much faster. Empirically, we compare MACER with Salman et al. (2019) on the average training time per epoch and the total training hours, and list the statistics in Table 3. For a fair comparison, we use the code³⁴ provided by the original authors and run all algorithms on the same machine. For Cifar-10 we use one NVIDIA P100 GPU and for ImageNet we use four NVIDIA P100 GPUs. According to our experiments, on ImageNet, MACER achieves ACR = 0.544 in 117.90 hours. In contrast, Salman et al. (2019) only achieves ACR = 0.528 but uses 193.10 hours, which clearly shows that our method is much more efficient.
One might question whether the higher performance of MACER comes from the fact that we train for more epochs than previous methods. In Section C.3 we also run MACER for 150 epochs and compare it with the models in Table 3. The results show that when run for only 150 epochs, MACER still achieves performance comparable with SmoothAdv, while being 4 times faster.
² Salman et al. (2019) release hundreds of models, and we select the model with the largest average certified radius for each σ as our baseline.
³ https://github.com/locuslab/smoothing
⁴ https://github.com/Hadisalman/smoothing-adversarial" }, { "heading": "5.3 EFFECT OF HYPERPARAMETERS", "text": "In this section, we carefully examine the effect of the different hyperparameters in MACER. All experiments are run on Cifar-10 with σ = 0.25 or 0.50. 
The results for σ = 0.25 are shown in Figure 3. All details can be found in Appendix C.4.
Effect of k We draw k Gaussian samples for each input to estimate the expectation in (16). We can see from Figure 3(a) that using more Gaussian samples usually leads to better performance. For example, the radius-accuracy curve of k = 16 is uniformly above that of k = 1.
Effect of λ The radius-accuracy curves in Figure 3(b) demonstrate the trade-off effect of λ. From the figure, we can see that as λ increases, the clean accuracy drops while the certified accuracy at large radii increases.
Effect of γ γ is the hyperparameter in the hinge loss. From Figure 3(c) we can see that when γ is small, the approximated certified test set accuracy at large radii is small since γ “truncates” the large radii. As γ increases, the robust accuracy improves. γ also appears to act as a trade-off between accuracy and robustness, but the effect is not as significant as that of λ.
Effect of β Similar to Salman et al. (2019)’s finding (see their Appendix B), we also observe that using a larger β produces better results. While Salman et al. (2019) pointed out that a large β may make training unstable, we find that if we only apply a large β to the robustness loss, we can maintain training stability and achieve a larger average certified radius as well." }, { "heading": "6 CONCLUSION AND FUTURE WORK", "text": "In this work we propose MACER, an attack-free and scalable robust training method that directly maximizes the certified radius of a smoothed classifier. We discuss the desiderata such an algorithm has to satisfy, and provide an approach to each of them. According to our extensive experiments, MACER performs better than previous provable ℓ2 defenses and trains faster. Our strong empirical results suggest that adversarial training is not a must for robust training, and that defense based on certification is a promising direction for future research. Moreover, several recent papers (Carmon et al., 2019; Zhai et al., 2019; Stanforth et al., 2019) suggest that using unlabeled data helps improve adversarially robust generalization. We will also extend MACER to the semi-supervised setting." }, { "heading": "ACKNOWLEDGMENTS", "text": "We thank Tianle Cai for helpful discussions and suggestions. This work was done when Runtian Zhai was visiting UCLA under the Top-Notch Undergraduate Program of Peking University school of EECS. Chen Dan and Pradeep Ravikumar acknowledge the support of Rakuten Inc., and NSF via IIS1909816. Huan Zhang and Cho-Jui Hsieh acknowledge the support of NSF via IIS1719097. Liwei Wang acknowledges the support of Beijing Academy of Artificial Intelligence." }, { "heading": "A SOFT RANDOMIZED SMOOTHING", "text": "In this section we provide theoretical analysis and certification procedures for Soft-RS.
A.1 PROOF OF THEOREM 2
Our proof is based on the following lemma:
Lemma 1. For any measurable function f : X → [0, 1], define f̂(x) = E_{η∼N(0,σ²I)} f(x + η); then x ↦ Φ^{-1}(f̂(x)) is 1/σ-Lipschitz.
This lemma is a generalized version of Lemma 2 in Salman et al. (2019).
Proof of Theorem 2. Let y* = argmax_{y′≠y} E_η[z^{y′}_θ(x + η)]. For any c ∈ Y, define ẑ^c_θ as:
ẑ^c_θ(x) = E_{η∼N(0,σ²I)}[z^c_θ(x + η)].   (18)
Because z^c_θ : X → [0, 1], by Lemma 1, x ↦ Φ^{-1}(ẑ^c_θ(x)) is 1/σ-Lipschitz. 
Thus, for all y′ ≠ y and any δ such that ‖δ‖_2 ≤ (σ/2) [Φ^{-1}(E_η[z^y_θ(x + η)]) − Φ^{-1}(max_{y′≠y} E_η[z^{y′}_θ(x + η)])]:
Φ^{-1}(ẑ^y_θ(x + δ)) ≥ Φ^{-1}(ẑ^y_θ(x)) − (1/2) [Φ^{-1}(ẑ^y_θ(x)) − Φ^{-1}(ẑ^{y*}_θ(x))],
Φ^{-1}(ẑ^{y′}_θ(x + δ)) ≤ Φ^{-1}(ẑ^{y′}_θ(x)) + (1/2) [Φ^{-1}(ẑ^y_θ(x)) − Φ^{-1}(ẑ^{y*}_θ(x))] ≤ Φ^{-1}(ẑ^{y*}_θ(x)) + (1/2) [Φ^{-1}(ẑ^y_θ(x)) − Φ^{-1}(ẑ^{y*}_θ(x))].   (19)
Therefore, Φ^{-1}(E_η z^y_θ(x + δ + η)) ≥ Φ^{-1}(E_η z^{y′}_θ(x + δ + η)). Due to the monotonicity of Φ^{-1}, we have E_η z^y_θ(x + δ + η) ≥ E_η z^{y′}_θ(x + δ + η), which implies that g̃_θ(x + δ) = y.
A.2 SOFT-RS CERTIFICATION PROCEDURE
Let z_A = E_η[z^y_θ(x + η)] and z_B = max_{y′≠y} E_η[z^{y′}_θ(x + η)]. If there exist bounds z_A^lb, z_B^ub ∈ [0, 1] such that P(z_A ≥ z_A^lb ∧ z_B ≤ z_B^ub) ≥ 1 − α, then with probability at least 1 − α, CR(g̃_θ; x, y) ≥ (σ/2) [Φ^{-1}(z_A^lb) − Φ^{-1}(z_B^ub)]. Meanwhile, z_B ≤ 1 − z_A, so we can take z_B^ub = 1 − z_A^lb, and
P(CR(g̃_θ; x, y) ≥ σ Φ^{-1}(z_A^lb)) ≥ 1 − α.   (20)
The problem thus reduces to finding a confidence lower bound for z_A. Here we provide two bounds:
Hoeffding bound The random variable z^y_θ(x + η) has mean z_A, and z^y_1, · · · , z^y_k are its k observations. Because z^y_j ∈ [0, 1] for any j = 1, · · · , k, we can use Hoeffding’s inequality to obtain a lower confidence bound:
Lemma 2. (Hoeffding’s inequality) Let X_1, ..., X_k be independent random variables bounded by the interval [0, 1]. Let X̄ = (1/k) Σ_{j=1}^k X_j. Then for any t ≥ 0,
P(X̄ − E X̄ ≥ t) ≤ e^{−2kt²}.   (21)
Denote ẑ^y = (1/k) Σ_{j=1}^k z^y_j. By Hoeffding’s inequality we have
P(ẑ^y − z_A ≥ sqrt((−log α)/(2k))) ≤ α.   (22)
Hence, a 1 − α confidence lower bound z_A^lb for z_A is
z_A^lb = ẑ^y − sqrt((−log α)/(2k)).   (23)
Empirical Bernstein bound Maurer & Pontil (2009) provide a tighter bound:
Theorem 3. (Theorem 4 in Maurer & Pontil (2009)) Under the conditions of Lemma 2, with probability at least 1 − α,
X̄ − E X̄ ≤ sqrt(2S² log(2/α)/k) + 7 log(2/α)/(3(k − 1)),   (24)
where S² is the sample variance of X_1, · · · , X_k, i.e.
S² = (Σ_{j=1}^k X_j² − (Σ_{j=1}^k X_j)²/k) / (k − 1).   (25)
Consequently, a 1 − α confidence lower bound z_A^lb for z_A is
z_A^lb = ẑ^y − sqrt(2S² log(2/α)/k) − 7 log(2/α)/(3(k − 1)).   (26)
The full certification procedure with the above two bounds is described in Algorithm 2.
Algorithm 2 Soft randomized smoothing certification
1: # Certify the robustness of g̃ around an input x with the Hoeffding bound
2: function CertifyHoeffding(z, σ², x, n_0, n, α)
3: ẑ_0 ← SampleUnderNoise(z, x, n_0, σ²)[1, :]/n_0
4: ĉ_A ← argmax_c ẑ_0^c
5: ẑ_A ← SampleUnderNoise(z, x, n, σ²)[1, ĉ_A]/n
6: z_A^lb ← ẑ_A − sqrt((−log α)/(2n))
7: if z_A^lb > 1/2 then return prediction ĉ_A and radius σ Φ^{-1}(z_A^lb)
8: else return ABSTAIN
9: end function
10: # Certify with the empirical Bernstein bound
11: function CertifyBernstein(z, σ², x, n_0, n, α)
12: ẑ_0 ← SampleUnderNoise(z, x, n_0, σ²)[1, :]/n_0
13: ĉ_A ← argmax_c ẑ_0^c
14: A ← SampleUnderNoise(z, x, n, σ²)
15: ẑ_A ← A[1, ĉ_A]/n, S²_A ← (A[2, ĉ_A] − A[1, ĉ_A]²/n)/(n − 1)
16: z_A^lb ← ẑ_A − sqrt(2 S²_A log(2/α)/n) − 7 log(2/α)/(3(n − 1))
17: if z_A^lb > 1/2 then return prediction ĉ_A and radius σ Φ^{-1}(z_A^lb)
18: else return ABSTAIN
19: end function
20: # Helper function: draw num samples from z(x + η) and return the first and second sample moments
21: function SampleUnderNoise(z, x, num, σ²)
22: Initialize a 2 × K matrix A ← (0, · · · , 0; 0, · · · , 0)
23: for j = 1 to num do
24: Sample noise η_j ∼ N(0, σ²I)
25: Compute: z_j = z(x + η_j)
26: Increment: A[1, :] ← A[1, :] + z_j, A[2, :] ← A[2, :] + z_j²
27: end for
28: return A
29: end function
A.3 COMPARING SOFT-RS WITH HARD-RS
We use Soft-RS during training and use Hard-RS during certification. In this section, we empirically compare these two certification methods. We certify the nine models in Table 4. 
For each model, we certify with both Hard-RS and Soft-RS. For Hard-RS, we use the Clopper-Pearson bound, and for Soft-RS, we use the empirical Bernstein bound with different choices of β. The results are displayed in Figure 4. They show that Hard-RS consistently gives a larger lower bound on the robust radius than Soft-RS. We also observe that there is a gap between Soft-RS and Hard-RS when β → ∞, which implies that the empirical Bernstein bound, though tighter than the Hoeffding bound, is still looser than the Clopper-Pearson bound." }, { "heading": "B PROOF OF PROPOSITION 1", "text": "Proof of Proposition 1. We only need to consider the case Φ^{-1}(p_1) − Φ^{-1}(p_2) ≤ γ, since the derivative is zero when Φ^{-1}(p_1) − Φ^{-1}(p_2) > γ. Obviously, p_2 ≤ 0.5 and thus Φ^{-1}(p_2) ≤ 0, so Φ^{-1}(p_1) ≤ γ. Define p* ∈ (0, 1) such that Φ^{-1}(p*) = γ. Since Φ^{-1}(p) is a strictly increasing function of p, p* is unique, and p_1 ≤ p*. In this case, min{Φ^{-1}(p_1) − Φ^{-1}(p_2), γ} = Φ^{-1}(p_1) − Φ^{-1}(p_2). Since p_1 is the largest value and p_1 + p_2 + ... + p_K = 1, we have 1/K ≤ p_1 ≤ p*. Since [Φ^{-1}]′(p) is continuous on any closed subinterval of (0, 1), the derivative of Φ^{-1}(p_1) − Φ^{-1}(p_2) with respect to p_1 is bounded. Similarly, p_2 is the largest among p_2, ..., p_K and (K − 1) p_2 ≥ p_2 + ... + p_K = 1 − p_1 ≥ 1 − p*. Thus 1 − p* ≥ p_2 ≥ (1 − p*)/(K − 1), and the derivative of Φ^{-1}(p_1) − Φ^{-1}(p_2) with respect to p_2 is bounded." }, { "heading": "C SUPPLEMENTARY MATERIAL FOR EXPERIMENTS", "text": "C.1 COMPARED MODELS
In this section we list all models compared in the main body of this paper. Cifar-10 models are listed in Table 4, and ImageNet models are listed in Table 5.
C.2 RESULTS ON MNIST AND SVHN
Here we present experimental results on MNIST and SVHN. For comparison we also report the results produced by Cohen et al. (2019)’s method.
⁵ We first train with λ = 0.0, and then change to λ = 12.0 after the learning rate decay.
C.2.1 MNIST
The results are reported in Table 6. For all σ, we use k = 16, λ = 16.0, γ = 8.0 and β = 16.0.
C.2.2 SVHN
The results are reported in Table 7. We use k = 16, λ = 12.0, γ = 8.0 and β = 16.0.
C.3 MACER TRAINING FOR 150 EPOCHS
In Table 8 we report the performance and training time of MACER on Cifar-10 when it is run for only 150 epochs, and compare with SmoothAdv (Salman et al., 2019) and MACER (440 epochs). The learning rate is decayed by 0.1 at epochs 60 and 120. All other hyperparameters are kept the same as in Table 4.
C.4 EFFECT OF HYPERPARAMETERS
All experiments are run on Cifar-10 with σ = 0.25 or 0.50. See Table 9 for detailed experimental settings. Results are reported in Tables 10-13." } ]
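A small NumPy sketch of the confidence bounds used in Algorithm 2 (equations (23) and (26)) follows, assuming the k softmax scores of the predicted class have already been collected; the names and interface are ours, not the paper's.

```python
import numpy as np
from scipy.stats import norm

def soft_rs_certify(z_samples, alpha=0.001, sigma=0.25, bound="bernstein"):
    """Lower-confidence-bound the smoothed soft score z_A from k
    observations z^y(x + eta_j) in [0, 1], then return the certified
    radius sigma * Phi^{-1}(z_A^lb) of equation (20), or None to abstain.
    `z_samples`: 1-D array of the predicted class's softmax outputs."""
    k = len(z_samples)
    z_hat = z_samples.mean()
    if bound == "hoeffding":            # eq. (23)
        z_lb = z_hat - np.sqrt(-np.log(alpha) / (2 * k))
    else:                               # empirical Bernstein, eq. (26)
        s2 = z_samples.var(ddof=1)      # sample variance, eq. (25)
        t = np.log(2 / alpha)
        z_lb = z_hat - np.sqrt(2 * s2 * t / k) - 7 * t / (3 * (k - 1))
    if z_lb <= 0.5:
        return None                     # abstain
    return sigma * norm.ppf(z_lb)
```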
2020
null
SP:820a879346c3ba370348f1086dab5b9c256175e9
[ "The paper studies the problem of density estimation and learning accurate generative models. The authors start from the observation that this problem has been approached either with variational inference models, which scale very well but whose approximations may lead to degenerate results in practice, or with diffusion maps, which scale poorly but are very effective at capturing the underlying data manifold. From here, the authors propose integrating the notion of a random walk from diffusion maps into VAEs to avoid degenerate conditions. The proposed method is first defined in its generality, a practical implementation is presented, theoretical guarantees are provided, and empirical evidence of its effectiveness is reported.", "The paper proposes a new generative model for unsupervised learning, based on a diffusion random walk principle inspired by the manifold learning literature. The basic idea is to (probabilistically) map points to a latent space, perform a random walk in that space, and then map back to the original space again. Learning of the suitable maps is achieved by casting the problem in a variational inference framework." ]
Variational inference (VI) methods and especially variational autoencoders (VAEs) specify scalable generative models that enjoy an intuitive connection to manifold learning — with many default priors the posterior/likelihood pair q(z|x)/p(x|z) can be viewed as an approximate homeomorphism (and its inverse) between the data manifold and a latent Euclidean space. However, these approximations are well-documented to become degenerate in training. Unless the subjective prior is carefully chosen, the topologies of the prior and data distributions often will not match. Conversely, diffusion maps (DM) automatically infer the data topology and enjoy a rigorous connection to manifold learning, but do not scale easily or provide the inverse homeomorphism. In this paper, we propose a) a principled measure for recognizing the mismatch between data and latent distributions and b) a method that combines the advantages of variational inference and diffusion maps to learn a homeomorphic generative model. The measure, the locally bi-Lipschitz property, is a sufficient condition for a homeomorphism and easy to compute and interpret. The method, the variational diffusion autoencoder (VDAE), is a novel generative algorithm that first infers the topology of the data distribution, then models a diffusion random walk over the data. To achieve efficient computation in VDAEs, we use stochastic versions of both variational inference and manifold learning optimization. We prove approximation theoretic results for the dimension dependence of VDAEs, and that locally isotropic sampling in the latent space results in a random walk over the reconstructed manifold. Finally, we demonstrate our method on various real and synthetic datasets, and show that it exhibits performance superior to other generative models.
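For readers unfamiliar with the diffusion-maps component this abstract builds on, a minimal NumPy sketch of the classical construction follows (Gaussian affinities, row-normalised into a random-walk Markov matrix, embedded via its leading non-trivial eigenvectors). This illustrates standard diffusion maps, not the VDAE itself, and all names and parameters are assumptions.

```python
import numpy as np

def diffusion_map(X, eps=1.0, n_components=2, t=1):
    """Classical diffusion-maps embedding of a point set X (n x d)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    K = np.exp(-d2 / eps)                                # Gaussian affinity
    P = K / K.sum(axis=1, keepdims=True)                 # random-walk kernel
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # drop the trivial eigenpair (eigenvalue 1, constant vector);
    # scale coordinates by lambda^t, where t is the number of walk steps
    return vecs[:, 1:n_components + 1] * vals[1:n_components + 1] ** t
```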
[]
[ { "authors": [ "Alexander A. Alemi", "Ben Poole", "Ian Fischer", "Joshua V. Dillon", "Rif A. Saurous", "Kevin Murphy" ], "title": "An information-theoretic analysis of deep latent-variable models", "venue": "CoRR, abs/1711.00464,", "year": 2017 }, { "authors": [ "Samaneh Azadi", "Catherine Olsson", "Trevor Darrell", "Ian Goodfellow", "Augustus Odena" ], "title": "Discriminator rejection sampling", "venue": "arXiv preprint arXiv:1810.06758,", "year": 2018 }, { "authors": [ "Claus Bahlmann" ], "title": "Directional features in online handwriting recognition", "venue": "Pattern Recognition,", "year": 2006 }, { "authors": [ "Mikhail Belkin", "Partha Niyogi" ], "title": "Laplacian eigenmaps and spectral techniques for embedding and clustering", "venue": "In Advances in neural information processing systems,", "year": 2002 }, { "authors": [ "Tim R Davidson", "Luca Falorsi", "Nicola De Cao", "Thomas Kipf", "Jakub M Tomczak" ], "title": "Hyperspherical variational auto-encoders", "venue": "arXiv preprint arXiv:1804.00891,", "year": 2018 }, { "authors": [ "Charles Fefferman", "Sanjoy Mitter", "Hariharan Narayanan" ], "title": "Testing the manifold hypothesis", "venue": "Journal of the American Mathematical Society,", "year": 2016 }, { "authors": [ "Nicholas I Fisher", "Toby Lewis", "Brian JJ Embleton" ], "title": "Statistical analysis of spherical data", "venue": "Cambridge university press,", "year": 1993 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems (NIPS),", "year": 2014 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron C Courville" ], "title": "Improved training of wasserstein gans", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Thomas Hamelryck", "John T Kent", "Anders Krogh" ], "title": "Sampling realistic protein conformations using local structural bias", "venue": "PLoS Computational Biology,", "year": 2006 }, { "authors": [ "Junxian He", "Daniel Spokoyny", "Graham Neubig", "Taylor Berg-Kirkpatrick" ], "title": "Lagging inference networks and posterior collapse in variational autoencoders", "venue": "CoRR, abs/1901.05534,", "year": 2019 }, { "authors": [ "Matthew D Hoffman", "David M Blei", "Chong Wang", "John Paisley" ], "title": "Stochastic variational inference", "venue": "The Journal of Machine Learning Research,", "year": 2013 }, { "authors": [ "Peter W Jones", "Mauro Maggioni", "Raanan Schul" ], "title": "Manifold parametrizations by eigenfunctions of the laplacian and heat kernels", "venue": "Proceedings of the National Academy of Sciences,", "year": 2008 }, { "authors": [ "Michael I Jordan", "Zoubin Ghahramani", "Tommi S Jaakkola", "Lawrence K Saul" ], "title": "An introduction to variational methods for graphical models", "venue": "Machine learning,", "year": 1999 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "NC Krieger Lassen", "D Juul Jensen", "Knut Conradsen" ], "title": "On the statistical analysis of orientation data", "venue": "Acta Crystallographica Section A: Foundations of Crystallography,", "year": 1994 }, { "authors": [ "Roy R Lederman", "Ronen Talmon" ], "title": "Learning the geometry of common latent variables using 
alternating-diffusion", "venue": "Applied and Computational Harmonic Analysis,", "year": 2018 }, { "authors": [ "Hariharan Narayanan", "Sanjoy Mitter" ], "title": "Sample complexity of testing the manifold hypothesis", "venue": "In Advances in Neural Information Processing Systems,", "year": 2010 }, { "authors": [ "David Peel", "William J Whiten", "Geoffrey J McLachlan" ], "title": "Fitting mixtures of kent distributions to aid in joint set identification", "venue": "Journal of the American Statistical Association,", "year": 2001 }, { "authors": [ "Ali Razavi", "Aäron van den Oord", "Ben Poole", "Oriol Vinyals" ], "title": "Preventing posterior collapse with delta-vaes", "venue": "CoRR, abs/1901.03416,", "year": 2019 }, { "authors": [ "Luis A. Pérez Rey", "Vlado Menkovski", "Jacobus W. Portegies" ], "title": "Diffusion variational autoencoders", "venue": "CoRR, abs/1901.08991,", "year": 2019 }, { "authors": [ "Sam T Roweis", "Lawrence K Saul" ], "title": "Nonlinear dimensionality reduction by locally linear embedding", "venue": null, "year": 2000 }, { "authors": [ "Bernhard Schölkopf", "Alexander Smola", "Klaus-Robert Müller" ], "title": "Nonlinear component analysis as a kernel eigenvalue problem", "venue": "Neural computation,", "year": 1998 }, { "authors": [ "Uri Shaham", "Alexander Cloninger", "Ronald R Coifman" ], "title": "Provable approximation properties for deep neural networks", "venue": "Applied and Computational Harmonic Analysis,", "year": 2018 }, { "authors": [ "Uri Shaham", "Kelly Stanton", "Henry Li", "Boaz Nadler", "Ronen Basri", "Yuval Kluger" ], "title": "Spectralnet: Spectral clustering using deep neural networks", "venue": "arXiv preprint arXiv:1801.01587,", "year": 2018 }, { "authors": [ "Amit Singer", "Ronald R Coifman" ], "title": "Non-linear independent component analysis with diffusion maps", "venue": "Applied and Computational Harmonic Analysis,", "year": 2008 }, { "authors": [ "Kihyuk Sohn", "Honglak Lee", "Xinchen Yan" ], "title": "Learning structured output representation using deep conditional generative models", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Jakub M Tomczak", "Max Welling" ], "title": "Vae with a vampprior", "venue": "arXiv preprint arXiv:1705.07120,", "year": 2017 }, { "authors": [ "Ryan Turner", "Jane Hung", "Yunus Saatci", "Jason Yosinski" ], "title": "Metropolis-hastings generative adversarial networks", "venue": "arXiv preprint arXiv:1811.11357,", "year": 2018 }, { "authors": [ "Jiacheng Xu", "Greg Durrett" ], "title": "Spherical latent spaces for stable variational autoencoders", "venue": "arXiv preprint arXiv:1808.10805,", "year": 2018 }, { "authors": [ "Shengjia Zhao", "Jiaming Song", "Stefano Ermon" ], "title": "Infovae: Information maximizing variational autoencoders", "venue": "CoRR, abs/1706.02262,", "year": 2017 }, { "authors": [ "Jones" ], "title": "Under the above setting and assume (C1)-(C2), then there are positive constants c1, c2, c3 which only depend on M and g, s.t. for any x ∈ M, rM(x) being the inradius, there are d eigenfunctions", "venue": null, "year": 2008 }, { "authors": [ "Shaham" ], "title": "Let CM be the number of neighborhoods Ui = B(xi, δ) ∩M needed to coverM such that ∀x, y ∈ Ui, (1 − )‖x − y‖ ≤ dM(x, y) ≤ (1 + )‖x − y‖. 
Here, we choose δ = min(δM, κ−1ρ) where δM is the largest δ that preserves locally Euclidean neighborhoods and κ−1ρ is the smallest value", "venue": null, "year": 2018 }, { "authors": [ "dM", "·) on Ui" ], "title": "2018a), the first layer of a neural network is capable of using 4D units to select the subset of d coordinates ψ̃(x) from ψ(x) for x ∈ Ui and zeroing out the other D−d coordinates with ReLU bump functions. Then we can defineX(ψ̃(x)) = X(ψ(x)) on x ∈ Ui", "venue": null, "year": 2018 }, { "authors": [ "Singer", "Coifman" ], "title": "ψ(Uz0) the output covariance matrix is characterized by the Jacobian of the function fN mapping from Euclidean space (on the diffusion coordinates) to the output space, at the point z0. So the covariance of the data lying insize ψ(Uz0) is Jz0ΣJ T", "venue": null, "year": 2008 } ]
[ { "heading": null, "text": "Variational inference (VI) methods and especially variational autoencoders (VAEs) specify scalable generative models that enjoy an intuitive connection to manifold learning — with many default priors the posterior/likelihood pair q(z|x)/p(x|z) can be viewed as an approximate homeomorphism (and its inverse) between the data manifold and a latent Euclidean space. However, these approximations are well-documented to become degenerate in training. Unless the subjective prior is carefully chosen, the topologies of the prior and data distributions often will not match. Conversely, diffusion maps (DM) automatically infer the data topology and enjoy a rigorous connection to manifold learning, but do not scale easily or provide the inverse homeomorphism. In this paper, we propose a) a principled measure for recognizing the mismatch between data and latent distributions and b) a method that combines the advantages of variational inference and diffusion maps to learn a homeomorphic generative model. The measure, the locally bi-Lipschitz property, is a sufficient condition for a homeomorphism and easy to compute and interpret. The method, the variational diffusion autoencoder (VDAE), is a novel generative algorithm that first infers the topology of the data distribution, then models a diffusion random walk over the data. To achieve efficient computation in VDAEs, we use stochastic versions of both variational inference and manifold learning optimization. We prove approximation theoretic results for the dimension dependence of VDAEs, and that locally isotropic sampling in the latent space results in a random walk over the reconstructed manifold. Finally, we demonstrate our method on various real and synthetic datasets, and show that it exhibits performance superior to other generative models." }, { "heading": "1 INTRODUCTION", "text": "Recent developments in generative models such as variational auto-encoders (VAEs, Kingma & Welling (2013)) and generative adversarial networks (GANs, Goodfellow et al. (2014)) have made it possible to sample remarkably realistic points from complex high dimensional distributions at low computational cost. While their methods are very different — one is derived from variational inference and the other from game theory — their ends both involve learning smooth mappings from a user-defined prior distribution to the modeled distribution.\nThese maps are closely tied to manifold learning when the prior is supported over a Euclidean space (e.g. Gaussian or uniform priors) and the data lie on a manifold (also known as the Manifold Hypothesis, see Narayanan & Mitter (2010); Fefferman et al. (2016)). This is because manifolds themselves are defined by sets that have homeomorphisms to such spaces. Learning such maps is beneficial to any machine learning task, and may shed light on the success of VAEs and GANs in modeling complex distributions.\nFurthermore, the connection to manifold learning may explain why these generative models fail when they do. Known as posterior collapse in VAEs (Alemi et al., 2017; Zhao et al., 2017; He et al., 2019; Razavi et al., 2019) and mode collapse in GANs (Goodfellow, 2017), both describe cases where the forward/reverse mapping to/from Euclidean space collapses large parts of the input to a single output. This violates the bijective requirement of the homeomorphic mapping. It also results in degenerate latent spaces and poor generative performance. 
A major cause of such failings is when the geometries of the prior and target data do not agree. We explore this issue of prior mismatch and previous treatments of it in Section 3.

Given their connection to manifold learning, it is natural to look to classical approaches in the field for ways to improve VAEs. One of the most principled methods is spectral learning (Schölkopf et al., 1998; Roweis & Saul, 2000; Belkin & Niyogi, 2002), which involves describing data from a manifold X ⊂ MX by the eigenfunctions of a kernel on MX. We focus specifically on DMs, where Coifman & Lafon (2006) show that normalizations of the kernel approximate a very specific diffusion process, the heat kernel over MX. A crucial property of the heat kernel is that, like its physical analogue, it defines a diffusion process that has a uniform stationary distribution — in other words, drawing from this stationary distribution draws uniformly from the data manifold. Moreover, Jones et al. (2008) established another crucial property of DMs, namely that distances in local neighborhoods in the eigenfunction space are nearly isometric to corresponding geodesic distances on the manifold. However, despite their strong theoretical guarantees, DMs are poorly equipped for large scale generative modeling, as they are not easily scalable and do not provide an inverse mapping from the intrinsic feature space.

In this paper we address issues in variational inference and manifold learning by combining ideas from both. Theory in manifold learning allows us to better recognize prior mismatch, whereas variational inference provides a method to learn the difficult-to-approximate inverse diffusion map.

Our contributions: 1) We introduce the locally bi-Lipschitz property, a sufficient condition for a homeomorphism, for measuring the stability of a mapping between latent and data distributions. 2) We introduce VDAEs, a class of variational autoencoders whose encoder-decoder feedforward pass approximates the diffusion process on the data manifold with respect to a user-defined kernel k. 3) We show that deep neural networks are capable of learning such diffusion processes, and 4) that networks approximating this process produce random walks that have certain desirable properties, including well defined transition and stationary distributions. 5) Finally, we demonstrate the utility of the VDAE framework on a set of real and synthetic datasets, and show that they have superior performance and satisfy the locally bi-Lipschitz property where GANs and VAEs do not." }, { "heading": "2 BACKGROUND", "text": "Variational inference (VI, Jordan et al. (1999); Wainwright et al. (2008)) is a machine learning method that combines Bayesian statistics and latent variable models to approximate some probability density p(x). VI assumes and exploits a latent variable structure in the assumed data generation process, that the observations x ∼ p(x) are conditionally distributed given unobserved latent variables z. By modeling the conditional distribution, then marginalizing over z, as in

pθ(x) = ∫z pθ(x|z)p(z) dz, (1)

we obtain the model evidence, or likelihood that x could have instead been drawn from pθ(x). Maximizing Eq. 1 leads to an algorithm for finding likely approximations of p(x). As the cost of computing this integral scales exponentially with the dimension of z, we instead maximize the evidence lower bound (ELBO):

log pθ(x) ≥ −DKL(q(z|x)||p(z)) + Ez∼q(z|x)[log pθ(x|z)], (2)

where q(z|x) is usually an approximation of pθ(z|x).
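For concreteness, a single-sample Monte-Carlo estimate of this bound can be sketched as follows (a minimal PyTorch sketch assuming a diagonal-Gaussian encoder and a Bernoulli decoder; the two-head encoder/decoder interfaces are illustrative assumptions, not the architecture used in this paper):

import torch
import torch.nn.functional as F

def elbo(x, encoder, decoder):
    # q(z|x) = N(mu, diag(exp(log_var))); the two-head encoder is assumed.
    mu, log_var = encoder(x)
    eps = torch.randn_like(mu)
    z = mu + torch.exp(0.5 * log_var) * eps                # reparameterization
    # Analytic KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior.
    kl = 0.5 * torch.sum(mu ** 2 + log_var.exp() - log_var - 1.0, dim=1)
    # One-sample Monte-Carlo estimate of E_q[log p(x|z)], Bernoulli likelihood.
    logits = decoder(z)
    rec = -F.binary_cross_entropy_with_logits(
        logits, x, reduction="none").sum(dim=1)
    return (rec - kl).mean()                               # lower-bounds log p(x)

Maximizing this estimate by stochastic gradient ascent recovers the standard VAE training objective.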
Optimizing the ELBO is sped up by taking stochastic gradients (Hoffman et al., 2013), and further accelerated by learning a global function approximator qφ in an autoencoding structure (Kingma & Welling, 2013).

Diffusion maps (DMs, Coifman & Lafon (2006)), on the other hand, are a class of kernel methods that perform non-linear dimensionality reduction on a set of observations X ⊆ MX, where MX is the data manifold. Given a symmetric and positive kernel k, DM considers the induced random walk on the graph of X, where, given x, y ∈ X, the transition probabilities p(y|x) = p(x, y) are row-normalized versions of k(x, y). Moreover, the diffusion map ψ embeds the data X ∈ Rm into the Euclidean space RD so that the diffusion distance is approximated by Euclidean distance. This is a powerful property, as it allows the arbitrarily complex random walk induced by k on MX to become an isotropic Gaussian random walk on ψ(MX). SpectralNet is an algorithm introduced in Shaham et al. (2018b) to speed up the diffusion map. Until recently, the map ψ could only be computed via the eigendecomposition of the kernel matrix K. As a result, DMs were only tractable for small datasets, or on larger datasets by combining landmark-based estimates and Nyström approximation techniques. However, Shaham et al. (2018b) propose approximations of the function ψ itself in the case that the kernel k is symmetric. In particular, we will leverage SpectralNet to enforce our diffusion embedding prior.

Locally bi-Lipschitz coordinates by kernel eigenfunctions. Jones et al. (2008) analyzed the construction of local coordinates of Riemannian manifolds by Laplacian eigenfunctions and diffusion map coordinates. They establish, for all x ∈ X, the existence of some neighborhood U(x) and d spectral coordinates given U(x) that define a bi-Lipschitz mapping from U(x) to Rd. With a smooth compact Riemannian manifold, U(x) can be chosen to be a geodesic ball with radius a constant multiple of the inradius (the radius of the largest possible ball around x without intersecting with the manifold boundary), where the constant is uniform for all x, but the indices of the d spectral coordinates as well as the local bi-Lipschitz constants may depend on x. Specifically, the Lipschitz constants involve the inverse of the inradius at x multiplied again by some global constants. For completeness we give a simplified statement of the Jones et al. (2008) result in the supplementary material.

Using the compactness of the manifold, one can always cover the manifold with m many neighborhoods (geodesic balls) on which the bi-Lipschitz property in Jones et al. (2008) holds. As a result, there are a total of D spectral coordinates, D ≤ md (in practice D is much smaller than md, since the selected spectral coordinates in the proof of Jones et al. (2008) tend to be low-frequency ones, and thus the selection on different neighborhoods tends to overlap), such that on each of the m neighborhoods, there exists a subset of d spectral coordinates out of the D ones which are bi-Lipschitz on the neighborhood, and the Lipschitz constants can be bounded uniformly from below and above." }, { "heading": "3 MOTIVATION AND RELATED WORK", "text": "Our proposed measure and model are motivated by degenerate latent spaces and poor generative performance in a variational inference framework arising from prior mismatch: when the topologies of the data and prior distributions do not agree.
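A simple numerical probe of such mismatch is the locally bi-Lipschitz measure defined formally in Section 6.4; the following is a minimal sketch of how one might compute it over k-nearest-neighbor latent neighborhoods (Euclidean metrics and the scikit-learn neighbor search are illustrative assumptions):

import numpy as np
from sklearn.neighbors import NearestNeighbors

def local_bilipschitz(Z, X, k=10):
    # Z: latent points (n x D); X: their images under the map f (n x m).
    # Returns bilip_k(z) at every z, i.e. the smallest K with
    # 1/K <= dX(f(z), f(z')) / dZ(z, z') <= K over the k-NN of z.
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(Z).kneighbors(Z)
    consts = np.empty(len(Z))
    for i, nn in enumerate(idx[:, 1:]):            # drop the point itself
        dz = np.linalg.norm(Z[nn] - Z[i], axis=1)
        dx = np.linalg.norm(X[nn] - X[i], axis=1)
        r = dx / np.maximum(dz, 1e-12)
        consts[i] = max(r.max(), 1.0 / max(r.min(), 1e-12))
    return consts                                  # summarize by mean/std over z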
In real world data, this is usually due to two factors: first, when the dimensionalities of the distributions do not match, and second, when the geometries do not match. It is easy to see that homeomorphisms between the distributions will not exist in either case: pointwise correspondences cannot be established, thus the bijective condition cannot be met. As a result, the model has poor generative performance — for each point not captured in the pointwise correspondence, the latent or generated distribution loses expressivity.

Though the default choice of Gaussian distribution for p(z) is mathematically elegant and computationally expedient, there are many datasets, real and synthetic, for which this distribution is ill-suited. It is well known that spherical distributions are superior for modeling directional data (Fisher et al., 1993; Mardia, 2014), which can be found in fields as diverse as bioinformatics (Hamelryck et al., 2006), geology (Peel et al., 2001), material science (Krieger Lassen et al., 1994), natural image processing (Bahlmann, 2006), and simply preprocessed datasets1. Additionally, observe that no homeomorphism exists between Rk and S1 for any k. For data distributed on more complex manifolds, the literature is sparse due to the difficult nature of such study. However, the manifold hypothesis is well-known and studied (Narayanan & Mitter, 2010; Fefferman et al., 2016).

Previous research on alleviating prior mismatch exists. Davidson et al. (2018); Xu & Durrett (2018) consider VAEs with the von Mises-Fisher prior, a geometrically hyperspherical prior. Rey et al. (2019) further model arbitrarily complex manifolds as priors, but require explicit knowledge of the manifold (i.e. its projection map, scalar curvature, and volume). Finally, Tomczak & Welling (2017) consider mixtures of any pre-existing priors. But while these methods increase the expressivity of the priors available, they do not prescribe a method for choosing the prior itself. That responsibility still lies with the user.

Conversely, our method chooses the best prior automatically. To our knowledge, ours is the first to take a data-driven approach to prior selection. By using some data to inform the prior, we not only guarantee the existence of a homeomorphism between data and prior distributions, we explicitly define it by the learned diffusion map ψ̃." }, { "heading": "4 METHOD", "text": "In this section we propose VDAEs, a variational inference method that, given the data manifold MX, observations X ⊂ MX, and a kernel k, models the geometry of X by approximating a random walk over the latent diffusion manifold MZ := ψ(MX). The model is trained by maximizing the local evidence: the evidence (i.e. log-likelihood) of each point given its random walk neighborhood.
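To make ψ and the kernel-induced walk concrete, a small-scale vanilla sketch of the diffusion map (exact eigendecomposition with a Gaussian kernel; the bandwidth eps is an illustrative parameter, not a value from this paper) might look like:

import numpy as np

def diffusion_map(X, eps=1.0, D=2):
    # Gaussian kernel and its row-normalization into a random walk P.
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq_dists / eps)
    P = K / K.sum(axis=1, keepdims=True)      # p(y|x): one step of the walk
    # Right eigenvectors of P give the diffusion coordinates; the trivial
    # eigenvalue 1 (constant eigenvector) is discarded.
    w, v = np.linalg.eig(P)
    order = np.argsort(-w.real)[1:D + 1]
    return v.real[:, order] * w.real[order]   # the n x D embedding psi(X)

This exact construction scales poorly with the number of points n, which is precisely the gap that SpectralNet and the VDAE described below are meant to close.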
Points are generated from the trained model by sampling from π, the stationary distribution of the resulting random walk.

Starting from some point x ∈ X, we can roughly describe one step of the walk as the composition of three functions: 1) the approximate diffusion map ψ̃Θ : MX → MZ, 2) a sampling procedure from the learned diffusion process z′ ∼ qφ(z′|x) = N(ψ̃Θ(x), C̃φ) on MZ, and 3) the learned inverse diffusion map ψ̃−1θ : MZ → MX that produces x′ ∼ p(x′|z′) = N(ψ̃−1θ(z′), cI), where the constant c is user-defined and fixed.

We rely crucially on three advantages of our latent space ψ̃Θ(X): a) that it is well-defined (given the first D eigenvalues of k are distinct), b) well-approximated (given SpectralNet), and c) that Euclidean distances in MZ approximate single-step random walk distances on MX (see Section 2 and Coifman & Lafon (2006)). Thus the transition probabilities induced by k can be approximated by Gaussian kernels2 in MZ.

Therefore, to model a diffusion random walk over MZ, we must learn the functions ψ̃Θ, ψ̃−1θ, C̃φ that approximate the diffusion map, its inverse, and the covariance of the random walk on MZ, at all points z ∈ MZ. SpectralNet gives us ψ̃Θ. To learn ψ̃−1θ and C̃φ, we use variational inference." }, { "heading": "4.1 THE LOWER BOUND", "text": "Formally, let us define Ux := Bd(x, δ) ∩ MX, where Bd(x, δ) is the δ-ball around x with respect to d(·, ·), the diffusion distance on MZ. For each x ∈ X, we define the local evidence of x as

Ex′∼p(x′|x)|Ux log pθ(x′|x), (3)

where p(x′|x)|Ux is the restriction of p(x′|x) to Ux.

1Any dataset where the data points have been normalized to be unit length becomes a subset of a hypersphere.

2Importantly, note that the choice of a Gaussian kernel in the latent space is not dependent on the choice of k. We have this invariance due to the aforementioned property of diffusion embeddings.

The resulting local evidence lower bound is:

log pθ(x′|x) ≥ −DKL(qφ(z′|x)||pθ(z′|x)) + Ez′∼qφ(z′|x) log pθ(x′|z′), (4)

where the first term is the divergence of the random walk distributions and the second term is the neighborhood reconstruction error. Note that the neighborhood reconstruction error should be differentiated from the self-reconstruction error used in VAEs. Eq. 4 produces the empirical loss function:

L̃DVAE = −DKL(qφ(z′|x)||pθ(z′|x)) + log pθ(x′|z′i), (5)

where z′i = gφ,Θ(x, εi), εi ∼ N(0, I), and gφ,Θ is the deterministic, differentiable function, depending on ψ̃Θ and C̃φ, that generates qφ by the reparameterization trick3 (Kingma & Welling, 2013).

Algorithm 1 VDAE training
Θ, φ, θ ← Initialize parameters
Obtain parameters Θ for the approximate diffusion map ψ̃Θ by Shaham et al. (2018b)
while not converged do
  A ← Random minibatch from X
  for x ∈ A do
    z′ ∼ pφ(z′|ψ̃Θ(x)) ▷ Take one step of the diffusion random walk
    x′ ← arg min y∈A\{x} |ψ̃Θ(y) − z′|²d ▷ Find approximate nearest neighbor(s)
    g ← g + (1/|A|) ∇φ,θ log pθ(x′|x) ▷ Compute gradients of the loss, i.e. Eq. 4
  Update φ, θ using g" }, { "heading": "4.2 THE SAMPLING PROCEDURE", "text": "Here we discuss the algorithm for generating data points from p(x). Composing qφ(z′|x) (≈ pθ(z′|x)) with pθ(x′|z′) gives us an approximation of pθ(x′|x). Then the simple, parallelizable, and fast random-walk-based sampling procedure naturally arises: initialize with an arbitrary point on the manifold x0 ∈ MX, then pick suitably large N and for n = 1, . . . , N draw xn ∼ p(x|xn−1). Eventually, our diffusion random walk converges on its stationary distribution π.
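In code, this recursive sampling (made explicit as Algorithm 2 below) amounts to alternating the three learned maps; the following is a hedged sketch in which spectral_net, q_phi, and inv_map are hypothetical stand-ins for the trained networks ψ̃Θ, qφ, and ψ̃−1θ:

import torch

@torch.no_grad()
def sample_vdae(x, spectral_net, q_phi, inv_map, n_steps=10):
    # Simulate x_n ~ p(x | x_{n-1}) for n = 1..n_steps starting from x.
    for _ in range(n_steps):
        z = q_phi(spectral_net(x)).sample()   # one latent random-walk step
        x = inv_map(z)                        # decode back to the data space
    return x                                  # approximately a draw from pi

Here q_phi is assumed to return a torch.distributions object parameterizing N(ψ̃Θ(x), C̃φ).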
By Coifman & Lafon (2006), this is guaranteed to be the uniform distribution on the data manifold. See Section 6.2 for examples of points drawn from this procedure." }, { "heading": "4.3 A PRACTICAL IMPLEMENTATION", "text": "We now introduce a practical implementation VDAEs, considering the case where ψ̃Θ(x), qφ(z′|x) and pθ(x′|z′) are neural network functions, as they are in VAEs and SpectralNet, respectively.\nThe neighborhood reconstruction error. Since qφ(z′|x) models the neighborhood of ψ̃Θ(x), we may sample qφ to obtain z′ (the neighbor of x in the latent space). This gives pθ(x′|x) ≈ ψ̃−1θ (qφ(z\n′|x)), where ψ−1 exists due to the bi-Lipschitz property. We can efficiently approximate x′ ∈ MX by considering the closest embedded data point ψ̃Θ(x) ∈ MZ to z′ = ψ̃Θ(x′). This is because Euclidean distance onMZ approximates the diffusion distance onMX . In other words, x′ ∼ pθ(x′|x) ≈ ψ̃−1θ (qφ(z′|x)) which we approximate empirically by\nx′ ≈ arg min y∈A d(ψ̃Θ(y), z ′) , z′ ∼ qφ(z′|x), (6)\nwhere A ⊆ X is the training batch. On the other hand, the divergence of random walk distributions −DKL(qφ(z′|x)||pθ(z′|x)) can be modeled simply as the divergence of two Gaussian kernels defined on MZ . Though pθ(z′|x) is intractable, the diffusion map ψ gives us the diffusion embedding Z, which is an approximation of the true distribution of pθ(z′|x) in a neighborhood around z = ψ(x). We estimate the first and\n3Though q depends on φ and Θ, we will use qφ := qφ,Θ to be consistent with existing VAE notation and to indicate that Θ is not learned by VI.\nsecond moments of this distribution in RD by computing the local Mahalanobis distance of points in the neighborhood. Then, by minimizing the KL divergence between qφ(z′|x) and the one implied by this Mahalanobis distance, we obtain the loss:\n−DKL(qφ(z′|x)||pθ(z′|x)) = − log |αΣ∗| |Cφ| + d− tr{(αΣ∗)−1Cφ}, (7)\nwhere Cφ(x) is a neural network function, Σ∗(x) = Cov(Bd(ψ(x), δ)∩Z) is the covariance of the points in a neighborhood of z = ψ(x) ∈ Z, and α is a scaling parameter. Note that Cφ(x) does not have to be diagonal, and in fact is most likely not. Combining Eqs. 6 and 7 we obtain Algorithm 1.\nNow we consider the sampling procedure. Since we use neural networks to approximate qφ(z′|x) and pθ(x′|z′), the generation procedure is highly parallelizable. We empirically observe the random walk enjoys rapid mixing properties — it does not take many iterations of the random walk to sample from all ofMZ 4. This leads to Algorithm 2.\nAlgorithm 2 VDAE sampling X0 ← Initialize with points X0 ⊂ X t← 0 while p(X0) 6≈ π do\nfor xt ∈ Xt do zt+1 ∼ pφ(z′|ψ̃Θ(xt)) . Take one step of the diffusion random walk xt+1 ∼ pθ(x|zt+1) . Map back into input space t← t+ 1" }, { "heading": "5 THEORY", "text": "We theoretically prove that the desired inverse map ψ−1 from spectral coordinate codes back to the manifold can be approximated by a decoder network, where the network complexity is bounded by quantities related to the intrinsic geometry of the manifold. This section relies heavily on the known bi-Lipschitz property of DMs Jones et al. (2008), which we are approximating with the VDAE latent space without the need for regularization." }, { "heading": "5.1 THEOREMS", "text": "The theory for the capacity of the encoder to mapM to the diffusion map space ψ(M) has already been considered in Shaham et al. (2018a) and Mishne et al. (2017). We instead focus on the decoder, which requires a different treatment. 
The following theorem is proved in Appendix A.3, based upon the result in Jones et al. (2008).

Theorem 1. Let MX ⊂ Rm be a smooth d-dimensional manifold, ψ(MX) ⊂ RD be the diffusion map for D ≥ d large enough to have a subset of coordinates that are locally bi-Lipschitz. Let X = [X1, ..., Xm] be the set of all m extrinsic coordinates of the manifold. Then there exists a sparsely-connected ReLU network fN, with 4DCMX nodes in the first layer, 8dmN nodes in the second layer, 2mN nodes in the third layer, and m nodes in the output layer, such that

‖X(ψ(x)) − fN(ψ(x))‖L2(ψ(MX)) ≤ √m Cψ/√N (8)

where the norm is interpreted as ‖F‖²L2(ψ(M)) := ∫ ‖F(ψ(x))‖²2 dψ(x). Here Cψ depends on how sparsely X(ψ(x))|Ui can be represented in terms of the ReLU wavelet frame on each neighborhood Ui, and CMX on the curvature and dimension of the manifold MX.

Theorem 1 is complementary to the theorem in Shaham et al. (2018a), which provides guarantees for the encoder, as Theorem 1 demonstrates a similar approximation theoretic argument for the decoder. The proof is built on two properties of ReLU neural networks: 1) their ability to split curved domains into small, almost Euclidean patches, and 2) their ability to build differences of bump functions.

4For all experiments in Section 6, the number of steps required to draw from π is less than 10.

We also discuss the connections between the distribution at each point in diffusion map space, qφ(z|x), and the result of this distribution after being decoded through the decoder network fN(z) for z ∼ qφ(z|x). Similar to Singer & Coifman (2008), we characterize the covariance matrix Cov(fN(z)) := Ez∼qφ(z|x)[fN(z)fN(z)T]. The following theorem is proved in Appendix A.3.

Theorem 2. Let fN be a neural network approximation to X as in Theorem 1, such that it approximates the extrinsic manifold coordinates. Let C ∈ Rm×m be the covariance matrix C = Ez∼qφ(z|x)[fN(z)fN(z)T]. Let qφ(z|x) ∼ N(ψ(x), Σ) with small enough Σ that there exists a patch Uz0 ⊂ M around z0 satisfying the bi-Lipschitz property of Jones et al. (2008), and such that Pr(z ∼ qφ(z|x) ∉ ψ(Uz0)) < ε. Then the number of eigenvalues of C greater than ε is at most d, and C = Jz0ΣJTz0 + O(ε), where Jz0 is the m × D Jacobian matrix at z0.

Theorem 2 establishes the relationship between the covariance matrices used in the sampling procedure and their image under the decoder fN to approximate ψ−1. Similar to Singer & Coifman (2008), we are able to sample according to a multivariate normal distribution in the latent space. Thus, the resulting cloud in the data space is distorted (to first order) by the local Jacobian of the map fN. The key insight of Theorem 2 is from combining this idea with the observation of Jones et al. (2008) that ψ−1 depends locally on only d of the coordinates in the D-dimensional latent space." }, { "heading": "6 EXPERIMENTAL RESULTS", "text": "" }, { "heading": "6.1 VIDEO OF ROTATING FIGURE", "text": "We consider the problem of generating new frames from a video of rigid movement. We take 200 frames of a color video (each frame is 100 × 80 × 3) of a spinning bulldog (Lederman & Talmon, 2018). Due to the spinning of the figure and the fixed background, this creates a low-dimensional, approximately circular manifold.

We compare our method to the VAE, the Wasserstein GAN (Gulrajani et al., 2017) (with a bi-Lipschitz constraint on the critic), and the hyperspherical VAE (Davidson et al., 2018). For the VAE, we use a two dimensional Gaussian prior pθ(z), such that z ∼ N(0, I2).
The noise injected into the GAN is drawn from a two dimensional uniform distribution pθ(z), such that zi ∼ U(0, 1), i = 1, 2. For the spherical VAE, we use a latent dimension of D = 2, which highlights the dimension mismatch issue that occurs with a spherical prior. This is a benefit of VDAE: even if we choose D > d, the latent embedding will still only be locally d-dimensional. We use the same architecture for all networks, consisting of one hidden layer with 512 neurons; the activation function for all networks is tanh. In Fig. 2, we present 300 generated samples by displaying them on a scatter plot with coordinates corresponding to their latent dimensions z1 and z2." }, { "heading": "6.2 DATA GENERATION FROM UNIFORMLY SAMPLED MANIFOLDS", "text": "In this series of experiments, we visualize the results of the sampling procedure in Algorithm 2 on three synthetic manifolds. As discussed in Section 4.2, we randomly select an initial seed point, then recursively sample from pθ(x′|x) many times to simulate a random walk on the manifold. In the top row of Fig. 3, we highlight the location of the initial seed point, take 20 steps of the random walk, and display the resulting generated points on three learned manifolds. Clearly, after a large number of resampling iterations, the algorithm continues to generate points on the manifold, and the distribution of sampled points converges to a uniform stationary distribution on the manifold. Moreover, this stationary distribution is reached very quickly. In the bottom row of the same Fig. 3, we show pθ(x′|x) by sampling a large number of points from the single seed point. As can be seen, a single step of pθ(x′|x) covers a large part of the latent space. The architecture also uses one hidden layer of 512 neurons and tanh activations." }, { "heading": "6.3 CLUSTER CONDITIONAL SAMPLING", "text": "In this section, we deal with the problem of generating samples from data with multiple clusters in an unsupervised fashion (i.e. no a priori knowledge of the cluster structure). Clustered data creates a problem for many generative models, as the topology of the latent space (i.e. normal distribution) differs from the topology of the data space with multiple clusters.

In our first experiment, we show that our method is capable of generating new points from a particular cluster given an input point from that cluster. This generation is done in an unsupervised fashion, which is a different setting from the approach of conditional VAEs (Sohn et al., 2015) that require training labels. We demonstrate this property on MNIST in Figure 4, and show that the newly generated points after short diffusion time remain in the same class as the seeded image. Here the architecture is a standard fully convolutional architecture. Details can be found in Appendix A.4.

The problem of addressing the difference in topologies between the latent space of a generative model and the output data has been acknowledged in recent works about rejection sampling (Azadi et al., 2018; Turner et al., 2018). Rejection sampling of neural networks consists of generating a large collection of samples using a standard GAN, and then designing a probabilistic algorithm to decide in a post-hoc fashion whether the points were truly in the support of the data distribution p(x).

In the following experiment, we compare to the standard example in the generative model literature. The data consists of nine bounded spherical densities with significant minimal separation, lying on a 5 × 5 grid.
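A hedged sketch of this kind of grid-of-Gaussians toy dataset can be generated as follows (the number of samples per mode, spacing, and scale are illustrative choices, not the exact values used in our experiments):

import numpy as np

def gaussian_grid(n_per_mode=100, side=5, spacing=2.0, std=0.05, seed=0):
    # Mixture of well-separated isotropic Gaussians on a side x side grid.
    rng = np.random.default_rng(seed)
    centers = np.array([[i * spacing, j * spacing]
                        for i in range(side) for j in range(side)], float)
    return np.vstack([c + std * rng.standard_normal((n_per_mode, 2))
                      for c in centers])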
A standard GAN or VAE struggles to avoid generating points in the gaps between these densities, and thus requires the post-sampling rejection analysis. On the other hand, our model creates a latent space that separates each of these clusters into their own features and only generates points that exist in the neighborhood of training data. Figure 5 clearly shows that this results in significantly fewer points generated in the gaps between clusters, as well as eliminating the need to generate additional points that are not in the final generated set. Our VDAE architecture here uses one hidden layer of 512 neurons and tanh activations. GAN and DRS-GAN architectures are as described in Azadi et al. (2018)." }, { "heading": "6.4 EMPIRICAL EVALUATION OF THE LOCAL BI-LIPSCHITZ MEASURE", "text": "Here we describe a practical method for computing the local bi-Lipschitz property, then use it to evaluate several methods on the MNIST dataset. Let Z and X be metric spaces and f : Z → X. We define, for each z ∈ Z and k ∈ N, the function bilipk(z):

bilipk(z) = min K s.t. 1/K ≤ dX(f(z), f(z′))/dZ(z, z′) ≤ K, ∀ z′ ∈ Uz,k ∩ Z,

where Z := f−1(X) is the latent embedding of our dataset X5, dX and dZ are metrics on X and Z, and Uz,k is the k-nearest neighborhood of z. Intuitively, increasing values of K can be thought of as an increasing tendency of the learned map to stretch or compress regions of space. By analyzing various statistics of the local bi-Lipschitz measure evaluated at all points of a latent space Z, we can gain insight into how well-behaved a homeomorphism f is. In Table 1 we report the mean and standard deviation, over 10 runs, of the local bi-Lipschitz property for several methods trained on the MNIST dataset.

5For VAE, SVAE, and our method, these are the means of the posterior distributions. For GAN it is points drawn from the N(0, 1) prior.

The comparison is between the Wasserstein GAN (WGAN), the VAE, the hyperspherical VAE (SVAE), and our method. We use standard architectures prescribed by their respective papers to train the methods. For our method we use a single 500-unit hidden layer network architecture with ReLU nonlinearities for both the encoder and decoder.

By constraining our latent space to be the diffusion embedding of the data, our method finds a mapping that automatically enjoys the homeomorphic properties of an ideal mapping, and this is reflected in the low values of the local bi-Lipschitz constant. Conversely, other methods do not consider the topology of the data in the prior distribution. This is especially apparent in the VAE and SVAE, which must generate from the entirety of the input distribution X since they minimize a reconstruction loss. Interestingly, the mode collapse tendency of GANs alleviates the pathology of the bi-Lipschitz constant by allowing the GAN to focus on a subset of the distribution — but this comes at the cost, of course, of collapsing to a few modes of the dataset. Our method is able to reconstruct the entirety of X while simultaneously maintaining a low local bi-Lipschitz constant." }, { "heading": "A APPENDIX", "text": "A.1 DERIVATION OF LOCAL EVIDENCE LOWER BOUND (EQ. 4)
We begin with taking the log of the random walk transition likelihood,

log pθ(x′|x) = log ∫z′ pθ(x′, z′|x) dz′ (A.1)

= log ∫z′ pθ(x′|z′, x) p(z′|x) [q(z′)/q(z′)] dz′ (A.2)

= log Ez′∼q(z′)[pθ(x′|z′, x) p(z′|x)/q(z′)] (A.3)

≥ Ez′∼q(z′)[log pθ(x′|z′, x)] + Ez′∼q(z′)[log (p(z′|x)/q(z′))] (A.4)

= Ez′∼q(z′)[log pθ(x′|z′, x)] − DKL[q(z′)||p(z′|x)] (A.5)

where q(z′) is an arbitrary distribution and the inequality follows from Jensen's inequality. We let q(z′) be the conditional distribution q(z′|x). Furthermore, if we make the simplifying assumption that pθ(x′|z′, x) = pθ(x′|z′), then we obtain Eq. 4:

log pθ(x′|x) ≥ −DKL(qφ(z′|x)||pθ(z′|x)) + Ez′∼qφ(z′|x) log pθ(x′|z′). (A.6)

A.2 RESULTS IN JONES ET AL. (2008)

To state the result in Jones et al. (2008), we need the following set-up:

(C1) M is a d-dimensional smooth compact manifold, possibly having boundary, equipped with a smooth (at least C2) Riemannian metric g.

We denote the geodesic distance by dM, and the geodesic ball centered at x with radius r by BM(x, r). Under (C1), for each point x ∈ M, there exists rM(x), the inradius, that is, the largest r s.t. BM(x, r) is contained in M. Let ∆M be the Laplace-Beltrami operator on M with Neumann boundary condition, which is self-adjoint on L2(M, µ), µ being the Riemannian volume given by g. Suppose that M is re-scaled to have volume 1. The next condition we need concerns the spectrum of the manifold Laplacian:

(C2) ∆M has discrete spectrum, and the eigenvalues λ0 ≤ λ1 ≤ · · · satisfy Weyl's estimate, i.e. there exists a constant C which only depends on M s.t.

|{j : λj ≤ T}| ≤ CT d/2.

Let ψj be the eigenfunction associated with λj; {ψj}j form an orthonormal basis of L2(M, µ). The last condition is:

(C3) The heat kernel (defined by the heat equation on M) has the spectral representation

Kt(x, y) = ∑∞j=0 e−tλj ψj(x)ψj(y).

Theorem 3 (Theorem 2 of Jones et al. (2008), simplified version). Under the above setting and assuming (C1)-(C2), there are positive constants c1, c2, c3 which only depend on M and g, s.t. for any x ∈ M, rM(x) being the inradius, there are d eigenfunctions of ∆M, ψj1, · · · , ψjd, which collectively give a mapping Ψ : M → Rd by

Ψx(x) = (ψj1(x), · · · , ψjd(x))

satisfying that ∀y, y′ ∈ B(x, c1rM(x)),

c2 rM(x)−1 dM(y, y′) ≤ ‖Ψx(y) − Ψx(y′)‖ ≤ c3 rM(x)−1−d/2 dM(y, y′).

That is, Ψ is bi-Lipschitz on the neighborhood B(x, c1rM(x)) with the Lipschitz constants indicated as above. The subscript x in Ψx emphasizes that the indices j1, · · · , jd may depend on x.

A.3 PROOFS

Proof of Theorem 1. The proof of Theorem 1 is actually a simple extension of the following theorem, Theorem 4, which needs to be proved for each individual extrinsic coordinate Xk, hence the additional factor of m coming from the L2 norm of m functions.

Theorem 4. Let M ⊂ Rm be a smooth d-dimensional manifold, ψ(M) ⊂ RD be the diffusion map for D ≥ d large enough to have a subset of coordinates that are locally bi-Lipschitz. Let one of the m extrinsic coordinates of the manifold be denoted X(ψ(x)) for x ∈ M. Then there exists a sparsely-connected ReLU network fN, with 4DCM nodes in the first layer, 8dN nodes in the second layer, and 2N nodes in the third layer, such that

‖X − fN‖L2(ψ(M)) ≤ Cψ/√N (A.7)

where Cψ depends on how sparsely X(ψ(x))|Ui can be represented in terms of the ReLU wavelet frame on each neighborhood Ui, and CM on the curvature and dimension of the manifold M.

Proof of Theorem 4. The proof borrows from the main theorem of Shaham et al. (2018a).
We adopt this notation and summarize the changes in the proof here. For a full description of the theory and guarantees for neural networks on manifolds, see Shaham et al. (2018a). Let CM be the number of neighborhoods Ui = B(xi, δ) ∩ M needed to cover M such that ∀x, y ∈ Ui, (1 − ε)‖x − y‖ ≤ dM(x, y) ≤ (1 + ε)‖x − y‖. Here, we choose δ = min(δM, κ−1ρ), where δM is the largest δ that preserves locally Euclidean neighborhoods and κ−1ρ is the smallest value from Jones et al. (2008) such that every neighborhood Ui has a bi-Lipschitz set of diffusion coordinates.

Because of the locally bi-Lipschitz guarantee from Jones et al. (2008), we know for each Ui there exists an equivalent neighborhood ψ̃(Ui) in the diffusion map space, where ψ̃(x) = [ψi1(x), ..., ψid(x)]. Note that the choice of these d coordinates depends on the neighborhood Ui. Moreover, we know the Euclidean distance on ψ(Ui) is locally bi-Lipschitz w.r.t. dM(·, ·) on Ui.

First, we note that as in Shaham et al. (2018a), the first layer of a neural network is capable of using 4D units to select the subset of d coordinates ψ̃(x) from ψ(x) for x ∈ Ui and zeroing out the other D − d coordinates with ReLU bump functions. Then we can define X(ψ̃(x)) = X(ψ(x)) on x ∈ Ui.

Now to apply the theorem from Shaham et al. (2018a), we must establish that X|Ui : ψ̃(Ui) → R can be written efficiently in terms of ReLU functions. Because the manifold and diffusion metrics are bi-Lipschitz, we know at a minimum that ψ̃ is invertible on ψ̃(Ui). Because of this invertibility, we will slightly abuse notation and refer to X(ψ̃(x)) = X(x), where this is understood to be the extrinsic coordinate of the manifold at the point x that corresponds to ψ(x). We also know that ∀x, y ∈ Ui,

|X(ψ̃(x)) − X(ψ̃(y))| = |X(x) − X(y)| ≤ maxz∈Ui ‖∇X(z)‖ dM(x, y) ≤ (maxz∈Ui ‖∇X(z)‖/(1 − ε)) ‖ψ̃(x) − ψ̃(y)‖,

where ∇X(z) is understood to be the gradient of X(z) at the point z ∈ M. This means X(ψ̃(x)) is a Lipschitz function w.r.t. ψ̃(x). Because X(ψ̃(x)) is Lipschitz continuous, it can be approximated by step functions on a ball of radius 2−ℓ to an error that is at most (maxz∈Ui ‖∇X(z)‖/(1 − ε)) 2−ℓ. This means the maximum ReLU wavelet coefficient is less than (maxz∈Ui ‖∇X(z)‖/(1 − ε)) (2−ℓ + 2−ℓ+1). This fact, along with the fact that ψ̃(Ui) is compact, gives the fact that on ψ̃(Ui), the set of ReLU wavelet coefficients is in ℓ1. And from Shaham et al. (2018a), if on a local patch the function is expressible in terms of ReLU wavelet coefficients in ℓ1, then there is an approximation rate of 1/√N for N ReLU wavelet terms.

Proof of Theorem 2. We borrow from Singer & Coifman (2008) to prove the following result. Given that the bulk of the distribution q lies inside ψ(Uz0), we can consider only the action of fN on ψ(Uz0) rather than on the whole space. Because the geodesic on U is bi-Lipschitz w.r.t. the Euclidean distance on the diffusion coordinates (the metric on the input space), we can use the results from Singer & Coifman (2008) and say that on ψ(Uz0) the output covariance matrix is characterized by the Jacobian of the function fN mapping from Euclidean space (on the diffusion coordinates) to the output space, at the point z0. So the covariance of the data lying inside ψ(Uz0) is Jz0ΣJTz0, with an O(ε) perturbation for the fact that an ε fraction of the data lies outside ψ(Uz0).

The effective rank of C being at most d comes from the locally bi-Lipschitz property.
We know X(ψ(x)) only depends on the d coordinates ψ̃(x) as in the proof of Theorem 1, which implies fN(ψ(x)) satisfies a similar property if fN fully learned X(ψ(x)). Thus, while J ∈ Rm×D, it is at most rank d, which means JΣJT is at most rank d as well.

A.4 EXPERIMENT ARCHITECTURES

A.4.1 CLUSTER CONDITIONAL SAMPLING WITH MNIST

We use the following encoder architecture:

• Conv2D(channels=64, strides=(1,1), kernel=4)
• Conv2D(channels=64, strides=(1,1), kernel=4)
• MaxPooling2D(pool size=2)
• Conv2D(channels=64, strides=(1,1), kernel=4)
• Conv2D(channels=64, strides=(1,1), kernel=4)
• MaxPooling2D(pool size=2)
• Dense(512, ’relu’)
• Dense(10, ’linear’)

and the following decoder architecture:

• Dense(7 * 7, ’relu’)
• Conv2DTranspose(channels=64, strides=(1,1), kernel=4)
• Conv2DTranspose(channels=64, strides=(1,1), kernel=4)
• UpSampling2D(pool size=2)
• Conv2DTranspose(channels=64, strides=(1,1), kernel=4)
• Conv2DTranspose(channels=64, strides=(1,1), kernel=4)
• UpSampling2D(pool size=2)" } ]
2019
null
SP:2f460faff6d62d462bd80a4545f3ce435d2ab0f6
[ "The authors present a novel work to address the problem of signal propagation in the recurrent neural networks. The idea is to build a attractor system for the signal transition from state h_{k-1} to h_k. If the attractor system converges to a equilibrium, then the hidden to hidden gradient is an identity matrix. This idea is elegant. The authors verify the performance of Increment RNN on long-term-dependency tasks and non-long-term-dependency tasks.", "In this paper, the authors propose the incremental RNN (iRNN), which is inspired by the continuous-time RNN (CTRNN). Theoretically, the equilibrium point of iRNN exists and is unique. Furthermore, the norm of the Jacobian between two hidden states is always one, provided that the Euler iterations converge. The authors proved this property as well as the exponential convergence rate of the Euler iteration. These properties avoid the vanishing/exploding gradient problem typical in RNN with long sequences in theory. Empirically, the proposed method is compared with multiple RNN architecture on various tasks. " ]
Recurrent neural networks (RNNs) are particularly well-suited for modeling long-term dependencies in sequential data, but are notoriously hard to train because the error backpropagated in time either vanishes or explodes at an exponential rate. While a number of works attempt to mitigate this effect through gated recurrent units, skip-connections, parametric constraints and design choices, we propose a novel incremental RNN (iRNN), where hidden state vectors keep track of incremental changes, and as such approximate state-vector increments of Rosenblatt’s (1962) continuous-time RNNs. iRNN exhibits identity gradients and is able to account for long-term dependencies (LTD). We show that our method is computationally efficient, overcoming the overheads of many existing methods that attempt to improve RNN training, while suffering no performance degradation. We demonstrate the utility of our approach with extensive experiments and show competitive performance against standard LSTMs on LTD and other non-LTD tasks.
[ { "affiliations": [], "name": "INCREMENTALLY EVOLVING" }, { "affiliations": [], "name": "ON AN" }, { "affiliations": [], "name": "EXPLODING GRADIENTS" }, { "affiliations": [], "name": "Anil Kag" }, { "affiliations": [], "name": "Ziming Zhang" } ]
[ { "authors": [ "Kerem Altun", "Billur Barshan", "Orkun Tunçel" ], "title": "Comparative study on classifying human activities with miniature inertial and magnetic sensors", "venue": "Pattern Recogn.,", "year": 2010 }, { "authors": [ "Davide Anguita", "Alessandro Ghio", "Luca Oneto", "Xavier Parra", "Jorge L. Reyes-Ortiz" ], "title": "Human activity recognition on smartphones using a multiclass hardware-friendly support vector machine", "venue": "In Proceedings of the 4th International Conference on Ambient Assisted Living and Home Care,", "year": 2012 }, { "authors": [ "Martin Arjovsky", "Amar Shah", "Yoshua Bengio" ], "title": "Unitary evolution recurrent neural networks", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "David Balduzzi", "Muhammad Ghifary" ], "title": "Strongly-typed recurrent neural networks", "venue": "In Proceedings of The 33rd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Y. Bengio", "P. Simard", "P. Frasconi" ], "title": "Learning long-term dependencies with gradient descent is difficult", "venue": "Trans. Neur. Netw.,", "year": 1994 }, { "authors": [ "Yoshua Bengio", "Nicolas Boulanger-Lewandowski", "Razvan Pascanu" ], "title": "Advances in optimizing recurrent networks", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing,", "year": 2013 }, { "authors": [ "Víctor Campos", "Brendan Jou", "Xavier Giró-i Nieto", "Jordi Torres", "Shih-Fu Chang" ], "title": "Skip rnn: Learning to skip state updates in recurrent neural networks", "venue": "arXiv preprint arXiv:1708.06834,", "year": 2017 }, { "authors": [ "Bo Chang", "Minmin Chen", "Eldad Haber", "Ed H. Chi" ], "title": "AntisymmetricRNN: A dynamical system view on recurrent neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Shiyu Chang", "Yang Zhang", "Wei Han", "Mo Yu", "Xiaoxiao Guo", "Wei Tan", "Xiaodong Cui", "Michael Witbrock", "Mark A Hasegawa-Johnson", "Thomas S Huang" ], "title": "Dilated recurrent neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Tian Qi Chen", "Yulia Rubanova", "Jesse Bettencourt", "David K Duvenaud" ], "title": "Neural ordinary differential equations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Kyunghyun Cho", "Bart Van Merriënboer", "Dzmitry Bahdanau", "Yoshua Bengio" ], "title": "On the properties of neural machine translation: Encoder-decoder approaches", "venue": "arXiv preprint arXiv:1409.1259,", "year": 2014 }, { "authors": [ "Jasmine Collins", "Jascha Sohl-Dickstein", "David Sussillo" ], "title": "Capacity and Trainability in Recurrent Neural Networks", "venue": "arXiv e-prints, art", "year": 2016 }, { "authors": [ "Tim Cooijmans", "Nicolas Ballas", "César Laurent", "Çağlar Gülçehre", "Aaron Courville" ], "title": "Recurrent batch normalization", "venue": "arXiv preprint arXiv:1603.09025,", "year": 2016 }, { "authors": [ "Ron S Dembo", "Stanley C Eisenstat", "Trond Steihaug" ], "title": "Inexact newton methods", "venue": "SIAM Journal on Numerical analysis,", "year": 1982 }, { "authors": [ "Chengyue Gong", "Di He", "Xu Tan", "Tao Qin", "Liwei Wang", "Tie-Yan Liu" ], "title": "Frage: frequency-agnostic word representation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Alex Graves" ], "title": "Adaptive computation time for recurrent neural 
networks", "venue": "CoRR, abs/1603.08983,", "year": 2016 }, { "authors": [ "Michiel Hermans", "Benjamin Schrauwen" ], "title": "Training and analysing deep recurrent neural networks", "venue": "Advances in Neural Information Processing Systems", "year": 2013 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Chiori Hori", "Takaaki Hori", "Teng-Yok Lee", "Ziming Zhang", "Bret Harsham", "John R Hershey", "Tim K Marks", "Kazuhiko Sumi" ], "title": "Attention-based multimodal fusion for video description", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Herbert Jaeger", "Mantas Lukosevicius", "Dan Popovici", "Udo Siewert" ], "title": "Optimization and applications of echo state networks with leaky-integrator neurons", "venue": "Neural networks : the official journal of the International Neural Network Society, 20:335–52,", "year": 2007 }, { "authors": [ "Li Jing", "Yichen Shen", "Tena Dubcek", "John Peurifoy", "Scott Skirlo", "Yann LeCun", "Max Tegmark", "Marin Soljačić" ], "title": "Tunable efficient unitary neural networks (eunn) and their application to rnns", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In ICML,", "year": 2015 }, { "authors": [ "Aditya Kusupati", "Manish Singh", "Kush Bhatia", "Ashish Kumar", "Prateek Jain", "Manik Varma" ], "title": "Fastgrnn: A fast, accurate, stable and tiny kilobyte sized gated recurrent neural network", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yann Lecun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "In Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Tao Lei", "Yu Zhang", "Sida I. Wang", "Hui Dai", "Yoav Artzi" ], "title": "Simple recurrent units for highly parallelizable recurrence", "venue": "In Empirical Methods in Natural Language Processing (EMNLP),", "year": 2018 }, { "authors": [ "James Martens", "Ilya Sutskever" ], "title": "Learning recurrent neural networks with hessian-free optimization", "venue": "In Proceedings of the 28th International Conference on Machine Learning", "year": 2011 }, { "authors": [ "Zakaria Mhammedi", "Andrew D. 
Hellicar", "Ashfaqur Rahman", "James Bailey" ], "title": "Efficient orthogonal parametrisation of recurrent neural networks using householder reflections", "venue": "CoRR, abs/1612.00188,", "year": 2016 }, { "authors": [ "Asier Mujika", "Florian Meier", "Angelika Steger" ], "title": "Fast-slow recurrent neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Murphy Yuezhen Niu", "Lior Horesh", "Isaac Chuang" ], "title": "Recurrent neural networks in the eye of differential equations", "venue": "arXiv preprint arXiv:1904.12933,", "year": 2019 }, { "authors": [ "Razvan Pascanu", "Caglar Gulcehre", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "How to construct deep recurrent neural networks", "venue": "arXiv preprint arXiv:1312.6026,", "year": 2013 }, { "authors": [ "Razvan Pascanu", "Tomas Mikolov", "Yoshua Bengio" ], "title": "On the difficulty of training recurrent neural networks", "venue": "In International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Razvan Pascanu", "Çaglar Gülçehre", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "How to construct deep recurrent neural networks", "venue": "CoRR, abs/1312.6026,", "year": 2013 }, { "authors": [ "Jeffrey Pennington", "Samuel Schoenholz", "Surya Ganguli" ], "title": "Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "F. Rosenblatt" ], "title": "Principles of neurodynamics", "venue": "Spartan Books,", "year": 1962 }, { "authors": [ "Yulia Rubanova", "Ricky T.Q. Chen", "David Duvenaud" ], "title": "Latent odes for irregularly-sampled time series", "venue": "CoRR, abs/1907.03907,", "year": 2019 }, { "authors": [ "Sachin S Talathi", "Aniket Vartak" ], "title": "Improving performance of recurrent neural network with relu nonlinearity", "venue": "arXiv preprint arXiv:1511.03771,", "year": 2015 }, { "authors": [ "Eugene Vorontsov", "Chiheb Trabelsi", "Samuel Kadoury", "Chris Pal" ], "title": "On orthogonality and learning recurrent networks with long term dependencies", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Pete Warden" ], "title": "Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition", "venue": "arXiv e-prints, art", "year": 2018 }, { "authors": [ "Scott Wisdom", "Thomas Powers", "John Hershey", "Jonathan Le Roux", "Les Atlas" ], "title": "Fullcapacity unitary recurrent neural networks", "venue": "Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Jiong Zhang", "Qi Lei", "Inderjit S. Dhillon" ], "title": "Stabilizing gradients for deep neural networks via efficient svd parameterization", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Jiong Zhang", "Qi Lei", "Inderjit S Dhillon" ], "title": "Stabilizing gradients for deep neural networks via efficient SVD parameterization", "venue": "arXiv preprint arXiv:1803.09327,", "year": 2018 }, { "authors": [ "Julian Georg Zilly", "Rupesh Kumar Srivastava", "Jan Koutník", "Jürgen Schmidhuber" ], "title": "Recurrent highway networks", "venue": "In ICML,", "year": 2017 } ]
[ { "heading": null, "text": "Recurrent neural networks (RNNs) are particularly well-suited for modeling longterm dependencies in sequential data, but are notoriously hard to train because the error backpropagated in time either vanishes or explodes at an exponential rate. While a number of works attempt to mitigate this effect through gated recurrent units, skip-connections, parametric constraints and design choices, we propose a novel incremental RNN (iRNN), where hidden state vectors keep track of incremental changes, and as such approximate state-vector increments of Rosenblatt’s (1962) continuous-time RNNs. iRNN exhibits identity gradients and is able to account for long-term dependencies (LTD). We show that our method is computationally efficient overcoming overheads of many existing methods that attempt to improve RNN training, while suffering no performance degradation. We demonstrate the utility of our approach with extensive experiments and show competitive performance against standard LSTMs on LTD and other non-LTD tasks." }, { "heading": "1 INTRODUCTION", "text": "Recurrent neural networks (RNNs) in each round store a hidden state vector, hm ∈ RD, and upon receiving the input vector, xm+1 ∈ Rd, linearly transform the tuple (hm, xm+1) and pass it through a memoryless non-linearity to update the state over T rounds. Subsequently, RNNs output an affine function of the hidden states as its prediction. The model parameters (state/input/prediction parameters) are learnt by minimizing an empirical loss. This seemingly simple update rule has had significant success in learning complex patterns for sequential input data.\nNevertheless, that training RNNs can be challenging, and that performance can be uneven on tasks that require long-term-dependency (LTD), was first noted by Hochreiter (1991), Bengio et al. (1994) and later by other researchers. Pascanu et al. (2013b) attributed this to the fact that the error gradient back-propagated in time (BPTT), for the time-step m, is dominated by product of partials of hiddenstate vectors, ∏T−1 j=m ∂hj+1 ∂hj\n, and these products typically exhibit exponentially vanishing decay or explosion, resulting in incorrect credit assignment during training and test-time.\nRosenblatt (1962), on whose work we draw inspiration from, introduced continuous-time RNN (CTRNN) to mimic activation propagation in neural circuitry. CTRNN dynamics evolves as follows:\nτ ġ(t) = −αg(t) + φ(Ug(t) +Wx(t) + b), t ≥ t0. (1)\nHere, x(t) ∈ Rd is the input signal, g(t) ∈ RD is the hidden state vector of D neurons, ġi(t) is the rate of change of the i-th state component; τ, α ∈ R+, referred to as the post-synaptic time-constant, impacts the rate of a neuron’s response to the instantaneous activation φ(Ug(t) +Wx(t) + b); and U ∈ RD×D, W ∈ RD×d, b ∈ RD are model parameters. In passing, note that recent RNN works that draw inspiration from ODE’s (Chang et al., 2019) are special cases of CTRNN (τ = 1, α = 0).\nVanishing Gradients. The qualitative aspects of the CTRNN dynamics is transparent in its integral form:\ng(t) = e−α t−t0 τ g(t0) + 1\nτ ∫ t t0 e−α t−s τ φ(Ug(s) +Wx(s) + b)ds (2)\nThis integral form reveals that the partials of hidden-state vector with respect to the initial condition, ∂g(t) ∂g(t0)\n, gets attenuated rapidly (first term in RHS), and so we face a vanishing gradient problem. We will address this issue later but we note that this is not an artifact of CTRNN but is exhibited by ODEs that have motivated other RNNs (see Sec. 2).\nShannon-Nyquist Sampling. 
Shannon-Nyquist Sampling. A key property of CTRNN is that the time-constant τ, together with the first term −αg(t), acts in effect as a low-pass filter with bandwidth ατ^{−1}, suppressing high-frequency components of the activation signal φ(Ug(s) + Wx(s) + b). This is good, because, by virtue of the Shannon-Nyquist sampling theorem, we can now maintain fidelity of discrete samples with respect to the continuous-time dynamics, in contrast to conventional ODEs (α = 0). Additionally, since high frequencies are already suppressed, in effect we may assume that the input signal x(t) is slowly varying relative to the post-synaptic time constant τ.

Equilibrium. The combination of low-pass filtering and slowly time-varying input has a significant bearing. The state vector, as well as the discrete samples, evolve close to the equilibrium state, i.e., g(t) ≈ φ(Ug(t) + Wx(t) + b) under general conditions (Sec. 3).

Incremental Updates. Whether or not the system is in equilibrium, the integral form in Eq. 2 points to gradient attenuation as a fundamental issue. To overcome this situation, we store and process increments rather than the cumulative values g(t), and propose dynamic evolution in terms of increments. Let us denote the hidden state sequence as hm ∈ RD and the input sequence as xm ∈ Rd. For m = 1, 2, . . . , T, and a suitable β > 0,

τ ġ(t) = −α(g(t) ± hm−1) + φ(U(g(t) ± hm−1) + Wxm + b), g(0) = 0, t ≥ 0 (3)
hm ≜ hm^{β·τ} ≜ g(β · τ)

Intuitively, say the system is in equilibrium and −α µ(xm, hm−1) + φ(U µ(xm, hm−1) + Wxm + b) = 0. We note that state transitions are marginal changes from previous states, namely, hm = µ(xm, hm−1) − hm−1. Now, for a fixed input xm, which equilibrium is reached depends on hm−1, but the equilibria are nevertheless finitely many. So encoding marginal changes as states leads to an "identity" gradient.

Incremental RNN (iRNN) achieves Identity Gradient. We propose to discretize Eq. 3 to realize iRNN (see Sec. 3). At time m, it takes the previous state hm−1 ∈ RD and input xm ∈ Rd and outputs hm ∈ RD after simulating the CTRNN evolution in discrete time, for a suitable number of discrete steps. We show that the proposed RNN approximates the continuous dynamics and solves the vanishing/exploding gradient issue by ensuring identity gradients. In general, we consider two options: SiRNN, whose state is updated with a single CTRNN sample, similar to vanilla RNNs, and iRNN, with many intermediate samples. SiRNN is well-suited for slowly varying inputs.

Contributions. To summarize, we list our main contributions: (A) iRNN converges to equilibrium for typical activation functions. The partial gradients of hidden-state vectors for iRNNs converge to identity, thus solving the vanishing/exploding gradient problem! (B) iRNN converges rapidly, at an exponential rate in the number of discrete samplings of Eq. 1. SiRNN, the single-step iRNN, is efficient and can be leveraged for slowly varying input sequences. It exhibits fast training time, has fewer parameters, and has better accuracy relative to standard LSTMs. (C) Extensive experiments on LTD datasets show that we improve upon standard LSTM accuracy as well as other recent proposals that are based on designing transition matrices and/or skip connections. iRNNs/SiRNNs are robust to time-series distortions such as noise padding. (D) While our method extends directly (see Appendix A.1) to Deep RNNs, we deem these extensions complementary, and focus on the single-layer case to highlight our incremental perspective." }, { "heading": "2 RELATED WORK", "text": "Gated Architectures. 
Long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) is widely used in RNNs to model long-term dependency in sequential data. The gated recurrent unit (GRU) (Cho et al., 2014) is another gating mechanism that has been demonstrated to achieve performance similar to LSTM with fewer parameters. Some recent gated RNNs include UGRNN (Collins et al., 2016) and FastGRNN (Kusupati et al., 2018). While mitigating vanishing/exploding gradients, they do not eliminate them. Often, these models incur increased inference and training costs, and larger model size.

Unitary RNNs. Arjovsky et al. (2016); Jing et al. (2017); Zhang et al. (2018); Mhammedi et al. (2016) focus on designing well-conditioned state transition matrices, attempting to enforce a unitary property during training. The unitary property does not generally circumvent vanishing gradients (Pennington et al., 2017). It also limits expressive power and prediction accuracy while increasing training time.

Deep RNNs. These are nonlinear transition functions incorporated into RNNs for performance improvement. For instance, Pascanu et al. (2013a) empirically analyzed the problem of how to construct deep RNNs. Zilly et al. (2017) proposed extending the LSTM architecture to allow step-to-step transition depths larger than one. Mujika et al. (2017) proposed incorporating the strengths of both multiscale RNNs and deep transition RNNs to learn complex transition functions. While Deep RNNs offer richer representations relative to single layers, they are complementary to iRNNs.

Residual/Skip Connections. Jaeger et al. (2007); Bengio et al. (2013); Chang et al. (2017); Campos et al. (2017); Kusupati et al. (2018) feed-forward state vectors to induce skip or residual connections, to serve as a middle ground between feed-forward and recurrent models, and to mitigate gradient decay. Nevertheless, these connections cannot entirely eliminate gradient explosion/decay. For instance, Kusupati et al. (2018) suggest hm = αm hm−1 + βm φ(Uhm−1 + Wxm + b), and learn parameters so that αm ≈ 1 and βm ≈ 0. While this setting can lead to an identity gradient, observe that setting βm ≈ 0 implies little contribution from the inputs and can conflict with good accuracy, as also observed in our experiments.

Linear RNNs. (Bradbury et al., 2016; Lei et al., 2018; Balduzzi & Ghifary, 2016) focus on speeding up RNNs by replacing recurrent connections, such as hidden-to-hidden interactions, with lightweight linear components. This reduces training time, but results in significantly increased model size. For example, Lei et al. (2018) require twice the number of cells for LSTM-level performance.

ODE/Dynamical Perspective. A few ODE-inspired architectures attempt to address stability, but do not end up eliminating vanishing/exploding gradients. Talathi & Vartak (2015) proposed a modified weight initialization strategy, based on a dynamical systems perspective, to successfully train RNNs composed of ReLUs. Niu et al. (2019) analyzed RNN architectures using numerical methods for ODEs and proposed a family of ODE-RNNs. Chang et al. (2019) propose Antisymmetric-RNN. Their key idea is to express the transition matrix in Eq. 1, for the special case α = 0, τ = 1, as a difference, U = V − V⊤, and note that the eigenspectrum is imaginary. Nevertheless, Euler discretization in this context leads to instability, necessitating damping of the system. As such, the vanishing gradient cannot be completely eliminated. Its behavior is analogous to FastRNN Kusupati et al. 
(2018), in that the identity gradient conflicts with high accuracy. In summary, we are the first to propose evolution over the equilibrium manifold and to demonstrate identity gradients. Neural ODEs (Chen et al., 2018; Rubanova et al., 2019) have also been proposed for time-series prediction to deal with irregularly sampled inputs. They parameterize the derivative of the hidden-state in terms of an autonomous differential equation and let the ODE evolve in continuous time until the next input arrives. This is not our goal: our ODE explicitly depends on the input, and evolves until equilibrium for that input is reached. We introduce incremental updates to bypass vanishing/exploding gradient issues, which is not of specific concern for these works." }, { "heading": "3 METHOD", "text": "We use Euler's method to discretize Eq. 3 in steps δ = ητ. Denoting the kth step as gk = g(kδ),

τ (gk − gk−1)/δ = −α(gk−1 + hm−1) + φ(U(gk−1 + hm−1) + Wxm + b), k ∈ [K] (4)

Rearranging terms, we get a compact form for iRNN (see Fig. 1). In addition, we introduce a learnable parameter η_m^k and let it be a function of the time m and the recursion-step k:

gk = gk−1 + η_m^k (φ(U(gk−1 + hm−1) + Wxm + b) − α(gk−1 + hm−1)), k ∈ [K] (5)

hm^K = gK

We run the recursion for k ∈ [K] with some suitable initial condition. This could be g0 = 0, or g0 initialized to the previous state, i.e., g0 = hm−1 at time m.

In many of our examples, we find the input sequence is slowly varying, and K = 1 can also realize good empirical performance. We refer to this as the single-step incremental RNN (SiRNN):

hm^1 = g0 + ηm (φ(U(g0 + hm−1) + Wxm + b) − α(hm−1 + g0)) (6)

For both iRNN and SiRNN we drop the superscript whenever it is clear from the context.

Root Finding and Transitions. The two indices k and m should not be confused. The index m ∈ [T] refers to the time index, and indexes the input xm and hidden state hm over the time horizon T. The index k ∈ [K] is a fixed-point recursion for converging to the equilibrium solution at each time m, given input xm and the hidden state hm−1. We iterate over k so that at k = K, gK satisfies

φ(U(gK + hm−1) + Wxm + b) − α(gK + hm−1) ≈ 0

The recursion (Eq. 5) at time m runs for K rounds, terminates, and the recursion is reset for the new input, xm+1. Indeed, Eq. 5 is a standard root-finding recursion, with gk−1 serving as the previous solution, plus a correction term, which is the error φ(U(gk−1 + hm−1) + Wxm + b) − α(gk−1 + hm−1). If the sequence converges, the resulting solution is the equilibrium point. Proposition 2 guarantees a geometric rate of convergence.

Identity Gradient. We will informally (see Theorem 1) show here that partial gradients are identity. Say, for sufficiently large K, hm = gK is the equilibrium solution. It follows that

φ(U(hm + hm−1) + Wxm + b) − α(hm + hm−1) = 0

Taking derivatives, we have,

∇φ(·)U (∂hm/∂hm−1 + I) − α (∂hm/∂hm−1 + I) = 0 =⇒ (∇φ(·)U − αI)(∂hm/∂hm−1 + I) = 0. (7)

Thus, if the matrix (∇φ(·)U − αI) is not singular, it follows that (∂hm/∂hm−1 + I) = 0.

SiRNN vs. iRNN. SiRNN approximates iRNN. In particular, say xm is constant in the segment m ∈ [m0, m0 + K]; then the SiRNN trajectory of hidden states, denoted h_{m0+K}^1, is equal to the iRNN hidden state h_{m0}^K, when both SiRNN and iRNN are initialized with g0 = hm−1. Thus, for slowly time-varying inputs we can expect SiRNN to closely approximate iRNN.
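For concreteness, here is a minimal sketch of the transition in Eq. 5 and its single-step variant Eq. 6 (our own NumPy illustration, not the authors' released code; the ReLU activation matches the experimental section, while shapes and constants are placeholders):

import numpy as np

def irnn_cell(h_prev, x, U, W, b, eta, alpha=1.0, K=5):
    # One iRNN transition (Eq. 5): K fixed-point steps toward the root of
    # phi(U(g + h_prev) + W x + b) - alpha (g + h_prev) = 0.
    g = np.zeros_like(h_prev)  # g0 = 0 (g0 = h_prev is the other option)
    for k in range(K):
        err = np.maximum(U @ (g + h_prev) + W @ x + b, 0.0) - alpha * (g + h_prev)
        g = g + eta[k] * err  # eta[k] plays the role of the learnable eta_m^k
    return g  # h_m = g_K; K = 1 recovers SiRNN (Eq. 6)

# Toy usage: roll the cell over a short random sequence.
rng = np.random.default_rng(0)
D, d, T = 32, 8, 20
U, W = 0.1 * rng.standard_normal((D, D)), 0.1 * rng.standard_normal((D, d))
b, eta, h = np.zeros(D), 0.01 * np.ones(5), np.zeros(D)
for x in rng.standard_normal((T, d)):
    h = irnn_cell(h, x, U, W, b, eta)

In training, eta would be a learnable parameter per time step and recursion step, and h feeds the next transition exactly as in Algorithm 1 of Appendix A.2.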
Residual Connections vs. iRNN/SiRNN. As such, our architecture is a special case of skip/residual connections. Nevertheless, unlike skip connections, our connections are structured, and the dynamics driven by the error term ensure that the hidden state is associated with equilibrium and leads to identity gradients. No such guarantees are possible with unstructured skip connections. Note that for slowly varying inputs, after a certain transition-time period, we should expect SiRNN to be close to equilibrium as well. Without this imposed structure, general residual architectures can learn patterns that can be dramatically different (see Fig. 2)." }, { "heading": "3.1 IDENTITY GRADIENT PROPERTY AND CONVERGENCE GUARANTEES.", "text": "Let us now collect a few properties of Eq. 3 and Eq. 5. First, denote the equilibrium solutions for an arbitrary input x ∈ Rd and an arbitrary state-vector ν ∈ RD, in an arbitrary round:

Meq(x, ν) = {µ ∈ RD | α(µ + ν) = φ(U(µ + ν) + Wx + b)}

Whenever the equilibrium set is a singleton, we denote it as a function heq(x, ν). For simplicity, we assume below that η_k^i is a positive constant η, independent of k and i.

Proposition 1. Suppose φ(·) is a 1-Lipschitz function in the norm induced by ‖ · ‖, and ‖U‖ < α. Then for any xm ∈ Rd and hm−1 ∈ RD, it follows that Meq(x, ν) is a singleton and, as K → ∞, the iRNN recursions converge to this solution, namely, hm = lim_{K→∞} gK = heq(xm, hm−1).

Proof. Define T : RD → RD, with T(g) = (1 − ηα)g + η(φ(U(g + hm−1) + Wxm + b) − αhm−1). It follows that T(·) is a contraction:

‖T(g) − T(g′)‖ ≤ (1 − ηα)‖g − g′‖ + η‖φ(U(g + hm−1) + Wxm + b) − φ(U(g′ + hm−1) + Wxm + b)‖ ≤ (1 − ηα + ‖U‖η)‖g − g′‖ < ‖g − g′‖.

We now invoke the Banach fixed point theorem, which asserts that a contractive operator on a complete metric space converges to a unique fixed point, namely, T^K(g) → g∗. Upon substitution, we see that this point g∗ must be such that φ(U(g∗ + hm−1) + Wxm + b) − α(g∗ + hm−1) = 0. Thus the equilibrium point exists and is unique. The result follows by setting hm ≜ heq(xm, hm−1).

Handling ‖U‖ ≤ α. In experiments, we set α = 1 and do not enforce the ‖U‖ ≤ α constraint. Instead, we initialize U as a Gaussian matrix with IID, mean-zero, small-variance components. As such, the matrix norm is smaller than 1. Evidently, the resulting learnt U matrix does not violate this condition.

Next we show that for η > 0, iRNN converges at a linear rate, which follows directly from Proposition 1.

Proposition 2. Under the setup in Proposition 1, it follows that,

‖hm^K − heq(xm, hm−1)‖ ≜ ‖gK − heq(xm, hm−1)‖ ≤ (1 − αη + η‖U‖)^K ‖g1 − heq(xm, hm−1)‖

Remark. Proposition 1 accounts for typical activation functions (ReLU, tanh, sigmoids) as well as deep RNNs (appendix A.1).
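The linear rate in Proposition 2 is easy to check numerically. Below is a small sketch (our own, in the spirit of the verification described in Appendix A.7.5; all sizes and constants are illustrative) that runs the recursion of Eq. 5 with a constant η and compares each iterate against a fixed point obtained with SciPy's FSOLVE:

import numpy as np
from scipy.optimize import fsolve

rng = np.random.default_rng(1)
D, d, alpha, eta = 16, 4, 1.0, 0.5
U = 0.1 * rng.standard_normal((D, D))  # ||U|| < alpha by construction
W = 0.1 * rng.standard_normal((D, d))
b, h_prev, x = np.zeros(D), rng.standard_normal(D), rng.standard_normal(d)

F = lambda g: np.tanh(U @ (g + h_prev) + W @ x + b) - alpha * (g + h_prev)
g_star = fsolve(F, np.zeros(D))  # the equilibrium h_eq(x, h_prev)

g, errs = np.zeros(D), []
for k in range(30):
    g = g + eta * F(g)  # Eq. 5 with constant eta
    errs.append(np.linalg.norm(g - g_star))
print(errs[::5])  # successive ratios stay below 1 - alpha*eta + eta*||U||

tanh is used here because it is 1-Lipschitz, satisfying the hypothesis of Proposition 1.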
In passing we point out that, in our experiments, we learn the parameters η_m^k, and a result that accounts for this case is desirable. We describe this case in Appendix A.3. A fundamental result we describe below is that the partials of hidden-state vectors on the equilibrium surface are identity. For technical simplicity, we assume a continuously differentiable activation, which appears to exclude ReLU activations. Nevertheless, we can overcome this issue, but it requires more technical arguments. The main difficulty stems from ensuring that derivatives along the equilibrium surface exist, and this can be realized by invoking the implicit function theorem (IFT). The IFT requires continuous differentiability, which ReLUs violate. Nevertheless, recent results¹ suggest that one can state the implicit function theorem for everywhere differentiable functions, which include ReLUs.

Theorem 1. Suppose φ(·) is a continuously differentiable, 1-Lipschitz function, with ‖U‖ < α. Then as K → ∞, ∂hm/∂hm−1 → ∂heq(xm, hm−1)/∂hm−1 = −I. Furthermore, as K → ∞ the partial gradients over an arbitrary number of rounds for iRNN are identity:

∂hr/∂hs = ∏_{r ≥ m > s} ∂hm/∂hm−1 = (−1)^{r−s} I ⇒ ‖∂hr/∂hs‖ = 1. (8)

Proof. Define ψ(g, hm−1) = φ(U(g + hm−1) + Wxm + b) − α(g + hm−1). We overload notation and view the equilibrium point as a function of hm−1, i.e., g∗(hm−1) = heq(xm, hm−1). Invoking standard results² in ODEs, it follows that g∗(hm−1) is a smooth function, so long as the Jacobian ∇gψ(g∗, hm−1) with respect to the first coordinate, g∗, is non-singular. Upon computation, we see that ∇gψ(g∗, hm−1) = ∇φ(g∗, hm−1)U − αI is non-singular, since ‖∇φ(g∗, hm−1)U‖ ≤ ‖U‖. It follows that we can take partials of the state-vectors. By taking the partial derivatives w.r.t. hm−1 in Eq. 5, at the equilibrium points we have [∇φ(g∗, hm−1)U − αI][∂g∗/∂hm−1 + I] = 0 (see Eq. 7). The rest of the proof follows by observing that the first term is non-singular.

¹ terrytao.wordpress.com/2011/09/12/the-inverse-function-theorem-for-everywhere-differentiable-maps/
² http://cosweb1.fau.edu/~jmirelesjames/ODE_course/lectureNotes_shortVersion_day1.pdf

Remark. We notice that replacing hm−1 with −hm−1 in Eq. 12 will lead to ∂heq/∂hm−1 = I, which also has no impact on the magnitudes of gradients. As a result, both choices are suitable for circumventing vanishing or exploding gradients during training, but may still converge to different local minima and thus result in different test-time performance. Furthermore, notice that the norm-preserving property is somewhat insensitive to the choice of α, so long as the non-singularity condition is satisfied." }, { "heading": "3.2 IRNN DESIGN IMPLICATIONS: LOW-RANK MODEL PARAMETRIZATION", "text": "Fig. 2 depicts the phase portrait and illustrates salient differences between RNN, FastRNN (RNN with skip connection), and iRNN (K = 5). RNN and FastRNN exhibit complex trajectories, while the iRNN trajectory is smooth, projecting the initial point (black circle) onto the equilibrium surface (blue) and moving within it (green). This suggests that the iRNN trajectory belongs to a low-dimensional manifold.

Variation of Equilibrium w.r.t. Input. As before, let heq be an equilibrium solution for some tuple (hm−1, xm). It follows that,

(αI − ∇φ(U(heq + hm−1) + Wxm + b)U) ∂heq = ∇φ(U(heq + hm−1) + Wxm + b) W ∂xm

This suggests that, whenever the input undergoes a slow variation, we expect the equilibrium point to move in such a way that U∂heq must lie in a transformed span of W. Now W ∈ RD×d with d ≪ D, which implies that (αI − ∇φ(U(heq + hm−1) + Wxm + b)U) is rank-deficient.

Low-Rank Matrix Parameterization. For typical activation functions, note that whenever the argument is in the unsaturated regime, ∇φ(·) ≈ I. We then approximately get span(αI − U) ≈ span(W). We can express these constraints as U = αI + V H with low-rank matrices V ∈ RD×d1, H ∈ Rd1×D, and further map both Uhm and Wxm onto a shared space. Since in our experiments the signal vectors we encounter are low-dimensional, and sequential inputs vary slowly over time, we enforce this restriction in all our experiments. In particular, we consider,

φ(P[U(hm + hm−1) + Wxm + b]) − (hm + hm−1) = 0. (9)

The parameter matrix P ∈ RD×D maps the contributions from the input and hidden states onto the same space. To decrease the model size we let P = U = (I + V H) and learn these parameters.
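A sketch of this parameterization (ours; d1 and all shapes are illustrative) shows how Eq. 9 keeps the model small:

import numpy as np

rng = np.random.default_rng(4)
D, d, d1 = 128, 32, 16
V = 0.01 * rng.standard_normal((D, d1))
H = 0.01 * rng.standard_normal((d1, D))
U = np.eye(D) + V @ H  # low-rank U = I + V H (alpha = 1), also reused as P
W = 0.1 * rng.standard_normal((D, d))
b = np.zeros(D)

def eq9_residual(g, h_prev, x):
    # Residual of Eq. 9 with P = U: phi(P[U(g + h_prev) + W x + b]) - (g + h_prev)
    z = U @ (g + h_prev) + W @ x + b
    return np.maximum(U @ z, 0.0) - (g + h_prev)

# Parameter count: 2*D*d1 + D*d for the low-rank form vs. D*D for a dense U.
print(2 * D * d1 + D * d, D * D)

Driving this residual to zero with the recursion of Eq. 5 gives the parametrized update referenced in the Implementation paragraph of Sec. 4.1.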
" }, { "heading": "4 EXPERIMENTS", "text": "We organize this section as follows. First, we describe the experimental setup and the competing algorithms. Then we present an ablative analysis to highlight salient aspects of iRNN and justify some of our experimental choices. We then plot and tabulate experimental results on benchmark datasets." }, { "heading": "4.1 EXPERIMENTAL SETUP AND BASELINES", "text": "Choice of Competing Methods: We choose competing methods based on the following criteria: (a) methods that are devoid of additional application- or dataset-specific heuristics, (b) methods that leverage only a single cell/block/layer, and (c) methods without the benefit of complementary add-ons (such as gating, advanced regularization, model compression, etc.). Requiring (a) is not controversial since our goal is methodological. Conditions (b) and (c) are justifiable since we could also leverage these add-ons, and they are not germane to any particular method³. We benchmark iRNN against standard RNN, LSTM (Hochreiter & Schmidhuber, 1997), (ungated) AntisymmetricRNN (Chang et al., 2019), and (ungated) FastRNN (Kusupati et al., 2018).

³ These conditions eliminate some potential baselines. We provide specific justifications in appendix A.5.

[Figure 3 caption fragment: ‖∂hT/∂hT−1‖ and loss gradients are omitted but displayed in A.7.8. For iRNN, (a) and (b) together show a strong correlation of gradient with accuracy, in contrast to other methods.]

Unitary RNN Variants. Results for methods based on unitary transitions (such as Arjovsky et al. (2016); Wisdom et al. (2016); Vorontsov et al. (2017); Zhang et al. (2018)) are not reported in the main paper (when available, they are reported in the appendix) for the following reasons: (a) they are substantially more expensive, requiring large model sizes; (b) apart from the benchmark copy and add tasks, results tabulated by the FastRNN and Antisymmetric authors (see Zhang et al. (2018); Chang et al. (2019)) show that they are well below SOTA; (c) iRNN dominates unitary-RNN variants on the add task (see Sec. 4.3.1); (d) on the copy task, while unitary variants are superior, Vorontsov et al. (2017) attribute this to modReLU or leaky ReLU activations. Leaky ReLUs allow for linear transitions, and the copy task, being a memory task, benefits from this. With hard non-linear activations, unitary RNN variants can take up to 1000's of epochs for even 100-length sequences (Vorontsov et al., 2017).

Implementation. For all our experiments, we used the parametrized update formulation in Eq. 9 for iRNN. We used the tensorflow framework for our experiments. For most competing methods, apart from AntisymmetricRNN, which we implemented, code is publicly available. All the experiments were run on an Nvidia GTX 1080 GPU with CUDA 9 and cuDNN 7.0, on a machine with an Intel Xeon 2.60 GHz CPU with 20 cores.

Datasets. Pre-processing and feature extraction details for all publicly available datasets are in appendix A.4. We replicate the benchmark test/train split, with 20% of the training data held out for validation to tune hyperparameters. Reported results are based on the full training set and the performance achieved on the publicly available test set. Table 4 (Appendix) and A.4 describe details for all the data sets.

Hyper Parameters. We used grid search and fine-grained validation wherever possible to set the hyper-parameters of each algorithm, or followed the settings published in (Kusupati et al., 2018; Arjovsky et al., 2016) (e.g., the number of hidden states). Both the learning rate and the η's were initialized to 10−2. A batch size of 128 seems to work well across all the data sets. We used ReLU as the non-linearity and Adam (Kingma & Ba, 2015) as the optimizer for all the experiments.
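For reference, the shared configuration just described can be summarized in one place (a sketch of ours; the key names are arbitrary, the values come from the paragraph above):

config = {
    "optimizer": "Adam",         # Kingma & Ba (2015)
    "learning_rate": 1e-2,       # initial value
    "eta_init": 1e-2,            # initialization of the learnable eta's
    "batch_size": 128,
    "activation": "relu",
    "validation_fraction": 0.2,  # carved out of the training split
}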
}, { "heading": "4.2 ABLATIVE ANALYSIS", "text": "We perform ablative analysis on the benchmark add-task (Sec 4.3.1) for sequence length 200 for 1000 iterations and explore mean-squared error as a metric. Fig. 3 depicts salient results.\n(a) Identity Gradients & Accuracy: iRNN accuracy is correlated with identity gradients. Increasing K improves gradients, and correlates with increased accuracy (Fig. 3). While other models ht = αht−1 + βφ((U − γI)ht−1 + Wxt), can realize identity gradients for suitable choices; linear (α = 1, β = 1, γ = 0, U = 0), FastRNN (α ≈ 1, β ≈ 0, γ = 0) and Antisymmetric (α = 1, β = 1, U = V − V T , ‖U‖ ≤ γ), this goal may not be correlated with improved test accuracy. FastRNN(η = 0.001), Antisymmetric (γ = 0.01, = 0.001) have good gradients but poorer test accuracy relative to FastRNN(η = 0.01), Antisymmetric(γ = 0.01, = 0.1), with poorer gradients.\n(b) Identity gradient implies faster convergence: Identity gradient, whenever effective, must be capable of assigning credit to the informative parts, which in turn results in larger loss gradients, and significantly faster convergence with number of iterations. This is borne out in figure 3(a). iRNN for larger K is closer to identity gradient with fewer (unstable) spikes (K = 1, 5, 10). With K = 10, iRNN converges within 300 iterations while competing methods take about twice this time (other baselines not included here exhibited poorer performance than the once plotted).\n(c) SiRNN (iRNN with K = 1 delivers good performane in some cases. Fig. 3(a) illustrates that iRNN K = {5, 10} achieves faster convergence than SiRNN, but the computational overhead per iteration roughly doubles or triples in comparison. SiRNN is faster relative to competitors. For this reason, we sometimes tabulate only SiRNN, whenever it is SOTA in benchmark experiments, since accuracy improves with K but requires higher overhead." }, { "heading": "4.3 LONG-TERM DEPENDENCY AND OTHER TASKS", "text": "We list five types of datasets, all of which in some way require effective gradient propagation: (1) Conventional Benchmark LTD tasks (Add & Copy tasks) that illustrate that iRNN can rapidly learn long-term dependence; (2) Benchmark vision tasks (pixel MNIST, perm-MNIST) that may not require long-term, but nevertheless, demonstrates that iRNN achieves SOTA for short term dependencies but with less resources. (3) Noise Padded (LTD) Vision tasks (Noisy MNIST, Noisy CIFAR), where a large noise time segment separates information segments and the terminal state, and so the learner must extract information parts while rejecting the noisy parts; (4) short duration activity embedded in a larger time-window (HAR-2, Google-30 in Appendix Table 4 and many others A.7), that usually arise in the context of smart IoT applications and require a small model-size footprint. Chang et al. (2019) further justify (3) and (4) as LTD, because for these datasets where only a smaller unknown segment(s) of a longer sequence is informative. (5) Sequence-sequence prediction tasks (PTB language modeling) that are different from terminal prediction (reported in appendix A.7)." }, { "heading": "4.3.1 STANDARD BENCHMARK LTD TASKS : ADDITION & COPY MEMORY", "text": "Addition and Copy tasks (Hochreiter & Schmidhuber, 1997) have long been used as benchmarks in the literature to evaluate LTD (Hori et al., 2017; Zhang et al., 2018; Arjovsky et al., 2016; Martens & Sutskever, 2011). We follow the setup described in Arjovsky et al. (2016) to create the adding and copying tasks. 
See appendix A.4 for a detailed description. For both tasks we run iRNN with K = 5.

Figure 4 shows the average performance of various methods on these tasks. For the copying task, we observe that iRNN converges rapidly to the naive baseline and is the only method to achieve zero average cross entropy. For the addition task, both FastRNN and iRNN solve the task, but FastRNN takes twice the number of iterations to reach the desired 0 MSE.⁴ In both tasks, iRNN performance is much more stable across the number of online training samples. In contrast, other methods either take a lot of samples to match iRNN's performance or exhibit high variance in the evaluation metric. This shows that iRNN converges faster than the baselines (to the desired error). These results demonstrate that iRNN easily and quickly learns long-term dependencies. We omitted reporting unitary RNN variants for the Add and Copy tasks; see Sec. 4.1 for the copy task. On the Add task, we point out that our performance is superior. In particular, for the longer T = 750 length, Arjovsky et al. (2016) point out that the MSE does not reach zero and that uRNN is noisy. Others either do not report the add-task (Wisdom et al., 2016) or report it only for shorter lengths (Zhang et al., 2018).

⁴ Note that LSTM solves the addition problem in Arjovsky et al. (2016) only with more than 10k iterations. We only use 2k iterations in our experiments to demonstrate the effectiveness of our method." }, { "heading": "4.3.2 NON LTD VISION TASKS: PIXEL MNIST, PERMUTE MNIST", "text": "Next, we perform experiments on the sequential vision tasks: (a) classification of MNIST images on a pixel-by-pixel sequence, and (b) a fixed, randomly permuted MNIST sequence (Lecun et al., 1998). These tasks typically do not fall in the LTD categories (Chang et al., 2019), but are useful to demonstrate faster training, which can be attributed to better gradients.

For the pixel-MNIST task, Kusupati et al. (2018) report that it takes significantly longer for existing (LSTM, Unitary, Gated, Spectral) RNNs to converge to reasonable performance. In contrast, FastRNN trains at least 2x faster than LSTMs. Our results (Table 1) for iRNN show a 9x speedup relative to LSTMs, and a 2x speedup in comparison to Antisymmetric. In terms of test accuracy, iRNN matches the performance of Antisymmetric, but with at least 3x fewer parameters. We did not gain much with increased K values⁵. For the permuted version of this task, we seem to outperform the existing baselines⁶. In both tasks, iRNN trained at least 2x faster than the strongest baselines. These results demonstrate that iRNN converges much faster than the baselines, with fewer parameters.

⁵ For some existing comparisons, LSTMs have achieved roughly 98.9 with dataset-specific heuristics (Cooijmans et al., 2016), but we could not achieve this performance in our comparison (and neither have many others, e.g., Kusupati et al. (2018); Zhang et al. (2018); Arjovsky et al. (2016)).

⁶ Note that there is no standard permutation in the literature. This may be the main reason we could not replicate the performance of Chang et al. (2019) on the permute-MNIST task." }, { "heading": "4.3.3 NOISE PADDING TASKS: NOISY-MNIST, NOISY-CIFAR", "text": "Additionally, as in Chang et al. (2019), we induce LTD by padding CIFAR-10 with noise, exactly replicating their setup, resulting in Noisy-CIFAR. We extend this setting to the MNIST dataset, resulting in Noisy-MNIST. Intuitively, we expect our model to be resilient to such perturbations. We attribute iRNN's superior performance to the fact that it is capable of suppressing noise. For example, say noise is padded at t > τ and this results in Wxt being zero on average. For iRNN the resulting states cease to be updated, so iRNN recalls the last informative state hτ (modulo a constant), unlike RNNs/variants! Thus information from the signal component is possibly better preserved.

Results for Noisy-MNIST and Noisy-CIFAR are shown in Table 2. Note that almost all timesteps contain noise in these datasets. 
LSTMs perform poorly on these tasks due to vanishing gradients. This is consistent with the earlier observations (Chang et al., 2019). iRNN outperforms the baselines very comprehensively on CIFAR-10, while on MNIST the gains are smaller, as it is a relatively easier task. These results show that iRNN is more resilient to noise and can account for longer dependencies." }, { "heading": "4.3.4 SHORT DURATION EMBEDDED ACTIVITY RECOGNITION TASKS: HAR-2, GOOGLE-30", "text": "We are interested in detecting activity embedded in a longer sequence with small-footprint RNNs (Kusupati et al., 2018): (a) Google-30 (Warden, 2018), i.e., detection of utterances of 30 commands plus background noise and silence, and (b) HAR-2 (Anguita et al., 2012), i.e., Human Activity Recognition from an accelerometer and gyroscope on a Samsung Galaxy S3 smartphone.

Table 3 shows accuracy, training time, number of parameters, and prediction time. Even with K = 1, we compare well against competing methods, and iRNN accuracy improves with larger K. Interestingly, higher K yields faster training as well as moderate prediction time, despite the overhead of additional recursions. These results show that iRNN outperforms baselines on activity recognition tasks and fits within IoT/edge-device budgets." }, { "heading": "5 CONCLUSION", "text": "Drawing inspiration from Rosenblatt's continuous-time RNNs, we developed the discrete-time incremental RNN (iRNN). Leveraging equilibrium properties of CTRNN, iRNN solves the exploding/vanishing gradient problem. We show that iRNN's improved gradients are directly correlated with improved test accuracy. A number of experiments demonstrate iRNN's responsiveness to long-term dependency tasks. In addition, due to its smooth low-dimensional trajectories, it has a lightweight footprint that can be leveraged for IoT applications." }, { "heading": "ACKNOWLEDGMENTS", "text": "The authors would like to thank the Area Chair and the reviewers for their constructive comments. This work was supported partly by the National Science Foundation Grant 1527618, the Office of Naval Research Grant N0014-18-1-2257, and by a gift from ARM corporation." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 MULTI-LAYER DEEP RNN NETWORKS.", "text": "We point out in passing that our framework readily admits deep multi-layered networks within a single time-step. Indeed, our setup is general; it applies to shallow and deep nets, and to small and large time steps. As a case in point, the Deep Transition RNN of Pascanu et al. (2013c):

hm+1 = fh(hm, xm+1) = φh(WLφL−1(WL−1 . . . W1φ1(Uhm + Wxm+1)))

is readily accounted for by Theorem 1 in an implicit form:

hm+1 = fh(hm+1 + hm, xm+1) − hm.

So is the Deep-RNN of Hermans & Schrauwen (2013). The trick is to transform hm → hm + hm+1 and hm+1 → hm + hm+1. As such, all we need is smoothness of fh, which places no restriction on the number of layers.
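A small sketch of this trick (our own illustration; the two-layer transition and all shapes are arbitrary) treats a deep transition function fh implicitly and solves for the state with the same fixed-point recursion used by iRNN:

import numpy as np

rng = np.random.default_rng(2)
D, d = 16, 4
U, W = 0.1 * rng.standard_normal((D, D)), 0.1 * rng.standard_normal((D, d))
W1, W2 = 0.1 * rng.standard_normal((D, D)), 0.1 * rng.standard_normal((D, D))

def f_h(h, x):
    # A two-layer deep transition, in the spirit of Pascanu et al. (2013c).
    return np.tanh(W2 @ np.tanh(W1 @ np.tanh(U @ h + W @ x)))

# Implicit form: find g with g = f_h(g + h_m, x_next) - h_m, i.e.,
# h_{m+1} = f_h(h_{m+1} + h_m, x_{m+1}) - h_m, via Eq. 5-style damped steps.
h_m, x_next, g, eta = rng.standard_normal(D), rng.standard_normal(d), np.zeros(D), 0.5
for _ in range(50):
    g = g + eta * (f_h(g + h_m, x_next) - h_m - g)
h_next = g  # the increment-encoded next state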
On the other hand, that we do not have to limit the number of time steps is the point of Theorem 1, which asserts that the partial differential of hidden states (which is primarily why vanishing/exploding gradients arise (Pascanu et al., 2013b) in the first place) is identity!" }, { "heading": "A.2 PSEUDO CODE AND IMPLEMENTATION", "text": "Given an input sequence and iRNN model parameters, the hidden states can be generated with the help of subroutine 1. This routine can be plugged into standard deep learning frameworks such as Tensorflow/PyTorch to learn the model parameters via back-propagation.

Algorithm 1: Pseudo code for computing iRNN hidden states for one input sequence
Data: Input sequence {xm}, m = 1, ..., T
Require: Number of recursion steps K; model parameters (U, W, b, α, {η_m^k})
1: Initial hidden state h0 = 0
2: for m = 1 to T do
3:   Initialize g0 to zero or to hm−1
4:   for k = 1 to K do
5:     gk = gk−1 + η_m^k (φ(U(gk−1 + hm−1) + Wxm + b) − α(gk−1 + hm−1))
6:   hm = gK
Output: hidden states {hm}, m = 1, ..., T" }, { "heading": "A.3 CONVERGENCE GUARANTEES FOR GENERAL LEARNING RATES.", "text": "Theorem 2 (Local Convergence with Linear Rate). Assume that the function F(gi) ≜ φ(U(gi + hk−1) + Wxk + b) − (gi + hk−1) and the parameter η_k^{(i)} in Eq. 5 satisfy

[η_k^{(i)}]^2 ‖∇F(gi)F(gi)‖^2 + 2η_k^{(i)} F(gi)^⊤ ∇F(gi)F(gi) < 0, ∀k, ∀i. (10)

Then there exists ε > 0 such that, if ‖g0 − heq‖ ≤ ε, where heq denotes the fixed point, the sequence gi generated by the Euler method converges to the equilibrium solution in Meq(hk−1, xk) locally with linear rate.

The proof is based on drawing a connection between the Euler method and inexact Newton methods, and leverages Thm. 2.3 in Dembo et al. (1982). See appendix Sec. A.8.1, Thm. 3 (for the proof) and Sec. A.7.5 (for empirical verification).

Corollary 1. If ‖I + η_k^{(i)} ∇F(gi)‖ < 1, ∀k, ∀i, the forward propagation (Eq. 13) is stable and the sequence {gi} converges locally at a linear rate.

The proof is based on Thm. 2.3 in Dembo et al. (1982), Thm. 2, and Prop. 2 in Chang et al. (2019). See appendix A.8.1, Corollary 2." }, { "heading": "A.4 DATASET DETAILS", "text": "Table 4 and Table 6 list the statistics of all the datasets described below.

Google-12 & Google-30: The Google Speech Commands dataset contains 1-second-long utterances of 30 short words (30 classes) sampled at 16KHz. Standard log Mel-filter-bank featurization with 32 filters over a window size of 25ms and stride of 10ms gave 99 timesteps of 32 filter responses for a 1-second audio clip. For the 12-class version, the 10 classes used in Kaggle's Tensorflow Speech Recognition challenge⁷ were used, and the remaining two classes were noise and background sounds (taken randomly from the remaining 20 short-word utterances). Both datasets were zero-mean, unit-variance normalized during training and prediction.

⁷ https://www.kaggle.com/c/tensorflow-speech-recognition-challenge

HAR-2⁸: The Human Activity Recognition (HAR) dataset was collected from an accelerometer and gyroscope on a Samsung Galaxy S3 smartphone. The features available on the repository were directly used for experiments. The 6 activities were merged to get the binarized version: the classes Sitting, Laying, Walking_Upstairs and Standing, Walking, Walking_Downstairs were merged to obtain the two classes. The dataset was zero-mean, unit-variance normalized during training and prediction.

Penn Treebank: 300-length word sequences were used for the word-level language modeling task using the Penn Treebank (PTB) corpus. 
The vocabulary consisted of 10,000 words, and the size of the trainable word embeddings was kept the same as the number of hidden units of the architecture. This is the setup used in (Kusupati et al., 2018; Zhang et al., 2018).

Pixel-MNIST: Pixel-by-pixel version of the standard MNIST-10 dataset⁹. The dataset was zero-mean, unit-variance normalized during training and prediction.

Permuted-MNIST: This is similar to Pixel-MNIST, except it is made harder by shuffling the pixels with a fixed permutation. We keep the random seed as 42 to generate the permutation of 784 pixels.

Noisy-MNIST: To introduce more long-range dependencies into the Pixel-MNIST task, we define a more challenging task called Noisy-MNIST, inspired by the noise-padded experiments in Chang et al. (2019). Instead of feeding in one pixel at a time, we input each row of an MNIST image at every time step. After the first 28 time steps, we input independent standard Gaussian noise for the remaining time steps. Since an MNIST image is of size 28 with a single channel, the input dimension is m = 28. The total number of time steps is set to T = 1000. In other words, only the first 28 time steps of input contain salient information; all remaining 972 time steps are merely random noise. For a model to correctly classify an input image, it has to remember information from a long time ago. This task is conceptually more difficult than pixel-by-pixel MNIST, although the total amount of signal in the input sequence is the same.

Noisy-CIFAR: This is an exact replica of the noise-padded CIFAR task in Chang et al. (2019). Instead of feeding in one pixel at a time, we input each row of a CIFAR-10 image at every time step. After the first 32 time steps, we input independent standard Gaussian noise for the remaining time steps. Since a CIFAR-10 image is of size 32 with three RGB channels, the input dimension is m = 96. The total number of time steps is set to T = 1000. In other words, only the first 32 time steps of input contain salient information; all remaining 968 time steps are merely random noise. For a model to correctly classify an input image, it has to remember information from a long time ago. This task is conceptually more difficult than pixel-by-pixel CIFAR-10, although the total amount of signal in the input sequence is the same.

Addition Task: We closely follow the adding problem defined in (Arjovsky et al., 2016; Hochreiter & Schmidhuber, 1997). Each input consists of two sequences of length T. The first sequence, which we denote x, consists of numbers sampled uniformly at random from U[0, 1]. The second sequence is an indicator sequence consisting of exactly two entries of 1 and remaining entries 0. The first 1 entry is located uniformly at random in the first half of the sequence, whilst the second 1 entry is located uniformly at random in the second half. The output is the sum of the two entries of the first sequence corresponding to where the 1 entries are located in the second sequence. A naive strategy of predicting 1 as the output regardless of the input sequence gives an expected mean squared error of 0.167, the variance of the sum of two independent uniform distributions.
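The adding-task generator is easy to write down from this description. A sketch (ours, not the authors' data pipeline; batch size and T are placeholders):

import numpy as np

def adding_task(batch, T, rng):
    # First sequence: values from U[0, 1]; second: two indicator entries,
    # one located uniformly in each half.
    values = rng.uniform(0.0, 1.0, size=(batch, T))
    marks = np.zeros((batch, T))
    i = rng.integers(0, T // 2, size=batch)  # first half
    j = rng.integers(T // 2, T, size=batch)  # second half
    rows = np.arange(batch)
    marks[rows, i] = 1.0
    marks[rows, j] = 1.0
    x = np.stack([values, marks], axis=-1)  # shape (batch, T, 2)
    y = values[rows, i] + values[rows, j]   # target sum
    return x, y

x, y = adding_task(128, 200, np.random.default_rng(0))

Always predicting 1 on such batches yields an MSE of about 0.167, the naive baseline quoted above.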
Copying Task: Following a similar setup to (Arjovsky et al., 2016; Hochreiter & Schmidhuber, 1997), we outline the copy memory task. Consider 10 categories, {a_i}_{i=0}^{9}. The input takes the form of a T + 20 length vector of categories, where we test over a range of values of T. The first 10 entries are sampled uniformly, independently and with replacement from {a_i}_{i=0}^{7}, and represent the sequence which will need to be remembered. The next T − 1 entries are set to a8, which can be thought of as the 'blank' category. The next single entry is a9, which represents a delimiter and indicates to the algorithm that it is now required to reproduce the initial 10 categories in the output. The remaining 10 entries are set to a8. The required output sequence consists of T + 10 repeated entries of a8, followed by the first 10 categories of the input sequence in exactly the same order. The goal is to minimize the average cross entropy of category predictions at each time step of the sequence. The task amounts to having to remember a categorical sequence of length 10 for T time steps.

A simple baseline can be established by considering an optimal strategy when no memory is available, which we deem the memoryless strategy. The memoryless strategy would be to predict a8 for T + 10 entries and then predict each of the final 10 categories from the set {a_i}_{i=0}^{7} independently and uniformly at random. The categorical cross entropy of this strategy is 10 log(8)/(T + 20).

⁸ https://archive.ics.uci.edu/ml/datasets/human+activity+recognition+using+smartphones
⁹ http://yann.lecun.com/exdb/mnist/

DSA-19¹⁰: This dataset is based on Daily and Sports Activity (DSA) detection from a resource-constrained IoT wearable device with 5 Xsens MTx sensors having accelerometers, gyroscopes, and magnetometers on the torso and four limbs. The features available on the repository were used for experiments. The dataset was zero-mean, unit-variance normalized during training and prediction.

Yelp-5: A sentiment classification dataset based on text reviews¹¹. The data consists of 500,000 train points and 500,000 test points from the first 1 million reviews. Each review was clipped or padded to be 300 words long. The vocabulary consisted of 20,000 words, and 128-dimensional word embeddings were jointly trained with the network." }, { "heading": "A.5 BASELINE JUSTIFICATION", "text": "In our experiments section, we stated that some of the potential baselines were removed due to the experimental conditions enforced in the setup. Here we justify our choices. Mostly, the reasoning is to avoid comparing complementary add-ons and to compare the bare-bones cells.

• Cooijmans et al. (2016) is removed since it is an add-on and can be applied to any method. Besides, its pixel-MNIST results involve dataset-specific heuristics.

• Gong et al. (2018) is also an add-on and hence can be applied to any method.

• Zilly et al. (2017); Pascanu et al. (2013a); Mujika et al. (2017) denote deep-transitioning methods. They are add-ons for any single recurrent block and hence can be applied to any recurrent cell.

• Gating variants of single recurrent cells (Chang et al., 2019; Kusupati et al., 2018) have also been removed, since iRNN can be extended to a gating variant and hence gating is just an add-on." }, { "heading": "A.6 HYPER-PARAMETERS FOR REPRODUCIBILITY", "text": "We report the various hyper-parameters we use in our experiments for reproducibility. As mentioned earlier, we mainly use ReLU as the non-linearity and Adam as the optimizer. Apart from this, other hyper-parameters are mentioned in Table 5." }, { "heading": "A.7 ADDITIONAL EXPERIMENTS", "text": "" }, { "heading": "A.7.1 COPYING AND ADDITION TASKS", "text": "Figure 5 shows the results for the remaining experiments on the addition task, for lengths 100 and 400.
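The memoryless baseline for the copy task described in A.4 above is a one-liner to evaluate; a sketch (ours) for representative sequence lengths:

import numpy as np

# Predict the blank category a8 for T+10 steps, then guess the 10 final
# categories uniformly from 8 options: average cross entropy 10 log(8)/(T+20).
for T in (100, 200, 400):
    print(T, 10 * np.log(8) / (T + 20))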
}, { "heading": "A.7.2 TRADITIONAL DATASETS", "text": "Table 7 shows the results including left out baselines for Pixel-MNIST and permute-MNIST task. Here we also include star rating prediction on a scale of 1 to 5 of Yelp reviews Yelp (2017). Table 8 shows the results for this dataset." }, { "heading": "A.7.3 ACTIVITY RECOGNITION DATASETS", "text": "We also include activity recognition tasks: (a)Google-12 Warden (2018) , i.e. detection of utterances of 10 commands plus background noise and silence and (b) DSA-19 Altun et al. (2010), Daily and Sports Activity (DSA) detection from a resource-constrained IoT wearable device with 5 Xsens MTx sensors having accelerometers, gyroscopes and magnetometers on the torso and four limbs. Table 9 shows results for these activities along with some other baselines for activity recognition tasks mentioned in Sec. 4.3.4 and described in Sec. A.4." }, { "heading": "A.7.4 PTB LANGUAGE MODELLING", "text": "We follow (Kusupati et al., 2018; Zhang et al., 2018) to setup our PTB experiments. We only pursue one layer language modelling, but with more difficult sequence length (300). Table 10 reports all the evaluation metrics for the PTB Language modelling task with 1 layer as setup by Kusupati et al. (2018), including test time and number of parameters (which we omitted from the main paper due to lack of space)." }, { "heading": "A.7.5 LINEAR RATE OF CONVERGENCE TO FIXED POINT", "text": "Empirically we verify the local convergence to a fixed point with linear rate by comparing the Euclidean distance between the approximate solutions, h(k)t , using Eq. 11 with g0 = 0 and the fixed points, ht, computed using FSOLVE from SCIPY. The learnable parameters are initialized suitably and then fixed. We illustrate our results in Fig. 6, which clearly demonstrates that the approximate solutions tend to converge with linear rate.\ngi = gi−1 + η i t(φ(U(gi−1 + ht−1) +Wxt + b)− α(gi−1 + ht−1)) (11)\nhKt = gK" }, { "heading": "A.7.6 THEORETICAL VERIFICATION", "text": "Here we include some experiments to show that our theoretical assumptions hold true.\nNon-Singularity of the matrix D For our iRNN parametrization to satisfy the conditions of having equillibrium points to be locally asymptotically stable, the eigen values of the matrixD = (∇φ(·)U− γI) should be negative. We plot a histogram of the eigenvalues of D for all the points in the HAR-2 dataset. As illustrated in the figure 7, all the eigenvalues are negative.\nA.7.7 IDENTITY GRADIENT COMPARISON iRNN VS RNN\nTo verify Theorem. 1 empirically, we train RNN and iRNN on the HAR-2 data set (see more details in Sec. 4), respectively, and plot in Fig. 8 the magnitude of gradient of the last layer hT w.r.t. the first\nlayer h1 in log scale to confirm that our approach leads to no vanishing or exploding gradients when the error is back-propagated through time. We also conducted experiments to verify that the gradient of iRNN is norm preserving (see Sec. A.7.8 and Figure . 3). As we see clearly, RNN suffers from serious vanishing gradient issue in training, while iRNN’s backpropagated gradients is close to 1, and the variance arises mainly our approximation of fixed points and stochastic behavior in training networks, demonstrating much better training stability of iRNN.\nA.7.8 GRADIENT NORM W.R.T. LOSS ‖ ∂L∂h1 ‖\nIn addition to the gradient ratio we plot in Sec.4.2, we also show in figure 9, the more popular quantity captured in earlier works (Arjovsky et al., 2016; Zhang et al., 2018), i.e. the gradient norm w.r.t. 
" }, { "heading": "A.7.8 GRADIENT NORM W.R.T. LOSS ‖∂L/∂h1‖", "text": "In addition to the gradient ratio we plot in Sec. 4.2, we also show in Figure 9 the more popular quantity captured in earlier works (Arjovsky et al., 2016; Zhang et al., 2018), i.e., the gradient norm w.r.t. the loss, ‖∂L/∂h1‖. We emphasize that this quantity alone is misleading in the context of resolving the issue of vanishing/exploding gradients, since ‖∂L/∂h1‖ = ‖∂L/∂hT‖ ∗ ‖∂hT/∂h1‖. The long-term component controlling the gradients is ‖∂hT/∂h1‖, but the other component, ‖∂L/∂hT‖, could become zero by virtue of the loss being nearly zero. This happens in our addition task experiment: because the MSE is close to zero, we observe a nearly 0 value for this quantity. But this is clearly because the MSE is 0. Also note that none of our graphs have a log scale, which is not the case in earlier works. The conclusion that can be drawn from the loss-gradient is that it is somewhat stable and can inform us about the quality of convergence.

We also plot ‖∂hT/∂hT−1‖ in Figure 9 in order to show that iRNN indeed achieves identity gradients everywhere in the time horizon, since Fig. 3 had shown that the ratio of ‖∂hT/∂h1‖ and ‖∂hT/∂hT−1‖ equals 1 for iRNN." }, { "heading": "A.7.9 DIFFERENT ACTIVATION FUNCTION", "text": "We also performed some experiments with sigmoid activation on the HAR-2 dataset. The results for this variant also follow a similar pattern to what we saw with the ReLU variant." }, { "heading": "A.8 PROOFS", "text": "" }, { "heading": "A.8.1 LOCAL CONVERGENCE WITH LINEAR RATE", "text": "Recall that we rewrite the fixed-point constraints in our iRNN as the following ODE:

g′(t) = F(g(t)) ≜ φ(U(g(t) + hk−1) + Wxk + b) − (g(t) + hk−1); g(0) = 0. (12)

Then, based on the Euler method, we have the following update rule for solving fixed points:

gi+1 = gi + η_k^{(i)} F(gi) (13)
     = gi + η_k^{(i)} [φ(U(gi + hk−1) + Wxk + b) − (gi + hk−1)]. (14)

Inexact Newton methods (Dembo et al., 1982) refer to a family of algorithms that aim to solve the equation system F(z) = 0 approximately at each iteration using the following rule:

zi+1 = zi + si, ri = F(zi) + ∇F(zi)si, (15)

where ∇F denotes the (sub)gradient of the function F, and ri denotes the error at the i-th iteration between F(zi) and 0.

By drawing the connection between Eq. 13 and Eq. 15, we can set zi ≡ gi and si ≡ η_k^{(i)} F(gi). Then, based on Eq. 15, we have

ri = F(gi) + η_k^{(i)} ∇F(gi)F(gi). (16)

Lemma 1 (Thm. 2.3 in Dembo et al. (1982)). Assume that

‖ri‖ / ‖F(zi)‖ ≤ τ < 1, ∀i, (17)

where ‖ · ‖ denotes an arbitrary norm and the induced operator norm. There exists ε > 0 such that, if ‖z0 − z∗‖ ≤ ε, then the sequence of inexact Newton iterates {zi} converges to z∗. Moreover, the convergence is linear in the sense that ‖zi+1 − z∗‖∗ ≤ τ‖zi − z∗‖∗, where ‖y‖∗ = ‖∇F(z∗)y‖.

Theorem 3 (Local Convergence with Linear Rate). Assume that the function F in Eq. 12 and the parameter η_k^{(i)} in Eq. 13 satisfy

[η_k^{(i)}]^2 ‖∇F(gi)F(gi)‖^2 + 2η_k^{(i)} F(gi)^⊤ ∇F(gi)F(gi) < 0, ∀i, ∀k. (18)

Then there exists ε > 0 such that, if ‖g0 − heq‖ ≤ ε, where heq denotes the fixed point, the sequence {gi} generated by the Euler method converges to the equilibrium solution in Meq(hk−1, xk) locally with linear rate.

Proof. By substituting Eq. 16 into Eq. 17, to prove local convergence we need to guarantee

‖F(gi) + η_k^{(i)} ∇F(gi)F(gi)‖ < ‖F(gi)‖. (19)

By taking the square of both sides in Eq. 19, we can show that Eq. 19 is equivalent to Eq. 18. We then complete our proof.

Corollary 2. Assume that ‖I + η_k^{(i)} ∇F(gi)‖ < 1, ∀i, ∀k holds. Then the forward propagation using Eq. 13 is stable and our sequence {gi} converges locally with linear rate.

Proof. By substituting Eq. 
17 and based on the assumption in the corollary, we have

‖ri‖ / ‖F(gi)‖ = ‖F(gi) + η_k^{(i)} ∇F(gi)F(gi)‖ / ‖F(gi)‖ ≤ ‖I + η_k^{(i)} ∇F(gi)‖ ‖F(gi)‖ / ‖F(gi)‖ < 1. (20)

Further, based on Prop. 2 in Chang et al. (2019) and Thm. 2, we then complete our proof." } ]
2020
null
SP:cd9024c0331b487fcb0cc13872f3ddb01f57ce15
[ "Authors proposed a multi-modal unsupervised algorithm to uncover the electricity usage of different appliances in a home. The detection of appliance was done by using both combined electricity consumption data and user location data from sensors. The unit of detection was set to be a 25-second window centered around any electricity usage spike. Authors used a encoder/decode set up to model two different factors of usage: type of appliance and variety within the same appliance. This part of the model was trained by predicting actual consumption. Then only the type of appliance was used to predict the location of people in the house, which was also factored into appliance related and unrelated factors. Locations are represented as images to avoid complicated modeling of multiple people.", "This paper proposed a learning algorithm to recover the events of using an appliance and as well as the location of the appliance in a home by using smart electricity meter and a motion sensor installed a home. In the model, the input is a window of electricity energy consumption and context and the output of the model is the location collected by the motion sensor. The appliance activation as the latent variables is learned using a autoencoder architecture. " ]
Learning home appliance usage is important for understanding people’s activities and optimizing energy consumption. The problem is modeled as an event detection task, where the objective is to learn when a user turns an appliance on, and which appliance it is (microwave, hair dryer, etc.). Ideally, we would like to solve the problem in an unsupervised way so that the method can be applied to new homes and new appliances without any labels. To this end, we introduce a new deep learning model that takes input from two home sensors: 1) a smart electricity meter that outputs the total energy consumed by the home as a function of time, and 2) a motion sensor that outputs the locations of the residents over time. The model learns the distribution of the residents’ locations conditioned on the home energy signal. We show that this cross-modal prediction task allows us to detect when a particular appliance is used, and the location of the appliance in the home, all in a self-supervised manner, without any labeled data.
[ { "affiliations": [], "name": "APPLIANCE USAGE" }, { "affiliations": [], "name": "Chen-Yu Hsu" }, { "affiliations": [], "name": "Abbas Zeitoun" }, { "affiliations": [], "name": "Guang-He Lee" }, { "affiliations": [], "name": "Dina Katabi" } ]
[ { "authors": [ "Martín Abadi", "Ashish Agarwal", "Paul Barham", "Eugene Brevdo", "Zhifeng Chen", "Craig Citro", "Greg S Corrado", "Andy Davis", "Jeffrey Dean", "Matthieu Devin" ], "title": "Tensorflow: Large-scale machine learning on heterogeneous distributed systems", "venue": "arXiv preprint arXiv:1603.04467,", "year": 2016 }, { "authors": [ "Fadel Adib", "Zachary Kabelac", "Dina Katabi" ], "title": "Multi-person localization via rf body reflections", "venue": "In 12th USENIX Symposium on Networked Systems Design and Implementation (NSDI", "year": 2015 }, { "authors": [ "K Carrie Armel", "Abhay Gupta", "Gireesh Shrimali", "Adrian Albert" ], "title": "Is disaggregation the holy grail of energy efficiency? the case of electricity", "venue": "Energy Policy,", "year": 2013 }, { "authors": [ "David Arthur", "Sergei Vassilvitskii" ], "title": "k-means++: The advantages of careful seeding", "venue": "In Proceedings of the eighteenth annual ACM-SIAM symposium on Discrete algorithms,", "year": 2007 }, { "authors": [ "Nipun Batra", "Hongning Wang", "Amarjeet Singh", "Kamin Whitehouse" ], "title": "Matrix factorisation for scalable energy breakdown", "venue": "In Thirty-First AAAI Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Nipun Batra", "Yiling Jia", "Hongning Wang", "Kamin Whitehouse" ], "title": "Transferring decomposed tensors for scalable energy breakdown across regions", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Christian Beckel", "Wilhelm Kleiminger", "Romano Cicchetti", "Thorsten Staake", "Silvia Santini" ], "title": "The eco data set and the performance of non-intrusive load monitoring algorithms", "venue": "In Proceedings of the 1st ACM Conference on Embedded Systems for Energy-Efficient Buildings,", "year": 2014 }, { "authors": [ "Roberto Bonfigli", "Andrea Felicetti", "Emanuele Principi", "Marco Fagiani", "Stefano Squartini", "Francesco Piazza" ], "title": "Denoising autoencoders for non-intrusive load monitoring: improvements and comparative evaluation", "venue": "Energy and Buildings,", "year": 2018 }, { "authors": [ "Christian Debes", "Andreas Merentitis", "Sergey Sukhanov", "Maria Niessen", "Nikolaos Frangiadakis", "Alexander Bauer" ], "title": "Monitoring activities of daily living in smart homes: Understanding human behavior", "venue": "IEEE Signal Processing Magazine,", "year": 2016 }, { "authors": [ "Lorenzo Maria Donini", "Eleonora Poggiogalle", "Maria Piredda", "Alessandro Pinto", "Mario Barbagallo", "Domenico Cucinotta", "Giuseppe Sergi" ], "title": "Anorexia and eating patterns in the elderly", "venue": "PloS one,", "year": 2013 }, { "authors": [ "Zoubin Ghahramani", "Michael I Jordan" ], "title": "Factorial hidden markov models", "venue": "In Advances in Neural Information Processing Systems,", "year": 1996 }, { "authors": [ "Negar Ghourchian", "Michel Allegue-Martinez", "Doina Precup" ], "title": "Real-time indoor localization in smart homes using semi-supervised learning", "venue": "In Twenty-Ninth IAAI Conference,", "year": 2017 }, { "authors": [ "Lluis Gomez", "Yash Patel", "Marçal Rusiñol", "Dimosthenis Karatzas", "CV Jawahar" ], "title": "Self-supervised learning of visual features through embedding images into text topic spaces", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "David Harwath", "Adria Recasens", "Dídac Surís", "Galen Chuang", "Antonio Torralba", "James Glass" ], "title": "Jointly 
discovering visual objects and spoken words from raw sensory input", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Chen-Yu Hsu", "Aayush Ahuja", "Shichao Yue", "Rumen Hristov", "Zachary Kabelac", "Dina Katabi" ], "title": "Zero-effort in-home sleep and insomnia monitoring using radio signals", "venue": "Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies,", "year": 2017 }, { "authors": [ "Chen-Yu Hsu", "Yuchen Liu", "Zachary Kabelac", "Rumen Hristov", "Dina Katabi", "Christine Liu" ], "title": "Extracting gait velocity and stride length from surrounding radio signals", "venue": "In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems,", "year": 2017 }, { "authors": [ "Chen-Yu Hsu", "Rumen Hristov", "Guang-He Lee", "Mingmin Zhao", "Dina Katabi" ], "title": "Enabling identification and behavioral sensing in homes using radio reflections", "venue": "In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems,", "year": 2019 }, { "authors": [ "Yiling Jia", "Nipun Batra", "Hongning Wang", "Kamin Whitehouse" ], "title": "A tree-structured neural network model for household energy breakdown", "venue": "In The World Wide Web Conference,", "year": 2019 }, { "authors": [ "Matthew J Johnson", "Alan S Willsky" ], "title": "Bayesian nonparametric hidden semi-markov models", "venue": "Journal of Machine Learning Research,", "year": 2013 }, { "authors": [ "Kiran Joshi", "Dinesh Bharadia", "Manikanta Kotaru", "Sachin Katti" ], "title": "Wideo: fine-grained device-free motion tracing using rf backscatter", "venue": "In 12th USENIX Symposium on Networked Systems Design and Implementation (NSDI", "year": 2015 }, { "authors": [ "Ossi Kaltiokallio", "Maurizio Bocca", "Neal Patwari" ], "title": "Follow@ grandma: Long-term device-free localization for residential monitoring", "venue": "In Local Computer Networks Workshops (LCN Workshops),", "year": 2012 }, { "authors": [ "Jack Kelly", "William Knottenbelt" ], "title": "Neural nilm: Deep neural networks applied to energy disaggregation", "venue": "In Proceedings of the 2nd ACM International Conference on Embedded Systems for Energy-Efficient Built Environments,", "year": 2015 }, { "authors": [ "Hyungsul Kim", "Manish Marwah", "Martin Arlitt", "Geoff Lyon", "Jiawei Han" ], "title": "Unsupervised disaggregation of low frequency power measurements", "venue": "In Proceedings of the 2011 SIAM international conference on data mining,", "year": 2011 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "J Zico Kolter", "Tommi Jaakkola" ], "title": "Approximate inference in additive factorial hmms with application to energy disaggregation", "venue": "In Artificial intelligence and statistics,", "year": 2012 }, { "authors": [ "J Zico Kolter", "Siddharth Batra", "Andrew Y Ng" ], "title": "Energy disaggregation via discriminative sparse coding", "venue": "In Advances in Neural Information Processing Systems,", "year": 2010 }, { "authors": [ "Ranjay Krishna", "Kenji Hata", "Frederic Ren", "Li Fei-Fei", "Juan Carlos Niebles" ], "title": "Dense-captioning events in videos", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Henning Lange", "Mario Berges" ], "title": "Variational bolt: approximate learning in factorial hidden markov models with 
application to energy disaggregation", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Xiang Li", "Shengjie Li", "Daqing Zhang", "Jie Xiong", "Yasha Wang", "Hong Mei" ], "title": "Dynamic-music: accurate device-free indoor localization", "venue": "In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing,", "year": 2016 }, { "authors": [ "Laurens van der Maaten", "Geoffrey Hinton" ], "title": "Visualizing data using t-sne", "venue": "Journal of machine learning research,", "year": 2008 }, { "authors": [ "Andrew Owens", "Alexei A Efros" ], "title": "Audio-visual scene analysis with self-supervised multisensory features", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Andrew Owens", "Phillip Isola", "Josh McDermott", "Antonio Torralba", "Edward H Adelson", "William T Freeman" ], "title": "Visually indicated sounds", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Oliver Parson", "Siddhartha Ghosh", "Mark Weal", "Alex Rogers" ], "title": "An unsupervised training method for non-intrusive appliance load monitoring", "venue": "Artificial Intelligence,", "year": 2014 }, { "authors": [ "Wei Wang", "Alex X Liu", "Muhammad Shahzad", "Kang Ling", "Sanglu Lu" ], "title": "Understanding and modeling of wifi signal based human activity recognition", "venue": "In Proceedings of the 21st annual international conference on mobile computing and networking,", "year": 2015 }, { "authors": [ "Yan Wang", "Jian Liu", "Yingying Chen", "Marco Gruteser", "Jie Yang", "Hongbo Liu" ], "title": "E-eyes: device-free location-oriented activity identification using fine-grained wifi signatures", "venue": "In Proceedings of the 20th annual international conference on Mobile computing and networking,", "year": 2014 }, { "authors": [ "Matt Wytock", "J Zico Kolter" ], "title": "Contextually supervised source separation with application to energy disaggregation", "venue": "In Twenty-eighth AAAI conference on artificial intelligence,", "year": 2014 }, { "authors": [ "Chaoyun Zhang", "Mingjun Zhong", "Zongzuo Wang", "Nigel Goddard", "Charles Sutton" ], "title": "Sequenceto-point learning with neural networks for non-intrusive load monitoring", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Richard Zhang", "Phillip Isola", "Alexei A Efros" ], "title": "Split-brain autoencoders: Unsupervised learning by cross-channel prediction", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Bochao Zhao", "Lina Stankovic", "Vladimir Stankovic" ], "title": "On a training-less solution for non-intrusive appliance load monitoring using graph signal processing", "venue": "IEEE Access,", "year": 2016 }, { "authors": [ "Hang Zhao", "Chuang Gan", "Andrew Rouditchenko", "Carl Vondrick", "Josh McDermott", "Antonio Torralba" ], "title": "The sound of pixels", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Hang Zhao", "Chuang Gan", "Wei-Chiu Ma", "Antonio Torralba" ], "title": "The sound of motions", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Mingmin Zhao", "Shichao Yue", "Dina Katabi", "Tommi S Jaakkola", "Matt T Bianchi" ], "title": "Learning 
sleep stages from radio signals: A conditional adversarial architecture", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Mingjun Zhong", "Nigel Goddard", "Charles Sutton" ], "title": "Signal aggregate constraints in additive factorial hmms, with application to energy disaggregation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Mingjun Zhong", "Nigel Goddard", "Charles Sutton" ], "title": "Latent bayesian melding for integrating individual and population models", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Kaile Zhou", "Shanlin Yang" ], "title": "Understanding household energy consumption behavior: The contribution of energy big data analytics", "venue": "Renewable and Sustainable Energy Reviews,", "year": 2016 }, { "authors": [ "Adam Zipperer", "Patricia A Aloise-Young", "Siddharth Suryanarayanan", "Robin Roche", "Lieko Earle", "Dane Christensen", "Pablo Bauleo", "Daniel Zimmerle" ], "title": "Electric energy management in the smart home: Perspectives on enabling technologies and consumer behavior", "venue": "Proceedings of the IEEE,", "year": 2013 } ]
[ { "heading": "1 INTRODUCTION", "text": "Learning home appliance usage patterns is useful for understanding user habits and optimizing electricity consumption. For example, knowing when a person uses their microwave, stove, oven, coffee machine or toaster provides information about their eating patterns. Similarly, understanding when they use their TV, air-conditioner, or washer and dryer provides knowledge of their behavior and habits. Such information can be used to encourage energy saving by optimizing appliance usage (Armel et al., 2013), to track the wellbeing of elderly living alone (Donini et al., 2013; Debes et al., 2016), or to provide users with behavioral analytics (Zhou & Yang, 2016; Zipperer et al., 2013). This data is also useful for various businesses such as home insurance companies interested in assessing accident risks and utility companies interested in optimizing energy efficiency (Armel et al., 2013).\nThe problem can be modeled as event detection – i.e., given the total energy consumed by the house as a function of time, we want to detect when various appliances are turned on. Past work has looked at analyzing the energy signal from the home utility meter to detect when certain appliances are on.1 Most solutions, however, assume that the energy pattern for each appliance is unique and known, and use this knowledge to create labeled data for their supervised models. (Kolter et al., 2010; Zhong et al., 2014; 2015; Kelly & Knottenbelt, 2015; Zhang et al., 2018; Bonfigli et al., 2018). Unfortunately, such solutions do not generalize well because the energy pattern of an appliance depends on its brand and can differ from one home to another (Kelly & Knottenbelt, 2015; Bonfigli et al., 2018).2 The literature also contains some unsupervised methods, but they typically have limited accuracy (Kim et al., 2011; Kolter & Jaakkola, 2012; Johnson & Willsky, 2013; Parson et al., 2014; Wytock & Kolter, 2014; Zhao et al., 2016; Lange & Berges, 2018).\nUnsupervised event detection in a data stream is intrinsically challenging because we do not know what patterns to look for. In our task, not only may appliance energy patterns be unknown, but also the energy signal may include many background events unrelated to appliance activation, such as the fridge or HVAC power cycling events.\nOne way to address this challenge is to consider the self-supervised paradigm. If a different stream of data also observes the events of interest, we can use this second modality to provide self-supervising\n1The utility meter outputs the sum of the energy of all active appliances in a house as a function of time. 2For example, a Samsung dishwasher may have a different energy pattern from that of a Kenmore dishwasher.\nsignals for event detection. To that end, we leverage the availability of new fine-resolution motion sensors which track the locations of people at home (Adib et al., 2015; Joshi et al., 2015; Li et al., 2016; Ghourchian et al., 2017; Hsu et al., 2017b). Such sensors operate as a consumer radar, providing decimeter-level location accuracy. They do not require people to wear sensors on their bodies, can operate through walls, and track people’s locations in different rooms.\nThese location sensors indirectly observe the events of interest. Specifically, they capture the change in user locations as they reach out to an appliance to set it up or turn it on (e.g. put food in a microwave and turn it on). 
Hence, the output of such sensors can provide a second modality for self-supervision.\nBut how should one design the model? We cannot directly use location as a label for appliance activation events. People can be next to an appliance but neither activate it nor interact with it. Moreover, we do not assume appliance locations are known a priori. We also cannot use the two modalities to learn a joint representation of the event in a shared space. This is because location and energy are unrelated most of the time and become related only when the event of interest occurs. Furthermore, there are typically multiple residents in the home, making it hard to tell which of them interacted with the appliance.\nOur model is based on cross-modal prediction. We train a neural network that, given the home energy at a particular time, predicts the location of the home residents. Our intuition is that appliance activation events have highly predictable locations, typically the location of the appliance. In contrast, background energy events (e.g. power cycling of the fridge) do not lead to predictable locations. Thus, our model uses this learned predictability along with the associated location and energy representation to cluster the events in the energy stream. In addition, we use a mixture distribution to disentangle irrelevant location information of other residents in the home. Interestingly, our model not only learns when each appliance is activated but also discovers the location of that appliance in the home, all without any labeled data.\nWe summarize the contributions of this paper as follows:\n• The paper introduces a new method for self-supervised event detection from weakly related data streams. The method combines neural cross-modal prediction with custom clustering based on the learned predictability and representation. We apply it to the task of detecting appliance usage events using unlabeled data from two sensors in the home: the energy meter, and a location sensor. • To evaluate our design, we have created the first dataset with concurrent streams of home energy and location data, collected from 4 homes over a period of 7 months. For each home, data was collected for 2 to 4 months. Ground truth measurements are provided via smart plugs connected directly to each appliance. • Compared to past work on unsupervised learning of appliance usage and a new baseline that leverages the two modalities, our method achieves significant improvements of 67.3% and 51.9% respectively for the average detection F1 score.\nWe will release our code and dataset to encourage future work on multi-modal models for understanding appliance usage patterns and the underlying user behavior. 3" }, { "heading": "2 RELATED WORK", "text": "Energy disaggregation Our work is related to past work on energy disaggregation, which refers to the problem of separating appliance-level energy from a home’s total (or aggregate) energy signal. Past work in this domain can be broadly classified into two categories: supervised and unsupervised.\nSupervised methods assume that the power signatures of individual appliances are available. They use data from individual appliances to obtain models for each appliance power signature, and then use those models to detect appliance events from the aggregate energy signal. Early work learns sparse codes for different appliances (Kolter et al., 2010) or uses a Factorial HMM (FHMM) (Ghahramani & Jordan, 1996) to model each appliance as an HMM (Zhong et al., 2014; 2015). 
Other work uses matrix factorization approaches to estimate monthly energy breakdowns (Batra et al., 2017; 2018). More recently, neural networks have been used to model appliances (Kelly & Knottenbelt, 2015; Zhang et al., 2018; Jia et al., 2019; Bonfigli et al., 2018), where extracting appliance-level energy is formulated as a de-noising problem. However, supervised solutions typically do not generalize well\n3Project website: http://sapple.csail.mit.edu\nto new homes (Kelly & Knottenbelt, 2015; Bonfigli et al., 2018). This is because two appliances of the same type (e.g. coffee machine) in different homes are often manufactured by different brands, and thus have different power signatures.\nUnsupervised methods do not assume prior knowledge of appliance signatures; they attempt to learn those signatures from the aggregate energy signal. Early approaches use variants of FHMM, and learn appliance HMMs with Expectation-Maximization (Kim et al., 2011), approximate footprint extraction procedures (Kolter & Jaakkola, 2012), or using expert knowledge to configure prior parameters (Johnson & Willsky, 2013; Parson et al., 2014). Some papers propose using contextual information (such as temperature, hour of the day, and day of the week) (Wytock & Kolter, 2014), or use event-based signal processing methods to cluster appliances (Zhao et al., 2016). More recently, Lange & Berges (2018) proposed using a recurrent neural network as the variational distribution in learning the FHMM. In contrast, our work leverages people’s location data as a self-supervising signal. We cluster appliance events through learning the relation between energy events and people’s locations, and also learn appliance locations as a by-product.\nPassive location sensing Motivated by new in-home applications and continuous health monitoring, recent years have witnessed an increasing number of indoor location sensing systems (Adib et al., 2015; Joshi et al., 2015; Li et al., 2016; Ghourchian et al., 2017). They infer people’s locations passively by analyzing how people change the surrounding radio signals (e.g. WiFi) and do not require people to wear any sensors. These sensors have been used for various applications including activity recognition (Wang et al., 2014; 2015), sleep monitoring (Zhao et al., 2017; Hsu et al., 2017a), mobility and behavioral sensing (Hsu et al., 2017b; 2019), and health monitoring (Kaltiokallio et al., 2012). In our work, we leverage the availability of such sensors to introduce location data as an additional data modality for learning appliance usage patterns.\nSelf-supervised multi-modal learning Our work is related to a growing body of work on multimodal learning. Most approaches learn to encode the multi-modal data into a shared space (Gomez et al., 2017; Harwath et al., 2018; Owens & Efros, 2018; Zhao et al., 2018; 2019). In contrast, since our two modalities are mostly unrelated and become related only when an activation event happens, we learn to predict one modality conditioned on the other. Our work is also related to cross-modal prediction (Krishna et al., 2017; Owens et al., 2016; Zhang et al., 2017) but differs from it in an essential way. Past work on cross-modal prediction typically uses the prediction as the target outcome (e.g. output text for video captioning). In contrast, our objective is to discover the hidden appliance activation events. Thus, we design our method to leverage the learned predictability and cross-modal mapping for clustering activation events. 
Furthermore, we introduce a mixture prediction design to disentangle unrelated information in our predicted modality (location measurements unrelated to energy events)." }, { "heading": "3 PROBLEM FORMULATION", "text": "Our goal is to learn appliance activation events in an unsupervised way, using two input streams: home aggregate energy and residents’ location data. Figure 1 shows the two data modalities. We describe each of them formally and define appliance “events” below.\nAggregate energy signal A household’s total energy consumption is measured by a utility meter regularly. This measures the sum of the energy consumed by all appliances at each point in time. We denote the aggregate energy signal by y = (y1, y2, . . . , yT ), where yt ∈ R+. Suppose there are a total of K appliances in a home, and each appliance’s energy signal is denoted by xk = (x1,k, x2,k, . . . , xT,k), where xt,k ∈ R+. Only the aggregate energy signal is observed, yt =∑K\nk=1 xt,k + t, where t ∼ N (0, σ2) is the background noise. Figure 1a shows one day of an aggregate energy signal. The base power level shifts constantly throughout the day, depending on the background load (e.g. ceiling lights). Added on top of the base level are the various appliance events. Figure 1b zooms in around 20:30, and shows examples of those events. The stove was turned on around 20:28, and its power continued to cycle between a few levels. While the stove was on, the microwave was also turned on and ran for a few minutes, and the garbage disposer was turned on shortly.\nIndoor location data We use a single location sensor similar to that in Hsu et al. (2017b) to measure people’s indoor locations passively. The sensor sends out radio signals and analyzes the reflections to localize multiple people. Similarly to a regular WiFi router, the sensor has a limited coverage area of up to 40 feet. Suppose there are Pt people in the coverage area at time t. The location data is denoted by lt = (lt,1, lt,2, . . . , lt,Pt), where lt,p ∈ R2 is the x-y location of person p at time t. We can represent the location data over multiple time frames as l1:T = (l1, l2, . . . , lT ). Figure 1c shows one minute of location data from two people, and Figure 1d shows the data from a top-down view.\nAppliance activation events When an appliance is turned on, it causes a jump in energy consumption, i.e. a leading edge in the energy signal, as shown in Figure 1b. We call such a pattern an appliance activation event. On the other hand, when an appliance changes its internal state, it can also cause a change in the energy signal as shown in the same figure. We call such a pattern a background event. We are interested in discovering activation events to learn appliance usage patterns. Thus, for each jump in the aggregate signal, we take a time window (of 25 seconds) centered around that jump, and analyze it to detect whether it is an activation event and which appliance it corresponds to." }, { "heading": "4 MODEL", "text": "Our model operates on time windows (25 seconds) centered around jumps in aggregate energy signal, and the corresponding time windows of location data. The model aims to detect appliance activation events by finding windows with highly predictable user locations conditioned on the energy signal.\nFigure 2 shows our model. The idea underlying our model is to first learn a representation of appliance event windows that separates the information about appliance type, zt,cat, from the shape of the energy signal, zt,cont. 
This is achieved through the appliance energy encoder E. We can then use the appliance type encoding to predict the location data through the location predictor Le, which is conditioned on zt,cat. Since people’s locations have information unrelated to appliance events, the total location predictor is a mixture of Le and a second module Lg which captures event-independent location information. Below, we describe the design of these modules. More details about the neural network parameters and implementation are discussed in Appendix 8.4.\nAppliance Energy Encoder Given a window of aggregate energy signal yt:t+w1 = (yt, yt+1, . . . , yt+w1),\n4 the encoder E encodes the series into an event vector zt. We break the event vector into two parts: a categorical vector zt,cat and a continuous vector zt,cont. We aim to capture the appliance type with zt,cat (e.g. microwave vs. dishwasher) and use zt,cont to capture the variability within the signature of the same appliance. A softmax layer is applied to zt,cat to ensure that it is a valid distribution over appliance types. E is parametrized using convolution layers, with one fully connected layer to produce zt,cont and another for zt,cat. We denote by θE the parameters of the encoder.\n4We remove the base power level in each window by subtracting the minimum in the window.\n10 5 0 5 10 Time (seconds)\n0\n500\n1000\n1500\n2000\nPo we\nr ( W\nat ts\n)\n𝑥\n𝑦\n𝑡\nFigure 3: Total energy signal (top) and location information (bottom) as seen by the model\nLocation predictors We try to predict the location data conditioned on the appliance event, i.e. we predict a window of locations l = lt:t+w2 = (lt, lt+1, . . . , lt+w2) centered around the appliance event. We handle multiple people’s locations with a mixture model. Specifically, we use Le to predict locations related to energy events and Lg to handle other locations. The final prediction is a mixture of predictions from Le and Lg:\npθL(l|yt:t+w1 , c) = α ∗ pθLe (l|zt,cat) + (1− α) ∗ pθLg (l|c),\nwhere pθLe (·) is parametrized by Le with parameters θLe , pθLg (·) is parametrized by Lg with parameters θLg , θL = { θLe ,θLg } , and c includes context features. We use the number of people in the window (reported by the location sensor), the time of day, and the day of the week as the context features. The weight α depends on the number of people in the current window α = 1/Pt.\nTo represent location data, we blur each location measurement with a Gaussian kernel on the x-y plane to create an image, and process the window of locations lt:t+w2 into frames of images (Figure 3). We reuse the notation l ∈ R|X|×|Y |×|T | to represent frames of location images, where |X|, |Y |, |T | are the number of discretized points on the x-y and time dimensions. By presenting location data as images, we also remove the variable Pt while handling a variable number of people in each frame.\nWe choose pθLe (l|zt,cat) to be a multivariate Gaussian with a diagonal covariance structure: pθLe (l|zt,cat) , N (l;µe,Σe) = ∏ x,y,tN (lx,y,t;µx,y,t,σ2x,y,t) where µe = Le(zt,cat;θLe) ∈ R|X|×|Y |×|T | and we choose σx,y,t to be a constant. We use 3D deconvolution networks to model Le, which takes zt,cat as input and outputs the means of the location distributions. We model pθLg (·) and Lg in a similar way.\nDuring training, given a window of data (l,yt:t+w1 , c), we minimize the negative log likelihood of the mixture distribution in predicting the locations:\nLloc(θE ,θL) = − log pθL(l|yt:t+w1 , c). 
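To make this objective concrete, the following is a minimal PyTorch-style sketch of how the mixture negative log-likelihood could be computed; the tensor layout, the constant σ, and all function names are illustrative assumptions, not the authors' released implementation.

```python
import torch

def mixture_location_nll(l, mu_e, mu_g, num_people, sigma=0.1):
    """Negative log-likelihood of location frames under the mixture
    alpha * p_e(l | z_cat) + (1 - alpha) * p_g(l | c), with alpha = 1 / P_t.

    l, mu_e, mu_g: (B, X, Y, T) tensors -- observed location frames and the
    means predicted by Le(z_cat) and Lg(c); num_people: (B,) sensor counts.
    """
    # Factorized Gaussian with constant scale sigma, summed over all pixels/frames.
    log_p_e = torch.distributions.Normal(mu_e, sigma).log_prob(l).sum(dim=(1, 2, 3))
    log_p_g = torch.distributions.Normal(mu_g, sigma).log_prob(l).sum(dim=(1, 2, 3))
    alpha = 1.0 / num_people.clamp(min=1).float()
    eps = 1e-12  # guards log(0) when a window has a single person (alpha = 1)
    # log(alpha * p_e + (1 - alpha) * p_g), computed stably with logsumexp.
    log_mix = torch.logsumexp(
        torch.stack([log_p_e + torch.log(alpha),
                     log_p_g + torch.log((1.0 - alpha).clamp(min=eps))]), dim=0)
    return -log_mix.mean()
```

Writing the mixture through logsumexp keeps the loss numerically stable even when one component dominates.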
Note that since the gradient flows through zt,cat, the likelihood is a function of both θE and θL.\nHence, the encoder E also learns to encode the energy series based on the concurrent location data.\nEnergy Decoder The decoder D takes both zt,cat and zt,cont and learns to reconstruct the original input energy series by predicting ŷt:t+w1 . The decoder D is parametrized using deconvolution layers. We minimize the reconstruction loss during training:\nLrec(θE ,θD) = ||ŷt:t+w1 − yt:t+w1 ||2 The reconstruction loss encourages the encoder E to produce good initial vectors for Le to predict locations. At the same time, it serves as a regularizer to prevent encoder E from generating meaningless vectors by overfitting location predictions.\nTraining We train all components to jointly optimize the location predictions and energy reconstruction. We minimize the total loss: Ltotal = Lloc + λ ∗ Lrec over all windows of data, where λ is a parameter to balance the two terms. 5 The training details are discussed in Appendix 8.4." }, { "heading": "4.1 CLUSTERING APPLIANCE EVENTS WITH CROSS-MODAL PREDICTIONS", "text": "Once the model is trained, we obtain for each window of energy data its appliance event vector zt,cat and its cross-modal location prediction pθLe (·|zt,cat). Next, we use these two vectors for clustering. We design a density-based clustering algorithm leveraging the cross-modal relation we learned. Our intuition is that activation events for the same appliance will cluster together since they have the same appliance type and the same location. We omit the cat notation below for brevity.\nIt is typically difficult to cluster in a space learned by a neural encoder because the transformation is highly non-linear and the distance metric is not well-defined. We circumvent this problem by associating the encoded space with a Euclidean space, in which we can easily measure distance. Specifically, for two event vectors z1 and z2, we can measure their distance in the location space using pθLe (·|z1) and pθLe (·|z2).\nThe location prediction pθLe (·|zi) represents the likelihood of observing any location lx,y,t around the time of the appliance event. We found that for events related to human activities (e.g., turning on a kettle or microwave), pθLe (·|zi) shows a peak value at the location of the appliance in the x-y space at the time when a person interacted with the appliance. For events not related to human activities (e.g. fridge cycles or random background events), pθLe (·|zi) has low values and is diffused.\nWe define the location predictability score (or the confidence of location prediction) as s(zi) = maxx,y,t pθLe (lx,y,t = 1|zi), and the location distance Dloc between two events as: Dloc(z1, z2) = ||(x∗1 − x∗2, y∗1 − y∗2)||2, where (x∗i , y∗i , t∗i ) = argmaxx,y,t pθLe (lx,y,t = 1|zi).\n6 Similarly, the neighborhood distance Dnb between two events is defined as Dnb(z1, z2) = ||z1 − z2||2. Our clustering algorithm starts with a zi with high predictability score s(zi). It expands the cluster around zi’s local neighborhood in the z space. It stops expanding if a neighbor’s location distance Dloc is too far from the cluster center. If all neighbors of the current cluster are visited and none has a small enough Dloc, we start a new cluster from another event with high predictability score. The algorithm is described formally in Algorithm 1. 
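For concreteness, here is a compact NumPy sketch of this procedure, mirroring the formal listing in Algorithm 1 below. The array-based interface and variable names are illustrative assumptions; the thresholds follow the values reported in Appendix 8.5, and the cluster center is approximated here by the mean of the members' predicted x-y locations.

```python
import numpy as np

def el_scan(z, s, loc, eta_s=0.2, eta_dloc=0.4, eta_z=0.03, n_min=10):
    """Cluster event vectors z (n, d) using predictability scores s (n,)
    and predicted x-y event locations loc (n, 2)."""
    active = {i for i in range(len(z)) if s[i] > eta_s}  # keep predictable events only
    clusters = []
    while active:
        seed = max(active, key=lambda i: s[i])           # start from the most predictable event
        active.remove(seed)
        cluster, frontier = [seed], [seed]
        while frontier:
            i = frontier.pop()
            center = loc[cluster].mean(axis=0)           # current cluster center in x-y space
            neighbors = [j for j in active
                         if np.linalg.norm(z[j] - z[i]) < eta_z           # D_nb: close in z
                         and np.linalg.norm(loc[j] - center) < eta_dloc]  # D_loc: consistent location
            for j in neighbors:                          # expand the cluster
                active.remove(j)
                cluster.append(j)
                frontier.append(j)
        clusters.append(cluster)
    return [c for c in clusters if len(c) >= n_min]      # discard tiny clusters
```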
We discuss the choice of parameters in Appendix 8.5.\nAlgorithm 1 Clustering energy events with the learned cross-modal relations Input: {zi} and s(·): event vectors and their location predictability scores\nηs, ηDloc , ηz : thresholds for predictability score, location distance, neighborhood distance Nmin: the minimum number of samples to form a valid cluster\nOutput: Clusters of appliance activation events that are associated with a consistent location 1: procedure EL-SCAN(ηs, ηDloc , ηz , Nmin) 2: Z ← {zi|s(zi) > ηs}, k ← 0 3: while Z 6= ∅ do 4: zseed = argmaxZ s(zi) 5: clusterk ← {zseed}, k ← k + 1 . Start a new cluster 6: ExpandCluster(k, zseed, ηDloc , ηz) 7: end while 8: return clusters with at least Nmin examples 9: end procedure 10: 11: function EXPANDCLUSTER(k, z, ηDloc , ηz) 12: zuk ← compute current cluster center 13: Znb ← {zi ∈ Z|Dnb(zi,z) < ηz and Dloc(zi,zuk ) < ηDloc} . Find valid neighbors 14: Z ← Z \\ Znb 15: clusterk ← clusterk ∪ Znb 16: Repeat ExpandCluster(.) for all zi in Znb 17: end function\n5We choose λ to be 0.1 in our experiments to put more emphasis on the location prediction. 6In the implementation, we compute the location predictability score as maxx,y,t µx,y,t for simplicity." }, { "heading": "5 DATASET", "text": "We collected concurrent streams of aggregate energy signal and location data from 4 homes over 7 months. 7 We use this dataset for our evaluation. To obtain ground truth labels of appliance events, we deployed programmable smart plugs on the power outlet associated with each appliance. Since not all appliances can be measured by a smart plug (e.g. some appliances do not have accessible power outlets), we also developed a labeling tool for manual labeling. The tool allows labelers to label appliance events from the aggregate energy signal, with the help of smart plug data and information collected from the home residents. The choice of sensors and their sampling rates are detailed in Appendix 8.1." }, { "heading": "6 RESULTS", "text": "We evaluate our model and clustering algorithm on unsupervised appliance activation event detection and their learned appliance locations. We name our approach SAPPLE (Self-supervised APPliance usage LEarning)." }, { "heading": "6.1 UNSUPERVISED APPLIANCE EVENT DETECTION", "text": "For appliance event detection, we compare with four baselines. Our method and two baselines have access to location information. EL-Kmeans takes both energy and location data as input and directly clusters them using k-means (Arthur & Vassilvitskii, 2007).8 E-only-Kmeans clusters only the energy signal with k-means. Methods with location information pre-filter the events and discard events without any location data, as they are unlikely to be activation events. The other two baselines only take the total energy signal as input: AFAMAP (Kolter & Jaakkola, 2012) uses factorial HMM, and VarBOLT (Lange & Berges, 2018) uses a recurrent neural network to model aggregate appliance signals. We use publicly available implementations for these methods (implementation, a;b).\nWe use the same hyper-parameters for the network architecture, training, and clustering algorithm across all homes. As our clustering algorithm is non-parametric, we choose the same number of clusters that it discovers for other methods if possible. For VarBOLT, we report results using 10 clusters, since the training time grows exponentially with the number of clusters and training with more clusters is prohibitively slow. 
As in past unsupervised work, we report the detection F1 scores based on the best cluster assignments with the ground truth appliances.

Table 1 shows that SAPPLE has an average detection F1 score of 72.8%, outperforming the other baselines by margins ranging from 4.0% to 20.9%. As reported by Bonfigli et al. (2018), AFAMAP performs better when appliance-level data is available for training the HMMs. In the unsupervised setting, however, its footprint extraction procedure does not always produce meaningful HMMs for individual appliances (Bonfigli et al., 2018; Beckel et al., 2014), causing degraded performance. VarBOLT’s training objective focuses on explaining the total amount of energy in a home. Thus, it often uses multiple components (clusters) to model appliances that are on for a long period (e.g. fridge, heater, and dryer/washer). These types of appliances generate many background events, making the algorithm focus less on activation events of other appliances.

Comparing our method with baselines that also have location information (E-only-Kmeans and EL-Kmeans), our approach still outperforms them significantly. E-only-Kmeans performs better than AFAMAP and VarBOLT, showing that the presence of location data is highly related to activation events. However, naively using location data for clustering does not improve the results by much, as EL-Kmeans performs only slightly better than E-only-Kmeans. This is because not all location data is related to appliance events and vice versa. Our approach “cleans up” the data by learning the relation between the two modalities and discovers clusters with strong cross-modal predictability.

Table 2 shows a breakdown of our results for different appliances.

[Table 2 (fragment; most rows were lost in extraction): per-appliance detection F1 (%) across Home 1–Home 4; e.g. Kettle: 91.9, –, –, 98.6.]

7The data collection is approved by our institutional review board (IRB). 8For each window of data, EL-Kmeans concatenates the energy signal, the frames of location images (flattened as a 1-d vector), and the context vector to create the feature vectors clustered by k-means." }, { "heading": "6.2 ABLATION STUDY", "text": "We perform an ablation study to show that all components of our method contribute to the results. As shown in Table 3, we compare our clustering algorithm (Method 1) with a different algorithm that concatenates the learned multi-modal embeddings (zt,cat and pθLe (·|zt,cat)) and directly clusters them with k-means (Method 2). Our clustering algorithm is more effective than directly clustering the multi-modal embeddings, providing an improvement of 14.0% in the average F1 score. This is because our clustering algorithm treats the two modalities differently. For location predictions, we can leverage our understanding of physical distance to set cluster boundaries. For the energy embedding, since it is a non-linear mapping with no clear distance metric, our algorithm iteratively groups together events in the embedding neighborhood that have approximately the same locations.

Apart from our clustering algorithm, we evaluate the benefits of our mixture component Lg by experimenting with removing Lg from the model, which reduces the F1 score by 4.3% (Method 1 vs 3). This shows the importance of having Lg extract background motion to allow the location predictor Le to focus on modeling the person who interacts with the appliance.

We also consider removing both Lg and Le, and clustering the input based only on the energy embedding zt,cat, since there are no learned location predictions.
The results shown under Method 4 demonstrate the importance of the location embedding generated by the combination of Lg and Le." }, { "heading": "6.3 LEARNED APPLIANCE LOCATIONS", "text": "Our model also learns the locations where people interacted with appliances, which are typically close to the appliances’ physical locations (we discuss remotely activated appliances in Appendix 8.6). For each appliance event, we take the location predicted by Le with the highest predictability score, and compare that with the ground truth appliance location measured by a laser meter. The average location prediction error is 0.65 meters with a standard deviation of 0.17 meters across homes. The errors are mostly due to location offsets between the person and the appliance. Figure 4a shows the location predictions and their ground truth of several appliance events in Home 1. The corresponding energy signals are shown in Figure 4b - Figure 4e.\nThe location information also helps disambiguate appliances with similar energy signals. For example, although the hair dryer and kettle (Figure 4d and Figure 4e) have very similar energy signatures, their different locations (green and orange in Figure 4a) guide the model to encode their events differently.\n6.4 LOCATION PREDICTIONS OF Le VS Lg\nWe visualize the location predictions from the event-related predictor Le and the event-independent predictor Lg to illustrate how they handle scenarios with multiple people. Figure 5 shows an example of how the mixture design handles the two types of locations. Since Le is conditioned on energy events, it naturally learns to predict locations related to appliance events. In this case, the location of the hair dryer is predicted by Le (Figure 5b). On the other hand, Lg predicts the typical locations people tend to stay (e.g., the couch in Figure 5c) based on the context. Having Lg to explain the other locations helps Le focus on learning the event-related locations." }, { "heading": "6.5 CONTEXTUAL LOCATION INFORMATION AND CLUSTER VISUALIZATIONS", "text": "In Appendix 8.2, we discuss emerging contextual relations between indoor locations through learning cross-modal predictions. We also visualize the learned event vectors to shed light on the design rationales behind our clustering algorithm in Appendix 8.3." }, { "heading": "7 CONCLUSION", "text": "We introduced a self-supervised solution for learning appliance usage patterns in homes. We infer appliance usage by learning from data streams of two modalities: the total energy consumed by the home and the residents’ location data. Our approach improves on unsupervised appliance event detection significantly, and learns appliance locations and usage patterns without any supervision. 9" }, { "heading": "ACKNOWLEDGMENTS", "text": "The authors thank the members of NETMIT at MIT and the reviewers for their feedback and helpful comments. We thank the participants in our study for facilitating the sensor deployments in their homes. We also thank the various companies sponsoring the MIT Center for Wireless Networks and Mobile Computing." }, { "heading": "8 APPENDIX", "text": "" }, { "heading": "8.1 SENSORS DETAILS", "text": "In this section, we describe details of the sensors used in our dataset collection.\nAggregate energy signal For flexible data collection, we install a sensor (emonPi) at the main circuit breaker in each house as a proxy for the utility meter. We programmed the sensor to collect the raw aggregate energy signal at 1.2 kHz. 
We down-sampled the data to 10 Hz for our problem to emulate the achievable data rate from a utility meter hardware (Armel et al., 2013).\nLocation data The wireless location sensor is built on a design similar to Hsu et al. (2017b). It is a single stand-alone sensor that hangs on the wall, and passively collects multiple people’s locations with decimeter-level accuracy. We down-sampled the location streams to 1 Hz.\nAppliance-level data (for ground truth labeling) We use TP-Link smart plugs (TP-link) with energy monitoring features for collecting appliance-level data. We wrote custom software using available APIs to collect appliance energy signals at 1 Hz. For appliances that cannot be connected to a smart plug, we asked the residents to write down appliance usage times to help with manual labeling." }, { "heading": "8.2 CONTEXTUAL LOCATION INFORMATION VIA LEARNED APPLIANCE EVENTS", "text": "By analyzing the location predictions of Le conditioned on different appliance events, we also discover interesting contextual relations between different indoor locations. Figure 6 visualizes the location predictions at different frames around a kettle event. We plot the per-frame location predictability score (or prediction confidence) over time in Figure 6a. The score peaks around t = 0s, the time of the event. This is because when people turn on a kettle, they may approach it from different locations, but the location when they push the button is consistent and can be predicted confidently. As a result, the prediction at t = 0s correctly shows the kettle’s location (Figure 6d).\nInterestingly, a smaller peak of predictability score shows at t = −10s in Figure 6a. If we look at the location prediction from t = −10s to t = 0s (Figure 6b - Figure 6d), we see how the prediction moves from the sink to the kettle 10. This is because people often fill water at the sink before starting the kettle. Through learning the cross-modal relation, contextual information among locations also emerges as different appliance events are discovered." }, { "heading": "8.3 VISUALIZATION OF THE LEARNED EVENT VECTORS AND LOCATION PREDICTABILITY", "text": "To illustrate what the model learns and the design rationales behind our clustering algorithm, we visualize the space of the learned event vectors zt,cat and their location predictability score s(z). Figure 7 shows the t-SNE (Maaten & Hinton, 2008) visualization of the event vectors on a 2- dimensional space. We color coded the events with three metrics: location predictability scores (Figure 7a), cluster ID discovered by our algorithm (Figure 7b), and ground truth label (Figure 7c). The predictability score depends on how strongly an appliance event co-occurred with a particular\n10We normalize each image to better visualize locations with lower prediction confidence.\nlocation. As shown in Figure 7a, most appliances related to human activities have high predictability scores (e.g., kettle, hairdryer, microwave, coffee machine, etc). On the other hand, appliances that cycle in the background (e.g., heater) have very low predictability. The stove has many clusters of background events. This is because when the stove is on, it cycles between a few power levels, and the cycle durations depend on the heating levels. Interestingly, we found that stove clusters with higher power levels (“stove-big-cycle”) also have high predictability scores, while others with cycling states (“stove-cycle”) show low scores. 
This is likely because people are next to the stove more often when the heating level is high.\nWe can also see that without clustering using both location predictions and event vectors, it is hard to separate some of the cluster boundaries. Besides, learning to relate energy events to location data enables us to measure the distances of events in a well-defined physical space." }, { "heading": "8.4 NETWORK IMPLEMENTATION AND TRAINING DETAILS", "text": "In this section, we provide implementation and training details of our neural network model. We use convolution and deconvolution layers for the energy encoder and decoder. Each module has 8 layers with a kernel size of 3 and a stride of 2. We choose the dimensions of zt,cat and zt,cont to be 128 and 3. The location predictors have 5 layers of 3D deconvolution with a kernel size of 3 and a stride of 2 in each dimension. The frames of location images for each time window have 32 × 32 × 32 pixels. We discretize the x, y, and time dimensions into 32 points, where the range of the x-y dimensions are 10 meters and the time dimension has 32 seconds. The neural networks are implemented in Tensorflow (Abadi et al., 2016). For training, we use the Adam (Kingma & Ba, 2014) optimizer with a learning rate of 0.001 and a batch size of 64." }, { "heading": "8.5 CLUSTERING PARAMETERS AND DETAILS", "text": "In all experiments, we set ηDloc = 0.4 meters, ηz = 0.03, ηs = 0.2, and Nmin = 10. These values are chosen based on physical and computational constraints. The value of ηDloc is based on the minimum physical separation between two appliances. The value of ηz only affects the search space in each iteration, and is chosen to be small for computational efficiency. The minimum predictability score ηs is chosen based on a validation set from one of the homes. The value of Nmin is set to 10 to say that we need the appliance to appear in the data at least 10 times before we trust that it is a real appliance." }, { "heading": "8.6 LIMITATIONS", "text": "We discuss the limitations of our approach in this section. One limitation is that some remotely activated appliances may not have predictable locations. However, from our experience collecting the dataset, the vast majority of the appliances used on a daily basis (Table 2) require human interaction. For example, a person has to put food into a microwave before turning it on, to hold a hair dryer while drying hair, and to push a button to get a coffee machine running. Even for an appliance with a remote controller, as long as the person has a regular place to interact with the appliance from (e.g., always turning the TV on while sitting on the couch), our model can still learn to predict the location of interaction. Another limitation is that our location sensor has a limited coverage area (around 40 feet in radius). This is enough to cover a typical one-bedroom apartment. For a larger house, one could deploy a second sensor, similarly to how a WiFi repeater extends the coverage area." } ]
2020
null
SP:99d5b859a30f1825f5a21fb62fdf7a918b838b95
[ "The paper proposes an optimization method for solving unconstrained convex optimization problems where the objective function consists of a sum of several smooth components f_i and a (not necessarily smooth) convex function R. The proposed method AVR-SExtraGD is a stochastic descent method building on the previous algorithms Prox-SVRG (Lin 2014) and Katyusha (Zeyuan 2017). The previous Prox-SVRG method using a proximal operator is explained to converge fast but leads to inaccurate final solutions, while the Katyusha method is an algorithm based on momentum acceleration. The current paper builds on these two approaches and applies the momentum acceleration technique in a stochastic extragradient descent framework to achieve fast convergence.", "This is an optimization algorithm paper, using the idea of \"extragradient\" and proposing to combine acceleration with proximal gradient descent-type algorithms (Prox-SVRG). Their proposed algorithm, i.e., accelerated variance reduced stochastic extra gradient descent, combines the advantages of Prox-SVRG and momentum acceleration techniques. The authors prove the convergence rate and oracle complexity of their algorithm for strongly convex and non-strongly convex problems. Their experiments on face recognition show improvement on top of Prox-SVRG as well Katyusha. They also propose an asynchronous variant of their algorithm and show that it outperforms other asynchronous baselines." ]
Recently, many stochastic gradient descent algorithms with variance reduction have been proposed. Moreover, their proximal variants such as Prox-SVRG can effectively solve non-smooth problems, which has made them widely applicable to many machine learning problems. However, the introduction of the proximal operator introduces an error into the obtained optimal value. In order to address this issue, we introduce the idea of extragradient and propose a novel accelerated variance reduced stochastic extragradient descent (AVR-SExtraGD) algorithm, which inherits the advantages of Prox-SVRG and momentum acceleration techniques. Moreover, our theoretical analysis shows that AVR-SExtraGD enjoys the best-known convergence rates and oracle complexities of stochastic first-order algorithms such as Katyusha for both strongly convex and non-strongly convex problems. Finally, our experimental results show that for ERM problems and robust face recognition via sparse representation, our AVR-SExtraGD yields better performance than state-of-the-art algorithms such as Prox-SVRG and Katyusha. The asynchronous variant of AVR-SExtraGD also outperforms KroMagnon and ASAGA, which are the asynchronous variants of SVRG and SAGA, respectively.
[]
[ { "authors": [ "A. Agarwal", "J.C. Duchi" ], "title": "Distributed delayed stochastic optimization", "venue": "In Decision and Control,", "year": 2011 }, { "authors": [ "Amir Beck", "Marc Teboulle" ], "title": "A fast iterative shrinkage-thresholding algorithm for linear inverse problems", "venue": "Siam J Imaging Sciences,", "year": 2009 }, { "authors": [ "Doron Blatt", "Alfred O. Hero", "Hillel Gauchman" ], "title": "A convergent incremental gradient method with a constant step size", "venue": "Siam Journal on Optimization,", "year": 2007 }, { "authors": [ "Aaron Defazio", "Francis Bach", "Simon Lacostejulien" ], "title": "Saga: A fast incremental gradient method with support for non-strongly convex composite objectives", "venue": "In International Conference on Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Ofer Dekel", "Gilad Bachrach Ran", "Ohad Shamir", "Xiao Lin" ], "title": "Optimal distributed online prediction using mini-batches", "venue": "Journal of Machine Learning Research,", "year": 2012 }, { "authors": [ "Daniel Gabay", "Bertrand Mercier" ], "title": "A dual algorithm for the solution of nonlinear variational problems via finite element approximation", "venue": "Computers and Mathematics with Applications,", "year": 1976 }, { "authors": [ "Wright John", "Allen Y Yang", "Ganesh Arvind", "Sastry S Shankar", "Ma Yi" ], "title": "Robust face recognition via sparse representation", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2009 }, { "authors": [ "R. Johnson", "Zhang Tong" ], "title": "Accelerating stochastic gradient descent using predictive variance reduction", "venue": "In International Conference on Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Zhou Kaiwen", "Shang Fanhua", "Cheng James" ], "title": "A simple stochastic variance reduced algorithm with fast convergence rates", "venue": null, "year": 2018 }, { "authors": [ "Jakub Konečný", "Peter Richtárik" ], "title": "Semi-stochastic gradient descent methods", "venue": "Mathematics, 3:9–,", "year": 2013 }, { "authors": [ "Jakub Konečný", "Liu Jie", "Peter Richtárik", "Martin" ], "title": "Taká. ms2gd: Mini-batch semi-stochastic gradient descent in the proximal setting", "venue": "IEEE Journal of Selected Topics in Signal Processing,", "year": 2014 }, { "authors": [ "M G" ], "title": "Korpelevič. An extragradient method for finding saddle points and for other problems", "venue": "Matecon, 12:747–75,", "year": 1976 }, { "authors": [ "John Langford", "Li Lihong", "Zhang Tong" ], "title": "Sparse online learning via truncated gradient", "venue": "Journal of Machine Learning Research,", "year": 2009 }, { "authors": [ "Rémi Leblond", "Fabian Pedregosa", "Simon Lacoste-Julien" ], "title": "Asaga: Asynchronous parallel saga, 2016", "venue": null, "year": 2016 }, { "authors": [ "Xiao Lin", "Zhang Tong" ], "title": "A proximal stochastic gradient method with progressive variance reduction", "venue": "Siam Journal on Optimization,", "year": 2014 }, { "authors": [ "P.L. Lions", "B. 
Mercier" ], "title": "Splitting algorithms for the sum of two nonlinear operators", "venue": "Siam Journal on Numerical Analysis,", "year": 1979 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Sgdr: Stochastic gradient descent with warm restarts", "venue": "In ICLR 2017 (5th International Conference on Learning Representations),", "year": 2016 }, { "authors": [ "Horia Mania", "Xinghao Pan", "Dimitris Papailiopoulos", "Benjamin Recht", "Kannan Ramchandran", "Michael I. Jordan" ], "title": "Perturbed iterate analysis for asynchronous stochastic optimization", "venue": null, "year": 2015 }, { "authors": [ "Trong Phong Nguyen", "Edouard Pauwels", "Emile Richard", "Bruce W. Suter" ], "title": "Extragradient method in optimization: Convergence and complexity", "venue": "Journal of Optimization Theory and Applications,", "year": 2017 }, { "authors": [ "A. Nitanda" ], "title": "Stochastic proximal gradient descent with acceleration techniques", "venue": "In International Conference on Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "R. Tyrrell Rockafellar" ], "title": "Convex Analysis", "venue": null, "year": 1970 }, { "authors": [ "Nicolas Le Roux", "Mark Schmidt", "Francis Bach" ], "title": "A stochastic gradient method with an exponential convergence rate for finite training sets", "venue": "In International Conference on Neural Information Processing Systems,", "year": 2012 }, { "authors": [ "Zhang Ruiliang", "Zheng Shuai", "Kwok James T" ], "title": "Asynchronous distributed semi-stochastic gradient optimization", "venue": "In Thirtieth AAAI Conference on Artificial Intelligence,", "year": 2016 }, { "authors": [ "Ernest K. Ryu", "Yin Wotao" ], "title": "Proximal-proximal-gradient method", "venue": "UCLA CAM Report,", "year": 2017 }, { "authors": [ "Mark Schmidt", "Nicholas Le Roux", "Francis Bach" ], "title": "Minimizing finite sums with the stochastic average gradient", "venue": "Mathematical Programming,", "year": 2017 }, { "authors": [ "Othmane Sebbouh", "Nidham Gazagnadou", "Samy Jelassi", "Francis Bach", "Robert M. Gower" ], "title": "Towards closing the gap between the theory and practice of svrg, 2019", "venue": null, "year": 2019 }, { "authors": [ "Shai Shalev-Shwartz", "Zhang Tong" ], "title": "Proximal stochastic dual coordinate ascent", "venue": null, "year": 2012 }, { "authors": [ "Shai Shalev-Shwartz", "Zhang Tong" ], "title": "Accelerated mini-batch stochastic dual coordinate ascent", "venue": "Advances in Neural Information Processing Systems, pp", "year": 2013 }, { "authors": [ "Shai Shalev-shwartz", "Zhang Tong" ], "title": "Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization", "venue": "In International Conference on International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Zhang Tong" ], "title": "Solving large scale linear prediction problems using stochastic gradient descent algorithms", "venue": "In International Conference on Machine Learning Omnipress,", "year": 2004 }, { "authors": [ "P. Tseng" ], "title": "On accelerated proximal gradient methods for convex-concave optimization", "venue": "Siam Journal on Optimization,", "year": 2008 }, { "authors": [ "Allen-Zhu Zeyuan" ], "title": "Katyusha: the first direct acceleration of stochastic gradient methods", "venue": "STOC, pp", "year": 2017 }, { "authors": [ "Allen-Zhu Zeyuan", "Yang Yuan" ], "title": "Improved svrg for non-strongly-convex or sum-of-non-convex objectives", "venue": "ICML, pp", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "In this paper, we mainly consider the following composite convex optimization problem:\nmin x∈Rd\n{ P (x) def = F (x) +R(x) = 1\nn n∑ i=1 fi(x) +R(x)\n} (1)\nwhere F (x) : Rd→R is the average of smooth convex component functions fi(x), and R(x) is a relatively simple convex function (but may not be differentiable). In this paper, we use ‖·‖ to denote the standard Euclidean norm, and ‖·‖1 to denote the `1-norm. Moreover, we use P∗ to denote the real optimal value of P (·), and P̂∗ to denote the optimal value obtained by algorithms. This form of optimization problems often appears in machine learning, signal processing, data science, statistics and operations research, and has a wide range of applications such as regularized empirical risk minimization (ERM), sparse coding for image and video recovery, and representation learning for object recognition. Specifically, for a collection of given training examples {(a1, b1), ..., (an, bn)}, where ai ∈ Rd, bi ∈ R (i = 1, 2, ..., n) and ai is a feature vector, while bi is the desired response. When fi(x) = 12 (a T i x−bi)2, we can obtain the ridge regression problem by setting R(x) = λ2 ‖x‖ 2. We also get the Lasso or Elastic-Net problems by setting R(x) =λ‖x‖1 or R(x) = λ22 ‖x‖ 2+λ1‖x‖1, respectively. Moreover, if we set fi(x) = log(1+exp(−bixTai)), we will get the regularized logistic regression problem." }, { "heading": "1.1 RECENT RESEARCH PROGRESS", "text": "The proximal gradient descent (PGD) method is a standard and effective method for Problem (1), and can achieve linear convergence for strongly convex problems. Its accelerated algorithms, e.g., accelerated proximal gradient (APG) (Tseng (2008); Beck & Teboulle (2009)), attain the convergence rate ofO(1/T 2) for non-strongly convex problems, where T denotes the number of iterations.\nIn recent years, stochastic gradient descent (SGD) has been successfully applied to many large-scale learning problems, such as training for deep networks and linear prediction (Tong (2004)), because of its significantly lower per-iteration complexity than deterministic methods, i.e., O(d) vs. O(nd). Besides, many tricks for SGD have also been proposed, such as Loshchilov & Hutter (2016). However, the variance of the stochastic gradient may be large due to random sampling (Johnson & Tong (2013)), which leads that the algorithm requires a gradually reduced step size, thus it will converge slow. Even under the strongly convex condition, SGD only achieves a sub-linear convergence rate O(1/T ). Recently, many SGD methods with variance reduction have been proposed. For the case of R(x) = 0, Roux et al. (2012) developed a stochastic average gradient descent (SAG) method, which is a randomized variant of the incremental aggregated gradient method proposed by Blatt et al. (2007). Then stochastic variance reduced gradient (SVRG) (Johnson & Tong (2013)) was proposed, and has been widely introduced into various subsequent optimization algorithms, due to its lower storage space (i.e., O(d)) than that of SAG (i.e., O(nd)). SVRG reduced the variance effectively by changing the estimation of stochastic gradients. The introduction of a snapshot point x̃ mainly has the effect of correcting the direction of gradient descent, and reduces the variance. Later, Konečný & Richtárik (2013) proposed the semi-stochastic gradient descent methods as well as their mini-batch version (Konečný et al. (2014)). And their asynchronous distributed variant (Ruiliang et al. 
(2016)) is also been proposed later. More recently, Lin & Tong (2014) proposed the Prox-SVRG method, which introduced the proximal operator, and then applied the idea of SVRG to solve the non-smooth optimization problems. However, Prox-SVRG can only be used to solve the strongly convex optimization problems. In order to solve the non-strongly convex problems, Zeyuan & Yuan (2016) proposed the SVRG++ algorithm. Besides, to accelerate the algorithm and reducing the complexity, by combining the main ideas of APG and Prox-SVRG, Nitanda (2014) proposed an accelerated variance reduction proximal stochastic gradient descent (Acc-Prox-SVRG) method, which can effectively reduce the complexity of the algorithm compared to the two basic algorithms. Very recently, Zeyuan (2017) developed a novel Katyusha algorithm which introduced the Katyusha momentum to accelerate the algorithm. With the development of parallel and distributed computing which can effectively reduce computing time and improve performance, Ryu & Wotao (2017) came up with an algorithm called Proximal Proximal Gradient, which combined the proximal gradient method and ADMM (Gabay & Mercier (1976)). Furthermore, it is easy to implement in parallel and distributed environments because of its innovative algorithm structure." }, { "heading": "1.2 OUR MAIN CONTRIBUTIONS", "text": "We find that due to the introduction of proximal operator, there is a gap between P̂∗ and P∗, and its theoretical derivation can be seen in Appendix A. To address this issue, Nguyen et al. (2017) proposed the idea of extragradient which can be seen as a guide during the process, and introduced it into the optimization problems. Intuitively, this additional iteration allows us to examine the geometry of the problem and consider its curvature information, which is one of the most important bottlenecks for first order methods. By using the idea of extragradient, we can get a better result in each inner-iteration. Therefore, the idea of extragradient is our main motivation. In this paper, we propose a novel algorithm for solving non-smooth optimization problems. The main contributions of this paper are summarized as follows.\n• In order to improve the result of the gap between P̂∗ and P∗, and achieve fast convergence, a novel algorithm, which combines the idea of extragradient, Prox-SVRG and the trick of momentum acceleration, is proposed, called accelerated variance reduced stochastic extragradient descent (AVR-SExtraGD).\n•We provide the convergence analysis of our algorithm, which shows that AVR-SExtraGD achieves linear convergence for strongly convex problems, and the convergence condition in the non-strongly convex case is also given. According to the convergence rate, we can know that AVR-SExtraGD has the same excellent result as the best-known algorithms, such as Katyusha.\n• Finally, we show by experiments that the performance of AVR-SExtraGD (as well as VRSExtraGD, which is the basic algorithm of AVR-SExtraGD) is obviously better than the popular algorithm, Prox-SVRG, which confirms the advantage of extragradient. For the widely used accelerated algorithm, Katyusha, the performance of our algorithm is still improved." }, { "heading": "2 RELATED WORK", "text": "" }, { "heading": "2.1 BASIC ASSUMPTIONS", "text": "We first make the following assumptions to solve the problem (1): Assumption 1 (Smoothness). The convex function F (·) is L-smooth, i.e., there exists a constant L>0 such that for any x, y∈Rd, ‖∇F (x)−∇F (y)‖≤L‖x−y‖. Assumption 2 (Lower Semi-continuity). 
The regularization function R(·) is a lower semi-continuous function, i.e., for all x_0 ∈ R^d,

lim inf_{x→x_0} R(x) ≥ R(x_0),

but it is not necessarily differentiable or continuous.

Assumption 3 (Strong Convexity). In Problem (1), the function R(·) is µ-strongly convex, i.e., there exists a constant µ > 0 such that for all x, y ∈ R^d, it holds that

R(x) ≥ R(y) + ⟨G, x − y⟩ + (µ/2)‖x − y‖², (2)

where G ∈ ∂R(y), the set of sub-gradients of R(·) at y." }, { "heading": "2.2 PROX-SVRG AND EXTRAGRADIENT DESCENT METHODS", "text": "An effective method for solving Problem (1) is Prox-SVRG, which improved Prox-FG (Lions & Mercier (1979)) and Prox-SG (Langford et al. (2009)) by introducing the stochastic gradient and combining the idea of SVRG, respectively. For strongly convex problems, Prox-SVRG attains linear convergence with a constant step size, and its main update rules are

∇̃f_{i_k}(x_{k−1}) = ∇f_{i_k}(x_{k−1}) − ∇f_{i_k}(x̃) + ∇F(x̃); x_k = Prox^R_η(x_{k−1} − η∇̃f_{i_k}(x_{k−1})), (3)

where x̃ is the snapshot point used in SVRG, ∇̃f_{i_k}(x_{k−1}) is the variance-reduced stochastic gradient estimator, and Prox^R_η(·) is the proximal operator. Although Prox-SVRG converges fast, the proximal operator introduces a deviation into the final solution, which makes the solution inaccurate; thus Prox-SVRG still needs to be further improved, which is an important motivation for this work.

The extragradient method was first proposed by Korpelevič (1976). It is a classical method for solving variational inequality problems, and it generates an estimation sequence by using two projected gradient steps in each iteration. By combining this idea with first-order descent methods, Nguyen et al. (2017) proposed an extended extragradient method (EEG), which can effectively solve Problem (1) as well as the more general problem

min_{x∈R^d} { P(x) := F(x) + R(x) },

where F(x) is not necessarily composed of multiple functions f_i(x). Unlike the classical extragradient method, EEG uses a proximal gradient step instead of an orthogonal projection in each iteration. The main update rules of EEG are

y_k = Prox^R_{s_k}(x_k − s_k∇F(x_k)); x_{k+1} = Prox^R_{α_k}(x_k − α_k∇F(y_k)),

where s_k and α_k are two step sizes. From these update rules, we can see that EEG needs to compute two gradients in each iteration, which slows the algorithm down. Therefore, the algorithm needs to be further accelerated by an efficient technique." }, { "heading": "2.3 MOMENTUM ACCELERATION AND MIG", "text": "Firstly, we introduce the momentum acceleration technique, whose main update rules are

v_{dw,t} = βv_{dw,t−1} + (1 − β)dw_t; w_t = w_{t−1} − αv_{dw,t},

where dw_t is the gradient of the objective function at w_t, β is a momentum parameter, and α is a step size. These update rules take into account not only the gradient at the current position but also the gradients at past positions, which makes the final descent direction of w_t less oscillatory; thus, this method can effectively accelerate the convergence of the algorithm.

Following Nesterov's momentum, many accelerated algorithms were proposed, such as APG and Acc-Prox-SVRG. Later, Zeyuan (2017) proposed Katyusha to further accelerate the algorithm, and MiG (Kaiwen et al. (2018)) was proposed to simplify the structure of Katyusha; the momentum acceleration of MiG is embodied in each iteration as follows:

y^s_{k−1} = β_s x^s_{k−1} + (1 − β_s)x̃^{s−1}.

Moreover, it is easy to see that the oracle complexity of MiG is lower than that of Prox-SVRG and APG, which means that MiG can effectively accelerate the original Prox-SVRG algorithm. Therefore, we can also use this acceleration technique to speed up our algorithm and address the slow convergence caused by calculating two different gradients." }, { "heading": "3 OUR AVR-SEXTRAGD METHOD", "text": "We note that EEG requires computing two full gradients in each iteration, which takes a lot of time for large-scale machine learning problems. Therefore, we first consider and propose the stochastic variant of the algorithm, namely stochastic extragradient descent (SExtraGD), to reduce the per-iteration computational complexity, and we further propose an efficient variance reduced stochastic extragradient descent (VR-SExtraGD) algorithm. Their main update rules and the detailed algorithm of VR-SExtraGD can be found in Appendix C.

On the basis of VR-SExtraGD, we adopt the momentum acceleration technique proposed in MiG and propose an innovative accelerated variance reduced stochastic extragradient descent algorithm, called AVR-SExtraGD, for solving non-smooth (both SC and non-SC) optimization problems. To further accelerate the algorithm and mitigate the slow convergence caused by computing two gradients in each inner iteration, only part of the iterations are updated by extragradient descent. Our AVR-SExtraGD algorithm is outlined in Algorithm 1.

Algorithm 1 AVR-SExtraGD
Input: Initial vector x_0, the number of epochs S, the number of iterations m per epoch, the step sizes η_1, η_2, momentum parameter β, and the set K.
Initialize: x̃^0 = x^1_0 = x_0, ρ = 1 + ηµ.
1: for s = 1, 2, ..., S do
2:   Compute ∇F(x̃^{s−1});
3:   β_s = β (SC) or β_s = 2/(s+4) (non-SC);
4:   for k = 1, 2, ..., m do
5:     Pick i_k uniformly at random from {1, ..., n};
6:     if k ∈ K then
7:       x^s_{k−1/2} = Prox^R_{η_1}(x^s_{k−1} − η_1 ∇̃f_{i_k}(β_s x^s_{k−1} + (1−β_s)x̃^{s−1}));
8:       x^s_k = Prox^R_{η_2}(x^s_{k−1/2} − η_2 ∇̃f_{i_k}(β_s x^s_{k−1/2} + (1−β_s)x̃^{s−1}));
9:     else
10:      x^s_k = Prox^R_{η_1}(x^s_{k−1} − η_1 ∇̃f_{i_k}(β_s x^s_{k−1} + (1−β_s)x̃^{s−1}));
11:    end if
12:  end for
13:  x̃^s = β_s (Σ_{k=1}^m ρ^{k−1})^{−1} Σ_{k=1}^m ρ^{k−1} (x^s_{k−1/2} + x^s_k)/2 + (1−β_s)x̃^{s−1} (SC)
     or x̃^s = (β_s/m) Σ_{k=1}^m (x^s_{k−1/2} + x^s_k)/2 + (1−β_s)x̃^{s−1} (non-SC);
14:  x^{s+1}_0 = x^s_m;
15: end for
Output: x̃^S.
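To make Algorithm 1 concrete, the following is a minimal NumPy sketch of one inner epoch in the strongly convex setting, instantiated for Lasso, i.e., f_i(x) = ½(a_iᵀx − b_i)² and R(x) = λ‖x‖_1, so that the proximal operator reduces to soft-thresholding. The quadratic loss, all function names, and the handling of x_{k−1/2} for k ∉ K (line 13 only defines it for extragradient steps) are our own illustrative assumptions, not code from the paper.

```python
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t*||.||_1, i.e. soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def avr_sextragd_epoch(x0, x_snap, A, b, lam, eta1, eta2, beta, K, mu):
    """One inner epoch of Algorithm 1 (SC variant) for Lasso:
    f_i(x) = 0.5*(a_i^T x - b_i)^2,  R(x) = lam*||x||_1."""
    n = A.shape[0]
    m = n                                    # the paper uses m = n for AVR-SExtraGD
    rho = 1.0 + eta1 * mu
    g_snap = A.T @ (A @ x_snap - b) / n      # full gradient at the snapshot (line 2)

    def vr_grad(z, i):
        # variance-reduced estimator of Eq. (3), evaluated at z
        return (A[i] @ z - b[i]) * A[i] - (A[i] @ x_snap - b[i]) * A[i] + g_snap

    x = x0.copy()
    num, den = np.zeros_like(x), 0.0
    for k in range(1, m + 1):
        i = np.random.randint(n)
        y = beta * x + (1.0 - beta) * x_snap              # MiG-style query point
        if k in K:                                        # extragradient update (lines 7-8)
            x_half = prox_l1(x - eta1 * vr_grad(y, i), eta1 * lam)
            y_half = beta * x_half + (1.0 - beta) * x_snap
            x = prox_l1(x_half - eta2 * vr_grad(y_half, i), eta2 * lam)
        else:                                             # plain MiG update (line 10)
            x = prox_l1(x - eta1 * vr_grad(y, i), eta1 * lam)
            x_half = x    # k not in K: reuse x_k in the average (our assumption)
        num += rho ** (k - 1) * 0.5 * (x_half + x)        # line 13, SC weighting
        den += rho ** (k - 1)
    x_snap_new = beta * num / den + (1.0 - beta) * x_snap
    return x, x_snap_new                                  # x_m and the new snapshot
```

An outer driver would call this routine S times, feeding the returned iterate and snapshot back in (lines 1 and 14), and return the final snapshot as in the Output line.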
Firstly, we give some explanation for Algorithm 1. For our AVR-SExtraGD, we need to compute a full gradient of F(x) at the snapshot point in each outer iteration. Step 3 in Algorithm 1 is the selection of the momentum parameter. Steps 6 to 8 are the extragradient update rules of AVR-SExtraGD, and Step 10 is the update rule of MiG; the extragradient update is used only for iterations in the set K. Step 13 is the formulation of the snapshot point. Finally, Step 14 sets the starting point of the next inner iteration. Moreover, our output is the snapshot point of the last outer iteration.

For our AVR-SExtraGD algorithm, we have the following remarks.

• Following the requirement on step sizes in EEG, the step sizes in our algorithm also need to satisfy similar conditions. After combining all the conditions, we obtain: η_1 ≤ 1/(2L) and η_2 ≤ 1/L − η_1.

• In AVR-SExtraGD, we use one more trick to speed up the algorithm: only part of the iterations (i.e., when k ∈ K) are updated by extragradient descent, and the rest of the iterations are still updated by the update rules of MiG. For different problems and different data sets, we manually adjust the choice of K; the details can be seen in Section 5.3.
• For the momentum parameter β: when P(·) is strongly convex, we can set β as a constant, generally 0.9. In Acc-Prox-SVRG, β is set as (1 − √(µη_2))/(1 + √(µη_2)), while we simply set β = 0.9 in AVR-SExtraGD. However, when P(·) is non-strongly convex, the value of β is no longer fixed across iterations. We set β_s as a decreasing sequence satisfying 1/β²_{s−1} ≥ (1 − β_s)/β²_s. In particular, in AVR-SExtraGD we set β_s = 2/(s+4), which satisfies the inequality above.

As we all know, for the general GD method, the iterate x_k eventually converges to the true optimal point of the function, so there is no error in the final optimal value. By contrast, the proximal operator introduces a deviation into the converged value.

To address this issue, we introduce the idea of extragradient, which takes one more proximal step than Prox-SVRG. According to the idea of extragradient, the update structure of EEG can make use of the curvature information of the objective function. Although we change the original EEG into a stochastic version, the advantage of the extragradient structure is retained to some degree, and thus our algorithm can obtain a better result than an algorithm without extragradient. That is, although the extragradient method cannot directly remove the gap in the optimal value, it can mitigate the bad result brought by the gap and obtain a better iterate after every inner loop; thus, AVR-SExtraGD can improve the accuracy of the algorithm.

In summary, our AVR-SExtraGD method combines the advantage of Prox-SG for solving non-smooth optimization problems, the advantage of EEG, and the trick of SVRG for reducing the variance of the stochastic gradient, and it is further accelerated by the momentum acceleration used in MiG. Therefore, our algorithm has more advantages than the basic algorithms mentioned above." }, { "heading": "4 CONVERGENCE ANALYSIS", "text": "In this section, we analyze the convergence properties of AVR-SExtraGD under strongly convex and non-strongly convex conditions. For convenience of analysis, we use ∇̃_{i_k}F(·) to denote the estimator ∇̃f_{i_k}(·) defined in (3). The key lemmas needed to prove the convergence of AVR-SExtraGD are given in Appendix B, together with all the proofs of the lemmas and theorems in this section." }, { "heading": "4.1 FOR SC PROBLEMS", "text": "For strongly convex problems, the linear convergence of AVR-SExtraGD is guaranteed by the following theorem.

Theorem 1 (Strongly Convex). Suppose that Assumptions 1, 2 and 3 hold, and let x* = arg min_x P(x). In addition, assume η_1 = η_2 = η > 0 and Lβ + Lβ/(1−β) ≤ 1/η. Then, by appropriately choosing η, β and m = Θ(n), Algorithm 1 achieves an ε-additive error with the following oracle complexities in expectation:

O(√(κn) log((P(x_0) − P(x*))/ε)), if m/κ ≤ 3/4;
O(n log((P(x_0) − P(x*))/ε)), if m/κ > 3/4.

This also means that for SC problems, the oracle complexity of Algorithm 1 is O((n + √(κn)) log((P(x_0) − P(x*))/ε)).

This result means that for strongly convex problems, AVR-SExtraGD achieves linear convergence and enjoys the best-known oracle complexity of stochastic first-order algorithms, such as Katyusha.
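To illustrate the two regimes of Theorem 1, here is a small worked instance; the sample size n = 10^5 and the two condition numbers κ = L/µ are purely illustrative assumptions, not values from the paper.

```latex
% Worked instance of Theorem 1's two regimes, with m = n = 10^5 (illustrative).
\[
\kappa = 10^{8}:\quad \frac{m}{\kappa} = 10^{-3} \le \tfrac{3}{4}
\;\Rightarrow\; O\!\Big(\sqrt{\kappa n}\,\log\tfrac{P(x_0)-P(x^*)}{\epsilon}\Big)
  = O\!\Big(10^{6.5}\,\log\tfrac{P(x_0)-P(x^*)}{\epsilon}\Big),
\]
\[
\kappa = 10^{3}:\quad \frac{m}{\kappa} = 10^{2} > \tfrac{3}{4}
\;\Rightarrow\; O\!\Big(n\,\log\tfrac{P(x_0)-P(x^*)}{\epsilon}\Big)
  = O\!\Big(10^{5}\,\log\tfrac{P(x_0)-P(x^*)}{\epsilon}\Big).
\]
```

In the ill-conditioned case the √(κn) term dominates n, while in the well-conditioned case the n log(·) term dominates, matching the combined bound O((n + √(κn)) log((P(x_0) − P(x*))/ε)).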
" }, { "heading": "4.2 FOR NON-SC PROBLEMS", "text": "The convergence of AVR-SExtraGD for solving non-SC problems is guaranteed by the following theorem.

Theorem 2 (Non-Strongly Convex). Suppose that Assumptions 1 and 2 hold, and let x* = arg min_x P(x). In addition, assume η_1 = η_2 = η = 1/(Lα) > 0 and 1 − β_s − 1/(α−1) ≥ 0, where α is a constant. Then, by setting β_s = 2/(s+4), we have

E[P(x̃^S) − P(x*)] ≤ (4(1−β_1)/((S+4)²β_1²))(P(x_0) − P(x*)) + (Lα/((S+4)²m))‖x_0 − x*‖²,

which also means that when we choose m = Θ(n), Algorithm 1 achieves the following oracle complexity in expectation: O(n√((P(x_0) − P(x*))/ε) + √(nL‖x_0 − x*‖²/ε)).

This result shows that AVR-SExtraGD enjoys the same oracle complexity as Katyusha and MiG, which is close to the best-known complexity in this case (i.e., O(n log(1/ε) + √(nL/ε))). In addition, we also analyze the convergence of VR-SExtraGD; the related lemmas and theorems that guarantee its convergence are given and proved in Appendix D." }, { "heading": "5 EXPERIMENTS", "text": "In this section, we evaluate the performance of AVR-SExtraGD and compare it with its counterparts, including Prox-SVRG and Katyusha, on real-world data sets whose information is shown in Table 2 in Appendix E.

For these real-world data sets, we consider two common problem models, Lasso and Elastic-Net, and we also apply our algorithm to face recognition tasks and compare it with the same competitors. Next, we give the setup of the related parameters as follows:

• Regularization Parameters: the regularization parameters for the real-world data sets are shown in Table 2.

• The Number of Inner Iterations: the number of inner iterations of Katyusha and Prox-SVRG is usually set as m = 2n. Our algorithm adds one more gradient calculation per inner iteration compared with Prox-SVRG, so for an equal per-epoch complexity we set m = n in AVR-SExtraGD; in each epoch, all three algorithms then compute 3n stochastic gradients. The reasonableness of such a setting can be found in Sebbouh et al. (2019).

• Step Sizes: we set η_1 = 2/(5L) and η_2 = 3/(5L). We note that these step sizes do not satisfy the conditions requested in the remark of Section 3. Nevertheless, the experimental results show that our algorithm still converges well, which means that in practice we can choose larger step sizes to improve the convergence speed.

For fair comparison, we implemented all the methods in C++ with a Matlab interface, and performed all the experiments on a PC with an Intel i7-7700 CPU and 32GB RAM.
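For reference, here is a minimal sketch of the three objectives evaluated below (their formal definitions are given in Appendix E.2); the dense-matrix data layout and function names are our own assumptions.

```python
import numpy as np

def lasso(x, A, b, lam1):
    # (1/2n) * ||Ax - b||^2 + lam1 * ||x||_1
    n = A.shape[0]
    return 0.5 / n * np.sum((A @ x - b) ** 2) + lam1 * np.abs(x).sum()

def elastic_net(x, A, b, lam1, lam2):
    # Lasso plus an extra (lam2/2) * ||x||^2 ridge term
    return lasso(x, A, b, lam1) + 0.5 * lam2 * np.dot(x, x)

def l1_logistic(x, A, b, lam):
    # (1/n) * sum_i log(1 + exp(-b_i * a_i^T x)) + lam * ||x||_1, with b_i in {-1, +1};
    # logaddexp(0, z) = log(1 + exp(z)) computed stably
    return np.mean(np.logaddexp(0.0, -b * (A @ x))) + lam * np.abs(x).sum()
```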
" }, { "heading": "5.1 RESULTS OF LASSO, ELASTIC-NET AND LOGISTIC REGRESSION", "text": "In this part, we consider three common problems: Lasso, Elastic-Net and the ℓ1-norm regularized logistic regression, whose models can be found in Appendix E.

Figure 1 shows the performance of all the algorithms for Lasso and Elastic-Net on all the data sets. In terms of running time, our AVR-SExtraGD clearly outperforms Prox-SVRG and Katyusha, which shows the faster convergence of AVR-SExtraGD and justifies that extragradient and momentum acceleration can improve Prox-SVRG efficiently. Moreover, for the two problems, we propose an asynchronous sparse variant of AVR-SExtraGD by bringing our algorithm into a sparse asynchronous framework, and compare its performance with KroMagnon (Mania et al. (2015)) and ASAGA (Leblond et al. (2016)) on rcv1 and real-sim (see Table 2). The results are shown in Figure 2, which verify that the asynchronous variant of AVR-SExtraGD significantly outperforms the asynchronous variants of SVRG (i.e., KroMagnon) and SAGA (Defazio et al. (2014)) (i.e., ASAGA) in terms of both iterations and running time. Then, for a more comprehensive comparison, we compare the performance of more algorithms for Lasso and the ℓ1-norm regularized logistic regression on a9a and Covtype; the results are shown in Figures 4 and 5 in Appendix E." }, { "heading": "5.2 RESULTS ON FACE RECOGNITION", "text": "We also apply our AVR-SExtraGD, as well as Prox-SVRG and Katyusha, to robust face recognition via sparse representation (John et al. (2009)) on the AR and Yale data sets. We set the loss function in the training process to the same function as in the Lasso and Elastic-Net problems. For approximately equal running time, the number of outer loops is 200 for Prox-SVRG and AVR-SExtraGD, and 50 for Katyusha. In order to compare the results reasonably, we run all the algorithms 20 times and report the average and standard deviation of the recognition rates, as shown in Table 1.

The results in Table 1 show that the recognition rate of AVR-SExtraGD is significantly higher than that of the other algorithms on both the AR and Yale data sets. This means that our AVR-SExtraGD can learn a more effective representation for face recognition." }, { "heading": "5.3 THE SELECTION OF K IN AVR-SEXTRAGD", "text": "For the selection of K, we carry out some experiments as examples. We choose different K to solve the Lasso problem with our algorithm and obtain the results shown in Figure 3, where K = n means that the extragradient step is performed once every n inner iterations (n ∈ {1, 8, 25, 75, 250}). For a9a, when the extragradient step is used in every iteration, the function value decreases faster with respect to the number of iterations, but the result is not good in terms of running time; thus, for a9a, we choose K = 25. As for Covtype, K = 1 is clearly the best choice." }, { "heading": "6 CONCLUSIONS AND FUTURE WORK", "text": "In this paper, we mainly considered the non-smooth optimization problem in large-scale and high-dimensional settings. By introducing the ideas of extragradient and momentum acceleration, we improved the classical Prox-SVRG and proposed a novel algorithm, called AVR-SExtraGD. Our theoretical analysis shows that AVR-SExtraGD attains linear convergence for SC problems and achieves the same oracle complexity as Katyusha, the best known among stochastic first-order algorithms, in both the SC and non-SC cases. Finally, the experimental results showed that AVR-SExtraGD mitigates the gap in the optimal value introduced by the proximal operator, and thus improves the accuracy of the solutions and the convergence speed, which confirms the efficiency of extragradient and momentum acceleration. For future work, we can extend the ideas introduced in this paper to many existing proximal algorithms, including Prox-AFG (Beck & Teboulle (2009)), Prox-SAG (Schmidt et al. (2017)) and Prox-SDCA (Shalev-Shwartz & Tong (2012); Shalev-shwartz & Tong (2014)), which is a proximal variant of SDCA (Shalev-Shwartz & Tong (2013)); this would certainly improve the performance of these algorithms. Moreover, we can also rewrite our algorithm in mini-batch form, whose gradient evaluations can be parallelized (Agarwal & Duchi (2011); Dekel et al. (2012))." }, { "heading": "APPENDIX", "text": "" }, { "heading": "A THE ERROR OF OPTIMAL VALUE", "text": "In this part, we show that Prox-FG causes a deviation of the optimal value.
Based on its update rules, we can explain why Problem (1) can be solved by proximal operators and find out where the error is introduced. According to Prox-FG, we have

x_k = Prox^R_{η_k}(x_{k−1} − η_k∇F(x_{k−1}))
    = arg min_{u∈R^d} { R(u) + (1/(2η_k))‖u − (x_{k−1} − η_k∇F(x_{k−1}))‖² }
    = arg min_{u∈R^d} { R(u) + (η_k/2)‖∇F(x_{k−1})‖² + ∇F(x_{k−1})ᵀ(u − x_{k−1}) + (1/(2η_k))‖u − x_{k−1}‖² }
    = arg min_{u∈R^d} { R(u) + F(x_{k−1}) + ∇F(x_{k−1})ᵀ(u − x_{k−1}) + (1/(2η_k))‖u − x_{k−1}‖² }
    ≈ arg min_{u∈R^d} { F(u) + R(u) }.

The final approximation is obtained from the second-order Taylor expansion of F(u) at x_{k−1}. From the above analysis, we can see that when using proximal operators to solve problems with ℓ1-norm regularization, the iterate x_k in each iteration is only an estimation of the optimal point, not the real optimal point. Therefore, in the last iteration, the final output is also an estimation of P∗, which results in the deviation of the optimal value." }, { "heading": "B PROOFS OF AVR-SEXTRAGD", "text": "" }, { "heading": "B.1 KEY LEMMAS", "text": "Lemma 1. If two vectors x_k, x_{k−1} ∈ R^d satisfy

x_k = arg min_x { (1/(2η))‖x − x_{k−1}‖² + ⟨∇̃_{i_k}F(x_{k−1}), x⟩ + R(x) }

with a constant vector ∇̃_{i_k}F(x_{k−1}) and a convex function R(·), then for all u ∈ R^d, we have

⟨∇̃_{i_k}F(x_{k−1}), x_k − u⟩ ≤ −(1/(2η))‖x_{k−1} − x_k‖² + (1/(2η))‖x_{k−1} − u‖² − (1/(2η))‖x_k − u‖² + R(u) − R(x_k).

Moreover, if R(·) is µ-strongly convex, the above inequality becomes

⟨∇̃_{i_k}F(x_{k−1}), x_k − u⟩ ≤ −(1/(2η))‖x_{k−1} − x_k‖² + (1/(2η))‖x_{k−1} − u‖² − ((1+ηµ)/(2η))‖x_k − u‖² + R(u) − R(x_k).

Lemma 2 (Variance Bound). Suppose each function f_i(·) is L_i-smooth, and let ∇̃_{i_k}F(x_{k−1}) = ∇_{i_k}F(x_{k−1}) − ∇_{i_k}F(x̃^{s−1}) + ∇F(x̃^{s−1}) be the gradient estimator used in Algorithm 1. Then the following inequality holds:

E‖∇F(x_{k−1}) − ∇̃_{i_k}F(x_{k−1})‖² ≤ 2L(F(x̃^{s−1}) − F(x_{k−1}) − ⟨∇F(x_{k−1}), x̃^{s−1} − x_{k−1}⟩).

The detailed proof of Lemma 1 can be found in Kaiwen et al. (2018), and the proof of Lemma 2 can be seen in Zeyuan (2017); thus we omit the proofs here. Next, we give the proofs of Theorems 1 and 2." }, { "heading": "B.2 PROOF OF THEOREM 1", "text": "Proof. In this part, we consider one particular epoch and omit the outer iteration index s (except on x̃^{s−1} and x̃^s). We assume the parameters η and β satisfy the following inequality:

Lβ + Lβ/(1−β) ≤ 1/η. (4)

Let x̂_{k−1} = βx_{k−1} + (1−β)x̃^{s−1} and x̂_{k−1/2} = βx_{k−1/2} + (1−β)x̃^{s−1}. Thus, we can obtain x̂_{k−1/2} − x̂_{k−1} = β(x_{k−1/2} − x_{k−1}).
Thus, according to the L-smoothness of F (·), we can obtain\nP (x̂k−1/2)≤βR(xk−1/2)+(1−β)R(x̃s−1)+F (x̂k−1)+〈∇̃ikF (x̂k−1), β(xk−1/2−xk−1)〉\n+ Lβ2\n2 ‖xk−1/2−xk−1‖2+〈∇F (x̂k−1)−∇̃ikF (x̂k−1), β(xk−1/2−xk−1)〉.\nThen by using (4), we have\n1 β P (x̂k−1/2)≤R(xk−1/2)+ 1−β β R(x̃s−1)+ 1 β F (x̂k−1)+〈∇̃ikF (x̂k−1), xk−1/2−xk−1〉\n+ 1\n2η ‖xk−1/2−xk−1‖2−\nLβ\n2(1−β) ‖xk−1/2−xk−1‖2\n+〈∇F (x̂k−1)−∇̃ikF (x̂k−1), xk−1/2−xk−1〉.\nAccording to Lemma 1 with xk−1, xk=xk−1/2, u=x∗ and using the Young’s inequality to expand\n〈∇F (x̂k−1)−∇̃ikF (x̂k−1), xk−1/2−xk−1〉 with the parameter θ > 0 and taking expectation with\nrespect to the sample ik, we have\n1 β EP (x̂k−1/2)≤R(x∗)+ 1−β β R(x̃s−1)+ 1 β F (x̂k−1)+E〈∇̃ikF (x̂k−1), x∗−xk−1〉\n+ 1\n2η ‖x∗−xk−1‖2−\n1+ηµ\n2η E‖x∗−xk−1/2‖2−\nLβ\n2(1−β) E‖xk−1/2−xk−1‖2\n+ θ\n2 E‖xk−1/2−xk−1‖2+\n1\n2θ E‖∇F (x̂k−1)−∇̃ikF (x̂k−1)‖2.\nWe set θ= Lβ1−β >0 and apply Lemma 2, then\n1 β EP (x̂k−1/2)≤R(x∗)+ 1−β β R(x̃s−1)+ 1 β F (x̂k−1)\n+ 1\n2η ‖x∗−xk−1‖2−\n1+ηµ\n2η E‖x∗−xk−1/2‖2+ 1−β β [F (x̃s−1)−F (x̂k−1)]\n+E〈∇̃ikF (x̂k−1), x∗+ 1−β β x̃s−1− 1 β x̂k−1+ 1−β β (x̂k−1−x̃s−1)〉\n≤R(x∗)+ 1−β β R(x̃s−1)+ 1 β F (x̂k−1)\n+ 1\n2η ‖x∗−xk−1‖2−\n1+ηµ\n2η E‖x∗−xk−1/2‖2+ 1−β β [F (x̃s−1)−F (x̂k−1)]\n+ 1\nβ E〈∇̃ikF (x̂k−1), βx∗+(1−β)x̂k−1−x̂k−1〉\n≤R(x∗)+ 1−β β R(x̃s−1)+ 1 β F (x̂k−1)\n+ 1\n2η ‖x∗−xk−1‖2−\n1+ηµ\n2η E‖x∗−xk−1/2‖2+ 1−β β [F (x̃s−1)−F (x̂k−1)]\n+ 1\nβ F (βx∗+(1−β)x̂k−1)−\n1 β F (x̂k−1)\n≤R(x∗)+ 1−β β R(x̃s−1)+ 1 β F (x̂k−1)+ 1−β β [F (x̃s−1)−F (x̂k−1)]+F (x∗)\n+ 1−β β F (x̂k−1)− 1 β F (x̂k−1)+ 1 2η ‖x∗−xk−1‖2− 1+ηµ 2η E‖x∗−xk−1/2‖2\n= 1−β β P (x̃s−1)+P (x∗)+ 1 2η ‖x∗−xk−1‖2− 1+ηµ 2η E‖x∗−xk−1/2‖2.\nThe third inequality holds due to E[∇̃ikF (x̂k−1)] =∇F (x̂k−1) and the convexity of F (·), then we\nget\n1 β E[P (x̂k−1/2)−P (x∗)]≤ 1−β β [P (x̃s−1)−P (x∗)]+ 1 2η ‖x∗−xk−1‖2− 1+ηµ 2η E‖x∗−xk−1/2‖2.\nMoreover, because x̂k−1 =βxk−1+(1−β)x̃s−1, we can obtain x̂k=βxk+(1−β)x̃s−1. Thus, it is\nnot hard to know\n1 β E[P (x̂k)−P (x∗)]≤ 1−β β [P (x̃s−1)−P (x∗)]+ 1 2η ‖x∗−xk−1/2‖2− 1+ηµ 2η E‖x∗−xk‖2.\nWe set yk=β xk−1/2+xk 2 +(1−β)x̃ s−1 and it is obvious that 12η− 1+ηµ 2η ≤0, then we have\n1 β E[P (yk)−P (x∗)]≤ 1−β β [P (x̃s−1)−P (x∗)]+ 1 4η ‖x∗−xk−1‖2− 1+ηµ 4η E‖x∗−xk‖2. (5)\nBy setting ρ=1+ηµ and summing (5) over k=1, ...,m with increasing weight ρk−1, we have\n1\nβ m∑ k=1 ρk−1E[P (yk)−P (x∗)]+ ρm 4η ‖xm−x∗‖2≤ 1−β β m∑ k=1 ρk−1[P (x̃s−1)−P (x∗)]+ 1 4η ‖x0−x∗‖2.\nBecause, for SC problems, we set x̃s=( ∑m k=1 ρk−1) −1∑m k=1 ρ k−1yk in Algorithm 1 , we have\n1\nβ m∑ k=1 ρk−1E[P (x̃s)−P (x∗)]+ ρm 4η ‖xsm−x∗‖2≤ 1−β β m∑ k=1 ρk−1[P (x̃s−1)−P (x∗)]+ 1 4η ‖xs0−x∗‖2.\nThen, according to the convergence analysis for SC problems in (Kaiwen et al. (2018)), we can get\na similar result. That is, for the case with mκ ≤ 3 4 , we set η=\n√ 1 3µmL , β= √ m 3κ ≤ 1 2 , and m=Θ(n),\nthen we can obtain\nE[P (x̃S)−P (x∗)]≤ ( O(1+ √ 1\n3nκ )\n)−Sm ·O(P (x̃0)−P (x∗)).\nWe note that x̃0 =x0, so we get\nE[P (x̃S)−P (x∗)]≤ ( O(1+ √ 1\n3nκ )\n)−Sm ·O(P (x0)−P (x∗)),\nwhich implies that the oracle complexity in this case to achieve an -additive error is\nO( √ κn log P (x0)−P (x∗) ). However, for the case with\nm κ > 3 4 , we set η= 2 3L , β= 1 2 and m= Θ(n),\nthen we can obtain\nE[P (x̃S)−P (x∗)]≤ ( 2\n3\n)S ·O(P (x̃0)−P (x∗)).\nWe know that x̃0 =x0, so we have\nE[P (x̃S)−P (x∗)]≤ ( 2\n3\n)S ·O(P (x0)−P (x∗)),\nwhich implies that the oracle complexity of AVR-SExtraGD in this case is O ( n log P (x0)−P (x∗) ) ." }, { "heading": "B.3 PROOF OF THEOREM 2", "text": "Proof. 
In this part, we also omit the number of outer iteration s (except x̃s−1 and x̃s). Due to\nx̂k−1/2 =βxk−1/2+(1−β)x̃s−1, we can get\nP (x̂k−1/2)=P (βxk−1/2+(1−β)x̃s−1)=R(βxk−1/2+(1−β)x̃s−1)+F (x̂k−1/2).\nFrom the convexity of R(·) and L-smoothness of F (·), we obtain\nP (x̂k−1/2)≤βR(xk−1/2)+(1−β)R(x̃s−1)+F (x̂k−1)+〈∇̃ikF (x̂k−1), β(xk−1/2−xk−1)〉\n+ Lβ2\n2 ‖xk−1/2−xk−1‖2+〈∇F (x̂k−1)−∇̃ikF (x̂k−1), β(xk−1/2−xk−1)〉\n≤βR(xk−1/2)+(1−β)R(x̃s−1)+F (x̂k−1)+〈∇̃ikF (x̂k−1), β(xk−1/2−xk−1)〉\n+ Lαβ2\n2 ‖xk−1/2−xk−1‖2+\n1\n2L(α−1) ‖∇F (x̂k−1)−∇̃ikF (x̂k−1)‖2,\nwhere the second inequality holds by using the Young’s inequality with the parameter L(α−1),\nwhere α is a small constant. After applying Lemma 1 and taking expectation with respect to the\nsample ik, we have\nEP (x̂k−1/2)≤βR(x∗)+(1−β)R(x̃s−1)+F (x̂k−1)+E〈∇̃ikF (x̂k−1), β(x∗−xk−1)〉\n+ Lαβ2\n2 (‖x∗−xk−1‖2−E‖x∗−xk−1/2‖2)\n+ 1\n2L(α−1) E‖∇F (x̂k−1)−∇̃ikF (x̂k−1)‖2\n≤βR(x∗)+(1−β)R(x̃s−1)+F (x̂k−1)+E〈∇̃ikF (x̂k−1), β(x∗−xk−1)〉\n+ Lαβ2\n2 (‖x∗−xk−1‖2−E‖x∗−xk−1/2‖2)\n+ 1\nα−1 [F (x̃s−1)−F (x̂k−1)+E〈∇̃ikF (x̂k−1), x̂k−1−x̃s−1〉]\n≤βR(x∗)+(1−β)R(x̃s−1)+F (x̂k−1)+ 1\nα−1 [F (x̃s−1)−F (x̂k−1)]\n+E〈∇̃ikF (x̂k−1), βx∗+(1−β)x̃s−1−x̂k−1+ 1 α−1 (x̂k−1−x̃s−1)〉\n+ Lαβ2\n2 (‖x∗−xk−1‖2−E‖x∗−xk−1/2‖2)\n≤βR(x∗)+(1−β)R(x̃s−1)+F (x̂k−1)+ 1\nα−1 [F (x̃s−1)−F (x̂k−1)]\n+F (βx∗+(1−β− 1\nα−1 )x̃s−1+\n1\nα−1 x̂k−1)−F (x̂k−1)\n+ Lαβ2\n2 (‖x∗−xk−1‖2−E‖x∗−xk−1/2‖2)\n≤βR(x∗)+(1−β)R(x̃s−1)+F (x̂k−1)+ 1\nα−1 [F (x̃s−1)−F (x̂k−1)]\n+βF (x∗)+(1−β− 1\nα−1 )F (x̃s−1)+\n1\nα−1 F (x̂k−1)−F (x̂k−1)\n+ Lαβ2\n2 (‖x∗−xk−1‖2−E‖x∗−xk−1/2‖2)\n=(1−β)P (x̃s−1)−βP (x∗)+ Lαβ2\n2 (‖x∗−xk−1‖2−E‖x∗−xk−1/2‖2).\nThe second inequality holds due to Lemma 2. The reasons why the fourth inequality holds are\nE[∇̃ikF (x̂k−1)]=∇F (x̂k−1) and the convexity of F (·). Besides, we need to assume 1−β− 1α−1≥0\nin this step. Then, we get\nE[P (x̂k−1/2)−P (x∗)]≤(1−β)[P (x̃s−1)−P (x∗)]+ Lαβ2\n2 (‖x∗−xk−1‖2−E‖x∗−xk−1/2‖2).\nMoreover, because x̂k−1 =βxk−1+(1−β)x̃s−1, we can obtain x̂k=βxk+(1−β)x̃s−1. Thus, it is\nnot hard to know\nE[P (x̂k)−P (x∗)]≤(1−β)[P (x̃s−1)−P (x∗)]+ Lαβ2\n2 (‖x∗−xk−1/2‖2−E‖x∗−xk‖2).\nLet yk=β xk−1/2+xk 2 +(1−β)x̃ s−1, we have\nE[P (yk)−P (x∗)]≤(1−β)[P (x̃s−1)−P (x∗)]+ Lαβ2\n4 (‖x∗−xk−1‖2−E‖x∗−xk‖2).\nThat is,\n1\nβ2 E[P (yk)− P (x∗)]≤ (1−β) β2 [P (x̃s−1)−P (x∗)]+ Lα 4 (‖x∗−xk−1‖2−E‖x∗−xk‖2).\nSince we have x̃s = βm ∑m k=1 xk−1/2+xk 2 +(1−β)x̃ s−1 = 1m ∑m k=1 yk in Algorithm 1, and then by\nsumming the previous inequality over k=1, ...,m and according to xs+10 =x s m, we obtain\n1\nβ2s E[P (x̃s)−P (x∗)]≤ (1−βs) β2s [P (x̃s−1)−P (x∗)]+ Lα 4m (‖x∗−xs0‖2−‖x∗−xs+10 ‖2).\nWe set βs= 2s+4 and can easily obtain 1 β2s−1 ≥ 1−βsβ2s . Then by summing the previous inequality over\ns=1, ..., S, we can get\n1\nβ2S E[P (x̃S)−P (x∗)]≤ (1−β1) β21 [P (x̃s−1)−P (x∗)]+ Lα 4m (‖x∗−x10‖2−‖x∗−xSm‖2).\nThen we have\nE[P (x̃S)−P (x∗)]≤ 4(1−β1)\n(S+4)2β21 (P (x̃0)−P (x∗))+\nLα\n(S+4)2m ‖x̃0−x∗‖2,\nwhich holds because ‖x∗−xSm‖2≥0. We note that x̃0 =x10 =x0, so we get\nE[P (x̃S)−P (x∗)]≤ 4(1−β1)\n(S+4)2β21 (P (x0)−P (x∗))+\nLα\n(S+4)2m ‖x0−x∗‖2.\nIn other words, by choosing m=Θ(n), the total oracle complexity is\nO ( n √ P (x0)−P (x∗) + √ nL‖x0−x∗‖2 ) .\nFinally, we finished the convergence analysis of AVR-SExtraGD. In our Algorithm 1, only part of the iterations are updated by extragradient descent, and the other iterations are updated with the update rules of MiG. And we know both MiG and AVR-SExtraGD can make the objective function converge to P∗. 
Therefore, the method in Algorithm 1 will not affect the results and can accelerate the algorithm." }, { "heading": "C SEXTRAGD AND VR-SEXTRAGD", "text": "Based on the update rules of EEG, we first consider and propose the stochastic variant of the algorithm, namely the stochastic extragradient descent (SExtraGD) algorithm, and its variance reduced variant, called variance reduced stochastic extragradient descent (VR-SExtraGD), whose update rules can be formulated as follows:

• The update rules of SExtraGD:
x_{k−1/2} = Prox^R_{η_1}(x_{k−1} − η_1∇f_{i_k}(x_{k−1})); x_k = Prox^R_{η_2}(x_{k−1/2} − η_2∇f_{i_k}(x_{k−1/2})).

• The update rules of VR-SExtraGD:
x_{k−1/2} = Prox^R_{η_1}(x_{k−1} − η_1∇̃f_{i_k}(x_{k−1})); x_k = Prox^R_{η_2}(x_{k−1/2} − η_2∇̃f_{i_k}(x_{k−1/2})),

where ∇̃f_{i_k}(·) is the gradient estimator defined in (3). We note that one difference between these two algorithms and EEG is that we replace x_{k−1} in the second step by x_{k−1/2}, which is beneficial to our theoretical analysis but does not cause any major change to the results of the algorithms. Thus, we obtain the stochastic extragradient descent algorithms SExtraGD and VR-SExtraGD for solving non-smooth (both SC and non-SC) problems. The detailed process of VR-SExtraGD is outlined in Algorithm 2.

Algorithm 2 VR-SExtraGD
Input: Initial vector x_0, the number of epochs S, the number of iterations m per epoch, and the step sizes η_1, η_2.
Initialize: x̃^0 = x_0.
1: for s = 1, 2, ..., S do
2:   µ̃^{s−1} = ∇F(x̃^{s−1});
3:   x^s_0 = x̃^{s−1} (SC) or x^s_0 = x^{s−1}_m (non-SC);
4:   for k = 1, 2, ..., m do
5:     Pick i_k uniformly at random from {1, ..., n};
6:     ∇̃f_{i_k}(x^s_{k−1}) = ∇f_{i_k}(x^s_{k−1}) − ∇f_{i_k}(x̃^{s−1}) + µ̃^{s−1};
7:     x^s_{k−1/2} = Prox^R_{η_1}(x^s_{k−1} − η_1∇̃f_{i_k}(x^s_{k−1}));
8:     ∇̃f_{i_k}(x^s_{k−1/2}) = ∇f_{i_k}(x^s_{k−1/2}) − ∇f_{i_k}(x̃^{s−1}) + µ̃^{s−1};
9:     x^s_k = Prox^R_{η_2}(x^s_{k−1/2} − η_2∇̃f_{i_k}(x^s_{k−1/2}));
10:  end for
11:  x̃^s = (1/m) Σ_{k=1}^m x^s_k;
12: end for
Output: x̃^S." }, { "heading": "D CONVERGENCE ANALYSIS OF VR-SEXTRAGD", "text": "Firstly, we give some key lemmas that are helpful for the convergence analysis of VR-SExtraGD. Lemmas 3, 4 and 5 are used to prove Lemma 6, which is the key lemma for proving the convergence of VR-SExtraGD.

Lemma 3. Let R(·) be a convex function from R^d to R, and η_k > 0. Then, for all x, y ∈ R^d,

‖Prox^R_{η_k}(x) − Prox^R_{η_k}(y)‖ ≤ ‖x − y‖.

Lemma 4. Let P(x) = F(x) + R(x), where ∇F(x) is Lipschitz continuous with parameter L. For any x ∈ R^d and arbitrary v ∈ R^d, we define

x′ = Prox^R_η(x − ηv), g = (1/η)(x − x′), Δ = v − ∇F(x),

where η is a step size that satisfies 0 < η ≤ 1/L. Then, for any y ∈ R^d,

P(y) ≥ P(x′) + gᵀ(y − x) + (η/2)‖g‖² + Δᵀ(x′ − y).

Lemma 5. Consider P(x) as defined in Problem (1) and ∇̃f_{i_k}(v) as defined in (3), where v is an arbitrary stochastic iterate, and let x* = arg min_x P(x). We have E[∇̃f_{i_k}(v)] = ∇F(v) and

E‖∇̃f_{i_k}(v) − ∇F(v)‖² ≤ 4L[P(v) − P(x*) + P(x̃) − P(x*)].

Because Lemma 3 is well known and often used (e.g., see Section 3 in Rockafellar (1970)), we omit its proof here. Lemma 4 is very similar to Lemma 3.7 of Lin & Tong (2014) and can be easily proved, so we also omit its proof. Similarly, according to Lemma 3.4 in Lin & Tong (2014), Lemma 5 can also be easily proved, and we omit the details. Then we can prove the following lemma using these three lemmas.

Lemma 6. For iterates v and h generated with a randomly selected sample i_k, if

v = Prox^R_η(h − η∇̃f_{i_k}(h)), (6)

then we have

E‖v − x*‖² ≤ ‖h − x*‖² − 2η[E P(v) − P(x*)] + 8Lη²[P(h) − P(x*) + P(x̃) − P(x*)],

where x* = arg min_x P(x).

Proof.
First, we define a stochastic gradient mapping as follows:\ng = 1 η (h− v) = 1 η (h− ProxRη\n( h− η∇̃fik(h) ) .\nAccording to this definition, (6) can be expressed more succinctly as follows:\nv = h− ηg.\nThen we consider the distance between v and x∗.\n‖v − x∗‖2 = ‖h− ηg − x∗‖2 = ‖h− x∗‖2 − 2ηgT (h− x∗) + η2‖g‖2.\nApplying Lemma 4 with x = h, v = ∇̃fik(h), x+ = v and y = x∗, we have\n−gT (h− x∗) + η\n2 ‖g‖2 ≤ P (x∗)− P (v)−4Tk (v − x∗)\nwhere4k = ∇̃fik(h)−∇F (h). Thus we have\n‖v− x∗‖2 ≤ ‖h− x∗‖2 − 2η[P (v)− P (x∗)]− 2η4Tk (v − x∗).\nThen we can give an upper bound of −2η4Tk (v − x∗). First of all, we can define the update of\nProx-FG as shown below (although it is not used in our algorithm):\nh̄ = ProxRη (h− η∇F (h)).\nSo, we can obtain\n−2η4Tk (v − x∗) = −2η4Tk (v − h̄)− 2η4Tk (h̄− x∗)\n≤ 2η‖4k‖‖v − h̄‖ − 2η4Tk (h̄− x∗)\n≤ 2η‖4k‖‖(h− η∇̃fik(h))− (h− η∇F (h))‖ − 2η4Tk (h̄− x∗)\n= 2η2‖4k‖2 − 2η4Tk (h̄− x∗).\nThe first inequality follows from the Cauchy-Schwarz inequality, and the second inequality holds\ndue to Lemma 3. So, we have\n‖h− x∗‖2 ≤ ‖v − x∗‖2 − 2η[P (h)− P (x∗)] + 2η2‖4k‖2 − 2η4Tk (h̄− x∗).\nThen, we take expectation on both sides of the above inequality with respect to ik to obtain\nE‖h− x∗‖ ≤ ‖v − x∗‖2 − 2η[EP (h)− P (x∗)] + 2η2E‖4k‖2 − 2ηE[4Tk (h̄− x∗)]. (7)\nIt can be noted that both h̄ and x∗ are independent of the random variable ik, and we can easily\nknow that E4k = 0. So\nE[4Tk (h̄− x∗)] = (E4k)T (h̄− x∗) = 0. (8)\nSubstituting Lemma 5 and (8) into (7), we obtain\nE‖h− x∗‖2 ≤ ‖v − x∗‖2 − 2η[EP (h)− P (x∗)] + 8Lη2[P (v)− P (x∗) + P (x̃)− P (x∗)]. (9)" }, { "heading": "D.1 FOR SC PROBLEMS", "text": "Firstly, we can prove the convergence of VR-SExtraGD for strongly convex (SC) problems, which is showed by the following theorem.\nTheorem 3 (Strongly Convex). Suppose that Assumptions 1, 2 and 3 hold, and let x∗ =\narg minx P (x). In addition, assume η1 > 0, η2 > 0, η1 ≥ 4Lη22 and η2 ≥ 4Lη21 and m is suf-\nficiently large so that\nθ = 1\nµm(η2 − 4Lη21) +\n4L[(m+ 1)η21 +mη 2 2 ]\nm(η2 − 4Lη21) ≤ 1.\nThen VR-SExtraGD outlined in Algorithm 2 achieves linear convergence in the expected form, which\ncan be formulated as follows:\nE[P (x̃S)− P (x∗)] ≤ θS [P (x0)− P (x∗)]. (10)\nProof. According to Lemma 6 and the update rules of VR-SExtraGD, we can easily get:\nE‖xk−1/2 − x∗‖2 ≤ ‖xk−1 − x∗‖2 − 2η1[EP (xk−1/2)− P (x∗)]\n+ 8Lη21 [P (xk−1)− P (x∗) + P (x̃)− P (x∗)]\n(11)\nand\nE‖xk − x∗‖2 ≤ ‖xk−1/2 − x∗‖2 − 2η2[EP (xk)− P (x∗)]\n+ 8Lη22 [P (xk−1/2)− P (x∗) + P (x̃)− P (x∗)].\n(12)\nThen, we substitute (11) into (12) to obtain\nE‖xk − x∗‖2 ≤ ‖xk−1 − x∗‖2 − 2η1[EP (xk−1/2)− P (x∗)]\n+ 8Lη21 [P (xk−1)− P (x∗) + P (x̃)− P (x∗)]− 2η2[EP (xk)− P (x∗)]\n+ 8Lη22 [P (xk−1/2)− P (x∗) + P (x̃)− P (x∗)].\nSuppose η1 ≥ 4Lη22 , i.e., −2η1 ≤ −8Lη22 , we can get\nE‖xk − x∗‖2 ≤ ‖xk−1 − x∗‖2 − 8Lη22 [EP (xk− 12 )− P (x∗)]\n+ 8Lη21 [P (xk−1)− P (x∗) + P (x̃)− P (x∗)]− 2η2[EP (xk)− P (x∗)]\n+ 8Lη22 [P (xk− 12 )− P (x∗)] + 8Lη 2 2 [P (x̃)− P (x∗)]\n= ‖xk−1 − x∗‖2 + 8Lη21 [P (xk−1)− P (x∗) + P (x̃)− P (x∗)]\n− 2η2[EP (xk)− P (x∗)] + 8Lη22 [P (x̃)− P (x∗)].\nBy summing the previous inequality over k = 1, ...,m, we obtain\n‖xm − x∗‖2 + 2η2[EP (xm)− P (x∗)] + 2(η2 − 4Lη21) m−1∑ k=1 [EP (xk)− P (x∗)]\n≤ ‖x0 − x∗‖2 + 8Lη21 [P (x0)− P (x∗)] +m8L(η21 + η22)[P (x̃)− P (x∗)].\nSince η2 − 4Lη21 < η2, and from the algorithm of VR-SExtraGD, we know x0 = x̃. 
Therefore,\n2(η2 − 4Lη21) m∑ k=1 [EP (xk)− P (x∗)] ≤ ‖x̃− x∗‖2 + 8L[(m+ 1)η21 +mη22 ][P (x̃)− P (x∗)].\nBecause in a fixed epoch, such as the s-th epoch, there are x̃s = 1m ∑m k=1 xk and x̃ s−1 = x0, and according to the convexity of P (·), P (x̃s) ≤ 1m ∑m k=1 P (xk) can be obtained. Therefore,\n2(η2−4Lη21)m[EP (x̃s)−P (x∗)] ≤ 8L[(m+1)η21+mη22 ][P (x̃s−1)−P (x∗)]+‖x̃s−1−x∗‖2. (13)\nBecause of the convexity of F (·) and the strong convexity of R(·), we know P (·) is also strongly\nconvex, then we have ‖x̃s−1 − x∗‖2 ≤ 2µ [P (x̃ s−1)− P (x∗)]. Thus,\n2(η2 − 4Lη21)m[EP (x̃s)− P (x∗)] ≤ ( 2\nµ + 8L((m+ 1)η21 +mη 2 2))[P (x̃ s−1)− P (x∗)],\nwhich is equivalent to\nE[P (x̃s)− P (x∗)] ≤ θ[P (x̃s−1)− P (x∗)],\nwhere\nθ = 1\nµm(η2 − 4Lη21) +\n4L[(m+ 1)η21 +mη 2 2 ]\nm(η2 − 4Lη21) .\nAt last, we have\nE[P (x̃S)− P (x∗)] ≤ θE[P (x̃S−1)− P (x∗)]\n≤ θ2E[P (x̃S−2)− P (x∗)]\n≤ ... ≤ θS [P (x̃0)− P (x∗)].\nWe note that x̃0 = x0, so we can obtain\nE[P (x̃S)− P (x∗)] ≤ θS [P (x0)− P (x∗)]" }, { "heading": "D.2 FOR NON-SC PROBLEMS", "text": "We can also use a theorem to give the convergence of VR-SExtraGD for solving non-SC problems, as shown below.\nTheorem 4 (Non-Strongly Convex). Suppose that Assumptions 1 and 2 hold, and let x∗ =\narg minx P (x). In addition, assume η1 > 0, η2 > 0, and η1 = η2 = η = 1Lα . Then, the conver-\ngence property of VR-SExtraGD, as outlined in Algorithm 2, is given as follows:\nE[P (xout)− P (x∗)] ≤ 4m+ 2\nm(α− 7)S [P (x0)− P (x∗)] + Lα(α− 1) 2m(α− 7)S ‖x0 − x∗‖2. (14)\nwhere xout = 1S ∑S s=1 x̃ s.\nProof. Because we know F (·) is L-smooth, then we have\nP (xk−1/2) ≤ R(xk−1/2) + F (xk−1) + 〈∇̃fik(xk−1), xk−1/2 − xk−1〉\n+ L\n2 ‖xk−1/2 − xk−1‖2 + 〈∇f(xk−1)− ∇̃fik(xk−1), xk−1/2 − xk−1〉\n≤ R(xk−1/2) + F (xk−1) + 〈∇̃fik(xk−1), xk−1/2 − xk−1〉+ L\n2 ‖xk−1/2 − xk−1‖2\n+ 1\n2L(α− 1) ‖∇F (xk−1)− ∇̃fik(xk−1)‖2 + L(α− 1) 2 ‖xk−1/2 − xk−1‖2.\nThe second inequality holds due to Young’s inequality with parameter L(α− 1), where α is a small\nconstant. Then after taking expectation with respect to the sample ik and using Lemma 5, we obtain\nE[P (xk−1/2)] ≤ R(xk−1/2) + F (xk−1) + E〈∇̃fik(xk−1), xk−1/2 − xk−1〉\n+ Lα\n2 E‖xk−1/2 − xk−1‖2 +\n2\nα− 1 [P (xk−1)− P (x∗) + P (x̃)− P (x∗)].\nNext, we apply Lemma 1 with xk−1, xk = xk−1/2, u = x∗, and have\nE[P (xk−1/2)] ≤ R(x∗) + F (xk−1) + E〈∇̃fik(xk−1), x∗ − xk−1〉\n+ Lα\n2 (‖x∗ − xk−1‖2 − E‖x∗ − xk−1/2‖2)\n+ 2\nα− 1 [P (xk−1)− P (x∗) + P (x̃)− P (x∗)]\n≤ R(x∗) + F (x∗) + Lα\n2 (‖x∗ − xk−1‖2 − E‖x∗ − xk−1/2‖2)\n+ 2\nα− 1 [P (xk−1)− P (x∗) + P (x̃)− P (x∗)].\nThe second inequality holds because E[∇̃fik(xk−1)] = ∇F (xk−1) and F (·) is convex. 
Then we\nknow\nE[P (xk−1/2)− P (x∗)] ≤ 2\nα− 1 [P (xk−1)− P (x∗) + P (x̃)− P (x∗)]\n+ Lα\n2 (‖x∗ − xk−1‖2 − E‖x∗ − xk−1/2‖2).\n(15)\nAnd for P (xk), we can deduce by the same way, and obtain the similar result:\nE[P (xk)− P (x∗)] ≤ 2\nα− 1 [P (xk−1/2)− P (x∗) + P (x̃)− P (x∗)]\n+ Lα\n2 (‖x∗ − xk−1/2‖2 − E‖x∗ − xk‖2).\n(16)\nWe assume that α is sufficiently large to make 2α−1 ≤ 1, and sum (15) and (16) together, then we\nobtain\nE[P (xk)− P (x∗)] ≤ 2\nα− 1 [P (xk−1)− P (x∗)] +\n4\nα− 1 [P (x̃)− P (x∗)]\n+ Lα\n2 (‖x∗ − xk−1‖2 − E‖x∗ − xk‖2).\nwhich is equivalent to\n(1− 2 α− 1 )E[P (xk)− P (x∗)] ≤ 2 α− 1 {[P (xk−1)− P (x∗)]− E[P (xk)− P (x∗)]}\n+ 4\nα− 1 [P (x̃)− P (x∗)] +\nLα\n2 (‖x∗ − xk−1‖2 − E‖x∗ − xk‖2).\nBy summing the previous inequality over k = 1, ...,m, we obtain\n(1− 2 α− 1 ) m∑ k=1 E[P (xk)− P (x∗)] ≤ 2 α− 1 {[P (x0)− P (x∗)]− [P (xm)− P (x∗)]}\n+ 4m\nα− 1 [P (x̃)− P (x∗)] +\nLα\n2 (‖x∗ − x0‖2 − ‖x∗ − xm‖2).\nSince we set x̃s = 1m ∑m k=1 xk and x s+1 0 = xm, and we know F (·) is a convex function. Thus, we\nhave\n(1− 2 α− 1 )E[P (x̃s)− P (x∗)] ≤ 2 m(α− 1) {[P (xs0)− P (x∗)]− [P (xs+10 )− P (x∗)]}\n+ 4\nα− 1 [P (x̃s−1)− P (x∗)] +\nLα 2m (‖x∗ − xs0‖2 − ‖x∗ − xs+10 ‖2).\nBy summing the previous inequality over s = 1, ..., S, we obtain\n(1− 2 α− 1 ) S∑ s=1 E[P (x̃s)− P (x∗)] ≤ 4 α− 1 S−1∑ s=0 [P (x̃s)− P (x∗)] + Lα 2m (‖x∗ − x10‖2 − ‖x∗ − xSm‖2)\n+ 2\nm(α− 1) {[P (x10)− P (x∗)]− [P (xSm)− P (x∗)]}.\nThat is,\n(1− 2 α− 1 − 4 α− 1 ) S∑ s=1 E[P (x̃s)− P (x∗)]\n≤ 4 α− 1 [P (x̃0)− P (x∗)] + 2 m(α− 1) ([P (x10)− P (x∗)]− [P (xSm)− P (x∗)])\n+ Lα\n2m (‖x∗ − x10‖2 − ‖x∗ − xSm‖2)\n≤ 2 m(α− 1) [P (x10)− P (x∗)] + 4 α− 1 [P (x̃0)− P (x∗)] + Lα 2m ‖x∗ − x10‖2.\nThe first inequality holds due to 1− 2α−1 ≥ 1− 2 α−1 − 4 α−1 and the second inequality holds because\nP (xSm)− P (x∗) ≥ 0 and ‖x∗ − xSm‖2 ≥ 0. Because x10 = x̃0, we have\n(1− 6 α− 1 ) S∑ s=1 E[P (x̃s)− P (x∗)] ≤ ( 2 m(α− 1) + 4 α− 1 )[P (x̃0)− P (x∗)] + Lα 2m ‖x̃0 − x∗‖2\nDue to the convexity of F (·), we have\nEP ( S∑ s=1 x̃s)− P (x∗) ≤ 1 S S∑ s=1 E[P (x̃s)− P (x∗)]\n≤ 4m+ 2 m(α− 7)S [P (x̃0)− P (x∗)] + Lα(α− 1) 2m(α− 7)S ‖x̃0 − x∗‖2\nWe note that x̃0 = x0, so we have" }, { "heading": "E MORE EXPERIMENTAL RESULTS", "text": "" }, { "heading": "E.1 THE INFORMATION OF DATA SETS", "text": "Data sets Sizes n Dimensions d Sparsity λ1 λ2 a9a 32,562 123 Sparse 10−6 10−4 Covtype 581,012 54 Dense 10−5 10−8\nrcv1 20,242 47,236 Sparse 10−8 10−10 real-sim 72,309 20,598 Sparse 10−6 10−8" }, { "heading": "E.2 THE PROBLEM MODELS", "text": "In this part, we introduce two common problem models. The first one is\nmin x∈Rd\n1\n2n n∑ i=1 (aTi x−bi)2+λ1‖x‖1+ λ2 2 ‖x‖2.\nWhen λ1 ≥ 0, λ2 ≡ 0, we can obtain the Lasso problem, and when λ1, λ2 ≥ 0, we can obtain the Elastic-Net problem, which are all non-smooth optimization problems. The second problem model is\nmin x∈Rd\n1\nn n∑ i=1 log(1+exp(−bixTai))+λ‖x‖1,\nwhich is called the `1 norm regularized logistic regression problem." }, { "heading": "E.3 MORE EXPERIMENTAL RESULTS", "text": "For more comprehensive comparison, we provide the performance comparison of more algorithms, including SVRG++, MiG and our VR-SExtraGD.\nFigure 4 shows the experimental result of different algorithms on different data set to solve the Lasso problem. We can see that AVR-SExtraGD is superior to other algorithms in terms of the number of effective passes and running time. 
Besides, we note that VR-SExtraGD achieves almost the same result as AVR-SExtraGD in terms of effective passes, which may be due to the advantage of the extragradient structure. However, since VR-SExtraGD needs to calculate the stochastic gradient twice in each inner iteration, its result in terms of running time is not as good as that of AVR-SExtraGD.

Moreover, to verify that our algorithm is not limited to particular problems, we also provide a performance comparison of the different algorithms on different data sets for the ℓ1-norm regularized logistic regression problem, as shown in Figure 5.

From Figure 5, we can see that our AVR-SExtraGD outperforms most of the compared algorithms in terms of both effective passes and running time. Although Katyusha is better than our algorithm in terms of effective passes, its result in terms of running time is not as good as ours because of its complicated structure." } ]
2019
null
SP:8357fc2c4234854bf476afd5305b1191ef56c11a
[ "This paper proposes an end-to-end multi-frame super-resolution algorithm, that relies on a pair-wise co-registrations and fusing blocks (convolutional residual blocks), embedded in a encoder-decoder network 'HighRes-net' that estimates the super-resolution image. Because the ground truth SR image is typically misaligned with the estimation SR image, the authors proposed to learn the shift with a neural network 'ShiftNet' in a cooperative setting with HighRes-net. The experiments were performed on the ESA challenge on satellite images, showing good results.", "This paper presents a multi-frame super-resolution method applied to satellite imagery. It first estimates a reference image for the multiple input LR images by median filtering. Then it pairwise encodes the reference image and each of the multiple images in a recursive fashion then fuses the corresponding feature maps with residual blocks and bottleneck layers until only one feature maps for the entire multiple images obtained. In other words, LR images are fused into a single global encoding. Then, it applies a standard upsampling network to obtain the super-resolved image this image is fed into a network that estimates only the translational shift, and the shifted image with the estimated translation parameters finally resampled. " ]
Generative deep learning has sparked a new wave of Super-Resolution (SR) algorithms that enhance single images with impressive aesthetic results, albeit with imaginary details. Multi-frame Super-Resolution (MFSR) offers a more grounded approach to the ill-posed problem, by conditioning on multiple low-resolution views. This is important for satellite monitoring of human impact on the planet, from deforestation to human rights violations, which depends on reliable imagery. To this end, we present HighRes-net, the first deep learning approach to MFSR that learns its sub-tasks in an end-to-end fashion: (i) co-registration, (ii) fusion, (iii) up-sampling, and (iv) registration-at-the-loss. Co-registration of low-res views is learned implicitly through a reference-frame channel, with no explicit registration mechanism. We learn a global fusion operator that is applied recursively on an arbitrary number of low-res pairs. We introduce a registered loss, by learning to align the SR output to a ground-truth through ShiftNet. We show that by learning deep representations of multiple views, we can super-resolve low-resolution signals and enhance Earth observation data at scale. Our approach recently topped the European Space Agency's MFSR competition on real-world satellite imagery.
[]
[ { "authors": [ "REFERENCES Joan Bruna", "Pablo Sprechmann", "Yann LeCun" ], "title": "Super-resolution with deep convolutional sufficient statistics", "venue": "arXiv preprint arXiv:1511.05666,", "year": 2015 }, { "authors": [ "Adrian Bulat", "Jing Yang", "Georgios Tzimiropoulos" ], "title": "To learn image super-resolution, use a gan to learn how to do image degradation first", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "David Capel", "Andrew Zisserman" ], "title": "Super-resolution from multiple views using learnt image models", "venue": "In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001,", "year": 2001 }, { "authors": [ "Toby N Carlson", "David A Ripley" ], "title": "On the relation between ndvi, fractional vegetation cover, and leaf area index", "venue": "Remote sensing of Environment,", "year": 1997 }, { "authors": [ "Tony F Chan", "Chiu-Kwong Wong" ], "title": "Total variation blind deconvolution", "venue": "IEEE transactions on Image Processing,", "year": 1998 }, { "authors": [ "Hong Chang", "Dit-Yan Yeung", "Yimin Xiong" ], "title": "Super-resolution through neighbor embedding", "venue": "In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition,", "year": 2004 }, { "authors": [ "Zhengping Che", "Sanjay Purushotham", "Kyunghyun Cho", "David Sontag", "Yan Liu" ], "title": "Recurrent neural networks for multivariate time series with missing values", "venue": "Scientific reports,", "year": 2018 }, { "authors": [ "Colin B Clement", "Matthew Bierbaum", "James P Sethna" ], "title": "Image registration and super resolution from first principles", "venue": "arXiv preprint arXiv:1809.05583,", "year": 2018 }, { "authors": [ "Julien Cornebise", "Daniel Worrall", "Micah Farfour", "Milena Marin" ], "title": "Witnessing atrocities: Quantifying villages destruction in darfur with crowdsourcing and transfer learning", "venue": "In AI for Social Good NIPS2018 Workshop,", "year": 2018 }, { "authors": [ "Daniel DeTone", "Tomasz Malisiewicz", "Andrew Rabinovich" ], "title": "Deep image homography estimation", "venue": "arXiv preprint arXiv:1606.03798,", "year": 2016 }, { "authors": [ "Wouter Dierckx", "Sindy Sterckx", "Iskander Benhadj", "Stefan Livens", "Geert Duhoux", "Tanja Van Achteren", "Michael Francois", "Karim Mellab", "Gilbert Saint" ], "title": "Proba-v mission for global vegetation monitoring: standard products and image quality", "venue": "International Journal of Remote Sensing,", "year": 2014 }, { "authors": [ "Chao Dong", "Chen Change Loy", "Kaiming He", "Xiaoou Tang" ], "title": "Learning a deep convolutional network for image super-resolution", "venue": "In European conference on computer vision,", "year": 2014 }, { "authors": [ "K Eric Drexler" ], "title": "Reframing superintelligence: Comprehensive ai services as general intelligence, 2019", "venue": null, "year": 2019 }, { "authors": [ "Sina Farsiu", "M Dirk Robinson", "Michael Elad", "Peyman Milanfar" ], "title": "Fast and robust multiframe super resolution", "venue": "IEEE transactions on image processing,", "year": 2004 }, { "authors": [ "William T Freeman", "Thouis R Jones", "Egon C Pasztor" ], "title": "Example-based super-resolution", "venue": "IEEE Computer graphics and Applications,", "year": 2002 }, { "authors": [ "Theresa L. Harris", "Jonathan Drake", "Jessica M. Wyndham", "Susan R. Wolfinbarger", "Stephen D. 
Lott", "Michael Lerner" ], "title": "Geospatial evidence in international human rights litigation: Technical and legal considerations", "venue": "Technical report, AAAS Scientific Responsibility, Human Rights and Law Program,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Delving deep into rectifiers: Surpassing humanlevel performance on imagenet classification", "venue": "In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Patrick Helber", "Bradley Gram-Hansen", "Indhu Varatharajan", "Faiza Azam", "Alejandro Coca-Castro", "Veronika Kopackova", "Piotr Bilinski" ], "title": "Mapping informal settlements in developing countries with multi-resolution, multi-spectral data", "venue": "arXiv preprint arXiv:1812.00812,", "year": 2018 }, { "authors": [ "Michal Irani", "Shmuel Peleg" ], "title": "Improving resolution by image registration", "venue": "CVGIP: Graphical models and image processing,", "year": 1991 }, { "authors": [ "Phillip Isola", "Jun-Yan Zhu", "Tinghui Zhou", "Alexei A Efros" ], "title": "Image-to-image translation with conditional adversarial networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "David Jensen", "Jillian Campbell" ], "title": "The promise and peril of a digital ecosystem for the planet, September 2019", "venue": "URL https://medium.com/@davidedjensen 99356/building-a-digitalecosystem-for-the-planet-557c41225dc2. 
[Online; posted 11-September-2019]", "year": 2019 }, { "authors": [ "Justin Johnson", "Alexandre Alahi", "Li Fei-Fei" ], "title": "Perceptual losses for real-time style transfer and superresolution", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Michal Kawulok", "Pawel Benecki", "Krzysztof Hrynczenko", "Daniel Kostrzewa", "Szymon Piechaczek", "Jakub Nalepa", "Bogdan Smolka" ], "title": "Deep learning for fast super-resolution reconstruction from multiple images", "venue": "In Real-Time Image Processing and Deep Learning 2019,", "year": 2019 }, { "authors": [ "Jiwon Kim", "Jung Kwon Lee", "Kyoung Mu Lee" ], "title": "Deeply-recursive convolutional network for image superresolution", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Kwang In Kim", "Younghee Kwon" ], "title": "Single-image super-resolution using sparse regression and natural image prior", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2010 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Christian Ledig", "Lucas Theis", "Ferenc Huszár", "Jose Caballero", "Andrew Cunningham", "Alejandro Acosta", "Andrew Aitken", "Alykhan Tejani", "Johannes Totz", "Zehan Wang" ], "title": "Photo-realistic single image super-resolution using a generative adversarial network", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Jonathan Long", "Evan Shelhamer", "Trevor Darrell" ], "title": "Fully convolutional networks for semantic segmentation", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Robert J II Marks" ], "title": "Introduction to Shannon sampling and interpolation theory", "venue": "Springer Science & Business Media,", "year": 2012 }, { "authors": [ "Marcus Märtens", "Dario Izzo", "Andrej Krzic", "Daniël Cox" ], "title": "Super-resolution of proba-v images using convolutional neural networks. 
Astrodynamics", "venue": null, "year": 2019 }, { "authors": [ "Andrea Bordone Molini", "Diego Valsesia", "Giulia Fracastoro", "Enrico Magli" ], "title": "Deepsum: Deep neural network for super-resolution of unregistered multitemporal images", "venue": null, "year": 1907 }, { "authors": [ "Seungjun Nah", "Radu Timofte", "Sungyong Baik", "Seokil Hong", "Gyeongsik Moon", "Sanghyun Son", "Kyoung Mu Lee" ], "title": "Ntire 2019 challenge on video deblurring: Methods and results", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2019 }, { "authors": [ "Nhat Nguyen", "Peyman Milanfar", "Gene Golub" ], "title": "A computationally efficient superresolution image reconstruction algorithm", "venue": "IEEE transactions on image processing,", "year": 2001 }, { "authors": [ "Harry Nyquist" ], "title": "Certain topics in telegraph transmission theory", "venue": "Transactions of the American Institute of Electrical Engineers,", "year": 1928 }, { "authors": [ "Athanasios Papoulis" ], "title": "Generalized sampling expansion", "venue": "IEEE transactions on circuits and systems,", "year": 1977 }, { "authors": [ "Nathalie Pettorelli", "Jon Olav Vik", "Atle Mysterud", "Jean-Michel Gaillard", "Compton J Tucker", "Nils Chr Stenseth" ], "title": "Using the satellite-derived ndvi to assess ecological responses to environmental change", "venue": "Trends in ecology & evolution,", "year": 2005 }, { "authors": [ "Lyndsey C Pickup", "Stephen J Roberts", "Andrew Zisserman" ], "title": "Optimizing and learning for super-resolution", "venue": "In BMVC,", "year": 2006 }, { "authors": [ "David Rolnick", "Priya L Donti", "Lynn H Kaack", "Kelly Kochanski", "Alexandre Lacoste", "Kris Sankaran", "Andrew Slavin Ross", "Nikola Milojevic-Dupont", "Natasha Jaques", "Anna Waldman-Brown" ], "title": "Tackling climate change with machine learning", "venue": "arXiv preprint arXiv:1906.05433,", "year": 2019 }, { "authors": [ "Tim GJ Rudner", "Marc Rußwurm", "Jakub Fil", "Ramona Pelich", "Benjamin Bischke", "Veronika Kopačková", "Piotr" ], "title": "Biliński. 
Multi3net: Segmenting flooded buildings via fusion of multiresolution, multisensor, and multitemporal satellite imagery", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Mehdi SM Sajjadi", "Raviteja Vemulapalli", "Matthew Brown" ], "title": "Frame-recurrent video super-resolution", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Eduardo Sanchez", "Mathieu Serrurier", "Mathias Ortner" ], "title": "Learning disentangled representations of satellite image time series", "venue": "arXiv preprint arXiv:1903.08863,", "year": 2019 }, { "authors": [ "Claude Elwood Shannon" ], "title": "Communication in the presence of noise", "venue": "Proceedings of the IRE,", "year": 1949 }, { "authors": [ "Assaf Shocher", "Nadav Cohen", "Michal Irani" ], "title": "zero-shot super-resolution using deep internal learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Xin Tao", "Hongyun Gao", "Renjie Liao", "Jue Wang", "Jiaya Jia" ], "title": "Detail-revealing deep video super-resolution", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Radu Timofte", "Shuhang Gu", "Jiqing Wu", "Luc Van Gool", "Lei Zhang", "Ming-Hsuan Yang", "Muhammad Haris" ], "title": "Ntire 2018 challenge on single image super-resolution: Methods and results", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops,", "year": 2018 }, { "authors": [ "RY Tsai" ], "title": "Multiple frame image restoration and registration", "venue": "Advances in Computer Vision and Image Processing,", "year": 1984 }, { "authors": [ "Ken Turkowski" ], "title": "Filters for common resampling tasks", "venue": "In Graphics gems,", "year": 1990 }, { "authors": [ "Oriol Vinyals", "Samy Bengio", "Manjunath Kudlur" ], "title": "Order matters: Sequence to sequence for sets", "venue": "arXiv preprint arXiv:1511.06391,", "year": 2015 }, { "authors": [ "Longguang Wang", "Yingqian Wang", "Zhengfa Liang", "Zaiping Lin", "Jungang Yang", "Wei An", "Yulan Guo" ], "title": "Learning parallax attention for stereo image super-resolution", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Xintao Wang", "Kelvin CK Chan", "Ke Yu", "Chao Dong", "Chen Change Loy" ], "title": "Edvr: Video restoration with enhanced deformable convolutional networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2019 }, { "authors": [ "Yingqian Wang", "Longguang Wang", "Jungang Yang", "Wei An", "Yulan Guo" ], "title": "Flickr1024: A large-scale dataset for stereo image super-resolution", "venue": "In Proceedings of the IEEE International Conference on Computer Vision Workshops,", "year": 2019 }, { "authors": [ "Zhihao Wang", "Jian Chen", "Steven CH Hoi" ], "title": "Deep learning for image super-resolution: A survey", "venue": "arXiv preprint arXiv:1902.06068,", "year": 2019 }, { "authors": [ "Erwin Wolters", "Wouter Dierckx", "J Dries", "Else Swinnen" ], "title": "Proba-v products user manual", "venue": "VITO. http://probav. vgt. vito. be/sites/default/files/Product User Manual. 
pdf,", "year": 2014 }, { "authors": [ "Li Xu", "Jimmy SJ Ren", "Ce Liu", "Jiaya Jia" ], "title": "Deep convolutional neural network for image deconvolution", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Bo Yan", "Chuming Lin", "Weimin Tan" ], "title": "Frame and feature-context video super-resolution", "venue": "arXiv preprint arXiv:1909.13057,", "year": 2019 }, { "authors": [ "Jianchao Yang", "John Wright", "Thomas S Huang", "Yi Ma" ], "title": "Image super-resolution via sparse representation", "venue": "IEEE transactions on image processing,", "year": 2010 }, { "authors": [ "Roman Zeyde", "Michael Elad", "Matan Protter" ], "title": "On single image scale-up using sparse-representations", "venue": "In International conference on curves and surfaces,", "year": 2010 } ]
[ { "heading": "1 INTRODUCTION", "text": "Multiple low-resolution images collectively contain more information than any individual lowresolution image, due to minor geometric displacements, e.g. shifts, rotations, atmospheric turbulence, and instrument noise. Multi-Frame Super-Resolution (MFSR) (Tsai, 1984) aims to reconstruct hidden high-resolution details from multiple low-resolution views of the same scene. Single Image Super-Resolution (SISR), as a special case of MFSR, has attracted much attention in the computer vision, machine learning and deep learning communities in the last 5 years, with neural networks learning complex image priors to upsample and interpolate images (Xu et al., 2014; Srivastava et al., 2015; He et al., 2016). However, in the meantime not much work has explored the learning of representations for the more general problem of MFSR to address the additional challenges of co-registration and fusion of multiple low-resolution images.\nThis paper explores how Multi-Frame Super-Resolution (MFSR) can benefit from recent advances in learning representations with neural networks. To the best of our knowledge, this work is the first to introduce a deep-learning approach that solves the co-registration, fusion and registration-at-theloss problems in an end-to-end learning framework.\nPrompting this line of research is the increasing drive towards planetary-scale Earth observation to monitor the environment and human rights violations. Such observation can be used to inform policy, achieve accountability and direct on-the-ground action, e.g. within the framework of the Sustainable Development Goals (Jensen & Campbell, 2019).\nNomenclature Registration is the problem of estimating the relative geometric differences between two images (e.g. due to shifts, rotations, deformations). Fusion, in the MFSR context, is the problem of mapping multiple low-res representations into a single representation. By coregistration, we mean the problem of registering all low-resolution views to improve their fusion. By registration-at-the-loss, we mean the problem of registering the super-resolved reconstruction\nto the high-resolution ground-truth prior to computing the loss. This gives rise to the notion of a registered loss.\nCo-registration of multiple images is required for longitudinal studies of land change and environmental degradation. The fusion of multiple images is key to exploiting cheap, high-revisit-frequency satellite imagery, but of low-resolution, moving away from the analysis of infrequent and expensive high-resolution images. Finally, beyond fusion itself, super-resolved generation is required throughout the technical stack: both for labeling, but also for human oversight (Drexler, 2019) demanded by legal context (Harris et al., 2018).\nSummary of contributions\n• HighRes-net: We propose a deep architecture that learns to fuse an arbitrary number of lowresolution frames with implicit co-registration through a reference-frame channel.\n• ShiftNet: Inspired by HomographyNet (DeTone et al., 2016), we define a model that learns to register and align the super-resolved output of HighRes-net, using ground-truth high-resolution frames as supervision. This registration-at-the-loss mechanism enables more accurate feedback from the loss function into the fusion model, when comparing a super-resolved output to a ground truth high resolution image. 
Otherwise, an MFSR model would naturally yield blurry outputs that compensate for the lack of registration by averaging over sub-pixel shifts and misalignments in the loss.

• By combining the two components above, we contribute the first architecture to learn fusion and registration end-to-end.

• We test and compare our approach to several baselines on real-world imagery from the PROBA-V satellite of ESA. Our performance has topped the Kelvins competition on MFSR, organized by the Advanced Concepts Team of ESA (Märtens et al., 2019) (see Section 5).

The rest of the paper is divided as follows: in Section 2, we discuss related work on SISR and MFSR; Section 3 outlines HighRes-net and Section 4 presents ShiftNet, a differentiable registration component that drives our registered loss mechanism during end-to-end training. We present our results in Section 5, and in Section 6 we discuss opportunities, limitations and risks of super-resolution." }, { "heading": "2 BACKGROUND", "text": "" }, { "heading": "2.1 MULTI-FRAME SUPER-RESOLUTION", "text": "How much detail can we resolve in the digital sample of some natural phenomenon? Nyquist (1928) observed that it depends on the instrument's sampling rate and the oscillation frequency of the underlying natural signal. Shannon (1949) built a sampling theory that explained Nyquist's observations when the sampling rate is constant (uniform sampling) and determined the conditions of aliasing in a sample. Figure 2 illustrates this phenomenon.

[Figure 2 shows a chirp signal sampled at 1 kHz (high resolution) and at 250 Hz (low resolution), together with the DFT magnitude of each sample.] Figure 2: Top: A chirp harmonic oscillator $\sin(2\pi\omega(t)\,t)$, with instantaneous frequency $\omega(t)$. Left: The shape of the high-resolution sample resembles the underlying chirp signal. Right: Close to $t = 1$, the apparent frequency of the low-resolution sample does not match that of the chirp. This is an example of aliasing (shown in red at its most extreme), and it happens when the sampling rate falls below the Nyquist rate $s_N = 2 \cdot s_B$, where $s_B$ is the highest non-zero frequency of the signal.

Sampling at high resolution (left) maintains the frequency of the chirp signal (top). When sampling at a lower resolution (right), this apparent chirp frequency is lost due to aliasing, which means that the lower-resolution sample has a fundamentally smaller capacity for resolving the information of the natural signal, and a higher sampling rate can resolve more information.
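To make the aliasing effect concrete, the following short NumPy sketch (an illustration added here, not code from the paper; the sampling rates mirror those in Figure 2, and the linear-chirp parameters and window size are assumptions) samples the same kind of chirp at a high and a low rate and estimates the apparent frequency near $t = 1$ from the zero-crossing rate:

import numpy as np

f_hr, f_lr = 1000, 250                      # sampling rates of the two samples (Hz)

def chirp(t, f0=10.0, f1=200.0):
    # Linear chirp whose instantaneous frequency sweeps from f0 to f1 over one second.
    return np.sin(2.0 * np.pi * (f0 + 0.5 * (f1 - f0) * t) * t)

def apparent_freq(x, fs, window=0.2):
    # Rough apparent-frequency estimate near t = 1 from the zero-crossing rate.
    tail = x[-int(window * fs):]
    crossings = np.sum(np.abs(np.diff(np.signbit(tail).astype(int))))
    return crossings / (2.0 * window)

for fs in (f_hr, f_lr):
    t = np.arange(0.0, 1.0, 1.0 / fs)
    print(f"{fs:4d} Hz sample -> apparent frequency near t=1: "
          f"{apparent_freq(chirp(t), fs):.0f} Hz")

# The chirp reaches 200 Hz, above the 125 Hz Nyquist limit of the 250 Hz sample,
# so the low-resolution sample reports a spurious (aliased) lower frequency.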
Shannon's sampling theory has since been generalized to multiple interleaved sampling frames (Papoulis, 1977; Marks, 2012). One result of the generalized sampling theory is that we can go beyond the Nyquist limit of any individual uniform sample by interleaving several uniform samples taken concurrently. When an image is down-sampled to a lower resolution, its high-frequency details are lost permanently and cannot be recovered from any image in isolation. However, by combining multiple low-resolution images, it becomes possible to recover the original scene at a higher resolution.

Moreover, different low-resolution samples may be sampled at different phases, such that the same high-resolution frequency information will be packed with a phase shift. As a consequence, when multiple low-resolution samples are available, the fundamental challenge of MFSR is de-aliasing, i.e. disentangling the high-frequency components (Tsai, 1984).

The first work on MFSR (Tsai, 1984) considered the reconstruction of a high-resolution image as a fusion of co-registered low-resolution images in the Fourier domain. With proper registration and fusion (Irani & Peleg, 1991; Fitzpatrick et al., 2000; Capel & Zisserman, 2001), a composite super-resolved image can reveal some of the original high-frequency detail that would not have been accessible from a single low-resolution image. In this work, we introduce HighRes-net, which aims to provide an end-to-end deep learning framework for MFSR settings.

Relation to Video and Stereo Super-Resolution. While there are obvious similarities to Video SR (Tao et al., 2017; Sajjadi et al., 2018; Yan et al., 2019; Wang et al., 2019b) and Stereo SR (Wang et al., 2019d; Wang et al., 2019a), the setting of this work differs in several ways: HighRes-net learns to super-resolve sets, not sequences, of low-res views. Video SR relies on motion estimation from a sequence of observations, and prediction at time t = T relies on predictions at t < T (an autoregressive approach), whereas in our case we predict a single image from an unordered set of low-res inputs. Moreover, the low-res views are multi-temporal (taken at different times).

Video SR methods assume that the input is a temporal sequence of frames, so motion or optical flow can be estimated to super-resolve the sequence. In this work, we do not assume low-res inputs to be ordered in time. Our training input is a set of low-res views with unknown timestamps and our target output is a single image — not another sequence." }, { "heading": "2.2 A PROBABILISTIC APPROACH", "text": "In addition to aliasing, MFSR deals with random processes like noise, blur and geometric distortions – all contributing to random low-resolution images. Traditionally, MFSR methods assume a-priori knowledge of the data-generating motion model, blur kernel, noise level and degradation process; see, for example, Pickup et al. (2006). Given multiple low-resolution images, the challenge of MFSR is to reconstruct a plausible higher-resolution image that could have generated the observed low-resolution images. Optimization methods aim to improve an initial guess by minimizing an error between simulated and observed low-resolution images. These methods traditionally model the additive noise and prior knowledge about natural images explicitly, to constrain the parameter search space and derive objective functions, using e.g. Total Variation (Chan & Wong, 1998; Farsiu et al., 2004), Tikhonov regularization (Nguyen et al., 2001) or the Huber potential (Pickup et al., 2006) to define appropriate constraints on images.
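For concreteness, this family of objectives can be summarized by a generic regularized least-squares formulation (a standard form consistent with the references above, not an equation taken from this paper):

$$\hat{x} \;=\; \arg\min_{x} \sum_{i=1}^{K} \big\| D\,B\,W_i\,x - y_i \big\|_2^2 \;+\; \lambda\,\mathrm{TV}(x),$$

where $y_i$ are the observed low-resolution images, $W_i$ warps the latent high-resolution image $x$ according to the (assumed known) motion of view $i$, $B$ models the blur kernel, $D$ is the down-sampling operator, and $\mathrm{TV}(\cdot)$ is, e.g., a Total Variation prior weighted by $\lambda$.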
Total Variation (Chan & Wong, 1998; Farsiu et al., 2004), Tikhonov regularization (Nguyen et al., 2001) or Huber potential (Pickup et al., 2006) to define appropriate constraints on images.\nIn some situations, the image degradation process is complex or not available, motivating the development of nonparametric strategies. Patch-based methods learn to form high-resolution images directly from low-resolution patches, e.g. with k-nearest neighbor search (Freeman et al., 2002; Chang et al., 2004), sparse coding and sparse dictionary methods (Yang et al., 2010; Zeyde et al., 2010; Kim & Kwon, 2010)). The latter represents images in an over-complete basis and allows for sharing a prior across multiple sites.\nIn this work, we are particularly interested in super-resolving satellite imagery. Much of the recent work in Super-Resolution has focused on SISR for natural images. For instance, Dong et al. (2014) showed that training a CNN for super-resolution is equivalent to sparse coding and dictionary based approaches. Kim et al. (2016) proposed an approach to SISR using recursion to increase the receptive field of a model while maintaining capacity by sharing weights. Many more networks and learning strategies have recently been introduced for SISR and image deblurring. Benchmarks for SISR (Timofte et al., 2018), differ mainly in their upscaling method, network design, learning strategies, etc. We refer the reader to (Wang et al., 2019d) for a more comprehensive review.\nFew deep-learning approaches have considered the more general MFSR setting and attempted to address it in an end-to-end learning framework. Reecently, Kawulok et al. (2019) proposed a shiftand-add method and suggested “including image registration” in the learning process as future work.\nIn the following sections, we describe our approach to solving both aspects of the registration problem – co-registration and registration-at-the-loss – in a memory-efficient manner." }, { "heading": "3 HIGHRES-NET: MFSR BY RECURSIVE FUSION", "text": "In this section, we present HighRes-net, a neural network for multi-frame super-resolution inside a single spectral band (greyscale images), using joint co-registration and fusion of multiple lowresolution views in an end-to-end learning framework. From a high-level, HighRes-net consists of an encoder-decoder architecture and can be trained by stochastic gradient descent using highresolution ground truth as supervision, as shown in Figure 3.\nNotation We denote by θ the parameters of HighRes-net trained for a given upscaling factor γ. LRv,i ∈ RC×W×H is one of a set of K low-resolution views from the same site v, where C, W and H are the number of input channels, width and height of LRv,i, respectively. We denote by SRθv = F γ θ (LRv,1, . . . , LRv,K), the output of HighRes-net and by HRv ∈ RC×γW×γH a ground truth high-resolution image. We denote by [T1, T2] the concatenation of two images channelwise. In the following we supress the index v over sites for clarity.\nHighRes-Net consists of three main steps: (1) encoding, which learns relevant features associated with each low-resolution view, (2) fusion, which merges relevant information from views within the same scene, and (3) decoding, which proposes a high-resolution reconstruction from the fused summary." 
}, { "heading": "3.1 ENCODE, FUSE, DECODE", "text": "Embed, Encode The core assumption of MFSR is that the low-resolution image set contains collectively more information than any single low-resolution image alone, due to differences in photometric or spatial coverage for instance. However, the redundant low frequency information in multiple views can hinder the training and test performance of a MFSR model. We thus compute a reference image ref as a shared representation for multiple low-resolution views (LRi) K i=1 and embed each image jointly with ref. This highlights differences across the multiple views (Sanchez et al., 2019), and potentially allows HighRes-net to focus on difficult high-frequency features such as crop boundaries and rivers during super-resolution. The shared representation or reference image intuitively serves as an anchor for implicitly aligning and denoising multiple views in deeper layers. We refer to this mechanism as implicit co-registration.\nHighRes-net’s embedding layer embθ consists of a convolutional layer and two residual blocks with PReLu activations (He et al., 2015) and is shared across all views. The embedded hidden states s0i are computed in parallel as follows:\nref (c, i, j) = median (LR1(c, i, j), . . . , LRK(c, i, j)) , such that ref ∈ RC×W×H (1) s0i = embθ ([LRi, ref ]) ∈ RCh×W×H , (2)\nwhere Ch denotes the channels of the hidden state.\nThe imageset is padded if the number of low-res views K ′ is not a power of 2: we pad the set with dummy zero-valued views, such that the new size of the imageset K is the next power of 2. See Algorithm 1, line 1.\nFuse The embedded hidden states s0i are then fused recursively, halving by two the number of low-resolution states at each fusion step t, as shown in Figure 4. Given a pair of hidden states sti, s t j , HighRes-net computes a new representation:[ s̃ti, s̃ t j ] = [ sti, s t j ] + gθ ([ sti, s t j ]) ∈ R2Ch×W×H (3)\nst+1i = s t i + αjfθ ( s̃ti, s̃ t j ) ∈ RCh×W×H , (4)\nwhere s̃ti, s̃ t j are intermediate representations; gθ is a shared-representation within an inner residual block (equation 3); fθ is a fusion block, and αj is 0 if the j-th low-resolution view is part of the padding, and 1 otherwise. fθ squashes 2Ch input channels into Ch channels and consists of a (conv2d+PreLu). Intuitively, gθ aligns the two representations and it consists of two (conv2d + PreLU) layers.\nThe blocks (fθ, gθ) are shared across all pairs and depths, giving it the flexibility to deal with variable size inputs and significantly reduce the number of parameters to learn.\nUpscale and Decode After T = log2K fusion layers, the final low-resolution encoded state sTi contains information from all K input views. Any information of a spatial location that was initially missing from LRi, is now encoded implicitly in sTi . T is called the depth of HighRes-net. Only then, sTi is upsampled with a deconvolutional layer (Xu et al., 2014) to a higher-resolution space sTHR ∈ RCh×γW×γH . The hidden high-resolution encoded state sTHR is eventually convolved with a 1×1 2D kernel to produce a final super-resolved image SRθ ∈ RC×γW×γH .\nThe overall architecture of HighRes-net is summarized in Figure 3(a) and the pseudocode for the forward pass is given in Algorithm 1.\nAlgorithm 1: HighRes-net forward pass Input: low-res views LR1 . . . LRK′\n1 (LR1 . . . LRK , α1 . . . αK)← pad (LR1 . . . LRK′) // pad inputs to next power of 2 2 s0i ← encode (LRi) // parallelized across 1 . . . K 3 T ← log2K // fusion depth 4 k ← K 5 for t = 1 . . . 
" }, { "heading": "4 REGISTRATION MATTERS", "text": "Co-registration matters for fusion. HighRes-net learns to implicitly co-register multiple low-resolution views $LR_i$ and fuse them into a single super-resolved image $SR_{\theta}$.

A more explicit registration-at-the-loss can also be used when measuring similarity metrics and distances between $SR_{\theta}$ and $HR$. Indeed, training HighRes-net alone, by minimizing a reconstruction error such as the mean-squared error between $SR_{\theta}$ and $HR$, leads to blurry outputs, since the neural network has to compensate for pixel and sub-pixel misalignments between its output $SR_{\theta}$ and $HR$.

Here, we present ShiftNet-Lanczos, a neural network that can be paired with HighRes-net to account for pixel and sub-pixel shifts in the loss, as depicted in Figure 3(b). Our ablation study (Appendix A.2) and qualitative visual analysis suggest that this strategy helps HighRes-net learn to super-resolve and leads to clearly improved results." }, { "heading": "4.1 SHIFTNET-LANCZOS", "text": "ShiftNet learns to align a pair of images with sub-pixel translations. It registers pairs of images by predicting two parameters defining a global translation. Once a sub-pixel translation is found for a given pair of images, it is applied through a Lanczos shift kernel to align the images.

ShiftNet. The architecture of ShiftNet is adapted from HomographyNet (DeTone et al., 2016). Translations are a special case of homographies; in this sense, ShiftNet is simply a special case of HomographyNet, predicting 2 shift parameters instead of 8 homography parameters. See Appendix A.3 for details on the architecture of ShiftNet.

Algorithm 2: Sub-pixel registered loss through ShiftNet-Lanczos
Input: $SR_{\theta}$, $HR$  // super-resolved view, high-resolution ground truth
1: $(\Delta x, \Delta y) \leftarrow \mathrm{ShiftNet}(SR_{\theta}, HR)$  // register SR to HR
2: $\kappa_{\Delta} \leftarrow \mathrm{LanczosShiftKernel}(\Delta)$  // 1D Lanczos kernels for x and y sub-pixel shifts
3: $SR_{\theta,\Delta} \leftarrow SR_{\theta} * \kappa_{\Delta x} * \kappa_{\Delta y}$  // 2D sub-pixel shift by separable 1D convolutions
4: $\ell_{\theta,\Delta} \leftarrow \mathrm{loss}(SR_{\theta,\Delta}, HR)$  // sub-pixel registered loss
Output: $\ell_{\theta,\Delta}$

One major difference from HomographyNet is the way we train ShiftNet: in (DeTone et al., 2016), HomographyNet is trained on synthetically transformed data, supervised with ground-truth homography matrices. In our setting, ShiftNet is trained to cooperate with HighRes-net, towards the common goal of MFSR (see the paragraph Objective function below).

Lanczos shift & interpolation kernel. To shift and align an image by a sub-pixel amount, it must be convolved with a filter that shifts for the integer part and interpolates for the fractional part of the translation. Standard options for interpolation include the nearest-neighbor, sinc, bilinear, bicubic, and Lanczos filters (Turkowski, 1990). The sinc filter has infinite support, unlike any digital signal, so in practice it produces ringing or ripple artifacts — an example of the Gibbs phenomenon. The nearest-neighbor and bilinear filters do not induce ringing, but strongly attenuate the higher-frequency components (over-smoothing), and can even alias the image. The Lanczos filter reduces the ringing significantly by using only a finite part of the sinc (up to a few lobes from the origin). Experimentally, we found the Lanczos filter to perform the best.
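The following NumPy sketch (illustrative, with an assumed window size a = 3; not the paper's exact implementation) shows how a 1D Lanczos kernel for a fractional shift can be built and applied:

import numpy as np

def lanczos_kernel(shift, a=3):
    # 1D Lanczos-windowed sinc kernel realizing the fractional part of `shift`.
    frac = shift - np.floor(shift)
    x = np.arange(-a, a + 1) - frac
    k = np.sinc(x) * np.sinc(x / a)   # sinc interpolator times the Lanczos window
    k[np.abs(x) > a] = 0.0            # finite support: a lobes on each side
    return k / k.sum()                # normalize so constant regions are preserved

def subpixel_shift_1d(signal, shift, a=3):
    # Integer part via roll, fractional part via convolution with the Lanczos kernel.
    # (Sign conventions vary; negate `shift` if the output moves the wrong way.)
    rolled = np.roll(signal, int(np.floor(shift)))
    return np.convolve(rolled, lanczos_kernel(shift, a), mode="same")

# A 2D sub-pixel shift is separable: convolve rows with the x-kernel and
# columns with the y-kernel, as in line 3 of Algorithm 2.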
Objective function. In our setting, registration benefits super-resolution: HighRes-net receives more informative gradient signals when its output is aligned with the ground-truth high-resolution image. Conversely, super-resolution benefits registration, since good features are key to aligning images (Clement et al., 2018). We thus trained HighRes-net and ShiftNet-Lanczos in a cooperative setting, where both neural networks work together to minimize an objective function, as opposed to an adversarial setting where a generator tries to fool a discriminator. HighRes-net infers a latent super-resolved variable and ShiftNet maximises its similarity to a ground-truth high-resolution image with sub-pixel shifts.

By predicting and applying sub-pixel translations in a differentiable way, our approaches to registration and super-resolution can be combined in an end-to-end learning framework. ShiftNet predicts a sub-pixel shift $\Delta$ from a pair of high-resolution images. The predicted transformation is applied with Lanczos interpolation to align the two images at a pixel level. ShiftNet and HighRes-net are trained jointly to minimize a common loss function, using backpropagation and stochastic gradient descent. Our objective function is composed of a registered reconstruction loss computed as in Algorithm 2. In our case, we used the corrected clear PSNR metric (cPSNR) chosen by ESA, which is a variant of the mean squared error designed to correct for brightness and clouds in satellite images (Märtens et al., 2019), but the proposed architecture is decoupled from the choice of loss.

Algorithm 2 gives the pseudo-code for computing the alignment loss $\ell_{\theta,\Delta}$. We further regularize the L2 norm of ShiftNet's output with a hyperparameter $\lambda$, and our final joint objective is given by:

$$\mathcal{L}_{\theta,\Delta}(SR_{\theta}, HR) = \ell_{\theta,\Delta} + \lambda \lVert \Delta \rVert_2. \quad (5)$$
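In code, the cooperative objective of Eq. (5) can be sketched as follows (a PyTorch-style illustration; highres_net, shift_net, lanczos_shift and registered_loss are placeholders for the components described above, and lam defaults to the ShiftNet regularization weight reported in Appendix A.1):

def joint_training_step(lr_views, hr, highres_net, shift_net, lanczos_shift,
                        registered_loss, optimizer, lam=1e-6):
    # One cooperative optimization step for HighRes-net + ShiftNet-Lanczos, Eq. (5).
    sr = highres_net(lr_views)              # fuse low-res views into one SR image
    delta = shift_net(sr, hr)               # predicted sub-pixel shift, shape (B, 2)
    sr_aligned = lanczos_shift(sr, delta)   # differentiable sub-pixel registration
    loss = registered_loss(sr_aligned, hr) + lam * delta.norm(dim=-1).mean()
    optimizer.zero_grad()
    loss.backward()                         # gradients reach both networks
    optimizer.step()
    return loss.item()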
" }, { "heading": "5 EXPERIMENTS AND RESULTS", "text": "Prior SR work has focused on super-resolving low-res images that are artificially generated by simple bilinear down-sampling (Bulat et al., 2018). The PROBA-V satellite has separate cameras on board for capturing high-res / low-res pairs. As far as we know, the PROBA-V dataset is the first publicly available dataset for MFSR that contains naturally occurring low-res and high-res pairs. This is in contrast to most of the work in SR (SISR, MFSR, Video SR, Stereo SR), which synthetically down-samples high-res images and frames (Wang et al., 2019c; Nah et al., 2019). Methods that are trained on artificially downscaled datasets fail to produce good results when applied to real-world low-resolution, low-quality images (Shocher et al., 2018). For this reason, we experiment only on PROBA-V, a dataset that does not suffer from biases induced by artificial down-sampling." }, { "heading": "5.1 PROBA-V KELVIN DATASET", "text": "The performance of our method is illustrated with satellite imagery from the Kelvin competition, organized by ESA's Advanced Concepts Team (ACT).

The PROBA-V Kelvin dataset (Märtens et al., 2019) contains 1450 scenes (RED and NIR spectral bands) from 74 hand-selected Earth regions around the globe at different points in time. The scenes are split into 1160 scenes for training and 290 scenes for testing. Each data point consists of exactly one 100m-resolution image, stored as a 384×384 grey-scale pixel image (HR), and several 300m-resolution images from the same scene, stored as 128×128 grey-scale pixel images (LR), spaced days apart. We refer the reader to the PROBA-V manual (Wolters et al., 2014) for further details on image acquisition.

Each scene comes with at least 9 low-res views, and an average of 19. Each view comes with a noisy quality map. The quality map is a binary map that indicates concealed pixels due to volatile features such as clouds, cloud shadows, ice, water and snow. The sum of clear pixels (1s in the binary mask) is defined as the clearance of a low-res view. These incidental and noisy features can change fundamental aspects of the image, such as the contrast, brightness, illumination and landscape features. We use the clearance scores to randomly sample from the imageset of low-res views, such that views with higher clearance are more likely to be selected. This strategy helps to prevent overfitting. See Appendix A.4 for more details.

Working with missing or noisy values. A quality map can be used as a binary mask to indicate noisy or occluded pixels due to clouds, snow, or other volatile objects. Such a mask can be fed as an additional input channel to the respective low-res view, in the same fashion as the reference frame. When missing-value masks are available, neural networks can learn which parts of the input are anomalous, noisy, or missing (see e.g. Che et al. (2018)). In satellite applications where cloud masks are not available, other segmentation methods would be needed to infer such masks as a preprocessing step (e.g. Long et al. (2015)). In the case of the PROBA-V dataset, we get improved results when we make no use of the masks provided. Instead, we use the masks only to inform the sampling scheme within the low-res imageset to prevent overfitting." }, { "heading": "5.2 EXPERIMENTS", "text": "Across all experiments, we used the same hyperparameters, reported in Appendix A.1. By default, each imageset is padded to 32 views for training and testing, unless specified otherwise. Our PyTorch implementation requires less than 9h of training on a single NVIDIA V100 GPU. At test time, super-resolving a 128×128 imageset by a factor of 3 takes less than 0.2 seconds. Our code is made available online¹.

We evaluated different models on ESA's Kelvin competition. Our best model, HighRes-net trained jointly with ShiftNet-Lanczos, scored consistently at the top of the public and final leaderboard; see Table 1. In the following, we discuss several baselines and report our experiments." }, { "heading": "5.3 COMPARISONS", "text": "• ESA baseline upsamples each low-resolution view separately with bicubic up-sampling and averages those of maximum clearance.
• SRResNet (Ledig et al., 2017) is a deep-learning SISR baseline.
• SRResNet-1 + ShiftNet was trained jointly with ShiftNet.
• SRResNet-6 + ShiftNet differs from the previous model at test time only. It independently upsamples 6 low-resolution views with SRResNet-1, co-registers the super-resolved images using ShiftNet, and averages the 6 aligned super-resolved images into a final prediction.
• ACT baseline (Märtens et al., 2019) is a convolutional neural network with five fixed channels for the five clearest low-resolution views.
• DeepSUM baseline (Molini et al., 2019) can be seen as a variant of SRResNet-6 + ShiftNet. Multiple low-res views are independently upsampled, then co-registered and fused into a single image.
• HighRes-net + ShiftNet are described in Sections 3 and 4. Upsampling is done in the last step, as opposed to (Molini et al., 2019).
• Ensemble: an ensemble of two trained (HighRes-net + ShiftNet) models, one with K=16 and one with K=32 input views, whose outputs are averaged.
¹ https://anonymous.4open.science/r/b3404d0d-e541-4f52-bbe9-f84f2a52972e/"
}, { "heading": "5.4 ESA KELVIN LEADERBOARD", "text": "The Kelvin competition used the corrected clear PSNR (cPSNR) quality metric as the standardized measure of performance. The cPSNR is a variant of the Peak Signal to Noise Ratio (PSNR) used to compensate for pixel-shifts and brightness bias. We refer the reader to (Märtens et al., 2019) for the motivation and derivation of this quality metric. The cPSNR metric is normalized by the score of the ESA baseline algorithm so that a score smaller than 1 means “better than the baseline” and lower is better. We also use it as our training objective with sub-pixel registration (see also section 3(b) on ShiftNet)." }, { "heading": "5.4.1 ABLATION STUDY", "text": "We further ran an ablation study on the available labeled data (1450 image sets), split in 90% / 10% for training and testing. Our results suggest that more low-resolution views benefit the reconstruction error, plateuing after 16 views, see Appendix A.2. Another finding is that registration matters for MFSR, both in co-registering low-res views, and registering-at-the-loss, see Appendix A.3. Finally, selecting the k clearest views for fusion can lead to ovefitting. One remedy is to randomly sample the views with a bias for clearance, see A.4." }, { "heading": "6 DISCUSSION", "text": "" }, { "heading": "6.1 THE IMPORTANCE OF GROUNDED DETAILS", "text": "The PROBA-V satellite (Dierckx et al., 2014) was launched by ESA to monitor Earth’s vegetation growth, water resources and agriculture. As a form of data fusion and enrichment, multi-frame super-resolution could enhance the vision of such satellites for scientific and monitoring applications (Carlson & Ripley, 1997; Pettorelli et al., 2005). More broadly, satellite imagery can help NGOs and non-profits monitor the environment and human rights (Cornebise et al., 2018; Helber et al., 2018; Rudner et al., 2019; Rolnick et al., 2019) at scale, from space, ultimately contributing to the UN sustainable development goals. Low-resolution imagery is cheap or sometimes free, and it is frequently updated. However, with the addition of fake or imaginary details, such enhancement wouldn’t be valuable as scientific, legal, or forensic evidence." }, { "heading": "6.2 FUTURE WORK", "text": "Registration matters at the loss stage but also at the fusion stage. The latter is not explicit in our model and the reason why and how it works is less understood. Learning to sample a reference frame\nand learning to fuse multiple representations with attention could also be a promising approach to extend HighRes-net.\nEnsuring authenticity of detail is a major challenge and quantifying uncertainty of super-resolved images is an important line of future work for real world applications. Along this line of research, the question of how to evaluate a super-resolved image is important for downstream tasks and, more generally, similarity metrics remain an open question for many computer visions tasks (Bruna et al., 2015; Johnson et al., 2016; Isola et al., 2017; Ledig et al., 2017)." }, { "heading": "6.3 CONCLUSION", "text": "In this paper, we presented HighRes-net – the first deep learning approach to multi-frame superresolution that learns typical sub-tasks of MFSR in an end-to-end fashion: (i) co-registration, (ii) fusion, (iii) up-sampling, and (iv) registration-at-the-loss.\nIt recursively fuses a variable number of low-resolution views by learning a global fusion operator. 
The overall fusion also aligns all low-resolution views with an implicit co-registration mechanism through the reference channel. We also introduced ShiftNet-Lanczos, a network that learns to register and align the super-resolved output of HighRes-net with a high-resolution ground truth.

Registration is important both to align multiple low-resolution inputs (co-registration) and to compute similarity metrics between shifted signals. Our experiments suggest that an end-to-end cooperative setting (HighRes-net + ShiftNet-Lanczos) helps with training and test performance. By design, our approach is fast to train, faster to test, and has a low memory footprint, since it does the bulk of the computational work (co-registration + fusion) on multiple images while maintaining their low-resolution height and width.

There is an ongoing proliferation of low-resolution yet high-revisit, low-cost satellite imagery, but it often lacks the detailed information of expensive high-resolution imagery. We believe MFSR can unlock this potential for NGOs and non-profits that contribute to the UN Sustainable Development Goals." }, { "heading": "A APPENDIX", "text": "A.1 EXPERIMENTAL DETAILS

We trained our models on low-resolution patches of size 64×64. HighRes-net's architecture is reported in Table 3. We denote by Conv2d(in, out, k, s, p) a conv2D layer with in and out input/output channels, a kernel of size (k, k), stride (s, s) and padding p. We used the ADAM optimizer (Kingma & Ba, 2014) with default hyperparameters and trained our models on batches of size 32, for 400 epochs, using 90% of the data for training and 10% for validation. Our learning rate is initialized to 0.0007 and decayed by a factor of 0.97 if the validation loss plateaus for more than 2 epochs. For the regularization of ShiftNet, we employed λ = 0.000001.

Thanks to weight sharing, HighRes-net super-resolves scenes with 32 views in 5 recursive steps, while requiring less than 600K parameters. ShiftNet has more than 34M parameters (34187648) but is dropped at test time. We report GPU memory requirements in Table 4 for reproducibility purposes.

A.2 HOW MANY FRAMES DO YOU NEED?

We trained and tested HighRes-net with ShiftNet using 1 to 32 frames. With a single image, our approach performs worse than the ESA baseline. Doubling the number of frames significantly improves both our training and validation scores. After 16 frames, our model's performance stops increasing, as shown in Figure 5.

A.3 REGISTRATION MATTERS

Registered loss. The only explicit registration that we perform is at the loss stage, to give the model partial credit for a solution that is enhanced but otherwise mis-registered with respect to the ground truth. We trained our base model HighRes-net without ShiftNet-Lanczos and observed a drop in performance, as shown in Table 5. Registration matters, and aligning outputs with targets helps HighRes-net generate sharper outputs and achieve competitive results.

Implicit co-registration. The traditional practice in MFSR is to explicitly co-register the LR views prior to super-resolution (Tsai, 1984; Molini et al., 2019). The knowledge of sub-pixel misalignments tells an algorithm which pieces of information to fuse from each LR image for any pixel in the SR output. Contrary to the conventional practice in MFSR, we propose implicit co-registration by pairing LR views with a reference frame, a.k.a. anchor. In this sense, we never explicitly compute the relative shifts between any LR pair. Instead, we simply stack each view with a chosen reference frame as an additional channel of the input. We call this strategy implicit co-registration. We found this strategy to be effective in the following ablation study, which addresses the impact of the choice of the reference frame, a.k.a. anchor.

We observe that the median reference is the most effective in terms of train and test score. We suspect the median performs better than the mean because the median is more robust to outliers and can help denoise the LR views. Interestingly, training and testing without a shared reference performed worse than the ESA baseline. This shows that co-registration (implicit or explicit) matters, plausibly because without it the model lacks the information needed to align and fuse the multiple views.

ShiftNet architecture. ShiftNet has 8 layers of (conv2D + BatchNorm2d + ReLU). Layers 2, 4 and 6 are followed by MaxPool2d. The final output is flattened to a vector x of size 32768. Then, we compute a vector of size 1024, x = ReLU(fc1(dropout(x))). The final shift prediction is fc2(x), of size 2. The bulk of the parameters come from fc1, with 32768 × 1024 weights. These alone account for 99% of ShiftNet's parameters. Adding a MaxPool2d on top of layer 3, 5, 7 or 8 would halve the parameters of ShiftNet.

A.4 TOWARDS PERMUTATION INVARIANCE

A desirable property of a fusion model acting on an unordered set of images is permutation invariance: the output of the model should be invariant to the order in which the LR views are fused. An easy way to encourage permutation-invariant neural networks is to randomly shuffle the inputs at training time before feeding them to the model (Vinyals et al., 2015).

In addition to randomization, we still want to give more importance to clear LR views (with a high clearance score), which can be done by sorting them by clearance. A good trade-off between uniform sampling and deterministic sorting by clearance is to sample k LR views without replacement, with a bias towards higher clearance:

$$p(i \mid C_1, \ldots, C_k) = \frac{e^{\beta C_i}}{\sum_{j=1}^{k} e^{\beta C_j}}, \quad (6)$$

where $k$ is the total number of LR views, $C_i$ is the clearance score of $LR_i$, and $\beta$ regulates the bias towards higher clearance scores. When $\beta = 0$, this sampling strategy corresponds to uniform sampling, and when $\beta = +\infty$, it corresponds to picking the k clearest views deterministically. Our default model was trained with $\beta = 50$ and our experiments are reported in Table 7.
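One convenient way to implement this biased sampling without replacement (an illustrative sketch, not the paper's code) is the Gumbel-top-k trick, which perturbs the scaled clearances with Gumbel noise and keeps the k largest:

import numpy as np

def sample_views(clearances, k, beta=50.0, rng=np.random):
    # Draw k view indices without replacement, biased towards high clearance (Eq. 6).
    logits = beta * np.asarray(clearances, dtype=np.float64)
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    return np.argsort(logits + gumbel)[::-1][:k]

# beta = 0 recovers uniform sampling; beta -> infinity deterministically
# picks the k clearest views.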
From Table 7, $\beta = \infty$ reaches the best training score and the worst testing score. For $\beta = 50$ and $\beta = 0$, the train/test gap is much smaller. This suggests that the deterministic strategy overfits and that randomness prevents overfitting (diversity matters). On the other hand, $\beta = 50$ performs significantly better than $\beta = 0$, suggesting that biasing the model towards higher clearances can be beneficial, i.e., clouds matter too." } ]
2019
null
SP:8719843b0fa8359a27642c1ffe94e17b748a0a60
[ "The paper presents a class-agnostic method for tracking multiple moving objects (MOHART) that extends an existing single-object tracking method (Hierarchical Attentive Recurrent Tracking, HART). Similarly to HART, MOHART utilizes an attention mechanism and LSTM units. The extension form HART to MOHART is done in two main steps: HART is applied to multiple objects in parallel, with a presence variable attached to each, and a permutation-invariant network that learns the interactions between the objects.", "This paper deals with the problem of multiple object tracking and trajectory prediction in multiple frames of videos. The main focus is adding a relation-reasoning building block to the original HART framework. With multiple objects, the key is to be able to learn the permutation invariant representation during potential changing and dynamic object trajectories. The paper also uses toy examples to show that the proposed block of relation reasoning is not necessarily beneficial when the object trajectory is less random and more static. Finally, experiments on real data demonstrate that the proposed method that accounts for relation reasoning is helpful by a limited magnitude." ]
Relational reasoning—the ability to model interactions and relations between objects—is valuable for robust multi-object tracking and pivotal for trajectory prediction. In this paper we propose MOHART, a class-agnostic, end-to-end multi-object tracking and trajectory prediction algorithm, which explicitly accounts for permutation invariance in its relational reasoning. We explore a number of permutation-invariant architectures and show that multi-headed self-attention outperforms the provided baselines and better accounts for complex physical interactions in a challenging toy experiment. We show on three real-world tracking datasets that adding relational reasoning capabilities in this way increases the tracking and trajectory prediction performance, particularly in the presence of ego-motion, occlusions, crowded scenes, and faulty sensor inputs. To the best of our knowledge, MOHART is the first fully end-to-end multi-object tracking from vision approach applied to real-world data reported in the literature.
[]
[ { "authors": [ "Alexandre Alahi", "Kratarth Goel", "Vignesh Ramanathan", "Alexandre Robicquet", "Li Fei-Fei", "Silvio Savarese" ], "title": "Social LSTM: Human trajectory prediction in crowded spaces", "venue": null, "year": 2016 }, { "authors": [ "Seung-Hwan Bae", "Kuk-Jin Yoon" ], "title": "Confidence-based data association and discriminative deep appearance learning for robust online multi-object tracking", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2017 }, { "authors": [ "Peter W. Battaglia", "Razvan Pascanu", "Matthew Lai", "Danilo Rezende", "Koray Kavukcuoglu" ], "title": "Interaction Networks for Learning about Objects, Relations and Physics", "venue": null, "year": 2016 }, { "authors": [ "Tharindu Fernando", "Simon Denman", "Sridha Sridharan", "Clinton Fookes" ], "title": "Soft+ hardwired attention: An lstm framework for human trajectory prediction and abnormal event detection", "venue": "Neural networks,", "year": 2018 }, { "authors": [ "Daniel Gordon", "Ali Farhadi", "Dieter Fox" ], "title": "Real-Time Recurrent Regression Networks for Visual Tracking of Generic Objects", "venue": "RA-L,", "year": 2018 }, { "authors": [ "Agrim Gupta", "Justin Johnson", "Li Fei-Fei", "Silvio Savarese", "Alexandre Alahi" ], "title": "Social GAN: socially acceptable trajectories with generative adversarial networks", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "David Held", "Sebastian Thrun", "Silvio Savarese" ], "title": "Learning to track at 100 fps with deep regression networks", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Samira Ebrahimi Kahou", "Vincent Michalski", "Roland Memisevic" ], "title": "RATM: recurrent attentive tracking model", "venue": "IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2017 }, { "authors": [ "Margret Keuper", "Siyu Tang", "Bjorn Andres", "Thomas Brox", "Bernt Schiele" ], "title": "Motion segmentation & multiple object tracking by correlation co-clustering", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2018 }, { "authors": [ "Adam Kosiorek", "Hyunjik Kim", "Yee Whye Teh", "Ingmar Posner" ], "title": "Sequential attend, infer, repeat: Generative modelling of moving objects", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Adam R. Kosiorek", "Alex Bewley", "Ingmar Posner" ], "title": "Hierarchical attentive recurrent tracking", "venue": "Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Alon Lerner", "Yiorgos Chrysanthou", "Dani Lischinski" ], "title": "Crowds by example", "venue": "In Computer Graphics Forum,", "year": 2007 }, { "authors": [ "A. Milan", "L. Leal-Taixé", "I. Reid", "S. Roth", "K. 
Schindler" ], "title": "MOT16: A benchmark for multi-object tracking", "venue": null, "year": 2016 }, { "authors": [ "Anton Milan", "Stefan Roth", "Konrad Schindler" ], "title": "Continuous energy minimization for multitarget tracking", "venue": null, "year": 2014 }, { "authors": [ "Stefano Pellegrini", "Andreas Ess", "Konrad Schindler", "Luc Van Gool" ], "title": "You’ll never walk alone: Modeling social behavior for multi-target tracking", "venue": "In ICCV,", "year": 2009 }, { "authors": [ "Maryam Rasouli Danesh", "Srishti Yadav", "Sachini Herath", "Yasaman Vaghei", "Shahram Payandeh" ], "title": "Deep attention models for human tracking using rgbd", "venue": "Sensors, 19:750,", "year": 2019 }, { "authors": [ "Joseph Redmon", "Santosh Kumar Divvala", "Ross B. Girshick", "Ali Farhadi" ], "title": "You only look once: Unified, real-time object detection", "venue": "Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "A. Robicquet", "A. Sadeghian", "A. Alahi", "S. Savaresei" ], "title": "Learning social etiquette: Human trajectory prediction in crowded scenes", "venue": "European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Andrey Rudenko", "Luigi Palmieri", "Kai O Arras" ], "title": "Joint long-term prediction of human motion using a planning-based social force approach", "venue": "IEEE International Conference on Robotics and Automation (ICRA)", "year": 2018 }, { "authors": [ "Amir Sadeghian", "Vineet Kosaraju", "Ali Sadeghian", "Noriaki Hirose", "Hamid Rezatofighi", "Silvio Savarese. Sophie" ], "title": "An attentive gan for predicting paths compliant to social and physical constraints", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Christoph Schöller", "Vincent Aravantinos", "Florian Lay", "Alois Knoll" ], "title": "The simpler the better: Constant velocity for pedestrian motion prediction", "venue": null, "year": 1903 }, { "authors": [ "Hang Su", "Yinpeng Dong", "Jun Zhu", "Haibin Ling", "Bo Zhang" ], "title": "Crowd scene understanding with coherent recurrent neural networks", "venue": "In International Joint Conferences on Artificial Intelligence,", "year": 2016 }, { "authors": [ "Li Sun", "Zhi Yan", "Sergi Molina Mellado", "Marc Hanheide", "Tom Duckett" ], "title": "3dof pedestrian trajectory prediction learned from long-term autonomous mobile robot deployment data", "venue": "IEEE International Conference on Robotics and Automation", "year": 2018 }, { "authors": [ "Peter Trautman", "Andreas Krause" ], "title": "Unfreezing the robot: Navigation in dense, interacting crowds", "venue": "In IROS,", "year": 2010 }, { "authors": [ "Jack Valmadre", "Luca Bertinetto", "João F. Henriques", "Andrea Vedaldi", "Philip Hilaire Sean Torr" ], "title": "End-to-end representation learning for correlation filter based tracking", "venue": "IEEE Conference on Computer Vision and Pattern Recognition", "year": 2017 }, { "authors": [ "Sjoerd van Steenkiste", "Michael Chang", "Klaus Greff", "Jürgen Schmidhuber" ], "title": "Relational neural expectation maximization: Unsupervised discovery of objects and their interactions", "venue": null, "year": 2018 }, { "authors": [ "Daksh Varshneya", "G Srinivasaraghavan" ], "title": "Human trajectory prediction using spatially aware deep attention models", "venue": "arXiv preprint:1705.09436,", "year": 2017 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. 
Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Edward Wagstaff", "Fabian B. Fuchs", "Martin Engelcke", "Ingmar Posner", "Michael A. Osborne" ], "title": "On the limitations of representing functions on sets", "venue": "International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Nicholas Watters", "Daniel Zoran", "Theophane Weber", "Peter Battaglia", "Razvan Pascanu", "Andrea Tacchetti" ], "title": "Visual Interaction Networks: Learning a Physics Simulator from Video", "venue": null, "year": 2017 }, { "authors": [ "Longyin Wen", "Dawei Du", "Zhaowei Cai", "Zhen Lei", "Ming-Ching Chang", "Honggang Qi", "Jongwoo Lim", "Ming-Hsuan Yang", "Siwei Lyu" ], "title": "DETRAC: A new benchmark and protocol for multi-object", "venue": "tracking. arXiv,", "year": 2015 }, { "authors": [ "Kota Yamaguchi", "Alexander C Berg", "Luis E Ortiz", "Tamara L Berg" ], "title": "Who are you with and where are you going", "venue": "In CVPR,", "year": 2011 }, { "authors": [ "Li Zhang", "Yuan Li", "Ramakant Nevatia" ], "title": "Global data association for multi-object tracking using network", "venue": null, "year": 2008 }, { "authors": [ "Milan" ], "title": "Prediction results on the MOTChallenge", "venue": null, "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Real-world environments can be rich and contain countless types of interacting objects. Intelligent autonomous agents need to understand both the objects and interactions between them if they are to operate in those environments. This motivates the need for class-agnostic algorithms for tracking multiple objects—a capability that is not supported by the popular tracking-by-detection paradigm. In tracking-by-detection, objects are detected in each frame independently, e. g., by a pre-trained deep convolutional neural network (CNN) such as YOLO (Redmon et al. (2016)), and then linked across frames. Algorithms from this family\ncan achieve high accuracy, provided sufficient labelled data to train the object detector, and given that all encountered objects can be associated with known classes, but fail when faced with objects from previously unseen categories.\nHierarchical attentive recurrent tracking (HART) is a recently-proposed, alternative method for single-object tracking (SOT), which can track arbitrary objects indicated by the user (Kosiorek et al. (2017)). This is done by providing an initial bounding-box, which may be placed over any part of the image, regardless of whether it contains an object or what class the object is. HART efficiently processes just the relevant part of an image using spatial attention; it also integrates object detection, feature extraction, and motion modelling into one network, which is trained fully end-to-end. Contrary to tracking-by-detection, where only one video frame is typically processed at any given time to generate bounding box proposals, end-to-end learning in HART allows for discovering complex visual and spatio-temporal patterns in videos, which is conducive to inferring what an object is and how it moves.\nIn the original formulation, HART is limited to the single-object modality—as are other existing end-to-end trackers (Kahou et al. (2017); Rasouli Danesh et al. (2019); Gordon et al. (2018)). In this work, we present MOHART, a class-agnostic tracker with complex relational reasoning capabilities provided by a multi-headed self-attention module (Vaswani et al. (2017); Lee et al. (2019)). MOHART infers the latent state of every tracked object in parallel, and uses self-attention to inform per-object states about other tracked objects. This helps to avoid performance loss under self-occlusions of tracked objects or strong camera motion. Moreover, since the model is trained end-to-end, it is able to learn how to manage faulty or missing sensor inputs. See fig. 1 for a high-level illustration of MOHART.\nIn order to track objects, MOHART estimates their states, which can be naturally used to predict future trajectories over short temporal horizons, which is especially useful for planning in the context of autonomous agents. MOHART can be trained simultaneously for object tracking and trajectory prediction at the same time, thereby increasing statistical efficiency of learning. In contrast to prior art, where trajectory prediction and object tracking are usually addressed as separate problems with unrelated solutions, our work show trajectory prediction and object tracking are best addressed jointly.\nSection 2 describes prior art in tracking-by-detection, end-to-end tracking and predestrian trajectory prediction. In Section 3, we describe our approach, which uses a permutation-invariant self-attention module to enable tracking multiple objects end-to-end with relational reasoning. 
Section 4 contrasts our approach with multi-object trackers which do not explicitly enforce permutation invariance but have the capacity to learn it, simpler permutation-invariant architectures, as well as multiple single-object trackers running in parallel. We show that multi-headed self-attention significantly outperforms other approaches. Finally, in Section 5, we apply MOHART to real world datasets and show that permutation-invariant relational reasoning leads to consistent performance improvement compared to HART both in tracking and trajectory prediction." }, { "heading": "2 RELATED WORK", "text": "Tracking-by-Detection Vision-based tracking approaches typically follow a tracking-by-detection paradigm: objects are first detected in each frame independently, and then a tracking algorithm links the detections from different frames to propose a coherent trajectory (Zhang et al. (2008); Milan et al. (2014); Bae and Yoon (2017); Keuper et al. (2018)). Motion models and appearance are often used to improve the association between detected bounding-boxes in a postprocessing step. Tracking-by-detection algorithms currently provide the state-of-the-art in multi-object tracking on common benchmark suites, and we fully acknowledge that MOHART is not competitive at this stage in scenarios where high-quality detections are available for each frame. MOHART can in principle be equipped with the ability to use bounding boxes provided by an object detector, but this is beyond the scope of this project.\nEnd-to-End Tracking A newly established and much less explored stream of work approaches tracking in an end-to-end fashion. A key difficulty here is that extracting an image crop (according to bounding-boxes provided by a detector), is non-differentiable and results in high-variance gradient estimators. Kahou et al. (2017) propose an end-to-end tracker with soft spatial-attention using a 2D grid of Gaussians instead of a hard bounding-box. HART draws inspiration from this idea, employs an additional attention mechanism, and shows promising performance on the real-world KITTI dataset (Kosiorek et al. (2017)). HART forms the foundation of this work. It has also been extended to incorporate depth information from RGBD cameras (Rasouli Danesh et al. (2019)). Gordon et al. (2018) propose an approach in which the crop corresponds to the scaled up previous bounding-box. This simplifies the approach, but does not allow the model to learn where to look— i. e., no gradient is backpropagated through crop coordinates. To the best of our knowledge, there are no successful implementations of any such end-to-end approaches for multi-object tracking beyond SQAIR (Kosiorek et al. (2018)), which works only on datasets with static backgrounds. On real-world data, the only end-to-end approaches correspond to applying multiple single-object trackers in parallel—a method which does not leverage the potential of scene context or inter-object interactions.\nPedestrian trajectory prediction Predicting pedestrian trajectories has a long history in computer vision and robotics. Initial research modelled social forces using hand-crafted features (Lerner et al. (2007); Pellegrini et al. (2009); Trautman and Krause (2010); Yamaguchi et al. (2011)) or MDP-based motion transition models (Rudenko et al. (2018)), while more recent approaches learn from context information, e. g., positions of other pedestrians or landmarks in the environment. Social-LSTM (Alahi et al. 
(2016)) employs a long short-term memory (LSTM) to predict pedestrian trajectories and uses max-pooling to model global social context. Attention mechanisms have been employed to query the most relevant information, such as neighbouring pedestrians, in a learnable fashion (Su et al. (2016); Fernando et al. (2018); Sadeghian et al. (2019)). Apart from relational learning, context (Varshneya and Srinivasaraghavan (2017)), periodical time information (Sun et al. (2018)), and constant motion priors (Schöller et al. (2019)) have proven effective in predicting long-term trajectories.

Our work stands apart from this prior art by not relying on ground-truth tracklets. It addresses the more challenging task of working directly with visual input, performing tracking, modelling interactions, and, depending on the application scenario, simultaneously predicting future motions. As such, it can also be compared to Visual Interaction Networks (VIN) (Watters et al. (2017)), which use a CNN to encode three consecutive frames into state vectors—one per object—and feed these into a recurrent neural network (RNN), which has an Interaction Network (Battaglia et al. (2016)) at its core. More recently, Relational Neural Expectation Maximization (R-NEM) has been proposed as an unsupervised approach which combines scene segmentation and relational reasoning (van Steenkiste et al. (2018)). Both VINs and R-NEM make accurate predictions in physical scenarios, but, to the best of our knowledge, have not been applied to real-world data." }, { "heading": "3 RECURRENT MULTI-OBJECT TRACKING WITH SELF-ATTENTION", "text": "This section describes the model architecture in fig. 1. We start by describing the hierarchical attentive recurrent tracking (HART) algorithm (Kosiorek et al. (2017)), and then follow with an extension of HART to tracking multiple objects, where multiple instances of HART communicate with each other using multi-headed attention to facilitate relational reasoning. We also explain how this method can be extended to trajectory prediction instead of just tracking." }, { "heading": "3.1 HIERARCHICAL ATTENTIVE RECURRENT TRACKING (HART)", "text": "HART is an attention-based recurrent algorithm, which can efficiently track single objects in a video. It uses a spatial attention mechanism to extract a glimpse $g_t$, which corresponds to a small crop of the image $x_t$ at time-step $t$, containing the object of interest. This allows it to dispense with the processing of the whole image and can significantly decrease the amount of computation required. HART uses a CNN to convert the glimpse $g_t$ into features $f_t$, which then update the hidden state $h_t$ of an LSTM core. The hidden state is used to estimate the current bounding-box $b_t$, spatial attention parameters for the next time-step $a_{t+1}$, as well as object appearance. Importantly, the recurrent core can learn to predict complicated motion conditioned on the past history of the tracked object, which leads to relatively small attention glimpses—contrary to CNN-based approaches (Held et al. (2016); Valmadre et al. (2017)), HART does not need to analyse large regions-of-interest to search for tracked objects. In the original paper, HART processes the glimpse with an additional ventral and dorsal stream on top of the feature extractor. Early experiments have shown that this does not improve performance on the MOTChallenge dataset, presumably due to the oftentimes small objects and overall small amount of training data. Further details are provided in Appendix B.
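For intuition, one HART step can be sketched as follows (an illustrative skeleton under assumptions: the layer sizes are placeholders, and the differentiable spatial attention is replaced by a plain integer crop; this is not the authors' implementation):

import torch
import torch.nn as nn

def spatial_crop(frame, attn):
    # Placeholder for HART's spatial attention: a hard (cx, cy, w, h) crop.
    cx, cy, w, h = [int(v) for v in attn[0]]
    return frame[:, :, cy:cy + h, cx:cx + w]

class HARTStepSketch(nn.Module):
    # One recurrent step: attend -> glimpse -> CNN features -> LSTM -> outputs.
    def __init__(self, feat_dim=128, hidden=256):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=2), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(feat_dim))   # assumes a fixed glimpse size
        self.lstm = nn.LSTMCell(feat_dim, hidden)
        self.to_bbox = nn.Linear(hidden, 4)          # bounding-box b_t
        self.to_attn = nn.Linear(hidden, 4)          # attention parameters a_{t+1}

    def forward(self, frame, attn_params, state):
        g = spatial_crop(frame, attn_params)         # glimpse g_t
        f = self.cnn(g)                              # features f_t
        h, c = self.lstm(f, state)                   # hidden state h_t
        return self.to_bbox(h), self.to_attn(h), (h, c)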
Further details are provided in Appendix B.
The algorithm is initialised with a bounding box¹ $b_1$ for the first time step, and operates on a sequence of raw images $x_{1:T}$. For time steps $t \geq 2$, it recursively outputs bounding-box estimates for the current time step and predicted attention parameters for the next time step. The performance is measured as intersection-over-union (IoU) averaged over all time steps in which an object is present, excluding the first time step.
Although HART can track arbitrary objects, it is limited to tracking one object at a time. While it can be deployed on several objects in parallel, different HART instances have no means of communication. This results in performance loss, as it is more difficult to identify occlusions, ego-motion and object interactions. Below, we propose an extension of HART which remedies these shortcomings." }, { "heading": "3.2 MULTI-OBJECT HIERARCHICAL ATTENTIVE RECURRENT TRACKING (MOHART)", "text": "Multi-object support in HART requires the following modifications. Firstly, in order to handle a dynamically changing number of objects, we apply HART to multiple objects in parallel, where all parameters between HART instances are shared. We refer to each HART instance as a tracker. Secondly, we introduce a presence variable $p_{t,m}$ for object $m$. It is used to mark whether an object should interact with other objects, as well as to mask the loss function (described in (Kosiorek et al. (2017))) for the given object when it is not present. In this setup, parallel trackers cannot exchange information and are conceptually still single-object trackers, which we use as a baseline, referred to as HART (despite it being an extension of the original algorithm). Finally, to enable communication between trackers, we augment HART with an additional step between feature extraction and the LSTM.
¹We can use either a ground-truth bounding box or one provided by an external detector; the only requirement is that it contains the object of interest.
For each object, a glimpse is extracted and processed by a CNN (see fig. 1). Furthermore, the spatial attention parameters are linearly projected onto a vector of the same size and added to this representation, acting as a positional encoding. This is then concatenated with the hidden state of the recurrent module of the respective object (see fig. 2). Let $f_{t,m}$ denote the resulting feature vector corresponding to the $m$-th object, and let $f_{t,1:M}$ be the set of such features for all objects. Since different objects can interact with each other, it is necessary to use a method that can inform each object about the effects of their interactions with other objects. Moreover, since features extracted from different objects comprise a set, this method should be permutation-equivariant, i.e., the results should not depend on the order in which object features are processed. Therefore, we use the multi-head self-attention block (SAB, Lee et al. (2019)), which is able to account for higher-order interactions between set elements when computing their representations. Intuitively, in our case, SAB allows any of the trackers to query other trackers about attributes of their respective objects, e.g., the distance between objects, their direction of movement, or their relation to the camera. This is implemented as follows,
$$Q = W_q f_{1:M} + b_q\,, \quad K = W_k f_{1:M} + b_k\,, \quad V = W_v f_{1:M} + b_v\,, \qquad (1)$$
$$O_i = \mathrm{softmax}\big(Q_i K_i^{\mathsf T}\big) V_i\,, \quad i = 1, \ldots, H\,, \qquad (2)$$
$$o_{1:M} = O = \mathrm{concat}(O_1, \ldots, O_H)\,, \qquad (3)$$
where $o_m$ is the output of the relational reasoning module for object $m$; time-step subscripts are dropped to decrease clutter.
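The following is a minimal sketch of Eqs. (1)-(3) over a set of M object features; it follows the equations above (including the absence of key scaling), while the weight shapes, the head-splitting convention, and all names are assumptions.

```python
import torch

def self_attention_block(f, Wq, Wk, Wv, bq, bk, bv, num_heads):
    """Multi-head self-attention over object features f of shape (M, D)."""
    Q = f @ Wq.T + bq                 # queries, shape (M, d_k)
    K = f @ Wk.T + bk                 # keys,    shape (M, d_k)
    V = f @ Wv.T + bv                 # values,  shape (M, d_v)

    def split_heads(X):               # (M, D') -> (H, M, D' // H)
        M, D = X.shape
        return X.reshape(M, num_heads, D // num_heads).transpose(0, 1)

    Qh, Kh, Vh = split_heads(Q), split_heads(K), split_heads(V)
    att = torch.softmax(Qh @ Kh.transpose(1, 2), dim=-1)   # Eq. (2), per head
    Oh = att @ Vh                                          # aggregated values
    # Eq. (3): concatenate the H head outputs back into shape (M, d_v).
    return Oh.transpose(0, 1).reshape(f.shape[0], -1)
```

Permutation equivariance holds because every operation treats the M rows symmetrically: permuting the rows of f permutes the outputs identically.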
In Eq. 1, each of the extracted features $f_{t,m}$ is linearly projected into a triplet of key $k_{t,m}$, query $q_{t,m}$ and value $v_{t,m}$ vectors. Together, they comprise the $K$, $Q$ and $V$ matrices with $M$ rows and $d_k$, $d_k$ and $d_v$ columns, respectively. $K$, $Q$ and $V$ are then split up into multiple heads $H \in \mathbb{N}^+$, which allows the model to query different attributes by comparing and aggregating different projections of the features. Multiplying $Q_i K_i^{\mathsf T}$ in Eq. 2 compares every query vector $q_{t,m,i}$ to all key vectors $k_{t,1:M,i}$, where the value of the corresponding dot products represents the degree of similarity. Similarities are then normalised via a softmax operation and used to aggregate the values $V$. Finally, the outputs of the different attention heads are concatenated in Eq. 3. SAB produces $M$ output vectors, one for each input, which are then concatenated with the corresponding inputs and fed into separate LSTMs for further processing, as in HART—see fig. 1.
MOHART is trained fully end-to-end, contrary to other tracking approaches. It maintains a hidden state, which can contain information about the object's motion. One benefit is that, in order to predict future trajectories, one can simply feed black frames into the model. Our experiments show that the model learns to fall back on the motion model captured by the LSTM in this case." }, { "heading": "3.3 MULTI-OBJECT BASELINES", "text": "Multilayer perceptron (MLP) In this version, the representations of all objects are concatenated and fed into a fully connected layer followed by ELU activations. The output is then again concatenated to the unaltered feature vector of each object. This concatenated version is then fed to the recurrent module of HART. This way of exchanging information allows for universal function approximation (in the limit of infinite layer sizes) but does not impose permutation invariance.
DeepSets Here, the learned representations of the different objects are summed up instead of concatenated and then divided by the total number of objects. This is closely related to DeepSets (Zaheer et al. (2017)) and allows for universal function approximation of all permutation-invariant functions (Wagstaff et al. (2019)).
Max-Pooling Similar to DeepSets, but using max-pooling as the permutation-invariant operation. This way of exchanging information is used, e.g., by Alahi et al. (2016), who predict future pedestrian trajectories from ground-truth tracklets in coordinate space. A sketch contrasting these three information-exchange schemes is given below."
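A minimal sketch contrasting the three baselines follows; the feature shape (M objects by D dimensions), the hypothetical mlp module, and the function names are assumptions for illustration.

```python
import torch

def exchange_mlp(f, mlp):
    """MLP baseline: concatenate all M object features (order-dependent)."""
    joint = mlp(f.reshape(1, -1))                 # mlp maps (1, M*D) -> (1, D')
    return torch.cat([f, joint.expand(f.shape[0], -1)], dim=-1)

def exchange_deepsets(f):
    """DeepSets-style baseline: average over objects (permutation invariant)."""
    pooled = f.mean(dim=0, keepdim=True)
    return torch.cat([f, pooled.expand(f.shape[0], -1)], dim=-1)

def exchange_maxpool(f):
    """Max-pooling baseline: elementwise max over objects (permutation invariant)."""
    pooled = f.max(dim=0, keepdim=True).values
    return torch.cat([f, pooled.expand(f.shape[0], -1)], dim=-1)
```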
}, { "heading": "4 VALIDATION ON SIMULATED DATA", "text": "We test and compare the relational reasoning capabilities of the proposed algorithms on a toy domain. The domain is a 2D square box which contains circular objects with approximated elastic collisions (energy and momentum conservation) between objects and with walls. To investigate how the model understands motion patterns and interactions between objects, we train it to predict future object locations, in contrast to traditional tracking.
In the first experiment, each circle exerts repulsive forces on the others, where the force scales with 1/r, r being the distance between them. Accurately predicting the future location using only the previous motion of one object (i.e., without relational reasoning) is therefore challenging. We show that HART as an end-to-end single-object tracker is able to capture complex motion patterns and leverage these to make accurate predictions (see Appendix C). This indicates that HART is able to draw conclusions about the (deterministic, but not static) force field.
In the second experiment, we introduce randomness, rendering the scenario unsolvable for a single-object tracker, as it requires knowledge about the state of the other objects and relational reasoning (see fig. 3). In each time step, we assign a colour-coded identity to the objects. Objects of the same identity repel each other, and objects of different identities attract each other (the objects can be thought of as electrons and protons). The qualitative results in fig. 3 show that MOHART, using self-attention for relational reasoning, is able to capture these interactions with high accuracy. Figure 4 (left) shows a quantitative comparison of augmenting HART with different relational reasoning modules when identities are re-assigned in every time step (randomness = 1.0). Exchanging information between trackers of different objects in the latent space with an MLP leads to slightly worse performance than the HART baseline, while simple max-pooling performs significantly better (∆IoU ∼ 17%). This can be explained through the permutation invariance of the problem: the list of latent representations of the different objects has no meaningful order, and the output of the model should therefore be invariant to the ordering of the objects. The MLP is in itself not permutation invariant and is therefore prone to overfit to the (meaningless) order of the objects in the training data. Max-pooling, however, is permutation invariant and can in theory, despite its simplicity, be used to approximate any permutation-invariant function given a sufficiently large latent space (Wagstaff et al. (2019)). Max-pooling is often used to exchange information between different tracklets, e.g., in the trajectory prediction domain (Alahi et al. (2016); Gupta et al. (2018)). However, self-attention, allowing for learned querying and encoding of information, solves the relational reasoning task significantly more accurately. In fig. 4 (right), the frequency with which object identities are reassigned randomly is varied. The results show that, in a deterministic environment, tracking does not necessarily profit from relational reasoning, even in the presence of long-range interactions. The less random the identity assignments, the more static the force field is, and a static force field can be inferred from a small number of observations (see fig. 6). This does not mean that all stochastic environments profit from relational reasoning. What these experiments indicate is that tracking cannot be expected to profit from relational reasoning by default in any environment, but rather in environments which feature (potentially non-deterministic) dynamics and predictable interactions." }, { "heading": "5 RELATIONAL REASONING IN REAL-WORLD TRACKING", "text": "Having established that MOHART is capable of performing complex relational reasoning, we now test the algorithm on three real-world datasets and analyse the effects of relational reasoning on performance depending on dataset and task. We find consistent improvements of MOHART compared to HART throughout. Relational reasoning yields particularly high gains for scenes with ego-motion, crowded scenes, and simulated faulty sensor inputs."
}, { "heading": "5.1 EXPERIMENTAL DETAILS", "text": "We investigate three qualitatively different datasets: the MOTChallenge dataset (Milan et al. (2016)), the UA-DETRAC dataset (Wen et al. (2015)), and the Stanford Drone dataset (Robicquet et al. (2016)). To increase scene dynamics and make the tracking/prediction problems more challenging, we sub-sample some of the high framerate scenes with a stride of two, resulting in scenes with 7-15 frames per second. Training and architecture details are given in Appendices A and B. We conduct experiments in three different modes:\nTracking. The model is initialised with the ground truth bounding boxes for a set of objects in the first frame. It then consecutively sees the following frames and predicts the bounding boxes. The sequence length is 30 time steps and the performance is measured as intersection over union (IoU) averaged over the entire sequence excluding the first frame. This algorithm is either applied to the entire dataset or subsets of it to study the influence of certain properties of the data.\nCamera Blackout. This simulates unsteady or faulty sensor inputs. The setup is the same as in Tracking, but sub-sequences of the input are replaced with black images. The algorithm is expected to recognise that no new information is available and that it should resort to its internal motion model.\nPrediction. Testing MOHART’s ability to capture motion patterns, only the first two frames are shown to the model followed by three black frames. IoU is measured seperately for each time step." }, { "heading": "5.2 RESULTS AND ANALYSIS", "text": "On the MOTChallenge dataset, HART achieves 66.6% intersection over union (see Table 1), which in itself is impressive given the small amount of training data of only 5225 training frames and no pre-training. MOHART achieves 68.5% (both numbers are averaged over 5 runs, independent samples t-test resulted in p < 0.0001). The performance gain increases when only considering ego-motion data. This is readily explained: movements of objects in the image space due to ego-motion are correlated and can therefore be better understood when combining information from movements of multiple objects, i.e. performing relational reasoning. In another ablation, we filtered for only crowded scenes by requesting five objects to be present for, on average, 90% of the frames in a sub-sequence. For the MOT-Challenge dataset, this only\nTABLE 3: Stanford Drone Dataset\nAll Camera Blackout\nCamBlack Bikes\n57.3% 53.3% 53.3% 56.1% 52.6% 50.7%\n1.2% 0.7% 2.6%\nleads to a minor increase of the performance gain of MOHART indicating that the dataset exhibits a sufficient density of objects to learn interactions. The biggest benefit from relational reasoning can be observed in the camera blackout experiments (setup explained in Section 5.1). Both HART and MOHART learn to rely on their internal motion models when confronted with black frames and propagate the bounding boxes according to the previous movement of the objects. It is unsurprising that this scenario profits particularly from relational reasoning. Qualitative tracking and camera blackout results are shown in fig. 5 and Appendix E.\nTracking performance on the UA-DETRAC dataset only profits from relational reasoning when filtering for crowded scenes (see Table 2). The fact that the performance of MOHART is slightly worse on the vanilla dataset (∆ = −0.3%) can be explained with more overfitting. 
}, { "heading": "5.2 RESULTS AND ANALYSIS", "text": "On the MOTChallenge dataset, HART achieves 66.6% intersection over union (see Table 1), which in itself is impressive given the small amount of training data of only 5225 training frames and no pre-training. MOHART achieves 68.5% (both numbers are averaged over 5 runs; an independent-samples t-test resulted in p < 0.0001). The performance gain increases when only considering ego-motion data. This is readily explained: movements of objects in the image space due to ego-motion are correlated and can therefore be better understood when combining information from the movements of multiple objects, i.e., performing relational reasoning. In another ablation, we filtered for crowded scenes only, by requiring five objects to be present for, on average, 90% of the frames in a sub-sequence. For the MOTChallenge dataset, this only leads to a minor increase of the performance gain of MOHART, indicating that the dataset exhibits a sufficient density of objects to learn interactions. The biggest benefit from relational reasoning can be observed in the camera blackout experiments (setup explained in Section 5.1). Both HART and MOHART learn to rely on their internal motion models when confronted with black frames and propagate the bounding boxes according to the previous movement of the objects. It is unsurprising that this scenario profits particularly from relational reasoning. Qualitative tracking and camera blackout results are shown in fig. 5 and Appendix E.
Tracking performance on the UA-DETRAC dataset only profits from relational reasoning when filtering for crowded scenes (see Table 2). The fact that the performance of MOHART is slightly worse on the vanilla dataset (∆ = −0.3%) can be explained by more overfitting: as there is no exchange between trackers for each object, each object constitutes an independent training sample.
The Stanford Drone dataset (see Table 3) is different from the other two—it is filmed from a bird's-eye view. The scenes are more crowded and each object covers a small number of pixels, rendering it a difficult problem for tracking. The dataset was designed for trajectory prediction—a setup where an algorithm is typically provided with ground-truth tracklets in coordinate space and potentially an image as context information. The task is then to extrapolate these tracklets into the future. The tracking performance profits from relational reasoning more than on the UA-DETRAC dataset but less than on the MOTChallenge dataset. The performance gains in the camera blackout experiments are particularly strong when only considering cyclists.
Table 3: Stanford Drone Dataset (IoU)
           All     Camera Blackout   CamBlack Bikes
MOHART     57.3%   53.3%             53.3%
HART       56.1%   52.6%             50.7%
Gain       1.2%    0.7%              2.6%
In the prediction experiments (see Appendix D), MOHART consistently outperforms HART. On both datasets, the model outperforms a baseline which uses momentum to linearly extrapolate the bounding boxes from the first two frames. This shows that even from just two frames, the model learns to capture motion models which are more complex than what could be observed from just the bounding boxes (i.e., momentum), suggesting that it uses visual information (HART & MOHART) as well as relational reasoning (MOHART)." }, { "heading": "6 CONCLUSION", "text": "With MOHART, we introduce an end-to-end multi-object tracker that is capable of capturing complex interactions and leveraging these for precise predictions, as experiments on both toy and real-world data show. However, the experiments also show that the benefit of relational reasoning strongly depends on the nature of the data. The toy experiments showed that in an entirely deterministic world relational reasoning was much less important than in a stochastic environment. Amongst the real-world datasets, the highest performance gains from relational reasoning were achieved on the MOTChallenge dataset, which features crowded scenes, ego-motion and occlusions." }, { "heading": "A EXPERIMENTAL DETAILS", "text": "The MOTChallenge and the UA-DETRAC datasets discussed in this section are intended to be used as benchmark suites for multi-object tracking in a tracking-by-detection paradigm. Therefore, ground-truth bounding boxes are only available for the training datasets. The user is encouraged to upload their model, which performs tracking in a data-association paradigm leveraging the provided bounding-box proposals from an external object detector. As we are interested in a different analysis (IoU given initial bounding boxes), we divide the training data further into training and test sequences. To make up for the smaller training data, we extend the MOTChallenge 2017 dataset with three sequences from the 2015 dataset (ETH-Sunnyday, PETS09-S2L1, ETH-Bahnhof). We use the first 70% of the frames of each of the ten sequences for training and the rest for testing. Sequences with high frame rates (30Hz) are sub-sampled with a stride of two. For the UA-DETRAC dataset, we split the 60 available sequences into 44 training sequences and 16 test sequences. For the considerably larger Stanford Drone dataset, we took three videos of the scene deathCircle for training and the remaining two videos from the same scene for testing. The videos of the drone dataset were also sub-sampled with a stride of two to increase scene dynamics."
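The camera-blackout perturbation used in the experiments above can be simulated minimally as follows; the frame layout (T, H, W, C) is an assumption.

```python
import numpy as np

def camera_blackout(frames, start, length):
    """Replace a sub-sequence of frames with black images; frames: (T, H, W, C)."""
    corrupted = frames.copy()
    corrupted[start:start + length] = 0.0
    return corrupted
```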
}, { "heading": "B ARCHITECTURE DETAILS", "text": "The architecture details were chosen to optimise HART performance on the MOTChallenge dataset. They deviate from the original HART implementation (Kosiorek et al. (2017)) as follows: A presence variable predicts whether an object is in the scene and successfully tracked. This is trained with a binary cross entropy loss. The maximum number of objects to be tracked simultaneously was set to 5 for the UA-DETRAC and MOTChallenge dataset. For the more crowded Stanford drone dataset, this number was set to 10. The feature extractor is a three layer convolutional network with a kernel size of 5, a stride of 2 in the first and last layer, 32 channels in the first two layers, 64 channels in the last layer, ELU activations, and skip connections. This converts the initial 32 × 32 × 3 glimpse into a 7 × 7 × 64 feature representation. This is followed by a fully connected layer with a 128 dimensional output and an elu activation. The spatial attention parameters are linearly projected onto 128 dimensions and added to this feature representation serving as a positional encoding. The LSTM has a hidden state size of 128. The self-attention unit in MOHART comprises linear projects the inputs to dimensionality 128 for each keys, queries and values. For the real-world experiments, in addition to the extracted features from the glimpse, the hidden states from the previous LSTM state are also fed as an input by concatinating them with the features. In all cases, the output of the attention module is concatenated to the input features of the respective object.\nAs an optimizer, we used RMSProp with momentum set to 0.9 and learning rate 5∗10−6. For the MOTChallenge dataset and the UA-DETRAC dataset, the models were trained for 100,000 iterations of batch size 10 and the reported IoU is exponentially smoothed over iterations to achieve lower variance. For the Stanford Drone dataset, the batch size was increased to 32, reducing time to convergence and hence model training to 50,000 iterations." }, { "heading": "C DETERMINISTIC TOY DOMAIN", "text": "In our first experiment in the toy domain (Figure 6), four circles each exert repulsive forces on each other, where the force scales with 1/r, r being their distance. HART is applied four times in parallel and is trained to predict the location of each circle three time steps into the future. The different forces from different objects lead to a non-trivial force field at each time step. Predicting the future location just using the previous motion of one object (Figure 6 shows that each spatial attention box covers only the current object) accurately is therefore challenging. Surprisingly, the single object tracker solves this task with an average of 95% IoU over sequences of 15 time steps. This shows the efficacy of end-to-end tracking to capture complex motion\npatterns and use them to predict future locations. This, of course, could also be used to generate robust bounding boxes for a tracking task." }, { "heading": "D PREDICTION EXPERIMENTS", "text": "In the results from the prediction experiments (see Figure 7) MOHART consistently outperforms HART. On both datasets, the model outperforms a baseline which uses momentum to linearly extrapolate the bounding boxes from the first two frames. This shows that even from just two frames, the model learns to capture motion models which are more complex than what could be observed from just the bounding boxes (i.e. 
momentum), suggesting that it uses visual information (HART & MOHART) as well as relational reasoning (MOHART). The strong performance gain of MOHART compared to HART on the UA-DETRAC dataset,\ndespite the small differences for tracking on this dataset, can be explained as follows: this dataset features little interactions but strong correlations in motion. Hence when only having access to the first two frames, MOHART profits from estimating the velocities of multiple cars simultaneously." }, { "heading": "E QUALITATIVE TRACKING RESULTS", "text": "In Section 5, we tested MOHART on three different real world data sets and in a number of different setups. Figure 8 shows qualitative results both for HART and MOHART on the MOTChallenge dataset.\nFurthermore, we conducted a set of camera blackout experiments to test MOHART’s capability of dealing with faulty sensor inputs. While traditional pipeline methods require careful consideration of different types of corner cases to properly handle erroneous sensor inputs, MOHART is able to capture these automatically, especially when confronted with similar issues in the training scenarios. To simulate this, we replace subsequences of the images with black frames. Figure 9 and Figure 5 show two such examples from the test data together with the model’s prediction. MOHART learns not to update its internal model when confronted with black frames and instead uses the LSTM to propagate the bounding boxes. When proper sensor input is available again, the model uses this to make a rapid adjustment to its predicted location and ‘snap’ back onto the object. This works remarkably well in both the presence of occlusion (Figure 9) and ego-motion (Figure 5). Tables 1 to 3 show that the benefit of relational reasoning is particularly high in these scenarios specifically. These experiments can also be seen as a proof of concept of MOHART’s capabalities of predicting future trajectories—and how this profits from relational reasoning." } ]
2019
null
SP:4605e601a717bc05833778d0916a393ffdf8c331
[ "This paper proposes a new reweighted-RNN by unfolding a reweighted L1-L1 minimization problem. It develops an iterative algorithm to solve the reweighted L1-L1 minimization problem, where the soft-thresholding functions can be adaptively learned. This paper provides the generalization error bound for deep RNNs and shows that the proposed reweighted-RNN has a lower generalization error bound. In addition, the paper shows that the proposed algorithm can be applied to video-frame reconstruction and achieves favorable results against state-of-the-art methods. The paper is well organized, and the motivation is clear. ", "This paper proposes a novel method to solve the sequential signal reconstruction problem. The method is based on the deep unfolding methods and incorporates the reweighting mechanism. Additionally, they derive the generalization error bound and show how their over-parameterized reweighting RNNs ensure good generalization. Lastly, the experiments on the task of video sequence reconstruction suggest the superior performance of the proposed method." ]
Deep unfolding methods design deep neural networks as learned variations of optimization methods. These networks have been shown to achieve faster convergence and higher accuracy than the original optimization methods. In this line of research, this paper develops a novel deep recurrent neural network (coined reweighted-RNN) by unfolding a reweighted ℓ1-ℓ1 minimization algorithm and applies it to the task of sequential signal reconstruction. To the best of our knowledge, this is the first deep unfolding method that explores reweighted minimization. Due to the underlying reweighted minimization model, our RNN has a different soft-thresholding function (that is, a different activation function) for each hidden unit in each layer. Furthermore, it has higher network expressivity than existing deep unfolding RNN models due to its over-parameterized weights. Moreover, we establish theoretical generalization error bounds for the proposed reweighted-RNN model by means of Rademacher complexity. The bounds reveal that the parameterization of the proposed reweighted-RNN ensures good generalization. We apply the proposed reweighted-RNN to the problem of video-frame reconstruction from low-dimensional measurements, that is, sequential frame reconstruction. The experimental results on the moving MNIST dataset demonstrate that the proposed deep reweighted-RNN significantly outperforms existing RNN models.
[]
[ { "authors": [ "Martin Arjovsky", "Amar Shah", "Yoshua Bengio" ], "title": "Unitary evolution recurrent neural networks", "venue": "In Proceedings of the 33rd International Conference on International Conference on Machine Learning (ICML),", "year": 2016 }, { "authors": [ "Peter L Bartlett", "Dylan J Foster", "Matus J Telgarsky" ], "title": "Spectrally-normalized margin bounds for neural networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "A. Beck", "M. Teboulle" ], "title": "A fast iterative shrinkage-thresholding algorithm for linear inverse problems", "venue": "SIAM Journal on Imaging Sciences,", "year": 2009 }, { "authors": [ "M. Borgerding", "P. Schniter", "S. Rangan" ], "title": "AMP-inspired deep networks for sparse linear inverse problems", "venue": "IEEE Transactions on Signal Processing,", "year": 2017 }, { "authors": [ "Emmanuel J. Candès", "Michael B. Wakin", "Stephen P. Boyd" ], "title": "Enhancing sparsity by reweighted `1 minimization", "venue": "Journal of Fourier Analysis and Applications,", "year": 2008 }, { "authors": [ "Xiaohan Chen", "Jialin Liu", "Zhangyang Wang", "Wotao Yin" ], "title": "Theoretical linear convergence of unfolded ista and its practical weights and thresholds", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "K. Cho", "B. Van Merriënboer", "C. Gulcehre", "D. Bahdanau", "F. Bougares", "H. Schwenk", "Y. Bengio" ], "title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation", "venue": "arXiv preprint arXiv:1406.1078,", "year": 2014 }, { "authors": [ "I. Daubechies", "M. Defrise", "C. De Mol" ], "title": "An iterative thresholding algorithm for linear inverse problems with a sparsity constraint", "venue": "Communications on Pure and Applied Mathematics,", "year": 2004 }, { "authors": [ "D. Donoho" ], "title": "Compressed sensing", "venue": "IEEE Transactions on Information Theory,", "year": 2006 }, { "authors": [ "Jeffrey L. Elman" ], "title": "Finding structure in time", "venue": "Cognitive Science,", "year": 1990 }, { "authors": [ "Noah Golowich", "Alexander Rakhlin", "Ohad Shamir" ], "title": "Size-independent sample complexity of neural networks", "venue": "In Proceedings of the 31st Conference On Learning Theory,", "year": 2018 }, { "authors": [ "Karol Gregor", "Yann LeCun" ], "title": "Learning fast approximations of sparse coding", "venue": "In Proceedings of the 27th International Conference on International Conference on Machine Learning,", "year": 2010 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Deep residual learning for image recognition", "venue": "In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "S. Hochreiter", "J. Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural Computation,", "year": 1997 }, { "authors": [ "G. Huang", "Z. Liu", "L. v. d. Maaten", "K.Q. 
Weinberger" ], "title": "Densely connected convolutional networks", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Aditya Kusupati", "Manish Singh", "Kush Bhatia", "Ashish Kumar", "Prateek Jain", "Manik Varma" ], "title": "Fastgrnn: A fast, accurate, stable and tiny kilobyte sized gated recurrent neural network", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Hung Duy Le", "Huynh Van Luong", "Nikos Deligiannis" ], "title": "Designing recurrent neural networks by unfolding an l1-l1 minimization algorithm", "venue": "In Proceedings of IEEE International Conference on Image Processing,", "year": 2019 }, { "authors": [ "Quoc V. Le", "Navdeep Jaitly", "Geoffrey E. Hinton" ], "title": "A simple way to initialize recurrent networks of rectified linear units", "venue": "CoRR, abs/1504.00941,", "year": 2015 }, { "authors": [ "S. Li", "W. Li", "C. Cook", "C. Zhu", "Y. Gao" ], "title": "Independently recurrent neural network (indrnn): Building a longer and deeper rnn", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Jialin Liu", "Xiaohan Chen", "Zhangyang Wang", "Wotao Yin" ], "title": "ALISTA: Analytic weights are as good as learned weights in LISTA", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "A. Lucas", "M. Iliadis", "R. Molina", "A.K. Katsaggelos" ], "title": "Using deep neural networks for inverse problems in imaging: beyond analytical methods", "venue": "IEEE Signal Processing Magazine,", "year": 2018 }, { "authors": [ "W. Luo", "W. Liu", "S. Gao" ], "title": "A revisit of sparse coding based anomaly detection in stacked rnn framework", "venue": "IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "H.V. Luong", "N. Deligiannis", "J. Seiler", "S. Forchhammer", "A. Kaup" ], "title": "Compressive online robust principal component analysis via n-`1 minimization", "venue": "IEEE Transactions on Image Processing,", "year": 2018 }, { "authors": [ "Mehryar Mohri", "Afshin Rostamizadeh", "Ameet Talwalkar" ], "title": "Foundations of Machine Learning, Second edition", "venue": null, "year": 2018 }, { "authors": [ "J.F.C. Mota", "N. Deligiannis", "A.C. Sankaranarayanan", "V. Cevher", "M.R.D. Rodrigues" ], "title": "Adaptiverate reconstruction of time-varying signals with application in compressive foreground extraction", "venue": "IEEE Transactions on Signal Processing,", "year": 2016 }, { "authors": [ "J.F.C. Mota", "N. Deligiannis", "M.R.D. Rodrigues" ], "title": "Compressed sensing with prior information: Strategies, geometry, and bounds", "venue": "IEEE Transactions on Information Theory,", "year": 2017 }, { "authors": [ "Ali Mousavi", "Ankit B Patel", "Richard G Baraniuk" ], "title": "A deep learning approach to structured signal recovery", "venue": "In 2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton),", "year": 2015 }, { "authors": [ "Behnam Neyshabur", "Ryota Tomioka", "Nathan Srebro" ], "title": "Norm-based capacity control in neural networks", "venue": "In Proceedings of The 28th Conference on Learning Theory,", "year": 2015 }, { "authors": [ "Behnam Neyshabur", "Zhiyuan Li", "Srinadh Bhojanapalli", "Yann LeCun", "Nathan Srebro" ], "title": "The role of over-parametrization in generalization of neural networks", "venue": "In Int. Conf. 
on Learning Representations,", "year": 2019 }, { "authors": [ "R. Pascanu", "C. Gulcehre", "K. Cho", "Y. Bengio" ], "title": "How to construct deep recurrent neural networks", "venue": "arXiv preprint arXiv:1312.6026,", "year": 2013 }, { "authors": [ "R. Pascanu", "C. Gulcehre", "K. Cho", "Y. Bengio" ], "title": "How to construct deep recurrent neural networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2014 }, { "authors": [ "Shai Shalev-Shwartz", "Shai Ben-David" ], "title": "Understanding Machine Learning: From Theory to Algorithms", "venue": null, "year": 2014 }, { "authors": [ "P. Sprechmann", "A.M. Bronstein", "G. Sapiro" ], "title": "Learning efficient sparse and low rank models", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2015 }, { "authors": [ "N. Srivastava", "E. Mansimov", "R. Salakhudinov" ], "title": "Unsupervised learning of video representations using LSTMs", "venue": "In Proceedings of the 32nd International Conference on Machine Learning (ICML),", "year": 2015 }, { "authors": [ "J. Sun", "H. Li", "Z. Xu" ], "title": "Deep ADMM-Net for compressive sensing MRI", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "S. Wisdom", "T. Powers", "J. Pitton", "L. Atlas" ], "title": "Building recurrent networks by unfolding iterative thresholding for sequential sparse recovery", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2017 }, { "authors": [ "B. Xin", "Y. Wang", "W. Gao", "D. Wipf", "B. Wang" ], "title": "Maximal sparsity with deep networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Jiong Zhang", "Qi Lei", "Inderjit S. Dhillon" ], "title": "Stabilizing gradients for deep neural networks via efficient SVD parameterization", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Pascanu" ], "title": "see Fig 2(c)] to build deep networks for other RNN models", "venue": null, "year": 2013 }, { "authors": [ "Le" ], "title": "2019) (as the baseline method) and Reweighted-RNN due to their similarities in implementations", "venue": "At the default settings,", "year": 2019 }, { "authors": [ "Y ∈ Rny×h" ], "title": "The generalization error in Zhang et al. (2018) is derived for a classification problem. For any δ > 0, γ > 0, with probability ≥ 1 − δ over a training set S of size m, the generalization error (Zhang et al., 2018) of SpectralRNN is bounded by O", "venue": null, "year": 2018 }, { "authors": [ "Lemma D" ], "title": "Lemma 26.9—Contraction lemma) Let F be a set of functions", "venue": "F = {f : X 7→ R},", "year": 2014 } ]
[ { "heading": "1 INTRODUCTION", "text": "The problem of reconstructing sequential signals from low-dimensional measurements across time is of great importance for a number of applications such as time-series data analysis, future-frame prediction, and compressive video sensing. Specifically, we consider the problem of reconstructing a sequence of signals st ∈ Rn0 , t = 1, 2, . . . , T , from low-dimensional measurements xt = Ast, where A ∈ Rn×n0 (n n0) is a sensing matrix. We assume that st has a sparse representation ht ∈ Rh in a dictionary D ∈ Rn0×h, that is, st = Dht. At each time step t, the signal st can be independently reconstructed using the measurements xt by solving (Donoho, 2006):\nmin ht {1 2 ‖xt −ADht‖22 + λ‖ht‖1 } , (1)\nwhere ‖ · ‖p is the `p-norm and λ is a regularization parameter. The iterative shrinkage-thresholding algorithm (ISTA) (Daubechies et al., 2004) solves (1) by iterating over h(l)t = φλ\nc (h\n(l−1) t −\n1 cD TAT(ADh (l−1) t − xt)), where l is the iteration counter, φγ(u) = sign(u)[0, |u| − γ]+ is the soft-thresholding operator, γ = λc , and c is an upper bound on the Lipschitz constant of the gradient of 12‖xt −ADht‖ 2 2.\nUnder the assumption that sequential signal instances are correlated, we consider the following sequential signal reconstruction problem:\nmin ht {1 2 ‖xt −ADht‖22 + λ1‖ht‖1 + λ2R(ht,ht−1) } , (2)\nwhere λ1, λ2 > 0 are regularization parameters and R(ht,ht−1) is an added regularization term that expresses the similarity of the representations ht and ht−1 of two consecutive signals. Wisdom et al. (2017) proposed an RNN design (coined Sista-RNN) by unfolding the sequential version of\nISTA. That study assumed that two consecutive signals are close in the `2-norm sense, formally, R(ht,ht−1) = 1 2‖Dht − FDht−1‖ 2 2, where F ∈ Rn0×n0 is a correlation matrix between st and st−1. More recently, the study by Le et al. (2019) designed the `1-`1-RNN, which stems from unfolding an algorithm that solves the `1-`1 minimization problem (Mota et al., 2017; 2016). This is a version of Problem (2) with R(ht,ht−1) = ‖ht −Ght−1‖1, where G ∈ Rh×h is an affine transformation that promotes the correlation between ht and ht−1. Both studies (Wisdom et al., 2017; Le et al., 2019) have shown that carefully-designed deep RNN models outperform the generic RNN model and ISTA (Daubechies et al., 2004) in the task of sequential frame reconstruction.\nDeep neural networks (DNN) have achieved state-of-the-art performance in solving (1) for individual signals, both in terms of accuracy and inference speed (Mousavi et al., 2015). However, these models are often criticized for their lack of interpretability and theoretical guarantees (Lucas et al., 2018). Motivated by this, several studies focus on designing DNNs that incorporate domain knowledge, namely, signal priors. These include deep unfolding methods which design neural networks to learn approximations of iterative optimization algorithms. Examples of this approach are LISTA (Gregor & LeCun, 2010) and its variants, including ADMM-Net (Sun et al., 2016), AMP (Borgerding et al., 2017), and an unfolded version of the iterative hard thresholding algorithm (Xin et al., 2016).\nLISTA (Gregor & LeCun, 2010) unrolls the iterations of ISTA into a feed-forward neural network with weights, where each layer implements an iteration: h(l)t = φγ(l)(W (l)h (l−1) t + U\n(l)xt), with W(l) = I − 1cD TATAD, U(l) = 1cD TAT, and γ(l) being learned from data. 
It has been shown (Gregor & LeCun, 2010; Sprechmann et al., 2015) that a $d$-layer LISTA network with trainable parameters $\Theta = \{W^{(l)}, U^{(l)}, \gamma^{(l)}\}_{l=1}^{d}$ achieves the same performance as the original ISTA but with far fewer iterations (i.e., number of layers). Recent studies (Chen et al., 2018; Liu et al., 2019) have found that exploiting dependencies between $W^{(l)}$ and $U^{(l)}$ reduces the number of trainable parameters while retaining the performance of LISTA. These works provided theoretical insights into the convergence conditions of LISTA. However, the problem of designing deep unfolding methods for dealing with sequential signals is significantly less explored. In this work, we will consider a deep RNN for solving Problem (2) that outputs a sequence $\hat s_1, \ldots, \hat s_T$ from an input measurement sequence $x_1, \ldots, x_T$, as follows:
$$h_t = \phi_{\gamma}(W h_{t-1} + U x_t), \qquad \hat s_t = D h_t. \qquad (3)$$
It has been shown that reweighted algorithms—such as the reweighted $\ell_1$ minimization method by Candès et al. (2008) and the reweighted $\ell_1$-$\ell_1$ minimization by Luong et al. (2018)—outperform their non-reweighted counterparts. Driven by this observation, this paper proposes a novel deep RNN architecture by unfolding a reweighted $\ell_1$-$\ell_1$ minimization algorithm. Due to the reweighting, our network has higher expressivity than existing RNN models, leading to better data representations, especially when depth increases. This is in line with recent studies (He et al., 2016; Cortes et al.; Huang et al., 2017), which have shown that better performance can be achieved by highly overparameterized networks, i.e., networks with far more parameters than the number of training samples. While the most recent related studies on over-parameterized DNNs consider fully-connected networks applied to classification problems (Neyshabur et al., 2019), our approach focuses on deep-unfolding architectures and aims to understand how such networks learn a low-complexity representation for sequential signal reconstruction, which is a regression problem across time. Furthermore, while there have been efforts to build deep RNNs (Pascanu et al., 2014; Li et al., 2018; Luo et al., 2017; Wisdom et al., 2017), the generalization properties of such deep RNN models on unseen sequential data remain elusive. In this work, we derive the generalization error bound of the proposed design and further compare it with existing RNN bounds (Zhang et al., 2018; Kusupati et al., 2018).
Contributions. The contributions of this work are as follows:
• We propose a principled deep RNN model for sequential signal reconstruction by unfolding a reweighted $\ell_1$-$\ell_1$ minimization method. Our reweighted-RNN model employs different soft-thresholding functions that are adaptively learned per hidden unit. Furthermore, the proposed model is over-parameterized, has high expressivity, and can be efficiently stacked.
• We derive the generalization error bound of the proposed model (and deep RNNs) by measuring Rademacher complexity and show that the over-parameterization of our RNN ensures good generalization. To the best of our knowledge, this is the first generalization error bound for deep RNNs; moreover, our bound is tighter than existing bounds derived for shallow RNNs (Zhang et al., 2018; Kusupati et al., 2018).
• We provide experiments in the task of reconstructing video sequences from low-dimensional measurements.
We show significant gains when using our model compared to several state-of-the-art RNNs (including unfolding architectures), especially when the depth of the RNNs increases.
2 A DEEP RNN VIA UNFOLDING REWEIGHTED-ℓ1-ℓ1 MINIMIZATION
In this section, we describe a reweighted ℓ1-ℓ1 minimization problem for sequential signal reconstruction and propose an iterative algorithm based on the proximal method. We then design a deep RNN architecture by unfolding this algorithm.
The proposed reweighted ℓ1-ℓ1 minimization. We introduce the following problem:
$$\min_{h_t} \Big\{ \frac{1}{2}\|x_t - A D Z h_t\|_2^2 + \lambda_1 \|g \circ Z h_t\|_1 + \lambda_2 \|g \circ (Z h_t - G h_{t-1})\|_1 \Big\}, \qquad (4)$$
where "◦" denotes element-wise multiplication, $g \in \mathbb{R}^{h}$ is a vector of positive weights, $Z \in \mathbb{R}^{h \times h}$ is a reweighting matrix, and $G \in \mathbb{R}^{h \times h}$ is an affine transformation that promotes the correlation between $h_{t-1}$ and $h_t$. Intuitively, $Z$ is adopted to transform $h_t$ to $Z h_t \in \mathbb{R}^{h}$, producing a reweighted version of it. Thereafter, $g$ reweights each transformed component of $Z h_t$ and $Z h_t - G h_{t-1}$ in the $\ell_1$-norm regularization terms. Because of applying reweighting (Candès et al., 2008), the solution of Problem (4) is a more accurate sparse representation compared to the solution of the $\ell_1$-$\ell_1$ minimization problem in Le et al. (2019) (where $Z = I$ and $g = \mathbf{1}$). Furthermore, the use of the reweighting matrix $Z$ to transform $h_t$ to $Z h_t$ differentiates Problem (4) from the reweighted $\ell_1$-$\ell_1$ minimization problem in Luong et al. (2018), where $Z = I$.
The objective function in (4) consists of the differentiable fidelity term $f(Z h_t) = \frac{1}{2}\|x_t - A D Z h_t\|_2^2$ and the non-smooth term $g(Z h_t) = \lambda_1\|g \circ Z h_t\|_1 + \lambda_2\|g \circ (Z h_t - G h_{t-1})\|_1$. We use a proximal gradient method (Beck & Teboulle, 2009) to solve (4): at iteration $l$, we first update $h_t^{(l-1)}$—after being multiplied by $Z_l$—with a gradient descent step on the fidelity term as $u = Z_l h_t^{(l-1)} - \frac{1}{c} Z_l \nabla f(h_t^{(l-1)})$, where $\nabla f(h_t^{(l-1)}) = D^{\mathsf T} A^{\mathsf T}(A D h_t^{(l-1)} - x_t)$. Then, $h_t^{(l)}$ is updated as
$$h_t^{(l)} = \Phi_{\frac{\lambda_1}{c} g_l, \frac{\lambda_2}{c} g_l, G h_{t-1}}\Big(Z_l h_t^{(l-1)} - \frac{1}{c} Z_l \nabla f(h_t^{(l-1)})\Big), \qquad (5)$$
where the proximal operator $\Phi_{\frac{\lambda_1}{c} g_l, \frac{\lambda_2}{c} g_l, G h_{t-1}}(u)$ is defined as
$$\Phi_{\frac{\lambda_1}{c} g_l, \frac{\lambda_2}{c} g_l, \hbar}(u) = \arg\min_{v \in \mathbb{R}^{h}} \Big\{ \frac{1}{c} g(v) + \frac{1}{2}\|v - u\|_2^2 \Big\}, \qquad (6)$$
with $\hbar = G h_{t-1}$. Since the minimization problem is separable, we can minimize (6) independently for each of the elements $g_l$, $\hbar$, $u$ of the corresponding $g_l$, $\hbar$, $u$ vectors. After solving (6), we obtain $\Phi_{\frac{\lambda_1}{c} g_l, \frac{\lambda_2}{c} g_l, \hbar}(u)$ [for solving (6), we refer to Proposition B.1 in Appendix B]. For $\hbar \ge 0$:
$$\Phi_{\frac{\lambda_1}{c} g_l, \frac{\lambda_2}{c} g_l, \hbar \ge 0}(u) = \begin{cases} u - \frac{\lambda_1}{c} g_l - \frac{\lambda_2}{c} g_l, & \hbar + \frac{\lambda_1}{c} g_l + \frac{\lambda_2}{c} g_l < u < \infty \\ \hbar, & \hbar + \frac{\lambda_1}{c} g_l - \frac{\lambda_2}{c} g_l \le u \le \hbar + \frac{\lambda_1}{c} g_l + \frac{\lambda_2}{c} g_l \\ u - \frac{\lambda_1}{c} g_l + \frac{\lambda_2}{c} g_l, & \frac{\lambda_1}{c} g_l - \frac{\lambda_2}{c} g_l < u < \hbar + \frac{\lambda_1}{c} g_l - \frac{\lambda_2}{c} g_l \\ 0, & -\frac{\lambda_1}{c} g_l - \frac{\lambda_2}{c} g_l \le u \le \frac{\lambda_1}{c} g_l - \frac{\lambda_2}{c} g_l \\ u + \frac{\lambda_1}{c} g_l + \frac{\lambda_2}{c} g_l, & -\infty < u < -\frac{\lambda_1}{c} g_l - \frac{\lambda_2}{c} g_l \end{cases} \qquad (7)$$
and for $\hbar < 0$:
$$\Phi_{\frac{\lambda_1}{c} g_l, \frac{\lambda_2}{c} g_l, \hbar < 0}(u) = \begin{cases} u - \frac{\lambda_1}{c} g_l - \frac{\lambda_2}{c} g_l, & \frac{\lambda_1}{c} g_l + \frac{\lambda_2}{c} g_l < u < \infty \\ 0, & -\frac{\lambda_1}{c} g_l + \frac{\lambda_2}{c} g_l \le u \le \frac{\lambda_1}{c} g_l + \frac{\lambda_2}{c} g_l \\ u + \frac{\lambda_1}{c} g_l - \frac{\lambda_2}{c} g_l, & \hbar - \frac{\lambda_1}{c} g_l + \frac{\lambda_2}{c} g_l < u < -\frac{\lambda_1}{c} g_l + \frac{\lambda_2}{c} g_l \\ \hbar, & \hbar - \frac{\lambda_1}{c} g_l - \frac{\lambda_2}{c} g_l \le u \le \hbar - \frac{\lambda_1}{c} g_l + \frac{\lambda_2}{c} g_l \\ u + \frac{\lambda_1}{c} g_l + \frac{\lambda_2}{c} g_l, & -\infty < u < \hbar - \frac{\lambda_1}{c} g_l - \frac{\lambda_2}{c} g_l \end{cases} \qquad (8)$$
[Figure 1: The proximal operator $\Phi_{\frac{\lambda_1}{c} g_l, \frac{\lambda_2}{c} g_l, \hbar}(u)$ for $\hbar \ge 0$ (left) and $\hbar < 0$ (right).]
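Equations (7)-(8) can be checked numerically against the defining problem (6). The sketch below computes the scalar proximal operator by minimizing the convex objective piece by piece, with a = (λ1/c)·g_l, b = (λ2/c)·g_l and hbar an element of Gh_{t−1}; all names are assumptions.

```python
import numpy as np

def prox_double_l1(u, a, b, hbar):
    """argmin_v a|v| + b|v - hbar| + 0.5*(v - u)**2 for scalars, a, b >= 0."""
    lo_b, hi_b = min(0.0, hbar), max(0.0, hbar)
    # Each piece: (lower end, upper end, a representative interior point).
    pieces = [(-np.inf, lo_b, lo_b - 1.0), (lo_b, hi_b, 0.5 * (lo_b + hi_b)),
              (hi_b, np.inf, hi_b + 1.0)]
    obj = lambda v: a * abs(v) + b * abs(v - hbar) + 0.5 * (v - u) ** 2
    best = min((0.0, hbar), key=obj)               # breakpoints are candidates
    for lo, hi, rep in pieces:
        if lo >= hi:
            continue                                # empty piece when hbar == 0
        slope = a * np.sign(rep) + b * np.sign(rep - hbar)
        v = float(np.clip(u - slope, lo, hi))       # minimiser restricted to piece
        if obj(v) < obj(best):
            best = v
    return best
```

For g_l = 1 and λ2 = 0 this reduces to the standard soft-thresholding operator.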
Algorithm 1: The proposed algorithm for sequential signal reconstruction.
1 Input: Measurements $x_1, \ldots, x_T$, measurement matrix $A$, dictionary $D$, affine transform $G$, initial $h_0^{(d)} \equiv h_0$, reweighting matrices $Z_1, \ldots, Z_d$ and vectors $g_1, \ldots, g_d$, $c$, $\lambda_1$, $\lambda_2$.
2 Output: Sequence of sparse codes $h_1, \ldots, h_T$.
3 for $t = 1, \ldots, T$ do
4   $h_t^{(0)} = G h_{t-1}^{(d)}$
5   for $l = 1$ to $d$ do
6     $u = \big[Z_l - \frac{1}{c} Z_l D^{\mathsf T} A^{\mathsf T} A D\big] h_t^{(l-1)} + \frac{1}{c} Z_l D^{\mathsf T} A^{\mathsf T} x_t$
7     $h_t^{(l)} = \Phi_{\frac{\lambda_1}{c} g_l, \frac{\lambda_2}{c} g_l, G h_{t-1}^{(d)}}(u)$
8   end
9 end
10 return $h_1^{(d)}, \ldots, h_T^{(d)}$
Fig. 1 depicts the proximal operators for $\hbar \ge 0$ and $\hbar < 0$. Observe that different values of $g_l$ lead to different shapes of the proximal functions $\Phi_{\frac{\lambda_1}{c} g_l, \frac{\lambda_2}{c} g_l, \hbar}(u)$ for each element $u$ of $u$. Our iterative algorithm is given in Algorithm 1. We reconstruct a sequence $h_1, \ldots, h_T$ from a sequence of measurements $x_1, \ldots, x_T$. For each time step $t$, Step 6 applies a gradient descent update for the fidelity term and Step 7 applies the proximal operator $\Phi_{\frac{\lambda_1}{c} g_l, \frac{\lambda_2}{c} g_l, G h_{t-1}^{(d)}}$ element-wise to the result. Let us compare the proposed method against the algorithm in Le et al. (2019)—which resulted in the $\ell_1$-$\ell_1$-RNN—that solves the $\ell_1$-$\ell_1$ minimization in Mota et al. (2016) (where $Z_l = I$ and $g_l = \mathbf{1}$). In that algorithm, the update terms in Step 6, namely $I - \frac{1}{c} D^{\mathsf T} A^{\mathsf T} A D$ and $\frac{1}{c} D^{\mathsf T} A^{\mathsf T}$, and the proximal operator in Step 7 are the same for all iterations $l$. In contrast, Algorithm 1 uses a different $Z_l$ matrix per iteration to reparameterize the update terms (Step 6) and, through updating $g_l$, it applies a different proximal operator to each element $u$ (in Step 7) per iteration $l$.
The proposed reweighted-RNN. We now describe the proposed architecture for sequential signal recovery, designed by unrolling the steps of Algorithm 1 across the iterations $l = 1, \ldots, d$ (yielding the hidden layers) and time steps $t = 1, \ldots, T$. Specifically, the $l$-th hidden layer is given by
$$h_t^{(l)} = \begin{cases} \Phi_{\frac{\lambda_1}{c} g_1, \frac{\lambda_2}{c} g_1, G h_{t-1}^{(d)}}\big(W_1 h_{t-1}^{(d)} + U_1 x_t\big), & \text{if } l = 1, \\ \Phi_{\frac{\lambda_1}{c} g_l, \frac{\lambda_2}{c} g_l, G h_{t-1}^{(d)}}\big(W_l h_t^{(l-1)} + U_l x_t\big), & \text{if } l > 1, \end{cases} \qquad (9)$$
and the reconstructed signal at time step $t$ is given by $\hat s_t = D h_t^{(d)}$, where $U_l$ and $W_l$ are defined as
$$U_l = \frac{1}{c} Z_l D^{\mathsf T} A^{\mathsf T}, \ \forall l, \qquad (10)$$
$$W_1 = Z_1 G - \frac{1}{c} Z_1 D^{\mathsf T} A^{\mathsf T} A D G, \qquad (11)$$
$$W_l = Z_l - \frac{1}{c} Z_l D^{\mathsf T} A^{\mathsf T} A D, \ l > 1. \qquad (12)$$
The activation function is the proximal operator $\Phi_{\frac{\lambda_1}{c} g_l, \frac{\lambda_2}{c} g_l, \hbar}(u)$ with learnable parameters $\lambda_1$, $\lambda_2$, $c$, $g_l$ (see Fig. 1 for the shapes of the activation functions).
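Putting Eqs. (9)-(12) together, one time step of the unrolled network can be sketched as follows; the vectorized proximal operator evaluates the candidate minimizers of each linear piece, and all function and parameter names are assumptions.

```python
import numpy as np

def prox_vec(u, a, b, hbar):
    """Elementwise prox of v -> a|v| + b|v - hbar| via piecewise candidates."""
    zero = np.zeros_like(u)
    lo, hi = np.minimum(zero, hbar), np.maximum(zero, hbar)
    cands = np.stack([zero, hbar * np.ones_like(u),
                      np.minimum(u + a + b, lo),                      # piece v <= lo
                      np.maximum(u - a - b, hi),                      # piece v >= hi
                      np.clip(u - np.sign(hbar) * (a - b), lo, hi)])  # lo < v < hi
    obj = a * np.abs(cands) + b * np.abs(cands - hbar) + 0.5 * (cands - u) ** 2
    return np.take_along_axis(cands, obj.argmin(0)[None], axis=0)[0]

def reweighted_rnn_step(x_t, h_prev, A, D, G, Zs, gs, lam1, lam2, c):
    """One time step of the unrolled network, Eq. (9) with weights (10)-(12)."""
    B = A @ D
    hbar = G @ h_prev                      # correlation prior G h_{t-1}^{(d)}
    h = h_prev
    for l, (Z, g) in enumerate(zip(Zs, gs)):
        v = G @ h if l == 0 else h         # first layer starts from G h_{t-1}^{(d)}
        u = Z @ (v - (B.T @ (B @ v) - B.T @ x_t) / c)
        h = prox_vec(u, lam1 * g / c, lam2 * g / c, hbar)
    return h, D @ h                        # hidden code and reconstruction
```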
[Figure 2: The proposed (a) reweighted-RNN vs. (b) ℓ1-ℓ1-RNN and (c) Stacked RNN with d layers.]
Fig. 2(a) depicts the architecture of the proposed reweighted-RNN. Input vectors $s_t$, $t = 1, \ldots, T$, are compressed by a linear measurement layer $A$, resulting in compressive measurements $x_t$. The reconstructed vectors $\hat s_t$, $t = 1, \ldots, T$, are obtained by multiplying linearly the hidden representation $h_t^{(d)}$ with the dictionary $D$. We train our network in an end-to-end fashion. During training, we minimize the loss function $L(\Theta) = \mathbb{E}_{s_1, \cdots, s_T}\big[\sum_{t=1}^{T} \|s_t - \hat s_t\|_2^2\big]$ using stochastic gradient descent on mini-batches, where the trainable parameters are $\Theta = \{A, D, G, h_0, Z_1, \ldots, Z_d, g_1, \ldots, g_d, c, \lambda_1, \lambda_2\}$.
We now compare the proposed reweighted-RNN [Fig. 2(a)] against the recent $\ell_1$-$\ell_1$-RNN (Le et al., 2019) [Fig. 2(b)]. The $l$-th hidden layer in $\ell_1$-$\ell_1$-RNN is given by
$$h_t^{(l)} = \begin{cases} \Phi_{\frac{\lambda_1}{c}, \frac{\lambda_2}{c}, G h_{t-1}^{(d)}}\big(W_1 h_{t-1}^{(d)} + U_1 x_t\big), & \text{if } l = 1, \\ \Phi_{\frac{\lambda_1}{c}, \frac{\lambda_2}{c}, G h_{t-1}^{(d)}}\big(W_2 h_t^{(l-1)} + U_1 x_t\big), & \text{if } l > 1. \end{cases} \qquad (13)$$
The proposed model has the following advantages over $\ell_1$-$\ell_1$-RNN. Firstly, $\ell_1$-$\ell_1$-RNN uses the proximal operator $\Phi_{\frac{\lambda_1}{c}, \frac{\lambda_2}{c}, \hbar}(u)$ as the activation function, whose learnable parameters $\lambda_1$, $\lambda_2$ are fixed across the network. Conversely, the corresponding parameters $\frac{\lambda_1}{c} g_l$ and $\frac{\lambda_2}{c} g_l$ [see (7), (8), and Fig. 1] in our proximal operator, $\Phi_{\frac{\lambda_1}{c} g_l, \frac{\lambda_2}{c} g_l, \hbar}(u)$, are learned for each hidden layer due to the reweighting vector $g_l$; hence, the proposed model has a different activation function for each unit per layer. The second difference comes from the sets of parameters $\{W_l, U_l\}$ in (13) and (9). The $\ell_1$-$\ell_1$-RNN model uses the same $\{W_2, U_1\}$ for the second and higher layers. In contrast, our reweighted-RNN has a different set of $\{W_l, U_l\}$ per hidden layer due to the reweighting matrix $Z_l$. These two aspects [which are schematically highlighted in blue in Fig. 2(a)] can lead to an increase in the learning capability of the proposed reweighted-RNN, especially when the depth of the model increases.
In comparison to a generic stacked RNN (Pascanu et al., 2014) [Fig. 2(c)], reweighted-RNN promotes the inherent data structure, that is, each vector $s_t$ has a sparse representation $h_t$ and consecutive $h_t$'s are correlated. This design characteristic of the reweighted-RNN leads to residual connections, which reduce the risk of vanishing gradients during training [the same idea has been shown in several works (He et al., 2016; Huang et al., 2017) in the deep neural network literature].
Furthermore, in (10) and (12), we see a weight coupling of $W_l$ and $U_l$ (due to the shared components of $A$, $D$ and $Z_l$). This coupling satisfies the necessary condition for convergence in Chen et al. (2018) (Theorem 1). Using Theorem 2 in Chen et al. (2018), it can be shown that reweighted-RNN, in theory, needs a smaller number of iterations (i.e., $d$ in Algorithm 1) to reach convergence, compared to ISTA (Daubechies et al., 2004) and FISTA (Beck & Teboulle, 2009)." }, { "heading": "3 GENERALIZATION ERROR BOUND", "text": "While increasing the network expressivity, the over-parameterization of reweighted-RNN raises the question of whether our network ensures good generalization. In this section, we derive and analyze the generalization properties of the proposed reweighted-RNN model in comparison to state-of-the-art RNN architectures. We provide bounds on the Rademacher complexity (Shalev-Shwartz & Ben-David, 2014) for functional classes of the considered deep RNNs, which are used to derive generalization error bounds for evaluating their generalization properties (we refer to Appendix C.1 for definitions of the Rademacher complexity and the generalization error bound).
Preliminaries: We consider a deep RNN as a $d$-layer network $f_{\mathcal{W},\mathcal{U}}^{(d)} \in \mathcal{F}_{d,T}: \mathbb{R}^{h} \times \mathbb{R}^{n} \mapsto \mathbb{R}^{h}$ with weight parameters $\mathcal{W} = (W_1, \ldots, W_d)$ and $\mathcal{U} = (U_1, \ldots, U_d)$, where $W_l \in \mathbb{R}^{h \times h}$, $U_l \in \mathbb{R}^{h \times n}$. As in Bartlett et al. (2017); Golowich et al. (2018); Neyshabur et al. (2019; 2015), we derive generalization error bounds by controlling norms of the weight matrices. Let $\|W_l\|_{p,q} = \big(\sum_j (\|w_{l,j}\|_p)^q\big)^{1/q}$ define the $\ell_{p,q}$-norm, $p, q \ge 1$, of the weight matrix $W_l$, where $w_{l,j} \in \mathbb{R}^{h}$ is the $j$th row of $W_l$. Since we focus on deep networks with soft-thresholding-based activation units—designed by unfolding algorithms for $\ell_1$-norm minimization—we derive the network complexities under bounding per-unit $\ell_1$ regularization, i.e., $\|w_{l,j}\|_1$. We also denote $\|W_l\|_{1,\infty} = \max_j \|w_{l,j}\|_1$ as the maximum of the $\ell_1$-norms of the matrix's rows. We assume that the $\ell_1$-norm of the weights of each neuron is bounded as $\|W_l\|_{1,\infty} = \max_j \|w_{l,j}\|_1 \le \alpha_l$; similarly, $\|U_l\|_{1,\infty} = \max_j \|u_{l,j}\|_1 \le \beta_l$, where $u_{l,j}$ is the $j$th row of the matrix $U_l$. As shown in (9), we can write the reweighted-RNN model recursively as $h_t^{(1)} = f_{\mathcal{W},\mathcal{U}}^{(1)}(h_{t-1}^{(d)}, x_t) = \Phi(W_1 h_{t-1}^{(d)} + U_1 x_t)$ and $h_t^{(l)} = f_{\mathcal{W},\mathcal{U}}^{(l)}(h_{t-1}^{(d)}, x_t) = \Phi(W_l f_{\mathcal{W},\mathcal{U}}^{(l-1)}(h_{t-1}^{(d)}, x_t) + U_l x_t)$, where $\Phi(\cdot)$ is an activation function. For convenience, we denote the input layer as $f_{\mathcal{W},\mathcal{U}}^{(0)} = h_{t-1}^{(d)}$; namely, at $t = 1$, we have $h_0^{(l)} \equiv h_0$.
We denote the true and training loss by $L_{\mathcal{D}}(f)$ and $L_S(f)$, respectively, where $S$ is the training set (of size $m$) drawn i.i.d. from the distribution $\mathcal{D}$. The generalization error is $L_{\mathcal{D}}(f) - L_S(f)$, with $f$ a function from the functional class $\mathcal{F}_{d,T}$. At time step $t$, we define $X_t \in \mathbb{R}^{n \times m}$ as a matrix composed of $m$ columns from the input vectors $\{x_{t,i}\}_{i=1}^{m}$. We also define $\|X_t\|_{2,\infty} = \sqrt{\max_{k \in \{1,\ldots,n\}} \sum_{i=1}^{m} x_{t,i,k}^2}$ as the maximum of the $\ell_2$-norms of the rows of the matrix $X_t$, and $\|h_0\|_\infty = \max_j |h_{0,j}|$.
Generalization error bound. We first derive the generalization error bound for the proposed reweighted-RNN (with $T$ time steps) based on Rademacher complexity (see Theorem 26.5 in Shalev-Shwartz & Ben-David (2014) and Theorem C.1 in the Appendix).
Theorem 3.1 (Generalization error bound). Let $\mathcal{F}_{d,T}: \mathbb{R}^{h} \times \mathbb{R}^{n} \mapsto \mathbb{R}^{h}$ denote the functional class of reweighted-RNN with $T$ time steps and $d$ layers, where $\|W_l\|_{1,\infty} \le \alpha_l$, $\|U_l\|_{1,\infty} \le \beta_l$, and $1 \le l \le d$.
Assume that the input data satisfy $\|X_t\|_{2,\infty} \le \sqrt{m}\,B_x$, the initial hidden state is $h_0$, and the loss function is 1-Lipschitz and bounded by $\eta$. Then, for $f \in \mathcal{F}_{d,T}$ and any $\delta > 0$, with probability at least $1 - \delta$ over a training set $S$ of size $m$ drawn i.i.d. from the distribution $\mathcal{D}$,
$$L_{\mathcal{D}}(f) - L_S(f) \le 2\mathfrak{R}_S(\mathcal{F}_{d,T}) + 4\eta\sqrt{\frac{2\log(4/\delta)}{m}}, \qquad (14)$$
where
$$\mathfrak{R}_S(\mathcal{F}_{d,T}) \le \sqrt{\frac{2(4dT\log 2 + \log n + \log h)}{m}} \cdot \sqrt{\Big(\sum_{l=1}^{d} \beta_l \Lambda_l\Big)^2 \Big(\frac{\Lambda_0^T - 1}{\Lambda_0 - 1}\Big)^2 B_x^2 + \Lambda_0^{2T}\|h_0\|_\infty^2}, \qquad (15)$$
with $\Lambda_l$ defined as follows: $\Lambda_l = \prod_{k=l+1}^{d} \alpha_k$ for $0 \le l \le d-1$, and $\Lambda_d = 1$.
Proof. The proof is given in Appendix D.
The generalization error in (14) is bounded by the Rademacher complexity, which depends on the training set $S$. If the Rademacher complexity is small, the network can be learned with a small generalization error. The bound in (15) is of the order of the square root of the network depth $d$ multiplied by the number of time steps $T$. The bound depends on the logarithm of the number of measurements $n$ and the number of hidden units $h$. It is worth mentioning that the second square root in (15) only depends on the norm constraints and the input training data, and it is independent of the network depth $d$ and the number of time steps $T$ under appropriate norm constraints.
To compare our model with $\ell_1$-$\ell_1$-RNN (Le et al., 2019) and Sista-RNN (Wisdom et al., 2017), we derive bounds on their Rademacher complexities for a time step $t$. The definitions of a functional class $\mathcal{F}_{d,t}$ for the $t$th time step of reweighted-RNN, $\ell_1$-$\ell_1$-RNN, and Sista-RNN are given in Appendix C.2. Let $H_{t-1} \in \mathbb{R}^{h \times m}$ denote a matrix whose columns are the previous hidden-state vectors $\{h_{t-1,i}\}_{i=1}^{m}$, and $\|H_{t-1}\|_{2,\infty} = \sqrt{\max_{k \in \{1,\ldots,h\}} \sum_{i=1}^{m} h_{t-1,i,k}^2} \le \sqrt{m}\,B_{h_{t-1}}$.
Corollary 3.1.1. The empirical Rademacher complexity of $\mathcal{F}_{d,t}$ for reweighted-RNN is bounded as
$$\mathfrak{R}_S(\mathcal{F}_{d,t}) \le \sqrt{\frac{2(4d\log 2 + \log n + \log h)}{m}} \cdot \sqrt{\Big(\sum_{l=1}^{d} \beta_l \Lambda_l\Big)^2 B_x^2 + \Lambda_0^2 B_{h_{t-1}}^2}, \qquad (16)$$
with $m$ the number of training samples and $\Lambda_l$ given by $\Lambda_d = 1$, $\Lambda_l = \prod_{k=l+1}^{d} \alpha_k$ for $0 \le l \le d-1$.
Proof. The proof is a special case of Theorem 3.1 for time step $t$.
Following the proof of Theorem 3.1, we can obtain the Rademacher complexities for $\ell_1$-$\ell_1$-RNN and Sista-RNN:
Corollary 3.1.2. The empirical Rademacher complexity of $\mathcal{F}_{d,t}$ for $\ell_1$-$\ell_1$-RNN is bounded as:
$$\mathfrak{R}_S(\mathcal{F}_{d,t}) \le \sqrt{\frac{2(4d\log 2 + \log n + \log h)}{m}} \cdot \sqrt{\beta_1^2 \Big(\frac{\alpha_2^d - 1}{\alpha_2 - 1}\Big)^2 B_x^2 + \alpha_1^2 \alpha_2^{2(d-1)} B_{h_{t-1}}^2}. \qquad (17)$$
Corollary 3.1.3. The empirical Rademacher complexity of $\mathcal{F}_{d,t}$ for Sista-RNN is bounded as:
$$\mathfrak{R}_S(\mathcal{F}_{d,t}) \le \sqrt{\frac{2(4d\log 2 + \log n + \log h)}{m}} \cdot \sqrt{\beta_1^2 \Big(\frac{\alpha_2^d - 1}{\alpha_2 - 1}\Big)^2 B_x^2 + \Big(\alpha_1 \alpha_2^{d-1} + \beta_2 \Big(\frac{\alpha_2^{d-1} - 1}{\alpha_2 - 1}\Big)\Big)^2 B_{h_{t-1}}^2}. \qquad (18)$$
By contrasting (16) with (17) and (18), we see that the complexities of $\ell_1$-$\ell_1$-RNN and Sista-RNN have a polynomial dependence on $\alpha_1$, $\beta_1$ and $\alpha_2$, $\beta_2$ (the norms of the first two layers), whereas the complexity of reweighted-RNN has a polynomial dependence on $\alpha_1, \ldots, \alpha_d$ and $\beta_1, \ldots, \beta_d$ (the norms of all layers). This over-parameterization offers a flexible way to control the generalization error of reweighted-RNN. We derive empirical generalization errors in Fig. 6 in Appendix A, demonstrating that increasing the depth of reweighted-RNN still ensures a low generalization error."
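To make the dependence of the bound (15) on the layer norms concrete, the following sketch evaluates its right-hand side numerically from assumed norm constraints; all names and the example values are illustrative.

```python
import numpy as np

def rademacher_bound(alphas, betas, T, n, h, m, B_x, h0_inf):
    """Numerical value of the right-hand side of Eq. (15).
    alphas[l-1] = alpha_l and betas[l-1] = beta_l for layers l = 1..d."""
    d = len(alphas)
    Lam = [float(np.prod(alphas[l:])) for l in range(d + 1)]  # Lambda_0..Lambda_d
    s = sum(betas[l] * Lam[l + 1] for l in range(d))          # sum_l beta_l*Lambda_l
    L0 = Lam[0]
    geo = (L0 ** T - 1) / (L0 - 1) if L0 != 1 else float(T)   # geometric series
    inner = (s * geo) ** 2 * B_x ** 2 + L0 ** (2 * T) * h0_inf ** 2
    return np.sqrt(2 * (4 * d * T * np.log(2) + np.log(n) + np.log(h)) / m) * np.sqrt(inner)

# Example: per-layer norms below 1 keep Lambda_0^T, and hence the bound, controlled.
print(rademacher_bound(alphas=[0.9] * 3, betas=[0.5] * 3, T=20,
                       n=51, h=1024, m=8000, B_x=1.0, h0_inf=0.1))
```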
We use the moving MNIST dataset (Srivastava et al., 2015), which contains 10K video sequences of equal length (20 frames per sequence). Similar to the setup in Le et al. (2019), the dataset is split into training, validation, and test sets of 8K, 1K, and 1K sequences, respectively. In order to reduce the training time and memory requirements, we downscale the frames from 64 × 64 to 16 × 16 pixels using bilinear decimation. After vectorizing, we obtain sequences s_1, . . . , s_T ∈ ℝ^256. Per sequence, we obtain measurements x_1, . . . , x_T ∈ ℝ^n using a trainable linear sensing matrix A ∈ ℝ^{n×n_0}, with T = 20, n_0 = 256 and n < n_0. We compare the reconstruction performance of the proposed reweighted-RNN model against deep-unfolding RNN models, namely, ℓ1-ℓ1-RNN (Le et al., 2019) and Sista-RNN (Wisdom et al., 2017), and stacked-RNN models, that is, sRNN (Elman, 1990), LSTM (Hochreiter & Schmidhuber, 1997), GRU (Cho et al., 2014), FastRNN (Kusupati et al., 2018)¹, IndRNN (Li et al., 2018) and SpectralRNN (Zhang et al., 2018). For the vanilla RNN, LSTM and GRU, the native PyTorch cell implementations were used. The unfolding-based methods were implemented in PyTorch², with Sista-RNN and ℓ1-ℓ1-RNN tested by reproducing the experiments in Wisdom et al. (2017); Le et al. (2019). For the FastRNN, IndRNN, and SpectralRNN cells, we use the publicly available Tensorflow implementations. While Sista-RNN, ℓ1-ℓ1-RNN and reweighted-RNN have their own layer-stacking schemes derived from unfolding minimization algorithms, we use the stacking rule in Pascanu et al. (2013) [see Fig. 2(c)] to build deep networks for the other RNN architectures.

¹Kusupati et al. (2018) also proposed FastGRNN; we found that, in our application scenario, the non-gated variant (the FastRNN) consistently outperformed FastGRNN. As such, we do not include results with the latter.

²Our implementations are available at https://1drv.ms/u/s!ApHn770BvhH2aWay9xEhAiXydfo?e=aCX1X0.

Our default settings are: a compressed sensing (CS) rate of n/n_0 = 0.2 and d = 3 hidden layers³ with h = 2^10 hidden units per layer. In each set of experiments, we vary one of these hyperparameters while keeping the other two fixed. For the unfolding methods, the overcomplete dictionary D ∈ ℝ^{n_0×h} is initialized with the discrete cosine transform (DCT), with varying dictionary sizes of h ∈ {2^7, 2^8, 2^9, 2^10, 2^11, 2^12} (corresponding to the number of hidden neurons in the other methods). For initializing λ_1, λ_2 [see (2), (4)], we perform a random search in the range [10^−5, 3.0] on the validation set. To avoid the problem of exploding gradients, we clip the gradients during backpropagation such that their ℓ2-norms are less than or equal to 0.25. We do not apply weight decay regularization, as we found it often leads to worse performance, especially since gradient clipping is already used for training stability. We train the networks for 200 epochs using the Adam optimizer with an initial learning rate of 0.0003 and a batch size of 32. During training, if the validation loss does not decrease for 5 epochs, we reduce the learning rate to 0.3 of its current value.

Table 1 summarizes the reconstruction results for different CS rates n/n_0. The reweighted-RNN model systematically outperforms the other models, often by a large margin. Table 2 shows similar improvements for various dimensions of hidden units. Table 3 shows that IndRNN delivers higher reconstruction performance than our model when a small number of hidden layers (d = 1, 2) is used.
Moreover, when the depth increases, reweighted-RNN surpasses all other models. Our network also has fewer trainable parameters than the popular RNN variants: at the default settings, reweighted-RNN, the stacked vanilla RNN, the stacked LSTM, and the stacked GRU have 4.47M, 5.58M, 21.48M, and 16.18M parameters, respectively." }, { "heading": "5 CONCLUSIONS", "text": "We designed a novel deep RNN by unfolding an algorithm that solves a reweighted ℓ1-ℓ1 minimization problem. Our model has high network expressivity due to per-unit learnable activation functions and over-parameterized weights. We also established the generalization error bound for the proposed model via Rademacher complexity. We showed that reweighted-RNN has good generalization properties and that its error bound is tighter than those of existing RNNs as a function of the number of time steps. Experimentation on the task of sequential video-frame reconstruction shows that our model (i) outperforms various state-of-the-art RNNs in terms of accuracy and (ii) is capable of stacking many hidden layers, resulting in a better learning capability than the existing unfolding methods.

³In our experiments, the 2-layer LSTM network outperforms the 3-layer one (see Table 3); the default setting for LSTM is thus 2 layers." }, { "heading": "A SUPPLEMENTARY EXPERIMENTS", "text": "In our experiments, we use the publicly available Tensorflow implementations for the FastRNN4, IndRNN5, and SpectralRNN6 cells. While Sista-RNN, ℓ1-ℓ1-RNN and reweighted-RNN have their own layer-stacking schemes derived from unfolding minimization algorithms, we use the stacking rule in Pascanu et al. (2013) [see Fig. 2(c)] to build deep networks for the other RNN models.

Figure 3 shows the learning curves of all methods under the default setting. It can be seen that reweighted-RNN achieves the lowest mean square error on both the training and validation sets. It can also be observed that the unfolding methods converge faster than the stacked RNNs, with the proposed reweighted-RNN being the fastest. More experimental results for the proposed reweighted-RNN are provided to illustrate the learning curves, which measure the average mean square error between the original frames and their reconstructed counterparts versus the training epochs, with different CS rates [Fig. 4], different network depths d [Fig. 6], and different network widths h [Fig. 5].

Since we use different frameworks to implement the RNNs used in our benchmarks, we do not report and compare the computational time for training the models. Specifically, we rely on the Tensorflow implementations from the authors of IndRNN, FastRNN and SpectralRNN, while the rest are written in PyTorch. Furthermore, even among the PyTorch models, the vanilla RNN, LSTM, and GRU cells are written in CuDNN (the default PyTorch implementations), so they are significantly faster to train than the others. This does not mean that these networks have better runtime complexities, but rather more efficient implementations. However, an important comparison can be made between ℓ1-ℓ1-RNN Le et al. (2019) (as the baseline method) and Reweighted-RNN, due to their similarities in implementation. At the default settings, it takes 3,521 seconds and 2,985 seconds to train Reweighted-RNN and ℓ1-ℓ1-RNN Le et al.
(2019), respectively.\n4Code available at https://github.com/microsoft/EdgeML 5Code available at https://github.com/batzner/indrnn 6Code available at https://github.com/zhangjiong724/spectral-RNN" }, { "heading": "B THE PROXIMAL OPERATOR FOR PROBLEM (4)", "text": "Proposition B.1. The proximal operator Φλ1 c g, λ2 c g,~ (u) in (6) for the reweighted `1-`1 minimization problem (4), for which g(v) = λ1g|v|+ λ2g|v − ~|, is given by\nΦλ1 c g, λ2 c g,~≥0 (u) = u− λ1c g − λ2 c g, ~ + λ1 c g + λ2 c g < u <∞ ~, ~ + λ1c g − λ2 c g ≤ u ≤ ~ + λ1 c g + λ2 c g u− λ1c g + λ2 c g, λ1 c g − λ2 c g < u < ~ + λ1 c g − λ2 c g 0, −λ1c g − λ2 c g ≤ u ≤ λ1 c g − λ2 c g\nu+ λ1c g + λ2 c g, −∞ < u < − λ1 c g − λ2 c g\n(19)\nΦλ1 c g, λ2 c g,~<0 (u) = u− λ1c g − λ2 c g, λ1 c g + λ2 c g < u <∞ 0, −λ1c g + λ2 c g ≤ u ≤ λ1 c g + λ2 c g u+ λ1c g − λ2 c g, ~− λ1 c g + λ2 c g < u < − λ1 c g + λ2 c g ~, ~− λ1c g − λ2 c g ≤ u ≤ ~− λ1 c g + λ2 c g\nu− λ1c g + λ2 c g, −∞ < u < ~− λ1 c g − λ2 c g\n(20)\nProof. We compute the proximal operator Φλ1 c g, λ2 c g,~ (u) (19) for ~ ≥ 0, it is similar for ~ < 0. From (6), Φλ1\nc g, λ2 c g,~\n(u) is expressed by:\nΦλ1 c g, λ2 c g,~ (u) = arg min v∈R\n{ ϕ(v) :=\nλ1 c g|v|+ λ2 c g|v − ~|+ 1 2 |v − u|2\n} . (21)\nWe first consider the ∂ϕ(v)/∂v in v ∈ {(−∞, 0), (0, ~), (~,∞)}, in which ∂ϕ(v) exists. Taking the derivative of ϕ(v) in these intervals delivers\n∂ϕ(v) ∂v = λ1 c g · sign(v) + λ2 c g · sign(v − ~) + (v − u), (22)\nwhere sign(.) is a sign function. When setting ∂ϕ(v)/∂v = 0 to minimize ϕ(v), we derive:\nv = u− λ1c g − λ2 c g, ~ < v <∞ u− λ1c g + λ2 c g, 0 < v < ~\nu+ λ1c g + λ2 c g, −∞ < v < 0\n(23)\nFrom (21) and (23), we have\nΦλ1 c g, λ2 c g,~ (u) = u− λ1c g − λ2 c g, ~ + λ1 c g + λ2 c g < u <∞ u− λ1c g + λ2 c g, λ1 c g − λ2 c g < u < ~ + λ1 c g − λ2 c g\nu+ λ1c g + λ2 c g, −∞ < u < − λ1 c g − λ2 c g\n(24)\nIn the remaining range value of u, namely, −λ1c g − λ2 c g ≤ u ≤ λ1 c g − λ2 c g and ~ + λ1 c g − λ2 c g ≤ u ≤ ~ + λ1c g + λ2 c g, we prove that the minimum of ϕ(v) (21) is obtained when v = 0 and v = ~, respectively.\nLet us rewrite ϕ(v), which was defined in (21), as\nϕ(v) = λ1 c g|v|+ λ2 c g|v − ~|+ 1 2 |v − u|2 (25)\nBy applying the inequality |a− b| ≥ |a| − |b|, where a, b ∈ R, to (25), we obtain:\nϕ(v) ≥ λ1 c g|v|+ λ2 c g|v| − λ2 c g|~|+ 1 2 v2 − vu+ 1 2 u2\n≥ |v| (λ1 c g + λ2 c g − |u| ) + 1 2 v2 − λ2 c g|~|+ 1 2 u2 (26)\nFor−λ1c g− λ2 c g ≤ u ≤ λ1 c g− λ2 c g, from (26), ϕ(v) is minimal when v = 0, due to λ1 c g+ λ2 c g−|u| ≥ 0.\nSimilarly, for ~+ λ1c g− λ2 c g ≤ u ≤ ~+ λ1 c g+ λ2 c g, i.e., λ1 c g− λ2 c g ≤ u−~ ≤ λ1 c g+ λ2 c g, we have\nϕ(v) ≥λ1 c g|v − ~| − λ1 c g|~|+ λ2 c g|v − ~|+ 1 2 (v − ~)2 − |v − ~||u− ~|+ 1 2 (u− ~)2\n≥|v − ~| (λ1 c g + λ2 c g − |u− ~| ) + 1 2 (v − ~)2 − λ1 c g|~|+ 1 2 (u− ~)2. (27)\nFrom (27), ϕ(v) is minimal when v = ~, since λ1c g + λ2 c g − |u− ~| ≥ 0. Combining these results with the result in (24), we conclude the proof." }, { "heading": "C GENERALIZATION AND DEEP UNFOLDING RNNS", "text": "C.1 GENERALIZATION ERROR BOUND DEFINITION\nNotations. Let f (d)W : Rn 7→ Rh be the function computed by a d-layer network with weight parametersW . The network f (d)W maps an input sample xi ∈ Rn (from an input space X ) to an output yi ∈ Rh (from an output space Y), i.e., yi = f (d) W (xi). Let S denote a training set of size m, i.e., S = {(xi,yi)}mi=1 and E(xi,yi)∼S [·] denote an expectation over (xi,yi) from S. The set S is drawn i.i.d. from a distribution D, denoted as S ∼ Dm, over a space Z = X × Y . Let F be a (class) set of functions. 
Let ` : F × Z 7→ R denote the loss function and ` ◦ F = {z 7→ `(f, z) : f ∈ F}. We define the true loss and the empirical (training) loss by LD(f) and LS(f), respectively, as follows:\nLD(f) = E(xi,yi)∼D [ ` ( f(xi),yi )] , (28)\nand LS(f) = E(xi,yi)∼S [ ` ( f(xi),yi )] . (29) The generalization error, which is defined as a measure of how accurately a learned algorithm is able to predict outcome values for unseen data, is calculated by LD(f)− LS(f). Rademacher complexity. Let F be a hypothesis set of functions (neural networks). The empirical Rademacher complexity of F (Shalev-Shwartz & Ben-David, 2014) for a training sample set S is defined as follows:\nRS(F) = 1\nm E ∈{±1}m [ sup f∈F m∑ i=1 if(xi) ] , (30)\nwhere = ( 1, ..., m); here i are independent uniformly distributed random (Rademacher) variables from {±1}, according to P[ i = 1] = P[ i = −1] = 1/2. The generalization error bound (Shalev-Shwartz & Ben-David, 2014) is derived based on the Rademacher complexity defined in the following theorem: Theorem C.1. (Shalev-Shwartz & Ben-David, 2014, Theorem 26.5) Assume that |`(f, z)| ≤ η for all f ∈ F and z. Then, for any δ > 0, with probability at least 1− δ,\nLD(f)− LS(f) ≤ 2RS(` ◦ F) + 4η √ 2 log(4/δ)\nm . (31)\nIt can be noted that the bound in (31) via the Rademacher complexity depends on the training set S, which makes it applicable to a number of learning problems, e.g., regression and classification, under given a loss function `.\nC.2 NOTATION FOR DEEP UNFOLDED RNNS\nIn this subsection, we provide the required notation for the proposed reweighted-RNN model, `1- `1-RNN, and Sista-RNN, which will be used in the derivation of their respective generalization analysis.\nThe proposed reweighted-RNN. Let h(l)t be the hidden states in layer l evolving in time step t. We write the reweighted-RNN model recursively as h(1)t = f (1) W,U (h (d) t−1,xt) = Φ(W1h (d) t−1+U1xt) and h (l) t = f (l) W,U (h (d) t−1,xt) = Φ ( Wlf (l−1) W,U (h (d) t−1,xt) +Ulxt ) , where Φ is an activation function. The\nhidden state is updated as shown in (9). The real-valued family of functions, Fd,t : Rh × Rn 7→ R, for the functions f (d)W,U in layer d is defined by:\nFd,t = { (h (d) t−1,xt) 7→ Φ(wTd f (d−1) W,U (h (d) t−1,xt) + u T d xt) : ‖Wd‖1,∞ ≤ αd, ‖Ud‖1,∞ ≤ βd } ,\n(32)\nwhere αl, βl are nonnegative hyper-parameters for layer l, where 1 < l ≤ d. In layer l = 1, the real-valued family of functions, F1,t : Rh × Rn 7→ R, for the functions f (1)W,U is defined by:\nF1,t = { (h (d) t−1,xt) 7→ Φ(wT1 h (d) t−1 + u T 1 xt) : ‖W1‖1,∞ ≤ α1, ‖U‖1,∞ ≤ β1 } , (33)\nwhere α1, β1 are nonnegative hyper-parameters. We denote the input layer as f (0) W,U = h (d) t−1, in particular, at t = 1, h(l)0 ≡ h0.\nThe `1-`1-RNN model (Le et al., 2019). The hidden state h (l) t for `1-`1-RNN is updated as shown in (13). The real-valued family of functions, Fd,t : Rh × Rn 7→ R, for the function f (d)W,U in layer d is defined by:\nFd,t = { (h (d) t−1,xt) 7→ Φ(wT2 f (d−1) W,U (h (d) t−1,xt) + u T 1 xt) : ‖W2‖1,∞ ≤ α2, ‖U1‖1,∞ ≤ β1 } ,\n(34)\nwhere α2, β1 are nonnegative hyper-parameters for layer l, where 1 < l ≤ d. In layer l = 1, the real-valued family of functions, F1,t : Rh × Rn 7→ R, for the functions f (1)W,U is defined by:\nF1,t = { (h (d) t−1,xt) 7→ Φ(wT1 h (d) t−1 + u T 1 xt) : ‖W1‖1,∞ ≤ α1, ‖U‖1,∞ ≤ β1 } , (35)\nwhere α1, β1 are nonnegative hyper-parameters.\nThe Sista-RNN model (Wisdom et al., 2017). 
The hidden state h(l)t in Sista-RNN is updated by:\nh (l) t = φ ( W1h (d) t−1 + U1xt ) , l = 1, φ ( W2h (l−1) t + U1xt + U2h (d) t−1 ) , l > 1,\n(36)\nThe real-valued family of functions, Fd,t : Rh × Rn 7→ R, for the functions f (d)W,U in layer d is defined by:\nFd,t = { (h (d) t−1,xt) 7→ φ ( wT2 f (d−1) W,U (h (d) t−1,xt) + u T 1 xt + u T 2 h (d) t−1 ) : ‖W2‖1,∞ ≤ α2, ‖U‖1,∞ ≤ β1, ‖U‖2,∞ ≤ β2 } , (37)\nwhere α2, β1, β2 are nonnegative hyper-parameters. In layer l = 1, F1,t = { (h (d) t−1,xt) 7→ φ ( wT1 h (d) t−1 + u T 1 xt ) : ‖W1‖1,∞ ≤ α1, ‖U‖1,∞ ≤ β1 } , (38)\nwhere α1, β1 are nonnegative hyper-parameters." }, { "heading": "D PROOF OF THEOREM 3.1", "text": "Proof. We consider the real-valued family of functions Fd,T : Rh×Rn 7→ R for the functions f (d)W,U to update h(d)T in layer d, time step T , defined as\nFd,T = { (h (d) T−1,xT ) 7→ Φ(w T d f (d−1) W,U (h (d) T−1,xT ) + u T d xT ) : ‖Wd‖1,∞ ≤ αd, ‖Ud‖1,∞ ≤ βd } ,\n(39)\nwhere wd,ud are the corresponding rows from Wd,Ud, respectively, and αl, βl, with 1 < l ≤ d, are nonnegative hyper-parameters. For the first layer and the first time step, i.e., l = 1, t = 1, the real-valued family of functions, F1,1 : Rh × Rn 7→ R, for the functions f (1)W,U is defined by:\nF1,1 = { (h0,x1) 7→ Φ(wT1 h0 + uT1 x1) : ‖W1‖1,∞ ≤ α1, ‖U‖1,∞ ≤ β1 } , (40)\nwhere α1, β1 are nonnegative hyper-parameters. We denote the input layer as f (0) W,U = h0 at the first time step. From the definition of Rademacher complexity in (30) and the family of functions in (39) and (40), we obtain:\nmRS(Fd,T ) (41a)\n≤ E ∈{±1}m [ sup W,U\n‖wd‖1≤αd ‖ud‖1≤βd\nm∑ i=1 iΦ ( wTd f (d−1) W,U (hT−1,i,xT,i) + u T d xT,i\n)]\n≤ 1 λ log exp\n( E\n∈{±1}m [ sup W,U\n‖wd‖1≤αd ‖ud‖1≤βd\nλ m∑ i=1 i ( wTd f (d−1) W,U (hT−1,i,xT,i) + u T d xT,i\n)])\n≤ 1 λ log E ∈{±1}m [ sup W,U\n‖wd‖1≤αd ‖ud‖1≤βd\nexp ( λ\nm∑ i=1 i ( wTd f (d−1) W,U (hT−1,i,xT,i) ) + λ m∑ i=1 iu T d xT,i\n)]\n(41b)\n≤ 1 λ log E ∈{±1}m [ sup W,U\n‖wd‖1≤αd\nexp ( λ\nm∑ i=1 i ( wTd f (d−1) W,U (hT−1,i,xT,i)\n))\n· sup ‖ud‖1≤βd exp\n( λ\nm∑ i=1 iu T d xT,i\n)] , (41c)\nwhere λ > 0 is an arbitrary parameter, Eq. (41b) follows Lemma D.1 for 1-Lipschitz Φ a long with Inequality (62), and (41c) holds by Inequality (59).\nFor layer 1 ≤ l ≤ d and time step t, let us denote:\n∆ (l) ht−1,xt\n= sup W,U\n‖wl‖1≤αl\nexp ( λΛl\nm∑ i=1 i ( wTl f (l−1) W,U (ht−1,i,xt,i) )) , (42)\n∆(l)xt = sup ‖ul‖1≤βl exp\n( λΛl\nm∑ i=1 i ( uTl xt,i )) , (43)\nwhere Λl is defined as follows: Λd = 1, Λl = d∏\nk=l+1 αk with 1 ≤ l ≤ d− 1, and Λ0 = d∏ k=1 αk.\nFollowing the Hölder’s inequality in (58) in case of p = 1 and q = ∞ applied to wTl and f (l−1) W,U (ht−1,i,xt,i) in (42), respectively, we get:\n∆ (d) ht−1,xt\n(44)\n≤ sup W,U\n‖Wd−1‖1,∞≤αd−1 ‖Ud−1‖1,∞≤βd−1\nexp ( λαd ∥∥∥∥∥ m∑ i=1 iΦ ( Wd−1f (d−2) W,U (ht−1,i,xt,i) + Ud−1xt,i )∥∥∥∥∥ ∞ )\n≤ sup W,U\n‖wd−1,k‖1≤αd−1 ‖ud−1,k‖1≤βd−1\nexp ( λαd max\nk∈{1,··· ,h} ∣∣∣∣∣ m∑ i=1 iΦ ( wTd−1,kf (d−2) W,U (ht−1,i,xt,i) + u T d−1,kxt,i )∣∣∣∣∣ )\n≤ sup W,U\n‖wd−1,k‖1≤αd−1 ‖ud−1,k‖1≤βd−1\nexp ( λαd ∣∣∣∣∣ m∑ i=1 iΦ ( wTd−1,kf (d−2) W,U (ht−1,i,xt,i) + u T d−1,kxt,i )∣∣∣∣∣ ) . (45)\nSimilarly, from (43), we obtain:\n∆(d)xt ≤ sup ‖ud‖1≤βd exp\n( λ\nm∑ i=1 iu T d xt,i\n) ≤ exp ( λβd ∥∥∥ m∑ i=1 ixt,i ∥∥∥ ∞ ) ≤ exp ( λβd ∣∣∣ m∑ i=1 ixτ,i,κ ∣∣∣), (46)\nwhere {τ, κ} = argmax t∈{1,...,T},j∈{1,...,n} ∣∣∣ m∑ i=1 ixt,i,j ∣∣∣. 
From (41c), (44), and (46), we get:\nmRS(Fd,T )\n≤ 1 λ log\n( E\n∈{±1}m [ sup W,U\n‖wd−1,k‖1≤αd−1 ‖ud−1,k‖1≤βd−1\nexp ( λαd ∣∣∣∣∣ m∑ i=1 iΦ ( wTd−1,kf (d−2) W,U (hT−1,i,xT,i) + u T d−1,kxT,i )∣∣∣∣∣ + λβd\n∣∣∣ m∑ i=1 ixτ,i,κ ∣∣∣)])\n≤ 1 λ log\n( E\n∈{±1}m [ sup W,U\n‖wd−1,k‖1≤αd−1 ‖ud−1,k‖1≤βd−1\n(\nexp ( λαd\nm∑ i=1 iΦ ( wTd−1,kf (d−2) W,U (hT−1,i,xT,i) + u T d−1,kxT,i ) + λβd m∑ i=1 ixτ,i,κ\n)\n+ exp ( λαd\nm∑ i=1 iΦ ( wTd−1,kf (d−2) W,U (hT−1,i,xT,i) + u T d−1,kxT,i ) − λβd m∑ i=1 ixτ,i,κ\n)\n+ exp ( − λαd\nm∑ i=1 iΦ ( wTd−1,kf (d−2) W,U (hT−1,i,xT,i) + u T d−1,kxT,i ) + λβd m∑ i=1 ixτ,i,κ\n)\n+ exp ( − λαd\nm∑ i=1 iΦ ( wTd−1,kf (d−2) W,U (hT−1,i,xT,i) + u T d−1,kxT,i ) − λβd m∑ i=1 ixτ,i,κ\n))])\n≤ 1 λ log ( 4 E ∈{±1}m [ ∆ (d−1) hT−1,xT ∆(d−1)xT exp ( βdλ m∑ i=1 ixτ,i,κ )]) (47a)\n≤ 1 λ log\n( 4d−1 E\n∈{±1}m\n[ ∆\n(1) hT−1,xT ∆(1)xT exp ( λ ( d∑ l=2 βlΛl ) m∑ i=1 ixτ,i,κ )]) (47b)\n≤ 1 λ log\n( 4d−1 E\n∈{±1}m\n[ exp ( λ ( d∑ l=2 βlΛl ) m∑ i=1 ixτ,i,κ ) sup ‖w1‖1≤α1 exp ( λΛ1 m∑ i=1 i ( wT1 hT−1,i ))\n· sup ‖u1‖1≤β1 exp\n( λΛ1\nm∑ i=1 i ( uT1 xT,i ))]) (47c)\n≤ 1 λ log\n( 4d−1 E\n∈{±1}m\n[ exp ( λ ( d∑ l=2 βlΛl ) m∑ i=1 ixτ,i,κ ) sup W,U\n‖wd‖1≤αd ‖ud‖1≤βd\nexp ( λΛ0 ∥∥∥ m∑ i=1 ihT−1,i ∥∥∥ ∞ )\n· exp ( λβ1Λ1 ∥∥∥ m∑ i=1 ixT,i ∥∥∥ ∞ ))]) (47d)\n≤ 1 λ log\n( 4d E\n∈{±1}m\n[ exp ( λ ( d∑ l=1 βlΛl ) m∑ i=1 ixτ,i,κ )\n· sup W,U\n‖wd‖1≤αd ‖ud‖1≤βd\nexp ( λΛ0\nm∑ i=1 iΦ ( wTd f (d−1) W,U (hT−2,i,xT−1,i) + u T d xT−1,i ))]) , (47e)\nwhere (47a) holds by inequality (59), and (47b) follows by repeating the process from layer d− 1 to layer 1 for time step T . Furthermore, (47c) is obtained as the beginning of the process for time step T − 1 and (47d) follows inequality (58). Proceeding by repeating the above procedure in (47e) from time step T − 1 to time step 1, we get: mRS(Fd,T )\n≤ 1 λ log\n( 4dT E\n∈{±1}m\n[ exp ( λ ( d∑ l=1 βlΛl )(ΛT0 − 1 Λ0 − 1 ) m∑ i=1 ixτ,i,κ ) exp ( λΛT0 ∥∥∥ m∑ i=1 ih0 ∥∥∥ ∞ ]) .\n(48)\nLet us denote µ = argmax j∈{1,...,h} ∣∣∣ m∑ i=1 ih0,j ∣∣∣, from (48), we have: mRS(Fd,T )\n≤ 1 λ log\n( 4dT E\n∈{±1}m\n[ exp ( λ ( d∑ l=1 βlΛl )(ΛT0 − 1 Λ0 − 1 ) m∑ i=1 ixτ,i,κ ) exp ( λΛT0 m∑ i=1 ih0,µ )])\n≤ 2dT log 2 λ + 1 2λ log\n( E\n∈{±1}m\n[ exp ( λ ( d∑ l=1 βlΛl )(ΛT0 − 1 Λ0 − 1 ) m∑ i=1 ixτ,i,κ )\n· exp ( λΛT0 m∑ i=1 ih0,µ )])2\n≤ 2dT log 2 λ + 1 2λ log E ∈{±1}m\n[ exp ( 2λ ( d∑ l=1 βlΛl )(ΛT0 − 1 Λ0 − 1 ) m∑ i=1 ixτ,i,κ )]\n+ 1\n2λ log E ∈{±1}m\n[ exp ( 2λΛT0 m∑ i=1 ih0,µ )] (49a)\n≤ 2dT log 2 λ + 1 2λ log n∑ j=1 E ∈{±1}m\n[ exp ( 2λ ( d∑ l=1 βlΛl )(ΛT0 − 1 Λ0 − 1 ) m∑ i=1 ixτ,i,j )]\n+ 1\n2λ log h∑ j=1 E ∈{±1}m\n[ exp ( 2λΛT0 m∑ i=1 ih0,j )] (49b)\n≤ 2dT log 2 λ + 1 2λ log n∑ j=1 m∏ i=1 E ∈{±1}m\n[ exp ( 2λ ( d∑ l=1 βlΛl )(ΛT0 − 1 Λ0 − 1 ) ixτ,i,j )]\n+ 1\n2λ log h∑ j=1 m∏ i=1 E ∈{±1}m\n[ exp ( 2λΛT0 ih0,j )]\n≤ 2dT log 2 λ + 1 2λ log n∑ j=1 m∏ i=1\n[ 1\n2 exp ( 2λ ( d∑ l=1 βlΛl )(ΛT0 − 1 Λ0 − 1 ) xτ,i,j )\n+ 1\n2 exp\n( − 2λ ( d∑ l=1 βlΛl )(ΛT0 − 1 Λ0 − 1 ) xτ,i,j )]\n+ 1\n2λ log h∑ j=1 m∏ i=1\n[ 1\n2 exp\n( 2λΛT0 h0,j ) + 1\n2 exp\n( − 2λΛT0 h0,j )]\n≤ 2dT log 2 λ + 1 2λ log n∑ j=1\n[ exp ( 2λ2 ( d∑ l=1 βlΛl )2(ΛT0 − 1 Λ0 − 1 )2 m∑ i=1 x2τ,i,j )]\n+ 1\n2λ log h∑ j=1\n[ exp ( 2λ2Λ2T0 m∑ i=1 h20,j )] (49c)\n≤ 2dT log 2 λ + log n 2λ + λ ( d∑ l=1 βlΛl )2(ΛT0 − 1 Λ0 − 1 )2 mB2x + log h 2λ + λΛ2T0 m‖h0‖2∞ ≤ 2dT log 2 + log √ n+ log √ h\nλ + λ (( d∑ l=1 βlΛl )2(ΛT0 − 1 Λ0 − 1 )2 mB2x + Λ 2T 0 m‖h0‖2∞ ) , (49d)\nwhere (49a) follows inequality (61), and (49b) holds by replacing with ∑n j=1 and ∑h j=1, respectively. 
In addition, (49c) follows (60), and (49d) is obtained by the following definition: At time step t, we define Xt ∈ Rn×m, a matrix composed of m columns from the m input vectors {xt,i}mi=1; we also define ‖Xt‖2,∞ = √ max\nk∈{1,...,n}\n∑m i=1 x 2 t,i,k ≤ √ mBx, representing the maximum of the `2-norms\nof the rows of matrix Xt, and ‖h0‖∞ = max j |h0,j |.\nChoosing λ = √√√√ 2dT log 2+log√n+log√h( d∑\nl=1\nβlΛl )2( ΛT0 −1 Λ0−1 )2 mB2x+Λ 2T 0 m‖h0‖2∞ , we achieve the upper bound:\nRS(Fd,T ) ≤ √√√√2(4dT log 2 + log n+ log h) m (( d∑ l=1 βlΛl )2(ΛT0 − 1 Λ0 − 1 )2 B2x + Λ 2T 0 ‖h0‖2∞ ) .\n(50)\nIt can be noted that RS(Fd,T ) in (50) is derived for the real-valued functions Fd,T . For the vectorvalued functions Fd,T : Rh × Rn 7→ Rh (in Theorem 3.1), we apply the contraction lemma (Lemma D.1) to a Lipschitz loss to obtain the complexity of such vector-valued functions by means of the complexity of the real-valued functions. Specifically, in Theorem 3.1, under the assumption of the 1-Lipschitz loss function and from Theorem C.1, Lemma D.1, we complete the proof.\nD.1 COMPARISON WITH EXISTING GENERALIZATION BOUNDS\nRecent works have established generalization bounds for RNN models with a single recurrent layer (d = 1) using Rademacher complexity [see FastRNN in Kusupati et al. (2018)] or PAC-Bayes theory [see SpectralRNN in Zhang et al. (2018)]. We re-state these generalization bounds below and apply Theorem 3.1 with d = 1 to compare with our bound for reweighted-RNN.\nFastRNN (Kusupati et al., 2018). The hidden state ht of FastRNN is updated as follows:\nh̃t = φ(Wht−1 + Uxt)\nht = ah̃t + bht−1, (51)\nwhere 0 ≤ a, b ≤ 1 are trainable parameters parameterized by the sigmoid function. Under the assumption that a + b = 1, the Rademacher complexity RS(FT ) of the class FT of FastRNN (Kusupati et al., 2018), with ‖W‖F ≤ αF , ‖U‖F ≤ βF , and ‖xt‖2 ≤ B, is given by\nRS(FT ) ≤ 2a√ m BβF ( (1 + a(2αF − 1))T+1 − 1 a(2αF − 1) ) . (52)\nAlternatively, under the additional assumption that a ≤ 12(2αF−1)T , the bound in Kusupati et al. (2018) becomes:\nRS(FT ) ≤ 2a√ m BβF (2a(2αF − 1)(T + 1)− 1 (2αF − 1) ) . (53)\nSpectralRNN (Zhang et al., 2018). The hidden state ht and output yt ∈ Rny of SpectralRNN are computed as:\nht =φ(Wht−1 + Uxt)\nyt =Yht, (54)\nwhere Y ∈ Rny×h. The generalization error in Zhang et al. (2018) is derived for a classification problem. For any δ > 0, γ > 0, with probability ≥ 1 − δ over a training set S of size m, the generalization error (Zhang et al., 2018) of SpectralRNN is bounded by\nO\n(√ B2T 4ξ ln(ξ)\nγ2 (‖W‖ 2 F + ‖U‖2F + ‖Y‖2F ) · ζ + ln m δ\nm\n) , (55)\nwhere ζ = max{‖W‖2T−22 , 1}max{‖U‖22, 1}max{‖Y‖22, 1} and ξ = max{n, ny, h}. Reweighted-RNN. Based on Theorem 3.1, under the assumption that the initial hidden state h0 = 0, the Rademacher complexity of reweighted-RNN with d = 1 is bounded as\nRS(F1,T ) ≤ √ 4T log 2 + log n+ log h\nm\n(√ 2β1\nαT1 − 1 α1 − 1 Bx\n) . (56)\nWe observe that the bound of SpectralRNN in (55) depends on T 2, whereas the bound of FastRNN either grows exponentially with T (52) or is proportional to T (53). Our bound (56) depends on√ T , given that the second factor in (56) is only dependent on the norm constraints α1, β1 and the input training data; meaning that it is tighter than those of SpectralRNN and FastRNN in terms of the number of time steps.\nD.2 BACKGROUND ON RADEMACHER COMPLEXITY CALCULUS\nThe contraction lemma in Shalev-Shwartz & Ben-David (2014) gives the Rademacher complexity of the composition of a class of functions with ρ-Lipschitz functions. 
Lemma D.1. (Shalev-Shwartz & Ben-David, 2014, Lemma 26.9—Contraction lemma) Let F be a set of functions, F = {f : X → ℝ}, and let Φ_1, ..., Φ_m be ρ-Lipschitz functions, namely, |Φ_i(α) − Φ_i(β)| ≤ ρ|α − β| for all α, β ∈ ℝ and some ρ > 0. For any sample set S of m points x_1, ..., x_m ∈ X, let (Φ ∘ f)(x_i) = Φ_i(f(x_i)). Then,

(1/m) E_{ε∈{±1}^m} [ sup_{f∈F} Σ_{i=1}^m ε_i (Φ ∘ f)(x_i) ] ≤ (ρ/m) E_{ε∈{±1}^m} [ sup_{f∈F} Σ_{i=1}^m ε_i f(x_i) ],  (57)

or, alternatively, R_S(Φ ∘ F) ≤ ρ R_S(F), where Φ denotes the collection Φ_1, ..., Φ_m applied sample-wise on S.

Proposition D.2. (Mohri et al., 2018, Proposition A.1—Hölder's inequality) Let p, q ≥ 1 be conjugate: 1/p + 1/q = 1. Then, for all x, y ∈ ℝ^n,

‖x · y‖_1 ≤ ‖x‖_p ‖y‖_q,  (58)

with equality when |y_i| = |x_i|^{p−1} for all i ∈ [1, n].

Supporting inequalities:

(i) If A, B are sets of positive real numbers, then: sup(AB) = sup(A) · sup(B).  (59)

(ii) Given x ∈ ℝ, we have: (exp(x) + exp(−x))/2 ≤ exp(x²/2).  (60)

(iii) Let X and Y be random variables; the Cauchy–Bunyakovsky–Schwarz inequality gives: (E[XY])² ≤ E[X²] · E[Y²].  (61)

(iv) If ψ is a convex function, Jensen's inequality gives: ψ(E[X]) ≤ E[ψ(X)].  (62)" }, { "heading": "E ADDITIONAL EXPERIMENTS", "text": "We test our model on three popular tasks for RNNs, namely sequential pixel MNIST classification, the adding task, and the copy task (Le et al., 2015; Arjovsky et al., 2016; Zhang et al., 2018).

Sequential pixel MNIST and permuted pixel MNIST classification. This task classifies MNIST images into class labels. Each MNIST image is a 28 × 28 gray-scale image with a label from 0 to 9. We use the reweighted-RNN along with a softmax layer for classification, setting d = 5 layers and h = 256 hidden units. We consider two scenarios: a first one where the pixels of each MNIST image are read in order from left to right and bottom to top, and a second one where the pixels of each MNIST image are randomly permuted. The classification accuracy results are shown in Fig. 7(a) (for pixel MNIST) and Fig. 7(b) (for permuted pixel MNIST).

Adding task. The task inputs two sequences of length T. The first sequence consists of entries that are uniformly sampled from [0, 1]. The second sequence comprises two entries equal to 1 and the remaining entries equal to 0, in which the first 1 is randomly located in the first half of the sequence and the second 1 is randomly located in the second half. The output is the sum of the two entries of the first sequence that are located at the same positions as the entries of 1 in the second sequence. We again use the reweighted-RNN with d = 5 layers and h = 256 hidden units, for input sequences of length T = 300. Fig. 8(a) shows the mean square error versus training epochs on the validation set.

Copy task. We consider an input sequence x ∈ A^{T+20} (Zhang et al., 2018), where A = {a_0, · · · , a_9}. The entries x_0, · · · , x_9 are uniformly sampled from {a_0, · · · , a_7}, x_{T+10} = a_9, and the remaining x_i are set to a_8. The purpose of this task is to copy x_0, · · · , x_9 to the end of the output sequence y ∈ A^{T+20} given a time lag T, i.e., {y_{T+10}, · · · , y_{T+19}} ≡ {x_0, · · · , x_9}, and the remaining y_i are equal to a_8. We set the reweighted-RNN to d = 5 layers and h = 256 hidden units, for input sequences with a time lag T = 100. Fig. 8(b) shows the cross entropy versus training epochs on the validation set." } ]
2019
null
SP:896e4cb1fcd0dbbb9ecaa510dd5052721d46c68f
[ "In this paper, the authors propose a method for black box adversarial image generation. The idea is to learn a parameterization of a precision matrix so that gradients of a network's loss are assumed to be drawn from a corresponding Gaussian. The parameters of this model are fit efficiently using the spectral theorem that their particular parameterization of the precision matrix allows them to use. Gradient estimation is then viewed as a Gaussian conditioning problem given observations (see last equation on page 5).", "This paper deals with the problem of finding an adversarial examples when only the output of a model can be evaluated, but not its gradient. The key idea of the paper is building a Gaussian MRF (a Gaussian with a sparse inverse covariance matrix with a special band structure) to maintain a model for the gradients for predicting search directions. The approach is sensible and uses the FFT trick applicable for diagonalizing covariance matrices with circulant structure." ]
We study the problem of generating adversarial examples in a black-box setting, where we only have access to a zeroth order oracle, providing us with loss function evaluations. We employ Markov Random Fields (MRF) to exploit the structure of the input data and systematically model the covariance structure of the gradients. The MRF structure, in addition to Bayesian inference for the gradients, facilitates one-step attacks akin to the Fast Gradient Sign Method (FGSM), albeit in the black-box setting. The resulting method uses fewer queries than the current state of the art to achieve comparable performance. In particular, in the regime of lower query budgets, we show that our method is particularly effective in terms of fewer average queries with high attack accuracy while employing one-step attacks.
[]
[ { "authors": [ "Wieland Brendel", "Jonas Rauber", "Matthias Bethge" ], "title": "Decision-based adversarial attacks: Reliable attacks against black-box machine learning models", "venue": "arXiv preprint arXiv:1712.04248,", "year": 2017 }, { "authors": [ "Jianbo Chen", "Michael I. Jordan", "Martin J. Wainwright" ], "title": "Hopskipjumpattack: A query-efficient decision-based attack", "venue": null, "year": 2019 }, { "authors": [ "Pin-Yu Chen", "Huan Zhang", "Yash Sharma", "Jinfeng Yi", "Cho-Jui Hsieh" ], "title": "Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models", "venue": "In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, AISec", "year": 2017 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L.-J. Li", "K. Li", "L. Fei-Fei" ], "title": "ImageNet: A Large-Scale Hierarchical Image Database", "venue": "In CVPR09,", "year": 2009 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Andrew Ilyas", "Logan Engstrom", "Anish Athalye", "Jessy Lin" ], "title": "Black-box adversarial attacks with limited queries and information", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Andrew Ilyas", "Logan Engstrom", "Aleksander Madry" ], "title": "Prior convictions: Black-box adversarial attacks with bandits and priors", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Jack Kiefer", "Jacob Wolfowitz" ], "title": "Stochastic estimation of the maximum of a regression function", "venue": "The Annals of Mathematical Statistics,", "year": 1952 }, { "authors": [ "Yann Lecun", "LÃl’on Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "In Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Alex Tong Lin", "Yonatan Dukler", "Wuchen Li", "Guido Montúfar" ], "title": "Wasserstein diffusion tikhonov regularization", "venue": "arXiv preprint arXiv:1909.06860,", "year": 2019 }, { "authors": [ "Seungyong Moon", "Gaon An", "Hyun Oh Song" ], "title": "Parsimonious black-box adversarial attacks via efficient combinatorial optimization", "venue": "Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Nina Narodytska", "Shiva Kasiviswanathan" ], "title": "Simple black-box adversarial attacks on deep neural networks", "venue": "IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW),", "year": 2017 }, { "authors": [ "Blaine Nelson", "Benjamin IP Rubinstein", "Ling Huang", "Anthony D Joseph", "Steven J Lee", "Satish Rao", "JD Tygar" ], "title": "Query strategies for evading convex-inducing classifiers", "venue": "Journal of Machine Learning Research,", "year": 2012 }, { "authors": [ "Yurii Nesterov", "Vladimir Spokoiny" ], "title": "Random gradient-free minimization of convex functions", "venue": "Foundations of Computational Mathematics,", "year": 2017 }, { "authors": [ "Nicolas Papernot", "Patrick McDaniel", "Ian Goodfellow", "Somesh Jha", "Z Berkay Celik", "Ananthram Swami" ], 
"title": "Practical black-box attacks against machine learning", "venue": "In Proceedings of the 2017 ACM on Asia conference on computer and communications security,", "year": 2017 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition, 2014. cite arxiv:1409.1556", "venue": null, "year": 2014 }, { "authors": [ "Christian Szegedy", "Vincent Vanhoucke", "Sergey Ioffe", "Jonathon Shlens", "Zbigniew Wojna" ], "title": "Rethinking the inception architecture for computer vision", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Chun-Chen Tu", "Pai-Shun Ting", "Pin-Yu Chen", "Sijia Liu", "Huan Zhang", "Jinfeng Yi", "Cho-Jui Hsieh", "Shin-Ming Cheng" ], "title": "Autozoom: Autoencoder-based zeroth order optimization method for attacking black-box neural networks", "venue": "In The Thirty-Third AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Weilin Xu", "Yanjun Qi", "David Evans" ], "title": "Automatically evading classifiers", "venue": "In Proceedings of the 2016 Network and Distributed Systems Symposium,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Most methods for adversarial attacks on deep learning models operate in the so-called white-box setting (Goodfellow et al., 2014), where the model being attacked, and its gradients, are assumed to be fully known. Recently, however there has also been considerable attention given to the black-box setting as well, where the model is unknown and can only be queried by a user, and which much better captures the “typical” state by which an attack can interact with a model (Chen et al., 2017; Tu et al., 2019; Ilyas et al., 2019; Moon et al., 2019). And several past methods in this area have conclusively demonstrated that, given sufficient number of queries, it is possible to achieve similarly effective attacks in the black-box setting akin to the white-box setting. However, as has also been demonstrated by past work Ilyas et al. (2019; 2018), the efficiency of these black-box attacks (the number of queries need to find an adversarial example) is fundamentally limited unless they can exploit the spatial correlation structure inherent in the model’s gradients. Yet, at the same time, most previous methods have used rather ad-hoc methods of modeling such correlation structure, such as using “tiling” bases and priors over time Ilyas et al. (2019) that require attack vectors be constant over large regions, or by other means such as using smoothly-varying perturbations Ilyas et al. (2018) to estimate these gradients.\nIn this work, we present a new, more rigorous approach to model the correlation structure of the gradients within the black-box adversarial setting. In particular, we propose to model the gradient of the model loss function with respect to the input image using a Gaussian Markov Random Field (GMRF). This approach offers a number of advantages over prior methods: 1) it naturally captures the spatial correlation observed empirically in most deep learning models; 2) using the model, we are able to compute exact posterior estimates over the true gradient given observed data, while also fitting the parameters of the GMRF itself via an expectation maximization (EM) approach; and 3) the method provides a natural alternative to uniformly sampling perturbations, based upon the eigenvectors of the covariance matrix. Although representing the joint covariance over the entire input image may seem intractable for large-scale images, we can efficiently compute necessary terms for very general forms of grid-based GMRFs using the Fast Fourier Transform (FFT).\nWe evaluate our approach by attempting to find adversarial examples, over multiple different data sets and model architectures, using the GMRF combined with a very simple greedy zeroth order search technique; the method effectively forms a “black-box” version of the fast gradient sign method (FGSM), by constructing an estimate of the gradient at the input image itself, then taking a single signed step in this direction. Despite its simplicity, we show that owing to the correlation structure provided by the GMRF model, the approach outperforms more complex approaches such as the BANDITS-TD (Ilyas et al., 2019) or PARSIMONIOUS (Moon et al., 2019) methods (the current state of the art in black-box attacks), especially for small query budgets." 
}, { "heading": "2 RELATED WORK", "text": "Black-box adversarial attacks can be broadly categorized across a few different dimensions: optimization-based versus transfer-based attacks, and score-based versus decision-based attacks.\nIn the optimization-based adversarial setting, the adversarial attack is formulated as the problem of maximizing some loss function (e.g., the accuracy of the classifier or some continuous variant) using a zeroth order oracle, i.e., by making queries to the classifier. And within this optimization setting, there is an important distinction between score-based attacks, which directly observe a traditional model loss, class probability, or other continuous output of the classifier on a given a example, versus decision-based attacks, which only observe the hard label predicted by the classifier. Decision based attacks have been studied by Brendel et al. (2017); Chen et al. (2019; 2017), and (not surprisingly) typically require more queries to the classifier than the score-based setting.\nIn the regime of score-based attacks, the first such iterative attack on a class of binary classifiers was first studied by Nelson et al. (2012). A real-world application of black-box attacks to fool a PDF malware classifier was demonstrated by (Xu et al., 2016), for which a genetic algorithm was used. Narodytska & Kasiviswanathan (2017) demonstrated the first black-box attack on deep neural networks. Subsequently black-box attacks based on zeroth order optimization schemes, using techniques such as KWSA (Kiefer et al., 1952) and RDSA (Nesterov & Spokoiny, 2017) were developed in Chen et al. (2017); Ilyas et al. (2018). Though Chen et al. (2017) generated successful attacks attaining high attack accuracy, the method was found to be extremely query hungry which was then remedied to an extent by (Ilyas et al., 2018). In Ilyas et al. (2019), the authors exploit correlation of gradients across iterations by setting a prior and use a piece wise constant perturbation, i.e., tiling to develop a query efficient black-box method. Recently Moon et al. (2019) used a combinatorial optimization perspective to address the black-box adversarial attack problem.\nA concurrent line of work Papernot et al. (2017) has considered the transfer-based setting, rather than the optimization setting. These approaches create adversarial attacks by training a surrogate network with the aim to mimic the target model’s decisions, which are then obtained through blackbox queries. With the substitute model in place, the attack method then uses white-box attack strategies in order to transfer the attacks to the original target model. However, substitute network based attack strategies have been found to have a higher query complexity than those based on gradient estimation.\nThe exploitation of the structure of the input data space so as to append a regularizer has been recently found to be effective for robust learning. In particular, in Lin et al. (2019) showed that by using Wasserstein-2 geometry to capture semantically meaningful neighborhoods in the space of images helps to learn discriminative models that are robust to in-class variations of the input data.\nSetting of this work In this paper, we are specifically focused on the optimization-based, scorebased setting, following most directly upon the work of (Chen et al., 2017; Ilyas et al., 2018; 2019; Moon et al., 2019). However, our contribution is also largely orthogonal to the methods presented in these prior works. 
Specifically, we show that by modeling the covariance structure of the gradient using a Gaussian MRF, a very simple approach (which largely mirrors the simple black-box search from (Ilyas et al., 2018)) achieves performance that is competitive with the best current methods, especially when using relatively few queries. We further emphasize that while we focus on this simple search strategy here, nothing would prevent this GMRF approach from being applied to other black-box search strategies as well." }, { "heading": "3 ADVERSARIAL ATTACKS", "text": "In the context of classifiers, adversarial examples are carefully crafted inputs to the classifier which have been perturbed by an additive perturbation so as to cause the classifier to misclassify the input. In particular, so as to ensure minimal visual distortion, the perturbation is subjected to a constraint on its magnitude, typically pre-specified in terms of an ℓp-norm, for some fixed p, being less than some ε_p. Furthermore, in the context of classifiers, attacks can be further classified into targeted or untargeted attacks. For simplicity and brevity, in this paper we restrict our attention to untargeted attacks.

Formally, define a classifier C : X → Y with a corresponding classification loss function L(x, y), where x ∈ X is the input to the classifier, y ∈ Y is the label, X is the set of inputs and Y is the set of labels. Technically speaking, the objective of generating a misclassified example can be posed as an optimization problem. In particular, the aim is to generate an adversarial example x′ for a given input x which maximizes L(x′, y) but still remains ε_p-close, in terms of a specified metric, to the original input. Thus, the generation of an adversarial attack can be formalized as a constrained optimization as follows:

x′ = argmax_{x′ : ‖x′−x‖_p ≤ ε_p} L(x′, y).  (1)

We give a brief overview of adversarial attacks categorized in terms of access to information, namely, white-box and black-box attacks." }, { "heading": "3.1 WHITE-BOX ADVERSARIAL ATTACKS", "text": "White-box settings assume access to the entire classifier and the analytical form of the possibly nonconvex classifier loss function. White-box methods can be further categorized into single-iteration and multi-iteration methods. In the class of single-iteration white-box methods, the Fast Gradient Sign Method (FGSM) has been very successful; it computes the adversarial input as follows:

x′ = x + ε_p sign(∇L(x, y)).  (2)

FGSM is, however, limited to the generation of ℓ∞-bounded adversarial inputs. Given a constrained optimization problem with access to a first order oracle, the most effective method is projected gradient descent (PGD). This multi-iteration method generates the adversarial input x_k by performing k iterations with x_0 = x, where k is specified a priori. In particular, at the l-th iteration, PGD generates the perturbed input x_l as follows:

x_l = Π_{B_p(x, ε)}(x_{l−1} + η s_l), with s_l = Π_{∂B_p(0,1)} ∇_x L(x_{l−1}, y),  (3)

where Π_S denotes the projection onto the set S, B_p(x′, ε′) is the ℓp ball of radius ε′ centered at x′, η denotes the step size, and ∂U is the boundary of a set U. By taking s_l to be the projection of the gradient ∇_x L(x_{l−1}, y) at x_{l−1} onto the unit ℓp ball, it is ensured that s_l is the unit ℓp-norm vector that has the largest inner product with ∇_x L(x_{l−1}, y). When p = 2, the projection corresponds to the normalized gradient, while for p = ∞, the projection corresponds to the sign of the gradient.
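For concreteness, the following is a minimal sketch of one PGD iteration for p = ∞, where both projections reduce to elementwise operations; it assumes white-box gradient access via autograd, and the final clamp to [0, 1] (the valid image range) is our own addition.

```python
import torch
import torch.nn.functional as F

def pgd_step_linf(model, x, x_orig, y, eta, eps):
    """One l_inf PGD iteration: signed-gradient ascent step followed by
    projection onto the l_inf ball of radius eps around x_orig."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_next = x + eta * x.grad.sign()                          # s_l for p = inf
    x_next = torch.max(torch.min(x_next, x_orig + eps), x_orig - eps)
    return x_next.clamp(0.0, 1.0).detach()                    # keep a valid image
```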
Moreover, due to the projection at each iteration, the adversarial input generated at every iteration conforms to the specified constraint.

However, in most real-world deployments, it is impractical to assume complete access to the classifier and the analytic form of the corresponding loss function, which makes black-box settings more realistic." }, { "heading": "3.2 BLACK-BOX ADVERSARIAL ATTACKS", "text": "In a typical black-box setting, the adversary only has access to a zeroth order oracle, which, when queried for an input (x, y), yields the value of the loss function L(x, y). In spite of the information constraints and typically high dimensional inputs, black-box attacks have been shown to be quite effective (Ilyas et al., 2019; 2018; Moon et al., 2019).

The main building block of black-box methods is finite difference schemes used to estimate gradients. Two of the most widely used finite difference schemes are the Kiefer-Wolfowitz Stochastic Approximation (KWSA) (Kiefer et al., 1952) and the Random Directions Stochastic Approximation (RDSA) (Nesterov & Spokoiny, 2017). KWSA operates as follows:

∇̂_x L(x, y) = Σ_{k=1}^d e_k (L(x + δe_k, y) − L(x, y))/δ ≈ Σ_{k=1}^d e_k (e_k^⊤ ∇_x L(x, y)),  (4)

where e_1, . . . , e_d are canonical basis vectors. The estimator can be further extended to higher order finite difference operators, but, in the face of possibly non-smooth loss functions, such extensions do not improve the accuracy of the estimator while incurring additional queries. Hence, the first or second order finite difference operators have proven to be extremely effective. Though KWSA yields reasonably accurate gradient estimates, it is prohibitively query hungry. For instance, for the Inception-v3 classifier for ImageNet, every gradient estimate with KWSA would require 299 × 299 × 3 queries. RDSA provides a better alternative, which operates as follows:

∇̂_x L(x, y) = (1/m) Σ_{k=1}^m z_k (L(x + δz_k, y) − L(x, y))/δ,  (5)

where the z_k's are usually drawn from a normal distribution. While RDSA is more query efficient than KWSA, it still needs a lot of queries to produce a reasonably accurate estimate, with a query count that typically scales with the dimension. The step size δ > 0 in RDSA and KWSA is a key parameter of choice; a higher δ could lead to extremely biased estimates, while a lower δ can lead to an unstable estimator. In light of the two aforementioned gradient estimation schemes, the PGD attack (cf. equation 3) can be suitably modified to suit black-box attacks as follows:

x_l = Π_{B_p(x, ε)}(x_{l−1} + η ŝ_l), with ŝ_l = Π_{∂B_p(0,1)} ∇̂_x L(x_{l−1}, y).  (6)

However, owing to the biased gradient estimates, a PGD-based black-box attack, though successful, turns out to be query hungry. In particular, in order to ensure a sufficient increase of the objective at each iteration, the query complexity scales with the dimension and hence is prohibitively large. Moreover, most if not all successful black-box adversarial attacks tend to be multi-iteration methods. In the sequel, we develop a query-efficient single-step black-box adversarial attack." }, { "heading": "4 QUERY EFFICIENT SINGLE ITERATION BLACK-BOX ATTACKS", "text": "In this section, we develop the query-efficient single-iteration black-box attack method." }, { "heading": "4.1 GRADIENT CORRELATION", "text": "In most black-box adversarial attacks, the gradient terms across different images are implicitly assumed to be independent of each other. However, even inspecting adversarial examples visually, it is apparent that the gradients across different images exhibit correlation.
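As a reference point for the correlation-aware estimator developed in this section, here is a minimal sketch of the RDSA estimator in (5); note that it draws each direction z_k independently, which is precisely the modeling assumption revisited below. The oracle interface (a scalar loss per query) is our own assumption.

```python
import torch

def rdsa_gradient(loss_oracle, x, y, m: int, delta: float) -> torch.Tensor:
    """RDSA gradient estimate of L(x, y) from m + 1 loss queries.
    loss_oracle(x, y) is assumed to return a scalar loss for one input x."""
    base_loss = loss_oracle(x, y)          # one query for L(x, y)
    g_hat = torch.zeros_like(x)
    for _ in range(m):
        z = torch.randn_like(x)            # independent Gaussian direction
        g_hat += z * (loss_oracle(x + delta * z, y) - base_loss) / delta
    return g_hat / m
```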
In fact, gradient terms seem to be heavily correlated, and a black-box method aiming to find adversarial examples using as few queries as possible should exploit this correlation. We propose to exploit and model these correlations using a Gaussian Markov random field. Formally, let x be the input to a classifier and g = ∇L(x, y) the gradient of the loss function with respect to the input, which we are attempting to query and estimate. We then place a prior distribution over g:

g ∼ N(0, Σ),  (7)

where Σ is a non-identity covariance matrix modeling the correlation between terms. Following common practice, we are not going to model Σ, but rather the inverse covariance matrix Λ = Σ⁻¹, a setting also known as the Gaussian Markov random field (GMRF) setting, given that the non-zero entries in Λ correspond exactly to the edges in a graphical model describing the distribution. Even more specifically, we are not going to attempt to model each entry of Λ separately, but use a parameterized Gaussian MRF with relatively few free parameters. For example, if x is a 2D image, then we may have one parameter α governing the diagonal terms, Λ_{i,i} = α for all i, and another governing adjacent pixels, Λ_{i,j} = β for i, j corresponding to indices that are neighboring in the original image. We will jointly refer to all the parameters of this model as θ, so in this case θ = (α, β), and we refer to the resulting Λ as Λ(θ).

We then consider the problem of fitting a parameterized MRF model to estimate gradients of inputs x^(1), . . . , x^(m), using n directional derivatives for each input, given by G = [g^(1), . . . , g^(mn)]. The maximum likelihood estimation for this problem is precisely the optimization problem

min_θ tr(SΛ(θ)) − logdet(Λ(θ)),  (8)

where S = (1/mn) Σ_i g^(i) g^(i)⊤ is the sample covariance and logdet denotes the log determinant. This is in fact the standard Gaussian maximum likelihood estimation problem. While this is a standard problem for the case of a general covariance (the minimizing Λ is just the inverse of the sample covariance), when we use a parameterized form of Λ, it becomes less clear how to solve this optimization problem efficiently. As we show, however, this optimization problem can be easily solved using the Fourier transform; we focus for simplicity of presentation on the 2D case, but the method is easily generalizable to three-dimensional convolutions to capture color channels in addition to the spatial dimensions themselves (and we use this 3D form for all color images). First, we focus on evaluating the trace term. The key idea here is that the Λ operator can be viewed as a (circular) convolution¹ with the kernel

K = [ 0 β 0 ; β α β ; 0 β 0 ],

which then lets us compute tr(SΛ(θ)) as the sum of the elements of the product of G and the zero-padded 2D convolution of G and K. For the log determinant term, we can again exploit the fact that Λ is a convolution operator. Specifically, because it is a convolution, we know it can be diagonalized using the discrete Fourier transform,

Λ = Q^H D Q,

where Q is the Fast Fourier Transform (FFT) basis, and the eigenvalues, being the diagonal elements of D, can be found by applying the FFT to the zero-padded convolution operator; thus, we can compute the log determinant term by simply taking the sum of the log of the FFT-computed eigenvalues. We then employ Newton's method to optimize the objective. The entire procedure of estimating the GMRF parameters is depicted in Algorithm 1.
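The following sketches how the two terms of the objective in (8) can be evaluated for the 5-point kernel K above; it is a simplified single-channel illustration using scipy's circular 2D convolution, not the authors' implementation, and it assumes the parameters keep all eigenvalues positive (a valid GMRF).

```python
import numpy as np
from scipy.signal import convolve2d

def gmrf_objective(G: np.ndarray, alpha: float, beta: float, n: int) -> float:
    """Evaluate tr(S Lambda(theta)) - logdet(Lambda(theta)) from (8).
    G: array of shape (num_samples, n, n) holding sampled gradient images."""
    K = np.array([[0.0, beta, 0.0],
                  [beta, alpha, beta],
                  [0.0, beta, 0.0]])
    # tr(S Lambda) = mean_i g_i^T (Lambda g_i); Lambda g_i is a circular conv
    trace = np.mean([np.sum(g * convolve2d(g, K, mode="same", boundary="wrap"))
                     for g in G])
    # eigenvalues of the circulant Lambda: 2D FFT of the kernel embedded at
    # the origin of an n x n grid (real-valued for a symmetric kernel)
    base = np.zeros((n, n))
    base[0, 0] = alpha
    base[0, 1] = base[0, -1] = base[1, 0] = base[-1, 0] = beta
    eigs = np.real(np.fft.fft2(base))
    logdet = np.sum(np.log(eigs))   # requires alpha > 4|beta| (all eigs > 0)
    return float(trace - logdet)
```

Newton's method (or any smooth optimizer) can then be run on θ = (α, β) using this O(n² log n) evaluation.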
For an N × N image, the dominating cost of the procedure is the O(N² log N) computation of the 2D FFT; this contrasts with the O(N⁶) complexity of naively forming, e.g., the eigendecomposition of the N² × N² inverse covariance.

Algorithm 1 Solving for GMRF

1: procedure SOLVING GMRF({x^(i)}_{i=1}^m, δ)
2: Draw n vectors u^(1), . . . , u^(n) ∼ N(0, I)
3: Estimate g^(1), . . . , g^(mn), where g_{ij} = u^(j) (L(x^(i) + δu^(j), y) − L(x^(i) − δu^(j), y))/2δ
4: Generate Ĝ from G by concatenating along each dimension.
5: Calculate tr(SΛ(θ)) = sum(G × conv2d(K, Ĝ)); conv2d denotes 2D convolution
6: Calculate logdet(Λ(θ)) = sum(log(FFT(A))); A is the zero-padded convolution operator
7: Use Newton's method to minimize the objective in equation 8
8: return θ" }, { "heading": "4.2 GRADIENT ESTIMATION", "text": "Under the aforementioned GMRF framework, we can interpret black-box gradient estimation as a Gaussian inference problem. Specifically, in our setting above, we have assumed that the gradient at a point x follows the normal distribution with the prescribed inverse covariance,

g ∼ N(0, Λ⁻¹).

When we observe the loss function value at some point x′, this can be viewed as a noisy observation of the gradient, L(x′) ≈ L(x) + g^⊤(x′ − x), where, abusing notation, we drop y from L(x, y). Thus, given a set of sample points x^(1), . . . , x^(m) and their corresponding loss function values L(x^(1)), . . . , L(x^(m)), we have the following characterization of the distribution

L_1 | g ∼ N(Xg, σ²I),

where

L_1 = [L(x^(1)) − L(x), · · · , L(x^(m)) − L(x)], X = [(x^(1) − x)^⊤; · · · ; (x^(m) − x)^⊤].

The perturbed points {x′^(i)}_{i=1}^m are generated according to vectors {z^(i)}_{i=1}^m supplied to the procedure. Under this condition, the posterior g | L_1 is given by

g | L_1 ∼ N( (Λ + X^⊤X/σ²)⁻¹ X^⊤L_1/σ², (Λ + X^⊤X/σ²)⁻¹ ).

¹The FFT operation technically performs circular convolutions (meaning the convolution wraps around the image), and thus the covariance naively models a correlation between, e.g., the first and last rows of an image. However, this is a minor issue in practice since: 1) it can be largely mitigated by zero-padding the input image before applying the FFT-based convolution, and 2) even if ignored entirely, the effect of a few additional circular terms in the covariance estimation is minimal.

The matrix of interest that we need to solve with is the inverse covariance term

Λ + X^⊤X/σ².

This is a convolution plus a low-rank matrix (the X^⊤X term is rank m, and we typically have m ≪ n because we have relatively few samples and a high-dimensional input). We cannot solve with this matrix exactly using the FFT, but we can still solve with it efficiently (requiring only an m × m inverse) using the matrix inversion lemma, specifically Woodbury's matrix inversion lemma:

(Λ + X^⊤X/σ²)⁻¹ = Λ⁻¹ − Λ⁻¹X^⊤(σ²I + XΛ⁻¹X^⊤)⁻¹XΛ⁻¹.

Since the term needs to be computed explicitly for at least the inner inverse, we explicitly maintain the term U = Λ⁻¹X^⊤. Note that in the sequential sampling setting (where we sequentially sample the points x^(i) one at a time), this matrix could be maintained over all samples, so that just a single solve would be required for each new sample. We use the mean of the conditional distribution g | L_1 as the gradient estimate. The gradient estimation procedure is depicted in Algorithm 2.

Algorithm 2 Gradient Estimation

1: procedure GRADEST(x, {z^(i)}_{i=1}^m, σ, δ_1, θ)
2: Compute L_1 by querying the model at {x + δ_1 z^(i)}_{i=1}^m and {x − δ_1 z^(i)}_{i=1}^m
3: Compute X = 2δ_1 [z^(1), . . . , z^(m)]
4: Compute ĝ = (Λ + X^⊤X/σ²)⁻¹ X^⊤L_1/σ² using FFT
5: return ĝ"
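A minimal sketch of the posterior-mean computation via the inversion lemma above; the helper `lambda_solve`, standing in for an FFT-based multiplication by Λ⁻¹, and the flattened shapes are our own assumptions about how the pieces would be organized.

```python
import numpy as np

def posterior_mean_grad(lambda_solve, X: np.ndarray, L1: np.ndarray,
                        sigma: float) -> np.ndarray:
    """Mean of g | L1, i.e. (Lambda + X^T X / sigma^2)^{-1} X^T L1 / sigma^2.
    lambda_solve(v) returns Lambda^{-1} v; X has shape (m, d), L1 shape (m,)."""
    m = X.shape[0]
    U = np.stack([lambda_solve(row) for row in X], axis=1)   # Lambda^{-1} X^T
    inner = sigma ** 2 * np.eye(m) + X @ U                   # only m x m dense
    b = X.T @ L1                                             # X^T L1
    lam_inv_b = lambda_solve(b)
    # Woodbury: A^{-1} b = Lambda^{-1} b - U inner^{-1} (X Lambda^{-1} b)
    return (lam_inv_b - U @ np.linalg.solve(inner, X @ lam_inv_b)) / sigma ** 2
```

Only an m × m system is ever solved densely; every solve against Λ is done in the Fourier domain.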
}, { "heading": "4.3 GRADIENT ESTIMATION: DIRECTIONAL DERIVATIVES", "text": "In order to estimate the gradient efficiently in Algorithm 2, a key role is played by the vectors {z^(i)}_{i=1}^m which perturb the input. One particular choice is sampling the directions from a normal distribution. However, it is worth noting that the inverse covariance of the gradients is, by construction, a convolution operator and hence is diagonalized by the FFT basis. In particular, the low-frequency components of the FFT basis vectors enhanced the attack accuracy significantly, an example of which is depicted in Figure 1 for black-box attacks on the VGG16-bn classifier for ImageNet with ε = 0.05. With the gradient estimate at hand, the adversarial input for x is generated using FGSM as follows:

x_adv = x + ε sign(ĝ).  (9)

The entire procedure, consisting of GMRF inference and gradient inference, is presented in Algorithm 3. We provide more details about the performance of our gradient estimation scheme in terms of various metrics, such as the mean square error and the cosine similarity between the estimated gradient and the true gradients, in Appendix A.6.

Algorithm 3 GMRF-based black-box FGSM

1: procedure BB-FGSM(σ, δ_1, δ, ε)
2: θ ← SOLVING GMRF({x^(i)}_{i=1}^m, δ)
3: {z^(i)}_{i=1}^m ← FFT basis vectors
4: ĝ ← GRADEST(x, {z^(i)}_{i=1}^m, σ, δ_1, θ)
5: x_adv = x + ε sign(ĝ)
6: return x_adv" }, { "heading": "5 EXPERIMENTS", "text": "In this section, we focus on the untargeted attack setting, where the goal is to generate a perturbation so as to get an input image, originally classified correctly by the classification model, to be misclassified to any class other than the original class. In particular, we consider ℓ∞ attacks on ImageNet (Deng et al., 2009) and MNIST (Lecun et al., 1998) and evaluate the performance in terms of the attack success rate for a given query budget and the average query count.

The attack success rate is defined as the ratio of the number of images successfully misclassified to the total number of input images. Among the set of input images, we discard images which were misclassified by the classifier. The average query count is computed on the queries made for successful attacks."
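For clarity on the evaluation protocol just defined, below is a minimal sketch of how the two reported metrics can be computed; the array layout is our own assumption.

```python
import numpy as np

def attack_metrics(success: np.ndarray, queries: np.ndarray):
    """success: boolean per correctly-classified input (True if misclassified
    within budget); queries: oracle queries spent on each input."""
    success_rate = float(success.mean())
    # the average query count is taken over successful attacks only
    avg_queries = float(queries[success].mean()) if success.any() else float("nan")
    return success_rate, avg_queries
```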
5 EXPERIMENTS

In this section, we focus on the untargeted attack setting, where the goal is to generate a perturbation such that an image originally classified correctly by the model becomes misclassified into any class other than the true one. In particular, we consider $\ell_\infty$ attacks on ImageNet (Deng et al., 2009) and MNIST (Lecun et al., 1998) and evaluate performance in terms of attack success rate under a given query budget and average query count.

The attack success rate is defined as the ratio of the number of images successfully misclassified to the total number of input images; images that the classifier already misclassifies are discarded from the input set. The average query count is computed over the queries made for successful attacks. A small helper sketch of these two metrics follows.
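The following is a minimal sketch of the two metrics just defined; the function name and argument layout are ours, chosen for illustration.

```python
import numpy as np

def attack_metrics(success, queries):
    """Evaluation metrics from Section 5.

    success : boolean array over correctly classified input images; True if
              the attack caused a misclassification within the query budget
    queries : number of model queries spent on each of those images
    """
    success = np.asarray(success, dtype=bool)
    queries = np.asarray(queries, dtype=float)
    success_rate = success.mean()                 # misclassified / total inputs
    avg_query_count = queries[success].mean()     # over successful attacks only
    return success_rate, avg_query_count
```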
5.1 EXPERIMENTS ON MNIST

We compare the performance of the proposed method against white-box FGSM (Goodfellow et al., 2014) across $\ell_\infty$ bounds ranging from 0.05 to 0.3 in increments of 0.05 and query budgets from 20 to 200. We use the pre-trained LeNet model available from PyTorch to demonstrate the attacks, and use all correctly classified images from the 10,000 images (scaled to $[0, 1]$) in the MNIST test set. To generate the sample gradients for GMRF estimation, 1,000 queries were used. To further illustrate the importance of incorporating a non-identity gradient covariance, we also report results for a version of our algorithm that takes the gradient covariance to be the identity matrix. As shown in Figure 2, our attack surpasses white-box FGSM at around 75 queries, and the performance gap widens as the query budget increases. In contrast, the identity-covariance version consistently underperforms white-box FGSM, only approaching its performance at 200 queries. This further illustrates the effectiveness of our algorithm and the importance of modelling the gradient covariance as a GMRF, and more generally as a non-identity covariance matrix.

The superior performance of our framework relative to white-box FGSM in Figure 2 can be attributed to the following: incorporating the non-identity gradient covariance structure into the estimation scheme allows our perturbation to exploit structural gradient information shared across images, whereas white-box FGSM treats the gradient of every image as independent of the gradients of all other images. The identity-covariance ablation corroborates this explanation.

5.2 EXPERIMENTS ON IMAGENET

We compare the performance of the proposed method with that of NES (Ilyas et al., 2018), BANDITS-TD (Ilyas et al., 2019), and PARSIMONIOUS (Moon et al., 2019), the current state of the art in $\ell_\infty$ black-box attacks. For ImageNet, we consider three classifiers, namely ResNet50 (He et al., 2015), Inception-v3 (Szegedy et al., 2015), and VGG16-bn (Simonyan & Zisserman, 2014). We use the pre-trained models provided by PyTorch for attacking these classifiers, and use all correctly classified images from the 50,000 images (scaled to $[0, 1]$) in the ImageNet validation set.

The $\ell_\infty$ perturbation bound is set to $\epsilon = 0.05$. We use the implementation2 and hyperparameters provided by Ilyas et al. (2019) for NES and BANDITS-TD; similarly, for PARSIMONIOUS we use the implementation3 and hyperparameters given by Moon et al. (2019). The last 50 images of the ImageNet validation set are used for estimating the parameters of the GMRF; in particular, for each model, 5,000 queries were used to generate the sample gradients for GMRF estimation. The specifics of the GMRF model, the parameter values, and the associated hyperparameters (obtained by grid search) for the three classifiers are relegated to the Appendix. Figures 3-5 show the evolution of attack accuracy across query budgets. In the low-budget regime of fewer than 200 queries, our algorithm outperforms PARSIMONIOUS, though it exhibits inferior performance at higher query budgets.

As shown in Table 1, our algorithm, despite being a single-step attack, outperforms BANDITS-TD and NES by achieving a higher attack success rate. Despite the higher success rate, the proposed method also uses fewer queries on average than BANDITS-TD and NES; thus, the proposed method strictly dominates BANDITS-TD and NES on every metric.

The proficiency of our scheme in the low-query-budget regime can be attributed to the exploitation of correlation between gradients across images and the use of FFT basis vectors for computing directional derivatives. In particular, the FFT basis vectors, being the eigenvectors of the covariance matrix, provide a systematic dimensionality reduction.

2https://github.com/MadryLab/blackbox-bandits 3https://github.com/snu-mllab/parsimonious-blackbox-attack

6 CONCLUSION

In this paper, we developed a GMRF-based covariance modeling technique that streamlines gradient estimation for black-box adversarial attacks. The streamlined estimation scheme alleviates the reliance on random directional-derivative searches, whose biased gradient estimates plague zeroth-order optimization schemes. The scheme can be used in any gradient-based black-box attack method to attain higher attack accuracy with lower query counts. Our method enables single-iteration, query-efficient black-box attacks, which we demonstrated to be as potent as multi-step attacks on multiple architectures and datasets in terms of attack success rate. We also employed techniques from matrix analysis and the FFT to make our attack computationally efficient. Our results open avenues for more effective covariance modeling techniques that further streamline gradient estimation and enable even more query-efficient black-box adversarial attacks.

A APPENDIX

A.1 MNIST EXPERIMENTS

The GMRF model used for MNIST is given by $\Lambda_{i,i} = \alpha$; $\Lambda_{i,i+1} = \Lambda_{i+1,i} = \Lambda_{i,i-1} = \Lambda_{i-1,i} = \beta$; and $\Lambda_{i+1,i+1} = \Lambda_{i-1,i-1} = \Lambda_{i-1,i+1} = \Lambda_{i+1,i-1} = \gamma$, where $\Lambda_{i,j}$ denotes the $(i,j)$-th element of $\Lambda$. In other words, $\alpha$ sits on the diagonal, $\beta$ couples horizontally and vertically adjacent pixels, and $\gamma$ couples diagonally adjacent pixels, corresponding to a $3 \times 3$ precision kernel. For estimating the GMRF parameters, we use the last 20 images of the MNIST test set and perturb each of them with 50 vectors drawn from a normal distribution. For the attack, we use low-frequency vectors of the FFT basis. The following table gives the values of the hyperparameters used in the attack. Except for the GMRF parameters, all other parameters were determined by grid search. A sketch of how this kernel is assembled and diagonalized is given below.
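As a concrete reading of the A.1 parameterization, the sketch below builds the $3 \times 3$ precision kernel from $(\alpha, \beta, \gamma)$ and returns its FFT, which can serve as `kernel_fft` in the Section 4.2 gradient-estimation sketch. It assumes circular boundary conditions (see the footnote in Section 4.2), and the function name is ours.

```python
import numpy as np

def gmrf_kernel_fft(alpha, beta, gamma, h, w):
    """FFT diagonalization of the A.1 precision kernel: alpha at the center,
    beta on the four axial neighbors, gamma on the four diagonal neighbors."""
    K = np.array([[gamma, beta,  gamma],
                  [beta,  alpha, beta],
                  [gamma, beta,  gamma]])
    pad = np.zeros((h, w))
    for di in (-1, 0, 1):             # embed the 3x3 stencil with its center
        for dj in (-1, 0, 1):         # at index (0, 0), wrapping negatives
            pad[di % h, dj % w] = K[di + 1, dj + 1]
    return np.real(np.fft.fft2(pad))  # eigenvalues of Lambda (real: symmetric)
```

For MNIST one would call, e.g., `gmrf_kernel_fft(alpha, beta, gamma, 28, 28)` with the fitted parameters.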
A.2 VGG16 EXPERIMENTS

The GMRF model used for VGG16 on ImageNet is given by $\Lambda_{0,i,i} = \alpha$; $\Lambda_{0,i,i+1} = \Lambda_{0,i+1,i} = \Lambda_{0,i,i-1} = \Lambda_{0,i-1,i} = \beta$; $\Lambda_{0,i+1,i+1} = \Lambda_{0,i-1,i+1} = \Lambda_{0,i-1,i-1} = \Lambda_{0,i+1,i-1} = \kappa$; and $\Lambda_{1,i,i} = \Lambda_{-1,i,i} = \gamma$, where in $\Lambda_{k,i,j}$ the index $k$ denotes the channel offset. We also tried GMRF models with lower and higher degrees of association and selected the best-performing one. For estimating the GMRF parameters, we use the last 50 images of the ImageNet validation set and perturb each of them with 50 vectors drawn from a normal distribution. For the attack, we use low-frequency vectors of the FFT basis. The following table gives the values of the hyperparameters used in the attack. Except for the GMRF parameters, all other parameters were determined by grid search.

A.3 RESNET50 EXPERIMENTS

The GMRF model used for ResNet50 on ImageNet is given by $\Lambda_{0,i,i} = \alpha$; $\Lambda_{0,i,i+1} = \Lambda_{0,i+1,i} = \Lambda_{0,i,i-1} = \Lambda_{0,i-1,i} = \beta$; $\Lambda_{0,i+1,i+1} = \Lambda_{0,i-1,i+1} = \Lambda_{0,i-1,i-1} = \Lambda_{0,i+1,i-1} = \kappa$; $\Lambda_{0,i,i+2} = \Lambda_{0,i,i-2} = \Lambda_{0,i-2,i} = \Lambda_{0,i+2,i} = \Lambda_{0,i+1,i+2} = \Lambda_{0,i-1,i+2} = \Lambda_{0,i+2,i+1} = \Lambda_{0,i+2,i-1} = \Lambda_{0,i-1,i-2} = \Lambda_{0,i+1,i-2} = \Lambda_{0,i-2,i-1} = \Lambda_{0,i-2,i+1} = \nu$; and $\Lambda_{1,i,i} = \Lambda_{-1,i,i} = \gamma$, where in $\Lambda_{k,i,j}$ the index $k$ denotes the channel offset. We also tried GMRF models with lower and higher degrees of association and selected the best-performing one. For estimating the GMRF parameters, we use the last 50 images of the ImageNet validation set and perturb each of them with 50 vectors drawn from a normal distribution. For the attack, we use low-frequency vectors of the FFT basis. The following table gives the values of the hyperparameters used in the attack. Except for the GMRF parameters, all other parameters were determined by grid search.

A.4 INCEPTION V3 EXPERIMENTS

The GMRF model used for Inception-v3 on ImageNet is the same second-order model as for ResNet50, given by $\Lambda_{0,i,i} = \alpha$; $\Lambda_{0,i,i+1} = \Lambda_{0,i+1,i} = \Lambda_{0,i,i-1} = \Lambda_{0,i-1,i} = \beta$; $\Lambda_{0,i+1,i+1} = \Lambda_{0,i-1,i+1} = \Lambda_{0,i-1,i-1} = \Lambda_{0,i+1,i-1} = \kappa$; $\Lambda_{0,i,i+2} = \Lambda_{0,i,i-2} = \Lambda_{0,i-2,i} = \Lambda_{0,i+2,i} = \Lambda_{0,i+1,i+2} = \Lambda_{0,i-1,i+2} = \Lambda_{0,i+2,i+1} = \Lambda_{0,i+2,i-1} = \Lambda_{0,i-1,i-2} = \Lambda_{0,i+1,i-2} = \Lambda_{0,i-2,i-1} = \Lambda_{0,i-2,i+1} = \nu$; and $\Lambda_{1,i,i} = \Lambda_{-1,i,i} = \gamma$, where in $\Lambda_{k,i,j}$ the index $k$ denotes the channel offset. We also tried GMRF models with lower and higher degrees of association and selected the best-performing one. For estimating the GMRF parameters, we use the last 50 images of the ImageNet validation set and perturb each of them with 50 vectors drawn from a normal distribution. For the attack, we use low-frequency cosine vectors of the FFT basis. The following table gives the values of the hyperparameters used in the attack. Except for the GMRF parameters, all other parameters were determined by grid search.

A.5 EFFICIENT COMPUTATION OF FFT BASIS

We use the fact that the covariance and inverse covariance matrices, being convolution operators, are diagonalized by the FFT basis. Assume the image is of size $c \times h \times w$, where $c$, $h$, and $w$ denote the number of channels, the height, and the width of the gradient. We define a tensor $S$ of zeros of size $c \times h \times w \times 2$, where the last dimension accounts for the real and imaginary components. To generate the lowest-frequency basis vector, we set the first element of the first channel, i.e., $S_{0,0,0,0} = 1$, and take the inverse FFT; this yields the lowest-frequency cosine basis vector. We do the same for the other channels by setting the corresponding component to 1 and taking the inverse FFT. Setting $S_{0,0,0,1} = 1$ and then taking the inverse FFT yields the lowest-frequency sine component. To generate further low-frequency components, we start from the beginning of a row and proceed diagonally, incrementing the row and column indices by one; at each entry of the tensor, we repeat the construction for every channel, one at a time. A sketch of this construction follows.
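The following minimal NumPy sketch follows the construction just described, using a complex frequency-domain tensor in place of the trailing real/imaginary dimension. The strict diagonal walk yields at most $2c \cdot \min(h, w) - c$ directions, which suffices for the small budgets $m$ used here; the function name is ours.

```python
import numpy as np

def low_freq_fft_directions(c, h, w, m):
    """Build m low-frequency FFT basis directions as described in A.5:
    place a 1 at one frequency-domain entry (real slot for a cosine vector,
    imaginary slot for a sine vector), take the inverse FFT over the spatial
    axes, and repeat for each channel while walking the frequency index
    diagonally from (0, 0)."""
    dirs = []
    k = 0
    while len(dirs) < m and k < min(h, w):
        for comp in (1.0, 1.0j):          # cosine component, then sine
            for ch in range(c):
                S = np.zeros((c, h, w), dtype=complex)
                S[ch, k, k] = comp
                v = np.real(np.fft.ifft2(S, axes=(1, 2))).ravel()
                n = np.linalg.norm(v)
                if n > 1e-12:             # the sine vector at (0, 0) vanishes
                    dirs.append(v / n)
                    if len(dirs) == m:
                        return np.stack(dirs)
        k += 1                            # increment row and column indices
    return np.stack(dirs)
```

The rows of the returned array can serve directly as the directions $z^{(i)}$ in Algorithm 2.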
A.6 GRADIENT ESTIMATION PERFORMANCE

We illustrate the performance of our gradient estimation scheme through experiments on MNIST and ImageNet using LeNet and VGG16-bn, respectively. We use two metrics, namely the mean squared error (MSE) of the normalized estimated gradient with respect to the normalized true gradient, and the cosine similarity between the estimated and true gradients. For our analysis, we use 500 data samples from the test set to estimate the gradient using the GMRF framework: we first estimate the GMRF parameters as described in Algorithm 1 and then perform MAP estimation of the gradient. For MNIST, we examine two regimes: a 40-query budget, where our scheme is outperformed by white-box FGSM, and a 200-query budget, where the reverse holds. As evident from Figures 6b and 7b, our scheme generates gradient estimates with low MSE. For the 40-query budget, the cosine similarity is centered around 0.3, while for the 200-query budget it is centered around 0.2. Had our gradient estimates aligned perfectly with the true gradient, as in white-box FGSM, our performance would have been upper-bounded by that of white-box FGSM. In essence, our scheme finds adversarial perturbation directions that do not themselves maximize the loss but nevertheless lead to misclassification of the examples.

The gradient estimation performance for ImageNet using VGG16-bn is depicted in Figure 8. For this analysis, we sample 500 data points from the ImageNet validation set, estimate the GMRF parameters as described in Algorithm 1, and then perform MAP estimation of the gradient, with the query budget set to 200. Out of the 400 correctly classified images, white-box FGSM and our proposed algorithm attain attack accuracies of 0.9268 and 0.7804, respectively. While the gradient estimation performance in terms of MSE is impressive, the cosine similarity shows that the estimated gradient does not quite coincide directionally with the true gradient; this difference in direction explains the inferior performance of our scheme in this regime. It is worth noting that the input dimension for VGG16-bn is 150,528, and classical results in zeroth-order optimization show that in a $d$-dimensional space, $O(d)$ queries are required to obtain a nearly bias-free gradient estimate. Our framework uses only 200 queries to estimate a gradient residing in a 150,528-dimensional space. In spite of the possibly erroneous direction of the estimated gradient, the attack still achieves a 0.78 success rate and outperforms BANDITS-TD and PARSIMONIOUS in the 200-query-budget regime.

A.7 AUTOCORRELATION

We provide further evidence for modelling the gradient covariance with a non-identity matrix. In Figure 9, we plot the autocorrelation of the true gradients of 100 images sampled from the MNIST test set for two kernel sizes, $9 \times 9$ and $11 \times 11$, for the LeNet model. In Figure 10, we plot the autocorrelation of the third channel of the true gradients of 50 images sampled from the ImageNet validation set for the same two kernel sizes, $9 \times 9$ and $11 \times 11$, for the VGG16-bn model. The autocorrelation plots show substantial correlation both across dimensions and across images, providing further evidence for the gradient model considered in this paper.
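For reference, one plausible way to compute the empirical autocorrelations behind plots like Figures 9 and 10 is sketched below. It uses circular shifts as an approximation at the image borders, and the function name is ours.

```python
import numpy as np

def gradient_autocorrelation(grads, k):
    """Empirical k x k autocorrelation of image gradients, averaged over a
    batch of images.

    grads : (num_images, h, w) array of true gradients, single channel
    k     : odd kernel size, e.g. 9 or 11
    """
    r = k // 2
    g = grads - grads.mean(axis=(1, 2), keepdims=True)  # center per image
    acf = np.zeros((k, k))
    for di in range(-r, r + 1):
        for dj in range(-r, r + 1):
            # circular shift approximates the lag-(di, dj) correlation
            shifted = np.roll(np.roll(g, di, axis=1), dj, axis=2)
            acf[di + r, dj + r] = np.mean(g * shifted)
    return acf / acf[r, r]  # normalize so that the zero lag equals 1
```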